* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-01-03 18:56 Mike Pagano
From: Mike Pagano @ 2017-01-03 18:56 UTC
To: gentoo-commits
commit: c0eedfefc3bb4ac7faeffba1e11aa4f86cfdd58f
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 3 18:56:30 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jan 3 18:56:30 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c0eedfef
Gentoo Linux support config settings and defaults.
Patch to add support for namespace user.pax.* on tmpfs.
Patch to enable link security restrictions by default.
Patch to ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.
Patch to enable control of the unaligned access control policy from sysctl.
0000_README | 24 +
1500_XATTR_USER_PREFIX.patch | 69 +
...ble-link-security-restrictions-by-default.patch | 22 +
2900_dev-root-proc-mount-fix.patch | 38 +
4200_fbcondecor.patch | 2095 ++++++++++++++++++++
4400_alpha-sysctl-uac.patch | 142 ++
...able-additional-cpu-optimizations-for-gcc.patch | 426 ++++
7 files changed, 2816 insertions(+)
diff --git a/0000_README b/0000_README
index 9018993..646b303 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,30 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1500_XATTR_USER_PREFIX.patch
+From: https://bugs.gentoo.org/show_bug.cgi?id=470644
+Desc: Support for namespace user.pax.* on tmpfs.
+
+Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
+From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc: Enable link security restrictions by default.
+
+Patch: 2900_dev-root-proc-mount-fix.patch
+From: https://bugs.gentoo.org/show_bug.cgi?id=438380
+Desc: Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.
+
+Patch: 4200_fbcondecor.patch
+From: http://www.mepiscommunity.org/fbcondecor
+Desc: Bootsplash ported by Uladzimir Bely. (Bug #596126)
+
+Patch: 4400_alpha-sysctl-uac.patch
+From: Tobias Klausmann (klausman@gentoo.org) and http://bugs.gentoo.org/show_bug.cgi?id=217323
+Desc: Enable control of the unaligned access control policy from sysctl.
+
Patch: 4567_distro-Gentoo-Kconfig.patch
From: Tom Wijsman <TomWij@gentoo.org>
Desc: Add Gentoo Linux support config settings and defaults.
+
+Patch: 5010_enable-additional-cpu-optimizations-for-gcc.patch
+From: https://github.com/graysky2/kernel_gcc_patch/
+Desc: Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.
diff --git a/1500_XATTR_USER_PREFIX.patch b/1500_XATTR_USER_PREFIX.patch
new file mode 100644
index 0000000..bacd032
--- /dev/null
+++ b/1500_XATTR_USER_PREFIX.patch
@@ -0,0 +1,69 @@
+From: Anthony G. Basile <blueness@gentoo.org>
+
+This patch adds support for a restricted user-controlled namespace on
+tmpfs filesystem used to house PaX flags. The namespace must be of the
+form user.pax.* and its value cannot exceed a size of 8 bytes.
+
+This is needed on all Gentoo systems so that XATTR_PAX flags
+are preserved for users who might build packages using portage on
+a tmpfs system with a non-hardened kernel and then switch to a
+hardened kernel with XATTR_PAX enabled.
+
+The namespace is added to any user with Extended Attribute support
+enabled for tmpfs. Users who do not enable xattrs will not have
+the XATTR_PAX flags preserved.
+
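For illustration only (this snippet is not part of the patch): a minimal userspace sketch of how the restriction described above behaves once the change is applied. The path and the "em" flag value are arbitrary examples, and the file is assumed to live on a tmpfs mount with xattr support enabled.

#include <fcntl.h>
#include <stdio.h>
#include <sys/xattr.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/dev/shm/pax-test";  /* /dev/shm is usually tmpfs */
    int fd = open(path, O_CREAT | O_WRONLY, 0644);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    close(fd);

    /* Accepted: user.pax.flags with a value of at most 8 bytes. */
    if (setxattr(path, "user.pax.flags", "em", 2, 0) != 0)
        perror("user.pax.flags");

    /* Rejected with EINVAL: value larger than 8 bytes. */
    if (setxattr(path, "user.pax.flags", "123456789", 9, 0) != 0)
        perror("oversized value");

    /* Rejected with EOPNOTSUPP: any other user.* name on tmpfs. */
    if (setxattr(path, "user.comment", "x", 1, 0) != 0)
        perror("user.comment");

    unlink(path);
    return 0;
}
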
+diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
+index 1590c49..5eab462 100644
+--- a/include/uapi/linux/xattr.h
++++ b/include/uapi/linux/xattr.h
+@@ -73,5 +73,9 @@
+ #define XATTR_POSIX_ACL_DEFAULT "posix_acl_default"
+ #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
+
++/* User namespace */
++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
++#define XATTR_PAX_FLAGS_SUFFIX "flags"
++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
+
+ #endif /* _UAPI_LINUX_XATTR_H */
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 440e2a7..c377172 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2667,6 +2667,14 @@ static int shmem_xattr_handler_set(const struct xattr_handler *handler,
+ struct shmem_inode_info *info = SHMEM_I(d_inode(dentry));
+
+ name = xattr_full_name(handler, name);
++
++ if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
++ if (strcmp(name, XATTR_NAME_PAX_FLAGS))
++ return -EOPNOTSUPP;
++ if (size > 8)
++ return -EINVAL;
++ }
++
+ return simple_xattr_set(&info->xattrs, name, value, size, flags);
+ }
+
+@@ -2682,6 +2690,12 @@ static const struct xattr_handler shmem_trusted_xattr_handler = {
+ .set = shmem_xattr_handler_set,
+ };
+
++static const struct xattr_handler shmem_user_xattr_handler = {
++ .prefix = XATTR_USER_PREFIX,
++ .get = shmem_xattr_handler_get,
++ .set = shmem_xattr_handler_set,
++};
++
+ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #ifdef CONFIG_TMPFS_POSIX_ACL
+ &posix_acl_access_xattr_handler,
+@@ -2689,6 +2703,7 @@ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #endif
+ &shmem_security_xattr_handler,
+ &shmem_trusted_xattr_handler,
++ &shmem_user_xattr_handler,
+ NULL
+ };
+
diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 0000000..639fb3c
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,22 @@
+From: Ben Hutchings <ben@decadent.org.uk>
+Subject: fs: Enable link security restrictions by default
+Date: Fri, 02 Nov 2012 05:32:06 +0000
+Bug-Debian: https://bugs.debian.org/609455
+Forwarded: not-needed
+
+This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
+('VFS: don't do protected {sym,hard}links by default').
+
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -651,8 +651,8 @@ static inline void put_link(struct namei
+ path_put(link);
+ }
+
+-int sysctl_protected_symlinks __read_mostly = 0;
+-int sysctl_protected_hardlinks __read_mostly = 0;
++int sysctl_protected_symlinks __read_mostly = 1;
++int sysctl_protected_hardlinks __read_mostly = 1;
+
+ /**
+ * may_follow_link - Check symlink following for unsafe situations
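For illustration only (not part of the patch): with both defaults flipped to 1, the effective values can still be checked or overridden at runtime through the normal sysctl interface. A minimal sketch that reads the corresponding procfs entries:

#include <stdio.h>

static int read_sysctl(const char *path)
{
    int val = -1;
    FILE *f = fopen(path, "r");

    if (f) {
        if (fscanf(f, "%d", &val) != 1)
            val = -1;
        fclose(f);
    }
    return val;
}

int main(void)
{
    /* Both should report 1 on a kernel carrying this patch, unless an
     * admin has overridden them via sysctl or /etc/sysctl.conf. */
    printf("fs.protected_symlinks  = %d\n",
           read_sysctl("/proc/sys/fs/protected_symlinks"));
    printf("fs.protected_hardlinks = %d\n",
           read_sysctl("/proc/sys/fs/protected_hardlinks"));
    return 0;
}
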
diff --git a/2900_dev-root-proc-mount-fix.patch b/2900_dev-root-proc-mount-fix.patch
new file mode 100644
index 0000000..60af1eb
--- /dev/null
+++ b/2900_dev-root-proc-mount-fix.patch
@@ -0,0 +1,38 @@
+--- a/init/do_mounts.c 2015-08-19 10:27:16.753852576 -0400
++++ b/init/do_mounts.c 2015-08-19 10:34:25.473850353 -0400
+@@ -490,7 +490,11 @@ void __init change_floppy(char *fmt, ...
+ va_start(args, fmt);
+ vsprintf(buf, fmt, args);
+ va_end(args);
+- fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
++ if (saved_root_name[0])
++ fd = sys_open(saved_root_name, O_RDWR | O_NDELAY, 0);
++ else
++ fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
++
+ if (fd >= 0) {
+ sys_ioctl(fd, FDEJECT, 0);
+ sys_close(fd);
+@@ -534,11 +538,17 @@ void __init mount_root(void)
+ #endif
+ #ifdef CONFIG_BLOCK
+ {
+- int err = create_dev("/dev/root", ROOT_DEV);
+-
+- if (err < 0)
+- pr_emerg("Failed to create /dev/root: %d\n", err);
+- mount_block_root("/dev/root", root_mountflags);
++ if (saved_root_name[0] == '/') {
++ int err = create_dev(saved_root_name, ROOT_DEV);
++ if (err < 0)
++ pr_emerg("Failed to create %s: %d\n", saved_root_name, err);
++ mount_block_root(saved_root_name, root_mountflags);
++ } else {
++ int err = create_dev("/dev/root", ROOT_DEV);
++ if (err < 0)
++ pr_emerg("Failed to create /dev/root: %d\n", err);
++ mount_block_root("/dev/root", root_mountflags);
++ }
+ }
+ #endif
+ }
diff --git a/4200_fbcondecor.patch b/4200_fbcondecor.patch
new file mode 100644
index 0000000..f7d9879
--- /dev/null
+++ b/4200_fbcondecor.patch
@@ -0,0 +1,2095 @@
+diff --git a/Documentation/fb/00-INDEX b/Documentation/fb/00-INDEX
+index fe85e7c..2230930 100644
+--- a/Documentation/fb/00-INDEX
++++ b/Documentation/fb/00-INDEX
+@@ -23,6 +23,8 @@ ep93xx-fb.txt
+ - info on the driver for EP93xx LCD controller.
+ fbcon.txt
+ - intro to and usage guide for the framebuffer console (fbcon).
++fbcondecor.txt
++ - info on the Framebuffer Console Decoration
+ framebuffer.txt
+ - introduction to frame buffer devices.
+ gxfb.txt
+diff --git a/Documentation/fb/fbcondecor.txt b/Documentation/fb/fbcondecor.txt
+new file mode 100644
+index 0000000..637209e
+--- /dev/null
++++ b/Documentation/fb/fbcondecor.txt
+@@ -0,0 +1,207 @@
++What is it?
++-----------
++
++The framebuffer decorations are a kernel feature which allows displaying a
++background picture on selected consoles.
++
++What do I need to get it to work?
++---------------------------------
++
++To get fbcondecor up-and-running you will have to:
++ 1) get a copy of splashutils [1] or a similar program
++ 2) get some fbcondecor themes
++ 3) build the kernel helper program
++ 4) build your kernel with the FB_CON_DECOR option enabled.
++
++To get fbcondecor operational right after fbcon initialization is finished, you
++will have to include a theme and the kernel helper into your initramfs image.
++Please refer to splashutils documentation for instructions on how to do that.
++
++[1] The splashutils package can be downloaded from:
++ http://github.com/alanhaggai/fbsplash
++
++The userspace helper
++--------------------
++
++The userspace fbcondecor helper (by default: /sbin/fbcondecor_helper) is called by the
++kernel whenever an important event occurs and the kernel needs some kind of
++job to be carried out. Important events include console switches and video
++mode switches (the kernel requests background images and configuration
++parameters for the current console). The fbcondecor helper must be accessible at
++all times. If it's not, fbcondecor will be switched off automatically.
++
++It's possible to set the path to the fbcondecor helper by writing it to
++/proc/sys/kernel/fbcondecor.
++
++*****************************************************************************
++
++The information below is mostly technical stuff. There's probably no need to
++read it unless you plan to develop a userspace helper.
++
++The fbcondecor protocol
++-----------------------
++
++The fbcondecor protocol defines a communication interface between the kernel and
++the userspace fbcondecor helper.
++
++The kernel side is responsible for:
++
++ * rendering console text, using an image as a background (instead of a
++ standard solid color fbcon uses),
++ * accepting commands from the user via ioctls on the fbcondecor device,
++ * calling the userspace helper to set things up as soon as the fb subsystem
++ is initialized.
++
++The userspace helper is responsible for everything else, including parsing
++configuration files, decompressing the image files whenever the kernel needs
++it, and communicating with the kernel if necessary.
++
++The fbcondecor protocol specifies how communication is done in both directions:
++kernel->userspace and userspace->kernel.
++
++Kernel -> Userspace
++-------------------
++
++The kernel communicates with the userspace helper by calling it and specifying
++the task to be done in a series of arguments.
++
++The arguments follow the pattern:
++<fbcondecor protocol version> <command> <parameters>
++
++All commands defined in fbcondecor protocol v2 have the following parameters:
++ virtual console
++ framebuffer number
++ theme
++
++Fbcondecor protocol v1 specified an additional 'fbcondecor mode' after the
++framebuffer number. Fbcondecor protocol v1 is deprecated and should not be used.
++
++Fbcondecor protocol v2 specifies the following commands:
++
++getpic
++------
++ The kernel issues this command to request image data. It's up to the
++ userspace helper to find a background image appropriate for the specified
++ theme and the current resolution. The userspace helper should respond by
++ issuing the FBIOCONDECOR_SETPIC ioctl.
++
++init
++----
++ The kernel issues this command after the fbcondecor device is created and
++ the fbcondecor interface is initialized. Upon receiving 'init', the userspace
++ helper should parse the kernel command line (/proc/cmdline) or otherwise
++ decide whether fbcondecor is to be activated.
++
++ To activate fbcondecor on the first console the helper should issue the
++ FBIOCONDECOR_SETCFG, FBIOCONDECOR_SETPIC and FBIOCONDECOR_SETSTATE commands,
++ in the above-mentioned order.
++
++ When the userspace helper is called in an early phase of the boot process
++ (right after the initialization of fbcon), no filesystems will be mounted.
++ The helper program should mount sysfs and then create the appropriate
++ framebuffer, fbcondecor and tty0 devices (if they don't already exist) to get
++ current display settings and to be able to communicate with the kernel side.
++ It should probably also mount the procfs to be able to parse the kernel
++ command line parameters.
++
++ Note that the console sem is not held when the kernel calls fbcondecor_helper
++ with the 'init' command. The fbcondecor helper should perform all ioctls with
++ origin set to FBCON_DECOR_IO_ORIG_USER.
++
++modechange
++----------
++ The kernel issues this command on a mode change. The helper's response should
++ be similar to the response to the 'init' command. Note that this time the
++ console sem is held and all ioctls must be performed with origin set to
++ FBCON_DECOR_IO_ORIG_KERNEL.
++
++
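For illustration only (not part of the patch): the argument layout a protocol v2 helper receives. The handling below is just a placeholder showing where a real helper would parse its theme configuration and issue the ioctls described in the next section.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc < 6 || strcmp(argv[1], "2") != 0)
        return 1;                       /* only protocol v2 is handled */

    const char *cmd   = argv[2];        /* "getpic", "init" or "modechange" */
    int vc            = atoi(argv[3]);  /* virtual console */
    int fb            = atoi(argv[4]);  /* framebuffer number */
    const char *theme = argv[5];        /* theme name */

    if (strcmp(cmd, "init") == 0 || strcmp(cmd, "modechange") == 0)
        printf("configure vc %d on fb %d with theme '%s'\n", vc, fb, theme);
    else if (strcmp(cmd, "getpic") == 0)
        printf("load background for theme '%s', then issue FBIOCONDECOR_SETPIC\n",
               theme);

    return 0;
}
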
++Userspace -> Kernel
++-------------------
++
++Userspace programs can communicate with fbcondecor via ioctls on the
++fbcondecor device. These ioctls are to be used by both the userspace helper
++(called only by the kernel) and userspace configuration tools (run by the users).
++
++The fbcondecor helper should set the origin field to FBCON_DECOR_IO_ORIG_KERNEL
++when doing the appropriate ioctls. All userspace configuration tools should
++use FBCON_DECOR_IO_ORIG_USER. Failure to set the appropriate value in the origin
++field when performing ioctls from the kernel helper will most likely result
++in a console deadlock.
++
++FBCON_DECOR_IO_ORIG_KERNEL instructs fbcondecor not to try to acquire the console
++semaphore. Not surprisingly, FBCON_DECOR_IO_ORIG_USER instructs it to acquire
++the console sem.
++
++The framebuffer console decoration provides the following ioctls (all defined in
++linux/fb.h):
++
++FBIOCONDECOR_SETPIC
++description: loads a background picture for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct fb_image*
++notes:
++If called for consoles other than the current foreground one, the picture data
++will be ignored.
++
++If the current virtual console is running in an 8-bpp mode, the cmap substruct
++of fb_image has to be filled appropriately: start should be set to 16 (the first
++16 colors are reserved for fbcon), len to a value <= 240, and red, green and
++blue should point to valid cmap data. The transp field is ignored. The fields
++dx, dy, bg_color, fg_color in fb_image are ignored as well.
++
++FBIOCONDECOR_SETCFG
++description: sets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++notes: The structure has to be filled with valid data.
++
++FBIOCONDECOR_GETCFG
++description: gets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++
++FBIOCONDECOR_SETSTATE
++description: sets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++ values: 0 = disabled, 1 = enabled.
++
++FBIOCONDECOR_GETSTATE
++description: gets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++ values: as in FBIOCONDECOR_SETSTATE
++
++Info on used structures:
++
++Definition of struct vc_decor can be found in linux/console_decor.h. It's
++heavily commented. Note that the 'theme' field should point to a string
++no longer than FBCON_DECOR_THEME_LEN. When a FBIOCONDECOR_GETCFG call is
++performed, the theme field should point to a char buffer of length
++FBCON_DECOR_THEME_LEN.
++
++Definition of struct fbcon_decor_iowrapper can be found in linux/fb.h.
++The fields in this struct have the following meaning:
++
++vc:
++Virtual console number.
++
++origin:
++Specifies if the ioctl is performed as a response to a kernel request. The
++fbcondecor helper should set this field to FBCON_DECOR_IO_ORIG_KERNEL, userspace
++programs should set it to FBCON_DECOR_IO_ORIG_USER. This field is necessary to
++avoid console semaphore deadlocks.
++
++data:
++Pointer to a data structure appropriate for the performed ioctl. The type of
++the data struct is specified in each ioctl's description.
++
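For illustration only (not part of the patch): how a userspace configuration tool might query the decor state of a console. This assumes the patched linux/fb.h is installed (providing FBIOCONDECOR_GETSTATE, FBCON_DECOR_IO_ORIG_USER and struct fbcon_decor_iowrapper) and that the fbcondecor misc device is available as /dev/fbcondecor.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fb.h>

int main(void)
{
    unsigned int state = 0;
    struct fbcon_decor_iowrapper wrapper = {
        .vc     = 1,                         /* virtual console to query */
        .origin = FBCON_DECOR_IO_ORIG_USER,  /* configuration tools use _USER */
        .data   = &state,
    };

    int fd = open("/dev/fbcondecor", O_RDWR);
    if (fd < 0) {
        perror("open /dev/fbcondecor");
        return 1;
    }

    if (ioctl(fd, FBIOCONDECOR_GETSTATE, &wrapper) < 0)
        perror("FBIOCONDECOR_GETSTATE");
    else
        printf("decor on vc1 is %s\n", state ? "enabled" : "disabled");

    close(fd);
    return 0;
}
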
++*****************************************************************************
++
++Credit
++------
++
++Original 'bootsplash' project & implementation by:
++ Volker Poplawski <volker@poplawski.de>, Stefan Reinauer <stepan@suse.de>,
++ Steffen Winterfeldt <snwint@suse.de>, Michael Schroeder <mls@suse.de>,
++ Ken Wimer <wimer@suse.de>.
++
++Fbcondecor, fbcondecor protocol design, current implementation & docs by:
++ Michal Januszewski <michalj+fbcondecor@gmail.com>
++
+diff --git a/drivers/Makefile b/drivers/Makefile
+index 53abb4a..1721aee 100644
+--- a/drivers/Makefile
++++ b/drivers/Makefile
+@@ -17,6 +17,10 @@ obj-y += pwm/
+ obj-$(CONFIG_PCI) += pci/
+ obj-$(CONFIG_PARISC) += parisc/
+ obj-$(CONFIG_RAPIDIO) += rapidio/
++# tty/ comes before char/ so that the VT console is the boot-time
++# default.
++obj-y += tty/
++obj-y += char/
+ obj-y += video/
+ obj-y += idle/
+
+@@ -45,11 +49,6 @@ obj-$(CONFIG_REGULATOR) += regulator/
+ # reset controllers early, since gpu drivers might rely on them to initialize
+ obj-$(CONFIG_RESET_CONTROLLER) += reset/
+
+-# tty/ comes before char/ so that the VT console is the boot-time
+-# default.
+-obj-y += tty/
+-obj-y += char/
+-
+ # iommu/ comes before gpu as gpu are using iommu controllers
+ obj-$(CONFIG_IOMMU_SUPPORT) += iommu/
+
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index 38da6e2..fe58152 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -130,6 +130,19 @@ config FRAMEBUFFER_CONSOLE_ROTATION
+ such that other users of the framebuffer will remain normally
+ oriented.
+
++config FB_CON_DECOR
++ bool "Support for the Framebuffer Console Decorations"
++ depends on FRAMEBUFFER_CONSOLE=y && !FB_TILEBLITTING
++ default n
++ ---help---
++ This option enables support for framebuffer console decorations which
++ makes it possible to display images in the background of the system
++ consoles. Note that userspace utilities are necessary in order to take
++ advantage of these features. Refer to Documentation/fb/fbcondecor.txt
++ for more information.
++
++ If unsure, say N.
++
+ config STI_CONSOLE
+ bool "STI text console"
+ depends on PARISC
+diff --git a/drivers/video/console/Makefile b/drivers/video/console/Makefile
+index 43bfa48..cc104b6 100644
+--- a/drivers/video/console/Makefile
++++ b/drivers/video/console/Makefile
+@@ -16,4 +16,5 @@ obj-$(CONFIG_FRAMEBUFFER_CONSOLE) += fbcon_rotate.o fbcon_cw.o fbcon_ud.o \
+ fbcon_ccw.o
+ endif
+
++obj-$(CONFIG_FB_CON_DECOR) += fbcondecor.o cfbcondecor.o
+ obj-$(CONFIG_FB_STI) += sticore.o
+diff --git a/drivers/video/console/bitblit.c b/drivers/video/console/bitblit.c
+index dbfe4ee..14da307 100644
+--- a/drivers/video/console/bitblit.c
++++ b/drivers/video/console/bitblit.c
+@@ -18,6 +18,7 @@
+ #include <linux/console.h>
+ #include <asm/types.h>
+ #include "fbcon.h"
++#include "fbcondecor.h"
+
+ /*
+ * Accelerated handlers.
+@@ -55,6 +56,13 @@ static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ area.height = height * vc->vc_font.height;
+ area.width = width * vc->vc_font.width;
+
++ if (fbcon_decor_active(info, vc)) {
++ area.sx += vc->vc_decor.tx;
++ area.sy += vc->vc_decor.ty;
++ area.dx += vc->vc_decor.tx;
++ area.dy += vc->vc_decor.ty;
++ }
++
+ info->fbops->fb_copyarea(info, &area);
+ }
+
+@@ -379,11 +387,15 @@ static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ cursor.image.depth = 1;
+ cursor.rop = ROP_XOR;
+
+- if (info->fbops->fb_cursor)
+- err = info->fbops->fb_cursor(info, &cursor);
++ if (fbcon_decor_active(info, vc)) {
++ fbcon_decor_cursor(info, &cursor);
++ } else {
++ if (info->fbops->fb_cursor)
++ err = info->fbops->fb_cursor(info, &cursor);
+
+- if (err)
+- soft_cursor(info, &cursor);
++ if (err)
++ soft_cursor(info, &cursor);
++ }
+
+ ops->cursor_reset = 0;
+ }
+diff --git a/drivers/video/console/cfbcondecor.c b/drivers/video/console/cfbcondecor.c
+new file mode 100644
+index 0000000..c262540
+--- /dev/null
++++ b/drivers/video/console/cfbcondecor.c
+@@ -0,0 +1,473 @@
++/*
++ * linux/drivers/video/cfbcon_decor.c -- Framebuffer decor render functions
++ *
++ * Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ * Code based upon "Bootdecor" (C) 2001-2003
++ * Volker Poplawski <volker@poplawski.de>,
++ * Stefan Reinauer <stepan@suse.de>,
++ * Steffen Winterfeldt <snwint@suse.de>,
++ * Michael Schroeder <mls@suse.de>,
++ * Ken Wimer <wimer@suse.de>.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file COPYING in the main directory of this archive for
++ * more details.
++ */
++#include <linux/module.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/selection.h>
++#include <linux/slab.h>
++#include <linux/vt_kern.h>
++#include <asm/irq.h>
++
++#include "fbcon.h"
++#include "fbcondecor.h"
++
++#define parse_pixel(shift, bpp, type) \
++ do { \
++ if (d & (0x80 >> (shift))) \
++ dd2[(shift)] = fgx; \
++ else \
++ dd2[(shift)] = transparent ? *(type *)decor_src : bgx; \
++ decor_src += (bpp); \
++ } while (0) \
++
++extern int get_color(struct vc_data *vc, struct fb_info *info,
++ u16 c, int is_fg);
++
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc)
++{
++ int i, j, k;
++ int minlen = min(min(info->var.red.length, info->var.green.length),
++ info->var.blue.length);
++ u32 col;
++
++ for (j = i = 0; i < 16; i++) {
++ k = color_table[i];
++
++ col = ((vc->vc_palette[j++] >> (8-minlen))
++ << info->var.red.offset);
++ col |= ((vc->vc_palette[j++] >> (8-minlen))
++ << info->var.green.offset);
++ col |= ((vc->vc_palette[j++] >> (8-minlen))
++ << info->var.blue.offset);
++ ((u32 *)info->pseudo_palette)[k] = col;
++ }
++}
++
++void fbcon_decor_renderc(struct fb_info *info, int ypos, int xpos, int height,
++ int width, u8 *src, u32 fgx, u32 bgx, u8 transparent)
++{
++ unsigned int x, y;
++ u32 dd;
++ int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++ unsigned int d = ypos * info->fix.line_length + xpos * bytespp;
++ unsigned int ds = (ypos * info->var.xres + xpos) * bytespp;
++ u16 dd2[4];
++
++ u8 *decor_src = (u8 *)(info->bgdecor.data + ds);
++ u8 *dst = (u8 *)(info->screen_base + d);
++
++ if ((ypos + height) > info->var.yres || (xpos + width) > info->var.xres)
++ return;
++
++ for (y = 0; y < height; y++) {
++ switch (info->var.bits_per_pixel) {
++
++ case 32:
++ for (x = 0; x < width; x++) {
++
++ if ((x & 7) == 0)
++ d = *src++;
++ if (d & 0x80)
++ dd = fgx;
++ else
++ dd = transparent ?
++ *(u32 *)decor_src : bgx;
++
++ d <<= 1;
++ decor_src += 4;
++ fb_writel(dd, dst);
++ dst += 4;
++ }
++ break;
++ case 24:
++ for (x = 0; x < width; x++) {
++
++ if ((x & 7) == 0)
++ d = *src++;
++ if (d & 0x80)
++ dd = fgx;
++ else
++ dd = transparent ?
++ (*(u32 *)decor_src & 0xffffff) : bgx;
++
++ d <<= 1;
++ decor_src += 3;
++#ifdef __LITTLE_ENDIAN
++ fb_writew(dd & 0xffff, dst);
++ dst += 2;
++ fb_writeb((dd >> 16), dst);
++#else
++ fb_writew(dd >> 8, dst);
++ dst += 2;
++ fb_writeb(dd & 0xff, dst);
++#endif
++ dst++;
++ }
++ break;
++ case 16:
++ for (x = 0; x < width; x += 2) {
++ if ((x & 7) == 0)
++ d = *src++;
++
++ parse_pixel(0, 2, u16);
++ parse_pixel(1, 2, u16);
++#ifdef __LITTLE_ENDIAN
++ dd = dd2[0] | (dd2[1] << 16);
++#else
++ dd = dd2[1] | (dd2[0] << 16);
++#endif
++ d <<= 2;
++ fb_writel(dd, dst);
++ dst += 4;
++ }
++ break;
++
++ case 8:
++ for (x = 0; x < width; x += 4) {
++ if ((x & 7) == 0)
++ d = *src++;
++
++ parse_pixel(0, 1, u8);
++ parse_pixel(1, 1, u8);
++ parse_pixel(2, 1, u8);
++ parse_pixel(3, 1, u8);
++
++#ifdef __LITTLE_ENDIAN
++ dd = dd2[0] | (dd2[1] << 8) | (dd2[2] << 16) | (dd2[3] << 24);
++#else
++ dd = dd2[3] | (dd2[2] << 8) | (dd2[1] << 16) | (dd2[0] << 24);
++#endif
++ d <<= 4;
++ fb_writel(dd, dst);
++ dst += 4;
++ }
++ }
++
++ dst += info->fix.line_length - width * bytespp;
++ decor_src += (info->var.xres - width) * bytespp;
++ }
++}
++
++#define cc2cx(a) \
++ ((info->fix.visual == FB_VISUAL_TRUECOLOR || \
++ info->fix.visual == FB_VISUAL_DIRECTCOLOR) ? \
++ ((u32 *)info->pseudo_palette)[a] : a)
++
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info,
++ const unsigned short *s, int count, int yy, int xx)
++{
++ unsigned short charmask = vc->vc_hi_font_mask ? 0x1ff : 0xff;
++ struct fbcon_ops *ops = info->fbcon_par;
++ int fg_color, bg_color, transparent;
++ u8 *src;
++ u32 bgx, fgx;
++ u16 c = scr_readw(s);
++
++ fg_color = get_color(vc, info, c, 1);
++ bg_color = get_color(vc, info, c, 0);
++
++ /* Don't paint the background image if console is blanked */
++ transparent = ops->blank_state ? 0 :
++ (vc->vc_decor.bg_color == bg_color);
++
++ xx = xx * vc->vc_font.width + vc->vc_decor.tx;
++ yy = yy * vc->vc_font.height + vc->vc_decor.ty;
++
++ fgx = cc2cx(fg_color);
++ bgx = cc2cx(bg_color);
++
++ while (count--) {
++ c = scr_readw(s++);
++ src = vc->vc_font.data + (c & charmask) * vc->vc_font.height *
++ ((vc->vc_font.width + 7) >> 3);
++
++ fbcon_decor_renderc(info, yy, xx, vc->vc_font.height,
++ vc->vc_font.width, src, fgx, bgx, transparent);
++ xx += vc->vc_font.width;
++ }
++}
++
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor)
++{
++ int i;
++ unsigned int dsize, s_pitch;
++ struct fbcon_ops *ops = info->fbcon_par;
++ struct vc_data *vc;
++ u8 *src;
++
++ /* we really don't need any cursors while the console is blanked */
++ if (info->state != FBINFO_STATE_RUNNING || ops->blank_state)
++ return;
++
++ vc = vc_cons[ops->currcon].d;
++
++ src = kmalloc(64 + sizeof(struct fb_image), GFP_ATOMIC);
++ if (!src)
++ return;
++
++ s_pitch = (cursor->image.width + 7) >> 3;
++ dsize = s_pitch * cursor->image.height;
++ if (cursor->enable) {
++ switch (cursor->rop) {
++ case ROP_XOR:
++ for (i = 0; i < dsize; i++)
++ src[i] = cursor->image.data[i] ^ cursor->mask[i];
++ break;
++ case ROP_COPY:
++ default:
++ for (i = 0; i < dsize; i++)
++ src[i] = cursor->image.data[i] & cursor->mask[i];
++ break;
++ }
++ } else
++ memcpy(src, cursor->image.data, dsize);
++
++ fbcon_decor_renderc(info,
++ cursor->image.dy + vc->vc_decor.ty,
++ cursor->image.dx + vc->vc_decor.tx,
++ cursor->image.height,
++ cursor->image.width,
++ (u8 *)src,
++ cc2cx(cursor->image.fg_color),
++ cc2cx(cursor->image.bg_color),
++ cursor->image.bg_color == vc->vc_decor.bg_color);
++
++ kfree(src);
++}
++
++static void decorset(u8 *dst, int height, int width, int dstbytes,
++ u32 bgx, int bpp)
++{
++ int i;
++
++ if (bpp == 8)
++ bgx |= bgx << 8;
++ if (bpp == 16 || bpp == 8)
++ bgx |= bgx << 16;
++
++ while (height-- > 0) {
++ u8 *p = dst;
++
++ switch (bpp) {
++
++ case 32:
++ for (i = 0; i < width; i++) {
++ fb_writel(bgx, p); p += 4;
++ }
++ break;
++ case 24:
++ for (i = 0; i < width; i++) {
++#ifdef __LITTLE_ENDIAN
++ fb_writew((bgx & 0xffff), (u16 *)p); p += 2;
++ fb_writeb((bgx >> 16), p++);
++#else
++ fb_writew((bgx >> 8), (u16 *)p); p += 2;
++ fb_writeb((bgx & 0xff), p++);
++#endif
++ }
++ break;
++ case 16:
++ for (i = 0; i < width/4; i++) {
++ fb_writel(bgx, p); p += 4;
++ fb_writel(bgx, p); p += 4;
++ }
++ if (width & 2) {
++ fb_writel(bgx, p); p += 4;
++ }
++ if (width & 1)
++ fb_writew(bgx, (u16 *)p);
++ break;
++ case 8:
++ for (i = 0; i < width/4; i++) {
++ fb_writel(bgx, p); p += 4;
++ }
++
++ if (width & 2) {
++ fb_writew(bgx, p); p += 2;
++ }
++ if (width & 1)
++ fb_writeb(bgx, (u8 *)p);
++ break;
++
++ }
++ dst += dstbytes;
++ }
++}
++
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes,
++ int srclinebytes, int bpp)
++{
++ int i;
++
++ while (height-- > 0) {
++ u32 *p = (u32 *)dst;
++ u32 *q = (u32 *)src;
++
++ switch (bpp) {
++
++ case 32:
++ for (i = 0; i < width; i++)
++ fb_writel(*q++, p++);
++ break;
++ case 24:
++ for (i = 0; i < (width * 3 / 4); i++)
++ fb_writel(*q++, p++);
++ if ((width * 3) % 4) {
++ if (width & 2) {
++ fb_writeb(*(u8 *)q, (u8 *)p);
++ } else if (width & 1) {
++ fb_writew(*(u16 *)q, (u16 *)p);
++ fb_writeb(*(u8 *)((u16 *)q + 1),
++ (u8 *)((u16 *)p + 2));
++ }
++ }
++ break;
++ case 16:
++ for (i = 0; i < width/4; i++) {
++ fb_writel(*q++, p++);
++ fb_writel(*q++, p++);
++ }
++ if (width & 2)
++ fb_writel(*q++, p++);
++ if (width & 1)
++ fb_writew(*(u16 *)q, (u16 *)p);
++ break;
++ case 8:
++ for (i = 0; i < width/4; i++)
++ fb_writel(*q++, p++);
++
++ if (width & 2) {
++ fb_writew(*(u16 *)q, (u16 *)p);
++ q = (u32 *) ((u16 *)q + 1);
++ p = (u32 *) ((u16 *)p + 1);
++ }
++ if (width & 1)
++ fb_writeb(*(u8 *)q, (u8 *)p);
++ break;
++ }
++
++ dst += linebytes;
++ src += srclinebytes;
++ }
++}
++
++static void decorfill(struct fb_info *info, int sy, int sx, int height,
++ int width)
++{
++ int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++ int d = sy * info->fix.line_length + sx * bytespp;
++ int ds = (sy * info->var.xres + sx) * bytespp;
++
++ fbcon_decor_copy((u8 *)(info->screen_base + d), (u8 *)(info->bgdecor.data + ds),
++ height, width, info->fix.line_length, info->var.xres * bytespp,
++ info->var.bits_per_pixel);
++}
++
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx,
++ int height, int width)
++{
++ int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
++ struct fbcon_ops *ops = info->fbcon_par;
++ u8 *dst;
++ int transparent, bg_color = attr_bgcol_ec(bgshift, vc, info);
++
++ transparent = (vc->vc_decor.bg_color == bg_color);
++ sy = sy * vc->vc_font.height + vc->vc_decor.ty;
++ sx = sx * vc->vc_font.width + vc->vc_decor.tx;
++ height *= vc->vc_font.height;
++ width *= vc->vc_font.width;
++
++ /* Don't paint the background image if console is blanked */
++ if (transparent && !ops->blank_state) {
++ decorfill(info, sy, sx, height, width);
++ } else {
++ dst = (u8 *)(info->screen_base + sy * info->fix.line_length +
++ sx * ((info->var.bits_per_pixel + 7) >> 3));
++ decorset(dst, height, width, info->fix.line_length, cc2cx(bg_color),
++ info->var.bits_per_pixel);
++ }
++}
++
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info,
++ int bottom_only)
++{
++ unsigned int tw = vc->vc_cols*vc->vc_font.width;
++ unsigned int th = vc->vc_rows*vc->vc_font.height;
++
++ if (!bottom_only) {
++ /* top margin */
++ decorfill(info, 0, 0, vc->vc_decor.ty, info->var.xres);
++ /* left margin */
++ decorfill(info, vc->vc_decor.ty, 0, th, vc->vc_decor.tx);
++ /* right margin */
++ decorfill(info, vc->vc_decor.ty, vc->vc_decor.tx + tw, th,
++ info->var.xres - vc->vc_decor.tx - tw);
++ }
++ decorfill(info, vc->vc_decor.ty + th, 0,
++ info->var.yres - vc->vc_decor.ty - th, info->var.xres);
++}
++
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y,
++ int sx, int dx, int width)
++{
++ u16 *d = (u16 *) (vc->vc_origin + vc->vc_size_row * y + dx * 2);
++ u16 *s = d + (dx - sx);
++ u16 *start = d;
++ u16 *ls = d;
++ u16 *le = d + width;
++ u16 c;
++ int x = dx;
++ u16 attr = 1;
++
++ do {
++ c = scr_readw(d);
++ if (attr != (c & 0xff00)) {
++ attr = c & 0xff00;
++ if (d > start) {
++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
++ x += d - start;
++ start = d;
++ }
++ }
++ if (s >= ls && s < le && c == scr_readw(s)) {
++ if (d > start) {
++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
++ x += d - start + 1;
++ start = d + 1;
++ } else {
++ x++;
++ start++;
++ }
++ }
++ s++;
++ d++;
++ } while (d < le);
++ if (d > start)
++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
++}
++
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank)
++{
++ if (blank) {
++ decorset((u8 *)info->screen_base, info->var.yres, info->var.xres,
++ info->fix.line_length, 0, info->var.bits_per_pixel);
++ } else {
++ update_screen(vc);
++ fbcon_decor_clear_margins(vc, info, 0);
++ }
++}
++
+diff --git a/drivers/video/console/fbcon.c b/drivers/video/console/fbcon.c
+index b87f5cf..ce44538 100644
+--- a/drivers/video/console/fbcon.c
++++ b/drivers/video/console/fbcon.c
+@@ -79,6 +79,7 @@
+ #include <asm/irq.h>
+
+ #include "fbcon.h"
++#include "../console/fbcondecor.h"
+
+ #ifdef FBCONDEBUG
+ # define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __func__ , ## args)
+@@ -94,7 +95,7 @@ enum {
+
+ static struct display fb_display[MAX_NR_CONSOLES];
+
+-static signed char con2fb_map[MAX_NR_CONSOLES];
++signed char con2fb_map[MAX_NR_CONSOLES];
+ static signed char con2fb_map_boot[MAX_NR_CONSOLES];
+
+ static int logo_lines;
+@@ -282,7 +283,7 @@ static inline int fbcon_is_inactive(struct vc_data *vc, struct fb_info *info)
+ !vt_force_oops_output(vc);
+ }
+
+-static int get_color(struct vc_data *vc, struct fb_info *info,
++int get_color(struct vc_data *vc, struct fb_info *info,
+ u16 c, int is_fg)
+ {
+ int depth = fb_get_color_depth(&info->var, &info->fix);
+@@ -546,6 +547,9 @@ static int do_fbcon_takeover(int show_logo)
+ info_idx = -1;
+ } else {
+ fbcon_has_console_bind = 1;
++#ifdef CONFIG_FB_CON_DECOR
++ fbcon_decor_init();
++#endif
+ }
+
+ return err;
+@@ -1005,6 +1009,12 @@ static const char *fbcon_startup(void)
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ cols /= vc->vc_font.width;
+ rows /= vc->vc_font.height;
++
++ if (fbcon_decor_active(info, vc)) {
++ cols = vc->vc_decor.twidth / vc->vc_font.width;
++ rows = vc->vc_decor.theight / vc->vc_font.height;
++ }
++
+ vc_resize(vc, cols, rows);
+
+ DPRINTK("mode: %s\n", info->fix.id);
+@@ -1034,7 +1044,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ cap = info->flags;
+
+ if (vc != svc || logo_shown == FBCON_LOGO_DONTSHOW ||
+- (info->fix.type == FB_TYPE_TEXT))
++ (info->fix.type == FB_TYPE_TEXT) || fbcon_decor_active(info, vc))
+ logo = 0;
+
+ if (var_to_display(p, &info->var, info))
+@@ -1259,6 +1269,11 @@ static void fbcon_clear(struct vc_data *vc, int sy, int sx, int height,
+ fbcon_clear_margins(vc, 0);
+ }
+
++ if (fbcon_decor_active(info, vc)) {
++ fbcon_decor_clear(vc, info, sy, sx, height, width);
++ return;
++ }
++
+ /* Split blits that cross physical y_wrap boundary */
+
+ y_break = p->vrows - p->yscroll;
+@@ -1278,10 +1293,15 @@ static void fbcon_putcs(struct vc_data *vc, const unsigned short *s,
+ struct display *p = &fb_display[vc->vc_num];
+ struct fbcon_ops *ops = info->fbcon_par;
+
+- if (!fbcon_is_inactive(vc, info))
+- ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
+- get_color(vc, info, scr_readw(s), 1),
+- get_color(vc, info, scr_readw(s), 0));
++ if (!fbcon_is_inactive(vc, info)) {
++
++ if (fbcon_decor_active(info, vc))
++ fbcon_decor_putcs(vc, info, s, count, ypos, xpos);
++ else
++ ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
++ get_color(vc, info, scr_readw(s), 1),
++ get_color(vc, info, scr_readw(s), 0));
++ }
+ }
+
+ static void fbcon_putc(struct vc_data *vc, int c, int ypos, int xpos)
+@@ -1297,8 +1317,12 @@ static void fbcon_clear_margins(struct vc_data *vc, int bottom_only)
+ struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+ struct fbcon_ops *ops = info->fbcon_par;
+
+- if (!fbcon_is_inactive(vc, info))
+- ops->clear_margins(vc, info, bottom_only);
++ if (!fbcon_is_inactive(vc, info)) {
++ if (fbcon_decor_active(info, vc))
++ fbcon_decor_clear_margins(vc, info, bottom_only);
++ else
++ ops->clear_margins(vc, info, bottom_only);
++ }
+ }
+
+ static void fbcon_cursor(struct vc_data *vc, int mode)
+@@ -1819,7 +1843,7 @@ static int fbcon_scroll(struct vc_data *vc, int t, int b, int dir,
+ count = vc->vc_rows;
+ if (softback_top)
+ fbcon_softback_note(vc, t, count);
+- if (logo_shown >= 0)
++ if (logo_shown >= 0 || fbcon_decor_active(info, vc))
+ goto redraw_up;
+ switch (p->scrollmode) {
+ case SCROLL_MOVE:
+@@ -1912,6 +1936,8 @@ static int fbcon_scroll(struct vc_data *vc, int t, int b, int dir,
+ count = vc->vc_rows;
+ if (logo_shown >= 0)
+ goto redraw_down;
++ if (fbcon_decor_active(info, vc))
++ goto redraw_down;
+ switch (p->scrollmode) {
+ case SCROLL_MOVE:
+ fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
+@@ -2060,6 +2086,13 @@ static void fbcon_bmove_rec(struct vc_data *vc, struct display *p, int sy, int s
+ }
+ return;
+ }
++
++ if (fbcon_decor_active(info, vc) && sy == dy && height == 1) {
++ /* must use slower redraw bmove to keep background pic intact */
++ fbcon_decor_bmove_redraw(vc, info, sy, sx, dx, width);
++ return;
++ }
++
+ ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx,
+ height, width);
+ }
+@@ -2130,8 +2163,8 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width,
+ var.yres = virt_h * virt_fh;
+ x_diff = info->var.xres - var.xres;
+ y_diff = info->var.yres - var.yres;
+- if (x_diff < 0 || x_diff > virt_fw ||
+- y_diff < 0 || y_diff > virt_fh) {
++ if ((x_diff < 0 || x_diff > virt_fw ||
++ y_diff < 0 || y_diff > virt_fh) && !vc->vc_decor.state) {
+ const struct fb_videomode *mode;
+
+ DPRINTK("attempting resize %ix%i\n", var.xres, var.yres);
+@@ -2167,6 +2200,22 @@ static int fbcon_switch(struct vc_data *vc)
+
+ info = registered_fb[con2fb_map[vc->vc_num]];
+ ops = info->fbcon_par;
++ prev_console = ops->currcon;
++ if (prev_console != -1)
++ old_info = registered_fb[con2fb_map[prev_console]];
++
++#ifdef CONFIG_FB_CON_DECOR
++ if (!fbcon_decor_active_vc(vc) && info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++ struct vc_data *vc_curr = vc_cons[prev_console].d;
++
++ if (vc_curr && fbcon_decor_active_vc(vc_curr)) {
++ // Clear the screen to avoid displaying funky colors
++ // during palette updates.
++ memset((u8 *)info->screen_base + info->fix.line_length * info->var.yoffset,
++ 0, info->var.yres * info->fix.line_length);
++ }
++ }
++#endif
+
+ if (softback_top) {
+ if (softback_lines)
+@@ -2185,9 +2234,6 @@ static int fbcon_switch(struct vc_data *vc)
+ logo_shown = FBCON_LOGO_CANSHOW;
+ }
+
+- prev_console = ops->currcon;
+- if (prev_console != -1)
+- old_info = registered_fb[con2fb_map[prev_console]];
+ /*
+ * FIXME: If we have multiple fbdev's loaded, we need to
+ * update all info->currcon. Perhaps, we can place this
+@@ -2231,6 +2277,18 @@ static int fbcon_switch(struct vc_data *vc)
+ fbcon_del_cursor_timer(old_info);
+ }
+
++ if (fbcon_decor_active_vc(vc)) {
++ struct vc_data *vc_curr = vc_cons[prev_console].d;
++
++ if (!vc_curr->vc_decor.theme ||
++ strcmp(vc->vc_decor.theme, vc_curr->vc_decor.theme) ||
++ (fbcon_decor_active_nores(info, vc_curr) &&
++ !fbcon_decor_active(info, vc_curr))) {
++ fbcon_decor_disable(vc, 0);
++ fbcon_decor_call_helper("modechange", vc->vc_num);
++ }
++ }
++
+ if (fbcon_is_inactive(vc, info) ||
+ ops->blank_state != FB_BLANK_UNBLANK)
+ fbcon_del_cursor_timer(info);
+@@ -2339,15 +2397,20 @@ static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch)
+ }
+ }
+
+- if (!fbcon_is_inactive(vc, info)) {
++ if (!fbcon_is_inactive(vc, info)) {
+ if (ops->blank_state != blank) {
+ ops->blank_state = blank;
+ fbcon_cursor(vc, blank ? CM_ERASE : CM_DRAW);
+ ops->cursor_flash = (!blank);
+
+- if (!(info->flags & FBINFO_MISC_USEREVENT))
+- if (fb_blank(info, blank))
+- fbcon_generic_blank(vc, info, blank);
++ if (!(info->flags & FBINFO_MISC_USEREVENT)) {
++ if (fb_blank(info, blank)) {
++ if (fbcon_decor_active(info, vc))
++ fbcon_decor_blank(vc, info, blank);
++ else
++ fbcon_generic_blank(vc, info, blank);
++ }
++ }
+ }
+
+ if (!blank)
+@@ -2522,13 +2585,22 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
+ }
+
+ if (resize) {
++ /* reset wrap/pan */
+ int cols, rows;
+
+ cols = FBCON_SWAP(ops->rotate, info->var.xres, info->var.yres);
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
++
++ if (fbcon_decor_active(info, vc)) {
++ info->var.xoffset = info->var.yoffset = p->yscroll = 0;
++ cols = vc->vc_decor.twidth;
++ rows = vc->vc_decor.theight;
++ }
+ cols /= w;
+ rows /= h;
++
+ vc_resize(vc, cols, rows);
++
+ if (con_is_visible(vc) && softback_buf)
+ fbcon_update_softback(vc);
+ } else if (con_is_visible(vc)
+@@ -2657,7 +2729,11 @@ static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table)
+ int i, j, k, depth;
+ u8 val;
+
+- if (fbcon_is_inactive(vc, info))
++ if (fbcon_is_inactive(vc, info)
++#ifdef CONFIG_FB_CON_DECOR
++ || vc->vc_num != fg_console
++#endif
++ )
+ return;
+
+ if (!con_is_visible(vc))
+@@ -2683,7 +2759,47 @@ static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table)
+ } else
+ fb_copy_cmap(fb_default_cmap(1 << depth), &palette_cmap);
+
+- fb_set_cmap(&palette_cmap, info);
++ if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++ info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++
++ u16 *red, *green, *blue;
++ int minlen = min(min(info->var.red.length, info->var.green.length),
++ info->var.blue.length);
++
++ struct fb_cmap cmap = {
++ .start = 0,
++ .len = (1 << minlen),
++ .red = NULL,
++ .green = NULL,
++ .blue = NULL,
++ .transp = NULL
++ };
++
++ red = kmalloc(256 * sizeof(u16) * 3, GFP_KERNEL);
++
++ if (!red)
++ goto out;
++
++ green = red + 256;
++ blue = green + 256;
++ cmap.red = red;
++ cmap.green = green;
++ cmap.blue = blue;
++
++ for (i = 0; i < cmap.len; i++)
++ red[i] = green[i] = blue[i] = (0xffff * i)/(cmap.len-1);
++
++ fb_set_cmap(&cmap, info);
++ fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++ kfree(red);
++
++ return;
++
++ } else if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++ info->var.bits_per_pixel == 8 && info->bgdecor.cmap.red != NULL)
++ fb_set_cmap(&info->bgdecor.cmap, info);
++
++out: fb_set_cmap(&palette_cmap, info);
+ }
+
+ static u16 *fbcon_screen_pos(struct vc_data *vc, int offset)
+@@ -2908,7 +3024,14 @@ static void fbcon_modechanged(struct fb_info *info)
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ cols /= vc->vc_font.width;
+ rows /= vc->vc_font.height;
+- vc_resize(vc, cols, rows);
++
++ if (!fbcon_decor_active_nores(info, vc)) {
++ vc_resize(vc, cols, rows);
++ } else {
++ fbcon_decor_disable(vc, 0);
++ fbcon_decor_call_helper("modechange", vc->vc_num);
++ }
++
+ updatescrollmode(p, info, vc);
+ scrollback_max = 0;
+ scrollback_current = 0;
+@@ -2953,7 +3076,8 @@ static void fbcon_set_all_vcs(struct fb_info *info)
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ cols /= vc->vc_font.width;
+ rows /= vc->vc_font.height;
+- vc_resize(vc, cols, rows);
++ if (!fbcon_decor_active_nores(info, vc))
++ vc_resize(vc, cols, rows);
+ }
+
+ if (fg != -1)
+@@ -3594,6 +3718,7 @@ static void fbcon_exit(void)
+ }
+ }
+
++ fbcon_decor_exit();
+ fbcon_has_exited = 1;
+ }
+
+diff --git a/drivers/video/console/fbcondecor.c b/drivers/video/console/fbcondecor.c
+new file mode 100644
+index 0000000..65cc0d3
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.c
+@@ -0,0 +1,549 @@
++/*
++ * linux/drivers/video/console/fbcondecor.c -- Framebuffer console decorations
++ *
++ * Copyright (C) 2004-2009 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ * Code based upon "Bootsplash" (C) 2001-2003
++ * Volker Poplawski <volker@poplawski.de>,
++ * Stefan Reinauer <stepan@suse.de>,
++ * Steffen Winterfeldt <snwint@suse.de>,
++ * Michael Schroeder <mls@suse.de>,
++ * Ken Wimer <wimer@suse.de>.
++ *
++ * Compat ioctl support by Thorsten Klein <TK@Thorsten-Klein.de>.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file COPYING in the main directory of this archive for
++ * more details.
++ *
++ */
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/string.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/vt_kern.h>
++#include <linux/vmalloc.h>
++#include <linux/unistd.h>
++#include <linux/syscalls.h>
++#include <linux/init.h>
++#include <linux/proc_fs.h>
++#include <linux/workqueue.h>
++#include <linux/kmod.h>
++#include <linux/miscdevice.h>
++#include <linux/device.h>
++#include <linux/fs.h>
++#include <linux/compat.h>
++#include <linux/console.h>
++
++#include <linux/uaccess.h>
++#include <asm/irq.h>
++
++#include "fbcon.h"
++#include "fbcondecor.h"
++
++extern signed char con2fb_map[];
++static int fbcon_decor_enable(struct vc_data *vc);
++
++static int initialized;
++
++char fbcon_decor_path[KMOD_PATH_LEN] = "/sbin/fbcondecor_helper";
++EXPORT_SYMBOL(fbcon_decor_path);
++
++int fbcon_decor_call_helper(char *cmd, unsigned short vc)
++{
++ char *envp[] = {
++ "HOME=/",
++ "PATH=/sbin:/bin",
++ NULL
++ };
++
++ char tfb[5];
++ char tcons[5];
++ unsigned char fb = (int) con2fb_map[vc];
++
++ char *argv[] = {
++ fbcon_decor_path,
++ "2",
++ cmd,
++ tcons,
++ tfb,
++ vc_cons[vc].d->vc_decor.theme,
++ NULL
++ };
++
++ snprintf(tfb, 5, "%d", fb);
++ snprintf(tcons, 5, "%d", vc);
++
++ return call_usermodehelper(fbcon_decor_path, argv, envp, UMH_WAIT_EXEC);
++}
++
++/* Disables fbcondecor on a virtual console; called with console sem held. */
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw)
++{
++ struct fb_info *info;
++
++ if (!vc->vc_decor.state)
++ return -EINVAL;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (info == NULL)
++ return -EINVAL;
++
++ vc->vc_decor.state = 0;
++ vc_resize(vc, info->var.xres / vc->vc_font.width,
++ info->var.yres / vc->vc_font.height);
++
++ if (fg_console == vc->vc_num && redraw) {
++ redraw_screen(vc, 0);
++ update_region(vc, vc->vc_origin +
++ vc->vc_size_row * vc->vc_top,
++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++ }
++
++ printk(KERN_INFO "fbcondecor: switched decor state to 'off' on console %d\n",
++ vc->vc_num);
++
++ return 0;
++}
++
++/* Enables fbcondecor on a virtual console; called with console sem held. */
++static int fbcon_decor_enable(struct vc_data *vc)
++{
++ struct fb_info *info;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (vc->vc_decor.twidth == 0 || vc->vc_decor.theight == 0 ||
++ info == NULL || vc->vc_decor.state || (!info->bgdecor.data &&
++ vc->vc_num == fg_console))
++ return -EINVAL;
++
++ vc->vc_decor.state = 1;
++ vc_resize(vc, vc->vc_decor.twidth / vc->vc_font.width,
++ vc->vc_decor.theight / vc->vc_font.height);
++
++ if (fg_console == vc->vc_num) {
++ redraw_screen(vc, 0);
++ update_region(vc, vc->vc_origin +
++ vc->vc_size_row * vc->vc_top,
++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++ fbcon_decor_clear_margins(vc, info, 0);
++ }
++
++ printk(KERN_INFO "fbcondecor: switched decor state to 'on' on console %d\n",
++ vc->vc_num);
++
++ return 0;
++}
++
++static inline int fbcon_decor_ioctl_dosetstate(struct vc_data *vc, unsigned int state, unsigned char origin)
++{
++ int ret;
++
++ console_lock();
++ if (!state)
++ ret = fbcon_decor_disable(vc, 1);
++ else
++ ret = fbcon_decor_enable(vc);
++ console_unlock();
++
++ return ret;
++}
++
++static inline void fbcon_decor_ioctl_dogetstate(struct vc_data *vc, unsigned int *state)
++{
++ *state = vc->vc_decor.state;
++}
++
++static int fbcon_decor_ioctl_dosetcfg(struct vc_data *vc, struct vc_decor *cfg, unsigned char origin)
++{
++ struct fb_info *info;
++ int len;
++ char *tmp;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (info == NULL || !cfg->twidth || !cfg->theight ||
++ cfg->tx + cfg->twidth > info->var.xres ||
++ cfg->ty + cfg->theight > info->var.yres)
++ return -EINVAL;
++
++ len = strlen_user(cfg->theme);
++ if (!len || len > FBCON_DECOR_THEME_LEN)
++ return -EINVAL;
++ tmp = kmalloc(len, GFP_KERNEL);
++ if (!tmp)
++ return -ENOMEM;
++ if (copy_from_user(tmp, (void __user *)cfg->theme, len))
++ return -EFAULT;
++ cfg->theme = tmp;
++ cfg->state = 0;
++
++ console_lock();
++ if (vc->vc_decor.state)
++ fbcon_decor_disable(vc, 1);
++ kfree(vc->vc_decor.theme);
++ vc->vc_decor = *cfg;
++ console_unlock();
++
++ printk(KERN_INFO "fbcondecor: console %d using theme '%s'\n",
++ vc->vc_num, vc->vc_decor.theme);
++ return 0;
++}
++
++static int fbcon_decor_ioctl_dogetcfg(struct vc_data *vc,
++ struct vc_decor *decor)
++{
++ char __user *tmp;
++
++ tmp = decor->theme;
++ *decor = vc->vc_decor;
++ decor->theme = tmp;
++
++ if (vc->vc_decor.theme) {
++ if (copy_to_user(tmp, vc->vc_decor.theme,
++ strlen(vc->vc_decor.theme) + 1))
++ return -EFAULT;
++ } else
++ if (put_user(0, tmp))
++ return -EFAULT;
++
++ return 0;
++}
++
++static int fbcon_decor_ioctl_dosetpic(struct vc_data *vc, struct fb_image *img,
++ unsigned char origin)
++{
++ struct fb_info *info;
++ int len;
++ u8 *tmp;
++
++ if (vc->vc_num != fg_console)
++ return -EINVAL;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (info == NULL)
++ return -EINVAL;
++
++ if (img->width != info->var.xres || img->height != info->var.yres) {
++ printk(KERN_ERR "fbcondecor: picture dimensions mismatch\n");
++ printk(KERN_ERR "%dx%d vs %dx%d\n", img->width, img->height,
++ info->var.xres, info->var.yres);
++ return -EINVAL;
++ }
++
++ if (img->depth != info->var.bits_per_pixel) {
++ printk(KERN_ERR "fbcondecor: picture depth mismatch\n");
++ return -EINVAL;
++ }
++
++ if (img->depth == 8) {
++ if (!img->cmap.len || !img->cmap.red || !img->cmap.green ||
++ !img->cmap.blue)
++ return -EINVAL;
++
++ tmp = vmalloc(img->cmap.len * 3 * 2);
++ if (!tmp)
++ return -ENOMEM;
++
++ if (copy_from_user(tmp,
++ (void __user *)img->cmap.red,
++ (img->cmap.len << 1)) ||
++ copy_from_user(tmp + (img->cmap.len << 1),
++ (void __user *)img->cmap.green,
++ (img->cmap.len << 1)) ||
++ copy_from_user(tmp + (img->cmap.len << 2),
++ (void __user *)img->cmap.blue,
++ (img->cmap.len << 1))) {
++ vfree(tmp);
++ return -EFAULT;
++ }
++
++ img->cmap.transp = NULL;
++ img->cmap.red = (u16 *)tmp;
++ img->cmap.green = img->cmap.red + img->cmap.len;
++ img->cmap.blue = img->cmap.green + img->cmap.len;
++ } else {
++ img->cmap.red = NULL;
++ }
++
++ len = ((img->depth + 7) >> 3) * img->width * img->height;
++
++ /*
++ * Allocate an additional byte so that we never go outside of the
++ * buffer boundaries in the rendering functions in a 24 bpp mode.
++ */
++ tmp = vmalloc(len + 1);
++
++ if (!tmp)
++ goto out;
++
++ if (copy_from_user(tmp, (void __user *)img->data, len))
++ goto out;
++
++ img->data = tmp;
++
++ console_lock();
++
++ if (info->bgdecor.data)
++ vfree((u8 *)info->bgdecor.data);
++ if (info->bgdecor.cmap.red)
++ vfree(info->bgdecor.cmap.red);
++
++ info->bgdecor = *img;
++
++ if (fbcon_decor_active_vc(vc) && fg_console == vc->vc_num) {
++ redraw_screen(vc, 0);
++ update_region(vc, vc->vc_origin +
++ vc->vc_size_row * vc->vc_top,
++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++ fbcon_decor_clear_margins(vc, info, 0);
++ }
++
++ console_unlock();
++
++ return 0;
++
++out:
++ if (img->cmap.red)
++ vfree(img->cmap.red);
++
++ if (tmp)
++ vfree(tmp);
++ return -ENOMEM;
++}
++
++static long fbcon_decor_ioctl(struct file *filp, u_int cmd, u_long arg)
++{
++ struct fbcon_decor_iowrapper __user *wrapper = (void __user *) arg;
++ struct vc_data *vc = NULL;
++ unsigned short vc_num = 0;
++ unsigned char origin = 0;
++ void __user *data = NULL;
++
++ if (!access_ok(VERIFY_READ, wrapper,
++ sizeof(struct fbcon_decor_iowrapper)))
++ return -EFAULT;
++
++ __get_user(vc_num, &wrapper->vc);
++ __get_user(origin, &wrapper->origin);
++ __get_user(data, &wrapper->data);
++
++ if (!vc_cons_allocated(vc_num))
++ return -EINVAL;
++
++ vc = vc_cons[vc_num].d;
++
++ switch (cmd) {
++ case FBIOCONDECOR_SETPIC:
++ {
++ struct fb_image img;
++
++ if (copy_from_user(&img, (struct fb_image __user *)data, sizeof(struct fb_image)))
++ return -EFAULT;
++
++ return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++ }
++ case FBIOCONDECOR_SETCFG:
++ {
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++ return -EFAULT;
++
++ return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++ }
++ case FBIOCONDECOR_GETCFG:
++ {
++ int rval;
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++ return -EFAULT;
++
++ rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++ if (copy_to_user(data, &cfg, sizeof(struct vc_decor)))
++ return -EFAULT;
++ return rval;
++ }
++ case FBIOCONDECOR_SETSTATE:
++ {
++ unsigned int state = 0;
++
++ if (get_user(state, (unsigned int __user *)data))
++ return -EFAULT;
++ return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++ }
++ case FBIOCONDECOR_GETSTATE:
++ {
++ unsigned int state = 0;
++
++ fbcon_decor_ioctl_dogetstate(vc, &state);
++ return put_user(state, (unsigned int __user *)data);
++ }
++
++ default:
++ return -ENOIOCTLCMD;
++ }
++}
++
++#ifdef CONFIG_COMPAT
++
++static long fbcon_decor_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
++{
++ struct fbcon_decor_iowrapper32 __user *wrapper = (void __user *)arg;
++ struct vc_data *vc = NULL;
++ unsigned short vc_num = 0;
++ unsigned char origin = 0;
++ compat_uptr_t data_compat = 0;
++ void __user *data = NULL;
++
++ if (!access_ok(VERIFY_READ, wrapper,
++ sizeof(struct fbcon_decor_iowrapper32)))
++ return -EFAULT;
++
++ __get_user(vc_num, &wrapper->vc);
++ __get_user(origin, &wrapper->origin);
++ __get_user(data_compat, &wrapper->data);
++ data = compat_ptr(data_compat);
++
++ if (!vc_cons_allocated(vc_num))
++ return -EINVAL;
++
++ vc = vc_cons[vc_num].d;
++
++ switch (cmd) {
++ case FBIOCONDECOR_SETPIC32:
++ {
++ struct fb_image32 img_compat;
++ struct fb_image img;
++
++ if (copy_from_user(&img_compat, (struct fb_image32 __user *)data, sizeof(struct fb_image32)))
++ return -EFAULT;
++
++ fb_image_from_compat(img, img_compat);
++
++ return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++ }
++
++ case FBIOCONDECOR_SETCFG32:
++ {
++ struct vc_decor32 cfg_compat;
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++ return -EFAULT;
++
++ vc_decor_from_compat(cfg, cfg_compat);
++
++ return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++ }
++
++ case FBIOCONDECOR_GETCFG32:
++ {
++ int rval;
++ struct vc_decor32 cfg_compat;
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++ return -EFAULT;
++ cfg.theme = compat_ptr(cfg_compat.theme);
++
++ rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++ vc_decor_to_compat(cfg_compat, cfg);
++
++ if (copy_to_user((struct vc_decor32 __user *)data, &cfg_compat, sizeof(struct vc_decor32)))
++ return -EFAULT;
++ return rval;
++ }
++
++ case FBIOCONDECOR_SETSTATE32:
++ {
++ compat_uint_t state_compat = 0;
++ unsigned int state = 0;
++
++ if (get_user(state_compat, (compat_uint_t __user *)data))
++ return -EFAULT;
++
++ state = (unsigned int)state_compat;
++
++ return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++ }
++
++ case FBIOCONDECOR_GETSTATE32:
++ {
++ compat_uint_t state_compat = 0;
++ unsigned int state = 0;
++
++ fbcon_decor_ioctl_dogetstate(vc, &state);
++ state_compat = (compat_uint_t)state;
++
++ return put_user(state_compat, (compat_uint_t __user *)data);
++ }
++
++ default:
++ return -ENOIOCTLCMD;
++ }
++}
++#else
++ #define fbcon_decor_compat_ioctl NULL
++#endif
++
++static struct file_operations fbcon_decor_ops = {
++ .owner = THIS_MODULE,
++ .unlocked_ioctl = fbcon_decor_ioctl,
++ .compat_ioctl = fbcon_decor_compat_ioctl
++};
++
++static struct miscdevice fbcon_decor_dev = {
++ .minor = MISC_DYNAMIC_MINOR,
++ .name = "fbcondecor",
++ .fops = &fbcon_decor_ops
++};
++
++void fbcon_decor_reset(void)
++{
++ int i;
++
++ for (i = 0; i < num_registered_fb; i++) {
++ registered_fb[i]->bgdecor.data = NULL;
++ registered_fb[i]->bgdecor.cmap.red = NULL;
++ }
++
++ for (i = 0; i < MAX_NR_CONSOLES && vc_cons[i].d; i++) {
++ vc_cons[i].d->vc_decor.state = vc_cons[i].d->vc_decor.twidth =
++ vc_cons[i].d->vc_decor.theight = 0;
++ vc_cons[i].d->vc_decor.theme = NULL;
++ }
++}
++
++int fbcon_decor_init(void)
++{
++ int i;
++
++ fbcon_decor_reset();
++
++ if (initialized)
++ return 0;
++
++ i = misc_register(&fbcon_decor_dev);
++ if (i) {
++ printk(KERN_ERR "fbcondecor: failed to register device\n");
++ return i;
++ }
++
++ fbcon_decor_call_helper("init", 0);
++ initialized = 1;
++ return 0;
++}
++
++int fbcon_decor_exit(void)
++{
++ fbcon_decor_reset();
++ return 0;
++}
+diff --git a/drivers/video/console/fbcondecor.h b/drivers/video/console/fbcondecor.h
+new file mode 100644
+index 0000000..c49386c
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.h
+@@ -0,0 +1,77 @@
++/*
++ * linux/drivers/video/console/fbcondecor.h -- Framebuffer Console Decoration headers
++ *
++ * Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ */
++
++#ifndef __FBCON_DECOR_H
++#define __FBCON_DECOR_H
++
++#ifndef _LINUX_FB_H
++#include <linux/fb.h>
++#endif
++
++/* This is needed for vc_cons in fbcmap.c */
++#include <linux/vt_kern.h>
++
++struct fb_cursor;
++struct fb_info;
++struct vc_data;
++
++#ifdef CONFIG_FB_CON_DECOR
++/* fbcondecor.c */
++int fbcon_decor_init(void);
++int fbcon_decor_exit(void);
++int fbcon_decor_call_helper(char *cmd, unsigned short cons);
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw);
++
++/* cfbcondecor.c */
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx);
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor);
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width);
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only);
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank);
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width);
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes, int srclinesbytes, int bpp);
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc);
++
++/* vt.c */
++void acquire_console_sem(void);
++void release_console_sem(void);
++void do_unblank_screen(int entering_gfx);
++
++/* struct vc_data *y */
++#define fbcon_decor_active_vc(y) (y->vc_decor.state && y->vc_decor.theme)
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active_nores(x, y) (x->bgdecor.data && fbcon_decor_active_vc(y))
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active(x, y) (fbcon_decor_active_nores(x, y) && \
++ x->bgdecor.width == x->var.xres && \
++ x->bgdecor.height == x->var.yres && \
++ x->bgdecor.depth == x->var.bits_per_pixel)
++
++#else /* CONFIG_FB_CON_DECOR */
++
++static inline void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx) {}
++static inline void fbcon_decor_putc(struct vc_data *vc, struct fb_info *info, int c, int ypos, int xpos) {}
++static inline void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor) {}
++static inline void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width) {}
++static inline void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only) {}
++static inline void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank) {}
++static inline void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width) {}
++static inline void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc) {}
++static inline int fbcon_decor_call_helper(char *cmd, unsigned short cons) { return 0; }
++static inline int fbcon_decor_init(void) { return 0; }
++static inline int fbcon_decor_exit(void) { return 0; }
++static inline int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw) { return 0; }
++
++#define fbcon_decor_active_vc(y) (0)
++#define fbcon_decor_active_nores(x, y) (0)
++#define fbcon_decor_active(x, y) (0)
++
++#endif /* CONFIG_FB_CON_DECOR */
++
++#endif /* __FBCON_DECOR_H */
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index 88b008f..c84113d 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -1216,7 +1216,6 @@ config FB_MATROX
+ select FB_CFB_FILLRECT
+ select FB_CFB_COPYAREA
+ select FB_CFB_IMAGEBLIT
+- select FB_TILEBLITTING
+ select FB_MACMODES if PPC_PMAC
+ ---help---
+ Say Y here if you have a Matrox Millennium, Matrox Millennium II,
+diff --git a/drivers/video/fbdev/core/fbcmap.c b/drivers/video/fbdev/core/fbcmap.c
+index f89245b..c2c12ce 100644
+--- a/drivers/video/fbdev/core/fbcmap.c
++++ b/drivers/video/fbdev/core/fbcmap.c
+@@ -17,6 +17,8 @@
+ #include <linux/slab.h>
+ #include <linux/uaccess.h>
+
++#include "../../console/fbcondecor.h"
++
+ static u16 red2[] __read_mostly = {
+ 0x0000, 0xaaaa
+ };
+@@ -254,9 +256,12 @@ int fb_set_cmap(struct fb_cmap *cmap, struct fb_info *info)
+ break;
+ }
+ }
+- if (rc == 0)
++ if (rc == 0) {
+ fb_copy_cmap(cmap, &info->cmap);
+-
++ if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++ info->fix.visual == FB_VISUAL_DIRECTCOLOR)
++ fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++ }
+ return rc;
+ }
+
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index 76c1ad9..fafc0af 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1251,15 +1251,6 @@ struct fb_fix_screeninfo32 {
+ u16 reserved[3];
+ };
+
+-struct fb_cmap32 {
+- u32 start;
+- u32 len;
+- compat_caddr_t red;
+- compat_caddr_t green;
+- compat_caddr_t blue;
+- compat_caddr_t transp;
+-};
+-
+ static int fb_getput_cmap(struct fb_info *info, unsigned int cmd,
+ unsigned long arg)
+ {
+diff --git a/include/linux/console_decor.h b/include/linux/console_decor.h
+new file mode 100644
+index 0000000..1514355
+--- /dev/null
++++ b/include/linux/console_decor.h
+@@ -0,0 +1,46 @@
++#ifndef _LINUX_CONSOLE_DECOR_H_
++#define _LINUX_CONSOLE_DECOR_H_ 1
++
++/* A structure used by the framebuffer console decorations (drivers/video/console/fbcondecor.c) */
++struct vc_decor {
++ __u8 bg_color; /* The color that is to be treated as transparent */
++ __u8 state; /* Current decor state: 0 = off, 1 = on */
++ __u16 tx, ty; /* Top left corner coordinates of the text field */
++ __u16 twidth, theight; /* Width and height of the text field */
++ char *theme;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++
++struct vc_decor32 {
++ __u8 bg_color; /* The color that is to be treated as transparent */
++ __u8 state; /* Current decor state: 0 = off, 1 = on */
++ __u16 tx, ty; /* Top left corner coordinates of the text field */
++ __u16 twidth, theight; /* Width and height of the text field */
++ compat_uptr_t theme;
++};
++
++#define vc_decor_from_compat(to, from) \
++ (to).bg_color = (from).bg_color; \
++ (to).state = (from).state; \
++ (to).tx = (from).tx; \
++ (to).ty = (from).ty; \
++ (to).twidth = (from).twidth; \
++ (to).theight = (from).theight; \
++ (to).theme = compat_ptr((from).theme)
++
++#define vc_decor_to_compat(to, from) \
++ (to).bg_color = (from).bg_color; \
++ (to).state = (from).state; \
++ (to).tx = (from).tx; \
++ (to).ty = (from).ty; \
++ (to).twidth = (from).twidth; \
++ (to).theight = (from).theight; \
++ (to).theme = ptr_to_compat((from).theme)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#endif
+diff --git a/include/linux/console_struct.h b/include/linux/console_struct.h
+index 6fd3c90..c649555 100644
+--- a/include/linux/console_struct.h
++++ b/include/linux/console_struct.h
+@@ -20,6 +20,7 @@ struct vt_struct;
+ struct uni_pagedir;
+
+ #define NPAR 16
++#include <linux/console_decor.h>
+
+ /*
+ * Example: vc_data of a console that was scrolled 3 lines down.
+@@ -140,6 +141,8 @@ struct vc_data {
+ struct uni_pagedir *vc_uni_pagedir;
+ struct uni_pagedir **vc_uni_pagedir_loc; /* [!] Location of uni_pagedir variable for this console */
+ bool vc_panic_force_write; /* when oops/panic this VC can accept forced output/blanking */
++
++ struct vc_decor vc_decor;
+ /* additional information is in vt_kern.h */
+ };
+
+diff --git a/include/linux/fb.h b/include/linux/fb.h
+index a964d07..672cc64 100644
+--- a/include/linux/fb.h
++++ b/include/linux/fb.h
+@@ -238,6 +238,34 @@ struct fb_deferred_io {
+ };
+ #endif
+
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_image32 {
++ __u32 dx; /* Where to place image */
++ __u32 dy;
++ __u32 width; /* Size of image */
++ __u32 height;
++ __u32 fg_color; /* Only used when a mono bitmap */
++ __u32 bg_color;
++ __u8 depth; /* Depth of the image */
++ const compat_uptr_t data; /* Pointer to image data */
++ struct fb_cmap32 cmap; /* color map info */
++};
++
++#define fb_image_from_compat(to, from) \
++ (to).dx = (from).dx; \
++ (to).dy = (from).dy; \
++ (to).width = (from).width; \
++ (to).height = (from).height; \
++ (to).fg_color = (from).fg_color; \
++ (to).bg_color = (from).bg_color; \
++ (to).depth = (from).depth; \
++ (to).data = compat_ptr((from).data); \
++ fb_cmap_from_compat((to).cmap, (from).cmap)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /*
+ * Frame buffer operations
+ *
+@@ -508,6 +536,9 @@ struct fb_info {
+ #define FBINFO_STATE_SUSPENDED 1
+ u32 state; /* Hardware state i.e suspend */
+ void *fbcon_par; /* fbcon use-only private area */
++
++ struct fb_image bgdecor;
++
+ /* From here on everything is device dependent */
+ void *par;
+ /* we need the PCI or similar aperture base/size not
+diff --git a/include/uapi/linux/fb.h b/include/uapi/linux/fb.h
+index fb795c3..4b57c67 100644
+--- a/include/uapi/linux/fb.h
++++ b/include/uapi/linux/fb.h
+@@ -8,6 +8,23 @@
+
+ #define FB_MAX 32 /* sufficient for now */
+
++struct fbcon_decor_iowrapper {
++ unsigned short vc; /* Virtual console */
++ unsigned char origin; /* Point of origin of the request */
++ void *data;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++struct fbcon_decor_iowrapper32 {
++ unsigned short vc; /* Virtual console */
++ unsigned char origin; /* Point of origin of the request */
++ compat_uptr_t data;
++};
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /* ioctls
+ 0x46 is 'F' */
+ #define FBIOGET_VSCREENINFO 0x4600
+@@ -35,6 +52,25 @@
+ #define FBIOGET_DISPINFO 0x4618
+ #define FBIO_WAITFORVSYNC _IOW('F', 0x20, __u32)
+
++#define FBIOCONDECOR_SETCFG _IOWR('F', 0x19, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETCFG _IOR('F', 0x1A, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETSTATE _IOWR('F', 0x1B, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETSTATE _IOR('F', 0x1C, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETPIC _IOWR('F', 0x1D, struct fbcon_decor_iowrapper)
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#define FBIOCONDECOR_SETCFG32 _IOWR('F', 0x19, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETCFG32 _IOR('F', 0x1A, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETSTATE32 _IOWR('F', 0x1B, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETSTATE32 _IOR('F', 0x1C, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETPIC32 _IOWR('F', 0x1D, struct fbcon_decor_iowrapper32)
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#define FBCON_DECOR_THEME_LEN 128 /* Maximum length of a theme name */
++#define FBCON_DECOR_IO_ORIG_KERNEL 0 /* Kernel ioctl origin */
++#define FBCON_DECOR_IO_ORIG_USER 1 /* User ioctl origin */
++
+ #define FB_TYPE_PACKED_PIXELS 0 /* Packed Pixels */
+ #define FB_TYPE_PLANES 1 /* Non interleaved planes */
+ #define FB_TYPE_INTERLEAVED_PLANES 2 /* Interleaved planes */
+@@ -277,6 +313,29 @@ struct fb_var_screeninfo {
+ __u32 reserved[4]; /* Reserved for future compatibility */
+ };
+
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_cmap32 {
++ __u32 start;
++ __u32 len; /* Number of entries */
++ compat_uptr_t red; /* Red values */
++ compat_uptr_t green;
++ compat_uptr_t blue;
++ compat_uptr_t transp; /* transparency, can be NULL */
++};
++
++#define fb_cmap_from_compat(to, from) \
++ (to).start = (from).start; \
++ (to).len = (from).len; \
++ (to).red = compat_ptr((from).red); \
++ (to).green = compat_ptr((from).green); \
++ (to).blue = compat_ptr((from).blue); \
++ (to).transp = compat_ptr((from).transp)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++
+ struct fb_cmap {
+ __u32 start; /* First entry */
+ __u32 len; /* Number of entries */
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 6ee416e..d2c2425 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -149,6 +149,10 @@ static const int cap_last_cap = CAP_LAST_CAP;
+ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #endif
+
++#ifdef CONFIG_FB_CON_DECOR
++extern char fbcon_decor_path[];
++#endif
++
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
+@@ -266,6 +270,15 @@ static struct ctl_table sysctl_base_table[] = {
+ .mode = 0555,
+ .child = dev_table,
+ },
++#ifdef CONFIG_FB_CON_DECOR
++ {
++ .procname = "fbcondecor",
++ .data = &fbcon_decor_path,
++ .maxlen = KMOD_PATH_LEN,
++ .mode = 0644,
++ .proc_handler = &proc_dostring,
++ },
++#endif
+ { }
+ };
+
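For orientation, the wrapper structure and ioctl numbers added by this fbcondecor patch are what a userspace tool would use to talk to the decoration layer. The following is a minimal sketch, assuming the patched uapi <linux/fb.h> is installed and that udev exposes the misc device registered in fbcon_decor_init() as /dev/fbcondecor (the node path is an assumption, not something the patch itself guarantees):

/* Hedged sketch: query the decor state of virtual console 0.
 * Assumes the patched <linux/fb.h> (struct fbcon_decor_iowrapper,
 * FBIOCONDECOR_GETSTATE, FBCON_DECOR_IO_ORIG_USER) and a /dev/fbcondecor
 * node for the misc device registered by fbcon_decor_init().
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fb.h>

int main(void)
{
	unsigned int state = 0;
	struct fbcon_decor_iowrapper wrapper = {
		.vc     = 0,                        /* virtual console number */
		.origin = FBCON_DECOR_IO_ORIG_USER, /* request comes from userspace */
		.data   = &state,                   /* kernel writes the state here */
	};
	int fd = open("/dev/fbcondecor", O_RDWR);

	if (fd < 0) {
		perror("open /dev/fbcondecor");
		return 1;
	}
	if (ioctl(fd, FBIOCONDECOR_GETSTATE, &wrapper) < 0) {
		perror("FBIOCONDECOR_GETSTATE");
		close(fd);
		return 1;
	}
	printf("decor state on vc0: %u (0 = off, 1 = on)\n", state);
	close(fd);
	return 0;
}

A 32-bit program on a 64-bit kernel takes the FBIOCONDECOR_*32 path shown in the compat handler instead, with the kernel converting the embedded pointer via compat_ptr().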
diff --git a/4400_alpha-sysctl-uac.patch b/4400_alpha-sysctl-uac.patch
new file mode 100644
index 0000000..d42b4ed
--- /dev/null
+++ b/4400_alpha-sysctl-uac.patch
@@ -0,0 +1,142 @@
+diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
+index 7f312d8..1eb686b 100644
+--- a/arch/alpha/Kconfig
++++ b/arch/alpha/Kconfig
+@@ -697,6 +697,33 @@ config HZ
+ default 1200 if HZ_1200
+ default 1024
+
++config ALPHA_UAC_SYSCTL
++ bool "Configure UAC policy via sysctl"
++ depends on SYSCTL
++ default y
++ ---help---
++ Configuring the UAC (unaligned access control) policy on a Linux
++ system usually involves setting a compile time define. If you say
++ Y here, you will be able to modify the UAC policy at runtime using
++ the /proc interface.
++
++ The UAC policy defines the action Linux should take when an
++ unaligned memory access occurs. The action can include printing a
++ warning message (NOPRINT), sending a signal to the offending
++ program to help developers debug their applications (SIGBUS), or
++ disabling the transparent fixing (NOFIX).
++
++ The sysctls will be initialized to the compile-time defined UAC
++ policy. You can change these manually, or with the sysctl(8)
++ userspace utility.
++
++ To disable the warning messages at runtime, you would use
++
++ echo 1 > /proc/sys/kernel/uac/noprint
++
++ This is pretty harmless. Say Y if you're not sure.
++
++
+ source "drivers/pci/Kconfig"
+ source "drivers/eisa/Kconfig"
+
+diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
+index 74aceea..cb35d80 100644
+--- a/arch/alpha/kernel/traps.c
++++ b/arch/alpha/kernel/traps.c
+@@ -103,6 +103,49 @@ static char * ireg_name[] = {"v0", "t0", "t1", "t2", "t3", "t4", "t5", "t6",
+ "t10", "t11", "ra", "pv", "at", "gp", "sp", "zero"};
+ #endif
+
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++
++#include <linux/sysctl.h>
++
++static int enabled_noprint = 0;
++static int enabled_sigbus = 0;
++static int enabled_nofix = 0;
++
++struct ctl_table uac_table[] = {
++ {
++ .procname = "noprint",
++ .data = &enabled_noprint,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec,
++ },
++ {
++ .procname = "sigbus",
++ .data = &enabled_sigbus,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec,
++ },
++ {
++ .procname = "nofix",
++ .data = &enabled_nofix,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec,
++ },
++ { }
++};
++
++static int __init init_uac_sysctl(void)
++{
++ /* Initialize sysctls with the #defined UAC policy */
++ enabled_noprint = (test_thread_flag (TS_UAC_NOPRINT)) ? 1 : 0;
++ enabled_sigbus = (test_thread_flag (TS_UAC_SIGBUS)) ? 1 : 0;
++ enabled_nofix = (test_thread_flag (TS_UAC_NOFIX)) ? 1 : 0;
++ return 0;
++}
++#endif
++
+ static void
+ dik_show_code(unsigned int *pc)
+ {
+@@ -785,7 +828,12 @@ do_entUnaUser(void __user * va, unsigned long opcode,
+ /* Check the UAC bits to decide what the user wants us to do
+ with the unaliged access. */
+
++#ifndef CONFIG_ALPHA_UAC_SYSCTL
+ if (!(current_thread_info()->status & TS_UAC_NOPRINT)) {
++#else /* CONFIG_ALPHA_UAC_SYSCTL */
++ if (!(current_thread_info()->status & TS_UAC_NOPRINT) &&
++ !(enabled_noprint)) {
++#endif /* CONFIG_ALPHA_UAC_SYSCTL */
+ if (__ratelimit(&ratelimit)) {
+ printk("%s(%d): unaligned trap at %016lx: %p %lx %ld\n",
+ current->comm, task_pid_nr(current),
+@@ -1090,3 +1138,6 @@ trap_init(void)
+ wrent(entSys, 5);
+ wrent(entDbg, 6);
+ }
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++ __initcall(init_uac_sysctl);
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 87b2fc3..55021a8 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -152,6 +152,11 @@ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
++
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++extern struct ctl_table uac_table[];
++#endif
++
+ #ifdef CONFIG_SPARC
+ #endif
+
+@@ -1844,6 +1849,13 @@ static struct ctl_table debug_table[] = {
+ .extra2 = &one,
+ },
+ #endif
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++ {
++ .procname = "uac",
++ .mode = 0555,
++ .child = uac_table,
++ },
++#endif /* CONFIG_ALPHA_UAC_SYSCTL */
+ { }
+ };
+
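As a companion to the echo example in the Kconfig help above, here is a minimal C sketch that flips the same knob programmatically. It assumes an Alpha kernel built with CONFIG_ALPHA_UAC_SYSCTL=y (so /proc/sys/kernel/uac/noprint exists) and root privileges; on any other configuration the open simply fails.

/* Hedged sketch: suppress unaligned-access warnings at runtime by writing
 * 1 to /proc/sys/kernel/uac/noprint, mirroring the "echo 1 > ..." example
 * from the Kconfig help text. Purely illustrative outside an Alpha kernel
 * with ALPHA_UAC_SYSCTL enabled.
 */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/kernel/uac/noprint", "w");

	if (!f) {
		perror("open /proc/sys/kernel/uac/noprint");
		return 1;
	}
	if (fputs("1\n", f) == EOF) {
		perror("write");
		fclose(f);
		return 1;
	}
	fclose(f);
	return 0;
}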
diff --git a/5010_enable-additional-cpu-optimizations-for-gcc.patch b/5010_enable-additional-cpu-optimizations-for-gcc.patch
new file mode 100644
index 0000000..d9729b2
--- /dev/null
+++ b/5010_enable-additional-cpu-optimizations-for-gcc.patch
@@ -0,0 +1,426 @@
+WARNING - this version of the patch works with version 4.9+ of gcc and with
+kernel version 3.15.x+ and should NOT be applied when compiling on older
+versions due to name changes of the flags with the 4.9 release of gcc.
+Use the older version of this patch hosted on the same github for older
+versions of gcc. For example:
+
+corei7 --> nehalem
+corei7-avx --> sandybridge
+core-avx-i --> ivybridge
+core-avx2 --> haswell
+
+For more, see: https://gcc.gnu.org/gcc-4.9/changes.html
+
+It also changes 'atom' to 'bonnell' in accordance with the gcc v4.9 changes.
+Note that upstream is using the deprecated 'march=atom' flag where I believe it
+should use the newer 'march=bonnell' flag for Atom processors.
+
+I have made that change to this patch set as well. See the following kernel
+bug report to see if I'm right: https://bugzilla.kernel.org/show_bug.cgi?id=77461
+
+This patch will expand the number of microarchitectures to include newer
+processors including: AMD K10-family, AMD Family 10h (Barcelona), AMD Family
+14h (Bobcat), AMD Family 15h (Bulldozer), AMD Family 15h (Piledriver), AMD
+Family 15h (Steamroller), Family 16h (Jaguar), Intel 1st Gen Core i3/i5/i7
+(Nehalem), Intel 1.5 Gen Core i3/i5/i7 (Westmere), Intel 2nd Gen Core i3/i5/i7
+(Sandybridge), Intel 3rd Gen Core i3/i5/i7 (Ivybridge), Intel 4th Gen Core
+i3/i5/i7 (Haswell), Intel 5th Gen Core i3/i5/i7 (Broadwell), and the low power
+Silvermont series of Atom processors (Silvermont). It also offers the compiler
+the 'native' flag.
+
+Small but real speed increases are measurable using a make endpoint comparing
+a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=3.15
+gcc version >=4.9
+
+--- a/arch/x86/include/asm/module.h 2015-08-30 14:34:09.000000000 -0400
++++ b/arch/x86/include/asm/module.h 2015-11-06 14:18:24.234941036 -0500
+@@ -15,6 +15,24 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -33,6 +51,22 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu 2015-08-30 14:34:09.000000000 -0400
++++ b/arch/x86/Kconfig.cpu 2015-11-06 14:20:14.948369244 -0500
+@@ -137,9 +137,8 @@ config MPENTIUM4
+ -Paxville
+ -Dempsey
+
+-
+ config MK6
+- bool "K6/K6-II/K6-III"
++ bool "AMD K6/K6-II/K6-III"
+ depends on X86_32
+ ---help---
+ Select this for an AMD K6-family processor. Enables use of
+@@ -147,7 +146,7 @@ config MK6
+ flags to GCC.
+
+ config MK7
+- bool "Athlon/Duron/K7"
++ bool "AMD Athlon/Duron/K7"
+ depends on X86_32
+ ---help---
+ Select this for an AMD Athlon K7-family processor. Enables use of
+@@ -155,12 +154,69 @@ config MK7
+ flags to GCC.
+
+ config MK8
+- bool "Opteron/Athlon64/Hammer/K8"
++ bool "AMD Opteron/Athlon64/Hammer/K8"
+ ---help---
+ Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ Enables use of some extended instructions, and passes appropriate
+ optimization flags to GCC.
+
++config MK8SSE3
++ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++ ---help---
++ Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MK10
++ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++ ---help---
++ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MBARCELONA
++ bool "AMD Barcelona"
++ ---help---
++ Select this for AMD Barcelona and newer processors.
++
++ Enables -march=barcelona
++
++config MBOBCAT
++ bool "AMD Bobcat"
++ ---help---
++ Select this for AMD Bobcat processors.
++
++ Enables -march=btver1
++
++config MBULLDOZER
++ bool "AMD Bulldozer"
++ ---help---
++ Select this for AMD Bulldozer processors.
++
++ Enables -march=bdver1
++
++config MPILEDRIVER
++ bool "AMD Piledriver"
++ ---help---
++ Select this for AMD Piledriver processors.
++
++ Enables -march=bdver2
++
++config MSTEAMROLLER
++ bool "AMD Steamroller"
++ ---help---
++ Select this for AMD Steamroller processors.
++
++ Enables -march=bdver3
++
++config MJAGUAR
++ bool "AMD Jaguar"
++ ---help---
++ Select this for AMD Jaguar processors.
++
++ Enables -march=btver2
++
+ config MCRUSOE
+ bool "Crusoe"
+ depends on X86_32
+@@ -251,8 +307,17 @@ config MPSC
+ using the cpu family field
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
++config MATOM
++ bool "Intel Atom"
++ ---help---
++
++ Select this for the Intel Atom platform. Intel Atom CPUs have an
++ in-order pipelining architecture and thus can benefit from
++ accordingly optimized code. Use a recent GCC with specific Atom
++ support in order to fully benefit from selecting this option.
++
+ config MCORE2
+- bool "Core 2/newer Xeon"
++ bool "Intel Core 2"
+ ---help---
+
+ Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -260,14 +325,71 @@ config MCORE2
+ family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ (not a typo)
+
+-config MATOM
+- bool "Intel Atom"
++ Enables -march=core2
++
++config MNEHALEM
++ bool "Intel Nehalem"
+ ---help---
+
+- Select this for the Intel Atom platform. Intel Atom CPUs have an
+- in-order pipelining architecture and thus can benefit from
+- accordingly optimized code. Use a recent GCC with specific Atom
+- support in order to fully benefit from selecting this option.
++ Select this for 1st Gen Core processors in the Nehalem family.
++
++ Enables -march=nehalem
++
++config MWESTMERE
++ bool "Intel Westmere"
++ ---help---
++
++ Select this for the Intel Westmere formerly Nehalem-C family.
++
++ Enables -march=westmere
++
++config MSILVERMONT
++ bool "Intel Silvermont"
++ ---help---
++
++ Select this for the Intel Silvermont platform.
++
++ Enables -march=silvermont
++
++config MSANDYBRIDGE
++ bool "Intel Sandy Bridge"
++ ---help---
++
++ Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++ Enables -march=sandybridge
++
++config MIVYBRIDGE
++ bool "Intel Ivy Bridge"
++ ---help---
++
++ Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++ Enables -march=ivybridge
++
++config MHASWELL
++ bool "Intel Haswell"
++ ---help---
++
++ Select this for 4th Gen Core processors in the Haswell family.
++
++ Enables -march=haswell
++
++config MBROADWELL
++ bool "Intel Broadwell"
++ ---help---
++
++ Select this for 5th Gen Core processors in the Broadwell family.
++
++ Enables -march=broadwell
++
++config MSKYLAKE
++ bool "Intel Skylake"
++ ---help---
++
++ Select this for 6th Gen Core processors in the Skylake family.
++
++ Enables -march=skylake
+
+ config GENERIC_CPU
+ bool "Generic-x86-64"
+@@ -276,6 +398,19 @@ config GENERIC_CPU
+ Generic x86-64 CPU.
+ Run equally well on all x86-64 CPUs.
+
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++ GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. -march=native
++ also detects and applies additional settings beyond -march specific
++ to your CPU, (eg. -msse4). Unless you have a specific reason not to
++ (e.g. distcc cross-compiling), you should probably be using
++ -march=native rather than anything listed below.
++
++ Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
+@@ -300,7 +435,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ int
+ default "7" if MPENTIUM4 || MPSC
+- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ default "4" if MELAN || M486 || MGEODEGX1
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -331,11 +466,11 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ def_bool y
+- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE
+
+ config X86_USE_PPRO_CHECKSUM
+ def_bool y
+- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MATOM || MNATIVE
+
+ config X86_USE_3DNOW
+ def_bool y
+@@ -359,17 +494,17 @@ config X86_P6_NOP
+
+ config X86_TSC
+ def_bool y
+- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM) || X86_64
+
+ config X86_CMPXCHG64
+ def_bool y
+- depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM
++ depends on X86_PAE || X86_64 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM || MNATIVE
+
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+ config X86_CMOV
+ def_bool y
+- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+
+ config X86_MINIMUM_CPU_FAMILY
+ int
+--- a/arch/x86/Makefile 2015-08-30 14:34:09.000000000 -0400
++++ b/arch/x86/Makefile 2015-11-06 14:21:05.708983344 -0500
+@@ -94,13 +94,38 @@ else
+ KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+
+ # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++ cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++ cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++ cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++ cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++ cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
+ cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+
+ cflags-$(CONFIG_MCORE2) += \
+- $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+- cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++ $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++ cflags-$(CONFIG_MNEHALEM) += \
++ $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++ cflags-$(CONFIG_MWESTMERE) += \
++ $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++ cflags-$(CONFIG_MSILVERMONT) += \
++ $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++ cflags-$(CONFIG_MSANDYBRIDGE) += \
++ $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++ cflags-$(CONFIG_MIVYBRIDGE) += \
++ $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++ cflags-$(CONFIG_MHASWELL) += \
++ $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++ cflags-$(CONFIG_MBROADWELL) += \
++ $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++ cflags-$(CONFIG_MSKYLAKE) += \
++ $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+ KBUILD_CFLAGS += $(cflags-y)
+
+--- a/arch/x86/Makefile_32.cpu 2015-08-30 14:34:09.000000000 -0400
++++ b/arch/x86/Makefile_32.cpu 2015-11-06 14:21:43.604429077 -0500
+@@ -23,7 +23,16 @@ cflags-$(CONFIG_MK6) += -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7) += -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE) += -march=i686 $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
+ cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -32,8 +41,16 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
+ cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7) += -march=i686
+ cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++cflags-$(CONFIG_MNEHALEM) += -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE) += -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT) += -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MSANDYBRIDGE) += -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE) += -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL) += -march=i686 $(call tune,haswell)
++cflags-$(CONFIG_MBROADWELL) += -march=i686 $(call tune,broadwell)
++cflags-$(CONFIG_MSKYLAKE) += -march=i686 $(call tune,skylake)
++cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN) += -march=i486
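To make the MODULE_PROC_FAMILY chain added by this patch easier to follow, here is a standalone illustration (not kernel code) of how the #elif cascade resolves once Kconfig has defined exactly one CONFIG_M* symbol; on x86 the resulting string ends up in the module vermagic. CONFIG_MHASWELL is defined by hand here purely for the demonstration, and the GENERIC fallback is invented for the demo rather than taken from the kernel:

/* Standalone demo of the #elif cascade; compile with any C compiler. */
#include <stdio.h>

#define CONFIG_MHASWELL 1	/* stand-in for the single option Kconfig would set */

#if defined CONFIG_MNATIVE
#define MODULE_PROC_FAMILY "NATIVE "
#elif defined CONFIG_MNEHALEM
#define MODULE_PROC_FAMILY "NEHALEM "
#elif defined CONFIG_MHASWELL
#define MODULE_PROC_FAMILY "HASWELL "
#else
#define MODULE_PROC_FAMILY "GENERIC "	/* invented fallback for this demo only */
#endif

int main(void)
{
	printf("MODULE_PROC_FAMILY = \"%s\"\n", MODULE_PROC_FAMILY);
	return 0;
}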
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-02-14 23:44 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-02-14 23:44 UTC (permalink / raw
To: gentoo-commits
commit: 06250fee98423edf601c429941c354c9ed1112c7
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Feb 14 23:44:12 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Feb 14 23:44:12 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=06250fee
Workaround to enable poweroff on Mac Pro 11. See bug #601964.
0000_README | 4 ++
2300_enable-poweroff-on-Mac-Pro-11.patch | 76 ++++++++++++++++++++++++++++++++
2 files changed, 80 insertions(+)
diff --git a/0000_README b/0000_README
index 646b303..58e3c74 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
+Patch: 2300_enable-poweroff-on-Mac-Pro-11.patch
+From: http://kernel.ubuntu.com/git/ubuntu/ubuntu-xenial.git/patch/drivers/pci/quirks.c?id=5080ff61a438f3dd80b88b423e1a20791d8a774c
+Desc: Workaround to enable poweroff on Mac Pro 11. See bug #601964.
+
Patch: 2900_dev-root-proc-mount-fix.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=438380
Desc: Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.
diff --git a/2300_enable-poweroff-on-Mac-Pro-11.patch b/2300_enable-poweroff-on-Mac-Pro-11.patch
new file mode 100644
index 0000000..063f2a1
--- /dev/null
+++ b/2300_enable-poweroff-on-Mac-Pro-11.patch
@@ -0,0 +1,76 @@
+From 5080ff61a438f3dd80b88b423e1a20791d8a774c Mon Sep 17 00:00:00 2001
+From: Chen Yu <yu.c.chen@intel.com>
+Date: Fri, 19 Aug 2016 10:25:57 -0700
+Subject: UBUNTU: SAUCE: PCI: Workaround to enable poweroff on Mac Pro 11
+
+BugLink: http://bugs.launchpad.net/bugs/1587714
+
+People reported that they can not do a poweroff nor a
+suspend to ram on their Mac Pro 11. After some investigations
+it was found that, once the PCI bridge 0000:00:1c.0 reassigns its
+mm windows to ([mem 0x7fa00000-0x7fbfffff] and
+[mem 0x7fc00000-0x7fdfffff 64bit pref]), the region of ACPI
+io resource 0x1804 becomes unaccessible immediately, where the
+ACPI Sleep register is located, as a result neither poweroff(S5)
+nor suspend to ram(S3) works.
+
+As suggested by Bjorn, further testing shows that, there is an
+unreported device may be (using) conflict with above aperture,
+which brings unpredictable result such as the failure of accessing
+the io port, which blocks the poweroff(S5). Besides if we reassign
+the memory aperture to the other place, the poweroff works again.
+
+As we do not find any resource declared in _CRS which contain above
+memory aperture, and Mac OS does not use this pci bridge neither, we
+choose a simple workaround to clear the hotplug flag(suggested by
+Yinghai Lu), thus do not allocate any resource for this pci bridge,
+and thereby no conflict anymore.
+
+Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=103211
+Cc: Bjorn Helgaas <bhelgaas@google.com>
+Cc: Rafael J. Wysocki <rafael@kernel.org>
+Cc: Lukas Wunner <lukas@wunner.de>
+Signed-off-by: Chen Yu <yu.c.chen@intel.com>
+Reference: https://patchwork.kernel.org/patch/9289777/
+Signed-off-by: Kamal Mostafa <kamal@canonical.com>
+Acked-by: Brad Figg <brad.figg@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
+---
+ drivers/pci/quirks.c | 20 ++++++++++++++++++++
+ 1 file changed, 20 insertions(+)
+
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 48cfaa0..23968b6 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -2750,6 +2750,26 @@ static void quirk_hotplug_bridge(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_HINT, 0x0020, quirk_hotplug_bridge);
+
+ /*
++ * Apple: Avoid programming the memory/io aperture of 00:1c.0
++ *
++ * BIOS does not declare any resource for 00:1c.0, but with
++ * hotplug flag set, thus the OS allocates:
++ * [mem 0x7fa00000 - 0x7fbfffff]
++ * [mem 0x7fc00000-0x7fdfffff 64bit pref]
++ * which is conflict with an unreported device, which
++ * causes unpredictable result such as accessing io port.
++ * So clear the hotplug flag to work around it.
++ */
++static void quirk_apple_mbp_poweroff(struct pci_dev *dev)
++{
++ if (dmi_match(DMI_PRODUCT_NAME, "MacBookPro11,4") ||
++ dmi_match(DMI_PRODUCT_NAME, "MacBookPro11,5"))
++ dev->is_hotplug_bridge = 0;
++}
++
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8c10, quirk_apple_mbp_poweroff);
++
++/*
+ * This is a quirk for the Ricoh MMC controller found as a part of
+ * some mulifunction chips.
+
+--
+cgit v0.11.2
+
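For anyone checking whether their machine is affected, the quirk above keys on the DMI product name. A minimal sketch follows that reads it from the conventional sysfs location, assuming CONFIG_DMIID is enabled so /sys/class/dmi/id/product_name exists:

/* Hedged sketch: read the DMI product name the quirk matches against
 * (MacBookPro11,4 / MacBookPro11,5) from sysfs and report whether the
 * workaround would apply on this machine.
 */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char name[128] = "";
	FILE *f = fopen("/sys/class/dmi/id/product_name", "r");

	if (!f || !fgets(name, sizeof(name), f)) {
		perror("read /sys/class/dmi/id/product_name");
		if (f)
			fclose(f);
		return 1;
	}
	fclose(f);
	name[strcspn(name, "\n")] = '\0';	/* strip trailing newline */

	printf("DMI product name: %s\n", name);
	printf("quirk applies: %s\n",
	       (!strcmp(name, "MacBookPro11,4") ||
		!strcmp(name, "MacBookPro11,5")) ? "yes" : "no");
	return 0;
}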
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-02-20 0:08 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-02-20 0:08 UTC (permalink / raw
To: gentoo-commits
commit: f00ff26959a9aba3ec6fbcc4ad86f8e1a3fc535a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Feb 20 00:08:20 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Feb 20 00:08:20 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f00ff269
For GENTOO_LINUX_INIT_SYSTEMD don't add DMIID for non X86 architectures. See bug #609590.
4567_distro-Gentoo-Kconfig.patch | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index acb0972..4a88040 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -7,8 +7,8 @@
+source "distro/Kconfig"
+
source "arch/$SRCARCH/Kconfig"
---- /dev/null 2016-11-15 00:56:18.320838834 -0500
-+++ b/distro/Kconfig 2016-11-16 06:24:29.457357409 -0500
+--- /dev/null 2017-02-18 04:25:56.900821893 -0500
++++ b/distro/Kconfig 2017-02-18 10:41:16.512328155 -0500
@@ -0,0 +1,142 @@
+menu "Gentoo Linux"
+
@@ -115,7 +115,7 @@
+ select CGROUPS
+ select CHECKPOINT_RESTORE
+ select DEVPTS_MULTIPLE_INSTANCES
-+ select DMIID
++ select DMIID if X86_32 || X86_64 || X86
+ select EPOLL
+ select FANOTIFY
+ select FHANDLE
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-02-27 1:08 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-02-27 1:08 UTC (permalink / raw
To: gentoo-commits
commit: a7614e16e5934261b1d377edad0f36a133adeb96
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Feb 27 01:08:24 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Feb 27 01:08:24 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a7614e16
Linux patch 4.10.1
0000_README | 4 +
1000_linux-4.10.1.patch | 661 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 665 insertions(+)
diff --git a/0000_README b/0000_README
index 58e3c74..decfe62 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1000_linux-4.10.1.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.1
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1000_linux-4.10.1.patch b/1000_linux-4.10.1.patch
new file mode 100644
index 0000000..7b19ec2
--- /dev/null
+++ b/1000_linux-4.10.1.patch
@@ -0,0 +1,661 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index be7c0d9506b1..18eefa860f76 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -1201,6 +1201,10 @@
+ When zero, profiling data is discarded and associated
+ debugfs files are removed at module unload time.
+
++ goldfish [X86] Enable the goldfish android emulator platform.
++ Don't use this when you are not running on the
++ android emulator
++
+ gpt [EFI] Forces disk with valid GPT signature but
+ invalid Protective MBR to be treated as GPT. If the
+ primary GPT is corrupted, it enables the backup/alternate
+diff --git a/Makefile b/Makefile
+index f1e6a02a0c19..09eccff4f569 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/x86/platform/goldfish/goldfish.c b/arch/x86/platform/goldfish/goldfish.c
+index 1693107a518e..0d17c0aafeb1 100644
+--- a/arch/x86/platform/goldfish/goldfish.c
++++ b/arch/x86/platform/goldfish/goldfish.c
+@@ -42,10 +42,22 @@ static struct resource goldfish_pdev_bus_resources[] = {
+ }
+ };
+
++static bool goldfish_enable __initdata;
++
++static int __init goldfish_setup(char *str)
++{
++ goldfish_enable = true;
++ return 0;
++}
++__setup("goldfish", goldfish_setup);
++
+ static int __init goldfish_init(void)
+ {
++ if (!goldfish_enable)
++ return -ENODEV;
++
+ platform_device_register_simple("goldfish_pdev_bus", -1,
+- goldfish_pdev_bus_resources, 2);
++ goldfish_pdev_bus_resources, 2);
+ return 0;
+ }
+ device_initcall(goldfish_init);
+diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c
+index 49015b05f3d1..abdaf203835c 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/usb.c
++++ b/drivers/net/wireless/realtek/rtlwifi/usb.c
+@@ -827,12 +827,30 @@ static void rtl_usb_stop(struct ieee80211_hw *hw)
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
+ struct rtl_hal *rtlhal = rtl_hal(rtl_priv(hw));
+ struct rtl_usb *rtlusb = rtl_usbdev(rtl_usbpriv(hw));
++ struct urb *urb;
+
+ /* should after adapter start and interrupt enable. */
+ set_hal_stop(rtlhal);
+ cancel_work_sync(&rtlpriv->works.fill_h2c_cmd);
+ /* Enable software */
+ SET_USB_STOP(rtlusb);
++
++ /* free pre-allocated URBs from rtl_usb_start() */
++ usb_kill_anchored_urbs(&rtlusb->rx_submitted);
++
++ tasklet_kill(&rtlusb->rx_work_tasklet);
++ cancel_work_sync(&rtlpriv->works.lps_change_work);
++
++ flush_workqueue(rtlpriv->works.rtl_wq);
++
++ skb_queue_purge(&rtlusb->rx_queue);
++
++ while ((urb = usb_get_from_anchor(&rtlusb->rx_cleanup_urbs))) {
++ usb_free_coherent(urb->dev, urb->transfer_buffer_length,
++ urb->transfer_buffer, urb->transfer_dma);
++ usb_free_urb(urb);
++ }
++
+ rtlpriv->cfg->ops->hw_disable(hw);
+ }
+
+diff --git a/drivers/platform/goldfish/pdev_bus.c b/drivers/platform/goldfish/pdev_bus.c
+index 1f52462f4cdd..dd9ea463c2a4 100644
+--- a/drivers/platform/goldfish/pdev_bus.c
++++ b/drivers/platform/goldfish/pdev_bus.c
+@@ -157,23 +157,26 @@ static int goldfish_new_pdev(void)
+ static irqreturn_t goldfish_pdev_bus_interrupt(int irq, void *dev_id)
+ {
+ irqreturn_t ret = IRQ_NONE;
++
+ while (1) {
+ u32 op = readl(pdev_bus_base + PDEV_BUS_OP);
+- switch (op) {
+- case PDEV_BUS_OP_DONE:
+- return IRQ_NONE;
+
++ switch (op) {
+ case PDEV_BUS_OP_REMOVE_DEV:
+ goldfish_pdev_remove();
++ ret = IRQ_HANDLED;
+ break;
+
+ case PDEV_BUS_OP_ADD_DEV:
+ goldfish_new_pdev();
++ ret = IRQ_HANDLED;
+ break;
++
++ case PDEV_BUS_OP_DONE:
++ default:
++ return ret;
+ }
+- ret = IRQ_HANDLED;
+ }
+- return ret;
+ }
+
+ static int goldfish_pdev_bus_probe(struct platform_device *pdev)
+diff --git a/drivers/tty/serial/msm_serial.c b/drivers/tty/serial/msm_serial.c
+index 7312e7e01b7e..6788e7532dff 100644
+--- a/drivers/tty/serial/msm_serial.c
++++ b/drivers/tty/serial/msm_serial.c
+@@ -1809,6 +1809,7 @@ static const struct of_device_id msm_match_table[] = {
+ { .compatible = "qcom,msm-uartdm" },
+ {}
+ };
++MODULE_DEVICE_TABLE(of, msm_match_table);
+
+ static struct platform_driver msm_platform_driver = {
+ .remove = msm_serial_remove,
+diff --git a/drivers/usb/serial/ark3116.c b/drivers/usb/serial/ark3116.c
+index 1532cde8a437..7812052dc700 100644
+--- a/drivers/usb/serial/ark3116.c
++++ b/drivers/usb/serial/ark3116.c
+@@ -99,10 +99,17 @@ static int ark3116_read_reg(struct usb_serial *serial,
+ usb_rcvctrlpipe(serial->dev, 0),
+ 0xfe, 0xc0, 0, reg,
+ buf, 1, ARK_TIMEOUT);
+- if (result < 0)
++ if (result < 1) {
++ dev_err(&serial->interface->dev,
++ "failed to read register %u: %d\n",
++ reg, result);
++ if (result >= 0)
++ result = -EIO;
++
+ return result;
+- else
+- return buf[0];
++ }
++
++ return buf[0];
+ }
+
+ static inline int calc_divisor(int bps)
+diff --git a/drivers/usb/serial/console.c b/drivers/usb/serial/console.c
+index 8967715fe6fc..b6f1adefb758 100644
+--- a/drivers/usb/serial/console.c
++++ b/drivers/usb/serial/console.c
+@@ -143,6 +143,7 @@ static int usb_console_setup(struct console *co, char *options)
+ tty->driver = usb_serial_tty_driver;
+ tty->index = co->index;
+ init_ldsem(&tty->ldisc_sem);
++ spin_lock_init(&tty->files_lock);
+ INIT_LIST_HEAD(&tty->tty_files);
+ kref_get(&tty->driver->kref);
+ __module_get(tty->driver->owner);
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index fff718352e0c..fbe69465eefa 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -178,6 +178,8 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x1901, 0x0190) }, /* GE B850 CP2105 Recorder interface */
+ { USB_DEVICE(0x1901, 0x0193) }, /* GE B650 CP2104 PMC interface */
+ { USB_DEVICE(0x1901, 0x0194) }, /* GE Healthcare Remote Alarm Box */
++ { USB_DEVICE(0x1901, 0x0195) }, /* GE B850/B650/B450 CP2104 DP UART interface */
++ { USB_DEVICE(0x1901, 0x0196) }, /* GE B850 CP2105 DP UART interface */
+ { USB_DEVICE(0x19CF, 0x3000) }, /* Parrot NMEA GPS Flight Recorder */
+ { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
+ { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 23d14b98ae2a..7d863fda1f18 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1802,8 +1802,6 @@ static int ftdi_sio_port_probe(struct usb_serial_port *port)
+
+ mutex_init(&priv->cfg_lock);
+
+- priv->flags = ASYNC_LOW_LATENCY;
+-
+ if (quirk && quirk->port_probe)
+ quirk->port_probe(priv);
+
+@@ -2067,6 +2065,20 @@ static int ftdi_process_packet(struct usb_serial_port *port,
+ priv->prev_status = status;
+ }
+
++ /* save if the transmitter is empty or not */
++ if (packet[1] & FTDI_RS_TEMT)
++ priv->transmit_empty = 1;
++ else
++ priv->transmit_empty = 0;
++
++ len -= 2;
++ if (!len)
++ return 0; /* status only */
++
++ /*
++ * Break and error status must only be processed for packets with
++ * data payload to avoid over-reporting.
++ */
+ flag = TTY_NORMAL;
+ if (packet[1] & FTDI_RS_ERR_MASK) {
+ /* Break takes precedence over parity, which takes precedence
+@@ -2089,15 +2101,6 @@ static int ftdi_process_packet(struct usb_serial_port *port,
+ }
+ }
+
+- /* save if the transmitter is empty or not */
+- if (packet[1] & FTDI_RS_TEMT)
+- priv->transmit_empty = 1;
+- else
+- priv->transmit_empty = 0;
+-
+- len -= 2;
+- if (!len)
+- return 0; /* status only */
+ port->icount.rx += len;
+ ch = packet + 2;
+
+@@ -2428,8 +2431,12 @@ static int ftdi_get_modem_status(struct usb_serial_port *port,
+ FTDI_SIO_GET_MODEM_STATUS_REQUEST_TYPE,
+ 0, priv->interface,
+ buf, len, WDR_TIMEOUT);
+- if (ret < 0) {
++
++ /* NOTE: We allow short responses and handle that below. */
++ if (ret < 1) {
+ dev_err(&port->dev, "failed to get modem status: %d\n", ret);
++ if (ret >= 0)
++ ret = -EIO;
+ ret = usb_translate_errors(ret);
+ goto out;
+ }
+diff --git a/drivers/usb/serial/mos7840.c b/drivers/usb/serial/mos7840.c
+index ea27fb23967a..e536ac8a080b 100644
+--- a/drivers/usb/serial/mos7840.c
++++ b/drivers/usb/serial/mos7840.c
+@@ -1023,6 +1023,7 @@ static int mos7840_open(struct tty_struct *tty, struct usb_serial_port *port)
+ * (can't set it up in mos7840_startup as the structures *
+ * were not set up at that time.) */
+ if (port0->open_ports == 1) {
++ /* FIXME: Buffer never NULL, so URB is not submitted. */
+ if (serial->port[0]->interrupt_in_buffer == NULL) {
+ /* set up interrupt urb */
+ usb_fill_int_urb(serial->port[0]->interrupt_in_urb,
+@@ -2106,7 +2107,8 @@ static int mos7840_calc_num_ports(struct usb_serial *serial)
+ static int mos7840_attach(struct usb_serial *serial)
+ {
+ if (serial->num_bulk_in < serial->num_ports ||
+- serial->num_bulk_out < serial->num_ports) {
++ serial->num_bulk_out < serial->num_ports ||
++ serial->num_interrupt_in < 1) {
+ dev_err(&serial->interface->dev, "missing endpoints\n");
+ return -ENODEV;
+ }
+diff --git a/drivers/usb/serial/opticon.c b/drivers/usb/serial/opticon.c
+index 5ded6f524d59..b3c64f557d60 100644
+--- a/drivers/usb/serial/opticon.c
++++ b/drivers/usb/serial/opticon.c
+@@ -142,7 +142,7 @@ static int opticon_open(struct tty_struct *tty, struct usb_serial_port *port)
+ usb_clear_halt(port->serial->dev, port->read_urb->pipe);
+
+ res = usb_serial_generic_open(tty, port);
+- if (!res)
++ if (res)
+ return res;
+
+ /* Request CTS line state, sometimes during opening the current
+diff --git a/drivers/usb/serial/spcp8x5.c b/drivers/usb/serial/spcp8x5.c
+index 475e6c31b266..ddfd787c461c 100644
+--- a/drivers/usb/serial/spcp8x5.c
++++ b/drivers/usb/serial/spcp8x5.c
+@@ -232,11 +232,17 @@ static int spcp8x5_get_msr(struct usb_serial_port *port, u8 *status)
+ ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+ GET_UART_STATUS, GET_UART_STATUS_TYPE,
+ 0, GET_UART_STATUS_MSR, buf, 1, 100);
+- if (ret < 0)
++ if (ret < 1) {
+ dev_err(&port->dev, "failed to get modem status: %d\n", ret);
++ if (ret >= 0)
++ ret = -EIO;
++ goto out;
++ }
+
+ dev_dbg(&port->dev, "0xc0:0x22:0:6 %d - 0x02%x\n", ret, *buf);
+ *status = *buf;
++ ret = 0;
++out:
+ kfree(buf);
+
+ return ret;
+diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
+index 1aa3abd67b36..fdecf79d2fa4 100644
+--- a/fs/xfs/xfs_iomap.c
++++ b/fs/xfs/xfs_iomap.c
+@@ -1102,7 +1102,15 @@ xfs_file_iomap_end_delalloc(
+ xfs_fileoff_t end_fsb;
+ int error = 0;
+
+- start_fsb = XFS_B_TO_FSB(mp, offset + written);
++ /*
++ * start_fsb refers to the first unused block after a short write. If
++ * nothing was written, round offset down to point at the first block in
++ * the range.
++ */
++ if (unlikely(!written))
++ start_fsb = XFS_B_TO_FSBT(mp, offset);
++ else
++ start_fsb = XFS_B_TO_FSB(mp, offset + written);
+ end_fsb = XFS_B_TO_FSB(mp, offset + length);
+
+ /*
+@@ -1114,6 +1122,9 @@ xfs_file_iomap_end_delalloc(
+ * blocks in the range, they are ours.
+ */
+ if (start_fsb < end_fsb) {
++ truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb),
++ XFS_FSB_TO_B(mp, end_fsb) - 1);
++
+ xfs_ilock(ip, XFS_ILOCK_EXCL);
+ error = xfs_bmap_punch_delalloc_range(ip, start_fsb,
+ end_fsb - start_fsb);
+diff --git a/include/acpi/platform/acenv.h b/include/acpi/platform/acenv.h
+index 34cce729109c..fca15390a42c 100644
+--- a/include/acpi/platform/acenv.h
++++ b/include/acpi/platform/acenv.h
+@@ -177,7 +177,7 @@
+ #include "acmsvc.h"
+
+ #elif defined(__INTEL_COMPILER)
+-#include "acintel.h"
++#include <acpi/platform/acintel.h>
+
+ #endif
+
+diff --git a/include/acpi/platform/acintel.h b/include/acpi/platform/acintel.h
+new file mode 100644
+index 000000000000..17bd3b7b4e5a
+--- /dev/null
++++ b/include/acpi/platform/acintel.h
+@@ -0,0 +1,87 @@
++/******************************************************************************
++ *
++ * Name: acintel.h - VC specific defines, etc.
++ *
++ *****************************************************************************/
++
++/*
++ * Copyright (C) 2000 - 2017, Intel Corp.
++ * All rights reserved.
++ *
++ * Redistribution and use in source and binary forms, with or without
++ * modification, are permitted provided that the following conditions
++ * are met:
++ * 1. Redistributions of source code must retain the above copyright
++ * notice, this list of conditions, and the following disclaimer,
++ * without modification.
++ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
++ * substantially similar to the "NO WARRANTY" disclaimer below
++ * ("Disclaimer") and any redistribution must be conditioned upon
++ * including a substantially similar Disclaimer requirement for further
++ * binary redistribution.
++ * 3. Neither the names of the above-listed copyright holders nor the names
++ * of any contributors may be used to endorse or promote products derived
++ * from this software without specific prior written permission.
++ *
++ * Alternatively, this software may be distributed under the terms of the
++ * GNU General Public License ("GPL") version 2 as published by the Free
++ * Software Foundation.
++ *
++ * NO WARRANTY
++ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
++ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
++ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
++ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
++ * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
++ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
++ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
++ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
++ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
++ * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
++ * POSSIBILITY OF SUCH DAMAGES.
++ */
++
++#ifndef __ACINTEL_H__
++#define __ACINTEL_H__
++
++/*
++ * Use compiler specific <stdarg.h> is a good practice for even when
++ * -nostdinc is specified (i.e., ACPI_USE_STANDARD_HEADERS undefined.
++ */
++#include <stdarg.h>
++
++/* Configuration specific to Intel 64-bit C compiler */
++
++#define COMPILER_DEPENDENT_INT64 __int64
++#define COMPILER_DEPENDENT_UINT64 unsigned __int64
++#define ACPI_INLINE __inline
++
++/*
++ * Calling conventions:
++ *
++ * ACPI_SYSTEM_XFACE - Interfaces to host OS (handlers, threads)
++ * ACPI_EXTERNAL_XFACE - External ACPI interfaces
++ * ACPI_INTERNAL_XFACE - Internal ACPI interfaces
++ * ACPI_INTERNAL_VAR_XFACE - Internal variable-parameter list interfaces
++ */
++#define ACPI_SYSTEM_XFACE
++#define ACPI_EXTERNAL_XFACE
++#define ACPI_INTERNAL_XFACE
++#define ACPI_INTERNAL_VAR_XFACE
++
++/* remark 981 - operands evaluated in no particular order */
++#pragma warning(disable:981)
++
++/* warn C4100: unreferenced formal parameter */
++#pragma warning(disable:4100)
++
++/* warn C4127: conditional expression is constant */
++#pragma warning(disable:4127)
++
++/* warn C4706: assignment within conditional expression */
++#pragma warning(disable:4706)
++
++/* warn C4214: bit field types other than int */
++#pragma warning(disable:4214)
++
++#endif /* __ACINTEL_H__ */
+diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
+index 2052011bf9fb..6c70444da3b9 100644
+--- a/include/linux/ptr_ring.h
++++ b/include/linux/ptr_ring.h
+@@ -111,6 +111,11 @@ static inline int __ptr_ring_produce(struct ptr_ring *r, void *ptr)
+ return 0;
+ }
+
++/*
++ * Note: resize (below) nests producer lock within consumer lock, so if you
++ * consume in interrupt or BH context, you must disable interrupts/BH when
++ * calling this.
++ */
+ static inline int ptr_ring_produce(struct ptr_ring *r, void *ptr)
+ {
+ int ret;
+@@ -242,6 +247,11 @@ static inline void *__ptr_ring_consume(struct ptr_ring *r)
+ return ptr;
+ }
+
++/*
++ * Note: resize (below) nests producer lock within consumer lock, so if you
++ * call this in interrupt or BH context, you must disable interrupts/BH when
++ * producing.
++ */
+ static inline void *ptr_ring_consume(struct ptr_ring *r)
+ {
+ void *ptr;
+@@ -357,7 +367,7 @@ static inline void **__ptr_ring_swap_queue(struct ptr_ring *r, void **queue,
+ void **old;
+ void *ptr;
+
+- while ((ptr = ptr_ring_consume(r)))
++ while ((ptr = __ptr_ring_consume(r)))
+ if (producer < size)
+ queue[producer++] = ptr;
+ else if (destroy)
+@@ -372,6 +382,12 @@ static inline void **__ptr_ring_swap_queue(struct ptr_ring *r, void **queue,
+ return old;
+ }
+
++/*
++ * Note: producer lock is nested within consumer lock, so if you
++ * resize you must make sure all uses nest correctly.
++ * In particular if you consume ring in interrupt or BH context, you must
++ * disable interrupts/BH when doing so.
++ */
+ static inline int ptr_ring_resize(struct ptr_ring *r, int size, gfp_t gfp,
+ void (*destroy)(void *))
+ {
+@@ -382,17 +398,25 @@ static inline int ptr_ring_resize(struct ptr_ring *r, int size, gfp_t gfp,
+ if (!queue)
+ return -ENOMEM;
+
+- spin_lock_irqsave(&(r)->producer_lock, flags);
++ spin_lock_irqsave(&(r)->consumer_lock, flags);
++ spin_lock(&(r)->producer_lock);
+
+ old = __ptr_ring_swap_queue(r, queue, size, gfp, destroy);
+
+- spin_unlock_irqrestore(&(r)->producer_lock, flags);
++ spin_unlock(&(r)->producer_lock);
++ spin_unlock_irqrestore(&(r)->consumer_lock, flags);
+
+ kfree(old);
+
+ return 0;
+ }
+
++/*
++ * Note: producer lock is nested within consumer lock, so if you
++ * resize you must make sure all uses nest correctly.
++ * In particular if you consume ring in interrupt or BH context, you must
++ * disable interrupts/BH when doing so.
++ */
+ static inline int ptr_ring_resize_multiple(struct ptr_ring **rings, int nrings,
+ int size,
+ gfp_t gfp, void (*destroy)(void *))
+@@ -412,10 +436,12 @@ static inline int ptr_ring_resize_multiple(struct ptr_ring **rings, int nrings,
+ }
+
+ for (i = 0; i < nrings; ++i) {
+- spin_lock_irqsave(&(rings[i])->producer_lock, flags);
++ spin_lock_irqsave(&(rings[i])->consumer_lock, flags);
++ spin_lock(&(rings[i])->producer_lock);
+ queues[i] = __ptr_ring_swap_queue(rings[i], queues[i],
+ size, gfp, destroy);
+- spin_unlock_irqrestore(&(rings[i])->producer_lock, flags);
++ spin_unlock(&(rings[i])->producer_lock);
++ spin_unlock_irqrestore(&(rings[i])->consumer_lock, flags);
+ }
+
+ for (i = 0; i < nrings; ++i)
+diff --git a/mm/backing-dev.c b/mm/backing-dev.c
+index 3bfed5ab2475..61b34071e3ee 100644
+--- a/mm/backing-dev.c
++++ b/mm/backing-dev.c
+@@ -758,15 +758,20 @@ static int cgwb_bdi_init(struct backing_dev_info *bdi)
+ if (!bdi->wb_congested)
+ return -ENOMEM;
+
++ atomic_set(&bdi->wb_congested->refcnt, 1);
++
+ err = wb_init(&bdi->wb, bdi, 1, GFP_KERNEL);
+ if (err) {
+- kfree(bdi->wb_congested);
++ wb_congested_put(bdi->wb_congested);
+ return err;
+ }
+ return 0;
+ }
+
+-static void cgwb_bdi_destroy(struct backing_dev_info *bdi) { }
++static void cgwb_bdi_destroy(struct backing_dev_info *bdi)
++{
++ wb_congested_put(bdi->wb_congested);
++}
+
+ #endif /* CONFIG_CGROUP_WRITEBACK */
+
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index 900011709e3b..fc4bf4d54158 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -116,10 +116,10 @@ static void ip_cmsg_recv_checksum(struct msghdr *msg, struct sk_buff *skb,
+ if (skb->ip_summed != CHECKSUM_COMPLETE)
+ return;
+
+- if (offset != 0)
+- csum = csum_sub(csum,
+- csum_partial(skb_transport_header(skb) + tlen,
+- offset, 0));
++ if (offset != 0) {
++ int tend_off = skb_transport_offset(skb) + tlen;
++ csum = csum_sub(csum, skb_checksum(skb, tend_off, offset, 0));
++ }
+
+ put_cmsg(msg, SOL_IP, IP_CHECKSUM, sizeof(__wsum), &csum);
+ }
+diff --git a/net/netfilter/nf_conntrack_helper.c b/net/netfilter/nf_conntrack_helper.c
+index 7341adf7059d..6dc44d9b4190 100644
+--- a/net/netfilter/nf_conntrack_helper.c
++++ b/net/netfilter/nf_conntrack_helper.c
+@@ -188,6 +188,26 @@ nf_ct_helper_ext_add(struct nf_conn *ct,
+ }
+ EXPORT_SYMBOL_GPL(nf_ct_helper_ext_add);
+
++static struct nf_conntrack_helper *
++nf_ct_lookup_helper(struct nf_conn *ct, struct net *net)
++{
++ if (!net->ct.sysctl_auto_assign_helper) {
++ if (net->ct.auto_assign_helper_warned)
++ return NULL;
++ if (!__nf_ct_helper_find(&ct->tuplehash[IP_CT_DIR_REPLY].tuple))
++ return NULL;
++ pr_info("nf_conntrack: default automatic helper assignment "
++ "has been turned off for security reasons and CT-based "
++ " firewall rule not found. Use the iptables CT target "
++ "to attach helpers instead.\n");
++ net->ct.auto_assign_helper_warned = 1;
++ return NULL;
++ }
++
++ return __nf_ct_helper_find(&ct->tuplehash[IP_CT_DIR_REPLY].tuple);
++}
++
++
+ int __nf_ct_try_assign_helper(struct nf_conn *ct, struct nf_conn *tmpl,
+ gfp_t flags)
+ {
+@@ -213,21 +233,14 @@ int __nf_ct_try_assign_helper(struct nf_conn *ct, struct nf_conn *tmpl,
+ }
+
+ help = nfct_help(ct);
+- if (net->ct.sysctl_auto_assign_helper && helper == NULL) {
+- helper = __nf_ct_helper_find(&ct->tuplehash[IP_CT_DIR_REPLY].tuple);
+- if (unlikely(!net->ct.auto_assign_helper_warned && helper)) {
+- pr_info("nf_conntrack: automatic helper "
+- "assignment is deprecated and it will "
+- "be removed soon. Use the iptables CT target "
+- "to attach helpers instead.\n");
+- net->ct.auto_assign_helper_warned = true;
+- }
+- }
+
+ if (helper == NULL) {
+- if (help)
+- RCU_INIT_POINTER(help->helper, NULL);
+- return 0;
++ helper = nf_ct_lookup_helper(ct, net);
++ if (helper == NULL) {
++ if (help)
++ RCU_INIT_POINTER(help->helper, NULL);
++ return 0;
++ }
+ }
+
+ if (help == NULL) {
+diff --git a/net/socket.c b/net/socket.c
+index 0758e13754e2..02bd9249e295 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -2228,8 +2228,10 @@ int __sys_recvmmsg(int fd, struct mmsghdr __user *mmsg, unsigned int vlen,
+ return err;
+
+ err = sock_error(sock->sk);
+- if (err)
++ if (err) {
++ datagrams = err;
+ goto out_put;
++ }
+
+ entry = mmsg;
+ compat_entry = (struct compat_mmsghdr __user *)mmsg;
^ permalink raw reply related [flat|nested] 22+ messages in thread
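The ptr_ring comments added in the hunks above come down to one rule: ptr_ring_resize() takes the consumer lock first and nests the producer lock inside it, so any path that produces to or consumes from the ring in interrupt or BH context must disable interrupts/BH around those calls, or a concurrent resize can deadlock against it. The fragment below is a minimal sketch of a caller that follows that rule using the _bh helpers from include/linux/ptr_ring.h; it is not part of the patch, and the module and identifier names are invented for the illustration.

// SPDX-License-Identifier: GPL-2.0
/* Illustrative only: a ring that is also touched from BH context uses the
 * _bh accessors, so that ptr_ring_resize(), which nests the producer lock
 * inside the consumer lock, cannot deadlock against a softirq user. */
#include <linux/module.h>
#include <linux/ptr_ring.h>

static struct ptr_ring demo_ring;	/* hypothetical ring */
static int demo_item = 42;

static int __init ptr_ring_demo_init(void)
{
	int err = ptr_ring_init(&demo_ring, 16, GFP_KERNEL);

	if (err)
		return err;

	/* Producer that could also run in softirq context: BH-safe variant. */
	ptr_ring_produce_bh(&demo_ring, &demo_item);

	/* Resize takes consumer lock, then producer lock (see hunks above). */
	err = ptr_ring_resize(&demo_ring, 32, GFP_KERNEL, NULL);

	/* Consumer side follows the same rule, so again the _bh variant. */
	pr_info("ptr_ring demo: resize=%d item=%p\n",
		err, ptr_ring_consume_bh(&demo_ring));
	return 0;
}

static void __exit ptr_ring_demo_exit(void)
{
	ptr_ring_cleanup(&demo_ring, NULL);
}

module_init(ptr_ring_demo_init);
module_exit(ptr_ring_demo_exit);
MODULE_LICENSE("GPL");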
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-03-02 16:20 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-03-02 16:20 UTC (permalink / raw
To: gentoo-commits
commit: c3aa61ceb24b851301063a08357da8eaab032fd0
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 2 16:16:08 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Mar 2 16:20:29 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c3aa61ce
Enable crypto API for systemd, as it is required by systemd versions >= 233. See bug #611368.
4567_distro-Gentoo-Kconfig.patch | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 4a88040..5555b8a 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -7,9 +7,9 @@
+source "distro/Kconfig"
+
source "arch/$SRCARCH/Kconfig"
---- /dev/null 2017-02-18 04:25:56.900821893 -0500
-+++ b/distro/Kconfig 2017-02-18 10:41:16.512328155 -0500
-@@ -0,0 +1,142 @@
+--- /dev/null 2017-03-02 01:55:04.096566155 -0500
++++ b/distro/Kconfig 2017-03-02 11:12:05.049448255 -0500
+@@ -0,0 +1,145 @@
+menu "Gentoo Linux"
+
+config GENTOO_LINUX
@@ -114,6 +114,9 @@
+ select BLK_DEV_BSG
+ select CGROUPS
+ select CHECKPOINT_RESTORE
++ select CRYPTO_HMAC
++ select CRYPTO_SHA256
++ select CRYPTO_USER_API_HASH
+ select DEVPTS_MULTIPLE_INSTANCES
+ select DMIID if X86_32 || X86_64 || X86
+ select EPOLL
^ permalink raw reply related [flat|nested] 22+ messages in thread
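For context on the options selected above: CRYPTO_USER_API_HASH exposes kernel hash and HMAC transforms to userspace through AF_ALG sockets, which is the interface newer systemd releases rely on, hence the bug report. The program below is a minimal userspace sketch (not taken from the commit) that exercises that interface by hashing a short buffer with SHA-256; on a kernel built without these options, the bind() fails.

/* Minimal AF_ALG sketch (illustrative, not from the commit above). */
#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "hash",		/* CRYPTO_USER_API_HASH */
		.salg_name   = "sha256",	/* CRYPTO_SHA256 */
	};
	unsigned char digest[32];
	int tfm = socket(AF_ALG, SOCK_SEQPACKET, 0);

	if (tfm < 0 || bind(tfm, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
		perror("AF_ALG");	/* e.g. kernel built without these options */
		return 1;
	}

	int op = accept(tfm, NULL, 0);		/* one operation socket per hash */
	write(op, "hello", 5);			/* feed data ... */
	read(op, digest, sizeof(digest));	/* ... then read the digest */

	for (int i = 0; i < (int)sizeof(digest); i++)
		printf("%02x", digest[i]);
	printf("\n");
	close(op);
	close(tfm);
	return 0;
}

For HMAC (the CRYPTO_HMAC select), the same flow applies with salg_name "hmac(sha256)" plus a setsockopt(tfm, SOL_ALG, ALG_SET_KEY, key, keylen) on the transform socket before accept().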
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-03-12 13:00 Alice Ferrazzi
0 siblings, 0 replies; 22+ messages in thread
From: Alice Ferrazzi @ 2017-03-12 13:00 UTC (permalink / raw
To: gentoo-commits
commit: 8016c1a480d6372573eb235db92a5c1b8bb75f15
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 12 12:42:03 2017 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Sun Mar 12 12:42:03 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8016c1a4
Linux patch 4.10.2
0000_README | 4 +
1001_linux-4.10.2.patch | 7455 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 7459 insertions(+)
diff --git a/0000_README b/0000_README
index decfe62..44d9c5f 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch: 1000_linux-4.10.1.patch
From: http://www.kernel.org
Desc: Linux 4.10.1
+Patch: 1001_linux-4.10.2.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.2
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1001_linux-4.10.2.patch b/1001_linux-4.10.2.patch
new file mode 100644
index 0000000..7989dd6
--- /dev/null
+++ b/1001_linux-4.10.2.patch
@@ -0,0 +1,7455 @@
+diff --git a/Makefile b/Makefile
+index 09eccff4f569..6e09b3a44e9a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arm/boot/dts/at91-sama5d2_xplained.dts b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
+index 0b9a59d5fdac..30fac04289a5 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
+@@ -148,6 +148,8 @@
+ uart1: serial@f8020000 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart1_default>;
++ atmel,use-dma-rx;
++ atmel,use-dma-tx;
+ status = "okay";
+ };
+
+diff --git a/arch/arm/boot/dts/at91-sama5d4_xplained.dts b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+index ed7fce297738..44d1171c7fc0 100644
+--- a/arch/arm/boot/dts/at91-sama5d4_xplained.dts
++++ b/arch/arm/boot/dts/at91-sama5d4_xplained.dts
+@@ -110,6 +110,8 @@
+ };
+
+ usart3: serial@fc00c000 {
++ atmel,use-dma-rx;
++ atmel,use-dma-tx;
+ status = "okay";
+ };
+
+diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
+index 74a44727f8e1..a58bbaa3ec60 100644
+--- a/arch/arm/include/asm/kvm_mmu.h
++++ b/arch/arm/include/asm/kvm_mmu.h
+@@ -150,18 +150,12 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
+ * and iterate over the range.
+ */
+
+- bool need_flush = !vcpu_has_cache_enabled(vcpu) || ipa_uncached;
+-
+ VM_BUG_ON(size & ~PAGE_MASK);
+
+- if (!need_flush && !icache_is_pipt())
+- goto vipt_cache;
+-
+ while (size) {
+ void *va = kmap_atomic_pfn(pfn);
+
+- if (need_flush)
+- kvm_flush_dcache_to_poc(va, PAGE_SIZE);
++ kvm_flush_dcache_to_poc(va, PAGE_SIZE);
+
+ if (icache_is_pipt())
+ __cpuc_coherent_user_range((unsigned long)va,
+@@ -173,7 +167,6 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
+ kunmap_atomic(va);
+ }
+
+-vipt_cache:
+ if (!icache_is_pipt() && !icache_is_vivt_asid_tagged()) {
+ /* any kind of VIPT cache */
+ __flush_icache_all();
+diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
+index 6f72fe8b0e3e..6d22017ebbad 100644
+--- a/arch/arm64/include/asm/kvm_mmu.h
++++ b/arch/arm64/include/asm/kvm_mmu.h
+@@ -241,8 +241,7 @@ static inline void __coherent_cache_guest_page(struct kvm_vcpu *vcpu,
+ {
+ void *va = page_address(pfn_to_page(pfn));
+
+- if (!vcpu_has_cache_enabled(vcpu) || ipa_uncached)
+- kvm_flush_dcache_to_poc(va, size);
++ kvm_flush_dcache_to_poc(va, size);
+
+ if (!icache_is_aliasing()) { /* PIPT */
+ flush_icache_range((unsigned long)va,
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index fdf8f045929f..16fa1d3c7986 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -654,15 +654,15 @@ static u64 __raw_read_system_reg(u32 sys_id)
+ case SYS_ID_ISAR2_EL1: return read_cpuid(ID_ISAR2_EL1);
+ case SYS_ID_ISAR3_EL1: return read_cpuid(ID_ISAR3_EL1);
+ case SYS_ID_ISAR4_EL1: return read_cpuid(ID_ISAR4_EL1);
+- case SYS_ID_ISAR5_EL1: return read_cpuid(ID_ISAR4_EL1);
++ case SYS_ID_ISAR5_EL1: return read_cpuid(ID_ISAR5_EL1);
+ case SYS_MVFR0_EL1: return read_cpuid(MVFR0_EL1);
+ case SYS_MVFR1_EL1: return read_cpuid(MVFR1_EL1);
+ case SYS_MVFR2_EL1: return read_cpuid(MVFR2_EL1);
+
+ case SYS_ID_AA64PFR0_EL1: return read_cpuid(ID_AA64PFR0_EL1);
+- case SYS_ID_AA64PFR1_EL1: return read_cpuid(ID_AA64PFR0_EL1);
++ case SYS_ID_AA64PFR1_EL1: return read_cpuid(ID_AA64PFR1_EL1);
+ case SYS_ID_AA64DFR0_EL1: return read_cpuid(ID_AA64DFR0_EL1);
+- case SYS_ID_AA64DFR1_EL1: return read_cpuid(ID_AA64DFR0_EL1);
++ case SYS_ID_AA64DFR1_EL1: return read_cpuid(ID_AA64DFR1_EL1);
+ case SYS_ID_AA64MMFR0_EL1: return read_cpuid(ID_AA64MMFR0_EL1);
+ case SYS_ID_AA64MMFR1_EL1: return read_cpuid(ID_AA64MMFR1_EL1);
+ case SYS_ID_AA64MMFR2_EL1: return read_cpuid(ID_AA64MMFR2_EL1);
+diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
+index e04082700bb1..1ffb7d5d299a 100644
+--- a/arch/arm64/mm/dma-mapping.c
++++ b/arch/arm64/mm/dma-mapping.c
+@@ -352,6 +352,13 @@ static int __swiotlb_dma_supported(struct device *hwdev, u64 mask)
+ return 1;
+ }
+
++static int __swiotlb_dma_mapping_error(struct device *hwdev, dma_addr_t addr)
++{
++ if (swiotlb)
++ return swiotlb_dma_mapping_error(hwdev, addr);
++ return 0;
++}
++
+ static struct dma_map_ops swiotlb_dma_ops = {
+ .alloc = __dma_alloc,
+ .free = __dma_free,
+@@ -366,7 +373,7 @@ static struct dma_map_ops swiotlb_dma_ops = {
+ .sync_sg_for_cpu = __swiotlb_sync_sg_for_cpu,
+ .sync_sg_for_device = __swiotlb_sync_sg_for_device,
+ .dma_supported = __swiotlb_dma_supported,
+- .mapping_error = swiotlb_dma_mapping_error,
++ .mapping_error = __swiotlb_dma_mapping_error,
+ };
+
+ static int __init atomic_pool_init(void)
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 17243e43184e..c391b1f2beaf 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -108,10 +108,8 @@ static bool pgattr_change_is_safe(u64 old, u64 new)
+ static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
+ unsigned long end, unsigned long pfn,
+ pgprot_t prot,
+- phys_addr_t (*pgtable_alloc)(void),
+- bool page_mappings_only)
++ phys_addr_t (*pgtable_alloc)(void))
+ {
+- pgprot_t __prot = prot;
+ pte_t *pte;
+
+ BUG_ON(pmd_sect(*pmd));
+@@ -129,18 +127,7 @@ static void alloc_init_pte(pmd_t *pmd, unsigned long addr,
+ do {
+ pte_t old_pte = *pte;
+
+- /*
+- * Set the contiguous bit for the subsequent group of PTEs if
+- * its size and alignment are appropriate.
+- */
+- if (((addr | PFN_PHYS(pfn)) & ~CONT_PTE_MASK) == 0) {
+- if (end - addr >= CONT_PTE_SIZE && !page_mappings_only)
+- __prot = __pgprot(pgprot_val(prot) | PTE_CONT);
+- else
+- __prot = prot;
+- }
+-
+- set_pte(pte, pfn_pte(pfn, __prot));
++ set_pte(pte, pfn_pte(pfn, prot));
+ pfn++;
+
+ /*
+@@ -159,7 +146,6 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
+ phys_addr_t (*pgtable_alloc)(void),
+ bool page_mappings_only)
+ {
+- pgprot_t __prot = prot;
+ pmd_t *pmd;
+ unsigned long next;
+
+@@ -186,18 +172,7 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
+ /* try section mapping first */
+ if (((addr | next | phys) & ~SECTION_MASK) == 0 &&
+ !page_mappings_only) {
+- /*
+- * Set the contiguous bit for the subsequent group of
+- * PMDs if its size and alignment are appropriate.
+- */
+- if (((addr | phys) & ~CONT_PMD_MASK) == 0) {
+- if (end - addr >= CONT_PMD_SIZE)
+- __prot = __pgprot(pgprot_val(prot) |
+- PTE_CONT);
+- else
+- __prot = prot;
+- }
+- pmd_set_huge(pmd, phys, __prot);
++ pmd_set_huge(pmd, phys, prot);
+
+ /*
+ * After the PMD entry has been populated once, we
+@@ -207,8 +182,7 @@ static void alloc_init_pmd(pud_t *pud, unsigned long addr, unsigned long end,
+ pmd_val(*pmd)));
+ } else {
+ alloc_init_pte(pmd, addr, next, __phys_to_pfn(phys),
+- prot, pgtable_alloc,
+- page_mappings_only);
++ prot, pgtable_alloc);
+
+ BUG_ON(pmd_val(old_pmd) != 0 &&
+ pmd_val(old_pmd) != pmd_val(*pmd));
+diff --git a/arch/mips/bcm47xx/buttons.c b/arch/mips/bcm47xx/buttons.c
+index 52caa75bfe4e..e2f50d690624 100644
+--- a/arch/mips/bcm47xx/buttons.c
++++ b/arch/mips/bcm47xx/buttons.c
+@@ -17,6 +17,12 @@
+ .active_low = 1, \
+ }
+
++#define BCM47XX_GPIO_KEY_H(_gpio, _code) \
++ { \
++ .code = _code, \
++ .gpio = _gpio, \
++ }
++
+ /* Asus */
+
+ static const struct gpio_keys_button
+@@ -79,8 +85,8 @@ bcm47xx_buttons_asus_wl500gpv2[] __initconst = {
+
+ static const struct gpio_keys_button
+ bcm47xx_buttons_asus_wl500w[] __initconst = {
+- BCM47XX_GPIO_KEY(6, KEY_RESTART),
+- BCM47XX_GPIO_KEY(7, KEY_WPS_BUTTON),
++ BCM47XX_GPIO_KEY_H(6, KEY_RESTART),
++ BCM47XX_GPIO_KEY_H(7, KEY_WPS_BUTTON),
+ };
+
+ static const struct gpio_keys_button
+diff --git a/arch/mips/cavium-octeon/octeon-memcpy.S b/arch/mips/cavium-octeon/octeon-memcpy.S
+index 64e08df51d65..8b7004132491 100644
+--- a/arch/mips/cavium-octeon/octeon-memcpy.S
++++ b/arch/mips/cavium-octeon/octeon-memcpy.S
+@@ -208,18 +208,18 @@ EXC( STORE t2, UNIT(6)(dst), s_exc_p10u)
+ ADD src, src, 16*NBYTES
+ EXC( STORE t3, UNIT(7)(dst), s_exc_p9u)
+ ADD dst, dst, 16*NBYTES
+-EXC( LOAD t0, UNIT(-8)(src), l_exc_copy)
+-EXC( LOAD t1, UNIT(-7)(src), l_exc_copy)
+-EXC( LOAD t2, UNIT(-6)(src), l_exc_copy)
+-EXC( LOAD t3, UNIT(-5)(src), l_exc_copy)
++EXC( LOAD t0, UNIT(-8)(src), l_exc_copy_rewind16)
++EXC( LOAD t1, UNIT(-7)(src), l_exc_copy_rewind16)
++EXC( LOAD t2, UNIT(-6)(src), l_exc_copy_rewind16)
++EXC( LOAD t3, UNIT(-5)(src), l_exc_copy_rewind16)
+ EXC( STORE t0, UNIT(-8)(dst), s_exc_p8u)
+ EXC( STORE t1, UNIT(-7)(dst), s_exc_p7u)
+ EXC( STORE t2, UNIT(-6)(dst), s_exc_p6u)
+ EXC( STORE t3, UNIT(-5)(dst), s_exc_p5u)
+-EXC( LOAD t0, UNIT(-4)(src), l_exc_copy)
+-EXC( LOAD t1, UNIT(-3)(src), l_exc_copy)
+-EXC( LOAD t2, UNIT(-2)(src), l_exc_copy)
+-EXC( LOAD t3, UNIT(-1)(src), l_exc_copy)
++EXC( LOAD t0, UNIT(-4)(src), l_exc_copy_rewind16)
++EXC( LOAD t1, UNIT(-3)(src), l_exc_copy_rewind16)
++EXC( LOAD t2, UNIT(-2)(src), l_exc_copy_rewind16)
++EXC( LOAD t3, UNIT(-1)(src), l_exc_copy_rewind16)
+ EXC( STORE t0, UNIT(-4)(dst), s_exc_p4u)
+ EXC( STORE t1, UNIT(-3)(dst), s_exc_p3u)
+ EXC( STORE t2, UNIT(-2)(dst), s_exc_p2u)
+@@ -383,6 +383,10 @@ done:
+ nop
+ END(memcpy)
+
++l_exc_copy_rewind16:
++ /* Rewind src and dst by 16*NBYTES for l_exc_copy */
++ SUB src, src, 16*NBYTES
++ SUB dst, dst, 16*NBYTES
+ l_exc_copy:
+ /*
+ * Copy bytes from src until faulting load address (or until a
+diff --git a/arch/mips/include/asm/checksum.h b/arch/mips/include/asm/checksum.h
+index 7749daf2a465..c8b574f7e0cc 100644
+--- a/arch/mips/include/asm/checksum.h
++++ b/arch/mips/include/asm/checksum.h
+@@ -186,7 +186,9 @@ static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
+ " daddu %0, %4 \n"
+ " dsll32 $1, %0, 0 \n"
+ " daddu %0, $1 \n"
++ " sltu $1, %0, $1 \n"
+ " dsra32 %0, %0, 0 \n"
++ " addu %0, $1 \n"
+ #endif
+ " .set pop"
+ : "=r" (sum)
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index 5142b1dfe8a7..7d80447e5d03 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -195,11 +195,9 @@ struct mips_frame_info {
+ #define J_TARGET(pc,target) \
+ (((unsigned long)(pc) & 0xf0000000) | ((target) << 2))
+
+-static inline int is_ra_save_ins(union mips_instruction *ip)
++static inline int is_ra_save_ins(union mips_instruction *ip, int *poff)
+ {
+ #ifdef CONFIG_CPU_MICROMIPS
+- union mips_instruction mmi;
+-
+ /*
+ * swsp ra,offset
+ * swm16 reglist,offset(sp)
+@@ -209,29 +207,71 @@ static inline int is_ra_save_ins(union mips_instruction *ip)
+ *
+ * microMIPS is way more fun...
+ */
+- if (mm_insn_16bit(ip->halfword[0])) {
+- mmi.word = (ip->halfword[0] << 16);
+- return (mmi.mm16_r5_format.opcode == mm_swsp16_op &&
+- mmi.mm16_r5_format.rt == 31) ||
+- (mmi.mm16_m_format.opcode == mm_pool16c_op &&
+- mmi.mm16_m_format.func == mm_swm16_op);
++ if (mm_insn_16bit(ip->halfword[1])) {
++ switch (ip->mm16_r5_format.opcode) {
++ case mm_swsp16_op:
++ if (ip->mm16_r5_format.rt != 31)
++ return 0;
++
++ *poff = ip->mm16_r5_format.simmediate;
++ *poff = (*poff << 2) / sizeof(ulong);
++ return 1;
++
++ case mm_pool16c_op:
++ switch (ip->mm16_m_format.func) {
++ case mm_swm16_op:
++ *poff = ip->mm16_m_format.imm;
++ *poff += 1 + ip->mm16_m_format.rlist;
++ *poff = (*poff << 2) / sizeof(ulong);
++ return 1;
++
++ default:
++ return 0;
++ }
++
++ default:
++ return 0;
++ }
+ }
+- else {
+- mmi.halfword[0] = ip->halfword[1];
+- mmi.halfword[1] = ip->halfword[0];
+- return (mmi.mm_m_format.opcode == mm_pool32b_op &&
+- mmi.mm_m_format.rd > 9 &&
+- mmi.mm_m_format.base == 29 &&
+- mmi.mm_m_format.func == mm_swm32_func) ||
+- (mmi.i_format.opcode == mm_sw32_op &&
+- mmi.i_format.rs == 29 &&
+- mmi.i_format.rt == 31);
++
++ switch (ip->i_format.opcode) {
++ case mm_sw32_op:
++ if (ip->i_format.rs != 29)
++ return 0;
++ if (ip->i_format.rt != 31)
++ return 0;
++
++ *poff = ip->i_format.simmediate / sizeof(ulong);
++ return 1;
++
++ case mm_pool32b_op:
++ switch (ip->mm_m_format.func) {
++ case mm_swm32_func:
++ if (ip->mm_m_format.rd < 0x10)
++ return 0;
++ if (ip->mm_m_format.base != 29)
++ return 0;
++
++ *poff = ip->mm_m_format.simmediate;
++ *poff += (ip->mm_m_format.rd & 0xf) * sizeof(u32);
++ *poff /= sizeof(ulong);
++ return 1;
++ default:
++ return 0;
++ }
++
++ default:
++ return 0;
+ }
+ #else
+ /* sw / sd $ra, offset($sp) */
+- return (ip->i_format.opcode == sw_op || ip->i_format.opcode == sd_op) &&
+- ip->i_format.rs == 29 &&
+- ip->i_format.rt == 31;
++ if ((ip->i_format.opcode == sw_op || ip->i_format.opcode == sd_op) &&
++ ip->i_format.rs == 29 && ip->i_format.rt == 31) {
++ *poff = ip->i_format.simmediate / sizeof(ulong);
++ return 1;
++ }
++
++ return 0;
+ #endif
+ }
+
+@@ -246,13 +286,16 @@ static inline int is_jump_ins(union mips_instruction *ip)
+ *
+ * microMIPS is kind of more fun...
+ */
+- union mips_instruction mmi;
+-
+- mmi.word = (ip->halfword[0] << 16);
++ if (mm_insn_16bit(ip->halfword[1])) {
++ if ((ip->mm16_r5_format.opcode == mm_pool16c_op &&
++ (ip->mm16_r5_format.rt & mm_jr16_op) == mm_jr16_op))
++ return 1;
++ return 0;
++ }
+
+- if ((mmi.mm16_r5_format.opcode == mm_pool16c_op &&
+- (mmi.mm16_r5_format.rt & mm_jr16_op) == mm_jr16_op) ||
+- ip->j_format.opcode == mm_jal32_op)
++ if (ip->j_format.opcode == mm_j32_op)
++ return 1;
++ if (ip->j_format.opcode == mm_jal32_op)
+ return 1;
+ if (ip->r_format.opcode != mm_pool32a_op ||
+ ip->r_format.func != mm_pool32axf_op)
+@@ -280,15 +323,13 @@ static inline int is_sp_move_ins(union mips_instruction *ip)
+ *
+ * microMIPS is not more fun...
+ */
+- if (mm_insn_16bit(ip->halfword[0])) {
+- union mips_instruction mmi;
+-
+- mmi.word = (ip->halfword[0] << 16);
+- return (mmi.mm16_r3_format.opcode == mm_pool16d_op &&
+- mmi.mm16_r3_format.simmediate && mm_addiusp_func) ||
+- (mmi.mm16_r5_format.opcode == mm_pool16d_op &&
+- mmi.mm16_r5_format.rt == 29);
++ if (mm_insn_16bit(ip->halfword[1])) {
++ return (ip->mm16_r3_format.opcode == mm_pool16d_op &&
++ ip->mm16_r3_format.simmediate && mm_addiusp_func) ||
++ (ip->mm16_r5_format.opcode == mm_pool16d_op &&
++ ip->mm16_r5_format.rt == 29);
+ }
++
+ return ip->mm_i_format.opcode == mm_addiu32_op &&
+ ip->mm_i_format.rt == 29 && ip->mm_i_format.rs == 29;
+ #else
+@@ -303,30 +344,36 @@ static inline int is_sp_move_ins(union mips_instruction *ip)
+
+ static int get_frame_info(struct mips_frame_info *info)
+ {
+-#ifdef CONFIG_CPU_MICROMIPS
+- union mips_instruction *ip = (void *) (((char *) info->func) - 1);
+-#else
+- union mips_instruction *ip = info->func;
+-#endif
+- unsigned max_insns = info->func_size / sizeof(union mips_instruction);
+- unsigned i;
++ bool is_mmips = IS_ENABLED(CONFIG_CPU_MICROMIPS);
++ union mips_instruction insn, *ip, *ip_end;
++ const unsigned int max_insns = 128;
++ unsigned int i;
+
+ info->pc_offset = -1;
+ info->frame_size = 0;
+
++ ip = (void *)msk_isa16_mode((ulong)info->func);
+ if (!ip)
+ goto err;
+
+- if (max_insns == 0)
+- max_insns = 128U; /* unknown function size */
+- max_insns = min(128U, max_insns);
++ ip_end = (void *)ip + info->func_size;
+
+- for (i = 0; i < max_insns; i++, ip++) {
++ for (i = 0; i < max_insns && ip < ip_end; i++, ip++) {
++ if (is_mmips && mm_insn_16bit(ip->halfword[0])) {
++ insn.halfword[0] = 0;
++ insn.halfword[1] = ip->halfword[0];
++ } else if (is_mmips) {
++ insn.halfword[0] = ip->halfword[1];
++ insn.halfword[1] = ip->halfword[0];
++ } else {
++ insn.word = ip->word;
++ }
+
+- if (is_jump_ins(ip))
++ if (is_jump_ins(&insn))
+ break;
++
+ if (!info->frame_size) {
+- if (is_sp_move_ins(ip))
++ if (is_sp_move_ins(&insn))
+ {
+ #ifdef CONFIG_CPU_MICROMIPS
+ if (mm_insn_16bit(ip->halfword[0]))
+@@ -349,11 +396,9 @@ static int get_frame_info(struct mips_frame_info *info)
+ }
+ continue;
+ }
+- if (info->pc_offset == -1 && is_ra_save_ins(ip)) {
+- info->pc_offset =
+- ip->i_format.simmediate / sizeof(long);
++ if (info->pc_offset == -1 &&
++ is_ra_save_ins(&insn, &info->pc_offset))
+ break;
+- }
+ }
+ if (info->frame_size && info->pc_offset >= 0) /* nested */
+ return 0;
+diff --git a/arch/mips/lantiq/xway/sysctrl.c b/arch/mips/lantiq/xway/sysctrl.c
+index 236193b5210b..9a61671c00a7 100644
+--- a/arch/mips/lantiq/xway/sysctrl.c
++++ b/arch/mips/lantiq/xway/sysctrl.c
+@@ -545,7 +545,7 @@ void __init ltq_soc_init(void)
+ clkdev_add_pmu("1a800000.pcie", "msi", 1, 1, PMU1_PCIE2_MSI);
+ clkdev_add_pmu("1a800000.pcie", "pdi", 1, 1, PMU1_PCIE2_PDI);
+ clkdev_add_pmu("1a800000.pcie", "ctl", 1, 1, PMU1_PCIE2_CTL);
+- clkdev_add_pmu("1e108000.eth", NULL, 1, 0, PMU_SWITCH | PMU_PPE_DP);
++ clkdev_add_pmu("1e108000.eth", NULL, 0, 0, PMU_SWITCH | PMU_PPE_DP);
+ clkdev_add_pmu("1da00000.usif", "NULL", 1, 0, PMU_USIF);
+ clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
+ } else if (of_machine_is_compatible("lantiq,ar10")) {
+@@ -553,7 +553,7 @@ void __init ltq_soc_init(void)
+ ltq_ar10_fpi_hz(), ltq_ar10_pp32_hz());
+ clkdev_add_pmu("1e101000.usb", "ctl", 1, 0, PMU_USB0);
+ clkdev_add_pmu("1e106000.usb", "ctl", 1, 0, PMU_USB1);
+- clkdev_add_pmu("1e108000.eth", NULL, 1, 0, PMU_SWITCH |
++ clkdev_add_pmu("1e108000.eth", NULL, 0, 0, PMU_SWITCH |
+ PMU_PPE_DP | PMU_PPE_TC);
+ clkdev_add_pmu("1da00000.usif", "NULL", 1, 0, PMU_USIF);
+ clkdev_add_pmu("1f203000.rcu", "gphy", 1, 0, PMU_GPHY);
+@@ -575,11 +575,11 @@ void __init ltq_soc_init(void)
+ clkdev_add_pmu(NULL, "ahb", 1, 0, PMU_AHBM | PMU_AHBS);
+
+ clkdev_add_pmu("1da00000.usif", "NULL", 1, 0, PMU_USIF);
+- clkdev_add_pmu("1e108000.eth", NULL, 1, 0,
++ clkdev_add_pmu("1e108000.eth", NULL, 0, 0,
+ PMU_SWITCH | PMU_PPE_DPLUS | PMU_PPE_DPLUM |
+ PMU_PPE_EMA | PMU_PPE_TC | PMU_PPE_SLL01 |
+ PMU_PPE_QSB | PMU_PPE_TOP);
+- clkdev_add_pmu("1f203000.rcu", "gphy", 1, 0, PMU_GPHY);
++ clkdev_add_pmu("1f203000.rcu", "gphy", 0, 0, PMU_GPHY);
+ clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_SDIO);
+ clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
+ clkdev_add_pmu("1e116000.mei", "dfe", 1, 0, PMU_DFE);
+diff --git a/arch/mips/mm/sc-ip22.c b/arch/mips/mm/sc-ip22.c
+index 026cb59a914d..f293a97cb885 100644
+--- a/arch/mips/mm/sc-ip22.c
++++ b/arch/mips/mm/sc-ip22.c
+@@ -31,26 +31,40 @@ static inline void indy_sc_wipe(unsigned long first, unsigned long last)
+ unsigned long tmp;
+
+ __asm__ __volatile__(
+- ".set\tpush\t\t\t# indy_sc_wipe\n\t"
+- ".set\tnoreorder\n\t"
+- ".set\tmips3\n\t"
+- ".set\tnoat\n\t"
+- "mfc0\t%2, $12\n\t"
+- "li\t$1, 0x80\t\t\t# Go 64 bit\n\t"
+- "mtc0\t$1, $12\n\t"
+-
+- "dli\t$1, 0x9000000080000000\n\t"
+- "or\t%0, $1\t\t\t# first line to flush\n\t"
+- "or\t%1, $1\t\t\t# last line to flush\n\t"
+- ".set\tat\n\t"
+-
+- "1:\tsw\t$0, 0(%0)\n\t"
+- "bne\t%0, %1, 1b\n\t"
+- " daddu\t%0, 32\n\t"
+-
+- "mtc0\t%2, $12\t\t\t# Back to 32 bit\n\t"
+- "nop; nop; nop; nop;\n\t"
+- ".set\tpop"
++ " .set push # indy_sc_wipe \n"
++ " .set noreorder \n"
++ " .set mips3 \n"
++ " .set noat \n"
++ " mfc0 %2, $12 \n"
++ " li $1, 0x80 # Go 64 bit \n"
++ " mtc0 $1, $12 \n"
++ " \n"
++ " # \n"
++ " # Open code a dli $1, 0x9000000080000000 \n"
++ " # \n"
++ " # Required because binutils 2.25 will happily accept \n"
++ " # 64 bit instructions in .set mips3 mode but puke on \n"
++ " # 64 bit constants when generating 32 bit ELF \n"
++ " # \n"
++ " lui $1,0x9000 \n"
++ " dsll $1,$1,0x10 \n"
++ " ori $1,$1,0x8000 \n"
++ " dsll $1,$1,0x10 \n"
++ " \n"
++ " or %0, $1 # first line to flush \n"
++ " or %1, $1 # last line to flush \n"
++ " .set at \n"
++ " \n"
++ "1: sw $0, 0(%0) \n"
++ " bne %0, %1, 1b \n"
++ " daddu %0, 32 \n"
++ " \n"
++ " mtc0 %2, $12 # Back to 32 bit \n"
++ " nop # pipeline hazard \n"
++ " nop \n"
++ " nop \n"
++ " nop \n"
++ " .set pop \n"
+ : "=r" (first), "=r" (last), "=&r" (tmp)
+ : "0" (first), "1" (last));
+ }
+diff --git a/arch/mips/pic32/pic32mzda/Makefile b/arch/mips/pic32/pic32mzda/Makefile
+index 4a4c2728c027..c28649615c6c 100644
+--- a/arch/mips/pic32/pic32mzda/Makefile
++++ b/arch/mips/pic32/pic32mzda/Makefile
+@@ -2,8 +2,7 @@
+ # Joshua Henderson, <joshua.henderson@microchip.com>
+ # Copyright (C) 2015 Microchip Technology, Inc. All rights reserved.
+ #
+-obj-y := init.o time.o config.o
++obj-y := config.o early_clk.o init.o time.o
+
+ obj-$(CONFIG_EARLY_PRINTK) += early_console.o \
+- early_pin.o \
+- early_clk.o
++ early_pin.o
+diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
+index 233a7e8cc8e3..065e762fae85 100644
+--- a/arch/powerpc/include/asm/mmu.h
++++ b/arch/powerpc/include/asm/mmu.h
+@@ -136,6 +136,7 @@ enum {
+ MMU_FTR_NO_SLBIE_B | MMU_FTR_16M_PAGE | MMU_FTR_TLBIEL |
+ MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_CI_LARGE_PAGE |
+ MMU_FTR_1T_SEGMENT | MMU_FTR_TLBIE_CROP_VA |
++ MMU_FTR_KERNEL_RO |
+ #ifdef CONFIG_PPC_RADIX_MMU
+ MMU_FTR_TYPE_RADIX |
+ #endif
+diff --git a/arch/powerpc/kernel/cpu_setup_power.S b/arch/powerpc/kernel/cpu_setup_power.S
+index 917188615bf5..7fe8c79e6937 100644
+--- a/arch/powerpc/kernel/cpu_setup_power.S
++++ b/arch/powerpc/kernel/cpu_setup_power.S
+@@ -101,6 +101,8 @@ _GLOBAL(__setup_cpu_power9)
+ mfspr r3,SPRN_LPCR
+ LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE)
+ or r3, r3, r4
++ LOAD_REG_IMMEDIATE(r4, LPCR_UPRT | LPCR_HR)
++ andc r3, r3, r4
+ bl __init_LPCR
+ bl __init_HFSCR
+ bl __init_tlb_power9
+@@ -122,6 +124,8 @@ _GLOBAL(__restore_cpu_power9)
+ mfspr r3,SPRN_LPCR
+ LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE)
+ or r3, r3, r4
++ LOAD_REG_IMMEDIATE(r4, LPCR_UPRT | LPCR_HR)
++ andc r3, r3, r4
+ bl __init_LPCR
+ bl __init_HFSCR
+ bl __init_tlb_power9
+diff --git a/arch/powerpc/kernel/hw_breakpoint.c b/arch/powerpc/kernel/hw_breakpoint.c
+index 4d3aa05e28be..53cc9270aac8 100644
+--- a/arch/powerpc/kernel/hw_breakpoint.c
++++ b/arch/powerpc/kernel/hw_breakpoint.c
+@@ -228,8 +228,10 @@ int hw_breakpoint_handler(struct die_args *args)
+ rcu_read_lock();
+
+ bp = __this_cpu_read(bp_per_reg);
+- if (!bp)
++ if (!bp) {
++ rc = NOTIFY_DONE;
+ goto out;
++ }
+ info = counter_arch_bp(bp);
+
+ /*
+diff --git a/arch/x86/include/asm/pkeys.h b/arch/x86/include/asm/pkeys.h
+index 34684adb6899..b3b09b98896d 100644
+--- a/arch/x86/include/asm/pkeys.h
++++ b/arch/x86/include/asm/pkeys.h
+@@ -46,6 +46,15 @@ extern int __arch_set_user_pkey_access(struct task_struct *tsk, int pkey,
+ static inline
+ bool mm_pkey_is_allocated(struct mm_struct *mm, int pkey)
+ {
++ /*
++ * "Allocated" pkeys are those that have been returned
++ * from pkey_alloc(). pkey 0 is special, and never
++ * returned from pkey_alloc().
++ */
++ if (pkey <= 0)
++ return false;
++ if (pkey >= arch_max_pkey())
++ return false;
+ return mm_pkey_allocation_map(mm) & (1U << pkey);
+ }
+
+@@ -82,12 +91,6 @@ int mm_pkey_alloc(struct mm_struct *mm)
+ static inline
+ int mm_pkey_free(struct mm_struct *mm, int pkey)
+ {
+- /*
+- * pkey 0 is special, always allocated and can never
+- * be freed.
+- */
+- if (!pkey)
+- return -EINVAL;
+ if (!mm_pkey_is_allocated(mm, pkey))
+ return -EINVAL;
+
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index 160f08e721cc..9c245eb0dd83 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -374,6 +374,7 @@ config CRYPTO_XTS
+ select CRYPTO_BLKCIPHER
+ select CRYPTO_MANAGER
+ select CRYPTO_GF128MUL
++ select CRYPTO_ECB
+ help
+ XTS: IEEE1619/D16 narrow block cipher use with aes-xts-plain,
+ key size 256, 384 or 512 bits. This implementation currently
+diff --git a/crypto/testmgr.h b/crypto/testmgr.h
+index 9b656be7f52f..21b4be1f6824 100644
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -22827,7 +22827,7 @@ static struct aead_testvec aes_ccm_enc_tv_template[] = {
+ "\x09\x75\x9a\x9b\x3c\x9b\x27\x39",
+ .klen = 32,
+ .iv = "\x03\xf9\xd9\x4e\x63\xb5\x3d\x9d"
+- "\x43\xf6\x1e\x50",
++ "\x43\xf6\x1e\x50\0\0\0\0",
+ .assoc = "\x57\xf5\x6b\x8b\x57\x5c\x3d\x3b"
+ "\x13\x02\x01\x0c\x83\x4c\x96\x35"
+ "\x8e\xd6\x39\xcf\x7d\x14\x9b\x94"
+diff --git a/crypto/xts.c b/crypto/xts.c
+index 410a2e299085..baeb34dd8582 100644
+--- a/crypto/xts.c
++++ b/crypto/xts.c
+@@ -463,6 +463,7 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
+ struct xts_instance_ctx *ctx;
+ struct skcipher_alg *alg;
+ const char *cipher_name;
++ u32 mask;
+ int err;
+
+ algt = crypto_get_attr_type(tb);
+@@ -483,18 +484,19 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
+ ctx = skcipher_instance_ctx(inst);
+
+ crypto_set_skcipher_spawn(&ctx->spawn, skcipher_crypto_instance(inst));
+- err = crypto_grab_skcipher(&ctx->spawn, cipher_name, 0,
+- crypto_requires_sync(algt->type,
+- algt->mask));
++
++ mask = crypto_requires_off(algt->type, algt->mask,
++ CRYPTO_ALG_NEED_FALLBACK |
++ CRYPTO_ALG_ASYNC);
++
++ err = crypto_grab_skcipher(&ctx->spawn, cipher_name, 0, mask);
+ if (err == -ENOENT) {
+ err = -ENAMETOOLONG;
+ if (snprintf(ctx->name, CRYPTO_MAX_ALG_NAME, "ecb(%s)",
+ cipher_name) >= CRYPTO_MAX_ALG_NAME)
+ goto err_free_inst;
+
+- err = crypto_grab_skcipher(&ctx->spawn, ctx->name, 0,
+- crypto_requires_sync(algt->type,
+- algt->mask));
++ err = crypto_grab_skcipher(&ctx->spawn, ctx->name, 0, mask);
+ }
+
+ if (err)
+diff --git a/drivers/bcma/main.c b/drivers/bcma/main.c
+index 2c1798e38abd..38688236b3cd 100644
+--- a/drivers/bcma/main.c
++++ b/drivers/bcma/main.c
+@@ -633,8 +633,11 @@ static int bcma_device_probe(struct device *dev)
+ drv);
+ int err = 0;
+
++ get_device(dev);
+ if (adrv->probe)
+ err = adrv->probe(core);
++ if (err)
++ put_device(dev);
+
+ return err;
+ }
+@@ -647,6 +650,7 @@ static int bcma_device_remove(struct device *dev)
+
+ if (adrv->remove)
+ adrv->remove(core);
++ put_device(dev);
+
+ return 0;
+ }
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index f347285c67ec..7fd5a6ac2675 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1097,9 +1097,12 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
+ if ((unsigned int) info->lo_encrypt_key_size > LO_KEY_SIZE)
+ return -EINVAL;
+
++ /* I/O need to be drained during transfer transition */
++ blk_mq_freeze_queue(lo->lo_queue);
++
+ err = loop_release_xfer(lo);
+ if (err)
+- return err;
++ goto exit;
+
+ if (info->lo_encrypt_type) {
+ unsigned int type = info->lo_encrypt_type;
+@@ -1114,12 +1117,14 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
+
+ err = loop_init_xfer(lo, xfer, info);
+ if (err)
+- return err;
++ goto exit;
+
+ if (lo->lo_offset != info->lo_offset ||
+ lo->lo_sizelimit != info->lo_sizelimit)
+- if (figure_loop_size(lo, info->lo_offset, info->lo_sizelimit))
+- return -EFBIG;
++ if (figure_loop_size(lo, info->lo_offset, info->lo_sizelimit)) {
++ err = -EFBIG;
++ goto exit;
++ }
+
+ loop_config_discard(lo);
+
+@@ -1137,13 +1142,6 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
+ (info->lo_flags & LO_FLAGS_AUTOCLEAR))
+ lo->lo_flags ^= LO_FLAGS_AUTOCLEAR;
+
+- if ((info->lo_flags & LO_FLAGS_PARTSCAN) &&
+- !(lo->lo_flags & LO_FLAGS_PARTSCAN)) {
+- lo->lo_flags |= LO_FLAGS_PARTSCAN;
+- lo->lo_disk->flags &= ~GENHD_FL_NO_PART_SCAN;
+- loop_reread_partitions(lo, lo->lo_device);
+- }
+-
+ lo->lo_encrypt_key_size = info->lo_encrypt_key_size;
+ lo->lo_init[0] = info->lo_init[0];
+ lo->lo_init[1] = info->lo_init[1];
+@@ -1156,7 +1154,17 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
+ /* update dio if lo_offset or transfer is changed */
+ __loop_update_dio(lo, lo->use_dio);
+
+- return 0;
++ exit:
++ blk_mq_unfreeze_queue(lo->lo_queue);
++
++ if (!err && (info->lo_flags & LO_FLAGS_PARTSCAN) &&
++ !(lo->lo_flags & LO_FLAGS_PARTSCAN)) {
++ lo->lo_flags |= LO_FLAGS_PARTSCAN;
++ lo->lo_disk->flags &= ~GENHD_FL_NO_PART_SCAN;
++ loop_reread_partitions(lo, lo->lo_device);
++ }
++
++ return err;
+ }
+
+ static int
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index a2688ac2b48f..87964fe049db 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -499,8 +499,7 @@ static int tpm_startup(struct tpm_chip *chip, __be16 startup_type)
+ int tpm_get_timeouts(struct tpm_chip *chip)
+ {
+ cap_t cap;
+- unsigned long new_timeout[4];
+- unsigned long old_timeout[4];
++ unsigned long timeout_old[4], timeout_chip[4], timeout_eff[4];
+ ssize_t rc;
+
+ if (chip->flags & TPM_CHIP_FLAG_HAVE_TIMEOUTS)
+@@ -538,11 +537,15 @@ int tpm_get_timeouts(struct tpm_chip *chip)
+ if (rc)
+ return rc;
+
+- old_timeout[0] = be32_to_cpu(cap.timeout.a);
+- old_timeout[1] = be32_to_cpu(cap.timeout.b);
+- old_timeout[2] = be32_to_cpu(cap.timeout.c);
+- old_timeout[3] = be32_to_cpu(cap.timeout.d);
+- memcpy(new_timeout, old_timeout, sizeof(new_timeout));
++ timeout_old[0] = jiffies_to_usecs(chip->timeout_a);
++ timeout_old[1] = jiffies_to_usecs(chip->timeout_b);
++ timeout_old[2] = jiffies_to_usecs(chip->timeout_c);
++ timeout_old[3] = jiffies_to_usecs(chip->timeout_d);
++ timeout_chip[0] = be32_to_cpu(cap.timeout.a);
++ timeout_chip[1] = be32_to_cpu(cap.timeout.b);
++ timeout_chip[2] = be32_to_cpu(cap.timeout.c);
++ timeout_chip[3] = be32_to_cpu(cap.timeout.d);
++ memcpy(timeout_eff, timeout_chip, sizeof(timeout_eff));
+
+ /*
+ * Provide ability for vendor overrides of timeout values in case
+@@ -550,16 +553,24 @@ int tpm_get_timeouts(struct tpm_chip *chip)
+ */
+ if (chip->ops->update_timeouts != NULL)
+ chip->timeout_adjusted =
+- chip->ops->update_timeouts(chip, new_timeout);
++ chip->ops->update_timeouts(chip, timeout_eff);
+
+ if (!chip->timeout_adjusted) {
+- /* Don't overwrite default if value is 0 */
+- if (new_timeout[0] != 0 && new_timeout[0] < 1000) {
+- int i;
++ /* Restore default if chip reported 0 */
++ int i;
+
++ for (i = 0; i < ARRAY_SIZE(timeout_eff); i++) {
++ if (timeout_eff[i])
++ continue;
++
++ timeout_eff[i] = timeout_old[i];
++ chip->timeout_adjusted = true;
++ }
++
++ if (timeout_eff[0] != 0 && timeout_eff[0] < 1000) {
+ /* timeouts in msec rather usec */
+- for (i = 0; i != ARRAY_SIZE(new_timeout); i++)
+- new_timeout[i] *= 1000;
++ for (i = 0; i != ARRAY_SIZE(timeout_eff); i++)
++ timeout_eff[i] *= 1000;
+ chip->timeout_adjusted = true;
+ }
+ }
+@@ -568,16 +579,16 @@ int tpm_get_timeouts(struct tpm_chip *chip)
+ if (chip->timeout_adjusted) {
+ dev_info(&chip->dev,
+ HW_ERR "Adjusting reported timeouts: A %lu->%luus B %lu->%luus C %lu->%luus D %lu->%luus\n",
+- old_timeout[0], new_timeout[0],
+- old_timeout[1], new_timeout[1],
+- old_timeout[2], new_timeout[2],
+- old_timeout[3], new_timeout[3]);
++ timeout_chip[0], timeout_eff[0],
++ timeout_chip[1], timeout_eff[1],
++ timeout_chip[2], timeout_eff[2],
++ timeout_chip[3], timeout_eff[3]);
+ }
+
+- chip->timeout_a = usecs_to_jiffies(new_timeout[0]);
+- chip->timeout_b = usecs_to_jiffies(new_timeout[1]);
+- chip->timeout_c = usecs_to_jiffies(new_timeout[2]);
+- chip->timeout_d = usecs_to_jiffies(new_timeout[3]);
++ chip->timeout_a = usecs_to_jiffies(timeout_eff[0]);
++ chip->timeout_b = usecs_to_jiffies(timeout_eff[1]);
++ chip->timeout_c = usecs_to_jiffies(timeout_eff[2]);
++ chip->timeout_d = usecs_to_jiffies(timeout_eff[3]);
+
+ rc = tpm_getcap(chip, TPM_CAP_PROP_TIS_DURATION, &cap,
+ "attempting to determine the durations");
+diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c
+index 0127af130cb1..c7e1384f1b08 100644
+--- a/drivers/char/tpm/tpm_tis.c
++++ b/drivers/char/tpm/tpm_tis.c
+@@ -159,7 +159,7 @@ static int tpm_tis_init(struct device *dev, struct tpm_info *tpm_info,
+ irq = tpm_info->irq;
+
+ if (itpm)
+- phy->priv.flags |= TPM_TIS_ITPM_POSSIBLE;
++ phy->priv.flags |= TPM_TIS_ITPM_WORKAROUND;
+
+ return tpm_tis_core_init(dev, &phy->priv, irq, &tpm_tcg,
+ acpi_dev_handle);
+@@ -432,7 +432,7 @@ static int __init init_tis(void)
+ acpi_bus_unregister_driver(&tis_acpi_driver);
+ err_acpi:
+ #endif
+- platform_device_unregister(force_pdev);
++ platform_driver_unregister(&tis_drv);
+ err_platform:
+ if (force_pdev)
+ platform_device_unregister(force_pdev);
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 7993678954a2..0cfc0eed8525 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -264,7 +264,7 @@ static int tpm_tis_send_data(struct tpm_chip *chip, u8 *buf, size_t len)
+ struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+ int rc, status, burstcnt;
+ size_t count = 0;
+- bool itpm = priv->flags & TPM_TIS_ITPM_POSSIBLE;
++ bool itpm = priv->flags & TPM_TIS_ITPM_WORKAROUND;
+
+ if (request_locality(chip, 0) < 0)
+ return -EBUSY;
+@@ -740,7 +740,7 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ (chip->flags & TPM_CHIP_FLAG_TPM2) ? "2.0" : "1.2",
+ vendor >> 16, rid);
+
+- if (!(priv->flags & TPM_TIS_ITPM_POSSIBLE)) {
++ if (!(priv->flags & TPM_TIS_ITPM_WORKAROUND)) {
+ probe = probe_itpm(chip);
+ if (probe < 0) {
+ rc = -ENODEV;
+@@ -748,7 +748,7 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ }
+
+ if (!!probe)
+- priv->flags |= TPM_TIS_ITPM_POSSIBLE;
++ priv->flags |= TPM_TIS_ITPM_WORKAROUND;
+ }
+
+ /* Figure out the capabilities */
+diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
+index 9191aabbf9c2..e2212f021a02 100644
+--- a/drivers/char/tpm/tpm_tis_core.h
++++ b/drivers/char/tpm/tpm_tis_core.h
+@@ -80,7 +80,7 @@ enum tis_defaults {
+ #define TPM_RID(l) (0x0F04 | ((l) << 12))
+
+ enum tpm_tis_flags {
+- TPM_TIS_ITPM_POSSIBLE = BIT(0),
++ TPM_TIS_ITPM_WORKAROUND = BIT(0),
+ };
+
+ struct tpm_tis_data {
+diff --git a/drivers/crypto/vmx/aes_cbc.c b/drivers/crypto/vmx/aes_cbc.c
+index 94ad5c0adbcb..72a26eb4e954 100644
+--- a/drivers/crypto/vmx/aes_cbc.c
++++ b/drivers/crypto/vmx/aes_cbc.c
+@@ -27,11 +27,12 @@
+ #include <asm/switch_to.h>
+ #include <crypto/aes.h>
+ #include <crypto/scatterwalk.h>
++#include <crypto/skcipher.h>
+
+ #include "aesp8-ppc.h"
+
+ struct p8_aes_cbc_ctx {
+- struct crypto_blkcipher *fallback;
++ struct crypto_skcipher *fallback;
+ struct aes_key enc_key;
+ struct aes_key dec_key;
+ };
+@@ -39,7 +40,7 @@ struct p8_aes_cbc_ctx {
+ static int p8_aes_cbc_init(struct crypto_tfm *tfm)
+ {
+ const char *alg;
+- struct crypto_blkcipher *fallback;
++ struct crypto_skcipher *fallback;
+ struct p8_aes_cbc_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ if (!(alg = crypto_tfm_alg_name(tfm))) {
+@@ -47,8 +48,9 @@ static int p8_aes_cbc_init(struct crypto_tfm *tfm)
+ return -ENOENT;
+ }
+
+- fallback =
+- crypto_alloc_blkcipher(alg, 0, CRYPTO_ALG_NEED_FALLBACK);
++ fallback = crypto_alloc_skcipher(alg, 0,
++ CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
++
+ if (IS_ERR(fallback)) {
+ printk(KERN_ERR
+ "Failed to allocate transformation for '%s': %ld\n",
+@@ -56,11 +58,12 @@ static int p8_aes_cbc_init(struct crypto_tfm *tfm)
+ return PTR_ERR(fallback);
+ }
+ printk(KERN_INFO "Using '%s' as fallback implementation.\n",
+- crypto_tfm_alg_driver_name((struct crypto_tfm *) fallback));
++ crypto_skcipher_driver_name(fallback));
++
+
+- crypto_blkcipher_set_flags(
++ crypto_skcipher_set_flags(
+ fallback,
+- crypto_blkcipher_get_flags((struct crypto_blkcipher *)tfm));
++ crypto_skcipher_get_flags((struct crypto_skcipher *)tfm));
+ ctx->fallback = fallback;
+
+ return 0;
+@@ -71,7 +74,7 @@ static void p8_aes_cbc_exit(struct crypto_tfm *tfm)
+ struct p8_aes_cbc_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ if (ctx->fallback) {
+- crypto_free_blkcipher(ctx->fallback);
++ crypto_free_skcipher(ctx->fallback);
+ ctx->fallback = NULL;
+ }
+ }
+@@ -91,7 +94,7 @@ static int p8_aes_cbc_setkey(struct crypto_tfm *tfm, const u8 *key,
+ pagefault_enable();
+ preempt_enable();
+
+- ret += crypto_blkcipher_setkey(ctx->fallback, key, keylen);
++ ret += crypto_skcipher_setkey(ctx->fallback, key, keylen);
+ return ret;
+ }
+
+@@ -103,15 +106,14 @@ static int p8_aes_cbc_encrypt(struct blkcipher_desc *desc,
+ struct blkcipher_walk walk;
+ struct p8_aes_cbc_ctx *ctx =
+ crypto_tfm_ctx(crypto_blkcipher_tfm(desc->tfm));
+- struct blkcipher_desc fallback_desc = {
+- .tfm = ctx->fallback,
+- .info = desc->info,
+- .flags = desc->flags
+- };
+
+ if (in_interrupt()) {
+- ret = crypto_blkcipher_encrypt(&fallback_desc, dst, src,
+- nbytes);
++ SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback);
++ skcipher_request_set_tfm(req, ctx->fallback);
++ skcipher_request_set_callback(req, desc->flags, NULL, NULL);
++ skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
++ ret = crypto_skcipher_encrypt(req);
++ skcipher_request_zero(req);
+ } else {
+ preempt_disable();
+ pagefault_disable();
+@@ -144,15 +146,14 @@ static int p8_aes_cbc_decrypt(struct blkcipher_desc *desc,
+ struct blkcipher_walk walk;
+ struct p8_aes_cbc_ctx *ctx =
+ crypto_tfm_ctx(crypto_blkcipher_tfm(desc->tfm));
+- struct blkcipher_desc fallback_desc = {
+- .tfm = ctx->fallback,
+- .info = desc->info,
+- .flags = desc->flags
+- };
+
+ if (in_interrupt()) {
+- ret = crypto_blkcipher_decrypt(&fallback_desc, dst, src,
+- nbytes);
++ SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback);
++ skcipher_request_set_tfm(req, ctx->fallback);
++ skcipher_request_set_callback(req, desc->flags, NULL, NULL);
++ skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
++ ret = crypto_skcipher_decrypt(req);
++ skcipher_request_zero(req);
+ } else {
+ preempt_disable();
+ pagefault_disable();
+diff --git a/drivers/crypto/vmx/aes_xts.c b/drivers/crypto/vmx/aes_xts.c
+index 24353ec336c5..6adc9290557a 100644
+--- a/drivers/crypto/vmx/aes_xts.c
++++ b/drivers/crypto/vmx/aes_xts.c
+@@ -28,11 +28,12 @@
+ #include <crypto/aes.h>
+ #include <crypto/scatterwalk.h>
+ #include <crypto/xts.h>
++#include <crypto/skcipher.h>
+
+ #include "aesp8-ppc.h"
+
+ struct p8_aes_xts_ctx {
+- struct crypto_blkcipher *fallback;
++ struct crypto_skcipher *fallback;
+ struct aes_key enc_key;
+ struct aes_key dec_key;
+ struct aes_key tweak_key;
+@@ -41,7 +42,7 @@ struct p8_aes_xts_ctx {
+ static int p8_aes_xts_init(struct crypto_tfm *tfm)
+ {
+ const char *alg;
+- struct crypto_blkcipher *fallback;
++ struct crypto_skcipher *fallback;
+ struct p8_aes_xts_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ if (!(alg = crypto_tfm_alg_name(tfm))) {
+@@ -49,8 +50,8 @@ static int p8_aes_xts_init(struct crypto_tfm *tfm)
+ return -ENOENT;
+ }
+
+- fallback =
+- crypto_alloc_blkcipher(alg, 0, CRYPTO_ALG_NEED_FALLBACK);
++ fallback = crypto_alloc_skcipher(alg, 0,
++ CRYPTO_ALG_ASYNC | CRYPTO_ALG_NEED_FALLBACK);
+ if (IS_ERR(fallback)) {
+ printk(KERN_ERR
+ "Failed to allocate transformation for '%s': %ld\n",
+@@ -58,11 +59,11 @@ static int p8_aes_xts_init(struct crypto_tfm *tfm)
+ return PTR_ERR(fallback);
+ }
+ printk(KERN_INFO "Using '%s' as fallback implementation.\n",
+- crypto_tfm_alg_driver_name((struct crypto_tfm *) fallback));
++ crypto_skcipher_driver_name(fallback));
+
+- crypto_blkcipher_set_flags(
++ crypto_skcipher_set_flags(
+ fallback,
+- crypto_blkcipher_get_flags((struct crypto_blkcipher *)tfm));
++ crypto_skcipher_get_flags((struct crypto_skcipher *)tfm));
+ ctx->fallback = fallback;
+
+ return 0;
+@@ -73,7 +74,7 @@ static void p8_aes_xts_exit(struct crypto_tfm *tfm)
+ struct p8_aes_xts_ctx *ctx = crypto_tfm_ctx(tfm);
+
+ if (ctx->fallback) {
+- crypto_free_blkcipher(ctx->fallback);
++ crypto_free_skcipher(ctx->fallback);
+ ctx->fallback = NULL;
+ }
+ }
+@@ -98,7 +99,7 @@ static int p8_aes_xts_setkey(struct crypto_tfm *tfm, const u8 *key,
+ pagefault_enable();
+ preempt_enable();
+
+- ret += crypto_blkcipher_setkey(ctx->fallback, key, keylen);
++ ret += crypto_skcipher_setkey(ctx->fallback, key, keylen);
+ return ret;
+ }
+
+@@ -113,15 +114,14 @@ static int p8_aes_xts_crypt(struct blkcipher_desc *desc,
+ struct blkcipher_walk walk;
+ struct p8_aes_xts_ctx *ctx =
+ crypto_tfm_ctx(crypto_blkcipher_tfm(desc->tfm));
+- struct blkcipher_desc fallback_desc = {
+- .tfm = ctx->fallback,
+- .info = desc->info,
+- .flags = desc->flags
+- };
+
+ if (in_interrupt()) {
+- ret = enc ? crypto_blkcipher_encrypt(&fallback_desc, dst, src, nbytes) :
+- crypto_blkcipher_decrypt(&fallback_desc, dst, src, nbytes);
++ SKCIPHER_REQUEST_ON_STACK(req, ctx->fallback);
++ skcipher_request_set_tfm(req, ctx->fallback);
++ skcipher_request_set_callback(req, desc->flags, NULL, NULL);
++ skcipher_request_set_crypt(req, src, dst, nbytes, desc->info);
++ ret = enc? crypto_skcipher_encrypt(req) : crypto_skcipher_decrypt(req);
++ skcipher_request_zero(req);
+ } else {
+ preempt_disable();
+ pagefault_disable();
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 47206a21bb90..416736e2b803 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -130,7 +130,7 @@ static void devfreq_set_freq_table(struct devfreq *devfreq)
+ * @devfreq: the devfreq instance
+ * @freq: the update target frequency
+ */
+-static int devfreq_update_status(struct devfreq *devfreq, unsigned long freq)
++int devfreq_update_status(struct devfreq *devfreq, unsigned long freq)
+ {
+ int lev, prev_lev, ret = 0;
+ unsigned long cur_time;
+@@ -166,6 +166,7 @@ static int devfreq_update_status(struct devfreq *devfreq, unsigned long freq)
+ devfreq->last_stat_updated = cur_time;
+ return ret;
+ }
++EXPORT_SYMBOL(devfreq_update_status);
+
+ /**
+ * find_devfreq_governor() - find devfreq governor from name
+@@ -939,6 +940,9 @@ static ssize_t governor_store(struct device *dev, struct device_attribute *attr,
+ if (df->governor == governor) {
+ ret = 0;
+ goto out;
++ } else if (df->governor->immutable || governor->immutable) {
++ ret = -EINVAL;
++ goto out;
+ }
+
+ if (df->governor) {
+@@ -968,13 +972,33 @@ static ssize_t available_governors_show(struct device *d,
+ struct device_attribute *attr,
+ char *buf)
+ {
+- struct devfreq_governor *tmp_governor;
++ struct devfreq *df = to_devfreq(d);
+ ssize_t count = 0;
+
+ mutex_lock(&devfreq_list_lock);
+- list_for_each_entry(tmp_governor, &devfreq_governor_list, node)
+- count += scnprintf(&buf[count], (PAGE_SIZE - count - 2),
+- "%s ", tmp_governor->name);
++
++ /*
++ * The devfreq with immutable governor (e.g., passive) shows
++ * only own governor.
++ */
++ if (df->governor->immutable) {
++ count = scnprintf(&buf[count], DEVFREQ_NAME_LEN,
++ "%s ", df->governor_name);
++ /*
++ * The devfreq device shows the registered governor except for
++ * immutable governors such as passive governor .
++ */
++ } else {
++ struct devfreq_governor *governor;
++
++ list_for_each_entry(governor, &devfreq_governor_list, node) {
++ if (governor->immutable)
++ continue;
++ count += scnprintf(&buf[count], (PAGE_SIZE - count - 2),
++ "%s ", governor->name);
++ }
++ }
++
+ mutex_unlock(&devfreq_list_lock);
+
+ /* Truncate the trailing space */
+diff --git a/drivers/devfreq/governor.h b/drivers/devfreq/governor.h
+index fad7d6321978..71576b8bdfef 100644
+--- a/drivers/devfreq/governor.h
++++ b/drivers/devfreq/governor.h
+@@ -38,4 +38,6 @@ extern void devfreq_interval_update(struct devfreq *devfreq,
+ extern int devfreq_add_governor(struct devfreq_governor *governor);
+ extern int devfreq_remove_governor(struct devfreq_governor *governor);
+
++extern int devfreq_update_status(struct devfreq *devfreq, unsigned long freq);
++
+ #endif /* _GOVERNOR_H */
+diff --git a/drivers/devfreq/governor_passive.c b/drivers/devfreq/governor_passive.c
+index 9ef46e2592c4..5be96b2249e7 100644
+--- a/drivers/devfreq/governor_passive.c
++++ b/drivers/devfreq/governor_passive.c
+@@ -112,6 +112,11 @@ static int update_devfreq_passive(struct devfreq *devfreq, unsigned long freq)
+ if (ret < 0)
+ goto out;
+
++ if (devfreq->profile->freq_table
++ && (devfreq_update_status(devfreq, freq)))
++ dev_err(&devfreq->dev,
++ "Couldn't update frequency transition information.\n");
++
+ devfreq->previous_freq = freq;
+
+ out:
+@@ -179,6 +184,7 @@ static int devfreq_passive_event_handler(struct devfreq *devfreq,
+
+ static struct devfreq_governor devfreq_passive = {
+ .name = "passive",
++ .immutable = 1,
+ .get_target_freq = devfreq_passive_get_target_freq,
+ .event_handler = devfreq_passive_event_handler,
+ };
+diff --git a/drivers/dma/ipu/ipu_irq.c b/drivers/dma/ipu/ipu_irq.c
+index dd184b50e5b4..284627806b88 100644
+--- a/drivers/dma/ipu/ipu_irq.c
++++ b/drivers/dma/ipu/ipu_irq.c
+@@ -272,7 +272,7 @@ static void ipu_irq_handler(struct irq_desc *desc)
+ u32 status;
+ int i, line;
+
+- for (i = IPU_IRQ_NR_FN_BANKS; i < IPU_IRQ_NR_BANKS; i++) {
++ for (i = 0; i < IPU_IRQ_NR_BANKS; i++) {
+ struct ipu_irq_bank *bank = irq_bank + i;
+
+ raw_spin_lock(&bank_lock);
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index 5fb4c6d9209b..be34547cdb68 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -157,6 +157,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
+ }
+
+ init_completion(&open_info->waitevent);
++ open_info->waiting_channel = newchannel;
+
+ open_msg = (struct vmbus_channel_open_channel *)open_info->msg;
+ open_msg->header.msgtype = CHANNELMSG_OPENCHANNEL;
+@@ -181,7 +182,7 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
+ spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+
+ ret = vmbus_post_msg(open_msg,
+- sizeof(struct vmbus_channel_open_channel));
++ sizeof(struct vmbus_channel_open_channel), true);
+
+ if (ret != 0) {
+ err = ret;
+@@ -194,6 +195,11 @@ int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size,
+ list_del(&open_info->msglistentry);
+ spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+
++ if (newchannel->rescind) {
++ err = -ENODEV;
++ goto error_free_gpadl;
++ }
++
+ if (open_info->response.open_result.status) {
+ err = -EAGAIN;
+ goto error_free_gpadl;
+@@ -233,7 +239,7 @@ int vmbus_send_tl_connect_request(const uuid_le *shv_guest_servie_id,
+ conn_msg.guest_endpoint_id = *shv_guest_servie_id;
+ conn_msg.host_service_id = *shv_host_servie_id;
+
+- return vmbus_post_msg(&conn_msg, sizeof(conn_msg));
++ return vmbus_post_msg(&conn_msg, sizeof(conn_msg), true);
+ }
+ EXPORT_SYMBOL_GPL(vmbus_send_tl_connect_request);
+
+@@ -405,6 +411,7 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
+ return ret;
+
+ init_completion(&msginfo->waitevent);
++ msginfo->waiting_channel = channel;
+
+ gpadlmsg = (struct vmbus_channel_gpadl_header *)msginfo->msg;
+ gpadlmsg->header.msgtype = CHANNELMSG_GPADL_HEADER;
+@@ -419,7 +426,7 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
+ spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+
+ ret = vmbus_post_msg(gpadlmsg, msginfo->msgsize -
+- sizeof(*msginfo));
++ sizeof(*msginfo), true);
+ if (ret != 0)
+ goto cleanup;
+
+@@ -433,14 +440,19 @@ int vmbus_establish_gpadl(struct vmbus_channel *channel, void *kbuffer,
+ gpadl_body->gpadl = next_gpadl_handle;
+
+ ret = vmbus_post_msg(gpadl_body,
+- submsginfo->msgsize -
+- sizeof(*submsginfo));
++ submsginfo->msgsize - sizeof(*submsginfo),
++ true);
+ if (ret != 0)
+ goto cleanup;
+
+ }
+ wait_for_completion(&msginfo->waitevent);
+
++ if (channel->rescind) {
++ ret = -ENODEV;
++ goto cleanup;
++ }
++
+ /* At this point, we received the gpadl created msg */
+ *gpadl_handle = gpadlmsg->gpadl;
+
+@@ -474,6 +486,7 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
+ return -ENOMEM;
+
+ init_completion(&info->waitevent);
++ info->waiting_channel = channel;
+
+ msg = (struct vmbus_channel_gpadl_teardown *)info->msg;
+
+@@ -485,14 +498,19 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
+ list_add_tail(&info->msglistentry,
+ &vmbus_connection.chn_msg_list);
+ spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+- ret = vmbus_post_msg(msg,
+- sizeof(struct vmbus_channel_gpadl_teardown));
++ ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_gpadl_teardown),
++ true);
+
+ if (ret)
+ goto post_msg_err;
+
+ wait_for_completion(&info->waitevent);
+
++ if (channel->rescind) {
++ ret = -ENODEV;
++ goto post_msg_err;
++ }
++
+ post_msg_err:
+ spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
+ list_del(&info->msglistentry);
+@@ -557,7 +575,8 @@ static int vmbus_close_internal(struct vmbus_channel *channel)
+ msg->header.msgtype = CHANNELMSG_CLOSECHANNEL;
+ msg->child_relid = channel->offermsg.child_relid;
+
+- ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_close_channel));
++ ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_close_channel),
++ true);
+
+ if (ret) {
+ pr_err("Close failed: close post msg return is %d\n", ret);
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 26b419203f16..0af7e39006c8 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -147,6 +147,29 @@ static const struct {
+ { HV_RDV_GUID },
+ };
+
++/*
++ * The rescinded channel may be blocked waiting for a response from the
++ * host; wake up any such waiter so it can back out.
++ */
++static void vmbus_rescind_cleanup(struct vmbus_channel *channel)
++{
++ struct vmbus_channel_msginfo *msginfo;
++ unsigned long flags;
++
++
++ spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
++
++ list_for_each_entry(msginfo, &vmbus_connection.chn_msg_list,
++ msglistentry) {
++
++ if (msginfo->waiting_channel == channel) {
++ complete(&msginfo->waitevent);
++ break;
++ }
++ }
++ spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
++}
++
+ static bool is_unsupported_vmbus_devs(const uuid_le *guid)
+ {
+ int i;
+@@ -321,7 +344,8 @@ static void vmbus_release_relid(u32 relid)
+ memset(&msg, 0, sizeof(struct vmbus_channel_relid_released));
+ msg.child_relid = relid;
+ msg.header.msgtype = CHANNELMSG_RELID_RELEASED;
+- vmbus_post_msg(&msg, sizeof(struct vmbus_channel_relid_released));
++ vmbus_post_msg(&msg, sizeof(struct vmbus_channel_relid_released),
++ true);
+ }
+
+ void hv_event_tasklet_disable(struct vmbus_channel *channel)
+@@ -728,7 +752,8 @@ void vmbus_initiate_unload(bool crash)
+ init_completion(&vmbus_connection.unload_event);
+ memset(&hdr, 0, sizeof(struct vmbus_channel_message_header));
+ hdr.msgtype = CHANNELMSG_UNLOAD;
+- vmbus_post_msg(&hdr, sizeof(struct vmbus_channel_message_header));
++ vmbus_post_msg(&hdr, sizeof(struct vmbus_channel_message_header),
++ !crash);
+
+ /*
+ * vmbus_initiate_unload() is also called on crash and the crash can be
+@@ -823,6 +848,8 @@ static void vmbus_onoffer_rescind(struct vmbus_channel_message_header *hdr)
+ channel->rescind = true;
+ spin_unlock_irqrestore(&channel->lock, flags);
+
++ vmbus_rescind_cleanup(channel);
++
+ if (channel->device_obj) {
+ if (channel->chn_rescind_callback) {
+ channel->chn_rescind_callback(channel);
+@@ -1116,8 +1143,8 @@ int vmbus_request_offers(void)
+ msg->msgtype = CHANNELMSG_REQUESTOFFERS;
+
+
+- ret = vmbus_post_msg(msg,
+- sizeof(struct vmbus_channel_message_header));
++ ret = vmbus_post_msg(msg, sizeof(struct vmbus_channel_message_header),
++ true);
+ if (ret != 0) {
+ pr_err("Unable to request offers - %d\n", ret);
+
+diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
+index 6ce8b874e833..9b72ebcd37bc 100644
+--- a/drivers/hv/connection.c
++++ b/drivers/hv/connection.c
+@@ -111,7 +111,8 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo,
+ spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+
+ ret = vmbus_post_msg(msg,
+- sizeof(struct vmbus_channel_initiate_contact));
++ sizeof(struct vmbus_channel_initiate_contact),
++ true);
+ if (ret != 0) {
+ spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
+ list_del(&msginfo->msglistentry);
+@@ -435,7 +436,7 @@ void vmbus_on_event(unsigned long data)
+ /*
+ * vmbus_post_msg - Send a msg on the vmbus's message connection
+ */
+-int vmbus_post_msg(void *buffer, size_t buflen)
++int vmbus_post_msg(void *buffer, size_t buflen, bool can_sleep)
+ {
+ union hv_connection_id conn_id;
+ int ret = 0;
+@@ -450,7 +451,7 @@ int vmbus_post_msg(void *buffer, size_t buflen)
+ * insufficient resources. Retry the operation a couple of
+ * times before giving up.
+ */
+- while (retries < 20) {
++ while (retries < 100) {
+ ret = hv_post_message(conn_id, 1, buffer, buflen);
+
+ switch (ret) {
+@@ -473,8 +474,14 @@ int vmbus_post_msg(void *buffer, size_t buflen)
+ }
+
+ retries++;
+- udelay(usec);
+- if (usec < 2048)
++ if (can_sleep && usec > 1000)
++ msleep(usec / 1000);
++ else if (usec < MAX_UDELAY_MS * 1000)
++ udelay(usec);
++ else
++ mdelay(usec / 1000);
++
++ if (usec < 256000)
+ usec *= 2;
+ }
+ return ret;
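
The connection.c hunk above threads a can_sleep flag into vmbus_post_msg() and switches the retry loop to a capped exponential backoff: when sleeping is allowed and the delay has grown past 1 ms it uses msleep(), otherwise it busy-waits, and the retry budget grows from 20 to 100 attempts. A simplified userspace sketch of that policy, with try_post() standing in for hv_post_message(), might look like this (the stub and its failure pattern are assumptions for illustration):

/*
 * Userspace sketch of the retry/backoff policy used above: double the
 * delay up to a cap, and choose a sleeping vs. busy wait depending on
 * context. try_post() is a stand-in for the real post routine.
 */
#include <stdio.h>
#include <unistd.h>
#include <errno.h>

static int try_post(int attempt)
{
        return attempt < 5 ? -EAGAIN : 0;   /* pretend the host is busy at first */
}

static int post_with_retry(int can_sleep)
{
        unsigned int usec = 1, retries = 0;
        int ret;

        while (retries < 100) {
                ret = try_post(retries);
                if (ret != -EAGAIN)
                        return ret;          /* success or a hard error */

                retries++;
                if (can_sleep && usec > 1000)
                        usleep(usec);        /* may sleep: cheap on the CPU */
                else
                        for (volatile unsigned int i = 0; i < usec; i++)
                                ;            /* crude busy wait for atomic context */

                if (usec < 256000)
                        usec *= 2;           /* exponential backoff, capped */
        }
        return ret;
}

int main(void)
{
        printf("post_with_retry() = %d\n", post_with_retry(1));
        return 0;
}
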
+diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
+index b44b32f21e61..fbd8ce6d7ff3 100644
+--- a/drivers/hv/hv.c
++++ b/drivers/hv/hv.c
+@@ -309,9 +309,10 @@ void hv_cleanup(bool crash)
+
+ hypercall_msr.as_uint64 = 0;
+ wrmsrl(HV_X64_MSR_REFERENCE_TSC, hypercall_msr.as_uint64);
+- if (!crash)
++ if (!crash) {
+ vfree(hv_context.tsc_page);
+- hv_context.tsc_page = NULL;
++ hv_context.tsc_page = NULL;
++ }
+ }
+ #endif
+ }
+@@ -411,7 +412,7 @@ int hv_synic_alloc(void)
+ goto err;
+ }
+
+- for_each_online_cpu(cpu) {
++ for_each_present_cpu(cpu) {
+ hv_context.event_dpc[cpu] = kmalloc(size, GFP_ATOMIC);
+ if (hv_context.event_dpc[cpu] == NULL) {
+ pr_err("Unable to allocate event dpc\n");
+@@ -457,6 +458,8 @@ int hv_synic_alloc(void)
+ pr_err("Unable to allocate post msg page\n");
+ goto err;
+ }
++
++ INIT_LIST_HEAD(&hv_context.percpu_list[cpu]);
+ }
+
+ return 0;
+@@ -482,7 +485,7 @@ void hv_synic_free(void)
+ int cpu;
+
+ kfree(hv_context.hv_numa_map);
+- for_each_online_cpu(cpu)
++ for_each_present_cpu(cpu)
+ hv_synic_free_cpu(cpu);
+ }
+
+@@ -552,8 +555,6 @@ void hv_synic_init(void *arg)
+ rdmsrl(HV_X64_MSR_VP_INDEX, vp_index);
+ hv_context.vp_index[cpu] = (u32)vp_index;
+
+- INIT_LIST_HEAD(&hv_context.percpu_list[cpu]);
+-
+ /*
+ * Register the per-cpu clockevent source.
+ */
+diff --git a/drivers/hv/hv_fcopy.c b/drivers/hv/hv_fcopy.c
+index 8b2ba98831ec..e47d8c9db03a 100644
+--- a/drivers/hv/hv_fcopy.c
++++ b/drivers/hv/hv_fcopy.c
+@@ -61,6 +61,7 @@ static DECLARE_WORK(fcopy_send_work, fcopy_send_data);
+ static const char fcopy_devname[] = "vmbus/hv_fcopy";
+ static u8 *recv_buffer;
+ static struct hvutil_transport *hvt;
++static struct completion release_event;
+ /*
+ * This state maintains the version number registered by the daemon.
+ */
+@@ -317,6 +318,7 @@ static void fcopy_on_reset(void)
+
+ if (cancel_delayed_work_sync(&fcopy_timeout_work))
+ fcopy_respond_to_host(HV_E_FAIL);
++ complete(&release_event);
+ }
+
+ int hv_fcopy_init(struct hv_util_service *srv)
+@@ -324,6 +326,7 @@ int hv_fcopy_init(struct hv_util_service *srv)
+ recv_buffer = srv->recv_buffer;
+ fcopy_transaction.recv_channel = srv->channel;
+
++ init_completion(&release_event);
+ /*
+ * When this driver loads, the user level daemon that
+ * processes the host requests may not yet be running.
+@@ -345,4 +348,5 @@ void hv_fcopy_deinit(void)
+ fcopy_transaction.state = HVUTIL_DEVICE_DYING;
+ cancel_delayed_work_sync(&fcopy_timeout_work);
+ hvutil_transport_destroy(hvt);
++ wait_for_completion(&release_event);
+ }
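
The hv_fcopy, hv_kvp and hv_snapshot hunks all add the same handshake: deinit waits on a completion that the on_reset callback signals, so the module cannot tear down state while the reset path is still using it. A rough userspace analogy follows, using a POSIX semaphore in place of the kernel's struct completion (the thread role and timing are assumptions for illustration; build with -pthread):

/*
 * Userspace sketch of the completion handshake added above: deinit must
 * not return until the reset handler has actually run.
 */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
#include <unistd.h>

static sem_t release_event;                 /* init_completion()        */

static void *transport_thread(void *arg)
{
        (void)arg;
        usleep(100000);                     /* pretend the daemon disconnects */
        printf("on_reset: cleaning up transaction\n");
        sem_post(&release_event);           /* complete(&release_event) */
        return NULL;
}

int main(void)
{
        pthread_t t;

        sem_init(&release_event, 0, 0);
        pthread_create(&t, NULL, transport_thread, NULL);

        printf("deinit: waiting for reset handler\n");
        sem_wait(&release_event);           /* wait_for_completion()    */
        printf("deinit: safe to free resources\n");

        pthread_join(t, NULL);
        sem_destroy(&release_event);
        return 0;
}
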
+diff --git a/drivers/hv/hv_kvp.c b/drivers/hv/hv_kvp.c
+index 5e1fdc8d32ab..3abfc5983c97 100644
+--- a/drivers/hv/hv_kvp.c
++++ b/drivers/hv/hv_kvp.c
+@@ -88,6 +88,7 @@ static DECLARE_WORK(kvp_sendkey_work, kvp_send_key);
+ static const char kvp_devname[] = "vmbus/hv_kvp";
+ static u8 *recv_buffer;
+ static struct hvutil_transport *hvt;
++static struct completion release_event;
+ /*
+ * Register the kernel component with the user-level daemon.
+ * As part of this registration, pass the LIC version number.
+@@ -716,6 +717,7 @@ static void kvp_on_reset(void)
+ if (cancel_delayed_work_sync(&kvp_timeout_work))
+ kvp_respond_to_host(NULL, HV_E_FAIL);
+ kvp_transaction.state = HVUTIL_DEVICE_INIT;
++ complete(&release_event);
+ }
+
+ int
+@@ -724,6 +726,7 @@ hv_kvp_init(struct hv_util_service *srv)
+ recv_buffer = srv->recv_buffer;
+ kvp_transaction.recv_channel = srv->channel;
+
++ init_completion(&release_event);
+ /*
+ * When this driver loads, the user level daemon that
+ * processes the host requests may not yet be running.
+@@ -747,4 +750,5 @@ void hv_kvp_deinit(void)
+ cancel_delayed_work_sync(&kvp_timeout_work);
+ cancel_work_sync(&kvp_sendkey_work);
+ hvutil_transport_destroy(hvt);
++ wait_for_completion(&release_event);
+ }
+diff --git a/drivers/hv/hv_snapshot.c b/drivers/hv/hv_snapshot.c
+index eee238cc60bd..4e543dbb731a 100644
+--- a/drivers/hv/hv_snapshot.c
++++ b/drivers/hv/hv_snapshot.c
+@@ -69,6 +69,7 @@ static int dm_reg_value;
+ static const char vss_devname[] = "vmbus/hv_vss";
+ static __u8 *recv_buffer;
+ static struct hvutil_transport *hvt;
++static struct completion release_event;
+
+ static void vss_timeout_func(struct work_struct *dummy);
+ static void vss_handle_request(struct work_struct *dummy);
+@@ -345,11 +346,13 @@ static void vss_on_reset(void)
+ if (cancel_delayed_work_sync(&vss_timeout_work))
+ vss_respond_to_host(HV_E_FAIL);
+ vss_transaction.state = HVUTIL_DEVICE_INIT;
++ complete(&release_event);
+ }
+
+ int
+ hv_vss_init(struct hv_util_service *srv)
+ {
++ init_completion(&release_event);
+ if (vmbus_proto_version < VERSION_WIN8_1) {
+ pr_warn("Integration service 'Backup (volume snapshot)'"
+ " not supported on this host version.\n");
+@@ -382,4 +385,5 @@ void hv_vss_deinit(void)
+ cancel_delayed_work_sync(&vss_timeout_work);
+ cancel_work_sync(&vss_handle_request_work);
+ hvutil_transport_destroy(hvt);
++ wait_for_completion(&release_event);
+ }
+diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
+index 0675b395ce5c..27982df20421 100644
+--- a/drivers/hv/hyperv_vmbus.h
++++ b/drivers/hv/hyperv_vmbus.h
+@@ -683,7 +683,7 @@ void vmbus_free_channels(void);
+ int vmbus_connect(void);
+ void vmbus_disconnect(void);
+
+-int vmbus_post_msg(void *buffer, size_t buflen);
++int vmbus_post_msg(void *buffer, size_t buflen, bool can_sleep);
+
+ void vmbus_on_event(unsigned long data);
+ void vmbus_on_msg_dpc(unsigned long data);
+diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
+index 308dbda700eb..e94ed1c22c8b 100644
+--- a/drivers/hv/ring_buffer.c
++++ b/drivers/hv/ring_buffer.c
+@@ -298,6 +298,9 @@ int hv_ringbuffer_write(struct vmbus_channel *channel,
+ unsigned long flags = 0;
+ struct hv_ring_buffer_info *outring_info = &channel->outbound;
+
++ if (channel->rescind)
++ return -ENODEV;
++
+ for (i = 0; i < kv_count; i++)
+ totalbytes_towrite += kv_list[i].iov_len;
+
+@@ -350,6 +353,10 @@ int hv_ringbuffer_write(struct vmbus_channel *channel,
+ spin_unlock_irqrestore(&outring_info->ring_lock, flags);
+
+ hv_signal_on_write(old_write, channel, kick_q);
++
++ if (channel->rescind)
++ return -ENODEV;
++
+ return 0;
+ }
+
+diff --git a/drivers/hwmon/it87.c b/drivers/hwmon/it87.c
+index ad82cb28d87a..43146162c122 100644
+--- a/drivers/hwmon/it87.c
++++ b/drivers/hwmon/it87.c
+@@ -1300,25 +1300,35 @@ static ssize_t set_pwm_enable(struct device *dev, struct device_attribute *attr,
+ it87_write_value(data, IT87_REG_FAN_MAIN_CTRL,
+ data->fan_main_ctrl);
+ } else {
++ u8 ctrl;
++
+ /* No on/off mode, set maximum pwm value */
+ data->pwm_duty[nr] = pwm_to_reg(data, 0xff);
+ it87_write_value(data, IT87_REG_PWM_DUTY[nr],
+ data->pwm_duty[nr]);
+ /* and set manual mode */
+- data->pwm_ctrl[nr] = has_newer_autopwm(data) ?
+- data->pwm_temp_map[nr] :
+- data->pwm_duty[nr];
+- it87_write_value(data, IT87_REG_PWM[nr],
+- data->pwm_ctrl[nr]);
++ if (has_newer_autopwm(data)) {
++ ctrl = (data->pwm_ctrl[nr] & 0x7c) |
++ data->pwm_temp_map[nr];
++ } else {
++ ctrl = data->pwm_duty[nr];
++ }
++ data->pwm_ctrl[nr] = ctrl;
++ it87_write_value(data, IT87_REG_PWM[nr], ctrl);
+ }
+ } else {
+- if (val == 1) /* Manual mode */
+- data->pwm_ctrl[nr] = has_newer_autopwm(data) ?
+- data->pwm_temp_map[nr] :
+- data->pwm_duty[nr];
+- else /* Automatic mode */
+- data->pwm_ctrl[nr] = 0x80 | data->pwm_temp_map[nr];
+- it87_write_value(data, IT87_REG_PWM[nr], data->pwm_ctrl[nr]);
++ u8 ctrl;
++
++ if (has_newer_autopwm(data)) {
++ ctrl = (data->pwm_ctrl[nr] & 0x7c) |
++ data->pwm_temp_map[nr];
++ if (val != 1)
++ ctrl |= 0x80;
++ } else {
++ ctrl = (val == 1 ? data->pwm_duty[nr] : 0x80);
++ }
++ data->pwm_ctrl[nr] = ctrl;
++ it87_write_value(data, IT87_REG_PWM[nr], ctrl);
+
+ if (data->type != it8603 && nr < 3) {
+ /* set SmartGuardian mode */
+@@ -1344,6 +1354,7 @@ static ssize_t set_pwm(struct device *dev, struct device_attribute *attr,
+ return -EINVAL;
+
+ mutex_lock(&data->update_lock);
++ it87_update_pwm_ctrl(data, nr);
+ if (has_newer_autopwm(data)) {
+ /*
+ * If we are in automatic mode, the PWM duty cycle register
+@@ -1456,13 +1467,15 @@ static ssize_t set_pwm_temp_map(struct device *dev,
+ }
+
+ mutex_lock(&data->update_lock);
++ it87_update_pwm_ctrl(data, nr);
+ data->pwm_temp_map[nr] = reg;
+ /*
+ * If we are in automatic mode, write the temp mapping immediately;
+ * otherwise, just store it for later use.
+ */
+ if (data->pwm_ctrl[nr] & 0x80) {
+- data->pwm_ctrl[nr] = 0x80 | data->pwm_temp_map[nr];
++ data->pwm_ctrl[nr] = (data->pwm_ctrl[nr] & 0xfc) |
++ data->pwm_temp_map[nr];
+ it87_write_value(data, IT87_REG_PWM[nr], data->pwm_ctrl[nr]);
+ }
+ mutex_unlock(&data->update_lock);
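
The it87 change above switches set_pwm_enable()/set_pwm_temp_map() to a read-modify-write of the PWM control register, updating only the bits being changed instead of rewriting the whole byte and clobbering the rest. A tiny standalone sketch of that idiom follows; the register value and the 2-bit field mask are made up for illustration and are not the real IT87 layout:

/*
 * Sketch of the read-modify-write idiom the it87 fix switches to:
 * preserve the bits you do not own instead of rewriting the register.
 */
#include <stdio.h>
#include <stdint.h>

static uint8_t fake_reg = 0x5a;             /* pretend hardware register */

static uint8_t reg_read(void)        { return fake_reg; }
static void    reg_write(uint8_t v)  { fake_reg = v; }

/* Update only bits 0-1 (a hypothetical "temp map" field), keep the rest. */
static void set_temp_map(uint8_t map)
{
        uint8_t ctrl = reg_read();

        ctrl = (ctrl & ~0x03u) | (map & 0x03u);
        reg_write(ctrl);
}

int main(void)
{
        set_temp_map(0x01);
        printf("reg = 0x%02x (upper bits preserved)\n", reg_read());
        return 0;
}
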
+diff --git a/drivers/hwtracing/coresight/coresight-etm-perf.c b/drivers/hwtracing/coresight/coresight-etm-perf.c
+index 17741969026e..26cfac3e6de7 100644
+--- a/drivers/hwtracing/coresight/coresight-etm-perf.c
++++ b/drivers/hwtracing/coresight/coresight-etm-perf.c
+@@ -242,6 +242,7 @@ static void *etm_setup_aux(int event_cpu, void **pages,
+ if (!sink_ops(sink)->alloc_buffer)
+ goto err;
+
++ cpu = cpumask_first(mask);
+ /* Get the AUX specific data from the sink buffer */
+ event_data->snk_config =
+ sink_ops(sink)->alloc_buffer(sink, cpu, pages,
+diff --git a/drivers/hwtracing/coresight/coresight-stm.c b/drivers/hwtracing/coresight/coresight-stm.c
+index e4c55c5f9988..93fc26f01bab 100644
+--- a/drivers/hwtracing/coresight/coresight-stm.c
++++ b/drivers/hwtracing/coresight/coresight-stm.c
+@@ -356,7 +356,7 @@ static void stm_generic_unlink(struct stm_data *stm_data,
+ if (!drvdata || !drvdata->csdev)
+ return;
+
+- stm_disable(drvdata->csdev, NULL);
++ coresight_disable(drvdata->csdev);
+ }
+
+ static phys_addr_t
+diff --git a/drivers/iio/pressure/mpl115.c b/drivers/iio/pressure/mpl115.c
+index 73f2f0c46e62..8f2bce213248 100644
+--- a/drivers/iio/pressure/mpl115.c
++++ b/drivers/iio/pressure/mpl115.c
+@@ -137,6 +137,7 @@ static const struct iio_chan_spec mpl115_channels[] = {
+ {
+ .type = IIO_TEMP,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
++ .info_mask_shared_by_type =
+ BIT(IIO_CHAN_INFO_OFFSET) | BIT(IIO_CHAN_INFO_SCALE),
+ },
+ };
+diff --git a/drivers/iio/pressure/mpl3115.c b/drivers/iio/pressure/mpl3115.c
+index cc3f84139157..525644a7442d 100644
+--- a/drivers/iio/pressure/mpl3115.c
++++ b/drivers/iio/pressure/mpl3115.c
+@@ -190,7 +190,7 @@ static const struct iio_chan_spec mpl3115_channels[] = {
+ {
+ .type = IIO_PRESSURE,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+- BIT(IIO_CHAN_INFO_SCALE),
++ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ .scan_index = 0,
+ .scan_type = {
+ .sign = 'u',
+@@ -203,7 +203,7 @@ static const struct iio_chan_spec mpl3115_channels[] = {
+ {
+ .type = IIO_TEMP,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+- BIT(IIO_CHAN_INFO_SCALE),
++ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ .scan_index = 1,
+ .scan_type = {
+ .sign = 's',
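
The mpl115/mpl3115 fixes above are about a missing initializer designator: in the old tables the second BIT(...) expression had no ".info_mask_shared_by_type =" in front of it, so C's positional-initializer rules assigned it to whatever member follows info_mask_separate in the struct. The small standalone example below reproduces the pitfall with a stand-in struct (not the real struct iio_chan_spec):

/*
 * An expression without a designator initializes the member *after* the
 * last designated one, which is rarely what was intended.
 */
#include <stdio.h>

struct chan {
        unsigned int separate;
        unsigned int separate_available;    /* the member hit by mistake */
        unsigned int shared_by_type;
        int scan_index;
};

int main(void)
{
        struct chan wrong = {
                .separate = 1,
                4,                          /* no designator: lands in separate_available */
                .scan_index = 0,
        };
        struct chan right = {
                .separate = 1,
                .shared_by_type = 4,
                .scan_index = 0,
        };

        printf("wrong: shared_by_type=%u (value landed in separate_available=%u)\n",
               wrong.shared_by_type, wrong.separate_available);
        printf("right: shared_by_type=%u\n", right.shared_by_type);
        return 0;
}
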
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 3e70a9c5d79d..c377afc51da1 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -3583,6 +3583,9 @@ static int cma_accept_iw(struct rdma_id_private *id_priv,
+ struct iw_cm_conn_param iw_param;
+ int ret;
+
++ if (!conn_param)
++ return -EINVAL;
++
+ ret = cma_modify_qp_rtr(id_priv, conn_param);
+ if (ret)
+ return ret;
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 8a185250ae5a..23eead3cf77c 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -3325,13 +3325,14 @@ static int __init init_dmars(void)
+ iommu_identity_mapping |= IDENTMAP_GFX;
+ #endif
+
++ check_tylersburg_isoch();
++
+ if (iommu_identity_mapping) {
+ ret = si_domain_init(hw_pass_through);
+ if (ret)
+ goto free_iommu;
+ }
+
+- check_tylersburg_isoch();
+
+ /*
+ * If we copied translations from a previous kernel in the kdump
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index e04c61e0839e..897dc72f07c9 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -248,7 +248,7 @@ struct cache {
+ /*
+ * Fields for converting from sectors to blocks.
+ */
+- uint32_t sectors_per_block;
++ sector_t sectors_per_block;
+ int sectors_per_block_shift;
+
+ spinlock_t lock;
+@@ -3547,11 +3547,11 @@ static void cache_status(struct dm_target *ti, status_type_t type,
+
+ residency = policy_residency(cache->policy);
+
+- DMEMIT("%u %llu/%llu %u %llu/%llu %u %u %u %u %u %u %lu ",
++ DMEMIT("%u %llu/%llu %llu %llu/%llu %u %u %u %u %u %u %lu ",
+ (unsigned)DM_CACHE_METADATA_BLOCK_SIZE,
+ (unsigned long long)(nr_blocks_metadata - nr_free_blocks_metadata),
+ (unsigned long long)nr_blocks_metadata,
+- cache->sectors_per_block,
++ (unsigned long long)cache->sectors_per_block,
+ (unsigned long long) from_cblock(residency),
+ (unsigned long long) from_cblock(cache->cache_size),
+ (unsigned) atomic_read(&cache->stats.read_hit),
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index b8f978e551d7..4a157b0f4155 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -3626,6 +3626,8 @@ static int raid_preresume(struct dm_target *ti)
+ return r;
+ }
+
++#define RESUME_STAY_FROZEN_FLAGS (CTR_FLAG_DELTA_DISKS | CTR_FLAG_DATA_OFFSET)
++
+ static void raid_resume(struct dm_target *ti)
+ {
+ struct raid_set *rs = ti->private;
+@@ -3643,7 +3645,15 @@ static void raid_resume(struct dm_target *ti)
+ mddev->ro = 0;
+ mddev->in_sync = 0;
+
+- clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
++ /*
++ * Keep the RAID set frozen if reshape/rebuild flags are set.
++ * The RAID set is unfrozen once the next table load/resume,
++ * which clears the reshape/rebuild flags, occurs.
++ * This ensures that the constructor for the inactive table
++ * retrieves an up-to-date reshape_position.
++ */
++ if (!(rs->ctr_flags & RESUME_STAY_FROZEN_FLAGS))
++ clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
+
+ if (mddev->suspended)
+ mddev_resume(mddev);
+diff --git a/drivers/md/dm-round-robin.c b/drivers/md/dm-round-robin.c
+index 6c25213ab38c..bdbb7e6e8212 100644
+--- a/drivers/md/dm-round-robin.c
++++ b/drivers/md/dm-round-robin.c
+@@ -17,8 +17,8 @@
+ #include <linux/module.h>
+
+ #define DM_MSG_PREFIX "multipath round-robin"
+-#define RR_MIN_IO 1000
+-#define RR_VERSION "1.1.0"
++#define RR_MIN_IO 1
++#define RR_VERSION "1.2.0"
+
+ /*-----------------------------------------------------------------
+ * Path-handling code, paths are held in lists
+@@ -47,44 +47,19 @@ struct selector {
+ struct list_head valid_paths;
+ struct list_head invalid_paths;
+ spinlock_t lock;
+- struct dm_path * __percpu *current_path;
+- struct percpu_counter repeat_count;
+ };
+
+-static void set_percpu_current_path(struct selector *s, struct dm_path *path)
+-{
+- int cpu;
+-
+- for_each_possible_cpu(cpu)
+- *per_cpu_ptr(s->current_path, cpu) = path;
+-}
+-
+ static struct selector *alloc_selector(void)
+ {
+ struct selector *s = kmalloc(sizeof(*s), GFP_KERNEL);
+
+- if (!s)
+- return NULL;
+-
+- INIT_LIST_HEAD(&s->valid_paths);
+- INIT_LIST_HEAD(&s->invalid_paths);
+- spin_lock_init(&s->lock);
+-
+- s->current_path = alloc_percpu(struct dm_path *);
+- if (!s->current_path)
+- goto out_current_path;
+- set_percpu_current_path(s, NULL);
+-
+- if (percpu_counter_init(&s->repeat_count, 0, GFP_KERNEL))
+- goto out_repeat_count;
++ if (s) {
++ INIT_LIST_HEAD(&s->valid_paths);
++ INIT_LIST_HEAD(&s->invalid_paths);
++ spin_lock_init(&s->lock);
++ }
+
+ return s;
+-
+-out_repeat_count:
+- free_percpu(s->current_path);
+-out_current_path:
+- kfree(s);
+- return NULL;;
+ }
+
+ static int rr_create(struct path_selector *ps, unsigned argc, char **argv)
+@@ -105,8 +80,6 @@ static void rr_destroy(struct path_selector *ps)
+
+ free_paths(&s->valid_paths);
+ free_paths(&s->invalid_paths);
+- free_percpu(s->current_path);
+- percpu_counter_destroy(&s->repeat_count);
+ kfree(s);
+ ps->context = NULL;
+ }
+@@ -157,6 +130,11 @@ static int rr_add_path(struct path_selector *ps, struct dm_path *path,
+ return -EINVAL;
+ }
+
++ if (repeat_count > 1) {
++ DMWARN_LIMIT("repeat_count > 1 is deprecated, using 1 instead");
++ repeat_count = 1;
++ }
++
+ /* allocate the path */
+ pi = kmalloc(sizeof(*pi), GFP_KERNEL);
+ if (!pi) {
+@@ -183,9 +161,6 @@ static void rr_fail_path(struct path_selector *ps, struct dm_path *p)
+ struct path_info *pi = p->pscontext;
+
+ spin_lock_irqsave(&s->lock, flags);
+- if (p == *this_cpu_ptr(s->current_path))
+- set_percpu_current_path(s, NULL);
+-
+ list_move(&pi->list, &s->invalid_paths);
+ spin_unlock_irqrestore(&s->lock, flags);
+ }
+@@ -208,29 +183,15 @@ static struct dm_path *rr_select_path(struct path_selector *ps, size_t nr_bytes)
+ unsigned long flags;
+ struct selector *s = ps->context;
+ struct path_info *pi = NULL;
+- struct dm_path *current_path = NULL;
+-
+- local_irq_save(flags);
+- current_path = *this_cpu_ptr(s->current_path);
+- if (current_path) {
+- percpu_counter_dec(&s->repeat_count);
+- if (percpu_counter_read_positive(&s->repeat_count) > 0) {
+- local_irq_restore(flags);
+- return current_path;
+- }
+- }
+
+- spin_lock(&s->lock);
++ spin_lock_irqsave(&s->lock, flags);
+ if (!list_empty(&s->valid_paths)) {
+ pi = list_entry(s->valid_paths.next, struct path_info, list);
+ list_move_tail(&pi->list, &s->valid_paths);
+- percpu_counter_set(&s->repeat_count, pi->repeat_count);
+- set_percpu_current_path(s, pi->path);
+- current_path = pi->path;
+ }
+ spin_unlock_irqrestore(&s->lock, flags);
+
+- return current_path;
++ return pi ? pi->path : NULL;
+ }
+
+ static struct path_selector_type rr_ps = {
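
The dm-round-robin rewrite above drops the per-CPU cached path and percpu repeat counter and returns to the simplest possible selection: take the first entry of the valid-path list and rotate it to the tail, all under the selector lock (with repeat_count effectively pinned to 1). A userspace sketch of that rotation, using an array index and a mutex in place of the kernel list and spinlock (the path names are made up):

/*
 * Minimal round-robin path selection: pick the next entry and advance,
 * all under one lock, with no cached "current path" state.
 */
#include <stdio.h>
#include <pthread.h>

static const char *paths[] = { "sda", "sdb", "sdc" };
static unsigned int next_path;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static const char *rr_select_path(void)
{
        const char *p;

        pthread_mutex_lock(&lock);
        p = paths[next_path];
        next_path = (next_path + 1) % 3;    /* "move to tail" */
        pthread_mutex_unlock(&lock);
        return p;
}

int main(void)
{
        for (int i = 0; i < 5; i++)
                printf("I/O %d -> %s\n", i, rr_select_path());
        return 0;
}
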
+diff --git a/drivers/md/dm-stats.c b/drivers/md/dm-stats.c
+index 38b05f23b96c..0250e7e521ab 100644
+--- a/drivers/md/dm-stats.c
++++ b/drivers/md/dm-stats.c
+@@ -175,6 +175,7 @@ static void dm_stat_free(struct rcu_head *head)
+ int cpu;
+ struct dm_stat *s = container_of(head, struct dm_stat, rcu_head);
+
++ kfree(s->histogram_boundaries);
+ kfree(s->program_id);
+ kfree(s->aux_data);
+ for_each_possible_cpu(cpu) {
+diff --git a/drivers/md/linear.c b/drivers/md/linear.c
+index 5975c9915684..26a73b2002cf 100644
+--- a/drivers/md/linear.c
++++ b/drivers/md/linear.c
+@@ -53,18 +53,26 @@ static inline struct dev_info *which_dev(struct mddev *mddev, sector_t sector)
+ return conf->disks + lo;
+ }
+
++/*
++ * In linear_congested(), conf->raid_disks is used as a copy of
++ * mddev->raid_disks to iterate conf->disks[]. Because conf->raid_disks
++ * and conf->disks[] are both created in linear_conf(), they are always
++ * consistent with each other, whereas mddev->raid_disks may not be.
++ */
+ static int linear_congested(struct mddev *mddev, int bits)
+ {
+ struct linear_conf *conf;
+ int i, ret = 0;
+
+- conf = mddev->private;
++ rcu_read_lock();
++ conf = rcu_dereference(mddev->private);
+
+- for (i = 0; i < mddev->raid_disks && !ret ; i++) {
++ for (i = 0; i < conf->raid_disks && !ret ; i++) {
+ struct request_queue *q = bdev_get_queue(conf->disks[i].rdev->bdev);
+ ret |= bdi_congested(&q->backing_dev_info, bits);
+ }
+
++ rcu_read_unlock();
+ return ret;
+ }
+
+@@ -144,6 +152,19 @@ static struct linear_conf *linear_conf(struct mddev *mddev, int raid_disks)
+ conf->disks[i-1].end_sector +
+ conf->disks[i].rdev->sectors;
+
++ /*
++ * conf->raid_disks is a copy of mddev->raid_disks. The reason to
++ * keep this copy in struct linear_conf is that mddev->raid_disks
++ * may not be consistent with the number of pointers in conf->disks[]
++ * when it is updated in linear_add() and used to iterate the old
++ * conf->disks[] array in linear_congested().
++ * Here conf->raid_disks is always consistent with the number of
++ * pointers in the conf->disks[] array, and mddev->private is updated
++ * with rcu_assign_pointer() in linear_add(), so the race can be
++ * avoided.
++ */
++ conf->raid_disks = raid_disks;
++
+ return conf;
+
+ out:
+@@ -196,15 +217,23 @@ static int linear_add(struct mddev *mddev, struct md_rdev *rdev)
+ if (!newconf)
+ return -ENOMEM;
+
++ /* newconf->raid_disks already keeps a copy of the increased
++ * value of mddev->raid_disks; WARN_ONCE() is just used to make
++ * sure of this. It is possible that oldconf is still referenced
++ * in linear_congested(), therefore kfree_rcu() is used so that
++ * oldconf is freed only after no one uses it anymore.
++ */
+ mddev_suspend(mddev);
+- oldconf = mddev->private;
++ oldconf = rcu_dereference(mddev->private);
+ mddev->raid_disks++;
+- mddev->private = newconf;
++ WARN_ONCE(mddev->raid_disks != newconf->raid_disks,
++ "copied raid_disks doesn't match mddev->raid_disks");
++ rcu_assign_pointer(mddev->private, newconf);
+ md_set_array_sectors(mddev, linear_size(mddev, 0, 0));
+ set_capacity(mddev->gendisk, mddev->array_sectors);
+ mddev_resume(mddev);
+ revalidate_disk(mddev->gendisk);
+- kfree(oldconf);
++ kfree_rcu(oldconf, rcu);
+ return 0;
+ }
+
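
The md/linear change above makes the conf swap RCU-safe: readers in linear_congested() use rcu_dereference(), linear_add() publishes the new conf with rcu_assign_pointer(), and the old conf is released with kfree_rcu() only after a grace period. A faithful demo needs the kernel (or liburcu); as a rough userspace analogy of just the publish/read ordering, C11 release/acquire atomics show why the object must be fully initialized before the pointer is published. This is not RCU: there is no grace period, and the superseded object is simply leaked here.

/*
 * Publish/read ordering analogy only -- not RCU. Initialize first, then
 * publish with release semantics; readers load with acquire semantics.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdatomic.h>

struct conf {
        int raid_disks;
};

static _Atomic(struct conf *) current_conf;

static void publish(int disks)
{
        struct conf *c = malloc(sizeof(*c));

        if (!c)
                return;
        c->raid_disks = disks;                          /* init first...   */
        atomic_store_explicit(&current_conf, c,
                              memory_order_release);    /* ...then publish */
}

static int read_disks(void)
{
        struct conf *c = atomic_load_explicit(&current_conf,
                                              memory_order_acquire);
        return c ? c->raid_disks : 0;
}

int main(void)
{
        publish(2);
        publish(3);                                     /* "linear_add"    */
        printf("raid_disks = %d\n", read_disks());
        return 0;
}
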
+diff --git a/drivers/md/linear.h b/drivers/md/linear.h
+index b685ddd7d7f7..8d392e6098b3 100644
+--- a/drivers/md/linear.h
++++ b/drivers/md/linear.h
+@@ -10,6 +10,7 @@ struct linear_conf
+ {
+ struct rcu_head rcu;
+ sector_t array_sectors;
++ int raid_disks; /* a copy of mddev->raid_disks */
+ struct dev_info disks[0];
+ };
+ #endif
+diff --git a/drivers/media/dvb-frontends/cxd2820r_core.c b/drivers/media/dvb-frontends/cxd2820r_core.c
+index 95267c6edb3a..f6ebbb47b9b2 100644
+--- a/drivers/media/dvb-frontends/cxd2820r_core.c
++++ b/drivers/media/dvb-frontends/cxd2820r_core.c
+@@ -615,6 +615,7 @@ static int cxd2820r_probe(struct i2c_client *client,
+ }
+
+ priv->client[0] = client;
++ priv->fe.demodulator_priv = priv;
+ priv->i2c = client->adapter;
+ priv->ts_mode = pdata->ts_mode;
+ priv->ts_clk_inv = pdata->ts_clk_inv;
+@@ -697,7 +698,6 @@ static int cxd2820r_probe(struct i2c_client *client,
+ memcpy(&priv->fe.ops, &cxd2820r_ops, sizeof(priv->fe.ops));
+ if (!pdata->attach_in_use)
+ priv->fe.ops.release = NULL;
+- priv->fe.demodulator_priv = priv;
+ i2c_set_clientdata(client, priv);
+
+ /* Setup callbacks */
+diff --git a/drivers/media/media-device.c b/drivers/media/media-device.c
+index 8756275e9fc4..892745663765 100644
+--- a/drivers/media/media-device.c
++++ b/drivers/media/media-device.c
+@@ -130,7 +130,7 @@ static long media_device_enum_entities(struct media_device *mdev,
+ * old range.
+ */
+ if (ent->function < MEDIA_ENT_F_OLD_BASE ||
+- ent->function > MEDIA_ENT_T_DEVNODE_UNKNOWN) {
++ ent->function > MEDIA_ENT_F_TUNER) {
+ if (is_media_entity_v4l2_subdev(ent))
+ entd->type = MEDIA_ENT_F_V4L2_SUBDEV_UNKNOWN;
+ else if (ent->function != MEDIA_ENT_F_IO_V4L)
+diff --git a/drivers/media/pci/dm1105/Kconfig b/drivers/media/pci/dm1105/Kconfig
+index 173daf0c0847..14fa7e40f2a6 100644
+--- a/drivers/media/pci/dm1105/Kconfig
++++ b/drivers/media/pci/dm1105/Kconfig
+@@ -1,6 +1,6 @@
+ config DVB_DM1105
+ tristate "SDMC DM1105 based PCI cards"
+- depends on DVB_CORE && PCI && I2C
++ depends on DVB_CORE && PCI && I2C && I2C_ALGOBIT
+ select DVB_PLL if MEDIA_SUBDRV_AUTOSELECT
+ select DVB_STV0299 if MEDIA_SUBDRV_AUTOSELECT
+ select DVB_STV0288 if MEDIA_SUBDRV_AUTOSELECT
+diff --git a/drivers/media/platform/am437x/am437x-vpfe.c b/drivers/media/platform/am437x/am437x-vpfe.c
+index b33b9e35e60e..05489a401c5c 100644
+--- a/drivers/media/platform/am437x/am437x-vpfe.c
++++ b/drivers/media/platform/am437x/am437x-vpfe.c
+@@ -1576,7 +1576,7 @@ static int vpfe_s_fmt(struct file *file, void *priv,
+ return -EBUSY;
+ }
+
+- ret = vpfe_try_fmt(file, priv, &format);
++ ret = __vpfe_get_format(vpfe, &format, &bpp);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/media/rc/lirc_dev.c b/drivers/media/rc/lirc_dev.c
+index 3854809e8531..7f5d109d488b 100644
+--- a/drivers/media/rc/lirc_dev.c
++++ b/drivers/media/rc/lirc_dev.c
+@@ -582,7 +582,7 @@ long lirc_dev_fop_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ result = put_user(ir->d.features, (__u32 __user *)arg);
+ break;
+ case LIRC_GET_REC_MODE:
+- if (LIRC_CAN_REC(ir->d.features)) {
++ if (!LIRC_CAN_REC(ir->d.features)) {
+ result = -ENOTTY;
+ break;
+ }
+@@ -592,7 +592,7 @@ long lirc_dev_fop_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ (__u32 __user *)arg);
+ break;
+ case LIRC_SET_REC_MODE:
+- if (LIRC_CAN_REC(ir->d.features)) {
++ if (!LIRC_CAN_REC(ir->d.features)) {
+ result = -ENOTTY;
+ break;
+ }
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-firmware.c b/drivers/media/usb/dvb-usb/dvb-usb-firmware.c
+index f0023dbb7276..ab9866024ec7 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-firmware.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-firmware.c
+@@ -35,28 +35,33 @@ static int usb_cypress_writemem(struct usb_device *udev,u16 addr,u8 *data, u8 le
+
+ int usb_cypress_load_firmware(struct usb_device *udev, const struct firmware *fw, int type)
+ {
+- struct hexline hx;
++ struct hexline *hx;
+ u8 reset;
+ int ret,pos=0;
+
++ hx = kmalloc(sizeof(*hx), GFP_KERNEL);
++ if (!hx)
++ return -ENOMEM;
++
+ /* stop the CPU */
+ reset = 1;
+ if ((ret = usb_cypress_writemem(udev,cypress[type].cpu_cs_register,&reset,1)) != 1)
+ err("could not stop the USB controller CPU.");
+
+- while ((ret = dvb_usb_get_hexline(fw,&hx,&pos)) > 0) {
+- deb_fw("writing to address 0x%04x (buffer: 0x%02x %02x)\n",hx.addr,hx.len,hx.chk);
+- ret = usb_cypress_writemem(udev,hx.addr,hx.data,hx.len);
++ while ((ret = dvb_usb_get_hexline(fw, hx, &pos)) > 0) {
++ deb_fw("writing to address 0x%04x (buffer: 0x%02x %02x)\n", hx->addr, hx->len, hx->chk);
++ ret = usb_cypress_writemem(udev, hx->addr, hx->data, hx->len);
+
+- if (ret != hx.len) {
++ if (ret != hx->len) {
+ err("error while transferring firmware (transferred size: %d, block size: %d)",
+- ret,hx.len);
++ ret, hx->len);
+ ret = -EINVAL;
+ break;
+ }
+ }
+ if (ret < 0) {
+ err("firmware download failed at %d with %d",pos,ret);
++ kfree(hx);
+ return ret;
+ }
+
+@@ -70,6 +75,8 @@ int usb_cypress_load_firmware(struct usb_device *udev, const struct firmware *fw
+ } else
+ ret = -EIO;
+
++ kfree(hx);
++
+ return ret;
+ }
+ EXPORT_SYMBOL(usb_cypress_load_firmware);
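
The dvb-usb firmware hunk above moves struct hexline from the stack to kmalloc() because the buffer is handed to the USB core, and buffers used for USB transfers must be DMA-capable, which on-stack memory is not guaranteed to be; the gs_usb change further below does the same for its config structures. A minimal userspace sketch of the resulting allocate/use/free-on-every-path shape, with do_transfer() as a stand-in for the USB helper (names and sizes are assumptions):

/*
 * Heap-allocate the transfer buffer and free it on both the error and
 * the success path -- the shape the two USB driver fixes converge on.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>

struct hexline { unsigned short addr; unsigned char len, chk, data[64]; };

static int do_transfer(const struct hexline *hx)
{
        return hx->len;                     /* pretend everything was written */
}

static int load_firmware(const unsigned char *fw, size_t fwlen)
{
        struct hexline *hx = malloc(sizeof(*hx));
        int ret = 0;

        if (!hx)
                return -ENOMEM;

        /* ... in the real driver this loops over every firmware block ... */
        hx->addr = 0;
        hx->len = (unsigned char)(fwlen > 64 ? 64 : fwlen);
        memcpy(hx->data, fw, hx->len);
        if (do_transfer(hx) != hx->len)
                ret = -EIO;

        free(hx);                           /* freed on every path */
        return ret;
}

int main(void)
{
        unsigned char fw[16] = { 0 };

        printf("load_firmware() = %d\n", load_firmware(fw, sizeof(fw)));
        return 0;
}
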
+diff --git a/drivers/media/usb/uvc/uvc_queue.c b/drivers/media/usb/uvc/uvc_queue.c
+index 77edd206d345..40e5a6b54955 100644
+--- a/drivers/media/usb/uvc/uvc_queue.c
++++ b/drivers/media/usb/uvc/uvc_queue.c
+@@ -412,7 +412,7 @@ struct uvc_buffer *uvc_queue_next_buffer(struct uvc_video_queue *queue,
+ nextbuf = NULL;
+ spin_unlock_irqrestore(&queue->irqlock, flags);
+
+- buf->state = buf->error ? VB2_BUF_STATE_ERROR : UVC_BUF_STATE_DONE;
++ buf->state = buf->error ? UVC_BUF_STATE_ERROR : UVC_BUF_STATE_DONE;
+ vb2_set_plane_payload(&buf->buf.vb2_buf, 0, buf->bytesused);
+ vb2_buffer_done(&buf->buf.vb2_buf, VB2_BUF_STATE_DONE);
+
+diff --git a/drivers/misc/mei/main.c b/drivers/misc/mei/main.c
+index e1bf54481fd6..9d0b7050c79a 100644
+--- a/drivers/misc/mei/main.c
++++ b/drivers/misc/mei/main.c
+@@ -182,32 +182,36 @@ static ssize_t mei_read(struct file *file, char __user *ubuf,
+ goto out;
+ }
+
+- if (rets == -EBUSY &&
+- !mei_cl_enqueue_ctrl_wr_cb(cl, length, MEI_FOP_READ, file)) {
+- rets = -ENOMEM;
+- goto out;
+- }
+
+- do {
+- mutex_unlock(&dev->device_lock);
+-
+- if (wait_event_interruptible(cl->rx_wait,
+- (!list_empty(&cl->rd_completed)) ||
+- (!mei_cl_is_connected(cl)))) {
++again:
++ mutex_unlock(&dev->device_lock);
++ if (wait_event_interruptible(cl->rx_wait,
++ !list_empty(&cl->rd_completed) ||
++ !mei_cl_is_connected(cl))) {
++ if (signal_pending(current))
++ return -EINTR;
++ return -ERESTARTSYS;
++ }
++ mutex_lock(&dev->device_lock);
+
+- if (signal_pending(current))
+- return -EINTR;
+- return -ERESTARTSYS;
+- }
++ if (!mei_cl_is_connected(cl)) {
++ rets = -ENODEV;
++ goto out;
++ }
+
+- mutex_lock(&dev->device_lock);
+- if (!mei_cl_is_connected(cl)) {
+- rets = -ENODEV;
+- goto out;
+- }
++ cb = mei_cl_read_cb(cl, file);
++ if (!cb) {
++ /*
++ * For amthif all the waiters are woken up,
++ * but only fp with matching cb->fp get the cb,
++ * the others have to return to wait on read.
++ */
++ if (cl == &dev->iamthif_cl)
++ goto again;
+
+- cb = mei_cl_read_cb(cl, file);
+- } while (!cb);
++ rets = 0;
++ goto out;
++ }
+
+ copy_buffer:
+ /* now copy the data to user space */
+diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
+index 278a5a435ab7..9dcb7048e3b1 100644
+--- a/drivers/mmc/host/sdhci-acpi.c
++++ b/drivers/mmc/host/sdhci-acpi.c
+@@ -467,7 +467,10 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
+ if (sdhci_acpi_flag(c, SDHCI_ACPI_SD_CD)) {
+ bool v = sdhci_acpi_flag(c, SDHCI_ACPI_SD_CD_OVERRIDE_LEVEL);
+
+- if (mmc_gpiod_request_cd(host->mmc, NULL, 0, v, 0, NULL)) {
++ err = mmc_gpiod_request_cd(host->mmc, NULL, 0, v, 0, NULL);
++ if (err) {
++ if (err == -EPROBE_DEFER)
++ goto err_free;
+ dev_warn(dev, "failed to setup card detect gpio\n");
+ c->use_runtime_pm = false;
+ }
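
The sdhci-acpi hunk above stops swallowing -EPROBE_DEFER from the card-detect GPIO lookup: a deferral now fails the probe so it is retried once the GPIO provider appears, while any other error keeps the old warn-and-continue behaviour. A self-contained sketch of that decision; request_cd_gpio() is a stand-in and the EPROBE_DEFER value is defined locally only so the example compiles:

/*
 * Defer vs. degrade: propagate -EPROBE_DEFER, warn and carry on for
 * anything else.
 */
#include <stdio.h>

#define EPROBE_DEFER 517    /* assumption: kernel value, for illustration */

static int request_cd_gpio(void)
{
        return -EPROBE_DEFER;               /* pretend the GPIO isn't ready yet */
}

static int probe(void)
{
        int err = request_cd_gpio();

        if (err) {
                if (err == -EPROBE_DEFER)
                        return err;         /* try again later, don't degrade */
                fprintf(stderr, "no card-detect gpio, using polling\n");
        }
        return 0;
}

int main(void)
{
        printf("probe() = %d\n", probe());
        return 0;
}
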
+diff --git a/drivers/mtd/nand/fsl_ifc_nand.c b/drivers/mtd/nand/fsl_ifc_nand.c
+index 0a177b1bfe3e..d1570f512f0b 100644
+--- a/drivers/mtd/nand/fsl_ifc_nand.c
++++ b/drivers/mtd/nand/fsl_ifc_nand.c
+@@ -258,9 +258,15 @@ static void fsl_ifc_run_command(struct mtd_info *mtd)
+ int bufnum = nctrl->page & priv->bufnum_mask;
+ int sector = bufnum * chip->ecc.steps;
+ int sector_end = sector + chip->ecc.steps - 1;
++ __be32 *eccstat_regs;
++
++ if (ctrl->version >= FSL_IFC_VERSION_2_0_0)
++ eccstat_regs = ifc->ifc_nand.v2_nand_eccstat;
++ else
++ eccstat_regs = ifc->ifc_nand.v1_nand_eccstat;
+
+ for (i = sector / 4; i <= sector_end / 4; i++)
+- eccstat[i] = ifc_in32(&ifc->ifc_nand.nand_eccstat[i]);
++ eccstat[i] = ifc_in32(&eccstat_regs[i]);
+
+ for (i = sector; i <= sector_end; i++) {
+ errors = check_read_ecc(mtd, ctrl, eccstat, i);
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index 77e3cc06a30c..a0dabd4038ba 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -908,10 +908,14 @@ static int gs_usb_probe(struct usb_interface *intf,
+ struct gs_usb *dev;
+ int rc = -ENOMEM;
+ unsigned int icount, i;
+- struct gs_host_config hconf = {
+- .byte_order = 0x0000beef,
+- };
+- struct gs_device_config dconf;
++ struct gs_host_config *hconf;
++ struct gs_device_config *dconf;
++
++ hconf = kmalloc(sizeof(*hconf), GFP_KERNEL);
++ if (!hconf)
++ return -ENOMEM;
++
++ hconf->byte_order = 0x0000beef;
+
+ /* send host config */
+ rc = usb_control_msg(interface_to_usbdev(intf),
+@@ -920,16 +924,22 @@ static int gs_usb_probe(struct usb_interface *intf,
+ USB_DIR_OUT|USB_TYPE_VENDOR|USB_RECIP_INTERFACE,
+ 1,
+ intf->altsetting[0].desc.bInterfaceNumber,
+- &hconf,
+- sizeof(hconf),
++ hconf,
++ sizeof(*hconf),
+ 1000);
+
++ kfree(hconf);
++
+ if (rc < 0) {
+ dev_err(&intf->dev, "Couldn't send data format (err=%d)\n",
+ rc);
+ return rc;
+ }
+
++ dconf = kmalloc(sizeof(*dconf), GFP_KERNEL);
++ if (!dconf)
++ return -ENOMEM;
++
+ /* read device config */
+ rc = usb_control_msg(interface_to_usbdev(intf),
+ usb_rcvctrlpipe(interface_to_usbdev(intf), 0),
+@@ -937,28 +947,33 @@ static int gs_usb_probe(struct usb_interface *intf,
+ USB_DIR_IN|USB_TYPE_VENDOR|USB_RECIP_INTERFACE,
+ 1,
+ intf->altsetting[0].desc.bInterfaceNumber,
+- &dconf,
+- sizeof(dconf),
++ dconf,
++ sizeof(*dconf),
+ 1000);
+ if (rc < 0) {
+ dev_err(&intf->dev, "Couldn't get device config: (err=%d)\n",
+ rc);
++ kfree(dconf);
+ return rc;
+ }
+
+- icount = dconf.icount + 1;
++ icount = dconf->icount + 1;
+ dev_info(&intf->dev, "Configuring for %d interfaces\n", icount);
+
+ if (icount > GS_MAX_INTF) {
+ dev_err(&intf->dev,
+ "Driver cannot handle more that %d CAN interfaces\n",
+ GS_MAX_INTF);
++ kfree(dconf);
+ return -EINVAL;
+ }
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+- if (!dev)
++ if (!dev) {
++ kfree(dconf);
+ return -ENOMEM;
++ }
++
+ init_usb_anchor(&dev->rx_submitted);
+
+ atomic_set(&dev->active_channels, 0);
+@@ -967,7 +982,7 @@ static int gs_usb_probe(struct usb_interface *intf,
+ dev->udev = interface_to_usbdev(intf);
+
+ for (i = 0; i < icount; i++) {
+- dev->canch[i] = gs_make_candev(i, intf, &dconf);
++ dev->canch[i] = gs_make_candev(i, intf, dconf);
+ if (IS_ERR_OR_NULL(dev->canch[i])) {
+ /* save error code to return later */
+ rc = PTR_ERR(dev->canch[i]);
+@@ -978,12 +993,15 @@ static int gs_usb_probe(struct usb_interface *intf,
+ gs_destroy_candev(dev->canch[i]);
+
+ usb_kill_anchored_urbs(&dev->rx_submitted);
++ kfree(dconf);
+ kfree(dev);
+ return rc;
+ }
+ dev->canch[i]->parent = dev;
+ }
+
++ kfree(dconf);
++
+ return 0;
+ }
+
+diff --git a/drivers/net/can/usb/usb_8dev.c b/drivers/net/can/usb/usb_8dev.c
+index 108a30e15097..d000cb62d6ae 100644
+--- a/drivers/net/can/usb/usb_8dev.c
++++ b/drivers/net/can/usb/usb_8dev.c
+@@ -951,8 +951,8 @@ static int usb_8dev_probe(struct usb_interface *intf,
+ for (i = 0; i < MAX_TX_URBS; i++)
+ priv->tx_contexts[i].echo_index = MAX_TX_URBS;
+
+- priv->cmd_msg_buffer = kzalloc(sizeof(struct usb_8dev_cmd_msg),
+- GFP_KERNEL);
++ priv->cmd_msg_buffer = devm_kzalloc(&intf->dev, sizeof(struct usb_8dev_cmd_msg),
++ GFP_KERNEL);
+ if (!priv->cmd_msg_buffer)
+ goto cleanup_candev;
+
+@@ -966,7 +966,7 @@ static int usb_8dev_probe(struct usb_interface *intf,
+ if (err) {
+ netdev_err(netdev,
+ "couldn't register CAN device: %d\n", err);
+- goto cleanup_cmd_msg_buffer;
++ goto cleanup_candev;
+ }
+
+ err = usb_8dev_cmd_version(priv, &version);
+@@ -987,9 +987,6 @@ static int usb_8dev_probe(struct usb_interface *intf,
+ cleanup_unregister_candev:
+ unregister_netdev(priv->netdev);
+
+-cleanup_cmd_msg_buffer:
+- kfree(priv->cmd_msg_buffer);
+-
+ cleanup_candev:
+ free_candev(netdev);
+
+diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
+index 749e381edd38..01f5d4db4d0e 100644
+--- a/drivers/net/wireless/ath/ath10k/core.c
++++ b/drivers/net/wireless/ath/ath10k/core.c
+@@ -1913,7 +1913,8 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode,
+ ath10k_dbg(ar, ATH10K_DBG_BOOT, "firmware %s booted\n",
+ ar->hw->wiphy->fw_version);
+
+- if (test_bit(WMI_SERVICE_EXT_RES_CFG_SUPPORT, ar->wmi.svc_map)) {
++ if (test_bit(WMI_SERVICE_EXT_RES_CFG_SUPPORT, ar->wmi.svc_map) &&
++ mode == ATH10K_FIRMWARE_MODE_NORMAL) {
+ val = 0;
+ if (ath10k_peer_stats_enabled(ar))
+ val = WMI_10_4_PEER_STATS;
+@@ -1966,10 +1967,13 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode,
+ * possible to implicitly make it correct by creating a dummy vdev and
+ * then deleting it.
+ */
+- status = ath10k_core_reset_rx_filter(ar);
+- if (status) {
+- ath10k_err(ar, "failed to reset rx filter: %d\n", status);
+- goto err_hif_stop;
++ if (mode == ATH10K_FIRMWARE_MODE_NORMAL) {
++ status = ath10k_core_reset_rx_filter(ar);
++ if (status) {
++ ath10k_err(ar,
++ "failed to reset rx filter: %d\n", status);
++ goto err_hif_stop;
++ }
+ }
+
+ /* If firmware indicates Full Rx Reorder support it must be used in a
+diff --git a/drivers/net/wireless/ath/ath5k/mac80211-ops.c b/drivers/net/wireless/ath/ath5k/mac80211-ops.c
+index dc44cfef7517..16e052d02c94 100644
+--- a/drivers/net/wireless/ath/ath5k/mac80211-ops.c
++++ b/drivers/net/wireless/ath/ath5k/mac80211-ops.c
+@@ -502,8 +502,7 @@ ath5k_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ break;
+ return -EOPNOTSUPP;
+ default:
+- WARN_ON(1);
+- return -EINVAL;
++ return -EOPNOTSUPP;
+ }
+
+ mutex_lock(&ah->lock);
+diff --git a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.h b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.h
+index 107bcfbbe0fb..cb37bf01920e 100644
+--- a/drivers/net/wireless/ath/ath9k/ar9003_eeprom.h
++++ b/drivers/net/wireless/ath/ath9k/ar9003_eeprom.h
+@@ -73,13 +73,13 @@
+ #define AR9300_OTP_BASE \
+ ((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x30000 : 0x14000)
+ #define AR9300_OTP_STATUS \
+- ((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x30018 : 0x15f18)
++ ((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x31018 : 0x15f18)
+ #define AR9300_OTP_STATUS_TYPE 0x7
+ #define AR9300_OTP_STATUS_VALID 0x4
+ #define AR9300_OTP_STATUS_ACCESS_BUSY 0x2
+ #define AR9300_OTP_STATUS_SM_BUSY 0x1
+ #define AR9300_OTP_READ_DATA \
+- ((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x3001c : 0x15f1c)
++ ((AR_SREV_9340(ah) || AR_SREV_9550(ah)) ? 0x3101c : 0x15f1c)
+
+ enum targetPowerHTRates {
+ HT_TARGET_RATE_0_8_16,
+diff --git a/drivers/net/wireless/ath/ath9k/ath9k.h b/drivers/net/wireless/ath/ath9k/ath9k.h
+index 378d3458fddb..52578ae69bd8 100644
+--- a/drivers/net/wireless/ath/ath9k/ath9k.h
++++ b/drivers/net/wireless/ath/ath9k/ath9k.h
+@@ -970,6 +970,7 @@ struct ath_softc {
+ struct survey_info *cur_survey;
+ struct survey_info survey[ATH9K_NUM_CHANNELS];
+
++ spinlock_t intr_lock;
+ struct tasklet_struct intr_tq;
+ struct tasklet_struct bcon_tasklet;
+ struct ath_hw *sc_ah;
+diff --git a/drivers/net/wireless/ath/ath9k/init.c b/drivers/net/wireless/ath/ath9k/init.c
+index 20794660d6ae..e0c1f64a29bf 100644
+--- a/drivers/net/wireless/ath/ath9k/init.c
++++ b/drivers/net/wireless/ath/ath9k/init.c
+@@ -667,6 +667,7 @@ static int ath9k_init_softc(u16 devid, struct ath_softc *sc,
+ common->bt_ant_diversity = 1;
+
+ spin_lock_init(&common->cc_lock);
++ spin_lock_init(&sc->intr_lock);
+ spin_lock_init(&sc->sc_serial_rw);
+ spin_lock_init(&sc->sc_pm_lock);
+ spin_lock_init(&sc->chan_lock);
+diff --git a/drivers/net/wireless/ath/ath9k/mac.c b/drivers/net/wireless/ath/ath9k/mac.c
+index bba85d1a6cd1..d937c39b3a0b 100644
+--- a/drivers/net/wireless/ath/ath9k/mac.c
++++ b/drivers/net/wireless/ath/ath9k/mac.c
+@@ -805,21 +805,12 @@ void ath9k_hw_disable_interrupts(struct ath_hw *ah)
+ }
+ EXPORT_SYMBOL(ath9k_hw_disable_interrupts);
+
+-void ath9k_hw_enable_interrupts(struct ath_hw *ah)
++static void __ath9k_hw_enable_interrupts(struct ath_hw *ah)
+ {
+ struct ath_common *common = ath9k_hw_common(ah);
+ u32 sync_default = AR_INTR_SYNC_DEFAULT;
+ u32 async_mask;
+
+- if (!(ah->imask & ATH9K_INT_GLOBAL))
+- return;
+-
+- if (!atomic_inc_and_test(&ah->intr_ref_cnt)) {
+- ath_dbg(common, INTERRUPT, "Do not enable IER ref count %d\n",
+- atomic_read(&ah->intr_ref_cnt));
+- return;
+- }
+-
+ if (AR_SREV_9340(ah) || AR_SREV_9550(ah) || AR_SREV_9531(ah) ||
+ AR_SREV_9561(ah))
+ sync_default &= ~AR_INTR_SYNC_HOST1_FATAL;
+@@ -841,6 +832,39 @@ void ath9k_hw_enable_interrupts(struct ath_hw *ah)
+ ath_dbg(common, INTERRUPT, "AR_IMR 0x%x IER 0x%x\n",
+ REG_READ(ah, AR_IMR), REG_READ(ah, AR_IER));
+ }
++
++void ath9k_hw_resume_interrupts(struct ath_hw *ah)
++{
++ struct ath_common *common = ath9k_hw_common(ah);
++
++ if (!(ah->imask & ATH9K_INT_GLOBAL))
++ return;
++
++ if (atomic_read(&ah->intr_ref_cnt) != 0) {
++ ath_dbg(common, INTERRUPT, "Do not enable IER ref count %d\n",
++ atomic_read(&ah->intr_ref_cnt));
++ return;
++ }
++
++ __ath9k_hw_enable_interrupts(ah);
++}
++EXPORT_SYMBOL(ath9k_hw_resume_interrupts);
++
++void ath9k_hw_enable_interrupts(struct ath_hw *ah)
++{
++ struct ath_common *common = ath9k_hw_common(ah);
++
++ if (!(ah->imask & ATH9K_INT_GLOBAL))
++ return;
++
++ if (!atomic_inc_and_test(&ah->intr_ref_cnt)) {
++ ath_dbg(common, INTERRUPT, "Do not enable IER ref count %d\n",
++ atomic_read(&ah->intr_ref_cnt));
++ return;
++ }
++
++ __ath9k_hw_enable_interrupts(ah);
++}
+ EXPORT_SYMBOL(ath9k_hw_enable_interrupts);
+
+ void ath9k_hw_set_interrupts(struct ath_hw *ah)
+diff --git a/drivers/net/wireless/ath/ath9k/mac.h b/drivers/net/wireless/ath/ath9k/mac.h
+index 3bab01435a86..770fc11b41d1 100644
+--- a/drivers/net/wireless/ath/ath9k/mac.h
++++ b/drivers/net/wireless/ath/ath9k/mac.h
+@@ -744,6 +744,7 @@ void ath9k_hw_set_interrupts(struct ath_hw *ah);
+ void ath9k_hw_enable_interrupts(struct ath_hw *ah);
+ void ath9k_hw_disable_interrupts(struct ath_hw *ah);
+ void ath9k_hw_kill_interrupts(struct ath_hw *ah);
++void ath9k_hw_resume_interrupts(struct ath_hw *ah);
+
+ void ar9002_hw_attach_mac_ops(struct ath_hw *ah);
+
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index 59e3bd0f4c20..9de8c95e6cdc 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -373,21 +373,20 @@ void ath9k_tasklet(unsigned long data)
+ struct ath_common *common = ath9k_hw_common(ah);
+ enum ath_reset_type type;
+ unsigned long flags;
+- u32 status = sc->intrstatus;
++ u32 status;
+ u32 rxmask;
+
++ spin_lock_irqsave(&sc->intr_lock, flags);
++ status = sc->intrstatus;
++ sc->intrstatus = 0;
++ spin_unlock_irqrestore(&sc->intr_lock, flags);
++
+ ath9k_ps_wakeup(sc);
+ spin_lock(&sc->sc_pcu_lock);
+
+ if (status & ATH9K_INT_FATAL) {
+ type = RESET_TYPE_FATAL_INT;
+ ath9k_queue_reset(sc, type);
+-
+- /*
+- * Increment the ref. counter here so that
+- * interrupts are enabled in the reset routine.
+- */
+- atomic_inc(&ah->intr_ref_cnt);
+ ath_dbg(common, RESET, "FATAL: Skipping interrupts\n");
+ goto out;
+ }
+@@ -403,11 +402,6 @@ void ath9k_tasklet(unsigned long data)
+ type = RESET_TYPE_BB_WATCHDOG;
+ ath9k_queue_reset(sc, type);
+
+- /*
+- * Increment the ref. counter here so that
+- * interrupts are enabled in the reset routine.
+- */
+- atomic_inc(&ah->intr_ref_cnt);
+ ath_dbg(common, RESET,
+ "BB_WATCHDOG: Skipping interrupts\n");
+ goto out;
+@@ -420,7 +414,6 @@ void ath9k_tasklet(unsigned long data)
+ if ((sc->gtt_cnt >= MAX_GTT_CNT) && !ath9k_hw_check_alive(ah)) {
+ type = RESET_TYPE_TX_GTT;
+ ath9k_queue_reset(sc, type);
+- atomic_inc(&ah->intr_ref_cnt);
+ ath_dbg(common, RESET,
+ "GTT: Skipping interrupts\n");
+ goto out;
+@@ -477,7 +470,7 @@ void ath9k_tasklet(unsigned long data)
+ ath9k_btcoex_handle_interrupt(sc, status);
+
+ /* re-enable hardware interrupt */
+- ath9k_hw_enable_interrupts(ah);
++ ath9k_hw_resume_interrupts(ah);
+ out:
+ spin_unlock(&sc->sc_pcu_lock);
+ ath9k_ps_restore(sc);
+@@ -541,7 +534,9 @@ irqreturn_t ath_isr(int irq, void *dev)
+ return IRQ_NONE;
+
+ /* Cache the status */
+- sc->intrstatus = status;
++ spin_lock(&sc->intr_lock);
++ sc->intrstatus |= status;
++ spin_unlock(&sc->intr_lock);
+
+ if (status & SCHED_INTR)
+ sched = true;
+@@ -587,7 +582,7 @@ irqreturn_t ath_isr(int irq, void *dev)
+
+ if (sched) {
+ /* turn off every interrupt */
+- ath9k_hw_disable_interrupts(ah);
++ ath9k_hw_kill_interrupts(ah);
+ tasklet_schedule(&sc->intr_tq);
+ }
+
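
The ath9k hunks above protect sc->intrstatus with a new intr_lock and make the ISR OR new status bits into it while the tasklet snapshots and clears it, so a second interrupt arriving before the tasklet runs can no longer overwrite unhandled bits (they also split interrupt re-enabling into a resume path so the manual intr_ref_cnt bumps go away). A userspace sketch of the accumulate-then-drain part, with a mutex standing in for the spinlock and plain functions for the ISR and tasklet:

/*
 * Producer ORs bits in; consumer snapshots and clears them under the
 * same lock, so nothing is lost between the two.
 */
#include <stdio.h>
#include <pthread.h>

static unsigned int intrstatus;
static pthread_mutex_t intr_lock = PTHREAD_MUTEX_INITIALIZER;

static void isr(unsigned int bits)          /* "ath_isr"       */
{
        pthread_mutex_lock(&intr_lock);
        intrstatus |= bits;                 /* accumulate, don't overwrite */
        pthread_mutex_unlock(&intr_lock);
}

static unsigned int tasklet(void)           /* "ath9k_tasklet" */
{
        unsigned int status;

        pthread_mutex_lock(&intr_lock);
        status = intrstatus;
        intrstatus = 0;                     /* drain */
        pthread_mutex_unlock(&intr_lock);
        return status;
}

int main(void)
{
        isr(0x1);
        isr(0x4);                           /* second IRQ before the tasklet runs */
        printf("tasklet sees 0x%x\n", tasklet());
        return 0;
}
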
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.h b/drivers/net/wireless/realtek/rtlwifi/pci.h
+index 578b1d900bfb..d9039ea10ba4 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.h
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.h
+@@ -271,10 +271,10 @@ struct mp_adapter {
+ };
+
+ struct rtl_pci_priv {
++ struct bt_coexist_info bt_coexist;
++ struct rtl_led_ctl ledctl;
+ struct rtl_pci dev;
+ struct mp_adapter ndis_adapter;
+- struct rtl_led_ctl ledctl;
+- struct bt_coexist_info bt_coexist;
+ };
+
+ #define rtl_pcipriv(hw) (((struct rtl_pci_priv *)(rtl_priv(hw))->priv))
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/hw.c
+index ebf663e1a81a..cab4601eba8e 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192ee/hw.c
+@@ -1006,7 +1006,7 @@ static void _rtl92ee_hw_configure(struct ieee80211_hw *hw)
+ rtl_write_word(rtlpriv, REG_SIFS_TRX, 0x100a);
+
+ /* Note Data sheet don't define */
+- rtl_write_word(rtlpriv, 0x4C7, 0x80);
++ rtl_write_byte(rtlpriv, 0x4C7, 0x80);
+
+ rtl_write_byte(rtlpriv, REG_RX_PKT_LIMIT, 0x20);
+
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
+index 1281ebe0c30a..2cbef9647acc 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
+@@ -1128,7 +1128,7 @@ static u8 _rtl8821ae_dbi_read(struct rtl_priv *rtlpriv, u16 addr)
+ }
+ if (0 == tmp) {
+ read_addr = REG_DBI_RDATA + addr % 4;
+- ret = rtl_read_word(rtlpriv, read_addr);
++ ret = rtl_read_byte(rtlpriv, read_addr);
+ }
+ return ret;
+ }
+diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.h b/drivers/net/wireless/realtek/rtlwifi/usb.h
+index a6d43d2ecd36..cdb9e06db89e 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/usb.h
++++ b/drivers/net/wireless/realtek/rtlwifi/usb.h
+@@ -146,8 +146,9 @@ struct rtl_usb {
+ };
+
+ struct rtl_usb_priv {
+- struct rtl_usb dev;
++ struct bt_coexist_info bt_coexist;
+ struct rtl_led_ctl ledctl;
++ struct rtl_usb dev;
+ };
+
+ #define rtl_usbpriv(hw) (((struct rtl_usb_priv *)(rtl_priv(hw))->priv))
+diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
+index 3efcc7bdc5fb..cd114c6787be 100644
+--- a/drivers/pci/host/pci-hyperv.c
++++ b/drivers/pci/host/pci-hyperv.c
+@@ -130,7 +130,8 @@ union pci_version {
+ */
+ union win_slot_encoding {
+ struct {
+- u32 func:8;
++ u32 dev:5;
++ u32 func:3;
+ u32 reserved:24;
+ } bits;
+ u32 slot;
+@@ -485,7 +486,8 @@ static u32 devfn_to_wslot(int devfn)
+ union win_slot_encoding wslot;
+
+ wslot.slot = 0;
+- wslot.bits.func = PCI_SLOT(devfn) | (PCI_FUNC(devfn) << 5);
++ wslot.bits.dev = PCI_SLOT(devfn);
++ wslot.bits.func = PCI_FUNC(devfn);
+
+ return wslot.slot;
+ }
+@@ -503,7 +505,7 @@ static int wslot_to_devfn(u32 wslot)
+ union win_slot_encoding slot_no;
+
+ slot_no.slot = wslot;
+- return PCI_DEVFN(0, slot_no.bits.func);
++ return PCI_DEVFN(slot_no.bits.dev, slot_no.bits.func);
+ }
+
+ /*
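
The pci-hyperv fix above splits the Windows slot encoding into a 5-bit device field and a 3-bit function field, mirroring how PCI_SLOT()/PCI_FUNC() decompose a devfn, instead of stuffing a hand-packed 8-bit value into a single func field. Below is a self-contained round-trip check of that encoding; the PCI_* macros are reproduced from their standard definitions, and the exact bitfield layout is of course compiler-defined:

/*
 * devfn <-> wslot round trip using the corrected dev/func split.
 */
#include <stdio.h>
#include <stdint.h>

#define PCI_DEVFN(slot, func)  ((((slot) & 0x1f) << 3) | ((func) & 0x07))
#define PCI_SLOT(devfn)        (((devfn) >> 3) & 0x1f)
#define PCI_FUNC(devfn)        ((devfn) & 0x07)

union win_slot_encoding {
        struct {
                uint32_t dev:5;
                uint32_t func:3;
                uint32_t reserved:24;
        } bits;
        uint32_t slot;
};

static uint32_t devfn_to_wslot(int devfn)
{
        union win_slot_encoding wslot = { .slot = 0 };

        wslot.bits.dev = PCI_SLOT(devfn);
        wslot.bits.func = PCI_FUNC(devfn);
        return wslot.slot;
}

static int wslot_to_devfn(uint32_t slot)
{
        union win_slot_encoding wslot = { .slot = slot };

        return PCI_DEVFN(wslot.bits.dev, wslot.bits.func);
}

int main(void)
{
        int devfn = PCI_DEVFN(3, 2);

        printf("devfn %#x -> wslot %#x -> devfn %#x\n",
               (unsigned)devfn, (unsigned)devfn_to_wslot(devfn),
               (unsigned)wslot_to_devfn(devfn_to_wslot(devfn)));
        return 0;
}
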
+diff --git a/drivers/pci/host/pcie-altera.c b/drivers/pci/host/pcie-altera.c
+index 0c1540225ca3..68c839f4d029 100644
+--- a/drivers/pci/host/pcie-altera.c
++++ b/drivers/pci/host/pcie-altera.c
+@@ -57,10 +57,14 @@
+ #define TLP_WRITE_TAG 0x10
+ #define RP_DEVFN 0
+ #define TLP_REQ_ID(bus, devfn) (((bus) << 8) | (devfn))
+-#define TLP_CFG_DW0(pcie, bus) \
++#define TLP_CFGRD_DW0(pcie, bus) \
+ ((((bus == pcie->root_bus_nr) ? TLP_FMTTYPE_CFGRD0 \
+ : TLP_FMTTYPE_CFGRD1) << 24) | \
+ TLP_PAYLOAD_SIZE)
++#define TLP_CFGWR_DW0(pcie, bus) \
++ ((((bus == pcie->root_bus_nr) ? TLP_FMTTYPE_CFGWR0 \
++ : TLP_FMTTYPE_CFGWR1) << 24) | \
++ TLP_PAYLOAD_SIZE)
+ #define TLP_CFG_DW1(pcie, tag, be) \
+ (((TLP_REQ_ID(pcie->root_bus_nr, RP_DEVFN)) << 16) | (tag << 8) | (be))
+ #define TLP_CFG_DW2(bus, devfn, offset) \
+@@ -222,7 +226,7 @@ static int tlp_cfg_dword_read(struct altera_pcie *pcie, u8 bus, u32 devfn,
+ {
+ u32 headers[TLP_HDR_SIZE];
+
+- headers[0] = TLP_CFG_DW0(pcie, bus);
++ headers[0] = TLP_CFGRD_DW0(pcie, bus);
+ headers[1] = TLP_CFG_DW1(pcie, TLP_READ_TAG, byte_en);
+ headers[2] = TLP_CFG_DW2(bus, devfn, where);
+
+@@ -237,7 +241,7 @@ static int tlp_cfg_dword_write(struct altera_pcie *pcie, u8 bus, u32 devfn,
+ u32 headers[TLP_HDR_SIZE];
+ int ret;
+
+- headers[0] = TLP_CFG_DW0(pcie, bus);
++ headers[0] = TLP_CFGWR_DW0(pcie, bus);
+ headers[1] = TLP_CFG_DW1(pcie, TLP_WRITE_TAG, byte_en);
+ headers[2] = TLP_CFG_DW2(bus, devfn, where);
+
+diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c
+index 56efaf72d08e..acb2be0c8c2c 100644
+--- a/drivers/pci/hotplug/pnv_php.c
++++ b/drivers/pci/hotplug/pnv_php.c
+@@ -35,9 +35,11 @@ static void pnv_php_register(struct device_node *dn);
+ static void pnv_php_unregister_one(struct device_node *dn);
+ static void pnv_php_unregister(struct device_node *dn);
+
+-static void pnv_php_disable_irq(struct pnv_php_slot *php_slot)
++static void pnv_php_disable_irq(struct pnv_php_slot *php_slot,
++ bool disable_device)
+ {
+ struct pci_dev *pdev = php_slot->pdev;
++ int irq = php_slot->irq;
+ u16 ctrl;
+
+ if (php_slot->irq > 0) {
+@@ -56,10 +58,14 @@ static void pnv_php_disable_irq(struct pnv_php_slot *php_slot)
+ php_slot->wq = NULL;
+ }
+
+- if (pdev->msix_enabled)
+- pci_disable_msix(pdev);
+- else if (pdev->msi_enabled)
+- pci_disable_msi(pdev);
++ if (disable_device || irq > 0) {
++ if (pdev->msix_enabled)
++ pci_disable_msix(pdev);
++ else if (pdev->msi_enabled)
++ pci_disable_msi(pdev);
++
++ pci_disable_device(pdev);
++ }
+ }
+
+ static void pnv_php_free_slot(struct kref *kref)
+@@ -68,7 +74,7 @@ static void pnv_php_free_slot(struct kref *kref)
+ struct pnv_php_slot, kref);
+
+ WARN_ON(!list_empty(&php_slot->children));
+- pnv_php_disable_irq(php_slot);
++ pnv_php_disable_irq(php_slot, false);
+ kfree(php_slot->name);
+ kfree(php_slot);
+ }
+@@ -759,7 +765,7 @@ static void pnv_php_init_irq(struct pnv_php_slot *php_slot, int irq)
+ php_slot->wq = alloc_workqueue("pciehp-%s", 0, 0, php_slot->name);
+ if (!php_slot->wq) {
+ dev_warn(&pdev->dev, "Cannot alloc workqueue\n");
+- pnv_php_disable_irq(php_slot);
++ pnv_php_disable_irq(php_slot, true);
+ return;
+ }
+
+@@ -772,7 +778,7 @@ static void pnv_php_init_irq(struct pnv_php_slot *php_slot, int irq)
+ ret = request_irq(irq, pnv_php_interrupt, IRQF_SHARED,
+ php_slot->name, php_slot);
+ if (ret) {
+- pnv_php_disable_irq(php_slot);
++ pnv_php_disable_irq(php_slot, true);
+ dev_warn(&pdev->dev, "Error %d enabling IRQ %d\n", ret, irq);
+ return;
+ }
+diff --git a/drivers/power/reset/Kconfig b/drivers/power/reset/Kconfig
+index abeb77217a21..b8cacccf18c8 100644
+--- a/drivers/power/reset/Kconfig
++++ b/drivers/power/reset/Kconfig
+@@ -32,7 +32,7 @@ config POWER_RESET_AT91_RESET
+
+ config POWER_RESET_AT91_SAMA5D2_SHDWC
+ tristate "Atmel AT91 SAMA5D2-Compatible shutdown controller driver"
+- depends on ARCH_AT91 || COMPILE_TEST
++ depends on ARCH_AT91
+ default SOC_SAMA5
+ help
+ This driver supports the alternate shutdown controller for some Atmel
+diff --git a/drivers/power/reset/at91-poweroff.c b/drivers/power/reset/at91-poweroff.c
+index a85dd4d233af..c6c3beea72f9 100644
+--- a/drivers/power/reset/at91-poweroff.c
++++ b/drivers/power/reset/at91-poweroff.c
+@@ -14,9 +14,12 @@
+ #include <linux/io.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
++#include <linux/of_address.h>
+ #include <linux/platform_device.h>
+ #include <linux/printk.h>
+
++#include <soc/at91/at91sam9_ddrsdr.h>
++
+ #define AT91_SHDW_CR 0x00 /* Shut Down Control Register */
+ #define AT91_SHDW_SHDW BIT(0) /* Shut Down command */
+ #define AT91_SHDW_KEY (0xa5 << 24) /* KEY Password */
+@@ -50,6 +53,7 @@ static const char *shdwc_wakeup_modes[] = {
+
+ static void __iomem *at91_shdwc_base;
+ static struct clk *sclk;
++static void __iomem *mpddrc_base;
+
+ static void __init at91_wakeup_status(void)
+ {
+@@ -73,6 +77,29 @@ static void at91_poweroff(void)
+ writel(AT91_SHDW_KEY | AT91_SHDW_SHDW, at91_shdwc_base + AT91_SHDW_CR);
+ }
+
++static void at91_lpddr_poweroff(void)
++{
++ asm volatile(
++ /* Align to cache lines */
++ ".balign 32\n\t"
++
++ /* Ensure AT91_SHDW_CR is in the TLB by reading it */
++ " ldr r6, [%2, #" __stringify(AT91_SHDW_CR) "]\n\t"
++
++ /* Power down SDRAM0 */
++ " str %1, [%0, #" __stringify(AT91_DDRSDRC_LPR) "]\n\t"
++ /* Shutdown CPU */
++ " str %3, [%2, #" __stringify(AT91_SHDW_CR) "]\n\t"
++
++ " b .\n\t"
++ :
++ : "r" (mpddrc_base),
++ "r" cpu_to_le32(AT91_DDRSDRC_LPDDR2_PWOFF),
++ "r" (at91_shdwc_base),
++ "r" cpu_to_le32(AT91_SHDW_KEY | AT91_SHDW_SHDW)
++ : "r0");
++}
++
+ static int at91_poweroff_get_wakeup_mode(struct device_node *np)
+ {
+ const char *pm;
+@@ -124,6 +151,8 @@ static void at91_poweroff_dt_set_wakeup_mode(struct platform_device *pdev)
+ static int __init at91_poweroff_probe(struct platform_device *pdev)
+ {
+ struct resource *res;
++ struct device_node *np;
++ u32 ddr_type;
+ int ret;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+@@ -150,12 +179,30 @@ static int __init at91_poweroff_probe(struct platform_device *pdev)
+
+ pm_power_off = at91_poweroff;
+
++ np = of_find_compatible_node(NULL, NULL, "atmel,sama5d3-ddramc");
++ if (!np)
++ return 0;
++
++ mpddrc_base = of_iomap(np, 0);
++ of_node_put(np);
++
++ if (!mpddrc_base)
++ return 0;
++
++ ddr_type = readl(mpddrc_base + AT91_DDRSDRC_MDR) & AT91_DDRSDRC_MD;
++ if ((ddr_type == AT91_DDRSDRC_MD_LPDDR2) ||
++ (ddr_type == AT91_DDRSDRC_MD_LPDDR3))
++ pm_power_off = at91_lpddr_poweroff;
++ else
++ iounmap(mpddrc_base);
++
+ return 0;
+ }
+
+ static int __exit at91_poweroff_remove(struct platform_device *pdev)
+ {
+- if (pm_power_off == at91_poweroff)
++ if (pm_power_off == at91_poweroff ||
++ pm_power_off == at91_lpddr_poweroff)
+ pm_power_off = NULL;
+
+ clk_disable_unprepare(sclk);
+@@ -163,6 +210,11 @@ static int __exit at91_poweroff_remove(struct platform_device *pdev)
+ return 0;
+ }
+
++static const struct of_device_id at91_ramc_of_match[] = {
++ { .compatible = "atmel,sama5d3-ddramc", },
++ { /* sentinel */ }
++};
++
+ static const struct of_device_id at91_poweroff_of_match[] = {
+ { .compatible = "atmel,at91sam9260-shdwc", },
+ { .compatible = "atmel,at91sam9rl-shdwc", },
+diff --git a/drivers/power/reset/at91-sama5d2_shdwc.c b/drivers/power/reset/at91-sama5d2_shdwc.c
+index 8a5ac9706c9c..90b0b5a70ce5 100644
+--- a/drivers/power/reset/at91-sama5d2_shdwc.c
++++ b/drivers/power/reset/at91-sama5d2_shdwc.c
+@@ -22,9 +22,12 @@
+ #include <linux/io.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
++#include <linux/of_address.h>
+ #include <linux/platform_device.h>
+ #include <linux/printk.h>
+
++#include <soc/at91/at91sam9_ddrsdr.h>
++
+ #define SLOW_CLOCK_FREQ 32768
+
+ #define AT91_SHDW_CR 0x00 /* Shut Down Control Register */
+@@ -75,6 +78,7 @@ struct shdwc {
+ */
+ static struct shdwc *at91_shdwc;
+ static struct clk *sclk;
++static void __iomem *mpddrc_base;
+
+ static const unsigned long long sdwc_dbc_period[] = {
+ 0, 3, 32, 512, 4096, 32768,
+@@ -108,6 +112,29 @@ static void at91_poweroff(void)
+ at91_shdwc->at91_shdwc_base + AT91_SHDW_CR);
+ }
+
++static void at91_lpddr_poweroff(void)
++{
++ asm volatile(
++ /* Align to cache lines */
++ ".balign 32\n\t"
++
++ /* Ensure AT91_SHDW_CR is in the TLB by reading it */
++ " ldr r6, [%2, #" __stringify(AT91_SHDW_CR) "]\n\t"
++
++ /* Power down SDRAM0 */
++ " str %1, [%0, #" __stringify(AT91_DDRSDRC_LPR) "]\n\t"
++ /* Shutdown CPU */
++ " str %3, [%2, #" __stringify(AT91_SHDW_CR) "]\n\t"
++
++ " b .\n\t"
++ :
++ : "r" (mpddrc_base),
++ "r" cpu_to_le32(AT91_DDRSDRC_LPDDR2_PWOFF),
++ "r" (at91_shdwc->at91_shdwc_base),
++ "r" cpu_to_le32(AT91_SHDW_KEY | AT91_SHDW_SHDW)
++ : "r0");
++}
++
+ static u32 at91_shdwc_debouncer_value(struct platform_device *pdev,
+ u32 in_period_us)
+ {
+@@ -212,6 +239,8 @@ static int __init at91_shdwc_probe(struct platform_device *pdev)
+ {
+ struct resource *res;
+ const struct of_device_id *match;
++ struct device_node *np;
++ u32 ddr_type;
+ int ret;
+
+ if (!pdev->dev.of_node)
+@@ -249,6 +278,23 @@ static int __init at91_shdwc_probe(struct platform_device *pdev)
+
+ pm_power_off = at91_poweroff;
+
++ np = of_find_compatible_node(NULL, NULL, "atmel,sama5d3-ddramc");
++ if (!np)
++ return 0;
++
++ mpddrc_base = of_iomap(np, 0);
++ of_node_put(np);
++
++ if (!mpddrc_base)
++ return 0;
++
++ ddr_type = readl(mpddrc_base + AT91_DDRSDRC_MDR) & AT91_DDRSDRC_MD;
++ if ((ddr_type == AT91_DDRSDRC_MD_LPDDR2) ||
++ (ddr_type == AT91_DDRSDRC_MD_LPDDR3))
++ pm_power_off = at91_lpddr_poweroff;
++ else
++ iounmap(mpddrc_base);
++
+ return 0;
+ }
+
+@@ -256,7 +302,8 @@ static int __exit at91_shdwc_remove(struct platform_device *pdev)
+ {
+ struct shdwc *shdw = platform_get_drvdata(pdev);
+
+- if (pm_power_off == at91_poweroff)
++ if (pm_power_off == at91_poweroff ||
++ pm_power_off == at91_lpddr_poweroff)
+ pm_power_off = NULL;
+
+ /* Reset values to disable wake-up features */
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 04baac9a165b..66319542baa6 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -4391,12 +4391,13 @@ static void regulator_summary_show_subtree(struct seq_file *s,
+ seq_puts(s, "\n");
+
+ list_for_each_entry(consumer, &rdev->consumer_list, list) {
+-		if (consumer->dev->class == &regulator_class)
++		if (consumer->dev && consumer->dev->class == &regulator_class)
+ continue;
+
+ seq_printf(s, "%*s%-*s ",
+ (level + 1) * 3 + 1, "",
+- 30 - (level + 1) * 3, dev_name(consumer->dev));
++ 30 - (level + 1) * 3,
++ consumer->dev ? dev_name(consumer->dev) : "deviceless");
+
+ switch (rdev->desc->type) {
+ case REGULATOR_VOLTAGE:
+diff --git a/drivers/remoteproc/qcom_mdt_loader.c b/drivers/remoteproc/qcom_mdt_loader.c
+index 2ff18cd6c096..2393398f63ea 100644
+--- a/drivers/remoteproc/qcom_mdt_loader.c
++++ b/drivers/remoteproc/qcom_mdt_loader.c
+@@ -116,6 +116,7 @@ int qcom_mdt_load(struct rproc *rproc,
+ const struct elf32_phdr *phdrs;
+ const struct elf32_phdr *phdr;
+ const struct elf32_hdr *ehdr;
++ const struct firmware *seg_fw;
+ size_t fw_name_len;
+ char *fw_name;
+ void *ptr;
+@@ -154,16 +155,16 @@ int qcom_mdt_load(struct rproc *rproc,
+
+ if (phdr->p_filesz) {
+ sprintf(fw_name + fw_name_len - 3, "b%02d", i);
+- ret = request_firmware(&fw, fw_name, &rproc->dev);
++ ret = request_firmware(&seg_fw, fw_name, &rproc->dev);
+ if (ret) {
+ dev_err(&rproc->dev, "failed to load %s\n",
+ fw_name);
+ break;
+ }
+
+- memcpy(ptr, fw->data, fw->size);
++ memcpy(ptr, seg_fw->data, seg_fw->size);
+
+- release_firmware(fw);
++ release_firmware(seg_fw);
+ }
+
+ if (phdr->p_memsz > phdr->p_filesz)
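+
Note on the qcom_mdt_loader hunk above: it keeps a dedicated seg_fw handle per loadable segment so the metadata firmware passed in by the caller is never aliased or released by mistake. A minimal sketch of that per-segment request_firmware() pattern (demo_load_segments and its arguments are illustrative, not part of the driver):

	#include <linux/device.h>
	#include <linux/firmware.h>
	#include <linux/kernel.h>
	#include <linux/string.h>
	#include <linux/types.h>

	/* Illustrative only: load "<base>.b00", "<base>.b01", ... into dst,
	 * releasing each segment's firmware handle before requesting the next. */
	static int demo_load_segments(struct device *dev, u8 *dst,
				      const char *base, int count)
	{
		const struct firmware *seg_fw;	/* never aliases the caller's mdt handle */
		char name[32];
		int i, ret;

		for (i = 0; i < count; i++) {
			snprintf(name, sizeof(name), "%s.b%02d", base, i);
			ret = request_firmware(&seg_fw, name, dev);
			if (ret)
				return ret;
			memcpy(dst, seg_fw->data, seg_fw->size);
			dst += seg_fw->size;
			release_firmware(seg_fw);
		}
		return 0;
	}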
+diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
+index 5dc673dc9487..3cb42fb8eb53 100644
+--- a/drivers/rtc/Kconfig
++++ b/drivers/rtc/Kconfig
+@@ -1434,7 +1434,7 @@ config RTC_DRV_SUN4V
+ based RTC on SUN4V systems.
+
+ config RTC_DRV_SUN6I
+- tristate "Allwinner A31 RTC"
++ bool "Allwinner A31 RTC"
+ default MACH_SUN6I || MACH_SUN8I || COMPILE_TEST
+ depends on ARCH_SUNXI
+ help
+diff --git a/drivers/rtc/rtc-sun6i.c b/drivers/rtc/rtc-sun6i.c
+index c169a2cd4727..b0d45d23a11b 100644
+--- a/drivers/rtc/rtc-sun6i.c
++++ b/drivers/rtc/rtc-sun6i.c
+@@ -37,9 +37,11 @@
+
+ /* Control register */
+ #define SUN6I_LOSC_CTRL 0x0000
++#define SUN6I_LOSC_CTRL_KEY (0x16aa << 16)
+ #define SUN6I_LOSC_CTRL_ALM_DHMS_ACC BIT(9)
+ #define SUN6I_LOSC_CTRL_RTC_HMS_ACC BIT(8)
+ #define SUN6I_LOSC_CTRL_RTC_YMD_ACC BIT(7)
++#define SUN6I_LOSC_CTRL_EXT_OSC BIT(0)
+ #define SUN6I_LOSC_CTRL_ACC_MASK GENMASK(9, 7)
+
+ /* RTC */
+@@ -114,13 +116,17 @@ struct sun6i_rtc_dev {
+ void __iomem *base;
+ int irq;
+ unsigned long alarm;
++
++ spinlock_t lock;
+ };
+
+ static irqreturn_t sun6i_rtc_alarmirq(int irq, void *id)
+ {
+ struct sun6i_rtc_dev *chip = (struct sun6i_rtc_dev *) id;
++ irqreturn_t ret = IRQ_NONE;
+ u32 val;
+
++ spin_lock(&chip->lock);
+ val = readl(chip->base + SUN6I_ALRM_IRQ_STA);
+
+ if (val & SUN6I_ALRM_IRQ_STA_CNT_IRQ_PEND) {
+@@ -129,10 +135,11 @@ static irqreturn_t sun6i_rtc_alarmirq(int irq, void *id)
+
+ rtc_update_irq(chip->rtc, 1, RTC_AF | RTC_IRQF);
+
+- return IRQ_HANDLED;
++ ret = IRQ_HANDLED;
+ }
++ spin_unlock(&chip->lock);
+
+- return IRQ_NONE;
++ return ret;
+ }
+
+ static void sun6i_rtc_setaie(int to, struct sun6i_rtc_dev *chip)
+@@ -140,6 +147,7 @@ static void sun6i_rtc_setaie(int to, struct sun6i_rtc_dev *chip)
+ u32 alrm_val = 0;
+ u32 alrm_irq_val = 0;
+ u32 alrm_wake_val = 0;
++ unsigned long flags;
+
+ if (to) {
+ alrm_val = SUN6I_ALRM_EN_CNT_EN;
+@@ -150,9 +158,11 @@ static void sun6i_rtc_setaie(int to, struct sun6i_rtc_dev *chip)
+ chip->base + SUN6I_ALRM_IRQ_STA);
+ }
+
++ spin_lock_irqsave(&chip->lock, flags);
+ writel(alrm_val, chip->base + SUN6I_ALRM_EN);
+ writel(alrm_irq_val, chip->base + SUN6I_ALRM_IRQ_EN);
+ writel(alrm_wake_val, chip->base + SUN6I_ALARM_CONFIG);
++ spin_unlock_irqrestore(&chip->lock, flags);
+ }
+
+ static int sun6i_rtc_gettime(struct device *dev, struct rtc_time *rtc_tm)
+@@ -191,11 +201,15 @@ static int sun6i_rtc_gettime(struct device *dev, struct rtc_time *rtc_tm)
+ static int sun6i_rtc_getalarm(struct device *dev, struct rtc_wkalrm *wkalrm)
+ {
+ struct sun6i_rtc_dev *chip = dev_get_drvdata(dev);
++ unsigned long flags;
+ u32 alrm_st;
+ u32 alrm_en;
+
++ spin_lock_irqsave(&chip->lock, flags);
+ alrm_en = readl(chip->base + SUN6I_ALRM_IRQ_EN);
+ alrm_st = readl(chip->base + SUN6I_ALRM_IRQ_STA);
++ spin_unlock_irqrestore(&chip->lock, flags);
++
+ wkalrm->enabled = !!(alrm_en & SUN6I_ALRM_EN_CNT_EN);
+ wkalrm->pending = !!(alrm_st & SUN6I_ALRM_EN_CNT_EN);
+ rtc_time_to_tm(chip->alarm, &wkalrm->time);
+@@ -356,6 +370,7 @@ static int sun6i_rtc_probe(struct platform_device *pdev)
+ chip = devm_kzalloc(&pdev->dev, sizeof(*chip), GFP_KERNEL);
+ if (!chip)
+ return -ENOMEM;
++ spin_lock_init(&chip->lock);
+
+ platform_set_drvdata(pdev, chip);
+ chip->dev = &pdev->dev;
+@@ -404,6 +419,10 @@ static int sun6i_rtc_probe(struct platform_device *pdev)
+ /* disable alarm wakeup */
+ writel(0, chip->base + SUN6I_ALARM_CONFIG);
+
++ /* switch to the external, more precise, oscillator */
++ writel(SUN6I_LOSC_CTRL_KEY | SUN6I_LOSC_CTRL_EXT_OSC,
++ chip->base + SUN6I_LOSC_CTRL);
++
+ chip->rtc = rtc_device_register("rtc-sun6i", &pdev->dev,
+ &sun6i_rtc_ops, THIS_MODULE);
+ if (IS_ERR(chip->rtc)) {
+@@ -439,9 +458,4 @@ static struct platform_driver sun6i_rtc_driver = {
+ .of_match_table = sun6i_rtc_dt_ids,
+ },
+ };
+-
+-module_platform_driver(sun6i_rtc_driver);
+-
+-MODULE_DESCRIPTION("sun6i RTC driver");
+-MODULE_AUTHOR("Chen-Yu Tsai <wens@csie.org>");
+-MODULE_LICENSE("GPL");
++builtin_platform_driver(sun6i_rtc_driver);
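+
Note on the rtc-sun6i hunks above: they add a per-chip spinlock shared by the alarm interrupt handler and the process-context paths; the handler takes the plain spin_lock() form, while setaie/getalarm use spin_lock_irqsave(). A minimal sketch of that split, with made-up demo_* names and an assumed register layout:

	#include <linux/bitops.h>
	#include <linux/interrupt.h>
	#include <linux/io.h>
	#include <linux/spinlock.h>

	#define DEMO_ALRM_STA	0x00		/* assumed offsets, for illustration only */
	#define DEMO_ALRM_EN	0x04
	#define DEMO_PENDING	BIT(0)

	static DEFINE_SPINLOCK(demo_lock);	/* protects the alarm registers */
	static void __iomem *demo_base;

	static irqreturn_t demo_alarm_irq(int irq, void *id)
	{
		irqreturn_t ret = IRQ_NONE;

		spin_lock(&demo_lock);		/* hard-irq context: no need to save flags */
		if (readl(demo_base + DEMO_ALRM_STA) & DEMO_PENDING) {
			writel(DEMO_PENDING, demo_base + DEMO_ALRM_STA);	/* ack */
			ret = IRQ_HANDLED;
		}
		spin_unlock(&demo_lock);
		return ret;
	}

	static void demo_alarm_enable(bool on)
	{
		unsigned long flags;

		spin_lock_irqsave(&demo_lock, flags);	/* process context: mask irqs */
		writel(on ? 1 : 0, demo_base + DEMO_ALRM_EN);
		spin_unlock_irqrestore(&demo_lock, flags);
	}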
+diff --git a/drivers/scsi/aacraid/src.c b/drivers/scsi/aacraid/src.c
+index 0c453880f214..7b178d765726 100644
+--- a/drivers/scsi/aacraid/src.c
++++ b/drivers/scsi/aacraid/src.c
+@@ -414,16 +414,23 @@ static int aac_src_check_health(struct aac_dev *dev)
+ u32 status = src_readl(dev, MUnit.OMR);
+
+ /*
++ * Check to see if the board panic'd.
++ */
++ if (unlikely(status & KERNEL_PANIC))
++ goto err_blink;
++
++ /*
+ * Check to see if the board failed any self tests.
+ */
+ if (unlikely(status & SELF_TEST_FAILED))
+- return -1;
++ goto err_out;
+
+ /*
+- * Check to see if the board panic'd.
++	 * Check to see if the board monitor panic'd.
+ */
+- if (unlikely(status & KERNEL_PANIC))
+- return (status >> 16) & 0xFF;
++ if (unlikely(status & MONITOR_PANIC))
++ goto err_out;
++
+ /*
+ * Wait for the adapter to be up and running.
+ */
+@@ -433,6 +440,12 @@ static int aac_src_check_health(struct aac_dev *dev)
+ * Everything is OK
+ */
+ return 0;
++
++err_out:
++ return -1;
++
++err_blink:
++	return (status >> 16) & 0xFF;
+ }
+
+ /**
+diff --git a/drivers/scsi/lpfc/lpfc_hw4.h b/drivers/scsi/lpfc/lpfc_hw4.h
+index 5646699b0516..964a1fdb076b 100644
+--- a/drivers/scsi/lpfc/lpfc_hw4.h
++++ b/drivers/scsi/lpfc/lpfc_hw4.h
+@@ -1186,6 +1186,7 @@ struct lpfc_mbx_wq_create {
+ #define lpfc_mbx_wq_create_page_size_SHIFT 0
+ #define lpfc_mbx_wq_create_page_size_MASK 0x000000FF
+ #define lpfc_mbx_wq_create_page_size_WORD word1
++#define LPFC_WQ_PAGE_SIZE_4096 0x1
+ #define lpfc_mbx_wq_create_wqe_size_SHIFT 8
+ #define lpfc_mbx_wq_create_wqe_size_MASK 0x0000000F
+ #define lpfc_mbx_wq_create_wqe_size_WORD word1
+@@ -1257,6 +1258,7 @@ struct rq_context {
+ #define lpfc_rq_context_page_size_SHIFT 0 /* Version 1 Only */
+ #define lpfc_rq_context_page_size_MASK 0x000000FF
+ #define lpfc_rq_context_page_size_WORD word0
++#define LPFC_RQ_PAGE_SIZE_4096 0x1
+ uint32_t reserved1;
+ uint32_t word2;
+ #define lpfc_rq_context_cq_id_SHIFT 16
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index a78a3df68f67..fc797b250810 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -13718,7 +13718,7 @@ lpfc_wq_create(struct lpfc_hba *phba, struct lpfc_queue *wq,
+ LPFC_WQ_WQE_SIZE_128);
+ bf_set(lpfc_mbx_wq_create_page_size,
+ &wq_create->u.request_1,
+- (PAGE_SIZE/SLI4_PAGE_SIZE));
++ LPFC_WQ_PAGE_SIZE_4096);
+ page = wq_create->u.request_1.page;
+ break;
+ }
+@@ -13744,8 +13744,9 @@ lpfc_wq_create(struct lpfc_hba *phba, struct lpfc_queue *wq,
+ LPFC_WQ_WQE_SIZE_128);
+ break;
+ }
+- bf_set(lpfc_mbx_wq_create_page_size, &wq_create->u.request_1,
+- (PAGE_SIZE/SLI4_PAGE_SIZE));
++ bf_set(lpfc_mbx_wq_create_page_size,
++ &wq_create->u.request_1,
++ LPFC_WQ_PAGE_SIZE_4096);
+ page = wq_create->u.request_1.page;
+ break;
+ default:
+@@ -13931,7 +13932,7 @@ lpfc_rq_create(struct lpfc_hba *phba, struct lpfc_queue *hrq,
+ LPFC_RQE_SIZE_8);
+ bf_set(lpfc_rq_context_page_size,
+ &rq_create->u.request.context,
+- (PAGE_SIZE/SLI4_PAGE_SIZE));
++ LPFC_RQ_PAGE_SIZE_4096);
+ } else {
+ switch (hrq->entry_count) {
+ default:
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index a94b0b6bd030..8bb9a0367b69 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -3013,14 +3013,17 @@ qla24xx_enable_msix(struct qla_hw_data *ha, struct rsp_que *rsp)
+ int i, ret;
+ struct qla_msix_entry *qentry;
+ scsi_qla_host_t *vha = pci_get_drvdata(ha->pdev);
++ int min_vecs = QLA_BASE_VECTORS;
+ struct irq_affinity desc = {
+ .pre_vectors = QLA_BASE_VECTORS,
+ };
+
+- if (QLA_TGT_MODE_ENABLED() && IS_ATIO_MSIX_CAPABLE(ha))
++ if (QLA_TGT_MODE_ENABLED() && IS_ATIO_MSIX_CAPABLE(ha)) {
+ desc.pre_vectors++;
++ min_vecs++;
++ }
+
+- ret = pci_alloc_irq_vectors_affinity(ha->pdev, QLA_BASE_VECTORS,
++ ret = pci_alloc_irq_vectors_affinity(ha->pdev, min_vecs,
+ ha->msix_count, PCI_IRQ_MSIX | PCI_IRQ_AFFINITY,
+ &desc);
+
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 40660461a4b5..17cdd1d09a57 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -1814,6 +1814,7 @@ qla2x00_iospace_config(struct qla_hw_data *ha)
+
+ /* Determine queue resources */
+ ha->max_req_queues = ha->max_rsp_queues = 1;
++ ha->msix_count = QLA_BASE_VECTORS;
+ if (!ql2xmqsupport || (!IS_QLA25XX(ha) && !IS_QLA81XX(ha)))
+ goto mqiobase_exit;
+
+@@ -1841,9 +1842,8 @@ qla2x00_iospace_config(struct qla_hw_data *ha)
+ "BAR 3 not enabled.\n");
+
+ mqiobase_exit:
+- ha->msix_count = ha->max_rsp_queues + 1;
+ ql_dbg_pci(ql_dbg_init, ha->pdev, 0x001c,
+- "MSIX Count:%d.\n", ha->msix_count);
++ "MSIX Count: %d.\n", ha->msix_count);
+ return (0);
+
+ iospace_error_exit:
+@@ -1891,6 +1891,7 @@ qla83xx_iospace_config(struct qla_hw_data *ha)
+ /* 83XX 26XX always use MQ type access for queues
+ * - mbar 2, a.k.a region 4 */
+ ha->max_req_queues = ha->max_rsp_queues = 1;
++ ha->msix_count = QLA_BASE_VECTORS;
+ ha->mqiobase = ioremap(pci_resource_start(ha->pdev, 4),
+ pci_resource_len(ha->pdev, 4));
+
+@@ -1914,12 +1915,13 @@ qla83xx_iospace_config(struct qla_hw_data *ha)
+ if (ql2xmqsupport) {
+ /* MB interrupt uses 1 vector */
+ ha->max_req_queues = ha->msix_count - 1;
+- ha->max_rsp_queues = ha->max_req_queues;
+
+ /* ATIOQ needs 1 vector. That's 1 less QPair */
+ if (QLA_TGT_MODE_ENABLED())
+ ha->max_req_queues--;
+
++ ha->max_rsp_queues = ha->max_req_queues;
++
+ /* Queue pairs is the max value minus
+ * the base queue pair */
+ ha->max_qpairs = ha->max_req_queues - 1;
+@@ -1933,14 +1935,8 @@ qla83xx_iospace_config(struct qla_hw_data *ha)
+ "BAR 1 not enabled.\n");
+
+ mqiobase_exit:
+- ha->msix_count = ha->max_rsp_queues + 1;
+- if (QLA_TGT_MODE_ENABLED())
+- ha->msix_count++;
+-
+- qlt_83xx_iospace_config(ha);
+-
+ ql_dbg_pci(ql_dbg_init, ha->pdev, 0x011f,
+- "MSIX Count:%d.\n", ha->msix_count);
++ "MSIX Count: %d.\n", ha->msix_count);
+ return 0;
+
+ iospace_error_exit:
+diff --git a/drivers/scsi/scsi_dh.c b/drivers/scsi/scsi_dh.c
+index b8d3b97b217a..84addee05be6 100644
+--- a/drivers/scsi/scsi_dh.c
++++ b/drivers/scsi/scsi_dh.c
+@@ -219,20 +219,6 @@ int scsi_unregister_device_handler(struct scsi_device_handler *scsi_dh)
+ }
+ EXPORT_SYMBOL_GPL(scsi_unregister_device_handler);
+
+-static struct scsi_device *get_sdev_from_queue(struct request_queue *q)
+-{
+- struct scsi_device *sdev;
+- unsigned long flags;
+-
+- spin_lock_irqsave(q->queue_lock, flags);
+- sdev = q->queuedata;
+- if (!sdev || !get_device(&sdev->sdev_gendev))
+- sdev = NULL;
+- spin_unlock_irqrestore(q->queue_lock, flags);
+-
+- return sdev;
+-}
+-
+ /*
+ * scsi_dh_activate - activate the path associated with the scsi_device
+ * corresponding to the given request queue.
+@@ -251,7 +237,7 @@ int scsi_dh_activate(struct request_queue *q, activate_complete fn, void *data)
+ struct scsi_device *sdev;
+ int err = SCSI_DH_NOSYS;
+
+- sdev = get_sdev_from_queue(q);
++ sdev = scsi_device_from_queue(q);
+ if (!sdev) {
+ if (fn)
+ fn(data, err);
+@@ -298,7 +284,7 @@ int scsi_dh_set_params(struct request_queue *q, const char *params)
+ struct scsi_device *sdev;
+ int err = -SCSI_DH_NOSYS;
+
+- sdev = get_sdev_from_queue(q);
++ sdev = scsi_device_from_queue(q);
+ if (!sdev)
+ return err;
+
+@@ -321,7 +307,7 @@ int scsi_dh_attach(struct request_queue *q, const char *name)
+ struct scsi_device_handler *scsi_dh;
+ int err = 0;
+
+- sdev = get_sdev_from_queue(q);
++ sdev = scsi_device_from_queue(q);
+ if (!sdev)
+ return -ENODEV;
+
+@@ -359,7 +345,7 @@ const char *scsi_dh_attached_handler_name(struct request_queue *q, gfp_t gfp)
+ struct scsi_device *sdev;
+ const char *handler_name = NULL;
+
+- sdev = get_sdev_from_queue(q);
++ sdev = scsi_device_from_queue(q);
+ if (!sdev)
+ return NULL;
+
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 78db07fd8055..f16221b66668 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -2145,6 +2145,29 @@ void scsi_mq_destroy_tags(struct Scsi_Host *shost)
+ blk_mq_free_tag_set(&shost->tag_set);
+ }
+
++/**
++ * scsi_device_from_queue - return sdev associated with a request_queue
++ * @q: The request queue to return the sdev from
++ *
++ * Return the sdev associated with a request queue or NULL if the
++ * request_queue does not reference a SCSI device.
++ */
++struct scsi_device *scsi_device_from_queue(struct request_queue *q)
++{
++ struct scsi_device *sdev = NULL;
++
++ if (q->mq_ops) {
++ if (q->mq_ops == &scsi_mq_ops)
++ sdev = q->queuedata;
++ } else if (q->request_fn == scsi_request_fn)
++ sdev = q->queuedata;
++ if (!sdev || !get_device(&sdev->sdev_gendev))
++ sdev = NULL;
++
++ return sdev;
++}
++EXPORT_SYMBOL_GPL(scsi_device_from_queue);
++
+ /*
+ * Function: scsi_block_requests()
+ *
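+
Note on the new scsi_device_from_queue() helper above: it returns the sdev with a reference held (get_device() on sdev_gendev), so every caller is responsible for dropping that reference. The converted scsi_dh helpers follow this shape; a sketch of a typical caller (demo_use_queue is illustrative):

	#include <linux/blkdev.h>
	#include <linux/device.h>
	#include <scsi/scsi_device.h>

	static int demo_use_queue(struct request_queue *q)
	{
		struct scsi_device *sdev = scsi_device_from_queue(q);

		if (!sdev)
			return -ENODEV;		/* queue does not belong to a SCSI device */

		/* ... talk to sdev here ... */

		put_device(&sdev->sdev_gendev);	/* drop the reference the helper took */
		return 0;
	}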
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 05526b71541b..7be04fc0d0e7 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -136,6 +136,8 @@ struct hv_fc_wwn_packet {
+ #define SRB_FLAGS_PORT_DRIVER_RESERVED 0x0F000000
+ #define SRB_FLAGS_CLASS_DRIVER_RESERVED 0xF0000000
+
++#define SP_UNTAGGED ((unsigned char) ~0)
++#define SRB_SIMPLE_TAG_REQUEST 0x20
+
+ /*
+ * Platform neutral description of a scsi request -
+@@ -375,6 +377,7 @@ enum storvsc_request_type {
+ #define SRB_STATUS_SUCCESS 0x01
+ #define SRB_STATUS_ABORTED 0x02
+ #define SRB_STATUS_ERROR 0x04
++#define SRB_STATUS_DATA_OVERRUN 0x12
+
+ #define SRB_STATUS(status) \
+ (status & ~(SRB_STATUS_AUTOSENSE_VALID | SRB_STATUS_QUEUE_FROZEN))
+@@ -889,6 +892,13 @@ static void storvsc_handle_error(struct vmscsi_request *vm_srb,
+ switch (SRB_STATUS(vm_srb->srb_status)) {
+ case SRB_STATUS_ERROR:
+ /*
++ * Let upper layer deal with error when
++ * sense message is present.
++ */
++
++ if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID)
++ break;
++ /*
+ * If there is an error; offline the device since all
+ * error recovery strategies would have already been
+ * deployed on the host side. However, if the command
+@@ -953,6 +963,7 @@ static void storvsc_command_completion(struct storvsc_cmd_request *cmd_request,
+ struct scsi_cmnd *scmnd = cmd_request->cmd;
+ struct scsi_sense_hdr sense_hdr;
+ struct vmscsi_request *vm_srb;
++ u32 data_transfer_length;
+ struct Scsi_Host *host;
+ u32 payload_sz = cmd_request->payload_sz;
+ void *payload = cmd_request->payload;
+@@ -960,6 +971,7 @@ static void storvsc_command_completion(struct storvsc_cmd_request *cmd_request,
+ host = stor_dev->host;
+
+ vm_srb = &cmd_request->vstor_packet.vm_srb;
++ data_transfer_length = vm_srb->data_transfer_length;
+
+ scmnd->result = vm_srb->scsi_status;
+
+@@ -973,13 +985,20 @@ static void storvsc_command_completion(struct storvsc_cmd_request *cmd_request,
+ &sense_hdr);
+ }
+
+- if (vm_srb->srb_status != SRB_STATUS_SUCCESS)
++ if (vm_srb->srb_status != SRB_STATUS_SUCCESS) {
+ storvsc_handle_error(vm_srb, scmnd, host, sense_hdr.asc,
+ sense_hdr.ascq);
++ /*
++	 * The Windows driver sets data_transfer_length on
++ * SRB_STATUS_DATA_OVERRUN. On other errors, this value
++ * is untouched. In these cases we set it to 0.
++ */
++ if (vm_srb->srb_status != SRB_STATUS_DATA_OVERRUN)
++ data_transfer_length = 0;
++ }
+
+ scsi_set_resid(scmnd,
+- cmd_request->payload->range.len -
+- vm_srb->data_transfer_length);
++ cmd_request->payload->range.len - data_transfer_length);
+
+ scmnd->scsi_done(scmnd);
+
+@@ -1451,6 +1470,13 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
+ vm_srb->win8_extension.srb_flags |=
+ SRB_FLAGS_DISABLE_SYNCH_TRANSFER;
+
++ if (scmnd->device->tagged_supported) {
++ vm_srb->win8_extension.srb_flags |=
++ (SRB_FLAGS_QUEUE_ACTION_ENABLE | SRB_FLAGS_NO_QUEUE_FREEZE);
++ vm_srb->win8_extension.queue_tag = SP_UNTAGGED;
++ vm_srb->win8_extension.queue_action = SRB_SIMPLE_TAG_REQUEST;
++ }
++
+ /* Build the SRB */
+ switch (scmnd->sc_data_direction) {
+ case DMA_TO_DEVICE:
+diff --git a/drivers/spi/spi-s3c64xx.c b/drivers/spi/spi-s3c64xx.c
+index 28dfdce4beae..4235ab92ea35 100644
+--- a/drivers/spi/spi-s3c64xx.c
++++ b/drivers/spi/spi-s3c64xx.c
+@@ -996,7 +996,7 @@ static struct s3c64xx_spi_info *s3c64xx_spi_parse_dt(struct device *dev)
+ sci->num_cs = temp;
+ }
+
+- sci->no_cs = of_property_read_bool(dev->of_node, "broken-cs");
++ sci->no_cs = of_property_read_bool(dev->of_node, "no-cs-readback");
+
+ return sci;
+ }
+diff --git a/drivers/staging/greybus/loopback.c b/drivers/staging/greybus/loopback.c
+index 7882306adeca..29dc249b0c74 100644
+--- a/drivers/staging/greybus/loopback.c
++++ b/drivers/staging/greybus/loopback.c
+@@ -1051,8 +1051,13 @@ static int gb_loopback_fn(void *data)
+ gb_loopback_calculate_stats(gb, !!error);
+ }
+ gb->send_count++;
+- if (us_wait)
+- udelay(us_wait);
++
++ if (us_wait) {
++ if (us_wait < 20000)
++ usleep_range(us_wait, us_wait + 100);
++ else
++ msleep(us_wait / 1000);
++ }
+ }
+
+ gb_pm_runtime_put_autosuspend(bundle);
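+
Note on the loopback hunk above: it follows the usual kernel guidance on delays (Documentation/timers/timers-howto.txt): udelay() only for very short busy-waits, usleep_range() for roughly 10 us to 20 ms, and msleep() beyond that, since msleep() has jiffy resolution anyway. A small helper capturing that rule of thumb (demo_wait_us is illustrative, not from the patch):

	#include <linux/delay.h>
	#include <linux/kernel.h>

	static void demo_wait_us(unsigned int us)
	{
		if (us < 10)
			udelay(us);			/* too short to involve the scheduler */
		else if (us < 20000)
			usleep_range(us, us + 100);	/* hrtimer based, allows wakeup coalescing */
		else
			msleep(DIV_ROUND_UP(us, 1000));	/* jiffy resolution is enough here */
	}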
+diff --git a/drivers/staging/lustre/lnet/selftest/rpc.c b/drivers/staging/lustre/lnet/selftest/rpc.c
+index ce9de8c9be57..cfedebb05046 100644
+--- a/drivers/staging/lustre/lnet/selftest/rpc.c
++++ b/drivers/staging/lustre/lnet/selftest/rpc.c
+@@ -255,7 +255,7 @@ srpc_service_init(struct srpc_service *svc)
+ svc->sv_shuttingdown = 0;
+
+ svc->sv_cpt_data = cfs_percpt_alloc(lnet_cpt_table(),
+- sizeof(*svc->sv_cpt_data));
++ sizeof(**svc->sv_cpt_data));
+ if (!svc->sv_cpt_data)
+ return -ENOMEM;
+
+diff --git a/drivers/staging/rtl8188eu/core/rtw_recv.c b/drivers/staging/rtl8188eu/core/rtw_recv.c
+index 3e6edb63d36b..db15bd3b3504 100644
+--- a/drivers/staging/rtl8188eu/core/rtw_recv.c
++++ b/drivers/staging/rtl8188eu/core/rtw_recv.c
+@@ -1349,6 +1349,9 @@ static int wlanhdr_to_ethhdr(struct recv_frame *precvframe)
+ ptr = recvframe_pull(precvframe, (rmv_len-sizeof(struct ethhdr) + (bsnaphdr ? 2 : 0)));
+ }
+
++ if (!ptr)
++ return _FAIL;
++
+ memcpy(ptr, pattrib->dst, ETH_ALEN);
+ memcpy(ptr+ETH_ALEN, pattrib->src, ETH_ALEN);
+
+diff --git a/drivers/staging/rtl8712/rtl871x_recv.c b/drivers/staging/rtl8712/rtl871x_recv.c
+index 35c721a50598..a5f0ae9e0807 100644
+--- a/drivers/staging/rtl8712/rtl871x_recv.c
++++ b/drivers/staging/rtl8712/rtl871x_recv.c
+@@ -640,11 +640,16 @@ sint r8712_wlanhdr_to_ethhdr(union recv_frame *precvframe)
+ /* append rx status for mp test packets */
+ ptr = recvframe_pull(precvframe, (rmv_len -
+ sizeof(struct ethhdr) + 2) - 24);
++ if (!ptr)
++ return _FAIL;
+ memcpy(ptr, get_rxmem(precvframe), 24);
+ ptr += 24;
+- } else
++ } else {
+ ptr = recvframe_pull(precvframe, (rmv_len -
+ sizeof(struct ethhdr) + (bsnaphdr ? 2 : 0)));
++ if (!ptr)
++ return _FAIL;
++ }
+
+ memcpy(ptr, pattrib->dst, ETH_ALEN);
+ memcpy(ptr + ETH_ALEN, pattrib->src, ETH_ALEN);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 204c754cc647..a8a4fe4ffa30 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1335,6 +1335,9 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol)
+ unsigned transfer_in_flight;
+ unsigned started;
+
++ if (dep->flags & DWC3_EP_STALL)
++ return 0;
++
+ if (dep->number > 1)
+ trb = dwc3_ep_prev_trb(dep, dep->trb_enqueue);
+ else
+@@ -1356,6 +1359,8 @@ int __dwc3_gadget_ep_set_halt(struct dwc3_ep *dep, int value, int protocol)
+ else
+ dep->flags |= DWC3_EP_STALL;
+ } else {
++ if (!(dep->flags & DWC3_EP_STALL))
++ return 0;
+
+ ret = dwc3_send_clear_stall_ep_cmd(dep);
+ if (ret)
+diff --git a/drivers/usb/gadget/function/f_hid.c b/drivers/usb/gadget/function/f_hid.c
+index 5f8139b8e601..89b48bcc377a 100644
+--- a/drivers/usb/gadget/function/f_hid.c
++++ b/drivers/usb/gadget/function/f_hid.c
+@@ -50,12 +50,12 @@ struct f_hidg {
+
+ /* recv report */
+ struct list_head completed_out_req;
+- spinlock_t spinlock;
++ spinlock_t read_spinlock;
+ wait_queue_head_t read_queue;
+ unsigned int qlen;
+
+ /* send report */
+- struct mutex lock;
++ spinlock_t write_spinlock;
+ bool write_pending;
+ wait_queue_head_t write_queue;
+ struct usb_request *req;
+@@ -258,28 +258,35 @@ static ssize_t f_hidg_read(struct file *file, char __user *buffer,
+ if (!access_ok(VERIFY_WRITE, buffer, count))
+ return -EFAULT;
+
+- spin_lock_irqsave(&hidg->spinlock, flags);
++ spin_lock_irqsave(&hidg->read_spinlock, flags);
+
+ #define READ_COND (!list_empty(&hidg->completed_out_req))
+
+ /* wait for at least one buffer to complete */
+ while (!READ_COND) {
+- spin_unlock_irqrestore(&hidg->spinlock, flags);
++ spin_unlock_irqrestore(&hidg->read_spinlock, flags);
+ if (file->f_flags & O_NONBLOCK)
+ return -EAGAIN;
+
+ if (wait_event_interruptible(hidg->read_queue, READ_COND))
+ return -ERESTARTSYS;
+
+- spin_lock_irqsave(&hidg->spinlock, flags);
++ spin_lock_irqsave(&hidg->read_spinlock, flags);
+ }
+
+ /* pick the first one */
+ list = list_first_entry(&hidg->completed_out_req,
+ struct f_hidg_req_list, list);
++
++ /*
++	 * Remove this from the list to protect it from being freed
++	 * while the host disables our function
++ */
++ list_del(&list->list);
++
+ req = list->req;
+ count = min_t(unsigned int, count, req->actual - list->pos);
+- spin_unlock_irqrestore(&hidg->spinlock, flags);
++ spin_unlock_irqrestore(&hidg->read_spinlock, flags);
+
+ /* copy to user outside spinlock */
+ count -= copy_to_user(buffer, req->buf + list->pos, count);
+@@ -292,15 +299,20 @@ static ssize_t f_hidg_read(struct file *file, char __user *buffer,
+ * call, taking into account its current read position.
+ */
+ if (list->pos == req->actual) {
+- spin_lock_irqsave(&hidg->spinlock, flags);
+- list_del(&list->list);
+ kfree(list);
+- spin_unlock_irqrestore(&hidg->spinlock, flags);
+
+ req->length = hidg->report_length;
+ ret = usb_ep_queue(hidg->out_ep, req, GFP_KERNEL);
+- if (ret < 0)
++ if (ret < 0) {
++ free_ep_req(hidg->out_ep, req);
+ return ret;
++ }
++ } else {
++ spin_lock_irqsave(&hidg->read_spinlock, flags);
++ list_add(&list->list, &hidg->completed_out_req);
++ spin_unlock_irqrestore(&hidg->read_spinlock, flags);
++
++ wake_up(&hidg->read_queue);
+ }
+
+ return count;
+@@ -309,13 +321,16 @@ static ssize_t f_hidg_read(struct file *file, char __user *buffer,
+ static void f_hidg_req_complete(struct usb_ep *ep, struct usb_request *req)
+ {
+ struct f_hidg *hidg = (struct f_hidg *)ep->driver_data;
++ unsigned long flags;
+
+ if (req->status != 0) {
+ ERROR(hidg->func.config->cdev,
+ "End Point Request ERROR: %d\n", req->status);
+ }
+
++ spin_lock_irqsave(&hidg->write_spinlock, flags);
+ hidg->write_pending = 0;
++ spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+ wake_up(&hidg->write_queue);
+ }
+
+@@ -323,18 +338,20 @@ static ssize_t f_hidg_write(struct file *file, const char __user *buffer,
+ size_t count, loff_t *offp)
+ {
+ struct f_hidg *hidg = file->private_data;
++ struct usb_request *req;
++ unsigned long flags;
+ ssize_t status = -ENOMEM;
+
+ if (!access_ok(VERIFY_READ, buffer, count))
+ return -EFAULT;
+
+- mutex_lock(&hidg->lock);
++ spin_lock_irqsave(&hidg->write_spinlock, flags);
+
+ #define WRITE_COND (!hidg->write_pending)
+-
++try_again:
+ /* write queue */
+ while (!WRITE_COND) {
+- mutex_unlock(&hidg->lock);
++ spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+ if (file->f_flags & O_NONBLOCK)
+ return -EAGAIN;
+
+@@ -342,37 +359,59 @@ static ssize_t f_hidg_write(struct file *file, const char __user *buffer,
+ hidg->write_queue, WRITE_COND))
+ return -ERESTARTSYS;
+
+- mutex_lock(&hidg->lock);
++ spin_lock_irqsave(&hidg->write_spinlock, flags);
+ }
+
++ hidg->write_pending = 1;
++ req = hidg->req;
+ count = min_t(unsigned, count, hidg->report_length);
++
++ spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+ status = copy_from_user(hidg->req->buf, buffer, count);
+
+ if (status != 0) {
+ ERROR(hidg->func.config->cdev,
+ "copy_from_user error\n");
+- mutex_unlock(&hidg->lock);
+- return -EINVAL;
++ status = -EINVAL;
++ goto release_write_pending;
+ }
+
+- hidg->req->status = 0;
+- hidg->req->zero = 0;
+- hidg->req->length = count;
+- hidg->req->complete = f_hidg_req_complete;
+- hidg->req->context = hidg;
+- hidg->write_pending = 1;
++ spin_lock_irqsave(&hidg->write_spinlock, flags);
++
++	/* our function has been disabled by the host */
++ if (!hidg->req) {
++ free_ep_req(hidg->in_ep, hidg->req);
++ /*
++ * TODO
++ * Should we fail with error here?
++ */
++ goto try_again;
++ }
++
++ req->status = 0;
++ req->zero = 0;
++ req->length = count;
++ req->complete = f_hidg_req_complete;
++ req->context = hidg;
+
+ status = usb_ep_queue(hidg->in_ep, hidg->req, GFP_ATOMIC);
+ if (status < 0) {
+ ERROR(hidg->func.config->cdev,
+ "usb_ep_queue error on int endpoint %zd\n", status);
+- hidg->write_pending = 0;
+- wake_up(&hidg->write_queue);
++ goto release_write_pending_unlocked;
+ } else {
+ status = count;
+ }
++ spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+
+- mutex_unlock(&hidg->lock);
++ return status;
++release_write_pending:
++ spin_lock_irqsave(&hidg->write_spinlock, flags);
++release_write_pending_unlocked:
++ hidg->write_pending = 0;
++ spin_unlock_irqrestore(&hidg->write_spinlock, flags);
++
++ wake_up(&hidg->write_queue);
+
+ return status;
+ }
+@@ -425,20 +464,36 @@ static inline struct usb_request *hidg_alloc_ep_req(struct usb_ep *ep,
+ static void hidg_set_report_complete(struct usb_ep *ep, struct usb_request *req)
+ {
+ struct f_hidg *hidg = (struct f_hidg *) req->context;
++ struct usb_composite_dev *cdev = hidg->func.config->cdev;
+ struct f_hidg_req_list *req_list;
+ unsigned long flags;
+
+- req_list = kzalloc(sizeof(*req_list), GFP_ATOMIC);
+- if (!req_list)
+- return;
++ switch (req->status) {
++ case 0:
++ req_list = kzalloc(sizeof(*req_list), GFP_ATOMIC);
++ if (!req_list) {
++ ERROR(cdev, "Unable to allocate mem for req_list\n");
++ goto free_req;
++ }
+
+- req_list->req = req;
++ req_list->req = req;
+
+- spin_lock_irqsave(&hidg->spinlock, flags);
+- list_add_tail(&req_list->list, &hidg->completed_out_req);
+- spin_unlock_irqrestore(&hidg->spinlock, flags);
++ spin_lock_irqsave(&hidg->read_spinlock, flags);
++ list_add_tail(&req_list->list, &hidg->completed_out_req);
++ spin_unlock_irqrestore(&hidg->read_spinlock, flags);
+
+- wake_up(&hidg->read_queue);
++ wake_up(&hidg->read_queue);
++ break;
++ default:
++ ERROR(cdev, "Set report failed %d\n", req->status);
++ /* FALLTHROUGH */
++ case -ECONNABORTED: /* hardware forced ep reset */
++ case -ECONNRESET: /* request dequeued */
++ case -ESHUTDOWN: /* disconnect from host */
++free_req:
++ free_ep_req(ep, req);
++ return;
++ }
+ }
+
+ static int hidg_setup(struct usb_function *f,
+@@ -544,20 +599,35 @@ static void hidg_disable(struct usb_function *f)
+ {
+ struct f_hidg *hidg = func_to_hidg(f);
+ struct f_hidg_req_list *list, *next;
++ unsigned long flags;
+
+ usb_ep_disable(hidg->in_ep);
+ usb_ep_disable(hidg->out_ep);
+
++ spin_lock_irqsave(&hidg->read_spinlock, flags);
+ list_for_each_entry_safe(list, next, &hidg->completed_out_req, list) {
++ free_ep_req(hidg->out_ep, list->req);
+ list_del(&list->list);
+ kfree(list);
+ }
++ spin_unlock_irqrestore(&hidg->read_spinlock, flags);
++
++ spin_lock_irqsave(&hidg->write_spinlock, flags);
++ if (!hidg->write_pending) {
++ free_ep_req(hidg->in_ep, hidg->req);
++ hidg->write_pending = 1;
++ }
++
++ hidg->req = NULL;
++ spin_unlock_irqrestore(&hidg->write_spinlock, flags);
+ }
+
+ static int hidg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+ {
+ struct usb_composite_dev *cdev = f->config->cdev;
+ struct f_hidg *hidg = func_to_hidg(f);
++ struct usb_request *req_in = NULL;
++ unsigned long flags;
+ int i, status = 0;
+
+ VDBG(cdev, "hidg_set_alt intf:%d alt:%d\n", intf, alt);
+@@ -578,6 +648,12 @@ static int hidg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+ goto fail;
+ }
+ hidg->in_ep->driver_data = hidg;
++
++ req_in = hidg_alloc_ep_req(hidg->in_ep, hidg->report_length);
++ if (!req_in) {
++ status = -ENOMEM;
++ goto disable_ep_in;
++ }
+ }
+
+
+@@ -589,12 +665,12 @@ static int hidg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+ hidg->out_ep);
+ if (status) {
+ ERROR(cdev, "config_ep_by_speed FAILED!\n");
+- goto fail;
++ goto free_req_in;
+ }
+ status = usb_ep_enable(hidg->out_ep);
+ if (status < 0) {
+ ERROR(cdev, "Enable OUT endpoint FAILED!\n");
+- goto fail;
++ goto free_req_in;
+ }
+ hidg->out_ep->driver_data = hidg;
+
+@@ -610,17 +686,37 @@ static int hidg_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+ req->context = hidg;
+ status = usb_ep_queue(hidg->out_ep, req,
+ GFP_ATOMIC);
+- if (status)
++ if (status) {
+ ERROR(cdev, "%s queue req --> %d\n",
+ hidg->out_ep->name, status);
++ free_ep_req(hidg->out_ep, req);
++ }
+ } else {
+- usb_ep_disable(hidg->out_ep);
+ status = -ENOMEM;
+- goto fail;
++ goto disable_out_ep;
+ }
+ }
+ }
+
++ if (hidg->in_ep != NULL) {
++ spin_lock_irqsave(&hidg->write_spinlock, flags);
++ hidg->req = req_in;
++ hidg->write_pending = 0;
++ spin_unlock_irqrestore(&hidg->write_spinlock, flags);
++
++ wake_up(&hidg->write_queue);
++ }
++ return 0;
++disable_out_ep:
++ usb_ep_disable(hidg->out_ep);
++free_req_in:
++ if (req_in)
++ free_ep_req(hidg->in_ep, req_in);
++
++disable_ep_in:
++ if (hidg->in_ep)
++ usb_ep_disable(hidg->in_ep);
++
+ fail:
+ return status;
+ }
+@@ -669,12 +765,6 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ goto fail;
+ hidg->out_ep = ep;
+
+- /* preallocate request and buffer */
+- status = -ENOMEM;
+- hidg->req = alloc_ep_req(hidg->in_ep, hidg->report_length);
+- if (!hidg->req)
+- goto fail;
+-
+ /* set descriptor dynamic values */
+ hidg_interface_desc.bInterfaceSubClass = hidg->bInterfaceSubClass;
+ hidg_interface_desc.bInterfaceProtocol = hidg->bInterfaceProtocol;
+@@ -711,8 +801,10 @@ static int hidg_bind(struct usb_configuration *c, struct usb_function *f)
+ if (status)
+ goto fail;
+
+- mutex_init(&hidg->lock);
+- spin_lock_init(&hidg->spinlock);
++ spin_lock_init(&hidg->write_spinlock);
++ hidg->write_pending = 1;
++ hidg->req = NULL;
++ spin_lock_init(&hidg->read_spinlock);
+ init_waitqueue_head(&hidg->write_queue);
+ init_waitqueue_head(&hidg->read_queue);
+ INIT_LIST_HEAD(&hidg->completed_out_req);
+@@ -976,10 +1068,6 @@ static void hidg_unbind(struct usb_configuration *c, struct usb_function *f)
+ device_destroy(hidg_class, MKDEV(major, hidg->minor));
+ cdev_del(&hidg->cdev);
+
+- /* disable/free request and end point */
+- usb_ep_disable(hidg->in_ep);
+- free_ep_req(hidg->in_ep, hidg->req);
+-
+ usb_free_all_descriptors(f);
+ }
+
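+
Note on the f_hid rework above: the write-side mutex becomes a spinlock so the state can also be updated from the request completion handler, which runs in interrupt context. The price is that copy_from_user(), which may fault and sleep, has to run with the lock dropped, and the state must be re-checked afterwards. A compressed sketch of that claim/copy/re-check shape (demo_* names are made up):

	#include <linux/kernel.h>
	#include <linux/spinlock.h>
	#include <linux/uaccess.h>

	static DEFINE_SPINLOCK(demo_lock);
	static bool demo_busy;			/* "write pending" flag, set under demo_lock */
	static char demo_buf[64];

	static ssize_t demo_write(const char __user *ubuf, size_t count)
	{
		unsigned long flags;

		spin_lock_irqsave(&demo_lock, flags);
		if (demo_busy) {
			spin_unlock_irqrestore(&demo_lock, flags);
			return -EAGAIN;
		}
		demo_busy = true;		/* claim the single transfer buffer */
		spin_unlock_irqrestore(&demo_lock, flags);

		count = min_t(size_t, count, sizeof(demo_buf));
		if (copy_from_user(demo_buf, ubuf, count)) {	/* may fault: lock is dropped */
			spin_lock_irqsave(&demo_lock, flags);
			demo_busy = false;
			spin_unlock_irqrestore(&demo_lock, flags);
			return -EFAULT;
		}

		/* queue demo_buf to the hardware here; the completion callback,
		 * running in irq context, clears demo_busy under demo_lock */
		return count;
	}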
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 0402177f93cd..d685d82dcf48 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1080,6 +1080,24 @@ static void usb_udc_nop_release(struct device *dev)
+ dev_vdbg(dev, "%s\n", __func__);
+ }
+
++/* should be called with udc_lock held */
++static int check_pending_gadget_drivers(struct usb_udc *udc)
++{
++ struct usb_gadget_driver *driver;
++ int ret = 0;
++
++ list_for_each_entry(driver, &gadget_driver_pending_list, pending)
++ if (!driver->udc_name || strcmp(driver->udc_name,
++ dev_name(&udc->dev)) == 0) {
++ ret = udc_bind_to_driver(udc, driver);
++ if (ret != -EPROBE_DEFER)
++ list_del(&driver->pending);
++ break;
++ }
++
++ return ret;
++}
++
+ /**
+ * usb_add_gadget_udc_release - adds a new gadget to the udc class driver list
+ * @parent: the parent device to this udc. Usually the controller driver's
+@@ -1093,7 +1111,6 @@ int usb_add_gadget_udc_release(struct device *parent, struct usb_gadget *gadget,
+ void (*release)(struct device *dev))
+ {
+ struct usb_udc *udc;
+- struct usb_gadget_driver *driver;
+ int ret = -ENOMEM;
+
+ udc = kzalloc(sizeof(*udc), GFP_KERNEL);
+@@ -1136,17 +1153,9 @@ int usb_add_gadget_udc_release(struct device *parent, struct usb_gadget *gadget,
+ udc->vbus = true;
+
+ /* pick up one of pending gadget drivers */
+- list_for_each_entry(driver, &gadget_driver_pending_list, pending) {
+- if (!driver->udc_name || strcmp(driver->udc_name,
+- dev_name(&udc->dev)) == 0) {
+- ret = udc_bind_to_driver(udc, driver);
+- if (ret != -EPROBE_DEFER)
+- list_del(&driver->pending);
+- if (ret)
+- goto err5;
+- break;
+- }
+- }
++ ret = check_pending_gadget_drivers(udc);
++ if (ret)
++ goto err5;
+
+ mutex_unlock(&udc_lock);
+
+@@ -1356,14 +1365,22 @@ int usb_gadget_unregister_driver(struct usb_gadget_driver *driver)
+ return -EINVAL;
+
+ mutex_lock(&udc_lock);
+- list_for_each_entry(udc, &udc_list, list)
++ list_for_each_entry(udc, &udc_list, list) {
+ if (udc->driver == driver) {
+ usb_gadget_remove_driver(udc);
+ usb_gadget_set_state(udc->gadget,
+- USB_STATE_NOTATTACHED);
++ USB_STATE_NOTATTACHED);
++
++ /* Maybe there is someone waiting for this UDC? */
++ check_pending_gadget_drivers(udc);
++ /*
++			 * For now we ignore bind errors, as they are probably
++			 * not a valid reason to fail another gadget's unbind
++ */
+ ret = 0;
+ break;
+ }
++ }
+
+ if (ret) {
+ list_del(&driver->pending);
+diff --git a/drivers/usb/gadget/udc/fsl_udc_core.c b/drivers/usb/gadget/udc/fsl_udc_core.c
+index 71094e479a96..55c755370850 100644
+--- a/drivers/usb/gadget/udc/fsl_udc_core.c
++++ b/drivers/usb/gadget/udc/fsl_udc_core.c
+@@ -1248,6 +1248,12 @@ static const struct usb_gadget_ops fsl_gadget_ops = {
+ .udc_stop = fsl_udc_stop,
+ };
+
++/*
++ * Empty complete function used by this driver to fill in the req->complete
++ * field when creating a request since the complete field is mandatory.
++ */
++static void fsl_noop_complete(struct usb_ep *ep, struct usb_request *req) { }
++
+ /* Set protocol stall on ep0, protocol stall will automatically be cleared
+ on new transaction */
+ static void ep0stall(struct fsl_udc *udc)
+@@ -1282,7 +1288,7 @@ static int ep0_prime_status(struct fsl_udc *udc, int direction)
+ req->req.length = 0;
+ req->req.status = -EINPROGRESS;
+ req->req.actual = 0;
+- req->req.complete = NULL;
++ req->req.complete = fsl_noop_complete;
+ req->dtd_count = 0;
+
+ ret = usb_gadget_map_request(&ep->udc->gadget, &req->req, ep_is_in(ep));
+@@ -1365,7 +1371,7 @@ static void ch9getstatus(struct fsl_udc *udc, u8 request_type, u16 value,
+ req->req.length = 2;
+ req->req.status = -EINPROGRESS;
+ req->req.actual = 0;
+- req->req.complete = NULL;
++ req->req.complete = fsl_noop_complete;
+ req->dtd_count = 0;
+
+ ret = usb_gadget_map_request(&ep->udc->gadget, &req->req, ep_is_in(ep));
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index e5834dd9bcde..c0cd98e804a3 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -232,9 +232,6 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ if (device_property_read_bool(&pdev->dev, "usb3-lpm-capable"))
+ xhci->quirks |= XHCI_LPM_SUPPORT;
+
+- if (HCC_MAX_PSA(xhci->hcc_params) >= 4)
+- xhci->shared_hcd->can_do_streams = 1;
+-
+ hcd->usb_phy = devm_usb_get_phy_by_phandle(&pdev->dev, "usb-phy", 0);
+ if (IS_ERR(hcd->usb_phy)) {
+ ret = PTR_ERR(hcd->usb_phy);
+@@ -251,6 +248,9 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ if (ret)
+ goto disable_usb_phy;
+
++ if (HCC_MAX_PSA(xhci->hcc_params) >= 4)
++ xhci->shared_hcd->can_do_streams = 1;
++
+ ret = usb_add_hcd(xhci->shared_hcd, irq, IRQF_SHARED);
+ if (ret)
+ goto dealloc_usb2_hcd;
+diff --git a/drivers/usb/musb/da8xx.c b/drivers/usb/musb/da8xx.c
+index e89708d839e5..cd3d763720b3 100644
+--- a/drivers/usb/musb/da8xx.c
++++ b/drivers/usb/musb/da8xx.c
+@@ -458,15 +458,11 @@ static inline u8 get_vbus_power(struct device *dev)
+ }
+
+ static const struct musb_platform_ops da8xx_ops = {
+- .quirks = MUSB_DMA_CPPI | MUSB_INDEXED_EP,
++ .quirks = MUSB_INDEXED_EP,
+ .init = da8xx_musb_init,
+ .exit = da8xx_musb_exit,
+
+ .fifo_mode = 2,
+-#ifdef CONFIG_USB_TI_CPPI_DMA
+- .dma_init = cppi_dma_controller_create,
+- .dma_exit = cppi_dma_controller_destroy,
+-#endif
+ .enable = da8xx_musb_enable,
+ .disable = da8xx_musb_disable,
+
+diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
+index 181793f07852..9d2738e9217f 100644
+--- a/drivers/virtio/virtio_balloon.c
++++ b/drivers/virtio/virtio_balloon.c
+@@ -615,8 +615,12 @@ static void virtballoon_remove(struct virtio_device *vdev)
+ cancel_work_sync(&vb->update_balloon_stats_work);
+
+ remove_common(vb);
++#ifdef CONFIG_BALLOON_COMPACTION
+ if (vb->vb_dev_info.inode)
+ iput(vb->vb_dev_info.inode);
++
++ kern_unmount(balloon_mnt);
++#endif
+ kfree(vb);
+ }
+
+diff --git a/drivers/vme/vme.c b/drivers/vme/vme.c
+index bdbadaa47ef3..0035cf79760a 100644
+--- a/drivers/vme/vme.c
++++ b/drivers/vme/vme.c
+@@ -1625,10 +1625,25 @@ static int vme_bus_probe(struct device *dev)
+ return retval;
+ }
+
++static int vme_bus_remove(struct device *dev)
++{
++ int retval = -ENODEV;
++ struct vme_driver *driver;
++ struct vme_dev *vdev = dev_to_vme_dev(dev);
++
++ driver = dev->platform_data;
++
++ if (driver->remove != NULL)
++ retval = driver->remove(vdev);
++
++ return retval;
++}
++
+ struct bus_type vme_bus_type = {
+ .name = "vme",
+ .match = vme_bus_match,
+ .probe = vme_bus_probe,
++ .remove = vme_bus_remove,
+ };
+ EXPORT_SYMBOL(vme_bus_type);
+
+diff --git a/drivers/w1/masters/ds2490.c b/drivers/w1/masters/ds2490.c
+index 049a884a756f..59d74d1b47a8 100644
+--- a/drivers/w1/masters/ds2490.c
++++ b/drivers/w1/masters/ds2490.c
+@@ -153,6 +153,9 @@ struct ds_device
+ */
+ u16 spu_bit;
+
++ u8 st_buf[ST_SIZE];
++ u8 byte_buf;
++
+ struct w1_bus_master master;
+ };
+
+@@ -174,7 +177,6 @@ struct ds_status
+ u8 data_in_buffer_status;
+ u8 reserved1;
+ u8 reserved2;
+-
+ };
+
+ static struct usb_device_id ds_id_table [] = {
+@@ -244,28 +246,6 @@ static int ds_send_control(struct ds_device *dev, u16 value, u16 index)
+ return err;
+ }
+
+-static int ds_recv_status_nodump(struct ds_device *dev, struct ds_status *st,
+- unsigned char *buf, int size)
+-{
+- int count, err;
+-
+- memset(st, 0, sizeof(*st));
+-
+- count = 0;
+- err = usb_interrupt_msg(dev->udev, usb_rcvintpipe(dev->udev,
+- dev->ep[EP_STATUS]), buf, size, &count, 1000);
+- if (err < 0) {
+- pr_err("Failed to read 1-wire data from 0x%x: err=%d.\n",
+- dev->ep[EP_STATUS], err);
+- return err;
+- }
+-
+- if (count >= sizeof(*st))
+- memcpy(st, buf, sizeof(*st));
+-
+- return count;
+-}
+-
+ static inline void ds_print_msg(unsigned char *buf, unsigned char *str, int off)
+ {
+ pr_info("%45s: %8x\n", str, buf[off]);
+@@ -324,6 +304,35 @@ static void ds_dump_status(struct ds_device *dev, unsigned char *buf, int count)
+ }
+ }
+
++static int ds_recv_status(struct ds_device *dev, struct ds_status *st,
++ bool dump)
++{
++ int count, err;
++
++ if (st)
++ memset(st, 0, sizeof(*st));
++
++ count = 0;
++ err = usb_interrupt_msg(dev->udev,
++ usb_rcvintpipe(dev->udev,
++ dev->ep[EP_STATUS]),
++ dev->st_buf, sizeof(dev->st_buf),
++ &count, 1000);
++ if (err < 0) {
++ pr_err("Failed to read 1-wire data from 0x%x: err=%d.\n",
++ dev->ep[EP_STATUS], err);
++ return err;
++ }
++
++ if (dump)
++ ds_dump_status(dev, dev->st_buf, count);
++
++ if (st && count >= sizeof(*st))
++ memcpy(st, dev->st_buf, sizeof(*st));
++
++ return count;
++}
++
+ static void ds_reset_device(struct ds_device *dev)
+ {
+ ds_send_control_cmd(dev, CTL_RESET_DEVICE, 0);
+@@ -344,7 +353,6 @@ static void ds_reset_device(struct ds_device *dev)
+ static int ds_recv_data(struct ds_device *dev, unsigned char *buf, int size)
+ {
+ int count, err;
+- struct ds_status st;
+
+ /* Careful on size. If size is less than what is available in
+ * the input buffer, the device fails the bulk transfer and
+@@ -359,14 +367,9 @@ static int ds_recv_data(struct ds_device *dev, unsigned char *buf, int size)
+ err = usb_bulk_msg(dev->udev, usb_rcvbulkpipe(dev->udev, dev->ep[EP_DATA_IN]),
+ buf, size, &count, 1000);
+ if (err < 0) {
+- u8 buf[ST_SIZE];
+- int count;
+-
+ pr_info("Clearing ep0x%x.\n", dev->ep[EP_DATA_IN]);
+ usb_clear_halt(dev->udev, usb_rcvbulkpipe(dev->udev, dev->ep[EP_DATA_IN]));
+-
+- count = ds_recv_status_nodump(dev, &st, buf, sizeof(buf));
+- ds_dump_status(dev, buf, count);
++ ds_recv_status(dev, NULL, true);
+ return err;
+ }
+
+@@ -404,7 +407,6 @@ int ds_stop_pulse(struct ds_device *dev, int limit)
+ {
+ struct ds_status st;
+ int count = 0, err = 0;
+- u8 buf[ST_SIZE];
+
+ do {
+ err = ds_send_control(dev, CTL_HALT_EXE_IDLE, 0);
+@@ -413,7 +415,7 @@ int ds_stop_pulse(struct ds_device *dev, int limit)
+ err = ds_send_control(dev, CTL_RESUME_EXE, 0);
+ if (err)
+ break;
+- err = ds_recv_status_nodump(dev, &st, buf, sizeof(buf));
++ err = ds_recv_status(dev, &st, false);
+ if (err)
+ break;
+
+@@ -456,18 +458,17 @@ int ds_detect(struct ds_device *dev, struct ds_status *st)
+
+ static int ds_wait_status(struct ds_device *dev, struct ds_status *st)
+ {
+- u8 buf[ST_SIZE];
+ int err, count = 0;
+
+ do {
+ st->status = 0;
+- err = ds_recv_status_nodump(dev, st, buf, sizeof(buf));
++ err = ds_recv_status(dev, st, false);
+ #if 0
+ if (err >= 0) {
+ int i;
+ printk("0x%x: count=%d, status: ", dev->ep[EP_STATUS], err);
+ for (i=0; i<err; ++i)
+- printk("%02x ", buf[i]);
++ printk("%02x ", dev->st_buf[i]);
+ printk("\n");
+ }
+ #endif
+@@ -485,7 +486,7 @@ static int ds_wait_status(struct ds_device *dev, struct ds_status *st)
+ * can do something with it).
+ */
+ if (err > 16 || count >= 100 || err < 0)
+- ds_dump_status(dev, buf, err);
++ ds_dump_status(dev, dev->st_buf, err);
+
+ /* Extended data isn't an error. Well, a short is, but the dump
+ * would have already told the user that and we can't do anything
+@@ -608,7 +609,6 @@ static int ds_write_byte(struct ds_device *dev, u8 byte)
+ {
+ int err;
+ struct ds_status st;
+- u8 rbyte;
+
+ err = ds_send_control(dev, COMM_BYTE_IO | COMM_IM | dev->spu_bit, byte);
+ if (err)
+@@ -621,11 +621,11 @@ static int ds_write_byte(struct ds_device *dev, u8 byte)
+ if (err)
+ return err;
+
+- err = ds_recv_data(dev, &rbyte, sizeof(rbyte));
++ err = ds_recv_data(dev, &dev->byte_buf, 1);
+ if (err < 0)
+ return err;
+
+- return !(byte == rbyte);
++ return !(byte == dev->byte_buf);
+ }
+
+ static int ds_read_byte(struct ds_device *dev, u8 *byte)
+@@ -712,7 +712,6 @@ static void ds9490r_search(void *data, struct w1_master *master,
+ int err;
+ u16 value, index;
+ struct ds_status st;
+- u8 st_buf[ST_SIZE];
+ int search_limit;
+ int found = 0;
+ int i;
+@@ -724,7 +723,12 @@ static void ds9490r_search(void *data, struct w1_master *master,
+ /* FIFO 128 bytes, bulk packet size 64, read a multiple of the
+ * packet size.
+ */
+- u64 buf[2*64/8];
++ const size_t bufsize = 2 * 64;
++ u64 *buf;
++
++ buf = kmalloc(bufsize, GFP_KERNEL);
++ if (!buf)
++ return;
+
+ mutex_lock(&master->bus_mutex);
+
+@@ -745,10 +749,9 @@ static void ds9490r_search(void *data, struct w1_master *master,
+ do {
+ schedule_timeout(jtime);
+
+- if (ds_recv_status_nodump(dev, &st, st_buf, sizeof(st_buf)) <
+- sizeof(st)) {
++ err = ds_recv_status(dev, &st, false);
++ if (err < 0 || err < sizeof(st))
+ break;
+- }
+
+ if (st.data_in_buffer_status) {
+ /* Bulk in can receive partial ids, but when it does
+@@ -758,7 +761,7 @@ static void ds9490r_search(void *data, struct w1_master *master,
+ * bulk without first checking if status says there
+ * is data to read.
+ */
+- err = ds_recv_data(dev, (u8 *)buf, sizeof(buf));
++ err = ds_recv_data(dev, (u8 *)buf, bufsize);
+ if (err < 0)
+ break;
+ for (i = 0; i < err/8; ++i) {
+@@ -794,9 +797,14 @@ static void ds9490r_search(void *data, struct w1_master *master,
+ }
+ search_out:
+ mutex_unlock(&master->bus_mutex);
++ kfree(buf);
+ }
+
+ #if 0
++/*
++ * FIXME: if this disabled code is ever used in the future all ds_send_data()
++ * calls must be changed to use a DMAable buffer.
++ */
+ static int ds_match_access(struct ds_device *dev, u64 init)
+ {
+ int err;
+@@ -845,13 +853,12 @@ static int ds_set_path(struct ds_device *dev, u64 init)
+
+ static u8 ds9490r_touch_bit(void *data, u8 bit)
+ {
+- u8 ret;
+ struct ds_device *dev = data;
+
+- if (ds_touch_bit(dev, bit, &ret))
++ if (ds_touch_bit(dev, bit, &dev->byte_buf))
+ return 0;
+
+- return ret;
++ return dev->byte_buf;
+ }
+
+ #if 0
+@@ -866,13 +873,12 @@ static u8 ds9490r_read_bit(void *data)
+ {
+ struct ds_device *dev = data;
+ int err;
+- u8 bit = 0;
+
+- err = ds_touch_bit(dev, 1, &bit);
++ err = ds_touch_bit(dev, 1, &dev->byte_buf);
+ if (err)
+ return 0;
+
+- return bit & 1;
++ return dev->byte_buf & 1;
+ }
+ #endif
+
+@@ -887,32 +893,52 @@ static u8 ds9490r_read_byte(void *data)
+ {
+ struct ds_device *dev = data;
+ int err;
+- u8 byte = 0;
+
+- err = ds_read_byte(dev, &byte);
++ err = ds_read_byte(dev, &dev->byte_buf);
+ if (err)
+ return 0;
+
+- return byte;
++ return dev->byte_buf;
+ }
+
+ static void ds9490r_write_block(void *data, const u8 *buf, int len)
+ {
+ struct ds_device *dev = data;
++ u8 *tbuf;
++
++ if (len <= 0)
++ return;
++
++ tbuf = kmalloc(len, GFP_KERNEL);
++ if (!tbuf)
++ return;
+
+- ds_write_block(dev, (u8 *)buf, len);
++ memcpy(tbuf, buf, len);
++ ds_write_block(dev, tbuf, len);
++
++ kfree(tbuf);
+ }
+
+ static u8 ds9490r_read_block(void *data, u8 *buf, int len)
+ {
+ struct ds_device *dev = data;
+ int err;
++ u8 *tbuf;
+
+- err = ds_read_block(dev, buf, len);
+- if (err < 0)
++ if (len <= 0)
++ return 0;
++
++ tbuf = kmalloc(len, GFP_KERNEL);
++ if (!tbuf)
+ return 0;
+
+- return len;
++ err = ds_read_block(dev, tbuf, len);
++ if (err >= 0)
++ memcpy(buf, tbuf, len);
++
++ kfree(tbuf);
++
++ return err >= 0 ? len : 0;
+ }
+
+ static u8 ds9490r_reset(void *data)
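+
Note on the ds2490 rework above: the USB core DMA-maps transfer buffers, so they must come from kmalloc() (or live inside a kmalloc()-ed structure such as struct ds_device), never from the caller's stack. That is why st_buf/byte_buf moved into the device and the block helpers now bounce through a temporary allocation. A minimal bounce-buffer sketch (demo_bulk_read is illustrative):

	#include <linux/slab.h>
	#include <linux/string.h>
	#include <linux/usb.h>

	static int demo_bulk_read(struct usb_device *udev, unsigned int pipe,
				  void *dst, int len)
	{
		int actual = 0, err;
		u8 *dma_buf = kmalloc(len, GFP_KERNEL);	/* DMA-able, unlike a stack array */

		if (!dma_buf)
			return -ENOMEM;

		err = usb_bulk_msg(udev, pipe, dma_buf, len, &actual, 1000);
		if (!err)
			memcpy(dst, dma_buf, actual);

		kfree(dma_buf);
		return err ? err : actual;
	}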
+diff --git a/drivers/w1/w1.c b/drivers/w1/w1.c
+index e213c678bbfe..ab0931e7a9bb 100644
+--- a/drivers/w1/w1.c
++++ b/drivers/w1/w1.c
+@@ -763,6 +763,7 @@ int w1_attach_slave_device(struct w1_master *dev, struct w1_reg_num *rn)
+ dev_err(&dev->dev, "%s: Attaching %s failed.\n", __func__,
+ sl->name);
+ w1_family_put(sl->family);
++ atomic_dec(&sl->master->refcnt);
+ kfree(sl);
+ return err;
+ }
+diff --git a/fs/ceph/addr.c b/fs/ceph/addr.c
+index e4b066cd912a..ec52130d7ee3 100644
+--- a/fs/ceph/addr.c
++++ b/fs/ceph/addr.c
+@@ -391,6 +391,7 @@ static int start_read(struct inode *inode, struct list_head *page_list, int max)
+ nr_pages = i;
+ if (nr_pages > 0) {
+ len = nr_pages << PAGE_SHIFT;
++ osd_req_op_extent_update(req, 0, len);
+ break;
+ }
+ goto out_pages;
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 18a1e1d6671f..1cd0e2eefc66 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -2884,7 +2884,15 @@ cifs_readdata_to_iov(struct cifs_readdata *rdata, struct iov_iter *iter)
+ for (i = 0; i < rdata->nr_pages; i++) {
+ struct page *page = rdata->pages[i];
+ size_t copy = min_t(size_t, remaining, PAGE_SIZE);
+- size_t written = copy_page_to_iter(page, 0, copy, iter);
++ size_t written;
++
++ if (unlikely(iter->type & ITER_PIPE)) {
++ void *addr = kmap_atomic(page);
++
++ written = copy_to_iter(addr, copy, iter);
++ kunmap_atomic(addr);
++ } else
++ written = copy_page_to_iter(page, 0, copy, iter);
+ remaining -= written;
+ if (written < copy && iov_iter_count(iter) > 0)
+ break;
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 3e295d3350a9..2a97dff87b96 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -5334,7 +5334,8 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
+ ext4_lblk_t stop, *iterator, ex_start, ex_end;
+
+ /* Let path point to the last extent */
+- path = ext4_find_extent(inode, EXT_MAX_BLOCKS - 1, NULL, 0);
++ path = ext4_find_extent(inode, EXT_MAX_BLOCKS - 1, NULL,
++ EXT4_EX_NOCACHE);
+ if (IS_ERR(path))
+ return PTR_ERR(path);
+
+@@ -5343,15 +5344,15 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
+ if (!extent)
+ goto out;
+
+- stop = le32_to_cpu(extent->ee_block) +
+- ext4_ext_get_actual_len(extent);
++ stop = le32_to_cpu(extent->ee_block);
+
+ /*
+ * In case of left shift, Don't start shifting extents until we make
+ * sure the hole is big enough to accommodate the shift.
+ */
+ if (SHIFT == SHIFT_LEFT) {
+- path = ext4_find_extent(inode, start - 1, &path, 0);
++ path = ext4_find_extent(inode, start - 1, &path,
++ EXT4_EX_NOCACHE);
+ if (IS_ERR(path))
+ return PTR_ERR(path);
+ depth = path->p_depth;
+@@ -5383,9 +5384,14 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
+ else
+ iterator = &stop;
+
+- /* Its safe to start updating extents */
+- while (start < stop) {
+- path = ext4_find_extent(inode, *iterator, &path, 0);
++ /*
++	 * It's safe to start updating extents. Start and stop are unsigned, so
++	 * in the case of a right shift, once an extent with block 0 is reached,
++	 * the iterator becomes NULL to indicate the end of the loop.
++ */
++ while (iterator && start <= stop) {
++ path = ext4_find_extent(inode, *iterator, &path,
++ EXT4_EX_NOCACHE);
+ if (IS_ERR(path))
+ return PTR_ERR(path);
+ depth = path->p_depth;
+@@ -5412,8 +5418,11 @@ ext4_ext_shift_extents(struct inode *inode, handle_t *handle,
+ ext4_ext_get_actual_len(extent);
+ } else {
+ extent = EXT_FIRST_EXTENT(path[depth].p_hdr);
+- *iterator = le32_to_cpu(extent->ee_block) > 0 ?
+- le32_to_cpu(extent->ee_block) - 1 : 0;
++ if (le32_to_cpu(extent->ee_block) > 0)
++ *iterator = le32_to_cpu(extent->ee_block) - 1;
++ else
++ /* Beginning is reached, end of the loop */
++ iterator = NULL;
+ /* Update path extent in case we need to stop */
+ while (le32_to_cpu(extent->ee_block) < start)
+ extent++;
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 437df6a1a841..627ace344739 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -381,7 +381,7 @@ static int ext4_update_inline_data(handle_t *handle, struct inode *inode,
+ static int ext4_prepare_inline_data(handle_t *handle, struct inode *inode,
+ unsigned int len)
+ {
+- int ret, size;
++ int ret, size, no_expand;
+ struct ext4_inode_info *ei = EXT4_I(inode);
+
+ if (!ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA))
+@@ -391,15 +391,14 @@ static int ext4_prepare_inline_data(handle_t *handle, struct inode *inode,
+ if (size < len)
+ return -ENOSPC;
+
+- down_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_lock_xattr(inode, &no_expand);
+
+ if (ei->i_inline_off)
+ ret = ext4_update_inline_data(handle, inode, len);
+ else
+ ret = ext4_create_inline_data(handle, inode, len);
+
+- up_write(&EXT4_I(inode)->xattr_sem);
+-
++ ext4_write_unlock_xattr(inode, &no_expand);
+ return ret;
+ }
+
+@@ -533,7 +532,7 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
+ struct inode *inode,
+ unsigned flags)
+ {
+- int ret, needed_blocks;
++ int ret, needed_blocks, no_expand;
+ handle_t *handle = NULL;
+ int retries = 0, sem_held = 0;
+ struct page *page = NULL;
+@@ -573,7 +572,7 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
+ goto out;
+ }
+
+- down_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_lock_xattr(inode, &no_expand);
+ sem_held = 1;
+ /* If some one has already done this for us, just exit. */
+ if (!ext4_has_inline_data(inode)) {
+@@ -610,7 +609,7 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
+ put_page(page);
+ page = NULL;
+ ext4_orphan_add(handle, inode);
+- up_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_unlock_xattr(inode, &no_expand);
+ sem_held = 0;
+ ext4_journal_stop(handle);
+ handle = NULL;
+@@ -636,7 +635,7 @@ static int ext4_convert_inline_data_to_extent(struct address_space *mapping,
+ put_page(page);
+ }
+ if (sem_held)
+- up_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_unlock_xattr(inode, &no_expand);
+ if (handle)
+ ext4_journal_stop(handle);
+ brelse(iloc.bh);
+@@ -729,7 +728,7 @@ int ext4_try_to_write_inline_data(struct address_space *mapping,
+ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+ unsigned copied, struct page *page)
+ {
+- int ret;
++ int ret, no_expand;
+ void *kaddr;
+ struct ext4_iloc iloc;
+
+@@ -747,7 +746,7 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+ goto out;
+ }
+
+- down_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_lock_xattr(inode, &no_expand);
+ BUG_ON(!ext4_has_inline_data(inode));
+
+ kaddr = kmap_atomic(page);
+@@ -757,7 +756,7 @@ int ext4_write_inline_data_end(struct inode *inode, loff_t pos, unsigned len,
+ /* clear page dirty so that writepages wouldn't work for us. */
+ ClearPageDirty(page);
+
+- up_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_unlock_xattr(inode, &no_expand);
+ brelse(iloc.bh);
+ out:
+ return copied;
+@@ -768,7 +767,7 @@ ext4_journalled_write_inline_data(struct inode *inode,
+ unsigned len,
+ struct page *page)
+ {
+- int ret;
++ int ret, no_expand;
+ void *kaddr;
+ struct ext4_iloc iloc;
+
+@@ -778,11 +777,11 @@ ext4_journalled_write_inline_data(struct inode *inode,
+ return NULL;
+ }
+
+- down_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_lock_xattr(inode, &no_expand);
+ kaddr = kmap_atomic(page);
+ ext4_write_inline_data(inode, &iloc, kaddr, 0, len);
+ kunmap_atomic(kaddr);
+- up_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_unlock_xattr(inode, &no_expand);
+
+ return iloc.bh;
+ }
+@@ -944,8 +943,15 @@ int ext4_da_write_inline_data_end(struct inode *inode, loff_t pos,
+ struct page *page)
+ {
+ int i_size_changed = 0;
++ int ret;
+
+- copied = ext4_write_inline_data_end(inode, pos, len, copied, page);
++ ret = ext4_write_inline_data_end(inode, pos, len, copied, page);
++ if (ret < 0) {
++ unlock_page(page);
++ put_page(page);
++ return ret;
++ }
++ copied = ret;
+
+ /*
+ * No need to use i_size_read() here, the i_size
+@@ -1259,7 +1265,7 @@ static int ext4_convert_inline_data_nolock(handle_t *handle,
+ int ext4_try_add_inline_entry(handle_t *handle, struct ext4_filename *fname,
+ struct inode *dir, struct inode *inode)
+ {
+- int ret, inline_size;
++ int ret, inline_size, no_expand;
+ void *inline_start;
+ struct ext4_iloc iloc;
+
+@@ -1267,7 +1273,7 @@ int ext4_try_add_inline_entry(handle_t *handle, struct ext4_filename *fname,
+ if (ret)
+ return ret;
+
+- down_write(&EXT4_I(dir)->xattr_sem);
++ ext4_write_lock_xattr(dir, &no_expand);
+ if (!ext4_has_inline_data(dir))
+ goto out;
+
+@@ -1313,7 +1319,7 @@ int ext4_try_add_inline_entry(handle_t *handle, struct ext4_filename *fname,
+
+ out:
+ ext4_mark_inode_dirty(handle, dir);
+- up_write(&EXT4_I(dir)->xattr_sem);
++ ext4_write_unlock_xattr(dir, &no_expand);
+ brelse(iloc.bh);
+ return ret;
+ }
+@@ -1673,7 +1679,7 @@ int ext4_delete_inline_entry(handle_t *handle,
+ struct buffer_head *bh,
+ int *has_inline_data)
+ {
+- int err, inline_size;
++ int err, inline_size, no_expand;
+ struct ext4_iloc iloc;
+ void *inline_start;
+
+@@ -1681,7 +1687,7 @@ int ext4_delete_inline_entry(handle_t *handle,
+ if (err)
+ return err;
+
+- down_write(&EXT4_I(dir)->xattr_sem);
++ ext4_write_lock_xattr(dir, &no_expand);
+ if (!ext4_has_inline_data(dir)) {
+ *has_inline_data = 0;
+ goto out;
+@@ -1715,7 +1721,7 @@ int ext4_delete_inline_entry(handle_t *handle,
+
+ ext4_show_inline_dir(dir, iloc.bh, inline_start, inline_size);
+ out:
+- up_write(&EXT4_I(dir)->xattr_sem);
++ ext4_write_unlock_xattr(dir, &no_expand);
+ brelse(iloc.bh);
+ if (err != -ENOENT)
+ ext4_std_error(dir->i_sb, err);
+@@ -1814,11 +1820,11 @@ bool empty_inline_dir(struct inode *dir, int *has_inline_data)
+
+ int ext4_destroy_inline_data(handle_t *handle, struct inode *inode)
+ {
+- int ret;
++ int ret, no_expand;
+
+- down_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_lock_xattr(inode, &no_expand);
+ ret = ext4_destroy_inline_data_nolock(handle, inode);
+- up_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_unlock_xattr(inode, &no_expand);
+
+ return ret;
+ }
+@@ -1903,7 +1909,7 @@ int ext4_try_to_evict_inline_data(handle_t *handle,
+ void ext4_inline_data_truncate(struct inode *inode, int *has_inline)
+ {
+ handle_t *handle;
+- int inline_size, value_len, needed_blocks;
++ int inline_size, value_len, needed_blocks, no_expand;
+ size_t i_size;
+ void *value = NULL;
+ struct ext4_xattr_ibody_find is = {
+@@ -1920,7 +1926,7 @@ void ext4_inline_data_truncate(struct inode *inode, int *has_inline)
+ if (IS_ERR(handle))
+ return;
+
+- down_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_lock_xattr(inode, &no_expand);
+ if (!ext4_has_inline_data(inode)) {
+ *has_inline = 0;
+ ext4_journal_stop(handle);
+@@ -1978,7 +1984,7 @@ void ext4_inline_data_truncate(struct inode *inode, int *has_inline)
+ up_write(&EXT4_I(inode)->i_data_sem);
+ out:
+ brelse(is.iloc.bh);
+- up_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_unlock_xattr(inode, &no_expand);
+ kfree(value);
+ if (inode->i_nlink)
+ ext4_orphan_del(handle, inode);
+@@ -1994,7 +2000,7 @@ void ext4_inline_data_truncate(struct inode *inode, int *has_inline)
+
+ int ext4_convert_inline_data(struct inode *inode)
+ {
+- int error, needed_blocks;
++ int error, needed_blocks, no_expand;
+ handle_t *handle;
+ struct ext4_iloc iloc;
+
+@@ -2016,15 +2022,10 @@ int ext4_convert_inline_data(struct inode *inode)
+ goto out_free;
+ }
+
+- down_write(&EXT4_I(inode)->xattr_sem);
+- if (!ext4_has_inline_data(inode)) {
+- up_write(&EXT4_I(inode)->xattr_sem);
+- goto out;
+- }
+-
+- error = ext4_convert_inline_data_nolock(handle, inode, &iloc);
+- up_write(&EXT4_I(inode)->xattr_sem);
+-out:
++ ext4_write_lock_xattr(inode, &no_expand);
++ if (ext4_has_inline_data(inode))
++ error = ext4_convert_inline_data_nolock(handle, inode, &iloc);
++ ext4_write_unlock_xattr(inode, &no_expand);
+ ext4_journal_stop(handle);
+ out_free:
+ brelse(iloc.bh);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 88d57af1b516..b4a8173bb80c 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1330,8 +1330,11 @@ static int ext4_write_end(struct file *file,
+ if (ext4_has_inline_data(inode)) {
+ ret = ext4_write_inline_data_end(inode, pos, len,
+ copied, page);
+- if (ret < 0)
++ if (ret < 0) {
++ unlock_page(page);
++ put_page(page);
+ goto errout;
++ }
+ copied = ret;
+ } else
+ copied = block_write_end(file, mapping, pos,
+@@ -1385,7 +1388,9 @@ static int ext4_write_end(struct file *file,
+ * set the buffer to be dirty, since in data=journalled mode we need
+ * to call ext4_handle_dirty_metadata() instead.
+ */
+-static void zero_new_buffers(struct page *page, unsigned from, unsigned to)
++static void ext4_journalled_zero_new_buffers(handle_t *handle,
++ struct page *page,
++ unsigned from, unsigned to)
+ {
+ unsigned int block_start = 0, block_end;
+ struct buffer_head *head, *bh;
+@@ -1402,7 +1407,7 @@ static void zero_new_buffers(struct page *page, unsigned from, unsigned to)
+ size = min(to, block_end) - start;
+
+ zero_user(page, start, size);
+- set_buffer_uptodate(bh);
++ write_end_fn(handle, bh);
+ }
+ clear_buffer_new(bh);
+ }
+@@ -1431,18 +1436,25 @@ static int ext4_journalled_write_end(struct file *file,
+
+ BUG_ON(!ext4_handle_valid(handle));
+
+- if (ext4_has_inline_data(inode))
+- copied = ext4_write_inline_data_end(inode, pos, len,
+- copied, page);
+- else {
+- if (copied < len) {
+- if (!PageUptodate(page))
+- copied = 0;
+- zero_new_buffers(page, from+copied, to);
++ if (ext4_has_inline_data(inode)) {
++ ret = ext4_write_inline_data_end(inode, pos, len,
++ copied, page);
++ if (ret < 0) {
++ unlock_page(page);
++ put_page(page);
++ goto errout;
+ }
+-
++ copied = ret;
++ } else if (unlikely(copied < len) && !PageUptodate(page)) {
++ copied = 0;
++ ext4_journalled_zero_new_buffers(handle, page, from, to);
++ } else {
++ if (unlikely(copied < len))
++ ext4_journalled_zero_new_buffers(handle, page,
++ from + copied, to);
+ ret = ext4_walk_page_buffers(handle, page_buffers(page), from,
+- to, &partial, write_end_fn);
++ from + copied, &partial,
++ write_end_fn);
+ if (!partial)
+ SetPageUptodate(page);
+ }
+@@ -1468,6 +1480,7 @@ static int ext4_journalled_write_end(struct file *file,
+ */
+ ext4_orphan_add(handle, inode);
+
++errout:
+ ret2 = ext4_journal_stop(handle);
+ if (!ret)
+ ret = ret2;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 7ae43c59bc79..2e9fc7a61048 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -3123,6 +3123,13 @@ ext4_mb_normalize_request(struct ext4_allocation_context *ac,
+ if (ar->pright && start + size - 1 >= ar->lright)
+ size -= start + size - ar->lright;
+
++ /*
++ * Trim allocation request for filesystems with artificially small
++ * groups.
++ */
++ if (size > EXT4_BLOCKS_PER_GROUP(ac->ac_sb))
++ size = EXT4_BLOCKS_PER_GROUP(ac->ac_sb);
++
+ end = start + size;
+
+ /* check we don't cross already preallocated blocks */
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index eadba919f26b..2fbc63d697e9 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1616,13 +1616,15 @@ static struct dentry *ext4_lookup(struct inode *dir, struct dentry *dentry, unsi
+ !fscrypt_has_permitted_context(dir, inode)) {
+ int nokey = ext4_encrypted_inode(inode) &&
+ !fscrypt_has_encryption_key(inode);
+- iput(inode);
+- if (nokey)
++ if (nokey) {
++ iput(inode);
+ return ERR_PTR(-ENOKEY);
++ }
+ ext4_warning(inode->i_sb,
+ "Inconsistent encryption contexts: %lu/%lu",
+ (unsigned long) dir->i_ino,
+ (unsigned long) inode->i_ino);
++ iput(inode);
+ return ERR_PTR(-EPERM);
+ }
+ }
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 66845a08a87a..699b64ea5e0e 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -825,6 +825,7 @@ static void ext4_put_super(struct super_block *sb)
+ {
+ struct ext4_sb_info *sbi = EXT4_SB(sb);
+ struct ext4_super_block *es = sbi->s_es;
++ int aborted = 0;
+ int i, err;
+
+ ext4_unregister_li_request(sb);
+@@ -834,9 +835,10 @@ static void ext4_put_super(struct super_block *sb)
+ destroy_workqueue(sbi->rsv_conversion_wq);
+
+ if (sbi->s_journal) {
++ aborted = is_journal_aborted(sbi->s_journal);
+ err = jbd2_journal_destroy(sbi->s_journal);
+ sbi->s_journal = NULL;
+- if (err < 0)
++ if ((err < 0) && !aborted)
+ ext4_abort(sb, "Couldn't clean up the journal");
+ }
+
+@@ -847,7 +849,7 @@ static void ext4_put_super(struct super_block *sb)
+ ext4_mb_release(sb);
+ ext4_ext_release(sb);
+
+- if (!(sb->s_flags & MS_RDONLY)) {
++ if (!(sb->s_flags & MS_RDONLY) && !aborted) {
+ ext4_clear_feature_journal_needs_recovery(sb);
+ es->s_state = cpu_to_le16(sbi->s_mount_state);
+ }
+@@ -3842,7 +3844,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ db_count = (sbi->s_groups_count + EXT4_DESC_PER_BLOCK(sb) - 1) /
+ EXT4_DESC_PER_BLOCK(sb);
+ if (ext4_has_feature_meta_bg(sb)) {
+- if (le32_to_cpu(es->s_first_meta_bg) >= db_count) {
++ if (le32_to_cpu(es->s_first_meta_bg) > db_count) {
+ ext4_msg(sb, KERN_WARNING,
+ "first meta block group too large: %u "
+ "(group descriptor block count %u)",
+@@ -3925,7 +3927,8 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ * root first: it may be modified in the journal!
+ */
+ if (!test_opt(sb, NOLOAD) && ext4_has_feature_journal(sb)) {
+- if (ext4_load_journal(sb, es, journal_devnum))
++ err = ext4_load_journal(sb, es, journal_devnum);
++ if (err)
+ goto failed_mount3a;
+ } else if (test_opt(sb, NOLOAD) && !(sb->s_flags & MS_RDONLY) &&
+ ext4_has_feature_journal_needs_recovery(sb)) {
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 5a94fa52b74f..c40bd55b6400 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1188,16 +1188,14 @@ ext4_xattr_set_handle(handle_t *handle, struct inode *inode, int name_index,
+ struct ext4_xattr_block_find bs = {
+ .s = { .not_found = -ENODATA, },
+ };
+- unsigned long no_expand;
++ int no_expand;
+ int error;
+
+ if (!name)
+ return -EINVAL;
+ if (strlen(name) > 255)
+ return -ERANGE;
+- down_write(&EXT4_I(inode)->xattr_sem);
+- no_expand = ext4_test_inode_state(inode, EXT4_STATE_NO_EXPAND);
+- ext4_set_inode_state(inode, EXT4_STATE_NO_EXPAND);
++ ext4_write_lock_xattr(inode, &no_expand);
+
+ error = ext4_reserve_inode_write(handle, inode, &is.iloc);
+ if (error)
+@@ -1264,7 +1262,7 @@ ext4_xattr_set_handle(handle_t *handle, struct inode *inode, int name_index,
+ ext4_xattr_update_super_block(handle, inode->i_sb);
+ inode->i_ctime = current_time(inode);
+ if (!value)
+- ext4_clear_inode_state(inode, EXT4_STATE_NO_EXPAND);
++ no_expand = 0;
+ error = ext4_mark_iloc_dirty(handle, inode, &is.iloc);
+ /*
+ * The bh is consumed by ext4_mark_iloc_dirty, even with
+@@ -1278,9 +1276,7 @@ ext4_xattr_set_handle(handle_t *handle, struct inode *inode, int name_index,
+ cleanup:
+ brelse(is.iloc.bh);
+ brelse(bs.bh);
+- if (no_expand == 0)
+- ext4_clear_inode_state(inode, EXT4_STATE_NO_EXPAND);
+- up_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_unlock_xattr(inode, &no_expand);
+ return error;
+ }
+
+@@ -1497,12 +1493,11 @@ int ext4_expand_extra_isize_ea(struct inode *inode, int new_extra_isize,
+ int error = 0, tried_min_extra_isize = 0;
+ int s_min_extra_isize = le16_to_cpu(EXT4_SB(inode->i_sb)->s_es->s_min_extra_isize);
+ int isize_diff; /* How much do we need to grow i_extra_isize */
++ int no_expand;
++
++ if (ext4_write_trylock_xattr(inode, &no_expand) == 0)
++ return 0;
+
+- down_write(&EXT4_I(inode)->xattr_sem);
+- /*
+- * Set EXT4_STATE_NO_EXPAND to avoid recursion when marking inode dirty
+- */
+- ext4_set_inode_state(inode, EXT4_STATE_NO_EXPAND);
+ retry:
+ isize_diff = new_extra_isize - EXT4_I(inode)->i_extra_isize;
+ if (EXT4_I(inode)->i_extra_isize >= new_extra_isize)
+@@ -1584,17 +1579,16 @@ int ext4_expand_extra_isize_ea(struct inode *inode, int new_extra_isize,
+ EXT4_I(inode)->i_extra_isize = new_extra_isize;
+ brelse(bh);
+ out:
+- ext4_clear_inode_state(inode, EXT4_STATE_NO_EXPAND);
+- up_write(&EXT4_I(inode)->xattr_sem);
++ ext4_write_unlock_xattr(inode, &no_expand);
+ return 0;
+
+ cleanup:
+ brelse(bh);
+ /*
+- * We deliberately leave EXT4_STATE_NO_EXPAND set here since inode
+- * size expansion failed.
++ * Inode size expansion failed; don't try again
+ */
+- up_write(&EXT4_I(inode)->xattr_sem);
++ no_expand = 1;
++ ext4_write_unlock_xattr(inode, &no_expand);
+ return error;
+ }
+
+diff --git a/fs/ext4/xattr.h b/fs/ext4/xattr.h
+index a92e783fa057..099c8b670ef5 100644
+--- a/fs/ext4/xattr.h
++++ b/fs/ext4/xattr.h
+@@ -102,6 +102,38 @@ extern const struct xattr_handler ext4_xattr_security_handler;
+
+ #define EXT4_XATTR_NAME_ENCRYPTION_CONTEXT "c"
+
++/*
++ * The EXT4_STATE_NO_EXPAND is overloaded and used for two purposes.
++ * The first is to signal that the inline xattrs and data are
++ * taking up so much space that we might as well not keep trying to
++ * expand it. The second is that xattr_sem is taken for writing, so
++ * we shouldn't try to recurse into the inode expansion. For this
++ * second case, we need to make sure that we save and restore the
++ * NO_EXPAND state flag appropriately.
++ */
++static inline void ext4_write_lock_xattr(struct inode *inode, int *save)
++{
++ down_write(&EXT4_I(inode)->xattr_sem);
++ *save = ext4_test_inode_state(inode, EXT4_STATE_NO_EXPAND);
++ ext4_set_inode_state(inode, EXT4_STATE_NO_EXPAND);
++}
++
++static inline int ext4_write_trylock_xattr(struct inode *inode, int *save)
++{
++ if (down_write_trylock(&EXT4_I(inode)->xattr_sem) == 0)
++ return 0;
++ *save = ext4_test_inode_state(inode, EXT4_STATE_NO_EXPAND);
++ ext4_set_inode_state(inode, EXT4_STATE_NO_EXPAND);
++ return 1;
++}
++
++static inline void ext4_write_unlock_xattr(struct inode *inode, int *save)
++{
++ if (*save == 0)
++ ext4_clear_inode_state(inode, EXT4_STATE_NO_EXPAND);
++ up_write(&EXT4_I(inode)->xattr_sem);
++}
++
+ extern ssize_t ext4_listxattr(struct dentry *, char *, size_t);
+
+ extern int ext4_xattr_get(struct inode *, int, const char *, void *, size_t);
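
Editorial aside, not part of the patch: the ext4_write_lock_xattr()/ext4_write_unlock_xattr() helpers added above implement a save-and-restore pattern for a flag that doubles as a recursion guard. A minimal user-space sketch of the same idea, with a pthread rwlock and a plain int standing in for xattr_sem and the EXT4_STATE_NO_EXPAND bit, might look like this:

#include <pthread.h>
#include <stdio.h>

struct inode_like {
	pthread_rwlock_t xattr_sem;	/* stand-in for EXT4_I(inode)->xattr_sem */
	int no_expand;			/* stand-in for EXT4_STATE_NO_EXPAND */
};

/* Take the lock for writing, remember the old flag value, set the flag. */
static void write_lock_xattr(struct inode_like *i, int *save)
{
	pthread_rwlock_wrlock(&i->xattr_sem);
	*save = i->no_expand;
	i->no_expand = 1;
}

/* Clear the flag only if it was clear before this critical section. */
static void write_unlock_xattr(struct inode_like *i, const int *save)
{
	if (*save == 0)
		i->no_expand = 0;
	pthread_rwlock_unlock(&i->xattr_sem);
}

int main(void)
{
	struct inode_like i = { PTHREAD_RWLOCK_INITIALIZER, 0 };
	int saved;

	write_lock_xattr(&i, &saved);
	/* ... update in-inode xattrs here ... */
	write_unlock_xattr(&i, &saved);
	printf("no_expand after unlock: %d\n", i.no_expand);	/* prints 0 */
	return 0;
}

Restoring the flag only when the saved value was clear is what lets the "inline data is too big, stop expanding" meaning of the bit survive the critical section.
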
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index 827c5daef4fc..54aa30ee028f 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -207,9 +207,13 @@ static struct f2fs_dir_entry *find_in_level(struct inode *dir,
+ f2fs_put_page(dentry_page, 0);
+ }
+
+- if (!de && room && F2FS_I(dir)->chash != namehash) {
+- F2FS_I(dir)->chash = namehash;
+- F2FS_I(dir)->clevel = level;
++ /* This is to increase the speed of f2fs_create */
++ if (!de && room) {
++ F2FS_I(dir)->task = current;
++ if (F2FS_I(dir)->chash != namehash) {
++ F2FS_I(dir)->chash = namehash;
++ F2FS_I(dir)->clevel = level;
++ }
+ }
+
+ return de;
+@@ -643,14 +647,34 @@ int __f2fs_add_link(struct inode *dir, const struct qstr *name,
+ struct inode *inode, nid_t ino, umode_t mode)
+ {
+ struct fscrypt_name fname;
++ struct page *page = NULL;
++ struct f2fs_dir_entry *de = NULL;
+ int err;
+
+ err = fscrypt_setup_filename(dir, name, 0, &fname);
+ if (err)
+ return err;
+
+- err = __f2fs_do_add_link(dir, &fname, inode, ino, mode);
+-
++ /*
++	 * An immature stackable filesystem can show a race between lookup
++	 * and create. If the same task performs both the lookup and the
++	 * create, that is fine and matches normal VFS expectations.
++	 * Otherwise, verify the on-disk dentry one more time, which gives
++	 * stronger filesystem consistency.
++ */
++ if (current != F2FS_I(dir)->task) {
++ de = __f2fs_find_entry(dir, &fname, &page);
++ F2FS_I(dir)->task = NULL;
++ }
++ if (de) {
++ f2fs_dentry_kunmap(dir, page);
++ f2fs_put_page(page, 0);
++ err = -EEXIST;
++ } else if (IS_ERR(page)) {
++ err = PTR_ERR(page);
++ } else {
++ err = __f2fs_do_add_link(dir, &fname, inode, ino, mode);
++ }
+ fscrypt_free_filename(&fname);
+ return err;
+ }
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 4db44da7ef69..e02c3d88dc9a 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -352,11 +352,12 @@ static struct extent_node *__try_merge_extent_node(struct inode *inode,
+ }
+
+ if (next_ex && __is_front_mergeable(ei, &next_ex->ei)) {
+- if (en)
+- __release_extent_node(sbi, et, prev_ex);
+ next_ex->ei.fofs = ei->fofs;
+ next_ex->ei.blk = ei->blk;
+ next_ex->ei.len += ei->len;
++ if (en)
++ __release_extent_node(sbi, et, prev_ex);
++
+ en = next_ex;
+ }
+
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 2da8c3aa0ce5..149fab0161d0 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -434,6 +434,7 @@ struct f2fs_inode_info {
+ atomic_t dirty_pages; /* # of dirty pages */
+ f2fs_hash_t chash; /* hash value of given file name */
+ unsigned int clevel; /* maximum level of given file name */
++ struct task_struct *task; /* lookup and create consistency */
+ nid_t i_xattr_nid; /* node id that contains xattrs */
+ unsigned long long xattr_ver; /* cp version of xattr modification */
+ loff_t last_disk_size; /* lastly written file size */
+@@ -863,6 +864,9 @@ struct f2fs_sb_info {
+ struct f2fs_gc_kthread *gc_thread; /* GC thread */
+ unsigned int cur_victim_sec; /* current victim section num */
+
++ /* threshold for converting bg victims for fg */
++ u64 fggc_threshold;
++
+ /* maximum # of trials to find a victim segment for SSR and GC */
+ unsigned int max_victim_search;
+
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 88bfc3dff496..46800d6b25e5 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -166,7 +166,8 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type,
+ p->ofs_unit = sbi->segs_per_sec;
+ }
+
+- if (p->max_search > sbi->max_victim_search)
++	/* we need to check every dirty segment in the FG_GC case */
++ if (gc_type != FG_GC && p->max_search > sbi->max_victim_search)
+ p->max_search = sbi->max_victim_search;
+
+ p->offset = sbi->last_victim[p->gc_mode];
+@@ -199,6 +200,10 @@ static unsigned int check_bg_victims(struct f2fs_sb_info *sbi)
+ for_each_set_bit(secno, dirty_i->victim_secmap, MAIN_SECS(sbi)) {
+ if (sec_usage_check(sbi, secno))
+ continue;
++
++ if (no_fggc_candidate(sbi, secno))
++ continue;
++
+ clear_bit(secno, dirty_i->victim_secmap);
+ return secno * sbi->segs_per_sec;
+ }
+@@ -322,13 +327,15 @@ static int get_victim_by_default(struct f2fs_sb_info *sbi,
+ nsearched++;
+ }
+
+-
+ secno = GET_SECNO(sbi, segno);
+
+ if (sec_usage_check(sbi, secno))
+ goto next;
+ if (gc_type == BG_GC && test_bit(secno, dirty_i->victim_secmap))
+ goto next;
++ if (gc_type == FG_GC && p.alloc_mode == LFS &&
++ no_fggc_candidate(sbi, secno))
++ goto next;
+
+ cost = get_gc_cost(sbi, segno, &p);
+
+@@ -983,5 +990,16 @@ int f2fs_gc(struct f2fs_sb_info *sbi, bool sync, bool background)
+
+ void build_gc_manager(struct f2fs_sb_info *sbi)
+ {
++ u64 main_count, resv_count, ovp_count, blocks_per_sec;
++
+ DIRTY_I(sbi)->v_ops = &default_v_ops;
++
++ /* threshold of # of valid blocks in a section for victims of FG_GC */
++ main_count = SM_I(sbi)->main_segments << sbi->log_blocks_per_seg;
++ resv_count = SM_I(sbi)->reserved_segments << sbi->log_blocks_per_seg;
++ ovp_count = SM_I(sbi)->ovp_segments << sbi->log_blocks_per_seg;
++ blocks_per_sec = sbi->blocks_per_seg * sbi->segs_per_sec;
++
++ sbi->fggc_threshold = div_u64((main_count - ovp_count) * blocks_per_sec,
++ (main_count - resv_count));
+ }
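
To make the build_gc_manager() arithmetic above concrete, here is a small stand-alone sketch (illustrative numbers only, not taken from the patch) evaluating the same fggc_threshold formula; sections whose valid-block count reaches this threshold are skipped as foreground-GC victims by no_fggc_candidate():

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Invented geometry; real values come from the f2fs superblock. */
	uint64_t log_blocks_per_seg = 9;		/* 512 blocks per segment */
	uint64_t main_segments = 4096, resv_segments = 64, ovp_segments = 128;
	uint64_t segs_per_sec = 1, blocks_per_seg = 1ULL << log_blocks_per_seg;

	uint64_t main_count = main_segments << log_blocks_per_seg;
	uint64_t resv_count = resv_segments << log_blocks_per_seg;
	uint64_t ovp_count  = ovp_segments << log_blocks_per_seg;
	uint64_t blocks_per_sec = blocks_per_seg * segs_per_sec;

	/* Same formula as the patched build_gc_manager(). */
	uint64_t fggc_threshold = (main_count - ovp_count) * blocks_per_sec /
				  (main_count - resv_count);

	printf("fggc_threshold = %llu blocks per section\n",
	       (unsigned long long)fggc_threshold);
	return 0;
}
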
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 0d8802453758..983bf0caf2df 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -935,6 +935,8 @@ void clear_prefree_segments(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ start = start_segno + sbi->segs_per_sec;
+ if (start < end)
+ goto next;
++ else
++ end = start - 1;
+ }
+ mutex_unlock(&dirty_i->seglist_lock);
+
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 9d44ce83acb2..3d2add2d656d 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -689,6 +689,15 @@ static inline block_t sum_blk_addr(struct f2fs_sb_info *sbi, int base, int type)
+ - (base + 1) + type;
+ }
+
++static inline bool no_fggc_candidate(struct f2fs_sb_info *sbi,
++ unsigned int secno)
++{
++ if (get_valid_blocks(sbi, secno, sbi->segs_per_sec) >=
++ sbi->fggc_threshold)
++ return true;
++ return false;
++}
++
+ static inline bool sec_usage_check(struct f2fs_sb_info *sbi, unsigned int secno)
+ {
+ if (IS_CURSEC(sbi, secno) || (sbi->cur_victim_sec == secno))
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 46fd30d8af77..287fcbd0551e 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1698,36 +1698,55 @@ int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover)
+ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
+ {
+ struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi);
++ unsigned int max_devices = MAX_DEVICES;
+ int i;
+
+- for (i = 0; i < MAX_DEVICES; i++) {
+- if (!RDEV(i).path[0])
++ /* Initialize single device information */
++ if (!RDEV(0).path[0]) {
++ if (!bdev_is_zoned(sbi->sb->s_bdev))
+ return 0;
++ max_devices = 1;
++ }
+
+- if (i == 0) {
+- sbi->devs = kzalloc(sizeof(struct f2fs_dev_info) *
+- MAX_DEVICES, GFP_KERNEL);
+- if (!sbi->devs)
+- return -ENOMEM;
+- }
++ /*
++ * Initialize multiple devices information, or single
++ * zoned block device information.
++ */
++ sbi->devs = kcalloc(max_devices, sizeof(struct f2fs_dev_info),
++ GFP_KERNEL);
++ if (!sbi->devs)
++ return -ENOMEM;
+
+- memcpy(FDEV(i).path, RDEV(i).path, MAX_PATH_LEN);
+- FDEV(i).total_segments = le32_to_cpu(RDEV(i).total_segments);
+- if (i == 0) {
+- FDEV(i).start_blk = 0;
+- FDEV(i).end_blk = FDEV(i).start_blk +
+- (FDEV(i).total_segments <<
+- sbi->log_blocks_per_seg) - 1 +
+- le32_to_cpu(raw_super->segment0_blkaddr);
+- } else {
+- FDEV(i).start_blk = FDEV(i - 1).end_blk + 1;
+- FDEV(i).end_blk = FDEV(i).start_blk +
+- (FDEV(i).total_segments <<
+- sbi->log_blocks_per_seg) - 1;
+- }
++ for (i = 0; i < max_devices; i++) {
+
+- FDEV(i).bdev = blkdev_get_by_path(FDEV(i).path,
++ if (i > 0 && !RDEV(i).path[0])
++ break;
++
++ if (max_devices == 1) {
++ /* Single zoned block device mount */
++ FDEV(0).bdev =
++ blkdev_get_by_dev(sbi->sb->s_bdev->bd_dev,
+ sbi->sb->s_mode, sbi->sb->s_type);
++ } else {
++ /* Multi-device mount */
++ memcpy(FDEV(i).path, RDEV(i).path, MAX_PATH_LEN);
++ FDEV(i).total_segments =
++ le32_to_cpu(RDEV(i).total_segments);
++ if (i == 0) {
++ FDEV(i).start_blk = 0;
++ FDEV(i).end_blk = FDEV(i).start_blk +
++ (FDEV(i).total_segments <<
++ sbi->log_blocks_per_seg) - 1 +
++ le32_to_cpu(raw_super->segment0_blkaddr);
++ } else {
++ FDEV(i).start_blk = FDEV(i - 1).end_blk + 1;
++ FDEV(i).end_blk = FDEV(i).start_blk +
++ (FDEV(i).total_segments <<
++ sbi->log_blocks_per_seg) - 1;
++ }
++ FDEV(i).bdev = blkdev_get_by_path(FDEV(i).path,
++ sbi->sb->s_mode, sbi->sb->s_type);
++ }
+ if (IS_ERR(FDEV(i).bdev))
+ return PTR_ERR(FDEV(i).bdev);
+
+@@ -1747,6 +1766,8 @@ static int f2fs_scan_devices(struct f2fs_sb_info *sbi)
+ "Failed to initialize F2FS blkzone information");
+ return -EINVAL;
+ }
++ if (max_devices == 1)
++ break;
+ f2fs_msg(sbi->sb, KERN_INFO,
+ "Mount Device [%2d]: %20s, %8u, %8x - %8x (zone: %s)",
+ i, FDEV(i).path,
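
The multi-device branch above lays the devices out as one contiguous block address space: device 0 starts at 0 and ends after segment0_blkaddr plus its own segments, and each later device starts right after its predecessor. A hypothetical user-space sketch of that layout arithmetic (device sizes and segment0_blkaddr invented for illustration):

#include <stdint.h>
#include <stdio.h>

struct dev_range {
	uint32_t total_segments;
	uint64_t start_blk, end_blk;
};

int main(void)
{
	struct dev_range devs[3] = { { 1024 }, { 2048 }, { 512 } };
	uint64_t log_blocks_per_seg = 9, segment0_blkaddr = 512;

	for (int i = 0; i < 3; i++) {
		if (i == 0) {
			devs[i].start_blk = 0;
			devs[i].end_blk = ((uint64_t)devs[i].total_segments
					   << log_blocks_per_seg) - 1 +
					  segment0_blkaddr;
		} else {
			devs[i].start_blk = devs[i - 1].end_blk + 1;
			devs[i].end_blk = devs[i].start_blk +
				((uint64_t)devs[i].total_segments
				 << log_blocks_per_seg) - 1;
		}
		printf("dev %d: blocks %llu - %llu\n", i,
		       (unsigned long long)devs[i].start_blk,
		       (unsigned long long)devs[i].end_blk);
	}
	return 0;
}
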
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index 2401c5dabb2a..5ec5870e423a 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -100,6 +100,7 @@ static void fuse_file_put(struct fuse_file *ff, bool sync)
+ iput(req->misc.release.inode);
+ fuse_put_request(ff->fc, req);
+ } else if (sync) {
++ __set_bit(FR_FORCE, &req->flags);
+ __clear_bit(FR_BACKGROUND, &req->flags);
+ fuse_request_send(ff->fc, req);
+ iput(req->misc.release.inode);
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 94f50cac91c6..1d60f5f69ae5 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -658,9 +658,11 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number,
+ struct kmem_cache *cachep;
+ int ret, tries = 0;
+
++ rcu_read_lock();
+ gl = rhashtable_lookup_fast(&gl_hash_table, &name, ht_parms);
+ if (gl && !lockref_get_not_dead(&gl->gl_lockref))
+ gl = NULL;
++ rcu_read_unlock();
+
+ *glp = gl;
+ if (gl)
+@@ -728,15 +730,18 @@ int gfs2_glock_get(struct gfs2_sbd *sdp, u64 number,
+
+ if (ret == -EEXIST) {
+ ret = 0;
++ rcu_read_lock();
+ tmp = rhashtable_lookup_fast(&gl_hash_table, &name, ht_parms);
+ if (tmp == NULL || !lockref_get_not_dead(&tmp->gl_lockref)) {
+ if (++tries < 100) {
++ rcu_read_unlock();
+ cond_resched();
+ goto again;
+ }
+ tmp = NULL;
+ ret = -ENOMEM;
+ }
++ rcu_read_unlock();
+ } else {
+ WARN_ON_ONCE(ret);
+ }
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index e1652665bd93..5e659ee08d6a 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -1863,7 +1863,9 @@ static void __jbd2_journal_temp_unlink_buffer(struct journal_head *jh)
+
+ __blist_del_buffer(list, jh);
+ jh->b_jlist = BJ_None;
+- if (test_clear_buffer_jbddirty(bh))
++ if (transaction && is_journal_aborted(transaction->t_journal))
++ clear_buffer_jbddirty(bh);
++ else if (test_clear_buffer_jbddirty(bh))
+ mark_buffer_dirty(bh); /* Expose it to the VM */
+ }
+
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 0ca4af8cca5d..42e3e9daa328 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -1053,9 +1053,6 @@ static int ff_layout_async_handle_error_v4(struct rpc_task *task,
+ struct nfs_client *mds_client = mds_server->nfs_client;
+ struct nfs4_slot_table *tbl = &clp->cl_session->fc_slot_table;
+
+- if (task->tk_status >= 0)
+- return 0;
+-
+ switch (task->tk_status) {
+ /* MDS state errors */
+ case -NFS4ERR_DELEG_REVOKED:
+@@ -1157,9 +1154,6 @@ static int ff_layout_async_handle_error_v3(struct rpc_task *task,
+ {
+ struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);
+
+- if (task->tk_status >= 0)
+- return 0;
+-
+ switch (task->tk_status) {
+ /* File access problems. Don't mark the device as unavailable */
+ case -EACCES:
+@@ -1195,6 +1189,13 @@ static int ff_layout_async_handle_error(struct rpc_task *task,
+ {
+ int vers = clp->cl_nfs_mod->rpc_vers->number;
+
++ if (task->tk_status >= 0)
++ return 0;
++
++ /* Handle the case of an invalid layout segment */
++ if (!pnfs_is_valid_lseg(lseg))
++ return -NFS4ERR_RESET_TO_PNFS;
++
+ switch (vers) {
+ case 3:
+ return ff_layout_async_handle_error_v3(task, lseg, idx);
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index d12ff9385f49..78da4087dc73 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -128,30 +128,26 @@ int nfs42_proc_deallocate(struct file *filep, loff_t offset, loff_t len)
+ return err;
+ }
+
+-static ssize_t _nfs42_proc_copy(struct file *src, loff_t pos_src,
++static ssize_t _nfs42_proc_copy(struct file *src,
+ struct nfs_lock_context *src_lock,
+- struct file *dst, loff_t pos_dst,
++ struct file *dst,
+ struct nfs_lock_context *dst_lock,
+- size_t count)
++ struct nfs42_copy_args *args,
++ struct nfs42_copy_res *res)
+ {
+- struct nfs42_copy_args args = {
+- .src_fh = NFS_FH(file_inode(src)),
+- .src_pos = pos_src,
+- .dst_fh = NFS_FH(file_inode(dst)),
+- .dst_pos = pos_dst,
+- .count = count,
+- };
+- struct nfs42_copy_res res;
+ struct rpc_message msg = {
+ .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_COPY],
+- .rpc_argp = &args,
+- .rpc_resp = &res,
++ .rpc_argp = args,
++ .rpc_resp = res,
+ };
+ struct inode *dst_inode = file_inode(dst);
+ struct nfs_server *server = NFS_SERVER(dst_inode);
++ loff_t pos_src = args->src_pos;
++ loff_t pos_dst = args->dst_pos;
++ size_t count = args->count;
+ int status;
+
+- status = nfs4_set_rw_stateid(&args.src_stateid, src_lock->open_context,
++ status = nfs4_set_rw_stateid(&args->src_stateid, src_lock->open_context,
+ src_lock, FMODE_READ);
+ if (status)
+ return status;
+@@ -161,7 +157,7 @@ static ssize_t _nfs42_proc_copy(struct file *src, loff_t pos_src,
+ if (status)
+ return status;
+
+- status = nfs4_set_rw_stateid(&args.dst_stateid, dst_lock->open_context,
++ status = nfs4_set_rw_stateid(&args->dst_stateid, dst_lock->open_context,
+ dst_lock, FMODE_WRITE);
+ if (status)
+ return status;
+@@ -171,22 +167,22 @@ static ssize_t _nfs42_proc_copy(struct file *src, loff_t pos_src,
+ return status;
+
+ status = nfs4_call_sync(server->client, server, &msg,
+- &args.seq_args, &res.seq_res, 0);
++ &args->seq_args, &res->seq_res, 0);
+ if (status == -ENOTSUPP)
+ server->caps &= ~NFS_CAP_COPY;
+ if (status)
+ return status;
+
+- if (res.write_res.verifier.committed != NFS_FILE_SYNC) {
+- status = nfs_commit_file(dst, &res.write_res.verifier.verifier);
++ if (res->write_res.verifier.committed != NFS_FILE_SYNC) {
++ status = nfs_commit_file(dst, &res->write_res.verifier.verifier);
+ if (status)
+ return status;
+ }
+
+ truncate_pagecache_range(dst_inode, pos_dst,
+- pos_dst + res.write_res.count);
++ pos_dst + res->write_res.count);
+
+- return res.write_res.count;
++ return res->write_res.count;
+ }
+
+ ssize_t nfs42_proc_copy(struct file *src, loff_t pos_src,
+@@ -196,8 +192,22 @@ ssize_t nfs42_proc_copy(struct file *src, loff_t pos_src,
+ struct nfs_server *server = NFS_SERVER(file_inode(dst));
+ struct nfs_lock_context *src_lock;
+ struct nfs_lock_context *dst_lock;
+- struct nfs4_exception src_exception = { };
+- struct nfs4_exception dst_exception = { };
++ struct nfs42_copy_args args = {
++ .src_fh = NFS_FH(file_inode(src)),
++ .src_pos = pos_src,
++ .dst_fh = NFS_FH(file_inode(dst)),
++ .dst_pos = pos_dst,
++ .count = count,
++ };
++ struct nfs42_copy_res res;
++ struct nfs4_exception src_exception = {
++ .inode = file_inode(src),
++ .stateid = &args.src_stateid,
++ };
++ struct nfs4_exception dst_exception = {
++ .inode = file_inode(dst),
++ .stateid = &args.dst_stateid,
++ };
+ ssize_t err, err2;
+
+ if (!nfs_server_capable(file_inode(dst), NFS_CAP_COPY))
+@@ -207,7 +217,6 @@ ssize_t nfs42_proc_copy(struct file *src, loff_t pos_src,
+ if (IS_ERR(src_lock))
+ return PTR_ERR(src_lock);
+
+- src_exception.inode = file_inode(src);
+ src_exception.state = src_lock->open_context->state;
+
+ dst_lock = nfs_get_lock_context(nfs_file_open_context(dst));
+@@ -216,15 +225,17 @@ ssize_t nfs42_proc_copy(struct file *src, loff_t pos_src,
+ goto out_put_src_lock;
+ }
+
+- dst_exception.inode = file_inode(dst);
+ dst_exception.state = dst_lock->open_context->state;
+
+ do {
+ inode_lock(file_inode(dst));
+- err = _nfs42_proc_copy(src, pos_src, src_lock,
+- dst, pos_dst, dst_lock, count);
++ err = _nfs42_proc_copy(src, src_lock,
++ dst, dst_lock,
++ &args, &res);
+ inode_unlock(file_inode(dst));
+
++ if (err >= 0)
++ break;
+ if (err == -ENOTSUPP) {
+ err = -EOPNOTSUPP;
+ break;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 0a0eaecf9676..37bcd887f742 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -815,10 +815,6 @@ static int nfs41_sequence_process(struct rpc_task *task,
+ case -NFS4ERR_SEQ_FALSE_RETRY:
+ ++slot->seq_nr;
+ goto retry_nowait;
+- case -NFS4ERR_DEADSESSION:
+- case -NFS4ERR_BADSESSION:
+- nfs4_schedule_session_recovery(session, res->sr_status);
+- goto retry_nowait;
+ default:
+ /* Just update the slot sequence no. */
+ slot->seq_done = 1;
+@@ -2730,6 +2726,7 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ ret = PTR_ERR(state);
+ if (IS_ERR(state))
+ goto out;
++ ctx->state = state;
+ if (server->caps & NFS_CAP_POSIX_LOCK)
+ set_bit(NFS_STATE_POSIX_LOCKS, &state->flags);
+ if (opendata->o_res.rflags & NFS4_OPEN_RESULT_MAY_NOTIFY_LOCK)
+@@ -2755,7 +2752,6 @@ static int _nfs4_open_and_get_state(struct nfs4_opendata *opendata,
+ if (ret != 0)
+ goto out;
+
+- ctx->state = state;
+ if (d_inode(dentry) == state->inode) {
+ nfs_inode_attach_open_context(ctx);
+ if (read_seqcount_retry(&sp->so_reclaim_seqcount, seq))
+@@ -5069,7 +5065,7 @@ static void nfs4_write_cached_acl(struct inode *inode, struct page **pages, size
+ */
+ static ssize_t __nfs4_get_acl_uncached(struct inode *inode, void *buf, size_t buflen)
+ {
+- struct page *pages[NFS4ACL_MAXPAGES] = {NULL, };
++ struct page *pages[NFS4ACL_MAXPAGES + 1] = {NULL, };
+ struct nfs_getaclargs args = {
+ .fh = NFS_FH(inode),
+ .acl_pages = pages,
+@@ -5083,13 +5079,9 @@ static ssize_t __nfs4_get_acl_uncached(struct inode *inode, void *buf, size_t bu
+ .rpc_argp = &args,
+ .rpc_resp = &res,
+ };
+- unsigned int npages = DIV_ROUND_UP(buflen, PAGE_SIZE);
++ unsigned int npages = DIV_ROUND_UP(buflen, PAGE_SIZE) + 1;
+ int ret = -ENOMEM, i;
+
+- /* As long as we're doing a round trip to the server anyway,
+- * let's be prepared for a page of acl data. */
+- if (npages == 0)
+- npages = 1;
+ if (npages > ARRAY_SIZE(pages))
+ return -ERANGE;
+
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index e9255cb453e6..bb95dd2edeef 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -2524,7 +2524,7 @@ static void nfs4_xdr_enc_getacl(struct rpc_rqst *req, struct xdr_stream *xdr,
+ encode_compound_hdr(xdr, req, &hdr);
+ encode_sequence(xdr, &args->seq_args, &hdr);
+ encode_putfh(xdr, args->fh, &hdr);
+- replen = hdr.replen + op_decode_hdr_maxsz + 1;
++ replen = hdr.replen + op_decode_hdr_maxsz;
+ encode_getattr_two(xdr, FATTR4_WORD0_ACL, 0, &hdr);
+
+ xdr_inline_pages(&req->rq_rcv_buf, replen << 2,
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index 26c6fdb4bf67..3c36ed5a1f07 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -377,7 +377,7 @@ nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap,
+ __be32 err;
+ int host_err;
+ bool get_write_count;
+- int size_change = 0;
++ bool size_change = (iap->ia_valid & ATTR_SIZE);
+
+ if (iap->ia_valid & (ATTR_ATIME | ATTR_MTIME | ATTR_SIZE))
+ accmode |= NFSD_MAY_WRITE|NFSD_MAY_OWNER_OVERRIDE;
+@@ -390,11 +390,11 @@ nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap,
+ /* Get inode */
+ err = fh_verify(rqstp, fhp, ftype, accmode);
+ if (err)
+- goto out;
++ return err;
+ if (get_write_count) {
+ host_err = fh_want_write(fhp);
+ if (host_err)
+- return nfserrno(host_err);
++ goto out;
+ }
+
+ dentry = fhp->fh_dentry;
+@@ -405,20 +405,28 @@ nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap,
+ iap->ia_valid &= ~ATTR_MODE;
+
+ if (!iap->ia_valid)
+- goto out;
++ return 0;
+
+ nfsd_sanitize_attrs(inode, iap);
+
++ if (check_guard && guardtime != inode->i_ctime.tv_sec)
++ return nfserr_notsync;
++
+ /*
+ * The size case is special, it changes the file in addition to the
+- * attributes.
++ * attributes, and file systems don't expect it to be mixed with
++ * "random" attribute changes. We thus split out the size change
++ * into a separate call to ->setattr, and do the rest as a separate
++ * setattr call.
+ */
+- if (iap->ia_valid & ATTR_SIZE) {
++ if (size_change) {
+ err = nfsd_get_write_access(rqstp, fhp, iap);
+ if (err)
+- goto out;
+- size_change = 1;
++ return err;
++ }
+
++ fh_lock(fhp);
++ if (size_change) {
+ /*
+ * RFC5661, Section 18.30.4:
+ * Changing the size of a file with SETATTR indirectly
+@@ -426,29 +434,36 @@ nfsd_setattr(struct svc_rqst *rqstp, struct svc_fh *fhp, struct iattr *iap,
+ *
+ * (and similar for the older RFCs)
+ */
+- if (iap->ia_size != i_size_read(inode))
+- iap->ia_valid |= ATTR_MTIME;
+- }
++ struct iattr size_attr = {
++ .ia_valid = ATTR_SIZE | ATTR_CTIME | ATTR_MTIME,
++ .ia_size = iap->ia_size,
++ };
+
+- iap->ia_valid |= ATTR_CTIME;
++ host_err = notify_change(dentry, &size_attr, NULL);
++ if (host_err)
++ goto out_unlock;
++ iap->ia_valid &= ~ATTR_SIZE;
+
+- if (check_guard && guardtime != inode->i_ctime.tv_sec) {
+- err = nfserr_notsync;
+- goto out_put_write_access;
++ /*
++ * Avoid the additional setattr call below if the only other
++ * attribute that the client sends is the mtime, as we update
++ * it as part of the size change above.
++ */
++ if ((iap->ia_valid & ~ATTR_MTIME) == 0)
++ goto out_unlock;
+ }
+
+- fh_lock(fhp);
++ iap->ia_valid |= ATTR_CTIME;
+ host_err = notify_change(dentry, iap, NULL);
+- fh_unlock(fhp);
+- err = nfserrno(host_err);
+
+-out_put_write_access:
++out_unlock:
++ fh_unlock(fhp);
+ if (size_change)
+ put_write_access(inode);
+- if (!err)
+- err = nfserrno(commit_metadata(fhp));
+ out:
+- return err;
++ if (!host_err)
++ host_err = commit_metadata(fhp);
++ return nfserrno(host_err);
+ }
+
+ #if defined(CONFIG_NFSD_V4)
+diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
+index 404e9558e879..9acce0bc5863 100644
+--- a/include/crypto/algapi.h
++++ b/include/crypto/algapi.h
+@@ -344,13 +344,18 @@ static inline struct crypto_alg *crypto_get_attr_alg(struct rtattr **tb,
+ return crypto_attr_alg(tb[1], type, mask);
+ }
+
++static inline int crypto_requires_off(u32 type, u32 mask, u32 off)
++{
++ return (type ^ off) & mask & off;
++}
++
+ /*
+ * Returns CRYPTO_ALG_ASYNC if type/mask requires the use of sync algorithms.
+ * Otherwise returns zero.
+ */
+ static inline int crypto_requires_sync(u32 type, u32 mask)
+ {
+- return (type ^ CRYPTO_ALG_ASYNC) & mask & CRYPTO_ALG_ASYNC;
++ return crypto_requires_off(type, mask, CRYPTO_ALG_ASYNC);
+ }
+
+ noinline unsigned long __crypto_memneq(const void *a, const void *b, size_t size);
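
crypto_requires_off() above generalizes the old crypto_requires_sync() test: it reports a requirement only when the caller masks the given flag and did not set it in type. A small stand-alone sketch of the bit logic, with a made-up value for the ASYNC bit:

#include <stdio.h>

#define ALG_ASYNC 0x80u		/* stand-in for CRYPTO_ALG_ASYNC */

/* Non-zero when (type, mask) requires the 'off' bit to be clear. */
static unsigned int requires_off(unsigned int type, unsigned int mask,
				 unsigned int off)
{
	return (type ^ off) & mask & off;
}

int main(void)
{
	/* Caller masks ASYNC and left it clear: synchronous algorithm required. */
	printf("%#x\n", requires_off(0, ALG_ASYNC, ALG_ASYNC));		/* 0x80 */
	/* Caller explicitly allows async: no requirement. */
	printf("%#x\n", requires_off(ALG_ASYNC, ALG_ASYNC, ALG_ASYNC));	/* 0 */
	/* Caller does not mask the bit at all: no requirement. */
	printf("%#x\n", requires_off(0, 0, ALG_ASYNC));			/* 0 */
	return 0;
}
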
+diff --git a/include/linux/compat.h b/include/linux/compat.h
+index 63609398ef9f..d8535a430caf 100644
+--- a/include/linux/compat.h
++++ b/include/linux/compat.h
+@@ -711,8 +711,10 @@ int __compat_save_altstack(compat_stack_t __user *, unsigned long);
+ compat_stack_t __user *__uss = uss; \
+ struct task_struct *t = current; \
+ put_user_ex(ptr_to_compat((void __user *)t->sas_ss_sp), &__uss->ss_sp); \
+- put_user_ex(sas_ss_flags(sp), &__uss->ss_flags); \
++ put_user_ex(t->sas_ss_flags, &__uss->ss_flags); \
+ put_user_ex(t->sas_ss_size, &__uss->ss_size); \
++ if (t->sas_ss_flags & SS_AUTODISARM) \
++ sas_ss_reset(t); \
+ } while (0);
+
+ asmlinkage long compat_sys_sched_rr_get_interval(compat_pid_t pid,
+diff --git a/include/linux/devfreq.h b/include/linux/devfreq.h
+index 2de4e2eea180..e0acb0e5243b 100644
+--- a/include/linux/devfreq.h
++++ b/include/linux/devfreq.h
+@@ -104,6 +104,8 @@ struct devfreq_dev_profile {
+ * struct devfreq_governor - Devfreq policy governor
+ * @node: list node - contains registered devfreq governors
+ * @name: Governor's name
++ * @immutable: Immutable flag for governor. If the value is 1,
++ *		this governor can never be changed to another governor.
+ * @get_target_freq: Returns desired operating frequency for the device.
+ * Basically, get_target_freq will run
+ * devfreq_dev_profile.get_dev_status() to get the
+@@ -121,6 +123,7 @@ struct devfreq_governor {
+ struct list_head node;
+
+ const char name[DEVFREQ_NAME_LEN];
++ const unsigned int immutable;
+ int (*get_target_freq)(struct devfreq *this, unsigned long *freq);
+ int (*event_handler)(struct devfreq *devfreq,
+ unsigned int event, void *data);
+diff --git a/include/linux/fsl_ifc.h b/include/linux/fsl_ifc.h
+index 3f9778cbc79d..c332f0a45607 100644
+--- a/include/linux/fsl_ifc.h
++++ b/include/linux/fsl_ifc.h
+@@ -733,8 +733,12 @@ struct fsl_ifc_nand {
+ __be32 nand_erattr1;
+ u32 res19[0x10];
+ __be32 nand_fsr;
+- u32 res20[0x3];
+- __be32 nand_eccstat[6];
++ u32 res20;
++	/* The V1 nand_eccstat is actually 4 words that overlap the
++ * V2 nand_eccstat.
++ */
++ __be32 v1_nand_eccstat[2];
++ __be32 v2_nand_eccstat[6];
+ u32 res21[0x1c];
+ __be32 nanndcr;
+ u32 res22[0x2];
+diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
+index 183efde54269..62679a93e01e 100644
+--- a/include/linux/hyperv.h
++++ b/include/linux/hyperv.h
+@@ -641,6 +641,7 @@ struct vmbus_channel_msginfo {
+
+ /* Synchronize the request/response if needed */
+ struct completion waitevent;
++ struct vmbus_channel *waiting_channel;
+ union {
+ struct vmbus_channel_version_supported version_supported;
+ struct vmbus_channel_open_result open_result;
+diff --git a/include/linux/intel-iommu.h b/include/linux/intel-iommu.h
+index d49e26c6cdc7..23e129ef6726 100644
+--- a/include/linux/intel-iommu.h
++++ b/include/linux/intel-iommu.h
+@@ -153,8 +153,8 @@ static inline void dmar_writeq(void __iomem *addr, u64 val)
+ #define DMA_TLB_GLOBAL_FLUSH (((u64)1) << 60)
+ #define DMA_TLB_DSI_FLUSH (((u64)2) << 60)
+ #define DMA_TLB_PSI_FLUSH (((u64)3) << 60)
+-#define DMA_TLB_IIRG(type) ((type >> 60) & 7)
+-#define DMA_TLB_IAIG(val) (((val) >> 57) & 7)
++#define DMA_TLB_IIRG(type) ((type >> 60) & 3)
++#define DMA_TLB_IAIG(val) (((val) >> 57) & 3)
+ #define DMA_TLB_READ_DRAIN (((u64)1) << 49)
+ #define DMA_TLB_WRITE_DRAIN (((u64)1) << 48)
+ #define DMA_TLB_DID(id) (((u64)((id) & 0xffff)) << 32)
+@@ -164,9 +164,9 @@ static inline void dmar_writeq(void __iomem *addr, u64 val)
+
+ /* INVALID_DESC */
+ #define DMA_CCMD_INVL_GRANU_OFFSET 61
+-#define DMA_ID_TLB_GLOBAL_FLUSH (((u64)1) << 3)
+-#define DMA_ID_TLB_DSI_FLUSH (((u64)2) << 3)
+-#define DMA_ID_TLB_PSI_FLUSH (((u64)3) << 3)
++#define DMA_ID_TLB_GLOBAL_FLUSH (((u64)1) << 4)
++#define DMA_ID_TLB_DSI_FLUSH (((u64)2) << 4)
++#define DMA_ID_TLB_PSI_FLUSH (((u64)3) << 4)
+ #define DMA_ID_TLB_READ_DRAIN (((u64)1) << 7)
+ #define DMA_ID_TLB_WRITE_DRAIN (((u64)1) << 6)
+ #define DMA_ID_TLB_DID(id) (((u64)((id & 0xffff) << 16)))
+@@ -316,8 +316,8 @@ enum {
+ #define QI_DEV_EIOTLB_SIZE (((u64)1) << 11)
+ #define QI_DEV_EIOTLB_GLOB(g) ((u64)g)
+ #define QI_DEV_EIOTLB_PASID(p) (((u64)p) << 32)
+-#define QI_DEV_EIOTLB_SID(sid) ((u64)((sid) & 0xffff) << 32)
+-#define QI_DEV_EIOTLB_QDEP(qd) (((qd) & 0x1f) << 16)
++#define QI_DEV_EIOTLB_SID(sid) ((u64)((sid) & 0xffff) << 16)
++#define QI_DEV_EIOTLB_QDEP(qd) ((u64)((qd) & 0x1f) << 4)
+ #define QI_DEV_EIOTLB_MAX_INVS 32
+
+ #define QI_PGRP_IDX(idx) (((u64)(idx)) << 55)
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index f4aac87adcc3..82fc632fd11d 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -779,7 +779,7 @@ static inline struct pglist_data *lruvec_pgdat(struct lruvec *lruvec)
+ #endif
+ }
+
+-extern unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru);
++extern unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx);
+
+ #ifdef CONFIG_HAVE_MEMORY_PRESENT
+ void memory_present(int nid, unsigned long start, unsigned long end);
+diff --git a/include/rdma/ib_sa.h b/include/rdma/ib_sa.h
+index 5ee7aab95eb8..fd0e53219f93 100644
+--- a/include/rdma/ib_sa.h
++++ b/include/rdma/ib_sa.h
+@@ -153,12 +153,12 @@ struct ib_sa_path_rec {
+ union ib_gid sgid;
+ __be16 dlid;
+ __be16 slid;
+- int raw_traffic;
++ u8 raw_traffic;
+ /* reserved */
+ __be32 flow_label;
+ u8 hop_limit;
+ u8 traffic_class;
+- int reversible;
++ u8 reversible;
+ u8 numb_path;
+ __be16 pkey;
+ __be16 qos_class;
+@@ -220,7 +220,7 @@ struct ib_sa_mcmember_rec {
+ u8 hop_limit;
+ u8 scope;
+ u8 join_state;
+- int proxy_join;
++ u8 proxy_join;
+ };
+
+ /* Service Record Component Mask Sec 15.2.5.14 Ver 1.1 */
+diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
+index 8990e580b278..be41c76ddd48 100644
+--- a/include/scsi/scsi_device.h
++++ b/include/scsi/scsi_device.h
+@@ -315,6 +315,7 @@ extern void scsi_remove_device(struct scsi_device *);
+ extern int scsi_unregister_device_handler(struct scsi_device_handler *scsi_dh);
+ void scsi_attach_vpd(struct scsi_device *sdev);
+
++extern struct scsi_device *scsi_device_from_queue(struct request_queue *q);
+ extern int scsi_device_get(struct scsi_device *);
+ extern void scsi_device_put(struct scsi_device *);
+ extern struct scsi_device *scsi_device_lookup(struct Scsi_Host *,
+diff --git a/include/soc/at91/at91sam9_ddrsdr.h b/include/soc/at91/at91sam9_ddrsdr.h
+index dc10c52e0e91..393362bdb860 100644
+--- a/include/soc/at91/at91sam9_ddrsdr.h
++++ b/include/soc/at91/at91sam9_ddrsdr.h
+@@ -81,6 +81,7 @@
+ #define AT91_DDRSDRC_LPCB_POWER_DOWN 2
+ #define AT91_DDRSDRC_LPCB_DEEP_POWER_DOWN 3
+ #define AT91_DDRSDRC_CLKFR (1 << 2) /* Clock Frozen */
++#define AT91_DDRSDRC_LPDDR2_PWOFF (1 << 3) /* LPDDR Power Off */
+ #define AT91_DDRSDRC_PASR (7 << 4) /* Partial Array Self Refresh */
+ #define AT91_DDRSDRC_TCSR (3 << 8) /* Temperature Compensated Self Refresh */
+ #define AT91_DDRSDRC_DS (3 << 10) /* Drive Strength */
+@@ -96,7 +97,9 @@
+ #define AT91_DDRSDRC_MD_SDR 0
+ #define AT91_DDRSDRC_MD_LOW_POWER_SDR 1
+ #define AT91_DDRSDRC_MD_LOW_POWER_DDR 3
++#define AT91_DDRSDRC_MD_LPDDR3 5
+ #define AT91_DDRSDRC_MD_DDR2 6 /* [SAM9 Only] */
++#define AT91_DDRSDRC_MD_LPDDR2 7
+ #define AT91_DDRSDRC_DBW (1 << 4) /* Data Bus Width */
+ #define AT91_DDRSDRC_DBW_32BITS (0 << 4)
+ #define AT91_DDRSDRC_DBW_16BITS (1 << 4)
+diff --git a/ipc/shm.c b/ipc/shm.c
+index 81203e8ba013..7512b4fecff4 100644
+--- a/ipc/shm.c
++++ b/ipc/shm.c
+@@ -1091,8 +1091,8 @@ SYSCALL_DEFINE3(shmctl, int, shmid, int, cmd, struct shmid_ds __user *, buf)
+ * "raddr" thing points to kernel space, and there has to be a wrapper around
+ * this.
+ */
+-long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr,
+- unsigned long shmlba)
++long do_shmat(int shmid, char __user *shmaddr, int shmflg,
++ ulong *raddr, unsigned long shmlba)
+ {
+ struct shmid_kernel *shp;
+ unsigned long addr;
+@@ -1113,8 +1113,13 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr,
+ goto out;
+ else if ((addr = (ulong)shmaddr)) {
+ if (addr & (shmlba - 1)) {
+- if (shmflg & SHM_RND)
+- addr &= ~(shmlba - 1); /* round down */
++ /*
++ * Round down to the nearest multiple of shmlba.
++			 * For sane do_mmap_pgoff() parameters, avoid
++			 * round-downs that would hit the nil page via MAP_FIXED.
++ */
++ if ((shmflg & SHM_RND) && addr >= shmlba)
++ addr &= ~(shmlba - 1);
+ else
+ #ifndef __ARCH_FORCE_SHMLBA
+ if (addr & ~PAGE_MASK)
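
The SHM_RND change above only rounds the attach address down when the result cannot be the nil page. A user-space sketch of just that rounding decision (the SHMLBA value is chosen arbitrarily, and the kernel's error paths for misaligned addresses are omitted):

#include <stdio.h>

/* SHMLBA is typically the page size or a cache-alignment multiple of it. */
#define SHMLBA 0x4000UL

static unsigned long round_attach_addr(unsigned long addr, int shm_rnd)
{
	if (addr & (SHMLBA - 1)) {
		/* Round down only when that cannot land on the nil page. */
		if (shm_rnd && addr >= SHMLBA)
			addr &= ~(SHMLBA - 1);
	}
	return addr;
}

int main(void)
{
	printf("%#lx\n", round_attach_addr(0x12345, 1));	/* 0x10000 */
	printf("%#lx\n", round_attach_addr(0x1234, 1));		/* unchanged: would round to 0 */
	return 0;
}
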
+diff --git a/kernel/membarrier.c b/kernel/membarrier.c
+index 536c727a56e9..9f9284f37f8d 100644
+--- a/kernel/membarrier.c
++++ b/kernel/membarrier.c
+@@ -16,6 +16,7 @@
+
+ #include <linux/syscalls.h>
+ #include <linux/membarrier.h>
++#include <linux/tick.h>
+
+ /*
+ * Bitmask made from a "or" of all commands within enum membarrier_cmd,
+@@ -51,6 +52,9 @@
+ */
+ SYSCALL_DEFINE2(membarrier, int, cmd, int, flags)
+ {
++ /* MEMBARRIER_CMD_SHARED is not compatible with nohz_full. */
++ if (tick_nohz_full_enabled())
++ return -ENOSYS;
+ if (unlikely(flags))
+ return -EINVAL;
+ switch (cmd) {
+diff --git a/kernel/memremap.c b/kernel/memremap.c
+index 9ecedc28b928..06123234f118 100644
+--- a/kernel/memremap.c
++++ b/kernel/memremap.c
+@@ -246,9 +246,13 @@ static void devm_memremap_pages_release(struct device *dev, void *data)
+ /* pages are dead and unused, undo the arch mapping */
+ align_start = res->start & ~(SECTION_SIZE - 1);
+ align_size = ALIGN(resource_size(res), SECTION_SIZE);
++
++ lock_device_hotplug();
+ mem_hotplug_begin();
+ arch_remove_memory(align_start, align_size);
+ mem_hotplug_done();
++ unlock_device_hotplug();
++
+ untrack_pfn(NULL, PHYS_PFN(align_start), align_size);
+ pgmap_radix_release(res);
+ dev_WARN_ONCE(dev, pgmap->altmap && pgmap->altmap->alloc,
+@@ -360,9 +364,11 @@ void *devm_memremap_pages(struct device *dev, struct resource *res,
+ if (error)
+ goto err_pfn_remap;
+
++ lock_device_hotplug();
+ mem_hotplug_begin();
+ error = arch_add_memory(nid, align_start, align_size, true);
+ mem_hotplug_done();
++ unlock_device_hotplug();
+ if (error)
+ goto err_add_memory;
+
+diff --git a/kernel/module.c b/kernel/module.c
+index 3d8f126208e3..1cd2bf36f405 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -3719,6 +3719,7 @@ static int load_module(struct load_info *info, const char __user *uargs,
+ mod_sysfs_teardown(mod);
+ coming_cleanup:
+ mod->state = MODULE_STATE_GOING;
++ destroy_params(mod->kp, mod->num_kp);
+ blocking_notifier_call_chain(&module_notify_list,
+ MODULE_STATE_GOING, mod);
+ klp_module_going(mod);
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 3603d93a1968..0f99304d39fd 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -3239,10 +3239,17 @@ int compat_restore_altstack(const compat_stack_t __user *uss)
+
+ int __compat_save_altstack(compat_stack_t __user *uss, unsigned long sp)
+ {
++ int err;
+ struct task_struct *t = current;
+- return __put_user(ptr_to_compat((void __user *)t->sas_ss_sp), &uss->ss_sp) |
+- __put_user(sas_ss_flags(sp), &uss->ss_flags) |
++ err = __put_user(ptr_to_compat((void __user *)t->sas_ss_sp),
++ &uss->ss_sp) |
++ __put_user(t->sas_ss_flags, &uss->ss_flags) |
+ __put_user(t->sas_ss_size, &uss->ss_size);
++ if (err)
++ return err;
++ if (t->sas_ss_flags & SS_AUTODISARM)
++ sas_ss_reset(t);
++ return 0;
+ }
+ #endif
+
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 3f9afded581b..3afa2a58acf0 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -1002,9 +1002,12 @@ void page_endio(struct page *page, bool is_write, int err)
+ unlock_page(page);
+ } else {
+ if (err) {
++ struct address_space *mapping;
++
+ SetPageError(page);
+- if (page->mapping)
+- mapping_set_error(page->mapping, err);
++ mapping = page_mapping(page);
++ if (mapping)
++ mapping_set_error(mapping, err);
+ }
+ end_page_writeback(page);
+ }
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index f3e0c69a97b7..1a5f6655958e 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -2877,7 +2877,7 @@ bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
+ #ifdef CONFIG_NUMA
+ static bool zone_allows_reclaim(struct zone *local_zone, struct zone *zone)
+ {
+- return node_distance(zone_to_nid(local_zone), zone_to_nid(zone)) <
++ return node_distance(zone_to_nid(local_zone), zone_to_nid(zone)) <=
+ RECLAIM_DISTANCE;
+ }
+ #else /* CONFIG_NUMA */
+diff --git a/mm/vmpressure.c b/mm/vmpressure.c
+index 149fdf6c5c56..6063581f705c 100644
+--- a/mm/vmpressure.c
++++ b/mm/vmpressure.c
+@@ -112,9 +112,16 @@ static enum vmpressure_levels vmpressure_calc_level(unsigned long scanned,
+ unsigned long reclaimed)
+ {
+ unsigned long scale = scanned + reclaimed;
+- unsigned long pressure;
++ unsigned long pressure = 0;
+
+ /*
++ * reclaimed can be greater than scanned in cases
++	 * like THP, where scanned is 1 and reclaimed
++	 * could be 512.
++ */
++ if (reclaimed >= scanned)
++ goto out;
++ /*
+ * We calculate the ratio (in percents) of how many pages were
+ * scanned vs. reclaimed in a given time frame (window). Note that
+ * time is in VM reclaimer's "ticks", i.e. number of pages
+@@ -124,6 +131,7 @@ static enum vmpressure_levels vmpressure_calc_level(unsigned long scanned,
+ pressure = scale - (reclaimed * scale / scanned);
+ pressure = pressure * 100 / scale;
+
++out:
+ pr_debug("%s: %3lu (s: %lu r: %lu)\n", __func__, pressure,
+ scanned, reclaimed);
+
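
For reference, the vmpressure ratio computed above is the percentage of scanned pages that were not reclaimed, with the new guard returning zero when reclaimed exceeds scanned (the THP case mentioned in the comment). A stand-alone sketch of the calculation:

#include <stdio.h>

/* Percentage of scanned pages that were NOT reclaimed in the window. */
static unsigned long vmpressure_pct(unsigned long scanned,
				    unsigned long reclaimed)
{
	unsigned long scale = scanned + reclaimed;
	unsigned long pressure;

	/* THP can report reclaimed > scanned (e.g. 512 vs 1); treat as zero. */
	if (reclaimed >= scanned)
		return 0;

	pressure = scale - (reclaimed * scale / scanned);
	return pressure * 100 / scale;
}

int main(void)
{
	printf("%lu\n", vmpressure_pct(100, 80));	/* light pressure: 20 */
	printf("%lu\n", vmpressure_pct(100, 5));	/* heavy pressure: 95 */
	printf("%lu\n", vmpressure_pct(1, 512));	/* THP case: 0 */
	return 0;
}
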
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 532a2a750952..36a9aa98c207 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -234,22 +234,39 @@ bool pgdat_reclaimable(struct pglist_data *pgdat)
+ pgdat_reclaimable_pages(pgdat) * 6;
+ }
+
+-unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru)
++/**
++ * lruvec_lru_size - Returns the number of pages on the given LRU list.
++ * @lruvec: lru vector
++ * @lru: lru to use
++ * @zone_idx: zones to consider (use MAX_NR_ZONES for the whole LRU list)
++ */
++unsigned long lruvec_lru_size(struct lruvec *lruvec, enum lru_list lru, int zone_idx)
+ {
++ unsigned long lru_size;
++ int zid;
++
+ if (!mem_cgroup_disabled())
+- return mem_cgroup_get_lru_size(lruvec, lru);
++ lru_size = mem_cgroup_get_lru_size(lruvec, lru);
++ else
++ lru_size = node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
+
+- return node_page_state(lruvec_pgdat(lruvec), NR_LRU_BASE + lru);
+-}
++ for (zid = zone_idx + 1; zid < MAX_NR_ZONES; zid++) {
++ struct zone *zone = &lruvec_pgdat(lruvec)->node_zones[zid];
++ unsigned long size;
+
+-unsigned long lruvec_zone_lru_size(struct lruvec *lruvec, enum lru_list lru,
+- int zone_idx)
+-{
+- if (!mem_cgroup_disabled())
+- return mem_cgroup_get_zone_lru_size(lruvec, lru, zone_idx);
++ if (!managed_zone(zone))
++ continue;
++
++ if (!mem_cgroup_disabled())
++ size = mem_cgroup_get_zone_lru_size(lruvec, lru, zid);
++ else
++ size = zone_page_state(&lruvec_pgdat(lruvec)->node_zones[zid],
++ NR_ZONE_LRU_BASE + lru);
++ lru_size -= min(size, lru_size);
++ }
++
++ return lru_size;
+
+- return zone_page_state(&lruvec_pgdat(lruvec)->node_zones[zone_idx],
+- NR_ZONE_LRU_BASE + lru);
+ }
+
+ /*
+@@ -2028,11 +2045,10 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
+ struct scan_control *sc)
+ {
+ unsigned long inactive_ratio;
+- unsigned long inactive;
+- unsigned long active;
++ unsigned long inactive, active;
++ enum lru_list inactive_lru = file * LRU_FILE;
++ enum lru_list active_lru = file * LRU_FILE + LRU_ACTIVE;
+ unsigned long gb;
+- struct pglist_data *pgdat = lruvec_pgdat(lruvec);
+- int zid;
+
+ /*
+ * If we don't have swap space, anonymous page deactivation
+@@ -2041,27 +2057,8 @@ static bool inactive_list_is_low(struct lruvec *lruvec, bool file,
+ if (!file && !total_swap_pages)
+ return false;
+
+- inactive = lruvec_lru_size(lruvec, file * LRU_FILE);
+- active = lruvec_lru_size(lruvec, file * LRU_FILE + LRU_ACTIVE);
+-
+- /*
+- * For zone-constrained allocations, it is necessary to check if
+- * deactivations are required for lowmem to be reclaimed. This
+- * calculates the inactive/active pages available in eligible zones.
+- */
+- for (zid = sc->reclaim_idx + 1; zid < MAX_NR_ZONES; zid++) {
+- struct zone *zone = &pgdat->node_zones[zid];
+- unsigned long inactive_zone, active_zone;
+-
+- if (!managed_zone(zone))
+- continue;
+-
+- inactive_zone = lruvec_zone_lru_size(lruvec, file * LRU_FILE, zid);
+- active_zone = lruvec_zone_lru_size(lruvec, (file * LRU_FILE) + LRU_ACTIVE, zid);
+-
+- inactive -= min(inactive, inactive_zone);
+- active -= min(active, active_zone);
+- }
++ inactive = lruvec_lru_size(lruvec, inactive_lru, sc->reclaim_idx);
++ active = lruvec_lru_size(lruvec, active_lru, sc->reclaim_idx);
+
+ gb = (inactive + active) >> (30 - PAGE_SHIFT);
+ if (gb)
+@@ -2208,7 +2205,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
+ * system is under heavy pressure.
+ */
+ if (!inactive_list_is_low(lruvec, true, sc) &&
+- lruvec_lru_size(lruvec, LRU_INACTIVE_FILE) >> sc->priority) {
++ lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, sc->reclaim_idx) >> sc->priority) {
+ scan_balance = SCAN_FILE;
+ goto out;
+ }
+@@ -2234,10 +2231,10 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
+ * anon in [0], file in [1]
+ */
+
+- anon = lruvec_lru_size(lruvec, LRU_ACTIVE_ANON) +
+- lruvec_lru_size(lruvec, LRU_INACTIVE_ANON);
+- file = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE) +
+- lruvec_lru_size(lruvec, LRU_INACTIVE_FILE);
++ anon = lruvec_lru_size(lruvec, LRU_ACTIVE_ANON, MAX_NR_ZONES) +
++ lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, MAX_NR_ZONES);
++ file = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) +
++ lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES);
+
+ spin_lock_irq(&pgdat->lru_lock);
+ if (unlikely(reclaim_stat->recent_scanned[0] > anon / 4)) {
+@@ -2275,7 +2272,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
+ unsigned long size;
+ unsigned long scan;
+
+- size = lruvec_lru_size(lruvec, lru);
++ size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx);
+ scan = size >> sc->priority;
+
+ if (!scan && pass && force_scan)
+diff --git a/mm/workingset.c b/mm/workingset.c
+index abb58ffa3c64..a67f5796b995 100644
+--- a/mm/workingset.c
++++ b/mm/workingset.c
+@@ -267,7 +267,7 @@ bool workingset_refault(void *shadow)
+ }
+ lruvec = mem_cgroup_lruvec(pgdat, memcg);
+ refault = atomic_long_read(&lruvec->inactive_age);
+- active_file = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE);
++ active_file = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES);
+ rcu_read_unlock();
+
+ /*
+diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
+index 842f049abb86..3a2417bb6ff0 100644
+--- a/net/ceph/osd_client.c
++++ b/net/ceph/osd_client.c
+@@ -672,7 +672,8 @@ void osd_req_op_extent_update(struct ceph_osd_request *osd_req,
+ BUG_ON(length > previous);
+
+ op->extent.length = length;
+- op->indata_len -= previous - length;
++ if (op->op == CEPH_OSD_OP_WRITE || op->op == CEPH_OSD_OP_WRITEFULL)
++ op->indata_len -= previous - length;
+ }
+ EXPORT_SYMBOL(osd_req_op_extent_update);
+
+diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
+index c52e0f2ffe52..d88988365cd2 100644
+--- a/net/sunrpc/xprtrdma/rpc_rdma.c
++++ b/net/sunrpc/xprtrdma/rpc_rdma.c
+@@ -125,14 +125,34 @@ void rpcrdma_set_max_header_sizes(struct rpcrdma_xprt *r_xprt)
+ /* The client can send a request inline as long as the RPCRDMA header
+ * plus the RPC call fit under the transport's inline limit. If the
+ * combined call message size exceeds that limit, the client must use
+- * the read chunk list for this operation.
++ * a Read chunk for this operation.
++ *
++ * A Read chunk is also required if sending the RPC call inline would
++ * exceed this device's max_sge limit.
+ */
+ static bool rpcrdma_args_inline(struct rpcrdma_xprt *r_xprt,
+ struct rpc_rqst *rqst)
+ {
+- struct rpcrdma_ia *ia = &r_xprt->rx_ia;
++ struct xdr_buf *xdr = &rqst->rq_snd_buf;
++ unsigned int count, remaining, offset;
++
++ if (xdr->len > r_xprt->rx_ia.ri_max_inline_write)
++ return false;
++
++ if (xdr->page_len) {
++ remaining = xdr->page_len;
++ offset = xdr->page_base & ~PAGE_MASK;
++ count = 0;
++ while (remaining) {
++ remaining -= min_t(unsigned int,
++ PAGE_SIZE - offset, remaining);
++ offset = 0;
++ if (++count > r_xprt->rx_ia.ri_max_send_sges)
++ return false;
++ }
++ }
+
+- return rqst->rq_snd_buf.len <= ia->ri_max_inline_write;
++ return true;
+ }
+
+ /* The client can't know how large the actual reply will be. Thus it
+@@ -186,9 +206,9 @@ rpcrdma_convert_kvec(struct kvec *vec, struct rpcrdma_mr_seg *seg, int n)
+ */
+
+ static int
+-rpcrdma_convert_iovs(struct xdr_buf *xdrbuf, unsigned int pos,
+- enum rpcrdma_chunktype type, struct rpcrdma_mr_seg *seg,
+- bool reminv_expected)
++rpcrdma_convert_iovs(struct rpcrdma_xprt *r_xprt, struct xdr_buf *xdrbuf,
++ unsigned int pos, enum rpcrdma_chunktype type,
++ struct rpcrdma_mr_seg *seg)
+ {
+ int len, n, p, page_base;
+ struct page **ppages;
+@@ -226,22 +246,21 @@ rpcrdma_convert_iovs(struct xdr_buf *xdrbuf, unsigned int pos,
+ if (len && n == RPCRDMA_MAX_SEGS)
+ goto out_overflow;
+
+- /* When encoding the read list, the tail is always sent inline */
+- if (type == rpcrdma_readch)
++ /* When encoding a Read chunk, the tail iovec contains an
++ * XDR pad and may be omitted.
++ */
++ if (type == rpcrdma_readch && r_xprt->rx_ia.ri_implicit_roundup)
+ return n;
+
+- /* When encoding the Write list, some servers need to see an extra
+- * segment for odd-length Write chunks. The upper layer provides
+- * space in the tail iovec for this purpose.
++ /* When encoding a Write chunk, some servers need to see an
++ * extra segment for non-XDR-aligned Write chunks. The upper
++ * layer provides space in the tail iovec that may be used
++ * for this purpose.
+ */
+- if (type == rpcrdma_writech && reminv_expected)
++ if (type == rpcrdma_writech && r_xprt->rx_ia.ri_implicit_roundup)
+ return n;
+
+ if (xdrbuf->tail[0].iov_len) {
+- /* the rpcrdma protocol allows us to omit any trailing
+- * xdr pad bytes, saving the server an RDMA operation. */
+- if (xdrbuf->tail[0].iov_len < 4 && xprt_rdma_pad_optimize)
+- return n;
+ n = rpcrdma_convert_kvec(&xdrbuf->tail[0], seg, n);
+ if (n == RPCRDMA_MAX_SEGS)
+ goto out_overflow;
+@@ -293,7 +312,8 @@ rpcrdma_encode_read_list(struct rpcrdma_xprt *r_xprt,
+ if (rtype == rpcrdma_areadch)
+ pos = 0;
+ seg = req->rl_segments;
+- nsegs = rpcrdma_convert_iovs(&rqst->rq_snd_buf, pos, rtype, seg, false);
++ nsegs = rpcrdma_convert_iovs(r_xprt, &rqst->rq_snd_buf, pos,
++ rtype, seg);
+ if (nsegs < 0)
+ return ERR_PTR(nsegs);
+
+@@ -355,10 +375,9 @@ rpcrdma_encode_write_list(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req,
+ }
+
+ seg = req->rl_segments;
+- nsegs = rpcrdma_convert_iovs(&rqst->rq_rcv_buf,
++ nsegs = rpcrdma_convert_iovs(r_xprt, &rqst->rq_rcv_buf,
+ rqst->rq_rcv_buf.head[0].iov_len,
+- wtype, seg,
+- r_xprt->rx_ia.ri_reminv_expected);
++ wtype, seg);
+ if (nsegs < 0)
+ return ERR_PTR(nsegs);
+
+@@ -423,8 +442,7 @@ rpcrdma_encode_reply_chunk(struct rpcrdma_xprt *r_xprt,
+ }
+
+ seg = req->rl_segments;
+- nsegs = rpcrdma_convert_iovs(&rqst->rq_rcv_buf, 0, wtype, seg,
+- r_xprt->rx_ia.ri_reminv_expected);
++ nsegs = rpcrdma_convert_iovs(r_xprt, &rqst->rq_rcv_buf, 0, wtype, seg);
+ if (nsegs < 0)
+ return ERR_PTR(nsegs);
+
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 534c178d2a7e..699058169cfc 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -67,7 +67,7 @@ unsigned int xprt_rdma_max_inline_read = RPCRDMA_DEF_INLINE;
+ static unsigned int xprt_rdma_max_inline_write = RPCRDMA_DEF_INLINE;
+ static unsigned int xprt_rdma_inline_write_padding;
+ static unsigned int xprt_rdma_memreg_strategy = RPCRDMA_FRMR;
+- int xprt_rdma_pad_optimize = 1;
++ int xprt_rdma_pad_optimize = 0;
+
+ #if IS_ENABLED(CONFIG_SUNRPC_DEBUG)
+
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 11d07748f699..61d16c39e92c 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -208,6 +208,7 @@ rpcrdma_update_connect_private(struct rpcrdma_xprt *r_xprt,
+
+ /* Default settings for RPC-over-RDMA Version One */
+ r_xprt->rx_ia.ri_reminv_expected = false;
++ r_xprt->rx_ia.ri_implicit_roundup = xprt_rdma_pad_optimize;
+ rsize = RPCRDMA_V1_DEF_INLINE_SIZE;
+ wsize = RPCRDMA_V1_DEF_INLINE_SIZE;
+
+@@ -215,6 +216,7 @@ rpcrdma_update_connect_private(struct rpcrdma_xprt *r_xprt,
+ pmsg->cp_magic == rpcrdma_cmp_magic &&
+ pmsg->cp_version == RPCRDMA_CMP_VERSION) {
+ r_xprt->rx_ia.ri_reminv_expected = true;
++ r_xprt->rx_ia.ri_implicit_roundup = true;
+ rsize = rpcrdma_decode_buffer_size(pmsg->cp_send_size);
+ wsize = rpcrdma_decode_buffer_size(pmsg->cp_recv_size);
+ }
+@@ -486,18 +488,19 @@ rpcrdma_ia_close(struct rpcrdma_ia *ia)
+ */
+ int
+ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
+- struct rpcrdma_create_data_internal *cdata)
++ struct rpcrdma_create_data_internal *cdata)
+ {
+ struct rpcrdma_connect_private *pmsg = &ep->rep_cm_private;
++ unsigned int max_qp_wr, max_sge;
+ struct ib_cq *sendcq, *recvcq;
+- unsigned int max_qp_wr;
+ int rc;
+
+- if (ia->ri_device->attrs.max_sge < RPCRDMA_MAX_SEND_SGES) {
+- dprintk("RPC: %s: insufficient sge's available\n",
+- __func__);
++ max_sge = min(ia->ri_device->attrs.max_sge, RPCRDMA_MAX_SEND_SGES);
++ if (max_sge < RPCRDMA_MIN_SEND_SGES) {
++ pr_warn("rpcrdma: HCA provides only %d send SGEs\n", max_sge);
+ return -ENOMEM;
+ }
++ ia->ri_max_send_sges = max_sge - RPCRDMA_MIN_SEND_SGES;
+
+ if (ia->ri_device->attrs.max_qp_wr <= RPCRDMA_BACKWARD_WRS) {
+ dprintk("RPC: %s: insufficient wqe's available\n",
+@@ -522,7 +525,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
+ ep->rep_attr.cap.max_recv_wr = cdata->max_requests;
+ ep->rep_attr.cap.max_recv_wr += RPCRDMA_BACKWARD_WRS;
+ ep->rep_attr.cap.max_recv_wr += 1; /* drain cqe */
+- ep->rep_attr.cap.max_send_sge = RPCRDMA_MAX_SEND_SGES;
++ ep->rep_attr.cap.max_send_sge = max_sge;
+ ep->rep_attr.cap.max_recv_sge = 1;
+ ep->rep_attr.cap.max_inline_data = 0;
+ ep->rep_attr.sq_sig_type = IB_SIGNAL_REQ_WR;
+diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
+index e35efd4ac1e4..3d7e9c9bad1f 100644
+--- a/net/sunrpc/xprtrdma/xprt_rdma.h
++++ b/net/sunrpc/xprtrdma/xprt_rdma.h
+@@ -74,7 +74,9 @@ struct rpcrdma_ia {
+ unsigned int ri_max_frmr_depth;
+ unsigned int ri_max_inline_write;
+ unsigned int ri_max_inline_read;
++ unsigned int ri_max_send_sges;
+ bool ri_reminv_expected;
++ bool ri_implicit_roundup;
+ enum ib_mr_type ri_mrtype;
+ struct ib_qp_attr ri_qp_attr;
+ struct ib_qp_init_attr ri_qp_init_attr;
+@@ -310,6 +312,7 @@ struct rpcrdma_mr_seg { /* chunk descriptors */
+ * - xdr_buf tail iovec
+ */
+ enum {
++ RPCRDMA_MIN_SEND_SGES = 3,
+ RPCRDMA_MAX_SEND_PAGES = PAGE_SIZE + RPCRDMA_MAX_INLINE - 1,
+ RPCRDMA_MAX_PAGE_SGES = (RPCRDMA_MAX_SEND_PAGES >> PAGE_SHIFT) + 1,
+ RPCRDMA_MAX_SEND_SGES = 1 + 1 + RPCRDMA_MAX_PAGE_SGES + 1,
+diff --git a/samples/seccomp/bpf-helper.h b/samples/seccomp/bpf-helper.h
+index 38ee70f3cd5b..1d8de9edd858 100644
+--- a/samples/seccomp/bpf-helper.h
++++ b/samples/seccomp/bpf-helper.h
+@@ -138,7 +138,7 @@ union arg64 {
+ #define ARG_32(idx) \
+ BPF_STMT(BPF_LD+BPF_W+BPF_ABS, LO_ARG(idx))
+
+-/* Loads hi into A and lo in X */
++/* Loads lo into M[0] and hi into M[1] and A */
+ #define ARG_64(idx) \
+ BPF_STMT(BPF_LD+BPF_W+BPF_ABS, LO_ARG(idx)), \
+ BPF_STMT(BPF_ST, 0), /* lo -> M[0] */ \
+@@ -153,88 +153,107 @@ union arg64 {
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (value), 1, 0), \
+ jt
+
+-/* Checks the lo, then swaps to check the hi. A=lo,X=hi */
++#define JA32(value, jt) \
++ BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (value), 0, 1), \
++ jt
++
++#define JGE32(value, jt) \
++ BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (value), 0, 1), \
++ jt
++
++#define JGT32(value, jt) \
++ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (value), 0, 1), \
++ jt
++
++#define JLE32(value, jt) \
++ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (value), 1, 0), \
++ jt
++
++#define JLT32(value, jt) \
++ BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (value), 1, 0), \
++ jt
++
++/*
++ * All the JXX64 checks assume lo is saved in M[0] and hi is saved in both
++ * A and M[1]. This invariant is kept by restoring A if necessary.
++ */
+ #define JEQ64(lo, hi, jt) \
++ /* if (hi != arg.hi) goto NOMATCH; */ \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+ BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
++ /* if (lo != arg.lo) goto NOMATCH; */ \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (lo), 0, 2), \
+- BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
++ BPF_STMT(BPF_LD+BPF_MEM, 1), \
+ jt, \
+- BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
++ BPF_STMT(BPF_LD+BPF_MEM, 1)
+
+ #define JNE64(lo, hi, jt) \
+- BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 5, 0), \
+- BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
++ /* if (hi != arg.hi) goto MATCH; */ \
++ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 3), \
++ BPF_STMT(BPF_LD+BPF_MEM, 0), \
++ /* if (lo != arg.lo) goto MATCH; */ \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (lo), 2, 0), \
+- BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
++ BPF_STMT(BPF_LD+BPF_MEM, 1), \
+ jt, \
+- BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+-
+-#define JA32(value, jt) \
+- BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (value), 0, 1), \
+- jt
++ BPF_STMT(BPF_LD+BPF_MEM, 1)
+
+ #define JA64(lo, hi, jt) \
++ /* if (hi & arg.hi) goto MATCH; */ \
+ BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (hi), 3, 0), \
+- BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
++ BPF_STMT(BPF_LD+BPF_MEM, 0), \
++ /* if (lo & arg.lo) goto MATCH; */ \
+ BPF_JUMP(BPF_JMP+BPF_JSET+BPF_K, (lo), 0, 2), \
+- BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
++ BPF_STMT(BPF_LD+BPF_MEM, 1), \
+ jt, \
+- BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
++ BPF_STMT(BPF_LD+BPF_MEM, 1)
+
+-#define JGE32(value, jt) \
+- BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (value), 0, 1), \
+- jt
+-
+-#define JLT32(value, jt) \
+- BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (value), 1, 0), \
+- jt
+-
+-/* Shortcut checking if hi > arg.hi. */
+ #define JGE64(lo, hi, jt) \
++ /* if (hi > arg.hi) goto MATCH; */ \
+ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 4, 0), \
++ /* if (hi != arg.hi) goto NOMATCH; */ \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+- BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
++ BPF_STMT(BPF_LD+BPF_MEM, 0), \
++ /* if (lo >= arg.lo) goto MATCH; */ \
+ BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (lo), 0, 2), \
+- BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
+- jt, \
+- BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
+-
+-#define JLT64(lo, hi, jt) \
+- BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (hi), 0, 4), \
+- BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+- BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
+- BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 2, 0), \
+- BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
++ BPF_STMT(BPF_LD+BPF_MEM, 1), \
+ jt, \
+- BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
++ BPF_STMT(BPF_LD+BPF_MEM, 1)
+
+-#define JGT32(value, jt) \
+- BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (value), 0, 1), \
+- jt
+-
+-#define JLE32(value, jt) \
+- BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (value), 1, 0), \
+- jt
+-
+-/* Check hi > args.hi first, then do the GE checking */
+ #define JGT64(lo, hi, jt) \
++ /* if (hi > arg.hi) goto MATCH; */ \
+ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 4, 0), \
++ /* if (hi != arg.hi) goto NOMATCH; */ \
+ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
+- BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
++ BPF_STMT(BPF_LD+BPF_MEM, 0), \
++ /* if (lo > arg.lo) goto MATCH; */ \
+ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 0, 2), \
+- BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
++ BPF_STMT(BPF_LD+BPF_MEM, 1), \
+ jt, \
+- BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
++ BPF_STMT(BPF_LD+BPF_MEM, 1)
+
+ #define JLE64(lo, hi, jt) \
+- BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (hi), 6, 0), \
+- BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 3), \
+- BPF_STMT(BPF_LD+BPF_MEM, 0), /* swap in lo */ \
++ /* if (hi < arg.hi) goto MATCH; */ \
++ BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (hi), 0, 4), \
++ /* if (hi != arg.hi) goto NOMATCH; */ \
++ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
++ BPF_STMT(BPF_LD+BPF_MEM, 0), \
++ /* if (lo <= arg.lo) goto MATCH; */ \
+ BPF_JUMP(BPF_JMP+BPF_JGT+BPF_K, (lo), 2, 0), \
+- BPF_STMT(BPF_LD+BPF_MEM, 1), /* passed: swap hi back in */ \
++ BPF_STMT(BPF_LD+BPF_MEM, 1), \
++ jt, \
++ BPF_STMT(BPF_LD+BPF_MEM, 1)
++
++#define JLT64(lo, hi, jt) \
++ /* if (hi < arg.hi) goto MATCH; */ \
++ BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (hi), 0, 4), \
++ /* if (hi != arg.hi) goto NOMATCH; */ \
++ BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, (hi), 0, 5), \
++ BPF_STMT(BPF_LD+BPF_MEM, 0), \
++ /* if (lo < arg.lo) goto MATCH; */ \
++ BPF_JUMP(BPF_JMP+BPF_JGE+BPF_K, (lo), 2, 0), \
++ BPF_STMT(BPF_LD+BPF_MEM, 1), \
+ jt, \
+- BPF_STMT(BPF_LD+BPF_MEM, 1) /* failed: swap hi back in */
++ BPF_STMT(BPF_LD+BPF_MEM, 1)
+
+ #define LOAD_SYSCALL_NR \
+ BPF_STMT(BPF_LD+BPF_W+BPF_ABS, \
+diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
+index 5e6180a4da7d..b563fbd4d122 100644
+--- a/security/integrity/ima/ima.h
++++ b/security/integrity/ima/ima.h
+@@ -204,7 +204,7 @@ int ima_store_template(struct ima_template_entry *entry, int violation,
+ struct inode *inode,
+ const unsigned char *filename, int pcr);
+ void ima_free_template_entry(struct ima_template_entry *entry);
+-const char *ima_d_path(const struct path *path, char **pathbuf);
++const char *ima_d_path(const struct path *path, char **pathbuf, char *filename);
+
+ /* IMA policy related functions */
+ int ima_match_policy(struct inode *inode, enum ima_hooks func, int mask,
+diff --git a/security/integrity/ima/ima_api.c b/security/integrity/ima/ima_api.c
+index 9df26a2b75ba..d01a52f8f708 100644
+--- a/security/integrity/ima/ima_api.c
++++ b/security/integrity/ima/ima_api.c
+@@ -318,7 +318,17 @@ void ima_audit_measurement(struct integrity_iint_cache *iint,
+ iint->flags |= IMA_AUDITED;
+ }
+
+-const char *ima_d_path(const struct path *path, char **pathbuf)
++/*
++ * ima_d_path - return a pointer to the full pathname
++ *
++ * Attempt to return a pointer to the full pathname for use in the
++ * IMA measurement list, IMA audit records, and auditing logs.
++ *
++ * On failure, return a pointer to a copy of the filename, not dname.
++ * Returning a pointer to dname, could result in using the pointer
++ * after the memory has been freed.
++ */
++const char *ima_d_path(const struct path *path, char **pathbuf, char *namebuf)
+ {
+ char *pathname = NULL;
+
+@@ -331,5 +341,11 @@ const char *ima_d_path(const struct path *path, char **pathbuf)
+ pathname = NULL;
+ }
+ }
+- return pathname ?: (const char *)path->dentry->d_name.name;
++
++ if (!pathname) {
++ strlcpy(namebuf, path->dentry->d_name.name, NAME_MAX);
++ pathname = namebuf;
++ }
++
++ return pathname;
+ }
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index 50818c60538b..d5e492bd2899 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -83,6 +83,7 @@ static void ima_rdwr_violation_check(struct file *file,
+ const char **pathname)
+ {
+ struct inode *inode = file_inode(file);
++ char filename[NAME_MAX];
+ fmode_t mode = file->f_mode;
+ bool send_tomtou = false, send_writers = false;
+
+@@ -102,7 +103,7 @@ static void ima_rdwr_violation_check(struct file *file,
+ if (!send_tomtou && !send_writers)
+ return;
+
+- *pathname = ima_d_path(&file->f_path, pathbuf);
++ *pathname = ima_d_path(&file->f_path, pathbuf, filename);
+
+ if (send_tomtou)
+ ima_add_violation(file, *pathname, iint,
+@@ -161,6 +162,7 @@ static int process_measurement(struct file *file, char *buf, loff_t size,
+ struct integrity_iint_cache *iint = NULL;
+ struct ima_template_desc *template_desc;
+ char *pathbuf = NULL;
++ char filename[NAME_MAX];
+ const char *pathname = NULL;
+ int rc = -ENOMEM, action, must_appraise;
+ int pcr = CONFIG_IMA_MEASURE_PCR_IDX;
+@@ -239,8 +241,8 @@ static int process_measurement(struct file *file, char *buf, loff_t size,
+ goto out_digsig;
+ }
+
+- if (!pathname) /* ima_rdwr_violation possibly pre-fetched */
+- pathname = ima_d_path(&file->f_path, &pathbuf);
++ if (!pathbuf) /* ima_rdwr_violation possibly pre-fetched */
++ pathname = ima_d_path(&file->f_path, &pathbuf, filename);
+
+ if (action & IMA_MEASURE)
+ ima_store_measurement(iint, file, pathname,
+diff --git a/sound/core/seq/seq_fifo.c b/sound/core/seq/seq_fifo.c
+index 1d5acbe0c08b..86240d02b530 100644
+--- a/sound/core/seq/seq_fifo.c
++++ b/sound/core/seq/seq_fifo.c
+@@ -135,6 +135,7 @@ int snd_seq_fifo_event_in(struct snd_seq_fifo *f,
+ f->tail = cell;
+ if (f->head == NULL)
+ f->head = cell;
++ cell->next = NULL;
+ f->cells++;
+ spin_unlock_irqrestore(&f->lock, flags);
+
+@@ -214,6 +215,8 @@ void snd_seq_fifo_cell_putback(struct snd_seq_fifo *f,
+ spin_lock_irqsave(&f->lock, flags);
+ cell->next = f->head;
+ f->head = cell;
++ if (!f->tail)
++ f->tail = cell;
+ f->cells++;
+ spin_unlock_irqrestore(&f->lock, flags);
+ }
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index fc144f43faa6..ad153149b231 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -1702,9 +1702,21 @@ static int snd_timer_user_params(struct file *file,
+ return -EBADFD;
+ if (copy_from_user(&params, _params, sizeof(params)))
+ return -EFAULT;
+- if (!(t->hw.flags & SNDRV_TIMER_HW_SLAVE) && params.ticks < 1) {
+- err = -EINVAL;
+- goto _end;
++ if (!(t->hw.flags & SNDRV_TIMER_HW_SLAVE)) {
++ u64 resolution;
++
++ if (params.ticks < 1) {
++ err = -EINVAL;
++ goto _end;
++ }
++
++ /* Don't allow resolution less than 1ms */
++ resolution = snd_timer_resolution(tu->timeri);
++ resolution *= params.ticks;
++ if (resolution < 1000000) {
++ err = -EINVAL;
++ goto _end;
++ }
+ }
+ if (params.queue_size > 0 &&
+ (params.queue_size < 32 || params.queue_size > 1024)) {
+diff --git a/sound/pci/ctxfi/cthw20k1.c b/sound/pci/ctxfi/cthw20k1.c
+index 9667cbfb0ca2..ab4cdab5cfa5 100644
+--- a/sound/pci/ctxfi/cthw20k1.c
++++ b/sound/pci/ctxfi/cthw20k1.c
+@@ -27,12 +27,6 @@
+ #include "cthw20k1.h"
+ #include "ct20k1reg.h"
+
+-#if BITS_PER_LONG == 32
+-#define CT_XFI_DMA_MASK DMA_BIT_MASK(32) /* 32 bit PTE */
+-#else
+-#define CT_XFI_DMA_MASK DMA_BIT_MASK(64) /* 64 bit PTE */
+-#endif
+-
+ struct hw20k1 {
+ struct hw hw;
+ spinlock_t reg_20k1_lock;
+@@ -1904,19 +1898,18 @@ static int hw_card_start(struct hw *hw)
+ {
+ int err;
+ struct pci_dev *pci = hw->pci;
++ const unsigned int dma_bits = BITS_PER_LONG;
+
+ err = pci_enable_device(pci);
+ if (err < 0)
+ return err;
+
+ /* Set DMA transfer mask */
+- if (dma_set_mask(&pci->dev, CT_XFI_DMA_MASK) < 0 ||
+- dma_set_coherent_mask(&pci->dev, CT_XFI_DMA_MASK) < 0) {
+- dev_err(hw->card->dev,
+- "architecture does not support PCI busmaster DMA with mask 0x%llx\n",
+- CT_XFI_DMA_MASK);
+- err = -ENXIO;
+- goto error1;
++ if (!dma_set_mask(&pci->dev, DMA_BIT_MASK(dma_bits))) {
++ dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(dma_bits));
++ } else {
++ dma_set_mask(&pci->dev, DMA_BIT_MASK(32));
++ dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(32));
+ }
+
+ if (!hw->io_base) {
+diff --git a/sound/pci/ctxfi/cthw20k2.c b/sound/pci/ctxfi/cthw20k2.c
+index 6414ecf93efa..18ee7768b7c4 100644
+--- a/sound/pci/ctxfi/cthw20k2.c
++++ b/sound/pci/ctxfi/cthw20k2.c
+@@ -26,12 +26,6 @@
+ #include "cthw20k2.h"
+ #include "ct20k2reg.h"
+
+-#if BITS_PER_LONG == 32
+-#define CT_XFI_DMA_MASK DMA_BIT_MASK(32) /* 32 bit PTE */
+-#else
+-#define CT_XFI_DMA_MASK DMA_BIT_MASK(64) /* 64 bit PTE */
+-#endif
+-
+ struct hw20k2 {
+ struct hw hw;
+ /* for i2c */
+@@ -2029,19 +2023,18 @@ static int hw_card_start(struct hw *hw)
+ int err = 0;
+ struct pci_dev *pci = hw->pci;
+ unsigned int gctl;
++ const unsigned int dma_bits = BITS_PER_LONG;
+
+ err = pci_enable_device(pci);
+ if (err < 0)
+ return err;
+
+ /* Set DMA transfer mask */
+- if (dma_set_mask(&pci->dev, CT_XFI_DMA_MASK) < 0 ||
+- dma_set_coherent_mask(&pci->dev, CT_XFI_DMA_MASK) < 0) {
+- dev_err(hw->card->dev,
+- "architecture does not support PCI busmaster DMA with mask 0x%llx\n",
+- CT_XFI_DMA_MASK);
+- err = -ENXIO;
+- goto error1;
++ if (!dma_set_mask(&pci->dev, DMA_BIT_MASK(dma_bits))) {
++ dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(dma_bits));
++ } else {
++ dma_set_mask(&pci->dev, DMA_BIT_MASK(32));
++ dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(32));
+ }
+
+ if (!hw->io_base) {
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index c64d986009a9..bc4462694aaf 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2197,9 +2197,9 @@ static const struct pci_device_id azx_ids[] = {
+ .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH },
+ /* Lewisburg */
+ { PCI_DEVICE(0x8086, 0xa1f0),
+- .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH },
++ .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_SKYLAKE },
+ { PCI_DEVICE(0x8086, 0xa270),
+- .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH },
++ .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_SKYLAKE },
+ /* Lynx Point-LP */
+ { PCI_DEVICE(0x8086, 0x9c20),
+ .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH },
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 7d660ee1d5e8..6b041f7268fb 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5577,6 +5577,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE),
+ SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
+ SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
++ SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+ SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -5692,6 +5693,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x2233, "Thinkpad", ALC292_FIXUP_TPT460),
+ SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
++ SND_PCI_QUIRK(0x17aa, 0x3112, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP),
+@@ -6065,6 +6067,12 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ SND_HDA_PIN_QUIRK(0x10ec0298, 0x1028, "Dell", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE,
+ ALC298_STANDARD_PINS,
+ {0x17, 0x90170150}),
++ SND_HDA_PIN_QUIRK(0x10ec0298, 0x1028, "Dell", ALC298_FIXUP_SPK_VOLUME,
++ {0x12, 0xb7a60140},
++ {0x13, 0xb7a60150},
++ {0x17, 0x90170110},
++ {0x1a, 0x03011020},
++ {0x21, 0x03211030}),
+ {}
+ };
+
+diff --git a/virt/kvm/arm/vgic/vgic-irqfd.c b/virt/kvm/arm/vgic/vgic-irqfd.c
+index d918dcf26a5a..f138ed2e9c63 100644
+--- a/virt/kvm/arm/vgic/vgic-irqfd.c
++++ b/virt/kvm/arm/vgic/vgic-irqfd.c
+@@ -99,6 +99,9 @@ int kvm_set_msi(struct kvm_kernel_irq_routing_entry *e,
+ if (!vgic_has_its(kvm))
+ return -ENODEV;
+
++ if (!level)
++ return -1;
++
+ return vgic_its_inject_msi(kvm, &msi);
+ }
+
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-03-12 19:36 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-03-12 19:36 UTC (permalink / raw
To: gentoo-commits
commit: 9cf0df901e0c9f8d1f9bb5a931ab29de5f62a9a0
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 12 19:36:31 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Mar 12 19:36:31 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9cf0df90
Add BFQ patchset for 4.10
0000_README | 16 +
...ups-kconfig-build-bits-for-BFQ-v7r11-4.10.patch | 103 +
...oduce-the-BFQ-v7r11-I-O-sched-for-4.10.0.patch1 | 7109 +++++++++++++++
...rly-Queue-Merge-EQM-to-BFQ-v7r11-for-4.10.patch | 1101 +++
...BFQ-v7r11-for-4.10.0-into-BFQ-v8r8-for-4.patch1 | 9187 ++++++++++++++++++++
5 files changed, 17516 insertions(+)
diff --git a/0000_README b/0000_README
index 44d9c5f..8ad9f95 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,22 @@ Patch: 4567_distro-Gentoo-Kconfig.patch
From: Tom Wijsman <TomWij@gentoo.org>
Desc: Add Gentoo Linux support config settings and defaults.
+Patch: 5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r11-4.10.patch
+From: http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc: BFQ v7r11 patch 1 for 4.10: Build, cgroups and kconfig bits
+
+Patch: 5002_block-introduce-the-BFQ-v7r11-I-O-sched-for-4.10.0.patch1
+From: http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc: BFQ v7r11 patch 2 for 4.10: BFQ Scheduler
+
+Patch: 5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r11-for-4.10.patch
+From: http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc: BFQ v7r11 patch 3 for 4.10: Early Queue Merge (EQM)
+
+Patch: 5004_blkck-bfq-turn-BFQ-v7r11-for-4.10.0-into-BFQ-v8r8-for-4.patch1
+From: http://algo.ing.unimo.it/people/paolo/disk_sched/
+Desc: BFQ v8r8 patch 4 for 4.10: Turn BFQ v7r11 into BFQ v8r8
+
Patch: 5010_enable-additional-cpu-optimizations-for-gcc.patch
From: https://github.com/graysky2/kernel_gcc_patch/
Desc: Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.
diff --git a/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r11-4.10.patch b/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r11-4.10.patch
new file mode 100644
index 0000000..45f4fd2
--- /dev/null
+++ b/5001_block-cgroups-kconfig-build-bits-for-BFQ-v7r11-4.10.patch
@@ -0,0 +1,103 @@
+From 8500f47272575b4616beb487c483019248d8c501 Mon Sep 17 00:00:00 2001
+From: Paolo Valente <paolo.valente@unimore.it>
+Date: Tue, 7 Apr 2015 13:39:12 +0200
+Subject: [PATCH 1/4] block: cgroups, kconfig, build bits for BFQ-v7r11-4.10.0
+
+Update Kconfig.iosched and do the related Makefile changes to include
+kernel configuration options for BFQ. Also increase the number of
+policies supported by the blkio controller so that BFQ can add its
+own.
+
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini@google.com>
+---
+ block/Kconfig.iosched | 32 ++++++++++++++++++++++++++++++++
+ block/Makefile | 1 +
+ include/linux/blkdev.h | 2 +-
+ 3 files changed, 34 insertions(+), 1 deletion(-)
+
+diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
+index 421bef9..0ee5f0f 100644
+--- a/block/Kconfig.iosched
++++ b/block/Kconfig.iosched
+@@ -39,6 +39,27 @@ config CFQ_GROUP_IOSCHED
+ ---help---
+ Enable group IO scheduling in CFQ.
+
++config IOSCHED_BFQ
++ tristate "BFQ I/O scheduler"
++ default n
++ ---help---
++ The BFQ I/O scheduler tries to distribute bandwidth among
++ all processes according to their weights.
++ It aims at distributing the bandwidth as desired, independently of
++ the disk parameters and with any workload. It also tries to
++ guarantee low latency to interactive and soft real-time
++ applications. If compiled built-in (saying Y here), BFQ can
++ be configured to support hierarchical scheduling.
++
++config CGROUP_BFQIO
++ bool "BFQ hierarchical scheduling support"
++ depends on CGROUPS && IOSCHED_BFQ=y
++ default n
++ ---help---
++ Enable hierarchical scheduling in BFQ, using the cgroups
++ filesystem interface. The name of the subsystem will be
++ bfqio.
++
+ choice
+ prompt "Default I/O scheduler"
+ default DEFAULT_CFQ
+@@ -52,6 +73,16 @@ choice
+ config DEFAULT_CFQ
+ bool "CFQ" if IOSCHED_CFQ=y
+
++ config DEFAULT_BFQ
++ bool "BFQ" if IOSCHED_BFQ=y
++ help
++ Selects BFQ as the default I/O scheduler which will be
++ used by default for all block devices.
++ The BFQ I/O scheduler aims at distributing the bandwidth
++ as desired, independently of the disk parameters and with
++ any workload. It also tries to guarantee low latency to
++ interactive and soft real-time applications.
++
+ config DEFAULT_NOOP
+ bool "No-op"
+
+@@ -61,6 +92,7 @@ config DEFAULT_IOSCHED
+ string
+ default "deadline" if DEFAULT_DEADLINE
+ default "cfq" if DEFAULT_CFQ
++ default "bfq" if DEFAULT_BFQ
+ default "noop" if DEFAULT_NOOP
+
+ endmenu
+diff --git a/block/Makefile b/block/Makefile
+index a827f98..3b14703 100644
+--- a/block/Makefile
++++ b/block/Makefile
+@@ -18,6 +18,7 @@ obj-$(CONFIG_BLK_DEV_THROTTLING) += blk-throttle.o
+ obj-$(CONFIG_IOSCHED_NOOP) += noop-iosched.o
+ obj-$(CONFIG_IOSCHED_DEADLINE) += deadline-iosched.o
+ obj-$(CONFIG_IOSCHED_CFQ) += cfq-iosched.o
++obj-$(CONFIG_IOSCHED_BFQ) += bfq-iosched.o
+
+ obj-$(CONFIG_BLOCK_COMPAT) += compat_ioctl.o
+ obj-$(CONFIG_BLK_CMDLINE_PARSER) += cmdline-parser.o
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 1ca8e8f..8e2d6ed 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -47,7 +47,7 @@ struct rq_wb;
+ * Maximum number of blkcg policies allowed to be registered concurrently.
+ * Defined here to simplify include dependency.
+ */
+-#define BLKCG_MAX_POLS 2
++#define BLKCG_MAX_POLS 3
+
+ typedef void (rq_end_io_fn)(struct request *, int);
+
+--
+2.10.0
+
diff --git a/5002_block-introduce-the-BFQ-v7r11-I-O-sched-for-4.10.0.patch1 b/5002_block-introduce-the-BFQ-v7r11-I-O-sched-for-4.10.0.patch1
new file mode 100644
index 0000000..0812a57
--- /dev/null
+++ b/5002_block-introduce-the-BFQ-v7r11-I-O-sched-for-4.10.0.patch1
@@ -0,0 +1,7109 @@
+From 2f56e91506b329ffc29d0f184924ad0123c9ba9e Mon Sep 17 00:00:00 2001
+From: Paolo Valente <paolo.valente@unimore.it>
+Date: Thu, 9 May 2013 19:10:02 +0200
+Subject: [PATCH 2/4] block: introduce the BFQ-v7r11 I/O sched for 4.10.0
+
+The general structure is borrowed from CFQ, as much of the code for
+handling I/O contexts. Over time, several useful features have been
+ported from CFQ as well (details in the changelog in README.BFQ). A
+(bfq_)queue is associated to each task doing I/O on a device, and each
+time a scheduling decision has to be made a queue is selected and served
+until it expires.
+
+ - Slices are given in the service domain: tasks are assigned
+ budgets, measured in number of sectors. Once got the disk, a task
+ must however consume its assigned budget within a configurable
+ maximum time (by default, the maximum possible value of the
+ budgets is automatically computed to comply with this timeout).
+ This allows the desired latency vs "throughput boosting" tradeoff
+ to be set.
+
+ - Budgets are scheduled according to a variant of WF2Q+, implemented
+ using an augmented rb-tree to take eligibility into account while
+ preserving an O(log N) overall complexity.
+
+ - A low-latency tunable is provided; if enabled, both interactive
+ and soft real-time applications are guaranteed a very low latency.
+
+ - Latency guarantees are preserved also in the presence of NCQ.
+
+ - Also with flash-based devices, a high throughput is achieved
+ while still preserving latency guarantees.
+
+ - BFQ features Early Queue Merge (EQM), a sort of fusion of the
+ cooperating-queue-merging and the preemption mechanisms present
+ in CFQ. EQM is in fact a unified mechanism that tries to get a
+ sequential read pattern, and hence a high throughput, with any
+ set of processes performing interleaved I/O over a contiguous
+ sequence of sectors.
+
+ - BFQ supports full hierarchical scheduling, exporting a cgroups
+ interface. Since each node has a full scheduler, each group can
+ be assigned its own weight.
+
+ - If the cgroups interface is not used, only I/O priorities can be
+ assigned to processes, with ioprio values mapped to weights
+ with the relation weight = IOPRIO_BE_NR - ioprio.
+
+ - ioprio classes are served in strict priority order, i.e., lower
+ priority queues are not served as long as there are higher
+ priority queues. Among queues in the same class the bandwidth is
+ distributed in proportion to the weight of each queue. A very
+ thin extra bandwidth is however guaranteed to the Idle class, to
+ prevent it from starving.
+
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini@google.com>
+---
+ block/Kconfig.iosched | 6 +-
+ block/bfq-cgroup.c | 1186 ++++++++++++++++
+ block/bfq-ioc.c | 36 +
+ block/bfq-iosched.c | 3763 +++++++++++++++++++++++++++++++++++++++++++++++++
+ block/bfq-sched.c | 1199 ++++++++++++++++
+ block/bfq.h | 801 +++++++++++
+ 6 files changed, 6987 insertions(+), 4 deletions(-)
+ create mode 100644 block/bfq-cgroup.c
+ create mode 100644 block/bfq-ioc.c
+ create mode 100644 block/bfq-iosched.c
+ create mode 100644 block/bfq-sched.c
+ create mode 100644 block/bfq.h
+
+diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
+index 0ee5f0f..f78cd1a 100644
+--- a/block/Kconfig.iosched
++++ b/block/Kconfig.iosched
+@@ -51,14 +51,12 @@ config IOSCHED_BFQ
+ applications. If compiled built-in (saying Y here), BFQ can
+ be configured to support hierarchical scheduling.
+
+-config CGROUP_BFQIO
++config BFQ_GROUP_IOSCHED
+ bool "BFQ hierarchical scheduling support"
+ depends on CGROUPS && IOSCHED_BFQ=y
+ default n
+ ---help---
+- Enable hierarchical scheduling in BFQ, using the cgroups
+- filesystem interface. The name of the subsystem will be
+- bfqio.
++ Enable hierarchical scheduling in BFQ, using the blkio controller.
+
+ choice
+ prompt "Default I/O scheduler"
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+new file mode 100644
+index 0000000..8b08a57
+--- /dev/null
++++ b/block/bfq-cgroup.c
+@@ -0,0 +1,1186 @@
++/*
++ * BFQ: CGROUPS support.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ * Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
++ * file.
++ */
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++
++/* bfqg stats flags */
++enum bfqg_stats_flags {
++ BFQG_stats_waiting = 0,
++ BFQG_stats_idling,
++ BFQG_stats_empty,
++};
++
++#define BFQG_FLAG_FNS(name) \
++static void bfqg_stats_mark_##name(struct bfqg_stats *stats) \
++{ \
++ stats->flags |= (1 << BFQG_stats_##name); \
++} \
++static void bfqg_stats_clear_##name(struct bfqg_stats *stats) \
++{ \
++ stats->flags &= ~(1 << BFQG_stats_##name); \
++} \
++static int bfqg_stats_##name(struct bfqg_stats *stats) \
++{ \
++ return (stats->flags & (1 << BFQG_stats_##name)) != 0; \
++} \
++
++BFQG_FLAG_FNS(waiting)
++BFQG_FLAG_FNS(idling)
++BFQG_FLAG_FNS(empty)
++#undef BFQG_FLAG_FNS
++
++/* This should be called with the queue_lock held. */
++static void bfqg_stats_update_group_wait_time(struct bfqg_stats *stats)
++{
++ unsigned long long now;
++
++ if (!bfqg_stats_waiting(stats))
++ return;
++
++ now = sched_clock();
++ if (time_after64(now, stats->start_group_wait_time))
++ blkg_stat_add(&stats->group_wait_time,
++ now - stats->start_group_wait_time);
++ bfqg_stats_clear_waiting(stats);
++}
++
++/* This should be called with the queue_lock held. */
++static void bfqg_stats_set_start_group_wait_time(struct bfq_group *bfqg,
++ struct bfq_group *curr_bfqg)
++{
++ struct bfqg_stats *stats = &bfqg->stats;
++
++ if (bfqg_stats_waiting(stats))
++ return;
++ if (bfqg == curr_bfqg)
++ return;
++ stats->start_group_wait_time = sched_clock();
++ bfqg_stats_mark_waiting(stats);
++}
++
++/* This should be called with the queue_lock held. */
++static void bfqg_stats_end_empty_time(struct bfqg_stats *stats)
++{
++ unsigned long long now;
++
++ if (!bfqg_stats_empty(stats))
++ return;
++
++ now = sched_clock();
++ if (time_after64(now, stats->start_empty_time))
++ blkg_stat_add(&stats->empty_time,
++ now - stats->start_empty_time);
++ bfqg_stats_clear_empty(stats);
++}
++
++static void bfqg_stats_update_dequeue(struct bfq_group *bfqg)
++{
++ blkg_stat_add(&bfqg->stats.dequeue, 1);
++}
++
++static void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg)
++{
++ struct bfqg_stats *stats = &bfqg->stats;
++
++ if (blkg_rwstat_total(&stats->queued))
++ return;
++
++ /*
++ * group is already marked empty. This can happen if bfqq got new
++ * request in parent group and moved to this group while being added
++ * to service tree. Just ignore the event and move on.
++ */
++ if (bfqg_stats_empty(stats))
++ return;
++
++ stats->start_empty_time = sched_clock();
++ bfqg_stats_mark_empty(stats);
++}
++
++static void bfqg_stats_update_idle_time(struct bfq_group *bfqg)
++{
++ struct bfqg_stats *stats = &bfqg->stats;
++
++ if (bfqg_stats_idling(stats)) {
++ unsigned long long now = sched_clock();
++
++ if (time_after64(now, stats->start_idle_time))
++ blkg_stat_add(&stats->idle_time,
++ now - stats->start_idle_time);
++ bfqg_stats_clear_idling(stats);
++ }
++}
++
++static void bfqg_stats_set_start_idle_time(struct bfq_group *bfqg)
++{
++ struct bfqg_stats *stats = &bfqg->stats;
++
++ stats->start_idle_time = sched_clock();
++ bfqg_stats_mark_idling(stats);
++}
++
++static void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg)
++{
++ struct bfqg_stats *stats = &bfqg->stats;
++
++ blkg_stat_add(&stats->avg_queue_size_sum,
++ blkg_rwstat_total(&stats->queued));
++ blkg_stat_add(&stats->avg_queue_size_samples, 1);
++ bfqg_stats_update_group_wait_time(stats);
++}
++
++static struct blkcg_policy blkcg_policy_bfq;
++
++/*
++ * blk-cgroup policy-related handlers
++ * The following functions help in converting between blk-cgroup
++ * internal structures and BFQ-specific structures.
++ */
++
++static struct bfq_group *pd_to_bfqg(struct blkg_policy_data *pd)
++{
++ return pd ? container_of(pd, struct bfq_group, pd) : NULL;
++}
++
++static struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg)
++{
++ return pd_to_blkg(&bfqg->pd);
++}
++
++static struct bfq_group *blkg_to_bfqg(struct blkcg_gq *blkg)
++{
++ struct blkg_policy_data *pd = blkg_to_pd(blkg, &blkcg_policy_bfq);
++
++ BUG_ON(!pd);
++
++ return pd_to_bfqg(pd);
++}
++
++/*
++ * bfq_group handlers
++ * The following functions help in navigating the bfq_group hierarchy
++ * by allowing to find the parent of a bfq_group or the bfq_group
++ * associated to a bfq_queue.
++ */
++
++static struct bfq_group *bfqg_parent(struct bfq_group *bfqg)
++{
++ struct blkcg_gq *pblkg = bfqg_to_blkg(bfqg)->parent;
++
++ return pblkg ? blkg_to_bfqg(pblkg) : NULL;
++}
++
++static struct bfq_group *bfqq_group(struct bfq_queue *bfqq)
++{
++ struct bfq_entity *group_entity = bfqq->entity.parent;
++
++ return group_entity ? container_of(group_entity, struct bfq_group,
++ entity) :
++ bfqq->bfqd->root_group;
++}
++
++/*
++ * The following two functions handle get and put of a bfq_group by
++ * wrapping the related blk-cgroup hooks.
++ */
++
++static void bfqg_get(struct bfq_group *bfqg)
++{
++ return blkg_get(bfqg_to_blkg(bfqg));
++}
++
++static void bfqg_put(struct bfq_group *bfqg)
++{
++ return blkg_put(bfqg_to_blkg(bfqg));
++}
++
++static void bfqg_stats_update_io_add(struct bfq_group *bfqg,
++ struct bfq_queue *bfqq,
++ int rw)
++{
++ blkg_rwstat_add(&bfqg->stats.queued, rw, 1);
++ bfqg_stats_end_empty_time(&bfqg->stats);
++ if (!(bfqq == ((struct bfq_data *)bfqg->bfqd)->in_service_queue))
++ bfqg_stats_set_start_group_wait_time(bfqg, bfqq_group(bfqq));
++}
++
++static void bfqg_stats_update_io_remove(struct bfq_group *bfqg, int rw)
++{
++ blkg_rwstat_add(&bfqg->stats.queued, rw, -1);
++}
++
++static void bfqg_stats_update_io_merged(struct bfq_group *bfqg, int rw)
++{
++ blkg_rwstat_add(&bfqg->stats.merged, rw, 1);
++}
++
++static void bfqg_stats_update_dispatch(struct bfq_group *bfqg,
++ uint64_t bytes, int rw)
++{
++ blkg_stat_add(&bfqg->stats.sectors, bytes >> 9);
++ blkg_rwstat_add(&bfqg->stats.serviced, rw, 1);
++ blkg_rwstat_add(&bfqg->stats.service_bytes, rw, bytes);
++}
++
++static void bfqg_stats_update_completion(struct bfq_group *bfqg,
++ uint64_t start_time, uint64_t io_start_time, int rw)
++{
++ struct bfqg_stats *stats = &bfqg->stats;
++ unsigned long long now = sched_clock();
++
++ if (time_after64(now, io_start_time))
++ blkg_rwstat_add(&stats->service_time, rw, now - io_start_time);
++ if (time_after64(io_start_time, start_time))
++ blkg_rwstat_add(&stats->wait_time, rw,
++ io_start_time - start_time);
++}
++
++/* @stats = 0 */
++static void bfqg_stats_reset(struct bfqg_stats *stats)
++{
++ if (!stats)
++ return;
++
++ /* queued stats shouldn't be cleared */
++ blkg_rwstat_reset(&stats->service_bytes);
++ blkg_rwstat_reset(&stats->serviced);
++ blkg_rwstat_reset(&stats->merged);
++ blkg_rwstat_reset(&stats->service_time);
++ blkg_rwstat_reset(&stats->wait_time);
++ blkg_stat_reset(&stats->time);
++ blkg_stat_reset(&stats->unaccounted_time);
++ blkg_stat_reset(&stats->avg_queue_size_sum);
++ blkg_stat_reset(&stats->avg_queue_size_samples);
++ blkg_stat_reset(&stats->dequeue);
++ blkg_stat_reset(&stats->group_wait_time);
++ blkg_stat_reset(&stats->idle_time);
++ blkg_stat_reset(&stats->empty_time);
++}
++
++/* @to += @from */
++static void bfqg_stats_merge(struct bfqg_stats *to, struct bfqg_stats *from)
++{
++ if (!to || !from)
++ return;
++
++ /* queued stats shouldn't be cleared */
++ blkg_rwstat_add_aux(&to->service_bytes, &from->service_bytes);
++ blkg_rwstat_add_aux(&to->serviced, &from->serviced);
++ blkg_rwstat_add_aux(&to->merged, &from->merged);
++ blkg_rwstat_add_aux(&to->service_time, &from->service_time);
++ blkg_rwstat_add_aux(&to->wait_time, &from->wait_time);
++ blkg_stat_add_aux(&from->time, &from->time);
++ blkg_stat_add_aux(&to->unaccounted_time, &from->unaccounted_time);
++ blkg_stat_add_aux(&to->avg_queue_size_sum, &from->avg_queue_size_sum);
++ blkg_stat_add_aux(&to->avg_queue_size_samples,
++ &from->avg_queue_size_samples);
++ blkg_stat_add_aux(&to->dequeue, &from->dequeue);
++ blkg_stat_add_aux(&to->group_wait_time, &from->group_wait_time);
++ blkg_stat_add_aux(&to->idle_time, &from->idle_time);
++ blkg_stat_add_aux(&to->empty_time, &from->empty_time);
++}
++
++/*
++ * Transfer @bfqg's stats to its parent's dead_stats so that the ancestors'
++ * recursive stats can still account for the amount used by this bfqg after
++ * it's gone.
++ */
++static void bfqg_stats_xfer_dead(struct bfq_group *bfqg)
++{
++ struct bfq_group *parent;
++
++ if (!bfqg) /* root_group */
++ return;
++
++ parent = bfqg_parent(bfqg);
++
++ lockdep_assert_held(bfqg_to_blkg(bfqg)->q->queue_lock);
++
++ if (unlikely(!parent))
++ return;
++
++ bfqg_stats_merge(&parent->dead_stats, &bfqg->stats);
++ bfqg_stats_merge(&parent->dead_stats, &bfqg->dead_stats);
++ bfqg_stats_reset(&bfqg->stats);
++ bfqg_stats_reset(&bfqg->dead_stats);
++}
++
++static void bfq_init_entity(struct bfq_entity *entity,
++ struct bfq_group *bfqg)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++ entity->weight = entity->new_weight;
++ entity->orig_weight = entity->new_weight;
++ if (bfqq) {
++ bfqq->ioprio = bfqq->new_ioprio;
++ bfqq->ioprio_class = bfqq->new_ioprio_class;
++ bfqg_get(bfqg);
++ }
++ entity->parent = bfqg->my_entity;
++ entity->sched_data = &bfqg->sched_data;
++}
++
++static void bfqg_stats_exit(struct bfqg_stats *stats)
++{
++ blkg_rwstat_exit(&stats->service_bytes);
++ blkg_rwstat_exit(&stats->serviced);
++ blkg_rwstat_exit(&stats->merged);
++ blkg_rwstat_exit(&stats->service_time);
++ blkg_rwstat_exit(&stats->wait_time);
++ blkg_rwstat_exit(&stats->queued);
++ blkg_stat_exit(&stats->sectors);
++ blkg_stat_exit(&stats->time);
++ blkg_stat_exit(&stats->unaccounted_time);
++ blkg_stat_exit(&stats->avg_queue_size_sum);
++ blkg_stat_exit(&stats->avg_queue_size_samples);
++ blkg_stat_exit(&stats->dequeue);
++ blkg_stat_exit(&stats->group_wait_time);
++ blkg_stat_exit(&stats->idle_time);
++ blkg_stat_exit(&stats->empty_time);
++}
++
++static int bfqg_stats_init(struct bfqg_stats *stats, gfp_t gfp)
++{
++ if (blkg_rwstat_init(&stats->service_bytes, gfp) ||
++ blkg_rwstat_init(&stats->serviced, gfp) ||
++ blkg_rwstat_init(&stats->merged, gfp) ||
++ blkg_rwstat_init(&stats->service_time, gfp) ||
++ blkg_rwstat_init(&stats->wait_time, gfp) ||
++ blkg_rwstat_init(&stats->queued, gfp) ||
++ blkg_stat_init(&stats->sectors, gfp) ||
++ blkg_stat_init(&stats->time, gfp) ||
++ blkg_stat_init(&stats->unaccounted_time, gfp) ||
++ blkg_stat_init(&stats->avg_queue_size_sum, gfp) ||
++ blkg_stat_init(&stats->avg_queue_size_samples, gfp) ||
++ blkg_stat_init(&stats->dequeue, gfp) ||
++ blkg_stat_init(&stats->group_wait_time, gfp) ||
++ blkg_stat_init(&stats->idle_time, gfp) ||
++ blkg_stat_init(&stats->empty_time, gfp)) {
++ bfqg_stats_exit(stats);
++ return -ENOMEM;
++ }
++
++ return 0;
++}
++
++static struct bfq_group_data *cpd_to_bfqgd(struct blkcg_policy_data *cpd)
++{
++ return cpd ? container_of(cpd, struct bfq_group_data, pd) : NULL;
++}
++
++static struct bfq_group_data *blkcg_to_bfqgd(struct blkcg *blkcg)
++{
++ return cpd_to_bfqgd(blkcg_to_cpd(blkcg, &blkcg_policy_bfq));
++}
++
++static void bfq_cpd_init(struct blkcg_policy_data *cpd)
++{
++ struct bfq_group_data *d = cpd_to_bfqgd(cpd);
++
++ d->weight = BFQ_DEFAULT_GRP_WEIGHT;
++}
++
++static struct blkg_policy_data *bfq_pd_alloc(gfp_t gfp, int node)
++{
++ struct bfq_group *bfqg;
++
++ bfqg = kzalloc_node(sizeof(*bfqg), gfp, node);
++ if (!bfqg)
++ return NULL;
++
++ if (bfqg_stats_init(&bfqg->stats, gfp) ||
++ bfqg_stats_init(&bfqg->dead_stats, gfp)) {
++ kfree(bfqg);
++ return NULL;
++ }
++
++ return &bfqg->pd;
++}
++
++static void bfq_group_set_parent(struct bfq_group *bfqg,
++ struct bfq_group *parent)
++{
++ struct bfq_entity *entity;
++
++ BUG_ON(!parent);
++ BUG_ON(!bfqg);
++ BUG_ON(bfqg == parent);
++
++ entity = &bfqg->entity;
++ entity->parent = parent->my_entity;
++ entity->sched_data = &parent->sched_data;
++}
++
++static void bfq_pd_init(struct blkg_policy_data *pd)
++{
++ struct blkcg_gq *blkg = pd_to_blkg(pd);
++ struct bfq_group *bfqg = blkg_to_bfqg(blkg);
++ struct bfq_data *bfqd = blkg->q->elevator->elevator_data;
++ struct bfq_entity *entity = &bfqg->entity;
++ struct bfq_group_data *d = blkcg_to_bfqgd(blkg->blkcg);
++
++ entity->orig_weight = entity->weight = entity->new_weight = d->weight;
++ entity->my_sched_data = &bfqg->sched_data;
++ bfqg->my_entity = entity; /*
++ * the root_group's will be set to NULL
++ * in bfq_init_queue()
++ */
++ bfqg->bfqd = bfqd;
++ bfqg->active_entities = 0;
++}
++
++static void bfq_pd_free(struct blkg_policy_data *pd)
++{
++ struct bfq_group *bfqg = pd_to_bfqg(pd);
++
++ bfqg_stats_exit(&bfqg->stats);
++ bfqg_stats_exit(&bfqg->dead_stats);
++
++ return kfree(bfqg);
++}
++
++/* offset delta from bfqg->stats to bfqg->dead_stats */
++static const int dead_stats_off_delta = offsetof(struct bfq_group, dead_stats) -
++ offsetof(struct bfq_group, stats);
++
++/* to be used by recursive prfill, sums live and dead stats recursively */
++static u64 bfqg_stat_pd_recursive_sum(struct blkg_policy_data *pd, int off)
++{
++ u64 sum = 0;
++
++ sum += blkg_stat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq, off);
++ sum += blkg_stat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq,
++ off + dead_stats_off_delta);
++ return sum;
++}
++
++/* to be used by recursive prfill, sums live and dead rwstats recursively */
++static struct blkg_rwstat
++bfqg_rwstat_pd_recursive_sum(struct blkg_policy_data *pd, int off)
++{
++ struct blkg_rwstat a, b;
++
++ a = blkg_rwstat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq, off);
++ b = blkg_rwstat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq,
++ off + dead_stats_off_delta);
++ blkg_rwstat_add_aux(&a, &b);
++ return a;
++}
++
++static void bfq_pd_reset_stats(struct blkg_policy_data *pd)
++{
++ struct bfq_group *bfqg = pd_to_bfqg(pd);
++
++ bfqg_stats_reset(&bfqg->stats);
++ bfqg_stats_reset(&bfqg->dead_stats);
++}
++
++static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
++ struct blkcg *blkcg)
++{
++ struct request_queue *q = bfqd->queue;
++ struct bfq_group *bfqg = NULL, *parent;
++ struct bfq_entity *entity = NULL;
++
++ assert_spin_locked(bfqd->queue->queue_lock);
++
++ /* avoid lookup for the common case where there's no blkcg */
++ if (blkcg == &blkcg_root) {
++ bfqg = bfqd->root_group;
++ } else {
++ struct blkcg_gq *blkg;
++
++ blkg = blkg_lookup_create(blkcg, q);
++ if (!IS_ERR(blkg))
++ bfqg = blkg_to_bfqg(blkg);
++ else /* fallback to root_group */
++ bfqg = bfqd->root_group;
++ }
++
++ BUG_ON(!bfqg);
++
++ /*
++ * Update chain of bfq_groups as we might be handling a leaf group
++ * which, along with some of its relatives, has not been hooked yet
++ * to the private hierarchy of BFQ.
++ */
++ entity = &bfqg->entity;
++ for_each_entity(entity) {
++ bfqg = container_of(entity, struct bfq_group, entity);
++ BUG_ON(!bfqg);
++ if (bfqg != bfqd->root_group) {
++ parent = bfqg_parent(bfqg);
++ if (!parent)
++ parent = bfqd->root_group;
++ BUG_ON(!parent);
++ bfq_group_set_parent(bfqg, parent);
++ }
++ }
++
++ return bfqg;
++}
++
++/**
++ * bfq_bfqq_move - migrate @bfqq to @bfqg.
++ * @bfqd: queue descriptor.
++ * @bfqq: the queue to move.
++ * @entity: @bfqq's entity.
++ * @bfqg: the group to move to.
++ *
++ * Move @bfqq to @bfqg, deactivating it from its old group and reactivating
++ * it on the new one. Avoid putting the entity on the old group idle tree.
++ *
++ * Must be called under the queue lock; the cgroup owning @bfqg must
++ * not disappear (by now this just means that we are called under
++ * rcu_read_lock()).
++ */
++static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ struct bfq_entity *entity, struct bfq_group *bfqg)
++{
++ int busy, resume;
++
++ busy = bfq_bfqq_busy(bfqq);
++ resume = !RB_EMPTY_ROOT(&bfqq->sort_list);
++
++ BUG_ON(resume && !entity->on_st);
++ BUG_ON(busy && !resume && entity->on_st &&
++ bfqq != bfqd->in_service_queue);
++
++ if (busy) {
++ BUG_ON(atomic_read(&bfqq->ref) < 2);
++
++ if (!resume)
++ bfq_del_bfqq_busy(bfqd, bfqq, 0);
++ else
++ bfq_deactivate_bfqq(bfqd, bfqq, 0);
++ } else if (entity->on_st)
++ bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
++ bfqg_put(bfqq_group(bfqq));
++
++ /*
++ * Here we use a reference to bfqg. We don't need a refcounter
++ * as the cgroup reference will not be dropped, so that its
++ * destroy() callback will not be invoked.
++ */
++ entity->parent = bfqg->my_entity;
++ entity->sched_data = &bfqg->sched_data;
++ bfqg_get(bfqg);
++
++ if (busy) {
++ if (resume)
++ bfq_activate_bfqq(bfqd, bfqq);
++ }
++
++ if (!bfqd->in_service_queue && !bfqd->rq_in_driver)
++ bfq_schedule_dispatch(bfqd);
++}
++
++/**
++ * __bfq_bic_change_cgroup - move @bic to @cgroup.
++ * @bfqd: the queue descriptor.
++ * @bic: the bic to move.
++ * @blkcg: the blk-cgroup to move to.
++ *
++ * Move bic to blkcg, assuming that bfqd->queue is locked; the caller
++ * has to make sure that the reference to cgroup is valid across the call.
++ *
++ * NOTE: an alternative approach might have been to store the current
++ * cgroup in bfqq and getting a reference to it, reducing the lookup
++ * time here, at the price of slightly more complex code.
++ */
++static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
++ struct bfq_io_cq *bic,
++ struct blkcg *blkcg)
++{
++ struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0);
++ struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1);
++ struct bfq_group *bfqg;
++ struct bfq_entity *entity;
++
++ lockdep_assert_held(bfqd->queue->queue_lock);
++
++ bfqg = bfq_find_alloc_group(bfqd, blkcg);
++ if (async_bfqq) {
++ entity = &async_bfqq->entity;
++
++ if (entity->sched_data != &bfqg->sched_data) {
++ bic_set_bfqq(bic, NULL, 0);
++ bfq_log_bfqq(bfqd, async_bfqq,
++ "bic_change_group: %p %d",
++ async_bfqq, atomic_read(&async_bfqq->ref));
++ bfq_put_queue(async_bfqq);
++ }
++ }
++
++ if (sync_bfqq) {
++ entity = &sync_bfqq->entity;
++ if (entity->sched_data != &bfqg->sched_data)
++ bfq_bfqq_move(bfqd, sync_bfqq, entity, bfqg);
++ }
++
++ return bfqg;
++}
++
++static void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
++{
++ struct bfq_data *bfqd = bic_to_bfqd(bic);
++ struct blkcg *blkcg;
++ struct bfq_group *bfqg = NULL;
++ uint64_t id;
++
++ rcu_read_lock();
++ blkcg = bio_blkcg(bio);
++ id = blkcg->css.serial_nr;
++ rcu_read_unlock();
++
++ /*
++ * Check whether blkcg has changed. The condition may trigger
++ * spuriously on a newly created cic but there's no harm.
++ */
++ if (unlikely(!bfqd) || likely(bic->blkcg_id == id))
++ return;
++
++ bfqg = __bfq_bic_change_cgroup(bfqd, bic, blkcg);
++ BUG_ON(!bfqg);
++ bic->blkcg_id = id;
++}
++
++/**
++ * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st.
++ * @st: the service tree being flushed.
++ */
++static void bfq_flush_idle_tree(struct bfq_service_tree *st)
++{
++ struct bfq_entity *entity = st->first_idle;
++
++ for (; entity ; entity = st->first_idle)
++ __bfq_deactivate_entity(entity, 0);
++}
++
++/**
++ * bfq_reparent_leaf_entity - move leaf entity to the root_group.
++ * @bfqd: the device data structure with the root group.
++ * @entity: the entity to move.
++ */
++static void bfq_reparent_leaf_entity(struct bfq_data *bfqd,
++ struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++ BUG_ON(!bfqq);
++ bfq_bfqq_move(bfqd, bfqq, entity, bfqd->root_group);
++}
++
++/**
++ * bfq_reparent_active_entities - move to the root group all active
++ * entities.
++ * @bfqd: the device data structure with the root group.
++ * @bfqg: the group to move from.
++ * @st: the service tree with the entities.
++ *
++ * Needs queue_lock to be taken and reference to be valid over the call.
++ */
++static void bfq_reparent_active_entities(struct bfq_data *bfqd,
++ struct bfq_group *bfqg,
++ struct bfq_service_tree *st)
++{
++ struct rb_root *active = &st->active;
++ struct bfq_entity *entity = NULL;
++
++ if (!RB_EMPTY_ROOT(&st->active))
++ entity = bfq_entity_of(rb_first(active));
++
++ for (; entity ; entity = bfq_entity_of(rb_first(active)))
++ bfq_reparent_leaf_entity(bfqd, entity);
++
++ if (bfqg->sched_data.in_service_entity)
++ bfq_reparent_leaf_entity(bfqd,
++ bfqg->sched_data.in_service_entity);
++}
++
++/**
++ * bfq_destroy_group - destroy @bfqg.
++ * @bfqg: the group being destroyed.
++ *
++ * Destroy @bfqg, making sure that it is not referenced from its parent.
++ * blkio already grabs the queue_lock for us, so no need to use RCU-based magic
++ */
++static void bfq_pd_offline(struct blkg_policy_data *pd)
++{
++ struct bfq_service_tree *st;
++ struct bfq_group *bfqg;
++ struct bfq_data *bfqd;
++ struct bfq_entity *entity;
++ int i;
++
++ BUG_ON(!pd);
++ bfqg = pd_to_bfqg(pd);
++ BUG_ON(!bfqg);
++ bfqd = bfqg->bfqd;
++ BUG_ON(bfqd && !bfqd->root_group);
++
++ entity = bfqg->my_entity;
++
++ if (!entity) /* root group */
++ return;
++
++ /*
++ * Empty all service_trees belonging to this group before
++ * deactivating the group itself.
++ */
++ for (i = 0; i < BFQ_IOPRIO_CLASSES; i++) {
++ BUG_ON(!bfqg->sched_data.service_tree);
++ st = bfqg->sched_data.service_tree + i;
++ /*
++ * The idle tree may still contain bfq_queues belonging
++		 * to exited tasks because they never migrated to a different
++ * cgroup from the one being destroyed now. No one else
++ * can access them so it's safe to act without any lock.
++ */
++ bfq_flush_idle_tree(st);
++
++ /*
++ * It may happen that some queues are still active
++ * (busy) upon group destruction (if the corresponding
++ * processes have been forced to terminate). We move
++ * all the leaf entities corresponding to these queues
++ * to the root_group.
++ * Also, it may happen that the group has an entity
++ * in service, which is disconnected from the active
++ * tree: it must be moved, too.
++ * There is no need to put the sync queues, as the
++ * scheduler has taken no reference.
++ */
++ bfq_reparent_active_entities(bfqd, bfqg, st);
++ BUG_ON(!RB_EMPTY_ROOT(&st->active));
++ BUG_ON(!RB_EMPTY_ROOT(&st->idle));
++ }
++ BUG_ON(bfqg->sched_data.next_in_service);
++ BUG_ON(bfqg->sched_data.in_service_entity);
++
++ __bfq_deactivate_entity(entity, 0);
++ bfq_put_async_queues(bfqd, bfqg);
++ BUG_ON(entity->tree);
++
++ bfqg_stats_xfer_dead(bfqg);
++}
++
++static void bfq_end_wr_async(struct bfq_data *bfqd)
++{
++ struct blkcg_gq *blkg;
++
++ list_for_each_entry(blkg, &bfqd->queue->blkg_list, q_node) {
++ struct bfq_group *bfqg = blkg_to_bfqg(blkg);
++
++ bfq_end_wr_async_queues(bfqd, bfqg);
++ }
++ bfq_end_wr_async_queues(bfqd, bfqd->root_group);
++}
++
++static u64 bfqio_cgroup_weight_read(struct cgroup_subsys_state *css,
++ struct cftype *cftype)
++{
++ struct blkcg *blkcg = css_to_blkcg(css);
++ struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg);
++ int ret = -EINVAL;
++
++ spin_lock_irq(&blkcg->lock);
++ ret = bfqgd->weight;
++ spin_unlock_irq(&blkcg->lock);
++
++ return ret;
++}
++
++static int bfqio_cgroup_weight_read_dfl(struct seq_file *sf, void *v)
++{
++ struct blkcg *blkcg = css_to_blkcg(seq_css(sf));
++ struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg);
++
++ spin_lock_irq(&blkcg->lock);
++ seq_printf(sf, "%u\n", bfqgd->weight);
++ spin_unlock_irq(&blkcg->lock);
++
++ return 0;
++}
++
++static int bfqio_cgroup_weight_write(struct cgroup_subsys_state *css,
++ struct cftype *cftype,
++ u64 val)
++{
++ struct blkcg *blkcg = css_to_blkcg(css);
++ struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg);
++ struct blkcg_gq *blkg;
++ int ret = -EINVAL;
++
++ if (val < BFQ_MIN_WEIGHT || val > BFQ_MAX_WEIGHT)
++ return ret;
++
++ ret = 0;
++ spin_lock_irq(&blkcg->lock);
++ bfqgd->weight = (unsigned short)val;
++ hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
++ struct bfq_group *bfqg = blkg_to_bfqg(blkg);
++
++ if (!bfqg)
++ continue;
++ /*
++ * Setting the prio_changed flag of the entity
++ * to 1 with new_weight == weight would re-set
++ * the value of the weight to its ioprio mapping.
++ * Set the flag only if necessary.
++ */
++ if ((unsigned short)val != bfqg->entity.new_weight) {
++ bfqg->entity.new_weight = (unsigned short)val;
++ /*
++ * Make sure that the above new value has been
++ * stored in bfqg->entity.new_weight before
++ * setting the prio_changed flag. In fact,
++ * this flag may be read asynchronously (in
++ * critical sections protected by a different
++ * lock than that held here), and finding this
++ * flag set may cause the execution of the code
++ * for updating parameters whose value may
++ * depend also on bfqg->entity.new_weight (in
++ * __bfq_entity_update_weight_prio).
++ * This barrier makes sure that the new value
++ * of bfqg->entity.new_weight is correctly
++ * seen in that code.
++ */
++ smp_wmb();
++ bfqg->entity.prio_changed = 1;
++ }
++ }
++ spin_unlock_irq(&blkcg->lock);
++
++ return ret;
++}
++
++static ssize_t bfqio_cgroup_weight_write_dfl(struct kernfs_open_file *of,
++ char *buf, size_t nbytes,
++ loff_t off)
++{
++ /* First unsigned long found in the file is used */
++ return bfqio_cgroup_weight_write(of_css(of), NULL,
++ simple_strtoull(strim(buf), NULL, 0));
++}
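++
++/*
++ * On the legacy (cgroup v1) hierarchy the knob above is exposed as
++ * blkio.bfq.weight. Values outside [BFQ_MIN_WEIGHT, BFQ_MAX_WEIGHT]
++ * are rejected with -EINVAL; any other value updates entity.new_weight
++ * for every group of the writing blkcg and flags the change through
++ * prio_changed, as implemented above.
++ */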
++
++static int bfqg_print_stat(struct seq_file *sf, void *v)
++{
++ blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), blkg_prfill_stat,
++ &blkcg_policy_bfq, seq_cft(sf)->private, false);
++ return 0;
++}
++
++static int bfqg_print_rwstat(struct seq_file *sf, void *v)
++{
++ blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), blkg_prfill_rwstat,
++ &blkcg_policy_bfq, seq_cft(sf)->private, true);
++ return 0;
++}
++
++static u64 bfqg_prfill_stat_recursive(struct seq_file *sf,
++ struct blkg_policy_data *pd, int off)
++{
++ u64 sum = bfqg_stat_pd_recursive_sum(pd, off);
++
++ return __blkg_prfill_u64(sf, pd, sum);
++}
++
++static u64 bfqg_prfill_rwstat_recursive(struct seq_file *sf,
++ struct blkg_policy_data *pd, int off)
++{
++ struct blkg_rwstat sum = bfqg_rwstat_pd_recursive_sum(pd, off);
++
++ return __blkg_prfill_rwstat(sf, pd, &sum);
++}
++
++static int bfqg_print_stat_recursive(struct seq_file *sf, void *v)
++{
++ blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)),
++ bfqg_prfill_stat_recursive, &blkcg_policy_bfq,
++ seq_cft(sf)->private, false);
++ return 0;
++}
++
++static int bfqg_print_rwstat_recursive(struct seq_file *sf, void *v)
++{
++ blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)),
++ bfqg_prfill_rwstat_recursive, &blkcg_policy_bfq,
++ seq_cft(sf)->private, true);
++ return 0;
++}
++
++static u64 bfqg_prfill_avg_queue_size(struct seq_file *sf,
++ struct blkg_policy_data *pd, int off)
++{
++ struct bfq_group *bfqg = pd_to_bfqg(pd);
++ u64 samples = blkg_stat_read(&bfqg->stats.avg_queue_size_samples);
++ u64 v = 0;
++
++ if (samples) {
++ v = blkg_stat_read(&bfqg->stats.avg_queue_size_sum);
++ v = div64_u64(v, samples);
++ }
++ __blkg_prfill_u64(sf, pd, v);
++ return 0;
++}
++
++/* print avg_queue_size */
++static int bfqg_print_avg_queue_size(struct seq_file *sf, void *v)
++{
++ blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)),
++ bfqg_prfill_avg_queue_size, &blkcg_policy_bfq,
++ 0, false);
++ return 0;
++}
++
++static struct bfq_group *
++bfq_create_group_hierarchy(struct bfq_data *bfqd, int node)
++{
++ int ret;
++
++ ret = blkcg_activate_policy(bfqd->queue, &blkcg_policy_bfq);
++ if (ret)
++ return NULL;
++
++ return blkg_to_bfqg(bfqd->queue->root_blkg);
++}
++
++static struct blkcg_policy_data *bfq_cpd_alloc(gfp_t gfp)
++{
++ struct bfq_group_data *bgd;
++
++ bgd = kzalloc(sizeof(*bgd), GFP_KERNEL);
++ if (!bgd)
++ return NULL;
++ return &bgd->pd;
++}
++
++static void bfq_cpd_free(struct blkcg_policy_data *cpd)
++{
++ kfree(cpd_to_bfqgd(cpd));
++}
++
++static struct cftype bfqio_files_dfl[] = {
++ {
++ .name = "weight",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = bfqio_cgroup_weight_read_dfl,
++ .write = bfqio_cgroup_weight_write_dfl,
++ },
++ {} /* terminate */
++};
++
++static struct cftype bfqio_files[] = {
++ {
++ .name = "bfq.weight",
++ .read_u64 = bfqio_cgroup_weight_read,
++ .write_u64 = bfqio_cgroup_weight_write,
++ },
++ /* statistics, cover only the tasks in the bfqg */
++ {
++ .name = "bfq.time",
++ .private = offsetof(struct bfq_group, stats.time),
++ .seq_show = bfqg_print_stat,
++ },
++ {
++ .name = "bfq.sectors",
++ .private = offsetof(struct bfq_group, stats.sectors),
++ .seq_show = bfqg_print_stat,
++ },
++ {
++ .name = "bfq.io_service_bytes",
++ .private = offsetof(struct bfq_group, stats.service_bytes),
++ .seq_show = bfqg_print_rwstat,
++ },
++ {
++ .name = "bfq.io_serviced",
++ .private = offsetof(struct bfq_group, stats.serviced),
++ .seq_show = bfqg_print_rwstat,
++ },
++ {
++ .name = "bfq.io_service_time",
++ .private = offsetof(struct bfq_group, stats.service_time),
++ .seq_show = bfqg_print_rwstat,
++ },
++ {
++ .name = "bfq.io_wait_time",
++ .private = offsetof(struct bfq_group, stats.wait_time),
++ .seq_show = bfqg_print_rwstat,
++ },
++ {
++ .name = "bfq.io_merged",
++ .private = offsetof(struct bfq_group, stats.merged),
++ .seq_show = bfqg_print_rwstat,
++ },
++ {
++ .name = "bfq.io_queued",
++ .private = offsetof(struct bfq_group, stats.queued),
++ .seq_show = bfqg_print_rwstat,
++ },
++
++	/* the same statistics, which cover the bfqg and its descendants */
++ {
++ .name = "bfq.time_recursive",
++ .private = offsetof(struct bfq_group, stats.time),
++ .seq_show = bfqg_print_stat_recursive,
++ },
++ {
++ .name = "bfq.sectors_recursive",
++ .private = offsetof(struct bfq_group, stats.sectors),
++ .seq_show = bfqg_print_stat_recursive,
++ },
++ {
++ .name = "bfq.io_service_bytes_recursive",
++ .private = offsetof(struct bfq_group, stats.service_bytes),
++ .seq_show = bfqg_print_rwstat_recursive,
++ },
++ {
++ .name = "bfq.io_serviced_recursive",
++ .private = offsetof(struct bfq_group, stats.serviced),
++ .seq_show = bfqg_print_rwstat_recursive,
++ },
++ {
++ .name = "bfq.io_service_time_recursive",
++ .private = offsetof(struct bfq_group, stats.service_time),
++ .seq_show = bfqg_print_rwstat_recursive,
++ },
++ {
++ .name = "bfq.io_wait_time_recursive",
++ .private = offsetof(struct bfq_group, stats.wait_time),
++ .seq_show = bfqg_print_rwstat_recursive,
++ },
++ {
++ .name = "bfq.io_merged_recursive",
++ .private = offsetof(struct bfq_group, stats.merged),
++ .seq_show = bfqg_print_rwstat_recursive,
++ },
++ {
++ .name = "bfq.io_queued_recursive",
++ .private = offsetof(struct bfq_group, stats.queued),
++ .seq_show = bfqg_print_rwstat_recursive,
++ },
++ {
++ .name = "bfq.avg_queue_size",
++ .seq_show = bfqg_print_avg_queue_size,
++ },
++ {
++ .name = "bfq.group_wait_time",
++ .private = offsetof(struct bfq_group, stats.group_wait_time),
++ .seq_show = bfqg_print_stat,
++ },
++ {
++ .name = "bfq.idle_time",
++ .private = offsetof(struct bfq_group, stats.idle_time),
++ .seq_show = bfqg_print_stat,
++ },
++ {
++ .name = "bfq.empty_time",
++ .private = offsetof(struct bfq_group, stats.empty_time),
++ .seq_show = bfqg_print_stat,
++ },
++ {
++ .name = "bfq.dequeue",
++ .private = offsetof(struct bfq_group, stats.dequeue),
++ .seq_show = bfqg_print_stat,
++ },
++ {
++ .name = "bfq.unaccounted_time",
++ .private = offsetof(struct bfq_group, stats.unaccounted_time),
++ .seq_show = bfqg_print_stat,
++ },
++ { } /* terminate */
++};
++
++static struct blkcg_policy blkcg_policy_bfq = {
++ .dfl_cftypes = bfqio_files_dfl,
++ .legacy_cftypes = bfqio_files,
++
++ .pd_alloc_fn = bfq_pd_alloc,
++ .pd_init_fn = bfq_pd_init,
++ .pd_offline_fn = bfq_pd_offline,
++ .pd_free_fn = bfq_pd_free,
++ .pd_reset_stats_fn = bfq_pd_reset_stats,
++
++ .cpd_alloc_fn = bfq_cpd_alloc,
++ .cpd_init_fn = bfq_cpd_init,
++ .cpd_bind_fn = bfq_cpd_init,
++ .cpd_free_fn = bfq_cpd_free,
++};
++
++#else
++
++static void bfq_init_entity(struct bfq_entity *entity,
++ struct bfq_group *bfqg)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++ entity->weight = entity->new_weight;
++ entity->orig_weight = entity->new_weight;
++ if (bfqq) {
++ bfqq->ioprio = bfqq->new_ioprio;
++ bfqq->ioprio_class = bfqq->new_ioprio_class;
++ }
++ entity->sched_data = &bfqg->sched_data;
++}
++
++static struct bfq_group *
++bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
++{
++ struct bfq_data *bfqd = bic_to_bfqd(bic);
++
++ return bfqd->root_group;
++}
++
++static void bfq_bfqq_move(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ struct bfq_entity *entity,
++ struct bfq_group *bfqg)
++{
++}
++
++static void bfq_end_wr_async(struct bfq_data *bfqd)
++{
++ bfq_end_wr_async_queues(bfqd, bfqd->root_group);
++}
++
++static void bfq_disconnect_groups(struct bfq_data *bfqd)
++{
++ bfq_put_async_queues(bfqd, bfqd->root_group);
++}
++
++static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
++ struct blkcg *blkcg)
++{
++ return bfqd->root_group;
++}
++
++static struct bfq_group *
++bfq_create_group_hierarchy(struct bfq_data *bfqd, int node)
++{
++ struct bfq_group *bfqg;
++ int i;
++
++ bfqg = kmalloc_node(sizeof(*bfqg), GFP_KERNEL | __GFP_ZERO, node);
++ if (!bfqg)
++ return NULL;
++
++ for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
++ bfqg->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
++
++ return bfqg;
++}
++#endif
+diff --git a/block/bfq-ioc.c b/block/bfq-ioc.c
+new file mode 100644
+index 0000000..fb7bb8f
+--- /dev/null
++++ b/block/bfq-ioc.c
+@@ -0,0 +1,36 @@
++/*
++ * BFQ: I/O context handling.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ * Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++/**
++ * icq_to_bic - convert iocontext queue structure to bfq_io_cq.
++ * @icq: the iocontext queue.
++ */
++static struct bfq_io_cq *icq_to_bic(struct io_cq *icq)
++{
++ /* bic->icq is the first member, %NULL will convert to %NULL */
++ return container_of(icq, struct bfq_io_cq, icq);
++}
++
++/**
++ * bfq_bic_lookup - search into @ioc a bic associated to @bfqd.
++ * @bfqd: the lookup key.
++ * @ioc: the io_context of the process doing I/O.
++ *
++ * Queue lock must be held.
++ */
++static struct bfq_io_cq *bfq_bic_lookup(struct bfq_data *bfqd,
++ struct io_context *ioc)
++{
++ if (ioc)
++ return icq_to_bic(ioc_lookup_icq(ioc, bfqd->queue));
++ return NULL;
++}
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+new file mode 100644
+index 0000000..85e2169
+--- /dev/null
++++ b/block/bfq-iosched.c
+@@ -0,0 +1,3763 @@
++/*
++ * Budget Fair Queueing (BFQ) disk scheduler.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ * Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
++ * file.
++ *
++ * BFQ is a proportional-share storage-I/O scheduling algorithm based on
++ * the slice-by-slice service scheme of CFQ. But BFQ assigns budgets,
++ * measured in number of sectors, to processes instead of time slices. The
++ * device is not granted to the in-service process for a given time slice,
++ * but until it has exhausted its assigned budget. This change from the time
++ * to the service domain allows BFQ to distribute the device throughput
++ * among processes as desired, without any distortion due to ZBR, workload
++ * fluctuations or other factors. BFQ uses an ad hoc internal scheduler,
++ * called B-WF2Q+, to schedule processes according to their budgets. More
++ * precisely, BFQ schedules queues associated to processes. Thanks to the
++ * accurate policy of B-WF2Q+, BFQ can afford to assign high budgets to
++ * I/O-bound processes issuing sequential requests (to boost the
++ * throughput), and yet guarantee a low latency to interactive and soft
++ * real-time applications.
++ *
++ * BFQ is described in [1], where also a reference to the initial, more
++ * theoretical paper on BFQ can be found. The interested reader can find
++ * in the latter paper full details on the main algorithm, as well as
++ * formulas of the guarantees and formal proofs of all the properties.
++ * With respect to the version of BFQ presented in these papers, this
++ * implementation adds a few more heuristics, such as the one that
++ * guarantees a low latency to soft real-time applications, and a
++ * hierarchical extension based on H-WF2Q+.
++ *
++ * B-WF2Q+ is based on WF2Q+, that is described in [2], together with
++ * H-WF2Q+, while the augmented tree used to implement B-WF2Q+ with O(log N)
++ * complexity derives from the one introduced with EEVDF in [3].
++ *
++ * [1] P. Valente and M. Andreolini, ``Improving Application Responsiveness
++ * with the BFQ Disk I/O Scheduler'',
++ * Proceedings of the 5th Annual International Systems and Storage
++ * Conference (SYSTOR '12), June 2012.
++ *
++ * http://algogroup.unimo.it/people/paolo/disk_sched/bf1-v1-suite-results.pdf
++ *
++ * [2] Jon C.R. Bennett and H. Zhang, ``Hierarchical Packet Fair Queueing
++ * Algorithms,'' IEEE/ACM Transactions on Networking, 5(5):675-689,
++ * Oct 1997.
++ *
++ * http://www.cs.cmu.edu/~hzhang/papers/TON-97-Oct.ps.gz
++ *
++ * [3] I. Stoica and H. Abdel-Wahab, ``Earliest Eligible Virtual Deadline
++ * First: A Flexible and Accurate Mechanism for Proportional Share
++ * Resource Allocation,'' technical report.
++ *
++ * http://www.cs.berkeley.edu/~istoica/papers/eevdf-tr-95.pdf
++ */
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <linux/blkdev.h>
++#include <linux/cgroup.h>
++#include <linux/elevator.h>
++#include <linux/jiffies.h>
++#include <linux/rbtree.h>
++#include <linux/ioprio.h>
++#include "bfq.h"
++#include "blk.h"
++
++/* Expiration time of sync (0) and async (1) requests, in jiffies. */
++static const int bfq_fifo_expire[2] = { HZ / 4, HZ / 8 };
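++
++/*
++ * In wall-clock terms the values above correspond to 250 ms for sync
++ * and 125 ms for async requests, independently of the HZ setting,
++ * since HZ jiffies always amount to one second.
++ */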
++
++/* Maximum backwards seek, in KiB. */
++static const int bfq_back_max = 16 * 1024;
++
++/* Penalty of a backwards seek, in number of sectors. */
++static const int bfq_back_penalty = 2;
++
++/* Idling period duration, in jiffies. */
++static int bfq_slice_idle = HZ / 125;
++
++/* Minimum number of assigned budgets for which stats are safe to compute. */
++static const int bfq_stats_min_budgets = 194;
++
++/* Default maximum budget values, in sectors and number of requests. */
++static const int bfq_default_max_budget = 16 * 1024;
++static const int bfq_max_budget_async_rq = 4;
++
++/*
++ * Async to sync throughput distribution is controlled as follows:
++ * when an async request is served, the entity is charged the number
++ * of sectors of the request, multiplied by the factor below.
++ */
++static const int bfq_async_charge_factor = 10;
++
++/* Default timeout values, in jiffies, approximating CFQ defaults. */
++static const int bfq_timeout_sync = HZ / 8;
++static int bfq_timeout_async = HZ / 25;
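++
++/*
++ * With the expressions above, the sync timeout corresponds to 125 ms
++ * and the async timeout to 40 ms of wall-clock time.
++ */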
++
++struct kmem_cache *bfq_pool;
++
++/* Below this threshold (in ms), we consider thinktime immediate. */
++#define BFQ_MIN_TT 2
++
++/* hw_tag detection: parallel requests threshold and min samples needed. */
++#define BFQ_HW_QUEUE_THRESHOLD 4
++#define BFQ_HW_QUEUE_SAMPLES 32
++
++#define BFQQ_SEEK_THR (sector_t)(8 * 1024)
++#define BFQQ_SEEKY(bfqq) ((bfqq)->seek_mean > BFQQ_SEEK_THR)
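++
++/*
++ * With 512-byte sectors, the seek threshold above corresponds to a
++ * mean seek distance of 8 * 1024 sectors = 4 MiB.
++ */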
++
++/* Min samples used for peak rate estimation (for autotuning). */
++#define BFQ_PEAK_RATE_SAMPLES 32
++
++/* Shift used for peak rate fixed precision calculations. */
++#define BFQ_RATE_SHIFT 16
++
++/*
++ * By default, BFQ computes the duration of the weight raising for
++ * interactive applications automatically, using the following formula:
++ * duration = (R / r) * T, where r is the peak rate of the device, and
++ * R and T are two reference parameters.
++ * In particular, R is the peak rate of the reference device (see below),
++ * and T is a reference time: given the systems that are likely to be
++ * installed on the reference device according to its speed class, T is
++ * about the maximum time needed, under BFQ and while reading two files in
++ * parallel, to load typical large applications on these systems.
++ * In practice, the slower/faster the device at hand is, the more/less it
++ * takes to load applications with respect to the reference device.
++ * Accordingly, the longer/shorter BFQ grants weight raising to interactive
++ * applications.
++ *
++ * BFQ uses four different reference pairs (R, T), depending on:
++ * . whether the device is rotational or non-rotational;
++ * . whether the device is slow, such as old or portable HDDs, as well as
++ * SD cards, or fast, such as newer HDDs and SSDs.
++ *
++ * The device's speed class is dynamically (re)detected in
++ * bfq_update_peak_rate() every time the estimated peak rate is updated.
++ *
++ * In the following definitions, R_slow[0]/R_fast[0] and T_slow[0]/T_fast[0]
++ * are the reference values for a slow/fast rotational device, whereas
++ * R_slow[1]/R_fast[1] and T_slow[1]/T_fast[1] are the reference values for
++ * a slow/fast non-rotational device. Finally, device_speed_thresh are the
++ * thresholds used to switch between speed classes.
++ * Both the reference peak rates and the thresholds are measured in
++ * sectors/usec, left-shifted by BFQ_RATE_SHIFT.
++ */
++static int R_slow[2] = {1536, 10752};
++static int R_fast[2] = {17415, 34791};
++/*
++ * To improve readability, a conversion function is used to initialize the
++ * following arrays, which entails that they can be initialized only in a
++ * function.
++ */
++static int T_slow[2];
++static int T_fast[2];
++static int device_speed_thresh[2];
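++
++/*
++ * As a rough illustration of the fixed-point encoding above:
++ * R_slow[0] = 1536 is 1536 / 2^16 = 0.0234 sectors/usec, i.e., about
++ * 12 MB/s for a slow rotational device, while R_fast[1] = 34791 is
++ * roughly 0.53 sectors/usec, i.e., about 270 MB/s for a fast
++ * non-rotational device (assuming 512-byte sectors).
++ */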
++
++#define BFQ_SERVICE_TREE_INIT ((struct bfq_service_tree) \
++ { RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 })
++
++#define RQ_BIC(rq) ((struct bfq_io_cq *) (rq)->elv.priv[0])
++#define RQ_BFQQ(rq) ((rq)->elv.priv[1])
++
++static void bfq_schedule_dispatch(struct bfq_data *bfqd);
++
++#include "bfq-ioc.c"
++#include "bfq-sched.c"
++#include "bfq-cgroup.c"
++
++#define bfq_class_idle(bfqq) ((bfqq)->ioprio_class == IOPRIO_CLASS_IDLE)
++#define bfq_class_rt(bfqq) ((bfqq)->ioprio_class == IOPRIO_CLASS_RT)
++
++#define bfq_sample_valid(samples) ((samples) > 80)
++
++/*
++ * We regard a request as SYNC, if either it's a read or has the SYNC bit
++ * set (in which case it could also be a direct WRITE).
++ */
++static int bfq_bio_sync(struct bio *bio)
++{
++ if (bio_data_dir(bio) == READ || (bio->bi_rw & REQ_SYNC))
++ return 1;
++
++ return 0;
++}
++
++/*
++ * Scheduler run of queue, if there are requests pending and no one in the
++ * driver that will restart queueing.
++ */
++static void bfq_schedule_dispatch(struct bfq_data *bfqd)
++{
++ if (bfqd->queued != 0) {
++ bfq_log(bfqd, "schedule dispatch");
++ kblockd_schedule_work(&bfqd->unplug_work);
++ }
++}
++
++/*
++ * Lifted from AS - choose which of rq1 and rq2 is best served now.
++ * We choose the request that is closest to the head right now. Distance
++ * behind the head is penalized and only allowed to a certain extent.
++ */
++static struct request *bfq_choose_req(struct bfq_data *bfqd,
++ struct request *rq1,
++ struct request *rq2,
++ sector_t last)
++{
++ sector_t s1, s2, d1 = 0, d2 = 0;
++ unsigned long back_max;
++#define BFQ_RQ1_WRAP 0x01 /* request 1 wraps */
++#define BFQ_RQ2_WRAP 0x02 /* request 2 wraps */
++ unsigned int wrap = 0; /* bit mask: requests behind the disk head? */
++
++ if (!rq1 || rq1 == rq2)
++ return rq2;
++ if (!rq2)
++ return rq1;
++
++ if (rq_is_sync(rq1) && !rq_is_sync(rq2))
++ return rq1;
++ else if (rq_is_sync(rq2) && !rq_is_sync(rq1))
++ return rq2;
++ if ((rq1->cmd_flags & REQ_META) && !(rq2->cmd_flags & REQ_META))
++ return rq1;
++ else if ((rq2->cmd_flags & REQ_META) && !(rq1->cmd_flags & REQ_META))
++ return rq2;
++
++ s1 = blk_rq_pos(rq1);
++ s2 = blk_rq_pos(rq2);
++
++ /*
++ * By definition, 1KiB is 2 sectors.
++ */
++ back_max = bfqd->bfq_back_max * 2;
++
++ /*
++ * Strict one way elevator _except_ in the case where we allow
++ * short backward seeks which are biased as twice the cost of a
++ * similar forward seek.
++ */
++ if (s1 >= last)
++ d1 = s1 - last;
++ else if (s1 + back_max >= last)
++ d1 = (last - s1) * bfqd->bfq_back_penalty;
++ else
++ wrap |= BFQ_RQ1_WRAP;
++
++ if (s2 >= last)
++ d2 = s2 - last;
++ else if (s2 + back_max >= last)
++ d2 = (last - s2) * bfqd->bfq_back_penalty;
++ else
++ wrap |= BFQ_RQ2_WRAP;
++
++ /* Found required data */
++
++ /*
++ * By doing switch() on the bit mask "wrap" we avoid having to
++ * check two variables for all permutations: --> faster!
++ */
++ switch (wrap) {
++ case 0: /* common case for CFQ: rq1 and rq2 not wrapped */
++ if (d1 < d2)
++ return rq1;
++ else if (d2 < d1)
++ return rq2;
++
++ if (s1 >= s2)
++ return rq1;
++ else
++ return rq2;
++
++ case BFQ_RQ2_WRAP:
++ return rq1;
++ case BFQ_RQ1_WRAP:
++ return rq2;
++ case (BFQ_RQ1_WRAP|BFQ_RQ2_WRAP): /* both rqs wrapped */
++ default:
++ /*
++ * Since both rqs are wrapped,
++ * start with the one that's further behind head
++ * (--> only *one* back seek required),
++ * since back seek takes more time than forward.
++ */
++ if (s1 <= s2)
++ return rq1;
++ else
++ return rq2;
++ }
++}
++
++/*
++ * Tell whether there are active queues or groups with differentiated weights.
++ */
++static bool bfq_differentiated_weights(struct bfq_data *bfqd)
++{
++ /*
++ * For weights to differ, at least one of the trees must contain
++ * at least two nodes.
++ */
++ return (!RB_EMPTY_ROOT(&bfqd->queue_weights_tree) &&
++ (bfqd->queue_weights_tree.rb_node->rb_left ||
++ bfqd->queue_weights_tree.rb_node->rb_right)
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ ) ||
++ (!RB_EMPTY_ROOT(&bfqd->group_weights_tree) &&
++ (bfqd->group_weights_tree.rb_node->rb_left ||
++ bfqd->group_weights_tree.rb_node->rb_right)
++#endif
++ );
++}
++
++/*
++ * The following function returns true if every queue must receive the
++ * same share of the throughput (this condition is used when deciding
++ * whether idling may be disabled, see the comments in the function
++ * bfq_bfqq_may_idle()).
++ *
++ * Such a scenario occurs when:
++ * 1) all active queues have the same weight,
++ * 2) all active groups at the same level in the groups tree have the same
++ * weight,
++ * 3) all active groups at the same level in the groups tree have the same
++ * number of children.
++ *
++ * Unfortunately, keeping the necessary state for evaluating exactly the
++ * above symmetry conditions would be quite complex and time-consuming.
++ * Therefore this function evaluates, instead, the following stronger
++ * sub-conditions, for which it is much easier to maintain the needed
++ * state:
++ * 1) all active queues have the same weight,
++ * 2) all active groups have the same weight,
++ * 3) all active groups have at most one active child each.
++ * In particular, the last two conditions are always true if hierarchical
++ * support and the cgroups interface are not enabled, thus no state needs
++ * to be maintained in this case.
++ */
++static bool bfq_symmetric_scenario(struct bfq_data *bfqd)
++{
++ return
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ !bfqd->active_numerous_groups &&
++#endif
++ !bfq_differentiated_weights(bfqd);
++}
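++
++/*
++ * Example of the check above: if two queues are active with weights
++ * 100 and 200, the queue weights tree contains two counter nodes, so
++ * its root has a child and bfq_differentiated_weights() returns true.
++ * The scenario is then treated as asymmetric, and idling cannot be
++ * disabled on the basis of this condition (see the comments to
++ * bfq_bfqq_may_idle()).
++ */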
++
++/*
++ * If the weight-counter tree passed as input contains no counter for
++ * the weight of the input entity, then add that counter; otherwise just
++ * increment the existing counter.
++ *
++ * Note that weight-counter trees contain few nodes in mostly symmetric
++ * scenarios. For example, if all queues have the same weight, then the
++ * weight-counter tree for the queues may contain at most one node.
++ * This holds even if low_latency is on, because weight-raised queues
++ * are not inserted in the tree.
++ * In most scenarios, the rate at which nodes are created/destroyed
++ * should be low too.
++ */
++static void bfq_weights_tree_add(struct bfq_data *bfqd,
++ struct bfq_entity *entity,
++ struct rb_root *root)
++{
++ struct rb_node **new = &(root->rb_node), *parent = NULL;
++
++ /*
++ * Do not insert if the entity is already associated with a
++ * counter, which happens if:
++ * 1) the entity is associated with a queue,
++ * 2) a request arrival has caused the queue to become both
++ * non-weight-raised, and hence change its weight, and
++ * backlogged; in this respect, each of the two events
++ * causes an invocation of this function,
++ * 3) this is the invocation of this function caused by the
++ * second event. This second invocation is actually useless,
++ * and we handle this fact by exiting immediately. More
++ * efficient or clearer solutions might possibly be adopted.
++ */
++ if (entity->weight_counter)
++ return;
++
++ while (*new) {
++ struct bfq_weight_counter *__counter = container_of(*new,
++ struct bfq_weight_counter,
++ weights_node);
++ parent = *new;
++
++ if (entity->weight == __counter->weight) {
++ entity->weight_counter = __counter;
++ goto inc_counter;
++ }
++ if (entity->weight < __counter->weight)
++ new = &((*new)->rb_left);
++ else
++ new = &((*new)->rb_right);
++ }
++
++ entity->weight_counter = kzalloc(sizeof(struct bfq_weight_counter),
++ GFP_ATOMIC);
++ entity->weight_counter->weight = entity->weight;
++ rb_link_node(&entity->weight_counter->weights_node, parent, new);
++ rb_insert_color(&entity->weight_counter->weights_node, root);
++
++inc_counter:
++ entity->weight_counter->num_active++;
++}
++
++/*
++ * Decrement the weight counter associated with the entity, and, if the
++ * counter reaches 0, remove the counter from the tree.
++ * See the comments to the function bfq_weights_tree_add() for considerations
++ * about overhead.
++ */
++static void bfq_weights_tree_remove(struct bfq_data *bfqd,
++ struct bfq_entity *entity,
++ struct rb_root *root)
++{
++ if (!entity->weight_counter)
++ return;
++
++ BUG_ON(RB_EMPTY_ROOT(root));
++ BUG_ON(entity->weight_counter->weight != entity->weight);
++
++ BUG_ON(!entity->weight_counter->num_active);
++ entity->weight_counter->num_active--;
++ if (entity->weight_counter->num_active > 0)
++ goto reset_entity_pointer;
++
++ rb_erase(&entity->weight_counter->weights_node, root);
++ kfree(entity->weight_counter);
++
++reset_entity_pointer:
++ entity->weight_counter = NULL;
++}
++
++static struct request *bfq_find_next_rq(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ struct request *last)
++{
++ struct rb_node *rbnext = rb_next(&last->rb_node);
++ struct rb_node *rbprev = rb_prev(&last->rb_node);
++ struct request *next = NULL, *prev = NULL;
++
++ BUG_ON(RB_EMPTY_NODE(&last->rb_node));
++
++ if (rbprev)
++ prev = rb_entry_rq(rbprev);
++
++ if (rbnext)
++ next = rb_entry_rq(rbnext);
++ else {
++ rbnext = rb_first(&bfqq->sort_list);
++ if (rbnext && rbnext != &last->rb_node)
++ next = rb_entry_rq(rbnext);
++ }
++
++ return bfq_choose_req(bfqd, next, prev, blk_rq_pos(last));
++}
++
++/* see the definition of bfq_async_charge_factor for details */
++static unsigned long bfq_serv_to_charge(struct request *rq,
++ struct bfq_queue *bfqq)
++{
++ return blk_rq_sectors(rq) *
++ (1 + ((!bfq_bfqq_sync(bfqq)) * (bfqq->wr_coeff == 1) *
++ bfq_async_charge_factor));
++}
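++
++/*
++ * Worked example of the charge above: with the default
++ * bfq_async_charge_factor of 10, an 8-sector async request issued by a
++ * non-weight-raised queue is charged 8 * (1 + 10) = 88 sectors of
++ * budget, whereas a sync request of the same size is charged just its
++ * 8 sectors.
++ */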
++
++/**
++ * bfq_updated_next_req - update the queue after a new next_rq selection.
++ * @bfqd: the device data the queue belongs to.
++ * @bfqq: the queue to update.
++ *
++ * If the first request of a queue changes we make sure that the queue
++ * has enough budget to serve at least its first request (if the
++ * request has grown). We do this because if the queue has not enough
++ * budget for its first request, it has to go through two dispatch
++ * rounds to actually get it dispatched.
++ */
++static void bfq_updated_next_req(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++ struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++ struct request *next_rq = bfqq->next_rq;
++ unsigned long new_budget;
++
++ if (!next_rq)
++ return;
++
++ if (bfqq == bfqd->in_service_queue)
++ /*
++ * In order not to break guarantees, budgets cannot be
++ * changed after an entity has been selected.
++ */
++ return;
++
++ BUG_ON(entity->tree != &st->active);
++ BUG_ON(entity == entity->sched_data->in_service_entity);
++
++ new_budget = max_t(unsigned long, bfqq->max_budget,
++ bfq_serv_to_charge(next_rq, bfqq));
++ if (entity->budget != new_budget) {
++ entity->budget = new_budget;
++ bfq_log_bfqq(bfqd, bfqq, "updated next rq: new budget %lu",
++ new_budget);
++ bfq_activate_bfqq(bfqd, bfqq);
++ }
++}
++
++static unsigned int bfq_wr_duration(struct bfq_data *bfqd)
++{
++ u64 dur;
++
++ if (bfqd->bfq_wr_max_time > 0)
++ return bfqd->bfq_wr_max_time;
++
++ dur = bfqd->RT_prod;
++ do_div(dur, bfqd->peak_rate);
++
++ return dur;
++}
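++
++/*
++ * A quick example of the computation above: if no fixed maximum
++ * weight-raising time is configured and the estimated peak rate of the
++ * device is half the reference rate R of its speed class, the duration
++ * evaluates to (R / r) * T = 2 * T, i.e., twice the reference time.
++ */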
++
++/* Empty burst list and add just bfqq (see comments to bfq_handle_burst) */
++static void bfq_reset_burst_list(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ struct bfq_queue *item;
++ struct hlist_node *n;
++
++ hlist_for_each_entry_safe(item, n, &bfqd->burst_list, burst_list_node)
++ hlist_del_init(&item->burst_list_node);
++ hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
++ bfqd->burst_size = 1;
++}
++
++/* Add bfqq to the list of queues in current burst (see bfq_handle_burst) */
++static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ /* Increment burst size to take into account also bfqq */
++ bfqd->burst_size++;
++
++ if (bfqd->burst_size == bfqd->bfq_large_burst_thresh) {
++ struct bfq_queue *pos, *bfqq_item;
++ struct hlist_node *n;
++
++ /*
++ * Enough queues have been activated shortly after each
++ * other to consider this burst as large.
++ */
++ bfqd->large_burst = true;
++
++ /*
++ * We can now mark all queues in the burst list as
++ * belonging to a large burst.
++ */
++ hlist_for_each_entry(bfqq_item, &bfqd->burst_list,
++ burst_list_node)
++ bfq_mark_bfqq_in_large_burst(bfqq_item);
++ bfq_mark_bfqq_in_large_burst(bfqq);
++
++ /*
++ * From now on, and until the current burst finishes, any
++ * new queue being activated shortly after the last queue
++ * was inserted in the burst can be immediately marked as
++ * belonging to a large burst. So the burst list is not
++ * needed any more. Remove it.
++ */
++ hlist_for_each_entry_safe(pos, n, &bfqd->burst_list,
++ burst_list_node)
++ hlist_del_init(&pos->burst_list_node);
++ } else /* burst not yet large: add bfqq to the burst list */
++ hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
++}
++
++/*
++ * If many queues happen to become active shortly after each other, then,
++ * to help the processes associated to these queues get their job done as
++ * soon as possible, it is usually better to not grant either weight-raising
++ * or device idling to these queues. In this comment we describe, firstly,
++ * the reasons why this fact holds, and, secondly, the next function, which
++ * implements the main steps needed to properly mark these queues so that
++ * they can then be treated in a different way.
++ *
++ * As for the terminology, we say that a queue becomes active, i.e.,
++ * switches from idle to backlogged, either when it is created (as a
++ * consequence of the arrival of an I/O request), or, if already existing,
++ * when a new request for the queue arrives while the queue is idle.
++ * Bursts of activations, i.e., activations of different queues occurring
++ * shortly after each other, are typically caused by services or applications
++ * that spawn or reactivate many parallel threads/processes. Examples are
++ * systemd during boot or git grep.
++ *
++ * These services or applications benefit mostly from a high throughput:
++ * the quicker the requests of the activated queues are cumulatively served,
++ * the sooner the target job of these queues gets completed. As a consequence,
++ * weight-raising any of these queues, which also implies idling the device
++ * for it, is almost always counterproductive: in most cases it just lowers
++ * throughput.
++ *
++ * On the other hand, a burst of activations may be also caused by the start
++ * of an application that does not consist in a lot of parallel I/O-bound
++ * threads. In fact, with a complex application, the burst may be just a
++ * consequence of the fact that several processes need to be executed to
++ * start up the application. To start an application as quickly as possible,
++ * the best thing to do is to privilege the I/O related to the application
++ * with respect to all other I/O. Therefore, the best strategy to start an
++ * application that causes a burst of activations as quickly as possible is
++ * to weight-raise all the queues activated during the burst. This is the
++ * exact opposite of the best strategy for the other type of bursts.
++ *
++ * In the end, to take the best action for each of the two cases, the two
++ * types of bursts need to be distinguished. Fortunately, this seems
++ * relatively easy to do, by looking at the sizes of the bursts. In
++ * particular, we found a threshold such that bursts with a larger size
++ * than that threshold are apparently caused only by services or commands
++ * such as systemd or git grep. For brevity, hereafter we call just 'large'
++ * these bursts. BFQ *does not* weight-raise queues whose activations occur
++ * in a large burst. In addition, for each of these queues BFQ performs or
++ * does not perform idling depending on which choice boosts the throughput
++ * most. The exact choice depends on the device and request pattern at
++ * hand.
++ *
++ * Turning back to the next function, it implements all the steps needed
++ * to detect the occurrence of a large burst and to properly mark all the
++ * queues belonging to it (so that they can then be treated in a different
++ * way). This goal is achieved by maintaining a special "burst list" that
++ * holds, temporarily, the queues that belong to the burst in progress. The
++ * list is then used to mark these queues as belonging to a large burst if
++ * the burst does become large. The main steps are the following.
++ *
++ * . when the very first queue is activated, the queue is inserted into the
++ * list (as it could be the first queue in a possible burst)
++ *
++ * . if the current burst has not yet become large, and a queue Q that does
++ * not yet belong to the burst is activated shortly after the last time
++ * at which a new queue entered the burst list, then the function appends
++ * Q to the burst list
++ *
++ * . if, as a consequence of the previous step, the burst size reaches
++ * the large-burst threshold, then
++ *
++ * . all the queues in the burst list are marked as belonging to a
++ * large burst
++ *
++ * . the burst list is deleted; in fact, the burst list already served
++ * its purpose (keeping temporarily track of the queues in a burst,
++ * so as to be able to mark them as belonging to a large burst in the
++ * previous sub-step), and now is not needed any more
++ *
++ * . the device enters a large-burst mode
++ *
++ * . if a queue Q that does not belong to the burst is activated while
++ * the device is in large-burst mode and shortly after the last time
++ * at which a queue either entered the burst list or was marked as
++ * belonging to the current large burst, then Q is immediately marked
++ * as belonging to a large burst.
++ *
++ * . if a queue Q that does not belong to the burst is activated a while
++ *   later, i.e., not shortly after, the last time at which a queue
++ * either entered the burst list or was marked as belonging to the
++ * current large burst, then the current burst is deemed as finished and:
++ *
++ * . the large-burst mode is reset if set
++ *
++ * . the burst list is emptied
++ *
++ * . Q is inserted in the burst list, as Q may be the first queue
++ * in a possible new burst (then the burst list contains just Q
++ * after this step).
++ */
++static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ bool idle_for_long_time)
++{
++ /*
++ * If bfqq happened to be activated in a burst, but has been idle
++ * for at least as long as an interactive queue, then we assume
++ * that, in the overall I/O initiated in the burst, the I/O
++ * associated to bfqq is finished. So bfqq does not need to be
++ * treated as a queue belonging to a burst anymore. Accordingly,
++ * we reset bfqq's in_large_burst flag if set, and remove bfqq
++ * from the burst list if it's there. We do not decrement instead
++ * burst_size, because the fact that bfqq does not need to belong
++ * to the burst list any more does not invalidate the fact that
++ * bfqq may have been activated during the current burst.
++ */
++ if (idle_for_long_time) {
++ hlist_del_init(&bfqq->burst_list_node);
++ bfq_clear_bfqq_in_large_burst(bfqq);
++ }
++
++ /*
++ * If bfqq is already in the burst list or is part of a large
++ * burst, then there is nothing else to do.
++ */
++ if (!hlist_unhashed(&bfqq->burst_list_node) ||
++ bfq_bfqq_in_large_burst(bfqq))
++ return;
++
++ /*
++ * If bfqq's activation happens late enough, then the current
++ * burst is finished, and related data structures must be reset.
++ *
++ * In this respect, consider the special case where bfqq is the very
++ * first queue being activated. In this case, last_ins_in_burst is
++ * not yet significant when we get here. But it is easy to verify
++ * that, whether or not the following condition is true, bfqq will
++ * end up being inserted into the burst list. In particular the
++ * list will happen to contain only bfqq. And this is exactly what
++ * has to happen, as bfqq may be the first queue in a possible
++ * burst.
++ */
++ if (time_is_before_jiffies(bfqd->last_ins_in_burst +
++ bfqd->bfq_burst_interval)) {
++ bfqd->large_burst = false;
++ bfq_reset_burst_list(bfqd, bfqq);
++ return;
++ }
++
++ /*
++ * If we get here, then bfqq is being activated shortly after the
++ * last queue. So, if the current burst is also large, we can mark
++ * bfqq as belonging to this large burst immediately.
++ */
++ if (bfqd->large_burst) {
++ bfq_mark_bfqq_in_large_burst(bfqq);
++ return;
++ }
++
++ /*
++ * If we get here, then a large-burst state has not yet been
++ * reached, but bfqq is being activated shortly after the last
++ * queue. Then we add bfqq to the burst.
++ */
++ bfq_add_to_burst(bfqd, bfqq);
++}
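++
++/*
++ * Concrete instance of the steps above: during boot, systemd may
++ * activate many queues within bfq_burst_interval of one another; once
++ * burst_size reaches bfq_large_burst_thresh, every queue in the burst
++ * list (and any queue activated shortly afterwards) is marked with
++ * in_large_burst, and is therefore denied weight raising.
++ */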
++
++static void bfq_add_request(struct request *rq)
++{
++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
++ struct bfq_entity *entity = &bfqq->entity;
++ struct bfq_data *bfqd = bfqq->bfqd;
++ struct request *next_rq, *prev;
++ unsigned long old_wr_coeff = bfqq->wr_coeff;
++ bool interactive = false;
++
++ bfq_log_bfqq(bfqd, bfqq, "add_request %d", rq_is_sync(rq));
++ bfqq->queued[rq_is_sync(rq)]++;
++ bfqd->queued++;
++
++ elv_rb_add(&bfqq->sort_list, rq);
++
++ /*
++ * Check if this request is a better next-serve candidate.
++ */
++ prev = bfqq->next_rq;
++ next_rq = bfq_choose_req(bfqd, bfqq->next_rq, rq, bfqd->last_position);
++ BUG_ON(!next_rq);
++ bfqq->next_rq = next_rq;
++
++ if (!bfq_bfqq_busy(bfqq)) {
++ bool soft_rt, in_burst,
++ idle_for_long_time = time_is_before_jiffies(
++ bfqq->budget_timeout +
++ bfqd->bfq_wr_min_idle_time);
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ bfqg_stats_update_io_add(bfqq_group(RQ_BFQQ(rq)), bfqq,
++ rq->cmd_flags);
++#endif
++ if (bfq_bfqq_sync(bfqq)) {
++ bool already_in_burst =
++ !hlist_unhashed(&bfqq->burst_list_node) ||
++ bfq_bfqq_in_large_burst(bfqq);
++ bfq_handle_burst(bfqd, bfqq, idle_for_long_time);
++ /*
++ * If bfqq was not already in the current burst,
++ * then, at this point, bfqq either has been
++ * added to the current burst or has caused the
++ * current burst to terminate. In particular, in
++ * the second case, bfqq has become the first
++ * queue in a possible new burst.
++ * In both cases last_ins_in_burst needs to be
++ * moved forward.
++ */
++ if (!already_in_burst)
++ bfqd->last_ins_in_burst = jiffies;
++ }
++
++ in_burst = bfq_bfqq_in_large_burst(bfqq);
++ soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
++ !in_burst &&
++ time_is_before_jiffies(bfqq->soft_rt_next_start);
++ interactive = !in_burst && idle_for_long_time;
++ entity->budget = max_t(unsigned long, bfqq->max_budget,
++ bfq_serv_to_charge(next_rq, bfqq));
++
++ if (!bfq_bfqq_IO_bound(bfqq)) {
++ if (time_before(jiffies,
++ RQ_BIC(rq)->ttime.last_end_request +
++ bfqd->bfq_slice_idle)) {
++ bfqq->requests_within_timer++;
++ if (bfqq->requests_within_timer >=
++ bfqd->bfq_requests_within_timer)
++ bfq_mark_bfqq_IO_bound(bfqq);
++ } else
++ bfqq->requests_within_timer = 0;
++ }
++
++ if (!bfqd->low_latency)
++ goto add_bfqq_busy;
++
++ /*
++ * If the queue:
++ * - is not being boosted,
++ * - has been idle for enough time,
++ * - is not a sync queue or is linked to a bfq_io_cq (it is
++		 *   shared by its nature, or it is not shared and its
++ * requests have not been redirected to a shared queue)
++ * start a weight-raising period.
++ */
++ if (old_wr_coeff == 1 && (interactive || soft_rt) &&
++ (!bfq_bfqq_sync(bfqq) || bfqq->bic)) {
++ bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++ if (interactive)
++ bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++ else
++ bfqq->wr_cur_max_time =
++ bfqd->bfq_wr_rt_max_time;
++ bfq_log_bfqq(bfqd, bfqq,
++ "wrais starting at %lu, rais_max_time %u",
++ jiffies,
++ jiffies_to_msecs(bfqq->wr_cur_max_time));
++ } else if (old_wr_coeff > 1) {
++ if (interactive)
++ bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++ else if (in_burst ||
++ (bfqq->wr_cur_max_time ==
++ bfqd->bfq_wr_rt_max_time &&
++ !soft_rt)) {
++ bfqq->wr_coeff = 1;
++ bfq_log_bfqq(bfqd, bfqq,
++ "wrais ending at %lu, rais_max_time %u",
++ jiffies,
++ jiffies_to_msecs(bfqq->
++ wr_cur_max_time));
++ } else if (time_before(
++ bfqq->last_wr_start_finish +
++ bfqq->wr_cur_max_time,
++ jiffies +
++ bfqd->bfq_wr_rt_max_time) &&
++ soft_rt) {
++ /*
++ *
++ * The remaining weight-raising time is lower
++ * than bfqd->bfq_wr_rt_max_time, which means
++ * that the application is enjoying weight
++ * raising either because deemed soft-rt in
++ * the near past, or because deemed interactive
++			 * long ago.
++ * In both cases, resetting now the current
++ * remaining weight-raising time for the
++ * application to the weight-raising duration
++ * for soft rt applications would not cause any
++ * latency increase for the application (as the
++ * new duration would be higher than the
++ * remaining time).
++ *
++ * In addition, the application is now meeting
++ * the requirements for being deemed soft rt.
++ * In the end we can correctly and safely
++ * (re)charge the weight-raising duration for
++ * the application with the weight-raising
++ * duration for soft rt applications.
++ *
++ * In particular, doing this recharge now, i.e.,
++ * before the weight-raising period for the
++ * application finishes, reduces the probability
++ * of the following negative scenario:
++ * 1) the weight of a soft rt application is
++ * raised at startup (as for any newly
++ * created application),
++ * 2) since the application is not interactive,
++ * at a certain time weight-raising is
++ * stopped for the application,
++ * 3) at that time the application happens to
++ * still have pending requests, and hence
++ * is destined to not have a chance to be
++ * deemed soft rt before these requests are
++ * completed (see the comments to the
++ * function bfq_bfqq_softrt_next_start()
++ * for details on soft rt detection),
++ * 4) these pending requests experience a high
++ * latency because the application is not
++ * weight-raised while they are pending.
++ */
++ bfqq->last_wr_start_finish = jiffies;
++ bfqq->wr_cur_max_time =
++ bfqd->bfq_wr_rt_max_time;
++ }
++ }
++ if (old_wr_coeff != bfqq->wr_coeff)
++ entity->prio_changed = 1;
++add_bfqq_busy:
++ bfqq->last_idle_bklogged = jiffies;
++ bfqq->service_from_backlogged = 0;
++ bfq_clear_bfqq_softrt_update(bfqq);
++ bfq_add_bfqq_busy(bfqd, bfqq);
++ } else {
++ if (bfqd->low_latency && old_wr_coeff == 1 && !rq_is_sync(rq) &&
++ time_is_before_jiffies(
++ bfqq->last_wr_start_finish +
++ bfqd->bfq_wr_min_inter_arr_async)) {
++ bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++ bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++
++ bfqd->wr_busy_queues++;
++ entity->prio_changed = 1;
++ bfq_log_bfqq(bfqd, bfqq,
++ "non-idle wrais starting at %lu, rais_max_time %u",
++ jiffies,
++ jiffies_to_msecs(bfqq->wr_cur_max_time));
++ }
++ if (prev != bfqq->next_rq)
++ bfq_updated_next_req(bfqd, bfqq);
++ }
++
++ if (bfqd->low_latency &&
++ (old_wr_coeff == 1 || bfqq->wr_coeff == 1 || interactive))
++ bfqq->last_wr_start_finish = jiffies;
++}
++
++static struct request *bfq_find_rq_fmerge(struct bfq_data *bfqd,
++ struct bio *bio)
++{
++ struct task_struct *tsk = current;
++ struct bfq_io_cq *bic;
++ struct bfq_queue *bfqq;
++
++ bic = bfq_bic_lookup(bfqd, tsk->io_context);
++ if (!bic)
++ return NULL;
++
++ bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++ if (bfqq)
++ return elv_rb_find(&bfqq->sort_list, bio_end_sector(bio));
++
++ return NULL;
++}
++
++static void bfq_activate_request(struct request_queue *q, struct request *rq)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++
++ bfqd->rq_in_driver++;
++ bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
++ bfq_log(bfqd, "activate_request: new bfqd->last_position %llu",
++ (unsigned long long) bfqd->last_position);
++}
++
++static void bfq_deactivate_request(struct request_queue *q, struct request *rq)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++
++ BUG_ON(bfqd->rq_in_driver == 0);
++ bfqd->rq_in_driver--;
++}
++
++static void bfq_remove_request(struct request *rq)
++{
++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
++ struct bfq_data *bfqd = bfqq->bfqd;
++ const int sync = rq_is_sync(rq);
++
++ if (bfqq->next_rq == rq) {
++ bfqq->next_rq = bfq_find_next_rq(bfqd, bfqq, rq);
++ bfq_updated_next_req(bfqd, bfqq);
++ }
++
++ if (rq->queuelist.prev != &rq->queuelist)
++ list_del_init(&rq->queuelist);
++ BUG_ON(bfqq->queued[sync] == 0);
++ bfqq->queued[sync]--;
++ bfqd->queued--;
++ elv_rb_del(&bfqq->sort_list, rq);
++
++ if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
++ if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->in_service_queue)
++ bfq_del_bfqq_busy(bfqd, bfqq, 1);
++ /*
++ * Remove queue from request-position tree as it is empty.
++ */
++ if (bfqq->pos_root) {
++ rb_erase(&bfqq->pos_node, bfqq->pos_root);
++ bfqq->pos_root = NULL;
++ }
++ }
++
++ if (rq->cmd_flags & REQ_META) {
++ BUG_ON(bfqq->meta_pending == 0);
++ bfqq->meta_pending--;
++ }
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ bfqg_stats_update_io_remove(bfqq_group(bfqq), rq->cmd_flags);
++#endif
++}
++
++static int bfq_merge(struct request_queue *q, struct request **req,
++ struct bio *bio)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct request *__rq;
++
++ __rq = bfq_find_rq_fmerge(bfqd, bio);
++ if (__rq && elv_rq_merge_ok(__rq, bio)) {
++ *req = __rq;
++ return ELEVATOR_FRONT_MERGE;
++ }
++
++ return ELEVATOR_NO_MERGE;
++}
++
++static void bfq_merged_request(struct request_queue *q, struct request *req,
++ int type)
++{
++ if (type == ELEVATOR_FRONT_MERGE &&
++ rb_prev(&req->rb_node) &&
++ blk_rq_pos(req) <
++ blk_rq_pos(container_of(rb_prev(&req->rb_node),
++ struct request, rb_node))) {
++ struct bfq_queue *bfqq = RQ_BFQQ(req);
++ struct bfq_data *bfqd = bfqq->bfqd;
++ struct request *prev, *next_rq;
++
++ /* Reposition request in its sort_list */
++ elv_rb_del(&bfqq->sort_list, req);
++ elv_rb_add(&bfqq->sort_list, req);
++ /* Choose next request to be served for bfqq */
++ prev = bfqq->next_rq;
++ next_rq = bfq_choose_req(bfqd, bfqq->next_rq, req,
++ bfqd->last_position);
++ BUG_ON(!next_rq);
++ bfqq->next_rq = next_rq;
++ }
++}
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++static void bfq_bio_merged(struct request_queue *q, struct request *req,
++ struct bio *bio)
++{
++ bfqg_stats_update_io_merged(bfqq_group(RQ_BFQQ(req)), bio->bi_rw);
++}
++#endif
++
++static void bfq_merged_requests(struct request_queue *q, struct request *rq,
++ struct request *next)
++{
++ struct bfq_queue *bfqq = RQ_BFQQ(rq), *next_bfqq = RQ_BFQQ(next);
++
++ /*
++ * If next and rq belong to the same bfq_queue and next is older
++ * than rq, then reposition rq in the fifo (by substituting next
++ * with rq). Otherwise, if next and rq belong to different
++ * bfq_queues, never reposition rq: in fact, we would have to
++ * reposition it with respect to next's position in its own fifo,
++ * which would most certainly be too expensive with respect to
++ * the benefits.
++ */
++ if (bfqq == next_bfqq &&
++ !list_empty(&rq->queuelist) && !list_empty(&next->queuelist) &&
++ time_before(next->fifo_time, rq->fifo_time)) {
++ list_del_init(&rq->queuelist);
++ list_replace_init(&next->queuelist, &rq->queuelist);
++ rq->fifo_time = next->fifo_time;
++ }
++
++ if (bfqq->next_rq == next)
++ bfqq->next_rq = rq;
++
++ bfq_remove_request(next);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ bfqg_stats_update_io_merged(bfqq_group(bfqq), next->cmd_flags);
++#endif
++}
++
++/* Must be called with bfqq != NULL */
++static void bfq_bfqq_end_wr(struct bfq_queue *bfqq)
++{
++ BUG_ON(!bfqq);
++ if (bfq_bfqq_busy(bfqq))
++ bfqq->bfqd->wr_busy_queues--;
++ bfqq->wr_coeff = 1;
++ bfqq->wr_cur_max_time = 0;
++ /* Trigger a weight change on the next activation of the queue */
++ bfqq->entity.prio_changed = 1;
++}
++
++static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
++ struct bfq_group *bfqg)
++{
++ int i, j;
++
++ for (i = 0; i < 2; i++)
++ for (j = 0; j < IOPRIO_BE_NR; j++)
++ if (bfqg->async_bfqq[i][j])
++ bfq_bfqq_end_wr(bfqg->async_bfqq[i][j]);
++ if (bfqg->async_idle_bfqq)
++ bfq_bfqq_end_wr(bfqg->async_idle_bfqq);
++}
++
++static void bfq_end_wr(struct bfq_data *bfqd)
++{
++ struct bfq_queue *bfqq;
++
++ spin_lock_irq(bfqd->queue->queue_lock);
++
++ list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list)
++ bfq_bfqq_end_wr(bfqq);
++ list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list)
++ bfq_bfqq_end_wr(bfqq);
++ bfq_end_wr_async(bfqd);
++
++ spin_unlock_irq(bfqd->queue->queue_lock);
++}
++
++static int bfq_allow_merge(struct request_queue *q, struct request *rq,
++ struct bio *bio)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct bfq_io_cq *bic;
++
++ /*
++ * Disallow merge of a sync bio into an async request.
++ */
++ if (bfq_bio_sync(bio) && !rq_is_sync(rq))
++ return 0;
++
++ /*
++ * Lookup the bfqq that this bio will be queued with. Allow
++ * merge only if rq is queued there.
++ * Queue lock is held here.
++ */
++ bic = bfq_bic_lookup(bfqd, current->io_context);
++ if (!bic)
++ return 0;
++
++ return bic_to_bfqq(bic, bfq_bio_sync(bio)) == RQ_BFQQ(rq);
++}
++
++static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
++ if (bfqq) {
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ bfqg_stats_update_avg_queue_size(bfqq_group(bfqq));
++#endif
++ bfq_mark_bfqq_must_alloc(bfqq);
++ bfq_mark_bfqq_budget_new(bfqq);
++ bfq_clear_bfqq_fifo_expire(bfqq);
++
++ bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "set_in_service_queue, cur-budget = %d",
++ bfqq->entity.budget);
++ }
++
++ bfqd->in_service_queue = bfqq;
++}
++
++/*
++ * Get and set a new queue for service.
++ */
++static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd)
++{
++ struct bfq_queue *bfqq = bfq_get_next_queue(bfqd);
++
++ __bfq_set_in_service_queue(bfqd, bfqq);
++ return bfqq;
++}
++
++/*
++ * If enough samples have been computed, return the current max budget
++ * stored in bfqd, which is dynamically updated according to the
++ * estimated disk peak rate; otherwise return the default max budget
++ */
++static int bfq_max_budget(struct bfq_data *bfqd)
++{
++ if (bfqd->budgets_assigned < bfq_stats_min_budgets)
++ return bfq_default_max_budget;
++ else
++ return bfqd->bfq_max_budget;
++}
++
++/*
++ * Return min budget, which is a fraction of the current or default
++ * max budget (trying with 1/32)
++ */
++static int bfq_min_budget(struct bfq_data *bfqd)
++{
++ if (bfqd->budgets_assigned < bfq_stats_min_budgets)
++ return bfq_default_max_budget / 32;
++ else
++ return bfqd->bfq_max_budget / 32;
++}
++
++static void bfq_arm_slice_timer(struct bfq_data *bfqd)
++{
++ struct bfq_queue *bfqq = bfqd->in_service_queue;
++ struct bfq_io_cq *bic;
++ unsigned long sl;
++
++ BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
++
++ /* Processes have exited, don't wait. */
++ bic = bfqd->in_service_bic;
++ if (!bic || atomic_read(&bic->icq.ioc->active_ref) == 0)
++ return;
++
++ bfq_mark_bfqq_wait_request(bfqq);
++
++ /*
++ * We don't want to idle for seeks, but we do want to allow
++ * fair distribution of slice time for a process doing back-to-back
++	 * seeks. So allow a little bit of time for it to submit a new rq.
++ *
++ * To prevent processes with (partly) seeky workloads from
++ * being too ill-treated, grant them a small fraction of the
++ * assigned budget before reducing the waiting time to
++ * BFQ_MIN_TT. This happened to help reduce latency.
++ */
++ sl = bfqd->bfq_slice_idle;
++ /*
++ * Unless the queue is being weight-raised or the scenario is
++ * asymmetric, grant only minimum idle time if the queue either
++ * has been seeky for long enough or has already proved to be
++ * constantly seeky.
++ */
++ if (bfq_sample_valid(bfqq->seek_samples) &&
++ ((BFQQ_SEEKY(bfqq) && bfqq->entity.service >
++ bfq_max_budget(bfqq->bfqd) / 8) ||
++ bfq_bfqq_constantly_seeky(bfqq)) && bfqq->wr_coeff == 1 &&
++ bfq_symmetric_scenario(bfqd))
++ sl = min(sl, msecs_to_jiffies(BFQ_MIN_TT));
++ else if (bfqq->wr_coeff > 1)
++ sl = sl * 3;
++ bfqd->last_idling_start = ktime_get();
++ mod_timer(&bfqd->idle_slice_timer, jiffies + sl);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ bfqg_stats_set_start_idle_time(bfqq_group(bfqq));
++#endif
++ bfq_log(bfqd, "arm idle: %u/%u ms",
++ jiffies_to_msecs(sl), jiffies_to_msecs(bfqd->bfq_slice_idle));
++}
++
++/*
++ * Set the maximum time for the in-service queue to consume its
++ * budget. This prevents seeky processes from lowering the disk
++ * throughput (always guaranteed with a time slice scheme as in CFQ).
++ */
++static void bfq_set_budget_timeout(struct bfq_data *bfqd)
++{
++ struct bfq_queue *bfqq = bfqd->in_service_queue;
++ unsigned int timeout_coeff;
++
++ if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time)
++ timeout_coeff = 1;
++ else
++ timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight;
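++	/*
++	 * Illustrative note: entity.weight / entity.orig_weight is the
++	 * current weight-raising factor, so, e.g., a queue whose weight is
++	 * being raised by a factor of 10 is granted ten times the base
++	 * bfqd->bfq_timeout[] value below, while a queue whose
++	 * wr_cur_max_time equals bfqd->bfq_wr_rt_max_time keeps the base
++	 * timeout (coefficient 1).
++	 */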
++
++ bfqd->last_budget_start = ktime_get();
++
++ bfq_clear_bfqq_budget_new(bfqq);
++ bfqq->budget_timeout = jiffies +
++ bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] * timeout_coeff;
++
++ bfq_log_bfqq(bfqd, bfqq, "set budget_timeout %u",
++ jiffies_to_msecs(bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] *
++ timeout_coeff));
++}
++
++/*
++ * Move request from internal lists to the request queue dispatch list.
++ */
++static void bfq_dispatch_insert(struct request_queue *q, struct request *rq)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++ /*
++	 * For consistency, the next instruction should have been executed
++	 * after removing the request from the queue and dispatching it.
++	 * We execute it before bfq_remove_request() instead (and hence
++	 * introduce a temporary inconsistency), for efficiency. In fact,
++	 * during a forced_dispatch this prevents two counters related to
++	 * bfqq->dispatched from being uselessly decremented if bfqq is
++	 * not in service, and then incremented again after incrementing
++	 * bfqq->dispatched.
++ */
++ bfqq->dispatched++;
++ bfq_remove_request(rq);
++ elv_dispatch_sort(q, rq);
++
++ if (bfq_bfqq_sync(bfqq))
++ bfqd->sync_flight++;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ bfqg_stats_update_dispatch(bfqq_group(bfqq), blk_rq_bytes(rq),
++ rq->cmd_flags);
++#endif
++}
++
++/*
++ * Return expired entry, or NULL to just start from scratch in rbtree.
++ */
++static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
++{
++ struct request *rq = NULL;
++
++ if (bfq_bfqq_fifo_expire(bfqq))
++ return NULL;
++
++ bfq_mark_bfqq_fifo_expire(bfqq);
++
++ if (list_empty(&bfqq->fifo))
++ return NULL;
++
++ rq = rq_entry_fifo(bfqq->fifo.next);
++
++ if (time_before(jiffies, rq->fifo_time))
++ return NULL;
++
++ return rq;
++}
++
++static int bfq_bfqq_budget_left(struct bfq_queue *bfqq)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++
++ return entity->budget - entity->service;
++}
++
++static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ BUG_ON(bfqq != bfqd->in_service_queue);
++
++ __bfq_bfqd_reset_in_service(bfqd);
++
++ if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
++ /*
++		 * Overload the budget_timeout field to store the time
++		 * at which the queue was left with no backlog; used by
++ * the weight-raising mechanism.
++ */
++ bfqq->budget_timeout = jiffies;
++ bfq_del_bfqq_busy(bfqd, bfqq, 1);
++ } else
++ bfq_activate_bfqq(bfqd, bfqq);
++}
++
++/**
++ * __bfq_bfqq_recalc_budget - try to adapt the budget to the @bfqq behavior.
++ * @bfqd: device data.
++ * @bfqq: queue to update.
++ * @reason: reason for expiration.
++ *
++ * Handle the feedback on @bfqq budget at queue expiration.
++ * See the body for detailed comments.
++ */
++static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ enum bfqq_expiration reason)
++{
++ struct request *next_rq;
++ int budget, min_budget;
++
++ budget = bfqq->max_budget;
++ min_budget = bfq_min_budget(bfqd);
++
++ BUG_ON(bfqq != bfqd->in_service_queue);
++
++ bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last budg %d, budg left %d",
++ bfqq->entity.budget, bfq_bfqq_budget_left(bfqq));
++ bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last max_budg %d, min budg %d",
++ budget, bfq_min_budget(bfqd));
++ bfq_log_bfqq(bfqd, bfqq, "recalc_budg: sync %d, seeky %d",
++ bfq_bfqq_sync(bfqq), BFQQ_SEEKY(bfqd->in_service_queue));
++
++ if (bfq_bfqq_sync(bfqq)) {
++ switch (reason) {
++ /*
++ * Caveat: in all the following cases we trade latency
++ * for throughput.
++ */
++ case BFQ_BFQQ_TOO_IDLE:
++ /*
++ * This is the only case where we may reduce
++ * the budget: if there is no request of the
++ * process still waiting for completion, then
++ * we assume (tentatively) that the timer has
++ * expired because the batch of requests of
++ * the process could have been served with a
++			 * smaller budget. Hence, betting that the
++ * process will behave in the same way when it
++ * becomes backlogged again, we reduce its
++ * next budget. As long as we guess right,
++ * this budget cut reduces the latency
++ * experienced by the process.
++ *
++ * However, if there are still outstanding
++ * requests, then the process may have not yet
++ * issued its next request just because it is
++ * still waiting for the completion of some of
++ * the still outstanding ones. So in this
++ * subcase we do not reduce its budget, on the
++ * contrary we increase it to possibly boost
++ * the throughput, as discussed in the
++ * comments to the BUDGET_TIMEOUT case.
++ */
++ if (bfqq->dispatched > 0) /* still outstanding reqs */
++ budget = min(budget * 2, bfqd->bfq_max_budget);
++ else {
++ if (budget > 5 * min_budget)
++ budget -= 4 * min_budget;
++ else
++ budget = min_budget;
++ }
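++			/*
++			 * Illustrative note (hypothetical figures): with
++			 * min_budget = max_budget / 32, a queue sitting at
++			 * 8 * min_budget that timed out with no requests in
++			 * flight drops to 4 * min_budget, while one already
++			 * at or below 5 * min_budget falls straight to
++			 * min_budget.
++			 */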
++ break;
++ case BFQ_BFQQ_BUDGET_TIMEOUT:
++ /*
++ * We double the budget here because: 1) it
++ * gives the chance to boost the throughput if
++ * this is not a seeky process (which may have
++ * bumped into this timeout because of, e.g.,
++ * ZBR), 2) together with charge_full_budget
++ * it helps give seeky processes higher
++ * timestamps, and hence be served less
++ * frequently.
++ */
++ budget = min(budget * 2, bfqd->bfq_max_budget);
++ break;
++ case BFQ_BFQQ_BUDGET_EXHAUSTED:
++ /*
++ * The process still has backlog, and did not
++ * let either the budget timeout or the disk
++ * idling timeout expire. Hence it is not
++ * seeky, has a short thinktime and may be
++ * happy with a higher budget too. So
++ * definitely increase the budget of this good
++ * candidate to boost the disk throughput.
++ */
++ budget = min(budget * 4, bfqd->bfq_max_budget);
++ break;
++ case BFQ_BFQQ_NO_MORE_REQUESTS:
++ /*
++ * Leave the budget unchanged.
++ */
++ default:
++ return;
++ }
++ } else
++ /*
++		 * Async queues always get the maximum possible budget
++ * (their ability to dispatch is limited by
++ * @bfqd->bfq_max_budget_async_rq).
++ */
++ budget = bfqd->bfq_max_budget;
++
++ bfqq->max_budget = budget;
++
++ if (bfqd->budgets_assigned >= bfq_stats_min_budgets &&
++ !bfqd->bfq_user_max_budget)
++ bfqq->max_budget = min(bfqq->max_budget, bfqd->bfq_max_budget);
++
++ /*
++ * Make sure that we have enough budget for the next request.
++ * Since the finish time of the bfqq must be kept in sync with
++ * the budget, be sure to call __bfq_bfqq_expire() after the
++ * update.
++ */
++ next_rq = bfqq->next_rq;
++ if (next_rq)
++ bfqq->entity.budget = max_t(unsigned long, bfqq->max_budget,
++ bfq_serv_to_charge(next_rq, bfqq));
++ else
++ bfqq->entity.budget = bfqq->max_budget;
++
++ bfq_log_bfqq(bfqd, bfqq, "head sect: %u, new budget %d",
++ next_rq ? blk_rq_sectors(next_rq) : 0,
++ bfqq->entity.budget);
++}
++
++static unsigned long bfq_calc_max_budget(u64 peak_rate, u64 timeout)
++{
++ unsigned long max_budget;
++
++ /*
++ * The max_budget calculated when autotuning is equal to the
++	 * number of sectors transferred in timeout_sync at the
++ * estimated peak rate.
++ */
++ max_budget = (unsigned long)(peak_rate * 1000 *
++ timeout >> BFQ_RATE_SHIFT);
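++	/*
++	 * Worked example (hypothetical figures): with a peak rate
++	 * equivalent to ~0.2 sectors/usec (about 100 MB/s with 512-byte
++	 * sectors) and a 125 ms sync timeout, the formula above yields
++	 * roughly 0.2 * 1000 * 125 = 25000 sectors, i.e. a max budget of
++	 * about 12.8 MB per budget timeout.
++	 */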
++
++ return max_budget;
++}
++
++/*
++ * In addition to updating the peak rate, checks whether the process
++ * is "slow", and returns true if so. This slow flag is used, in addition
++ * to the budget timeout, to reduce the amount of service provided to
++ * seeky processes, and hence reduce their chances to lower the
++ * throughput. See the code for more details.
++ */
++static bool bfq_update_peak_rate(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ bool compensate, enum bfqq_expiration reason)
++{
++ u64 bw, usecs, expected, timeout;
++ ktime_t delta;
++ int update = 0;
++
++ if (!bfq_bfqq_sync(bfqq) || bfq_bfqq_budget_new(bfqq))
++ return false;
++
++ if (compensate)
++ delta = bfqd->last_idling_start;
++ else
++ delta = ktime_get();
++ delta = ktime_sub(delta, bfqd->last_budget_start);
++ usecs = ktime_to_us(delta);
++
++ /* Don't trust short/unrealistic values. */
++ if (usecs < 100 || usecs >= LONG_MAX)
++ return false;
++
++ /*
++	 * Calculate the bandwidth for the last slice. We use a 64-bit
++	 * value to store the peak rate, in sectors per usec and in
++	 * fixed-point math. We do so to have enough precision in the estimate
++ * and to avoid overflows.
++ */
++ bw = (u64)bfqq->entity.service << BFQ_RATE_SHIFT;
++ do_div(bw, (unsigned long)usecs);
++
++ timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
++
++ /*
++ * Use only long (> 20ms) intervals to filter out spikes for
++ * the peak rate estimation.
++ */
++ if (usecs > 20000) {
++ if (bw > bfqd->peak_rate ||
++ (!BFQQ_SEEKY(bfqq) &&
++ reason == BFQ_BFQQ_BUDGET_TIMEOUT)) {
++ bfq_log(bfqd, "measured bw =%llu", bw);
++ /*
++ * To smooth oscillations use a low-pass filter with
++ * alpha=7/8, i.e.,
++ * new_rate = (7/8) * old_rate + (1/8) * bw
++ */
++ do_div(bw, 8);
++ if (bw == 0)
++				return false;
++ bfqd->peak_rate *= 7;
++ do_div(bfqd->peak_rate, 8);
++ bfqd->peak_rate += bw;
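++			/*
++			 * Worked example (made-up values): old peak_rate 800,
++			 * measured bw 1600 -> new peak_rate =
++			 * 800 * 7/8 + 1600/8 = 700 + 200 = 900, so a single
++			 * fast sample moves the estimate only 1/8 of the way
++			 * toward the new measurement.
++			 */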
++ update = 1;
++ bfq_log(bfqd, "new peak_rate=%llu", bfqd->peak_rate);
++ }
++
++ update |= bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES - 1;
++
++ if (bfqd->peak_rate_samples < BFQ_PEAK_RATE_SAMPLES)
++ bfqd->peak_rate_samples++;
++
++ if (bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES &&
++ update) {
++ int dev_type = blk_queue_nonrot(bfqd->queue);
++
++ if (bfqd->bfq_user_max_budget == 0) {
++ bfqd->bfq_max_budget =
++ bfq_calc_max_budget(bfqd->peak_rate,
++ timeout);
++ bfq_log(bfqd, "new max_budget=%d",
++ bfqd->bfq_max_budget);
++ }
++ if (bfqd->device_speed == BFQ_BFQD_FAST &&
++ bfqd->peak_rate < device_speed_thresh[dev_type]) {
++ bfqd->device_speed = BFQ_BFQD_SLOW;
++ bfqd->RT_prod = R_slow[dev_type] *
++ T_slow[dev_type];
++ } else if (bfqd->device_speed == BFQ_BFQD_SLOW &&
++ bfqd->peak_rate > device_speed_thresh[dev_type]) {
++ bfqd->device_speed = BFQ_BFQD_FAST;
++ bfqd->RT_prod = R_fast[dev_type] *
++ T_fast[dev_type];
++ }
++ }
++ }
++
++ /*
++ * If the process has been served for a too short time
++	 * interval to let its possible sequential accesses prevail over
++	 * the initial seek time needed to move the disk head to the
++	 * first sector it requested, then give the process a chance
++ * and for the moment return false.
++ */
++ if (bfqq->entity.budget <= bfq_max_budget(bfqd) / 8)
++ return false;
++
++ /*
++ * A process is considered ``slow'' (i.e., seeky, so that we
++ * cannot treat it fairly in the service domain, as it would
++ * slow down the other processes too much) if, when a slice
++ * ends for whatever reason, it has received service at a
++ * rate that would not be high enough to complete the budget
++ * before the budget timeout expiration.
++ */
++ expected = bw * 1000 * timeout >> BFQ_RATE_SHIFT;
++
++ /*
++ * Caveat: processes doing IO in the slower disk zones will
++ * tend to be slow(er) even if not seeky. And the estimated
++ * peak rate will actually be an average over the disk
++ * surface. Hence, to not be too harsh with unlucky processes,
++ * we keep a budget/3 margin of safety before declaring a
++ * process slow.
++ */
++ return expected > (4 * bfqq->entity.budget) / 3;
++}
++
++/*
++ * To be deemed as soft real-time, an application must meet two
++ * requirements. First, the application must not require an average
++ * bandwidth higher than the approximate bandwidth required to play back or
++ * record a compressed high-definition video.
++ * The next function is invoked on the completion of the last request of a
++ * batch, to compute the next-start time instant, soft_rt_next_start, such
++ * that, if the next request of the application does not arrive before
++ * soft_rt_next_start, then the above requirement on the bandwidth is met.
++ *
++ * The second requirement is that the request pattern of the application is
++ * isochronous, i.e., that, after issuing a request or a batch of requests,
++ * the application stops issuing new requests until all its pending requests
++ * have been completed. After that, the application may issue a new batch,
++ * and so on.
++ * For this reason the next function is invoked to compute
++ * soft_rt_next_start only for applications that meet this requirement,
++ * whereas soft_rt_next_start is set to infinity for applications that do
++ * not.
++ *
++ * Unfortunately, even a greedy application may happen to behave in an
++ * isochronous way if the CPU load is high. In fact, the application may
++ * stop issuing requests while the CPUs are busy serving other processes,
++ * then restart, then stop again for a while, and so on. In addition, if
++ * the disk achieves a low enough throughput with the request pattern
++ * issued by the application (e.g., because the request pattern is random
++ * and/or the device is slow), then the application may meet the above
++ * bandwidth requirement too. To prevent such a greedy application from
++ * being deemed soft real-time, a further rule is used in the computation of
++ * soft_rt_next_start: soft_rt_next_start must be higher than the current
++ * time plus the maximum time for which the arrival of a request is waited
++ * for when a sync queue becomes idle, namely bfqd->bfq_slice_idle.
++ * This filters out greedy applications, as the latter issue instead their
++ * next request as soon as possible after the last one has been completed
++ * (in contrast, when a batch of requests is completed, a soft real-time
++ * application spends some time processing data).
++ *
++ * Unfortunately, the last filter may easily generate false positives if
++ * only bfqd->bfq_slice_idle is used as a reference time interval and one
++ * or both the following cases occur:
++ * 1) HZ is so low that the duration of a jiffy is comparable to or higher
++ * than bfqd->bfq_slice_idle. This happens, e.g., on slow devices with
++ * HZ=100.
++ * 2) jiffies, instead of increasing at a constant rate, may stop increasing
++ * for a while, then suddenly 'jump' by several units to recover the lost
++ * increments. This seems to happen, e.g., inside virtual machines.
++ * To address this issue, we do not use as a reference time interval just
++ * bfqd->bfq_slice_idle, but bfqd->bfq_slice_idle plus a few jiffies. In
++ * particular we add the minimum number of jiffies for which the filter
++ * seems to be quite precise also in embedded systems and KVM/QEMU virtual
++ * machines.
++ */
++static unsigned long bfq_bfqq_softrt_next_start(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
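++	/*
++	 * Worked example (hypothetical figures): with HZ = 1000, 2000
++	 * sectors served since the queue last became backlogged and a
++	 * maximum soft real-time rate of 7000 sectors/s, the first term
++	 * below evaluates to last_idle_bklogged + ~285 jiffies; the max()
++	 * with the second term guarantees at least bfq_slice_idle + 4
++	 * jiffies from now, which implements the anti-greedy filter
++	 * described above.
++	 */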
++ return max(bfqq->last_idle_bklogged +
++ HZ * bfqq->service_from_backlogged /
++ bfqd->bfq_wr_max_softrt_rate,
++ jiffies + bfqq->bfqd->bfq_slice_idle + 4);
++}
++
++/*
++ * Return the largest-possible time instant such that, for as long as possible,
++ * the current time will be lower than this time instant according to the macro
++ * time_is_before_jiffies().
++ */
++static unsigned long bfq_infinity_from_now(unsigned long now)
++{
++ return now + ULONG_MAX / 2;
++}
++
++/**
++ * bfq_bfqq_expire - expire a queue.
++ * @bfqd: device owning the queue.
++ * @bfqq: the queue to expire.
++ * @compensate: if true, compensate for the time spent idling.
++ * @reason: the reason causing the expiration.
++ *
++ * If the process associated to the queue is slow (i.e., seeky), or in
++ * case of budget timeout, or, finally, if it is async, we
++ * artificially charge it an entire budget (independently of the
++ * actual service it received). As a consequence, the queue will get
++ * higher timestamps than the correct ones upon reactivation, and
++ * hence it will be rescheduled as if it had received more service
++ * than what it actually received. In the end, this class of processes
++ * will receive less service in proportion to how slowly they consume
++ * their budgets (and hence how seriously they tend to lower the
++ * throughput).
++ *
++ * In contrast, when a queue expires because it has been idling for
++ * too long or because it exhausted its budget, we do not touch the
++ * amount of service it has received. Hence when the queue will be
++ * reactivated and its timestamps updated, the latter will be in sync
++ * with the actual service received by the queue until expiration.
++ *
++ * Charging a full budget to the first type of queues and the exact
++ * service to the others has the effect of using the WF2Q+ policy to
++ * schedule the former on a timeslice basis, without violating the
++ * service domain guarantees of the latter.
++ */
++static void bfq_bfqq_expire(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ bool compensate,
++ enum bfqq_expiration reason)
++{
++ bool slow;
++
++ BUG_ON(bfqq != bfqd->in_service_queue);
++
++ /*
++ * Update disk peak rate for autotuning and check whether the
++ * process is slow (see bfq_update_peak_rate).
++ */
++ slow = bfq_update_peak_rate(bfqd, bfqq, compensate, reason);
++
++ /*
++	 * As explained above, 'punish' slow (i.e., seeky), timed-out
++ * and async queues, to favor sequential sync workloads.
++ *
++ * Processes doing I/O in the slower disk zones will tend to be
++ * slow(er) even if not seeky. Hence, since the estimated peak
++ * rate is actually an average over the disk surface, these
++ * processes may timeout just for bad luck. To avoid punishing
++ * them we do not charge a full budget to a process that
++ * succeeded in consuming at least 2/3 of its budget.
++ */
++ if (slow || (reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
++ bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3))
++ bfq_bfqq_charge_full_budget(bfqq);
++
++ bfqq->service_from_backlogged += bfqq->entity.service;
++
++ if (BFQQ_SEEKY(bfqq) && reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
++ !bfq_bfqq_constantly_seeky(bfqq)) {
++ bfq_mark_bfqq_constantly_seeky(bfqq);
++ if (!blk_queue_nonrot(bfqd->queue))
++ bfqd->const_seeky_busy_in_flight_queues++;
++ }
++
++ if (reason == BFQ_BFQQ_TOO_IDLE &&
++ bfqq->entity.service <= 2 * bfqq->entity.budget / 10)
++ bfq_clear_bfqq_IO_bound(bfqq);
++
++ if (bfqd->low_latency && bfqq->wr_coeff == 1)
++ bfqq->last_wr_start_finish = jiffies;
++
++ if (bfqd->low_latency && bfqd->bfq_wr_max_softrt_rate > 0 &&
++ RB_EMPTY_ROOT(&bfqq->sort_list)) {
++ /*
++ * If we get here, and there are no outstanding requests,
++ * then the request pattern is isochronous (see the comments
++ * to the function bfq_bfqq_softrt_next_start()). Hence we
++ * can compute soft_rt_next_start. If, instead, the queue
++ * still has outstanding requests, then we have to wait
++ * for the completion of all the outstanding requests to
++ * discover whether the request pattern is actually
++ * isochronous.
++ */
++ if (bfqq->dispatched == 0)
++ bfqq->soft_rt_next_start =
++ bfq_bfqq_softrt_next_start(bfqd, bfqq);
++ else {
++ /*
++ * The application is still waiting for the
++ * completion of one or more requests:
++ * prevent it from possibly being incorrectly
++ * deemed as soft real-time by setting its
++ * soft_rt_next_start to infinity. In fact,
++ * without this assignment, the application
++ * would be incorrectly deemed as soft
++ * real-time if:
++ * 1) it issued a new request before the
++ * completion of all its in-flight
++ * requests, and
++ * 2) at that time, its soft_rt_next_start
++ * happened to be in the past.
++ */
++ bfqq->soft_rt_next_start =
++ bfq_infinity_from_now(jiffies);
++ /*
++ * Schedule an update of soft_rt_next_start to when
++ * the task may be discovered to be isochronous.
++ */
++ bfq_mark_bfqq_softrt_update(bfqq);
++ }
++ }
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "expire (%d, slow %d, num_disp %d, idle_win %d)", reason,
++ slow, bfqq->dispatched, bfq_bfqq_idle_window(bfqq));
++
++ /*
++ * Increase, decrease or leave budget unchanged according to
++ * reason.
++ */
++ __bfq_bfqq_recalc_budget(bfqd, bfqq, reason);
++ __bfq_bfqq_expire(bfqd, bfqq);
++}
++
++/*
++ * Budget timeout is not implemented through a dedicated timer, but
++ * just checked on request arrivals and completions, as well as on
++ * idle timer expirations.
++ */
++static bool bfq_bfqq_budget_timeout(struct bfq_queue *bfqq)
++{
++ if (bfq_bfqq_budget_new(bfqq) ||
++ time_before(jiffies, bfqq->budget_timeout))
++ return false;
++ return true;
++}
++
++/*
++ * If we expire a queue that is waiting for the arrival of a new
++ * request, we may prevent the fictitious timestamp back-shifting that
++ * allows the guarantees of the queue to be preserved (see [1] for
++ * this tricky aspect). Hence we return true only if this condition
++ * does not hold, or if the queue is slow enough to deserve only to be
++ * kicked off for preserving a high throughput.
++ */
++static bool bfq_may_expire_for_budg_timeout(struct bfq_queue *bfqq)
++{
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "may_budget_timeout: wait_request %d left %d timeout %d",
++ bfq_bfqq_wait_request(bfqq),
++ bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3,
++ bfq_bfqq_budget_timeout(bfqq));
++
++ return (!bfq_bfqq_wait_request(bfqq) ||
++ bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3)
++ &&
++ bfq_bfqq_budget_timeout(bfqq);
++}
++
++/*
++ * For a queue that becomes empty, device idling is allowed only if
++ * this function returns true for that queue. As a consequence, since
++ * device idling plays a critical role for both throughput boosting
++ * and service guarantees, the return value of this function plays a
++ * critical role as well.
++ *
++ * In a nutshell, this function returns true only if idling is
++ * beneficial for throughput or, even if detrimental for throughput,
++ * idling is however necessary to preserve service guarantees (low
++ * latency, desired throughput distribution, ...). In particular, on
++ * NCQ-capable devices, this function tries to return false, so as to
++ * help keep the drives' internal queues full, whenever this helps the
++ * device boost the throughput without causing any service-guarantee
++ * issue.
++ *
++ * In more detail, the return value of this function is obtained by,
++ * first, computing a number of boolean variables that take into
++ * account throughput and service-guarantee issues, and, then,
++ * combining these variables in a logical expression. Most of the
++ * issues taken into account are not trivial. We discuss these issues
++ * while introducing the variables.
++ */
++static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
++{
++ struct bfq_data *bfqd = bfqq->bfqd;
++ bool idling_boosts_thr, idling_boosts_thr_without_issues,
++ all_queues_seeky, on_hdd_and_not_all_queues_seeky,
++ idling_needed_for_service_guarantees,
++ asymmetric_scenario;
++
++ /*
++ * The next variable takes into account the cases where idling
++ * boosts the throughput.
++ *
++ * The value of the variable is computed considering, first, that
++ * idling is virtually always beneficial for the throughput if:
++ * (a) the device is not NCQ-capable, or
++ * (b) regardless of the presence of NCQ, the device is rotational
++ * and the request pattern for bfqq is I/O-bound and sequential.
++ *
++ * Secondly, and in contrast to the above item (b), idling an
++ * NCQ-capable flash-based device would not boost the
++ * throughput even with sequential I/O; rather it would lower
++ * the throughput in proportion to how fast the device
++ * is. Accordingly, the next variable is true if any of the
++ * above conditions (a) and (b) is true, and, in particular,
++ * happens to be false if bfqd is an NCQ-capable flash-based
++ * device.
++ */
++ idling_boosts_thr = !bfqd->hw_tag ||
++ (!blk_queue_nonrot(bfqd->queue) && bfq_bfqq_IO_bound(bfqq) &&
++ bfq_bfqq_idle_window(bfqq));
++
++ /*
++ * The value of the next variable,
++ * idling_boosts_thr_without_issues, is equal to that of
++ * idling_boosts_thr, unless a special case holds. In this
++ * special case, described below, idling may cause problems to
++ * weight-raised queues.
++ *
++ * When the request pool is saturated (e.g., in the presence
++ * of write hogs), if the processes associated with
++ * non-weight-raised queues ask for requests at a lower rate,
++ * then processes associated with weight-raised queues have a
++ * higher probability to get a request from the pool
++ * immediately (or at least soon) when they need one. Thus
++ * they have a higher probability to actually get a fraction
++ * of the device throughput proportional to their high
++ * weight. This is especially true with NCQ-capable drives,
++ * which enqueue several requests in advance, and further
++ * reorder internally-queued requests.
++ *
++ * For this reason, we force to false the value of
++ * idling_boosts_thr_without_issues if there are weight-raised
++ * busy queues. In this case, and if bfqq is not weight-raised,
++ * this guarantees that the device is not idled for bfqq (if,
++ * instead, bfqq is weight-raised, then idling will be
++ * guaranteed by another variable, see below). Combined with
++ * the timestamping rules of BFQ (see [1] for details), this
++ * behavior causes bfqq, and hence any sync non-weight-raised
++ * queue, to get a lower number of requests served, and thus
++ * to ask for a lower number of requests from the request
++ * pool, before the busy weight-raised queues get served
++ * again. This often mitigates starvation problems in the
++ * presence of heavy write workloads and NCQ, thereby
++ * guaranteeing a higher application and system responsiveness
++ * in these hostile scenarios.
++ */
++ idling_boosts_thr_without_issues = idling_boosts_thr &&
++ bfqd->wr_busy_queues == 0;
++
++ /*
++ * There are then two cases where idling must be performed not
++ * for throughput concerns, but to preserve service
++ * guarantees. In the description of these cases, we say, for
++ * short, that a queue is sequential/random if the process
++ * associated to the queue issues sequential/random requests
++ * (in the second case the queue may be tagged as seeky or
++ * even constantly_seeky).
++ *
++ * To introduce the first case, we note that, since
++ * bfq_bfqq_idle_window(bfqq) is false if the device is
++ * NCQ-capable and bfqq is random (see
++ * bfq_update_idle_window()), then, from the above two
++ * assignments it follows that
++ * idling_boosts_thr_without_issues is false if the device is
++ * NCQ-capable and bfqq is random. Therefore, for this case,
++ * device idling would never be allowed if we used just
++ * idling_boosts_thr_without_issues to decide whether to allow
++ * it. And, beneficially, this would imply that throughput
++ * would always be boosted also with random I/O on NCQ-capable
++ * HDDs.
++ *
++ * But we must be careful on this point, to avoid an unfair
++ * treatment for bfqq. In fact, because of the same above
++ * assignments, idling_boosts_thr_without_issues is, on the
++ * other hand, true if 1) the device is an HDD and bfqq is
++ * sequential, and 2) there are no busy weight-raised
++ * queues. As a consequence, if we used just
++ * idling_boosts_thr_without_issues to decide whether to idle
++ * the device, then with an HDD we might easily bump into a
++ * scenario where queues that are sequential and I/O-bound
++ * would enjoy idling, whereas random queues would not. The
++ * latter might then get a low share of the device throughput,
++ * simply because the former would get many requests served
++ * after being set as in service, while the latter would not.
++ *
++ * To address this issue, we start by setting to true a
++ * sentinel variable, on_hdd_and_not_all_queues_seeky, if the
++ * device is rotational and not all queues with pending or
++ * in-flight requests are constantly seeky (i.e., there are
++ * active sequential queues, and bfqq might then be mistreated
++ * if it does not enjoy idling because it is random).
++ */
++ all_queues_seeky = bfq_bfqq_constantly_seeky(bfqq) &&
++ bfqd->busy_in_flight_queues ==
++ bfqd->const_seeky_busy_in_flight_queues;
++
++ on_hdd_and_not_all_queues_seeky =
++ !blk_queue_nonrot(bfqd->queue) && !all_queues_seeky;
++
++ /*
++ * To introduce the second case where idling needs to be
++ * performed to preserve service guarantees, we can note that
++ * allowing the drive to enqueue more than one request at a
++ * time, and hence delegating de facto final scheduling
++ * decisions to the drive's internal scheduler, causes loss of
++ * control on the actual request service order. In particular,
++ * the critical situation is when requests from different
++	 * processes happen to be present, at the same time, in the
++ * internal queue(s) of the drive. In such a situation, the
++ * drive, by deciding the service order of the
++ * internally-queued requests, does determine also the actual
++ * throughput distribution among these processes. But the
++ * drive typically has no notion or concern about per-process
++ * throughput distribution, and makes its decisions only on a
++ * per-request basis. Therefore, the service distribution
++ * enforced by the drive's internal scheduler is likely to
++ * coincide with the desired device-throughput distribution
++ * only in a completely symmetric scenario where:
++ * (i) each of these processes must get the same throughput as
++ * the others;
++ * (ii) all these processes have the same I/O pattern
++ * (either sequential or random).
++ * In fact, in such a scenario, the drive will tend to treat
++ * the requests of each of these processes in about the same
++ * way as the requests of the others, and thus to provide
++ * each of these processes with about the same throughput
++ * (which is exactly the desired throughput distribution). In
++ * contrast, in any asymmetric scenario, device idling is
++ * certainly needed to guarantee that bfqq receives its
++ * assigned fraction of the device throughput (see [1] for
++ * details).
++ *
++ * We address this issue by controlling, actually, only the
++ * symmetry sub-condition (i), i.e., provided that
++ * sub-condition (i) holds, idling is not performed,
++ * regardless of whether sub-condition (ii) holds. In other
++ * words, only if sub-condition (i) holds, then idling is
++ * allowed, and the device tends to be prevented from queueing
++ * many requests, possibly of several processes. The reason
++ * for not controlling also sub-condition (ii) is that, first,
++ * in the case of an HDD, the asymmetry in terms of types of
++	 * I/O patterns is already taken into account in the above
++ * sentinel variable
++ * on_hdd_and_not_all_queues_seeky. Secondly, in the case of a
++	 * flash-based device, however, we prefer to privilege
++ * throughput (and idling lowers throughput for this type of
++ * devices), for the following reasons:
++ * 1) differently from HDDs, the service time of random
++	 * requests is not orders of magnitude lower than the service
++ * time of sequential requests; thus, even if processes doing
++ * sequential I/O get a preferential treatment with respect to
++ * others doing random I/O, the consequences are not as
++ * dramatic as with HDDs;
++ * 2) if a process doing random I/O does need strong
++ * throughput guarantees, it is hopefully already being
++ * weight-raised, or the user is likely to have assigned it a
++ * higher weight than the other processes (and thus
++ * sub-condition (i) is likely to be false, which triggers
++ * idling).
++ *
++ * According to the above considerations, the next variable is
++ * true (only) if sub-condition (i) holds. To compute the
++ * value of this variable, we not only use the return value of
++ * the function bfq_symmetric_scenario(), but also check
++ * whether bfqq is being weight-raised, because
++ * bfq_symmetric_scenario() does not take into account also
++ * weight-raised queues (see comments to
++ * bfq_weights_tree_add()).
++ *
++ * As a side note, it is worth considering that the above
++ * device-idling countermeasures may however fail in the
++ * following unlucky scenario: if idling is (correctly)
++ * disabled in a time period during which all symmetry
++ * sub-conditions hold, and hence the device is allowed to
++ * enqueue many requests, but at some later point in time some
++	 * sub-condition ceases to hold, then it may become impossible
++ * to let requests be served in the desired order until all
++ * the requests already queued in the device have been served.
++ */
++ asymmetric_scenario = bfqq->wr_coeff > 1 ||
++ !bfq_symmetric_scenario(bfqd);
++
++ /*
++ * Finally, there is a case where maximizing throughput is the
++ * best choice even if it may cause unfairness toward
++ * bfqq. Such a case is when bfqq became active in a burst of
++ * queue activations. Queues that became active during a large
++ * burst benefit only from throughput, as discussed in the
++ * comments to bfq_handle_burst. Thus, if bfqq became active
++ * in a burst and not idling the device maximizes throughput,
++	 * then the device must not be idled, because not idling the
++ * device provides bfqq and all other queues in the burst with
++ * maximum benefit. Combining this and the two cases above, we
++ * can now establish when idling is actually needed to
++ * preserve service guarantees.
++ */
++ idling_needed_for_service_guarantees =
++ (on_hdd_and_not_all_queues_seeky || asymmetric_scenario) &&
++ !bfq_bfqq_in_large_burst(bfqq);
++
++ /*
++ * We have now all the components we need to compute the return
++ * value of the function, which is true only if both the following
++ * conditions hold:
++	 * 1) bfqq is sync, because idling makes sense only for sync queues;
++ * 2) idling either boosts the throughput (without issues), or
++ * is necessary to preserve service guarantees.
++ */
++ return bfq_bfqq_sync(bfqq) &&
++ (idling_boosts_thr_without_issues ||
++ idling_needed_for_service_guarantees);
++}
++
++/*
++ * If the in-service queue is empty but the function bfq_bfqq_may_idle
++ * returns true, then:
++ * 1) the queue must remain in service and cannot be expired, and
++ * 2) the device must be idled to wait for the possible arrival of a new
++ * request for the queue.
++ * See the comments to the function bfq_bfqq_may_idle for the reasons
++ * why performing device idling is the best choice to boost the throughput
++ * and preserve service guarantees when bfq_bfqq_may_idle itself
++ * returns true.
++ */
++static bool bfq_bfqq_must_idle(struct bfq_queue *bfqq)
++{
++ struct bfq_data *bfqd = bfqq->bfqd;
++
++ return RB_EMPTY_ROOT(&bfqq->sort_list) && bfqd->bfq_slice_idle != 0 &&
++ bfq_bfqq_may_idle(bfqq);
++}
++
++/*
++ * Select a queue for service. If we have a current queue in service,
++ * check whether to continue servicing it, or retrieve and set a new one.
++ */
++static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
++{
++ struct bfq_queue *bfqq;
++ struct request *next_rq;
++ enum bfqq_expiration reason = BFQ_BFQQ_BUDGET_TIMEOUT;
++
++ bfqq = bfqd->in_service_queue;
++ if (!bfqq)
++ goto new_queue;
++
++ bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue");
++
++ if (bfq_may_expire_for_budg_timeout(bfqq) &&
++ !timer_pending(&bfqd->idle_slice_timer) &&
++ !bfq_bfqq_must_idle(bfqq))
++ goto expire;
++
++ next_rq = bfqq->next_rq;
++ /*
++ * If bfqq has requests queued and it has enough budget left to
++	 * serve them, keep the queue; otherwise expire it.
++ */
++ if (next_rq) {
++ if (bfq_serv_to_charge(next_rq, bfqq) >
++ bfq_bfqq_budget_left(bfqq)) {
++ reason = BFQ_BFQQ_BUDGET_EXHAUSTED;
++ goto expire;
++ } else {
++ /*
++			 * The idle timer may be pending because we might
++ * not disable disk idling even when a new request
++ * arrives.
++ */
++ if (timer_pending(&bfqd->idle_slice_timer)) {
++ /*
++			 * If we get here: 1) at least one new request
++			 * has arrived but we have not disabled the
++			 * timer because the request was too small,
++			 * and 2) the block layer has unplugged
++ * the device, causing the dispatch to be
++ * invoked.
++ *
++ * Since the device is unplugged, now the
++ * requests are probably large enough to
++ * provide a reasonable throughput.
++ * So we disable idling.
++ */
++ bfq_clear_bfqq_wait_request(bfqq);
++ del_timer(&bfqd->idle_slice_timer);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ bfqg_stats_update_idle_time(bfqq_group(bfqq));
++#endif
++ }
++ goto keep_queue;
++ }
++ }
++
++ /*
++ * No requests pending. However, if the in-service queue is idling
++ * for a new request, or has requests waiting for a completion and
++ * may idle after their completion, then keep it anyway.
++ */
++ if (timer_pending(&bfqd->idle_slice_timer) ||
++ (bfqq->dispatched != 0 && bfq_bfqq_may_idle(bfqq))) {
++ bfqq = NULL;
++ goto keep_queue;
++ }
++
++ reason = BFQ_BFQQ_NO_MORE_REQUESTS;
++expire:
++ bfq_bfqq_expire(bfqd, bfqq, false, reason);
++new_queue:
++ bfqq = bfq_set_in_service_queue(bfqd);
++ bfq_log(bfqd, "select_queue: new queue %d returned",
++ bfqq ? bfqq->pid : 0);
++keep_queue:
++ return bfqq;
++}
++
++static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++
++ if (bfqq->wr_coeff > 1) { /* queue is being weight-raised */
++ bfq_log_bfqq(bfqd, bfqq,
++ "raising period dur %u/%u msec, old coeff %u, w %d(%d)",
++ jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish),
++ jiffies_to_msecs(bfqq->wr_cur_max_time),
++ bfqq->wr_coeff,
++ bfqq->entity.weight, bfqq->entity.orig_weight);
++
++ BUG_ON(bfqq != bfqd->in_service_queue && entity->weight !=
++ entity->orig_weight * bfqq->wr_coeff);
++ if (entity->prio_changed)
++ bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change");
++
++ /*
++ * If the queue was activated in a burst, or
++ * too much time has elapsed from the beginning
++ * of this weight-raising period, then end weight
++ * raising.
++ */
++ if (bfq_bfqq_in_large_burst(bfqq) ||
++ time_is_before_jiffies(bfqq->last_wr_start_finish +
++ bfqq->wr_cur_max_time)) {
++ bfqq->last_wr_start_finish = jiffies;
++ bfq_log_bfqq(bfqd, bfqq,
++ "wrais ending at %lu, rais_max_time %u",
++ bfqq->last_wr_start_finish,
++ jiffies_to_msecs(bfqq->wr_cur_max_time));
++ bfq_bfqq_end_wr(bfqq);
++ }
++ }
++ /* Update weight both if it must be raised and if it must be lowered */
++ if ((entity->weight > entity->orig_weight) != (bfqq->wr_coeff > 1))
++ __bfq_entity_update_weight_prio(
++ bfq_entity_service_tree(entity),
++ entity);
++}
++
++/*
++ * Dispatch one request from bfqq, moving it to the request queue
++ * dispatch list.
++ */
++static int bfq_dispatch_request(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
++ int dispatched = 0;
++ struct request *rq;
++ unsigned long service_to_charge;
++
++ BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list));
++
++ /* Follow expired path, else get first next available. */
++ rq = bfq_check_fifo(bfqq);
++ if (!rq)
++ rq = bfqq->next_rq;
++ service_to_charge = bfq_serv_to_charge(rq, bfqq);
++
++ if (service_to_charge > bfq_bfqq_budget_left(bfqq)) {
++ /*
++ * This may happen if the next rq is chosen in fifo order
++ * instead of sector order. The budget is properly
++ * dimensioned to be always sufficient to serve the next
++ * request only if it is chosen in sector order. The reason
++ * is that it would be quite inefficient and little useful
++		 * is that it would be quite inefficient and of little use
++ * serve even the possible next rq in fifo order.
++ * In fact, requests are seldom served in fifo order.
++ *
++ * Expire the queue for budget exhaustion, and make sure
++ * that the next act_budget is enough to serve the next
++ * request, even if it comes from the fifo expired path.
++ */
++ bfqq->next_rq = rq;
++ /*
++		 * Since this dispatch failed, make sure that
++ * a new one will be performed
++ */
++ if (!bfqd->rq_in_driver)
++ bfq_schedule_dispatch(bfqd);
++ goto expire;
++ }
++
++ /* Finally, insert request into driver dispatch list. */
++ bfq_bfqq_served(bfqq, service_to_charge);
++ bfq_dispatch_insert(bfqd->queue, rq);
++
++ bfq_update_wr_data(bfqd, bfqq);
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "dispatched %u sec req (%llu), budg left %d",
++ blk_rq_sectors(rq),
++ (unsigned long long) blk_rq_pos(rq),
++ bfq_bfqq_budget_left(bfqq));
++
++ dispatched++;
++
++ if (!bfqd->in_service_bic) {
++ atomic_long_inc(&RQ_BIC(rq)->icq.ioc->refcount);
++ bfqd->in_service_bic = RQ_BIC(rq);
++ }
++
++ if (bfqd->busy_queues > 1 && ((!bfq_bfqq_sync(bfqq) &&
++ dispatched >= bfqd->bfq_max_budget_async_rq) ||
++ bfq_class_idle(bfqq)))
++ goto expire;
++
++ return dispatched;
++
++expire:
++ bfq_bfqq_expire(bfqd, bfqq, false, BFQ_BFQQ_BUDGET_EXHAUSTED);
++ return dispatched;
++}
++
++static int __bfq_forced_dispatch_bfqq(struct bfq_queue *bfqq)
++{
++ int dispatched = 0;
++
++ while (bfqq->next_rq) {
++ bfq_dispatch_insert(bfqq->bfqd->queue, bfqq->next_rq);
++ dispatched++;
++ }
++
++ BUG_ON(!list_empty(&bfqq->fifo));
++ return dispatched;
++}
++
++/*
++ * Drain our current requests.
++ * Used for barriers and when switching io schedulers on-the-fly.
++ */
++static int bfq_forced_dispatch(struct bfq_data *bfqd)
++{
++ struct bfq_queue *bfqq, *n;
++ struct bfq_service_tree *st;
++ int dispatched = 0;
++
++ bfqq = bfqd->in_service_queue;
++ if (bfqq)
++ __bfq_bfqq_expire(bfqd, bfqq);
++
++ /*
++ * Loop through classes, and be careful to leave the scheduler
++ * in a consistent state, as feedback mechanisms and vtime
++ * updates cannot be disabled during the process.
++ */
++ list_for_each_entry_safe(bfqq, n, &bfqd->active_list, bfqq_list) {
++ st = bfq_entity_service_tree(&bfqq->entity);
++
++ dispatched += __bfq_forced_dispatch_bfqq(bfqq);
++ bfqq->max_budget = bfq_max_budget(bfqd);
++
++ bfq_forget_idle(st);
++ }
++
++ BUG_ON(bfqd->busy_queues != 0);
++
++ return dispatched;
++}
++
++static int bfq_dispatch_requests(struct request_queue *q, int force)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct bfq_queue *bfqq;
++ int max_dispatch;
++
++ bfq_log(bfqd, "dispatch requests: %d busy queues", bfqd->busy_queues);
++ if (bfqd->busy_queues == 0)
++ return 0;
++
++ if (unlikely(force))
++ return bfq_forced_dispatch(bfqd);
++
++ bfqq = bfq_select_queue(bfqd);
++ if (!bfqq)
++ return 0;
++
++ if (bfq_class_idle(bfqq))
++ max_dispatch = 1;
++
++ if (!bfq_bfqq_sync(bfqq))
++ max_dispatch = bfqd->bfq_max_budget_async_rq;
++
++ if (!bfq_bfqq_sync(bfqq) && bfqq->dispatched >= max_dispatch) {
++ if (bfqd->busy_queues > 1)
++ return 0;
++ if (bfqq->dispatched >= 4 * max_dispatch)
++ return 0;
++ }
++
++ if (bfqd->sync_flight != 0 && !bfq_bfqq_sync(bfqq))
++ return 0;
++
++ bfq_clear_bfqq_wait_request(bfqq);
++ BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++
++ if (!bfq_dispatch_request(bfqd, bfqq))
++ return 0;
++
++ bfq_log_bfqq(bfqd, bfqq, "dispatched %s request",
++ bfq_bfqq_sync(bfqq) ? "sync" : "async");
++
++ return 1;
++}
++
++/*
++ * Task holds one reference to the queue, dropped when task exits. Each rq
++ * in-flight on this queue also holds a reference, dropped when rq is freed.
++ *
++ * Queue lock must be held here.
++ */
++static void bfq_put_queue(struct bfq_queue *bfqq)
++{
++ struct bfq_data *bfqd = bfqq->bfqd;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ struct bfq_group *bfqg = bfqq_group(bfqq);
++#endif
++
++ BUG_ON(atomic_read(&bfqq->ref) <= 0);
++
++ bfq_log_bfqq(bfqd, bfqq, "put_queue: %p %d", bfqq,
++ atomic_read(&bfqq->ref));
++ if (!atomic_dec_and_test(&bfqq->ref))
++ return;
++
++ BUG_ON(rb_first(&bfqq->sort_list));
++ BUG_ON(bfqq->allocated[READ] + bfqq->allocated[WRITE] != 0);
++ BUG_ON(bfqq->entity.tree);
++ BUG_ON(bfq_bfqq_busy(bfqq));
++ BUG_ON(bfqd->in_service_queue == bfqq);
++
++ if (bfq_bfqq_sync(bfqq))
++ /*
++ * The fact that this queue is being destroyed does not
++ * invalidate the fact that this queue may have been
++ * activated during the current burst. As a consequence,
++ * although the queue does not exist anymore, and hence
++		 * needs to be removed from the burst list if it is there,
++		 * the burst size must not be decremented.
++ */
++ hlist_del_init(&bfqq->burst_list_node);
++
++ bfq_log_bfqq(bfqd, bfqq, "put_queue: %p freed", bfqq);
++
++ kmem_cache_free(bfq_pool, bfqq);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ bfqg_put(bfqg);
++#endif
++}
++
++static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ if (bfqq == bfqd->in_service_queue) {
++ __bfq_bfqq_expire(bfqd, bfqq);
++ bfq_schedule_dispatch(bfqd);
++ }
++
++ bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq,
++ atomic_read(&bfqq->ref));
++
++ bfq_put_queue(bfqq);
++}
++
++static void bfq_init_icq(struct io_cq *icq)
++{
++ struct bfq_io_cq *bic = icq_to_bic(icq);
++
++ bic->ttime.last_end_request = jiffies;
++}
++
++static void bfq_exit_icq(struct io_cq *icq)
++{
++ struct bfq_io_cq *bic = icq_to_bic(icq);
++ struct bfq_data *bfqd = bic_to_bfqd(bic);
++
++ if (bic->bfqq[BLK_RW_ASYNC]) {
++ bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_ASYNC]);
++ bic->bfqq[BLK_RW_ASYNC] = NULL;
++ }
++
++ if (bic->bfqq[BLK_RW_SYNC]) {
++ bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
++ bic->bfqq[BLK_RW_SYNC] = NULL;
++ }
++}
++
++/*
++ * Update the entity prio values; note that the new values will not
++ * be used until the next (re)activation.
++ */
++static void
++bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
++{
++ struct task_struct *tsk = current;
++ int ioprio_class;
++
++ ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
++ switch (ioprio_class) {
++ default:
++ dev_err(bfqq->bfqd->queue->backing_dev_info.dev,
++ "bfq: bad prio class %d\n", ioprio_class);
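++		/* fall through */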
++ case IOPRIO_CLASS_NONE:
++ /*
++ * No prio set, inherit CPU scheduling settings.
++ */
++ bfqq->new_ioprio = task_nice_ioprio(tsk);
++ bfqq->new_ioprio_class = task_nice_ioclass(tsk);
++ break;
++ case IOPRIO_CLASS_RT:
++ bfqq->new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++ bfqq->new_ioprio_class = IOPRIO_CLASS_RT;
++ break;
++ case IOPRIO_CLASS_BE:
++ bfqq->new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++ bfqq->new_ioprio_class = IOPRIO_CLASS_BE;
++ break;
++ case IOPRIO_CLASS_IDLE:
++ bfqq->new_ioprio_class = IOPRIO_CLASS_IDLE;
++ bfqq->new_ioprio = 7;
++ bfq_clear_bfqq_idle_window(bfqq);
++ break;
++ }
++
++ if (bfqq->new_ioprio < 0 || bfqq->new_ioprio >= IOPRIO_BE_NR) {
++ pr_crit("bfq_set_next_ioprio_data: new_ioprio %d\n",
++ bfqq->new_ioprio);
++ BUG();
++ }
++
++ bfqq->entity.new_weight = bfq_ioprio_to_weight(bfqq->new_ioprio);
++ bfqq->entity.prio_changed = 1;
++}
++
++static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio)
++{
++ struct bfq_data *bfqd;
++ struct bfq_queue *bfqq, *new_bfqq;
++ unsigned long uninitialized_var(flags);
++ int ioprio = bic->icq.ioc->ioprio;
++
++ bfqd = bfq_get_bfqd_locked(&(bic->icq.q->elevator->elevator_data),
++ &flags);
++ /*
++	 * This condition may trigger on a newly created bic; be sure to
++ * drop the lock before returning.
++ */
++ if (unlikely(!bfqd) || likely(bic->ioprio == ioprio))
++ goto out;
++
++ bic->ioprio = ioprio;
++
++ bfqq = bic->bfqq[BLK_RW_ASYNC];
++ if (bfqq) {
++ new_bfqq = bfq_get_queue(bfqd, bio, BLK_RW_ASYNC, bic,
++ GFP_ATOMIC);
++ if (new_bfqq) {
++ bic->bfqq[BLK_RW_ASYNC] = new_bfqq;
++ bfq_log_bfqq(bfqd, bfqq,
++ "check_ioprio_change: bfqq %p %d",
++ bfqq, atomic_read(&bfqq->ref));
++ bfq_put_queue(bfqq);
++ }
++ }
++
++ bfqq = bic->bfqq[BLK_RW_SYNC];
++ if (bfqq)
++ bfq_set_next_ioprio_data(bfqq, bic);
++
++out:
++ bfq_put_bfqd_unlock(bfqd, &flags);
++}
++
++static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ struct bfq_io_cq *bic, pid_t pid, int is_sync)
++{
++ RB_CLEAR_NODE(&bfqq->entity.rb_node);
++ INIT_LIST_HEAD(&bfqq->fifo);
++ INIT_HLIST_NODE(&bfqq->burst_list_node);
++
++ atomic_set(&bfqq->ref, 0);
++ bfqq->bfqd = bfqd;
++
++ if (bic)
++ bfq_set_next_ioprio_data(bfqq, bic);
++
++ if (is_sync) {
++ if (!bfq_class_idle(bfqq))
++ bfq_mark_bfqq_idle_window(bfqq);
++ bfq_mark_bfqq_sync(bfqq);
++ } else
++ bfq_clear_bfqq_sync(bfqq);
++ bfq_mark_bfqq_IO_bound(bfqq);
++
++	/* Tentative initial value to trade off between throughput and latency */
++ bfqq->max_budget = (2 * bfq_max_budget(bfqd)) / 3;
++ bfqq->pid = pid;
++
++ bfqq->wr_coeff = 1;
++ bfqq->last_wr_start_finish = 0;
++ /*
++ * Set to the value for which bfqq will not be deemed as
++ * soft rt when it becomes backlogged.
++ */
++ bfqq->soft_rt_next_start = bfq_infinity_from_now(jiffies);
++}
++
++static struct bfq_queue *bfq_find_alloc_queue(struct bfq_data *bfqd,
++ struct bio *bio, int is_sync,
++ struct bfq_io_cq *bic,
++ gfp_t gfp_mask)
++{
++ struct bfq_group *bfqg;
++ struct bfq_queue *bfqq, *new_bfqq = NULL;
++ struct blkcg *blkcg;
++
++retry:
++ rcu_read_lock();
++
++ blkcg = bio_blkcg(bio);
++ bfqg = bfq_find_alloc_group(bfqd, blkcg);
++ /* bic always exists here */
++ bfqq = bic_to_bfqq(bic, is_sync);
++
++ /*
++	 * Always try a new allocation if we originally fell back to the
++	 * OOM bfqq, since that should just be a temporary situation.
++ */
++ if (!bfqq || bfqq == &bfqd->oom_bfqq) {
++ bfqq = NULL;
++ if (new_bfqq) {
++ bfqq = new_bfqq;
++ new_bfqq = NULL;
++ } else if (gfpflags_allow_blocking(gfp_mask)) {
++ rcu_read_unlock();
++ spin_unlock_irq(bfqd->queue->queue_lock);
++ new_bfqq = kmem_cache_alloc_node(bfq_pool,
++ gfp_mask | __GFP_ZERO,
++ bfqd->queue->node);
++ spin_lock_irq(bfqd->queue->queue_lock);
++ if (new_bfqq)
++ goto retry;
++ } else {
++ bfqq = kmem_cache_alloc_node(bfq_pool,
++ gfp_mask | __GFP_ZERO,
++ bfqd->queue->node);
++ }
++
++ if (bfqq) {
++ bfq_init_bfqq(bfqd, bfqq, bic, current->pid,
++ is_sync);
++ bfq_init_entity(&bfqq->entity, bfqg);
++ bfq_log_bfqq(bfqd, bfqq, "allocated");
++ } else {
++ bfqq = &bfqd->oom_bfqq;
++ bfq_log_bfqq(bfqd, bfqq, "using oom bfqq");
++ }
++ }
++
++ if (new_bfqq)
++ kmem_cache_free(bfq_pool, new_bfqq);
++
++ rcu_read_unlock();
++
++ return bfqq;
++}
++
++static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd,
++ struct bfq_group *bfqg,
++ int ioprio_class, int ioprio)
++{
++ switch (ioprio_class) {
++ case IOPRIO_CLASS_RT:
++ return &bfqg->async_bfqq[0][ioprio];
++ case IOPRIO_CLASS_NONE:
++ ioprio = IOPRIO_NORM;
++ /* fall through */
++ case IOPRIO_CLASS_BE:
++ return &bfqg->async_bfqq[1][ioprio];
++ case IOPRIO_CLASS_IDLE:
++ return &bfqg->async_idle_bfqq;
++ default:
++ BUG();
++ }
++}
++
++static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
++ struct bio *bio, int is_sync,
++ struct bfq_io_cq *bic, gfp_t gfp_mask)
++{
++ const int ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
++ const int ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
++ struct bfq_queue **async_bfqq = NULL;
++ struct bfq_queue *bfqq = NULL;
++
++ if (!is_sync) {
++ struct blkcg *blkcg;
++ struct bfq_group *bfqg;
++
++ rcu_read_lock();
++ blkcg = bio_blkcg(bio);
++ rcu_read_unlock();
++ bfqg = bfq_find_alloc_group(bfqd, blkcg);
++ async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
++ ioprio);
++ bfqq = *async_bfqq;
++ }
++
++ if (!bfqq)
++ bfqq = bfq_find_alloc_queue(bfqd, bio, is_sync, bic, gfp_mask);
++
++ /*
++	 * Pin the queue now that it's allocated; scheduler exit will
++ * prune it.
++ */
++ if (!is_sync && !(*async_bfqq)) {
++ atomic_inc(&bfqq->ref);
++ bfq_log_bfqq(bfqd, bfqq, "get_queue, bfqq not in async: %p, %d",
++ bfqq, atomic_read(&bfqq->ref));
++ *async_bfqq = bfqq;
++ }
++
++ atomic_inc(&bfqq->ref);
++ bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq,
++ atomic_read(&bfqq->ref));
++ return bfqq;
++}
++
++static void bfq_update_io_thinktime(struct bfq_data *bfqd,
++ struct bfq_io_cq *bic)
++{
++ unsigned long elapsed = jiffies - bic->ttime.last_end_request;
++ unsigned long ttime = min(elapsed, 2UL * bfqd->bfq_slice_idle);
++
++ bic->ttime.ttime_samples = (7*bic->ttime.ttime_samples + 256) / 8;
++ bic->ttime.ttime_total = (7*bic->ttime.ttime_total + 256*ttime) / 8;
++ bic->ttime.ttime_mean = (bic->ttime.ttime_total + 128) /
++ bic->ttime.ttime_samples;
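++	/*
++	 * Illustrative note: both running sums above use the same 7/8
++	 * exponential decay, so ttime_samples saturates at 256 and, for a
++	 * steady think time t, ttime_total converges to 256 * t; the
++	 * division (with +128 for rounding) then yields a mean close to t,
++	 * each new sample being weighted 1/8.
++	 */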
++}
++
++static void bfq_update_io_seektime(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ struct request *rq)
++{
++ sector_t sdist;
++ u64 total;
++
++ if (bfqq->last_request_pos < blk_rq_pos(rq))
++ sdist = blk_rq_pos(rq) - bfqq->last_request_pos;
++ else
++ sdist = bfqq->last_request_pos - blk_rq_pos(rq);
++
++ /*
++ * Don't allow the seek distance to get too large from the
++ * odd fragment, pagein, etc.
++ */
++ if (bfqq->seek_samples == 0) /* first request, not really a seek */
++ sdist = 0;
++ else if (bfqq->seek_samples <= 60) /* second & third seek */
++ sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*1024);
++ else
++ sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*64);
++
++ bfqq->seek_samples = (7*bfqq->seek_samples + 256) / 8;
++ bfqq->seek_total = (7*bfqq->seek_total + (u64)256*sdist) / 8;
++ total = bfqq->seek_total + (bfqq->seek_samples/2);
++ do_div(total, bfqq->seek_samples);
++ bfqq->seek_mean = (sector_t)total;
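++	/*
++	 * Illustrative note: same fixed-point scheme as the think-time
++	 * statistics above: seek_samples saturates at 256 and seek_total
++	 * tracks 256 times the decayed average distance, so seek_mean is
++	 * the (rounded) average seek distance in sectors, each new sample
++	 * weighted 1/8.
++	 */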
++
++ bfq_log_bfqq(bfqd, bfqq, "dist=%llu mean=%llu", (u64)sdist,
++ (u64)bfqq->seek_mean);
++}
++
++/*
++ * Disable idle window if the process thinks too long or seeks so much that
++ * it doesn't matter.
++ */
++static void bfq_update_idle_window(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ struct bfq_io_cq *bic)
++{
++ int enable_idle;
++
++ /* Don't idle for async or idle io prio class. */
++ if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq))
++ return;
++
++ enable_idle = bfq_bfqq_idle_window(bfqq);
++
++ if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
++ bfqd->bfq_slice_idle == 0 ||
++ (bfqd->hw_tag && BFQQ_SEEKY(bfqq) &&
++ bfqq->wr_coeff == 1))
++ enable_idle = 0;
++ else if (bfq_sample_valid(bic->ttime.ttime_samples)) {
++ if (bic->ttime.ttime_mean > bfqd->bfq_slice_idle &&
++ bfqq->wr_coeff == 1)
++ enable_idle = 0;
++ else
++ enable_idle = 1;
++ }
++ bfq_log_bfqq(bfqd, bfqq, "update_idle_window: enable_idle %d",
++ enable_idle);
++
++ if (enable_idle)
++ bfq_mark_bfqq_idle_window(bfqq);
++ else
++ bfq_clear_bfqq_idle_window(bfqq);
++}
++
++/*
++ * Called when a new fs request (rq) is added to bfqq. Check if there's
++ * something we should do about it.
++ */
++static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ struct request *rq)
++{
++ struct bfq_io_cq *bic = RQ_BIC(rq);
++
++ if (rq->cmd_flags & REQ_META)
++ bfqq->meta_pending++;
++
++ bfq_update_io_thinktime(bfqd, bic);
++ bfq_update_io_seektime(bfqd, bfqq, rq);
++ if (!BFQQ_SEEKY(bfqq) && bfq_bfqq_constantly_seeky(bfqq)) {
++ bfq_clear_bfqq_constantly_seeky(bfqq);
++ if (!blk_queue_nonrot(bfqd->queue)) {
++ BUG_ON(!bfqd->const_seeky_busy_in_flight_queues);
++ bfqd->const_seeky_busy_in_flight_queues--;
++ }
++ }
++ if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
++ !BFQQ_SEEKY(bfqq))
++ bfq_update_idle_window(bfqd, bfqq, bic);
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
++ bfq_bfqq_idle_window(bfqq), BFQQ_SEEKY(bfqq),
++ (unsigned long long) bfqq->seek_mean);
++
++ bfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq);
++
++ if (bfqq == bfqd->in_service_queue && bfq_bfqq_wait_request(bfqq)) {
++ bool small_req = bfqq->queued[rq_is_sync(rq)] == 1 &&
++ blk_rq_sectors(rq) < 32;
++ bool budget_timeout = bfq_bfqq_budget_timeout(bfqq);
++
++ /*
++ * There is just this request queued: if the request
++ * is small and the queue is not to be expired, then
++ * just exit.
++ *
++ * In this way, if the disk is being idled to wait for
++ * a new request from the in-service queue, we avoid
++ * unplugging the device and committing the disk to serve
++		 * just a small request. Instead, we wait for
++ * the block layer to decide when to unplug the device:
++ * hopefully, new requests will be merged to this one
++ * quickly, then the device will be unplugged and
++ * larger requests will be dispatched.
++ */
++ if (small_req && !budget_timeout)
++ return;
++
++ /*
++ * A large enough request arrived, or the queue is to
++ * be expired: in both cases disk idling is to be
++ * stopped, so clear wait_request flag and reset
++ * timer.
++ */
++ bfq_clear_bfqq_wait_request(bfqq);
++ del_timer(&bfqd->idle_slice_timer);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ bfqg_stats_update_idle_time(bfqq_group(bfqq));
++#endif
++
++ /*
++ * The queue is not empty, because a new request just
++ * arrived. Hence we can safely expire the queue, in
++ * case of budget timeout, without risking that the
++ * timestamps of the queue are not updated correctly.
++ * See [1] for more details.
++ */
++ if (budget_timeout)
++ bfq_bfqq_expire(bfqd, bfqq, false,
++ BFQ_BFQQ_BUDGET_TIMEOUT);
++
++ /*
++ * Let the request rip immediately, or let a new queue be
++ * selected if bfqq has just been expired.
++ */
++ __blk_run_queue(bfqd->queue);
++ }
++}
++
++static void bfq_insert_request(struct request_queue *q, struct request *rq)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++ assert_spin_locked(bfqd->queue->queue_lock);
++
++ bfq_add_request(rq);
++
++ rq->fifo_time = jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
++ list_add_tail(&rq->queuelist, &bfqq->fifo);
++
++ bfq_rq_enqueued(bfqd, bfqq, rq);
++}
++
++static void bfq_update_hw_tag(struct bfq_data *bfqd)
++{
++ bfqd->max_rq_in_driver = max(bfqd->max_rq_in_driver,
++ bfqd->rq_in_driver);
++
++ if (bfqd->hw_tag == 1)
++ return;
++
++ /*
++ * This sample is valid if the number of outstanding requests
++ * is large enough to allow a queueing behavior. Note that the
++ * sum is not exact, as it's not taking into account deactivated
++ * requests.
++ */
++ if (bfqd->rq_in_driver + bfqd->queued < BFQ_HW_QUEUE_THRESHOLD)
++ return;
++
++ if (bfqd->hw_tag_samples++ < BFQ_HW_QUEUE_SAMPLES)
++ return;
++
++ bfqd->hw_tag = bfqd->max_rq_in_driver > BFQ_HW_QUEUE_THRESHOLD;
++ bfqd->max_rq_in_driver = 0;
++ bfqd->hw_tag_samples = 0;
++}
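++
++/*
++ * Note: hw_tag starts at -1 ("unknown", see bfq_init_queue()) and takes
++ * a 0/1 value only after BFQ_HW_QUEUE_SAMPLES valid samples; once the
++ * device is known to queue commands internally, idling is disabled for
++ * seeky, non-weight-raised queues in bfq_update_idle_window().
++ */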
++
++static void bfq_completed_request(struct request_queue *q, struct request *rq)
++{
++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
++ struct bfq_data *bfqd = bfqq->bfqd;
++ bool sync = bfq_bfqq_sync(bfqq);
++
++ bfq_log_bfqq(bfqd, bfqq, "completed one req with %u sects left (%d)",
++ blk_rq_sectors(rq), sync);
++
++ bfq_update_hw_tag(bfqd);
++
++ BUG_ON(!bfqd->rq_in_driver);
++ BUG_ON(!bfqq->dispatched);
++ bfqd->rq_in_driver--;
++ bfqq->dispatched--;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ bfqg_stats_update_completion(bfqq_group(bfqq),
++ rq_start_time_ns(rq),
++ rq_io_start_time_ns(rq), rq->cmd_flags);
++#endif
++
++ if (!bfqq->dispatched && !bfq_bfqq_busy(bfqq)) {
++ bfq_weights_tree_remove(bfqd, &bfqq->entity,
++ &bfqd->queue_weights_tree);
++ if (!blk_queue_nonrot(bfqd->queue)) {
++ BUG_ON(!bfqd->busy_in_flight_queues);
++ bfqd->busy_in_flight_queues--;
++ if (bfq_bfqq_constantly_seeky(bfqq)) {
++ BUG_ON(!bfqd->
++ const_seeky_busy_in_flight_queues);
++ bfqd->const_seeky_busy_in_flight_queues--;
++ }
++ }
++ }
++
++ if (sync) {
++ bfqd->sync_flight--;
++ RQ_BIC(rq)->ttime.last_end_request = jiffies;
++ }
++
++ /*
++ * If we are waiting to discover whether the request pattern of the
++ * task associated with the queue is actually isochronous, and
++ * both requisites for this condition to hold are satisfied, then
++ * compute soft_rt_next_start (see the comments to the function
++ * bfq_bfqq_softrt_next_start()).
++ */
++ if (bfq_bfqq_softrt_update(bfqq) && bfqq->dispatched == 0 &&
++ RB_EMPTY_ROOT(&bfqq->sort_list))
++ bfqq->soft_rt_next_start =
++ bfq_bfqq_softrt_next_start(bfqd, bfqq);
++
++ /*
++ * If this is the in-service queue, check if it needs to be expired,
++ * or if we want to idle in case it has no pending requests.
++ */
++ if (bfqd->in_service_queue == bfqq) {
++ if (bfq_bfqq_budget_new(bfqq))
++ bfq_set_budget_timeout(bfqd);
++
++ if (bfq_bfqq_must_idle(bfqq)) {
++ bfq_arm_slice_timer(bfqd);
++ goto out;
++ } else if (bfq_may_expire_for_budg_timeout(bfqq))
++ bfq_bfqq_expire(bfqd, bfqq, false,
++ BFQ_BFQQ_BUDGET_TIMEOUT);
++ else if (RB_EMPTY_ROOT(&bfqq->sort_list) &&
++ (bfqq->dispatched == 0 ||
++ !bfq_bfqq_may_idle(bfqq)))
++ bfq_bfqq_expire(bfqd, bfqq, false,
++ BFQ_BFQQ_NO_MORE_REQUESTS);
++ }
++
++ if (!bfqd->rq_in_driver)
++ bfq_schedule_dispatch(bfqd);
++
++out:
++ return;
++}
++
++static int __bfq_may_queue(struct bfq_queue *bfqq)
++{
++ if (bfq_bfqq_wait_request(bfqq) && bfq_bfqq_must_alloc(bfqq)) {
++ bfq_clear_bfqq_must_alloc(bfqq);
++ return ELV_MQUEUE_MUST;
++ }
++
++ return ELV_MQUEUE_MAY;
++}
++
++static int bfq_may_queue(struct request_queue *q, int rw)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct task_struct *tsk = current;
++ struct bfq_io_cq *bic;
++ struct bfq_queue *bfqq;
++
++ /*
++ * Don't force setup of a queue from here, as a call to may_queue
++ * does not necessarily imply that a request actually will be
++	 * queued. So just look up a possibly existing queue, or return
++ * 'may queue' if that fails.
++ */
++ bic = bfq_bic_lookup(bfqd, tsk->io_context);
++ if (!bic)
++ return ELV_MQUEUE_MAY;
++
++ bfqq = bic_to_bfqq(bic, rw_is_sync(rw));
++ if (bfqq)
++ return __bfq_may_queue(bfqq);
++
++ return ELV_MQUEUE_MAY;
++}
++
++/*
++ * Queue lock held here.
++ */
++static void bfq_put_request(struct request *rq)
++{
++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
++
++ if (bfqq) {
++ const int rw = rq_data_dir(rq);
++
++ BUG_ON(!bfqq->allocated[rw]);
++ bfqq->allocated[rw]--;
++
++ rq->elv.priv[0] = NULL;
++ rq->elv.priv[1] = NULL;
++
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "put_request %p, %d",
++ bfqq, atomic_read(&bfqq->ref));
++ bfq_put_queue(bfqq);
++ }
++}
++
++/*
++ * Allocate bfq data structures associated with this request.
++ */
++static int bfq_set_request(struct request_queue *q, struct request *rq,
++ struct bio *bio, gfp_t gfp_mask)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ struct bfq_io_cq *bic = icq_to_bic(rq->elv.icq);
++ const int rw = rq_data_dir(rq);
++ const int is_sync = rq_is_sync(rq);
++ struct bfq_queue *bfqq;
++ unsigned long flags;
++
++ might_sleep_if(gfpflags_allow_blocking(gfp_mask));
++
++ bfq_check_ioprio_change(bic, bio);
++
++ spin_lock_irqsave(q->queue_lock, flags);
++
++ if (!bic)
++ goto queue_fail;
++
++ bfq_bic_update_cgroup(bic, bio);
++
++ bfqq = bic_to_bfqq(bic, is_sync);
++ if (!bfqq || bfqq == &bfqd->oom_bfqq) {
++ bfqq = bfq_get_queue(bfqd, bio, is_sync, bic, gfp_mask);
++ bic_set_bfqq(bic, bfqq, is_sync);
++ if (is_sync) {
++ if (bfqd->large_burst)
++ bfq_mark_bfqq_in_large_burst(bfqq);
++ else
++ bfq_clear_bfqq_in_large_burst(bfqq);
++ }
++ }
++
++ bfqq->allocated[rw]++;
++ atomic_inc(&bfqq->ref);
++ bfq_log_bfqq(bfqd, bfqq, "set_request: bfqq %p, %d", bfqq,
++ atomic_read(&bfqq->ref));
++
++ rq->elv.priv[0] = bic;
++ rq->elv.priv[1] = bfqq;
++
++ spin_unlock_irqrestore(q->queue_lock, flags);
++
++ return 0;
++
++queue_fail:
++ bfq_schedule_dispatch(bfqd);
++ spin_unlock_irqrestore(q->queue_lock, flags);
++
++ return 1;
++}
++
++static void bfq_kick_queue(struct work_struct *work)
++{
++ struct bfq_data *bfqd =
++ container_of(work, struct bfq_data, unplug_work);
++ struct request_queue *q = bfqd->queue;
++
++ spin_lock_irq(q->queue_lock);
++ __blk_run_queue(q);
++ spin_unlock_irq(q->queue_lock);
++}
++
++/*
++ * Handler of the expiration of the timer running if the in-service queue
++ * is idling inside its time slice.
++ */
++static void bfq_idle_slice_timer(unsigned long data)
++{
++ struct bfq_data *bfqd = (struct bfq_data *)data;
++ struct bfq_queue *bfqq;
++ unsigned long flags;
++ enum bfqq_expiration reason;
++
++ spin_lock_irqsave(bfqd->queue->queue_lock, flags);
++
++ bfqq = bfqd->in_service_queue;
++ /*
++ * Theoretical race here: the in-service queue can be NULL or
++ * different from the queue that was idling if the timer handler
++ * spins on the queue_lock and a new request arrives for the
++ * current queue and there is a full dispatch cycle that changes
++ * the in-service queue. This can hardly happen, but in the worst
++ * case we just expire a queue too early.
++ */
++ if (bfqq) {
++ bfq_log_bfqq(bfqd, bfqq, "slice_timer expired");
++ if (bfq_bfqq_budget_timeout(bfqq))
++ /*
++ * Also here the queue can be safely expired
++ * for budget timeout without wasting
++ * guarantees
++ */
++ reason = BFQ_BFQQ_BUDGET_TIMEOUT;
++ else if (bfqq->queued[0] == 0 && bfqq->queued[1] == 0)
++ /*
++ * The queue may not be empty upon timer expiration,
++ * because we may not disable the timer when the
++ * first request of the in-service queue arrives
++ * during disk idling.
++ */
++ reason = BFQ_BFQQ_TOO_IDLE;
++ else
++ goto schedule_dispatch;
++
++ bfq_bfqq_expire(bfqd, bfqq, true, reason);
++ }
++
++schedule_dispatch:
++ bfq_schedule_dispatch(bfqd);
++
++ spin_unlock_irqrestore(bfqd->queue->queue_lock, flags);
++}
++
++static void bfq_shutdown_timer_wq(struct bfq_data *bfqd)
++{
++ del_timer_sync(&bfqd->idle_slice_timer);
++ cancel_work_sync(&bfqd->unplug_work);
++}
++
++static void __bfq_put_async_bfqq(struct bfq_data *bfqd,
++ struct bfq_queue **bfqq_ptr)
++{
++ struct bfq_group *root_group = bfqd->root_group;
++ struct bfq_queue *bfqq = *bfqq_ptr;
++
++ bfq_log(bfqd, "put_async_bfqq: %p", bfqq);
++ if (bfqq) {
++ bfq_bfqq_move(bfqd, bfqq, &bfqq->entity, root_group);
++ bfq_log_bfqq(bfqd, bfqq, "put_async_bfqq: putting %p, %d",
++ bfqq, atomic_read(&bfqq->ref));
++ bfq_put_queue(bfqq);
++ *bfqq_ptr = NULL;
++ }
++}
++
++/*
++ * Release all the bfqg references to its async queues. If we are
++ * deallocating the group these queues may still contain requests, so
++ * we reparent them to the root cgroup (i.e., the only one that will
++ * exist for sure until all the requests on a device are gone).
++ */
++static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg)
++{
++ int i, j;
++
++ for (i = 0; i < 2; i++)
++ for (j = 0; j < IOPRIO_BE_NR; j++)
++ __bfq_put_async_bfqq(bfqd, &bfqg->async_bfqq[i][j]);
++
++ __bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq);
++}
++
++static void bfq_exit_queue(struct elevator_queue *e)
++{
++ struct bfq_data *bfqd = e->elevator_data;
++ struct request_queue *q = bfqd->queue;
++ struct bfq_queue *bfqq, *n;
++
++ bfq_shutdown_timer_wq(bfqd);
++
++ spin_lock_irq(q->queue_lock);
++
++ BUG_ON(bfqd->in_service_queue);
++ list_for_each_entry_safe(bfqq, n, &bfqd->idle_list, bfqq_list)
++ bfq_deactivate_bfqq(bfqd, bfqq, 0);
++
++ spin_unlock_irq(q->queue_lock);
++
++ bfq_shutdown_timer_wq(bfqd);
++
++ synchronize_rcu();
++
++ BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ blkcg_deactivate_policy(q, &blkcg_policy_bfq);
++#else
++ kfree(bfqd->root_group);
++#endif
++
++ kfree(bfqd);
++}
++
++static void bfq_init_root_group(struct bfq_group *root_group,
++ struct bfq_data *bfqd)
++{
++ int i;
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ root_group->entity.parent = NULL;
++ root_group->my_entity = NULL;
++ root_group->bfqd = bfqd;
++#endif
++ for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
++ root_group->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
++}
++
++static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
++{
++ struct bfq_data *bfqd;
++ struct elevator_queue *eq;
++
++ eq = elevator_alloc(q, e);
++ if (!eq)
++ return -ENOMEM;
++
++ bfqd = kzalloc_node(sizeof(*bfqd), GFP_KERNEL, q->node);
++ if (!bfqd) {
++ kobject_put(&eq->kobj);
++ return -ENOMEM;
++ }
++ eq->elevator_data = bfqd;
++
++ /*
++ * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues.
++ * Grab a permanent reference to it, so that the normal code flow
++ * will not attempt to free it.
++ */
++ bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0);
++ atomic_inc(&bfqd->oom_bfqq.ref);
++ bfqd->oom_bfqq.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO;
++ bfqd->oom_bfqq.new_ioprio_class = IOPRIO_CLASS_BE;
++ bfqd->oom_bfqq.entity.new_weight =
++ bfq_ioprio_to_weight(bfqd->oom_bfqq.new_ioprio);
++ /*
++ * Trigger weight initialization, according to ioprio, at the
++ * oom_bfqq's first activation. The oom_bfqq's ioprio and ioprio
++ * class won't be changed any more.
++ */
++ bfqd->oom_bfqq.entity.prio_changed = 1;
++
++ bfqd->queue = q;
++
++ spin_lock_irq(q->queue_lock);
++ q->elevator = eq;
++ spin_unlock_irq(q->queue_lock);
++
++ bfqd->root_group = bfq_create_group_hierarchy(bfqd, q->node);
++ if (!bfqd->root_group)
++ goto out_free;
++ bfq_init_root_group(bfqd->root_group, bfqd);
++ bfq_init_entity(&bfqd->oom_bfqq.entity, bfqd->root_group);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ bfqd->active_numerous_groups = 0;
++#endif
++
++ init_timer(&bfqd->idle_slice_timer);
++ bfqd->idle_slice_timer.function = bfq_idle_slice_timer;
++ bfqd->idle_slice_timer.data = (unsigned long)bfqd;
++
++ bfqd->queue_weights_tree = RB_ROOT;
++ bfqd->group_weights_tree = RB_ROOT;
++
++ INIT_WORK(&bfqd->unplug_work, bfq_kick_queue);
++
++ INIT_LIST_HEAD(&bfqd->active_list);
++ INIT_LIST_HEAD(&bfqd->idle_list);
++ INIT_HLIST_HEAD(&bfqd->burst_list);
++
++ bfqd->hw_tag = -1;
++
++ bfqd->bfq_max_budget = bfq_default_max_budget;
++
++ bfqd->bfq_fifo_expire[0] = bfq_fifo_expire[0];
++ bfqd->bfq_fifo_expire[1] = bfq_fifo_expire[1];
++ bfqd->bfq_back_max = bfq_back_max;
++ bfqd->bfq_back_penalty = bfq_back_penalty;
++ bfqd->bfq_slice_idle = bfq_slice_idle;
++ bfqd->bfq_class_idle_last_service = 0;
++ bfqd->bfq_max_budget_async_rq = bfq_max_budget_async_rq;
++ bfqd->bfq_timeout[BLK_RW_ASYNC] = bfq_timeout_async;
++ bfqd->bfq_timeout[BLK_RW_SYNC] = bfq_timeout_sync;
++
++ bfqd->bfq_requests_within_timer = 120;
++
++ bfqd->bfq_large_burst_thresh = 11;
++ bfqd->bfq_burst_interval = msecs_to_jiffies(500);
++
++ bfqd->low_latency = true;
++
++ bfqd->bfq_wr_coeff = 20;
++ bfqd->bfq_wr_rt_max_time = msecs_to_jiffies(300);
++ bfqd->bfq_wr_max_time = 0;
++ bfqd->bfq_wr_min_idle_time = msecs_to_jiffies(2000);
++ bfqd->bfq_wr_min_inter_arr_async = msecs_to_jiffies(500);
++ bfqd->bfq_wr_max_softrt_rate = 7000; /*
++					      * to play back or record a
++ * to playback or record a
++ * high-definition compressed
++ * video.
++ */
++ bfqd->wr_busy_queues = 0;
++ bfqd->busy_in_flight_queues = 0;
++ bfqd->const_seeky_busy_in_flight_queues = 0;
++
++ /*
++ * Begin by assuming, optimistically, that the device peak rate is
++ * equal to the highest reference rate.
++ */
++ bfqd->RT_prod = R_fast[blk_queue_nonrot(bfqd->queue)] *
++ T_fast[blk_queue_nonrot(bfqd->queue)];
++ bfqd->peak_rate = R_fast[blk_queue_nonrot(bfqd->queue)];
++ bfqd->device_speed = BFQ_BFQD_FAST;
++
++ return 0;
++
++out_free:
++ kfree(bfqd);
++ kobject_put(&eq->kobj);
++ return -ENOMEM;
++}
++
++static void bfq_slab_kill(void)
++{
++ kmem_cache_destroy(bfq_pool);
++}
++
++static int __init bfq_slab_setup(void)
++{
++ bfq_pool = KMEM_CACHE(bfq_queue, 0);
++ if (!bfq_pool)
++ return -ENOMEM;
++ return 0;
++}
++
++static ssize_t bfq_var_show(unsigned int var, char *page)
++{
++ return sprintf(page, "%d\n", var);
++}
++
++static ssize_t bfq_var_store(unsigned long *var, const char *page,
++ size_t count)
++{
++ unsigned long new_val;
++ int ret = kstrtoul(page, 10, &new_val);
++
++ if (ret == 0)
++ *var = new_val;
++
++ return count;
++}
++
++static ssize_t bfq_wr_max_time_show(struct elevator_queue *e, char *page)
++{
++ struct bfq_data *bfqd = e->elevator_data;
++
++ return sprintf(page, "%d\n", bfqd->bfq_wr_max_time > 0 ?
++ jiffies_to_msecs(bfqd->bfq_wr_max_time) :
++ jiffies_to_msecs(bfq_wr_duration(bfqd)));
++}
++
++static ssize_t bfq_weights_show(struct elevator_queue *e, char *page)
++{
++ struct bfq_queue *bfqq;
++ struct bfq_data *bfqd = e->elevator_data;
++ ssize_t num_char = 0;
++
++ num_char += sprintf(page + num_char, "Tot reqs queued %d\n\n",
++ bfqd->queued);
++
++ spin_lock_irq(bfqd->queue->queue_lock);
++
++ num_char += sprintf(page + num_char, "Active:\n");
++ list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list) {
++ num_char += sprintf(page + num_char,
++ "pid%d: weight %hu, nr_queued %d %d, ",
++ bfqq->pid,
++ bfqq->entity.weight,
++ bfqq->queued[0],
++ bfqq->queued[1]);
++ num_char += sprintf(page + num_char,
++ "dur %d/%u\n",
++ jiffies_to_msecs(
++ jiffies -
++ bfqq->last_wr_start_finish),
++ jiffies_to_msecs(bfqq->wr_cur_max_time));
++ }
++
++ num_char += sprintf(page + num_char, "Idle:\n");
++ list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list) {
++ num_char += sprintf(page + num_char,
++ "pid%d: weight %hu, dur %d/%u\n",
++ bfqq->pid,
++ bfqq->entity.weight,
++ jiffies_to_msecs(jiffies -
++ bfqq->last_wr_start_finish),
++ jiffies_to_msecs(bfqq->wr_cur_max_time));
++ }
++
++ spin_unlock_irq(bfqd->queue->queue_lock);
++
++ return num_char;
++}
++
++#define SHOW_FUNCTION(__FUNC, __VAR, __CONV) \
++static ssize_t __FUNC(struct elevator_queue *e, char *page) \
++{ \
++ struct bfq_data *bfqd = e->elevator_data; \
++ unsigned int __data = __VAR; \
++ if (__CONV) \
++ __data = jiffies_to_msecs(__data); \
++ return bfq_var_show(__data, (page)); \
++}
++SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 1);
++SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 1);
++SHOW_FUNCTION(bfq_back_seek_max_show, bfqd->bfq_back_max, 0);
++SHOW_FUNCTION(bfq_back_seek_penalty_show, bfqd->bfq_back_penalty, 0);
++SHOW_FUNCTION(bfq_slice_idle_show, bfqd->bfq_slice_idle, 1);
++SHOW_FUNCTION(bfq_max_budget_show, bfqd->bfq_user_max_budget, 0);
++SHOW_FUNCTION(bfq_max_budget_async_rq_show,
++ bfqd->bfq_max_budget_async_rq, 0);
++SHOW_FUNCTION(bfq_timeout_sync_show, bfqd->bfq_timeout[BLK_RW_SYNC], 1);
++SHOW_FUNCTION(bfq_timeout_async_show, bfqd->bfq_timeout[BLK_RW_ASYNC], 1);
++SHOW_FUNCTION(bfq_low_latency_show, bfqd->low_latency, 0);
++SHOW_FUNCTION(bfq_wr_coeff_show, bfqd->bfq_wr_coeff, 0);
++SHOW_FUNCTION(bfq_wr_rt_max_time_show, bfqd->bfq_wr_rt_max_time, 1);
++SHOW_FUNCTION(bfq_wr_min_idle_time_show, bfqd->bfq_wr_min_idle_time, 1);
++SHOW_FUNCTION(bfq_wr_min_inter_arr_async_show, bfqd->bfq_wr_min_inter_arr_async,
++ 1);
++SHOW_FUNCTION(bfq_wr_max_softrt_rate_show, bfqd->bfq_wr_max_softrt_rate, 0);
++#undef SHOW_FUNCTION
++
++#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV) \
++static ssize_t \
++__FUNC(struct elevator_queue *e, const char *page, size_t count) \
++{ \
++ struct bfq_data *bfqd = e->elevator_data; \
++ unsigned long uninitialized_var(__data); \
++ int ret = bfq_var_store(&__data, (page), count); \
++ if (__data < (MIN)) \
++ __data = (MIN); \
++ else if (__data > (MAX)) \
++ __data = (MAX); \
++ if (__CONV) \
++ *(__PTR) = msecs_to_jiffies(__data); \
++ else \
++ *(__PTR) = __data; \
++ return ret; \
++}
++STORE_FUNCTION(bfq_fifo_expire_sync_store, &bfqd->bfq_fifo_expire[1], 1,
++ INT_MAX, 1);
++STORE_FUNCTION(bfq_fifo_expire_async_store, &bfqd->bfq_fifo_expire[0], 1,
++ INT_MAX, 1);
++STORE_FUNCTION(bfq_back_seek_max_store, &bfqd->bfq_back_max, 0, INT_MAX, 0);
++STORE_FUNCTION(bfq_back_seek_penalty_store, &bfqd->bfq_back_penalty, 1,
++ INT_MAX, 0);
++STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_max_budget_async_rq_store, &bfqd->bfq_max_budget_async_rq,
++ 1, INT_MAX, 0);
++STORE_FUNCTION(bfq_timeout_async_store, &bfqd->bfq_timeout[BLK_RW_ASYNC], 0,
++ INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_coeff_store, &bfqd->bfq_wr_coeff, 1, INT_MAX, 0);
++STORE_FUNCTION(bfq_wr_max_time_store, &bfqd->bfq_wr_max_time, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_rt_max_time_store, &bfqd->bfq_wr_rt_max_time, 0, INT_MAX,
++ 1);
++STORE_FUNCTION(bfq_wr_min_idle_time_store, &bfqd->bfq_wr_min_idle_time, 0,
++ INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_min_inter_arr_async_store,
++ &bfqd->bfq_wr_min_inter_arr_async, 0, INT_MAX, 1);
++STORE_FUNCTION(bfq_wr_max_softrt_rate_store, &bfqd->bfq_wr_max_softrt_rate, 0,
++ INT_MAX, 0);
++#undef STORE_FUNCTION
++
++/* do nothing for the moment */
++static ssize_t bfq_weights_store(struct elevator_queue *e,
++ const char *page, size_t count)
++{
++ return count;
++}
++
++static unsigned long bfq_estimated_max_budget(struct bfq_data *bfqd)
++{
++ u64 timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
++
++ if (bfqd->peak_rate_samples >= BFQ_PEAK_RATE_SAMPLES)
++ return bfq_calc_max_budget(bfqd->peak_rate, timeout);
++ else
++ return bfq_default_max_budget;
++}
++
++static ssize_t bfq_max_budget_store(struct elevator_queue *e,
++ const char *page, size_t count)
++{
++ struct bfq_data *bfqd = e->elevator_data;
++ unsigned long uninitialized_var(__data);
++ int ret = bfq_var_store(&__data, (page), count);
++
++ if (__data == 0)
++ bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++ else {
++ if (__data > INT_MAX)
++ __data = INT_MAX;
++ bfqd->bfq_max_budget = __data;
++ }
++
++ bfqd->bfq_user_max_budget = __data;
++
++ return ret;
++}
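++
++/*
++ * Usage sketch (sysfs path assumed, legacy request_queue attributes):
++ * writing 0 to /sys/block/<dev>/queue/iosched/max_budget re-enables the
++ * auto-estimated budget, derived from the measured peak rate and the
++ * sync timeout, while any positive value (capped to INT_MAX) pins the
++ * maximum budget until 0 is written again.
++ */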
++
++static ssize_t bfq_timeout_sync_store(struct elevator_queue *e,
++ const char *page, size_t count)
++{
++ struct bfq_data *bfqd = e->elevator_data;
++ unsigned long uninitialized_var(__data);
++ int ret = bfq_var_store(&__data, (page), count);
++
++ if (__data < 1)
++ __data = 1;
++ else if (__data > INT_MAX)
++ __data = INT_MAX;
++
++ bfqd->bfq_timeout[BLK_RW_SYNC] = msecs_to_jiffies(__data);
++ if (bfqd->bfq_user_max_budget == 0)
++ bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++
++ return ret;
++}
++
++static ssize_t bfq_low_latency_store(struct elevator_queue *e,
++ const char *page, size_t count)
++{
++ struct bfq_data *bfqd = e->elevator_data;
++ unsigned long uninitialized_var(__data);
++ int ret = bfq_var_store(&__data, (page), count);
++
++ if (__data > 1)
++ __data = 1;
++ if (__data == 0 && bfqd->low_latency != 0)
++ bfq_end_wr(bfqd);
++ bfqd->low_latency = __data;
++
++ return ret;
++}
++
++#define BFQ_ATTR(name) \
++ __ATTR(name, S_IRUGO|S_IWUSR, bfq_##name##_show, bfq_##name##_store)
++
++static struct elv_fs_entry bfq_attrs[] = {
++ BFQ_ATTR(fifo_expire_sync),
++ BFQ_ATTR(fifo_expire_async),
++ BFQ_ATTR(back_seek_max),
++ BFQ_ATTR(back_seek_penalty),
++ BFQ_ATTR(slice_idle),
++ BFQ_ATTR(max_budget),
++ BFQ_ATTR(max_budget_async_rq),
++ BFQ_ATTR(timeout_sync),
++ BFQ_ATTR(timeout_async),
++ BFQ_ATTR(low_latency),
++ BFQ_ATTR(wr_coeff),
++ BFQ_ATTR(wr_max_time),
++ BFQ_ATTR(wr_rt_max_time),
++ BFQ_ATTR(wr_min_idle_time),
++ BFQ_ATTR(wr_min_inter_arr_async),
++ BFQ_ATTR(wr_max_softrt_rate),
++ BFQ_ATTR(weights),
++ __ATTR_NULL
++};
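++
++/*
++ * The attributes above appear per device under the iosched directory;
++ * a usage sketch (device name assumed):
++ *
++ *   # cat /sys/block/sda/queue/iosched/slice_idle        <- value in ms
++ *   # echo 0 > /sys/block/sda/queue/iosched/slice_idle   <- disable idling
++ *   # echo 1 > /sys/block/sda/queue/iosched/low_latency  <- weight-raising on
++ *
++ * Time-valued attributes are converted to/from milliseconds by the
++ * SHOW/STORE macros above (those with __CONV != 0).
++ */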
++
++static struct elevator_type iosched_bfq = {
++ .ops = {
++ .elevator_merge_fn = bfq_merge,
++ .elevator_merged_fn = bfq_merged_request,
++ .elevator_merge_req_fn = bfq_merged_requests,
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ .elevator_bio_merged_fn = bfq_bio_merged,
++#endif
++ .elevator_allow_merge_fn = bfq_allow_merge,
++ .elevator_dispatch_fn = bfq_dispatch_requests,
++ .elevator_add_req_fn = bfq_insert_request,
++ .elevator_activate_req_fn = bfq_activate_request,
++ .elevator_deactivate_req_fn = bfq_deactivate_request,
++ .elevator_completed_req_fn = bfq_completed_request,
++ .elevator_former_req_fn = elv_rb_former_request,
++ .elevator_latter_req_fn = elv_rb_latter_request,
++ .elevator_init_icq_fn = bfq_init_icq,
++ .elevator_exit_icq_fn = bfq_exit_icq,
++ .elevator_set_req_fn = bfq_set_request,
++ .elevator_put_req_fn = bfq_put_request,
++ .elevator_may_queue_fn = bfq_may_queue,
++ .elevator_init_fn = bfq_init_queue,
++ .elevator_exit_fn = bfq_exit_queue,
++ },
++ .icq_size = sizeof(struct bfq_io_cq),
++ .icq_align = __alignof__(struct bfq_io_cq),
++ .elevator_attrs = bfq_attrs,
++ .elevator_name = "bfq",
++ .elevator_owner = THIS_MODULE,
++};
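++
++/*
++ * With the elevator registered under the name "bfq" above, it can be
++ * selected per device at runtime (device name assumed):
++ *
++ *   # echo bfq > /sys/block/sda/queue/scheduler
++ *   # cat /sys/block/sda/queue/scheduler
++ *
++ * where the active scheduler is shown in square brackets among those
++ * built into the kernel.
++ */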
++
++static int __init bfq_init(void)
++{
++ int ret;
++
++ /*
++ * Can be 0 on HZ < 1000 setups.
++ */
++ if (bfq_slice_idle == 0)
++ bfq_slice_idle = 1;
++
++ if (bfq_timeout_async == 0)
++ bfq_timeout_async = 1;
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ ret = blkcg_policy_register(&blkcg_policy_bfq);
++ if (ret)
++ return ret;
++#endif
++
++ ret = -ENOMEM;
++ if (bfq_slab_setup())
++ goto err_pol_unreg;
++
++ /*
++ * Times to load large popular applications for the typical systems
++ * installed on the reference devices (see the comments before the
++ * definitions of the two arrays).
++ */
++ T_slow[0] = msecs_to_jiffies(2600);
++ T_slow[1] = msecs_to_jiffies(1000);
++ T_fast[0] = msecs_to_jiffies(5500);
++ T_fast[1] = msecs_to_jiffies(2000);
++
++ /*
++ * Thresholds that determine the switch between speed classes (see
++ * the comments before the definition of the array).
++ */
++ device_speed_thresh[0] = (R_fast[0] + R_slow[0]) / 2;
++ device_speed_thresh[1] = (R_fast[1] + R_slow[1]) / 2;
++
++ ret = elv_register(&iosched_bfq);
++ if (ret)
++ goto err_pol_unreg;
++
++ pr_info("BFQ I/O-scheduler: v7r11");
++
++ return 0;
++
++err_pol_unreg:
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ blkcg_policy_unregister(&blkcg_policy_bfq);
++#endif
++ return ret;
++}
++
++static void __exit bfq_exit(void)
++{
++ elv_unregister(&iosched_bfq);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ blkcg_policy_unregister(&blkcg_policy_bfq);
++#endif
++ bfq_slab_kill();
++}
++
++module_init(bfq_init);
++module_exit(bfq_exit);
++
++MODULE_AUTHOR("Arianna Avanzini, Fabio Checconi, Paolo Valente");
++MODULE_LICENSE("GPL");
+diff --git a/block/bfq-sched.c b/block/bfq-sched.c
+new file mode 100644
+index 0000000..a5ed694
+--- /dev/null
++++ b/block/bfq-sched.c
+@@ -0,0 +1,1199 @@
++/*
++ * BFQ: Hierarchical B-WF2Q+ scheduler.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ * Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++#define for_each_entity(entity) \
++ for (; entity ; entity = entity->parent)
++
++#define for_each_entity_safe(entity, parent) \
++ for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
++
++
++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
++ int extract,
++ struct bfq_data *bfqd);
++
++static struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
++
++static void bfq_update_budget(struct bfq_entity *next_in_service)
++{
++ struct bfq_entity *bfqg_entity;
++ struct bfq_group *bfqg;
++ struct bfq_sched_data *group_sd;
++
++ BUG_ON(!next_in_service);
++
++ group_sd = next_in_service->sched_data;
++
++ bfqg = container_of(group_sd, struct bfq_group, sched_data);
++ /*
++ * bfq_group's my_entity field is not NULL only if the group
++ * is not the root group. We must not touch the root entity
++ * as it must never become an in-service entity.
++ */
++ bfqg_entity = bfqg->my_entity;
++ if (bfqg_entity)
++ bfqg_entity->budget = next_in_service->budget;
++}
++
++static int bfq_update_next_in_service(struct bfq_sched_data *sd)
++{
++ struct bfq_entity *next_in_service;
++
++ if (sd->in_service_entity)
++ /* will update/requeue at the end of service */
++ return 0;
++
++ /*
++ * NOTE: this can be improved in many ways, such as returning
++ * 1 (and thus propagating upwards the update) only when the
++ * budget changes, or caching the bfqq that will be scheduled
++	 * next from this subtree. For now we worry more about
++ * correctness than about performance...
++ */
++ next_in_service = bfq_lookup_next_entity(sd, 0, NULL);
++ sd->next_in_service = next_in_service;
++
++ if (next_in_service)
++ bfq_update_budget(next_in_service);
++
++ return 1;
++}
++
++static void bfq_check_next_in_service(struct bfq_sched_data *sd,
++ struct bfq_entity *entity)
++{
++ BUG_ON(sd->next_in_service != entity);
++}
++#else
++#define for_each_entity(entity) \
++ for (; entity ; entity = NULL)
++
++#define for_each_entity_safe(entity, parent) \
++ for (parent = NULL; entity ; entity = parent)
++
++static int bfq_update_next_in_service(struct bfq_sched_data *sd)
++{
++ return 0;
++}
++
++static void bfq_check_next_in_service(struct bfq_sched_data *sd,
++ struct bfq_entity *entity)
++{
++}
++
++static void bfq_update_budget(struct bfq_entity *next_in_service)
++{
++}
++#endif
++
++/*
++ * Shift for timestamp calculations. This actually limits the maximum
++ * service allowed in one timestamp delta (small shift values increase it),
++ * the maximum total weight that can be used for the queues in the system
++ * (big shift values increase it), and the period of virtual time
++ * wraparounds.
++ */
++#define WFQ_SERVICE_SHIFT 22
++
++/**
++ * bfq_gt - compare two timestamps.
++ * @a: first ts.
++ * @b: second ts.
++ *
++ * Return @a > @b, dealing with wrapping correctly.
++ */
++static int bfq_gt(u64 a, u64 b)
++{
++ return (s64)(a - b) > 0;
++}
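++
++/*
++ * The signed subtraction makes the comparison robust to wraparound of
++ * the virtual clock. For instance, with a = 5 and b = U64_MAX - 2
++ * (a timestamp taken just after the wrap vs. one taken just before it),
++ * a - b is 8 as an unsigned 64-bit value, so (s64)(a - b) > 0 and a is
++ * correctly considered the later timestamp although it is numerically
++ * smaller than b.
++ */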
++
++static struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = NULL;
++
++ BUG_ON(!entity);
++
++ if (!entity->my_sched_data)
++ bfqq = container_of(entity, struct bfq_queue, entity);
++
++ return bfqq;
++}
++
++
++/**
++ * bfq_delta - map service into the virtual time domain.
++ * @service: amount of service.
++ * @weight: scale factor (weight of an entity or weight sum).
++ */
++static u64 bfq_delta(unsigned long service, unsigned long weight)
++{
++ u64 d = (u64)service << WFQ_SERVICE_SHIFT;
++
++ do_div(d, weight);
++ return d;
++}
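++
++/*
++ * Worked example: charging 64 sectors of service to an entity of
++ * weight 100 advances its timestamps by (64 << 22) / 100, roughly
++ * 2.7 million units of virtual time, while an entity of weight 200
++ * pays half as much for the same service; this is how weights turn
++ * into bandwidth shares. WFQ_SERVICE_SHIFT bounds the service that
++ * can be accounted in a single delta, as explained above.
++ */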
++
++/**
++ * bfq_calc_finish - assign the finish time to an entity.
++ * @entity: the entity to act upon.
++ * @service: the service to be charged to the entity.
++ */
++static void bfq_calc_finish(struct bfq_entity *entity, unsigned long service)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++ BUG_ON(entity->weight == 0);
++
++ entity->finish = entity->start +
++ bfq_delta(service, entity->weight);
++
++ if (bfqq) {
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "calc_finish: serv %lu, w %d",
++ service, entity->weight);
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "calc_finish: start %llu, finish %llu, delta %llu",
++ entity->start, entity->finish,
++ bfq_delta(service, entity->weight));
++ }
++}
++
++/**
++ * bfq_entity_of - get an entity from a node.
++ * @node: the node field of the entity.
++ *
++ * Convert a node pointer to the corresponding entity. This is used only
++ * to simplify the logic of some functions and not as the generic
++ * conversion mechanism because, e.g., in the tree walking functions,
++ * the check for a %NULL value would be redundant.
++ */
++static struct bfq_entity *bfq_entity_of(struct rb_node *node)
++{
++ struct bfq_entity *entity = NULL;
++
++ if (node)
++ entity = rb_entry(node, struct bfq_entity, rb_node);
++
++ return entity;
++}
++
++/**
++ * bfq_extract - remove an entity from a tree.
++ * @root: the tree root.
++ * @entity: the entity to remove.
++ */
++static void bfq_extract(struct rb_root *root, struct bfq_entity *entity)
++{
++ BUG_ON(entity->tree != root);
++
++ entity->tree = NULL;
++ rb_erase(&entity->rb_node, root);
++}
++
++/**
++ * bfq_idle_extract - extract an entity from the idle tree.
++ * @st: the service tree of the owning @entity.
++ * @entity: the entity being removed.
++ */
++static void bfq_idle_extract(struct bfq_service_tree *st,
++ struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ struct rb_node *next;
++
++ BUG_ON(entity->tree != &st->idle);
++
++ if (entity == st->first_idle) {
++ next = rb_next(&entity->rb_node);
++ st->first_idle = bfq_entity_of(next);
++ }
++
++ if (entity == st->last_idle) {
++ next = rb_prev(&entity->rb_node);
++ st->last_idle = bfq_entity_of(next);
++ }
++
++ bfq_extract(&st->idle, entity);
++
++ if (bfqq)
++ list_del(&bfqq->bfqq_list);
++}
++
++/**
++ * bfq_insert - generic tree insertion.
++ * @root: tree root.
++ * @entity: entity to insert.
++ *
++ * This is used for the idle and the active tree, since they are both
++ * ordered by finish time.
++ */
++static void bfq_insert(struct rb_root *root, struct bfq_entity *entity)
++{
++ struct bfq_entity *entry;
++ struct rb_node **node = &root->rb_node;
++ struct rb_node *parent = NULL;
++
++ BUG_ON(entity->tree);
++
++ while (*node) {
++ parent = *node;
++ entry = rb_entry(parent, struct bfq_entity, rb_node);
++
++ if (bfq_gt(entry->finish, entity->finish))
++ node = &parent->rb_left;
++ else
++ node = &parent->rb_right;
++ }
++
++ rb_link_node(&entity->rb_node, parent, node);
++ rb_insert_color(&entity->rb_node, root);
++
++ entity->tree = root;
++}
++
++/**
++ * bfq_update_min - update the min_start field of an entity.
++ * @entity: the entity to update.
++ * @node: one of its children.
++ *
++ * This function is called when @entity may store an invalid value for
++ * min_start due to updates to the active tree. The function assumes
++ * that the subtree rooted at @node (which may be its left or its right
++ * child) has a valid min_start value.
++ */
++static void bfq_update_min(struct bfq_entity *entity, struct rb_node *node)
++{
++ struct bfq_entity *child;
++
++ if (node) {
++ child = rb_entry(node, struct bfq_entity, rb_node);
++ if (bfq_gt(entity->min_start, child->min_start))
++ entity->min_start = child->min_start;
++ }
++}
++
++/**
++ * bfq_update_active_node - recalculate min_start.
++ * @node: the node to update.
++ *
++ * @node may have changed position or one of its children may have moved,
++ * this function updates its min_start value. The left and right subtrees
++ * are assumed to hold a correct min_start value.
++ */
++static void bfq_update_active_node(struct rb_node *node)
++{
++ struct bfq_entity *entity = rb_entry(node, struct bfq_entity, rb_node);
++
++ entity->min_start = entity->start;
++ bfq_update_min(entity, node->rb_right);
++ bfq_update_min(entity, node->rb_left);
++}
++
++/**
++ * bfq_update_active_tree - update min_start for the whole active tree.
++ * @node: the starting node.
++ *
++ * @node must be the deepest modified node after an update. This function
++ * updates its min_start using the values held by its children, assuming
++ * that they did not change, and then updates all the nodes that may have
++ * changed in the path to the root. The only nodes that may have changed
++ * are the ones in the path or their siblings.
++ */
++static void bfq_update_active_tree(struct rb_node *node)
++{
++ struct rb_node *parent;
++
++up:
++ bfq_update_active_node(node);
++
++ parent = rb_parent(node);
++ if (!parent)
++ return;
++
++ if (node == parent->rb_left && parent->rb_right)
++ bfq_update_active_node(parent->rb_right);
++ else if (parent->rb_left)
++ bfq_update_active_node(parent->rb_left);
++
++ node = parent;
++ goto up;
++}
++
++static void bfq_weights_tree_add(struct bfq_data *bfqd,
++ struct bfq_entity *entity,
++ struct rb_root *root);
++
++static void bfq_weights_tree_remove(struct bfq_data *bfqd,
++ struct bfq_entity *entity,
++ struct rb_root *root);
++
++
++/**
++ * bfq_active_insert - insert an entity in the active tree of its
++ * group/device.
++ * @st: the service tree of the entity.
++ * @entity: the entity being inserted.
++ *
++ * The active tree is ordered by finish time, but an extra key is kept
++ * per node, containing the minimum value for the start times of
++ * its children (and the node itself), so it's possible to search for
++ * the eligible node with the lowest finish time in logarithmic time.
++ */
++static void bfq_active_insert(struct bfq_service_tree *st,
++ struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ struct rb_node *node = &entity->rb_node;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ struct bfq_sched_data *sd = NULL;
++ struct bfq_group *bfqg = NULL;
++ struct bfq_data *bfqd = NULL;
++#endif
++
++ bfq_insert(&st->active, entity);
++
++ if (node->rb_left)
++ node = node->rb_left;
++ else if (node->rb_right)
++ node = node->rb_right;
++
++ bfq_update_active_tree(node);
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ sd = entity->sched_data;
++ bfqg = container_of(sd, struct bfq_group, sched_data);
++ BUG_ON(!bfqg);
++ bfqd = (struct bfq_data *)bfqg->bfqd;
++#endif
++ if (bfqq)
++ list_add(&bfqq->bfqq_list, &bfqq->bfqd->active_list);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ else { /* bfq_group */
++ BUG_ON(!bfqd);
++ bfq_weights_tree_add(bfqd, entity, &bfqd->group_weights_tree);
++ }
++ if (bfqg != bfqd->root_group) {
++ BUG_ON(!bfqg);
++ BUG_ON(!bfqd);
++ bfqg->active_entities++;
++ if (bfqg->active_entities == 2)
++ bfqd->active_numerous_groups++;
++ }
++#endif
++}
++
++/**
++ * bfq_ioprio_to_weight - calc a weight from an ioprio.
++ * @ioprio: the ioprio value to convert.
++ */
++static unsigned short bfq_ioprio_to_weight(int ioprio)
++{
++ BUG_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
++ return IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF - ioprio;
++}
++
++/**
++ * bfq_weight_to_ioprio - calc an ioprio from a weight.
++ * @weight: the weight value to convert.
++ *
++ * To preserve as much as possible the old only-ioprio user interface,
++ * 0 is used as an escape ioprio value for weights (numerically) equal
++ * to or larger than IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF.
++ */
++static unsigned short bfq_weight_to_ioprio(int weight)
++{
++ BUG_ON(weight < BFQ_MIN_WEIGHT || weight > BFQ_MAX_WEIGHT);
++ return IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF - weight < 0 ?
++ 0 : IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF - weight;
++}
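++
++/*
++ * With IOPRIO_BE_NR equal to 8 and BFQ_WEIGHT_CONVERSION_COEFF equal to
++ * 10, the two conversions above map best-effort ioprios 0..7 to weights
++ * 80..73 (the default ioprio 4 becomes weight 76), and map any weight of
++ * 80 or more back to the escape ioprio 0.
++ */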
++
++static void bfq_get_entity(struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++ if (bfqq) {
++ atomic_inc(&bfqq->ref);
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "get_entity: %p %d",
++ bfqq, atomic_read(&bfqq->ref));
++ }
++}
++
++/**
++ * bfq_find_deepest - find the deepest node that an extraction can modify.
++ * @node: the node being removed.
++ *
++ * Do the first step of an extraction in an rb tree, looking for the
++ * node that will replace @node, and returning the deepest node that
++ * the following modifications to the tree can touch. If @node is the
++ * last node in the tree return %NULL.
++ */
++static struct rb_node *bfq_find_deepest(struct rb_node *node)
++{
++ struct rb_node *deepest;
++
++ if (!node->rb_right && !node->rb_left)
++ deepest = rb_parent(node);
++ else if (!node->rb_right)
++ deepest = node->rb_left;
++ else if (!node->rb_left)
++ deepest = node->rb_right;
++ else {
++ deepest = rb_next(node);
++ if (deepest->rb_right)
++ deepest = deepest->rb_right;
++ else if (rb_parent(deepest) != node)
++ deepest = rb_parent(deepest);
++ }
++
++ return deepest;
++}
++
++/**
++ * bfq_active_extract - remove an entity from the active tree.
++ * @st: the service_tree containing the tree.
++ * @entity: the entity being removed.
++ */
++static void bfq_active_extract(struct bfq_service_tree *st,
++ struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ struct rb_node *node;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ struct bfq_sched_data *sd = NULL;
++ struct bfq_group *bfqg = NULL;
++ struct bfq_data *bfqd = NULL;
++#endif
++
++ node = bfq_find_deepest(&entity->rb_node);
++ bfq_extract(&st->active, entity);
++
++ if (node)
++ bfq_update_active_tree(node);
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ sd = entity->sched_data;
++ bfqg = container_of(sd, struct bfq_group, sched_data);
++ BUG_ON(!bfqg);
++ bfqd = (struct bfq_data *)bfqg->bfqd;
++#endif
++ if (bfqq)
++ list_del(&bfqq->bfqq_list);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ else { /* bfq_group */
++ BUG_ON(!bfqd);
++ bfq_weights_tree_remove(bfqd, entity,
++ &bfqd->group_weights_tree);
++ }
++ if (bfqg != bfqd->root_group) {
++ BUG_ON(!bfqg);
++ BUG_ON(!bfqd);
++ BUG_ON(!bfqg->active_entities);
++ bfqg->active_entities--;
++ if (bfqg->active_entities == 1) {
++ BUG_ON(!bfqd->active_numerous_groups);
++ bfqd->active_numerous_groups--;
++ }
++ }
++#endif
++}
++
++/**
++ * bfq_idle_insert - insert an entity into the idle tree.
++ * @st: the service tree containing the tree.
++ * @entity: the entity to insert.
++ */
++static void bfq_idle_insert(struct bfq_service_tree *st,
++ struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ struct bfq_entity *first_idle = st->first_idle;
++ struct bfq_entity *last_idle = st->last_idle;
++
++ if (!first_idle || bfq_gt(first_idle->finish, entity->finish))
++ st->first_idle = entity;
++ if (!last_idle || bfq_gt(entity->finish, last_idle->finish))
++ st->last_idle = entity;
++
++ bfq_insert(&st->idle, entity);
++
++ if (bfqq)
++ list_add(&bfqq->bfqq_list, &bfqq->bfqd->idle_list);
++}
++
++/**
++ * bfq_forget_entity - remove an entity from the wfq trees.
++ * @st: the service tree.
++ * @entity: the entity being removed.
++ *
++ * Update the device status and forget everything about @entity, putting
++ * the device reference to it, if it is a queue. Entities belonging to
++ * groups are not refcounted.
++ */
++static void bfq_forget_entity(struct bfq_service_tree *st,
++ struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ struct bfq_sched_data *sd;
++
++ BUG_ON(!entity->on_st);
++
++ entity->on_st = 0;
++ st->wsum -= entity->weight;
++ if (bfqq) {
++ sd = entity->sched_data;
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "forget_entity: %p %d",
++ bfqq, atomic_read(&bfqq->ref));
++ bfq_put_queue(bfqq);
++ }
++}
++
++/**
++ * bfq_put_idle_entity - release the idle tree ref of an entity.
++ * @st: service tree for the entity.
++ * @entity: the entity being released.
++ */
++static void bfq_put_idle_entity(struct bfq_service_tree *st,
++ struct bfq_entity *entity)
++{
++ bfq_idle_extract(st, entity);
++ bfq_forget_entity(st, entity);
++}
++
++/**
++ * bfq_forget_idle - update the idle tree if necessary.
++ * @st: the service tree to act upon.
++ *
++ * To preserve the global O(log N) complexity we only remove one entry here;
++ * as the idle tree will not grow indefinitely this can be done safely.
++ */
++static void bfq_forget_idle(struct bfq_service_tree *st)
++{
++ struct bfq_entity *first_idle = st->first_idle;
++ struct bfq_entity *last_idle = st->last_idle;
++
++ if (RB_EMPTY_ROOT(&st->active) && last_idle &&
++ !bfq_gt(last_idle->finish, st->vtime)) {
++ /*
++ * Forget the whole idle tree, increasing the vtime past
++ * the last finish time of idle entities.
++ */
++ st->vtime = last_idle->finish;
++ }
++
++ if (first_idle && !bfq_gt(first_idle->finish, st->vtime))
++ bfq_put_idle_entity(st, first_idle);
++}
++
++static struct bfq_service_tree *
++__bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
++ struct bfq_entity *entity)
++{
++ struct bfq_service_tree *new_st = old_st;
++
++ if (entity->prio_changed) {
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ unsigned short prev_weight, new_weight;
++ struct bfq_data *bfqd = NULL;
++ struct rb_root *root;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ struct bfq_sched_data *sd;
++ struct bfq_group *bfqg;
++#endif
++
++ if (bfqq)
++ bfqd = bfqq->bfqd;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ else {
++ sd = entity->my_sched_data;
++ bfqg = container_of(sd, struct bfq_group, sched_data);
++ BUG_ON(!bfqg);
++ bfqd = (struct bfq_data *)bfqg->bfqd;
++ BUG_ON(!bfqd);
++ }
++#endif
++
++ BUG_ON(old_st->wsum < entity->weight);
++ old_st->wsum -= entity->weight;
++
++ if (entity->new_weight != entity->orig_weight) {
++ if (entity->new_weight < BFQ_MIN_WEIGHT ||
++ entity->new_weight > BFQ_MAX_WEIGHT) {
++ pr_crit("update_weight_prio: new_weight %d\n",
++ entity->new_weight);
++ BUG();
++ }
++ entity->orig_weight = entity->new_weight;
++ if (bfqq)
++ bfqq->ioprio =
++ bfq_weight_to_ioprio(entity->orig_weight);
++ }
++
++ if (bfqq)
++ bfqq->ioprio_class = bfqq->new_ioprio_class;
++ entity->prio_changed = 0;
++
++ /*
++ * NOTE: here we may be changing the weight too early,
++		 * which will cause unfairness. The correct approach
++ * would have required additional complexity to defer
++ * weight changes to the proper time instants (i.e.,
++ * when entity->finish <= old_st->vtime).
++ */
++ new_st = bfq_entity_service_tree(entity);
++
++ prev_weight = entity->weight;
++ new_weight = entity->orig_weight *
++ (bfqq ? bfqq->wr_coeff : 1);
++ /*
++ * If the weight of the entity changes, remove the entity
++ * from its old weight counter (if there is a counter
++ * associated with the entity), and add it to the counter
++ * associated with its new weight.
++ */
++ if (prev_weight != new_weight) {
++ root = bfqq ? &bfqd->queue_weights_tree :
++ &bfqd->group_weights_tree;
++ bfq_weights_tree_remove(bfqd, entity, root);
++ }
++ entity->weight = new_weight;
++ /*
++ * Add the entity to its weights tree only if it is
++ * not associated with a weight-raised queue.
++ */
++ if (prev_weight != new_weight &&
++ (bfqq ? bfqq->wr_coeff == 1 : 1))
++ /* If we get here, root has been initialized. */
++ bfq_weights_tree_add(bfqd, entity, root);
++
++ new_st->wsum += entity->weight;
++
++ if (new_st != old_st)
++ entity->start = new_st->vtime;
++ }
++
++ return new_st;
++}
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++static void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg);
++#endif
++
++/**
++ * bfq_bfqq_served - update the scheduler status after selection for
++ * service.
++ * @bfqq: the queue being served.
++ * @served: bytes to transfer.
++ *
++ * NOTE: this can be optimized, as the timestamps of upper level entities
++ * are synchronized every time a new bfqq is selected for service. For now,
++ * we keep it this way to better check consistency.
++ */
++static void bfq_bfqq_served(struct bfq_queue *bfqq, int served)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++ struct bfq_service_tree *st;
++
++ for_each_entity(entity) {
++ st = bfq_entity_service_tree(entity);
++
++ entity->service += served;
++ BUG_ON(entity->service > entity->budget);
++ BUG_ON(st->wsum == 0);
++
++ st->vtime += bfq_delta(served, st->wsum);
++ bfq_forget_idle(st);
++ }
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ bfqg_stats_set_start_empty_time(bfqq_group(bfqq));
++#endif
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "bfqq_served %d secs", served);
++}
++
++/**
++ * bfq_bfqq_charge_full_budget - set the service to the entity budget.
++ * @bfqq: the queue that needs a service update.
++ *
++ * When it's not possible to be fair in the service domain, because
++ * a queue is not consuming its budget fast enough (the meaning of
++ * fast depends on the timeout parameter), we charge it a full
++ * budget. In this way we should obtain a sort of time-domain
++ * fairness among all the seeky/slow queues.
++ */
++static void bfq_bfqq_charge_full_budget(struct bfq_queue *bfqq)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "charge_full_budget");
++
++ bfq_bfqq_served(bfqq, entity->budget - entity->service);
++}
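++
++/*
++ * Example (figures assumed): if a seeky queue consumed only 100 sectors
++ * of a 1000-sector budget before being expired for slowness, the call
++ * above charges it the remaining 900 as well, so its next timestamps are
++ * computed as if the whole budget had been used and the queue cannot
++ * gain extra share just by being slow.
++ */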
++
++/**
++ * __bfq_activate_entity - activate an entity.
++ * @entity: the entity being activated.
++ *
++ * Called whenever an entity is activated, i.e., it is not active and one
++ * of its children receives a new request, or has to be reactivated due to
++ * budget exhaustion. It uses the current budget of the entity (and the
++ * service received if @entity is active) of the queue to calculate its
++ * timestamps.
++ */
++static void __bfq_activate_entity(struct bfq_entity *entity)
++{
++ struct bfq_sched_data *sd = entity->sched_data;
++ struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++
++ if (entity == sd->in_service_entity) {
++ BUG_ON(entity->tree);
++ /*
++ * If we are requeueing the current entity we have
++ * to take care of not charging to it service it has
++ * not received.
++ */
++ bfq_calc_finish(entity, entity->service);
++ entity->start = entity->finish;
++ sd->in_service_entity = NULL;
++ } else if (entity->tree == &st->active) {
++ /*
++ * Requeueing an entity due to a change of some
++ * next_in_service entity below it. We reuse the
++ * old start time.
++ */
++ bfq_active_extract(st, entity);
++ } else if (entity->tree == &st->idle) {
++ /*
++ * Must be on the idle tree, bfq_idle_extract() will
++ * check for that.
++ */
++ bfq_idle_extract(st, entity);
++ entity->start = bfq_gt(st->vtime, entity->finish) ?
++ st->vtime : entity->finish;
++ } else {
++ /*
++ * The finish time of the entity may be invalid, and
++ * it is in the past for sure, otherwise the queue
++ * would have been on the idle tree.
++ */
++ entity->start = st->vtime;
++ st->wsum += entity->weight;
++ bfq_get_entity(entity);
++
++ BUG_ON(entity->on_st);
++ entity->on_st = 1;
++ }
++
++ st = __bfq_entity_update_weight_prio(st, entity);
++ bfq_calc_finish(entity, entity->budget);
++ bfq_active_insert(st, entity);
++}
++
++/**
++ * bfq_activate_entity - activate an entity and its ancestors if necessary.
++ * @entity: the entity to activate.
++ *
++ * Activate @entity and all the entities on the path from it to the root.
++ */
++static void bfq_activate_entity(struct bfq_entity *entity)
++{
++ struct bfq_sched_data *sd;
++
++ for_each_entity(entity) {
++ __bfq_activate_entity(entity);
++
++ sd = entity->sched_data;
++ if (!bfq_update_next_in_service(sd))
++ /*
++ * No need to propagate the activation to the
++ * upper entities, as they will be updated when
++ * the in-service entity is rescheduled.
++ */
++ break;
++ }
++}
++
++/**
++ * __bfq_deactivate_entity - deactivate an entity from its service tree.
++ * @entity: the entity to deactivate.
++ * @requeue: if false, the entity will not be put into the idle tree.
++ *
++ * Deactivate an entity, independently from its previous state. If the
++ * entity was not on a service tree just return, otherwise if it is on
++ * any scheduler tree, extract it from that tree, and if necessary
++ * and if the caller specified @requeue, put it on the idle tree.
++ *
++ * Return %1 if the caller should update the entity hierarchy, i.e.,
++ * if the entity was in service or if it was the next_in_service for
++ * its sched_data; return %0 otherwise.
++ */
++static int __bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
++{
++ struct bfq_sched_data *sd = entity->sched_data;
++ struct bfq_service_tree *st;
++ int was_in_service;
++ int ret = 0;
++
++ if (sd == NULL || !entity->on_st) /* never activated, or inactive */
++ return 0;
++
++ st = bfq_entity_service_tree(entity);
++ was_in_service = entity == sd->in_service_entity;
++
++ BUG_ON(was_in_service && entity->tree);
++
++ if (was_in_service) {
++ bfq_calc_finish(entity, entity->service);
++ sd->in_service_entity = NULL;
++ } else if (entity->tree == &st->active)
++ bfq_active_extract(st, entity);
++ else if (entity->tree == &st->idle)
++ bfq_idle_extract(st, entity);
++ else if (entity->tree)
++ BUG();
++
++ if (was_in_service || sd->next_in_service == entity)
++ ret = bfq_update_next_in_service(sd);
++
++ if (!requeue || !bfq_gt(entity->finish, st->vtime))
++ bfq_forget_entity(st, entity);
++ else
++ bfq_idle_insert(st, entity);
++
++ BUG_ON(sd->in_service_entity == entity);
++ BUG_ON(sd->next_in_service == entity);
++
++ return ret;
++}
++
++/**
++ * bfq_deactivate_entity - deactivate an entity.
++ * @entity: the entity to deactivate.
++ * @requeue: true if the entity can be put on the idle tree
++ */
++static void bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
++{
++ struct bfq_sched_data *sd;
++ struct bfq_entity *parent;
++
++ for_each_entity_safe(entity, parent) {
++ sd = entity->sched_data;
++
++ if (!__bfq_deactivate_entity(entity, requeue))
++ /*
++ * The parent entity is still backlogged, and
++ * we don't need to update it as it is still
++ * in service.
++ */
++ break;
++
++ if (sd->next_in_service)
++ /*
++ * The parent entity is still backlogged and
++ * the budgets on the path towards the root
++ * need to be updated.
++ */
++ goto update;
++
++ /*
++	 * If we reach this point, the parent is no longer backlogged and
++ * we want to propagate the dequeue upwards.
++ */
++ requeue = 1;
++ }
++
++ return;
++
++update:
++ entity = parent;
++ for_each_entity(entity) {
++ __bfq_activate_entity(entity);
++
++ sd = entity->sched_data;
++ if (!bfq_update_next_in_service(sd))
++ break;
++ }
++}
++
++/**
++ * bfq_update_vtime - update vtime if necessary.
++ * @st: the service tree to act upon.
++ *
++ * If necessary update the service tree vtime to have at least one
++ * eligible entity, skipping to its start time. Assumes that the
++ * active tree of the device is not empty.
++ *
++ * NOTE: this hierarchical implementation updates vtimes quite often,
++ * we may end up with reactivated processes getting timestamps after a
++ * vtime skip done because we needed a ->first_active entity on some
++ * intermediate node.
++ */
++static void bfq_update_vtime(struct bfq_service_tree *st)
++{
++ struct bfq_entity *entry;
++ struct rb_node *node = st->active.rb_node;
++
++ entry = rb_entry(node, struct bfq_entity, rb_node);
++ if (bfq_gt(entry->min_start, st->vtime)) {
++ st->vtime = entry->min_start;
++ bfq_forget_idle(st);
++ }
++}
++
++/**
++ * bfq_first_active_entity - find the eligible entity with
++ * the smallest finish time
++ * @st: the service tree to select from.
++ *
++ * This function searches for the first schedulable entity, starting from the
++ * root of the tree and descending into the left subtree whenever it contains
++ * at least one eligible (start >= vtime) entity. The path on
++ * the right is followed only if a) the left subtree contains no eligible
++ * entities and b) no eligible entity has been found yet.
++ */
++static struct bfq_entity *bfq_first_active_entity(struct bfq_service_tree *st)
++{
++ struct bfq_entity *entry, *first = NULL;
++ struct rb_node *node = st->active.rb_node;
++
++ while (node) {
++ entry = rb_entry(node, struct bfq_entity, rb_node);
++left:
++ if (!bfq_gt(entry->start, st->vtime))
++ first = entry;
++
++ BUG_ON(bfq_gt(entry->min_start, st->vtime));
++
++ if (node->rb_left) {
++ entry = rb_entry(node->rb_left,
++ struct bfq_entity, rb_node);
++ if (!bfq_gt(entry->min_start, st->vtime)) {
++ node = node->rb_left;
++ goto left;
++ }
++ }
++ if (first)
++ break;
++ node = node->rb_right;
++ }
++
++ BUG_ON(!first && !RB_EMPTY_ROOT(&st->active));
++ return first;
++}
++
++/**
++ * __bfq_lookup_next_entity - return the first eligible entity in @st.
++ * @st: the service tree.
++ *
++ * Update the virtual time in @st and return the first eligible entity
++ * it contains.
++ */
++static struct bfq_entity *__bfq_lookup_next_entity(struct bfq_service_tree *st,
++ bool force)
++{
++ struct bfq_entity *entity, *new_next_in_service = NULL;
++
++ if (RB_EMPTY_ROOT(&st->active))
++ return NULL;
++
++ bfq_update_vtime(st);
++ entity = bfq_first_active_entity(st);
++ BUG_ON(bfq_gt(entity->start, st->vtime));
++
++ /*
++	 * If the chosen entity does not match the sched_data's
++	 * next_in_service and we are forcibly serving the IDLE priority
++	 * class tree, bubble up the budget update.
++ */
++ if (unlikely(force && entity != entity->sched_data->next_in_service)) {
++ new_next_in_service = entity;
++ for_each_entity(new_next_in_service)
++ bfq_update_budget(new_next_in_service);
++ }
++
++ return entity;
++}
++
++/**
++ * bfq_lookup_next_entity - return the first eligible entity in @sd.
++ * @sd: the sched_data.
++ * @extract: if true the returned entity will be also extracted from @sd.
++ *
++ * NOTE: since we cache the next_in_service entity at each level of the
++ * hierarchy, the complexity of the lookup can be decreased with
++ * absolutely no effort by just returning the cached next_in_service value;
++ * we prefer to do full lookups to test the consistency of the data
++ * structures.
++ */
++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
++ int extract,
++ struct bfq_data *bfqd)
++{
++ struct bfq_service_tree *st = sd->service_tree;
++ struct bfq_entity *entity;
++ int i = 0;
++
++ BUG_ON(sd->in_service_entity);
++
++ if (bfqd &&
++ jiffies - bfqd->bfq_class_idle_last_service > BFQ_CL_IDLE_TIMEOUT) {
++ entity = __bfq_lookup_next_entity(st + BFQ_IOPRIO_CLASSES - 1,
++ true);
++ if (entity) {
++ i = BFQ_IOPRIO_CLASSES - 1;
++ bfqd->bfq_class_idle_last_service = jiffies;
++ sd->next_in_service = entity;
++ }
++ }
++ for (; i < BFQ_IOPRIO_CLASSES; i++) {
++ entity = __bfq_lookup_next_entity(st + i, false);
++ if (entity) {
++ if (extract) {
++ bfq_check_next_in_service(sd, entity);
++ bfq_active_extract(st + i, entity);
++ sd->in_service_entity = entity;
++ sd->next_in_service = NULL;
++ }
++ break;
++ }
++ }
++
++ return entity;
++}
++
++/*
++ * Get next queue for service.
++ */
++static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
++{
++ struct bfq_entity *entity = NULL;
++ struct bfq_sched_data *sd;
++ struct bfq_queue *bfqq;
++
++ BUG_ON(bfqd->in_service_queue);
++
++ if (bfqd->busy_queues == 0)
++ return NULL;
++
++ sd = &bfqd->root_group->sched_data;
++ for (; sd ; sd = entity->my_sched_data) {
++ entity = bfq_lookup_next_entity(sd, 1, bfqd);
++ BUG_ON(!entity);
++ entity->service = 0;
++ }
++
++ bfqq = bfq_entity_to_bfqq(entity);
++ BUG_ON(!bfqq);
++
++ return bfqq;
++}
++
++static void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
++{
++ if (bfqd->in_service_bic) {
++ put_io_context(bfqd->in_service_bic->icq.ioc);
++ bfqd->in_service_bic = NULL;
++ }
++
++ bfqd->in_service_queue = NULL;
++ del_timer(&bfqd->idle_slice_timer);
++}
++
++static void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ int requeue)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++
++ if (bfqq == bfqd->in_service_queue)
++ __bfq_bfqd_reset_in_service(bfqd);
++
++ bfq_deactivate_entity(entity, requeue);
++}
++
++static void bfq_activate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++
++ bfq_activate_entity(entity);
++}
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++static void bfqg_stats_update_dequeue(struct bfq_group *bfqg);
++#endif
++
++/*
++ * Called when the bfqq no longer has requests pending, remove it from
++ * the service tree.
++ */
++static void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ int requeue)
++{
++ BUG_ON(!bfq_bfqq_busy(bfqq));
++ BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
++
++ bfq_log_bfqq(bfqd, bfqq, "del from busy");
++
++ bfq_clear_bfqq_busy(bfqq);
++
++ BUG_ON(bfqd->busy_queues == 0);
++ bfqd->busy_queues--;
++
++ if (!bfqq->dispatched) {
++ bfq_weights_tree_remove(bfqd, &bfqq->entity,
++ &bfqd->queue_weights_tree);
++ if (!blk_queue_nonrot(bfqd->queue)) {
++ BUG_ON(!bfqd->busy_in_flight_queues);
++ bfqd->busy_in_flight_queues--;
++ if (bfq_bfqq_constantly_seeky(bfqq)) {
++ BUG_ON(!bfqd->
++ const_seeky_busy_in_flight_queues);
++ bfqd->const_seeky_busy_in_flight_queues--;
++ }
++ }
++ }
++ if (bfqq->wr_coeff > 1)
++ bfqd->wr_busy_queues--;
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ bfqg_stats_update_dequeue(bfqq_group(bfqq));
++#endif
++
++ bfq_deactivate_bfqq(bfqd, bfqq, requeue);
++}
++
++/*
++ * Called when an inactive queue receives a new request.
++ */
++static void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ BUG_ON(bfq_bfqq_busy(bfqq));
++ BUG_ON(bfqq == bfqd->in_service_queue);
++
++ bfq_log_bfqq(bfqd, bfqq, "add to busy");
++
++ bfq_activate_bfqq(bfqd, bfqq);
++
++ bfq_mark_bfqq_busy(bfqq);
++ bfqd->busy_queues++;
++
++ if (!bfqq->dispatched) {
++ if (bfqq->wr_coeff == 1)
++ bfq_weights_tree_add(bfqd, &bfqq->entity,
++ &bfqd->queue_weights_tree);
++ if (!blk_queue_nonrot(bfqd->queue)) {
++ bfqd->busy_in_flight_queues++;
++ if (bfq_bfqq_constantly_seeky(bfqq))
++ bfqd->const_seeky_busy_in_flight_queues++;
++ }
++ }
++ if (bfqq->wr_coeff > 1)
++ bfqd->wr_busy_queues++;
++}
+diff --git a/block/bfq.h b/block/bfq.h
+new file mode 100644
+index 0000000..2bf54ae
+--- /dev/null
++++ b/block/bfq.h
+@@ -0,0 +1,801 @@
++/*
++ * BFQ-v7r11 for 4.5.0: data structures and common functions prototypes.
++ *
++ * Based on ideas and code from CFQ:
++ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
++ *
++ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
++ * Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ */
++
++#ifndef _BFQ_H
++#define _BFQ_H
++
++#include <linux/blktrace_api.h>
++#include <linux/hrtimer.h>
++#include <linux/ioprio.h>
++#include <linux/rbtree.h>
++#include <linux/blk-cgroup.h>
++
++#define BFQ_IOPRIO_CLASSES 3
++#define BFQ_CL_IDLE_TIMEOUT (HZ/5)
++
++#define BFQ_MIN_WEIGHT 1
++#define BFQ_MAX_WEIGHT 1000
++#define BFQ_WEIGHT_CONVERSION_COEFF 10
++
++#define BFQ_DEFAULT_QUEUE_IOPRIO 4
++
++#define BFQ_DEFAULT_GRP_WEIGHT 10
++#define BFQ_DEFAULT_GRP_IOPRIO 0
++#define BFQ_DEFAULT_GRP_CLASS IOPRIO_CLASS_BE
++
++struct bfq_entity;
++
++/**
++ * struct bfq_service_tree - per ioprio_class service tree.
++ * @active: tree for active entities (i.e., those backlogged).
++ * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
++ * @first_idle: idle entity with minimum F_i.
++ * @last_idle: idle entity with maximum F_i.
++ * @vtime: scheduler virtual time.
++ * @wsum: scheduler weight sum; active and idle entities contribute to it.
++ *
++ * Each service tree represents a B-WF2Q+ scheduler on its own. Each
++ * ioprio_class has its own independent scheduler, and so its own
++ * bfq_service_tree. All the fields are protected by the queue lock
++ * of the containing bfqd.
++ */
++struct bfq_service_tree {
++ struct rb_root active;
++ struct rb_root idle;
++
++ struct bfq_entity *first_idle;
++ struct bfq_entity *last_idle;
++
++ u64 vtime;
++ unsigned long wsum;
++};
++
++/**
++ * struct bfq_sched_data - multi-class scheduler.
++ * @in_service_entity: entity in service.
++ * @next_in_service: head-of-the-line entity in the scheduler.
++ * @service_tree: array of service trees, one per ioprio_class.
++ *
++ * bfq_sched_data is the basic scheduler queue. It supports three
++ * ioprio_classes, and can be used either as a toplevel queue or as
++ * an intermediate queue on a hierarchical setup.
++ * @next_in_service points to the active entity of the sched_data
++ * service trees that will be scheduled next.
++ *
++ * The supported ioprio_classes are the same as in CFQ, in descending
++ * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
++ * Requests from higher priority queues are served before all the
++ * requests from lower priority queues; among requests of the same
++ * queue requests are served according to B-WF2Q+.
++ * All the fields are protected by the queue lock of the containing bfqd.
++ */
++struct bfq_sched_data {
++ struct bfq_entity *in_service_entity;
++ struct bfq_entity *next_in_service;
++ struct bfq_service_tree service_tree[BFQ_IOPRIO_CLASSES];
++};
++
++/**
++ * struct bfq_weight_counter - counter of the number of all active entities
++ * with a given weight.
++ * @weight: weight of the entities that this counter refers to.
++ * @num_active: number of active entities with this weight.
++ * @weights_node: weights tree member (see bfq_data's @queue_weights_tree
++ * and @group_weights_tree).
++ */
++struct bfq_weight_counter {
++ short int weight;
++ unsigned int num_active;
++ struct rb_node weights_node;
++};
++
++/**
++ * struct bfq_entity - schedulable entity.
++ * @rb_node: service_tree member.
++ * @weight_counter: pointer to the weight counter associated with this entity.
++ * @on_st: flag, true if the entity is on a tree (either the active or
++ * the idle one of its service_tree).
++ * @finish: B-WF2Q+ finish timestamp (aka F_i).
++ * @start: B-WF2Q+ start timestamp (aka S_i).
++ * @tree: tree the entity is enqueued into; %NULL if not on a tree.
++ * @min_start: minimum start time of the (active) subtree rooted at
++ * this entity; used for O(log N) lookups into active trees.
++ * @service: service received during the last round of service.
++ * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
++ * @weight: weight of the queue
++ * @parent: parent entity, for hierarchical scheduling.
++ * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
++ * associated scheduler queue, %NULL on leaf nodes.
++ * @sched_data: the scheduler queue this entity belongs to.
++ * @ioprio: the ioprio in use.
++ * @new_weight: when a weight change is requested, the new weight value.
++ * @orig_weight: original weight, used to implement weight boosting
++ * @prio_changed: flag, true when the user requested a weight, ioprio or
++ * ioprio_class change.
++ *
++ * A bfq_entity is used to represent either a bfq_queue (leaf node in the
++ * cgroup hierarchy) or a bfq_group into the upper level scheduler. Each
++ * entity belongs to the sched_data of the parent group in the cgroup
++ * hierarchy. Non-leaf entities have also their own sched_data, stored
++ * in @my_sched_data.
++ *
++ * Each entity stores independently its priority values; this would
++ * allow different weights on different devices, but this
++ * functionality is not exported to userspace by now. Priorities and
++ * weights are updated lazily, first storing the new values into the
++ * new_* fields, then setting the @prio_changed flag. As soon as
++ * there is a transition in the entity state that allows the priority
++ * update to take place the effective and the requested priority
++ * values are synchronized.
++ *
++ * Unless cgroups are used, the weight value is calculated from the
++ * ioprio to export the same interface as CFQ. When dealing with
++ * ``well-behaved'' queues (i.e., queues that do not spend too much
++ * time to consume their budget and have true sequential behavior, and
++ * when there are no external factors breaking anticipation) the
++ * relative weights at each level of the cgroups hierarchy should be
++ * guaranteed. All the fields are protected by the queue lock of the
++ * containing bfqd.
++ */
++struct bfq_entity {
++ struct rb_node rb_node;
++ struct bfq_weight_counter *weight_counter;
++
++ int on_st;
++
++ u64 finish;
++ u64 start;
++
++ struct rb_root *tree;
++
++ u64 min_start;
++
++ int service, budget;
++ unsigned short weight, new_weight;
++ unsigned short orig_weight;
++
++ struct bfq_entity *parent;
++
++ struct bfq_sched_data *my_sched_data;
++ struct bfq_sched_data *sched_data;
++
++ int prio_changed;
++};
++
++struct bfq_group;
++
++/**
++ * struct bfq_queue - leaf schedulable entity.
++ * @ref: reference counter.
++ * @bfqd: parent bfq_data.
++ * @new_ioprio: when an ioprio change is requested, the new ioprio value.
++ * @ioprio_class: the ioprio_class in use.
++ * @new_ioprio_class: when an ioprio_class change is requested, the new
++ * ioprio_class value.
++ * @new_bfqq: shared bfq_queue if queue is cooperating with
++ * one or more other queues.
++ * @sort_list: sorted list of pending requests.
++ * @next_rq: if fifo isn't expired, next request to serve.
++ * @queued: nr of requests queued in @sort_list.
++ * @allocated: currently allocated requests.
++ * @meta_pending: pending metadata requests.
++ * @fifo: fifo list of requests in sort_list.
++ * @entity: entity representing this queue in the scheduler.
++ * @max_budget: maximum budget allowed from the feedback mechanism.
++ * @budget_timeout: budget expiration (in jiffies).
++ * @dispatched: number of requests on the dispatch list or inside driver.
++ * @flags: status flags.
++ * @bfqq_list: node for active/idle bfqq list inside our bfqd.
++ * @burst_list_node: node for the device's burst list.
++ * @seek_samples: number of seeks sampled
++ * @seek_total: sum of the distances of the seeks sampled
++ * @seek_mean: mean seek distance
++ * @last_request_pos: position of the last request enqueued
++ * @requests_within_timer: number of consecutive pairs of request completion
++ * and arrival, such that the queue becomes idle
++ * after the completion, but the next request arrives
++ * within an idle time slice; used only if the queue's
++ * IO_bound has been cleared.
++ * @pid: pid of the process owning the queue, used for logging purposes.
++ * @last_wr_start_finish: start time of the current weight-raising period if
++ * the @bfq-queue is being weight-raised, otherwise
++ * finish time of the last weight-raising period
++ * @wr_cur_max_time: current max raising time for this queue
++ * @soft_rt_next_start: minimum time instant such that, only if a new
++ * request is enqueued after this time instant in an
++ * idle @bfq_queue with no outstanding requests, then
++ * the task associated with the queue is deemed as
++ * soft real-time (see the comments to the function
++ * bfq_bfqq_softrt_next_start())
++ * @last_idle_bklogged: time of the last transition of the @bfq_queue from
++ * idle to backlogged
++ * @service_from_backlogged: cumulative service received from the @bfq_queue
++ * since the last transition from idle to
++ * backlogged
++ * @bic: pointer to the bfq_io_cq owning the bfq_queue, set to %NULL if the
++ * queue is shared
++ *
++ * A bfq_queue is a leaf request queue; it can be associated with an
++ * io_context or more, if it is async or shared between cooperating
++ * processes. @cgroup holds a reference to the cgroup, to be sure that it
++ * does not disappear while a bfqq still references it (mostly to avoid
++ * races between request issuing and task migration followed by cgroup
++ * destruction).
++ * All the fields are protected by the queue lock of the containing bfqd.
++ */
++struct bfq_queue {
++ atomic_t ref;
++ struct bfq_data *bfqd;
++
++ unsigned short ioprio, new_ioprio;
++ unsigned short ioprio_class, new_ioprio_class;
++
++ /* fields for cooperating queues handling */
++ struct bfq_queue *new_bfqq;
++ struct rb_node pos_node;
++ struct rb_root *pos_root;
++
++ struct rb_root sort_list;
++ struct request *next_rq;
++ int queued[2];
++ int allocated[2];
++ int meta_pending;
++ struct list_head fifo;
++
++ struct bfq_entity entity;
++
++ int max_budget;
++ unsigned long budget_timeout;
++
++ int dispatched;
++
++ unsigned int flags;
++
++ struct list_head bfqq_list;
++
++ struct hlist_node burst_list_node;
++
++ unsigned int seek_samples;
++ u64 seek_total;
++ sector_t seek_mean;
++ sector_t last_request_pos;
++
++ unsigned int requests_within_timer;
++
++ pid_t pid;
++ struct bfq_io_cq *bic;
++
++ /* weight-raising fields */
++ unsigned long wr_cur_max_time;
++ unsigned long soft_rt_next_start;
++ unsigned long last_wr_start_finish;
++ unsigned int wr_coeff;
++ unsigned long last_idle_bklogged;
++ unsigned long service_from_backlogged;
++};
++
++/**
++ * struct bfq_ttime - per process thinktime stats.
++ * @ttime_total: total process thinktime
++ * @ttime_samples: number of thinktime samples
++ * @ttime_mean: average process thinktime
++ */
++struct bfq_ttime {
++ unsigned long last_end_request;
++
++ unsigned long ttime_total;
++ unsigned long ttime_samples;
++ unsigned long ttime_mean;
++};
++
++/**
++ * struct bfq_io_cq - per (request_queue, io_context) structure.
++ * @icq: associated io_cq structure
++ * @bfqq: array of two process queues, the sync and the async
++ * @ttime: associated @bfq_ttime struct
++ * @ioprio: per (request_queue, blkcg) ioprio.
++ * @blkcg_id: id of the blkcg the related io_cq belongs to.
++ */
++struct bfq_io_cq {
++ struct io_cq icq; /* must be the first member */
++ struct bfq_queue *bfqq[2];
++ struct bfq_ttime ttime;
++ int ioprio;
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ uint64_t blkcg_id; /* the current blkcg ID */
++#endif
++};
++
++enum bfq_device_speed {
++ BFQ_BFQD_FAST,
++ BFQ_BFQD_SLOW,
++};
++
++/**
++ * struct bfq_data - per device data structure.
++ * @queue: request queue for the managed device.
++ * @root_group: root bfq_group for the device.
++ * @active_numerous_groups: number of bfq_groups containing more than one
++ * active @bfq_entity.
++ * @queue_weights_tree: rbtree of weight counters of @bfq_queues, sorted by
++ * weight. Used to keep track of whether all @bfq_queues
++ * have the same weight. The tree contains one counter
++ * for each distinct weight associated to some active
++ * and not weight-raised @bfq_queue (see the comments to
++ * the functions bfq_weights_tree_[add|remove] for
++ * further details).
++ * @group_weights_tree: rbtree of non-queue @bfq_entity weight counters, sorted
++ * by weight. Used to keep track of whether all
++ * @bfq_groups have the same weight. The tree contains
++ * one counter for each distinct weight associated to
++ * some active @bfq_group (see the comments to the
++ * functions bfq_weights_tree_[add|remove] for further
++ * details).
++ * @busy_queues: number of bfq_queues containing requests (including the
++ * queue in service, even if it is idling).
++ * @busy_in_flight_queues: number of @bfq_queues containing pending or
++ * in-flight requests, plus the @bfq_queue in
++ * service, even if idle but waiting for the
++ * possible arrival of its next sync request. This
++ * field is updated only if the device is rotational,
++ * but used only if the device is also NCQ-capable.
++ * The reason why the field is updated also for non-
++ * NCQ-capable rotational devices is related to the
++ * fact that the value of @hw_tag may be set also
++ * later than when busy_in_flight_queues may need to
++ * be incremented for the first time(s). Taking also
++ * this possibility into account, to avoid unbalanced
++ * increments/decrements, would imply more overhead
++ * than just updating busy_in_flight_queues
++ * regardless of the value of @hw_tag.
++ * @const_seeky_busy_in_flight_queues: number of constantly-seeky @bfq_queues
++ * (that is, seeky queues that expired
++ * for budget timeout at least once)
++ * containing pending or in-flight
++ * requests, including the in-service
++ * @bfq_queue if constantly seeky. This
++ * field is updated only if the device
++ * is rotational, but used only if the
++ * device is also NCQ-capable (see the
++ * comments to @busy_in_flight_queues).
++ * @wr_busy_queues: number of weight-raised busy @bfq_queues.
++ * @queued: number of queued requests.
++ * @rq_in_driver: number of requests dispatched and waiting for completion.
++ * @sync_flight: number of sync requests in the driver.
++ * @max_rq_in_driver: max number of reqs in driver in the last
++ * @hw_tag_samples completed requests.
++ * @hw_tag_samples: nr of samples used to calculate hw_tag.
++ * @hw_tag: flag set to one if the driver is showing a queueing behavior.
++ * @budgets_assigned: number of budgets assigned.
++ * @idle_slice_timer: timer set when idling for the next sequential request
++ * from the queue in service.
++ * @unplug_work: delayed work to restart dispatching on the request queue.
++ * @in_service_queue: bfq_queue in service.
++ * @in_service_bic: bfq_io_cq (bic) associated with the @in_service_queue.
++ * @last_position: on-disk position of the last served request.
++ * @last_budget_start: beginning of the last budget.
++ * @last_idling_start: beginning of the last idle slice.
++ * @peak_rate: peak transfer rate observed for a budget.
++ * @peak_rate_samples: number of samples used to calculate @peak_rate.
++ * @bfq_max_budget: maximum budget allotted to a bfq_queue before
++ * rescheduling.
++ * @active_list: list of all the bfq_queues active on the device.
++ * @idle_list: list of all the bfq_queues idle on the device.
++ * @bfq_fifo_expire: timeout for async/sync requests; when it expires
++ * requests are served in fifo order.
++ * @bfq_back_penalty: weight of backward seeks wrt forward ones.
++ * @bfq_back_max: maximum allowed backward seek.
++ * @bfq_slice_idle: maximum idling time.
++ * @bfq_user_max_budget: user-configured max budget value
++ * (0 for auto-tuning).
++ * @bfq_max_budget_async_rq: maximum budget (in nr of requests) allotted to
++ * async queues.
++ * @bfq_timeout: timeout for bfq_queues to consume their budget; used to
++ * prevent seeky queues from imposing long latencies on well
++ * behaved ones (this also implies that seeky queues cannot
++ * receive guarantees in the service domain; after a timeout
++ * they are charged for the whole allocated budget, to try
++ * to preserve a behavior reasonably fair among them, but
++ * without service-domain guarantees).
++ * @bfq_coop_thresh: number of queue merges after which a @bfq_queue is
++ * no more granted any weight-raising.
++ * @bfq_failed_cooperations: number of consecutive failed cooperation
++ * chances after which weight-raising is restored
++ * to a queue subject to more than bfq_coop_thresh
++ * queue merges.
++ * @bfq_requests_within_timer: number of consecutive requests that must be
++ * issued within the idle time slice to set
++ * again idling to a queue which was marked as
++ * non-I/O-bound (see the definition of the
++ * IO_bound flag for further details).
++ * @last_ins_in_burst: last time at which a queue entered the current
++ * burst of queues being activated shortly after
++ * each other; for more details about this and the
++ * following parameters related to a burst of
++ * activations, see the comments to the function
++ * @bfq_handle_burst.
++ * @bfq_burst_interval: reference time interval used to decide whether a
++ * queue has been activated shortly after
++ * @last_ins_in_burst.
++ * @burst_size: number of queues in the current burst of queue activations.
++ * @bfq_large_burst_thresh: maximum burst size above which the current
++ * queue-activation burst is deemed as 'large'.
++ * @large_burst: true if a large queue-activation burst is in progress.
++ * @burst_list: head of the burst list (as for the above fields, more details
++ * in the comments to the function bfq_handle_burst).
++ * @low_latency: if set to true, low-latency heuristics are enabled.
++ * @bfq_wr_coeff: maximum factor by which the weight of a weight-raised
++ * queue is multiplied.
++ * @bfq_wr_max_time: maximum duration of a weight-raising period (jiffies).
++ * @bfq_wr_rt_max_time: maximum duration for soft real-time processes.
++ * @bfq_wr_min_idle_time: minimum idle period after which weight-raising
++ * may be reactivated for a queue (in jiffies).
++ * @bfq_wr_min_inter_arr_async: minimum period between request arrivals
++ * after which weight-raising may be
++ * reactivated for an already busy queue
++ * (in jiffies).
++ * @bfq_wr_max_softrt_rate: max service-rate for a soft real-time queue,
++ * sectors per seconds.
++ * @RT_prod: cached value of the product R*T used for computing the maximum
++ * duration of the weight raising automatically.
++ * @device_speed: device-speed class for the low-latency heuristic.
++ * @oom_bfqq: fallback dummy bfqq for extreme OOM conditions.
++ *
++ * All the fields are protected by the @queue lock.
++ */
++struct bfq_data {
++ struct request_queue *queue;
++
++ struct bfq_group *root_group;
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ int active_numerous_groups;
++#endif
++
++ struct rb_root queue_weights_tree;
++ struct rb_root group_weights_tree;
++
++ int busy_queues;
++ int busy_in_flight_queues;
++ int const_seeky_busy_in_flight_queues;
++ int wr_busy_queues;
++ int queued;
++ int rq_in_driver;
++ int sync_flight;
++
++ int max_rq_in_driver;
++ int hw_tag_samples;
++ int hw_tag;
++
++ int budgets_assigned;
++
++ struct timer_list idle_slice_timer;
++ struct work_struct unplug_work;
++
++ struct bfq_queue *in_service_queue;
++ struct bfq_io_cq *in_service_bic;
++
++ sector_t last_position;
++
++ ktime_t last_budget_start;
++ ktime_t last_idling_start;
++ int peak_rate_samples;
++ u64 peak_rate;
++ int bfq_max_budget;
++
++ struct list_head active_list;
++ struct list_head idle_list;
++
++ unsigned int bfq_fifo_expire[2];
++ unsigned int bfq_back_penalty;
++ unsigned int bfq_back_max;
++ unsigned int bfq_slice_idle;
++ u64 bfq_class_idle_last_service;
++
++ int bfq_user_max_budget;
++ int bfq_max_budget_async_rq;
++ unsigned int bfq_timeout[2];
++
++ unsigned int bfq_coop_thresh;
++ unsigned int bfq_failed_cooperations;
++ unsigned int bfq_requests_within_timer;
++
++ unsigned long last_ins_in_burst;
++ unsigned long bfq_burst_interval;
++ int burst_size;
++ unsigned long bfq_large_burst_thresh;
++ bool large_burst;
++ struct hlist_head burst_list;
++
++ bool low_latency;
++
++ /* parameters of the low_latency heuristics */
++ unsigned int bfq_wr_coeff;
++ unsigned int bfq_wr_max_time;
++ unsigned int bfq_wr_rt_max_time;
++ unsigned int bfq_wr_min_idle_time;
++ unsigned long bfq_wr_min_inter_arr_async;
++ unsigned int bfq_wr_max_softrt_rate;
++ u64 RT_prod;
++ enum bfq_device_speed device_speed;
++
++ struct bfq_queue oom_bfqq;
++};
++
++enum bfqq_state_flags {
++ BFQ_BFQQ_FLAG_busy = 0, /* has requests or is in service */
++ BFQ_BFQQ_FLAG_wait_request, /* waiting for a request */
++ BFQ_BFQQ_FLAG_must_alloc, /* must be allowed rq alloc */
++ BFQ_BFQQ_FLAG_fifo_expire, /* FIFO checked in this slice */
++ BFQ_BFQQ_FLAG_idle_window, /* slice idling enabled */
++ BFQ_BFQQ_FLAG_sync, /* synchronous queue */
++ BFQ_BFQQ_FLAG_budget_new, /* no completion with this budget */
++ BFQ_BFQQ_FLAG_IO_bound, /*
++ * bfqq has timed-out at least once
++ * having consumed at most 2/10 of
++ * its budget
++ */
++ BFQ_BFQQ_FLAG_in_large_burst, /*
++ * bfqq activated in a large burst,
++ * see comments to bfq_handle_burst.
++ */
++ BFQ_BFQQ_FLAG_constantly_seeky, /*
++ * bfqq has proved to be slow and
++ * seeky until budget timeout
++ */
++ BFQ_BFQQ_FLAG_softrt_update, /*
++ * may need softrt-next-start
++ * update
++ */
++};
++
++#define BFQ_BFQQ_FNS(name) \
++static void bfq_mark_bfqq_##name(struct bfq_queue *bfqq) \
++{ \
++ (bfqq)->flags |= (1 << BFQ_BFQQ_FLAG_##name); \
++} \
++static void bfq_clear_bfqq_##name(struct bfq_queue *bfqq) \
++{ \
++ (bfqq)->flags &= ~(1 << BFQ_BFQQ_FLAG_##name); \
++} \
++static int bfq_bfqq_##name(const struct bfq_queue *bfqq) \
++{ \
++ return ((bfqq)->flags & (1 << BFQ_BFQQ_FLAG_##name)) != 0; \
++}
++
++BFQ_BFQQ_FNS(busy);
++BFQ_BFQQ_FNS(wait_request);
++BFQ_BFQQ_FNS(must_alloc);
++BFQ_BFQQ_FNS(fifo_expire);
++BFQ_BFQQ_FNS(idle_window);
++BFQ_BFQQ_FNS(sync);
++BFQ_BFQQ_FNS(budget_new);
++BFQ_BFQQ_FNS(IO_bound);
++BFQ_BFQQ_FNS(in_large_burst);
++BFQ_BFQQ_FNS(constantly_seeky);
++BFQ_BFQQ_FNS(softrt_update);
++#undef BFQ_BFQQ_FNS
++
++/* Logging facilities. */
++#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \
++ blk_add_trace_msg((bfqd)->queue, "bfq%d " fmt, (bfqq)->pid, ##args)
++
++#define bfq_log(bfqd, fmt, args...) \
++ blk_add_trace_msg((bfqd)->queue, "bfq " fmt, ##args)
++
++/* Expiration reasons. */
++enum bfqq_expiration {
++ BFQ_BFQQ_TOO_IDLE = 0, /*
++ * queue has been idling for
++ * too long
++ */
++ BFQ_BFQQ_BUDGET_TIMEOUT, /* budget took too long to be used */
++ BFQ_BFQQ_BUDGET_EXHAUSTED, /* budget consumed */
++ BFQ_BFQQ_NO_MORE_REQUESTS, /* the queue has no more requests */
++};
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++
++struct bfqg_stats {
++ /* total bytes transferred */
++ struct blkg_rwstat service_bytes;
++ /* total IOs serviced, post merge */
++ struct blkg_rwstat serviced;
++ /* number of ios merged */
++ struct blkg_rwstat merged;
++ /* total time spent on device in ns, may not be accurate w/ queueing */
++ struct blkg_rwstat service_time;
++ /* total time spent waiting in scheduler queue in ns */
++ struct blkg_rwstat wait_time;
++ /* number of IOs queued up */
++ struct blkg_rwstat queued;
++ /* total sectors transferred */
++ struct blkg_stat sectors;
++ /* total disk time and nr sectors dispatched by this group */
++ struct blkg_stat time;
++ /* time not charged to this cgroup */
++ struct blkg_stat unaccounted_time;
++ /* sum of number of ios queued across all samples */
++ struct blkg_stat avg_queue_size_sum;
++ /* count of samples taken for average */
++ struct blkg_stat avg_queue_size_samples;
++ /* how many times this group has been removed from service tree */
++ struct blkg_stat dequeue;
++ /* total time spent waiting for it to be assigned a timeslice. */
++ struct blkg_stat group_wait_time;
++ /* time spent idling for this blkcg_gq */
++ struct blkg_stat idle_time;
++ /* total time with empty current active q with other requests queued */
++ struct blkg_stat empty_time;
++ /* fields after this shouldn't be cleared on stat reset */
++ uint64_t start_group_wait_time;
++ uint64_t start_idle_time;
++ uint64_t start_empty_time;
++ uint16_t flags;
++};
++
++/*
++ * struct bfq_group_data - per-blkcg storage for the blkio subsystem.
++ *
++ * @pd: blkcg_policy_data that this structure inherits
++ * @weight: weight of the bfq_group
++ */
++struct bfq_group_data {
++ /* must be the first member */
++ struct blkcg_policy_data pd;
++
++ unsigned short weight;
++};
++
++/**
++ * struct bfq_group - per (device, cgroup) data structure.
++ * @entity: schedulable entity to insert into the parent group sched_data.
++ * @sched_data: own sched_data, to contain child entities (they may be
++ * both bfq_queues and bfq_groups).
++ * @bfqd: the bfq_data for the device this group acts upon.
++ * @async_bfqq: array of async queues for all the tasks belonging to
++ * the group, one queue per ioprio value per ioprio_class,
++ * except for the idle class that has only one queue.
++ * @async_idle_bfqq: async queue for the idle class (ioprio is ignored).
++ * @my_entity: pointer to @entity, %NULL for the toplevel group; used
++ * to avoid too many special cases during group creation/
++ * migration.
++ * @active_entities: number of active entities belonging to the group;
++ * unused for the root group. Used to know whether there
++ * are groups with more than one active @bfq_entity
++ * (see the comments to the function
++ * bfq_bfqq_must_not_expire()).
++ *
++ * Each (device, cgroup) pair has its own bfq_group, i.e., for each cgroup
++ * there is a set of bfq_groups, each one collecting the lower-level
++ * entities belonging to the group that are acting on the same device.
++ *
++ * Locking works as follows:
++ * o @bfqd is protected by the queue lock, RCU is used to access it
++ * from the readers.
++ * o All the other fields are protected by the @bfqd queue lock.
++ */
++struct bfq_group {
++ /* must be the first member */
++ struct blkg_policy_data pd;
++
++ struct bfq_entity entity;
++ struct bfq_sched_data sched_data;
++
++ void *bfqd;
++
++ struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
++ struct bfq_queue *async_idle_bfqq;
++
++ struct bfq_entity *my_entity;
++
++ int active_entities;
++
++ struct bfqg_stats stats;
++ struct bfqg_stats dead_stats; /* stats pushed from dead children */
++};
++
++#else
++struct bfq_group {
++ struct bfq_sched_data sched_data;
++
++ struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
++ struct bfq_queue *async_idle_bfqq;
++};
++#endif
++
++static struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity);
++
++static struct bfq_service_tree *
++bfq_entity_service_tree(struct bfq_entity *entity)
++{
++ struct bfq_sched_data *sched_data = entity->sched_data;
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ unsigned int idx = bfqq ? bfqq->ioprio_class - 1 :
++ BFQ_DEFAULT_GRP_CLASS;
++
++ BUG_ON(idx >= BFQ_IOPRIO_CLASSES);
++ BUG_ON(sched_data == NULL);
++
++ return sched_data->service_tree + idx;
++}
++
++static struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync)
++{
++ return bic->bfqq[is_sync];
++}
++
++static void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq,
++ bool is_sync)
++{
++ bic->bfqq[is_sync] = bfqq;
++}
++
++static struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic)
++{
++ return bic->icq.q->elevator->elevator_data;
++}
++
++/**
++ * bfq_get_bfqd_locked - get a lock to a bfqd using an RCU-protected pointer.
++ * @ptr: a pointer to a bfqd.
++ * @flags: storage for the flags to be saved.
++ *
++ * This function allows bfqg->bfqd to be protected by the
++ * queue lock of the bfqd they reference; the pointer is dereferenced
++ * under RCU, so the storage for bfqd is assured to be safe as long
++ * as the RCU read side critical section does not end. After the
++ * bfqd->queue->queue_lock is taken the pointer is rechecked, to be
++ * sure that no other writer accessed it. If we raced with a writer,
++ * the function returns NULL, with the queue unlocked, otherwise it
++ * returns the dereferenced pointer, with the queue locked.
++ */
++static struct bfq_data *bfq_get_bfqd_locked(void **ptr, unsigned long *flags)
++{
++ struct bfq_data *bfqd;
++
++ rcu_read_lock();
++ bfqd = rcu_dereference(*(struct bfq_data **)ptr);
++
++ if (bfqd != NULL) {
++ spin_lock_irqsave(bfqd->queue->queue_lock, *flags);
++ if (ptr == NULL)
++ printk(KERN_CRIT "get_bfqd_locked pointer NULL\n");
++ else if (*ptr == bfqd)
++ goto out;
++ spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
++ }
++
++ bfqd = NULL;
++out:
++ rcu_read_unlock();
++ return bfqd;
++}
++
++static void bfq_put_bfqd_unlock(struct bfq_data *bfqd, unsigned long *flags)
++{
++ spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
++}
++
++static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio);
++static void bfq_put_queue(struct bfq_queue *bfqq);
++static void bfq_dispatch_insert(struct request_queue *q, struct request *rq);
++static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
++ struct bio *bio, int is_sync,
++ struct bfq_io_cq *bic, gfp_t gfp_mask);
++static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
++ struct bfq_group *bfqg);
++static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
++static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq);
++
++#endif /* _BFQ_H */
+--
+2.10.0
+
diff --git a/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r11-for-4.10.patch b/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r11-for-4.10.patch
new file mode 100644
index 0000000..28eeb1f
--- /dev/null
+++ b/5003_block-bfq-add-Early-Queue-Merge-EQM-to-BFQ-v7r11-for-4.10.patch
@@ -0,0 +1,1101 @@
+From e4d9bed2dfdec562b23491e44602c89c4a2a5ea4 Mon Sep 17 00:00:00 2001
+From: Mauro Andreolini <mauro.andreolini@unimore.it>
+Date: Sun, 6 Sep 2015 16:09:05 +0200
+Subject: [PATCH 3/4] block, bfq: add Early Queue Merge (EQM) to BFQ-v7r11 for
+ 4.10.0
+
+A set of processes may happen to perform interleaved reads, i.e., requests
+whose union would give rise to a sequential read pattern. There are two
+typical cases: in the first case, processes read fixed-size chunks of
+data at a fixed distance from each other, while in the second case processes
+may read variable-size chunks at variable distances. The latter case occurs
+for example with QEMU, which splits the I/O generated by the guest into
+multiple chunks, and lets these chunks be served by a pool of cooperating
+processes, iteratively assigning the next chunk of I/O to the first
+available process. CFQ uses actual queue merging for the first type of
+processes, whereas it uses preemption to get a sequential read pattern out
+of the read requests performed by the second type of processes. In the end
+it uses two different mechanisms to achieve the same goal: boosting the
+throughput with interleaved I/O.
+
+This patch introduces Early Queue Merge (EQM), a unified mechanism to get a
+sequential read pattern with both types of processes. The main idea is
+checking newly arrived requests against the next request of the active queue
+both in case of actual request insert and in case of request merge. By doing
+so, both types of processes can be handled by just merging their queues.
+EQM is then simpler and more compact than the pair of mechanisms used in
+CFQ.
+
+Finally, EQM also preserves the typical low-latency properties of BFQ, by
+properly restoring the weight-raising state of a queue when it gets back to
+a non-merged state.
+
+Signed-off-by: Mauro Andreolini <mauro.andreolini@unimore.it>
+Signed-off-by: Arianna Avanzini <avanzini@google.com>
+Signed-off-by: Paolo Valente <paolo.valente@unimore.it>
+Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
+---
+ block/bfq-cgroup.c | 5 +
+ block/bfq-iosched.c | 685 +++++++++++++++++++++++++++++++++++++++++++++++++++-
+ block/bfq.h | 66 +++++
+ 3 files changed, 743 insertions(+), 13 deletions(-)
+
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index 8b08a57..0367996 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -440,6 +440,7 @@ static void bfq_pd_init(struct blkg_policy_data *pd)
+ */
+ bfqg->bfqd = bfqd;
+ bfqg->active_entities = 0;
++ bfqg->rq_pos_tree = RB_ROOT;
+ }
+
+ static void bfq_pd_free(struct blkg_policy_data *pd)
+@@ -533,6 +534,9 @@ static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
+ return bfqg;
+ }
+
++static void bfq_pos_tree_add_move(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq);
++
+ /**
+ * bfq_bfqq_move - migrate @bfqq to @bfqg.
+ * @bfqd: queue descriptor.
+@@ -580,6 +584,7 @@ static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ bfqg_get(bfqg);
+
+ if (busy) {
++ bfq_pos_tree_add_move(bfqd, bfqq);
+ if (resume)
+ bfq_activate_bfqq(bfqd, bfqq);
+ }
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 85e2169..cf3e9b1 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -295,6 +295,72 @@ static struct request *bfq_choose_req(struct bfq_data *bfqd,
+ }
+ }
+
++static struct bfq_queue *
++bfq_rq_pos_tree_lookup(struct bfq_data *bfqd, struct rb_root *root,
++ sector_t sector, struct rb_node **ret_parent,
++ struct rb_node ***rb_link)
++{
++ struct rb_node **p, *parent;
++ struct bfq_queue *bfqq = NULL;
++
++ parent = NULL;
++ p = &root->rb_node;
++ while (*p) {
++ struct rb_node **n;
++
++ parent = *p;
++ bfqq = rb_entry(parent, struct bfq_queue, pos_node);
++
++ /*
++ * Sort strictly based on sector. Smallest to the left,
++ * largest to the right.
++ */
++ if (sector > blk_rq_pos(bfqq->next_rq))
++ n = &(*p)->rb_right;
++ else if (sector < blk_rq_pos(bfqq->next_rq))
++ n = &(*p)->rb_left;
++ else
++ break;
++ p = n;
++ bfqq = NULL;
++ }
++
++ *ret_parent = parent;
++ if (rb_link)
++ *rb_link = p;
++
++ bfq_log(bfqd, "rq_pos_tree_lookup %llu: returning %d",
++ (unsigned long long) sector,
++ bfqq ? bfqq->pid : 0);
++
++ return bfqq;
++}
++
++static void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ struct rb_node **p, *parent;
++ struct bfq_queue *__bfqq;
++
++ if (bfqq->pos_root) {
++ rb_erase(&bfqq->pos_node, bfqq->pos_root);
++ bfqq->pos_root = NULL;
++ }
++
++ if (bfq_class_idle(bfqq))
++ return;
++ if (!bfqq->next_rq)
++ return;
++
++ bfqq->pos_root = &bfq_bfqq_to_bfqg(bfqq)->rq_pos_tree;
++ __bfqq = bfq_rq_pos_tree_lookup(bfqd, bfqq->pos_root,
++ blk_rq_pos(bfqq->next_rq), &parent, &p);
++ if (!__bfqq) {
++ rb_link_node(&bfqq->pos_node, parent, p);
++ rb_insert_color(&bfqq->pos_node, bfqq->pos_root);
++ } else
++ bfqq->pos_root = NULL;
++}
++
+ /*
+ * Tell whether there are active queues or groups with differentiated weights.
+ */
+@@ -527,6 +593,57 @@ static unsigned int bfq_wr_duration(struct bfq_data *bfqd)
+ return dur;
+ }
+
++static unsigned int bfq_bfqq_cooperations(struct bfq_queue *bfqq)
++{
++ return bfqq->bic ? bfqq->bic->cooperations : 0;
++}
++
++static void
++bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
++{
++ if (bic->saved_idle_window)
++ bfq_mark_bfqq_idle_window(bfqq);
++ else
++ bfq_clear_bfqq_idle_window(bfqq);
++ if (bic->saved_IO_bound)
++ bfq_mark_bfqq_IO_bound(bfqq);
++ else
++ bfq_clear_bfqq_IO_bound(bfqq);
++ /* Assuming that the flag in_large_burst is already correctly set */
++ if (bic->wr_time_left && bfqq->bfqd->low_latency &&
++ !bfq_bfqq_in_large_burst(bfqq) &&
++ bic->cooperations < bfqq->bfqd->bfq_coop_thresh) {
++ /*
++ * Start a weight raising period with the duration given by
++ * the raising_time_left snapshot.
++ */
++ if (bfq_bfqq_busy(bfqq))
++ bfqq->bfqd->wr_busy_queues++;
++ bfqq->wr_coeff = bfqq->bfqd->bfq_wr_coeff;
++ bfqq->wr_cur_max_time = bic->wr_time_left;
++ bfqq->last_wr_start_finish = jiffies;
++ bfqq->entity.prio_changed = 1;
++ }
++ /*
++ * Clear wr_time_left to prevent bfq_bfqq_save_state() from
++ * getting confused about the queue's need of a weight-raising
++ * period.
++ */
++ bic->wr_time_left = 0;
++}
++
++static int bfqq_process_refs(struct bfq_queue *bfqq)
++{
++ int process_refs, io_refs;
++
++ lockdep_assert_held(bfqq->bfqd->queue->queue_lock);
++
++ io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
++ process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
++ BUG_ON(process_refs < 0);
++ return process_refs;
++}
++
+ /* Empty burst list and add just bfqq (see comments to bfq_handle_burst) */
+ static void bfq_reset_burst_list(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+@@ -763,8 +880,14 @@ static void bfq_add_request(struct request *rq)
+ BUG_ON(!next_rq);
+ bfqq->next_rq = next_rq;
+
++ /*
++ * Adjust priority tree position, if next_rq changes.
++ */
++ if (prev != bfqq->next_rq)
++ bfq_pos_tree_add_move(bfqd, bfqq);
++
+ if (!bfq_bfqq_busy(bfqq)) {
+- bool soft_rt, in_burst,
++ bool soft_rt, coop_or_in_burst,
+ idle_for_long_time = time_is_before_jiffies(
+ bfqq->budget_timeout +
+ bfqd->bfq_wr_min_idle_time);
+@@ -792,11 +915,12 @@ static void bfq_add_request(struct request *rq)
+ bfqd->last_ins_in_burst = jiffies;
+ }
+
+- in_burst = bfq_bfqq_in_large_burst(bfqq);
++ coop_or_in_burst = bfq_bfqq_in_large_burst(bfqq) ||
++ bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh;
+ soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
+- !in_burst &&
++ !coop_or_in_burst &&
+ time_is_before_jiffies(bfqq->soft_rt_next_start);
+- interactive = !in_burst && idle_for_long_time;
++ interactive = !coop_or_in_burst && idle_for_long_time;
+ entity->budget = max_t(unsigned long, bfqq->max_budget,
+ bfq_serv_to_charge(next_rq, bfqq));
+
+@@ -815,6 +939,9 @@ static void bfq_add_request(struct request *rq)
+ if (!bfqd->low_latency)
+ goto add_bfqq_busy;
+
++ if (bfq_bfqq_just_split(bfqq))
++ goto set_prio_changed;
++
+ /*
+ * If the queue:
+ * - is not being boosted,
+@@ -839,7 +966,7 @@ static void bfq_add_request(struct request *rq)
+ } else if (old_wr_coeff > 1) {
+ if (interactive)
+ bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+- else if (in_burst ||
++ else if (coop_or_in_burst ||
+ (bfqq->wr_cur_max_time ==
+ bfqd->bfq_wr_rt_max_time &&
+ !soft_rt)) {
+@@ -904,6 +1031,7 @@ static void bfq_add_request(struct request *rq)
+ bfqd->bfq_wr_rt_max_time;
+ }
+ }
++set_prio_changed:
+ if (old_wr_coeff != bfqq->wr_coeff)
+ entity->prio_changed = 1;
+ add_bfqq_busy:
+@@ -1046,6 +1174,15 @@ static void bfq_merged_request(struct request_queue *q, struct request *req,
+ bfqd->last_position);
+ BUG_ON(!next_rq);
+ bfqq->next_rq = next_rq;
++ /*
++ * If next_rq changes, update both the queue's budget to
++ * fit the new request and the queue's position in its
++ * rq_pos_tree.
++ */
++ if (prev != bfqq->next_rq) {
++ bfq_updated_next_req(bfqd, bfqq);
++ bfq_pos_tree_add_move(bfqd, bfqq);
++ }
+ }
+ }
+
+@@ -1128,11 +1265,346 @@ static void bfq_end_wr(struct bfq_data *bfqd)
+ spin_unlock_irq(bfqd->queue->queue_lock);
+ }
+
++static sector_t bfq_io_struct_pos(void *io_struct, bool request)
++{
++ if (request)
++ return blk_rq_pos(io_struct);
++ else
++ return ((struct bio *)io_struct)->bi_iter.bi_sector;
++}
++
++static int bfq_rq_close_to_sector(void *io_struct, bool request,
++ sector_t sector)
++{
++ return abs(bfq_io_struct_pos(io_struct, request) - sector) <=
++ BFQQ_SEEK_THR;
++}
++
++static struct bfq_queue *bfqq_find_close(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ sector_t sector)
++{
++ struct rb_root *root = &bfq_bfqq_to_bfqg(bfqq)->rq_pos_tree;
++ struct rb_node *parent, *node;
++ struct bfq_queue *__bfqq;
++
++ if (RB_EMPTY_ROOT(root))
++ return NULL;
++
++ /*
++ * First, if we find a request starting at the end of the last
++ * request, choose it.
++ */
++ __bfqq = bfq_rq_pos_tree_lookup(bfqd, root, sector, &parent, NULL);
++ if (__bfqq)
++ return __bfqq;
++
++ /*
++ * If the exact sector wasn't found, the parent of the NULL leaf
++ * will contain the closest sector (rq_pos_tree sorted by
++ * next_request position).
++ */
++ __bfqq = rb_entry(parent, struct bfq_queue, pos_node);
++ if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector))
++ return __bfqq;
++
++ if (blk_rq_pos(__bfqq->next_rq) < sector)
++ node = rb_next(&__bfqq->pos_node);
++ else
++ node = rb_prev(&__bfqq->pos_node);
++ if (!node)
++ return NULL;
++
++ __bfqq = rb_entry(node, struct bfq_queue, pos_node);
++ if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector))
++ return __bfqq;
++
++ return NULL;
++}
++
++static struct bfq_queue *bfq_find_close_cooperator(struct bfq_data *bfqd,
++ struct bfq_queue *cur_bfqq,
++ sector_t sector)
++{
++ struct bfq_queue *bfqq;
++
++ /*
++ * We shall notice if some of the queues are cooperating,
++ * e.g., working closely on the same area of the device. In
++ * that case, we can group them together and: 1) don't waste
++ * time idling, and 2) serve the union of their requests in
++ * the best possible order for throughput.
++ */
++ bfqq = bfqq_find_close(bfqd, cur_bfqq, sector);
++ if (!bfqq || bfqq == cur_bfqq)
++ return NULL;
++
++ return bfqq;
++}
++
++static struct bfq_queue *
++bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++{
++ int process_refs, new_process_refs;
++ struct bfq_queue *__bfqq;
++
++ /*
++ * If there are no process references on the new_bfqq, then it is
++ * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain
++ * may have dropped their last reference (not just their last process
++ * reference).
++ */
++ if (!bfqq_process_refs(new_bfqq))
++ return NULL;
++
++ /* Avoid a circular list and skip interim queue merges. */
++ while ((__bfqq = new_bfqq->new_bfqq)) {
++ if (__bfqq == bfqq)
++ return NULL;
++ new_bfqq = __bfqq;
++ }
++
++ process_refs = bfqq_process_refs(bfqq);
++ new_process_refs = bfqq_process_refs(new_bfqq);
++ /*
++ * If the process for the bfqq has gone away, there is no
++ * sense in merging the queues.
++ */
++ if (process_refs == 0 || new_process_refs == 0)
++ return NULL;
++
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d",
++ new_bfqq->pid);
++
++ /*
++ * Merging is just a redirection: the requests of the process
++ * owning one of the two queues are redirected to the other queue.
++ * The latter queue, in its turn, is set as shared if this is the
++ * first time that the requests of some process are redirected to
++ * it.
++ *
++ * We redirect bfqq to new_bfqq and not the opposite, because we
++ * are in the context of the process owning bfqq, hence we have
++ * the io_cq of this process. So we can immediately configure this
++ * io_cq to redirect the requests of the process to new_bfqq.
++ *
++ * NOTE, even if new_bfqq coincides with the in-service queue, the
++ * io_cq of new_bfqq is not available, because, if the in-service
++ * queue is shared, bfqd->in_service_bic may not point to the
++ * io_cq of the in-service queue.
++ * Redirecting the requests of the process owning bfqq to the
++ * currently in-service queue is in any case the best option, as
++ * we feed the in-service queue with new requests close to the
++ * last request served and, by doing so, hopefully increase the
++ * throughput.
++ */
++ bfqq->new_bfqq = new_bfqq;
++ atomic_add(process_refs, &new_bfqq->ref);
++ return new_bfqq;
++}
++
++static bool bfq_may_be_close_cooperator(struct bfq_queue *bfqq,
++ struct bfq_queue *new_bfqq)
++{
++ if (bfq_class_idle(bfqq) || bfq_class_idle(new_bfqq) ||
++ (bfqq->ioprio_class != new_bfqq->ioprio_class))
++ return false;
++
++ /*
++ * If either of the queues has already been detected as seeky,
++ * then merging it with the other queue is unlikely to lead to
++ * sequential I/O.
++ */
++ if (BFQQ_SEEKY(bfqq) || BFQQ_SEEKY(new_bfqq))
++ return false;
++
++ /*
++ * Interleaved I/O is known to be done by (some) applications
++ * only for reads, so it does not make sense to merge async
++ * queues.
++ */
++ if (!bfq_bfqq_sync(bfqq) || !bfq_bfqq_sync(new_bfqq))
++ return false;
++
++ return true;
++}
++
++/*
++ * Attempt to schedule a merge of bfqq with the currently in-service queue
++ * or with a close queue among the scheduled queues.
++ * Return NULL if no merge was scheduled, a pointer to the shared bfq_queue
++ * structure otherwise.
++ *
++ * The OOM queue is not allowed to participate to cooperation: in fact, since
++ * the requests temporarily redirected to the OOM queue could be redirected
++ * again to dedicated queues at any time, the state needed to correctly
++ * handle merging with the OOM queue would be quite complex and expensive
++ * to maintain. Besides, in such a critical condition as an out of memory,
++ * the benefits of queue merging may be little relevant, or even negligible.
++ */
++static struct bfq_queue *
++bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ void *io_struct, bool request)
++{
++ struct bfq_queue *in_service_bfqq, *new_bfqq;
++
++ if (bfqq->new_bfqq)
++ return bfqq->new_bfqq;
++ if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
++ return NULL;
++ /* If device has only one backlogged bfq_queue, don't search. */
++ if (bfqd->busy_queues == 1)
++ return NULL;
++
++ in_service_bfqq = bfqd->in_service_queue;
++
++ if (!in_service_bfqq || in_service_bfqq == bfqq ||
++ !bfqd->in_service_bic ||
++ unlikely(in_service_bfqq == &bfqd->oom_bfqq))
++ goto check_scheduled;
++
++ if (bfq_rq_close_to_sector(io_struct, request, bfqd->last_position) &&
++ bfqq->entity.parent == in_service_bfqq->entity.parent &&
++ bfq_may_be_close_cooperator(bfqq, in_service_bfqq)) {
++ new_bfqq = bfq_setup_merge(bfqq, in_service_bfqq);
++ if (new_bfqq)
++ return new_bfqq;
++ }
++ /*
++ * Check whether there is a cooperator among currently scheduled
++ * queues. The only thing we need is that the bio/request is not
++ * NULL, as we need it to establish whether a cooperator exists.
++ */
++check_scheduled:
++ new_bfqq = bfq_find_close_cooperator(bfqd, bfqq,
++ bfq_io_struct_pos(io_struct, request));
++
++ BUG_ON(new_bfqq && bfqq->entity.parent != new_bfqq->entity.parent);
++
++ if (new_bfqq && likely(new_bfqq != &bfqd->oom_bfqq) &&
++ bfq_may_be_close_cooperator(bfqq, new_bfqq))
++ return bfq_setup_merge(bfqq, new_bfqq);
++
++ return NULL;
++}
++
++static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
++{
++ /*
++ * If !bfqq->bic, the queue is already shared or its requests
++ * have already been redirected to a shared queue; both idle window
++ * and weight raising state have already been saved. Do nothing.
++ */
++ if (!bfqq->bic)
++ return;
++ if (bfqq->bic->wr_time_left)
++ /*
++ * This is the queue of a just-started process, and would
++ * deserve weight raising: we set wr_time_left to the full
++ * weight-raising duration to trigger weight-raising when
++ * and if the queue is split and the first request of the
++ * queue is enqueued.
++ */
++ bfqq->bic->wr_time_left = bfq_wr_duration(bfqq->bfqd);
++ else if (bfqq->wr_coeff > 1) {
++ unsigned long wr_duration =
++ jiffies - bfqq->last_wr_start_finish;
++ /*
++ * It may happen that a queue's weight raising period lasts
++ * longer than its wr_cur_max_time, as weight raising is
++ * handled only when a request is enqueued or dispatched (it
++ * does not use any timer). If the weight raising period is
++ * about to end, don't save it.
++ */
++ if (bfqq->wr_cur_max_time <= wr_duration)
++ bfqq->bic->wr_time_left = 0;
++ else
++ bfqq->bic->wr_time_left =
++ bfqq->wr_cur_max_time - wr_duration;
++ /*
++ * The bfq_queue is becoming shared or the requests of the
++ * process owning the queue are being redirected to a shared
++ * queue. Stop the weight raising period of the queue, as in
++ * both cases it should not be owned by an interactive or
++ * soft real-time application.
++ */
++ bfq_bfqq_end_wr(bfqq);
++ } else
++ bfqq->bic->wr_time_left = 0;
++ bfqq->bic->saved_idle_window = bfq_bfqq_idle_window(bfqq);
++ bfqq->bic->saved_IO_bound = bfq_bfqq_IO_bound(bfqq);
++ bfqq->bic->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq);
++ bfqq->bic->was_in_burst_list = !hlist_unhashed(&bfqq->burst_list_node);
++ bfqq->bic->cooperations++;
++ bfqq->bic->failed_cooperations = 0;
++}
++
++static void bfq_get_bic_reference(struct bfq_queue *bfqq)
++{
++ /*
++ * If bfqq->bic has a non-NULL value, the bic to which it belongs
++ * is about to begin using a shared bfq_queue.
++ */
++ if (bfqq->bic)
++ atomic_long_inc(&bfqq->bic->icq.ioc->refcount);
++}
++
++static void
++bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
++ struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
++{
++ bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu",
++ (unsigned long) new_bfqq->pid);
++ /* Save weight raising and idle window of the merged queues */
++ bfq_bfqq_save_state(bfqq);
++ bfq_bfqq_save_state(new_bfqq);
++ if (bfq_bfqq_IO_bound(bfqq))
++ bfq_mark_bfqq_IO_bound(new_bfqq);
++ bfq_clear_bfqq_IO_bound(bfqq);
++ /*
++ * Grab a reference to the bic, to prevent it from being destroyed
++ * before being possibly touched by a bfq_split_bfqq().
++ */
++ bfq_get_bic_reference(bfqq);
++ bfq_get_bic_reference(new_bfqq);
++ /*
++ * Merge queues (that is, let bic redirect its requests to new_bfqq)
++ */
++ bic_set_bfqq(bic, new_bfqq, 1);
++ bfq_mark_bfqq_coop(new_bfqq);
++ /*
++ * new_bfqq now belongs to at least two bics (it is a shared queue):
++ * set new_bfqq->bic to NULL. bfqq either:
++ * - does not belong to any bic any more, and hence bfqq->bic must
++ * be set to NULL, or
++ * - is a queue whose owning bics have already been redirected to a
++ * different queue, hence the queue is destined to not belong to
++ * any bic soon and bfqq->bic is already NULL (therefore the next
++ * assignment causes no harm).
++ */
++ new_bfqq->bic = NULL;
++ bfqq->bic = NULL;
++ bfq_put_queue(bfqq);
++}
++
++static void bfq_bfqq_increase_failed_cooperations(struct bfq_queue *bfqq)
++{
++ struct bfq_io_cq *bic = bfqq->bic;
++ struct bfq_data *bfqd = bfqq->bfqd;
++
++ if (bic && bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh) {
++ bic->failed_cooperations++;
++ if (bic->failed_cooperations >= bfqd->bfq_failed_cooperations)
++ bic->cooperations = 0;
++ }
++}
++
+ static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+ struct bio *bio)
+ {
+ struct bfq_data *bfqd = q->elevator->elevator_data;
+ struct bfq_io_cq *bic;
++ struct bfq_queue *bfqq, *new_bfqq;
+
+ /*
+ * Disallow merge of a sync bio into an async request.
+@@ -1149,7 +1621,26 @@ static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+ if (!bic)
+ return 0;
+
+- return bic_to_bfqq(bic, bfq_bio_sync(bio)) == RQ_BFQQ(rq);
++ bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++ /*
++ * We take advantage of this function to perform an early merge
++ * of the queues of possible cooperating processes.
++ */
++ if (bfqq) {
++ new_bfqq = bfq_setup_cooperator(bfqd, bfqq, bio, false);
++ if (new_bfqq) {
++ bfq_merge_bfqqs(bfqd, bic, bfqq, new_bfqq);
++ /*
++ * If we get here, the bio will be queued in the
++ * shared queue, i.e., new_bfqq, so use new_bfqq
++ * to decide whether bio and rq can be merged.
++ */
++ bfqq = new_bfqq;
++ } else
++ bfq_bfqq_increase_failed_cooperations(bfqq);
++ }
++
++ return bfqq == RQ_BFQQ(rq);
+ }
+
+ static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
+@@ -1350,6 +1841,15 @@ static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+
+ __bfq_bfqd_reset_in_service(bfqd);
+
++ /*
++ * If this bfqq is shared between multiple processes, check
++ * to make sure that those processes are still issuing I/Os
++ * within the mean seek distance. If not, it may be time to
++ * break the queues apart again.
++ */
++ if (bfq_bfqq_coop(bfqq) && BFQQ_SEEKY(bfqq))
++ bfq_mark_bfqq_split_coop(bfqq);
++
+ if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
+ /*
+ * Overloading budget_timeout field to store the time
+@@ -1358,8 +1858,13 @@ static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ */
+ bfqq->budget_timeout = jiffies;
+ bfq_del_bfqq_busy(bfqd, bfqq, 1);
+- } else
++ } else {
+ bfq_activate_bfqq(bfqd, bfqq);
++ /*
++ * Resort priority tree of potential close cooperators.
++ */
++ bfq_pos_tree_add_move(bfqd, bfqq);
++ }
+ }
+
+ /**
+@@ -2246,10 +2751,12 @@ static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ /*
+ * If the queue was activated in a burst, or
+ * too much time has elapsed from the beginning
+- * of this weight-raising period, then end weight
+- * raising.
++ * of this weight-raising period, or the queue has
++ * exceeded the acceptable number of cooperations,
++ * then end weight raising.
+ */
+ if (bfq_bfqq_in_large_burst(bfqq) ||
++ bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh ||
+ time_is_before_jiffies(bfqq->last_wr_start_finish +
+ bfqq->wr_cur_max_time)) {
+ bfqq->last_wr_start_finish = jiffies;
+@@ -2478,6 +2985,25 @@ static void bfq_put_queue(struct bfq_queue *bfqq)
+ #endif
+ }
+
++static void bfq_put_cooperator(struct bfq_queue *bfqq)
++{
++ struct bfq_queue *__bfqq, *next;
++
++ /*
++ * If this queue was scheduled to merge with another queue, be
++ * sure to drop the reference taken on that queue (and others in
++ * the merge chain). See bfq_setup_merge and bfq_merge_bfqqs.
++ */
++ __bfqq = bfqq->new_bfqq;
++ while (__bfqq) {
++ if (__bfqq == bfqq)
++ break;
++ next = __bfqq->new_bfqq;
++ bfq_put_queue(__bfqq);
++ __bfqq = next;
++ }
++}
++
+ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+ if (bfqq == bfqd->in_service_queue) {
+@@ -2488,6 +3014,8 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq,
+ atomic_read(&bfqq->ref));
+
++ bfq_put_cooperator(bfqq);
++
+ bfq_put_queue(bfqq);
+ }
+
+@@ -2496,6 +3024,25 @@ static void bfq_init_icq(struct io_cq *icq)
+ struct bfq_io_cq *bic = icq_to_bic(icq);
+
+ bic->ttime.last_end_request = jiffies;
++ /*
++ * A newly created bic indicates that the process has just
++ * started doing I/O, and is probably mapping into memory its
++ * executable and libraries: it definitely needs weight raising.
++ * There is however the possibility that the process performs,
++ * for a while, I/O close to some other process. EQM intercepts
++ * this behavior and may merge the queue corresponding to the
++ * process with some other queue, BEFORE the weight of the queue
++ * is raised. Merged queues are not weight-raised (they are assumed
++ * to belong to processes that benefit only from high throughput).
++ * If the merge is basically the consequence of an accident, then
++ * the queue will be split soon and will get back its old weight.
++ * It is then important to write down somewhere that this queue
++ * does need weight raising, even if it did not make it to get its
++ * weight raised before being merged. To this purpose, we overload
++ * the field raising_time_left and assign 1 to it, to mark the queue
++ * as needing weight raising.
++ */
++ bic->wr_time_left = 1;
+ }
+
+ static void bfq_exit_icq(struct io_cq *icq)
+@@ -2509,6 +3056,13 @@ static void bfq_exit_icq(struct io_cq *icq)
+ }
+
+ if (bic->bfqq[BLK_RW_SYNC]) {
++ /*
++ * If the bic is using a shared queue, put the reference
++ * taken on the io_context when the bic started using a
++ * shared bfq_queue.
++ */
++ if (bfq_bfqq_coop(bic->bfqq[BLK_RW_SYNC]))
++ put_io_context(icq->ioc);
+ bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
+ bic->bfqq[BLK_RW_SYNC] = NULL;
+ }
+@@ -2814,6 +3368,10 @@ static void bfq_update_idle_window(struct bfq_data *bfqd,
+ if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq))
+ return;
+
++ /* Idle window just restored, statistics are meaningless. */
++ if (bfq_bfqq_just_split(bfqq))
++ return;
++
+ enable_idle = bfq_bfqq_idle_window(bfqq);
+
+ if (atomic_read(&bic->icq.ioc->active_ref) == 0 ||
+@@ -2861,6 +3419,7 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
+ !BFQQ_SEEKY(bfqq))
+ bfq_update_idle_window(bfqd, bfqq, bic);
++ bfq_clear_bfqq_just_split(bfqq);
+
+ bfq_log_bfqq(bfqd, bfqq,
+ "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
+@@ -2925,12 +3484,47 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ static void bfq_insert_request(struct request_queue *q, struct request *rq)
+ {
+ struct bfq_data *bfqd = q->elevator->elevator_data;
+- struct bfq_queue *bfqq = RQ_BFQQ(rq);
++ struct bfq_queue *bfqq = RQ_BFQQ(rq), *new_bfqq;
+
+ assert_spin_locked(bfqd->queue->queue_lock);
+
++ /*
++ * An unplug may trigger a requeue of a request from the device
++ * driver: make sure we are in process context while trying to
++ * merge two bfq_queues.
++ */
++ if (!in_interrupt()) {
++ new_bfqq = bfq_setup_cooperator(bfqd, bfqq, rq, true);
++ if (new_bfqq) {
++ if (bic_to_bfqq(RQ_BIC(rq), 1) != bfqq)
++ new_bfqq = bic_to_bfqq(RQ_BIC(rq), 1);
++ /*
++ * Release the request's reference to the old bfqq
++ * and make sure one is taken to the shared queue.
++ */
++ new_bfqq->allocated[rq_data_dir(rq)]++;
++ bfqq->allocated[rq_data_dir(rq)]--;
++ atomic_inc(&new_bfqq->ref);
++ bfq_put_queue(bfqq);
++ if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq)
++ bfq_merge_bfqqs(bfqd, RQ_BIC(rq),
++ bfqq, new_bfqq);
++ rq->elv.priv[1] = new_bfqq;
++ bfqq = new_bfqq;
++ } else
++ bfq_bfqq_increase_failed_cooperations(bfqq);
++ }
++
+ bfq_add_request(rq);
+
++ /*
++ * Here a newly-created bfq_queue has already started a weight-raising
++ * period: clear raising_time_left to prevent bfq_bfqq_save_state()
++ * from assigning it a full weight-raising period. See the detailed
++ * comments about this field in bfq_init_icq().
++ */
++ if (bfqq->bic)
++ bfqq->bic->wr_time_left = 0;
+ rq->fifo_time = jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
+ list_add_tail(&rq->queuelist, &bfqq->fifo);
+
+@@ -3099,6 +3693,32 @@ static void bfq_put_request(struct request *rq)
+ }
+
+ /*
++ * Returns NULL if a new bfqq should be allocated, or the old bfqq if this
++ * was the last process referring to said bfqq.
++ */
++static struct bfq_queue *
++bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
++{
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue");
++
++ put_io_context(bic->icq.ioc);
++
++ if (bfqq_process_refs(bfqq) == 1) {
++ bfqq->pid = current->pid;
++ bfq_clear_bfqq_coop(bfqq);
++ bfq_clear_bfqq_split_coop(bfqq);
++ return bfqq;
++ }
++
++ bic_set_bfqq(bic, NULL, 1);
++
++ bfq_put_cooperator(bfqq);
++
++ bfq_put_queue(bfqq);
++ return NULL;
++}
++
++/*
+ * Allocate bfq data structures associated with this request.
+ */
+ static int bfq_set_request(struct request_queue *q, struct request *rq,
+@@ -3110,6 +3730,7 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ const int is_sync = rq_is_sync(rq);
+ struct bfq_queue *bfqq;
+ unsigned long flags;
++ bool split = false;
+
+ might_sleep_if(gfpflags_allow_blocking(gfp_mask));
+
+@@ -3122,15 +3743,30 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+
+ bfq_bic_update_cgroup(bic, bio);
+
++new_queue:
+ bfqq = bic_to_bfqq(bic, is_sync);
+ if (!bfqq || bfqq == &bfqd->oom_bfqq) {
+ bfqq = bfq_get_queue(bfqd, bio, is_sync, bic, gfp_mask);
+ bic_set_bfqq(bic, bfqq, is_sync);
+- if (is_sync) {
+- if (bfqd->large_burst)
++ if (split && is_sync) {
++ if ((bic->was_in_burst_list && bfqd->large_burst) ||
++ bic->saved_in_large_burst)
+ bfq_mark_bfqq_in_large_burst(bfqq);
+- else
++ else {
+ bfq_clear_bfqq_in_large_burst(bfqq);
++ if (bic->was_in_burst_list)
++ hlist_add_head(&bfqq->burst_list_node,
++ &bfqd->burst_list);
++ }
++ }
++ } else {
++ /* If the queue was seeky for too long, break it apart. */
++ if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) {
++ bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq");
++ bfqq = bfq_split_bfqq(bic, bfqq);
++ split = true;
++ if (!bfqq)
++ goto new_queue;
+ }
+ }
+
+@@ -3142,6 +3778,26 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ rq->elv.priv[0] = bic;
+ rq->elv.priv[1] = bfqq;
+
++ /*
++ * If a bfq_queue has only one process reference, it is owned
++ * by only one bfq_io_cq: we can set the bic field of the
++ * bfq_queue to the address of that structure. Also, if the
++ * queue has just been split, mark a flag so that the
++ * information is available to the other scheduler hooks.
++ */
++ if (likely(bfqq != &bfqd->oom_bfqq) && bfqq_process_refs(bfqq) == 1) {
++ bfqq->bic = bic;
++ if (split) {
++ bfq_mark_bfqq_just_split(bfqq);
++ /*
++ * If the queue has just been split from a shared
++ * queue, restore the idle window and the possible
++ * weight raising period.
++ */
++ bfq_bfqq_resume_state(bfqq, bic);
++ }
++ }
++
+ spin_unlock_irqrestore(q->queue_lock, flags);
+
+ return 0;
+@@ -3295,6 +3951,7 @@ static void bfq_init_root_group(struct bfq_group *root_group,
+ root_group->my_entity = NULL;
+ root_group->bfqd = bfqd;
+ #endif
++ root_group->rq_pos_tree = RB_ROOT;
+ for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
+ root_group->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
+ }
+@@ -3375,6 +4032,8 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
+ bfqd->bfq_timeout[BLK_RW_ASYNC] = bfq_timeout_async;
+ bfqd->bfq_timeout[BLK_RW_SYNC] = bfq_timeout_sync;
+
++ bfqd->bfq_coop_thresh = 2;
++ bfqd->bfq_failed_cooperations = 7000;
+ bfqd->bfq_requests_within_timer = 120;
+
+ bfqd->bfq_large_burst_thresh = 11;
+diff --git a/block/bfq.h b/block/bfq.h
+index 2bf54ae..fcce855 100644
+--- a/block/bfq.h
++++ b/block/bfq.h
+@@ -183,6 +183,8 @@ struct bfq_group;
+ * ioprio_class value.
+ * @new_bfqq: shared bfq_queue if queue is cooperating with
+ * one or more other queues.
++ * @pos_node: request-position tree member (see bfq_group's @rq_pos_tree).
++ * @pos_root: request-position tree root (see bfq_group's @rq_pos_tree).
+ * @sort_list: sorted list of pending requests.
+ * @next_rq: if fifo isn't expired, next request to serve.
+ * @queued: nr of requests queued in @sort_list.
+@@ -304,6 +306,26 @@ struct bfq_ttime {
+ * @ttime: associated @bfq_ttime struct
+ * @ioprio: per (request_queue, blkcg) ioprio.
+ * @blkcg_id: id of the blkcg the related io_cq belongs to.
++ * @wr_time_left: snapshot of the time left before weight raising ends
++ * for the sync queue associated to this process; this
++ * snapshot is taken to remember this value while the weight
++ * raising is suspended because the queue is merged with a
++ * shared queue, and is used to set @raising_cur_max_time
++ * when the queue is split from the shared queue and its
++ * weight is raised again
++ * @saved_idle_window: same purpose as the previous field for the idle
++ * window
++ * @saved_IO_bound: same purpose as the previous two fields for the I/O
++ * bound classification of a queue
++ * @saved_in_large_burst: same purpose as the previous fields for the
++ * value of the field keeping the queue's belonging
++ * to a large burst
++ * @was_in_burst_list: true if the queue belonged to a burst list
++ * before its merge with another cooperating queue
++ * @cooperations: counter of consecutive successful queue merges underwent
++ * by any of the process' @bfq_queues
++ * @failed_cooperations: counter of consecutive failed queue merges of any
++ * of the process' @bfq_queues
+ */
+ struct bfq_io_cq {
+ struct io_cq icq; /* must be the first member */
+@@ -314,6 +336,16 @@ struct bfq_io_cq {
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ uint64_t blkcg_id; /* the current blkcg ID */
+ #endif
++
++ unsigned int wr_time_left;
++ bool saved_idle_window;
++ bool saved_IO_bound;
++
++ bool saved_in_large_burst;
++ bool was_in_burst_list;
++
++ unsigned int cooperations;
++ unsigned int failed_cooperations;
+ };
+
+ enum bfq_device_speed {
+@@ -557,6 +589,9 @@ enum bfqq_state_flags {
+ * may need softrt-next-start
+ * update
+ */
++ BFQ_BFQQ_FLAG_coop, /* bfqq is shared */
++ BFQ_BFQQ_FLAG_split_coop, /* shared bfqq will be split */
++ BFQ_BFQQ_FLAG_just_split, /* queue has just been split */
+ };
+
+ #define BFQ_BFQQ_FNS(name) \
+@@ -583,6 +618,9 @@ BFQ_BFQQ_FNS(budget_new);
+ BFQ_BFQQ_FNS(IO_bound);
+ BFQ_BFQQ_FNS(in_large_burst);
+ BFQ_BFQQ_FNS(constantly_seeky);
++BFQ_BFQQ_FNS(coop);
++BFQ_BFQQ_FNS(split_coop);
++BFQ_BFQQ_FNS(just_split);
+ BFQ_BFQQ_FNS(softrt_update);
+ #undef BFQ_BFQQ_FNS
+
+@@ -675,6 +713,9 @@ struct bfq_group_data {
+ * are groups with more than one active @bfq_entity
+ * (see the comments to the function
+ * bfq_bfqq_must_not_expire()).
++ * @rq_pos_tree: rbtree sorted by next_request position, used when
++ * determining if two or more queues have interleaving
++ * requests (see bfq_find_close_cooperator()).
+ *
+ * Each (device, cgroup) pair has its own bfq_group, i.e., for each cgroup
+ * there is a set of bfq_groups, each one collecting the lower-level
+@@ -701,6 +742,8 @@ struct bfq_group {
+
+ int active_entities;
+
++ struct rb_root rq_pos_tree;
++
+ struct bfqg_stats stats;
+ struct bfqg_stats dead_stats; /* stats pushed from dead children */
+ };
+@@ -711,6 +754,8 @@ struct bfq_group {
+
+ struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR];
+ struct bfq_queue *async_idle_bfqq;
++
++ struct rb_root rq_pos_tree;
+ };
+ #endif
+
+@@ -787,6 +832,27 @@ static void bfq_put_bfqd_unlock(struct bfq_data *bfqd, unsigned long *flags)
+ spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
+ }
+
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++
++static struct bfq_group *bfq_bfqq_to_bfqg(struct bfq_queue *bfqq)
++{
++ struct bfq_entity *group_entity = bfqq->entity.parent;
++
++ if (!group_entity)
++ group_entity = &bfqq->bfqd->root_group->entity;
++
++ return container_of(group_entity, struct bfq_group, entity);
++}
++
++#else
++
++static struct bfq_group *bfq_bfqq_to_bfqg(struct bfq_queue *bfqq)
++{
++ return bfqq->bfqd->root_group;
++}
++
++#endif
++
+ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio);
+ static void bfq_put_queue(struct bfq_queue *bfqq);
+ static void bfq_dispatch_insert(struct request_queue *q, struct request *rq);
+--
+2.10.0
+
diff --git a/5004_blkck-bfq-turn-BFQ-v7r11-for-4.10.0-into-BFQ-v8r8-for-4.patch1 b/5004_blkck-bfq-turn-BFQ-v7r11-for-4.10.0-into-BFQ-v8r8-for-4.patch1
new file mode 100644
index 0000000..48e64d9
--- /dev/null
+++ b/5004_blkck-bfq-turn-BFQ-v7r11-for-4.10.0-into-BFQ-v8r8-for-4.patch1
@@ -0,0 +1,9187 @@
+From b782bbfcb5e08e92c0448d0c6a870b44db198837 Mon Sep 17 00:00:00 2001
+From: Paolo Valente <paolo.valente@linaro.org>
+Date: Mon, 16 May 2016 11:16:17 +0200
+Subject: [PATCH 4/4] Turn BFQ-v7r11 for 4.10.0 into BFQ-v8r8 for 4.10.0
+
+Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
+---
+ Documentation/block/00-INDEX | 2 +
+ Documentation/block/bfq-iosched.txt | 530 ++++++
+ block/Kconfig.iosched | 18 +-
+ block/bfq-cgroup.c | 510 +++---
+ block/bfq-iosched.c | 3414 ++++++++++++++++++++++-------------
+ block/bfq-sched.c | 1290 ++++++++++---
+ block/bfq.h | 800 ++++----
+ 7 files changed, 4390 insertions(+), 2174 deletions(-)
+ create mode 100644 Documentation/block/bfq-iosched.txt
+
+diff --git a/Documentation/block/00-INDEX b/Documentation/block/00-INDEX
+index e55103a..8d55b4b 100644
+--- a/Documentation/block/00-INDEX
++++ b/Documentation/block/00-INDEX
+@@ -1,5 +1,7 @@
+ 00-INDEX
+ - This file
++bfq-iosched.txt
++ - BFQ IO scheduler and its tunables
+ biodoc.txt
+ - Notes on the Generic Block Layer Rewrite in Linux 2.5
+ biovecs.txt
+diff --git a/Documentation/block/bfq-iosched.txt b/Documentation/block/bfq-iosched.txt
+new file mode 100644
+index 0000000..13b5248
+--- /dev/null
++++ b/Documentation/block/bfq-iosched.txt
+@@ -0,0 +1,530 @@
++BFQ (Budget Fair Queueing)
++==========================
++
++BFQ is a proportional-share I/O scheduler, with some extra
++low-latency capabilities. In addition to cgroups support (blkio or io
++controllers), BFQ's main features are:
++- BFQ guarantees a high system and application responsiveness, and a
++ low latency for time-sensitive applications, such as audio or video
++ players;
++- BFQ distributes bandwidth, and not just time, among processes or
++ groups (switching back to time distribution when needed to keep
++ throughput high).
++
++On average CPUs, the current version of BFQ can handle devices
++performing at most ~30 KIOPS; at most ~50 KIOPS on faster CPUs. As a
++reference, 30-50 KIOPS correspond to very high bandwidths with
++sequential I/O (e.g., 8-12 GB/s if I/O requests are 256 KB large), and
++to 120-200 MB/s with 4KB random I/O.
++
++The table of contents follows. Impatient readers can jump straight to Section 3.
++
++CONTENTS
++
++1. When may BFQ be useful?
++ 1-1 Personal systems
++ 1-2 Server systems
++2. How does BFQ work?
++3. What are BFQ's tunables?
++4. Group scheduling with BFQ
++ 4-1 Service guarantees provided
++ 4-2 Interface
++
++1. When may BFQ be useful?
++==========================
++
++BFQ provides the following benefits on personal and server systems.
++
++1-1 Personal systems
++--------------------
++
++Low latency for interactive applications
++
++Regardless of the actual background workload, BFQ guarantees that, for
++interactive tasks, the storage device is virtually as responsive as if
++it was idle. For example, even if one or more of the following
++background workloads are being executed:
++- one or more large files are being read, written or copied,
++- a tree of source files is being compiled,
++- one or more virtual machines are performing I/O,
++- a software update is in progress,
++- indexing daemons are scanning filesystems and updating their
++ databases,
++starting an application or loading a file from within an application
++takes about the same time as if the storage device was idle. As a
++comparison, with CFQ, NOOP or DEADLINE, and in the same conditions,
++applications experience high latencies, or even become unresponsive
++until the background workload terminates (also on SSDs).
++
++Low latency for soft real-time applications
++
++Soft real-time applications, such as audio and video
++players/streamers, also enjoy low latency and a low drop rate,
++regardless of the background I/O workload. As a consequence, these
++applications experience almost no glitches due to that workload.
++
++Higher speed for code-development tasks
++
++If some additional workload happens to be executed in parallel, then
++BFQ executes the I/O-related components of typical code-development
++tasks (compilation, checkout, merge, ...) much more quickly than CFQ,
++NOOP or DEADLINE.
++
++High throughput
++
++On hard disks, BFQ achieves up to 30% higher throughput than CFQ, and
++up to 150% higher throughput than DEADLINE and NOOP, with all the
++sequential workloads considered in our tests. With random workloads,
++and with all the workloads on flash-based devices, BFQ achieves,
++instead, about the same throughput as the other schedulers.
++
++Strong fairness, bandwidth and delay guarantees
++
++BFQ distributes the device throughput, and not just the device time,
++among I/O-bound applications in proportion to their weights, with any
++workload and regardless of the device parameters. From these bandwidth
++guarantees, it is possible to compute tight per-I/O-request delay
++guarantees by a simple formula. If not configured for strict service
++guarantees, BFQ switches to time-based resource sharing (only) for
++applications that would otherwise cause a throughput loss.
++
++1-2 Server systems
++------------------
++
++Most benefits for server systems follow from the same service
++properties as above. In particular, regardless of whether additional,
++possibly heavy workloads are being served, BFQ guarantees:
++
++. audio and video-streaming with zero or very low jitter and drop
++ rate;
++
++. fast retrieval of WEB pages and embedded objects;
++
++. real-time recording of data in live-dumping applications (e.g.,
++ packet logging);
++
++. responsiveness in local and remote access to a server.
++
++
++2. How does BFQ work?
++=====================
++
++BFQ is a proportional-share I/O scheduler, whose general structure,
++plus a lot of code, are borrowed from CFQ.
++
++- Each process doing I/O on a device is associated with a weight and a
++ (bfq_)queue.
++
++- BFQ grants exclusive access to the device, for a while, to one queue
++ (process) at a time, and implements this service model by
++ associating every queue with a budget, measured in number of
++ sectors.
++
++ - After a queue is granted access to the device, the budget of the
++ queue is decremented, on each request dispatch, by the size of the
++ request.
++
++ - The in-service queue is expired, i.e., its service is suspended,
++ only if one of the following events occurs: 1) the queue finishes
++ its budget, 2) the queue empties, 3) a "budget timeout" fires.
++
++ - The budget timeout prevents processes doing random I/O from
++ holding the device for too long and dramatically reducing
++ throughput.
++
++ - Actually, as in CFQ, a queue associated with a process issuing
++ sync requests may not be expired immediately when it empties. In
++ contrast, BFQ may idle the device for a short time interval,
++ giving the process the chance to go on being served if it issues
++ a new request in time. Device idling typically boosts the
++ throughput on rotational devices, if processes do synchronous
++ and sequential I/O. In addition, under BFQ, device idling is
++ also instrumental in guaranteeing the desired throughput
++ fraction to processes issuing sync requests (see the description
++ of the slice_idle tunable in this document, or [1, 2], for more
++ details).
++
++ - With respect to idling for service guarantees, if several
++ processes are competing for the device at the same time, but
++ all processes (and groups, after the following commit) have
++ the same weight, then BFQ guarantees the expected throughput
++ distribution without ever idling the device. Throughput is
++ thus as high as possible in this common scenario.
++
++ - If low-latency mode is enabled (default configuration), BFQ
++ executes some special heuristics to detect interactive and soft
++ real-time applications (e.g., video or audio players/streamers),
++ and to reduce their latency. The most important action taken to
++ achieve this goal is to give to the queues associated with these
++ applications more than their fair share of the device
++ throughput. For brevity, we call just "weight-raising" the whole
++ sets of actions taken by BFQ to privilege these queues. In
++ particular, BFQ provides a milder form of weight-raising for
++ interactive applications, and a stronger form for soft real-time
++ applications.
++
++ - BFQ automatically deactivates idling for queues born in a burst of
++ queue creations. In fact, these queues are usually associated with
++ the processes of applications and services that benefit mostly
++ from a high throughput. Examples are systemd during boot, or git
++ grep.
++
++ - As CFQ, BFQ merges queues performing interleaved I/O, i.e.,
++ performing random I/O that becomes mostly sequential if
++ merged. Differently from CFQ, BFQ achieves this goal with a more
++ reactive mechanism, called Early Queue Merge (EQM). EQM is so
++ responsive in detecting interleaved I/O (cooperating processes),
++ that it enables BFQ to achieve a high throughput, by queue
++ merging, even for queues for which CFQ needs a different
++ mechanism, preemption, to get a high throughput. As such EQM is a
++ unified mechanism to achieve a high throughput with interleaved
++ I/O.
++
++ - Queues are scheduled according to a variant of WF2Q+, named
++ B-WF2Q+, and implemented using an augmented rb-tree to preserve an
++ O(log N) overall complexity. See [2] for more details. B-WF2Q+ is
++ also ready for hierarchical scheduling. However, for a cleaner
++ logical breakdown, the code that enables and completes
++ hierarchical support is provided in the next commit, which focuses
++ exactly on this feature.
++
++ - B-WF2Q+ guarantees a tight deviation with respect to an ideal,
++ perfectly fair, and smooth service. In particular, B-WF2Q+
++ guarantees that each queue receives a fraction of the device
++ throughput proportional to its weight, even if the throughput
++ fluctuates, and regardless of: the device parameters, the current
++ workload and the budgets assigned to the queue.
++
++ - The last, budget-independence, property (although probably
++ counterintuitive in the first place) is definitely beneficial, for
++ the following reasons:
++
++ - First, with any proportional-share scheduler, the maximum
++ deviation with respect to an ideal service is proportional to
++ the maximum budget (slice) assigned to queues. As a consequence,
++ BFQ can keep this deviation tight not only because of the
++ accurate service of B-WF2Q+, but also because BFQ *does not*
++ need to assign a larger budget to a queue to let the queue
++ receive a higher fraction of the device throughput.
++
++ - Second, BFQ is free to choose, for every process (queue), the
++ budget that best fits the needs of the process, or best
++ leverages the I/O pattern of the process. In particular, BFQ
++ updates queue budgets with a simple feedback-loop algorithm that
++ allows a high throughput to be achieved, while still providing
++ tight latency guarantees to time-sensitive applications. When
++ the in-service queue expires, this algorithm computes the next
++ budget of the queue so as to:
++
++ - Let large budgets be eventually assigned to the queues
++ associated with I/O-bound applications performing sequential
++ I/O: in fact, the longer these applications are served once
++ got access to the device, the higher the throughput is.
++
++ - Let small budgets be eventually assigned to the queues
++ associated with time-sensitive applications (which typically
++ perform sporadic and short I/O), because, the smaller the
++ budget assigned to a queue waiting for service is, the sooner
++ B-WF2Q+ will serve that queue (Subsec 3.3 in [2]).
++
++- If several processes are competing for the device at the same time,
++ but all processes and groups have the same weight, then BFQ
++ guarantees the expected throughput distribution without ever idling
++ the device. It uses preemption instead. Throughput is then much
++ higher in this common scenario.
++
++- ioprio classes are served in strict priority order, i.e.,
++ lower-priority queues are not served as long as there are
++ higher-priority queues. Among queues in the same class, the
++ bandwidth is distributed in proportion to the weight of each
++ queue. A very thin extra bandwidth is however guaranteed to
++ the Idle class, to prevent it from starving.
++
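To make the budget-based service model described above more concrete, here is a
minimal user-space sketch (not BFQ's actual code; the structure, names and
numbers are illustrative only): a queue is charged for each dispatched request,
and is expired when its budget runs out, when it has no more requests, or when
its budget timeout fires.

  #include <stdbool.h>
  #include <stdio.h>

  /* Illustrative only: a toy model of BFQ's per-queue budget accounting. */
  struct toy_queue {
          long budget;        /* remaining budget, in sectors */
          int  nr_queued;     /* pending requests */
          long timeout_left;  /* budget timeout, in arbitrary ticks */
  };

  /* Charge one dispatched request and decide whether to expire the queue. */
  static bool dispatch_and_maybe_expire(struct toy_queue *q, long req_sectors)
  {
          q->budget -= req_sectors;    /* budget is consumed by service, not time */
          q->nr_queued--;
          q->timeout_left--;

          return q->budget <= 0        /* 1) budget exhausted   */
              || q->nr_queued == 0     /* 2) queue emptied      */
              || q->timeout_left <= 0; /* 3) budget timeout     */
  }

  int main(void)
  {
          struct toy_queue q = { .budget = 512, .nr_queued = 3, .timeout_left = 100 };

          while (!dispatch_and_maybe_expire(&q, 256))
                  ;
          printf("queue expired with budget %ld and %d requests left\n",
                 q.budget, q.nr_queued);
          return 0;
  }
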
++
++3. What are BFQ's tunables?
++==========================
++
++The tunables back_seek_max, back_seek_penalty, fifo_expire_async and
++fifo_expire_sync below are the same as in CFQ. Their description is
++just copied from that for CFQ. Some considerations in the description
++of slice_idle are copied from CFQ too.
++
++per-process ioprio and weight
++-----------------------------
++
++Unless the cgroups interface is used (see "4. BFQ group scheduling"),
++weights can be assigned to processes only indirectly, through I/O
++priorities, and according to the relation:
++weight = (IOPRIO_BE_NR - ioprio) * 10.
++
++Beware that, if low-latency is set, then BFQ automatically raises the
++weight of the queues associated with interactive and soft real-time
++applications. Unset this tunable if you need/want to control weights.
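
As a quick sanity check of the relation above, here is a small illustrative
re-implementation (not the kernel's own helper), assuming the usual
IOPRIO_BE_NR value of 8: ioprio 0 maps to weight 80, the default ioprio 4 to
weight 40, and ioprio 7 to weight 10.

  #include <stdio.h>

  #define IOPRIO_BE_NR 8   /* number of best-effort I/O priority levels */

  /* Weight implied by a best-effort ioprio, per the relation above. */
  static int ioprio_to_bfq_weight(int ioprio)
  {
          return (IOPRIO_BE_NR - ioprio) * 10;
  }

  int main(void)
  {
          for (int ioprio = 0; ioprio < IOPRIO_BE_NR; ioprio++)
                  printf("ioprio %d -> weight %d\n",
                         ioprio, ioprio_to_bfq_weight(ioprio));
          return 0;
  }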
++
++slice_idle
++----------
++
++This parameter specifies how long BFQ should idle for the next I/O
++request, when certain sync BFQ queues become empty. By default
++slice_idle is a non-zero value. Idling has a double purpose: boosting
++throughput and making sure that the desired throughput distribution is
++respected (see the description of how BFQ works, and, if needed, the
++papers referred there).
++
++As for throughput, idling can be very helpful on highly seeky media
++like single spindle SATA/SAS disks where we can cut down on overall
++number of seeks and see improved throughput.
++
++Setting slice_idle to 0 will remove all the idling on queues and one
++should see an overall improved throughput on faster storage devices
++like multiple SATA/SAS disks in hardware RAID configuration.
++
++So depending on storage and workload, it might be useful to set
++slice_idle=0. In general, for SATA/SAS disks and software RAID of
++SATA/SAS disks, keeping slice_idle enabled should be useful. For any
++configurations where there are multiple spindles behind single LUN
++(Host based hardware RAID controller or for storage arrays), setting
++slice_idle=0 might result in better throughput and acceptable
++latencies.
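
For reference, with the legacy request layer these per-scheduler tunables are
typically exposed under /sys/block/<dev>/queue/iosched/ once BFQ is the active
scheduler. The sketch below flips slice_idle off for a fast device; the device
name "sda" and the exact paths are assumptions about your setup, and the
program must run as root.

  #include <stdio.h>
  #include <stdlib.h>

  /* Assumed paths; adjust "sda" to your device. */
  #define SCHED_PATH "/sys/block/sda/queue/scheduler"
  #define IDLE_PATH  "/sys/block/sda/queue/iosched/slice_idle"

  static void write_str(const char *path, const char *val)
  {
          FILE *f = fopen(path, "w");

          if (!f) {
                  perror(path);
                  exit(1);
          }
          fprintf(f, "%s\n", val);
          fclose(f);
  }

  int main(void)
  {
          write_str(SCHED_PATH, "bfq"); /* make BFQ the active scheduler */
          write_str(IDLE_PATH, "0");    /* disable idling on fast storage */
          return 0;
  }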
++
++Idling is however necessary to have service guarantees enforced in
++case of differentiated weights or differentiated I/O-request lengths.
++To see why, suppose that a given BFQ queue A must get several I/O
++requests served for each request served for another queue B. Idling
++ensures that, if A makes a new I/O request slightly after becoming
++empty, then no request of B is dispatched in the middle, and thus A
++does not lose the possibility to get more than one request dispatched
++before the next request of B is dispatched. Note that idling
++guarantees the desired differentiated treatment of queues only in
++terms of I/O-request dispatches. To guarantee that the actual service
++order then corresponds to the dispatch order, the strict_guarantees
++tunable must be set too.
++
++There is an important flip side to idling: apart from the above cases,
++where it is also beneficial for throughput, idling can severely impact
++throughput. One important case is random workload. Because of this
++issue, BFQ tends to avoid idling as much as possible, when it is not
++beneficial also for throughput. As a consequence of this behavior, and
++of further issues described for the strict_guarantees tunable,
++short-term service guarantees may be occasionally violated. And, in
++some cases, these guarantees may be more important than guaranteeing
++maximum throughput. For example, in video playing/streaming, a very
++low drop rate may be more important than maximum throughput. In these
++cases, consider setting the strict_guarantees parameter.
++
++strict_guarantees
++-----------------
++
++If this parameter is set (default: unset), then BFQ
++
++- always performs idling when the in-service queue becomes empty;
++
++- forces the device to serve one I/O request at a time, by dispatching a
++ new request only if there is no outstanding request.
++
++In the presence of differentiated weights or I/O-request sizes, both
++the above conditions are needed to guarantee that every BFQ queue
++receives its allotted share of the bandwidth. The first condition is
++needed for the reasons explained in the description of the slice_idle
++tunable. The second condition is needed because all modern storage
++devices reorder internally-queued requests, which may trivially break
++the service guarantees enforced by the I/O scheduler.
++
++Setting strict_guarantees may evidently affect throughput.
++
++back_seek_max
++-------------
++
++This specifies, given in Kbytes, the maximum "distance" for backward seeking.
++The distance is the amount of space from the current head location to the
++sectors that are backward in terms of distance.
++
++This parameter allows the scheduler to anticipate requests in the "backward"
++direction and consider them as being the "next" if they are within this
++distance from the current head location.
++
++back_seek_penalty
++-----------------
++
++This parameter is used to compute the cost of backward seeking. If the
++backward distance of a request is just 1/back_seek_penalty times that of
++a "front" request, then the seek costs of the two are considered equal.
++
++In that case the scheduler will not bias toward either request (otherwise it
++biases toward the front request). The default value of back_seek_penalty is 2.
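
Read together, back_seek_max and back_seek_penalty can be pictured as in the
sketch below. This is an illustration of the idea, not BFQ's actual
request-selection code: a backward request is a candidate only within
back_seek_max, and its distance is weighted by back_seek_penalty before being
compared with the forward distance.

  #include <stdbool.h>
  #include <stdio.h>

  /* Illustrative defaults; the real values come from the sysfs tunables. */
  #define BACK_SEEK_MAX_KB   16384
  #define BACK_SEEK_PENALTY  2

  /*
   * Decide whether a backward request back_dist sectors behind the head
   * looks "closer" than a forward request front_dist sectors ahead.
   */
  static bool prefer_backward(long back_dist, long front_dist)
  {
          if (back_dist > (long)BACK_SEEK_MAX_KB * 2) /* KB -> 512 B sectors */
                  return false;                       /* too far behind */
          return back_dist * BACK_SEEK_PENALTY < front_dist;
  }

  int main(void)
  {
          printf("%d\n", prefer_backward(100, 300)); /* 1: 100*2 < 300  */
          printf("%d\n", prefer_backward(200, 300)); /* 0: 200*2 >= 300 */
          return 0;
  }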
++
++fifo_expire_async
++-----------------
++
++This parameter is used to set the timeout of asynchronous requests. Default
++value of this is 248ms.
++
++fifo_expire_sync
++----------------
++
++This parameter is used to set the timeout of synchronous requests. Default
++value of this is 124ms. To favor synchronous requests over asynchronous
++ones, decrease this value relative to fifo_expire_async.
++
++low_latency
++-----------
++
++This parameter is used to enable/disable BFQ's low latency mode. By
++default, low latency mode is enabled. If enabled, interactive and soft
++real-time applications are privileged and experience a lower latency,
++as explained in more detail in the description of how BFQ works.
++
++DO NOT enable this mode if you need full control on bandwidth
++distribution. In fact, if it is enabled, then BFQ automatically
++increases the bandwidth share of privileged applications, as the main
++means to guarantee a lower latency to them.
++
++timeout_sync
++------------
++
++Maximum amount of device time that can be given to a task (queue) once
++it has been selected for service. On devices with costly seeks,
++increasing this time usually increases maximum throughput. On the
++opposite end, increasing this time coarsens the granularity of the
++short-term bandwidth and latency guarantees, especially if the
++following parameter is set to zero.
++
++max_budget
++----------
++
++Maximum amount of service, measured in sectors, that can be provided
++to a BFQ queue once it is set in service (of course within the limits
++of the above timeout). As explained in the description of
++the algorithm, larger values increase the throughput in proportion to
++the percentage of sequential I/O requests issued. The price of larger
++values is that they coarsen the granularity of short-term bandwidth
++and latency guarantees.
++
++The default value is 0, which enables auto-tuning: BFQ sets max_budget
++to the maximum number of sectors that can be served during
++timeout_sync, according to the estimated peak rate.
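
As a rough worked example of the auto-tuning rule just described (all numbers
are made up for illustration; 125 ms is an assumed timeout_sync value, not a
documented default): with an estimated peak rate of 100 MB/s, i.e. about
195000 sectors of 512 bytes per second, the auto-tuned max_budget would be on
the order of 195000 * 0.125 ~= 24000 sectors, roughly 12 MB of service.

  #include <stdio.h>

  int main(void)
  {
          long peak_rate_sectors_per_sec = 100L * 1000 * 1000 / 512; /* ~100 MB/s */
          long timeout_sync_ms = 125;                                /* assumed   */
          long max_budget = peak_rate_sectors_per_sec * timeout_sync_ms / 1000;

          /* Prints roughly 24414 sectors, i.e. about 12 MB of service. */
          printf("auto-tuned max_budget ~= %ld sectors\n", max_budget);
          return 0;
  }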
++
++weights
++-------
++
++Read-only parameter, used to show the weights of the currently active
++BFQ queues.
++
++
++wr_ tunables
++------------
++
++BFQ exports a few parameters to control/tune the behavior of
++low-latency heuristics.
++
++wr_coeff
++
++Factor by which the weight of a weight-raised queue is multiplied. If
++the queue is deemed soft real-time, then the weight is further
++multiplied by an additional, constant factor.
++
++wr_max_time
++
++Maximum duration of a weight-raising period for an interactive task
++(ms). If set to zero (default value), then this value is computed
++automatically, as a function of the peak rate of the device. In any
++case, when the value of this parameter is read, it always reports the
++current duration, regardless of whether it has been set manually or
++computed automatically.
++
++wr_max_softrt_rate
++
++Maximum service rate below which a queue is deemed to be associated
++with a soft real-time application, and is then weight-raised
++accordingly (sectors/sec).
++
++wr_min_idle_time
++
++Minimum idle period after which interactive weight-raising may be
++reactivated for a queue (in ms).
++
++wr_rt_max_time
++
++Maximum weight-raising duration for soft real-time queues (in ms). The
++start time from which this duration is considered is automatically
++moved forward if the queue is detected to be still soft real-time
++before the current soft real-time weight-raising period finishes.
++
++wr_min_inter_arr_async
++
++Minimum period between I/O request arrivals after which weight-raising
++may be reactivated for an already busy async queue (in ms).
++
++
++4. Group scheduling with BFQ
++============================
++
++BFQ supports both cgroups-v1 and cgroups-v2 io controllers, namely
++blkio and io. In particular, BFQ supports weight-based proportional
++share. To activate cgroups support, set BFQ_GROUP_IOSCHED.
++
++4-1 Service guarantees provided
++-------------------------------
++
++With BFQ, proportional share means true proportional share of the
++device bandwidth, according to group weights. For example, a group
++with weight 200 gets twice the bandwidth, and not just twice the time,
++of a group with weight 100.
++
++BFQ supports hierarchies (group trees) of any depth. Bandwidth is
++distributed among groups and processes in the expected way: for each
++group, the children of the group share the whole bandwidth of the
++group in proportion to their weights. In particular, this implies
++that, for each leaf group, every process of the group receives the
++same share of the whole group bandwidth, unless the ioprio of the
++process is modified.
++
++The resource-sharing guarantee for a group may partially or totally
++switch from bandwidth to time, if providing bandwidth guarantees to
++the group lowers the throughput too much. This switch occurs on a
++per-process basis: if a process of a leaf group causes throughput loss
++if served in such a way to receive its share of the bandwidth, then
++BFQ switches back to just time-based proportional share for that
++process.
++
++4-2 Interface
++-------------
++
++To get proportional sharing of bandwidth with BFQ for a given device,
++BFQ must of course be the active scheduler for that device.
++
++Within each group directory, the names of the files associated with
++BFQ-specific cgroup parameters and stats begin with the "bfq."
++prefix. So, with cgroups-v1 or cgroups-v2, the full prefix for
++BFQ-specific files is "blkio.bfq." or "io.bfq." For example, the group
++parameter to set the weight of a group with BFQ is blkio.bfq.weight
++or io.bfq.weight.
++
++Parameters to set
++-----------------
++
++For each group, there is only the following parameter to set.
++
++weight (namely blkio.bfq.weight or io.bfq.weight): the weight of the
++group inside its parent. Available values: 1..10000 (default 100). The
++linear mapping between ioprio and weights, described at the beginning
++of the tunable section, is still valid, but all weights higher than
++IOPRIO_BE_NR*10 are mapped to ioprio 0.
++
++Recall that, if low-latency is set, then BFQ automatically raises the
++weight of the queues associated with interactive and soft real-time
++applications. Unset this tunable if you need/want to control weights.
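
As a concrete illustration of the interface above, the group weight can be set
by writing to the bfq.weight file of the group directory. The cgroups-v1 mount
point, the group name "mygroup" and the weight value 200 below are assumptions
about a typical setup, not prescribed values.

  #include <stdio.h>

  /* Assumed cgroups-v1 path; adjust the mount point and group name. */
  #define GROUP_WEIGHT "/sys/fs/cgroup/blkio/mygroup/blkio.bfq.weight"

  int main(void)
  {
          FILE *f = fopen(GROUP_WEIGHT, "w");

          if (!f) {
                  perror(GROUP_WEIGHT);
                  return 1;
          }
          /* Give "mygroup" twice the bandwidth share of a default-weight group. */
          fprintf(f, "200\n");
          fclose(f);
          return 0;
  }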
++
++
++[1] P. Valente, A. Avanzini, "Evolution of the BFQ Storage I/O
++ Scheduler", Proceedings of the First Workshop on Mobile System
++ Technologies (MST-2015), May 2015.
++ http://algogroup.unimore.it/people/paolo/disk_sched/mst-2015.pdf
++
++[2] P. Valente and M. Andreolini, "Improving Application
++ Responsiveness with the BFQ Disk I/O Scheduler", Proceedings of
++ the 5th Annual International Systems and Storage Conference
++ (SYSTOR '12), June 2012.
++ Slightly extended version:
++ http://algogroup.unimore.it/people/paolo/disk_sched/bfq-v1-suite-
++ results.pdf
+diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched
+index f78cd1a..f2cd945 100644
+--- a/block/Kconfig.iosched
++++ b/block/Kconfig.iosched
+@@ -43,20 +43,20 @@ config IOSCHED_BFQ
+ tristate "BFQ I/O scheduler"
+ default n
+ ---help---
+- The BFQ I/O scheduler tries to distribute bandwidth among
+- all processes according to their weights.
+- It aims at distributing the bandwidth as desired, independently of
+- the disk parameters and with any workload. It also tries to
+- guarantee low latency to interactive and soft real-time
+- applications. If compiled built-in (saying Y here), BFQ can
+- be configured to support hierarchical scheduling.
++ The BFQ I/O scheduler distributes bandwidth among all
++ processes according to their weights, regardless of the
++ device parameters and with any workload. It also guarantees
++ a low latency to interactive and soft real-time applications.
++ Details in Documentation/block/bfq-iosched.txt
+
+ config BFQ_GROUP_IOSCHED
+ bool "BFQ hierarchical scheduling support"
+- depends on CGROUPS && IOSCHED_BFQ=y
++ depends on IOSCHED_BFQ && BLK_CGROUP
+ default n
+ ---help---
+- Enable hierarchical scheduling in BFQ, using the blkio controller.
++
++ Enable hierarchical scheduling in BFQ, using the blkio
++ (cgroups-v1) or io (cgroups-v2) controller.
+
+ choice
+ prompt "Default I/O scheduler"
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index 0367996..0125275 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -7,7 +7,9 @@
+ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
+ * Paolo Valente <paolo.valente@unimore.it>
+ *
+- * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ * Copyright (C) 2015 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2016 Paolo Valente <paolo.valente@linaro.org>
+ *
+ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
+ * file.
+@@ -163,8 +165,6 @@ static struct bfq_group *blkg_to_bfqg(struct blkcg_gq *blkg)
+ {
+ struct blkg_policy_data *pd = blkg_to_pd(blkg, &blkcg_policy_bfq);
+
+- BUG_ON(!pd);
+-
+ return pd_to_bfqg(pd);
+ }
+
+@@ -208,59 +208,47 @@ static void bfqg_put(struct bfq_group *bfqg)
+
+ static void bfqg_stats_update_io_add(struct bfq_group *bfqg,
+ struct bfq_queue *bfqq,
+- int rw)
++ unsigned int op)
+ {
+- blkg_rwstat_add(&bfqg->stats.queued, rw, 1);
++ blkg_rwstat_add(&bfqg->stats.queued, op, 1);
+ bfqg_stats_end_empty_time(&bfqg->stats);
+ if (!(bfqq == ((struct bfq_data *)bfqg->bfqd)->in_service_queue))
+ bfqg_stats_set_start_group_wait_time(bfqg, bfqq_group(bfqq));
+ }
+
+-static void bfqg_stats_update_io_remove(struct bfq_group *bfqg, int rw)
+-{
+- blkg_rwstat_add(&bfqg->stats.queued, rw, -1);
+-}
+-
+-static void bfqg_stats_update_io_merged(struct bfq_group *bfqg, int rw)
++static void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op)
+ {
+- blkg_rwstat_add(&bfqg->stats.merged, rw, 1);
++ blkg_rwstat_add(&bfqg->stats.queued, op, -1);
+ }
+
+-static void bfqg_stats_update_dispatch(struct bfq_group *bfqg,
+- uint64_t bytes, int rw)
++static void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op)
+ {
+- blkg_stat_add(&bfqg->stats.sectors, bytes >> 9);
+- blkg_rwstat_add(&bfqg->stats.serviced, rw, 1);
+- blkg_rwstat_add(&bfqg->stats.service_bytes, rw, bytes);
++ blkg_rwstat_add(&bfqg->stats.merged, op, 1);
+ }
+
+ static void bfqg_stats_update_completion(struct bfq_group *bfqg,
+- uint64_t start_time, uint64_t io_start_time, int rw)
++ uint64_t start_time, uint64_t io_start_time,
++ unsigned int op)
+ {
+ struct bfqg_stats *stats = &bfqg->stats;
+ unsigned long long now = sched_clock();
+
+ if (time_after64(now, io_start_time))
+- blkg_rwstat_add(&stats->service_time, rw, now - io_start_time);
++ blkg_rwstat_add(&stats->service_time, op,
++ now - io_start_time);
+ if (time_after64(io_start_time, start_time))
+- blkg_rwstat_add(&stats->wait_time, rw,
++ blkg_rwstat_add(&stats->wait_time, op,
+ io_start_time - start_time);
+ }
+
+ /* @stats = 0 */
+ static void bfqg_stats_reset(struct bfqg_stats *stats)
+ {
+- if (!stats)
+- return;
+-
+ /* queued stats shouldn't be cleared */
+- blkg_rwstat_reset(&stats->service_bytes);
+- blkg_rwstat_reset(&stats->serviced);
+ blkg_rwstat_reset(&stats->merged);
+ blkg_rwstat_reset(&stats->service_time);
+ blkg_rwstat_reset(&stats->wait_time);
+ blkg_stat_reset(&stats->time);
+- blkg_stat_reset(&stats->unaccounted_time);
+ blkg_stat_reset(&stats->avg_queue_size_sum);
+ blkg_stat_reset(&stats->avg_queue_size_samples);
+ blkg_stat_reset(&stats->dequeue);
+@@ -270,19 +258,16 @@ static void bfqg_stats_reset(struct bfqg_stats *stats)
+ }
+
+ /* @to += @from */
+-static void bfqg_stats_merge(struct bfqg_stats *to, struct bfqg_stats *from)
++static void bfqg_stats_add_aux(struct bfqg_stats *to, struct bfqg_stats *from)
+ {
+ if (!to || !from)
+ return;
+
+ /* queued stats shouldn't be cleared */
+- blkg_rwstat_add_aux(&to->service_bytes, &from->service_bytes);
+- blkg_rwstat_add_aux(&to->serviced, &from->serviced);
+ blkg_rwstat_add_aux(&to->merged, &from->merged);
+ blkg_rwstat_add_aux(&to->service_time, &from->service_time);
+ blkg_rwstat_add_aux(&to->wait_time, &from->wait_time);
+ blkg_stat_add_aux(&from->time, &from->time);
+- blkg_stat_add_aux(&to->unaccounted_time, &from->unaccounted_time);
+ blkg_stat_add_aux(&to->avg_queue_size_sum, &from->avg_queue_size_sum);
+ blkg_stat_add_aux(&to->avg_queue_size_samples,
+ &from->avg_queue_size_samples);
+@@ -311,10 +296,8 @@ static void bfqg_stats_xfer_dead(struct bfq_group *bfqg)
+ if (unlikely(!parent))
+ return;
+
+- bfqg_stats_merge(&parent->dead_stats, &bfqg->stats);
+- bfqg_stats_merge(&parent->dead_stats, &bfqg->dead_stats);
++ bfqg_stats_add_aux(&parent->stats, &bfqg->stats);
+ bfqg_stats_reset(&bfqg->stats);
+- bfqg_stats_reset(&bfqg->dead_stats);
+ }
+
+ static void bfq_init_entity(struct bfq_entity *entity,
+@@ -329,21 +312,17 @@ static void bfq_init_entity(struct bfq_entity *entity,
+ bfqq->ioprio_class = bfqq->new_ioprio_class;
+ bfqg_get(bfqg);
+ }
+- entity->parent = bfqg->my_entity;
++ entity->parent = bfqg->my_entity; /* NULL for root group */
+ entity->sched_data = &bfqg->sched_data;
+ }
+
+ static void bfqg_stats_exit(struct bfqg_stats *stats)
+ {
+- blkg_rwstat_exit(&stats->service_bytes);
+- blkg_rwstat_exit(&stats->serviced);
+ blkg_rwstat_exit(&stats->merged);
+ blkg_rwstat_exit(&stats->service_time);
+ blkg_rwstat_exit(&stats->wait_time);
+ blkg_rwstat_exit(&stats->queued);
+- blkg_stat_exit(&stats->sectors);
+ blkg_stat_exit(&stats->time);
+- blkg_stat_exit(&stats->unaccounted_time);
+ blkg_stat_exit(&stats->avg_queue_size_sum);
+ blkg_stat_exit(&stats->avg_queue_size_samples);
+ blkg_stat_exit(&stats->dequeue);
+@@ -354,15 +333,11 @@ static void bfqg_stats_exit(struct bfqg_stats *stats)
+
+ static int bfqg_stats_init(struct bfqg_stats *stats, gfp_t gfp)
+ {
+- if (blkg_rwstat_init(&stats->service_bytes, gfp) ||
+- blkg_rwstat_init(&stats->serviced, gfp) ||
+- blkg_rwstat_init(&stats->merged, gfp) ||
++ if (blkg_rwstat_init(&stats->merged, gfp) ||
+ blkg_rwstat_init(&stats->service_time, gfp) ||
+ blkg_rwstat_init(&stats->wait_time, gfp) ||
+ blkg_rwstat_init(&stats->queued, gfp) ||
+- blkg_stat_init(&stats->sectors, gfp) ||
+ blkg_stat_init(&stats->time, gfp) ||
+- blkg_stat_init(&stats->unaccounted_time, gfp) ||
+ blkg_stat_init(&stats->avg_queue_size_sum, gfp) ||
+ blkg_stat_init(&stats->avg_queue_size_samples, gfp) ||
+ blkg_stat_init(&stats->dequeue, gfp) ||
+@@ -386,11 +361,27 @@ static struct bfq_group_data *blkcg_to_bfqgd(struct blkcg *blkcg)
+ return cpd_to_bfqgd(blkcg_to_cpd(blkcg, &blkcg_policy_bfq));
+ }
+
++static struct blkcg_policy_data *bfq_cpd_alloc(gfp_t gfp)
++{
++ struct bfq_group_data *bgd;
++
++ bgd = kzalloc(sizeof(*bgd), gfp);
++ if (!bgd)
++ return NULL;
++ return &bgd->pd;
++}
++
+ static void bfq_cpd_init(struct blkcg_policy_data *cpd)
+ {
+ struct bfq_group_data *d = cpd_to_bfqgd(cpd);
+
+- d->weight = BFQ_DEFAULT_GRP_WEIGHT;
++ d->weight = cgroup_subsys_on_dfl(io_cgrp_subsys) ?
++ CGROUP_WEIGHT_DFL : BFQ_WEIGHT_LEGACY_DFL;
++}
++
++static void bfq_cpd_free(struct blkcg_policy_data *cpd)
++{
++ kfree(cpd_to_bfqgd(cpd));
+ }
+
+ static struct blkg_policy_data *bfq_pd_alloc(gfp_t gfp, int node)
+@@ -401,8 +392,7 @@ static struct blkg_policy_data *bfq_pd_alloc(gfp_t gfp, int node)
+ if (!bfqg)
+ return NULL;
+
+- if (bfqg_stats_init(&bfqg->stats, gfp) ||
+- bfqg_stats_init(&bfqg->dead_stats, gfp)) {
++ if (bfqg_stats_init(&bfqg->stats, gfp)) {
+ kfree(bfqg);
+ return NULL;
+ }
+@@ -410,27 +400,20 @@ static struct blkg_policy_data *bfq_pd_alloc(gfp_t gfp, int node)
+ return &bfqg->pd;
+ }
+
+-static void bfq_group_set_parent(struct bfq_group *bfqg,
+- struct bfq_group *parent)
++static void bfq_pd_init(struct blkg_policy_data *pd)
+ {
++ struct blkcg_gq *blkg;
++ struct bfq_group *bfqg;
++ struct bfq_data *bfqd;
+ struct bfq_entity *entity;
++ struct bfq_group_data *d;
+
+- BUG_ON(!parent);
+- BUG_ON(!bfqg);
+- BUG_ON(bfqg == parent);
+-
++ blkg = pd_to_blkg(pd);
++ BUG_ON(!blkg);
++ bfqg = blkg_to_bfqg(blkg);
++ bfqd = blkg->q->elevator->elevator_data;
+ entity = &bfqg->entity;
+- entity->parent = parent->my_entity;
+- entity->sched_data = &parent->sched_data;
+-}
+-
+-static void bfq_pd_init(struct blkg_policy_data *pd)
+-{
+- struct blkcg_gq *blkg = pd_to_blkg(pd);
+- struct bfq_group *bfqg = blkg_to_bfqg(blkg);
+- struct bfq_data *bfqd = blkg->q->elevator->elevator_data;
+- struct bfq_entity *entity = &bfqg->entity;
+- struct bfq_group_data *d = blkcg_to_bfqgd(blkg->blkcg);
++ d = blkcg_to_bfqgd(blkg->blkcg);
+
+ entity->orig_weight = entity->weight = entity->new_weight = d->weight;
+ entity->my_sched_data = &bfqg->sched_data;
+@@ -448,70 +431,53 @@ static void bfq_pd_free(struct blkg_policy_data *pd)
+ struct bfq_group *bfqg = pd_to_bfqg(pd);
+
+ bfqg_stats_exit(&bfqg->stats);
+- bfqg_stats_exit(&bfqg->dead_stats);
+-
+ return kfree(bfqg);
+ }
+
+-/* offset delta from bfqg->stats to bfqg->dead_stats */
+-static const int dead_stats_off_delta = offsetof(struct bfq_group, dead_stats) -
+- offsetof(struct bfq_group, stats);
+-
+-/* to be used by recursive prfill, sums live and dead stats recursively */
+-static u64 bfqg_stat_pd_recursive_sum(struct blkg_policy_data *pd, int off)
++static void bfq_pd_reset_stats(struct blkg_policy_data *pd)
+ {
+- u64 sum = 0;
++ struct bfq_group *bfqg = pd_to_bfqg(pd);
+
+- sum += blkg_stat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq, off);
+- sum += blkg_stat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq,
+- off + dead_stats_off_delta);
+- return sum;
++ bfqg_stats_reset(&bfqg->stats);
+ }
+
+-/* to be used by recursive prfill, sums live and dead rwstats recursively */
+-static struct blkg_rwstat
+-bfqg_rwstat_pd_recursive_sum(struct blkg_policy_data *pd, int off)
++static void bfq_group_set_parent(struct bfq_group *bfqg,
++ struct bfq_group *parent)
+ {
+- struct blkg_rwstat a, b;
++ struct bfq_entity *entity;
+
+- a = blkg_rwstat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq, off);
+- b = blkg_rwstat_recursive_sum(pd_to_blkg(pd), &blkcg_policy_bfq,
+- off + dead_stats_off_delta);
+- blkg_rwstat_add_aux(&a, &b);
+- return a;
++ BUG_ON(!parent);
++ BUG_ON(!bfqg);
++ BUG_ON(bfqg == parent);
++
++ entity = &bfqg->entity;
++ entity->parent = parent->my_entity;
++ entity->sched_data = &parent->sched_data;
+ }
+
+-static void bfq_pd_reset_stats(struct blkg_policy_data *pd)
++static struct bfq_group *bfq_lookup_bfqg(struct bfq_data *bfqd,
++ struct blkcg *blkcg)
+ {
+- struct bfq_group *bfqg = pd_to_bfqg(pd);
++ struct blkcg_gq *blkg;
+
+- bfqg_stats_reset(&bfqg->stats);
+- bfqg_stats_reset(&bfqg->dead_stats);
++ blkg = blkg_lookup(blkcg, bfqd->queue);
++ if (likely(blkg))
++ return blkg_to_bfqg(blkg);
++ return NULL;
+ }
+
+-static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
+- struct blkcg *blkcg)
++static struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
++ struct blkcg *blkcg)
+ {
+- struct request_queue *q = bfqd->queue;
+- struct bfq_group *bfqg = NULL, *parent;
+- struct bfq_entity *entity = NULL;
++ struct bfq_group *bfqg, *parent;
++ struct bfq_entity *entity;
+
+ assert_spin_locked(bfqd->queue->queue_lock);
+
+- /* avoid lookup for the common case where there's no blkcg */
+- if (blkcg == &blkcg_root) {
+- bfqg = bfqd->root_group;
+- } else {
+- struct blkcg_gq *blkg;
+-
+- blkg = blkg_lookup_create(blkcg, q);
+- if (!IS_ERR(blkg))
+- bfqg = blkg_to_bfqg(blkg);
+- else /* fallback to root_group */
+- bfqg = bfqd->root_group;
+- }
++ bfqg = bfq_lookup_bfqg(bfqd, blkcg);
+
+- BUG_ON(!bfqg);
++ if (unlikely(!bfqg))
++ return NULL;
+
+ /*
+ * Update chain of bfq_groups as we might be handling a leaf group
+@@ -537,11 +503,15 @@ static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
+ static void bfq_pos_tree_add_move(struct bfq_data *bfqd,
+ struct bfq_queue *bfqq);
+
++static void bfq_bfqq_expire(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ bool compensate,
++ enum bfqq_expiration reason);
++
+ /**
+ * bfq_bfqq_move - migrate @bfqq to @bfqg.
+ * @bfqd: queue descriptor.
+ * @bfqq: the queue to move.
+- * @entity: @bfqq's entity.
+ * @bfqg: the group to move to.
+ *
+ * Move @bfqq to @bfqg, deactivating it from its old group and reactivating
+@@ -552,26 +522,40 @@ static void bfq_pos_tree_add_move(struct bfq_data *bfqd,
+ * rcu_read_lock()).
+ */
+ static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+- struct bfq_entity *entity, struct bfq_group *bfqg)
++ struct bfq_group *bfqg)
+ {
+- int busy, resume;
+-
+- busy = bfq_bfqq_busy(bfqq);
+- resume = !RB_EMPTY_ROOT(&bfqq->sort_list);
++ struct bfq_entity *entity = &bfqq->entity;
+
+- BUG_ON(resume && !entity->on_st);
+- BUG_ON(busy && !resume && entity->on_st &&
++ BUG_ON(!bfq_bfqq_busy(bfqq) && !RB_EMPTY_ROOT(&bfqq->sort_list));
++ BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list) && !entity->on_st);
++ BUG_ON(bfq_bfqq_busy(bfqq) && RB_EMPTY_ROOT(&bfqq->sort_list)
++ && entity->on_st &&
+ bfqq != bfqd->in_service_queue);
++ BUG_ON(!bfq_bfqq_busy(bfqq) && bfqq == bfqd->in_service_queue);
++
++ /* If bfqq is empty, then bfq_bfqq_expire also invokes
++ * bfq_del_bfqq_busy, thereby removing bfqq and its entity
++ * from data structures related to current group. Otherwise we
++ * need to remove bfqq explicitly with bfq_deactivate_bfqq, as
++ * we do below.
++ */
++ if (bfqq == bfqd->in_service_queue)
++ bfq_bfqq_expire(bfqd, bfqd->in_service_queue,
++ false, BFQ_BFQQ_PREEMPTED);
++
++ BUG_ON(entity->on_st && !bfq_bfqq_busy(bfqq)
++ && &bfq_entity_service_tree(entity)->idle !=
++ entity->tree);
+
+- if (busy) {
+- BUG_ON(atomic_read(&bfqq->ref) < 2);
++ BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list) && bfq_bfqq_busy(bfqq));
+
+- if (!resume)
+- bfq_del_bfqq_busy(bfqd, bfqq, 0);
+- else
+- bfq_deactivate_bfqq(bfqd, bfqq, 0);
+- } else if (entity->on_st)
++ if (bfq_bfqq_busy(bfqq))
++ bfq_deactivate_bfqq(bfqd, bfqq, false, false);
++ else if (entity->on_st) {
++ BUG_ON(&bfq_entity_service_tree(entity)->idle !=
++ entity->tree);
+ bfq_put_idle_entity(bfq_entity_service_tree(entity), entity);
++ }
+ bfqg_put(bfqq_group(bfqq));
+
+ /*
+@@ -583,14 +567,17 @@ static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ entity->sched_data = &bfqg->sched_data;
+ bfqg_get(bfqg);
+
+- if (busy) {
++ BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list) && bfq_bfqq_busy(bfqq));
++ if (bfq_bfqq_busy(bfqq)) {
+ bfq_pos_tree_add_move(bfqd, bfqq);
+- if (resume)
+- bfq_activate_bfqq(bfqd, bfqq);
++ bfq_activate_bfqq(bfqd, bfqq);
+ }
+
+ if (!bfqd->in_service_queue && !bfqd->rq_in_driver)
+ bfq_schedule_dispatch(bfqd);
++ BUG_ON(entity->on_st && !bfq_bfqq_busy(bfqq)
++ && &bfq_entity_service_tree(entity)->idle !=
++ entity->tree);
+ }
+
+ /**
+@@ -617,7 +604,11 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+
+ lockdep_assert_held(bfqd->queue->queue_lock);
+
+- bfqg = bfq_find_alloc_group(bfqd, blkcg);
++ bfqg = bfq_find_set_group(bfqd, blkcg);
++
++ if (unlikely(!bfqg))
++ bfqg = bfqd->root_group;
++
+ if (async_bfqq) {
+ entity = &async_bfqq->entity;
+
+@@ -625,7 +616,8 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ bic_set_bfqq(bic, NULL, 0);
+ bfq_log_bfqq(bfqd, async_bfqq,
+ "bic_change_group: %p %d",
+- async_bfqq, atomic_read(&async_bfqq->ref));
++ async_bfqq,
++ async_bfqq->ref);
+ bfq_put_queue(async_bfqq);
+ }
+ }
+@@ -633,7 +625,7 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ if (sync_bfqq) {
+ entity = &sync_bfqq->entity;
+ if (entity->sched_data != &bfqg->sched_data)
+- bfq_bfqq_move(bfqd, sync_bfqq, entity, bfqg);
++ bfq_bfqq_move(bfqd, sync_bfqq, bfqg);
+ }
+
+ return bfqg;
+@@ -642,25 +634,23 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ static void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
+ {
+ struct bfq_data *bfqd = bic_to_bfqd(bic);
+- struct blkcg *blkcg;
+ struct bfq_group *bfqg = NULL;
+- uint64_t id;
++ uint64_t serial_nr;
+
+ rcu_read_lock();
+- blkcg = bio_blkcg(bio);
+- id = blkcg->css.serial_nr;
+- rcu_read_unlock();
++ serial_nr = bio_blkcg(bio)->css.serial_nr;
+
+ /*
+ * Check whether blkcg has changed. The condition may trigger
+ * spuriously on a newly created cic but there's no harm.
+ */
+- if (unlikely(!bfqd) || likely(bic->blkcg_id == id))
+- return;
++ if (unlikely(!bfqd) || likely(bic->blkcg_serial_nr == serial_nr))
++ goto out;
+
+- bfqg = __bfq_bic_change_cgroup(bfqd, bic, blkcg);
+- BUG_ON(!bfqg);
+- bic->blkcg_id = id;
++ bfqg = __bfq_bic_change_cgroup(bfqd, bic, bio_blkcg(bio));
++ bic->blkcg_serial_nr = serial_nr;
++out:
++ rcu_read_unlock();
+ }
+
+ /**
+@@ -672,7 +662,7 @@ static void bfq_flush_idle_tree(struct bfq_service_tree *st)
+ struct bfq_entity *entity = st->first_idle;
+
+ for (; entity ; entity = st->first_idle)
+- __bfq_deactivate_entity(entity, 0);
++ __bfq_deactivate_entity(entity, false);
+ }
+
+ /**
+@@ -686,7 +676,7 @@ static void bfq_reparent_leaf_entity(struct bfq_data *bfqd,
+ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
+
+ BUG_ON(!bfqq);
+- bfq_bfqq_move(bfqd, bfqq, entity, bfqd->root_group);
++ bfq_bfqq_move(bfqd, bfqq, bfqd->root_group);
+ }
+
+ /**
+@@ -717,11 +707,12 @@ static void bfq_reparent_active_entities(struct bfq_data *bfqd,
+ }
+
+ /**
+- * bfq_destroy_group - destroy @bfqg.
+- * @bfqg: the group being destroyed.
++ * bfq_pd_offline - deactivate the entity associated with @pd,
++ * and reparent its children entities.
++ * @pd: descriptor of the policy going offline.
+ *
+- * Destroy @bfqg, making sure that it is not referenced from its parent.
+- * blkio already grabs the queue_lock for us, so no need to use RCU-based magic
++ * blkio already grabs the queue_lock for us, so no need to use
++ * RCU-based magic
+ */
+ static void bfq_pd_offline(struct blkg_policy_data *pd)
+ {
+@@ -776,10 +767,16 @@ static void bfq_pd_offline(struct blkg_policy_data *pd)
+ BUG_ON(bfqg->sched_data.next_in_service);
+ BUG_ON(bfqg->sched_data.in_service_entity);
+
+- __bfq_deactivate_entity(entity, 0);
++ __bfq_deactivate_entity(entity, false);
+ bfq_put_async_queues(bfqd, bfqg);
+ BUG_ON(entity->tree);
+
++ /*
++ * @blkg is going offline and will be ignored by
++ * blkg_[rw]stat_recursive_sum(). Transfer stats to the parent so
++ * that they don't get lost. If IOs complete after this point, the
++ * stats for them will be lost. Oh well...
++ */
+ bfqg_stats_xfer_dead(bfqg);
+ }
+
+@@ -789,46 +786,35 @@ static void bfq_end_wr_async(struct bfq_data *bfqd)
+
+ list_for_each_entry(blkg, &bfqd->queue->blkg_list, q_node) {
+ struct bfq_group *bfqg = blkg_to_bfqg(blkg);
++ BUG_ON(!bfqg);
+
+ bfq_end_wr_async_queues(bfqd, bfqg);
+ }
+ bfq_end_wr_async_queues(bfqd, bfqd->root_group);
+ }
+
+-static u64 bfqio_cgroup_weight_read(struct cgroup_subsys_state *css,
+- struct cftype *cftype)
+-{
+- struct blkcg *blkcg = css_to_blkcg(css);
+- struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg);
+- int ret = -EINVAL;
+-
+- spin_lock_irq(&blkcg->lock);
+- ret = bfqgd->weight;
+- spin_unlock_irq(&blkcg->lock);
+-
+- return ret;
+-}
+-
+-static int bfqio_cgroup_weight_read_dfl(struct seq_file *sf, void *v)
++static int bfq_io_show_weight(struct seq_file *sf, void *v)
+ {
+ struct blkcg *blkcg = css_to_blkcg(seq_css(sf));
+ struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg);
++ unsigned int val = 0;
+
+- spin_lock_irq(&blkcg->lock);
+- seq_printf(sf, "%u\n", bfqgd->weight);
+- spin_unlock_irq(&blkcg->lock);
++ if (bfqgd)
++ val = bfqgd->weight;
++
++ seq_printf(sf, "%u\n", val);
+
+ return 0;
+ }
+
+-static int bfqio_cgroup_weight_write(struct cgroup_subsys_state *css,
+- struct cftype *cftype,
+- u64 val)
++static int bfq_io_set_weight_legacy(struct cgroup_subsys_state *css,
++ struct cftype *cftype,
++ u64 val)
+ {
+ struct blkcg *blkcg = css_to_blkcg(css);
+ struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg);
+ struct blkcg_gq *blkg;
+- int ret = -EINVAL;
++ int ret = -ERANGE;
+
+ if (val < BFQ_MIN_WEIGHT || val > BFQ_MAX_WEIGHT)
+ return ret;
+@@ -873,13 +859,18 @@ static int bfqio_cgroup_weight_write(struct cgroup_subsys_state *css,
+ return ret;
+ }
+
+-static ssize_t bfqio_cgroup_weight_write_dfl(struct kernfs_open_file *of,
+- char *buf, size_t nbytes,
+- loff_t off)
++static ssize_t bfq_io_set_weight(struct kernfs_open_file *of,
++ char *buf, size_t nbytes,
++ loff_t off)
+ {
++ u64 weight;
+ /* First unsigned long found in the file is used */
+- return bfqio_cgroup_weight_write(of_css(of), NULL,
+- simple_strtoull(strim(buf), NULL, 0));
++ int ret = kstrtoull(strim(buf), 0, &weight);
++
++ if (ret)
++ return ret;
++
++ return bfq_io_set_weight_legacy(of_css(of), NULL, weight);
+ }
+
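The new bfq_io_set_weight() handler parses the value with kstrtoull() and hands it to the legacy path, which rejects anything outside the accepted weight range with -ERANGE. A standalone C sketch of the same parse-and-validate pattern, with strtoull() standing in for kstrtoull() and assumed stand-in bounds for BFQ_MIN_WEIGHT/BFQ_MAX_WEIGHT:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MIN_WEIGHT 1ULL      /* assumed stand-in for BFQ_MIN_WEIGHT */
    #define MAX_WEIGHT 1000ULL   /* assumed stand-in for BFQ_MAX_WEIGHT */

    static int parse_weight(const char *buf, unsigned long long *weight)
    {
        char *end;

        errno = 0;
        *weight = strtoull(buf, &end, 0);
        if (errno || end == buf)
            return -EINVAL;      /* not a number */
        if (*weight < MIN_WEIGHT || *weight > MAX_WEIGHT)
            return -ERANGE;      /* out of the accepted range */
        return 0;
    }

    int main(void)
    {
        unsigned long long w;

        printf("%d\n", parse_weight("500", &w));    /* 0, w == 500 */
        printf("%d\n", parse_weight("20000", &w));  /* -ERANGE */
        return 0;
    }
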
+ static int bfqg_print_stat(struct seq_file *sf, void *v)
+@@ -899,16 +890,17 @@ static int bfqg_print_rwstat(struct seq_file *sf, void *v)
+ static u64 bfqg_prfill_stat_recursive(struct seq_file *sf,
+ struct blkg_policy_data *pd, int off)
+ {
+- u64 sum = bfqg_stat_pd_recursive_sum(pd, off);
+-
++ u64 sum = blkg_stat_recursive_sum(pd_to_blkg(pd),
++ &blkcg_policy_bfq, off);
+ return __blkg_prfill_u64(sf, pd, sum);
+ }
+
+ static u64 bfqg_prfill_rwstat_recursive(struct seq_file *sf,
+ struct blkg_policy_data *pd, int off)
+ {
+- struct blkg_rwstat sum = bfqg_rwstat_pd_recursive_sum(pd, off);
+-
++ struct blkg_rwstat sum = blkg_rwstat_recursive_sum(pd_to_blkg(pd),
++ &blkcg_policy_bfq,
++ off);
+ return __blkg_prfill_rwstat(sf, pd, &sum);
+ }
+
+@@ -928,6 +920,41 @@ static int bfqg_print_rwstat_recursive(struct seq_file *sf, void *v)
+ return 0;
+ }
+
++static u64 bfqg_prfill_sectors(struct seq_file *sf, struct blkg_policy_data *pd,
++ int off)
++{
++ u64 sum = blkg_rwstat_total(&pd->blkg->stat_bytes);
++
++ return __blkg_prfill_u64(sf, pd, sum >> 9);
++}
++
++static int bfqg_print_stat_sectors(struct seq_file *sf, void *v)
++{
++ blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)),
++ bfqg_prfill_sectors, &blkcg_policy_bfq, 0, false);
++ return 0;
++}
++
++static u64 bfqg_prfill_sectors_recursive(struct seq_file *sf,
++ struct blkg_policy_data *pd, int off)
++{
++ struct blkg_rwstat tmp = blkg_rwstat_recursive_sum(pd->blkg, NULL,
++ offsetof(struct blkcg_gq, stat_bytes));
++ u64 sum = atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_READ]) +
++ atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_WRITE]);
++
++ return __blkg_prfill_u64(sf, pd, sum >> 9);
++}
++
++static int bfqg_print_stat_sectors_recursive(struct seq_file *sf, void *v)
++{
++ blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)),
++ bfqg_prfill_sectors_recursive, &blkcg_policy_bfq, 0,
++ false);
++ return 0;
++}
++
++
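The bfq.sectors files no longer read a dedicated per-group counter; they derive the value from the generic blkcg byte statistics, summing reads and writes and shifting right by 9 (512-byte sectors). A minimal standalone illustration of that conversion:

    #include <stdint.h>
    #include <stdio.h>

    /* Sum the read and write byte counters and convert to 512-byte sectors. */
    static uint64_t bytes_to_sectors(uint64_t read_bytes, uint64_t write_bytes)
    {
        return (read_bytes + write_bytes) >> 9;
    }

    int main(void)
    {
        /* 1 MiB read + 512 KiB written -> 3072 sectors */
        printf("%llu\n",
               (unsigned long long)bytes_to_sectors(1 << 20, 512 << 10));
        return 0;
    }

The recursive variant applies the same shift to the hierarchical sums returned by blkg_rwstat_recursive_sum().
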
+ static u64 bfqg_prfill_avg_queue_size(struct seq_file *sf,
+ struct blkg_policy_data *pd, int off)
+ {
+@@ -964,38 +991,15 @@ bfq_create_group_hierarchy(struct bfq_data *bfqd, int node)
+ return blkg_to_bfqg(bfqd->queue->root_blkg);
+ }
+
+-static struct blkcg_policy_data *bfq_cpd_alloc(gfp_t gfp)
+-{
+- struct bfq_group_data *bgd;
+-
+- bgd = kzalloc(sizeof(*bgd), GFP_KERNEL);
+- if (!bgd)
+- return NULL;
+- return &bgd->pd;
+-}
+-
+-static void bfq_cpd_free(struct blkcg_policy_data *cpd)
+-{
+- kfree(cpd_to_bfqgd(cpd));
+-}
+-
+-static struct cftype bfqio_files_dfl[] = {
++static struct cftype bfq_blkcg_legacy_files[] = {
+ {
+- .name = "weight",
++ .name = "bfq.weight",
+ .flags = CFTYPE_NOT_ON_ROOT,
+- .seq_show = bfqio_cgroup_weight_read_dfl,
+- .write = bfqio_cgroup_weight_write_dfl,
++ .seq_show = bfq_io_show_weight,
++ .write_u64 = bfq_io_set_weight_legacy,
+ },
+- {} /* terminate */
+-};
+
+-static struct cftype bfqio_files[] = {
+- {
+- .name = "bfq.weight",
+- .read_u64 = bfqio_cgroup_weight_read,
+- .write_u64 = bfqio_cgroup_weight_write,
+- },
+- /* statistics, cover only the tasks in the bfqg */
++ /* statistics, covers only the tasks in the bfqg */
+ {
+ .name = "bfq.time",
+ .private = offsetof(struct bfq_group, stats.time),
+@@ -1003,18 +1007,17 @@ static struct cftype bfqio_files[] = {
+ },
+ {
+ .name = "bfq.sectors",
+- .private = offsetof(struct bfq_group, stats.sectors),
+- .seq_show = bfqg_print_stat,
++ .seq_show = bfqg_print_stat_sectors,
+ },
+ {
+ .name = "bfq.io_service_bytes",
+- .private = offsetof(struct bfq_group, stats.service_bytes),
+- .seq_show = bfqg_print_rwstat,
++ .private = (unsigned long)&blkcg_policy_bfq,
++ .seq_show = blkg_print_stat_bytes,
+ },
+ {
+ .name = "bfq.io_serviced",
+- .private = offsetof(struct bfq_group, stats.serviced),
+- .seq_show = bfqg_print_rwstat,
++ .private = (unsigned long)&blkcg_policy_bfq,
++ .seq_show = blkg_print_stat_ios,
+ },
+ {
+ .name = "bfq.io_service_time",
+@@ -1045,18 +1048,17 @@ static struct cftype bfqio_files[] = {
+ },
+ {
+ .name = "bfq.sectors_recursive",
+- .private = offsetof(struct bfq_group, stats.sectors),
+- .seq_show = bfqg_print_stat_recursive,
++ .seq_show = bfqg_print_stat_sectors_recursive,
+ },
+ {
+ .name = "bfq.io_service_bytes_recursive",
+- .private = offsetof(struct bfq_group, stats.service_bytes),
+- .seq_show = bfqg_print_rwstat_recursive,
++ .private = (unsigned long)&blkcg_policy_bfq,
++ .seq_show = blkg_print_stat_bytes_recursive,
+ },
+ {
+ .name = "bfq.io_serviced_recursive",
+- .private = offsetof(struct bfq_group, stats.serviced),
+- .seq_show = bfqg_print_rwstat_recursive,
++ .private = (unsigned long)&blkcg_policy_bfq,
++ .seq_show = blkg_print_stat_ios_recursive,
+ },
+ {
+ .name = "bfq.io_service_time_recursive",
+@@ -1102,31 +1104,42 @@ static struct cftype bfqio_files[] = {
+ .private = offsetof(struct bfq_group, stats.dequeue),
+ .seq_show = bfqg_print_stat,
+ },
+- {
+- .name = "bfq.unaccounted_time",
+- .private = offsetof(struct bfq_group, stats.unaccounted_time),
+- .seq_show = bfqg_print_stat,
+- },
+ { } /* terminate */
+ };
+
+-static struct blkcg_policy blkcg_policy_bfq = {
+- .dfl_cftypes = bfqio_files_dfl,
+- .legacy_cftypes = bfqio_files,
+-
+- .pd_alloc_fn = bfq_pd_alloc,
+- .pd_init_fn = bfq_pd_init,
+- .pd_offline_fn = bfq_pd_offline,
+- .pd_free_fn = bfq_pd_free,
+- .pd_reset_stats_fn = bfq_pd_reset_stats,
+-
+- .cpd_alloc_fn = bfq_cpd_alloc,
+- .cpd_init_fn = bfq_cpd_init,
+- .cpd_bind_fn = bfq_cpd_init,
+- .cpd_free_fn = bfq_cpd_free,
++static struct cftype bfq_blkg_files[] = {
++ {
++ .name = "bfq.weight",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = bfq_io_show_weight,
++ .write = bfq_io_set_weight,
++ },
++ {} /* terminate */
+ };
+
+-#else
++#else /* CONFIG_BFQ_GROUP_IOSCHED */
++
++static inline void bfqg_stats_update_io_add(struct bfq_group *bfqg,
++ struct bfq_queue *bfqq, unsigned int op) { }
++static inline void
++bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op) { }
++static inline void
++bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op) { }
++static inline void bfqg_stats_update_completion(struct bfq_group *bfqg,
++ uint64_t start_time, uint64_t io_start_time,
++ unsigned int op) { }
++static inline void
++bfqg_stats_set_start_group_wait_time(struct bfq_group *bfqg,
++ struct bfq_group *curr_bfqg) { }
++static inline void bfqg_stats_end_empty_time(struct bfqg_stats *stats) { }
++static inline void bfqg_stats_update_dequeue(struct bfq_group *bfqg) { }
++static inline void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg) { }
++static inline void bfqg_stats_update_idle_time(struct bfq_group *bfqg) { }
++static inline void bfqg_stats_set_start_idle_time(struct bfq_group *bfqg) { }
++static inline void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg) { }
++
++static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ struct bfq_group *bfqg) {}
+
+ static void bfq_init_entity(struct bfq_entity *entity,
+ struct bfq_group *bfqg)
+@@ -1142,35 +1155,22 @@ static void bfq_init_entity(struct bfq_entity *entity,
+ entity->sched_data = &bfqg->sched_data;
+ }
+
+-static struct bfq_group *
+-bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio)
+-{
+- struct bfq_data *bfqd = bic_to_bfqd(bic);
+-
+- return bfqd->root_group;
+-}
+-
+-static void bfq_bfqq_move(struct bfq_data *bfqd,
+- struct bfq_queue *bfqq,
+- struct bfq_entity *entity,
+- struct bfq_group *bfqg)
+-{
+-}
++static void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio) {}
+
+ static void bfq_end_wr_async(struct bfq_data *bfqd)
+ {
+ bfq_end_wr_async_queues(bfqd, bfqd->root_group);
+ }
+
+-static void bfq_disconnect_groups(struct bfq_data *bfqd)
++static struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd,
++ struct blkcg *blkcg)
+ {
+- bfq_put_async_queues(bfqd, bfqd->root_group);
++ return bfqd->root_group;
+ }
+
+-static struct bfq_group *bfq_find_alloc_group(struct bfq_data *bfqd,
+- struct blkcg *blkcg)
++static struct bfq_group *bfqq_group(struct bfq_queue *bfqq)
+ {
+- return bfqd->root_group;
++ return bfqq->bfqd->root_group;
+ }
+
+ static struct bfq_group *
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index cf3e9b1..e5dfa5a 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -1,5 +1,5 @@
+ /*
+- * Budget Fair Queueing (BFQ) disk scheduler.
++ * Budget Fair Queueing (BFQ) I/O scheduler.
+ *
+ * Based on ideas and code from CFQ:
+ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
+@@ -7,25 +7,34 @@
+ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
+ * Paolo Valente <paolo.valente@unimore.it>
+ *
+- * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ * Copyright (C) 2015 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2017 Paolo Valente <paolo.valente@linaro.org>
+ *
+ * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ
+ * file.
+ *
+- * BFQ is a proportional-share storage-I/O scheduling algorithm based on
+- * the slice-by-slice service scheme of CFQ. But BFQ assigns budgets,
+- * measured in number of sectors, to processes instead of time slices. The
+- * device is not granted to the in-service process for a given time slice,
+- * but until it has exhausted its assigned budget. This change from the time
+- * to the service domain allows BFQ to distribute the device throughput
+- * among processes as desired, without any distortion due to ZBR, workload
+- * fluctuations or other factors. BFQ uses an ad hoc internal scheduler,
+- * called B-WF2Q+, to schedule processes according to their budgets. More
+- * precisely, BFQ schedules queues associated to processes. Thanks to the
+- * accurate policy of B-WF2Q+, BFQ can afford to assign high budgets to
+- * I/O-bound processes issuing sequential requests (to boost the
+- * throughput), and yet guarantee a low latency to interactive and soft
+- * real-time applications.
++ * BFQ is a proportional-share I/O scheduler, with some extra
++ * low-latency capabilities. BFQ also supports full hierarchical
++ * scheduling through cgroups. Next paragraphs provide an introduction
++ * on BFQ inner workings. Details on BFQ benefits and usage can be
++ * found in Documentation/block/bfq-iosched.txt.
++ *
++ * BFQ is a proportional-share storage-I/O scheduling algorithm based
++ * on the slice-by-slice service scheme of CFQ. But BFQ assigns
++ * budgets, measured in number of sectors, to processes instead of
++ * time slices. The device is not granted to the in-service process
++ * for a given time slice, but until it has exhausted its assigned
++ * budget. This change from the time to the service domain enables BFQ
++ * to distribute the device throughput among processes as desired,
++ * without any distortion due to throughput fluctuations, or to device
++ * internal queueing. BFQ uses an ad hoc internal scheduler, called
++ * B-WF2Q+, to schedule processes according to their budgets. More
++ * precisely, BFQ schedules queues associated with processes. Thanks to
++ * the accurate policy of B-WF2Q+, BFQ can afford to assign high
++ * budgets to I/O-bound processes issuing sequential requests (to
++ * boost the throughput), and yet guarantee a low latency to
++ * interactive and soft real-time applications.
+ *
+ * BFQ is described in [1], where also a reference to the initial, more
+ * theoretical paper on BFQ can be found. The interested reader can find
+@@ -40,10 +49,10 @@
+ * H-WF2Q+, while the augmented tree used to implement B-WF2Q+ with O(log N)
+ * complexity derives from the one introduced with EEVDF in [3].
+ *
+- * [1] P. Valente and M. Andreolini, ``Improving Application Responsiveness
+- * with the BFQ Disk I/O Scheduler'',
+- * Proceedings of the 5th Annual International Systems and Storage
+- * Conference (SYSTOR '12), June 2012.
++ * [1] P. Valente, A. Avanzini, "Evolution of the BFQ Storage I/O
++ * Scheduler", Proceedings of the First Workshop on Mobile System
++ * Technologies (MST-2015), May 2015.
++ * http://algogroup.unimore.it/people/paolo/disk_sched/mst-2015.pdf
+ *
+ * http://algogroup.unimo.it/people/paolo/disk_sched/bf1-v1-suite-results.pdf
+ *
+@@ -70,24 +79,23 @@
+ #include "bfq.h"
+ #include "blk.h"
+
+-/* Expiration time of sync (0) and async (1) requests, in jiffies. */
+-static const int bfq_fifo_expire[2] = { HZ / 4, HZ / 8 };
++/* Expiration time of sync (0) and async (1) requests, in ns. */
++static const u64 bfq_fifo_expire[2] = { NSEC_PER_SEC / 4, NSEC_PER_SEC / 8 };
+
+ /* Maximum backwards seek, in KiB. */
+-static const int bfq_back_max = 16 * 1024;
++static const int bfq_back_max = (16 * 1024);
+
+ /* Penalty of a backwards seek, in number of sectors. */
+ static const int bfq_back_penalty = 2;
+
+-/* Idling period duration, in jiffies. */
+-static int bfq_slice_idle = HZ / 125;
++/* Idling period duration, in ns. */
++static u32 bfq_slice_idle = (NSEC_PER_SEC / 125);
+
+ /* Minimum number of assigned budgets for which stats are safe to compute. */
+ static const int bfq_stats_min_budgets = 194;
+
+ /* Default maximum budget values, in sectors and number of requests. */
+-static const int bfq_default_max_budget = 16 * 1024;
+-static const int bfq_max_budget_async_rq = 4;
++static const int bfq_default_max_budget = (16 * 1024);
+
+ /*
+ * Async to sync throughput distribution is controlled as follows:
+@@ -97,23 +105,28 @@ static const int bfq_max_budget_async_rq = 4;
+ static const int bfq_async_charge_factor = 10;
+
+ /* Default timeout values, in jiffies, approximating CFQ defaults. */
+-static const int bfq_timeout_sync = HZ / 8;
+-static int bfq_timeout_async = HZ / 25;
++static const int bfq_timeout = (HZ / 8);
+
+-struct kmem_cache *bfq_pool;
++static struct kmem_cache *bfq_pool;
+
+-/* Below this threshold (in ms), we consider thinktime immediate. */
+-#define BFQ_MIN_TT 2
++/* Below this threshold (in ns), we consider thinktime immediate. */
++#define BFQ_MIN_TT (2 * NSEC_PER_MSEC)
+
+ /* hw_tag detection: parallel requests threshold and min samples needed. */
+ #define BFQ_HW_QUEUE_THRESHOLD 4
+ #define BFQ_HW_QUEUE_SAMPLES 32
+
+-#define BFQQ_SEEK_THR (sector_t)(8 * 1024)
+-#define BFQQ_SEEKY(bfqq) ((bfqq)->seek_mean > BFQQ_SEEK_THR)
++#define BFQQ_SEEK_THR (sector_t)(8 * 100)
++#define BFQQ_SECT_THR_NONROT (sector_t)(2 * 32)
++#define BFQQ_CLOSE_THR (sector_t)(8 * 1024)
++#define BFQQ_SEEKY(bfqq) (hweight32(bfqq->seek_history) > 32/8)
+
+-/* Min samples used for peak rate estimation (for autotuning). */
+-#define BFQ_PEAK_RATE_SAMPLES 32
++/* Min number of samples required to perform peak-rate update */
++#define BFQ_RATE_MIN_SAMPLES 32
++/* Min observation time interval required to perform a peak-rate update (ns) */
++#define BFQ_RATE_MIN_INTERVAL (300*NSEC_PER_MSEC)
++/* Target observation time interval for a peak-rate update (ns) */
++#define BFQ_RATE_REF_INTERVAL NSEC_PER_SEC
+
+ /* Shift used for peak rate fixed precision calculations. */
+ #define BFQ_RATE_SHIFT 16
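The seekiness test changes from a mean-seek-distance comparison to a 32-bit history: BFQQ_SEEKY() now reports a queue as seeky when more than 32/8 = 4 of the last 32 requests were classified as far from their predecessor. A small sketch of that scheme, assuming the history is updated as a shift register with one bit per request (helper names below are illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    #define SEEK_HISTORY_BITS 32

    static unsigned popcount32(uint32_t x)
    {
        unsigned n = 0;

        while (x) {
            n += x & 1;
            x >>= 1;
        }
        return n;
    }

    /* Record one request: 'seeky' mirrors the distance test done against
     * BFQQ_SEEK_THR (or BFQQ_CLOSE_THR/BFQQ_SECT_THR_NONROT) in the patch. */
    static void record_request(uint32_t *seek_history, bool seeky)
    {
        *seek_history = (*seek_history << 1) | (seeky ? 1 : 0);
    }

    static bool queue_is_seeky(uint32_t seek_history)
    {
        return popcount32(seek_history) > SEEK_HISTORY_BITS / 8;
    }

hweight32() in the patch plays the role of popcount32() here.
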
+@@ -141,16 +154,24 @@ struct kmem_cache *bfq_pool;
+ * The device's speed class is dynamically (re)detected in
+ * bfq_update_peak_rate() every time the estimated peak rate is updated.
+ *
+- * In the following definitions, R_slow[0]/R_fast[0] and T_slow[0]/T_fast[0]
+- * are the reference values for a slow/fast rotational device, whereas
+- * R_slow[1]/R_fast[1] and T_slow[1]/T_fast[1] are the reference values for
+- * a slow/fast non-rotational device. Finally, device_speed_thresh are the
+- * thresholds used to switch between speed classes.
++ * In the following definitions, R_slow[0]/R_fast[0] and
++ * T_slow[0]/T_fast[0] are the reference values for a slow/fast
++ * rotational device, whereas R_slow[1]/R_fast[1] and
++ * T_slow[1]/T_fast[1] are the reference values for a slow/fast
++ * non-rotational device. Finally, device_speed_thresh are the
++ * thresholds used to switch between speed classes. The reference
++ * rates are not the actual peak rates of the devices used as a
++ * reference, but slightly lower values. The reason for using these
++ * slightly lower values is that the peak-rate estimator tends to
++ * yield slightly lower values than the actual peak rate (it can yield
++ * the actual peak rate only if there is only one process doing I/O,
++ * and the process does sequential I/O).
++ *
+ * Both the reference peak rates and the thresholds are measured in
+ * sectors/usec, left-shifted by BFQ_RATE_SHIFT.
+ */
+-static int R_slow[2] = {1536, 10752};
+-static int R_fast[2] = {17415, 34791};
++static int R_slow[2] = {1000, 10700};
++static int R_fast[2] = {14000, 33000};
+ /*
+ * To improve readability, a conversion function is used to initialize the
+ * following arrays, which entails that they can be initialized only in a
+@@ -178,18 +199,6 @@ static void bfq_schedule_dispatch(struct bfq_data *bfqd);
+ #define bfq_sample_valid(samples) ((samples) > 80)
+
+ /*
+- * We regard a request as SYNC, if either it's a read or has the SYNC bit
+- * set (in which case it could also be a direct WRITE).
+- */
+-static int bfq_bio_sync(struct bio *bio)
+-{
+- if (bio_data_dir(bio) == READ || (bio->bi_rw & REQ_SYNC))
+- return 1;
+-
+- return 0;
+-}
+-
+-/*
+ * Scheduler run of queue, if there are requests pending and no one in the
+ * driver that will restart queueing.
+ */
+@@ -409,11 +418,7 @@ static bool bfq_differentiated_weights(struct bfq_data *bfqd)
+ */
+ static bool bfq_symmetric_scenario(struct bfq_data *bfqd)
+ {
+- return
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+- !bfqd->active_numerous_groups &&
+-#endif
+- !bfq_differentiated_weights(bfqd);
++ return !bfq_differentiated_weights(bfqd);
+ }
+
+ /*
+@@ -505,13 +510,45 @@ static void bfq_weights_tree_remove(struct bfq_data *bfqd,
+ entity->weight_counter = NULL;
+ }
+
++/*
++ * Return expired entry, or NULL to just start from scratch in rbtree.
++ */
++static struct request *bfq_check_fifo(struct bfq_queue *bfqq,
++ struct request *last)
++{
++ struct request *rq;
++
++ if (bfq_bfqq_fifo_expire(bfqq))
++ return NULL;
++
++ bfq_mark_bfqq_fifo_expire(bfqq);
++
++ rq = rq_entry_fifo(bfqq->fifo.next);
++
++ if (rq == last || ktime_get_ns() < rq->fifo_time)
++ return NULL;
++
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "check_fifo: returned %p", rq);
++ BUG_ON(RB_EMPTY_NODE(&rq->rb_node));
++ return rq;
++}
++
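bfq_check_fifo() lets the dispatch path honour request deadlines before following the sector-sorted tree: the head of the FIFO is handed out only once its fifo_time, now kept in nanoseconds, has passed. A userspace sketch of that expiry decision:

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    static uint64_t now_ns(void)
    {
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
    }

    /* Return true if the oldest queued request should be picked next,
     * i.e. its deadline (rq->fifo_time in the patch) has already passed. */
    static bool fifo_head_expired(uint64_t head_fifo_time_ns)
    {
        return now_ns() >= head_fifo_time_ns;
    }
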
+ static struct request *bfq_find_next_rq(struct bfq_data *bfqd,
+ struct bfq_queue *bfqq,
+ struct request *last)
+ {
+ struct rb_node *rbnext = rb_next(&last->rb_node);
+ struct rb_node *rbprev = rb_prev(&last->rb_node);
+- struct request *next = NULL, *prev = NULL;
++ struct request *next, *prev = NULL;
++
++ BUG_ON(list_empty(&bfqq->fifo));
++
++ /* Follow expired path, else get first next available. */
++ next = bfq_check_fifo(bfqq, last);
++ if (next) {
++ BUG_ON(next == last);
++ return next;
++ }
+
+ BUG_ON(RB_EMPTY_NODE(&last->rb_node));
+
+@@ -533,9 +570,19 @@ static struct request *bfq_find_next_rq(struct bfq_data *bfqd,
+ static unsigned long bfq_serv_to_charge(struct request *rq,
+ struct bfq_queue *bfqq)
+ {
+- return blk_rq_sectors(rq) *
+- (1 + ((!bfq_bfqq_sync(bfqq)) * (bfqq->wr_coeff == 1) *
+- bfq_async_charge_factor));
++ if (bfq_bfqq_sync(bfqq) || bfqq->wr_coeff > 1)
++ return blk_rq_sectors(rq);
++
++ /*
++ * If there are no weight-raised queues, then amplify service
++ * by just the async charge factor; otherwise amplify service
++ * by twice the async charge factor, to further reduce latency
++ * for weight-raised queues.
++ */
++ if (bfqq->bfqd->wr_busy_queues == 0)
++ return blk_rq_sectors(rq) * bfq_async_charge_factor;
++
++ return blk_rq_sectors(rq) * 2 * bfq_async_charge_factor;
+ }
+
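With bfq_async_charge_factor equal to 10, the rewritten bfq_serv_to_charge() bills sync and weight-raised queues the plain request size, while async queues are billed ten times that, or twenty times when some queue is currently weight-raised. A condensed restatement of that rule in plain C:

    #include <stdbool.h>

    #define ASYNC_CHARGE_FACTOR 10   /* bfq_async_charge_factor */

    static unsigned long serv_to_charge(unsigned long sectors, bool sync,
                                        unsigned int wr_coeff,
                                        unsigned int wr_busy_queues)
    {
        if (sync || wr_coeff > 1)
            return sectors;                          /* charged at face value */
        if (wr_busy_queues == 0)
            return sectors * ASYNC_CHARGE_FACTOR;    /* async: 10x */
        return sectors * 2 * ASYNC_CHARGE_FACTOR;    /* async with wr queues: 20x */
    }

For example, a 64-sector async request consumes 640 sectors of budget, or 1280 while weight-raised queues are busy.
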
+ /**
+@@ -576,7 +623,7 @@ static void bfq_updated_next_req(struct bfq_data *bfqd,
+ entity->budget = new_budget;
+ bfq_log_bfqq(bfqd, bfqq, "updated next rq: new budget %lu",
+ new_budget);
+- bfq_activate_bfqq(bfqd, bfqq);
++ bfq_requeue_bfqq(bfqd, bfqq);
+ }
+ }
+
+@@ -590,12 +637,23 @@ static unsigned int bfq_wr_duration(struct bfq_data *bfqd)
+ dur = bfqd->RT_prod;
+ do_div(dur, bfqd->peak_rate);
+
+- return dur;
+-}
++ /*
++ * Limit duration between 3 and 13 seconds. Tests show that
++ * higher values than 13 seconds often yield the opposite of
++ * the desired result, i.e., worsen responsiveness by letting
++ * non-interactive and non-soft-real-time applications
++ * preserve weight raising for a too long time interval.
++ *
++ * On the other end, lower values than 3 seconds make it
++ * difficult for most interactive tasks to complete their jobs
++ * before weight-raising finishes.
++ */
++ if (dur > msecs_to_jiffies(13000))
++ dur = msecs_to_jiffies(13000);
++ else if (dur < msecs_to_jiffies(3000))
++ dur = msecs_to_jiffies(3000);
+
+-static unsigned int bfq_bfqq_cooperations(struct bfq_queue *bfqq)
+-{
+- return bfqq->bic ? bfqq->bic->cooperations : 0;
++ return dur;
+ }
+
+ static void
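The added clamp keeps the estimated weight-raising duration between 3 and 13 seconds. A simplified sketch, working in milliseconds rather than jiffies and assuming peak_rate is non-zero and that RT_prod/peak_rate already yields milliseconds:

    static unsigned int clamp_wr_duration_ms(unsigned long long rt_prod,
                                             unsigned long long peak_rate)
    {
        unsigned long long dur = rt_prod / peak_rate;  /* raw estimate */

        if (dur > 13000)
            dur = 13000;   /* more than 13 s worsens responsiveness */
        else if (dur < 3000)
            dur = 3000;    /* less than 3 s cuts interactive tasks short */
        return (unsigned int)dur;
    }

So a raw estimate of 20 s is capped at 13 s, and one of 1 s is raised to 3 s, for the reasons given in the comment above.
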
+@@ -605,31 +663,31 @@ bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
+ bfq_mark_bfqq_idle_window(bfqq);
+ else
+ bfq_clear_bfqq_idle_window(bfqq);
++
+ if (bic->saved_IO_bound)
+ bfq_mark_bfqq_IO_bound(bfqq);
+ else
+ bfq_clear_bfqq_IO_bound(bfqq);
+- /* Assuming that the flag in_large_burst is already correctly set */
+- if (bic->wr_time_left && bfqq->bfqd->low_latency &&
+- !bfq_bfqq_in_large_burst(bfqq) &&
+- bic->cooperations < bfqq->bfqd->bfq_coop_thresh) {
+- /*
+- * Start a weight raising period with the duration given by
+- * the raising_time_left snapshot.
+- */
+- if (bfq_bfqq_busy(bfqq))
+- bfqq->bfqd->wr_busy_queues++;
+- bfqq->wr_coeff = bfqq->bfqd->bfq_wr_coeff;
+- bfqq->wr_cur_max_time = bic->wr_time_left;
+- bfqq->last_wr_start_finish = jiffies;
+- bfqq->entity.prio_changed = 1;
++
++ bfqq->wr_coeff = bic->saved_wr_coeff;
++ bfqq->wr_start_at_switch_to_srt = bic->saved_wr_start_at_switch_to_srt;
++ BUG_ON(time_is_after_jiffies(bfqq->wr_start_at_switch_to_srt));
++ bfqq->last_wr_start_finish = bic->saved_last_wr_start_finish;
++ bfqq->wr_cur_max_time = bic->saved_wr_cur_max_time;
++ BUG_ON(time_is_after_jiffies(bfqq->last_wr_start_finish));
++
++ if (bfqq->wr_coeff > 1 && (bfq_bfqq_in_large_burst(bfqq) ||
++ time_is_before_jiffies(bfqq->last_wr_start_finish +
++ bfqq->wr_cur_max_time))) {
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "resume state: switching off wr (%lu + %lu < %lu)",
++ bfqq->last_wr_start_finish, bfqq->wr_cur_max_time,
++ jiffies);
++
++ bfqq->wr_coeff = 1;
+ }
+- /*
+- * Clear wr_time_left to prevent bfq_bfqq_save_state() from
+- * getting confused about the queue's need of a weight-raising
+- * period.
+- */
+- bic->wr_time_left = 0;
++ /* make sure weight will be updated, however we got here */
++ bfqq->entity.prio_changed = 1;
+ }
+
+ static int bfqq_process_refs(struct bfq_queue *bfqq)
+@@ -639,7 +697,7 @@ static int bfqq_process_refs(struct bfq_queue *bfqq)
+ lockdep_assert_held(bfqq->bfqd->queue->queue_lock);
+
+ io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE];
+- process_refs = atomic_read(&bfqq->ref) - io_refs - bfqq->entity.on_st;
++ process_refs = bfqq->ref - io_refs - bfqq->entity.on_st;
+ BUG_ON(process_refs < 0);
+ return process_refs;
+ }
+@@ -654,6 +712,7 @@ static void bfq_reset_burst_list(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ hlist_del_init(&item->burst_list_node);
+ hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
+ bfqd->burst_size = 1;
++ bfqd->burst_parent_entity = bfqq->entity.parent;
+ }
+
+ /* Add bfqq to the list of queues in current burst (see bfq_handle_burst) */
+@@ -662,6 +721,10 @@ static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ /* Increment burst size to take into account also bfqq */
+ bfqd->burst_size++;
+
++ bfq_log_bfqq(bfqd, bfqq, "add_to_burst %d", bfqd->burst_size);
++
++ BUG_ON(bfqd->burst_size > bfqd->bfq_large_burst_thresh);
++
+ if (bfqd->burst_size == bfqd->bfq_large_burst_thresh) {
+ struct bfq_queue *pos, *bfqq_item;
+ struct hlist_node *n;
+@@ -671,15 +734,19 @@ static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ * other to consider this burst as large.
+ */
+ bfqd->large_burst = true;
++ bfq_log_bfqq(bfqd, bfqq, "add_to_burst: large burst started");
+
+ /*
+ * We can now mark all queues in the burst list as
+ * belonging to a large burst.
+ */
+ hlist_for_each_entry(bfqq_item, &bfqd->burst_list,
+- burst_list_node)
++ burst_list_node) {
+ bfq_mark_bfqq_in_large_burst(bfqq_item);
++ bfq_log_bfqq(bfqd, bfqq_item, "marked in large burst");
++ }
+ bfq_mark_bfqq_in_large_burst(bfqq);
++ bfq_log_bfqq(bfqd, bfqq, "marked in large burst");
+
+ /*
+ * From now on, and until the current burst finishes, any
+@@ -691,67 +758,79 @@ static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ hlist_for_each_entry_safe(pos, n, &bfqd->burst_list,
+ burst_list_node)
+ hlist_del_init(&pos->burst_list_node);
+- } else /* burst not yet large: add bfqq to the burst list */
++ } else /*
++ * Burst not yet large: add bfqq to the burst list. Do
++ * not increment the ref counter for bfqq, because bfqq
++ * is removed from the burst list before freeing bfqq
++ * in put_queue.
++ */
+ hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list);
+ }
+
+ /*
+- * If many queues happen to become active shortly after each other, then,
+- * to help the processes associated to these queues get their job done as
+- * soon as possible, it is usually better to not grant either weight-raising
+- * or device idling to these queues. In this comment we describe, firstly,
+- * the reasons why this fact holds, and, secondly, the next function, which
+- * implements the main steps needed to properly mark these queues so that
+- * they can then be treated in a different way.
++ * If many queues belonging to the same group happen to be created
++ * shortly after each other, then the processes associated with these
++ * queues have typically a common goal. In particular, bursts of queue
++ * creations are usually caused by services or applications that spawn
++ * many parallel threads/processes. Examples are systemd during boot,
++ * or git grep. To help these processes get their job done as soon as
++ * possible, it is usually better to not grant either weight-raising
++ * or device idling to their queues.
+ *
+- * As for the terminology, we say that a queue becomes active, i.e.,
+- * switches from idle to backlogged, either when it is created (as a
+- * consequence of the arrival of an I/O request), or, if already existing,
+- * when a new request for the queue arrives while the queue is idle.
+- * Bursts of activations, i.e., activations of different queues occurring
+- * shortly after each other, are typically caused by services or applications
+- * that spawn or reactivate many parallel threads/processes. Examples are
+- * systemd during boot or git grep.
++ * In this comment we describe, firstly, the reasons why this fact
++ * holds, and, secondly, the next function, which implements the main
++ * steps needed to properly mark these queues so that they can then be
++ * treated in a different way.
+ *
+- * These services or applications benefit mostly from a high throughput:
+- * the quicker the requests of the activated queues are cumulatively served,
+- * the sooner the target job of these queues gets completed. As a consequence,
+- * weight-raising any of these queues, which also implies idling the device
+- * for it, is almost always counterproductive: in most cases it just lowers
+- * throughput.
++ * The above services or applications benefit mostly from a high
++ * throughput: the quicker the requests of the activated queues are
++ * cumulatively served, the sooner the target job of these queues gets
++ * completed. As a consequence, weight-raising any of these queues,
++ * which also implies idling the device for it, is almost always
++ * counterproductive. In most cases it just lowers throughput.
+ *
+- * On the other hand, a burst of activations may be also caused by the start
+- * of an application that does not consist in a lot of parallel I/O-bound
+- * threads. In fact, with a complex application, the burst may be just a
+- * consequence of the fact that several processes need to be executed to
+- * start-up the application. To start an application as quickly as possible,
+- * the best thing to do is to privilege the I/O related to the application
+- * with respect to all other I/O. Therefore, the best strategy to start as
+- * quickly as possible an application that causes a burst of activations is
+- * to weight-raise all the queues activated during the burst. This is the
++ * On the other hand, a burst of queue creations may be caused also by
++ * the start of an application that does not consist of a lot of
++ * parallel I/O-bound threads. In fact, with a complex application,
++ * several short processes may need to be executed to start-up the
++ * application. In this respect, to start an application as quickly as
++ * possible, the best thing to do is in any case to privilege the I/O
++ * related to the application with respect to all other
++ * I/O. Therefore, the best strategy to start as quickly as possible
++ * an application that causes a burst of queue creations is to
++ * weight-raise all the queues created during the burst. This is the
+ * exact opposite of the best strategy for the other type of bursts.
+ *
+- * In the end, to take the best action for each of the two cases, the two
+- * types of bursts need to be distinguished. Fortunately, this seems
+- * relatively easy to do, by looking at the sizes of the bursts. In
+- * particular, we found a threshold such that bursts with a larger size
+- * than that threshold are apparently caused only by services or commands
+- * such as systemd or git grep. For brevity, hereafter we call just 'large'
+- * these bursts. BFQ *does not* weight-raise queues whose activations occur
+- * in a large burst. In addition, for each of these queues BFQ performs or
+- * does not perform idling depending on which choice boosts the throughput
+- * most. The exact choice depends on the device and request pattern at
++ * In the end, to take the best action for each of the two cases, the
++ * two types of bursts need to be distinguished. Fortunately, this
++ * seems relatively easy, by looking at the sizes of the bursts. In
++ * particular, we found a threshold such that only bursts with a
++ * larger size than that threshold are apparently caused by
++ * services or commands such as systemd or git grep. For brevity,
++ * hereafter we call just 'large' these bursts. BFQ *does not*
++ * weight-raise queues whose creation occurs in a large burst. In
++ * addition, for each of these queues BFQ performs or does not perform
++ * idling depending on which choice boosts the throughput more. The
++ * exact choice depends on the device and request pattern at
+ * hand.
+ *
+- * Turning back to the next function, it implements all the steps needed
+- * to detect the occurrence of a large burst and to properly mark all the
+- * queues belonging to it (so that they can then be treated in a different
+- * way). This goal is achieved by maintaining a special "burst list" that
+- * holds, temporarily, the queues that belong to the burst in progress. The
+- * list is then used to mark these queues as belonging to a large burst if
+- * the burst does become large. The main steps are the following.
++ * Unfortunately, false positives may occur while an interactive task
++ * is starting (e.g., an application is being started). The
++ * consequence is that the queues associated with the task do not
++ * enjoy weight raising as expected. Fortunately these false positives
++ * are very rare. They typically occur if some service happens to
++ * start doing I/O exactly when the interactive task starts.
+ *
+- * . when the very first queue is activated, the queue is inserted into the
++ * Turning back to the next function, it implements all the steps
++ * needed to detect the occurrence of a large burst and to properly
++ * mark all the queues belonging to it (so that they can then be
++ * treated in a different way). This goal is achieved by maintaining a
++ * "burst list" that holds, temporarily, the queues that belong to the
++ * burst in progress. The list is then used to mark these queues as
++ * belonging to a large burst if the burst does become large. The main
++ * steps are the following.
++ *
++ * . when the very first queue is created, the queue is inserted into the
+ * list (as it could be the first queue in a possible burst)
+ *
+ * . if the current burst has not yet become large, and a queue Q that does
+@@ -772,13 +851,13 @@ static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ *
+ * . the device enters a large-burst mode
+ *
+- * . if a queue Q that does not belong to the burst is activated while
++ * . if a queue Q that does not belong to the burst is created while
+ * the device is in large-burst mode and shortly after the last time
+ * at which a queue either entered the burst list or was marked as
+ * belonging to the current large burst, then Q is immediately marked
+ * as belonging to a large burst.
+ *
+- * . if a queue Q that does not belong to the burst is activated a while
++ * . if a queue Q that does not belong to the burst is created a while
+ * later, i.e., not shortly after, than the last time at which a queue
+ * either entered the burst list or was marked as belonging to the
+ * current large burst, then the current burst is deemed as finished and:
+@@ -791,52 +870,44 @@ static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ * in a possible new burst (then the burst list contains just Q
+ * after this step).
+ */
+-static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+- bool idle_for_long_time)
++static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+ /*
+- * If bfqq happened to be activated in a burst, but has been idle
+- * for at least as long as an interactive queue, then we assume
+- * that, in the overall I/O initiated in the burst, the I/O
+- * associated to bfqq is finished. So bfqq does not need to be
+- * treated as a queue belonging to a burst anymore. Accordingly,
+- * we reset bfqq's in_large_burst flag if set, and remove bfqq
+- * from the burst list if it's there. We do not decrement instead
+- * burst_size, because the fact that bfqq does not need to belong
+- * to the burst list any more does not invalidate the fact that
+- * bfqq may have been activated during the current burst.
+- */
+- if (idle_for_long_time) {
+- hlist_del_init(&bfqq->burst_list_node);
+- bfq_clear_bfqq_in_large_burst(bfqq);
+- }
+-
+- /*
+ * If bfqq is already in the burst list or is part of a large
+- * burst, then there is nothing else to do.
++ * burst, or finally has just been split, then there is
++ * nothing else to do.
+ */
+ if (!hlist_unhashed(&bfqq->burst_list_node) ||
+- bfq_bfqq_in_large_burst(bfqq))
++ bfq_bfqq_in_large_burst(bfqq) ||
++ time_is_after_eq_jiffies(bfqq->split_time +
++ msecs_to_jiffies(10)))
+ return;
+
+ /*
+- * If bfqq's activation happens late enough, then the current
+- * burst is finished, and related data structures must be reset.
++ * If bfqq's creation happens late enough, or bfqq belongs to
++ * a different group than the burst group, then the current
++ * burst is finished, and related data structures must be
++ * reset.
+ *
+- * In this respect, consider the special case where bfqq is the very
+- * first queue being activated. In this case, last_ins_in_burst is
+- * not yet significant when we get here. But it is easy to verify
+- * that, whether or not the following condition is true, bfqq will
+- * end up being inserted into the burst list. In particular the
+- * list will happen to contain only bfqq. And this is exactly what
+- * has to happen, as bfqq may be the first queue in a possible
++ * In this respect, consider the special case where bfqq is
++ * the very first queue created after BFQ is selected for this
++ * device. In this case, last_ins_in_burst and
++ * burst_parent_entity are not yet significant when we get
++ * here. But it is easy to verify that, whether or not the
++ * following condition is true, bfqq will end up being
++ * inserted into the burst list. In particular the list will
++ * happen to contain only bfqq. And this is exactly what has
++ * to happen, as bfqq may be the first queue of the first
+ * burst.
+ */
+ if (time_is_before_jiffies(bfqd->last_ins_in_burst +
+- bfqd->bfq_burst_interval)) {
++ bfqd->bfq_burst_interval) ||
++ bfqq->entity.parent != bfqd->burst_parent_entity) {
+ bfqd->large_burst = false;
+ bfq_reset_burst_list(bfqd, bfqq);
+- return;
++ bfq_log_bfqq(bfqd, bfqq,
++ "handle_burst: late activation or different group");
++ goto end;
+ }
+
+ /*
+@@ -845,8 +916,9 @@ static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ * bfqq as belonging to this large burst immediately.
+ */
+ if (bfqd->large_burst) {
++ bfq_log_bfqq(bfqd, bfqq, "handle_burst: marked in burst");
+ bfq_mark_bfqq_in_large_burst(bfqq);
+- return;
++ goto end;
+ }
+
+ /*
+@@ -855,25 +927,490 @@ static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ * queue. Then we add bfqq to the burst.
+ */
+ bfq_add_to_burst(bfqd, bfqq);
++end:
++ /*
++ * At this point, bfqq either has been added to the current
++ * burst or has caused the current burst to terminate and a
++ * possible new burst to start. In particular, in the second
++ * case, bfqq has become the first queue in the possible new
++ * burst. In both cases last_ins_in_burst needs to be moved
++ * forward.
++ */
++ bfqd->last_ins_in_burst = jiffies;
++
++}
++
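A toy model of the burst bookkeeping performed by bfq_reset_burst_list(), bfq_add_to_burst() and bfq_handle_burst(): queues created shortly after one another, in the same group, grow the burst size until a threshold is reached, after which every such queue is treated as part of a large burst (no weight raising, idling only when it helps throughput). Names and the threshold value below are illustrative; the real code also keeps the queues on a burst list so they can be marked retroactively:

    #include <stdbool.h>

    struct burst_state {
        unsigned int size;    /* queues created in the current burst */
        bool         large;   /* burst already classified as large   */
    };

    #define LARGE_BURST_THRESH 8   /* assumed stand-in for bfq_large_burst_thresh */

    /* Called when a new queue is created shortly after the previous one and
     * in the same group; returns true if the new queue must be treated as
     * belonging to a large burst. */
    static bool account_new_queue(struct burst_state *b)
    {
        if (b->large)
            return true;               /* device already in large-burst mode */
        if (++b->size >= LARGE_BURST_THRESH)
            b->large = true;           /* threshold reached: mark the burst */
        return b->large;
    }
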
++static int bfq_bfqq_budget_left(struct bfq_queue *bfqq)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++
++ return entity->budget - entity->service;
++}
++
++/*
++ * If enough samples have been computed, return the current max budget
++ * stored in bfqd, which is dynamically updated according to the
++ * estimated disk peak rate; otherwise return the default max budget
++ */
++static int bfq_max_budget(struct bfq_data *bfqd)
++{
++ if (bfqd->budgets_assigned < bfq_stats_min_budgets)
++ return bfq_default_max_budget;
++ else
++ return bfqd->bfq_max_budget;
++}
++
++/*
++ * Return min budget, which is a fraction of the current or default
++ * max budget (trying with 1/32)
++ */
++static int bfq_min_budget(struct bfq_data *bfqd)
++{
++ if (bfqd->budgets_assigned < bfq_stats_min_budgets)
++ return bfq_default_max_budget / 32;
++ else
++ return bfqd->bfq_max_budget / 32;
++}
++
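bfq_max_budget() and bfq_min_budget() fall back to the static default of 16*1024 sectors until at least bfq_stats_min_budgets (194) budgets have been assigned, then switch to the rate-derived estimate, with the minimum budget being 1/32 of whichever maximum applies. Restated as a standalone sketch:

    #define DEFAULT_MAX_BUDGET (16 * 1024)   /* sectors, bfq_default_max_budget */
    #define STATS_MIN_BUDGETS  194           /* bfq_stats_min_budgets */

    static int max_budget(int budgets_assigned, int estimated_max_budget)
    {
        if (budgets_assigned < STATS_MIN_BUDGETS)
            return DEFAULT_MAX_BUDGET;       /* too few samples: use the default */
        return estimated_max_budget;         /* rate-derived estimate */
    }

    static int min_budget(int budgets_assigned, int estimated_max_budget)
    {
        return max_budget(budgets_assigned, estimated_max_budget) / 32;
    }

With the default maximum this gives a minimum budget of 512 sectors.
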
++static void bfq_bfqq_expire(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ bool compensate,
++ enum bfqq_expiration reason);
++
++/*
++ * The next function, invoked after the input queue bfqq switches from
++ * idle to busy, updates the budget of bfqq. The function also tells
++ * whether the in-service queue should be expired, by returning
++ * true. The purpose of expiring the in-service queue is to give bfqq
++ * the chance to possibly preempt the in-service queue, and the reason
++ * for preempting the in-service queue is to achieve one of the two
++ * goals below.
++ *
++ * 1. Guarantee to bfqq its reserved bandwidth even if bfqq has
++ * expired because it has remained idle. In particular, bfqq may have
++ * expired for one of the following two reasons:
++ *
++ * - BFQ_BFQQ_NO_MORE_REQUEST bfqq did not enjoy any device idling and
++ * did not make it to issue a new request before its last request
++ * was served;
++ *
++ * - BFQ_BFQQ_TOO_IDLE bfqq did enjoy device idling, but did not issue
++ * a new request before the expiration of the idling-time.
++ *
++ * Even if bfqq has expired for one of the above reasons, the process
++ * associated with the queue may be however issuing requests greedily,
++ * and thus be sensitive to the bandwidth it receives (bfqq may have
++ * remained idle for other reasons: CPU high load, bfqq not enjoying
++ * idling, I/O throttling somewhere in the path from the process to
++ * the I/O scheduler, ...). But if, after every expiration for one of
++ * the above two reasons, bfqq has to wait for the service of at least
++ * one full budget of another queue before being served again, then
++ * bfqq is likely to get a much lower bandwidth or resource time than
++ * its reserved ones. To address this issue, two countermeasures need
++ * to be taken.
++ *
++ * First, the budget and the timestamps of bfqq need to be updated in
++ * a special way on bfqq reactivation: they need to be updated as if
++ * bfqq did not remain idle and did not expire. In fact, if they are
++ * computed as if bfqq expired and remained idle until reactivation,
++ * then the process associated with bfqq is treated as if, instead of
++ * being greedy, it stopped issuing requests when bfqq remained idle,
++ * and restarts issuing requests only on this reactivation. In other
++ * words, the scheduler does not help the process recover the "service
++ * hole" between bfqq expiration and reactivation. As a consequence,
++ * the process receives a lower bandwidth than its reserved one. In
++ * contrast, to recover this hole, the budget must be updated as if
++ * bfqq was not expired at all before this reactivation, i.e., it must
++ * be set to the value of the remaining budget when bfqq was
++ * expired. Along the same line, timestamps need to be assigned the
++ * value they had the last time bfqq was selected for service, i.e.,
++ * before last expiration. Thus timestamps need to be back-shifted
++ * with respect to their normal computation (see [1] for more details
++ * on this tricky aspect).
++ *
++ * Secondly, to allow the process to recover the hole, the in-service
++ * queue must be expired too, to give bfqq the chance to preempt it
++ * immediately. In fact, if bfqq has to wait for a full budget of the
++ * in-service queue to be completed, then it may become impossible to
++ * let the process recover the hole, even if the back-shifted
++ * timestamps of bfqq are lower than those of the in-service queue. If
++ * this happens for most or all of the holes, then the process may not
++ * receive its reserved bandwidth. In this respect, it is worth noting
++ * that, being the service of outstanding requests unpreemptible, a
++ * little fraction of the holes may however be unrecoverable, thereby
++ * causing a little loss of bandwidth.
++ *
++ * The last important point is detecting whether bfqq does need this
++ * bandwidth recovery. In this respect, the next function deems the
++ * process associated with bfqq greedy, and thus allows it to recover
++ * the hole, if: 1) the process is waiting for the arrival of a new
++ * request (which implies that bfqq expired for one of the above two
++ * reasons), and 2) such a request has arrived soon. The first
++ * condition is controlled through the flag non_blocking_wait_rq,
++ * while the second through the flag arrived_in_time. If both
++ * conditions hold, then the function computes the budget in the
++ * above-described special way, and signals that the in-service queue
++ * should be expired. Timestamp back-shifting is done later in
++ * __bfq_activate_entity.
++ *
++ * 2. Reduce latency. Even if timestamps are not backshifted to let
++ * the process associated with bfqq recover a service hole, bfqq may
++ * however happen to have, after being (re)activated, a lower finish
++ * timestamp than the in-service queue. That is, the next budget of
++ * bfqq may have to be completed before the one of the in-service
++ * queue. If this is the case, then preempting the in-service queue
++ * allows this goal to be achieved, apart from the unpreemptible,
++ * outstanding requests mentioned above.
++ *
++ * Unfortunately, regardless of which of the above two goals one wants
++ * to achieve, service trees need first to be updated to know whether
++ * the in-service queue must be preempted. To have service trees
++ * correctly updated, the in-service queue must be expired and
++ * rescheduled, and bfqq must be scheduled too. This is one of the
++ * most costly operations (in future versions, the scheduling
++ * mechanism may be re-designed in such a way to make it possible to
++ * know whether preemption is needed without needing to update service
++ * trees). In addition, queue preemptions almost always cause random
++ * I/O, and thus loss of throughput. Because of these facts, the next
++ * function adopts the following simple scheme to avoid both costly
++ * operations and too frequent preemptions: it requests the expiration
++ * of the in-service queue (unconditionally) only for queues that need
++ * to recover a hole, or that either are weight-raised or deserve to
++ * be weight-raised.
++ */
++static bool bfq_bfqq_update_budg_for_activation(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ bool arrived_in_time,
++ bool wr_or_deserves_wr)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++
++ if (bfq_bfqq_non_blocking_wait_rq(bfqq) && arrived_in_time) {
++ /*
++ * We do not clear the flag non_blocking_wait_rq here, as
++ * the latter is used in bfq_activate_bfqq to signal
++ * that timestamps need to be back-shifted (and is
++ * cleared right after).
++ */
++
++ /*
++ * In next assignment we rely on that either
++ * entity->service or entity->budget are not updated
++ * on expiration if bfqq is empty (see
++ * __bfq_bfqq_recalc_budget). Thus both quantities
++ * remain unchanged after such an expiration, and the
++ * following statement therefore assigns to
++ * entity->budget the remaining budget on such an
++ * expiration. For clarity, entity->service is not
++ * updated on expiration in any case, and, in normal
++ * operation, is reset only when bfqq is selected for
++ * service (see bfq_get_next_queue).
++ */
++ BUG_ON(bfqq->max_budget < 0);
++ entity->budget = min_t(unsigned long,
++ bfq_bfqq_budget_left(bfqq),
++ bfqq->max_budget);
++
++ BUG_ON(entity->budget < 0);
++ return true;
++ }
++
++ BUG_ON(bfqq->max_budget < 0);
++ entity->budget = max_t(unsigned long, bfqq->max_budget,
++ bfq_serv_to_charge(bfqq->next_rq, bfqq));
++ BUG_ON(entity->budget < 0);
++
++ bfq_clear_bfqq_non_blocking_wait_rq(bfqq);
++ return wr_or_deserves_wr;
++}
++
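Condensed, the two budget paths of bfq_bfqq_update_budg_for_activation() are: if the queue was waiting non-blockingly for a request and the request arrived in time, keep the remaining budget (capped at max_budget) so the service hole can be recovered and ask the caller to consider preempting the in-service queue; otherwise size the budget for at least the next request and request preemption only for queues that are, or deserve to be, weight-raised. A sketch with illustrative helper names:

    #include <stdbool.h>

    static unsigned long min_ul(unsigned long a, unsigned long b) { return a < b ? a : b; }
    static unsigned long max_ul(unsigned long a, unsigned long b) { return a > b ? a : b; }

    /* Returns true when the caller should consider expiring the in-service
     * queue so that this queue can preempt it. */
    static bool update_budget_on_activation(unsigned long *budget,
                                            unsigned long budget_left,
                                            unsigned long max_budget,
                                            unsigned long next_rq_charge,
                                            bool non_blocking_wait_rq,
                                            bool arrived_in_time,
                                            bool wr_or_deserves_wr)
    {
        if (non_blocking_wait_rq && arrived_in_time) {
            /* keep the remaining budget so the service hole can be recovered */
            *budget = min_ul(budget_left, max_budget);
            return true;
        }
        /* fresh activation: make room for at least the next request */
        *budget = max_ul(max_budget, next_rq_charge);
        return wr_or_deserves_wr;
    }
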
++static void bfq_update_bfqq_wr_on_rq_arrival(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ unsigned int old_wr_coeff,
++ bool wr_or_deserves_wr,
++ bool interactive,
++ bool in_burst,
++ bool soft_rt)
++{
++ if (old_wr_coeff == 1 && wr_or_deserves_wr) {
++ /* start a weight-raising period */
++ if (interactive) {
++ bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++ bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++ } else {
++ bfqq->wr_start_at_switch_to_srt = jiffies;
++ bfqq->wr_coeff = bfqd->bfq_wr_coeff *
++ BFQ_SOFTRT_WEIGHT_FACTOR;
++ bfqq->wr_cur_max_time =
++ bfqd->bfq_wr_rt_max_time;
++ }
++ /*
++ * If needed, further reduce budget to make sure it is
++ * close to bfqq's backlog, so as to reduce the
++ * scheduling-error component due to a too large
++ * budget. Do not care about throughput consequences,
++ * but only about latency. Finally, do not assign a
++ * too small budget either, to avoid increasing
++ * latency by causing too frequent expirations.
++ */
++ bfqq->entity.budget = min_t(unsigned long,
++ bfqq->entity.budget,
++ 2 * bfq_min_budget(bfqd));
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "wrais starting at %lu, rais_max_time %u",
++ jiffies,
++ jiffies_to_msecs(bfqq->wr_cur_max_time));
++ } else if (old_wr_coeff > 1) {
++ if (interactive) { /* update wr coeff and duration */
++ bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++ bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++ } else if (in_burst) {
++ bfqq->wr_coeff = 1;
++ bfq_log_bfqq(bfqd, bfqq,
++ "wrais ending at %lu, rais_max_time %u",
++ jiffies,
++ jiffies_to_msecs(bfqq->
++ wr_cur_max_time));
++ } else if (soft_rt) {
++ /*
++ * The application is now or still meeting the
++ * requirements for being deemed soft rt. We
++ * can then correctly and safely (re)charge
++ * the weight-raising duration for the
++ * application with the weight-raising
++ * duration for soft rt applications.
++ *
++ * In particular, doing this recharge now, i.e.,
++ * before the weight-raising period for the
++ * application finishes, reduces the probability
++ * of the following negative scenario:
++ * 1) the weight of a soft rt application is
++ * raised at startup (as for any newly
++ * created application),
++ * 2) since the application is not interactive,
++ * at a certain time weight-raising is
++ * stopped for the application,
++ * 3) at that time the application happens to
++ * still have pending requests, and hence
++ * is destined to not have a chance to be
++ * deemed soft rt before these requests are
++ * completed (see the comments to the
++ * function bfq_bfqq_softrt_next_start()
++ * for details on soft rt detection),
++ * 4) these pending requests experience a high
++ * latency because the application is not
++ * weight-raised while they are pending.
++ */
++ if (bfqq->wr_cur_max_time !=
++ bfqd->bfq_wr_rt_max_time) {
++ bfqq->wr_start_at_switch_to_srt =
++ bfqq->last_wr_start_finish;
++ BUG_ON(time_is_after_jiffies(bfqq->last_wr_start_finish));
++
++ bfqq->wr_cur_max_time =
++ bfqd->bfq_wr_rt_max_time;
++ bfqq->wr_coeff = bfqd->bfq_wr_coeff *
++ BFQ_SOFTRT_WEIGHT_FACTOR;
++ bfq_log_bfqq(bfqd, bfqq,
++ "switching to soft_rt wr");
++ } else
++ bfq_log_bfqq(bfqd, bfqq,
++ "moving forward soft_rt wr duration");
++ bfqq->last_wr_start_finish = jiffies;
++ }
++ }
++}
++
++static bool bfq_bfqq_idle_for_long_time(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
++ return bfqq->dispatched == 0 &&
++ time_is_before_jiffies(
++ bfqq->budget_timeout +
++ bfqd->bfq_wr_min_idle_time);
++}
++
++static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq,
++ int old_wr_coeff,
++ struct request *rq,
++ bool *interactive)
++{
++ bool soft_rt, in_burst, wr_or_deserves_wr,
++ bfqq_wants_to_preempt,
++ idle_for_long_time = bfq_bfqq_idle_for_long_time(bfqd, bfqq),
++ /*
++ * See the comments on
++ * bfq_bfqq_update_budg_for_activation for
++ * details on the usage of the next variable.
++ */
++ arrived_in_time = ktime_get_ns() <=
++ RQ_BIC(rq)->ttime.last_end_request +
++ bfqd->bfq_slice_idle * 3;
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "bfq_add_request non-busy: "
++ "jiffies %lu, in_time %d, idle_long %d busyw %d "
++ "wr_coeff %u",
++ jiffies, arrived_in_time,
++ idle_for_long_time,
++ bfq_bfqq_non_blocking_wait_rq(bfqq),
++ old_wr_coeff);
++
++ BUG_ON(bfqq->entity.budget < bfqq->entity.service);
++
++ BUG_ON(bfqq == bfqd->in_service_queue);
++ bfqg_stats_update_io_add(bfqq_group(RQ_BFQQ(rq)), bfqq, rq->cmd_flags);
++
++ /*
++ * bfqq deserves to be weight-raised if:
++ * - it is sync,
++ * - it does not belong to a large burst,
++ * - it has been idle for enough time or is soft real-time,
++ * - is linked to a bfq_io_cq (it is not shared in any sense)
++ */
++ in_burst = bfq_bfqq_in_large_burst(bfqq);
++ soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
++ !in_burst &&
++ time_is_before_jiffies(bfqq->soft_rt_next_start);
++ *interactive =
++ !in_burst &&
++ idle_for_long_time;
++ wr_or_deserves_wr = bfqd->low_latency &&
++ (bfqq->wr_coeff > 1 ||
++ (bfq_bfqq_sync(bfqq) &&
++ bfqq->bic && (*interactive || soft_rt)));
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "bfq_add_request: "
++ "in_burst %d, "
++ "soft_rt %d (next %lu), inter %d, bic %p",
++ bfq_bfqq_in_large_burst(bfqq), soft_rt,
++ bfqq->soft_rt_next_start,
++ *interactive,
++ bfqq->bic);
++
++ /*
++ * Using the last flag, update budget and check whether bfqq
++ * may want to preempt the in-service queue.
++ */
++ bfqq_wants_to_preempt =
++ bfq_bfqq_update_budg_for_activation(bfqd, bfqq,
++ arrived_in_time,
++ wr_or_deserves_wr);
++
++ /*
++ * If bfqq happened to be activated in a burst, but has been
++ * idle for much more than an interactive queue, then we
++ * assume that, in the overall I/O initiated in the burst, the
++ * I/O associated with bfqq is finished. So bfqq does not need
++ * to be treated as a queue belonging to a burst
++ * anymore. Accordingly, we reset bfqq's in_large_burst flag
++ * if set, and remove bfqq from the burst list if it's
++ * there. We do not decrement burst_size, because the fact
++ * that bfqq does not need to belong to the burst list any
++ * more does not invalidate the fact that bfqq was created in
++ * a burst.
++ */
++ if (likely(!bfq_bfqq_just_created(bfqq)) &&
++ idle_for_long_time &&
++ time_is_before_jiffies(
++ bfqq->budget_timeout +
++ msecs_to_jiffies(10000))) {
++ hlist_del_init(&bfqq->burst_list_node);
++ bfq_clear_bfqq_in_large_burst(bfqq);
++ }
++
++ bfq_clear_bfqq_just_created(bfqq);
++
++ if (!bfq_bfqq_IO_bound(bfqq)) {
++ if (arrived_in_time) {
++ bfqq->requests_within_timer++;
++ if (bfqq->requests_within_timer >=
++ bfqd->bfq_requests_within_timer)
++ bfq_mark_bfqq_IO_bound(bfqq);
++ } else
++ bfqq->requests_within_timer = 0;
++ bfq_log_bfqq(bfqd, bfqq, "requests in time %d",
++ bfqq->requests_within_timer);
++ }
++
++ if (bfqd->low_latency) {
++ if (unlikely(time_is_after_jiffies(bfqq->split_time)))
++ /* wraparound */
++ bfqq->split_time =
++ jiffies - bfqd->bfq_wr_min_idle_time - 1;
++
++ if (time_is_before_jiffies(bfqq->split_time +
++ bfqd->bfq_wr_min_idle_time)) {
++ bfq_update_bfqq_wr_on_rq_arrival(bfqd, bfqq,
++ old_wr_coeff,
++ wr_or_deserves_wr,
++ *interactive,
++ in_burst,
++ soft_rt);
++
++ if (old_wr_coeff != bfqq->wr_coeff)
++ bfqq->entity.prio_changed = 1;
++ }
++ }
++
++ bfqq->last_idle_bklogged = jiffies;
++ bfqq->service_from_backlogged = 0;
++ bfq_clear_bfqq_softrt_update(bfqq);
++
++ bfq_add_bfqq_busy(bfqd, bfqq);
++
++ /*
++ * Expire in-service queue only if preemption may be needed
++ * for guarantees. In this respect, the function
++ * next_queue_may_preempt just checks a simple, necessary
++ * condition, and not a sufficient condition based on
++ * timestamps. In fact, for the latter condition to be
++ * evaluated, timestamps would need first to be updated, and
++ * this operation is quite costly (see the comments on the
++ * function bfq_bfqq_update_budg_for_activation).
++ */
++ if (bfqd->in_service_queue && bfqq_wants_to_preempt &&
++ bfqd->in_service_queue->wr_coeff < bfqq->wr_coeff &&
++ next_queue_may_preempt(bfqd)) {
++ struct bfq_queue *in_serv =
++ bfqd->in_service_queue;
++ BUG_ON(in_serv == bfqq);
++
++ bfq_bfqq_expire(bfqd, bfqd->in_service_queue,
++ false, BFQ_BFQQ_PREEMPTED);
++ BUG_ON(in_serv->entity.budget < 0);
++ }
+ }
+
+ static void bfq_add_request(struct request *rq)
+ {
+ struct bfq_queue *bfqq = RQ_BFQQ(rq);
+- struct bfq_entity *entity = &bfqq->entity;
+ struct bfq_data *bfqd = bfqq->bfqd;
+ struct request *next_rq, *prev;
+- unsigned long old_wr_coeff = bfqq->wr_coeff;
++ unsigned int old_wr_coeff = bfqq->wr_coeff;
+ bool interactive = false;
+
+- bfq_log_bfqq(bfqd, bfqq, "add_request %d", rq_is_sync(rq));
++ bfq_log_bfqq(bfqd, bfqq, "add_request: size %u %s",
++ blk_rq_sectors(rq), rq_is_sync(rq) ? "S" : "A");
++
++ if (bfqq->wr_coeff > 1) /* queue is being weight-raised */
++ bfq_log_bfqq(bfqd, bfqq,
++ "raising period dur %u/%u msec, old coeff %u, w %d(%d)",
++ jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish),
++ jiffies_to_msecs(bfqq->wr_cur_max_time),
++ bfqq->wr_coeff,
++ bfqq->entity.weight, bfqq->entity.orig_weight);
++
+ bfqq->queued[rq_is_sync(rq)]++;
+ bfqd->queued++;
+
+ elv_rb_add(&bfqq->sort_list, rq);
+
+ /*
+- * Check if this request is a better next-serve candidate.
++ * Check if this request is a better next-to-serve candidate.
+ */
+ prev = bfqq->next_rq;
+ next_rq = bfq_choose_req(bfqd, bfqq->next_rq, rq, bfqd->last_position);
+@@ -886,160 +1423,10 @@ static void bfq_add_request(struct request *rq)
+ if (prev != bfqq->next_rq)
+ bfq_pos_tree_add_move(bfqd, bfqq);
+
+- if (!bfq_bfqq_busy(bfqq)) {
+- bool soft_rt, coop_or_in_burst,
+- idle_for_long_time = time_is_before_jiffies(
+- bfqq->budget_timeout +
+- bfqd->bfq_wr_min_idle_time);
+-
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+- bfqg_stats_update_io_add(bfqq_group(RQ_BFQQ(rq)), bfqq,
+- rq->cmd_flags);
+-#endif
+- if (bfq_bfqq_sync(bfqq)) {
+- bool already_in_burst =
+- !hlist_unhashed(&bfqq->burst_list_node) ||
+- bfq_bfqq_in_large_burst(bfqq);
+- bfq_handle_burst(bfqd, bfqq, idle_for_long_time);
+- /*
+- * If bfqq was not already in the current burst,
+- * then, at this point, bfqq either has been
+- * added to the current burst or has caused the
+- * current burst to terminate. In particular, in
+- * the second case, bfqq has become the first
+- * queue in a possible new burst.
+- * In both cases last_ins_in_burst needs to be
+- * moved forward.
+- */
+- if (!already_in_burst)
+- bfqd->last_ins_in_burst = jiffies;
+- }
+-
+- coop_or_in_burst = bfq_bfqq_in_large_burst(bfqq) ||
+- bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh;
+- soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 &&
+- !coop_or_in_burst &&
+- time_is_before_jiffies(bfqq->soft_rt_next_start);
+- interactive = !coop_or_in_burst && idle_for_long_time;
+- entity->budget = max_t(unsigned long, bfqq->max_budget,
+- bfq_serv_to_charge(next_rq, bfqq));
+-
+- if (!bfq_bfqq_IO_bound(bfqq)) {
+- if (time_before(jiffies,
+- RQ_BIC(rq)->ttime.last_end_request +
+- bfqd->bfq_slice_idle)) {
+- bfqq->requests_within_timer++;
+- if (bfqq->requests_within_timer >=
+- bfqd->bfq_requests_within_timer)
+- bfq_mark_bfqq_IO_bound(bfqq);
+- } else
+- bfqq->requests_within_timer = 0;
+- }
+-
+- if (!bfqd->low_latency)
+- goto add_bfqq_busy;
+-
+- if (bfq_bfqq_just_split(bfqq))
+- goto set_prio_changed;
+-
+- /*
+- * If the queue:
+- * - is not being boosted,
+- * - has been idle for enough time,
+- * - is not a sync queue or is linked to a bfq_io_cq (it is
+- * shared "for its nature" or it is not shared and its
+- * requests have not been redirected to a shared queue)
+- * start a weight-raising period.
+- */
+- if (old_wr_coeff == 1 && (interactive || soft_rt) &&
+- (!bfq_bfqq_sync(bfqq) || bfqq->bic)) {
+- bfqq->wr_coeff = bfqd->bfq_wr_coeff;
+- if (interactive)
+- bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+- else
+- bfqq->wr_cur_max_time =
+- bfqd->bfq_wr_rt_max_time;
+- bfq_log_bfqq(bfqd, bfqq,
+- "wrais starting at %lu, rais_max_time %u",
+- jiffies,
+- jiffies_to_msecs(bfqq->wr_cur_max_time));
+- } else if (old_wr_coeff > 1) {
+- if (interactive)
+- bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+- else if (coop_or_in_burst ||
+- (bfqq->wr_cur_max_time ==
+- bfqd->bfq_wr_rt_max_time &&
+- !soft_rt)) {
+- bfqq->wr_coeff = 1;
+- bfq_log_bfqq(bfqd, bfqq,
+- "wrais ending at %lu, rais_max_time %u",
+- jiffies,
+- jiffies_to_msecs(bfqq->
+- wr_cur_max_time));
+- } else if (time_before(
+- bfqq->last_wr_start_finish +
+- bfqq->wr_cur_max_time,
+- jiffies +
+- bfqd->bfq_wr_rt_max_time) &&
+- soft_rt) {
+- /*
+- *
+- * The remaining weight-raising time is lower
+- * than bfqd->bfq_wr_rt_max_time, which means
+- * that the application is enjoying weight
+- * raising either because deemed soft-rt in
+- * the near past, or because deemed interactive
+- * a long ago.
+- * In both cases, resetting now the current
+- * remaining weight-raising time for the
+- * application to the weight-raising duration
+- * for soft rt applications would not cause any
+- * latency increase for the application (as the
+- * new duration would be higher than the
+- * remaining time).
+- *
+- * In addition, the application is now meeting
+- * the requirements for being deemed soft rt.
+- * In the end we can correctly and safely
+- * (re)charge the weight-raising duration for
+- * the application with the weight-raising
+- * duration for soft rt applications.
+- *
+- * In particular, doing this recharge now, i.e.,
+- * before the weight-raising period for the
+- * application finishes, reduces the probability
+- * of the following negative scenario:
+- * 1) the weight of a soft rt application is
+- * raised at startup (as for any newly
+- * created application),
+- * 2) since the application is not interactive,
+- * at a certain time weight-raising is
+- * stopped for the application,
+- * 3) at that time the application happens to
+- * still have pending requests, and hence
+- * is destined to not have a chance to be
+- * deemed soft rt before these requests are
+- * completed (see the comments to the
+- * function bfq_bfqq_softrt_next_start()
+- * for details on soft rt detection),
+- * 4) these pending requests experience a high
+- * latency because the application is not
+- * weight-raised while they are pending.
+- */
+- bfqq->last_wr_start_finish = jiffies;
+- bfqq->wr_cur_max_time =
+- bfqd->bfq_wr_rt_max_time;
+- }
+- }
+-set_prio_changed:
+- if (old_wr_coeff != bfqq->wr_coeff)
+- entity->prio_changed = 1;
+-add_bfqq_busy:
+- bfqq->last_idle_bklogged = jiffies;
+- bfqq->service_from_backlogged = 0;
+- bfq_clear_bfqq_softrt_update(bfqq);
+- bfq_add_bfqq_busy(bfqd, bfqq);
+- } else {
++ if (!bfq_bfqq_busy(bfqq)) /* switching to busy ... */
++ bfq_bfqq_handle_idle_busy_switch(bfqd, bfqq, old_wr_coeff,
++ rq, &interactive);
++ else {
+ if (bfqd->low_latency && old_wr_coeff == 1 && !rq_is_sync(rq) &&
+ time_is_before_jiffies(
+ bfqq->last_wr_start_finish +
+@@ -1048,16 +1435,43 @@ static void bfq_add_request(struct request *rq)
+ bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
+
+ bfqd->wr_busy_queues++;
+- entity->prio_changed = 1;
++ bfqq->entity.prio_changed = 1;
+ bfq_log_bfqq(bfqd, bfqq,
+- "non-idle wrais starting at %lu, rais_max_time %u",
+- jiffies,
+- jiffies_to_msecs(bfqq->wr_cur_max_time));
++ "non-idle wrais starting, "
++ "wr_max_time %u wr_busy %d",
++ jiffies_to_msecs(bfqq->wr_cur_max_time),
++ bfqd->wr_busy_queues);
+ }
+ if (prev != bfqq->next_rq)
+ bfq_updated_next_req(bfqd, bfqq);
+ }
+
++ /*
++ * Assign jiffies to last_wr_start_finish in the following
++ * cases:
++ *
++ * . if bfqq is not going to be weight-raised, because, for
++ * non weight-raised queues, last_wr_start_finish stores the
++ * arrival time of the last request; as of now, this piece
++ * of information is used only for deciding whether to
++ * weight-raise async queues
++ *
++ * . if bfqq is not weight-raised, because, if bfqq is now
++ * switching to weight-raised, then last_wr_start_finish
++ * stores the time when weight-raising starts
++ *
++ * . if bfqq is interactive, because, regardless of whether
++ * bfqq is currently weight-raised, the weight-raising
++ * period must start or restart (this case is considered
++ * separately because it is not detected by the above
++ * conditions, if bfqq is already weight-raised)
++ *
++ * last_wr_start_finish has to be updated also if bfqq is soft
++ * real-time, because the weight-raising period is constantly
++ * restarted on idle-to-busy transitions for these queues, but
++ * this is already done in bfq_bfqq_handle_idle_busy_switch if
++ * needed.
++ */
+ if (bfqd->low_latency &&
+ (old_wr_coeff == 1 || bfqq->wr_coeff == 1 || interactive))
+ bfqq->last_wr_start_finish = jiffies;
+@@ -1074,22 +1488,32 @@ static struct request *bfq_find_rq_fmerge(struct bfq_data *bfqd,
+ if (!bic)
+ return NULL;
+
+- bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++ bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf));
+ if (bfqq)
+ return elv_rb_find(&bfqq->sort_list, bio_end_sector(bio));
+
+ return NULL;
+ }
+
+-static void bfq_activate_request(struct request_queue *q, struct request *rq)
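++/*
++ * Return the seek distance, in sectors, between last_pos (the position
++ * reached by the last dispatched request) and the position of rq, or 0
++ * if last_pos is not (yet) valid.
++ */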
++static sector_t get_sdist(sector_t last_pos, struct request *rq)
+ {
+- struct bfq_data *bfqd = q->elevator->elevator_data;
+-
+- bfqd->rq_in_driver++;
+- bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
+- bfq_log(bfqd, "activate_request: new bfqd->last_position %llu",
+- (unsigned long long) bfqd->last_position);
+-}
++ sector_t sdist = 0;
++
++ if (last_pos) {
++ if (last_pos < blk_rq_pos(rq))
++ sdist = blk_rq_pos(rq) - last_pos;
++ else
++ sdist = last_pos - blk_rq_pos(rq);
++ }
++
++ return sdist;
++}
++
++static void bfq_activate_request(struct request_queue *q, struct request *rq)
++{
++ struct bfq_data *bfqd = q->elevator->elevator_data;
++ bfqd->rq_in_driver++;
++}
+
+ static void bfq_deactivate_request(struct request_queue *q, struct request *rq)
+ {
+@@ -1105,6 +1529,9 @@ static void bfq_remove_request(struct request *rq)
+ struct bfq_data *bfqd = bfqq->bfqd;
+ const int sync = rq_is_sync(rq);
+
++ BUG_ON(bfqq->entity.service > bfqq->entity.budget &&
++ bfqq == bfqd->in_service_queue);
++
+ if (bfqq->next_rq == rq) {
+ bfqq->next_rq = bfq_find_next_rq(bfqd, bfqq, rq);
+ bfq_updated_next_req(bfqd, bfqq);
+@@ -1118,8 +1545,26 @@ static void bfq_remove_request(struct request *rq)
+ elv_rb_del(&bfqq->sort_list, rq);
+
+ if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
+- if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->in_service_queue)
+- bfq_del_bfqq_busy(bfqd, bfqq, 1);
++ bfqq->next_rq = NULL;
++
++ BUG_ON(bfqq->entity.budget < 0);
++
++ if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->in_service_queue) {
++ bfq_del_bfqq_busy(bfqd, bfqq, false);
++ /* bfqq emptied. In normal operation, when
++ * bfqq is empty, bfqq->entity.service and
++ * bfqq->entity.budget must contain,
++ * respectively, the service received and the
++ * budget used last time bfqq emptied. These
++ * facts do not hold in this case, as at least
++ * this last removal occurred while bfqq is
++ * not in service. To avoid inconsistencies,
++ * reset both bfqq->entity.service and
++ * bfqq->entity.budget.
++ */
++ bfqq->entity.budget = bfqq->entity.service = 0;
++ }
++
+ /*
+ * Remove queue from request-position tree as it is empty.
+ */
+@@ -1133,9 +1578,7 @@ static void bfq_remove_request(struct request *rq)
+ BUG_ON(bfqq->meta_pending == 0);
+ bfqq->meta_pending--;
+ }
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ bfqg_stats_update_io_remove(bfqq_group(bfqq), rq->cmd_flags);
+-#endif
+ }
+
+ static int bfq_merge(struct request_queue *q, struct request **req,
+@@ -1145,7 +1588,7 @@ static int bfq_merge(struct request_queue *q, struct request **req,
+ struct request *__rq;
+
+ __rq = bfq_find_rq_fmerge(bfqd, bio);
+- if (__rq && elv_rq_merge_ok(__rq, bio)) {
++ if (__rq && elv_bio_merge_ok(__rq, bio)) {
+ *req = __rq;
+ return ELEVATOR_FRONT_MERGE;
+ }
+@@ -1190,7 +1633,7 @@ static void bfq_merged_request(struct request_queue *q, struct request *req,
+ static void bfq_bio_merged(struct request_queue *q, struct request *req,
+ struct bio *bio)
+ {
+- bfqg_stats_update_io_merged(bfqq_group(RQ_BFQQ(req)), bio->bi_rw);
++ bfqg_stats_update_io_merged(bfqq_group(RQ_BFQQ(req)), bio->bi_opf);
+ }
+ #endif
+
+@@ -1210,7 +1653,7 @@ static void bfq_merged_requests(struct request_queue *q, struct request *rq,
+ */
+ if (bfqq == next_bfqq &&
+ !list_empty(&rq->queuelist) && !list_empty(&next->queuelist) &&
+- time_before(next->fifo_time, rq->fifo_time)) {
++ next->fifo_time < rq->fifo_time) {
+ list_del_init(&rq->queuelist);
+ list_replace_init(&next->queuelist, &rq->queuelist);
+ rq->fifo_time = next->fifo_time;
+@@ -1220,21 +1663,30 @@ static void bfq_merged_requests(struct request_queue *q, struct request *rq,
+ bfqq->next_rq = rq;
+
+ bfq_remove_request(next);
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ bfqg_stats_update_io_merged(bfqq_group(bfqq), next->cmd_flags);
+-#endif
+ }
+
+ /* Must be called with bfqq != NULL */
+ static void bfq_bfqq_end_wr(struct bfq_queue *bfqq)
+ {
+ BUG_ON(!bfqq);
++
+ if (bfq_bfqq_busy(bfqq))
+ bfqq->bfqd->wr_busy_queues--;
+ bfqq->wr_coeff = 1;
+ bfqq->wr_cur_max_time = 0;
+- /* Trigger a weight change on the next activation of the queue */
++ bfqq->last_wr_start_finish = jiffies;
++ /*
++ * Trigger a weight change on the next invocation of
++ * __bfq_entity_update_weight_prio.
++ */
+ bfqq->entity.prio_changed = 1;
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "end_wr: wrais ending at %lu, rais_max_time %u",
++ bfqq->last_wr_start_finish,
++ jiffies_to_msecs(bfqq->wr_cur_max_time));
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "end_wr: wr_busy %d",
++ bfqq->bfqd->wr_busy_queues);
+ }
+
+ static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
+@@ -1277,7 +1729,7 @@ static int bfq_rq_close_to_sector(void *io_struct, bool request,
+ sector_t sector)
+ {
+ return abs(bfq_io_struct_pos(io_struct, request) - sector) <=
+- BFQQ_SEEK_THR;
++ BFQQ_CLOSE_THR;
+ }
+
+ static struct bfq_queue *bfqq_find_close(struct bfq_data *bfqd,
+@@ -1399,7 +1851,7 @@ bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq)
+ * throughput.
+ */
+ bfqq->new_bfqq = new_bfqq;
+- atomic_add(process_refs, &new_bfqq->ref);
++ new_bfqq->ref += process_refs;
+ return new_bfqq;
+ }
+
+@@ -1430,9 +1882,23 @@ static bool bfq_may_be_close_cooperator(struct bfq_queue *bfqq,
+ }
+
+ /*
+- * Attempt to schedule a merge of bfqq with the currently in-service queue
+- * or with a close queue among the scheduled queues.
+- * Return NULL if no merge was scheduled, a pointer to the shared bfq_queue
++ * If this function returns true, then bfqq cannot be merged. The idea
++ * is that true cooperation happens very early after processes start
++ * to do I/O. Usually, late cooperations are just accidental false
++ * positives. In case bfqq is weight-raised, such false positives
++ * would evidently degrade latency guarantees for bfqq.
++ */
++static bool wr_from_too_long(struct bfq_queue *bfqq)
++{
++ return bfqq->wr_coeff > 1 &&
++ time_is_before_jiffies(bfqq->last_wr_start_finish +
++ msecs_to_jiffies(100));
++}
++
++/*
++ * Attempt to schedule a merge of bfqq with the currently in-service
++ * queue or with a close queue among the scheduled queues. Return
++ * NULL if no merge was scheduled, a pointer to the shared bfq_queue
+ * structure otherwise.
+ *
+ * The OOM queue is not allowed to participate to cooperation: in fact, since
+@@ -1441,6 +1907,18 @@ static bool bfq_may_be_close_cooperator(struct bfq_queue *bfqq,
+ * handle merging with the OOM queue would be quite complex and expensive
+ * to maintain. Besides, in such a critical condition as an out of memory,
+ * the benefits of queue merging may be little relevant, or even negligible.
++ *
++ * Weight-raised queues can be merged only if their weight-raising
++ * period has just started. In fact cooperating processes are usually
++ * started together. Thus, with this filter we avoid false positives
++ * that would jeopardize low-latency guarantees.
++ *
++ * WARNING: queue merging may impair fairness among non-weight raised
++ * queues, for at least two reasons: 1) the original weight of a
++ * merged queue may change during the merged state, 2) even if the
++ * weight stays the same, a merged queue may be bloated with many more
++ * requests than those produced by its originally-associated
++ * process.
+ */
+ static struct bfq_queue *
+ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+@@ -1450,16 +1928,32 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+
+ if (bfqq->new_bfqq)
+ return bfqq->new_bfqq;
+- if (!io_struct || unlikely(bfqq == &bfqd->oom_bfqq))
++
++ if (io_struct && wr_from_too_long(bfqq) &&
++ likely(bfqq != &bfqd->oom_bfqq))
++ bfq_log_bfqq(bfqd, bfqq,
++ "would have looked for coop, but bfq%d wr",
++ bfqq->pid);
++
++ if (!io_struct ||
++ wr_from_too_long(bfqq) ||
++ unlikely(bfqq == &bfqd->oom_bfqq))
+ return NULL;
+- /* If device has only one backlogged bfq_queue, don't search. */
++
++ /* If there is only one backlogged queue, don't search. */
+ if (bfqd->busy_queues == 1)
+ return NULL;
+
+ in_service_bfqq = bfqd->in_service_queue;
+
++ if (in_service_bfqq && in_service_bfqq != bfqq &&
++ bfqd->in_service_bic && wr_from_too_long(in_service_bfqq)
++ && likely(in_service_bfqq == &bfqd->oom_bfqq))
++ bfq_log_bfqq(bfqd, bfqq,
++ "would have tried merge with in-service-queue, but wr");
++
+ if (!in_service_bfqq || in_service_bfqq == bfqq ||
+- !bfqd->in_service_bic ||
++ !bfqd->in_service_bic || wr_from_too_long(in_service_bfqq) ||
+ unlikely(in_service_bfqq == &bfqd->oom_bfqq))
+ goto check_scheduled;
+
+@@ -1481,7 +1975,15 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+
+ BUG_ON(new_bfqq && bfqq->entity.parent != new_bfqq->entity.parent);
+
+- if (new_bfqq && likely(new_bfqq != &bfqd->oom_bfqq) &&
++ if (new_bfqq && wr_from_too_long(new_bfqq) &&
++ likely(new_bfqq != &bfqd->oom_bfqq) &&
++ bfq_may_be_close_cooperator(bfqq, new_bfqq))
++ bfq_log_bfqq(bfqd, bfqq,
++ "would have merged with bfq%d, but wr",
++ new_bfqq->pid);
++
++ if (new_bfqq && !wr_from_too_long(new_bfqq) &&
++ likely(new_bfqq != &bfqd->oom_bfqq) &&
+ bfq_may_be_close_cooperator(bfqq, new_bfqq))
+ return bfq_setup_merge(bfqq, new_bfqq);
+
+@@ -1490,53 +1992,25 @@ bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+
+ static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
+ {
++ struct bfq_io_cq *bic = bfqq->bic;
++
+ /*
+ * If !bfqq->bic, the queue is already shared or its requests
+ * have already been redirected to a shared queue; both idle window
+ * and weight raising state have already been saved. Do nothing.
+ */
+- if (!bfqq->bic)
++ if (!bic)
+ return;
+- if (bfqq->bic->wr_time_left)
+- /*
+- * This is the queue of a just-started process, and would
+- * deserve weight raising: we set wr_time_left to the full
+- * weight-raising duration to trigger weight-raising when
+- * and if the queue is split and the first request of the
+- * queue is enqueued.
+- */
+- bfqq->bic->wr_time_left = bfq_wr_duration(bfqq->bfqd);
+- else if (bfqq->wr_coeff > 1) {
+- unsigned long wr_duration =
+- jiffies - bfqq->last_wr_start_finish;
+- /*
+- * It may happen that a queue's weight raising period lasts
+- * longer than its wr_cur_max_time, as weight raising is
+- * handled only when a request is enqueued or dispatched (it
+- * does not use any timer). If the weight raising period is
+- * about to end, don't save it.
+- */
+- if (bfqq->wr_cur_max_time <= wr_duration)
+- bfqq->bic->wr_time_left = 0;
+- else
+- bfqq->bic->wr_time_left =
+- bfqq->wr_cur_max_time - wr_duration;
+- /*
+- * The bfq_queue is becoming shared or the requests of the
+- * process owning the queue are being redirected to a shared
+- * queue. Stop the weight raising period of the queue, as in
+- * both cases it should not be owned by an interactive or
+- * soft real-time application.
+- */
+- bfq_bfqq_end_wr(bfqq);
+- } else
+- bfqq->bic->wr_time_left = 0;
+- bfqq->bic->saved_idle_window = bfq_bfqq_idle_window(bfqq);
+- bfqq->bic->saved_IO_bound = bfq_bfqq_IO_bound(bfqq);
+- bfqq->bic->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq);
+- bfqq->bic->was_in_burst_list = !hlist_unhashed(&bfqq->burst_list_node);
+- bfqq->bic->cooperations++;
+- bfqq->bic->failed_cooperations = 0;
++
++ bic->saved_idle_window = bfq_bfqq_idle_window(bfqq);
++ bic->saved_IO_bound = bfq_bfqq_IO_bound(bfqq);
++ bic->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq);
++ bic->was_in_burst_list = !hlist_unhashed(&bfqq->burst_list_node);
++ bic->saved_wr_coeff = bfqq->wr_coeff;
++ bic->saved_wr_start_at_switch_to_srt = bfqq->wr_start_at_switch_to_srt;
++ bic->saved_last_wr_start_finish = bfqq->last_wr_start_finish;
++ bic->saved_wr_cur_max_time = bfqq->wr_cur_max_time;
++ BUG_ON(time_is_after_jiffies(bfqq->last_wr_start_finish));
+ }
+
+ static void bfq_get_bic_reference(struct bfq_queue *bfqq)
+@@ -1561,6 +2035,40 @@ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+ if (bfq_bfqq_IO_bound(bfqq))
+ bfq_mark_bfqq_IO_bound(new_bfqq);
+ bfq_clear_bfqq_IO_bound(bfqq);
++
++ /*
++ * If bfqq is weight-raised, then let new_bfqq inherit
++ * weight-raising. To reduce false positives, neglect the case
++ * where bfqq has just been created, but has not yet made it
++ * to be weight-raised (which may happen because EQM may merge
++ * bfqq even before bfq_add_request is executed for the first
++ * time for bfqq). Handling this case would however be very
++ * easy, thanks to the flag just_created.
++ */
++ if (new_bfqq->wr_coeff == 1 && bfqq->wr_coeff > 1) {
++ new_bfqq->wr_coeff = bfqq->wr_coeff;
++ new_bfqq->wr_cur_max_time = bfqq->wr_cur_max_time;
++ new_bfqq->last_wr_start_finish = bfqq->last_wr_start_finish;
++ new_bfqq->wr_start_at_switch_to_srt = bfqq->wr_start_at_switch_to_srt;
++ if (bfq_bfqq_busy(new_bfqq))
++ bfqd->wr_busy_queues++;
++ new_bfqq->entity.prio_changed = 1;
++ bfq_log_bfqq(bfqd, new_bfqq,
++ "wr start after merge with %d, rais_max_time %u",
++ bfqq->pid,
++ jiffies_to_msecs(bfqq->wr_cur_max_time));
++ }
++
++ if (bfqq->wr_coeff > 1) { /* bfqq has given its wr to new_bfqq */
++ bfqq->wr_coeff = 1;
++ bfqq->entity.prio_changed = 1;
++ if (bfq_bfqq_busy(bfqq))
++ bfqd->wr_busy_queues--;
++ }
++
++ bfq_log_bfqq(bfqd, new_bfqq, "merge_bfqqs: wr_busy %d",
++ bfqd->wr_busy_queues);
++
+ /*
+ * Grab a reference to the bic, to prevent it from being destroyed
+ * before being possibly touched by a bfq_split_bfqq().
+@@ -1587,30 +2095,19 @@ bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic,
+ bfq_put_queue(bfqq);
+ }
+
+-static void bfq_bfqq_increase_failed_cooperations(struct bfq_queue *bfqq)
+-{
+- struct bfq_io_cq *bic = bfqq->bic;
+- struct bfq_data *bfqd = bfqq->bfqd;
+-
+- if (bic && bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh) {
+- bic->failed_cooperations++;
+- if (bic->failed_cooperations >= bfqd->bfq_failed_cooperations)
+- bic->cooperations = 0;
+- }
+-}
+-
+-static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+- struct bio *bio)
++static int bfq_allow_bio_merge(struct request_queue *q, struct request *rq,
++ struct bio *bio)
+ {
+ struct bfq_data *bfqd = q->elevator->elevator_data;
++ bool is_sync = op_is_sync(bio->bi_opf);
+ struct bfq_io_cq *bic;
+ struct bfq_queue *bfqq, *new_bfqq;
+
+ /*
+ * Disallow merge of a sync bio into an async request.
+ */
+- if (bfq_bio_sync(bio) && !rq_is_sync(rq))
+- return 0;
++ if (is_sync && !rq_is_sync(rq))
++ return false;
+
+ /*
+ * Lookup the bfqq that this bio will be queued with. Allow
+@@ -1619,9 +2116,9 @@ static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+ */
+ bic = bfq_bic_lookup(bfqd, current->io_context);
+ if (!bic)
+- return 0;
++ return false;
+
+- bfqq = bic_to_bfqq(bic, bfq_bio_sync(bio));
++ bfqq = bic_to_bfqq(bic, is_sync);
+ /*
+ * We take advantage of this function to perform an early merge
+ * of the queues of possible cooperating processes.
+@@ -1636,30 +2133,111 @@ static int bfq_allow_merge(struct request_queue *q, struct request *rq,
+ * to decide whether bio and rq can be merged.
+ */
+ bfqq = new_bfqq;
+- } else
+- bfq_bfqq_increase_failed_cooperations(bfqq);
++ }
+ }
+
+ return bfqq == RQ_BFQQ(rq);
+ }
+
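++/*
++ * Allow merging rq and next only if they are handled by the same
++ * bfq_queue.
++ */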
++static int bfq_allow_rq_merge(struct request_queue *q, struct request *rq,
++ struct request *next)
++{
++ return RQ_BFQQ(rq) == RQ_BFQQ(next);
++}
++
++/*
++ * Set the maximum time for the in-service queue to consume its
++ * budget. This prevents seeky processes from lowering the throughput.
++ * In practice, a time-slice service scheme is used with seeky
++ * processes.
++ */
++static void bfq_set_budget_timeout(struct bfq_data *bfqd,
++ struct bfq_queue *bfqq)
++{
++ unsigned int timeout_coeff;
++
++ if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time)
++ timeout_coeff = 1;
++ else
++ timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight;
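++	/*
++	 * weight/orig_weight reflects the weight-raising factor of the
++	 * queue, so weight-raised (non soft-rt) queues get a
++	 * proportionally longer budget timeout.
++	 */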
++
++ bfqd->last_budget_start = ktime_get();
++
++ bfqq->budget_timeout = jiffies +
++ bfqd->bfq_timeout * timeout_coeff;
++
++ bfq_log_bfqq(bfqd, bfqq, "set budget_timeout %u",
++ jiffies_to_msecs(bfqd->bfq_timeout * timeout_coeff));
++}
++
+ static void __bfq_set_in_service_queue(struct bfq_data *bfqd,
+ struct bfq_queue *bfqq)
+ {
+ if (bfqq) {
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ bfqg_stats_update_avg_queue_size(bfqq_group(bfqq));
+-#endif
+ bfq_mark_bfqq_must_alloc(bfqq);
+- bfq_mark_bfqq_budget_new(bfqq);
+ bfq_clear_bfqq_fifo_expire(bfqq);
+
+ bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8;
+
++ BUG_ON(bfqq == bfqd->in_service_queue);
++ BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list));
++
++ if (time_is_before_jiffies(bfqq->last_wr_start_finish) &&
++ bfqq->wr_coeff > 1 &&
++ bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time &&
++ time_is_before_jiffies(bfqq->budget_timeout)) {
++ /*
++ * For soft real-time queues, move the start
++ * of the weight-raising period forward by the
++ * time the queue has not received any
++ * service. Otherwise, a relatively long
++ * service delay is likely to cause the
++ * weight-raising period of the queue to end,
++ * because of the short duration of the
++ * weight-raising period of a soft real-time
++ * queue. It is worth noting that this move
++ * is not so dangerous for the other queues,
++ * because soft real-time queues are not
++ * greedy.
++ *
++ * To not add a further variable, we use the
++ * overloaded field budget_timeout to
++ * determine for how long the queue has not
++ * received service, i.e., how much time has
++ * elapsed since the queue expired. However,
++ * this is a little imprecise, because
++ * budget_timeout is set to jiffies if bfqq
++ * not only expires, but also remains with no
++ * request.
++ */
++ if (time_after(bfqq->budget_timeout,
++ bfqq->last_wr_start_finish))
++ bfqq->last_wr_start_finish +=
++ jiffies - bfqq->budget_timeout;
++ else
++ bfqq->last_wr_start_finish = jiffies;
++
++ if (time_is_after_jiffies(bfqq->last_wr_start_finish)) {
++ pr_crit(
++				"BFQ WARNING: last %lu budget %lu jiffies %lu",
++ bfqq->last_wr_start_finish,
++ bfqq->budget_timeout,
++ jiffies);
++ pr_crit("diff %lu", jiffies -
++ max_t(unsigned long,
++ bfqq->last_wr_start_finish,
++ bfqq->budget_timeout));
++ bfqq->last_wr_start_finish = jiffies;
++ }
++ }
++
++ bfq_set_budget_timeout(bfqd, bfqq);
+ bfq_log_bfqq(bfqd, bfqq,
+ "set_in_service_queue, cur-budget = %d",
+ bfqq->entity.budget);
+- }
++ } else
++ bfq_log(bfqd, "set_in_service_queue: NULL");
+
+ bfqd->in_service_queue = bfqq;
+ }
+@@ -1675,36 +2253,11 @@ static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd)
+ return bfqq;
+ }
+
+-/*
+- * If enough samples have been computed, return the current max budget
+- * stored in bfqd, which is dynamically updated according to the
+- * estimated disk peak rate; otherwise return the default max budget
+- */
+-static int bfq_max_budget(struct bfq_data *bfqd)
+-{
+- if (bfqd->budgets_assigned < bfq_stats_min_budgets)
+- return bfq_default_max_budget;
+- else
+- return bfqd->bfq_max_budget;
+-}
+-
+-/*
+- * Return min budget, which is a fraction of the current or default
+- * max budget (trying with 1/32)
+- */
+-static int bfq_min_budget(struct bfq_data *bfqd)
+-{
+- if (bfqd->budgets_assigned < bfq_stats_min_budgets)
+- return bfq_default_max_budget / 32;
+- else
+- return bfqd->bfq_max_budget / 32;
+-}
+-
+ static void bfq_arm_slice_timer(struct bfq_data *bfqd)
+ {
+ struct bfq_queue *bfqq = bfqd->in_service_queue;
+ struct bfq_io_cq *bic;
+- unsigned long sl;
++ u32 sl;
+
+ BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
+
+@@ -1728,119 +2281,366 @@ static void bfq_arm_slice_timer(struct bfq_data *bfqd)
+ sl = bfqd->bfq_slice_idle;
+ /*
+ * Unless the queue is being weight-raised or the scenario is
+- * asymmetric, grant only minimum idle time if the queue either
+- * has been seeky for long enough or has already proved to be
+- * constantly seeky.
++ * asymmetric, grant only minimum idle time if the queue
++ * is seeky. A long idling is preserved for a weight-raised
++	 * queue, or, more generally, in an asymmetric scenario,
++ * because a long idling is needed for guaranteeing to a queue
++ * its reserved share of the throughput (in particular, it is
++ * needed if the queue has a higher weight than some other
++ * queue).
+ */
+- if (bfq_sample_valid(bfqq->seek_samples) &&
+- ((BFQQ_SEEKY(bfqq) && bfqq->entity.service >
+- bfq_max_budget(bfqq->bfqd) / 8) ||
+- bfq_bfqq_constantly_seeky(bfqq)) && bfqq->wr_coeff == 1 &&
++ if (BFQQ_SEEKY(bfqq) && bfqq->wr_coeff == 1 &&
+ bfq_symmetric_scenario(bfqd))
+- sl = min(sl, msecs_to_jiffies(BFQ_MIN_TT));
+- else if (bfqq->wr_coeff > 1)
+- sl = sl * 3;
++ sl = min_t(u32, sl, BFQ_MIN_TT);
++
+ bfqd->last_idling_start = ktime_get();
+- mod_timer(&bfqd->idle_slice_timer, jiffies + sl);
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ hrtimer_start(&bfqd->idle_slice_timer, ns_to_ktime(sl),
++ HRTIMER_MODE_REL);
+ bfqg_stats_set_start_idle_time(bfqq_group(bfqq));
+-#endif
+- bfq_log(bfqd, "arm idle: %u/%u ms",
+- jiffies_to_msecs(sl), jiffies_to_msecs(bfqd->bfq_slice_idle));
++ bfq_log(bfqd, "arm idle: %ld/%ld ms",
++ sl / NSEC_PER_MSEC, bfqd->bfq_slice_idle / NSEC_PER_MSEC);
+ }
+
+ /*
+- * Set the maximum time for the in-service queue to consume its
+- * budget. This prevents seeky processes from lowering the disk
+- * throughput (always guaranteed with a time slice scheme as in CFQ).
++ * In autotuning mode, max_budget is dynamically recomputed as the
++ * amount of sectors transferred in timeout at the estimated peak
++ * rate. This enables BFQ to utilize a full timeslice with a full
++ * budget, even if the in-service queue is served at peak rate. And
++ * this maximises throughput with sequential workloads.
+ */
+-static void bfq_set_budget_timeout(struct bfq_data *bfqd)
++static unsigned long bfq_calc_max_budget(struct bfq_data *bfqd)
+ {
+- struct bfq_queue *bfqq = bfqd->in_service_queue;
+- unsigned int timeout_coeff;
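++	/*
++	 * peak_rate is measured in sectors/usec, left-shifted by
++	 * BFQ_RATE_SHIFT; multiplying it by the timeout expressed in
++	 * usec, and shifting back, yields a budget in sectors.
++	 */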
++ return (u64)bfqd->peak_rate * USEC_PER_MSEC *
++ jiffies_to_msecs(bfqd->bfq_timeout)>>BFQ_RATE_SHIFT;
++}
+
+- if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time)
+- timeout_coeff = 1;
+- else
+- timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight;
++/*
++ * Update parameters related to throughput and responsiveness, as a
++ * function of the estimated peak rate. See comments on
++ * bfq_calc_max_budget(), and on T_slow and T_fast arrays.
++ */
++static void update_thr_responsiveness_params(struct bfq_data *bfqd)
++{
++ int dev_type = blk_queue_nonrot(bfqd->queue);
++
++ if (bfqd->bfq_user_max_budget == 0) {
++ bfqd->bfq_max_budget =
++ bfq_calc_max_budget(bfqd);
++ BUG_ON(bfqd->bfq_max_budget < 0);
++ bfq_log(bfqd, "new max_budget = %d",
++ bfqd->bfq_max_budget);
++ }
+
+- bfqd->last_budget_start = ktime_get();
++ if (bfqd->device_speed == BFQ_BFQD_FAST &&
++ bfqd->peak_rate < device_speed_thresh[dev_type]) {
++ bfqd->device_speed = BFQ_BFQD_SLOW;
++ bfqd->RT_prod = R_slow[dev_type] *
++ T_slow[dev_type];
++ } else if (bfqd->device_speed == BFQ_BFQD_SLOW &&
++ bfqd->peak_rate > device_speed_thresh[dev_type]) {
++ bfqd->device_speed = BFQ_BFQD_FAST;
++ bfqd->RT_prod = R_fast[dev_type] *
++ T_fast[dev_type];
++ }
+
+- bfq_clear_bfqq_budget_new(bfqq);
+- bfqq->budget_timeout = jiffies +
+- bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] * timeout_coeff;
++ bfq_log(bfqd,
++"dev_type %s dev_speed_class = %s (%llu sects/sec), thresh %llu setcs/sec",
++ dev_type == 0 ? "ROT" : "NONROT",
++ bfqd->device_speed == BFQ_BFQD_FAST ? "FAST" : "SLOW",
++ bfqd->device_speed == BFQ_BFQD_FAST ?
++ (USEC_PER_SEC*(u64)R_fast[dev_type])>>BFQ_RATE_SHIFT :
++ (USEC_PER_SEC*(u64)R_slow[dev_type])>>BFQ_RATE_SHIFT,
++ (USEC_PER_SEC*(u64)device_speed_thresh[dev_type])>>
++ BFQ_RATE_SHIFT);
++}
+
+- bfq_log_bfqq(bfqd, bfqq, "set budget_timeout %u",
+- jiffies_to_msecs(bfqd->bfq_timeout[bfq_bfqq_sync(bfqq)] *
+- timeout_coeff));
++static void bfq_reset_rate_computation(struct bfq_data *bfqd, struct request *rq)
++{
++ if (rq != NULL) { /* new rq dispatch now, reset accordingly */
++		bfqd->last_dispatch = bfqd->first_dispatch = ktime_get_ns();
++ bfqd->peak_rate_samples = 1;
++ bfqd->sequential_samples = 0;
++ bfqd->tot_sectors_dispatched = bfqd->last_rq_max_size =
++ blk_rq_sectors(rq);
++ } else /* no new rq dispatched, just reset the number of samples */
++ bfqd->peak_rate_samples = 0; /* full re-init on next disp. */
++
++ bfq_log(bfqd,
++ "reset_rate_computation at end, sample %u/%u tot_sects %llu",
++ bfqd->peak_rate_samples, bfqd->sequential_samples,
++ bfqd->tot_sectors_dispatched);
+ }
+
+-/*
+- * Move request from internal lists to the request queue dispatch list.
+- */
+-static void bfq_dispatch_insert(struct request_queue *q, struct request *rq)
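++/*
++ * Close the current observation interval: if enough samples have been
++ * collected, fold the measured dispatch rate into the peak-rate estimate
++ * through a low-pass filter, then reset the sampling state for the next
++ * interval.
++ */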
++static void bfq_update_rate_reset(struct bfq_data *bfqd, struct request *rq)
+ {
+- struct bfq_data *bfqd = q->elevator->elevator_data;
+- struct bfq_queue *bfqq = RQ_BFQQ(rq);
++ u32 rate, weight, divisor;
+
+ /*
+- * For consistency, the next instruction should have been executed
+- * after removing the request from the queue and dispatching it.
+- * We execute instead this instruction before bfq_remove_request()
+- * (and hence introduce a temporary inconsistency), for efficiency.
+- * In fact, in a forced_dispatch, this prevents two counters related
+- * to bfqq->dispatched to risk to be uselessly decremented if bfqq
+- * is not in service, and then to be incremented again after
+- * incrementing bfqq->dispatched.
++ * For the convergence property to hold (see comments on
++ * bfq_update_peak_rate()) and for the assessment to be
++ * reliable, a minimum number of samples must be present, and
++ * a minimum amount of time must have elapsed. If not so, do
++ * not compute new rate. Just reset parameters, to get ready
++ * for a new evaluation attempt.
+ */
+- bfqq->dispatched++;
+- bfq_remove_request(rq);
+- elv_dispatch_sort(q, rq);
++ if (bfqd->peak_rate_samples < BFQ_RATE_MIN_SAMPLES ||
++ bfqd->delta_from_first < BFQ_RATE_MIN_INTERVAL) {
++ bfq_log(bfqd,
++ "update_rate_reset: only resetting, delta_first %lluus samples %d",
++ bfqd->delta_from_first>>10, bfqd->peak_rate_samples);
++ goto reset_computation;
++ }
+
+- if (bfq_bfqq_sync(bfqq))
+- bfqd->sync_flight++;
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+- bfqg_stats_update_dispatch(bfqq_group(bfqq), blk_rq_bytes(rq),
+- rq->cmd_flags);
+-#endif
++ /*
++ * If a new request completion has occurred after last
++ * dispatch, then, to approximate the rate at which requests
++ * have been served by the device, it is more precise to
++ * extend the observation interval to the last completion.
++ */
++ bfqd->delta_from_first =
++ max_t(u64, bfqd->delta_from_first,
++ bfqd->last_completion - bfqd->first_dispatch);
++
++ BUG_ON(bfqd->delta_from_first == 0);
++ /*
++ * Rate computed in sects/usec, and not sects/nsec, for
++ * precision issues.
++ */
++ rate = div64_ul(bfqd->tot_sectors_dispatched<<BFQ_RATE_SHIFT,
++ div_u64(bfqd->delta_from_first, NSEC_PER_USEC));
++
++ bfq_log(bfqd,
++"update_rate_reset: tot_sects %llu delta_first %lluus rate %llu sects/s (%d)",
++ bfqd->tot_sectors_dispatched, bfqd->delta_from_first>>10,
++ ((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT),
++ rate > 20<<BFQ_RATE_SHIFT);
++
++ /*
++ * Peak rate not updated if:
++ * - the percentage of sequential dispatches is below 3/4 of the
++ * total, and rate is below the current estimated peak rate
++ * - rate is unreasonably high (> 20M sectors/sec)
++ */
++ if ((bfqd->sequential_samples < (3 * bfqd->peak_rate_samples)>>2 &&
++ rate <= bfqd->peak_rate) ||
++ rate > 20<<BFQ_RATE_SHIFT) {
++ bfq_log(bfqd,
++ "update_rate_reset: goto reset, samples %u/%u rate/peak %llu/%llu",
++ bfqd->peak_rate_samples, bfqd->sequential_samples,
++ ((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT),
++ ((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT));
++ goto reset_computation;
++ } else {
++ bfq_log(bfqd,
++ "update_rate_reset: do update, samples %u/%u rate/peak %llu/%llu",
++ bfqd->peak_rate_samples, bfqd->sequential_samples,
++ ((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT),
++ ((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT));
++ }
++
++ /*
++ * We have to update the peak rate, at last! To this purpose,
++ * we use a low-pass filter. We compute the smoothing constant
++ * of the filter as a function of the 'weight' of the new
++ * measured rate.
++ *
++ * As can be seen in next formulas, we define this weight as a
++ * quantity proportional to how sequential the workload is,
++ * and to how long the observation time interval is.
++ *
++ * The weight runs from 0 to 8. The maximum value of the
++ * weight, 8, yields the minimum value for the smoothing
++ * constant. At this minimum value for the smoothing constant,
++ * the measured rate contributes for half of the next value of
++ * the estimated peak rate.
++ *
++ * So, the first step is to compute the weight as a function
++ * of how sequential the workload is. Note that the weight
++ * cannot reach 9, because bfqd->sequential_samples cannot
++ * become equal to bfqd->peak_rate_samples, which, in its
++ * turn, holds true because bfqd->sequential_samples is not
++ * incremented for the first sample.
++ */
++ weight = (9 * bfqd->sequential_samples) / bfqd->peak_rate_samples;
++
++ /*
++ * Second step: further refine the weight as a function of the
++ * duration of the observation interval.
++ */
++ weight = min_t(u32, 8,
++ div_u64(weight * bfqd->delta_from_first,
++ BFQ_RATE_REF_INTERVAL));
++
++ /*
++ * Divisor ranging from 10, for minimum weight, to 2, for
++ * maximum weight.
++ */
++ divisor = 10 - weight;
++ BUG_ON(divisor == 0);
++
++ /*
++ * Finally, update peak rate:
++ *
++ * peak_rate = peak_rate * (divisor-1) / divisor + rate / divisor
++ */
++ bfqd->peak_rate *= divisor-1;
++ bfqd->peak_rate /= divisor;
++ rate /= divisor; /* smoothing constant alpha = 1/divisor */
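++	/*
++	 * For example, at maximum weight (divisor = 2) the new sample
++	 * contributes half of the updated peak rate, while at minimum
++	 * weight (divisor = 10) it contributes only one tenth.
++	 */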
++
++ bfq_log(bfqd,
++ "update_rate_reset: divisor %d tmp_peak_rate %llu tmp_rate %u",
++ divisor,
++ ((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT),
++ (u32)((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT));
++
++ BUG_ON(bfqd->peak_rate == 0);
++ BUG_ON(bfqd->peak_rate > 20<<BFQ_RATE_SHIFT);
++
++ bfqd->peak_rate += rate;
++ update_thr_responsiveness_params(bfqd);
++ BUG_ON(bfqd->peak_rate > 20<<BFQ_RATE_SHIFT);
++
++reset_computation:
++ bfq_reset_rate_computation(bfqd, rq);
+ }
+
+ /*
+- * Return expired entry, or NULL to just start from scratch in rbtree.
++ * Update the read/write peak rate (the main quantity used for
++ * auto-tuning, see update_thr_responsiveness_params()).
++ *
++ * It is not trivial to estimate the peak rate (correctly): because of
++ * the presence of sw and hw queues between the scheduler and the
++ * device components that finally serve I/O requests, it is hard to
++ * say exactly when a given dispatched request is served inside the
++ * device, and for how long. As a consequence, it is hard to know
++ * precisely at what rate a given set of requests is actually served
++ * by the device.
++ *
++ * On the opposite end, the dispatch time of any request is trivially
++ * available, and, from this piece of information, the "dispatch rate"
++ * of requests can be immediately computed. So, the idea in the next
++ * function is to use what is known, namely request dispatch times
++ * (plus, when useful, request completion times), to estimate what is
++ * unknown, namely in-device request service rate.
++ *
++ * The main issue is that, because of the above facts, the rate at
++ * which a certain set of requests is dispatched over a certain time
++ * interval can vary greatly with respect to the rate at which the
++ * same requests are then served. But, since the size of any
++ * intermediate queue is limited, and the service scheme is lossless
++ * (no request is silently dropped), the following obvious convergence
++ * property holds: the number of requests dispatched MUST become
++ * closer and closer to the number of requests completed as the
++ * observation interval grows. This is the key property used in
++ * the next function to estimate the peak service rate as a function
++ * of the observed dispatch rate. The function assumes to be invoked
++ * on every request dispatch.
+ */
+-static struct request *bfq_check_fifo(struct bfq_queue *bfqq)
++static void bfq_update_peak_rate(struct bfq_data *bfqd, struct request *rq)
+ {
+- struct request *rq = NULL;
++ u64 now_ns = ktime_get_ns();
++
++ if (bfqd->peak_rate_samples == 0) { /* first dispatch */
++ bfq_log(bfqd,
++ "update_peak_rate: goto reset, samples %d",
++			bfqd->peak_rate_samples);
++ bfq_reset_rate_computation(bfqd, rq);
++ goto update_last_values; /* will add one sample */
++ }
+
+- if (bfq_bfqq_fifo_expire(bfqq))
+- return NULL;
++ /*
++ * Device idle for very long: the observation interval lasting
++ * up to this dispatch cannot be a valid observation interval
++ * for computing a new peak rate (similarly to the late-
++ * completion event in bfq_completed_request()). Go to
++ * update_rate_and_reset to have the following three steps
++ * taken:
++ * - close the observation interval at the last (previous)
++ * request dispatch or completion
++ * - compute rate, if possible, for that observation interval
++ * - start a new observation interval with this dispatch
++ */
++ if (now_ns - bfqd->last_dispatch > 100*NSEC_PER_MSEC &&
++ bfqd->rq_in_driver == 0) {
++ bfq_log(bfqd,
++"update_peak_rate: jumping to updating&resetting delta_last %lluus samples %d",
++ (now_ns - bfqd->last_dispatch)>>10,
++			bfqd->peak_rate_samples);
++ goto update_rate_and_reset;
++ }
+
+- bfq_mark_bfqq_fifo_expire(bfqq);
++ /* Update sampling information */
++ bfqd->peak_rate_samples++;
+
+- if (list_empty(&bfqq->fifo))
+- return NULL;
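++	/*
++	 * Count the sample as sequential only if the device is not
++	 * believed to be idle (a request is in flight, or one completed
++	 * very recently) and the new request is close to the position
++	 * reached by the last dispatched request.
++	 */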
++ if ((bfqd->rq_in_driver > 0 ||
++ now_ns - bfqd->last_completion < BFQ_MIN_TT)
++ && get_sdist(bfqd->last_position, rq) < BFQQ_SEEK_THR)
++ bfqd->sequential_samples++;
+
+- rq = rq_entry_fifo(bfqq->fifo.next);
++ bfqd->tot_sectors_dispatched += blk_rq_sectors(rq);
+
+- if (time_before(jiffies, rq->fifo_time))
+- return NULL;
++ /* Reset max observed rq size every 32 dispatches */
++ if (likely(bfqd->peak_rate_samples % 32))
++ bfqd->last_rq_max_size =
++ max_t(u32, blk_rq_sectors(rq), bfqd->last_rq_max_size);
++ else
++ bfqd->last_rq_max_size = blk_rq_sectors(rq);
+
+- return rq;
++ bfqd->delta_from_first = now_ns - bfqd->first_dispatch;
++
++ bfq_log(bfqd,
++ "update_peak_rate: added samples %u/%u tot_sects %llu delta_first %lluus",
++ bfqd->peak_rate_samples, bfqd->sequential_samples,
++ bfqd->tot_sectors_dispatched,
++ bfqd->delta_from_first>>10);
++
++ /* Target observation interval not yet reached, go on sampling */
++ if (bfqd->delta_from_first < BFQ_RATE_REF_INTERVAL)
++ goto update_last_values;
++
++update_rate_and_reset:
++ bfq_update_rate_reset(bfqd, rq);
++update_last_values:
++ bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
++ bfqd->last_dispatch = now_ns;
++
++ bfq_log(bfqd,
++ "update_peak_rate: delta_first %lluus last_pos %llu peak_rate %llu",
++ (now_ns - bfqd->first_dispatch)>>10,
++ (unsigned long long) bfqd->last_position,
++ ((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT));
++ bfq_log(bfqd,
++ "update_peak_rate: samples at end %d", bfqd->peak_rate_samples);
+ }
+
+-static int bfq_bfqq_budget_left(struct bfq_queue *bfqq)
++/*
++ * Move request from internal lists to the dispatch list of the request queue
++ */
++static void bfq_dispatch_insert(struct request_queue *q, struct request *rq)
+ {
+- struct bfq_entity *entity = &bfqq->entity;
++ struct bfq_queue *bfqq = RQ_BFQQ(rq);
+
+- return entity->budget - entity->service;
++ /*
++ * For consistency, the next instruction should have been executed
++ * after removing the request from the queue and dispatching it.
++ * We execute instead this instruction before bfq_remove_request()
++ * (and hence introduce a temporary inconsistency), for efficiency.
++ * In fact, in a forced_dispatch, this prevents two counters related
++ * to bfqq->dispatched to risk to be uselessly decremented if bfqq
++ * is not in service, and then to be incremented again after
++ * incrementing bfqq->dispatched.
++ */
++ bfqq->dispatched++;
++ bfq_update_peak_rate(q->elevator->elevator_data, rq);
++
++ bfq_remove_request(rq);
++ elv_dispatch_sort(q, rq);
+ }
+
+ static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+ BUG_ON(bfqq != bfqd->in_service_queue);
+
+- __bfq_bfqd_reset_in_service(bfqd);
+-
+ /*
+ * If this bfqq is shared between multiple processes, check
+ * to make sure that those processes are still issuing I/Os
+@@ -1851,20 +2651,30 @@ static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ bfq_mark_bfqq_split_coop(bfqq);
+
+ if (RB_EMPTY_ROOT(&bfqq->sort_list)) {
+- /*
+- * Overloading budget_timeout field to store the time
+- * at which the queue remains with no backlog; used by
+- * the weight-raising mechanism.
+- */
+- bfqq->budget_timeout = jiffies;
+- bfq_del_bfqq_busy(bfqd, bfqq, 1);
++ if (bfqq->dispatched == 0)
++ /*
++ * Overloading budget_timeout field to store
++ * the time at which the queue remains with no
++ * backlog and no outstanding request; used by
++ * the weight-raising mechanism.
++ */
++ bfqq->budget_timeout = jiffies;
++
++ bfq_del_bfqq_busy(bfqd, bfqq, true);
+ } else {
+- bfq_activate_bfqq(bfqd, bfqq);
++ bfq_requeue_bfqq(bfqd, bfqq);
+ /*
+ * Resort priority tree of potential close cooperators.
+ */
+ bfq_pos_tree_add_move(bfqd, bfqq);
+ }
++
++ /*
++ * All in-service entities must have been properly deactivated
++ * or requeued before executing the next function, which
++	 * resets all in-service entities as no more in service.
++ */
++ __bfq_bfqd_reset_in_service(bfqd);
+ }
+
+ /**
+@@ -1883,10 +2693,19 @@ static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
+ struct request *next_rq;
+ int budget, min_budget;
+
+- budget = bfqq->max_budget;
++ BUG_ON(bfqq != bfqd->in_service_queue);
++
+ min_budget = bfq_min_budget(bfqd);
+
+- BUG_ON(bfqq != bfqd->in_service_queue);
++ if (bfqq->wr_coeff == 1)
++ budget = bfqq->max_budget;
++ else /*
++ * Use a constant, low budget for weight-raised queues,
++ * to help achieve a low latency. Keep it slightly higher
++ * than the minimum possible budget, to cause a little
++ * bit fewer expirations.
++ */
++ budget = 2 * min_budget;
+
+ bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last budg %d, budg left %d",
+ bfqq->entity.budget, bfq_bfqq_budget_left(bfqq));
+@@ -1895,7 +2714,7 @@ static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
+ bfq_log_bfqq(bfqd, bfqq, "recalc_budg: sync %d, seeky %d",
+ bfq_bfqq_sync(bfqq), BFQQ_SEEKY(bfqd->in_service_queue));
+
+- if (bfq_bfqq_sync(bfqq)) {
++ if (bfq_bfqq_sync(bfqq) && bfqq->wr_coeff == 1) {
+ switch (reason) {
+ /*
+ * Caveat: in all the following cases we trade latency
+@@ -1937,14 +2756,10 @@ static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
+ break;
+ case BFQ_BFQQ_BUDGET_TIMEOUT:
+ /*
+- * We double the budget here because: 1) it
+- * gives the chance to boost the throughput if
+- * this is not a seeky process (which may have
+- * bumped into this timeout because of, e.g.,
+- * ZBR), 2) together with charge_full_budget
+- * it helps give seeky processes higher
+- * timestamps, and hence be served less
+- * frequently.
++ * We double the budget here because it gives
++ * the chance to boost the throughput if this
++ * is not a seeky process (and has bumped into
++ * this timeout because of, e.g., ZBR).
+ */
+ budget = min(budget * 2, bfqd->bfq_max_budget);
+ break;
+@@ -1961,17 +2776,49 @@ static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
+ budget = min(budget * 4, bfqd->bfq_max_budget);
+ break;
+ case BFQ_BFQQ_NO_MORE_REQUESTS:
+- /*
+- * Leave the budget unchanged.
+- */
++ /*
++ * For queues that expire for this reason, it
++ * is particularly important to keep the
++ * budget close to the actual service they
++ * need. Doing so reduces the timestamp
++ * misalignment problem described in the
++ * comments in the body of
++ * __bfq_activate_entity. In fact, suppose
++ * that a queue systematically expires for
++ * BFQ_BFQQ_NO_MORE_REQUESTS and presents a
++ * new request in time to enjoy timestamp
++ * back-shifting. The larger the budget of the
++ * queue is with respect to the service the
++ * queue actually requests in each service
++ * slot, the more times the queue can be
++ * reactivated with the same virtual finish
++ * time. It follows that, even if this finish
++ * time is pushed to the system virtual time
++ * to reduce the consequent timestamp
++ * misalignment, the queue unjustly enjoys for
++ * many re-activations a lower finish time
++ * than all newly activated queues.
++ *
++ * The service needed by bfqq is measured
++ * quite precisely by bfqq->entity.service.
++ * Since bfqq does not enjoy device idling,
++ * bfqq->entity.service is equal to the number
++ * of sectors that the process associated with
++ * bfqq requested to read/write before waiting
++ * for request completions, or blocking for
++ * other reasons.
++ */
++ budget = max_t(int, bfqq->entity.service, min_budget);
++ break;
+ default:
+ return;
+ }
+- } else
++ } else if (!bfq_bfqq_sync(bfqq))
+ /*
+- * Async queues get always the maximum possible budget
+- * (their ability to dispatch is limited by
+- * @bfqd->bfq_max_budget_async_rq).
++ * Async queues get always the maximum possible
++ * budget, as for them we do not care about latency
++ * (in addition, their ability to dispatch is limited
++ * by the charging factor).
+ */
+ budget = bfqd->bfq_max_budget;
+
+@@ -1982,160 +2829,120 @@ static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd,
+ bfqq->max_budget = min(bfqq->max_budget, bfqd->bfq_max_budget);
+
+ /*
+- * Make sure that we have enough budget for the next request.
+- * Since the finish time of the bfqq must be kept in sync with
+- * the budget, be sure to call __bfq_bfqq_expire() after the
++ * If there is still backlog, then assign a new budget, making
++ * sure that it is large enough for the next request. Since
++ * the finish time of bfqq must be kept in sync with the
++ * budget, be sure to call __bfq_bfqq_expire() *after* this
+ * update.
++ *
++ * If there is no backlog, then no need to update the budget;
++ * it will be updated on the arrival of a new request.
+ */
+ next_rq = bfqq->next_rq;
+- if (next_rq)
++ if (next_rq) {
++ BUG_ON(reason == BFQ_BFQQ_TOO_IDLE ||
++ reason == BFQ_BFQQ_NO_MORE_REQUESTS);
+ bfqq->entity.budget = max_t(unsigned long, bfqq->max_budget,
+ bfq_serv_to_charge(next_rq, bfqq));
+- else
+- bfqq->entity.budget = bfqq->max_budget;
++ BUG_ON(!bfq_bfqq_busy(bfqq));
++ BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list));
++ }
+
+ bfq_log_bfqq(bfqd, bfqq, "head sect: %u, new budget %d",
+ next_rq ? blk_rq_sectors(next_rq) : 0,
+ bfqq->entity.budget);
+ }
+
+-static unsigned long bfq_calc_max_budget(u64 peak_rate, u64 timeout)
+-{
+- unsigned long max_budget;
+-
+- /*
+- * The max_budget calculated when autotuning is equal to the
+- * amount of sectors transfered in timeout_sync at the
+- * estimated peak rate.
+- */
+- max_budget = (unsigned long)(peak_rate * 1000 *
+- timeout >> BFQ_RATE_SHIFT);
+-
+- return max_budget;
+-}
+-
+ /*
+- * In addition to updating the peak rate, checks whether the process
+- * is "slow", and returns 1 if so. This slow flag is used, in addition
+- * to the budget timeout, to reduce the amount of service provided to
+- * seeky processes, and hence reduce their chances to lower the
+- * throughput. See the code for more details.
++ * Return true if the process associated with bfqq is "slow". The slow
++ * flag is used, in addition to the budget timeout, to reduce the
++ * amount of service provided to seeky processes, and thus reduce
++ * their chances to lower the throughput. More details in the comments
++ * on the function bfq_bfqq_expire().
++ *
++ * An important observation is in order: as discussed in the comments
++ * on the function bfq_update_peak_rate(), with devices with internal
++ * queues, it is hard if ever possible to know when and for how long
++ * an I/O request is processed by the device (apart from the trivial
++ * I/O pattern where a new request is dispatched only after the
++ * previous one has been completed). This makes it hard to evaluate
++ * the real rate at which the I/O requests of each bfq_queue are
++ * served. In fact, for an I/O scheduler like BFQ, serving a
++ * bfq_queue means just dispatching its requests during its service
++ * slot (i.e., until the budget of the queue is exhausted, or the
++ * queue remains idle, or, finally, a timeout fires). But, during the
++ * service slot of a bfq_queue, around 100 ms at most, the device may
++ * be even still processing requests of bfq_queues served in previous
++ * service slots. On the opposite end, the requests of the in-service
++ * bfq_queue may be completed after the service slot of the queue
++ * finishes.
++ *
++ * Anyway, unless more sophisticated solutions are used
++ * (where possible), the sum of the sizes of the requests dispatched
++ * during the service slot of a bfq_queue is probably the only
++ * approximation available for the service received by the bfq_queue
++ * during its service slot. And this sum is the quantity used in this
++ * function to evaluate the I/O speed of a process.
+ */
+-static bool bfq_update_peak_rate(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+- bool compensate, enum bfqq_expiration reason)
++static bool bfq_bfqq_is_slow(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ bool compensate, enum bfqq_expiration reason,
++ unsigned long *delta_ms)
+ {
+- u64 bw, usecs, expected, timeout;
+- ktime_t delta;
+- int update = 0;
++ ktime_t delta_ktime;
++ u32 delta_usecs;
++	bool slow = BFQQ_SEEKY(bfqq); /* if delta too short, use seekiness */
+
+- if (!bfq_bfqq_sync(bfqq) || bfq_bfqq_budget_new(bfqq))
++ if (!bfq_bfqq_sync(bfqq))
+ return false;
+
+ if (compensate)
+- delta = bfqd->last_idling_start;
++ delta_ktime = bfqd->last_idling_start;
+ else
+- delta = ktime_get();
+- delta = ktime_sub(delta, bfqd->last_budget_start);
+- usecs = ktime_to_us(delta);
+-
+- /* Don't trust short/unrealistic values. */
+- if (usecs < 100 || usecs >= LONG_MAX)
+- return false;
+-
+- /*
+- * Calculate the bandwidth for the last slice. We use a 64 bit
+- * value to store the peak rate, in sectors per usec in fixed
+- * point math. We do so to have enough precision in the estimate
+- * and to avoid overflows.
+- */
+- bw = (u64)bfqq->entity.service << BFQ_RATE_SHIFT;
+- do_div(bw, (unsigned long)usecs);
++ delta_ktime = ktime_get();
++ delta_ktime = ktime_sub(delta_ktime, bfqd->last_budget_start);
++ delta_usecs = ktime_to_us(delta_ktime);
++
++ /* don't trust short/unrealistic values. */
++ if (delta_usecs < 1000 || delta_usecs >= LONG_MAX) {
++ if (blk_queue_nonrot(bfqd->queue))
++ /*
++ * give same worst-case guarantees as idling
++ * for seeky
++ */
++ *delta_ms = BFQ_MIN_TT / NSEC_PER_MSEC;
++ else /* charge at least one seek */
++ *delta_ms = bfq_slice_idle / NSEC_PER_MSEC;
++
++ bfq_log(bfqd, "bfq_bfqq_is_slow: unrealistic %u", delta_usecs);
++
++ return slow;
++ }
+
+- timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
++ *delta_ms = delta_usecs / USEC_PER_MSEC;
+
+ /*
+- * Use only long (> 20ms) intervals to filter out spikes for
+- * the peak rate estimation.
++ * Use only long (> 20ms) intervals to filter out excessive
++ * spikes in service rate estimation.
+ */
+- if (usecs > 20000) {
+- if (bw > bfqd->peak_rate ||
+- (!BFQQ_SEEKY(bfqq) &&
+- reason == BFQ_BFQQ_BUDGET_TIMEOUT)) {
+- bfq_log(bfqd, "measured bw =%llu", bw);
+- /*
+- * To smooth oscillations use a low-pass filter with
+- * alpha=7/8, i.e.,
+- * new_rate = (7/8) * old_rate + (1/8) * bw
+- */
+- do_div(bw, 8);
+- if (bw == 0)
+- return 0;
+- bfqd->peak_rate *= 7;
+- do_div(bfqd->peak_rate, 8);
+- bfqd->peak_rate += bw;
+- update = 1;
+- bfq_log(bfqd, "new peak_rate=%llu", bfqd->peak_rate);
+- }
+-
+- update |= bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES - 1;
+-
+- if (bfqd->peak_rate_samples < BFQ_PEAK_RATE_SAMPLES)
+- bfqd->peak_rate_samples++;
+-
+- if (bfqd->peak_rate_samples == BFQ_PEAK_RATE_SAMPLES &&
+- update) {
+- int dev_type = blk_queue_nonrot(bfqd->queue);
+-
+- if (bfqd->bfq_user_max_budget == 0) {
+- bfqd->bfq_max_budget =
+- bfq_calc_max_budget(bfqd->peak_rate,
+- timeout);
+- bfq_log(bfqd, "new max_budget=%d",
+- bfqd->bfq_max_budget);
+- }
+- if (bfqd->device_speed == BFQ_BFQD_FAST &&
+- bfqd->peak_rate < device_speed_thresh[dev_type]) {
+- bfqd->device_speed = BFQ_BFQD_SLOW;
+- bfqd->RT_prod = R_slow[dev_type] *
+- T_slow[dev_type];
+- } else if (bfqd->device_speed == BFQ_BFQD_SLOW &&
+- bfqd->peak_rate > device_speed_thresh[dev_type]) {
+- bfqd->device_speed = BFQ_BFQD_FAST;
+- bfqd->RT_prod = R_fast[dev_type] *
+- T_fast[dev_type];
+- }
+- }
++ if (delta_usecs > 20000) {
++ /*
++ * Caveat for rotational devices: processes doing I/O
++ * in the slower disk zones tend to be slow(er) even
++ * if not seeky. In this respect, the estimated peak
++ * rate is likely to be an average over the disk
++ * surface. Accordingly, to not be too harsh with
++ * unlucky processes, a process is deemed slow only if
++ * its rate has been lower than half of the estimated
++ * peak rate.
++ */
++ slow = bfqq->entity.service < bfqd->bfq_max_budget / 2;
++ bfq_log(bfqd, "bfq_bfqq_is_slow: relative rate %d/%d",
++ bfqq->entity.service, bfqd->bfq_max_budget);
+ }
+
+- /*
+- * If the process has been served for a too short time
+- * interval to let its possible sequential accesses prevail on
+- * the initial seek time needed to move the disk head on the
+- * first sector it requested, then give the process a chance
+- * and for the moment return false.
+- */
+- if (bfqq->entity.budget <= bfq_max_budget(bfqd) / 8)
+- return false;
+-
+- /*
+- * A process is considered ``slow'' (i.e., seeky, so that we
+- * cannot treat it fairly in the service domain, as it would
+- * slow down too much the other processes) if, when a slice
+- * ends for whatever reason, it has received service at a
+- * rate that would not be high enough to complete the budget
+- * before the budget timeout expiration.
+- */
+- expected = bw * 1000 * timeout >> BFQ_RATE_SHIFT;
++ bfq_log_bfqq(bfqd, bfqq, "bfq_bfqq_is_slow: slow %d", slow);
+
+- /*
+- * Caveat: processes doing IO in the slower disk zones will
+- * tend to be slow(er) even if not seeky. And the estimated
+- * peak rate will actually be an average over the disk
+- * surface. Hence, to not be too harsh with unlucky processes,
+- * we keep a budget/3 margin of safety before declaring a
+- * process slow.
+- */
+- return expected > (4 * bfqq->entity.budget) / 3;
++ return slow;
+ }
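As an aside (not part of the patch): the slowness test added above reduces to "only judge intervals longer than 20 ms, and call the queue slow when the service it received stayed below half of the maximum budget derived from the estimated peak rate". A minimal user-space sketch of that decision, with illustrative names and constants (the short-interval branch, which also adjusts the charged time, is omitted here):

#include <stdbool.h>
#include <stdio.h>

#define TOY_MIN_SAMPLE_USECS	20000	/* only intervals > 20 ms are judged */

/*
 * Toy version of the decision above: a queue is deemed slow only if
 * the sampling interval is long enough and the service it received
 * stayed below half of the scheduler's maximum budget.
 */
static bool toy_bfqq_is_slow(unsigned int delta_usecs,
			     int service_received, int max_budget)
{
	if (delta_usecs <= TOY_MIN_SAMPLE_USECS)
		return false;	/* interval too short to judge */
	return service_received < max_budget / 2;
}

int main(void)
{
	/* 25 ms interval, 100 of 400 budget units consumed: slow. */
	printf("%d\n", toy_bfqq_is_slow(25000, 100, 400));
	/* Same service over a 10 ms interval: not judged. */
	printf("%d\n", toy_bfqq_is_slow(10000, 100, 400));
	return 0;
}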
+
+ /*
+@@ -2193,20 +3000,35 @@ static bool bfq_update_peak_rate(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ static unsigned long bfq_bfqq_softrt_next_start(struct bfq_data *bfqd,
+ struct bfq_queue *bfqq)
+ {
++ bfq_log_bfqq(bfqd, bfqq,
++"softrt_next_start: service_blkg %lu soft_rate %u sects/sec interval %u",
++ bfqq->service_from_backlogged,
++ bfqd->bfq_wr_max_softrt_rate,
++ jiffies_to_msecs(HZ * bfqq->service_from_backlogged /
++ bfqd->bfq_wr_max_softrt_rate));
++
+ return max(bfqq->last_idle_bklogged +
+ HZ * bfqq->service_from_backlogged /
+ bfqd->bfq_wr_max_softrt_rate,
+- jiffies + bfqq->bfqd->bfq_slice_idle + 4);
++ jiffies + nsecs_to_jiffies(bfqq->bfqd->bfq_slice_idle) + 4);
+ }
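As an aside (not part of the patch): the return value above can be checked with a few lines of arithmetic. A toy model with made-up numbers (HZ, rates and variable names here are stand-ins, not the patch's values):

#include <stdio.h>

/*
 * Toy model of the computation above: the earliest instant at which
 * the queue may again be considered soft real-time grows with the
 * service consumed while backlogged, divided by the configured
 * soft-rt rate, and is never earlier than "now" plus the idle slice
 * plus a 4-jiffy guard.
 */
int main(void)
{
	unsigned long hz = 250;				/* assumed tick rate */
	unsigned long last_idle_bklogged = 10000;	/* jiffies */
	unsigned long service_from_backlogged = 7000;	/* sectors */
	unsigned long soft_rt_rate = 7000;		/* sectors/sec */
	unsigned long now = 10100, slice_idle_j = 2;	/* jiffies */

	unsigned long candidate = last_idle_bklogged +
		hz * service_from_backlogged / soft_rt_rate;
	unsigned long floor = now + slice_idle_j + 4;
	unsigned long next = candidate > floor ? candidate : floor;

	/* 10000 + 250 * 7000 / 7000 = 10250 > 10106, so next = 10250. */
	printf("soft_rt_next_start = %lu jiffies\n", next);
	return 0;
}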
+
+ /*
+- * Return the largest-possible time instant such that, for as long as possible,
+- * the current time will be lower than this time instant according to the macro
+- * time_is_before_jiffies().
++ * Return the farthest future time instant according to jiffies
++ * macros.
+ */
+-static unsigned long bfq_infinity_from_now(unsigned long now)
++static unsigned long bfq_greatest_from_now(void)
+ {
+- return now + ULONG_MAX / 2;
++ return jiffies + MAX_JIFFY_OFFSET;
++}
++
++/*
++ * Return the farthest past time instant according to jiffies
++ * macros.
++ */
++static unsigned long bfq_smallest_from_now(void)
++{
++ return jiffies - MAX_JIFFY_OFFSET;
+ }
+
+ /**
+@@ -2216,28 +3038,24 @@ static unsigned long bfq_infinity_from_now(unsigned long now)
+ * @compensate: if true, compensate for the time spent idling.
+ * @reason: the reason causing the expiration.
+ *
++ * If the process associated with bfqq does slow I/O (e.g., because it
++ * issues random requests), we charge bfqq with the time it has been
++ * in service instead of the service it has received (see
++ * bfq_bfqq_charge_time for details on how this goal is achieved). As
++ * a consequence, bfqq will typically get higher timestamps upon
++ * reactivation, and hence it will be rescheduled as if it had
++ * received more service than what it has actually received. In the
++ * end, bfqq receives less service in proportion to how slowly its
++ * associated process consumes its budgets (and hence how seriously it
++ * tends to lower the throughput). In addition, this time-charging
++ * strategy guarantees time fairness among slow processes. In
++ * contrast, if the process associated with bfqq is not slow, we
++ * charge bfqq exactly with the service it has received.
+ *
+- * If the process associated to the queue is slow (i.e., seeky), or in
+- * case of budget timeout, or, finally, if it is async, we
+- * artificially charge it an entire budget (independently of the
+- * actual service it received). As a consequence, the queue will get
+- * higher timestamps than the correct ones upon reactivation, and
+- * hence it will be rescheduled as if it had received more service
+- * than what it actually received. In the end, this class of processes
+- * will receive less service in proportion to how slowly they consume
+- * their budgets (and hence how seriously they tend to lower the
+- * throughput).
+- *
+- * In contrast, when a queue expires because it has been idling for
+- * too much or because it exhausted its budget, we do not touch the
+- * amount of service it has received. Hence when the queue will be
+- * reactivated and its timestamps updated, the latter will be in sync
+- * with the actual service received by the queue until expiration.
+- *
+- * Charging a full budget to the first type of queues and the exact
+- * service to the others has the effect of using the WF2Q+ policy to
+- * schedule the former on a timeslice basis, without violating the
+- * service domain guarantees of the latter.
++ * Charging time to the first type of queues and the exact service to
++ * the other has the effect of using the WF2Q+ policy to schedule the
++ * former on a timeslice basis, without violating service domain
++ * guarantees among the latter.
+ */
+ static void bfq_bfqq_expire(struct bfq_data *bfqd,
+ struct bfq_queue *bfqq,
+@@ -2245,41 +3063,52 @@ static void bfq_bfqq_expire(struct bfq_data *bfqd,
+ enum bfqq_expiration reason)
+ {
+ bool slow;
++ unsigned long delta = 0;
++ struct bfq_entity *entity = &bfqq->entity;
+
+ BUG_ON(bfqq != bfqd->in_service_queue);
+
+ /*
+- * Update disk peak rate for autotuning and check whether the
+- * process is slow (see bfq_update_peak_rate).
++ * Check whether the process is slow (see bfq_bfqq_is_slow).
++ */
++ slow = bfq_bfqq_is_slow(bfqd, bfqq, compensate, reason, &delta);
++
++ /*
++ * Increase service_from_backlogged before next statement,
++ * because the possible next invocation of
++ * bfq_bfqq_charge_time would likely inflate
++ * entity->service. In contrast, service_from_backlogged must
++ * contain real service, to enable the soft real-time
++ * heuristic to correctly compute the bandwidth consumed by
++ * bfqq.
+ */
+- slow = bfq_update_peak_rate(bfqd, bfqq, compensate, reason);
++ bfqq->service_from_backlogged += entity->service;
+
+ /*
+- * As above explained, 'punish' slow (i.e., seeky), timed-out
+- * and async queues, to favor sequential sync workloads.
++ * As above explained, charge slow (typically seeky) and
++ * timed-out queues with the time and not the service
++ * received, to favor sequential workloads.
+ *
+- * Processes doing I/O in the slower disk zones will tend to be
+- * slow(er) even if not seeky. Hence, since the estimated peak
+- * rate is actually an average over the disk surface, these
+- * processes may timeout just for bad luck. To avoid punishing
+- * them we do not charge a full budget to a process that
+- * succeeded in consuming at least 2/3 of its budget.
++ * Processes doing I/O in the slower disk zones will tend to
++ * be slow(er) even if not seeky. Therefore, since the
++ * estimated peak rate is actually an average over the disk
++ * surface, these processes may timeout just for bad luck. To
++ * avoid punishing them, do not charge time to processes that
++ * succeeded in consuming at least 2/3 of their budget. This
++ * allows BFQ to preserve enough elasticity to still perform
++ * bandwidth, and not time, distribution with little unlucky
++ * or quasi-sequential processes.
+ */
+- if (slow || (reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
+- bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3))
+- bfq_bfqq_charge_full_budget(bfqq);
+-
+- bfqq->service_from_backlogged += bfqq->entity.service;
++ if (bfqq->wr_coeff == 1 &&
++ (slow ||
++ (reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
++ bfq_bfqq_budget_left(bfqq) >= entity->budget / 3)))
++ bfq_bfqq_charge_time(bfqd, bfqq, delta);
+
+- if (BFQQ_SEEKY(bfqq) && reason == BFQ_BFQQ_BUDGET_TIMEOUT &&
+- !bfq_bfqq_constantly_seeky(bfqq)) {
+- bfq_mark_bfqq_constantly_seeky(bfqq);
+- if (!blk_queue_nonrot(bfqd->queue))
+- bfqd->const_seeky_busy_in_flight_queues++;
+- }
++ BUG_ON(bfqq->entity.budget < bfqq->entity.service);
+
+ if (reason == BFQ_BFQQ_TOO_IDLE &&
+- bfqq->entity.service <= 2 * bfqq->entity.budget / 10)
++ entity->service <= 2 * entity->budget / 10)
+ bfq_clear_bfqq_IO_bound(bfqq);
+
+ if (bfqd->low_latency && bfqq->wr_coeff == 1)
+@@ -2288,19 +3117,23 @@ static void bfq_bfqq_expire(struct bfq_data *bfqd,
+ if (bfqd->low_latency && bfqd->bfq_wr_max_softrt_rate > 0 &&
+ RB_EMPTY_ROOT(&bfqq->sort_list)) {
+ /*
+- * If we get here, and there are no outstanding requests,
+- * then the request pattern is isochronous (see the comments
+- * to the function bfq_bfqq_softrt_next_start()). Hence we
+- * can compute soft_rt_next_start. If, instead, the queue
+- * still has outstanding requests, then we have to wait
+- * for the completion of all the outstanding requests to
++ * If we get here, and there are no outstanding
++ * requests, then the request pattern is isochronous
++ * (see the comments on the function
++ * bfq_bfqq_softrt_next_start()). Thus we can compute
++ * soft_rt_next_start. If, instead, the queue still
++ * has outstanding requests, then we have to wait for
++ * the completion of all the outstanding requests to
+ * discover whether the request pattern is actually
+ * isochronous.
+ */
+- if (bfqq->dispatched == 0)
++ BUG_ON(bfqd->busy_queues < 1);
++ if (bfqq->dispatched == 0) {
+ bfqq->soft_rt_next_start =
+ bfq_bfqq_softrt_next_start(bfqd, bfqq);
+- else {
++ bfq_log_bfqq(bfqd, bfqq, "new soft_rt_next %lu",
++ bfqq->soft_rt_next_start);
++ } else {
+ /*
+ * The application is still waiting for the
+ * completion of one or more requests:
+@@ -2317,7 +3150,7 @@ static void bfq_bfqq_expire(struct bfq_data *bfqd,
+ * happened to be in the past.
+ */
+ bfqq->soft_rt_next_start =
+- bfq_infinity_from_now(jiffies);
++ bfq_greatest_from_now();
+ /*
+ * Schedule an update of soft_rt_next_start to when
+ * the task may be discovered to be isochronous.
+@@ -2327,15 +3160,27 @@ static void bfq_bfqq_expire(struct bfq_data *bfqd,
+ }
+
+ bfq_log_bfqq(bfqd, bfqq,
+- "expire (%d, slow %d, num_disp %d, idle_win %d)", reason,
+- slow, bfqq->dispatched, bfq_bfqq_idle_window(bfqq));
++ "expire (%d, slow %d, num_disp %d, idle_win %d, weight %d)",
++ reason, slow, bfqq->dispatched,
++ bfq_bfqq_idle_window(bfqq), entity->weight);
+
+ /*
+ * Increase, decrease or leave budget unchanged according to
+ * reason.
+ */
++ BUG_ON(bfqq->entity.budget < bfqq->entity.service);
+ __bfq_bfqq_recalc_budget(bfqd, bfqq, reason);
++ BUG_ON(bfqq->next_rq == NULL &&
++ bfqq->entity.budget < bfqq->entity.service);
+ __bfq_bfqq_expire(bfqd, bfqq);
++
++ BUG_ON(!bfq_bfqq_busy(bfqq) && reason == BFQ_BFQQ_BUDGET_EXHAUSTED &&
++ !bfq_class_idle(bfqq));
++
++ if (!bfq_bfqq_busy(bfqq) &&
++ reason != BFQ_BFQQ_BUDGET_TIMEOUT &&
++ reason != BFQ_BFQQ_BUDGET_EXHAUSTED)
++ bfq_mark_bfqq_non_blocking_wait_rq(bfqq);
+ }
+
+ /*
+@@ -2345,20 +3190,17 @@ static void bfq_bfqq_expire(struct bfq_data *bfqd,
+ */
+ static bool bfq_bfqq_budget_timeout(struct bfq_queue *bfqq)
+ {
+- if (bfq_bfqq_budget_new(bfqq) ||
+- time_before(jiffies, bfqq->budget_timeout))
+- return false;
+- return true;
++ return time_is_before_eq_jiffies(bfqq->budget_timeout);
+ }
+
+ /*
+- * If we expire a queue that is waiting for the arrival of a new
+- * request, we may prevent the fictitious timestamp back-shifting that
+- * allows the guarantees of the queue to be preserved (see [1] for
+- * this tricky aspect). Hence we return true only if this condition
+- * does not hold, or if the queue is slow enough to deserve only to be
+- * kicked off for preserving a high throughput.
+-*/
++ * If we expire a queue that is actively waiting (i.e., with the
++ * device idled) for the arrival of a new request, then we may incur
++ * the timestamp misalignment problem described in the body of the
++ * function __bfq_activate_entity. Hence we return true only if this
++ * condition does not hold, or if the queue is slow enough to deserve
++ * only to be kicked off for preserving a high throughput.
++ */
+ static bool bfq_may_expire_for_budg_timeout(struct bfq_queue *bfqq)
+ {
+ bfq_log_bfqq(bfqq->bfqd, bfqq,
+@@ -2400,10 +3242,12 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+ {
+ struct bfq_data *bfqd = bfqq->bfqd;
+ bool idling_boosts_thr, idling_boosts_thr_without_issues,
+- all_queues_seeky, on_hdd_and_not_all_queues_seeky,
+ idling_needed_for_service_guarantees,
+ asymmetric_scenario;
+
++ if (bfqd->strict_guarantees)
++ return true;
++
+ /*
+ * The next variable takes into account the cases where idling
+ * boosts the throughput.
+@@ -2466,74 +3310,27 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+ bfqd->wr_busy_queues == 0;
+
+ /*
+- * There are then two cases where idling must be performed not
++ * There is then a case where idling must be performed not
+ * for throughput concerns, but to preserve service
+- * guarantees. In the description of these cases, we say, for
+- * short, that a queue is sequential/random if the process
+- * associated to the queue issues sequential/random requests
+- * (in the second case the queue may be tagged as seeky or
+- * even constantly_seeky).
+- *
+- * To introduce the first case, we note that, since
+- * bfq_bfqq_idle_window(bfqq) is false if the device is
+- * NCQ-capable and bfqq is random (see
+- * bfq_update_idle_window()), then, from the above two
+- * assignments it follows that
+- * idling_boosts_thr_without_issues is false if the device is
+- * NCQ-capable and bfqq is random. Therefore, for this case,
+- * device idling would never be allowed if we used just
+- * idling_boosts_thr_without_issues to decide whether to allow
+- * it. And, beneficially, this would imply that throughput
+- * would always be boosted also with random I/O on NCQ-capable
+- * HDDs.
++ * guarantees.
+ *
+- * But we must be careful on this point, to avoid an unfair
+- * treatment for bfqq. In fact, because of the same above
+- * assignments, idling_boosts_thr_without_issues is, on the
+- * other hand, true if 1) the device is an HDD and bfqq is
+- * sequential, and 2) there are no busy weight-raised
+- * queues. As a consequence, if we used just
+- * idling_boosts_thr_without_issues to decide whether to idle
+- * the device, then with an HDD we might easily bump into a
+- * scenario where queues that are sequential and I/O-bound
+- * would enjoy idling, whereas random queues would not. The
+- * latter might then get a low share of the device throughput,
+- * simply because the former would get many requests served
+- * after being set as in service, while the latter would not.
+- *
+- * To address this issue, we start by setting to true a
+- * sentinel variable, on_hdd_and_not_all_queues_seeky, if the
+- * device is rotational and not all queues with pending or
+- * in-flight requests are constantly seeky (i.e., there are
+- * active sequential queues, and bfqq might then be mistreated
+- * if it does not enjoy idling because it is random).
+- */
+- all_queues_seeky = bfq_bfqq_constantly_seeky(bfqq) &&
+- bfqd->busy_in_flight_queues ==
+- bfqd->const_seeky_busy_in_flight_queues;
+-
+- on_hdd_and_not_all_queues_seeky =
+- !blk_queue_nonrot(bfqd->queue) && !all_queues_seeky;
+-
+- /*
+- * To introduce the second case where idling needs to be
+- * performed to preserve service guarantees, we can note that
+- * allowing the drive to enqueue more than one request at a
+- * time, and hence delegating de facto final scheduling
+- * decisions to the drive's internal scheduler, causes loss of
+- * control on the actual request service order. In particular,
+- * the critical situation is when requests from different
+- * processes happens to be present, at the same time, in the
+- * internal queue(s) of the drive. In such a situation, the
+- * drive, by deciding the service order of the
+- * internally-queued requests, does determine also the actual
+- * throughput distribution among these processes. But the
+- * drive typically has no notion or concern about per-process
+- * throughput distribution, and makes its decisions only on a
+- * per-request basis. Therefore, the service distribution
+- * enforced by the drive's internal scheduler is likely to
+- * coincide with the desired device-throughput distribution
+- * only in a completely symmetric scenario where:
++ * To introduce this case, we can note that allowing the drive
++ * to enqueue more than one request at a time, and hence
++ * delegating de facto final scheduling decisions to the
++ * drive's internal scheduler, entails loss of control on the
++ * actual request service order. In particular, the critical
++ * situation is when requests from different processes happen
++ * to be present, at the same time, in the internal queue(s)
++ * of the drive. In such a situation, the drive, by deciding
++ * the service order of the internally-queued requests, does
++ * determine also the actual throughput distribution among
++ * these processes. But the drive typically has no notion or
++ * concern about per-process throughput distribution, and
++ * makes its decisions only on a per-request basis. Therefore,
++ * the service distribution enforced by the drive's internal
++ * scheduler is likely to coincide with the desired
++ * device-throughput distribution only in a completely
++ * symmetric scenario where:
+ * (i) each of these processes must get the same throughput as
+ * the others;
+ * (ii) all these processes have the same I/O pattern
+@@ -2555,26 +3352,53 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+ * words, only if sub-condition (i) holds, then idling is
+ * allowed, and the device tends to be prevented from queueing
+ * many requests, possibly of several processes. The reason
+- * for not controlling also sub-condition (ii) is that, first,
+- * in the case of an HDD, the asymmetry in terms of types of
+- * I/O patterns is already taken in to account in the above
+- * sentinel variable
+- * on_hdd_and_not_all_queues_seeky. Secondly, in the case of a
+- * flash-based device, we prefer however to privilege
+- * throughput (and idling lowers throughput for this type of
+- * devices), for the following reasons:
+- * 1) differently from HDDs, the service time of random
+- * requests is not orders of magnitudes lower than the service
+- * time of sequential requests; thus, even if processes doing
+- * sequential I/O get a preferential treatment with respect to
+- * others doing random I/O, the consequences are not as
+- * dramatic as with HDDs;
+- * 2) if a process doing random I/O does need strong
+- * throughput guarantees, it is hopefully already being
+- * weight-raised, or the user is likely to have assigned it a
+- * higher weight than the other processes (and thus
+- * sub-condition (i) is likely to be false, which triggers
+- * idling).
++ * for not controlling also sub-condition (ii) is that we
++ * exploit preemption to preserve guarantees in case of
++ * symmetric scenarios, even if (ii) does not hold, as
++ * explained in the next two paragraphs.
++ *
++ * Even if a queue, say Q, is expired when it remains idle, Q
++ * can still preempt the new in-service queue if the next
++ * request of Q arrives soon (see the comments on
++ * bfq_bfqq_update_budg_for_activation). If all queues and
++ * groups have the same weight, this form of preemption,
++ * combined with the hole-recovery heuristic described in the
++ * comments on function bfq_bfqq_update_budg_for_activation,
++ * are enough to preserve a correct bandwidth distribution in
++ * the mid term, even without idling. In fact, even if not
++	 * is enough to preserve a correct bandwidth distribution in
++ * many requests, and thus to reorder requests, we can rather
++ * safely assume that the internal scheduler still preserves a
++ * minimum of mid-term fairness. The motivation for using
++ * preemption instead of idling is that, by not idling,
++	 * service guarantees are preserved without even minimally
++ * sacrificing throughput. In other words, both a high
++ * throughput and its desired distribution are obtained.
++ *
++ * More precisely, this preemption-based, idleless approach
++ * provides fairness in terms of IOPS, and not sectors per
++ * second. This can be seen with a simple example. Suppose
++ * that there are two queues with the same weight, but that
++ * the first queue receives requests of 8 sectors, while the
++ * second queue receives requests of 1024 sectors. In
++ * addition, suppose that each of the two queues contains at
++ * most one request at a time, which implies that each queue
++ * always remains idle after it is served. Finally, after
++ * remaining idle, each queue receives very quickly a new
++ * request. It follows that the two queues are served
++ * alternatively, preempting each other if needed. This
++ * implies that, although both queues have the same weight,
++ * the queue with large requests receives a service that is
++ * 1024/8 times as high as the service received by the other
++ * queue.
++ *
++ * On the other hand, device idling is performed, and thus
++ * pure sector-domain guarantees are provided, for the
++ * following queues, which are likely to need stronger
++ * throughput guarantees: weight-raised queues, and queues
++ * with a higher weight than other queues. When such queues
++ * are active, sub-condition (i) is false, which triggers
++ * device idling.
+ *
+ * According to the above considerations, the next variable is
+ * true (only) if sub-condition (i) holds. To compute the
+@@ -2582,7 +3406,7 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+ * the function bfq_symmetric_scenario(), but also check
+ * whether bfqq is being weight-raised, because
+ * bfq_symmetric_scenario() does not take into account also
+- * weight-raised queues (see comments to
++ * weight-raised queues (see comments on
+ * bfq_weights_tree_add()).
+ *
+ * As a side note, it is worth considering that the above
+@@ -2604,17 +3428,16 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+ * bfqq. Such a case is when bfqq became active in a burst of
+ * queue activations. Queues that became active during a large
+ * burst benefit only from throughput, as discussed in the
+- * comments to bfq_handle_burst. Thus, if bfqq became active
++ * comments on bfq_handle_burst. Thus, if bfqq became active
+ * in a burst and not idling the device maximizes throughput,
+ 	 * then the device must not be idled, because not idling the
+ * device provides bfqq and all other queues in the burst with
+- * maximum benefit. Combining this and the two cases above, we
+- * can now establish when idling is actually needed to
+- * preserve service guarantees.
++ * maximum benefit. Combining this and the above case, we can
++ * now establish when idling is actually needed to preserve
++ * service guarantees.
+ */
+ idling_needed_for_service_guarantees =
+- (on_hdd_and_not_all_queues_seeky || asymmetric_scenario) &&
+- !bfq_bfqq_in_large_burst(bfqq);
++ asymmetric_scenario && !bfq_bfqq_in_large_burst(bfqq);
+
+ /*
+ * We have now all the components we need to compute the return
+@@ -2624,6 +3447,16 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+ * 2) idling either boosts the throughput (without issues), or
+ * is necessary to preserve service guarantees.
+ */
++ bfq_log_bfqq(bfqd, bfqq, "may_idle: sync %d idling_boosts_thr %d",
++ bfq_bfqq_sync(bfqq), idling_boosts_thr);
++
++ bfq_log_bfqq(bfqd, bfqq,
++ "may_idle: wr_busy %d boosts %d IO-bound %d guar %d",
++ bfqd->wr_busy_queues,
++ idling_boosts_thr_without_issues,
++ bfq_bfqq_IO_bound(bfqq),
++ idling_needed_for_service_guarantees);
++
+ return bfq_bfqq_sync(bfqq) &&
+ (idling_boosts_thr_without_issues ||
+ idling_needed_for_service_guarantees);
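As an aside (not part of the patch): the 8-sector/1024-sector example in the comments above can be verified with trivial arithmetic; the sketch below only restates it with illustrative numbers:

#include <stdio.h>

/*
 * Two queues with equal weight are served strictly alternately
 * (IOPS fairness). With 8-sector requests on one queue and
 * 1024-sector requests on the other, the per-round sector counts
 * differ by exactly 1024/8 = 128x, as noted in the comments above.
 */
int main(void)
{
	unsigned int rounds = 1000;
	unsigned long small = 8, large = 1024;	/* sectors per request */

	unsigned long served_small = rounds * small;
	unsigned long served_large = rounds * large;

	printf("small queue: %lu sectors, large queue: %lu sectors\n",
	       served_small, served_large);
	printf("ratio: %lux\n", served_large / served_small);
	return 0;
}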
+@@ -2635,7 +3468,7 @@ static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq)
+ * 1) the queue must remain in service and cannot be expired, and
+ * 2) the device must be idled to wait for the possible arrival of a new
+ * request for the queue.
+- * See the comments to the function bfq_bfqq_may_idle for the reasons
++ * See the comments on the function bfq_bfqq_may_idle for the reasons
+ * why performing device idling is the best choice to boost the throughput
+ * and preserve service guarantees when bfq_bfqq_may_idle itself
+ * returns true.
+@@ -2665,18 +3498,33 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue");
+
+ if (bfq_may_expire_for_budg_timeout(bfqq) &&
+- !timer_pending(&bfqd->idle_slice_timer) &&
++ !hrtimer_active(&bfqd->idle_slice_timer) &&
+ !bfq_bfqq_must_idle(bfqq))
+ goto expire;
+
++check_queue:
++ /*
++ * This loop is rarely executed more than once. Even when it
++ * happens, it is much more convenient to re-execute this loop
++ * than to return NULL and trigger a new dispatch to get a
++ * request served.
++ */
+ next_rq = bfqq->next_rq;
+ /*
+ * If bfqq has requests queued and it has enough budget left to
+ * serve them, keep the queue, otherwise expire it.
+ */
+ if (next_rq) {
++ BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list));
++
+ if (bfq_serv_to_charge(next_rq, bfqq) >
+ bfq_bfqq_budget_left(bfqq)) {
++ /*
++ * Expire the queue for budget exhaustion,
++ * which makes sure that the next budget is
++ * enough to serve the next request, even if
++ * it comes from the fifo expired path.
++ */
+ reason = BFQ_BFQQ_BUDGET_EXHAUSTED;
+ goto expire;
+ } else {
+@@ -2685,7 +3533,8 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ * not disable disk idling even when a new request
+ * arrives.
+ */
+- if (timer_pending(&bfqd->idle_slice_timer)) {
++ if (bfq_bfqq_wait_request(bfqq)) {
++ BUG_ON(!hrtimer_active(&bfqd->idle_slice_timer));
+ /*
+ * If we get here: 1) at least a new request
+ * has arrived but we have not disabled the
+@@ -2700,10 +3549,8 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ * So we disable idling.
+ */
+ bfq_clear_bfqq_wait_request(bfqq);
+- del_timer(&bfqd->idle_slice_timer);
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ hrtimer_try_to_cancel(&bfqd->idle_slice_timer);
+ bfqg_stats_update_idle_time(bfqq_group(bfqq));
+-#endif
+ }
+ goto keep_queue;
+ }
+@@ -2714,7 +3561,7 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ * for a new request, or has requests waiting for a completion and
+ * may idle after their completion, then keep it anyway.
+ */
+- if (timer_pending(&bfqd->idle_slice_timer) ||
++ if (hrtimer_active(&bfqd->idle_slice_timer) ||
+ (bfqq->dispatched != 0 && bfq_bfqq_may_idle(bfqq))) {
+ bfqq = NULL;
+ goto keep_queue;
+@@ -2725,9 +3572,16 @@ static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd)
+ bfq_bfqq_expire(bfqd, bfqq, false, reason);
+ new_queue:
+ bfqq = bfq_set_in_service_queue(bfqd);
+- bfq_log(bfqd, "select_queue: new queue %d returned",
+- bfqq ? bfqq->pid : 0);
++ if (bfqq) {
++ bfq_log_bfqq(bfqd, bfqq, "select_queue: checking new queue");
++ goto check_queue;
++ }
+ keep_queue:
++ if (bfqq)
++ bfq_log_bfqq(bfqd, bfqq, "select_queue: returned this queue");
++ else
++ bfq_log(bfqd, "select_queue: no queue returned");
++
+ return bfqq;
+ }
+
+@@ -2736,6 +3590,9 @@ static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ struct bfq_entity *entity = &bfqq->entity;
+
+ if (bfqq->wr_coeff > 1) { /* queue is being weight-raised */
++ BUG_ON(bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time &&
++ time_is_after_jiffies(bfqq->last_wr_start_finish));
++
+ bfq_log_bfqq(bfqd, bfqq,
+ "raising period dur %u/%u msec, old coeff %u, w %d(%d)",
+ jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish),
+@@ -2749,22 +3606,30 @@ static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change");
+
+ /*
+- * If the queue was activated in a burst, or
+- * too much time has elapsed from the beginning
+- * of this weight-raising period, or the queue has
+- * exceeded the acceptable number of cooperations,
+- * then end weight raising.
++ * If the queue was activated in a burst, or too much
++ * time has elapsed from the beginning of this
++ * weight-raising period, then end weight raising.
+ */
+- if (bfq_bfqq_in_large_burst(bfqq) ||
+- bfq_bfqq_cooperations(bfqq) >= bfqd->bfq_coop_thresh ||
+- time_is_before_jiffies(bfqq->last_wr_start_finish +
+- bfqq->wr_cur_max_time)) {
+- bfqq->last_wr_start_finish = jiffies;
+- bfq_log_bfqq(bfqd, bfqq,
+- "wrais ending at %lu, rais_max_time %u",
+- bfqq->last_wr_start_finish,
+- jiffies_to_msecs(bfqq->wr_cur_max_time));
++ if (bfq_bfqq_in_large_burst(bfqq))
+ bfq_bfqq_end_wr(bfqq);
++ else if (time_is_before_jiffies(bfqq->last_wr_start_finish +
++ bfqq->wr_cur_max_time)) {
++ if (bfqq->wr_cur_max_time != bfqd->bfq_wr_rt_max_time ||
++ time_is_before_jiffies(bfqq->wr_start_at_switch_to_srt +
++ bfq_wr_duration(bfqd)))
++ bfq_bfqq_end_wr(bfqq);
++ else {
++ /* switch back to interactive wr */
++ bfqq->wr_coeff = bfqd->bfq_wr_coeff;
++ bfqq->wr_cur_max_time = bfq_wr_duration(bfqd);
++ bfqq->last_wr_start_finish =
++ bfqq->wr_start_at_switch_to_srt;
++ BUG_ON(time_is_after_jiffies(
++ bfqq->last_wr_start_finish));
++ bfqq->entity.prio_changed = 1;
++ bfq_log_bfqq(bfqd, bfqq,
++ "back to interactive wr");
++ }
+ }
+ }
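As an aside (not part of the patch): the branch above can be read as a three-way decision. A toy restatement in plain C, with simplified field names (jiffy wrap-around handling is ignored):

#include <stdbool.h>
#include <stdio.h>

enum toy_wr_action { TOY_KEEP, TOY_END_WR, TOY_BACK_TO_INTERACTIVE };

/*
 * Once the current weight-raising period has run out, a soft
 * real-time queue falls back to interactive weight raising only if
 * the interactive period that started at the switch to soft rt has
 * not itself expired; otherwise weight raising ends.
 */
static enum toy_wr_action
toy_wr_decision(bool in_large_burst, unsigned long now,
		unsigned long last_wr_start_finish,
		unsigned long wr_cur_max_time, bool in_soft_rt_period,
		unsigned long wr_start_at_switch_to_srt,
		unsigned long interactive_wr_duration)
{
	if (in_large_burst)
		return TOY_END_WR;
	if (now < last_wr_start_finish + wr_cur_max_time)
		return TOY_KEEP;	/* current period still running */
	if (!in_soft_rt_period ||
	    now >= wr_start_at_switch_to_srt + interactive_wr_duration)
		return TOY_END_WR;
	return TOY_BACK_TO_INTERACTIVE;
}

int main(void)
{
	/*
	 * Soft-rt period over at t = 1000, but the interactive window
	 * (800..1600) is still open: fall back to interactive wr (2).
	 */
	printf("%d\n",
	       toy_wr_decision(false, 1000, 700, 300, true, 800, 800));
	return 0;
}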
+ /* Update weight both if it must be raised and if it must be lowered */
+@@ -2782,46 +3647,34 @@ static int bfq_dispatch_request(struct bfq_data *bfqd,
+ struct bfq_queue *bfqq)
+ {
+ int dispatched = 0;
+- struct request *rq;
++ struct request *rq = bfqq->next_rq;
+ unsigned long service_to_charge;
+
+ BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list));
+-
+- /* Follow expired path, else get first next available. */
+- rq = bfq_check_fifo(bfqq);
+- if (!rq)
+- rq = bfqq->next_rq;
++ BUG_ON(!rq);
+ service_to_charge = bfq_serv_to_charge(rq, bfqq);
+
+- if (service_to_charge > bfq_bfqq_budget_left(bfqq)) {
+- /*
+- * This may happen if the next rq is chosen in fifo order
+- * instead of sector order. The budget is properly
+- * dimensioned to be always sufficient to serve the next
+- * request only if it is chosen in sector order. The reason
+- * is that it would be quite inefficient and little useful
+- * to always make sure that the budget is large enough to
+- * serve even the possible next rq in fifo order.
+- * In fact, requests are seldom served in fifo order.
+- *
+- * Expire the queue for budget exhaustion, and make sure
+- * that the next act_budget is enough to serve the next
+- * request, even if it comes from the fifo expired path.
+- */
+- bfqq->next_rq = rq;
+- /*
+- * Since this dispatch is failed, make sure that
+- * a new one will be performed
+- */
+- if (!bfqd->rq_in_driver)
+- bfq_schedule_dispatch(bfqd);
+- goto expire;
+- }
++ BUG_ON(service_to_charge > bfq_bfqq_budget_left(bfqq));
++
++ BUG_ON(bfqq->entity.budget < bfqq->entity.service);
+
+- /* Finally, insert request into driver dispatch list. */
+ bfq_bfqq_served(bfqq, service_to_charge);
++
++ BUG_ON(bfqq->entity.budget < bfqq->entity.service);
++
+ bfq_dispatch_insert(bfqd->queue, rq);
+
++ /*
++	 * If weight raising has to terminate for bfqq, then the next
++	 * function call causes an immediate update of bfqq's weight,
++	 * without waiting for the next activation. As a consequence, on
++	 * expiration, bfqq will be timestamped as if it had never been
++ * weight-raised during this service slot, even if it has
++ * received part or even most of the service as a
++ * weight-raised queue. This inflates bfqq's timestamps, which
++ * is beneficial, as bfqq is then more willing to leave the
++ * device immediately to possible other weight-raised queues.
++ */
+ bfq_update_wr_data(bfqd, bfqq);
+
+ bfq_log_bfqq(bfqd, bfqq,
+@@ -2837,9 +3690,7 @@ static int bfq_dispatch_request(struct bfq_data *bfqd,
+ bfqd->in_service_bic = RQ_BIC(rq);
+ }
+
+- if (bfqd->busy_queues > 1 && ((!bfq_bfqq_sync(bfqq) &&
+- dispatched >= bfqd->bfq_max_budget_async_rq) ||
+- bfq_class_idle(bfqq)))
++ if (bfqd->busy_queues > 1 && bfq_class_idle(bfqq))
+ goto expire;
+
+ return dispatched;
+@@ -2885,8 +3736,8 @@ static int bfq_forced_dispatch(struct bfq_data *bfqd)
+ st = bfq_entity_service_tree(&bfqq->entity);
+
+ dispatched += __bfq_forced_dispatch_bfqq(bfqq);
+- bfqq->max_budget = bfq_max_budget(bfqd);
+
++ bfqq->max_budget = bfq_max_budget(bfqd);
+ bfq_forget_idle(st);
+ }
+
+@@ -2899,37 +3750,37 @@ static int bfq_dispatch_requests(struct request_queue *q, int force)
+ {
+ struct bfq_data *bfqd = q->elevator->elevator_data;
+ struct bfq_queue *bfqq;
+- int max_dispatch;
+
+ bfq_log(bfqd, "dispatch requests: %d busy queues", bfqd->busy_queues);
++
+ if (bfqd->busy_queues == 0)
+ return 0;
+
+ if (unlikely(force))
+ return bfq_forced_dispatch(bfqd);
+
++ /*
++ * Force device to serve one request at a time if
++ * strict_guarantees is true. Forcing this service scheme is
++ * currently the ONLY way to guarantee that the request
++ * service order enforced by the scheduler is respected by a
++ * queueing device. Otherwise the device is free even to make
++ * some unlucky request wait for as long as the device
++ * wishes.
++ *
++	 * Of course, serving one request at a time may cause loss of
++ * throughput.
++ */
++ if (bfqd->strict_guarantees && bfqd->rq_in_driver > 0)
++ return 0;
++
+ bfqq = bfq_select_queue(bfqd);
+ if (!bfqq)
+ return 0;
+
+- if (bfq_class_idle(bfqq))
+- max_dispatch = 1;
+-
+- if (!bfq_bfqq_sync(bfqq))
+- max_dispatch = bfqd->bfq_max_budget_async_rq;
+-
+- if (!bfq_bfqq_sync(bfqq) && bfqq->dispatched >= max_dispatch) {
+- if (bfqd->busy_queues > 1)
+- return 0;
+- if (bfqq->dispatched >= 4 * max_dispatch)
+- return 0;
+- }
+-
+- if (bfqd->sync_flight != 0 && !bfq_bfqq_sync(bfqq))
+- return 0;
++ BUG_ON(bfqq->entity.budget < bfqq->entity.service);
+
+- bfq_clear_bfqq_wait_request(bfqq);
+- BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++ BUG_ON(bfq_bfqq_wait_request(bfqq));
+
+ if (!bfq_dispatch_request(bfqd, bfqq))
+ return 0;
+@@ -2937,6 +3788,8 @@ static int bfq_dispatch_requests(struct request_queue *q, int force)
+ bfq_log_bfqq(bfqd, bfqq, "dispatched %s request",
+ bfq_bfqq_sync(bfqq) ? "sync" : "async");
+
++ BUG_ON(bfqq->next_rq == NULL &&
++ bfqq->entity.budget < bfqq->entity.service);
+ return 1;
+ }
+
+@@ -2948,23 +3801,21 @@ static int bfq_dispatch_requests(struct request_queue *q, int force)
+ */
+ static void bfq_put_queue(struct bfq_queue *bfqq)
+ {
+- struct bfq_data *bfqd = bfqq->bfqd;
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ struct bfq_group *bfqg = bfqq_group(bfqq);
+ #endif
+
+- BUG_ON(atomic_read(&bfqq->ref) <= 0);
++ BUG_ON(bfqq->ref <= 0);
+
+- bfq_log_bfqq(bfqd, bfqq, "put_queue: %p %d", bfqq,
+- atomic_read(&bfqq->ref));
+- if (!atomic_dec_and_test(&bfqq->ref))
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "put_queue: %p %d", bfqq, bfqq->ref);
++ bfqq->ref--;
++ if (bfqq->ref)
+ return;
+
+ BUG_ON(rb_first(&bfqq->sort_list));
+ BUG_ON(bfqq->allocated[READ] + bfqq->allocated[WRITE] != 0);
+ BUG_ON(bfqq->entity.tree);
+ BUG_ON(bfq_bfqq_busy(bfqq));
+- BUG_ON(bfqd->in_service_queue == bfqq);
+
+ if (bfq_bfqq_sync(bfqq))
+ /*
+@@ -2977,7 +3828,7 @@ static void bfq_put_queue(struct bfq_queue *bfqq)
+ */
+ hlist_del_init(&bfqq->burst_list_node);
+
+- bfq_log_bfqq(bfqd, bfqq, "put_queue: %p freed", bfqq);
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "put_queue: %p freed", bfqq);
+
+ kmem_cache_free(bfq_pool, bfqq);
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+@@ -3011,8 +3862,7 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ bfq_schedule_dispatch(bfqd);
+ }
+
+- bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq,
+- atomic_read(&bfqq->ref));
++ bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq, bfqq->ref);
+
+ bfq_put_cooperator(bfqq);
+
+@@ -3021,28 +3871,7 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+
+ static void bfq_init_icq(struct io_cq *icq)
+ {
+- struct bfq_io_cq *bic = icq_to_bic(icq);
+-
+- bic->ttime.last_end_request = jiffies;
+- /*
+- * A newly created bic indicates that the process has just
+- * started doing I/O, and is probably mapping into memory its
+- * executable and libraries: it definitely needs weight raising.
+- * There is however the possibility that the process performs,
+- * for a while, I/O close to some other process. EQM intercepts
+- * this behavior and may merge the queue corresponding to the
+- * process with some other queue, BEFORE the weight of the queue
+- * is raised. Merged queues are not weight-raised (they are assumed
+- * to belong to processes that benefit only from high throughput).
+- * If the merge is basically the consequence of an accident, then
+- * the queue will be split soon and will get back its old weight.
+- * It is then important to write down somewhere that this queue
+- * does need weight raising, even if it did not make it to get its
+- * weight raised before being merged. To this purpose, we overload
+- * the field raising_time_left and assign 1 to it, to mark the queue
+- * as needing weight raising.
+- */
+- bic->wr_time_left = 1;
++ icq_to_bic(icq)->ttime.last_end_request = ktime_get_ns() - (1ULL<<32);
+ }
+
+ static void bfq_exit_icq(struct io_cq *icq)
+@@ -3050,21 +3879,21 @@ static void bfq_exit_icq(struct io_cq *icq)
+ struct bfq_io_cq *bic = icq_to_bic(icq);
+ struct bfq_data *bfqd = bic_to_bfqd(bic);
+
+- if (bic->bfqq[BLK_RW_ASYNC]) {
+- bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_ASYNC]);
+- bic->bfqq[BLK_RW_ASYNC] = NULL;
++ if (bic_to_bfqq(bic, false)) {
++ bfq_exit_bfqq(bfqd, bic_to_bfqq(bic, false));
++ bic_set_bfqq(bic, NULL, false);
+ }
+
+- if (bic->bfqq[BLK_RW_SYNC]) {
++ if (bic_to_bfqq(bic, true)) {
+ /*
+ * If the bic is using a shared queue, put the reference
+ * taken on the io_context when the bic started using a
+ * shared bfq_queue.
+ */
+- if (bfq_bfqq_coop(bic->bfqq[BLK_RW_SYNC]))
++ if (bfq_bfqq_coop(bic_to_bfqq(bic, true)))
+ put_io_context(icq->ioc);
+- bfq_exit_bfqq(bfqd, bic->bfqq[BLK_RW_SYNC]);
+- bic->bfqq[BLK_RW_SYNC] = NULL;
++ bfq_exit_bfqq(bfqd, bic_to_bfqq(bic, true));
++ bic_set_bfqq(bic, NULL, true);
+ }
+ }
+
+@@ -3072,8 +3901,8 @@ static void bfq_exit_icq(struct io_cq *icq)
+ * Update the entity prio values; note that the new values will not
+ * be used until the next (re)activation.
+ */
+-static void
+-bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
++static void bfq_set_next_ioprio_data(struct bfq_queue *bfqq,
++ struct bfq_io_cq *bic)
+ {
+ struct task_struct *tsk = current;
+ int ioprio_class;
+@@ -3105,7 +3934,7 @@ bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
+ break;
+ }
+
+- if (bfqq->new_ioprio < 0 || bfqq->new_ioprio >= IOPRIO_BE_NR) {
++ if (bfqq->new_ioprio >= IOPRIO_BE_NR) {
+ pr_crit("bfq_set_next_ioprio_data: new_ioprio %d\n",
+ bfqq->new_ioprio);
+ BUG();
+@@ -3113,45 +3942,40 @@ bfq_set_next_ioprio_data(struct bfq_queue *bfqq, struct bfq_io_cq *bic)
+
+ bfqq->entity.new_weight = bfq_ioprio_to_weight(bfqq->new_ioprio);
+ bfqq->entity.prio_changed = 1;
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "set_next_ioprio_data: bic_class %d prio %d class %d",
++ ioprio_class, bfqq->new_ioprio, bfqq->new_ioprio_class);
+ }
+
+ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio)
+ {
+- struct bfq_data *bfqd;
+- struct bfq_queue *bfqq, *new_bfqq;
++ struct bfq_data *bfqd = bic_to_bfqd(bic);
++ struct bfq_queue *bfqq;
+ unsigned long uninitialized_var(flags);
+ int ioprio = bic->icq.ioc->ioprio;
+
+- bfqd = bfq_get_bfqd_locked(&(bic->icq.q->elevator->elevator_data),
+- &flags);
+ /*
+ * This condition may trigger on a newly created bic, be sure to
+ * drop the lock before returning.
+ */
+ if (unlikely(!bfqd) || likely(bic->ioprio == ioprio))
+- goto out;
++ return;
+
+ bic->ioprio = ioprio;
+
+- bfqq = bic->bfqq[BLK_RW_ASYNC];
++ bfqq = bic_to_bfqq(bic, false);
+ if (bfqq) {
+- new_bfqq = bfq_get_queue(bfqd, bio, BLK_RW_ASYNC, bic,
+- GFP_ATOMIC);
+- if (new_bfqq) {
+- bic->bfqq[BLK_RW_ASYNC] = new_bfqq;
+- bfq_log_bfqq(bfqd, bfqq,
+- "check_ioprio_change: bfqq %p %d",
+- bfqq, atomic_read(&bfqq->ref));
+- bfq_put_queue(bfqq);
+- }
++ bfq_put_queue(bfqq);
++ bfqq = bfq_get_queue(bfqd, bio, BLK_RW_ASYNC, bic);
++ bic_set_bfqq(bic, bfqq, false);
++ bfq_log_bfqq(bfqd, bfqq,
++ "check_ioprio_change: bfqq %p %d",
++ bfqq, bfqq->ref);
+ }
+
+- bfqq = bic->bfqq[BLK_RW_SYNC];
++ bfqq = bic_to_bfqq(bic, true);
+ if (bfqq)
+ bfq_set_next_ioprio_data(bfqq, bic);
+-
+-out:
+- bfq_put_bfqd_unlock(bfqd, &flags);
+ }
+
+ static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+@@ -3160,8 +3984,9 @@ static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ RB_CLEAR_NODE(&bfqq->entity.rb_node);
+ INIT_LIST_HEAD(&bfqq->fifo);
+ INIT_HLIST_NODE(&bfqq->burst_list_node);
++ BUG_ON(!hlist_unhashed(&bfqq->burst_list_node));
+
+- atomic_set(&bfqq->ref, 0);
++ bfqq->ref = 0;
+ bfqq->bfqd = bfqd;
+
+ if (bic)
+@@ -3171,6 +3996,7 @@ static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ if (!bfq_class_idle(bfqq))
+ bfq_mark_bfqq_idle_window(bfqq);
+ bfq_mark_bfqq_sync(bfqq);
++ bfq_mark_bfqq_just_created(bfqq);
+ } else
+ bfq_clear_bfqq_sync(bfqq);
+ bfq_mark_bfqq_IO_bound(bfqq);
+@@ -3180,72 +4006,19 @@ static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ bfqq->pid = pid;
+
+ bfqq->wr_coeff = 1;
+- bfqq->last_wr_start_finish = 0;
++ bfqq->last_wr_start_finish = jiffies;
++ bfqq->wr_start_at_switch_to_srt = bfq_smallest_from_now();
++ bfqq->budget_timeout = bfq_smallest_from_now();
++ bfqq->split_time = bfq_smallest_from_now();
++
+ /*
+ * Set to the value for which bfqq will not be deemed as
+ * soft rt when it becomes backlogged.
+ */
+- bfqq->soft_rt_next_start = bfq_infinity_from_now(jiffies);
+-}
+-
+-static struct bfq_queue *bfq_find_alloc_queue(struct bfq_data *bfqd,
+- struct bio *bio, int is_sync,
+- struct bfq_io_cq *bic,
+- gfp_t gfp_mask)
+-{
+- struct bfq_group *bfqg;
+- struct bfq_queue *bfqq, *new_bfqq = NULL;
+- struct blkcg *blkcg;
+-
+-retry:
+- rcu_read_lock();
+-
+- blkcg = bio_blkcg(bio);
+- bfqg = bfq_find_alloc_group(bfqd, blkcg);
+- /* bic always exists here */
+- bfqq = bic_to_bfqq(bic, is_sync);
+-
+- /*
+- * Always try a new alloc if we fall back to the OOM bfqq
+- * originally, since it should just be a temporary situation.
+- */
+- if (!bfqq || bfqq == &bfqd->oom_bfqq) {
+- bfqq = NULL;
+- if (new_bfqq) {
+- bfqq = new_bfqq;
+- new_bfqq = NULL;
+- } else if (gfpflags_allow_blocking(gfp_mask)) {
+- rcu_read_unlock();
+- spin_unlock_irq(bfqd->queue->queue_lock);
+- new_bfqq = kmem_cache_alloc_node(bfq_pool,
+- gfp_mask | __GFP_ZERO,
+- bfqd->queue->node);
+- spin_lock_irq(bfqd->queue->queue_lock);
+- if (new_bfqq)
+- goto retry;
+- } else {
+- bfqq = kmem_cache_alloc_node(bfq_pool,
+- gfp_mask | __GFP_ZERO,
+- bfqd->queue->node);
+- }
+-
+- if (bfqq) {
+- bfq_init_bfqq(bfqd, bfqq, bic, current->pid,
+- is_sync);
+- bfq_init_entity(&bfqq->entity, bfqg);
+- bfq_log_bfqq(bfqd, bfqq, "allocated");
+- } else {
+- bfqq = &bfqd->oom_bfqq;
+- bfq_log_bfqq(bfqd, bfqq, "using oom bfqq");
+- }
+- }
+-
+- if (new_bfqq)
+- kmem_cache_free(bfq_pool, new_bfqq);
+-
+- rcu_read_unlock();
++ bfqq->soft_rt_next_start = bfq_greatest_from_now();
+
+- return bfqq;
++ /* first request is almost certainly seeky */
++ bfqq->seek_history = 1;
+ }
+
+ static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd,
+@@ -3268,90 +4041,93 @@ static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd,
+ }
+
+ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
+- struct bio *bio, int is_sync,
+- struct bfq_io_cq *bic, gfp_t gfp_mask)
++ struct bio *bio, bool is_sync,
++ struct bfq_io_cq *bic)
+ {
+ const int ioprio = IOPRIO_PRIO_DATA(bic->ioprio);
+ const int ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio);
+ struct bfq_queue **async_bfqq = NULL;
+- struct bfq_queue *bfqq = NULL;
++ struct bfq_queue *bfqq;
++ struct bfq_group *bfqg;
+
+- if (!is_sync) {
+- struct blkcg *blkcg;
+- struct bfq_group *bfqg;
++ rcu_read_lock();
++
++ bfqg = bfq_find_set_group(bfqd, bio_blkcg(bio));
++ if (!bfqg) {
++ bfqq = &bfqd->oom_bfqq;
++ goto out;
++ }
+
+- rcu_read_lock();
+- blkcg = bio_blkcg(bio);
+- rcu_read_unlock();
+- bfqg = bfq_find_alloc_group(bfqd, blkcg);
++ if (!is_sync) {
+ async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class,
+ ioprio);
+ bfqq = *async_bfqq;
++ if (bfqq)
++ goto out;
+ }
+
+- if (!bfqq)
+- bfqq = bfq_find_alloc_queue(bfqd, bio, is_sync, bic, gfp_mask);
++ bfqq = kmem_cache_alloc_node(bfq_pool,
++ GFP_NOWAIT | __GFP_ZERO | __GFP_NOWARN,
++ bfqd->queue->node);
++
++ if (bfqq) {
++ bfq_init_bfqq(bfqd, bfqq, bic, current->pid,
++ is_sync);
++ bfq_init_entity(&bfqq->entity, bfqg);
++ bfq_log_bfqq(bfqd, bfqq, "allocated");
++ } else {
++ bfqq = &bfqd->oom_bfqq;
++ bfq_log_bfqq(bfqd, bfqq, "using oom bfqq");
++ goto out;
++ }
+
+ /*
+ * Pin the queue now that it's allocated, scheduler exit will
+ * prune it.
+ */
+- if (!is_sync && !(*async_bfqq)) {
+- atomic_inc(&bfqq->ref);
++ if (async_bfqq) {
++ bfqq->ref++; /*
++ * Extra group reference, w.r.t. sync
++ * queue. This extra reference is removed
++ * only if bfqq->bfqg disappears, to
++ * guarantee that this queue is not freed
++ * until its group goes away.
++ */
+ bfq_log_bfqq(bfqd, bfqq, "get_queue, bfqq not in async: %p, %d",
+- bfqq, atomic_read(&bfqq->ref));
++ bfqq, bfqq->ref);
+ *async_bfqq = bfqq;
+ }
+
+- atomic_inc(&bfqq->ref);
+- bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq,
+- atomic_read(&bfqq->ref));
++out:
++ bfqq->ref++;
++ bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq, bfqq->ref);
++ rcu_read_unlock();
+ return bfqq;
+ }
+
+ static void bfq_update_io_thinktime(struct bfq_data *bfqd,
+ struct bfq_io_cq *bic)
+ {
+- unsigned long elapsed = jiffies - bic->ttime.last_end_request;
+- unsigned long ttime = min(elapsed, 2UL * bfqd->bfq_slice_idle);
++ struct bfq_ttime *ttime = &bic->ttime;
++ u64 elapsed = ktime_get_ns() - bic->ttime.last_end_request;
+
+- bic->ttime.ttime_samples = (7*bic->ttime.ttime_samples + 256) / 8;
+- bic->ttime.ttime_total = (7*bic->ttime.ttime_total + 256*ttime) / 8;
+- bic->ttime.ttime_mean = (bic->ttime.ttime_total + 128) /
+- bic->ttime.ttime_samples;
++ elapsed = min_t(u64, elapsed, 2 * bfqd->bfq_slice_idle);
++
++ ttime->ttime_samples = (7*bic->ttime.ttime_samples + 256) / 8;
++ ttime->ttime_total = div_u64(7*ttime->ttime_total + 256*elapsed, 8);
++ ttime->ttime_mean = div64_ul(ttime->ttime_total + 128,
++ ttime->ttime_samples);
+ }
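As an aside (not part of the patch): the update above is an exponentially weighted moving average with weight 7/8 on the old value and 1/8 on the new sample, kept alongside a x256 fixed-point sample counter. A stand-alone sketch (names are placeholders, not BFQ's):

#include <stdio.h>
#include <stdint.h>

/* Toy think-time tracker mirroring the 7/8 - 1/8 decay used above. */
struct toy_ttime {
	uint64_t samples;	/* fixed-point sample weight (x256) */
	uint64_t total;		/* weighted sum of think times, in ns */
	uint64_t mean;		/* total / samples */
};

static void toy_update_thinktime(struct toy_ttime *t, uint64_t elapsed_ns,
				 uint64_t slice_idle_ns)
{
	if (elapsed_ns > 2 * slice_idle_ns)	/* clamp long gaps */
		elapsed_ns = 2 * slice_idle_ns;

	t->samples = (7 * t->samples + 256) / 8;
	t->total = (7 * t->total + 256 * elapsed_ns) / 8;
	t->mean = (t->total + 128) / t->samples;
}

int main(void)
{
	struct toy_ttime t = { 0, 0, 0 };
	uint64_t slice_idle = 8ULL * 1000 * 1000;	/* 8 ms, as an example */

	toy_update_thinktime(&t, 2ULL * 1000 * 1000, slice_idle);  /* 2 ms gap */
	toy_update_thinktime(&t, 20ULL * 1000 * 1000, slice_idle); /* clamped */
	printf("mean think time ~ %llu ns\n", (unsigned long long)t.mean);
	return 0;
}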
+
+-static void bfq_update_io_seektime(struct bfq_data *bfqd,
+- struct bfq_queue *bfqq,
+- struct request *rq)
++static void
++bfq_update_io_seektime(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ struct request *rq)
+ {
+- sector_t sdist;
+- u64 total;
+-
+- if (bfqq->last_request_pos < blk_rq_pos(rq))
+- sdist = blk_rq_pos(rq) - bfqq->last_request_pos;
+- else
+- sdist = bfqq->last_request_pos - blk_rq_pos(rq);
+-
+- /*
+- * Don't allow the seek distance to get too large from the
+- * odd fragment, pagein, etc.
+- */
+- if (bfqq->seek_samples == 0) /* first request, not really a seek */
+- sdist = 0;
+- else if (bfqq->seek_samples <= 60) /* second & third seek */
+- sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*1024);
+- else
+- sdist = min(sdist, (bfqq->seek_mean * 4) + 2*1024*64);
+-
+- bfqq->seek_samples = (7*bfqq->seek_samples + 256) / 8;
+- bfqq->seek_total = (7*bfqq->seek_total + (u64)256*sdist) / 8;
+- total = bfqq->seek_total + (bfqq->seek_samples/2);
+- do_div(total, bfqq->seek_samples);
+- bfqq->seek_mean = (sector_t)total;
+-
+- bfq_log_bfqq(bfqd, bfqq, "dist=%llu mean=%llu", (u64)sdist,
+- (u64)bfqq->seek_mean);
++ bfqq->seek_history <<= 1;
++ bfqq->seek_history |=
++ get_sdist(bfqq->last_request_pos, rq) > BFQQ_SEEK_THR &&
++ (!blk_queue_nonrot(bfqd->queue) ||
++ blk_rq_sectors(rq) < BFQQ_SECT_THR_NONROT);
+ }
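As an aside (not part of the patch): each request shifts one bit into a small history bitmap, with 1 meaning "this request looked like a seek"; the queue is then treated as seeky when enough recent bits are set. The values of BFQQ_SEEK_THR and BFQQ_SECT_THR_NONROT and the test used by BFQQ_SEEKY are not shown in this hunk, so the constants and the popcount rule below are assumed stand-ins, and the extra non-rotational-device condition is left out:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TOY_SEEK_THR		(8 * 1024)	/* sectors; illustrative */
#define TOY_HISTORY_BITS	8		/* illustrative window */

/* Shift in one observation: 1 if the jump from the last position was large. */
static uint8_t toy_update_seek_history(uint8_t history, uint64_t last_pos,
				       uint64_t new_pos)
{
	uint64_t sdist = new_pos > last_pos ? new_pos - last_pos
					    : last_pos - new_pos;
	return (uint8_t)((history << 1) | (sdist > TOY_SEEK_THR));
}

/* Assumed rule: seeky if most of the recent observations were seeks. */
static bool toy_seeky(uint8_t history)
{
	return __builtin_popcount(history) > TOY_HISTORY_BITS / 2;
}

int main(void)
{
	uint8_t h = 1;	/* first request counted as a seek, as in the patch */
	uint64_t pos = 0;
	uint64_t reqs[] = { 100000, 500000, 500008, 500016, 500024 };

	for (unsigned int i = 0; i < sizeof(reqs) / sizeof(reqs[0]); i++) {
		h = toy_update_seek_history(h, pos, reqs[i]);
		pos = reqs[i];
	}
	printf("history 0x%02x seeky=%d\n", h, toy_seeky(h));
	return 0;
}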
+
+ /*
+@@ -3369,7 +4145,8 @@ static void bfq_update_idle_window(struct bfq_data *bfqd,
+ return;
+
+ /* Idle window just restored, statistics are meaningless. */
+- if (bfq_bfqq_just_split(bfqq))
++ if (time_is_after_eq_jiffies(bfqq->split_time +
++ bfqd->bfq_wr_min_idle_time))
+ return;
+
+ enable_idle = bfq_bfqq_idle_window(bfqq);
+@@ -3409,22 +4186,13 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+
+ bfq_update_io_thinktime(bfqd, bic);
+ bfq_update_io_seektime(bfqd, bfqq, rq);
+- if (!BFQQ_SEEKY(bfqq) && bfq_bfqq_constantly_seeky(bfqq)) {
+- bfq_clear_bfqq_constantly_seeky(bfqq);
+- if (!blk_queue_nonrot(bfqd->queue)) {
+- BUG_ON(!bfqd->const_seeky_busy_in_flight_queues);
+- bfqd->const_seeky_busy_in_flight_queues--;
+- }
+- }
+ if (bfqq->entity.service > bfq_max_budget(bfqd) / 8 ||
+ !BFQQ_SEEKY(bfqq))
+ bfq_update_idle_window(bfqd, bfqq, bic);
+- bfq_clear_bfqq_just_split(bfqq);
+
+ bfq_log_bfqq(bfqd, bfqq,
+- "rq_enqueued: idle_window=%d (seeky %d, mean %llu)",
+- bfq_bfqq_idle_window(bfqq), BFQQ_SEEKY(bfqq),
+- (unsigned long long) bfqq->seek_mean);
++ "rq_enqueued: idle_window=%d (seeky %d)",
++ bfq_bfqq_idle_window(bfqq), BFQQ_SEEKY(bfqq));
+
+ bfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq);
+
+@@ -3438,14 +4206,15 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ * is small and the queue is not to be expired, then
+ * just exit.
+ *
+- * In this way, if the disk is being idled to wait for
+- * a new request from the in-service queue, we avoid
+- * unplugging the device and committing the disk to serve
+- * just a small request. On the contrary, we wait for
+- * the block layer to decide when to unplug the device:
+- * hopefully, new requests will be merged to this one
+- * quickly, then the device will be unplugged and
+- * larger requests will be dispatched.
++ * In this way, if the device is being idled to wait
++ * for a new request from the in-service queue, we
++ * avoid unplugging the device and committing the
++ * device to serve just a small request. On the
++ * contrary, we wait for the block layer to decide
++ * when to unplug the device: hopefully, new requests
++ * will be merged to this one quickly, then the device
++ * will be unplugged and larger requests will be
++ * dispatched.
+ */
+ if (small_req && !budget_timeout)
+ return;
+@@ -3457,10 +4226,8 @@ static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ * timer.
+ */
+ bfq_clear_bfqq_wait_request(bfqq);
+- del_timer(&bfqd->idle_slice_timer);
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ hrtimer_try_to_cancel(&bfqd->idle_slice_timer);
+ bfqg_stats_update_idle_time(bfqq_group(bfqq));
+-#endif
+
+ /*
+ * The queue is not empty, because a new request just
+@@ -3504,28 +4271,20 @@ static void bfq_insert_request(struct request_queue *q, struct request *rq)
+ */
+ new_bfqq->allocated[rq_data_dir(rq)]++;
+ bfqq->allocated[rq_data_dir(rq)]--;
+- atomic_inc(&new_bfqq->ref);
++ new_bfqq->ref++;
++ bfq_clear_bfqq_just_created(bfqq);
+ bfq_put_queue(bfqq);
+ if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq)
+ bfq_merge_bfqqs(bfqd, RQ_BIC(rq),
+ bfqq, new_bfqq);
+ rq->elv.priv[1] = new_bfqq;
+ bfqq = new_bfqq;
+- } else
+- bfq_bfqq_increase_failed_cooperations(bfqq);
++ }
+ }
+
+ bfq_add_request(rq);
+
+- /*
+- * Here a newly-created bfq_queue has already started a weight-raising
+- * period: clear raising_time_left to prevent bfq_bfqq_save_state()
+- * from assigning it a full weight-raising period. See the detailed
+- * comments about this field in bfq_init_icq().
+- */
+- if (bfqq->bic)
+- bfqq->bic->wr_time_left = 0;
+- rq->fifo_time = jiffies + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
++ rq->fifo_time = ktime_get_ns() + bfqd->bfq_fifo_expire[rq_is_sync(rq)];
+ list_add_tail(&rq->queuelist, &bfqq->fifo);
+
+ bfq_rq_enqueued(bfqd, bfqq, rq);
+@@ -3533,8 +4292,8 @@ static void bfq_insert_request(struct request_queue *q, struct request *rq)
+
+ static void bfq_update_hw_tag(struct bfq_data *bfqd)
+ {
+- bfqd->max_rq_in_driver = max(bfqd->max_rq_in_driver,
+- bfqd->rq_in_driver);
++ bfqd->max_rq_in_driver = max_t(int, bfqd->max_rq_in_driver,
++ bfqd->rq_in_driver);
+
+ if (bfqd->hw_tag == 1)
+ return;
+@@ -3560,48 +4319,85 @@ static void bfq_completed_request(struct request_queue *q, struct request *rq)
+ {
+ struct bfq_queue *bfqq = RQ_BFQQ(rq);
+ struct bfq_data *bfqd = bfqq->bfqd;
+- bool sync = bfq_bfqq_sync(bfqq);
++ u64 now_ns;
++ u32 delta_us;
+
+- bfq_log_bfqq(bfqd, bfqq, "completed one req with %u sects left (%d)",
+- blk_rq_sectors(rq), sync);
++ bfq_log_bfqq(bfqd, bfqq, "completed one req with %u sects left",
++ blk_rq_sectors(rq));
+
++ assert_spin_locked(bfqd->queue->queue_lock);
+ bfq_update_hw_tag(bfqd);
+
+ BUG_ON(!bfqd->rq_in_driver);
+ BUG_ON(!bfqq->dispatched);
+ bfqd->rq_in_driver--;
+ bfqq->dispatched--;
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ bfqg_stats_update_completion(bfqq_group(bfqq),
+ rq_start_time_ns(rq),
+- rq_io_start_time_ns(rq), rq->cmd_flags);
+-#endif
++ rq_io_start_time_ns(rq),
++ rq->cmd_flags);
+
+ if (!bfqq->dispatched && !bfq_bfqq_busy(bfqq)) {
++ BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
++ /*
++ * Set budget_timeout (which we overload to store the
++ * time at which the queue remains with no backlog and
++ * no outstanding request; used by the weight-raising
++ * mechanism).
++ */
++ bfqq->budget_timeout = jiffies;
++
+ bfq_weights_tree_remove(bfqd, &bfqq->entity,
+ &bfqd->queue_weights_tree);
+- if (!blk_queue_nonrot(bfqd->queue)) {
+- BUG_ON(!bfqd->busy_in_flight_queues);
+- bfqd->busy_in_flight_queues--;
+- if (bfq_bfqq_constantly_seeky(bfqq)) {
+- BUG_ON(!bfqd->
+- const_seeky_busy_in_flight_queues);
+- bfqd->const_seeky_busy_in_flight_queues--;
+- }
+- }
+ }
+
+- if (sync) {
+- bfqd->sync_flight--;
+- RQ_BIC(rq)->ttime.last_end_request = jiffies;
+- }
++ now_ns = ktime_get_ns();
++
++ RQ_BIC(rq)->ttime.last_end_request = now_ns;
++
++ /*
++ * Using us instead of ns, to get a reasonable precision in
++ * computing rate in next check.
++ */
++ delta_us = div_u64(now_ns - bfqd->last_completion, NSEC_PER_USEC);
++
++ bfq_log(bfqd, "rq_completed: delta %uus/%luus max_size %u rate %llu/%llu",
++ delta_us, BFQ_MIN_TT/NSEC_PER_USEC, bfqd->last_rq_max_size,
++ (USEC_PER_SEC*
++ (u64)((bfqd->last_rq_max_size<<BFQ_RATE_SHIFT)/delta_us))
++ >>BFQ_RATE_SHIFT,
++ (USEC_PER_SEC*(u64)(1UL<<(BFQ_RATE_SHIFT-10)))>>BFQ_RATE_SHIFT);
++
++ /*
++ * If the request took rather long to complete, and, according
++ * to the maximum request size recorded, this completion latency
++ * implies that the request was certainly served at a very low
++ * rate (less than 1M sectors/sec), then the whole observation
++ * interval that lasts up to this time instant cannot be a
++ * valid time interval for computing a new peak rate. Invoke
++ * bfq_update_rate_reset to have the following three steps
++ * taken:
++ * - close the observation interval at the last (previous)
++ * request dispatch or completion
++ * - compute rate, if possible, for that observation interval
++ * - reset to zero samples, which will trigger a proper
++ * re-initialization of the observation interval on next
++ * dispatch
++ */
++ if (delta_us > BFQ_MIN_TT/NSEC_PER_USEC &&
++ (bfqd->last_rq_max_size<<BFQ_RATE_SHIFT)/delta_us <
++ 1UL<<(BFQ_RATE_SHIFT - 10))
++ bfq_update_rate_reset(bfqd, NULL);
++ bfqd->last_completion = now_ns;
+
+ /*
+- * If we are waiting to discover whether the request pattern of the
+- * task associated with the queue is actually isochronous, and
+- * both requisites for this condition to hold are satisfied, then
+- * compute soft_rt_next_start (see the comments to the function
+- * bfq_bfqq_softrt_next_start()).
++ * If we are waiting to discover whether the request pattern
++ * of the task associated with the queue is actually
++ * isochronous, and both requisites for this condition to hold
++ * are now satisfied, then compute soft_rt_next_start (see the
++ * comments on the function bfq_bfqq_softrt_next_start()). We
++ * schedule this delayed check when bfqq expires, if it still
++ * has in-flight requests.
+ */
+ if (bfq_bfqq_softrt_update(bfqq) && bfqq->dispatched == 0 &&
+ RB_EMPTY_ROOT(&bfqq->sort_list))
+@@ -3613,10 +4409,7 @@ static void bfq_completed_request(struct request_queue *q, struct request *rq)
+ * or if we want to idle in case it has no pending requests.
+ */
+ if (bfqd->in_service_queue == bfqq) {
+- if (bfq_bfqq_budget_new(bfqq))
+- bfq_set_budget_timeout(bfqd);
+-
+- if (bfq_bfqq_must_idle(bfqq)) {
++ if (bfqq->dispatched == 0 && bfq_bfqq_must_idle(bfqq)) {
+ bfq_arm_slice_timer(bfqd);
+ goto out;
+ } else if (bfq_may_expire_for_budg_timeout(bfqq))
+@@ -3646,7 +4439,7 @@ static int __bfq_may_queue(struct bfq_queue *bfqq)
+ return ELV_MQUEUE_MAY;
+ }
+
+-static int bfq_may_queue(struct request_queue *q, int rw)
++static int bfq_may_queue(struct request_queue *q, unsigned int op)
+ {
+ struct bfq_data *bfqd = q->elevator->elevator_data;
+ struct task_struct *tsk = current;
+@@ -3663,7 +4456,7 @@ static int bfq_may_queue(struct request_queue *q, int rw)
+ if (!bic)
+ return ELV_MQUEUE_MAY;
+
+- bfqq = bic_to_bfqq(bic, rw_is_sync(rw));
++ bfqq = bic_to_bfqq(bic, op_is_sync(op));
+ if (bfqq)
+ return __bfq_may_queue(bfqq);
+
+@@ -3687,14 +4480,14 @@ static void bfq_put_request(struct request *rq)
+ rq->elv.priv[1] = NULL;
+
+ bfq_log_bfqq(bfqq->bfqd, bfqq, "put_request %p, %d",
+- bfqq, atomic_read(&bfqq->ref));
++ bfqq, bfqq->ref);
+ bfq_put_queue(bfqq);
+ }
+ }
+
+ /*
+ * Returns NULL if a new bfqq should be allocated, or the old bfqq if this
+- * was the last process referring to said bfqq.
++ * was the last process referring to that bfqq.
+ */
+ static struct bfq_queue *
+ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
+@@ -3732,11 +4525,8 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ unsigned long flags;
+ bool split = false;
+
+- might_sleep_if(gfpflags_allow_blocking(gfp_mask));
+-
+- bfq_check_ioprio_change(bic, bio);
+-
+ spin_lock_irqsave(q->queue_lock, flags);
++ bfq_check_ioprio_change(bic, bio);
+
+ if (!bic)
+ goto queue_fail;
+@@ -3746,23 +4536,47 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ new_queue:
+ bfqq = bic_to_bfqq(bic, is_sync);
+ if (!bfqq || bfqq == &bfqd->oom_bfqq) {
+- bfqq = bfq_get_queue(bfqd, bio, is_sync, bic, gfp_mask);
++ if (bfqq)
++ bfq_put_queue(bfqq);
++ bfqq = bfq_get_queue(bfqd, bio, is_sync, bic);
++ BUG_ON(!hlist_unhashed(&bfqq->burst_list_node));
++
+ bic_set_bfqq(bic, bfqq, is_sync);
+ if (split && is_sync) {
++ bfq_log_bfqq(bfqd, bfqq,
++ "set_request: was_in_list %d "
++ "was_in_large_burst %d "
++ "large burst in progress %d",
++ bic->was_in_burst_list,
++ bic->saved_in_large_burst,
++ bfqd->large_burst);
++
+ if ((bic->was_in_burst_list && bfqd->large_burst) ||
+- bic->saved_in_large_burst)
++ bic->saved_in_large_burst) {
++ bfq_log_bfqq(bfqd, bfqq,
++ "set_request: marking in "
++ "large burst");
+ bfq_mark_bfqq_in_large_burst(bfqq);
+- else {
++ } else {
++ bfq_log_bfqq(bfqd, bfqq,
++ "set_request: clearing in "
++ "large burst");
+ bfq_clear_bfqq_in_large_burst(bfqq);
+ if (bic->was_in_burst_list)
+ hlist_add_head(&bfqq->burst_list_node,
+ &bfqd->burst_list);
+ }
++ bfqq->split_time = jiffies;
+ }
+ } else {
+ /* If the queue was seeky for too long, break it apart. */
+ if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) {
+ bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq");
++
++ /* Update bic before losing reference to bfqq */
++ if (bfq_bfqq_in_large_burst(bfqq))
++ bic->saved_in_large_burst = true;
++
+ bfqq = bfq_split_bfqq(bic, bfqq);
+ split = true;
+ if (!bfqq)
+@@ -3771,9 +4585,8 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ }
+
+ bfqq->allocated[rw]++;
+- atomic_inc(&bfqq->ref);
+- bfq_log_bfqq(bfqd, bfqq, "set_request: bfqq %p, %d", bfqq,
+- atomic_read(&bfqq->ref));
++ bfqq->ref++;
++ bfq_log_bfqq(bfqd, bfqq, "set_request: bfqq %p, %d", bfqq, bfqq->ref);
+
+ rq->elv.priv[0] = bic;
+ rq->elv.priv[1] = bfqq;
+@@ -3788,7 +4601,6 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ if (likely(bfqq != &bfqd->oom_bfqq) && bfqq_process_refs(bfqq) == 1) {
+ bfqq->bic = bic;
+ if (split) {
+- bfq_mark_bfqq_just_split(bfqq);
+ /*
+ * If the queue has just been split from a shared
+ * queue, restore the idle window and the possible
+@@ -3798,6 +4610,9 @@ static int bfq_set_request(struct request_queue *q, struct request *rq,
+ }
+ }
+
++ if (unlikely(bfq_bfqq_just_created(bfqq)))
++ bfq_handle_burst(bfqd, bfqq);
++
+ spin_unlock_irqrestore(q->queue_lock, flags);
+
+ return 0;
+@@ -3824,9 +4639,10 @@ static void bfq_kick_queue(struct work_struct *work)
+ * Handler of the expiration of the timer running if the in-service queue
+ * is idling inside its time slice.
+ */
+-static void bfq_idle_slice_timer(unsigned long data)
++static enum hrtimer_restart bfq_idle_slice_timer(struct hrtimer *timer)
+ {
+- struct bfq_data *bfqd = (struct bfq_data *)data;
++ struct bfq_data *bfqd = container_of(timer, struct bfq_data,
++ idle_slice_timer);
+ struct bfq_queue *bfqq;
+ unsigned long flags;
+ enum bfqq_expiration reason;
+@@ -3844,6 +4660,8 @@ static void bfq_idle_slice_timer(unsigned long data)
+ */
+ if (bfqq) {
+ bfq_log_bfqq(bfqd, bfqq, "slice_timer expired");
++ bfq_clear_bfqq_wait_request(bfqq);
++
+ if (bfq_bfqq_budget_timeout(bfqq))
+ /*
+ * Also here the queue can be safely expired
+@@ -3869,11 +4687,12 @@ static void bfq_idle_slice_timer(unsigned long data)
+ bfq_schedule_dispatch(bfqd);
+
+ spin_unlock_irqrestore(bfqd->queue->queue_lock, flags);
++ return HRTIMER_NORESTART;
+ }
+
+ static void bfq_shutdown_timer_wq(struct bfq_data *bfqd)
+ {
+- del_timer_sync(&bfqd->idle_slice_timer);
++ hrtimer_cancel(&bfqd->idle_slice_timer);
+ cancel_work_sync(&bfqd->unplug_work);
+ }
+
+@@ -3885,9 +4704,9 @@ static void __bfq_put_async_bfqq(struct bfq_data *bfqd,
+
+ bfq_log(bfqd, "put_async_bfqq: %p", bfqq);
+ if (bfqq) {
+- bfq_bfqq_move(bfqd, bfqq, &bfqq->entity, root_group);
++ bfq_bfqq_move(bfqd, bfqq, root_group);
+ bfq_log_bfqq(bfqd, bfqq, "put_async_bfqq: putting %p, %d",
+- bfqq, atomic_read(&bfqq->ref));
++ bfqq, bfqq->ref);
+ bfq_put_queue(bfqq);
+ *bfqq_ptr = NULL;
+ }
+@@ -3922,19 +4741,18 @@ static void bfq_exit_queue(struct elevator_queue *e)
+
+ BUG_ON(bfqd->in_service_queue);
+ list_for_each_entry_safe(bfqq, n, &bfqd->idle_list, bfqq_list)
+- bfq_deactivate_bfqq(bfqd, bfqq, 0);
++ bfq_deactivate_bfqq(bfqd, bfqq, false, false);
+
+ spin_unlock_irq(q->queue_lock);
+
+ bfq_shutdown_timer_wq(bfqd);
+
+- synchronize_rcu();
+-
+- BUG_ON(timer_pending(&bfqd->idle_slice_timer));
++ BUG_ON(hrtimer_active(&bfqd->idle_slice_timer));
+
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ blkcg_deactivate_policy(q, &blkcg_policy_bfq);
+ #else
++ bfq_put_async_queues(bfqd, bfqd->root_group);
+ kfree(bfqd->root_group);
+ #endif
+
+@@ -3954,6 +4772,7 @@ static void bfq_init_root_group(struct bfq_group *root_group,
+ root_group->rq_pos_tree = RB_ROOT;
+ for (i = 0; i < BFQ_IOPRIO_CLASSES; i++)
+ root_group->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT;
++ root_group->sched_data.bfq_class_idle_last_service = jiffies;
+ }
+
+ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
+@@ -3978,11 +4797,14 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
+ * will not attempt to free it.
+ */
+ bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0);
+- atomic_inc(&bfqd->oom_bfqq.ref);
++ bfqd->oom_bfqq.ref++;
+ bfqd->oom_bfqq.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO;
+ bfqd->oom_bfqq.new_ioprio_class = IOPRIO_CLASS_BE;
+ bfqd->oom_bfqq.entity.new_weight =
+ bfq_ioprio_to_weight(bfqd->oom_bfqq.new_ioprio);
++
++ /* oom_bfqq does not participate in bursts */
++ bfq_clear_bfqq_just_created(&bfqd->oom_bfqq);
+ /*
+ * Trigger weight initialization, according to ioprio, at the
+ * oom_bfqq's first activation. The oom_bfqq's ioprio and ioprio
+@@ -4001,13 +4823,10 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
+ goto out_free;
+ bfq_init_root_group(bfqd->root_group, bfqd);
+ bfq_init_entity(&bfqd->oom_bfqq.entity, bfqd->root_group);
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+- bfqd->active_numerous_groups = 0;
+-#endif
+
+- init_timer(&bfqd->idle_slice_timer);
++ hrtimer_init(&bfqd->idle_slice_timer, CLOCK_MONOTONIC,
++ HRTIMER_MODE_REL);
+ bfqd->idle_slice_timer.function = bfq_idle_slice_timer;
+- bfqd->idle_slice_timer.data = (unsigned long)bfqd;
+
+ bfqd->queue_weights_tree = RB_ROOT;
+ bfqd->group_weights_tree = RB_ROOT;
+@@ -4027,21 +4846,19 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
+ bfqd->bfq_back_max = bfq_back_max;
+ bfqd->bfq_back_penalty = bfq_back_penalty;
+ bfqd->bfq_slice_idle = bfq_slice_idle;
+- bfqd->bfq_class_idle_last_service = 0;
+- bfqd->bfq_max_budget_async_rq = bfq_max_budget_async_rq;
+- bfqd->bfq_timeout[BLK_RW_ASYNC] = bfq_timeout_async;
+- bfqd->bfq_timeout[BLK_RW_SYNC] = bfq_timeout_sync;
++ bfqd->bfq_timeout = bfq_timeout;
+
+- bfqd->bfq_coop_thresh = 2;
+- bfqd->bfq_failed_cooperations = 7000;
+ bfqd->bfq_requests_within_timer = 120;
+
+- bfqd->bfq_large_burst_thresh = 11;
+- bfqd->bfq_burst_interval = msecs_to_jiffies(500);
++ bfqd->bfq_large_burst_thresh = 8;
++ bfqd->bfq_burst_interval = msecs_to_jiffies(180);
+
+ bfqd->low_latency = true;
+
+- bfqd->bfq_wr_coeff = 20;
++ /*
++ * Trade-off between responsiveness and fairness.
++ */
++ bfqd->bfq_wr_coeff = 30;
+ bfqd->bfq_wr_rt_max_time = msecs_to_jiffies(300);
+ bfqd->bfq_wr_max_time = 0;
+ bfqd->bfq_wr_min_idle_time = msecs_to_jiffies(2000);
+@@ -4053,16 +4870,15 @@ static int bfq_init_queue(struct request_queue *q, struct elevator_type *e)
+ * video.
+ */
+ bfqd->wr_busy_queues = 0;
+- bfqd->busy_in_flight_queues = 0;
+- bfqd->const_seeky_busy_in_flight_queues = 0;
+
+ /*
+- * Begin by assuming, optimistically, that the device peak rate is
+- * equal to the highest reference rate.
++ * Begin by assuming, optimistically, that the device is a
++ * high-speed one, and that its peak rate is equal to 2/3 of
++ * the highest reference rate.
+ */
+ bfqd->RT_prod = R_fast[blk_queue_nonrot(bfqd->queue)] *
+ T_fast[blk_queue_nonrot(bfqd->queue)];
+- bfqd->peak_rate = R_fast[blk_queue_nonrot(bfqd->queue)];
++ bfqd->peak_rate = R_fast[blk_queue_nonrot(bfqd->queue)] * 2 / 3;
+ bfqd->device_speed = BFQ_BFQD_FAST;
+
+ return 0;
+@@ -4088,7 +4904,7 @@ static int __init bfq_slab_setup(void)
+
+ static ssize_t bfq_var_show(unsigned int var, char *page)
+ {
+- return sprintf(page, "%d\n", var);
++ return sprintf(page, "%u\n", var);
+ }
+
+ static ssize_t bfq_var_store(unsigned long *var, const char *page,
+@@ -4159,21 +4975,21 @@ static ssize_t bfq_weights_show(struct elevator_queue *e, char *page)
+ static ssize_t __FUNC(struct elevator_queue *e, char *page) \
+ { \
+ struct bfq_data *bfqd = e->elevator_data; \
+- unsigned int __data = __VAR; \
+- if (__CONV) \
++ u64 __data = __VAR; \
++ if (__CONV == 1) \
+ __data = jiffies_to_msecs(__data); \
++ else if (__CONV == 2) \
++ __data = div_u64(__data, NSEC_PER_MSEC); \
+ return bfq_var_show(__data, (page)); \
+ }
+-SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 1);
+-SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 1);
++SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 2);
++SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 2);
+ SHOW_FUNCTION(bfq_back_seek_max_show, bfqd->bfq_back_max, 0);
+ SHOW_FUNCTION(bfq_back_seek_penalty_show, bfqd->bfq_back_penalty, 0);
+-SHOW_FUNCTION(bfq_slice_idle_show, bfqd->bfq_slice_idle, 1);
++SHOW_FUNCTION(bfq_slice_idle_show, bfqd->bfq_slice_idle, 2);
+ SHOW_FUNCTION(bfq_max_budget_show, bfqd->bfq_user_max_budget, 0);
+-SHOW_FUNCTION(bfq_max_budget_async_rq_show,
+- bfqd->bfq_max_budget_async_rq, 0);
+-SHOW_FUNCTION(bfq_timeout_sync_show, bfqd->bfq_timeout[BLK_RW_SYNC], 1);
+-SHOW_FUNCTION(bfq_timeout_async_show, bfqd->bfq_timeout[BLK_RW_ASYNC], 1);
++SHOW_FUNCTION(bfq_timeout_sync_show, bfqd->bfq_timeout, 1);
++SHOW_FUNCTION(bfq_strict_guarantees_show, bfqd->strict_guarantees, 0);
+ SHOW_FUNCTION(bfq_low_latency_show, bfqd->low_latency, 0);
+ SHOW_FUNCTION(bfq_wr_coeff_show, bfqd->bfq_wr_coeff, 0);
+ SHOW_FUNCTION(bfq_wr_rt_max_time_show, bfqd->bfq_wr_rt_max_time, 1);
+@@ -4183,6 +4999,17 @@ SHOW_FUNCTION(bfq_wr_min_inter_arr_async_show, bfqd->bfq_wr_min_inter_arr_async,
+ SHOW_FUNCTION(bfq_wr_max_softrt_rate_show, bfqd->bfq_wr_max_softrt_rate, 0);
+ #undef SHOW_FUNCTION
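
For reference, the __CONV argument of SHOW_FUNCTION now selects the unit conversion applied before printing: 1 still converts jiffies to milliseconds, while 2 converts the new nanosecond-resolution fields to milliseconds. A rough sketch of what the macro expands to for a __CONV == 2 field such as bfq_slice_idle (simplified, not the literal preprocessor output):

	static ssize_t bfq_slice_idle_show(struct elevator_queue *e, char *page)
	{
		struct bfq_data *bfqd = e->elevator_data;
		u64 __data = bfqd->bfq_slice_idle;	/* stored in nanoseconds */

		/* __CONV == 2: report the value in milliseconds */
		__data = div_u64(__data, NSEC_PER_MSEC);
		return bfq_var_show(__data, page);
	}
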
+
++#define USEC_SHOW_FUNCTION(__FUNC, __VAR) \
++static ssize_t __FUNC(struct elevator_queue *e, char *page) \
++{ \
++ struct bfq_data *bfqd = e->elevator_data; \
++ u64 __data = __VAR; \
++ __data = div_u64(__data, NSEC_PER_USEC); \
++ return bfq_var_show(__data, (page)); \
++}
++USEC_SHOW_FUNCTION(bfq_slice_idle_us_show, bfqd->bfq_slice_idle);
++#undef USEC_SHOW_FUNCTION
++
+ #define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV) \
+ static ssize_t \
+ __FUNC(struct elevator_queue *e, const char *page, size_t count) \
+@@ -4194,24 +5021,22 @@ __FUNC(struct elevator_queue *e, const char *page, size_t count) \
+ __data = (MIN); \
+ else if (__data > (MAX)) \
+ __data = (MAX); \
+- if (__CONV) \
++ if (__CONV == 1) \
+ *(__PTR) = msecs_to_jiffies(__data); \
++ else if (__CONV == 2) \
++ *(__PTR) = (u64)__data * NSEC_PER_MSEC; \
+ else \
+ *(__PTR) = __data; \
+ return ret; \
+ }
+ STORE_FUNCTION(bfq_fifo_expire_sync_store, &bfqd->bfq_fifo_expire[1], 1,
+- INT_MAX, 1);
++ INT_MAX, 2);
+ STORE_FUNCTION(bfq_fifo_expire_async_store, &bfqd->bfq_fifo_expire[0], 1,
+- INT_MAX, 1);
++ INT_MAX, 2);
+ STORE_FUNCTION(bfq_back_seek_max_store, &bfqd->bfq_back_max, 0, INT_MAX, 0);
+ STORE_FUNCTION(bfq_back_seek_penalty_store, &bfqd->bfq_back_penalty, 1,
+ INT_MAX, 0);
+-STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 1);
+-STORE_FUNCTION(bfq_max_budget_async_rq_store, &bfqd->bfq_max_budget_async_rq,
+- 1, INT_MAX, 0);
+-STORE_FUNCTION(bfq_timeout_async_store, &bfqd->bfq_timeout[BLK_RW_ASYNC], 0,
+- INT_MAX, 1);
++STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 2);
+ STORE_FUNCTION(bfq_wr_coeff_store, &bfqd->bfq_wr_coeff, 1, INT_MAX, 0);
+ STORE_FUNCTION(bfq_wr_max_time_store, &bfqd->bfq_wr_max_time, 0, INT_MAX, 1);
+ STORE_FUNCTION(bfq_wr_rt_max_time_store, &bfqd->bfq_wr_rt_max_time, 0, INT_MAX,
+@@ -4224,6 +5049,23 @@ STORE_FUNCTION(bfq_wr_max_softrt_rate_store, &bfqd->bfq_wr_max_softrt_rate, 0,
+ INT_MAX, 0);
+ #undef STORE_FUNCTION
+
++#define USEC_STORE_FUNCTION(__FUNC, __PTR, MIN, MAX) \
++static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)\
++{ \
++ struct bfq_data *bfqd = e->elevator_data; \
++ unsigned long uninitialized_var(__data); \
++ int ret = bfq_var_store(&__data, (page), count); \
++ if (__data < (MIN)) \
++ __data = (MIN); \
++ else if (__data > (MAX)) \
++ __data = (MAX); \
++ *(__PTR) = (u64)__data * NSEC_PER_USEC; \
++ return ret; \
++}
++USEC_STORE_FUNCTION(bfq_slice_idle_us_store, &bfqd->bfq_slice_idle, 0,
++ UINT_MAX);
++#undef USEC_STORE_FUNCTION
++
+ /* do nothing for the moment */
+ static ssize_t bfq_weights_store(struct elevator_queue *e,
+ const char *page, size_t count)
+@@ -4231,16 +5073,6 @@ static ssize_t bfq_weights_store(struct elevator_queue *e,
+ return count;
+ }
+
+-static unsigned long bfq_estimated_max_budget(struct bfq_data *bfqd)
+-{
+- u64 timeout = jiffies_to_msecs(bfqd->bfq_timeout[BLK_RW_SYNC]);
+-
+- if (bfqd->peak_rate_samples >= BFQ_PEAK_RATE_SAMPLES)
+- return bfq_calc_max_budget(bfqd->peak_rate, timeout);
+- else
+- return bfq_default_max_budget;
+-}
+-
+ static ssize_t bfq_max_budget_store(struct elevator_queue *e,
+ const char *page, size_t count)
+ {
+@@ -4249,7 +5081,7 @@ static ssize_t bfq_max_budget_store(struct elevator_queue *e,
+ int ret = bfq_var_store(&__data, (page), count);
+
+ if (__data == 0)
+- bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++ bfqd->bfq_max_budget = bfq_calc_max_budget(bfqd);
+ else {
+ if (__data > INT_MAX)
+ __data = INT_MAX;
+@@ -4261,6 +5093,10 @@ static ssize_t bfq_max_budget_store(struct elevator_queue *e,
+ return ret;
+ }
+
++/*
++ * Leaving this name to preserve compatibility with the cfq
++ * parameter names, but this timeout is used for both sync and async.
++ */
+ static ssize_t bfq_timeout_sync_store(struct elevator_queue *e,
+ const char *page, size_t count)
+ {
+@@ -4273,9 +5109,27 @@ static ssize_t bfq_timeout_sync_store(struct elevator_queue *e,
+ else if (__data > INT_MAX)
+ __data = INT_MAX;
+
+- bfqd->bfq_timeout[BLK_RW_SYNC] = msecs_to_jiffies(__data);
++ bfqd->bfq_timeout = msecs_to_jiffies(__data);
+ if (bfqd->bfq_user_max_budget == 0)
+- bfqd->bfq_max_budget = bfq_estimated_max_budget(bfqd);
++ bfqd->bfq_max_budget = bfq_calc_max_budget(bfqd);
++
++ return ret;
++}
++
++static ssize_t bfq_strict_guarantees_store(struct elevator_queue *e,
++ const char *page, size_t count)
++{
++ struct bfq_data *bfqd = e->elevator_data;
++ unsigned long uninitialized_var(__data);
++ int ret = bfq_var_store(&__data, (page), count);
++
++ if (__data > 1)
++ __data = 1;
++ if (!bfqd->strict_guarantees && __data == 1
++ && bfqd->bfq_slice_idle < 8 * NSEC_PER_MSEC)
++ bfqd->bfq_slice_idle = 8 * NSEC_PER_MSEC;
++
++ bfqd->strict_guarantees = __data;
+
+ return ret;
+ }
+@@ -4305,10 +5159,10 @@ static struct elv_fs_entry bfq_attrs[] = {
+ BFQ_ATTR(back_seek_max),
+ BFQ_ATTR(back_seek_penalty),
+ BFQ_ATTR(slice_idle),
++ BFQ_ATTR(slice_idle_us),
+ BFQ_ATTR(max_budget),
+- BFQ_ATTR(max_budget_async_rq),
+ BFQ_ATTR(timeout_sync),
+- BFQ_ATTR(timeout_async),
++ BFQ_ATTR(strict_guarantees),
+ BFQ_ATTR(low_latency),
+ BFQ_ATTR(wr_coeff),
+ BFQ_ATTR(wr_max_time),
+@@ -4328,7 +5182,8 @@ static struct elevator_type iosched_bfq = {
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ .elevator_bio_merged_fn = bfq_bio_merged,
+ #endif
+- .elevator_allow_merge_fn = bfq_allow_merge,
++ .elevator_allow_bio_merge_fn = bfq_allow_bio_merge,
++ .elevator_allow_rq_merge_fn = bfq_allow_rq_merge,
+ .elevator_dispatch_fn = bfq_dispatch_requests,
+ .elevator_add_req_fn = bfq_insert_request,
+ .elevator_activate_req_fn = bfq_activate_request,
+@@ -4351,18 +5206,28 @@ static struct elevator_type iosched_bfq = {
+ .elevator_owner = THIS_MODULE,
+ };
+
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++static struct blkcg_policy blkcg_policy_bfq = {
++ .dfl_cftypes = bfq_blkg_files,
++ .legacy_cftypes = bfq_blkcg_legacy_files,
++
++ .cpd_alloc_fn = bfq_cpd_alloc,
++ .cpd_init_fn = bfq_cpd_init,
++ .cpd_bind_fn = bfq_cpd_init,
++ .cpd_free_fn = bfq_cpd_free,
++
++ .pd_alloc_fn = bfq_pd_alloc,
++ .pd_init_fn = bfq_pd_init,
++ .pd_offline_fn = bfq_pd_offline,
++ .pd_free_fn = bfq_pd_free,
++ .pd_reset_stats_fn = bfq_pd_reset_stats,
++};
++#endif
++
+ static int __init bfq_init(void)
+ {
+ int ret;
+-
+- /*
+- * Can be 0 on HZ < 1000 setups.
+- */
+- if (bfq_slice_idle == 0)
+- bfq_slice_idle = 1;
+-
+- if (bfq_timeout_async == 0)
+- bfq_timeout_async = 1;
++ char msg[60] = "BFQ I/O-scheduler: v8r8";
+
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ ret = blkcg_policy_register(&blkcg_policy_bfq);
+@@ -4375,27 +5240,46 @@ static int __init bfq_init(void)
+ goto err_pol_unreg;
+
+ /*
+- * Times to load large popular applications for the typical systems
+- * installed on the reference devices (see the comments before the
+- * definitions of the two arrays).
++ * Times to load large popular applications for the typical
++ * systems installed on the reference devices (see the
++ * comments before the definitions of the next two
++ * arrays). Actually, we use slightly slower values, as the
++ * estimated peak rate tends to be smaller than the actual
++ * peak rate. The reason for this last fact is that estimates
++ * are computed over much shorter time intervals than the long
++ * intervals typically used for benchmarking. Why? First, to
++ * adapt more quickly to variations. Second, because an I/O
++ * scheduler cannot rely on a peak-rate-evaluation workload to
++ * be run for a long time.
+ */
+- T_slow[0] = msecs_to_jiffies(2600);
+- T_slow[1] = msecs_to_jiffies(1000);
+- T_fast[0] = msecs_to_jiffies(5500);
+- T_fast[1] = msecs_to_jiffies(2000);
++ T_slow[0] = msecs_to_jiffies(3500); /* actually 4 sec */
++ T_slow[1] = msecs_to_jiffies(6000); /* actually 6.5 sec */
++ T_fast[0] = msecs_to_jiffies(7000); /* actually 8 sec */
++ T_fast[1] = msecs_to_jiffies(2500); /* actually 3 sec */
+
+ /*
+- * Thresholds that determine the switch between speed classes (see
+- * the comments before the definition of the array).
++ * Thresholds that determine the switch between speed classes
++ * (see the comments before the definition of the array
++ * device_speed_thresh). These thresholds are biased towards
++ * transitions to the fast class. This is safer than the
++ * opposite bias. In fact, a wrong transition to the slow
++ * class results in short weight-raising periods, because the
++ * speed of the device then tends to be higher than the
++ * reference peak rate. On the opposite end, a wrong
++ * transition to the fast class tends to increase
++ * weight-raising periods, because of the opposite reason.
+ */
+- device_speed_thresh[0] = (R_fast[0] + R_slow[0]) / 2;
+- device_speed_thresh[1] = (R_fast[1] + R_slow[1]) / 2;
++ device_speed_thresh[0] = (4 * R_slow[0]) / 3;
++ device_speed_thresh[1] = (4 * R_slow[1]) / 3;
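
To make the bias concrete with hypothetical reference rates, say R_slow = 1000 and R_fast = 4000 (in the fixed-point units used by the R_* arrays):

	old threshold: (R_fast + R_slow) / 2 = (4000 + 1000) / 2 = 2500
	new threshold: (4 * R_slow) / 3      = (4 * 1000) / 3    = 1333

An estimated peak rate of 1500 would previously have classified the device as slow, whereas the new threshold classifies it as fast, i.e. the transition bias toward the fast class described in the comment above.
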
+
+ ret = elv_register(&iosched_bfq);
+ if (ret)
+ goto err_pol_unreg;
+
+- pr_info("BFQ I/O-scheduler: v7r11");
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ strcat(msg, " (with cgroups support)");
++#endif
++ pr_info("%s", msg);
+
+ return 0;
+
+diff --git a/block/bfq-sched.c b/block/bfq-sched.c
+index a5ed694..2e9dc59 100644
+--- a/block/bfq-sched.c
++++ b/block/bfq-sched.c
+@@ -7,28 +7,166 @@
+ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
+ * Paolo Valente <paolo.valente@unimore.it>
+ *
+- * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ * Copyright (C) 2015 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2016 Paolo Valente <paolo.valente@linaro.org>
++ */
++
++static struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
++
++/**
++ * bfq_gt - compare two timestamps.
++ * @a: first ts.
++ * @b: second ts.
++ *
++ * Return @a > @b, dealing with wrapping correctly.
++ */
++static int bfq_gt(u64 a, u64 b)
++{
++ return (s64)(a - b) > 0;
++}
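
bfq_gt() uses the usual unsigned-subtraction trick so the comparison stays correct when timestamps wrap around, as long as the two values are less than 2^63 apart. A minimal user-space sketch (hypothetical values, outside the kernel) illustrating the behaviour near a wrap:

	#include <stdint.h>
	#include <stdio.h>

	/* same logic as bfq_gt(), reproduced here only for illustration */
	static int ts_gt(uint64_t a, uint64_t b)
	{
		return (int64_t)(a - b) > 0;
	}

	int main(void)
	{
		uint64_t before_wrap = UINT64_MAX - 2;	/* just before the wrap */
		uint64_t after_wrap = 5;		/* just after the wrap */

		/* a plain '>' would order these the wrong way round */
		printf("%d %d\n", ts_gt(after_wrap, before_wrap),
		       ts_gt(before_wrap, after_wrap));	/* prints "1 0" */
		return 0;
	}
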
++
++static struct bfq_entity *bfq_root_active_entity(struct rb_root *tree)
++{
++ struct rb_node *node = tree->rb_node;
++
++ return rb_entry(node, struct bfq_entity, rb_node);
++}
++
++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd);
++
++static bool bfq_update_parent_budget(struct bfq_entity *next_in_service);
++
++/**
++ * bfq_update_next_in_service - update sd->next_in_service
++ * @sd: sched_data for which to perform the update.
++ * @new_entity: if not NULL, pointer to the entity whose activation,
++ * requeueing or repositioning triggered the invocation of
++ * this function.
++ *
++ * This function is called to update sd->next_in_service, which, in
++ * its turn, may change as a consequence of the insertion or
++ * extraction of an entity into/from one of the active trees of
++ * sd. These insertions/extractions occur as a consequence of
++ * activations/deactivations of entities, with some activations being
++ * 'true' activations, and other activations being requeueings (i.e.,
++ * implementing the second, requeueing phase of the mechanism used to
++ * reposition an entity in its active tree; see comments on
++ * __bfq_activate_entity and __bfq_requeue_entity for details). In
++ * both the last two activation sub-cases, new_entity points to the
++ * just activated or requeued entity.
++ *
++ * Returns true if sd->next_in_service changes in such a way that
++ * entity->parent may become the next_in_service for its parent
++ * entity.
+ */
++static bool bfq_update_next_in_service(struct bfq_sched_data *sd,
++ struct bfq_entity *new_entity)
++{
++ struct bfq_entity *next_in_service = sd->next_in_service;
++ struct bfq_queue *bfqq;
++ bool parent_sched_may_change = false;
++
++ /*
++ * If this update is triggered by the activation, requeueing
++ * or repositioning of an entity that does not coincide with
++ * sd->next_in_service, then a full lookup in the active tree
++ * can be avoided. In fact, it is enough to check whether the
++ * just-modified entity has a higher priority than
++ * sd->next_in_service, or, even if it has the same priority
++ * as sd->next_in_service, is eligible and has a lower virtual
++ * finish time than sd->next_in_service. If this compound
++ * condition holds, then the new entity becomes the new
++ * next_in_service. Otherwise no change is needed.
++ */
++ if (new_entity && new_entity != sd->next_in_service) {
++ /*
++ * Flag used to decide whether to replace
++ * sd->next_in_service with new_entity. Tentatively
++ * set to true, and left as true if
++ * sd->next_in_service is NULL.
++ */
++ bool replace_next = true;
++
++ /*
++ * If there is already a next_in_service candidate
++ * entity, then compare class priorities or timestamps
++ * to decide whether to replace sd->next_in_service with
++ * new_entity.
++ */
++ if (next_in_service) {
++ unsigned int new_entity_class_idx =
++ bfq_class_idx(new_entity);
++ struct bfq_service_tree *st =
++ sd->service_tree + new_entity_class_idx;
++
++ /*
++ * For efficiency, evaluate the most likely
++ * sub-condition first.
++ */
++ replace_next =
++ (new_entity_class_idx ==
++ bfq_class_idx(next_in_service)
++ &&
++ !bfq_gt(new_entity->start, st->vtime)
++ &&
++ bfq_gt(next_in_service->finish,
++ new_entity->finish))
++ ||
++ new_entity_class_idx <
++ bfq_class_idx(next_in_service);
++ }
++
++ if (replace_next)
++ next_in_service = new_entity;
++ } else /* invoked because of a deactivation: lookup needed */
++ next_in_service = bfq_lookup_next_entity(sd);
++
++ if (next_in_service) {
++ parent_sched_may_change = !sd->next_in_service ||
++ bfq_update_parent_budget(next_in_service);
++ }
++
++ sd->next_in_service = next_in_service;
++
++ if (!next_in_service)
++ return parent_sched_may_change;
+
++ bfqq = bfq_entity_to_bfqq(next_in_service);
++ if (bfqq)
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "update_next_in_service: chosen this queue");
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+-#define for_each_entity(entity) \
++ else {
++ struct bfq_group *bfqg =
++ container_of(next_in_service,
++ struct bfq_group, entity);
++
++ bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++ "update_next_in_service: chosen this entity");
++ }
++#endif
++ return parent_sched_may_change;
++}
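
As a concrete reading of the compound replace_next condition (hypothetical timestamps): with st->vtime = 100, a newly activated BE entity with start = 90 and finish = 150 replaces a BE next_in_service whose finish is 180, because it is eligible and finishes earlier; a newly activated RT entity (class index 0) replaces a BE next_in_service (class index 1) unconditionally, through the second disjunct.
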
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++/* both next loops stop at one of the child entities of the root group */
++#define for_each_entity(entity) \
+ for (; entity ; entity = entity->parent)
+
+ #define for_each_entity_safe(entity, parent) \
+ for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
+
+-
+-static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
+- int extract,
+- struct bfq_data *bfqd);
+-
+-static struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
+-
+-static void bfq_update_budget(struct bfq_entity *next_in_service)
++/*
++ * Returns true if this budget changes may let next_in_service->parent
++ * become the next_in_service entity for its parent entity.
++ */
++static bool bfq_update_parent_budget(struct bfq_entity *next_in_service)
+ {
+ struct bfq_entity *bfqg_entity;
+ struct bfq_group *bfqg;
+ struct bfq_sched_data *group_sd;
++ bool ret = false;
+
+ BUG_ON(!next_in_service);
+
+@@ -41,60 +179,68 @@ static void bfq_update_budget(struct bfq_entity *next_in_service)
+ * as it must never become an in-service entity.
+ */
+ bfqg_entity = bfqg->my_entity;
+- if (bfqg_entity)
++ if (bfqg_entity) {
++ if (bfqg_entity->budget > next_in_service->budget)
++ ret = true;
+ bfqg_entity->budget = next_in_service->budget;
++ }
++
++ return ret;
+ }
+
+-static int bfq_update_next_in_service(struct bfq_sched_data *sd)
++/*
++ * This function tells whether entity stops being a candidate for next
++ * service, according to the following logic.
++ *
++ * This function is invoked for an entity that is about to be set in
++ * service. If such an entity is a queue, then the entity is no longer
++ * a candidate for next service (i.e., a candidate entity to serve
++ * after the in-service entity is expired). The function then returns
++ * true.
++ *
++ * In contrast, the entity could still be a candidate for next service
++ * if it is not a queue, and has more than one child. In fact, even if
++ * one of its children is about to be set in service, other children
++ * may still be the next to serve. As a consequence, a non-queue
++ * entity is not a candidate for next-service only if it has only one
++ * child. Only if this condition holds does the function return
++ * true for a non-queue entity.
++ */
++static bool bfq_no_longer_next_in_service(struct bfq_entity *entity)
+ {
+- struct bfq_entity *next_in_service;
++ struct bfq_group *bfqg;
+
+- if (sd->in_service_entity)
+- /* will update/requeue at the end of service */
+- return 0;
++ if (bfq_entity_to_bfqq(entity))
++ return true;
+
+- /*
+- * NOTE: this can be improved in many ways, such as returning
+- * 1 (and thus propagating upwards the update) only when the
+- * budget changes, or caching the bfqq that will be scheduled
+- * next from this subtree. By now we worry more about
+- * correctness than about performance...
+- */
+- next_in_service = bfq_lookup_next_entity(sd, 0, NULL);
+- sd->next_in_service = next_in_service;
++ bfqg = container_of(entity, struct bfq_group, entity);
+
+- if (next_in_service)
+- bfq_update_budget(next_in_service);
++ BUG_ON(bfqg == ((struct bfq_data *)(bfqg->bfqd))->root_group);
++ BUG_ON(bfqg->active_entities == 0);
++ if (bfqg->active_entities == 1)
++ return true;
+
+- return 1;
++ return false;
+ }
+
+-static void bfq_check_next_in_service(struct bfq_sched_data *sd,
+- struct bfq_entity *entity)
+-{
+- BUG_ON(sd->next_in_service != entity);
+-}
+-#else
++#else /* CONFIG_BFQ_GROUP_IOSCHED */
+ #define for_each_entity(entity) \
+ for (; entity ; entity = NULL)
+
+ #define for_each_entity_safe(entity, parent) \
+ for (parent = NULL; entity ; entity = parent)
+
+-static int bfq_update_next_in_service(struct bfq_sched_data *sd)
++static bool bfq_update_parent_budget(struct bfq_entity *next_in_service)
+ {
+- return 0;
++ return false;
+ }
+
+-static void bfq_check_next_in_service(struct bfq_sched_data *sd,
+- struct bfq_entity *entity)
++static bool bfq_no_longer_next_in_service(struct bfq_entity *entity)
+ {
++ return true;
+ }
+
+-static void bfq_update_budget(struct bfq_entity *next_in_service)
+-{
+-}
+-#endif
++#endif /* CONFIG_BFQ_GROUP_IOSCHED */
+
+ /*
+ * Shift for timestamp calculations. This actually limits the maximum
+@@ -105,18 +251,6 @@ static void bfq_update_budget(struct bfq_entity *next_in_service)
+ */
+ #define WFQ_SERVICE_SHIFT 22
+
+-/**
+- * bfq_gt - compare two timestamps.
+- * @a: first ts.
+- * @b: second ts.
+- *
+- * Return @a > @b, dealing with wrapping correctly.
+- */
+-static int bfq_gt(u64 a, u64 b)
+-{
+- return (s64)(a - b) > 0;
+-}
+-
+ static struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity)
+ {
+ struct bfq_queue *bfqq = NULL;
+@@ -151,20 +285,36 @@ static u64 bfq_delta(unsigned long service, unsigned long weight)
+ static void bfq_calc_finish(struct bfq_entity *entity, unsigned long service)
+ {
+ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ unsigned long long start, finish, delta;
+
+ BUG_ON(entity->weight == 0);
+
+ entity->finish = entity->start +
+ bfq_delta(service, entity->weight);
+
++ start = ((entity->start>>10)*1000)>>12;
++ finish = ((entity->finish>>10)*1000)>>12;
++ delta = ((bfq_delta(service, entity->weight)>>10)*1000)>>12;
++
+ if (bfqq) {
+ bfq_log_bfqq(bfqq->bfqd, bfqq,
+ "calc_finish: serv %lu, w %d",
+ service, entity->weight);
+ bfq_log_bfqq(bfqq->bfqd, bfqq,
+ "calc_finish: start %llu, finish %llu, delta %llu",
+- entity->start, entity->finish,
+- bfq_delta(service, entity->weight));
++ start, finish, delta);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ } else {
++ struct bfq_group *bfqg =
++ container_of(entity, struct bfq_group, entity);
++
++ bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++ "calc_finish group: serv %lu, w %d",
++ service, entity->weight);
++ bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++ "calc_finish group: start %llu, finish %llu, delta %llu",
++ start, finish, delta);
++#endif
+ }
+ }
+
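
The ((x>>10)*1000)>>12 scaling used in the new log lines is a cheap approximation of x * 1000 / (1 << 22): with WFQ_SERVICE_SHIFT = 22 (defined above), timestamps carry 22 fractional bits, so the logged values are roughly the raw timestamps expressed in thousandths of a service unit. For instance:

	x = 1 << 22  (one service unit)
	((x >> 10) * 1000) >> 12 = (4096 * 1000) >> 12 = 1000
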
+@@ -293,10 +443,26 @@ static void bfq_update_min(struct bfq_entity *entity, struct rb_node *node)
+ static void bfq_update_active_node(struct rb_node *node)
+ {
+ struct bfq_entity *entity = rb_entry(node, struct bfq_entity, rb_node);
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
+
+ entity->min_start = entity->start;
+ bfq_update_min(entity, node->rb_right);
+ bfq_update_min(entity, node->rb_left);
++
++ if (bfqq) {
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "update_active_node: new min_start %llu",
++ ((entity->min_start>>10)*1000)>>12);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ } else {
++ struct bfq_group *bfqg =
++ container_of(entity, struct bfq_group, entity);
++
++ bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++ "update_active_node: new min_start %llu",
++ ((entity->min_start>>10)*1000)>>12);
++#endif
++ }
+ }
+
+ /**
+@@ -386,8 +552,6 @@ static void bfq_active_insert(struct bfq_service_tree *st,
+ BUG_ON(!bfqg);
+ BUG_ON(!bfqd);
+ bfqg->active_entities++;
+- if (bfqg->active_entities == 2)
+- bfqd->active_numerous_groups++;
+ }
+ #endif
+ }
+@@ -399,7 +563,7 @@ static void bfq_active_insert(struct bfq_service_tree *st,
+ static unsigned short bfq_ioprio_to_weight(int ioprio)
+ {
+ BUG_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR);
+- return IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF - ioprio;
++ return (IOPRIO_BE_NR - ioprio) * BFQ_WEIGHT_CONVERSION_COEFF;
+ }
+
+ /**
+@@ -422,9 +586,9 @@ static void bfq_get_entity(struct bfq_entity *entity)
+ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
+
+ if (bfqq) {
+- atomic_inc(&bfqq->ref);
++ bfqq->ref++;
+ bfq_log_bfqq(bfqq->bfqd, bfqq, "get_entity: %p %d",
+- bfqq, atomic_read(&bfqq->ref));
++ bfqq, bfqq->ref);
+ }
+ }
+
+@@ -499,10 +663,6 @@ static void bfq_active_extract(struct bfq_service_tree *st,
+ BUG_ON(!bfqd);
+ BUG_ON(!bfqg->active_entities);
+ bfqg->active_entities--;
+- if (bfqg->active_entities == 1) {
+- BUG_ON(!bfqd->active_numerous_groups);
+- bfqd->active_numerous_groups--;
+- }
+ }
+ #endif
+ }
+@@ -547,12 +707,12 @@ static void bfq_forget_entity(struct bfq_service_tree *st,
+
+ BUG_ON(!entity->on_st);
+
+- entity->on_st = 0;
++ entity->on_st = false;
+ st->wsum -= entity->weight;
+ if (bfqq) {
+ sd = entity->sched_data;
+ bfq_log_bfqq(bfqq->bfqd, bfqq, "forget_entity: %p %d",
+- bfqq, atomic_read(&bfqq->ref));
++ bfqq, bfqq->ref);
+ bfq_put_queue(bfqq);
+ }
+ }
+@@ -602,7 +762,7 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
+
+ if (entity->prio_changed) {
+ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
+- unsigned short prev_weight, new_weight;
++ unsigned int prev_weight, new_weight;
+ struct bfq_data *bfqd = NULL;
+ struct rb_root *root;
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+@@ -630,7 +790,10 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
+ entity->new_weight > BFQ_MAX_WEIGHT) {
+ pr_crit("update_weight_prio: new_weight %d\n",
+ entity->new_weight);
+- BUG();
++ if (entity->new_weight < BFQ_MIN_WEIGHT)
++ entity->new_weight = BFQ_MIN_WEIGHT;
++ else
++ entity->new_weight = BFQ_MAX_WEIGHT;
+ }
+ entity->orig_weight = entity->new_weight;
+ if (bfqq)
+@@ -661,6 +824,13 @@ __bfq_entity_update_weight_prio(struct bfq_service_tree *old_st,
+ * associated with its new weight.
+ */
+ if (prev_weight != new_weight) {
++ if (bfqq)
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "weight changed %d %d(%d %d)",
++ prev_weight, new_weight,
++ entity->orig_weight,
++ bfqq->wr_coeff);
++
+ root = bfqq ? &bfqd->queue_weights_tree :
+ &bfqd->group_weights_tree;
+ bfq_weights_tree_remove(bfqd, entity, root);
+@@ -707,7 +877,7 @@ static void bfq_bfqq_served(struct bfq_queue *bfqq, int served)
+ st = bfq_entity_service_tree(entity);
+
+ entity->service += served;
+- BUG_ON(entity->service > entity->budget);
++
+ BUG_ON(st->wsum == 0);
+
+ st->vtime += bfq_delta(served, st->wsum);
+@@ -716,234 +886,574 @@ static void bfq_bfqq_served(struct bfq_queue *bfqq, int served)
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+ bfqg_stats_set_start_empty_time(bfqq_group(bfqq));
+ #endif
+- bfq_log_bfqq(bfqq->bfqd, bfqq, "bfqq_served %d secs", served);
++ st = bfq_entity_service_tree(&bfqq->entity);
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "bfqq_served %d secs, vtime %llu on %p",
++ served, ((st->vtime>>10)*1000)>>12, st);
+ }
+
+ /**
+- * bfq_bfqq_charge_full_budget - set the service to the entity budget.
++ * bfq_bfqq_charge_time - charge an amount of service equivalent to the length
++ * of the time interval during which bfqq has been in
++ * service.
++ * @bfqd: the device
+ * @bfqq: the queue that needs a service update.
++ * @time_ms: the amount of time during which the queue has received service
++ *
++ * If a queue does not consume its budget fast enough, then providing
++ * the queue with service fairness may impair throughput, more or less
++ * severely. For this reason, queues that consume their budget slowly
++ * are provided with time fairness instead of service fairness. This
++ * goal is achieved through the BFQ scheduling engine, even if such an
++ * engine works in the service domain, and not in the time domain. The trick
++ * is charging these queues with an inflated amount of service, equal
++ * to the amount of service that they would have received during their
++ * service slot if they had been fast, i.e., if their requests had
++ * been dispatched at a rate equal to the estimated peak rate.
+ *
+- * When it's not possible to be fair in the service domain, because
+- * a queue is not consuming its budget fast enough (the meaning of
+- * fast depends on the timeout parameter), we charge it a full
+- * budget. In this way we should obtain a sort of time-domain
+- * fairness among all the seeky/slow queues.
++ * It is worth noting that time fairness can cause important
++ * distortions in terms of bandwidth distribution, on devices with
++ * internal queueing. The reason is that I/O requests dispatched
++ * during the service slot of a queue may be served after that service
++ * slot is finished, and may have a total processing time loosely
++ * correlated with the duration of the service slot. This is
++ * especially true for short service slots.
+ */
+-static void bfq_bfqq_charge_full_budget(struct bfq_queue *bfqq)
++static void bfq_bfqq_charge_time(struct bfq_data *bfqd, struct bfq_queue *bfqq,
++ unsigned long time_ms)
+ {
+ struct bfq_entity *entity = &bfqq->entity;
++ int tot_serv_to_charge = entity->service;
++ unsigned int timeout_ms = jiffies_to_msecs(bfq_timeout);
++
++ if (time_ms > 0 && time_ms < timeout_ms)
++ tot_serv_to_charge =
++ (bfqd->bfq_max_budget * time_ms) / timeout_ms;
+
+- bfq_log_bfqq(bfqq->bfqd, bfqq, "charge_full_budget");
++ if (tot_serv_to_charge < entity->service)
++ tot_serv_to_charge = entity->service;
+
+- bfq_bfqq_served(bfqq, entity->budget - entity->service);
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "charge_time: %lu/%u ms, %d/%d/%d sectors",
++ time_ms, timeout_ms, entity->service,
++ tot_serv_to_charge, entity->budget);
++
++ /* Increase budget to avoid inconsistencies */
++ if (tot_serv_to_charge > entity->budget)
++ entity->budget = tot_serv_to_charge;
++
++ bfq_bfqq_served(bfqq,
++ max_t(int, 0, tot_serv_to_charge - entity->service));
++}
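
A hypothetical worked example of the charging performed above: with bfqd->bfq_max_budget = 16384 sectors and a 125 ms bfq_timeout, a queue that stays in service for time_ms = 25 ms while dispatching only 500 sectors gets

	tot_serv_to_charge = (16384 * 25) / 125 = 3276 sectors

so bfq_bfqq_served() charges it an extra 3276 - 500 = 2776 sectors, i.e. roughly the service it would have consumed had it dispatched at the estimated peak rate (all numbers are illustrative, not defaults taken from the patch).
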
++
++static void bfq_update_fin_time_enqueue(struct bfq_entity *entity,
++ struct bfq_service_tree *st,
++ bool backshifted)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ struct bfq_sched_data *sd = entity->sched_data;
++
++ st = __bfq_entity_update_weight_prio(st, entity);
++ bfq_calc_finish(entity, entity->budget);
++
++ /*
++ * If some queues enjoy backshifting for a while, then their
++ * (virtual) finish timestamps may happen to become lower and
++ * lower than the system virtual time. In particular, if
++ * these queues often happen to be idle for short time
++ * periods, and during such time periods other queues with
++ * higher timestamps happen to be busy, then the backshifted
++ * timestamps of the former queues can become much lower than
++ * the system virtual time. In fact, to serve the queues with
++ * higher timestamps while the ones with lower timestamps are
++ * idle, the system virtual time may be pushed-up to much
++ * higher values than the finish timestamps of the idle
++ * queues. As a consequence, the finish timestamps of all new
++ * or newly activated queues may end up being much larger than
++ * those of lucky queues with backshifted timestamps. The
++ * latter queues may then monopolize the device for a lot of
++ * time. This would simply break service guarantees.
++ *
++ * To reduce this problem, push up a little bit the
++ * backshifted timestamps of the queue associated with this
++ * entity (only a queue can happen to have the backshifted
++ * flag set): just enough to let the finish timestamp of the
++ * queue be equal to the current value of the system virtual
++ * time. This may introduce a little unfairness among queues
++ * with backshifted timestamps, but it does not break
++ * worst-case fairness guarantees.
++ *
++ * As a special case, if bfqq is weight-raised, push up
++ * timestamps much less, to keep very low the probability that
++ * this push up causes the backshifted finish timestamps of
++ * weight-raised queues to become higher than the backshifted
++ * finish timestamps of non weight-raised queues.
++ */
++ if (backshifted && bfq_gt(st->vtime, entity->finish)) {
++ unsigned long delta = st->vtime - entity->finish;
++
++ if (bfqq)
++ delta /= bfqq->wr_coeff;
++
++ entity->start += delta;
++ entity->finish += delta;
++
++ if (bfqq) {
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "__activate_entity: new queue finish %llu",
++ ((entity->finish>>10)*1000)>>12);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ } else {
++ struct bfq_group *bfqg =
++ container_of(entity, struct bfq_group, entity);
++
++ bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++ "__activate_entity: new group finish %llu",
++ ((entity->finish>>10)*1000)>>12);
++#endif
++ }
++ }
++
++ bfq_active_insert(st, entity);
++
++ if (bfqq) {
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "__activate_entity: queue %seligible in st %p",
++ entity->start <= st->vtime ? "" : "non ", st);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ } else {
++ struct bfq_group *bfqg =
++ container_of(entity, struct bfq_group, entity);
++
++ bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++ "__activate_entity: group %seligible in st %p",
++ entity->start <= st->vtime ? "" : "non ", st);
++#endif
++ }
++ BUG_ON(RB_EMPTY_ROOT(&st->active));
++ BUG_ON(&st->active != &sd->service_tree->active &&
++ &st->active != &(sd->service_tree+1)->active &&
++ &st->active != &(sd->service_tree+2)->active);
+ }
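
A small numeric illustration of the push-up above (hypothetical timestamps): if st->vtime = 1000 and entity->finish = 400, then delta = 600. A non-weight-raised queue has both start and finish advanced by 600, while a queue weight-raised with the default bfq_wr_coeff of 30 set in bfq_init_queue above is advanced by only 600 / 30 = 20, so its backshifted timestamps remain well below those of non-raised queues.
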
+
+ /**
+- * __bfq_activate_entity - activate an entity.
++ * __bfq_activate_entity - handle activation of entity.
+ * @entity: the entity being activated.
++ * @non_blocking_wait_rq: true if entity was waiting for a request
++ *
++ * Called for a 'true' activation, i.e., if entity is not active and
++ * one of its children receives a new request.
+ *
+- * Called whenever an entity is activated, i.e., it is not active and one
+- * of its children receives a new request, or has to be reactivated due to
+- * budget exhaustion. It uses the current budget of the entity (and the
+- * service received if @entity is active) of the queue to calculate its
+- * timestamps.
++ * Basically, this function updates the timestamps of entity and
++ * inserts entity into its active tree, after possibly extracting it
++ * from its idle tree.
+ */
+-static void __bfq_activate_entity(struct bfq_entity *entity)
++static void __bfq_activate_entity(struct bfq_entity *entity,
++ bool non_blocking_wait_rq)
+ {
+ struct bfq_sched_data *sd = entity->sched_data;
+ struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++ bool backshifted = false;
++ unsigned long long min_vstart;
+
+- if (entity == sd->in_service_entity) {
+- BUG_ON(entity->tree);
+- /*
+- * If we are requeueing the current entity we have
+- * to take care of not charging to it service it has
+- * not received.
+- */
+- bfq_calc_finish(entity, entity->service);
+- entity->start = entity->finish;
+- sd->in_service_entity = NULL;
+- } else if (entity->tree == &st->active) {
+- /*
+- * Requeueing an entity due to a change of some
+- * next_in_service entity below it. We reuse the
+- * old start time.
+- */
+- bfq_active_extract(st, entity);
+- } else if (entity->tree == &st->idle) {
++ BUG_ON(!sd);
++ BUG_ON(!st);
++
++ /* See comments on bfq_bfqq_update_budg_for_activation */
++ if (non_blocking_wait_rq && bfq_gt(st->vtime, entity->finish)) {
++ backshifted = true;
++ min_vstart = entity->finish;
++ } else
++ min_vstart = st->vtime;
++
++ if (entity->tree == &st->idle) {
+ /*
+ * Must be on the idle tree, bfq_idle_extract() will
+ * check for that.
+ */
+ bfq_idle_extract(st, entity);
+- entity->start = bfq_gt(st->vtime, entity->finish) ?
+- st->vtime : entity->finish;
++ entity->start = bfq_gt(min_vstart, entity->finish) ?
++ min_vstart : entity->finish;
+ } else {
+ /*
+ * The finish time of the entity may be invalid, and
+ * it is in the past for sure, otherwise the queue
+ * would have been on the idle tree.
+ */
+- entity->start = st->vtime;
++ entity->start = min_vstart;
+ st->wsum += entity->weight;
+ bfq_get_entity(entity);
+
+- BUG_ON(entity->on_st);
+- entity->on_st = 1;
++ BUG_ON(entity->on_st && bfqq);
++
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ if (entity->on_st && !bfqq) {
++ struct bfq_group *bfqg =
++ container_of(entity, struct bfq_group,
++ entity);
++
++ bfq_log_bfqg((struct bfq_data *)bfqg->bfqd,
++ bfqg,
++ "activate bug, class %d in_service %p",
++ bfq_class_idx(entity), sd->in_service_entity);
++ }
++#endif
++ BUG_ON(entity->on_st && !bfqq);
++ entity->on_st = true;
+ }
+
+- st = __bfq_entity_update_weight_prio(st, entity);
+- bfq_calc_finish(entity, entity->budget);
+- bfq_active_insert(st, entity);
++ bfq_update_fin_time_enqueue(entity, st, backshifted);
+ }
+
+ /**
+- * bfq_activate_entity - activate an entity and its ancestors if necessary.
+- * @entity: the entity to activate.
++ * __bfq_requeue_entity - handle requeueing or repositioning of an entity.
++ * @entity: the entity being requeued or repositioned.
++ *
++ * Requeueing is needed if this entity stops being served, which
++ * happens if a leaf descendant entity has expired. On the other hand,
++ * repositioning is needed if the next_in_service entity for the child
++ * entity has changed. See the comments inside the function for
++ * details.
+ *
+- * Activate @entity and all the entities on the path from it to the root.
++ * Basically, this function: 1) removes entity from its active tree if
++ * present there, 2) updates the timestamps of entity and 3) inserts
++ * entity back into its active tree (in the new, right position for
++ * the new values of the timestamps).
+ */
+-static void bfq_activate_entity(struct bfq_entity *entity)
++static void __bfq_requeue_entity(struct bfq_entity *entity)
++{
++ struct bfq_sched_data *sd = entity->sched_data;
++ struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++
++ BUG_ON(!sd);
++ BUG_ON(!st);
++
++ BUG_ON(entity != sd->in_service_entity &&
++ entity->tree != &st->active);
++
++ if (entity == sd->in_service_entity) {
++ /*
++ * We are requeueing the current in-service entity,
++ * which may have to be done for one of the following
++ * reasons:
++ * - entity represents the in-service queue, and the
++ * in-service queue is being requeued after an
++ * expiration;
++ * - entity represents a group, and its budget has
++ * changed because one of its child entities has
++ * just been either activated or requeued for some
++ * reason; the timestamps of the entity need then to
++ * be updated, and the entity needs to be enqueued
++ * or repositioned accordingly.
++ *
++ * In particular, before requeueing, the start time of
++ * the entity must be moved forward to account for the
++ * service that the entity has received while in
++ * service. This is done by the next instructions. The
++ * finish time will then be updated according to this
++ * new value of the start time, and to the budget of
++ * the entity.
++ */
++ bfq_calc_finish(entity, entity->service);
++ entity->start = entity->finish;
++ BUG_ON(entity->tree && entity->tree != &st->active);
++ /*
++ * In addition, if the entity had more than one child
++ * when set in service, then it was not extracted from
++ * the active tree. This implies that the position of
++ * the entity in the active tree may need to be
++ * changed now, because we have just updated the start
++ * time of the entity, and we will update its finish
++ * time in a moment (the requeueing is then, more
++ * precisely, a repositioning in this case). To
++ * implement this repositioning, we: 1) dequeue the
++ * entity here, 2) update the finish time and
++ * requeue the entity according to the new
++ * timestamps below.
++ */
++ if (entity->tree)
++ bfq_active_extract(st, entity);
++ } else { /* The entity is already active, and not in service */
++ /*
++ * In this case, this function gets called only if the
++ * next_in_service entity below this entity has
++ * changed, and this change has caused the budget of
++ * this entity to change, which, finally implies that
++ * the finish time of this entity must be
++ * updated. Such an update may cause the scheduling,
++ * i.e., the position in the active tree, of this
++ * entity to change. We handle this change by: 1)
++ * dequeueing the entity here, 2) updating the finish
++ * time and requeueing the entity according to the new
++ * timestamps below. This is the same approach as the
++ * non-extracted-entity sub-case above.
++ */
++ bfq_active_extract(st, entity);
++ }
++
++ bfq_update_fin_time_enqueue(entity, st, false);
++}
++
++static void __bfq_activate_requeue_entity(struct bfq_entity *entity,
++ struct bfq_sched_data *sd,
++ bool non_blocking_wait_rq)
++{
++ struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++
++ if (sd->in_service_entity == entity || entity->tree == &st->active)
++ /*
++ * in service or already queued on the active tree,
++ * requeue or reposition
++ */
++ __bfq_requeue_entity(entity);
++ else
++ /*
++ * Not in service and not queued on its active tree:
++ * the entity is idle and this is a true activation.
++ */
++ __bfq_activate_entity(entity, non_blocking_wait_rq);
++}
++
++
++/**
++ * bfq_activate_entity - activate or requeue an entity representing a bfq_queue,
++ * and activate, requeue or reposition all ancestors
++ * for which such an update becomes necessary.
++ * @entity: the entity to activate.
++ * @non_blocking_wait_rq: true if this entity was waiting for a request
++ * @requeue: true if this is a requeue, which implies that bfqq is
++ * being expired; thus ALL its ancestors stop being served and must
++ * therefore be requeued
++ */
++static void bfq_activate_requeue_entity(struct bfq_entity *entity,
++ bool non_blocking_wait_rq,
++ bool requeue)
+ {
+ struct bfq_sched_data *sd;
+
+ for_each_entity(entity) {
+- __bfq_activate_entity(entity);
+-
++ BUG_ON(!entity);
+ sd = entity->sched_data;
+- if (!bfq_update_next_in_service(sd))
+- /*
+- * No need to propagate the activation to the
+- * upper entities, as they will be updated when
+- * the in-service entity is rescheduled.
+- */
++ __bfq_activate_requeue_entity(entity, sd, non_blocking_wait_rq);
++
++ BUG_ON(RB_EMPTY_ROOT(&sd->service_tree->active) &&
++ RB_EMPTY_ROOT(&(sd->service_tree+1)->active) &&
++ RB_EMPTY_ROOT(&(sd->service_tree+2)->active));
++
++ if (!bfq_update_next_in_service(sd, entity) && !requeue) {
++ BUG_ON(!sd->next_in_service);
+ break;
++ }
++ BUG_ON(!sd->next_in_service);
+ }
+ }
+
+ /**
+ * __bfq_deactivate_entity - deactivate an entity from its service tree.
+ * @entity: the entity to deactivate.
+- * @requeue: if false, the entity will not be put into the idle tree.
+- *
+- * Deactivate an entity, independently from its previous state. If the
+- * entity was not on a service tree just return, otherwise if it is on
+- * any scheduler tree, extract it from that tree, and if necessary
+- * and if the caller did not specify @requeue, put it on the idle tree.
++ * @ins_into_idle_tree: if false, the entity will not be put into the
++ * idle tree.
+ *
+- * Return %1 if the caller should update the entity hierarchy, i.e.,
+- * if the entity was in service or if it was the next_in_service for
+- * its sched_data; return %0 otherwise.
++ * Deactivates an entity, independently from its previous state. Must
++ * be invoked only if entity is on a service tree. Extracts the entity
++ * from that tree, and if necessary and allowed, puts it on the idle
++ * tree.
+ */
+-static int __bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
++static bool __bfq_deactivate_entity(struct bfq_entity *entity,
++ bool ins_into_idle_tree)
+ {
+ struct bfq_sched_data *sd = entity->sched_data;
+- struct bfq_service_tree *st;
+- int was_in_service;
+- int ret = 0;
+-
+- if (sd == NULL || !entity->on_st) /* never activated, or inactive */
+- return 0;
++ struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++ bool was_in_service = entity == sd->in_service_entity;
+
+- st = bfq_entity_service_tree(entity);
+- was_in_service = entity == sd->in_service_entity;
++ if (!entity->on_st) { /* entity never activated, or already inactive */
++ BUG_ON(entity == entity->sched_data->in_service_entity);
++ return false;
++ }
+
+- BUG_ON(was_in_service && entity->tree);
++ BUG_ON(was_in_service && entity->tree && entity->tree != &st->active);
+
+- if (was_in_service) {
++ if (was_in_service)
+ bfq_calc_finish(entity, entity->service);
+- sd->in_service_entity = NULL;
+- } else if (entity->tree == &st->active)
++
++ if (entity->tree == &st->active)
+ bfq_active_extract(st, entity);
+- else if (entity->tree == &st->idle)
++ else if (!was_in_service && entity->tree == &st->idle)
+ bfq_idle_extract(st, entity);
+ else if (entity->tree)
+ BUG();
+
+- if (was_in_service || sd->next_in_service == entity)
+- ret = bfq_update_next_in_service(sd);
+-
+- if (!requeue || !bfq_gt(entity->finish, st->vtime))
++ if (!ins_into_idle_tree || !bfq_gt(entity->finish, st->vtime))
+ bfq_forget_entity(st, entity);
+ else
+ bfq_idle_insert(st, entity);
+
+- BUG_ON(sd->in_service_entity == entity);
+- BUG_ON(sd->next_in_service == entity);
+-
+- return ret;
++ return true;
+ }
+
+ /**
+- * bfq_deactivate_entity - deactivate an entity.
++ * bfq_deactivate_entity - deactivate an entity representing a bfq_queue.
+ * @entity: the entity to deactivate.
+- * @requeue: true if the entity can be put on the idle tree
++ * @ins_into_idle_tree: true if the entity can be put on the idle tree
+ */
+-static void bfq_deactivate_entity(struct bfq_entity *entity, int requeue)
++static void bfq_deactivate_entity(struct bfq_entity *entity,
++ bool ins_into_idle_tree,
++ bool expiration)
+ {
+ struct bfq_sched_data *sd;
+- struct bfq_entity *parent;
++ struct bfq_entity *parent = NULL;
+
+ for_each_entity_safe(entity, parent) {
+ sd = entity->sched_data;
+
+- if (!__bfq_deactivate_entity(entity, requeue))
++ BUG_ON(sd == NULL); /*
++ * It would mean that this is the
++ * root group.
++ */
++
++ BUG_ON(expiration && entity != sd->in_service_entity);
++
++ BUG_ON(entity != sd->in_service_entity &&
++ entity->tree ==
++ &bfq_entity_service_tree(entity)->active &&
++ !sd->next_in_service);
++
++ if (!__bfq_deactivate_entity(entity, ins_into_idle_tree)) {
+ /*
+- * The parent entity is still backlogged, and
+- * we don't need to update it as it is still
+- * in service.
++ * Entity is not in any tree any more, so this
++ * deactivation is a no-op, and there is
++ * nothing to change for upper-level entities
++ * (in case of expiration, this can never
++ * happen).
+ */
+- break;
++ BUG_ON(expiration); /*
++ * entity cannot be already out of
++ * any tree
++ */
++ return;
++ }
+
+- if (sd->next_in_service)
++ if (sd->next_in_service == entity)
+ /*
+- * The parent entity is still backlogged and
+- * the budgets on the path towards the root
+- * need to be updated.
++ * entity was the next_in_service entity,
++ * then, since entity has just been
++ * deactivated, a new one must be found.
+ */
+- goto update;
++ bfq_update_next_in_service(sd, NULL);
++
++ if (sd->next_in_service) {
++ /*
++ * The parent entity is still backlogged,
++ * because next_in_service is not NULL. So, no
++ * further upwards deactivation must be
++ * performed. Yet, next_in_service has
++ * changed. Then the schedule does need to be
++ * updated upwards.
++ */
++ BUG_ON(sd->next_in_service == entity);
++ break;
++ }
+
+ /*
+- * If we reach there the parent is no more backlogged and
+- * we want to propagate the dequeue upwards.
++ * If we get here, then the parent is no more
++ * backlogged and we need to propagate the
++ * deactivation upwards. Thus let the loop go on.
+ */
+- requeue = 1;
+- }
+
+- return;
++ /*
++ * Also let parent be queued into the idle tree on
++ * deactivation, to preserve service guarantees, and
++ * assuming that whoever invoked this function does not
++ * also need the parent entities to be removed completely.
++ */
++ ins_into_idle_tree = true;
++ }
+
+-update:
++ /*
++ * If the deactivation loop is fully executed, then there are
++ * no more entities to touch and next loop is not executed at
++ * all. Otherwise, requeue remaining entities if they are
++ * about to stop receiving service, or reposition them if this
++ * is not the case.
++ */
+ entity = parent;
+ for_each_entity(entity) {
+- __bfq_activate_entity(entity);
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++ /*
++ * Invoke __bfq_requeue_entity on entity, even if
++ * already active, to requeue/reposition it in the
++ * active tree (because sd->next_in_service has
++ * changed)
++ */
++ __bfq_requeue_entity(entity);
+
+ sd = entity->sched_data;
+- if (!bfq_update_next_in_service(sd))
++ BUG_ON(expiration && sd->in_service_entity != entity);
++
++ if (bfqq)
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "invoking udpdate_next for this queue");
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ else {
++ struct bfq_group *bfqg =
++ container_of(entity,
++ struct bfq_group, entity);
++
++ bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++ "invoking udpdate_next for this entity");
++ }
++#endif
++ if (!bfq_update_next_in_service(sd, entity) &&
++ !expiration)
++ /*
++ * next_in_service unchanged or not causing
++ * any change in entity->parent->sd, and no
++ * requeueing needed for expiration: stop
++ * here.
++ */
+ break;
+ }
+ }
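[Editorial aside] The rewritten bfq_deactivate_entity() above splits hierarchical deactivation into two passes: deactivate upwards until a level is still backlogged, then requeue the remaining ancestors. Below is a minimal userspace sketch of that control flow only, with purely illustrative names and a flat array standing in for the entity hierarchy; it is not the kernel data structure.

/*
 * Toy model only: ent[0] is the leaf being deactivated, ent[i] for
 * i > 0 are its ancestors, each tracking how many of its children are
 * still backlogged. The real code also stops the second pass early
 * when bfq_update_next_in_service() reports no change at a level.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_entity {
	int active_children;	/* backlogged children of this entity */
	bool requeued;		/* touched by the second pass */
};

static void toy_deactivate_leaf(struct toy_entity *ent, int nr)
{
	int i;

	/* pass 1: propagate the deactivation upwards */
	for (i = 1; i < nr; i++) {
		ent[i].active_children--;	/* child at level i - 1 leaves */
		if (ent[i].active_children > 0)
			break;			/* level i stays backlogged */
	}

	/* pass 2: requeue/reposition the still-backlogged ancestors */
	for (; i < nr; i++)
		ent[i].requeued = true;
}

int main(void)
{
	/* leaf, its group (two children), and the root group (one child) */
	struct toy_entity ent[3] = { { 0, false }, { 2, false }, { 1, false } };

	toy_deactivate_leaf(ent, 3);
	printf("group: children=%d requeued=%d\n",
	       ent[1].active_children, ent[1].requeued);
	printf("root : children=%d requeued=%d\n",
	       ent[2].active_children, ent[2].requeued);
	return 0;
}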
+
+ /**
+- * bfq_update_vtime - update vtime if necessary.
++ * bfq_calc_vtime_jump - compute the value to which the vtime should jump,
++ * if needed, to have at least one entity eligible.
+ * @st: the service tree to act upon.
+ *
+- * If necessary update the service tree vtime to have at least one
+- * eligible entity, skipping to its start time. Assumes that the
+- * active tree of the device is not empty.
+- *
+- * NOTE: this hierarchical implementation updates vtimes quite often,
+- * we may end up with reactivated processes getting timestamps after a
+- * vtime skip done because we needed a ->first_active entity on some
+- * intermediate node.
++ * Assumes that st is not empty.
+ */
+-static void bfq_update_vtime(struct bfq_service_tree *st)
++static u64 bfq_calc_vtime_jump(struct bfq_service_tree *st)
+ {
+- struct bfq_entity *entry;
+- struct rb_node *node = st->active.rb_node;
++ struct bfq_entity *root_entity = bfq_root_active_entity(&st->active);
++
++ if (bfq_gt(root_entity->min_start, st->vtime)) {
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(root_entity);
+
+- entry = rb_entry(node, struct bfq_entity, rb_node);
+- if (bfq_gt(entry->min_start, st->vtime)) {
+- st->vtime = entry->min_start;
++ if (bfqq)
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "calc_vtime_jump: new value %llu",
++ root_entity->min_start);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ else {
++ struct bfq_group *bfqg =
++ container_of(root_entity, struct bfq_group,
++ entity);
++
++ bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++ "calc_vtime_jump: new value %llu",
++ root_entity->min_start);
++ }
++#endif
++ return root_entity->min_start;
++ }
++ return st->vtime;
++}
++
++static void bfq_update_vtime(struct bfq_service_tree *st, u64 new_value)
++{
++ if (new_value > st->vtime) {
++ st->vtime = new_value;
+ bfq_forget_idle(st);
+ }
+ }
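[Editorial aside] bfq_calc_vtime_jump() and bfq_update_vtime() above encode the rule that the virtual time only moves forward, jumping to the root entity's min_start when no active entity would otherwise be eligible. A standalone sketch of just that rule follows; the names are invented, and the real code compares timestamps with bfq_gt(), which also copes with wraparound.

#include <stdint.h>
#include <stdio.h>

struct toy_service_tree {
	uint64_t vtime;		 /* scheduler virtual time */
	uint64_t root_min_start; /* min start time over the active subtree */
};

/* value vtime should jump to so that at least one entity is eligible */
static uint64_t toy_calc_vtime_jump(const struct toy_service_tree *st)
{
	return st->root_min_start > st->vtime ? st->root_min_start : st->vtime;
}

/* vtime is monotonically non-decreasing */
static void toy_update_vtime(struct toy_service_tree *st, uint64_t new_value)
{
	if (new_value > st->vtime)
		st->vtime = new_value;
}

int main(void)
{
	struct toy_service_tree st = { .vtime = 100, .root_min_start = 140 };

	toy_update_vtime(&st, toy_calc_vtime_jump(&st));
	printf("vtime after jump: %llu\n", (unsigned long long)st.vtime);
	return 0;
}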
+@@ -952,6 +1462,7 @@ static void bfq_update_vtime(struct bfq_service_tree *st)
+ * bfq_first_active_entity - find the eligible entity with
+ * the smallest finish time
+ * @st: the service tree to select from.
++ * @vtime: the system virtual time to use as a reference for eligibility
+ *
+ * This function searches the first schedulable entity, starting from the
+ * root of the tree and going on the left every time on this side there is
+@@ -959,7 +1470,8 @@ static void bfq_update_vtime(struct bfq_service_tree *st)
+ * the right is followed only if a) the left subtree contains no eligible
+ * entities and b) no eligible entity has been found yet.
+ */
+-static struct bfq_entity *bfq_first_active_entity(struct bfq_service_tree *st)
++static struct bfq_entity *bfq_first_active_entity(struct bfq_service_tree *st,
++ u64 vtime)
+ {
+ struct bfq_entity *entry, *first = NULL;
+ struct rb_node *node = st->active.rb_node;
+@@ -967,15 +1479,15 @@ static struct bfq_entity *bfq_first_active_entity(struct bfq_service_tree *st)
+ while (node) {
+ entry = rb_entry(node, struct bfq_entity, rb_node);
+ left:
+- if (!bfq_gt(entry->start, st->vtime))
++ if (!bfq_gt(entry->start, vtime))
+ first = entry;
+
+- BUG_ON(bfq_gt(entry->min_start, st->vtime));
++ BUG_ON(bfq_gt(entry->min_start, vtime));
+
+ if (node->rb_left) {
+ entry = rb_entry(node->rb_left,
+ struct bfq_entity, rb_node);
+- if (!bfq_gt(entry->min_start, st->vtime)) {
++ if (!bfq_gt(entry->min_start, vtime)) {
+ node = node->rb_left;
+ goto left;
+ }
+@@ -993,31 +1505,84 @@ static struct bfq_entity *bfq_first_active_entity(struct bfq_service_tree *st)
+ * __bfq_lookup_next_entity - return the first eligible entity in @st.
+ * @st: the service tree.
+ *
+- * Update the virtual time in @st and return the first eligible entity
+- * it contains.
++ * If there is no in-service entity for the sched_data st belongs to,
++ * then return the entity that will be set in service if:
++ * 1) the parent entity this st belongs to is set in service;
++ * 2) no entity belonging to such parent entity undergoes a state change
++ * that would influence the timestamps of the entity (e.g., becomes idle,
++ * becomes backlogged, changes its budget, ...).
++ *
++ * In this first case, update the virtual time in @st too (see the
++ * comments on this update inside the function).
++ *
++ * In contrast, if there is an in-service entity, then return the
++ * entity that would be set in service if not only the above
++ * conditions, but also the next one held true: the currently
++ * in-service entity, on expiration,
++ * 1) gets a finish time equal to the current one, or
++ * 2) is not eligible any more, or
++ * 3) is idle.
+ */
+-static struct bfq_entity *__bfq_lookup_next_entity(struct bfq_service_tree *st,
+- bool force)
++static struct bfq_entity *
++__bfq_lookup_next_entity(struct bfq_service_tree *st, bool in_service
++#if 0
++ , bool force
++#endif
++ )
+ {
+- struct bfq_entity *entity, *new_next_in_service = NULL;
++ struct bfq_entity *entity
++#if 0
++ , *new_next_in_service = NULL
++#endif
++ ;
++ u64 new_vtime;
++ struct bfq_queue *bfqq;
+
+ if (RB_EMPTY_ROOT(&st->active))
+ return NULL;
+
+- bfq_update_vtime(st);
+- entity = bfq_first_active_entity(st);
+- BUG_ON(bfq_gt(entity->start, st->vtime));
++ /*
++ * Get the value of the system virtual time for which at
++ * least one entity is eligible.
++ */
++ new_vtime = bfq_calc_vtime_jump(st);
+
+ /*
+- * If the chosen entity does not match with the sched_data's
+- * next_in_service and we are forcedly serving the IDLE priority
+- * class tree, bubble up budget update.
++ * If there is no in-service entity for the sched_data this
++ * active tree belongs to, then push the system virtual time
++ * up to the value that guarantees that at least one entity is
++ * eligible. If, instead, there is an in-service entity, then
++ * do not make any such update, because there is already an
++ * eligible entity, namely the in-service one (even if the
++ * entity is not on st, because it was extracted when set in
++ * service).
+ */
+- if (unlikely(force && entity != entity->sched_data->next_in_service)) {
+- new_next_in_service = entity;
+- for_each_entity(new_next_in_service)
+- bfq_update_budget(new_next_in_service);
++ if (!in_service)
++ bfq_update_vtime(st, new_vtime);
++
++ entity = bfq_first_active_entity(st, new_vtime);
++ BUG_ON(bfq_gt(entity->start, new_vtime));
++
++ /* Log some information */
++ bfqq = bfq_entity_to_bfqq(entity);
++ if (bfqq)
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "__lookup_next: start %llu vtime %llu st %p",
++ ((entity->start>>10)*1000)>>12,
++ ((new_vtime>>10)*1000)>>12, st);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ else {
++ struct bfq_group *bfqg =
++ container_of(entity, struct bfq_group, entity);
++
++ bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++ "__lookup_next: start %llu vtime %llu st %p",
++ ((entity->start>>10)*1000)>>12,
++ ((new_vtime>>10)*1000)>>12, st);
+ }
++#endif
++
++ BUG_ON(!entity);
+
+ return entity;
+ }
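[Editorial aside] As the comments above describe, an entity is eligible once its start timestamp does not exceed the (possibly jumped) virtual time, and among eligible entities the one with the smallest finish time is served. Here is a simplified linear-scan sketch of that selection rule; the kernel performs the same search in O(log n) on an augmented rb-tree, and all names below are made up.

#include <stdint.h>
#include <stdio.h>

struct toy_entity {
	const char *name;
	uint64_t start;		/* S_i */
	uint64_t finish;	/* F_i */
};

static const struct toy_entity *
toy_first_eligible(const struct toy_entity *e, int n, uint64_t vtime)
{
	const struct toy_entity *best = NULL;
	int i;

	for (i = 0; i < n; i++) {
		if (e[i].start > vtime)
			continue;		/* not yet eligible */
		if (!best || e[i].finish < best->finish)
			best = &e[i];
	}
	return best;
}

int main(void)
{
	const struct toy_entity ents[] = {
		{ "A", 90, 200 }, { "B", 110, 150 }, { "C", 95, 180 },
	};

	/* with vtime = 100, B is not yet eligible, so C wins on finish time */
	printf("next: %s\n", toy_first_eligible(ents, 3, 100)->name);
	return 0;
}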
+@@ -1025,50 +1590,81 @@ static struct bfq_entity *__bfq_lookup_next_entity(struct bfq_service_tree *st,
+ /**
+ * bfq_lookup_next_entity - return the first eligible entity in @sd.
+ * @sd: the sched_data.
+- * @extract: if true the returned entity will be also extracted from @sd.
+ *
+- * NOTE: since we cache the next_in_service entity at each level of the
+- * hierarchy, the complexity of the lookup can be decreased with
+- * absolutely no effort just returning the cached next_in_service value;
+- * we prefer to do full lookups to test the consistency of * the data
+- * structures.
++ * This function is invoked when there has been a change in the trees
++ * for sd, and we need to know what the new next entity is after this
++ * change.
+ */
+-static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
+- int extract,
+- struct bfq_data *bfqd)
++static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd)
+ {
+ struct bfq_service_tree *st = sd->service_tree;
+- struct bfq_entity *entity;
+- int i = 0;
+-
+- BUG_ON(sd->in_service_entity);
++ struct bfq_service_tree *idle_class_st = st + (BFQ_IOPRIO_CLASSES - 1);
++ struct bfq_entity *entity = NULL;
++ struct bfq_queue *bfqq;
++ int class_idx = 0;
+
+- if (bfqd &&
+- jiffies - bfqd->bfq_class_idle_last_service > BFQ_CL_IDLE_TIMEOUT) {
+- entity = __bfq_lookup_next_entity(st + BFQ_IOPRIO_CLASSES - 1,
+- true);
+- if (entity) {
+- i = BFQ_IOPRIO_CLASSES - 1;
+- bfqd->bfq_class_idle_last_service = jiffies;
+- sd->next_in_service = entity;
+- }
++ BUG_ON(!sd);
++ BUG_ON(!st);
++ /*
++ * Choose from idle class, if needed to guarantee a minimum
++ * bandwidth to this class (and if there is some active entity
++ * in idle class). This should also mitigate
++ * priority-inversion problems in case a low priority task is
++ * holding file system resources.
++ */
++ if (time_is_before_jiffies(sd->bfq_class_idle_last_service +
++ BFQ_CL_IDLE_TIMEOUT)) {
++ if (!RB_EMPTY_ROOT(&idle_class_st->active))
++ class_idx = BFQ_IOPRIO_CLASSES - 1;
++ /* About to be served if backlogged, or not yet backlogged */
++ sd->bfq_class_idle_last_service = jiffies;
+ }
+- for (; i < BFQ_IOPRIO_CLASSES; i++) {
+- entity = __bfq_lookup_next_entity(st + i, false);
+- if (entity) {
+- if (extract) {
+- bfq_check_next_in_service(sd, entity);
+- bfq_active_extract(st + i, entity);
+- sd->in_service_entity = entity;
+- sd->next_in_service = NULL;
+- }
++
++ /*
++ * Find the next entity to serve for the highest-priority
++ * class, unless the idle class needs to be served.
++ */
++ for (; class_idx < BFQ_IOPRIO_CLASSES; class_idx++) {
++ entity = __bfq_lookup_next_entity(st + class_idx,
++ sd->in_service_entity);
++
++ if (entity)
+ break;
+- }
+ }
+
++ BUG_ON(!entity &&
++ (!RB_EMPTY_ROOT(&st->active) || !RB_EMPTY_ROOT(&(st+1)->active) ||
++ !RB_EMPTY_ROOT(&(st+2)->active)));
++
++ if (!entity)
++ return NULL;
++
++ /* Log some information */
++ bfqq = bfq_entity_to_bfqq(entity);
++ if (bfqq)
++ bfq_log_bfqq(bfqq->bfqd, bfqq, "chosen from st %p %d",
++ st + class_idx, class_idx);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ else {
++ struct bfq_group *bfqg =
++ container_of(entity, struct bfq_group, entity);
++
++ bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++ "chosen from st %p %d",
++ st + class_idx, class_idx);
++ }
++#endif
++
+ return entity;
+ }
+
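[Editorial aside] bfq_lookup_next_entity() above scans the ioprio classes in priority order, but first checks whether CLASS_IDLE has gone unserved for longer than BFQ_CL_IDLE_TIMEOUT and, if so, lets it go first. The following compact userspace sketch shows that anti-starvation rule under assumed names, with plain integers standing in for jiffies (so the wraparound that time_is_before_jiffies() handles is ignored).

#include <stdbool.h>
#include <stdio.h>

#define TOY_NR_CLASSES		3	/* RT, BE, IDLE */
#define TOY_IDLE_TIMEOUT	100	/* in toy "jiffies" */

struct toy_sched {
	bool backlogged[TOY_NR_CLASSES];
	unsigned long idle_last_service;
};

static int toy_pick_class(struct toy_sched *sd, unsigned long now)
{
	int start = 0, i;

	if (now - sd->idle_last_service > TOY_IDLE_TIMEOUT) {
		if (sd->backlogged[TOY_NR_CLASSES - 1])
			start = TOY_NR_CLASSES - 1;	/* serve IDLE first */
		sd->idle_last_service = now;
	}

	for (i = start; i < TOY_NR_CLASSES; i++)
		if (sd->backlogged[i])
			return i;
	return -1;	/* nothing to serve */
}

int main(void)
{
	struct toy_sched sd = { { true, false, true }, 0 };

	printf("class picked at t=50:  %d\n", toy_pick_class(&sd, 50));  /* 0 */
	printf("class picked at t=200: %d\n", toy_pick_class(&sd, 200)); /* 2 */
	return 0;
}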
++static bool next_queue_may_preempt(struct bfq_data *bfqd)
++{
++ struct bfq_sched_data *sd = &bfqd->root_group->sched_data;
++
++ return sd->next_in_service != sd->in_service_entity;
++}
++
+ /*
+ * Get next queue for service.
+ */
+@@ -1083,58 +1679,208 @@ static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd)
+ if (bfqd->busy_queues == 0)
+ return NULL;
+
++ /*
++ * Traverse the path from the root to the leaf entity to
++ * serve. Set in service all the entities visited along the
++ * way.
++ */
+ sd = &bfqd->root_group->sched_data;
+ for (; sd ; sd = entity->my_sched_data) {
+- entity = bfq_lookup_next_entity(sd, 1, bfqd);
+- BUG_ON(!entity);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ if (entity) {
++ struct bfq_group *bfqg =
++ container_of(entity, struct bfq_group, entity);
++
++ bfq_log_bfqg(bfqd, bfqg,
++ "get_next_queue: lookup in this group");
++ if (!sd->next_in_service)
++ pr_crit("get_next_queue: lookup in this group");
++ } else {
++ bfq_log_bfqg(bfqd, bfqd->root_group,
++ "get_next_queue: lookup in root group");
++ if (!sd->next_in_service)
++ pr_crit("get_next_queue: lookup in root group");
++ }
++#endif
++
++ BUG_ON(!sd->next_in_service);
++
++ /*
++ * WARNING. We are about to set the in-service entity
++ * to sd->next_in_service, i.e., to the (cached) value
++ * returned by bfq_lookup_next_entity(sd) the last
++ * time it was invoked, i.e., the last time when the
++ * service order in sd changed as a consequence of the
++ * activation or deactivation of an entity. In this
++ * respect, if we execute bfq_lookup_next_entity(sd)
++ * in this very moment, it may, although with low
++ * probability, yield a different entity than that
++ * pointed to by sd->next_in_service. This rare event
++ * happens in case there was no CLASS_IDLE entity to
++ * serve for sd when bfq_lookup_next_entity(sd) was
++ * invoked for the last time, while there is now one
++ * such entity.
++ *
++ * If the above event happens, then the scheduling of
++ * such entity in CLASS_IDLE is postponed until the
++ * service of the sd->next_in_service entity
++ * finishes. In fact, when the latter is expired,
++ * bfq_lookup_next_entity(sd) gets called again,
++ * exactly to update sd->next_in_service.
++ */
++
++ /* Make next_in_service entity become in_service_entity */
++ entity = sd->next_in_service;
++ sd->in_service_entity = entity;
++
++ /*
++ * Reset the accumulator of the amount of service that
++ * the entity is about to receive.
++ */
+ entity->service = 0;
++
++ /*
++ * If entity is no longer a candidate for next
++ * service, then we extract it from its active tree,
++ * for the following reason. To further boost the
++ * throughput in some special case, BFQ needs to know
++ * which is the next candidate entity to serve, while
++ * there is already an entity in service. In this
++ * respect, to make it easy to compute/update the next
++ * candidate entity to serve after the current
++ * candidate has been set in service, there is a case
++ * where it is necessary to extract the current
++ * candidate from its service tree. Such a case is
++ * when the entity just set in service cannot be also
++ * a candidate for next service. Details about when
++ * this condition holds are reported in the comments
++ * on the function bfq_no_longer_next_in_service()
++ * invoked below.
++ */
++ if (bfq_no_longer_next_in_service(entity))
++ bfq_active_extract(bfq_entity_service_tree(entity),
++ entity);
++
++ /*
++ * For the same reason why we may have just extracted
++ * entity from its active tree, we may need to update
++ * next_in_service for the sched_data of entity too,
++ * regardless of whether entity has been extracted.
++ * In fact, even if entity has not been extracted, a
++ * descendant entity may get extracted. Such an event
++ * would cause a change in next_in_service for the
++ * level of the descendant entity, and thus possibly
++ * back to upper levels.
++ *
++ * We cannot perform the resulting needed update
++ * before the end of this loop, because, to know which
++ * is the correct next-to-serve candidate entity for
++ * each level, we need first to find the leaf entity
++ * to set in service. In fact, only after we know
++ * which is the next-to-serve leaf entity, we can
++ * discover whether the parent entity of the leaf
++ * entity becomes the next-to-serve, and so on.
++ */
++
++ /* Log some information */
++ bfqq = bfq_entity_to_bfqq(entity);
++ if (bfqq)
++ bfq_log_bfqq(bfqd, bfqq,
++ "get_next_queue: this queue, finish %llu",
++ (((entity->finish>>10)*1000)>>10)>>2);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ else {
++ struct bfq_group *bfqg =
++ container_of(entity, struct bfq_group, entity);
++
++ bfq_log_bfqg(bfqd, bfqg,
++ "get_next_queue: this entity, finish %llu",
++ (((entity->finish>>10)*1000)>>10)>>2);
++ }
++#endif
++
+ }
+
++ BUG_ON(!entity);
+ bfqq = bfq_entity_to_bfqq(entity);
+ BUG_ON(!bfqq);
+
++ /*
++ * We can finally update all next-to-serve entities along the
++ * path from the leaf entity just set in service to the root.
++ */
++ for_each_entity(entity) {
++ struct bfq_sched_data *sd = entity->sched_data;
++
+ if (!bfq_update_next_in_service(sd, NULL))
++ break;
++ }
++
+ return bfqq;
+ }
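[Editorial aside] bfq_get_next_queue() above walks down from the root sched_data following the cached next_in_service pointers, setting each visited entity in service, and only afterwards walks back up to refresh next_in_service at every level. The downward pass, reduced to a toy parent/child chain with invented names (the upward pass is only indicated by a comment):

#include <stddef.h>
#include <stdio.h>

struct toy_entity;

struct toy_sched_data {
	struct toy_entity *in_service_entity;
	struct toy_entity *next_in_service;
};

struct toy_entity {
	const char *name;
	struct toy_sched_data *my_sched_data;	/* non-NULL for group entities */
	struct toy_sched_data *sched_data;	/* level this entity lives in */
};

static struct toy_entity *toy_get_next_leaf(struct toy_sched_data *root)
{
	struct toy_sched_data *sd = root;
	struct toy_entity *entity = NULL;

	/* downward pass: set every visited entity in service */
	for (; sd; sd = entity->my_sched_data) {
		entity = sd->next_in_service;
		sd->in_service_entity = entity;
	}

	/* the upward pass would recompute next_in_service per level here */
	return entity;
}

int main(void)
{
	struct toy_sched_data leaf_sd = { 0 }, root_sd = { 0 };
	struct toy_entity queue = { "bfqq", NULL, &leaf_sd };
	struct toy_entity group = { "bfqg", &leaf_sd, &root_sd };

	root_sd.next_in_service = &group;
	leaf_sd.next_in_service = &queue;

	printf("leaf picked: %s\n", toy_get_next_leaf(&root_sd)->name);
	return 0;
}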
+
+ static void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd)
+ {
++ struct bfq_entity *entity = &bfqd->in_service_queue->entity;
++
+ if (bfqd->in_service_bic) {
+ put_io_context(bfqd->in_service_bic->icq.ioc);
+ bfqd->in_service_bic = NULL;
+ }
+
++ bfq_clear_bfqq_wait_request(bfqd->in_service_queue);
++ hrtimer_try_to_cancel(&bfqd->idle_slice_timer);
+ bfqd->in_service_queue = NULL;
+- del_timer(&bfqd->idle_slice_timer);
++
++ /*
++ * When this function is called, all in-service entities have
++ * been properly deactivated or requeued, so we can safely
++ * execute the final step: reset in_service_entity along the
++ * path from entity to the root.
++ */
++ for_each_entity(entity)
++ entity->sched_data->in_service_entity = NULL;
+ }
+
+ static void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+- int requeue)
++ bool ins_into_idle_tree, bool expiration)
+ {
+ struct bfq_entity *entity = &bfqq->entity;
+
+- if (bfqq == bfqd->in_service_queue)
+- __bfq_bfqd_reset_in_service(bfqd);
+-
+- bfq_deactivate_entity(entity, requeue);
++ bfq_deactivate_entity(entity, ins_into_idle_tree, expiration);
+ }
+
+ static void bfq_activate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+ struct bfq_entity *entity = &bfqq->entity;
++ struct bfq_service_tree *st = bfq_entity_service_tree(entity);
++
++ BUG_ON(bfqq == bfqd->in_service_queue);
++ BUG_ON(entity->tree != &st->active && entity->tree != &st->idle &&
++ entity->on_st);
+
+- bfq_activate_entity(entity);
++ bfq_activate_requeue_entity(entity, bfq_bfqq_non_blocking_wait_rq(bfqq),
++ false);
++ bfq_clear_bfqq_non_blocking_wait_rq(bfqq);
++}
++
++static void bfq_requeue_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
++{
++ struct bfq_entity *entity = &bfqq->entity;
++
++ bfq_activate_requeue_entity(entity, false,
++ bfqq == bfqd->in_service_queue);
+ }
+
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ static void bfqg_stats_update_dequeue(struct bfq_group *bfqg);
+-#endif
+
+ /*
+ * Called when the bfqq no longer has requests pending, remove it from
+- * the service tree.
++ * the service tree. As a special case, it can be invoked during an
++ * expiration.
+ */
+ static void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+- int requeue)
++ bool expiration)
+ {
+ BUG_ON(!bfq_bfqq_busy(bfqq));
+ BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
+@@ -1146,27 +1892,20 @@ static void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ BUG_ON(bfqd->busy_queues == 0);
+ bfqd->busy_queues--;
+
+- if (!bfqq->dispatched) {
++ if (!bfqq->dispatched)
+ bfq_weights_tree_remove(bfqd, &bfqq->entity,
+ &bfqd->queue_weights_tree);
+- if (!blk_queue_nonrot(bfqd->queue)) {
+- BUG_ON(!bfqd->busy_in_flight_queues);
+- bfqd->busy_in_flight_queues--;
+- if (bfq_bfqq_constantly_seeky(bfqq)) {
+- BUG_ON(!bfqd->
+- const_seeky_busy_in_flight_queues);
+- bfqd->const_seeky_busy_in_flight_queues--;
+- }
+- }
+- }
++
+ if (bfqq->wr_coeff > 1)
+ bfqd->wr_busy_queues--;
+
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ bfqg_stats_update_dequeue(bfqq_group(bfqq));
+-#endif
+
+- bfq_deactivate_bfqq(bfqd, bfqq, requeue);
++ BUG_ON(bfqq->entity.budget < 0);
++
++ bfq_deactivate_bfqq(bfqd, bfqq, true, expiration);
++
++ BUG_ON(bfqq->entity.budget < 0);
+ }
+
+ /*
+@@ -1184,16 +1923,11 @@ static void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ bfq_mark_bfqq_busy(bfqq);
+ bfqd->busy_queues++;
+
+- if (!bfqq->dispatched) {
++ if (!bfqq->dispatched)
+ if (bfqq->wr_coeff == 1)
+ bfq_weights_tree_add(bfqd, &bfqq->entity,
+ &bfqd->queue_weights_tree);
+- if (!blk_queue_nonrot(bfqd->queue)) {
+- bfqd->busy_in_flight_queues++;
+- if (bfq_bfqq_constantly_seeky(bfqq))
+- bfqd->const_seeky_busy_in_flight_queues++;
+- }
+- }
++
+ if (bfqq->wr_coeff > 1)
+ bfqd->wr_busy_queues++;
+ }
+diff --git a/block/bfq.h b/block/bfq.h
+index fcce855..2a2bc30 100644
+--- a/block/bfq.h
++++ b/block/bfq.h
+@@ -1,5 +1,5 @@
+ /*
+- * BFQ-v7r11 for 4.5.0: data structures and common functions prototypes.
++ * BFQ v8r8 for 4.10.0: data structures and common functions prototypes.
+ *
+ * Based on ideas and code from CFQ:
+ * Copyright (C) 2003 Jens Axboe <axboe@kernel.dk>
+@@ -7,7 +7,9 @@
+ * Copyright (C) 2008 Fabio Checconi <fabio@gandalf.sssup.it>
+ * Paolo Valente <paolo.valente@unimore.it>
+ *
+- * Copyright (C) 2010 Paolo Valente <paolo.valente@unimore.it>
++ * Copyright (C) 2015 Paolo Valente <paolo.valente@unimore.it>
++ *
++ * Copyright (C) 2017 Paolo Valente <paolo.valente@linaro.org>
+ */
+
+ #ifndef _BFQ_H
+@@ -28,20 +30,21 @@
+
+ #define BFQ_DEFAULT_QUEUE_IOPRIO 4
+
+-#define BFQ_DEFAULT_GRP_WEIGHT 10
++#define BFQ_WEIGHT_LEGACY_DFL 100
+ #define BFQ_DEFAULT_GRP_IOPRIO 0
+ #define BFQ_DEFAULT_GRP_CLASS IOPRIO_CLASS_BE
+
++/*
++ * Soft real-time applications are extremely more latency sensitive
++ * than interactive ones. Over-raise the weight of the former to
++ * privilege them against the latter.
++ */
++#define BFQ_SOFTRT_WEIGHT_FACTOR 100
++
+ struct bfq_entity;
+
+ /**
+ * struct bfq_service_tree - per ioprio_class service tree.
+- * @active: tree for active entities (i.e., those backlogged).
+- * @idle: tree for idle entities (i.e., those not backlogged, with V <= F_i).
+- * @first_idle: idle entity with minimum F_i.
+- * @last_idle: idle entity with maximum F_i.
+- * @vtime: scheduler virtual time.
+- * @wsum: scheduler weight sum; active and idle entities contribute to it.
+ *
+ * Each service tree represents a B-WF2Q+ scheduler on its own. Each
+ * ioprio_class has its own independent scheduler, and so its own
+@@ -49,27 +52,28 @@ struct bfq_entity;
+ * of the containing bfqd.
+ */
+ struct bfq_service_tree {
++ /* tree for active entities (i.e., those backlogged) */
+ struct rb_root active;
++ /* tree for idle entities (i.e., not backlogged, with V <= F_i) */
+ struct rb_root idle;
+
+- struct bfq_entity *first_idle;
+- struct bfq_entity *last_idle;
++ struct bfq_entity *first_idle; /* idle entity with minimum F_i */
++ struct bfq_entity *last_idle; /* idle entity with maximum F_i */
+
+- u64 vtime;
++ u64 vtime; /* scheduler virtual time */
++ /* scheduler weight sum; active and idle entities contribute to it */
+ unsigned long wsum;
+ };
+
+ /**
+ * struct bfq_sched_data - multi-class scheduler.
+- * @in_service_entity: entity in service.
+- * @next_in_service: head-of-the-line entity in the scheduler.
+- * @service_tree: array of service trees, one per ioprio_class.
+ *
+ * bfq_sched_data is the basic scheduler queue. It supports three
+- * ioprio_classes, and can be used either as a toplevel queue or as
+- * an intermediate queue on a hierarchical setup.
+- * @next_in_service points to the active entity of the sched_data
+- * service trees that will be scheduled next.
++ * ioprio_classes, and can be used either as a toplevel queue or as an
++ * intermediate queue on a hierarchical setup. @next_in_service
++ * points to the active entity of the sched_data service trees that
++ * will be scheduled next. It is used to reduce the number of steps
++ * needed for each hierarchical-schedule update.
+ *
+ * The supported ioprio_classes are the same as in CFQ, in descending
+ * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE.
+@@ -79,48 +83,32 @@ struct bfq_service_tree {
+ * All the fields are protected by the queue lock of the containing bfqd.
+ */
+ struct bfq_sched_data {
+- struct bfq_entity *in_service_entity;
++ struct bfq_entity *in_service_entity; /* entity in service */
++ /* head-of-the-line entity in the scheduler (see comments above) */
+ struct bfq_entity *next_in_service;
++ /* array of service trees, one per ioprio_class */
+ struct bfq_service_tree service_tree[BFQ_IOPRIO_CLASSES];
++ /* last time CLASS_IDLE was served */
++ unsigned long bfq_class_idle_last_service;
++
+ };
+
+ /**
+ * struct bfq_weight_counter - counter of the number of all active entities
+ * with a given weight.
+- * @weight: weight of the entities that this counter refers to.
+- * @num_active: number of active entities with this weight.
+- * @weights_node: weights tree member (see bfq_data's @queue_weights_tree
+- * and @group_weights_tree).
+ */
+ struct bfq_weight_counter {
+- short int weight;
+- unsigned int num_active;
++ unsigned int weight; /* weight of the entities this counter refers to */
++ unsigned int num_active; /* nr of active entities with this weight */
++ /*
++ * Weights tree member (see bfq_data's @queue_weights_tree and
++ * @group_weights_tree)
++ */
+ struct rb_node weights_node;
+ };
+
+ /**
+ * struct bfq_entity - schedulable entity.
+- * @rb_node: service_tree member.
+- * @weight_counter: pointer to the weight counter associated with this entity.
+- * @on_st: flag, true if the entity is on a tree (either the active or
+- * the idle one of its service_tree).
+- * @finish: B-WF2Q+ finish timestamp (aka F_i).
+- * @start: B-WF2Q+ start timestamp (aka S_i).
+- * @tree: tree the entity is enqueued into; %NULL if not on a tree.
+- * @min_start: minimum start time of the (active) subtree rooted at
+- * this entity; used for O(log N) lookups into active trees.
+- * @service: service received during the last round of service.
+- * @budget: budget used to calculate F_i; F_i = S_i + @budget / @weight.
+- * @weight: weight of the queue
+- * @parent: parent entity, for hierarchical scheduling.
+- * @my_sched_data: for non-leaf nodes in the cgroup hierarchy, the
+- * associated scheduler queue, %NULL on leaf nodes.
+- * @sched_data: the scheduler queue this entity belongs to.
+- * @ioprio: the ioprio in use.
+- * @new_weight: when a weight change is requested, the new weight value.
+- * @orig_weight: original weight, used to implement weight boosting
+- * @prio_changed: flag, true when the user requested a weight, ioprio or
+- * ioprio_class change.
+ *
+ * A bfq_entity is used to represent either a bfq_queue (leaf node in the
+ * cgroup hierarchy) or a bfq_group into the upper level scheduler. Each
+@@ -147,27 +135,52 @@ struct bfq_weight_counter {
+ * containing bfqd.
+ */
+ struct bfq_entity {
+- struct rb_node rb_node;
++ struct rb_node rb_node; /* service_tree member */
++ /* pointer to the weight counter associated with this entity */
+ struct bfq_weight_counter *weight_counter;
+
+- int on_st;
++ /*
++ * Flag, true if the entity is on a tree (either the active or
++ * the idle one of its service_tree) or is in service.
++ */
++ bool on_st;
+
+- u64 finish;
+- u64 start;
++ u64 finish; /* B-WF2Q+ finish timestamp (aka F_i) */
++ u64 start; /* B-WF2Q+ start timestamp (aka S_i) */
+
++ /* tree the entity is enqueued into; %NULL if not on a tree */
+ struct rb_root *tree;
+
++ /*
++ * minimum start time of the (active) subtree rooted at this
++ * entity; used for O(log N) lookups into active trees
++ */
+ u64 min_start;
+
+- int service, budget;
+- unsigned short weight, new_weight;
+- unsigned short orig_weight;
++ /* amount of service received during the last service slot */
++ int service;
++
++ /* budget, used also to calculate F_i: F_i = S_i + @budget / @weight */
++ int budget;
++
++ unsigned int weight; /* weight of the queue */
++ unsigned int new_weight; /* next weight if a change is in progress */
++
++ /* original weight, used to implement weight boosting */
++ unsigned int orig_weight;
+
++ /* parent entity, for hierarchical scheduling */
+ struct bfq_entity *parent;
+
++ /*
++ * For non-leaf nodes in the hierarchy, the associated
++ * scheduler queue, %NULL on leaf nodes.
++ */
+ struct bfq_sched_data *my_sched_data;
++ /* the scheduler queue this entity belongs to */
+ struct bfq_sched_data *sched_data;
+
++ /* flag, set to request a weight, ioprio or ioprio_class change */
+ int prio_changed;
+ };
+
+@@ -175,56 +188,6 @@ struct bfq_group;
+
+ /**
+ * struct bfq_queue - leaf schedulable entity.
+- * @ref: reference counter.
+- * @bfqd: parent bfq_data.
+- * @new_ioprio: when an ioprio change is requested, the new ioprio value.
+- * @ioprio_class: the ioprio_class in use.
+- * @new_ioprio_class: when an ioprio_class change is requested, the new
+- * ioprio_class value.
+- * @new_bfqq: shared bfq_queue if queue is cooperating with
+- * one or more other queues.
+- * @pos_node: request-position tree member (see bfq_group's @rq_pos_tree).
+- * @pos_root: request-position tree root (see bfq_group's @rq_pos_tree).
+- * @sort_list: sorted list of pending requests.
+- * @next_rq: if fifo isn't expired, next request to serve.
+- * @queued: nr of requests queued in @sort_list.
+- * @allocated: currently allocated requests.
+- * @meta_pending: pending metadata requests.
+- * @fifo: fifo list of requests in sort_list.
+- * @entity: entity representing this queue in the scheduler.
+- * @max_budget: maximum budget allowed from the feedback mechanism.
+- * @budget_timeout: budget expiration (in jiffies).
+- * @dispatched: number of requests on the dispatch list or inside driver.
+- * @flags: status flags.
+- * @bfqq_list: node for active/idle bfqq list inside our bfqd.
+- * @burst_list_node: node for the device's burst list.
+- * @seek_samples: number of seeks sampled
+- * @seek_total: sum of the distances of the seeks sampled
+- * @seek_mean: mean seek distance
+- * @last_request_pos: position of the last request enqueued
+- * @requests_within_timer: number of consecutive pairs of request completion
+- * and arrival, such that the queue becomes idle
+- * after the completion, but the next request arrives
+- * within an idle time slice; used only if the queue's
+- * IO_bound has been cleared.
+- * @pid: pid of the process owning the queue, used for logging purposes.
+- * @last_wr_start_finish: start time of the current weight-raising period if
+- * the @bfq-queue is being weight-raised, otherwise
+- * finish time of the last weight-raising period
+- * @wr_cur_max_time: current max raising time for this queue
+- * @soft_rt_next_start: minimum time instant such that, only if a new
+- * request is enqueued after this time instant in an
+- * idle @bfq_queue with no outstanding requests, then
+- * the task associated with the queue it is deemed as
+- * soft real-time (see the comments to the function
+- * bfq_bfqq_softrt_next_start())
+- * @last_idle_bklogged: time of the last transition of the @bfq_queue from
+- * idle to backlogged
+- * @service_from_backlogged: cumulative service received from the @bfq_queue
+- * since the last transition from idle to
+- * backlogged
+- * @bic: pointer to the bfq_io_cq owning the bfq_queue, set to %NULL if the
+- * queue is shared
+ *
+ * A bfq_queue is a leaf request queue; it can be associated with an
+ * io_context or more, if it is async or shared between cooperating
+@@ -235,117 +198,175 @@ struct bfq_group;
+ * All the fields are protected by the queue lock of the containing bfqd.
+ */
+ struct bfq_queue {
+- atomic_t ref;
++ /* reference counter */
++ int ref;
++ /* parent bfq_data */
+ struct bfq_data *bfqd;
+
+- unsigned short ioprio, new_ioprio;
+- unsigned short ioprio_class, new_ioprio_class;
++ /* current ioprio and ioprio class */
++ unsigned short ioprio, ioprio_class;
++ /* next ioprio and ioprio class if a change is in progress */
++ unsigned short new_ioprio, new_ioprio_class;
+
+- /* fields for cooperating queues handling */
++ /*
++ * Shared bfq_queue if queue is cooperating with one or more
++ * other queues.
++ */
+ struct bfq_queue *new_bfqq;
++ /* request-position tree member (see bfq_group's @rq_pos_tree) */
+ struct rb_node pos_node;
++ /* request-position tree root (see bfq_group's @rq_pos_tree) */
+ struct rb_root *pos_root;
+
++ /* sorted list of pending requests */
+ struct rb_root sort_list;
++ /* if fifo isn't expired, next request to serve */
+ struct request *next_rq;
++ /* number of sync and async requests queued */
+ int queued[2];
++ /* number of sync and async requests currently allocated */
+ int allocated[2];
++ /* number of pending metadata requests */
+ int meta_pending;
++ /* fifo list of requests in sort_list */
+ struct list_head fifo;
+
++ /* entity representing this queue in the scheduler */
+ struct bfq_entity entity;
+
++ /* maximum budget allowed from the feedback mechanism */
+ int max_budget;
++ /* budget expiration (in jiffies) */
+ unsigned long budget_timeout;
+
++ /* number of requests on the dispatch list or inside driver */
+ int dispatched;
+
+- unsigned int flags;
++ unsigned int flags; /* status flags */
+
++ /* node for active/idle bfqq list inside parent bfqd */
+ struct list_head bfqq_list;
+
++ /* bit vector: a 1 for each seeky requests in history */
++ u32 seek_history;
++
++ /* node for the device's burst list */
+ struct hlist_node burst_list_node;
+
+- unsigned int seek_samples;
+- u64 seek_total;
+- sector_t seek_mean;
++ /* position of the last request enqueued */
+ sector_t last_request_pos;
+
++ /* Number of consecutive pairs of request completion and
++ * arrival, such that the queue becomes idle after the
++ * completion, but the next request arrives within an idle
++ * time slice; used only if the queue's IO_bound flag has been
++ * cleared.
++ */
+ unsigned int requests_within_timer;
+
++ /* pid of the process owning the queue, used for logging purposes */
+ pid_t pid;
++
++ /*
++ * Pointer to the bfq_io_cq owning the bfq_queue, set to %NULL
++ * if the queue is shared.
++ */
+ struct bfq_io_cq *bic;
+
+- /* weight-raising fields */
++ /* current maximum weight-raising time for this queue */
+ unsigned long wr_cur_max_time;
++ /*
++ * Minimum time instant such that, only if a new request is
++ * enqueued after this time instant in an idle @bfq_queue with
++ * no outstanding requests, then the task associated with the
++ * queue is deemed as soft real-time (see the comments on
++ * the function bfq_bfqq_softrt_next_start())
++ */
+ unsigned long soft_rt_next_start;
++ /*
++ * Start time of the current weight-raising period if
++ * the @bfq-queue is being weight-raised, otherwise
++ * finish time of the last weight-raising period.
++ */
+ unsigned long last_wr_start_finish;
++ /* factor by which the weight of this queue is multiplied */
+ unsigned int wr_coeff;
++ /*
++ * Time of the last transition of the @bfq_queue from idle to
++ * backlogged.
++ */
+ unsigned long last_idle_bklogged;
++ /*
++ * Cumulative service received from the @bfq_queue since the
++ * last transition from idle to backlogged.
++ */
+ unsigned long service_from_backlogged;
++ /*
++ * Value of wr start time when switching to soft rt
++ */
++ unsigned long wr_start_at_switch_to_srt;
++
++ unsigned long split_time; /* time of last split */
+ };
+
+ /**
+ * struct bfq_ttime - per process thinktime stats.
+- * @ttime_total: total process thinktime
+- * @ttime_samples: number of thinktime samples
+- * @ttime_mean: average process thinktime
+ */
+ struct bfq_ttime {
+- unsigned long last_end_request;
++ u64 last_end_request; /* completion time of last request */
++
++ u64 ttime_total; /* total process thinktime */
++ unsigned long ttime_samples; /* number of thinktime samples */
++ u64 ttime_mean; /* average process thinktime */
+
+- unsigned long ttime_total;
+- unsigned long ttime_samples;
+- unsigned long ttime_mean;
+ };
+
+ /**
+ * struct bfq_io_cq - per (request_queue, io_context) structure.
+- * @icq: associated io_cq structure
+- * @bfqq: array of two process queues, the sync and the async
+- * @ttime: associated @bfq_ttime struct
+- * @ioprio: per (request_queue, blkcg) ioprio.
+- * @blkcg_id: id of the blkcg the related io_cq belongs to.
+- * @wr_time_left: snapshot of the time left before weight raising ends
+- * for the sync queue associated to this process; this
+- * snapshot is taken to remember this value while the weight
+- * raising is suspended because the queue is merged with a
+- * shared queue, and is used to set @raising_cur_max_time
+- * when the queue is split from the shared queue and its
+- * weight is raised again
+- * @saved_idle_window: same purpose as the previous field for the idle
+- * window
+- * @saved_IO_bound: same purpose as the previous two fields for the I/O
+- * bound classification of a queue
+- * @saved_in_large_burst: same purpose as the previous fields for the
+- * value of the field keeping the queue's belonging
+- * to a large burst
+- * @was_in_burst_list: true if the queue belonged to a burst list
+- * before its merge with another cooperating queue
+- * @cooperations: counter of consecutive successful queue merges underwent
+- * by any of the process' @bfq_queues
+- * @failed_cooperations: counter of consecutive failed queue merges of any
+- * of the process' @bfq_queues
+ */
+ struct bfq_io_cq {
++ /* associated io_cq structure */
+ struct io_cq icq; /* must be the first member */
++ /* array of two process queues, the sync and the async */
+ struct bfq_queue *bfqq[2];
++ /* associated @bfq_ttime struct */
+ struct bfq_ttime ttime;
++ /* per (request_queue, blkcg) ioprio */
+ int ioprio;
+-
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+- uint64_t blkcg_id; /* the current blkcg ID */
++ uint64_t blkcg_serial_nr; /* the current blkcg serial */
+ #endif
+
+- unsigned int wr_time_left;
++ /*
++ * Snapshot of the idle window before merging; taken to
++ * remember this value while the queue is merged, so as to be
++ * able to restore it in case of split.
++ */
+ bool saved_idle_window;
++ /*
++ * Same purpose as the previous field, for the I/O-bound
++ * classification of a queue.
++ */
+ bool saved_IO_bound;
+
++ /*
++ * Same purpose as the previous fields, for the flag that
++ * records whether the queue belongs to a large burst.
++ */
+ bool saved_in_large_burst;
++ /*
++ * True if the queue belonged to a burst list before its merge
++ * with another cooperating queue.
++ */
+ bool was_in_burst_list;
+
+- unsigned int cooperations;
+- unsigned int failed_cooperations;
++ /*
++ * Similar to previous fields: save wr information.
++ */
++ unsigned long saved_wr_coeff;
++ unsigned long saved_last_wr_start_finish;
++ unsigned long saved_wr_start_at_switch_to_srt;
++ unsigned int saved_wr_cur_max_time;
+ };
+
+ enum bfq_device_speed {
+@@ -354,224 +375,232 @@ enum bfq_device_speed {
+ };
+
+ /**
+- * struct bfq_data - per device data structure.
+- * @queue: request queue for the managed device.
+- * @root_group: root bfq_group for the device.
+- * @active_numerous_groups: number of bfq_groups containing more than one
+- * active @bfq_entity.
+- * @queue_weights_tree: rbtree of weight counters of @bfq_queues, sorted by
+- * weight. Used to keep track of whether all @bfq_queues
+- * have the same weight. The tree contains one counter
+- * for each distinct weight associated to some active
+- * and not weight-raised @bfq_queue (see the comments to
+- * the functions bfq_weights_tree_[add|remove] for
+- * further details).
+- * @group_weights_tree: rbtree of non-queue @bfq_entity weight counters, sorted
+- * by weight. Used to keep track of whether all
+- * @bfq_groups have the same weight. The tree contains
+- * one counter for each distinct weight associated to
+- * some active @bfq_group (see the comments to the
+- * functions bfq_weights_tree_[add|remove] for further
+- * details).
+- * @busy_queues: number of bfq_queues containing requests (including the
+- * queue in service, even if it is idling).
+- * @busy_in_flight_queues: number of @bfq_queues containing pending or
+- * in-flight requests, plus the @bfq_queue in
+- * service, even if idle but waiting for the
+- * possible arrival of its next sync request. This
+- * field is updated only if the device is rotational,
+- * but used only if the device is also NCQ-capable.
+- * The reason why the field is updated also for non-
+- * NCQ-capable rotational devices is related to the
+- * fact that the value of @hw_tag may be set also
+- * later than when busy_in_flight_queues may need to
+- * be incremented for the first time(s). Taking also
+- * this possibility into account, to avoid unbalanced
+- * increments/decrements, would imply more overhead
+- * than just updating busy_in_flight_queues
+- * regardless of the value of @hw_tag.
+- * @const_seeky_busy_in_flight_queues: number of constantly-seeky @bfq_queues
+- * (that is, seeky queues that expired
+- * for budget timeout at least once)
+- * containing pending or in-flight
+- * requests, including the in-service
+- * @bfq_queue if constantly seeky. This
+- * field is updated only if the device
+- * is rotational, but used only if the
+- * device is also NCQ-capable (see the
+- * comments to @busy_in_flight_queues).
+- * @wr_busy_queues: number of weight-raised busy @bfq_queues.
+- * @queued: number of queued requests.
+- * @rq_in_driver: number of requests dispatched and waiting for completion.
+- * @sync_flight: number of sync requests in the driver.
+- * @max_rq_in_driver: max number of reqs in driver in the last
+- * @hw_tag_samples completed requests.
+- * @hw_tag_samples: nr of samples used to calculate hw_tag.
+- * @hw_tag: flag set to one if the driver is showing a queueing behavior.
+- * @budgets_assigned: number of budgets assigned.
+- * @idle_slice_timer: timer set when idling for the next sequential request
+- * from the queue in service.
+- * @unplug_work: delayed work to restart dispatching on the request queue.
+- * @in_service_queue: bfq_queue in service.
+- * @in_service_bic: bfq_io_cq (bic) associated with the @in_service_queue.
+- * @last_position: on-disk position of the last served request.
+- * @last_budget_start: beginning of the last budget.
+- * @last_idling_start: beginning of the last idle slice.
+- * @peak_rate: peak transfer rate observed for a budget.
+- * @peak_rate_samples: number of samples used to calculate @peak_rate.
+- * @bfq_max_budget: maximum budget allotted to a bfq_queue before
+- * rescheduling.
+- * @active_list: list of all the bfq_queues active on the device.
+- * @idle_list: list of all the bfq_queues idle on the device.
+- * @bfq_fifo_expire: timeout for async/sync requests; when it expires
+- * requests are served in fifo order.
+- * @bfq_back_penalty: weight of backward seeks wrt forward ones.
+- * @bfq_back_max: maximum allowed backward seek.
+- * @bfq_slice_idle: maximum idling time.
+- * @bfq_user_max_budget: user-configured max budget value
+- * (0 for auto-tuning).
+- * @bfq_max_budget_async_rq: maximum budget (in nr of requests) allotted to
+- * async queues.
+- * @bfq_timeout: timeout for bfq_queues to consume their budget; used to
+- * to prevent seeky queues to impose long latencies to well
+- * behaved ones (this also implies that seeky queues cannot
+- * receive guarantees in the service domain; after a timeout
+- * they are charged for the whole allocated budget, to try
+- * to preserve a behavior reasonably fair among them, but
+- * without service-domain guarantees).
+- * @bfq_coop_thresh: number of queue merges after which a @bfq_queue is
+- * no more granted any weight-raising.
+- * @bfq_failed_cooperations: number of consecutive failed cooperation
+- * chances after which weight-raising is restored
+- * to a queue subject to more than bfq_coop_thresh
+- * queue merges.
+- * @bfq_requests_within_timer: number of consecutive requests that must be
+- * issued within the idle time slice to set
+- * again idling to a queue which was marked as
+- * non-I/O-bound (see the definition of the
+- * IO_bound flag for further details).
+- * @last_ins_in_burst: last time at which a queue entered the current
+- * burst of queues being activated shortly after
+- * each other; for more details about this and the
+- * following parameters related to a burst of
+- * activations, see the comments to the function
+- * @bfq_handle_burst.
+- * @bfq_burst_interval: reference time interval used to decide whether a
+- * queue has been activated shortly after
+- * @last_ins_in_burst.
+- * @burst_size: number of queues in the current burst of queue activations.
+- * @bfq_large_burst_thresh: maximum burst size above which the current
+- * queue-activation burst is deemed as 'large'.
+- * @large_burst: true if a large queue-activation burst is in progress.
+- * @burst_list: head of the burst list (as for the above fields, more details
+- * in the comments to the function bfq_handle_burst).
+- * @low_latency: if set to true, low-latency heuristics are enabled.
+- * @bfq_wr_coeff: maximum factor by which the weight of a weight-raised
+- * queue is multiplied.
+- * @bfq_wr_max_time: maximum duration of a weight-raising period (jiffies).
+- * @bfq_wr_rt_max_time: maximum duration for soft real-time processes.
+- * @bfq_wr_min_idle_time: minimum idle period after which weight-raising
+- * may be reactivated for a queue (in jiffies).
+- * @bfq_wr_min_inter_arr_async: minimum period between request arrivals
+- * after which weight-raising may be
+- * reactivated for an already busy queue
+- * (in jiffies).
+- * @bfq_wr_max_softrt_rate: max service-rate for a soft real-time queue,
+- * sectors per seconds.
+- * @RT_prod: cached value of the product R*T used for computing the maximum
+- * duration of the weight raising automatically.
+- * @device_speed: device-speed class for the low-latency heuristic.
+- * @oom_bfqq: fallback dummy bfqq for extreme OOM conditions.
++ * struct bfq_data - per-device data structure.
+ *
+ * All the fields are protected by the @queue lock.
+ */
+ struct bfq_data {
++ /* request queue for the device */
+ struct request_queue *queue;
+
++ /* root bfq_group for the device */
+ struct bfq_group *root_group;
+
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+- int active_numerous_groups;
+-#endif
+-
++ /*
++ * rbtree of weight counters of @bfq_queues, sorted by
++ * weight. Used to keep track of whether all @bfq_queues have
++ * the same weight. The tree contains one counter for each
++ * distinct weight associated to some active and not
++ * weight-raised @bfq_queue (see the comments to the functions
++ * bfq_weights_tree_[add|remove] for further details).
++ */
+ struct rb_root queue_weights_tree;
++ /*
++ * rbtree of non-queue @bfq_entity weight counters, sorted by
++ * weight. Used to keep track of whether all @bfq_groups have
++ * the same weight. The tree contains one counter for each
++ * distinct weight associated to some active @bfq_group (see
++ * the comments to the functions bfq_weights_tree_[add|remove]
++ * for further details).
++ */
+ struct rb_root group_weights_tree;
+
++ /*
++ * Number of bfq_queues containing requests (including the
++ * queue in service, even if it is idling).
++ */
+ int busy_queues;
+- int busy_in_flight_queues;
+- int const_seeky_busy_in_flight_queues;
++ /* number of weight-raised busy @bfq_queues */
+ int wr_busy_queues;
++ /* number of queued requests */
+ int queued;
++ /* number of requests dispatched and waiting for completion */
+ int rq_in_driver;
+- int sync_flight;
+
++ /*
++ * Maximum number of requests in driver in the last
++ * @hw_tag_samples completed requests.
++ */
+ int max_rq_in_driver;
++ /* number of samples used to calculate hw_tag */
+ int hw_tag_samples;
++ /* flag set to one if the driver is showing a queueing behavior */
+ int hw_tag;
+
++ /* number of budgets assigned */
+ int budgets_assigned;
+
+- struct timer_list idle_slice_timer;
++ /*
++ * Timer set when idling (waiting) for the next request from
++ * the queue in service.
++ */
++ struct hrtimer idle_slice_timer;
++ /* delayed work to restart dispatching on the request queue */
+ struct work_struct unplug_work;
+
++ /* bfq_queue in service */
+ struct bfq_queue *in_service_queue;
++ /* bfq_io_cq (bic) associated with the @in_service_queue */
+ struct bfq_io_cq *in_service_bic;
+
++ /* on-disk position of the last served request */
+ sector_t last_position;
+
++ /* time of last request completion (ns) */
++ u64 last_completion;
++
++ /* time of first rq dispatch in current observation interval (ns) */
++ u64 first_dispatch;
++ /* time of last rq dispatch in current observation interval (ns) */
++ u64 last_dispatch;
++
++ /* beginning of the last budget */
+ ktime_t last_budget_start;
++ /* beginning of the last idle slice */
+ ktime_t last_idling_start;
++
++ /* number of samples in current observation interval */
+ int peak_rate_samples;
+- u64 peak_rate;
++ /* num of samples of seq dispatches in current observation interval */
++ u32 sequential_samples;
++ /* total num of sectors transferred in current observation interval */
++ u64 tot_sectors_dispatched;
++ /* max rq size seen during current observation interval (sectors) */
++ u32 last_rq_max_size;
++ /* time elapsed from first dispatch in current observ. interval (us) */
++ u64 delta_from_first;
++ /* current estimate of device peak rate */
++ u32 peak_rate;
++
++ /* maximum budget allotted to a bfq_queue before rescheduling */
+ int bfq_max_budget;
+
++ /* list of all the bfq_queues active on the device */
+ struct list_head active_list;
++ /* list of all the bfq_queues idle on the device */
+ struct list_head idle_list;
+
+- unsigned int bfq_fifo_expire[2];
++ /*
++ * Timeout for async/sync requests; when it fires, requests
++ * are served in fifo order.
++ */
++ u64 bfq_fifo_expire[2];
++ /* weight of backward seeks wrt forward ones */
+ unsigned int bfq_back_penalty;
++ /* maximum allowed backward seek */
+ unsigned int bfq_back_max;
+- unsigned int bfq_slice_idle;
+- u64 bfq_class_idle_last_service;
++ /* maximum idling time */
++ u32 bfq_slice_idle;
+
++ /* user-configured max budget value (0 for auto-tuning) */
+ int bfq_user_max_budget;
+- int bfq_max_budget_async_rq;
+- unsigned int bfq_timeout[2];
+-
+- unsigned int bfq_coop_thresh;
+- unsigned int bfq_failed_cooperations;
++ /*
++ * Timeout for bfq_queues to consume their budget; used to
++ * prevent seeky queues from imposing long latencies to
++ * sequential or quasi-sequential ones (this also implies that
++ * seeky queues cannot receive guarantees in the service
++ * domain; after a timeout they are charged for the time they
++ * have been in service, to preserve fairness among them, but
++ * without service-domain guarantees).
++ */
++ unsigned int bfq_timeout;
++
++ /*
++ * Number of consecutive requests that must be issued within
++ * the idle time slice to set again idling to a queue which
++ * was marked as non-I/O-bound (see the definition of the
++ * IO_bound flag for further details).
++ */
+ unsigned int bfq_requests_within_timer;
+
++ /*
++ * Force device idling whenever needed to provide accurate
++ * service guarantees, without caring about throughput
++ * issues. CAVEAT: this may even increase latencies, in case
++ * of useless idling for processes that did stop doing I/O.
++ */
++ bool strict_guarantees;
++
++ /*
++ * Last time at which a queue entered the current burst of
++ * queues being activated shortly after each other; for more
++ * details about this and the following parameters related to
++ * a burst of activations, see the comments on the function
++ * bfq_handle_burst.
++ */
+ unsigned long last_ins_in_burst;
++ /*
++ * Reference time interval used to decide whether a queue has
++ * been activated shortly after @last_ins_in_burst.
++ */
+ unsigned long bfq_burst_interval;
++ /* number of queues in the current burst of queue activations */
+ int burst_size;
++
++ /* common parent entity for the queues in the burst */
++ struct bfq_entity *burst_parent_entity;
++ /* Maximum burst size above which the current queue-activation
++ * burst is deemed as 'large'.
++ */
+ unsigned long bfq_large_burst_thresh;
++ /* true if a large queue-activation burst is in progress */
+ bool large_burst;
++ /*
++ * Head of the burst list (as for the above fields, more
++ * details in the comments on the function bfq_handle_burst).
++ */
+ struct hlist_head burst_list;
+
++ /* if set to true, low-latency heuristics are enabled */
+ bool low_latency;
+-
+- /* parameters of the low_latency heuristics */
++ /*
++ * Maximum factor by which the weight of a weight-raised queue
++ * is multiplied.
++ */
+ unsigned int bfq_wr_coeff;
++ /* maximum duration of a weight-raising period (jiffies) */
+ unsigned int bfq_wr_max_time;
++
++ /* Maximum weight-raising duration for soft real-time processes */
+ unsigned int bfq_wr_rt_max_time;
++ /*
++ * Minimum idle period after which weight-raising may be
++ * reactivated for a queue (in jiffies).
++ */
+ unsigned int bfq_wr_min_idle_time;
++ /*
++ * Minimum period between request arrivals after which
++ * weight-raising may be reactivated for an already busy async
++ * queue (in jiffies).
++ */
+ unsigned long bfq_wr_min_inter_arr_async;
++
++ /* Max service-rate for a soft real-time queue, in sectors/sec */
+ unsigned int bfq_wr_max_softrt_rate;
++ /*
++ * Cached value of the product R*T, used for computing the
++ * maximum duration of weight raising automatically.
++ */
+ u64 RT_prod;
++ /* device-speed class for the low-latency heuristic */
+ enum bfq_device_speed device_speed;
+
++ /* fallback dummy bfqq for extreme OOM conditions */
+ struct bfq_queue oom_bfqq;
+ };
+
+ enum bfqq_state_flags {
+- BFQ_BFQQ_FLAG_busy = 0, /* has requests or is in service */
++ BFQ_BFQQ_FLAG_just_created = 0, /* queue just allocated */
++ BFQ_BFQQ_FLAG_busy, /* has requests or is in service */
+ BFQ_BFQQ_FLAG_wait_request, /* waiting for a request */
++ BFQ_BFQQ_FLAG_non_blocking_wait_rq, /*
++ * waiting for a request
++ * without idling the device
++ */
+ BFQ_BFQQ_FLAG_must_alloc, /* must be allowed rq alloc */
+ BFQ_BFQQ_FLAG_fifo_expire, /* FIFO checked in this slice */
+ BFQ_BFQQ_FLAG_idle_window, /* slice idling enabled */
+ BFQ_BFQQ_FLAG_sync, /* synchronous queue */
+- BFQ_BFQQ_FLAG_budget_new, /* no completion with this budget */
+ BFQ_BFQQ_FLAG_IO_bound, /*
+ * bfqq has timed-out at least once
+ * having consumed at most 2/10 of
+@@ -581,17 +610,12 @@ enum bfqq_state_flags {
+ * bfqq activated in a large burst,
+ * see comments to bfq_handle_burst.
+ */
+- BFQ_BFQQ_FLAG_constantly_seeky, /*
+- * bfqq has proved to be slow and
+- * seeky until budget timeout
+- */
+ BFQ_BFQQ_FLAG_softrt_update, /*
+ * may need softrt-next-start
+ * update
+ */
+ BFQ_BFQQ_FLAG_coop, /* bfqq is shared */
+- BFQ_BFQQ_FLAG_split_coop, /* shared bfqq will be split */
+- BFQ_BFQQ_FLAG_just_split, /* queue has just been split */
++ BFQ_BFQQ_FLAG_split_coop /* shared bfqq will be split */
+ };
+
+ #define BFQ_BFQQ_FNS(name) \
+@@ -608,28 +632,94 @@ static int bfq_bfqq_##name(const struct bfq_queue *bfqq) \
+ return ((bfqq)->flags & (1 << BFQ_BFQQ_FLAG_##name)) != 0; \
+ }
+
++BFQ_BFQQ_FNS(just_created);
+ BFQ_BFQQ_FNS(busy);
+ BFQ_BFQQ_FNS(wait_request);
++BFQ_BFQQ_FNS(non_blocking_wait_rq);
+ BFQ_BFQQ_FNS(must_alloc);
+ BFQ_BFQQ_FNS(fifo_expire);
+ BFQ_BFQQ_FNS(idle_window);
+ BFQ_BFQQ_FNS(sync);
+-BFQ_BFQQ_FNS(budget_new);
+ BFQ_BFQQ_FNS(IO_bound);
+ BFQ_BFQQ_FNS(in_large_burst);
+-BFQ_BFQQ_FNS(constantly_seeky);
+ BFQ_BFQQ_FNS(coop);
+ BFQ_BFQQ_FNS(split_coop);
+-BFQ_BFQQ_FNS(just_split);
+ BFQ_BFQQ_FNS(softrt_update);
+ #undef BFQ_BFQQ_FNS
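[Editorial aside] The BFQ_BFQQ_FNS() macro above stamps out bfq_mark_/bfq_clear_/bfq_bfqq_ helpers for each status flag listed in bfqq_state_flags. For readers unfamiliar with the idiom, here is a self-contained miniature of the same generator pattern, with invented names.

#include <stdio.h>

enum toy_flags { TOY_FLAG_busy = 0, TOY_FLAG_sync };

struct toy_queue { unsigned int flags; };

#define TOY_FNS(name)							\
static void toy_mark_##name(struct toy_queue *q)			\
{									\
	q->flags |= 1U << TOY_FLAG_##name;				\
}									\
static void toy_clear_##name(struct toy_queue *q)			\
{									\
	q->flags &= ~(1U << TOY_FLAG_##name);				\
}									\
static int toy_queue_##name(const struct toy_queue *q)			\
{									\
	return (q->flags & (1U << TOY_FLAG_##name)) != 0;		\
}

TOY_FNS(busy)
TOY_FNS(sync)
#undef TOY_FNS

int main(void)
{
	struct toy_queue q = { 0 };

	toy_mark_busy(&q);
	toy_mark_sync(&q);
	printf("busy=%d sync=%d\n", toy_queue_busy(&q), toy_queue_sync(&q));
	toy_clear_busy(&q);
	toy_clear_sync(&q);
	printf("busy=%d sync=%d\n", toy_queue_busy(&q), toy_queue_sync(&q));
	return 0;
}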
+
+ /* Logging facilities. */
+-#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \
+- blk_add_trace_msg((bfqd)->queue, "bfq%d " fmt, (bfqq)->pid, ##args)
++#ifdef CONFIG_BFQ_REDIRECT_TO_CONSOLE
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++static struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
++static struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg);
++
++#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) do { \
++ char __pbuf[128]; \
++ \
++ assert_spin_locked((bfqd)->queue->queue_lock); \
++ blkg_path(bfqg_to_blkg(bfqq_group(bfqq)), __pbuf, sizeof(__pbuf)); \
++ pr_crit("bfq%d%c %s " fmt "\n", \
++ (bfqq)->pid, \
++ bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \
++ __pbuf, ##args); \
++} while (0)
++
++#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do { \
++ char __pbuf[128]; \
++ \
++ blkg_path(bfqg_to_blkg(bfqg), __pbuf, sizeof(__pbuf)); \
++ pr_crit("%s " fmt "\n", __pbuf, ##args); \
++} while (0)
++
++#else /* CONFIG_BFQ_GROUP_IOSCHED */
++
++#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \
++ pr_crit("bfq%d%c " fmt "\n", (bfqq)->pid, \
++ bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \
++ ##args)
++#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do {} while (0)
++
++#endif /* CONFIG_BFQ_GROUP_IOSCHED */
++
++#define bfq_log(bfqd, fmt, args...) \
++ pr_crit("bfq " fmt "\n", ##args)
++
++#else /* CONFIG_BFQ_REDIRECT_TO_CONSOLE */
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++static struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
++static struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg);
++
++#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) do { \
++ char __pbuf[128]; \
++ \
++ assert_spin_locked((bfqd)->queue->queue_lock); \
++ blkg_path(bfqg_to_blkg(bfqq_group(bfqq)), __pbuf, sizeof(__pbuf)); \
++ blk_add_trace_msg((bfqd)->queue, "bfq%d%c %s " fmt, \
++ (bfqq)->pid, \
++ bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \
++ __pbuf, ##args); \
++} while (0)
++
++#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do { \
++ char __pbuf[128]; \
++ \
++ blkg_path(bfqg_to_blkg(bfqg), __pbuf, sizeof(__pbuf)); \
++ blk_add_trace_msg((bfqd)->queue, "%s " fmt, __pbuf, ##args); \
++} while (0)
++
++#else /* CONFIG_BFQ_GROUP_IOSCHED */
++
++#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \
++ blk_add_trace_msg((bfqd)->queue, "bfq%d%c " fmt, (bfqq)->pid, \
++ bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \
++ ##args)
++#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do {} while (0)
++
++#endif /* CONFIG_BFQ_GROUP_IOSCHED */
+
+ #define bfq_log(bfqd, fmt, args...) \
+ blk_add_trace_msg((bfqd)->queue, "bfq " fmt, ##args)
++#endif /* CONFIG_BFQ_REDIRECT_TO_CONSOLE */
+
+ /* Expiration reasons. */
+ enum bfqq_expiration {
+@@ -640,15 +730,12 @@ enum bfqq_expiration {
+ BFQ_BFQQ_BUDGET_TIMEOUT, /* budget took too long to be used */
+ BFQ_BFQQ_BUDGET_EXHAUSTED, /* budget consumed */
+ BFQ_BFQQ_NO_MORE_REQUESTS, /* the queue has no more requests */
++ BFQ_BFQQ_PREEMPTED /* preemption in progress */
+ };
+
+-#ifdef CONFIG_BFQ_GROUP_IOSCHED
+
+ struct bfqg_stats {
+- /* total bytes transferred */
+- struct blkg_rwstat service_bytes;
+- /* total IOs serviced, post merge */
+- struct blkg_rwstat serviced;
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ /* number of ios merged */
+ struct blkg_rwstat merged;
+ /* total time spent on device in ns, may not be accurate w/ queueing */
+@@ -657,12 +744,8 @@ struct bfqg_stats {
+ struct blkg_rwstat wait_time;
+ /* number of IOs queued up */
+ struct blkg_rwstat queued;
+- /* total sectors transferred */
+- struct blkg_stat sectors;
+ /* total disk time and nr sectors dispatched by this group */
+ struct blkg_stat time;
+- /* time not charged to this cgroup */
+- struct blkg_stat unaccounted_time;
+ /* sum of number of ios queued across all samples */
+ struct blkg_stat avg_queue_size_sum;
+ /* count of samples taken for average */
+@@ -680,8 +763,10 @@ struct bfqg_stats {
+ uint64_t start_idle_time;
+ uint64_t start_empty_time;
+ uint16_t flags;
++#endif
+ };
+
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ /*
+ * struct bfq_group_data - per-blkcg storage for the blkio subsystem.
+ *
+@@ -692,7 +777,7 @@ struct bfq_group_data {
+ /* must be the first member */
+ struct blkcg_policy_data pd;
+
+- unsigned short weight;
++ unsigned int weight;
+ };
+
+ /**
+@@ -712,7 +797,7 @@ struct bfq_group_data {
+ * unused for the root group. Used to know whether there
+ * are groups with more than one active @bfq_entity
+ * (see the comments to the function
+- * bfq_bfqq_must_not_expire()).
++ * bfq_bfqq_may_idle()).
+ * @rq_pos_tree: rbtree sorted by next_request position, used when
+ * determining if two or more queues have interleaving
+ * requests (see bfq_find_close_cooperator()).
+@@ -745,7 +830,6 @@ struct bfq_group {
+ struct rb_root rq_pos_tree;
+
+ struct bfqg_stats stats;
+- struct bfqg_stats dead_stats; /* stats pushed from dead children */
+ };
+
+ #else
+@@ -761,17 +845,38 @@ struct bfq_group {
+
+ static struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity);
+
++static unsigned int bfq_class_idx(struct bfq_entity *entity)
++{
++ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++
++ return bfqq ? bfqq->ioprio_class - 1 :
++ BFQ_DEFAULT_GRP_CLASS - 1;
++}
++
+ static struct bfq_service_tree *
+ bfq_entity_service_tree(struct bfq_entity *entity)
+ {
+ struct bfq_sched_data *sched_data = entity->sched_data;
+ struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
+- unsigned int idx = bfqq ? bfqq->ioprio_class - 1 :
+- BFQ_DEFAULT_GRP_CLASS;
++ unsigned int idx = bfq_class_idx(entity);
+
+ BUG_ON(idx >= BFQ_IOPRIO_CLASSES);
+ BUG_ON(sched_data == NULL);
+
++ if (bfqq)
++ bfq_log_bfqq(bfqq->bfqd, bfqq,
++ "entity_service_tree %p %d",
++ sched_data->service_tree + idx, idx);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
++ else {
++ struct bfq_group *bfqg =
++ container_of(entity, struct bfq_group, entity);
++
++ bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
++ "entity_service_tree %p %d",
++ sched_data->service_tree + idx, idx);
++ }
++#endif
+ return sched_data->service_tree + idx;
+ }
+
+@@ -791,47 +896,6 @@ static struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic)
+ return bic->icq.q->elevator->elevator_data;
+ }
+
+-/**
+- * bfq_get_bfqd_locked - get a lock to a bfqd using a RCU protected pointer.
+- * @ptr: a pointer to a bfqd.
+- * @flags: storage for the flags to be saved.
+- *
+- * This function allows bfqg->bfqd to be protected by the
+- * queue lock of the bfqd they reference; the pointer is dereferenced
+- * under RCU, so the storage for bfqd is assured to be safe as long
+- * as the RCU read side critical section does not end. After the
+- * bfqd->queue->queue_lock is taken the pointer is rechecked, to be
+- * sure that no other writer accessed it. If we raced with a writer,
+- * the function returns NULL, with the queue unlocked, otherwise it
+- * returns the dereferenced pointer, with the queue locked.
+- */
+-static struct bfq_data *bfq_get_bfqd_locked(void **ptr, unsigned long *flags)
+-{
+- struct bfq_data *bfqd;
+-
+- rcu_read_lock();
+- bfqd = rcu_dereference(*(struct bfq_data **)ptr);
+-
+- if (bfqd != NULL) {
+- spin_lock_irqsave(bfqd->queue->queue_lock, *flags);
+- if (ptr == NULL)
+- printk(KERN_CRIT "get_bfqd_locked pointer NULL\n");
+- else if (*ptr == bfqd)
+- goto out;
+- spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
+- }
+-
+- bfqd = NULL;
+-out:
+- rcu_read_unlock();
+- return bfqd;
+-}
+-
+-static void bfq_put_bfqd_unlock(struct bfq_data *bfqd, unsigned long *flags)
+-{
+- spin_unlock_irqrestore(bfqd->queue->queue_lock, *flags);
+-}
+-
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+
+ static struct bfq_group *bfq_bfqq_to_bfqg(struct bfq_queue *bfqq)
+@@ -857,11 +921,13 @@ static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio);
+ static void bfq_put_queue(struct bfq_queue *bfqq);
+ static void bfq_dispatch_insert(struct request_queue *q, struct request *rq);
+ static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd,
+- struct bio *bio, int is_sync,
+- struct bfq_io_cq *bic, gfp_t gfp_mask);
++ struct bio *bio, bool is_sync,
++ struct bfq_io_cq *bic);
+ static void bfq_end_wr_async_queues(struct bfq_data *bfqd,
+ struct bfq_group *bfqg);
++#ifdef CONFIG_BFQ_GROUP_IOSCHED
+ static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
++#endif
+ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq);
+
+ #endif /* _BFQ_H */
+--
+2.10.0
+
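The BFQ_BFQQ_FNS block touched in the hunk above is an X-macro: each BFQ_BFQQ_FNS(name) expansion generates the test helper (and, in the full driver, the mark/clear helpers) for one bit of the queue's flag word, so dropping a flag, as this patch does for constantly_seeky and just_split, only means removing one enum entry and one expansion line. A minimal standalone sketch of the same pattern, using hypothetical "demo" names rather than the kernel's definitions:

/*
 * Sketch of the BFQ_BFQQ_FNS-style accessor pattern (hypothetical
 * "demo" names; not the kernel's definitions). Each DEMO_FNS(name)
 * expansion emits mark/clear/test helpers for one bit in ->flags.
 */
#include <stdio.h>

struct demo_queue {
	unsigned int flags;
};

enum demo_queue_flags {
	DEMO_FLAG_busy,
	DEMO_FLAG_coop,
};

#define DEMO_FNS(name)						\
static void demo_mark_##name(struct demo_queue *q)		\
{								\
	q->flags |= 1U << DEMO_FLAG_##name;			\
}								\
static void demo_clear_##name(struct demo_queue *q)		\
{								\
	q->flags &= ~(1U << DEMO_FLAG_##name);			\
}								\
static int demo_##name(const struct demo_queue *q)		\
{								\
	return (q->flags & (1U << DEMO_FLAG_##name)) != 0;	\
}

DEMO_FNS(busy)
DEMO_FNS(coop)
#undef DEMO_FNS

int main(void)
{
	struct demo_queue q = { 0 };

	demo_mark_busy(&q);
	demo_mark_coop(&q);
	printf("busy=%d coop=%d\n", demo_busy(&q), demo_coop(&q));
	demo_clear_busy(&q);
	printf("busy=%d coop=%d\n", demo_busy(&q), demo_coop(&q));
	return 0;
}

Built as an ordinary userspace program, the sketch prints busy=1 coop=1 and then busy=0 coop=1; the in-tree macros follow the same shape but operate on struct bfq_queue and use the bfq_mark_bfqq_/bfq_clear_bfqq_/bfq_bfqq_ naming seen in the hunk.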
^ permalink raw reply related [flat|nested] 22+ messages in thread
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-03-15 17:17 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-03-15 17:17 UTC (permalink / raw
To: gentoo-commits
commit: 547a1444731be81625b3f2ec0cdc1aa68d9b110a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 15 17:17:27 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 15 17:17:27 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=547a1444
Linux patch 4.10.3
0000_README | 4 +
1002_linux-4.10.3.patch | 3907 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3911 insertions(+)
diff --git a/0000_README b/0000_README
index 8ad9f95..471175a 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-4.10.2.patch
From: http://www.kernel.org
Desc: Linux 4.10.2
+Patch: 1002_linux-4.10.3.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.3
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1002_linux-4.10.3.patch b/1002_linux-4.10.3.patch
new file mode 100644
index 0000000..3352128
--- /dev/null
+++ b/1002_linux-4.10.3.patch
@@ -0,0 +1,3907 @@
+diff --git a/Makefile b/Makefile
+index 6e09b3a44e9a..190a684303c1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
+index 6bca916a5ba0..71cac7c43c4b 100644
+--- a/arch/s390/include/asm/processor.h
++++ b/arch/s390/include/asm/processor.h
+@@ -89,7 +89,8 @@ extern void execve_tail(void);
+ * User space process size: 2GB for 31 bit, 4TB or 8PT for 64 bit.
+ */
+
+-#define TASK_SIZE_OF(tsk) ((tsk)->mm->context.asce_limit)
++#define TASK_SIZE_OF(tsk) ((tsk)->mm ? \
++ (tsk)->mm->context.asce_limit : TASK_MAX_SIZE)
+ #define TASK_UNMAPPED_BASE (test_thread_flag(TIF_31BIT) ? \
+ (1UL << 30) : (1UL << 41))
+ #define TASK_SIZE TASK_SIZE_OF(current)
+diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
+index f9293bfefb7f..408b4f4fda0f 100644
+--- a/arch/s390/kernel/crash_dump.c
++++ b/arch/s390/kernel/crash_dump.c
+@@ -329,7 +329,11 @@ static void *nt_init_name(void *buf, Elf64_Word type, void *desc, int d_len,
+
+ static inline void *nt_init(void *buf, Elf64_Word type, void *desc, int d_len)
+ {
+- return nt_init_name(buf, type, desc, d_len, KEXEC_CORE_NOTE_NAME);
++ const char *note_name = "LINUX";
++
++ if (type == NT_PRPSINFO || type == NT_PRSTATUS || type == NT_PRFPREG)
++ note_name = KEXEC_CORE_NOTE_NAME;
++ return nt_init_name(buf, type, desc, d_len, note_name);
+ }
+
+ /*
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index 865a48871ca4..5401e79d6c32 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -820,10 +820,10 @@ static void __init setup_randomness(void)
+ {
+ struct sysinfo_3_2_2 *vmms;
+
+- vmms = (struct sysinfo_3_2_2 *) alloc_page(GFP_KERNEL);
+- if (vmms && stsi(vmms, 3, 2, 2) == 0 && vmms->count)
+- add_device_randomness(&vmms, vmms->count);
+- free_page((unsigned long) vmms);
++ vmms = (struct sysinfo_3_2_2 *) memblock_alloc(PAGE_SIZE, PAGE_SIZE);
++ if (stsi(vmms, 3, 2, 2) == 0 && vmms->count)
++ add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count);
++ memblock_free((unsigned long) vmms, PAGE_SIZE);
+ }
+
+ /*
+diff --git a/arch/s390/kernel/topology.c b/arch/s390/kernel/topology.c
+index 93dcbae1e98d..ab167620955c 100644
+--- a/arch/s390/kernel/topology.c
++++ b/arch/s390/kernel/topology.c
+@@ -466,7 +466,7 @@ void __init topology_init_early(void)
+ set_sched_topology(s390_topology);
+ if (!MACHINE_HAS_TOPOLOGY)
+ goto out;
+- tl_info = memblock_virt_alloc(sizeof(*tl_info), PAGE_SIZE);
++ tl_info = memblock_virt_alloc(PAGE_SIZE, PAGE_SIZE);
+ info = tl_info;
+ store_topology(info);
+ pr_info("The CPU configuration topology of the machine is:");
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 6484a250021e..ac9eb595f0aa 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -442,6 +442,9 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm,
+ struct kvm_memory_slot *memslot;
+ int is_dirty = 0;
+
++ if (kvm_is_ucontrol(kvm))
++ return -EINVAL;
++
+ mutex_lock(&kvm->slots_lock);
+
+ r = -EINVAL;
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 6fa85944af83..fc5abff9b7fd 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -188,7 +188,7 @@ static inline void __native_flush_tlb_single(unsigned long addr)
+
+ static inline void __flush_tlb_all(void)
+ {
+- if (static_cpu_has(X86_FEATURE_PGE))
++ if (boot_cpu_has(X86_FEATURE_PGE))
+ __flush_tlb_global();
+ else
+ __flush_tlb();
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index a236decb81e4..2c22aef35dbc 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -3962,7 +3962,7 @@ static void fix_rmode_seg(int seg, struct kvm_segment *save)
+ }
+
+ vmcs_write16(sf->selector, var.selector);
+- vmcs_write32(sf->base, var.base);
++ vmcs_writel(sf->base, var.base);
+ vmcs_write32(sf->limit, var.limit);
+ vmcs_write32(sf->ar_bytes, vmx_segment_access_rights(&var));
+ }
+@@ -8350,7 +8350,7 @@ static void kvm_flush_pml_buffers(struct kvm *kvm)
+ static void vmx_dump_sel(char *name, uint32_t sel)
+ {
+ pr_err("%s sel=0x%04x, attr=0x%05x, limit=0x%08x, base=0x%016lx\n",
+- name, vmcs_read32(sel),
++ name, vmcs_read16(sel),
+ vmcs_read32(sel + GUEST_ES_AR_BYTES - GUEST_ES_SELECTOR),
+ vmcs_read32(sel + GUEST_ES_LIMIT - GUEST_ES_SELECTOR),
+ vmcs_readl(sel + GUEST_ES_BASE - GUEST_ES_SELECTOR));
+diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
+index 0d4fb3ebbbac..1680768d392c 100644
+--- a/arch/x86/mm/gup.c
++++ b/arch/x86/mm/gup.c
+@@ -120,6 +120,11 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
+ return 0;
+ }
+
++ if (!pte_allows_gup(pte_val(pte), write)) {
++ pte_unmap(ptep);
++ return 0;
++ }
++
+ if (pte_devmap(pte)) {
+ pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
+ if (unlikely(!pgmap)) {
+@@ -127,8 +132,7 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
+ pte_unmap(ptep);
+ return 0;
+ }
+- } else if (!pte_allows_gup(pte_val(pte), write) ||
+- pte_special(pte)) {
++ } else if (pte_special(pte)) {
+ pte_unmap(ptep);
+ return 0;
+ }
+diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c
+index 8fd4be610607..75e47e5436e3 100644
+--- a/arch/xtensa/kernel/setup.c
++++ b/arch/xtensa/kernel/setup.c
+@@ -126,6 +126,8 @@ static int __init parse_tag_initrd(const bp_tag_t* tag)
+
+ __tagtable(BP_TAG_INITRD, parse_tag_initrd);
+
++#endif /* CONFIG_BLK_DEV_INITRD */
++
+ #ifdef CONFIG_OF
+
+ static int __init parse_tag_fdt(const bp_tag_t *tag)
+@@ -138,8 +140,6 @@ __tagtable(BP_TAG_FDT, parse_tag_fdt);
+
+ #endif /* CONFIG_OF */
+
+-#endif /* CONFIG_BLK_DEV_INITRD */
+-
+ static int __init parse_tag_cmdline(const bp_tag_t* tag)
+ {
+ strlcpy(command_line, (char *)(tag->data), COMMAND_LINE_SIZE);
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 7361d00818e2..662036bdc65e 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -1603,7 +1603,7 @@ static size_t sizeof_nfit_set_info(int num_mappings)
+ + num_mappings * sizeof(struct nfit_set_info_map);
+ }
+
+-static int cmp_map(const void *m0, const void *m1)
++static int cmp_map_compat(const void *m0, const void *m1)
+ {
+ const struct nfit_set_info_map *map0 = m0;
+ const struct nfit_set_info_map *map1 = m1;
+@@ -1612,6 +1612,14 @@ static int cmp_map(const void *m0, const void *m1)
+ sizeof(u64));
+ }
+
++static int cmp_map(const void *m0, const void *m1)
++{
++ const struct nfit_set_info_map *map0 = m0;
++ const struct nfit_set_info_map *map1 = m1;
++
++ return map0->region_offset - map1->region_offset;
++}
++
+ /* Retrieve the nth entry referencing this spa */
+ static struct acpi_nfit_memory_map *memdev_from_spa(
+ struct acpi_nfit_desc *acpi_desc, u16 range_index, int n)
+@@ -1667,6 +1675,12 @@ static int acpi_nfit_init_interleave_set(struct acpi_nfit_desc *acpi_desc,
+ sort(&info->mapping[0], nr, sizeof(struct nfit_set_info_map),
+ cmp_map, NULL);
+ nd_set->cookie = nd_fletcher64(info, sizeof_nfit_set_info(nr), 0);
++
++ /* support namespaces created with the wrong sort order */
++ sort(&info->mapping[0], nr, sizeof(struct nfit_set_info_map),
++ cmp_map_compat, NULL);
++ nd_set->altcookie = nd_fletcher64(info, sizeof_nfit_set_info(nr), 0);
++
+ ndr_desc->nd_set = nd_set;
+ devm_kfree(dev, info);
+
+diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
+index fadba88745dc..b793853ff05f 100644
+--- a/drivers/bluetooth/ath3k.c
++++ b/drivers/bluetooth/ath3k.c
+@@ -94,6 +94,7 @@ static const struct usb_device_id ath3k_table[] = {
+ { USB_DEVICE(0x04CA, 0x300f) },
+ { USB_DEVICE(0x04CA, 0x3010) },
+ { USB_DEVICE(0x04CA, 0x3014) },
++ { USB_DEVICE(0x04CA, 0x3018) },
+ { USB_DEVICE(0x0930, 0x0219) },
+ { USB_DEVICE(0x0930, 0x021c) },
+ { USB_DEVICE(0x0930, 0x0220) },
+@@ -162,6 +163,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
+ { USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3014), .driver_info = BTUSB_ATH3012 },
++ { USB_DEVICE(0x04ca, 0x3018), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0930, 0x021c), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0930, 0x0220), .driver_info = BTUSB_ATH3012 },
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 2f633df9f4e6..dd220fad366c 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -209,6 +209,7 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3014), .driver_info = BTUSB_ATH3012 },
++ { USB_DEVICE(0x04ca, 0x3018), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0930, 0x021c), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0930, 0x0220), .driver_info = BTUSB_ATH3012 },
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index 723ae682bf25..5a50b3df80ee 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -1252,7 +1252,8 @@ void amdgpu_pm_compute_clocks(struct amdgpu_device *adev)
+ if (!adev->pm.dpm_enabled)
+ return;
+
+- amdgpu_display_bandwidth_update(adev);
++ if (adev->mode_info.num_crtc)
++ amdgpu_display_bandwidth_update(adev);
+
+ for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
+ struct amdgpu_ring *ring = adev->rings[i];
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
+index a7af5b33a5e3..648f0d7475db 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
+@@ -3737,9 +3737,15 @@ static void dce_v11_0_encoder_add(struct amdgpu_device *adev,
+ default:
+ encoder->possible_crtcs = 0x3;
+ break;
++ case 3:
++ encoder->possible_crtcs = 0x7;
++ break;
+ case 4:
+ encoder->possible_crtcs = 0xf;
+ break;
++ case 5:
++ encoder->possible_crtcs = 0x1f;
++ break;
+ case 6:
+ encoder->possible_crtcs = 0x3f;
+ break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
+index b323f5ef64d2..51bbd6e44dbb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
+@@ -708,290 +708,238 @@ static void gfx_v6_0_tiling_mode_table_init(struct amdgpu_device *adev)
+ for (reg_offset = 0; reg_offset < num_tile_mode_states; reg_offset++) {
+ switch (reg_offset) {
+ case 0:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+- NUM_BANKS(ADDR_SURF_16_BANK));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4));
+ break;
+ case 1:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+- NUM_BANKS(ADDR_SURF_16_BANK));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4));
+ break;
+ case 2:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+- NUM_BANKS(ADDR_SURF_16_BANK));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4));
+ break;
+ case 3:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
++ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_128B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+- BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_8_BANK) |
+- TILE_SPLIT(split_equal_to_row_size));
++ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4));
+ break;
+ case 4:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2));
++ gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
++ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
++ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
++ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 5:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
+- TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
++ TILE_SPLIT(split_equal_to_row_size) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+- BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_8_BANK));
++ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 6:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
+- TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
++ TILE_SPLIT(split_equal_to_row_size) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+- BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_8_BANK));
++ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 7:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
+- TILE_SPLIT(ADDR_SURF_TILE_SPLIT_1KB) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_DEPTH_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
++ TILE_SPLIT(split_equal_to_row_size) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+- BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_4_BANK));
++ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4));
+ break;
+ case 8:
+- gb_tile_moden = (ARRAY_MODE(ARRAY_LINEAR_ALIGNED));
++ gb_tile_moden = (ARRAY_MODE(ARRAY_LINEAR_ALIGNED) |
++ MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
++ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
++ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
++ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 9:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2));
++ gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
++ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
++ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
++ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 10:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4) |
+- NUM_BANKS(ADDR_SURF_16_BANK));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4));
+ break;
+ case 11:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+- NUM_BANKS(ADDR_SURF_16_BANK));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 12:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_DISPLAY_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+- NUM_BANKS(ADDR_SURF_16_BANK));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 13:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2));
++ gb_tile_moden = (ARRAY_MODE(ARRAY_1D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
++ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_64B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
++ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
++ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 14:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+- NUM_BANKS(ADDR_SURF_16_BANK));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 15:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+- NUM_BANKS(ADDR_SURF_16_BANK));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 16:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+- NUM_BANKS(ADDR_SURF_16_BANK));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 17:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
+- BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+- BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P4_8x16) |
++ TILE_SPLIT(split_equal_to_row_size) |
+ NUM_BANKS(ADDR_SURF_16_BANK) |
+- TILE_SPLIT(split_equal_to_row_size));
+- break;
+- case 18:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_1D_TILED_THICK) |
+- PIPE_CONFIG(ADDR_SURF_P2));
+- break;
+- case 19:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_XTHICK) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+- NUM_BANKS(ADDR_SURF_16_BANK) |
+- TILE_SPLIT(split_equal_to_row_size));
+- break;
+- case 20:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THICK) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
+- BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+- BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2) |
+- NUM_BANKS(ADDR_SURF_16_BANK) |
+- TILE_SPLIT(split_equal_to_row_size));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 21:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P8_32x32_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_8_BANK));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 22:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P8_32x32_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
+- BANK_WIDTH(ADDR_SURF_BANK_WIDTH_2) |
+- BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_8_BANK));
++ NUM_BANKS(ADDR_SURF_16_BANK) |
++ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
++ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_4) |
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_4));
+ break;
+ case 23:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P8_32x32_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_256B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_8_BANK));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 24:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P8_32x32_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_512B) |
++ NUM_BANKS(ADDR_SURF_16_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_8_BANK));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_2));
+ break;
+ case 25:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
+- TILE_SPLIT(ADDR_SURF_TILE_SPLIT_1KB) |
+- BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+- BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_4_BANK));
+- break;
+- case 26:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
+- TILE_SPLIT(ADDR_SURF_TILE_SPLIT_1KB) |
+- BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+- BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_4_BANK));
+- break;
+- case 27:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
+- TILE_SPLIT(ADDR_SURF_TILE_SPLIT_1KB) |
+- BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+- BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_4_BANK));
+- break;
+- case 28:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
+- TILE_SPLIT(ADDR_SURF_TILE_SPLIT_1KB) |
+- BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+- BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_4_BANK));
+- break;
+- case 29:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
++ gb_tile_moden = (ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
++ MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
++ PIPE_CONFIG(ADDR_SURF_P8_32x32_8x16) |
+ TILE_SPLIT(ADDR_SURF_TILE_SPLIT_1KB) |
+- BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+- BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_2) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_4_BANK));
+- break;
+- case 30:
+- gb_tile_moden = (MICRO_TILE_MODE(ADDR_SURF_THIN_MICRO_TILING) |
+- ARRAY_MODE(ARRAY_2D_TILED_THIN1) |
+- PIPE_CONFIG(ADDR_SURF_P2) |
+- TILE_SPLIT(ADDR_SURF_TILE_SPLIT_2KB) |
++ NUM_BANKS(ADDR_SURF_8_BANK) |
+ BANK_WIDTH(ADDR_SURF_BANK_WIDTH_1) |
+ BANK_HEIGHT(ADDR_SURF_BANK_HEIGHT_1) |
+- MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1) |
+- NUM_BANKS(ADDR_SURF_4_BANK));
++ MACRO_TILE_ASPECT(ADDR_SURF_MACRO_ASPECT_1));
+ break;
+ default:
+- continue;
++ gb_tile_moden = 0;
++ break;
+ }
+ adev->gfx.config.tile_mode_array[reg_offset] = gb_tile_moden;
+ WREG32(mmGB_TILE_MODE0 + reg_offset, gb_tile_moden);
+diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h
+index 7abda94fc2cf..3bedcf7ddd2a 100644
+--- a/drivers/gpu/drm/ast/ast_drv.h
++++ b/drivers/gpu/drm/ast/ast_drv.h
+@@ -113,7 +113,11 @@ struct ast_private {
+ struct ttm_bo_kmap_obj cache_kmap;
+ int next_cursor;
+ bool support_wide_screen;
+- bool DisableP2A;
++ enum {
++ ast_use_p2a,
++ ast_use_dt,
++ ast_use_defaults
++ } config_mode;
+
+ enum ast_tx_chip tx_chip_type;
+ u8 dp501_maxclk;
+diff --git a/drivers/gpu/drm/ast/ast_main.c b/drivers/gpu/drm/ast/ast_main.c
+index 533e762d036d..fb9976254224 100644
+--- a/drivers/gpu/drm/ast/ast_main.c
++++ b/drivers/gpu/drm/ast/ast_main.c
+@@ -62,13 +62,84 @@ uint8_t ast_get_index_reg_mask(struct ast_private *ast,
+ return ret;
+ }
+
++static void ast_detect_config_mode(struct drm_device *dev, u32 *scu_rev)
++{
++ struct device_node *np = dev->pdev->dev.of_node;
++ struct ast_private *ast = dev->dev_private;
++ uint32_t data, jregd0, jregd1;
++
++ /* Defaults */
++ ast->config_mode = ast_use_defaults;
++ *scu_rev = 0xffffffff;
++
++ /* Check if we have device-tree properties */
++ if (np && !of_property_read_u32(np, "aspeed,scu-revision-id",
++ scu_rev)) {
++ /* We do, disable P2A access */
++ ast->config_mode = ast_use_dt;
++ DRM_INFO("Using device-tree for configuration\n");
++ return;
++ }
++
++ /* Not all families have a P2A bridge */
++ if (dev->pdev->device != PCI_CHIP_AST2000)
++ return;
++
++ /*
++ * The BMC will set SCU 0x40 D[12] to 1 if the P2 bridge
++ * is disabled. We force using P2A if VGA only mode bit
++ * is set D[7]
++ */
++ jregd0 = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd0, 0xff);
++ jregd1 = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xd1, 0xff);
++ if (!(jregd0 & 0x80) || !(jregd1 & 0x10)) {
++ /* Double check it's actually working */
++ data = ast_read32(ast, 0xf004);
++ if (data != 0xFFFFFFFF) {
++ /* P2A works, grab silicon revision */
++ ast->config_mode = ast_use_p2a;
++
++ DRM_INFO("Using P2A bridge for configuration\n");
++
++ /* Read SCU7c (silicon revision register) */
++ ast_write32(ast, 0xf004, 0x1e6e0000);
++ ast_write32(ast, 0xf000, 0x1);
++ *scu_rev = ast_read32(ast, 0x1207c);
++ return;
++ }
++ }
++
++ /* We have a P2A bridge but it's disabled */
++ DRM_INFO("P2A bridge disabled, using default configuration\n");
++}
+
+ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
+ {
+ struct ast_private *ast = dev->dev_private;
+- uint32_t data, jreg;
++ uint32_t jreg, scu_rev;
++
++ /*
++ * If VGA isn't enabled, we need to enable now or subsequent
++ * access to the scratch registers will fail. We also inform
++ * our caller that it needs to POST the chip
++ * (Assumption: VGA not enabled -> need to POST)
++ */
++ if (!ast_is_vga_enabled(dev)) {
++ ast_enable_vga(dev);
++ DRM_INFO("VGA not enabled on entry, requesting chip POST\n");
++ *need_post = true;
++ } else
++ *need_post = false;
++
++
++ /* Enable extended register access */
++ ast_enable_mmio(dev);
+ ast_open_key(ast);
+
++ /* Find out whether P2A works or whether to use device-tree */
++ ast_detect_config_mode(dev, &scu_rev);
++
++ /* Identify chipset */
+ if (dev->pdev->device == PCI_CHIP_AST1180) {
+ ast->chip = AST1100;
+ DRM_INFO("AST 1180 detected\n");
+@@ -80,12 +151,7 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
+ ast->chip = AST2300;
+ DRM_INFO("AST 2300 detected\n");
+ } else if (dev->pdev->revision >= 0x10) {
+- uint32_t data;
+- ast_write32(ast, 0xf004, 0x1e6e0000);
+- ast_write32(ast, 0xf000, 0x1);
+-
+- data = ast_read32(ast, 0x1207c);
+- switch (data & 0x0300) {
++ switch (scu_rev & 0x0300) {
+ case 0x0200:
+ ast->chip = AST1100;
+ DRM_INFO("AST 1100 detected\n");
+@@ -110,26 +176,6 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
+ }
+ }
+
+- /*
+- * If VGA isn't enabled, we need to enable now or subsequent
+- * access to the scratch registers will fail. We also inform
+- * our caller that it needs to POST the chip
+- * (Assumption: VGA not enabled -> need to POST)
+- */
+- if (!ast_is_vga_enabled(dev)) {
+- ast_enable_vga(dev);
+- ast_enable_mmio(dev);
+- DRM_INFO("VGA not enabled on entry, requesting chip POST\n");
+- *need_post = true;
+- } else
+- *need_post = false;
+-
+- /* Check P2A Access */
+- ast->DisableP2A = true;
+- data = ast_read32(ast, 0xf004);
+- if (data != 0xFFFFFFFF)
+- ast->DisableP2A = false;
+-
+ /* Check if we support wide screen */
+ switch (ast->chip) {
+ case AST1180:
+@@ -146,17 +192,12 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
+ ast->support_wide_screen = true;
+ else {
+ ast->support_wide_screen = false;
+- if (ast->DisableP2A == false) {
+- /* Read SCU7c (silicon revision register) */
+- ast_write32(ast, 0xf004, 0x1e6e0000);
+- ast_write32(ast, 0xf000, 0x1);
+- data = ast_read32(ast, 0x1207c);
+- data &= 0x300;
+- if (ast->chip == AST2300 && data == 0x0) /* ast1300 */
+- ast->support_wide_screen = true;
+- if (ast->chip == AST2400 && data == 0x100) /* ast1400 */
+- ast->support_wide_screen = true;
+- }
++ if (ast->chip == AST2300 &&
++ (scu_rev & 0x300) == 0x0) /* ast1300 */
++ ast->support_wide_screen = true;
++ if (ast->chip == AST2400 &&
++ (scu_rev & 0x300) == 0x100) /* ast1400 */
++ ast->support_wide_screen = true;
+ }
+ break;
+ }
+@@ -220,85 +261,102 @@ static int ast_detect_chip(struct drm_device *dev, bool *need_post)
+
+ static int ast_get_dram_info(struct drm_device *dev)
+ {
++ struct device_node *np = dev->pdev->dev.of_node;
+ struct ast_private *ast = dev->dev_private;
+- uint32_t data, data2;
+- uint32_t denum, num, div, ref_pll;
++ uint32_t mcr_cfg, mcr_scu_mpll, mcr_scu_strap;
++ uint32_t denum, num, div, ref_pll, dsel;
+
+- if (ast->DisableP2A)
+- {
++ switch (ast->config_mode) {
++ case ast_use_dt:
++ /*
++ * If some properties are missing, use reasonable
++ * defaults for AST2400
++ */
++ if (of_property_read_u32(np, "aspeed,mcr-configuration",
++ &mcr_cfg))
++ mcr_cfg = 0x00000577;
++ if (of_property_read_u32(np, "aspeed,mcr-scu-mpll",
++ &mcr_scu_mpll))
++ mcr_scu_mpll = 0x000050C0;
++ if (of_property_read_u32(np, "aspeed,mcr-scu-strap",
++ &mcr_scu_strap))
++ mcr_scu_strap = 0;
++ break;
++ case ast_use_p2a:
++ ast_write32(ast, 0xf004, 0x1e6e0000);
++ ast_write32(ast, 0xf000, 0x1);
++ mcr_cfg = ast_read32(ast, 0x10004);
++ mcr_scu_mpll = ast_read32(ast, 0x10120);
++ mcr_scu_strap = ast_read32(ast, 0x10170);
++ break;
++ case ast_use_defaults:
++ default:
+ ast->dram_bus_width = 16;
+ ast->dram_type = AST_DRAM_1Gx16;
+ ast->mclk = 396;
++ return 0;
+ }
+- else
+- {
+- ast_write32(ast, 0xf004, 0x1e6e0000);
+- ast_write32(ast, 0xf000, 0x1);
+- data = ast_read32(ast, 0x10004);
+-
+- if (data & 0x40)
+- ast->dram_bus_width = 16;
+- else
+- ast->dram_bus_width = 32;
+
+- if (ast->chip == AST2300 || ast->chip == AST2400) {
+- switch (data & 0x03) {
+- case 0:
+- ast->dram_type = AST_DRAM_512Mx16;
+- break;
+- default:
+- case 1:
+- ast->dram_type = AST_DRAM_1Gx16;
+- break;
+- case 2:
+- ast->dram_type = AST_DRAM_2Gx16;
+- break;
+- case 3:
+- ast->dram_type = AST_DRAM_4Gx16;
+- break;
+- }
+- } else {
+- switch (data & 0x0c) {
+- case 0:
+- case 4:
+- ast->dram_type = AST_DRAM_512Mx16;
+- break;
+- case 8:
+- if (data & 0x40)
+- ast->dram_type = AST_DRAM_1Gx16;
+- else
+- ast->dram_type = AST_DRAM_512Mx32;
+- break;
+- case 0xc:
+- ast->dram_type = AST_DRAM_1Gx32;
+- break;
+- }
+- }
++ if (mcr_cfg & 0x40)
++ ast->dram_bus_width = 16;
++ else
++ ast->dram_bus_width = 32;
+
+- data = ast_read32(ast, 0x10120);
+- data2 = ast_read32(ast, 0x10170);
+- if (data2 & 0x2000)
+- ref_pll = 14318;
+- else
+- ref_pll = 12000;
+-
+- denum = data & 0x1f;
+- num = (data & 0x3fe0) >> 5;
+- data = (data & 0xc000) >> 14;
+- switch (data) {
+- case 3:
+- div = 0x4;
++ if (ast->chip == AST2300 || ast->chip == AST2400) {
++ switch (mcr_cfg & 0x03) {
++ case 0:
++ ast->dram_type = AST_DRAM_512Mx16;
+ break;
+- case 2:
++ default:
+ case 1:
+- div = 0x2;
++ ast->dram_type = AST_DRAM_1Gx16;
+ break;
+- default:
+- div = 0x1;
++ case 2:
++ ast->dram_type = AST_DRAM_2Gx16;
++ break;
++ case 3:
++ ast->dram_type = AST_DRAM_4Gx16;
++ break;
++ }
++ } else {
++ switch (mcr_cfg & 0x0c) {
++ case 0:
++ case 4:
++ ast->dram_type = AST_DRAM_512Mx16;
++ break;
++ case 8:
++ if (mcr_cfg & 0x40)
++ ast->dram_type = AST_DRAM_1Gx16;
++ else
++ ast->dram_type = AST_DRAM_512Mx32;
++ break;
++ case 0xc:
++ ast->dram_type = AST_DRAM_1Gx32;
+ break;
+ }
+- ast->mclk = ref_pll * (num + 2) / (denum + 2) * (div * 1000);
+ }
++
++ if (mcr_scu_strap & 0x2000)
++ ref_pll = 14318;
++ else
++ ref_pll = 12000;
++
++ denum = mcr_scu_mpll & 0x1f;
++ num = (mcr_scu_mpll & 0x3fe0) >> 5;
++ dsel = (mcr_scu_mpll & 0xc000) >> 14;
++ switch (dsel) {
++ case 3:
++ div = 0x4;
++ break;
++ case 2:
++ case 1:
++ div = 0x2;
++ break;
++ default:
++ div = 0x1;
++ break;
++ }
++ ast->mclk = ref_pll * (num + 2) / (denum + 2) * (div * 1000);
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/ast/ast_post.c b/drivers/gpu/drm/ast/ast_post.c
+index 5331ee1df086..c7c58becb25d 100644
+--- a/drivers/gpu/drm/ast/ast_post.c
++++ b/drivers/gpu/drm/ast/ast_post.c
+@@ -58,13 +58,9 @@ bool ast_is_vga_enabled(struct drm_device *dev)
+ /* TODO 1180 */
+ } else {
+ ch = ast_io_read8(ast, AST_IO_VGA_ENABLE_PORT);
+- if (ch) {
+- ast_open_key(ast);
+- ch = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb6, 0xff);
+- return ch & 0x04;
+- }
++ return !!(ch & 0x01);
+ }
+- return 0;
++ return false;
+ }
+
+ static const u8 extreginfo[] = { 0x0f, 0x04, 0x1c, 0xff };
+@@ -375,21 +371,18 @@ void ast_post_gpu(struct drm_device *dev)
+ pci_write_config_dword(ast->dev->pdev, 0x04, reg);
+
+ ast_enable_vga(dev);
+- ast_enable_mmio(dev);
+ ast_open_key(ast);
++ ast_enable_mmio(dev);
+ ast_set_def_ext_reg(dev);
+
+- if (ast->DisableP2A == false)
+- {
++ if (ast->config_mode == ast_use_p2a) {
+ if (ast->chip == AST2300 || ast->chip == AST2400)
+ ast_init_dram_2300(dev);
+ else
+ ast_init_dram_reg(dev);
+
+ ast_init_3rdtx(dev);
+- }
+- else
+- {
++ } else {
+ if (ast->tx_chip_type != AST_TX_NONE)
+ ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa3, 0xcf, 0x80); /* Enable DVO */
+ }
+@@ -1638,12 +1631,44 @@ static void ast_init_dram_2300(struct drm_device *dev)
+ temp |= 0x73;
+ ast_write32(ast, 0x12008, temp);
+
++ param.dram_freq = 396;
+ param.dram_type = AST_DDR3;
++ temp = ast_mindwm(ast, 0x1e6e2070);
+ if (temp & 0x01000000)
+ param.dram_type = AST_DDR2;
+- param.dram_chipid = ast->dram_type;
+- param.dram_freq = ast->mclk;
+- param.vram_size = ast->vram_size;
++ switch (temp & 0x18000000) {
++ case 0:
++ param.dram_chipid = AST_DRAM_512Mx16;
++ break;
++ default:
++ case 0x08000000:
++ param.dram_chipid = AST_DRAM_1Gx16;
++ break;
++ case 0x10000000:
++ param.dram_chipid = AST_DRAM_2Gx16;
++ break;
++ case 0x18000000:
++ param.dram_chipid = AST_DRAM_4Gx16;
++ break;
++ }
++ switch (temp & 0x0c) {
++ default:
++ case 0x00:
++ param.vram_size = AST_VIDMEM_SIZE_8M;
++ break;
++
++ case 0x04:
++ param.vram_size = AST_VIDMEM_SIZE_16M;
++ break;
++
++ case 0x08:
++ param.vram_size = AST_VIDMEM_SIZE_32M;
++ break;
++
++ case 0x0c:
++ param.vram_size = AST_VIDMEM_SIZE_64M;
++ break;
++ }
+
+ if (param.dram_type == AST_DDR3) {
+ get_ddr3_info(ast, &param);
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 4594477dee00..55e7372ea0a0 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -362,7 +362,7 @@ mode_fixup(struct drm_atomic_state *state)
+ struct drm_connector *connector;
+ struct drm_connector_state *conn_state;
+ int i;
+- bool ret;
++ int ret;
+
+ for_each_crtc_in_state(state, crtc, crtc_state, i) {
+ if (!crtc_state->mode_changed &&
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 336be31ff3de..ec6474b01dbc 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -145,6 +145,9 @@ static struct edid_quirk {
+
+ /* Panel in Samsung NP700G7A-S01PL notebook reports 6bpc */
+ { "SEC", 0xd033, EDID_QUIRK_FORCE_8BPC },
++
++ /* Rotel RSX-1058 forwards sink's EDID but only does HDMI 1.1*/
++ { "ETR", 13896, EDID_QUIRK_FORCE_8BPC },
+ };
+
+ /*
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index e934b541feea..ad531126667c 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -856,6 +856,9 @@ void drm_fb_helper_fini(struct drm_fb_helper *fb_helper)
+ if (!drm_fbdev_emulation)
+ return;
+
++ cancel_work_sync(&fb_helper->resume_work);
++ cancel_work_sync(&fb_helper->dirty_work);
++
+ mutex_lock(&kernel_fb_helper_lock);
+ if (!list_empty(&fb_helper->kernel_fb_list)) {
+ list_del(&fb_helper->kernel_fb_list);
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index 24b5b046754b..7f4a54b94447 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -440,7 +440,7 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
+ timeout = i915_gem_object_wait_fence(shared[i],
+ flags, timeout,
+ rps);
+- if (timeout <= 0)
++ if (timeout < 0)
+ break;
+
+ dma_fence_put(shared[i]);
+@@ -453,7 +453,7 @@ i915_gem_object_wait_reservation(struct reservation_object *resv,
+ excl = reservation_object_get_excl_rcu(resv);
+ }
+
+- if (excl && timeout > 0)
++ if (excl && timeout >= 0)
+ timeout = i915_gem_object_wait_fence(excl, flags, timeout, rps);
+
+ dma_fence_put(excl);
+diff --git a/drivers/gpu/drm/i915/i915_gem_internal.c b/drivers/gpu/drm/i915/i915_gem_internal.c
+index d09c74973cb3..f7c4376d1136 100644
+--- a/drivers/gpu/drm/i915/i915_gem_internal.c
++++ b/drivers/gpu/drm/i915/i915_gem_internal.c
+@@ -46,24 +46,12 @@ static struct sg_table *
+ i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
+ {
+ struct drm_i915_private *i915 = to_i915(obj->base.dev);
+- unsigned int npages = obj->base.size / PAGE_SIZE;
+ struct sg_table *st;
+ struct scatterlist *sg;
++ unsigned int npages;
+ int max_order;
+ gfp_t gfp;
+
+- st = kmalloc(sizeof(*st), GFP_KERNEL);
+- if (!st)
+- return ERR_PTR(-ENOMEM);
+-
+- if (sg_alloc_table(st, npages, GFP_KERNEL)) {
+- kfree(st);
+- return ERR_PTR(-ENOMEM);
+- }
+-
+- sg = st->sgl;
+- st->nents = 0;
+-
+ max_order = MAX_ORDER;
+ #ifdef CONFIG_SWIOTLB
+ if (swiotlb_nr_tbl()) {
+@@ -85,6 +73,20 @@ i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
+ gfp |= __GFP_DMA32;
+ }
+
++create_st:
++ st = kmalloc(sizeof(*st), GFP_KERNEL);
++ if (!st)
++ return ERR_PTR(-ENOMEM);
++
++ npages = obj->base.size / PAGE_SIZE;
++ if (sg_alloc_table(st, npages, GFP_KERNEL)) {
++ kfree(st);
++ return ERR_PTR(-ENOMEM);
++ }
++
++ sg = st->sgl;
++ st->nents = 0;
++
+ do {
+ int order = min(fls(npages) - 1, max_order);
+ struct page *page;
+@@ -112,8 +114,15 @@ i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
+ sg = __sg_next(sg);
+ } while (1);
+
+- if (i915_gem_gtt_prepare_pages(obj, st))
++ if (i915_gem_gtt_prepare_pages(obj, st)) {
++ /* Failed to dma-map try again with single page sg segments */
++ if (get_order(st->sgl->length)) {
++ internal_free_pages(st);
++ max_order = 0;
++ goto create_st;
++ }
+ goto err;
++ }
+
+ /* Mark the pages as dontneed whilst they are still pinned. As soon
+ * as they are unpinned they are allowed to be reaped by the shrinker,
+diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
+index b8f403faadbb..d7958dc26d21 100644
+--- a/drivers/gpu/drm/i915/i915_gem_request.c
++++ b/drivers/gpu/drm/i915/i915_gem_request.c
+@@ -1011,8 +1011,13 @@ __i915_request_wait_for_execute(struct drm_i915_gem_request *request,
+ break;
+ }
+
++ if (!timeout) {
++ timeout = -ETIME;
++ break;
++ }
++
+ timeout = io_schedule_timeout(timeout);
+- } while (timeout);
++ } while (1);
+ finish_wait(&request->execute.wait, &wait);
+
+ if (flags & I915_WAIT_LOCKED)
+diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
+index abc78bbfc1dc..7325230fff02 100644
+--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
++++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
+@@ -414,6 +414,11 @@ int i915_gem_init_stolen(struct drm_i915_private *dev_priv)
+
+ mutex_init(&dev_priv->mm.stolen_lock);
+
++ if (intel_vgpu_active(dev_priv)) {
++ DRM_INFO("iGVT-g active, disabling use of stolen memory\n");
++ return 0;
++ }
++
+ #ifdef CONFIG_INTEL_IOMMU
+ if (intel_iommu_gfx_mapped && INTEL_GEN(dev_priv) < 8) {
+ DRM_INFO("DMAR active, disabling use of stolen memory\n");
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 0b8e8eb85c19..4daf7dda9cca 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -2820,6 +2820,9 @@ static void vlv_detach_power_sequencer(struct intel_dp *intel_dp)
+ enum pipe pipe = intel_dp->pps_pipe;
+ i915_reg_t pp_on_reg = PP_ON_DELAYS(pipe);
+
++ if (WARN_ON(pipe != PIPE_A && pipe != PIPE_B))
++ return;
++
+ edp_panel_vdd_off_sync(intel_dp);
+
+ /*
+@@ -2847,9 +2850,6 @@ static void vlv_steal_power_sequencer(struct drm_device *dev,
+
+ lockdep_assert_held(&dev_priv->pps_mutex);
+
+- if (WARN_ON(pipe != PIPE_A && pipe != PIPE_B))
+- return;
+-
+ for_each_intel_encoder(dev, encoder) {
+ struct intel_dp *intel_dp;
+ enum port port;
+diff --git a/drivers/gpu/drm/i915/intel_opregion.c b/drivers/gpu/drm/i915/intel_opregion.c
+index f4429f67a4e3..4a862a358c70 100644
+--- a/drivers/gpu/drm/i915/intel_opregion.c
++++ b/drivers/gpu/drm/i915/intel_opregion.c
+@@ -982,7 +982,18 @@ int intel_opregion_setup(struct drm_i915_private *dev_priv)
+ opregion->vbt_size = vbt_size;
+ } else {
+ vbt = base + OPREGION_VBT_OFFSET;
+- vbt_size = OPREGION_ASLE_EXT_OFFSET - OPREGION_VBT_OFFSET;
++ /*
++ * The VBT specification says that if the ASLE ext
++ * mailbox is not used its area is reserved, but
++ * on some CHT boards the VBT extends into the
++ * ASLE ext area. Allow this even though it is
++ * against the spec, so we do not end up rejecting
++ * the VBT on those boards (and end up not finding the
++ * LCD panel because of this).
++ */
++ vbt_size = (mboxes & MBOX_ASLE_EXT) ?
++ OPREGION_ASLE_EXT_OFFSET : OPREGION_SIZE;
++ vbt_size -= OPREGION_VBT_OFFSET;
+ if (intel_bios_is_valid_vbt(vbt, vbt_size)) {
+ DRM_DEBUG_KMS("Found valid VBT in ACPI OpRegion (Mailbox #4)\n");
+ opregion->vbt = vbt;
+diff --git a/drivers/gpu/drm/imx/imx-tve.c b/drivers/gpu/drm/imx/imx-tve.c
+index 3b602ee33c44..0c6bf12d45b1 100644
+--- a/drivers/gpu/drm/imx/imx-tve.c
++++ b/drivers/gpu/drm/imx/imx-tve.c
+@@ -98,6 +98,8 @@
+ /* TVE_TST_MODE_REG */
+ #define TVE_TVDAC_TEST_MODE_MASK (0x7 << 0)
+
++#define IMX_TVE_DAC_VOLTAGE 2750000
++
+ enum {
+ TVE_MODE_TVOUT,
+ TVE_MODE_VGA,
+@@ -621,9 +623,8 @@ static int imx_tve_bind(struct device *dev, struct device *master, void *data)
+
+ tve->dac_reg = devm_regulator_get(dev, "dac");
+ if (!IS_ERR(tve->dac_reg)) {
+- ret = regulator_set_voltage(tve->dac_reg, 2750000, 2750000);
+- if (ret)
+- return ret;
++ if (regulator_get_voltage(tve->dac_reg) != IMX_TVE_DAC_VOLTAGE)
++ dev_warn(dev, "dac voltage is not %d uV\n", IMX_TVE_DAC_VOLTAGE);
+ ret = regulator_enable(tve->dac_reg);
+ if (ret)
+ return ret;
+diff --git a/drivers/gpu/drm/radeon/radeon_bios.c b/drivers/gpu/drm/radeon/radeon_bios.c
+index c829cfb02fc4..00cfb5d2875f 100644
+--- a/drivers/gpu/drm/radeon/radeon_bios.c
++++ b/drivers/gpu/drm/radeon/radeon_bios.c
+@@ -596,52 +596,58 @@ static bool radeon_read_disabled_bios(struct radeon_device *rdev)
+ #ifdef CONFIG_ACPI
+ static bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
+ {
+- bool ret = false;
+ struct acpi_table_header *hdr;
+ acpi_size tbl_size;
+ UEFI_ACPI_VFCT *vfct;
+- GOP_VBIOS_CONTENT *vbios;
+- VFCT_IMAGE_HEADER *vhdr;
++ unsigned offset;
+
+ if (!ACPI_SUCCESS(acpi_get_table("VFCT", 1, &hdr)))
+ return false;
+ tbl_size = hdr->length;
+ if (tbl_size < sizeof(UEFI_ACPI_VFCT)) {
+ DRM_ERROR("ACPI VFCT table present but broken (too short #1)\n");
+- goto out_unmap;
++ return false;
+ }
+
+ vfct = (UEFI_ACPI_VFCT *)hdr;
+- if (vfct->VBIOSImageOffset + sizeof(VFCT_IMAGE_HEADER) > tbl_size) {
+- DRM_ERROR("ACPI VFCT table present but broken (too short #2)\n");
+- goto out_unmap;
+- }
++ offset = vfct->VBIOSImageOffset;
+
+- vbios = (GOP_VBIOS_CONTENT *)((char *)hdr + vfct->VBIOSImageOffset);
+- vhdr = &vbios->VbiosHeader;
+- DRM_INFO("ACPI VFCT contains a BIOS for %02x:%02x.%d %04x:%04x, size %d\n",
+- vhdr->PCIBus, vhdr->PCIDevice, vhdr->PCIFunction,
+- vhdr->VendorID, vhdr->DeviceID, vhdr->ImageLength);
+-
+- if (vhdr->PCIBus != rdev->pdev->bus->number ||
+- vhdr->PCIDevice != PCI_SLOT(rdev->pdev->devfn) ||
+- vhdr->PCIFunction != PCI_FUNC(rdev->pdev->devfn) ||
+- vhdr->VendorID != rdev->pdev->vendor ||
+- vhdr->DeviceID != rdev->pdev->device) {
+- DRM_INFO("ACPI VFCT table is not for this card\n");
+- goto out_unmap;
+- }
++ while (offset < tbl_size) {
++ GOP_VBIOS_CONTENT *vbios = (GOP_VBIOS_CONTENT *)((char *)hdr + offset);
++ VFCT_IMAGE_HEADER *vhdr = &vbios->VbiosHeader;
+
+- if (vfct->VBIOSImageOffset + sizeof(VFCT_IMAGE_HEADER) + vhdr->ImageLength > tbl_size) {
+- DRM_ERROR("ACPI VFCT image truncated\n");
+- goto out_unmap;
+- }
++ offset += sizeof(VFCT_IMAGE_HEADER);
++ if (offset > tbl_size) {
++ DRM_ERROR("ACPI VFCT image header truncated\n");
++ return false;
++ }
+
+- rdev->bios = kmemdup(&vbios->VbiosContent, vhdr->ImageLength, GFP_KERNEL);
+- ret = !!rdev->bios;
++ offset += vhdr->ImageLength;
++ if (offset > tbl_size) {
++ DRM_ERROR("ACPI VFCT image truncated\n");
++ return false;
++ }
++
++ if (vhdr->ImageLength &&
++ vhdr->PCIBus == rdev->pdev->bus->number &&
++ vhdr->PCIDevice == PCI_SLOT(rdev->pdev->devfn) &&
++ vhdr->PCIFunction == PCI_FUNC(rdev->pdev->devfn) &&
++ vhdr->VendorID == rdev->pdev->vendor &&
++ vhdr->DeviceID == rdev->pdev->device) {
++ rdev->bios = kmemdup(&vbios->VbiosContent,
++ vhdr->ImageLength,
++ GFP_KERNEL);
++
++ if (!rdev->bios) {
++ kfree(rdev->bios);
++ return false;
++ }
++ return true;
++ }
++ }
+
+-out_unmap:
+- return ret;
++ DRM_ERROR("ACPI VFCT table present but broken (too short #2)\n");
++ return false;
+ }
+ #else
+ static inline bool radeon_acpi_vfct_bios(struct radeon_device *rdev)
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index d5063618efa7..86e3b233b722 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -1670,7 +1670,6 @@ static int ttm_bo_swapout(struct ttm_mem_shrink *shrink)
+ struct ttm_buffer_object *bo;
+ int ret = -EBUSY;
+ int put_count;
+- uint32_t swap_placement = (TTM_PL_FLAG_CACHED | TTM_PL_FLAG_SYSTEM);
+
+ spin_lock(&glob->lru_lock);
+ list_for_each_entry(bo, &glob->swap_lru, swap) {
+@@ -1701,7 +1700,8 @@ static int ttm_bo_swapout(struct ttm_mem_shrink *shrink)
+ * Move to system cached
+ */
+
+- if ((bo->mem.placement & swap_placement) != swap_placement) {
++ if (bo->mem.mem_type != TTM_PL_SYSTEM ||
++ bo->ttm->caching_state != tt_cached) {
+ struct ttm_mem_reg evict_mem;
+
+ evict_mem = bo->mem;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+index 18061a4bc2f2..36005bdf3749 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -199,9 +199,14 @@ static const struct drm_ioctl_desc vmw_ioctls[] = {
+ VMW_IOCTL_DEF(VMW_PRESENT_READBACK,
+ vmw_present_readback_ioctl,
+ DRM_MASTER | DRM_AUTH),
++ /*
++ * The permissions of the below ioctl are overridden in
++ * vmw_generic_ioctl(). We require either
++ * DRM_MASTER or capable(CAP_SYS_ADMIN).
++ */
+ VMW_IOCTL_DEF(VMW_UPDATE_LAYOUT,
+ vmw_kms_update_layout_ioctl,
+- DRM_MASTER | DRM_CONTROL_ALLOW),
++ DRM_RENDER_ALLOW),
+ VMW_IOCTL_DEF(VMW_CREATE_SHADER,
+ vmw_shader_define_ioctl,
+ DRM_AUTH | DRM_RENDER_ALLOW),
+@@ -1125,6 +1130,10 @@ static long vmw_generic_ioctl(struct file *filp, unsigned int cmd,
+
+ return (long) vmw_execbuf_ioctl(dev, arg, file_priv,
+ _IOC_SIZE(cmd));
++ } else if (nr == DRM_COMMAND_BASE + DRM_VMW_UPDATE_LAYOUT) {
++ if (!drm_is_current_master(file_priv) &&
++ !capable(CAP_SYS_ADMIN))
++ return -EACCES;
+ }
+
+ if (unlikely(ioctl->cmd != cmd))
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+index 1e59a486bba8..59ff4197173a 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+@@ -41,9 +41,9 @@
+ #include <drm/ttm/ttm_module.h>
+ #include "vmwgfx_fence.h"
+
+-#define VMWGFX_DRIVER_DATE "20160210"
++#define VMWGFX_DRIVER_DATE "20170221"
+ #define VMWGFX_DRIVER_MAJOR 2
+-#define VMWGFX_DRIVER_MINOR 11
++#define VMWGFX_DRIVER_MINOR 12
+ #define VMWGFX_DRIVER_PATCHLEVEL 0
+ #define VMWGFX_FILE_PAGE_OFFSET 0x00100000
+ #define VMWGFX_FIFO_STATIC_SIZE (1024*1024)
+diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
+index fbd8ce6d7ff3..27228fe57eca 100644
+--- a/drivers/hv/hv.c
++++ b/drivers/hv/hv.c
+@@ -220,7 +220,7 @@ int hv_init(void)
+ /* See if the hypercall page is already set */
+ rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
+
+- virtaddr = __vmalloc(PAGE_SIZE, GFP_KERNEL, PAGE_KERNEL_EXEC);
++ virtaddr = __vmalloc(PAGE_SIZE, GFP_KERNEL, PAGE_KERNEL_RX);
+
+ if (!virtaddr)
+ goto cleanup;
+diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
+index 6f4397ee1ed6..7cb145f9a6db 100644
+--- a/drivers/infiniband/hw/mlx5/srq.c
++++ b/drivers/infiniband/hw/mlx5/srq.c
+@@ -165,8 +165,6 @@ static int create_srq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_srq *srq,
+ int err;
+ int i;
+ struct mlx5_wqe_srq_next_seg *next;
+- int page_shift;
+- int npages;
+
+ err = mlx5_db_alloc(dev->mdev, &srq->db);
+ if (err) {
+@@ -179,7 +177,6 @@ static int create_srq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_srq *srq,
+ err = -ENOMEM;
+ goto err_db;
+ }
+- page_shift = srq->buf.page_shift;
+
+ srq->head = 0;
+ srq->tail = srq->msrq.max - 1;
+@@ -191,10 +188,8 @@ static int create_srq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_srq *srq,
+ cpu_to_be16((i + 1) & (srq->msrq.max - 1));
+ }
+
+- npages = DIV_ROUND_UP(srq->buf.npages, 1 << (page_shift - PAGE_SHIFT));
+- mlx5_ib_dbg(dev, "buf_size %d, page_shift %d, npages %d, calc npages %d\n",
+- buf_size, page_shift, srq->buf.npages, npages);
+- in->pas = mlx5_vzalloc(sizeof(*in->pas) * npages);
++ mlx5_ib_dbg(dev, "srq->buf.page_shift = %d\n", srq->buf.page_shift);
++ in->pas = mlx5_vzalloc(sizeof(*in->pas) * srq->buf.npages);
+ if (!in->pas) {
+ err = -ENOMEM;
+ goto err_buf;
+@@ -208,7 +203,7 @@ static int create_srq_kernel(struct mlx5_ib_dev *dev, struct mlx5_ib_srq *srq,
+ }
+ srq->wq_sig = !!srq_signature;
+
+- in->log_page_size = page_shift - MLX5_ADAPTER_PAGE_SHIFT;
++ in->log_page_size = srq->buf.page_shift - MLX5_ADAPTER_PAGE_SHIFT;
+ if (MLX5_CAP_GEN(dev->mdev, cqe_version) == MLX5_CQE_VERSION_V1 &&
+ in->type == IB_SRQT_XRC)
+ in->user_index = MLX5_IB_DEFAULT_UIDX;
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+index 096c4f6fbd65..1c7a9a16efc7 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+@@ -1507,12 +1507,14 @@ static ssize_t set_mode(struct device *d, struct device_attribute *attr,
+
+ ret = ipoib_set_mode(dev, buf);
+
+- rtnl_unlock();
+-
+- if (!ret)
+- return count;
++ /* ipoib_set_mode() is assumed to return with the rtnl lock still
++ * held unless it returned -EBUSY, in which case there is nothing
++ * to unlock here.
++ */
++ if (ret != -EBUSY)
++ rtnl_unlock();
+
+- return ret;
++ return (!ret || ret == -EBUSY) ? count : ret;
+ }
+
+ static DEVICE_ATTR(mode, S_IWUSR | S_IRUGO, show_mode, set_mode);
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index 3ce0765a05ab..4584c03bc355 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -481,8 +481,7 @@ int ipoib_set_mode(struct net_device *dev, const char *buf)
+ priv->tx_wr.wr.send_flags &= ~IB_SEND_IP_CSUM;
+
+ ipoib_flush_paths(dev);
+- rtnl_lock();
+- return 0;
++ return (!rtnl_trylock()) ? -EBUSY : 0;
+ }
+
+ if (!strcmp(buf, "datagram\n")) {
+@@ -491,8 +490,7 @@ int ipoib_set_mode(struct net_device *dev, const char *buf)
+ dev_set_mtu(dev, min(priv->mcast_mtu, dev->mtu));
+ rtnl_unlock();
+ ipoib_flush_paths(dev);
+- rtnl_lock();
+- return 0;
++ return (!rtnl_trylock()) ? -EBUSY : 0;
+ }
+
+ return -EINVAL;
+@@ -716,6 +714,14 @@ int ipoib_check_sm_sendonly_fullmember_support(struct ipoib_dev_priv *priv)
+ return ret;
+ }
+
++static void push_pseudo_header(struct sk_buff *skb, const char *daddr)
++{
++ struct ipoib_pseudo_header *phdr;
++
++ phdr = (struct ipoib_pseudo_header *)skb_push(skb, sizeof(*phdr));
++ memcpy(phdr->hwaddr, daddr, INFINIBAND_ALEN);
++}
++
+ void ipoib_flush_paths(struct net_device *dev)
+ {
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+@@ -940,8 +946,7 @@ static void neigh_add_path(struct sk_buff *skb, u8 *daddr,
+ }
+ if (skb_queue_len(&neigh->queue) <
+ IPOIB_MAX_PATH_REC_QUEUE) {
+- /* put pseudoheader back on for next time */
+- skb_push(skb, IPOIB_PSEUDO_LEN);
++ push_pseudo_header(skb, neigh->daddr);
+ __skb_queue_tail(&neigh->queue, skb);
+ } else {
+ ipoib_warn(priv, "queue length limit %d. Packet drop.\n",
+@@ -959,10 +964,12 @@ static void neigh_add_path(struct sk_buff *skb, u8 *daddr,
+
+ if (!path->query && path_rec_start(dev, path))
+ goto err_path;
+- if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE)
++ if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
++ push_pseudo_header(skb, neigh->daddr);
+ __skb_queue_tail(&neigh->queue, skb);
+- else
++ } else {
+ goto err_drop;
++ }
+ }
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+@@ -998,8 +1005,7 @@ static void unicast_arp_send(struct sk_buff *skb, struct net_device *dev,
+ }
+ if (path) {
+ if (skb_queue_len(&path->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
+- /* put pseudoheader back on for next time */
+- skb_push(skb, IPOIB_PSEUDO_LEN);
++ push_pseudo_header(skb, phdr->hwaddr);
+ __skb_queue_tail(&path->queue, skb);
+ } else {
+ ++dev->stats.tx_dropped;
+@@ -1031,8 +1037,7 @@ static void unicast_arp_send(struct sk_buff *skb, struct net_device *dev,
+ return;
+ } else if ((path->query || !path_rec_start(dev, path)) &&
+ skb_queue_len(&path->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
+- /* put pseudoheader back on for next time */
+- skb_push(skb, IPOIB_PSEUDO_LEN);
++ push_pseudo_header(skb, phdr->hwaddr);
+ __skb_queue_tail(&path->queue, skb);
+ } else {
+ ++dev->stats.tx_dropped;
+@@ -1113,8 +1118,7 @@ static int ipoib_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ }
+
+ if (skb_queue_len(&neigh->queue) < IPOIB_MAX_PATH_REC_QUEUE) {
+- /* put pseudoheader back on for next time */
+- skb_push(skb, sizeof(*phdr));
++ push_pseudo_header(skb, phdr->hwaddr);
+ spin_lock_irqsave(&priv->lock, flags);
+ __skb_queue_tail(&neigh->queue, skb);
+ spin_unlock_irqrestore(&priv->lock, flags);
+@@ -1146,7 +1150,6 @@ static int ipoib_hard_header(struct sk_buff *skb,
+ unsigned short type,
+ const void *daddr, const void *saddr, unsigned len)
+ {
+- struct ipoib_pseudo_header *phdr;
+ struct ipoib_header *header;
+
+ header = (struct ipoib_header *) skb_push(skb, sizeof *header);
+@@ -1159,8 +1162,7 @@ static int ipoib_hard_header(struct sk_buff *skb,
+ * destination address into skb hard header so we can figure out where
+ * to send the packet later.
+ */
+- phdr = (struct ipoib_pseudo_header *) skb_push(skb, sizeof(*phdr));
+- memcpy(phdr->hwaddr, daddr, INFINIBAND_ALEN);
++ push_pseudo_header(skb, daddr);
+
+ return IPOIB_HARD_LEN;
+ }
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 79bf48477ddb..d9b57f5958b5 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -371,7 +371,6 @@ static struct srp_fr_pool *srp_create_fr_pool(struct ib_device *device,
+ struct srp_fr_desc *d;
+ struct ib_mr *mr;
+ int i, ret = -EINVAL;
+- enum ib_mr_type mr_type;
+
+ if (pool_size <= 0)
+ goto err;
+@@ -385,13 +384,9 @@ static struct srp_fr_pool *srp_create_fr_pool(struct ib_device *device,
+ spin_lock_init(&pool->lock);
+ INIT_LIST_HEAD(&pool->free_list);
+
+- if (device->attrs.device_cap_flags & IB_DEVICE_SG_GAPS_REG)
+- mr_type = IB_MR_TYPE_SG_GAPS;
+- else
+- mr_type = IB_MR_TYPE_MEM_REG;
+-
+ for (i = 0, d = &pool->desc[0]; i < pool->size; i++, d++) {
+- mr = ib_alloc_mr(pd, mr_type, max_page_list_len);
++ mr = ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG,
++ max_page_list_len);
+ if (IS_ERR(mr)) {
+ ret = PTR_ERR(mr);
+ if (ret == -ENOMEM)
+@@ -1889,17 +1884,24 @@ static void srp_process_rsp(struct srp_rdma_ch *ch, struct srp_rsp *rsp)
+ if (unlikely(rsp->tag & SRP_TAG_TSK_MGMT)) {
+ spin_lock_irqsave(&ch->lock, flags);
+ ch->req_lim += be32_to_cpu(rsp->req_lim_delta);
++ if (rsp->tag == ch->tsk_mgmt_tag) {
++ ch->tsk_mgmt_status = -1;
++ if (be32_to_cpu(rsp->resp_data_len) >= 4)
++ ch->tsk_mgmt_status = rsp->data[3];
++ complete(&ch->tsk_mgmt_done);
++ } else {
++ shost_printk(KERN_ERR, target->scsi_host,
++ "Received tsk mgmt response too late for tag %#llx\n",
++ rsp->tag);
++ }
+ spin_unlock_irqrestore(&ch->lock, flags);
+-
+- ch->tsk_mgmt_status = -1;
+- if (be32_to_cpu(rsp->resp_data_len) >= 4)
+- ch->tsk_mgmt_status = rsp->data[3];
+- complete(&ch->tsk_mgmt_done);
+ } else {
+ scmnd = scsi_host_find_tag(target->scsi_host, rsp->tag);
+- if (scmnd) {
++ if (scmnd && scmnd->host_scribble) {
+ req = (void *)scmnd->host_scribble;
+ scmnd = srp_claim_req(ch, req, NULL, scmnd);
++ } else {
++ scmnd = NULL;
+ }
+ if (!scmnd) {
+ shost_printk(KERN_ERR, target->scsi_host,
+@@ -2531,19 +2533,18 @@ srp_change_queue_depth(struct scsi_device *sdev, int qdepth)
+ }
+
+ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun,
+- u8 func)
++ u8 func, u8 *status)
+ {
+ struct srp_target_port *target = ch->target;
+ struct srp_rport *rport = target->rport;
+ struct ib_device *dev = target->srp_host->srp_dev->dev;
+ struct srp_iu *iu;
+ struct srp_tsk_mgmt *tsk_mgmt;
++ int res;
+
+ if (!ch->connected || target->qp_in_error)
+ return -1;
+
+- init_completion(&ch->tsk_mgmt_done);
+-
+ /*
+ * Lock the rport mutex to avoid that srp_create_ch_ib() is
+ * invoked while a task management function is being sent.
+@@ -2566,10 +2567,16 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun,
+
+ tsk_mgmt->opcode = SRP_TSK_MGMT;
+ int_to_scsilun(lun, &tsk_mgmt->lun);
+- tsk_mgmt->tag = req_tag | SRP_TAG_TSK_MGMT;
+ tsk_mgmt->tsk_mgmt_func = func;
+ tsk_mgmt->task_tag = req_tag;
+
++ spin_lock_irq(&ch->lock);
++ ch->tsk_mgmt_tag = (ch->tsk_mgmt_tag + 1) | SRP_TAG_TSK_MGMT;
++ tsk_mgmt->tag = ch->tsk_mgmt_tag;
++ spin_unlock_irq(&ch->lock);
++
++ init_completion(&ch->tsk_mgmt_done);
++
+ ib_dma_sync_single_for_device(dev, iu->dma, sizeof *tsk_mgmt,
+ DMA_TO_DEVICE);
+ if (srp_post_send(ch, iu, sizeof(*tsk_mgmt))) {
+@@ -2578,13 +2585,15 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag, u64 lun,
+
+ return -1;
+ }
++ res = wait_for_completion_timeout(&ch->tsk_mgmt_done,
++ msecs_to_jiffies(SRP_ABORT_TIMEOUT_MS));
++ if (res > 0 && status)
++ *status = ch->tsk_mgmt_status;
+ mutex_unlock(&rport->mutex);
+
+- if (!wait_for_completion_timeout(&ch->tsk_mgmt_done,
+- msecs_to_jiffies(SRP_ABORT_TIMEOUT_MS)))
+- return -1;
++ WARN_ON_ONCE(res < 0);
+
+- return 0;
++ return res > 0 ? 0 : -1;
+ }
+
+ static int srp_abort(struct scsi_cmnd *scmnd)
+@@ -2610,7 +2619,7 @@ static int srp_abort(struct scsi_cmnd *scmnd)
+ shost_printk(KERN_ERR, target->scsi_host,
+ "Sending SRP abort for tag %#x\n", tag);
+ if (srp_send_tsk_mgmt(ch, tag, scmnd->device->lun,
+- SRP_TSK_ABORT_TASK) == 0)
++ SRP_TSK_ABORT_TASK, NULL) == 0)
+ ret = SUCCESS;
+ else if (target->rport->state == SRP_RPORT_LOST)
+ ret = FAST_IO_FAIL;
+@@ -2628,14 +2637,15 @@ static int srp_reset_device(struct scsi_cmnd *scmnd)
+ struct srp_target_port *target = host_to_target(scmnd->device->host);
+ struct srp_rdma_ch *ch;
+ int i;
++ u8 status;
+
+ shost_printk(KERN_ERR, target->scsi_host, "SRP reset_device called\n");
+
+ ch = &target->ch[0];
+ if (srp_send_tsk_mgmt(ch, SRP_TAG_NO_REQ, scmnd->device->lun,
+- SRP_TSK_LUN_RESET))
++ SRP_TSK_LUN_RESET, &status))
+ return FAILED;
+- if (ch->tsk_mgmt_status)
++ if (status)
+ return FAILED;
+
+ for (i = 0; i < target->ch_count; i++) {
+@@ -2664,9 +2674,8 @@ static int srp_slave_alloc(struct scsi_device *sdev)
+ struct Scsi_Host *shost = sdev->host;
+ struct srp_target_port *target = host_to_target(shost);
+ struct srp_device *srp_dev = target->srp_host->srp_dev;
+- struct ib_device *ibdev = srp_dev->dev;
+
+- if (!(ibdev->attrs.device_cap_flags & IB_DEVICE_SG_GAPS_REG))
++ if (true)
+ blk_queue_virt_boundary(sdev->request_queue,
+ ~srp_dev->mr_page_mask);
+
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
+index 21c69695f9d4..32ed40db3ca2 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.h
++++ b/drivers/infiniband/ulp/srp/ib_srp.h
+@@ -163,6 +163,7 @@ struct srp_rdma_ch {
+ int max_ti_iu_len;
+ int comp_vector;
+
++ u64 tsk_mgmt_tag;
+ struct completion tsk_mgmt_done;
+ u8 tsk_mgmt_status;
+ bool connected;
+diff --git a/drivers/memory/atmel-ebi.c b/drivers/memory/atmel-ebi.c
+index 047d6fcdcec2..1eaaa2be8ff2 100644
+--- a/drivers/memory/atmel-ebi.c
++++ b/drivers/memory/atmel-ebi.c
+@@ -93,7 +93,7 @@ static void at91sam9_ebi_get_config(struct at91_ebi_dev *ebid,
+ struct at91_ebi_dev_config *conf)
+ {
+ struct at91sam9_smc_generic_fields *fields = &ebid->ebi->sam9;
+- unsigned int clk_rate = clk_get_rate(ebid->ebi->clk);
++ unsigned int clk_period = NSEC_PER_SEC / clk_get_rate(ebid->ebi->clk);
+ struct at91sam9_ebi_dev_config *config = &conf->sam9;
+ struct at91sam9_smc_timings *timings = &config->timings;
+ unsigned int val;
+@@ -102,43 +102,43 @@ static void at91sam9_ebi_get_config(struct at91_ebi_dev *ebid,
+ config->mode = val & ~AT91_SMC_TDF;
+
+ val = (val & AT91_SMC_TDF) >> 16;
+- timings->tdf_ns = clk_rate * val;
++ timings->tdf_ns = clk_period * val;
+
+ regmap_fields_read(fields->setup, conf->cs, &val);
+ timings->ncs_rd_setup_ns = (val >> 24) & 0x1f;
+ timings->ncs_rd_setup_ns += ((val >> 29) & 0x1) * 128;
+- timings->ncs_rd_setup_ns *= clk_rate;
++ timings->ncs_rd_setup_ns *= clk_period;
+ timings->nrd_setup_ns = (val >> 16) & 0x1f;
+ timings->nrd_setup_ns += ((val >> 21) & 0x1) * 128;
+- timings->nrd_setup_ns *= clk_rate;
++ timings->nrd_setup_ns *= clk_period;
+ timings->ncs_wr_setup_ns = (val >> 8) & 0x1f;
+ timings->ncs_wr_setup_ns += ((val >> 13) & 0x1) * 128;
+- timings->ncs_wr_setup_ns *= clk_rate;
++ timings->ncs_wr_setup_ns *= clk_period;
+ timings->nwe_setup_ns = val & 0x1f;
+ timings->nwe_setup_ns += ((val >> 5) & 0x1) * 128;
+- timings->nwe_setup_ns *= clk_rate;
++ timings->nwe_setup_ns *= clk_period;
+
+ regmap_fields_read(fields->pulse, conf->cs, &val);
+ timings->ncs_rd_pulse_ns = (val >> 24) & 0x3f;
+ timings->ncs_rd_pulse_ns += ((val >> 30) & 0x1) * 256;
+- timings->ncs_rd_pulse_ns *= clk_rate;
++ timings->ncs_rd_pulse_ns *= clk_period;
+ timings->nrd_pulse_ns = (val >> 16) & 0x3f;
+ timings->nrd_pulse_ns += ((val >> 22) & 0x1) * 256;
+- timings->nrd_pulse_ns *= clk_rate;
++ timings->nrd_pulse_ns *= clk_period;
+ timings->ncs_wr_pulse_ns = (val >> 8) & 0x3f;
+ timings->ncs_wr_pulse_ns += ((val >> 14) & 0x1) * 256;
+- timings->ncs_wr_pulse_ns *= clk_rate;
++ timings->ncs_wr_pulse_ns *= clk_period;
+ timings->nwe_pulse_ns = val & 0x3f;
+ timings->nwe_pulse_ns += ((val >> 6) & 0x1) * 256;
+- timings->nwe_pulse_ns *= clk_rate;
++ timings->nwe_pulse_ns *= clk_period;
+
+ regmap_fields_read(fields->cycle, conf->cs, &val);
+ timings->nrd_cycle_ns = (val >> 16) & 0x7f;
+ timings->nrd_cycle_ns += ((val >> 23) & 0x3) * 256;
+- timings->nrd_cycle_ns *= clk_rate;
++ timings->nrd_cycle_ns *= clk_period;
+ timings->nwe_cycle_ns = val & 0x7f;
+ timings->nwe_cycle_ns += ((val >> 7) & 0x3) * 256;
+- timings->nwe_cycle_ns *= clk_rate;
++ timings->nwe_cycle_ns *= clk_period;
+ }
+
+ static int at91_xlate_timing(struct device_node *np, const char *prop,
+@@ -334,6 +334,7 @@ static int at91sam9_ebi_apply_config(struct at91_ebi_dev *ebid,
+ struct at91_ebi_dev_config *conf)
+ {
+ unsigned int clk_rate = clk_get_rate(ebid->ebi->clk);
++ unsigned int clk_period = NSEC_PER_SEC / clk_rate;
+ struct at91sam9_ebi_dev_config *config = &conf->sam9;
+ struct at91sam9_smc_timings *timings = &config->timings;
+ struct at91sam9_smc_generic_fields *fields = &ebid->ebi->sam9;
+@@ -376,7 +377,7 @@ static int at91sam9_ebi_apply_config(struct at91_ebi_dev *ebid,
+ val |= AT91SAM9_SMC_NWECYCLE(coded_val);
+ regmap_fields_write(fields->cycle, conf->cs, val);
+
+- val = DIV_ROUND_UP(timings->tdf_ns, clk_rate);
++ val = DIV_ROUND_UP(timings->tdf_ns, clk_period);
+ if (val > AT91_SMC_TDF_MAX)
+ val = AT91_SMC_TDF_MAX;
+ regmap_fields_write(fields->mode, conf->cs,
+diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
+index b24d76723fb0..08e7d3a54425 100644
+--- a/drivers/misc/cxl/cxl.h
++++ b/drivers/misc/cxl/cxl.h
+@@ -419,6 +419,9 @@ struct cxl_afu {
+ struct mutex contexts_lock;
+ spinlock_t afu_cntl_lock;
+
++ /* -1: AFU deconfigured/locked, >= 0: number of readers */
++ atomic_t configured_state;
++
+ /* AFU error buffer fields and bin attribute for sysfs */
+ u64 eb_len, eb_offset;
+ struct bin_attribute attr_eb;
+diff --git a/drivers/misc/cxl/main.c b/drivers/misc/cxl/main.c
+index 62e0dfb5f15b..cc1706a92ace 100644
+--- a/drivers/misc/cxl/main.c
++++ b/drivers/misc/cxl/main.c
+@@ -268,7 +268,7 @@ struct cxl_afu *cxl_alloc_afu(struct cxl *adapter, int slice)
+ idr_init(&afu->contexts_idr);
+ mutex_init(&afu->contexts_lock);
+ spin_lock_init(&afu->afu_cntl_lock);
+-
++ atomic_set(&afu->configured_state, -1);
+ afu->prefault_mode = CXL_PREFAULT_NONE;
+ afu->irqs_max = afu->adapter->user_irqs;
+
+diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
+index 80a87ab25b83..34e83219447c 100644
+--- a/drivers/misc/cxl/pci.c
++++ b/drivers/misc/cxl/pci.c
+@@ -1129,6 +1129,7 @@ static int pci_configure_afu(struct cxl_afu *afu, struct cxl *adapter, struct pc
+ if ((rc = cxl_native_register_psl_irq(afu)))
+ goto err2;
+
++ atomic_set(&afu->configured_state, 0);
+ return 0;
+
+ err2:
+@@ -1141,6 +1142,14 @@ static int pci_configure_afu(struct cxl_afu *afu, struct cxl *adapter, struct pc
+
+ static void pci_deconfigure_afu(struct cxl_afu *afu)
+ {
++ /*
++ * It's okay to deconfigure when AFU is already locked, otherwise wait
++ * until there are no readers
++ */
++ if (atomic_read(&afu->configured_state) != -1) {
++ while (atomic_cmpxchg(&afu->configured_state, 0, -1) != -1)
++ schedule();
++ }
+ cxl_native_release_psl_irq(afu);
+ if (afu->adapter->native->sl_ops->release_serr_irq)
+ afu->adapter->native->sl_ops->release_serr_irq(afu);
+diff --git a/drivers/misc/cxl/vphb.c b/drivers/misc/cxl/vphb.c
+index 3519acebfdab..512a4897dbf6 100644
+--- a/drivers/misc/cxl/vphb.c
++++ b/drivers/misc/cxl/vphb.c
+@@ -76,23 +76,32 @@ static int cxl_pcie_cfg_record(u8 bus, u8 devfn)
+ return (bus << 8) + devfn;
+ }
+
+-static int cxl_pcie_config_info(struct pci_bus *bus, unsigned int devfn,
+- struct cxl_afu **_afu, int *_record)
++static inline struct cxl_afu *pci_bus_to_afu(struct pci_bus *bus)
+ {
+- struct pci_controller *phb;
+- struct cxl_afu *afu;
+- int record;
++ struct pci_controller *phb = bus ? pci_bus_to_host(bus) : NULL;
+
+- phb = pci_bus_to_host(bus);
+- if (phb == NULL)
+- return PCIBIOS_DEVICE_NOT_FOUND;
++ return phb ? phb->private_data : NULL;
++}
++
++static void cxl_afu_configured_put(struct cxl_afu *afu)
++{
++ atomic_dec_if_positive(&afu->configured_state);
++}
++
++static bool cxl_afu_configured_get(struct cxl_afu *afu)
++{
++ return atomic_inc_unless_negative(&afu->configured_state);
++}
++
++static inline int cxl_pcie_config_info(struct pci_bus *bus, unsigned int devfn,
++ struct cxl_afu *afu, int *_record)
++{
++ int record;
+
+- afu = (struct cxl_afu *)phb->private_data;
+ record = cxl_pcie_cfg_record(bus->number, devfn);
+ if (record > afu->crs_num)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+
+- *_afu = afu;
+ *_record = record;
+ return 0;
+ }
+@@ -106,9 +115,14 @@ static int cxl_pcie_read_config(struct pci_bus *bus, unsigned int devfn,
+ u16 val16;
+ u32 val32;
+
+- rc = cxl_pcie_config_info(bus, devfn, &afu, &record);
++ afu = pci_bus_to_afu(bus);
++ /* Grab a reader lock on afu. */
++ if (afu == NULL || !cxl_afu_configured_get(afu))
++ return PCIBIOS_DEVICE_NOT_FOUND;
++
++ rc = cxl_pcie_config_info(bus, devfn, afu, &record);
+ if (rc)
+- return rc;
++ goto out;
+
+ switch (len) {
+ case 1:
+@@ -127,10 +141,9 @@ static int cxl_pcie_read_config(struct pci_bus *bus, unsigned int devfn,
+ WARN_ON(1);
+ }
+
+- if (rc)
+- return PCIBIOS_DEVICE_NOT_FOUND;
+-
+- return PCIBIOS_SUCCESSFUL;
++out:
++ cxl_afu_configured_put(afu);
++ return rc ? PCIBIOS_DEVICE_NOT_FOUND : PCIBIOS_SUCCESSFUL;
+ }
+
+ static int cxl_pcie_write_config(struct pci_bus *bus, unsigned int devfn,
+@@ -139,9 +152,14 @@ static int cxl_pcie_write_config(struct pci_bus *bus, unsigned int devfn,
+ int rc, record;
+ struct cxl_afu *afu;
+
+- rc = cxl_pcie_config_info(bus, devfn, &afu, &record);
++ afu = pci_bus_to_afu(bus);
++ /* Grab a reader lock on afu. */
++ if (afu == NULL || !cxl_afu_configured_get(afu))
++ return PCIBIOS_DEVICE_NOT_FOUND;
++
++ rc = cxl_pcie_config_info(bus, devfn, afu, &record);
+ if (rc)
+- return rc;
++ goto out;
+
+ switch (len) {
+ case 1:
+@@ -157,10 +175,9 @@ static int cxl_pcie_write_config(struct pci_bus *bus, unsigned int devfn,
+ WARN_ON(1);
+ }
+
+- if (rc)
+- return PCIBIOS_SET_FAILED;
+-
+- return PCIBIOS_SUCCESSFUL;
++out:
++ cxl_afu_configured_put(afu);
++ return rc ? PCIBIOS_SET_FAILED : PCIBIOS_SUCCESSFUL;
+ }
+
+ static struct pci_ops cxl_pcie_pci_ops =
+diff --git a/drivers/net/ethernet/marvell/mvpp2.c b/drivers/net/ethernet/marvell/mvpp2.c
+index 4fe430ceb194..5f1f23e4878d 100644
+--- a/drivers/net/ethernet/marvell/mvpp2.c
++++ b/drivers/net/ethernet/marvell/mvpp2.c
+@@ -991,7 +991,7 @@ static void mvpp2_txq_inc_put(struct mvpp2_txq_pcpu *txq_pcpu,
+ txq_pcpu->buffs + txq_pcpu->txq_put_index;
+ tx_buf->skb = skb;
+ tx_buf->size = tx_desc->data_size;
+- tx_buf->phys = tx_desc->buf_phys_addr;
++ tx_buf->phys = tx_desc->buf_phys_addr + tx_desc->packet_offset;
+ txq_pcpu->txq_put_index++;
+ if (txq_pcpu->txq_put_index == txq_pcpu->size)
+ txq_pcpu->txq_put_index = 0;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index dfb0658713d9..d2219885071f 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -1661,7 +1661,7 @@ static u8 brcmf_sdio_rxglom(struct brcmf_sdio *bus, u8 rxseq)
+ pfirst->len, pfirst->next,
+ pfirst->prev);
+ skb_unlink(pfirst, &bus->glom);
+- if (brcmf_sdio_fromevntchan(pfirst->data))
++ if (brcmf_sdio_fromevntchan(&dptr[SDPCM_HWHDR_LEN]))
+ brcmf_rx_event(bus->sdiodev->dev, pfirst);
+ else
+ brcmf_rx_frame(bus->sdiodev->dev, pfirst,
+diff --git a/drivers/nvdimm/namespace_devs.c b/drivers/nvdimm/namespace_devs.c
+index ce3e8dfa10ad..1b481a5fb966 100644
+--- a/drivers/nvdimm/namespace_devs.c
++++ b/drivers/nvdimm/namespace_devs.c
+@@ -1700,6 +1700,7 @@ static int select_pmem_id(struct nd_region *nd_region, u8 *pmem_id)
+ struct device *create_namespace_pmem(struct nd_region *nd_region,
+ struct nd_namespace_label *nd_label)
+ {
++ u64 altcookie = nd_region_interleave_set_altcookie(nd_region);
+ u64 cookie = nd_region_interleave_set_cookie(nd_region);
+ struct nd_label_ent *label_ent;
+ struct nd_namespace_pmem *nspm;
+@@ -1718,7 +1719,11 @@ struct device *create_namespace_pmem(struct nd_region *nd_region,
+ if (__le64_to_cpu(nd_label->isetcookie) != cookie) {
+ dev_dbg(&nd_region->dev, "invalid cookie in label: %pUb\n",
+ nd_label->uuid);
+- return ERR_PTR(-EAGAIN);
++ if (__le64_to_cpu(nd_label->isetcookie) != altcookie)
++ return ERR_PTR(-EAGAIN);
++
++ dev_dbg(&nd_region->dev, "valid altcookie in label: %pUb\n",
++ nd_label->uuid);
+ }
+
+ nspm = kzalloc(sizeof(*nspm), GFP_KERNEL);
+@@ -1733,9 +1738,14 @@ struct device *create_namespace_pmem(struct nd_region *nd_region,
+ res->name = dev_name(&nd_region->dev);
+ res->flags = IORESOURCE_MEM;
+
+- for (i = 0; i < nd_region->ndr_mappings; i++)
+- if (!has_uuid_at_pos(nd_region, nd_label->uuid, cookie, i))
+- break;
++ for (i = 0; i < nd_region->ndr_mappings; i++) {
++ if (has_uuid_at_pos(nd_region, nd_label->uuid, cookie, i))
++ continue;
++ if (has_uuid_at_pos(nd_region, nd_label->uuid, altcookie, i))
++ continue;
++ break;
++ }
++
+ if (i < nd_region->ndr_mappings) {
+ struct nvdimm_drvdata *ndd = to_ndd(&nd_region->mapping[i]);
+
+diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
+index 35dd75057e16..2a99c83aa19f 100644
+--- a/drivers/nvdimm/nd.h
++++ b/drivers/nvdimm/nd.h
+@@ -328,6 +328,7 @@ struct nd_region *to_nd_region(struct device *dev);
+ int nd_region_to_nstype(struct nd_region *nd_region);
+ int nd_region_register_namespaces(struct nd_region *nd_region, int *err);
+ u64 nd_region_interleave_set_cookie(struct nd_region *nd_region);
++u64 nd_region_interleave_set_altcookie(struct nd_region *nd_region);
+ void nvdimm_bus_lock(struct device *dev);
+ void nvdimm_bus_unlock(struct device *dev);
+ bool is_nvdimm_bus_locked(struct device *dev);
+diff --git a/drivers/nvdimm/region_devs.c b/drivers/nvdimm/region_devs.c
+index 7cd705f3247c..b7cb5066d961 100644
+--- a/drivers/nvdimm/region_devs.c
++++ b/drivers/nvdimm/region_devs.c
+@@ -505,6 +505,15 @@ u64 nd_region_interleave_set_cookie(struct nd_region *nd_region)
+ return 0;
+ }
+
++u64 nd_region_interleave_set_altcookie(struct nd_region *nd_region)
++{
++ struct nd_interleave_set *nd_set = nd_region->nd_set;
++
++ if (nd_set)
++ return nd_set->altcookie;
++ return 0;
++}
++
+ void nd_mapping_free_labels(struct nd_mapping *nd_mapping)
+ {
+ struct nd_label_ent *label_ent, *e;
+diff --git a/drivers/pci/hotplug/pnv_php.c b/drivers/pci/hotplug/pnv_php.c
+index acb2be0c8c2c..e96973b95e7a 100644
+--- a/drivers/pci/hotplug/pnv_php.c
++++ b/drivers/pci/hotplug/pnv_php.c
+@@ -82,7 +82,7 @@ static void pnv_php_free_slot(struct kref *kref)
+ static inline void pnv_php_put_slot(struct pnv_php_slot *php_slot)
+ {
+
+- if (WARN_ON(!php_slot))
++ if (!php_slot)
+ return;
+
+ kref_put(&php_slot->kref, pnv_php_free_slot);
+@@ -436,9 +436,21 @@ static int pnv_php_enable(struct pnv_php_slot *php_slot, bool rescan)
+ if (ret)
+ return ret;
+
+- /* Proceed if there have nothing behind the slot */
+- if (presence == OPAL_PCI_SLOT_EMPTY)
++ /*
++ * Proceed if there is nothing behind the slot. However,
++ * we should leave the slot in registered state at the
++ * beginning. Otherwise, the PCI devices inserted afterwards
++ * won't be probed and populated.
++ */
++ if (presence == OPAL_PCI_SLOT_EMPTY) {
++ if (!php_slot->power_state_check) {
++ php_slot->power_state_check = true;
++
++ return 0;
++ }
++
+ goto scan;
++ }
+
+ /*
+ * If the power supply to the slot is off, we can't detect
+@@ -713,8 +725,12 @@ static irqreturn_t pnv_php_interrupt(int irq, void *data)
+ added = !!(lsts & PCI_EXP_LNKSTA_DLLLA);
+ } else if (sts & PCI_EXP_SLTSTA_PDC) {
+ ret = pnv_pci_get_presence_state(php_slot->id, &presence);
+- if (!ret)
++ if (ret) {
++ dev_warn(&pdev->dev, "PCI slot [%s] error %d getting presence (0x%04x), to retry the operation.\n",
++ php_slot->name, ret, sts);
+ return IRQ_HANDLED;
++ }
++
+ added = !!(presence == OPAL_PCI_SLOT_PRESENT);
+ } else {
+ return IRQ_NONE;
+@@ -799,6 +815,14 @@ static void pnv_php_enable_irq(struct pnv_php_slot *php_slot)
+ struct pci_dev *pdev = php_slot->pdev;
+ int irq, ret;
+
++ /*
++ * The MSI/MSIx interrupt might have been occupied by other
++ * drivers. Don't populate the surprise hotplug capability
++ * in that case.
++ */
++ if (pci_dev_msi_enabled(pdev))
++ return;
++
+ ret = pci_enable_device(pdev);
+ if (ret) {
+ dev_warn(&pdev->dev, "Error %d enabling device\n", ret);
+diff --git a/drivers/phy/phy-qcom-ufs.c b/drivers/phy/phy-qcom-ufs.c
+index c69568b8543d..528540095c3f 100644
+--- a/drivers/phy/phy-qcom-ufs.c
++++ b/drivers/phy/phy-qcom-ufs.c
+@@ -189,12 +189,12 @@ int ufs_qcom_phy_init_clks(struct ufs_qcom_phy *phy_common)
+ if (err)
+ goto out;
+
++skip_txrx_clk:
+ err = ufs_qcom_phy_clk_get(phy_common->dev, "ref_clk_src",
+ &phy_common->ref_clk_src);
+ if (err)
+ goto out;
+
+-skip_txrx_clk:
+ /*
+ * "ref_clk_parent" is optional hence don't abort init if it's not
+ * found.
+@@ -217,12 +217,7 @@ static int __ufs_qcom_phy_init_vreg(struct device *dev,
+
+ char prop_name[MAX_PROP_NAME];
+
+- vreg->name = devm_kstrdup(dev, name, GFP_KERNEL);
+- if (!vreg->name) {
+- err = -ENOMEM;
+- goto out;
+- }
+-
++ vreg->name = name;
+ vreg->reg = devm_regulator_get(dev, name);
+ if (IS_ERR(vreg->reg)) {
+ err = PTR_ERR(vreg->reg);
+@@ -265,8 +260,6 @@ static int __ufs_qcom_phy_init_vreg(struct device *dev,
+ }
+
+ out:
+- if (err)
+- kfree(vreg->name);
+ return err;
+ }
+
+diff --git a/drivers/pwm/pwm-pca9685.c b/drivers/pwm/pwm-pca9685.c
+index 117fccf7934a..01a6a83f625d 100644
+--- a/drivers/pwm/pwm-pca9685.c
++++ b/drivers/pwm/pwm-pca9685.c
+@@ -65,7 +65,6 @@
+ #define PCA9685_MAXCHAN 0x10
+
+ #define LED_FULL (1 << 4)
+-#define MODE1_RESTART (1 << 7)
+ #define MODE1_SLEEP (1 << 4)
+ #define MODE2_INVRT (1 << 4)
+ #define MODE2_OUTDRV (1 << 2)
+@@ -117,16 +116,6 @@ static int pca9685_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ udelay(500);
+
+ pca->period_ns = period_ns;
+-
+- /*
+- * If the duty cycle did not change, restart PWM with
+- * the same duty cycle to period ratio and return.
+- */
+- if (duty_ns == pca->duty_ns) {
+- regmap_update_bits(pca->regmap, PCA9685_MODE1,
+- MODE1_RESTART, 0x1);
+- return 0;
+- }
+ } else {
+ dev_err(chip->dev,
+ "prescaler not set: period out of bounds!\n");
+diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
+index 9d66b4fb174b..415d10a67b7a 100644
+--- a/drivers/s390/block/dcssblk.c
++++ b/drivers/s390/block/dcssblk.c
+@@ -892,7 +892,7 @@ dcssblk_direct_access (struct block_device *bdev, sector_t secnum,
+ dev_info = bdev->bd_disk->private_data;
+ if (!dev_info)
+ return -ENODEV;
+- dev_sz = dev_info->end - dev_info->start;
++ dev_sz = dev_info->end - dev_info->start + 1;
+ offset = secnum * 512;
+ *kaddr = (void *) dev_info->start + offset;
+ *pfn = __pfn_to_pfn_t(PFN_DOWN(dev_info->start + offset), PFN_DEV);
+diff --git a/drivers/s390/cio/ioasm.c b/drivers/s390/cio/ioasm.c
+index 8225da619014..4182f60124da 100644
+--- a/drivers/s390/cio/ioasm.c
++++ b/drivers/s390/cio/ioasm.c
+@@ -165,13 +165,15 @@ int tpi(struct tpi_info *addr)
+ int chsc(void *chsc_area)
+ {
+ typedef struct { char _[4096]; } addr_type;
+- int cc;
++ int cc = -EIO;
+
+ asm volatile(
+ " .insn rre,0xb25f0000,%2,0\n"
+- " ipm %0\n"
++ "0: ipm %0\n"
+ " srl %0,28\n"
+- : "=d" (cc), "=m" (*(addr_type *) chsc_area)
++ "1:\n"
++ EX_TABLE(0b, 1b)
++ : "+d" (cc), "=m" (*(addr_type *) chsc_area)
+ : "d" (chsc_area), "m" (*(addr_type *) chsc_area)
+ : "cc");
+ trace_s390_cio_chsc(chsc_area, cc);
+diff --git a/drivers/s390/cio/qdio_thinint.c b/drivers/s390/cio/qdio_thinint.c
+index 5d06253c2a7a..30e9fbbff051 100644
+--- a/drivers/s390/cio/qdio_thinint.c
++++ b/drivers/s390/cio/qdio_thinint.c
+@@ -147,11 +147,11 @@ static inline void tiqdio_call_inq_handlers(struct qdio_irq *irq)
+ struct qdio_q *q;
+ int i;
+
+- for_each_input_queue(irq, q, i) {
+- if (!references_shared_dsci(irq) &&
+- has_multiple_inq_on_dsci(irq))
+- xchg(q->irq_ptr->dsci, 0);
++ if (!references_shared_dsci(irq) &&
++ has_multiple_inq_on_dsci(irq))
++ xchg(irq->dsci, 0);
+
++ for_each_input_queue(irq, q, i) {
+ if (q->u.in.queue_start_poll) {
+ /* skip if polling is enabled or already in work */
+ if (test_and_set_bit(QDIO_QUEUE_IRQS_DISABLED,
+diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c
+index 26929c44d703..03bdaac5c6c9 100644
+--- a/drivers/target/target_core_device.c
++++ b/drivers/target/target_core_device.c
+@@ -78,12 +78,16 @@ transport_lookup_cmd_lun(struct se_cmd *se_cmd, u64 unpacked_lun)
+ &deve->read_bytes);
+
+ se_lun = rcu_dereference(deve->se_lun);
++
++ if (!percpu_ref_tryget_live(&se_lun->lun_ref)) {
++ se_lun = NULL;
++ goto out_unlock;
++ }
++
+ se_cmd->se_lun = rcu_dereference(deve->se_lun);
+ se_cmd->pr_res_key = deve->pr_res_key;
+ se_cmd->orig_fe_lun = unpacked_lun;
+ se_cmd->se_cmd_flags |= SCF_SE_LUN_CMD;
+-
+- percpu_ref_get(&se_lun->lun_ref);
+ se_cmd->lun_ref_active = true;
+
+ if ((se_cmd->data_direction == DMA_TO_DEVICE) &&
+@@ -97,6 +101,7 @@ transport_lookup_cmd_lun(struct se_cmd *se_cmd, u64 unpacked_lun)
+ goto ref_dev;
+ }
+ }
++out_unlock:
+ rcu_read_unlock();
+
+ if (!se_lun) {
+@@ -816,6 +821,7 @@ struct se_device *target_alloc_device(struct se_hba *hba, const char *name)
+ xcopy_lun = &dev->xcopy_lun;
+ rcu_assign_pointer(xcopy_lun->lun_se_dev, dev);
+ init_completion(&xcopy_lun->lun_ref_comp);
++ init_completion(&xcopy_lun->lun_shutdown_comp);
+ INIT_LIST_HEAD(&xcopy_lun->lun_deve_list);
+ INIT_LIST_HEAD(&xcopy_lun->lun_dev_link);
+ mutex_init(&xcopy_lun->lun_tg_pt_md_mutex);
+diff --git a/drivers/target/target_core_tpg.c b/drivers/target/target_core_tpg.c
+index d99752c6cd60..2744251178ad 100644
+--- a/drivers/target/target_core_tpg.c
++++ b/drivers/target/target_core_tpg.c
+@@ -445,7 +445,7 @@ static void core_tpg_lun_ref_release(struct percpu_ref *ref)
+ {
+ struct se_lun *lun = container_of(ref, struct se_lun, lun_ref);
+
+- complete(&lun->lun_ref_comp);
++ complete(&lun->lun_shutdown_comp);
+ }
+
+ int core_tpg_register(
+@@ -571,6 +571,7 @@ struct se_lun *core_tpg_alloc_lun(
+ lun->lun_link_magic = SE_LUN_LINK_MAGIC;
+ atomic_set(&lun->lun_acl_count, 0);
+ init_completion(&lun->lun_ref_comp);
++ init_completion(&lun->lun_shutdown_comp);
+ INIT_LIST_HEAD(&lun->lun_deve_list);
+ INIT_LIST_HEAD(&lun->lun_dev_link);
+ atomic_set(&lun->lun_tg_pt_secondary_offline, 0);
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index 437591bc7c08..665be670b3f3 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -2706,10 +2706,39 @@ void target_wait_for_sess_cmds(struct se_session *se_sess)
+ }
+ EXPORT_SYMBOL(target_wait_for_sess_cmds);
+
++static void target_lun_confirm(struct percpu_ref *ref)
++{
++ struct se_lun *lun = container_of(ref, struct se_lun, lun_ref);
++
++ complete(&lun->lun_ref_comp);
++}
++
+ void transport_clear_lun_ref(struct se_lun *lun)
+ {
+- percpu_ref_kill(&lun->lun_ref);
++ /*
++ * Mark the percpu-ref as DEAD, switch to atomic_t mode, drop
++ * the initial reference and schedule confirm kill to be
++ * executed after one full RCU grace period has completed.
++ */
++ percpu_ref_kill_and_confirm(&lun->lun_ref, target_lun_confirm);
++ /*
++ * The first completion waits for percpu_ref_switch_to_atomic_rcu()
++ * to call target_lun_confirm after lun->lun_ref has been marked
++ * as __PERCPU_REF_DEAD on all CPUs, and switches to atomic_t
++ * mode so that percpu_ref_tryget_live() lookup of lun->lun_ref
++ * fails for all new incoming I/O.
++ */
+ wait_for_completion(&lun->lun_ref_comp);
++ /*
++ * The second completion waits for percpu_ref_put_many() to
++ * invoke ->release() after lun->lun_ref has switched to
++ * atomic_t mode, and lun->lun_ref.count has reached zero.
++ *
++ * At this point all target-core lun->lun_ref references have
++ * been dropped via transport_lun_remove_cmd(), and it's safe
++ * to proceed with the remaining LUN shutdown.
++ */
++ wait_for_completion(&lun->lun_shutdown_comp);
+ }
+
+ static bool
+diff --git a/drivers/tty/n_hdlc.c b/drivers/tty/n_hdlc.c
+index eb278832f5ce..728c8243473b 100644
+--- a/drivers/tty/n_hdlc.c
++++ b/drivers/tty/n_hdlc.c
+@@ -114,7 +114,7 @@
+ #define DEFAULT_TX_BUF_COUNT 3
+
+ struct n_hdlc_buf {
+- struct n_hdlc_buf *link;
++ struct list_head list_item;
+ int count;
+ char buf[1];
+ };
+@@ -122,8 +122,7 @@ struct n_hdlc_buf {
+ #define N_HDLC_BUF_SIZE (sizeof(struct n_hdlc_buf) + maxframe)
+
+ struct n_hdlc_buf_list {
+- struct n_hdlc_buf *head;
+- struct n_hdlc_buf *tail;
++ struct list_head list;
+ int count;
+ spinlock_t spinlock;
+ };
+@@ -136,7 +135,6 @@ struct n_hdlc_buf_list {
+ * @backup_tty - TTY to use if tty gets closed
+ * @tbusy - reentrancy flag for tx wakeup code
+ * @woke_up - FIXME: describe this field
+- * @tbuf - currently transmitting tx buffer
+ * @tx_buf_list - list of pending transmit frame buffers
+ * @rx_buf_list - list of received frame buffers
+ * @tx_free_buf_list - list unused transmit frame buffers
+@@ -149,7 +147,6 @@ struct n_hdlc {
+ struct tty_struct *backup_tty;
+ int tbusy;
+ int woke_up;
+- struct n_hdlc_buf *tbuf;
+ struct n_hdlc_buf_list tx_buf_list;
+ struct n_hdlc_buf_list rx_buf_list;
+ struct n_hdlc_buf_list tx_free_buf_list;
+@@ -159,6 +156,8 @@ struct n_hdlc {
+ /*
+ * HDLC buffer list manipulation functions
+ */
++static void n_hdlc_buf_return(struct n_hdlc_buf_list *buf_list,
++ struct n_hdlc_buf *buf);
+ static void n_hdlc_buf_put(struct n_hdlc_buf_list *list,
+ struct n_hdlc_buf *buf);
+ static struct n_hdlc_buf *n_hdlc_buf_get(struct n_hdlc_buf_list *list);
+@@ -208,16 +207,9 @@ static void flush_tx_queue(struct tty_struct *tty)
+ {
+ struct n_hdlc *n_hdlc = tty2n_hdlc(tty);
+ struct n_hdlc_buf *buf;
+- unsigned long flags;
+
+ while ((buf = n_hdlc_buf_get(&n_hdlc->tx_buf_list)))
+ n_hdlc_buf_put(&n_hdlc->tx_free_buf_list, buf);
+- spin_lock_irqsave(&n_hdlc->tx_buf_list.spinlock, flags);
+- if (n_hdlc->tbuf) {
+- n_hdlc_buf_put(&n_hdlc->tx_free_buf_list, n_hdlc->tbuf);
+- n_hdlc->tbuf = NULL;
+- }
+- spin_unlock_irqrestore(&n_hdlc->tx_buf_list.spinlock, flags);
+ }
+
+ static struct tty_ldisc_ops n_hdlc_ldisc = {
+@@ -283,7 +275,6 @@ static void n_hdlc_release(struct n_hdlc *n_hdlc)
+ } else
+ break;
+ }
+- kfree(n_hdlc->tbuf);
+ kfree(n_hdlc);
+
+ } /* end of n_hdlc_release() */
+@@ -402,13 +393,7 @@ static void n_hdlc_send_frames(struct n_hdlc *n_hdlc, struct tty_struct *tty)
+ n_hdlc->woke_up = 0;
+ spin_unlock_irqrestore(&n_hdlc->tx_buf_list.spinlock, flags);
+
+- /* get current transmit buffer or get new transmit */
+- /* buffer from list of pending transmit buffers */
+-
+- tbuf = n_hdlc->tbuf;
+- if (!tbuf)
+- tbuf = n_hdlc_buf_get(&n_hdlc->tx_buf_list);
+-
++ tbuf = n_hdlc_buf_get(&n_hdlc->tx_buf_list);
+ while (tbuf) {
+ if (debuglevel >= DEBUG_LEVEL_INFO)
+ printk("%s(%d)sending frame %p, count=%d\n",
+@@ -420,7 +405,7 @@ static void n_hdlc_send_frames(struct n_hdlc *n_hdlc, struct tty_struct *tty)
+
+ /* rollback was possible and has been done */
+ if (actual == -ERESTARTSYS) {
+- n_hdlc->tbuf = tbuf;
++ n_hdlc_buf_return(&n_hdlc->tx_buf_list, tbuf);
+ break;
+ }
+ /* if transmit error, throw frame away by */
+@@ -435,10 +420,7 @@ static void n_hdlc_send_frames(struct n_hdlc *n_hdlc, struct tty_struct *tty)
+
+ /* free current transmit buffer */
+ n_hdlc_buf_put(&n_hdlc->tx_free_buf_list, tbuf);
+-
+- /* this tx buffer is done */
+- n_hdlc->tbuf = NULL;
+-
++
+ /* wait up sleeping writers */
+ wake_up_interruptible(&tty->write_wait);
+
+@@ -448,10 +430,12 @@ static void n_hdlc_send_frames(struct n_hdlc *n_hdlc, struct tty_struct *tty)
+ if (debuglevel >= DEBUG_LEVEL_INFO)
+ printk("%s(%d)frame %p pending\n",
+ __FILE__,__LINE__,tbuf);
+-
+- /* buffer not accepted by driver */
+- /* set this buffer as pending buffer */
+- n_hdlc->tbuf = tbuf;
++
++ /*
++ * the buffer was not accepted by the driver,
++ * so return it to the tx queue
++ */
++ n_hdlc_buf_return(&n_hdlc->tx_buf_list, tbuf);
+ break;
+ }
+ }
+@@ -749,7 +733,8 @@ static int n_hdlc_tty_ioctl(struct tty_struct *tty, struct file *file,
+ int error = 0;
+ int count;
+ unsigned long flags;
+-
++ struct n_hdlc_buf *buf = NULL;
++
+ if (debuglevel >= DEBUG_LEVEL_INFO)
+ printk("%s(%d)n_hdlc_tty_ioctl() called %d\n",
+ __FILE__,__LINE__,cmd);
+@@ -763,8 +748,10 @@ static int n_hdlc_tty_ioctl(struct tty_struct *tty, struct file *file,
+ /* report count of read data available */
+ /* in next available frame (if any) */
+ spin_lock_irqsave(&n_hdlc->rx_buf_list.spinlock,flags);
+- if (n_hdlc->rx_buf_list.head)
+- count = n_hdlc->rx_buf_list.head->count;
++ buf = list_first_entry_or_null(&n_hdlc->rx_buf_list.list,
++ struct n_hdlc_buf, list_item);
++ if (buf)
++ count = buf->count;
+ else
+ count = 0;
+ spin_unlock_irqrestore(&n_hdlc->rx_buf_list.spinlock,flags);
+@@ -776,8 +763,10 @@ static int n_hdlc_tty_ioctl(struct tty_struct *tty, struct file *file,
+ count = tty_chars_in_buffer(tty);
+ /* add size of next output frame in queue */
+ spin_lock_irqsave(&n_hdlc->tx_buf_list.spinlock,flags);
+- if (n_hdlc->tx_buf_list.head)
+- count += n_hdlc->tx_buf_list.head->count;
++ buf = list_first_entry_or_null(&n_hdlc->tx_buf_list.list,
++ struct n_hdlc_buf, list_item);
++ if (buf)
++ count += buf->count;
+ spin_unlock_irqrestore(&n_hdlc->tx_buf_list.spinlock,flags);
+ error = put_user(count, (int __user *)arg);
+ break;
+@@ -825,14 +814,14 @@ static unsigned int n_hdlc_tty_poll(struct tty_struct *tty, struct file *filp,
+ poll_wait(filp, &tty->write_wait, wait);
+
+ /* set bits for operations that won't block */
+- if (n_hdlc->rx_buf_list.head)
++ if (!list_empty(&n_hdlc->rx_buf_list.list))
+ mask |= POLLIN | POLLRDNORM; /* readable */
+ if (test_bit(TTY_OTHER_CLOSED, &tty->flags))
+ mask |= POLLHUP;
+ if (tty_hung_up_p(filp))
+ mask |= POLLHUP;
+ if (!tty_is_writelocked(tty) &&
+- n_hdlc->tx_free_buf_list.head)
++ !list_empty(&n_hdlc->tx_free_buf_list.list))
+ mask |= POLLOUT | POLLWRNORM; /* writable */
+ }
+ return mask;
+@@ -856,7 +845,12 @@ static struct n_hdlc *n_hdlc_alloc(void)
+ spin_lock_init(&n_hdlc->tx_free_buf_list.spinlock);
+ spin_lock_init(&n_hdlc->rx_buf_list.spinlock);
+ spin_lock_init(&n_hdlc->tx_buf_list.spinlock);
+-
++
++ INIT_LIST_HEAD(&n_hdlc->rx_free_buf_list.list);
++ INIT_LIST_HEAD(&n_hdlc->tx_free_buf_list.list);
++ INIT_LIST_HEAD(&n_hdlc->rx_buf_list.list);
++ INIT_LIST_HEAD(&n_hdlc->tx_buf_list.list);
++
+ /* allocate free rx buffer list */
+ for(i=0;i<DEFAULT_RX_BUF_COUNT;i++) {
+ buf = kmalloc(N_HDLC_BUF_SIZE, GFP_KERNEL);
+@@ -884,53 +878,65 @@ static struct n_hdlc *n_hdlc_alloc(void)
+ } /* end of n_hdlc_alloc() */
+
+ /**
++ * n_hdlc_buf_return - put the HDLC buffer after the head of the specified list
++ * @buf_list - pointer to the buffer list
++ * @buf - pointer to the buffer
++ */
++static void n_hdlc_buf_return(struct n_hdlc_buf_list *buf_list,
++ struct n_hdlc_buf *buf)
++{
++ unsigned long flags;
++
++ spin_lock_irqsave(&buf_list->spinlock, flags);
++
++ list_add(&buf->list_item, &buf_list->list);
++ buf_list->count++;
++
++ spin_unlock_irqrestore(&buf_list->spinlock, flags);
++}
++
++/**
+ * n_hdlc_buf_put - add specified HDLC buffer to tail of specified list
+- * @list - pointer to buffer list
++ * @buf_list - pointer to buffer list
+ * @buf - pointer to buffer
+ */
+-static void n_hdlc_buf_put(struct n_hdlc_buf_list *list,
++static void n_hdlc_buf_put(struct n_hdlc_buf_list *buf_list,
+ struct n_hdlc_buf *buf)
+ {
+ unsigned long flags;
+- spin_lock_irqsave(&list->spinlock,flags);
+-
+- buf->link=NULL;
+- if (list->tail)
+- list->tail->link = buf;
+- else
+- list->head = buf;
+- list->tail = buf;
+- (list->count)++;
+-
+- spin_unlock_irqrestore(&list->spinlock,flags);
+-
++
++ spin_lock_irqsave(&buf_list->spinlock, flags);
++
++ list_add_tail(&buf->list_item, &buf_list->list);
++ buf_list->count++;
++
++ spin_unlock_irqrestore(&buf_list->spinlock, flags);
+ } /* end of n_hdlc_buf_put() */
+
+ /**
+ * n_hdlc_buf_get - remove and return an HDLC buffer from list
+- * @list - pointer to HDLC buffer list
++ * @buf_list - pointer to HDLC buffer list
+ *
+ * Remove and return an HDLC buffer from the head of the specified HDLC buffer
+ * list.
+ * Returns a pointer to HDLC buffer if available, otherwise %NULL.
+ */
+-static struct n_hdlc_buf* n_hdlc_buf_get(struct n_hdlc_buf_list *list)
++static struct n_hdlc_buf *n_hdlc_buf_get(struct n_hdlc_buf_list *buf_list)
+ {
+ unsigned long flags;
+ struct n_hdlc_buf *buf;
+- spin_lock_irqsave(&list->spinlock,flags);
+-
+- buf = list->head;
++
++ spin_lock_irqsave(&buf_list->spinlock, flags);
++
++ buf = list_first_entry_or_null(&buf_list->list,
++ struct n_hdlc_buf, list_item);
+ if (buf) {
+- list->head = buf->link;
+- (list->count)--;
++ list_del(&buf->list_item);
++ buf_list->count--;
+ }
+- if (!list->head)
+- list->tail = NULL;
+-
+- spin_unlock_irqrestore(&list->spinlock,flags);
++
++ spin_unlock_irqrestore(&buf_list->spinlock, flags);
+ return buf;
+-
+ } /* end of n_hdlc_buf_get() */
+
+ static char hdlc_banner[] __initdata =
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 116436b7fa52..b2fd78ba02bc 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -2723,6 +2723,8 @@ enum pci_board_num_t {
+ pbn_b0_4_1152000_200,
+ pbn_b0_8_1152000_200,
+
++ pbn_b0_4_1250000,
++
+ pbn_b0_2_1843200,
+ pbn_b0_4_1843200,
+
+@@ -2954,6 +2956,13 @@ static struct pciserial_board pci_boards[] = {
+ .uart_offset = 0x200,
+ },
+
++ [pbn_b0_4_1250000] = {
++ .flags = FL_BASE0,
++ .num_ports = 4,
++ .base_baud = 1250000,
++ .uart_offset = 8,
++ },
++
+ [pbn_b0_2_1843200] = {
+ .flags = FL_BASE0,
+ .num_ports = 2,
+@@ -5589,6 +5598,10 @@ static struct pci_device_id serial_pci_tbl[] = {
+ { PCI_DEVICE(0x1c29, 0x1108), .driver_data = pbn_fintek_8 },
+ { PCI_DEVICE(0x1c29, 0x1112), .driver_data = pbn_fintek_12 },
+
++ /* MKS Tenta SCOM-080x serial cards */
++ { PCI_DEVICE(0x1601, 0x0800), .driver_data = pbn_b0_4_1250000 },
++ { PCI_DEVICE(0x1601, 0xa801), .driver_data = pbn_b0_4_1250000 },
++
+ /*
+ * These entries match devices with class COMMUNICATION_SERIAL,
+ * COMMUNICATION_MODEM or COMMUNICATION_MULTISERIAL
+diff --git a/fs/afs/mntpt.c b/fs/afs/mntpt.c
+index 81dd075356b9..d4fb0afc0097 100644
+--- a/fs/afs/mntpt.c
++++ b/fs/afs/mntpt.c
+@@ -202,7 +202,7 @@ static struct vfsmount *afs_mntpt_do_automount(struct dentry *mntpt)
+
+ /* try and do the mount */
+ _debug("--- attempting mount %s -o %s ---", devname, options);
+- mnt = vfs_kern_mount(&afs_fs_type, 0, devname, options);
++ mnt = vfs_submount(mntpt, &afs_fs_type, devname, options);
+ _debug("--- mount result %p ---", mnt);
+
+ free_page((unsigned long) devname);
+diff --git a/fs/autofs4/waitq.c b/fs/autofs4/waitq.c
+index 1278335ce366..79fbd85db4ba 100644
+--- a/fs/autofs4/waitq.c
++++ b/fs/autofs4/waitq.c
+@@ -436,8 +436,8 @@ int autofs4_wait(struct autofs_sb_info *sbi,
+ memcpy(&wq->name, &qstr, sizeof(struct qstr));
+ wq->dev = autofs4_get_dev(sbi);
+ wq->ino = autofs4_get_ino(sbi);
+- wq->uid = current_real_cred()->uid;
+- wq->gid = current_real_cred()->gid;
++ wq->uid = current_cred()->uid;
++ wq->gid = current_cred()->gid;
+ wq->pid = pid;
+ wq->tgid = tgid;
+ wq->status = -EINTR; /* Status return if interrupted */
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 1e861a063721..ec54415fac7d 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4502,19 +4502,8 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans,
+ if (found_type > min_type) {
+ del_item = 1;
+ } else {
+- if (item_end < new_size) {
+- /*
+- * With NO_HOLES mode, for the following mapping
+- *
+- * [0-4k][hole][8k-12k]
+- *
+- * if truncating isize down to 6k, it ends up
+- * isize being 8k.
+- */
+- if (btrfs_fs_incompat(root->fs_info, NO_HOLES))
+- last_size = new_size;
++ if (item_end < new_size)
+ break;
+- }
+ if (found_key.offset >= new_size)
+ del_item = 1;
+ else
+@@ -4697,8 +4686,12 @@ int btrfs_truncate_inode_items(struct btrfs_trans_handle *trans,
+ btrfs_abort_transaction(trans, ret);
+ }
+ error:
+- if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID)
++ if (root->root_key.objectid != BTRFS_TREE_LOG_OBJECTID) {
++ ASSERT(last_size >= new_size);
++ if (!err && last_size > new_size)
++ last_size = new_size;
+ btrfs_ordered_update_i_size(inode, last_size, NULL);
++ }
+
+ btrfs_free_path(path);
+
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index c9d2e553a6c4..0021026a2f74 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -628,6 +628,9 @@ static void __unregister_request(struct ceph_mds_client *mdsc,
+ {
+ dout("__unregister_request %p tid %lld\n", req, req->r_tid);
+
++ /* Never leave an unregistered request on an unsafe list! */
++ list_del_init(&req->r_unsafe_item);
++
+ if (req->r_tid == mdsc->oldest_tid) {
+ struct rb_node *p = rb_next(&req->r_node);
+ mdsc->oldest_tid = 0;
+@@ -1036,7 +1039,6 @@ static void cleanup_session_requests(struct ceph_mds_client *mdsc,
+ while (!list_empty(&session->s_unsafe)) {
+ req = list_first_entry(&session->s_unsafe,
+ struct ceph_mds_request, r_unsafe_item);
+- list_del_init(&req->r_unsafe_item);
+ pr_warn_ratelimited(" dropping unsafe request %llu\n",
+ req->r_tid);
+ __unregister_request(mdsc, req);
+@@ -2437,7 +2439,6 @@ static void handle_reply(struct ceph_mds_session *session, struct ceph_msg *msg)
+ * useful we could do with a revised return value.
+ */
+ dout("got safe reply %llu, mds%d\n", tid, mds);
+- list_del_init(&req->r_unsafe_item);
+
+ /* last unsafe request during umount? */
+ if (mdsc->stopping && !__get_oldest_req(mdsc))
+diff --git a/fs/cifs/cifs_dfs_ref.c b/fs/cifs/cifs_dfs_ref.c
+index ec9dbbcca3b9..9156be545b0f 100644
+--- a/fs/cifs/cifs_dfs_ref.c
++++ b/fs/cifs/cifs_dfs_ref.c
+@@ -245,7 +245,8 @@ char *cifs_compose_mount_options(const char *sb_mountdata,
+ * @fullpath: full path in UNC format
+ * @ref: server's referral
+ */
+-static struct vfsmount *cifs_dfs_do_refmount(struct cifs_sb_info *cifs_sb,
++static struct vfsmount *cifs_dfs_do_refmount(struct dentry *mntpt,
++ struct cifs_sb_info *cifs_sb,
+ const char *fullpath, const struct dfs_info3_param *ref)
+ {
+ struct vfsmount *mnt;
+@@ -259,7 +260,7 @@ static struct vfsmount *cifs_dfs_do_refmount(struct cifs_sb_info *cifs_sb,
+ if (IS_ERR(mountdata))
+ return (struct vfsmount *)mountdata;
+
+- mnt = vfs_kern_mount(&cifs_fs_type, 0, devname, mountdata);
++ mnt = vfs_submount(mntpt, &cifs_fs_type, devname, mountdata);
+ kfree(mountdata);
+ kfree(devname);
+ return mnt;
+@@ -334,7 +335,7 @@ static struct vfsmount *cifs_dfs_do_automount(struct dentry *mntpt)
+ mnt = ERR_PTR(-EINVAL);
+ break;
+ }
+- mnt = cifs_dfs_do_refmount(cifs_sb,
++ mnt = cifs_dfs_do_refmount(mntpt, cifs_sb,
+ full_path, referrals + i);
+ cifs_dbg(FYI, "%s: cifs_dfs_do_refmount:%s , mnt:%p\n",
+ __func__, referrals[i].node_name, mnt);
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index f17fcf89e18e..1e30f74a9527 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -187,9 +187,9 @@ static const struct super_operations debugfs_super_operations = {
+
+ static struct vfsmount *debugfs_automount(struct path *path)
+ {
+- struct vfsmount *(*f)(void *);
+- f = (struct vfsmount *(*)(void *))path->dentry->d_fsdata;
+- return f(d_inode(path->dentry)->i_private);
++ debugfs_automount_t f;
++ f = (debugfs_automount_t)path->dentry->d_fsdata;
++ return f(path->dentry, d_inode(path->dentry)->i_private);
+ }
+
+ static const struct dentry_operations debugfs_dops = {
+@@ -504,7 +504,7 @@ EXPORT_SYMBOL_GPL(debugfs_create_dir);
+ */
+ struct dentry *debugfs_create_automount(const char *name,
+ struct dentry *parent,
+- struct vfsmount *(*f)(void *),
++ debugfs_automount_t f,
+ void *data)
+ {
+ struct dentry *dentry = start_creating(name, parent);
+diff --git a/fs/fat/inode.c b/fs/fat/inode.c
+index 338d2f73eb29..a2c05f2ada6d 100644
+--- a/fs/fat/inode.c
++++ b/fs/fat/inode.c
+@@ -1359,6 +1359,16 @@ static int parse_options(struct super_block *sb, char *options, int is_vfat,
+ return 0;
+ }
+
++static void fat_dummy_inode_init(struct inode *inode)
++{
++ /* Initialize this dummy inode to work as no-op. */
++ MSDOS_I(inode)->mmu_private = 0;
++ MSDOS_I(inode)->i_start = 0;
++ MSDOS_I(inode)->i_logstart = 0;
++ MSDOS_I(inode)->i_attrs = 0;
++ MSDOS_I(inode)->i_pos = 0;
++}
++
+ static int fat_read_root(struct inode *inode)
+ {
+ struct msdos_sb_info *sbi = MSDOS_SB(inode->i_sb);
+@@ -1803,12 +1813,13 @@ int fat_fill_super(struct super_block *sb, void *data, int silent, int isvfat,
+ fat_inode = new_inode(sb);
+ if (!fat_inode)
+ goto out_fail;
+- MSDOS_I(fat_inode)->i_pos = 0;
++ fat_dummy_inode_init(fat_inode);
+ sbi->fat_inode = fat_inode;
+
+ fsinfo_inode = new_inode(sb);
+ if (!fsinfo_inode)
+ goto out_fail;
++ fat_dummy_inode_init(fsinfo_inode);
+ fsinfo_inode->i_ino = MSDOS_FSINFO_INO;
+ sbi->fsinfo_inode = fsinfo_inode;
+ insert_inode_hash(fsinfo_inode);
+diff --git a/fs/mount.h b/fs/mount.h
+index 2c856fc47ae3..2826543a131d 100644
+--- a/fs/mount.h
++++ b/fs/mount.h
+@@ -89,7 +89,6 @@ static inline int is_mounted(struct vfsmount *mnt)
+ }
+
+ extern struct mount *__lookup_mnt(struct vfsmount *, struct dentry *);
+-extern struct mount *__lookup_mnt_last(struct vfsmount *, struct dentry *);
+
+ extern int __legitimize_mnt(struct vfsmount *, unsigned);
+ extern bool legitimize_mnt(struct vfsmount *, unsigned);
+diff --git a/fs/namei.c b/fs/namei.c
+index ad74877e1442..dff5cd3b556f 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -1100,7 +1100,6 @@ static int follow_automount(struct path *path, struct nameidata *nd,
+ bool *need_mntput)
+ {
+ struct vfsmount *mnt;
+- const struct cred *old_cred;
+ int err;
+
+ if (!path->dentry->d_op || !path->dentry->d_op->d_automount)
+@@ -1129,9 +1128,7 @@ static int follow_automount(struct path *path, struct nameidata *nd,
+ if (nd->total_link_count >= 40)
+ return -ELOOP;
+
+- old_cred = override_creds(&init_cred);
+ mnt = path->dentry->d_op->d_automount(path);
+- revert_creds(old_cred);
+ if (IS_ERR(mnt)) {
+ /*
+ * The filesystem is allowed to return -EISDIR here to indicate
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 487ba30bb5c6..8bfad42c1ccf 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -637,28 +637,6 @@ struct mount *__lookup_mnt(struct vfsmount *mnt, struct dentry *dentry)
+ }
+
+ /*
+- * find the last mount at @dentry on vfsmount @mnt.
+- * mount_lock must be held.
+- */
+-struct mount *__lookup_mnt_last(struct vfsmount *mnt, struct dentry *dentry)
+-{
+- struct mount *p, *res = NULL;
+- p = __lookup_mnt(mnt, dentry);
+- if (!p)
+- goto out;
+- if (!(p->mnt.mnt_flags & MNT_UMOUNT))
+- res = p;
+- hlist_for_each_entry_continue(p, mnt_hash) {
+- if (&p->mnt_parent->mnt != mnt || p->mnt_mountpoint != dentry)
+- break;
+- if (!(p->mnt.mnt_flags & MNT_UMOUNT))
+- res = p;
+- }
+-out:
+- return res;
+-}
+-
+-/*
+ * lookup_mnt - Return the first child mount mounted at path
+ *
+ * "First" means first mounted chronologically. If you create the
+@@ -878,6 +856,13 @@ void mnt_set_mountpoint(struct mount *mnt,
+ hlist_add_head(&child_mnt->mnt_mp_list, &mp->m_list);
+ }
+
++static void __attach_mnt(struct mount *mnt, struct mount *parent)
++{
++ hlist_add_head_rcu(&mnt->mnt_hash,
++ m_hash(&parent->mnt, mnt->mnt_mountpoint));
++ list_add_tail(&mnt->mnt_child, &parent->mnt_mounts);
++}
++
+ /*
+ * vfsmount lock must be held for write
+ */
+@@ -886,28 +871,45 @@ static void attach_mnt(struct mount *mnt,
+ struct mountpoint *mp)
+ {
+ mnt_set_mountpoint(parent, mp, mnt);
+- hlist_add_head_rcu(&mnt->mnt_hash, m_hash(&parent->mnt, mp->m_dentry));
+- list_add_tail(&mnt->mnt_child, &parent->mnt_mounts);
++ __attach_mnt(mnt, parent);
+ }
+
+-static void attach_shadowed(struct mount *mnt,
+- struct mount *parent,
+- struct mount *shadows)
++void mnt_change_mountpoint(struct mount *parent, struct mountpoint *mp, struct mount *mnt)
+ {
+- if (shadows) {
+- hlist_add_behind_rcu(&mnt->mnt_hash, &shadows->mnt_hash);
+- list_add(&mnt->mnt_child, &shadows->mnt_child);
+- } else {
+- hlist_add_head_rcu(&mnt->mnt_hash,
+- m_hash(&parent->mnt, mnt->mnt_mountpoint));
+- list_add_tail(&mnt->mnt_child, &parent->mnt_mounts);
+- }
++ struct mountpoint *old_mp = mnt->mnt_mp;
++ struct dentry *old_mountpoint = mnt->mnt_mountpoint;
++ struct mount *old_parent = mnt->mnt_parent;
++
++ list_del_init(&mnt->mnt_child);
++ hlist_del_init(&mnt->mnt_mp_list);
++ hlist_del_init_rcu(&mnt->mnt_hash);
++
++ attach_mnt(mnt, parent, mp);
++
++ put_mountpoint(old_mp);
++
++ /*
++ * Safely avoid even the suggestion this code might sleep or
++ * lock the mount hash by taking advantage of the knowledge that
++ * mnt_change_mountpoint will not release the final reference
++ * to a mountpoint.
++ *
++ * During mounting, the mount passed in as the parent mount will
++ * continue to use the old mountpoint and during unmounting, the
++ * old mountpoint will continue to exist until namespace_unlock,
++ * which happens well after mnt_change_mountpoint.
++ */
++ spin_lock(&old_mountpoint->d_lock);
++ old_mountpoint->d_lockref.count--;
++ spin_unlock(&old_mountpoint->d_lock);
++
++ mnt_add_count(old_parent, -1);
+ }
+
+ /*
+ * vfsmount lock must be held for write
+ */
+-static void commit_tree(struct mount *mnt, struct mount *shadows)
++static void commit_tree(struct mount *mnt)
+ {
+ struct mount *parent = mnt->mnt_parent;
+ struct mount *m;
+@@ -925,7 +927,7 @@ static void commit_tree(struct mount *mnt, struct mount *shadows)
+ n->mounts += n->pending_mounts;
+ n->pending_mounts = 0;
+
+- attach_shadowed(mnt, parent, shadows);
++ __attach_mnt(mnt, parent);
+ touch_mnt_namespace(n);
+ }
+
+@@ -989,6 +991,21 @@ vfs_kern_mount(struct file_system_type *type, int flags, const char *name, void
+ }
+ EXPORT_SYMBOL_GPL(vfs_kern_mount);
+
++struct vfsmount *
++vfs_submount(const struct dentry *mountpoint, struct file_system_type *type,
++ const char *name, void *data)
++{
++ /* Until it is worked out how to pass the user namespace
++ * through from the parent mount to the submount don't support
++ * unprivileged mounts with submounts.
++ */
++ if (mountpoint->d_sb->s_user_ns != &init_user_ns)
++ return ERR_PTR(-EPERM);
++
++ return vfs_kern_mount(type, MS_SUBMOUNT, name, data);
++}
++EXPORT_SYMBOL_GPL(vfs_submount);
++
+ static struct mount *clone_mnt(struct mount *old, struct dentry *root,
+ int flag)
+ {
+@@ -1764,7 +1781,6 @@ struct mount *copy_tree(struct mount *mnt, struct dentry *dentry,
+ continue;
+
+ for (s = r; s; s = next_mnt(s, r)) {
+- struct mount *t = NULL;
+ if (!(flag & CL_COPY_UNBINDABLE) &&
+ IS_MNT_UNBINDABLE(s)) {
+ s = skip_mnt_tree(s);
+@@ -1786,14 +1802,7 @@ struct mount *copy_tree(struct mount *mnt, struct dentry *dentry,
+ goto out;
+ lock_mount_hash();
+ list_add_tail(&q->mnt_list, &res->mnt_list);
+- mnt_set_mountpoint(parent, p->mnt_mp, q);
+- if (!list_empty(&parent->mnt_mounts)) {
+- t = list_last_entry(&parent->mnt_mounts,
+- struct mount, mnt_child);
+- if (t->mnt_mp != p->mnt_mp)
+- t = NULL;
+- }
+- attach_shadowed(q, parent, t);
++ attach_mnt(q, parent, p->mnt_mp);
+ unlock_mount_hash();
+ }
+ }
+@@ -1992,10 +2001,18 @@ static int attach_recursive_mnt(struct mount *source_mnt,
+ {
+ HLIST_HEAD(tree_list);
+ struct mnt_namespace *ns = dest_mnt->mnt_ns;
++ struct mountpoint *smp;
+ struct mount *child, *p;
+ struct hlist_node *n;
+ int err;
+
++ /* Preallocate a mountpoint in case the new mounts need
++ * to be tucked under other mounts.
++ */
++ smp = get_mountpoint(source_mnt->mnt.mnt_root);
++ if (IS_ERR(smp))
++ return PTR_ERR(smp);
++
+ /* Is there space to add these mounts to the mount namespace? */
+ if (!parent_path) {
+ err = count_mounts(ns, source_mnt);
+@@ -2022,16 +2039,19 @@ static int attach_recursive_mnt(struct mount *source_mnt,
+ touch_mnt_namespace(source_mnt->mnt_ns);
+ } else {
+ mnt_set_mountpoint(dest_mnt, dest_mp, source_mnt);
+- commit_tree(source_mnt, NULL);
++ commit_tree(source_mnt);
+ }
+
+ hlist_for_each_entry_safe(child, n, &tree_list, mnt_hash) {
+ struct mount *q;
+ hlist_del_init(&child->mnt_hash);
+- q = __lookup_mnt_last(&child->mnt_parent->mnt,
+- child->mnt_mountpoint);
+- commit_tree(child, q);
++ q = __lookup_mnt(&child->mnt_parent->mnt,
++ child->mnt_mountpoint);
++ if (q)
++ mnt_change_mountpoint(child, smp, q);
++ commit_tree(child);
+ }
++ put_mountpoint(smp);
+ unlock_mount_hash();
+
+ return 0;
+@@ -2046,6 +2066,11 @@ static int attach_recursive_mnt(struct mount *source_mnt,
+ cleanup_group_ids(source_mnt, NULL);
+ out:
+ ns->pending_mounts = 0;
++
++ read_seqlock_excl(&mount_lock);
++ put_mountpoint(smp);
++ read_sequnlock_excl(&mount_lock);
++
+ return err;
+ }
+
+@@ -2794,7 +2819,7 @@ long do_mount(const char *dev_name, const char __user *dir_name,
+
+ flags &= ~(MS_NOSUID | MS_NOEXEC | MS_NODEV | MS_ACTIVE | MS_BORN |
+ MS_NOATIME | MS_NODIRATIME | MS_RELATIME| MS_KERNMOUNT |
+- MS_STRICTATIME | MS_NOREMOTELOCK);
++ MS_STRICTATIME | MS_NOREMOTELOCK | MS_SUBMOUNT);
+
+ if (flags & MS_REMOUNT)
+ retval = do_remount(&path, flags & ~MS_REMOUNT, mnt_flags,
+diff --git a/fs/nfs/namespace.c b/fs/nfs/namespace.c
+index 5551e8ef67fd..e49d831c4e85 100644
+--- a/fs/nfs/namespace.c
++++ b/fs/nfs/namespace.c
+@@ -226,7 +226,7 @@ static struct vfsmount *nfs_do_clone_mount(struct nfs_server *server,
+ const char *devname,
+ struct nfs_clone_mount *mountdata)
+ {
+- return vfs_kern_mount(&nfs_xdev_fs_type, 0, devname, mountdata);
++ return vfs_submount(mountdata->dentry, &nfs_xdev_fs_type, devname, mountdata);
+ }
+
+ /**
+diff --git a/fs/nfs/nfs4namespace.c b/fs/nfs/nfs4namespace.c
+index d21104912676..d8b040bd9814 100644
+--- a/fs/nfs/nfs4namespace.c
++++ b/fs/nfs/nfs4namespace.c
+@@ -279,7 +279,7 @@ static struct vfsmount *try_location(struct nfs_clone_mount *mountdata,
+ mountdata->hostname,
+ mountdata->mnt_path);
+
+- mnt = vfs_kern_mount(&nfs4_referral_fs_type, 0, page, mountdata);
++ mnt = vfs_submount(mountdata->dentry, &nfs4_referral_fs_type, page, mountdata);
+ if (!IS_ERR(mnt))
+ break;
+ }
+diff --git a/fs/orangefs/super.c b/fs/orangefs/super.c
+index c48859f16e7b..67c24351a67f 100644
+--- a/fs/orangefs/super.c
++++ b/fs/orangefs/super.c
+@@ -115,6 +115,13 @@ static struct inode *orangefs_alloc_inode(struct super_block *sb)
+ return &orangefs_inode->vfs_inode;
+ }
+
++static void orangefs_i_callback(struct rcu_head *head)
++{
++ struct inode *inode = container_of(head, struct inode, i_rcu);
++ struct orangefs_inode_s *orangefs_inode = ORANGEFS_I(inode);
++ kmem_cache_free(orangefs_inode_cache, orangefs_inode);
++}
++
+ static void orangefs_destroy_inode(struct inode *inode)
+ {
+ struct orangefs_inode_s *orangefs_inode = ORANGEFS_I(inode);
+@@ -123,7 +130,7 @@ static void orangefs_destroy_inode(struct inode *inode)
+ "%s: deallocated %p destroying inode %pU\n",
+ __func__, orangefs_inode, get_khandle_from_ino(inode));
+
+- kmem_cache_free(orangefs_inode_cache, orangefs_inode);
++ call_rcu(&inode->i_rcu, orangefs_i_callback);
+ }
+
+ /*
+diff --git a/fs/pnode.c b/fs/pnode.c
+index 06a793f4ae38..5bc7896d122a 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -322,6 +322,21 @@ int propagate_mnt(struct mount *dest_mnt, struct mountpoint *dest_mp,
+ return ret;
+ }
+
++static struct mount *find_topper(struct mount *mnt)
++{
++ /* If there is exactly one mount covering mnt completely return it. */
++ struct mount *child;
++
++ if (!list_is_singular(&mnt->mnt_mounts))
++ return NULL;
++
++ child = list_first_entry(&mnt->mnt_mounts, struct mount, mnt_child);
++ if (child->mnt_mountpoint != mnt->mnt.mnt_root)
++ return NULL;
++
++ return child;
++}
++
+ /*
+ * return true if the refcount is greater than count
+ */
+@@ -342,9 +357,8 @@ static inline int do_refcount_check(struct mount *mnt, int count)
+ */
+ int propagate_mount_busy(struct mount *mnt, int refcnt)
+ {
+- struct mount *m, *child;
++ struct mount *m, *child, *topper;
+ struct mount *parent = mnt->mnt_parent;
+- int ret = 0;
+
+ if (mnt == parent)
+ return do_refcount_check(mnt, refcnt);
+@@ -359,12 +373,24 @@ int propagate_mount_busy(struct mount *mnt, int refcnt)
+
+ for (m = propagation_next(parent, parent); m;
+ m = propagation_next(m, parent)) {
+- child = __lookup_mnt_last(&m->mnt, mnt->mnt_mountpoint);
+- if (child && list_empty(&child->mnt_mounts) &&
+- (ret = do_refcount_check(child, 1)))
+- break;
++ int count = 1;
++ child = __lookup_mnt(&m->mnt, mnt->mnt_mountpoint);
++ if (!child)
++ continue;
++
++ /* Is there exactly one mount on the child that covers
++ * it completely whose reference should be ignored?
++ */
++ topper = find_topper(child);
++ if (topper)
++ count += 1;
++ else if (!list_empty(&child->mnt_mounts))
++ continue;
++
++ if (do_refcount_check(child, count))
++ return 1;
+ }
+- return ret;
++ return 0;
+ }
+
+ /*
+@@ -381,7 +407,7 @@ void propagate_mount_unlock(struct mount *mnt)
+
+ for (m = propagation_next(parent, parent); m;
+ m = propagation_next(m, parent)) {
+- child = __lookup_mnt_last(&m->mnt, mnt->mnt_mountpoint);
++ child = __lookup_mnt(&m->mnt, mnt->mnt_mountpoint);
+ if (child)
+ child->mnt.mnt_flags &= ~MNT_LOCKED;
+ }
+@@ -399,9 +425,11 @@ static void mark_umount_candidates(struct mount *mnt)
+
+ for (m = propagation_next(parent, parent); m;
+ m = propagation_next(m, parent)) {
+- struct mount *child = __lookup_mnt_last(&m->mnt,
++ struct mount *child = __lookup_mnt(&m->mnt,
+ mnt->mnt_mountpoint);
+- if (child && (!IS_MNT_LOCKED(child) || IS_MNT_MARKED(m))) {
++ if (!child || (child->mnt.mnt_flags & MNT_UMOUNT))
++ continue;
++ if (!IS_MNT_LOCKED(child) || IS_MNT_MARKED(m)) {
+ SET_MNT_MARK(child);
+ }
+ }
+@@ -420,8 +448,8 @@ static void __propagate_umount(struct mount *mnt)
+
+ for (m = propagation_next(parent, parent); m;
+ m = propagation_next(m, parent)) {
+-
+- struct mount *child = __lookup_mnt_last(&m->mnt,
++ struct mount *topper;
++ struct mount *child = __lookup_mnt(&m->mnt,
+ mnt->mnt_mountpoint);
+ /*
+ * umount the child only if the child has no children
+@@ -430,6 +458,15 @@ static void __propagate_umount(struct mount *mnt)
+ if (!child || !IS_MNT_MARKED(child))
+ continue;
+ CLEAR_MNT_MARK(child);
++
++ /* If there is exactly one mount covering all of child
++ * replace child with that mount.
++ */
++ topper = find_topper(child);
++ if (topper)
++ mnt_change_mountpoint(child->mnt_parent, child->mnt_mp,
++ topper);
++
+ if (list_empty(&child->mnt_mounts)) {
+ list_del_init(&child->mnt_child);
+ child->mnt.mnt_flags |= MNT_UMOUNT;
+diff --git a/fs/pnode.h b/fs/pnode.h
+index 550f5a8b4fcf..dc87e65becd2 100644
+--- a/fs/pnode.h
++++ b/fs/pnode.h
+@@ -49,6 +49,8 @@ int get_dominating_id(struct mount *mnt, const struct path *root);
+ unsigned int mnt_get_count(struct mount *mnt);
+ void mnt_set_mountpoint(struct mount *, struct mountpoint *,
+ struct mount *);
++void mnt_change_mountpoint(struct mount *parent, struct mountpoint *mp,
++ struct mount *mnt);
+ struct mount *copy_tree(struct mount *, struct dentry *, int);
+ bool is_path_reachable(struct mount *, struct dentry *,
+ const struct path *root);
+diff --git a/fs/super.c b/fs/super.c
+index 1709ed029a2c..4185844f7a12 100644
+--- a/fs/super.c
++++ b/fs/super.c
+@@ -469,7 +469,7 @@ struct super_block *sget_userns(struct file_system_type *type,
+ struct super_block *old;
+ int err;
+
+- if (!(flags & MS_KERNMOUNT) &&
++ if (!(flags & (MS_KERNMOUNT|MS_SUBMOUNT)) &&
+ !(type->fs_flags & FS_USERNS_MOUNT) &&
+ !capable(CAP_SYS_ADMIN))
+ return ERR_PTR(-EPERM);
+@@ -499,7 +499,7 @@ struct super_block *sget_userns(struct file_system_type *type,
+ }
+ if (!s) {
+ spin_unlock(&sb_lock);
+- s = alloc_super(type, flags, user_ns);
++ s = alloc_super(type, (flags & ~MS_SUBMOUNT), user_ns);
+ if (!s)
+ return ERR_PTR(-ENOMEM);
+ goto retry;
+@@ -540,8 +540,15 @@ struct super_block *sget(struct file_system_type *type,
+ {
+ struct user_namespace *user_ns = current_user_ns();
+
++ /* We don't yet pass the user namespace of the parent
++ * mount through to here so always use &init_user_ns
++ * until that changes.
++ */
++ if (flags & MS_SUBMOUNT)
++ user_ns = &init_user_ns;
++
+ /* Ensure the requestor has permissions over the target filesystem */
+- if (!(flags & MS_KERNMOUNT) && !ns_capable(user_ns, CAP_SYS_ADMIN))
++ if (!(flags & (MS_KERNMOUNT|MS_SUBMOUNT)) && !ns_capable(user_ns, CAP_SYS_ADMIN))
+ return ERR_PTR(-EPERM);
+
+ return sget_userns(type, test, set, flags, user_ns, data);
+diff --git a/include/linux/ceph/osdmap.h b/include/linux/ceph/osdmap.h
+index 9a9041784dcf..412906609954 100644
+--- a/include/linux/ceph/osdmap.h
++++ b/include/linux/ceph/osdmap.h
+@@ -57,7 +57,7 @@ static inline bool ceph_can_shift_osds(struct ceph_pg_pool_info *pool)
+ case CEPH_POOL_TYPE_EC:
+ return false;
+ default:
+- BUG_ON(1);
++ BUG();
+ }
+ }
+
+diff --git a/include/linux/debugfs.h b/include/linux/debugfs.h
+index 014cc564d1c4..233006be30aa 100644
+--- a/include/linux/debugfs.h
++++ b/include/linux/debugfs.h
+@@ -97,9 +97,10 @@ struct dentry *debugfs_create_dir(const char *name, struct dentry *parent);
+ struct dentry *debugfs_create_symlink(const char *name, struct dentry *parent,
+ const char *dest);
+
++typedef struct vfsmount *(*debugfs_automount_t)(struct dentry *, void *);
+ struct dentry *debugfs_create_automount(const char *name,
+ struct dentry *parent,
+- struct vfsmount *(*f)(void *),
++ debugfs_automount_t f,
+ void *data);
+
+ void debugfs_remove(struct dentry *dentry);
+diff --git a/include/linux/libnvdimm.h b/include/linux/libnvdimm.h
+index 8458c5351e56..77e7af32543f 100644
+--- a/include/linux/libnvdimm.h
++++ b/include/linux/libnvdimm.h
+@@ -70,6 +70,8 @@ struct nd_cmd_desc {
+
+ struct nd_interleave_set {
+ u64 cookie;
++ /* compatibility with initial buggy Linux implementation */
++ u64 altcookie;
+ };
+
+ struct nd_mapping_desc {
+diff --git a/include/linux/lockd/lockd.h b/include/linux/lockd/lockd.h
+index c15373894a42..b37dee3acaba 100644
+--- a/include/linux/lockd/lockd.h
++++ b/include/linux/lockd/lockd.h
+@@ -355,7 +355,8 @@ static inline int nlm_privileged_requester(const struct svc_rqst *rqstp)
+ static inline int nlm_compare_locks(const struct file_lock *fl1,
+ const struct file_lock *fl2)
+ {
+- return fl1->fl_pid == fl2->fl_pid
++ return file_inode(fl1->fl_file) == file_inode(fl2->fl_file)
++ && fl1->fl_pid == fl2->fl_pid
+ && fl1->fl_owner == fl2->fl_owner
+ && fl1->fl_start == fl2->fl_start
+ && fl1->fl_end == fl2->fl_end
+diff --git a/include/linux/mount.h b/include/linux/mount.h
+index c6f55158d5e5..8e0352af06b7 100644
+--- a/include/linux/mount.h
++++ b/include/linux/mount.h
+@@ -90,6 +90,9 @@ struct file_system_type;
+ extern struct vfsmount *vfs_kern_mount(struct file_system_type *type,
+ int flags, const char *name,
+ void *data);
++extern struct vfsmount *vfs_submount(const struct dentry *mountpoint,
++ struct file_system_type *type,
++ const char *name, void *data);
+
+ extern void mnt_set_expiry(struct vfsmount *mnt, struct list_head *expiry_list);
+ extern void mark_mounts_for_expiry(struct list_head *mounts);
+diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
+index da854fb4530f..775c2319a72b 100644
+--- a/include/target/target_core_base.h
++++ b/include/target/target_core_base.h
+@@ -732,6 +732,7 @@ struct se_lun {
+ struct config_group lun_group;
+ struct se_port_stat_grps port_stat_grps;
+ struct completion lun_ref_comp;
++ struct completion lun_shutdown_comp;
+ struct percpu_ref lun_ref;
+ struct list_head lun_dev_link;
+ struct hlist_node link;
+diff --git a/include/uapi/linux/fs.h b/include/uapi/linux/fs.h
+index 36da93fbf188..048a85e9f017 100644
+--- a/include/uapi/linux/fs.h
++++ b/include/uapi/linux/fs.h
+@@ -132,6 +132,7 @@ struct inodes_stat_t {
+ #define MS_LAZYTIME (1<<25) /* Update the on-disk [acm]times lazily */
+
+ /* These sb flags are internal to the kernel */
++#define MS_SUBMOUNT (1<<26)
+ #define MS_NOREMOTELOCK (1<<27)
+ #define MS_NOSEC (1<<28)
+ #define MS_BORN (1<<29)
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index d7449783987a..310f0ea0d1a2 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -7503,7 +7503,7 @@ init_tracer_tracefs(struct trace_array *tr, struct dentry *d_tracer)
+ ftrace_init_tracefs(tr, d_tracer);
+ }
+
+-static struct vfsmount *trace_automount(void *ingore)
++static struct vfsmount *trace_automount(struct dentry *mntpt, void *ingore)
+ {
+ struct vfsmount *mnt;
+ struct file_system_type *type;
+@@ -7516,7 +7516,7 @@ static struct vfsmount *trace_automount(void *ingore)
+ type = get_fs_type("tracefs");
+ if (!type)
+ return NULL;
+- mnt = vfs_kern_mount(type, 0, "tracefs", NULL);
++ mnt = vfs_submount(mntpt, type, "tracefs", NULL);
+ put_filesystem(type);
+ if (IS_ERR(mnt))
+ return NULL;
+diff --git a/kernel/trace/trace_benchmark.c b/kernel/trace/trace_benchmark.c
+index e3b488825ae3..e49fbe901cfc 100644
+--- a/kernel/trace/trace_benchmark.c
++++ b/kernel/trace/trace_benchmark.c
+@@ -175,9 +175,9 @@ int trace_benchmark_reg(void)
+
+ bm_event_thread = kthread_run(benchmark_event_kthread,
+ NULL, "event_benchmark");
+- if (!bm_event_thread) {
++ if (IS_ERR(bm_event_thread)) {
+ pr_warning("trace benchmark failed to create kernel thread\n");
+- return -ENOMEM;
++ return PTR_ERR(bm_event_thread);
+ }
+
+ return 0;
+diff --git a/mm/kasan/quarantine.c b/mm/kasan/quarantine.c
+index dae929c02bbb..872e1981f63b 100644
+--- a/mm/kasan/quarantine.c
++++ b/mm/kasan/quarantine.c
+@@ -282,8 +282,15 @@ void quarantine_remove_cache(struct kmem_cache *cache)
+ on_each_cpu(per_cpu_remove_cache, cache, 1);
+
+ spin_lock_irqsave(&quarantine_lock, flags);
+- for (i = 0; i < QUARANTINE_BATCHES; i++)
++ for (i = 0; i < QUARANTINE_BATCHES; i++) {
++ if (qlist_empty(&global_quarantine[i]))
++ continue;
+ qlist_move_cache(&global_quarantine[i], &to_free, cache);
++ /* Scanning whole quarantine can take a while. */
++ spin_unlock_irqrestore(&quarantine_lock, flags);
++ cond_resched();
++ spin_lock_irqsave(&quarantine_lock, flags);
++ }
+ spin_unlock_irqrestore(&quarantine_lock, flags);
+
+ qlist_free_all(&to_free, cache);
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index b822e158b319..86c1100bc69e 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -4132,17 +4132,22 @@ static void free_mem_cgroup_per_node_info(struct mem_cgroup *memcg, int node)
+ kfree(memcg->nodeinfo[node]);
+ }
+
+-static void mem_cgroup_free(struct mem_cgroup *memcg)
++static void __mem_cgroup_free(struct mem_cgroup *memcg)
+ {
+ int node;
+
+- memcg_wb_domain_exit(memcg);
+ for_each_node(node)
+ free_mem_cgroup_per_node_info(memcg, node);
+ free_percpu(memcg->stat);
+ kfree(memcg);
+ }
+
++static void mem_cgroup_free(struct mem_cgroup *memcg)
++{
++ memcg_wb_domain_exit(memcg);
++ __mem_cgroup_free(memcg);
++}
++
+ static struct mem_cgroup *mem_cgroup_alloc(void)
+ {
+ struct mem_cgroup *memcg;
+@@ -4193,7 +4198,7 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
+ fail:
+ if (memcg->id.id > 0)
+ idr_remove(&mem_cgroup_idr, memcg->id.id);
+- mem_cgroup_free(memcg);
++ __mem_cgroup_free(memcg);
+ return NULL;
+ }
+
+diff --git a/mm/mlock.c b/mm/mlock.c
+index cdbed8aaa426..665ab75b5533 100644
+--- a/mm/mlock.c
++++ b/mm/mlock.c
+@@ -441,7 +441,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
+
+ while (start < end) {
+ struct page *page;
+- unsigned int page_mask;
++ unsigned int page_mask = 0;
+ unsigned long page_increm;
+ struct pagevec pvec;
+ struct zone *zone;
+@@ -455,8 +455,7 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
+ * suits munlock very well (and if somehow an abnormal page
+ * has sneaked into the range, we won't oops here: great).
+ */
+- page = follow_page_mask(vma, start, FOLL_GET | FOLL_DUMP,
+- &page_mask);
++ page = follow_page(vma, start, FOLL_GET | FOLL_DUMP);
+
+ if (page && !IS_ERR(page)) {
+ if (PageTransTail(page)) {
+@@ -467,8 +466,8 @@ void munlock_vma_pages_range(struct vm_area_struct *vma,
+ /*
+ * Any THP page found by follow_page_mask() may
+ * have gotten split before reaching
+- * munlock_vma_page(), so we need to recompute
+- * the page_mask here.
++ * munlock_vma_page(), so we need to compute
++ * the page_mask here instead.
+ */
+ page_mask = munlock_vma_page(page);
+ unlock_page(page);
+diff --git a/net/mac80211/agg-rx.c b/net/mac80211/agg-rx.c
+index 3b5fd4188f2a..58ad23a44109 100644
+--- a/net/mac80211/agg-rx.c
++++ b/net/mac80211/agg-rx.c
+@@ -398,6 +398,7 @@ void __ieee80211_start_rx_ba_session(struct sta_info *sta,
+ tid_agg_rx->timeout = timeout;
+ tid_agg_rx->stored_mpdu_num = 0;
+ tid_agg_rx->auto_seq = auto_seq;
++ tid_agg_rx->started = false;
+ tid_agg_rx->reorder_buf_filtered = 0;
+ status = WLAN_STATUS_SUCCESS;
+
+diff --git a/net/mac80211/pm.c b/net/mac80211/pm.c
+index 28a3a0957c9e..76a8bcd8ef11 100644
+--- a/net/mac80211/pm.c
++++ b/net/mac80211/pm.c
+@@ -168,6 +168,7 @@ int __ieee80211_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan)
+ break;
+ }
+
++ flush_delayed_work(&sdata->dec_tailroom_needed_wk);
+ drv_remove_interface(local, sdata);
+ }
+
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 3090dd4342f6..1109e60e9121 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -4,7 +4,7 @@
+ * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
+ * Copyright 2007-2010 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+- * Copyright(c) 2015 - 2016 Intel Deutschland GmbH
++ * Copyright(c) 2015 - 2017 Intel Deutschland GmbH
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+@@ -1034,6 +1034,18 @@ static bool ieee80211_sta_manage_reorder_buf(struct ieee80211_sub_if_data *sdata
+ buf_size = tid_agg_rx->buf_size;
+ head_seq_num = tid_agg_rx->head_seq_num;
+
++ /*
++ * If the current MPDU's SN is smaller than the SSN, it shouldn't
++ * be reordered.
++ */
++ if (unlikely(!tid_agg_rx->started)) {
++ if (ieee80211_sn_less(mpdu_seq_num, head_seq_num)) {
++ ret = false;
++ goto out;
++ }
++ tid_agg_rx->started = true;
++ }
++
+ /* frame with out of date sequence number */
+ if (ieee80211_sn_less(mpdu_seq_num, head_seq_num)) {
+ dev_kfree_skb(skb);
+@@ -4077,15 +4089,17 @@ static void __ieee80211_rx_handle_packet(struct ieee80211_hw *hw,
+ ieee80211_is_beacon(hdr->frame_control)))
+ ieee80211_scan_rx(local, skb);
+
+- if (pubsta) {
+- rx.sta = container_of(pubsta, struct sta_info, sta);
+- rx.sdata = rx.sta->sdata;
+- if (ieee80211_prepare_and_rx_handle(&rx, skb, true))
+- return;
+- goto out;
+- } else if (ieee80211_is_data(fc)) {
++ if (ieee80211_is_data(fc)) {
+ struct sta_info *sta, *prev_sta;
+
++ if (pubsta) {
++ rx.sta = container_of(pubsta, struct sta_info, sta);
++ rx.sdata = rx.sta->sdata;
++ if (ieee80211_prepare_and_rx_handle(&rx, skb, true))
++ return;
++ goto out;
++ }
++
+ prev_sta = NULL;
+
+ for_each_sta_info(local, hdr->addr2, sta, tmp) {
+diff --git a/net/mac80211/sta_info.h b/net/mac80211/sta_info.h
+index dd06ef0b8861..15599c70a38f 100644
+--- a/net/mac80211/sta_info.h
++++ b/net/mac80211/sta_info.h
+@@ -189,6 +189,7 @@ struct tid_ampdu_tx {
+ * @auto_seq: used for offloaded BA sessions to automatically pick head_seq_and
+ * and ssn.
+ * @removed: this session is removed (but might have been found due to RCU)
++ * @started: this session has started (head ssn or higher was received)
+ *
+ * This structure's lifetime is managed by RCU, assignments to
+ * the array holding it must hold the aggregation mutex.
+@@ -212,8 +213,9 @@ struct tid_ampdu_rx {
+ u16 ssn;
+ u16 buf_size;
+ u16 timeout;
+- bool auto_seq;
+- bool removed;
++ u8 auto_seq:1,
++ removed:1,
++ started:1;
+ };
+
+ /**
+diff --git a/net/mac80211/status.c b/net/mac80211/status.c
+index ddf71c648cab..ad37b4e58c2f 100644
+--- a/net/mac80211/status.c
++++ b/net/mac80211/status.c
+@@ -51,7 +51,8 @@ static void ieee80211_handle_filtered_frame(struct ieee80211_local *local,
+ struct ieee80211_hdr *hdr = (void *)skb->data;
+ int ac;
+
+- if (info->flags & IEEE80211_TX_CTL_NO_PS_BUFFER) {
++ if (info->flags & (IEEE80211_TX_CTL_NO_PS_BUFFER |
++ IEEE80211_TX_CTL_AMPDU)) {
+ ieee80211_free_txskb(&local->hw, skb);
+ return;
+ }
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index be93ab02b490..33f3337019ee 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -2629,7 +2629,7 @@ sub do_run_test {
+ }
+
+ waitpid $child_pid, 0;
+- $child_exit = $?;
++ $child_exit = $? >> 8;
+
+ my $end_time = time;
+ $test_time = $end_time - $start_time;
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-03-18 14:35 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-03-18 14:35 UTC (permalink / raw
To: gentoo-commits
commit: 2e76d12605325175166610a828dae3a9daa06f89
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 18 14:35:30 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Mar 18 14:35:30 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2e76d126
Linux patch 4.10.4
0000_README | 4 +
1003_linux-4.10.4.patch | 1805 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1809 insertions(+)
diff --git a/0000_README b/0000_README
index 471175a..a80feb8 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch: 1002_linux-4.10.3.patch
From: http://www.kernel.org
Desc: Linux 4.10.3
+Patch: 1003_linux-4.10.4.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.4
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1003_linux-4.10.4.patch b/1003_linux-4.10.4.patch
new file mode 100644
index 0000000..ed8a7ee
--- /dev/null
+++ b/1003_linux-4.10.4.patch
@@ -0,0 +1,1805 @@
+diff --git a/Makefile b/Makefile
+index 190a684303c1..8df819e31882 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arm/configs/qcom_defconfig b/arch/arm/configs/qcom_defconfig
+index 8c3a0108a231..c73299b51f7b 100644
+--- a/arch/arm/configs/qcom_defconfig
++++ b/arch/arm/configs/qcom_defconfig
+@@ -157,6 +157,8 @@ CONFIG_DMADEVICES=y
+ CONFIG_QCOM_BAM_DMA=y
+ CONFIG_STAGING=y
+ CONFIG_COMMON_CLK_QCOM=y
++CONFIG_QCOM_CLK_RPM=y
++CONFIG_QCOM_CLK_SMD_RPM=y
+ CONFIG_APQ_MMCC_8084=y
+ CONFIG_IPQ_LCC_806X=y
+ CONFIG_MSM_GCC_8660=y
+diff --git a/arch/mips/configs/ip22_defconfig b/arch/mips/configs/ip22_defconfig
+index 5d83ff755547..ec8e9684296d 100644
+--- a/arch/mips/configs/ip22_defconfig
++++ b/arch/mips/configs/ip22_defconfig
+@@ -67,8 +67,8 @@ CONFIG_NETFILTER_NETLINK_QUEUE=m
+ CONFIG_NF_CONNTRACK=m
+ CONFIG_NF_CONNTRACK_SECMARK=y
+ CONFIG_NF_CONNTRACK_EVENTS=y
+-CONFIG_NF_CT_PROTO_DCCP=m
+-CONFIG_NF_CT_PROTO_UDPLITE=m
++CONFIG_NF_CT_PROTO_DCCP=y
++CONFIG_NF_CT_PROTO_UDPLITE=y
+ CONFIG_NF_CONNTRACK_AMANDA=m
+ CONFIG_NF_CONNTRACK_FTP=m
+ CONFIG_NF_CONNTRACK_H323=m
+diff --git a/arch/mips/configs/ip27_defconfig b/arch/mips/configs/ip27_defconfig
+index 2b74aee320a1..e582069b44fd 100644
+--- a/arch/mips/configs/ip27_defconfig
++++ b/arch/mips/configs/ip27_defconfig
+@@ -133,7 +133,7 @@ CONFIG_LIBFC=m
+ CONFIG_SCSI_QLOGIC_1280=y
+ CONFIG_SCSI_PMCRAID=m
+ CONFIG_SCSI_BFA_FC=m
+-CONFIG_SCSI_DH=m
++CONFIG_SCSI_DH=y
+ CONFIG_SCSI_DH_RDAC=m
+ CONFIG_SCSI_DH_HP_SW=m
+ CONFIG_SCSI_DH_EMC=m
+@@ -205,7 +205,6 @@ CONFIG_MLX4_EN=m
+ # CONFIG_MLX4_DEBUG is not set
+ CONFIG_TEHUTI=m
+ CONFIG_BNX2X=m
+-CONFIG_QLGE=m
+ CONFIG_SFC=m
+ CONFIG_BE2NET=m
+ CONFIG_LIBERTAS_THINFIRM=m
+diff --git a/arch/mips/configs/lemote2f_defconfig b/arch/mips/configs/lemote2f_defconfig
+index 5da76e0e120f..0cdb431bff80 100644
+--- a/arch/mips/configs/lemote2f_defconfig
++++ b/arch/mips/configs/lemote2f_defconfig
+@@ -39,7 +39,7 @@ CONFIG_HIBERNATION=y
+ CONFIG_PM_STD_PARTITION="/dev/hda3"
+ CONFIG_CPU_FREQ=y
+ CONFIG_CPU_FREQ_DEBUG=y
+-CONFIG_CPU_FREQ_STAT=m
++CONFIG_CPU_FREQ_STAT=y
+ CONFIG_CPU_FREQ_STAT_DETAILS=y
+ CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
+ CONFIG_CPU_FREQ_GOV_POWERSAVE=m
+diff --git a/arch/mips/configs/malta_defconfig b/arch/mips/configs/malta_defconfig
+index 58d43f3c348d..078ecac071ab 100644
+--- a/arch/mips/configs/malta_defconfig
++++ b/arch/mips/configs/malta_defconfig
+@@ -59,8 +59,8 @@ CONFIG_NETFILTER=y
+ CONFIG_NF_CONNTRACK=m
+ CONFIG_NF_CONNTRACK_SECMARK=y
+ CONFIG_NF_CONNTRACK_EVENTS=y
+-CONFIG_NF_CT_PROTO_DCCP=m
+-CONFIG_NF_CT_PROTO_UDPLITE=m
++CONFIG_NF_CT_PROTO_DCCP=y
++CONFIG_NF_CT_PROTO_UDPLITE=y
+ CONFIG_NF_CONNTRACK_AMANDA=m
+ CONFIG_NF_CONNTRACK_FTP=m
+ CONFIG_NF_CONNTRACK_H323=m
+diff --git a/arch/mips/configs/malta_kvm_defconfig b/arch/mips/configs/malta_kvm_defconfig
+index c8f7e2835840..e233f878afef 100644
+--- a/arch/mips/configs/malta_kvm_defconfig
++++ b/arch/mips/configs/malta_kvm_defconfig
+@@ -60,8 +60,8 @@ CONFIG_NETFILTER=y
+ CONFIG_NF_CONNTRACK=m
+ CONFIG_NF_CONNTRACK_SECMARK=y
+ CONFIG_NF_CONNTRACK_EVENTS=y
+-CONFIG_NF_CT_PROTO_DCCP=m
+-CONFIG_NF_CT_PROTO_UDPLITE=m
++CONFIG_NF_CT_PROTO_DCCP=y
++CONFIG_NF_CT_PROTO_UDPLITE=y
+ CONFIG_NF_CONNTRACK_AMANDA=m
+ CONFIG_NF_CONNTRACK_FTP=m
+ CONFIG_NF_CONNTRACK_H323=m
+diff --git a/arch/mips/configs/malta_kvm_guest_defconfig b/arch/mips/configs/malta_kvm_guest_defconfig
+index d2f54e55356c..fbe085c328ab 100644
+--- a/arch/mips/configs/malta_kvm_guest_defconfig
++++ b/arch/mips/configs/malta_kvm_guest_defconfig
+@@ -59,8 +59,8 @@ CONFIG_NETFILTER=y
+ CONFIG_NF_CONNTRACK=m
+ CONFIG_NF_CONNTRACK_SECMARK=y
+ CONFIG_NF_CONNTRACK_EVENTS=y
+-CONFIG_NF_CT_PROTO_DCCP=m
+-CONFIG_NF_CT_PROTO_UDPLITE=m
++CONFIG_NF_CT_PROTO_DCCP=y
++CONFIG_NF_CT_PROTO_UDPLITE=y
+ CONFIG_NF_CONNTRACK_AMANDA=m
+ CONFIG_NF_CONNTRACK_FTP=m
+ CONFIG_NF_CONNTRACK_H323=m
+diff --git a/arch/mips/configs/maltaup_xpa_defconfig b/arch/mips/configs/maltaup_xpa_defconfig
+index 3d0d9cb9673f..2942610e4082 100644
+--- a/arch/mips/configs/maltaup_xpa_defconfig
++++ b/arch/mips/configs/maltaup_xpa_defconfig
+@@ -61,8 +61,8 @@ CONFIG_NETFILTER=y
+ CONFIG_NF_CONNTRACK=m
+ CONFIG_NF_CONNTRACK_SECMARK=y
+ CONFIG_NF_CONNTRACK_EVENTS=y
+-CONFIG_NF_CT_PROTO_DCCP=m
+-CONFIG_NF_CT_PROTO_UDPLITE=m
++CONFIG_NF_CT_PROTO_DCCP=y
++CONFIG_NF_CT_PROTO_UDPLITE=y
+ CONFIG_NF_CONNTRACK_AMANDA=m
+ CONFIG_NF_CONNTRACK_FTP=m
+ CONFIG_NF_CONNTRACK_H323=m
+diff --git a/arch/mips/configs/nlm_xlp_defconfig b/arch/mips/configs/nlm_xlp_defconfig
+index b496c25fced6..07d01827a973 100644
+--- a/arch/mips/configs/nlm_xlp_defconfig
++++ b/arch/mips/configs/nlm_xlp_defconfig
+@@ -110,7 +110,7 @@ CONFIG_NETFILTER=y
+ CONFIG_NF_CONNTRACK=m
+ CONFIG_NF_CONNTRACK_SECMARK=y
+ CONFIG_NF_CONNTRACK_EVENTS=y
+-CONFIG_NF_CT_PROTO_UDPLITE=m
++CONFIG_NF_CT_PROTO_UDPLITE=y
+ CONFIG_NF_CONNTRACK_AMANDA=m
+ CONFIG_NF_CONNTRACK_FTP=m
+ CONFIG_NF_CONNTRACK_H323=m
+diff --git a/arch/mips/configs/nlm_xlr_defconfig b/arch/mips/configs/nlm_xlr_defconfig
+index 8e99ad807a57..f59969acb724 100644
+--- a/arch/mips/configs/nlm_xlr_defconfig
++++ b/arch/mips/configs/nlm_xlr_defconfig
+@@ -90,7 +90,7 @@ CONFIG_NETFILTER=y
+ CONFIG_NF_CONNTRACK=m
+ CONFIG_NF_CONNTRACK_SECMARK=y
+ CONFIG_NF_CONNTRACK_EVENTS=y
+-CONFIG_NF_CT_PROTO_UDPLITE=m
++CONFIG_NF_CT_PROTO_UDPLITE=y
+ CONFIG_NF_CONNTRACK_AMANDA=m
+ CONFIG_NF_CONNTRACK_FTP=m
+ CONFIG_NF_CONNTRACK_H323=m
+diff --git a/arch/mips/include/asm/mach-ip27/spaces.h b/arch/mips/include/asm/mach-ip27/spaces.h
+index 4775a1136a5b..24d5e31bcfa6 100644
+--- a/arch/mips/include/asm/mach-ip27/spaces.h
++++ b/arch/mips/include/asm/mach-ip27/spaces.h
+@@ -12,14 +12,16 @@
+
+ /*
+ * IP27 uses the R10000's uncached attribute feature. Attribute 3 selects
+- * uncached memory addressing.
++ * uncached memory addressing. Hide the definitions on 32-bit compilation
++ * of the compat-vdso code.
+ */
+-
++#ifdef CONFIG_64BIT
+ #define HSPEC_BASE 0x9000000000000000
+ #define IO_BASE 0x9200000000000000
+ #define MSPEC_BASE 0x9400000000000000
+ #define UNCAC_BASE 0x9600000000000000
+ #define CAC_BASE 0xa800000000000000
++#endif
+
+ #define TO_MSPEC(x) (MSPEC_BASE | ((x) & TO_PHYS_MASK))
+ #define TO_HSPEC(x) (HSPEC_BASE | ((x) & TO_PHYS_MASK))
+diff --git a/arch/mips/ralink/prom.c b/arch/mips/ralink/prom.c
+index 5a73c5e14221..23198c9050e5 100644
+--- a/arch/mips/ralink/prom.c
++++ b/arch/mips/ralink/prom.c
+@@ -30,8 +30,10 @@ const char *get_system_type(void)
+ return soc_info.sys_type;
+ }
+
+-static __init void prom_init_cmdline(int argc, char **argv)
++static __init void prom_init_cmdline(void)
+ {
++ int argc;
++ char **argv;
+ int i;
+
+ pr_debug("prom: fw_arg0=%08x fw_arg1=%08x fw_arg2=%08x fw_arg3=%08x\n",
+@@ -60,14 +62,11 @@ static __init void prom_init_cmdline(int argc, char **argv)
+
+ void __init prom_init(void)
+ {
+- int argc;
+- char **argv;
+-
+ prom_soc_init(&soc_info);
+
+ pr_info("SoC Type: %s\n", get_system_type());
+
+- prom_init_cmdline(argc, argv);
++ prom_init_cmdline();
+ }
+
+ void __init prom_free_prom_memory(void)
+diff --git a/arch/mips/ralink/rt288x.c b/arch/mips/ralink/rt288x.c
+index 285796e6d75c..2b76e3643869 100644
+--- a/arch/mips/ralink/rt288x.c
++++ b/arch/mips/ralink/rt288x.c
+@@ -40,16 +40,6 @@ static struct rt2880_pmx_group rt2880_pinmux_data_act[] = {
+ { 0 }
+ };
+
+-static void rt288x_wdt_reset(void)
+-{
+- u32 t;
+-
+- /* enable WDT reset output on pin SRAM_CS_N */
+- t = rt_sysc_r32(SYSC_REG_CLKCFG);
+- t |= CLKCFG_SRAM_CS_N_WDT;
+- rt_sysc_w32(t, SYSC_REG_CLKCFG);
+-}
+-
+ void __init ralink_clk_init(void)
+ {
+ unsigned long cpu_rate, wmac_rate = 40000000;
+diff --git a/arch/mips/ralink/rt305x.c b/arch/mips/ralink/rt305x.c
+index c8a28c4bf29e..e778e0b54ffb 100644
+--- a/arch/mips/ralink/rt305x.c
++++ b/arch/mips/ralink/rt305x.c
+@@ -89,17 +89,6 @@ static struct rt2880_pmx_group rt5350_pinmux_data[] = {
+ { 0 }
+ };
+
+-static void rt305x_wdt_reset(void)
+-{
+- u32 t;
+-
+- /* enable WDT reset output on pin SRAM_CS_N */
+- t = rt_sysc_r32(SYSC_REG_SYSTEM_CONFIG);
+- t |= RT305X_SYSCFG_SRAM_CS0_MODE_WDT <<
+- RT305X_SYSCFG_SRAM_CS0_MODE_SHIFT;
+- rt_sysc_w32(t, SYSC_REG_SYSTEM_CONFIG);
+-}
+-
+ static unsigned long rt5350_get_mem_size(void)
+ {
+ void __iomem *sysc = (void __iomem *) KSEG1ADDR(RT305X_SYSC_BASE);
+diff --git a/arch/mips/ralink/rt3883.c b/arch/mips/ralink/rt3883.c
+index 4cef9162bd9b..3e0aa09c6b55 100644
+--- a/arch/mips/ralink/rt3883.c
++++ b/arch/mips/ralink/rt3883.c
+@@ -63,16 +63,6 @@ static struct rt2880_pmx_group rt3883_pinmux_data[] = {
+ { 0 }
+ };
+
+-static void rt3883_wdt_reset(void)
+-{
+- u32 t;
+-
+- /* enable WDT reset output on GPIO 2 */
+- t = rt_sysc_r32(RT3883_SYSC_REG_SYSCFG1);
+- t |= RT3883_SYSCFG1_GPIO2_AS_WDT_OUT;
+- rt_sysc_w32(t, RT3883_SYSC_REG_SYSCFG1);
+-}
+-
+ void __init ralink_clk_init(void)
+ {
+ unsigned long cpu_rate, sys_rate;
+diff --git a/arch/mips/ralink/timer.c b/arch/mips/ralink/timer.c
+index 8077ff39bdea..d4469b20d176 100644
+--- a/arch/mips/ralink/timer.c
++++ b/arch/mips/ralink/timer.c
+@@ -71,11 +71,6 @@ static int rt_timer_request(struct rt_timer *rt)
+ return err;
+ }
+
+-static void rt_timer_free(struct rt_timer *rt)
+-{
+- free_irq(rt->irq, rt);
+-}
+-
+ static int rt_timer_config(struct rt_timer *rt, unsigned long divisor)
+ {
+ if (rt->timer_freq < divisor)
+@@ -101,15 +96,6 @@ static int rt_timer_enable(struct rt_timer *rt)
+ return 0;
+ }
+
+-static void rt_timer_disable(struct rt_timer *rt)
+-{
+- u32 t;
+-
+- t = rt_timer_r32(rt, TIMER_REG_TMR0CTL);
+- t &= ~TMR0CTL_ENABLE;
+- rt_timer_w32(rt, TIMER_REG_TMR0CTL, t);
+-}
+-
+ static int rt_timer_probe(struct platform_device *pdev)
+ {
+ struct resource *res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+diff --git a/arch/mips/sgi-ip22/Platform b/arch/mips/sgi-ip22/Platform
+index b7a4b7e04c38..e8f6b3a42a48 100644
+--- a/arch/mips/sgi-ip22/Platform
++++ b/arch/mips/sgi-ip22/Platform
+@@ -25,7 +25,7 @@ endif
+ # Simplified: what IP22 does at 128MB+ in ksegN, IP28 does at 512MB+ in xkphys
+ #
+ ifdef CONFIG_SGI_IP28
+- ifeq ($(call cc-option-yn,-mr10k-cache-barrier=store), n)
++ ifeq ($(call cc-option-yn,-march=r10000 -mr10k-cache-barrier=store), n)
+ $(error gcc doesn't support needed option -mr10k-cache-barrier=store)
+ endif
+ endif
+diff --git a/arch/powerpc/include/asm/nohash/pgtable.h b/arch/powerpc/include/asm/nohash/pgtable.h
+index 0cd8a3852763..e5805ad78e12 100644
+--- a/arch/powerpc/include/asm/nohash/pgtable.h
++++ b/arch/powerpc/include/asm/nohash/pgtable.h
+@@ -230,7 +230,7 @@ static inline int hugepd_ok(hugepd_t hpd)
+ return ((hpd_val(hpd) & 0x4) != 0);
+ #else
+ /* We clear the top bit to indicate hugepd */
+- return ((hpd_val(hpd) & PD_HUGE) == 0);
++ return (hpd_val(hpd) && (hpd_val(hpd) & PD_HUGE) == 0);
+ #endif
+ }
+
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index 06c7e9b88408..e14a2fbcf38d 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -1799,8 +1799,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
+ goto instr_done;
+
+ case LARX:
+- if (regs->msr & MSR_LE)
+- return 0;
+ if (op.ea & (size - 1))
+ break; /* can't handle misaligned */
+ err = -EFAULT;
+@@ -1824,8 +1822,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
+ goto ldst_done;
+
+ case STCX:
+- if (regs->msr & MSR_LE)
+- return 0;
+ if (op.ea & (size - 1))
+ break; /* can't handle misaligned */
+ err = -EFAULT;
+@@ -1851,8 +1847,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
+ goto ldst_done;
+
+ case LOAD:
+- if (regs->msr & MSR_LE)
+- return 0;
+ err = read_mem(&regs->gpr[op.reg], op.ea, size, regs);
+ if (!err) {
+ if (op.type & SIGNEXT)
+@@ -1864,8 +1858,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
+
+ #ifdef CONFIG_PPC_FPU
+ case LOAD_FP:
+- if (regs->msr & MSR_LE)
+- return 0;
+ if (size == 4)
+ err = do_fp_load(op.reg, do_lfs, op.ea, size, regs);
+ else
+@@ -1874,15 +1866,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
+ #endif
+ #ifdef CONFIG_ALTIVEC
+ case LOAD_VMX:
+- if (regs->msr & MSR_LE)
+- return 0;
+ err = do_vec_load(op.reg, do_lvx, op.ea & ~0xfUL, regs);
+ goto ldst_done;
+ #endif
+ #ifdef CONFIG_VSX
+ case LOAD_VSX:
+- if (regs->msr & MSR_LE)
+- return 0;
+ err = do_vsx_load(op.reg, do_lxvd2x, op.ea, regs);
+ goto ldst_done;
+ #endif
+@@ -1905,8 +1893,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
+ goto instr_done;
+
+ case STORE:
+- if (regs->msr & MSR_LE)
+- return 0;
+ if ((op.type & UPDATE) && size == sizeof(long) &&
+ op.reg == 1 && op.update_reg == 1 &&
+ !(regs->msr & MSR_PR) &&
+@@ -1919,8 +1905,6 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
+
+ #ifdef CONFIG_PPC_FPU
+ case STORE_FP:
+- if (regs->msr & MSR_LE)
+- return 0;
+ if (size == 4)
+ err = do_fp_store(op.reg, do_stfs, op.ea, size, regs);
+ else
+@@ -1929,15 +1913,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
+ #endif
+ #ifdef CONFIG_ALTIVEC
+ case STORE_VMX:
+- if (regs->msr & MSR_LE)
+- return 0;
+ err = do_vec_store(op.reg, do_stvx, op.ea & ~0xfUL, regs);
+ goto ldst_done;
+ #endif
+ #ifdef CONFIG_VSX
+ case STORE_VSX:
+- if (regs->msr & MSR_LE)
+- return 0;
+ err = do_vsx_store(op.reg, do_stxvd2x, op.ea, regs);
+ goto ldst_done;
+ #endif
+diff --git a/arch/powerpc/sysdev/xics/icp-opal.c b/arch/powerpc/sysdev/xics/icp-opal.c
+index f9670eabfcfa..b53f80f0b4d8 100644
+--- a/arch/powerpc/sysdev/xics/icp-opal.c
++++ b/arch/powerpc/sysdev/xics/icp-opal.c
+@@ -91,6 +91,16 @@ static unsigned int icp_opal_get_irq(void)
+
+ static void icp_opal_set_cpu_priority(unsigned char cppr)
+ {
++ /*
++ * Here be dragons. The caller has asked to allow only IPI's and not
++ * external interrupts. But OPAL XIVE doesn't support that. So instead
++ * of allowing no interrupts allow all. That's still not right, but
++ * currently the only caller who does this is xics_migrate_irqs_away()
++ * and it works in that case.
++ */
++ if (cppr >= DEFAULT_PRIORITY)
++ cppr = LOWEST_PRIORITY;
++
+ xics_set_base_cppr(cppr);
+ opal_int_set_cppr(cppr);
+ iosync();
+diff --git a/arch/powerpc/sysdev/xics/xics-common.c b/arch/powerpc/sysdev/xics/xics-common.c
+index 69d858e51ac7..23efe4e42172 100644
+--- a/arch/powerpc/sysdev/xics/xics-common.c
++++ b/arch/powerpc/sysdev/xics/xics-common.c
+@@ -20,6 +20,7 @@
+ #include <linux/of.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
++#include <linux/delay.h>
+
+ #include <asm/prom.h>
+ #include <asm/io.h>
+@@ -198,9 +199,6 @@ void xics_migrate_irqs_away(void)
+ /* Remove ourselves from the global interrupt queue */
+ xics_set_cpu_giq(xics_default_distrib_server, 0);
+
+- /* Allow IPIs again... */
+- icp_ops->set_priority(DEFAULT_PRIORITY);
+-
+ for_each_irq_desc(virq, desc) {
+ struct irq_chip *chip;
+ long server;
+@@ -255,6 +253,19 @@ void xics_migrate_irqs_away(void)
+ unlock:
+ raw_spin_unlock_irqrestore(&desc->lock, flags);
+ }
++
++ /* Allow "sufficient" time to drop any inflight IRQ's */
++ mdelay(5);
++
++ /*
++ * Allow IPIs again. This is done at the very end, after migrating all
++ * interrupts, the expectation is that we'll only get woken up by an IPI
++ * interrupt beyond this point, but leave externals masked just to be
++ * safe. If we're using icp-opal this may actually allow all
++ * interrupts anyway, but that should be OK.
++ */
++ icp_ops->set_priority(DEFAULT_PRIORITY);
++
+ }
+ #endif /* CONFIG_HOTPLUG_CPU */
+
+diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
+index d56ef26d4681..7678f7956409 100644
+--- a/arch/s390/mm/pgtable.c
++++ b/arch/s390/mm/pgtable.c
+@@ -606,12 +606,29 @@ void ptep_zap_key(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ bool test_and_clear_guest_dirty(struct mm_struct *mm, unsigned long addr)
+ {
+ spinlock_t *ptl;
++ pgd_t *pgd;
++ pud_t *pud;
++ pmd_t *pmd;
+ pgste_t pgste;
+ pte_t *ptep;
+ pte_t pte;
+ bool dirty;
+
+- ptep = get_locked_pte(mm, addr, &ptl);
++ pgd = pgd_offset(mm, addr);
++ pud = pud_alloc(mm, pgd, addr);
++ if (!pud)
++ return false;
++ pmd = pmd_alloc(mm, pud, addr);
++ if (!pmd)
++ return false;
++ /* We can't run guests backed by huge pages, but userspace can
++ * still set them up and then try to migrate them without any
++ * migration support.
++ */
++ if (pmd_large(*pmd))
++ return true;
++
++ ptep = pte_alloc_map_lock(mm, pmd, addr, &ptl);
+ if (unlikely(!ptep))
+ return false;
+
+diff --git a/crypto/Makefile b/crypto/Makefile
+index b8f0e3eb0791..aa10a4db41de 100644
+--- a/crypto/Makefile
++++ b/crypto/Makefile
+@@ -75,6 +75,7 @@ obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
+ obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
+ obj-$(CONFIG_CRYPTO_SHA3) += sha3_generic.o
+ obj-$(CONFIG_CRYPTO_WP512) += wp512.o
++CFLAGS_wp512.o := $(call cc-option,-fno-schedule-insns) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149
+ obj-$(CONFIG_CRYPTO_TGR192) += tgr192.o
+ obj-$(CONFIG_CRYPTO_GF128MUL) += gf128mul.o
+ obj-$(CONFIG_CRYPTO_ECB) += ecb.o
+@@ -98,6 +99,7 @@ obj-$(CONFIG_CRYPTO_BLOWFISH_COMMON) += blowfish_common.o
+ obj-$(CONFIG_CRYPTO_TWOFISH) += twofish_generic.o
+ obj-$(CONFIG_CRYPTO_TWOFISH_COMMON) += twofish_common.o
+ obj-$(CONFIG_CRYPTO_SERPENT) += serpent_generic.o
++CFLAGS_serpent_generic.o := $(call cc-option,-fsched-pressure) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149
+ obj-$(CONFIG_CRYPTO_AES) += aes_generic.o
+ obj-$(CONFIG_CRYPTO_CAMELLIA) += camellia_generic.o
+ obj-$(CONFIG_CRYPTO_CAST_COMMON) += cast_common.o
+diff --git a/drivers/firmware/efi/arm-runtime.c b/drivers/firmware/efi/arm-runtime.c
+index 349dc3e1e52e..974c5a31a005 100644
+--- a/drivers/firmware/efi/arm-runtime.c
++++ b/drivers/firmware/efi/arm-runtime.c
+@@ -65,6 +65,7 @@ static bool __init efi_virtmap_init(void)
+ bool systab_found;
+
+ efi_mm.pgd = pgd_alloc(&efi_mm);
++ mm_init_cpumask(&efi_mm);
+ init_new_context(NULL, &efi_mm);
+
+ systab_found = false;
+diff --git a/drivers/gpu/drm/i915/gvt/handlers.c b/drivers/gpu/drm/i915/gvt/handlers.c
+index ab2ea157da4c..e9d9e8aa180d 100644
+--- a/drivers/gpu/drm/i915/gvt/handlers.c
++++ b/drivers/gpu/drm/i915/gvt/handlers.c
+@@ -1039,7 +1039,7 @@ static int send_display_ready_uevent(struct intel_vgpu *vgpu, int ready)
+ char vmid_str[20];
+ char display_ready_str[20];
+
+- snprintf(display_ready_str, 20, "GVT_DISPLAY_READY=%d\n", ready);
++ snprintf(display_ready_str, 20, "GVT_DISPLAY_READY=%d", ready);
+ env[0] = display_ready_str;
+
+ snprintf(vmid_str, 20, "VMID=%d", vgpu->id);
+diff --git a/drivers/i2c/busses/i2c-bcm2835.c b/drivers/i2c/busses/i2c-bcm2835.c
+index c3436f627028..cd07a69e2e93 100644
+--- a/drivers/i2c/busses/i2c-bcm2835.c
++++ b/drivers/i2c/busses/i2c-bcm2835.c
+@@ -195,7 +195,9 @@ static irqreturn_t bcm2835_i2c_isr(int this_irq, void *data)
+ }
+
+ if (val & BCM2835_I2C_S_DONE) {
+- if (i2c_dev->curr_msg->flags & I2C_M_RD) {
++ if (!i2c_dev->curr_msg) {
++ dev_err(i2c_dev->dev, "Got unexpected interrupt (from firmware?)\n");
++ } else if (i2c_dev->curr_msg->flags & I2C_M_RD) {
+ bcm2835_drain_rxfifo(i2c_dev);
+ val = bcm2835_i2c_readl(i2c_dev, BCM2835_I2C_S);
+ }
+diff --git a/drivers/i2c/i2c-mux.c b/drivers/i2c/i2c-mux.c
+index 83768e85a919..2178266bca79 100644
+--- a/drivers/i2c/i2c-mux.c
++++ b/drivers/i2c/i2c-mux.c
+@@ -429,6 +429,7 @@ void i2c_mux_del_adapters(struct i2c_mux_core *muxc)
+ while (muxc->num_adapters) {
+ struct i2c_adapter *adap = muxc->adapter[--muxc->num_adapters];
+ struct i2c_mux_priv *priv = adap->algo_data;
++ struct device_node *np = adap->dev.of_node;
+
+ muxc->adapter[muxc->num_adapters] = NULL;
+
+@@ -438,6 +439,7 @@ void i2c_mux_del_adapters(struct i2c_mux_core *muxc)
+
+ sysfs_remove_link(&priv->adap.dev.kobj, "mux_device");
+ i2c_del_adapter(adap);
++ of_node_put(np);
+ kfree(priv);
+ }
+ }
+diff --git a/drivers/iio/counter/104-quad-8.c b/drivers/iio/counter/104-quad-8.c
+index a5913e97945e..f9b8fc9ae13f 100644
+--- a/drivers/iio/counter/104-quad-8.c
++++ b/drivers/iio/counter/104-quad-8.c
+@@ -76,7 +76,7 @@ static int quad8_read_raw(struct iio_dev *indio_dev,
+ return IIO_VAL_INT;
+ }
+
+- flags = inb(base_offset);
++ flags = inb(base_offset + 1);
+ borrow = flags & BIT(0);
+ carry = !!(flags & BIT(1));
+
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index d566f6738833..1664a7ccada7 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -3233,9 +3233,11 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
+ if (err)
+ goto err_rsrc;
+
+- err = mlx5_ib_alloc_q_counters(dev);
+- if (err)
+- goto err_odp;
++ if (MLX5_CAP_GEN(dev->mdev, max_qp_cnt)) {
++ err = mlx5_ib_alloc_q_counters(dev);
++ if (err)
++ goto err_odp;
++ }
+
+ err = ib_register_device(&dev->ib_dev, NULL);
+ if (err)
+@@ -3263,7 +3265,8 @@ static void *mlx5_ib_add(struct mlx5_core_dev *mdev)
+ ib_unregister_device(&dev->ib_dev);
+
+ err_q_cnt:
+- mlx5_ib_dealloc_q_counters(dev);
++ if (MLX5_CAP_GEN(dev->mdev, max_qp_cnt))
++ mlx5_ib_dealloc_q_counters(dev);
+
+ err_odp:
+ mlx5_ib_odp_remove_one(dev);
+@@ -3293,7 +3296,8 @@ static void mlx5_ib_remove(struct mlx5_core_dev *mdev, void *context)
+
+ mlx5_remove_netdev_notifier(dev);
+ ib_unregister_device(&dev->ib_dev);
+- mlx5_ib_dealloc_q_counters(dev);
++ if (MLX5_CAP_GEN(dev->mdev, max_qp_cnt))
++ mlx5_ib_dealloc_q_counters(dev);
+ destroy_umrc_res(dev);
+ mlx5_ib_odp_remove_one(dev);
+ destroy_dev_resources(&dev->devr);
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 3086da5664f3..0ff5469c03d2 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -972,10 +972,61 @@ void dm_accept_partial_bio(struct bio *bio, unsigned n_sectors)
+ }
+ EXPORT_SYMBOL_GPL(dm_accept_partial_bio);
+
++/*
++ * Flush current->bio_list when the target map method blocks.
++ * This fixes deadlocks in snapshot and possibly in other targets.
++ */
++struct dm_offload {
++ struct blk_plug plug;
++ struct blk_plug_cb cb;
++};
++
++static void flush_current_bio_list(struct blk_plug_cb *cb, bool from_schedule)
++{
++ struct dm_offload *o = container_of(cb, struct dm_offload, cb);
++ struct bio_list list;
++ struct bio *bio;
++
++ INIT_LIST_HEAD(&o->cb.list);
++
++ if (unlikely(!current->bio_list))
++ return;
++
++ list = *current->bio_list;
++ bio_list_init(current->bio_list);
++
++ while ((bio = bio_list_pop(&list))) {
++ struct bio_set *bs = bio->bi_pool;
++ if (unlikely(!bs) || bs == fs_bio_set) {
++ bio_list_add(current->bio_list, bio);
++ continue;
++ }
++
++ spin_lock(&bs->rescue_lock);
++ bio_list_add(&bs->rescue_list, bio);
++ queue_work(bs->rescue_workqueue, &bs->rescue_work);
++ spin_unlock(&bs->rescue_lock);
++ }
++}
++
++static void dm_offload_start(struct dm_offload *o)
++{
++ blk_start_plug(&o->plug);
++ o->cb.callback = flush_current_bio_list;
++ list_add(&o->cb.list, &current->plug->cb_list);
++}
++
++static void dm_offload_end(struct dm_offload *o)
++{
++ list_del(&o->cb.list);
++ blk_finish_plug(&o->plug);
++}
++
+ static void __map_bio(struct dm_target_io *tio)
+ {
+ int r;
+ sector_t sector;
++ struct dm_offload o;
+ struct bio *clone = &tio->clone;
+ struct dm_target *ti = tio->ti;
+
+@@ -988,7 +1039,11 @@ static void __map_bio(struct dm_target_io *tio)
+ */
+ atomic_inc(&tio->io->io_count);
+ sector = clone->bi_iter.bi_sector;
++
++ dm_offload_start(&o);
+ r = ti->type->map(ti, clone);
++ dm_offload_end(&o);
++
+ if (r == DM_MAPIO_REMAPPED) {
+ /* the bio has been remapped so dispatch it */
+
+diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
+index dedaf38c5ff6..9a397da137b1 100644
+--- a/drivers/media/rc/rc-main.c
++++ b/drivers/media/rc/rc-main.c
+@@ -1441,6 +1441,7 @@ int rc_register_device(struct rc_dev *dev)
+ int attr = 0;
+ int minor;
+ int rc;
++ u64 rc_type;
+
+ if (!dev || !dev->map_name)
+ return -EINVAL;
+@@ -1526,14 +1527,18 @@ int rc_register_device(struct rc_dev *dev)
+ goto out_input;
+ }
+
++ rc_type = BIT_ULL(rc_map->rc_type);
++
+ if (dev->change_protocol) {
+- u64 rc_type = (1ll << rc_map->rc_type);
+ rc = dev->change_protocol(dev, &rc_type);
+ if (rc < 0)
+ goto out_raw;
+ dev->enabled_protocols = rc_type;
+ }
+
++ if (dev->driver_type == RC_DRIVER_IR_RAW)
++ ir_raw_load_modules(&rc_type);
++
+ /* Allow the RC sysfs nodes to be accessible */
+ atomic_set(&dev->initialized, 1);
+
+diff --git a/drivers/media/rc/serial_ir.c b/drivers/media/rc/serial_ir.c
+index 436bd58b5f05..62f8d10b39e8 100644
+--- a/drivers/media/rc/serial_ir.c
++++ b/drivers/media/rc/serial_ir.c
+@@ -471,10 +471,65 @@ static int hardware_init_port(void)
+ return 0;
+ }
+
++/* Needed by serial_ir_probe() */
++static int serial_ir_tx(struct rc_dev *dev, unsigned int *txbuf,
++ unsigned int count);
++static int serial_ir_tx_duty_cycle(struct rc_dev *dev, u32 cycle);
++static int serial_ir_tx_carrier(struct rc_dev *dev, u32 carrier);
++static int serial_ir_open(struct rc_dev *rcdev);
++static void serial_ir_close(struct rc_dev *rcdev);
++
+ static int serial_ir_probe(struct platform_device *dev)
+ {
++ struct rc_dev *rcdev;
+ int i, nlow, nhigh, result;
+
++ rcdev = devm_rc_allocate_device(&dev->dev);
++ if (!rcdev)
++ return -ENOMEM;
++
++ if (hardware[type].send_pulse && hardware[type].send_space)
++ rcdev->tx_ir = serial_ir_tx;
++ if (hardware[type].set_send_carrier)
++ rcdev->s_tx_carrier = serial_ir_tx_carrier;
++ if (hardware[type].set_duty_cycle)
++ rcdev->s_tx_duty_cycle = serial_ir_tx_duty_cycle;
++
++ switch (type) {
++ case IR_HOMEBREW:
++ rcdev->input_name = "Serial IR type home-brew";
++ break;
++ case IR_IRDEO:
++ rcdev->input_name = "Serial IR type IRdeo";
++ break;
++ case IR_IRDEO_REMOTE:
++ rcdev->input_name = "Serial IR type IRdeo remote";
++ break;
++ case IR_ANIMAX:
++ rcdev->input_name = "Serial IR type AnimaX";
++ break;
++ case IR_IGOR:
++ rcdev->input_name = "Serial IR type IgorPlug";
++ break;
++ }
++
++ rcdev->input_phys = KBUILD_MODNAME "/input0";
++ rcdev->input_id.bustype = BUS_HOST;
++ rcdev->input_id.vendor = 0x0001;
++ rcdev->input_id.product = 0x0001;
++ rcdev->input_id.version = 0x0100;
++ rcdev->open = serial_ir_open;
++ rcdev->close = serial_ir_close;
++ rcdev->dev.parent = &serial_ir.pdev->dev;
++ rcdev->driver_type = RC_DRIVER_IR_RAW;
++ rcdev->allowed_protocols = RC_BIT_ALL;
++ rcdev->driver_name = KBUILD_MODNAME;
++ rcdev->map_name = RC_MAP_RC6_MCE;
++ rcdev->timeout = IR_DEFAULT_TIMEOUT;
++ rcdev->rx_resolution = 250000;
++
++ serial_ir.rcdev = rcdev;
++
+ result = devm_request_irq(&dev->dev, irq, serial_ir_irq_handler,
+ share_irq ? IRQF_SHARED : 0,
+ KBUILD_MODNAME, &hardware);
+@@ -533,7 +588,8 @@ static int serial_ir_probe(struct platform_device *dev)
+ sense ? "low" : "high");
+
+ dev_dbg(&dev->dev, "Interrupt %d, port %04x obtained\n", irq, io);
+- return 0;
++
++ return devm_rc_register_device(&dev->dev, rcdev);
+ }
+
+ static int serial_ir_open(struct rc_dev *rcdev)
+@@ -704,7 +760,6 @@ static void serial_ir_exit(void)
+
+ static int __init serial_ir_init_module(void)
+ {
+- struct rc_dev *rcdev;
+ int result;
+
+ switch (type) {
+@@ -735,69 +790,15 @@ static int __init serial_ir_init_module(void)
+ sense = !!sense;
+
+ result = serial_ir_init();
+- if (result)
+- return result;
+-
+- rcdev = devm_rc_allocate_device(&serial_ir.pdev->dev);
+- if (!rcdev) {
+- result = -ENOMEM;
+- goto serial_cleanup;
+- }
+-
+- if (hardware[type].send_pulse && hardware[type].send_space)
+- rcdev->tx_ir = serial_ir_tx;
+- if (hardware[type].set_send_carrier)
+- rcdev->s_tx_carrier = serial_ir_tx_carrier;
+- if (hardware[type].set_duty_cycle)
+- rcdev->s_tx_duty_cycle = serial_ir_tx_duty_cycle;
+-
+- switch (type) {
+- case IR_HOMEBREW:
+- rcdev->input_name = "Serial IR type home-brew";
+- break;
+- case IR_IRDEO:
+- rcdev->input_name = "Serial IR type IRdeo";
+- break;
+- case IR_IRDEO_REMOTE:
+- rcdev->input_name = "Serial IR type IRdeo remote";
+- break;
+- case IR_ANIMAX:
+- rcdev->input_name = "Serial IR type AnimaX";
+- break;
+- case IR_IGOR:
+- rcdev->input_name = "Serial IR type IgorPlug";
+- break;
+- }
+-
+- rcdev->input_phys = KBUILD_MODNAME "/input0";
+- rcdev->input_id.bustype = BUS_HOST;
+- rcdev->input_id.vendor = 0x0001;
+- rcdev->input_id.product = 0x0001;
+- rcdev->input_id.version = 0x0100;
+- rcdev->open = serial_ir_open;
+- rcdev->close = serial_ir_close;
+- rcdev->dev.parent = &serial_ir.pdev->dev;
+- rcdev->driver_type = RC_DRIVER_IR_RAW;
+- rcdev->allowed_protocols = RC_BIT_ALL;
+- rcdev->driver_name = KBUILD_MODNAME;
+- rcdev->map_name = RC_MAP_RC6_MCE;
+- rcdev->timeout = IR_DEFAULT_TIMEOUT;
+- rcdev->rx_resolution = 250000;
+-
+- serial_ir.rcdev = rcdev;
+-
+- result = rc_register_device(rcdev);
+-
+ if (!result)
+ return 0;
+-serial_cleanup:
++
+ serial_ir_exit();
+ return result;
+ }
+
+ static void __exit serial_ir_exit_module(void)
+ {
+- rc_unregister_device(serial_ir.rcdev);
+ serial_ir_exit();
+ }
+
+diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c
+index 6ca502d834b4..4f42d57f81d9 100644
+--- a/drivers/media/usb/dvb-usb/dw2102.c
++++ b/drivers/media/usb/dvb-usb/dw2102.c
+@@ -68,6 +68,7 @@
+ struct dw2102_state {
+ u8 initialized;
+ u8 last_lock;
++ u8 data[MAX_XFER_SIZE + 4];
+ struct i2c_client *i2c_client_demod;
+ struct i2c_client *i2c_client_tuner;
+
+@@ -661,62 +662,72 @@ static int su3000_i2c_transfer(struct i2c_adapter *adap, struct i2c_msg msg[],
+ int num)
+ {
+ struct dvb_usb_device *d = i2c_get_adapdata(adap);
+- u8 obuf[0x40], ibuf[0x40];
++ struct dw2102_state *state;
+
+ if (!d)
+ return -ENODEV;
++
++ state = d->priv;
++
+ if (mutex_lock_interruptible(&d->i2c_mutex) < 0)
+ return -EAGAIN;
++ if (mutex_lock_interruptible(&d->data_mutex) < 0) {
++ mutex_unlock(&d->i2c_mutex);
++ return -EAGAIN;
++ }
+
+ switch (num) {
+ case 1:
+ switch (msg[0].addr) {
+ case SU3000_STREAM_CTRL:
+- obuf[0] = msg[0].buf[0] + 0x36;
+- obuf[1] = 3;
+- obuf[2] = 0;
+- if (dvb_usb_generic_rw(d, obuf, 3, ibuf, 0, 0) < 0)
++ state->data[0] = msg[0].buf[0] + 0x36;
++ state->data[1] = 3;
++ state->data[2] = 0;
++ if (dvb_usb_generic_rw(d, state->data, 3,
++ state->data, 0, 0) < 0)
+ err("i2c transfer failed.");
+ break;
+ case DW2102_RC_QUERY:
+- obuf[0] = 0x10;
+- if (dvb_usb_generic_rw(d, obuf, 1, ibuf, 2, 0) < 0)
++ state->data[0] = 0x10;
++ if (dvb_usb_generic_rw(d, state->data, 1,
++ state->data, 2, 0) < 0)
+ err("i2c transfer failed.");
+- msg[0].buf[1] = ibuf[0];
+- msg[0].buf[0] = ibuf[1];
++ msg[0].buf[1] = state->data[0];
++ msg[0].buf[0] = state->data[1];
+ break;
+ default:
+ /* always i2c write*/
+- obuf[0] = 0x08;
+- obuf[1] = msg[0].addr;
+- obuf[2] = msg[0].len;
++ state->data[0] = 0x08;
++ state->data[1] = msg[0].addr;
++ state->data[2] = msg[0].len;
+
+- memcpy(&obuf[3], msg[0].buf, msg[0].len);
++ memcpy(&state->data[3], msg[0].buf, msg[0].len);
+
+- if (dvb_usb_generic_rw(d, obuf, msg[0].len + 3,
+- ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, msg[0].len + 3,
++ state->data, 1, 0) < 0)
+ err("i2c transfer failed.");
+
+ }
+ break;
+ case 2:
+ /* always i2c read */
+- obuf[0] = 0x09;
+- obuf[1] = msg[0].len;
+- obuf[2] = msg[1].len;
+- obuf[3] = msg[0].addr;
+- memcpy(&obuf[4], msg[0].buf, msg[0].len);
+-
+- if (dvb_usb_generic_rw(d, obuf, msg[0].len + 4,
+- ibuf, msg[1].len + 1, 0) < 0)
++ state->data[0] = 0x09;
++ state->data[1] = msg[0].len;
++ state->data[2] = msg[1].len;
++ state->data[3] = msg[0].addr;
++ memcpy(&state->data[4], msg[0].buf, msg[0].len);
++
++ if (dvb_usb_generic_rw(d, state->data, msg[0].len + 4,
++ state->data, msg[1].len + 1, 0) < 0)
+ err("i2c transfer failed.");
+
+- memcpy(msg[1].buf, &ibuf[1], msg[1].len);
++ memcpy(msg[1].buf, &state->data[1], msg[1].len);
+ break;
+ default:
+ warn("more than 2 i2c messages at a time is not handled yet.");
+ break;
+ }
++ mutex_unlock(&d->data_mutex);
+ mutex_unlock(&d->i2c_mutex);
+ return num;
+ }
+@@ -844,17 +855,23 @@ static int su3000_streaming_ctrl(struct dvb_usb_adapter *adap, int onoff)
+ static int su3000_power_ctrl(struct dvb_usb_device *d, int i)
+ {
+ struct dw2102_state *state = (struct dw2102_state *)d->priv;
+- u8 obuf[] = {0xde, 0};
++ int ret = 0;
+
+ info("%s: %d, initialized %d", __func__, i, state->initialized);
+
+ if (i && !state->initialized) {
++ mutex_lock(&d->data_mutex);
++
++ state->data[0] = 0xde;
++ state->data[1] = 0;
++
+ state->initialized = 1;
+ /* reset board */
+- return dvb_usb_generic_rw(d, obuf, 2, NULL, 0, 0);
++ ret = dvb_usb_generic_rw(d, state->data, 2, NULL, 0, 0);
++ mutex_unlock(&d->data_mutex);
+ }
+
+- return 0;
++ return ret;
+ }
+
+ static int su3000_read_mac_address(struct dvb_usb_device *d, u8 mac[6])
+@@ -1309,49 +1326,57 @@ static int prof_7500_frontend_attach(struct dvb_usb_adapter *d)
+ return 0;
+ }
+
+-static int su3000_frontend_attach(struct dvb_usb_adapter *d)
++static int su3000_frontend_attach(struct dvb_usb_adapter *adap)
+ {
+- u8 obuf[3] = { 0xe, 0x80, 0 };
+- u8 ibuf[] = { 0 };
++ struct dvb_usb_device *d = adap->dev;
++ struct dw2102_state *state = d->priv;
++
++ mutex_lock(&d->data_mutex);
++
++ state->data[0] = 0xe;
++ state->data[1] = 0x80;
++ state->data[2] = 0;
+
+- if (dvb_usb_generic_rw(d->dev, obuf, 3, ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, 3, state->data, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+
+- obuf[0] = 0xe;
+- obuf[1] = 0x02;
+- obuf[2] = 1;
++ state->data[0] = 0xe;
++ state->data[1] = 0x02;
++ state->data[2] = 1;
+
+- if (dvb_usb_generic_rw(d->dev, obuf, 3, ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, 3, state->data, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+ msleep(300);
+
+- obuf[0] = 0xe;
+- obuf[1] = 0x83;
+- obuf[2] = 0;
++ state->data[0] = 0xe;
++ state->data[1] = 0x83;
++ state->data[2] = 0;
+
+- if (dvb_usb_generic_rw(d->dev, obuf, 3, ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, 3, state->data, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+
+- obuf[0] = 0xe;
+- obuf[1] = 0x83;
+- obuf[2] = 1;
++ state->data[0] = 0xe;
++ state->data[1] = 0x83;
++ state->data[2] = 1;
+
+- if (dvb_usb_generic_rw(d->dev, obuf, 3, ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, 3, state->data, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+
+- obuf[0] = 0x51;
++ state->data[0] = 0x51;
+
+- if (dvb_usb_generic_rw(d->dev, obuf, 1, ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, 1, state->data, 1, 0) < 0)
+ err("command 0x51 transfer failed.");
+
+- d->fe_adap[0].fe = dvb_attach(ds3000_attach, &su3000_ds3000_config,
+- &d->dev->i2c_adap);
+- if (d->fe_adap[0].fe == NULL)
++ mutex_unlock(&d->data_mutex);
++
++ adap->fe_adap[0].fe = dvb_attach(ds3000_attach, &su3000_ds3000_config,
++ &d->i2c_adap);
++ if (adap->fe_adap[0].fe == NULL)
+ return -EIO;
+
+- if (dvb_attach(ts2020_attach, d->fe_adap[0].fe,
++ if (dvb_attach(ts2020_attach, adap->fe_adap[0].fe,
+ &dw2104_ts2020_config,
+- &d->dev->i2c_adap)) {
++ &d->i2c_adap)) {
+ info("Attached DS3000/TS2020!");
+ return 0;
+ }
+@@ -1360,47 +1385,55 @@ static int su3000_frontend_attach(struct dvb_usb_adapter *d)
+ return -EIO;
+ }
+
+-static int t220_frontend_attach(struct dvb_usb_adapter *d)
++static int t220_frontend_attach(struct dvb_usb_adapter *adap)
+ {
+- u8 obuf[3] = { 0xe, 0x87, 0 };
+- u8 ibuf[] = { 0 };
++ struct dvb_usb_device *d = adap->dev;
++ struct dw2102_state *state = d->priv;
++
++ mutex_lock(&d->data_mutex);
+
+- if (dvb_usb_generic_rw(d->dev, obuf, 3, ibuf, 1, 0) < 0)
++ state->data[0] = 0xe;
++ state->data[1] = 0x87;
++ state->data[2] = 0x0;
++
++ if (dvb_usb_generic_rw(d, state->data, 3, state->data, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+
+- obuf[0] = 0xe;
+- obuf[1] = 0x86;
+- obuf[2] = 1;
++ state->data[0] = 0xe;
++ state->data[1] = 0x86;
++ state->data[2] = 1;
+
+- if (dvb_usb_generic_rw(d->dev, obuf, 3, ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, 3, state->data, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+
+- obuf[0] = 0xe;
+- obuf[1] = 0x80;
+- obuf[2] = 0;
++ state->data[0] = 0xe;
++ state->data[1] = 0x80;
++ state->data[2] = 0;
+
+- if (dvb_usb_generic_rw(d->dev, obuf, 3, ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, 3, state->data, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+
+ msleep(50);
+
+- obuf[0] = 0xe;
+- obuf[1] = 0x80;
+- obuf[2] = 1;
++ state->data[0] = 0xe;
++ state->data[1] = 0x80;
++ state->data[2] = 1;
+
+- if (dvb_usb_generic_rw(d->dev, obuf, 3, ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, 3, state->data, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+
+- obuf[0] = 0x51;
++ state->data[0] = 0x51;
+
+- if (dvb_usb_generic_rw(d->dev, obuf, 1, ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, 1, state->data, 1, 0) < 0)
+ err("command 0x51 transfer failed.");
+
+- d->fe_adap[0].fe = dvb_attach(cxd2820r_attach, &cxd2820r_config,
+- &d->dev->i2c_adap, NULL);
+- if (d->fe_adap[0].fe != NULL) {
+- if (dvb_attach(tda18271_attach, d->fe_adap[0].fe, 0x60,
+- &d->dev->i2c_adap, &tda18271_config)) {
++ mutex_unlock(&d->data_mutex);
++
++ adap->fe_adap[0].fe = dvb_attach(cxd2820r_attach, &cxd2820r_config,
++ &d->i2c_adap, NULL);
++ if (adap->fe_adap[0].fe != NULL) {
++ if (dvb_attach(tda18271_attach, adap->fe_adap[0].fe, 0x60,
++ &d->i2c_adap, &tda18271_config)) {
+ info("Attached TDA18271HD/CXD2820R!");
+ return 0;
+ }
+@@ -1410,23 +1443,30 @@ static int t220_frontend_attach(struct dvb_usb_adapter *d)
+ return -EIO;
+ }
+
+-static int m88rs2000_frontend_attach(struct dvb_usb_adapter *d)
++static int m88rs2000_frontend_attach(struct dvb_usb_adapter *adap)
+ {
+- u8 obuf[] = { 0x51 };
+- u8 ibuf[] = { 0 };
++ struct dvb_usb_device *d = adap->dev;
++ struct dw2102_state *state = d->priv;
++
++ mutex_lock(&d->data_mutex);
+
+- if (dvb_usb_generic_rw(d->dev, obuf, 1, ibuf, 1, 0) < 0)
++ state->data[0] = 0x51;
++
++ if (dvb_usb_generic_rw(d, state->data, 1, state->data, 1, 0) < 0)
+ err("command 0x51 transfer failed.");
+
+- d->fe_adap[0].fe = dvb_attach(m88rs2000_attach, &s421_m88rs2000_config,
+- &d->dev->i2c_adap);
++ mutex_unlock(&d->data_mutex);
+
+- if (d->fe_adap[0].fe == NULL)
++ adap->fe_adap[0].fe = dvb_attach(m88rs2000_attach,
++ &s421_m88rs2000_config,
++ &d->i2c_adap);
++
++ if (adap->fe_adap[0].fe == NULL)
+ return -EIO;
+
+- if (dvb_attach(ts2020_attach, d->fe_adap[0].fe,
++ if (dvb_attach(ts2020_attach, adap->fe_adap[0].fe,
+ &dw2104_ts2020_config,
+- &d->dev->i2c_adap)) {
++ &d->i2c_adap)) {
+ info("Attached RS2000/TS2020!");
+ return 0;
+ }
+@@ -1439,44 +1479,50 @@ static int tt_s2_4600_frontend_attach(struct dvb_usb_adapter *adap)
+ {
+ struct dvb_usb_device *d = adap->dev;
+ struct dw2102_state *state = d->priv;
+- u8 obuf[3] = { 0xe, 0x80, 0 };
+- u8 ibuf[] = { 0 };
+ struct i2c_adapter *i2c_adapter;
+ struct i2c_client *client;
+ struct i2c_board_info board_info;
+ struct m88ds3103_platform_data m88ds3103_pdata = {};
+ struct ts2020_config ts2020_config = {};
+
+- if (dvb_usb_generic_rw(d, obuf, 3, ibuf, 1, 0) < 0)
++ mutex_lock(&d->data_mutex);
++
++ state->data[0] = 0xe;
++ state->data[1] = 0x80;
++ state->data[2] = 0x0;
++
++ if (dvb_usb_generic_rw(d, state->data, 3, state->data, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+
+- obuf[0] = 0xe;
+- obuf[1] = 0x02;
+- obuf[2] = 1;
++ state->data[0] = 0xe;
++ state->data[1] = 0x02;
++ state->data[2] = 1;
+
+- if (dvb_usb_generic_rw(d, obuf, 3, ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, 3, state->data, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+ msleep(300);
+
+- obuf[0] = 0xe;
+- obuf[1] = 0x83;
+- obuf[2] = 0;
++ state->data[0] = 0xe;
++ state->data[1] = 0x83;
++ state->data[2] = 0;
+
+- if (dvb_usb_generic_rw(d, obuf, 3, ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, 3, state->data, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+
+- obuf[0] = 0xe;
+- obuf[1] = 0x83;
+- obuf[2] = 1;
++ state->data[0] = 0xe;
++ state->data[1] = 0x83;
++ state->data[2] = 1;
+
+- if (dvb_usb_generic_rw(d, obuf, 3, ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, 3, state->data, 1, 0) < 0)
+ err("command 0x0e transfer failed.");
+
+- obuf[0] = 0x51;
++ state->data[0] = 0x51;
+
+- if (dvb_usb_generic_rw(d, obuf, 1, ibuf, 1, 0) < 0)
++ if (dvb_usb_generic_rw(d, state->data, 1, state->data, 1, 0) < 0)
+ err("command 0x51 transfer failed.");
+
++ mutex_unlock(&d->data_mutex);
++
+ /* attach demod */
+ m88ds3103_pdata.clk = 27000000;
+ m88ds3103_pdata.i2c_wr_max = 33;
+diff --git a/drivers/mtd/maps/pmcmsp-flash.c b/drivers/mtd/maps/pmcmsp-flash.c
+index f9fa3fad728e..2051f28ddac6 100644
+--- a/drivers/mtd/maps/pmcmsp-flash.c
++++ b/drivers/mtd/maps/pmcmsp-flash.c
+@@ -139,15 +139,13 @@ static int __init init_msp_flash(void)
+ }
+
+ msp_maps[i].bankwidth = 1;
+- msp_maps[i].name = kmalloc(7, GFP_KERNEL);
++ msp_maps[i].name = kstrndup(flash_name, 7, GFP_KERNEL);
+ if (!msp_maps[i].name) {
+ iounmap(msp_maps[i].virt);
+ kfree(msp_parts[i]);
+ goto cleanup_loop;
+ }
+
+- msp_maps[i].name = strncpy(msp_maps[i].name, flash_name, 7);
+-
+ for (j = 0; j < pcnt; j++) {
+ part_name[5] = '0' + i;
+ part_name[7] = '0' + j;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 1800befa8b8b..024def5bb3fa 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -2173,6 +2173,7 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005d, quirk_blacklist_vpd);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_LSI_LOGIC, 0x005f, quirk_blacklist_vpd);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATTANSIC, PCI_ANY_ID,
+ quirk_blacklist_vpd);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_QLOGIC, 0x2261, quirk_blacklist_vpd);
+
+ /*
+ * For Broadcom 5706, 5708, 5709 rev. A nics, any read beyond the
+diff --git a/drivers/tty/serial/samsung.c b/drivers/tty/serial/samsung.c
+index f44615fa474d..3e2ef4fd7382 100644
+--- a/drivers/tty/serial/samsung.c
++++ b/drivers/tty/serial/samsung.c
+@@ -1036,8 +1036,10 @@ static int s3c64xx_serial_startup(struct uart_port *port)
+ if (ourport->dma) {
+ ret = s3c24xx_serial_request_dma(ourport);
+ if (ret < 0) {
+- dev_warn(port->dev, "DMA request failed\n");
+- return ret;
++ dev_warn(port->dev,
++ "DMA request failed, DMA will not be used\n");
++ devm_kfree(port->dev, ourport->dma);
++ ourport->dma = NULL;
+ }
+ }
+
+diff --git a/drivers/usb/dwc3/dwc3-omap.c b/drivers/usb/dwc3/dwc3-omap.c
+index eb1b9cb3f9d1..35b63518baf6 100644
+--- a/drivers/usb/dwc3/dwc3-omap.c
++++ b/drivers/usb/dwc3/dwc3-omap.c
+@@ -250,6 +250,7 @@ static void dwc3_omap_set_mailbox(struct dwc3_omap *omap,
+ val = dwc3_omap_read_utmi_ctrl(omap);
+ val |= USBOTGSS_UTMI_OTG_CTRL_IDDIG;
+ dwc3_omap_write_utmi_ctrl(omap, val);
++ break;
+
+ case OMAP_DWC3_VBUS_OFF:
+ val = dwc3_omap_read_utmi_ctrl(omap);
+diff --git a/drivers/usb/dwc3/gadget.h b/drivers/usb/dwc3/gadget.h
+index 3129bcf74d7d..265e223ab645 100644
+--- a/drivers/usb/dwc3/gadget.h
++++ b/drivers/usb/dwc3/gadget.h
+@@ -28,23 +28,23 @@ struct dwc3;
+ #define gadget_to_dwc(g) (container_of(g, struct dwc3, gadget))
+
+ /* DEPCFG parameter 1 */
+-#define DWC3_DEPCFG_INT_NUM(n) ((n) << 0)
++#define DWC3_DEPCFG_INT_NUM(n) (((n) & 0x1f) << 0)
+ #define DWC3_DEPCFG_XFER_COMPLETE_EN (1 << 8)
+ #define DWC3_DEPCFG_XFER_IN_PROGRESS_EN (1 << 9)
+ #define DWC3_DEPCFG_XFER_NOT_READY_EN (1 << 10)
+ #define DWC3_DEPCFG_FIFO_ERROR_EN (1 << 11)
+ #define DWC3_DEPCFG_STREAM_EVENT_EN (1 << 13)
+-#define DWC3_DEPCFG_BINTERVAL_M1(n) ((n) << 16)
++#define DWC3_DEPCFG_BINTERVAL_M1(n) (((n) & 0xff) << 16)
+ #define DWC3_DEPCFG_STREAM_CAPABLE (1 << 24)
+-#define DWC3_DEPCFG_EP_NUMBER(n) ((n) << 25)
++#define DWC3_DEPCFG_EP_NUMBER(n) (((n) & 0x1f) << 25)
+ #define DWC3_DEPCFG_BULK_BASED (1 << 30)
+ #define DWC3_DEPCFG_FIFO_BASED (1 << 31)
+
+ /* DEPCFG parameter 0 */
+-#define DWC3_DEPCFG_EP_TYPE(n) ((n) << 1)
+-#define DWC3_DEPCFG_MAX_PACKET_SIZE(n) ((n) << 3)
+-#define DWC3_DEPCFG_FIFO_NUMBER(n) ((n) << 17)
+-#define DWC3_DEPCFG_BURST_SIZE(n) ((n) << 22)
++#define DWC3_DEPCFG_EP_TYPE(n) (((n) & 0x3) << 1)
++#define DWC3_DEPCFG_MAX_PACKET_SIZE(n) (((n) & 0x7ff) << 3)
++#define DWC3_DEPCFG_FIFO_NUMBER(n) (((n) & 0x1f) << 17)
++#define DWC3_DEPCFG_BURST_SIZE(n) (((n) & 0xf) << 22)
+ #define DWC3_DEPCFG_DATA_SEQ_NUM(n) ((n) << 26)
+ /* This applies for core versions earlier than 1.94a */
+ #define DWC3_DEPCFG_IGN_SEQ_NUM (1 << 31)
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index fd80c1b9c823..560d400eb078 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1833,11 +1833,14 @@ static int ffs_func_eps_enable(struct ffs_function *func)
+ spin_lock_irqsave(&func->ffs->eps_lock, flags);
+ while(count--) {
+ struct usb_endpoint_descriptor *ds;
++ struct usb_ss_ep_comp_descriptor *comp_desc = NULL;
++ int needs_comp_desc = false;
+ int desc_idx;
+
+- if (ffs->gadget->speed == USB_SPEED_SUPER)
++ if (ffs->gadget->speed == USB_SPEED_SUPER) {
+ desc_idx = 2;
+- else if (ffs->gadget->speed == USB_SPEED_HIGH)
++ needs_comp_desc = true;
++ } else if (ffs->gadget->speed == USB_SPEED_HIGH)
+ desc_idx = 1;
+ else
+ desc_idx = 0;
+@@ -1854,6 +1857,14 @@ static int ffs_func_eps_enable(struct ffs_function *func)
+
+ ep->ep->driver_data = ep;
+ ep->ep->desc = ds;
++
++ comp_desc = (struct usb_ss_ep_comp_descriptor *)(ds +
++ USB_DT_ENDPOINT_SIZE);
++ ep->ep->maxburst = comp_desc->bMaxBurst + 1;
++
++ if (needs_comp_desc)
++ ep->ep->comp_desc = comp_desc;
++
+ ret = usb_ep_enable(ep->ep);
+ if (likely(!ret)) {
+ epfile->ep = ep;
+diff --git a/drivers/usb/gadget/function/f_uvc.c b/drivers/usb/gadget/function/f_uvc.c
+index 27ed51b5082f..29b41b5dee04 100644
+--- a/drivers/usb/gadget/function/f_uvc.c
++++ b/drivers/usb/gadget/function/f_uvc.c
+@@ -258,13 +258,6 @@ uvc_function_setup(struct usb_function *f, const struct usb_ctrlrequest *ctrl)
+ memcpy(&uvc_event->req, ctrl, sizeof(uvc_event->req));
+ v4l2_event_queue(&uvc->vdev, &v4l2_event);
+
+- /* Pass additional setup data to userspace */
+- if (uvc->event_setup_out && uvc->event_length) {
+- uvc->control_req->length = uvc->event_length;
+- return usb_ep_queue(uvc->func.config->cdev->gadget->ep0,
+- uvc->control_req, GFP_ATOMIC);
+- }
+-
+ return 0;
+ }
+
+diff --git a/drivers/usb/gadget/udc/dummy_hcd.c b/drivers/usb/gadget/udc/dummy_hcd.c
+index c60abe3a68f9..8cabc5944d5f 100644
+--- a/drivers/usb/gadget/udc/dummy_hcd.c
++++ b/drivers/usb/gadget/udc/dummy_hcd.c
+@@ -1031,6 +1031,8 @@ static int dummy_udc_probe(struct platform_device *pdev)
+ int rc;
+
+ dum = *((void **)dev_get_platdata(&pdev->dev));
++ /* Clear usb_gadget region for new registration to udc-core */
++ memzero_explicit(&dum->gadget, sizeof(struct usb_gadget));
+ dum->gadget.name = gadget_name;
+ dum->gadget.ops = &dummy_ops;
+ dum->gadget.max_speed = USB_SPEED_SUPER;
+diff --git a/drivers/usb/host/ohci-at91.c b/drivers/usb/host/ohci-at91.c
+index 414e3c376dbb..5302f988e7e6 100644
+--- a/drivers/usb/host/ohci-at91.c
++++ b/drivers/usb/host/ohci-at91.c
+@@ -350,7 +350,7 @@ static int ohci_at91_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+
+ case USB_PORT_FEAT_SUSPEND:
+ dev_dbg(hcd->self.controller, "SetPortFeat: SUSPEND\n");
+- if (valid_port(wIndex)) {
++ if (valid_port(wIndex) && ohci_at91->sfr_regmap) {
+ ohci_at91_port_suspend(ohci_at91->sfr_regmap,
+ 1);
+ return 0;
+@@ -393,7 +393,7 @@ static int ohci_at91_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+
+ case USB_PORT_FEAT_SUSPEND:
+ dev_dbg(hcd->self.controller, "ClearPortFeature: SUSPEND\n");
+- if (valid_port(wIndex)) {
++ if (valid_port(wIndex) && ohci_at91->sfr_regmap) {
+ ohci_at91_port_suspend(ohci_at91->sfr_regmap,
+ 0);
+ return 0;
+diff --git a/drivers/usb/host/xhci-dbg.c b/drivers/usb/host/xhci-dbg.c
+index 74c42f722678..3425154baf8b 100644
+--- a/drivers/usb/host/xhci-dbg.c
++++ b/drivers/usb/host/xhci-dbg.c
+@@ -111,7 +111,7 @@ static void xhci_print_cap_regs(struct xhci_hcd *xhci)
+ xhci_dbg(xhci, "RTSOFF 0x%x:\n", temp & RTSOFF_MASK);
+
+ /* xhci 1.1 controllers have the HCCPARAMS2 register */
+- if (hci_version > 100) {
++ if (hci_version > 0x100) {
+ temp = readl(&xhci->cap_regs->hcc_params2);
+ xhci_dbg(xhci, "HCC PARAMS2 0x%x:\n", (unsigned int) temp);
+ xhci_dbg(xhci, " HC %s Force save context capability",
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index c0cd98e804a3..9715200eb36e 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -283,6 +283,8 @@ static int xhci_plat_remove(struct platform_device *dev)
+ struct xhci_hcd *xhci = hcd_to_xhci(hcd);
+ struct clk *clk = xhci->clk;
+
++ xhci->xhc_state |= XHCI_STATE_REMOVING;
++
+ usb_remove_hcd(xhci->shared_hcd);
+ usb_phy_shutdown(hcd->usb_phy);
+
+diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
+index 095778ff984d..37c63cb39714 100644
+--- a/drivers/usb/misc/iowarrior.c
++++ b/drivers/usb/misc/iowarrior.c
+@@ -781,12 +781,6 @@ static int iowarrior_probe(struct usb_interface *interface,
+ iface_desc = interface->cur_altsetting;
+ dev->product_id = le16_to_cpu(udev->descriptor.idProduct);
+
+- if (iface_desc->desc.bNumEndpoints < 1) {
+- dev_err(&interface->dev, "Invalid number of endpoints\n");
+- retval = -EINVAL;
+- goto error;
+- }
+-
+ /* set up the endpoint information */
+ for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {
+ endpoint = &iface_desc->endpoint[i].desc;
+@@ -797,6 +791,21 @@ static int iowarrior_probe(struct usb_interface *interface,
+ /* this one will match for the IOWarrior56 only */
+ dev->int_out_endpoint = endpoint;
+ }
++
++ if (!dev->int_in_endpoint) {
++ dev_err(&interface->dev, "no interrupt-in endpoint found\n");
++ retval = -ENODEV;
++ goto error;
++ }
++
++ if (dev->product_id == USB_DEVICE_ID_CODEMERCS_IOW56) {
++ if (!dev->int_out_endpoint) {
++ dev_err(&interface->dev, "no interrupt-out endpoint found\n");
++ retval = -ENODEV;
++ goto error;
++ }
++ }
++
+ /* we have to check the report_size often, so remember it in the endianness suitable for our machine */
+ dev->report_size = usb_endpoint_maxp(dev->int_in_endpoint);
+ if ((dev->interface->cur_altsetting->desc.bInterfaceNumber == 0) &&
+diff --git a/drivers/usb/serial/digi_acceleport.c b/drivers/usb/serial/digi_acceleport.c
+index 6a1df9e824ca..30bf0f5db82d 100644
+--- a/drivers/usb/serial/digi_acceleport.c
++++ b/drivers/usb/serial/digi_acceleport.c
+@@ -1482,16 +1482,20 @@ static int digi_read_oob_callback(struct urb *urb)
+ struct usb_serial *serial = port->serial;
+ struct tty_struct *tty;
+ struct digi_port *priv = usb_get_serial_port_data(port);
++ unsigned char *buf = urb->transfer_buffer;
+ int opcode, line, status, val;
+ int i;
+ unsigned int rts;
+
++ if (urb->actual_length < 4)
++ return -1;
++
+ /* handle each oob command */
+- for (i = 0; i < urb->actual_length - 3;) {
+- opcode = ((unsigned char *)urb->transfer_buffer)[i++];
+- line = ((unsigned char *)urb->transfer_buffer)[i++];
+- status = ((unsigned char *)urb->transfer_buffer)[i++];
+- val = ((unsigned char *)urb->transfer_buffer)[i++];
++ for (i = 0; i < urb->actual_length - 3; i += 4) {
++ opcode = buf[i];
++ line = buf[i + 1];
++ status = buf[i + 2];
++ val = buf[i + 3];
+
+ dev_dbg(&port->dev, "digi_read_oob_callback: opcode=%d, line=%d, status=%d, val=%d\n",
+ opcode, line, status, val);
+diff --git a/drivers/usb/serial/io_ti.c b/drivers/usb/serial/io_ti.c
+index 9a0db2965fbb..d1cec36f55f2 100644
+--- a/drivers/usb/serial/io_ti.c
++++ b/drivers/usb/serial/io_ti.c
+@@ -1674,6 +1674,12 @@ static void edge_interrupt_callback(struct urb *urb)
+ function = TIUMP_GET_FUNC_FROM_CODE(data[0]);
+ dev_dbg(dev, "%s - port_number %d, function %d, info 0x%x\n", __func__,
+ port_number, function, data[1]);
++
++ if (port_number >= edge_serial->serial->num_ports) {
++ dev_err(dev, "bad port number %d\n", port_number);
++ goto exit;
++ }
++
+ port = edge_serial->serial->port[port_number];
+ edge_port = usb_get_serial_port_data(port);
+ if (!edge_port) {
+@@ -1755,7 +1761,7 @@ static void edge_bulk_in_callback(struct urb *urb)
+
+ port_number = edge_port->port->port_number;
+
+- if (edge_port->lsr_event) {
++ if (urb->actual_length > 0 && edge_port->lsr_event) {
+ edge_port->lsr_event = 0;
+ dev_dbg(dev, "%s ===== Port %u LSR Status = %02x, Data = %02x ======\n",
+ __func__, port_number, edge_port->lsr_mask, *data);
+diff --git a/drivers/usb/serial/omninet.c b/drivers/usb/serial/omninet.c
+index a180b17d2432..76564b3bebb9 100644
+--- a/drivers/usb/serial/omninet.c
++++ b/drivers/usb/serial/omninet.c
+@@ -142,12 +142,6 @@ static int omninet_port_remove(struct usb_serial_port *port)
+
+ static int omninet_open(struct tty_struct *tty, struct usb_serial_port *port)
+ {
+- struct usb_serial *serial = port->serial;
+- struct usb_serial_port *wport;
+-
+- wport = serial->port[1];
+- tty_port_tty_set(&wport->port, tty);
+-
+ return usb_serial_generic_open(tty, port);
+ }
+
+diff --git a/drivers/usb/serial/safe_serial.c b/drivers/usb/serial/safe_serial.c
+index 93c6c9b08daa..8a069aa154ed 100644
+--- a/drivers/usb/serial/safe_serial.c
++++ b/drivers/usb/serial/safe_serial.c
+@@ -200,6 +200,11 @@ static void safe_process_read_urb(struct urb *urb)
+ if (!safe)
+ goto out;
+
++ if (length < 2) {
++ dev_err(&port->dev, "malformed packet\n");
++ return;
++ }
++
+ fcs = fcs_compute10(data, length, CRC10_INITFCS);
+ if (fcs) {
+ dev_err(&port->dev, "%s - bad CRC %x\n", __func__, fcs);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index b4a8173bb80c..750b3f1eba31 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -3927,6 +3927,10 @@ static int ext4_block_truncate_page(handle_t *handle,
+ unsigned blocksize;
+ struct inode *inode = mapping->host;
+
++ /* If we are processing an encrypted inode during orphan list handling */
++ if (ext4_encrypted_inode(inode) && !fscrypt_has_encryption_key(inode))
++ return 0;
++
+ blocksize = inode->i_sb->s_blocksize;
+ length = blocksize - (offset & (blocksize - 1));
+
+diff --git a/include/linux/user_namespace.h b/include/linux/user_namespace.h
+index eb209d4523f5..dc797739f164 100644
+--- a/include/linux/user_namespace.h
++++ b/include/linux/user_namespace.h
+@@ -65,7 +65,7 @@ struct ucounts {
+ struct hlist_node node;
+ struct user_namespace *ns;
+ kuid_t uid;
+- atomic_t count;
++ int count;
+ atomic_t ucount[UCOUNT_COUNTS];
+ };
+
+diff --git a/include/trace/events/syscalls.h b/include/trace/events/syscalls.h
+index 14e49c798135..b35533b94277 100644
+--- a/include/trace/events/syscalls.h
++++ b/include/trace/events/syscalls.h
+@@ -1,5 +1,6 @@
+ #undef TRACE_SYSTEM
+ #define TRACE_SYSTEM raw_syscalls
++#undef TRACE_INCLUDE_FILE
+ #define TRACE_INCLUDE_FILE syscalls
+
+ #if !defined(_TRACE_EVENTS_SYSCALLS_H) || defined(TRACE_HEADER_MULTI_READ)
+diff --git a/kernel/ucount.c b/kernel/ucount.c
+index 95c6336fc2b3..c761cdba2a2d 100644
+--- a/kernel/ucount.c
++++ b/kernel/ucount.c
+@@ -139,7 +139,7 @@ static struct ucounts *get_ucounts(struct user_namespace *ns, kuid_t uid)
+
+ new->ns = ns;
+ new->uid = uid;
+- atomic_set(&new->count, 0);
++ new->count = 0;
+
+ spin_lock_irq(&ucounts_lock);
+ ucounts = find_ucounts(ns, uid, hashent);
+@@ -150,8 +150,10 @@ static struct ucounts *get_ucounts(struct user_namespace *ns, kuid_t uid)
+ ucounts = new;
+ }
+ }
+- if (!atomic_add_unless(&ucounts->count, 1, INT_MAX))
++ if (ucounts->count == INT_MAX)
+ ucounts = NULL;
++ else
++ ucounts->count += 1;
+ spin_unlock_irq(&ucounts_lock);
+ return ucounts;
+ }
+@@ -160,13 +162,15 @@ static void put_ucounts(struct ucounts *ucounts)
+ {
+ unsigned long flags;
+
+- if (atomic_dec_and_test(&ucounts->count)) {
+- spin_lock_irqsave(&ucounts_lock, flags);
++ spin_lock_irqsave(&ucounts_lock, flags);
++ ucounts->count -= 1;
++ if (!ucounts->count)
+ hlist_del_init(&ucounts->node);
+- spin_unlock_irqrestore(&ucounts_lock, flags);
++ else
++ ucounts = NULL;
++ spin_unlock_irqrestore(&ucounts_lock, flags);
+
+- kfree(ucounts);
+- }
++ kfree(ucounts);
+ }
+
+ static inline bool atomic_inc_below(atomic_t *v, int u)
+diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
+index ebe1b9fa3c4d..85814d1bad11 100644
+--- a/virt/kvm/arm/vgic/vgic-mmio.c
++++ b/virt/kvm/arm/vgic/vgic-mmio.c
+@@ -187,21 +187,37 @@ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
+ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
+ bool new_active_state)
+ {
++ struct kvm_vcpu *requester_vcpu;
+ spin_lock(&irq->irq_lock);
++
++ /*
++ * The vcpu parameter here can mean multiple things depending on how
++ * this function is called; when handling a trap from the kernel it
++ * depends on the GIC version, and these functions are also called as
++ * part of save/restore from userspace.
++ *
++ * Therefore, we have to figure out the requester in a reliable way.
++ *
++ * When accessing VGIC state from user space, the requester_vcpu is
++ * NULL, which is fine, because we guarantee that no VCPUs are running
++ * when accessing VGIC state from user space so irq->vcpu->cpu is
++ * always -1.
++ */
++ requester_vcpu = kvm_arm_get_running_vcpu();
++
+ /*
+ * If this virtual IRQ was written into a list register, we
+ * have to make sure the CPU that runs the VCPU thread has
+- * synced back LR state to the struct vgic_irq. We can only
+- * know this for sure, when either this irq is not assigned to
+- * anyone's AP list anymore, or the VCPU thread is not
+- * running on any CPUs.
++ * synced back the LR state to the struct vgic_irq.
+ *
+- * In the opposite case, we know the VCPU thread may be on its
+- * way back from the guest and still has to sync back this
+- * IRQ, so we release and re-acquire the spin_lock to let the
+- * other thread sync back the IRQ.
++ * As long as the conditions below are true, we know the VCPU thread
++ * may be on its way back from the guest (we kicked the VCPU thread in
++ * vgic_change_active_prepare) and still has to sync back this IRQ,
++ * so we release and re-acquire the spin_lock to let the other thread
++ * sync back the IRQ.
+ */
+ while (irq->vcpu && /* IRQ may have state in an LR somewhere */
++ irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
+ irq->vcpu->cpu != -1) /* VCPU thread is running */
+ cond_resched_lock(&irq->irq_lock);
+
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-03-22 16:55 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-03-22 16:55 UTC (permalink / raw
To: gentoo-commits
commit: 81829230c28d23865ae8b0139826128a718c7017
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 22 16:55:18 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 22 16:55:18 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=81829230
Linux patch 4.10.5
0000_README | 4 +
1004_linux-4.10.5.patch | 2238 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2242 insertions(+)
diff --git a/0000_README b/0000_README
index a80feb8..464eea3 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch: 1003_linux-4.10.4.patch
From: http://www.kernel.org
Desc: Linux 4.10.4
+Patch: 1004_linux-4.10.5.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.5
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1004_linux-4.10.5.patch b/1004_linux-4.10.5.patch
new file mode 100644
index 0000000..0772bdc
--- /dev/null
+++ b/1004_linux-4.10.5.patch
@@ -0,0 +1,2238 @@
+diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt
+index 405da11fc3e4..d11af52427b4 100644
+--- a/Documentation/arm64/silicon-errata.txt
++++ b/Documentation/arm64/silicon-errata.txt
+@@ -42,24 +42,26 @@ file acts as a registry of software workarounds in the Linux Kernel and
+ will be updated when new workarounds are committed and backported to
+ stable kernels.
+
+-| Implementor | Component | Erratum ID | Kconfig |
+-+----------------+-----------------+-----------------+-------------------------+
+-| ARM | Cortex-A53 | #826319 | ARM64_ERRATUM_826319 |
+-| ARM | Cortex-A53 | #827319 | ARM64_ERRATUM_827319 |
+-| ARM | Cortex-A53 | #824069 | ARM64_ERRATUM_824069 |
+-| ARM | Cortex-A53 | #819472 | ARM64_ERRATUM_819472 |
+-| ARM | Cortex-A53 | #845719 | ARM64_ERRATUM_845719 |
+-| ARM | Cortex-A53 | #843419 | ARM64_ERRATUM_843419 |
+-| ARM | Cortex-A57 | #832075 | ARM64_ERRATUM_832075 |
+-| ARM | Cortex-A57 | #852523 | N/A |
+-| ARM | Cortex-A57 | #834220 | ARM64_ERRATUM_834220 |
+-| ARM | Cortex-A72 | #853709 | N/A |
+-| ARM | MMU-500 | #841119,#826419 | N/A |
+-| | | | |
+-| Cavium | ThunderX ITS | #22375, #24313 | CAVIUM_ERRATUM_22375 |
+-| Cavium | ThunderX ITS | #23144 | CAVIUM_ERRATUM_23144 |
+-| Cavium | ThunderX GICv3 | #23154 | CAVIUM_ERRATUM_23154 |
+-| Cavium | ThunderX Core | #27456 | CAVIUM_ERRATUM_27456 |
+-| Cavium | ThunderX SMMUv2 | #27704 | N/A |
+-| | | | |
+-| Freescale/NXP | LS2080A/LS1043A | A-008585 | FSL_ERRATUM_A008585 |
++| Implementor | Component | Erratum ID | Kconfig |
+++----------------+-----------------+-----------------+-----------------------------+
++| ARM | Cortex-A53 | #826319 | ARM64_ERRATUM_826319 |
++| ARM | Cortex-A53 | #827319 | ARM64_ERRATUM_827319 |
++| ARM | Cortex-A53 | #824069 | ARM64_ERRATUM_824069 |
++| ARM | Cortex-A53 | #819472 | ARM64_ERRATUM_819472 |
++| ARM | Cortex-A53 | #845719 | ARM64_ERRATUM_845719 |
++| ARM | Cortex-A53 | #843419 | ARM64_ERRATUM_843419 |
++| ARM | Cortex-A57 | #832075 | ARM64_ERRATUM_832075 |
++| ARM | Cortex-A57 | #852523 | N/A |
++| ARM | Cortex-A57 | #834220 | ARM64_ERRATUM_834220 |
++| ARM | Cortex-A72 | #853709 | N/A |
++| ARM | MMU-500 | #841119,#826419 | N/A |
++| | | | |
++| Cavium | ThunderX ITS | #22375, #24313 | CAVIUM_ERRATUM_22375 |
++| Cavium | ThunderX ITS | #23144 | CAVIUM_ERRATUM_23144 |
++| Cavium | ThunderX GICv3 | #23154 | CAVIUM_ERRATUM_23154 |
++| Cavium | ThunderX Core | #27456 | CAVIUM_ERRATUM_27456 |
++| Cavium | ThunderX SMMUv2 | #27704 | N/A |
++| | | | |
++| Freescale/NXP | LS2080A/LS1043A | A-008585 | FSL_ERRATUM_A008585 |
++| | | | |
++| Qualcomm Tech. | QDF2400 ITS | E0065 | QCOM_QDF2400_ERRATUM_0065 |
+diff --git a/Makefile b/Makefile
+index 8df819e31882..48e18096913f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 111742126897..51634f7f0aff 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -479,6 +479,16 @@ config CAVIUM_ERRATUM_27456
+
+ If unsure, say Y.
+
++config QCOM_QDF2400_ERRATUM_0065
++ bool "QDF2400 E0065: Incorrect GITS_TYPER.ITT_Entry_size"
++ default y
++ help
++ On Qualcomm Datacenter Technologies QDF2400 SoC, ITS hardware reports
++ ITE size incorrectly. The GITS_TYPER.ITT_Entry_size field should have
++ been indicated as 16Bytes (0xf), not 8Bytes (0x7).
++
++ If unsure, say Y.
++
+ endmenu
+
+
+diff --git a/arch/arm64/kvm/hyp/tlb.c b/arch/arm64/kvm/hyp/tlb.c
+index 88e2f2b938f0..55889d057757 100644
+--- a/arch/arm64/kvm/hyp/tlb.c
++++ b/arch/arm64/kvm/hyp/tlb.c
+@@ -17,14 +17,62 @@
+
+ #include <asm/kvm_hyp.h>
+
++static void __hyp_text __tlb_switch_to_guest_vhe(struct kvm *kvm)
++{
++ u64 val;
++
++ /*
++ * With VHE enabled, we have HCR_EL2.{E2H,TGE} = {1,1}, and
++ * most TLB operations target EL2/EL0. In order to affect the
++ * guest TLBs (EL1/EL0), we need to change one of these two
++ * bits. Changing E2H is impossible (goodbye TTBR1_EL2), so
++ * let's flip TGE before executing the TLB operation.
++ */
++ write_sysreg(kvm->arch.vttbr, vttbr_el2);
++ val = read_sysreg(hcr_el2);
++ val &= ~HCR_TGE;
++ write_sysreg(val, hcr_el2);
++ isb();
++}
++
++static void __hyp_text __tlb_switch_to_guest_nvhe(struct kvm *kvm)
++{
++ write_sysreg(kvm->arch.vttbr, vttbr_el2);
++ isb();
++}
++
++static hyp_alternate_select(__tlb_switch_to_guest,
++ __tlb_switch_to_guest_nvhe,
++ __tlb_switch_to_guest_vhe,
++ ARM64_HAS_VIRT_HOST_EXTN);
++
++static void __hyp_text __tlb_switch_to_host_vhe(struct kvm *kvm)
++{
++ /*
++ * We're done with the TLB operation, let's restore the host's
++ * view of HCR_EL2.
++ */
++ write_sysreg(0, vttbr_el2);
++ write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
++}
++
++static void __hyp_text __tlb_switch_to_host_nvhe(struct kvm *kvm)
++{
++ write_sysreg(0, vttbr_el2);
++}
++
++static hyp_alternate_select(__tlb_switch_to_host,
++ __tlb_switch_to_host_nvhe,
++ __tlb_switch_to_host_vhe,
++ ARM64_HAS_VIRT_HOST_EXTN);
++
+ void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
+ {
+ dsb(ishst);
+
+ /* Switch to requested VMID */
+ kvm = kern_hyp_va(kvm);
+- write_sysreg(kvm->arch.vttbr, vttbr_el2);
+- isb();
++ __tlb_switch_to_guest()(kvm);
+
+ /*
+ * We could do so much better if we had the VA as well.
+@@ -45,7 +93,7 @@ void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
+ dsb(ish);
+ isb();
+
+- write_sysreg(0, vttbr_el2);
++ __tlb_switch_to_host()(kvm);
+ }
+
+ void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm)
+@@ -54,14 +102,13 @@ void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm)
+
+ /* Switch to requested VMID */
+ kvm = kern_hyp_va(kvm);
+- write_sysreg(kvm->arch.vttbr, vttbr_el2);
+- isb();
++ __tlb_switch_to_guest()(kvm);
+
+ asm volatile("tlbi vmalls12e1is" : : );
+ dsb(ish);
+ isb();
+
+- write_sysreg(0, vttbr_el2);
++ __tlb_switch_to_host()(kvm);
+ }
+
+ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
+@@ -69,14 +116,13 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
+ struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
+
+ /* Switch to requested VMID */
+- write_sysreg(kvm->arch.vttbr, vttbr_el2);
+- isb();
++ __tlb_switch_to_guest()(kvm);
+
+ asm volatile("tlbi vmalle1" : : );
+ dsb(nsh);
+ isb();
+
+- write_sysreg(0, vttbr_el2);
++ __tlb_switch_to_host()(kvm);
+ }
+
+ void __hyp_text __kvm_flush_vm_context(void)
+diff --git a/arch/powerpc/crypto/crc32c-vpmsum_glue.c b/arch/powerpc/crypto/crc32c-vpmsum_glue.c
+index 9fa046d56eba..411994551afc 100644
+--- a/arch/powerpc/crypto/crc32c-vpmsum_glue.c
++++ b/arch/powerpc/crypto/crc32c-vpmsum_glue.c
+@@ -52,7 +52,7 @@ static int crc32c_vpmsum_cra_init(struct crypto_tfm *tfm)
+ {
+ u32 *key = crypto_tfm_ctx(tfm);
+
+- *key = 0;
++ *key = ~0;
+
+ return 0;
+ }
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 1635c0c8df23..e07b36c5588a 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -2100,8 +2100,8 @@ static int x86_pmu_event_init(struct perf_event *event)
+
+ static void refresh_pce(void *ignored)
+ {
+- if (current->mm)
+- load_mm_cr4(current->mm);
++ if (current->active_mm)
++ load_mm_cr4(current->active_mm);
+ }
+
+ static void x86_pmu_event_mapped(struct perf_event *event)
+diff --git a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+index 8af04afdfcb9..84c0f23ea644 100644
+--- a/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
++++ b/arch/x86/kernel/cpu/intel_rdt_rdtgroup.c
+@@ -727,7 +727,7 @@ void rdtgroup_kn_unlock(struct kernfs_node *kn)
+ if (atomic_dec_and_test(&rdtgrp->waitcount) &&
+ (rdtgrp->flags & RDT_DELETED)) {
+ kernfs_unbreak_active_protection(kn);
+- kernfs_put(kn);
++ kernfs_put(rdtgrp->kn);
+ kfree(rdtgrp);
+ } else {
+ kernfs_unbreak_active_protection(kn);
+diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
+index 54a2372f5dbb..b5785c197e53 100644
+--- a/arch/x86/kernel/head64.c
++++ b/arch/x86/kernel/head64.c
+@@ -4,6 +4,7 @@
+ * Copyright (C) 2000 Andrea Arcangeli <andrea@suse.de> SuSE
+ */
+
++#define DISABLE_BRANCH_PROFILING
+ #include <linux/init.h>
+ #include <linux/linkage.h>
+ #include <linux/types.h>
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index 37e7cf544e51..62d55e34d373 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -1310,6 +1310,8 @@ static int __init init_tsc_clocksource(void)
+ * the refined calibration and directly register it as a clocksource.
+ */
+ if (boot_cpu_has(X86_FEATURE_TSC_KNOWN_FREQ)) {
++ if (boot_cpu_has(X86_FEATURE_ART))
++ art_related_clocksource = &clocksource_tsc;
+ clocksource_register_khz(&clocksource_tsc, tsc_khz);
+ return 0;
+ }
+diff --git a/arch/x86/kernel/unwind_frame.c b/arch/x86/kernel/unwind_frame.c
+index 23d15565d02a..919a7b78f945 100644
+--- a/arch/x86/kernel/unwind_frame.c
++++ b/arch/x86/kernel/unwind_frame.c
+@@ -80,19 +80,43 @@ static size_t regs_size(struct pt_regs *regs)
+ return sizeof(*regs);
+ }
+
++#ifdef CONFIG_X86_32
++#define GCC_REALIGN_WORDS 3
++#else
++#define GCC_REALIGN_WORDS 1
++#endif
++
+ static bool is_last_task_frame(struct unwind_state *state)
+ {
+- unsigned long bp = (unsigned long)state->bp;
+- unsigned long regs = (unsigned long)task_pt_regs(state->task);
++ unsigned long *last_bp = (unsigned long *)task_pt_regs(state->task) - 2;
++ unsigned long *aligned_bp = last_bp - GCC_REALIGN_WORDS;
+
+ /*
+ * We have to check for the last task frame at two different locations
+ * because gcc can occasionally decide to realign the stack pointer and
+- * change the offset of the stack frame by a word in the prologue of a
+- * function called by head/entry code.
++ * change the offset of the stack frame in the prologue of a function
++ * called by head/entry code. Examples:
++ *
++ * <start_secondary>:
++ * push %edi
++ * lea 0x8(%esp),%edi
++ * and $0xfffffff8,%esp
++ * pushl -0x4(%edi)
++ * push %ebp
++ * mov %esp,%ebp
++ *
++ * <x86_64_start_kernel>:
++ * lea 0x8(%rsp),%r10
++ * and $0xfffffffffffffff0,%rsp
++ * pushq -0x8(%r10)
++ * push %rbp
++ * mov %rsp,%rbp
++ *
++ * Note that after aligning the stack, it pushes a duplicate copy of
++ * the return address before pushing the frame pointer.
+ */
+- return bp == regs - FRAME_HEADER_SIZE ||
+- bp == regs - FRAME_HEADER_SIZE - sizeof(long);
++ return (state->bp == last_bp ||
++ (state->bp == aligned_bp && *(aligned_bp+1) == *(last_bp+1)));
+ }
+
+ /*
+diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
+index 0493c17b8a51..333362f992e4 100644
+--- a/arch/x86/mm/kasan_init_64.c
++++ b/arch/x86/mm/kasan_init_64.c
+@@ -1,3 +1,4 @@
++#define DISABLE_BRANCH_PROFILING
+ #define pr_fmt(fmt) "kasan: " fmt
+ #include <linux/bootmem.h>
+ #include <linux/kasan.h>
+diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c
+index dce1af0ce85c..4721d50c4628 100644
+--- a/drivers/crypto/s5p-sss.c
++++ b/drivers/crypto/s5p-sss.c
+@@ -270,7 +270,7 @@ static void s5p_sg_copy_buf(void *buf, struct scatterlist *sg,
+ scatterwalk_done(&walk, out, 0);
+ }
+
+-static void s5p_aes_complete(struct s5p_aes_dev *dev, int err)
++static void s5p_sg_done(struct s5p_aes_dev *dev)
+ {
+ if (dev->sg_dst_cpy) {
+ dev_dbg(dev->dev,
+@@ -281,8 +281,11 @@ static void s5p_aes_complete(struct s5p_aes_dev *dev, int err)
+ }
+ s5p_free_sg_cpy(dev, &dev->sg_src_cpy);
+ s5p_free_sg_cpy(dev, &dev->sg_dst_cpy);
++}
+
+- /* holding a lock outside */
++/* Calls the completion. Cannot be called with dev->lock hold. */
++static void s5p_aes_complete(struct s5p_aes_dev *dev, int err)
++{
+ dev->req->base.complete(&dev->req->base, err);
+ dev->busy = false;
+ }
+@@ -368,51 +371,44 @@ static int s5p_set_indata(struct s5p_aes_dev *dev, struct scatterlist *sg)
+ }
+
+ /*
+- * Returns true if new transmitting (output) data is ready and its
+- * address+length have to be written to device (by calling
+- * s5p_set_dma_outdata()). False otherwise.
++ * Returns -ERRNO on error (mapping of new data failed).
++ * On success returns:
++ * - 0 if there is no more data,
++ * - 1 if new transmitting (output) data is ready and its address+length
++ * have to be written to device (by calling s5p_set_dma_outdata()).
+ */
+-static bool s5p_aes_tx(struct s5p_aes_dev *dev)
++static int s5p_aes_tx(struct s5p_aes_dev *dev)
+ {
+- int err = 0;
+- bool ret = false;
++ int ret = 0;
+
+ s5p_unset_outdata(dev);
+
+ if (!sg_is_last(dev->sg_dst)) {
+- err = s5p_set_outdata(dev, sg_next(dev->sg_dst));
+- if (err)
+- s5p_aes_complete(dev, err);
+- else
+- ret = true;
+- } else {
+- s5p_aes_complete(dev, err);
+-
+- dev->busy = true;
+- tasklet_schedule(&dev->tasklet);
++ ret = s5p_set_outdata(dev, sg_next(dev->sg_dst));
++ if (!ret)
++ ret = 1;
+ }
+
+ return ret;
+ }
+
+ /*
+- * Returns true if new receiving (input) data is ready and its
+- * address+length have to be written to device (by calling
+- * s5p_set_dma_indata()). False otherwise.
++ * Returns -ERRNO on error (mapping of new data failed).
++ * On success returns:
++ * - 0 if there is no more data,
++ * - 1 if new receiving (input) data is ready and its address+length
++ * have to be written to device (by calling s5p_set_dma_indata()).
+ */
+-static bool s5p_aes_rx(struct s5p_aes_dev *dev)
++static int s5p_aes_rx(struct s5p_aes_dev *dev/*, bool *set_dma*/)
+ {
+- int err;
+- bool ret = false;
++ int ret = 0;
+
+ s5p_unset_indata(dev);
+
+ if (!sg_is_last(dev->sg_src)) {
+- err = s5p_set_indata(dev, sg_next(dev->sg_src));
+- if (err)
+- s5p_aes_complete(dev, err);
+- else
+- ret = true;
++ ret = s5p_set_indata(dev, sg_next(dev->sg_src));
++ if (!ret)
++ ret = 1;
+ }
+
+ return ret;
+@@ -422,33 +418,73 @@ static irqreturn_t s5p_aes_interrupt(int irq, void *dev_id)
+ {
+ struct platform_device *pdev = dev_id;
+ struct s5p_aes_dev *dev = platform_get_drvdata(pdev);
+- bool set_dma_tx = false;
+- bool set_dma_rx = false;
++ int err_dma_tx = 0;
++ int err_dma_rx = 0;
++ bool tx_end = false;
+ unsigned long flags;
+ uint32_t status;
++ int err;
+
+ spin_lock_irqsave(&dev->lock, flags);
+
++ /*
++ * Handle rx or tx interrupt. If there is still data (scatterlist did not
++ * reach end), then map next scatterlist entry.
++ * In case of such mapping error, s5p_aes_complete() should be called.
++ *
++ * If there is no more data in tx scatter list, call s5p_aes_complete()
++ * and schedule new tasklet.
++ */
+ status = SSS_READ(dev, FCINTSTAT);
+ if (status & SSS_FCINTSTAT_BRDMAINT)
+- set_dma_rx = s5p_aes_rx(dev);
+- if (status & SSS_FCINTSTAT_BTDMAINT)
+- set_dma_tx = s5p_aes_tx(dev);
++ err_dma_rx = s5p_aes_rx(dev);
++
++ if (status & SSS_FCINTSTAT_BTDMAINT) {
++ if (sg_is_last(dev->sg_dst))
++ tx_end = true;
++ err_dma_tx = s5p_aes_tx(dev);
++ }
+
+ SSS_WRITE(dev, FCINTPEND, status);
+
+- /*
+- * Writing length of DMA block (either receiving or transmitting)
+- * will start the operation immediately, so this should be done
+- * at the end (even after clearing pending interrupts to not miss the
+- * interrupt).
+- */
+- if (set_dma_tx)
+- s5p_set_dma_outdata(dev, dev->sg_dst);
+- if (set_dma_rx)
+- s5p_set_dma_indata(dev, dev->sg_src);
++ if (err_dma_rx < 0) {
++ err = err_dma_rx;
++ goto error;
++ }
++ if (err_dma_tx < 0) {
++ err = err_dma_tx;
++ goto error;
++ }
++
++ if (tx_end) {
++ s5p_sg_done(dev);
++
++ spin_unlock_irqrestore(&dev->lock, flags);
++
++ s5p_aes_complete(dev, 0);
++ dev->busy = true;
++ tasklet_schedule(&dev->tasklet);
++ } else {
++ /*
++ * Writing length of DMA block (either receiving or
++ * transmitting) will start the operation immediately, so this
++ * should be done at the end (even after clearing pending
++ * interrupts to not miss the interrupt).
++ */
++ if (err_dma_tx == 1)
++ s5p_set_dma_outdata(dev, dev->sg_dst);
++ if (err_dma_rx == 1)
++ s5p_set_dma_indata(dev, dev->sg_src);
+
++ spin_unlock_irqrestore(&dev->lock, flags);
++ }
++
++ return IRQ_HANDLED;
++
++error:
++ s5p_sg_done(dev);
+ spin_unlock_irqrestore(&dev->lock, flags);
++ s5p_aes_complete(dev, err);
+
+ return IRQ_HANDLED;
+ }
+@@ -597,8 +633,9 @@ static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode)
+ s5p_unset_indata(dev);
+
+ indata_error:
+- s5p_aes_complete(dev, err);
++ s5p_sg_done(dev);
+ spin_unlock_irqrestore(&dev->lock, flags);
++ s5p_aes_complete(dev, err);
+ }
+
+ static void s5p_tasklet_cb(unsigned long data)
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index 728ca3ea74d2..f02da12f2860 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -1573,18 +1573,21 @@ static int i915_drm_resume(struct drm_device *dev)
+ intel_opregion_setup(dev_priv);
+
+ intel_init_pch_refclk(dev);
+- drm_mode_config_reset(dev);
+
+ /*
+ * Interrupts have to be enabled before any batches are run. If not the
+ * GPU will hang. i915_gem_init_hw() will initiate batches to
+ * update/restore the context.
+ *
++ * drm_mode_config_reset() needs AUX interrupts.
++ *
+ * Modeset enabling in intel_modeset_init_hw() also needs working
+ * interrupts.
+ */
+ intel_runtime_pm_enable_interrupts(dev_priv);
+
++ drm_mode_config_reset(dev);
++
+ mutex_lock(&dev->struct_mutex);
+ if (i915_gem_init_hw(dev)) {
+ DRM_ERROR("failed to re-initialize GPU, declaring wedged!\n");
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index 07ca71cabb2b..f914581b1729 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -3089,19 +3089,16 @@ static void ibx_hpd_irq_setup(struct drm_i915_private *dev_priv)
+ I915_WRITE(PCH_PORT_HOTPLUG, hotplug);
+ }
+
+-static void spt_hpd_irq_setup(struct drm_i915_private *dev_priv)
++static void spt_hpd_detection_setup(struct drm_i915_private *dev_priv)
+ {
+- u32 hotplug_irqs, hotplug, enabled_irqs;
+-
+- hotplug_irqs = SDE_HOTPLUG_MASK_SPT;
+- enabled_irqs = intel_hpd_enabled_irqs(dev_priv, hpd_spt);
+-
+- ibx_display_interrupt_update(dev_priv, hotplug_irqs, enabled_irqs);
++ u32 hotplug;
+
+ /* Enable digital hotplug on the PCH */
+ hotplug = I915_READ(PCH_PORT_HOTPLUG);
+- hotplug |= PORTD_HOTPLUG_ENABLE | PORTC_HOTPLUG_ENABLE |
+- PORTB_HOTPLUG_ENABLE | PORTA_HOTPLUG_ENABLE;
++ hotplug |= PORTA_HOTPLUG_ENABLE |
++ PORTB_HOTPLUG_ENABLE |
++ PORTC_HOTPLUG_ENABLE |
++ PORTD_HOTPLUG_ENABLE;
+ I915_WRITE(PCH_PORT_HOTPLUG, hotplug);
+
+ hotplug = I915_READ(PCH_PORT_HOTPLUG2);
+@@ -3109,6 +3106,18 @@ static void spt_hpd_irq_setup(struct drm_i915_private *dev_priv)
+ I915_WRITE(PCH_PORT_HOTPLUG2, hotplug);
+ }
+
++static void spt_hpd_irq_setup(struct drm_i915_private *dev_priv)
++{
++ u32 hotplug_irqs, enabled_irqs;
++
++ hotplug_irqs = SDE_HOTPLUG_MASK_SPT;
++ enabled_irqs = intel_hpd_enabled_irqs(dev_priv, hpd_spt);
++
++ ibx_display_interrupt_update(dev_priv, hotplug_irqs, enabled_irqs);
++
++ spt_hpd_detection_setup(dev_priv);
++}
++
+ static void ilk_hpd_irq_setup(struct drm_i915_private *dev_priv)
+ {
+ u32 hotplug_irqs, hotplug, enabled_irqs;
+@@ -3143,18 +3152,15 @@ static void ilk_hpd_irq_setup(struct drm_i915_private *dev_priv)
+ ibx_hpd_irq_setup(dev_priv);
+ }
+
+-static void bxt_hpd_irq_setup(struct drm_i915_private *dev_priv)
++static void __bxt_hpd_detection_setup(struct drm_i915_private *dev_priv,
++ u32 enabled_irqs)
+ {
+- u32 hotplug_irqs, hotplug, enabled_irqs;
+-
+- enabled_irqs = intel_hpd_enabled_irqs(dev_priv, hpd_bxt);
+- hotplug_irqs = BXT_DE_PORT_HOTPLUG_MASK;
+-
+- bdw_update_port_irq(dev_priv, hotplug_irqs, enabled_irqs);
++ u32 hotplug;
+
+ hotplug = I915_READ(PCH_PORT_HOTPLUG);
+- hotplug |= PORTC_HOTPLUG_ENABLE | PORTB_HOTPLUG_ENABLE |
+- PORTA_HOTPLUG_ENABLE;
++ hotplug |= PORTA_HOTPLUG_ENABLE |
++ PORTB_HOTPLUG_ENABLE |
++ PORTC_HOTPLUG_ENABLE;
+
+ DRM_DEBUG_KMS("Invert bit setting: hp_ctl:%x hp_port:%x\n",
+ hotplug, enabled_irqs);
+@@ -3164,7 +3170,6 @@ static void bxt_hpd_irq_setup(struct drm_i915_private *dev_priv)
+ * For BXT invert bit has to be set based on AOB design
+ * for HPD detection logic, update it based on VBT fields.
+ */
+-
+ if ((enabled_irqs & BXT_DE_PORT_HP_DDIA) &&
+ intel_bios_is_port_hpd_inverted(dev_priv, PORT_A))
+ hotplug |= BXT_DDIA_HPD_INVERT;
+@@ -3178,6 +3183,23 @@ static void bxt_hpd_irq_setup(struct drm_i915_private *dev_priv)
+ I915_WRITE(PCH_PORT_HOTPLUG, hotplug);
+ }
+
++static void bxt_hpd_detection_setup(struct drm_i915_private *dev_priv)
++{
++ __bxt_hpd_detection_setup(dev_priv, BXT_DE_PORT_HOTPLUG_MASK);
++}
++
++static void bxt_hpd_irq_setup(struct drm_i915_private *dev_priv)
++{
++ u32 hotplug_irqs, enabled_irqs;
++
++ enabled_irqs = intel_hpd_enabled_irqs(dev_priv, hpd_bxt);
++ hotplug_irqs = BXT_DE_PORT_HOTPLUG_MASK;
++
++ bdw_update_port_irq(dev_priv, hotplug_irqs, enabled_irqs);
++
++ __bxt_hpd_detection_setup(dev_priv, enabled_irqs);
++}
++
+ static void ibx_irq_postinstall(struct drm_device *dev)
+ {
+ struct drm_i915_private *dev_priv = to_i915(dev);
+@@ -3193,6 +3215,12 @@ static void ibx_irq_postinstall(struct drm_device *dev)
+
+ gen5_assert_iir_is_zero(dev_priv, SDEIIR);
+ I915_WRITE(SDEIMR, ~mask);
++
++ if (HAS_PCH_IBX(dev_priv) || HAS_PCH_CPT(dev_priv) ||
++ HAS_PCH_LPT(dev_priv))
++ ; /* TODO: Enable HPD detection on older PCH platforms too */
++ else
++ spt_hpd_detection_setup(dev_priv);
+ }
+
+ static void gen5_gt_irq_postinstall(struct drm_device *dev)
+@@ -3404,6 +3432,9 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
+
+ GEN5_IRQ_INIT(GEN8_DE_PORT_, ~de_port_masked, de_port_enables);
+ GEN5_IRQ_INIT(GEN8_DE_MISC_, ~de_misc_masked, de_misc_masked);
++
++ if (IS_BROXTON(dev_priv))
++ bxt_hpd_detection_setup(dev_priv);
+ }
+
+ static int gen8_irq_postinstall(struct drm_device *dev)
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 4daf7dda9cca..c3ab0240691a 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -4289,8 +4289,8 @@ static bool bxt_digital_port_connected(struct drm_i915_private *dev_priv,
+ *
+ * Return %true if @port is connected, %false otherwise.
+ */
+-static bool intel_digital_port_connected(struct drm_i915_private *dev_priv,
+- struct intel_digital_port *port)
++bool intel_digital_port_connected(struct drm_i915_private *dev_priv,
++ struct intel_digital_port *port)
+ {
+ if (HAS_PCH_IBX(dev_priv))
+ return ibx_digital_port_connected(dev_priv, port);
+diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
+index 03a2112004f9..a0af54bba85b 100644
+--- a/drivers/gpu/drm/i915/intel_drv.h
++++ b/drivers/gpu/drm/i915/intel_drv.h
+@@ -1451,6 +1451,8 @@ bool intel_dp_read_dpcd(struct intel_dp *intel_dp);
+ bool __intel_dp_read_desc(struct intel_dp *intel_dp,
+ struct intel_dp_desc *desc);
+ bool intel_dp_read_desc(struct intel_dp *intel_dp);
++bool intel_digital_port_connected(struct drm_i915_private *dev_priv,
++ struct intel_digital_port *port);
+
+ /* intel_dp_aux_backlight.c */
+ int intel_dp_aux_init_backlight_funcs(struct intel_connector *intel_connector);
+diff --git a/drivers/gpu/drm/i915/intel_lspcon.c b/drivers/gpu/drm/i915/intel_lspcon.c
+index daa523410953..12695616f673 100644
+--- a/drivers/gpu/drm/i915/intel_lspcon.c
++++ b/drivers/gpu/drm/i915/intel_lspcon.c
+@@ -100,6 +100,8 @@ static bool lspcon_probe(struct intel_lspcon *lspcon)
+ static void lspcon_resume_in_pcon_wa(struct intel_lspcon *lspcon)
+ {
+ struct intel_dp *intel_dp = lspcon_to_intel_dp(lspcon);
++ struct intel_digital_port *dig_port = dp_to_dig_port(intel_dp);
++ struct drm_i915_private *dev_priv = to_i915(dig_port->base.base.dev);
+ unsigned long start = jiffies;
+
+ if (!lspcon->desc_valid)
+@@ -115,7 +117,8 @@ static void lspcon_resume_in_pcon_wa(struct intel_lspcon *lspcon)
+ if (!__intel_dp_read_desc(intel_dp, &desc))
+ return;
+
+- if (!memcmp(&intel_dp->desc, &desc, sizeof(desc))) {
++ if (intel_digital_port_connected(dev_priv, dig_port) &&
++ !memcmp(&intel_dp->desc, &desc, sizeof(desc))) {
+ DRM_DEBUG_KMS("LSPCON recovering in PCON mode after %u ms\n",
+ jiffies_to_msecs(jiffies - start));
+ return;
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 69b040f47d56..519ff7a18b5b 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -1597,6 +1597,14 @@ static void __maybe_unused its_enable_quirk_cavium_23144(void *data)
+ its->flags |= ITS_FLAGS_WORKAROUND_CAVIUM_23144;
+ }
+
++static void __maybe_unused its_enable_quirk_qdf2400_e0065(void *data)
++{
++ struct its_node *its = data;
++
++ /* On QDF2400, the size of the ITE is 16Bytes */
++ its->ite_size = 16;
++}
++
+ static const struct gic_quirk its_quirks[] = {
+ #ifdef CONFIG_CAVIUM_ERRATUM_22375
+ {
+@@ -1614,6 +1622,14 @@ static const struct gic_quirk its_quirks[] = {
+ .init = its_enable_quirk_cavium_23144,
+ },
+ #endif
++#ifdef CONFIG_QCOM_QDF2400_ERRATUM_0065
++ {
++ .desc = "ITS: QDF2400 erratum 0065",
++ .iidr = 0x00001070, /* QDF2400 ITS rev 1.x */
++ .mask = 0xffffffff,
++ .init = its_enable_quirk_qdf2400_e0065,
++ },
++#endif
+ {
+ }
+ };
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 8029dd4912b6..644d2bf0c451 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -4185,6 +4185,7 @@ void bond_setup(struct net_device *bond_dev)
+
+ /* Initialize the device entry points */
+ ether_setup(bond_dev);
++ bond_dev->max_mtu = ETH_MAX_MTU;
+ bond_dev->netdev_ops = &bond_netdev_ops;
+ bond_dev->ethtool_ops = &bond_ethtool_ops;
+
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+index a7d16db5c4b2..937f37a5dcb2 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+@@ -1323,7 +1323,7 @@ static int xgbe_read_ext_mii_regs(struct xgbe_prv_data *pdata, int addr,
+ static int xgbe_set_ext_mii_mode(struct xgbe_prv_data *pdata, unsigned int port,
+ enum xgbe_mdio_mode mode)
+ {
+- unsigned int reg_val = 0;
++ unsigned int reg_val = XGMAC_IOREAD(pdata, MAC_MDIOCL22R);
+
+ switch (mode) {
+ case XGBE_MDIO_MODE_CL22:
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+index 1c87cc204075..742e5d1b5da4 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+@@ -1131,12 +1131,12 @@ static void xgbe_stop(struct xgbe_prv_data *pdata)
+ hw_if->disable_tx(pdata);
+ hw_if->disable_rx(pdata);
+
++ phy_if->phy_stop(pdata);
++
+ xgbe_free_irqs(pdata);
+
+ xgbe_napi_disable(pdata, 1);
+
+- phy_if->phy_stop(pdata);
+-
+ hw_if->exit(pdata);
+
+ channel = pdata->channel;
+@@ -2274,10 +2274,7 @@ static int xgbe_one_poll(struct napi_struct *napi, int budget)
+ processed = xgbe_rx_poll(channel, budget);
+
+ /* If we processed everything, we are done */
+- if (processed < budget) {
+- /* Turn off polling */
+- napi_complete_done(napi, processed);
+-
++ if ((processed < budget) && napi_complete_done(napi, processed)) {
+ /* Enable Tx and Rx interrupts */
+ if (pdata->channel_irq_mode)
+ xgbe_enable_rx_tx_int(pdata, channel);
+@@ -2319,10 +2316,7 @@ static int xgbe_all_poll(struct napi_struct *napi, int budget)
+ } while ((processed < budget) && (processed != last_processed));
+
+ /* If we processed everything, we are done */
+- if (processed < budget) {
+- /* Turn off polling */
+- napi_complete_done(napi, processed);
+-
++ if ((processed < budget) && napi_complete_done(napi, processed)) {
+ /* Enable Tx and Rx interrupts */
+ xgbe_enable_rx_tx_ints(pdata);
+ }
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+index 9d8c953083b4..e707c49cc55a 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+@@ -716,6 +716,8 @@ static void xgbe_phy_sfp_phy_settings(struct xgbe_prv_data *pdata)
+ pdata->phy.duplex = DUPLEX_UNKNOWN;
+ pdata->phy.autoneg = AUTONEG_ENABLE;
+ pdata->phy.advertising = pdata->phy.supported;
++
++ return;
+ }
+
+ pdata->phy.advertising &= ~ADVERTISED_Autoneg;
+@@ -875,6 +877,16 @@ static int xgbe_phy_find_phy_device(struct xgbe_prv_data *pdata)
+ !phy_data->sfp_phy_avail)
+ return 0;
+
++ /* Set the proper MDIO mode for the PHY */
++ ret = pdata->hw_if.set_ext_mii_mode(pdata, phy_data->mdio_addr,
++ phy_data->phydev_mode);
++ if (ret) {
++ netdev_err(pdata->netdev,
++ "mdio port/clause not compatible (%u/%u)\n",
++ phy_data->mdio_addr, phy_data->phydev_mode);
++ return ret;
++ }
++
+ /* Create and connect to the PHY device */
+ phydev = get_phy_device(phy_data->mii, phy_data->mdio_addr,
+ (phy_data->phydev_mode == XGBE_MDIO_MODE_CL45));
+@@ -2722,6 +2734,18 @@ static int xgbe_phy_start(struct xgbe_prv_data *pdata)
+ if (ret)
+ return ret;
+
++ /* Set the proper MDIO mode for the re-driver */
++ if (phy_data->redrv && !phy_data->redrv_if) {
++ ret = pdata->hw_if.set_ext_mii_mode(pdata, phy_data->redrv_addr,
++ XGBE_MDIO_MODE_CL22);
++ if (ret) {
++ netdev_err(pdata->netdev,
++ "redriver mdio port not compatible (%u)\n",
++ phy_data->redrv_addr);
++ return ret;
++ }
++ }
++
+ /* Start in highest supported mode */
+ xgbe_phy_set_mode(pdata, phy_data->start_mode);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index d5ecb8f53fd4..c69a1f827b65 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -803,6 +803,7 @@ int mlx5e_get_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed);
+
+ void mlx5e_set_rx_cq_mode_params(struct mlx5e_params *params,
+ u8 cq_period_mode);
++void mlx5e_set_rq_type_params(struct mlx5e_priv *priv, u8 rq_type);
+
+ static inline void mlx5e_tx_notify_hw(struct mlx5e_sq *sq,
+ struct mlx5_wqe_ctrl_seg *ctrl, int bf_sz)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index bb67863aa361..6906deae06e0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1477,6 +1477,7 @@ static int set_pflag_rx_cqe_compress(struct net_device *netdev,
+
+ MLX5E_SET_PFLAG(priv, MLX5E_PFLAG_RX_CQE_COMPRESS, enable);
+ priv->params.rx_cqe_compress_def = enable;
++ mlx5e_set_rq_type_params(priv, priv->params.rq_wq_type);
+
+ if (reset)
+ err = mlx5e_open_locked(netdev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index f14ca3385fdd..9d9c64927372 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -78,9 +78,10 @@ static bool mlx5e_check_fragmented_striding_rq_cap(struct mlx5_core_dev *mdev)
+ MLX5_CAP_ETH(mdev, reg_umr_sq);
+ }
+
+-static void mlx5e_set_rq_type_params(struct mlx5e_priv *priv, u8 rq_type)
++void mlx5e_set_rq_type_params(struct mlx5e_priv *priv, u8 rq_type)
+ {
+ priv->params.rq_wq_type = rq_type;
++ priv->params.lro_wqe_sz = MLX5E_PARAMS_DEFAULT_LRO_WQE_SZ;
+ switch (priv->params.rq_wq_type) {
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ priv->params.log_rq_size = MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE_MPW;
+@@ -93,6 +94,10 @@ static void mlx5e_set_rq_type_params(struct mlx5e_priv *priv, u8 rq_type)
+ break;
+ default: /* MLX5_WQ_TYPE_LINKED_LIST */
+ priv->params.log_rq_size = MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE;
++
++ /* Extra room needed for build_skb */
++ priv->params.lro_wqe_sz -= MLX5_RX_HEADROOM +
++ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+ }
+ priv->params.min_rx_wqes = mlx5_min_rx_wqes(priv->params.rq_wq_type,
+ BIT(priv->params.log_rq_size));
+@@ -3495,6 +3500,9 @@ static void mlx5e_build_nic_netdev_priv(struct mlx5_core_dev *mdev,
+ cqe_compress_heuristic(link_speed, pci_bw);
+ }
+
++ MLX5E_SET_PFLAG(priv, MLX5E_PFLAG_RX_CQE_COMPRESS,
++ priv->params.rx_cqe_compress_def);
++
+ mlx5e_set_rq_priv_params(priv);
+ if (priv->params.rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ)
+ priv->params.lro_en = true;
+@@ -3517,16 +3525,9 @@ static void mlx5e_build_nic_netdev_priv(struct mlx5_core_dev *mdev,
+ mlx5e_build_default_indir_rqt(mdev, priv->params.indirection_rqt,
+ MLX5E_INDIR_RQT_SIZE, profile->max_nch(mdev));
+
+- priv->params.lro_wqe_sz =
+- MLX5E_PARAMS_DEFAULT_LRO_WQE_SZ -
+- /* Extra room needed for build_skb */
+- MLX5_RX_HEADROOM -
+- SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+-
+ /* Initialize pflags */
+ MLX5E_SET_PFLAG(priv, MLX5E_PFLAG_RX_CQE_BASED_MODER,
+ priv->params.rx_cq_period_mode == MLX5_CQ_PERIOD_MODE_START_FROM_CQE);
+- MLX5E_SET_PFLAG(priv, MLX5E_PFLAG_RX_CQE_COMPRESS, priv->params.rx_cqe_compress_def);
+
+ mutex_init(&priv->state_lock);
+
+@@ -3940,6 +3941,19 @@ static void mlx5e_register_vport_rep(struct mlx5_core_dev *mdev)
+ }
+ }
+
++static void mlx5e_unregister_vport_rep(struct mlx5_core_dev *mdev)
++{
++ struct mlx5_eswitch *esw = mdev->priv.eswitch;
++ int total_vfs = MLX5_TOTAL_VPORTS(mdev);
++ int vport;
++
++ if (!MLX5_CAP_GEN(mdev, vport_group_manager))
++ return;
++
++ for (vport = 1; vport < total_vfs; vport++)
++ mlx5_eswitch_unregister_vport_rep(esw, vport);
++}
++
+ void mlx5e_detach_netdev(struct mlx5_core_dev *mdev, struct net_device *netdev)
+ {
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+@@ -3986,6 +4000,7 @@ static int mlx5e_attach(struct mlx5_core_dev *mdev, void *vpriv)
+ return err;
+ }
+
++ mlx5e_register_vport_rep(mdev);
+ return 0;
+ }
+
+@@ -3997,6 +4012,7 @@ static void mlx5e_detach(struct mlx5_core_dev *mdev, void *vpriv)
+ if (!netif_device_present(netdev))
+ return;
+
++ mlx5e_unregister_vport_rep(mdev);
+ mlx5e_detach_netdev(mdev, netdev);
+ mlx5e_destroy_mdev_resources(mdev);
+ }
+@@ -4015,8 +4031,6 @@ static void *mlx5e_add(struct mlx5_core_dev *mdev)
+ if (err)
+ return NULL;
+
+- mlx5e_register_vport_rep(mdev);
+-
+ if (MLX5_CAP_GEN(mdev, vport_group_manager))
+ ppriv = &esw->offloads.vport_reps[0];
+
+@@ -4068,13 +4082,7 @@ void mlx5e_destroy_netdev(struct mlx5_core_dev *mdev, struct mlx5e_priv *priv)
+
+ static void mlx5e_remove(struct mlx5_core_dev *mdev, void *vpriv)
+ {
+- struct mlx5_eswitch *esw = mdev->priv.eswitch;
+- int total_vfs = MLX5_TOTAL_VPORTS(mdev);
+ struct mlx5e_priv *priv = vpriv;
+- int vport;
+-
+- for (vport = 1; vport < total_vfs; vport++)
+- mlx5_eswitch_unregister_vport_rep(esw, vport);
+
+ unregister_netdev(priv->netdev);
+ mlx5e_detach(mdev, vpriv);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 06d5e6fecb0a..e3b88bbb9dcf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -92,19 +92,18 @@ static inline void mlx5e_cqes_update_owner(struct mlx5e_cq *cq, u32 cqcc, int n)
+ static inline void mlx5e_decompress_cqe(struct mlx5e_rq *rq,
+ struct mlx5e_cq *cq, u32 cqcc)
+ {
+- u16 wqe_cnt_step;
+-
+ cq->title.byte_cnt = cq->mini_arr[cq->mini_arr_idx].byte_cnt;
+ cq->title.check_sum = cq->mini_arr[cq->mini_arr_idx].checksum;
+ cq->title.op_own &= 0xf0;
+ cq->title.op_own |= 0x01 & (cqcc >> cq->wq.log_sz);
+ cq->title.wqe_counter = cpu_to_be16(cq->decmprs_wqe_counter);
+
+- wqe_cnt_step =
+- rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ ?
+- mpwrq_get_cqe_consumed_strides(&cq->title) : 1;
+- cq->decmprs_wqe_counter =
+- (cq->decmprs_wqe_counter + wqe_cnt_step) & rq->wq.sz_m1;
++ if (rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ)
++ cq->decmprs_wqe_counter +=
++ mpwrq_get_cqe_consumed_strides(&cq->title);
++ else
++ cq->decmprs_wqe_counter =
++ (cq->decmprs_wqe_counter + 1) & rq->wq.sz_m1;
+ }
+
+ static inline void mlx5e_decompress_cqe_no_hash(struct mlx5e_rq *rq,
+@@ -172,6 +171,7 @@ void mlx5e_modify_rx_cqe_compression(struct mlx5e_priv *priv, bool val)
+ mlx5e_close_locked(priv->netdev);
+
+ MLX5E_SET_PFLAG(priv, MLX5E_PFLAG_RX_CQE_COMPRESS, val);
++ mlx5e_set_rq_type_params(priv, priv->params.rq_wq_type);
+
+ if (was_opened)
+ mlx5e_open_locked(priv->netdev);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 9e494a446b7e..f17f906f1d3a 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -496,30 +496,40 @@ static int
+ mlxsw_sp_vr_lpm_tree_check(struct mlxsw_sp *mlxsw_sp, struct mlxsw_sp_vr *vr,
+ struct mlxsw_sp_prefix_usage *req_prefix_usage)
+ {
+- struct mlxsw_sp_lpm_tree *lpm_tree;
++ struct mlxsw_sp_lpm_tree *lpm_tree = vr->lpm_tree;
++ struct mlxsw_sp_lpm_tree *new_tree;
++ int err;
+
+- if (mlxsw_sp_prefix_usage_eq(req_prefix_usage,
+- &vr->lpm_tree->prefix_usage))
++ if (mlxsw_sp_prefix_usage_eq(req_prefix_usage, &lpm_tree->prefix_usage))
+ return 0;
+
+- lpm_tree = mlxsw_sp_lpm_tree_get(mlxsw_sp, req_prefix_usage,
++ new_tree = mlxsw_sp_lpm_tree_get(mlxsw_sp, req_prefix_usage,
+ vr->proto, false);
+- if (IS_ERR(lpm_tree)) {
++ if (IS_ERR(new_tree)) {
+ /* We failed to get a tree according to the required
+ * prefix usage. However, the current tree might be still good
+ * for us if our requirement is subset of the prefixes used
+ * in the tree.
+ */
+ if (mlxsw_sp_prefix_usage_subset(req_prefix_usage,
+- &vr->lpm_tree->prefix_usage))
++ &lpm_tree->prefix_usage))
+ return 0;
+- return PTR_ERR(lpm_tree);
++ return PTR_ERR(new_tree);
+ }
+
+- mlxsw_sp_vr_lpm_tree_unbind(mlxsw_sp, vr);
+- mlxsw_sp_lpm_tree_put(mlxsw_sp, vr->lpm_tree);
++ /* Prevent packet loss by overwriting existing binding */
++ vr->lpm_tree = new_tree;
++ err = mlxsw_sp_vr_lpm_tree_bind(mlxsw_sp, vr);
++ if (err)
++ goto err_tree_bind;
++ mlxsw_sp_lpm_tree_put(mlxsw_sp, lpm_tree);
++
++ return 0;
++
++err_tree_bind:
+ vr->lpm_tree = lpm_tree;
+- return mlxsw_sp_vr_lpm_tree_bind(mlxsw_sp, vr);
++ mlxsw_sp_lpm_tree_put(mlxsw_sp, new_tree);
++ return err;
+ }
+
+ static struct mlxsw_sp_vr *mlxsw_sp_vr_get(struct mlxsw_sp *mlxsw_sp,
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 45301cb98bc1..7074b40ebd7f 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -881,12 +881,14 @@ static netdev_tx_t geneve_xmit(struct sk_buff *skb, struct net_device *dev)
+ info = &geneve->info;
+ }
+
++ rcu_read_lock();
+ #if IS_ENABLED(CONFIG_IPV6)
+ if (info->mode & IP_TUNNEL_INFO_IPV6)
+ err = geneve6_xmit_skb(skb, dev, geneve, info);
+ else
+ #endif
+ err = geneve_xmit_skb(skb, dev, geneve, info);
++ rcu_read_unlock();
+
+ if (likely(!err))
+ return NETDEV_TX_OK;
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index bdc58567d10e..707321508c69 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -2075,6 +2075,7 @@ static int team_dev_type_check_change(struct net_device *dev,
+ static void team_setup(struct net_device *dev)
+ {
+ ether_setup(dev);
++ dev->max_mtu = ETH_MAX_MTU;
+
+ dev->netdev_ops = &team_netdev_ops;
+ dev->ethtool_ops = &team_ethtool_ops;
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index bfabe180053e..cdf6339827e6 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -819,7 +819,18 @@ static void tun_net_uninit(struct net_device *dev)
+ /* Net device open. */
+ static int tun_net_open(struct net_device *dev)
+ {
++ struct tun_struct *tun = netdev_priv(dev);
++ int i;
++
+ netif_tx_start_all_queues(dev);
++
++ for (i = 0; i < tun->numqueues; i++) {
++ struct tun_file *tfile;
++
++ tfile = rtnl_dereference(tun->tfiles[i]);
++ tfile->socket.sk->sk_write_space(tfile->socket.sk);
++ }
++
+ return 0;
+ }
+
+@@ -1101,9 +1112,10 @@ static unsigned int tun_chr_poll(struct file *file, poll_table *wait)
+ if (!skb_array_empty(&tfile->tx_array))
+ mask |= POLLIN | POLLRDNORM;
+
+- if (sock_writeable(sk) ||
+- (!test_and_set_bit(SOCKWQ_ASYNC_NOSPACE, &sk->sk_socket->flags) &&
+- sock_writeable(sk)))
++ if (tun->dev->flags & IFF_UP &&
++ (sock_writeable(sk) ||
++ (!test_and_set_bit(SOCKWQ_ASYNC_NOSPACE, &sk->sk_socket->flags) &&
++ sock_writeable(sk))))
+ mask |= POLLOUT | POLLWRNORM;
+
+ if (tun->dev->reg_state != NETREG_REGISTERED)
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 454f907d419a..682aac0a2267 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -341,6 +341,7 @@ static netdev_tx_t is_ip_tx_frame(struct sk_buff *skb, struct net_device *dev)
+
+ static netdev_tx_t vrf_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
++ int len = skb->len;
+ netdev_tx_t ret = is_ip_tx_frame(skb, dev);
+
+ if (likely(ret == NET_XMIT_SUCCESS || ret == NET_XMIT_CN)) {
+@@ -348,7 +349,7 @@ static netdev_tx_t vrf_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ u64_stats_update_begin(&dstats->syncp);
+ dstats->tx_pkts++;
+- dstats->tx_bytes += skb->len;
++ dstats->tx_bytes += len;
+ u64_stats_update_end(&dstats->syncp);
+ } else {
+ this_cpu_inc(dev->dstats->tx_drps);
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 30b04cf2bb1e..0e204f1a5072 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -1992,7 +1992,6 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ const struct iphdr *old_iph = ip_hdr(skb);
+ union vxlan_addr *dst;
+ union vxlan_addr remote_ip, local_ip;
+- union vxlan_addr *src;
+ struct vxlan_metadata _md;
+ struct vxlan_metadata *md = &_md;
+ __be16 src_port = 0, dst_port;
+@@ -2019,7 +2018,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+
+ dst_port = rdst->remote_port ? rdst->remote_port : vxlan->cfg.dst_port;
+ vni = rdst->remote_vni;
+- src = &vxlan->cfg.saddr;
++ local_ip = vxlan->cfg.saddr;
+ dst_cache = &rdst->dst_cache;
+ md->gbp = skb->mark;
+ ttl = vxlan->cfg.ttl;
+@@ -2052,7 +2051,6 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ dst = &remote_ip;
+ dst_port = info->key.tp_dst ? : vxlan->cfg.dst_port;
+ vni = tunnel_id_to_key32(info->key.tun_id);
+- src = &local_ip;
+ dst_cache = &info->dst_cache;
+ if (info->options_len)
+ md = ip_tunnel_info_opts(info);
+@@ -2064,6 +2062,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ src_port = udp_flow_src_port(dev_net(dev), skb, vxlan->cfg.port_min,
+ vxlan->cfg.port_max, true);
+
++ rcu_read_lock();
+ if (dst->sa.sa_family == AF_INET) {
+ struct vxlan_sock *sock4 = rcu_dereference(vxlan->vn4_sock);
+ struct rtable *rt;
+@@ -2072,7 +2071,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ rt = vxlan_get_route(vxlan, dev, sock4, skb,
+ rdst ? rdst->remote_ifindex : 0, tos,
+ dst->sin.sin_addr.s_addr,
+- &src->sin.sin_addr.s_addr,
++ &local_ip.sin.sin_addr.s_addr,
+ dst_port, src_port,
+ dst_cache, info);
+ if (IS_ERR(rt)) {
+@@ -2086,7 +2085,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ dst_port, vni, &rt->dst,
+ rt->rt_flags);
+ if (err)
+- return;
++ goto out_unlock;
+ } else if (info->key.tun_flags & TUNNEL_DONT_FRAGMENT) {
+ df = htons(IP_DF);
+ }
+@@ -2099,7 +2098,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ if (err < 0)
+ goto tx_error;
+
+- udp_tunnel_xmit_skb(rt, sock4->sock->sk, skb, src->sin.sin_addr.s_addr,
++ udp_tunnel_xmit_skb(rt, sock4->sock->sk, skb, local_ip.sin.sin_addr.s_addr,
+ dst->sin.sin_addr.s_addr, tos, ttl, df,
+ src_port, dst_port, xnet, !udp_sum);
+ #if IS_ENABLED(CONFIG_IPV6)
+@@ -2109,7 +2108,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ ndst = vxlan6_get_route(vxlan, dev, sock6, skb,
+ rdst ? rdst->remote_ifindex : 0, tos,
+ label, &dst->sin6.sin6_addr,
+- &src->sin6.sin6_addr,
++ &local_ip.sin6.sin6_addr,
+ dst_port, src_port,
+ dst_cache, info);
+ if (IS_ERR(ndst)) {
+@@ -2125,7 +2124,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ dst_port, vni, ndst,
+ rt6i_flags);
+ if (err)
+- return;
++ goto out_unlock;
+ }
+
+ tos = ip_tunnel_ecn_encap(tos, old_iph, skb);
+@@ -2137,11 +2136,13 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ goto tx_error;
+
+ udp_tunnel6_xmit_skb(ndst, sock6->sock->sk, skb, dev,
+- &src->sin6.sin6_addr,
++ &local_ip.sin6.sin6_addr,
+ &dst->sin6.sin6_addr, tos, ttl,
+ label, src_port, dst_port, !udp_sum);
+ #endif
+ }
++out_unlock:
++ rcu_read_unlock();
+ return;
+
+ drop:
+@@ -2150,6 +2151,7 @@ static void vxlan_xmit_one(struct sk_buff *skb, struct net_device *dev,
+ return;
+
+ tx_error:
++ rcu_read_unlock();
+ if (err == -ELOOP)
+ dev->stats.collisions++;
+ else if (err == -ENETUNREACH)
+@@ -2626,7 +2628,7 @@ static int vxlan_validate(struct nlattr *tb[], struct nlattr *data[])
+
+ if (data[IFLA_VXLAN_ID]) {
+ __u32 id = nla_get_u32(data[IFLA_VXLAN_ID]);
+- if (id >= VXLAN_VID_MASK)
++ if (id >= VXLAN_N_VID)
+ return -ERANGE;
+ }
+
+diff --git a/include/linux/dccp.h b/include/linux/dccp.h
+index 61d042bbbf60..68449293c4b6 100644
+--- a/include/linux/dccp.h
++++ b/include/linux/dccp.h
+@@ -163,6 +163,7 @@ struct dccp_request_sock {
+ __u64 dreq_isr;
+ __u64 dreq_gsr;
+ __be32 dreq_service;
++ spinlock_t dreq_lock;
+ struct list_head dreq_featneg;
+ __u32 dreq_timestamp_echo;
+ __u32 dreq_timestamp_time;
+diff --git a/include/uapi/linux/packet_diag.h b/include/uapi/linux/packet_diag.h
+index d08c63f3dd6f..0c5d5dd61b6a 100644
+--- a/include/uapi/linux/packet_diag.h
++++ b/include/uapi/linux/packet_diag.h
+@@ -64,7 +64,7 @@ struct packet_diag_mclist {
+ __u32 pdmc_count;
+ __u16 pdmc_type;
+ __u16 pdmc_alen;
+- __u8 pdmc_addr[MAX_ADDR_LEN];
++ __u8 pdmc_addr[32]; /* MAX_ADDR_LEN */
+ };
+
+ struct packet_diag_ring {
+diff --git a/kernel/futex.c b/kernel/futex.c
+index cdf365036141..dda00f03337d 100644
+--- a/kernel/futex.c
++++ b/kernel/futex.c
+@@ -2813,7 +2813,6 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+ {
+ struct hrtimer_sleeper timeout, *to = NULL;
+ struct rt_mutex_waiter rt_waiter;
+- struct rt_mutex *pi_mutex = NULL;
+ struct futex_hash_bucket *hb;
+ union futex_key key2 = FUTEX_KEY_INIT;
+ struct futex_q q = futex_q_init;
+@@ -2897,6 +2896,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+ if (q.pi_state && (q.pi_state->owner != current)) {
+ spin_lock(q.lock_ptr);
+ ret = fixup_pi_state_owner(uaddr2, &q, current);
++ if (ret && rt_mutex_owner(&q.pi_state->pi_mutex) == current)
++ rt_mutex_unlock(&q.pi_state->pi_mutex);
+ /*
+ * Drop the reference to the pi state which
+ * the requeue_pi() code acquired for us.
+@@ -2905,6 +2906,8 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+ spin_unlock(q.lock_ptr);
+ }
+ } else {
++ struct rt_mutex *pi_mutex;
++
+ /*
+ * We have been woken up by futex_unlock_pi(), a timeout, or a
+ * signal. futex_unlock_pi() will not destroy the lock_ptr nor
+@@ -2928,18 +2931,19 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags,
+ if (res)
+ ret = (res < 0) ? res : 0;
+
++ /*
++ * If fixup_pi_state_owner() faulted and was unable to handle
++ * the fault, unlock the rt_mutex and return the fault to
++ * userspace.
++ */
++ if (ret && rt_mutex_owner(pi_mutex) == current)
++ rt_mutex_unlock(pi_mutex);
++
+ /* Unqueue and drop the lock. */
+ unqueue_me_pi(&q);
+ }
+
+- /*
+- * If fixup_pi_state_owner() faulted and was unable to handle the
+- * fault, unlock the rt_mutex and return the fault to userspace.
+- */
+- if (ret == -EFAULT) {
+- if (pi_mutex && rt_mutex_owner(pi_mutex) == current)
+- rt_mutex_unlock(pi_mutex);
+- } else if (ret == -EINTR) {
++ if (ret == -EINTR) {
+ /*
+ * We've already been requeued, but cannot restart by calling
+ * futex_lock_pi() directly. We could restart this syscall, but
+diff --git a/kernel/locking/rwsem-spinlock.c b/kernel/locking/rwsem-spinlock.c
+index 1591f6b3539f..2bef4ab94003 100644
+--- a/kernel/locking/rwsem-spinlock.c
++++ b/kernel/locking/rwsem-spinlock.c
+@@ -216,10 +216,8 @@ int __sched __down_write_common(struct rw_semaphore *sem, int state)
+ */
+ if (sem->count == 0)
+ break;
+- if (signal_pending_state(state, current)) {
+- ret = -EINTR;
+- goto out;
+- }
++ if (signal_pending_state(state, current))
++ goto out_nolock;
+ set_task_state(tsk, state);
+ raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
+ schedule();
+@@ -227,12 +225,19 @@ int __sched __down_write_common(struct rw_semaphore *sem, int state)
+ }
+ /* got the lock */
+ sem->count = -1;
+-out:
+ list_del(&waiter.list);
+
+ raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
+
+ return ret;
++
++out_nolock:
++ list_del(&waiter.list);
++ if (!list_empty(&sem->wait_list))
++ __rwsem_do_wake(sem, 1);
++ raw_spin_unlock_irqrestore(&sem->wait_lock, flags);
++
++ return -EINTR;
+ }
+
+ void __sched __down_write(struct rw_semaphore *sem)
+diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c
+index 7cb41aee4c82..8498e3503605 100644
+--- a/net/bridge/br_forward.c
++++ b/net/bridge/br_forward.c
+@@ -186,8 +186,9 @@ void br_flood(struct net_bridge *br, struct sk_buff *skb,
+ /* Do not flood unicast traffic to ports that turn it off */
+ if (pkt_type == BR_PKT_UNICAST && !(p->flags & BR_FLOOD))
+ continue;
++ /* Do not flood if mc off, except for traffic we originate */
+ if (pkt_type == BR_PKT_MULTICAST &&
+- !(p->flags & BR_MCAST_FLOOD))
++ !(p->flags & BR_MCAST_FLOOD) && skb->dev != br->dev)
+ continue;
+
+ /* Do not flood to ports that enable proxy ARP */
+diff --git a/net/bridge/br_input.c b/net/bridge/br_input.c
+index 855b72fbe1da..267b46af407f 100644
+--- a/net/bridge/br_input.c
++++ b/net/bridge/br_input.c
+@@ -29,6 +29,7 @@ EXPORT_SYMBOL(br_should_route_hook);
+ static int
+ br_netif_receive_skb(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
++ br_drop_fake_rtable(skb);
+ return netif_receive_skb(skb);
+ }
+
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index 95087e6e8258..fa87fbd62bb7 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -521,21 +521,6 @@ static unsigned int br_nf_pre_routing(void *priv,
+ }
+
+
+-/* PF_BRIDGE/LOCAL_IN ************************************************/
+-/* The packet is locally destined, which requires a real
+- * dst_entry, so detach the fake one. On the way up, the
+- * packet would pass through PRE_ROUTING again (which already
+- * took place when the packet entered the bridge), but we
+- * register an IPv4 PRE_ROUTING 'sabotage' hook that will
+- * prevent this from happening. */
+-static unsigned int br_nf_local_in(void *priv,
+- struct sk_buff *skb,
+- const struct nf_hook_state *state)
+-{
+- br_drop_fake_rtable(skb);
+- return NF_ACCEPT;
+-}
+-
+ /* PF_BRIDGE/FORWARD *************************************************/
+ static int br_nf_forward_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+@@ -908,12 +893,6 @@ static struct nf_hook_ops br_nf_ops[] __read_mostly = {
+ .priority = NF_BR_PRI_BRNF,
+ },
+ {
+- .hook = br_nf_local_in,
+- .pf = NFPROTO_BRIDGE,
+- .hooknum = NF_BR_LOCAL_IN,
+- .priority = NF_BR_PRI_BRNF,
+- },
+- {
+ .hook = br_nf_forward_ip,
+ .pf = NFPROTO_BRIDGE,
+ .hooknum = NF_BR_FORWARD,
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 29101c98399f..fd6e2dfda45f 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -1696,27 +1696,54 @@ EXPORT_SYMBOL_GPL(net_dec_egress_queue);
+ static struct static_key netstamp_needed __read_mostly;
+ #ifdef HAVE_JUMP_LABEL
+ static atomic_t netstamp_needed_deferred;
++static atomic_t netstamp_wanted;
+ static void netstamp_clear(struct work_struct *work)
+ {
+ int deferred = atomic_xchg(&netstamp_needed_deferred, 0);
++ int wanted;
+
+- while (deferred--)
+- static_key_slow_dec(&netstamp_needed);
++ wanted = atomic_add_return(deferred, &netstamp_wanted);
++ if (wanted > 0)
++ static_key_enable(&netstamp_needed);
++ else
++ static_key_disable(&netstamp_needed);
+ }
+ static DECLARE_WORK(netstamp_work, netstamp_clear);
+ #endif
+
+ void net_enable_timestamp(void)
+ {
++#ifdef HAVE_JUMP_LABEL
++ int wanted;
++
++ while (1) {
++ wanted = atomic_read(&netstamp_wanted);
++ if (wanted <= 0)
++ break;
++ if (atomic_cmpxchg(&netstamp_wanted, wanted, wanted + 1) == wanted)
++ return;
++ }
++ atomic_inc(&netstamp_needed_deferred);
++ schedule_work(&netstamp_work);
++#else
+ static_key_slow_inc(&netstamp_needed);
++#endif
+ }
+ EXPORT_SYMBOL(net_enable_timestamp);
+
+ void net_disable_timestamp(void)
+ {
+ #ifdef HAVE_JUMP_LABEL
+- /* net_disable_timestamp() can be called from non process context */
+- atomic_inc(&netstamp_needed_deferred);
++ int wanted;
++
++ while (1) {
++ wanted = atomic_read(&netstamp_wanted);
++ if (wanted <= 1)
++ break;
++ if (atomic_cmpxchg(&netstamp_wanted, wanted, wanted - 1) == wanted)
++ return;
++ }
++ atomic_dec(&netstamp_needed_deferred);
+ schedule_work(&netstamp_work);
+ #else
+ static_key_slow_dec(&netstamp_needed);
+diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
+index b0c04cf4851d..1004418d937e 100644
+--- a/net/core/net-sysfs.c
++++ b/net/core/net-sysfs.c
+@@ -952,7 +952,7 @@ net_rx_queue_update_kobjects(struct net_device *dev, int old_num, int new_num)
+ while (--i >= new_num) {
+ struct kobject *kobj = &dev->_rx[i].kobj;
+
+- if (!list_empty(&dev_net(dev)->exit_list))
++ if (!atomic_read(&dev_net(dev)->count))
+ kobj->uevent_suppress = 1;
+ if (dev->sysfs_rx_queue_group)
+ sysfs_remove_group(kobj, dev->sysfs_rx_queue_group);
+@@ -1370,7 +1370,7 @@ netdev_queue_update_kobjects(struct net_device *dev, int old_num, int new_num)
+ while (--i >= new_num) {
+ struct netdev_queue *queue = dev->_tx + i;
+
+- if (!list_empty(&dev_net(dev)->exit_list))
++ if (!atomic_read(&dev_net(dev)->count))
+ queue->kobj.uevent_suppress = 1;
+ #ifdef CONFIG_BQL
+ sysfs_remove_group(&queue->kobj, &dql_group);
+@@ -1557,7 +1557,7 @@ void netdev_unregister_kobject(struct net_device *ndev)
+ {
+ struct device *dev = &(ndev->dev);
+
+- if (!list_empty(&dev_net(ndev)->exit_list))
++ if (!atomic_read(&dev_net(ndev)->count))
+ dev_set_uevent_suppress(dev, 1);
+
+ kobject_get(&dev->kobj);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 734c71468b01..aa3a13378c90 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3824,13 +3824,14 @@ void skb_complete_tx_timestamp(struct sk_buff *skb,
+ if (!skb_may_tx_timestamp(sk, false))
+ return;
+
+- /* take a reference to prevent skb_orphan() from freeing the socket */
+- sock_hold(sk);
+-
+- *skb_hwtstamps(skb) = *hwtstamps;
+- __skb_complete_tx_timestamp(skb, sk, SCM_TSTAMP_SND);
+-
+- sock_put(sk);
++ /* Take a reference to prevent skb_orphan() from freeing the socket,
++ * but only if the socket refcount is not zero.
++ */
++ if (likely(atomic_inc_not_zero(&sk->sk_refcnt))) {
++ *skb_hwtstamps(skb) = *hwtstamps;
++ __skb_complete_tx_timestamp(skb, sk, SCM_TSTAMP_SND);
++ sock_put(sk);
++ }
+ }
+ EXPORT_SYMBOL_GPL(skb_complete_tx_timestamp);
+
+@@ -3889,7 +3890,7 @@ void skb_complete_wifi_ack(struct sk_buff *skb, bool acked)
+ {
+ struct sock *sk = skb->sk;
+ struct sock_exterr_skb *serr;
+- int err;
++ int err = 1;
+
+ skb->wifi_acked_valid = 1;
+ skb->wifi_acked = acked;
+@@ -3899,14 +3900,15 @@ void skb_complete_wifi_ack(struct sk_buff *skb, bool acked)
+ serr->ee.ee_errno = ENOMSG;
+ serr->ee.ee_origin = SO_EE_ORIGIN_TXSTATUS;
+
+- /* take a reference to prevent skb_orphan() from freeing the socket */
+- sock_hold(sk);
+-
+- err = sock_queue_err_skb(sk, skb);
++ /* Take a reference to prevent skb_orphan() from freeing the socket,
++ * but only if the socket refcount is not zero.
++ */
++ if (likely(atomic_inc_not_zero(&sk->sk_refcnt))) {
++ err = sock_queue_err_skb(sk, skb);
++ sock_put(sk);
++ }
+ if (err)
+ kfree_skb(skb);
+-
+- sock_put(sk);
+ }
+ EXPORT_SYMBOL_GPL(skb_complete_wifi_ack);
+
+diff --git a/net/dccp/ccids/ccid2.c b/net/dccp/ccids/ccid2.c
+index f053198e730c..5e3a7302f774 100644
+--- a/net/dccp/ccids/ccid2.c
++++ b/net/dccp/ccids/ccid2.c
+@@ -749,6 +749,7 @@ static void ccid2_hc_tx_exit(struct sock *sk)
+ for (i = 0; i < hc->tx_seqbufc; i++)
+ kfree(hc->tx_seqbuf[i]);
+ hc->tx_seqbufc = 0;
++ dccp_ackvec_parsed_cleanup(&hc->tx_av_chunks);
+ }
+
+ static void ccid2_hc_rx_packet_recv(struct sock *sk, struct sk_buff *skb)
+diff --git a/net/dccp/input.c b/net/dccp/input.c
+index 8fedc2d49770..4a05d7876850 100644
+--- a/net/dccp/input.c
++++ b/net/dccp/input.c
+@@ -577,6 +577,7 @@ int dccp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
+ struct dccp_sock *dp = dccp_sk(sk);
+ struct dccp_skb_cb *dcb = DCCP_SKB_CB(skb);
+ const int old_state = sk->sk_state;
++ bool acceptable;
+ int queued = 0;
+
+ /*
+@@ -603,8 +604,13 @@ int dccp_rcv_state_process(struct sock *sk, struct sk_buff *skb,
+ */
+ if (sk->sk_state == DCCP_LISTEN) {
+ if (dh->dccph_type == DCCP_PKT_REQUEST) {
+- if (inet_csk(sk)->icsk_af_ops->conn_request(sk,
+- skb) < 0)
++ /* It is possible that we process SYN packets from backlog,
++ * so we need to make sure to disable BH right there.
++ */
++ local_bh_disable();
++ acceptable = inet_csk(sk)->icsk_af_ops->conn_request(sk, skb) >= 0;
++ local_bh_enable();
++ if (!acceptable)
+ return 1;
+ consume_skb(skb);
+ return 0;
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index d859a5c36e70..b0a1ba968ed5 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -289,7 +289,8 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info)
+
+ switch (type) {
+ case ICMP_REDIRECT:
+- dccp_do_redirect(skb, sk);
++ if (!sock_owned_by_user(sk))
++ dccp_do_redirect(skb, sk);
+ goto out;
+ case ICMP_SOURCE_QUENCH:
+ /* Just silently ignore these. */
+diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c
+index c4e879c02186..2f3e8bbe2cb9 100644
+--- a/net/dccp/ipv6.c
++++ b/net/dccp/ipv6.c
+@@ -122,10 +122,12 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ np = inet6_sk(sk);
+
+ if (type == NDISC_REDIRECT) {
+- struct dst_entry *dst = __sk_dst_check(sk, np->dst_cookie);
++ if (!sock_owned_by_user(sk)) {
++ struct dst_entry *dst = __sk_dst_check(sk, np->dst_cookie);
+
+- if (dst)
+- dst->ops->redirect(dst, sk, skb);
++ if (dst)
++ dst->ops->redirect(dst, sk, skb);
++ }
+ goto out;
+ }
+
+diff --git a/net/dccp/minisocks.c b/net/dccp/minisocks.c
+index 53eddf99e4f6..39e7e2bca8db 100644
+--- a/net/dccp/minisocks.c
++++ b/net/dccp/minisocks.c
+@@ -122,6 +122,7 @@ struct sock *dccp_create_openreq_child(const struct sock *sk,
+ /* It is still raw copy of parent, so invalidate
+ * destructor and make plain sk_free() */
+ newsk->sk_destruct = NULL;
++ bh_unlock_sock(newsk);
+ sk_free(newsk);
+ return NULL;
+ }
+@@ -145,6 +146,13 @@ struct sock *dccp_check_req(struct sock *sk, struct sk_buff *skb,
+ struct dccp_request_sock *dreq = dccp_rsk(req);
+ bool own_req;
+
++ /* TCP/DCCP listeners became lockless.
++ * DCCP stores complex state in its request_sock, so we need
++ * a protection for them, now this code runs without being protected
++ * by the parent (listener) lock.
++ */
++ spin_lock_bh(&dreq->dreq_lock);
++
+ /* Check for retransmitted REQUEST */
+ if (dccp_hdr(skb)->dccph_type == DCCP_PKT_REQUEST) {
+
+@@ -159,7 +167,7 @@ struct sock *dccp_check_req(struct sock *sk, struct sk_buff *skb,
+ inet_rtx_syn_ack(sk, req);
+ }
+ /* Network Duplicate, discard packet */
+- return NULL;
++ goto out;
+ }
+
+ DCCP_SKB_CB(skb)->dccpd_reset_code = DCCP_RESET_CODE_PACKET_ERROR;
+@@ -185,20 +193,20 @@ struct sock *dccp_check_req(struct sock *sk, struct sk_buff *skb,
+
+ child = inet_csk(sk)->icsk_af_ops->syn_recv_sock(sk, skb, req, NULL,
+ req, &own_req);
+- if (!child)
+- goto listen_overflow;
+-
+- return inet_csk_complete_hashdance(sk, child, req, own_req);
++ if (child) {
++ child = inet_csk_complete_hashdance(sk, child, req, own_req);
++ goto out;
++ }
+
+-listen_overflow:
+- dccp_pr_debug("listen_overflow!\n");
+ DCCP_SKB_CB(skb)->dccpd_reset_code = DCCP_RESET_CODE_TOO_BUSY;
+ drop:
+ if (dccp_hdr(skb)->dccph_type != DCCP_PKT_RESET)
+ req->rsk_ops->send_reset(sk, skb);
+
+ inet_csk_reqsk_queue_drop(sk, req);
+- return NULL;
++out:
++ spin_unlock_bh(&dreq->dreq_lock);
++ return child;
+ }
+
+ EXPORT_SYMBOL_GPL(dccp_check_req);
+@@ -249,6 +257,7 @@ int dccp_reqsk_init(struct request_sock *req,
+ {
+ struct dccp_request_sock *dreq = dccp_rsk(req);
+
++ spin_lock_init(&dreq->dreq_lock);
+ inet_rsk(req)->ir_rmt_port = dccp_hdr(skb)->dccph_sport;
+ inet_rsk(req)->ir_num = ntohs(dccp_hdr(skb)->dccph_dport);
+ inet_rsk(req)->acked = 0;
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index f75069883f2b..4391da91789f 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -1470,8 +1470,10 @@ int inet_gro_complete(struct sk_buff *skb, int nhoff)
+ int proto = iph->protocol;
+ int err = -ENOSYS;
+
+- if (skb->encapsulation)
++ if (skb->encapsulation) {
++ skb_set_inner_protocol(skb, cpu_to_be16(ETH_P_IP));
+ skb_set_inner_network_header(skb, nhoff);
++ }
+
+ csum_replace2(&iph->check, iph->tot_len, newlen);
+ iph->tot_len = newlen;
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 7db2ad2e82d3..b39a791f6756 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -319,7 +319,7 @@ static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
+ int ret, no_addr;
+ struct fib_result res;
+ struct flowi4 fl4;
+- struct net *net;
++ struct net *net = dev_net(dev);
+ bool dev_match;
+
+ fl4.flowi4_oif = 0;
+@@ -332,6 +332,7 @@ static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
+ fl4.flowi4_scope = RT_SCOPE_UNIVERSE;
+ fl4.flowi4_tun_key.tun_id = 0;
+ fl4.flowi4_flags = 0;
++ fl4.flowi4_uid = sock_net_uid(net, NULL);
+
+ no_addr = idev->ifa_list == NULL;
+
+@@ -339,13 +340,12 @@ static int __fib_validate_source(struct sk_buff *skb, __be32 src, __be32 dst,
+
+ trace_fib_validate_source(dev, &fl4);
+
+- net = dev_net(dev);
+ if (fib_lookup(net, &fl4, &res, 0))
+ goto last_resort;
+ if (res.type != RTN_UNICAST &&
+ (res.type != RTN_LOCAL || !IN_DEV_ACCEPT_LOCAL(idev)))
+ goto e_inval;
+- if (!rpf && !fib_num_tclassid_users(dev_net(dev)) &&
++ if (!rpf && !fib_num_tclassid_users(net) &&
+ (dev->ifindex != oif || !IN_DEV_TX_REDIRECTS(idev)))
+ goto last_resort;
+ fib_combine_itag(itag, &res);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 709ffe67d1de..8976887dc83e 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -1858,6 +1858,7 @@ static int ip_route_input_slow(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ fl4.flowi4_flags = 0;
+ fl4.daddr = daddr;
+ fl4.saddr = saddr;
++ fl4.flowi4_uid = sock_net_uid(net, NULL);
+ err = fib_lookup(net, &fl4, &res, 0);
+ if (err != 0) {
+ if (!IN_DEV_FORWARD(in_dev))
+@@ -1990,6 +1991,7 @@ int ip_route_input_noref(struct sk_buff *skb, __be32 daddr, __be32 saddr,
+ {
+ int res;
+
++ tos &= IPTOS_RT_MASK;
+ rcu_read_lock();
+
+ /* Multicast recognition logic is moved from route cache to here.
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 41dcbd568cbe..28777a0307c8 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -5916,9 +5916,15 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
+ if (th->syn) {
+ if (th->fin)
+ goto discard;
+- if (icsk->icsk_af_ops->conn_request(sk, skb) < 0)
+- return 1;
++ /* It is possible that we process SYN packets from backlog,
++ * so we need to make sure to disable BH right there.
++ */
++ local_bh_disable();
++ acceptable = icsk->icsk_af_ops->conn_request(sk, skb) >= 0;
++ local_bh_enable();
+
++ if (!acceptable)
++ return 1;
+ consume_skb(skb);
+ return 0;
+ }
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index fe9da4fb96bf..bb629dc2bfb0 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -269,10 +269,13 @@ EXPORT_SYMBOL(tcp_v4_connect);
+ */
+ void tcp_v4_mtu_reduced(struct sock *sk)
+ {
+- struct dst_entry *dst;
+ struct inet_sock *inet = inet_sk(sk);
+- u32 mtu = tcp_sk(sk)->mtu_info;
++ struct dst_entry *dst;
++ u32 mtu;
+
++ if ((1 << sk->sk_state) & (TCPF_LISTEN | TCPF_CLOSE))
++ return;
++ mtu = tcp_sk(sk)->mtu_info;
+ dst = inet_csk_update_pmtu(sk, mtu);
+ if (!dst)
+ return;
+@@ -418,7 +421,8 @@ void tcp_v4_err(struct sk_buff *icmp_skb, u32 info)
+
+ switch (type) {
+ case ICMP_REDIRECT:
+- do_redirect(icmp_skb, sk);
++ if (!sock_owned_by_user(sk))
++ do_redirect(icmp_skb, sk);
+ goto out;
+ case ICMP_SOURCE_QUENCH:
+ /* Just silently ignore these. */
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index 3705075f42c3..45d707569af6 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -249,7 +249,8 @@ void tcp_delack_timer_handler(struct sock *sk)
+
+ sk_mem_reclaim_partial(sk);
+
+- if (sk->sk_state == TCP_CLOSE || !(icsk->icsk_ack.pending & ICSK_ACK_TIMER))
++ if (((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) ||
++ !(icsk->icsk_ack.pending & ICSK_ACK_TIMER))
+ goto out;
+
+ if (time_after(icsk->icsk_ack.timeout, jiffies)) {
+@@ -552,7 +553,8 @@ void tcp_write_timer_handler(struct sock *sk)
+ struct inet_connection_sock *icsk = inet_csk(sk);
+ int event;
+
+- if (sk->sk_state == TCP_CLOSE || !icsk->icsk_pending)
++ if (((1 << sk->sk_state) & (TCPF_CLOSE | TCPF_LISTEN)) ||
++ !icsk->icsk_pending)
+ goto out;
+
+ if (time_after(icsk->icsk_timeout, jiffies)) {
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index ef5485204522..8c88a37392d0 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -908,6 +908,8 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct rt6_info *rt,
+ ins = &rt->dst.rt6_next;
+ iter = *ins;
+ while (iter) {
++ if (iter->rt6i_metric > rt->rt6i_metric)
++ break;
+ if (rt6_qualify_for_ecmp(iter)) {
+ *ins = iter->dst.rt6_next;
+ fib6_purge_rt(iter, fn, info->nl_net);
+diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c
+index fc7b4017ba24..33b04ec2744a 100644
+--- a/net/ipv6/ip6_offload.c
++++ b/net/ipv6/ip6_offload.c
+@@ -294,8 +294,10 @@ static int ipv6_gro_complete(struct sk_buff *skb, int nhoff)
+ struct ipv6hdr *iph = (struct ipv6hdr *)(skb->data + nhoff);
+ int err = -ENOSYS;
+
+- if (skb->encapsulation)
++ if (skb->encapsulation) {
++ skb_set_inner_protocol(skb, cpu_to_be16(ETH_P_IPV6));
+ skb_set_inner_network_header(skb, nhoff);
++ }
+
+ iph->payload_len = htons(skb->len - nhoff - sizeof(*iph));
+
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 7cebee58e55b..d57f4ee5ec29 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -767,13 +767,14 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ * Fragment the datagram.
+ */
+
+- *prevhdr = NEXTHDR_FRAGMENT;
+ troom = rt->dst.dev->needed_tailroom;
+
+ /*
+ * Keep copying data until we run out.
+ */
+ while (left > 0) {
++ u8 *fragnexthdr_offset;
++
+ len = left;
+ /* IF: it doesn't fit, use 'mtu' - the data space left */
+ if (len > mtu)
+@@ -818,6 +819,10 @@ int ip6_fragment(struct net *net, struct sock *sk, struct sk_buff *skb,
+ */
+ skb_copy_from_linear_data(skb, skb_network_header(frag), hlen);
+
++ fragnexthdr_offset = skb_network_header(frag);
++ fragnexthdr_offset += prevhdr - skb_network_header(skb);
++ *fragnexthdr_offset = NEXTHDR_FRAGMENT;
++
+ /*
+ * Build fragment header.
+ */
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index d82042c8d8fd..733c63ef4b8a 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -692,6 +692,10 @@ vti6_parm_to_user(struct ip6_tnl_parm2 *u, const struct __ip6_tnl_parm *p)
+ u->link = p->link;
+ u->i_key = p->i_key;
+ u->o_key = p->o_key;
++ if (u->i_key)
++ u->i_flags |= GRE_KEY;
++ if (u->o_key)
++ u->o_flags |= GRE_KEY;
+ u->proto = p->proto;
+
+ memcpy(u->name, p->name, sizeof(u->name));
+diff --git a/net/ipv6/netfilter/nf_conntrack_reasm.c b/net/ipv6/netfilter/nf_conntrack_reasm.c
+index 9948b5ce52da..986d4ca38832 100644
+--- a/net/ipv6/netfilter/nf_conntrack_reasm.c
++++ b/net/ipv6/netfilter/nf_conntrack_reasm.c
+@@ -589,6 +589,7 @@ int nf_ct_frag6_gather(struct net *net, struct sk_buff *skb, u32 user)
+ hdr = ipv6_hdr(skb);
+ fhdr = (struct frag_hdr *)skb_transport_header(skb);
+
++ skb_orphan(skb);
+ fq = fq_find(net, fhdr->identification, user, &hdr->saddr, &hdr->daddr,
+ skb->dev ? skb->dev->ifindex : 0, ip6_frag_ecn(hdr));
+ if (fq == NULL) {
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 4c60c6f71cd3..cfc232714139 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -382,10 +382,12 @@ static void tcp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ np = inet6_sk(sk);
+
+ if (type == NDISC_REDIRECT) {
+- struct dst_entry *dst = __sk_dst_check(sk, np->dst_cookie);
++ if (!sock_owned_by_user(sk)) {
++ struct dst_entry *dst = __sk_dst_check(sk, np->dst_cookie);
+
+- if (dst)
+- dst->ops->redirect(dst, sk, skb);
++ if (dst)
++ dst->ops->redirect(dst, sk, skb);
++ }
+ goto out;
+ }
+
+diff --git a/net/l2tp/l2tp_ip.c b/net/l2tp/l2tp_ip.c
+index 28c21546d5b6..3ed30153a6f5 100644
+--- a/net/l2tp/l2tp_ip.c
++++ b/net/l2tp/l2tp_ip.c
+@@ -381,7 +381,7 @@ static int l2tp_ip_backlog_recv(struct sock *sk, struct sk_buff *skb)
+ drop:
+ IP_INC_STATS(sock_net(sk), IPSTATS_MIB_INDISCARDS);
+ kfree_skb(skb);
+- return -1;
++ return 0;
+ }
+
+ /* Userspace will call sendmsg() on the tunnel socket to send L2TP
+diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
+index 5b77377e5a15..1309e2c34764 100644
+--- a/net/mpls/af_mpls.c
++++ b/net/mpls/af_mpls.c
+@@ -956,7 +956,8 @@ static void mpls_ifdown(struct net_device *dev, int event)
+ /* fall through */
+ case NETDEV_CHANGE:
+ nh->nh_flags |= RTNH_F_LINKDOWN;
+- ACCESS_ONCE(rt->rt_nhn_alive) = rt->rt_nhn_alive - 1;
++ if (event != NETDEV_UNREGISTER)
++ ACCESS_ONCE(rt->rt_nhn_alive) = rt->rt_nhn_alive - 1;
+ break;
+ }
+ if (event == NETDEV_UNREGISTER)
+@@ -1696,6 +1697,7 @@ static void mpls_net_exit(struct net *net)
+ for (index = 0; index < platform_labels; index++) {
+ struct mpls_route *rt = rtnl_dereference(platform_label[index]);
+ RCU_INIT_POINTER(platform_label[index], NULL);
++ mpls_notify_route(net, index, rt, NULL, NULL);
+ mpls_rt_free(rt);
+ }
+ rtnl_unlock();
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index 54253ea5976e..919d66e083d1 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -367,7 +367,6 @@ static int handle_fragments(struct net *net, struct sw_flow_key *key,
+ } else if (key->eth.type == htons(ETH_P_IPV6)) {
+ enum ip6_defrag_users user = IP6_DEFRAG_CONNTRACK_IN + zone;
+
+- skb_orphan(skb);
+ memset(IP6CB(skb), 0, sizeof(struct inet6_skb_parm));
+ err = nf_ct_frag6_gather(net, skb, user);
+ if (err) {
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 70f5b6a4683c..c59fcc79ba32 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3082,7 +3082,7 @@ static int packet_bind_spkt(struct socket *sock, struct sockaddr *uaddr,
+ int addr_len)
+ {
+ struct sock *sk = sock->sk;
+- char name[15];
++ char name[sizeof(uaddr->sa_data) + 1];
+
+ /*
+ * Check legality
+@@ -3090,7 +3090,11 @@ static int packet_bind_spkt(struct socket *sock, struct sockaddr *uaddr,
+
+ if (addr_len != sizeof(struct sockaddr))
+ return -EINVAL;
+- strlcpy(name, uaddr->sa_data, sizeof(name));
++ /* uaddr->sa_data comes from the userspace, it's not guaranteed to be
++ * zero-terminated.
++ */
++ memcpy(name, uaddr->sa_data, sizeof(uaddr->sa_data));
++ name[sizeof(uaddr->sa_data)] = 0;
+
+ return packet_do_bind(sk, name, 0, pkt_sk(sk)->num);
+ }
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index e10456ef6f7a..9b29b6115384 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -817,10 +817,8 @@ static int tca_action_flush(struct net *net, struct nlattr *nla,
+ goto out_module_put;
+
+ err = ops->walk(net, skb, &dcb, RTM_DELACTION, ops);
+- if (err < 0)
++ if (err <= 0)
+ goto out_module_put;
+- if (err == 0)
+- goto noflush_out;
+
+ nla_nest_end(skb, nest);
+
+@@ -837,7 +835,6 @@ static int tca_action_flush(struct net *net, struct nlattr *nla,
+ out_module_put:
+ module_put(ops->owner);
+ err_out:
+-noflush_out:
+ kfree_skb(skb);
+ return err;
+ }
+diff --git a/net/sched/act_connmark.c b/net/sched/act_connmark.c
+index ab8062909962..f9bb43c25697 100644
+--- a/net/sched/act_connmark.c
++++ b/net/sched/act_connmark.c
+@@ -113,6 +113,9 @@ static int tcf_connmark_init(struct net *net, struct nlattr *nla,
+ if (ret < 0)
+ return ret;
+
++ if (!tb[TCA_CONNMARK_PARMS])
++ return -EINVAL;
++
+ parm = nla_data(tb[TCA_CONNMARK_PARMS]);
+
+ if (!tcf_hash_check(tn, parm->index, a, bind)) {
+diff --git a/net/sched/act_skbmod.c b/net/sched/act_skbmod.c
+index 3b7074e23024..c736627f8f4a 100644
+--- a/net/sched/act_skbmod.c
++++ b/net/sched/act_skbmod.c
+@@ -228,7 +228,6 @@ static int tcf_skbmod_dump(struct sk_buff *skb, struct tc_action *a,
+
+ return skb->len;
+ nla_put_failure:
+- rcu_read_unlock();
+ nlmsg_trim(skb, b);
+ return -1;
+ }
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 616a9428e0c4..4ee4a33e34dc 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -199,6 +199,7 @@ int sctp_copy_local_addr_list(struct net *net, struct sctp_bind_addr *bp,
+ sctp_scope_t scope, gfp_t gfp, int copy_flags)
+ {
+ struct sctp_sockaddr_entry *addr;
++ union sctp_addr laddr;
+ int error = 0;
+
+ rcu_read_lock();
+@@ -220,7 +221,10 @@ int sctp_copy_local_addr_list(struct net *net, struct sctp_bind_addr *bp,
+ !(copy_flags & SCTP_ADDR6_PEERSUPP)))
+ continue;
+
+- if (sctp_bind_addr_state(bp, &addr->a) != -1)
++ laddr = addr->a;
++ /* also works for setting ipv6 address port */
++ laddr.v4.sin_port = htons(bp->port);
++ if (sctp_bind_addr_state(bp, &laddr) != -1)
+ continue;
+
+ error = sctp_add_bind_addr(bp, &addr->a, sizeof(addr->a),
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 1b5d669e3029..d04a8b66098c 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -4734,6 +4734,12 @@ int sctp_do_peeloff(struct sock *sk, sctp_assoc_t id, struct socket **sockp)
+ if (!asoc)
+ return -EINVAL;
+
++ /* If there is a thread waiting on more sndbuf space for
++ * sending on this asoc, it cannot be peeled.
++ */
++ if (waitqueue_active(&asoc->wait))
++ return -EBUSY;
++
+ /* An association cannot be branched off from an already peeled-off
+ * socket, nor is this supported for tcp style sockets.
+ */
+@@ -7426,8 +7432,6 @@ static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
+ */
+ release_sock(sk);
+ current_timeo = schedule_timeout(current_timeo);
+- if (sk != asoc->base.sk)
+- goto do_error;
+ lock_sock(sk);
+
+ *timeo_p = current_timeo;
+diff --git a/net/strparser/strparser.c b/net/strparser/strparser.c
+index 41adf362936d..b5c279b22680 100644
+--- a/net/strparser/strparser.c
++++ b/net/strparser/strparser.c
+@@ -504,6 +504,7 @@ static int __init strp_mod_init(void)
+
+ static void __exit strp_mod_exit(void)
+ {
++ destroy_workqueue(strp_wq);
+ }
+ module_init(strp_mod_init);
+ module_exit(strp_mod_exit);
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-03-23 17:28 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-03-23 17:28 UTC (permalink / raw
To: gentoo-commits
commit: 4d7ca9b58735b713fe8dbf0380a319c4f2f45be8
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 23 17:28:39 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Mar 23 17:28:39 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4d7ca9b5
Upgrade gcc cpu optimization patch. See bug #613570
...able-additional-cpu-optimizations-for-gcc.patch | 225 +++++++++++++++------
1 file changed, 166 insertions(+), 59 deletions(-)
diff --git a/5010_enable-additional-cpu-optimizations-for-gcc.patch b/5010_enable-additional-cpu-optimizations-for-gcc.patch
index d9729b2..76cbd9d 100644
--- a/5010_enable-additional-cpu-optimizations-for-gcc.patch
+++ b/5010_enable-additional-cpu-optimizations-for-gcc.patch
@@ -1,33 +1,51 @@
-WARNING - this version of the patch works with version 4.9+ of gcc and with
-kernel version 3.15.x+ and should NOT be applied when compiling on older
-versions due to name changes of the flags with the 4.9 release of gcc.
+WARNING
+This patch works with gcc versions 4.9+ and with kernel version 3.15+ and should
+NOT be applied when compiling on older versions of gcc due to key name changes
+of the march flags introduced with the version 4.9 release of gcc.[1]
+
Use the older version of this patch hosted on the same github for older
-versions of gcc. For example:
+versions of gcc.
-corei7 --> nehalem
-corei7-avx --> sandybridge
-core-avx-i --> ivybridge
-core-avx2 --> haswell
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features --->
+ Processor family --->
-For more, see: https://gcc.gnu.org/gcc-4.9/changes.html
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* Intel Silvermont low-power processors
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
-It also changes 'atom' to 'bonnell' in accordance with the gcc v4.9 changes.
-Note that upstream is using the deprecated 'match=atom' flags when I believe it
-should use the newer 'march=bonnell' flag for atom processors.
+It also offers to compile passing the 'native' option which, "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[3]
-I have made that change to this patch set as well. See the following kernel
-bug report to see if I'm right: https://bugzilla.kernel.org/show_bug.cgi?id=77461
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'match=atom' flags when I
+believe it should use the newer 'march=bonnell' flag for atom processors.[2]
-This patch will expand the number of microarchitectures to include newer
-processors including: AMD K10-family, AMD Family 10h (Barcelona), AMD Family
-14h (Bobcat), AMD Family 15h (Bulldozer), AMD Family 15h (Piledriver), AMD
-Family 15h (Steamroller), Family 16h (Jaguar), Intel 1st Gen Core i3/i5/i7
-(Nehalem), Intel 1.5 Gen Core i3/i5/i7 (Westmere), Intel 2nd Gen Core i3/i5/i7
-(Sandybridge), Intel 3rd Gen Core i3/i5/i7 (Ivybridge), Intel 4th Gen Core
-i3/i5/i7 (Haswell), Intel 5th Gen Core i3/i5/i7 (Broadwell), and the low power
-Silvermont series of Atom processors (Silvermont). It also offers the compiler
-the 'native' flag.
+It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
+recommendation is to use the 'atom' option instead.
+BENEFITS
Small but real speed increases are measurable using a make endpoint comparing
a generic kernel to one built with one of the respective microarchs.
@@ -38,8 +56,18 @@ REQUIREMENTS
linux version >=3.15
gcc version >=4.9
---- a/arch/x86/include/asm/module.h 2015-08-30 14:34:09.000000000 -0400
-+++ b/arch/x86/include/asm/module.h 2015-11-06 14:18:24.234941036 -0500
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[5]
+
+REFERENCES
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+3. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+4. https://github.com/graysky2/kernel_gcc_patch/issues/15
+5. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
+--- a/arch/x86/include/asm/module.h 2016-12-11 14:17:54.000000000 -0500
++++ b/arch/x86/include/asm/module.h 2017-01-06 20:44:36.602227264 -0500
@@ -15,6 +15,24 @@
#define MODULE_PROC_FAMILY "586MMX "
#elif defined CONFIG_MCORE2
@@ -65,7 +93,7 @@ gcc version >=4.9
#elif defined CONFIG_MATOM
#define MODULE_PROC_FAMILY "ATOM "
#elif defined CONFIG_M686
-@@ -33,6 +51,22 @@
+@@ -33,6 +51,26 @@
#define MODULE_PROC_FAMILY "K7 "
#elif defined CONFIG_MK8
#define MODULE_PROC_FAMILY "K8 "
@@ -80,17 +108,29 @@ gcc version >=4.9
+#elif defined CONFIG_MBULLDOZER
+#define MODULE_PROC_FAMILY "BULLDOZER "
+#elif defined CONFIG_MPILEDRIVER
-+#define MODULE_PROC_FAMILY "STEAMROLLER "
-+#elif defined CONFIG_MSTEAMROLLER
+#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
+#elif defined CONFIG_MJAGUAR
+#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
#elif defined CONFIG_MELAN
#define MODULE_PROC_FAMILY "ELAN "
#elif defined CONFIG_MCRUSOE
---- a/arch/x86/Kconfig.cpu 2015-08-30 14:34:09.000000000 -0400
-+++ b/arch/x86/Kconfig.cpu 2015-11-06 14:20:14.948369244 -0500
-@@ -137,9 +137,8 @@ config MPENTIUM4
+--- a/arch/x86/Kconfig.cpu 2016-12-11 14:17:54.000000000 -0500
++++ b/arch/x86/Kconfig.cpu 2017-01-06 20:46:14.004109597 -0500
+@@ -115,6 +115,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ depends on X86_32
++ select X86_P6_NOP
+ ---help---
+ Select this for Intel Pentium 4 chips. This includes the
+ Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -147,9 +148,8 @@ config MPENTIUM4
-Paxville
-Dempsey
@@ -101,7 +141,7 @@ gcc version >=4.9
depends on X86_32
---help---
Select this for an AMD K6-family processor. Enables use of
-@@ -147,7 +146,7 @@ config MK6
+@@ -157,7 +157,7 @@ config MK6
flags to GCC.
config MK7
@@ -110,7 +150,7 @@ gcc version >=4.9
depends on X86_32
---help---
Select this for an AMD Athlon K7-family processor. Enables use of
-@@ -155,12 +154,69 @@ config MK7
+@@ -165,12 +165,83 @@ config MK7
flags to GCC.
config MK8
@@ -139,54 +179,77 @@ gcc version >=4.9
+config MBARCELONA
+ bool "AMD Barcelona"
+ ---help---
-+ Select this for AMD Barcelona and newer processors.
++ Select this for AMD Family 10h Barcelona processors.
+
+ Enables -march=barcelona
+
+config MBOBCAT
+ bool "AMD Bobcat"
+ ---help---
-+ Select this for AMD Bobcat processors.
++ Select this for AMD Family 14h Bobcat processors.
+
+ Enables -march=btver1
+
++config MJAGUAR
++ bool "AMD Jaguar"
++ ---help---
++ Select this for AMD Family 16h Jaguar processors.
++
++ Enables -march=btver2
++
+config MBULLDOZER
+ bool "AMD Bulldozer"
+ ---help---
-+ Select this for AMD Bulldozer processors.
++ Select this for AMD Family 15h Bulldozer processors.
+
+ Enables -march=bdver1
+
+config MPILEDRIVER
+ bool "AMD Piledriver"
+ ---help---
-+ Select this for AMD Piledriver processors.
++ Select this for AMD Family 15h Piledriver processors.
+
+ Enables -march=bdver2
+
+config MSTEAMROLLER
+ bool "AMD Steamroller"
+ ---help---
-+ Select this for AMD Steamroller processors.
++ Select this for AMD Family 15h Steamroller processors.
+
+ Enables -march=bdver3
+
-+config MJAGUAR
-+ bool "AMD Jaguar"
++config MEXCAVATOR
++ bool "AMD Excavator"
+ ---help---
-+ Select this for AMD Jaguar processors.
++ Select this for AMD Family 15h Excavator processors.
+
-+ Enables -march=btver2
++ Enables -march=bdver4
++
++config MZEN
++ bool "AMD Zen"
++ ---help---
++ Select this for AMD Family 17h Zen processors.
++
++ Enables -march=znver1
+
config MCRUSOE
bool "Crusoe"
depends on X86_32
-@@ -251,8 +307,17 @@ config MPSC
+@@ -252,6 +323,7 @@ config MVIAC7
+
+ config MPSC
+ bool "Intel P4 / older Netburst based Xeon"
++ select X86_P6_NOP
+ depends on X86_64
+ ---help---
+ Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -261,8 +333,19 @@ config MPSC
using the cpu family field
in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+config MATOM
+ bool "Intel Atom"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for the Intel Atom platform. Intel Atom CPUs have an
@@ -197,10 +260,11 @@ gcc version >=4.9
config MCORE2
- bool "Core 2/newer Xeon"
+ bool "Intel Core 2"
++ select X86_P6_NOP
---help---
Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -260,14 +325,71 @@ config MCORE2
+@@ -270,14 +353,79 @@ config MCORE2
family in /proc/cpuinfo. Newer ones have 6 and older ones 15
(not a typo)
@@ -210,6 +274,7 @@ gcc version >=4.9
+
+config MNEHALEM
+ bool "Intel Nehalem"
++ select X86_P6_NOP
---help---
- Select this for the Intel Atom platform. Intel Atom CPUs have an
@@ -222,6 +287,7 @@ gcc version >=4.9
+
+config MWESTMERE
+ bool "Intel Westmere"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for the Intel Westmere formerly Nehalem-C family.
@@ -230,6 +296,7 @@ gcc version >=4.9
+
+config MSILVERMONT
+ bool "Intel Silvermont"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for the Intel Silvermont platform.
@@ -238,6 +305,7 @@ gcc version >=4.9
+
+config MSANDYBRIDGE
+ bool "Intel Sandy Bridge"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for 2nd Gen Core processors in the Sandy Bridge family.
@@ -246,6 +314,7 @@ gcc version >=4.9
+
+config MIVYBRIDGE
+ bool "Intel Ivy Bridge"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for 3rd Gen Core processors in the Ivy Bridge family.
@@ -254,6 +323,7 @@ gcc version >=4.9
+
+config MHASWELL
+ bool "Intel Haswell"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for 4th Gen Core processors in the Haswell family.
@@ -262,6 +332,7 @@ gcc version >=4.9
+
+config MBROADWELL
+ bool "Intel Broadwell"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for 5th Gen Core processors in the Broadwell family.
@@ -270,6 +341,7 @@ gcc version >=4.9
+
+config MSKYLAKE
+ bool "Intel Skylake"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for 6th Gen Core processors in the Skylake family.
@@ -278,7 +350,7 @@ gcc version >=4.9
config GENERIC_CPU
bool "Generic-x86-64"
-@@ -276,6 +398,19 @@ config GENERIC_CPU
+@@ -286,6 +434,19 @@ config GENERIC_CPU
Generic x86-64 CPU.
Run equally well on all x86-64 CPUs.
@@ -298,16 +370,16 @@ gcc version >=4.9
endchoice
config X86_GENERIC
-@@ -300,7 +435,7 @@ config X86_INTERNODE_CACHE_SHIFT
+@@ -310,7 +471,7 @@ config X86_INTERNODE_CACHE_SHIFT
config X86_L1_CACHE_SHIFT
int
default "7" if MPENTIUM4 || MPSC
- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
default "4" if MELAN || M486 || MGEODEGX1
default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
-@@ -331,11 +466,11 @@ config X86_ALIGNMENT_16
+@@ -341,45 +502,46 @@ config X86_ALIGNMENT_16
config X86_INTEL_USERCOPY
def_bool y
@@ -321,7 +393,38 @@ gcc version >=4.9
config X86_USE_3DNOW
def_bool y
-@@ -359,17 +494,17 @@ config X86_P6_NOP
+ depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs). In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+- def_bool y
+- depends on X86_64
+- depends on (MCORE2 || MPENTIUM4 || MPSC)
++ default n
++ bool "Support for P6_NOPs on Intel chips"
++ depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE)
++ ---help---
++ P6_NOPs are a relatively minor optimization that require a family >=
++ 6 processor, except that it is broken on certain VIA chips.
++ Furthermore, AMD chips prefer a totally different sequence of NOPs
++ (which work on all CPUs). In addition, it looks like Virtual PC
++ does not understand them.
++
++ As a result, disallow these if we're not compiling for X86_64 (these
++ NOPs do work on all x86-64 capable chips); the list of processors in
++ the right-hand clause are the cores that benefit from this optimization.
++
++ Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
config X86_TSC
def_bool y
@@ -338,13 +441,13 @@ gcc version >=4.9
config X86_CMOV
def_bool y
- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
-+ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
++ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
config X86_MINIMUM_CPU_FAMILY
int
---- a/arch/x86/Makefile 2015-08-30 14:34:09.000000000 -0400
-+++ b/arch/x86/Makefile 2015-11-06 14:21:05.708983344 -0500
-@@ -94,13 +94,38 @@ else
+--- a/arch/x86/Makefile 2016-12-11 14:17:54.000000000 -0500
++++ b/arch/x86/Makefile 2017-01-06 20:44:36.603227283 -0500
+@@ -104,13 +104,40 @@ else
KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
# FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
@@ -354,10 +457,12 @@ gcc version >=4.9
+ cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
+ cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
+ cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
+ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
+ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
+ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
-+ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++ cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
cflags-$(CONFIG_MCORE2) += \
@@ -386,9 +491,9 @@ gcc version >=4.9
cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
KBUILD_CFLAGS += $(cflags-y)
---- a/arch/x86/Makefile_32.cpu 2015-08-30 14:34:09.000000000 -0400
-+++ b/arch/x86/Makefile_32.cpu 2015-11-06 14:21:43.604429077 -0500
-@@ -23,7 +23,16 @@ cflags-$(CONFIG_MK6) += -march=k6
+--- a/arch/x86/Makefile_32.cpu 2016-12-11 14:17:54.000000000 -0500
++++ b/arch/x86/Makefile_32.cpu 2017-01-06 20:44:36.603227283 -0500
+@@ -23,7 +23,18 @@ cflags-$(CONFIG_MK6) += -march=k6
# Please note, that patches that add -march=athlon-xp and friends are pointless.
# They make zero difference whatsosever to performance at this time.
cflags-$(CONFIG_MK7) += -march=athlon
@@ -398,14 +503,16 @@ gcc version >=4.9
+cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon)
+cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon)
+cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
+cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon)
+cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon)
+cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3,-march=athlon)
-+cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1,-march=athlon)
cflags-$(CONFIG_MCRUSOE) += -march=i686 $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) $(align)-functions=0 $(align)-jumps=0 $(align)-loops=0
cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
-@@ -32,8 +41,16 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
+@@ -32,8 +43,16 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
cflags-$(CONFIG_MVIAC7) += -march=i686
cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-03-26 19:33 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-03-26 19:33 UTC (permalink / raw
To: gentoo-commits
commit: 504b3f376381804b07a3110ce29a6bb6ac11060d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 26 19:33:18 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Mar 26 19:33:18 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=504b3f37
Linux patch 4.10.6
0000_README | 4 +
1005_linux-4.10.6.patch | 943 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 947 insertions(+)
diff --git a/0000_README b/0000_README
index 464eea3..014e9c4 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1004_linux-4.10.5.patch
From: http://www.kernel.org
Desc: Linux 4.10.5
+Patch: 1005_linux-4.10.6.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.6
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1005_linux-4.10.6.patch b/1005_linux-4.10.6.patch
new file mode 100644
index 0000000..3c1b6da
--- /dev/null
+++ b/1005_linux-4.10.6.patch
@@ -0,0 +1,943 @@
+diff --git a/Makefile b/Makefile
+index 48e18096913f..23b6d29cb6da 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
+index 7bd69bd43a01..1d8c24dc04d4 100644
+--- a/arch/parisc/include/asm/cacheflush.h
++++ b/arch/parisc/include/asm/cacheflush.h
+@@ -45,28 +45,9 @@ static inline void flush_kernel_dcache_page(struct page *page)
+
+ #define flush_kernel_dcache_range(start,size) \
+ flush_kernel_dcache_range_asm((start), (start)+(size));
+-/* vmap range flushes and invalidates. Architecturally, we don't need
+- * the invalidate, because the CPU should refuse to speculate once an
+- * area has been flushed, so invalidate is left empty */
+-static inline void flush_kernel_vmap_range(void *vaddr, int size)
+-{
+- unsigned long start = (unsigned long)vaddr;
+-
+- flush_kernel_dcache_range_asm(start, start + size);
+-}
+-static inline void invalidate_kernel_vmap_range(void *vaddr, int size)
+-{
+- unsigned long start = (unsigned long)vaddr;
+- void *cursor = vaddr;
+
+- for ( ; cursor < vaddr + size; cursor += PAGE_SIZE) {
+- struct page *page = vmalloc_to_page(cursor);
+-
+- if (test_and_clear_bit(PG_dcache_dirty, &page->flags))
+- flush_kernel_dcache_page(page);
+- }
+- flush_kernel_dcache_range_asm(start, start + size);
+-}
++void flush_kernel_vmap_range(void *vaddr, int size);
++void invalidate_kernel_vmap_range(void *vaddr, int size);
+
+ #define flush_cache_vmap(start, end) flush_cache_all()
+ #define flush_cache_vunmap(start, end) flush_cache_all()
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index 977f0a4f5ecf..53ec75f8e237 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -633,3 +633,25 @@ flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long
+ __flush_cache_page(vma, vmaddr, PFN_PHYS(pfn));
+ }
+ }
++
++void flush_kernel_vmap_range(void *vaddr, int size)
++{
++ unsigned long start = (unsigned long)vaddr;
++
++ if ((unsigned long)size > parisc_cache_flush_threshold)
++ flush_data_cache();
++ else
++ flush_kernel_dcache_range_asm(start, start + size);
++}
++EXPORT_SYMBOL(flush_kernel_vmap_range);
++
++void invalidate_kernel_vmap_range(void *vaddr, int size)
++{
++ unsigned long start = (unsigned long)vaddr;
++
++ if ((unsigned long)size > parisc_cache_flush_threshold)
++ flush_data_cache();
++ else
++ flush_kernel_dcache_range_asm(start, start + size);
++}
++EXPORT_SYMBOL(invalidate_kernel_vmap_range);
+diff --git a/arch/parisc/kernel/module.c b/arch/parisc/kernel/module.c
+index a0ecdb4abcc8..c66c943d9322 100644
+--- a/arch/parisc/kernel/module.c
++++ b/arch/parisc/kernel/module.c
+@@ -620,6 +620,10 @@ int apply_relocate_add(Elf_Shdr *sechdrs,
+ */
+ *loc = fsel(val, addend);
+ break;
++ case R_PARISC_SECREL32:
++ /* 32-bit section relative address. */
++ *loc = fsel(val, addend);
++ break;
+ case R_PARISC_DPREL21L:
+ /* left 21 bit of relative address */
+ val = lrsel(val - dp, addend);
+@@ -807,6 +811,10 @@ int apply_relocate_add(Elf_Shdr *sechdrs,
+ */
+ *loc = fsel(val, addend);
+ break;
++ case R_PARISC_SECREL32:
++ /* 32-bit section relative address. */
++ *loc = fsel(val, addend);
++ break;
+ case R_PARISC_FPTR64:
+ /* 64-bit function address */
+ if(in_local(me, (void *)(val + addend))) {
+diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
+index ea6603ee8d24..9e2d98ee6f9c 100644
+--- a/arch/parisc/kernel/process.c
++++ b/arch/parisc/kernel/process.c
+@@ -139,6 +139,8 @@ void machine_power_off(void)
+
+ printk(KERN_EMERG "System shut down completed.\n"
+ "Please power this system off now.");
++
++ for (;;);
+ }
+
+ void (*pm_power_off)(void) = machine_power_off;
+diff --git a/arch/powerpc/boot/zImage.lds.S b/arch/powerpc/boot/zImage.lds.S
+index 861e72109df2..f080abfc2f83 100644
+--- a/arch/powerpc/boot/zImage.lds.S
++++ b/arch/powerpc/boot/zImage.lds.S
+@@ -68,6 +68,7 @@ SECTIONS
+ }
+
+ #ifdef CONFIG_PPC64_BOOT_WRAPPER
++ . = ALIGN(256);
+ .got :
+ {
+ __toc_start = .;
+diff --git a/drivers/char/hw_random/omap-rng.c b/drivers/char/hw_random/omap-rng.c
+index 3ad86fdf954e..b1ad12552b56 100644
+--- a/drivers/char/hw_random/omap-rng.c
++++ b/drivers/char/hw_random/omap-rng.c
+@@ -397,9 +397,8 @@ static int of_get_omap_rng_device_details(struct omap_rng_dev *priv,
+ irq, err);
+ return err;
+ }
+- omap_rng_write(priv, RNG_INTMASK_REG, RNG_SHUTDOWN_OFLO_MASK);
+
+- priv->clk = of_clk_get(pdev->dev.of_node, 0);
++ priv->clk = devm_clk_get(&pdev->dev, NULL);
+ if (IS_ERR(priv->clk) && PTR_ERR(priv->clk) == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+ if (!IS_ERR(priv->clk)) {
+@@ -408,6 +407,19 @@ static int of_get_omap_rng_device_details(struct omap_rng_dev *priv,
+ dev_err(&pdev->dev, "unable to enable the clk, "
+ "err = %d\n", err);
+ }
++
++ /*
++ * On OMAP4, enabling the shutdown_oflo interrupt is
++ * done in the interrupt mask register. There is no
++ * such register on EIP76, and it's enabled by the
++ * same bit in the control register
++ */
++ if (priv->pdata->regs[RNG_INTMASK_REG])
++ omap_rng_write(priv, RNG_INTMASK_REG,
++ RNG_SHUTDOWN_OFLO_MASK);
++ else
++ omap_rng_write(priv, RNG_CONTROL_REG,
++ RNG_SHUTDOWN_OFLO_MASK);
+ }
+ return 0;
+ }
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index cc475eff90b3..061b165d632e 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -680,9 +680,11 @@ static ssize_t show_cpuinfo_cur_freq(struct cpufreq_policy *policy,
+ char *buf)
+ {
+ unsigned int cur_freq = __cpufreq_get(policy);
+- if (!cur_freq)
+- return sprintf(buf, "<unknown>");
+- return sprintf(buf, "%u\n", cur_freq);
++
++ if (cur_freq)
++ return sprintf(buf, "%u\n", cur_freq);
++
++ return sprintf(buf, "<unknown>\n");
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/si_dpm.c b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+index 6e150db8f380..9a5ccae06b6c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/si_dpm.c
++++ b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+@@ -3497,6 +3497,12 @@ static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
+ (adev->pdev->device == 0x6667)) {
+ max_sclk = 75000;
+ }
++ } else if (adev->asic_type == CHIP_OLAND) {
++ if ((adev->pdev->device == 0x6604) &&
++ (adev->pdev->subsystem_vendor == 0x1028) &&
++ (adev->pdev->subsystem_device == 0x066F)) {
++ max_sclk = 75000;
++ }
+ }
+ /* Apply dpm quirks */
+ while (p && p->chip_device != 0) {
+diff --git a/drivers/isdn/gigaset/bas-gigaset.c b/drivers/isdn/gigaset/bas-gigaset.c
+index 11e13c56126f..2da3ff650e1d 100644
+--- a/drivers/isdn/gigaset/bas-gigaset.c
++++ b/drivers/isdn/gigaset/bas-gigaset.c
+@@ -2317,6 +2317,9 @@ static int gigaset_probe(struct usb_interface *interface,
+ return -ENODEV;
+ }
+
++ if (hostif->desc.bNumEndpoints < 1)
++ return -ENODEV;
++
+ dev_info(&udev->dev,
+ "%s: Device matched (Vendor: 0x%x, Product: 0x%x)\n",
+ __func__, le16_to_cpu(udev->descriptor.idVendor),
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 1920756828df..87f14080c2cd 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1571,7 +1571,25 @@ static void raid10_make_request(struct mddev *mddev, struct bio *bio)
+ split = bio;
+ }
+
++ /*
++ * If a bio is splitted, the first part of bio will pass
++ * barrier but the bio is queued in current->bio_list (see
++ * generic_make_request). If there is a raise_barrier() called
++ * here, the second part of bio can't pass barrier. But since
++ * the first part bio isn't dispatched to underlaying disks
++ * yet, the barrier is never released, hence raise_barrier will
++ * alays wait. We have a deadlock.
++ * Note, this only happens in read path. For write path, the
++ * first part of bio is dispatched in a schedule() call
++ * (because of blk plug) or offloaded to raid10d.
++ * Quitting from the function immediately can change the bio
++ * order queued in bio_list and avoid the deadlock.
++ */
+ __make_request(mddev, split);
++ if (split != bio && bio_data_dir(bio) == READ) {
++ generic_make_request(bio);
++ break;
++ }
+ } while (split != bio);
+
+ /* In case raid10d snuck in to freeze_array */
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 3c7e106c12a2..6661db2c85f0 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -1364,7 +1364,8 @@ static int set_syndrome_sources(struct page **srcs,
+ (test_bit(R5_Wantdrain, &dev->flags) ||
+ test_bit(R5_InJournal, &dev->flags))) ||
+ (srctype == SYNDROME_SRC_WRITTEN &&
+- dev->written)) {
++ (dev->written ||
++ test_bit(R5_InJournal, &dev->flags)))) {
+ if (test_bit(R5_InJournal, &dev->flags))
+ srcs[slot] = sh->dev[i].orig_page;
+ else
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index f9b6fba689ff..a530f08592cd 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -560,8 +560,12 @@ static void iscsi_complete_task(struct iscsi_task *task, int state)
+ WARN_ON_ONCE(task->state == ISCSI_TASK_FREE);
+ task->state = state;
+
+- if (!list_empty(&task->running))
++ spin_lock_bh(&conn->taskqueuelock);
++ if (!list_empty(&task->running)) {
++ pr_debug_once("%s while task on list", __func__);
+ list_del_init(&task->running);
++ }
++ spin_unlock_bh(&conn->taskqueuelock);
+
+ if (conn->task == task)
+ conn->task = NULL;
+@@ -783,7 +787,9 @@ __iscsi_conn_send_pdu(struct iscsi_conn *conn, struct iscsi_hdr *hdr,
+ if (session->tt->xmit_task(task))
+ goto free_task;
+ } else {
++ spin_lock_bh(&conn->taskqueuelock);
+ list_add_tail(&task->running, &conn->mgmtqueue);
++ spin_unlock_bh(&conn->taskqueuelock);
+ iscsi_conn_queue_work(conn);
+ }
+
+@@ -1474,8 +1480,10 @@ void iscsi_requeue_task(struct iscsi_task *task)
+ * this may be on the requeue list already if the xmit_task callout
+ * is handling the r2ts while we are adding new ones
+ */
++ spin_lock_bh(&conn->taskqueuelock);
+ if (list_empty(&task->running))
+ list_add_tail(&task->running, &conn->requeue);
++ spin_unlock_bh(&conn->taskqueuelock);
+ iscsi_conn_queue_work(conn);
+ }
+ EXPORT_SYMBOL_GPL(iscsi_requeue_task);
+@@ -1512,22 +1520,26 @@ static int iscsi_data_xmit(struct iscsi_conn *conn)
+ * only have one nop-out as a ping from us and targets should not
+ * overflow us with nop-ins
+ */
++ spin_lock_bh(&conn->taskqueuelock);
+ check_mgmt:
+ while (!list_empty(&conn->mgmtqueue)) {
+ conn->task = list_entry(conn->mgmtqueue.next,
+ struct iscsi_task, running);
+ list_del_init(&conn->task->running);
++ spin_unlock_bh(&conn->taskqueuelock);
+ if (iscsi_prep_mgmt_task(conn, conn->task)) {
+ /* regular RX path uses back_lock */
+ spin_lock_bh(&conn->session->back_lock);
+ __iscsi_put_task(conn->task);
+ spin_unlock_bh(&conn->session->back_lock);
+ conn->task = NULL;
++ spin_lock_bh(&conn->taskqueuelock);
+ continue;
+ }
+ rc = iscsi_xmit_task(conn);
+ if (rc)
+ goto done;
++ spin_lock_bh(&conn->taskqueuelock);
+ }
+
+ /* process pending command queue */
+@@ -1535,19 +1547,24 @@ static int iscsi_data_xmit(struct iscsi_conn *conn)
+ conn->task = list_entry(conn->cmdqueue.next, struct iscsi_task,
+ running);
+ list_del_init(&conn->task->running);
++ spin_unlock_bh(&conn->taskqueuelock);
+ if (conn->session->state == ISCSI_STATE_LOGGING_OUT) {
+ fail_scsi_task(conn->task, DID_IMM_RETRY);
++ spin_lock_bh(&conn->taskqueuelock);
+ continue;
+ }
+ rc = iscsi_prep_scsi_cmd_pdu(conn->task);
+ if (rc) {
+ if (rc == -ENOMEM || rc == -EACCES) {
++ spin_lock_bh(&conn->taskqueuelock);
+ list_add_tail(&conn->task->running,
+ &conn->cmdqueue);
+ conn->task = NULL;
++ spin_unlock_bh(&conn->taskqueuelock);
+ goto done;
+ } else
+ fail_scsi_task(conn->task, DID_ABORT);
++ spin_lock_bh(&conn->taskqueuelock);
+ continue;
+ }
+ rc = iscsi_xmit_task(conn);
+@@ -1558,6 +1575,7 @@ static int iscsi_data_xmit(struct iscsi_conn *conn)
+ * we need to check the mgmt queue for nops that need to
+ * be sent to aviod starvation
+ */
++ spin_lock_bh(&conn->taskqueuelock);
+ if (!list_empty(&conn->mgmtqueue))
+ goto check_mgmt;
+ }
+@@ -1577,12 +1595,15 @@ static int iscsi_data_xmit(struct iscsi_conn *conn)
+ conn->task = task;
+ list_del_init(&conn->task->running);
+ conn->task->state = ISCSI_TASK_RUNNING;
++ spin_unlock_bh(&conn->taskqueuelock);
+ rc = iscsi_xmit_task(conn);
+ if (rc)
+ goto done;
++ spin_lock_bh(&conn->taskqueuelock);
+ if (!list_empty(&conn->mgmtqueue))
+ goto check_mgmt;
+ }
++ spin_unlock_bh(&conn->taskqueuelock);
+ spin_unlock_bh(&conn->session->frwd_lock);
+ return -ENODATA;
+
+@@ -1738,7 +1759,9 @@ int iscsi_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *sc)
+ goto prepd_reject;
+ }
+ } else {
++ spin_lock_bh(&conn->taskqueuelock);
+ list_add_tail(&task->running, &conn->cmdqueue);
++ spin_unlock_bh(&conn->taskqueuelock);
+ iscsi_conn_queue_work(conn);
+ }
+
+@@ -2897,6 +2920,7 @@ iscsi_conn_setup(struct iscsi_cls_session *cls_session, int dd_size,
+ INIT_LIST_HEAD(&conn->mgmtqueue);
+ INIT_LIST_HEAD(&conn->cmdqueue);
+ INIT_LIST_HEAD(&conn->requeue);
++ spin_lock_init(&conn->taskqueuelock);
+ INIT_WORK(&conn->xmitwork, iscsi_xmitworker);
+
+ /* allocate login_task used for the login/text sequences */
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 4776fd85514f..10f75ad2b9e8 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -11447,6 +11447,7 @@ static struct pci_driver lpfc_driver = {
+ .id_table = lpfc_id_table,
+ .probe = lpfc_pci_probe_one,
+ .remove = lpfc_pci_remove_one,
++ .shutdown = lpfc_pci_remove_one,
+ .suspend = lpfc_pci_suspend_one,
+ .resume = lpfc_pci_resume_one,
+ .err_handler = &lpfc_err_handler,
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.h b/drivers/scsi/mpt3sas/mpt3sas_base.h
+index dcb33f4fa687..20492e057da3 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.h
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.h
+@@ -1443,9 +1443,6 @@ void mpt3sas_transport_update_links(struct MPT3SAS_ADAPTER *ioc,
+ u64 sas_address, u16 handle, u8 phy_number, u8 link_rate);
+ extern struct sas_function_template mpt3sas_transport_functions;
+ extern struct scsi_transport_template *mpt3sas_transport_template;
+-extern int scsi_internal_device_block(struct scsi_device *sdev);
+-extern int scsi_internal_device_unblock(struct scsi_device *sdev,
+- enum scsi_device_state new_state);
+ /* trigger data externs */
+ void mpt3sas_send_trigger_data_event(struct MPT3SAS_ADAPTER *ioc,
+ struct SL_WH_TRIGGERS_EVENT_DATA_T *event_data);
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 0b5b423b1db0..245fbe2f1696 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -2840,7 +2840,7 @@ _scsih_internal_device_block(struct scsi_device *sdev,
+ sas_device_priv_data->sas_target->handle);
+ sas_device_priv_data->block = 1;
+
+- r = scsi_internal_device_block(sdev);
++ r = scsi_internal_device_block(sdev, false);
+ if (r == -EINVAL)
+ sdev_printk(KERN_WARNING, sdev,
+ "device_block failed with return(%d) for handle(0x%04x)\n",
+@@ -2876,7 +2876,7 @@ _scsih_internal_device_unblock(struct scsi_device *sdev,
+ "performing a block followed by an unblock\n",
+ r, sas_device_priv_data->sas_target->handle);
+ sas_device_priv_data->block = 1;
+- r = scsi_internal_device_block(sdev);
++ r = scsi_internal_device_block(sdev, false);
+ if (r)
+ sdev_printk(KERN_WARNING, sdev, "retried device_block "
+ "failed with return(%d) for handle(0x%04x)\n",
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index e4fda84b959e..26fe9cb3a963 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -5372,16 +5372,22 @@ qlt_send_busy(struct scsi_qla_host *vha,
+
+ static int
+ qlt_chk_qfull_thresh_hold(struct scsi_qla_host *vha,
+- struct atio_from_isp *atio)
++ struct atio_from_isp *atio, bool ha_locked)
+ {
+ struct qla_hw_data *ha = vha->hw;
+ uint16_t status;
++ unsigned long flags;
+
+ if (ha->tgt.num_pend_cmds < Q_FULL_THRESH_HOLD(ha))
+ return 0;
+
++ if (!ha_locked)
++ spin_lock_irqsave(&ha->hardware_lock, flags);
+ status = temp_sam_status;
+ qlt_send_busy(vha, atio, status);
++ if (!ha_locked)
++ spin_unlock_irqrestore(&ha->hardware_lock, flags);
++
+ return 1;
+ }
+
+@@ -5426,7 +5432,7 @@ static void qlt_24xx_atio_pkt(struct scsi_qla_host *vha,
+
+
+ if (likely(atio->u.isp24.fcp_cmnd.task_mgmt_flags == 0)) {
+- rc = qlt_chk_qfull_thresh_hold(vha, atio);
++ rc = qlt_chk_qfull_thresh_hold(vha, atio, ha_locked);
+ if (rc != 0) {
+ tgt->atio_irq_cmd_count--;
+ return;
+@@ -5549,7 +5555,7 @@ static void qlt_response_pkt(struct scsi_qla_host *vha, response_t *pkt)
+ break;
+ }
+
+- rc = qlt_chk_qfull_thresh_hold(vha, atio);
++ rc = qlt_chk_qfull_thresh_hold(vha, atio, true);
+ if (rc != 0) {
+ tgt->irq_cmd_count--;
+ return;
+@@ -6815,6 +6821,8 @@ qlt_handle_abts_recv_work(struct work_struct *work)
+ spin_lock_irqsave(&ha->hardware_lock, flags);
+ qlt_response_pkt_all_vps(vha, (response_t *)&op->atio);
+ spin_unlock_irqrestore(&ha->hardware_lock, flags);
++
++ kfree(op);
+ }
+
+ void
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index f16221b66668..d438430c49a2 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -2880,6 +2880,8 @@ EXPORT_SYMBOL(scsi_target_resume);
+ /**
+ * scsi_internal_device_block - internal function to put a device temporarily into the SDEV_BLOCK state
+ * @sdev: device to block
++ * @wait: Whether or not to wait until ongoing .queuecommand() /
++ * .queue_rq() calls have finished.
+ *
+ * Block request made by scsi lld's to temporarily stop all
+ * scsi commands on the specified device. May sleep.
+@@ -2897,7 +2899,7 @@ EXPORT_SYMBOL(scsi_target_resume);
+ * remove the rport mutex lock and unlock calls from srp_queuecommand().
+ */
+ int
+-scsi_internal_device_block(struct scsi_device *sdev)
++scsi_internal_device_block(struct scsi_device *sdev, bool wait)
+ {
+ struct request_queue *q = sdev->request_queue;
+ unsigned long flags;
+@@ -2917,12 +2919,16 @@ scsi_internal_device_block(struct scsi_device *sdev)
+ * request queue.
+ */
+ if (q->mq_ops) {
+- blk_mq_quiesce_queue(q);
++ if (wait)
++ blk_mq_quiesce_queue(q);
++ else
++ blk_mq_stop_hw_queues(q);
+ } else {
+ spin_lock_irqsave(q->queue_lock, flags);
+ blk_stop_queue(q);
+ spin_unlock_irqrestore(q->queue_lock, flags);
+- scsi_wait_for_queuecommand(sdev);
++ if (wait)
++ scsi_wait_for_queuecommand(sdev);
+ }
+
+ return 0;
+@@ -2984,7 +2990,7 @@ EXPORT_SYMBOL_GPL(scsi_internal_device_unblock);
+ static void
+ device_block(struct scsi_device *sdev, void *data)
+ {
+- scsi_internal_device_block(sdev);
++ scsi_internal_device_block(sdev, true);
+ }
+
+ static int
+diff --git a/drivers/scsi/scsi_priv.h b/drivers/scsi/scsi_priv.h
+index 193636a59adf..9811f82b9d0c 100644
+--- a/drivers/scsi/scsi_priv.h
++++ b/drivers/scsi/scsi_priv.h
+@@ -189,8 +189,5 @@ static inline void scsi_dh_remove_device(struct scsi_device *sdev) { }
+ */
+
+ #define SCSI_DEVICE_BLOCK_MAX_TIMEOUT 600 /* units in seconds */
+-extern int scsi_internal_device_block(struct scsi_device *sdev);
+-extern int scsi_internal_device_unblock(struct scsi_device *sdev,
+- enum scsi_device_state new_state);
+
+ #endif /* _SCSI_PRIV_H */
+diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
+index 04d7aa7390d0..3e677866bb4f 100644
+--- a/drivers/target/target_core_pscsi.c
++++ b/drivers/target/target_core_pscsi.c
+@@ -154,7 +154,7 @@ static void pscsi_tape_read_blocksize(struct se_device *dev,
+
+ buf = kzalloc(12, GFP_KERNEL);
+ if (!buf)
+- return;
++ goto out_free;
+
+ memset(cdb, 0, MAX_COMMAND_SIZE);
+ cdb[0] = MODE_SENSE;
+@@ -169,9 +169,10 @@ static void pscsi_tape_read_blocksize(struct se_device *dev,
+ * If MODE_SENSE still returns zero, set the default value to 1024.
+ */
+ sdev->sector_size = (buf[9] << 16) | (buf[10] << 8) | (buf[11]);
++out_free:
+ if (!sdev->sector_size)
+ sdev->sector_size = 1024;
+-out_free:
++
+ kfree(buf);
+ }
+
+@@ -314,9 +315,10 @@ static int pscsi_add_device_to_list(struct se_device *dev,
+ sd->lun, sd->queue_depth);
+ }
+
+- dev->dev_attrib.hw_block_size = sd->sector_size;
++ dev->dev_attrib.hw_block_size =
++ min_not_zero((int)sd->sector_size, 512);
+ dev->dev_attrib.hw_max_sectors =
+- min_t(int, sd->host->max_sectors, queue_max_hw_sectors(q));
++ min_not_zero(sd->host->max_sectors, queue_max_hw_sectors(q));
+ dev->dev_attrib.hw_queue_depth = sd->queue_depth;
+
+ /*
+@@ -339,8 +341,10 @@ static int pscsi_add_device_to_list(struct se_device *dev,
+ /*
+ * For TYPE_TAPE, attempt to determine blocksize with MODE_SENSE.
+ */
+- if (sd->type == TYPE_TAPE)
++ if (sd->type == TYPE_TAPE) {
+ pscsi_tape_read_blocksize(dev, sd);
++ dev->dev_attrib.hw_block_size = sd->sector_size;
++ }
+ return 0;
+ }
+
+@@ -406,7 +410,7 @@ static int pscsi_create_type_disk(struct se_device *dev, struct scsi_device *sd)
+ /*
+ * Called with struct Scsi_Host->host_lock called.
+ */
+-static int pscsi_create_type_rom(struct se_device *dev, struct scsi_device *sd)
++static int pscsi_create_type_nondisk(struct se_device *dev, struct scsi_device *sd)
+ __releases(sh->host_lock)
+ {
+ struct pscsi_hba_virt *phv = dev->se_hba->hba_ptr;
+@@ -433,28 +437,6 @@ static int pscsi_create_type_rom(struct se_device *dev, struct scsi_device *sd)
+ return 0;
+ }
+
+-/*
+- * Called with struct Scsi_Host->host_lock called.
+- */
+-static int pscsi_create_type_other(struct se_device *dev,
+- struct scsi_device *sd)
+- __releases(sh->host_lock)
+-{
+- struct pscsi_hba_virt *phv = dev->se_hba->hba_ptr;
+- struct Scsi_Host *sh = sd->host;
+- int ret;
+-
+- spin_unlock_irq(sh->host_lock);
+- ret = pscsi_add_device_to_list(dev, sd);
+- if (ret)
+- return ret;
+-
+- pr_debug("CORE_PSCSI[%d] - Added Type: %s for %d:%d:%d:%llu\n",
+- phv->phv_host_id, scsi_device_type(sd->type), sh->host_no,
+- sd->channel, sd->id, sd->lun);
+- return 0;
+-}
+-
+ static int pscsi_configure_device(struct se_device *dev)
+ {
+ struct se_hba *hba = dev->se_hba;
+@@ -542,11 +524,8 @@ static int pscsi_configure_device(struct se_device *dev)
+ case TYPE_DISK:
+ ret = pscsi_create_type_disk(dev, sd);
+ break;
+- case TYPE_ROM:
+- ret = pscsi_create_type_rom(dev, sd);
+- break;
+ default:
+- ret = pscsi_create_type_other(dev, sd);
++ ret = pscsi_create_type_nondisk(dev, sd);
+ break;
+ }
+
+@@ -611,8 +590,7 @@ static void pscsi_free_device(struct se_device *dev)
+ else if (pdv->pdv_lld_host)
+ scsi_host_put(pdv->pdv_lld_host);
+
+- if ((sd->type == TYPE_DISK) || (sd->type == TYPE_ROM))
+- scsi_device_put(sd);
++ scsi_device_put(sd);
+
+ pdv->pdv_sd = NULL;
+ }
+@@ -1065,7 +1043,6 @@ static sector_t pscsi_get_blocks(struct se_device *dev)
+ if (pdv->pdv_bd && pdv->pdv_bd->bd_part)
+ return pdv->pdv_bd->bd_part->nr_sects;
+
+- dump_stack();
+ return 0;
+ }
+
+diff --git a/drivers/target/target_core_sbc.c b/drivers/target/target_core_sbc.c
+index df7b6e95c019..6ec5dded4ae0 100644
+--- a/drivers/target/target_core_sbc.c
++++ b/drivers/target/target_core_sbc.c
+@@ -1105,9 +1105,15 @@ sbc_parse_cdb(struct se_cmd *cmd, struct sbc_ops *ops)
+ return ret;
+ break;
+ case VERIFY:
++ case VERIFY_16:
+ size = 0;
+- sectors = transport_get_sectors_10(cdb);
+- cmd->t_task_lba = transport_lba_32(cdb);
++ if (cdb[0] == VERIFY) {
++ sectors = transport_get_sectors_10(cdb);
++ cmd->t_task_lba = transport_lba_32(cdb);
++ } else {
++ sectors = transport_get_sectors_16(cdb);
++ cmd->t_task_lba = transport_lba_64(cdb);
++ }
+ cmd->execute_cmd = sbc_emulate_noop;
+ goto check_lba;
+ case REZERO_UNIT:
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index a6a3389a07fc..51519c2836b5 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -207,7 +207,7 @@ struct lm_lockname {
+ struct gfs2_sbd *ln_sbd;
+ u64 ln_number;
+ unsigned int ln_type;
+-};
++} __packed __aligned(sizeof(int));
+
+ #define lm_name_equal(name1, name2) \
+ (((name1)->ln_number == (name2)->ln_number) && \
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 37bcd887f742..0a436c4a28ad 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -7541,11 +7541,11 @@ static void nfs4_exchange_id_release(void *data)
+ struct nfs41_exchange_id_data *cdata =
+ (struct nfs41_exchange_id_data *)data;
+
+- nfs_put_client(cdata->args.client);
+ if (cdata->xprt) {
+ xprt_put(cdata->xprt);
+ rpc_clnt_xprt_switch_put(cdata->args.client->cl_rpcclient);
+ }
++ nfs_put_client(cdata->args.client);
+ kfree(cdata->res.impl_id);
+ kfree(cdata->res.server_scope);
+ kfree(cdata->res.server_owner);
+@@ -7652,10 +7652,8 @@ static int _nfs4_proc_exchange_id(struct nfs_client *clp, struct rpc_cred *cred,
+ task_setup_data.callback_data = calldata;
+
+ task = rpc_run_task(&task_setup_data);
+- if (IS_ERR(task)) {
+- status = PTR_ERR(task);
+- goto out_impl_id;
+- }
++ if (IS_ERR(task))
++ return PTR_ERR(task);
+
+ if (!xprt) {
+ status = rpc_wait_for_completion_task(task);
+@@ -7683,6 +7681,7 @@ static int _nfs4_proc_exchange_id(struct nfs_client *clp, struct rpc_cred *cred,
+ kfree(calldata->res.server_owner);
+ out_calldata:
+ kfree(calldata);
++ nfs_put_client(clp);
+ goto out;
+ }
+
+diff --git a/include/linux/log2.h b/include/linux/log2.h
+index ef3d4f67118c..c373295f359f 100644
+--- a/include/linux/log2.h
++++ b/include/linux/log2.h
+@@ -16,12 +16,6 @@
+ #include <linux/bitops.h>
+
+ /*
+- * deal with unrepresentable constant logarithms
+- */
+-extern __attribute__((const, noreturn))
+-int ____ilog2_NaN(void);
+-
+-/*
+ * non-constant log of base 2 calculators
+ * - the arch may override these in asm/bitops.h if they can be implemented
+ * more efficiently than using fls() and fls64()
+@@ -85,7 +79,7 @@ unsigned long __rounddown_pow_of_two(unsigned long n)
+ #define ilog2(n) \
+ ( \
+ __builtin_constant_p(n) ? ( \
+- (n) < 1 ? ____ilog2_NaN() : \
++ (n) < 2 ? 0 : \
+ (n) & (1ULL << 63) ? 63 : \
+ (n) & (1ULL << 62) ? 62 : \
+ (n) & (1ULL << 61) ? 61 : \
+@@ -148,10 +142,7 @@ unsigned long __rounddown_pow_of_two(unsigned long n)
+ (n) & (1ULL << 4) ? 4 : \
+ (n) & (1ULL << 3) ? 3 : \
+ (n) & (1ULL << 2) ? 2 : \
+- (n) & (1ULL << 1) ? 1 : \
+- (n) & (1ULL << 0) ? 0 : \
+- ____ilog2_NaN() \
+- ) : \
++ 1 ) : \
+ (sizeof(n) <= 4) ? \
+ __ilog2_u32(n) : \
+ __ilog2_u64(n) \
+diff --git a/include/scsi/libiscsi.h b/include/scsi/libiscsi.h
+index 4d1c46aac331..c7b1dc713cdd 100644
+--- a/include/scsi/libiscsi.h
++++ b/include/scsi/libiscsi.h
+@@ -196,6 +196,7 @@ struct iscsi_conn {
+ struct iscsi_task *task; /* xmit task in progress */
+
+ /* xmit */
++ spinlock_t taskqueuelock; /* protects the next three lists */
+ struct list_head mgmtqueue; /* mgmt (control) xmit queue */
+ struct list_head cmdqueue; /* data-path cmd queue */
+ struct list_head requeue; /* tasks needing another run */
+diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
+index be41c76ddd48..59ed779bbd9a 100644
+--- a/include/scsi/scsi_device.h
++++ b/include/scsi/scsi_device.h
+@@ -475,6 +475,10 @@ static inline int scsi_device_created(struct scsi_device *sdev)
+ sdev->sdev_state == SDEV_CREATED_BLOCK;
+ }
+
++int scsi_internal_device_block(struct scsi_device *sdev, bool wait);
++int scsi_internal_device_unblock(struct scsi_device *sdev,
++ enum scsi_device_state new_state);
++
+ /* accessor functions for the SCSI parameters */
+ static inline int scsi_device_sync(struct scsi_device *sdev)
+ {
+diff --git a/kernel/cgroup_pids.c b/kernel/cgroup_pids.c
+index 2bd673783f1a..a57242e0d5a6 100644
+--- a/kernel/cgroup_pids.c
++++ b/kernel/cgroup_pids.c
+@@ -229,7 +229,7 @@ static int pids_can_fork(struct task_struct *task)
+ /* Only log the first time events_limit is incremented. */
+ if (atomic64_inc_return(&pids->events_limit) == 1) {
+ pr_info("cgroup: fork rejected by pids controller in ");
+- pr_cont_cgroup_path(task_cgroup(current, pids_cgrp_id));
++ pr_cont_cgroup_path(css->cgroup);
+ pr_cont("\n");
+ }
+ cgroup_file_notify(&pids->events_file);
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index e235bb991bdd..8113654a815a 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -10374,6 +10374,17 @@ void perf_event_free_task(struct task_struct *task)
+ continue;
+
+ mutex_lock(&ctx->mutex);
++ raw_spin_lock_irq(&ctx->lock);
++ /*
++ * Destroy the task <-> ctx relation and mark the context dead.
++ *
++ * This is important because even though the task hasn't been
++ * exposed yet the context has been (through child_list).
++ */
++ RCU_INIT_POINTER(task->perf_event_ctxp[ctxn], NULL);
++ WRITE_ONCE(ctx->task, TASK_TOMBSTONE);
++ put_task_struct(task); /* cannot be last */
++ raw_spin_unlock_irq(&ctx->lock);
+ again:
+ list_for_each_entry_safe(event, tmp, &ctx->pinned_groups,
+ group_entry)
+@@ -10627,7 +10638,7 @@ static int perf_event_init_context(struct task_struct *child, int ctxn)
+ ret = inherit_task_group(event, parent, parent_ctx,
+ child, ctxn, &inherited_all);
+ if (ret)
+- break;
++ goto out_unlock;
+ }
+
+ /*
+@@ -10643,7 +10654,7 @@ static int perf_event_init_context(struct task_struct *child, int ctxn)
+ ret = inherit_task_group(event, parent, parent_ctx,
+ child, ctxn, &inherited_all);
+ if (ret)
+- break;
++ goto out_unlock;
+ }
+
+ raw_spin_lock_irqsave(&parent_ctx->lock, flags);
+@@ -10671,6 +10682,7 @@ static int perf_event_init_context(struct task_struct *child, int ctxn)
+ }
+
+ raw_spin_unlock_irqrestore(&parent_ctx->lock, flags);
++out_unlock:
+ mutex_unlock(&parent_ctx->mutex);
+
+ perf_unpin_context(parent_ctx);
+diff --git a/mm/percpu.c b/mm/percpu.c
+index 0686f566d347..232356a2d914 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -1011,8 +1011,11 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
+ mutex_unlock(&pcpu_alloc_mutex);
+ }
+
+- if (chunk != pcpu_reserved_chunk)
++ if (chunk != pcpu_reserved_chunk) {
++ spin_lock_irqsave(&pcpu_lock, flags);
+ pcpu_nr_empty_pop_pages -= occ_pages;
++ spin_unlock_irqrestore(&pcpu_lock, flags);
++ }
+
+ if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_LOW)
+ pcpu_schedule_balance_work();
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 61d16c39e92c..6e27aab79f76 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -495,7 +495,8 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
+ struct ib_cq *sendcq, *recvcq;
+ int rc;
+
+- max_sge = min(ia->ri_device->attrs.max_sge, RPCRDMA_MAX_SEND_SGES);
++ max_sge = min_t(unsigned int, ia->ri_device->attrs.max_sge,
++ RPCRDMA_MAX_SEND_SGES);
+ if (max_sge < RPCRDMA_MIN_SEND_SGES) {
+ pr_warn("rpcrdma: HCA provides only %d send SGEs\n", max_sge);
+ return -ENOMEM;
+diff --git a/tools/include/linux/log2.h b/tools/include/linux/log2.h
+index 41446668ccce..d5677d39c1e4 100644
+--- a/tools/include/linux/log2.h
++++ b/tools/include/linux/log2.h
+@@ -13,12 +13,6 @@
+ #define _TOOLS_LINUX_LOG2_H
+
+ /*
+- * deal with unrepresentable constant logarithms
+- */
+-extern __attribute__((const, noreturn))
+-int ____ilog2_NaN(void);
+-
+-/*
+ * non-constant log of base 2 calculators
+ * - the arch may override these in asm/bitops.h if they can be implemented
+ * more efficiently than using fls() and fls64()
+@@ -78,7 +72,7 @@ unsigned long __rounddown_pow_of_two(unsigned long n)
+ #define ilog2(n) \
+ ( \
+ __builtin_constant_p(n) ? ( \
+- (n) < 1 ? ____ilog2_NaN() : \
++ (n) < 2 ? 0 : \
+ (n) & (1ULL << 63) ? 63 : \
+ (n) & (1ULL << 62) ? 62 : \
+ (n) & (1ULL << 61) ? 61 : \
+@@ -141,10 +135,7 @@ unsigned long __rounddown_pow_of_two(unsigned long n)
+ (n) & (1ULL << 4) ? 4 : \
+ (n) & (1ULL << 3) ? 3 : \
+ (n) & (1ULL << 2) ? 2 : \
+- (n) & (1ULL << 1) ? 1 : \
+- (n) & (1ULL << 0) ? 0 : \
+- ____ilog2_NaN() \
+- ) : \
++ 1 ) : \
+ (sizeof(n) <= 4) ? \
+ __ilog2_u32(n) : \
+ __ilog2_u64(n) \
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-03-30 18:17 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-03-30 18:17 UTC (permalink / raw
To: gentoo-commits
commit: 81106172f4f12a8d54193305855bddefcb09ae71
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 30 18:17:47 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Mar 30 18:17:47 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=81106172
Linux patch 4.10.7
0000_README | 4 +
1006_linux-4.10.7.patch | 5177 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 5181 insertions(+)
diff --git a/0000_README b/0000_README
index 014e9c4..02aad35 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch: 1005_linux-4.10.6.patch
From: http://www.kernel.org
Desc: Linux 4.10.6
+Patch: 1006_linux-4.10.7.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.7
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1006_linux-4.10.7.patch b/1006_linux-4.10.7.patch
new file mode 100644
index 0000000..beafe8f
--- /dev/null
+++ b/1006_linux-4.10.7.patch
@@ -0,0 +1,5177 @@
+diff --git a/Makefile b/Makefile
+index 23b6d29cb6da..976e8d1a468a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arm/boot/dts/sama5d2.dtsi b/arch/arm/boot/dts/sama5d2.dtsi
+index ceb9783ff7e1..ff7eae833a6d 100644
+--- a/arch/arm/boot/dts/sama5d2.dtsi
++++ b/arch/arm/boot/dts/sama5d2.dtsi
+@@ -266,7 +266,7 @@
+ };
+
+ usb1: ohci@00400000 {
+- compatible = "atmel,sama5d2-ohci", "usb-ohci";
++ compatible = "atmel,at91rm9200-ohci", "usb-ohci";
+ reg = <0x00400000 0x100000>;
+ interrupts = <41 IRQ_TYPE_LEVEL_HIGH 2>;
+ clocks = <&uhphs_clk>, <&uhphs_clk>, <&uhpck>;
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index b4332b727e9c..31dde8b6f2ea 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -289,6 +289,22 @@ static void at91_ddr_standby(void)
+ at91_ramc_write(1, AT91_DDRSDRC_LPR, saved_lpr1);
+ }
+
++static void sama5d3_ddr_standby(void)
++{
++ u32 lpr0;
++ u32 saved_lpr0;
++
++ saved_lpr0 = at91_ramc_read(0, AT91_DDRSDRC_LPR);
++ lpr0 = saved_lpr0 & ~AT91_DDRSDRC_LPCB;
++ lpr0 |= AT91_DDRSDRC_LPCB_POWER_DOWN;
++
++ at91_ramc_write(0, AT91_DDRSDRC_LPR, lpr0);
++
++ cpu_do_idle();
++
++ at91_ramc_write(0, AT91_DDRSDRC_LPR, saved_lpr0);
++}
++
+ /* We manage both DDRAM/SDRAM controllers, we need more than one value to
+ * remember.
+ */
+@@ -323,7 +339,7 @@ static const struct of_device_id const ramc_ids[] __initconst = {
+ { .compatible = "atmel,at91rm9200-sdramc", .data = at91rm9200_standby },
+ { .compatible = "atmel,at91sam9260-sdramc", .data = at91sam9_sdram_standby },
+ { .compatible = "atmel,at91sam9g45-ddramc", .data = at91_ddr_standby },
+- { .compatible = "atmel,sama5d3-ddramc", .data = at91_ddr_standby },
++ { .compatible = "atmel,sama5d3-ddramc", .data = sama5d3_ddr_standby },
+ { /*sentinel*/ }
+ };
+
+diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
+index 769f24ef628c..d7e90d97f5c4 100644
+--- a/arch/arm64/kernel/kaslr.c
++++ b/arch/arm64/kernel/kaslr.c
+@@ -131,11 +131,15 @@ u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
+ /*
+ * The kernel Image should not extend across a 1GB/32MB/512MB alignment
+ * boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
+- * happens, increase the KASLR offset by the size of the kernel image.
++ * happens, increase the KASLR offset by the size of the kernel image
++ * rounded up by SWAPPER_BLOCK_SIZE.
+ */
+ if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
+- (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT))
+- offset = (offset + (u64)(_end - _text)) & mask;
++ (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT)) {
++ u64 kimg_sz = _end - _text;
++ offset = (offset + round_up(kimg_sz, SWAPPER_BLOCK_SIZE))
++ & mask;
++ }
+
+ if (IS_ENABLED(CONFIG_KASAN))
+ /*
+diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
+index 72dac0b58061..b350ac5e3111 100644
+--- a/arch/powerpc/kernel/idle_book3s.S
++++ b/arch/powerpc/kernel/idle_book3s.S
+@@ -439,9 +439,23 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+ _GLOBAL(pnv_wakeup_tb_loss)
+ ld r1,PACAR1(r13)
+ /*
+- * Before entering any idle state, the NVGPRs are saved in the stack
+- * and they are restored before switching to the process context. Hence
+- * until they are restored, they are free to be used.
++ * Before entering any idle state, the NVGPRs are saved in the stack.
++ * If there was a state loss, or PACA_NAPSTATELOST was set, then the
++ * NVGPRs are restored. If we are here, it is likely that state is lost,
++ * but not guaranteed -- neither ISA207 nor ISA300 tests to reach
++ * here are the same as the test to restore NVGPRS:
++ * PACA_THREAD_IDLE_STATE test for ISA207, PSSCR test for ISA300,
++ * and SRR1 test for restoring NVGPRs.
++ *
++ * We are about to clobber NVGPRs now, so set NAPSTATELOST to
++ * guarantee they will always be restored. This might be tightened
++ * with careful reading of specs (particularly for ISA300) but this
++ * is already a slow wakeup path and it's simpler to be safe.
++ */
++ li r0,1
++ stb r0,PACA_NAPSTATELOST(r13)
++
++ /*
+ *
+ * Save SRR1 and LR in NVGPRs as they might be clobbered in
+ * opal_call() (called in CHECK_HMI_INTERRUPT). SRR1 is required
+diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
+index e1fb269c87af..292ab0364a89 100644
+--- a/arch/x86/pci/xen.c
++++ b/arch/x86/pci/xen.c
+@@ -234,23 +234,14 @@ static int xen_hvm_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
+ return 1;
+
+ for_each_pci_msi_entry(msidesc, dev) {
+- __pci_read_msi_msg(msidesc, &msg);
+- pirq = MSI_ADDR_EXT_DEST_ID(msg.address_hi) |
+- ((msg.address_lo >> MSI_ADDR_DEST_ID_SHIFT) & 0xff);
+- if (msg.data != XEN_PIRQ_MSI_DATA ||
+- xen_irq_from_pirq(pirq) < 0) {
+- pirq = xen_allocate_pirq_msi(dev, msidesc);
+- if (pirq < 0) {
+- irq = -ENODEV;
+- goto error;
+- }
+- xen_msi_compose_msg(dev, pirq, &msg);
+- __pci_write_msi_msg(msidesc, &msg);
+- dev_dbg(&dev->dev, "xen: msi bound to pirq=%d\n", pirq);
+- } else {
+- dev_dbg(&dev->dev,
+- "xen: msi already bound to pirq=%d\n", pirq);
++ pirq = xen_allocate_pirq_msi(dev, msidesc);
++ if (pirq < 0) {
++ irq = -ENODEV;
++ goto error;
+ }
++ xen_msi_compose_msg(dev, pirq, &msg);
++ __pci_write_msi_msg(msidesc, &msg);
++ dev_dbg(&dev->dev, "xen: msi bound to pirq=%d\n", pirq);
+ irq = xen_bind_pirq_msi_to_irq(dev, msidesc, pirq,
+ (type == PCI_CAP_ID_MSI) ? nvec : 1,
+ (type == PCI_CAP_ID_MSIX) ?
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index c3400b5444a7..3b57e75098c3 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -678,17 +678,8 @@ static void blk_mq_check_expired(struct blk_mq_hw_ctx *hctx,
+ {
+ struct blk_mq_timeout_data *data = priv;
+
+- if (!test_bit(REQ_ATOM_STARTED, &rq->atomic_flags)) {
+- /*
+- * If a request wasn't started before the queue was
+- * marked dying, kill it here or it'll go unnoticed.
+- */
+- if (unlikely(blk_queue_dying(rq->q))) {
+- rq->errors = -EIO;
+- blk_mq_end_request(rq, rq->errors);
+- }
++ if (!test_bit(REQ_ATOM_STARTED, &rq->atomic_flags))
+ return;
+- }
+
+ if (time_after_eq(jiffies, rq->deadline)) {
+ if (!blk_mark_rq_complete(rq))
+diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
+index d19b09cdf284..54fc90e8339c 100644
+--- a/crypto/algif_hash.c
++++ b/crypto/algif_hash.c
+@@ -245,7 +245,7 @@ static int hash_accept(struct socket *sock, struct socket *newsock, int flags)
+ struct alg_sock *ask = alg_sk(sk);
+ struct hash_ctx *ctx = ask->private;
+ struct ahash_request *req = &ctx->req;
+- char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req))];
++ char state[crypto_ahash_statesize(crypto_ahash_reqtfm(req)) ? : 1];
+ struct sock *sk2;
+ struct alg_sock *ask2;
+ struct hash_ctx *ctx2;
+diff --git a/drivers/auxdisplay/img-ascii-lcd.c b/drivers/auxdisplay/img-ascii-lcd.c
+index bf43b5d2aafc..83f1439e57fd 100644
+--- a/drivers/auxdisplay/img-ascii-lcd.c
++++ b/drivers/auxdisplay/img-ascii-lcd.c
+@@ -218,6 +218,7 @@ static const struct of_device_id img_ascii_lcd_matches[] = {
+ { .compatible = "img,boston-lcd", .data = &boston_config },
+ { .compatible = "mti,malta-lcd", .data = &malta_config },
+ { .compatible = "mti,sead3-lcd", .data = &sead3_config },
++ { /* sentinel */ }
+ };
+
+ /**
+diff --git a/drivers/char/hw_random/amd-rng.c b/drivers/char/hw_random/amd-rng.c
+index 4a99ac756f08..9959c762da2f 100644
+--- a/drivers/char/hw_random/amd-rng.c
++++ b/drivers/char/hw_random/amd-rng.c
+@@ -55,6 +55,7 @@ MODULE_DEVICE_TABLE(pci, pci_tbl);
+ struct amd768_priv {
+ void __iomem *iobase;
+ struct pci_dev *pcidev;
++ u32 pmbase;
+ };
+
+ static int amd_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
+@@ -148,33 +149,58 @@ static int __init mod_init(void)
+ if (pmbase == 0)
+ return -EIO;
+
+- priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
++ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+- if (!devm_request_region(&pdev->dev, pmbase + PMBASE_OFFSET,
+- PMBASE_SIZE, DRV_NAME)) {
++ if (!request_region(pmbase + PMBASE_OFFSET, PMBASE_SIZE, DRV_NAME)) {
+ dev_err(&pdev->dev, DRV_NAME " region 0x%x already in use!\n",
+ pmbase + 0xF0);
+- return -EBUSY;
++ err = -EBUSY;
++ goto out;
+ }
+
+- priv->iobase = devm_ioport_map(&pdev->dev, pmbase + PMBASE_OFFSET,
+- PMBASE_SIZE);
++ priv->iobase = ioport_map(pmbase + PMBASE_OFFSET, PMBASE_SIZE);
+ if (!priv->iobase) {
+ pr_err(DRV_NAME "Cannot map ioport\n");
+- return -ENOMEM;
++ err = -EINVAL;
++ goto err_iomap;
+ }
+
+ amd_rng.priv = (unsigned long)priv;
++ priv->pmbase = pmbase;
+ priv->pcidev = pdev;
+
+ pr_info(DRV_NAME " detected\n");
+- return devm_hwrng_register(&pdev->dev, &amd_rng);
++ err = hwrng_register(&amd_rng);
++ if (err) {
++ pr_err(DRV_NAME " registering failed (%d)\n", err);
++ goto err_hwrng;
++ }
++ return 0;
++
++err_hwrng:
++ ioport_unmap(priv->iobase);
++err_iomap:
++ release_region(pmbase + PMBASE_OFFSET, PMBASE_SIZE);
++out:
++ kfree(priv);
++ return err;
+ }
+
+ static void __exit mod_exit(void)
+ {
++ struct amd768_priv *priv;
++
++ priv = (struct amd768_priv *)amd_rng.priv;
++
++ hwrng_unregister(&amd_rng);
++
++ ioport_unmap(priv->iobase);
++
++ release_region(priv->pmbase + PMBASE_OFFSET, PMBASE_SIZE);
++
++ kfree(priv);
+ }
+
+ module_init(mod_init);
+diff --git a/drivers/char/hw_random/geode-rng.c b/drivers/char/hw_random/geode-rng.c
+index e7a245942029..e1d421a36a13 100644
+--- a/drivers/char/hw_random/geode-rng.c
++++ b/drivers/char/hw_random/geode-rng.c
+@@ -31,6 +31,9 @@
+ #include <linux/module.h>
+ #include <linux/pci.h>
+
++
++#define PFX KBUILD_MODNAME ": "
++
+ #define GEODE_RNG_DATA_REG 0x50
+ #define GEODE_RNG_STATUS_REG 0x54
+
+@@ -82,6 +85,7 @@ static struct hwrng geode_rng = {
+
+ static int __init mod_init(void)
+ {
++ int err = -ENODEV;
+ struct pci_dev *pdev = NULL;
+ const struct pci_device_id *ent;
+ void __iomem *mem;
+@@ -89,27 +93,43 @@ static int __init mod_init(void)
+
+ for_each_pci_dev(pdev) {
+ ent = pci_match_id(pci_tbl, pdev);
+- if (ent) {
+- rng_base = pci_resource_start(pdev, 0);
+- if (rng_base == 0)
+- return -ENODEV;
+-
+- mem = devm_ioremap(&pdev->dev, rng_base, 0x58);
+- if (!mem)
+- return -ENOMEM;
+- geode_rng.priv = (unsigned long)mem;
+-
+- pr_info("AMD Geode RNG detected\n");
+- return devm_hwrng_register(&pdev->dev, &geode_rng);
+- }
++ if (ent)
++ goto found;
+ }
+-
+ /* Device not found. */
+- return -ENODEV;
++ goto out;
++
++found:
++ rng_base = pci_resource_start(pdev, 0);
++ if (rng_base == 0)
++ goto out;
++ err = -ENOMEM;
++ mem = ioremap(rng_base, 0x58);
++ if (!mem)
++ goto out;
++ geode_rng.priv = (unsigned long)mem;
++
++ pr_info("AMD Geode RNG detected\n");
++ err = hwrng_register(&geode_rng);
++ if (err) {
++ pr_err(PFX "RNG registering failed (%d)\n",
++ err);
++ goto err_unmap;
++ }
++out:
++ return err;
++
++err_unmap:
++ iounmap(mem);
++ goto out;
+ }
+
+ static void __exit mod_exit(void)
+ {
++ void __iomem *mem = (void __iomem *)geode_rng.priv;
++
++ hwrng_unregister(&geode_rng);
++ iounmap(mem);
+ }
+
+ module_init(mod_init);
+diff --git a/drivers/char/ppdev.c b/drivers/char/ppdev.c
+index 87885d146dbb..a372fef7654b 100644
+--- a/drivers/char/ppdev.c
++++ b/drivers/char/ppdev.c
+@@ -84,11 +84,14 @@ struct pp_struct {
+ struct ieee1284_info state;
+ struct ieee1284_info saved_state;
+ long default_inactivity;
++ int index;
+ };
+
+ /* should we use PARDEVICE_MAX here? */
+ static struct device *devices[PARPORT_MAX];
+
++static DEFINE_IDA(ida_index);
++
+ /* pp_struct.flags bitfields */
+ #define PP_CLAIMED (1<<0)
+ #define PP_EXCL (1<<1)
+@@ -290,7 +293,7 @@ static int register_device(int minor, struct pp_struct *pp)
+ struct pardevice *pdev = NULL;
+ char *name;
+ struct pardev_cb ppdev_cb;
+- int rc = 0;
++ int rc = 0, index;
+
+ name = kasprintf(GFP_KERNEL, CHRDEV "%x", minor);
+ if (name == NULL)
+@@ -303,20 +306,23 @@ static int register_device(int minor, struct pp_struct *pp)
+ goto err;
+ }
+
++ index = ida_simple_get(&ida_index, 0, 0, GFP_KERNEL);
+ memset(&ppdev_cb, 0, sizeof(ppdev_cb));
+ ppdev_cb.irq_func = pp_irq;
+ ppdev_cb.flags = (pp->flags & PP_EXCL) ? PARPORT_FLAG_EXCL : 0;
+ ppdev_cb.private = pp;
+- pdev = parport_register_dev_model(port, name, &ppdev_cb, minor);
++ pdev = parport_register_dev_model(port, name, &ppdev_cb, index);
+ parport_put_port(port);
+
+ if (!pdev) {
+ pr_warn("%s: failed to register device!\n", name);
+ rc = -ENXIO;
++ ida_simple_remove(&ida_index, index);
+ goto err;
+ }
+
+ pp->pdev = pdev;
++ pp->index = index;
+ dev_dbg(&pdev->dev, "registered pardevice\n");
+ err:
+ kfree(name);
+@@ -755,6 +761,7 @@ static int pp_release(struct inode *inode, struct file *file)
+
+ if (pp->pdev) {
+ parport_unregister_device(pp->pdev);
++ ida_simple_remove(&ida_index, pp->index);
+ pp->pdev = NULL;
+ pr_debug(CHRDEV "%x: unregistered pardevice\n", minor);
+ }
+diff --git a/drivers/clk/sunxi-ng/ccu-sun6i-a31.c b/drivers/clk/sunxi-ng/ccu-sun6i-a31.c
+index fc75a335a7ce..8ca07fe8d3f3 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun6i-a31.c
++++ b/drivers/clk/sunxi-ng/ccu-sun6i-a31.c
+@@ -608,7 +608,7 @@ static SUNXI_CCU_M_WITH_MUX_GATE(hdmi_clk, "hdmi", lcd_ch1_parents,
+ 0x150, 0, 4, 24, 2, BIT(31),
+ CLK_SET_RATE_PARENT);
+
+-static SUNXI_CCU_GATE(hdmi_ddc_clk, "hdmi-ddc", "osc24M", 0x150, BIT(31), 0);
++static SUNXI_CCU_GATE(hdmi_ddc_clk, "hdmi-ddc", "osc24M", 0x150, BIT(30), 0);
+
+ static SUNXI_CCU_GATE(ps_clk, "ps", "lcd1-ch1", 0x140, BIT(31), 0);
+
+diff --git a/drivers/clk/sunxi-ng/ccu_mp.c b/drivers/clk/sunxi-ng/ccu_mp.c
+index ebb1b31568a5..ee7810429c30 100644
+--- a/drivers/clk/sunxi-ng/ccu_mp.c
++++ b/drivers/clk/sunxi-ng/ccu_mp.c
+@@ -85,6 +85,10 @@ static unsigned long ccu_mp_recalc_rate(struct clk_hw *hw,
+ unsigned int m, p;
+ u32 reg;
+
++ /* Adjust parent_rate according to pre-dividers */
++ ccu_mux_helper_adjust_parent_for_prediv(&cmp->common, &cmp->mux,
++ -1, &parent_rate);
++
+ reg = readl(cmp->common.base + cmp->common.reg);
+
+ m = reg >> cmp->m.shift;
+@@ -114,6 +118,10 @@ static int ccu_mp_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned int m, p;
+ u32 reg;
+
++ /* Adjust parent_rate according to pre-dividers */
++ ccu_mux_helper_adjust_parent_for_prediv(&cmp->common, &cmp->mux,
++ -1, &parent_rate);
++
+ max_m = cmp->m.max ?: 1 << cmp->m.width;
+ max_p = cmp->p.max ?: 1 << ((1 << cmp->p.width) - 1);
+
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 061b165d632e..0af2229b09fb 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -1190,6 +1190,9 @@ static int cpufreq_online(unsigned int cpu)
+ for_each_cpu(j, policy->related_cpus)
+ per_cpu(cpufreq_cpu_data, j) = policy;
+ write_unlock_irqrestore(&cpufreq_driver_lock, flags);
++ } else {
++ policy->min = policy->user_policy.min;
++ policy->max = policy->user_policy.max;
+ }
+
+ if (cpufreq_driver->get && !cpufreq_driver->setpolicy) {
+diff --git a/drivers/cpuidle/sysfs.c b/drivers/cpuidle/sysfs.c
+index c5adc8c9ac43..ae948b1da93a 100644
+--- a/drivers/cpuidle/sysfs.c
++++ b/drivers/cpuidle/sysfs.c
+@@ -615,6 +615,18 @@ int cpuidle_add_sysfs(struct cpuidle_device *dev)
+ struct device *cpu_dev = get_cpu_device((unsigned long)dev->cpu);
+ int error;
+
++ /*
++ * Return if cpu_device is not setup for this CPU.
++ *
++ * This could happen if the arch did not set up cpu_device
++ * since this CPU is not in cpu_present mask and the
++ * driver did not send a correct CPU mask during registration.
++ * Without this check we would end up passing bogus
++ * value for &cpu_dev->kobj in kobject_init_and_add()
++ */
++ if (!cpu_dev)
++ return -ENODEV;
++
+ kdev = kzalloc(sizeof(*kdev), GFP_KERNEL);
+ if (!kdev)
+ return -ENOMEM;
+diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c
+index 511ab042b5e7..92d1c6959f08 100644
+--- a/drivers/crypto/ccp/ccp-dev.c
++++ b/drivers/crypto/ccp/ccp-dev.c
+@@ -283,11 +283,14 @@ EXPORT_SYMBOL_GPL(ccp_version);
+ */
+ int ccp_enqueue_cmd(struct ccp_cmd *cmd)
+ {
+- struct ccp_device *ccp = ccp_get_device();
++ struct ccp_device *ccp;
+ unsigned long flags;
+ unsigned int i;
+ int ret;
+
++ /* Some commands might need to be sent to a specific device */
++ ccp = cmd->ccp ? cmd->ccp : ccp_get_device();
++
+ if (!ccp)
+ return -ENODEV;
+
+diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
+index e5d9278f4019..8d0eeb46d4a2 100644
+--- a/drivers/crypto/ccp/ccp-dmaengine.c
++++ b/drivers/crypto/ccp/ccp-dmaengine.c
+@@ -390,6 +390,7 @@ static struct ccp_dma_desc *ccp_create_desc(struct dma_chan *dma_chan,
+ goto err;
+
+ ccp_cmd = &cmd->ccp_cmd;
++ ccp_cmd->ccp = chan->ccp;
+ ccp_pt = &ccp_cmd->u.passthru_nomap;
+ ccp_cmd->flags = CCP_CMD_MAY_BACKLOG;
+ ccp_cmd->flags |= CCP_CMD_PASSTHRU_NO_DMA_MAP;
+diff --git a/drivers/dax/dax.c b/drivers/dax/dax.c
+index ed758b74ddf0..20ab6bf9d1c7 100644
+--- a/drivers/dax/dax.c
++++ b/drivers/dax/dax.c
+@@ -427,6 +427,7 @@ static int __dax_dev_fault(struct dax_dev *dax_dev, struct vm_area_struct *vma,
+ int rc = VM_FAULT_SIGBUS;
+ phys_addr_t phys;
+ pfn_t pfn;
++ unsigned int fault_size = PAGE_SIZE;
+
+ if (check_vma(dax_dev, vma, __func__))
+ return VM_FAULT_SIGBUS;
+@@ -437,6 +438,9 @@ static int __dax_dev_fault(struct dax_dev *dax_dev, struct vm_area_struct *vma,
+ return VM_FAULT_SIGBUS;
+ }
+
++ if (fault_size != dax_region->align)
++ return VM_FAULT_SIGBUS;
++
+ phys = pgoff_to_phys(dax_dev, vmf->pgoff, PAGE_SIZE);
+ if (phys == -1) {
+ dev_dbg(dev, "%s: phys_to_pgoff(%#lx) failed\n", __func__,
+@@ -482,6 +486,7 @@ static int __dax_dev_pmd_fault(struct dax_dev *dax_dev,
+ phys_addr_t phys;
+ pgoff_t pgoff;
+ pfn_t pfn;
++ unsigned int fault_size = PMD_SIZE;
+
+ if (check_vma(dax_dev, vma, __func__))
+ return VM_FAULT_SIGBUS;
+@@ -498,6 +503,16 @@ static int __dax_dev_pmd_fault(struct dax_dev *dax_dev,
+ return VM_FAULT_SIGBUS;
+ }
+
++ if (fault_size < dax_region->align)
++ return VM_FAULT_SIGBUS;
++ else if (fault_size > dax_region->align)
++ return VM_FAULT_FALLBACK;
++
++ /* if we are outside of the VMA */
++ if (pmd_addr < vma->vm_start ||
++ (pmd_addr + PMD_SIZE) > vma->vm_end)
++ return VM_FAULT_SIGBUS;
++
+ pgoff = linear_page_index(vma, pmd_addr);
+ phys = pgoff_to_phys(dax_dev, pgoff, PMD_SIZE);
+ if (phys == -1) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 2534adaebe30..f48da3d6698d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -424,6 +424,7 @@ static const struct pci_device_id pciidlist[] = {
+ {0x1002, 0x6985, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+ {0x1002, 0x6986, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+ {0x1002, 0x6987, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
++ {0x1002, 0x6995, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+ {0x1002, 0x699F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS12},
+
+ {0, 0, 0}
+diff --git a/drivers/gpu/drm/amd/amdgpu/si_dpm.c b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+index 9a5ccae06b6c..054c9c29536d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/si_dpm.c
++++ b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+@@ -3498,9 +3498,13 @@ static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
+ max_sclk = 75000;
+ }
+ } else if (adev->asic_type == CHIP_OLAND) {
+- if ((adev->pdev->device == 0x6604) &&
+- (adev->pdev->subsystem_vendor == 0x1028) &&
+- (adev->pdev->subsystem_device == 0x066F)) {
++ if ((adev->pdev->revision == 0xC7) ||
++ (adev->pdev->revision == 0x80) ||
++ (adev->pdev->revision == 0x81) ||
++ (adev->pdev->revision == 0x83) ||
++ (adev->pdev->revision == 0x87) ||
++ (adev->pdev->device == 0x6604) ||
++ (adev->pdev->device == 0x6605)) {
+ max_sclk = 75000;
+ }
+ }
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 55e7372ea0a0..205251fae539 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1389,6 +1389,15 @@ static int stall_checks(struct drm_crtc *crtc, bool nonblock)
+ return ret < 0 ? ret : 0;
+ }
+
++void release_crtc_commit(struct completion *completion)
++{
++ struct drm_crtc_commit *commit = container_of(completion,
++ typeof(*commit),
++ flip_done);
++
++ drm_crtc_commit_put(commit);
++}
++
+ /**
+ * drm_atomic_helper_setup_commit - setup possibly nonblocking commit
+ * @state: new modeset state to be committed
+@@ -1481,6 +1490,8 @@ int drm_atomic_helper_setup_commit(struct drm_atomic_state *state,
+ }
+
+ crtc_state->event->base.completion = &commit->flip_done;
++ crtc_state->event->base.completion_release = release_crtc_commit;
++ drm_crtc_commit_get(commit);
+ }
+
+ return 0;
+diff --git a/drivers/gpu/drm/drm_fops.c b/drivers/gpu/drm/drm_fops.c
+index 5d96de40b63f..30c20f90520a 100644
+--- a/drivers/gpu/drm/drm_fops.c
++++ b/drivers/gpu/drm/drm_fops.c
+@@ -689,8 +689,8 @@ void drm_send_event_locked(struct drm_device *dev, struct drm_pending_event *e)
+ assert_spin_locked(&dev->event_lock);
+
+ if (e->completion) {
+- /* ->completion might disappear as soon as it signalled. */
+ complete_all(e->completion);
++ e->completion_release(e->completion);
+ e->completion = NULL;
+ }
+
+diff --git a/drivers/hid/hid-sony.c b/drivers/hid/hid-sony.c
+index f405b07d0381..740996f9bdd4 100644
+--- a/drivers/hid/hid-sony.c
++++ b/drivers/hid/hid-sony.c
+@@ -2632,6 +2632,8 @@ static int sony_input_configured(struct hid_device *hdev,
+ sony_leds_remove(sc);
+ if (sc->quirks & SONY_BATTERY_SUPPORT)
+ sony_battery_remove(sc);
++ if (sc->touchpad)
++ sony_unregister_touchpad(sc);
+ sony_cancel_work_sync(sc);
+ kfree(sc->output_report_dmabuf);
+ sony_remove_dev_list(sc);
+diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
+index be34547cdb68..1606e7f08f4b 100644
+--- a/drivers/hv/channel.c
++++ b/drivers/hv/channel.c
+@@ -506,12 +506,15 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
+
+ wait_for_completion(&info->waitevent);
+
+- if (channel->rescind) {
+- ret = -ENODEV;
+- goto post_msg_err;
+- }
+-
+ post_msg_err:
++ /*
++ * If the channel has been rescinded;
++ * we will be awakened by the rescind
++ * handler; set the error code to zero so we don't leak memory.
++ */
++ if (channel->rescind)
++ ret = 0;
++
+ spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
+ list_del(&info->msglistentry);
+ spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 0af7e39006c8..a58cd102af1b 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -779,6 +779,7 @@ static void vmbus_onoffer(struct vmbus_channel_message_header *hdr)
+ /* Allocate the channel object and save this offer. */
+ newchannel = alloc_channel();
+ if (!newchannel) {
++ vmbus_release_relid(offer->child_relid);
+ pr_err("Unable to allocate channel object\n");
+ return;
+ }
+diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c
+index cdd9b3b26195..7563eceeaaea 100644
+--- a/drivers/hwtracing/intel_th/core.c
++++ b/drivers/hwtracing/intel_th/core.c
+@@ -221,8 +221,10 @@ static int intel_th_output_activate(struct intel_th_device *thdev)
+ else
+ intel_th_trace_enable(thdev);
+
+- if (ret)
++ if (ret) {
+ pm_runtime_put(&thdev->dev);
++ module_put(thdrv->driver.owner);
++ }
+
+ return ret;
+ }
+diff --git a/drivers/iio/adc/ti_am335x_adc.c b/drivers/iio/adc/ti_am335x_adc.c
+index ad9dec30bb30..4282ceca3d8f 100644
+--- a/drivers/iio/adc/ti_am335x_adc.c
++++ b/drivers/iio/adc/ti_am335x_adc.c
+@@ -169,7 +169,9 @@ static irqreturn_t tiadc_irq_h(int irq, void *private)
+ {
+ struct iio_dev *indio_dev = private;
+ struct tiadc_device *adc_dev = iio_priv(indio_dev);
+- unsigned int status, config;
++ unsigned int status, config, adc_fsm;
++ unsigned short count = 0;
++
+ status = tiadc_readl(adc_dev, REG_IRQSTATUS);
+
+ /*
+@@ -183,6 +185,15 @@ static irqreturn_t tiadc_irq_h(int irq, void *private)
+ tiadc_writel(adc_dev, REG_CTRL, config);
+ tiadc_writel(adc_dev, REG_IRQSTATUS, IRQENB_FIFO1OVRRUN
+ | IRQENB_FIFO1UNDRFLW | IRQENB_FIFO1THRES);
++
++ /* wait for idle state.
++ * ADC needs to finish the current conversion
++ * before disabling the module
++ */
++ do {
++ adc_fsm = tiadc_readl(adc_dev, REG_ADCFSM);
++ } while (adc_fsm != 0x10 && count++ < 100);
++
+ tiadc_writel(adc_dev, REG_CTRL, (config | CNTRLREG_TSCSSENB));
+ return IRQ_HANDLED;
+ } else if (status & IRQENB_FIFO1THRES) {
+diff --git a/drivers/iio/common/hid-sensors/hid-sensor-trigger.c b/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
+index a3cce3a38300..ecf592d69043 100644
+--- a/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
++++ b/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
+@@ -51,8 +51,6 @@ static int _hid_sensor_power_state(struct hid_sensor_common *st, bool state)
+ st->report_state.report_id,
+ st->report_state.index,
+ HID_USAGE_SENSOR_PROP_REPORTING_STATE_ALL_EVENTS_ENUM);
+-
+- poll_value = hid_sensor_read_poll_value(st);
+ } else {
+ int val;
+
+@@ -89,7 +87,9 @@ static int _hid_sensor_power_state(struct hid_sensor_common *st, bool state)
+ sensor_hub_get_feature(st->hsdev, st->power_state.report_id,
+ st->power_state.index,
+ sizeof(state_val), &state_val);
+- if (state && poll_value)
++ if (state)
++ poll_value = hid_sensor_read_poll_value(st);
++ if (poll_value > 0)
+ msleep_interruptible(poll_value * 2);
+
+ return 0;
+diff --git a/drivers/iio/magnetometer/ak8974.c b/drivers/iio/magnetometer/ak8974.c
+index ce09d771c1fb..75f83424903b 100644
+--- a/drivers/iio/magnetometer/ak8974.c
++++ b/drivers/iio/magnetometer/ak8974.c
+@@ -767,7 +767,7 @@ static int ak8974_probe(struct i2c_client *i2c,
+ return ret;
+ }
+
+-static int __exit ak8974_remove(struct i2c_client *i2c)
++static int ak8974_remove(struct i2c_client *i2c)
+ {
+ struct iio_dev *indio_dev = i2c_get_clientdata(i2c);
+ struct ak8974 *ak8974 = iio_priv(indio_dev);
+@@ -849,7 +849,7 @@ static struct i2c_driver ak8974_driver = {
+ .of_match_table = of_match_ptr(ak8974_of_match),
+ },
+ .probe = ak8974_probe,
+- .remove = __exit_p(ak8974_remove),
++ .remove = ak8974_remove,
+ .id_table = ak8974_id,
+ };
+ module_i2c_driver(ak8974_driver);
+diff --git a/drivers/input/joystick/iforce/iforce-usb.c b/drivers/input/joystick/iforce/iforce-usb.c
+index d96aa27dfcdc..db64adfbe1af 100644
+--- a/drivers/input/joystick/iforce/iforce-usb.c
++++ b/drivers/input/joystick/iforce/iforce-usb.c
+@@ -141,6 +141,9 @@ static int iforce_usb_probe(struct usb_interface *intf,
+
+ interface = intf->cur_altsetting;
+
++ if (interface->desc.bNumEndpoints < 2)
++ return -ENODEV;
++
+ epirq = &interface->endpoint[0].desc;
+ epout = &interface->endpoint[1].desc;
+
+diff --git a/drivers/input/misc/cm109.c b/drivers/input/misc/cm109.c
+index 9cc6d057c302..23c191a2a071 100644
+--- a/drivers/input/misc/cm109.c
++++ b/drivers/input/misc/cm109.c
+@@ -700,6 +700,10 @@ static int cm109_usb_probe(struct usb_interface *intf,
+ int error = -ENOMEM;
+
+ interface = intf->cur_altsetting;
++
++ if (interface->desc.bNumEndpoints < 1)
++ return -ENODEV;
++
+ endpoint = &interface->endpoint[0].desc;
+
+ if (!usb_endpoint_is_int_in(endpoint))
+diff --git a/drivers/input/misc/ims-pcu.c b/drivers/input/misc/ims-pcu.c
+index 9c0ea36913b4..f4e8fbec6a94 100644
+--- a/drivers/input/misc/ims-pcu.c
++++ b/drivers/input/misc/ims-pcu.c
+@@ -1667,6 +1667,10 @@ static int ims_pcu_parse_cdc_data(struct usb_interface *intf, struct ims_pcu *pc
+ return -EINVAL;
+
+ alt = pcu->ctrl_intf->cur_altsetting;
++
++ if (alt->desc.bNumEndpoints < 1)
++ return -ENODEV;
++
+ pcu->ep_ctrl = &alt->endpoint[0].desc;
+ pcu->max_ctrl_size = usb_endpoint_maxp(pcu->ep_ctrl);
+
+diff --git a/drivers/input/misc/yealink.c b/drivers/input/misc/yealink.c
+index 79c964c075f1..6e7ff9561d92 100644
+--- a/drivers/input/misc/yealink.c
++++ b/drivers/input/misc/yealink.c
+@@ -875,6 +875,10 @@ static int usb_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ int ret, pipe, i;
+
+ interface = intf->cur_altsetting;
++
++ if (interface->desc.bNumEndpoints < 1)
++ return -ENODEV;
++
+ endpoint = &interface->endpoint[0].desc;
+ if (!usb_endpoint_is_int_in(endpoint))
+ return -ENODEV;
+diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
+index 328edc8c8786..2a0f9e79bf69 100644
+--- a/drivers/input/mouse/alps.c
++++ b/drivers/input/mouse/alps.c
+@@ -1282,10 +1282,8 @@ static int alps_decode_ss4_v2(struct alps_fields *f,
+ /* handle buttons */
+ if (pkt_id == SS4_PACKET_ID_STICK) {
+ f->ts_left = !!(SS4_BTN_V2(p) & 0x01);
+- if (!(priv->flags & ALPS_BUTTONPAD)) {
+- f->ts_right = !!(SS4_BTN_V2(p) & 0x02);
+- f->ts_middle = !!(SS4_BTN_V2(p) & 0x04);
+- }
++ f->ts_right = !!(SS4_BTN_V2(p) & 0x02);
++ f->ts_middle = !!(SS4_BTN_V2(p) & 0x04);
+ } else {
+ f->left = !!(SS4_BTN_V2(p) & 0x01);
+ if (!(priv->flags & ALPS_BUTTONPAD)) {
+@@ -2462,14 +2460,34 @@ static int alps_update_device_area_ss4_v2(unsigned char otp[][4],
+ int num_y_electrode;
+ int x_pitch, y_pitch, x_phys, y_phys;
+
+- num_x_electrode = SS4_NUMSENSOR_XOFFSET + (otp[1][0] & 0x0F);
+- num_y_electrode = SS4_NUMSENSOR_YOFFSET + ((otp[1][0] >> 4) & 0x0F);
++ if (IS_SS4PLUS_DEV(priv->dev_id)) {
++ num_x_electrode =
++ SS4PLUS_NUMSENSOR_XOFFSET + (otp[0][2] & 0x0F);
++ num_y_electrode =
++ SS4PLUS_NUMSENSOR_YOFFSET + ((otp[0][2] >> 4) & 0x0F);
++
++ priv->x_max =
++ (num_x_electrode - 1) * SS4PLUS_COUNT_PER_ELECTRODE;
++ priv->y_max =
++ (num_y_electrode - 1) * SS4PLUS_COUNT_PER_ELECTRODE;
+
+- priv->x_max = (num_x_electrode - 1) * SS4_COUNT_PER_ELECTRODE;
+- priv->y_max = (num_y_electrode - 1) * SS4_COUNT_PER_ELECTRODE;
++ x_pitch = (otp[0][1] & 0x0F) + SS4PLUS_MIN_PITCH_MM;
++ y_pitch = ((otp[0][1] >> 4) & 0x0F) + SS4PLUS_MIN_PITCH_MM;
+
+- x_pitch = ((otp[1][2] >> 2) & 0x07) + SS4_MIN_PITCH_MM;
+- y_pitch = ((otp[1][2] >> 5) & 0x07) + SS4_MIN_PITCH_MM;
++ } else {
++ num_x_electrode =
++ SS4_NUMSENSOR_XOFFSET + (otp[1][0] & 0x0F);
++ num_y_electrode =
++ SS4_NUMSENSOR_YOFFSET + ((otp[1][0] >> 4) & 0x0F);
++
++ priv->x_max =
++ (num_x_electrode - 1) * SS4_COUNT_PER_ELECTRODE;
++ priv->y_max =
++ (num_y_electrode - 1) * SS4_COUNT_PER_ELECTRODE;
++
++ x_pitch = ((otp[1][2] >> 2) & 0x07) + SS4_MIN_PITCH_MM;
++ y_pitch = ((otp[1][2] >> 5) & 0x07) + SS4_MIN_PITCH_MM;
++ }
+
+ x_phys = x_pitch * (num_x_electrode - 1); /* In 0.1 mm units */
+ y_phys = y_pitch * (num_y_electrode - 1); /* In 0.1 mm units */
+@@ -2485,7 +2503,10 @@ static int alps_update_btn_info_ss4_v2(unsigned char otp[][4],
+ {
+ unsigned char is_btnless;
+
+- is_btnless = (otp[1][1] >> 3) & 0x01;
++ if (IS_SS4PLUS_DEV(priv->dev_id))
++ is_btnless = (otp[1][0] >> 1) & 0x01;
++ else
++ is_btnless = (otp[1][1] >> 3) & 0x01;
+
+ if (is_btnless)
+ priv->flags |= ALPS_BUTTONPAD;
+@@ -2493,6 +2514,21 @@ static int alps_update_btn_info_ss4_v2(unsigned char otp[][4],
+ return 0;
+ }
+
++static int alps_update_dual_info_ss4_v2(unsigned char otp[][4],
++ struct alps_data *priv)
++{
++ bool is_dual = false;
++
++ if (IS_SS4PLUS_DEV(priv->dev_id))
++ is_dual = (otp[0][0] >> 4) & 0x01;
++
++ if (is_dual)
++ priv->flags |= ALPS_DUALPOINT |
++ ALPS_DUALPOINT_WITH_PRESSURE;
++
++ return 0;
++}
++
+ static int alps_set_defaults_ss4_v2(struct psmouse *psmouse,
+ struct alps_data *priv)
+ {
+@@ -2508,6 +2544,8 @@ static int alps_set_defaults_ss4_v2(struct psmouse *psmouse,
+
+ alps_update_btn_info_ss4_v2(otp, priv);
+
++ alps_update_dual_info_ss4_v2(otp, priv);
++
+ return 0;
+ }
+
+@@ -2753,10 +2791,6 @@ static int alps_set_protocol(struct psmouse *psmouse,
+ if (alps_set_defaults_ss4_v2(psmouse, priv))
+ return -EIO;
+
+- if (priv->fw_ver[1] == 0x1)
+- priv->flags |= ALPS_DUALPOINT |
+- ALPS_DUALPOINT_WITH_PRESSURE;
+-
+ break;
+ }
+
+@@ -2827,10 +2861,7 @@ static int alps_identify(struct psmouse *psmouse, struct alps_data *priv)
+ ec[2] >= 0x90 && ec[2] <= 0x9d) {
+ protocol = &alps_v3_protocol_data;
+ } else if (e7[0] == 0x73 && e7[1] == 0x03 &&
+- e7[2] == 0x14 && ec[1] == 0x02) {
+- protocol = &alps_v8_protocol_data;
+- } else if (e7[0] == 0x73 && e7[1] == 0x03 &&
+- e7[2] == 0x28 && ec[1] == 0x01) {
++ (e7[2] == 0x14 || e7[2] == 0x28)) {
+ protocol = &alps_v8_protocol_data;
+ } else {
+ psmouse_dbg(psmouse,
+@@ -2840,7 +2871,8 @@ static int alps_identify(struct psmouse *psmouse, struct alps_data *priv)
+ }
+
+ if (priv) {
+- /* Save the Firmware version */
++ /* Save Device ID and Firmware version */
++ memcpy(priv->dev_id, e7, 3);
+ memcpy(priv->fw_ver, ec, 3);
+ error = alps_set_protocol(psmouse, priv, protocol);
+ if (error)
+diff --git a/drivers/input/mouse/alps.h b/drivers/input/mouse/alps.h
+index 6d279aa27cb9..4334f2805d93 100644
+--- a/drivers/input/mouse/alps.h
++++ b/drivers/input/mouse/alps.h
+@@ -54,6 +54,16 @@ enum SS4_PACKET_ID {
+
+ #define SS4_MASK_NORMAL_BUTTONS 0x07
+
++#define SS4PLUS_COUNT_PER_ELECTRODE 128
++#define SS4PLUS_NUMSENSOR_XOFFSET 16
++#define SS4PLUS_NUMSENSOR_YOFFSET 5
++#define SS4PLUS_MIN_PITCH_MM 37
++
++#define IS_SS4PLUS_DEV(_b) (((_b[0]) == 0x73) && \
++ ((_b[1]) == 0x03) && \
++ ((_b[2]) == 0x28) \
++ )
++
+ #define SS4_IS_IDLE_V2(_b) (((_b[0]) == 0x18) && \
+ ((_b[1]) == 0x10) && \
+ ((_b[2]) == 0x00) && \
+@@ -283,6 +293,7 @@ struct alps_data {
+ int addr_command;
+ u16 proto_version;
+ u8 byte0, mask0;
++ u8 dev_id[3];
+ u8 fw_ver[3];
+ int flags;
+ int x_max;
+diff --git a/drivers/input/mouse/elan_i2c_core.c b/drivers/input/mouse/elan_i2c_core.c
+index 1e1d0ad406f2..a26f44c28d82 100644
+--- a/drivers/input/mouse/elan_i2c_core.c
++++ b/drivers/input/mouse/elan_i2c_core.c
+@@ -218,17 +218,19 @@ static int elan_query_product(struct elan_tp_data *data)
+
+ static int elan_check_ASUS_special_fw(struct elan_tp_data *data)
+ {
+- if (data->ic_type != 0x0E)
+- return false;
+-
+- switch (data->product_id) {
+- case 0x05 ... 0x07:
+- case 0x09:
+- case 0x13:
++ if (data->ic_type == 0x0E) {
++ switch (data->product_id) {
++ case 0x05 ... 0x07:
++ case 0x09:
++ case 0x13:
++ return true;
++ }
++ } else if (data->ic_type == 0x08 && data->product_id == 0x26) {
++ /* ASUS EeeBook X205TA */
+ return true;
+- default:
+- return false;
+ }
++
++ return false;
+ }
+
+ static int __elan_initialize(struct elan_tp_data *data)
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index a7618776705a..27ae2a0ef1b9 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -120,6 +120,13 @@ static const struct dmi_system_id __initconst i8042_dmi_noloop_table[] = {
+ },
+ },
+ {
++ /* Dell Embedded Box PC 3000 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Embedded Box PC 3000"),
++ },
++ },
++ {
+ /* OQO Model 01 */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "OQO"),
+diff --git a/drivers/input/tablet/hanwang.c b/drivers/input/tablet/hanwang.c
+index cd852059b99e..df4bea96d7ed 100644
+--- a/drivers/input/tablet/hanwang.c
++++ b/drivers/input/tablet/hanwang.c
+@@ -340,6 +340,9 @@ static int hanwang_probe(struct usb_interface *intf, const struct usb_device_id
+ int error;
+ int i;
+
++ if (intf->cur_altsetting->desc.bNumEndpoints < 1)
++ return -ENODEV;
++
+ hanwang = kzalloc(sizeof(struct hanwang), GFP_KERNEL);
+ input_dev = input_allocate_device();
+ if (!hanwang || !input_dev) {
+diff --git a/drivers/input/tablet/kbtab.c b/drivers/input/tablet/kbtab.c
+index e850d7e8afbc..4d9d64908b59 100644
+--- a/drivers/input/tablet/kbtab.c
++++ b/drivers/input/tablet/kbtab.c
+@@ -122,6 +122,9 @@ static int kbtab_probe(struct usb_interface *intf, const struct usb_device_id *i
+ struct input_dev *input_dev;
+ int error = -ENOMEM;
+
++ if (intf->cur_altsetting->desc.bNumEndpoints < 1)
++ return -ENODEV;
++
+ kbtab = kzalloc(sizeof(struct kbtab), GFP_KERNEL);
+ input_dev = input_allocate_device();
+ if (!kbtab || !input_dev)
+diff --git a/drivers/input/touchscreen/sur40.c b/drivers/input/touchscreen/sur40.c
+index aefb6e11f88a..4c0eecae065c 100644
+--- a/drivers/input/touchscreen/sur40.c
++++ b/drivers/input/touchscreen/sur40.c
+@@ -527,6 +527,9 @@ static int sur40_probe(struct usb_interface *interface,
+ if (iface_desc->desc.bInterfaceClass != 0xFF)
+ return -ENODEV;
+
++ if (iface_desc->desc.bNumEndpoints < 5)
++ return -ENODEV;
++
+ /* Use endpoint #4 (0x86). */
+ endpoint = &iface_desc->endpoint[4].desc;
+ if (endpoint->bEndpointAddress != TOUCH_ENDPOINT)
+diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
+index 57ba0d3091ea..318cc878d0ca 100644
+--- a/drivers/iommu/exynos-iommu.c
++++ b/drivers/iommu/exynos-iommu.c
+@@ -509,7 +509,13 @@ static void sysmmu_tlb_invalidate_flpdcache(struct sysmmu_drvdata *data,
+ spin_lock_irqsave(&data->lock, flags);
+ if (data->active && data->version >= MAKE_MMU_VER(3, 3)) {
+ clk_enable(data->clk_master);
+- __sysmmu_tlb_invalidate_entry(data, iova, 1);
++ if (sysmmu_block(data)) {
++ if (data->version >= MAKE_MMU_VER(5, 0))
++ __sysmmu_tlb_invalidate(data);
++ else
++ __sysmmu_tlb_invalidate_entry(data, iova, 1);
++ sysmmu_unblock(data);
++ }
+ clk_disable(data->clk_master);
+ }
+ spin_unlock_irqrestore(&data->lock, flags);
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 23eead3cf77c..dfeb3808bc62 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -915,7 +915,7 @@ static struct intel_iommu *device_to_iommu(struct device *dev, u8 *bus, u8 *devf
+ * which we used for the IOMMU lookup. Strictly speaking
+ * we could do this for all PCI devices; we only need to
+ * get the BDF# from the scope table for ACPI matches. */
+- if (pdev->is_virtfn)
++ if (pdev && pdev->is_virtfn)
+ goto got_pdev;
+
+ *bus = drhd->devices[i].bus;
+diff --git a/drivers/media/usb/dvb-usb/dvb-usb-firmware.c b/drivers/media/usb/dvb-usb/dvb-usb-firmware.c
+index ab9866024ec7..04033efe7ad5 100644
+--- a/drivers/media/usb/dvb-usb/dvb-usb-firmware.c
++++ b/drivers/media/usb/dvb-usb/dvb-usb-firmware.c
+@@ -36,16 +36,18 @@ static int usb_cypress_writemem(struct usb_device *udev,u16 addr,u8 *data, u8 le
+ int usb_cypress_load_firmware(struct usb_device *udev, const struct firmware *fw, int type)
+ {
+ struct hexline *hx;
+- u8 reset;
+- int ret,pos=0;
++ u8 *buf;
++ int ret, pos = 0;
++ u16 cpu_cs_register = cypress[type].cpu_cs_register;
+
+- hx = kmalloc(sizeof(*hx), GFP_KERNEL);
+- if (!hx)
++ buf = kmalloc(sizeof(*hx), GFP_KERNEL);
++ if (!buf)
+ return -ENOMEM;
++ hx = (struct hexline *)buf;
+
+ /* stop the CPU */
+- reset = 1;
+- if ((ret = usb_cypress_writemem(udev,cypress[type].cpu_cs_register,&reset,1)) != 1)
++ buf[0] = 1;
++ if (usb_cypress_writemem(udev, cpu_cs_register, buf, 1) != 1)
+ err("could not stop the USB controller CPU.");
+
+ while ((ret = dvb_usb_get_hexline(fw, hx, &pos)) > 0) {
+@@ -61,21 +63,21 @@ int usb_cypress_load_firmware(struct usb_device *udev, const struct firmware *fw
+ }
+ if (ret < 0) {
+ err("firmware download failed at %d with %d",pos,ret);
+- kfree(hx);
++ kfree(buf);
+ return ret;
+ }
+
+ if (ret == 0) {
+ /* restart the CPU */
+- reset = 0;
+- if (ret || usb_cypress_writemem(udev,cypress[type].cpu_cs_register,&reset,1) != 1) {
++ buf[0] = 0;
++ if (usb_cypress_writemem(udev, cpu_cs_register, buf, 1) != 1) {
+ err("could not restart the USB controller CPU.");
+ ret = -EINVAL;
+ }
+ } else
+ ret = -EIO;
+
+- kfree(hx);
++ kfree(buf);
+
+ return ret;
+ }
+diff --git a/drivers/misc/mei/bus-fixup.c b/drivers/misc/mei/bus-fixup.c
+index 3600c9993a98..29f2daed37e0 100644
+--- a/drivers/misc/mei/bus-fixup.c
++++ b/drivers/misc/mei/bus-fixup.c
+@@ -112,11 +112,9 @@ struct mkhi_msg {
+
+ static int mei_osver(struct mei_cl_device *cldev)
+ {
+- int ret;
+ const size_t size = sizeof(struct mkhi_msg_hdr) +
+ sizeof(struct mkhi_fwcaps) +
+ sizeof(struct mei_os_ver);
+- size_t length = 8;
+ char buf[size];
+ struct mkhi_msg *req;
+ struct mkhi_fwcaps *fwcaps;
+@@ -137,15 +135,7 @@ static int mei_osver(struct mei_cl_device *cldev)
+ os_ver = (struct mei_os_ver *)fwcaps->data;
+ os_ver->os_type = OSTYPE_LINUX;
+
+- ret = __mei_cl_send(cldev->cl, buf, size, mode);
+- if (ret < 0)
+- return ret;
+-
+- ret = __mei_cl_recv(cldev->cl, buf, length, 0);
+- if (ret < 0)
+- return ret;
+-
+- return 0;
++ return __mei_cl_send(cldev->cl, buf, size, mode);
+ }
+
+ static void mei_mkhi_fix(struct mei_cl_device *cldev)
+@@ -160,7 +150,7 @@ static void mei_mkhi_fix(struct mei_cl_device *cldev)
+ return;
+
+ ret = mei_osver(cldev);
+- if (ret)
++ if (ret < 0)
+ dev_err(&cldev->dev, "OS version command failed %d\n", ret);
+
+ mei_cldev_disable(cldev);
+diff --git a/drivers/misc/mei/init.c b/drivers/misc/mei/init.c
+index 41e5760a6886..a13abc8fa1bc 100644
+--- a/drivers/misc/mei/init.c
++++ b/drivers/misc/mei/init.c
+@@ -124,8 +124,6 @@ int mei_reset(struct mei_device *dev)
+
+ mei_clear_interrupts(dev);
+
+- mei_synchronize_irq(dev);
+-
+ /* we're already in reset, cancel the init timer
+ * if the reset was called due the hbm protocol error
+ * we need to call it before hw start
+@@ -304,6 +302,9 @@ static void mei_reset_work(struct work_struct *work)
+ container_of(work, struct mei_device, reset_work);
+ int ret;
+
++ mei_clear_interrupts(dev);
++ mei_synchronize_irq(dev);
++
+ mutex_lock(&dev->device_lock);
+
+ ret = mei_reset(dev);
+@@ -328,6 +329,9 @@ void mei_stop(struct mei_device *dev)
+
+ mei_cancel_work(dev);
+
++ mei_clear_interrupts(dev);
++ mei_synchronize_irq(dev);
++
+ mutex_lock(&dev->device_lock);
+
+ dev->dev_state = MEI_DEV_POWER_DOWN;
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index cb1698f268f1..7f4927a05be0 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1791,6 +1791,7 @@ int mmc_blk_issue_rq(struct mmc_queue *mq, struct request *req)
+ ret = mmc_blk_issue_flush(mq, req);
+ } else {
+ ret = mmc_blk_issue_rw_rq(mq, req);
++ card->host->context_info.is_waiting_last_req = false;
+ }
+
+ out:
+diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
+index 0fccca075e29..4ede0904602c 100644
+--- a/drivers/mmc/core/mmc.c
++++ b/drivers/mmc/core/mmc.c
+@@ -1706,7 +1706,7 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
+ err = mmc_select_hs400(card);
+ if (err)
+ goto free_card;
+- } else {
++ } else if (!mmc_card_hs400es(card)) {
+ /* Select the desired bus width optionally */
+ err = mmc_select_bus_width(card);
+ if (err > 0 && mmc_card_hs(card)) {
+diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c
+index 410a55b1c25f..1cfd7f900339 100644
+--- a/drivers/mmc/host/sdhci-of-arasan.c
++++ b/drivers/mmc/host/sdhci-of-arasan.c
+@@ -28,13 +28,9 @@
+ #include "sdhci-pltfm.h"
+ #include <linux/of.h>
+
+-#define SDHCI_ARASAN_CLK_CTRL_OFFSET 0x2c
+ #define SDHCI_ARASAN_VENDOR_REGISTER 0x78
+
+ #define VENDOR_ENHANCED_STROBE BIT(0)
+-#define CLK_CTRL_TIMEOUT_SHIFT 16
+-#define CLK_CTRL_TIMEOUT_MASK (0xf << CLK_CTRL_TIMEOUT_SHIFT)
+-#define CLK_CTRL_TIMEOUT_MIN_EXP 13
+
+ #define PHY_CLK_TOO_SLOW_HZ 400000
+
+@@ -163,15 +159,15 @@ static int sdhci_arasan_syscon_write(struct sdhci_host *host,
+
+ static unsigned int sdhci_arasan_get_timeout_clock(struct sdhci_host *host)
+ {
+- u32 div;
+ unsigned long freq;
+ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+
+- div = readl(host->ioaddr + SDHCI_ARASAN_CLK_CTRL_OFFSET);
+- div = (div & CLK_CTRL_TIMEOUT_MASK) >> CLK_CTRL_TIMEOUT_SHIFT;
++ /* SDHCI timeout clock is in kHz */
++ freq = DIV_ROUND_UP(clk_get_rate(pltfm_host->clk), 1000);
+
+- freq = clk_get_rate(pltfm_host->clk);
+- freq /= 1 << (CLK_CTRL_TIMEOUT_MIN_EXP + div);
++ /* or in MHz */
++ if (host->caps & SDHCI_TIMEOUT_CLK_UNIT)
++ freq = DIV_ROUND_UP(freq, 1000);
+
+ return freq;
+ }
+diff --git a/drivers/mmc/host/sdhci-of-at91.c b/drivers/mmc/host/sdhci-of-at91.c
+index 2f9ad213377a..7fd964256faa 100644
+--- a/drivers/mmc/host/sdhci-of-at91.c
++++ b/drivers/mmc/host/sdhci-of-at91.c
+@@ -85,11 +85,30 @@ static void sdhci_at91_set_clock(struct sdhci_host *host, unsigned int clock)
+ sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
+ }
+
++/*
++ * In this specific implementation of the SDHCI controller, the power register
++ * needs to have a valid voltage set even when the power supply is managed by
++ * an external regulator.
++ */
++static void sdhci_at91_set_power(struct sdhci_host *host, unsigned char mode,
++ unsigned short vdd)
++{
++ if (!IS_ERR(host->mmc->supply.vmmc)) {
++ struct mmc_host *mmc = host->mmc;
++
++ spin_unlock_irq(&host->lock);
++ mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd);
++ spin_lock_irq(&host->lock);
++ }
++ sdhci_set_power_noreg(host, mode, vdd);
++}
++
+ static const struct sdhci_ops sdhci_at91_sama5d2_ops = {
+ .set_clock = sdhci_at91_set_clock,
+ .set_bus_width = sdhci_set_bus_width,
+ .reset = sdhci_reset,
+ .set_uhs_signaling = sdhci_set_uhs_signaling,
++ .set_power = sdhci_at91_set_power,
+ };
+
+ static const struct sdhci_pltfm_data soc_data_sama5d2 = {
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index 1a72d32af07f..e977048a8428 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -452,6 +452,8 @@ static void sdhci_intel_set_power(struct sdhci_host *host, unsigned char mode,
+ if (mode == MMC_POWER_OFF)
+ return;
+
++ spin_unlock_irq(&host->lock);
++
+ /*
+ * Bus power might not enable after D3 -> D0 transition due to the
+ * present state not yet having propagated. Retry for up to 2ms.
+@@ -464,6 +466,8 @@ static void sdhci_intel_set_power(struct sdhci_host *host, unsigned char mode,
+ reg |= SDHCI_POWER_ON;
+ sdhci_writeb(host, reg, SDHCI_POWER_CONTROL);
+ }
++
++ spin_lock_irq(&host->lock);
+ }
+
+ static const struct sdhci_ops sdhci_intel_byt_ops = {
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 0def99590d16..d0819d18ad08 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1362,7 +1362,9 @@ void sdhci_enable_clk(struct sdhci_host *host, u16 clk)
+ return;
+ }
+ timeout--;
+- mdelay(1);
++ spin_unlock_irq(&host->lock);
++ usleep_range(900, 1100);
++ spin_lock_irq(&host->lock);
+ }
+
+ clk |= SDHCI_CLOCK_CARD_EN;
+diff --git a/drivers/mmc/host/ushc.c b/drivers/mmc/host/ushc.c
+index d2c386f09d69..1d843357422e 100644
+--- a/drivers/mmc/host/ushc.c
++++ b/drivers/mmc/host/ushc.c
+@@ -426,6 +426,9 @@ static int ushc_probe(struct usb_interface *intf, const struct usb_device_id *id
+ struct ushc_data *ushc;
+ int ret;
+
++ if (intf->cur_altsetting->desc.bNumEndpoints < 1)
++ return -ENODEV;
++
+ mmc = mmc_alloc_host(sizeof(struct ushc_data), &intf->dev);
+ if (mmc == NULL)
+ return -ENOMEM;
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+index 8a280e7d66bd..127adbeefb10 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h
+@@ -984,29 +984,29 @@
+ #define XP_ECC_CNT1_DESC_DED_WIDTH 8
+ #define XP_ECC_CNT1_DESC_SEC_INDEX 0
+ #define XP_ECC_CNT1_DESC_SEC_WIDTH 8
+-#define XP_ECC_IER_DESC_DED_INDEX 0
++#define XP_ECC_IER_DESC_DED_INDEX 5
+ #define XP_ECC_IER_DESC_DED_WIDTH 1
+-#define XP_ECC_IER_DESC_SEC_INDEX 1
++#define XP_ECC_IER_DESC_SEC_INDEX 4
+ #define XP_ECC_IER_DESC_SEC_WIDTH 1
+-#define XP_ECC_IER_RX_DED_INDEX 2
++#define XP_ECC_IER_RX_DED_INDEX 3
+ #define XP_ECC_IER_RX_DED_WIDTH 1
+-#define XP_ECC_IER_RX_SEC_INDEX 3
++#define XP_ECC_IER_RX_SEC_INDEX 2
+ #define XP_ECC_IER_RX_SEC_WIDTH 1
+-#define XP_ECC_IER_TX_DED_INDEX 4
++#define XP_ECC_IER_TX_DED_INDEX 1
+ #define XP_ECC_IER_TX_DED_WIDTH 1
+-#define XP_ECC_IER_TX_SEC_INDEX 5
++#define XP_ECC_IER_TX_SEC_INDEX 0
+ #define XP_ECC_IER_TX_SEC_WIDTH 1
+-#define XP_ECC_ISR_DESC_DED_INDEX 0
++#define XP_ECC_ISR_DESC_DED_INDEX 5
+ #define XP_ECC_ISR_DESC_DED_WIDTH 1
+-#define XP_ECC_ISR_DESC_SEC_INDEX 1
++#define XP_ECC_ISR_DESC_SEC_INDEX 4
+ #define XP_ECC_ISR_DESC_SEC_WIDTH 1
+-#define XP_ECC_ISR_RX_DED_INDEX 2
++#define XP_ECC_ISR_RX_DED_INDEX 3
+ #define XP_ECC_ISR_RX_DED_WIDTH 1
+-#define XP_ECC_ISR_RX_SEC_INDEX 3
++#define XP_ECC_ISR_RX_SEC_INDEX 2
+ #define XP_ECC_ISR_RX_SEC_WIDTH 1
+-#define XP_ECC_ISR_TX_DED_INDEX 4
++#define XP_ECC_ISR_TX_DED_INDEX 1
+ #define XP_ECC_ISR_TX_DED_WIDTH 1
+-#define XP_ECC_ISR_TX_SEC_INDEX 5
++#define XP_ECC_ISR_TX_SEC_INDEX 0
+ #define XP_ECC_ISR_TX_SEC_WIDTH 1
+ #define XP_I2C_MUTEX_BUSY_INDEX 31
+ #define XP_I2C_MUTEX_BUSY_WIDTH 1
+@@ -1148,8 +1148,8 @@
+ #define RX_PACKET_ATTRIBUTES_CSUM_DONE_WIDTH 1
+ #define RX_PACKET_ATTRIBUTES_VLAN_CTAG_INDEX 1
+ #define RX_PACKET_ATTRIBUTES_VLAN_CTAG_WIDTH 1
+-#define RX_PACKET_ATTRIBUTES_INCOMPLETE_INDEX 2
+-#define RX_PACKET_ATTRIBUTES_INCOMPLETE_WIDTH 1
++#define RX_PACKET_ATTRIBUTES_LAST_INDEX 2
++#define RX_PACKET_ATTRIBUTES_LAST_WIDTH 1
+ #define RX_PACKET_ATTRIBUTES_CONTEXT_NEXT_INDEX 3
+ #define RX_PACKET_ATTRIBUTES_CONTEXT_NEXT_WIDTH 1
+ #define RX_PACKET_ATTRIBUTES_CONTEXT_INDEX 4
+@@ -1158,6 +1158,8 @@
+ #define RX_PACKET_ATTRIBUTES_RX_TSTAMP_WIDTH 1
+ #define RX_PACKET_ATTRIBUTES_RSS_HASH_INDEX 6
+ #define RX_PACKET_ATTRIBUTES_RSS_HASH_WIDTH 1
++#define RX_PACKET_ATTRIBUTES_FIRST_INDEX 7
++#define RX_PACKET_ATTRIBUTES_FIRST_WIDTH 1
+
+ #define RX_NORMAL_DESC0_OVT_INDEX 0
+ #define RX_NORMAL_DESC0_OVT_WIDTH 16
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+index 937f37a5dcb2..24a687ce4388 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-dev.c
+@@ -1896,10 +1896,15 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
+
+ /* Get the header length */
+ if (XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, FD)) {
++ XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
++ FIRST, 1);
+ rdata->rx.hdr_len = XGMAC_GET_BITS_LE(rdesc->desc2,
+ RX_NORMAL_DESC2, HL);
+ if (rdata->rx.hdr_len)
+ pdata->ext_stats.rx_split_header_packets++;
++ } else {
++ XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
++ FIRST, 0);
+ }
+
+ /* Get the RSS hash */
+@@ -1922,19 +1927,16 @@ static int xgbe_dev_read(struct xgbe_channel *channel)
+ }
+ }
+
+- /* Get the packet length */
+- rdata->rx.len = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, PL);
+-
+- if (!XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, LD)) {
+- /* Not all the data has been transferred for this packet */
+- XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+- INCOMPLETE, 1);
++ /* Not all the data has been transferred for this packet */
++ if (!XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, LD))
+ return 0;
+- }
+
+ /* This is the last of the data for this packet */
+ XGMAC_SET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
+- INCOMPLETE, 0);
++ LAST, 1);
++
++ /* Get the packet length */
++ rdata->rx.len = XGMAC_GET_BITS_LE(rdesc->desc3, RX_NORMAL_DESC3, PL);
+
+ /* Set checksum done indicator as appropriate */
+ if (netdev->features & NETIF_F_RXCSUM)
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+index 742e5d1b5da4..36fd1a158251 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+@@ -1973,13 +1973,12 @@ static struct sk_buff *xgbe_create_skb(struct xgbe_prv_data *pdata,
+ {
+ struct sk_buff *skb;
+ u8 *packet;
+- unsigned int copy_len;
+
+ skb = napi_alloc_skb(napi, rdata->rx.hdr.dma_len);
+ if (!skb)
+ return NULL;
+
+- /* Start with the header buffer which may contain just the header
++ /* Pull in the header buffer which may contain just the header
+ * or the header plus data
+ */
+ dma_sync_single_range_for_cpu(pdata->dev, rdata->rx.hdr.dma_base,
+@@ -1988,30 +1987,49 @@ static struct sk_buff *xgbe_create_skb(struct xgbe_prv_data *pdata,
+
+ packet = page_address(rdata->rx.hdr.pa.pages) +
+ rdata->rx.hdr.pa.pages_offset;
+- copy_len = (rdata->rx.hdr_len) ? rdata->rx.hdr_len : len;
+- copy_len = min(rdata->rx.hdr.dma_len, copy_len);
+- skb_copy_to_linear_data(skb, packet, copy_len);
+- skb_put(skb, copy_len);
+-
+- len -= copy_len;
+- if (len) {
+- /* Add the remaining data as a frag */
+- dma_sync_single_range_for_cpu(pdata->dev,
+- rdata->rx.buf.dma_base,
+- rdata->rx.buf.dma_off,
+- rdata->rx.buf.dma_len,
+- DMA_FROM_DEVICE);
+-
+- skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+- rdata->rx.buf.pa.pages,
+- rdata->rx.buf.pa.pages_offset,
+- len, rdata->rx.buf.dma_len);
+- rdata->rx.buf.pa.pages = NULL;
+- }
++ skb_copy_to_linear_data(skb, packet, len);
++ skb_put(skb, len);
+
+ return skb;
+ }
+
++static unsigned int xgbe_rx_buf1_len(struct xgbe_ring_data *rdata,
++ struct xgbe_packet_data *packet)
++{
++ /* Always zero if not the first descriptor */
++ if (!XGMAC_GET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES, FIRST))
++ return 0;
++
++ /* First descriptor with split header, return header length */
++ if (rdata->rx.hdr_len)
++ return rdata->rx.hdr_len;
++
++ /* First descriptor but not the last descriptor and no split header,
++ * so the full buffer was used
++ */
++ if (!XGMAC_GET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES, LAST))
++ return rdata->rx.hdr.dma_len;
++
++ /* First descriptor and last descriptor and no split header, so
++ * calculate how much of the buffer was used
++ */
++ return min_t(unsigned int, rdata->rx.hdr.dma_len, rdata->rx.len);
++}
++
++static unsigned int xgbe_rx_buf2_len(struct xgbe_ring_data *rdata,
++ struct xgbe_packet_data *packet,
++ unsigned int len)
++{
++ /* Always the full buffer if not the last descriptor */
++ if (!XGMAC_GET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES, LAST))
++ return rdata->rx.buf.dma_len;
++
++ /* Last descriptor so calculate how much of the buffer was used
++ * for the last bit of data
++ */
++ return rdata->rx.len - len;
++}
++
+ static int xgbe_tx_poll(struct xgbe_channel *channel)
+ {
+ struct xgbe_prv_data *pdata = channel->pdata;
+@@ -2094,8 +2112,8 @@ static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
+ struct napi_struct *napi;
+ struct sk_buff *skb;
+ struct skb_shared_hwtstamps *hwtstamps;
+- unsigned int incomplete, error, context_next, context;
+- unsigned int len, rdesc_len, max_len;
++ unsigned int last, error, context_next, context;
++ unsigned int len, buf1_len, buf2_len, max_len;
+ unsigned int received = 0;
+ int packet_count = 0;
+
+@@ -2105,7 +2123,7 @@ static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
+ if (!ring)
+ return 0;
+
+- incomplete = 0;
++ last = 0;
+ context_next = 0;
+
+ napi = (pdata->per_channel_irq) ? &channel->napi : &pdata->napi;
+@@ -2139,9 +2157,8 @@ static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
+ received++;
+ ring->cur++;
+
+- incomplete = XGMAC_GET_BITS(packet->attributes,
+- RX_PACKET_ATTRIBUTES,
+- INCOMPLETE);
++ last = XGMAC_GET_BITS(packet->attributes, RX_PACKET_ATTRIBUTES,
++ LAST);
+ context_next = XGMAC_GET_BITS(packet->attributes,
+ RX_PACKET_ATTRIBUTES,
+ CONTEXT_NEXT);
+@@ -2150,7 +2167,7 @@ static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
+ CONTEXT);
+
+ /* Earlier error, just drain the remaining data */
+- if ((incomplete || context_next) && error)
++ if ((!last || context_next) && error)
+ goto read_again;
+
+ if (error || packet->errors) {
+@@ -2162,16 +2179,22 @@ static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
+ }
+
+ if (!context) {
+- /* Length is cumulative, get this descriptor's length */
+- rdesc_len = rdata->rx.len - len;
+- len += rdesc_len;
++ /* Get the data length in the descriptor buffers */
++ buf1_len = xgbe_rx_buf1_len(rdata, packet);
++ len += buf1_len;
++ buf2_len = xgbe_rx_buf2_len(rdata, packet, len);
++ len += buf2_len;
+
+- if (rdesc_len && !skb) {
++ if (!skb) {
+ skb = xgbe_create_skb(pdata, napi, rdata,
+- rdesc_len);
+- if (!skb)
++ buf1_len);
++ if (!skb) {
+ error = 1;
+- } else if (rdesc_len) {
++ goto skip_data;
++ }
++ }
++
++ if (buf2_len) {
+ dma_sync_single_range_for_cpu(pdata->dev,
+ rdata->rx.buf.dma_base,
+ rdata->rx.buf.dma_off,
+@@ -2181,13 +2204,14 @@ static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+ rdata->rx.buf.pa.pages,
+ rdata->rx.buf.pa.pages_offset,
+- rdesc_len,
++ buf2_len,
+ rdata->rx.buf.dma_len);
+ rdata->rx.buf.pa.pages = NULL;
+ }
+ }
+
+- if (incomplete || context_next)
++skip_data:
++ if (!last || context_next)
+ goto read_again;
+
+ if (!skb)
+@@ -2245,7 +2269,7 @@ static int xgbe_rx_poll(struct xgbe_channel *channel, int budget)
+ }
+
+ /* Check if we need to save state before leaving */
+- if (received && (incomplete || context_next)) {
++ if (received && (!last || context_next)) {
+ rdata = XGBE_GET_DESC_DATA(ring, ring->cur);
+ rdata->state_saved = 1;
+ rdata->state.skb = skb;
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index f92896835d2a..3789bed26716 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -3395,7 +3395,8 @@ static int bcmgenet_suspend(struct device *d)
+
+ bcmgenet_netif_stop(dev);
+
+- phy_suspend(priv->phydev);
++ if (!device_may_wakeup(d))
++ phy_suspend(priv->phydev);
+
+ netif_device_detach(dev);
+
+@@ -3492,7 +3493,8 @@ static int bcmgenet_resume(struct device *d)
+
+ netif_device_attach(dev);
+
+- phy_resume(priv->phydev);
++ if (!device_may_wakeup(d))
++ phy_resume(priv->phydev);
+
+ if (priv->eee.eee_enabled)
+ bcmgenet_eee_enable_set(dev, true);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmmii.c b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+index e87607621e62..2f9281936f0e 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmmii.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmmii.c
+@@ -220,20 +220,6 @@ void bcmgenet_phy_power_set(struct net_device *dev, bool enable)
+ udelay(60);
+ }
+
+-static void bcmgenet_internal_phy_setup(struct net_device *dev)
+-{
+- struct bcmgenet_priv *priv = netdev_priv(dev);
+- u32 reg;
+-
+- /* Power up PHY */
+- bcmgenet_phy_power_set(dev, true);
+- /* enable APD */
+- reg = bcmgenet_ext_readl(priv, EXT_EXT_PWR_MGMT);
+- reg |= EXT_PWR_DN_EN_LD;
+- bcmgenet_ext_writel(priv, reg, EXT_EXT_PWR_MGMT);
+- bcmgenet_mii_reset(dev);
+-}
+-
+ static void bcmgenet_moca_phy_setup(struct bcmgenet_priv *priv)
+ {
+ u32 reg;
+@@ -281,7 +267,6 @@ int bcmgenet_mii_config(struct net_device *dev)
+
+ if (priv->internal_phy) {
+ phy_name = "internal PHY";
+- bcmgenet_internal_phy_setup(dev);
+ } else if (priv->phy_interface == PHY_INTERFACE_MODE_MOCA) {
+ phy_name = "MoCA";
+ bcmgenet_moca_phy_setup(priv);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index caa837e5e2b9..a380353a78c2 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -361,6 +361,8 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
+ case MLX5_CMD_OP_QUERY_VPORT_COUNTER:
+ case MLX5_CMD_OP_ALLOC_Q_COUNTER:
+ case MLX5_CMD_OP_QUERY_Q_COUNTER:
++ case MLX5_CMD_OP_SET_RATE_LIMIT:
++ case MLX5_CMD_OP_QUERY_RATE_LIMIT:
+ case MLX5_CMD_OP_ALLOC_PD:
+ case MLX5_CMD_OP_ALLOC_UAR:
+ case MLX5_CMD_OP_CONFIG_INT_MODERATION:
+@@ -497,6 +499,8 @@ const char *mlx5_command_str(int command)
+ MLX5_COMMAND_STR_CASE(ALLOC_Q_COUNTER);
+ MLX5_COMMAND_STR_CASE(DEALLOC_Q_COUNTER);
+ MLX5_COMMAND_STR_CASE(QUERY_Q_COUNTER);
++ MLX5_COMMAND_STR_CASE(SET_RATE_LIMIT);
++ MLX5_COMMAND_STR_CASE(QUERY_RATE_LIMIT);
+ MLX5_COMMAND_STR_CASE(ALLOC_PD);
+ MLX5_COMMAND_STR_CASE(DEALLOC_PD);
+ MLX5_COMMAND_STR_CASE(ALLOC_UAR);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index c69a1f827b65..41db47050991 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -921,10 +921,6 @@ void mlx5e_destroy_netdev(struct mlx5_core_dev *mdev, struct mlx5e_priv *priv);
+ int mlx5e_attach_netdev(struct mlx5_core_dev *mdev, struct net_device *netdev);
+ void mlx5e_detach_netdev(struct mlx5_core_dev *mdev, struct net_device *netdev);
+ u32 mlx5e_choose_lro_timeout(struct mlx5_core_dev *mdev, u32 wanted_timeout);
+-void mlx5e_add_vxlan_port(struct net_device *netdev,
+- struct udp_tunnel_info *ti);
+-void mlx5e_del_vxlan_port(struct net_device *netdev,
+- struct udp_tunnel_info *ti);
+
+ int mlx5e_get_offload_stats(int attr_id, const struct net_device *dev,
+ void *sp);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 9d9c64927372..a501d823e87d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3055,8 +3055,8 @@ static int mlx5e_get_vf_stats(struct net_device *dev,
+ vf_stats);
+ }
+
+-void mlx5e_add_vxlan_port(struct net_device *netdev,
+- struct udp_tunnel_info *ti)
++static void mlx5e_add_vxlan_port(struct net_device *netdev,
++ struct udp_tunnel_info *ti)
+ {
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+
+@@ -3069,8 +3069,8 @@ void mlx5e_add_vxlan_port(struct net_device *netdev,
+ mlx5e_vxlan_queue_work(priv, ti->sa_family, be16_to_cpu(ti->port), 1);
+ }
+
+-void mlx5e_del_vxlan_port(struct net_device *netdev,
+- struct udp_tunnel_info *ti)
++static void mlx5e_del_vxlan_port(struct net_device *netdev,
++ struct udp_tunnel_info *ti)
+ {
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 850378893b25..871ff3b51293 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -394,8 +394,6 @@ static const struct net_device_ops mlx5e_netdev_ops_rep = {
+ .ndo_get_phys_port_name = mlx5e_rep_get_phys_port_name,
+ .ndo_setup_tc = mlx5e_rep_ndo_setup_tc,
+ .ndo_get_stats64 = mlx5e_rep_get_stats,
+- .ndo_udp_tunnel_add = mlx5e_add_vxlan_port,
+- .ndo_udp_tunnel_del = mlx5e_del_vxlan_port,
+ .ndo_has_offload_stats = mlx5e_has_offload_stats,
+ .ndo_get_offload_stats = mlx5e_get_offload_stats,
+ };
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index e3b88bbb9dcf..b1939a1d4815 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -603,6 +603,10 @@ static inline void mlx5e_build_rx_skb(struct mlx5_cqe64 *cqe,
+ if (lro_num_seg > 1) {
+ mlx5e_lro_update_hdr(skb, cqe, cqe_bcnt);
+ skb_shinfo(skb)->gso_size = DIV_ROUND_UP(cqe_bcnt, lro_num_seg);
++ /* Subtract one since we already counted this as one
++ * "regular" packet in mlx5e_complete_rx_cqe()
++ */
++ rq->stats.packets += lro_num_seg - 1;
+ rq->stats.lro_packets++;
+ rq->stats.lro_bytes += cqe_bcnt;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 2ebbe80d8126..cc718814c378 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -128,6 +128,23 @@ mlx5e_tc_add_nic_flow(struct mlx5e_priv *priv,
+ return rule;
+ }
+
++static void mlx5e_tc_del_nic_flow(struct mlx5e_priv *priv,
++ struct mlx5e_tc_flow *flow)
++{
++ struct mlx5_fc *counter = NULL;
++
++ if (!IS_ERR(flow->rule)) {
++ counter = mlx5_flow_rule_counter(flow->rule);
++ mlx5_del_flow_rules(flow->rule);
++ mlx5_fc_destroy(priv->mdev, counter);
++ }
++
++ if (!mlx5e_tc_num_filters(priv) && (priv->fs.tc.t)) {
++ mlx5_destroy_flow_table(priv->fs.tc.t);
++ priv->fs.tc.t = NULL;
++ }
++}
++
+ static struct mlx5_flow_handle *
+ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+@@ -144,7 +161,24 @@ mlx5e_tc_add_fdb_flow(struct mlx5e_priv *priv,
+ }
+
+ static void mlx5e_detach_encap(struct mlx5e_priv *priv,
+- struct mlx5e_tc_flow *flow) {
++ struct mlx5e_tc_flow *flow);
++
++static void mlx5e_tc_del_fdb_flow(struct mlx5e_priv *priv,
++ struct mlx5e_tc_flow *flow)
++{
++ struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
++
++ mlx5_eswitch_del_offloaded_rule(esw, flow->rule, flow->attr);
++
++ mlx5_eswitch_del_vlan_action(esw, flow->attr);
++
++ if (flow->attr->action & MLX5_FLOW_CONTEXT_ACTION_ENCAP)
++ mlx5e_detach_encap(priv, flow);
++}
++
++static void mlx5e_detach_encap(struct mlx5e_priv *priv,
++ struct mlx5e_tc_flow *flow)
++{
+ struct list_head *next = flow->encap.next;
+
+ list_del(&flow->encap);
+@@ -169,24 +203,11 @@ static void mlx5e_tc_del_flow(struct mlx5e_priv *priv,
+ struct mlx5e_tc_flow *flow)
+ {
+ struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+- struct mlx5_fc *counter = NULL;
+-
+- if (!IS_ERR(flow->rule)) {
+- counter = mlx5_flow_rule_counter(flow->rule);
+- mlx5_del_flow_rules(flow->rule);
+- mlx5_fc_destroy(priv->mdev, counter);
+- }
+-
+- if (esw && esw->mode == SRIOV_OFFLOADS) {
+- mlx5_eswitch_del_vlan_action(esw, flow->attr);
+- if (flow->attr->action & MLX5_FLOW_CONTEXT_ACTION_ENCAP)
+- mlx5e_detach_encap(priv, flow);
+- }
+
+- if (!mlx5e_tc_num_filters(priv) && (priv->fs.tc.t)) {
+- mlx5_destroy_flow_table(priv->fs.tc.t);
+- priv->fs.tc.t = NULL;
+- }
++ if (esw && esw->mode == SRIOV_OFFLOADS)
++ mlx5e_tc_del_fdb_flow(priv, flow);
++ else
++ mlx5e_tc_del_nic_flow(priv, flow);
+ }
+
+ static void parse_vxlan_attr(struct mlx5_flow_spec *spec,
+@@ -243,12 +264,15 @@ static int parse_tunnel_attr(struct mlx5e_priv *priv,
+ skb_flow_dissector_target(f->dissector,
+ FLOW_DISSECTOR_KEY_ENC_PORTS,
+ f->mask);
++ struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
++ struct net_device *up_dev = mlx5_eswitch_get_uplink_netdev(esw);
++ struct mlx5e_priv *up_priv = netdev_priv(up_dev);
+
+ /* Full udp dst port must be given */
+ if (memchr_inv(&mask->dst, 0xff, sizeof(mask->dst)))
+ goto vxlan_match_offload_err;
+
+- if (mlx5e_vxlan_lookup_port(priv, be16_to_cpu(key->dst)) &&
++ if (mlx5e_vxlan_lookup_port(up_priv, be16_to_cpu(key->dst)) &&
+ MLX5_CAP_ESW(priv->mdev, vxlan_encap_decap))
+ parse_vxlan_attr(spec, f);
+ else {
+@@ -806,6 +830,8 @@ static int mlx5e_attach_encap(struct mlx5e_priv *priv,
+ struct mlx5_esw_flow_attr *attr)
+ {
+ struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
++ struct net_device *up_dev = mlx5_eswitch_get_uplink_netdev(esw);
++ struct mlx5e_priv *up_priv = netdev_priv(up_dev);
+ unsigned short family = ip_tunnel_info_af(tun_info);
+ struct ip_tunnel_key *key = &tun_info->key;
+ struct mlx5_encap_info info;
+@@ -828,7 +854,7 @@ static int mlx5e_attach_encap(struct mlx5e_priv *priv,
+ return -EOPNOTSUPP;
+ }
+
+- if (mlx5e_vxlan_lookup_port(priv, be16_to_cpu(key->tp_dst)) &&
++ if (mlx5e_vxlan_lookup_port(up_priv, be16_to_cpu(key->tp_dst)) &&
+ MLX5_CAP_ESW(priv->mdev, vxlan_encap_decap)) {
+ info.tp_dst = key->tp_dst;
+ info.tun_id = tunnel_id_to_key32(key->tun_id);
+@@ -953,14 +979,16 @@ static int parse_tc_fdb_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
+ }
+
+ if (is_tcf_vlan(a)) {
+- if (tcf_vlan_action(a) == VLAN_F_POP) {
++ if (tcf_vlan_action(a) == TCA_VLAN_ACT_POP) {
+ attr->action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_POP;
+- } else if (tcf_vlan_action(a) == VLAN_F_PUSH) {
++ } else if (tcf_vlan_action(a) == TCA_VLAN_ACT_PUSH) {
+ if (tcf_vlan_push_proto(a) != htons(ETH_P_8021Q))
+ return -EOPNOTSUPP;
+
+ attr->action |= MLX5_FLOW_CONTEXT_ACTION_VLAN_PUSH;
+ attr->vlan = tcf_vlan_push_vid(a);
++ } else { /* action is TCA_VLAN_ACT_MODIFY */
++ return -EOPNOTSUPP;
+ }
+ continue;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index cfb68371c397..574311018e6f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -272,15 +272,18 @@ static netdev_tx_t mlx5e_sq_xmit(struct mlx5e_sq *sq, struct sk_buff *skb)
+ sq->stats.tso_bytes += skb->len - ihs;
+ }
+
++ sq->stats.packets += skb_shinfo(skb)->gso_segs;
+ num_bytes = skb->len + (skb_shinfo(skb)->gso_segs - 1) * ihs;
+ } else {
+ bf = sq->bf_budget &&
+ !skb->xmit_more &&
+ !skb_shinfo(skb)->nr_frags;
+ ihs = mlx5e_get_inline_hdr_size(sq, skb, bf);
++ sq->stats.packets++;
+ num_bytes = max_t(unsigned int, skb->len, ETH_ZLEN);
+ }
+
++ sq->stats.bytes += num_bytes;
+ wi->num_bytes = num_bytes;
+
+ if (skb_vlan_tag_present(skb)) {
+@@ -377,8 +380,6 @@ static netdev_tx_t mlx5e_sq_xmit(struct mlx5e_sq *sq, struct sk_buff *skb)
+ if (bf)
+ sq->bf_budget--;
+
+- sq->stats.packets++;
+- sq->stats.bytes += num_bytes;
+ return NETDEV_TX_OK;
+
+ dma_unmap_wqe_err:
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+index 8661dd3f542c..b5967df1eeaa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+@@ -201,6 +201,7 @@ struct mlx5_esw_offload {
+ struct mlx5_eswitch_rep *vport_reps;
+ DECLARE_HASHTABLE(encap_tbl, 8);
+ u8 inline_mode;
++ u64 num_flows;
+ };
+
+ struct mlx5_eswitch {
+@@ -263,6 +264,11 @@ struct mlx5_flow_handle *
+ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
+ struct mlx5_flow_spec *spec,
+ struct mlx5_esw_flow_attr *attr);
++void
++mlx5_eswitch_del_offloaded_rule(struct mlx5_eswitch *esw,
++ struct mlx5_flow_handle *rule,
++ struct mlx5_esw_flow_attr *attr);
++
+ struct mlx5_flow_handle *
+ mlx5_eswitch_create_vport_rx_rule(struct mlx5_eswitch *esw, int vport, u32 tirn);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 595f7c7383b3..7bce2bdbb79b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -93,10 +93,27 @@ mlx5_eswitch_add_offloaded_rule(struct mlx5_eswitch *esw,
+ spec, &flow_act, dest, i);
+ if (IS_ERR(rule))
+ mlx5_fc_destroy(esw->dev, counter);
++ else
++ esw->offloads.num_flows++;
+
+ return rule;
+ }
+
++void
++mlx5_eswitch_del_offloaded_rule(struct mlx5_eswitch *esw,
++ struct mlx5_flow_handle *rule,
++ struct mlx5_esw_flow_attr *attr)
++{
++ struct mlx5_fc *counter = NULL;
++
++ if (!IS_ERR(rule)) {
++ counter = mlx5_flow_rule_counter(rule);
++ mlx5_del_flow_rules(rule);
++ mlx5_fc_destroy(esw->dev, counter);
++ esw->offloads.num_flows--;
++ }
++}
++
+ static int esw_set_global_vlan_pop(struct mlx5_eswitch *esw, u8 val)
+ {
+ struct mlx5_eswitch_rep *rep;
+@@ -905,6 +922,11 @@ int mlx5_devlink_eswitch_inline_mode_set(struct devlink *devlink, u8 mode)
+ MLX5_CAP_INLINE_MODE_VPORT_CONTEXT)
+ return -EOPNOTSUPP;
+
++ if (esw->offloads.num_flows > 0) {
++ esw_warn(dev, "Can't set inline mode when flows are configured\n");
++ return -EOPNOTSUPP;
++ }
++
+ err = esw_inline_mode_from_devlink(mode, &mlx5_mode);
+ if (err)
+ goto out;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 3c315eb8d270..4aca265d9c14 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -87,7 +87,7 @@ static struct mlx5_profile profile[] = {
+ [2] = {
+ .mask = MLX5_PROF_MASK_QP_SIZE |
+ MLX5_PROF_MASK_MR_CACHE,
+- .log_max_qp = 17,
++ .log_max_qp = 18,
+ .mr_cache[0] = {
+ .size = 500,
+ .limit = 250
+diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
+index 296c8efd0038..bd0af5974a75 100644
+--- a/drivers/net/ethernet/ti/Kconfig
++++ b/drivers/net/ethernet/ti/Kconfig
+@@ -76,7 +76,7 @@ config TI_CPSW
+ config TI_CPTS
+ tristate "TI Common Platform Time Sync (CPTS) Support"
+ depends on TI_CPSW || TI_KEYSTONE_NETCP
+- imply PTP_1588_CLOCK
++ depends on PTP_1588_CLOCK
+ ---help---
+ This driver supports the Common Platform Time Sync unit of
+ the CPSW Ethernet Switch and Keystone 2 1g/10g Switch Subsystem.
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 24d5272cdce5..0d519a9582ca 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -924,6 +924,8 @@ static const struct usb_device_id products[] = {
+ {QMI_FIXED_INTF(0x413c, 0x81a9, 8)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card */
+ {QMI_FIXED_INTF(0x413c, 0x81b1, 8)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card */
+ {QMI_FIXED_INTF(0x413c, 0x81b3, 8)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
++ {QMI_FIXED_INTF(0x413c, 0x81b6, 8)}, /* Dell Wireless 5811e */
++ {QMI_FIXED_INTF(0x413c, 0x81b6, 10)}, /* Dell Wireless 5811e */
+ {QMI_FIXED_INTF(0x03f0, 0x4e1d, 8)}, /* HP lt4111 LTE/EV-DO/HSPA+ Gobi 4G Module */
+ {QMI_FIXED_INTF(0x22de, 0x9061, 3)}, /* WeTelecom WPD-600N */
+ {QMI_FIXED_INTF(0x1e0e, 0x9001, 5)}, /* SIMCom 7230E */
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 682aac0a2267..921fef275ea4 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -462,8 +462,10 @@ static void vrf_rt6_release(struct net_device *dev, struct net_vrf *vrf)
+ }
+
+ if (rt6_local) {
+- if (rt6_local->rt6i_idev)
++ if (rt6_local->rt6i_idev) {
+ in6_dev_put(rt6_local->rt6i_idev);
++ rt6_local->rt6i_idev = NULL;
++ }
+
+ dst = &rt6_local->dst;
+ dev_put(dst->dev);
+diff --git a/drivers/net/wireless/ath/ath10k/hw.c b/drivers/net/wireless/ath/ath10k/hw.c
+index 33fb26833cd0..d9f37ee4bfdd 100644
+--- a/drivers/net/wireless/ath/ath10k/hw.c
++++ b/drivers/net/wireless/ath/ath10k/hw.c
+@@ -51,7 +51,7 @@ const struct ath10k_hw_regs qca6174_regs = {
+ .rtc_soc_base_address = 0x00000800,
+ .rtc_wmac_base_address = 0x00001000,
+ .soc_core_base_address = 0x0003a000,
+- .wlan_mac_base_address = 0x00020000,
++ .wlan_mac_base_address = 0x00010000,
+ .ce_wrapper_base_address = 0x00034000,
+ .ce0_base_address = 0x00034400,
+ .ce1_base_address = 0x00034800,
+diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
+index 4db07da81d8d..6d724c61cc7a 100644
+--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
+@@ -2742,6 +2742,21 @@ static void mwifiex_pcie_device_dump(struct mwifiex_adapter *adapter)
+ schedule_work(&pcie_work);
+ }
+
++static void mwifiex_pcie_free_buffers(struct mwifiex_adapter *adapter)
++{
++ struct pcie_service_card *card = adapter->card;
++ const struct mwifiex_pcie_card_reg *reg = card->pcie.reg;
++
++ if (reg->sleep_cookie)
++ mwifiex_pcie_delete_sleep_cookie_buf(adapter);
++
++ mwifiex_pcie_delete_cmdrsp_buf(adapter);
++ mwifiex_pcie_delete_evtbd_ring(adapter);
++ mwifiex_pcie_delete_rxbd_ring(adapter);
++ mwifiex_pcie_delete_txbd_ring(adapter);
++ card->cmdrsp_buf = NULL;
++}
++
+ /*
+ * This function initializes the PCI-E host memory space, WCB rings, etc.
+ *
+@@ -2853,13 +2868,6 @@ static int mwifiex_pcie_init(struct mwifiex_adapter *adapter)
+
+ /*
+ * This function cleans up the allocated card buffers.
+- *
+- * The following are freed by this function -
+- * - TXBD ring buffers
+- * - RXBD ring buffers
+- * - Event BD ring buffers
+- * - Command response ring buffer
+- * - Sleep cookie buffer
+ */
+ static void mwifiex_pcie_cleanup(struct mwifiex_adapter *adapter)
+ {
+@@ -2875,6 +2883,8 @@ static void mwifiex_pcie_cleanup(struct mwifiex_adapter *adapter)
+ "Failed to write driver not-ready signature\n");
+ }
+
++ mwifiex_pcie_free_buffers(adapter);
++
+ if (pdev) {
+ pci_iounmap(pdev, card->pci_mmap);
+ pci_iounmap(pdev, card->pci_mmap1);
+@@ -3115,10 +3125,7 @@ static void mwifiex_pcie_up_dev(struct mwifiex_adapter *adapter)
+ pci_iounmap(pdev, card->pci_mmap1);
+ }
+
+-/* This function cleans up the PCI-E host memory space.
+- * Some code is extracted from mwifiex_unregister_dev()
+- *
+- */
++/* This function cleans up the PCI-E host memory space. */
+ static void mwifiex_pcie_down_dev(struct mwifiex_adapter *adapter)
+ {
+ struct pcie_service_card *card = adapter->card;
+@@ -3130,14 +3137,7 @@ static void mwifiex_pcie_down_dev(struct mwifiex_adapter *adapter)
+ adapter->seq_num = 0;
+ adapter->tx_buf_size = MWIFIEX_TX_DATA_BUF_SIZE_4K;
+
+- if (reg->sleep_cookie)
+- mwifiex_pcie_delete_sleep_cookie_buf(adapter);
+-
+- mwifiex_pcie_delete_cmdrsp_buf(adapter);
+- mwifiex_pcie_delete_evtbd_ring(adapter);
+- mwifiex_pcie_delete_rxbd_ring(adapter);
+- mwifiex_pcie_delete_txbd_ring(adapter);
+- card->cmdrsp_buf = NULL;
++ mwifiex_pcie_free_buffers(adapter);
+ }
+
+ static struct mwifiex_if_ops pcie_ops = {
+diff --git a/drivers/parport/share.c b/drivers/parport/share.c
+index 3308427ed9f7..4399de34054a 100644
+--- a/drivers/parport/share.c
++++ b/drivers/parport/share.c
+@@ -939,8 +939,10 @@ parport_register_dev_model(struct parport *port, const char *name,
+ * pardevice fields. -arca
+ */
+ port->ops->init_state(par_dev, par_dev->state);
+- port->proc_device = par_dev;
+- parport_device_proc_register(par_dev);
++ if (!test_and_set_bit(PARPORT_DEVPROC_REGISTERED, &port->devflags)) {
++ port->proc_device = par_dev;
++ parport_device_proc_register(par_dev);
++ }
+
+ return par_dev;
+
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 245fbe2f1696..6e620242a600 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -4658,7 +4658,6 @@ _scsih_io_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
+ struct MPT3SAS_DEVICE *sas_device_priv_data;
+ u32 response_code = 0;
+ unsigned long flags;
+- unsigned int sector_sz;
+
+ mpi_reply = mpt3sas_base_get_reply_virt_addr(ioc, reply);
+ scmd = _scsih_scsi_lookup_get_clear(ioc, smid);
+@@ -4717,20 +4716,6 @@ _scsih_io_done(struct MPT3SAS_ADAPTER *ioc, u16 smid, u8 msix_index, u32 reply)
+ }
+
+ xfer_cnt = le32_to_cpu(mpi_reply->TransferCount);
+-
+- /* In case of bogus fw or device, we could end up having
+- * unaligned partial completion. We can force alignment here,
+- * then scsi-ml does not need to handle this misbehavior.
+- */
+- sector_sz = scmd->device->sector_size;
+- if (unlikely(scmd->request->cmd_type == REQ_TYPE_FS && sector_sz &&
+- xfer_cnt % sector_sz)) {
+- sdev_printk(KERN_INFO, scmd->device,
+- "unaligned partial completion avoided (xfer_cnt=%u, sector_sz=%u)\n",
+- xfer_cnt, sector_sz);
+- xfer_cnt = round_down(xfer_cnt, sector_sz);
+- }
+-
+ scsi_set_resid(scmd, scsi_bufflen(scmd) - xfer_cnt);
+ if (ioc_status & MPI2_IOCSTATUS_FLAG_LOG_INFO_AVAILABLE)
+ log_info = le32_to_cpu(mpi_reply->IOCLogInfo);
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 1f5d92a25a49..1ee57619c95e 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1790,6 +1790,8 @@ static int sd_done(struct scsi_cmnd *SCpnt)
+ {
+ int result = SCpnt->result;
+ unsigned int good_bytes = result ? 0 : scsi_bufflen(SCpnt);
++ unsigned int sector_size = SCpnt->device->sector_size;
++ unsigned int resid;
+ struct scsi_sense_hdr sshdr;
+ struct scsi_disk *sdkp = scsi_disk(SCpnt->request->rq_disk);
+ struct request *req = SCpnt->request;
+@@ -1820,6 +1822,21 @@ static int sd_done(struct scsi_cmnd *SCpnt)
+ scsi_set_resid(SCpnt, blk_rq_bytes(req));
+ }
+ break;
++ default:
++ /*
++ * In case of bogus fw or device, we could end up having
++ * an unaligned partial completion. Check this here and force
++ * alignment.
++ */
++ resid = scsi_get_resid(SCpnt);
++ if (resid & (sector_size - 1)) {
++ sd_printk(KERN_INFO, sdkp,
++ "Unaligned partial completion (resid=%u, sector_sz=%u)\n",
++ resid, sector_size);
++ resid = min(scsi_bufflen(SCpnt),
++ round_up(resid, sector_size));
++ scsi_set_resid(SCpnt, resid);
++ }
+ }
+
+ if (result) {
+diff --git a/drivers/usb/class/usbtmc.c b/drivers/usb/class/usbtmc.c
+index f03692ec5520..8fb309a0ff6b 100644
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -1381,7 +1381,7 @@ static int usbtmc_probe(struct usb_interface *intf,
+
+ dev_dbg(&intf->dev, "%s called\n", __func__);
+
+- data = kmalloc(sizeof(*data), GFP_KERNEL);
++ data = kzalloc(sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+@@ -1444,6 +1444,13 @@ static int usbtmc_probe(struct usb_interface *intf,
+ break;
+ }
+ }
++
++ if (!data->bulk_out || !data->bulk_in) {
++ dev_err(&intf->dev, "bulk endpoints not found\n");
++ retcode = -ENODEV;
++ goto err_put;
++ }
++
+ /* Find int endpoint */
+ for (n = 0; n < iface_desc->desc.bNumEndpoints; n++) {
+ endpoint = &iface_desc->endpoint[n].desc;
+@@ -1469,8 +1476,10 @@ static int usbtmc_probe(struct usb_interface *intf,
+ if (data->iin_ep_present) {
+ /* allocate int urb */
+ data->iin_urb = usb_alloc_urb(0, GFP_KERNEL);
+- if (!data->iin_urb)
++ if (!data->iin_urb) {
++ retcode = -ENOMEM;
+ goto error_register;
++ }
+
+ /* Protect interrupt in endpoint data until iin_urb is freed */
+ kref_get(&data->kref);
+@@ -1478,8 +1487,10 @@ static int usbtmc_probe(struct usb_interface *intf,
+ /* allocate buffer for interrupt in */
+ data->iin_buffer = kmalloc(data->iin_wMaxPacketSize,
+ GFP_KERNEL);
+- if (!data->iin_buffer)
++ if (!data->iin_buffer) {
++ retcode = -ENOMEM;
+ goto error_register;
++ }
+
+ /* fill interrupt urb */
+ usb_fill_int_urb(data->iin_urb, data->usb_dev,
+@@ -1512,6 +1523,7 @@ static int usbtmc_probe(struct usb_interface *intf,
+ sysfs_remove_group(&intf->dev.kobj, &capability_attr_grp);
+ sysfs_remove_group(&intf->dev.kobj, &data_attr_grp);
+ usbtmc_free_int(data);
++err_put:
+ kref_put(&data->kref, usbtmc_delete);
+ return retcode;
+ }
+diff --git a/drivers/usb/core/config.c b/drivers/usb/core/config.c
+index 25dbd8c7aec7..4be52c602e9b 100644
+--- a/drivers/usb/core/config.c
++++ b/drivers/usb/core/config.c
+@@ -280,6 +280,16 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno, int inum,
+
+ /*
+ * Adjust bInterval for quirked devices.
++ */
++ /*
++ * This quirk fixes bIntervals reported in ms.
++ */
++ if (to_usb_device(ddev)->quirks &
++ USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL) {
++ n = clamp(fls(d->bInterval) + 3, i, j);
++ i = j = n;
++ }
++ /*
+ * This quirk fixes bIntervals reported in
+ * linear microframes.
+ */
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index a56c75e09786..48fbf523d186 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -4275,7 +4275,7 @@ static void hub_set_initial_usb2_lpm_policy(struct usb_device *udev)
+ struct usb_hub *hub = usb_hub_to_struct_hub(udev->parent);
+ int connect_type = USB_PORT_CONNECT_TYPE_UNKNOWN;
+
+- if (!udev->usb2_hw_lpm_capable)
++ if (!udev->usb2_hw_lpm_capable || !udev->bos)
+ return;
+
+ if (hub)
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 24f9f98968a5..96b21b0dac1e 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -170,6 +170,14 @@ static const struct usb_device_id usb_quirk_list[] = {
+ /* M-Systems Flash Disk Pioneers */
+ { USB_DEVICE(0x08ec, 0x1000), .driver_info = USB_QUIRK_RESET_RESUME },
+
++ /* Baum Vario Ultra */
++ { USB_DEVICE(0x0904, 0x6101), .driver_info =
++ USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL },
++ { USB_DEVICE(0x0904, 0x6102), .driver_info =
++ USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL },
++ { USB_DEVICE(0x0904, 0x6103), .driver_info =
++ USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL },
++
+ /* Keytouch QWERTY Panel keyboard */
+ { USB_DEVICE(0x0926, 0x3333), .driver_info =
+ USB_QUIRK_CONFIG_INTF_STRINGS },
+diff --git a/drivers/usb/gadget/function/f_acm.c b/drivers/usb/gadget/function/f_acm.c
+index a30766ca4226..5e3828d9dac7 100644
+--- a/drivers/usb/gadget/function/f_acm.c
++++ b/drivers/usb/gadget/function/f_acm.c
+@@ -535,13 +535,15 @@ static int acm_notify_serial_state(struct f_acm *acm)
+ {
+ struct usb_composite_dev *cdev = acm->port.func.config->cdev;
+ int status;
++ __le16 serial_state;
+
+ spin_lock(&acm->lock);
+ if (acm->notify_req) {
+ dev_dbg(&cdev->gadget->dev, "acm ttyGS%d serial state %04x\n",
+ acm->port_num, acm->serial_state);
++ serial_state = cpu_to_le16(acm->serial_state);
+ status = acm_cdc_notify(acm, USB_CDC_NOTIFY_SERIAL_STATE,
+- 0, &acm->serial_state, sizeof(acm->serial_state));
++ 0, &serial_state, sizeof(acm->serial_state));
+ } else {
+ acm->pending = true;
+ status = 0;
+diff --git a/drivers/usb/gadget/function/f_uvc.c b/drivers/usb/gadget/function/f_uvc.c
+index 29b41b5dee04..c7689d05356c 100644
+--- a/drivers/usb/gadget/function/f_uvc.c
++++ b/drivers/usb/gadget/function/f_uvc.c
+@@ -625,7 +625,7 @@ uvc_function_bind(struct usb_configuration *c, struct usb_function *f)
+ uvc_ss_streaming_comp.bMaxBurst = opts->streaming_maxburst;
+ uvc_ss_streaming_comp.wBytesPerInterval =
+ cpu_to_le16(max_packet_size * max_packet_mult *
+- opts->streaming_maxburst);
++ (opts->streaming_maxburst + 1));
+
+ /* Allocate endpoints. */
+ ep = usb_ep_autoconfig(cdev->gadget, &uvc_control_ep);
+diff --git a/drivers/usb/misc/idmouse.c b/drivers/usb/misc/idmouse.c
+index debc1fd74b0d..dc9328fd8030 100644
+--- a/drivers/usb/misc/idmouse.c
++++ b/drivers/usb/misc/idmouse.c
+@@ -346,6 +346,9 @@ static int idmouse_probe(struct usb_interface *interface,
+ if (iface_desc->desc.bInterfaceClass != 0x0A)
+ return -ENODEV;
+
++ if (iface_desc->desc.bNumEndpoints < 1)
++ return -ENODEV;
++
+ /* allocate memory for our device state and initialize it */
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (dev == NULL)
+diff --git a/drivers/usb/misc/lvstest.c b/drivers/usb/misc/lvstest.c
+index 77176511658f..d3d124753266 100644
+--- a/drivers/usb/misc/lvstest.c
++++ b/drivers/usb/misc/lvstest.c
+@@ -366,6 +366,10 @@ static int lvs_rh_probe(struct usb_interface *intf,
+
+ hdev = interface_to_usbdev(intf);
+ desc = intf->cur_altsetting;
++
++ if (desc->desc.bNumEndpoints < 1)
++ return -ENODEV;
++
+ endpoint = &desc->endpoint[0].desc;
+
+ /* valid only for SS root hub */
+diff --git a/drivers/usb/misc/uss720.c b/drivers/usb/misc/uss720.c
+index 356d312add57..9ff66525924e 100644
+--- a/drivers/usb/misc/uss720.c
++++ b/drivers/usb/misc/uss720.c
+@@ -708,6 +708,11 @@ static int uss720_probe(struct usb_interface *intf,
+
+ interface = intf->cur_altsetting;
+
++ if (interface->desc.bNumEndpoints < 3) {
++ usb_put_dev(usbdev);
++ return -ENODEV;
++ }
++
+ /*
+ * Allocate parport interface
+ */
+diff --git a/drivers/usb/musb/musb_cppi41.c b/drivers/usb/musb/musb_cppi41.c
+index 16363852c034..cac3b21a720b 100644
+--- a/drivers/usb/musb/musb_cppi41.c
++++ b/drivers/usb/musb/musb_cppi41.c
+@@ -231,8 +231,27 @@ static void cppi41_dma_callback(void *private_data)
+ transferred < cppi41_channel->packet_sz)
+ cppi41_channel->prog_len = 0;
+
+- if (cppi41_channel->is_tx)
+- empty = musb_is_tx_fifo_empty(hw_ep);
++ if (cppi41_channel->is_tx) {
++ u8 type;
++
++ if (is_host_active(musb))
++ type = hw_ep->out_qh->type;
++ else
++ type = hw_ep->ep_in.type;
++
++ if (type == USB_ENDPOINT_XFER_ISOC)
++ /*
++ * Don't use the early-TX-interrupt workaround below
++			 * for Isoch transfers. Since Isoch transfers are
++			 * periodic, by the time the next transfer is
++			 * scheduled, the current one should be done already.
++			 *
++			 * This avoids an audio playback underrun issue.
++ */
++ empty = true;
++ else
++ empty = musb_is_tx_fifo_empty(hw_ep);
++ }
+
+ if (!cppi41_channel->is_tx || empty) {
+ cppi41_trans_done(cppi41_channel);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 42cc72e54c05..af67a0de6b5d 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -233,6 +233,14 @@ static void option_instat_callback(struct urb *urb);
+ #define BANDRICH_PRODUCT_1012 0x1012
+
+ #define QUALCOMM_VENDOR_ID 0x05C6
++/* These Quectel products use Qualcomm's vendor ID */
++#define QUECTEL_PRODUCT_UC20 0x9003
++#define QUECTEL_PRODUCT_UC15 0x9090
++
++#define QUECTEL_VENDOR_ID 0x2c7c
++/* These Quectel products use Quectel's vendor ID */
++#define QUECTEL_PRODUCT_EC21 0x0121
++#define QUECTEL_PRODUCT_EC25 0x0125
+
+ #define CMOTECH_VENDOR_ID 0x16d8
+ #define CMOTECH_PRODUCT_6001 0x6001
+@@ -1161,7 +1169,14 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x6613)}, /* Onda H600/ZTE MF330 */
+ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x0023)}, /* ONYX 3G device */
+ { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9000)}, /* SIMCom SIM5218 */
+- { USB_DEVICE(QUALCOMM_VENDOR_ID, 0x9003), /* Quectel UC20 */
++ /* Quectel products using Qualcomm vendor ID */
++ { USB_DEVICE(QUALCOMM_VENDOR_ID, QUECTEL_PRODUCT_UC15)},
++ { USB_DEVICE(QUALCOMM_VENDOR_ID, QUECTEL_PRODUCT_UC20),
++ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
++ /* Quectel products using Quectel vendor ID */
++ { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC21),
++ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
++ { USB_DEVICE(QUECTEL_VENDOR_ID, QUECTEL_PRODUCT_EC25),
+ .driver_info = (kernel_ulong_t)&net_intf4_blacklist },
+ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_6001) },
+ { USB_DEVICE(CMOTECH_VENDOR_ID, CMOTECH_PRODUCT_CMU_300) },
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index 696458db7e3c..38b3f0d8cd58 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -169,6 +169,8 @@ static const struct usb_device_id id_table[] = {
+ {DEVICE_SWI(0x413c, 0x81a9)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card */
+ {DEVICE_SWI(0x413c, 0x81b1)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card */
+ {DEVICE_SWI(0x413c, 0x81b3)}, /* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
++ {DEVICE_SWI(0x413c, 0x81b5)}, /* Dell Wireless 5811e QDL */
++ {DEVICE_SWI(0x413c, 0x81b6)}, /* Dell Wireless 5811e QDL */
+
+ /* Huawei devices */
+ {DEVICE_HWI(0x03f0, 0x581d)}, /* HP lt4112 LTE/HSPA+ Gobi 4G Modem (Huawei me906e) */
+diff --git a/drivers/usb/wusbcore/wa-hc.c b/drivers/usb/wusbcore/wa-hc.c
+index 252c7bd9218a..d01496fd27fe 100644
+--- a/drivers/usb/wusbcore/wa-hc.c
++++ b/drivers/usb/wusbcore/wa-hc.c
+@@ -39,6 +39,9 @@ int wa_create(struct wahc *wa, struct usb_interface *iface,
+ int result;
+ struct device *dev = &iface->dev;
+
++ if (iface->cur_altsetting->desc.bNumEndpoints < 3)
++ return -ENODEV;
++
+ result = wa_rpipes_create(wa);
+ if (result < 0)
+ goto error_rpipes_create;
+diff --git a/drivers/uwb/hwa-rc.c b/drivers/uwb/hwa-rc.c
+index 0aa6c3c29d17..35a1e777b449 100644
+--- a/drivers/uwb/hwa-rc.c
++++ b/drivers/uwb/hwa-rc.c
+@@ -823,6 +823,9 @@ static int hwarc_probe(struct usb_interface *iface,
+ struct hwarc *hwarc;
+ struct device *dev = &iface->dev;
+
++ if (iface->cur_altsetting->desc.bNumEndpoints < 1)
++ return -ENODEV;
++
+ result = -ENOMEM;
+ uwb_rc = uwb_rc_alloc();
+ if (uwb_rc == NULL) {
+diff --git a/drivers/uwb/i1480/dfu/usb.c b/drivers/uwb/i1480/dfu/usb.c
+index 2bfc846ac071..6345e85822a4 100644
+--- a/drivers/uwb/i1480/dfu/usb.c
++++ b/drivers/uwb/i1480/dfu/usb.c
+@@ -362,6 +362,9 @@ int i1480_usb_probe(struct usb_interface *iface, const struct usb_device_id *id)
+ result);
+ }
+
++ if (iface->cur_altsetting->desc.bNumEndpoints < 1)
++ return -ENODEV;
++
+ result = -ENOMEM;
+ i1480_usb = kzalloc(sizeof(*i1480_usb), GFP_KERNEL);
+ if (i1480_usb == NULL) {
+diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
+index 9901c4671e2f..6e10325596b6 100644
+--- a/drivers/vfio/vfio.c
++++ b/drivers/vfio/vfio.c
+@@ -403,6 +403,7 @@ static void vfio_group_release(struct kref *kref)
+ struct iommu_group *iommu_group = group->iommu_group;
+
+ WARN_ON(!list_empty(&group->device_list));
++ WARN_ON(group->notifier.head);
+
+ list_for_each_entry_safe(unbound, tmp,
+ &group->unbound_list, unbound_next) {
+@@ -1573,6 +1574,10 @@ static int vfio_group_fops_open(struct inode *inode, struct file *filep)
+ return -EBUSY;
+ }
+
++	/* Warn if the previous user didn't clean up and re-init to drop them */
++ if (WARN_ON(group->notifier.head))
++ BLOCKING_INIT_NOTIFIER_HEAD(&group->notifier);
++
+ filep->private_data = group;
+
+ return 0;
+@@ -1584,9 +1589,6 @@ static int vfio_group_fops_release(struct inode *inode, struct file *filep)
+
+ filep->private_data = NULL;
+
+- /* Any user didn't unregister? */
+- WARN_ON(group->notifier.head);
+-
+ vfio_group_try_dissolve_container(group);
+
+ atomic_dec(&group->opened);
+diff --git a/drivers/video/console/fbcon.c b/drivers/video/console/fbcon.c
+index a44f5627b82a..f4daadff8a6c 100644
+--- a/drivers/video/console/fbcon.c
++++ b/drivers/video/console/fbcon.c
+@@ -1165,6 +1165,8 @@ static void fbcon_free_font(struct display *p, bool freefont)
+ p->userfont = 0;
+ }
+
++static void set_vc_hi_font(struct vc_data *vc, bool set);
++
+ static void fbcon_deinit(struct vc_data *vc)
+ {
+ struct display *p = &fb_display[vc->vc_num];
+@@ -1200,6 +1202,9 @@ static void fbcon_deinit(struct vc_data *vc)
+ if (free_font)
+ vc->vc_font.data = NULL;
+
++ if (vc->vc_hi_font_mask)
++ set_vc_hi_font(vc, false);
++
+ if (!con_is_bound(&fb_con))
+ fbcon_exit();
+
+@@ -2436,32 +2441,10 @@ static int fbcon_get_font(struct vc_data *vc, struct console_font *font)
+ return 0;
+ }
+
+-static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
+- const u8 * data, int userfont)
++/* set/clear vc_hi_font_mask and update vc attrs accordingly */
++static void set_vc_hi_font(struct vc_data *vc, bool set)
+ {
+- struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+- struct fbcon_ops *ops = info->fbcon_par;
+- struct display *p = &fb_display[vc->vc_num];
+- int resize;
+- int cnt;
+- char *old_data = NULL;
+-
+- if (con_is_visible(vc) && softback_lines)
+- fbcon_set_origin(vc);
+-
+- resize = (w != vc->vc_font.width) || (h != vc->vc_font.height);
+- if (p->userfont)
+- old_data = vc->vc_font.data;
+- if (userfont)
+- cnt = FNTCHARCNT(data);
+- else
+- cnt = 256;
+- vc->vc_font.data = (void *)(p->fontdata = data);
+- if ((p->userfont = userfont))
+- REFCOUNT(data)++;
+- vc->vc_font.width = w;
+- vc->vc_font.height = h;
+- if (vc->vc_hi_font_mask && cnt == 256) {
++ if (!set) {
+ vc->vc_hi_font_mask = 0;
+ if (vc->vc_can_do_color) {
+ vc->vc_complement_mask >>= 1;
+@@ -2484,7 +2467,7 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
+ ((c & 0xfe00) >> 1) | (c & 0xff);
+ vc->vc_attr >>= 1;
+ }
+- } else if (!vc->vc_hi_font_mask && cnt == 512) {
++ } else {
+ vc->vc_hi_font_mask = 0x100;
+ if (vc->vc_can_do_color) {
+ vc->vc_complement_mask <<= 1;
+@@ -2516,8 +2499,38 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
+ } else
+ vc->vc_video_erase_char = c & ~0x100;
+ }
+-
+ }
++}
++
++static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
++ const u8 * data, int userfont)
++{
++ struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
++ struct fbcon_ops *ops = info->fbcon_par;
++ struct display *p = &fb_display[vc->vc_num];
++ int resize;
++ int cnt;
++ char *old_data = NULL;
++
++ if (con_is_visible(vc) && softback_lines)
++ fbcon_set_origin(vc);
++
++ resize = (w != vc->vc_font.width) || (h != vc->vc_font.height);
++ if (p->userfont)
++ old_data = vc->vc_font.data;
++ if (userfont)
++ cnt = FNTCHARCNT(data);
++ else
++ cnt = 256;
++ vc->vc_font.data = (void *)(p->fontdata = data);
++ if ((p->userfont = userfont))
++ REFCOUNT(data)++;
++ vc->vc_font.width = w;
++ vc->vc_font.height = h;
++ if (vc->vc_hi_font_mask && cnt == 256)
++ set_vc_hi_font(vc, false);
++ else if (!vc->vc_hi_font_mask && cnt == 512)
++ set_vc_hi_font(vc, true);
+
+ if (resize) {
+ int cols, rows;
+diff --git a/drivers/xen/xen-acpi-processor.c b/drivers/xen/xen-acpi-processor.c
+index 4ce10bcca18b..4b857463a2b4 100644
+--- a/drivers/xen/xen-acpi-processor.c
++++ b/drivers/xen/xen-acpi-processor.c
+@@ -27,10 +27,10 @@
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/types.h>
++#include <linux/syscore_ops.h>
+ #include <linux/acpi.h>
+ #include <acpi/processor.h>
+ #include <xen/xen.h>
+-#include <xen/xen-ops.h>
+ #include <xen/interface/platform.h>
+ #include <asm/xen/hypercall.h>
+
+@@ -466,15 +466,33 @@ static int xen_upload_processor_pm_data(void)
+ return rc;
+ }
+
+-static int xen_acpi_processor_resume(struct notifier_block *nb,
+- unsigned long action, void *data)
++static void xen_acpi_processor_resume_worker(struct work_struct *dummy)
+ {
++ int rc;
++
+ bitmap_zero(acpi_ids_done, nr_acpi_bits);
+- return xen_upload_processor_pm_data();
++
++ rc = xen_upload_processor_pm_data();
++ if (rc != 0)
++ pr_info("ACPI data upload failed, error = %d\n", rc);
++}
++
++static void xen_acpi_processor_resume(void)
++{
++ static DECLARE_WORK(wq, xen_acpi_processor_resume_worker);
++
++ /*
++ * xen_upload_processor_pm_data() calls non-atomic code.
++ * However, the context for xen_acpi_processor_resume is syscore
++ * with only the boot CPU online and in an atomic context.
++ *
++	 * So defer the upload until a safer point.
++ */
++ schedule_work(&wq);
+ }
+
+-struct notifier_block xen_acpi_processor_resume_nb = {
+- .notifier_call = xen_acpi_processor_resume,
++static struct syscore_ops xap_syscore_ops = {
++ .resume = xen_acpi_processor_resume,
+ };
+
+ static int __init xen_acpi_processor_init(void)
+@@ -527,7 +545,7 @@ static int __init xen_acpi_processor_init(void)
+ if (rc)
+ goto err_unregister;
+
+- xen_resume_notifier_register(&xen_acpi_processor_resume_nb);
++ register_syscore_ops(&xap_syscore_ops);
+
+ return 0;
+ err_unregister:
+@@ -544,7 +562,7 @@ static void __exit xen_acpi_processor_exit(void)
+ {
+ int i;
+
+- xen_resume_notifier_unregister(&xen_acpi_processor_resume_nb);
++ unregister_syscore_ops(&xap_syscore_ops);
+ kfree(acpi_ids_done);
+ kfree(acpi_id_present);
+ kfree(acpi_id_cst_present);
+diff --git a/fs/crypto/crypto.c b/fs/crypto/crypto.c
+index ac8e4f6a3773..3c2ca312c251 100644
+--- a/fs/crypto/crypto.c
++++ b/fs/crypto/crypto.c
+@@ -394,7 +394,6 @@ EXPORT_SYMBOL(fscrypt_zeroout_range);
+ static int fscrypt_d_revalidate(struct dentry *dentry, unsigned int flags)
+ {
+ struct dentry *dir;
+- struct fscrypt_info *ci;
+ int dir_has_key, cached_with_key;
+
+ if (flags & LOOKUP_RCU)
+@@ -406,18 +405,11 @@ static int fscrypt_d_revalidate(struct dentry *dentry, unsigned int flags)
+ return 0;
+ }
+
+- ci = d_inode(dir)->i_crypt_info;
+- if (ci && ci->ci_keyring_key &&
+- (ci->ci_keyring_key->flags & ((1 << KEY_FLAG_INVALIDATED) |
+- (1 << KEY_FLAG_REVOKED) |
+- (1 << KEY_FLAG_DEAD))))
+- ci = NULL;
+-
+	/* this should eventually be a flag in d_flags */
+ spin_lock(&dentry->d_lock);
+ cached_with_key = dentry->d_flags & DCACHE_ENCRYPTED_WITH_KEY;
+ spin_unlock(&dentry->d_lock);
+- dir_has_key = (ci != NULL);
++ dir_has_key = (d_inode(dir)->i_crypt_info != NULL);
+ dput(dir);
+
+ /*
+diff --git a/fs/crypto/fname.c b/fs/crypto/fname.c
+index 56ad9d195f18..8af4d5224bdd 100644
+--- a/fs/crypto/fname.c
++++ b/fs/crypto/fname.c
+@@ -350,7 +350,7 @@ int fscrypt_setup_filename(struct inode *dir, const struct qstr *iname,
+ fname->disk_name.len = iname->len;
+ return 0;
+ }
+- ret = fscrypt_get_crypt_info(dir);
++ ret = fscrypt_get_encryption_info(dir);
+ if (ret && ret != -EOPNOTSUPP)
+ return ret;
+
+diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
+index aeab032d7d35..b7b9b566bd86 100644
+--- a/fs/crypto/fscrypt_private.h
++++ b/fs/crypto/fscrypt_private.h
+@@ -67,7 +67,6 @@ struct fscrypt_info {
+ u8 ci_filename_mode;
+ u8 ci_flags;
+ struct crypto_skcipher *ci_ctfm;
+- struct key *ci_keyring_key;
+ u8 ci_master_key[FS_KEY_DESCRIPTOR_SIZE];
+ };
+
+@@ -87,7 +86,4 @@ struct fscrypt_completion_result {
+ /* crypto.c */
+ int fscrypt_initialize(unsigned int cop_flags);
+
+-/* keyinfo.c */
+-extern int fscrypt_get_crypt_info(struct inode *);
+-
+ #endif /* _FSCRYPT_PRIVATE_H */
+diff --git a/fs/crypto/keyinfo.c b/fs/crypto/keyinfo.c
+index 95cd4c3b06c3..6df6ad3af432 100644
+--- a/fs/crypto/keyinfo.c
++++ b/fs/crypto/keyinfo.c
+@@ -99,6 +99,7 @@ static int validate_user_key(struct fscrypt_info *crypt_info,
+ kfree(full_key_descriptor);
+ if (IS_ERR(keyring_key))
+ return PTR_ERR(keyring_key);
++ down_read(&keyring_key->sem);
+
+ if (keyring_key->type != &key_type_logon) {
+ printk_once(KERN_WARNING
+@@ -106,11 +107,9 @@ static int validate_user_key(struct fscrypt_info *crypt_info,
+ res = -ENOKEY;
+ goto out;
+ }
+- down_read(&keyring_key->sem);
+ ukp = user_key_payload(keyring_key);
+ if (ukp->datalen != sizeof(struct fscrypt_key)) {
+ res = -EINVAL;
+- up_read(&keyring_key->sem);
+ goto out;
+ }
+ master_key = (struct fscrypt_key *)ukp->data;
+@@ -121,17 +120,11 @@ static int validate_user_key(struct fscrypt_info *crypt_info,
+ "%s: key size incorrect: %d\n",
+ __func__, master_key->size);
+ res = -ENOKEY;
+- up_read(&keyring_key->sem);
+ goto out;
+ }
+ res = derive_key_aes(ctx->nonce, master_key->raw, raw_key);
+- up_read(&keyring_key->sem);
+- if (res)
+- goto out;
+-
+- crypt_info->ci_keyring_key = keyring_key;
+- return 0;
+ out:
++ up_read(&keyring_key->sem);
+ key_put(keyring_key);
+ return res;
+ }
+@@ -173,12 +166,11 @@ static void put_crypt_info(struct fscrypt_info *ci)
+ if (!ci)
+ return;
+
+- key_put(ci->ci_keyring_key);
+ crypto_free_skcipher(ci->ci_ctfm);
+ kmem_cache_free(fscrypt_info_cachep, ci);
+ }
+
+-int fscrypt_get_crypt_info(struct inode *inode)
++int fscrypt_get_encryption_info(struct inode *inode)
+ {
+ struct fscrypt_info *crypt_info;
+ struct fscrypt_context ctx;
+@@ -188,21 +180,15 @@ int fscrypt_get_crypt_info(struct inode *inode)
+ u8 *raw_key = NULL;
+ int res;
+
++ if (inode->i_crypt_info)
++ return 0;
++
+ res = fscrypt_initialize(inode->i_sb->s_cop->flags);
+ if (res)
+ return res;
+
+ if (!inode->i_sb->s_cop->get_context)
+ return -EOPNOTSUPP;
+-retry:
+- crypt_info = ACCESS_ONCE(inode->i_crypt_info);
+- if (crypt_info) {
+- if (!crypt_info->ci_keyring_key ||
+- key_validate(crypt_info->ci_keyring_key) == 0)
+- return 0;
+- fscrypt_put_encryption_info(inode, crypt_info);
+- goto retry;
+- }
+
+ res = inode->i_sb->s_cop->get_context(inode, &ctx, sizeof(ctx));
+ if (res < 0) {
+@@ -230,7 +216,6 @@ int fscrypt_get_crypt_info(struct inode *inode)
+ crypt_info->ci_data_mode = ctx.contents_encryption_mode;
+ crypt_info->ci_filename_mode = ctx.filenames_encryption_mode;
+ crypt_info->ci_ctfm = NULL;
+- crypt_info->ci_keyring_key = NULL;
+ memcpy(crypt_info->ci_master_key, ctx.master_key_descriptor,
+ sizeof(crypt_info->ci_master_key));
+
+@@ -286,14 +271,8 @@ int fscrypt_get_crypt_info(struct inode *inode)
+ if (res)
+ goto out;
+
+- kzfree(raw_key);
+- raw_key = NULL;
+- if (cmpxchg(&inode->i_crypt_info, NULL, crypt_info) != NULL) {
+- put_crypt_info(crypt_info);
+- goto retry;
+- }
+- return 0;
+-
++ if (cmpxchg(&inode->i_crypt_info, NULL, crypt_info) == NULL)
++ crypt_info = NULL;
+ out:
+ if (res == -ENOKEY)
+ res = 0;
+@@ -301,6 +280,7 @@ int fscrypt_get_crypt_info(struct inode *inode)
+ kzfree(raw_key);
+ return res;
+ }
++EXPORT_SYMBOL(fscrypt_get_encryption_info);
+
+ void fscrypt_put_encryption_info(struct inode *inode, struct fscrypt_info *ci)
+ {
+@@ -318,17 +298,3 @@ void fscrypt_put_encryption_info(struct inode *inode, struct fscrypt_info *ci)
+ put_crypt_info(ci);
+ }
+ EXPORT_SYMBOL(fscrypt_put_encryption_info);
+-
+-int fscrypt_get_encryption_info(struct inode *inode)
+-{
+- struct fscrypt_info *ci = inode->i_crypt_info;
+-
+- if (!ci ||
+- (ci->ci_keyring_key &&
+- (ci->ci_keyring_key->flags & ((1 << KEY_FLAG_INVALIDATED) |
+- (1 << KEY_FLAG_REVOKED) |
+- (1 << KEY_FLAG_DEAD)))))
+- return fscrypt_get_crypt_info(inode);
+- return 0;
+-}
+-EXPORT_SYMBOL(fscrypt_get_encryption_info);
+diff --git a/fs/ext4/inline.c b/fs/ext4/inline.c
+index 627ace344739..b6a38ecbca00 100644
+--- a/fs/ext4/inline.c
++++ b/fs/ext4/inline.c
+@@ -1167,10 +1167,9 @@ static int ext4_finish_convert_inline_dir(handle_t *handle,
+ set_buffer_uptodate(dir_block);
+ err = ext4_handle_dirty_dirent_node(handle, inode, dir_block);
+ if (err)
+- goto out;
++ return err;
+ set_buffer_verified(dir_block);
+-out:
+- return err;
++ return ext4_mark_inode_dirty(handle, inode);
+ }
+
+ static int ext4_convert_inline_data_nolock(handle_t *handle,
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index c40bd55b6400..7b5a683defe6 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -131,31 +131,26 @@ static __le32 ext4_xattr_block_csum(struct inode *inode,
+ }
+
+ static int ext4_xattr_block_csum_verify(struct inode *inode,
+- sector_t block_nr,
+- struct ext4_xattr_header *hdr)
++ struct buffer_head *bh)
+ {
+- if (ext4_has_metadata_csum(inode->i_sb) &&
+- (hdr->h_checksum != ext4_xattr_block_csum(inode, block_nr, hdr)))
+- return 0;
+- return 1;
+-}
+-
+-static void ext4_xattr_block_csum_set(struct inode *inode,
+- sector_t block_nr,
+- struct ext4_xattr_header *hdr)
+-{
+- if (!ext4_has_metadata_csum(inode->i_sb))
+- return;
++ struct ext4_xattr_header *hdr = BHDR(bh);
++ int ret = 1;
+
+- hdr->h_checksum = ext4_xattr_block_csum(inode, block_nr, hdr);
++ if (ext4_has_metadata_csum(inode->i_sb)) {
++ lock_buffer(bh);
++ ret = (hdr->h_checksum == ext4_xattr_block_csum(inode,
++ bh->b_blocknr, hdr));
++ unlock_buffer(bh);
++ }
++ return ret;
+ }
+
+-static inline int ext4_handle_dirty_xattr_block(handle_t *handle,
+- struct inode *inode,
+- struct buffer_head *bh)
++static void ext4_xattr_block_csum_set(struct inode *inode,
++ struct buffer_head *bh)
+ {
+- ext4_xattr_block_csum_set(inode, bh->b_blocknr, BHDR(bh));
+- return ext4_handle_dirty_metadata(handle, inode, bh);
++ if (ext4_has_metadata_csum(inode->i_sb))
++ BHDR(bh)->h_checksum = ext4_xattr_block_csum(inode,
++ bh->b_blocknr, BHDR(bh));
+ }
+
+ static inline const struct xattr_handler *
+@@ -233,7 +228,7 @@ ext4_xattr_check_block(struct inode *inode, struct buffer_head *bh)
+ if (BHDR(bh)->h_magic != cpu_to_le32(EXT4_XATTR_MAGIC) ||
+ BHDR(bh)->h_blocks != cpu_to_le32(1))
+ return -EFSCORRUPTED;
+- if (!ext4_xattr_block_csum_verify(inode, bh->b_blocknr, BHDR(bh)))
++ if (!ext4_xattr_block_csum_verify(inode, bh))
+ return -EFSBADCRC;
+ error = ext4_xattr_check_names(BFIRST(bh), bh->b_data + bh->b_size,
+ bh->b_data);
+@@ -615,23 +610,22 @@ ext4_xattr_release_block(handle_t *handle, struct inode *inode,
+ }
+ }
+
++ ext4_xattr_block_csum_set(inode, bh);
+ /*
+ * Beware of this ugliness: Releasing of xattr block references
+ * from different inodes can race and so we have to protect
+ * from a race where someone else frees the block (and releases
+ * its journal_head) before we are done dirtying the buffer. In
+ * nojournal mode this race is harmless and we actually cannot
+- * call ext4_handle_dirty_xattr_block() with locked buffer as
++ * call ext4_handle_dirty_metadata() with locked buffer as
+ * that function can call sync_dirty_buffer() so for that case
+ * we handle the dirtying after unlocking the buffer.
+ */
+ if (ext4_handle_valid(handle))
+- error = ext4_handle_dirty_xattr_block(handle, inode,
+- bh);
++ error = ext4_handle_dirty_metadata(handle, inode, bh);
+ unlock_buffer(bh);
+ if (!ext4_handle_valid(handle))
+- error = ext4_handle_dirty_xattr_block(handle, inode,
+- bh);
++ error = ext4_handle_dirty_metadata(handle, inode, bh);
+ if (IS_SYNC(inode))
+ ext4_handle_sync(handle);
+ dquot_free_block(inode, EXT4_C2B(EXT4_SB(inode->i_sb), 1));
+@@ -860,13 +854,14 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
+ ext4_xattr_cache_insert(ext4_mb_cache,
+ bs->bh);
+ }
++ ext4_xattr_block_csum_set(inode, bs->bh);
+ unlock_buffer(bs->bh);
+ if (error == -EFSCORRUPTED)
+ goto bad_block;
+ if (!error)
+- error = ext4_handle_dirty_xattr_block(handle,
+- inode,
+- bs->bh);
++ error = ext4_handle_dirty_metadata(handle,
++ inode,
++ bs->bh);
+ if (error)
+ goto cleanup;
+ goto inserted;
+@@ -964,10 +959,11 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
+ ce->e_reusable = 0;
+ ea_bdebug(new_bh, "reusing; refcount now=%d",
+ ref);
++ ext4_xattr_block_csum_set(inode, new_bh);
+ unlock_buffer(new_bh);
+- error = ext4_handle_dirty_xattr_block(handle,
+- inode,
+- new_bh);
++ error = ext4_handle_dirty_metadata(handle,
++ inode,
++ new_bh);
+ if (error)
+ goto cleanup_dquot;
+ }
+@@ -1017,11 +1013,12 @@ ext4_xattr_block_set(handle_t *handle, struct inode *inode,
+ goto getblk_failed;
+ }
+ memcpy(new_bh->b_data, s->base, new_bh->b_size);
++ ext4_xattr_block_csum_set(inode, new_bh);
+ set_buffer_uptodate(new_bh);
+ unlock_buffer(new_bh);
+ ext4_xattr_cache_insert(ext4_mb_cache, new_bh);
+- error = ext4_handle_dirty_xattr_block(handle,
+- inode, new_bh);
++ error = ext4_handle_dirty_metadata(handle, inode,
++ new_bh);
+ if (error)
+ goto cleanup;
+ }
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index a097048ed1a3..bdc3afad4a8c 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1125,10 +1125,8 @@ static journal_t *journal_init_common(struct block_device *bdev,
+
+ /* Set up a default-sized revoke table for the new mount. */
+ err = jbd2_journal_init_revoke(journal, JOURNAL_REVOKE_DEFAULT_HASH);
+- if (err) {
+- kfree(journal);
+- return NULL;
+- }
++ if (err)
++ goto err_cleanup;
+
+ spin_lock_init(&journal->j_history_lock);
+
+@@ -1145,23 +1143,25 @@ static journal_t *journal_init_common(struct block_device *bdev,
+ journal->j_wbufsize = n;
+ journal->j_wbuf = kmalloc_array(n, sizeof(struct buffer_head *),
+ GFP_KERNEL);
+- if (!journal->j_wbuf) {
+- kfree(journal);
+- return NULL;
+- }
++ if (!journal->j_wbuf)
++ goto err_cleanup;
+
+ bh = getblk_unmovable(journal->j_dev, start, journal->j_blocksize);
+ if (!bh) {
+ pr_err("%s: Cannot get buffer for journal superblock\n",
+ __func__);
+- kfree(journal->j_wbuf);
+- kfree(journal);
+- return NULL;
++ goto err_cleanup;
+ }
+ journal->j_sb_buffer = bh;
+ journal->j_superblock = (journal_superblock_t *)bh->b_data;
+
+ return journal;
++
++err_cleanup:
++ kfree(journal->j_wbuf);
++ jbd2_journal_destroy_revoke(journal);
++ kfree(journal);
++ return NULL;
+ }
+
+ /* jbd2_journal_init_dev and jbd2_journal_init_inode:
+diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
+index cfc38b552118..f9aefcda5854 100644
+--- a/fs/jbd2/revoke.c
++++ b/fs/jbd2/revoke.c
+@@ -280,6 +280,7 @@ int jbd2_journal_init_revoke(journal_t *journal, int hash_size)
+
+ fail1:
+ jbd2_journal_destroy_revoke_table(journal->j_revoke_table[0]);
++ journal->j_revoke_table[0] = NULL;
+ fail0:
+ return -ENOMEM;
+ }
+diff --git a/include/drm/drmP.h b/include/drm/drmP.h
+index 9c4ee144b5f6..1871ca60e079 100644
+--- a/include/drm/drmP.h
++++ b/include/drm/drmP.h
+@@ -360,6 +360,7 @@ struct drm_ioctl_desc {
+ /* Event queued up for userspace to read */
+ struct drm_pending_event {
+ struct completion *completion;
++ void (*completion_release)(struct completion *completion);
+ struct drm_event *event;
+ struct dma_fence *fence;
+ struct list_head link;
+diff --git a/include/linux/ccp.h b/include/linux/ccp.h
+index c71dd8fa5764..c41b8d99dd0e 100644
+--- a/include/linux/ccp.h
++++ b/include/linux/ccp.h
+@@ -556,7 +556,7 @@ enum ccp_engine {
+ * struct ccp_cmd - CCP operation request
+ * @entry: list element (ccp driver use only)
+ * @work: work element used for callbacks (ccp driver use only)
+- * @ccp: CCP device to be run on (ccp driver use only)
++ * @ccp: CCP device to be run on
+ * @ret: operation return code (ccp driver use only)
+ * @flags: cmd processing flags
+ * @engine: CCP operation to perform
+diff --git a/include/linux/iio/sw_device.h b/include/linux/iio/sw_device.h
+index 23ca41515527..fa7931933067 100644
+--- a/include/linux/iio/sw_device.h
++++ b/include/linux/iio/sw_device.h
+@@ -62,7 +62,7 @@ void iio_swd_group_init_type_name(struct iio_sw_device *d,
+ const char *name,
+ struct config_item_type *type)
+ {
+-#ifdef CONFIG_CONFIGFS_FS
++#if IS_ENABLED(CONFIG_CONFIGFS_FS)
+ config_group_init_type_name(&d->group, name, type);
+ #endif
+ }
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 27914672602d..bdef8b7d4305 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -330,6 +330,7 @@ struct napi_struct {
+
+ enum {
+ NAPI_STATE_SCHED, /* Poll is scheduled */
++ NAPI_STATE_MISSED, /* reschedule a napi */
+ NAPI_STATE_DISABLE, /* Disable pending */
+ NAPI_STATE_NPSVC, /* Netpoll - don't dequeue from poll_list */
+ NAPI_STATE_HASHED, /* In NAPI hash (busy polling possible) */
+@@ -338,12 +339,13 @@ enum {
+ };
+
+ enum {
+- NAPIF_STATE_SCHED = (1UL << NAPI_STATE_SCHED),
+- NAPIF_STATE_DISABLE = (1UL << NAPI_STATE_DISABLE),
+- NAPIF_STATE_NPSVC = (1UL << NAPI_STATE_NPSVC),
+- NAPIF_STATE_HASHED = (1UL << NAPI_STATE_HASHED),
+- NAPIF_STATE_NO_BUSY_POLL = (1UL << NAPI_STATE_NO_BUSY_POLL),
+- NAPIF_STATE_IN_BUSY_POLL = (1UL << NAPI_STATE_IN_BUSY_POLL),
++ NAPIF_STATE_SCHED = BIT(NAPI_STATE_SCHED),
++ NAPIF_STATE_MISSED = BIT(NAPI_STATE_MISSED),
++ NAPIF_STATE_DISABLE = BIT(NAPI_STATE_DISABLE),
++ NAPIF_STATE_NPSVC = BIT(NAPI_STATE_NPSVC),
++ NAPIF_STATE_HASHED = BIT(NAPI_STATE_HASHED),
++ NAPIF_STATE_NO_BUSY_POLL = BIT(NAPI_STATE_NO_BUSY_POLL),
++ NAPIF_STATE_IN_BUSY_POLL = BIT(NAPI_STATE_IN_BUSY_POLL),
+ };
+
+ enum gro_result {
+@@ -413,20 +415,7 @@ static inline bool napi_disable_pending(struct napi_struct *n)
+ return test_bit(NAPI_STATE_DISABLE, &n->state);
+ }
+
+-/**
+- * napi_schedule_prep - check if NAPI can be scheduled
+- * @n: NAPI context
+- *
+- * Test if NAPI routine is already running, and if not mark
+- * it as running. This is used as a condition variable to
+- * insure only one NAPI poll instance runs. We also make
+- * sure there is no pending NAPI disable.
+- */
+-static inline bool napi_schedule_prep(struct napi_struct *n)
+-{
+- return !napi_disable_pending(n) &&
+- !test_and_set_bit(NAPI_STATE_SCHED, &n->state);
+-}
++bool napi_schedule_prep(struct napi_struct *n);
+
+ /**
+ * napi_schedule - schedule NAPI poll
+diff --git a/include/linux/usb/quirks.h b/include/linux/usb/quirks.h
+index 1d0043dc34e4..de2a722fe3cf 100644
+--- a/include/linux/usb/quirks.h
++++ b/include/linux/usb/quirks.h
+@@ -50,4 +50,10 @@
+ /* device can't handle Link Power Management */
+ #define USB_QUIRK_NO_LPM BIT(10)
+
++/*
++ * Device reports its bInterval as linear frames instead of the
++ * USB 2.0 calculation.
++ */
++#define USB_QUIRK_LINEAR_FRAME_INTR_BINTERVAL BIT(11)
++
+ #endif /* __LINUX_USB_QUIRKS_H */
+diff --git a/kernel/audit.c b/kernel/audit.c
+index 6e399bb69d7c..ba4481d20fa1 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -54,6 +54,10 @@
+ #include <linux/kthread.h>
+ #include <linux/kernel.h>
+ #include <linux/syscalls.h>
++#include <linux/spinlock.h>
++#include <linux/rcupdate.h>
++#include <linux/mutex.h>
++#include <linux/gfp.h>
+
+ #include <linux/audit.h>
+
+@@ -90,13 +94,34 @@ static u32 audit_default;
+ /* If auditing cannot proceed, audit_failure selects what happens. */
+ static u32 audit_failure = AUDIT_FAIL_PRINTK;
+
+-/*
+- * If audit records are to be written to the netlink socket, audit_pid
+- * contains the pid of the auditd process and audit_nlk_portid contains
+- * the portid to use to send netlink messages to that process.
++/* private audit network namespace index */
++static unsigned int audit_net_id;
++
++/**
++ * struct audit_net - audit private network namespace data
++ * @sk: communication socket
++ */
++struct audit_net {
++ struct sock *sk;
++};
++
++/**
++ * struct auditd_connection - kernel/auditd connection state
++ * @pid: auditd PID
++ * @portid: netlink portid
++ * @net: the associated network namespace
++ * @lock: spinlock to protect write access
++ *
++ * Description:
++ * This struct is RCU protected; you must either hold the RCU lock for reading
++ * or the included spinlock for writing.
+ */
+-int audit_pid;
+-static __u32 audit_nlk_portid;
++static struct auditd_connection {
++ int pid;
++ u32 portid;
++ struct net *net;
++ spinlock_t lock;
++} auditd_conn;
+
+ /* If audit_rate_limit is non-zero, limit the rate of sending audit records
+ * to that number per second. This prevents DoS attacks, but results in
+@@ -123,10 +148,6 @@ u32 audit_sig_sid = 0;
+ */
+ static atomic_t audit_lost = ATOMIC_INIT(0);
+
+-/* The netlink socket. */
+-static struct sock *audit_sock;
+-static unsigned int audit_net_id;
+-
+ /* Hash for inode-based rules */
+ struct list_head audit_inode_hash[AUDIT_INODE_BUCKETS];
+
+@@ -139,6 +160,7 @@ static LIST_HEAD(audit_freelist);
+
+ /* queue msgs to send via kauditd_task */
+ static struct sk_buff_head audit_queue;
++static void kauditd_hold_skb(struct sk_buff *skb);
+ /* queue msgs due to temporary unicast send problems */
+ static struct sk_buff_head audit_retry_queue;
+ /* queue msgs waiting for new auditd connection */
+@@ -192,6 +214,43 @@ struct audit_reply {
+ struct sk_buff *skb;
+ };
+
++/**
++ * auditd_test_task - Check to see if a given task is an audit daemon
++ * @task: the task to check
++ *
++ * Description:
++ * Return 1 if the task is a registered audit daemon, 0 otherwise.
++ */
++int auditd_test_task(const struct task_struct *task)
++{
++ int rc;
++
++ rcu_read_lock();
++ rc = (auditd_conn.pid && task->tgid == auditd_conn.pid ? 1 : 0);
++ rcu_read_unlock();
++
++ return rc;
++}
++
++/**
++ * audit_get_sk - Return the audit socket for the given network namespace
++ * @net: the destination network namespace
++ *
++ * Description:
++ * Returns the sock pointer if valid, NULL otherwise. The caller must ensure
++ * that a reference is held for the network namespace while the sock is in use.
++ */
++static struct sock *audit_get_sk(const struct net *net)
++{
++ struct audit_net *aunet;
++
++ if (!net)
++ return NULL;
++
++ aunet = net_generic(net, audit_net_id);
++ return aunet->sk;
++}
++
+ static void audit_set_portid(struct audit_buffer *ab, __u32 portid)
+ {
+ if (ab) {
+@@ -210,9 +269,7 @@ void audit_panic(const char *message)
+ pr_err("%s\n", message);
+ break;
+ case AUDIT_FAIL_PANIC:
+- /* test audit_pid since printk is always losey, why bother? */
+- if (audit_pid)
+- panic("audit: %s\n", message);
++ panic("audit: %s\n", message);
+ break;
+ }
+ }
+@@ -370,21 +427,87 @@ static int audit_set_failure(u32 state)
+ return audit_do_config_change("audit_failure", &audit_failure, state);
+ }
+
+-/*
+- * For one reason or another this nlh isn't getting delivered to the userspace
+- * audit daemon, just send it to printk.
++/**
++ * auditd_set - Set/Reset the auditd connection state
++ * @pid: auditd PID
++ * @portid: auditd netlink portid
++ * @net: auditd network namespace pointer
++ *
++ * Description:
++ * This function will obtain and drop network namespace references as
++ * necessary.
++ */
++static void auditd_set(int pid, u32 portid, struct net *net)
++{
++ unsigned long flags;
++
++ spin_lock_irqsave(&auditd_conn.lock, flags);
++ auditd_conn.pid = pid;
++ auditd_conn.portid = portid;
++ if (auditd_conn.net)
++ put_net(auditd_conn.net);
++ if (net)
++ auditd_conn.net = get_net(net);
++ else
++ auditd_conn.net = NULL;
++ spin_unlock_irqrestore(&auditd_conn.lock, flags);
++}
++
++/**
++ * auditd_reset - Disconnect the auditd connection
++ *
++ * Description:
++ * Break the auditd/kauditd connection and move all the queued records into the
++ * hold queue in case auditd reconnects.
++ */
++static void auditd_reset(void)
++{
++ struct sk_buff *skb;
++
++ /* if it isn't already broken, break the connection */
++ rcu_read_lock();
++ if (auditd_conn.pid)
++ auditd_set(0, 0, NULL);
++ rcu_read_unlock();
++
++ /* flush all of the main and retry queues to the hold queue */
++ while ((skb = skb_dequeue(&audit_retry_queue)))
++ kauditd_hold_skb(skb);
++ while ((skb = skb_dequeue(&audit_queue)))
++ kauditd_hold_skb(skb);
++}
++
++/**
++ * kauditd_print_skb - Print the audit record to the ring buffer
++ * @skb: audit record
++ *
++ * Whatever the reason, this packet may not make it to the auditd connection,
++ * so write it via printk to ensure the information isn't completely lost.
+ */
+ static void kauditd_printk_skb(struct sk_buff *skb)
+ {
+ struct nlmsghdr *nlh = nlmsg_hdr(skb);
+ char *data = nlmsg_data(nlh);
+
+- if (nlh->nlmsg_type != AUDIT_EOE) {
+- if (printk_ratelimit())
+- pr_notice("type=%d %s\n", nlh->nlmsg_type, data);
+- else
+- audit_log_lost("printk limit exceeded");
+- }
++ if (nlh->nlmsg_type != AUDIT_EOE && printk_ratelimit())
++ pr_notice("type=%d %s\n", nlh->nlmsg_type, data);
++}
++
++/**
++ * kauditd_rehold_skb - Handle an audit record send failure in the hold queue
++ * @skb: audit record
++ *
++ * Description:
++ * This should only be used by the kauditd_thread when it fails to flush the
++ * hold queue.
++ */
++static void kauditd_rehold_skb(struct sk_buff *skb)
++{
++ /* put the record back in the queue at the same place */
++ skb_queue_head(&audit_hold_queue, skb);
++
++ /* fail the auditd connection */
++ auditd_reset();
+ }
+
+ /**
+@@ -421,6 +544,9 @@ static void kauditd_hold_skb(struct sk_buff *skb)
+ /* we have no other options - drop the message */
+ audit_log_lost("kauditd hold queue overflow");
+ kfree_skb(skb);
++
++ /* fail the auditd connection */
++ auditd_reset();
+ }
+
+ /**
+@@ -441,51 +567,122 @@ static void kauditd_retry_skb(struct sk_buff *skb)
+ }
+
+ /**
+- * auditd_reset - Disconnect the auditd connection
++ * auditd_send_unicast_skb - Send a record via unicast to auditd
++ * @skb: audit record
+ *
+ * Description:
+- * Break the auditd/kauditd connection and move all the records in the retry
+- * queue into the hold queue in case auditd reconnects. The audit_cmd_mutex
+- * must be held when calling this function.
++ * Send a skb to the audit daemon, returns positive/zero values on success and
++ * negative values on failure; in all cases the skb will be consumed by this
++ * function. If the send results in -ECONNREFUSED the connection with auditd
++ * will be reset. This function may sleep so callers should not hold any locks
++ * where this would cause a problem.
+ */
+-static void auditd_reset(void)
++static int auditd_send_unicast_skb(struct sk_buff *skb)
+ {
+- struct sk_buff *skb;
+-
+- /* break the connection */
+- if (audit_sock) {
+- sock_put(audit_sock);
+- audit_sock = NULL;
++ int rc;
++ u32 portid;
++ struct net *net;
++ struct sock *sk;
++
++ /* NOTE: we can't call netlink_unicast while in the RCU section so
++ * take a reference to the network namespace and grab local
++ * copies of the namespace, the sock, and the portid; the
++ * namespace and sock aren't going to go away while we hold a
++ * reference and if the portid does become invalid after the RCU
++ * section netlink_unicast() should safely return an error */
++
++ rcu_read_lock();
++ if (!auditd_conn.pid) {
++ rcu_read_unlock();
++ rc = -ECONNREFUSED;
++ goto err;
+ }
+- audit_pid = 0;
+- audit_nlk_portid = 0;
++ net = auditd_conn.net;
++ get_net(net);
++ sk = audit_get_sk(net);
++ portid = auditd_conn.portid;
++ rcu_read_unlock();
+
+- /* flush all of the retry queue to the hold queue */
+- while ((skb = skb_dequeue(&audit_retry_queue)))
+- kauditd_hold_skb(skb);
++ rc = netlink_unicast(sk, skb, portid, 0);
++ put_net(net);
++ if (rc < 0)
++ goto err;
++
++ return rc;
++
++err:
++ if (rc == -ECONNREFUSED)
++ auditd_reset();
++ return rc;
+ }
+
+ /**
+- * kauditd_send_unicast_skb - Send a record via unicast to auditd
+- * @skb: audit record
++ * kauditd_send_queue - Helper for kauditd_thread to flush skb queues
++ * @sk: the sending sock
++ * @portid: the netlink destination
++ * @queue: the skb queue to process
++ * @retry_limit: limit on number of netlink unicast failures
++ * @skb_hook: per-skb hook for additional processing
++ * @err_hook: hook called if the skb fails the netlink unicast send
++ *
++ * Description:
++ * Run through the given queue and attempt to send the audit records to auditd;
++ * returns zero on success, negative values on failure. It is up to the caller
++ * to ensure that the @sk is valid for the duration of this function.
++ *
+ */
+-static int kauditd_send_unicast_skb(struct sk_buff *skb)
++static int kauditd_send_queue(struct sock *sk, u32 portid,
++ struct sk_buff_head *queue,
++ unsigned int retry_limit,
++ void (*skb_hook)(struct sk_buff *skb),
++ void (*err_hook)(struct sk_buff *skb))
+ {
+- int rc;
++ int rc = 0;
++ struct sk_buff *skb;
++ static unsigned int failed = 0;
+
+- /* if we know nothing is connected, don't even try the netlink call */
+- if (!audit_pid)
+- return -ECONNREFUSED;
++ /* NOTE: kauditd_thread takes care of all our locking, we just use
++ * the netlink info passed to us (e.g. sk and portid) */
++
++ while ((skb = skb_dequeue(queue))) {
++ /* call the skb_hook for each skb we touch */
++ if (skb_hook)
++ (*skb_hook)(skb);
++
++ /* can we send to anyone via unicast? */
++ if (!sk) {
++ if (err_hook)
++ (*err_hook)(skb);
++ continue;
++ }
+
+- /* get an extra skb reference in case we fail to send */
+- skb_get(skb);
+- rc = netlink_unicast(audit_sock, skb, audit_nlk_portid, 0);
+- if (rc >= 0) {
+- consume_skb(skb);
+- rc = 0;
++ /* grab an extra skb reference in case of error */
++ skb_get(skb);
++ rc = netlink_unicast(sk, skb, portid, 0);
++ if (rc < 0) {
++ /* fatal failure for our queue flush attempt? */
++ if (++failed >= retry_limit ||
++ rc == -ECONNREFUSED || rc == -EPERM) {
++ /* yes - error processing for the queue */
++ sk = NULL;
++ if (err_hook)
++ (*err_hook)(skb);
++ if (!skb_hook)
++ goto out;
++ /* keep processing with the skb_hook */
++ continue;
++ } else
++ /* no - requeue to preserve ordering */
++ skb_queue_head(queue, skb);
++ } else {
++ /* it worked - drop the extra reference and continue */
++ consume_skb(skb);
++ failed = 0;
++ }
+ }
+
+- return rc;
++out:
++ return (rc >= 0 ? 0 : rc);
+ }
+
+ /*
+@@ -493,16 +690,19 @@ static int kauditd_send_unicast_skb(struct sk_buff *skb)
+ * @skb: audit record
+ *
+ * Description:
+- * This function doesn't consume an skb as might be expected since it has to
+- * copy it anyways.
++ * Write a multicast message to anyone listening in the initial network
++ * namespace. This function doesn't consume an skb as might be expected since
++ * it has to copy it anyway.
+ */
+ static void kauditd_send_multicast_skb(struct sk_buff *skb)
+ {
+ struct sk_buff *copy;
+- struct audit_net *aunet = net_generic(&init_net, audit_net_id);
+- struct sock *sock = aunet->nlsk;
++ struct sock *sock = audit_get_sk(&init_net);
+ struct nlmsghdr *nlh;
+
++ /* NOTE: we are not taking an additional reference for init_net since
++ * we don't have to worry about it going away */
++
+ if (!netlink_has_listeners(sock, AUDIT_NLGRP_READLOG))
+ return;
+
+@@ -526,149 +726,75 @@ static void kauditd_send_multicast_skb(struct sk_buff *skb)
+ }
+
+ /**
+- * kauditd_wake_condition - Return true when it is time to wake kauditd_thread
+- *
+- * Description:
+- * This function is for use by the wait_event_freezable() call in
+- * kauditd_thread().
++ * kauditd_thread - Worker thread to send audit records to userspace
++ * @dummy: unused
+ */
+-static int kauditd_wake_condition(void)
+-{
+- static int pid_last = 0;
+- int rc;
+- int pid = audit_pid;
+-
+- /* wake on new messages or a change in the connected auditd */
+- rc = skb_queue_len(&audit_queue) || (pid && pid != pid_last);
+- if (rc)
+- pid_last = pid;
+-
+- return rc;
+-}
+-
+ static int kauditd_thread(void *dummy)
+ {
+ int rc;
+- int auditd = 0;
+- int reschedule = 0;
+- struct sk_buff *skb;
+- struct nlmsghdr *nlh;
++ u32 portid = 0;
++ struct net *net = NULL;
++ struct sock *sk = NULL;
+
+ #define UNICAST_RETRIES 5
+-#define AUDITD_BAD(x,y) \
+- ((x) == -ECONNREFUSED || (x) == -EPERM || ++(y) >= UNICAST_RETRIES)
+-
+- /* NOTE: we do invalidate the auditd connection flag on any sending
+- * errors, but we only "restore" the connection flag at specific places
+- * in the loop in order to help ensure proper ordering of audit
+- * records */
+
+ set_freezable();
+ while (!kthread_should_stop()) {
+- /* NOTE: possible area for future improvement is to look at
+- * the hold and retry queues, since only this thread
+- * has access to these queues we might be able to do
+- * our own queuing and skip some/all of the locking */
+-
+- /* NOTE: it might be a fun experiment to split the hold and
+- * retry queue handling to another thread, but the
+- * synchronization issues and other overhead might kill
+- * any performance gains */
++ /* NOTE: see the lock comments in auditd_send_unicast_skb() */
++ rcu_read_lock();
++ if (!auditd_conn.pid) {
++ rcu_read_unlock();
++ goto main_queue;
++ }
++ net = auditd_conn.net;
++ get_net(net);
++ sk = audit_get_sk(net);
++ portid = auditd_conn.portid;
++ rcu_read_unlock();
+
+ /* attempt to flush the hold queue */
+- while (auditd && (skb = skb_dequeue(&audit_hold_queue))) {
+- rc = kauditd_send_unicast_skb(skb);
+- if (rc) {
+- /* requeue to the same spot */
+- skb_queue_head(&audit_hold_queue, skb);
+-
+- auditd = 0;
+- if (AUDITD_BAD(rc, reschedule)) {
+- mutex_lock(&audit_cmd_mutex);
+- auditd_reset();
+- mutex_unlock(&audit_cmd_mutex);
+- reschedule = 0;
+- }
+- } else
+- /* we were able to send successfully */
+- reschedule = 0;
++ rc = kauditd_send_queue(sk, portid,
++ &audit_hold_queue, UNICAST_RETRIES,
++ NULL, kauditd_rehold_skb);
++ if (rc < 0) {
++ sk = NULL;
++ goto main_queue;
+ }
+
+ /* attempt to flush the retry queue */
+- while (auditd && (skb = skb_dequeue(&audit_retry_queue))) {
+- rc = kauditd_send_unicast_skb(skb);
+- if (rc) {
+- auditd = 0;
+- if (AUDITD_BAD(rc, reschedule)) {
+- kauditd_hold_skb(skb);
+- mutex_lock(&audit_cmd_mutex);
+- auditd_reset();
+- mutex_unlock(&audit_cmd_mutex);
+- reschedule = 0;
+- } else
+- /* temporary problem (we hope), queue
+- * to the same spot and retry */
+- skb_queue_head(&audit_retry_queue, skb);
+- } else
+- /* we were able to send successfully */
+- reschedule = 0;
++ rc = kauditd_send_queue(sk, portid,
++ &audit_retry_queue, UNICAST_RETRIES,
++ NULL, kauditd_hold_skb);
++ if (rc < 0) {
++ sk = NULL;
++ goto main_queue;
+ }
+
+- /* standard queue processing, try to be as quick as possible */
+-quick_loop:
+- skb = skb_dequeue(&audit_queue);
+- if (skb) {
+- /* setup the netlink header, see the comments in
+- * kauditd_send_multicast_skb() for length quirks */
+- nlh = nlmsg_hdr(skb);
+- nlh->nlmsg_len = skb->len - NLMSG_HDRLEN;
+-
+- /* attempt to send to any multicast listeners */
+- kauditd_send_multicast_skb(skb);
+-
+- /* attempt to send to auditd, queue on failure */
+- if (auditd) {
+- rc = kauditd_send_unicast_skb(skb);
+- if (rc) {
+- auditd = 0;
+- if (AUDITD_BAD(rc, reschedule)) {
+- mutex_lock(&audit_cmd_mutex);
+- auditd_reset();
+- mutex_unlock(&audit_cmd_mutex);
+- reschedule = 0;
+- }
+-
+- /* move to the retry queue */
+- kauditd_retry_skb(skb);
+- } else
+- /* everything is working so go fast! */
+- goto quick_loop;
+- } else if (reschedule)
+- /* we are currently having problems, move to
+- * the retry queue */
+- kauditd_retry_skb(skb);
+- else
+- /* dump the message via printk and hold it */
+- kauditd_hold_skb(skb);
+- } else {
+- /* we have flushed the backlog so wake everyone */
+- wake_up(&audit_backlog_wait);
+-
+- /* if everything is okay with auditd (if present), go
+- * to sleep until there is something new in the queue
+- * or we have a change in the connected auditd;
+- * otherwise simply reschedule to give things a chance
+- * to recover */
+- if (reschedule) {
+- set_current_state(TASK_INTERRUPTIBLE);
+- schedule();
+- } else
+- wait_event_freezable(kauditd_wait,
+- kauditd_wake_condition());
+-
+- /* update the auditd connection status */
+- auditd = (audit_pid ? 1 : 0);
++main_queue:
++ /* process the main queue - do the multicast send and attempt
++ * unicast, dump failed record sends to the retry queue; if
++ * sk == NULL due to previous failures we will just do the
++ * multicast send and move the record to the retry queue */
++ kauditd_send_queue(sk, portid, &audit_queue, 1,
++ kauditd_send_multicast_skb,
++ kauditd_retry_skb);
++
++ /* drop our netns reference, no auditd sends past this line */
++ if (net) {
++ put_net(net);
++ net = NULL;
+ }
++ sk = NULL;
++
++ /* we have processed all the queues so wake everyone */
++ wake_up(&audit_backlog_wait);
++
++ /* NOTE: we want to wake up if there is anything on the queue,
++ * regardless of if an auditd is connected, as we need to
++ * do the multicast send and rotate records from the
++ * main queue to the retry/hold queues */
++ wait_event_freezable(kauditd_wait,
++ (skb_queue_len(&audit_queue) ? 1 : 0));
+ }
+
+ return 0;
+@@ -678,17 +804,16 @@ int audit_send_list(void *_dest)
+ {
+ struct audit_netlink_list *dest = _dest;
+ struct sk_buff *skb;
+- struct net *net = dest->net;
+- struct audit_net *aunet = net_generic(net, audit_net_id);
++ struct sock *sk = audit_get_sk(dest->net);
+
+ /* wait for parent to finish and send an ACK */
+ mutex_lock(&audit_cmd_mutex);
+ mutex_unlock(&audit_cmd_mutex);
+
+ while ((skb = __skb_dequeue(&dest->q)) != NULL)
+- netlink_unicast(aunet->nlsk, skb, dest->portid, 0);
++ netlink_unicast(sk, skb, dest->portid, 0);
+
+- put_net(net);
++ put_net(dest->net);
+ kfree(dest);
+
+ return 0;
+@@ -722,16 +847,15 @@ struct sk_buff *audit_make_reply(__u32 portid, int seq, int type, int done,
+ static int audit_send_reply_thread(void *arg)
+ {
+ struct audit_reply *reply = (struct audit_reply *)arg;
+- struct net *net = reply->net;
+- struct audit_net *aunet = net_generic(net, audit_net_id);
++ struct sock *sk = audit_get_sk(reply->net);
+
+ mutex_lock(&audit_cmd_mutex);
+ mutex_unlock(&audit_cmd_mutex);
+
+ /* Ignore failure. It'll only happen if the sender goes away,
+ because our timeout is set to infinite. */
+- netlink_unicast(aunet->nlsk , reply->skb, reply->portid, 0);
+- put_net(net);
++ netlink_unicast(sk, reply->skb, reply->portid, 0);
++ put_net(reply->net);
+ kfree(reply);
+ return 0;
+ }
+@@ -949,12 +1073,12 @@ static int audit_set_feature(struct sk_buff *skb)
+
+ static int audit_replace(pid_t pid)
+ {
+- struct sk_buff *skb = audit_make_reply(0, 0, AUDIT_REPLACE, 0, 0,
+- &pid, sizeof(pid));
++ struct sk_buff *skb;
+
++ skb = audit_make_reply(0, 0, AUDIT_REPLACE, 0, 0, &pid, sizeof(pid));
+ if (!skb)
+ return -ENOMEM;
+- return netlink_unicast(audit_sock, skb, audit_nlk_portid, 0);
++ return auditd_send_unicast_skb(skb);
+ }
+
+ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+@@ -981,7 +1105,9 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+ memset(&s, 0, sizeof(s));
+ s.enabled = audit_enabled;
+ s.failure = audit_failure;
+- s.pid = audit_pid;
++ rcu_read_lock();
++ s.pid = auditd_conn.pid;
++ rcu_read_unlock();
+ s.rate_limit = audit_rate_limit;
+ s.backlog_limit = audit_backlog_limit;
+ s.lost = atomic_read(&audit_lost);
+@@ -1014,30 +1140,44 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+ * from the initial pid namespace, but something
+ * to keep in mind if this changes */
+ int new_pid = s.pid;
++ pid_t auditd_pid;
+ pid_t requesting_pid = task_tgid_vnr(current);
+
+- if ((!new_pid) && (requesting_pid != audit_pid)) {
+- audit_log_config_change("audit_pid", new_pid, audit_pid, 0);
++ /* test the auditd connection */
++ audit_replace(requesting_pid);
++
++ rcu_read_lock();
++ auditd_pid = auditd_conn.pid;
++ /* only the current auditd can unregister itself */
++ if ((!new_pid) && (requesting_pid != auditd_pid)) {
++ rcu_read_unlock();
++ audit_log_config_change("audit_pid", new_pid,
++ auditd_pid, 0);
+ return -EACCES;
+ }
+- if (audit_pid && new_pid &&
+- audit_replace(requesting_pid) != -ECONNREFUSED) {
+- audit_log_config_change("audit_pid", new_pid, audit_pid, 0);
++ /* replacing a healthy auditd is not allowed */
++ if (auditd_pid && new_pid) {
++ rcu_read_unlock();
++ audit_log_config_change("audit_pid", new_pid,
++ auditd_pid, 0);
+ return -EEXIST;
+ }
++ rcu_read_unlock();
++
+ if (audit_enabled != AUDIT_OFF)
+- audit_log_config_change("audit_pid", new_pid, audit_pid, 1);
++ audit_log_config_change("audit_pid", new_pid,
++ auditd_pid, 1);
++
+ if (new_pid) {
+- if (audit_sock)
+- sock_put(audit_sock);
+- audit_pid = new_pid;
+- audit_nlk_portid = NETLINK_CB(skb).portid;
+- sock_hold(skb->sk);
+- audit_sock = skb->sk;
+- } else {
++ /* register a new auditd connection */
++ auditd_set(new_pid,
++ NETLINK_CB(skb).portid,
++ sock_net(NETLINK_CB(skb).sk));
++ /* try to process any backlog */
++ wake_up_interruptible(&kauditd_wait);
++ } else
++ /* unregister the auditd connection */
+ auditd_reset();
+- }
+- wake_up_interruptible(&kauditd_wait);
+ }
+ if (s.mask & AUDIT_STATUS_RATE_LIMIT) {
+ err = audit_set_rate_limit(s.rate_limit);
+@@ -1084,7 +1224,6 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+ if (err)
+ break;
+ }
+- mutex_unlock(&audit_cmd_mutex);
+ audit_log_common_recv_msg(&ab, msg_type);
+ if (msg_type != AUDIT_USER_TTY)
+ audit_log_format(ab, " msg='%.*s'",
+@@ -1102,7 +1241,6 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+ }
+ audit_set_portid(ab, NETLINK_CB(skb).portid);
+ audit_log_end(ab);
+- mutex_lock(&audit_cmd_mutex);
+ }
+ break;
+ case AUDIT_ADD_RULE:
+@@ -1292,26 +1430,26 @@ static int __net_init audit_net_init(struct net *net)
+
+ struct audit_net *aunet = net_generic(net, audit_net_id);
+
+- aunet->nlsk = netlink_kernel_create(net, NETLINK_AUDIT, &cfg);
+- if (aunet->nlsk == NULL) {
++ aunet->sk = netlink_kernel_create(net, NETLINK_AUDIT, &cfg);
++ if (aunet->sk == NULL) {
+ audit_panic("cannot initialize netlink socket in namespace");
+ return -ENOMEM;
+ }
+- aunet->nlsk->sk_sndtimeo = MAX_SCHEDULE_TIMEOUT;
++ aunet->sk->sk_sndtimeo = MAX_SCHEDULE_TIMEOUT;
++
+ return 0;
+ }
+
+ static void __net_exit audit_net_exit(struct net *net)
+ {
+ struct audit_net *aunet = net_generic(net, audit_net_id);
+- struct sock *sock = aunet->nlsk;
+- mutex_lock(&audit_cmd_mutex);
+- if (sock == audit_sock)
++
++ rcu_read_lock();
++ if (net == auditd_conn.net)
+ auditd_reset();
+- mutex_unlock(&audit_cmd_mutex);
++ rcu_read_unlock();
+
+- netlink_kernel_release(sock);
+- aunet->nlsk = NULL;
++ netlink_kernel_release(aunet->sk);
+ }
+
+ static struct pernet_operations audit_net_ops __net_initdata = {
+@@ -1329,20 +1467,24 @@ static int __init audit_init(void)
+ if (audit_initialized == AUDIT_DISABLED)
+ return 0;
+
+- pr_info("initializing netlink subsys (%s)\n",
+- audit_default ? "enabled" : "disabled");
+- register_pernet_subsys(&audit_net_ops);
++ memset(&auditd_conn, 0, sizeof(auditd_conn));
++ spin_lock_init(&auditd_conn.lock);
+
+ skb_queue_head_init(&audit_queue);
+ skb_queue_head_init(&audit_retry_queue);
+ skb_queue_head_init(&audit_hold_queue);
+- audit_initialized = AUDIT_INITIALIZED;
+- audit_enabled = audit_default;
+- audit_ever_enabled |= !!audit_default;
+
+ for (i = 0; i < AUDIT_INODE_BUCKETS; i++)
+ INIT_LIST_HEAD(&audit_inode_hash[i]);
+
++ pr_info("initializing netlink subsys (%s)\n",
++ audit_default ? "enabled" : "disabled");
++ register_pernet_subsys(&audit_net_ops);
++
++ audit_initialized = AUDIT_INITIALIZED;
++ audit_enabled = audit_default;
++ audit_ever_enabled |= !!audit_default;
++
+ kauditd_task = kthread_run(kauditd_thread, NULL, "kauditd");
+ if (IS_ERR(kauditd_task)) {
+ int err = PTR_ERR(kauditd_task);
+@@ -1511,20 +1653,16 @@ struct audit_buffer *audit_log_start(struct audit_context *ctx, gfp_t gfp_mask,
+ if (unlikely(!audit_filter(type, AUDIT_FILTER_TYPE)))
+ return NULL;
+
+- /* don't ever fail/sleep on these two conditions:
++ /* NOTE: don't ever fail/sleep on these two conditions:
+ * 1. auditd generated record - since we need auditd to drain the
+ * queue; also, when we are checking for auditd, compare PIDs using
+ * task_tgid_vnr() since auditd_pid is set in audit_receive_msg()
+ * using a PID anchored in the caller's namespace
+- * 2. audit command message - record types 1000 through 1099 inclusive
+- * are command messages/records used to manage the kernel subsystem
+- * and the audit userspace, blocking on these messages could cause
+- * problems under load so don't do it (note: not all of these
+- * command types are valid as record types, but it is quicker to
+- * just check two ints than a series of ints in a if/switch stmt) */
+- if (!((audit_pid && audit_pid == task_tgid_vnr(current)) ||
+- (type >= 1000 && type <= 1099))) {
+- long sleep_time = audit_backlog_wait_time;
++ * 2. generator holding the audit_cmd_mutex - we don't want to block
++ * while holding the mutex */
++ if (!(auditd_test_task(current) ||
++ (current == __mutex_owner(&audit_cmd_mutex)))) {
++ long stime = audit_backlog_wait_time;
+
+ while (audit_backlog_limit &&
+ (skb_queue_len(&audit_queue) > audit_backlog_limit)) {
+@@ -1533,14 +1671,13 @@ struct audit_buffer *audit_log_start(struct audit_context *ctx, gfp_t gfp_mask,
+
+ /* sleep if we are allowed and we haven't exhausted our
+ * backlog wait limit */
+- if ((gfp_mask & __GFP_DIRECT_RECLAIM) &&
+- (sleep_time > 0)) {
++ if (gfpflags_allow_blocking(gfp_mask) && (stime > 0)) {
+ DECLARE_WAITQUEUE(wait, current);
+
+ add_wait_queue_exclusive(&audit_backlog_wait,
+ &wait);
+ set_current_state(TASK_UNINTERRUPTIBLE);
+- sleep_time = schedule_timeout(sleep_time);
++ stime = schedule_timeout(stime);
+ remove_wait_queue(&audit_backlog_wait, &wait);
+ } else {
+ if (audit_rate_check() && printk_ratelimit())
+@@ -2119,15 +2256,27 @@ void audit_log_link_denied(const char *operation, const struct path *link)
+ */
+ void audit_log_end(struct audit_buffer *ab)
+ {
++ struct sk_buff *skb;
++ struct nlmsghdr *nlh;
++
+ if (!ab)
+ return;
+- if (!audit_rate_check()) {
+- audit_log_lost("rate limit exceeded");
+- } else {
+- skb_queue_tail(&audit_queue, ab->skb);
+- wake_up_interruptible(&kauditd_wait);
++
++ if (audit_rate_check()) {
++ skb = ab->skb;
+ ab->skb = NULL;
+- }
++
++ /* setup the netlink header, see the comments in
++ * kauditd_send_multicast_skb() for length quirks */
++ nlh = nlmsg_hdr(skb);
++ nlh->nlmsg_len = skb->len - NLMSG_HDRLEN;
++
++ /* queue the netlink packet and poke the kauditd thread */
++ skb_queue_tail(&audit_queue, skb);
++ wake_up_interruptible(&kauditd_wait);
++ } else
++ audit_log_lost("rate limit exceeded");
++
+ audit_buffer_free(ab);
+ }
+
+diff --git a/kernel/audit.h b/kernel/audit.h
+index 960d49c9db5e..c6fba919b2e4 100644
+--- a/kernel/audit.h
++++ b/kernel/audit.h
+@@ -215,7 +215,7 @@ extern void audit_log_name(struct audit_context *context,
+ struct audit_names *n, const struct path *path,
+ int record_num, int *call_panic);
+
+-extern int audit_pid;
++extern int auditd_test_task(const struct task_struct *task);
+
+ #define AUDIT_INODE_BUCKETS 32
+ extern struct list_head audit_inode_hash[AUDIT_INODE_BUCKETS];
+@@ -247,10 +247,6 @@ struct audit_netlink_list {
+
+ int audit_send_list(void *);
+
+-struct audit_net {
+- struct sock *nlsk;
+-};
+-
+ extern int selinux_audit_rule_update(void);
+
+ extern struct mutex audit_filter_mutex;
+@@ -337,8 +333,7 @@ extern int audit_filter(int msgtype, unsigned int listtype);
+ extern int __audit_signal_info(int sig, struct task_struct *t);
+ static inline int audit_signal_info(int sig, struct task_struct *t)
+ {
+- if (unlikely((audit_pid && t->tgid == audit_pid) ||
+- (audit_signals && !audit_dummy_context())))
++ if (auditd_test_task(t) || (audit_signals && !audit_dummy_context()))
+ return __audit_signal_info(sig, t);
+ return 0;
+ }
+diff --git a/kernel/auditsc.c b/kernel/auditsc.c
+index cf1fa43512c1..9e69c3a6b732 100644
+--- a/kernel/auditsc.c
++++ b/kernel/auditsc.c
+@@ -762,7 +762,7 @@ static enum audit_state audit_filter_syscall(struct task_struct *tsk,
+ struct audit_entry *e;
+ enum audit_state state;
+
+- if (audit_pid && tsk->tgid == audit_pid)
++ if (auditd_test_task(tsk))
+ return AUDIT_DISABLED;
+
+ rcu_read_lock();
+@@ -816,7 +816,7 @@ void audit_filter_inodes(struct task_struct *tsk, struct audit_context *ctx)
+ {
+ struct audit_names *n;
+
+- if (audit_pid && tsk->tgid == audit_pid)
++ if (auditd_test_task(tsk))
+ return;
+
+ rcu_read_lock();
+@@ -2251,7 +2251,7 @@ int __audit_signal_info(int sig, struct task_struct *t)
+ struct audit_context *ctx = tsk->audit_context;
+ kuid_t uid = current_uid(), t_uid = task_uid(t);
+
+- if (audit_pid && t->tgid == audit_pid) {
++ if (auditd_test_task(t)) {
+ if (sig == SIGTERM || sig == SIGHUP || sig == SIGUSR1 || sig == SIGUSR2) {
+ audit_sig_pid = task_tgid_nr(tsk);
+ if (uid_valid(tsk->loginuid))
+diff --git a/net/ceph/osdmap.c b/net/ceph/osdmap.c
+index d2436880b305..d3f6c26425b3 100644
+--- a/net/ceph/osdmap.c
++++ b/net/ceph/osdmap.c
+@@ -1334,7 +1334,6 @@ static int decode_new_up_state_weight(void **p, void *end,
+ if ((map->osd_state[osd] & CEPH_OSD_EXISTS) &&
+ (xorstate & CEPH_OSD_EXISTS)) {
+ pr_info("osd%d does not exist\n", osd);
+- map->osd_weight[osd] = CEPH_OSD_IN;
+ ret = set_primary_affinity(map, osd,
+ CEPH_OSD_DEFAULT_PRIMARY_AFFINITY);
+ if (ret)
+diff --git a/net/core/dev.c b/net/core/dev.c
+index fd6e2dfda45f..54f8c162ded8 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -4913,6 +4913,39 @@ void __napi_schedule(struct napi_struct *n)
+ EXPORT_SYMBOL(__napi_schedule);
+
+ /**
++ * napi_schedule_prep - check if napi can be scheduled
++ * @n: napi context
++ *
++ * Test if NAPI routine is already running, and if not mark
++ * it as running. This is used as a condition variable
++ * insure only one NAPI poll instance runs. We also make
++ * sure there is no pending NAPI disable.
++ */
++bool napi_schedule_prep(struct napi_struct *n)
++{
++ unsigned long val, new;
++
++ do {
++ val = READ_ONCE(n->state);
++ if (unlikely(val & NAPIF_STATE_DISABLE))
++ return false;
++ new = val | NAPIF_STATE_SCHED;
++
++ /* Sets STATE_MISSED bit if STATE_SCHED was already set
++ * This was suggested by Alexander Duyck, as compiler
++ * emits better code than :
++ * if (val & NAPIF_STATE_SCHED)
++ * new |= NAPIF_STATE_MISSED;
++ */
++ new |= (val & NAPIF_STATE_SCHED) / NAPIF_STATE_SCHED *
++ NAPIF_STATE_MISSED;
++ } while (cmpxchg(&n->state, val, new) != val);
++
++ return !(val & NAPIF_STATE_SCHED);
++}
++EXPORT_SYMBOL(napi_schedule_prep);
++
++/**
+ * __napi_schedule_irqoff - schedule for receive
+ * @n: entry to schedule
+ *
+@@ -4943,7 +4976,7 @@ EXPORT_SYMBOL(__napi_complete);
+
+ bool napi_complete_done(struct napi_struct *n, int work_done)
+ {
+- unsigned long flags;
++ unsigned long flags, val, new;
+
+ /*
+ * 1) Don't let napi dequeue from the cpu poll list
+@@ -4967,14 +5000,33 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
+ else
+ napi_gro_flush(n, false);
+ }
+- if (likely(list_empty(&n->poll_list))) {
+- WARN_ON_ONCE(!test_and_clear_bit(NAPI_STATE_SCHED, &n->state));
+- } else {
++ if (unlikely(!list_empty(&n->poll_list))) {
+ /* If n->poll_list is not empty, we need to mask irqs */
+ local_irq_save(flags);
+- __napi_complete(n);
++ list_del_init(&n->poll_list);
+ local_irq_restore(flags);
+ }
++
++ do {
++ val = READ_ONCE(n->state);
++
++ WARN_ON_ONCE(!(val & NAPIF_STATE_SCHED));
++
++ new = val & ~(NAPIF_STATE_MISSED | NAPIF_STATE_SCHED);
++
++ /* If STATE_MISSED was set, leave STATE_SCHED set,
++ * because we will call napi->poll() one more time.
++ * This C code was suggested by Alexander Duyck to help gcc.
++ */
++ new |= (val & NAPIF_STATE_MISSED) / NAPIF_STATE_MISSED *
++ NAPIF_STATE_SCHED;
++ } while (cmpxchg(&n->state, val, new) != val);
++
++ if (unlikely(val & NAPIF_STATE_MISSED)) {
++ __napi_schedule(n);
++ return false;
++ }
++
+ return true;
+ }
+ EXPORT_SYMBOL(napi_complete_done);
+@@ -5000,6 +5052,16 @@ static void busy_poll_stop(struct napi_struct *napi, void *have_poll_lock)
+ {
+ int rc;
+
++ /* Busy polling means there is a high chance device driver hard irq
++ * could not grab NAPI_STATE_SCHED, and that NAPI_STATE_MISSED was
++ * set in napi_schedule_prep().
++ * Since we are about to call napi->poll() once more, we can safely
++ * clear NAPI_STATE_MISSED.
++ *
++ * Note: x86 could use a single "lock and ..." instruction
++ * to perform these two clear_bit()
++ */
++ clear_bit(NAPI_STATE_MISSED, &napi->state);
+ clear_bit(NAPI_STATE_IN_BUSY_POLL, &napi->state);
+
+ local_bh_disable();
+@@ -5146,8 +5208,13 @@ static enum hrtimer_restart napi_watchdog(struct hrtimer *timer)
+ struct napi_struct *napi;
+
+ napi = container_of(timer, struct napi_struct, timer);
+- if (napi->gro_list)
+- napi_schedule(napi);
++
++ /* Note : we use a relaxed variant of napi_schedule_prep() not setting
++ * NAPI_STATE_MISSED, since we do not react to a device IRQ.
++ */
++ if (napi->gro_list && !napi_disable_pending(napi) &&
++ !test_and_set_bit(NAPI_STATE_SCHED, &napi->state))
++ __napi_schedule_irqoff(napi);
+
+ return HRTIMER_NORESTART;
+ }
+diff --git a/net/core/netclassid_cgroup.c b/net/core/netclassid_cgroup.c
+index 11fce17274f6..46e8830c1979 100644
+--- a/net/core/netclassid_cgroup.c
++++ b/net/core/netclassid_cgroup.c
+@@ -69,27 +69,17 @@ static int update_classid_sock(const void *v, struct file *file, unsigned n)
+ return 0;
+ }
+
+-static void update_classid(struct cgroup_subsys_state *css, void *v)
++static void cgrp_attach(struct cgroup_taskset *tset)
+ {
+- struct css_task_iter it;
++ struct cgroup_subsys_state *css;
+ struct task_struct *p;
+
+- css_task_iter_start(css, &it);
+- while ((p = css_task_iter_next(&it))) {
++ cgroup_taskset_for_each(p, css, tset) {
+ task_lock(p);
+- iterate_fd(p->files, 0, update_classid_sock, v);
++ iterate_fd(p->files, 0, update_classid_sock,
++ (void *)(unsigned long)css_cls_state(css)->classid);
+ task_unlock(p);
+ }
+- css_task_iter_end(&it);
+-}
+-
+-static void cgrp_attach(struct cgroup_taskset *tset)
+-{
+- struct cgroup_subsys_state *css;
+-
+- cgroup_taskset_first(tset, &css);
+- update_classid(css,
+- (void *)(unsigned long)css_cls_state(css)->classid);
+ }
+
+ static u64 read_classid(struct cgroup_subsys_state *css, struct cftype *cft)
+@@ -101,12 +91,22 @@ static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft,
+ u64 value)
+ {
+ struct cgroup_cls_state *cs = css_cls_state(css);
++ struct css_task_iter it;
++ struct task_struct *p;
+
+ cgroup_sk_alloc_disable();
+
+ cs->classid = (u32)value;
+
+- update_classid(css, (void *)(unsigned long)cs->classid);
++ css_task_iter_start(css, &it);
++ while ((p = css_task_iter_next(&it))) {
++ task_lock(p);
++ iterate_fd(p->files, 0, update_classid_sock,
++ (void *)(unsigned long)cs->classid);
++ task_unlock(p);
++ }
++ css_task_iter_end(&it);
++
+ return 0;
+ }
+
+diff --git a/net/core/sock.c b/net/core/sock.c
+index 4eca27dc5c94..4e7f10c92666 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1444,6 +1444,11 @@ static void __sk_destruct(struct rcu_head *head)
+ pr_debug("%s: optmem leakage (%d bytes) detected\n",
+ __func__, atomic_read(&sk->sk_omem_alloc));
+
++ if (sk->sk_frag.page) {
++ put_page(sk->sk_frag.page);
++ sk->sk_frag.page = NULL;
++ }
++
+ if (sk->sk_peer_cred)
+ put_cred(sk->sk_peer_cred);
+ put_pid(sk->sk_peer_pid);
+@@ -1540,6 +1545,12 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
+ is_charged = sk_filter_charge(newsk, filter);
+
+ if (unlikely(!is_charged || xfrm_sk_clone_policy(newsk, sk))) {
++ /* We need to make sure that we don't uncharge the new
++ * socket if we couldn't charge it in the first place
++ * as otherwise we uncharge the parent's filter.
++ */
++ if (!is_charged)
++ RCU_INIT_POINTER(newsk->sk_filter, NULL);
+ /* It is still raw copy of parent, so invalidate
+ * destructor and make plain sk_free() */
+ newsk->sk_destruct = NULL;
+@@ -2774,11 +2785,6 @@ void sk_common_release(struct sock *sk)
+
+ sk_refcnt_debug_release(sk);
+
+- if (sk->sk_frag.page) {
+- put_page(sk->sk_frag.page);
+- sk->sk_frag.page = NULL;
+- }
+-
+ sock_put(sk);
+ }
+ EXPORT_SYMBOL(sk_common_release);
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index b39a791f6756..091de0b93d5d 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1082,7 +1082,8 @@ static void nl_fib_input(struct sk_buff *skb)
+
+ net = sock_net(skb->sk);
+ nlh = nlmsg_hdr(skb);
+- if (skb->len < NLMSG_HDRLEN || skb->len < nlh->nlmsg_len ||
++ if (skb->len < nlmsg_total_size(sizeof(*frn)) ||
++ skb->len < nlh->nlmsg_len ||
+ nlmsg_len(nlh) < sizeof(*frn))
+ return;
+
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 28777a0307c8..e7516efa99dc 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -5571,6 +5571,7 @@ void tcp_finish_connect(struct sock *sk, struct sk_buff *skb)
+ struct inet_connection_sock *icsk = inet_csk(sk);
+
+ tcp_set_state(sk, TCP_ESTABLISHED);
++ icsk->icsk_ack.lrcvtime = tcp_time_stamp;
+
+ if (skb) {
+ icsk->icsk_af_ops->sk_rx_dst_set(sk, skb);
+@@ -5789,7 +5790,6 @@ static int tcp_rcv_synsent_state_process(struct sock *sk, struct sk_buff *skb,
+ * to stand against the temptation 8) --ANK
+ */
+ inet_csk_schedule_ack(sk);
+- icsk->icsk_ack.lrcvtime = tcp_time_stamp;
+ tcp_enter_quickack_mode(sk);
+ inet_csk_reset_xmit_timer(sk, ICSK_TIME_DACK,
+ TCP_DELACK_MAX, TCP_RTO_MAX);
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index 28ce5ee831f5..80ff517a7542 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -466,6 +466,7 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
+ newtp->mdev_us = jiffies_to_usecs(TCP_TIMEOUT_INIT);
+ minmax_reset(&newtp->rtt_min, tcp_time_stamp, ~0U);
+ newicsk->icsk_rto = TCP_TIMEOUT_INIT;
++ newicsk->icsk_ack.lrcvtime = tcp_time_stamp;
+
+ newtp->packets_out = 0;
+ newtp->retrans_out = 0;
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 221825a9407a..0770f95f5e1c 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1022,6 +1022,7 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ ipc6.hlimit = -1;
+ ipc6.tclass = -1;
+ ipc6.dontfrag = -1;
++ sockc.tsflags = sk->sk_tsflags;
+
+ /* destination address check */
+ if (sin6) {
+@@ -1146,7 +1147,6 @@ int udpv6_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+
+ fl6.flowi6_mark = sk->sk_mark;
+ fl6.flowi6_uid = sk->sk_uid;
+- sockc.tsflags = sk->sk_tsflags;
+
+ if (msg->msg_controllen) {
+ opt = &opt_space;
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index fb6e10fdb217..92e0981f7404 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -783,8 +783,10 @@ static int ctrl_dumpfamily(struct sk_buff *skb, struct netlink_callback *cb)
+
+ if (ctrl_fill_info(rt, NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq, NLM_F_MULTI,
+- skb, CTRL_CMD_NEWFAMILY) < 0)
++ skb, CTRL_CMD_NEWFAMILY) < 0) {
++ n--;
+ break;
++ }
+ }
+
+ cb->args[0] = n;
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index c87d359b9b37..256e8f1450fd 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -588,7 +588,7 @@ static int ip_tun_from_nlattr(const struct nlattr *attr,
+ ipv4 = true;
+ break;
+ case OVS_TUNNEL_KEY_ATTR_IPV6_SRC:
+- SW_FLOW_KEY_PUT(match, tun_key.u.ipv6.dst,
++ SW_FLOW_KEY_PUT(match, tun_key.u.ipv6.src,
+ nla_get_in6_addr(a), is_mask);
+ ipv6 = true;
+ break;
+@@ -649,6 +649,8 @@ static int ip_tun_from_nlattr(const struct nlattr *attr,
+ tun_flags |= TUNNEL_VXLAN_OPT;
+ opts_type = type;
+ break;
++ case OVS_TUNNEL_KEY_ATTR_PAD:
++ break;
+ default:
+ OVS_NLERR(log, "Unknown IP tunnel attribute %d",
+ type);
+diff --git a/net/unix/garbage.c b/net/unix/garbage.c
+index 6a0d48525fcf..c36757e72844 100644
+--- a/net/unix/garbage.c
++++ b/net/unix/garbage.c
+@@ -146,6 +146,7 @@ void unix_notinflight(struct user_struct *user, struct file *fp)
+ if (s) {
+ struct unix_sock *u = unix_sk(s);
+
++ BUG_ON(!atomic_long_read(&u->inflight));
+ BUG_ON(list_empty(&u->link));
+
+ if (atomic_long_dec_and_test(&u->inflight))
+@@ -341,6 +342,14 @@ void unix_gc(void)
+ }
+ list_del(&cursor);
+
++ /* Now gc_candidates contains only garbage. Restore original
++ * inflight counters for these as well, and remove the skbuffs
++ * which are creating the cycle(s).
++ */
++ skb_queue_head_init(&hitlist);
++ list_for_each_entry(u, &gc_candidates, link)
++ scan_children(&u->sk, inc_inflight, &hitlist);
++
+ /* not_cycle_list contains those sockets which do not make up a
+ * cycle. Restore these to the inflight list.
+ */
+@@ -350,14 +359,6 @@ void unix_gc(void)
+ list_move_tail(&u->link, &gc_inflight_list);
+ }
+
+- /* Now gc_candidates contains only garbage. Restore original
+- * inflight counters for these as well, and remove the skbuffs
+- * which are creating the cycle(s).
+- */
+- skb_queue_head_init(&hitlist);
+- list_for_each_entry(u, &gc_candidates, link)
+- scan_children(&u->sk, inc_inflight, &hitlist);
+-
+ spin_unlock(&unix_gc_lock);
+
+ /* Here we are. Hitlist is filled. Die. */
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index aee396b9f190..c1081a6e31ef 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -540,22 +540,18 @@ static int nl80211_prepare_wdev_dump(struct sk_buff *skb,
+ {
+ int err;
+
+- rtnl_lock();
+-
+ if (!cb->args[0]) {
+ err = nlmsg_parse(cb->nlh, GENL_HDRLEN + nl80211_fam.hdrsize,
+ genl_family_attrbuf(&nl80211_fam),
+ nl80211_fam.maxattr, nl80211_policy);
+ if (err)
+- goto out_unlock;
++ return err;
+
+ *wdev = __cfg80211_wdev_from_attrs(
+ sock_net(skb->sk),
+ genl_family_attrbuf(&nl80211_fam));
+- if (IS_ERR(*wdev)) {
+- err = PTR_ERR(*wdev);
+- goto out_unlock;
+- }
++ if (IS_ERR(*wdev))
++ return PTR_ERR(*wdev);
+ *rdev = wiphy_to_rdev((*wdev)->wiphy);
+ /* 0 is the first index - add 1 to parse only once */
+ cb->args[0] = (*rdev)->wiphy_idx + 1;
+@@ -565,10 +561,8 @@ static int nl80211_prepare_wdev_dump(struct sk_buff *skb,
+ struct wiphy *wiphy = wiphy_idx_to_wiphy(cb->args[0] - 1);
+ struct wireless_dev *tmp;
+
+- if (!wiphy) {
+- err = -ENODEV;
+- goto out_unlock;
+- }
++ if (!wiphy)
++ return -ENODEV;
+ *rdev = wiphy_to_rdev(wiphy);
+ *wdev = NULL;
+
+@@ -579,21 +573,11 @@ static int nl80211_prepare_wdev_dump(struct sk_buff *skb,
+ }
+ }
+
+- if (!*wdev) {
+- err = -ENODEV;
+- goto out_unlock;
+- }
++ if (!*wdev)
++ return -ENODEV;
+ }
+
+ return 0;
+- out_unlock:
+- rtnl_unlock();
+- return err;
+-}
+-
+-static void nl80211_finish_wdev_dump(struct cfg80211_registered_device *rdev)
+-{
+- rtnl_unlock();
+ }
+
+ /* IE validation */
+@@ -2599,17 +2583,17 @@ static int nl80211_dump_interface(struct sk_buff *skb, struct netlink_callback *
+ int filter_wiphy = -1;
+ struct cfg80211_registered_device *rdev;
+ struct wireless_dev *wdev;
++ int ret;
+
+ rtnl_lock();
+ if (!cb->args[2]) {
+ struct nl80211_dump_wiphy_state state = {
+ .filter_wiphy = -1,
+ };
+- int ret;
+
+ ret = nl80211_dump_wiphy_parse(skb, cb, &state);
+ if (ret)
+- return ret;
++ goto out_unlock;
+
+ filter_wiphy = state.filter_wiphy;
+
+@@ -2654,12 +2638,14 @@ static int nl80211_dump_interface(struct sk_buff *skb, struct netlink_callback *
+ wp_idx++;
+ }
+ out:
+- rtnl_unlock();
+-
+ cb->args[0] = wp_idx;
+ cb->args[1] = if_idx;
+
+- return skb->len;
++ ret = skb->len;
++ out_unlock:
++ rtnl_unlock();
++
++ return ret;
+ }
+
+ static int nl80211_get_interface(struct sk_buff *skb, struct genl_info *info)
+@@ -4398,9 +4384,10 @@ static int nl80211_dump_station(struct sk_buff *skb,
+ int sta_idx = cb->args[2];
+ int err;
+
++ rtnl_lock();
+ err = nl80211_prepare_wdev_dump(skb, cb, &rdev, &wdev);
+ if (err)
+- return err;
++ goto out_err;
+
+ if (!wdev->netdev) {
+ err = -EINVAL;
+@@ -4435,7 +4422,7 @@ static int nl80211_dump_station(struct sk_buff *skb,
+ cb->args[2] = sta_idx;
+ err = skb->len;
+ out_err:
+- nl80211_finish_wdev_dump(rdev);
++ rtnl_unlock();
+
+ return err;
+ }
+@@ -5221,9 +5208,10 @@ static int nl80211_dump_mpath(struct sk_buff *skb,
+ int path_idx = cb->args[2];
+ int err;
+
++ rtnl_lock();
+ err = nl80211_prepare_wdev_dump(skb, cb, &rdev, &wdev);
+ if (err)
+- return err;
++ goto out_err;
+
+ if (!rdev->ops->dump_mpath) {
+ err = -EOPNOTSUPP;
+@@ -5256,7 +5244,7 @@ static int nl80211_dump_mpath(struct sk_buff *skb,
+ cb->args[2] = path_idx;
+ err = skb->len;
+ out_err:
+- nl80211_finish_wdev_dump(rdev);
++ rtnl_unlock();
+ return err;
+ }
+
+@@ -5416,9 +5404,10 @@ static int nl80211_dump_mpp(struct sk_buff *skb,
+ int path_idx = cb->args[2];
+ int err;
+
++ rtnl_lock();
+ err = nl80211_prepare_wdev_dump(skb, cb, &rdev, &wdev);
+ if (err)
+- return err;
++ goto out_err;
+
+ if (!rdev->ops->dump_mpp) {
+ err = -EOPNOTSUPP;
+@@ -5451,7 +5440,7 @@ static int nl80211_dump_mpp(struct sk_buff *skb,
+ cb->args[2] = path_idx;
+ err = skb->len;
+ out_err:
+- nl80211_finish_wdev_dump(rdev);
++ rtnl_unlock();
+ return err;
+ }
+
+@@ -7596,9 +7585,12 @@ static int nl80211_dump_scan(struct sk_buff *skb, struct netlink_callback *cb)
+ int start = cb->args[2], idx = 0;
+ int err;
+
++ rtnl_lock();
+ err = nl80211_prepare_wdev_dump(skb, cb, &rdev, &wdev);
+- if (err)
++ if (err) {
++ rtnl_unlock();
+ return err;
++ }
+
+ wdev_lock(wdev);
+ spin_lock_bh(&rdev->bss_lock);
+@@ -7621,7 +7613,7 @@ static int nl80211_dump_scan(struct sk_buff *skb, struct netlink_callback *cb)
+ wdev_unlock(wdev);
+
+ cb->args[2] = idx;
+- nl80211_finish_wdev_dump(rdev);
++ rtnl_unlock();
+
+ return skb->len;
+ }
+@@ -7706,9 +7698,10 @@ static int nl80211_dump_survey(struct sk_buff *skb, struct netlink_callback *cb)
+ int res;
+ bool radio_stats;
+
++ rtnl_lock();
+ res = nl80211_prepare_wdev_dump(skb, cb, &rdev, &wdev);
+ if (res)
+- return res;
++ goto out_err;
+
+ /* prepare_wdev_dump parsed the attributes */
+ radio_stats = attrbuf[NL80211_ATTR_SURVEY_RADIO_STATS];
+@@ -7749,7 +7742,7 @@ static int nl80211_dump_survey(struct sk_buff *skb, struct netlink_callback *cb)
+ cb->args[2] = survey_idx;
+ res = skb->len;
+ out_err:
+- nl80211_finish_wdev_dump(rdev);
++ rtnl_unlock();
+ return res;
+ }
+
+@@ -11378,17 +11371,13 @@ static int nl80211_prepare_vendor_dump(struct sk_buff *skb,
+ void *data = NULL;
+ unsigned int data_len = 0;
+
+- rtnl_lock();
+-
+ if (cb->args[0]) {
+ /* subtract the 1 again here */
+ struct wiphy *wiphy = wiphy_idx_to_wiphy(cb->args[0] - 1);
+ struct wireless_dev *tmp;
+
+- if (!wiphy) {
+- err = -ENODEV;
+- goto out_unlock;
+- }
++ if (!wiphy)
++ return -ENODEV;
+ *rdev = wiphy_to_rdev(wiphy);
+ *wdev = NULL;
+
+@@ -11408,23 +11397,19 @@ static int nl80211_prepare_vendor_dump(struct sk_buff *skb,
+ err = nlmsg_parse(cb->nlh, GENL_HDRLEN + nl80211_fam.hdrsize,
+ attrbuf, nl80211_fam.maxattr, nl80211_policy);
+ if (err)
+- goto out_unlock;
++ return err;
+
+ if (!attrbuf[NL80211_ATTR_VENDOR_ID] ||
+- !attrbuf[NL80211_ATTR_VENDOR_SUBCMD]) {
+- err = -EINVAL;
+- goto out_unlock;
+- }
++ !attrbuf[NL80211_ATTR_VENDOR_SUBCMD])
++ return -EINVAL;
+
+ *wdev = __cfg80211_wdev_from_attrs(sock_net(skb->sk), attrbuf);
+ if (IS_ERR(*wdev))
+ *wdev = NULL;
+
+ *rdev = __cfg80211_rdev_from_attrs(sock_net(skb->sk), attrbuf);
+- if (IS_ERR(*rdev)) {
+- err = PTR_ERR(*rdev);
+- goto out_unlock;
+- }
++ if (IS_ERR(*rdev))
++ return PTR_ERR(*rdev);
+
+ vid = nla_get_u32(attrbuf[NL80211_ATTR_VENDOR_ID]);
+ subcmd = nla_get_u32(attrbuf[NL80211_ATTR_VENDOR_SUBCMD]);
+@@ -11437,19 +11422,15 @@ static int nl80211_prepare_vendor_dump(struct sk_buff *skb,
+ if (vcmd->info.vendor_id != vid || vcmd->info.subcmd != subcmd)
+ continue;
+
+- if (!vcmd->dumpit) {
+- err = -EOPNOTSUPP;
+- goto out_unlock;
+- }
++ if (!vcmd->dumpit)
++ return -EOPNOTSUPP;
+
+ vcmd_idx = i;
+ break;
+ }
+
+- if (vcmd_idx < 0) {
+- err = -EOPNOTSUPP;
+- goto out_unlock;
+- }
++ if (vcmd_idx < 0)
++ return -EOPNOTSUPP;
+
+ if (attrbuf[NL80211_ATTR_VENDOR_DATA]) {
+ data = nla_data(attrbuf[NL80211_ATTR_VENDOR_DATA]);
+@@ -11466,9 +11447,6 @@ static int nl80211_prepare_vendor_dump(struct sk_buff *skb,
+
+ /* keep rtnl locked in successful case */
+ return 0;
+- out_unlock:
+- rtnl_unlock();
+- return err;
+ }
+
+ static int nl80211_vendor_cmd_dump(struct sk_buff *skb,
+@@ -11483,9 +11461,10 @@ static int nl80211_vendor_cmd_dump(struct sk_buff *skb,
+ int err;
+ struct nlattr *vendor_data;
+
++ rtnl_lock();
+ err = nl80211_prepare_vendor_dump(skb, cb, &rdev, &wdev);
+ if (err)
+- return err;
++ goto out;
+
+ vcmd_idx = cb->args[2];
+ data = (void *)cb->args[3];
+@@ -11494,15 +11473,21 @@ static int nl80211_vendor_cmd_dump(struct sk_buff *skb,
+
+ if (vcmd->flags & (WIPHY_VENDOR_CMD_NEED_WDEV |
+ WIPHY_VENDOR_CMD_NEED_NETDEV)) {
+- if (!wdev)
+- return -EINVAL;
++ if (!wdev) {
++ err = -EINVAL;
++ goto out;
++ }
+ if (vcmd->flags & WIPHY_VENDOR_CMD_NEED_NETDEV &&
+- !wdev->netdev)
+- return -EINVAL;
++ !wdev->netdev) {
++ err = -EINVAL;
++ goto out;
++ }
+
+ if (vcmd->flags & WIPHY_VENDOR_CMD_NEED_RUNNING) {
+- if (!wdev_running(wdev))
+- return -ENETDOWN;
++ if (!wdev_running(wdev)) {
++ err = -ENETDOWN;
++ goto out;
++ }
+ }
+ }
+
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 4c935202ce23..f3b1d7f50b81 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1832,6 +1832,7 @@ static int snd_seq_ioctl_set_client_pool(struct snd_seq_client *client,
+ info->output_pool != client->pool->size)) {
+ if (snd_seq_write_pool_allocated(client)) {
+ /* remove all existing cells */
++ snd_seq_pool_mark_closing(client->pool);
+ snd_seq_queue_client_leave_cells(client->number);
+ snd_seq_pool_done(client->pool);
+ }
+diff --git a/sound/core/seq/seq_fifo.c b/sound/core/seq/seq_fifo.c
+index 86240d02b530..3f4efcb85df5 100644
+--- a/sound/core/seq/seq_fifo.c
++++ b/sound/core/seq/seq_fifo.c
+@@ -70,6 +70,9 @@ void snd_seq_fifo_delete(struct snd_seq_fifo **fifo)
+ return;
+ *fifo = NULL;
+
++ if (f->pool)
++ snd_seq_pool_mark_closing(f->pool);
++
+ snd_seq_fifo_clear(f);
+
+ /* wake up clients if any */
+diff --git a/sound/core/seq/seq_memory.c b/sound/core/seq/seq_memory.c
+index dfa5156f3585..5847c4475bf3 100644
+--- a/sound/core/seq/seq_memory.c
++++ b/sound/core/seq/seq_memory.c
+@@ -414,6 +414,18 @@ int snd_seq_pool_init(struct snd_seq_pool *pool)
+ return 0;
+ }
+
++/* refuse the further insertion to the pool */
++void snd_seq_pool_mark_closing(struct snd_seq_pool *pool)
++{
++ unsigned long flags;
++
++ if (snd_BUG_ON(!pool))
++ return;
++ spin_lock_irqsave(&pool->lock, flags);
++ pool->closing = 1;
++ spin_unlock_irqrestore(&pool->lock, flags);
++}
++
+ /* remove events */
+ int snd_seq_pool_done(struct snd_seq_pool *pool)
+ {
+@@ -424,10 +436,6 @@ int snd_seq_pool_done(struct snd_seq_pool *pool)
+ return -EINVAL;
+
+ /* wait for closing all threads */
+- spin_lock_irqsave(&pool->lock, flags);
+- pool->closing = 1;
+- spin_unlock_irqrestore(&pool->lock, flags);
+-
+ if (waitqueue_active(&pool->output_sleep))
+ wake_up(&pool->output_sleep);
+
+@@ -484,6 +492,7 @@ int snd_seq_pool_delete(struct snd_seq_pool **ppool)
+ *ppool = NULL;
+ if (pool == NULL)
+ return 0;
++ snd_seq_pool_mark_closing(pool);
+ snd_seq_pool_done(pool);
+ kfree(pool);
+ return 0;
+diff --git a/sound/core/seq/seq_memory.h b/sound/core/seq/seq_memory.h
+index 4a2ec779b8a7..32f959c17786 100644
+--- a/sound/core/seq/seq_memory.h
++++ b/sound/core/seq/seq_memory.h
+@@ -84,6 +84,7 @@ static inline int snd_seq_total_cells(struct snd_seq_pool *pool)
+ int snd_seq_pool_init(struct snd_seq_pool *pool);
+
+ /* done pool - free events */
++void snd_seq_pool_mark_closing(struct snd_seq_pool *pool);
+ int snd_seq_pool_done(struct snd_seq_pool *pool);
+
+ /* create pool */
+diff --git a/sound/pci/ctxfi/cthw20k1.c b/sound/pci/ctxfi/cthw20k1.c
+index ab4cdab5cfa5..79edd88d5cd0 100644
+--- a/sound/pci/ctxfi/cthw20k1.c
++++ b/sound/pci/ctxfi/cthw20k1.c
+@@ -1905,7 +1905,7 @@ static int hw_card_start(struct hw *hw)
+ return err;
+
+ /* Set DMA transfer mask */
+- if (dma_set_mask(&pci->dev, DMA_BIT_MASK(dma_bits))) {
++ if (!dma_set_mask(&pci->dev, DMA_BIT_MASK(dma_bits))) {
+ dma_set_coherent_mask(&pci->dev, DMA_BIT_MASK(dma_bits));
+ } else {
+ dma_set_mask(&pci->dev, DMA_BIT_MASK(32));
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 6b041f7268fb..c813ad857650 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6058,6 +6058,8 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ ALC295_STANDARD_PINS,
+ {0x17, 0x21014040},
+ {0x18, 0x21a19050}),
++ SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE,
++ ALC295_STANDARD_PINS),
+ SND_HDA_PIN_QUIRK(0x10ec0298, 0x1028, "Dell", ALC298_FIXUP_DELL1_MIC_NO_PRESENCE,
+ ALC298_STANDARD_PINS,
+ {0x17, 0x90170110}),
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-03-31 10:45 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-03-31 10:45 UTC (permalink / raw
To: gentoo-commits
commit: 0ebaa38341c1d4266ba9b27e39a35bf296bd1c96
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 31 10:45:43 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar 31 10:45:43 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0ebaa383
Linux patch 4.10.8
0000_README | 4 +
1007_linux-4.10.8.patch | 493 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 497 insertions(+)
diff --git a/0000_README b/0000_README
index 02aad35..4c7de50 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 1006_linux-4.10.7.patch
From: http://www.kernel.org
Desc: Linux 4.10.7
+Patch: 1007_linux-4.10.8.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.8
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1007_linux-4.10.8.patch b/1007_linux-4.10.8.patch
new file mode 100644
index 0000000..4928a4c
--- /dev/null
+++ b/1007_linux-4.10.8.patch
@@ -0,0 +1,493 @@
+diff --git a/Makefile b/Makefile
+index 976e8d1a468a..82e0809fed9b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/c6x/kernel/ptrace.c b/arch/c6x/kernel/ptrace.c
+index 3c494e84444d..a511ac16a8e3 100644
+--- a/arch/c6x/kernel/ptrace.c
++++ b/arch/c6x/kernel/ptrace.c
+@@ -69,46 +69,6 @@ static int gpr_get(struct task_struct *target,
+ 0, sizeof(*regs));
+ }
+
+-static int gpr_set(struct task_struct *target,
+- const struct user_regset *regset,
+- unsigned int pos, unsigned int count,
+- const void *kbuf, const void __user *ubuf)
+-{
+- int ret;
+- struct pt_regs *regs = task_pt_regs(target);
+-
+- /* Don't copyin TSR or CSR */
+- ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+- &regs,
+- 0, PT_TSR * sizeof(long));
+- if (ret)
+- return ret;
+-
+- ret = user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
+- PT_TSR * sizeof(long),
+- (PT_TSR + 1) * sizeof(long));
+- if (ret)
+- return ret;
+-
+- ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+- &regs,
+- (PT_TSR + 1) * sizeof(long),
+- PT_CSR * sizeof(long));
+- if (ret)
+- return ret;
+-
+- ret = user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
+- PT_CSR * sizeof(long),
+- (PT_CSR + 1) * sizeof(long));
+- if (ret)
+- return ret;
+-
+- ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+- &regs,
+- (PT_CSR + 1) * sizeof(long), -1);
+- return ret;
+-}
+-
+ enum c6x_regset {
+ REGSET_GPR,
+ };
+@@ -120,7 +80,6 @@ static const struct user_regset c6x_regsets[] = {
+ .size = sizeof(u32),
+ .align = sizeof(u32),
+ .get = gpr_get,
+- .set = gpr_set
+ },
+ };
+
+diff --git a/arch/h8300/kernel/ptrace.c b/arch/h8300/kernel/ptrace.c
+index 92075544a19a..0dc1c8f622bc 100644
+--- a/arch/h8300/kernel/ptrace.c
++++ b/arch/h8300/kernel/ptrace.c
+@@ -95,7 +95,8 @@ static int regs_get(struct task_struct *target,
+ long *reg = (long *)&regs;
+
+ /* build user regs in buffer */
+- for (r = 0; r < ARRAY_SIZE(register_offset); r++)
++ BUILD_BUG_ON(sizeof(regs) % sizeof(long) != 0);
++ for (r = 0; r < sizeof(regs) / sizeof(long); r++)
+ *reg++ = h8300_get_reg(target, r);
+
+ return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
+@@ -113,7 +114,8 @@ static int regs_set(struct task_struct *target,
+ long *reg;
+
+ /* build user regs in buffer */
+- for (reg = (long *)&regs, r = 0; r < ARRAY_SIZE(register_offset); r++)
++ BUILD_BUG_ON(sizeof(regs) % sizeof(long) != 0);
++ for (reg = (long *)&regs, r = 0; r < sizeof(regs) / sizeof(long); r++)
+ *reg++ = h8300_get_reg(target, r);
+
+ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+@@ -122,7 +124,7 @@ static int regs_set(struct task_struct *target,
+ return ret;
+
+ /* write back to pt_regs */
+- for (reg = (long *)&regs, r = 0; r < ARRAY_SIZE(register_offset); r++)
++ for (reg = (long *)&regs, r = 0; r < sizeof(regs) / sizeof(long); r++)
+ h8300_put_reg(target, r, *reg++);
+ return 0;
+ }
+diff --git a/arch/metag/kernel/ptrace.c b/arch/metag/kernel/ptrace.c
+index 7563628822bd..5e2dc7defd2c 100644
+--- a/arch/metag/kernel/ptrace.c
++++ b/arch/metag/kernel/ptrace.c
+@@ -24,6 +24,16 @@
+ * user_regset definitions.
+ */
+
++static unsigned long user_txstatus(const struct pt_regs *regs)
++{
++ unsigned long data = (unsigned long)regs->ctx.Flags;
++
++ if (regs->ctx.SaveMask & TBICTX_CBUF_BIT)
++ data |= USER_GP_REGS_STATUS_CATCH_BIT;
++
++ return data;
++}
++
+ int metag_gp_regs_copyout(const struct pt_regs *regs,
+ unsigned int pos, unsigned int count,
+ void *kbuf, void __user *ubuf)
+@@ -62,9 +72,7 @@ int metag_gp_regs_copyout(const struct pt_regs *regs,
+ if (ret)
+ goto out;
+ /* TXSTATUS */
+- data = (unsigned long)regs->ctx.Flags;
+- if (regs->ctx.SaveMask & TBICTX_CBUF_BIT)
+- data |= USER_GP_REGS_STATUS_CATCH_BIT;
++ data = user_txstatus(regs);
+ ret = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
+ &data, 4*25, 4*26);
+ if (ret)
+@@ -119,6 +127,7 @@ int metag_gp_regs_copyin(struct pt_regs *regs,
+ if (ret)
+ goto out;
+ /* TXSTATUS */
++ data = user_txstatus(regs);
+ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+ &data, 4*25, 4*26);
+ if (ret)
+@@ -244,6 +253,8 @@ int metag_rp_state_copyin(struct pt_regs *regs,
+ unsigned long long *ptr;
+ int ret, i;
+
++ if (count < 4*13)
++ return -EINVAL;
+ /* Read the entire pipeline before making any changes */
+ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+ &rp, 0, 4*13);
+@@ -303,7 +314,7 @@ static int metag_tls_set(struct task_struct *target,
+ const void *kbuf, const void __user *ubuf)
+ {
+ int ret;
+- void __user *tls;
++ void __user *tls = target->thread.tls_ptr;
+
+ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &tls, 0, -1);
+ if (ret)
+diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
+index c8ba26072132..5d2498eb2340 100644
+--- a/arch/mips/kernel/ptrace.c
++++ b/arch/mips/kernel/ptrace.c
+@@ -485,7 +485,8 @@ static int fpr_set(struct task_struct *target,
+ &target->thread.fpu,
+ 0, sizeof(elf_fpregset_t));
+
+- for (i = 0; i < NUM_FPU_REGS; i++) {
++ BUILD_BUG_ON(sizeof(fpr_val) != sizeof(elf_fpreg_t));
++ for (i = 0; i < NUM_FPU_REGS && count >= sizeof(elf_fpreg_t); i++) {
+ err = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+ &fpr_val, i * sizeof(elf_fpreg_t),
+ (i + 1) * sizeof(elf_fpreg_t));
+diff --git a/arch/sparc/kernel/ptrace_64.c b/arch/sparc/kernel/ptrace_64.c
+index 901063c1cf7e..341129a40e94 100644
+--- a/arch/sparc/kernel/ptrace_64.c
++++ b/arch/sparc/kernel/ptrace_64.c
+@@ -350,7 +350,7 @@ static int genregs64_set(struct task_struct *target,
+ }
+
+ if (!ret) {
+- unsigned long y;
++ unsigned long y = regs->y;
+
+ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
+ &y,
+diff --git a/arch/x86/include/asm/kvm_page_track.h b/arch/x86/include/asm/kvm_page_track.h
+index d74747b031ec..c4eda791f877 100644
+--- a/arch/x86/include/asm/kvm_page_track.h
++++ b/arch/x86/include/asm/kvm_page_track.h
+@@ -46,6 +46,7 @@ struct kvm_page_track_notifier_node {
+ };
+
+ void kvm_page_track_init(struct kvm *kvm);
++void kvm_page_track_cleanup(struct kvm *kvm);
+
+ void kvm_page_track_free_memslot(struct kvm_memory_slot *free,
+ struct kvm_memory_slot *dont);
+diff --git a/arch/x86/kvm/page_track.c b/arch/x86/kvm/page_track.c
+index 4a1c13eaa518..c9473acd65d6 100644
+--- a/arch/x86/kvm/page_track.c
++++ b/arch/x86/kvm/page_track.c
+@@ -158,6 +158,14 @@ bool kvm_page_track_is_active(struct kvm_vcpu *vcpu, gfn_t gfn,
+ return !!ACCESS_ONCE(slot->arch.gfn_track[mode][index]);
+ }
+
++void kvm_page_track_cleanup(struct kvm *kvm)
++{
++ struct kvm_page_track_notifier_head *head;
++
++ head = &kvm->arch.track_notifier_head;
++ cleanup_srcu_struct(&head->track_srcu);
++}
++
+ void kvm_page_track_init(struct kvm *kvm)
+ {
+ struct kvm_page_track_notifier_head *head;
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 2c22aef35dbc..c989e67dcc9d 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -2811,7 +2811,6 @@ static void nested_vmx_setup_ctls_msrs(struct vcpu_vmx *vmx)
+ SECONDARY_EXEC_RDTSCP |
+ SECONDARY_EXEC_DESC |
+ SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE |
+- SECONDARY_EXEC_ENABLE_VPID |
+ SECONDARY_EXEC_APIC_REGISTER_VIRT |
+ SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY |
+ SECONDARY_EXEC_WBINVD_EXITING |
+@@ -2839,10 +2838,12 @@ static void nested_vmx_setup_ctls_msrs(struct vcpu_vmx *vmx)
+ * though it is treated as global context. The alternative is
+ * not failing the single-context invvpid, and it is worse.
+ */
+- if (enable_vpid)
++ if (enable_vpid) {
++ vmx->nested.nested_vmx_secondary_ctls_high |=
++ SECONDARY_EXEC_ENABLE_VPID;
+ vmx->nested.nested_vmx_vpid_caps = VMX_VPID_INVVPID_BIT |
+ VMX_VPID_EXTENT_SUPPORTED_MASK;
+- else
++ } else
+ vmx->nested.nested_vmx_vpid_caps = 0;
+
+ if (enable_unrestricted_guest)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index e52c9088660f..b3b212f20f78 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -8052,6 +8052,7 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
+ kvm_free_vcpus(kvm);
+ kvfree(rcu_dereference_check(kvm->arch.apic_map, 1));
+ kvm_mmu_uninit_vm(kvm);
++ kvm_page_track_cleanup(kvm);
+ }
+
+ void kvm_arch_free_memslot(struct kvm *kvm, struct kvm_memory_slot *free,
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index 775c88303017..bedce3453dd3 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -594,10 +594,6 @@ static void msm_gpio_irq_unmask(struct irq_data *d)
+
+ spin_lock_irqsave(&pctrl->lock, flags);
+
+- val = readl(pctrl->regs + g->intr_status_reg);
+- val &= ~BIT(g->intr_status_bit);
+- writel(val, pctrl->regs + g->intr_status_reg);
+-
+ val = readl(pctrl->regs + g->intr_cfg_reg);
+ val |= BIT(g->intr_enable_bit);
+ writel(val, pctrl->regs + g->intr_cfg_reg);
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index f201f4099620..f204d7cd5354 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -2154,8 +2154,6 @@ qla24xx_vport_delete(struct fc_vport *fc_vport)
+ "Timer for the VP[%d] has stopped\n", vha->vp_idx);
+ }
+
+- BUG_ON(atomic_read(&vha->vref_count));
+-
+ qla2x00_free_fcports(vha);
+
+ mutex_lock(&ha->vport_lock);
+@@ -2163,7 +2161,7 @@ qla24xx_vport_delete(struct fc_vport *fc_vport)
+ clear_bit(vha->vp_idx, ha->vp_idx_map);
+ mutex_unlock(&ha->vport_lock);
+
+- if (vha->qpair->vp_idx == vha->vp_idx) {
++ if (vha->qpair && vha->qpair->vp_idx == vha->vp_idx) {
+ if (qla2xxx_delete_qpair(vha, vha->qpair) != QLA_SUCCESS)
+ ql_log(ql_log_warn, vha, 0x7087,
+ "Queue Pair delete failed.\n");
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 5b1287a63c49..7887f9b0950d 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -3788,6 +3788,7 @@ typedef struct scsi_qla_host {
+ struct qla8044_reset_template reset_tmplt;
+ struct qla_tgt_counters tgt_counters;
+ uint16_t bbcr;
++ wait_queue_head_t vref_waitq;
+ } scsi_qla_host_t;
+
+ struct qla27xx_image_status {
+@@ -3843,14 +3844,17 @@ struct qla2_sgx {
+ mb(); \
+ if (__vha->flags.delete_progress) { \
+ atomic_dec(&__vha->vref_count); \
++ wake_up(&__vha->vref_waitq); \
+ __bail = 1; \
+ } else { \
+ __bail = 0; \
+ } \
+ } while (0)
+
+-#define QLA_VHA_MARK_NOT_BUSY(__vha) \
++#define QLA_VHA_MARK_NOT_BUSY(__vha) do { \
+ atomic_dec(&__vha->vref_count); \
++ wake_up(&__vha->vref_waitq); \
++} while (0) \
+
+ #define QLA_QPAIR_MARK_BUSY(__qpair, __bail) do { \
+ atomic_inc(&__qpair->ref_count); \
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 7b6317c8c2e9..e2b2d7b6a085 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -4352,6 +4352,7 @@ qla2x00_update_fcports(scsi_qla_host_t *base_vha)
+ }
+ }
+ atomic_dec(&vha->vref_count);
++ wake_up(&vha->vref_waitq);
+ }
+ spin_unlock_irqrestore(&ha->vport_slock, flags);
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
+index c6d6f0d912ff..09a490c98763 100644
+--- a/drivers/scsi/qla2xxx/qla_mid.c
++++ b/drivers/scsi/qla2xxx/qla_mid.c
+@@ -74,13 +74,14 @@ qla24xx_deallocate_vp_id(scsi_qla_host_t *vha)
+ * ensures no active vp_list traversal while the vport is removed
+ * from the queue)
+ */
+- spin_lock_irqsave(&ha->vport_slock, flags);
+- while (atomic_read(&vha->vref_count)) {
+- spin_unlock_irqrestore(&ha->vport_slock, flags);
+-
+- msleep(500);
++ wait_event_timeout(vha->vref_waitq, atomic_read(&vha->vref_count),
++ 10*HZ);
+
+- spin_lock_irqsave(&ha->vport_slock, flags);
++ spin_lock_irqsave(&ha->vport_slock, flags);
++ if (atomic_read(&vha->vref_count)) {
++ ql_dbg(ql_dbg_vport, vha, 0xfffa,
++ "vha->vref_count=%u timeout\n", vha->vref_count.counter);
++ vha->vref_count = (atomic_t)ATOMIC_INIT(0);
+ }
+ list_del(&vha->list);
+ qlt_update_vp_map(vha, RESET_VP_IDX);
+@@ -269,6 +270,7 @@ qla2x00_alert_all_vps(struct rsp_que *rsp, uint16_t *mb)
+
+ spin_lock_irqsave(&ha->vport_slock, flags);
+ atomic_dec(&vha->vref_count);
++ wake_up(&vha->vref_waitq);
+ }
+ i++;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 17cdd1d09a57..dc79524178ad 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -4215,6 +4215,7 @@ struct scsi_qla_host *qla2x00_create_host(struct scsi_host_template *sht,
+
+ spin_lock_init(&vha->work_lock);
+ spin_lock_init(&vha->cmd_list_lock);
++ init_waitqueue_head(&vha->vref_waitq);
+
+ sprintf(vha->host_str, "%s_%ld", QLA2XXX_DRIVER_NAME, vha->host_no);
+ ql_dbg(ql_dbg_init, vha, 0x0041,
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index 772f15821242..4387afabebfd 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -2497,8 +2497,8 @@ static int musb_remove(struct platform_device *pdev)
+ pm_runtime_get_sync(musb->controller);
+ musb_host_cleanup(musb);
+ musb_gadget_cleanup(musb);
+- spin_lock_irqsave(&musb->lock, flags);
+ musb_platform_disable(musb);
++ spin_lock_irqsave(&musb->lock, flags);
+ musb_generic_disable(musb);
+ spin_unlock_irqrestore(&musb->lock, flags);
+ musb_writeb(musb->mregs, MUSB_DEVCTL, 0);
+diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
+index 9d2738e9217f..2c2e6792f7e0 100644
+--- a/drivers/virtio/virtio_balloon.c
++++ b/drivers/virtio/virtio_balloon.c
+@@ -427,6 +427,8 @@ static int init_vqs(struct virtio_balloon *vb)
+ * Prime this virtqueue with one buffer so the hypervisor can
+ * use it to signal us later (it can't be broken yet!).
+ */
++ update_balloon_stats(vb);
++
+ sg_init_one(&sg, vb->stats, sizeof vb->stats);
+ if (virtqueue_add_outbuf(vb->stats_vq, &sg, 1, vb, GFP_KERNEL)
+ < 0)
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 70ef2b1901e4..bf06ec6d7650 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1729,12 +1729,11 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
+ #ifdef CONFIG_SMP
+ if (tsk_nr_cpus_allowed(p) > 1 && rq->dl.overloaded)
+ queue_push_tasks(rq);
+-#else
++#endif
+ if (dl_task(rq->curr))
+ check_preempt_curr_dl(rq, p, 0);
+ else
+ resched_curr(rq);
+-#endif
+ }
+ }
+
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 2516b8df6dbb..f139f22ce30d 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -2198,10 +2198,9 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
+ #ifdef CONFIG_SMP
+ if (tsk_nr_cpus_allowed(p) > 1 && rq->rt.overloaded)
+ queue_push_tasks(rq);
+-#else
++#endif /* CONFIG_SMP */
+ if (p->prio < rq->curr->prio)
+ resched_curr(rq);
+-#endif /* CONFIG_SMP */
+ }
+ }
+
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 177e208e8ff5..3c8f5b70abf8 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -3062,6 +3062,11 @@ static int __net_init xfrm_net_init(struct net *net)
+ {
+ int rv;
+
++ /* Initialize the per-net locks here */
++ spin_lock_init(&net->xfrm.xfrm_state_lock);
++ spin_lock_init(&net->xfrm.xfrm_policy_lock);
++ mutex_init(&net->xfrm.xfrm_cfg_mutex);
++
+ rv = xfrm_statistics_init(net);
+ if (rv < 0)
+ goto out_statistics;
+@@ -3078,11 +3083,6 @@ static int __net_init xfrm_net_init(struct net *net)
+ if (rv < 0)
+ goto out;
+
+- /* Initialize the per-net locks here */
+- spin_lock_init(&net->xfrm.xfrm_state_lock);
+- spin_lock_init(&net->xfrm.xfrm_policy_lock);
+- mutex_init(&net->xfrm.xfrm_cfg_mutex);
+-
+ return 0;
+
+ out:
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 9705c279494b..40a8aa39220d 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -412,7 +412,14 @@ static inline int xfrm_replay_verify_len(struct xfrm_replay_state_esn *replay_es
+ up = nla_data(rp);
+ ulen = xfrm_replay_state_esn_len(up);
+
+- if (nla_len(rp) < ulen || xfrm_replay_state_esn_len(replay_esn) != ulen)
++ /* Check the overall length and the internal bitmap length to avoid
++ * potential overflow. */
++ if (nla_len(rp) < ulen ||
++ xfrm_replay_state_esn_len(replay_esn) != ulen ||
++ replay_esn->bmp_len != up->bmp_len)
++ return -EINVAL;
++
++ if (up->replay_window > up->bmp_len * sizeof(__u32) * 8)
+ return -EINVAL;
+
+ return 0;
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-04-08 13:51 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-04-08 13:51 UTC (permalink / raw
To: gentoo-commits
commit: 8fb2c956e0adbcdcac001eff148fcbf3b7d81ae6
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 8 13:51:03 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Apr 8 13:51:03 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8fb2c956
Linux patch 4.10.9
0000_README | 4 +
1008_linux-4.10.9.patch | 4556 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4560 insertions(+)
diff --git a/0000_README b/0000_README
index 4c7de50..5f8d5b0 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 1007_linux-4.10.8.patch
From: http://www.kernel.org
Desc: Linux 4.10.8
+Patch: 1008_linux-4.10.9.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.9
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1008_linux-4.10.9.patch b/1008_linux-4.10.9.patch
new file mode 100644
index 0000000..1aba6be
--- /dev/null
+++ b/1008_linux-4.10.9.patch
@@ -0,0 +1,4556 @@
+diff --git a/Documentation/devicetree/bindings/rng/omap_rng.txt b/Documentation/devicetree/bindings/rng/omap_rng.txt
+index 471477299ece..9cf7876ab434 100644
+--- a/Documentation/devicetree/bindings/rng/omap_rng.txt
++++ b/Documentation/devicetree/bindings/rng/omap_rng.txt
+@@ -12,7 +12,8 @@ Required properties:
+ - reg : Offset and length of the register set for the module
+ - interrupts : the interrupt number for the RNG module.
+ Used for "ti,omap4-rng" and "inside-secure,safexcel-eip76"
+-- clocks: the trng clock source
++- clocks: the trng clock source. Only mandatory for the
++ "inside-secure,safexcel-eip76" compatible.
+
+ Example:
+ /* AM335x */
+diff --git a/Makefile b/Makefile
+index 82e0809fed9b..4ebd511dee58 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c
+index d408fa21a07c..928562967f3c 100644
+--- a/arch/arc/mm/cache.c
++++ b/arch/arc/mm/cache.c
+@@ -633,6 +633,9 @@ noinline static void slc_entire_op(const int op)
+
+ write_aux_reg(ARC_REG_SLC_INVALIDATE, 1);
+
++ /* Make sure "busy" bit reports correct stataus, see STAR 9001165532 */
++ read_aux_reg(r);
++
+ /* Important to wait for flush to complete */
+ while (read_aux_reg(r) & SLC_CTRL_BUSY);
+ }
+diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
+index f09a2bb08979..4b6049240ec2 100644
+--- a/arch/arm/boot/dts/bcm5301x.dtsi
++++ b/arch/arm/boot/dts/bcm5301x.dtsi
+@@ -66,14 +66,14 @@
+ timer@20200 {
+ compatible = "arm,cortex-a9-global-timer";
+ reg = <0x20200 0x100>;
+- interrupts = <GIC_PPI 11 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_PPI 11 IRQ_TYPE_EDGE_RISING>;
+ clocks = <&periph_clk>;
+ };
+
+ local-timer@20600 {
+ compatible = "arm,cortex-a9-twd-timer";
+ reg = <0x20600 0x100>;
+- interrupts = <GIC_PPI 13 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_PPI 13 IRQ_TYPE_EDGE_RISING>;
+ clocks = <&periph_clk>;
+ };
+
+diff --git a/arch/mips/lantiq/irq.c b/arch/mips/lantiq/irq.c
+index 8ac0e5994ed2..0ddf3698b85d 100644
+--- a/arch/mips/lantiq/irq.c
++++ b/arch/mips/lantiq/irq.c
+@@ -269,6 +269,11 @@ static void ltq_hw5_irqdispatch(void)
+ DEFINE_HWx_IRQDISPATCH(5)
+ #endif
+
++static void ltq_hw_irq_handler(struct irq_desc *desc)
++{
++ ltq_hw_irqdispatch(irq_desc_get_irq(desc) - 2);
++}
++
+ #ifdef CONFIG_MIPS_MT_SMP
+ void __init arch_init_ipiirq(int irq, struct irqaction *action)
+ {
+@@ -313,23 +318,19 @@ static struct irqaction irq_call = {
+ asmlinkage void plat_irq_dispatch(void)
+ {
+ unsigned int pending = read_c0_status() & read_c0_cause() & ST0_IM;
+- unsigned int i;
+-
+- if ((MIPS_CPU_TIMER_IRQ == 7) && (pending & CAUSEF_IP7)) {
+- do_IRQ(MIPS_CPU_TIMER_IRQ);
+- goto out;
+- } else {
+- for (i = 0; i < MAX_IM; i++) {
+- if (pending & (CAUSEF_IP2 << i)) {
+- ltq_hw_irqdispatch(i);
+- goto out;
+- }
+- }
++ int irq;
++
++ if (!pending) {
++ spurious_interrupt();
++ return;
+ }
+- pr_alert("Spurious IRQ: CAUSE=0x%08x\n", read_c0_status());
+
+-out:
+- return;
++ pending >>= CAUSEB_IP;
++ while (pending) {
++ irq = fls(pending) - 1;
++ do_IRQ(MIPS_CPU_IRQ_BASE + irq);
++ pending &= ~BIT(irq);
++ }
+ }
+
+ static int icu_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hw)
+@@ -354,11 +355,6 @@ static const struct irq_domain_ops irq_domain_ops = {
+ .map = icu_map,
+ };
+
+-static struct irqaction cascade = {
+- .handler = no_action,
+- .name = "cascade",
+-};
+-
+ int __init icu_of_init(struct device_node *node, struct device_node *parent)
+ {
+ struct device_node *eiu_node;
+@@ -390,7 +386,7 @@ int __init icu_of_init(struct device_node *node, struct device_node *parent)
+ mips_cpu_irq_init();
+
+ for (i = 0; i < MAX_IM; i++)
+- setup_irq(i + 2, &cascade);
++ irq_set_chained_handler(i + 2, ltq_hw_irq_handler);
+
+ if (cpu_has_vint) {
+ pr_info("Setting up vectored interrupts\n");
+diff --git a/arch/parisc/include/asm/uaccess.h b/arch/parisc/include/asm/uaccess.h
+index 9a2aee1b90fc..7fcf5128996a 100644
+--- a/arch/parisc/include/asm/uaccess.h
++++ b/arch/parisc/include/asm/uaccess.h
+@@ -68,6 +68,15 @@ struct exception_table_entry {
+ ".previous\n"
+
+ /*
++ * ASM_EXCEPTIONTABLE_ENTRY_EFAULT() creates a special exception table entry
++ * (with lowest bit set) for which the fault handler in fixup_exception() will
++ * load -EFAULT into %r8 for a read or write fault, and zeroes the target
++ * register in case of a read fault in get_user().
++ */
++#define ASM_EXCEPTIONTABLE_ENTRY_EFAULT( fault_addr, except_addr )\
++ ASM_EXCEPTIONTABLE_ENTRY( fault_addr, except_addr + 1)
++
++/*
+ * The page fault handler stores, in a per-cpu area, the following information
+ * if a fixup routine is available.
+ */
+@@ -94,7 +103,7 @@ struct exception_data {
+ #define __get_user(x, ptr) \
+ ({ \
+ register long __gu_err __asm__ ("r8") = 0; \
+- register long __gu_val __asm__ ("r9") = 0; \
++ register long __gu_val; \
+ \
+ load_sr2(); \
+ switch (sizeof(*(ptr))) { \
+@@ -110,22 +119,23 @@ struct exception_data {
+ })
+
+ #define __get_user_asm(ldx, ptr) \
+- __asm__("\n1:\t" ldx "\t0(%%sr2,%2),%0\n\t" \
+- ASM_EXCEPTIONTABLE_ENTRY(1b, fixup_get_user_skip_1)\
++ __asm__("1: " ldx " 0(%%sr2,%2),%0\n" \
++ "9:\n" \
++ ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \
+ : "=r"(__gu_val), "=r"(__gu_err) \
+- : "r"(ptr), "1"(__gu_err) \
+- : "r1");
++ : "r"(ptr), "1"(__gu_err));
+
+ #if !defined(CONFIG_64BIT)
+
+ #define __get_user_asm64(ptr) \
+- __asm__("\n1:\tldw 0(%%sr2,%2),%0" \
+- "\n2:\tldw 4(%%sr2,%2),%R0\n\t" \
+- ASM_EXCEPTIONTABLE_ENTRY(1b, fixup_get_user_skip_2)\
+- ASM_EXCEPTIONTABLE_ENTRY(2b, fixup_get_user_skip_1)\
++ __asm__(" copy %%r0,%R0\n" \
++ "1: ldw 0(%%sr2,%2),%0\n" \
++ "2: ldw 4(%%sr2,%2),%R0\n" \
++ "9:\n" \
++ ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \
++ ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b) \
+ : "=r"(__gu_val), "=r"(__gu_err) \
+- : "r"(ptr), "1"(__gu_err) \
+- : "r1");
++ : "r"(ptr), "1"(__gu_err));
+
+ #endif /* !defined(CONFIG_64BIT) */
+
+@@ -151,32 +161,31 @@ struct exception_data {
+ * The "__put_user/kernel_asm()" macros tell gcc they read from memory
+ * instead of writing. This is because they do not write to any memory
+ * gcc knows about, so there are no aliasing issues. These macros must
+- * also be aware that "fixup_put_user_skip_[12]" are executed in the
+- * context of the fault, and any registers used there must be listed
+- * as clobbers. In this case only "r1" is used by the current routines.
+- * r8/r9 are already listed as err/val.
++ * also be aware that fixups are executed in the context of the fault,
++ * and any registers used there must be listed as clobbers.
++ * r8 is already listed as err.
+ */
+
+ #define __put_user_asm(stx, x, ptr) \
+ __asm__ __volatile__ ( \
+- "\n1:\t" stx "\t%2,0(%%sr2,%1)\n\t" \
+- ASM_EXCEPTIONTABLE_ENTRY(1b, fixup_put_user_skip_1)\
++ "1: " stx " %2,0(%%sr2,%1)\n" \
++ "9:\n" \
++ ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \
+ : "=r"(__pu_err) \
+- : "r"(ptr), "r"(x), "0"(__pu_err) \
+- : "r1")
++ : "r"(ptr), "r"(x), "0"(__pu_err))
+
+
+ #if !defined(CONFIG_64BIT)
+
+ #define __put_user_asm64(__val, ptr) do { \
+ __asm__ __volatile__ ( \
+- "\n1:\tstw %2,0(%%sr2,%1)" \
+- "\n2:\tstw %R2,4(%%sr2,%1)\n\t" \
+- ASM_EXCEPTIONTABLE_ENTRY(1b, fixup_put_user_skip_2)\
+- ASM_EXCEPTIONTABLE_ENTRY(2b, fixup_put_user_skip_1)\
++ "1: stw %2,0(%%sr2,%1)\n" \
++ "2: stw %R2,4(%%sr2,%1)\n" \
++ "9:\n" \
++ ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \
++ ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b) \
+ : "=r"(__pu_err) \
+- : "r"(ptr), "r"(__val), "0"(__pu_err) \
+- : "r1"); \
++ : "r"(ptr), "r"(__val), "0"(__pu_err)); \
+ } while (0)
+
+ #endif /* !defined(CONFIG_64BIT) */
+diff --git a/arch/parisc/kernel/parisc_ksyms.c b/arch/parisc/kernel/parisc_ksyms.c
+index 7484b3d11e0d..c6d6272a934f 100644
+--- a/arch/parisc/kernel/parisc_ksyms.c
++++ b/arch/parisc/kernel/parisc_ksyms.c
+@@ -47,16 +47,6 @@ EXPORT_SYMBOL(__cmpxchg_u64);
+ EXPORT_SYMBOL(lclear_user);
+ EXPORT_SYMBOL(lstrnlen_user);
+
+-/* Global fixups - defined as int to avoid creation of function pointers */
+-extern int fixup_get_user_skip_1;
+-extern int fixup_get_user_skip_2;
+-extern int fixup_put_user_skip_1;
+-extern int fixup_put_user_skip_2;
+-EXPORT_SYMBOL(fixup_get_user_skip_1);
+-EXPORT_SYMBOL(fixup_get_user_skip_2);
+-EXPORT_SYMBOL(fixup_put_user_skip_1);
+-EXPORT_SYMBOL(fixup_put_user_skip_2);
+-
+ #ifndef CONFIG_64BIT
+ /* Needed so insmod can set dp value */
+ extern int $global$;
+diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
+index 9e2d98ee6f9c..3286cbc7b934 100644
+--- a/arch/parisc/kernel/process.c
++++ b/arch/parisc/kernel/process.c
+@@ -140,6 +140,8 @@ void machine_power_off(void)
+ printk(KERN_EMERG "System shut down completed.\n"
+ "Please power this system off now.");
+
++ /* prevent soft lockup/stalled CPU messages for endless loop. */
++ rcu_sysrq_start();
+ for (;;);
+ }
+
+diff --git a/arch/parisc/lib/Makefile b/arch/parisc/lib/Makefile
+index 8fa92b8d839a..f2dac4d73b1b 100644
+--- a/arch/parisc/lib/Makefile
++++ b/arch/parisc/lib/Makefile
+@@ -2,7 +2,7 @@
+ # Makefile for parisc-specific library files
+ #
+
+-lib-y := lusercopy.o bitops.o checksum.o io.o memset.o fixup.o memcpy.o \
++lib-y := lusercopy.o bitops.o checksum.o io.o memset.o memcpy.o \
+ ucmpdi2.o delay.o
+
+ obj-y := iomap.o
+diff --git a/arch/parisc/lib/fixup.S b/arch/parisc/lib/fixup.S
+deleted file mode 100644
+index a5b72f22c7a6..000000000000
+--- a/arch/parisc/lib/fixup.S
++++ /dev/null
+@@ -1,98 +0,0 @@
+-/*
+- * Linux/PA-RISC Project (http://www.parisc-linux.org/)
+- *
+- * Copyright (C) 2004 Randolph Chung <tausq@debian.org>
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License as published by
+- * the Free Software Foundation; either version 2, or (at your option)
+- * any later version.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program; if not, write to the Free Software
+- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+- *
+- * Fixup routines for kernel exception handling.
+- */
+-#include <asm/asm-offsets.h>
+-#include <asm/assembly.h>
+-#include <asm/errno.h>
+-#include <linux/linkage.h>
+-
+-#ifdef CONFIG_SMP
+- .macro get_fault_ip t1 t2
+- loadgp
+- addil LT%__per_cpu_offset,%r27
+- LDREG RT%__per_cpu_offset(%r1),\t1
+- /* t2 = smp_processor_id() */
+- mfctl 30,\t2
+- ldw TI_CPU(\t2),\t2
+-#ifdef CONFIG_64BIT
+- extrd,u \t2,63,32,\t2
+-#endif
+- /* t2 = &__per_cpu_offset[smp_processor_id()]; */
+- LDREGX \t2(\t1),\t2
+- addil LT%exception_data,%r27
+- LDREG RT%exception_data(%r1),\t1
+- /* t1 = this_cpu_ptr(&exception_data) */
+- add,l \t1,\t2,\t1
+- /* %r27 = t1->fault_gp - restore gp */
+- LDREG EXCDATA_GP(\t1), %r27
+- /* t1 = t1->fault_ip */
+- LDREG EXCDATA_IP(\t1), \t1
+- .endm
+-#else
+- .macro get_fault_ip t1 t2
+- loadgp
+- /* t1 = this_cpu_ptr(&exception_data) */
+- addil LT%exception_data,%r27
+- LDREG RT%exception_data(%r1),\t2
+- /* %r27 = t2->fault_gp - restore gp */
+- LDREG EXCDATA_GP(\t2), %r27
+- /* t1 = t2->fault_ip */
+- LDREG EXCDATA_IP(\t2), \t1
+- .endm
+-#endif
+-
+- .level LEVEL
+-
+- .text
+- .section .fixup, "ax"
+-
+- /* get_user() fixups, store -EFAULT in r8, and 0 in r9 */
+-ENTRY_CFI(fixup_get_user_skip_1)
+- get_fault_ip %r1,%r8
+- ldo 4(%r1), %r1
+- ldi -EFAULT, %r8
+- bv %r0(%r1)
+- copy %r0, %r9
+-ENDPROC_CFI(fixup_get_user_skip_1)
+-
+-ENTRY_CFI(fixup_get_user_skip_2)
+- get_fault_ip %r1,%r8
+- ldo 8(%r1), %r1
+- ldi -EFAULT, %r8
+- bv %r0(%r1)
+- copy %r0, %r9
+-ENDPROC_CFI(fixup_get_user_skip_2)
+-
+- /* put_user() fixups, store -EFAULT in r8 */
+-ENTRY_CFI(fixup_put_user_skip_1)
+- get_fault_ip %r1,%r8
+- ldo 4(%r1), %r1
+- bv %r0(%r1)
+- ldi -EFAULT, %r8
+-ENDPROC_CFI(fixup_put_user_skip_1)
+-
+-ENTRY_CFI(fixup_put_user_skip_2)
+- get_fault_ip %r1,%r8
+- ldo 8(%r1), %r1
+- bv %r0(%r1)
+- ldi -EFAULT, %r8
+-ENDPROC_CFI(fixup_put_user_skip_2)
+-
+diff --git a/arch/parisc/lib/lusercopy.S b/arch/parisc/lib/lusercopy.S
+index 56845de6b5df..f01188c044ee 100644
+--- a/arch/parisc/lib/lusercopy.S
++++ b/arch/parisc/lib/lusercopy.S
+@@ -5,6 +5,8 @@
+ * Copyright (C) 2000 Richard Hirst <rhirst with parisc-linux.org>
+ * Copyright (C) 2001 Matthieu Delahaye <delahaym at esiee.fr>
+ * Copyright (C) 2003 Randolph Chung <tausq with parisc-linux.org>
++ * Copyright (C) 2017 Helge Deller <deller@gmx.de>
++ * Copyright (C) 2017 John David Anglin <dave.anglin@bell.net>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+@@ -132,4 +134,320 @@ ENDPROC_CFI(lstrnlen_user)
+
+ .procend
+
++
++
++/*
++ * unsigned long pa_memcpy(void *dstp, const void *srcp, unsigned long len)
++ *
++ * Inputs:
++ * - sr1 already contains space of source region
++ * - sr2 already contains space of destination region
++ *
++ * Returns:
++ * - number of bytes that could not be copied.
++ * On success, this will be zero.
++ *
++ * This code is based on a C-implementation of a copy routine written by
++ * Randolph Chung, which in turn was derived from the glibc.
++ *
++ * Several strategies are tried to try to get the best performance for various
++ * conditions. In the optimal case, we copy by loops that copy 32- or 16-bytes
++ * at a time using general registers. Unaligned copies are handled either by
++ * aligning the destination and then using shift-and-write method, or in a few
++ * cases by falling back to a byte-at-a-time copy.
++ *
++ * Testing with various alignments and buffer sizes shows that this code is
++ * often >10x faster than a simple byte-at-a-time copy, even for strangely
++ * aligned operands. It is interesting to note that the glibc version of memcpy
++ * (written in C) is actually quite fast already. This routine is able to beat
++ * it by 30-40% for aligned copies because of the loop unrolling, but in some
++ * cases the glibc version is still slightly faster. This lends more
++ * credibility that gcc can generate very good code as long as we are careful.
++ *
++ * Possible optimizations:
++ * - add cache prefetching
++ * - try not to use the post-increment address modifiers; they may create
++ * additional interlocks. Assumption is that those were only efficient on old
++ * machines (pre PA8000 processors)
++ */
++
++ dst = arg0
++ src = arg1
++ len = arg2
++ end = arg3
++ t1 = r19
++ t2 = r20
++ t3 = r21
++ t4 = r22
++ srcspc = sr1
++ dstspc = sr2
++
++ t0 = r1
++ a1 = t1
++ a2 = t2
++ a3 = t3
++ a0 = t4
++
++ save_src = ret0
++ save_dst = ret1
++ save_len = r31
++
++ENTRY_CFI(pa_memcpy)
++ .proc
++ .callinfo NO_CALLS
++ .entry
++
++ /* Last destination address */
++ add dst,len,end
++
++ /* short copy with less than 16 bytes? */
++ cmpib,>>=,n 15,len,.Lbyte_loop
++
++ /* same alignment? */
++ xor src,dst,t0
++ extru t0,31,2,t1
++ cmpib,<>,n 0,t1,.Lunaligned_copy
++
++#ifdef CONFIG_64BIT
++ /* only do 64-bit copies if we can get aligned. */
++ extru t0,31,3,t1
++ cmpib,<>,n 0,t1,.Lalign_loop32
++
++ /* loop until we are 64-bit aligned */
++.Lalign_loop64:
++ extru dst,31,3,t1
++ cmpib,=,n 0,t1,.Lcopy_loop_16
++20: ldb,ma 1(srcspc,src),t1
++21: stb,ma t1,1(dstspc,dst)
++ b .Lalign_loop64
++ ldo -1(len),len
++
++ ASM_EXCEPTIONTABLE_ENTRY(20b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(21b,.Lcopy_done)
++
++ ldi 31,t0
++.Lcopy_loop_16:
++ cmpb,COND(>>=),n t0,len,.Lword_loop
++
++10: ldd 0(srcspc,src),t1
++11: ldd 8(srcspc,src),t2
++ ldo 16(src),src
++12: std,ma t1,8(dstspc,dst)
++13: std,ma t2,8(dstspc,dst)
++14: ldd 0(srcspc,src),t1
++15: ldd 8(srcspc,src),t2
++ ldo 16(src),src
++16: std,ma t1,8(dstspc,dst)
++17: std,ma t2,8(dstspc,dst)
++
++ ASM_EXCEPTIONTABLE_ENTRY(10b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(11b,.Lcopy16_fault)
++ ASM_EXCEPTIONTABLE_ENTRY(12b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(13b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(14b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(15b,.Lcopy16_fault)
++ ASM_EXCEPTIONTABLE_ENTRY(16b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(17b,.Lcopy_done)
++
++ b .Lcopy_loop_16
++ ldo -32(len),len
++
++.Lword_loop:
++ cmpib,COND(>>=),n 3,len,.Lbyte_loop
++20: ldw,ma 4(srcspc,src),t1
++21: stw,ma t1,4(dstspc,dst)
++ b .Lword_loop
++ ldo -4(len),len
++
++ ASM_EXCEPTIONTABLE_ENTRY(20b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(21b,.Lcopy_done)
++
++#endif /* CONFIG_64BIT */
++
++ /* loop until we are 32-bit aligned */
++.Lalign_loop32:
++ extru dst,31,2,t1
++ cmpib,=,n 0,t1,.Lcopy_loop_4
++20: ldb,ma 1(srcspc,src),t1
++21: stb,ma t1,1(dstspc,dst)
++ b .Lalign_loop32
++ ldo -1(len),len
++
++ ASM_EXCEPTIONTABLE_ENTRY(20b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(21b,.Lcopy_done)
++
++
++.Lcopy_loop_4:
++ cmpib,COND(>>=),n 15,len,.Lbyte_loop
++
++10: ldw 0(srcspc,src),t1
++11: ldw 4(srcspc,src),t2
++12: stw,ma t1,4(dstspc,dst)
++13: stw,ma t2,4(dstspc,dst)
++14: ldw 8(srcspc,src),t1
++15: ldw 12(srcspc,src),t2
++ ldo 16(src),src
++16: stw,ma t1,4(dstspc,dst)
++17: stw,ma t2,4(dstspc,dst)
++
++ ASM_EXCEPTIONTABLE_ENTRY(10b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(11b,.Lcopy8_fault)
++ ASM_EXCEPTIONTABLE_ENTRY(12b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(13b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(14b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(15b,.Lcopy8_fault)
++ ASM_EXCEPTIONTABLE_ENTRY(16b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(17b,.Lcopy_done)
++
++ b .Lcopy_loop_4
++ ldo -16(len),len
++
++.Lbyte_loop:
++ cmpclr,COND(<>) len,%r0,%r0
++ b,n .Lcopy_done
++20: ldb 0(srcspc,src),t1
++ ldo 1(src),src
++21: stb,ma t1,1(dstspc,dst)
++ b .Lbyte_loop
++ ldo -1(len),len
++
++ ASM_EXCEPTIONTABLE_ENTRY(20b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(21b,.Lcopy_done)
++
++.Lcopy_done:
++ bv %r0(%r2)
++ sub end,dst,ret0
++
++
++ /* src and dst are not aligned the same way. */
++ /* need to go the hard way */
++.Lunaligned_copy:
++ /* align until dst is 32bit-word-aligned */
++ extru dst,31,2,t1
++ cmpib,COND(=),n 0,t1,.Lcopy_dstaligned
++20: ldb 0(srcspc,src),t1
++ ldo 1(src),src
++21: stb,ma t1,1(dstspc,dst)
++ b .Lunaligned_copy
++ ldo -1(len),len
++
++ ASM_EXCEPTIONTABLE_ENTRY(20b,.Lcopy_done)
++ ASM_EXCEPTIONTABLE_ENTRY(21b,.Lcopy_done)
++
++.Lcopy_dstaligned:
++
++ /* store src, dst and len in safe place */
++ copy src,save_src
++ copy dst,save_dst
++ copy len,save_len
++
++ /* len now needs give number of words to copy */
++ SHRREG len,2,len
++
++ /*
++ * Copy from a not-aligned src to an aligned dst using shifts.
++ * Handles 4 words per loop.
++ */
++
++ depw,z src,28,2,t0
++ subi 32,t0,t0
++ mtsar t0
++ extru len,31,2,t0
++ cmpib,= 2,t0,.Lcase2
++ /* Make src aligned by rounding it down. */
++ depi 0,31,2,src
++
++ cmpiclr,<> 3,t0,%r0
++ b,n .Lcase3
++ cmpiclr,<> 1,t0,%r0
++ b,n .Lcase1
++.Lcase0:
++ cmpb,= %r0,len,.Lcda_finish
++ nop
++
++1: ldw,ma 4(srcspc,src), a3
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
++1: ldw,ma 4(srcspc,src), a0
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
++ b,n .Ldo3
++.Lcase1:
++1: ldw,ma 4(srcspc,src), a2
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
++1: ldw,ma 4(srcspc,src), a3
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
++ ldo -1(len),len
++ cmpb,=,n %r0,len,.Ldo0
++.Ldo4:
++1: ldw,ma 4(srcspc,src), a0
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
++ shrpw a2, a3, %sar, t0
++1: stw,ma t0, 4(dstspc,dst)
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcopy_done)
++.Ldo3:
++1: ldw,ma 4(srcspc,src), a1
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
++ shrpw a3, a0, %sar, t0
++1: stw,ma t0, 4(dstspc,dst)
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcopy_done)
++.Ldo2:
++1: ldw,ma 4(srcspc,src), a2
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
++ shrpw a0, a1, %sar, t0
++1: stw,ma t0, 4(dstspc,dst)
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcopy_done)
++.Ldo1:
++1: ldw,ma 4(srcspc,src), a3
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
++ shrpw a1, a2, %sar, t0
++1: stw,ma t0, 4(dstspc,dst)
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcopy_done)
++ ldo -4(len),len
++ cmpb,<> %r0,len,.Ldo4
++ nop
++.Ldo0:
++ shrpw a2, a3, %sar, t0
++1: stw,ma t0, 4(dstspc,dst)
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcopy_done)
++
++.Lcda_rdfault:
++.Lcda_finish:
++ /* calculate new src, dst and len and jump to byte-copy loop */
++ sub dst,save_dst,t0
++ add save_src,t0,src
++ b .Lbyte_loop
++ sub save_len,t0,len
++
++.Lcase3:
++1: ldw,ma 4(srcspc,src), a0
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
++1: ldw,ma 4(srcspc,src), a1
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
++ b .Ldo2
++ ldo 1(len),len
++.Lcase2:
++1: ldw,ma 4(srcspc,src), a1
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
++1: ldw,ma 4(srcspc,src), a2
++ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
++ b .Ldo1
++ ldo 2(len),len
++
++
++ /* fault exception fixup handlers: */
++#ifdef CONFIG_64BIT
++.Lcopy16_fault:
++10: b .Lcopy_done
++ std,ma t1,8(dstspc,dst)
++ ASM_EXCEPTIONTABLE_ENTRY(10b,.Lcopy_done)
++#endif
++
++.Lcopy8_fault:
++10: b .Lcopy_done
++ stw,ma t1,4(dstspc,dst)
++ ASM_EXCEPTIONTABLE_ENTRY(10b,.Lcopy_done)
++
++ .exit
++ENDPROC_CFI(pa_memcpy)
++ .procend
++
+ .end
+diff --git a/arch/parisc/lib/memcpy.c b/arch/parisc/lib/memcpy.c
+index f82ff10ed974..b3d47ec1d80a 100644
+--- a/arch/parisc/lib/memcpy.c
++++ b/arch/parisc/lib/memcpy.c
+@@ -2,7 +2,7 @@
+ * Optimized memory copy routines.
+ *
+ * Copyright (C) 2004 Randolph Chung <tausq@debian.org>
+- * Copyright (C) 2013 Helge Deller <deller@gmx.de>
++ * Copyright (C) 2013-2017 Helge Deller <deller@gmx.de>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+@@ -21,474 +21,21 @@
+ * Portions derived from the GNU C Library
+ * Copyright (C) 1991, 1997, 2003 Free Software Foundation, Inc.
+ *
+- * Several strategies are tried to try to get the best performance for various
+- * conditions. In the optimal case, we copy 64-bytes in an unrolled loop using
+- * fp regs. This is followed by loops that copy 32- or 16-bytes at a time using
+- * general registers. Unaligned copies are handled either by aligning the
+- * destination and then using shift-and-write method, or in a few cases by
+- * falling back to a byte-at-a-time copy.
+- *
+- * I chose to implement this in C because it is easier to maintain and debug,
+- * and in my experiments it appears that the C code generated by gcc (3.3/3.4
+- * at the time of writing) is fairly optimal. Unfortunately some of the
+- * semantics of the copy routine (exception handling) is difficult to express
+- * in C, so we have to play some tricks to get it to work.
+- *
+- * All the loads and stores are done via explicit asm() code in order to use
+- * the right space registers.
+- *
+- * Testing with various alignments and buffer sizes shows that this code is
+- * often >10x faster than a simple byte-at-a-time copy, even for strangely
+- * aligned operands. It is interesting to note that the glibc version
+- * of memcpy (written in C) is actually quite fast already. This routine is
+- * able to beat it by 30-40% for aligned copies because of the loop unrolling,
+- * but in some cases the glibc version is still slightly faster. This lends
+- * more credibility that gcc can generate very good code as long as we are
+- * careful.
+- *
+- * TODO:
+- * - cache prefetching needs more experimentation to get optimal settings
+- * - try not to use the post-increment address modifiers; they create additional
+- * interlocks
+- * - replace byte-copy loops with stybs sequences
+ */
+
+-#ifdef __KERNEL__
+ #include <linux/module.h>
+ #include <linux/compiler.h>
+ #include <linux/uaccess.h>
+-#define s_space "%%sr1"
+-#define d_space "%%sr2"
+-#else
+-#include "memcpy.h"
+-#define s_space "%%sr0"
+-#define d_space "%%sr0"
+-#define pa_memcpy new2_copy
+-#endif
+
+ DECLARE_PER_CPU(struct exception_data, exception_data);
+
+-#define preserve_branch(label) do { \
+- volatile int dummy = 0; \
+- /* The following branch is never taken, it's just here to */ \
+- /* prevent gcc from optimizing away our exception code. */ \
+- if (unlikely(dummy != dummy)) \
+- goto label; \
+-} while (0)
+-
+ #define get_user_space() (segment_eq(get_fs(), KERNEL_DS) ? 0 : mfsp(3))
+ #define get_kernel_space() (0)
+
+-#define MERGE(w0, sh_1, w1, sh_2) ({ \
+- unsigned int _r; \
+- asm volatile ( \
+- "mtsar %3\n" \
+- "shrpw %1, %2, %%sar, %0\n" \
+- : "=r"(_r) \
+- : "r"(w0), "r"(w1), "r"(sh_2) \
+- ); \
+- _r; \
+-})
+-#define THRESHOLD 16
+-
+-#ifdef DEBUG_MEMCPY
+-#define DPRINTF(fmt, args...) do { printk(KERN_DEBUG "%s:%d:%s ", __FILE__, __LINE__, __func__ ); printk(KERN_DEBUG fmt, ##args ); } while (0)
+-#else
+-#define DPRINTF(fmt, args...)
+-#endif
+-
+-#define def_load_ai_insn(_insn,_sz,_tt,_s,_a,_t,_e) \
+- __asm__ __volatile__ ( \
+- "1:\t" #_insn ",ma " #_sz "(" _s ",%1), %0\n\t" \
+- ASM_EXCEPTIONTABLE_ENTRY(1b,_e) \
+- : _tt(_t), "+r"(_a) \
+- : \
+- : "r8")
+-
+-#define def_store_ai_insn(_insn,_sz,_tt,_s,_a,_t,_e) \
+- __asm__ __volatile__ ( \
+- "1:\t" #_insn ",ma %1, " #_sz "(" _s ",%0)\n\t" \
+- ASM_EXCEPTIONTABLE_ENTRY(1b,_e) \
+- : "+r"(_a) \
+- : _tt(_t) \
+- : "r8")
+-
+-#define ldbma(_s, _a, _t, _e) def_load_ai_insn(ldbs,1,"=r",_s,_a,_t,_e)
+-#define stbma(_s, _t, _a, _e) def_store_ai_insn(stbs,1,"r",_s,_a,_t,_e)
+-#define ldwma(_s, _a, _t, _e) def_load_ai_insn(ldw,4,"=r",_s,_a,_t,_e)
+-#define stwma(_s, _t, _a, _e) def_store_ai_insn(stw,4,"r",_s,_a,_t,_e)
+-#define flddma(_s, _a, _t, _e) def_load_ai_insn(fldd,8,"=f",_s,_a,_t,_e)
+-#define fstdma(_s, _t, _a, _e) def_store_ai_insn(fstd,8,"f",_s,_a,_t,_e)
+-
+-#define def_load_insn(_insn,_tt,_s,_o,_a,_t,_e) \
+- __asm__ __volatile__ ( \
+- "1:\t" #_insn " " #_o "(" _s ",%1), %0\n\t" \
+- ASM_EXCEPTIONTABLE_ENTRY(1b,_e) \
+- : _tt(_t) \
+- : "r"(_a) \
+- : "r8")
+-
+-#define def_store_insn(_insn,_tt,_s,_t,_o,_a,_e) \
+- __asm__ __volatile__ ( \
+- "1:\t" #_insn " %0, " #_o "(" _s ",%1)\n\t" \
+- ASM_EXCEPTIONTABLE_ENTRY(1b,_e) \
+- : \
+- : _tt(_t), "r"(_a) \
+- : "r8")
+-
+-#define ldw(_s,_o,_a,_t,_e) def_load_insn(ldw,"=r",_s,_o,_a,_t,_e)
+-#define stw(_s,_t,_o,_a,_e) def_store_insn(stw,"r",_s,_t,_o,_a,_e)
+-
+-#ifdef CONFIG_PREFETCH
+-static inline void prefetch_src(const void *addr)
+-{
+- __asm__("ldw 0(" s_space ",%0), %%r0" : : "r" (addr));
+-}
+-
+-static inline void prefetch_dst(const void *addr)
+-{
+- __asm__("ldd 0(" d_space ",%0), %%r0" : : "r" (addr));
+-}
+-#else
+-#define prefetch_src(addr) do { } while(0)
+-#define prefetch_dst(addr) do { } while(0)
+-#endif
+-
+-#define PA_MEMCPY_OK 0
+-#define PA_MEMCPY_LOAD_ERROR 1
+-#define PA_MEMCPY_STORE_ERROR 2
+-
+-/* Copy from a not-aligned src to an aligned dst, using shifts. Handles 4 words
+- * per loop. This code is derived from glibc.
+- */
+-static noinline unsigned long copy_dstaligned(unsigned long dst,
+- unsigned long src, unsigned long len)
+-{
+- /* gcc complains that a2 and a3 may be uninitialized, but actually
+- * they cannot be. Initialize a2/a3 to shut gcc up.
+- */
+- register unsigned int a0, a1, a2 = 0, a3 = 0;
+- int sh_1, sh_2;
+-
+- /* prefetch_src((const void *)src); */
+-
+- /* Calculate how to shift a word read at the memory operation
+- aligned srcp to make it aligned for copy. */
+- sh_1 = 8 * (src % sizeof(unsigned int));
+- sh_2 = 8 * sizeof(unsigned int) - sh_1;
+-
+- /* Make src aligned by rounding it down. */
+- src &= -sizeof(unsigned int);
+-
+- switch (len % 4)
+- {
+- case 2:
+- /* a1 = ((unsigned int *) src)[0];
+- a2 = ((unsigned int *) src)[1]; */
+- ldw(s_space, 0, src, a1, cda_ldw_exc);
+- ldw(s_space, 4, src, a2, cda_ldw_exc);
+- src -= 1 * sizeof(unsigned int);
+- dst -= 3 * sizeof(unsigned int);
+- len += 2;
+- goto do1;
+- case 3:
+- /* a0 = ((unsigned int *) src)[0];
+- a1 = ((unsigned int *) src)[1]; */
+- ldw(s_space, 0, src, a0, cda_ldw_exc);
+- ldw(s_space, 4, src, a1, cda_ldw_exc);
+- src -= 0 * sizeof(unsigned int);
+- dst -= 2 * sizeof(unsigned int);
+- len += 1;
+- goto do2;
+- case 0:
+- if (len == 0)
+- return PA_MEMCPY_OK;
+- /* a3 = ((unsigned int *) src)[0];
+- a0 = ((unsigned int *) src)[1]; */
+- ldw(s_space, 0, src, a3, cda_ldw_exc);
+- ldw(s_space, 4, src, a0, cda_ldw_exc);
+- src -=-1 * sizeof(unsigned int);
+- dst -= 1 * sizeof(unsigned int);
+- len += 0;
+- goto do3;
+- case 1:
+- /* a2 = ((unsigned int *) src)[0];
+- a3 = ((unsigned int *) src)[1]; */
+- ldw(s_space, 0, src, a2, cda_ldw_exc);
+- ldw(s_space, 4, src, a3, cda_ldw_exc);
+- src -=-2 * sizeof(unsigned int);
+- dst -= 0 * sizeof(unsigned int);
+- len -= 1;
+- if (len == 0)
+- goto do0;
+- goto do4; /* No-op. */
+- }
+-
+- do
+- {
+- /* prefetch_src((const void *)(src + 4 * sizeof(unsigned int))); */
+-do4:
+- /* a0 = ((unsigned int *) src)[0]; */
+- ldw(s_space, 0, src, a0, cda_ldw_exc);
+- /* ((unsigned int *) dst)[0] = MERGE (a2, sh_1, a3, sh_2); */
+- stw(d_space, MERGE (a2, sh_1, a3, sh_2), 0, dst, cda_stw_exc);
+-do3:
+- /* a1 = ((unsigned int *) src)[1]; */
+- ldw(s_space, 4, src, a1, cda_ldw_exc);
+- /* ((unsigned int *) dst)[1] = MERGE (a3, sh_1, a0, sh_2); */
+- stw(d_space, MERGE (a3, sh_1, a0, sh_2), 4, dst, cda_stw_exc);
+-do2:
+- /* a2 = ((unsigned int *) src)[2]; */
+- ldw(s_space, 8, src, a2, cda_ldw_exc);
+- /* ((unsigned int *) dst)[2] = MERGE (a0, sh_1, a1, sh_2); */
+- stw(d_space, MERGE (a0, sh_1, a1, sh_2), 8, dst, cda_stw_exc);
+-do1:
+- /* a3 = ((unsigned int *) src)[3]; */
+- ldw(s_space, 12, src, a3, cda_ldw_exc);
+- /* ((unsigned int *) dst)[3] = MERGE (a1, sh_1, a2, sh_2); */
+- stw(d_space, MERGE (a1, sh_1, a2, sh_2), 12, dst, cda_stw_exc);
+-
+- src += 4 * sizeof(unsigned int);
+- dst += 4 * sizeof(unsigned int);
+- len -= 4;
+- }
+- while (len != 0);
+-
+-do0:
+- /* ((unsigned int *) dst)[0] = MERGE (a2, sh_1, a3, sh_2); */
+- stw(d_space, MERGE (a2, sh_1, a3, sh_2), 0, dst, cda_stw_exc);
+-
+- preserve_branch(handle_load_error);
+- preserve_branch(handle_store_error);
+-
+- return PA_MEMCPY_OK;
+-
+-handle_load_error:
+- __asm__ __volatile__ ("cda_ldw_exc:\n");
+- return PA_MEMCPY_LOAD_ERROR;
+-
+-handle_store_error:
+- __asm__ __volatile__ ("cda_stw_exc:\n");
+- return PA_MEMCPY_STORE_ERROR;
+-}
+-
+-
+-/* Returns PA_MEMCPY_OK, PA_MEMCPY_LOAD_ERROR or PA_MEMCPY_STORE_ERROR.
+- * In case of an access fault the faulty address can be read from the per_cpu
+- * exception data struct. */
+-static noinline unsigned long pa_memcpy_internal(void *dstp, const void *srcp,
+- unsigned long len)
+-{
+- register unsigned long src, dst, t1, t2, t3;
+- register unsigned char *pcs, *pcd;
+- register unsigned int *pws, *pwd;
+- register double *pds, *pdd;
+- unsigned long ret;
+-
+- src = (unsigned long)srcp;
+- dst = (unsigned long)dstp;
+- pcs = (unsigned char *)srcp;
+- pcd = (unsigned char *)dstp;
+-
+- /* prefetch_src((const void *)srcp); */
+-
+- if (len < THRESHOLD)
+- goto byte_copy;
+-
+- /* Check alignment */
+- t1 = (src ^ dst);
+- if (unlikely(t1 & (sizeof(double)-1)))
+- goto unaligned_copy;
+-
+- /* src and dst have same alignment. */
+-
+- /* Copy bytes till we are double-aligned. */
+- t2 = src & (sizeof(double) - 1);
+- if (unlikely(t2 != 0)) {
+- t2 = sizeof(double) - t2;
+- while (t2 && len) {
+- /* *pcd++ = *pcs++; */
+- ldbma(s_space, pcs, t3, pmc_load_exc);
+- len--;
+- stbma(d_space, t3, pcd, pmc_store_exc);
+- t2--;
+- }
+- }
+-
+- pds = (double *)pcs;
+- pdd = (double *)pcd;
+-
+-#if 0
+- /* Copy 8 doubles at a time */
+- while (len >= 8*sizeof(double)) {
+- register double r1, r2, r3, r4, r5, r6, r7, r8;
+- /* prefetch_src((char *)pds + L1_CACHE_BYTES); */
+- flddma(s_space, pds, r1, pmc_load_exc);
+- flddma(s_space, pds, r2, pmc_load_exc);
+- flddma(s_space, pds, r3, pmc_load_exc);
+- flddma(s_space, pds, r4, pmc_load_exc);
+- fstdma(d_space, r1, pdd, pmc_store_exc);
+- fstdma(d_space, r2, pdd, pmc_store_exc);
+- fstdma(d_space, r3, pdd, pmc_store_exc);
+- fstdma(d_space, r4, pdd, pmc_store_exc);
+-
+-#if 0
+- if (L1_CACHE_BYTES <= 32)
+- prefetch_src((char *)pds + L1_CACHE_BYTES);
+-#endif
+- flddma(s_space, pds, r5, pmc_load_exc);
+- flddma(s_space, pds, r6, pmc_load_exc);
+- flddma(s_space, pds, r7, pmc_load_exc);
+- flddma(s_space, pds, r8, pmc_load_exc);
+- fstdma(d_space, r5, pdd, pmc_store_exc);
+- fstdma(d_space, r6, pdd, pmc_store_exc);
+- fstdma(d_space, r7, pdd, pmc_store_exc);
+- fstdma(d_space, r8, pdd, pmc_store_exc);
+- len -= 8*sizeof(double);
+- }
+-#endif
+-
+- pws = (unsigned int *)pds;
+- pwd = (unsigned int *)pdd;
+-
+-word_copy:
+- while (len >= 8*sizeof(unsigned int)) {
+- register unsigned int r1,r2,r3,r4,r5,r6,r7,r8;
+- /* prefetch_src((char *)pws + L1_CACHE_BYTES); */
+- ldwma(s_space, pws, r1, pmc_load_exc);
+- ldwma(s_space, pws, r2, pmc_load_exc);
+- ldwma(s_space, pws, r3, pmc_load_exc);
+- ldwma(s_space, pws, r4, pmc_load_exc);
+- stwma(d_space, r1, pwd, pmc_store_exc);
+- stwma(d_space, r2, pwd, pmc_store_exc);
+- stwma(d_space, r3, pwd, pmc_store_exc);
+- stwma(d_space, r4, pwd, pmc_store_exc);
+-
+- ldwma(s_space, pws, r5, pmc_load_exc);
+- ldwma(s_space, pws, r6, pmc_load_exc);
+- ldwma(s_space, pws, r7, pmc_load_exc);
+- ldwma(s_space, pws, r8, pmc_load_exc);
+- stwma(d_space, r5, pwd, pmc_store_exc);
+- stwma(d_space, r6, pwd, pmc_store_exc);
+- stwma(d_space, r7, pwd, pmc_store_exc);
+- stwma(d_space, r8, pwd, pmc_store_exc);
+- len -= 8*sizeof(unsigned int);
+- }
+-
+- while (len >= 4*sizeof(unsigned int)) {
+- register unsigned int r1,r2,r3,r4;
+- ldwma(s_space, pws, r1, pmc_load_exc);
+- ldwma(s_space, pws, r2, pmc_load_exc);
+- ldwma(s_space, pws, r3, pmc_load_exc);
+- ldwma(s_space, pws, r4, pmc_load_exc);
+- stwma(d_space, r1, pwd, pmc_store_exc);
+- stwma(d_space, r2, pwd, pmc_store_exc);
+- stwma(d_space, r3, pwd, pmc_store_exc);
+- stwma(d_space, r4, pwd, pmc_store_exc);
+- len -= 4*sizeof(unsigned int);
+- }
+-
+- pcs = (unsigned char *)pws;
+- pcd = (unsigned char *)pwd;
+-
+-byte_copy:
+- while (len) {
+- /* *pcd++ = *pcs++; */
+- ldbma(s_space, pcs, t3, pmc_load_exc);
+- stbma(d_space, t3, pcd, pmc_store_exc);
+- len--;
+- }
+-
+- return PA_MEMCPY_OK;
+-
+-unaligned_copy:
+- /* possibly we are aligned on a word, but not on a double... */
+- if (likely((t1 & (sizeof(unsigned int)-1)) == 0)) {
+- t2 = src & (sizeof(unsigned int) - 1);
+-
+- if (unlikely(t2 != 0)) {
+- t2 = sizeof(unsigned int) - t2;
+- while (t2) {
+- /* *pcd++ = *pcs++; */
+- ldbma(s_space, pcs, t3, pmc_load_exc);
+- stbma(d_space, t3, pcd, pmc_store_exc);
+- len--;
+- t2--;
+- }
+- }
+-
+- pws = (unsigned int *)pcs;
+- pwd = (unsigned int *)pcd;
+- goto word_copy;
+- }
+-
+- /* Align the destination. */
+- if (unlikely((dst & (sizeof(unsigned int) - 1)) != 0)) {
+- t2 = sizeof(unsigned int) - (dst & (sizeof(unsigned int) - 1));
+- while (t2) {
+- /* *pcd++ = *pcs++; */
+- ldbma(s_space, pcs, t3, pmc_load_exc);
+- stbma(d_space, t3, pcd, pmc_store_exc);
+- len--;
+- t2--;
+- }
+- dst = (unsigned long)pcd;
+- src = (unsigned long)pcs;
+- }
+-
+- ret = copy_dstaligned(dst, src, len / sizeof(unsigned int));
+- if (ret)
+- return ret;
+-
+- pcs += (len & -sizeof(unsigned int));
+- pcd += (len & -sizeof(unsigned int));
+- len %= sizeof(unsigned int);
+-
+- preserve_branch(handle_load_error);
+- preserve_branch(handle_store_error);
+-
+- goto byte_copy;
+-
+-handle_load_error:
+- __asm__ __volatile__ ("pmc_load_exc:\n");
+- return PA_MEMCPY_LOAD_ERROR;
+-
+-handle_store_error:
+- __asm__ __volatile__ ("pmc_store_exc:\n");
+- return PA_MEMCPY_STORE_ERROR;
+-}
+-
+-
+ /* Returns 0 for success, otherwise, returns number of bytes not transferred. */
+-static unsigned long pa_memcpy(void *dstp, const void *srcp, unsigned long len)
+-{
+- unsigned long ret, fault_addr, reference;
+- struct exception_data *d;
+-
+- ret = pa_memcpy_internal(dstp, srcp, len);
+- if (likely(ret == PA_MEMCPY_OK))
+- return 0;
+-
+- /* if a load or store fault occured we can get the faulty addr */
+- d = this_cpu_ptr(&exception_data);
+- fault_addr = d->fault_addr;
+-
+- /* error in load or store? */
+- if (ret == PA_MEMCPY_LOAD_ERROR)
+- reference = (unsigned long) srcp;
+- else
+- reference = (unsigned long) dstp;
++extern unsigned long pa_memcpy(void *dst, const void *src,
++ unsigned long len);
+
+- DPRINTF("pa_memcpy: fault type = %lu, len=%lu fault_addr=%lu ref=%lu\n",
+- ret, len, fault_addr, reference);
+-
+- if (fault_addr >= reference)
+- return len - (fault_addr - reference);
+- else
+- return len;
+-}
+-
+-#ifdef __KERNEL__
+ unsigned long __copy_to_user(void __user *dst, const void *src,
+ unsigned long len)
+ {
+@@ -537,5 +84,3 @@ long probe_kernel_read(void *dst, const void *src, size_t size)
+
+ return __probe_kernel_read(dst, src, size);
+ }
+-
+-#endif
+diff --git a/arch/parisc/mm/fault.c b/arch/parisc/mm/fault.c
+index 1a0b4f63f0e9..040c48fc5391 100644
+--- a/arch/parisc/mm/fault.c
++++ b/arch/parisc/mm/fault.c
+@@ -149,6 +149,23 @@ int fixup_exception(struct pt_regs *regs)
+ d->fault_space = regs->isr;
+ d->fault_addr = regs->ior;
+
++ /*
++ * Fix up get_user() and put_user().
++ * ASM_EXCEPTIONTABLE_ENTRY_EFAULT() sets the least-significant
++ * bit in the relative address of the fixup routine to indicate
++ * that %r8 should be loaded with -EFAULT to report a userspace
++ * access error.
++ */
++ if (fix->fixup & 1) {
++ regs->gr[8] = -EFAULT;
++
++ /* zero target register for get_user() */
++ if (parisc_acctyp(0, regs->iir) == VM_READ) {
++ int treg = regs->iir & 0x1f;
++ regs->gr[treg] = 0;
++ }
++ }
++
+ regs->iaoq[0] = (unsigned long)&fix->fixup + fix->fixup;
+ regs->iaoq[0] &= ~3;
+ /*
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index c989e67dcc9d..9764463ce833 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -10027,7 +10027,6 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ {
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+ u32 exec_control;
+- bool nested_ept_enabled = false;
+
+ vmcs_write16(GUEST_ES_SELECTOR, vmcs12->guest_es_selector);
+ vmcs_write16(GUEST_CS_SELECTOR, vmcs12->guest_cs_selector);
+@@ -10192,7 +10191,6 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ vmcs12->guest_intr_status);
+ }
+
+- nested_ept_enabled = (exec_control & SECONDARY_EXEC_ENABLE_EPT) != 0;
+ vmcs_write32(SECONDARY_VM_EXEC_CONTROL, exec_control);
+ }
+
+@@ -10344,7 +10342,7 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ vmx_set_efer(vcpu, vcpu->arch.efer);
+
+ /* Shadow page tables on either EPT or shadow page tables. */
+- if (nested_vmx_load_cr3(vcpu, vmcs12->guest_cr3, nested_ept_enabled,
++ if (nested_vmx_load_cr3(vcpu, vmcs12->guest_cr3, nested_cpu_has_ept(vmcs12),
+ entry_failure_code))
+ return 1;
+
+diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
+index 779782f58324..9a53a06e5a3e 100644
+--- a/arch/x86/lib/memcpy_64.S
++++ b/arch/x86/lib/memcpy_64.S
+@@ -290,7 +290,7 @@ EXPORT_SYMBOL_GPL(memcpy_mcsafe_unrolled)
+ _ASM_EXTABLE_FAULT(.L_copy_leading_bytes, .L_memcpy_mcsafe_fail)
+ _ASM_EXTABLE_FAULT(.L_cache_w0, .L_memcpy_mcsafe_fail)
+ _ASM_EXTABLE_FAULT(.L_cache_w1, .L_memcpy_mcsafe_fail)
+- _ASM_EXTABLE_FAULT(.L_cache_w3, .L_memcpy_mcsafe_fail)
++ _ASM_EXTABLE_FAULT(.L_cache_w2, .L_memcpy_mcsafe_fail)
+ _ASM_EXTABLE_FAULT(.L_cache_w3, .L_memcpy_mcsafe_fail)
+ _ASM_EXTABLE_FAULT(.L_cache_w4, .L_memcpy_mcsafe_fail)
+ _ASM_EXTABLE_FAULT(.L_cache_w5, .L_memcpy_mcsafe_fail)
+diff --git a/arch/x86/mm/kaslr.c b/arch/x86/mm/kaslr.c
+index 887e57182716..aed206475aa7 100644
+--- a/arch/x86/mm/kaslr.c
++++ b/arch/x86/mm/kaslr.c
+@@ -48,7 +48,7 @@ static const unsigned long vaddr_start = __PAGE_OFFSET_BASE;
+ #if defined(CONFIG_X86_ESPFIX64)
+ static const unsigned long vaddr_end = ESPFIX_BASE_ADDR;
+ #elif defined(CONFIG_EFI)
+-static const unsigned long vaddr_end = EFI_VA_START;
++static const unsigned long vaddr_end = EFI_VA_END;
+ #else
+ static const unsigned long vaddr_end = __START_KERNEL_map;
+ #endif
+@@ -105,7 +105,7 @@ void __init kernel_randomize_memory(void)
+ */
+ BUILD_BUG_ON(vaddr_start >= vaddr_end);
+ BUILD_BUG_ON(IS_ENABLED(CONFIG_X86_ESPFIX64) &&
+- vaddr_end >= EFI_VA_START);
++ vaddr_end >= EFI_VA_END);
+ BUILD_BUG_ON((IS_ENABLED(CONFIG_X86_ESPFIX64) ||
+ IS_ENABLED(CONFIG_EFI)) &&
+ vaddr_end >= __START_KERNEL_map);
+diff --git a/block/bio.c b/block/bio.c
+index 2b375020fc49..17ece5b40a2f 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -376,10 +376,14 @@ static void punt_bios_to_rescuer(struct bio_set *bs)
+ bio_list_init(&punt);
+ bio_list_init(&nopunt);
+
+- while ((bio = bio_list_pop(current->bio_list)))
++	while ((bio = bio_list_pop(&current->bio_list[0])))
+ bio_list_add(bio->bi_pool == bs ? &punt : &nopunt, bio);
++ current->bio_list[0] = nopunt;
+
+- *current->bio_list = nopunt;
++ bio_list_init(&nopunt);
++	while ((bio = bio_list_pop(&current->bio_list[1])))
++ bio_list_add(bio->bi_pool == bs ? &punt : &nopunt, bio);
++ current->bio_list[1] = nopunt;
+
+ spin_lock(&bs->rescue_lock);
+ bio_list_merge(&bs->rescue_list, &punt);
+@@ -466,7 +470,9 @@ struct bio *bio_alloc_bioset(gfp_t gfp_mask, int nr_iovecs, struct bio_set *bs)
+ * we retry with the original gfp_flags.
+ */
+
+- if (current->bio_list && !bio_list_empty(current->bio_list))
++ if (current->bio_list &&
++	    (!bio_list_empty(&current->bio_list[0]) ||
++	     !bio_list_empty(&current->bio_list[1])))
+ gfp_mask &= ~__GFP_DIRECT_RECLAIM;
+
+ p = mempool_alloc(bs->bio_pool, gfp_mask);
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 61ba08c58b64..9734b5d0d932 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -1977,7 +1977,14 @@ generic_make_request_checks(struct bio *bio)
+ */
+ blk_qc_t generic_make_request(struct bio *bio)
+ {
+- struct bio_list bio_list_on_stack;
++ /*
++ * bio_list_on_stack[0] contains bios submitted by the current
++ * make_request_fn.
++ * bio_list_on_stack[1] contains bios that were submitted before
++ * the current make_request_fn, but that haven't been processed
++ * yet.
++ */
++ struct bio_list bio_list_on_stack[2];
+ blk_qc_t ret = BLK_QC_T_NONE;
+
+ if (!generic_make_request_checks(bio))
+@@ -1994,7 +2001,7 @@ blk_qc_t generic_make_request(struct bio *bio)
+ * should be added at the tail
+ */
+ if (current->bio_list) {
+- bio_list_add(current->bio_list, bio);
++		bio_list_add(&current->bio_list[0], bio);
+ goto out;
+ }
+
+@@ -2013,23 +2020,39 @@ blk_qc_t generic_make_request(struct bio *bio)
+ * bio_list, and call into ->make_request() again.
+ */
+ BUG_ON(bio->bi_next);
+- bio_list_init(&bio_list_on_stack);
+- current->bio_list = &bio_list_on_stack;
++ bio_list_init(&bio_list_on_stack[0]);
++ current->bio_list = bio_list_on_stack;
+ do {
+ struct request_queue *q = bdev_get_queue(bio->bi_bdev);
+
+ if (likely(blk_queue_enter(q, false) == 0)) {
++ struct bio_list lower, same;
++
++ /* Create a fresh bio_list for all subordinate requests */
++ bio_list_on_stack[1] = bio_list_on_stack[0];
++ bio_list_init(&bio_list_on_stack[0]);
+ ret = q->make_request_fn(q, bio);
+
+ blk_queue_exit(q);
+
+- bio = bio_list_pop(current->bio_list);
++ /* sort new bios into those for a lower level
++ * and those for the same level
++ */
++ bio_list_init(&lower);
++ bio_list_init(&same);
++ while ((bio = bio_list_pop(&bio_list_on_stack[0])) != NULL)
++ if (q == bdev_get_queue(bio->bi_bdev))
++ bio_list_add(&same, bio);
++ else
++ bio_list_add(&lower, bio);
++ /* now assemble so we handle the lowest level first */
++ bio_list_merge(&bio_list_on_stack[0], &lower);
++ bio_list_merge(&bio_list_on_stack[0], &same);
++ bio_list_merge(&bio_list_on_stack[0], &bio_list_on_stack[1]);
+ } else {
+- struct bio *bio_next = bio_list_pop(current->bio_list);
+-
+ bio_io_error(bio);
+- bio = bio_next;
+ }
++ bio = bio_list_pop(&bio_list_on_stack[0]);
+ } while (bio);
+ current->bio_list = NULL; /* deactivate */
+
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index ecd8474018e3..3ea095adafd9 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -286,8 +286,11 @@ static int init_crypt(struct skcipher_request *req, crypto_completion_t done)
+
+ subreq->cryptlen = LRW_BUFFER_SIZE;
+ if (req->cryptlen > LRW_BUFFER_SIZE) {
+- subreq->cryptlen = min(req->cryptlen, (unsigned)PAGE_SIZE);
+- rctx->ext = kmalloc(subreq->cryptlen, gfp);
++ unsigned int n = min(req->cryptlen, (unsigned int)PAGE_SIZE);
++
++ rctx->ext = kmalloc(n, gfp);
++ if (rctx->ext)
++ subreq->cryptlen = n;
+ }
+
+ rctx->src = req->src;
+diff --git a/crypto/xts.c b/crypto/xts.c
+index baeb34dd8582..c976bfac29da 100644
+--- a/crypto/xts.c
++++ b/crypto/xts.c
+@@ -230,8 +230,11 @@ static int init_crypt(struct skcipher_request *req, crypto_completion_t done)
+
+ subreq->cryptlen = XTS_BUFFER_SIZE;
+ if (req->cryptlen > XTS_BUFFER_SIZE) {
+- subreq->cryptlen = min(req->cryptlen, (unsigned)PAGE_SIZE);
+- rctx->ext = kmalloc(subreq->cryptlen, gfp);
++ unsigned int n = min(req->cryptlen, (unsigned int)PAGE_SIZE);
++
++ rctx->ext = kmalloc(n, gfp);
++ if (rctx->ext)
++ subreq->cryptlen = n;
+ }
+
+ rctx->src = req->src;
+diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
+index 9ed087853dee..4c5678cfa9c4 100644
+--- a/drivers/acpi/Makefile
++++ b/drivers/acpi/Makefile
+@@ -2,7 +2,6 @@
+ # Makefile for the Linux ACPI interpreter
+ #
+
+-ccflags-y := -Os
+ ccflags-$(CONFIG_ACPI_DEBUG) += -DACPI_DEBUG_OUTPUT
+
+ #
+diff --git a/drivers/acpi/acpi_platform.c b/drivers/acpi/acpi_platform.c
+index b4c1a6a51da4..03250e1f1103 100644
+--- a/drivers/acpi/acpi_platform.c
++++ b/drivers/acpi/acpi_platform.c
+@@ -25,9 +25,11 @@
+ ACPI_MODULE_NAME("platform");
+
+ static const struct acpi_device_id forbidden_id_list[] = {
+- {"PNP0000", 0}, /* PIC */
+- {"PNP0100", 0}, /* Timer */
+- {"PNP0200", 0}, /* AT DMA Controller */
++ {"PNP0000", 0}, /* PIC */
++ {"PNP0100", 0}, /* Timer */
++ {"PNP0200", 0}, /* AT DMA Controller */
++ {"ACPI0009", 0}, /* IOxAPIC */
++ {"ACPI000A", 0}, /* IOAPIC */
+ {"", 0},
+ };
+
+diff --git a/drivers/crypto/ccp/ccp-dev-v5.c b/drivers/crypto/ccp/ccp-dev-v5.c
+index 612898b4aaad..3422f203455d 100644
+--- a/drivers/crypto/ccp/ccp-dev-v5.c
++++ b/drivers/crypto/ccp/ccp-dev-v5.c
+@@ -1014,6 +1014,7 @@ const struct ccp_vdata ccpv5a = {
+
+ const struct ccp_vdata ccpv5b = {
+ .version = CCP_VERSION(5, 0),
++ .dma_chan_attr = DMA_PRIVATE,
+ .setup = ccp5other_config,
+ .perform = &ccp5_actions,
+ .bar = 2,
+diff --git a/drivers/crypto/ccp/ccp-dev.h b/drivers/crypto/ccp/ccp-dev.h
+index 649e5610a5ce..cd9a7051da3c 100644
+--- a/drivers/crypto/ccp/ccp-dev.h
++++ b/drivers/crypto/ccp/ccp-dev.h
+@@ -179,6 +179,10 @@
+
+ /* ------------------------ General CCP Defines ------------------------ */
+
++#define CCP_DMA_DFLT 0x0
++#define CCP_DMA_PRIV 0x1
++#define CCP_DMA_PUB 0x2
++
+ #define CCP_DMAPOOL_MAX_SIZE 64
+ #define CCP_DMAPOOL_ALIGN BIT(5)
+
+@@ -635,6 +639,7 @@ struct ccp_actions {
+ /* Structure to hold CCP version-specific values */
+ struct ccp_vdata {
+ const unsigned int version;
++ const unsigned int dma_chan_attr;
+ void (*setup)(struct ccp_device *);
+ const struct ccp_actions *perform;
+ const unsigned int bar;
+diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
+index 8d0eeb46d4a2..e00be01fbf5a 100644
+--- a/drivers/crypto/ccp/ccp-dmaengine.c
++++ b/drivers/crypto/ccp/ccp-dmaengine.c
+@@ -10,6 +10,7 @@
+ * published by the Free Software Foundation.
+ */
+
++#include <linux/module.h>
+ #include <linux/kernel.h>
+ #include <linux/dmaengine.h>
+ #include <linux/spinlock.h>
+@@ -25,6 +26,37 @@
+ (mask == 0) ? 64 : fls64(mask); \
+ })
+
++/* The CCP as a DMA provider can be configured for public or private
++ * channels. Default is specified in the vdata for the device (PCI ID).
++ * This module parameter will override for all channels on all devices:
++ * dma_chan_attr = 0x2 to force all channels public
++ * = 0x1 to force all channels private
++ * = 0x0 to defer to the vdata setting
++ * = any other value: warning, revert to 0x0
++ */
++static unsigned int dma_chan_attr = CCP_DMA_DFLT;
++module_param(dma_chan_attr, uint, 0444);
++MODULE_PARM_DESC(dma_chan_attr, "Set DMA channel visibility: 0 (default) = device defaults, 1 = make private, 2 = make public");
++
++unsigned int ccp_get_dma_chan_attr(struct ccp_device *ccp)
++{
++ switch (dma_chan_attr) {
++ case CCP_DMA_DFLT:
++ return ccp->vdata->dma_chan_attr;
++
++ case CCP_DMA_PRIV:
++ return DMA_PRIVATE;
++
++ case CCP_DMA_PUB:
++ return 0;
++
++ default:
++ dev_info_once(ccp->dev, "Invalid value for dma_chan_attr: %d\n",
++ dma_chan_attr);
++ return ccp->vdata->dma_chan_attr;
++ }
++}
++
+ static void ccp_free_cmd_resources(struct ccp_device *ccp,
+ struct list_head *list)
+ {
+@@ -675,6 +707,15 @@ int ccp_dmaengine_register(struct ccp_device *ccp)
+ dma_cap_set(DMA_SG, dma_dev->cap_mask);
+ dma_cap_set(DMA_INTERRUPT, dma_dev->cap_mask);
+
++ /* The DMA channels for this device can be set to public or private,
++ * and overridden by the module parameter dma_chan_attr.
++ * Default: according to the value in vdata (dma_chan_attr=0)
++ * dma_chan_attr=0x1: all channels private (override vdata)
++ * dma_chan_attr=0x2: all channels public (override vdata)
++ */
++ if (ccp_get_dma_chan_attr(ccp) == DMA_PRIVATE)
++ dma_cap_set(DMA_PRIVATE, dma_dev->cap_mask);
++
+ INIT_LIST_HEAD(&dma_dev->channels);
+ for (i = 0; i < ccp->cmd_q_count; i++) {
+ chan = ccp->ccp_dma_chan + i;
+diff --git a/drivers/gpu/drm/armada/Makefile b/drivers/gpu/drm/armada/Makefile
+index a18f156c8b66..64c0b4546fb2 100644
+--- a/drivers/gpu/drm/armada/Makefile
++++ b/drivers/gpu/drm/armada/Makefile
+@@ -4,3 +4,5 @@ armada-y += armada_510.o
+ armada-$(CONFIG_DEBUG_FS) += armada_debugfs.o
+
+ obj-$(CONFIG_DRM_ARMADA) := armada.o
++
++CFLAGS_armada_trace.o := -I$(src)
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index 0a67124bb2a4..db0a43a090d0 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -1303,6 +1303,8 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu,
+ goto out_pm_put;
+ }
+
++ mutex_lock(&gpu->lock);
++
+ fence = etnaviv_gpu_fence_alloc(gpu);
+ if (!fence) {
+ event_free(gpu, event);
+@@ -1310,8 +1312,6 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu,
+ goto out_pm_put;
+ }
+
+- mutex_lock(&gpu->lock);
+-
+ gpu->event[event].fence = fence;
+ submit->fence = fence->seqno;
+ gpu->active_fence = submit->fence;
+diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
+index 3f656e3a6e5a..325cb9b55989 100644
+--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
++++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
+@@ -1334,6 +1334,7 @@ static int kvmgt_guest_init(struct mdev_device *mdev)
+ vgpu->handle = (unsigned long)info;
+ info->vgpu = vgpu;
+ info->kvm = kvm;
++ kvm_get_kvm(info->kvm);
+
+ kvmgt_protect_table_init(info);
+ gvt_cache_init(vgpu);
+@@ -1353,6 +1354,7 @@ static bool kvmgt_guest_exit(struct kvmgt_guest_info *info)
+ }
+
+ kvm_page_track_unregister_notifier(info->kvm, &info->track_node);
++ kvm_put_kvm(info->kvm);
+ kvmgt_protect_table_destroy(info);
+ gvt_cache_destroy(info->vgpu);
+ vfree(info);
+diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
+index b4bde1452f2a..6924a8e79da9 100644
+--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
++++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
+@@ -735,10 +735,9 @@ static bool gen8_ppgtt_clear_pt(struct i915_address_space *vm,
+ GEM_BUG_ON(pte_end > GEN8_PTES);
+
+ bitmap_clear(pt->used_ptes, pte, num_entries);
+-
+- if (bitmap_empty(pt->used_ptes, GEN8_PTES)) {
+- free_pt(to_i915(vm->dev), pt);
+- return true;
++ if (USES_FULL_PPGTT(vm->i915)) {
++ if (bitmap_empty(pt->used_ptes, GEN8_PTES))
++ return true;
+ }
+
+ pt_vaddr = kmap_px(pt);
+@@ -775,13 +774,12 @@ static bool gen8_ppgtt_clear_pd(struct i915_address_space *vm,
+ pde_vaddr = kmap_px(pd);
+ pde_vaddr[pde] = scratch_pde;
+ kunmap_px(ppgtt, pde_vaddr);
++ free_pt(to_i915(vm->dev), pt);
+ }
+ }
+
+- if (bitmap_empty(pd->used_pdes, I915_PDES)) {
+- free_pd(to_i915(vm->dev), pd);
++ if (bitmap_empty(pd->used_pdes, I915_PDES))
+ return true;
+- }
+
+ return false;
+ }
+@@ -795,7 +793,6 @@ static bool gen8_ppgtt_clear_pdp(struct i915_address_space *vm,
+ uint64_t length)
+ {
+ struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+- struct drm_i915_private *dev_priv = to_i915(vm->dev);
+ struct i915_page_directory *pd;
+ uint64_t pdpe;
+ gen8_ppgtt_pdpe_t *pdpe_vaddr;
+@@ -813,16 +810,14 @@ static bool gen8_ppgtt_clear_pdp(struct i915_address_space *vm,
+ pdpe_vaddr[pdpe] = scratch_pdpe;
+ kunmap_px(ppgtt, pdpe_vaddr);
+ }
++ free_pd(to_i915(vm->dev), pd);
+ }
+ }
+
+ mark_tlbs_dirty(ppgtt);
+
+- if (USES_FULL_48BIT_PPGTT(dev_priv) &&
+- bitmap_empty(pdp->used_pdpes, I915_PDPES_PER_PDP(dev_priv))) {
+- free_pdp(dev_priv, pdp);
++ if (bitmap_empty(pdp->used_pdpes, I915_PDPES_PER_PDP(dev_priv)))
+ return true;
+- }
+
+ return false;
+ }
+@@ -836,6 +831,7 @@ static void gen8_ppgtt_clear_pml4(struct i915_address_space *vm,
+ uint64_t start,
+ uint64_t length)
+ {
++ struct drm_i915_private *dev_priv = to_i915(vm->dev);
+ struct i915_hw_ppgtt *ppgtt = i915_vm_to_ppgtt(vm);
+ struct i915_page_directory_pointer *pdp;
+ uint64_t pml4e;
+@@ -854,6 +850,7 @@ static void gen8_ppgtt_clear_pml4(struct i915_address_space *vm,
+ pml4e_vaddr = kmap_px(pml4);
+ pml4e_vaddr[pml4e] = scratch_pml4e;
+ kunmap_px(ppgtt, pml4e_vaddr);
++ free_pdp(dev_priv, pdp);
+ }
+ }
+ }
+diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
+index beabc17e7c8a..2af4522d60e6 100644
+--- a/drivers/gpu/drm/i915/intel_lrc.c
++++ b/drivers/gpu/drm/i915/intel_lrc.c
+@@ -362,7 +362,8 @@ execlists_update_context_pdps(struct i915_hw_ppgtt *ppgtt, u32 *reg_state)
+ static u64 execlists_update_context(struct drm_i915_gem_request *rq)
+ {
+ struct intel_context *ce = &rq->ctx->engine[rq->engine->id];
+- struct i915_hw_ppgtt *ppgtt = rq->ctx->ppgtt;
++ struct i915_hw_ppgtt *ppgtt =
++ rq->ctx->ppgtt ?: rq->i915->mm.aliasing_ppgtt;
+ u32 *reg_state = ce->lrc_reg_state;
+
+ reg_state[CTX_RING_TAIL+1] = rq->tail;
+diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
+index 0cf03ccbf0a7..445a907552c1 100644
+--- a/drivers/gpu/drm/radeon/radeon_ttm.c
++++ b/drivers/gpu/drm/radeon/radeon_ttm.c
+@@ -213,8 +213,8 @@ static void radeon_evict_flags(struct ttm_buffer_object *bo,
+ rbo->placement.num_busy_placement = 0;
+ for (i = 0; i < rbo->placement.num_placement; i++) {
+ if (rbo->placements[i].flags & TTM_PL_FLAG_VRAM) {
+- if (rbo->placements[0].fpfn < fpfn)
+- rbo->placements[0].fpfn = fpfn;
++ if (rbo->placements[i].fpfn < fpfn)
++ rbo->placements[i].fpfn = fpfn;
+ } else {
+ rbo->placement.busy_placement =
+ &rbo->placements[i];
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index 7aadce1f7e7a..c7e6c9839c9a 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -842,6 +842,17 @@ static void vc4_crtc_destroy_state(struct drm_crtc *crtc,
+ drm_atomic_helper_crtc_destroy_state(crtc, state);
+ }
+
++static void
++vc4_crtc_reset(struct drm_crtc *crtc)
++{
++ if (crtc->state)
++ __drm_atomic_helper_crtc_destroy_state(crtc->state);
++
++ crtc->state = kzalloc(sizeof(struct vc4_crtc_state), GFP_KERNEL);
++ if (crtc->state)
++ crtc->state->crtc = crtc;
++}
++
+ static const struct drm_crtc_funcs vc4_crtc_funcs = {
+ .set_config = drm_atomic_helper_set_config,
+ .destroy = vc4_crtc_destroy,
+@@ -849,7 +860,7 @@ static const struct drm_crtc_funcs vc4_crtc_funcs = {
+ .set_property = NULL,
+ .cursor_set = NULL, /* handled by drm_mode_cursor_universal */
+ .cursor_move = NULL, /* handled by drm_mode_cursor_universal */
+- .reset = drm_atomic_helper_crtc_reset,
++ .reset = vc4_crtc_reset,
+ .atomic_duplicate_state = vc4_crtc_duplicate_state,
+ .atomic_destroy_state = vc4_crtc_destroy_state,
+ .gamma_set = vc4_crtc_gamma_set,
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 8aeca038cc73..5f282bb0ea10 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2081,6 +2081,14 @@ static int wacom_parse_and_register(struct wacom *wacom, bool wireless)
+
+ wacom_update_name(wacom, wireless ? " (WL)" : "");
+
++ /* pen only Bamboo neither support touch nor pad */
++ if ((features->type == BAMBOO_PEN) &&
++ ((features->device_type & WACOM_DEVICETYPE_TOUCH) ||
++ (features->device_type & WACOM_DEVICETYPE_PAD))) {
++ error = -ENODEV;
++ goto fail;
++ }
++
+ error = wacom_add_shared_data(hdev);
+ if (error)
+ goto fail;
+@@ -2128,14 +2136,6 @@ static int wacom_parse_and_register(struct wacom *wacom, bool wireless)
+ goto fail_quirks;
+ }
+
+- /* pen only Bamboo neither support touch nor pad */
+- if ((features->type == BAMBOO_PEN) &&
+- ((features->device_type & WACOM_DEVICETYPE_TOUCH) ||
+- (features->device_type & WACOM_DEVICETYPE_PAD))) {
+- error = -ENODEV;
+- goto fail_quirks;
+- }
+-
+ if (features->device_type & WACOM_DEVICETYPE_WL_MONITOR)
+ error = hid_hw_open(hdev);
+
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 0ff5469c03d2..b78bc2916664 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -986,26 +986,29 @@ static void flush_current_bio_list(struct blk_plug_cb *cb, bool from_schedule)
+ struct dm_offload *o = container_of(cb, struct dm_offload, cb);
+ struct bio_list list;
+ struct bio *bio;
++ int i;
+
+ INIT_LIST_HEAD(&o->cb.list);
+
+ if (unlikely(!current->bio_list))
+ return;
+
+- list = *current->bio_list;
+- bio_list_init(current->bio_list);
+-
+- while ((bio = bio_list_pop(&list))) {
+- struct bio_set *bs = bio->bi_pool;
+- if (unlikely(!bs) || bs == fs_bio_set) {
+- bio_list_add(current->bio_list, bio);
+- continue;
++ for (i = 0; i < 2; i++) {
++ list = current->bio_list[i];
++ bio_list_init(&current->bio_list[i]);
++
++ while ((bio = bio_list_pop(&list))) {
++ struct bio_set *bs = bio->bi_pool;
++ if (unlikely(!bs) || bs == fs_bio_set) {
++ bio_list_add(&current->bio_list[i], bio);
++ continue;
++ }
++
++ spin_lock(&bs->rescue_lock);
++ bio_list_add(&bs->rescue_list, bio);
++ queue_work(bs->rescue_workqueue, &bs->rescue_work);
++ spin_unlock(&bs->rescue_lock);
+ }
+-
+- spin_lock(&bs->rescue_lock);
+- bio_list_add(&bs->rescue_list, bio);
+- queue_work(bs->rescue_workqueue, &bs->rescue_work);
+- spin_unlock(&bs->rescue_lock);
+ }
+ }
+
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 87f14080c2cd..41693890e2b8 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -974,7 +974,8 @@ static void wait_barrier(struct r10conf *conf)
+ !conf->barrier ||
+ (atomic_read(&conf->nr_pending) &&
+ current->bio_list &&
+- !bio_list_empty(current->bio_list)),
++ (!bio_list_empty(&current->bio_list[0]) ||
++ !bio_list_empty(&current->bio_list[1]))),
+ conf->resync_lock);
+ conf->nr_waiting--;
+ if (!conf->nr_waiting)
+diff --git a/drivers/mmc/host/sdhci-of-at91.c b/drivers/mmc/host/sdhci-of-at91.c
+index 7fd964256faa..d5430ed02a67 100644
+--- a/drivers/mmc/host/sdhci-of-at91.c
++++ b/drivers/mmc/host/sdhci-of-at91.c
+@@ -29,6 +29,8 @@
+
+ #include "sdhci-pltfm.h"
+
++#define SDMMC_MC1R 0x204
++#define SDMMC_MC1R_DDR BIT(3)
+ #define SDMMC_CACR 0x230
+ #define SDMMC_CACR_CAPWREN BIT(0)
+ #define SDMMC_CACR_KEY (0x46 << 8)
+@@ -103,11 +105,18 @@ static void sdhci_at91_set_power(struct sdhci_host *host, unsigned char mode,
+ sdhci_set_power_noreg(host, mode, vdd);
+ }
+
++void sdhci_at91_set_uhs_signaling(struct sdhci_host *host, unsigned int timing)
++{
++ if (timing == MMC_TIMING_MMC_DDR52)
++ sdhci_writeb(host, SDMMC_MC1R_DDR, SDMMC_MC1R);
++ sdhci_set_uhs_signaling(host, timing);
++}
++
+ static const struct sdhci_ops sdhci_at91_sama5d2_ops = {
+ .set_clock = sdhci_at91_set_clock,
+ .set_bus_width = sdhci_set_bus_width,
+ .reset = sdhci_reset,
+- .set_uhs_signaling = sdhci_set_uhs_signaling,
++ .set_uhs_signaling = sdhci_at91_set_uhs_signaling,
+ .set_power = sdhci_at91_set_power,
+ };
+
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index d0819d18ad08..d2a4adc50a84 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1830,6 +1830,9 @@ static void sdhci_enable_sdio_irq(struct mmc_host *mmc, int enable)
+ struct sdhci_host *host = mmc_priv(mmc);
+ unsigned long flags;
+
++ if (enable)
++ pm_runtime_get_noresume(host->mmc->parent);
++
+ spin_lock_irqsave(&host->lock, flags);
+ if (enable)
+ host->flags |= SDHCI_SDIO_IRQ_ENABLED;
+@@ -1838,6 +1841,9 @@ static void sdhci_enable_sdio_irq(struct mmc_host *mmc, int enable)
+
+ sdhci_enable_sdio_irq_nolock(host, enable);
+ spin_unlock_irqrestore(&host->lock, flags);
++
++ if (!enable)
++ pm_runtime_put_noidle(host->mmc->parent);
+ }
+
+ static int sdhci_start_signal_voltage_switch(struct mmc_host *mmc,
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 8a3c3e32a704..3818ff609d55 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2034,9 +2034,9 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
+ * Revalidating a dead namespace sets capacity to 0. This will
+ * end buffered writers dirtying pages that can't be synced.
+ */
+- if (ns->disk && !test_and_set_bit(NVME_NS_DEAD, &ns->flags))
+- revalidate_disk(ns->disk);
+-
++ if (!ns->disk || test_and_set_bit(NVME_NS_DEAD, &ns->flags))
++ continue;
++ revalidate_disk(ns->disk);
+ blk_set_queue_dying(ns->queue);
+ blk_mq_abort_requeue_list(ns->queue);
+ blk_mq_start_stopped_hw_queues(ns->queue, true);
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 3faefabf339c..410c3d15b0cb 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1990,8 +1990,10 @@ static void nvme_remove(struct pci_dev *pdev)
+
+ pci_set_drvdata(pdev, NULL);
+
+- if (!pci_device_is_present(pdev))
++ if (!pci_device_is_present(pdev)) {
+ nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DEAD);
++ nvme_dev_disable(dev, false);
++ }
+
+ flush_work(&dev->reset_work);
+ nvme_uninit_ctrl(&dev->ctrl);
+diff --git a/drivers/pci/host/pci-thunder-pem.c b/drivers/pci/host/pci-thunder-pem.c
+index af722eb0ca75..e354010fb006 100644
+--- a/drivers/pci/host/pci-thunder-pem.c
++++ b/drivers/pci/host/pci-thunder-pem.c
+@@ -331,7 +331,7 @@ static int thunder_pem_acpi_init(struct pci_config_window *cfg)
+ if (!res_pem)
+ return -ENOMEM;
+
+- ret = acpi_get_rc_resources(dev, "THRX0002", root->segment, res_pem);
++ ret = acpi_get_rc_resources(dev, "CAVA02B", root->segment, res_pem);
+ if (ret) {
+ dev_err(dev, "can't get rc base address\n");
+ return ret;
+diff --git a/drivers/pci/host/pcie-iproc-bcma.c b/drivers/pci/host/pcie-iproc-bcma.c
+index bd4c9ec25edc..384c27e664fe 100644
+--- a/drivers/pci/host/pcie-iproc-bcma.c
++++ b/drivers/pci/host/pcie-iproc-bcma.c
+@@ -44,8 +44,7 @@ static int iproc_pcie_bcma_probe(struct bcma_device *bdev)
+ {
+ struct device *dev = &bdev->dev;
+ struct iproc_pcie *pcie;
+- LIST_HEAD(res);
+- struct resource res_mem;
++ LIST_HEAD(resources);
+ int ret;
+
+ pcie = devm_kzalloc(dev, sizeof(*pcie), GFP_KERNEL);
+@@ -63,22 +62,23 @@ static int iproc_pcie_bcma_probe(struct bcma_device *bdev)
+
+ pcie->base_addr = bdev->addr;
+
+- res_mem.start = bdev->addr_s[0];
+- res_mem.end = bdev->addr_s[0] + SZ_128M - 1;
+- res_mem.name = "PCIe MEM space";
+- res_mem.flags = IORESOURCE_MEM;
+- pci_add_resource(&res, &res_mem);
++ pcie->mem.start = bdev->addr_s[0];
++ pcie->mem.end = bdev->addr_s[0] + SZ_128M - 1;
++ pcie->mem.name = "PCIe MEM space";
++ pcie->mem.flags = IORESOURCE_MEM;
++ pci_add_resource(&resources, &pcie->mem);
+
+ pcie->map_irq = iproc_pcie_bcma_map_irq;
+
+- ret = iproc_pcie_setup(pcie, &res);
+- if (ret)
++ ret = iproc_pcie_setup(pcie, &resources);
++ if (ret) {
+ dev_err(dev, "PCIe controller setup failed\n");
+-
+- pci_free_resource_list(&res);
++ pci_free_resource_list(&resources);
++ return ret;
++ }
+
+ bcma_set_drvdata(bdev, pcie);
+- return ret;
++ return 0;
+ }
+
+ static void iproc_pcie_bcma_remove(struct bcma_device *bdev)
+diff --git a/drivers/pci/host/pcie-iproc-platform.c b/drivers/pci/host/pcie-iproc-platform.c
+index 22d814a78a78..f95564ac37df 100644
+--- a/drivers/pci/host/pcie-iproc-platform.c
++++ b/drivers/pci/host/pcie-iproc-platform.c
+@@ -52,7 +52,7 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
+ struct device_node *np = dev->of_node;
+ struct resource reg;
+ resource_size_t iobase = 0;
+- LIST_HEAD(res);
++ LIST_HEAD(resources);
+ int ret;
+
+ of_id = of_match_device(iproc_pcie_of_match_table, dev);
+@@ -101,10 +101,10 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
+ pcie->phy = NULL;
+ }
+
+- ret = of_pci_get_host_bridge_resources(np, 0, 0xff, &res, &iobase);
++ ret = of_pci_get_host_bridge_resources(np, 0, 0xff, &resources,
++ &iobase);
+ if (ret) {
+- dev_err(dev,
+- "unable to get PCI host bridge resources\n");
++ dev_err(dev, "unable to get PCI host bridge resources\n");
+ return ret;
+ }
+
+@@ -117,14 +117,15 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
+ pcie->map_irq = of_irq_parse_and_map_pci;
+ }
+
+- ret = iproc_pcie_setup(pcie, &res);
+- if (ret)
++ ret = iproc_pcie_setup(pcie, &resources);
++ if (ret) {
+ dev_err(dev, "PCIe controller setup failed\n");
+-
+- pci_free_resource_list(&res);
++ pci_free_resource_list(&resources);
++ return ret;
++ }
+
+ platform_set_drvdata(pdev, pcie);
+- return ret;
++ return 0;
+ }
+
+ static int iproc_pcie_pltfm_remove(struct platform_device *pdev)
+diff --git a/drivers/pci/host/pcie-iproc.h b/drivers/pci/host/pcie-iproc.h
+index 04fed8e907f1..0bbe2ea44f3e 100644
+--- a/drivers/pci/host/pcie-iproc.h
++++ b/drivers/pci/host/pcie-iproc.h
+@@ -90,6 +90,7 @@ struct iproc_pcie {
+ #ifdef CONFIG_ARM
+ struct pci_sys_data sysdata;
+ #endif
++ struct resource mem;
+ struct pci_bus *root_bus;
+ struct phy *phy;
+ int (*map_irq)(const struct pci_dev *, u8, u8);
+diff --git a/drivers/scsi/device_handler/scsi_dh_alua.c b/drivers/scsi/device_handler/scsi_dh_alua.c
+index d704752b6332..6021cb9ea910 100644
+--- a/drivers/scsi/device_handler/scsi_dh_alua.c
++++ b/drivers/scsi/device_handler/scsi_dh_alua.c
+@@ -113,7 +113,7 @@ struct alua_queue_data {
+ #define ALUA_POLICY_SWITCH_ALL 1
+
+ static void alua_rtpg_work(struct work_struct *work);
+-static void alua_rtpg_queue(struct alua_port_group *pg,
++static bool alua_rtpg_queue(struct alua_port_group *pg,
+ struct scsi_device *sdev,
+ struct alua_queue_data *qdata, bool force);
+ static void alua_check(struct scsi_device *sdev, bool force);
+@@ -866,7 +866,13 @@ static void alua_rtpg_work(struct work_struct *work)
+ kref_put(&pg->kref, release_port_group);
+ }
+
+-static void alua_rtpg_queue(struct alua_port_group *pg,
++/**
++ * alua_rtpg_queue() - cause RTPG to be submitted asynchronously
++ *
++ * Returns true if and only if alua_rtpg_work() will be called asynchronously.
++ * That function is responsible for calling @qdata->fn().
++ */
++static bool alua_rtpg_queue(struct alua_port_group *pg,
+ struct scsi_device *sdev,
+ struct alua_queue_data *qdata, bool force)
+ {
+@@ -874,8 +880,8 @@ static void alua_rtpg_queue(struct alua_port_group *pg,
+ unsigned long flags;
+ struct workqueue_struct *alua_wq = kaluad_wq;
+
+- if (!pg)
+- return;
++ if (!pg || scsi_device_get(sdev))
++ return false;
+
+ spin_lock_irqsave(&pg->lock, flags);
+ if (qdata) {
+@@ -888,14 +894,12 @@ static void alua_rtpg_queue(struct alua_port_group *pg,
+ pg->flags |= ALUA_PG_RUN_RTPG;
+ kref_get(&pg->kref);
+ pg->rtpg_sdev = sdev;
+- scsi_device_get(sdev);
+ start_queue = 1;
+ } else if (!(pg->flags & ALUA_PG_RUN_RTPG) && force) {
+ pg->flags |= ALUA_PG_RUN_RTPG;
+ /* Do not queue if the worker is already running */
+ if (!(pg->flags & ALUA_PG_RUNNING)) {
+ kref_get(&pg->kref);
+- sdev = NULL;
+ start_queue = 1;
+ }
+ }
+@@ -904,13 +908,17 @@ static void alua_rtpg_queue(struct alua_port_group *pg,
+ alua_wq = kaluad_sync_wq;
+ spin_unlock_irqrestore(&pg->lock, flags);
+
+- if (start_queue &&
+- !queue_delayed_work(alua_wq, &pg->rtpg_work,
+- msecs_to_jiffies(ALUA_RTPG_DELAY_MSECS))) {
+- if (sdev)
+- scsi_device_put(sdev);
+- kref_put(&pg->kref, release_port_group);
++ if (start_queue) {
++ if (queue_delayed_work(alua_wq, &pg->rtpg_work,
++ msecs_to_jiffies(ALUA_RTPG_DELAY_MSECS)))
++ sdev = NULL;
++ else
++ kref_put(&pg->kref, release_port_group);
+ }
++ if (sdev)
++ scsi_device_put(sdev);
++
++ return true;
+ }
+
+ /*
+@@ -1011,11 +1019,13 @@ static int alua_activate(struct scsi_device *sdev,
+ mutex_unlock(&h->init_mutex);
+ goto out;
+ }
+- fn = NULL;
+ rcu_read_unlock();
+ mutex_unlock(&h->init_mutex);
+
+- alua_rtpg_queue(pg, sdev, qdata, true);
++ if (alua_rtpg_queue(pg, sdev, qdata, true))
++ fn = NULL;
++ else
++ err = SCSI_DH_DEV_OFFLINED;
+ kref_put(&pg->kref, release_port_group);
+ out:
+ if (fn)
+diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
+index 763f012fdeca..87f5e694dbed 100644
+--- a/drivers/scsi/libsas/sas_ata.c
++++ b/drivers/scsi/libsas/sas_ata.c
+@@ -221,7 +221,7 @@ static unsigned int sas_ata_qc_issue(struct ata_queued_cmd *qc)
+ task->num_scatter = qc->n_elem;
+ } else {
+ for_each_sg(qc->sg, sg, qc->n_elem, si)
+- xfer += sg->length;
++ xfer += sg_dma_len(sg);
+
+ task->total_xfer_len = xfer;
+ task->num_scatter = si;
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index 121de0aaa6ad..f753df25ba34 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -998,6 +998,8 @@ sg_ioctl(struct file *filp, unsigned int cmd_in, unsigned long arg)
+ result = get_user(val, ip);
+ if (result)
+ return result;
++ if (val > SG_MAX_CDB_SIZE)
++ return -ENOMEM;
+ sfp->next_cmd_len = (val > 0) ? val : 0;
+ return 0;
+ case SG_GET_VERSION_NUM:
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index fabbe76203bb..4d079cdaa7a3 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -1938,6 +1938,11 @@ static void atmel_flush_buffer(struct uart_port *port)
+ atmel_uart_writel(port, ATMEL_PDC_TCR, 0);
+ atmel_port->pdc_tx.ofs = 0;
+ }
++ /*
++ * in uart_flush_buffer(), the xmit circular buffer has just
++ * been cleared, so we have to reset tx_len accordingly.
++ */
++ atmel_port->tx_len = 0;
+ }
+
+ /*
+@@ -2471,6 +2476,9 @@ static void atmel_console_write(struct console *co, const char *s, u_int count)
+ pdc_tx = atmel_uart_readl(port, ATMEL_PDC_PTSR) & ATMEL_PDC_TXTEN;
+ atmel_uart_writel(port, ATMEL_PDC_PTCR, ATMEL_PDC_TXTDIS);
+
++ /* Make sure that tx path is actually able to send characters */
++ atmel_uart_writel(port, ATMEL_US_CR, ATMEL_US_TXEN);
++
+ uart_console_write(port, s, count, atmel_console_putchar);
+
+ /*
+diff --git a/drivers/tty/serial/mxs-auart.c b/drivers/tty/serial/mxs-auart.c
+index 8c1c9112b3fd..181972b03845 100644
+--- a/drivers/tty/serial/mxs-auart.c
++++ b/drivers/tty/serial/mxs-auart.c
+@@ -1085,7 +1085,7 @@ static void mxs_auart_settermios(struct uart_port *u,
+ AUART_LINECTRL_BAUD_DIV_MAX);
+ baud_max = u->uartclk * 32 / AUART_LINECTRL_BAUD_DIV_MIN;
+ baud = uart_get_baud_rate(u, termios, old, baud_min, baud_max);
+- div = u->uartclk * 32 / baud;
++ div = DIV_ROUND_CLOSEST(u->uartclk * 32, baud);
+ }
+
+ ctrl |= AUART_LINECTRL_BAUD_DIVFRAC(div & 0x3F);
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 479e223f9cff..f029aad67183 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -520,8 +520,10 @@ static int rh_call_control (struct usb_hcd *hcd, struct urb *urb)
+ */
+ tbuf_size = max_t(u16, sizeof(struct usb_hub_descriptor), wLength);
+ tbuf = kzalloc(tbuf_size, GFP_KERNEL);
+- if (!tbuf)
+- return -ENOMEM;
++ if (!tbuf) {
++ status = -ENOMEM;
++ goto err_alloc;
++ }
+
+ bufp = tbuf;
+
+@@ -734,6 +736,7 @@ static int rh_call_control (struct usb_hcd *hcd, struct urb *urb)
+ }
+
+ kfree(tbuf);
++ err_alloc:
+
+ /* any errors get returned through the urb completion */
+ spin_lock_irq(&hcd_root_hub_lock);
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index e32029a31ca4..4c101f4161f8 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -2000,6 +2000,9 @@ static int process_ctrl_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ case TRB_NORMAL:
+ td->urb->actual_length = requested - remaining;
+ goto finish_td;
++ case TRB_STATUS:
++ td->urb->actual_length = requested;
++ goto finish_td;
+ default:
+ xhci_warn(xhci, "WARN: unexpected TRB Type %d\n",
+ trb_type);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 0a436c4a28ad..2c48e2528600 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -2550,17 +2550,14 @@ static void nfs41_check_delegation_stateid(struct nfs4_state *state)
+ }
+
+ nfs4_stateid_copy(&stateid, &delegation->stateid);
+- if (test_bit(NFS_DELEGATION_REVOKED, &delegation->flags)) {
++ if (test_bit(NFS_DELEGATION_REVOKED, &delegation->flags) ||
++ !test_and_clear_bit(NFS_DELEGATION_TEST_EXPIRED,
++ &delegation->flags)) {
+ rcu_read_unlock();
+ nfs_finish_clear_delegation_stateid(state, &stateid);
+ return;
+ }
+
+- if (!test_and_clear_bit(NFS_DELEGATION_TEST_EXPIRED, &delegation->flags)) {
+- rcu_read_unlock();
+- return;
+- }
+-
+ cred = get_rpccred(delegation->cred);
+ rcu_read_unlock();
+ status = nfs41_test_and_free_expired_stateid(server, &stateid, cred);
+diff --git a/fs/nfsd/nfsproc.c b/fs/nfsd/nfsproc.c
+index 010aff5c5a79..536009e50387 100644
+--- a/fs/nfsd/nfsproc.c
++++ b/fs/nfsd/nfsproc.c
+@@ -790,6 +790,7 @@ nfserrno (int errno)
+ { nfserr_serverfault, -ESERVERFAULT },
+ { nfserr_serverfault, -ENFILE },
+ { nfserr_io, -EUCLEAN },
++ { nfserr_perm, -ENOKEY },
+ };
+ int i;
+
+diff --git a/fs/xfs/libxfs/xfs_bmap.c b/fs/xfs/libxfs/xfs_bmap.c
+index bfc00de5c6f1..3365ecb9074d 100644
+--- a/fs/xfs/libxfs/xfs_bmap.c
++++ b/fs/xfs/libxfs/xfs_bmap.c
+@@ -769,8 +769,8 @@ xfs_bmap_extents_to_btree(
+ args.type = XFS_ALLOCTYPE_START_BNO;
+ args.fsbno = XFS_INO_TO_FSB(mp, ip->i_ino);
+ } else if (dfops->dop_low) {
+-try_another_ag:
+ args.type = XFS_ALLOCTYPE_START_BNO;
++try_another_ag:
+ args.fsbno = *firstblock;
+ } else {
+ args.type = XFS_ALLOCTYPE_NEAR_BNO;
+@@ -796,17 +796,19 @@ xfs_bmap_extents_to_btree(
+ if (xfs_sb_version_hasreflink(&cur->bc_mp->m_sb) &&
+ args.fsbno == NULLFSBLOCK &&
+ args.type == XFS_ALLOCTYPE_NEAR_BNO) {
+- dfops->dop_low = true;
++ args.type = XFS_ALLOCTYPE_FIRST_AG;
+ goto try_another_ag;
+ }
++ if (WARN_ON_ONCE(args.fsbno == NULLFSBLOCK)) {
++ xfs_iroot_realloc(ip, -1, whichfork);
++ xfs_btree_del_cursor(cur, XFS_BTREE_ERROR);
++ return -ENOSPC;
++ }
+ /*
+ * Allocation can't fail, the space was reserved.
+ */
+- ASSERT(args.fsbno != NULLFSBLOCK);
+ ASSERT(*firstblock == NULLFSBLOCK ||
+- args.agno == XFS_FSB_TO_AGNO(mp, *firstblock) ||
+- (dfops->dop_low &&
+- args.agno > XFS_FSB_TO_AGNO(mp, *firstblock)));
++ args.agno >= XFS_FSB_TO_AGNO(mp, *firstblock));
+ *firstblock = cur->bc_private.b.firstblock = args.fsbno;
+ cur->bc_private.b.allocated++;
+ ip->i_d.di_nblocks++;
+@@ -1278,7 +1280,6 @@ xfs_bmap_read_extents(
+ /* REFERENCED */
+ xfs_extnum_t room; /* number of entries there's room for */
+
+- bno = NULLFSBLOCK;
+ mp = ip->i_mount;
+ ifp = XFS_IFORK_PTR(ip, whichfork);
+ exntf = (whichfork != XFS_DATA_FORK) ? XFS_EXTFMT_NOSTATE :
+@@ -1291,9 +1292,7 @@ xfs_bmap_read_extents(
+ ASSERT(level > 0);
+ pp = XFS_BMAP_BROOT_PTR_ADDR(mp, block, 1, ifp->if_broot_bytes);
+ bno = be64_to_cpu(*pp);
+- ASSERT(bno != NULLFSBLOCK);
+- ASSERT(XFS_FSB_TO_AGNO(mp, bno) < mp->m_sb.sb_agcount);
+- ASSERT(XFS_FSB_TO_AGBNO(mp, bno) < mp->m_sb.sb_agblocks);
++
+ /*
+ * Go down the tree until leaf level is reached, following the first
+ * pointer (leftmost) at each level.
+@@ -1864,6 +1863,7 @@ xfs_bmap_add_extent_delay_real(
+ */
+ trace_xfs_bmap_pre_update(bma->ip, bma->idx, state, _THIS_IP_);
+ xfs_bmbt_set_startblock(ep, new->br_startblock);
++ xfs_bmbt_set_state(ep, new->br_state);
+ trace_xfs_bmap_post_update(bma->ip, bma->idx, state, _THIS_IP_);
+
+ (*nextents)++;
+@@ -2202,6 +2202,7 @@ STATIC int /* error */
+ xfs_bmap_add_extent_unwritten_real(
+ struct xfs_trans *tp,
+ xfs_inode_t *ip, /* incore inode pointer */
++ int whichfork,
+ xfs_extnum_t *idx, /* extent number to update/insert */
+ xfs_btree_cur_t **curp, /* if *curp is null, not a btree */
+ xfs_bmbt_irec_t *new, /* new data to add to file extents */
+@@ -2221,12 +2222,14 @@ xfs_bmap_add_extent_unwritten_real(
+ /* left is 0, right is 1, prev is 2 */
+ int rval=0; /* return value (logging flags) */
+ int state = 0;/* state bits, accessed thru macros */
+- struct xfs_mount *mp = tp->t_mountp;
++ struct xfs_mount *mp = ip->i_mount;
+
+ *logflagsp = 0;
+
+ cur = *curp;
+- ifp = XFS_IFORK_PTR(ip, XFS_DATA_FORK);
++ ifp = XFS_IFORK_PTR(ip, whichfork);
++ if (whichfork == XFS_COW_FORK)
++ state |= BMAP_COWFORK;
+
+ ASSERT(*idx >= 0);
+ ASSERT(*idx <= xfs_iext_count(ifp));
+@@ -2285,7 +2288,7 @@ xfs_bmap_add_extent_unwritten_real(
+ * Don't set contiguous if the combined extent would be too large.
+ * Also check for all-three-contiguous being too large.
+ */
+- if (*idx < xfs_iext_count(&ip->i_df) - 1) {
++ if (*idx < xfs_iext_count(ifp) - 1) {
+ state |= BMAP_RIGHT_VALID;
+ xfs_bmbt_get_all(xfs_iext_get_ext(ifp, *idx + 1), &RIGHT);
+ if (isnullstartblock(RIGHT.br_startblock))
+@@ -2325,7 +2328,8 @@ xfs_bmap_add_extent_unwritten_real(
+ trace_xfs_bmap_post_update(ip, *idx, state, _THIS_IP_);
+
+ xfs_iext_remove(ip, *idx + 1, 2, state);
+- ip->i_d.di_nextents -= 2;
++ XFS_IFORK_NEXT_SET(ip, whichfork,
++ XFS_IFORK_NEXTENTS(ip, whichfork) - 2);
+ if (cur == NULL)
+ rval = XFS_ILOG_CORE | XFS_ILOG_DEXT;
+ else {
+@@ -2368,7 +2372,8 @@ xfs_bmap_add_extent_unwritten_real(
+ trace_xfs_bmap_post_update(ip, *idx, state, _THIS_IP_);
+
+ xfs_iext_remove(ip, *idx + 1, 1, state);
+- ip->i_d.di_nextents--;
++ XFS_IFORK_NEXT_SET(ip, whichfork,
++ XFS_IFORK_NEXTENTS(ip, whichfork) - 1);
+ if (cur == NULL)
+ rval = XFS_ILOG_CORE | XFS_ILOG_DEXT;
+ else {
+@@ -2403,7 +2408,8 @@ xfs_bmap_add_extent_unwritten_real(
+ xfs_bmbt_set_state(ep, newext);
+ trace_xfs_bmap_post_update(ip, *idx, state, _THIS_IP_);
+ xfs_iext_remove(ip, *idx + 1, 1, state);
+- ip->i_d.di_nextents--;
++ XFS_IFORK_NEXT_SET(ip, whichfork,
++ XFS_IFORK_NEXTENTS(ip, whichfork) - 1);
+ if (cur == NULL)
+ rval = XFS_ILOG_CORE | XFS_ILOG_DEXT;
+ else {
+@@ -2515,7 +2521,8 @@ xfs_bmap_add_extent_unwritten_real(
+ trace_xfs_bmap_post_update(ip, *idx, state, _THIS_IP_);
+
+ xfs_iext_insert(ip, *idx, 1, new, state);
+- ip->i_d.di_nextents++;
++ XFS_IFORK_NEXT_SET(ip, whichfork,
++ XFS_IFORK_NEXTENTS(ip, whichfork) + 1);
+ if (cur == NULL)
+ rval = XFS_ILOG_CORE | XFS_ILOG_DEXT;
+ else {
+@@ -2593,7 +2600,8 @@ xfs_bmap_add_extent_unwritten_real(
+ ++*idx;
+ xfs_iext_insert(ip, *idx, 1, new, state);
+
+- ip->i_d.di_nextents++;
++ XFS_IFORK_NEXT_SET(ip, whichfork,
++ XFS_IFORK_NEXTENTS(ip, whichfork) + 1);
+ if (cur == NULL)
+ rval = XFS_ILOG_CORE | XFS_ILOG_DEXT;
+ else {
+@@ -2641,7 +2649,8 @@ xfs_bmap_add_extent_unwritten_real(
+ ++*idx;
+ xfs_iext_insert(ip, *idx, 2, &r[0], state);
+
+- ip->i_d.di_nextents += 2;
++ XFS_IFORK_NEXT_SET(ip, whichfork,
++ XFS_IFORK_NEXTENTS(ip, whichfork) + 2);
+ if (cur == NULL)
+ rval = XFS_ILOG_CORE | XFS_ILOG_DEXT;
+ else {
+@@ -2695,17 +2704,17 @@ xfs_bmap_add_extent_unwritten_real(
+ }
+
+ /* update reverse mappings */
+- error = xfs_rmap_convert_extent(mp, dfops, ip, XFS_DATA_FORK, new);
++ error = xfs_rmap_convert_extent(mp, dfops, ip, whichfork, new);
+ if (error)
+ goto done;
+
+ /* convert to a btree if necessary */
+- if (xfs_bmap_needs_btree(ip, XFS_DATA_FORK)) {
++ if (xfs_bmap_needs_btree(ip, whichfork)) {
+ int tmp_logflags; /* partial log flag return val */
+
+ ASSERT(cur == NULL);
+ error = xfs_bmap_extents_to_btree(tp, ip, first, dfops, &cur,
+- 0, &tmp_logflags, XFS_DATA_FORK);
++ 0, &tmp_logflags, whichfork);
+ *logflagsp |= tmp_logflags;
+ if (error)
+ goto done;
+@@ -2717,7 +2726,7 @@ xfs_bmap_add_extent_unwritten_real(
+ *curp = cur;
+ }
+
+- xfs_bmap_check_leaf_extents(*curp, ip, XFS_DATA_FORK);
++ xfs_bmap_check_leaf_extents(*curp, ip, whichfork);
+ done:
+ *logflagsp |= rval;
+ return error;
+@@ -2809,7 +2818,8 @@ xfs_bmap_add_extent_hole_delay(
+ oldlen = startblockval(left.br_startblock) +
+ startblockval(new->br_startblock) +
+ startblockval(right.br_startblock);
+- newlen = xfs_bmap_worst_indlen(ip, temp);
++ newlen = XFS_FILBLKS_MIN(xfs_bmap_worst_indlen(ip, temp),
++ oldlen);
+ xfs_bmbt_set_startblock(xfs_iext_get_ext(ifp, *idx),
+ nullstartblock((int)newlen));
+ trace_xfs_bmap_post_update(ip, *idx, state, _THIS_IP_);
+@@ -2830,7 +2840,8 @@ xfs_bmap_add_extent_hole_delay(
+ xfs_bmbt_set_blockcount(xfs_iext_get_ext(ifp, *idx), temp);
+ oldlen = startblockval(left.br_startblock) +
+ startblockval(new->br_startblock);
+- newlen = xfs_bmap_worst_indlen(ip, temp);
++ newlen = XFS_FILBLKS_MIN(xfs_bmap_worst_indlen(ip, temp),
++ oldlen);
+ xfs_bmbt_set_startblock(xfs_iext_get_ext(ifp, *idx),
+ nullstartblock((int)newlen));
+ trace_xfs_bmap_post_update(ip, *idx, state, _THIS_IP_);
+@@ -2846,7 +2857,8 @@ xfs_bmap_add_extent_hole_delay(
+ temp = new->br_blockcount + right.br_blockcount;
+ oldlen = startblockval(new->br_startblock) +
+ startblockval(right.br_startblock);
+- newlen = xfs_bmap_worst_indlen(ip, temp);
++ newlen = XFS_FILBLKS_MIN(xfs_bmap_worst_indlen(ip, temp),
++ oldlen);
+ xfs_bmbt_set_allf(xfs_iext_get_ext(ifp, *idx),
+ new->br_startoff,
+ nullstartblock((int)newlen), temp, right.br_state);
+@@ -3822,17 +3834,13 @@ xfs_bmap_btalloc(
+ * the first block that was allocated.
+ */
+ ASSERT(*ap->firstblock == NULLFSBLOCK ||
+- XFS_FSB_TO_AGNO(mp, *ap->firstblock) ==
+- XFS_FSB_TO_AGNO(mp, args.fsbno) ||
+- (ap->dfops->dop_low &&
+- XFS_FSB_TO_AGNO(mp, *ap->firstblock) <
+- XFS_FSB_TO_AGNO(mp, args.fsbno)));
++ XFS_FSB_TO_AGNO(mp, *ap->firstblock) <=
++ XFS_FSB_TO_AGNO(mp, args.fsbno));
+
+ ap->blkno = args.fsbno;
+ if (*ap->firstblock == NULLFSBLOCK)
+ *ap->firstblock = args.fsbno;
+- ASSERT(nullfb || fb_agno == args.agno ||
+- (ap->dfops->dop_low && fb_agno < args.agno));
++ ASSERT(nullfb || fb_agno <= args.agno);
+ ap->length = args.len;
+ if (!(ap->flags & XFS_BMAPI_COWFORK))
+ ap->ip->i_d.di_nblocks += args.len;
+@@ -4156,6 +4164,19 @@ xfs_bmapi_read(
+ return 0;
+ }
+
++/*
++ * Add a delayed allocation extent to an inode. Blocks are reserved from the
++ * global pool and the extent inserted into the inode in-core extent tree.
++ *
++ * On entry, got refers to the first extent beyond the offset of the extent to
++ * allocate or eof is specified if no such extent exists. On return, got refers
++ * to the extent record that was inserted to the inode fork.
++ *
++ * Note that the allocated extent may have been merged with contiguous extents
++ * during insertion into the inode fork. Thus, got does not reflect the current
++ * state of the inode fork on return. If necessary, the caller can use lastx to
++ * look up the updated record in the inode fork.
++ */
+ int
+ xfs_bmapi_reserve_delalloc(
+ struct xfs_inode *ip,
+@@ -4242,13 +4263,8 @@ xfs_bmapi_reserve_delalloc(
+ got->br_startblock = nullstartblock(indlen);
+ got->br_blockcount = alen;
+ got->br_state = XFS_EXT_NORM;
+- xfs_bmap_add_extent_hole_delay(ip, whichfork, lastx, got);
+
+- /*
+- * Update our extent pointer, given that xfs_bmap_add_extent_hole_delay
+- * might have merged it into one of the neighbouring ones.
+- */
+- xfs_bmbt_get_all(xfs_iext_get_ext(ifp, *lastx), got);
++ xfs_bmap_add_extent_hole_delay(ip, whichfork, lastx, got);
+
+ /*
+ * Tag the inode if blocks were preallocated. Note that COW fork
+@@ -4260,10 +4276,6 @@ xfs_bmapi_reserve_delalloc(
+ if (whichfork == XFS_COW_FORK && (prealloc || aoff < off || alen > len))
+ xfs_inode_set_cowblocks_tag(ip);
+
+- ASSERT(got->br_startoff <= aoff);
+- ASSERT(got->br_startoff + got->br_blockcount >= aoff + alen);
+- ASSERT(isnullstartblock(got->br_startblock));
+- ASSERT(got->br_state == XFS_EXT_NORM);
+ return 0;
+
+ out_unreserve_blocks:
+@@ -4368,10 +4380,16 @@ xfs_bmapi_allocate(
+ bma->got.br_state = XFS_EXT_NORM;
+
+ /*
+- * A wasdelay extent has been initialized, so shouldn't be flagged
+- * as unwritten.
++ * In the data fork, a wasdelay extent has been initialized, so
++ * shouldn't be flagged as unwritten.
++ *
++ * For the cow fork, however, we convert delalloc reservations
++ * (extents allocated for speculative preallocation) to
++ * allocated unwritten extents, and only convert the unwritten
++ * extents to real extents when we're about to write the data.
+ */
+- if (!bma->wasdel && (bma->flags & XFS_BMAPI_PREALLOC) &&
++ if ((!bma->wasdel || (bma->flags & XFS_BMAPI_COWFORK)) &&
++ (bma->flags & XFS_BMAPI_PREALLOC) &&
+ xfs_sb_version_hasextflgbit(&mp->m_sb))
+ bma->got.br_state = XFS_EXT_UNWRITTEN;
+
+@@ -4422,8 +4440,6 @@ xfs_bmapi_convert_unwritten(
+ (XFS_BMAPI_PREALLOC | XFS_BMAPI_CONVERT))
+ return 0;
+
+- ASSERT(whichfork != XFS_COW_FORK);
+-
+ /*
+ * Modify (by adding) the state flag, if writing.
+ */
+@@ -4448,8 +4464,8 @@ xfs_bmapi_convert_unwritten(
+ return error;
+ }
+
+- error = xfs_bmap_add_extent_unwritten_real(bma->tp, bma->ip, &bma->idx,
+- &bma->cur, mval, bma->firstblock, bma->dfops,
++ error = xfs_bmap_add_extent_unwritten_real(bma->tp, bma->ip, whichfork,
++ &bma->idx, &bma->cur, mval, bma->firstblock, bma->dfops,
+ &tmp_logflags);
+ /*
+ * Log the inode core unconditionally in the unwritten extent conversion
+@@ -4458,8 +4474,12 @@ xfs_bmapi_convert_unwritten(
+ * in the transaction for the sake of fsync(), even if nothing has
+ * changed, because fsync() will not force the log for this transaction
+ * unless it sees the inode pinned.
++ *
++ * Note: If we're only converting cow fork extents, there aren't
++ * any on-disk updates to make, so we don't need to log anything.
+ */
+- bma->logflags |= tmp_logflags | XFS_ILOG_CORE;
++ if (whichfork != XFS_COW_FORK)
++ bma->logflags |= tmp_logflags | XFS_ILOG_CORE;
+ if (error)
+ return error;
+
+@@ -4533,15 +4553,15 @@ xfs_bmapi_write(
+ ASSERT(*nmap >= 1);
+ ASSERT(*nmap <= XFS_BMAP_MAX_NMAP);
+ ASSERT(!(flags & XFS_BMAPI_IGSTATE));
+- ASSERT(tp != NULL);
++ ASSERT(tp != NULL ||
++ (flags & (XFS_BMAPI_CONVERT | XFS_BMAPI_COWFORK)) ==
++ (XFS_BMAPI_CONVERT | XFS_BMAPI_COWFORK));
+ ASSERT(len > 0);
+ ASSERT(XFS_IFORK_FORMAT(ip, whichfork) != XFS_DINODE_FMT_LOCAL);
+ ASSERT(xfs_isilocked(ip, XFS_ILOCK_EXCL));
+ ASSERT(!(flags & XFS_BMAPI_REMAP) || whichfork == XFS_DATA_FORK);
+ ASSERT(!(flags & XFS_BMAPI_PREALLOC) || !(flags & XFS_BMAPI_REMAP));
+ ASSERT(!(flags & XFS_BMAPI_CONVERT) || !(flags & XFS_BMAPI_REMAP));
+- ASSERT(!(flags & XFS_BMAPI_PREALLOC) || whichfork != XFS_COW_FORK);
+- ASSERT(!(flags & XFS_BMAPI_CONVERT) || whichfork != XFS_COW_FORK);
+
+ /* zeroing is for currently only for data extents, not metadata */
+ ASSERT((flags & (XFS_BMAPI_METADATA | XFS_BMAPI_ZERO)) !=
+@@ -4746,13 +4766,9 @@ xfs_bmapi_write(
+ if (bma.cur) {
+ if (!error) {
+ ASSERT(*firstblock == NULLFSBLOCK ||
+- XFS_FSB_TO_AGNO(mp, *firstblock) ==
++ XFS_FSB_TO_AGNO(mp, *firstblock) <=
+ XFS_FSB_TO_AGNO(mp,
+- bma.cur->bc_private.b.firstblock) ||
+- (dfops->dop_low &&
+- XFS_FSB_TO_AGNO(mp, *firstblock) <
+- XFS_FSB_TO_AGNO(mp,
+- bma.cur->bc_private.b.firstblock)));
++ bma.cur->bc_private.b.firstblock));
+ *firstblock = bma.cur->bc_private.b.firstblock;
+ }
+ xfs_btree_del_cursor(bma.cur,
+@@ -4787,34 +4803,59 @@ xfs_bmap_split_indlen(
+ xfs_filblks_t len2 = *indlen2;
+ xfs_filblks_t nres = len1 + len2; /* new total res. */
+ xfs_filblks_t stolen = 0;
++ xfs_filblks_t resfactor;
+
+ /*
+ * Steal as many blocks as we can to try and satisfy the worst case
+ * indlen for both new extents.
+ */
+- while (nres > ores && avail) {
+- nres--;
+- avail--;
+- stolen++;
+- }
++ if (ores < nres && avail)
++ stolen = XFS_FILBLKS_MIN(nres - ores, avail);
++ ores += stolen;
++
++ /* nothing else to do if we've satisfied the new reservation */
++ if (ores >= nres)
++ return stolen;
++
++ /*
++ * We can't meet the total required reservation for the two extents.
++ * Calculate the percent of the overall shortage between both extents
++ * and apply this percentage to each of the requested indlen values.
++ * This distributes the shortage fairly and reduces the chances that one
++ * of the two extents is left with nothing when extents are repeatedly
++ * split.
++ */
++ resfactor = (ores * 100);
++ do_div(resfactor, nres);
++ len1 *= resfactor;
++ do_div(len1, 100);
++ len2 *= resfactor;
++ do_div(len2, 100);
++ ASSERT(len1 + len2 <= ores);
++ ASSERT(len1 < *indlen1 && len2 < *indlen2);
+
+ /*
+- * The only blocks available are those reserved for the original
+- * extent and what we can steal from the extent being removed.
+- * If this still isn't enough to satisfy the combined
+- * requirements for the two new extents, skim blocks off of each
+- * of the new reservations until they match what is available.
++ * Hand out the remainder to each extent. If one of the two reservations
++ * is zero, we want to make sure that one gets a block first. The loop
++ * below starts with len1, so hand len2 a block right off the bat if it
++ * is zero.
+ */
+- while (nres > ores) {
+- if (len1) {
+- len1--;
+- nres--;
++ ores -= (len1 + len2);
++ ASSERT((*indlen1 - len1) + (*indlen2 - len2) >= ores);
++ if (ores && !len2 && *indlen2) {
++ len2++;
++ ores--;
++ }
++ while (ores) {
++ if (len1 < *indlen1) {
++ len1++;
++ ores--;
+ }
+- if (nres == ores)
++ if (!ores)
+ break;
+- if (len2) {
+- len2--;
+- nres--;
++ if (len2 < *indlen2) {
++ len2++;
++ ores--;
+ }
+ }
+
+@@ -5556,8 +5597,8 @@ __xfs_bunmapi(
+ }
+ del.br_state = XFS_EXT_UNWRITTEN;
+ error = xfs_bmap_add_extent_unwritten_real(tp, ip,
+- &lastx, &cur, &del, firstblock, dfops,
+- &logflags);
++ whichfork, &lastx, &cur, &del,
++ firstblock, dfops, &logflags);
+ if (error)
+ goto error0;
+ goto nodelete;
+@@ -5610,8 +5651,9 @@ __xfs_bunmapi(
+ prev.br_state = XFS_EXT_UNWRITTEN;
+ lastx--;
+ error = xfs_bmap_add_extent_unwritten_real(tp,
+- ip, &lastx, &cur, &prev,
+- firstblock, dfops, &logflags);
++ ip, whichfork, &lastx, &cur,
++ &prev, firstblock, dfops,
++ &logflags);
+ if (error)
+ goto error0;
+ goto nodelete;
+@@ -5619,8 +5661,9 @@ __xfs_bunmapi(
+ ASSERT(del.br_state == XFS_EXT_NORM);
+ del.br_state = XFS_EXT_UNWRITTEN;
+ error = xfs_bmap_add_extent_unwritten_real(tp,
+- ip, &lastx, &cur, &del,
+- firstblock, dfops, &logflags);
++ ip, whichfork, &lastx, &cur,
++ &del, firstblock, dfops,
++ &logflags);
+ if (error)
+ goto error0;
+ goto nodelete;
+diff --git a/fs/xfs/libxfs/xfs_bmap_btree.c b/fs/xfs/libxfs/xfs_bmap_btree.c
+index d9be241fc86f..999cc5878890 100644
+--- a/fs/xfs/libxfs/xfs_bmap_btree.c
++++ b/fs/xfs/libxfs/xfs_bmap_btree.c
+@@ -453,8 +453,8 @@ xfs_bmbt_alloc_block(
+
+ if (args.fsbno == NULLFSBLOCK) {
+ args.fsbno = be64_to_cpu(start->l);
+-try_another_ag:
+ args.type = XFS_ALLOCTYPE_START_BNO;
++try_another_ag:
+ /*
+ * Make sure there is sufficient room left in the AG to
+ * complete a full tree split for an extent insert. If
+@@ -494,8 +494,8 @@ xfs_bmbt_alloc_block(
+ if (xfs_sb_version_hasreflink(&cur->bc_mp->m_sb) &&
+ args.fsbno == NULLFSBLOCK &&
+ args.type == XFS_ALLOCTYPE_NEAR_BNO) {
+- cur->bc_private.b.dfops->dop_low = true;
+ args.fsbno = cur->bc_private.b.firstblock;
++ args.type = XFS_ALLOCTYPE_FIRST_AG;
+ goto try_another_ag;
+ }
+
+@@ -512,7 +512,7 @@ xfs_bmbt_alloc_block(
+ goto error0;
+ cur->bc_private.b.dfops->dop_low = true;
+ }
+- if (args.fsbno == NULLFSBLOCK) {
++ if (WARN_ON_ONCE(args.fsbno == NULLFSBLOCK)) {
+ XFS_BTREE_TRACE_CURSOR(cur, XBT_EXIT);
+ *stat = 0;
+ return 0;
+diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c
+index 21e6a6ab6b9a..2849d3fa3d0b 100644
+--- a/fs/xfs/libxfs/xfs_btree.c
++++ b/fs/xfs/libxfs/xfs_btree.c
+@@ -810,7 +810,8 @@ xfs_btree_read_bufl(
+ xfs_daddr_t d; /* real disk block address */
+ int error;
+
+- ASSERT(fsbno != NULLFSBLOCK);
++ if (!XFS_FSB_SANITY_CHECK(mp, fsbno))
++ return -EFSCORRUPTED;
+ d = XFS_FSB_TO_DADDR(mp, fsbno);
+ error = xfs_trans_read_buf(mp, tp, mp->m_ddev_targp, d,
+ mp->m_bsize, lock, &bp, ops);
+diff --git a/fs/xfs/libxfs/xfs_btree.h b/fs/xfs/libxfs/xfs_btree.h
+index b69b947c4c1b..33a8f8694d30 100644
+--- a/fs/xfs/libxfs/xfs_btree.h
++++ b/fs/xfs/libxfs/xfs_btree.h
+@@ -456,7 +456,7 @@ static inline int xfs_btree_get_level(struct xfs_btree_block *block)
+ #define XFS_FILBLKS_MAX(a,b) max_t(xfs_filblks_t, (a), (b))
+
+ #define XFS_FSB_SANITY_CHECK(mp,fsb) \
+- (XFS_FSB_TO_AGNO(mp, fsb) < mp->m_sb.sb_agcount && \
++ (fsb && XFS_FSB_TO_AGNO(mp, fsb) < mp->m_sb.sb_agcount && \
+ XFS_FSB_TO_AGBNO(mp, fsb) < mp->m_sb.sb_agblocks)
+
+ /*
+diff --git a/fs/xfs/libxfs/xfs_da_btree.c b/fs/xfs/libxfs/xfs_da_btree.c
+index f2dc1a950c85..1bdf2888295b 100644
+--- a/fs/xfs/libxfs/xfs_da_btree.c
++++ b/fs/xfs/libxfs/xfs_da_btree.c
+@@ -2633,7 +2633,7 @@ xfs_da_read_buf(
+ /*
+ * Readahead the dir/attr block.
+ */
+-xfs_daddr_t
++int
+ xfs_da_reada_buf(
+ struct xfs_inode *dp,
+ xfs_dablk_t bno,
+@@ -2664,7 +2664,5 @@ xfs_da_reada_buf(
+ if (mapp != &map)
+ kmem_free(mapp);
+
+- if (error)
+- return -1;
+- return mappedbno;
++ return error;
+ }
+diff --git a/fs/xfs/libxfs/xfs_da_btree.h b/fs/xfs/libxfs/xfs_da_btree.h
+index 98c75cbe6ac2..4e29cb6a3627 100644
+--- a/fs/xfs/libxfs/xfs_da_btree.h
++++ b/fs/xfs/libxfs/xfs_da_btree.h
+@@ -201,7 +201,7 @@ int xfs_da_read_buf(struct xfs_trans *trans, struct xfs_inode *dp,
+ xfs_dablk_t bno, xfs_daddr_t mappedbno,
+ struct xfs_buf **bpp, int whichfork,
+ const struct xfs_buf_ops *ops);
+-xfs_daddr_t xfs_da_reada_buf(struct xfs_inode *dp, xfs_dablk_t bno,
++int xfs_da_reada_buf(struct xfs_inode *dp, xfs_dablk_t bno,
+ xfs_daddr_t mapped_bno, int whichfork,
+ const struct xfs_buf_ops *ops);
+ int xfs_da_shrink_inode(xfs_da_args_t *args, xfs_dablk_t dead_blkno,
+diff --git a/fs/xfs/libxfs/xfs_dir2_node.c b/fs/xfs/libxfs/xfs_dir2_node.c
+index 75a557432d0f..bbd1238852b3 100644
+--- a/fs/xfs/libxfs/xfs_dir2_node.c
++++ b/fs/xfs/libxfs/xfs_dir2_node.c
+@@ -155,6 +155,42 @@ const struct xfs_buf_ops xfs_dir3_free_buf_ops = {
+ .verify_write = xfs_dir3_free_write_verify,
+ };
+
++/* Everything ok in the free block header? */
++static bool
++xfs_dir3_free_header_check(
++ struct xfs_inode *dp,
++ xfs_dablk_t fbno,
++ struct xfs_buf *bp)
++{
++ struct xfs_mount *mp = dp->i_mount;
++ unsigned int firstdb;
++ int maxbests;
++
++ maxbests = dp->d_ops->free_max_bests(mp->m_dir_geo);
++ firstdb = (xfs_dir2_da_to_db(mp->m_dir_geo, fbno) -
++ xfs_dir2_byte_to_db(mp->m_dir_geo, XFS_DIR2_FREE_OFFSET)) *
++ maxbests;
++ if (xfs_sb_version_hascrc(&mp->m_sb)) {
++ struct xfs_dir3_free_hdr *hdr3 = bp->b_addr;
++
++ if (be32_to_cpu(hdr3->firstdb) != firstdb)
++ return false;
++ if (be32_to_cpu(hdr3->nvalid) > maxbests)
++ return false;
++ if (be32_to_cpu(hdr3->nvalid) < be32_to_cpu(hdr3->nused))
++ return false;
++ } else {
++ struct xfs_dir2_free_hdr *hdr = bp->b_addr;
++
++ if (be32_to_cpu(hdr->firstdb) != firstdb)
++ return false;
++ if (be32_to_cpu(hdr->nvalid) > maxbests)
++ return false;
++ if (be32_to_cpu(hdr->nvalid) < be32_to_cpu(hdr->nused))
++ return false;
++ }
++ return true;
++}
+
+ static int
+ __xfs_dir3_free_read(
+@@ -168,11 +204,22 @@ __xfs_dir3_free_read(
+
+ err = xfs_da_read_buf(tp, dp, fbno, mappedbno, bpp,
+ XFS_DATA_FORK, &xfs_dir3_free_buf_ops);
++ if (err || !*bpp)
++ return err;
++
++ /* Check things that we can't do in the verifier. */
++ if (!xfs_dir3_free_header_check(dp, fbno, *bpp)) {
++ xfs_buf_ioerror(*bpp, -EFSCORRUPTED);
++ xfs_verifier_error(*bpp);
++ xfs_trans_brelse(tp, *bpp);
++ return -EFSCORRUPTED;
++ }
+
+ /* try read returns without an error or *bpp if it lands in a hole */
+- if (!err && tp && *bpp)
++ if (tp)
+ xfs_trans_buf_set_type(tp, *bpp, XFS_BLFT_DIR_FREE_BUF);
+- return err;
++
++ return 0;
+ }
+
+ int
+diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c
+index f272abff11e1..d41ade5d293e 100644
+--- a/fs/xfs/libxfs/xfs_ialloc.c
++++ b/fs/xfs/libxfs/xfs_ialloc.c
+@@ -51,8 +51,7 @@ xfs_ialloc_cluster_alignment(
+ struct xfs_mount *mp)
+ {
+ if (xfs_sb_version_hasalign(&mp->m_sb) &&
+- mp->m_sb.sb_inoalignmt >=
+- XFS_B_TO_FSBT(mp, mp->m_inode_cluster_size))
++ mp->m_sb.sb_inoalignmt >= xfs_icluster_size_fsb(mp))
+ return mp->m_sb.sb_inoalignmt;
+ return 1;
+ }
+diff --git a/fs/xfs/libxfs/xfs_inode_fork.c b/fs/xfs/libxfs/xfs_inode_fork.c
+index 222e103356c6..25c1e078aef6 100644
+--- a/fs/xfs/libxfs/xfs_inode_fork.c
++++ b/fs/xfs/libxfs/xfs_inode_fork.c
+@@ -26,6 +26,7 @@
+ #include "xfs_inode.h"
+ #include "xfs_trans.h"
+ #include "xfs_inode_item.h"
++#include "xfs_btree.h"
+ #include "xfs_bmap_btree.h"
+ #include "xfs_bmap.h"
+ #include "xfs_error.h"
+@@ -429,11 +430,13 @@ xfs_iformat_btree(
+ /* REFERENCED */
+ int nrecs;
+ int size;
++ int level;
+
+ ifp = XFS_IFORK_PTR(ip, whichfork);
+ dfp = (xfs_bmdr_block_t *)XFS_DFORK_PTR(dip, whichfork);
+ size = XFS_BMAP_BROOT_SPACE(mp, dfp);
+ nrecs = be16_to_cpu(dfp->bb_numrecs);
++ level = be16_to_cpu(dfp->bb_level);
+
+ /*
+ * blow out if -- fork has less extents than can fit in
+@@ -446,7 +449,8 @@ xfs_iformat_btree(
+ XFS_IFORK_MAXEXT(ip, whichfork) ||
+ XFS_BMDR_SPACE_CALC(nrecs) >
+ XFS_DFORK_SIZE(dip, mp, whichfork) ||
+- XFS_IFORK_NEXTENTS(ip, whichfork) > ip->i_d.di_nblocks)) {
++ XFS_IFORK_NEXTENTS(ip, whichfork) > ip->i_d.di_nblocks) ||
++ level == 0 || level > XFS_BTREE_MAXLEVELS) {
+ xfs_warn(mp, "corrupt inode %Lu (btree).",
+ (unsigned long long) ip->i_ino);
+ XFS_CORRUPTION_ERROR("xfs_iformat_btree", XFS_ERRLEVEL_LOW,
+@@ -497,15 +501,14 @@ xfs_iread_extents(
+ * We know that the size is valid (it's checked in iformat_btree)
+ */
+ ifp->if_bytes = ifp->if_real_bytes = 0;
+- ifp->if_flags |= XFS_IFEXTENTS;
+ xfs_iext_add(ifp, 0, nextents);
+ error = xfs_bmap_read_extents(tp, ip, whichfork);
+ if (error) {
+ xfs_iext_destroy(ifp);
+- ifp->if_flags &= ~XFS_IFEXTENTS;
+ return error;
+ }
+ xfs_validate_extents(ifp, nextents, XFS_EXTFMT_INODE(ip));
++ ifp->if_flags |= XFS_IFEXTENTS;
+ return 0;
+ }
+ /*
+diff --git a/fs/xfs/xfs_aops.c b/fs/xfs/xfs_aops.c
+index 631e7c0e0a29..937d406d3c11 100644
+--- a/fs/xfs/xfs_aops.c
++++ b/fs/xfs/xfs_aops.c
+@@ -274,54 +274,49 @@ xfs_end_io(
+ struct xfs_ioend *ioend =
+ container_of(work, struct xfs_ioend, io_work);
+ struct xfs_inode *ip = XFS_I(ioend->io_inode);
++ xfs_off_t offset = ioend->io_offset;
++ size_t size = ioend->io_size;
+ int error = ioend->io_bio->bi_error;
+
+ /*
+- * Set an error if the mount has shut down and proceed with end I/O
+- * processing so it can perform whatever cleanups are necessary.
++ * Just clean up the in-memory strutures if the fs has been shut down.
+ */
+- if (XFS_FORCED_SHUTDOWN(ip->i_mount))
++ if (XFS_FORCED_SHUTDOWN(ip->i_mount)) {
+ error = -EIO;
++ goto done;
++ }
+
+ /*
+- * For a CoW extent, we need to move the mapping from the CoW fork
+- * to the data fork. If instead an error happened, just dump the
+- * new blocks.
++ * Clean up any COW blocks on an I/O error.
+ */
+- if (ioend->io_type == XFS_IO_COW) {
+- if (error)
+- goto done;
+- if (ioend->io_bio->bi_error) {
+- error = xfs_reflink_cancel_cow_range(ip,
+- ioend->io_offset, ioend->io_size);
+- goto done;
++ if (unlikely(error)) {
++ switch (ioend->io_type) {
++ case XFS_IO_COW:
++ xfs_reflink_cancel_cow_range(ip, offset, size, true);
++ break;
+ }
+- error = xfs_reflink_end_cow(ip, ioend->io_offset,
+- ioend->io_size);
+- if (error)
+- goto done;
++
++ goto done;
+ }
+
+ /*
+- * For unwritten extents we need to issue transactions to convert a
+- * range to normal written extens after the data I/O has finished.
+- * Detecting and handling completion IO errors is done individually
+- * for each case as different cleanup operations need to be performed
+- * on error.
++ * Success: commit the COW or unwritten blocks if needed.
+ */
+- if (ioend->io_type == XFS_IO_UNWRITTEN) {
+- if (error)
+- goto done;
+- error = xfs_iomap_write_unwritten(ip, ioend->io_offset,
+- ioend->io_size);
+- } else if (ioend->io_append_trans) {
+- error = xfs_setfilesize_ioend(ioend, error);
+- } else {
+- ASSERT(!xfs_ioend_is_append(ioend) ||
+- ioend->io_type == XFS_IO_COW);
++ switch (ioend->io_type) {
++ case XFS_IO_COW:
++ error = xfs_reflink_end_cow(ip, offset, size);
++ break;
++ case XFS_IO_UNWRITTEN:
++ error = xfs_iomap_write_unwritten(ip, offset, size);
++ break;
++ default:
++ ASSERT(!xfs_ioend_is_append(ioend) || ioend->io_append_trans);
++ break;
+ }
+
+ done:
++ if (ioend->io_append_trans)
++ error = xfs_setfilesize_ioend(ioend, error);
+ xfs_destroy_ioend(ioend, error);
+ }
+
+@@ -481,6 +476,12 @@ xfs_submit_ioend(
+ struct xfs_ioend *ioend,
+ int status)
+ {
++ /* Convert CoW extents to regular */
++ if (!status && ioend->io_type == XFS_IO_COW) {
++ status = xfs_reflink_convert_cow(XFS_I(ioend->io_inode),
++ ioend->io_offset, ioend->io_size);
++ }
++
+ /* Reserve log space if we might write beyond the on-disk inode size. */
+ if (!status &&
+ ioend->io_type != XFS_IO_UNWRITTEN &&
+diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
+index c1417919ab0a..c516d7158a21 100644
+--- a/fs/xfs/xfs_bmap_util.c
++++ b/fs/xfs/xfs_bmap_util.c
+@@ -917,17 +917,18 @@ xfs_can_free_eofblocks(struct xfs_inode *ip, bool force)
+ */
+ int
+ xfs_free_eofblocks(
+- xfs_mount_t *mp,
+- xfs_inode_t *ip,
+- bool need_iolock)
++ struct xfs_inode *ip)
+ {
+- xfs_trans_t *tp;
+- int error;
+- xfs_fileoff_t end_fsb;
+- xfs_fileoff_t last_fsb;
+- xfs_filblks_t map_len;
+- int nimaps;
+- xfs_bmbt_irec_t imap;
++ struct xfs_trans *tp;
++ int error;
++ xfs_fileoff_t end_fsb;
++ xfs_fileoff_t last_fsb;
++ xfs_filblks_t map_len;
++ int nimaps;
++ struct xfs_bmbt_irec imap;
++ struct xfs_mount *mp = ip->i_mount;
++
++ ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL));
+
+ /*
+ * Figure out if there are any blocks beyond the end
+@@ -944,6 +945,10 @@ xfs_free_eofblocks(
+ error = xfs_bmapi_read(ip, end_fsb, map_len, &imap, &nimaps, 0);
+ xfs_iunlock(ip, XFS_ILOCK_SHARED);
+
++ /*
++ * If there are blocks after the end of file, truncate the file to its
++ * current size to free them up.
++ */
+ if (!error && (nimaps != 0) &&
+ (imap.br_startblock != HOLESTARTBLOCK ||
+ ip->i_delayed_blks)) {
+@@ -954,22 +959,13 @@ xfs_free_eofblocks(
+ if (error)
+ return error;
+
+- /*
+- * There are blocks after the end of file.
+- * Free them up now by truncating the file to
+- * its current size.
+- */
+- if (need_iolock) {
+- if (!xfs_ilock_nowait(ip, XFS_IOLOCK_EXCL))
+- return -EAGAIN;
+- }
++ /* wait on dio to ensure i_size has settled */
++ inode_dio_wait(VFS_I(ip));
+
+ error = xfs_trans_alloc(mp, &M_RES(mp)->tr_itruncate, 0, 0, 0,
+ &tp);
+ if (error) {
+ ASSERT(XFS_FORCED_SHUTDOWN(mp));
+- if (need_iolock)
+- xfs_iunlock(ip, XFS_IOLOCK_EXCL);
+ return error;
+ }
+
+@@ -997,8 +993,6 @@ xfs_free_eofblocks(
+ }
+
+ xfs_iunlock(ip, XFS_ILOCK_EXCL);
+- if (need_iolock)
+- xfs_iunlock(ip, XFS_IOLOCK_EXCL);
+ }
+ return error;
+ }
+@@ -1393,10 +1387,16 @@ xfs_shift_file_space(
+ xfs_fileoff_t stop_fsb;
+ xfs_fileoff_t next_fsb;
+ xfs_fileoff_t shift_fsb;
++ uint resblks;
+
+ ASSERT(direction == SHIFT_LEFT || direction == SHIFT_RIGHT);
+
+ if (direction == SHIFT_LEFT) {
++ /*
++ * Reserve blocks to cover potential extent merges after left
++ * shift operations.
++ */
++ resblks = XFS_DIOSTRAT_SPACE_RES(mp, 0);
+ next_fsb = XFS_B_TO_FSB(mp, offset + len);
+ stop_fsb = XFS_B_TO_FSB(mp, VFS_I(ip)->i_size);
+ } else {
+@@ -1404,6 +1404,7 @@ xfs_shift_file_space(
+ * If right shift, delegate the work of initialization of
+ * next_fsb to xfs_bmap_shift_extent as it has ilock held.
+ */
++ resblks = 0;
+ next_fsb = NULLFSBLOCK;
+ stop_fsb = XFS_B_TO_FSB(mp, offset);
+ }
+@@ -1415,7 +1416,7 @@ xfs_shift_file_space(
+ * into the accessible region of the file.
+ */
+ if (xfs_can_free_eofblocks(ip, true)) {
+- error = xfs_free_eofblocks(mp, ip, false);
++ error = xfs_free_eofblocks(ip);
+ if (error)
+ return error;
+ }
+@@ -1445,21 +1446,14 @@ xfs_shift_file_space(
+ }
+
+ while (!error && !done) {
+- /*
+- * We would need to reserve permanent block for transaction.
+- * This will come into picture when after shifting extent into
+- * hole we found that adjacent extents can be merged which
+- * may lead to freeing of a block during record update.
+- */
+- error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write,
+- XFS_DIOSTRAT_SPACE_RES(mp, 0), 0, 0, &tp);
++ error = xfs_trans_alloc(mp, &M_RES(mp)->tr_write, resblks, 0, 0,
++ &tp);
+ if (error)
+ break;
+
+ xfs_ilock(ip, XFS_ILOCK_EXCL);
+ error = xfs_trans_reserve_quota(tp, mp, ip->i_udquot,
+- ip->i_gdquot, ip->i_pdquot,
+- XFS_DIOSTRAT_SPACE_RES(mp, 0), 0,
++ ip->i_gdquot, ip->i_pdquot, resblks, 0,
+ XFS_QMOPT_RES_REGBLKS);
+ if (error)
+ goto out_trans_cancel;
+diff --git a/fs/xfs/xfs_bmap_util.h b/fs/xfs/xfs_bmap_util.h
+index 68a621a8e0c0..f1005393785c 100644
+--- a/fs/xfs/xfs_bmap_util.h
++++ b/fs/xfs/xfs_bmap_util.h
+@@ -63,8 +63,7 @@ int xfs_insert_file_space(struct xfs_inode *, xfs_off_t offset,
+
+ /* EOF block manipulation functions */
+ bool xfs_can_free_eofblocks(struct xfs_inode *ip, bool force);
+-int xfs_free_eofblocks(struct xfs_mount *mp, struct xfs_inode *ip,
+- bool need_iolock);
++int xfs_free_eofblocks(struct xfs_inode *ip);
+
+ int xfs_swap_extents(struct xfs_inode *ip, struct xfs_inode *tip,
+ struct xfs_swapext *sx);
+diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c
+index 2975cb2319f4..0306168af332 100644
+--- a/fs/xfs/xfs_buf_item.c
++++ b/fs/xfs/xfs_buf_item.c
+@@ -1162,6 +1162,7 @@ xfs_buf_iodone_callbacks(
+ */
+ bp->b_last_error = 0;
+ bp->b_retries = 0;
++ bp->b_first_retry_time = 0;
+
+ xfs_buf_do_callbacks(bp);
+ bp->b_fspriv = NULL;
+diff --git a/fs/xfs/xfs_extent_busy.c b/fs/xfs/xfs_extent_busy.c
+index 162dc186cf04..29c2f997aedf 100644
+--- a/fs/xfs/xfs_extent_busy.c
++++ b/fs/xfs/xfs_extent_busy.c
+@@ -45,18 +45,7 @@ xfs_extent_busy_insert(
+ struct rb_node **rbp;
+ struct rb_node *parent = NULL;
+
+- new = kmem_zalloc(sizeof(struct xfs_extent_busy), KM_MAYFAIL);
+- if (!new) {
+- /*
+- * No Memory! Since it is now not possible to track the free
+- * block, make this a synchronous transaction to insure that
+- * the block is not reused before this transaction commits.
+- */
+- trace_xfs_extent_busy_enomem(tp->t_mountp, agno, bno, len);
+- xfs_trans_set_sync(tp);
+- return;
+- }
+-
++ new = kmem_zalloc(sizeof(struct xfs_extent_busy), KM_SLEEP);
+ new->agno = agno;
+ new->bno = bno;
+ new->length = len;
+diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
+index bbb9eb6811b2..2a695a8f4fe7 100644
+--- a/fs/xfs/xfs_file.c
++++ b/fs/xfs/xfs_file.c
+@@ -527,6 +527,15 @@ xfs_file_dio_aio_write(
+ if ((iocb->ki_pos & mp->m_blockmask) ||
+ ((iocb->ki_pos + count) & mp->m_blockmask)) {
+ unaligned_io = 1;
++
++ /*
++ * We can't properly handle unaligned direct I/O to reflink
++ * files yet, as we can't unshare a partial block.
++ */
++ if (xfs_is_reflink_inode(ip)) {
++ trace_xfs_reflink_bounce_dio_write(ip, iocb->ki_pos, count);
++ return -EREMCHG;
++ }
+ iolock = XFS_IOLOCK_EXCL;
+ } else {
+ iolock = XFS_IOLOCK_SHARED;
+@@ -614,8 +623,10 @@ xfs_file_buffered_aio_write(
+ struct xfs_inode *ip = XFS_I(inode);
+ ssize_t ret;
+ int enospc = 0;
+- int iolock = XFS_IOLOCK_EXCL;
++ int iolock;
+
++write_retry:
++ iolock = XFS_IOLOCK_EXCL;
+ xfs_ilock(ip, iolock);
+
+ ret = xfs_file_aio_write_checks(iocb, from, &iolock);
+@@ -625,7 +636,6 @@ xfs_file_buffered_aio_write(
+ /* We can write back this queue in page reclaim */
+ current->backing_dev_info = inode_to_bdi(inode);
+
+-write_retry:
+ trace_xfs_file_buffered_write(ip, iov_iter_count(from), iocb->ki_pos);
+ ret = iomap_file_buffered_write(iocb, from, &xfs_iomap_ops);
+ if (likely(ret >= 0))
+@@ -641,18 +651,21 @@ xfs_file_buffered_aio_write(
+ * running at the same time.
+ */
+ if (ret == -EDQUOT && !enospc) {
++ xfs_iunlock(ip, iolock);
+ enospc = xfs_inode_free_quota_eofblocks(ip);
+ if (enospc)
+ goto write_retry;
+ enospc = xfs_inode_free_quota_cowblocks(ip);
+ if (enospc)
+ goto write_retry;
++ iolock = 0;
+ } else if (ret == -ENOSPC && !enospc) {
+ struct xfs_eofblocks eofb = {0};
+
+ enospc = 1;
+ xfs_flush_inodes(ip->i_mount);
+- eofb.eof_scan_owner = ip->i_ino; /* for locking */
++
++ xfs_iunlock(ip, iolock);
+ eofb.eof_flags = XFS_EOF_FLAGS_SYNC;
+ xfs_icache_free_eofblocks(ip->i_mount, &eofb);
+ goto write_retry;
+@@ -660,7 +673,8 @@ xfs_file_buffered_aio_write(
+
+ current->backing_dev_info = NULL;
+ out:
+- xfs_iunlock(ip, iolock);
++ if (iolock)
++ xfs_iunlock(ip, iolock);
+ return ret;
+ }
+
+@@ -908,9 +922,9 @@ xfs_dir_open(
+ */
+ mode = xfs_ilock_data_map_shared(ip);
+ if (ip->i_d.di_nextents > 0)
+- xfs_dir3_data_readahead(ip, 0, -1);
++ error = xfs_dir3_data_readahead(ip, 0, -1);
+ xfs_iunlock(ip, mode);
+- return 0;
++ return error;
+ }
+
+ STATIC int
+diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
+index 70ca4f608321..3531f8f72fa5 100644
+--- a/fs/xfs/xfs_icache.c
++++ b/fs/xfs/xfs_icache.c
+@@ -1322,13 +1322,10 @@ xfs_inode_free_eofblocks(
+ int flags,
+ void *args)
+ {
+- int ret;
++ int ret = 0;
+ struct xfs_eofblocks *eofb = args;
+- bool need_iolock = true;
+ int match;
+
+- ASSERT(!eofb || (eofb && eofb->eof_scan_owner != 0));
+-
+ if (!xfs_can_free_eofblocks(ip, false)) {
+ /* inode could be preallocated or append-only */
+ trace_xfs_inode_free_eofblocks_invalid(ip);
+@@ -1356,21 +1353,19 @@ xfs_inode_free_eofblocks(
+ if (eofb->eof_flags & XFS_EOF_FLAGS_MINFILESIZE &&
+ XFS_ISIZE(ip) < eofb->eof_min_file_size)
+ return 0;
+-
+- /*
+- * A scan owner implies we already hold the iolock. Skip it in
+- * xfs_free_eofblocks() to avoid deadlock. This also eliminates
+- * the possibility of EAGAIN being returned.
+- */
+- if (eofb->eof_scan_owner == ip->i_ino)
+- need_iolock = false;
+ }
+
+- ret = xfs_free_eofblocks(ip->i_mount, ip, need_iolock);
+-
+- /* don't revisit the inode if we're not waiting */
+- if (ret == -EAGAIN && !(flags & SYNC_WAIT))
+- ret = 0;
++ /*
++ * If the caller is waiting, return -EAGAIN to keep the background
++ * scanner moving and revisit the inode in a subsequent pass.
++ */
++ if (!xfs_ilock_nowait(ip, XFS_IOLOCK_EXCL)) {
++ if (flags & SYNC_WAIT)
++ ret = -EAGAIN;
++ return ret;
++ }
++ ret = xfs_free_eofblocks(ip);
++ xfs_iunlock(ip, XFS_IOLOCK_EXCL);
+
+ return ret;
+ }
+@@ -1417,15 +1412,10 @@ __xfs_inode_free_quota_eofblocks(
+ struct xfs_eofblocks eofb = {0};
+ struct xfs_dquot *dq;
+
+- ASSERT(xfs_isilocked(ip, XFS_IOLOCK_EXCL));
+-
+ /*
+- * Set the scan owner to avoid a potential livelock. Otherwise, the scan
+- * can repeatedly trylock on the inode we're currently processing. We
+- * run a sync scan to increase effectiveness and use the union filter to
++ * Run a sync scan to increase effectiveness and use the union filter to
+ * cover all applicable quotas in a single scan.
+ */
+- eofb.eof_scan_owner = ip->i_ino;
+ eofb.eof_flags = XFS_EOF_FLAGS_UNION|XFS_EOF_FLAGS_SYNC;
+
+ if (XFS_IS_UQUOTA_ENFORCED(ip->i_mount)) {
+@@ -1577,12 +1567,9 @@ xfs_inode_free_cowblocks(
+ {
+ int ret;
+ struct xfs_eofblocks *eofb = args;
+- bool need_iolock = true;
+ int match;
+ struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, XFS_COW_FORK);
+
+- ASSERT(!eofb || (eofb && eofb->eof_scan_owner != 0));
+-
+ /*
+ * Just clear the tag if we have an empty cow fork or none at all. It's
+ * possible the inode was fully unshared since it was originally tagged.
+@@ -1615,28 +1602,16 @@ xfs_inode_free_cowblocks(
+ if (eofb->eof_flags & XFS_EOF_FLAGS_MINFILESIZE &&
+ XFS_ISIZE(ip) < eofb->eof_min_file_size)
+ return 0;
+-
+- /*
+- * A scan owner implies we already hold the iolock. Skip it in
+- * xfs_free_eofblocks() to avoid deadlock. This also eliminates
+- * the possibility of EAGAIN being returned.
+- */
+- if (eofb->eof_scan_owner == ip->i_ino)
+- need_iolock = false;
+ }
+
+ /* Free the CoW blocks */
+- if (need_iolock) {
+- xfs_ilock(ip, XFS_IOLOCK_EXCL);
+- xfs_ilock(ip, XFS_MMAPLOCK_EXCL);
+- }
++ xfs_ilock(ip, XFS_IOLOCK_EXCL);
++ xfs_ilock(ip, XFS_MMAPLOCK_EXCL);
+
+- ret = xfs_reflink_cancel_cow_range(ip, 0, NULLFILEOFF);
++ ret = xfs_reflink_cancel_cow_range(ip, 0, NULLFILEOFF, false);
+
+- if (need_iolock) {
+- xfs_iunlock(ip, XFS_MMAPLOCK_EXCL);
+- xfs_iunlock(ip, XFS_IOLOCK_EXCL);
+- }
++ xfs_iunlock(ip, XFS_MMAPLOCK_EXCL);
++ xfs_iunlock(ip, XFS_IOLOCK_EXCL);
+
+ return ret;
+ }
+diff --git a/fs/xfs/xfs_icache.h b/fs/xfs/xfs_icache.h
+index a1e02f4708ab..8a7c849b4dea 100644
+--- a/fs/xfs/xfs_icache.h
++++ b/fs/xfs/xfs_icache.h
+@@ -27,7 +27,6 @@ struct xfs_eofblocks {
+ kgid_t eof_gid;
+ prid_t eof_prid;
+ __u64 eof_min_file_size;
+- xfs_ino_t eof_scan_owner;
+ };
+
+ #define SYNC_WAIT 0x0001 /* wait for i/o to complete */
+@@ -102,7 +101,6 @@ xfs_fs_eofblocks_from_user(
+ dst->eof_flags = src->eof_flags;
+ dst->eof_prid = src->eof_prid;
+ dst->eof_min_file_size = src->eof_min_file_size;
+- dst->eof_scan_owner = NULLFSINO;
+
+ dst->eof_uid = INVALID_UID;
+ if (src->eof_flags & XFS_EOF_FLAGS_UID) {
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index de32f0fe47c8..7eaf1ef74e3c 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -1615,7 +1615,7 @@ xfs_itruncate_extents(
+
+ /* Remove all pending CoW reservations. */
+ error = xfs_reflink_cancel_cow_blocks(ip, &tp, first_unmap_block,
+- last_block);
++ last_block, true);
+ if (error)
+ goto out;
+
+@@ -1692,32 +1692,34 @@ xfs_release(
+ if (xfs_can_free_eofblocks(ip, false)) {
+
+ /*
++ * Check if the inode is being opened, written and closed
++ * frequently and we have delayed allocation blocks outstanding
++ * (e.g. streaming writes from the NFS server), truncating the
++ * blocks past EOF will cause fragmentation to occur.
++ *
++ * In this case don't do the truncation, but we have to be
++ * careful how we detect this case. Blocks beyond EOF show up as
++ * i_delayed_blks even when the inode is clean, so we need to
++ * truncate them away first before checking for a dirty release.
++ * Hence on the first dirty close we will still remove the
++ * speculative allocation, but after that we will leave it in
++ * place.
++ */
++ if (xfs_iflags_test(ip, XFS_IDIRTY_RELEASE))
++ return 0;
++ /*
+ * If we can't get the iolock just skip truncating the blocks
+ * past EOF because we could deadlock with the mmap_sem
+- * otherwise. We'll get another chance to drop them once the
++ * otherwise. We'll get another chance to drop them once the
+ * last reference to the inode is dropped, so we'll never leak
+ * blocks permanently.
+- *
+- * Further, check if the inode is being opened, written and
+- * closed frequently and we have delayed allocation blocks
+- * outstanding (e.g. streaming writes from the NFS server),
+- * truncating the blocks past EOF will cause fragmentation to
+- * occur.
+- *
+- * In this case don't do the truncation, either, but we have to
+- * be careful how we detect this case. Blocks beyond EOF show
+- * up as i_delayed_blks even when the inode is clean, so we
+- * need to truncate them away first before checking for a dirty
+- * release. Hence on the first dirty close we will still remove
+- * the speculative allocation, but after that we will leave it
+- * in place.
+ */
+- if (xfs_iflags_test(ip, XFS_IDIRTY_RELEASE))
+- return 0;
+-
+- error = xfs_free_eofblocks(mp, ip, true);
+- if (error && error != -EAGAIN)
+- return error;
++ if (xfs_ilock_nowait(ip, XFS_IOLOCK_EXCL)) {
++ error = xfs_free_eofblocks(ip);
++ xfs_iunlock(ip, XFS_IOLOCK_EXCL);
++ if (error)
++ return error;
++ }
+
+ /* delalloc blocks after truncation means it really is dirty */
+ if (ip->i_delayed_blks)
+@@ -1904,8 +1906,11 @@ xfs_inactive(
+ * cache. Post-eof blocks must be freed, lest we end up with
+ * broken free space accounting.
+ */
+- if (xfs_can_free_eofblocks(ip, true))
+- xfs_free_eofblocks(mp, ip, false);
++ if (xfs_can_free_eofblocks(ip, true)) {
++ xfs_ilock(ip, XFS_IOLOCK_EXCL);
++ xfs_free_eofblocks(ip);
++ xfs_iunlock(ip, XFS_IOLOCK_EXCL);
++ }
+
+ return;
+ }
+diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
+index fdecf79d2fa4..2326a6913fde 100644
+--- a/fs/xfs/xfs_iomap.c
++++ b/fs/xfs/xfs_iomap.c
+@@ -637,6 +637,11 @@ xfs_file_iomap_begin_delay(
+ goto out_unlock;
+ }
+
++ /*
++ * Flag newly allocated delalloc blocks with IOMAP_F_NEW so we punch
++ * them out if the write happens to fail.
++ */
++ iomap->flags = IOMAP_F_NEW;
+ trace_xfs_iomap_alloc(ip, offset, count, 0, &got);
+ done:
+ if (isnullstartblock(got.br_startblock))
+@@ -685,7 +690,7 @@ xfs_iomap_write_allocate(
+ int nres;
+
+ if (whichfork == XFS_COW_FORK)
+- flags |= XFS_BMAPI_COWFORK;
++ flags |= XFS_BMAPI_COWFORK | XFS_BMAPI_PREALLOC;
+
+ /*
+ * Make sure that the dquots are there.
+@@ -1026,17 +1031,7 @@ xfs_file_iomap_begin(
+ if (error)
+ goto out_unlock;
+
+- /*
+- * We're here because we're trying to do a directio write to a
+- * region that isn't aligned to a filesystem block. If the
+- * extent is shared, fall back to buffered mode to handle the
+- * RMW.
+- */
+- if (!(flags & IOMAP_REPORT) && shared) {
+- trace_xfs_reflink_bounce_dio_write(ip, &imap);
+- error = -EREMCHG;
+- goto out_unlock;
+- }
++ ASSERT((flags & IOMAP_REPORT) || !shared);
+ }
+
+ if ((flags & (IOMAP_WRITE | IOMAP_ZERO)) && xfs_is_reflink_inode(ip)) {
+@@ -1095,7 +1090,8 @@ xfs_file_iomap_end_delalloc(
+ struct xfs_inode *ip,
+ loff_t offset,
+ loff_t length,
+- ssize_t written)
++ ssize_t written,
++ struct iomap *iomap)
+ {
+ struct xfs_mount *mp = ip->i_mount;
+ xfs_fileoff_t start_fsb;
+@@ -1114,14 +1110,14 @@ xfs_file_iomap_end_delalloc(
+ end_fsb = XFS_B_TO_FSB(mp, offset + length);
+
+ /*
+- * Trim back delalloc blocks if we didn't manage to write the whole
+- * range reserved.
++ * Trim delalloc blocks if they were allocated by this write and we
++ * didn't manage to write the whole range.
+ *
+ * We don't need to care about racing delalloc as we hold i_mutex
+ * across the reserve/allocate/unreserve calls. If there are delalloc
+ * blocks in the range, they are ours.
+ */
+- if (start_fsb < end_fsb) {
++ if ((iomap->flags & IOMAP_F_NEW) && start_fsb < end_fsb) {
+ truncate_pagecache_range(VFS_I(ip), XFS_FSB_TO_B(mp, start_fsb),
+ XFS_FSB_TO_B(mp, end_fsb) - 1);
+
+@@ -1151,7 +1147,7 @@ xfs_file_iomap_end(
+ {
+ if ((flags & IOMAP_WRITE) && iomap->type == IOMAP_DELALLOC)
+ return xfs_file_iomap_end_delalloc(XFS_I(inode), offset,
+- length, written);
++ length, written, iomap);
+ return 0;
+ }
+
+diff --git a/fs/xfs/xfs_mount.c b/fs/xfs/xfs_mount.c
+index 9b9540db17a6..52d27cc4370a 100644
+--- a/fs/xfs/xfs_mount.c
++++ b/fs/xfs/xfs_mount.c
+@@ -187,7 +187,7 @@ xfs_initialize_perag(
+ xfs_agnumber_t *maxagi)
+ {
+ xfs_agnumber_t index;
+- xfs_agnumber_t first_initialised = 0;
++ xfs_agnumber_t first_initialised = NULLAGNUMBER;
+ xfs_perag_t *pag;
+ int error = -ENOMEM;
+
+@@ -202,22 +202,20 @@ xfs_initialize_perag(
+ xfs_perag_put(pag);
+ continue;
+ }
+- if (!first_initialised)
+- first_initialised = index;
+
+ pag = kmem_zalloc(sizeof(*pag), KM_MAYFAIL);
+ if (!pag)
+- goto out_unwind;
++ goto out_unwind_new_pags;
+ pag->pag_agno = index;
+ pag->pag_mount = mp;
+ spin_lock_init(&pag->pag_ici_lock);
+ mutex_init(&pag->pag_ici_reclaim_lock);
+ INIT_RADIX_TREE(&pag->pag_ici_root, GFP_ATOMIC);
+ if (xfs_buf_hash_init(pag))
+- goto out_unwind;
++ goto out_free_pag;
+
+ if (radix_tree_preload(GFP_NOFS))
+- goto out_unwind;
++ goto out_hash_destroy;
+
+ spin_lock(&mp->m_perag_lock);
+ if (radix_tree_insert(&mp->m_perag_tree, index, pag)) {
+@@ -225,10 +223,13 @@ xfs_initialize_perag(
+ spin_unlock(&mp->m_perag_lock);
+ radix_tree_preload_end();
+ error = -EEXIST;
+- goto out_unwind;
++ goto out_hash_destroy;
+ }
+ spin_unlock(&mp->m_perag_lock);
+ radix_tree_preload_end();
++ /* first new pag is fully initialized */
++ if (first_initialised == NULLAGNUMBER)
++ first_initialised = index;
+ }
+
+ index = xfs_set_inode_alloc(mp, agcount);
+@@ -239,11 +240,16 @@ xfs_initialize_perag(
+ mp->m_ag_prealloc_blocks = xfs_prealloc_blocks(mp);
+ return 0;
+
+-out_unwind:
++out_hash_destroy:
+ xfs_buf_hash_destroy(pag);
++out_free_pag:
+ kmem_free(pag);
+- for (; index > first_initialised; index--) {
++out_unwind_new_pags:
++ /* unwind any prior newly initialized pags */
++ for (index = first_initialised; index < agcount; index++) {
+ pag = radix_tree_delete(&mp->m_perag_tree, index);
++ if (!pag)
++ break;
+ xfs_buf_hash_destroy(pag);
+ kmem_free(pag);
+ }
+@@ -505,8 +511,7 @@ STATIC void
+ xfs_set_inoalignment(xfs_mount_t *mp)
+ {
+ if (xfs_sb_version_hasalign(&mp->m_sb) &&
+- mp->m_sb.sb_inoalignmt >=
+- XFS_B_TO_FSBT(mp, mp->m_inode_cluster_size))
++ mp->m_sb.sb_inoalignmt >= xfs_icluster_size_fsb(mp))
+ mp->m_inoalign_mask = mp->m_sb.sb_inoalignmt - 1;
+ else
+ mp->m_inoalign_mask = 0;
+diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
+index 07593a362cd0..a72cd2e3c048 100644
+--- a/fs/xfs/xfs_reflink.c
++++ b/fs/xfs/xfs_reflink.c
+@@ -82,11 +82,22 @@
+ * mappings are a reservation against the free space in the filesystem;
+ * adjacent mappings can also be combined into fewer larger mappings.
+ *
++ * As an optimization, the CoW extent size hint (cowextsz) creates
++ * outsized aligned delalloc reservations in the hope of landing out of
++ * order nearby CoW writes in a single extent on disk, thereby reducing
++ * fragmentation and improving future performance.
++ *
++ * D: --RRRRRRSSSRRRRRRRR--- (data fork)
++ * C: ------DDDDDDD--------- (CoW fork)
++ *
+ * When dirty pages are being written out (typically in writepage), the
+- * delalloc reservations are converted into real mappings by allocating
+- * blocks and replacing the delalloc mapping with real ones. A delalloc
+- * mapping can be replaced by several real ones if the free space is
+- * fragmented.
++ * delalloc reservations are converted into unwritten mappings by
++ * allocating blocks and replacing the delalloc mapping with real ones.
++ * A delalloc mapping can be replaced by several unwritten ones if the
++ * free space is fragmented.
++ *
++ * D: --RRRRRRSSSRRRRRRRR---
++ * C: ------UUUUUUU---------
+ *
+ * We want to adapt the delalloc mechanism for copy-on-write, since the
+ * write paths are similar. The first two steps (creating the reservation
+@@ -101,13 +112,29 @@
+ * Block-aligned directio writes will use the same mechanism as buffered
+ * writes.
+ *
++ * Just prior to submitting the actual disk write requests, we convert
++ * the extents representing the range of the file actually being written
++ * (as opposed to extra pieces created for the cowextsize hint) to real
++ * extents. This will become important in the next step:
++ *
++ * D: --RRRRRRSSSRRRRRRRR---
++ * C: ------UUrrUUU---------
++ *
+ * CoW remapping must be done after the data block write completes,
+ * because we don't want to destroy the old data fork map until we're sure
+ * the new block has been written. Since the new mappings are kept in a
+ * separate fork, we can simply iterate these mappings to find the ones
+ * that cover the file blocks that we just CoW'd. For each extent, simply
+ * unmap the corresponding range in the data fork, map the new range into
+- * the data fork, and remove the extent from the CoW fork.
++ * the data fork, and remove the extent from the CoW fork. Because of
++ * the presence of the cowextsize hint, however, we must be careful
++ * only to remap the blocks that we've actually written out -- we must
++ * never remap delalloc reservations nor CoW staging blocks that have
++ * yet to be written. This corresponds exactly to the real extents in
++ * the CoW fork:
++ *
++ * D: --RRRRRRrrSRRRRRRRR---
++ * C: ------UU--UUU---------
+ *
+ * Since the remapping operation can be applied to an arbitrary file
+ * range, we record the need for the remap step as a flag in the ioend
+@@ -296,6 +323,65 @@ xfs_reflink_reserve_cow(
+ return 0;
+ }
+
++/* Convert part of an unwritten CoW extent to a real one. */
++STATIC int
++xfs_reflink_convert_cow_extent(
++ struct xfs_inode *ip,
++ struct xfs_bmbt_irec *imap,
++ xfs_fileoff_t offset_fsb,
++ xfs_filblks_t count_fsb,
++ struct xfs_defer_ops *dfops)
++{
++ struct xfs_bmbt_irec irec = *imap;
++ xfs_fsblock_t first_block;
++ int nimaps = 1;
++
++ if (imap->br_state == XFS_EXT_NORM)
++ return 0;
++
++ xfs_trim_extent(&irec, offset_fsb, count_fsb);
++ trace_xfs_reflink_convert_cow(ip, &irec);
++ if (irec.br_blockcount == 0)
++ return 0;
++ return xfs_bmapi_write(NULL, ip, irec.br_startoff, irec.br_blockcount,
++ XFS_BMAPI_COWFORK | XFS_BMAPI_CONVERT, &first_block,
++ 0, &irec, &nimaps, dfops);
++}
++
++/* Convert all of the unwritten CoW extents in a file's range to real ones. */
++int
++xfs_reflink_convert_cow(
++ struct xfs_inode *ip,
++ xfs_off_t offset,
++ xfs_off_t count)
++{
++ struct xfs_bmbt_irec got;
++ struct xfs_defer_ops dfops;
++ struct xfs_mount *mp = ip->i_mount;
++ struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, XFS_COW_FORK);
++ xfs_fileoff_t offset_fsb = XFS_B_TO_FSBT(mp, offset);
++ xfs_fileoff_t end_fsb = XFS_B_TO_FSB(mp, offset + count);
++ xfs_extnum_t idx;
++ bool found;
++ int error = 0;
++
++ xfs_ilock(ip, XFS_ILOCK_EXCL);
++
++ /* Convert all the extents to real from unwritten. */
++ for (found = xfs_iext_lookup_extent(ip, ifp, offset_fsb, &idx, &got);
++ found && got.br_startoff < end_fsb;
++ found = xfs_iext_get_extent(ifp, ++idx, &got)) {
++ error = xfs_reflink_convert_cow_extent(ip, &got, offset_fsb,
++ end_fsb - offset_fsb, &dfops);
++ if (error)
++ break;
++ }
++
++ /* Finish up. */
++ xfs_iunlock(ip, XFS_ILOCK_EXCL);
++ return error;
++}
++
+ /* Allocate all CoW reservations covering a range of blocks in a file. */
+ static int
+ __xfs_reflink_allocate_cow(
+@@ -328,6 +414,7 @@ __xfs_reflink_allocate_cow(
+ goto out_unlock;
+ ASSERT(nimaps == 1);
+
++ /* Make sure there's a CoW reservation for it. */
+ error = xfs_reflink_reserve_cow(ip, &imap, &shared);
+ if (error)
+ goto out_trans_cancel;
+@@ -337,14 +424,16 @@ __xfs_reflink_allocate_cow(
+ goto out_trans_cancel;
+ }
+
++ /* Allocate the entire reservation as unwritten blocks. */
+ xfs_trans_ijoin(tp, ip, 0);
+ error = xfs_bmapi_write(tp, ip, imap.br_startoff, imap.br_blockcount,
+- XFS_BMAPI_COWFORK, &first_block,
++ XFS_BMAPI_COWFORK | XFS_BMAPI_PREALLOC, &first_block,
+ XFS_EXTENTADD_SPACE_RES(mp, XFS_DATA_FORK),
+ &imap, &nimaps, &dfops);
+ if (error)
+ goto out_trans_cancel;
+
++ /* Finish up. */
+ error = xfs_defer_finish(&tp, &dfops, NULL);
+ if (error)
+ goto out_trans_cancel;
+@@ -389,11 +478,12 @@ xfs_reflink_allocate_cow_range(
+ if (error) {
+ trace_xfs_reflink_allocate_cow_range_error(ip, error,
+ _RET_IP_);
+- break;
++ return error;
+ }
+ }
+
+- return error;
++ /* Convert the CoW extents to regular. */
++ return xfs_reflink_convert_cow(ip, offset, count);
+ }
+
+ /*
+@@ -459,14 +549,18 @@ xfs_reflink_trim_irec_to_next_cow(
+ }
+
+ /*
+- * Cancel all pending CoW reservations for some block range of an inode.
++ * Cancel CoW reservations for some block range of an inode.
++ *
++ * If cancel_real is true this function cancels all COW fork extents for the
++ * inode; if cancel_real is false, real extents are not cleared.
+ */
+ int
+ xfs_reflink_cancel_cow_blocks(
+ struct xfs_inode *ip,
+ struct xfs_trans **tpp,
+ xfs_fileoff_t offset_fsb,
+- xfs_fileoff_t end_fsb)
++ xfs_fileoff_t end_fsb,
++ bool cancel_real)
+ {
+ struct xfs_ifork *ifp = XFS_IFORK_PTR(ip, XFS_COW_FORK);
+ struct xfs_bmbt_irec got, del;
+@@ -490,7 +584,7 @@ xfs_reflink_cancel_cow_blocks(
+ &idx, &got, &del);
+ if (error)
+ break;
+- } else {
++ } else if (del.br_state == XFS_EXT_UNWRITTEN || cancel_real) {
+ xfs_trans_ijoin(*tpp, ip, 0);
+ xfs_defer_init(&dfops, &firstfsb);
+
+@@ -532,13 +626,17 @@ xfs_reflink_cancel_cow_blocks(
+ }
+
+ /*
+- * Cancel all pending CoW reservations for some byte range of an inode.
++ * Cancel CoW reservations for some byte range of an inode.
++ *
++ * If cancel_real is true this function cancels all COW fork extents for the
++ * inode; if cancel_real is false, real extents are not cleared.
+ */
+ int
+ xfs_reflink_cancel_cow_range(
+ struct xfs_inode *ip,
+ xfs_off_t offset,
+- xfs_off_t count)
++ xfs_off_t count,
++ bool cancel_real)
+ {
+ struct xfs_trans *tp;
+ xfs_fileoff_t offset_fsb;
+@@ -564,7 +662,8 @@ xfs_reflink_cancel_cow_range(
+ xfs_trans_ijoin(tp, ip, 0);
+
+ /* Scrape out the old CoW reservations */
+- error = xfs_reflink_cancel_cow_blocks(ip, &tp, offset_fsb, end_fsb);
++ error = xfs_reflink_cancel_cow_blocks(ip, &tp, offset_fsb, end_fsb,
++ cancel_real);
+ if (error)
+ goto out_cancel;
+
+@@ -641,6 +740,16 @@ xfs_reflink_end_cow(
+
+ ASSERT(!isnullstartblock(got.br_startblock));
+
++ /*
++ * Don't remap unwritten extents; these are
++ * speculatively preallocated CoW extents that have been
++ * allocated but have not yet been involved in a write.
++ */
++ if (got.br_state == XFS_EXT_UNWRITTEN) {
++ idx--;
++ goto next_extent;
++ }
++
+ /* Unmap the old blocks in the data fork. */
+ xfs_defer_init(&dfops, &firstfsb);
+ rlen = del.br_blockcount;
+@@ -855,13 +964,14 @@ STATIC int
+ xfs_reflink_update_dest(
+ struct xfs_inode *dest,
+ xfs_off_t newlen,
+- xfs_extlen_t cowextsize)
++ xfs_extlen_t cowextsize,
++ bool is_dedupe)
+ {
+ struct xfs_mount *mp = dest->i_mount;
+ struct xfs_trans *tp;
+ int error;
+
+- if (newlen <= i_size_read(VFS_I(dest)) && cowextsize == 0)
++ if (is_dedupe && newlen <= i_size_read(VFS_I(dest)) && cowextsize == 0)
+ return 0;
+
+ error = xfs_trans_alloc(mp, &M_RES(mp)->tr_ichange, 0, 0, 0, &tp);
+@@ -882,6 +992,10 @@ xfs_reflink_update_dest(
+ dest->i_d.di_flags2 |= XFS_DIFLAG2_COWEXTSIZE;
+ }
+
++ if (!is_dedupe) {
++ xfs_trans_ichgtime(tp, dest,
++ XFS_ICHGTIME_MOD | XFS_ICHGTIME_CHG);
++ }
+ xfs_trans_log_inode(tp, dest, XFS_ILOG_CORE);
+
+ error = xfs_trans_commit(tp);
+@@ -1195,7 +1309,8 @@ xfs_reflink_remap_range(
+ !(dest->i_d.di_flags2 & XFS_DIFLAG2_COWEXTSIZE))
+ cowextsize = src->i_d.di_cowextsize;
+
+- ret = xfs_reflink_update_dest(dest, pos_out + len, cowextsize);
++ ret = xfs_reflink_update_dest(dest, pos_out + len, cowextsize,
++ is_dedupe);
+
+ out_unlock:
+ xfs_iunlock(src, XFS_MMAPLOCK_EXCL);
+@@ -1345,7 +1460,7 @@ xfs_reflink_clear_inode_flag(
+ * We didn't find any shared blocks so turn off the reflink flag.
+ * First, get rid of any leftover CoW mappings.
+ */
+- error = xfs_reflink_cancel_cow_blocks(ip, tpp, 0, NULLFILEOFF);
++ error = xfs_reflink_cancel_cow_blocks(ip, tpp, 0, NULLFILEOFF, true);
+ if (error)
+ return error;
+
+diff --git a/fs/xfs/xfs_reflink.h b/fs/xfs/xfs_reflink.h
+index aa6a4d64bd35..b715bacb2ea2 100644
+--- a/fs/xfs/xfs_reflink.h
++++ b/fs/xfs/xfs_reflink.h
+@@ -30,6 +30,8 @@ extern int xfs_reflink_reserve_cow(struct xfs_inode *ip,
+ struct xfs_bmbt_irec *imap, bool *shared);
+ extern int xfs_reflink_allocate_cow_range(struct xfs_inode *ip,
+ xfs_off_t offset, xfs_off_t count);
++extern int xfs_reflink_convert_cow(struct xfs_inode *ip, xfs_off_t offset,
++ xfs_off_t count);
+ extern bool xfs_reflink_find_cow_mapping(struct xfs_inode *ip, xfs_off_t offset,
+ struct xfs_bmbt_irec *imap);
+ extern void xfs_reflink_trim_irec_to_next_cow(struct xfs_inode *ip,
+@@ -37,9 +39,9 @@ extern void xfs_reflink_trim_irec_to_next_cow(struct xfs_inode *ip,
+
+ extern int xfs_reflink_cancel_cow_blocks(struct xfs_inode *ip,
+ struct xfs_trans **tpp, xfs_fileoff_t offset_fsb,
+- xfs_fileoff_t end_fsb);
++ xfs_fileoff_t end_fsb, bool cancel_real);
+ extern int xfs_reflink_cancel_cow_range(struct xfs_inode *ip, xfs_off_t offset,
+- xfs_off_t count);
++ xfs_off_t count, bool cancel_real);
+ extern int xfs_reflink_end_cow(struct xfs_inode *ip, xfs_off_t offset,
+ xfs_off_t count);
+ extern int xfs_reflink_recover_cow(struct xfs_mount *mp);
+diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
+index eecbaac08eba..d80187b0e726 100644
+--- a/fs/xfs/xfs_super.c
++++ b/fs/xfs/xfs_super.c
+@@ -953,7 +953,7 @@ xfs_fs_destroy_inode(
+ XFS_STATS_INC(ip->i_mount, vn_remove);
+
+ if (xfs_is_reflink_inode(ip)) {
+- error = xfs_reflink_cancel_cow_range(ip, 0, NULLFILEOFF);
++ error = xfs_reflink_cancel_cow_range(ip, 0, NULLFILEOFF, true);
+ if (error && !XFS_FORCED_SHUTDOWN(ip->i_mount))
+ xfs_warn(ip->i_mount,
+ "Error %d while evicting CoW blocks for inode %llu.",
+diff --git a/fs/xfs/xfs_trace.h b/fs/xfs/xfs_trace.h
+index 69c5bcd9a51b..375c5e030e5b 100644
+--- a/fs/xfs/xfs_trace.h
++++ b/fs/xfs/xfs_trace.h
+@@ -3089,6 +3089,7 @@ DECLARE_EVENT_CLASS(xfs_inode_irec_class,
+ __field(xfs_fileoff_t, lblk)
+ __field(xfs_extlen_t, len)
+ __field(xfs_fsblock_t, pblk)
++ __field(int, state)
+ ),
+ TP_fast_assign(
+ __entry->dev = VFS_I(ip)->i_sb->s_dev;
+@@ -3096,13 +3097,15 @@ DECLARE_EVENT_CLASS(xfs_inode_irec_class,
+ __entry->lblk = irec->br_startoff;
+ __entry->len = irec->br_blockcount;
+ __entry->pblk = irec->br_startblock;
++ __entry->state = irec->br_state;
+ ),
+- TP_printk("dev %d:%d ino 0x%llx lblk 0x%llx len 0x%x pblk %llu",
++ TP_printk("dev %d:%d ino 0x%llx lblk 0x%llx len 0x%x pblk %llu st %d",
+ MAJOR(__entry->dev), MINOR(__entry->dev),
+ __entry->ino,
+ __entry->lblk,
+ __entry->len,
+- __entry->pblk)
++ __entry->pblk,
++ __entry->state)
+ );
+ #define DEFINE_INODE_IREC_EVENT(name) \
+ DEFINE_EVENT(xfs_inode_irec_class, name, \
+@@ -3242,11 +3245,12 @@ DEFINE_INODE_IREC_EVENT(xfs_reflink_trim_around_shared);
+ DEFINE_INODE_IREC_EVENT(xfs_reflink_cow_alloc);
+ DEFINE_INODE_IREC_EVENT(xfs_reflink_cow_found);
+ DEFINE_INODE_IREC_EVENT(xfs_reflink_cow_enospc);
++DEFINE_INODE_IREC_EVENT(xfs_reflink_convert_cow);
+
+ DEFINE_RW_EVENT(xfs_reflink_reserve_cow);
+ DEFINE_RW_EVENT(xfs_reflink_allocate_cow_range);
+
+-DEFINE_INODE_IREC_EVENT(xfs_reflink_bounce_dio_write);
++DEFINE_SIMPLE_IO_EVENT(xfs_reflink_bounce_dio_write);
+ DEFINE_IOMAP_EVENT(xfs_reflink_find_cow_mapping);
+ DEFINE_INODE_IREC_EVENT(xfs_reflink_trim_irec);
+
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 1c5190dab2c1..e3d146dadceb 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -162,8 +162,8 @@ int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
+ int len, void *val);
+ int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
+ int len, struct kvm_io_device *dev);
+-int kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+- struct kvm_io_device *dev);
++void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
++ struct kvm_io_device *dev);
+ struct kvm_io_device *kvm_io_bus_get_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ gpa_t addr);
+
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index 254698856b8f..8b35bdbdc214 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -739,6 +739,12 @@ static inline bool mem_cgroup_oom_synchronize(bool wait)
+ return false;
+ }
+
++static inline void mem_cgroup_update_page_stat(struct page *page,
++ enum mem_cgroup_stat_index idx,
++ int nr)
++{
++}
++
+ static inline void mem_cgroup_inc_page_stat(struct page *page,
+ enum mem_cgroup_stat_index idx)
+ {
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 05316c9f32da..3202aa17492c 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -186,19 +186,20 @@ static struct padata_priv *padata_get_next(struct parallel_data *pd)
+
+ reorder = &next_queue->reorder;
+
++ spin_lock(&reorder->lock);
+ if (!list_empty(&reorder->list)) {
+ padata = list_entry(reorder->list.next,
+ struct padata_priv, list);
+
+- spin_lock(&reorder->lock);
+ list_del_init(&padata->list);
+ atomic_dec(&pd->reorder_objects);
+- spin_unlock(&reorder->lock);
+
+ pd->processed++;
+
++ spin_unlock(&reorder->lock);
+ goto out;
+ }
++ spin_unlock(&reorder->lock);
+
+ if (__this_cpu_read(pd->pqueue->cpu_index) == next_queue->cpu_index) {
+ padata = ERR_PTR(-ENODATA);
+diff --git a/lib/syscall.c b/lib/syscall.c
+index 63239e097b13..a72cd0996230 100644
+--- a/lib/syscall.c
++++ b/lib/syscall.c
+@@ -11,6 +11,7 @@ static int collect_syscall(struct task_struct *target, long *callno,
+
+ if (!try_get_task_stack(target)) {
+ /* Task has no stack, so the task isn't in a syscall. */
++ *sp = *pc = 0;
+ *callno = -1;
+ return 0;
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index c7025c132670..968b547f3b90 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4474,6 +4474,7 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+ {
+ struct page *page = NULL;
+ spinlock_t *ptl;
++ pte_t pte;
+ retry:
+ ptl = pmd_lockptr(mm, pmd);
+ spin_lock(ptl);
+@@ -4483,12 +4484,13 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
+ */
+ if (!pmd_huge(*pmd))
+ goto out;
+- if (pmd_present(*pmd)) {
++ pte = huge_ptep_get((pte_t *)pmd);
++ if (pte_present(pte)) {
+ page = pmd_page(*pmd) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
+ if (flags & FOLL_GET)
+ get_page(page);
+ } else {
+- if (is_hugetlb_entry_migration(huge_ptep_get((pte_t *)pmd))) {
++ if (is_hugetlb_entry_migration(pte)) {
+ spin_unlock(ptl);
+ __migration_entry_wait(mm, (pte_t *)pmd, ptl);
+ goto retry;
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 91619fd70939..a40d990eede0 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -1294,7 +1294,7 @@ void page_add_file_rmap(struct page *page, bool compound)
+ goto out;
+ }
+ __mod_node_page_state(page_pgdat(page), NR_FILE_MAPPED, nr);
+- mem_cgroup_inc_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
++ mem_cgroup_update_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED, nr);
+ out:
+ unlock_page_memcg(page);
+ }
+@@ -1334,7 +1334,7 @@ static void page_remove_file_rmap(struct page *page, bool compound)
+ * pte lock(a spinlock) is held, which implies preemption disabled.
+ */
+ __mod_node_page_state(page_pgdat(page), NR_FILE_MAPPED, -nr);
+- mem_cgroup_dec_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED);
++ mem_cgroup_update_page_stat(page, MEM_CGROUP_STAT_FILE_MAPPED, -nr);
+
+ if (unlikely(PageMlocked(page)))
+ clear_page_mlock(page);
+diff --git a/mm/workingset.c b/mm/workingset.c
+index a67f5796b995..dda16cf9599f 100644
+--- a/mm/workingset.c
++++ b/mm/workingset.c
+@@ -533,7 +533,7 @@ static int __init workingset_init(void)
+ pr_info("workingset: timestamp_bits=%d max_order=%d bucket_order=%u\n",
+ timestamp_bits, max_order, bucket_order);
+
+- ret = list_lru_init_key(&shadow_nodes, &shadow_nodes_key);
++ ret = __list_lru_init(&shadow_nodes, true, &shadow_nodes_key);
+ if (ret)
+ goto err;
+ ret = register_shrinker(&workingset_shadow_shrinker);
+diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
+index 770c52701efa..140b067d5d57 100644
+--- a/net/ceph/messenger.c
++++ b/net/ceph/messenger.c
+@@ -7,6 +7,7 @@
+ #include <linux/kthread.h>
+ #include <linux/net.h>
+ #include <linux/nsproxy.h>
++#include <linux/sched.h>
+ #include <linux/slab.h>
+ #include <linux/socket.h>
+ #include <linux/string.h>
+@@ -469,11 +470,16 @@ static int ceph_tcp_connect(struct ceph_connection *con)
+ {
+ struct sockaddr_storage *paddr = &con->peer_addr.in_addr;
+ struct socket *sock;
++ unsigned int noio_flag;
+ int ret;
+
+ BUG_ON(con->sock);
++
++ /* sock_create_kern() allocates with GFP_KERNEL */
++ noio_flag = memalloc_noio_save();
+ ret = sock_create_kern(read_pnet(&con->msgr->net), paddr->ss_family,
+ SOCK_STREAM, IPPROTO_TCP, &sock);
++ memalloc_noio_restore(noio_flag);
+ if (ret)
+ return ret;
+ sock->sk->sk_allocation = GFP_NOFS;
+diff --git a/sound/core/seq/seq_fifo.c b/sound/core/seq/seq_fifo.c
+index 3f4efcb85df5..3490d21ab9e7 100644
+--- a/sound/core/seq/seq_fifo.c
++++ b/sound/core/seq/seq_fifo.c
+@@ -265,6 +265,10 @@ int snd_seq_fifo_resize(struct snd_seq_fifo *f, int poolsize)
+ /* NOTE: overflow flag is not cleared */
+ spin_unlock_irqrestore(&f->lock, flags);
+
++ /* close the old pool and wait until all users are gone */
++ snd_seq_pool_mark_closing(oldpool);
++ snd_use_lock_sync(&f->use_lock);
++
+ /* release cells in old pool */
+ for (cell = oldhead; cell; cell = next) {
+ next = cell->next;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index c813ad857650..152c7ed65254 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4846,6 +4846,7 @@ enum {
+ ALC292_FIXUP_DISABLE_AAMIX,
+ ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK,
+ ALC298_FIXUP_DELL1_MIC_NO_PRESENCE,
++ ALC298_FIXUP_DELL_AIO_MIC_NO_PRESENCE,
+ ALC275_FIXUP_DELL_XPS,
+ ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE,
+ ALC293_FIXUP_LENOVO_SPK_NOISE,
+@@ -5446,6 +5447,15 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_HEADSET_MODE
+ },
++ [ALC298_FIXUP_DELL_AIO_MIC_NO_PRESENCE] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x18, 0x01a1913c }, /* use as headset mic, without its own jack detect */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC269_FIXUP_HEADSET_MODE
++ },
+ [ALC275_FIXUP_DELL_XPS] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+@@ -5518,7 +5528,7 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc298_fixup_speaker_volume,
+ .chained = true,
+- .chain_id = ALC298_FIXUP_DELL1_MIC_NO_PRESENCE,
++ .chain_id = ALC298_FIXUP_DELL_AIO_MIC_NO_PRESENCE,
+ },
+ [ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER] = {
+ .type = HDA_FIXUP_PINS,
+diff --git a/sound/soc/atmel/atmel-classd.c b/sound/soc/atmel/atmel-classd.c
+index 89ac5f5a93eb..7ae46c2647d4 100644
+--- a/sound/soc/atmel/atmel-classd.c
++++ b/sound/soc/atmel/atmel-classd.c
+@@ -349,7 +349,7 @@ static int atmel_classd_codec_dai_digital_mute(struct snd_soc_dai *codec_dai,
+ }
+
+ #define CLASSD_ACLK_RATE_11M2896_MPY_8 (112896 * 100 * 8)
+-#define CLASSD_ACLK_RATE_12M288_MPY_8 (12228 * 1000 * 8)
++#define CLASSD_ACLK_RATE_12M288_MPY_8 (12288 * 1000 * 8)
+
+ static struct {
+ int rate;
+diff --git a/sound/soc/codecs/rt5665.c b/sound/soc/codecs/rt5665.c
+index 324461e985b3..fe2cf1ed8237 100644
+--- a/sound/soc/codecs/rt5665.c
++++ b/sound/soc/codecs/rt5665.c
+@@ -1241,7 +1241,7 @@ static irqreturn_t rt5665_irq(int irq, void *data)
+ static void rt5665_jd_check_handler(struct work_struct *work)
+ {
+ struct rt5665_priv *rt5665 = container_of(work, struct rt5665_priv,
+- calibrate_work.work);
++ jd_check_work.work);
+
+ if (snd_soc_read(rt5665->codec, RT5665_AJD1_CTRL) & 0x0010) {
+ /* jack out */
+diff --git a/sound/soc/intel/skylake/skl-topology.c b/sound/soc/intel/skylake/skl-topology.c
+index bd313c907b20..172d7db1653c 100644
+--- a/sound/soc/intel/skylake/skl-topology.c
++++ b/sound/soc/intel/skylake/skl-topology.c
+@@ -486,7 +486,7 @@ static int skl_tplg_set_module_init_data(struct snd_soc_dapm_widget *w)
+ if (bc->set_params != SKL_PARAM_INIT)
+ continue;
+
+- mconfig->formats_config.caps = (u32 *)&bc->params;
++ mconfig->formats_config.caps = (u32 *)bc->params;
+ mconfig->formats_config.caps_size = bc->size;
+
+ break;
+diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
+index a29786dd9522..4d28a9ddbee0 100644
+--- a/virt/kvm/eventfd.c
++++ b/virt/kvm/eventfd.c
+@@ -870,7 +870,8 @@ kvm_deassign_ioeventfd_idx(struct kvm *kvm, enum kvm_bus bus_idx,
+ continue;
+
+ kvm_io_bus_unregister_dev(kvm, bus_idx, &p->dev);
+- kvm->buses[bus_idx]->ioeventfd_count--;
++ if (kvm->buses[bus_idx])
++ kvm->buses[bus_idx]->ioeventfd_count--;
+ ioeventfd_release(p);
+ ret = 0;
+ break;
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 482612b4e496..da5db473afb0 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -723,8 +723,11 @@ static void kvm_destroy_vm(struct kvm *kvm)
+ list_del(&kvm->vm_list);
+ spin_unlock(&kvm_lock);
+ kvm_free_irq_routing(kvm);
+- for (i = 0; i < KVM_NR_BUSES; i++)
+- kvm_io_bus_destroy(kvm->buses[i]);
++ for (i = 0; i < KVM_NR_BUSES; i++) {
++ if (kvm->buses[i])
++ kvm_io_bus_destroy(kvm->buses[i]);
++ kvm->buses[i] = NULL;
++ }
+ kvm_coalesced_mmio_free(kvm);
+ #if defined(CONFIG_MMU_NOTIFIER) && defined(KVM_ARCH_WANT_MMU_NOTIFIER)
+ mmu_notifier_unregister(&kvm->mmu_notifier, kvm->mm);
+@@ -3473,6 +3476,8 @@ int kvm_io_bus_write(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
+ };
+
+ bus = srcu_dereference(vcpu->kvm->buses[bus_idx], &vcpu->kvm->srcu);
++ if (!bus)
++ return -ENOMEM;
+ r = __kvm_io_bus_write(vcpu, bus, &range, val);
+ return r < 0 ? r : 0;
+ }
+@@ -3490,6 +3495,8 @@ int kvm_io_bus_write_cookie(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx,
+ };
+
+ bus = srcu_dereference(vcpu->kvm->buses[bus_idx], &vcpu->kvm->srcu);
++ if (!bus)
++ return -ENOMEM;
+
+ /* First try the device referenced by cookie. */
+ if ((cookie >= 0) && (cookie < bus->dev_count) &&
+@@ -3540,6 +3547,8 @@ int kvm_io_bus_read(struct kvm_vcpu *vcpu, enum kvm_bus bus_idx, gpa_t addr,
+ };
+
+ bus = srcu_dereference(vcpu->kvm->buses[bus_idx], &vcpu->kvm->srcu);
++ if (!bus)
++ return -ENOMEM;
+ r = __kvm_io_bus_read(vcpu, bus, &range, val);
+ return r < 0 ? r : 0;
+ }
+@@ -3552,6 +3561,9 @@ int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
+ struct kvm_io_bus *new_bus, *bus;
+
+ bus = kvm->buses[bus_idx];
++ if (!bus)
++ return -ENOMEM;
++
+ /* exclude ioeventfd which is limited by maximum fd */
+ if (bus->dev_count - bus->ioeventfd_count > NR_IOBUS_DEVS - 1)
+ return -ENOSPC;
+@@ -3571,37 +3583,41 @@ int kvm_io_bus_register_dev(struct kvm *kvm, enum kvm_bus bus_idx, gpa_t addr,
+ }
+
+ /* Caller must hold slots_lock. */
+-int kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+- struct kvm_io_device *dev)
++void kvm_io_bus_unregister_dev(struct kvm *kvm, enum kvm_bus bus_idx,
++ struct kvm_io_device *dev)
+ {
+- int i, r;
++ int i;
+ struct kvm_io_bus *new_bus, *bus;
+
+ bus = kvm->buses[bus_idx];
+- r = -ENOENT;
++ if (!bus)
++ return;
++
+ for (i = 0; i < bus->dev_count; i++)
+ if (bus->range[i].dev == dev) {
+- r = 0;
+ break;
+ }
+
+- if (r)
+- return r;
++ if (i == bus->dev_count)
++ return;
+
+ new_bus = kmalloc(sizeof(*bus) + ((bus->dev_count - 1) *
+ sizeof(struct kvm_io_range)), GFP_KERNEL);
+- if (!new_bus)
+- return -ENOMEM;
++ if (!new_bus) {
++ pr_err("kvm: failed to shrink bus, removing it completely\n");
++ goto broken;
++ }
+
+ memcpy(new_bus, bus, sizeof(*bus) + i * sizeof(struct kvm_io_range));
+ new_bus->dev_count--;
+ memcpy(new_bus->range + i, bus->range + i + 1,
+ (new_bus->dev_count - i) * sizeof(struct kvm_io_range));
+
++broken:
+ rcu_assign_pointer(kvm->buses[bus_idx], new_bus);
+ synchronize_srcu_expedited(&kvm->srcu);
+ kfree(bus);
+- return r;
++ return;
+ }
+
+ struct kvm_io_device *kvm_io_bus_get_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+@@ -3614,6 +3630,8 @@ struct kvm_io_device *kvm_io_bus_get_dev(struct kvm *kvm, enum kvm_bus bus_idx,
+ srcu_idx = srcu_read_lock(&kvm->srcu);
+
+ bus = srcu_dereference(kvm->buses[bus_idx], &kvm->srcu);
++ if (!bus)
++ goto out_unlock;
+
+ dev_idx = kvm_io_bus_get_first_dev(bus, addr, 1);
+ if (dev_idx < 0)
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-04-12 18:02 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-04-12 18:02 UTC (permalink / raw
To: gentoo-commits
commit: dc93dbc1a0e8814da2e5ae4a6f6d9e7bc83aabcc
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 12 18:02:09 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 12 18:02:09 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=dc93dbc1
Linux patch 4.10.10
0000_README | 4 +
1009_linux-4.10.10.patch | 4168 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4172 insertions(+)
diff --git a/0000_README b/0000_README
index 5f8d5b0..abc6f43 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 1008_linux-4.10.9.patch
From: http://www.kernel.org
Desc: Linux 4.10.9
+Patch: 1009_linux-4.10.10.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.10
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1009_linux-4.10.10.patch b/1009_linux-4.10.10.patch
new file mode 100644
index 0000000..8380fc6
--- /dev/null
+++ b/1009_linux-4.10.10.patch
@@ -0,0 +1,4168 @@
+diff --git a/Documentation/devicetree/bindings/usb/usb-xhci.txt b/Documentation/devicetree/bindings/usb/usb-xhci.txt
+index 0b7d8576001c..2d80b60eeabe 100644
+--- a/Documentation/devicetree/bindings/usb/usb-xhci.txt
++++ b/Documentation/devicetree/bindings/usb/usb-xhci.txt
+@@ -27,6 +27,7 @@ Required properties:
+ Optional properties:
+ - clocks: reference to a clock
+ - usb3-lpm-capable: determines if platform is USB3 LPM capable
++ - quirk-broken-port-ped: set if the controller has broken port disable mechanism
+
+ Example:
+ usb@f0931000 {
+diff --git a/Documentation/devicetree/bindings/watchdog/samsung-wdt.txt b/Documentation/devicetree/bindings/watchdog/samsung-wdt.txt
+index 8f3d96af81d7..1f6e101e299a 100644
+--- a/Documentation/devicetree/bindings/watchdog/samsung-wdt.txt
++++ b/Documentation/devicetree/bindings/watchdog/samsung-wdt.txt
+@@ -6,10 +6,11 @@ occurred.
+
+ Required properties:
+ - compatible : should be one among the following
+- (a) "samsung,s3c2410-wdt" for Exynos4 and previous SoCs
+- (b) "samsung,exynos5250-wdt" for Exynos5250
+- (c) "samsung,exynos5420-wdt" for Exynos5420
+- (c) "samsung,exynos7-wdt" for Exynos7
++ - "samsung,s3c2410-wdt" for S3C2410
++ - "samsung,s3c6410-wdt" for S3C6410, S5PV210 and Exynos4
++ - "samsung,exynos5250-wdt" for Exynos5250
++ - "samsung,exynos5420-wdt" for Exynos5420
++ - "samsung,exynos7-wdt" for Exynos7
+
+ - reg : base physical address of the controller and length of memory mapped
+ region.
+diff --git a/Documentation/process/stable-kernel-rules.rst b/Documentation/process/stable-kernel-rules.rst
+index 11ec2d93a5e0..61e9c78bd6d1 100644
+--- a/Documentation/process/stable-kernel-rules.rst
++++ b/Documentation/process/stable-kernel-rules.rst
+@@ -124,7 +124,7 @@ specified in the following format in the sign-off area:
+
+ .. code-block:: none
+
+- Cc: <stable@vger.kernel.org> # 3.3.x-
++ Cc: <stable@vger.kernel.org> # 3.3.x
+
+ The tag has the meaning of:
+
+diff --git a/Makefile b/Makefile
+index 4ebd511dee58..52858726495b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+@@ -370,7 +370,7 @@ LDFLAGS_MODULE =
+ CFLAGS_KERNEL =
+ AFLAGS_KERNEL =
+ LDFLAGS_vmlinux =
+-CFLAGS_GCOV = -fprofile-arcs -ftest-coverage -fno-tree-loop-im -Wno-maybe-uninitialized
++CFLAGS_GCOV := -fprofile-arcs -ftest-coverage -fno-tree-loop-im $(call cc-disable-warning,maybe-uninitialized,)
+ CFLAGS_KCOV := $(call cc-option,-fsanitize-coverage=trace-pc,)
+
+
+@@ -651,6 +651,12 @@ KBUILD_CFLAGS += $(call cc-ifversion, -lt, 0409, \
+ # Tell gcc to never replace conditional load with a non-conditional one
+ KBUILD_CFLAGS += $(call cc-option,--param=allow-store-data-races=0)
+
++# check for 'asm goto'
++ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC) $(KBUILD_CFLAGS)), y)
++ KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO
++ KBUILD_AFLAGS += -DCC_HAVE_ASM_GOTO
++endif
++
+ include scripts/Makefile.gcc-plugins
+
+ ifdef CONFIG_READABLE_ASM
+@@ -796,12 +802,6 @@ KBUILD_CFLAGS += $(call cc-option,-Werror=incompatible-pointer-types)
+ # use the deterministic mode of AR if available
+ KBUILD_ARFLAGS := $(call ar-option,D)
+
+-# check for 'asm goto'
+-ifeq ($(shell $(CONFIG_SHELL) $(srctree)/scripts/gcc-goto.sh $(CC) $(KBUILD_CFLAGS)), y)
+- KBUILD_CFLAGS += -DCC_HAVE_ASM_GOTO
+- KBUILD_AFLAGS += -DCC_HAVE_ASM_GOTO
+-endif
+-
+ include scripts/Makefile.kasan
+ include scripts/Makefile.extrawarn
+ include scripts/Makefile.ubsan
+diff --git a/arch/arm/kernel/armksyms.c b/arch/arm/kernel/armksyms.c
+index 7e45f69a0ddc..8e8d20cdbce7 100644
+--- a/arch/arm/kernel/armksyms.c
++++ b/arch/arm/kernel/armksyms.c
+@@ -178,6 +178,6 @@ EXPORT_SYMBOL(__pv_offset);
+ #endif
+
+ #ifdef CONFIG_HAVE_ARM_SMCCC
+-EXPORT_SYMBOL(arm_smccc_smc);
+-EXPORT_SYMBOL(arm_smccc_hvc);
++EXPORT_SYMBOL(__arm_smccc_smc);
++EXPORT_SYMBOL(__arm_smccc_hvc);
+ #endif
+diff --git a/arch/arm/kernel/smccc-call.S b/arch/arm/kernel/smccc-call.S
+index 2e48b674aab1..e5d43066b889 100644
+--- a/arch/arm/kernel/smccc-call.S
++++ b/arch/arm/kernel/smccc-call.S
+@@ -46,17 +46,19 @@ UNWIND( .fnend)
+ /*
+ * void smccc_smc(unsigned long a0, unsigned long a1, unsigned long a2,
+ * unsigned long a3, unsigned long a4, unsigned long a5,
+- * unsigned long a6, unsigned long a7, struct arm_smccc_res *res)
++ * unsigned long a6, unsigned long a7, struct arm_smccc_res *res,
++ * struct arm_smccc_quirk *quirk)
+ */
+-ENTRY(arm_smccc_smc)
++ENTRY(__arm_smccc_smc)
+ SMCCC SMCCC_SMC
+-ENDPROC(arm_smccc_smc)
++ENDPROC(__arm_smccc_smc)
+
+ /*
+ * void smccc_hvc(unsigned long a0, unsigned long a1, unsigned long a2,
+ * unsigned long a3, unsigned long a4, unsigned long a5,
+- * unsigned long a6, unsigned long a7, struct arm_smccc_res *res)
++ * unsigned long a6, unsigned long a7, struct arm_smccc_res *res,
++ * struct arm_smccc_quirk *quirk)
+ */
+-ENTRY(arm_smccc_hvc)
++ENTRY(__arm_smccc_hvc)
+ SMCCC SMCCC_HVC
+-ENDPROC(arm_smccc_hvc)
++ENDPROC(__arm_smccc_hvc)
+diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
+index a5265edbeeab..2fd5c135e8a4 100644
+--- a/arch/arm/kvm/mmu.c
++++ b/arch/arm/kvm/mmu.c
+@@ -292,11 +292,18 @@ static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
+ phys_addr_t addr = start, end = start + size;
+ phys_addr_t next;
+
++ assert_spin_locked(&kvm->mmu_lock);
+ pgd = kvm->arch.pgd + stage2_pgd_index(addr);
+ do {
+ next = stage2_pgd_addr_end(addr, end);
+ if (!stage2_pgd_none(*pgd))
+ unmap_stage2_puds(kvm, pgd, addr, next);
++ /*
++ * If the range is too large, release the kvm->mmu_lock
++ * to prevent starvation and lockup detector warnings.
++ */
++ if (next != end)
++ cond_resched_lock(&kvm->mmu_lock);
+ } while (pgd++, addr = next, addr != end);
+ }
+
+@@ -803,6 +810,7 @@ void stage2_unmap_vm(struct kvm *kvm)
+ int idx;
+
+ idx = srcu_read_lock(&kvm->srcu);
++ down_read(&current->mm->mmap_sem);
+ spin_lock(&kvm->mmu_lock);
+
+ slots = kvm_memslots(kvm);
+@@ -810,6 +818,7 @@ void stage2_unmap_vm(struct kvm *kvm)
+ stage2_unmap_memslot(kvm, memslot);
+
+ spin_unlock(&kvm->mmu_lock);
++ up_read(&current->mm->mmap_sem);
+ srcu_read_unlock(&kvm->srcu, idx);
+ }
+
+@@ -829,7 +838,10 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
+ if (kvm->arch.pgd == NULL)
+ return;
+
++ spin_lock(&kvm->mmu_lock);
+ unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
++ spin_unlock(&kvm->mmu_lock);
++
+ /* Free the HW pgd, one page at a time */
+ free_pages_exact(kvm->arch.pgd, S2_PGD_SIZE);
+ kvm->arch.pgd = NULL;
+@@ -1804,6 +1816,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ (KVM_PHYS_SIZE >> PAGE_SHIFT))
+ return -EFAULT;
+
++ down_read(&current->mm->mmap_sem);
+ /*
+ * A memory region could potentially cover multiple VMAs, and any holes
+ * between them, so iterate over all of them to find out if we can map
+@@ -1847,8 +1860,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ pa += vm_start - vma->vm_start;
+
+ /* IO region dirty page logging not allowed */
+- if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES)
+- return -EINVAL;
++ if (memslot->flags & KVM_MEM_LOG_DIRTY_PAGES) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+ ret = kvm_phys_addr_ioremap(kvm, gpa, pa,
+ vm_end - vm_start,
+@@ -1860,7 +1875,7 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ } while (hva < reg_end);
+
+ if (change == KVM_MR_FLAGS_ONLY)
+- return ret;
++ goto out;
+
+ spin_lock(&kvm->mmu_lock);
+ if (ret)
+@@ -1868,6 +1883,8 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ else
+ stage2_flush_memslot(kvm, memslot);
+ spin_unlock(&kvm->mmu_lock);
++out:
++ up_read(&current->mm->mmap_sem);
+ return ret;
+ }
+
+diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
+index 78f368039c79..e9c4dc9e0ada 100644
+--- a/arch/arm64/kernel/arm64ksyms.c
++++ b/arch/arm64/kernel/arm64ksyms.c
+@@ -73,5 +73,5 @@ NOKPROBE_SYMBOL(_mcount);
+ #endif
+
+ /* arm-smccc */
+-EXPORT_SYMBOL(arm_smccc_smc);
+-EXPORT_SYMBOL(arm_smccc_hvc);
++EXPORT_SYMBOL(__arm_smccc_smc);
++EXPORT_SYMBOL(__arm_smccc_hvc);
+diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
+index bc049afc73a7..b3bb7ef97bc8 100644
+--- a/arch/arm64/kernel/asm-offsets.c
++++ b/arch/arm64/kernel/asm-offsets.c
+@@ -143,8 +143,11 @@ int main(void)
+ DEFINE(SLEEP_STACK_DATA_SYSTEM_REGS, offsetof(struct sleep_stack_data, system_regs));
+ DEFINE(SLEEP_STACK_DATA_CALLEE_REGS, offsetof(struct sleep_stack_data, callee_saved_regs));
+ #endif
+- DEFINE(ARM_SMCCC_RES_X0_OFFS, offsetof(struct arm_smccc_res, a0));
+- DEFINE(ARM_SMCCC_RES_X2_OFFS, offsetof(struct arm_smccc_res, a2));
++ DEFINE(ARM_SMCCC_RES_X0_OFFS, offsetof(struct arm_smccc_res, a0));
++ DEFINE(ARM_SMCCC_RES_X2_OFFS, offsetof(struct arm_smccc_res, a2));
++ DEFINE(ARM_SMCCC_QUIRK_ID_OFFS, offsetof(struct arm_smccc_quirk, id));
++ DEFINE(ARM_SMCCC_QUIRK_STATE_OFFS, offsetof(struct arm_smccc_quirk, state));
++
+ BLANK();
+ DEFINE(HIBERN_PBE_ORIG, offsetof(struct pbe, orig_address));
+ DEFINE(HIBERN_PBE_ADDR, offsetof(struct pbe, address));
+diff --git a/arch/arm64/kernel/smccc-call.S b/arch/arm64/kernel/smccc-call.S
+index ae0496fa4235..62522342e1e4 100644
+--- a/arch/arm64/kernel/smccc-call.S
++++ b/arch/arm64/kernel/smccc-call.S
+@@ -12,6 +12,7 @@
+ *
+ */
+ #include <linux/linkage.h>
++#include <linux/arm-smccc.h>
+ #include <asm/asm-offsets.h>
+
+ .macro SMCCC instr
+@@ -20,24 +21,32 @@
+ ldr x4, [sp]
+ stp x0, x1, [x4, #ARM_SMCCC_RES_X0_OFFS]
+ stp x2, x3, [x4, #ARM_SMCCC_RES_X2_OFFS]
+- ret
++ ldr x4, [sp, #8]
++ cbz x4, 1f /* no quirk structure */
++ ldr x9, [x4, #ARM_SMCCC_QUIRK_ID_OFFS]
++ cmp x9, #ARM_SMCCC_QUIRK_QCOM_A6
++ b.ne 1f
++ str x6, [x4, ARM_SMCCC_QUIRK_STATE_OFFS]
++1: ret
+ .cfi_endproc
+ .endm
+
+ /*
+ * void arm_smccc_smc(unsigned long a0, unsigned long a1, unsigned long a2,
+ * unsigned long a3, unsigned long a4, unsigned long a5,
+- * unsigned long a6, unsigned long a7, struct arm_smccc_res *res)
++ * unsigned long a6, unsigned long a7, struct arm_smccc_res *res,
++ * struct arm_smccc_quirk *quirk)
+ */
+-ENTRY(arm_smccc_smc)
++ENTRY(__arm_smccc_smc)
+ SMCCC smc
+-ENDPROC(arm_smccc_smc)
++ENDPROC(__arm_smccc_smc)
+
+ /*
+ * void arm_smccc_hvc(unsigned long a0, unsigned long a1, unsigned long a2,
+ * unsigned long a3, unsigned long a4, unsigned long a5,
+- * unsigned long a6, unsigned long a7, struct arm_smccc_res *res)
++ * unsigned long a6, unsigned long a7, struct arm_smccc_res *res,
++ * struct arm_smccc_quirk *quirk)
+ */
+-ENTRY(arm_smccc_hvc)
++ENTRY(__arm_smccc_hvc)
+ SMCCC hvc
+-ENDPROC(arm_smccc_hvc)
++ENDPROC(__arm_smccc_hvc)
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index 156169c6981b..ed0f50b565c3 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -41,7 +41,20 @@
+ #include <asm/pgtable.h>
+ #include <asm/tlbflush.h>
+
+-static const char *fault_name(unsigned int esr);
++struct fault_info {
++ int (*fn)(unsigned long addr, unsigned int esr,
++ struct pt_regs *regs);
++ int sig;
++ int code;
++ const char *name;
++};
++
++static const struct fault_info fault_info[];
++
++static inline const struct fault_info *esr_to_fault_info(unsigned int esr)
++{
++ return fault_info + (esr & 63);
++}
+
+ #ifdef CONFIG_KPROBES
+ static inline int notify_page_fault(struct pt_regs *regs, unsigned int esr)
+@@ -196,10 +209,12 @@ static void __do_user_fault(struct task_struct *tsk, unsigned long addr,
+ struct pt_regs *regs)
+ {
+ struct siginfo si;
++ const struct fault_info *inf;
+
+ if (unhandled_signal(tsk, sig) && show_unhandled_signals_ratelimited()) {
++ inf = esr_to_fault_info(esr);
+ pr_info("%s[%d]: unhandled %s (%d) at 0x%08lx, esr 0x%03x\n",
+- tsk->comm, task_pid_nr(tsk), fault_name(esr), sig,
++ tsk->comm, task_pid_nr(tsk), inf->name, sig,
+ addr, esr);
+ show_pte(tsk->mm, addr);
+ show_regs(regs);
+@@ -218,14 +233,16 @@ static void do_bad_area(unsigned long addr, unsigned int esr, struct pt_regs *re
+ {
+ struct task_struct *tsk = current;
+ struct mm_struct *mm = tsk->active_mm;
++ const struct fault_info *inf;
+
+ /*
+ * If we are in kernel mode at this point, we have no context to
+ * handle this fault with.
+ */
+- if (user_mode(regs))
+- __do_user_fault(tsk, addr, esr, SIGSEGV, SEGV_MAPERR, regs);
+- else
++ if (user_mode(regs)) {
++ inf = esr_to_fault_info(esr);
++ __do_user_fault(tsk, addr, esr, inf->sig, inf->code, regs);
++ } else
+ __do_kernel_fault(mm, addr, esr, regs);
+ }
+
+@@ -487,12 +504,7 @@ static int do_bad(unsigned long addr, unsigned int esr, struct pt_regs *regs)
+ return 1;
+ }
+
+-static const struct fault_info {
+- int (*fn)(unsigned long addr, unsigned int esr, struct pt_regs *regs);
+- int sig;
+- int code;
+- const char *name;
+-} fault_info[] = {
++static const struct fault_info fault_info[] = {
+ { do_bad, SIGBUS, 0, "ttbr address size fault" },
+ { do_bad, SIGBUS, 0, "level 1 address size fault" },
+ { do_bad, SIGBUS, 0, "level 2 address size fault" },
+@@ -559,19 +571,13 @@ static const struct fault_info {
+ { do_bad, SIGBUS, 0, "unknown 63" },
+ };
+
+-static const char *fault_name(unsigned int esr)
+-{
+- const struct fault_info *inf = fault_info + (esr & 63);
+- return inf->name;
+-}
+-
+ /*
+ * Dispatch a data abort to the relevant handler.
+ */
+ asmlinkage void __exception do_mem_abort(unsigned long addr, unsigned int esr,
+ struct pt_regs *regs)
+ {
+- const struct fault_info *inf = fault_info + (esr & 63);
++ const struct fault_info *inf = esr_to_fault_info(esr);
+ struct siginfo info;
+
+ if (!inf->fn(addr, esr, regs))
+diff --git a/arch/metag/include/asm/uaccess.h b/arch/metag/include/asm/uaccess.h
+index 273e61225c27..07238b39638c 100644
+--- a/arch/metag/include/asm/uaccess.h
++++ b/arch/metag/include/asm/uaccess.h
+@@ -197,20 +197,21 @@ extern long __must_check strnlen_user(const char __user *src, long count);
+
+ #define strlen_user(str) strnlen_user(str, 32767)
+
+-extern unsigned long __must_check __copy_user_zeroing(void *to,
+- const void __user *from,
+- unsigned long n);
++extern unsigned long raw_copy_from_user(void *to, const void __user *from,
++ unsigned long n);
+
+ static inline unsigned long
+ copy_from_user(void *to, const void __user *from, unsigned long n)
+ {
++ unsigned long res = n;
+ if (likely(access_ok(VERIFY_READ, from, n)))
+- return __copy_user_zeroing(to, from, n);
+- memset(to, 0, n);
+- return n;
++ res = raw_copy_from_user(to, from, n);
++ if (unlikely(res))
++ memset(to + (n - res), 0, res);
++ return res;
+ }
+
+-#define __copy_from_user(to, from, n) __copy_user_zeroing(to, from, n)
++#define __copy_from_user(to, from, n) raw_copy_from_user(to, from, n)
+ #define __copy_from_user_inatomic __copy_from_user
+
+ extern unsigned long __must_check __copy_user(void __user *to,
+diff --git a/arch/metag/lib/usercopy.c b/arch/metag/lib/usercopy.c
+index b3ebfe9c8e88..2792fc621088 100644
+--- a/arch/metag/lib/usercopy.c
++++ b/arch/metag/lib/usercopy.c
+@@ -29,7 +29,6 @@
+ COPY \
+ "1:\n" \
+ " .section .fixup,\"ax\"\n" \
+- " MOV D1Ar1,#0\n" \
+ FIXUP \
+ " MOVT D1Ar1,#HI(1b)\n" \
+ " JUMP D1Ar1,#LO(1b)\n" \
+@@ -260,27 +259,31 @@
+ "MGETL D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
+ "22:\n" \
+ "MSETL [%0++], D0FrT, D0.5, D0.6, D0.7\n" \
+- "SUB %3, %3, #32\n" \
+ "23:\n" \
+- "MGETL D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
++ "SUB %3, %3, #32\n" \
+ "24:\n" \
++ "MGETL D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
++ "25:\n" \
+ "MSETL [%0++], D0FrT, D0.5, D0.6, D0.7\n" \
++ "26:\n" \
+ "SUB %3, %3, #32\n" \
+ "DCACHE [%1+#-64], D0Ar6\n" \
+ "BR $Lloop"id"\n" \
+ \
+ "MOV RAPF, %1\n" \
+- "25:\n" \
++ "27:\n" \
+ "MGETL D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
+- "26:\n" \
++ "28:\n" \
+ "MSETL [%0++], D0FrT, D0.5, D0.6, D0.7\n" \
++ "29:\n" \
+ "SUB %3, %3, #32\n" \
+- "27:\n" \
++ "30:\n" \
+ "MGETL D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
+- "28:\n" \
++ "31:\n" \
+ "MSETL [%0++], D0FrT, D0.5, D0.6, D0.7\n" \
++ "32:\n" \
+ "SUB %0, %0, #8\n" \
+- "29:\n" \
++ "33:\n" \
+ "SETL [%0++], D0.7, D1.7\n" \
+ "SUB %3, %3, #32\n" \
+ "1:" \
+@@ -312,11 +315,15 @@
+ " .long 26b,3b\n" \
+ " .long 27b,3b\n" \
+ " .long 28b,3b\n" \
+- " .long 29b,4b\n" \
++ " .long 29b,3b\n" \
++ " .long 30b,3b\n" \
++ " .long 31b,3b\n" \
++ " .long 32b,3b\n" \
++ " .long 33b,4b\n" \
+ " .previous\n" \
+ : "=r" (to), "=r" (from), "=r" (ret), "=d" (n) \
+ : "0" (to), "1" (from), "2" (ret), "3" (n) \
+- : "D1Ar1", "D0Ar2", "memory")
++ : "D1Ar1", "D0Ar2", "cc", "memory")
+
+ /* rewind 'to' and 'from' pointers when a fault occurs
+ *
+@@ -342,7 +349,7 @@
+ #define __asm_copy_to_user_64bit_rapf_loop(to, from, ret, n, id)\
+ __asm_copy_user_64bit_rapf_loop(to, from, ret, n, id, \
+ "LSR D0Ar2, D0Ar2, #8\n" \
+- "AND D0Ar2, D0Ar2, #0x7\n" \
++ "ANDS D0Ar2, D0Ar2, #0x7\n" \
+ "ADDZ D0Ar2, D0Ar2, #4\n" \
+ "SUB D0Ar2, D0Ar2, #1\n" \
+ "MOV D1Ar1, #4\n" \
+@@ -403,47 +410,55 @@
+ "MGETD D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
+ "22:\n" \
+ "MSETD [%0++], D0FrT, D0.5, D0.6, D0.7\n" \
+- "SUB %3, %3, #16\n" \
+ "23:\n" \
+- "MGETD D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
+- "24:\n" \
+- "MSETD [%0++], D0FrT, D0.5, D0.6, D0.7\n" \
+ "SUB %3, %3, #16\n" \
+- "25:\n" \
++ "24:\n" \
+ "MGETD D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
+- "26:\n" \
++ "25:\n" \
+ "MSETD [%0++], D0FrT, D0.5, D0.6, D0.7\n" \
++ "26:\n" \
+ "SUB %3, %3, #16\n" \
+ "27:\n" \
+ "MGETD D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
+ "28:\n" \
+ "MSETD [%0++], D0FrT, D0.5, D0.6, D0.7\n" \
++ "29:\n" \
++ "SUB %3, %3, #16\n" \
++ "30:\n" \
++ "MGETD D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
++ "31:\n" \
++ "MSETD [%0++], D0FrT, D0.5, D0.6, D0.7\n" \
++ "32:\n" \
+ "SUB %3, %3, #16\n" \
+ "DCACHE [%1+#-64], D0Ar6\n" \
+ "BR $Lloop"id"\n" \
+ \
+ "MOV RAPF, %1\n" \
+- "29:\n" \
++ "33:\n" \
+ "MGETD D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
+- "30:\n" \
++ "34:\n" \
+ "MSETD [%0++], D0FrT, D0.5, D0.6, D0.7\n" \
++ "35:\n" \
+ "SUB %3, %3, #16\n" \
+- "31:\n" \
++ "36:\n" \
+ "MGETD D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
+- "32:\n" \
++ "37:\n" \
+ "MSETD [%0++], D0FrT, D0.5, D0.6, D0.7\n" \
++ "38:\n" \
+ "SUB %3, %3, #16\n" \
+- "33:\n" \
++ "39:\n" \
+ "MGETD D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
+- "34:\n" \
++ "40:\n" \
+ "MSETD [%0++], D0FrT, D0.5, D0.6, D0.7\n" \
++ "41:\n" \
+ "SUB %3, %3, #16\n" \
+- "35:\n" \
++ "42:\n" \
+ "MGETD D0FrT, D0.5, D0.6, D0.7, [%1++]\n" \
+- "36:\n" \
++ "43:\n" \
+ "MSETD [%0++], D0FrT, D0.5, D0.6, D0.7\n" \
++ "44:\n" \
+ "SUB %0, %0, #4\n" \
+- "37:\n" \
++ "45:\n" \
+ "SETD [%0++], D0.7\n" \
+ "SUB %3, %3, #16\n" \
+ "1:" \
+@@ -483,11 +498,19 @@
+ " .long 34b,3b\n" \
+ " .long 35b,3b\n" \
+ " .long 36b,3b\n" \
+- " .long 37b,4b\n" \
++ " .long 37b,3b\n" \
++ " .long 38b,3b\n" \
++ " .long 39b,3b\n" \
++ " .long 40b,3b\n" \
++ " .long 41b,3b\n" \
++ " .long 42b,3b\n" \
++ " .long 43b,3b\n" \
++ " .long 44b,3b\n" \
++ " .long 45b,4b\n" \
+ " .previous\n" \
+ : "=r" (to), "=r" (from), "=r" (ret), "=d" (n) \
+ : "0" (to), "1" (from), "2" (ret), "3" (n) \
+- : "D1Ar1", "D0Ar2", "memory")
++ : "D1Ar1", "D0Ar2", "cc", "memory")
+
+ /* rewind 'to' and 'from' pointers when a fault occurs
+ *
+@@ -513,7 +536,7 @@
+ #define __asm_copy_to_user_32bit_rapf_loop(to, from, ret, n, id)\
+ __asm_copy_user_32bit_rapf_loop(to, from, ret, n, id, \
+ "LSR D0Ar2, D0Ar2, #8\n" \
+- "AND D0Ar2, D0Ar2, #0x7\n" \
++ "ANDS D0Ar2, D0Ar2, #0x7\n" \
+ "ADDZ D0Ar2, D0Ar2, #4\n" \
+ "SUB D0Ar2, D0Ar2, #1\n" \
+ "MOV D1Ar1, #4\n" \
+@@ -538,23 +561,31 @@ unsigned long __copy_user(void __user *pdst, const void *psrc,
+ if ((unsigned long) src & 1) {
+ __asm_copy_to_user_1(dst, src, retn);
+ n--;
++ if (retn)
++ return retn + n;
+ }
+ if ((unsigned long) dst & 1) {
+ /* Worst case - byte copy */
+ while (n > 0) {
+ __asm_copy_to_user_1(dst, src, retn);
+ n--;
++ if (retn)
++ return retn + n;
+ }
+ }
+ if (((unsigned long) src & 2) && n >= 2) {
+ __asm_copy_to_user_2(dst, src, retn);
+ n -= 2;
++ if (retn)
++ return retn + n;
+ }
+ if ((unsigned long) dst & 2) {
+ /* Second worst case - word copy */
+ while (n >= 2) {
+ __asm_copy_to_user_2(dst, src, retn);
+ n -= 2;
++ if (retn)
++ return retn + n;
+ }
+ }
+
+@@ -569,6 +600,8 @@ unsigned long __copy_user(void __user *pdst, const void *psrc,
+ while (n >= 8) {
+ __asm_copy_to_user_8x64(dst, src, retn);
+ n -= 8;
++ if (retn)
++ return retn + n;
+ }
+ }
+ if (n >= RAPF_MIN_BUF_SIZE) {
+@@ -581,6 +614,8 @@ unsigned long __copy_user(void __user *pdst, const void *psrc,
+ while (n >= 8) {
+ __asm_copy_to_user_8x64(dst, src, retn);
+ n -= 8;
++ if (retn)
++ return retn + n;
+ }
+ }
+ #endif
+@@ -588,11 +623,15 @@ unsigned long __copy_user(void __user *pdst, const void *psrc,
+ while (n >= 16) {
+ __asm_copy_to_user_16(dst, src, retn);
+ n -= 16;
++ if (retn)
++ return retn + n;
+ }
+
+ while (n >= 4) {
+ __asm_copy_to_user_4(dst, src, retn);
+ n -= 4;
++ if (retn)
++ return retn + n;
+ }
+
+ switch (n) {
+@@ -609,6 +648,10 @@ unsigned long __copy_user(void __user *pdst, const void *psrc,
+ break;
+ }
+
++ /*
++ * If we get here, retn correctly reflects the number of failing
++ * bytes.
++ */
+ return retn;
+ }
+ EXPORT_SYMBOL(__copy_user);
+@@ -617,16 +660,14 @@ EXPORT_SYMBOL(__copy_user);
+ __asm_copy_user_cont(to, from, ret, \
+ " GETB D1Ar1,[%1++]\n" \
+ "2: SETB [%0++],D1Ar1\n", \
+- "3: ADD %2,%2,#1\n" \
+- " SETB [%0++],D1Ar1\n", \
++ "3: ADD %2,%2,#1\n", \
+ " .long 2b,3b\n")
+
+ #define __asm_copy_from_user_2x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
+ __asm_copy_user_cont(to, from, ret, \
+ " GETW D1Ar1,[%1++]\n" \
+ "2: SETW [%0++],D1Ar1\n" COPY, \
+- "3: ADD %2,%2,#2\n" \
+- " SETW [%0++],D1Ar1\n" FIXUP, \
++ "3: ADD %2,%2,#2\n" FIXUP, \
+ " .long 2b,3b\n" TENTRY)
+
+ #define __asm_copy_from_user_2(to, from, ret) \
+@@ -636,145 +677,26 @@ EXPORT_SYMBOL(__copy_user);
+ __asm_copy_from_user_2x_cont(to, from, ret, \
+ " GETB D1Ar1,[%1++]\n" \
+ "4: SETB [%0++],D1Ar1\n", \
+- "5: ADD %2,%2,#1\n" \
+- " SETB [%0++],D1Ar1\n", \
++ "5: ADD %2,%2,#1\n", \
+ " .long 4b,5b\n")
+
+ #define __asm_copy_from_user_4x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
+ __asm_copy_user_cont(to, from, ret, \
+ " GETD D1Ar1,[%1++]\n" \
+ "2: SETD [%0++],D1Ar1\n" COPY, \
+- "3: ADD %2,%2,#4\n" \
+- " SETD [%0++],D1Ar1\n" FIXUP, \
++ "3: ADD %2,%2,#4\n" FIXUP, \
+ " .long 2b,3b\n" TENTRY)
+
+ #define __asm_copy_from_user_4(to, from, ret) \
+ __asm_copy_from_user_4x_cont(to, from, ret, "", "", "")
+
+-#define __asm_copy_from_user_5(to, from, ret) \
+- __asm_copy_from_user_4x_cont(to, from, ret, \
+- " GETB D1Ar1,[%1++]\n" \
+- "4: SETB [%0++],D1Ar1\n", \
+- "5: ADD %2,%2,#1\n" \
+- " SETB [%0++],D1Ar1\n", \
+- " .long 4b,5b\n")
+-
+-#define __asm_copy_from_user_6x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
+- __asm_copy_from_user_4x_cont(to, from, ret, \
+- " GETW D1Ar1,[%1++]\n" \
+- "4: SETW [%0++],D1Ar1\n" COPY, \
+- "5: ADD %2,%2,#2\n" \
+- " SETW [%0++],D1Ar1\n" FIXUP, \
+- " .long 4b,5b\n" TENTRY)
+-
+-#define __asm_copy_from_user_6(to, from, ret) \
+- __asm_copy_from_user_6x_cont(to, from, ret, "", "", "")
+-
+-#define __asm_copy_from_user_7(to, from, ret) \
+- __asm_copy_from_user_6x_cont(to, from, ret, \
+- " GETB D1Ar1,[%1++]\n" \
+- "6: SETB [%0++],D1Ar1\n", \
+- "7: ADD %2,%2,#1\n" \
+- " SETB [%0++],D1Ar1\n", \
+- " .long 6b,7b\n")
+-
+-#define __asm_copy_from_user_8x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
+- __asm_copy_from_user_4x_cont(to, from, ret, \
+- " GETD D1Ar1,[%1++]\n" \
+- "4: SETD [%0++],D1Ar1\n" COPY, \
+- "5: ADD %2,%2,#4\n" \
+- " SETD [%0++],D1Ar1\n" FIXUP, \
+- " .long 4b,5b\n" TENTRY)
+-
+-#define __asm_copy_from_user_8(to, from, ret) \
+- __asm_copy_from_user_8x_cont(to, from, ret, "", "", "")
+-
+-#define __asm_copy_from_user_9(to, from, ret) \
+- __asm_copy_from_user_8x_cont(to, from, ret, \
+- " GETB D1Ar1,[%1++]\n" \
+- "6: SETB [%0++],D1Ar1\n", \
+- "7: ADD %2,%2,#1\n" \
+- " SETB [%0++],D1Ar1\n", \
+- " .long 6b,7b\n")
+-
+-#define __asm_copy_from_user_10x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
+- __asm_copy_from_user_8x_cont(to, from, ret, \
+- " GETW D1Ar1,[%1++]\n" \
+- "6: SETW [%0++],D1Ar1\n" COPY, \
+- "7: ADD %2,%2,#2\n" \
+- " SETW [%0++],D1Ar1\n" FIXUP, \
+- " .long 6b,7b\n" TENTRY)
+-
+-#define __asm_copy_from_user_10(to, from, ret) \
+- __asm_copy_from_user_10x_cont(to, from, ret, "", "", "")
+-
+-#define __asm_copy_from_user_11(to, from, ret) \
+- __asm_copy_from_user_10x_cont(to, from, ret, \
+- " GETB D1Ar1,[%1++]\n" \
+- "8: SETB [%0++],D1Ar1\n", \
+- "9: ADD %2,%2,#1\n" \
+- " SETB [%0++],D1Ar1\n", \
+- " .long 8b,9b\n")
+-
+-#define __asm_copy_from_user_12x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
+- __asm_copy_from_user_8x_cont(to, from, ret, \
+- " GETD D1Ar1,[%1++]\n" \
+- "6: SETD [%0++],D1Ar1\n" COPY, \
+- "7: ADD %2,%2,#4\n" \
+- " SETD [%0++],D1Ar1\n" FIXUP, \
+- " .long 6b,7b\n" TENTRY)
+-
+-#define __asm_copy_from_user_12(to, from, ret) \
+- __asm_copy_from_user_12x_cont(to, from, ret, "", "", "")
+-
+-#define __asm_copy_from_user_13(to, from, ret) \
+- __asm_copy_from_user_12x_cont(to, from, ret, \
+- " GETB D1Ar1,[%1++]\n" \
+- "8: SETB [%0++],D1Ar1\n", \
+- "9: ADD %2,%2,#1\n" \
+- " SETB [%0++],D1Ar1\n", \
+- " .long 8b,9b\n")
+-
+-#define __asm_copy_from_user_14x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
+- __asm_copy_from_user_12x_cont(to, from, ret, \
+- " GETW D1Ar1,[%1++]\n" \
+- "8: SETW [%0++],D1Ar1\n" COPY, \
+- "9: ADD %2,%2,#2\n" \
+- " SETW [%0++],D1Ar1\n" FIXUP, \
+- " .long 8b,9b\n" TENTRY)
+-
+-#define __asm_copy_from_user_14(to, from, ret) \
+- __asm_copy_from_user_14x_cont(to, from, ret, "", "", "")
+-
+-#define __asm_copy_from_user_15(to, from, ret) \
+- __asm_copy_from_user_14x_cont(to, from, ret, \
+- " GETB D1Ar1,[%1++]\n" \
+- "10: SETB [%0++],D1Ar1\n", \
+- "11: ADD %2,%2,#1\n" \
+- " SETB [%0++],D1Ar1\n", \
+- " .long 10b,11b\n")
+-
+-#define __asm_copy_from_user_16x_cont(to, from, ret, COPY, FIXUP, TENTRY) \
+- __asm_copy_from_user_12x_cont(to, from, ret, \
+- " GETD D1Ar1,[%1++]\n" \
+- "8: SETD [%0++],D1Ar1\n" COPY, \
+- "9: ADD %2,%2,#4\n" \
+- " SETD [%0++],D1Ar1\n" FIXUP, \
+- " .long 8b,9b\n" TENTRY)
+-
+-#define __asm_copy_from_user_16(to, from, ret) \
+- __asm_copy_from_user_16x_cont(to, from, ret, "", "", "")
+-
+ #define __asm_copy_from_user_8x64(to, from, ret) \
+ asm volatile ( \
+ " GETL D0Ar2,D1Ar1,[%1++]\n" \
+ "2: SETL [%0++],D0Ar2,D1Ar1\n" \
+ "1:\n" \
+ " .section .fixup,\"ax\"\n" \
+- " MOV D1Ar1,#0\n" \
+- " MOV D0Ar2,#0\n" \
+ "3: ADD %2,%2,#8\n" \
+- " SETL [%0++],D0Ar2,D1Ar1\n" \
+ " MOVT D0Ar2,#HI(1b)\n" \
+ " JUMP D0Ar2,#LO(1b)\n" \
+ " .previous\n" \
+@@ -789,36 +711,57 @@ EXPORT_SYMBOL(__copy_user);
+ *
+ * Rationale:
+ * A fault occurs while reading from user buffer, which is the
+- * source. Since the fault is at a single address, we only
+- * need to rewind by 8 bytes.
++ * source.
+ * Since we don't write to kernel buffer until we read first,
+ * the kernel buffer is at the right state and needn't be
+- * corrected.
++ * corrected, but the source must be rewound to the beginning of
++ * the block, which is LSM_STEP*8 bytes.
++ * LSM_STEP is bits 10:8 in TXSTATUS which is already read
++ * and stored in D0Ar2
++ *
++ * NOTE: If a fault occurs at the last operation in M{G,S}ETL
++ * LSM_STEP will be 0. ie: we do 4 writes in our case, if
++ * a fault happens at the 4th write, LSM_STEP will be 0
++ * instead of 4. The code copes with that.
+ */
+ #define __asm_copy_from_user_64bit_rapf_loop(to, from, ret, n, id) \
+ __asm_copy_user_64bit_rapf_loop(to, from, ret, n, id, \
+- "SUB %1, %1, #8\n")
++ "LSR D0Ar2, D0Ar2, #5\n" \
++ "ANDS D0Ar2, D0Ar2, #0x38\n" \
++ "ADDZ D0Ar2, D0Ar2, #32\n" \
++ "SUB %1, %1, D0Ar2\n")
+
+ /* rewind 'from' pointer when a fault occurs
+ *
+ * Rationale:
+ * A fault occurs while reading from user buffer, which is the
+- * source. Since the fault is at a single address, we only
+- * need to rewind by 4 bytes.
++ * source.
+ * Since we don't write to kernel buffer until we read first,
+ * the kernel buffer is at the right state and needn't be
+- * corrected.
++ * corrected, but the source must be rewound to the beginning of
++ * the block, which is LSM_STEP*4 bytes.
++ * LSM_STEP is bits 10:8 in TXSTATUS which is already read
++ * and stored in D0Ar2
++ *
++ * NOTE: If a fault occurs at the last operation in M{G,S}ETL
++ * LSM_STEP will be 0. ie: we do 4 writes in our case, if
++ * a fault happens at the 4th write, LSM_STEP will be 0
++ * instead of 4. The code copes with that.
+ */
+ #define __asm_copy_from_user_32bit_rapf_loop(to, from, ret, n, id) \
+ __asm_copy_user_32bit_rapf_loop(to, from, ret, n, id, \
+- "SUB %1, %1, #4\n")
++ "LSR D0Ar2, D0Ar2, #6\n" \
++ "ANDS D0Ar2, D0Ar2, #0x1c\n" \
++ "ADDZ D0Ar2, D0Ar2, #16\n" \
++ "SUB %1, %1, D0Ar2\n")
+
+
+-/* Copy from user to kernel, zeroing the bytes that were inaccessible in
+- userland. The return-value is the number of bytes that were
+- inaccessible. */
+-unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
+- unsigned long n)
++/*
++ * Copy from user to kernel. The return-value is the number of bytes that were
++ * inaccessible.
++ */
++unsigned long raw_copy_from_user(void *pdst, const void __user *psrc,
++ unsigned long n)
+ {
+ register char *dst asm ("A0.2") = pdst;
+ register const char __user *src asm ("A1.2") = psrc;
+@@ -830,6 +773,8 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
+ if ((unsigned long) src & 1) {
+ __asm_copy_from_user_1(dst, src, retn);
+ n--;
++ if (retn)
++ return retn + n;
+ }
+ if ((unsigned long) dst & 1) {
+ /* Worst case - byte copy */
+@@ -837,12 +782,14 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
+ __asm_copy_from_user_1(dst, src, retn);
+ n--;
+ if (retn)
+- goto copy_exception_bytes;
++ return retn + n;
+ }
+ }
+ if (((unsigned long) src & 2) && n >= 2) {
+ __asm_copy_from_user_2(dst, src, retn);
+ n -= 2;
++ if (retn)
++ return retn + n;
+ }
+ if ((unsigned long) dst & 2) {
+ /* Second worst case - word copy */
+@@ -850,16 +797,10 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
+ __asm_copy_from_user_2(dst, src, retn);
+ n -= 2;
+ if (retn)
+- goto copy_exception_bytes;
++ return retn + n;
+ }
+ }
+
+- /* We only need one check after the unalignment-adjustments,
+- because if both adjustments were done, either both or
+- neither reference had an exception. */
+- if (retn != 0)
+- goto copy_exception_bytes;
+-
+ #ifdef USE_RAPF
+ /* 64 bit copy loop */
+ if (!(((unsigned long) src | (unsigned long) dst) & 7)) {
+@@ -872,7 +813,7 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
+ __asm_copy_from_user_8x64(dst, src, retn);
+ n -= 8;
+ if (retn)
+- goto copy_exception_bytes;
++ return retn + n;
+ }
+ }
+
+@@ -888,7 +829,7 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
+ __asm_copy_from_user_8x64(dst, src, retn);
+ n -= 8;
+ if (retn)
+- goto copy_exception_bytes;
++ return retn + n;
+ }
+ }
+ #endif
+@@ -898,7 +839,7 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
+ n -= 4;
+
+ if (retn)
+- goto copy_exception_bytes;
++ return retn + n;
+ }
+
+ /* If we get here, there were no memory read faults. */
+@@ -924,21 +865,8 @@ unsigned long __copy_user_zeroing(void *pdst, const void __user *psrc,
+ /* If we get here, retn correctly reflects the number of failing
+ bytes. */
+ return retn;
+-
+- copy_exception_bytes:
+- /* We already have "retn" bytes cleared, and need to clear the
+- remaining "n" bytes. A non-optimized simple byte-for-byte in-line
+- memset is preferred here, since this isn't speed-critical code and
+- we'd rather have this a leaf-function than calling memset. */
+- {
+- char *endp;
+- for (endp = dst + n; dst < endp; dst++)
+- *dst = 0;
+- }
+-
+- return retn + n;
+ }
+-EXPORT_SYMBOL(__copy_user_zeroing);
++EXPORT_SYMBOL(raw_copy_from_user);
+
+ #define __asm_clear_8x64(to, ret) \
+ asm volatile ( \
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index b3c5bde43d34..9a6e11b6f457 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -1526,7 +1526,7 @@ config CPU_MIPS64_R6
+ select CPU_SUPPORTS_HIGHMEM
+ select CPU_SUPPORTS_MSA
+ select GENERIC_CSUM
+- select MIPS_O32_FP64_SUPPORT if MIPS32_O32
++ select MIPS_O32_FP64_SUPPORT if 32BIT || MIPS32_O32
+ select HAVE_KVM
+ help
+ Choose this option to build a kernel for release 6 or later of the
+diff --git a/arch/mips/include/asm/spinlock.h b/arch/mips/include/asm/spinlock.h
+index f485afe51514..a8df44d60607 100644
+--- a/arch/mips/include/asm/spinlock.h
++++ b/arch/mips/include/asm/spinlock.h
+@@ -127,7 +127,7 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
+ " andi %[ticket], %[ticket], 0xffff \n"
+ " bne %[ticket], %[my_ticket], 4f \n"
+ " subu %[ticket], %[my_ticket], %[ticket] \n"
+- "2: \n"
++ "2: .insn \n"
+ " .subsection 2 \n"
+ "4: andi %[ticket], %[ticket], 0xffff \n"
+ " sll %[ticket], 5 \n"
+@@ -202,7 +202,7 @@ static inline unsigned int arch_spin_trylock(arch_spinlock_t *lock)
+ " sc %[ticket], %[ticket_ptr] \n"
+ " beqz %[ticket], 1b \n"
+ " li %[ticket], 1 \n"
+- "2: \n"
++ "2: .insn \n"
+ " .subsection 2 \n"
+ "3: b 2b \n"
+ " li %[ticket], 0 \n"
+@@ -382,7 +382,7 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
+ " .set reorder \n"
+ __WEAK_LLSC_MB
+ " li %2, 1 \n"
+- "2: \n"
++ "2: .insn \n"
+ : "=" GCC_OFF_SMALL_ASM() (rw->lock), "=&r" (tmp), "=&r" (ret)
+ : GCC_OFF_SMALL_ASM() (rw->lock)
+ : "memory");
+@@ -422,7 +422,7 @@ static inline int arch_write_trylock(arch_rwlock_t *rw)
+ " lui %1, 0x8000 \n"
+ " sc %1, %0 \n"
+ " li %2, 1 \n"
+- "2: \n"
++ "2: .insn \n"
+ : "=" GCC_OFF_SMALL_ASM() (rw->lock), "=&r" (tmp),
+ "=&r" (ret)
+ : GCC_OFF_SMALL_ASM() (rw->lock)
+diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
+index 07718bb5fc9d..12422fd4af23 100644
+--- a/arch/mips/kernel/cpu-probe.c
++++ b/arch/mips/kernel/cpu-probe.c
+@@ -1824,7 +1824,7 @@ static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+ }
+
+ decode_configs(c);
+- c->options |= MIPS_CPU_TLBINV | MIPS_CPU_LDPTE;
++ c->options |= MIPS_CPU_FTLB | MIPS_CPU_TLBINV | MIPS_CPU_LDPTE;
+ c->writecombine = _CACHE_UNCACHED_ACCELERATED;
+ break;
+ default:
+diff --git a/arch/mips/kernel/genex.S b/arch/mips/kernel/genex.S
+index dc0b29612891..52a4fdfc8513 100644
+--- a/arch/mips/kernel/genex.S
++++ b/arch/mips/kernel/genex.S
+@@ -448,7 +448,7 @@ NESTED(nmi_handler, PT_SIZE, sp)
+ BUILD_HANDLER reserved reserved sti verbose /* others */
+
+ .align 5
+- LEAF(handle_ri_rdhwr_vivt)
++ LEAF(handle_ri_rdhwr_tlbp)
+ .set push
+ .set noat
+ .set noreorder
+@@ -467,7 +467,7 @@ NESTED(nmi_handler, PT_SIZE, sp)
+ .set pop
+ bltz k1, handle_ri /* slow path */
+ /* fall thru */
+- END(handle_ri_rdhwr_vivt)
++ END(handle_ri_rdhwr_tlbp)
+
+ LEAF(handle_ri_rdhwr)
+ .set push
+diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
+index 6c7f9d7e92b3..6e2487d59fee 100644
+--- a/arch/mips/kernel/traps.c
++++ b/arch/mips/kernel/traps.c
+@@ -81,7 +81,7 @@ extern asmlinkage void handle_dbe(void);
+ extern asmlinkage void handle_sys(void);
+ extern asmlinkage void handle_bp(void);
+ extern asmlinkage void handle_ri(void);
+-extern asmlinkage void handle_ri_rdhwr_vivt(void);
++extern asmlinkage void handle_ri_rdhwr_tlbp(void);
+ extern asmlinkage void handle_ri_rdhwr(void);
+ extern asmlinkage void handle_cpu(void);
+ extern asmlinkage void handle_ov(void);
+@@ -2352,9 +2352,18 @@ void __init trap_init(void)
+
+ set_except_vector(EXCCODE_SYS, handle_sys);
+ set_except_vector(EXCCODE_BP, handle_bp);
+- set_except_vector(EXCCODE_RI, rdhwr_noopt ? handle_ri :
+- (cpu_has_vtag_icache ?
+- handle_ri_rdhwr_vivt : handle_ri_rdhwr));
++
++ if (rdhwr_noopt)
++ set_except_vector(EXCCODE_RI, handle_ri);
++ else {
++ if (cpu_has_vtag_icache)
++ set_except_vector(EXCCODE_RI, handle_ri_rdhwr_tlbp);
++ else if (current_cpu_type() == CPU_LOONGSON3)
++ set_except_vector(EXCCODE_RI, handle_ri_rdhwr_tlbp);
++ else
++ set_except_vector(EXCCODE_RI, handle_ri_rdhwr);
++ }
++
+ set_except_vector(EXCCODE_CPU, handle_cpu);
+ set_except_vector(EXCCODE_OV, handle_ov);
+ set_except_vector(EXCCODE_TR, handle_tr);
+diff --git a/arch/mips/lantiq/xway/sysctrl.c b/arch/mips/lantiq/xway/sysctrl.c
+index 9a61671c00a7..90565477dfbd 100644
+--- a/arch/mips/lantiq/xway/sysctrl.c
++++ b/arch/mips/lantiq/xway/sysctrl.c
+@@ -467,7 +467,7 @@ void __init ltq_soc_init(void)
+
+ if (!np_xbar)
+ panic("Failed to load xbar nodes from devicetree");
+- if (of_address_to_resource(np_pmu, 0, &res_xbar))
++ if (of_address_to_resource(np_xbar, 0, &res_xbar))
+ panic("Failed to get xbar resources");
+ if (request_mem_region(res_xbar.start, resource_size(&res_xbar),
+ res_xbar.name) < 0)
+diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
+index 88cfaf81c958..9d0107fbb169 100644
+--- a/arch/mips/mm/c-r4k.c
++++ b/arch/mips/mm/c-r4k.c
+@@ -1558,6 +1558,7 @@ static void probe_vcache(void)
+ vcache_size = c->vcache.sets * c->vcache.ways * c->vcache.linesz;
+
+ c->vcache.waybit = 0;
++ c->vcache.waysize = vcache_size / c->vcache.ways;
+
+ pr_info("Unified victim cache %ldkB %s, linesize %d bytes.\n",
+ vcache_size >> 10, way_string[c->vcache.ways], c->vcache.linesz);
+@@ -1660,6 +1661,7 @@ static void __init loongson3_sc_init(void)
+ /* Loongson-3 has 4 cores, 1MB scache for each. scaches are shared */
+ scache_size *= 4;
+ c->scache.waybit = 0;
++ c->scache.waysize = scache_size / c->scache.ways;
+ pr_info("Unified secondary cache %ldkB %s, linesize %d bytes.\n",
+ scache_size >> 10, way_string[c->scache.ways], c->scache.linesz);
+ if (scache_size)
+diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
+index 55ce39606cb8..2da5649fc545 100644
+--- a/arch/mips/mm/tlbex.c
++++ b/arch/mips/mm/tlbex.c
+@@ -762,7 +762,8 @@ static void build_huge_update_entries(u32 **p, unsigned int pte,
+ static void build_huge_handler_tail(u32 **p, struct uasm_reloc **r,
+ struct uasm_label **l,
+ unsigned int pte,
+- unsigned int ptr)
++ unsigned int ptr,
++ unsigned int flush)
+ {
+ #ifdef CONFIG_SMP
+ UASM_i_SC(p, pte, 0, ptr);
+@@ -771,6 +772,22 @@ static void build_huge_handler_tail(u32 **p, struct uasm_reloc **r,
+ #else
+ UASM_i_SW(p, pte, 0, ptr);
+ #endif
++ if (cpu_has_ftlb && flush) {
++ BUG_ON(!cpu_has_tlbinv);
++
++ UASM_i_MFC0(p, ptr, C0_ENTRYHI);
++ uasm_i_ori(p, ptr, ptr, MIPS_ENTRYHI_EHINV);
++ UASM_i_MTC0(p, ptr, C0_ENTRYHI);
++ build_tlb_write_entry(p, l, r, tlb_indexed);
++
++ uasm_i_xori(p, ptr, ptr, MIPS_ENTRYHI_EHINV);
++ UASM_i_MTC0(p, ptr, C0_ENTRYHI);
++ build_huge_update_entries(p, pte, ptr);
++ build_huge_tlb_write_entry(p, l, r, pte, tlb_random, 0);
++
++ return;
++ }
++
+ build_huge_update_entries(p, pte, ptr);
+ build_huge_tlb_write_entry(p, l, r, pte, tlb_indexed, 0);
+ }
+@@ -2197,7 +2214,7 @@ static void build_r4000_tlb_load_handler(void)
+ uasm_l_tlbl_goaround2(&l, p);
+ }
+ uasm_i_ori(&p, wr.r1, wr.r1, (_PAGE_ACCESSED | _PAGE_VALID));
+- build_huge_handler_tail(&p, &r, &l, wr.r1, wr.r2);
++ build_huge_handler_tail(&p, &r, &l, wr.r1, wr.r2, 1);
+ #endif
+
+ uasm_l_nopage_tlbl(&l, p);
+@@ -2252,7 +2269,7 @@ static void build_r4000_tlb_store_handler(void)
+ build_tlb_probe_entry(&p);
+ uasm_i_ori(&p, wr.r1, wr.r1,
+ _PAGE_ACCESSED | _PAGE_MODIFIED | _PAGE_VALID | _PAGE_DIRTY);
+- build_huge_handler_tail(&p, &r, &l, wr.r1, wr.r2);
++ build_huge_handler_tail(&p, &r, &l, wr.r1, wr.r2, 1);
+ #endif
+
+ uasm_l_nopage_tlbs(&l, p);
+@@ -2308,7 +2325,7 @@ static void build_r4000_tlb_modify_handler(void)
+ build_tlb_probe_entry(&p);
+ uasm_i_ori(&p, wr.r1, wr.r1,
+ _PAGE_ACCESSED | _PAGE_MODIFIED | _PAGE_VALID | _PAGE_DIRTY);
+- build_huge_handler_tail(&p, &r, &l, wr.r1, wr.r2);
++ build_huge_handler_tail(&p, &r, &l, wr.r1, wr.r2, 0);
+ #endif
+
+ uasm_l_nopage_tlbm(&l, p);
+diff --git a/arch/mips/ralink/rt3883.c b/arch/mips/ralink/rt3883.c
+index 3e0aa09c6b55..9e4631acfcb5 100644
+--- a/arch/mips/ralink/rt3883.c
++++ b/arch/mips/ralink/rt3883.c
+@@ -36,7 +36,7 @@ static struct rt2880_pmx_func uartlite_func[] = { FUNC("uartlite", 0, 15, 2) };
+ static struct rt2880_pmx_func jtag_func[] = { FUNC("jtag", 0, 17, 5) };
+ static struct rt2880_pmx_func mdio_func[] = { FUNC("mdio", 0, 22, 2) };
+ static struct rt2880_pmx_func lna_a_func[] = { FUNC("lna a", 0, 32, 3) };
+-static struct rt2880_pmx_func lna_g_func[] = { FUNC("lna a", 0, 35, 3) };
++static struct rt2880_pmx_func lna_g_func[] = { FUNC("lna g", 0, 35, 3) };
+ static struct rt2880_pmx_func pci_func[] = {
+ FUNC("pci-dev", 0, 40, 32),
+ FUNC("pci-host2", 1, 40, 32),
+@@ -44,7 +44,7 @@ static struct rt2880_pmx_func pci_func[] = {
+ FUNC("pci-fnc", 3, 40, 32)
+ };
+ static struct rt2880_pmx_func ge1_func[] = { FUNC("ge1", 0, 72, 12) };
+-static struct rt2880_pmx_func ge2_func[] = { FUNC("ge1", 0, 84, 12) };
++static struct rt2880_pmx_func ge2_func[] = { FUNC("ge2", 0, 84, 12) };
+
+ static struct rt2880_pmx_group rt3883_pinmux_data[] = {
+ GRP("i2c", i2c_func, 1, RT3883_GPIO_MODE_I2C),
+diff --git a/arch/nios2/kernel/prom.c b/arch/nios2/kernel/prom.c
+index 367c5426157b..3901b80d4420 100644
+--- a/arch/nios2/kernel/prom.c
++++ b/arch/nios2/kernel/prom.c
+@@ -48,6 +48,13 @@ void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)
+ return alloc_bootmem_align(size, align);
+ }
+
++int __init early_init_dt_reserve_memory_arch(phys_addr_t base, phys_addr_t size,
++ bool nomap)
++{
++ reserve_bootmem(base, size, BOOTMEM_DEFAULT);
++ return 0;
++}
++
+ void __init early_init_devtree(void *params)
+ {
+ __be32 *dtb = (u32 *)__dtb_start;
+diff --git a/arch/nios2/kernel/setup.c b/arch/nios2/kernel/setup.c
+index a3fa80d1aacc..72ef4077bf2b 100644
+--- a/arch/nios2/kernel/setup.c
++++ b/arch/nios2/kernel/setup.c
+@@ -200,6 +200,9 @@ void __init setup_arch(char **cmdline_p)
+ }
+ #endif /* CONFIG_BLK_DEV_INITRD */
+
++ early_init_fdt_reserve_self();
++ early_init_fdt_scan_reserved_mem();
++
+ unflatten_and_copy_device_tree();
+
+ setup_cpuinfo();
+diff --git a/arch/powerpc/crypto/crc32c-vpmsum_glue.c b/arch/powerpc/crypto/crc32c-vpmsum_glue.c
+index 411994551afc..f058e0c3e4d4 100644
+--- a/arch/powerpc/crypto/crc32c-vpmsum_glue.c
++++ b/arch/powerpc/crypto/crc32c-vpmsum_glue.c
+@@ -33,10 +33,13 @@ static u32 crc32c_vpmsum(u32 crc, unsigned char const *p, size_t len)
+ }
+
+ if (len & ~VMX_ALIGN_MASK) {
++ preempt_disable();
+ pagefault_disable();
+ enable_kernel_altivec();
+ crc = __crc32c_vpmsum(crc, p, len & ~VMX_ALIGN_MASK);
++ disable_kernel_altivec();
+ pagefault_enable();
++ preempt_enable();
+ }
+
+ tail = len & VMX_ALIGN_MASK;
+diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
+index 8d58c61908f7..df88d2067348 100644
+--- a/arch/powerpc/kernel/align.c
++++ b/arch/powerpc/kernel/align.c
+@@ -807,14 +807,25 @@ int fix_alignment(struct pt_regs *regs)
+ nb = aligninfo[instr].len;
+ flags = aligninfo[instr].flags;
+
+- /* ldbrx/stdbrx overlap lfs/stfs in the DSISR unfortunately */
+- if (IS_XFORM(instruction) && ((instruction >> 1) & 0x3ff) == 532) {
+- nb = 8;
+- flags = LD+SW;
+- } else if (IS_XFORM(instruction) &&
+- ((instruction >> 1) & 0x3ff) == 660) {
+- nb = 8;
+- flags = ST+SW;
++ /*
++ * Handle some cases which give overlaps in the DSISR values.
++ */
++ if (IS_XFORM(instruction)) {
++ switch (get_xop(instruction)) {
++ case 532: /* ldbrx */
++ nb = 8;
++ flags = LD+SW;
++ break;
++ case 660: /* stdbrx */
++ nb = 8;
++ flags = ST+SW;
++ break;
++ case 20: /* lwarx */
++ case 84: /* ldarx */
++ case 116: /* lharx */
++ case 276: /* lqarx */
++ return 0; /* not emulated ever */
++ }
+ }
+
+ /* Byteswap little endian loads and stores */
+diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
+index 32be2a844947..d5f2431daa5e 100644
+--- a/arch/powerpc/kernel/misc_64.S
++++ b/arch/powerpc/kernel/misc_64.S
+@@ -67,7 +67,7 @@ PPC64_CACHES:
+ * flush all bytes from start through stop-1 inclusive
+ */
+
+-_GLOBAL(flush_icache_range)
++_GLOBAL_TOC(flush_icache_range)
+ BEGIN_FTR_SECTION
+ PURGE_PREFETCHED_INS
+ blr
+@@ -120,7 +120,7 @@ EXPORT_SYMBOL(flush_icache_range)
+ *
+ * flush all bytes from start to stop-1 inclusive
+ */
+-_GLOBAL(flush_dcache_range)
++_GLOBAL_TOC(flush_dcache_range)
+
+ /*
+ * Flush the data cache to memory
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 6824157e4d2e..18a0946837d4 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -245,6 +245,15 @@ static void cpu_ready_for_interrupts(void)
+ mtspr(SPRN_LPCR, lpcr | LPCR_AIL_3);
+ }
+
++ /*
++ * Fixup HFSCR:TM based on CPU features. The bit is set by our
++ * early asm init because at that point we haven't updated our
++ * CPU features from firmware and device-tree. Here we have,
++ * so let's do it.
++ */
++ if (cpu_has_feature(CPU_FTR_HVMODE) && !cpu_has_feature(CPU_FTR_TM_COMP))
++ mtspr(SPRN_HFSCR, mfspr(SPRN_HFSCR) & ~HFSCR_TM);
++
+ /* Set IR and DR in PACA MSR */
+ get_paca()->kernel_msr = MSR_KERNEL;
+ }
+diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
+index cc332608e656..65bb8f33b399 100644
+--- a/arch/powerpc/mm/hash_native_64.c
++++ b/arch/powerpc/mm/hash_native_64.c
+@@ -638,6 +638,10 @@ static void native_flush_hash_range(unsigned long number, int local)
+ unsigned long psize = batch->psize;
+ int ssize = batch->ssize;
+ int i;
++ unsigned int use_local;
++
++ use_local = local && mmu_has_feature(MMU_FTR_TLBIEL) &&
++ mmu_psize_defs[psize].tlbiel && !cxl_ctx_in_use();
+
+ local_irq_save(flags);
+
+@@ -667,8 +671,7 @@ static void native_flush_hash_range(unsigned long number, int local)
+ } pte_iterate_hashed_end();
+ }
+
+- if (mmu_has_feature(MMU_FTR_TLBIEL) &&
+- mmu_psize_defs[psize].tlbiel && local) {
++ if (use_local) {
+ asm volatile("ptesync":::"memory");
+ for (i = 0; i < number; i++) {
+ vpn = batch->vpn[i];
+diff --git a/arch/s390/boot/compressed/misc.c b/arch/s390/boot/compressed/misc.c
+index 8515dd5a5663..bd90448347eb 100644
+--- a/arch/s390/boot/compressed/misc.c
++++ b/arch/s390/boot/compressed/misc.c
+@@ -141,31 +141,34 @@ static void check_ipl_parmblock(void *start, unsigned long size)
+
+ unsigned long decompress_kernel(void)
+ {
+- unsigned long output_addr;
+- unsigned char *output;
++ void *output, *kernel_end;
+
+- output_addr = ((unsigned long) &_end + HEAP_SIZE + 4095UL) & -4096UL;
+- check_ipl_parmblock((void *) 0, output_addr + SZ__bss_start);
+- memset(&_bss, 0, &_ebss - &_bss);
+- free_mem_ptr = (unsigned long)&_end;
+- free_mem_end_ptr = free_mem_ptr + HEAP_SIZE;
+- output = (unsigned char *) output_addr;
++ output = (void *) ALIGN((unsigned long) &_end + HEAP_SIZE, PAGE_SIZE);
++ kernel_end = output + SZ__bss_start;
++ check_ipl_parmblock((void *) 0, (unsigned long) kernel_end);
+
+ #ifdef CONFIG_BLK_DEV_INITRD
+ /*
+ * Move the initrd right behind the end of the decompressed
+- * kernel image.
++ * kernel image. This also prevents initrd corruption caused by
++ * bss clearing since kernel_end will always be located behind the
++ * current bss section..
+ */
+- if (INITRD_START && INITRD_SIZE &&
+- INITRD_START < (unsigned long) output + SZ__bss_start) {
+- check_ipl_parmblock(output + SZ__bss_start,
+- INITRD_START + INITRD_SIZE);
+- memmove(output + SZ__bss_start,
+- (void *) INITRD_START, INITRD_SIZE);
+- INITRD_START = (unsigned long) output + SZ__bss_start;
++ if (INITRD_START && INITRD_SIZE && kernel_end > (void *) INITRD_START) {
++ check_ipl_parmblock(kernel_end, INITRD_SIZE);
++ memmove(kernel_end, (void *) INITRD_START, INITRD_SIZE);
++ INITRD_START = (unsigned long) kernel_end;
+ }
+ #endif
+
++ /*
++ * Clear bss section. free_mem_ptr and free_mem_end_ptr need to be
++ * initialized afterwards since they reside in bss.
++ */
++ memset(&_bss, 0, &_ebss - &_bss);
++ free_mem_ptr = (unsigned long) &_end;
++ free_mem_end_ptr = free_mem_ptr + HEAP_SIZE;
++
+ puts("Uncompressing Linux... ");
+ __decompress(input_data, input_len, NULL, NULL, output, 0, NULL, error);
+ puts("Ok, booting the kernel.\n");
+diff --git a/arch/s390/include/asm/uaccess.h b/arch/s390/include/asm/uaccess.h
+index f82b04e85a21..7e99fb34ff23 100644
+--- a/arch/s390/include/asm/uaccess.h
++++ b/arch/s390/include/asm/uaccess.h
+@@ -144,7 +144,7 @@ unsigned long __must_check __copy_to_user(void __user *to, const void *from,
+ " jg 2b\n" \
+ ".popsection\n" \
+ EX_TABLE(0b,3b) EX_TABLE(1b,3b) \
+- : "=d" (__rc), "=Q" (*(to)) \
++ : "=d" (__rc), "+Q" (*(to)) \
+ : "d" (size), "Q" (*(from)), \
+ "d" (__reg0), "K" (-EFAULT) \
+ : "cc"); \
+diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
+index 537c6647d84c..036fc03aefbd 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce.c
++++ b/arch/x86/kernel/cpu/mcheck/mce.c
+@@ -54,6 +54,8 @@
+
+ static DEFINE_MUTEX(mce_chrdev_read_mutex);
+
++static int mce_chrdev_open_count; /* #times opened */
++
+ #define mce_log_get_idx_check(p) \
+ ({ \
+ RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held() && \
+@@ -601,6 +603,10 @@ static int mce_default_notifier(struct notifier_block *nb, unsigned long val,
+ if (atomic_read(&num_notifiers) > 2)
+ return NOTIFY_DONE;
+
++ /* Don't print when mcelog is running */
++ if (mce_chrdev_open_count > 0)
++ return NOTIFY_DONE;
++
+ __print_mce(m);
+
+ return NOTIFY_DONE;
+@@ -1871,7 +1877,6 @@ void mcheck_cpu_clear(struct cpuinfo_x86 *c)
+ */
+
+ static DEFINE_SPINLOCK(mce_chrdev_state_lock);
+-static int mce_chrdev_open_count; /* #times opened */
+ static int mce_chrdev_open_exclu; /* already open exclusive? */
+
+ static int mce_chrdev_open(struct inode *inode, struct file *file)
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index e244c19a2451..067f9813fd2c 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -223,6 +223,22 @@ static struct dmi_system_id __initdata reboot_dmi_table[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "P4S800"),
+ },
+ },
++ { /* Handle problems with rebooting on ASUS EeeBook X205TA */
++ .callback = set_acpi_reboot,
++ .ident = "ASUS EeeBook X205TA",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "X205TA"),
++ },
++ },
++ { /* Handle problems with rebooting on ASUS EeeBook X205TAW */
++ .callback = set_acpi_reboot,
++ .ident = "ASUS EeeBook X205TAW",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "X205TAW"),
++ },
++ },
+
+ /* Certec */
+ { /* Handle problems with rebooting on Certec BPC600 */
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 9764463ce833..cce7d2e3be15 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -7086,13 +7086,18 @@ static int nested_vmx_check_vmptr(struct kvm_vcpu *vcpu, int exit_reason,
+ }
+
+ page = nested_get_page(vcpu, vmptr);
+- if (page == NULL ||
+- *(u32 *)kmap(page) != VMCS12_REVISION) {
++ if (page == NULL) {
+ nested_vmx_failInvalid(vcpu);
++ return kvm_skip_emulated_instruction(vcpu);
++ }
++ if (*(u32 *)kmap(page) != VMCS12_REVISION) {
+ kunmap(page);
++ nested_release_page_clean(page);
++ nested_vmx_failInvalid(vcpu);
+ return kvm_skip_emulated_instruction(vcpu);
+ }
+ kunmap(page);
++ nested_release_page_clean(page);
+ vmx->nested.vmxon_ptr = vmptr;
+ break;
+ case EXIT_REASON_VMCLEAR:
+diff --git a/arch/xtensa/include/asm/page.h b/arch/xtensa/include/asm/page.h
+index 976b1d70edbc..4ddbfd57a7c8 100644
+--- a/arch/xtensa/include/asm/page.h
++++ b/arch/xtensa/include/asm/page.h
+@@ -164,8 +164,21 @@ void copy_user_highpage(struct page *to, struct page *from,
+
+ #define ARCH_PFN_OFFSET (PHYS_OFFSET >> PAGE_SHIFT)
+
++#ifdef CONFIG_MMU
++static inline unsigned long ___pa(unsigned long va)
++{
++ unsigned long off = va - PAGE_OFFSET;
++
++ if (off >= XCHAL_KSEG_SIZE)
++ off -= XCHAL_KSEG_SIZE;
++
++ return off + PHYS_OFFSET;
++}
++#define __pa(x) ___pa((unsigned long)(x))
++#else
+ #define __pa(x) \
+ ((unsigned long) (x) - PAGE_OFFSET + PHYS_OFFSET)
++#endif
+ #define __va(x) \
+ ((void *)((unsigned long) (x) - PHYS_OFFSET + PAGE_OFFSET))
+ #define pfn_valid(pfn) \
+diff --git a/drivers/acpi/button.c b/drivers/acpi/button.c
+index e19f530f1083..6d5a8c1d3132 100644
+--- a/drivers/acpi/button.c
++++ b/drivers/acpi/button.c
+@@ -113,7 +113,7 @@ struct acpi_button {
+
+ static BLOCKING_NOTIFIER_HEAD(acpi_lid_notifier);
+ static struct acpi_device *lid_device;
+-static u8 lid_init_state = ACPI_BUTTON_LID_INIT_METHOD;
++static u8 lid_init_state = ACPI_BUTTON_LID_INIT_OPEN;
+
+ static unsigned long lid_report_interval __read_mostly = 500;
+ module_param(lid_report_interval, ulong, 0644);
+diff --git a/drivers/acpi/glue.c b/drivers/acpi/glue.c
+index fb19e1cdb641..edc8663b5db3 100644
+--- a/drivers/acpi/glue.c
++++ b/drivers/acpi/glue.c
+@@ -99,13 +99,13 @@ static int find_child_checks(struct acpi_device *adev, bool check_children)
+ return -ENODEV;
+
+ /*
+- * If the device has a _HID (or _CID) returning a valid ACPI/PNP
+- * device ID, it is better to make it look less attractive here, so that
+- * the other device with the same _ADR value (that may not have a valid
+- * device ID) can be matched going forward. [This means a second spec
+- * violation in a row, so whatever we do here is best effort anyway.]
++ * If the device has a _HID returning a valid ACPI/PNP device ID, it is
++ * better to make it look less attractive here, so that the other device
++ * with the same _ADR value (that may not have a valid device ID) can be
++ * matched going forward. [This means a second spec violation in a row,
++ * so whatever we do here is best effort anyway.]
+ */
+- return sta_present && list_empty(&adev->pnp.ids) ?
++ return sta_present && !adev->pnp.type.platform_id ?
+ FIND_CHILD_MAX_SCORE : FIND_CHILD_MIN_SCORE;
+ }
+
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 54abb26b7366..a4327af676fe 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -130,6 +130,12 @@ void __init acpi_nvs_nosave_s3(void)
+ nvs_nosave_s3 = true;
+ }
+
++static int __init init_nvs_save_s3(const struct dmi_system_id *d)
++{
++ nvs_nosave_s3 = false;
++ return 0;
++}
++
+ /*
+ * ACPI 1.0 wants us to execute _PTS before suspending devices, so we allow the
+ * user to request that behavior by using the 'acpi_old_suspend_ordering'
+@@ -324,6 +330,19 @@ static struct dmi_system_id acpisleep_dmi_table[] __initdata = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "K54HR"),
+ },
+ },
++ /*
++ * https://bugzilla.kernel.org/show_bug.cgi?id=189431
++ * Lenovo G50-45 is a platform later than 2012, but needs nvs memory
++ * saving during S3.
++ */
++ {
++ .callback = init_nvs_save_s3,
++ .ident = "Lenovo G50-45",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "80E3"),
++ },
++ },
+ {},
+ };
+
+diff --git a/drivers/ata/ahci_da850.c b/drivers/ata/ahci_da850.c
+index 267a3d3e79f4..52f2674d5e89 100644
+--- a/drivers/ata/ahci_da850.c
++++ b/drivers/ata/ahci_da850.c
+@@ -54,11 +54,42 @@ static void da850_sata_init(struct device *dev, void __iomem *pwrdn_reg,
+ writel(val, ahci_base + SATA_P0PHYCR_REG);
+ }
+
++static int ahci_da850_softreset(struct ata_link *link,
++ unsigned int *class, unsigned long deadline)
++{
++ int pmp, ret;
++
++ pmp = sata_srst_pmp(link);
++
++ /*
++ * There's an issue with the SATA controller on da850 SoCs: if we
++ * enable Port Multiplier support, but the drive is connected directly
++ * to the board, it can't be detected. As a workaround: if PMP is
++ * enabled, we first call ahci_do_softreset() and pass it the result of
++ * sata_srst_pmp(). If this call fails, we retry with pmp = 0.
++ */
++ ret = ahci_do_softreset(link, class, pmp, deadline, ahci_check_ready);
++ if (pmp && ret == -EBUSY)
++ return ahci_do_softreset(link, class, 0,
++ deadline, ahci_check_ready);
++
++ return ret;
++}
++
++static struct ata_port_operations ahci_da850_port_ops = {
++ .inherits = &ahci_platform_ops,
++ .softreset = ahci_da850_softreset,
++ /*
++ * No need to override .pmp_softreset - it's only used for actual
++ * PMP-enabled ports.
++ */
++};
++
+ static const struct ata_port_info ahci_da850_port_info = {
+ .flags = AHCI_FLAG_COMMON,
+ .pio_mask = ATA_PIO4,
+ .udma_mask = ATA_UDMA6,
+- .port_ops = &ahci_platform_ops,
++ .port_ops = &ahci_da850_port_ops,
+ };
+
+ static struct scsi_host_template ahci_platform_sht = {
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index 1ef26403bcc8..433facfd6cb8 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -2042,63 +2042,65 @@ struct ctl_table random_table[] = {
+ };
+ #endif /* CONFIG_SYSCTL */
+
+-static u32 random_int_secret[MD5_MESSAGE_BYTES / 4] ____cacheline_aligned;
+-
+-int random_int_secret_init(void)
+-{
+- get_random_bytes(random_int_secret, sizeof(random_int_secret));
+- return 0;
+-}
+-
+-static DEFINE_PER_CPU(__u32 [MD5_DIGEST_WORDS], get_random_int_hash)
+- __aligned(sizeof(unsigned long));
++struct batched_entropy {
++ union {
++ unsigned long entropy_long[CHACHA20_BLOCK_SIZE / sizeof(unsigned long)];
++ unsigned int entropy_int[CHACHA20_BLOCK_SIZE / sizeof(unsigned int)];
++ };
++ unsigned int position;
++};
+
+ /*
+- * Get a random word for internal kernel use only. Similar to urandom but
+- * with the goal of minimal entropy pool depletion. As a result, the random
+- * value is not cryptographically secure but for several uses the cost of
+- * depleting entropy is too high
++ * Get a random word for internal kernel use only. The quality of the random
++ * number is either as good as RDRAND or as good as /dev/urandom, with the
++ * goal of being quite fast and not depleting entropy.
+ */
+-unsigned int get_random_int(void)
++static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_long);
++unsigned long get_random_long(void)
+ {
+- __u32 *hash;
+- unsigned int ret;
++ unsigned long ret;
++ struct batched_entropy *batch;
+
+- if (arch_get_random_int(&ret))
++ if (arch_get_random_long(&ret))
+ return ret;
+
+- hash = get_cpu_var(get_random_int_hash);
+-
+- hash[0] += current->pid + jiffies + random_get_entropy();
+- md5_transform(hash, random_int_secret);
+- ret = hash[0];
+- put_cpu_var(get_random_int_hash);
+-
++ batch = &get_cpu_var(batched_entropy_long);
++ if (batch->position % ARRAY_SIZE(batch->entropy_long) == 0) {
++ extract_crng((u8 *)batch->entropy_long);
++ batch->position = 0;
++ }
++ ret = batch->entropy_long[batch->position++];
++ put_cpu_var(batched_entropy_long);
+ return ret;
+ }
+-EXPORT_SYMBOL(get_random_int);
++EXPORT_SYMBOL(get_random_long);
+
+-/*
+- * Same as get_random_int(), but returns unsigned long.
+- */
+-unsigned long get_random_long(void)
++#if BITS_PER_LONG == 32
++unsigned int get_random_int(void)
+ {
+- __u32 *hash;
+- unsigned long ret;
++ return get_random_long();
++}
++#else
++static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_int);
++unsigned int get_random_int(void)
++{
++ unsigned int ret;
++ struct batched_entropy *batch;
+
+- if (arch_get_random_long(&ret))
++ if (arch_get_random_int(&ret))
+ return ret;
+
+- hash = get_cpu_var(get_random_int_hash);
+-
+- hash[0] += current->pid + jiffies + random_get_entropy();
+- md5_transform(hash, random_int_secret);
+- ret = *(unsigned long *)hash;
+- put_cpu_var(get_random_int_hash);
+-
++ batch = &get_cpu_var(batched_entropy_int);
++ if (batch->position % ARRAY_SIZE(batch->entropy_int) == 0) {
++ extract_crng((u8 *)batch->entropy_int);
++ batch->position = 0;
++ }
++ ret = batch->entropy_int[batch->position++];
++ put_cpu_var(batched_entropy_int);
+ return ret;
+ }
+-EXPORT_SYMBOL(get_random_long);
++#endif
++EXPORT_SYMBOL(get_random_int);
+
+ /**
+ * randomize_page - Generate a random, page aligned address
+diff --git a/drivers/firmware/qcom_scm-64.c b/drivers/firmware/qcom_scm-64.c
+index 4a0f5ead4fb5..1e2e5198db53 100644
+--- a/drivers/firmware/qcom_scm-64.c
++++ b/drivers/firmware/qcom_scm-64.c
+@@ -91,6 +91,7 @@ static int qcom_scm_call(struct device *dev, u32 svc_id, u32 cmd_id,
+ dma_addr_t args_phys = 0;
+ void *args_virt = NULL;
+ size_t alloc_len;
++ struct arm_smccc_quirk quirk = {.id = ARM_SMCCC_QUIRK_QCOM_A6};
+
+ if (unlikely(arglen > N_REGISTER_ARGS)) {
+ alloc_len = N_EXT_QCOM_SCM_ARGS * sizeof(u64);
+@@ -131,10 +132,16 @@ static int qcom_scm_call(struct device *dev, u32 svc_id, u32 cmd_id,
+ qcom_smccc_convention,
+ ARM_SMCCC_OWNER_SIP, fn_id);
+
++ quirk.state.a6 = 0;
++
+ do {
+- arm_smccc_smc(cmd, desc->arginfo, desc->args[0],
+- desc->args[1], desc->args[2], x5, 0, 0,
+- res);
++ arm_smccc_smc_quirk(cmd, desc->arginfo, desc->args[0],
++ desc->args[1], desc->args[2], x5,
++ quirk.state.a6, 0, res, &quirk);
++
++ if (res->a0 == QCOM_SCM_INTERRUPTED)
++ cmd = res->a0;
++
+ } while (res->a0 == QCOM_SCM_INTERRUPTED);
+
+ mutex_unlock(&qcom_scm_lock);
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index a3faefa44f68..d3f9f028a37b 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -572,8 +572,10 @@ struct gpio_desc *acpi_find_gpio(struct device *dev,
+ }
+
+ desc = acpi_get_gpiod_by_index(adev, propname, idx, &info);
+- if (!IS_ERR(desc) || (PTR_ERR(desc) == -EPROBE_DEFER))
++ if (!IS_ERR(desc))
+ break;
++ if (PTR_ERR(desc) == -EPROBE_DEFER)
++ return ERR_CAST(desc);
+ }
+
+ /* Then from plain _CRS GPIOs */
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index ec6474b01dbc..7cce86933000 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -90,7 +90,7 @@ struct detailed_mode_closure {
+ #define LEVEL_GTF2 2
+ #define LEVEL_CVT 3
+
+-static struct edid_quirk {
++static const struct edid_quirk {
+ char vendor[4];
+ int product_id;
+ u32 quirks;
+@@ -1480,7 +1480,7 @@ EXPORT_SYMBOL(drm_edid_duplicate);
+ *
+ * Returns true if @vendor is in @edid, false otherwise
+ */
+-static bool edid_vendor(struct edid *edid, char *vendor)
++static bool edid_vendor(struct edid *edid, const char *vendor)
+ {
+ char edid_vendor[3];
+
+@@ -1500,7 +1500,7 @@ static bool edid_vendor(struct edid *edid, char *vendor)
+ */
+ static u32 edid_get_quirks(struct edid *edid)
+ {
+- struct edid_quirk *quirk;
++ const struct edid_quirk *quirk;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(edid_quirk_list); i++) {
+diff --git a/drivers/gpu/drm/i915/gvt/kvmgt.c b/drivers/gpu/drm/i915/gvt/kvmgt.c
+index 325cb9b55989..5f30a0716531 100644
+--- a/drivers/gpu/drm/i915/gvt/kvmgt.c
++++ b/drivers/gpu/drm/i915/gvt/kvmgt.c
+@@ -1422,7 +1422,7 @@ static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa,
+ {
+ struct kvmgt_guest_info *info;
+ struct kvm *kvm;
+- int ret;
++ int idx, ret;
+ bool kthread = current->mm == NULL;
+
+ if (!handle_valid(handle))
+@@ -1434,8 +1434,10 @@ static int kvmgt_rw_gpa(unsigned long handle, unsigned long gpa,
+ if (kthread)
+ use_mm(kvm->mm);
+
++ idx = srcu_read_lock(&kvm->srcu);
+ ret = write ? kvm_write_guest(kvm, gpa, buf, len) :
+ kvm_read_guest(kvm, gpa, buf, len);
++ srcu_read_unlock(&kvm->srcu, idx);
+
+ if (kthread)
+ unuse_mm(kvm->mm);
+diff --git a/drivers/gpu/drm/i915/gvt/sched_policy.c b/drivers/gpu/drm/i915/gvt/sched_policy.c
+index 678b0be85376..3635dbe328ef 100644
+--- a/drivers/gpu/drm/i915/gvt/sched_policy.c
++++ b/drivers/gpu/drm/i915/gvt/sched_policy.c
+@@ -101,7 +101,7 @@ struct tbs_sched_data {
+ struct list_head runq_head;
+ };
+
+-#define GVT_DEFAULT_TIME_SLICE (1 * HZ / 1000)
++#define GVT_DEFAULT_TIME_SLICE (msecs_to_jiffies(1))
+
+ static void tbs_sched_func(struct work_struct *work)
+ {
+@@ -224,7 +224,7 @@ static void tbs_sched_start_schedule(struct intel_vgpu *vgpu)
+ return;
+
+ list_add_tail(&vgpu_data->list, &sched_data->runq_head);
+- schedule_delayed_work(&sched_data->work, sched_data->period);
++ schedule_delayed_work(&sched_data->work, 0);
+ }
+
+ static void tbs_sched_stop_schedule(struct intel_vgpu *vgpu)
+diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
+index fce8e198bc76..08e274e16165 100644
+--- a/drivers/gpu/drm/i915/i915_pci.c
++++ b/drivers/gpu/drm/i915/i915_pci.c
+@@ -421,6 +421,7 @@ static const struct pci_device_id pciidlist[] = {
+ INTEL_VLV_IDS(&intel_valleyview_info),
+ INTEL_BDW_GT12_IDS(&intel_broadwell_info),
+ INTEL_BDW_GT3_IDS(&intel_broadwell_gt3_info),
++ INTEL_BDW_RSVD_IDS(&intel_broadwell_info),
+ INTEL_CHV_IDS(&intel_cherryview_info),
+ INTEL_SKL_GT1_IDS(&intel_skylake_info),
+ INTEL_SKL_GT2_IDS(&intel_skylake_info),
+diff --git a/drivers/gpu/drm/mga/mga_dma.c b/drivers/gpu/drm/mga/mga_dma.c
+index 1f2f9ca25901..4556e2b13ac5 100644
+--- a/drivers/gpu/drm/mga/mga_dma.c
++++ b/drivers/gpu/drm/mga/mga_dma.c
+@@ -392,6 +392,24 @@ int mga_driver_load(struct drm_device *dev, unsigned long flags)
+ drm_mga_private_t *dev_priv;
+ int ret;
+
++ /* There are PCI versions of the G450. These cards have the
++ * same PCI ID as the AGP G450, but have an additional PCI-to-PCI
++ * bridge chip. We detect these cards, which are not currently
++ * supported by this driver, by looking at the device ID of the
++ * bus the "card" is on. If vendor is 0x3388 (Hint Corp) and the
++ * device is 0x0021 (HB6 Universal PCI-PCI bridge), we reject the
++ * device.
++ */
++ if ((dev->pdev->device == 0x0525) && dev->pdev->bus->self
++ && (dev->pdev->bus->self->vendor == 0x3388)
++ && (dev->pdev->bus->self->device == 0x0021)
++ && dev->agp) {
++ /* FIXME: This should be quirked in the pci core, but oh well
++ * the hw probably stopped existing. */
++ arch_phys_wc_del(dev->agp->agp_mtrr);
++ kfree(dev->agp);
++ dev->agp = NULL;
++ }
+ dev_priv = kzalloc(sizeof(drm_mga_private_t), GFP_KERNEL);
+ if (!dev_priv)
+ return -ENOMEM;
+@@ -698,7 +716,7 @@ static int mga_do_pci_dma_bootstrap(struct drm_device *dev,
+ static int mga_do_dma_bootstrap(struct drm_device *dev,
+ drm_mga_dma_bootstrap_t *dma_bs)
+ {
+- const int is_agp = (dma_bs->agp_mode != 0) && drm_pci_device_is_agp(dev);
++ const int is_agp = (dma_bs->agp_mode != 0) && dev->agp;
+ int err;
+ drm_mga_private_t *const dev_priv =
+ (drm_mga_private_t *) dev->dev_private;
+diff --git a/drivers/gpu/drm/mga/mga_drv.c b/drivers/gpu/drm/mga/mga_drv.c
+index 25b2a1a424e6..63ba0699d107 100644
+--- a/drivers/gpu/drm/mga/mga_drv.c
++++ b/drivers/gpu/drm/mga/mga_drv.c
+@@ -37,8 +37,6 @@
+
+ #include <drm/drm_pciids.h>
+
+-static int mga_driver_device_is_agp(struct drm_device *dev);
+-
+ static struct pci_device_id pciidlist[] = {
+ mga_PCI_IDS
+ };
+@@ -66,7 +64,6 @@ static struct drm_driver driver = {
+ .lastclose = mga_driver_lastclose,
+ .set_busid = drm_pci_set_busid,
+ .dma_quiescent = mga_driver_dma_quiescent,
+- .device_is_agp = mga_driver_device_is_agp,
+ .get_vblank_counter = mga_get_vblank_counter,
+ .enable_vblank = mga_enable_vblank,
+ .disable_vblank = mga_disable_vblank,
+@@ -107,37 +104,3 @@ module_exit(mga_exit);
+ MODULE_AUTHOR(DRIVER_AUTHOR);
+ MODULE_DESCRIPTION(DRIVER_DESC);
+ MODULE_LICENSE("GPL and additional rights");
+-
+-/**
+- * Determine if the device really is AGP or not.
+- *
+- * In addition to the usual tests performed by \c drm_device_is_agp, this
+- * function detects PCI G450 cards that appear to the system exactly like
+- * AGP G450 cards.
+- *
+- * \param dev The device to be tested.
+- *
+- * \returns
+- * If the device is a PCI G450, zero is returned. Otherwise 2 is returned.
+- */
+-static int mga_driver_device_is_agp(struct drm_device *dev)
+-{
+- const struct pci_dev *const pdev = dev->pdev;
+-
+- /* There are PCI versions of the G450. These cards have the
+- * same PCI ID as the AGP G450, but have an additional PCI-to-PCI
+- * bridge chip. We detect these cards, which are not currently
+- * supported by this driver, by looking at the device ID of the
+- * bus the "card" is on. If vendor is 0x3388 (Hint Corp) and the
+- * device is 0x0021 (HB6 Universal PCI-PCI bridge), we reject the
+- * device.
+- */
+-
+- if ((pdev->device == 0x0525) && pdev->bus->self
+- && (pdev->bus->self->vendor == 0x3388)
+- && (pdev->bus->self->device == 0x0021)) {
+- return 0;
+- }
+-
+- return 2;
+-}
+diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+index b8647198c11c..657874077400 100644
+--- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c
+@@ -846,7 +846,9 @@ static const struct adreno_gpu_funcs funcs = {
+ .idle = a5xx_idle,
+ .irq = a5xx_irq,
+ .destroy = a5xx_destroy,
++#ifdef CONFIG_DEBUG_FS
+ .show = a5xx_show,
++#endif
+ },
+ .get_timestamp = a5xx_get_timestamp,
+ };
+diff --git a/drivers/gpu/drm/ttm/ttm_object.c b/drivers/gpu/drm/ttm/ttm_object.c
+index 4f5fa8d65fe9..144367c0c28f 100644
+--- a/drivers/gpu/drm/ttm/ttm_object.c
++++ b/drivers/gpu/drm/ttm/ttm_object.c
+@@ -179,7 +179,7 @@ int ttm_base_object_init(struct ttm_object_file *tfile,
+ if (unlikely(ret != 0))
+ goto out_err0;
+
+- ret = ttm_ref_object_add(tfile, base, TTM_REF_USAGE, NULL);
++ ret = ttm_ref_object_add(tfile, base, TTM_REF_USAGE, NULL, false);
+ if (unlikely(ret != 0))
+ goto out_err1;
+
+@@ -318,7 +318,8 @@ EXPORT_SYMBOL(ttm_ref_object_exists);
+
+ int ttm_ref_object_add(struct ttm_object_file *tfile,
+ struct ttm_base_object *base,
+- enum ttm_ref_type ref_type, bool *existed)
++ enum ttm_ref_type ref_type, bool *existed,
++ bool require_existed)
+ {
+ struct drm_open_hash *ht = &tfile->ref_hash[ref_type];
+ struct ttm_ref_object *ref;
+@@ -345,6 +346,9 @@ int ttm_ref_object_add(struct ttm_object_file *tfile,
+ }
+
+ rcu_read_unlock();
++ if (require_existed)
++ return -EPERM;
++
+ ret = ttm_mem_global_alloc(mem_glob, sizeof(*ref),
+ false, false);
+ if (unlikely(ret != 0))
+@@ -635,7 +639,7 @@ int ttm_prime_fd_to_handle(struct ttm_object_file *tfile,
+ prime = (struct ttm_prime_object *) dma_buf->priv;
+ base = &prime->base;
+ *handle = base->hash.key;
+- ret = ttm_ref_object_add(tfile, base, TTM_REF_USAGE, NULL);
++ ret = ttm_ref_object_add(tfile, base, TTM_REF_USAGE, NULL, false);
+
+ dma_buf_put(dma_buf);
+
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
+index 6541dd8b82dc..6b2708b4eafe 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
+@@ -538,7 +538,7 @@ int vmw_fence_create(struct vmw_fence_manager *fman,
+ struct vmw_fence_obj **p_fence)
+ {
+ struct vmw_fence_obj *fence;
+- int ret;
++ int ret;
+
+ fence = kzalloc(sizeof(*fence), GFP_KERNEL);
+ if (unlikely(fence == NULL))
+@@ -701,6 +701,41 @@ void vmw_fence_fifo_up(struct vmw_fence_manager *fman)
+ }
+
+
++/**
++ * vmw_fence_obj_lookup - Look up a user-space fence object
++ *
++ * @tfile: A struct ttm_object_file identifying the caller.
++ * @handle: A handle identifying the fence object.
++ * @return: A struct vmw_user_fence base ttm object on success or
++ * an error pointer on failure.
++ *
++ * The fence object is looked up and type-checked. The caller needs
++ * to have opened the fence object first, but since that happens on
++ * creation and fence objects aren't shareable, that's not an
++ * issue currently.
++ */
++static struct ttm_base_object *
++vmw_fence_obj_lookup(struct ttm_object_file *tfile, u32 handle)
++{
++ struct ttm_base_object *base = ttm_base_object_lookup(tfile, handle);
++
++ if (!base) {
++ pr_err("Invalid fence object handle 0x%08lx.\n",
++ (unsigned long)handle);
++ return ERR_PTR(-EINVAL);
++ }
++
++ if (base->refcount_release != vmw_user_fence_base_release) {
++ pr_err("Invalid fence object handle 0x%08lx.\n",
++ (unsigned long)handle);
++ ttm_base_object_unref(&base);
++ return ERR_PTR(-EINVAL);
++ }
++
++ return base;
++}
++
++
+ int vmw_fence_obj_wait_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file_priv)
+ {
+@@ -726,13 +761,9 @@ int vmw_fence_obj_wait_ioctl(struct drm_device *dev, void *data,
+ arg->kernel_cookie = jiffies + wait_timeout;
+ }
+
+- base = ttm_base_object_lookup(tfile, arg->handle);
+- if (unlikely(base == NULL)) {
+- printk(KERN_ERR "Wait invalid fence object handle "
+- "0x%08lx.\n",
+- (unsigned long)arg->handle);
+- return -EINVAL;
+- }
++ base = vmw_fence_obj_lookup(tfile, arg->handle);
++ if (IS_ERR(base))
++ return PTR_ERR(base);
+
+ fence = &(container_of(base, struct vmw_user_fence, base)->fence);
+
+@@ -771,13 +802,9 @@ int vmw_fence_obj_signaled_ioctl(struct drm_device *dev, void *data,
+ struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
+ struct vmw_private *dev_priv = vmw_priv(dev);
+
+- base = ttm_base_object_lookup(tfile, arg->handle);
+- if (unlikely(base == NULL)) {
+- printk(KERN_ERR "Fence signaled invalid fence object handle "
+- "0x%08lx.\n",
+- (unsigned long)arg->handle);
+- return -EINVAL;
+- }
++ base = vmw_fence_obj_lookup(tfile, arg->handle);
++ if (IS_ERR(base))
++ return PTR_ERR(base);
+
+ fence = &(container_of(base, struct vmw_user_fence, base)->fence);
+ fman = fman_from_fence(fence);
+@@ -1024,6 +1051,7 @@ int vmw_fence_event_ioctl(struct drm_device *dev, void *data,
+ (struct drm_vmw_fence_event_arg *) data;
+ struct vmw_fence_obj *fence = NULL;
+ struct vmw_fpriv *vmw_fp = vmw_fpriv(file_priv);
++ struct ttm_object_file *tfile = vmw_fp->tfile;
+ struct drm_vmw_fence_rep __user *user_fence_rep =
+ (struct drm_vmw_fence_rep __user *)(unsigned long)
+ arg->fence_rep;
+@@ -1037,24 +1065,18 @@ int vmw_fence_event_ioctl(struct drm_device *dev, void *data,
+ */
+ if (arg->handle) {
+ struct ttm_base_object *base =
+- ttm_base_object_lookup_for_ref(dev_priv->tdev,
+- arg->handle);
+-
+- if (unlikely(base == NULL)) {
+- DRM_ERROR("Fence event invalid fence object handle "
+- "0x%08lx.\n",
+- (unsigned long)arg->handle);
+- return -EINVAL;
+- }
++ vmw_fence_obj_lookup(tfile, arg->handle);
++
++ if (IS_ERR(base))
++ return PTR_ERR(base);
++
+ fence = &(container_of(base, struct vmw_user_fence,
+ base)->fence);
+ (void) vmw_fence_obj_reference(fence);
+
+ if (user_fence_rep != NULL) {
+- bool existed;
+-
+ ret = ttm_ref_object_add(vmw_fp->tfile, base,
+- TTM_REF_USAGE, &existed);
++ TTM_REF_USAGE, NULL, false);
+ if (unlikely(ret != 0)) {
+ DRM_ERROR("Failed to reference a fence "
+ "object.\n");
+@@ -1097,8 +1119,7 @@ int vmw_fence_event_ioctl(struct drm_device *dev, void *data,
+ return 0;
+ out_no_create:
+ if (user_fence_rep != NULL)
+- ttm_ref_object_base_unref(vmw_fpriv(file_priv)->tfile,
+- handle, TTM_REF_USAGE);
++ ttm_ref_object_base_unref(tfile, handle, TTM_REF_USAGE);
+ out_no_ref_obj:
+ vmw_fence_obj_unreference(&fence);
+ return ret;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c
+index b8c6a03c8c54..5ec24fd801cd 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ioctl.c
+@@ -114,8 +114,6 @@ int vmw_getparam_ioctl(struct drm_device *dev, void *data,
+ param->value = dev_priv->has_dx;
+ break;
+ default:
+- DRM_ERROR("Illegal vmwgfx get param request: %d\n",
+- param->param);
+ return -EINVAL;
+ }
+
+@@ -186,7 +184,7 @@ int vmw_get_cap_3d_ioctl(struct drm_device *dev, void *data,
+ bool gb_objects = !!(dev_priv->capabilities & SVGA_CAP_GBOBJECTS);
+ struct vmw_fpriv *vmw_fp = vmw_fpriv(file_priv);
+
+- if (unlikely(arg->pad64 != 0)) {
++ if (unlikely(arg->pad64 != 0 || arg->max_size == 0)) {
+ DRM_ERROR("Illegal GET_3D_CAP argument.\n");
+ return -EINVAL;
+ }
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+index 8e86d6d4141b..53fa9f1c1d10 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
+@@ -589,7 +589,7 @@ static int vmw_user_dmabuf_synccpu_grab(struct vmw_user_dma_buffer *user_bo,
+ return ret;
+
+ ret = ttm_ref_object_add(tfile, &user_bo->prime.base,
+- TTM_REF_SYNCCPU_WRITE, &existed);
++ TTM_REF_SYNCCPU_WRITE, &existed, false);
+ if (ret != 0 || existed)
+ ttm_bo_synccpu_write_release(&user_bo->dma.base);
+
+@@ -773,7 +773,7 @@ int vmw_user_dmabuf_reference(struct ttm_object_file *tfile,
+
+ *handle = user_bo->prime.base.hash.key;
+ return ttm_ref_object_add(tfile, &user_bo->prime.base,
+- TTM_REF_USAGE, NULL);
++ TTM_REF_USAGE, NULL, false);
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+index b445ce9b9757..05fa092c942b 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
+@@ -713,11 +713,14 @@ int vmw_surface_define_ioctl(struct drm_device *dev, void *data,
+ 128;
+
+ num_sizes = 0;
+- for (i = 0; i < DRM_VMW_MAX_SURFACE_FACES; ++i)
++ for (i = 0; i < DRM_VMW_MAX_SURFACE_FACES; ++i) {
++ if (req->mip_levels[i] > DRM_VMW_MAX_MIP_LEVELS)
++ return -EINVAL;
+ num_sizes += req->mip_levels[i];
++ }
+
+- if (num_sizes > DRM_VMW_MAX_SURFACE_FACES *
+- DRM_VMW_MAX_MIP_LEVELS)
++ if (num_sizes > DRM_VMW_MAX_SURFACE_FACES * DRM_VMW_MAX_MIP_LEVELS ||
++ num_sizes == 0)
+ return -EINVAL;
+
+ size = vmw_user_surface_size + 128 +
+@@ -891,17 +894,16 @@ vmw_surface_handle_reference(struct vmw_private *dev_priv,
+ uint32_t handle;
+ struct ttm_base_object *base;
+ int ret;
++ bool require_exist = false;
+
+ if (handle_type == DRM_VMW_HANDLE_PRIME) {
+ ret = ttm_prime_fd_to_handle(tfile, u_handle, &handle);
+ if (unlikely(ret != 0))
+ return ret;
+ } else {
+- if (unlikely(drm_is_render_client(file_priv))) {
+- DRM_ERROR("Render client refused legacy "
+- "surface reference.\n");
+- return -EACCES;
+- }
++ if (unlikely(drm_is_render_client(file_priv)))
++ require_exist = true;
++
+ if (ACCESS_ONCE(vmw_fpriv(file_priv)->locked_master)) {
+ DRM_ERROR("Locked master refused legacy "
+ "surface reference.\n");
+@@ -929,17 +931,14 @@ vmw_surface_handle_reference(struct vmw_private *dev_priv,
+
+ /*
+ * Make sure the surface creator has the same
+- * authenticating master.
++ * authenticating master, or is already registered with us.
+ */
+ if (drm_is_primary_client(file_priv) &&
+- user_srf->master != file_priv->master) {
+- DRM_ERROR("Trying to reference surface outside of"
+- " master domain.\n");
+- ret = -EACCES;
+- goto out_bad_resource;
+- }
++ user_srf->master != file_priv->master)
++ require_exist = true;
+
+- ret = ttm_ref_object_add(tfile, base, TTM_REF_USAGE, NULL);
++ ret = ttm_ref_object_add(tfile, base, TTM_REF_USAGE, NULL,
++ require_exist);
+ if (unlikely(ret != 0)) {
+ DRM_ERROR("Could not add a reference to a surface.\n");
+ goto out_bad_resource;
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 672145b0d8f5..6ef4f2fcfe43 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -3290,6 +3290,9 @@ int wacom_setup_pad_input_capabilities(struct input_dev *input_dev,
+ {
+ struct wacom_features *features = &wacom_wac->features;
+
++ if ((features->type == HID_GENERIC) && features->numbered_buttons > 0)
++ features->device_type |= WACOM_DEVICETYPE_PAD;
++
+ if (!(features->device_type & WACOM_DEVICETYPE_PAD))
+ return -ENODEV;
+
+diff --git a/drivers/iio/gyro/bmg160_core.c b/drivers/iio/gyro/bmg160_core.c
+index f7fcfa886f72..821919dd245b 100644
+--- a/drivers/iio/gyro/bmg160_core.c
++++ b/drivers/iio/gyro/bmg160_core.c
+@@ -27,6 +27,7 @@
+ #include <linux/iio/trigger_consumer.h>
+ #include <linux/iio/triggered_buffer.h>
+ #include <linux/regmap.h>
++#include <linux/delay.h>
+ #include "bmg160.h"
+
+ #define BMG160_IRQ_NAME "bmg160_event"
+@@ -52,6 +53,9 @@
+ #define BMG160_DEF_BW 100
+ #define BMG160_REG_PMU_BW_RES BIT(7)
+
++#define BMG160_GYRO_REG_RESET 0x14
++#define BMG160_GYRO_RESET_VAL 0xb6
++
+ #define BMG160_REG_INT_MAP_0 0x17
+ #define BMG160_INT_MAP_0_BIT_ANY BIT(1)
+
+@@ -236,6 +240,14 @@ static int bmg160_chip_init(struct bmg160_data *data)
+ int ret;
+ unsigned int val;
+
++ /*
++ * Reset chip to get it in a known good state. A delay of 30ms after
++ * reset is required according to the datasheet.
++ */
++ regmap_write(data->regmap, BMG160_GYRO_REG_RESET,
++ BMG160_GYRO_RESET_VAL);
++ usleep_range(30000, 30700);
++
+ ret = regmap_read(data->regmap, BMG160_REG_CHIP_ID, &val);
+ if (ret < 0) {
+ dev_err(dev, "Error reading reg_chip_id\n");
+diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c
+index aaca42862389..d9c15e411e10 100644
+--- a/drivers/iio/industrialio-core.c
++++ b/drivers/iio/industrialio-core.c
+@@ -608,10 +608,9 @@ static ssize_t __iio_format_value(char *buf, size_t len, unsigned int type,
+ tmp0 = (int)div_s64_rem(tmp, 1000000000, &tmp1);
+ return snprintf(buf, len, "%d.%09u", tmp0, abs(tmp1));
+ case IIO_VAL_FRACTIONAL_LOG2:
+- tmp = (s64)vals[0] * 1000000000LL >> vals[1];
+- tmp1 = do_div(tmp, 1000000000LL);
+- tmp0 = tmp;
+- return snprintf(buf, len, "%d.%09u", tmp0, tmp1);
++ tmp = shift_right((s64)vals[0] * 1000000000LL, vals[1]);
++ tmp0 = (int)div_s64_rem(tmp, 1000000000LL, &tmp1);
++ return snprintf(buf, len, "%d.%09u", tmp0, abs(tmp1));
+ case IIO_VAL_INT_MULTIPLE:
+ {
+ int i;
+diff --git a/drivers/iio/pressure/st_pressure_core.c b/drivers/iio/pressure/st_pressure_core.c
+index e19e0787864c..f82560a4f772 100644
+--- a/drivers/iio/pressure/st_pressure_core.c
++++ b/drivers/iio/pressure/st_pressure_core.c
+@@ -455,6 +455,7 @@ static const struct st_sensor_settings st_press_sensors_settings[] = {
+ .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
+ },
+ .multi_read_bit = true,
++ .bootime = 2,
+ },
+ };
+
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 4a157b0f4155..fd4f3ace200b 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -3594,7 +3594,7 @@ static int raid_preresume(struct dm_target *ti)
+ return r;
+
+ /* Resize bitmap to adjust to changed region size (aka MD bitmap chunksize) */
+- if (test_bit(RT_FLAG_RS_BITMAP_LOADED, &rs->runtime_flags) &&
++ if (test_bit(RT_FLAG_RS_BITMAP_LOADED, &rs->runtime_flags) && mddev->bitmap &&
+ mddev->bitmap_info.chunksize != to_bytes(rs->requested_bitmap_chunk_sectors)) {
+ r = bitmap_resize(mddev->bitmap, mddev->dev_sectors,
+ to_bytes(rs->requested_bitmap_chunk_sectors), 0);
+diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
+index 0f0eb8a3d922..78f36012eaca 100644
+--- a/drivers/md/dm-verity-fec.c
++++ b/drivers/md/dm-verity-fec.c
+@@ -146,8 +146,6 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio,
+ block = fec_buffer_rs_block(v, fio, n, i);
+ res = fec_decode_rs8(v, fio, block, &par[offset], neras);
+ if (res < 0) {
+- dm_bufio_release(buf);
+-
+ r = res;
+ goto error;
+ }
+@@ -172,6 +170,8 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_fec_io *fio,
+ done:
+ r = corrected;
+ error:
++ dm_bufio_release(buf);
++
+ if (r < 0 && neras)
+ DMERR_LIMIT("%s: FEC %llu: failed to correct: %d",
+ v->data_dev->name, (unsigned long long)rsb, r);
+@@ -269,7 +269,7 @@ static int fec_read_bufs(struct dm_verity *v, struct dm_verity_io *io,
+ &is_zero) == 0) {
+ /* skip known zero blocks entirely */
+ if (is_zero)
+- continue;
++ goto done;
+
+ /*
+ * skip if we have already found the theoretical
+@@ -439,6 +439,13 @@ int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io,
+ if (!verity_fec_is_enabled(v))
+ return -EOPNOTSUPP;
+
++ if (fio->level >= DM_VERITY_FEC_MAX_RECURSION) {
++ DMWARN_LIMIT("%s: FEC: recursion too deep", v->data_dev->name);
++ return -EIO;
++ }
++
++ fio->level++;
++
+ if (type == DM_VERITY_BLOCK_TYPE_METADATA)
+ block += v->data_blocks;
+
+@@ -470,7 +477,7 @@ int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io,
+ if (r < 0) {
+ r = fec_decode_rsb(v, io, fio, rsb, offset, true);
+ if (r < 0)
+- return r;
++ goto done;
+ }
+
+ if (dest)
+@@ -480,6 +487,8 @@ int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io,
+ r = verity_for_bv_block(v, io, iter, fec_bv_copy);
+ }
+
++done:
++ fio->level--;
+ return r;
+ }
+
+@@ -520,6 +529,7 @@ void verity_fec_init_io(struct dm_verity_io *io)
+ memset(fio->bufs, 0, sizeof(fio->bufs));
+ fio->nbufs = 0;
+ fio->output = NULL;
++ fio->level = 0;
+ }
+
+ /*
+diff --git a/drivers/md/dm-verity-fec.h b/drivers/md/dm-verity-fec.h
+index 7fa0298b995e..bb31ce87a933 100644
+--- a/drivers/md/dm-verity-fec.h
++++ b/drivers/md/dm-verity-fec.h
+@@ -27,6 +27,9 @@
+ #define DM_VERITY_FEC_BUF_MAX \
+ (1 << (PAGE_SHIFT - DM_VERITY_FEC_BUF_RS_BITS))
+
++/* maximum recursion level for verity_fec_decode */
++#define DM_VERITY_FEC_MAX_RECURSION 4
++
+ #define DM_VERITY_OPT_FEC_DEV "use_fec_from_device"
+ #define DM_VERITY_OPT_FEC_BLOCKS "fec_blocks"
+ #define DM_VERITY_OPT_FEC_START "fec_start"
+@@ -58,6 +61,7 @@ struct dm_verity_fec_io {
+ unsigned nbufs; /* number of buffers allocated */
+ u8 *output; /* buffer for corrected output */
+ size_t output_pos;
++ unsigned level; /* recursion level */
+ };
+
+ #ifdef CONFIG_DM_VERITY_FEC
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 9a6eb4492172..364f6b87a728 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -569,16 +569,19 @@ static const struct sdhci_ops sdhci_esdhc_le_ops = {
+ };
+
+ static const struct sdhci_pltfm_data sdhci_esdhc_be_pdata = {
+- .quirks = ESDHC_DEFAULT_QUIRKS | SDHCI_QUIRK_BROKEN_CARD_DETECTION
+- | SDHCI_QUIRK_NO_CARD_NO_RESET
+- | SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
++ .quirks = ESDHC_DEFAULT_QUIRKS |
++#ifdef CONFIG_PPC
++ SDHCI_QUIRK_BROKEN_CARD_DETECTION |
++#endif
++ SDHCI_QUIRK_NO_CARD_NO_RESET |
++ SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
+ .ops = &sdhci_esdhc_be_ops,
+ };
+
+ static const struct sdhci_pltfm_data sdhci_esdhc_le_pdata = {
+- .quirks = ESDHC_DEFAULT_QUIRKS | SDHCI_QUIRK_BROKEN_CARD_DETECTION
+- | SDHCI_QUIRK_NO_CARD_NO_RESET
+- | SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
++ .quirks = ESDHC_DEFAULT_QUIRKS |
++ SDHCI_QUIRK_NO_CARD_NO_RESET |
++ SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
+ .ops = &sdhci_esdhc_le_ops,
+ };
+
+@@ -643,8 +646,7 @@ static int sdhci_esdhc_probe(struct platform_device *pdev)
+ of_device_is_compatible(np, "fsl,p5020-esdhc") ||
+ of_device_is_compatible(np, "fsl,p4080-esdhc") ||
+ of_device_is_compatible(np, "fsl,p1020-esdhc") ||
+- of_device_is_compatible(np, "fsl,t1040-esdhc") ||
+- of_device_is_compatible(np, "fsl,ls1021a-esdhc"))
++ of_device_is_compatible(np, "fsl,t1040-esdhc"))
+ host->quirks &= ~SDHCI_QUIRK_BROKEN_CARD_DETECTION;
+
+ if (of_device_is_compatible(np, "fsl,ls1021a-esdhc"))
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
+index de19c7c92bc6..85d949e03f79 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
+@@ -2238,14 +2238,16 @@ int brcmf_p2p_del_vif(struct wiphy *wiphy, struct wireless_dev *wdev)
+ struct brcmf_cfg80211_info *cfg = wiphy_priv(wiphy);
+ struct brcmf_p2p_info *p2p = &cfg->p2p;
+ struct brcmf_cfg80211_vif *vif;
++ enum nl80211_iftype iftype;
+ bool wait_for_disable = false;
+ int err;
+
+ brcmf_dbg(TRACE, "delete P2P vif\n");
+ vif = container_of(wdev, struct brcmf_cfg80211_vif, wdev);
+
++ iftype = vif->wdev.iftype;
+ brcmf_cfg80211_arm_vif_event(cfg, vif);
+- switch (vif->wdev.iftype) {
++ switch (iftype) {
+ case NL80211_IFTYPE_P2P_CLIENT:
+ if (test_bit(BRCMF_VIF_STATUS_DISCONNECTING, &vif->sme_state))
+ wait_for_disable = true;
+@@ -2275,7 +2277,7 @@ int brcmf_p2p_del_vif(struct wiphy *wiphy, struct wireless_dev *wdev)
+ BRCMF_P2P_DISABLE_TIMEOUT);
+
+ err = 0;
+- if (vif->wdev.iftype != NL80211_IFTYPE_P2P_DEVICE) {
++ if (iftype != NL80211_IFTYPE_P2P_DEVICE) {
+ brcmf_vif_clear_mgmt_ies(vif);
+ err = brcmf_p2p_release_p2p_if(vif);
+ }
+@@ -2291,7 +2293,7 @@ int brcmf_p2p_del_vif(struct wiphy *wiphy, struct wireless_dev *wdev)
+ brcmf_remove_interface(vif->ifp, true);
+
+ brcmf_cfg80211_arm_vif_event(cfg, NULL);
+- if (vif->wdev.iftype != NL80211_IFTYPE_P2P_DEVICE)
++ if (iftype != NL80211_IFTYPE_P2P_DEVICE)
+ p2p->bss_idx[P2PAPI_BSSCFG_CONNECTION].vif = NULL;
+
+ return err;
+diff --git a/drivers/pci/host/pci-thunder-pem.c b/drivers/pci/host/pci-thunder-pem.c
+index e354010fb006..cea581414e10 100644
+--- a/drivers/pci/host/pci-thunder-pem.c
++++ b/drivers/pci/host/pci-thunder-pem.c
+@@ -14,6 +14,7 @@
+ * Copyright (C) 2015 - 2016 Cavium, Inc.
+ */
+
++#include <linux/bitfield.h>
+ #include <linux/kernel.h>
+ #include <linux/init.h>
+ #include <linux/of_address.h>
+@@ -319,6 +320,49 @@ static int thunder_pem_init(struct device *dev, struct pci_config_window *cfg,
+
+ #if defined(CONFIG_ACPI) && defined(CONFIG_PCI_QUIRKS)
+
++#define PEM_RES_BASE 0x87e0c0000000UL
++#define PEM_NODE_MASK GENMASK(45, 44)
++#define PEM_INDX_MASK GENMASK(26, 24)
++#define PEM_MIN_DOM_IN_NODE 4
++#define PEM_MAX_DOM_IN_NODE 10
++
++static void thunder_pem_reserve_range(struct device *dev, int seg,
++ struct resource *r)
++{
++ resource_size_t start = r->start, end = r->end;
++ struct resource *res;
++ const char *regionid;
++
++ regionid = kasprintf(GFP_KERNEL, "PEM RC:%d", seg);
++ if (!regionid)
++ return;
++
++ res = request_mem_region(start, end - start + 1, regionid);
++ if (res)
++ res->flags &= ~IORESOURCE_BUSY;
++ else
++ kfree(regionid);
++
++ dev_info(dev, "%pR %s reserved\n", r,
++ res ? "has been" : "could not be");
++}
++
++static void thunder_pem_legacy_fw(struct acpi_pci_root *root,
++ struct resource *res_pem)
++{
++ int node = acpi_get_node(root->device->handle);
++ int index;
++
++ if (node == NUMA_NO_NODE)
++ node = 0;
++
++ index = root->segment - PEM_MIN_DOM_IN_NODE;
++ index -= node * PEM_MAX_DOM_IN_NODE;
++ res_pem->start = PEM_RES_BASE | FIELD_PREP(PEM_NODE_MASK, node) |
++ FIELD_PREP(PEM_INDX_MASK, index);
++ res_pem->flags = IORESOURCE_MEM;
++}
++
+ static int thunder_pem_acpi_init(struct pci_config_window *cfg)
+ {
+ struct device *dev = cfg->parent;
+@@ -332,9 +376,23 @@ static int thunder_pem_acpi_init(struct pci_config_window *cfg)
+ return -ENOMEM;
+
+ ret = acpi_get_rc_resources(dev, "CAVA02B", root->segment, res_pem);
++
++ /*
++	 * If we fail to gather resources, it means that we are running with old
++ * FW where we need to calculate PEM-specific resources manually.
++ */
+ if (ret) {
+- dev_err(dev, "can't get rc base address\n");
+- return ret;
++ thunder_pem_legacy_fw(root, res_pem);
++ /*
++ * Reserve 64K size PEM specific resources. The full 16M range
++ * size is required for thunder_pem_init() call.
++ */
++ res_pem->end = res_pem->start + SZ_64K - 1;
++ thunder_pem_reserve_range(dev, root->segment, res_pem);
++ res_pem->end = res_pem->start + SZ_16M - 1;
++
++ /* Reserve PCI configuration space as well. */
++ thunder_pem_reserve_range(dev, root->segment, &cfg->res);
+ }
+
+ return thunder_pem_init(dev, cfg, res_pem);
+diff --git a/drivers/pci/host/pci-xgene.c b/drivers/pci/host/pci-xgene.c
+index 7c3b54b9eb17..142a1669dd82 100644
+--- a/drivers/pci/host/pci-xgene.c
++++ b/drivers/pci/host/pci-xgene.c
+@@ -246,14 +246,11 @@ static int xgene_pcie_ecam_init(struct pci_config_window *cfg, u32 ipversion)
+ ret = xgene_get_csr_resource(adev, &csr);
+ if (ret) {
+ dev_err(dev, "can't get CSR resource\n");
+- kfree(port);
+ return ret;
+ }
+ port->csr_base = devm_ioremap_resource(dev, &csr);
+- if (IS_ERR(port->csr_base)) {
+- kfree(port);
+- return -ENOMEM;
+- }
++ if (IS_ERR(port->csr_base))
++ return PTR_ERR(port->csr_base);
+
+ port->cfg_base = cfg->win;
+ port->version = ipversion;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 024def5bb3fa..a171762048e7 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -1634,6 +1634,7 @@ static void quirk_pcie_mch(struct pci_dev *pdev)
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7520_MCH, quirk_pcie_mch);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7320_MCH, quirk_pcie_mch);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7525_MCH, quirk_pcie_mch);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0x1610, quirk_pcie_mch);
+
+
+ /*
+@@ -2240,6 +2241,27 @@ DECLARE_PCI_FIXUP_ENABLE(PCI_VENDOR_ID_BROADCOM,
+ PCI_DEVICE_ID_TIGON3_5719,
+ quirk_brcm_5719_limit_mrrs);
+
++#ifdef CONFIG_PCIE_IPROC_PLATFORM
++static void quirk_paxc_bridge(struct pci_dev *pdev)
++{
++ /* The PCI config space is shared with the PAXC root port and the first
++	 * Ethernet device. So, we need to work around this by telling the PCI
++ * code that the bridge is not an Ethernet device.
++ */
++ if (pdev->hdr_type == PCI_HEADER_TYPE_BRIDGE)
++ pdev->class = PCI_CLASS_BRIDGE_PCI << 8;
++
++ /* MPSS is not being set properly (as it is currently 0). This is
++ * because that area of the PCI config space is hard coded to zero, and
++ * is not modifiable by firmware. Set this to 2 (e.g., 512 byte MPS)
++ * so that the MPS can be set to the real max value.
++ */
++ pdev->pcie_mpss = 2;
++}
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x16cd, quirk_paxc_bridge);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_BROADCOM, 0x16f0, quirk_paxc_bridge);
++#endif
++
+ /* Originally in EDAC sources for i82875P:
+ * Intel tells BIOS developers to hide device 6 which
+ * configures the overflow device access containing
+@@ -3114,30 +3136,32 @@ static void quirk_remove_d3_delay(struct pci_dev *dev)
+ {
+ dev->d3_delay = 0;
+ }
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0c00, quirk_remove_d3_delay);
++/* C600 Series devices do not need 10ms d3_delay */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0412, quirk_remove_d3_delay);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0c00, quirk_remove_d3_delay);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0c0c, quirk_remove_d3_delay);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c31, quirk_remove_d3_delay);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c3a, quirk_remove_d3_delay);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c3d, quirk_remove_d3_delay);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c2d, quirk_remove_d3_delay);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c20, quirk_remove_d3_delay);
++/* Lynxpoint-H PCH devices do not need 10ms d3_delay */
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c02, quirk_remove_d3_delay);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c18, quirk_remove_d3_delay);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c1c, quirk_remove_d3_delay);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c20, quirk_remove_d3_delay);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c22, quirk_remove_d3_delay);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c26, quirk_remove_d3_delay);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c2d, quirk_remove_d3_delay);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c31, quirk_remove_d3_delay);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c3a, quirk_remove_d3_delay);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c3d, quirk_remove_d3_delay);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c4e, quirk_remove_d3_delay);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c02, quirk_remove_d3_delay);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x8c22, quirk_remove_d3_delay);
+ /* Intel Cherrytrail devices do not need 10ms d3_delay */
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2280, quirk_remove_d3_delay);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2298, quirk_remove_d3_delay);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x229c, quirk_remove_d3_delay);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b0, quirk_remove_d3_delay);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b5, quirk_remove_d3_delay);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b7, quirk_remove_d3_delay);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b8, quirk_remove_d3_delay);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22d8, quirk_remove_d3_delay);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22dc, quirk_remove_d3_delay);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b5, quirk_remove_d3_delay);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x22b7, quirk_remove_d3_delay);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x2298, quirk_remove_d3_delay);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x229c, quirk_remove_d3_delay);
+
+ /*
+ * Some devices may pass our check in pci_intx_mask_supported() if
+@@ -4137,6 +4161,26 @@ static int pci_quirk_intel_pch_acs(struct pci_dev *dev, u16 acs_flags)
+ }
+
+ /*
++ * These QCOM root ports do provide ACS-like features to disable peer
++ * transactions and validate bus numbers in requests, but do not provide an
++ * actual PCIe ACS capability. Hardware supports source validation but it
++ * will report the issue as Completer Abort instead of ACS Violation.
++ * Hardware doesn't support peer-to-peer and each root port is a root
++ * complex with unique segment numbers. It is not possible for one root
++ * port to pass traffic to another root port. All PCIe transactions are
++ * terminated inside the root port.
++ */
++static int pci_quirk_qcom_rp_acs(struct pci_dev *dev, u16 acs_flags)
++{
++ u16 flags = (PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF | PCI_ACS_SV);
++ int ret = acs_flags & ~flags ? 0 : 1;
++
++ dev_info(&dev->dev, "Using QCOM ACS Quirk (%d)\n", ret);
++
++ return ret;
++}
++
++/*
+ * Sunrise Point PCH root ports implement ACS, but unfortunately as shown in
+ * the datasheet (Intel 100 Series Chipset Family PCH Datasheet, Vol. 2,
+ * 12.1.46, 12.1.47)[1] this chipset uses dwords for the ACS capability and
+@@ -4151,15 +4195,35 @@ static int pci_quirk_intel_pch_acs(struct pci_dev *dev, u16 acs_flags)
+ *
+ * N.B. This doesn't fix what lspci shows.
+ *
++ * The 100 series chipset specification update includes this as errata #23[3].
++ *
++ * The 200 series chipset (Union Point) has the same bug according to the
++ * specification update (Intel 200 Series Chipset Family Platform Controller
++ * Hub, Specification Update, January 2017, Revision 001, Document# 335194-001,
++ * Errata 22)[4]. Per the datasheet[5], root port PCI Device IDs for this
++ * chipset include:
++ *
++ * 0xa290-0xa29f PCI Express Root port #{0-16}
++ * 0xa2e7-0xa2ee PCI Express Root port #{17-24}
++ *
+ * [1] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-2.html
+ * [2] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-datasheet-vol-1.html
++ * [3] http://www.intel.com/content/www/us/en/chipsets/100-series-chipset-spec-update.html
++ * [4] http://www.intel.com/content/www/us/en/chipsets/200-series-chipset-pch-spec-update.html
++ * [5] http://www.intel.com/content/www/us/en/chipsets/200-series-chipset-pch-datasheet-vol-1.html
+ */
+ static bool pci_quirk_intel_spt_pch_acs_match(struct pci_dev *dev)
+ {
+- return pci_is_pcie(dev) &&
+- pci_pcie_type(dev) == PCI_EXP_TYPE_ROOT_PORT &&
+- ((dev->device & ~0xf) == 0xa110 ||
+- (dev->device >= 0xa167 && dev->device <= 0xa16a));
++ if (!pci_is_pcie(dev) || pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT)
++ return false;
++
++ switch (dev->device) {
++ case 0xa110 ... 0xa11f: case 0xa167 ... 0xa16a: /* Sunrise Point */
++ case 0xa290 ... 0xa29f: case 0xa2e7 ... 0xa2ee: /* Union Point */
++ return true;
++ }
++
++ return false;
+ }
+
+ #define INTEL_SPT_ACS_CTRL (PCI_ACS_CAP + 4)
+@@ -4272,6 +4336,9 @@ static const struct pci_dev_acs_enabled {
+ /* I219 */
+ { PCI_VENDOR_ID_INTEL, 0x15b7, pci_quirk_mf_endpoint_acs },
+ { PCI_VENDOR_ID_INTEL, 0x15b8, pci_quirk_mf_endpoint_acs },
++ /* QCOM QDF2xxx root ports */
++ { 0x17cb, 0x400, pci_quirk_qcom_rp_acs },
++ { 0x17cb, 0x401, pci_quirk_qcom_rp_acs },
+ /* Intel PCH root ports */
+ { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_pch_acs },
+ { PCI_VENDOR_ID_INTEL, PCI_ANY_ID, pci_quirk_intel_spt_pch_acs },
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index 43cb680adbb4..8499d3ae4257 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -159,6 +159,8 @@ MODULE_LICENSE("GPL");
+ #define USB_INTEL_XUSB2PR 0xD0
+ #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31
+
++static const char * const ashs_ids[] = { "ATK4001", "ATK4002", NULL };
++
+ struct bios_args {
+ u32 arg0;
+ u32 arg1;
+@@ -2051,6 +2053,16 @@ static int asus_wmi_fan_init(struct asus_wmi *asus)
+ return 0;
+ }
+
++static bool ashs_present(void)
++{
++ int i = 0;
++ while (ashs_ids[i]) {
++ if (acpi_dev_found(ashs_ids[i++]))
++ return true;
++ }
++ return false;
++}
++
+ /*
+ * WMI Driver
+ */
+@@ -2095,6 +2107,13 @@ static int asus_wmi_add(struct platform_device *pdev)
+ if (err)
+ goto fail_leds;
+
++ asus_wmi_get_devstate(asus, ASUS_WMI_DEVID_WLAN, &result);
++ if (result & (ASUS_WMI_DSTS_PRESENCE_BIT | ASUS_WMI_DSTS_USER_BIT))
++ asus->driver->wlan_ctrl_by_user = 1;
++
++ if (asus->driver->wlan_ctrl_by_user && ashs_present())
++ asus->driver->quirks->no_rfkill = 1;
++
+ if (!asus->driver->quirks->no_rfkill) {
+ err = asus_wmi_rfkill_init(asus);
+ if (err)
+@@ -2134,10 +2153,6 @@ static int asus_wmi_add(struct platform_device *pdev)
+ if (err)
+ goto fail_debugfs;
+
+- asus_wmi_get_devstate(asus, ASUS_WMI_DEVID_WLAN, &result);
+- if (result & (ASUS_WMI_DSTS_PRESENCE_BIT | ASUS_WMI_DSTS_USER_BIT))
+- asus->driver->wlan_ctrl_by_user = 1;
+-
+ return 0;
+
+ fail_debugfs:
+diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
+index 7cbad0d45b9c..6ba270e0494d 100644
+--- a/drivers/staging/android/ashmem.c
++++ b/drivers/staging/android/ashmem.c
+@@ -409,6 +409,7 @@ static int ashmem_mmap(struct file *file, struct vm_area_struct *vma)
+ ret = PTR_ERR(vmfile);
+ goto out;
+ }
++ vmfile->f_mode |= FMODE_LSEEK;
+ asma->file = vmfile;
+ }
+ get_file(asma->file);
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 61ad6c3b20a0..f4eb807a2616 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -1075,15 +1075,15 @@ static int omap8250_no_handle_irq(struct uart_port *port)
+ }
+
+ static const u8 am3352_habit = OMAP_DMA_TX_KICK | UART_ERRATA_CLOCK_DISABLE;
+-static const u8 am4372_habit = UART_ERRATA_CLOCK_DISABLE;
++static const u8 dra742_habit = UART_ERRATA_CLOCK_DISABLE;
+
+ static const struct of_device_id omap8250_dt_ids[] = {
+ { .compatible = "ti,omap2-uart" },
+ { .compatible = "ti,omap3-uart" },
+ { .compatible = "ti,omap4-uart" },
+ { .compatible = "ti,am3352-uart", .data = &am3352_habit, },
+- { .compatible = "ti,am4372-uart", .data = &am4372_habit, },
+- { .compatible = "ti,dra742-uart", .data = &am4372_habit, },
++ { .compatible = "ti,am4372-uart", .data = &am3352_habit, },
++ { .compatible = "ti,dra742-uart", .data = &dra742_habit, },
+ {},
+ };
+ MODULE_DEVICE_TABLE(of, omap8250_dt_ids);
+@@ -1218,9 +1218,6 @@ static int omap8250_probe(struct platform_device *pdev)
+ priv->omap8250_dma.rx_size = RX_TRIGGER;
+ priv->omap8250_dma.rxconf.src_maxburst = RX_TRIGGER;
+ priv->omap8250_dma.txconf.dst_maxburst = TX_TRIGGER;
+-
+- if (of_machine_is_compatible("ti,am33xx"))
+- priv->habit |= OMAP_DMA_TX_KICK;
+ /*
+ * pause is currently not supported atleast on omap-sdma
+ * and edma on most earlier kernels.
+diff --git a/drivers/usb/chipidea/ci_hdrc_msm.c b/drivers/usb/chipidea/ci_hdrc_msm.c
+index 3889809fd0c4..37591a4b1346 100644
+--- a/drivers/usb/chipidea/ci_hdrc_msm.c
++++ b/drivers/usb/chipidea/ci_hdrc_msm.c
+@@ -24,7 +24,6 @@ static void ci_hdrc_msm_notify_event(struct ci_hdrc *ci, unsigned event)
+ switch (event) {
+ case CI_HDRC_CONTROLLER_RESET_EVENT:
+ dev_dbg(dev, "CI_HDRC_CONTROLLER_RESET_EVENT received\n");
+- writel(0, USB_AHBBURST);
+ /* use AHB transactor, allow posted data writes */
+ writel(0x8, USB_AHBMODE);
+ usb_phy_init(ci->usb_phy);
+@@ -47,7 +46,8 @@ static struct ci_hdrc_platform_data ci_hdrc_msm_platdata = {
+ .name = "ci_hdrc_msm",
+ .capoffset = DEF_CAPOFFSET,
+ .flags = CI_HDRC_REGS_SHARED |
+- CI_HDRC_DISABLE_STREAMING,
++ CI_HDRC_DISABLE_STREAMING |
++ CI_HDRC_OVERRIDE_AHB_BURST,
+
+ .notify_event = ci_hdrc_msm_notify_event,
+ };
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index a8a4fe4ffa30..16768abf7f7c 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -171,6 +171,7 @@ void dwc3_gadget_giveback(struct dwc3_ep *dep, struct dwc3_request *req,
+ int status)
+ {
+ struct dwc3 *dwc = dep->dwc;
++ unsigned int unmap_after_complete = false;
+
+ req->started = false;
+ list_del(&req->list);
+@@ -180,11 +181,19 @@ void dwc3_gadget_giveback(struct dwc3_ep *dep, struct dwc3_request *req,
+ if (req->request.status == -EINPROGRESS)
+ req->request.status = status;
+
+- if (dwc->ep0_bounced && dep->number <= 1)
++ /*
++ * NOTICE we don't want to unmap before calling ->complete() if we're
++ * dealing with a bounced ep0 request. If we unmap it here, we would end
++	 * up overwriting the contents of req->buf and this could confuse the
++ * gadget driver.
++ */
++ if (dwc->ep0_bounced && dep->number <= 1) {
+ dwc->ep0_bounced = false;
+-
+- usb_gadget_unmap_request_by_dev(dwc->sysdev,
+- &req->request, req->direction);
++ unmap_after_complete = true;
++ } else {
++ usb_gadget_unmap_request_by_dev(dwc->sysdev,
++ &req->request, req->direction);
++ }
+
+ trace_dwc3_gadget_giveback(req);
+
+@@ -192,6 +201,10 @@ void dwc3_gadget_giveback(struct dwc3_ep *dep, struct dwc3_request *req,
+ usb_gadget_giveback_request(&dep->endpoint, &req->request);
+ spin_lock(&dwc->lock);
+
++ if (unmap_after_complete)
++ usb_gadget_unmap_request_by_dev(dwc->sysdev,
++ &req->request, req->direction);
++
+ if (dep->number > 1)
+ pm_runtime_put(dwc->dev);
+ }
+diff --git a/drivers/usb/dwc3/host.c b/drivers/usb/dwc3/host.c
+index 487f0ff6ae25..76f0b0df37c1 100644
+--- a/drivers/usb/dwc3/host.c
++++ b/drivers/usb/dwc3/host.c
+@@ -54,11 +54,12 @@ static int dwc3_host_get_irq(struct dwc3 *dwc)
+
+ int dwc3_host_init(struct dwc3 *dwc)
+ {
+- struct property_entry props[2];
++ struct property_entry props[3];
+ struct platform_device *xhci;
+ int ret, irq;
+ struct resource *res;
+ struct platform_device *dwc3_pdev = to_platform_device(dwc->dev);
++ int prop_idx = 0;
+
+ irq = dwc3_host_get_irq(dwc);
+ if (irq < 0)
+@@ -97,8 +98,22 @@ int dwc3_host_init(struct dwc3 *dwc)
+
+ memset(props, 0, sizeof(struct property_entry) * ARRAY_SIZE(props));
+
+- if (dwc->usb3_lpm_capable) {
+- props[0].name = "usb3-lpm-capable";
++ if (dwc->usb3_lpm_capable)
++ props[prop_idx++].name = "usb3-lpm-capable";
++
++ /**
++ * WORKAROUND: dwc3 revisions <=3.00a have a limitation
++ * where Port Disable command doesn't work.
++ *
++ * The suggested workaround is that we avoid Port Disable
++ * completely.
++ *
++	 * The following flag tells XHCI to do just that.
++ */
++ if (dwc->revision <= DWC3_REVISION_300A)
++ props[prop_idx++].name = "quirk-broken-port-ped";
++
++ if (prop_idx) {
+ ret = platform_device_add_properties(xhci, props);
+ if (ret) {
+ dev_err(dwc->dev, "failed to add properties to xHCI\n");
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 0ef16900efed..1d41637a53e5 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -458,6 +458,12 @@ static void xhci_disable_port(struct usb_hcd *hcd, struct xhci_hcd *xhci,
+ return;
+ }
+
++ if (xhci->quirks & XHCI_BROKEN_PORT_PED) {
++ xhci_dbg(xhci,
++ "Broken Port Enabled/Disabled, ignoring port disable request.\n");
++ return;
++ }
++
+ /* Write 1 to disable the port */
+ writel(port_status | PORT_PE, addr);
+ port_status = readl(addr);
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index 9715200eb36e..bd02a6cd8e2c 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -232,6 +232,9 @@ static int xhci_plat_probe(struct platform_device *pdev)
+ if (device_property_read_bool(&pdev->dev, "usb3-lpm-capable"))
+ xhci->quirks |= XHCI_LPM_SUPPORT;
+
++ if (device_property_read_bool(&pdev->dev, "quirk-broken-port-ped"))
++ xhci->quirks |= XHCI_BROKEN_PORT_PED;
++
+ hcd->usb_phy = devm_usb_get_phy_by_phandle(&pdev->dev, "usb-phy", 0);
+ if (IS_ERR(hcd->usb_phy)) {
+ ret = PTR_ERR(hcd->usb_phy);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 2d7b6374b58d..ea18bf49c2eb 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1650,6 +1650,9 @@ struct xhci_hcd {
+ #define XHCI_SSIC_PORT_UNUSED (1 << 22)
+ #define XHCI_NO_64BIT_SUPPORT (1 << 23)
+ #define XHCI_MISSING_CAS (1 << 24)
++/* For controller with a broken Port Disable implementation */
++#define XHCI_BROKEN_PORT_PED (1 << 25)
++
+ unsigned int num_active_eps;
+ unsigned int limit_active_eps;
+ /* There are two roothubs to keep track of bus suspend info for */
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 16cc18369111..9129f6cb8230 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2071,6 +2071,20 @@ UNUSUAL_DEV( 0x1370, 0x6828, 0x0110, 0x0110,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_IGNORE_RESIDUE ),
+
++/*
++ * Reported by Tobias Jakobi <tjakobi@math.uni-bielefeld.de>
++ * The INIC-3619 bridge is used in the StarTech SLSODDU33B
++ * SATA-USB enclosure for slimline optical drives.
++ *
++ * The quirk enables MakeMKV to properly exchange keys with
++ * an installed BD drive.
++ */
++UNUSUAL_DEV( 0x13fd, 0x3609, 0x0209, 0x0209,
++ "Initio Corporation",
++ "INIC-3619",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_IGNORE_RESIDUE ),
++
+ /* Reported by Qinglin Ye <yestyle@gmail.com> */
+ UNUSUAL_DEV( 0x13fe, 0x3600, 0x0100, 0x0100,
+ "Kingston",
+diff --git a/drivers/watchdog/s3c2410_wdt.c b/drivers/watchdog/s3c2410_wdt.c
+index 59e95762a6de..c5a567a73f59 100644
+--- a/drivers/watchdog/s3c2410_wdt.c
++++ b/drivers/watchdog/s3c2410_wdt.c
+@@ -46,6 +46,7 @@
+ #define S3C2410_WTCON 0x00
+ #define S3C2410_WTDAT 0x04
+ #define S3C2410_WTCNT 0x08
++#define S3C2410_WTCLRINT 0x0c
+
+ #define S3C2410_WTCNT_MAXCNT 0xffff
+
+@@ -72,6 +73,7 @@
+ #define EXYNOS5_WDT_MASK_RESET_REG_OFFSET 0x040c
+ #define QUIRK_HAS_PMU_CONFIG (1 << 0)
+ #define QUIRK_HAS_RST_STAT (1 << 1)
++#define QUIRK_HAS_WTCLRINT_REG (1 << 2)
+
+ /* These quirks require that we have a PMU register map */
+ #define QUIRKS_HAVE_PMUREG (QUIRK_HAS_PMU_CONFIG | \
+@@ -143,13 +145,18 @@ static const struct s3c2410_wdt_variant drv_data_s3c2410 = {
+ };
+
+ #ifdef CONFIG_OF
++static const struct s3c2410_wdt_variant drv_data_s3c6410 = {
++ .quirks = QUIRK_HAS_WTCLRINT_REG,
++};
++
+ static const struct s3c2410_wdt_variant drv_data_exynos5250 = {
+ .disable_reg = EXYNOS5_WDT_DISABLE_REG_OFFSET,
+ .mask_reset_reg = EXYNOS5_WDT_MASK_RESET_REG_OFFSET,
+ .mask_bit = 20,
+ .rst_stat_reg = EXYNOS5_RST_STAT_REG_OFFSET,
+ .rst_stat_bit = 20,
+- .quirks = QUIRK_HAS_PMU_CONFIG | QUIRK_HAS_RST_STAT,
++ .quirks = QUIRK_HAS_PMU_CONFIG | QUIRK_HAS_RST_STAT \
++ | QUIRK_HAS_WTCLRINT_REG,
+ };
+
+ static const struct s3c2410_wdt_variant drv_data_exynos5420 = {
+@@ -158,7 +165,8 @@ static const struct s3c2410_wdt_variant drv_data_exynos5420 = {
+ .mask_bit = 0,
+ .rst_stat_reg = EXYNOS5_RST_STAT_REG_OFFSET,
+ .rst_stat_bit = 9,
+- .quirks = QUIRK_HAS_PMU_CONFIG | QUIRK_HAS_RST_STAT,
++ .quirks = QUIRK_HAS_PMU_CONFIG | QUIRK_HAS_RST_STAT \
++ | QUIRK_HAS_WTCLRINT_REG,
+ };
+
+ static const struct s3c2410_wdt_variant drv_data_exynos7 = {
+@@ -167,12 +175,15 @@ static const struct s3c2410_wdt_variant drv_data_exynos7 = {
+ .mask_bit = 23,
+ .rst_stat_reg = EXYNOS5_RST_STAT_REG_OFFSET,
+ .rst_stat_bit = 23, /* A57 WDTRESET */
+- .quirks = QUIRK_HAS_PMU_CONFIG | QUIRK_HAS_RST_STAT,
++ .quirks = QUIRK_HAS_PMU_CONFIG | QUIRK_HAS_RST_STAT \
++ | QUIRK_HAS_WTCLRINT_REG,
+ };
+
+ static const struct of_device_id s3c2410_wdt_match[] = {
+ { .compatible = "samsung,s3c2410-wdt",
+ .data = &drv_data_s3c2410 },
++ { .compatible = "samsung,s3c6410-wdt",
++ .data = &drv_data_s3c6410 },
+ { .compatible = "samsung,exynos5250-wdt",
+ .data = &drv_data_exynos5250 },
+ { .compatible = "samsung,exynos5420-wdt",
+@@ -418,6 +429,10 @@ static irqreturn_t s3c2410wdt_irq(int irqno, void *param)
+ dev_info(wdt->dev, "watchdog timer expired (irq)\n");
+
+ s3c2410wdt_keepalive(&wdt->wdt_device);
++
++ if (wdt->drv_data->quirks & QUIRK_HAS_WTCLRINT_REG)
++ writel(0x1, wdt->reg_base + S3C2410_WTCLRINT);
++
+ return IRQ_HANDLED;
+ }
+
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 87457227812c..bdd32925a15e 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1104,6 +1104,10 @@ SMB2_tcon(const unsigned int xid, struct cifs_ses *ses, const char *tree,
+ return -EINVAL;
+ }
+
++ /* SMB2 TREE_CONNECT request must be called with TreeId == 0 */
++ if (tcon)
++ tcon->tid = 0;
++
+ rc = small_smb2_init(SMB2_TREE_CONNECT, tcon, (void **) &req);
+ if (rc) {
+ kfree(unc_path);
+diff --git a/fs/dax.c b/fs/dax.c
+index c45598b912e1..a39b404b646a 100644
+--- a/fs/dax.c
++++ b/fs/dax.c
+@@ -369,6 +369,22 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
+ }
+ spin_lock_irq(&mapping->tree_lock);
+
++ if (!entry) {
++ /*
++ * We needed to drop the page_tree lock while calling
++ * radix_tree_preload() and we didn't have an entry to
++ * lock. See if another thread inserted an entry at
++ * our index during this time.
++ */
++ entry = __radix_tree_lookup(&mapping->page_tree, index,
++ NULL, &slot);
++ if (entry) {
++ radix_tree_preload_end();
++ spin_unlock_irq(&mapping->tree_lock);
++ goto restart;
++ }
++ }
++
+ if (pmd_downgrade) {
+ radix_tree_delete(&mapping->page_tree, index);
+ mapping->nrexceptional--;
+@@ -384,19 +400,12 @@ static void *grab_mapping_entry(struct address_space *mapping, pgoff_t index,
+ if (err) {
+ spin_unlock_irq(&mapping->tree_lock);
+ /*
+- * Someone already created the entry? This is a
+- * normal failure when inserting PMDs in a range
+- * that already contains PTEs. In that case we want
+- * to return -EEXIST immediately.
+- */
+- if (err == -EEXIST && !(size_flag & RADIX_DAX_PMD))
+- goto restart;
+- /*
+- * Our insertion of a DAX PMD entry failed, most
+- * likely because it collided with a PTE sized entry
+- * at a different index in the PMD range. We haven't
+- * inserted anything into the radix tree and have no
+- * waiters to wake.
++ * Our insertion of a DAX entry failed, most likely
++ * because we were inserting a PMD entry and it
++ * collided with a PTE sized entry at a different
++ * index in the PMD range. We haven't inserted
++ * anything into the radix tree and have no waiters to
++ * wake.
+ */
+ return ERR_PTR(err);
+ }
+diff --git a/fs/orangefs/super.c b/fs/orangefs/super.c
+index 67c24351a67f..cd261c8de53a 100644
+--- a/fs/orangefs/super.c
++++ b/fs/orangefs/super.c
+@@ -263,8 +263,13 @@ int orangefs_remount(struct orangefs_sb_info_s *orangefs_sb)
+ if (!new_op)
+ return -ENOMEM;
+ new_op->upcall.req.features.features = 0;
+- ret = service_operation(new_op, "orangefs_features", 0);
+- orangefs_features = new_op->downcall.resp.features.features;
++ ret = service_operation(new_op, "orangefs_features",
++ ORANGEFS_OP_PRIORITY | ORANGEFS_OP_NO_MUTEX);
++ if (!ret)
++ orangefs_features =
++ new_op->downcall.resp.features.features;
++ else
++ orangefs_features = 0;
+ op_release(new_op);
+ } else {
+ orangefs_features = 0;
+diff --git a/fs/sysfs/file.c b/fs/sysfs/file.c
+index b803213d1307..39c75a86c67f 100644
+--- a/fs/sysfs/file.c
++++ b/fs/sysfs/file.c
+@@ -108,7 +108,7 @@ static ssize_t sysfs_kf_read(struct kernfs_open_file *of, char *buf,
+ {
+ const struct sysfs_ops *ops = sysfs_file_ops(of->kn);
+ struct kobject *kobj = of->kn->parent->priv;
+- size_t len;
++ ssize_t len;
+
+ /*
+ * If buf != of->prealloc_buf, we don't know how
+@@ -117,13 +117,15 @@ static ssize_t sysfs_kf_read(struct kernfs_open_file *of, char *buf,
+ if (WARN_ON_ONCE(buf != of->prealloc_buf))
+ return 0;
+ len = ops->show(kobj, of->kn->priv, buf);
++ if (len < 0)
++ return len;
+ if (pos) {
+ if (len <= pos)
+ return 0;
+ len -= pos;
+ memmove(buf, buf + pos, len);
+ }
+- return min(count, len);
++ return min_t(ssize_t, count, len);
+ }
+
+ /* kernfs write callback for regular sysfs files */
+diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
+index c516d7158a21..205ab55d595d 100644
+--- a/fs/xfs/xfs_bmap_util.c
++++ b/fs/xfs/xfs_bmap_util.c
+@@ -1318,8 +1318,16 @@ xfs_free_file_space(
+ /*
+ * Now that we've unmap all full blocks we'll have to zero out any
+ * partial block at the beginning and/or end. xfs_zero_range is
+- * smart enough to skip any holes, including those we just created.
++ * smart enough to skip any holes, including those we just created,
++ * but we must take care not to zero beyond EOF and enlarge i_size.
+ */
++
++ if (offset >= XFS_ISIZE(ip))
++ return 0;
++
++ if (offset + len > XFS_ISIZE(ip))
++ len = XFS_ISIZE(ip) - offset;
++
+ return xfs_zero_range(ip, offset, len, NULL);
+ }
+
+diff --git a/include/drm/i915_pciids.h b/include/drm/i915_pciids.h
+index 0d5f4268d75f..61766a420f6b 100644
+--- a/include/drm/i915_pciids.h
++++ b/include/drm/i915_pciids.h
+@@ -226,23 +226,18 @@
+ INTEL_VGA_DEVICE(0x162A, info), /* Server */ \
+ INTEL_VGA_DEVICE(0x162D, info) /* Workstation */
+
+-#define INTEL_BDW_RSVDM_IDS(info) \
++#define INTEL_BDW_RSVD_IDS(info) \
+ INTEL_VGA_DEVICE(0x1632, info), /* ULT */ \
+ INTEL_VGA_DEVICE(0x1636, info), /* ULT */ \
+ INTEL_VGA_DEVICE(0x163B, info), /* Iris */ \
+- INTEL_VGA_DEVICE(0x163E, info) /* ULX */
+-
+-#define INTEL_BDW_RSVDD_IDS(info) \
++ INTEL_VGA_DEVICE(0x163E, info), /* ULX */ \
+ INTEL_VGA_DEVICE(0x163A, info), /* Server */ \
+ INTEL_VGA_DEVICE(0x163D, info) /* Workstation */
+
+ #define INTEL_BDW_IDS(info) \
+ INTEL_BDW_GT12_IDS(info), \
+ INTEL_BDW_GT3_IDS(info), \
+- INTEL_BDW_RSVDM_IDS(info), \
+- INTEL_BDW_GT12_IDS(info), \
+- INTEL_BDW_GT3_IDS(info), \
+- INTEL_BDW_RSVDD_IDS(info)
++ INTEL_BDW_RSVD_IDS(info)
+
+ #define INTEL_CHV_IDS(info) \
+ INTEL_VGA_DEVICE(0x22b0, info), \
+diff --git a/include/drm/ttm/ttm_object.h b/include/drm/ttm/ttm_object.h
+index ed953f98f0e1..1487011fe057 100644
+--- a/include/drm/ttm/ttm_object.h
++++ b/include/drm/ttm/ttm_object.h
+@@ -229,6 +229,8 @@ extern void ttm_base_object_unref(struct ttm_base_object **p_base);
+ * @ref_type: The type of reference.
+ * @existed: Upon completion, indicates that an identical reference object
+ * already existed, and the refcount was upped on that object instead.
++ * @require_existed: Fail with -EPERM if an identical ref object didn't
++ * already exist.
+ *
+ * Checks that the base object is shareable and adds a ref object to it.
+ *
+@@ -243,7 +245,8 @@ extern void ttm_base_object_unref(struct ttm_base_object **p_base);
+ */
+ extern int ttm_ref_object_add(struct ttm_object_file *tfile,
+ struct ttm_base_object *base,
+- enum ttm_ref_type ref_type, bool *existed);
++ enum ttm_ref_type ref_type, bool *existed,
++ bool require_existed);
+
+ extern bool ttm_ref_object_exists(struct ttm_object_file *tfile,
+ struct ttm_base_object *base);
+diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
+index b5abfda80465..4c5bca38c653 100644
+--- a/include/linux/arm-smccc.h
++++ b/include/linux/arm-smccc.h
+@@ -14,9 +14,6 @@
+ #ifndef __LINUX_ARM_SMCCC_H
+ #define __LINUX_ARM_SMCCC_H
+
+-#include <linux/linkage.h>
+-#include <linux/types.h>
+-
+ /*
+ * This file provides common defines for ARM SMC Calling Convention as
+ * specified in
+@@ -60,6 +57,13 @@
+ #define ARM_SMCCC_OWNER_TRUSTED_OS 50
+ #define ARM_SMCCC_OWNER_TRUSTED_OS_END 63
+
++#define ARM_SMCCC_QUIRK_NONE 0
++#define ARM_SMCCC_QUIRK_QCOM_A6 1 /* Save/restore register a6 */
++
++#ifndef __ASSEMBLY__
++
++#include <linux/linkage.h>
++#include <linux/types.h>
+ /**
+ * struct arm_smccc_res - Result from SMC/HVC call
+ * @a0-a3 result values from registers 0 to 3
+@@ -72,33 +76,59 @@ struct arm_smccc_res {
+ };
+
+ /**
+- * arm_smccc_smc() - make SMC calls
++ * struct arm_smccc_quirk - Contains quirk information
++ * @id: quirk identification
++ * @state: quirk specific information
++ * @a6: Qualcomm quirk entry for returning post-smc call contents of a6
++ */
++struct arm_smccc_quirk {
++ int id;
++ union {
++ unsigned long a6;
++ } state;
++};
++
++/**
++ * __arm_smccc_smc() - make SMC calls
+ * @a0-a7: arguments passed in registers 0 to 7
+ * @res: result values from registers 0 to 3
++ * @quirk: points to an arm_smccc_quirk, or NULL when no quirks are required.
+ *
+ * This function is used to make SMC calls following SMC Calling Convention.
+ * The content of the supplied param are copied to registers 0 to 7 prior
+ * to the SMC instruction. The return values are updated with the content
+- * from register 0 to 3 on return from the SMC instruction.
++ * from register 0 to 3 on return from the SMC instruction. An optional
++ * quirk structure provides vendor specific behavior.
+ */
+-asmlinkage void arm_smccc_smc(unsigned long a0, unsigned long a1,
++asmlinkage void __arm_smccc_smc(unsigned long a0, unsigned long a1,
+ unsigned long a2, unsigned long a3, unsigned long a4,
+ unsigned long a5, unsigned long a6, unsigned long a7,
+- struct arm_smccc_res *res);
++ struct arm_smccc_res *res, struct arm_smccc_quirk *quirk);
+
+ /**
+- * arm_smccc_hvc() - make HVC calls
++ * __arm_smccc_hvc() - make HVC calls
+ * @a0-a7: arguments passed in registers 0 to 7
+ * @res: result values from registers 0 to 3
++ * @quirk: points to an arm_smccc_quirk, or NULL when no quirks are required.
+ *
+ * This function is used to make HVC calls following SMC Calling
+ * Convention. The content of the supplied param are copied to registers 0
+ * to 7 prior to the HVC instruction. The return values are updated with
+- * the content from register 0 to 3 on return from the HVC instruction.
++ * the content from register 0 to 3 on return from the HVC instruction. An
++ * optional quirk structure provides vendor specific behavior.
+ */
+-asmlinkage void arm_smccc_hvc(unsigned long a0, unsigned long a1,
++asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
+ unsigned long a2, unsigned long a3, unsigned long a4,
+ unsigned long a5, unsigned long a6, unsigned long a7,
+- struct arm_smccc_res *res);
++ struct arm_smccc_res *res, struct arm_smccc_quirk *quirk);
++
++#define arm_smccc_smc(...) __arm_smccc_smc(__VA_ARGS__, NULL)
++
++#define arm_smccc_smc_quirk(...) __arm_smccc_smc(__VA_ARGS__)
++
++#define arm_smccc_hvc(...) __arm_smccc_hvc(__VA_ARGS__, NULL)
++
++#define arm_smccc_hvc_quirk(...) __arm_smccc_hvc(__VA_ARGS__)
+
++#endif /*__ASSEMBLY__*/
+ #endif /*__LINUX_ARM_SMCCC_H*/
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 73dda0edcb97..a4f77feecbb0 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -2516,6 +2516,8 @@
+ #define PCI_DEVICE_ID_KORENIX_JETCARDF2 0x1700
+ #define PCI_DEVICE_ID_KORENIX_JETCARDF3 0x17ff
+
++#define PCI_VENDOR_ID_HUAWEI 0x19e5
++
+ #define PCI_VENDOR_ID_NETRONOME 0x19ee
+ #define PCI_DEVICE_ID_NETRONOME_NFP3200 0x3200
+ #define PCI_DEVICE_ID_NETRONOME_NFP3240 0x3240
+diff --git a/include/linux/random.h b/include/linux/random.h
+index 7bd2403e4fef..16ab429735a7 100644
+--- a/include/linux/random.h
++++ b/include/linux/random.h
+@@ -37,7 +37,6 @@ extern void get_random_bytes(void *buf, int nbytes);
+ extern int add_random_ready_callback(struct random_ready_callback *rdy);
+ extern void del_random_ready_callback(struct random_ready_callback *rdy);
+ extern void get_random_bytes_arch(void *buf, int nbytes);
+-extern int random_int_secret_init(void);
+
+ #ifndef MODULE
+ extern const struct file_operations random_fops, urandom_fops;
+diff --git a/init/main.c b/init/main.c
+index b0c9d6facef9..09beb7fc6e8c 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -879,7 +879,6 @@ static void __init do_basic_setup(void)
+ do_ctors();
+ usermodehelper_enable();
+ do_initcalls();
+- random_int_secret_init();
+ }
+
+ static void __init do_pre_smp_initcalls(void)
+diff --git a/kernel/ptrace.c b/kernel/ptrace.c
+index 49ba7c1ade9d..a5caecef88be 100644
+--- a/kernel/ptrace.c
++++ b/kernel/ptrace.c
+@@ -181,11 +181,17 @@ static void ptrace_unfreeze_traced(struct task_struct *task)
+
+ WARN_ON(!task->ptrace || task->parent != current);
+
++ /*
++ * PTRACE_LISTEN can allow ptrace_trap_notify to wake us up remotely.
++ * Recheck state under the lock to close this race.
++ */
+ spin_lock_irq(&task->sighand->siglock);
+- if (__fatal_signal_pending(task))
+- wake_up_state(task, __TASK_TRACED);
+- else
+- task->state = TASK_TRACED;
++ if (task->state == __TASK_TRACED) {
++ if (__fatal_signal_pending(task))
++ wake_up_state(task, __TASK_TRACED);
++ else
++ task->state = TASK_TRACED;
++ }
+ spin_unlock_irq(&task->sighand->siglock);
+ }
+
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index a85739efcc30..8df48ccb8af6 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -4825,9 +4825,9 @@ static __init int test_ringbuffer(void)
+ rb_data[cpu].cnt = cpu;
+ rb_threads[cpu] = kthread_create(rb_test, &rb_data[cpu],
+ "rbtester/%d", cpu);
+- if (WARN_ON(!rb_threads[cpu])) {
++ if (WARN_ON(IS_ERR(rb_threads[cpu]))) {
+ pr_cont("FAILED\n");
+- ret = -1;
++ ret = PTR_ERR(rb_threads[cpu]);
+ goto out_free;
+ }
+
+@@ -4837,9 +4837,9 @@ static __init int test_ringbuffer(void)
+
+ /* Now create the rb hammer! */
+ rb_hammer = kthread_run(rb_hammer_test, NULL, "rbhammer");
+- if (WARN_ON(!rb_hammer)) {
++ if (WARN_ON(IS_ERR(rb_hammer))) {
+ pr_cont("FAILED\n");
+- ret = -1;
++ ret = PTR_ERR(rb_hammer);
+ goto out_free;
+ }
+
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 1e7873e40c9a..dc8a2672c407 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -1526,7 +1526,6 @@ COMPAT_SYSCALL_DEFINE5(get_mempolicy, int __user *, policy,
+ COMPAT_SYSCALL_DEFINE3(set_mempolicy, int, mode, compat_ulong_t __user *, nmask,
+ compat_ulong_t, maxnode)
+ {
+- long err = 0;
+ unsigned long __user *nm = NULL;
+ unsigned long nr_bits, alloc_size;
+ DECLARE_BITMAP(bm, MAX_NUMNODES);
+@@ -1535,14 +1534,13 @@ COMPAT_SYSCALL_DEFINE3(set_mempolicy, int, mode, compat_ulong_t __user *, nmask,
+ alloc_size = ALIGN(nr_bits, BITS_PER_LONG) / 8;
+
+ if (nmask) {
+- err = compat_get_bitmap(bm, nmask, nr_bits);
++ if (compat_get_bitmap(bm, nmask, nr_bits))
++ return -EFAULT;
+ nm = compat_alloc_user_space(alloc_size);
+- err |= copy_to_user(nm, bm, alloc_size);
++ if (copy_to_user(nm, bm, alloc_size))
++ return -EFAULT;
+ }
+
+- if (err)
+- return -EFAULT;
+-
+ return sys_set_mempolicy(mode, nm, nr_bits+1);
+ }
+
+@@ -1550,7 +1548,6 @@ COMPAT_SYSCALL_DEFINE6(mbind, compat_ulong_t, start, compat_ulong_t, len,
+ compat_ulong_t, mode, compat_ulong_t __user *, nmask,
+ compat_ulong_t, maxnode, compat_ulong_t, flags)
+ {
+- long err = 0;
+ unsigned long __user *nm = NULL;
+ unsigned long nr_bits, alloc_size;
+ nodemask_t bm;
+@@ -1559,14 +1556,13 @@ COMPAT_SYSCALL_DEFINE6(mbind, compat_ulong_t, start, compat_ulong_t, len,
+ alloc_size = ALIGN(nr_bits, BITS_PER_LONG) / 8;
+
+ if (nmask) {
+- err = compat_get_bitmap(nodes_addr(bm), nmask, nr_bits);
++ if (compat_get_bitmap(nodes_addr(bm), nmask, nr_bits))
++ return -EFAULT;
+ nm = compat_alloc_user_space(alloc_size);
+- err |= copy_to_user(nm, nodes_addr(bm), alloc_size);
++ if (copy_to_user(nm, nodes_addr(bm), alloc_size))
++ return -EFAULT;
+ }
+
+- if (err)
+- return -EFAULT;
+-
+ return sys_mbind(start, len, mode, nm, nr_bits+1, flags);
+ }
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 1a5f6655958e..1aec370bf9e9 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -4381,13 +4381,13 @@ void show_free_areas(unsigned int filter)
+ K(node_page_state(pgdat, NR_FILE_MAPPED)),
+ K(node_page_state(pgdat, NR_FILE_DIRTY)),
+ K(node_page_state(pgdat, NR_WRITEBACK)),
++ K(node_page_state(pgdat, NR_SHMEM)),
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ K(node_page_state(pgdat, NR_SHMEM_THPS) * HPAGE_PMD_NR),
+ K(node_page_state(pgdat, NR_SHMEM_PMDMAPPED)
+ * HPAGE_PMD_NR),
+ K(node_page_state(pgdat, NR_ANON_THPS) * HPAGE_PMD_NR),
+ #endif
+- K(node_page_state(pgdat, NR_SHMEM)),
+ K(node_page_state(pgdat, NR_WRITEBACK_TEMP)),
+ K(node_page_state(pgdat, NR_UNSTABLE_NFS)),
+ node_page_state(pgdat, NR_PAGES_SCANNED),
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index d37ae7dc114b..56d491950390 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -718,7 +718,8 @@ int ieee80211_do_open(struct wireless_dev *wdev, bool coming_up)
+ ieee80211_recalc_ps(local);
+
+ if (sdata->vif.type == NL80211_IFTYPE_MONITOR ||
+- sdata->vif.type == NL80211_IFTYPE_AP_VLAN) {
++ sdata->vif.type == NL80211_IFTYPE_AP_VLAN ||
++ local->ops->wake_tx_queue) {
+ /* XXX: for AP_VLAN, actually track AP queues */
+ netif_tx_start_all_queues(dev);
+ } else if (dev) {
+diff --git a/net/wireless/sysfs.c b/net/wireless/sysfs.c
+index 14b3f007826d..2927d06faa6e 100644
+--- a/net/wireless/sysfs.c
++++ b/net/wireless/sysfs.c
+@@ -130,12 +130,10 @@ static int wiphy_resume(struct device *dev)
+ /* Age scan results with time spent in suspend */
+ cfg80211_bss_age(rdev, get_seconds() - rdev->suspend_at);
+
+- if (rdev->ops->resume) {
+- rtnl_lock();
+- if (rdev->wiphy.registered)
+- ret = rdev_resume(rdev);
+- rtnl_unlock();
+- }
++ rtnl_lock();
++ if (rdev->wiphy.registered && rdev->ops->resume)
++ ret = rdev_resume(rdev);
++ rtnl_unlock();
+
+ return ret;
+ }
+diff --git a/sound/soc/codecs/rt5670.c b/sound/soc/codecs/rt5670.c
+index 97bafac3bc15..17d20b99f041 100644
+--- a/sound/soc/codecs/rt5670.c
++++ b/sound/soc/codecs/rt5670.c
+@@ -2814,6 +2814,7 @@ MODULE_DEVICE_TABLE(i2c, rt5670_i2c_id);
+ static const struct acpi_device_id rt5670_acpi_match[] = {
+ { "10EC5670", 0},
+ { "10EC5672", 0},
++ { "10EC5640", 0}, /* quirk */
+ { },
+ };
+ MODULE_DEVICE_TABLE(acpi, rt5670_acpi_match);
+diff --git a/sound/soc/intel/atom/sst/sst_acpi.c b/sound/soc/intel/atom/sst/sst_acpi.c
+index f4d92bbc5373..63820080dd16 100644
+--- a/sound/soc/intel/atom/sst/sst_acpi.c
++++ b/sound/soc/intel/atom/sst/sst_acpi.c
+@@ -400,6 +400,7 @@ static int sst_acpi_remove(struct platform_device *pdev)
+ static unsigned long cht_machine_id;
+
+ #define CHT_SURFACE_MACH 1
++#define BYT_THINKPAD_10 2
+
+ static int cht_surface_quirk_cb(const struct dmi_system_id *id)
+ {
+@@ -407,6 +408,23 @@ static int cht_surface_quirk_cb(const struct dmi_system_id *id)
+ return 1;
+ }
+
++static int byt_thinkpad10_quirk_cb(const struct dmi_system_id *id)
++{
++ cht_machine_id = BYT_THINKPAD_10;
++ return 1;
++}
++
++
++static const struct dmi_system_id byt_table[] = {
++ {
++ .callback = byt_thinkpad10_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "20C3001VHH"),
++ },
++ },
++ { }
++};
+
+ static const struct dmi_system_id cht_table[] = {
+ {
+@@ -424,6 +442,10 @@ static struct sst_acpi_mach cht_surface_mach = {
+ "10EC5640", "cht-bsw-rt5645", "intel/fw_sst_22a8.bin", "cht-bsw", NULL,
+ &chv_platform_data };
+
++static struct sst_acpi_mach byt_thinkpad_10 = {
++ "10EC5640", "cht-bsw-rt5672", "intel/fw_sst_0f28.bin", "cht-bsw", NULL,
++ &byt_rvp_platform_data };
++
+ static struct sst_acpi_mach *cht_quirk(void *arg)
+ {
+ struct sst_acpi_mach *mach = arg;
+@@ -436,8 +458,21 @@ static struct sst_acpi_mach *cht_quirk(void *arg)
+ return mach;
+ }
+
++static struct sst_acpi_mach *byt_quirk(void *arg)
++{
++ struct sst_acpi_mach *mach = arg;
++
++ dmi_check_system(byt_table);
++
++ if (cht_machine_id == BYT_THINKPAD_10)
++ return &byt_thinkpad_10;
++ else
++ return mach;
++}
++
++
+ static struct sst_acpi_mach sst_acpi_bytcr[] = {
+- {"10EC5640", "bytcr_rt5640", "intel/fw_sst_0f28.bin", "bytcr_rt5640", NULL,
++ {"10EC5640", "bytcr_rt5640", "intel/fw_sst_0f28.bin", "bytcr_rt5640", byt_quirk,
+ &byt_rvp_platform_data },
+ {"10EC5642", "bytcr_rt5640", "intel/fw_sst_0f28.bin", "bytcr_rt5640", NULL,
+ &byt_rvp_platform_data },
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 8d2fb2d6f532..1bd985f01c73 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -387,6 +387,16 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ BYT_RT5640_SSP0_AIF1),
+
+ },
++ {
++ .callback = byt_rt5640_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
++ },
++ .driver_data = (unsigned long *)(BYT_RT5640_IN3_MAP |
++ BYT_RT5640_MCLK_EN |
++ BYT_RT5640_SSP0_AIF1),
++
++ },
+ {}
+ };
+
+diff --git a/sound/soc/intel/boards/cht_bsw_rt5645.c b/sound/soc/intel/boards/cht_bsw_rt5645.c
+index f504a0e18f91..753938371965 100644
+--- a/sound/soc/intel/boards/cht_bsw_rt5645.c
++++ b/sound/soc/intel/boards/cht_bsw_rt5645.c
+@@ -24,6 +24,9 @@
+ #include <linux/acpi.h>
+ #include <linux/platform_device.h>
+ #include <linux/slab.h>
++#include <asm/cpu_device_id.h>
++#include <asm/platform_sst_audio.h>
++#include <linux/clk.h>
+ #include <sound/pcm.h>
+ #include <sound/pcm_params.h>
+ #include <sound/soc.h>
+@@ -45,6 +48,7 @@ struct cht_mc_private {
+ struct snd_soc_jack jack;
+ struct cht_acpi_card *acpi_card;
+ char codec_name[16];
++ struct clk *mclk;
+ };
+
+ static inline struct snd_soc_dai *cht_get_codec_dai(struct snd_soc_card *card)
+@@ -65,6 +69,7 @@ static int platform_clock_control(struct snd_soc_dapm_widget *w,
+ struct snd_soc_dapm_context *dapm = w->dapm;
+ struct snd_soc_card *card = dapm->card;
+ struct snd_soc_dai *codec_dai;
++ struct cht_mc_private *ctx = snd_soc_card_get_drvdata(card);
+ int ret;
+
+ codec_dai = cht_get_codec_dai(card);
+@@ -73,19 +78,30 @@ static int platform_clock_control(struct snd_soc_dapm_widget *w,
+ return -EIO;
+ }
+
+- if (!SND_SOC_DAPM_EVENT_OFF(event))
+- return 0;
++ if (SND_SOC_DAPM_EVENT_ON(event)) {
++ if (ctx->mclk) {
++ ret = clk_prepare_enable(ctx->mclk);
++ if (ret < 0) {
++ dev_err(card->dev,
++ "could not configure MCLK state");
++ return ret;
++ }
++ }
++ } else {
++ /* Set codec sysclk source to its internal clock because codec PLL will
++ * be off when idle and MCLK will also be off when codec is
++ * runtime suspended. Codec needs clock for jack detection and button
++ * press. MCLK is turned off with clock framework or ACPI.
++ */
++ ret = snd_soc_dai_set_sysclk(codec_dai, RT5645_SCLK_S_RCCLK,
++ 48000 * 512, SND_SOC_CLOCK_IN);
++ if (ret < 0) {
++ dev_err(card->dev, "can't set codec sysclk: %d\n", ret);
++ return ret;
++ }
+
+- /* Set codec sysclk source to its internal clock because codec PLL will
+- * be off when idle and MCLK will also be off by ACPI when codec is
+- * runtime suspended. Codec needs clock for jack detection and button
+- * press.
+- */
+- ret = snd_soc_dai_set_sysclk(codec_dai, RT5645_SCLK_S_RCCLK,
+- 0, SND_SOC_CLOCK_IN);
+- if (ret < 0) {
+- dev_err(card->dev, "can't set codec sysclk: %d\n", ret);
+- return ret;
++ if (ctx->mclk)
++ clk_disable_unprepare(ctx->mclk);
+ }
+
+ return 0;
+@@ -97,7 +113,7 @@ static const struct snd_soc_dapm_widget cht_dapm_widgets[] = {
+ SND_SOC_DAPM_MIC("Int Mic", NULL),
+ SND_SOC_DAPM_SPK("Ext Spk", NULL),
+ SND_SOC_DAPM_SUPPLY("Platform Clock", SND_SOC_NOPM, 0, 0,
+- platform_clock_control, SND_SOC_DAPM_POST_PMD),
++ platform_clock_control, SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD),
+ };
+
+ static const struct snd_soc_dapm_route cht_rt5645_audio_map[] = {
+@@ -225,6 +241,26 @@ static int cht_codec_init(struct snd_soc_pcm_runtime *runtime)
+
+ rt5645_set_jack_detect(codec, &ctx->jack, &ctx->jack, &ctx->jack);
+
++ if (ctx->mclk) {
++ /*
++ * The firmware might enable the clock at
++ * boot (this information may or may not
++ * be reflected in the enable clock register).
++ * To change the rate we must disable the clock
++ * first to cover these cases. Due to common
++ * clock framework restrictions that do not allow
++ * to disable a clock that has not been enabled,
++ * we need to enable the clock first.
++ */
++ ret = clk_prepare_enable(ctx->mclk);
++ if (!ret)
++ clk_disable_unprepare(ctx->mclk);
++
++ ret = clk_set_rate(ctx->mclk, CHT_PLAT_CLK_3_HZ);
++
++ if (ret)
++ dev_err(runtime->dev, "unable to set MCLK rate\n");
++ }
+ return ret;
+ }
+
+@@ -349,6 +385,18 @@ static struct cht_acpi_card snd_soc_cards[] = {
+
+ static char cht_rt5640_codec_name[16]; /* i2c-<HID>:00 with HID being 8 chars */
+
++static bool is_valleyview(void)
++{
++ static const struct x86_cpu_id cpu_ids[] = {
++ { X86_VENDOR_INTEL, 6, 55 }, /* Valleyview, Bay Trail */
++ {}
++ };
++
++ if (!x86_match_cpu(cpu_ids))
++ return false;
++ return true;
++}
++
+ static int snd_cht_mc_probe(struct platform_device *pdev)
+ {
+ int ret_val = 0;
+@@ -358,22 +406,32 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ struct sst_acpi_mach *mach;
+ const char *i2c_name = NULL;
+ int dai_index = 0;
++ bool found = false;
+
+ drv = devm_kzalloc(&pdev->dev, sizeof(*drv), GFP_ATOMIC);
+ if (!drv)
+ return -ENOMEM;
+
++ mach = (&pdev->dev)->platform_data;
++
+ for (i = 0; i < ARRAY_SIZE(snd_soc_cards); i++) {
+- if (acpi_dev_found(snd_soc_cards[i].codec_id)) {
++ if (acpi_dev_found(snd_soc_cards[i].codec_id) &&
++ (!strncmp(snd_soc_cards[i].codec_id, mach->id, 8))) {
+ dev_dbg(&pdev->dev,
+ "found codec %s\n", snd_soc_cards[i].codec_id);
+ card = snd_soc_cards[i].soc_card;
+ drv->acpi_card = &snd_soc_cards[i];
++ found = true;
+ break;
+ }
+ }
++
++ if (!found) {
++ dev_err(&pdev->dev, "No matching HID found in supported list\n");
++ return -ENODEV;
++ }
++
+ card->dev = &pdev->dev;
+- mach = card->dev->platform_data;
+ sprintf(drv->codec_name, "i2c-%s:00", drv->acpi_card->codec_id);
+
+ /* set correct codec name */
+@@ -391,6 +449,16 @@ static int snd_cht_mc_probe(struct platform_device *pdev)
+ cht_dailink[dai_index].codec_name = cht_rt5640_codec_name;
+ }
+
++ if (is_valleyview()) {
++ drv->mclk = devm_clk_get(&pdev->dev, "pmc_plt_clk_3");
++ if (IS_ERR(drv->mclk)) {
++ dev_err(&pdev->dev,
++ "Failed to get MCLK from pmc_plt_clk_3: %ld\n",
++ PTR_ERR(drv->mclk));
++ return PTR_ERR(drv->mclk);
++ }
++ }
++
+ snd_soc_card_set_drvdata(card, drv);
+ ret_val = devm_snd_soc_register_card(&pdev->dev, card);
+ if (ret_val) {
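
The cht_bsw_rt5645 hunks above hang MCLK gating off the DAPM "Platform Clock" supply: the clock is enabled on PRE_PMU and released on POST_PMD, with the codec falling back to its internal RCCLK while idle. A minimal sketch of that supply-widget pattern, assuming the card drvdata holds the struct clk (the real driver keeps it inside struct cht_mc_private):

#include <linux/clk.h>
#include <sound/soc.h>
#include <sound/soc-dapm.h>

static int plat_clk_event(struct snd_soc_dapm_widget *w,
                          struct snd_kcontrol *kc, int event)
{
        struct snd_soc_card *card = w->dapm->card;
        struct clk *mclk = snd_soc_card_get_drvdata(card);      /* assumption, see above */

        if (SND_SOC_DAPM_EVENT_ON(event))
                return clk_prepare_enable(mclk);

        /* power-down: the codec was already switched to its internal clock */
        clk_disable_unprepare(mclk);
        return 0;
}

static const struct snd_soc_dapm_widget widgets[] = {
        SND_SOC_DAPM_SUPPLY("Platform Clock", SND_SOC_NOPM, 0, 0,
                            plat_clk_event,
                            SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD),
};

The is_valleyview() gate in the probe hunk presumably limits the devm_clk_get("pmc_plt_clk_3") lookup to Baytrail parts where that PMC clock is exposed.
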
+diff --git a/sound/soc/sunxi/sun4i-i2s.c b/sound/soc/sunxi/sun4i-i2s.c
+index f24d19526603..268f2bf691b3 100644
+--- a/sound/soc/sunxi/sun4i-i2s.c
++++ b/sound/soc/sunxi/sun4i-i2s.c
+@@ -14,9 +14,11 @@
+ #include <linux/clk.h>
+ #include <linux/dmaengine.h>
+ #include <linux/module.h>
++#include <linux/of_device.h>
+ #include <linux/platform_device.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/regmap.h>
++#include <linux/reset.h>
+
+ #include <sound/dmaengine_pcm.h>
+ #include <sound/pcm_params.h>
+@@ -92,6 +94,7 @@ struct sun4i_i2s {
+ struct clk *bus_clk;
+ struct clk *mod_clk;
+ struct regmap *regmap;
++ struct reset_control *rst;
+
+ unsigned int mclk_freq;
+
+@@ -651,9 +654,22 @@ static int sun4i_i2s_runtime_suspend(struct device *dev)
+ return 0;
+ }
+
++struct sun4i_i2s_quirks {
++ bool has_reset;
++};
++
++static const struct sun4i_i2s_quirks sun4i_a10_i2s_quirks = {
++ .has_reset = false,
++};
++
++static const struct sun4i_i2s_quirks sun6i_a31_i2s_quirks = {
++ .has_reset = true,
++};
++
+ static int sun4i_i2s_probe(struct platform_device *pdev)
+ {
+ struct sun4i_i2s *i2s;
++ const struct sun4i_i2s_quirks *quirks;
+ struct resource *res;
+ void __iomem *regs;
+ int irq, ret;
+@@ -674,6 +690,12 @@ static int sun4i_i2s_probe(struct platform_device *pdev)
+ return irq;
+ }
+
++ quirks = of_device_get_match_data(&pdev->dev);
++ if (!quirks) {
++ dev_err(&pdev->dev, "Failed to determine the quirks to use\n");
++ return -ENODEV;
++ }
++
+ i2s->bus_clk = devm_clk_get(&pdev->dev, "apb");
+ if (IS_ERR(i2s->bus_clk)) {
+ dev_err(&pdev->dev, "Can't get our bus clock\n");
+@@ -692,7 +714,24 @@ static int sun4i_i2s_probe(struct platform_device *pdev)
+ dev_err(&pdev->dev, "Can't get our mod clock\n");
+ return PTR_ERR(i2s->mod_clk);
+ }
+-
++
++ if (quirks->has_reset) {
++ i2s->rst = devm_reset_control_get(&pdev->dev, NULL);
++ if (IS_ERR(i2s->rst)) {
++ dev_err(&pdev->dev, "Failed to get reset control\n");
++ return PTR_ERR(i2s->rst);
++ }
++ }
++
++ if (!IS_ERR(i2s->rst)) {
++ ret = reset_control_deassert(i2s->rst);
++ if (ret) {
++ dev_err(&pdev->dev,
++ "Failed to deassert the reset control\n");
++ return -EINVAL;
++ }
++ }
++
+ i2s->playback_dma_data.addr = res->start + SUN4I_I2S_FIFO_TX_REG;
+ i2s->playback_dma_data.maxburst = 4;
+
+@@ -727,23 +766,37 @@ static int sun4i_i2s_probe(struct platform_device *pdev)
+ sun4i_i2s_runtime_suspend(&pdev->dev);
+ err_pm_disable:
+ pm_runtime_disable(&pdev->dev);
++ if (!IS_ERR(i2s->rst))
++ reset_control_assert(i2s->rst);
+
+ return ret;
+ }
+
+ static int sun4i_i2s_remove(struct platform_device *pdev)
+ {
++ struct sun4i_i2s *i2s = dev_get_drvdata(&pdev->dev);
++
+ snd_dmaengine_pcm_unregister(&pdev->dev);
+
+ pm_runtime_disable(&pdev->dev);
+ if (!pm_runtime_status_suspended(&pdev->dev))
+ sun4i_i2s_runtime_suspend(&pdev->dev);
+
++ if (!IS_ERR(i2s->rst))
++ reset_control_assert(i2s->rst);
++
+ return 0;
+ }
+
+ static const struct of_device_id sun4i_i2s_match[] = {
+- { .compatible = "allwinner,sun4i-a10-i2s", },
++ {
++ .compatible = "allwinner,sun4i-a10-i2s",
++ .data = &sun4i_a10_i2s_quirks,
++ },
++ {
++ .compatible = "allwinner,sun6i-a31-i2s",
++ .data = &sun6i_a31_i2s_quirks,
++ },
+ {}
+ };
+ MODULE_DEVICE_TABLE(of, sun4i_i2s_match);
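
The sun4i-i2s changes are a textbook per-compatible quirks setup: of_device_get_match_data() returns the .data pointer from the matching of_device_id, and only the variants that declare has_reset touch a reset line. A stripped-down sketch of that probe flow (error handling reduced to the essentials):

#include <linux/err.h>
#include <linux/errno.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/reset.h>

struct i2s_quirks {
        bool has_reset;
};

static int example_probe(struct platform_device *pdev)
{
        const struct i2s_quirks *quirks;
        struct reset_control *rst;

        quirks = of_device_get_match_data(&pdev->dev);
        if (!quirks)
                return -ENODEV;         /* no .data for this compatible */

        if (quirks->has_reset) {
                rst = devm_reset_control_get(&pdev->dev, NULL);
                if (IS_ERR(rst))
                        return PTR_ERR(rst);
                return reset_control_deassert(rst);     /* bring the block out of reset */
        }

        return 0;
}
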
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-04-18 10:23 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-04-18 10:23 UTC (permalink / raw
To: gentoo-commits
commit: 4e0e4f1029afd27b8bf7999371ce7d817c11d73a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 18 10:23:49 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Apr 18 10:23:49 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4e0e4f10
Linux patch 4.10.11
0000_README | 4 +
1010_linux-4.10.11.patch | 1128 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1132 insertions(+)
diff --git a/0000_README b/0000_README
index abc6f43..f05d7f1 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 1009_linux-4.10.10.patch
From: http://www.kernel.org
Desc: Linux 4.10.10
+Patch: 1010_linux-4.10.11.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.11
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1010_linux-4.10.11.patch b/1010_linux-4.10.11.patch
new file mode 100644
index 0000000..ac7bb4e
--- /dev/null
+++ b/1010_linux-4.10.11.patch
@@ -0,0 +1,1128 @@
+diff --git a/Makefile b/Makefile
+index 52858726495b..412f2a0a3814 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 9a6e11b6f457..5a4f2eb9d0d5 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -9,6 +9,7 @@ config MIPS
+ select HAVE_CONTEXT_TRACKING
+ select HAVE_GENERIC_DMA_COHERENT
+ select HAVE_IDE
++ select HAVE_IRQ_EXIT_ON_IRQ_STACK
+ select HAVE_OPROFILE
+ select HAVE_PERF_EVENTS
+ select PERF_USE_VMALLOC
+diff --git a/arch/mips/include/asm/irq.h b/arch/mips/include/asm/irq.h
+index 6bf10e796553..956db6e201d1 100644
+--- a/arch/mips/include/asm/irq.h
++++ b/arch/mips/include/asm/irq.h
+@@ -17,6 +17,18 @@
+
+ #include <irq.h>
+
++#define IRQ_STACK_SIZE THREAD_SIZE
++
++extern void *irq_stack[NR_CPUS];
++
++static inline bool on_irq_stack(int cpu, unsigned long sp)
++{
++ unsigned long low = (unsigned long)irq_stack[cpu];
++ unsigned long high = low + IRQ_STACK_SIZE;
++
++ return (low <= sp && sp <= high);
++}
++
+ #ifdef CONFIG_I8259
+ static inline int irq_canonicalize(int irq)
+ {
+diff --git a/arch/mips/include/asm/stackframe.h b/arch/mips/include/asm/stackframe.h
+index eebf39549606..2f182bdf024f 100644
+--- a/arch/mips/include/asm/stackframe.h
++++ b/arch/mips/include/asm/stackframe.h
+@@ -216,12 +216,19 @@
+ LONG_S $25, PT_R25(sp)
+ LONG_S $28, PT_R28(sp)
+ LONG_S $31, PT_R31(sp)
++
++ /* Set thread_info if we're coming from user mode */
++ mfc0 k0, CP0_STATUS
++ sll k0, 3 /* extract cu0 bit */
++ bltz k0, 9f
++
+ ori $28, sp, _THREAD_MASK
+ xori $28, _THREAD_MASK
+ #ifdef CONFIG_CPU_CAVIUM_OCTEON
+ .set mips64
+ pref 0, 0($28) /* Prefetch the current pointer */
+ #endif
++9:
+ .set pop
+ .endm
+
+diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
+index 6080582a26d1..a7277698d328 100644
+--- a/arch/mips/kernel/asm-offsets.c
++++ b/arch/mips/kernel/asm-offsets.c
+@@ -102,6 +102,7 @@ void output_thread_info_defines(void)
+ OFFSET(TI_REGS, thread_info, regs);
+ DEFINE(_THREAD_SIZE, THREAD_SIZE);
+ DEFINE(_THREAD_MASK, THREAD_MASK);
++ DEFINE(_IRQ_STACK_SIZE, IRQ_STACK_SIZE);
+ BLANK();
+ }
+
+diff --git a/arch/mips/kernel/genex.S b/arch/mips/kernel/genex.S
+index 52a4fdfc8513..2ac6c2625c13 100644
+--- a/arch/mips/kernel/genex.S
++++ b/arch/mips/kernel/genex.S
+@@ -187,9 +187,44 @@ NESTED(handle_int, PT_SIZE, sp)
+
+ LONG_L s0, TI_REGS($28)
+ LONG_S sp, TI_REGS($28)
+- PTR_LA ra, ret_from_irq
+- PTR_LA v0, plat_irq_dispatch
+- jr v0
++
++ /*
++ * SAVE_ALL ensures we are using a valid kernel stack for the thread.
++ * Check if we are already using the IRQ stack.
++ */
++ move s1, sp # Preserve the sp
++
++ /* Get IRQ stack for this CPU */
++ ASM_CPUID_MFC0 k0, ASM_SMP_CPUID_REG
++#if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32)
++ lui k1, %hi(irq_stack)
++#else
++ lui k1, %highest(irq_stack)
++ daddiu k1, %higher(irq_stack)
++ dsll k1, 16
++ daddiu k1, %hi(irq_stack)
++ dsll k1, 16
++#endif
++ LONG_SRL k0, SMP_CPUID_PTRSHIFT
++ LONG_ADDU k1, k0
++ LONG_L t0, %lo(irq_stack)(k1)
++
++ # Check if already on IRQ stack
++ PTR_LI t1, ~(_THREAD_SIZE-1)
++ and t1, t1, sp
++ beq t0, t1, 2f
++
++ /* Switch to IRQ stack */
++ li t1, _IRQ_STACK_SIZE
++ PTR_ADD sp, t0, t1
++
++2:
++ jal plat_irq_dispatch
++
++ /* Restore sp */
++ move sp, s1
++
++ j ret_from_irq
+ #ifdef CONFIG_CPU_MICROMIPS
+ nop
+ #endif
+@@ -262,8 +297,44 @@ NESTED(except_vec_vi_handler, 0, sp)
+
+ LONG_L s0, TI_REGS($28)
+ LONG_S sp, TI_REGS($28)
+- PTR_LA ra, ret_from_irq
+- jr v0
++
++ /*
++ * SAVE_ALL ensures we are using a valid kernel stack for the thread.
++ * Check if we are already using the IRQ stack.
++ */
++ move s1, sp # Preserve the sp
++
++ /* Get IRQ stack for this CPU */
++ ASM_CPUID_MFC0 k0, ASM_SMP_CPUID_REG
++#if defined(CONFIG_32BIT) || defined(KBUILD_64BIT_SYM32)
++ lui k1, %hi(irq_stack)
++#else
++ lui k1, %highest(irq_stack)
++ daddiu k1, %higher(irq_stack)
++ dsll k1, 16
++ daddiu k1, %hi(irq_stack)
++ dsll k1, 16
++#endif
++ LONG_SRL k0, SMP_CPUID_PTRSHIFT
++ LONG_ADDU k1, k0
++ LONG_L t0, %lo(irq_stack)(k1)
++
++ # Check if already on IRQ stack
++ PTR_LI t1, ~(_THREAD_SIZE-1)
++ and t1, t1, sp
++ beq t0, t1, 2f
++
++ /* Switch to IRQ stack */
++ li t1, _IRQ_STACK_SIZE
++ PTR_ADD sp, t0, t1
++
++2:
++ jalr v0
++
++ /* Restore sp */
++ move sp, s1
++
++ j ret_from_irq
+ END(except_vec_vi_handler)
+
+ /*
+diff --git a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
+index f8f5836eb3c1..ba150c755fcc 100644
+--- a/arch/mips/kernel/irq.c
++++ b/arch/mips/kernel/irq.c
+@@ -25,6 +25,8 @@
+ #include <linux/atomic.h>
+ #include <linux/uaccess.h>
+
++void *irq_stack[NR_CPUS];
++
+ /*
+ * 'what should we do if we get a hw irq event on an illegal vector'.
+ * each architecture has to answer this themselves.
+@@ -58,6 +60,15 @@ void __init init_IRQ(void)
+ clear_c0_status(ST0_IM);
+
+ arch_init_irq();
++
++ for_each_possible_cpu(i) {
++ int irq_pages = IRQ_STACK_SIZE / PAGE_SIZE;
++ void *s = (void *)__get_free_pages(GFP_KERNEL, irq_pages);
++
++ irq_stack[i] = s;
++ pr_debug("CPU%d IRQ stack at 0x%p - 0x%p\n", i,
++ irq_stack[i], irq_stack[i] + IRQ_STACK_SIZE);
++ }
+ }
+
+ #ifdef CONFIG_DEBUG_STACKOVERFLOW
+diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
+index 7d80447e5d03..efa1df52c616 100644
+--- a/arch/mips/kernel/process.c
++++ b/arch/mips/kernel/process.c
+@@ -33,6 +33,7 @@
+ #include <asm/dsemul.h>
+ #include <asm/dsp.h>
+ #include <asm/fpu.h>
++#include <asm/irq.h>
+ #include <asm/msa.h>
+ #include <asm/pgtable.h>
+ #include <asm/mipsregs.h>
+@@ -556,7 +557,19 @@ EXPORT_SYMBOL(unwind_stack_by_address);
+ unsigned long unwind_stack(struct task_struct *task, unsigned long *sp,
+ unsigned long pc, unsigned long *ra)
+ {
+- unsigned long stack_page = (unsigned long)task_stack_page(task);
++ unsigned long stack_page = 0;
++ int cpu;
++
++ for_each_possible_cpu(cpu) {
++ if (on_irq_stack(cpu, *sp)) {
++ stack_page = (unsigned long)irq_stack[cpu];
++ break;
++ }
++ }
++
++ if (!stack_page)
++ stack_page = (unsigned long)task_stack_page(task);
++
+ return unwind_stack_by_address(stack_page, sp, pc, ra);
+ }
+ #endif
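
The unwind change above just picks a different stack base when the saved stack pointer lies inside a CPU's dedicated IRQ stack; the interesting piece is the bounds test. A standalone, plain-C illustration of that check (the stack size and CPU count are made-up values):

#include <stdbool.h>
#include <stdio.h>

#define IRQ_STACK_SIZE  (16 * 1024)     /* stands in for THREAD_SIZE */
#define NR_CPUS         2

static void *irq_stack[NR_CPUS];

static bool on_irq_stack(int cpu, unsigned long sp)
{
        unsigned long low  = (unsigned long)irq_stack[cpu];
        unsigned long high = low + IRQ_STACK_SIZE;

        return low <= sp && sp <= high;
}

int main(void)
{
        static char stack0[IRQ_STACK_SIZE];

        irq_stack[0] = stack0;
        printf("%d\n", on_irq_stack(0, (unsigned long)stack0 + 128));  /* 1: inside  */
        printf("%d\n", on_irq_stack(0, (unsigned long)stack0 - 64));   /* 0: outside */
        return 0;
}
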
+diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
+index 32100c4851dd..49cbdcba7883 100644
+--- a/drivers/crypto/caam/caampkc.c
++++ b/drivers/crypto/caam/caampkc.c
+@@ -506,7 +506,7 @@ static int caam_rsa_init_tfm(struct crypto_akcipher *tfm)
+ ctx->dev = caam_jr_alloc();
+
+ if (IS_ERR(ctx->dev)) {
+- dev_err(ctx->dev, "Job Ring Device allocation for transform failed\n");
++ pr_err("Job Ring Device allocation for transform failed\n");
+ return PTR_ERR(ctx->dev);
+ }
+
+diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
+index 755109841cfd..6092252ce6ca 100644
+--- a/drivers/crypto/caam/ctrl.c
++++ b/drivers/crypto/caam/ctrl.c
+@@ -282,7 +282,8 @@ static int deinstantiate_rng(struct device *ctrldev, int state_handle_mask)
+ /* Try to run it through DECO0 */
+ ret = run_descriptor_deco0(ctrldev, desc, &status);
+
+- if (ret || status) {
++ if (ret ||
++ (status && status != JRSTA_SSRC_JUMP_HALT_CC)) {
+ dev_err(ctrldev,
+ "Failed to deinstantiate RNG4 SH%d\n",
+ sh_idx);
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index e72e64484131..686dc3e7eb0b 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -303,6 +303,9 @@ static const struct file_operations dma_buf_fops = {
+ .llseek = dma_buf_llseek,
+ .poll = dma_buf_poll,
+ .unlocked_ioctl = dma_buf_ioctl,
++#ifdef CONFIG_COMPAT
++ .compat_ioctl = dma_buf_ioctl,
++#endif
+ };
+
+ /*
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index f02da12f2860..8be958fee160 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -248,6 +248,7 @@ static int i915_getparam(struct drm_device *dev, void *data,
+ case I915_PARAM_IRQ_ACTIVE:
+ case I915_PARAM_ALLOW_BATCHBUFFER:
+ case I915_PARAM_LAST_DISPATCH:
++ case I915_PARAM_HAS_EXEC_CONSTANTS:
+ /* Reject all old ums/dri params. */
+ return -ENODEV;
+ case I915_PARAM_CHIPSET_ID:
+@@ -274,9 +275,6 @@ static int i915_getparam(struct drm_device *dev, void *data,
+ case I915_PARAM_HAS_BSD2:
+ value = !!dev_priv->engine[VCS2];
+ break;
+- case I915_PARAM_HAS_EXEC_CONSTANTS:
+- value = INTEL_GEN(dev_priv) >= 4;
+- break;
+ case I915_PARAM_HAS_LLC:
+ value = HAS_LLC(dev_priv);
+ break;
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 8493e19b563a..4a1ed776b41d 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -1263,7 +1263,7 @@ struct intel_gen6_power_mgmt {
+ unsigned boosts;
+
+ /* manual wa residency calculations */
+- struct intel_rps_ei up_ei, down_ei;
++ struct intel_rps_ei ei;
+
+ /*
+ * Protects RPS/RC6 register access and PCU communication.
+@@ -1805,8 +1805,6 @@ struct drm_i915_private {
+
+ const struct intel_device_info info;
+
+- int relative_constants_mode;
+-
+ void __iomem *regs;
+
+ struct intel_uncore uncore;
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index 7f4a54b94447..b7146494d53f 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -2184,6 +2184,7 @@ i915_gem_object_truncate(struct drm_i915_gem_object *obj)
+ */
+ shmem_truncate_range(file_inode(obj->base.filp), 0, (loff_t)-1);
+ obj->mm.madv = __I915_MADV_PURGED;
++ obj->mm.pages = ERR_PTR(-EFAULT);
+ }
+
+ /* Try to discard unwanted pages */
+@@ -2283,7 +2284,9 @@ void __i915_gem_object_put_pages(struct drm_i915_gem_object *obj,
+
+ __i915_gem_object_reset_page_iter(obj);
+
+- obj->ops->put_pages(obj, pages);
++ if (!IS_ERR(pages))
++ obj->ops->put_pages(obj, pages);
++
+ unlock:
+ mutex_unlock(&obj->mm.lock);
+ }
+@@ -2501,7 +2504,7 @@ int __i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
+ if (err)
+ return err;
+
+- if (unlikely(!obj->mm.pages)) {
++ if (unlikely(IS_ERR_OR_NULL(obj->mm.pages))) {
+ err = ____i915_gem_object_get_pages(obj);
+ if (err)
+ goto unlock;
+@@ -2579,7 +2582,7 @@ void *i915_gem_object_pin_map(struct drm_i915_gem_object *obj,
+
+ pinned = true;
+ if (!atomic_inc_not_zero(&obj->mm.pages_pin_count)) {
+- if (unlikely(!obj->mm.pages)) {
++ if (unlikely(IS_ERR_OR_NULL(obj->mm.pages))) {
+ ret = ____i915_gem_object_get_pages(obj);
+ if (ret)
+ goto err_unlock;
+@@ -3003,6 +3006,16 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
+ args->timeout_ns -= ktime_to_ns(ktime_sub(ktime_get(), start));
+ if (args->timeout_ns < 0)
+ args->timeout_ns = 0;
++
++ /*
++ * Apparently ktime isn't accurate enough and occasionally has a
++ * bit of mismatch in the jiffies<->nsecs<->ktime loop. So patch
++ * things up to make the test happy. We allow up to 1 jiffy.
++ *
++ * This is a regression from the timespec->ktime conversion.
++ */
++ if (ret == -ETIME && !nsecs_to_jiffies(args->timeout_ns))
++ args->timeout_ns = 0;
+ }
+
+ i915_gem_object_put(obj);
+@@ -4554,8 +4567,6 @@ i915_gem_load_init(struct drm_device *dev)
+ init_waitqueue_head(&dev_priv->gpu_error.wait_queue);
+ init_waitqueue_head(&dev_priv->gpu_error.reset_queue);
+
+- dev_priv->relative_constants_mode = I915_EXEC_CONSTANTS_REL_GENERAL;
+-
+ init_waitqueue_head(&dev_priv->pending_flip_queue);
+
+ dev_priv->mm.interruptible = true;
+diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+index b8b877c91b0a..3d37a15531ad 100644
+--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+@@ -1410,10 +1410,7 @@ execbuf_submit(struct i915_execbuffer_params *params,
+ struct drm_i915_gem_execbuffer2 *args,
+ struct list_head *vmas)
+ {
+- struct drm_i915_private *dev_priv = params->request->i915;
+ u64 exec_start, exec_len;
+- int instp_mode;
+- u32 instp_mask;
+ int ret;
+
+ ret = i915_gem_execbuffer_move_to_gpu(params->request, vmas);
+@@ -1424,56 +1421,11 @@ execbuf_submit(struct i915_execbuffer_params *params,
+ if (ret)
+ return ret;
+
+- instp_mode = args->flags & I915_EXEC_CONSTANTS_MASK;
+- instp_mask = I915_EXEC_CONSTANTS_MASK;
+- switch (instp_mode) {
+- case I915_EXEC_CONSTANTS_REL_GENERAL:
+- case I915_EXEC_CONSTANTS_ABSOLUTE:
+- case I915_EXEC_CONSTANTS_REL_SURFACE:
+- if (instp_mode != 0 && params->engine->id != RCS) {
+- DRM_DEBUG("non-0 rel constants mode on non-RCS\n");
+- return -EINVAL;
+- }
+-
+- if (instp_mode != dev_priv->relative_constants_mode) {
+- if (INTEL_INFO(dev_priv)->gen < 4) {
+- DRM_DEBUG("no rel constants on pre-gen4\n");
+- return -EINVAL;
+- }
+-
+- if (INTEL_INFO(dev_priv)->gen > 5 &&
+- instp_mode == I915_EXEC_CONSTANTS_REL_SURFACE) {
+- DRM_DEBUG("rel surface constants mode invalid on gen5+\n");
+- return -EINVAL;
+- }
+-
+- /* The HW changed the meaning on this bit on gen6 */
+- if (INTEL_INFO(dev_priv)->gen >= 6)
+- instp_mask &= ~I915_EXEC_CONSTANTS_REL_SURFACE;
+- }
+- break;
+- default:
+- DRM_DEBUG("execbuf with unknown constants: %d\n", instp_mode);
++ if (args->flags & I915_EXEC_CONSTANTS_MASK) {
++ DRM_DEBUG("I915_EXEC_CONSTANTS_* unsupported\n");
+ return -EINVAL;
+ }
+
+- if (params->engine->id == RCS &&
+- instp_mode != dev_priv->relative_constants_mode) {
+- struct intel_ring *ring = params->request->ring;
+-
+- ret = intel_ring_begin(params->request, 4);
+- if (ret)
+- return ret;
+-
+- intel_ring_emit(ring, MI_NOOP);
+- intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
+- intel_ring_emit_reg(ring, INSTPM);
+- intel_ring_emit(ring, instp_mask << 16 | instp_mode);
+- intel_ring_advance(ring);
+-
+- dev_priv->relative_constants_mode = instp_mode;
+- }
+-
+ if (args->flags & I915_EXEC_GEN7_SOL_RESET) {
+ ret = i915_reset_gen7_sol_offsets(params->request);
+ if (ret)
+diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
+index 401006b4c6a3..d5d2b4c6ed38 100644
+--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
++++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
+@@ -263,7 +263,7 @@ unsigned long i915_gem_shrink_all(struct drm_i915_private *dev_priv)
+ I915_SHRINK_BOUND |
+ I915_SHRINK_UNBOUND |
+ I915_SHRINK_ACTIVE);
+- rcu_barrier(); /* wait until our RCU delayed slab frees are completed */
++ synchronize_rcu(); /* wait for our earlier RCU delayed slab frees */
+
+ return freed;
+ }
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index f914581b1729..de6710f02d95 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -1046,68 +1046,51 @@ static void vlv_c0_read(struct drm_i915_private *dev_priv,
+ ei->media_c0 = I915_READ(VLV_MEDIA_C0_COUNT);
+ }
+
+-static bool vlv_c0_above(struct drm_i915_private *dev_priv,
+- const struct intel_rps_ei *old,
+- const struct intel_rps_ei *now,
+- int threshold)
+-{
+- u64 time, c0;
+- unsigned int mul = 100;
+-
+- if (old->cz_clock == 0)
+- return false;
+-
+- if (I915_READ(VLV_COUNTER_CONTROL) & VLV_COUNT_RANGE_HIGH)
+- mul <<= 8;
+-
+- time = now->cz_clock - old->cz_clock;
+- time *= threshold * dev_priv->czclk_freq;
+-
+- /* Workload can be split between render + media, e.g. SwapBuffers
+- * being blitted in X after being rendered in mesa. To account for
+- * this we need to combine both engines into our activity counter.
+- */
+- c0 = now->render_c0 - old->render_c0;
+- c0 += now->media_c0 - old->media_c0;
+- c0 *= mul * VLV_CZ_CLOCK_TO_MILLI_SEC;
+-
+- return c0 >= time;
+-}
+-
+ void gen6_rps_reset_ei(struct drm_i915_private *dev_priv)
+ {
+- vlv_c0_read(dev_priv, &dev_priv->rps.down_ei);
+- dev_priv->rps.up_ei = dev_priv->rps.down_ei;
++ memset(&dev_priv->rps.ei, 0, sizeof(dev_priv->rps.ei));
+ }
+
+ static u32 vlv_wa_c0_ei(struct drm_i915_private *dev_priv, u32 pm_iir)
+ {
++ const struct intel_rps_ei *prev = &dev_priv->rps.ei;
+ struct intel_rps_ei now;
+ u32 events = 0;
+
+- if ((pm_iir & (GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_UP_EI_EXPIRED)) == 0)
++ if ((pm_iir & GEN6_PM_RP_UP_EI_EXPIRED) == 0)
+ return 0;
+
+ vlv_c0_read(dev_priv, &now);
+ if (now.cz_clock == 0)
+ return 0;
+
+- if (pm_iir & GEN6_PM_RP_DOWN_EI_EXPIRED) {
+- if (!vlv_c0_above(dev_priv,
+- &dev_priv->rps.down_ei, &now,
+- dev_priv->rps.down_threshold))
+- events |= GEN6_PM_RP_DOWN_THRESHOLD;
+- dev_priv->rps.down_ei = now;
+- }
++ if (prev->cz_clock) {
++ u64 time, c0;
++ unsigned int mul;
++
++ mul = VLV_CZ_CLOCK_TO_MILLI_SEC * 100; /* scale to threshold% */
++ if (I915_READ(VLV_COUNTER_CONTROL) & VLV_COUNT_RANGE_HIGH)
++ mul <<= 8;
+
+- if (pm_iir & GEN6_PM_RP_UP_EI_EXPIRED) {
+- if (vlv_c0_above(dev_priv,
+- &dev_priv->rps.up_ei, &now,
+- dev_priv->rps.up_threshold))
+- events |= GEN6_PM_RP_UP_THRESHOLD;
+- dev_priv->rps.up_ei = now;
++ time = now.cz_clock - prev->cz_clock;
++ time *= dev_priv->czclk_freq;
++
++ /* Workload can be split between render + media,
++ * e.g. SwapBuffers being blitted in X after being rendered in
++ * mesa. To account for this we need to combine both engines
++ * into our activity counter.
++ */
++ c0 = now.render_c0 - prev->render_c0;
++ c0 += now.media_c0 - prev->media_c0;
++ c0 *= mul;
++
++ if (c0 > time * dev_priv->rps.up_threshold)
++ events = GEN6_PM_RP_UP_THRESHOLD;
++ else if (c0 < time * dev_priv->rps.down_threshold)
++ events = GEN6_PM_RP_DOWN_THRESHOLD;
+ }
+
++ dev_priv->rps.ei = now;
+ return events;
+ }
+
+@@ -4178,7 +4161,7 @@ void intel_irq_init(struct drm_i915_private *dev_priv)
+ /* Let's track the enabled rps events */
+ if (IS_VALLEYVIEW(dev_priv))
+ /* WaGsvRC0ResidencyMethod:vlv */
+- dev_priv->pm_rps_events = GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_UP_EI_EXPIRED;
++ dev_priv->pm_rps_events = GEN6_PM_RP_UP_EI_EXPIRED;
+ else
+ dev_priv->pm_rps_events = GEN6_PM_RPS_EVENTS;
+
+@@ -4216,6 +4199,16 @@ void intel_irq_init(struct drm_i915_private *dev_priv)
+ if (!IS_GEN2(dev_priv))
+ dev->vblank_disable_immediate = true;
+
++ /* Most platforms treat the display irq block as an always-on
++ * power domain. vlv/chv can disable it at runtime and need
++ * special care to avoid writing any of the display block registers
++ * outside of the power domain. We defer setting up the display irqs
++ * in this case to the runtime pm.
++ */
++ dev_priv->display_irqs_enabled = true;
++ if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
++ dev_priv->display_irqs_enabled = false;
++
+ dev->driver->get_vblank_timestamp = i915_get_vblank_timestamp;
+ dev->driver->get_scanout_position = i915_get_crtc_scanoutpos;
+
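
The vlv_wa_c0_ei() rewrite in this file folds the old up/down helpers into one comparison: accumulate render plus media C0 residency over the evaluation interval and compare it against the elapsed time scaled by the up and down thresholds, both percentages. A toy, plain-C rendering of that decision with the clock scaling stripped out:

#include <stdint.h>
#include <stdio.h>

enum { EVT_NONE, EVT_UP, EVT_DOWN };

/* up_pct/down_pct mirror rps.up_threshold and rps.down_threshold (percent). */
static int c0_event(uint64_t c0_ticks, uint64_t window_ticks,
                    unsigned int up_pct, unsigned int down_pct)
{
        uint64_t busy = c0_ticks * 100;         /* residency scaled to percent */

        if (busy > window_ticks * up_pct)
                return EVT_UP;                  /* busy enough: ask for a higher freq */
        if (busy < window_ticks * down_pct)
                return EVT_DOWN;                /* mostly idle: ask for a lower freq */
        return EVT_NONE;
}

int main(void)
{
        printf("%d\n", c0_event(950, 1000, 90, 70));    /* 95% busy -> 1 (EVT_UP)   */
        printf("%d\n", c0_event(300, 1000, 90, 70));    /* 30% busy -> 2 (EVT_DOWN) */
        return 0;
}
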
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index 891c86aef99d..59231312c4e0 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -3677,10 +3677,6 @@ static void intel_update_pipe_config(struct intel_crtc *crtc,
+ /* drm_atomic_helper_update_legacy_modeset_state might not be called. */
+ crtc->base.mode = crtc->base.state->mode;
+
+- DRM_DEBUG_KMS("Updating pipe size %ix%i -> %ix%i\n",
+- old_crtc_state->pipe_src_w, old_crtc_state->pipe_src_h,
+- pipe_config->pipe_src_w, pipe_config->pipe_src_h);
+-
+ /*
+ * Update pipe size and adjust fitter if needed: the reason for this is
+ * that in compute_mode_changes we check the native mode (not the pfit
+@@ -4805,23 +4801,17 @@ static void skylake_pfit_enable(struct intel_crtc *crtc)
+ struct intel_crtc_scaler_state *scaler_state =
+ &crtc->config->scaler_state;
+
+- DRM_DEBUG_KMS("for crtc_state = %p\n", crtc->config);
+-
+ if (crtc->config->pch_pfit.enabled) {
+ int id;
+
+- if (WARN_ON(crtc->config->scaler_state.scaler_id < 0)) {
+- DRM_ERROR("Requesting pfit without getting a scaler first\n");
++ if (WARN_ON(crtc->config->scaler_state.scaler_id < 0))
+ return;
+- }
+
+ id = scaler_state->scaler_id;
+ I915_WRITE(SKL_PS_CTRL(pipe, id), PS_SCALER_EN |
+ PS_FILTER_MEDIUM | scaler_state->scalers[id].mode);
+ I915_WRITE(SKL_PS_WIN_POS(pipe, id), crtc->config->pch_pfit.pos);
+ I915_WRITE(SKL_PS_WIN_SZ(pipe, id), crtc->config->pch_pfit.size);
+-
+- DRM_DEBUG_KMS("for crtc_state = %p scaler_id = %d\n", crtc->config, id);
+ }
+ }
+
+@@ -14895,17 +14885,19 @@ static void intel_begin_crtc_commit(struct drm_crtc *crtc,
+ to_intel_atomic_state(old_crtc_state->state);
+ bool modeset = needs_modeset(crtc->state);
+
++ if (!modeset &&
++ (intel_cstate->base.color_mgmt_changed ||
++ intel_cstate->update_pipe)) {
++ intel_color_set_csc(crtc->state);
++ intel_color_load_luts(crtc->state);
++ }
++
+ /* Perform vblank evasion around commit operation */
+ intel_pipe_update_start(intel_crtc);
+
+ if (modeset)
+ goto out;
+
+- if (crtc->state->color_mgmt_changed || to_intel_crtc_state(crtc->state)->update_pipe) {
+- intel_color_set_csc(crtc->state);
+- intel_color_load_luts(crtc->state);
+- }
+-
+ if (intel_cstate->update_pipe)
+ intel_update_pipe_config(intel_crtc, old_intel_cstate);
+ else if (INTEL_GEN(dev_priv) >= 9)
+@@ -16497,12 +16489,11 @@ int intel_modeset_init(struct drm_device *dev)
+ }
+ }
+
+- intel_update_czclk(dev_priv);
+- intel_update_cdclk(dev_priv);
+- dev_priv->atomic_cdclk_freq = dev_priv->cdclk_freq;
+-
+ intel_shared_dpll_init(dev);
+
++ intel_update_czclk(dev_priv);
++ intel_modeset_init_hw(dev);
++
+ if (dev_priv->max_cdclk_freq == 0)
+ intel_update_max_cdclk(dev_priv);
+
+@@ -17057,8 +17048,6 @@ void intel_modeset_gem_init(struct drm_device *dev)
+
+ intel_init_gt_powersave(dev_priv);
+
+- intel_modeset_init_hw(dev);
+-
+ intel_setup_overlay(dev_priv);
+ }
+
+diff --git a/drivers/gpu/drm/i915/intel_fbdev.c b/drivers/gpu/drm/i915/intel_fbdev.c
+index f4a8c4fc57c4..c20ca8e08390 100644
+--- a/drivers/gpu/drm/i915/intel_fbdev.c
++++ b/drivers/gpu/drm/i915/intel_fbdev.c
+@@ -357,14 +357,13 @@ static bool intel_fb_initial_config(struct drm_fb_helper *fb_helper,
+ bool *enabled, int width, int height)
+ {
+ struct drm_i915_private *dev_priv = to_i915(fb_helper->dev);
+- unsigned long conn_configured, mask;
++ unsigned long conn_configured, conn_seq, mask;
+ unsigned int count = min(fb_helper->connector_count, BITS_PER_LONG);
+ int i, j;
+ bool *save_enabled;
+ bool fallback = true;
+ int num_connectors_enabled = 0;
+ int num_connectors_detected = 0;
+- int pass = 0;
+
+ save_enabled = kcalloc(count, sizeof(bool), GFP_KERNEL);
+ if (!save_enabled)
+@@ -374,6 +373,7 @@ static bool intel_fb_initial_config(struct drm_fb_helper *fb_helper,
+ mask = BIT(count) - 1;
+ conn_configured = 0;
+ retry:
++ conn_seq = conn_configured;
+ for (i = 0; i < count; i++) {
+ struct drm_fb_helper_connector *fb_conn;
+ struct drm_connector *connector;
+@@ -387,7 +387,7 @@ static bool intel_fb_initial_config(struct drm_fb_helper *fb_helper,
+ if (conn_configured & BIT(i))
+ continue;
+
+- if (pass == 0 && !connector->has_tile)
++ if (conn_seq == 0 && !connector->has_tile)
+ continue;
+
+ if (connector->status == connector_status_connected)
+@@ -498,10 +498,8 @@ static bool intel_fb_initial_config(struct drm_fb_helper *fb_helper,
+ conn_configured |= BIT(i);
+ }
+
+- if ((conn_configured & mask) != mask) {
+- pass++;
++ if ((conn_configured & mask) != mask && conn_configured != conn_seq)
+ goto retry;
+- }
+
+ /*
+ * If the BIOS didn't enable everything it could, fall back to have the
+diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
+index fb88e32e25a3..fe8f8a4c384e 100644
+--- a/drivers/gpu/drm/i915/intel_hdmi.c
++++ b/drivers/gpu/drm/i915/intel_hdmi.c
+@@ -1293,16 +1293,34 @@ intel_hdmi_mode_valid(struct drm_connector *connector,
+
+ static bool hdmi_12bpc_possible(struct intel_crtc_state *crtc_state)
+ {
+- struct drm_device *dev = crtc_state->base.crtc->dev;
++ struct drm_i915_private *dev_priv =
++ to_i915(crtc_state->base.crtc->dev);
++ struct drm_atomic_state *state = crtc_state->base.state;
++ struct drm_connector_state *connector_state;
++ struct drm_connector *connector;
++ int i;
+
+- if (HAS_GMCH_DISPLAY(to_i915(dev)))
++ if (HAS_GMCH_DISPLAY(dev_priv))
+ return false;
+
+ /*
+ * HDMI 12bpc affects the clocks, so it's only possible
+ * when not cloning with other encoder types.
+ */
+- return crtc_state->output_types == 1 << INTEL_OUTPUT_HDMI;
++ if (crtc_state->output_types != 1 << INTEL_OUTPUT_HDMI)
++ return false;
++
++ for_each_connector_in_state(state, connector, connector_state, i) {
++ const struct drm_display_info *info = &connector->display_info;
++
++ if (connector_state->crtc != crtc_state->base.crtc)
++ continue;
++
++ if ((info->edid_hdmi_dc_modes & DRM_EDID_HDMI_DC_36) == 0)
++ return false;
++ }
++
++ return true;
+ }
+
+ bool intel_hdmi_compute_config(struct intel_encoder *encoder,
+diff --git a/drivers/gpu/drm/i915/intel_hotplug.c b/drivers/gpu/drm/i915/intel_hotplug.c
+index 3d546c019de0..b782f22856f8 100644
+--- a/drivers/gpu/drm/i915/intel_hotplug.c
++++ b/drivers/gpu/drm/i915/intel_hotplug.c
+@@ -219,7 +219,7 @@ static void intel_hpd_irq_storm_reenable_work(struct work_struct *work)
+ }
+ }
+ }
+- if (dev_priv->display.hpd_irq_setup)
++ if (dev_priv->display_irqs_enabled && dev_priv->display.hpd_irq_setup)
+ dev_priv->display.hpd_irq_setup(dev_priv);
+ spin_unlock_irq(&dev_priv->irq_lock);
+
+@@ -425,7 +425,7 @@ void intel_hpd_irq_handler(struct drm_i915_private *dev_priv,
+ }
+ }
+
+- if (storm_detected)
++ if (storm_detected && dev_priv->display_irqs_enabled)
+ dev_priv->display.hpd_irq_setup(dev_priv);
+ spin_unlock(&dev_priv->irq_lock);
+
+@@ -471,10 +471,12 @@ void intel_hpd_init(struct drm_i915_private *dev_priv)
+ * Interrupt setup is already guaranteed to be single-threaded, this is
+ * just to make the assert_spin_locked checks happy.
+ */
+- spin_lock_irq(&dev_priv->irq_lock);
+- if (dev_priv->display.hpd_irq_setup)
+- dev_priv->display.hpd_irq_setup(dev_priv);
+- spin_unlock_irq(&dev_priv->irq_lock);
++ if (dev_priv->display_irqs_enabled && dev_priv->display.hpd_irq_setup) {
++ spin_lock_irq(&dev_priv->irq_lock);
++ if (dev_priv->display_irqs_enabled)
++ dev_priv->display.hpd_irq_setup(dev_priv);
++ spin_unlock_irq(&dev_priv->irq_lock);
++ }
+ }
+
+ static void i915_hpd_poll_init_work(struct work_struct *work)
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index ae2c0bb4b2e8..3af22cf865f4 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -4876,6 +4876,12 @@ static void gen6_set_rps_thresholds(struct drm_i915_private *dev_priv, u8 val)
+ break;
+ }
+
++ /* When byt can survive without system hang with dynamic
++ * sw freq adjustments, this restriction can be lifted.
++ */
++ if (IS_VALLEYVIEW(dev_priv))
++ goto skip_hw_write;
++
+ I915_WRITE(GEN6_RP_UP_EI,
+ GT_INTERVAL_FROM_US(dev_priv, ei_up));
+ I915_WRITE(GEN6_RP_UP_THRESHOLD,
+@@ -4896,6 +4902,7 @@ static void gen6_set_rps_thresholds(struct drm_i915_private *dev_priv, u8 val)
+ GEN6_RP_UP_BUSY_AVG |
+ GEN6_RP_DOWN_IDLE_AVG);
+
++skip_hw_write:
+ dev_priv->rps.power = new_power;
+ dev_priv->rps.up_threshold = threshold_up;
+ dev_priv->rps.down_threshold = threshold_down;
+@@ -4906,8 +4913,9 @@ static u32 gen6_rps_pm_mask(struct drm_i915_private *dev_priv, u8 val)
+ {
+ u32 mask = 0;
+
++ /* We use UP_EI_EXPIRED interupts for both up/down in manual mode */
+ if (val > dev_priv->rps.min_freq_softlimit)
+- mask |= GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_DOWN_THRESHOLD | GEN6_PM_RP_DOWN_TIMEOUT;
++ mask |= GEN6_PM_RP_UP_EI_EXPIRED | GEN6_PM_RP_DOWN_THRESHOLD | GEN6_PM_RP_DOWN_TIMEOUT;
+ if (val < dev_priv->rps.max_freq_softlimit)
+ mask |= GEN6_PM_RP_UP_EI_EXPIRED | GEN6_PM_RP_UP_THRESHOLD;
+
+@@ -5007,7 +5015,7 @@ void gen6_rps_busy(struct drm_i915_private *dev_priv)
+ {
+ mutex_lock(&dev_priv->rps.hw_lock);
+ if (dev_priv->rps.enabled) {
+- if (dev_priv->pm_rps_events & (GEN6_PM_RP_DOWN_EI_EXPIRED | GEN6_PM_RP_UP_EI_EXPIRED))
++ if (dev_priv->pm_rps_events & GEN6_PM_RP_UP_EI_EXPIRED)
+ gen6_rps_reset_ei(dev_priv);
+ I915_WRITE(GEN6_PMINTRMSK,
+ gen6_rps_pm_mask(dev_priv, dev_priv->rps.cur_freq));
+@@ -7895,10 +7903,10 @@ static bool skl_pcode_try_request(struct drm_i915_private *dev_priv, u32 mbox,
+ * @timeout_base_ms: timeout for polling with preemption enabled
+ *
+ * Keep resending the @request to @mbox until PCODE acknowledges it, PCODE
+- * reports an error or an overall timeout of @timeout_base_ms+10 ms expires.
++ * reports an error or an overall timeout of @timeout_base_ms+50 ms expires.
+ * The request is acknowledged once the PCODE reply dword equals @reply after
+ * applying @reply_mask. Polling is first attempted with preemption enabled
+- * for @timeout_base_ms and if this times out for another 10 ms with
++ * for @timeout_base_ms and if this times out for another 50 ms with
+ * preemption disabled.
+ *
+ * Returns 0 on success, %-ETIMEDOUT in case of a timeout, <0 in case of some
+@@ -7934,14 +7942,15 @@ int skl_pcode_request(struct drm_i915_private *dev_priv, u32 mbox, u32 request,
+ * worst case) _and_ PCODE was busy for some reason even after a
+ * (queued) request and @timeout_base_ms delay. As a workaround retry
+ * the poll with preemption disabled to maximize the number of
+- * requests. Increase the timeout from @timeout_base_ms to 10ms to
++ * requests. Increase the timeout from @timeout_base_ms to 50ms to
+ * account for interrupts that could reduce the number of these
+- * requests.
++ * requests, and for any quirks of the PCODE firmware that delays
++ * the request completion.
+ */
+ DRM_DEBUG_KMS("PCODE timeout, retrying with preemption disabled\n");
+ WARN_ON_ONCE(timeout_base_ms > 3);
+ preempt_disable();
+- ret = wait_for_atomic(COND, 10);
++ ret = wait_for_atomic(COND, 50);
+ preempt_enable();
+
+ out:
+diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
+index 0bffd3f0c15d..2e4fbed3a826 100644
+--- a/drivers/gpu/drm/i915/intel_uncore.c
++++ b/drivers/gpu/drm/i915/intel_uncore.c
+@@ -119,6 +119,8 @@ fw_domains_get(struct drm_i915_private *dev_priv, enum forcewake_domains fw_doma
+
+ for_each_fw_domain_masked(d, fw_domains, dev_priv)
+ fw_domain_wait_ack(d);
++
++ dev_priv->uncore.fw_domains_active |= fw_domains;
+ }
+
+ static void
+@@ -130,6 +132,8 @@ fw_domains_put(struct drm_i915_private *dev_priv, enum forcewake_domains fw_doma
+ fw_domain_put(d);
+ fw_domain_posting_read(d);
+ }
++
++ dev_priv->uncore.fw_domains_active &= ~fw_domains;
+ }
+
+ static void
+@@ -240,10 +244,8 @@ intel_uncore_fw_release_timer(struct hrtimer *timer)
+ if (WARN_ON(domain->wake_count == 0))
+ domain->wake_count++;
+
+- if (--domain->wake_count == 0) {
++ if (--domain->wake_count == 0)
+ dev_priv->uncore.funcs.force_wake_put(dev_priv, domain->mask);
+- dev_priv->uncore.fw_domains_active &= ~domain->mask;
+- }
+
+ spin_unlock_irqrestore(&dev_priv->uncore.lock, irqflags);
+
+@@ -455,10 +457,8 @@ static void __intel_uncore_forcewake_get(struct drm_i915_private *dev_priv,
+ fw_domains &= ~domain->mask;
+ }
+
+- if (fw_domains) {
++ if (fw_domains)
+ dev_priv->uncore.funcs.force_wake_get(dev_priv, fw_domains);
+- dev_priv->uncore.fw_domains_active |= fw_domains;
+- }
+ }
+
+ /**
+@@ -962,7 +962,6 @@ static noinline void ___force_wake_auto(struct drm_i915_private *dev_priv,
+ fw_domain_arm_timer(domain);
+
+ dev_priv->uncore.funcs.force_wake_get(dev_priv, fw_domains);
+- dev_priv->uncore.fw_domains_active |= fw_domains;
+ }
+
+ static inline void __force_wake_auto(struct drm_i915_private *dev_priv,
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c b/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
+index 6005e14213ca..662705e31136 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00usb.c
+@@ -319,10 +319,8 @@ static bool rt2x00usb_kick_tx_entry(struct queue_entry *entry, void *data)
+ entry->skb->data, length,
+ rt2x00usb_interrupt_txdone, entry);
+
+- usb_anchor_urb(entry_priv->urb, rt2x00dev->anchor);
+ status = usb_submit_urb(entry_priv->urb, GFP_ATOMIC);
+ if (status) {
+- usb_unanchor_urb(entry_priv->urb);
+ if (status == -ENODEV)
+ clear_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags);
+ set_bit(ENTRY_DATA_IO_FAILED, &entry->flags);
+@@ -410,10 +408,8 @@ static bool rt2x00usb_kick_rx_entry(struct queue_entry *entry, void *data)
+ entry->skb->data, entry->skb->len,
+ rt2x00usb_interrupt_rxdone, entry);
+
+- usb_anchor_urb(entry_priv->urb, rt2x00dev->anchor);
+ status = usb_submit_urb(entry_priv->urb, GFP_ATOMIC);
+ if (status) {
+- usb_unanchor_urb(entry_priv->urb);
+ if (status == -ENODEV)
+ clear_bit(DEVICE_STATE_PRESENT, &rt2x00dev->flags);
+ set_bit(ENTRY_DATA_IO_FAILED, &entry->flags);
+@@ -824,10 +820,6 @@ int rt2x00usb_probe(struct usb_interface *usb_intf,
+ if (retval)
+ goto exit_free_device;
+
+- retval = rt2x00lib_probe_dev(rt2x00dev);
+- if (retval)
+- goto exit_free_reg;
+-
+ rt2x00dev->anchor = devm_kmalloc(&usb_dev->dev,
+ sizeof(struct usb_anchor),
+ GFP_KERNEL);
+@@ -835,10 +827,17 @@ int rt2x00usb_probe(struct usb_interface *usb_intf,
+ retval = -ENOMEM;
+ goto exit_free_reg;
+ }
+-
+ init_usb_anchor(rt2x00dev->anchor);
++
++ retval = rt2x00lib_probe_dev(rt2x00dev);
++ if (retval)
++ goto exit_free_anchor;
++
+ return 0;
+
++exit_free_anchor:
++ usb_kill_anchored_urbs(rt2x00dev->anchor);
++
+ exit_free_reg:
+ rt2x00usb_free_reg(rt2x00dev);
+
+diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+index e5a6f248697b..15421e625a12 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c
++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+@@ -208,6 +208,10 @@ static bool ff_layout_mirror_valid(struct pnfs_layout_segment *lseg,
+ } else
+ goto outerr;
+ }
++
++ if (IS_ERR(mirror->mirror_ds))
++ goto outerr;
++
+ if (mirror->mirror_ds->ds == NULL) {
+ struct nfs4_deviceid_node *devid;
+ devid = &mirror->mirror_ds->id_node;
+diff --git a/fs/orangefs/devorangefs-req.c b/fs/orangefs/devorangefs-req.c
+index b0ced669427e..c4ab6fdf17a0 100644
+--- a/fs/orangefs/devorangefs-req.c
++++ b/fs/orangefs/devorangefs-req.c
+@@ -400,8 +400,9 @@ static ssize_t orangefs_devreq_write_iter(struct kiocb *iocb,
+ /* remove the op from the in progress hash table */
+ op = orangefs_devreq_remove_op(head.tag);
+ if (!op) {
+- gossip_err("WARNING: No one's waiting for tag %llu\n",
+- llu(head.tag));
++ gossip_debug(GOSSIP_DEV_DEBUG,
++ "%s: No one's waiting for tag %llu\n",
++ __func__, llu(head.tag));
+ return ret;
+ }
+
+diff --git a/fs/orangefs/orangefs-debugfs.c b/fs/orangefs/orangefs-debugfs.c
+index 27e75cf28b3a..791912da97d7 100644
+--- a/fs/orangefs/orangefs-debugfs.c
++++ b/fs/orangefs/orangefs-debugfs.c
+@@ -967,13 +967,13 @@ int orangefs_debugfs_new_client_string(void __user *arg)
+ int ret;
+
+ ret = copy_from_user(&client_debug_array_string,
+- (void __user *)arg,
+- ORANGEFS_MAX_DEBUG_STRING_LEN);
++ (void __user *)arg,
++ ORANGEFS_MAX_DEBUG_STRING_LEN);
+
+ if (ret != 0) {
+ pr_info("%s: CLIENT_STRING: copy_from_user failed\n",
+ __func__);
+- return -EIO;
++ return -EFAULT;
+ }
+
+ /*
+@@ -988,17 +988,18 @@ int orangefs_debugfs_new_client_string(void __user *arg)
+ */
+ client_debug_array_string[ORANGEFS_MAX_DEBUG_STRING_LEN - 1] =
+ '\0';
+-
++
+ pr_info("%s: client debug array string has been received.\n",
+ __func__);
+
+ if (!help_string_initialized) {
+
+ /* Build a proper debug help string. */
+- if (orangefs_prepare_debugfs_help_string(0)) {
++ ret = orangefs_prepare_debugfs_help_string(0);
++ if (ret) {
+ gossip_err("%s: no debug help string \n",
+ __func__);
+- return -EIO;
++ return ret;
+ }
+
+ }
+@@ -1011,7 +1012,7 @@ int orangefs_debugfs_new_client_string(void __user *arg)
+
+ help_string_initialized++;
+
+- return ret;
++ return 0;
+ }
+
+ int orangefs_debugfs_new_debug(void __user *arg)
+diff --git a/fs/orangefs/orangefs-dev-proto.h b/fs/orangefs/orangefs-dev-proto.h
+index a3d84ffee905..f380f9ed1b28 100644
+--- a/fs/orangefs/orangefs-dev-proto.h
++++ b/fs/orangefs/orangefs-dev-proto.h
+@@ -50,8 +50,7 @@
+ * Misc constants. Please retain them as multiples of 8!
+ * Otherwise 32-64 bit interactions will be messed up :)
+ */
+-#define ORANGEFS_MAX_DEBUG_STRING_LEN 0x00000400
+-#define ORANGEFS_MAX_DEBUG_ARRAY_LEN 0x00000800
++#define ORANGEFS_MAX_DEBUG_STRING_LEN 0x00000800
+
+ /*
+ * The maximum number of directory entries in a single request is 96.
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index c59fcc79ba32..5c919933a39b 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -4177,8 +4177,8 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ if (unlikely(!PAGE_ALIGNED(req->tp_block_size)))
+ goto out;
+ if (po->tp_version >= TPACKET_V3 &&
+- (int)(req->tp_block_size -
+- BLK_PLUS_PRIV(req_u->req3.tp_sizeof_priv)) <= 0)
++ req->tp_block_size <=
++ BLK_PLUS_PRIV((u64)req_u->req3.tp_sizeof_priv))
+ goto out;
+ if (unlikely(req->tp_frame_size < po->tp_hdrlen +
+ po->tp_reserve))
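
The af_packet fix is an integer-overflow repair: the old check subtracted in 32 bits and truncated through a signed int, so a tp_sizeof_priv close to UINT_MAX could wrap and slip past it, while the new check compares in 64 bits. A small stand-alone demonstration, with BLK_PLUS_PRIV() reduced to a plain header-size addition:

#include <stdint.h>
#include <stdio.h>

#define BLK_HDR_LEN 48u         /* stand-in for the TPACKET_V3 block header overhead */

int main(void)
{
        uint32_t tp_block_size  = 4096;
        uint32_t tp_sizeof_priv = 0xffffff00;   /* attacker-chosen, near UINT_MAX */

        /* old check: the sum wraps in 32 bits and the (int) result stays positive */
        int old_ok = (int)(tp_block_size - (BLK_HDR_LEN + tp_sizeof_priv)) > 0;

        /* new check: promote to 64 bits first, so nothing wraps */
        int new_ok = tp_block_size > (uint64_t)BLK_HDR_LEN + tp_sizeof_priv;

        printf("old check passes: %d, new check passes: %d\n", old_ok, new_ok);
        return 0;
}
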
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-04-22 17:03 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-04-22 17:03 UTC (permalink / raw
To: gentoo-commits
commit: b3664958047a78d65c5ad40678c8bff0d0843099
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Apr 22 17:03:12 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Apr 22 17:03:12 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b3664958
Linux patch 4.10.12
0000_README | 4 +
1011_linux-4.10.12.patch | 3376 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3380 insertions(+)
diff --git a/0000_README b/0000_README
index f05d7f1..e55a9e7 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch: 1010_linux-4.10.11.patch
From: http://www.kernel.org
Desc: Linux 4.10.11
+Patch: 1011_linux-4.10.12.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.12
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1011_linux-4.10.12.patch b/1011_linux-4.10.12.patch
new file mode 100644
index 0000000..74cbab6
--- /dev/null
+++ b/1011_linux-4.10.12.patch
@@ -0,0 +1,3376 @@
+diff --git a/Makefile b/Makefile
+index 412f2a0a3814..9689d3f644ea 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/mips/lantiq/irq.c b/arch/mips/lantiq/irq.c
+index 0ddf3698b85d..8ac0e5994ed2 100644
+--- a/arch/mips/lantiq/irq.c
++++ b/arch/mips/lantiq/irq.c
+@@ -269,11 +269,6 @@ static void ltq_hw5_irqdispatch(void)
+ DEFINE_HWx_IRQDISPATCH(5)
+ #endif
+
+-static void ltq_hw_irq_handler(struct irq_desc *desc)
+-{
+- ltq_hw_irqdispatch(irq_desc_get_irq(desc) - 2);
+-}
+-
+ #ifdef CONFIG_MIPS_MT_SMP
+ void __init arch_init_ipiirq(int irq, struct irqaction *action)
+ {
+@@ -318,19 +313,23 @@ static struct irqaction irq_call = {
+ asmlinkage void plat_irq_dispatch(void)
+ {
+ unsigned int pending = read_c0_status() & read_c0_cause() & ST0_IM;
+- int irq;
+-
+- if (!pending) {
+- spurious_interrupt();
+- return;
++ unsigned int i;
++
++ if ((MIPS_CPU_TIMER_IRQ == 7) && (pending & CAUSEF_IP7)) {
++ do_IRQ(MIPS_CPU_TIMER_IRQ);
++ goto out;
++ } else {
++ for (i = 0; i < MAX_IM; i++) {
++ if (pending & (CAUSEF_IP2 << i)) {
++ ltq_hw_irqdispatch(i);
++ goto out;
++ }
++ }
+ }
++ pr_alert("Spurious IRQ: CAUSE=0x%08x\n", read_c0_status());
+
+- pending >>= CAUSEB_IP;
+- while (pending) {
+- irq = fls(pending) - 1;
+- do_IRQ(MIPS_CPU_IRQ_BASE + irq);
+- pending &= ~BIT(irq);
+- }
++out:
++ return;
+ }
+
+ static int icu_map(struct irq_domain *d, unsigned int irq, irq_hw_number_t hw)
+@@ -355,6 +354,11 @@ static const struct irq_domain_ops irq_domain_ops = {
+ .map = icu_map,
+ };
+
++static struct irqaction cascade = {
++ .handler = no_action,
++ .name = "cascade",
++};
++
+ int __init icu_of_init(struct device_node *node, struct device_node *parent)
+ {
+ struct device_node *eiu_node;
+@@ -386,7 +390,7 @@ int __init icu_of_init(struct device_node *node, struct device_node *parent)
+ mips_cpu_irq_init();
+
+ for (i = 0; i < MAX_IM; i++)
+- irq_set_chained_handler(i + 2, ltq_hw_irq_handler);
++ setup_irq(i + 2, &cascade);
+
+ if (cpu_has_vint) {
+ pr_info("Setting up vectored interrupts\n");
+diff --git a/arch/parisc/include/asm/uaccess.h b/arch/parisc/include/asm/uaccess.h
+index 7fcf5128996a..0497ceceeb85 100644
+--- a/arch/parisc/include/asm/uaccess.h
++++ b/arch/parisc/include/asm/uaccess.h
+@@ -42,10 +42,10 @@ static inline long access_ok(int type, const void __user * addr,
+ #define get_user __get_user
+
+ #if !defined(CONFIG_64BIT)
+-#define LDD_USER(ptr) __get_user_asm64(ptr)
++#define LDD_USER(val, ptr) __get_user_asm64(val, ptr)
+ #define STD_USER(x, ptr) __put_user_asm64(x, ptr)
+ #else
+-#define LDD_USER(ptr) __get_user_asm("ldd", ptr)
++#define LDD_USER(val, ptr) __get_user_asm(val, "ldd", ptr)
+ #define STD_USER(x, ptr) __put_user_asm("std", x, ptr)
+ #endif
+
+@@ -100,63 +100,87 @@ struct exception_data {
+ " mtsp %0,%%sr2\n\t" \
+ : : "r"(get_fs()) : )
+
+-#define __get_user(x, ptr) \
+-({ \
+- register long __gu_err __asm__ ("r8") = 0; \
+- register long __gu_val; \
+- \
+- load_sr2(); \
+- switch (sizeof(*(ptr))) { \
+- case 1: __get_user_asm("ldb", ptr); break; \
+- case 2: __get_user_asm("ldh", ptr); break; \
+- case 4: __get_user_asm("ldw", ptr); break; \
+- case 8: LDD_USER(ptr); break; \
+- default: BUILD_BUG(); break; \
+- } \
+- \
+- (x) = (__force __typeof__(*(ptr))) __gu_val; \
+- __gu_err; \
++#define __get_user_internal(val, ptr) \
++({ \
++ register long __gu_err __asm__ ("r8") = 0; \
++ \
++ switch (sizeof(*(ptr))) { \
++ case 1: __get_user_asm(val, "ldb", ptr); break; \
++ case 2: __get_user_asm(val, "ldh", ptr); break; \
++ case 4: __get_user_asm(val, "ldw", ptr); break; \
++ case 8: LDD_USER(val, ptr); break; \
++ default: BUILD_BUG(); \
++ } \
++ \
++ __gu_err; \
+ })
+
+-#define __get_user_asm(ldx, ptr) \
++#define __get_user(val, ptr) \
++({ \
++ load_sr2(); \
++ __get_user_internal(val, ptr); \
++})
++
++#define __get_user_asm(val, ldx, ptr) \
++{ \
++ register long __gu_val; \
++ \
+ __asm__("1: " ldx " 0(%%sr2,%2),%0\n" \
+ "9:\n" \
+ ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \
+ : "=r"(__gu_val), "=r"(__gu_err) \
+- : "r"(ptr), "1"(__gu_err));
++ : "r"(ptr), "1"(__gu_err)); \
++ \
++ (val) = (__force __typeof__(*(ptr))) __gu_val; \
++}
+
+ #if !defined(CONFIG_64BIT)
+
+-#define __get_user_asm64(ptr) \
++#define __get_user_asm64(val, ptr) \
++{ \
++ union { \
++ unsigned long long l; \
++ __typeof__(*(ptr)) t; \
++ } __gu_tmp; \
++ \
+ __asm__(" copy %%r0,%R0\n" \
+ "1: ldw 0(%%sr2,%2),%0\n" \
+ "2: ldw 4(%%sr2,%2),%R0\n" \
+ "9:\n" \
+ ASM_EXCEPTIONTABLE_ENTRY_EFAULT(1b, 9b) \
+ ASM_EXCEPTIONTABLE_ENTRY_EFAULT(2b, 9b) \
+- : "=r"(__gu_val), "=r"(__gu_err) \
+- : "r"(ptr), "1"(__gu_err));
++ : "=&r"(__gu_tmp.l), "=r"(__gu_err) \
++ : "r"(ptr), "1"(__gu_err)); \
++ \
++ (val) = __gu_tmp.t; \
++}
+
+ #endif /* !defined(CONFIG_64BIT) */
+
+
+-#define __put_user(x, ptr) \
++#define __put_user_internal(x, ptr) \
+ ({ \
+ register long __pu_err __asm__ ("r8") = 0; \
+ __typeof__(*(ptr)) __x = (__typeof__(*(ptr)))(x); \
+ \
+- load_sr2(); \
+ switch (sizeof(*(ptr))) { \
+- case 1: __put_user_asm("stb", __x, ptr); break; \
+- case 2: __put_user_asm("sth", __x, ptr); break; \
+- case 4: __put_user_asm("stw", __x, ptr); break; \
+- case 8: STD_USER(__x, ptr); break; \
+- default: BUILD_BUG(); break; \
+- } \
++ case 1: __put_user_asm("stb", __x, ptr); break; \
++ case 2: __put_user_asm("sth", __x, ptr); break; \
++ case 4: __put_user_asm("stw", __x, ptr); break; \
++ case 8: STD_USER(__x, ptr); break; \
++ default: BUILD_BUG(); \
++ } \
+ \
+ __pu_err; \
+ })
+
++#define __put_user(x, ptr) \
++({ \
++ load_sr2(); \
++ __put_user_internal(x, ptr); \
++})
++
++
+ /*
+ * The "__put_user/kernel_asm()" macros tell gcc they read from memory
+ * instead of writing. This is because they do not write to any memory
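
On 32-bit parisc the reworked __get_user_asm64() above loads the two halves of a 64-bit value and then assigns the result through a union, so the macro hands back a properly typed value instead of clobbering a caller-visible register variable. A plain-C analogue of that union step (the word order shown assumes the big-endian layout the two ldw's produce):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t hi = 0x12345678, lo = 0x9abcdef0;      /* what the two ldw's fetch */
        union {
                unsigned long long l;
                uint64_t t;     /* stands in for __typeof__(*(ptr)) in the macro */
        } tmp;

        tmp.l = ((unsigned long long)hi << 32) | lo;
        printf("0x%llx\n", (unsigned long long)tmp.t);  /* 0x123456789abcdef0 */
        return 0;
}
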
+diff --git a/arch/parisc/lib/lusercopy.S b/arch/parisc/lib/lusercopy.S
+index f01188c044ee..85c28bb80fb7 100644
+--- a/arch/parisc/lib/lusercopy.S
++++ b/arch/parisc/lib/lusercopy.S
+@@ -201,7 +201,7 @@ ENTRY_CFI(pa_memcpy)
+ add dst,len,end
+
+ /* short copy with less than 16 bytes? */
+- cmpib,>>=,n 15,len,.Lbyte_loop
++ cmpib,COND(>>=),n 15,len,.Lbyte_loop
+
+ /* same alignment? */
+ xor src,dst,t0
+@@ -216,7 +216,7 @@ ENTRY_CFI(pa_memcpy)
+ /* loop until we are 64-bit aligned */
+ .Lalign_loop64:
+ extru dst,31,3,t1
+- cmpib,=,n 0,t1,.Lcopy_loop_16
++ cmpib,=,n 0,t1,.Lcopy_loop_16_start
+ 20: ldb,ma 1(srcspc,src),t1
+ 21: stb,ma t1,1(dstspc,dst)
+ b .Lalign_loop64
+@@ -225,6 +225,7 @@ ENTRY_CFI(pa_memcpy)
+ ASM_EXCEPTIONTABLE_ENTRY(20b,.Lcopy_done)
+ ASM_EXCEPTIONTABLE_ENTRY(21b,.Lcopy_done)
+
++.Lcopy_loop_16_start:
+ ldi 31,t0
+ .Lcopy_loop_16:
+ cmpb,COND(>>=),n t0,len,.Lword_loop
+@@ -267,7 +268,7 @@ ENTRY_CFI(pa_memcpy)
+ /* loop until we are 32-bit aligned */
+ .Lalign_loop32:
+ extru dst,31,2,t1
+- cmpib,=,n 0,t1,.Lcopy_loop_4
++ cmpib,=,n 0,t1,.Lcopy_loop_8
+ 20: ldb,ma 1(srcspc,src),t1
+ 21: stb,ma t1,1(dstspc,dst)
+ b .Lalign_loop32
+@@ -277,7 +278,7 @@ ENTRY_CFI(pa_memcpy)
+ ASM_EXCEPTIONTABLE_ENTRY(21b,.Lcopy_done)
+
+
+-.Lcopy_loop_4:
++.Lcopy_loop_8:
+ cmpib,COND(>>=),n 15,len,.Lbyte_loop
+
+ 10: ldw 0(srcspc,src),t1
+@@ -299,7 +300,7 @@ ENTRY_CFI(pa_memcpy)
+ ASM_EXCEPTIONTABLE_ENTRY(16b,.Lcopy_done)
+ ASM_EXCEPTIONTABLE_ENTRY(17b,.Lcopy_done)
+
+- b .Lcopy_loop_4
++ b .Lcopy_loop_8
+ ldo -16(len),len
+
+ .Lbyte_loop:
+@@ -324,7 +325,7 @@ ENTRY_CFI(pa_memcpy)
+ .Lunaligned_copy:
+ /* align until dst is 32bit-word-aligned */
+ extru dst,31,2,t1
+- cmpib,COND(=),n 0,t1,.Lcopy_dstaligned
++ cmpib,=,n 0,t1,.Lcopy_dstaligned
+ 20: ldb 0(srcspc,src),t1
+ ldo 1(src),src
+ 21: stb,ma t1,1(dstspc,dst)
+@@ -362,7 +363,7 @@ ENTRY_CFI(pa_memcpy)
+ cmpiclr,<> 1,t0,%r0
+ b,n .Lcase1
+ .Lcase0:
+- cmpb,= %r0,len,.Lcda_finish
++ cmpb,COND(=) %r0,len,.Lcda_finish
+ nop
+
+ 1: ldw,ma 4(srcspc,src), a3
+@@ -376,7 +377,7 @@ ENTRY_CFI(pa_memcpy)
+ 1: ldw,ma 4(srcspc,src), a3
+ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
+ ldo -1(len),len
+- cmpb,=,n %r0,len,.Ldo0
++ cmpb,COND(=),n %r0,len,.Ldo0
+ .Ldo4:
+ 1: ldw,ma 4(srcspc,src), a0
+ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcda_rdfault)
+@@ -402,7 +403,7 @@ ENTRY_CFI(pa_memcpy)
+ 1: stw,ma t0, 4(dstspc,dst)
+ ASM_EXCEPTIONTABLE_ENTRY(1b,.Lcopy_done)
+ ldo -4(len),len
+- cmpb,<> %r0,len,.Ldo4
++ cmpb,COND(<>) %r0,len,.Ldo4
+ nop
+ .Ldo0:
+ shrpw a2, a3, %sar, t0
+@@ -436,14 +437,14 @@ ENTRY_CFI(pa_memcpy)
+ /* fault exception fixup handlers: */
+ #ifdef CONFIG_64BIT
+ .Lcopy16_fault:
+-10: b .Lcopy_done
+- std,ma t1,8(dstspc,dst)
++ b .Lcopy_done
++10: std,ma t1,8(dstspc,dst)
+ ASM_EXCEPTIONTABLE_ENTRY(10b,.Lcopy_done)
+ #endif
+
+ .Lcopy8_fault:
+-10: b .Lcopy_done
+- stw,ma t1,4(dstspc,dst)
++ b .Lcopy_done
++10: stw,ma t1,4(dstspc,dst)
+ ASM_EXCEPTIONTABLE_ENTRY(10b,.Lcopy_done)
+
+ .exit
+diff --git a/arch/x86/entry/vdso/vdso32-setup.c b/arch/x86/entry/vdso/vdso32-setup.c
+index 7853b53959cd..3f9d1a83891a 100644
+--- a/arch/x86/entry/vdso/vdso32-setup.c
++++ b/arch/x86/entry/vdso/vdso32-setup.c
+@@ -30,8 +30,10 @@ static int __init vdso32_setup(char *s)
+ {
+ vdso32_enabled = simple_strtoul(s, NULL, 0);
+
+- if (vdso32_enabled > 1)
++ if (vdso32_enabled > 1) {
+ pr_warn("vdso32 values other than 0 and 1 are no longer allowed; vdso disabled\n");
++ vdso32_enabled = 0;
++ }
+
+ return 1;
+ }
+@@ -62,13 +64,18 @@ subsys_initcall(sysenter_setup);
+ /* Register vsyscall32 into the ABI table */
+ #include <linux/sysctl.h>
+
++static const int zero;
++static const int one = 1;
++
+ static struct ctl_table abi_table2[] = {
+ {
+ .procname = "vsyscall32",
+ .data = &vdso32_enabled,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = proc_dointvec
++ .proc_handler = proc_dointvec_minmax,
++ .extra1 = (int *)&zero,
++ .extra2 = (int *)&one,
+ },
+ {}
+ };
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index 81b321ace8e0..f924629836a8 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -507,6 +507,9 @@ static void intel_pmu_lbr_read_32(struct cpu_hw_events *cpuc)
+ cpuc->lbr_entries[i].to = msr_lastbranch.to;
+ cpuc->lbr_entries[i].mispred = 0;
+ cpuc->lbr_entries[i].predicted = 0;
++ cpuc->lbr_entries[i].in_tx = 0;
++ cpuc->lbr_entries[i].abort = 0;
++ cpuc->lbr_entries[i].cycles = 0;
+ cpuc->lbr_entries[i].reserved = 0;
+ }
+ cpuc->lbr_stack.nr = i;
+diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
+index e7f155c3045e..94aad6364b47 100644
+--- a/arch/x86/include/asm/elf.h
++++ b/arch/x86/include/asm/elf.h
+@@ -278,7 +278,7 @@ struct task_struct;
+
+ #define ARCH_DLINFO_IA32 \
+ do { \
+- if (vdso32_enabled) { \
++ if (VDSO_CURRENT_BASE) { \
+ NEW_AUX_ENT(AT_SYSINFO, VDSO_ENTRY); \
+ NEW_AUX_ENT(AT_SYSINFO_EHDR, VDSO_CURRENT_BASE); \
+ } \
+diff --git a/arch/x86/include/asm/pmem.h b/arch/x86/include/asm/pmem.h
+index 2c1ebeb4d737..529bb4a6487a 100644
+--- a/arch/x86/include/asm/pmem.h
++++ b/arch/x86/include/asm/pmem.h
+@@ -55,7 +55,8 @@ static inline int arch_memcpy_from_pmem(void *dst, const void *src, size_t n)
+ * @size: number of bytes to write back
+ *
+ * Write back a cache range using the CLWB (cache line write back)
+- * instruction.
++ * instruction. Note that @size is internally rounded up to be cache
++ * line size aligned.
+ */
+ static inline void arch_wb_cache_pmem(void *addr, size_t size)
+ {
+@@ -69,15 +70,6 @@ static inline void arch_wb_cache_pmem(void *addr, size_t size)
+ clwb(p);
+ }
+
+-/*
+- * copy_from_iter_nocache() on x86 only uses non-temporal stores for iovec
+- * iterators, so for other types (bvec & kvec) we must do a cache write-back.
+- */
+-static inline bool __iter_needs_pmem_wb(struct iov_iter *i)
+-{
+- return iter_is_iovec(i) == false;
+-}
+-
+ /**
+ * arch_copy_from_iter_pmem - copy data from an iterator to PMEM
+ * @addr: PMEM destination address
+@@ -94,7 +86,35 @@ static inline size_t arch_copy_from_iter_pmem(void *addr, size_t bytes,
+ /* TODO: skip the write-back by always using non-temporal stores */
+ len = copy_from_iter_nocache(addr, bytes, i);
+
+- if (__iter_needs_pmem_wb(i))
++ /*
++ * In the iovec case on x86_64 copy_from_iter_nocache() uses
++ * non-temporal stores for the bulk of the transfer, but we need
++ * to manually flush if the transfer is unaligned. A cached
++ * memory copy is used when destination or size is not naturally
++ * aligned. That is:
++ * - Require 8-byte alignment when size is 8 bytes or larger.
++ * - Require 4-byte alignment when size is 4 bytes.
++ *
++ * In the non-iovec case the entire destination needs to be
++ * flushed.
++ */
++ if (iter_is_iovec(i)) {
++ unsigned long flushed, dest = (unsigned long) addr;
++
++ if (bytes < 8) {
++ if (!IS_ALIGNED(dest, 4) || (bytes != 4))
++ arch_wb_cache_pmem(addr, 1);
++ } else {
++ if (!IS_ALIGNED(dest, 8)) {
++ dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
++ arch_wb_cache_pmem(addr, 1);
++ }
++
++ flushed = dest - (unsigned long) addr;
++ if (bytes > flushed && !IS_ALIGNED(bytes - flushed, 8))
++ arch_wb_cache_pmem(addr + bytes - 1, 1);
++ }
++ } else
+ arch_wb_cache_pmem(addr, bytes);
+
+ return len;
+diff --git a/arch/x86/kernel/cpu/intel_rdt_schemata.c b/arch/x86/kernel/cpu/intel_rdt_schemata.c
+index f369cb8db0d5..badd2b31a560 100644
+--- a/arch/x86/kernel/cpu/intel_rdt_schemata.c
++++ b/arch/x86/kernel/cpu/intel_rdt_schemata.c
+@@ -200,11 +200,11 @@ ssize_t rdtgroup_schemata_write(struct kernfs_open_file *of,
+ }
+
+ out:
+- rdtgroup_kn_unlock(of->kn);
+ for_each_enabled_rdt_resource(r) {
+ kfree(r->tmp_cbms);
+ r->tmp_cbms = NULL;
+ }
++ rdtgroup_kn_unlock(of->kn);
+ return ret ?: nbytes;
+ }
+
+diff --git a/arch/x86/kernel/signal_compat.c b/arch/x86/kernel/signal_compat.c
+index ec1f756f9dc9..71beb28600d4 100644
+--- a/arch/x86/kernel/signal_compat.c
++++ b/arch/x86/kernel/signal_compat.c
+@@ -151,8 +151,8 @@ int __copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from,
+
+ if (from->si_signo == SIGSEGV) {
+ if (from->si_code == SEGV_BNDERR) {
+- compat_uptr_t lower = (unsigned long)&to->si_lower;
+- compat_uptr_t upper = (unsigned long)&to->si_upper;
++ compat_uptr_t lower = (unsigned long)from->si_lower;
++ compat_uptr_t upper = (unsigned long)from->si_upper;
+ put_user_ex(lower, &to->si_lower);
+ put_user_ex(upper, &to->si_upper);
+ }
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index 22af912d66d2..889e7619a091 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -643,21 +643,40 @@ void __init init_mem_mapping(void)
+ * devmem_is_allowed() checks to see if /dev/mem access to a certain address
+ * is valid. The argument is a physical page number.
+ *
+- *
+- * On x86, access has to be given to the first megabyte of ram because that area
+- * contains BIOS code and data regions used by X and dosemu and similar apps.
+- * Access has to be given to non-kernel-ram areas as well, these contain the PCI
+- * mmio resources as well as potential bios/acpi data regions.
++ * On x86, access has to be given to the first megabyte of RAM because that
++ * area traditionally contains BIOS code and data regions used by X, dosemu,
++ * and similar apps. Since they map the entire memory range, the whole range
++ * must be allowed (for mapping), but any areas that would otherwise be
++ * disallowed are flagged as being "zero filled" instead of rejected.
++ * Access has to be given to non-kernel-ram areas as well, these contain the
++ * PCI mmio resources as well as potential bios/acpi data regions.
+ */
+ int devmem_is_allowed(unsigned long pagenr)
+ {
+- if (pagenr < 256)
+- return 1;
+- if (iomem_is_exclusive(pagenr << PAGE_SHIFT))
++ if (page_is_ram(pagenr)) {
++ /*
++ * For disallowed memory regions in the low 1MB range,
++ * request that the page be shown as all zeros.
++ */
++ if (pagenr < 256)
++ return 2;
++
++ return 0;
++ }
++
++ /*
++ * This must follow RAM test, since System RAM is considered a
++ * restricted resource under CONFIG_STRICT_IOMEM.
++ */
++ if (iomem_is_exclusive(pagenr << PAGE_SHIFT)) {
++ /* Low 1MB bypasses iomem restrictions. */
++ if (pagenr < 256)
++ return 1;
++
+ return 0;
+- if (!page_is_ram(pagenr))
+- return 1;
+- return 0;
++ }
++
++ return 1;
+ }
+
+ void free_init_pages(char *what, unsigned long begin, unsigned long end)
+diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
+index 30031d5293c4..cdfe8c628959 100644
+--- a/arch/x86/platform/efi/quirks.c
++++ b/arch/x86/platform/efi/quirks.c
+@@ -201,6 +201,10 @@ void __init efi_arch_mem_reserve(phys_addr_t addr, u64 size)
+ return;
+ }
+
++ /* No need to reserve regions that will never be freed. */
++ if (md.attribute & EFI_MEMORY_RUNTIME)
++ return;
++
+ size += addr % EFI_PAGE_SIZE;
+ size = round_up(size, EFI_PAGE_SIZE);
+ addr = round_down(addr, EFI_PAGE_SIZE);
+diff --git a/arch/x86/xen/apic.c b/arch/x86/xen/apic.c
+index 44c88ad1841a..bcea81f36fc5 100644
+--- a/arch/x86/xen/apic.c
++++ b/arch/x86/xen/apic.c
+@@ -145,7 +145,7 @@ static void xen_silent_inquire(int apicid)
+ static int xen_cpu_present_to_apicid(int cpu)
+ {
+ if (cpu_present(cpu))
+- return xen_get_apic_id(xen_apic_read(APIC_ID));
++ return cpu_data(cpu).apicid;
+ else
+ return BAD_APICID;
+ }
+diff --git a/crypto/ahash.c b/crypto/ahash.c
+index 2ce8bcb9049c..cce0268a13fe 100644
+--- a/crypto/ahash.c
++++ b/crypto/ahash.c
+@@ -31,6 +31,7 @@ struct ahash_request_priv {
+ crypto_completion_t complete;
+ void *data;
+ u8 *result;
++ u32 flags;
+ void *ubuf[] CRYPTO_MINALIGN_ATTR;
+ };
+
+@@ -252,6 +253,8 @@ static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt)
+ priv->result = req->result;
+ priv->complete = req->base.complete;
+ priv->data = req->base.data;
++ priv->flags = req->base.flags;
++
+ /*
+ * WARNING: We do not backup req->priv here! The req->priv
+ * is for internal use of the Crypto API and the
+@@ -266,38 +269,44 @@ static int ahash_save_req(struct ahash_request *req, crypto_completion_t cplt)
+ return 0;
+ }
+
+-static void ahash_restore_req(struct ahash_request *req)
++static void ahash_restore_req(struct ahash_request *req, int err)
+ {
+ struct ahash_request_priv *priv = req->priv;
+
++ if (!err)
++ memcpy(priv->result, req->result,
++ crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));
++
+ /* Restore the original crypto request. */
+ req->result = priv->result;
+- req->base.complete = priv->complete;
+- req->base.data = priv->data;
++
++ ahash_request_set_callback(req, priv->flags,
++ priv->complete, priv->data);
+ req->priv = NULL;
+
+ /* Free the req->priv.priv from the ADJUSTED request. */
+ kzfree(priv);
+ }
+
+-static void ahash_op_unaligned_finish(struct ahash_request *req, int err)
++static void ahash_notify_einprogress(struct ahash_request *req)
+ {
+ struct ahash_request_priv *priv = req->priv;
++ struct crypto_async_request oreq;
+
+- if (err == -EINPROGRESS)
+- return;
+-
+- if (!err)
+- memcpy(priv->result, req->result,
+- crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));
++ oreq.data = priv->data;
+
+- ahash_restore_req(req);
++ priv->complete(&oreq, -EINPROGRESS);
+ }
+
+ static void ahash_op_unaligned_done(struct crypto_async_request *req, int err)
+ {
+ struct ahash_request *areq = req->data;
+
++ if (err == -EINPROGRESS) {
++ ahash_notify_einprogress(areq);
++ return;
++ }
++
+ /*
+ * Restore the original request, see ahash_op_unaligned() for what
+ * goes where.
+@@ -308,7 +317,7 @@ static void ahash_op_unaligned_done(struct crypto_async_request *req, int err)
+ */
+
+ /* First copy req->result into req->priv.result */
+- ahash_op_unaligned_finish(areq, err);
++ ahash_restore_req(areq, err);
+
+ /* Complete the ORIGINAL request. */
+ areq->base.complete(&areq->base, err);
+@@ -324,7 +333,12 @@ static int ahash_op_unaligned(struct ahash_request *req,
+ return err;
+
+ err = op(req);
+- ahash_op_unaligned_finish(req, err);
++ if (err == -EINPROGRESS ||
++ (err == -EBUSY && (ahash_request_flags(req) &
++ CRYPTO_TFM_REQ_MAY_BACKLOG)))
++ return err;
++
++ ahash_restore_req(req, err);
+
+ return err;
+ }
+@@ -359,25 +373,14 @@ int crypto_ahash_digest(struct ahash_request *req)
+ }
+ EXPORT_SYMBOL_GPL(crypto_ahash_digest);
+
+-static void ahash_def_finup_finish2(struct ahash_request *req, int err)
++static void ahash_def_finup_done2(struct crypto_async_request *req, int err)
+ {
+- struct ahash_request_priv *priv = req->priv;
++ struct ahash_request *areq = req->data;
+
+ if (err == -EINPROGRESS)
+ return;
+
+- if (!err)
+- memcpy(priv->result, req->result,
+- crypto_ahash_digestsize(crypto_ahash_reqtfm(req)));
+-
+- ahash_restore_req(req);
+-}
+-
+-static void ahash_def_finup_done2(struct crypto_async_request *req, int err)
+-{
+- struct ahash_request *areq = req->data;
+-
+- ahash_def_finup_finish2(areq, err);
++ ahash_restore_req(areq, err);
+
+ areq->base.complete(&areq->base, err);
+ }
+@@ -388,11 +391,15 @@ static int ahash_def_finup_finish1(struct ahash_request *req, int err)
+ goto out;
+
+ req->base.complete = ahash_def_finup_done2;
+- req->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
++
+ err = crypto_ahash_reqtfm(req)->final(req);
++ if (err == -EINPROGRESS ||
++ (err == -EBUSY && (ahash_request_flags(req) &
++ CRYPTO_TFM_REQ_MAY_BACKLOG)))
++ return err;
+
+ out:
+- ahash_def_finup_finish2(req, err);
++ ahash_restore_req(req, err);
+ return err;
+ }
+
+@@ -400,7 +407,16 @@ static void ahash_def_finup_done1(struct crypto_async_request *req, int err)
+ {
+ struct ahash_request *areq = req->data;
+
++ if (err == -EINPROGRESS) {
++ ahash_notify_einprogress(areq);
++ return;
++ }
++
++ areq->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
++
+ err = ahash_def_finup_finish1(areq, err);
++ if (areq->priv)
++ return;
+
+ areq->base.complete(&areq->base, err);
+ }
+@@ -415,6 +431,11 @@ static int ahash_def_finup(struct ahash_request *req)
+ return err;
+
+ err = tfm->update(req);
++ if (err == -EINPROGRESS ||
++ (err == -EBUSY && (ahash_request_flags(req) &
++ CRYPTO_TFM_REQ_MAY_BACKLOG)))
++ return err;
++
+ return ahash_def_finup_finish1(req, err);
+ }
+
+diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
+index 533265f110e0..c3177c989dc8 100644
+--- a/crypto/algif_aead.c
++++ b/crypto/algif_aead.c
+@@ -39,6 +39,7 @@ struct aead_async_req {
+ struct aead_async_rsgl first_rsgl;
+ struct list_head list;
+ struct kiocb *iocb;
++ struct sock *sk;
+ unsigned int tsgls;
+ char iv[];
+ };
+@@ -378,12 +379,10 @@ static ssize_t aead_sendpage(struct socket *sock, struct page *page,
+
+ static void aead_async_cb(struct crypto_async_request *_req, int err)
+ {
+- struct sock *sk = _req->data;
+- struct alg_sock *ask = alg_sk(sk);
+- struct aead_ctx *ctx = ask->private;
+- struct crypto_aead *tfm = crypto_aead_reqtfm(&ctx->aead_req);
+- struct aead_request *req = aead_request_cast(_req);
++ struct aead_request *req = _req->data;
++ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct aead_async_req *areq = GET_ASYM_REQ(req, tfm);
++ struct sock *sk = areq->sk;
+ struct scatterlist *sg = areq->tsgl;
+ struct aead_async_rsgl *rsgl;
+ struct kiocb *iocb = areq->iocb;
+@@ -446,11 +445,12 @@ static int aead_recvmsg_async(struct socket *sock, struct msghdr *msg,
+ memset(&areq->first_rsgl, '\0', sizeof(areq->first_rsgl));
+ INIT_LIST_HEAD(&areq->list);
+ areq->iocb = msg->msg_iocb;
++ areq->sk = sk;
+ memcpy(areq->iv, ctx->iv, crypto_aead_ivsize(tfm));
+ aead_request_set_tfm(req, tfm);
+ aead_request_set_ad(req, ctx->aead_assoclen);
+ aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+- aead_async_cb, sk);
++ aead_async_cb, req);
+ used -= ctx->aead_assoclen;
+
+ /* take over all tx sgls from ctx */
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index 3ea095adafd9..a8bfae4451bf 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -345,6 +345,13 @@ static void encrypt_done(struct crypto_async_request *areq, int err)
+ struct rctx *rctx;
+
+ rctx = skcipher_request_ctx(req);
++
++ if (err == -EINPROGRESS) {
++ if (rctx->left != req->cryptlen)
++ return;
++ goto out;
++ }
++
+ subreq = &rctx->subreq;
+ subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
+
+@@ -352,6 +359,7 @@ static void encrypt_done(struct crypto_async_request *areq, int err)
+ if (rctx->left)
+ return;
+
++out:
+ skcipher_request_complete(req, err);
+ }
+
+@@ -389,6 +397,13 @@ static void decrypt_done(struct crypto_async_request *areq, int err)
+ struct rctx *rctx;
+
+ rctx = skcipher_request_ctx(req);
++
++ if (err == -EINPROGRESS) {
++ if (rctx->left != req->cryptlen)
++ return;
++ goto out;
++ }
++
+ subreq = &rctx->subreq;
+ subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
+
+@@ -396,6 +411,7 @@ static void decrypt_done(struct crypto_async_request *areq, int err)
+ if (rctx->left)
+ return;
+
++out:
+ skcipher_request_complete(req, err);
+ }
+
+diff --git a/crypto/xts.c b/crypto/xts.c
+index c976bfac29da..89ace5ebc2da 100644
+--- a/crypto/xts.c
++++ b/crypto/xts.c
+@@ -286,6 +286,13 @@ static void encrypt_done(struct crypto_async_request *areq, int err)
+ struct rctx *rctx;
+
+ rctx = skcipher_request_ctx(req);
++
++ if (err == -EINPROGRESS) {
++ if (rctx->left != req->cryptlen)
++ return;
++ goto out;
++ }
++
+ subreq = &rctx->subreq;
+ subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
+
+@@ -293,6 +300,7 @@ static void encrypt_done(struct crypto_async_request *areq, int err)
+ if (rctx->left)
+ return;
+
++out:
+ skcipher_request_complete(req, err);
+ }
+
+@@ -330,6 +338,13 @@ static void decrypt_done(struct crypto_async_request *areq, int err)
+ struct rctx *rctx;
+
+ rctx = skcipher_request_ctx(req);
++
++ if (err == -EINPROGRESS) {
++ if (rctx->left != req->cryptlen)
++ return;
++ goto out;
++ }
++
+ subreq = &rctx->subreq;
+ subreq->base.flags &= CRYPTO_TFM_REQ_MAY_BACKLOG;
+
+@@ -337,6 +352,7 @@ static void decrypt_done(struct crypto_async_request *areq, int err)
+ if (rctx->left)
+ return;
+
++out:
+ skcipher_request_complete(req, err);
+ }
+
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 48e19d013170..22ca89242518 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -729,12 +729,12 @@ static void start_transaction(struct acpi_ec *ec)
+
+ static int ec_guard(struct acpi_ec *ec)
+ {
+- unsigned long guard = usecs_to_jiffies(ec_polling_guard);
++ unsigned long guard = usecs_to_jiffies(ec->polling_guard);
+ unsigned long timeout = ec->timestamp + guard;
+
+ /* Ensure guarding period before polling EC status */
+ do {
+- if (ec_busy_polling) {
++ if (ec->busy_polling) {
+ /* Perform busy polling */
+ if (ec_transaction_completed(ec))
+ return 0;
+@@ -998,6 +998,28 @@ static void acpi_ec_stop(struct acpi_ec *ec, bool suspending)
+ spin_unlock_irqrestore(&ec->lock, flags);
+ }
+
++static void acpi_ec_enter_noirq(struct acpi_ec *ec)
++{
++ unsigned long flags;
++
++ spin_lock_irqsave(&ec->lock, flags);
++ ec->busy_polling = true;
++ ec->polling_guard = 0;
++ ec_log_drv("interrupt blocked");
++ spin_unlock_irqrestore(&ec->lock, flags);
++}
++
++static void acpi_ec_leave_noirq(struct acpi_ec *ec)
++{
++ unsigned long flags;
++
++ spin_lock_irqsave(&ec->lock, flags);
++ ec->busy_polling = ec_busy_polling;
++ ec->polling_guard = ec_polling_guard;
++ ec_log_drv("interrupt unblocked");
++ spin_unlock_irqrestore(&ec->lock, flags);
++}
++
+ void acpi_ec_block_transactions(void)
+ {
+ struct acpi_ec *ec = first_ec;
+@@ -1278,7 +1300,7 @@ acpi_ec_space_handler(u32 function, acpi_physical_address address,
+ if (function != ACPI_READ && function != ACPI_WRITE)
+ return AE_BAD_PARAMETER;
+
+- if (ec_busy_polling || bits > 8)
++ if (ec->busy_polling || bits > 8)
+ acpi_ec_burst_enable(ec);
+
+ for (i = 0; i < bytes; ++i, ++address, ++value)
+@@ -1286,7 +1308,7 @@ acpi_ec_space_handler(u32 function, acpi_physical_address address,
+ acpi_ec_read(ec, address, value) :
+ acpi_ec_write(ec, address, *value);
+
+- if (ec_busy_polling || bits > 8)
++ if (ec->busy_polling || bits > 8)
+ acpi_ec_burst_disable(ec);
+
+ switch (result) {
+@@ -1329,6 +1351,8 @@ static struct acpi_ec *acpi_ec_alloc(void)
+ spin_lock_init(&ec->lock);
+ INIT_WORK(&ec->work, acpi_ec_event_handler);
+ ec->timestamp = jiffies;
++ ec->busy_polling = true;
++ ec->polling_guard = 0;
+ return ec;
+ }
+
+@@ -1390,6 +1414,7 @@ static int ec_install_handlers(struct acpi_ec *ec, bool handle_events)
+ acpi_ec_start(ec, false);
+
+ if (!test_bit(EC_FLAGS_EC_HANDLER_INSTALLED, &ec->flags)) {
++ acpi_ec_enter_noirq(ec);
+ status = acpi_install_address_space_handler(ec->handle,
+ ACPI_ADR_SPACE_EC,
+ &acpi_ec_space_handler,
+@@ -1429,6 +1454,7 @@ static int ec_install_handlers(struct acpi_ec *ec, bool handle_events)
+ /* This is not fatal as we can poll EC events */
+ if (ACPI_SUCCESS(status)) {
+ set_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags);
++ acpi_ec_leave_noirq(ec);
+ if (test_bit(EC_FLAGS_STARTED, &ec->flags) &&
+ ec->reference_count >= 1)
+ acpi_ec_enable_gpe(ec, true);
+@@ -1839,34 +1865,6 @@ int __init acpi_ec_ecdt_probe(void)
+ }
+
+ #ifdef CONFIG_PM_SLEEP
+-static void acpi_ec_enter_noirq(struct acpi_ec *ec)
+-{
+- unsigned long flags;
+-
+- if (ec == first_ec) {
+- spin_lock_irqsave(&ec->lock, flags);
+- ec->saved_busy_polling = ec_busy_polling;
+- ec->saved_polling_guard = ec_polling_guard;
+- ec_busy_polling = true;
+- ec_polling_guard = 0;
+- ec_log_drv("interrupt blocked");
+- spin_unlock_irqrestore(&ec->lock, flags);
+- }
+-}
+-
+-static void acpi_ec_leave_noirq(struct acpi_ec *ec)
+-{
+- unsigned long flags;
+-
+- if (ec == first_ec) {
+- spin_lock_irqsave(&ec->lock, flags);
+- ec_busy_polling = ec->saved_busy_polling;
+- ec_polling_guard = ec->saved_polling_guard;
+- ec_log_drv("interrupt unblocked");
+- spin_unlock_irqrestore(&ec->lock, flags);
+- }
+-}
+-
+ static int acpi_ec_suspend_noirq(struct device *dev)
+ {
+ struct acpi_ec *ec =
+diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
+index 0c452265c111..219b90bc0922 100644
+--- a/drivers/acpi/internal.h
++++ b/drivers/acpi/internal.h
+@@ -172,8 +172,8 @@ struct acpi_ec {
+ struct work_struct work;
+ unsigned long timestamp;
+ unsigned long nr_pending_queries;
+- bool saved_busy_polling;
+- unsigned int saved_polling_guard;
++ bool busy_polling;
++ unsigned int polling_guard;
+ };
+
+ extern struct acpi_ec *first_ec;
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 662036bdc65e..c8ea9d698cd0 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -1617,7 +1617,11 @@ static int cmp_map(const void *m0, const void *m1)
+ const struct nfit_set_info_map *map0 = m0;
+ const struct nfit_set_info_map *map1 = m1;
+
+- return map0->region_offset - map1->region_offset;
++ if (map0->region_offset < map1->region_offset)
++ return -1;
++ else if (map0->region_offset > map1->region_offset)
++ return 1;
++ return 0;
+ }
+
+ /* Retrieve the nth entry referencing this spa */
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index 192691880d55..2433569b02ef 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -1857,15 +1857,20 @@ static void acpi_bus_attach(struct acpi_device *device)
+ return;
+
+ device->flags.match_driver = true;
+- if (!ret) {
+- ret = device_attach(&device->dev);
+- if (ret < 0)
+- return;
+-
+- if (!ret && device->pnp.type.platform_id)
+- acpi_default_enumeration(device);
++ if (ret > 0) {
++ acpi_device_set_enumerated(device);
++ goto ok;
+ }
+
++ ret = device_attach(&device->dev);
++ if (ret < 0)
++ return;
++
++ if (ret > 0 || !device->pnp.type.platform_id)
++ acpi_device_set_enumerated(device);
++ else
++ acpi_default_enumeration(device);
++
+ ok:
+ list_for_each_entry(child, &device->children, node)
+ acpi_bus_attach(child);
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index e5ab7d9e8c45..5b4b59be8b1f 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -583,13 +583,13 @@ static int zram_decompress_page(struct zram *zram, char *mem, u32 index)
+
+ if (!handle || zram_test_flag(meta, index, ZRAM_ZERO)) {
+ bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
+- clear_page(mem);
++ memset(mem, 0, PAGE_SIZE);
+ return 0;
+ }
+
+ cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
+ if (size == PAGE_SIZE) {
+- copy_page(mem, cmem);
++ memcpy(mem, cmem, PAGE_SIZE);
+ } else {
+ struct zcomp_strm *zstrm = zcomp_stream_get(zram->comp);
+
+@@ -781,7 +781,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
+
+ if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
+ src = kmap_atomic(page);
+- copy_page(cmem, src);
++ memcpy(cmem, src, PAGE_SIZE);
+ kunmap_atomic(src);
+ } else {
+ memcpy(cmem, src, clen);
+diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
+index fde005ef9d36..4ee2a10207d0 100644
+--- a/drivers/char/Kconfig
++++ b/drivers/char/Kconfig
+@@ -571,9 +571,12 @@ config TELCLOCK
+ controlling the behavior of this hardware.
+
+ config DEVPORT
+- bool
++ bool "/dev/port character device"
+ depends on ISA || PCI
+ default y
++ help
++ Say Y here if you want to support the /dev/port device. The /dev/port
++ device is similar to /dev/mem, but for I/O ports.
+
+ source "drivers/s390/char/Kconfig"
+
+diff --git a/drivers/char/mem.c b/drivers/char/mem.c
+index 6d9cc2d39d22..7e4a9d1296bb 100644
+--- a/drivers/char/mem.c
++++ b/drivers/char/mem.c
+@@ -60,6 +60,10 @@ static inline int valid_mmap_phys_addr_range(unsigned long pfn, size_t size)
+ #endif
+
+ #ifdef CONFIG_STRICT_DEVMEM
++static inline int page_is_allowed(unsigned long pfn)
++{
++ return devmem_is_allowed(pfn);
++}
+ static inline int range_is_allowed(unsigned long pfn, unsigned long size)
+ {
+ u64 from = ((u64)pfn) << PAGE_SHIFT;
+@@ -75,6 +79,10 @@ static inline int range_is_allowed(unsigned long pfn, unsigned long size)
+ return 1;
+ }
+ #else
++static inline int page_is_allowed(unsigned long pfn)
++{
++ return 1;
++}
+ static inline int range_is_allowed(unsigned long pfn, unsigned long size)
+ {
+ return 1;
+@@ -122,23 +130,31 @@ static ssize_t read_mem(struct file *file, char __user *buf,
+
+ while (count > 0) {
+ unsigned long remaining;
++ int allowed;
+
+ sz = size_inside_page(p, count);
+
+- if (!range_is_allowed(p >> PAGE_SHIFT, count))
++ allowed = page_is_allowed(p >> PAGE_SHIFT);
++ if (!allowed)
+ return -EPERM;
++ if (allowed == 2) {
++ /* Show zeros for restricted memory. */
++ remaining = clear_user(buf, sz);
++ } else {
++ /*
++ * On ia64 if a page has been mapped somewhere as
++ * uncached, then it must also be accessed uncached
++ * by the kernel or data corruption may occur.
++ */
++ ptr = xlate_dev_mem_ptr(p);
++ if (!ptr)
++ return -EFAULT;
+
+- /*
+- * On ia64 if a page has been mapped somewhere as uncached, then
+- * it must also be accessed uncached by the kernel or data
+- * corruption may occur.
+- */
+- ptr = xlate_dev_mem_ptr(p);
+- if (!ptr)
+- return -EFAULT;
++ remaining = copy_to_user(buf, ptr, sz);
++
++ unxlate_dev_mem_ptr(p, ptr);
++ }
+
+- remaining = copy_to_user(buf, ptr, sz);
+- unxlate_dev_mem_ptr(p, ptr);
+ if (remaining)
+ return -EFAULT;
+
+@@ -181,30 +197,36 @@ static ssize_t write_mem(struct file *file, const char __user *buf,
+ #endif
+
+ while (count > 0) {
++ int allowed;
++
+ sz = size_inside_page(p, count);
+
+- if (!range_is_allowed(p >> PAGE_SHIFT, sz))
++ allowed = page_is_allowed(p >> PAGE_SHIFT);
++ if (!allowed)
+ return -EPERM;
+
+- /*
+- * On ia64 if a page has been mapped somewhere as uncached, then
+- * it must also be accessed uncached by the kernel or data
+- * corruption may occur.
+- */
+- ptr = xlate_dev_mem_ptr(p);
+- if (!ptr) {
+- if (written)
+- break;
+- return -EFAULT;
+- }
++ /* Skip actual writing when a page is marked as restricted. */
++ if (allowed == 1) {
++ /*
++ * On ia64 if a page has been mapped somewhere as
++ * uncached, then it must also be accessed uncached
++ * by the kernel or data corruption may occur.
++ */
++ ptr = xlate_dev_mem_ptr(p);
++ if (!ptr) {
++ if (written)
++ break;
++ return -EFAULT;
++ }
+
+- copied = copy_from_user(ptr, buf, sz);
+- unxlate_dev_mem_ptr(p, ptr);
+- if (copied) {
+- written += sz - copied;
+- if (written)
+- break;
+- return -EFAULT;
++ copied = copy_from_user(ptr, buf, sz);
++ unxlate_dev_mem_ptr(p, ptr);
++ if (copied) {
++ written += sz - copied;
++ if (written)
++ break;
++ return -EFAULT;
++ }
+ }
+
+ buf += sz;
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index 17857beb4892..3cbf4c95e446 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -1136,6 +1136,8 @@ static int put_chars(u32 vtermno, const char *buf, int count)
+ {
+ struct port *port;
+ struct scatterlist sg[1];
++ void *data;
++ int ret;
+
+ if (unlikely(early_put_chars))
+ return early_put_chars(vtermno, buf, count);
+@@ -1144,8 +1146,14 @@ static int put_chars(u32 vtermno, const char *buf, int count)
+ if (!port)
+ return -EPIPE;
+
+- sg_init_one(sg, buf, count);
+- return __send_to_port(port, sg, 1, count, (void *)buf, false);
++ data = kmemdup(buf, count, GFP_ATOMIC);
++ if (!data)
++ return -ENOMEM;
++
++ sg_init_one(sg, data, count);
++ ret = __send_to_port(port, sg, 1, count, data, false);
++ kfree(data);
++ return ret;
+ }
+
+ /*
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 0af2229b09fb..791e1ef25baf 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -2405,6 +2405,20 @@ EXPORT_SYMBOL_GPL(cpufreq_boost_enabled);
+ *********************************************************************/
+ static enum cpuhp_state hp_online;
+
++static int cpuhp_cpufreq_online(unsigned int cpu)
++{
++ cpufreq_online(cpu);
++
++ return 0;
++}
++
++static int cpuhp_cpufreq_offline(unsigned int cpu)
++{
++ cpufreq_offline(cpu);
++
++ return 0;
++}
++
+ /**
+ * cpufreq_register_driver - register a CPU Frequency driver
+ * @driver_data: A struct cpufreq_driver containing the values#
+@@ -2467,8 +2481,8 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
+ }
+
+ ret = cpuhp_setup_state_nocalls(CPUHP_AP_ONLINE_DYN, "cpufreq:online",
+- cpufreq_online,
+- cpufreq_offline);
++ cpuhp_cpufreq_online,
++ cpuhp_cpufreq_offline);
+ if (ret < 0)
+ goto err_if_unreg;
+ hp_online = ret;
+diff --git a/drivers/firmware/efi/libstub/gop.c b/drivers/firmware/efi/libstub/gop.c
+index 932742e4cf23..24c461dea7af 100644
+--- a/drivers/firmware/efi/libstub/gop.c
++++ b/drivers/firmware/efi/libstub/gop.c
+@@ -149,7 +149,8 @@ setup_gop32(efi_system_table_t *sys_table_arg, struct screen_info *si,
+
+ status = __gop_query32(sys_table_arg, gop32, &info, &size,
+ &current_fb_base);
+- if (status == EFI_SUCCESS && (!first_gop || conout_found)) {
++ if (status == EFI_SUCCESS && (!first_gop || conout_found) &&
++ info->pixel_format != PIXEL_BLT_ONLY) {
+ /*
+ * Systems that use the UEFI Console Splitter may
+ * provide multiple GOP devices, not all of which are
+@@ -266,7 +267,8 @@ setup_gop64(efi_system_table_t *sys_table_arg, struct screen_info *si,
+
+ status = __gop_query64(sys_table_arg, gop64, &info, &size,
+ &current_fb_base);
+- if (status == EFI_SUCCESS && (!first_gop || conout_found)) {
++ if (status == EFI_SUCCESS && (!first_gop || conout_found) &&
++ info->pixel_format != PIXEL_BLT_ONLY) {
+ /*
+ * Systems that use the UEFI Console Splitter may
+ * provide multiple GOP devices, not all of which are
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index ad531126667c..e3da97b409bc 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -1256,9 +1256,9 @@ int drm_fb_helper_check_var(struct fb_var_screeninfo *var,
+ * to KMS, hence fail if different settings are requested.
+ */
+ if (var->bits_per_pixel != fb->bits_per_pixel ||
+- var->xres != fb->width || var->yres != fb->height ||
+- var->xres_virtual != fb->width || var->yres_virtual != fb->height) {
+- DRM_DEBUG("fb userspace requested width/height/bpp different than current fb "
++ var->xres > fb->width || var->yres > fb->height ||
++ var->xres_virtual > fb->width || var->yres_virtual > fb->height) {
++ DRM_DEBUG("fb requested width/height/bpp can't fit in current fb "
+ "request %dx%d-%d (virtual %dx%d) > %dx%d-%d\n",
+ var->xres, var->yres, var->bits_per_pixel,
+ var->xres_virtual, var->yres_virtual,
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index db0a43a090d0..34ffe6e1f6d0 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -1309,7 +1309,7 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu,
+ if (!fence) {
+ event_free(gpu, event);
+ ret = -ENOMEM;
+- goto out_pm_put;
++ goto out_unlock;
+ }
+
+ gpu->event[event].fence = fence;
+@@ -1349,6 +1349,7 @@ int etnaviv_gpu_submit(struct etnaviv_gpu *gpu,
+ hangcheck_timer_reset(gpu);
+ ret = 0;
+
++out_unlock:
+ mutex_unlock(&gpu->lock);
+
+ out_pm_put:
+diff --git a/drivers/gpu/drm/i915/gvt/execlist.c b/drivers/gpu/drm/i915/gvt/execlist.c
+index 34083731669d..6804bf5fec3a 100644
+--- a/drivers/gpu/drm/i915/gvt/execlist.c
++++ b/drivers/gpu/drm/i915/gvt/execlist.c
+@@ -778,7 +778,8 @@ static void init_vgpu_execlist(struct intel_vgpu *vgpu, int ring_id)
+ _EL_OFFSET_STATUS_PTR);
+
+ ctx_status_ptr.dw = vgpu_vreg(vgpu, ctx_status_ptr_reg);
+- ctx_status_ptr.read_ptr = ctx_status_ptr.write_ptr = 0x7;
++ ctx_status_ptr.read_ptr = 0;
++ ctx_status_ptr.write_ptr = 0x7;
+ vgpu_vreg(vgpu, ctx_status_ptr_reg) = ctx_status_ptr.dw;
+ }
+
+diff --git a/drivers/gpu/drm/nouveau/nv50_display.c b/drivers/gpu/drm/nouveau/nv50_display.c
+index 32097fd615fd..96b510871631 100644
+--- a/drivers/gpu/drm/nouveau/nv50_display.c
++++ b/drivers/gpu/drm/nouveau/nv50_display.c
+@@ -995,7 +995,6 @@ nv50_wndw_atomic_destroy_state(struct drm_plane *plane,
+ {
+ struct nv50_wndw_atom *asyw = nv50_wndw_atom(state);
+ __drm_atomic_helper_plane_destroy_state(&asyw->state);
+- dma_fence_put(asyw->state.fence);
+ kfree(asyw);
+ }
+
+@@ -1007,7 +1006,6 @@ nv50_wndw_atomic_duplicate_state(struct drm_plane *plane)
+ if (!(asyw = kmalloc(sizeof(*asyw), GFP_KERNEL)))
+ return NULL;
+ __drm_atomic_helper_plane_duplicate_state(plane, &asyw->state);
+- asyw->state.fence = NULL;
+ asyw->interval = 1;
+ asyw->sema = armw->sema;
+ asyw->ntfy = armw->ntfy;
+@@ -2038,6 +2036,7 @@ nv50_head_atomic_check_mode(struct nv50_head *head, struct nv50_head_atom *asyh)
+ u32 vbackp = (mode->vtotal - mode->vsync_end) * vscan / ilace;
+ u32 hfrontp = mode->hsync_start - mode->hdisplay;
+ u32 vfrontp = (mode->vsync_start - mode->vdisplay) * vscan / ilace;
++ u32 blankus;
+ struct nv50_head_mode *m = &asyh->mode;
+
+ m->h.active = mode->htotal;
+@@ -2051,9 +2050,10 @@ nv50_head_atomic_check_mode(struct nv50_head *head, struct nv50_head_atom *asyh)
+ m->v.blanks = m->v.active - vfrontp - 1;
+
+ /*XXX: Safe underestimate, even "0" works */
+- m->v.blankus = (m->v.active - mode->vdisplay - 2) * m->h.active;
+- m->v.blankus *= 1000;
+- m->v.blankus /= mode->clock;
++ blankus = (m->v.active - mode->vdisplay - 2) * m->h.active;
++ blankus *= 1000;
++ blankus /= mode->clock;
++ m->v.blankus = blankus;
+
+ if (mode->flags & DRM_MODE_FLAG_INTERLACE) {
+ m->v.blank2e = m->v.active + m->v.synce + vbackp;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+index cceda959b47c..c2f7f6755aec 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/device/base.c
+@@ -714,7 +714,7 @@ nv4a_chipset = {
+ .i2c = nv04_i2c_new,
+ .imem = nv40_instmem_new,
+ .mc = nv44_mc_new,
+- .mmu = nv44_mmu_new,
++ .mmu = nv04_mmu_new,
+ .pci = nv40_pci_new,
+ .therm = nv40_therm_new,
+ .timer = nv41_timer_new,
+@@ -2270,6 +2270,35 @@ nv136_chipset = {
+ .fifo = gp100_fifo_new,
+ };
+
++static const struct nvkm_device_chip
++nv137_chipset = {
++ .name = "GP107",
++ .bar = gf100_bar_new,
++ .bios = nvkm_bios_new,
++ .bus = gf100_bus_new,
++ .devinit = gm200_devinit_new,
++ .fb = gp102_fb_new,
++ .fuse = gm107_fuse_new,
++ .gpio = gk104_gpio_new,
++ .i2c = gm200_i2c_new,
++ .ibus = gm200_ibus_new,
++ .imem = nv50_instmem_new,
++ .ltc = gp100_ltc_new,
++ .mc = gp100_mc_new,
++ .mmu = gf100_mmu_new,
++ .pci = gp100_pci_new,
++ .pmu = gp102_pmu_new,
++ .timer = gk20a_timer_new,
++ .top = gk104_top_new,
++ .ce[0] = gp102_ce_new,
++ .ce[1] = gp102_ce_new,
++ .ce[2] = gp102_ce_new,
++ .ce[3] = gp102_ce_new,
++ .disp = gp102_disp_new,
++ .dma = gf119_dma_new,
++ .fifo = gp100_fifo_new,
++};
++
+ static int
+ nvkm_device_event_ctor(struct nvkm_object *object, void *data, u32 size,
+ struct nvkm_notify *notify)
+@@ -2707,6 +2736,7 @@ nvkm_device_ctor(const struct nvkm_device_func *func,
+ case 0x132: device->chip = &nv132_chipset; break;
+ case 0x134: device->chip = &nv134_chipset; break;
+ case 0x136: device->chip = &nv136_chipset; break;
++ case 0x137: device->chip = &nv137_chipset; break;
+ default:
+ nvdev_error(device, "unknown chipset (%08x)\n", boot0);
+ goto done;
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv31.c b/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv31.c
+index 003ac915eaad..8a8895246d26 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv31.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv31.c
+@@ -198,7 +198,7 @@ nv31_mpeg_intr(struct nvkm_engine *engine)
+ }
+
+ if (type == 0x00000010) {
+- if (!nv31_mpeg_mthd(mpeg, mthd, data))
++ if (nv31_mpeg_mthd(mpeg, mthd, data))
+ show &= ~0x01000000;
+ }
+ }
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv44.c b/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv44.c
+index e536f37e24b0..c3cf02ed468e 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv44.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/mpeg/nv44.c
+@@ -172,7 +172,7 @@ nv44_mpeg_intr(struct nvkm_engine *engine)
+ }
+
+ if (type == 0x00000010) {
+- if (!nv44_mpeg_mthd(subdev->device, mthd, data))
++ if (nv44_mpeg_mthd(subdev->device, mthd, data))
+ show &= ~0x01000000;
+ }
+ }
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index c7d5b2b643d1..b9f48d4e155a 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -202,6 +202,7 @@ static const struct xpad_device {
+ { 0x1430, 0x8888, "TX6500+ Dance Pad (first generation)", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX },
+ { 0x146b, 0x0601, "BigBen Interactive XBOX 360 Controller", 0, XTYPE_XBOX360 },
+ { 0x1532, 0x0037, "Razer Sabertooth", 0, XTYPE_XBOX360 },
++ { 0x1532, 0x0a03, "Razer Wildcat", 0, XTYPE_XBOXONE },
+ { 0x15e4, 0x3f00, "Power A Mini Pro Elite", 0, XTYPE_XBOX360 },
+ { 0x15e4, 0x3f0a, "Xbox Airflo wired controller", 0, XTYPE_XBOX360 },
+ { 0x15e4, 0x3f10, "Batarang Xbox 360 controller", 0, XTYPE_XBOX360 },
+@@ -330,6 +331,7 @@ static struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x24c6), /* PowerA Controllers */
+ XPAD_XBOXONE_VENDOR(0x24c6), /* PowerA Controllers */
+ XPAD_XBOX360_VENDOR(0x1532), /* Razer Sabertooth */
++ XPAD_XBOXONE_VENDOR(0x1532), /* Razer Wildcat */
+ XPAD_XBOX360_VENDOR(0x15e4), /* Numark X-Box 360 controllers */
+ XPAD_XBOX360_VENDOR(0x162e), /* Joytech X-Box 360 controllers */
+ { }
+diff --git a/drivers/irqchip/irq-imx-gpcv2.c b/drivers/irqchip/irq-imx-gpcv2.c
+index 15af9a9753e5..2d203b422129 100644
+--- a/drivers/irqchip/irq-imx-gpcv2.c
++++ b/drivers/irqchip/irq-imx-gpcv2.c
+@@ -230,6 +230,8 @@ static int __init imx_gpcv2_irqchip_init(struct device_node *node,
+ return -ENOMEM;
+ }
+
++ raw_spin_lock_init(&cd->rlock);
++
+ cd->gpc_base = of_iomap(node, 0);
+ if (!cd->gpc_base) {
+ pr_err("fsl-gpcv2: unable to map gpc registers\n");
+diff --git a/drivers/media/usb/dvb-usb-v2/dvb_usb_core.c b/drivers/media/usb/dvb-usb-v2/dvb_usb_core.c
+index a8e6624fbe83..a9bb2dde98ea 100644
+--- a/drivers/media/usb/dvb-usb-v2/dvb_usb_core.c
++++ b/drivers/media/usb/dvb-usb-v2/dvb_usb_core.c
+@@ -1013,8 +1013,8 @@ EXPORT_SYMBOL(dvb_usbv2_probe);
+ void dvb_usbv2_disconnect(struct usb_interface *intf)
+ {
+ struct dvb_usb_device *d = usb_get_intfdata(intf);
+- const char *name = d->name;
+- struct device dev = d->udev->dev;
++ const char *devname = kstrdup(dev_name(&d->udev->dev), GFP_KERNEL);
++ const char *drvname = d->name;
+
+ dev_dbg(&d->udev->dev, "%s: bInterfaceNumber=%d\n", __func__,
+ intf->cur_altsetting->desc.bInterfaceNumber);
+@@ -1024,8 +1024,9 @@ void dvb_usbv2_disconnect(struct usb_interface *intf)
+
+ dvb_usbv2_exit(d);
+
+- dev_info(&dev, "%s: '%s' successfully deinitialized and disconnected\n",
+- KBUILD_MODNAME, name);
++ pr_info("%s: '%s:%s' successfully deinitialized and disconnected\n",
++ KBUILD_MODNAME, drvname, devname);
++ kfree(devname);
+ }
+ EXPORT_SYMBOL(dvb_usbv2_disconnect);
+
+diff --git a/drivers/media/usb/dvb-usb/cxusb.c b/drivers/media/usb/dvb-usb/cxusb.c
+index 9b8771eb31d4..a10961948f8c 100644
+--- a/drivers/media/usb/dvb-usb/cxusb.c
++++ b/drivers/media/usb/dvb-usb/cxusb.c
+@@ -59,23 +59,24 @@ static int cxusb_ctrl_msg(struct dvb_usb_device *d,
+ u8 cmd, u8 *wbuf, int wlen, u8 *rbuf, int rlen)
+ {
+ struct cxusb_state *st = d->priv;
+- int ret, wo;
++ int ret;
+
+ if (1 + wlen > MAX_XFER_SIZE) {
+ warn("i2c wr: len=%d is too big!\n", wlen);
+ return -EOPNOTSUPP;
+ }
+
+- wo = (rbuf == NULL || rlen == 0); /* write-only */
++ if (rlen > MAX_XFER_SIZE) {
++ warn("i2c rd: len=%d is too big!\n", rlen);
++ return -EOPNOTSUPP;
++ }
+
+ mutex_lock(&d->data_mutex);
+ st->data[0] = cmd;
+ memcpy(&st->data[1], wbuf, wlen);
+- if (wo)
+- ret = dvb_usb_generic_write(d, st->data, 1 + wlen);
+- else
+- ret = dvb_usb_generic_rw(d, st->data, 1 + wlen,
+- rbuf, rlen, 0);
++ ret = dvb_usb_generic_rw(d, st->data, 1 + wlen, st->data, rlen, 0);
++ if (!ret && rbuf && rlen)
++ memcpy(rbuf, st->data, rlen);
+
+ mutex_unlock(&d->data_mutex);
+ return ret;
+diff --git a/drivers/net/can/ifi_canfd/ifi_canfd.c b/drivers/net/can/ifi_canfd/ifi_canfd.c
+index 368bb0710d8f..481895b2f9f4 100644
+--- a/drivers/net/can/ifi_canfd/ifi_canfd.c
++++ b/drivers/net/can/ifi_canfd/ifi_canfd.c
+@@ -557,7 +557,7 @@ static int ifi_canfd_poll(struct napi_struct *napi, int quota)
+ int work_done = 0;
+
+ u32 stcmd = readl(priv->base + IFI_CANFD_STCMD);
+- u32 rxstcmd = readl(priv->base + IFI_CANFD_STCMD);
++ u32 rxstcmd = readl(priv->base + IFI_CANFD_RXSTCMD);
+ u32 errctr = readl(priv->base + IFI_CANFD_ERROR_CTR);
+
+ /* Handle bus state changes */
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index 23d4a1728cdf..351bac8f6503 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -934,8 +934,14 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm,
+ rc = nd_desc->ndctl(nd_desc, nvdimm, cmd, buf, buf_len, NULL);
+ if (rc < 0)
+ goto out_unlock;
++ nvdimm_bus_unlock(&nvdimm_bus->dev);
++
+ if (copy_to_user(p, buf, buf_len))
+ rc = -EFAULT;
++
++ vfree(buf);
++ return rc;
++
+ out_unlock:
+ nvdimm_bus_unlock(&nvdimm_bus->dev);
+ out:
+diff --git a/drivers/nvdimm/claim.c b/drivers/nvdimm/claim.c
+index b3323c0697f6..ca6d572c48fc 100644
+--- a/drivers/nvdimm/claim.c
++++ b/drivers/nvdimm/claim.c
+@@ -243,7 +243,15 @@ static int nsio_rw_bytes(struct nd_namespace_common *ndns,
+ }
+
+ if (unlikely(is_bad_pmem(&nsio->bb, sector, sz_align))) {
+- if (IS_ALIGNED(offset, 512) && IS_ALIGNED(size, 512)) {
++ /*
++ * FIXME: nsio_rw_bytes() may be called from atomic
++ * context in the btt case and nvdimm_clear_poison()
++ * takes a sleeping lock. Until the locking can be
++ * reworked this capability requires that the namespace
++ * is not claimed by btt.
++ */
++ if (IS_ALIGNED(offset, 512) && IS_ALIGNED(size, 512)
++ && (!ndns->claim || !is_nd_btt(ndns->claim))) {
+ long cleared;
+
+ cleared = nvdimm_clear_poison(&ndns->dev, offset, size);
+diff --git a/drivers/nvdimm/dimm_devs.c b/drivers/nvdimm/dimm_devs.c
+index 0eedc49e0d47..8b721321be5b 100644
+--- a/drivers/nvdimm/dimm_devs.c
++++ b/drivers/nvdimm/dimm_devs.c
+@@ -395,7 +395,7 @@ EXPORT_SYMBOL_GPL(nvdimm_create);
+
+ int alias_dpa_busy(struct device *dev, void *data)
+ {
+- resource_size_t map_end, blk_start, new, busy;
++ resource_size_t map_end, blk_start, new;
+ struct blk_alloc_info *info = data;
+ struct nd_mapping *nd_mapping;
+ struct nd_region *nd_region;
+@@ -436,29 +436,19 @@ int alias_dpa_busy(struct device *dev, void *data)
+ retry:
+ /*
+ * Find the free dpa from the end of the last pmem allocation to
+- * the end of the interleave-set mapping that is not already
+- * covered by a blk allocation.
++ * the end of the interleave-set mapping.
+ */
+- busy = 0;
+ for_each_dpa_resource(ndd, res) {
++ if (strncmp(res->name, "pmem", 4) != 0)
++ continue;
+ if ((res->start >= blk_start && res->start < map_end)
+ || (res->end >= blk_start
+ && res->end <= map_end)) {
+- if (strncmp(res->name, "pmem", 4) == 0) {
+- new = max(blk_start, min(map_end + 1,
+- res->end + 1));
+- if (new != blk_start) {
+- blk_start = new;
+- goto retry;
+- }
+- } else
+- busy += min(map_end, res->end)
+- - max(nd_mapping->start, res->start) + 1;
+- } else if (nd_mapping->start > res->start
+- && map_end < res->end) {
+- /* total eclipse of the PMEM region mapping */
+- busy += nd_mapping->size;
+- break;
++ new = max(blk_start, min(map_end + 1, res->end + 1));
++ if (new != blk_start) {
++ blk_start = new;
++ goto retry;
++ }
+ }
+ }
+
+@@ -470,52 +460,11 @@ int alias_dpa_busy(struct device *dev, void *data)
+ return 1;
+ }
+
+- info->available -= blk_start - nd_mapping->start + busy;
++ info->available -= blk_start - nd_mapping->start;
+
+ return 0;
+ }
+
+-static int blk_dpa_busy(struct device *dev, void *data)
+-{
+- struct blk_alloc_info *info = data;
+- struct nd_mapping *nd_mapping;
+- struct nd_region *nd_region;
+- resource_size_t map_end;
+- int i;
+-
+- if (!is_nd_pmem(dev))
+- return 0;
+-
+- nd_region = to_nd_region(dev);
+- for (i = 0; i < nd_region->ndr_mappings; i++) {
+- nd_mapping = &nd_region->mapping[i];
+- if (nd_mapping->nvdimm == info->nd_mapping->nvdimm)
+- break;
+- }
+-
+- if (i >= nd_region->ndr_mappings)
+- return 0;
+-
+- map_end = nd_mapping->start + nd_mapping->size - 1;
+- if (info->res->start >= nd_mapping->start
+- && info->res->start < map_end) {
+- if (info->res->end <= map_end) {
+- info->busy = 0;
+- return 1;
+- } else {
+- info->busy -= info->res->end - map_end;
+- return 0;
+- }
+- } else if (info->res->end >= nd_mapping->start
+- && info->res->end <= map_end) {
+- info->busy -= nd_mapping->start - info->res->start;
+- return 0;
+- } else {
+- info->busy -= nd_mapping->size;
+- return 0;
+- }
+-}
+-
+ /**
+ * nd_blk_available_dpa - account the unused dpa of BLK region
+ * @nd_mapping: container of dpa-resource-root + labels
+@@ -545,11 +494,7 @@ resource_size_t nd_blk_available_dpa(struct nd_region *nd_region)
+ for_each_dpa_resource(ndd, res) {
+ if (strncmp(res->name, "blk", 3) != 0)
+ continue;
+-
+- info.res = res;
+- info.busy = resource_size(res);
+- device_for_each_child(&nvdimm_bus->dev, &info, blk_dpa_busy);
+- info.available -= info.busy;
++ info.available -= resource_size(res);
+ }
+
+ return info.available;
+diff --git a/drivers/platform/x86/acer-wmi.c b/drivers/platform/x86/acer-wmi.c
+index a66192f692e3..c29b9b611ab2 100644
+--- a/drivers/platform/x86/acer-wmi.c
++++ b/drivers/platform/x86/acer-wmi.c
+@@ -1846,11 +1846,24 @@ static int __init acer_wmi_enable_lm(void)
+ return status;
+ }
+
++#define ACER_WMID_ACCEL_HID "BST0001"
++
+ static acpi_status __init acer_wmi_get_handle_cb(acpi_handle ah, u32 level,
+ void *ctx, void **retval)
+ {
++ struct acpi_device *dev;
++
++ if (!strcmp(ctx, "SENR")) {
++ if (acpi_bus_get_device(ah, &dev))
++ return AE_OK;
++ if (!strcmp(ACER_WMID_ACCEL_HID, acpi_device_hid(dev)))
++ return AE_OK;
++ } else
++ return AE_OK;
++
+ *(acpi_handle *)retval = ah;
+- return AE_OK;
++
++ return AE_CTRL_TERMINATE;
+ }
+
+ static int __init acer_wmi_get_handle(const char *name, const char *prop,
+@@ -1877,7 +1890,7 @@ static int __init acer_wmi_accel_setup(void)
+ {
+ int err;
+
+- err = acer_wmi_get_handle("SENR", "BST0001", &gsensor_handle);
++ err = acer_wmi_get_handle("SENR", ACER_WMID_ACCEL_HID, &gsensor_handle);
+ if (err)
+ return err;
+
+@@ -2233,10 +2246,11 @@ static int __init acer_wmi_init(void)
+ err = acer_wmi_input_setup();
+ if (err)
+ return err;
++ err = acer_wmi_accel_setup();
++ if (err)
++ return err;
+ }
+
+- acer_wmi_accel_setup();
+-
+ err = platform_driver_register(&acer_platform_driver);
+ if (err) {
+ pr_err("Unable to register platform driver\n");
+diff --git a/drivers/pwm/pwm-rockchip.c b/drivers/pwm/pwm-rockchip.c
+index ef89df1f7336..744d56197286 100644
+--- a/drivers/pwm/pwm-rockchip.c
++++ b/drivers/pwm/pwm-rockchip.c
+@@ -191,6 +191,28 @@ static int rockchip_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ return 0;
+ }
+
++static int rockchip_pwm_enable(struct pwm_chip *chip,
++ struct pwm_device *pwm,
++ bool enable,
++ enum pwm_polarity polarity)
++{
++ struct rockchip_pwm_chip *pc = to_rockchip_pwm_chip(chip);
++ int ret;
++
++ if (enable) {
++ ret = clk_enable(pc->clk);
++ if (ret)
++ return ret;
++ }
++
++ pc->data->set_enable(chip, pwm, enable, polarity);
++
++ if (!enable)
++ clk_disable(pc->clk);
++
++ return 0;
++}
++
+ static int rockchip_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ struct pwm_state *state)
+ {
+@@ -207,22 +229,26 @@ static int rockchip_pwm_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ return ret;
+
+ if (state->polarity != curstate.polarity && enabled) {
+- pc->data->set_enable(chip, pwm, false, state->polarity);
++ ret = rockchip_pwm_enable(chip, pwm, false, state->polarity);
++ if (ret)
++ goto out;
+ enabled = false;
+ }
+
+ ret = rockchip_pwm_config(chip, pwm, state->duty_cycle, state->period);
+ if (ret) {
+ if (enabled != curstate.enabled)
+- pc->data->set_enable(chip, pwm, !enabled,
+- state->polarity);
+-
++ rockchip_pwm_enable(chip, pwm, !enabled,
++ state->polarity);
+ goto out;
+ }
+
+- if (state->enabled != enabled)
+- pc->data->set_enable(chip, pwm, state->enabled,
+- state->polarity);
++ if (state->enabled != enabled) {
++ ret = rockchip_pwm_enable(chip, pwm, state->enabled,
++ state->polarity);
++ if (ret)
++ goto out;
++ }
+
+ /*
+ * Update the state with the real hardware, which can differ a bit
+diff --git a/drivers/rtc/rtc-tegra.c b/drivers/rtc/rtc-tegra.c
+index 3853ba963bb5..19e03d0b956b 100644
+--- a/drivers/rtc/rtc-tegra.c
++++ b/drivers/rtc/rtc-tegra.c
+@@ -18,6 +18,7 @@
+ * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ */
+ #include <linux/kernel.h>
++#include <linux/clk.h>
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
+@@ -59,6 +60,7 @@ struct tegra_rtc_info {
+ struct platform_device *pdev;
+ struct rtc_device *rtc_dev;
+ void __iomem *rtc_base; /* NULL if not initialized. */
++ struct clk *clk;
+ int tegra_rtc_irq; /* alarm and periodic irq */
+ spinlock_t tegra_rtc_lock;
+ };
+@@ -326,6 +328,14 @@ static int __init tegra_rtc_probe(struct platform_device *pdev)
+ if (info->tegra_rtc_irq <= 0)
+ return -EBUSY;
+
++ info->clk = devm_clk_get(&pdev->dev, NULL);
++ if (IS_ERR(info->clk))
++ return PTR_ERR(info->clk);
++
++ ret = clk_prepare_enable(info->clk);
++ if (ret < 0)
++ return ret;
++
+ /* set context info. */
+ info->pdev = pdev;
+ spin_lock_init(&info->tegra_rtc_lock);
+@@ -346,7 +356,7 @@ static int __init tegra_rtc_probe(struct platform_device *pdev)
+ ret = PTR_ERR(info->rtc_dev);
+ dev_err(&pdev->dev, "Unable to register device (err=%d).\n",
+ ret);
+- return ret;
++ goto disable_clk;
+ }
+
+ ret = devm_request_irq(&pdev->dev, info->tegra_rtc_irq,
+@@ -356,12 +366,25 @@ static int __init tegra_rtc_probe(struct platform_device *pdev)
+ dev_err(&pdev->dev,
+ "Unable to request interrupt for device (err=%d).\n",
+ ret);
+- return ret;
++ goto disable_clk;
+ }
+
+ dev_notice(&pdev->dev, "Tegra internal Real Time Clock\n");
+
+ return 0;
++
++disable_clk:
++ clk_disable_unprepare(info->clk);
++ return ret;
++}
++
++static int tegra_rtc_remove(struct platform_device *pdev)
++{
++ struct tegra_rtc_info *info = platform_get_drvdata(pdev);
++
++ clk_disable_unprepare(info->clk);
++
++ return 0;
+ }
+
+ #ifdef CONFIG_PM_SLEEP
+@@ -413,6 +436,7 @@ static void tegra_rtc_shutdown(struct platform_device *pdev)
+
+ MODULE_ALIAS("platform:tegra_rtc");
+ static struct platform_driver tegra_rtc_driver = {
++ .remove = tegra_rtc_remove,
+ .shutdown = tegra_rtc_shutdown,
+ .driver = {
+ .name = "tegra_rtc",
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index dc79524178ad..f72fe724074d 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -1125,8 +1125,13 @@ static inline
+ uint32_t qla2x00_isp_reg_stat(struct qla_hw_data *ha)
+ {
+ struct device_reg_24xx __iomem *reg = &ha->iobase->isp24;
++ struct device_reg_82xx __iomem *reg82 = &ha->iobase->isp82;
+
+- return ((RD_REG_DWORD(&reg->host_status)) == ISP_REG_DISCONNECT);
++ if (IS_P3P_TYPE(ha))
++ return ((RD_REG_DWORD(&reg82->host_int)) == ISP_REG_DISCONNECT);
++ else
++ return ((RD_REG_DWORD(&reg->host_status)) ==
++ ISP_REG_DISCONNECT);
+ }
+
+ /**************************************************************************
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 1ee57619c95e..d3886917b2ea 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -2109,6 +2109,22 @@ static void read_capacity_error(struct scsi_disk *sdkp, struct scsi_device *sdp,
+
+ #define READ_CAPACITY_RETRIES_ON_RESET 10
+
++/*
++ * Ensure that we don't overflow sector_t when CONFIG_LBDAF is not set
++ * and the reported logical block size is bigger than 512 bytes. Note
++ * that last_sector is a u64 and therefore logical_to_sectors() is not
++ * applicable.
++ */
++static bool sd_addressable_capacity(u64 lba, unsigned int sector_size)
++{
++ u64 last_sector = (lba + 1ULL) << (ilog2(sector_size) - 9);
++
++ if (sizeof(sector_t) == 4 && last_sector > U32_MAX)
++ return false;
++
++ return true;
++}
++
+ static int read_capacity_16(struct scsi_disk *sdkp, struct scsi_device *sdp,
+ unsigned char *buffer)
+ {
+@@ -2174,7 +2190,7 @@ static int read_capacity_16(struct scsi_disk *sdkp, struct scsi_device *sdp,
+ return -ENODEV;
+ }
+
+- if ((sizeof(sdkp->capacity) == 4) && (lba >= 0xffffffffULL)) {
++ if (!sd_addressable_capacity(lba, sector_size)) {
+ sd_printk(KERN_ERR, sdkp, "Too big for this kernel. Use a "
+ "kernel compiled with support for large block "
+ "devices.\n");
+@@ -2263,7 +2279,7 @@ static int read_capacity_10(struct scsi_disk *sdkp, struct scsi_device *sdp,
+ return sector_size;
+ }
+
+- if ((sizeof(sdkp->capacity) == 4) && (lba == 0xffffffff)) {
++ if (!sd_addressable_capacity(lba, sector_size)) {
+ sd_printk(KERN_ERR, sdkp, "Too big for this kernel. Use a "
+ "kernel compiled with support for large block "
+ "devices.\n");
+@@ -2963,7 +2979,8 @@ static int sd_revalidate_disk(struct gendisk *disk)
+ q->limits.io_opt = logical_to_bytes(sdp, sdkp->opt_xfer_blocks);
+ rw_max = logical_to_sectors(sdp, sdkp->opt_xfer_blocks);
+ } else
+- rw_max = BLK_DEF_MAX_SECTORS;
++ rw_max = min_not_zero(logical_to_sectors(sdp, dev_max),
++ (sector_t)BLK_DEF_MAX_SECTORS);
+
+ /* Combine with controller limits */
+ q->limits.max_sectors = min(rw_max, queue_max_hw_sectors(q));
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index 94352e4df831..94b0aacefae6 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -833,6 +833,7 @@ static void get_capabilities(struct scsi_cd *cd)
+ unsigned char *buffer;
+ struct scsi_mode_data data;
+ struct scsi_sense_hdr sshdr;
++ unsigned int ms_len = 128;
+ int rc, n;
+
+ static const char *loadmech[] =
+@@ -859,10 +860,11 @@ static void get_capabilities(struct scsi_cd *cd)
+ scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES, &sshdr);
+
+ /* ask for mode page 0x2a */
+- rc = scsi_mode_sense(cd->device, 0, 0x2a, buffer, 128,
++ rc = scsi_mode_sense(cd->device, 0, 0x2a, buffer, ms_len,
+ SR_TIMEOUT, 3, &data, NULL);
+
+- if (!scsi_status_is_good(rc)) {
++ if (!scsi_status_is_good(rc) || data.length > ms_len ||
++ data.header_length + data.block_descriptor_length > data.length) {
+ /* failed, drive doesn't have capabilities mode page */
+ cd->cdi.speed = 1;
+ cd->cdi.mask |= (CDC_CD_R | CDC_CD_RW | CDC_DVD_R |
+diff --git a/drivers/target/iscsi/iscsi_target_parameters.c b/drivers/target/iscsi/iscsi_target_parameters.c
+index e65bf78ceef3..fce627628200 100644
+--- a/drivers/target/iscsi/iscsi_target_parameters.c
++++ b/drivers/target/iscsi/iscsi_target_parameters.c
+@@ -782,22 +782,6 @@ static void iscsi_check_proposer_for_optional_reply(struct iscsi_param *param)
+ if (!strcmp(param->name, MAXRECVDATASEGMENTLENGTH))
+ SET_PSTATE_REPLY_OPTIONAL(param);
+ /*
+- * The GlobalSAN iSCSI Initiator for MacOSX does
+- * not respond to MaxBurstLength, FirstBurstLength,
+- * DefaultTime2Wait or DefaultTime2Retain parameter keys.
+- * So, we set them to 'reply optional' here, and assume the
+- * the defaults from iscsi_parameters.h if the initiator
+- * is not RFC compliant and the keys are not negotiated.
+- */
+- if (!strcmp(param->name, MAXBURSTLENGTH))
+- SET_PSTATE_REPLY_OPTIONAL(param);
+- if (!strcmp(param->name, FIRSTBURSTLENGTH))
+- SET_PSTATE_REPLY_OPTIONAL(param);
+- if (!strcmp(param->name, DEFAULTTIME2WAIT))
+- SET_PSTATE_REPLY_OPTIONAL(param);
+- if (!strcmp(param->name, DEFAULTTIME2RETAIN))
+- SET_PSTATE_REPLY_OPTIONAL(param);
+- /*
+ * Required for gPXE iSCSI boot client
+ */
+ if (!strcmp(param->name, MAXCONNECTIONS))
+diff --git a/drivers/target/iscsi/iscsi_target_util.c b/drivers/target/iscsi/iscsi_target_util.c
+index b5a1b4ccba12..712fd36a1220 100644
+--- a/drivers/target/iscsi/iscsi_target_util.c
++++ b/drivers/target/iscsi/iscsi_target_util.c
+@@ -736,21 +736,23 @@ void iscsit_free_cmd(struct iscsi_cmd *cmd, bool shutdown)
+ {
+ struct se_cmd *se_cmd = NULL;
+ int rc;
++ bool op_scsi = false;
+ /*
+ * Determine if a struct se_cmd is associated with
+ * this struct iscsi_cmd.
+ */
+ switch (cmd->iscsi_opcode) {
+ case ISCSI_OP_SCSI_CMD:
+- se_cmd = &cmd->se_cmd;
+- __iscsit_free_cmd(cmd, true, shutdown);
++ op_scsi = true;
+ /*
+ * Fallthrough
+ */
+ case ISCSI_OP_SCSI_TMFUNC:
+- rc = transport_generic_free_cmd(&cmd->se_cmd, shutdown);
+- if (!rc && shutdown && se_cmd && se_cmd->se_sess) {
+- __iscsit_free_cmd(cmd, true, shutdown);
++ se_cmd = &cmd->se_cmd;
++ __iscsit_free_cmd(cmd, op_scsi, shutdown);
++ rc = transport_generic_free_cmd(se_cmd, shutdown);
++ if (!rc && shutdown && se_cmd->se_sess) {
++ __iscsit_free_cmd(cmd, op_scsi, shutdown);
+ target_put_sess_cmd(se_cmd);
+ }
+ break;
+diff --git a/drivers/target/target_core_fabric_configfs.c b/drivers/target/target_core_fabric_configfs.c
+index d8a16ca6baa5..d1e6cab8e3d3 100644
+--- a/drivers/target/target_core_fabric_configfs.c
++++ b/drivers/target/target_core_fabric_configfs.c
+@@ -92,6 +92,11 @@ static int target_fabric_mappedlun_link(
+ pr_err("Source se_lun->lun_se_dev does not exist\n");
+ return -EINVAL;
+ }
++ if (lun->lun_shutdown) {
++ pr_err("Unable to create mappedlun symlink because"
++ " lun->lun_shutdown=true\n");
++ return -EINVAL;
++ }
+ se_tpg = lun->lun_tpg;
+
+ nacl_ci = &lun_acl_ci->ci_parent->ci_group->cg_item;
+diff --git a/drivers/target/target_core_tpg.c b/drivers/target/target_core_tpg.c
+index 2744251178ad..1949f50725a5 100644
+--- a/drivers/target/target_core_tpg.c
++++ b/drivers/target/target_core_tpg.c
+@@ -640,6 +640,8 @@ void core_tpg_remove_lun(
+ */
+ struct se_device *dev = rcu_dereference_raw(lun->lun_se_dev);
+
++ lun->lun_shutdown = true;
++
+ core_clear_lun_from_tpg(lun, tpg);
+ /*
+ * Wait for any active I/O references to percpu se_lun->lun_ref to
+@@ -661,6 +663,8 @@ void core_tpg_remove_lun(
+ }
+ if (!(dev->se_hba->hba_flags & HBA_FLAGS_INTERNAL_USE))
+ hlist_del_rcu(&lun->link);
++
++ lun->lun_shutdown = false;
+ mutex_unlock(&tpg->tpg_lun_mutex);
+
+ percpu_ref_exit(&lun->lun_ref);
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index 8041710b6972..ced82cd3cb0e 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -307,24 +307,50 @@ static void free_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd)
+ DATA_BLOCK_BITS);
+ }
+
+-static void gather_data_area(struct tcmu_dev *udev, unsigned long *cmd_bitmap,
+- struct scatterlist *data_sg, unsigned int data_nents)
++static void gather_data_area(struct tcmu_dev *udev, struct tcmu_cmd *cmd,
++ bool bidi)
+ {
++ struct se_cmd *se_cmd = cmd->se_cmd;
+ int i, block;
+ int block_remaining = 0;
+ void *from, *to;
+ size_t copy_bytes, from_offset;
+- struct scatterlist *sg;
++ struct scatterlist *sg, *data_sg;
++ unsigned int data_nents;
++ DECLARE_BITMAP(bitmap, DATA_BLOCK_BITS);
++
++ bitmap_copy(bitmap, cmd->data_bitmap, DATA_BLOCK_BITS);
++
++ if (!bidi) {
++ data_sg = se_cmd->t_data_sg;
++ data_nents = se_cmd->t_data_nents;
++ } else {
++ uint32_t count;
++
++ /*
++ * For bidi case, the first count blocks are for Data-Out
++ * buffer blocks, and before gathering the Data-In buffer
++ * the Data-Out buffer blocks should be discarded.
++ */
++ count = DIV_ROUND_UP(se_cmd->data_length, DATA_BLOCK_SIZE);
++ while (count--) {
++ block = find_first_bit(bitmap, DATA_BLOCK_BITS);
++ clear_bit(block, bitmap);
++ }
++
++ data_sg = se_cmd->t_bidi_data_sg;
++ data_nents = se_cmd->t_bidi_data_nents;
++ }
+
+ for_each_sg(data_sg, sg, data_nents, i) {
+ int sg_remaining = sg->length;
+ to = kmap_atomic(sg_page(sg)) + sg->offset;
+ while (sg_remaining > 0) {
+ if (block_remaining == 0) {
+- block = find_first_bit(cmd_bitmap,
++ block = find_first_bit(bitmap,
+ DATA_BLOCK_BITS);
+ block_remaining = DATA_BLOCK_SIZE;
+- clear_bit(block, cmd_bitmap);
++ clear_bit(block, bitmap);
+ }
+ copy_bytes = min_t(size_t, sg_remaining,
+ block_remaining);
+@@ -390,6 +416,27 @@ static bool is_ring_space_avail(struct tcmu_dev *udev, size_t cmd_size, size_t d
+ return true;
+ }
+
++static inline size_t tcmu_cmd_get_data_length(struct tcmu_cmd *tcmu_cmd)
++{
++ struct se_cmd *se_cmd = tcmu_cmd->se_cmd;
++ size_t data_length = round_up(se_cmd->data_length, DATA_BLOCK_SIZE);
++
++ if (se_cmd->se_cmd_flags & SCF_BIDI) {
++ BUG_ON(!(se_cmd->t_bidi_data_sg && se_cmd->t_bidi_data_nents));
++ data_length += round_up(se_cmd->t_bidi_data_sg->length,
++ DATA_BLOCK_SIZE);
++ }
++
++ return data_length;
++}
++
++static inline uint32_t tcmu_cmd_get_block_cnt(struct tcmu_cmd *tcmu_cmd)
++{
++ size_t data_length = tcmu_cmd_get_data_length(tcmu_cmd);
++
++ return data_length / DATA_BLOCK_SIZE;
++}
++
+ static sense_reason_t
+ tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
+ {
+@@ -403,7 +450,7 @@ tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
+ uint32_t cmd_head;
+ uint64_t cdb_off;
+ bool copy_to_data_area;
+- size_t data_length;
++ size_t data_length = tcmu_cmd_get_data_length(tcmu_cmd);
+ DECLARE_BITMAP(old_bitmap, DATA_BLOCK_BITS);
+
+ if (test_bit(TCMU_DEV_BIT_BROKEN, &udev->flags))
+@@ -417,8 +464,7 @@ tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
+ * expensive to tell how many regions are freed in the bitmap
+ */
+ base_command_size = max(offsetof(struct tcmu_cmd_entry,
+- req.iov[se_cmd->t_bidi_data_nents +
+- se_cmd->t_data_nents]),
++ req.iov[tcmu_cmd_get_block_cnt(tcmu_cmd)]),
+ sizeof(struct tcmu_cmd_entry));
+ command_size = base_command_size
+ + round_up(scsi_command_size(se_cmd->t_task_cdb), TCMU_OP_ALIGN_SIZE);
+@@ -429,11 +475,6 @@ tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
+
+ mb = udev->mb_addr;
+ cmd_head = mb->cmd_head % udev->cmdr_size; /* UAM */
+- data_length = se_cmd->data_length;
+- if (se_cmd->se_cmd_flags & SCF_BIDI) {
+- BUG_ON(!(se_cmd->t_bidi_data_sg && se_cmd->t_bidi_data_nents));
+- data_length += se_cmd->t_bidi_data_sg->length;
+- }
+ if ((command_size > (udev->cmdr_size / 2)) ||
+ data_length > udev->data_size) {
+ pr_warn("TCMU: Request of size %zu/%zu is too big for %u/%zu "
+@@ -503,11 +544,14 @@ tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
+ entry->req.iov_dif_cnt = 0;
+
+ /* Handle BIDI commands */
+- iov_cnt = 0;
+- alloc_and_scatter_data_area(udev, se_cmd->t_bidi_data_sg,
+- se_cmd->t_bidi_data_nents, &iov, &iov_cnt, false);
+- entry->req.iov_bidi_cnt = iov_cnt;
+-
++ if (se_cmd->se_cmd_flags & SCF_BIDI) {
++ iov_cnt = 0;
++ iov++;
++ alloc_and_scatter_data_area(udev, se_cmd->t_bidi_data_sg,
++ se_cmd->t_bidi_data_nents, &iov, &iov_cnt,
++ false);
++ entry->req.iov_bidi_cnt = iov_cnt;
++ }
+ /* cmd's data_bitmap is what changed in process */
+ bitmap_xor(tcmu_cmd->data_bitmap, old_bitmap, udev->data_bitmap,
+ DATA_BLOCK_BITS);
+@@ -583,19 +627,11 @@ static void tcmu_handle_completion(struct tcmu_cmd *cmd, struct tcmu_cmd_entry *
+ se_cmd->scsi_sense_length);
+ free_data_area(udev, cmd);
+ } else if (se_cmd->se_cmd_flags & SCF_BIDI) {
+- DECLARE_BITMAP(bitmap, DATA_BLOCK_BITS);
+-
+ /* Get Data-In buffer before clean up */
+- bitmap_copy(bitmap, cmd->data_bitmap, DATA_BLOCK_BITS);
+- gather_data_area(udev, bitmap,
+- se_cmd->t_bidi_data_sg, se_cmd->t_bidi_data_nents);
++ gather_data_area(udev, cmd, true);
+ free_data_area(udev, cmd);
+ } else if (se_cmd->data_direction == DMA_FROM_DEVICE) {
+- DECLARE_BITMAP(bitmap, DATA_BLOCK_BITS);
+-
+- bitmap_copy(bitmap, cmd->data_bitmap, DATA_BLOCK_BITS);
+- gather_data_area(udev, bitmap,
+- se_cmd->t_data_sg, se_cmd->t_data_nents);
++ gather_data_area(udev, cmd, false);
+ free_data_area(udev, cmd);
+ } else if (se_cmd->data_direction == DMA_TO_DEVICE) {
+ free_data_area(udev, cmd);
+diff --git a/drivers/video/fbdev/efifb.c b/drivers/video/fbdev/efifb.c
+index 8c4dc1e1f94f..b827a8113e26 100644
+--- a/drivers/video/fbdev/efifb.c
++++ b/drivers/video/fbdev/efifb.c
+@@ -10,6 +10,7 @@
+ #include <linux/efi.h>
+ #include <linux/errno.h>
+ #include <linux/fb.h>
++#include <linux/pci.h>
+ #include <linux/platform_device.h>
+ #include <linux/screen_info.h>
+ #include <video/vga.h>
+@@ -143,6 +144,8 @@ static struct attribute *efifb_attrs[] = {
+ };
+ ATTRIBUTE_GROUPS(efifb);
+
++static bool pci_dev_disabled; /* FB base matches BAR of a disabled device */
++
+ static int efifb_probe(struct platform_device *dev)
+ {
+ struct fb_info *info;
+@@ -152,7 +155,7 @@ static int efifb_probe(struct platform_device *dev)
+ unsigned int size_total;
+ char *option = NULL;
+
+- if (screen_info.orig_video_isVGA != VIDEO_TYPE_EFI)
++ if (screen_info.orig_video_isVGA != VIDEO_TYPE_EFI || pci_dev_disabled)
+ return -ENODEV;
+
+ if (fb_get_options("efifb", &option))
+@@ -360,3 +363,64 @@ static struct platform_driver efifb_driver = {
+ };
+
+ builtin_platform_driver(efifb_driver);
++
++#if defined(CONFIG_PCI) && !defined(CONFIG_X86)
++
++static bool pci_bar_found; /* did we find a BAR matching the efifb base? */
++
++static void claim_efifb_bar(struct pci_dev *dev, int idx)
++{
++ u16 word;
++
++ pci_bar_found = true;
++
++ pci_read_config_word(dev, PCI_COMMAND, &word);
++ if (!(word & PCI_COMMAND_MEMORY)) {
++ pci_dev_disabled = true;
++ dev_err(&dev->dev,
++ "BAR %d: assigned to efifb but device is disabled!\n",
++ idx);
++ return;
++ }
++
++ if (pci_claim_resource(dev, idx)) {
++ pci_dev_disabled = true;
++ dev_err(&dev->dev,
++ "BAR %d: failed to claim resource for efifb!\n", idx);
++ return;
++ }
++
++ dev_info(&dev->dev, "BAR %d: assigned to efifb\n", idx);
++}
++
++static void efifb_fixup_resources(struct pci_dev *dev)
++{
++ u64 base = screen_info.lfb_base;
++ u64 size = screen_info.lfb_size;
++ int i;
++
++ if (pci_bar_found || screen_info.orig_video_isVGA != VIDEO_TYPE_EFI)
++ return;
++
++ if (screen_info.capabilities & VIDEO_CAPABILITY_64BIT_BASE)
++ base |= (u64)screen_info.ext_lfb_base << 32;
++
++ if (!base)
++ return;
++
++ for (i = 0; i < PCI_STD_RESOURCE_END; i++) {
++ struct resource *res = &dev->resource[i];
++
++ if (!(res->flags & IORESOURCE_MEM))
++ continue;
++
++ if (res->start <= base && res->end >= base + size - 1) {
++ claim_efifb_bar(dev, i);
++ break;
++ }
++ }
++}
++DECLARE_PCI_FIXUP_CLASS_HEADER(PCI_ANY_ID, PCI_ANY_ID, PCI_BASE_CLASS_DISPLAY,
++ 16, efifb_fixup_resources);
++
++#endif
+diff --git a/drivers/video/fbdev/xen-fbfront.c b/drivers/video/fbdev/xen-fbfront.c
+index d0115a7af0a9..3ee309c50b2d 100644
+--- a/drivers/video/fbdev/xen-fbfront.c
++++ b/drivers/video/fbdev/xen-fbfront.c
+@@ -643,7 +643,6 @@ static void xenfb_backend_changed(struct xenbus_device *dev,
+ break;
+
+ case XenbusStateInitWait:
+-InitWait:
+ xenbus_switch_state(dev, XenbusStateConnected);
+ break;
+
+@@ -654,7 +653,8 @@ static void xenfb_backend_changed(struct xenbus_device *dev,
+ * get Connected twice here.
+ */
+ if (dev->state != XenbusStateConnected)
+- goto InitWait; /* no InitWait seen yet, fudge it */
++ /* no InitWait seen yet, fudge it */
++ xenbus_switch_state(dev, XenbusStateConnected);
+
+ if (xenbus_read_unsigned(info->xbdev->otherend,
+ "request-update", 0))
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 1cd0e2eefc66..3925758f6dde 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -2597,7 +2597,7 @@ cifs_write_from_iter(loff_t offset, size_t len, struct iov_iter *from,
+ wdata->credits = credits;
+
+ if (!wdata->cfile->invalidHandle ||
+- !cifs_reopen_file(wdata->cfile, false))
++ !(rc = cifs_reopen_file(wdata->cfile, false)))
+ rc = server->ops->async_writev(wdata,
+ cifs_uncached_writedata_release);
+ if (rc) {
+@@ -3002,7 +3002,7 @@ cifs_send_async_read(loff_t offset, size_t len, struct cifsFileInfo *open_file,
+ rdata->credits = credits;
+
+ if (!rdata->cfile->invalidHandle ||
+- !cifs_reopen_file(rdata->cfile, true))
++ !(rc = cifs_reopen_file(rdata->cfile, true)))
+ rc = server->ops->async_readv(rdata);
+ error:
+ if (rc) {
+@@ -3577,7 +3577,7 @@ static int cifs_readpages(struct file *file, struct address_space *mapping,
+ }
+
+ if (!rdata->cfile->invalidHandle ||
+- !cifs_reopen_file(rdata->cfile, true))
++ !(rc = cifs_reopen_file(rdata->cfile, true)))
+ rc = server->ops->async_readv(rdata);
+ if (rc) {
+ add_credits_and_wake_if(server, rdata->credits, 0);
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index bdd32925a15e..7080dac3592c 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1987,6 +1987,9 @@ void smb2_reconnect_server(struct work_struct *work)
+ struct cifs_tcon *tcon, *tcon2;
+ struct list_head tmp_list;
+ int tcon_exist = false;
++ int rc;
++ int resched = false;
++
+
+ /* Prevent simultaneous reconnects that can corrupt tcon->rlist list */
+ mutex_lock(&server->reconnect_mutex);
+@@ -2014,13 +2017,18 @@ void smb2_reconnect_server(struct work_struct *work)
+ spin_unlock(&cifs_tcp_ses_lock);
+
+ list_for_each_entry_safe(tcon, tcon2, &tmp_list, rlist) {
+- if (!smb2_reconnect(SMB2_INTERNAL_CMD, tcon))
++ rc = smb2_reconnect(SMB2_INTERNAL_CMD, tcon);
++ if (!rc)
+ cifs_reopen_persistent_handles(tcon);
++ else
++ resched = true;
+ list_del_init(&tcon->rlist);
+ cifs_put_tcon(tcon);
+ }
+
+ cifs_dbg(FYI, "Reconnecting tcons finished\n");
++ if (resched)
++ queue_delayed_work(cifsiod_wq, &server->reconnect, 2 * HZ);
+ mutex_unlock(&server->reconnect_mutex);
+
+ /* now we can safely release srv struct */
+diff --git a/fs/orangefs/devorangefs-req.c b/fs/orangefs/devorangefs-req.c
+index c4ab6fdf17a0..e1534c9bab16 100644
+--- a/fs/orangefs/devorangefs-req.c
++++ b/fs/orangefs/devorangefs-req.c
+@@ -208,14 +208,19 @@ static ssize_t orangefs_devreq_read(struct file *file,
+ continue;
+ /*
+ * Skip ops whose filesystem we don't know about unless
+- * it is being mounted.
++ * it is being mounted or unmounted. It is possible for
++ * a filesystem we don't know about to be unmounted if
++ * it fails to mount in the kernel after userspace has
++ * been sent the mount request.
+ */
+ /* XXX: is there a better way to detect this? */
+ } else if (ret == -1 &&
+ !(op->upcall.type ==
+ ORANGEFS_VFS_OP_FS_MOUNT ||
+ op->upcall.type ==
+- ORANGEFS_VFS_OP_GETATTR)) {
++ ORANGEFS_VFS_OP_GETATTR ||
++ op->upcall.type ==
++ ORANGEFS_VFS_OP_FS_UMOUNT)) {
+ gossip_debug(GOSSIP_DEV_DEBUG,
+ "orangefs: skipping op tag %llu %s\n",
+ llu(op->tag), get_opname_string(op));
+diff --git a/fs/orangefs/orangefs-kernel.h b/fs/orangefs/orangefs-kernel.h
+index 3bf803d732c5..45dd8f27b2ac 100644
+--- a/fs/orangefs/orangefs-kernel.h
++++ b/fs/orangefs/orangefs-kernel.h
+@@ -249,6 +249,7 @@ struct orangefs_sb_info_s {
+ char devname[ORANGEFS_MAX_SERVER_ADDR_LEN];
+ struct super_block *sb;
+ int mount_pending;
++ int no_list;
+ struct list_head list;
+ };
+
+diff --git a/fs/orangefs/super.c b/fs/orangefs/super.c
+index cd261c8de53a..629d8c917fa6 100644
+--- a/fs/orangefs/super.c
++++ b/fs/orangefs/super.c
+@@ -493,7 +493,7 @@ struct dentry *orangefs_mount(struct file_system_type *fst,
+
+ if (ret) {
+ d = ERR_PTR(ret);
+- goto free_op;
++ goto free_sb_and_op;
+ }
+
+ /*
+@@ -519,6 +519,9 @@ struct dentry *orangefs_mount(struct file_system_type *fst,
+ spin_unlock(&orangefs_superblocks_lock);
+ op_release(new_op);
+
++ /* Must be removed from the list now. */
++ ORANGEFS_SB(sb)->no_list = 0;
++
+ if (orangefs_userspace_version >= 20906) {
+ new_op = op_alloc(ORANGEFS_VFS_OP_FEATURES);
+ if (!new_op)
+@@ -533,6 +536,10 @@ struct dentry *orangefs_mount(struct file_system_type *fst,
+
+ return dget(sb->s_root);
+
++free_sb_and_op:
++ /* Will call orangefs_kill_sb with sb not in list. */
++ ORANGEFS_SB(sb)->no_list = 1;
++ deactivate_locked_super(sb);
+ free_op:
+ gossip_err("orangefs_mount: mount request failed with %d\n", ret);
+ if (ret == -EINVAL) {
+@@ -558,12 +565,14 @@ void orangefs_kill_sb(struct super_block *sb)
+ */
+ orangefs_unmount_sb(sb);
+
+- /* remove the sb from our list of orangefs specific sb's */
+-
+- spin_lock(&orangefs_superblocks_lock);
+- __list_del_entry(&ORANGEFS_SB(sb)->list); /* not list_del_init */
+- ORANGEFS_SB(sb)->list.prev = NULL;
+- spin_unlock(&orangefs_superblocks_lock);
++ if (!ORANGEFS_SB(sb)->no_list) {
++ /* remove the sb from our list of orangefs specific sb's */
++ spin_lock(&orangefs_superblocks_lock);
++ /* not list_del_init */
++ __list_del_entry(&ORANGEFS_SB(sb)->list);
++ ORANGEFS_SB(sb)->list.prev = NULL;
++ spin_unlock(&orangefs_superblocks_lock);
++ }
+
+ /*
+ * make sure that ORANGEFS_DEV_REMOUNT_ALL loop that might've seen us
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 8f96a49178d0..129215eca0e8 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -899,7 +899,14 @@ static inline void clear_soft_dirty(struct vm_area_struct *vma,
+ static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma,
+ unsigned long addr, pmd_t *pmdp)
+ {
+- pmd_t pmd = pmdp_huge_get_and_clear(vma->vm_mm, addr, pmdp);
++ pmd_t pmd = *pmdp;
++
++ /* See comment in change_huge_pmd() */
++ pmdp_invalidate(vma, addr, pmdp);
++ if (pmd_dirty(*pmdp))
++ pmd = pmd_mkdirty(pmd);
++ if (pmd_young(*pmdp))
++ pmd = pmd_mkyoung(pmd);
+
+ pmd = pmd_wrprotect(pmd);
+ pmd = pmd_clear_soft_dirty(pmd);
+diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
+index 1d4f365d8f03..f6d9af3efa45 100644
+--- a/include/crypto/internal/hash.h
++++ b/include/crypto/internal/hash.h
+@@ -166,6 +166,16 @@ static inline struct ahash_instance *ahash_alloc_instance(
+ return crypto_alloc_instance2(name, alg, ahash_instance_headroom());
+ }
+
++static inline void ahash_request_complete(struct ahash_request *req, int err)
++{
++ req->base.complete(&req->base, err);
++}
++
++static inline u32 ahash_request_flags(struct ahash_request *req)
++{
++ return req->base.flags;
++}
++
+ static inline struct crypto_ahash *crypto_spawn_ahash(
+ struct crypto_ahash_spawn *spawn)
+ {
+diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
+index c83c23f0577b..307ae63ef262 100644
+--- a/include/linux/cgroup.h
++++ b/include/linux/cgroup.h
+@@ -570,6 +570,25 @@ static inline void pr_cont_cgroup_path(struct cgroup *cgrp)
+ pr_cont_kernfs_path(cgrp->kn);
+ }
+
++static inline void cgroup_init_kthreadd(void)
++{
++ /*
++ * kthreadd is inherited by all kthreads, keep it in the root so
++ * that the new kthreads are guaranteed to stay in the root until
++ * initialization is finished.
++ */
++ current->no_cgroup_migration = 1;
++}
++
++static inline void cgroup_kthread_ready(void)
++{
++ /*
++ * This kthread finished initialization. The creator should have
++ * set PF_NO_SETAFFINITY if this kthread should stay in the root.
++ */
++ current->no_cgroup_migration = 0;
++}
++
+ #else /* !CONFIG_CGROUPS */
+
+ struct cgroup_subsys_state;
+@@ -590,6 +609,8 @@ static inline void cgroup_free(struct task_struct *p) {}
+
+ static inline int cgroup_init_early(void) { return 0; }
+ static inline int cgroup_init(void) { return 0; }
++static inline void cgroup_init_kthreadd(void) {}
++static inline void cgroup_kthread_ready(void) {}
+
+ static inline bool task_under_cgroup_hierarchy(struct task_struct *task,
+ struct cgroup *ancestor)
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index ad3ec9ec61f7..f2bdb2141941 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1620,6 +1620,10 @@ struct task_struct {
+ #ifdef CONFIG_COMPAT_BRK
+ unsigned brk_randomized:1;
+ #endif
++#ifdef CONFIG_CGROUPS
++ /* disallow userland-initiated cgroup migration */
++ unsigned no_cgroup_migration:1;
++#endif
+
+ unsigned long atomic_flags; /* Flags needing atomic access. */
+
+diff --git a/include/linux/uio.h b/include/linux/uio.h
+index 804e34c6f981..f2d36a3d3005 100644
+--- a/include/linux/uio.h
++++ b/include/linux/uio.h
+@@ -39,7 +39,10 @@ struct iov_iter {
+ };
+ union {
+ unsigned long nr_segs;
+- int idx;
++ struct {
++ int idx;
++ int start_idx;
++ };
+ };
+ };
+
+@@ -81,6 +84,7 @@ unsigned long iov_shorten(struct iovec *iov, unsigned long nr_segs, size_t to);
+ size_t iov_iter_copy_from_user_atomic(struct page *page,
+ struct iov_iter *i, unsigned long offset, size_t bytes);
+ void iov_iter_advance(struct iov_iter *i, size_t bytes);
++void iov_iter_revert(struct iov_iter *i, size_t bytes);
+ int iov_iter_fault_in_readable(struct iov_iter *i, size_t bytes);
+ size_t iov_iter_single_seg_count(const struct iov_iter *i);
+ size_t copy_page_to_iter(struct page *page, size_t offset, size_t bytes,
+diff --git a/include/target/target_core_base.h b/include/target/target_core_base.h
+index 775c2319a72b..cd225c455bca 100644
+--- a/include/target/target_core_base.h
++++ b/include/target/target_core_base.h
+@@ -705,6 +705,7 @@ struct se_lun {
+ u64 unpacked_lun;
+ #define SE_LUN_LINK_MAGIC 0xffff7771
+ u32 lun_link_magic;
++ bool lun_shutdown;
+ bool lun_access_ro;
+ u32 lun_index;
+
+diff --git a/kernel/audit.c b/kernel/audit.c
+index ba4481d20fa1..765c27c69165 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -160,7 +160,6 @@ static LIST_HEAD(audit_freelist);
+
+ /* queue msgs to send via kauditd_task */
+ static struct sk_buff_head audit_queue;
+-static void kauditd_hold_skb(struct sk_buff *skb);
+ /* queue msgs due to temporary unicast send problems */
+ static struct sk_buff_head audit_retry_queue;
+ /* queue msgs waiting for new auditd connection */
+@@ -454,30 +453,6 @@ static void auditd_set(int pid, u32 portid, struct net *net)
+ }
+
+ /**
+- * auditd_reset - Disconnect the auditd connection
+- *
+- * Description:
+- * Break the auditd/kauditd connection and move all the queued records into the
+- * hold queue in case auditd reconnects.
+- */
+-static void auditd_reset(void)
+-{
+- struct sk_buff *skb;
+-
+- /* if it isn't already broken, break the connection */
+- rcu_read_lock();
+- if (auditd_conn.pid)
+- auditd_set(0, 0, NULL);
+- rcu_read_unlock();
+-
+- /* flush all of the main and retry queues to the hold queue */
+- while ((skb = skb_dequeue(&audit_retry_queue)))
+- kauditd_hold_skb(skb);
+- while ((skb = skb_dequeue(&audit_queue)))
+- kauditd_hold_skb(skb);
+-}
+-
+-/**
+ * kauditd_print_skb - Print the audit record to the ring buffer
+ * @skb: audit record
+ *
+@@ -505,9 +480,6 @@ static void kauditd_rehold_skb(struct sk_buff *skb)
+ {
+ /* put the record back in the queue at the same place */
+ skb_queue_head(&audit_hold_queue, skb);
+-
+- /* fail the auditd connection */
+- auditd_reset();
+ }
+
+ /**
+@@ -544,9 +516,6 @@ static void kauditd_hold_skb(struct sk_buff *skb)
+ /* we have no other options - drop the message */
+ audit_log_lost("kauditd hold queue overflow");
+ kfree_skb(skb);
+-
+- /* fail the auditd connection */
+- auditd_reset();
+ }
+
+ /**
+@@ -567,6 +536,30 @@ static void kauditd_retry_skb(struct sk_buff *skb)
+ }
+
+ /**
++ * auditd_reset - Disconnect the auditd connection
++ *
++ * Description:
++ * Break the auditd/kauditd connection and move all the queued records into the
++ * hold queue in case auditd reconnects.
++ */
++static void auditd_reset(void)
++{
++ struct sk_buff *skb;
++
++ /* if it isn't already broken, break the connection */
++ rcu_read_lock();
++ if (auditd_conn.pid)
++ auditd_set(0, 0, NULL);
++ rcu_read_unlock();
++
++ /* flush all of the main and retry queues to the hold queue */
++ while ((skb = skb_dequeue(&audit_retry_queue)))
++ kauditd_hold_skb(skb);
++ while ((skb = skb_dequeue(&audit_queue)))
++ kauditd_hold_skb(skb);
++}
++
++/**
+ * auditd_send_unicast_skb - Send a record via unicast to auditd
+ * @skb: audit record
+ *
+@@ -758,6 +751,7 @@ static int kauditd_thread(void *dummy)
+ NULL, kauditd_rehold_skb);
+ if (rc < 0) {
+ sk = NULL;
++ auditd_reset();
+ goto main_queue;
+ }
+
+@@ -767,6 +761,7 @@ static int kauditd_thread(void *dummy)
+ NULL, kauditd_hold_skb);
+ if (rc < 0) {
+ sk = NULL;
++ auditd_reset();
+ goto main_queue;
+ }
+
+@@ -775,16 +770,18 @@ static int kauditd_thread(void *dummy)
+ * unicast, dump failed record sends to the retry queue; if
+ * sk == NULL due to previous failures we will just do the
+ * multicast send and move the record to the retry queue */
+- kauditd_send_queue(sk, portid, &audit_queue, 1,
+- kauditd_send_multicast_skb,
+- kauditd_retry_skb);
++ rc = kauditd_send_queue(sk, portid, &audit_queue, 1,
++ kauditd_send_multicast_skb,
++ kauditd_retry_skb);
++ if (sk == NULL || rc < 0)
++ auditd_reset();
++ sk = NULL;
+
+ /* drop our netns reference, no auditd sends past this line */
+ if (net) {
+ put_net(net);
+ net = NULL;
+ }
+- sk = NULL;
+
+ /* we have processed all the queues so wake everyone */
+ wake_up(&audit_backlog_wait);
+diff --git a/kernel/cgroup.c b/kernel/cgroup.c
+index 53bbca7c4859..36672b678cce 100644
+--- a/kernel/cgroup.c
++++ b/kernel/cgroup.c
+@@ -2920,11 +2920,12 @@ static ssize_t __cgroup_procs_write(struct kernfs_open_file *of, char *buf,
+ tsk = tsk->group_leader;
+
+ /*
+- * Workqueue threads may acquire PF_NO_SETAFFINITY and become
+- * trapped in a cpuset, or RT worker may be born in a cgroup
+- * with no rt_runtime allocated. Just say no.
++ * kthreads may acquire PF_NO_SETAFFINITY during initialization.
++ * If userland migrates such a kthread to a non-root cgroup, it can
++ * become trapped in a cpuset, or RT kthread may be born in a
++ * cgroup with no rt_runtime allocated. Just say no.
+ */
+- if (tsk == kthreadd_task || (tsk->flags & PF_NO_SETAFFINITY)) {
++ if (tsk->no_cgroup_migration || (tsk->flags & PF_NO_SETAFFINITY)) {
+ ret = -EINVAL;
+ goto out_unlock_rcu;
+ }
+diff --git a/kernel/kthread.c b/kernel/kthread.c
+index 2318fba86277..175a438901bf 100644
+--- a/kernel/kthread.c
++++ b/kernel/kthread.c
+@@ -18,6 +18,7 @@
+ #include <linux/freezer.h>
+ #include <linux/ptrace.h>
+ #include <linux/uaccess.h>
++#include <linux/cgroup.h>
+ #include <trace/events/sched.h>
+
+ static DEFINE_SPINLOCK(kthread_create_lock);
+@@ -223,6 +224,7 @@ static int kthread(void *_create)
+
+ ret = -EINTR;
+ if (!test_bit(KTHREAD_SHOULD_STOP, &self->flags)) {
++ cgroup_kthread_ready();
+ __kthread_parkme(self);
+ ret = threadfn(data);
+ }
+@@ -536,6 +538,7 @@ int kthreadd(void *unused)
+ set_mems_allowed(node_states[N_MEMORY]);
+
+ current->flags |= PF_NOFREEZE;
++ cgroup_init_kthreadd();
+
+ for (;;) {
+ set_current_state(TASK_INTERRUPTIBLE);
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index eb230f06ba41..c24bf79bdf9f 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -3740,23 +3740,24 @@ static void __enable_ftrace_function_probe(struct ftrace_ops_hash *old_hash)
+ ftrace_probe_registered = 1;
+ }
+
+-static void __disable_ftrace_function_probe(void)
++static bool __disable_ftrace_function_probe(void)
+ {
+ int i;
+
+ if (!ftrace_probe_registered)
+- return;
++ return false;
+
+ for (i = 0; i < FTRACE_FUNC_HASHSIZE; i++) {
+ struct hlist_head *hhd = &ftrace_func_hash[i];
+ if (hhd->first)
+- return;
++ return false;
+ }
+
+ /* no more funcs left */
+ ftrace_shutdown(&trace_probe_ops, 0);
+
+ ftrace_probe_registered = 0;
++ return true;
+ }
+
+
+@@ -3886,6 +3887,7 @@ static void
+ __unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
+ void *data, int flags)
+ {
++ struct ftrace_ops_hash old_hash_ops;
+ struct ftrace_func_entry *rec_entry;
+ struct ftrace_func_probe *entry;
+ struct ftrace_func_probe *p;
+@@ -3897,6 +3899,7 @@ __unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
+ struct hlist_node *tmp;
+ char str[KSYM_SYMBOL_LEN];
+ int i, ret;
++ bool disabled;
+
+ if (glob && (strcmp(glob, "*") == 0 || !strlen(glob)))
+ func_g.search = NULL;
+@@ -3915,6 +3918,10 @@ __unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
+
+ mutex_lock(&trace_probe_ops.func_hash->regex_lock);
+
++ old_hash_ops.filter_hash = old_hash;
++ /* Probes only have filters */
++ old_hash_ops.notrace_hash = NULL;
++
+ hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, *orig_hash);
+ if (!hash)
+ /* Hmm, should report this somehow */
+@@ -3952,12 +3959,17 @@ __unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
+ }
+ }
+ mutex_lock(&ftrace_lock);
+- __disable_ftrace_function_probe();
++ disabled = __disable_ftrace_function_probe();
+ /*
+ * Remove after the disable is called. Otherwise, if the last
+ * probe is removed, a null hash means *all enabled*.
+ */
+ ret = ftrace_hash_move(&trace_probe_ops, 1, orig_hash, hash);
++
++ /* still need to update the function call sites */
++ if (ftrace_enabled && !disabled)
++ ftrace_run_modify_code(&trace_probe_ops, FTRACE_UPDATE_CALLS,
++ &old_hash_ops);
+ synchronize_sched();
+ if (!ret)
+ free_ftrace_hash_rcu(old_hash);
+@@ -5410,6 +5422,15 @@ static void clear_ftrace_pids(struct trace_array *tr)
+ trace_free_pid_list(pid_list);
+ }
+
++void ftrace_clear_pids(struct trace_array *tr)
++{
++ mutex_lock(&ftrace_lock);
++
++ clear_ftrace_pids(tr);
++
++ mutex_unlock(&ftrace_lock);
++}
++
+ static void ftrace_pid_reset(struct trace_array *tr)
+ {
+ mutex_lock(&ftrace_lock);
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 310f0ea0d1a2..6ee340a43f18 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -7409,6 +7409,7 @@ static int instance_rmdir(const char *name)
+
+ tracing_set_nop(tr);
+ event_trace_del_tracer(tr);
++ ftrace_clear_pids(tr);
+ ftrace_destroy_function_files(tr);
+ tracefs_remove_recursive(tr->dir);
+ free_trace_buffers(tr);
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 1ea51ab53edf..8d5f9bcf2a5b 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -884,6 +884,7 @@ int using_ftrace_ops_list_func(void);
+ void ftrace_init_tracefs(struct trace_array *tr, struct dentry *d_tracer);
+ void ftrace_init_tracefs_toplevel(struct trace_array *tr,
+ struct dentry *d_tracer);
++void ftrace_clear_pids(struct trace_array *tr);
+ #else
+ static inline int ftrace_trace_task(struct trace_array *tr)
+ {
+@@ -902,6 +903,7 @@ ftrace_init_global_array_ops(struct trace_array *tr) { }
+ static inline void ftrace_reset_array_ops(struct trace_array *tr) { }
+ static inline void ftrace_init_tracefs(struct trace_array *tr, struct dentry *d) { }
+ static inline void ftrace_init_tracefs_toplevel(struct trace_array *tr, struct dentry *d) { }
++static inline void ftrace_clear_pids(struct trace_array *tr) { }
+ /* ftace_func_t type is not defined, use macro instead of static inline */
+ #define ftrace_init_array_ops(tr, func) do { } while (0)
+ #endif /* CONFIG_FUNCTION_TRACER */
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index e68604ae3ced..60abc44385b7 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -786,6 +786,68 @@ void iov_iter_advance(struct iov_iter *i, size_t size)
+ }
+ EXPORT_SYMBOL(iov_iter_advance);
+
++void iov_iter_revert(struct iov_iter *i, size_t unroll)
++{
++ if (!unroll)
++ return;
++ i->count += unroll;
++ if (unlikely(i->type & ITER_PIPE)) {
++ struct pipe_inode_info *pipe = i->pipe;
++ int idx = i->idx;
++ size_t off = i->iov_offset;
++ while (1) {
++ size_t n = off - pipe->bufs[idx].offset;
++ if (unroll < n) {
++ off -= (n - unroll);
++ break;
++ }
++ unroll -= n;
++ if (!unroll && idx == i->start_idx) {
++ off = 0;
++ break;
++ }
++ if (!idx--)
++ idx = pipe->buffers - 1;
++ off = pipe->bufs[idx].offset + pipe->bufs[idx].len;
++ }
++ i->iov_offset = off;
++ i->idx = idx;
++ pipe_truncate(i);
++ return;
++ }
++ if (unroll <= i->iov_offset) {
++ i->iov_offset -= unroll;
++ return;
++ }
++ unroll -= i->iov_offset;
++ if (i->type & ITER_BVEC) {
++ const struct bio_vec *bvec = i->bvec;
++ while (1) {
++ size_t n = (--bvec)->bv_len;
++ i->nr_segs++;
++ if (unroll <= n) {
++ i->bvec = bvec;
++ i->iov_offset = n - unroll;
++ return;
++ }
++ unroll -= n;
++ }
++ } else { /* same logics for iovec and kvec */
++ const struct iovec *iov = i->iov;
++ while (1) {
++ size_t n = (--iov)->iov_len;
++ i->nr_segs++;
++ if (unroll <= n) {
++ i->iov = iov;
++ i->iov_offset = n - unroll;
++ return;
++ }
++ unroll -= n;
++ }
++ }
++}
++EXPORT_SYMBOL(iov_iter_revert);
++
+ /*
+ * Return the count of just the current iov_iter segment.
+ */
+@@ -839,6 +901,7 @@ void iov_iter_pipe(struct iov_iter *i, int direction,
+ i->idx = (pipe->curbuf + pipe->nrbufs) & (pipe->buffers - 1);
+ i->iov_offset = 0;
+ i->count = count;
++ i->start_idx = i->idx;
+ }
+ EXPORT_SYMBOL(iov_iter_pipe);
+
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 5f3ad65c85de..c1a081d17178 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1393,8 +1393,7 @@ bool madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ deactivate_page(page);
+
+ if (pmd_young(orig_pmd) || pmd_dirty(orig_pmd)) {
+- orig_pmd = pmdp_huge_get_and_clear_full(tlb->mm, addr, pmd,
+- tlb->fullmm);
++ pmdp_invalidate(vma, addr, pmd);
+ orig_pmd = pmd_mkold(orig_pmd);
+ orig_pmd = pmd_mkclean(orig_pmd);
+
+diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
+index 9cc3c0b2c2c1..f9ed0813d64e 100644
+--- a/mm/zsmalloc.c
++++ b/mm/zsmalloc.c
+@@ -280,7 +280,7 @@ struct zs_pool {
+ struct zspage {
+ struct {
+ unsigned int fullness:FULLNESS_BITS;
+- unsigned int class:CLASS_BITS;
++ unsigned int class:CLASS_BITS + 1;
+ unsigned int isolated:ISOLATED_BITS;
+ unsigned int magic:MAGIC_VAL_BITS;
+ };
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index ea633342ab0d..f4947e737f34 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -398,7 +398,7 @@ int skb_copy_datagram_iter(const struct sk_buff *skb, int offset,
+ struct iov_iter *to, int len)
+ {
+ int start = skb_headlen(skb);
+- int i, copy = start - offset;
++ int i, copy = start - offset, start_off = offset, n;
+ struct sk_buff *frag_iter;
+
+ trace_skb_copy_datagram_iovec(skb, len);
+@@ -407,11 +407,12 @@ int skb_copy_datagram_iter(const struct sk_buff *skb, int offset,
+ if (copy > 0) {
+ if (copy > len)
+ copy = len;
+- if (copy_to_iter(skb->data + offset, copy, to) != copy)
++ n = copy_to_iter(skb->data + offset, copy, to);
++ offset += n;
++ if (n != copy)
+ goto short_copy;
+ if ((len -= copy) == 0)
+ return 0;
+- offset += copy;
+ }
+
+ /* Copy paged appendix. Hmm... why does this look so complicated? */
+@@ -425,13 +426,14 @@ int skb_copy_datagram_iter(const struct sk_buff *skb, int offset,
+ if ((copy = end - offset) > 0) {
+ if (copy > len)
+ copy = len;
+- if (copy_page_to_iter(skb_frag_page(frag),
++ n = copy_page_to_iter(skb_frag_page(frag),
+ frag->page_offset + offset -
+- start, copy, to) != copy)
++ start, copy, to);
++ offset += n;
++ if (n != copy)
+ goto short_copy;
+ if (!(len -= copy))
+ return 0;
+- offset += copy;
+ }
+ start = end;
+ }
+@@ -463,6 +465,7 @@ int skb_copy_datagram_iter(const struct sk_buff *skb, int offset,
+ */
+
+ fault:
++ iov_iter_revert(to, offset - start_off);
+ return -EFAULT;
+
+ short_copy:
+@@ -613,7 +616,7 @@ static int skb_copy_and_csum_datagram(const struct sk_buff *skb, int offset,
+ __wsum *csump)
+ {
+ int start = skb_headlen(skb);
+- int i, copy = start - offset;
++ int i, copy = start - offset, start_off = offset;
+ struct sk_buff *frag_iter;
+ int pos = 0;
+ int n;
+@@ -623,11 +626,11 @@ static int skb_copy_and_csum_datagram(const struct sk_buff *skb, int offset,
+ if (copy > len)
+ copy = len;
+ n = csum_and_copy_to_iter(skb->data + offset, copy, csump, to);
++ offset += n;
+ if (n != copy)
+ goto fault;
+ if ((len -= copy) == 0)
+ return 0;
+- offset += copy;
+ pos = copy;
+ }
+
+@@ -649,12 +652,12 @@ static int skb_copy_and_csum_datagram(const struct sk_buff *skb, int offset,
+ offset - start, copy,
+ &csum2, to);
+ kunmap(page);
++ offset += n;
+ if (n != copy)
+ goto fault;
+ *csump = csum_block_add(*csump, csum2, pos);
+ if (!(len -= copy))
+ return 0;
+- offset += copy;
+ pos += copy;
+ }
+ start = end;
+@@ -687,6 +690,7 @@ static int skb_copy_and_csum_datagram(const struct sk_buff *skb, int offset,
+ return 0;
+
+ fault:
++ iov_iter_revert(to, offset - start_off);
+ return -EFAULT;
+ }
+
+@@ -771,6 +775,7 @@ int skb_copy_and_csum_datagram_msg(struct sk_buff *skb,
+ }
+ return 0;
+ csum_error:
++ iov_iter_revert(&msg->msg_iter, chunk);
+ return -EINVAL;
+ fault:
+ return -EFAULT;
+diff --git a/sound/soc/intel/Kconfig b/sound/soc/intel/Kconfig
+index fd5d1e091038..e18fe9d6f08f 100644
+--- a/sound/soc/intel/Kconfig
++++ b/sound/soc/intel/Kconfig
+@@ -33,11 +33,9 @@ config SND_SOC_INTEL_SST
+ select SND_SOC_INTEL_SST_MATCH if ACPI
+ depends on (X86 || COMPILE_TEST)
+
+-# firmware stuff depends DW_DMAC_CORE; since there is no depends-on from
+-# the reverse selection, each machine driver needs to select
+-# SND_SOC_INTEL_SST_FIRMWARE carefully depending on DW_DMAC_CORE
+ config SND_SOC_INTEL_SST_FIRMWARE
+ tristate
++ select DW_DMAC_CORE
+
+ config SND_SOC_INTEL_SST_ACPI
+ tristate
+@@ -47,16 +45,18 @@ config SND_SOC_INTEL_SST_MATCH
+
+ config SND_SOC_INTEL_HASWELL
+ tristate
++ select SND_SOC_INTEL_SST
+ select SND_SOC_INTEL_SST_FIRMWARE
+
+ config SND_SOC_INTEL_BAYTRAIL
+ tristate
++ select SND_SOC_INTEL_SST
++ select SND_SOC_INTEL_SST_FIRMWARE
+
+ config SND_SOC_INTEL_HASWELL_MACH
+ tristate "ASoC Audio DSP support for Intel Haswell Lynxpoint"
+ depends on X86_INTEL_LPSS && I2C && I2C_DESIGNWARE_PLATFORM
+- depends on DW_DMAC_CORE
+- select SND_SOC_INTEL_SST
++ depends on DMADEVICES
+ select SND_SOC_INTEL_HASWELL
+ select SND_SOC_RT5640
+ help
+@@ -99,9 +99,8 @@ config SND_SOC_INTEL_BXT_RT298_MACH
+ config SND_SOC_INTEL_BYT_RT5640_MACH
+ tristate "ASoC Audio driver for Intel Baytrail with RT5640 codec"
+ depends on X86_INTEL_LPSS && I2C
+- depends on DW_DMAC_CORE && (SND_SST_IPC_ACPI = n)
+- select SND_SOC_INTEL_SST
+- select SND_SOC_INTEL_SST_FIRMWARE
++ depends on DMADEVICES
++ depends on SND_SST_IPC_ACPI = n
+ select SND_SOC_INTEL_BAYTRAIL
+ select SND_SOC_RT5640
+ help
+@@ -112,9 +111,8 @@ config SND_SOC_INTEL_BYT_RT5640_MACH
+ config SND_SOC_INTEL_BYT_MAX98090_MACH
+ tristate "ASoC Audio driver for Intel Baytrail with MAX98090 codec"
+ depends on X86_INTEL_LPSS && I2C
+- depends on DW_DMAC_CORE && (SND_SST_IPC_ACPI = n)
+- select SND_SOC_INTEL_SST
+- select SND_SOC_INTEL_SST_FIRMWARE
++ depends on DMADEVICES
++ depends on SND_SST_IPC_ACPI = n
+ select SND_SOC_INTEL_BAYTRAIL
+ select SND_SOC_MAX98090
+ help
+@@ -123,9 +121,8 @@ config SND_SOC_INTEL_BYT_MAX98090_MACH
+
+ config SND_SOC_INTEL_BDW_RT5677_MACH
+ tristate "ASoC Audio driver for Intel Broadwell with RT5677 codec"
+- depends on X86_INTEL_LPSS && GPIOLIB && I2C && DW_DMAC
+- depends on DW_DMAC_CORE=y
+- select SND_SOC_INTEL_SST
++ depends on X86_INTEL_LPSS && GPIOLIB && I2C
++ depends on DMADEVICES
+ select SND_SOC_INTEL_HASWELL
+ select SND_SOC_RT5677
+ help
+@@ -134,10 +131,8 @@ config SND_SOC_INTEL_BDW_RT5677_MACH
+
+ config SND_SOC_INTEL_BROADWELL_MACH
+ tristate "ASoC Audio DSP support for Intel Broadwell Wildcatpoint"
+- depends on X86_INTEL_LPSS && I2C && DW_DMAC && \
+- I2C_DESIGNWARE_PLATFORM
+- depends on DW_DMAC_CORE
+- select SND_SOC_INTEL_SST
++ depends on X86_INTEL_LPSS && I2C && I2C_DESIGNWARE_PLATFORM
++ depends on DMADEVICES
+ select SND_SOC_INTEL_HASWELL
+ select SND_SOC_RT286
+ help
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index 06cc04e5806a..cea3e7958cde 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -130,6 +130,12 @@ static struct arch architectures[] = {
+ .name = "powerpc",
+ .init = powerpc__annotate_init,
+ },
++ {
++ .name = "s390",
++ .objdump = {
++ .comment_char = '#',
++ },
++ },
+ };
+
+ static void ins__delete(struct ins_operands *ops)
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-04-27 9:42 Alice Ferrazzi
0 siblings, 0 replies; 22+ messages in thread
From: Alice Ferrazzi @ 2017-04-27 9:42 UTC (permalink / raw
To: gentoo-commits
commit: ea4709ae5b6d7054a381c8c5ee5db3980e0e543f
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Apr 27 09:42:05 2017 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Apr 27 09:42:05 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ea4709ae
Linux patch 4.10.13
0000_README | 4 +
1012_linux-4.10.13.patch | 814 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 818 insertions(+)
diff --git a/0000_README b/0000_README
index e55a9e7..0aa6665 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch: 1011_linux-4.10.12.patch
From: http://www.kernel.org
Desc: Linux 4.10.12
+Patch: 1012_linux-4.10.13.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.13
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1012_linux-4.10.13.patch b/1012_linux-4.10.13.patch
new file mode 100644
index 0000000..7c9db8c
--- /dev/null
+++ b/1012_linux-4.10.13.patch
@@ -0,0 +1,814 @@
+diff --git a/Makefile b/Makefile
+index 9689d3f644ea..8285f4de02d1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
+index 6432d4bf08c8..767ef6d68c9e 100644
+--- a/arch/powerpc/kernel/entry_64.S
++++ b/arch/powerpc/kernel/entry_64.S
+@@ -689,7 +689,7 @@ resume_kernel:
+
+ addi r8,r1,INT_FRAME_SIZE /* Get the kprobed function entry */
+
+- lwz r3,GPR1(r1)
++ ld r3,GPR1(r1)
+ subi r3,r3,INT_FRAME_SIZE /* dst: Allocate a trampoline exception frame */
+ mr r4,r1 /* src: current exception frame */
+ mr r1,r3 /* Reroute the trampoline frame to r1 */
+@@ -703,8 +703,8 @@ resume_kernel:
+ addi r6,r6,8
+ bdnz 2b
+
+- /* Do real store operation to complete stwu */
+- lwz r5,GPR1(r1)
++ /* Do real store operation to complete stdu */
++ ld r5,GPR1(r1)
+ std r8,0(r5)
+
+ /* Clear _TIF_EMULATE_STACK_STORE flag */
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index 0362cd5fa187..0cea7026e4ff 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -1029,6 +1029,8 @@ int get_guest_storage_key(struct mm_struct *mm, unsigned long addr,
+ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t entry)
+ {
++ if (pte_present(entry))
++ pte_val(entry) &= ~_PAGE_UNUSED;
+ if (mm_has_pgste(mm))
+ ptep_set_pte_at(mm, addr, ptep, entry);
+ else
+diff --git a/arch/x86/kernel/cpu/mcheck/mce-genpool.c b/arch/x86/kernel/cpu/mcheck/mce-genpool.c
+index 93d824ec3120..040af1939460 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce-genpool.c
++++ b/arch/x86/kernel/cpu/mcheck/mce-genpool.c
+@@ -85,7 +85,7 @@ void mce_gen_pool_process(void)
+ head = llist_reverse_order(head);
+ llist_for_each_entry_safe(node, tmp, head, llnode) {
+ mce = &node->mce;
+- atomic_notifier_call_chain(&x86_mce_decoder_chain, 0, mce);
++ blocking_notifier_call_chain(&x86_mce_decoder_chain, 0, mce);
+ gen_pool_free(mce_evt_pool, (unsigned long)node, sizeof(*node));
+ }
+ }
+diff --git a/arch/x86/kernel/cpu/mcheck/mce-internal.h b/arch/x86/kernel/cpu/mcheck/mce-internal.h
+index cd74a3f00aea..de20902ecf23 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce-internal.h
++++ b/arch/x86/kernel/cpu/mcheck/mce-internal.h
+@@ -13,7 +13,7 @@ enum severity_level {
+ MCE_PANIC_SEVERITY,
+ };
+
+-extern struct atomic_notifier_head x86_mce_decoder_chain;
++extern struct blocking_notifier_head x86_mce_decoder_chain;
+
+ #define ATTR_LEN 16
+ #define INITIAL_CHECK_INTERVAL 5 * 60 /* 5 minutes */
+diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
+index 036fc03aefbd..fcf8b8d6ebfb 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce.c
++++ b/arch/x86/kernel/cpu/mcheck/mce.c
+@@ -123,7 +123,7 @@ static void (*quirk_no_way_out)(int bank, struct mce *m, struct pt_regs *regs);
+ * CPU/chipset specific EDAC code can register a notifier call here to print
+ * MCE errors in a human-readable form.
+ */
+-ATOMIC_NOTIFIER_HEAD(x86_mce_decoder_chain);
++BLOCKING_NOTIFIER_HEAD(x86_mce_decoder_chain);
+
+ /* Do initial initialization of a struct mce */
+ void mce_setup(struct mce *m)
+@@ -223,7 +223,7 @@ void mce_register_decode_chain(struct notifier_block *nb)
+ if (nb != &mce_srao_nb && nb->priority == INT_MAX)
+ nb->priority -= 1;
+
+- atomic_notifier_chain_register(&x86_mce_decoder_chain, nb);
++ blocking_notifier_chain_register(&x86_mce_decoder_chain, nb);
+ }
+ EXPORT_SYMBOL_GPL(mce_register_decode_chain);
+
+@@ -231,7 +231,7 @@ void mce_unregister_decode_chain(struct notifier_block *nb)
+ {
+ atomic_dec(&num_notifiers);
+
+- atomic_notifier_chain_unregister(&x86_mce_decoder_chain, nb);
++ blocking_notifier_chain_unregister(&x86_mce_decoder_chain, nb);
+ }
+ EXPORT_SYMBOL_GPL(mce_unregister_decode_chain);
+
+@@ -324,18 +324,7 @@ static void __print_mce(struct mce *m)
+
+ static void print_mce(struct mce *m)
+ {
+- int ret = 0;
+-
+ __print_mce(m);
+-
+- /*
+- * Print out human-readable details about the MCE error,
+- * (if the CPU has an implementation for that)
+- */
+- ret = atomic_notifier_call_chain(&x86_mce_decoder_chain, 0, m);
+- if (ret == NOTIFY_STOP)
+- return;
+-
+ pr_emerg_ratelimited(HW_ERR "Run the above through 'mcelog --ascii'\n");
+ }
+
+diff --git a/arch/x86/kernel/cpu/mcheck/mce_amd.c b/arch/x86/kernel/cpu/mcheck/mce_amd.c
+index a5fd137417a2..b44a25d77a84 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce_amd.c
++++ b/arch/x86/kernel/cpu/mcheck/mce_amd.c
+@@ -60,7 +60,7 @@ static const char * const th_names[] = {
+ "load_store",
+ "insn_fetch",
+ "combined_unit",
+- "",
++ "decode_unit",
+ "northbridge",
+ "execution_unit",
+ };
+diff --git a/drivers/acpi/power.c b/drivers/acpi/power.c
+index fcd4ce6f78d5..1c2b846c5776 100644
+--- a/drivers/acpi/power.c
++++ b/drivers/acpi/power.c
+@@ -200,6 +200,7 @@ static int acpi_power_get_list_state(struct list_head *list, int *state)
+ return -EINVAL;
+
+ /* The state of the list is 'on' IFF all resources are 'on'. */
++ cur_state = 0;
+ list_for_each_entry(entry, list, node) {
+ struct acpi_power_resource *resource = entry->resource;
+ acpi_handle handle = resource->device.handle;
+diff --git a/drivers/dax/Kconfig b/drivers/dax/Kconfig
+index 3e2ab3b14eea..9e95bf94eb13 100644
+--- a/drivers/dax/Kconfig
++++ b/drivers/dax/Kconfig
+@@ -2,6 +2,7 @@ menuconfig DEV_DAX
+ tristate "DAX: direct access to differentiated memory"
+ default m if NVDIMM_DAX
+ depends on TRANSPARENT_HUGEPAGE
++ select SRCU
+ help
+ Support raw access to differentiated (persistence, bandwidth,
+ latency...) memory via an mmap(2) capable character
+diff --git a/drivers/dax/dax.c b/drivers/dax/dax.c
+index 20ab6bf9d1c7..53a016c3dffa 100644
+--- a/drivers/dax/dax.c
++++ b/drivers/dax/dax.c
+@@ -24,6 +24,7 @@
+ #include "dax.h"
+
+ static dev_t dax_devt;
++DEFINE_STATIC_SRCU(dax_srcu);
+ static struct class *dax_class;
+ static DEFINE_IDA(dax_minor_ida);
+ static int nr_dax = CONFIG_NR_DEV_DAX;
+@@ -59,7 +60,7 @@ struct dax_region {
+ * @region - parent region
+ * @dev - device backing the character device
+ * @cdev - core chardev data
+- * @alive - !alive + rcu grace period == no new mappings can be established
++ * @alive - !alive + srcu grace period == no new mappings can be established
+ * @id - child id in the region
+ * @num_resources - number of physical address extents in this device
+ * @res - array of physical address ranges
+@@ -530,7 +531,7 @@ static int __dax_dev_pmd_fault(struct dax_dev *dax_dev,
+ static int dax_dev_pmd_fault(struct vm_area_struct *vma, unsigned long addr,
+ pmd_t *pmd, unsigned int flags)
+ {
+- int rc;
++ int rc, id;
+ struct file *filp = vma->vm_file;
+ struct dax_dev *dax_dev = filp->private_data;
+
+@@ -538,9 +539,9 @@ static int dax_dev_pmd_fault(struct vm_area_struct *vma, unsigned long addr,
+ current->comm, (flags & FAULT_FLAG_WRITE)
+ ? "write" : "read", vma->vm_start, vma->vm_end);
+
+- rcu_read_lock();
++ id = srcu_read_lock(&dax_srcu);
+ rc = __dax_dev_pmd_fault(dax_dev, vma, addr, pmd, flags);
+- rcu_read_unlock();
++ srcu_read_unlock(&dax_srcu, id);
+
+ return rc;
+ }
+@@ -656,11 +657,11 @@ static void unregister_dax_dev(void *dev)
+ * Note, rcu is not protecting the liveness of dax_dev, rcu is
+ * ensuring that any fault handlers that might have seen
+ * dax_dev->alive == true, have completed. Any fault handlers
+- * that start after synchronize_rcu() has started will abort
++ * that start after synchronize_srcu() has started will abort
+ * upon seeing dax_dev->alive == false.
+ */
+ dax_dev->alive = false;
+- synchronize_rcu();
++ synchronize_srcu(&dax_srcu);
+ unmap_mapping_range(dax_dev->inode->i_mapping, 0, 0, 1);
+ cdev_del(cdev);
+ device_unregister(dev);
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 6ef4f2fcfe43..0611f082f392 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1798,7 +1798,7 @@ static void wacom_wac_pen_event(struct hid_device *hdev, struct hid_field *field
+ return;
+ case HID_DG_TOOLSERIALNUMBER:
+ wacom_wac->serial[0] = (wacom_wac->serial[0] & ~0xFFFFFFFFULL);
+- wacom_wac->serial[0] |= value;
++ wacom_wac->serial[0] |= (__u32)value;
+ return;
+ case WACOM_HID_WD_SENSE:
+ wacom_wac->hid_data.sense_state = value;
+diff --git a/drivers/input/mouse/elantech.c b/drivers/input/mouse/elantech.c
+index db7d1d666ac1..7826994c45bf 100644
+--- a/drivers/input/mouse/elantech.c
++++ b/drivers/input/mouse/elantech.c
+@@ -1118,6 +1118,7 @@ static int elantech_get_resolution_v4(struct psmouse *psmouse,
+ * Asus UX32VD 0x361f02 00, 15, 0e clickpad
+ * Avatar AVIU-145A2 0x361f00 ? clickpad
+ * Fujitsu LIFEBOOK E544 0x470f00 d0, 12, 09 2 hw buttons
++ * Fujitsu LIFEBOOK E547 0x470f00 50, 12, 09 2 hw buttons
+ * Fujitsu LIFEBOOK E554 0x570f01 40, 14, 0c 2 hw buttons
+ * Fujitsu T725 0x470f01 05, 12, 09 2 hw buttons
+ * Fujitsu H730 0x570f00 c0, 14, 0c 3 hw buttons (**)
+@@ -1524,6 +1525,13 @@ static const struct dmi_system_id elantech_dmi_force_crc_enabled[] = {
+ },
+ },
+ {
++ /* Fujitsu LIFEBOOK E547 does not work with crc_enabled == 0 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK E547"),
++ },
++ },
++ {
+ /* Fujitsu LIFEBOOK E554 does not work with crc_enabled == 0 */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 73db08558e4d..0a634d23b2ef 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -22,6 +22,7 @@
+ #include <linux/ioport.h>
+ #include <linux/module.h>
+ #include <linux/platform_device.h>
++#include <linux/pm_runtime.h>
+ #include <linux/seq_file.h>
+ #include <linux/slab.h>
+ #include <linux/stat.h>
+@@ -1179,11 +1180,13 @@ static void dw_mci_setup_bus(struct dw_mci_slot *slot, bool force_clkinit)
+ if ((clock != slot->__clk_old &&
+ !test_bit(DW_MMC_CARD_NEEDS_POLL, &slot->flags)) ||
+ force_clkinit) {
+- dev_info(&slot->mmc->class_dev,
+- "Bus speed (slot %d) = %dHz (slot req %dHz, actual %dHZ div = %d)\n",
+- slot->id, host->bus_hz, clock,
+- div ? ((host->bus_hz / div) >> 1) :
+- host->bus_hz, div);
++ /* Silent the verbose log if calling from PM context */
++ if (!force_clkinit)
++ dev_info(&slot->mmc->class_dev,
++ "Bus speed (slot %d) = %dHz (slot req %dHz, actual %dHZ div = %d)\n",
++ slot->id, host->bus_hz, clock,
++ div ? ((host->bus_hz / div) >> 1) :
++ host->bus_hz, div);
+
+ /*
+ * If card is polling, display the message only
+@@ -1616,10 +1619,16 @@ static void dw_mci_init_card(struct mmc_host *mmc, struct mmc_card *card)
+
+ if (card->type == MMC_TYPE_SDIO ||
+ card->type == MMC_TYPE_SD_COMBO) {
+- set_bit(DW_MMC_CARD_NO_LOW_PWR, &slot->flags);
++ if (!test_bit(DW_MMC_CARD_NO_LOW_PWR, &slot->flags)) {
++ pm_runtime_get_noresume(mmc->parent);
++ set_bit(DW_MMC_CARD_NO_LOW_PWR, &slot->flags);
++ }
+ clk_en_a = clk_en_a_old & ~clken_low_pwr;
+ } else {
+- clear_bit(DW_MMC_CARD_NO_LOW_PWR, &slot->flags);
++ if (test_bit(DW_MMC_CARD_NO_LOW_PWR, &slot->flags)) {
++ pm_runtime_put_noidle(mmc->parent);
++ clear_bit(DW_MMC_CARD_NO_LOW_PWR, &slot->flags);
++ }
+ clk_en_a = clk_en_a_old | clken_low_pwr;
+ }
+
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 7123ef96ed18..445fc47dc3e7 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -830,6 +830,7 @@ static int esdhc_change_pinstate(struct sdhci_host *host,
+
+ switch (uhs) {
+ case MMC_TIMING_UHS_SDR50:
++ case MMC_TIMING_UHS_DDR50:
+ pinctrl = imx_data->pins_100mhz;
+ break;
+ case MMC_TIMING_UHS_SDR104:
+diff --git a/drivers/mtd/ubi/upd.c b/drivers/mtd/ubi/upd.c
+index 0134ba32a057..39712560b4c1 100644
+--- a/drivers/mtd/ubi/upd.c
++++ b/drivers/mtd/ubi/upd.c
+@@ -148,11 +148,11 @@ int ubi_start_update(struct ubi_device *ubi, struct ubi_volume *vol,
+ return err;
+ }
+
+- if (bytes == 0) {
+- err = ubi_wl_flush(ubi, UBI_ALL, UBI_ALL);
+- if (err)
+- return err;
++ err = ubi_wl_flush(ubi, UBI_ALL, UBI_ALL);
++ if (err)
++ return err;
+
++ if (bytes == 0) {
+ err = clear_update_marker(ubi, vol, 0);
+ if (err)
+ return err;
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 7ea8a3393936..54a7d078a3a8 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -933,7 +933,6 @@ struct cifs_tcon {
+ bool use_persistent:1; /* use persistent instead of durable handles */
+ #ifdef CONFIG_CIFS_SMB2
+ bool print:1; /* set if connection to printer share */
+- bool bad_network_name:1; /* set if ret status STATUS_BAD_NETWORK_NAME */
+ __le32 capabilities;
+ __u32 share_flags;
+ __u32 maximal_access;
+diff --git a/fs/cifs/smb1ops.c b/fs/cifs/smb1ops.c
+index fc537c29044e..87b87e091e8e 100644
+--- a/fs/cifs/smb1ops.c
++++ b/fs/cifs/smb1ops.c
+@@ -1015,6 +1015,15 @@ cifs_dir_needs_close(struct cifsFileInfo *cfile)
+ return !cfile->srch_inf.endOfSearch && !cfile->invalidHandle;
+ }
+
++static bool
++cifs_can_echo(struct TCP_Server_Info *server)
++{
++ if (server->tcpStatus == CifsGood)
++ return true;
++
++ return false;
++}
++
+ struct smb_version_operations smb1_operations = {
+ .send_cancel = send_nt_cancel,
+ .compare_fids = cifs_compare_fids,
+@@ -1049,6 +1058,7 @@ struct smb_version_operations smb1_operations = {
+ .get_dfs_refer = CIFSGetDFSRefer,
+ .qfs_tcon = cifs_qfs_tcon,
+ .is_path_accessible = cifs_is_path_accessible,
++ .can_echo = cifs_can_echo,
+ .query_path_info = cifs_query_path_info,
+ .query_file_info = cifs_query_file_info,
+ .get_srv_inum = cifs_get_srv_inum,
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 7080dac3592c..802185386851 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -1084,9 +1084,6 @@ SMB2_tcon(const unsigned int xid, struct cifs_ses *ses, const char *tree,
+ else
+ return -EIO;
+
+- if (tcon && tcon->bad_network_name)
+- return -ENOENT;
+-
+ if ((tcon && tcon->seal) &&
+ ((ses->server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION) == 0)) {
+ cifs_dbg(VFS, "encryption requested but no server support");
+@@ -1188,8 +1185,6 @@ SMB2_tcon(const unsigned int xid, struct cifs_ses *ses, const char *tree,
+ tcon_error_exit:
+ if (rsp->hdr.Status == STATUS_BAD_NETWORK_NAME) {
+ cifs_dbg(VFS, "BAD_NETWORK_NAME: %s\n", tree);
+- if (tcon)
+- tcon->bad_network_name = true;
+ }
+ goto tcon_exit;
+ }
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 528369f3e472..beaddaf52fba 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -748,6 +748,11 @@ static int ubifs_link(struct dentry *old_dentry, struct inode *dir,
+ goto out_fname;
+
+ lock_2_inodes(dir, inode);
++
++ /* Handle O_TMPFILE corner case, it is allowed to link a O_TMPFILE. */
++ if (inode->i_nlink == 0)
++ ubifs_delete_orphan(c, inode->i_ino);
++
+ inc_nlink(inode);
+ ihold(inode);
+ inode->i_ctime = ubifs_current_time(inode);
+@@ -768,6 +773,8 @@ static int ubifs_link(struct dentry *old_dentry, struct inode *dir,
+ dir->i_size -= sz_change;
+ dir_ui->ui_size = dir->i_size;
+ drop_nlink(inode);
++ if (inode->i_nlink == 0)
++ ubifs_add_orphan(c, inode->i_ino);
+ unlock_2_inodes(dir, inode);
+ ubifs_release_budget(c, &req);
+ iput(inode);
+@@ -1316,9 +1323,6 @@ static int do_rename(struct inode *old_dir, struct dentry *old_dentry,
+ unsigned int uninitialized_var(saved_nlink);
+ struct fscrypt_name old_nm, new_nm;
+
+- if (flags & ~RENAME_NOREPLACE)
+- return -EINVAL;
+-
+ /*
+ * Budget request settings: deletion direntry, new direntry, removing
+ * the old inode, and changing old and new parent directory inodes.
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 8df48ccb8af6..79172c35c2b2 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -3404,11 +3404,23 @@ EXPORT_SYMBOL_GPL(ring_buffer_iter_reset);
+ int ring_buffer_iter_empty(struct ring_buffer_iter *iter)
+ {
+ struct ring_buffer_per_cpu *cpu_buffer;
++ struct buffer_page *reader;
++ struct buffer_page *head_page;
++ struct buffer_page *commit_page;
++ unsigned commit;
+
+ cpu_buffer = iter->cpu_buffer;
+
+- return iter->head_page == cpu_buffer->commit_page &&
+- iter->head == rb_commit_index(cpu_buffer);
++ /* Remember, trace recording is off when iterator is in use */
++ reader = cpu_buffer->reader_page;
++ head_page = cpu_buffer->head_page;
++ commit_page = cpu_buffer->commit_page;
++ commit = rb_page_commit(commit_page);
++
++ return ((iter->head_page == commit_page && iter->head == commit) ||
++ (iter->head_page == reader && commit_page == head_page &&
++ head_page->read == commit &&
++ iter->head == rb_page_commit(cpu_buffer->reader_page)));
+ }
+ EXPORT_SYMBOL_GPL(ring_buffer_iter_empty);
+
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 6ee340a43f18..f76ff14be517 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -6740,11 +6740,13 @@ ftrace_trace_snapshot_callback(struct ftrace_hash *hash,
+ return ret;
+
+ out_reg:
+- ret = register_ftrace_function_probe(glob, ops, count);
++ ret = alloc_snapshot(&global_trace);
++ if (ret < 0)
++ goto out;
+
+- if (ret >= 0)
+- alloc_snapshot(&global_trace);
++ ret = register_ftrace_function_probe(glob, ops, count);
+
++ out:
+ return ret < 0 ? ret : 0;
+ }
+
+diff --git a/mm/migrate.c b/mm/migrate.c
+index 87f4d0f81819..c509a92639f6 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -183,9 +183,9 @@ void putback_movable_pages(struct list_head *l)
+ unlock_page(page);
+ put_page(page);
+ } else {
+- putback_lru_page(page);
+ dec_node_page_state(page, NR_ISOLATED_ANON +
+ page_is_file_cache(page));
++ putback_lru_page(page);
+ }
+ }
+ }
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 1109e60e9121..03476694a7c8 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -208,6 +208,51 @@ ieee80211_rx_radiotap_hdrlen(struct ieee80211_local *local,
+ return len;
+ }
+
++static void ieee80211_handle_mu_mimo_mon(struct ieee80211_sub_if_data *sdata,
++ struct sk_buff *skb,
++ int rtap_vendor_space)
++{
++ struct {
++ struct ieee80211_hdr_3addr hdr;
++ u8 category;
++ u8 action_code;
++ } __packed action;
++
++ if (!sdata)
++ return;
++
++ BUILD_BUG_ON(sizeof(action) != IEEE80211_MIN_ACTION_SIZE + 1);
++
++ if (skb->len < rtap_vendor_space + sizeof(action) +
++ VHT_MUMIMO_GROUPS_DATA_LEN)
++ return;
++
++ if (!is_valid_ether_addr(sdata->u.mntr.mu_follow_addr))
++ return;
++
++ skb_copy_bits(skb, rtap_vendor_space, &action, sizeof(action));
++
++ if (!ieee80211_is_action(action.hdr.frame_control))
++ return;
++
++ if (action.category != WLAN_CATEGORY_VHT)
++ return;
++
++ if (action.action_code != WLAN_VHT_ACTION_GROUPID_MGMT)
++ return;
++
++ if (!ether_addr_equal(action.hdr.addr1, sdata->u.mntr.mu_follow_addr))
++ return;
++
++ skb = skb_copy(skb, GFP_ATOMIC);
++ if (!skb)
++ return;
++
++ skb->pkt_type = IEEE80211_SDATA_QUEUE_TYPE_FRAME;
++ skb_queue_tail(&sdata->skb_queue, skb);
++ ieee80211_queue_work(&sdata->local->hw, &sdata->work);
++}
++
+ /*
+ * ieee80211_add_rx_radiotap_header - add radiotap header
+ *
+@@ -515,7 +560,6 @@ ieee80211_rx_monitor(struct ieee80211_local *local, struct sk_buff *origskb,
+ struct net_device *prev_dev = NULL;
+ int present_fcs_len = 0;
+ unsigned int rtap_vendor_space = 0;
+- struct ieee80211_mgmt *mgmt;
+ struct ieee80211_sub_if_data *monitor_sdata =
+ rcu_dereference(local->monitor_sdata);
+
+@@ -553,6 +597,8 @@ ieee80211_rx_monitor(struct ieee80211_local *local, struct sk_buff *origskb,
+ return remove_monitor_info(local, origskb, rtap_vendor_space);
+ }
+
++ ieee80211_handle_mu_mimo_mon(monitor_sdata, origskb, rtap_vendor_space);
++
+ /* room for the radiotap header based on driver features */
+ rt_hdrlen = ieee80211_rx_radiotap_hdrlen(local, status, origskb);
+ needed_headroom = rt_hdrlen - rtap_vendor_space;
+@@ -618,23 +664,6 @@ ieee80211_rx_monitor(struct ieee80211_local *local, struct sk_buff *origskb,
+ ieee80211_rx_stats(sdata->dev, skb->len);
+ }
+
+- mgmt = (void *)skb->data;
+- if (monitor_sdata &&
+- skb->len >= IEEE80211_MIN_ACTION_SIZE + 1 + VHT_MUMIMO_GROUPS_DATA_LEN &&
+- ieee80211_is_action(mgmt->frame_control) &&
+- mgmt->u.action.category == WLAN_CATEGORY_VHT &&
+- mgmt->u.action.u.vht_group_notif.action_code == WLAN_VHT_ACTION_GROUPID_MGMT &&
+- is_valid_ether_addr(monitor_sdata->u.mntr.mu_follow_addr) &&
+- ether_addr_equal(mgmt->da, monitor_sdata->u.mntr.mu_follow_addr)) {
+- struct sk_buff *mu_skb = skb_copy(skb, GFP_ATOMIC);
+-
+- if (mu_skb) {
+- mu_skb->pkt_type = IEEE80211_SDATA_QUEUE_TYPE_FRAME;
+- skb_queue_tail(&monitor_sdata->skb_queue, mu_skb);
+- ieee80211_queue_work(&local->hw, &monitor_sdata->work);
+- }
+- }
+-
+ if (prev_dev) {
+ skb->dev = prev_dev;
+ netif_receive_skb(skb);
+@@ -3614,6 +3643,27 @@ static bool ieee80211_accept_frame(struct ieee80211_rx_data *rx)
+ !ether_addr_equal(bssid, hdr->addr1))
+ return false;
+ }
++
++ /*
++ * 802.11-2016 Table 9-26 says that for data frames, A1 must be
++ * the BSSID - we've checked that already but may have accepted
++ * the wildcard (ff:ff:ff:ff:ff:ff).
++ *
++ * It also says:
++ * The BSSID of the Data frame is determined as follows:
++ * a) If the STA is contained within an AP or is associated
++ * with an AP, the BSSID is the address currently in use
++ * by the STA contained in the AP.
++ *
++ * So we should not accept data frames with an address that's
++ * multicast.
++ *
++ * Accepting it also opens a security problem because stations
++ * could encrypt it with the GTK and inject traffic that way.
++ */
++ if (ieee80211_is_data(hdr->frame_control) && multicast)
++ return false;
++
+ return true;
+ case NL80211_IFTYPE_WDS:
+ if (bssid || !ieee80211_is_data(hdr->frame_control))
+diff --git a/security/keys/gc.c b/security/keys/gc.c
+index addf060399e0..9cb4fe4478a1 100644
+--- a/security/keys/gc.c
++++ b/security/keys/gc.c
+@@ -46,7 +46,7 @@ static unsigned long key_gc_flags;
+ * immediately unlinked.
+ */
+ struct key_type key_type_dead = {
+- .name = "dead",
++ .name = ".dead",
+ };
+
+ /*
+diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c
+index 04a764f71ec8..3c7f6897fd5b 100644
+--- a/security/keys/keyctl.c
++++ b/security/keys/keyctl.c
+@@ -271,7 +271,8 @@ long keyctl_get_keyring_ID(key_serial_t id, int create)
+ * Create and join an anonymous session keyring or join a named session
+ * keyring, creating it if necessary. A named session keyring must have Search
+ * permission for it to be joined. Session keyrings without this permit will
+- * be skipped over.
++ * be skipped over. It is not permitted for userspace to create or join
++ * keyrings whose name begin with a dot.
+ *
+ * If successful, the ID of the joined session keyring will be returned.
+ */
+@@ -288,12 +289,16 @@ long keyctl_join_session_keyring(const char __user *_name)
+ ret = PTR_ERR(name);
+ goto error;
+ }
++
++ ret = -EPERM;
++ if (name[0] == '.')
++ goto error_name;
+ }
+
+ /* join the session */
+ ret = join_session_keyring(name);
++error_name:
+ kfree(name);
+-
+ error:
+ return ret;
+ }
+@@ -1251,8 +1256,8 @@ long keyctl_reject_key(key_serial_t id, unsigned timeout, unsigned error,
+ * Read or set the default keyring in which request_key() will cache keys and
+ * return the old setting.
+ *
+- * If a process keyring is specified then this will be created if it doesn't
+- * yet exist. The old setting will be returned if successful.
++ * If a thread or process keyring is specified then it will be created if it
++ * doesn't yet exist. The old setting will be returned if successful.
+ */
+ long keyctl_set_reqkey_keyring(int reqkey_defl)
+ {
+@@ -1277,11 +1282,8 @@ long keyctl_set_reqkey_keyring(int reqkey_defl)
+
+ case KEY_REQKEY_DEFL_PROCESS_KEYRING:
+ ret = install_process_keyring_to_cred(new);
+- if (ret < 0) {
+- if (ret != -EEXIST)
+- goto error;
+- ret = 0;
+- }
++ if (ret < 0)
++ goto error;
+ goto set;
+
+ case KEY_REQKEY_DEFL_DEFAULT:
+diff --git a/security/keys/process_keys.c b/security/keys/process_keys.c
+index 918cddcd4516..855b94df1126 100644
+--- a/security/keys/process_keys.c
++++ b/security/keys/process_keys.c
+@@ -127,13 +127,18 @@ int install_user_keyrings(void)
+ }
+
+ /*
+- * Install a fresh thread keyring directly to new credentials. This keyring is
+- * allowed to overrun the quota.
++ * Install a thread keyring to the given credentials struct if it didn't have
++ * one already. This is allowed to overrun the quota.
++ *
++ * Return: 0 if a thread keyring is now present; -errno on failure.
+ */
+ int install_thread_keyring_to_cred(struct cred *new)
+ {
+ struct key *keyring;
+
++ if (new->thread_keyring)
++ return 0;
++
+ keyring = keyring_alloc("_tid", new->uid, new->gid, new,
+ KEY_POS_ALL | KEY_USR_VIEW,
+ KEY_ALLOC_QUOTA_OVERRUN,
+@@ -146,7 +151,9 @@ int install_thread_keyring_to_cred(struct cred *new)
+ }
+
+ /*
+- * Install a fresh thread keyring, discarding the old one.
++ * Install a thread keyring to the current task if it didn't have one already.
++ *
++ * Return: 0 if a thread keyring is now present; -errno on failure.
+ */
+ static int install_thread_keyring(void)
+ {
+@@ -157,8 +164,6 @@ static int install_thread_keyring(void)
+ if (!new)
+ return -ENOMEM;
+
+- BUG_ON(new->thread_keyring);
+-
+ ret = install_thread_keyring_to_cred(new);
+ if (ret < 0) {
+ abort_creds(new);
+@@ -169,17 +174,17 @@ static int install_thread_keyring(void)
+ }
+
+ /*
+- * Install a process keyring directly to a credentials struct.
++ * Install a process keyring to the given credentials struct if it didn't have
++ * one already. This is allowed to overrun the quota.
+ *
+- * Returns -EEXIST if there was already a process keyring, 0 if one installed,
+- * and other value on any other error
++ * Return: 0 if a process keyring is now present; -errno on failure.
+ */
+ int install_process_keyring_to_cred(struct cred *new)
+ {
+ struct key *keyring;
+
+ if (new->process_keyring)
+- return -EEXIST;
++ return 0;
+
+ keyring = keyring_alloc("_pid", new->uid, new->gid, new,
+ KEY_POS_ALL | KEY_USR_VIEW,
+@@ -193,11 +198,9 @@ int install_process_keyring_to_cred(struct cred *new)
+ }
+
+ /*
+- * Make sure a process keyring is installed for the current process. The
+- * existing process keyring is not replaced.
++ * Install a process keyring to the current task if it didn't have one already.
+ *
+- * Returns 0 if there is a process keyring by the end of this function, some
+- * error otherwise.
++ * Return: 0 if a process keyring is now present; -errno on failure.
+ */
+ static int install_process_keyring(void)
+ {
+@@ -211,14 +214,18 @@ static int install_process_keyring(void)
+ ret = install_process_keyring_to_cred(new);
+ if (ret < 0) {
+ abort_creds(new);
+- return ret != -EEXIST ? ret : 0;
++ return ret;
+ }
+
+ return commit_creds(new);
+ }
+
+ /*
+- * Install a session keyring directly to a credentials struct.
++ * Install the given keyring as the session keyring of the given credentials
++ * struct, replacing the existing one if any. If the given keyring is NULL,
++ * then install a new anonymous session keyring.
++ *
++ * Return: 0 on success; -errno on failure.
+ */
+ int install_session_keyring_to_cred(struct cred *cred, struct key *keyring)
+ {
+@@ -253,8 +260,11 @@ int install_session_keyring_to_cred(struct cred *cred, struct key *keyring)
+ }
+
+ /*
+- * Install a session keyring, discarding the old one. If a keyring is not
+- * supplied, an empty one is invented.
++ * Install the given keyring as the session keyring of the current task,
++ * replacing the existing one if any. If the given keyring is NULL, then
++ * install a new anonymous session keyring.
++ *
++ * Return: 0 on success; -errno on failure.
+ */
+ static int install_session_keyring(struct key *keyring)
+ {
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-05-03 17:46 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-05-03 17:46 UTC (permalink / raw
To: gentoo-commits
commit: 2924718e5b11fe3a7209b755097cba3a3f955839
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 3 17:46:16 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 3 17:46:16 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2924718e
Linux patch 4.10.14
0000_README | 4 +
1013_linux-4.10.14.patch | 2251 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2255 insertions(+)
diff --git a/0000_README b/0000_README
index 0aa6665..5295a7d 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch: 1012_linux-4.10.13.patch
From: http://www.kernel.org
Desc: Linux 4.10.13
+Patch: 1013_linux-4.10.14.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.14
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1013_linux-4.10.14.patch b/1013_linux-4.10.14.patch
new file mode 100644
index 0000000..ae4d094
--- /dev/null
+++ b/1013_linux-4.10.14.patch
@@ -0,0 +1,2251 @@
+diff --git a/Makefile b/Makefile
+index 8285f4de02d1..48756653c42c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
+index b65930a49589..54b54da6384c 100644
+--- a/arch/arc/include/asm/atomic.h
++++ b/arch/arc/include/asm/atomic.h
+@@ -17,10 +17,11 @@
+ #include <asm/barrier.h>
+ #include <asm/smp.h>
+
++#define ATOMIC_INIT(i) { (i) }
++
+ #ifndef CONFIG_ARC_PLAT_EZNPS
+
+ #define atomic_read(v) READ_ONCE((v)->counter)
+-#define ATOMIC_INIT(i) { (i) }
+
+ #ifdef CONFIG_ARC_HAS_LLSC
+
+diff --git a/arch/mips/kernel/cevt-r4k.c b/arch/mips/kernel/cevt-r4k.c
+index 804d2a2a19fe..dd6a18bc10ab 100644
+--- a/arch/mips/kernel/cevt-r4k.c
++++ b/arch/mips/kernel/cevt-r4k.c
+@@ -80,7 +80,7 @@ static unsigned int calculate_min_delta(void)
+ }
+
+ /* Sorted insert of 75th percentile into buf2 */
+- for (k = 0; k < i; ++k) {
++ for (k = 0; k < i && k < ARRAY_SIZE(buf2); ++k) {
+ if (buf1[ARRAY_SIZE(buf1) - 1] < buf2[k]) {
+ l = min_t(unsigned int,
+ i, ARRAY_SIZE(buf2) - 1);
+diff --git a/arch/mips/kernel/elf.c b/arch/mips/kernel/elf.c
+index 6430bff21fff..5c429d70e17f 100644
+--- a/arch/mips/kernel/elf.c
++++ b/arch/mips/kernel/elf.c
+@@ -257,7 +257,7 @@ int arch_check_elf(void *_ehdr, bool has_interpreter, void *_interp_ehdr,
+ else if ((prog_req.fr1 && prog_req.frdefault) ||
+ (prog_req.single && !prog_req.frdefault))
+ /* Make sure 64-bit MIPS III/IV/64R1 will not pick FR1 */
+- state->overall_fp_mode = ((current_cpu_data.fpu_id & MIPS_FPIR_F64) &&
++ state->overall_fp_mode = ((raw_current_cpu_data.fpu_id & MIPS_FPIR_F64) &&
+ cpu_has_mips_r2_r6) ?
+ FP_FR1 : FP_FR0;
+ else if (prog_req.fr1)
+diff --git a/arch/mips/kernel/kgdb.c b/arch/mips/kernel/kgdb.c
+index 1f4bd222ba76..eb6c0d582626 100644
+--- a/arch/mips/kernel/kgdb.c
++++ b/arch/mips/kernel/kgdb.c
+@@ -244,9 +244,6 @@ static int compute_signal(int tt)
+ void sleeping_thread_to_gdb_regs(unsigned long *gdb_regs, struct task_struct *p)
+ {
+ int reg;
+- struct thread_info *ti = task_thread_info(p);
+- unsigned long ksp = (unsigned long)ti + THREAD_SIZE - 32;
+- struct pt_regs *regs = (struct pt_regs *)ksp - 1;
+ #if (KGDB_GDB_REG_SIZE == 32)
+ u32 *ptr = (u32 *)gdb_regs;
+ #else
+@@ -254,25 +251,46 @@ void sleeping_thread_to_gdb_regs(unsigned long *gdb_regs, struct task_struct *p)
+ #endif
+
+ for (reg = 0; reg < 16; reg++)
+- *(ptr++) = regs->regs[reg];
++ *(ptr++) = 0;
+
+ /* S0 - S7 */
+- for (reg = 16; reg < 24; reg++)
+- *(ptr++) = regs->regs[reg];
++ *(ptr++) = p->thread.reg16;
++ *(ptr++) = p->thread.reg17;
++ *(ptr++) = p->thread.reg18;
++ *(ptr++) = p->thread.reg19;
++ *(ptr++) = p->thread.reg20;
++ *(ptr++) = p->thread.reg21;
++ *(ptr++) = p->thread.reg22;
++ *(ptr++) = p->thread.reg23;
+
+ for (reg = 24; reg < 28; reg++)
+ *(ptr++) = 0;
+
+ /* GP, SP, FP, RA */
+- for (reg = 28; reg < 32; reg++)
+- *(ptr++) = regs->regs[reg];
+-
+- *(ptr++) = regs->cp0_status;
+- *(ptr++) = regs->lo;
+- *(ptr++) = regs->hi;
+- *(ptr++) = regs->cp0_badvaddr;
+- *(ptr++) = regs->cp0_cause;
+- *(ptr++) = regs->cp0_epc;
++ *(ptr++) = (long)p;
++ *(ptr++) = p->thread.reg29;
++ *(ptr++) = p->thread.reg30;
++ *(ptr++) = p->thread.reg31;
++
++ *(ptr++) = p->thread.cp0_status;
++
++ /* lo, hi */
++ *(ptr++) = 0;
++ *(ptr++) = 0;
++
++ /*
++ * BadVAddr, Cause
++ * Ideally these would come from the last exception frame up the stack
++ * but that requires unwinding, otherwise we can't know much for sure.
++ */
++ *(ptr++) = 0;
++ *(ptr++) = 0;
++
++ /*
++ * PC
++ * use return address (RA), i.e. the moment after return from resume()
++ */
++ *(ptr++) = p->thread.reg31;
+ }
+
+ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long pc)
+diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
+index 314b66851348..f0266cef56e4 100644
+--- a/arch/sparc/include/asm/pgtable_64.h
++++ b/arch/sparc/include/asm/pgtable_64.h
+@@ -673,26 +673,27 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
+ return pte_pfn(pte);
+ }
+
+-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+-static inline unsigned long pmd_dirty(pmd_t pmd)
++#define __HAVE_ARCH_PMD_WRITE
++static inline unsigned long pmd_write(pmd_t pmd)
+ {
+ pte_t pte = __pte(pmd_val(pmd));
+
+- return pte_dirty(pte);
++ return pte_write(pte);
+ }
+
+-static inline unsigned long pmd_young(pmd_t pmd)
++#ifdef CONFIG_TRANSPARENT_HUGEPAGE
++static inline unsigned long pmd_dirty(pmd_t pmd)
+ {
+ pte_t pte = __pte(pmd_val(pmd));
+
+- return pte_young(pte);
++ return pte_dirty(pte);
+ }
+
+-static inline unsigned long pmd_write(pmd_t pmd)
++static inline unsigned long pmd_young(pmd_t pmd)
+ {
+ pte_t pte = __pte(pmd_val(pmd));
+
+- return pte_write(pte);
++ return pte_young(pte);
+ }
+
+ static inline unsigned long pmd_trans_huge(pmd_t pmd)
+diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
+index 5d2f91511c60..47ecac5106d3 100644
+--- a/arch/sparc/mm/init_64.c
++++ b/arch/sparc/mm/init_64.c
+@@ -1495,7 +1495,7 @@ bool kern_addr_valid(unsigned long addr)
+ if ((long)addr < 0L) {
+ unsigned long pa = __pa(addr);
+
+- if ((addr >> max_phys_bits) != 0UL)
++ if ((pa >> max_phys_bits) != 0UL)
+ return false;
+
+ return pfn_valid(pa >> PAGE_SHIFT);
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 8639bb2ae058..6bf09f5594b2 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -983,6 +983,18 @@ void prepare_ftrace_return(unsigned long self_addr, unsigned long *parent,
+ unsigned long return_hooker = (unsigned long)
+ &return_to_handler;
+
++ /*
++ * When resuming from suspend-to-ram, this function can be indirectly
++ * called from early CPU startup code while the CPU is in real mode,
++ * which would fail miserably. Make sure the stack pointer is a
++ * virtual address.
++ *
++ * This check isn't as accurate as virt_addr_valid(), but it should be
++ * good enough for this purpose, and it's fast.
++ */
++ if (unlikely((long)__builtin_frame_address(0) >= 0))
++ return;
++
+ if (unlikely(ftrace_graph_is_dead()))
+ return;
+
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index 27ae2a0ef1b9..ecd075fd5754 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -613,6 +613,13 @@ static const struct dmi_system_id __initconst i8042_dmi_reset_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "20046"),
+ },
+ },
++ {
++ /* Clevo P650RS, 650RP6, Sager NP8152-S, and others */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Notebook"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "P65xRP"),
++ },
++ },
+ { }
+ };
+
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index a0dabd4038ba..7ab24c5262f3 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -740,13 +740,18 @@ static const struct net_device_ops gs_usb_netdev_ops = {
+ static int gs_usb_set_identify(struct net_device *netdev, bool do_identify)
+ {
+ struct gs_can *dev = netdev_priv(netdev);
+- struct gs_identify_mode imode;
++ struct gs_identify_mode *imode;
+ int rc;
+
++ imode = kmalloc(sizeof(*imode), GFP_KERNEL);
++
++ if (!imode)
++ return -ENOMEM;
++
+ if (do_identify)
+- imode.mode = GS_CAN_IDENTIFY_ON;
++ imode->mode = GS_CAN_IDENTIFY_ON;
+ else
+- imode.mode = GS_CAN_IDENTIFY_OFF;
++ imode->mode = GS_CAN_IDENTIFY_OFF;
+
+ rc = usb_control_msg(interface_to_usbdev(dev->iface),
+ usb_sndctrlpipe(interface_to_usbdev(dev->iface),
+@@ -756,10 +761,12 @@ static int gs_usb_set_identify(struct net_device *netdev, bool do_identify)
+ USB_RECIP_INTERFACE,
+ dev->channel,
+ 0,
+- &imode,
+- sizeof(imode),
++ imode,
++ sizeof(*imode),
+ 100);
+
++ kfree(imode);
++
+ return (rc > 0) ? 0 : rc;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 41db47050991..0145765002b3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -82,7 +82,7 @@
+ #define MLX5E_VALID_NUM_MTTS(num_mtts) (MLX5_MTT_OCTW(num_mtts) - 1 <= U16_MAX)
+
+ #define MLX5_UMR_ALIGN (2048)
+-#define MLX5_MPWRQ_SMALL_PACKET_THRESHOLD (128)
++#define MLX5_MPWRQ_SMALL_PACKET_THRESHOLD (256)
+
+ #define MLX5E_PARAMS_DEFAULT_LRO_WQE_SZ (64 * 1024)
+ #define MLX5E_DEFAULT_LRO_TIMEOUT 32
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
+index f33f72d0237c..32d56cd1b638 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs_ethtool.c
+@@ -564,6 +564,7 @@ int mlx5e_ethtool_get_all_flows(struct mlx5e_priv *priv, struct ethtool_rxnfc *i
+ int idx = 0;
+ int err = 0;
+
++ info->data = MAX_NUM_OF_ETHTOOL_RULES;
+ while ((!err || err == -ENOENT) && idx < info->rule_cnt) {
+ err = mlx5e_ethtool_get_flow(priv, info, location);
+ if (!err)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index cc718814c378..dc5c594f7c5e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -611,7 +611,8 @@ static int parse_cls_flower(struct mlx5e_priv *priv,
+
+ if (!err && esw->mode == SRIOV_OFFLOADS &&
+ rep->vport != FDB_UPLINK_VPORT) {
+- if (min_inline > esw->offloads.inline_mode) {
++ if (esw->offloads.inline_mode != MLX5_INLINE_MODE_NONE &&
++ esw->offloads.inline_mode < min_inline) {
+ netdev_warn(priv->netdev,
+ "Flow is not offloaded due to min inline setting, required %d actual %d\n",
+ min_inline, esw->offloads.inline_mode);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 7bce2bdbb79b..4d111c129144 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -908,8 +908,7 @@ int mlx5_devlink_eswitch_inline_mode_set(struct devlink *devlink, u8 mode)
+ struct mlx5_core_dev *dev = devlink_priv(devlink);
+ struct mlx5_eswitch *esw = dev->priv.eswitch;
+ int num_vports = esw->enabled_vports;
+- int err;
+- int vport;
++ int err, vport;
+ u8 mlx5_mode;
+
+ if (!MLX5_CAP_GEN(dev, vport_group_manager))
+@@ -918,9 +917,17 @@ int mlx5_devlink_eswitch_inline_mode_set(struct devlink *devlink, u8 mode)
+ if (esw->mode == SRIOV_NONE)
+ return -EOPNOTSUPP;
+
+- if (MLX5_CAP_ETH(dev, wqe_inline_mode) !=
+- MLX5_CAP_INLINE_MODE_VPORT_CONTEXT)
++ switch (MLX5_CAP_ETH(dev, wqe_inline_mode)) {
++ case MLX5_CAP_INLINE_MODE_NOT_REQUIRED:
++ if (mode == DEVLINK_ESWITCH_INLINE_MODE_NONE)
++ return 0;
++ /* fall through */
++ case MLX5_CAP_INLINE_MODE_L2:
++ esw_warn(dev, "Inline mode can't be set\n");
+ return -EOPNOTSUPP;
++ case MLX5_CAP_INLINE_MODE_VPORT_CONTEXT:
++ break;
++ }
+
+ if (esw->offloads.num_flows > 0) {
+ esw_warn(dev, "Can't set inline mode when flows are configured\n");
+@@ -963,18 +970,14 @@ int mlx5_devlink_eswitch_inline_mode_get(struct devlink *devlink, u8 *mode)
+ if (esw->mode == SRIOV_NONE)
+ return -EOPNOTSUPP;
+
+- if (MLX5_CAP_ETH(dev, wqe_inline_mode) !=
+- MLX5_CAP_INLINE_MODE_VPORT_CONTEXT)
+- return -EOPNOTSUPP;
+-
+ return esw_inline_mode_to_devlink(esw->offloads.inline_mode, mode);
+ }
+
+ int mlx5_eswitch_inline_mode_get(struct mlx5_eswitch *esw, int nvfs, u8 *mode)
+ {
++ u8 prev_mlx5_mode, mlx5_mode = MLX5_INLINE_MODE_L2;
+ struct mlx5_core_dev *dev = esw->dev;
+ int vport;
+- u8 prev_mlx5_mode, mlx5_mode = MLX5_INLINE_MODE_L2;
+
+ if (!MLX5_CAP_GEN(dev, vport_group_manager))
+ return -EOPNOTSUPP;
+@@ -982,10 +985,18 @@ int mlx5_eswitch_inline_mode_get(struct mlx5_eswitch *esw, int nvfs, u8 *mode)
+ if (esw->mode == SRIOV_NONE)
+ return -EOPNOTSUPP;
+
+- if (MLX5_CAP_ETH(dev, wqe_inline_mode) !=
+- MLX5_CAP_INLINE_MODE_VPORT_CONTEXT)
+- return -EOPNOTSUPP;
++ switch (MLX5_CAP_ETH(dev, wqe_inline_mode)) {
++ case MLX5_CAP_INLINE_MODE_NOT_REQUIRED:
++ mlx5_mode = MLX5_INLINE_MODE_NONE;
++ goto out;
++ case MLX5_CAP_INLINE_MODE_L2:
++ mlx5_mode = MLX5_INLINE_MODE_L2;
++ goto out;
++ case MLX5_CAP_INLINE_MODE_VPORT_CONTEXT:
++ goto query_vports;
++ }
+
++query_vports:
+ for (vport = 1; vport <= nvfs; vport++) {
+ mlx5_query_nic_vport_min_inline(dev, vport, &mlx5_mode);
+ if (vport > 1 && prev_mlx5_mode != mlx5_mode)
+@@ -993,6 +1004,7 @@ int mlx5_eswitch_inline_mode_get(struct mlx5_eswitch *esw, int nvfs, u8 *mode)
+ prev_mlx5_mode = mlx5_mode;
+ }
+
++out:
+ *mode = mlx5_mode;
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+index 55957246c0e8..b5d5519542e8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag.c
+@@ -294,7 +294,7 @@ static int mlx5_handle_changeupper_event(struct mlx5_lag *ldev,
+ struct netdev_notifier_changeupper_info *info)
+ {
+ struct net_device *upper = info->upper_dev, *ndev_tmp;
+- struct netdev_lag_upper_info *lag_upper_info;
++ struct netdev_lag_upper_info *lag_upper_info = NULL;
+ bool is_bonded;
+ int bond_status = 0;
+ int num_slaves = 0;
+@@ -303,7 +303,8 @@ static int mlx5_handle_changeupper_event(struct mlx5_lag *ldev,
+ if (!netif_is_lag_master(upper))
+ return 0;
+
+- lag_upper_info = info->upper_info;
++ if (info->linking)
++ lag_upper_info = info->upper_info;
+
+ /* The event may still be of interest if the slave does not belong to
+ * us, but is enslaved to a master which has one or more of our netdevs
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 4aca265d9c14..4ee7ea775a02 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1001,7 +1001,7 @@ static int mlx5_load_one(struct mlx5_core_dev *dev, struct mlx5_priv *priv,
+ if (err) {
+ dev_err(&dev->pdev->dev, "Firmware over %d MS in initializing state, aborting\n",
+ FW_INIT_TIMEOUT_MILI);
+- goto out_err;
++ goto err_cmd_cleanup;
+ }
+
+ err = mlx5_core_enable_hca(dev, 0);
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index f729a6b43958..1a012b3e0ded 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -1061,12 +1061,70 @@ static struct mdiobb_ops bb_ops = {
+ .get_mdio_data = sh_get_mdio,
+ };
+
++/* free Tx skb function */
++static int sh_eth_tx_free(struct net_device *ndev, bool sent_only)
++{
++ struct sh_eth_private *mdp = netdev_priv(ndev);
++ struct sh_eth_txdesc *txdesc;
++ int free_num = 0;
++ int entry;
++ bool sent;
++
++ for (; mdp->cur_tx - mdp->dirty_tx > 0; mdp->dirty_tx++) {
++ entry = mdp->dirty_tx % mdp->num_tx_ring;
++ txdesc = &mdp->tx_ring[entry];
++ sent = !(txdesc->status & cpu_to_le32(TD_TACT));
++ if (sent_only && !sent)
++ break;
++ /* TACT bit must be checked before all the following reads */
++ dma_rmb();
++ netif_info(mdp, tx_done, ndev,
++ "tx entry %d status 0x%08x\n",
++ entry, le32_to_cpu(txdesc->status));
++ /* Free the original skb. */
++ if (mdp->tx_skbuff[entry]) {
++ dma_unmap_single(&ndev->dev, le32_to_cpu(txdesc->addr),
++ le32_to_cpu(txdesc->len) >> 16,
++ DMA_TO_DEVICE);
++ dev_kfree_skb_irq(mdp->tx_skbuff[entry]);
++ mdp->tx_skbuff[entry] = NULL;
++ free_num++;
++ }
++ txdesc->status = cpu_to_le32(TD_TFP);
++ if (entry >= mdp->num_tx_ring - 1)
++ txdesc->status |= cpu_to_le32(TD_TDLE);
++
++ if (sent) {
++ ndev->stats.tx_packets++;
++ ndev->stats.tx_bytes += le32_to_cpu(txdesc->len) >> 16;
++ }
++ }
++ return free_num;
++}
++
+ /* free skb and descriptor buffer */
+ static void sh_eth_ring_free(struct net_device *ndev)
+ {
+ struct sh_eth_private *mdp = netdev_priv(ndev);
+ int ringsize, i;
+
++ if (mdp->rx_ring) {
++ for (i = 0; i < mdp->num_rx_ring; i++) {
++ if (mdp->rx_skbuff[i]) {
++ struct sh_eth_rxdesc *rxdesc = &mdp->rx_ring[i];
++
++ dma_unmap_single(&ndev->dev,
++ le32_to_cpu(rxdesc->addr),
++ ALIGN(mdp->rx_buf_sz, 32),
++ DMA_FROM_DEVICE);
++ }
++ }
++ ringsize = sizeof(struct sh_eth_rxdesc) * mdp->num_rx_ring;
++ dma_free_coherent(NULL, ringsize, mdp->rx_ring,
++ mdp->rx_desc_dma);
++ mdp->rx_ring = NULL;
++ }
++
+ /* Free Rx skb ringbuffer */
+ if (mdp->rx_skbuff) {
+ for (i = 0; i < mdp->num_rx_ring; i++)
+@@ -1075,27 +1133,18 @@ static void sh_eth_ring_free(struct net_device *ndev)
+ kfree(mdp->rx_skbuff);
+ mdp->rx_skbuff = NULL;
+
+- /* Free Tx skb ringbuffer */
+- if (mdp->tx_skbuff) {
+- for (i = 0; i < mdp->num_tx_ring; i++)
+- dev_kfree_skb(mdp->tx_skbuff[i]);
+- }
+- kfree(mdp->tx_skbuff);
+- mdp->tx_skbuff = NULL;
+-
+- if (mdp->rx_ring) {
+- ringsize = sizeof(struct sh_eth_rxdesc) * mdp->num_rx_ring;
+- dma_free_coherent(NULL, ringsize, mdp->rx_ring,
+- mdp->rx_desc_dma);
+- mdp->rx_ring = NULL;
+- }
+-
+ if (mdp->tx_ring) {
++ sh_eth_tx_free(ndev, false);
++
+ ringsize = sizeof(struct sh_eth_txdesc) * mdp->num_tx_ring;
+ dma_free_coherent(NULL, ringsize, mdp->tx_ring,
+ mdp->tx_desc_dma);
+ mdp->tx_ring = NULL;
+ }
++
++ /* Free Tx skb ringbuffer */
++ kfree(mdp->tx_skbuff);
++ mdp->tx_skbuff = NULL;
+ }
+
+ /* format skb and descriptor buffer */
+@@ -1343,43 +1392,6 @@ static void sh_eth_dev_exit(struct net_device *ndev)
+ update_mac_address(ndev);
+ }
+
+-/* free Tx skb function */
+-static int sh_eth_txfree(struct net_device *ndev)
+-{
+- struct sh_eth_private *mdp = netdev_priv(ndev);
+- struct sh_eth_txdesc *txdesc;
+- int free_num = 0;
+- int entry;
+-
+- for (; mdp->cur_tx - mdp->dirty_tx > 0; mdp->dirty_tx++) {
+- entry = mdp->dirty_tx % mdp->num_tx_ring;
+- txdesc = &mdp->tx_ring[entry];
+- if (txdesc->status & cpu_to_le32(TD_TACT))
+- break;
+- /* TACT bit must be checked before all the following reads */
+- dma_rmb();
+- netif_info(mdp, tx_done, ndev,
+- "tx entry %d status 0x%08x\n",
+- entry, le32_to_cpu(txdesc->status));
+- /* Free the original skb. */
+- if (mdp->tx_skbuff[entry]) {
+- dma_unmap_single(&ndev->dev, le32_to_cpu(txdesc->addr),
+- le32_to_cpu(txdesc->len) >> 16,
+- DMA_TO_DEVICE);
+- dev_kfree_skb_irq(mdp->tx_skbuff[entry]);
+- mdp->tx_skbuff[entry] = NULL;
+- free_num++;
+- }
+- txdesc->status = cpu_to_le32(TD_TFP);
+- if (entry >= mdp->num_tx_ring - 1)
+- txdesc->status |= cpu_to_le32(TD_TDLE);
+-
+- ndev->stats.tx_packets++;
+- ndev->stats.tx_bytes += le32_to_cpu(txdesc->len) >> 16;
+- }
+- return free_num;
+-}
+-
+ /* Packet receive function */
+ static int sh_eth_rx(struct net_device *ndev, u32 intr_status, int *quota)
+ {
+@@ -1622,7 +1634,7 @@ static void sh_eth_error(struct net_device *ndev, u32 intr_status)
+ intr_status, mdp->cur_tx, mdp->dirty_tx,
+ (u32)ndev->state, edtrr);
+ /* dirty buffer free */
+- sh_eth_txfree(ndev);
++ sh_eth_tx_free(ndev, true);
+
+ /* SH7712 BUG */
+ if (edtrr ^ sh_eth_get_edtrr_trns(mdp)) {
+@@ -1681,7 +1693,7 @@ static irqreturn_t sh_eth_interrupt(int irq, void *netdev)
+ /* Clear Tx interrupts */
+ sh_eth_write(ndev, intr_status & cd->tx_check, EESR);
+
+- sh_eth_txfree(ndev);
++ sh_eth_tx_free(ndev, true);
+ netif_wake_queue(ndev);
+ }
+
+@@ -2309,7 +2321,7 @@ static int sh_eth_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+
+ spin_lock_irqsave(&mdp->lock, flags);
+ if ((mdp->cur_tx - mdp->dirty_tx) >= (mdp->num_tx_ring - 4)) {
+- if (!sh_eth_txfree(ndev)) {
++ if (!sh_eth_tx_free(ndev, true)) {
+ netif_warn(mdp, tx_queued, ndev, "TxFD exhausted.\n");
+ netif_stop_queue(ndev);
+ spin_unlock_irqrestore(&mdp->lock, flags);
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index f83cf6696820..8420069594b3 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -2713,7 +2713,7 @@ static netdev_tx_t macsec_start_xmit(struct sk_buff *skb,
+ }
+
+ #define MACSEC_FEATURES \
+- (NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST)
++ (NETIF_F_SG | NETIF_F_HIGHDMA)
+ static struct lock_class_key macsec_netdev_addr_lock_key;
+
+ static int macsec_dev_init(struct net_device *dev)
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
+index 20b3fdf282c5..7d49a36d6020 100644
+--- a/drivers/net/macvlan.c
++++ b/drivers/net/macvlan.c
+@@ -1140,6 +1140,7 @@ static int macvlan_port_create(struct net_device *dev)
+ static void macvlan_port_destroy(struct net_device *dev)
+ {
+ struct macvlan_port *port = macvlan_port_get_rtnl(dev);
++ struct sk_buff *skb;
+
+ dev->priv_flags &= ~IFF_MACVLAN_PORT;
+ netdev_rx_handler_unregister(dev);
+@@ -1148,7 +1149,15 @@ static void macvlan_port_destroy(struct net_device *dev)
+ * but we need to cancel it and purge left skbs if any.
+ */
+ cancel_work_sync(&port->bc_work);
+- __skb_queue_purge(&port->bc_queue);
++
++ while ((skb = __skb_dequeue(&port->bc_queue))) {
++ const struct macvlan_dev *src = MACVLAN_SKB_CB(skb)->src;
++
++ if (src)
++ dev_put(src->dev);
++
++ kfree_skb(skb);
++ }
+
+ kfree(port);
+ }
+diff --git a/drivers/net/phy/dp83640.c b/drivers/net/phy/dp83640.c
+index e2460a57e4b1..ed0d10f54f26 100644
+--- a/drivers/net/phy/dp83640.c
++++ b/drivers/net/phy/dp83640.c
+@@ -1438,8 +1438,6 @@ static bool dp83640_rxtstamp(struct phy_device *phydev,
+ skb_info->tmo = jiffies + SKB_TIMESTAMP_TIMEOUT;
+ skb_queue_tail(&dp83640->rx_queue, skb);
+ schedule_delayed_work(&dp83640->ts_work, SKB_TIMESTAMP_TIMEOUT);
+- } else {
+- netif_rx_ni(skb);
+ }
+
+ return true;
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 7cc1b7dcfe05..b41a32b26be7 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -591,16 +591,18 @@ int phy_mii_ioctl(struct phy_device *phydev, struct ifreq *ifr, int cmd)
+ EXPORT_SYMBOL(phy_mii_ioctl);
+
+ /**
+- * phy_start_aneg - start auto-negotiation for this PHY device
++ * phy_start_aneg_priv - start auto-negotiation for this PHY device
+ * @phydev: the phy_device struct
++ * @sync: indicate whether we should wait for the workqueue cancelation
+ *
+ * Description: Sanitizes the settings (if we're not autonegotiating
+ * them), and then calls the driver's config_aneg function.
+ * If the PHYCONTROL Layer is operating, we change the state to
+ * reflect the beginning of Auto-negotiation or forcing.
+ */
+-int phy_start_aneg(struct phy_device *phydev)
++static int phy_start_aneg_priv(struct phy_device *phydev, bool sync)
+ {
++ bool trigger = 0;
+ int err;
+
+ mutex_lock(&phydev->lock);
+@@ -625,10 +627,40 @@ int phy_start_aneg(struct phy_device *phydev)
+ }
+ }
+
++ /* Re-schedule a PHY state machine to check PHY status because
++ * negotiation may already be done and aneg interrupt may not be
++ * generated.
++ */
++ if (phy_interrupt_is_valid(phydev) && (phydev->state == PHY_AN)) {
++ err = phy_aneg_done(phydev);
++ if (err > 0) {
++ trigger = true;
++ err = 0;
++ }
++ }
++
+ out_unlock:
+ mutex_unlock(&phydev->lock);
++
++ if (trigger)
++ phy_trigger_machine(phydev, sync);
++
+ return err;
+ }
++
++/**
++ * phy_start_aneg - start auto-negotiation for this PHY device
++ * @phydev: the phy_device struct
++ *
++ * Description: Sanitizes the settings (if we're not autonegotiating
++ * them), and then calls the driver's config_aneg function.
++ * If the PHYCONTROL Layer is operating, we change the state to
++ * reflect the beginning of Auto-negotiation or forcing.
++ */
++int phy_start_aneg(struct phy_device *phydev)
++{
++ return phy_start_aneg_priv(phydev, true);
++}
+ EXPORT_SYMBOL(phy_start_aneg);
+
+ /**
+@@ -656,7 +688,7 @@ void phy_start_machine(struct phy_device *phydev)
+ * state machine runs.
+ */
+
+-static void phy_trigger_machine(struct phy_device *phydev, bool sync)
++void phy_trigger_machine(struct phy_device *phydev, bool sync)
+ {
+ if (sync)
+ cancel_delayed_work_sync(&phydev->state_queue);
+@@ -678,7 +710,7 @@ void phy_stop_machine(struct phy_device *phydev)
+ cancel_delayed_work_sync(&phydev->state_queue);
+
+ mutex_lock(&phydev->lock);
+- if (phydev->state > PHY_UP)
++ if (phydev->state > PHY_UP && phydev->state != PHY_HALTED)
+ phydev->state = PHY_UP;
+ mutex_unlock(&phydev->lock);
+ }
+@@ -1151,7 +1183,7 @@ void phy_state_machine(struct work_struct *work)
+ mutex_unlock(&phydev->lock);
+
+ if (needs_aneg)
+- err = phy_start_aneg(phydev);
++ err = phy_start_aneg_priv(phydev, false);
+ else if (do_suspend)
+ phy_suspend(phydev);
+
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 921fef275ea4..f2fd52e71a5e 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -1126,7 +1126,7 @@ static int vrf_fib_rule(const struct net_device *dev, __u8 family, bool add_it)
+ goto nla_put_failure;
+
+ /* rule only needs to appear once */
+- nlh->nlmsg_flags &= NLM_F_EXCL;
++ nlh->nlmsg_flags |= NLM_F_EXCL;
+
+ frh = nlmsg_data(nlh);
+ memset(frh, 0, sizeof(*frh));
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index d438430c49a2..dba671d88377 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1038,10 +1038,10 @@ int scsi_init_io(struct scsi_cmnd *cmd)
+ struct scsi_device *sdev = cmd->device;
+ struct request *rq = cmd->request;
+ bool is_mq = (rq->mq_ctx != NULL);
+- int error;
++ int error = BLKPREP_KILL;
+
+ if (WARN_ON_ONCE(!blk_rq_nr_phys_segments(rq)))
+- return -EINVAL;
++ goto err_exit;
+
+ error = scsi_init_sgtable(rq, &cmd->sdb);
+ if (error)
+diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
+index 5e659d054b40..4299348c880a 100644
+--- a/fs/ceph/inode.c
++++ b/fs/ceph/inode.c
+@@ -2069,11 +2069,6 @@ int __ceph_setattr(struct inode *inode, struct iattr *attr)
+ if (inode_dirty_flags)
+ __mark_inode_dirty(inode, inode_dirty_flags);
+
+- if (ia_valid & ATTR_MODE) {
+- err = posix_acl_chmod(inode, attr->ia_mode);
+- if (err)
+- goto out_put;
+- }
+
+ if (mask) {
+ req->r_inode = inode;
+@@ -2087,13 +2082,11 @@ int __ceph_setattr(struct inode *inode, struct iattr *attr)
+ ceph_cap_string(dirtied), mask);
+
+ ceph_mdsc_put_request(req);
+- if (mask & CEPH_SETATTR_SIZE)
+- __ceph_do_pending_vmtruncate(inode);
+- ceph_free_cap_flush(prealloc_cf);
+- return err;
+-out_put:
+- ceph_mdsc_put_request(req);
+ ceph_free_cap_flush(prealloc_cf);
++
++ if (err >= 0 && (mask & CEPH_SETATTR_SIZE))
++ __ceph_do_pending_vmtruncate(inode);
++
+ return err;
+ }
+
+@@ -2112,7 +2105,12 @@ int ceph_setattr(struct dentry *dentry, struct iattr *attr)
+ if (err != 0)
+ return err;
+
+- return __ceph_setattr(inode, attr);
++ err = __ceph_setattr(inode, attr);
++
++ if (err >= 0 && (attr->ia_valid & ATTR_MODE))
++ err = posix_acl_chmod(inode, attr->ia_mode);
++
++ return err;
+ }
+
+ /*
+diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
+index dba2ff8eaa68..452334694a5d 100644
+--- a/fs/nfsd/nfs3xdr.c
++++ b/fs/nfsd/nfs3xdr.c
+@@ -358,6 +358,8 @@ nfs3svc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p,
+ {
+ unsigned int len, v, hdr, dlen;
+ u32 max_blocksize = svc_max_payload(rqstp);
++ struct kvec *head = rqstp->rq_arg.head;
++ struct kvec *tail = rqstp->rq_arg.tail;
+
+ p = decode_fh(p, &args->fh);
+ if (!p)
+@@ -367,6 +369,8 @@ nfs3svc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p,
+ args->count = ntohl(*p++);
+ args->stable = ntohl(*p++);
+ len = args->len = ntohl(*p++);
++ if ((void *)p > head->iov_base + head->iov_len)
++ return 0;
+ /*
+ * The count must equal the amount of data passed.
+ */
+@@ -377,9 +381,8 @@ nfs3svc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p,
+ * Check to make sure that we got the right number of
+ * bytes.
+ */
+- hdr = (void*)p - rqstp->rq_arg.head[0].iov_base;
+- dlen = rqstp->rq_arg.head[0].iov_len + rqstp->rq_arg.page_len
+- + rqstp->rq_arg.tail[0].iov_len - hdr;
++ hdr = (void*)p - head->iov_base;
++ dlen = head->iov_len + rqstp->rq_arg.page_len + tail->iov_len - hdr;
+ /*
+ * Round the length of the data which was specified up to
+ * the next multiple of XDR units and then compare that
+@@ -396,7 +399,7 @@ nfs3svc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p,
+ len = args->len = max_blocksize;
+ }
+ rqstp->rq_vec[0].iov_base = (void*)p;
+- rqstp->rq_vec[0].iov_len = rqstp->rq_arg.head[0].iov_len - hdr;
++ rqstp->rq_vec[0].iov_len = head->iov_len - hdr;
+ v = 0;
+ while (len > rqstp->rq_vec[v].iov_len) {
+ len -= rqstp->rq_vec[v].iov_len;
+@@ -471,6 +474,8 @@ nfs3svc_decode_symlinkargs(struct svc_rqst *rqstp, __be32 *p,
+ /* first copy and check from the first page */
+ old = (char*)p;
+ vec = &rqstp->rq_arg.head[0];
++ if ((void *)old > vec->iov_base + vec->iov_len)
++ return 0;
+ avail = vec->iov_len - (old - (char*)vec->iov_base);
+ while (len && avail && *old) {
+ *new++ = *old++;
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index e6bfd96734c0..15497cbbc563 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -733,6 +733,37 @@ static __be32 map_new_errors(u32 vers, __be32 nfserr)
+ return nfserr;
+ }
+
++/*
++ * A write procedure can have a large argument, and a read procedure can
++ * have a large reply, but no NFSv2 or NFSv3 procedure has argument and
++ * reply that can both be larger than a page. The xdr code has taken
++ * advantage of this assumption to be a sloppy about bounds checking in
++ * some cases. Pending a rewrite of the NFSv2/v3 xdr code to fix that
++ * problem, we enforce these assumptions here:
++ */
++static bool nfs_request_too_big(struct svc_rqst *rqstp,
++ struct svc_procedure *proc)
++{
++ /*
++ * The ACL code has more careful bounds-checking and is not
++ * susceptible to this problem:
++ */
++ if (rqstp->rq_prog != NFS_PROGRAM)
++ return false;
++ /*
++ * Ditto NFSv4 (which can in theory have argument and reply both
++ * more than a page):
++ */
++ if (rqstp->rq_vers >= 4)
++ return false;
++ /* The reply will be small, we're OK: */
++ if (proc->pc_xdrressize > 0 &&
++ proc->pc_xdrressize < XDR_QUADLEN(PAGE_SIZE))
++ return false;
++
++ return rqstp->rq_arg.len > PAGE_SIZE;
++}
++
+ int
+ nfsd_dispatch(struct svc_rqst *rqstp, __be32 *statp)
+ {
+@@ -745,6 +776,11 @@ nfsd_dispatch(struct svc_rqst *rqstp, __be32 *statp)
+ rqstp->rq_vers, rqstp->rq_proc);
+ proc = rqstp->rq_procinfo;
+
++ if (nfs_request_too_big(rqstp, proc)) {
++ dprintk("nfsd: NFSv%d argument too large\n", rqstp->rq_vers);
++ *statp = rpc_garbage_args;
++ return 1;
++ }
+ /*
+ * Give the xdr decoder a chance to change this if it wants
+ * (necessary in the NFSv4.0 compound case)
+diff --git a/fs/nfsd/nfsxdr.c b/fs/nfsd/nfsxdr.c
+index 41b468a6a90f..de07ff625777 100644
+--- a/fs/nfsd/nfsxdr.c
++++ b/fs/nfsd/nfsxdr.c
+@@ -280,6 +280,7 @@ nfssvc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p,
+ struct nfsd_writeargs *args)
+ {
+ unsigned int len, hdr, dlen;
++ struct kvec *head = rqstp->rq_arg.head;
+ int v;
+
+ p = decode_fh(p, &args->fh);
+@@ -300,9 +301,10 @@ nfssvc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p,
+ * Check to make sure that we got the right number of
+ * bytes.
+ */
+- hdr = (void*)p - rqstp->rq_arg.head[0].iov_base;
+- dlen = rqstp->rq_arg.head[0].iov_len + rqstp->rq_arg.page_len
+- - hdr;
++ hdr = (void*)p - head->iov_base;
++ if (hdr > head->iov_len)
++ return 0;
++ dlen = head->iov_len + rqstp->rq_arg.page_len - hdr;
+
+ /*
+ * Round the length of the data which was specified up to
+@@ -316,7 +318,7 @@ nfssvc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p,
+ return 0;
+
+ rqstp->rq_vec[0].iov_base = (void*)p;
+- rqstp->rq_vec[0].iov_len = rqstp->rq_arg.head[0].iov_len - hdr;
++ rqstp->rq_vec[0].iov_len = head->iov_len - hdr;
+ v = 0;
+ while (len > rqstp->rq_vec[v].iov_len) {
+ len -= rqstp->rq_vec[v].iov_len;
+diff --git a/include/linux/errqueue.h b/include/linux/errqueue.h
+index 9ca23fcfb5d7..6fdfc884fdeb 100644
+--- a/include/linux/errqueue.h
++++ b/include/linux/errqueue.h
+@@ -20,6 +20,8 @@ struct sock_exterr_skb {
+ struct sock_extended_err ee;
+ u16 addr_offset;
+ __be16 port;
++ u8 opt_stats:1,
++ unused:7;
+ };
+
+ #endif
+diff --git a/include/linux/phy.h b/include/linux/phy.h
+index 7fc1105605bf..b19ae667c9c4 100644
+--- a/include/linux/phy.h
++++ b/include/linux/phy.h
+@@ -840,6 +840,7 @@ void phy_change_work(struct work_struct *work);
+ void phy_mac_interrupt(struct phy_device *phydev, int new_link);
+ void phy_start_machine(struct phy_device *phydev);
+ void phy_stop_machine(struct phy_device *phydev);
++void phy_trigger_machine(struct phy_device *phydev, bool sync);
+ int phy_ethtool_sset(struct phy_device *phydev, struct ethtool_cmd *cmd);
+ int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd);
+ int phy_ethtool_ksettings_get(struct phy_device *phydev,
+diff --git a/include/uapi/linux/ipv6_route.h b/include/uapi/linux/ipv6_route.h
+index f6598d1c886e..316e838b7470 100644
+--- a/include/uapi/linux/ipv6_route.h
++++ b/include/uapi/linux/ipv6_route.h
+@@ -34,7 +34,7 @@
+ #define RTF_PREF(pref) ((pref) << 27)
+ #define RTF_PREF_MASK 0x18000000
+
+-#define RTF_PCPU 0x40000000
++#define RTF_PCPU 0x40000000 /* read-only: can not be set by user */
+ #define RTF_LOCAL 0x80000000
+
+
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index cdc43b899f28..f3c938ba87a2 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1859,14 +1859,15 @@ static void find_good_pkt_pointers(struct bpf_verifier_state *state,
+
+ for (i = 0; i < MAX_BPF_REG; i++)
+ if (regs[i].type == PTR_TO_PACKET && regs[i].id == dst_reg->id)
+- regs[i].range = dst_reg->off;
++ /* keep the maximum range already checked */
++ regs[i].range = max(regs[i].range, dst_reg->off);
+
+ for (i = 0; i < MAX_BPF_STACK; i += BPF_REG_SIZE) {
+ if (state->stack_slot_type[i] != STACK_SPILL)
+ continue;
+ reg = &state->spilled_regs[i / BPF_REG_SIZE];
+ if (reg->type == PTR_TO_PACKET && reg->id == dst_reg->id)
+- reg->range = dst_reg->off;
++ reg->range = max(reg->range, dst_reg->off);
+ }
+ }
+
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 0a5f630f5c54..f90ef82076a9 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -1333,26 +1333,21 @@ static int cpuhp_store_callbacks(enum cpuhp_state state, const char *name,
+ struct cpuhp_step *sp;
+ int ret = 0;
+
+- mutex_lock(&cpuhp_state_mutex);
+-
+ if (state == CPUHP_AP_ONLINE_DYN || state == CPUHP_BP_PREPARE_DYN) {
+ ret = cpuhp_reserve_state(state);
+ if (ret < 0)
+- goto out;
++ return ret;
+ state = ret;
+ }
+ sp = cpuhp_get_step(state);
+- if (name && sp->name) {
+- ret = -EBUSY;
+- goto out;
+- }
++ if (name && sp->name)
++ return -EBUSY;
++
+ sp->startup.single = startup;
+ sp->teardown.single = teardown;
+ sp->name = name;
+ sp->multi_instance = multi_instance;
+ INIT_HLIST_HEAD(&sp->list);
+-out:
+- mutex_unlock(&cpuhp_state_mutex);
+ return ret;
+ }
+
+@@ -1426,6 +1421,7 @@ int __cpuhp_state_add_instance(enum cpuhp_state state, struct hlist_node *node,
+ return -EINVAL;
+
+ get_online_cpus();
++ mutex_lock(&cpuhp_state_mutex);
+
+ if (!invoke || !sp->startup.multi)
+ goto add_node;
+@@ -1445,16 +1441,14 @@ int __cpuhp_state_add_instance(enum cpuhp_state state, struct hlist_node *node,
+ if (ret) {
+ if (sp->teardown.multi)
+ cpuhp_rollback_install(cpu, state, node);
+- goto err;
++ goto unlock;
+ }
+ }
+ add_node:
+ ret = 0;
+- mutex_lock(&cpuhp_state_mutex);
+ hlist_add_head(node, &sp->list);
++unlock:
+ mutex_unlock(&cpuhp_state_mutex);
+-
+-err:
+ put_online_cpus();
+ return ret;
+ }
+@@ -1489,6 +1483,7 @@ int __cpuhp_setup_state(enum cpuhp_state state,
+ return -EINVAL;
+
+ get_online_cpus();
++ mutex_lock(&cpuhp_state_mutex);
+
+ ret = cpuhp_store_callbacks(state, name, startup, teardown,
+ multi_instance);
+@@ -1522,6 +1517,7 @@ int __cpuhp_setup_state(enum cpuhp_state state,
+ }
+ }
+ out:
++ mutex_unlock(&cpuhp_state_mutex);
+ put_online_cpus();
+ /*
+ * If the requested state is CPUHP_AP_ONLINE_DYN, return the
+@@ -1545,6 +1541,8 @@ int __cpuhp_state_remove_instance(enum cpuhp_state state,
+ return -EINVAL;
+
+ get_online_cpus();
++ mutex_lock(&cpuhp_state_mutex);
++
+ if (!invoke || !cpuhp_get_teardown_cb(state))
+ goto remove;
+ /*
+@@ -1561,7 +1559,6 @@ int __cpuhp_state_remove_instance(enum cpuhp_state state,
+ }
+
+ remove:
+- mutex_lock(&cpuhp_state_mutex);
+ hlist_del(node);
+ mutex_unlock(&cpuhp_state_mutex);
+ put_online_cpus();
+@@ -1569,6 +1566,7 @@ int __cpuhp_state_remove_instance(enum cpuhp_state state,
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(__cpuhp_state_remove_instance);
++
+ /**
+ * __cpuhp_remove_state - Remove the callbacks for an hotplug machine state
+ * @state: The state to remove
+@@ -1587,6 +1585,7 @@ void __cpuhp_remove_state(enum cpuhp_state state, bool invoke)
+
+ get_online_cpus();
+
++ mutex_lock(&cpuhp_state_mutex);
+ if (sp->multi_instance) {
+ WARN(!hlist_empty(&sp->list),
+ "Error: Removing state %d which has instances left.\n",
+@@ -1611,6 +1610,7 @@ void __cpuhp_remove_state(enum cpuhp_state state, bool invoke)
+ }
+ remove:
+ cpuhp_store_callbacks(state, NULL, NULL, NULL, false);
++ mutex_unlock(&cpuhp_state_mutex);
+ put_online_cpus();
+ }
+ EXPORT_SYMBOL(__cpuhp_remove_state);
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 3fc94a49ccd5..cf129fec7329 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -2101,6 +2101,10 @@ int p9_client_readdir(struct p9_fid *fid, char *data, u32 count, u64 offset)
+ trace_9p_protocol_dump(clnt, req->rc);
+ goto free_and_error;
+ }
++ if (rsize < count) {
++ pr_err("bogus RREADDIR count (%d > %d)\n", count, rsize);
++ count = rsize;
++ }
+
+ p9_debug(P9_DEBUG_9P, "<<< RREADDIR count %d\n", count);
+
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index e7c12caa20c8..4526cbd7e28a 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -860,7 +860,8 @@ static void neigh_probe(struct neighbour *neigh)
+ if (skb)
+ skb = skb_clone(skb, GFP_ATOMIC);
+ write_unlock(&neigh->lock);
+- neigh->ops->solicit(neigh, skb);
++ if (neigh->ops->solicit)
++ neigh->ops->solicit(neigh, skb);
+ atomic_inc(&neigh->probes);
+ kfree_skb(skb);
+ }
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index 9424673009c1..29be2466970c 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -105,15 +105,21 @@ static void queue_process(struct work_struct *work)
+ while ((skb = skb_dequeue(&npinfo->txq))) {
+ struct net_device *dev = skb->dev;
+ struct netdev_queue *txq;
++ unsigned int q_index;
+
+ if (!netif_device_present(dev) || !netif_running(dev)) {
+ kfree_skb(skb);
+ continue;
+ }
+
+- txq = skb_get_tx_queue(dev, skb);
+-
+ local_irq_save(flags);
++ /* check if skb->queue_mapping is still valid */
++ q_index = skb_get_queue_mapping(skb);
++ if (unlikely(q_index >= dev->real_num_tx_queues)) {
++ q_index = q_index % dev->real_num_tx_queues;
++ skb_set_queue_mapping(skb, q_index);
++ }
++ txq = netdev_get_tx_queue(dev, q_index);
+ HARD_TX_LOCK(dev, txq, smp_processor_id());
+ if (netif_xmit_frozen_or_stopped(txq) ||
+ netpoll_start_xmit(skb, dev, txq) != NETDEV_TX_OK) {
+diff --git a/net/core/secure_seq.c b/net/core/secure_seq.c
+index 88a8e429fc3e..0fd421713775 100644
+--- a/net/core/secure_seq.c
++++ b/net/core/secure_seq.c
+@@ -16,9 +16,11 @@
+ #define NET_SECRET_SIZE (MD5_MESSAGE_BYTES / 4)
+
+ static u32 net_secret[NET_SECRET_SIZE] ____cacheline_aligned;
++static u32 ts_secret[2];
+
+ static __always_inline void net_secret_init(void)
+ {
++ net_get_random_once(ts_secret, sizeof(ts_secret));
+ net_get_random_once(net_secret, sizeof(net_secret));
+ }
+ #endif
+@@ -41,6 +43,21 @@ static u32 seq_scale(u32 seq)
+ #endif
+
+ #if IS_ENABLED(CONFIG_IPV6)
++static u32 secure_tcpv6_ts_off(const __be32 *saddr, const __be32 *daddr)
++{
++ u32 hash[4 + 4 + 1];
++
++ if (sysctl_tcp_timestamps != 1)
++ return 0;
++
++ memcpy(hash, saddr, 16);
++ memcpy(hash + 4, daddr, 16);
++
++ hash[8] = ts_secret[0];
++
++ return jhash2(hash, ARRAY_SIZE(hash), ts_secret[1]);
++}
++
+ u32 secure_tcpv6_sequence_number(const __be32 *saddr, const __be32 *daddr,
+ __be16 sport, __be16 dport, u32 *tsoff)
+ {
+@@ -59,7 +76,7 @@ u32 secure_tcpv6_sequence_number(const __be32 *saddr, const __be32 *daddr,
+
+ md5_transform(hash, secret);
+
+- *tsoff = sysctl_tcp_timestamps == 1 ? hash[1] : 0;
++ *tsoff = secure_tcpv6_ts_off(saddr, daddr);
+ return seq_scale(hash[0]);
+ }
+ EXPORT_SYMBOL(secure_tcpv6_sequence_number);
+@@ -87,6 +104,14 @@ EXPORT_SYMBOL(secure_ipv6_port_ephemeral);
+ #endif
+
+ #ifdef CONFIG_INET
++static u32 secure_tcp_ts_off(__be32 saddr, __be32 daddr)
++{
++ if (sysctl_tcp_timestamps != 1)
++ return 0;
++
++ return jhash_3words((__force u32)saddr, (__force u32)daddr,
++ ts_secret[0], ts_secret[1]);
++}
+
+ u32 secure_tcp_sequence_number(__be32 saddr, __be32 daddr,
+ __be16 sport, __be16 dport, u32 *tsoff)
+@@ -101,7 +126,7 @@ u32 secure_tcp_sequence_number(__be32 saddr, __be32 daddr,
+
+ md5_transform(hash, net_secret);
+
+- *tsoff = sysctl_tcp_timestamps == 1 ? hash[1] : 0;
++ *tsoff = secure_tcp_ts_off(saddr, daddr);
+ return seq_scale(hash[0]);
+ }
+
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index aa3a13378c90..887995e6df9a 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3078,22 +3078,32 @@ struct sk_buff *skb_segment(struct sk_buff *head_skb,
+ if (sg && csum && (mss != GSO_BY_FRAGS)) {
+ if (!(features & NETIF_F_GSO_PARTIAL)) {
+ struct sk_buff *iter;
++ unsigned int frag_len;
+
+ if (!list_skb ||
+ !net_gso_ok(features, skb_shinfo(head_skb)->gso_type))
+ goto normal;
+
+- /* Split the buffer at the frag_list pointer.
+- * This is based on the assumption that all
+- * buffers in the chain excluding the last
+- * containing the same amount of data.
++ /* If we get here then all the required
++ * GSO features except frag_list are supported.
++ * Try to split the SKB to multiple GSO SKBs
++ * with no frag_list.
++ * Currently we can do that only when the buffers don't
++ * have a linear part and all the buffers except
++ * the last are of the same length.
+ */
++ frag_len = list_skb->len;
+ skb_walk_frags(head_skb, iter) {
++ if (frag_len != iter->len && iter->next)
++ goto normal;
+ if (skb_headlen(iter))
+ goto normal;
+
+ len -= iter->len;
+ }
++
++ if (len != frag_len)
++ goto normal;
+ }
+
+ /* GSO partial only requires that we trim off any excess that
+@@ -3690,6 +3700,15 @@ static void sock_rmem_free(struct sk_buff *skb)
+ atomic_sub(skb->truesize, &sk->sk_rmem_alloc);
+ }
+
++static void skb_set_err_queue(struct sk_buff *skb)
++{
++ /* pkt_type of skbs received on local sockets is never PACKET_OUTGOING.
++ * So, it is safe to (mis)use it to mark skbs on the error queue.
++ */
++ skb->pkt_type = PACKET_OUTGOING;
++ BUILD_BUG_ON(PACKET_OUTGOING == 0);
++}
++
+ /*
+ * Note: We dont mem charge error packets (no sk_forward_alloc changes)
+ */
+@@ -3703,6 +3722,7 @@ int sock_queue_err_skb(struct sock *sk, struct sk_buff *skb)
+ skb->sk = sk;
+ skb->destructor = sock_rmem_free;
+ atomic_add(skb->truesize, &sk->sk_rmem_alloc);
++ skb_set_err_queue(skb);
+
+ /* before exiting rcu section, make sure dst is refcounted */
+ skb_dst_force(skb);
+@@ -3779,16 +3799,21 @@ EXPORT_SYMBOL(skb_clone_sk);
+
+ static void __skb_complete_tx_timestamp(struct sk_buff *skb,
+ struct sock *sk,
+- int tstype)
++ int tstype,
++ bool opt_stats)
+ {
+ struct sock_exterr_skb *serr;
+ int err;
+
++ BUILD_BUG_ON(sizeof(struct sock_exterr_skb) > sizeof(skb->cb));
++
+ serr = SKB_EXT_ERR(skb);
+ memset(serr, 0, sizeof(*serr));
+ serr->ee.ee_errno = ENOMSG;
+ serr->ee.ee_origin = SO_EE_ORIGIN_TIMESTAMPING;
+ serr->ee.ee_info = tstype;
++ serr->opt_stats = opt_stats;
++ serr->header.h4.iif = skb->dev ? skb->dev->ifindex : 0;
+ if (sk->sk_tsflags & SOF_TIMESTAMPING_OPT_ID) {
+ serr->ee.ee_data = skb_shinfo(skb)->tskey;
+ if (sk->sk_protocol == IPPROTO_TCP &&
+@@ -3829,7 +3854,7 @@ void skb_complete_tx_timestamp(struct sk_buff *skb,
+ */
+ if (likely(atomic_inc_not_zero(&sk->sk_refcnt))) {
+ *skb_hwtstamps(skb) = *hwtstamps;
+- __skb_complete_tx_timestamp(skb, sk, SCM_TSTAMP_SND);
++ __skb_complete_tx_timestamp(skb, sk, SCM_TSTAMP_SND, false);
+ sock_put(sk);
+ }
+ }
+@@ -3840,7 +3865,7 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb,
+ struct sock *sk, int tstype)
+ {
+ struct sk_buff *skb;
+- bool tsonly;
++ bool tsonly, opt_stats = false;
+
+ if (!sk)
+ return;
+@@ -3853,9 +3878,10 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb,
+ #ifdef CONFIG_INET
+ if ((sk->sk_tsflags & SOF_TIMESTAMPING_OPT_STATS) &&
+ sk->sk_protocol == IPPROTO_TCP &&
+- sk->sk_type == SOCK_STREAM)
++ sk->sk_type == SOCK_STREAM) {
+ skb = tcp_get_timestamping_opt_stats(sk);
+- else
++ opt_stats = true;
++ } else
+ #endif
+ skb = alloc_skb(0, GFP_ATOMIC);
+ } else {
+@@ -3874,7 +3900,7 @@ void __skb_tstamp_tx(struct sk_buff *orig_skb,
+ else
+ skb->tstamp = ktime_get_real();
+
+- __skb_complete_tx_timestamp(skb, sk, tstype);
++ __skb_complete_tx_timestamp(skb, sk, tstype, opt_stats);
+ }
+ EXPORT_SYMBOL_GPL(__skb_tstamp_tx);
+
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index fc4bf4d54158..fcf53a399560 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -488,16 +488,15 @@ static bool ipv4_datagram_support_cmsg(const struct sock *sk,
+ return false;
+
+ /* Support IP_PKTINFO on tstamp packets if requested, to correlate
+- * timestamp with egress dev. Not possible for packets without dev
++ * timestamp with egress dev. Not possible for packets without iif
+ * or without payload (SOF_TIMESTAMPING_OPT_TSONLY).
+ */
+- if ((!(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_CMSG)) ||
+- (!skb->dev))
++ info = PKTINFO_SKB_CB(skb);
++ if (!(sk->sk_tsflags & SOF_TIMESTAMPING_OPT_CMSG) ||
++ !info->ipi_ifindex)
+ return false;
+
+- info = PKTINFO_SKB_CB(skb);
+ info->ipi_spec_dst.s_addr = ip_hdr(skb)->saddr;
+- info->ipi_ifindex = skb->dev->ifindex;
+ return true;
+ }
+
+diff --git a/net/ipv4/ping.c b/net/ipv4/ping.c
+index 68d77b1f1495..51e2f3c5e954 100644
+--- a/net/ipv4/ping.c
++++ b/net/ipv4/ping.c
+@@ -156,17 +156,18 @@ int ping_hash(struct sock *sk)
+ void ping_unhash(struct sock *sk)
+ {
+ struct inet_sock *isk = inet_sk(sk);
++
+ pr_debug("ping_unhash(isk=%p,isk->num=%u)\n", isk, isk->inet_num);
++ write_lock_bh(&ping_table.lock);
+ if (sk_hashed(sk)) {
+- write_lock_bh(&ping_table.lock);
+ hlist_nulls_del(&sk->sk_nulls_node);
+ sk_nulls_node_init(&sk->sk_nulls_node);
+ sock_put(sk);
+ isk->inet_num = 0;
+ isk->inet_sport = 0;
+ sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
+- write_unlock_bh(&ping_table.lock);
+ }
++ write_unlock_bh(&ping_table.lock);
+ }
+ EXPORT_SYMBOL_GPL(ping_unhash);
+
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 8976887dc83e..6263af2f6ce8 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -2608,7 +2608,7 @@ static int inet_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh)
+ skb_reset_network_header(skb);
+
+ /* Bugfix: need to give ip_route_input enough of an IP header to not gag. */
+- ip_hdr(skb)->protocol = IPPROTO_ICMP;
++ ip_hdr(skb)->protocol = IPPROTO_UDP;
+ skb_reserve(skb, MAX_HEADER + sizeof(struct iphdr));
+
+ src = tb[RTA_SRC] ? nla_get_in_addr(tb[RTA_SRC]) : 0;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 0efb4c7f6704..53fa3a4275de 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2301,6 +2301,7 @@ int tcp_disconnect(struct sock *sk, int flags)
+ tcp_init_send_head(sk);
+ memset(&tp->rx_opt, 0, sizeof(tp->rx_opt));
+ __sk_dst_reset(sk);
++ tcp_saved_syn_free(tp);
+
+ WARN_ON(inet->inet_num && !icsk->icsk_bind_hash);
+
+diff --git a/net/ipv4/tcp_cong.c b/net/ipv4/tcp_cong.c
+index 79c4817abc94..6e3c512054a6 100644
+--- a/net/ipv4/tcp_cong.c
++++ b/net/ipv4/tcp_cong.c
+@@ -168,12 +168,8 @@ void tcp_assign_congestion_control(struct sock *sk)
+ }
+ out:
+ rcu_read_unlock();
++ memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));
+
+- /* Clear out private data before diag gets it and
+- * the ca has not been initialized.
+- */
+- if (ca->get_info)
+- memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));
+ if (ca->flags & TCP_CONG_NEEDS_ECN)
+ INET_ECN_xmit(sk);
+ else
+@@ -200,11 +196,10 @@ static void tcp_reinit_congestion_control(struct sock *sk,
+ tcp_cleanup_congestion_control(sk);
+ icsk->icsk_ca_ops = ca;
+ icsk->icsk_ca_setsockopt = 1;
++ memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));
+
+- if (sk->sk_state != TCP_CLOSE) {
+- memset(icsk->icsk_ca_priv, 0, sizeof(icsk->icsk_ca_priv));
++ if (sk->sk_state != TCP_CLOSE)
+ tcp_init_congestion_control(sk);
+- }
+ }
+
+ /* Manage refcounts on socket close. */
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index a7bcc0ab5e99..ec76bbee2c35 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3263,14 +3263,24 @@ static void addrconf_gre_config(struct net_device *dev)
+ static int fixup_permanent_addr(struct inet6_dev *idev,
+ struct inet6_ifaddr *ifp)
+ {
+- if (!ifp->rt) {
+- struct rt6_info *rt;
++ /* rt6i_ref == 0 means the host route was removed from the
++ * FIB, for example, if 'lo' device is taken down. In that
++ * case regenerate the host route.
++ */
++ if (!ifp->rt || !atomic_read(&ifp->rt->rt6i_ref)) {
++ struct rt6_info *rt, *prev;
+
+ rt = addrconf_dst_alloc(idev, &ifp->addr, false);
+ if (unlikely(IS_ERR(rt)))
+ return PTR_ERR(rt);
+
++ /* ifp->rt can be accessed outside of rtnl */
++ spin_lock(&ifp->lock);
++ prev = ifp->rt;
+ ifp->rt = rt;
++ spin_unlock(&ifp->lock);
++
++ ip6_rt_put(prev);
+ }
+
+ if (!(ifp->flags & IFA_F_NOPREFIXROUTE)) {
+@@ -3618,14 +3628,19 @@ static int addrconf_ifdown(struct net_device *dev, int how)
+ INIT_LIST_HEAD(&del_list);
+ list_for_each_entry_safe(ifa, tmp, &idev->addr_list, if_list) {
+ struct rt6_info *rt = NULL;
++ bool keep;
+
+ addrconf_del_dad_work(ifa);
+
++ keep = keep_addr && (ifa->flags & IFA_F_PERMANENT) &&
++ !addr_is_local(&ifa->addr);
++ if (!keep)
++ list_move(&ifa->if_list, &del_list);
++
+ write_unlock_bh(&idev->lock);
+ spin_lock_bh(&ifa->lock);
+
+- if (keep_addr && (ifa->flags & IFA_F_PERMANENT) &&
+- !addr_is_local(&ifa->addr)) {
++ if (keep) {
+ /* set state to skip the notifier below */
+ state = INET6_IFADDR_STATE_DEAD;
+ ifa->state = 0;
+@@ -3637,8 +3652,6 @@ static int addrconf_ifdown(struct net_device *dev, int how)
+ } else {
+ state = ifa->state;
+ ifa->state = INET6_IFADDR_STATE_DEAD;
+-
+- list_move(&ifa->if_list, &del_list);
+ }
+
+ spin_unlock_bh(&ifa->lock);
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index eec27f87efac..e011122ebd43 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -405,9 +405,6 @@ static inline bool ipv6_datagram_support_addr(struct sock_exterr_skb *serr)
+ * At one point, excluding local errors was a quick test to identify icmp/icmp6
+ * errors. This is no longer true, but the test remained, so the v6 stack,
+ * unlike v4, also honors cmsg requests on all wifi and timestamp errors.
+- *
+- * Timestamp code paths do not initialize the fields expected by cmsg:
+- * the PKTINFO fields in skb->cb[]. Fill those in here.
+ */
+ static bool ip6_datagram_support_cmsg(struct sk_buff *skb,
+ struct sock_exterr_skb *serr)
+@@ -419,14 +416,9 @@ static bool ip6_datagram_support_cmsg(struct sk_buff *skb,
+ if (serr->ee.ee_origin == SO_EE_ORIGIN_LOCAL)
+ return false;
+
+- if (!skb->dev)
++ if (!IP6CB(skb)->iif)
+ return false;
+
+- if (skb->protocol == htons(ETH_P_IPV6))
+- IP6CB(skb)->iif = skb->dev->ifindex;
+- else
+- PKTINFO_SKB_CB(skb)->ipi_ifindex = skb->dev->ifindex;
+-
+ return true;
+ }
+
+diff --git a/net/ipv6/exthdrs.c b/net/ipv6/exthdrs.c
+index 275cac628a95..d32e2110aff2 100644
+--- a/net/ipv6/exthdrs.c
++++ b/net/ipv6/exthdrs.c
+@@ -388,7 +388,6 @@ static int ipv6_srh_rcv(struct sk_buff *skb)
+ icmpv6_param_prob(skb, ICMPV6_HDR_FIELD,
+ ((&hdr->segments_left) -
+ skb_network_header(skb)));
+- kfree_skb(skb);
+ return -1;
+ }
+
+@@ -910,6 +909,8 @@ static void ipv6_push_rthdr(struct sk_buff *skb, u8 *proto,
+ {
+ switch (opt->type) {
+ case IPV6_SRCRT_TYPE_0:
++ case IPV6_SRCRT_STRICT:
++ case IPV6_SRCRT_TYPE_2:
+ ipv6_push_rthdr0(skb, proto, opt, addr_p, saddr);
+ break;
+ case IPV6_SRCRT_TYPE_4:
+@@ -1164,6 +1165,8 @@ struct in6_addr *fl6_update_dst(struct flowi6 *fl6,
+
+ switch (opt->srcrt->type) {
+ case IPV6_SRCRT_TYPE_0:
++ case IPV6_SRCRT_STRICT:
++ case IPV6_SRCRT_TYPE_2:
+ fl6->daddr = *((struct rt0_hdr *)opt->srcrt)->addr;
+ break;
+ case IPV6_SRCRT_TYPE_4:
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 75fac933c209..a9692ec0cd6d 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -1037,7 +1037,7 @@ int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield,
+ struct ip6_tnl *t = netdev_priv(dev);
+ struct net *net = t->net;
+ struct net_device_stats *stats = &t->dev->stats;
+- struct ipv6hdr *ipv6h = ipv6_hdr(skb);
++ struct ipv6hdr *ipv6h;
+ struct ipv6_tel_txoption opt;
+ struct dst_entry *dst = NULL, *ndst = NULL;
+ struct net_device *tdev;
+@@ -1057,26 +1057,28 @@ int ip6_tnl_xmit(struct sk_buff *skb, struct net_device *dev, __u8 dsfield,
+
+ /* NBMA tunnel */
+ if (ipv6_addr_any(&t->parms.raddr)) {
+- struct in6_addr *addr6;
+- struct neighbour *neigh;
+- int addr_type;
++ if (skb->protocol == htons(ETH_P_IPV6)) {
++ struct in6_addr *addr6;
++ struct neighbour *neigh;
++ int addr_type;
+
+- if (!skb_dst(skb))
+- goto tx_err_link_failure;
++ if (!skb_dst(skb))
++ goto tx_err_link_failure;
+
+- neigh = dst_neigh_lookup(skb_dst(skb),
+- &ipv6_hdr(skb)->daddr);
+- if (!neigh)
+- goto tx_err_link_failure;
++ neigh = dst_neigh_lookup(skb_dst(skb),
++ &ipv6_hdr(skb)->daddr);
++ if (!neigh)
++ goto tx_err_link_failure;
+
+- addr6 = (struct in6_addr *)&neigh->primary_key;
+- addr_type = ipv6_addr_type(addr6);
++ addr6 = (struct in6_addr *)&neigh->primary_key;
++ addr_type = ipv6_addr_type(addr6);
+
+- if (addr_type == IPV6_ADDR_ANY)
+- addr6 = &ipv6_hdr(skb)->daddr;
++ if (addr_type == IPV6_ADDR_ANY)
++ addr6 = &ipv6_hdr(skb)->daddr;
+
+- memcpy(&fl6->daddr, addr6, sizeof(fl6->daddr));
+- neigh_release(neigh);
++ memcpy(&fl6->daddr, addr6, sizeof(fl6->daddr));
++ neigh_release(neigh);
++ }
+ } else if (!(t->parms.flags &
+ (IP6_TNL_F_USE_ORIG_TCLASS | IP6_TNL_F_USE_ORIG_FWMARK))) {
+ /* enable the cache only only if the routing decision does
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 604d8953c775..72a00e4961ba 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -774,7 +774,8 @@ static struct net_device *ip6mr_reg_vif(struct net *net, struct mr6_table *mrt)
+ * Delete a VIF entry
+ */
+
+-static int mif6_delete(struct mr6_table *mrt, int vifi, struct list_head *head)
++static int mif6_delete(struct mr6_table *mrt, int vifi, int notify,
++ struct list_head *head)
+ {
+ struct mif_device *v;
+ struct net_device *dev;
+@@ -820,7 +821,7 @@ static int mif6_delete(struct mr6_table *mrt, int vifi, struct list_head *head)
+ dev->ifindex, &in6_dev->cnf);
+ }
+
+- if (v->flags & MIFF_REGISTER)
++ if ((v->flags & MIFF_REGISTER) && !notify)
+ unregister_netdevice_queue(dev, head);
+
+ dev_put(dev);
+@@ -1331,7 +1332,6 @@ static int ip6mr_device_event(struct notifier_block *this,
+ struct mr6_table *mrt;
+ struct mif_device *v;
+ int ct;
+- LIST_HEAD(list);
+
+ if (event != NETDEV_UNREGISTER)
+ return NOTIFY_DONE;
+@@ -1340,10 +1340,9 @@ static int ip6mr_device_event(struct notifier_block *this,
+ v = &mrt->vif6_table[0];
+ for (ct = 0; ct < mrt->maxvif; ct++, v++) {
+ if (v->dev == dev)
+- mif6_delete(mrt, ct, &list);
++ mif6_delete(mrt, ct, 1, NULL);
+ }
+ }
+- unregister_netdevice_many(&list);
+
+ return NOTIFY_DONE;
+ }
+@@ -1552,7 +1551,7 @@ static void mroute_clean_tables(struct mr6_table *mrt, bool all)
+ for (i = 0; i < mrt->maxvif; i++) {
+ if (!all && (mrt->vif6_table[i].flags & VIFF_STATIC))
+ continue;
+- mif6_delete(mrt, i, &list);
++ mif6_delete(mrt, i, 0, &list);
+ }
+ unregister_netdevice_many(&list);
+
+@@ -1706,7 +1705,7 @@ int ip6_mroute_setsockopt(struct sock *sk, int optname, char __user *optval, uns
+ if (copy_from_user(&mifi, optval, sizeof(mifi_t)))
+ return -EFAULT;
+ rtnl_lock();
+- ret = mif6_delete(mrt, mifi, NULL);
++ ret = mif6_delete(mrt, mifi, 0, NULL);
+ rtnl_unlock();
+ return ret;
+
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index ea89073c8247..294fb6f743cb 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -1174,8 +1174,7 @@ static int rawv6_ioctl(struct sock *sk, int cmd, unsigned long arg)
+ spin_lock_bh(&sk->sk_receive_queue.lock);
+ skb = skb_peek(&sk->sk_receive_queue);
+ if (skb)
+- amount = skb_tail_pointer(skb) -
+- skb_transport_header(skb);
++ amount = skb->len;
+ spin_unlock_bh(&sk->sk_receive_queue.lock);
+ return put_user(amount, (int __user *)arg);
+ }
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 7ea85370c11c..523681a5c898 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1831,6 +1831,10 @@ static struct rt6_info *ip6_route_info_create(struct fib6_config *cfg)
+ int addr_type;
+ int err = -EINVAL;
+
++ /* RTF_PCPU is an internal flag; can not be set by userspace */
++ if (cfg->fc_flags & RTF_PCPU)
++ goto out;
++
+ if (cfg->fc_dst_len > 128 || cfg->fc_src_len > 128)
+ goto out;
+ #ifndef CONFIG_IPV6_SUBTREES
+diff --git a/net/ipv6/seg6.c b/net/ipv6/seg6.c
+index a855eb325b03..5f44ffed2576 100644
+--- a/net/ipv6/seg6.c
++++ b/net/ipv6/seg6.c
+@@ -53,6 +53,9 @@ bool seg6_validate_srh(struct ipv6_sr_hdr *srh, int len)
+ struct sr6_tlv *tlv;
+ unsigned int tlv_len;
+
++ if (trailing < sizeof(*tlv))
++ return false;
++
+ tlv = (struct sr6_tlv *)((unsigned char *)srh + tlv_offset);
+ tlv_len = sizeof(*tlv) + tlv->len;
+
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index a646f3481240..fecad1098cf8 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -1685,7 +1685,7 @@ static int kcm_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ struct kcm_attach info;
+
+ if (copy_from_user(&info, (void __user *)arg, sizeof(info)))
+- err = -EFAULT;
++ return -EFAULT;
+
+ err = kcm_attach_ioctl(sock, &info);
+
+@@ -1695,7 +1695,7 @@ static int kcm_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ struct kcm_unattach info;
+
+ if (copy_from_user(&info, (void __user *)arg, sizeof(info)))
+- err = -EFAULT;
++ return -EFAULT;
+
+ err = kcm_unattach_ioctl(sock, &info);
+
+@@ -1706,7 +1706,7 @@ static int kcm_ioctl(struct socket *sock, unsigned int cmd, unsigned long arg)
+ struct socket *newsock = NULL;
+
+ if (copy_from_user(&info, (void __user *)arg, sizeof(info)))
+- err = -EFAULT;
++ return -EFAULT;
+
+ err = kcm_clone(sock, &info, &newsock);
+
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 85948c69b236..56036ab5dcb7 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -278,7 +278,8 @@ struct l2tp_session *l2tp_session_find(struct net *net, struct l2tp_tunnel *tunn
+ }
+ EXPORT_SYMBOL_GPL(l2tp_session_find);
+
+-struct l2tp_session *l2tp_session_find_nth(struct l2tp_tunnel *tunnel, int nth)
++struct l2tp_session *l2tp_session_get_nth(struct l2tp_tunnel *tunnel, int nth,
++ bool do_ref)
+ {
+ int hash;
+ struct l2tp_session *session;
+@@ -288,6 +289,9 @@ struct l2tp_session *l2tp_session_find_nth(struct l2tp_tunnel *tunnel, int nth)
+ for (hash = 0; hash < L2TP_HASH_SIZE; hash++) {
+ hlist_for_each_entry(session, &tunnel->session_hlist[hash], hlist) {
+ if (++count > nth) {
++ l2tp_session_inc_refcount(session);
++ if (do_ref && session->ref)
++ session->ref(session);
+ read_unlock_bh(&tunnel->hlist_lock);
+ return session;
+ }
+@@ -298,7 +302,7 @@ struct l2tp_session *l2tp_session_find_nth(struct l2tp_tunnel *tunnel, int nth)
+
+ return NULL;
+ }
+-EXPORT_SYMBOL_GPL(l2tp_session_find_nth);
++EXPORT_SYMBOL_GPL(l2tp_session_get_nth);
+
+ /* Lookup a session by interface name.
+ * This is very inefficient but is only used by management interfaces.
+diff --git a/net/l2tp/l2tp_core.h b/net/l2tp/l2tp_core.h
+index aebf281d09ee..221648b07b3c 100644
+--- a/net/l2tp/l2tp_core.h
++++ b/net/l2tp/l2tp_core.h
+@@ -233,7 +233,8 @@ static inline struct l2tp_tunnel *l2tp_sock_to_tunnel(struct sock *sk)
+ struct l2tp_session *l2tp_session_find(struct net *net,
+ struct l2tp_tunnel *tunnel,
+ u32 session_id);
+-struct l2tp_session *l2tp_session_find_nth(struct l2tp_tunnel *tunnel, int nth);
++struct l2tp_session *l2tp_session_get_nth(struct l2tp_tunnel *tunnel, int nth,
++ bool do_ref);
+ struct l2tp_session *l2tp_session_find_by_ifname(struct net *net, char *ifname);
+ struct l2tp_tunnel *l2tp_tunnel_find(struct net *net, u32 tunnel_id);
+ struct l2tp_tunnel *l2tp_tunnel_find_nth(struct net *net, int nth);
+diff --git a/net/l2tp/l2tp_debugfs.c b/net/l2tp/l2tp_debugfs.c
+index 2d6760a2ae34..d100aed3d06f 100644
+--- a/net/l2tp/l2tp_debugfs.c
++++ b/net/l2tp/l2tp_debugfs.c
+@@ -53,7 +53,7 @@ static void l2tp_dfs_next_tunnel(struct l2tp_dfs_seq_data *pd)
+
+ static void l2tp_dfs_next_session(struct l2tp_dfs_seq_data *pd)
+ {
+- pd->session = l2tp_session_find_nth(pd->tunnel, pd->session_idx);
++ pd->session = l2tp_session_get_nth(pd->tunnel, pd->session_idx, true);
+ pd->session_idx++;
+
+ if (pd->session == NULL) {
+@@ -238,10 +238,14 @@ static int l2tp_dfs_seq_show(struct seq_file *m, void *v)
+ }
+
+ /* Show the tunnel or session context */
+- if (pd->session == NULL)
++ if (!pd->session) {
+ l2tp_dfs_seq_tunnel_show(m, pd->tunnel);
+- else
++ } else {
+ l2tp_dfs_seq_session_show(m, pd->session);
++ if (pd->session->deref)
++ pd->session->deref(pd->session);
++ l2tp_session_dec_refcount(pd->session);
++ }
+
+ out:
+ return 0;
+diff --git a/net/l2tp/l2tp_ip.c b/net/l2tp/l2tp_ip.c
+index 3ed30153a6f5..fa2bcfce53df 100644
+--- a/net/l2tp/l2tp_ip.c
++++ b/net/l2tp/l2tp_ip.c
+@@ -171,9 +171,10 @@ static int l2tp_ip_recv(struct sk_buff *skb)
+
+ tunnel_id = ntohl(*(__be32 *) &skb->data[4]);
+ tunnel = l2tp_tunnel_find(net, tunnel_id);
+- if (tunnel != NULL)
++ if (tunnel) {
+ sk = tunnel->sock;
+- else {
++ sock_hold(sk);
++ } else {
+ struct iphdr *iph = (struct iphdr *) skb_network_header(skb);
+
+ read_lock_bh(&l2tp_ip_lock);
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index f47c45250f86..4e4fa1538cbb 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -183,9 +183,10 @@ static int l2tp_ip6_recv(struct sk_buff *skb)
+
+ tunnel_id = ntohl(*(__be32 *) &skb->data[4]);
+ tunnel = l2tp_tunnel_find(net, tunnel_id);
+- if (tunnel != NULL)
++ if (tunnel) {
+ sk = tunnel->sock;
+- else {
++ sock_hold(sk);
++ } else {
+ struct ipv6hdr *iph = ipv6_hdr(skb);
+
+ read_lock_bh(&l2tp_ip6_lock);
+diff --git a/net/l2tp/l2tp_netlink.c b/net/l2tp/l2tp_netlink.c
+index 3620fba31786..ad191a786806 100644
+--- a/net/l2tp/l2tp_netlink.c
++++ b/net/l2tp/l2tp_netlink.c
+@@ -852,7 +852,7 @@ static int l2tp_nl_cmd_session_dump(struct sk_buff *skb, struct netlink_callback
+ goto out;
+ }
+
+- session = l2tp_session_find_nth(tunnel, si);
++ session = l2tp_session_get_nth(tunnel, si, false);
+ if (session == NULL) {
+ ti++;
+ tunnel = NULL;
+@@ -862,8 +862,11 @@ static int l2tp_nl_cmd_session_dump(struct sk_buff *skb, struct netlink_callback
+
+ if (l2tp_nl_session_send(skb, NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq, NLM_F_MULTI,
+- session, L2TP_CMD_SESSION_GET) < 0)
++ session, L2TP_CMD_SESSION_GET) < 0) {
++ l2tp_session_dec_refcount(session);
+ break;
++ }
++ l2tp_session_dec_refcount(session);
+
+ si++;
+ }
+diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
+index 36cc56fd0418..781d22272f4a 100644
+--- a/net/l2tp/l2tp_ppp.c
++++ b/net/l2tp/l2tp_ppp.c
+@@ -450,6 +450,10 @@ static void pppol2tp_session_close(struct l2tp_session *session)
+ static void pppol2tp_session_destruct(struct sock *sk)
+ {
+ struct l2tp_session *session = sk->sk_user_data;
++
++ skb_queue_purge(&sk->sk_receive_queue);
++ skb_queue_purge(&sk->sk_write_queue);
++
+ if (session) {
+ sk->sk_user_data = NULL;
+ BUG_ON(session->magic != L2TP_SESSION_MAGIC);
+@@ -488,9 +492,6 @@ static int pppol2tp_release(struct socket *sock)
+ l2tp_session_queue_purge(session);
+ sock_put(sk);
+ }
+- skb_queue_purge(&sk->sk_receive_queue);
+- skb_queue_purge(&sk->sk_write_queue);
+-
+ release_sock(sk);
+
+ /* This will delete the session context via
+@@ -1554,7 +1555,7 @@ static void pppol2tp_next_tunnel(struct net *net, struct pppol2tp_seq_data *pd)
+
+ static void pppol2tp_next_session(struct net *net, struct pppol2tp_seq_data *pd)
+ {
+- pd->session = l2tp_session_find_nth(pd->tunnel, pd->session_idx);
++ pd->session = l2tp_session_get_nth(pd->tunnel, pd->session_idx, true);
+ pd->session_idx++;
+
+ if (pd->session == NULL) {
+@@ -1681,10 +1682,14 @@ static int pppol2tp_seq_show(struct seq_file *m, void *v)
+
+ /* Show the tunnel or session context.
+ */
+- if (pd->session == NULL)
++ if (!pd->session) {
+ pppol2tp_seq_tunnel_show(m, pd->tunnel);
+- else
++ } else {
+ pppol2tp_seq_session_show(m, pd->session);
++ if (pd->session->deref)
++ pd->session->deref(pd->session);
++ l2tp_session_dec_refcount(pd->session);
++ }
+
+ out:
+ return 0;
+@@ -1843,4 +1848,4 @@ MODULE_DESCRIPTION("PPP over L2TP over UDP");
+ MODULE_LICENSE("GPL");
+ MODULE_VERSION(PPPOL2TP_DRV_VERSION);
+ MODULE_ALIAS_NET_PF_PROTO(PF_PPPOX, PX_PROTO_OL2TP);
+-MODULE_ALIAS_L2TP_PWTYPE(11);
++MODULE_ALIAS_L2TP_PWTYPE(7);
+diff --git a/net/openvswitch/flow.c b/net/openvswitch/flow.c
+index 2c0a00f7f1b7..bb789359a29b 100644
+--- a/net/openvswitch/flow.c
++++ b/net/openvswitch/flow.c
+@@ -527,7 +527,7 @@ static int key_extract(struct sk_buff *skb, struct sw_flow_key *key)
+
+ /* Link layer. */
+ clear_vlan(key);
+- if (key->mac_proto == MAC_PROTO_NONE) {
++ if (ovs_key_mac_proto(key) == MAC_PROTO_NONE) {
+ if (unlikely(eth_type_vlan(skb->protocol)))
+ return -EINVAL;
+
+@@ -745,7 +745,13 @@ static int key_extract(struct sk_buff *skb, struct sw_flow_key *key)
+
+ int ovs_flow_key_update(struct sk_buff *skb, struct sw_flow_key *key)
+ {
+- return key_extract(skb, key);
++ int res;
++
++ res = key_extract(skb, key);
++ if (!res)
++ key->mac_proto &= ~SW_FLOW_KEY_INVALID;
++
++ return res;
+ }
+
+ static int key_extract_mac_proto(struct sk_buff *skb)
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 5c919933a39b..0f074c96f43f 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3644,6 +3644,8 @@ packet_setsockopt(struct socket *sock, int level, int optname, char __user *optv
+ return -EBUSY;
+ if (copy_from_user(&val, optval, sizeof(val)))
+ return -EFAULT;
++ if (val > INT_MAX)
++ return -EINVAL;
+ po->tp_reserve = val;
+ return 0;
+ }
+@@ -4189,6 +4191,8 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ rb->frames_per_block = req->tp_block_size / req->tp_frame_size;
+ if (unlikely(rb->frames_per_block == 0))
+ goto out;
++ if (unlikely(req->tp_block_size > UINT_MAX / req->tp_block_nr))
++ goto out;
+ if (unlikely((rb->frames_per_block * req->tp_block_nr) !=
+ req->tp_frame_nr))
+ goto out;
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index d04a8b66098c..6932cf34fea8 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -6860,6 +6860,9 @@ int sctp_inet_listen(struct socket *sock, int backlog)
+ if (sock->state != SS_UNCONNECTED)
+ goto out;
+
++ if (!sctp_sstate(sk, LISTENING) && !sctp_sstate(sk, CLOSED))
++ goto out;
++
+ /* If backlog is zero, disable listening. */
+ if (!backlog) {
+ if (sctp_sstate(sk, CLOSED))
+diff --git a/net/socket.c b/net/socket.c
+index 02bd9249e295..6361d3161120 100644
+--- a/net/socket.c
++++ b/net/socket.c
+@@ -654,6 +654,16 @@ int kernel_sendmsg(struct socket *sock, struct msghdr *msg,
+ }
+ EXPORT_SYMBOL(kernel_sendmsg);
+
++static bool skb_is_err_queue(const struct sk_buff *skb)
++{
++ /* pkt_type of skbs enqueued on the error queue are set to
++ * PACKET_OUTGOING in skb_set_err_queue(). This is only safe to do
++ * in recvmsg, since skbs received on a local socket will never
++ * have a pkt_type of PACKET_OUTGOING.
++ */
++ return skb->pkt_type == PACKET_OUTGOING;
++}
++
+ /*
+ * called from sock_recv_timestamp() if sock_flag(sk, SOCK_RCVTSTAMP)
+ */
+@@ -697,7 +707,8 @@ void __sock_recv_timestamp(struct msghdr *msg, struct sock *sk,
+ put_cmsg(msg, SOL_SOCKET,
+ SCM_TIMESTAMPING, sizeof(tss), &tss);
+
+- if (skb->len && (sk->sk_tsflags & SOF_TIMESTAMPING_OPT_STATS))
++ if (skb_is_err_queue(skb) && skb->len &&
++ SKB_EXT_ERR(skb)->opt_stats)
+ put_cmsg(msg, SOL_SOCKET, SCM_TIMESTAMPING_OPT_STATS,
+ skb->len, skb->data);
+ }
+diff --git a/sound/core/seq/seq_lock.c b/sound/core/seq/seq_lock.c
+index 3b693e924db7..12ba83367b1b 100644
+--- a/sound/core/seq/seq_lock.c
++++ b/sound/core/seq/seq_lock.c
+@@ -28,19 +28,16 @@
+ /* wait until all locks are released */
+ void snd_use_lock_sync_helper(snd_use_lock_t *lockp, const char *file, int line)
+ {
+- int max_count = 5 * HZ;
++ int warn_count = 5 * HZ;
+
+ if (atomic_read(lockp) < 0) {
+ pr_warn("ALSA: seq_lock: lock trouble [counter = %d] in %s:%d\n", atomic_read(lockp), file, line);
+ return;
+ }
+ while (atomic_read(lockp) > 0) {
+- if (max_count == 0) {
+- pr_warn("ALSA: seq_lock: timeout [%d left] in %s:%d\n", atomic_read(lockp), file, line);
+- break;
+- }
++ if (warn_count-- == 0)
++ pr_warn("ALSA: seq_lock: waiting [%d left] in %s:%d\n", atomic_read(lockp), file, line);
+ schedule_timeout_uninterruptible(1);
+- max_count--;
+ }
+ }
+
+diff --git a/sound/firewire/lib.h b/sound/firewire/lib.h
+index f6769312ebfc..c3768cd494a5 100644
+--- a/sound/firewire/lib.h
++++ b/sound/firewire/lib.h
+@@ -45,7 +45,7 @@ struct snd_fw_async_midi_port {
+
+ struct snd_rawmidi_substream *substream;
+ snd_fw_async_midi_port_fill fill;
+- unsigned int consume_bytes;
++ int consume_bytes;
+ };
+
+ int snd_fw_async_midi_port_init(struct snd_fw_async_midi_port *port,
+diff --git a/sound/firewire/oxfw/oxfw.c b/sound/firewire/oxfw/oxfw.c
+index e629b88f7d93..474b06d8acd1 100644
+--- a/sound/firewire/oxfw/oxfw.c
++++ b/sound/firewire/oxfw/oxfw.c
+@@ -226,11 +226,11 @@ static void do_registration(struct work_struct *work)
+ if (err < 0)
+ goto error;
+
+- err = detect_quirks(oxfw);
++ err = snd_oxfw_stream_discover(oxfw);
+ if (err < 0)
+ goto error;
+
+- err = snd_oxfw_stream_discover(oxfw);
++ err = detect_quirks(oxfw);
+ if (err < 0)
+ goto error;
+
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 1bd985f01c73..342d8425bc1f 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -621,7 +621,7 @@ static struct snd_soc_dai_link byt_rt5640_dais[] = {
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .platform_name = "sst-mfld-platform",
+- .ignore_suspend = 1,
++ .nonatomic = true,
+ .dynamic = 1,
+ .dpcm_playback = 1,
+ .dpcm_capture = 1,
+@@ -634,7 +634,6 @@ static struct snd_soc_dai_link byt_rt5640_dais[] = {
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .platform_name = "sst-mfld-platform",
+- .ignore_suspend = 1,
+ .nonatomic = true,
+ .dynamic = 1,
+ .dpcm_playback = 1,
+@@ -661,6 +660,7 @@ static struct snd_soc_dai_link byt_rt5640_dais[] = {
+ | SND_SOC_DAIFMT_CBS_CFS,
+ .be_hw_params_fixup = byt_rt5640_codec_fixup,
+ .ignore_suspend = 1,
++ .nonatomic = true,
+ .dpcm_playback = 1,
+ .dpcm_capture = 1,
+ .init = byt_rt5640_init,
+diff --git a/sound/soc/intel/boards/bytcr_rt5651.c b/sound/soc/intel/boards/bytcr_rt5651.c
+index 2d24dc04b597..d938328dc64f 100644
+--- a/sound/soc/intel/boards/bytcr_rt5651.c
++++ b/sound/soc/intel/boards/bytcr_rt5651.c
+@@ -235,7 +235,6 @@ static struct snd_soc_dai_link byt_rt5651_dais[] = {
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .platform_name = "sst-mfld-platform",
+- .ignore_suspend = 1,
+ .nonatomic = true,
+ .dynamic = 1,
+ .dpcm_playback = 1,
+@@ -249,7 +248,6 @@ static struct snd_soc_dai_link byt_rt5651_dais[] = {
+ .codec_dai_name = "snd-soc-dummy-dai",
+ .codec_name = "snd-soc-dummy",
+ .platform_name = "sst-mfld-platform",
+- .ignore_suspend = 1,
+ .nonatomic = true,
+ .dynamic = 1,
+ .dpcm_playback = 1,
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index 853d7e43434a..e1aea9e60f33 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -2876,6 +2876,26 @@ static struct bpf_test tests[] = {
+ .prog_type = BPF_PROG_TYPE_LWT_XMIT,
+ },
+ {
++ "overlapping checks for direct packet access",
++ .insns = {
++ BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++ offsetof(struct __sk_buff, data)),
++ BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++ offsetof(struct __sk_buff, data_end)),
++ BPF_MOV64_REG(BPF_REG_0, BPF_REG_2),
++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_0, 8),
++ BPF_JMP_REG(BPF_JGT, BPF_REG_0, BPF_REG_3, 4),
++ BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 6),
++ BPF_JMP_REG(BPF_JGT, BPF_REG_1, BPF_REG_3, 1),
++ BPF_LDX_MEM(BPF_H, BPF_REG_0, BPF_REG_2, 6),
++ BPF_MOV64_IMM(BPF_REG_0, 0),
++ BPF_EXIT_INSN(),
++ },
++ .result = ACCEPT,
++ .prog_type = BPF_PROG_TYPE_LWT_XMIT,
++ },
++ {
+ "invalid access of tc_classid for LWT_IN",
+ .insns = {
+ BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_1,
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-05-08 10:45 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-05-08 10:45 UTC (permalink / raw
To: gentoo-commits
commit: 7287b48d4c0fe338c20e0ef320c23a4b2376c81c
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon May 8 10:44:53 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon May 8 10:44:53 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7287b48d
Linux patch 4.10.15
0000_README | 4 +
1014_linux-4.10.15.patch | 509 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 513 insertions(+)
diff --git a/0000_README b/0000_README
index 5295a7d..1e8afc8 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch: 1013_linux-4.10.14.patch
From: http://www.kernel.org
Desc: Linux 4.10.14
+Patch: 1014_linux-4.10.15.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.15
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1014_linux-4.10.15.patch b/1014_linux-4.10.15.patch
new file mode 100644
index 0000000..d485ffb
--- /dev/null
+++ b/1014_linux-4.10.15.patch
@@ -0,0 +1,509 @@
+diff --git a/Makefile b/Makefile
+index 48756653c42c..6f600fee5753 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/drivers/hwmon/it87.c b/drivers/hwmon/it87.c
+index 43146162c122..b99c1df48156 100644
+--- a/drivers/hwmon/it87.c
++++ b/drivers/hwmon/it87.c
+@@ -3115,7 +3115,7 @@ static int __init sm_it87_init(void)
+ {
+ int sioaddr[2] = { REG_2E, REG_4E };
+ struct it87_sio_data sio_data;
+- unsigned short isa_address;
++ unsigned short isa_address[2];
+ bool found = false;
+ int i, err;
+
+@@ -3125,15 +3125,29 @@ static int __init sm_it87_init(void)
+
+ for (i = 0; i < ARRAY_SIZE(sioaddr); i++) {
+ memset(&sio_data, 0, sizeof(struct it87_sio_data));
+- isa_address = 0;
+- err = it87_find(sioaddr[i], &isa_address, &sio_data);
+- if (err || isa_address == 0)
++ isa_address[i] = 0;
++ err = it87_find(sioaddr[i], &isa_address[i], &sio_data);
++ if (err || isa_address[i] == 0)
+ continue;
++ /*
++ * Don't register second chip if its ISA address matches
++ * the first chip's ISA address.
++ */
++ if (i && isa_address[i] == isa_address[0])
++ break;
+
+- err = it87_device_add(i, isa_address, &sio_data);
++ err = it87_device_add(i, isa_address[i], &sio_data);
+ if (err)
+ goto exit_dev_unregister;
++
+ found = true;
++
++ /*
++ * IT8705F may respond on both SIO addresses.
++ * Stop probing after finding one.
++ */
++ if (sio_data.type == it87)
++ break;
+ }
+
+ if (!found) {
+diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
+index a5a9b17f0f7f..5edc2a58edcc 100644
+--- a/drivers/md/dm-ioctl.c
++++ b/drivers/md/dm-ioctl.c
+@@ -1847,7 +1847,7 @@ static int ctl_ioctl(uint command, struct dm_ioctl __user *user)
+ if (r)
+ goto out;
+
+- param->data_size = sizeof(*param);
++ param->data_size = offsetof(struct dm_ioctl, data);
+ r = fn(param, input_param_size);
+
+ if (unlikely(param->flags & DM_BUFFER_FULL_FLAG) &&
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 7be04fc0d0e7..6f5d173ea9ff 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -400,8 +400,6 @@ MODULE_PARM_DESC(storvsc_vcpus_per_sub_channel, "Ratio of VCPUs to subchannels")
+ */
+ static int storvsc_timeout = 180;
+
+-static int msft_blist_flags = BLIST_TRY_VPD_PAGES;
+-
+ #if IS_ENABLED(CONFIG_SCSI_FC_ATTRS)
+ static struct scsi_transport_template *fc_transport_template;
+ #endif
+@@ -1283,6 +1281,22 @@ static int storvsc_do_io(struct hv_device *device,
+ return ret;
+ }
+
++static int storvsc_device_alloc(struct scsi_device *sdevice)
++{
++ /*
++ * Set blist flag to permit the reading of the VPD pages even when
++ * the target may claim SPC-2 compliance. MSFT targets currently
++ * claim SPC-2 compliance while they implement post SPC-2 features.
++ * With this flag we can correctly handle WRITE_SAME_16 issues.
++ *
++ * Hypervisor reports SCSI_UNKNOWN type for DVD ROM device but
++ * still supports REPORT LUN.
++ */
++ sdevice->sdev_bflags = BLIST_REPORTLUN2 | BLIST_TRY_VPD_PAGES;
++
++ return 0;
++}
++
+ static int storvsc_device_configure(struct scsi_device *sdevice)
+ {
+
+@@ -1298,14 +1312,6 @@ static int storvsc_device_configure(struct scsi_device *sdevice)
+ sdevice->no_write_same = 1;
+
+ /*
+- * Add blist flags to permit the reading of the VPD pages even when
+- * the target may claim SPC-2 compliance. MSFT targets currently
+- * claim SPC-2 compliance while they implement post SPC-2 features.
+- * With this patch we can correctly handle WRITE_SAME_16 issues.
+- */
+- sdevice->sdev_bflags |= msft_blist_flags;
+-
+- /*
+ * If the host is WIN8 or WIN8 R2, claim conformance to SPC-3
+ * if the device is a MSFT virtual device. If the host is
+ * WIN10 or newer, allow write_same.
+@@ -1569,6 +1575,7 @@ static struct scsi_host_template scsi_driver = {
+ .eh_host_reset_handler = storvsc_host_reset_handler,
+ .proc_name = "storvsc_host",
+ .eh_timed_out = storvsc_eh_timed_out,
++ .slave_alloc = storvsc_device_alloc,
+ .slave_configure = storvsc_device_configure,
+ .cmd_per_lun = 255,
+ .this_id = -1,
+diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
+index 54a7d078a3a8..7fa45f48e59d 100644
+--- a/fs/cifs/cifsglob.h
++++ b/fs/cifs/cifsglob.h
+@@ -241,6 +241,7 @@ struct smb_version_operations {
+ /* verify the message */
+ int (*check_message)(char *, unsigned int, struct TCP_Server_Info *);
+ bool (*is_oplock_break)(char *, struct TCP_Server_Info *);
++ int (*handle_cancelled_mid)(char *, struct TCP_Server_Info *);
+ void (*downgrade_oplock)(struct TCP_Server_Info *,
+ struct cifsInodeInfo *, bool);
+ /* process transaction2 response */
+@@ -1318,12 +1319,19 @@ struct mid_q_entry {
+ void *callback_data; /* general purpose pointer for callback */
+ void *resp_buf; /* pointer to received SMB header */
+ int mid_state; /* wish this were enum but can not pass to wait_event */
++ unsigned int mid_flags;
+ __le16 command; /* smb command code */
+ bool large_buf:1; /* if valid response, is pointer to large buf */
+ bool multiRsp:1; /* multiple trans2 responses for one request */
+ bool multiEnd:1; /* both received */
+ };
+
++struct close_cancelled_open {
++ struct cifs_fid fid;
++ struct cifs_tcon *tcon;
++ struct work_struct work;
++};
++
+ /* Make code in transport.c a little cleaner by moving
+ update of optional stats into function below */
+ #ifdef CONFIG_CIFS_STATS2
+@@ -1455,6 +1463,9 @@ static inline void free_dfs_info_array(struct dfs_info3_param *param,
+ #define MID_RESPONSE_MALFORMED 0x10
+ #define MID_SHUTDOWN 0x20
+
++/* Flags */
++#define MID_WAIT_CANCELLED 1 /* Cancelled while waiting for response */
++
+ /* Types of response buffer returned from SendReceive2 */
+ #define CIFS_NO_BUFFER 0 /* Response buffer not returned */
+ #define CIFS_SMALL_BUFFER 1
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index b47261858e6d..2dc92351027b 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -1423,6 +1423,8 @@ cifs_readv_discard(struct TCP_Server_Info *server, struct mid_q_entry *mid)
+
+ length = discard_remaining_data(server);
+ dequeue_mid(mid, rdata->result);
++ mid->resp_buf = server->smallbuf;
++ server->smallbuf = NULL;
+ return length;
+ }
+
+@@ -1534,6 +1536,8 @@ cifs_readv_receive(struct TCP_Server_Info *server, struct mid_q_entry *mid)
+ return cifs_readv_discard(server, mid);
+
+ dequeue_mid(mid, false);
++ mid->resp_buf = server->smallbuf;
++ server->smallbuf = NULL;
+ return length;
+ }
+
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 35ae49ed1f76..acf7bc1eab77 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -887,10 +887,19 @@ cifs_demultiplex_thread(void *p)
+
+ server->lstrp = jiffies;
+ if (mid_entry != NULL) {
++ if ((mid_entry->mid_flags & MID_WAIT_CANCELLED) &&
++ mid_entry->mid_state == MID_RESPONSE_RECEIVED &&
++ server->ops->handle_cancelled_mid)
++ server->ops->handle_cancelled_mid(
++ mid_entry->resp_buf,
++ server);
++
+ if (!mid_entry->multiRsp || mid_entry->multiEnd)
+ mid_entry->callback(mid_entry);
+- } else if (!server->ops->is_oplock_break ||
+- !server->ops->is_oplock_break(buf, server)) {
++ } else if (server->ops->is_oplock_break &&
++ server->ops->is_oplock_break(buf, server)) {
++ cifs_dbg(FYI, "Received oplock break\n");
++ } else {
+ cifs_dbg(VFS, "No task to wake, unknown frame received! NumMids %d\n",
+ atomic_read(&midCount));
+ cifs_dump_mem("Received Data is: ", buf,
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 3d383489b9cf..97307808ae42 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -654,3 +654,47 @@ smb2_is_valid_oplock_break(char *buffer, struct TCP_Server_Info *server)
+ cifs_dbg(FYI, "Can not process oplock break for non-existent connection\n");
+ return false;
+ }
++
++void
++smb2_cancelled_close_fid(struct work_struct *work)
++{
++ struct close_cancelled_open *cancelled = container_of(work,
++ struct close_cancelled_open, work);
++
++ cifs_dbg(VFS, "Close unmatched open\n");
++
++ SMB2_close(0, cancelled->tcon, cancelled->fid.persistent_fid,
++ cancelled->fid.volatile_fid);
++ cifs_put_tcon(cancelled->tcon);
++ kfree(cancelled);
++}
++
++int
++smb2_handle_cancelled_mid(char *buffer, struct TCP_Server_Info *server)
++{
++ struct smb2_hdr *hdr = (struct smb2_hdr *)buffer;
++ struct smb2_create_rsp *rsp = (struct smb2_create_rsp *)buffer;
++ struct cifs_tcon *tcon;
++ struct close_cancelled_open *cancelled;
++
++ if (hdr->Command != SMB2_CREATE || hdr->Status != STATUS_SUCCESS)
++ return 0;
++
++ cancelled = kzalloc(sizeof(*cancelled), GFP_KERNEL);
++ if (!cancelled)
++ return -ENOMEM;
++
++ tcon = smb2_find_smb_tcon(server, hdr->SessionId, hdr->TreeId);
++ if (!tcon) {
++ kfree(cancelled);
++ return -ENOENT;
++ }
++
++ cancelled->fid.persistent_fid = rsp->PersistentFileId;
++ cancelled->fid.volatile_fid = rsp->VolatileFileId;
++ cancelled->tcon = tcon;
++ INIT_WORK(&cancelled->work, smb2_cancelled_close_fid);
++ queue_work(cifsiod_wq, &cancelled->work);
++
++ return 0;
++}
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 5d456ebb3813..007abf7195af 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1565,6 +1565,7 @@ struct smb_version_operations smb20_operations = {
+ .clear_stats = smb2_clear_stats,
+ .print_stats = smb2_print_stats,
+ .is_oplock_break = smb2_is_valid_oplock_break,
++ .handle_cancelled_mid = smb2_handle_cancelled_mid,
+ .downgrade_oplock = smb2_downgrade_oplock,
+ .need_neg = smb2_need_neg,
+ .negotiate = smb2_negotiate,
+@@ -1645,6 +1646,7 @@ struct smb_version_operations smb21_operations = {
+ .clear_stats = smb2_clear_stats,
+ .print_stats = smb2_print_stats,
+ .is_oplock_break = smb2_is_valid_oplock_break,
++ .handle_cancelled_mid = smb2_handle_cancelled_mid,
+ .downgrade_oplock = smb2_downgrade_oplock,
+ .need_neg = smb2_need_neg,
+ .negotiate = smb2_negotiate,
+@@ -1727,6 +1729,7 @@ struct smb_version_operations smb30_operations = {
+ .print_stats = smb2_print_stats,
+ .dump_share_caps = smb2_dump_share_caps,
+ .is_oplock_break = smb2_is_valid_oplock_break,
++ .handle_cancelled_mid = smb2_handle_cancelled_mid,
+ .downgrade_oplock = smb2_downgrade_oplock,
+ .need_neg = smb2_need_neg,
+ .negotiate = smb2_negotiate,
+@@ -1815,6 +1818,7 @@ struct smb_version_operations smb311_operations = {
+ .print_stats = smb2_print_stats,
+ .dump_share_caps = smb2_dump_share_caps,
+ .is_oplock_break = smb2_is_valid_oplock_break,
++ .handle_cancelled_mid = smb2_handle_cancelled_mid,
+ .downgrade_oplock = smb2_downgrade_oplock,
+ .need_neg = smb2_need_neg,
+ .negotiate = smb2_negotiate,
+diff --git a/fs/cifs/smb2proto.h b/fs/cifs/smb2proto.h
+index f2d511a6971b..04ef6e914597 100644
+--- a/fs/cifs/smb2proto.h
++++ b/fs/cifs/smb2proto.h
+@@ -48,6 +48,10 @@ extern struct mid_q_entry *smb2_setup_request(struct cifs_ses *ses,
+ struct smb_rqst *rqst);
+ extern struct mid_q_entry *smb2_setup_async_request(
+ struct TCP_Server_Info *server, struct smb_rqst *rqst);
++extern struct cifs_ses *smb2_find_smb_ses(struct TCP_Server_Info *server,
++ __u64 ses_id);
++extern struct cifs_tcon *smb2_find_smb_tcon(struct TCP_Server_Info *server,
++ __u64 ses_id, __u32 tid);
+ extern int smb2_calc_signature(struct smb_rqst *rqst,
+ struct TCP_Server_Info *server);
+ extern int smb3_calc_signature(struct smb_rqst *rqst,
+@@ -158,6 +162,9 @@ extern int SMB2_set_compression(const unsigned int xid, struct cifs_tcon *tcon,
+ extern int SMB2_oplock_break(const unsigned int xid, struct cifs_tcon *tcon,
+ const u64 persistent_fid, const u64 volatile_fid,
+ const __u8 oplock_level);
++extern int smb2_handle_cancelled_mid(char *buffer,
++ struct TCP_Server_Info *server);
++void smb2_cancelled_close_fid(struct work_struct *work);
+ extern int SMB2_QFS_info(const unsigned int xid, struct cifs_tcon *tcon,
+ u64 persistent_file_id, u64 volatile_file_id,
+ struct kstatfs *FSData);
+diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
+index bc9a7b634643..390b0d0198f8 100644
+--- a/fs/cifs/smb2transport.c
++++ b/fs/cifs/smb2transport.c
+@@ -115,22 +115,68 @@ smb3_crypto_shash_allocate(struct TCP_Server_Info *server)
+ }
+
+ static struct cifs_ses *
+-smb2_find_smb_ses(struct smb2_hdr *smb2hdr, struct TCP_Server_Info *server)
++smb2_find_smb_ses_unlocked(struct TCP_Server_Info *server, __u64 ses_id)
+ {
+ struct cifs_ses *ses;
+
+- spin_lock(&cifs_tcp_ses_lock);
+ list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
+- if (ses->Suid != smb2hdr->SessionId)
++ if (ses->Suid != ses_id)
+ continue;
+- spin_unlock(&cifs_tcp_ses_lock);
+ return ses;
+ }
++
++ return NULL;
++}
++
++struct cifs_ses *
++smb2_find_smb_ses(struct TCP_Server_Info *server, __u64 ses_id)
++{
++ struct cifs_ses *ses;
++
++ spin_lock(&cifs_tcp_ses_lock);
++ ses = smb2_find_smb_ses_unlocked(server, ses_id);
+ spin_unlock(&cifs_tcp_ses_lock);
+
++ return ses;
++}
++
++static struct cifs_tcon *
++smb2_find_smb_sess_tcon_unlocked(struct cifs_ses *ses, __u32 tid)
++{
++ struct cifs_tcon *tcon;
++
++ list_for_each_entry(tcon, &ses->tcon_list, tcon_list) {
++ if (tcon->tid != tid)
++ continue;
++ ++tcon->tc_count;
++ return tcon;
++ }
++
+ return NULL;
+ }
+
++/*
++ * Obtain tcon corresponding to the tid in the given
++ * cifs_ses
++ */
++
++struct cifs_tcon *
++smb2_find_smb_tcon(struct TCP_Server_Info *server, __u64 ses_id, __u32 tid)
++{
++ struct cifs_ses *ses;
++ struct cifs_tcon *tcon;
++
++ spin_lock(&cifs_tcp_ses_lock);
++ ses = smb2_find_smb_ses_unlocked(server, ses_id);
++ if (!ses) {
++ spin_unlock(&cifs_tcp_ses_lock);
++ return NULL;
++ }
++ tcon = smb2_find_smb_sess_tcon_unlocked(ses, tid);
++ spin_unlock(&cifs_tcp_ses_lock);
++
++ return tcon;
++}
+
+ int
+ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+@@ -142,7 +188,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+ struct smb2_hdr *smb2_pdu = (struct smb2_hdr *)iov[0].iov_base;
+ struct cifs_ses *ses;
+
+- ses = smb2_find_smb_ses(smb2_pdu, server);
++ ses = smb2_find_smb_ses(server, smb2_pdu->SessionId);
+ if (!ses) {
+ cifs_dbg(VFS, "%s: Could not find session\n", __func__);
+ return 0;
+@@ -359,7 +405,7 @@ smb3_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server)
+ struct smb2_hdr *smb2_pdu = (struct smb2_hdr *)iov[0].iov_base;
+ struct cifs_ses *ses;
+
+- ses = smb2_find_smb_ses(smb2_pdu, server);
++ ses = smb2_find_smb_ses(server, smb2_pdu->SessionId);
+ if (!ses) {
+ cifs_dbg(VFS, "%s: Could not find session\n", __func__);
+ return 0;
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index fbb84c08e3cd..842e6a042023 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -728,9 +728,11 @@ SendReceive2(const unsigned int xid, struct cifs_ses *ses,
+
+ rc = wait_for_response(ses->server, midQ);
+ if (rc != 0) {
++ cifs_dbg(FYI, "Cancelling wait for mid %llu\n", midQ->mid);
+ send_cancel(ses->server, buf, midQ);
+ spin_lock(&GlobalMid_Lock);
+ if (midQ->mid_state == MID_REQUEST_SUBMITTED) {
++ midQ->mid_flags |= MID_WAIT_CANCELLED;
+ midQ->callback = DeleteMidQEntry;
+ spin_unlock(&GlobalMid_Lock);
+ cifs_small_buf_release(buf);
+diff --git a/fs/timerfd.c b/fs/timerfd.c
+index c173cc196175..384fa759a563 100644
+--- a/fs/timerfd.c
++++ b/fs/timerfd.c
+@@ -40,6 +40,7 @@ struct timerfd_ctx {
+ short unsigned settime_flags; /* to show in fdinfo */
+ struct rcu_head rcu;
+ struct list_head clist;
++ spinlock_t cancel_lock;
+ bool might_cancel;
+ };
+
+@@ -112,7 +113,7 @@ void timerfd_clock_was_set(void)
+ rcu_read_unlock();
+ }
+
+-static void timerfd_remove_cancel(struct timerfd_ctx *ctx)
++static void __timerfd_remove_cancel(struct timerfd_ctx *ctx)
+ {
+ if (ctx->might_cancel) {
+ ctx->might_cancel = false;
+@@ -122,6 +123,13 @@ static void timerfd_remove_cancel(struct timerfd_ctx *ctx)
+ }
+ }
+
++static void timerfd_remove_cancel(struct timerfd_ctx *ctx)
++{
++ spin_lock(&ctx->cancel_lock);
++ __timerfd_remove_cancel(ctx);
++ spin_unlock(&ctx->cancel_lock);
++}
++
+ static bool timerfd_canceled(struct timerfd_ctx *ctx)
+ {
+ if (!ctx->might_cancel || ctx->moffs != KTIME_MAX)
+@@ -132,6 +140,7 @@ static bool timerfd_canceled(struct timerfd_ctx *ctx)
+
+ static void timerfd_setup_cancel(struct timerfd_ctx *ctx, int flags)
+ {
++ spin_lock(&ctx->cancel_lock);
+ if ((ctx->clockid == CLOCK_REALTIME ||
+ ctx->clockid == CLOCK_REALTIME_ALARM) &&
+ (flags & TFD_TIMER_ABSTIME) && (flags & TFD_TIMER_CANCEL_ON_SET)) {
+@@ -141,9 +150,10 @@ static void timerfd_setup_cancel(struct timerfd_ctx *ctx, int flags)
+ list_add_rcu(&ctx->clist, &cancel_list);
+ spin_unlock(&cancel_lock);
+ }
+- } else if (ctx->might_cancel) {
+- timerfd_remove_cancel(ctx);
++ } else {
++ __timerfd_remove_cancel(ctx);
+ }
++ spin_unlock(&ctx->cancel_lock);
+ }
+
+ static ktime_t timerfd_get_remaining(struct timerfd_ctx *ctx)
+@@ -400,6 +410,7 @@ SYSCALL_DEFINE2(timerfd_create, int, clockid, int, flags)
+ return -ENOMEM;
+
+ init_waitqueue_head(&ctx->wqh);
++ spin_lock_init(&ctx->cancel_lock);
+ ctx->clockid = clockid;
+
+ if (isalarm(ctx))
* [gentoo-commits] proj/linux-patches:4.10 commit in: /
@ 2017-05-14 13:30 Mike Pagano
0 siblings, 0 replies; 22+ messages in thread
From: Mike Pagano @ 2017-05-14 13:30 UTC (permalink / raw
To: gentoo-commits
commit: 45c2b279727028056f64b1f569338074f00f8075
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun May 14 13:30:05 2017 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun May 14 13:30:05 2017 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=45c2b279
Linux patch 4.10.16
0000_README | 4 +
1015_linux-4.10.16.patch | 4665 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4669 insertions(+)
diff --git a/0000_README b/0000_README
index 1e8afc8..6a98163 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch: 1014_linux-4.10.15.patch
From: http://www.kernel.org
Desc: Linux 4.10.15
+Patch: 1015_linux-4.10.16.patch
+From: http://www.kernel.org
+Desc: Linux 4.10.16
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1015_linux-4.10.16.patch b/1015_linux-4.10.16.patch
new file mode 100644
index 0000000..fa64d7c
--- /dev/null
+++ b/1015_linux-4.10.16.patch
@@ -0,0 +1,4665 @@
+diff --git a/Makefile b/Makefile
+index 6f600fee5753..e3e60e71fa78 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 10
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arm/boot/dts/am57xx-idk-common.dtsi b/arch/arm/boot/dts/am57xx-idk-common.dtsi
+index 814a720d5c3d..d0a55b845690 100644
+--- a/arch/arm/boot/dts/am57xx-idk-common.dtsi
++++ b/arch/arm/boot/dts/am57xx-idk-common.dtsi
+@@ -311,6 +311,13 @@
+ /* ID & VBUS GPIOs provided in board dts */
+ };
+ };
++
++ tpic2810: tpic2810@60 {
++ compatible = "ti,tpic2810";
++ reg = <0x60>;
++ gpio-controller;
++ #gpio-cells = <2>;
++ };
+ };
+
+ &mcspi3 {
+@@ -326,13 +333,6 @@
+ spi-max-frequency = <1000000>;
+ spi-cpol;
+ };
+-
+- tpic2810: tpic2810@60 {
+- compatible = "ti,tpic2810";
+- reg = <0x60>;
+- gpio-controller;
+- #gpio-cells = <2>;
+- };
+ };
+
+ &uart3 {
+diff --git a/arch/arm/boot/dts/bcm958522er.dts b/arch/arm/boot/dts/bcm958522er.dts
+index a21b0fd21f4e..417f65738402 100644
+--- a/arch/arm/boot/dts/bcm958522er.dts
++++ b/arch/arm/boot/dts/bcm958522er.dts
+@@ -55,6 +55,7 @@
+ gpio-restart {
+ compatible = "gpio-restart";
+ gpios = <&gpioa 15 GPIO_ACTIVE_LOW>;
++ open-source;
+ priority = <200>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/bcm958525er.dts b/arch/arm/boot/dts/bcm958525er.dts
+index be7f2f8ecf39..5279b769fdfc 100644
+--- a/arch/arm/boot/dts/bcm958525er.dts
++++ b/arch/arm/boot/dts/bcm958525er.dts
+@@ -55,6 +55,7 @@
+ gpio-restart {
+ compatible = "gpio-restart";
+ gpios = <&gpioa 15 GPIO_ACTIVE_LOW>;
++ open-source;
+ priority = <200>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/bcm958525xmc.dts b/arch/arm/boot/dts/bcm958525xmc.dts
+index 959cde911c3c..872882bd01bc 100644
+--- a/arch/arm/boot/dts/bcm958525xmc.dts
++++ b/arch/arm/boot/dts/bcm958525xmc.dts
+@@ -55,6 +55,7 @@
+ gpio-restart {
+ compatible = "gpio-restart";
+ gpios = <&gpioa 31 GPIO_ACTIVE_LOW>;
++ open-source;
+ priority = <200>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/bcm958622hr.dts b/arch/arm/boot/dts/bcm958622hr.dts
+index ad2aa87dd15a..a340e1d93a58 100644
+--- a/arch/arm/boot/dts/bcm958622hr.dts
++++ b/arch/arm/boot/dts/bcm958622hr.dts
+@@ -55,6 +55,7 @@
+ gpio-restart {
+ compatible = "gpio-restart";
+ gpios = <&gpioa 15 GPIO_ACTIVE_LOW>;
++ open-source;
+ priority = <200>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/bcm958623hr.dts b/arch/arm/boot/dts/bcm958623hr.dts
+index 4ceb8fef8041..226b652ccdc8 100644
+--- a/arch/arm/boot/dts/bcm958623hr.dts
++++ b/arch/arm/boot/dts/bcm958623hr.dts
+@@ -55,6 +55,7 @@
+ gpio-restart {
+ compatible = "gpio-restart";
+ gpios = <&gpioa 15 GPIO_ACTIVE_LOW>;
++ open-source;
+ priority = <200>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/bcm958625hr.dts b/arch/arm/boot/dts/bcm958625hr.dts
+index 442002597063..a1658d0721b8 100644
+--- a/arch/arm/boot/dts/bcm958625hr.dts
++++ b/arch/arm/boot/dts/bcm958625hr.dts
+@@ -55,6 +55,7 @@
+ gpio-restart {
+ compatible = "gpio-restart";
+ gpios = <&gpioa 15 GPIO_ACTIVE_LOW>;
++ open-source;
+ priority = <200>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/bcm988312hr.dts b/arch/arm/boot/dts/bcm988312hr.dts
+index 104afe98a43b..ed05e33d56de 100644
+--- a/arch/arm/boot/dts/bcm988312hr.dts
++++ b/arch/arm/boot/dts/bcm988312hr.dts
+@@ -55,6 +55,7 @@
+ gpio-restart {
+ compatible = "gpio-restart";
+ gpios = <&gpioa 15 GPIO_ACTIVE_LOW>;
++ open-source;
+ priority = <200>;
+ };
+ };
+diff --git a/arch/arm/boot/dts/imx6sx-udoo-neo.dtsi b/arch/arm/boot/dts/imx6sx-udoo-neo.dtsi
+index 2b65d26f4396..caea6f065cf9 100644
+--- a/arch/arm/boot/dts/imx6sx-udoo-neo.dtsi
++++ b/arch/arm/boot/dts/imx6sx-udoo-neo.dtsi
+@@ -77,11 +77,6 @@
+ };
+ };
+
+-&cpu0 {
+- arm-supply = <&sw1a_reg>;
+- soc-supply = <&sw1c_reg>;
+-};
+-
+ &fec1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet1>;
+diff --git a/arch/arm/boot/dts/qcom-ipq8064.dtsi b/arch/arm/boot/dts/qcom-ipq8064.dtsi
+index 2e375576ffd0..76f4e8921d58 100644
+--- a/arch/arm/boot/dts/qcom-ipq8064.dtsi
++++ b/arch/arm/boot/dts/qcom-ipq8064.dtsi
+@@ -65,13 +65,13 @@
+ cxo_board {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <19200000>;
++ clock-frequency = <25000000>;
+ };
+
+ pxo_board {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <27000000>;
++ clock-frequency = <25000000>;
+ };
+
+ sleep_clk: sleep_clk {
+diff --git a/arch/arm/boot/dts/sun7i-a20-lamobo-r1.dts b/arch/arm/boot/dts/sun7i-a20-lamobo-r1.dts
+index 73c05dab0a69..e00539ae1b8a 100644
+--- a/arch/arm/boot/dts/sun7i-a20-lamobo-r1.dts
++++ b/arch/arm/boot/dts/sun7i-a20-lamobo-r1.dts
+@@ -167,7 +167,7 @@
+ reg = <8>;
+ label = "cpu";
+ ethernet = <&gmac>;
+- phy-mode = "rgmii";
++ phy-mode = "rgmii-txid";
+ fixed-link {
+ speed = <1000>;
+ full-duplex;
+diff --git a/arch/arm/mach-omap2/omap-headsmp.S b/arch/arm/mach-omap2/omap-headsmp.S
+index fe36ce2734d4..4c6f14cf92a8 100644
+--- a/arch/arm/mach-omap2/omap-headsmp.S
++++ b/arch/arm/mach-omap2/omap-headsmp.S
+@@ -17,6 +17,7 @@
+
+ #include <linux/linkage.h>
+ #include <linux/init.h>
++#include <asm/assembler.h>
+
+ #include "omap44xx.h"
+
+@@ -66,7 +67,7 @@ wait_2: ldr r2, =AUX_CORE_BOOT0_PA @ read from AuxCoreBoot0
+ cmp r0, r4
+ bne wait_2
+ ldr r12, =API_HYP_ENTRY
+- adr r0, hyp_boot
++ badr r0, hyp_boot
+ smc #0
+ hyp_boot:
+ b omap_secondary_startup
+diff --git a/arch/arm/mach-omap2/omap_hwmod_3xxx_data.c b/arch/arm/mach-omap2/omap_hwmod_3xxx_data.c
+index 56f917ec8621..507ff0795a8e 100644
+--- a/arch/arm/mach-omap2/omap_hwmod_3xxx_data.c
++++ b/arch/arm/mach-omap2/omap_hwmod_3xxx_data.c
+@@ -2112,11 +2112,20 @@ static struct omap_hwmod_ocp_if omap3_l4_core__i2c3 = {
+ };
+
+ /* L4 CORE -> SR1 interface */
++static struct omap_hwmod_addr_space omap3_sr1_addr_space[] = {
++ {
++ .pa_start = OMAP34XX_SR1_BASE,
++ .pa_end = OMAP34XX_SR1_BASE + SZ_1K - 1,
++ .flags = ADDR_TYPE_RT,
++ },
++ { },
++};
+
+ static struct omap_hwmod_ocp_if omap34xx_l4_core__sr1 = {
+ .master = &omap3xxx_l4_core_hwmod,
+ .slave = &omap34xx_sr1_hwmod,
+ .clk = "sr_l4_ick",
++ .addr = omap3_sr1_addr_space,
+ .user = OCP_USER_MPU,
+ };
+
+@@ -2124,15 +2133,25 @@ static struct omap_hwmod_ocp_if omap36xx_l4_core__sr1 = {
+ .master = &omap3xxx_l4_core_hwmod,
+ .slave = &omap36xx_sr1_hwmod,
+ .clk = "sr_l4_ick",
++ .addr = omap3_sr1_addr_space,
+ .user = OCP_USER_MPU,
+ };
+
+ /* L4 CORE -> SR1 interface */
++static struct omap_hwmod_addr_space omap3_sr2_addr_space[] = {
++ {
++ .pa_start = OMAP34XX_SR2_BASE,
++ .pa_end = OMAP34XX_SR2_BASE + SZ_1K - 1,
++ .flags = ADDR_TYPE_RT,
++ },
++ { },
++};
+
+ static struct omap_hwmod_ocp_if omap34xx_l4_core__sr2 = {
+ .master = &omap3xxx_l4_core_hwmod,
+ .slave = &omap34xx_sr2_hwmod,
+ .clk = "sr_l4_ick",
++ .addr = omap3_sr2_addr_space,
+ .user = OCP_USER_MPU,
+ };
+
+@@ -2140,6 +2159,7 @@ static struct omap_hwmod_ocp_if omap36xx_l4_core__sr2 = {
+ .master = &omap3xxx_l4_core_hwmod,
+ .slave = &omap36xx_sr2_hwmod,
+ .clk = "sr_l4_ick",
++ .addr = omap3_sr2_addr_space,
+ .user = OCP_USER_MPU,
+ };
+
+diff --git a/arch/arm/mach-pxa/ezx.c b/arch/arm/mach-pxa/ezx.c
+index 0b8300e6fca3..a057cf9c0e7b 100644
+--- a/arch/arm/mach-pxa/ezx.c
++++ b/arch/arm/mach-pxa/ezx.c
+@@ -696,32 +696,7 @@ static struct pxa27x_keypad_platform_data e2_keypad_platform_data = {
+ };
+ #endif /* CONFIG_MACH_EZX_E2 */
+
+-#ifdef CONFIG_MACH_EZX_A780
+-/* gpio_keys */
+-static struct gpio_keys_button a780_buttons[] = {
+- [0] = {
+- .code = SW_LID,
+- .gpio = GPIO12_A780_FLIP_LID,
+- .active_low = 0,
+- .desc = "A780 flip lid",
+- .type = EV_SW,
+- .wakeup = 1,
+- },
+-};
+-
+-static struct gpio_keys_platform_data a780_gpio_keys_platform_data = {
+- .buttons = a780_buttons,
+- .nbuttons = ARRAY_SIZE(a780_buttons),
+-};
+-
+-static struct platform_device a780_gpio_keys = {
+- .name = "gpio-keys",
+- .id = -1,
+- .dev = {
+- .platform_data = &a780_gpio_keys_platform_data,
+- },
+-};
+-
++#if defined(CONFIG_MACH_EZX_A780) || defined(CONFIG_MACH_EZX_A910)
+ /* camera */
+ static struct regulator_consumer_supply camera_dummy_supplies[] = {
+ REGULATOR_SUPPLY("vdd", "0-005d"),
+@@ -750,6 +725,35 @@ static struct platform_device camera_supply_dummy_device = {
+ .platform_data = &camera_dummy_config,
+ },
+ };
++#endif
++
++#ifdef CONFIG_MACH_EZX_A780
++/* gpio_keys */
++static struct gpio_keys_button a780_buttons[] = {
++ [0] = {
++ .code = SW_LID,
++ .gpio = GPIO12_A780_FLIP_LID,
++ .active_low = 0,
++ .desc = "A780 flip lid",
++ .type = EV_SW,
++ .wakeup = 1,
++ },
++};
++
++static struct gpio_keys_platform_data a780_gpio_keys_platform_data = {
++ .buttons = a780_buttons,
++ .nbuttons = ARRAY_SIZE(a780_buttons),
++};
++
++static struct platform_device a780_gpio_keys = {
++ .name = "gpio-keys",
++ .id = -1,
++ .dev = {
++ .platform_data = &a780_gpio_keys_platform_data,
++ },
++};
++
++/* camera */
+ static int a780_camera_reset(struct device *dev)
+ {
+ gpio_set_value(GPIO19_GEN1_CAM_RST, 0);
+diff --git a/arch/arm64/boot/dts/renesas/r8a7795.dtsi b/arch/arm64/boot/dts/renesas/r8a7795.dtsi
+index bbf594bce930..bb8709a6064a 100644
+--- a/arch/arm64/boot/dts/renesas/r8a7795.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a7795.dtsi
+@@ -563,6 +563,7 @@
+ phy-mode = "rgmii-id";
+ #address-cells = <1>;
+ #size-cells = <0>;
++ status = "disabled";
+ };
+
+ can0: can@e6c30000 {
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index ffbb9a520563..61e214015b38 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -71,9 +71,8 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+ #define pte_young(pte) (!!(pte_val(pte) & PTE_AF))
+ #define pte_special(pte) (!!(pte_val(pte) & PTE_SPECIAL))
+ #define pte_write(pte) (!!(pte_val(pte) & PTE_WRITE))
+-#define pte_exec(pte) (!(pte_val(pte) & PTE_UXN))
++#define pte_user_exec(pte) (!(pte_val(pte) & PTE_UXN))
+ #define pte_cont(pte) (!!(pte_val(pte) & PTE_CONT))
+-#define pte_ng(pte) (!!(pte_val(pte) & PTE_NG))
+
+ #ifdef CONFIG_ARM64_HW_AFDBM
+ #define pte_hw_dirty(pte) (pte_write(pte) && !(pte_val(pte) & PTE_RDONLY))
+@@ -84,8 +83,12 @@ extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
+ #define pte_dirty(pte) (pte_sw_dirty(pte) || pte_hw_dirty(pte))
+
+ #define pte_valid(pte) (!!(pte_val(pte) & PTE_VALID))
+-#define pte_valid_global(pte) \
+- ((pte_val(pte) & (PTE_VALID | PTE_NG)) == PTE_VALID)
++/*
++ * Execute-only user mappings do not have the PTE_USER bit set. All valid
++ * kernel mappings have the PTE_UXN bit set.
++ */
++#define pte_valid_not_user(pte) \
++ ((pte_val(pte) & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN))
+ #define pte_valid_young(pte) \
+ ((pte_val(pte) & (PTE_VALID | PTE_AF)) == (PTE_VALID | PTE_AF))
+
+@@ -178,7 +181,7 @@ static inline void set_pte(pte_t *ptep, pte_t pte)
+ * Only if the new pte is valid and kernel, otherwise TLB maintenance
+ * or update_mmu_cache() have the necessary barriers.
+ */
+- if (pte_valid_global(pte)) {
++ if (pte_valid_not_user(pte)) {
+ dsb(ishst);
+ isb();
+ }
+@@ -212,7 +215,7 @@ static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
+ pte_val(pte) &= ~PTE_RDONLY;
+ else
+ pte_val(pte) |= PTE_RDONLY;
+- if (pte_ng(pte) && pte_exec(pte) && !pte_special(pte))
++ if (pte_user_exec(pte) && !pte_special(pte))
+ __sync_icache_dcache(pte, addr);
+ }
+
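
For context on the pte_valid_not_user() hunk above, here is a standalone sketch (illustrative only, not kernel code). The bit positions used below (PTE_VALID at bit 0, PTE_USER/AP[1] at bit 6, PTE_UXN at bit 54) are assumptions taken from the ARMv8 descriptor layout rather than from this patch; the point is that an execute-only user mapping (VALID set, USER and UXN clear) is treated as a user mapping, while valid kernel mappings (UXN set, USER clear) still get the extra barriers in set_pte().

#include <stdio.h>
#include <stdint.h>

/* Assumed arm64 descriptor bits (illustrative only). */
#define PTE_VALID (1ULL << 0)
#define PTE_USER  (1ULL << 6)   /* AP[1] */
#define PTE_UXN   (1ULL << 54)  /* user execute-never */

/* Same logic as the new macro: a valid kernel entry has UXN set and USER clear. */
static int pte_valid_not_user(uint64_t pte)
{
	return (pte & (PTE_VALID | PTE_USER | PTE_UXN)) == (PTE_VALID | PTE_UXN);
}

int main(void)
{
	uint64_t kernel_pte    = PTE_VALID | PTE_UXN;            /* kernel mapping */
	uint64_t user_pte      = PTE_VALID | PTE_USER | PTE_UXN; /* normal user data mapping */
	uint64_t exec_only_pte = PTE_VALID;                      /* execute-only user mapping */

	printf("kernel:    %d\n", pte_valid_not_user(kernel_pte));    /* 1: gets dsb/isb in set_pte() */
	printf("user:      %d\n", pte_valid_not_user(user_pte));      /* 0 */
	printf("exec-only: %d\n", pte_valid_not_user(exec_only_pte)); /* 0: handled like a user mapping */
	return 0;
}
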
+diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
+index 655e65f38f31..565dd69888cc 100644
+--- a/arch/arm64/kernel/topology.c
++++ b/arch/arm64/kernel/topology.c
+@@ -41,7 +41,6 @@ static void set_capacity_scale(unsigned int cpu, unsigned long capacity)
+ per_cpu(cpu_scale, cpu) = capacity;
+ }
+
+-#ifdef CONFIG_PROC_SYSCTL
+ static ssize_t cpu_capacity_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+@@ -98,7 +97,6 @@ static int register_cpu_capacity_sysctl(void)
+ return 0;
+ }
+ subsys_initcall(register_cpu_capacity_sysctl);
+-#endif
+
+ static u32 capacity_scale;
+ static u32 *raw_capacity;
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index b2fc97a2c56c..9c4b57a7b265 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -779,14 +779,14 @@ static int build_body(struct jit_ctx *ctx)
+ int ret;
+
+ ret = build_insn(insn, ctx);
+-
+- if (ctx->image == NULL)
+- ctx->offset[i] = ctx->idx;
+-
+ if (ret > 0) {
+ i++;
++ if (ctx->image == NULL)
++ ctx->offset[i] = ctx->idx;
+ continue;
+ }
++ if (ctx->image == NULL)
++ ctx->offset[i] = ctx->idx;
+ if (ret)
+ return ret;
+ }
+diff --git a/arch/mips/kernel/mips-r2-to-r6-emul.c b/arch/mips/kernel/mips-r2-to-r6-emul.c
+index ef2ca28a028b..d8f1cf1ec370 100644
+--- a/arch/mips/kernel/mips-r2-to-r6-emul.c
++++ b/arch/mips/kernel/mips-r2-to-r6-emul.c
+@@ -433,8 +433,8 @@ static int multu_func(struct pt_regs *regs, u32 ir)
+ rs = regs->regs[MIPSInst_RS(ir)];
+ res = (u64)rt * (u64)rs;
+ rt = res;
+- regs->lo = (s64)rt;
+- regs->hi = (s64)(res >> 32);
++ regs->lo = (s64)(s32)rt;
++ regs->hi = (s64)(s32)(res >> 32);
+
+ MIPS_R2_STATS(muls);
+
+@@ -670,9 +670,9 @@ static int maddu_func(struct pt_regs *regs, u32 ir)
+ res += ((((s64)rt) << 32) | (u32)rs);
+
+ rt = res;
+- regs->lo = (s64)rt;
++ regs->lo = (s64)(s32)rt;
+ rs = res >> 32;
+- regs->hi = (s64)rs;
++ regs->hi = (s64)(s32)rs;
+
+ MIPS_R2_STATS(dsps);
+
+@@ -728,9 +728,9 @@ static int msubu_func(struct pt_regs *regs, u32 ir)
+ res = ((((s64)rt) << 32) | (u32)rs) - res;
+
+ rt = res;
+- regs->lo = (s64)rt;
++ regs->lo = (s64)(s32)rt;
+ rs = res >> 32;
+- regs->hi = (s64)rs;
++ regs->hi = (s64)(s32)rs;
+
+ MIPS_R2_STATS(dsps);
+
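
The multu/maddu/msubu hunks above all make the same change: each 32-bit half of the 64-bit product must be truncated to 32 bits before being sign-extended into the 64-bit lo/hi registers, as MIPS64 requires for 32-bit results. A minimal standalone illustration of why the extra (s32) cast matters (ordinary C, not the emulator code):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t rt = 0xffffffffu, rs = 2u;          /* unsigned multiply operands */
	uint64_t res = (uint64_t)rt * (uint64_t)rs;  /* 0x00000001fffffffe */

	uint32_t lo32 = (uint32_t)res;               /* low half: 0xfffffffe */

	/* Widening the unsigned 32-bit half directly zero-extends it. */
	int64_t lo_wrong = (int64_t)lo32;            /* 0x00000000fffffffe */

	/* The patch truncates to 32 bits and then sign-extends, matching the
	 * architectural definition of lo/hi for 32-bit multiplies. */
	int64_t lo_fixed = (int64_t)(int32_t)lo32;   /* 0xfffffffffffffffe */

	printf("zero-extended lo: %#llx\n", (unsigned long long)lo_wrong);
	printf("sign-extended lo: %#llx\n", (unsigned long long)lo_fixed);
	return 0;
}
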
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 281f4f1fcd1f..068deb010375 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -390,8 +390,8 @@ config DISABLE_MPROFILE_KERNEL
+ be disabled also.
+
+ If you have a toolchain which supports mprofile-kernel, then you can
+- enable this. Otherwise leave it disabled. If you're not sure, say
+- "N".
++ disable this. Otherwise leave it enabled. If you're not sure, say
++ "Y".
+
+ config MPROFILE_KERNEL
+ depends on PPC64 && CPU_LITTLE_ENDIAN
+diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
+index dff79798903d..2fd6b5b34756 100644
+--- a/arch/powerpc/include/asm/reg.h
++++ b/arch/powerpc/include/asm/reg.h
+@@ -338,7 +338,7 @@
+ #define LPCR_DPFD_SH 52
+ #define LPCR_DPFD (ASM_CONST(7) << LPCR_DPFD_SH)
+ #define LPCR_VRMASD_SH 47
+-#define LPCR_VRMASD (ASM_CONST(1) << LPCR_VRMASD_SH)
++#define LPCR_VRMASD (ASM_CONST(0x1f) << LPCR_VRMASD_SH)
+ #define LPCR_VRMA_L ASM_CONST(0x0008000000000000)
+ #define LPCR_VRMA_LP0 ASM_CONST(0x0001000000000000)
+ #define LPCR_VRMA_LP1 ASM_CONST(0x0000800000000000)
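
The LPCR_VRMASD change above widens the mask from a single bit to the full 5-bit VRMASD field starting at bit 47. A standalone sketch of the two mask values (plain C, not kernel code):

#include <stdio.h>

int main(void)
{
	unsigned long long shift = 47;

	unsigned long long old_mask = 1ULL << shift;     /* only bit 47 */
	unsigned long long new_mask = 0x1fULL << shift;  /* bits 47..51, the whole field */

	printf("old LPCR_VRMASD mask: %#018llx\n", old_mask);
	printf("new LPCR_VRMASD mask: %#018llx\n", new_mask);

	/* With the old mask, clearing the field via (lpcr & ~LPCR_VRMASD)
	 * left bits 48..51 behind. */
	unsigned long long lpcr = 0x1bULL << shift;      /* some 5-bit field value */
	printf("left behind by old mask: %#018llx\n", lpcr & ~old_mask);
	printf("left behind by new mask: %#018llx\n", lpcr & ~new_mask);
	return 0;
}
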
+diff --git a/arch/powerpc/kernel/Makefile b/arch/powerpc/kernel/Makefile
+index f4c2b52e58b3..b1a9805c2eef 100644
+--- a/arch/powerpc/kernel/Makefile
++++ b/arch/powerpc/kernel/Makefile
+@@ -15,7 +15,7 @@ CFLAGS_btext.o += -fPIC
+ endif
+
+ CFLAGS_cputable.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+-CFLAGS_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
++CFLAGS_prom_init.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+ CFLAGS_btext.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+ CFLAGS_prom.o += $(DISABLE_LATENT_ENTROPY_PLUGIN)
+
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index ec34e39471a7..8d9cc07b1e9c 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -182,7 +182,8 @@ static void kvmppc_fast_vcpu_kick_hv(struct kvm_vcpu *vcpu)
+ ++vcpu->stat.halt_wakeup;
+ }
+
+- if (kvmppc_ipi_thread(vcpu->arch.thread_cpu))
++ cpu = READ_ONCE(vcpu->arch.thread_cpu);
++ if (cpu >= 0 && kvmppc_ipi_thread(cpu))
+ return;
+
+ /* CPU points to the first thread of the core */
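
The book3s_hv hunk above reads vcpu->arch.thread_cpu exactly once (via READ_ONCE) into a local and checks that it is non-negative before using it, so a concurrent writer resetting the field cannot be observed between the check and the use. A standalone analogue using C11 atomics in place of the kernel's READ_ONCE (the field name and the -1 sentinel mirror the patch; everything else is illustrative):

#include <stdatomic.h>
#include <stdio.h>

static _Atomic int thread_cpu = 3;   /* -1 means "not running on any thread" */

static void kick_cpu(int cpu)
{
	printf("kick cpu %d\n", cpu);
}

static void fast_kick(void)
{
	/* Snapshot the shared field once; every later use sees the same value. */
	int cpu = atomic_load_explicit(&thread_cpu, memory_order_relaxed);

	/* A racy "if (thread_cpu >= 0) kick_cpu(thread_cpu);" could read a valid
	 * cpu in the check and -1 in the call if a writer runs in between. */
	if (cpu >= 0)
		kick_cpu(cpu);
}

int main(void)
{
	fast_kick();
	atomic_store_explicit(&thread_cpu, -1, memory_order_relaxed);
	fast_kick();   /* nothing to kick */
	return 0;
}
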
+diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
+index 104bad029ce9..7de7124ac91b 100644
+--- a/arch/powerpc/mm/mmu_context_iommu.c
++++ b/arch/powerpc/mm/mmu_context_iommu.c
+@@ -184,7 +184,7 @@ long mm_iommu_get(struct mm_struct *mm, unsigned long ua, unsigned long entries,
+ * of the CMA zone if possible. NOTE: faulting in + migration
+ * can be expensive. Batching can be considered later
+ */
+- if (get_pageblock_migratetype(page) == MIGRATE_CMA) {
++ if (is_migrate_cma_page(page)) {
+ if (mm_iommu_move_page_from_cma(page))
+ goto populate;
+ if (1 != get_user_pages_fast(ua + (i << PAGE_SHIFT),
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index 270eb9b74e2e..e9e6dfff032f 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -183,6 +183,8 @@ static inline void perf_get_data_addr(struct pt_regs *regs, u64 *addrp)
+ sdsync = POWER7P_MMCRA_SDAR_VALID;
+ else if (ppmu->flags & PPMU_ALT_SIPR)
+ sdsync = POWER6_MMCRA_SDSYNC;
++ else if (ppmu->flags & PPMU_NO_SIAR)
++ sdsync = MMCRA_SAMPLE_ENABLE;
+ else
+ sdsync = MMCRA_SDSYNC;
+
+diff --git a/arch/powerpc/perf/isa207-common.c b/arch/powerpc/perf/isa207-common.c
+index 50e598cf644b..15db053d25f6 100644
+--- a/arch/powerpc/perf/isa207-common.c
++++ b/arch/powerpc/perf/isa207-common.c
+@@ -65,12 +65,41 @@ static bool is_event_valid(u64 event)
+ return !(event & ~valid_mask);
+ }
+
+-static u64 mmcra_sdar_mode(u64 event)
++static inline bool is_event_marked(u64 event)
+ {
+- if (cpu_has_feature(CPU_FTR_ARCH_300) && !cpu_has_feature(CPU_FTR_POWER9_DD1))
+- return p9_SDAR_MODE(event) << MMCRA_SDAR_MODE_SHIFT;
++ if (event & EVENT_IS_MARKED)
++ return true;
+
+- return MMCRA_SDAR_MODE_TLB;
++ return false;
++}
++
++static void mmcra_sdar_mode(u64 event, unsigned long *mmcra)
++{
++ /*
++ * MMCRA[SDAR_MODE] specifies how the SDAR should be updated in
++ * continuous sampling mode.
++ *
++ * In case of Power8:
++ * MMCRA[SDAR_MODE] will be programmed as "0b01" for continuous sampling
++ * mode and will be unchanged when setting MMCRA[63] (Marked events).
++ *
++ * In case of Power9:
++ * Marked event: MMCRA[SDAR_MODE] will be set to 0b00 ('No Updates'),
++ * or if the group already has any marked events.
++ * Non-marked events (for DD1):
++ * MMCRA[SDAR_MODE] will be set to 0b01.
++ * For the rest:
++ * MMCRA[SDAR_MODE] will be set from the event code.
++ */
++ if (cpu_has_feature(CPU_FTR_ARCH_300)) {
++ if (is_event_marked(event) || (*mmcra & MMCRA_SAMPLE_ENABLE))
++ *mmcra &= MMCRA_SDAR_MODE_NO_UPDATES;
++ else if (!cpu_has_feature(CPU_FTR_POWER9_DD1))
++ *mmcra |= p9_SDAR_MODE(event) << MMCRA_SDAR_MODE_SHIFT;
++ else if (cpu_has_feature(CPU_FTR_POWER9_DD1))
++ *mmcra |= MMCRA_SDAR_MODE_TLB;
++ } else
++ *mmcra |= MMCRA_SDAR_MODE_TLB;
+ }
+
+ static u64 thresh_cmp_val(u64 value)
+@@ -97,6 +126,28 @@ static unsigned long combine_shift(unsigned long pmc)
+ return MMCR1_COMBINE_SHIFT(pmc);
+ }
+
++static inline bool event_is_threshold(u64 event)
++{
++ return (event >> EVENT_THR_SEL_SHIFT) & EVENT_THR_SEL_MASK;
++}
++
++static bool is_thresh_cmp_valid(u64 event)
++{
++ unsigned int cmp, exp;
++
++ /*
++ * Check the mantissa upper two bits are not zero, unless the
++ * exponent is also zero. See the THRESH_CMP_MANTISSA doc.
++ */
++ cmp = (event >> EVENT_THR_CMP_SHIFT) & EVENT_THR_CMP_MASK;
++ exp = cmp >> 7;
++
++ if (exp && (cmp & 0x60) == 0)
++ return false;
++
++ return true;
++}
++
+ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
+ {
+ unsigned int unit, pmc, cache, ebb;
+@@ -158,33 +209,31 @@ int isa207_get_constraint(u64 event, unsigned long *maskp, unsigned long *valp)
+ value |= CNST_L1_QUAL_VAL(cache);
+ }
+
+- if (event & EVENT_IS_MARKED) {
++ if (is_event_marked(event)) {
+ mask |= CNST_SAMPLE_MASK;
+ value |= CNST_SAMPLE_VAL(event >> EVENT_SAMPLE_SHIFT);
+ }
+
+- /*
+- * Special case for PM_MRK_FAB_RSP_MATCH and PM_MRK_FAB_RSP_MATCH_CYC,
+- * the threshold control bits are used for the match value.
+- */
+- if (event_is_fab_match(event)) {
+- mask |= CNST_FAB_MATCH_MASK;
+- value |= CNST_FAB_MATCH_VAL(event >> EVENT_THR_CTL_SHIFT);
++ if (cpu_has_feature(CPU_FTR_ARCH_300)) {
++ if (event_is_threshold(event) && is_thresh_cmp_valid(event)) {
++ mask |= CNST_THRESH_MASK;
++ value |= CNST_THRESH_VAL(event >> EVENT_THRESH_SHIFT);
++ }
+ } else {
+ /*
+- * Check the mantissa upper two bits are not zero, unless the
+- * exponent is also zero. See the THRESH_CMP_MANTISSA doc.
++ * Special case for PM_MRK_FAB_RSP_MATCH and PM_MRK_FAB_RSP_MATCH_CYC,
++ * the threshold control bits are used for the match value.
+ */
+- unsigned int cmp, exp;
+-
+- cmp = (event >> EVENT_THR_CMP_SHIFT) & EVENT_THR_CMP_MASK;
+- exp = cmp >> 7;
+-
+- if (exp && (cmp & 0x60) == 0)
+- return -1;
++ if (event_is_fab_match(event)) {
++ mask |= CNST_FAB_MATCH_MASK;
++ value |= CNST_FAB_MATCH_VAL(event >> EVENT_THR_CTL_SHIFT);
++ } else {
++ if (!is_thresh_cmp_valid(event))
++ return -1;
+
+- mask |= CNST_THRESH_MASK;
+- value |= CNST_THRESH_VAL(event >> EVENT_THRESH_SHIFT);
++ mask |= CNST_THRESH_MASK;
++ value |= CNST_THRESH_VAL(event >> EVENT_THRESH_SHIFT);
++ }
+ }
+
+ if (!pmc && ebb)
+@@ -256,7 +305,7 @@ int isa207_compute_mmcr(u64 event[], int n_ev,
+ }
+
+ /* In continuous sampling mode, update SDAR on TLB miss */
+- mmcra |= mmcra_sdar_mode(event[i]);
++ mmcra_sdar_mode(event[i], &mmcra);
+
+ if (event[i] & EVENT_IS_L1) {
+ cache = event[i] >> EVENT_CACHE_SEL_SHIFT;
+@@ -265,7 +314,7 @@ int isa207_compute_mmcr(u64 event[], int n_ev,
+ mmcr1 |= (cache & 1) << MMCR1_DC_QUAL_SHIFT;
+ }
+
+- if (event[i] & EVENT_IS_MARKED) {
++ if (is_event_marked(event[i])) {
+ mmcra |= MMCRA_SAMPLE_ENABLE;
+
+ val = (event[i] >> EVENT_SAMPLE_SHIFT) & EVENT_SAMPLE_MASK;
+@@ -279,7 +328,7 @@ int isa207_compute_mmcr(u64 event[], int n_ev,
+ * PM_MRK_FAB_RSP_MATCH and PM_MRK_FAB_RSP_MATCH_CYC,
+ * the threshold bits are used for the match value.
+ */
+- if (event_is_fab_match(event[i])) {
++ if (!cpu_has_feature(CPU_FTR_ARCH_300) && event_is_fab_match(event[i])) {
+ mmcr1 |= ((event[i] >> EVENT_THR_CTL_SHIFT) &
+ EVENT_THR_CTL_MASK) << MMCR1_FAB_SHIFT;
+ } else {
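
The open-coded threshold-compare check is now the is_thresh_cmp_valid() helper added above. A standalone sketch of the same rule on an already-extracted thresh_cmp field; the split into a 3-bit exponent (bits 9..7) and a 7-bit mantissa (bits 6..0) follows how the code treats it, and the EVENT_THR_CMP_SHIFT/MASK constants are assumed to perform that extraction:

#include <stdio.h>
#include <stdbool.h>

static bool is_thresh_cmp_valid(unsigned int cmp)
{
	unsigned int exp = cmp >> 7;

	/* A non-zero exponent whose mantissa has both top bits (0x60) clear
	 * cannot be encoded, so reject it; everything else is accepted. */
	if (exp && (cmp & 0x60) == 0)
		return false;
	return true;
}

int main(void)
{
	printf("%d\n", is_thresh_cmp_valid(0x000)); /* zero: valid */
	printf("%d\n", is_thresh_cmp_valid(0x07f)); /* exponent 0: always valid */
	printf("%d\n", is_thresh_cmp_valid(0x160)); /* exp 2, mantissa top bits set: valid */
	printf("%d\n", is_thresh_cmp_valid(0x11f)); /* exp 2, mantissa top bits clear: invalid */
	return 0;
}
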
+diff --git a/arch/powerpc/perf/isa207-common.h b/arch/powerpc/perf/isa207-common.h
+index 90495f1580c7..7554dd4b4e43 100644
+--- a/arch/powerpc/perf/isa207-common.h
++++ b/arch/powerpc/perf/isa207-common.h
+@@ -242,6 +242,7 @@
+ #define MMCRA_THR_CMP_SHIFT 32
+ #define MMCRA_SDAR_MODE_SHIFT 42
+ #define MMCRA_SDAR_MODE_TLB (1ull << MMCRA_SDAR_MODE_SHIFT)
++#define MMCRA_SDAR_MODE_NO_UPDATES ~(0x3ull << MMCRA_SDAR_MODE_SHIFT)
+ #define MMCRA_IFM_SHIFT 30
+
+ /* MMCR1 Threshold Compare bit constant for power9 */
+diff --git a/arch/powerpc/perf/power9-pmu.c b/arch/powerpc/perf/power9-pmu.c
+index 7332634e18c9..7950cee7d617 100644
+--- a/arch/powerpc/perf/power9-pmu.c
++++ b/arch/powerpc/perf/power9-pmu.c
+@@ -22,7 +22,7 @@
+ * | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - | - - - - |
+ * | | [ ] [ ] [ thresh_cmp ] [ thresh_ctl ]
+ * | | | | |
+- * | | *- IFM (Linux) | thresh start/stop OR FAB match -*
++ * | | *- IFM (Linux) | thresh start/stop -*
+ * | *- BHRB (Linux) *sm
+ * *- EBB (Linux)
+ *
+@@ -50,11 +50,9 @@
+ * MMCR1[31] = pmc4combine[1]
+ *
+ * if pmc == 3 and unit == 0 and pmcxsel[0:6] == 0b0101011
+- * # PM_MRK_FAB_RSP_MATCH
+- * MMCR1[20:27] = thresh_ctl (FAB_CRESP_MATCH / FAB_TYPE_MATCH)
++ * MMCR1[20:27] = thresh_ctl
+ * else if pmc == 4 and unit == 0xf and pmcxsel[0:6] == 0b0101001
+- * # PM_MRK_FAB_RSP_MATCH_CYC
+- * MMCR1[20:27] = thresh_ctl (FAB_CRESP_MATCH / FAB_TYPE_MATCH)
++ * MMCR1[20:27] = thresh_ctl
+ * else
+ * MMCRA[48:55] = thresh_ctl (THRESH START/END)
+ *
+diff --git a/arch/powerpc/platforms/powernv/opal-wrappers.S b/arch/powerpc/platforms/powernv/opal-wrappers.S
+index 3aa40f1b20f5..81a09fe4249c 100644
+--- a/arch/powerpc/platforms/powernv/opal-wrappers.S
++++ b/arch/powerpc/platforms/powernv/opal-wrappers.S
+@@ -146,7 +146,7 @@ opal_tracepoint_entry:
+ opal_tracepoint_return:
+ std r3,STK_REG(R31)(r1)
+ mr r4,r3
+- ld r0,STK_REG(R23)(r1)
++ ld r3,STK_REG(R23)(r1)
+ bl __trace_opal_exit
+ ld r3,STK_REG(R31)(r1)
+ addi r1,r1,STACKFRAMESIZE
+diff --git a/arch/sparc/kernel/head_64.S b/arch/sparc/kernel/head_64.S
+index 6aa3da152c20..9835152a0682 100644
+--- a/arch/sparc/kernel/head_64.S
++++ b/arch/sparc/kernel/head_64.S
+@@ -935,3 +935,9 @@ ENTRY(__retl_o1)
+ retl
+ mov %o1, %o0
+ ENDPROC(__retl_o1)
++
++ENTRY(__retl_o1_asi)
++ wr %o5, 0x0, %asi
++ retl
++ mov %o1, %o0
++ENDPROC(__retl_o1_asi)
+diff --git a/arch/sparc/lib/GENbzero.S b/arch/sparc/lib/GENbzero.S
+index 8e7a843ddd88..2fbf6297d57c 100644
+--- a/arch/sparc/lib/GENbzero.S
++++ b/arch/sparc/lib/GENbzero.S
+@@ -8,7 +8,7 @@
+ 98: x,y; \
+ .section __ex_table,"a";\
+ .align 4; \
+- .word 98b, __retl_o1; \
++ .word 98b, __retl_o1_asi;\
+ .text; \
+ .align 4;
+
+diff --git a/arch/sparc/lib/NGbzero.S b/arch/sparc/lib/NGbzero.S
+index beab29bf419b..33053bdf3766 100644
+--- a/arch/sparc/lib/NGbzero.S
++++ b/arch/sparc/lib/NGbzero.S
+@@ -8,7 +8,7 @@
+ 98: x,y; \
+ .section __ex_table,"a";\
+ .align 4; \
+- .word 98b, __retl_o1; \
++ .word 98b, __retl_o1_asi;\
+ .text; \
+ .align 4;
+
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index 1c1b9fe705c8..5900471ee508 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -99,18 +99,24 @@ static struct attribute_group pt_cap_group = {
+ };
+
+ PMU_FORMAT_ATTR(cyc, "config:1" );
++PMU_FORMAT_ATTR(pwr_evt, "config:4" );
++PMU_FORMAT_ATTR(fup_on_ptw, "config:5" );
+ PMU_FORMAT_ATTR(mtc, "config:9" );
+ PMU_FORMAT_ATTR(tsc, "config:10" );
+ PMU_FORMAT_ATTR(noretcomp, "config:11" );
++PMU_FORMAT_ATTR(ptw, "config:12" );
+ PMU_FORMAT_ATTR(mtc_period, "config:14-17" );
+ PMU_FORMAT_ATTR(cyc_thresh, "config:19-22" );
+ PMU_FORMAT_ATTR(psb_period, "config:24-27" );
+
+ static struct attribute *pt_formats_attr[] = {
+ &format_attr_cyc.attr,
++ &format_attr_pwr_evt.attr,
++ &format_attr_fup_on_ptw.attr,
+ &format_attr_mtc.attr,
+ &format_attr_tsc.attr,
+ &format_attr_noretcomp.attr,
++ &format_attr_ptw.attr,
+ &format_attr_mtc_period.attr,
+ &format_attr_cyc_thresh.attr,
+ &format_attr_psb_period.attr,
+diff --git a/arch/x86/include/asm/xen/events.h b/arch/x86/include/asm/xen/events.h
+index 608a79d5a466..e6911caf5bbf 100644
+--- a/arch/x86/include/asm/xen/events.h
++++ b/arch/x86/include/asm/xen/events.h
+@@ -20,4 +20,15 @@ static inline int xen_irqs_disabled(struct pt_regs *regs)
+ /* No need for a barrier -- XCHG is a barrier on x86. */
+ #define xchg_xen_ulong(ptr, val) xchg((ptr), (val))
+
++extern int xen_have_vector_callback;
++
++/*
++ * Events delivered via platform PCI interrupts are always
++ * routed to vcpu 0 and hence cannot be rebound.
++ */
++static inline bool xen_support_evtchn_rebind(void)
++{
++ return (!xen_hvm_domain() || xen_have_vector_callback);
++}
++
+ #endif /* _ASM_X86_XEN_EVENTS_H */
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index bd6b8c270c24..52f352b063fd 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -1875,6 +1875,7 @@ static struct irq_chip ioapic_chip __read_mostly = {
+ .irq_ack = irq_chip_ack_parent,
+ .irq_eoi = ioapic_ack_level,
+ .irq_set_affinity = ioapic_set_affinity,
++ .irq_retrigger = irq_chip_retrigger_hierarchy,
+ .flags = IRQCHIP_SKIP_SET_WAKE,
+ };
+
+@@ -1886,6 +1887,7 @@ static struct irq_chip ioapic_ir_chip __read_mostly = {
+ .irq_ack = irq_chip_ack_parent,
+ .irq_eoi = ioapic_ir_ack_level,
+ .irq_set_affinity = ioapic_set_affinity,
++ .irq_retrigger = irq_chip_retrigger_hierarchy,
+ .flags = IRQCHIP_SKIP_SET_WAKE,
+ };
+
+diff --git a/arch/x86/kernel/kprobes/common.h b/arch/x86/kernel/kprobes/common.h
+index c6ee63f927ab..d688826e5736 100644
+--- a/arch/x86/kernel/kprobes/common.h
++++ b/arch/x86/kernel/kprobes/common.h
+@@ -67,7 +67,7 @@
+ #endif
+
+ /* Ensure if the instruction can be boostable */
+-extern int can_boost(kprobe_opcode_t *instruction);
++extern int can_boost(kprobe_opcode_t *instruction, void *addr);
+ /* Recover instruction if given address is probed */
+ extern unsigned long recover_probed_instruction(kprobe_opcode_t *buf,
+ unsigned long addr);
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index eb3509338ae0..dcdaee805863 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -166,12 +166,12 @@ NOKPROBE_SYMBOL(skip_prefixes);
+ * Returns non-zero if opcode is boostable.
+ * RIP relative instructions are adjusted at copying time in 64 bits mode
+ */
+-int can_boost(kprobe_opcode_t *opcodes)
++int can_boost(kprobe_opcode_t *opcodes, void *addr)
+ {
+ kprobe_opcode_t opcode;
+ kprobe_opcode_t *orig_opcodes = opcodes;
+
+- if (search_exception_tables((unsigned long)opcodes))
++ if (search_exception_tables((unsigned long)addr))
+ return 0; /* Page fault may occur on this address. */
+
+ retry:
+@@ -416,7 +416,7 @@ static int arch_copy_kprobe(struct kprobe *p)
+ * __copy_instruction can modify the displacement of the instruction,
+ * but it doesn't affect boostable check.
+ */
+- if (can_boost(p->ainsn.insn))
++ if (can_boost(p->ainsn.insn, p->addr))
+ p->ainsn.boostable = 0;
+ else
+ p->ainsn.boostable = -1;
+diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
+index 3d1bee9d6a72..3e7c6e5a08ff 100644
+--- a/arch/x86/kernel/kprobes/opt.c
++++ b/arch/x86/kernel/kprobes/opt.c
+@@ -178,7 +178,7 @@ static int copy_optimized_instructions(u8 *dest, u8 *src)
+
+ while (len < RELATIVEJUMP_SIZE) {
+ ret = __copy_instruction(dest + len, src + len);
+- if (!ret || !can_boost(dest + len))
++ if (!ret || !can_boost(dest + len, src + len))
+ return -EINVAL;
+ len += ret;
+ }
+diff --git a/arch/x86/kernel/pci-calgary_64.c b/arch/x86/kernel/pci-calgary_64.c
+index 5d400ba1349d..d47517941bbc 100644
+--- a/arch/x86/kernel/pci-calgary_64.c
++++ b/arch/x86/kernel/pci-calgary_64.c
+@@ -296,7 +296,7 @@ static void iommu_free(struct iommu_table *tbl, dma_addr_t dma_addr,
+
+ /* were we called with bad_dma_address? */
+ badend = DMA_ERROR_CODE + (EMERGENCY_PAGES * PAGE_SIZE);
+- if (unlikely((dma_addr >= DMA_ERROR_CODE) && (dma_addr < badend))) {
++ if (unlikely(dma_addr < badend)) {
+ WARN(1, KERN_ERR "Calgary: driver tried unmapping bad DMA "
+ "address 0x%Lx\n", dma_addr);
+ return;
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index e85f6bd7b9d5..fa341c47baeb 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -861,12 +861,6 @@ void kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx, u32 *ecx, u32 *edx)
+ if (!best)
+ best = check_cpuid_limit(vcpu, function, index);
+
+- /*
+- * Perfmon not yet supported for L2 guest.
+- */
+- if (is_guest_mode(vcpu) && function == 0xa)
+- best = NULL;
+-
+ if (best) {
+ *eax = best->eax;
+ *ebx = best->ebx;
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index cce7d2e3be15..cedd6745ccbe 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -8197,8 +8197,6 @@ static bool nested_vmx_exit_handled(struct kvm_vcpu *vcpu)
+ case EXIT_REASON_TASK_SWITCH:
+ return true;
+ case EXIT_REASON_CPUID:
+- if (kvm_register_read(vcpu, VCPU_REGS_RAX) == 0xa)
+- return false;
+ return true;
+ case EXIT_REASON_HLT:
+ return nested_cpu_has(vmcs12, CPU_BASED_HLT_EXITING);
+@@ -8285,6 +8283,9 @@ static bool nested_vmx_exit_handled(struct kvm_vcpu *vcpu)
+ return nested_cpu_has2(vmcs12, SECONDARY_EXEC_XSAVES);
+ case EXIT_REASON_PREEMPTION_TIMER:
+ return false;
++ case EXIT_REASON_PML_FULL:
++ /* We don't expose PML support to L1. */
++ return false;
+ default:
+ return true;
+ }
+@@ -10318,6 +10319,18 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+
+ }
+
++ if (enable_pml) {
++ /*
++ * Conceptually we want to copy the PML address and index from
++ * vmcs01 here, and then back to vmcs01 on nested vmexit. But,
++ * since we always flush the log on each vmexit, this happens
++ * to be equivalent to simply resetting the fields in vmcs02.
++ */
++ ASSERT(vmx->pml_pg);
++ vmcs_write64(PML_ADDRESS, page_to_phys(vmx->pml_pg));
++ vmcs_write16(GUEST_PML_INDEX, PML_ENTITY_NUM - 1);
++ }
++
+ if (nested_cpu_has_ept(vmcs12)) {
+ kvm_mmu_unload(vcpu);
+ nested_ept_init_mmu_context(vcpu);
+diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
+index 292ab0364a89..c4b3646bd04c 100644
+--- a/arch/x86/pci/xen.c
++++ b/arch/x86/pci/xen.c
+@@ -447,7 +447,7 @@ void __init xen_msi_init(void)
+
+ int __init pci_xen_hvm_init(void)
+ {
+- if (!xen_feature(XENFEAT_hvm_pirqs))
++ if (!xen_have_vector_callback || !xen_feature(XENFEAT_hvm_pirqs))
+ return 0;
+
+ #ifdef CONFIG_ACPI
+diff --git a/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c b/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
+index 3f1f1c77d090..10bad1e55fcc 100644
+--- a/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
++++ b/arch/x86/platform/intel-mid/device_libs/platform_mrfld_wdt.c
+@@ -19,7 +19,7 @@
+ #include <asm/intel_scu_ipc.h>
+ #include <asm/io_apic.h>
+
+-#define TANGIER_EXT_TIMER0_MSI 15
++#define TANGIER_EXT_TIMER0_MSI 12
+
+ static struct platform_device wdt_dev = {
+ .name = "intel_mid_wdt",
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index 51ef95232725..6623867cc0d4 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -137,6 +137,8 @@ struct shared_info xen_dummy_shared_info;
+ void *xen_initial_gdt;
+
+ RESERVE_BRK(shared_info_page_brk, PAGE_SIZE);
++__read_mostly int xen_have_vector_callback;
++EXPORT_SYMBOL_GPL(xen_have_vector_callback);
+
+ static int xen_cpu_up_prepare(unsigned int cpu);
+ static int xen_cpu_up_online(unsigned int cpu);
+@@ -1508,7 +1510,10 @@ static void __init xen_pvh_early_guest_init(void)
+ if (!xen_feature(XENFEAT_auto_translated_physmap))
+ return;
+
+- BUG_ON(!xen_feature(XENFEAT_hvm_callback_vector));
++ if (!xen_feature(XENFEAT_hvm_callback_vector))
++ return;
++
++ xen_have_vector_callback = 1;
+
+ xen_pvh_early_cpu_init(0, false);
+ xen_pvh_set_cr_flags(0);
+@@ -1847,7 +1852,9 @@ static int xen_cpu_up_prepare(unsigned int cpu)
+ xen_vcpu_setup(cpu);
+ }
+
+- if (xen_pv_domain() || xen_feature(XENFEAT_hvm_safe_pvclock))
++ if (xen_pv_domain() ||
++ (xen_have_vector_callback &&
++ xen_feature(XENFEAT_hvm_safe_pvclock)))
+ xen_setup_timer(cpu);
+
+ rc = xen_smp_intr_init(cpu);
+@@ -1863,7 +1870,9 @@ static int xen_cpu_dead(unsigned int cpu)
+ {
+ xen_smp_intr_free(cpu);
+
+- if (xen_pv_domain() || xen_feature(XENFEAT_hvm_safe_pvclock))
++ if (xen_pv_domain() ||
++ (xen_have_vector_callback &&
++ xen_feature(XENFEAT_hvm_safe_pvclock)))
+ xen_teardown_timer(cpu);
+
+ return 0;
+@@ -1902,8 +1911,8 @@ static void __init xen_hvm_guest_init(void)
+
+ xen_panic_handler_init();
+
+- BUG_ON(!xen_feature(XENFEAT_hvm_callback_vector));
+-
++ if (xen_feature(XENFEAT_hvm_callback_vector))
++ xen_have_vector_callback = 1;
+ xen_hvm_smp_init();
+ WARN_ON(xen_cpuhp_setup());
+ xen_unplug_emulated_devices();
+@@ -1941,7 +1950,7 @@ bool xen_hvm_need_lapic(void)
+ return false;
+ if (!xen_hvm_domain())
+ return false;
+- if (xen_feature(XENFEAT_hvm_pirqs))
++ if (xen_feature(XENFEAT_hvm_pirqs) && xen_have_vector_callback)
+ return false;
+ return true;
+ }
+diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
+index 311acad7dad2..137afbbd0590 100644
+--- a/arch/x86/xen/smp.c
++++ b/arch/x86/xen/smp.c
+@@ -765,6 +765,8 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
+
+ void __init xen_hvm_smp_init(void)
+ {
++ if (!xen_have_vector_callback)
++ return;
+ smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
+ smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
+ smp_ops.cpu_die = xen_cpu_die;
+diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
+index 1e69956d7852..4535627cf532 100644
+--- a/arch/x86/xen/time.c
++++ b/arch/x86/xen/time.c
+@@ -432,6 +432,11 @@ static void xen_hvm_setup_cpu_clockevents(void)
+
+ void __init xen_hvm_init_time_ops(void)
+ {
++ /* A vector callback is needed, otherwise we cannot receive interrupts
++ * on cpu > 0; at this point we don't know how many cpus are
++ * available */
++ if (!xen_have_vector_callback)
++ return;
+ if (!xen_feature(XENFEAT_hvm_safe_pvclock)) {
+ printk(KERN_INFO "Xen doesn't support pvclock on HVM,"
+ "disable pv timer\n");
+diff --git a/block/blk-integrity.c b/block/blk-integrity.c
+index d69c5c79f98e..319f2e4f4a8b 100644
+--- a/block/blk-integrity.c
++++ b/block/blk-integrity.c
+@@ -417,7 +417,7 @@ void blk_integrity_register(struct gendisk *disk, struct blk_integrity *template
+ bi->tuple_size = template->tuple_size;
+ bi->tag_size = template->tag_size;
+
+- blk_integrity_revalidate(disk);
++ disk->queue->backing_dev_info.capabilities |= BDI_CAP_STABLE_WRITES;
+ }
+ EXPORT_SYMBOL(blk_integrity_register);
+
+@@ -430,26 +430,11 @@ EXPORT_SYMBOL(blk_integrity_register);
+ */
+ void blk_integrity_unregister(struct gendisk *disk)
+ {
+- blk_integrity_revalidate(disk);
++ disk->queue->backing_dev_info.capabilities &= ~BDI_CAP_STABLE_WRITES;
+ memset(&disk->queue->integrity, 0, sizeof(struct blk_integrity));
+ }
+ EXPORT_SYMBOL(blk_integrity_unregister);
+
+-void blk_integrity_revalidate(struct gendisk *disk)
+-{
+- struct blk_integrity *bi = &disk->queue->integrity;
+-
+- if (!(disk->flags & GENHD_FL_UP))
+- return;
+-
+- if (bi->profile)
+- disk->queue->backing_dev_info.capabilities |=
+- BDI_CAP_STABLE_WRITES;
+- else
+- disk->queue->backing_dev_info.capabilities &=
+- ~BDI_CAP_STABLE_WRITES;
+-}
+-
+ void blk_integrity_add(struct gendisk *disk)
+ {
+ if (kobject_init_and_add(&disk->integrity_kobj, &integrity_ktype,
+diff --git a/block/partition-generic.c b/block/partition-generic.c
+index 7afb9907821f..0171a2faad68 100644
+--- a/block/partition-generic.c
++++ b/block/partition-generic.c
+@@ -497,7 +497,6 @@ int rescan_partitions(struct gendisk *disk, struct block_device *bdev)
+
+ if (disk->fops->revalidate_disk)
+ disk->fops->revalidate_disk(disk);
+- blk_integrity_revalidate(disk);
+ check_disk_size_change(disk, bdev);
+ bdev->bd_invalidated = 0;
+ if (!get_capacity(disk) || !(state = check_partition(disk, bdev)))
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index a77262d31911..c406343848da 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -141,7 +141,7 @@ static void tpm_dev_release(struct device *dev)
+ * Allocates a new struct tpm_chip instance and assigns a free
+ * device number for it. Must be paired with put_device(&chip->dev).
+ */
+-struct tpm_chip *tpm_chip_alloc(struct device *dev,
++struct tpm_chip *tpm_chip_alloc(struct device *pdev,
+ const struct tpm_class_ops *ops)
+ {
+ struct tpm_chip *chip;
+@@ -160,7 +160,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *dev,
+ rc = idr_alloc(&dev_nums_idr, NULL, 0, TPM_NUM_DEVICES, GFP_KERNEL);
+ mutex_unlock(&idr_lock);
+ if (rc < 0) {
+- dev_err(dev, "No available tpm device numbers\n");
++ dev_err(pdev, "No available tpm device numbers\n");
+ kfree(chip);
+ return ERR_PTR(rc);
+ }
+@@ -170,7 +170,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *dev,
+
+ chip->dev.class = tpm_class;
+ chip->dev.release = tpm_dev_release;
+- chip->dev.parent = dev;
++ chip->dev.parent = pdev;
+ chip->dev.groups = chip->groups;
+
+ if (chip->dev_num == 0)
+@@ -182,7 +182,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *dev,
+ if (rc)
+ goto out;
+
+- if (!dev)
++ if (!pdev)
+ chip->flags |= TPM_CHIP_FLAG_VIRTUAL;
+
+ cdev_init(&chip->cdev, &tpm_fops);
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 1ae976894257..f9613f55e7bc 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -529,6 +529,11 @@ static inline void tpm_add_ppi(struct tpm_chip *chip)
+ }
+ #endif
+
++static inline u32 tpm2_rc_value(u32 rc)
++{
++ return (rc & BIT(7)) ? rc & 0xff : rc;
++}
++
+ int tpm2_pcr_read(struct tpm_chip *chip, int pcr_idx, u8 *res_buf);
+ int tpm2_pcr_extend(struct tpm_chip *chip, int pcr_idx, const u8 *hash);
+ int tpm2_get_random(struct tpm_chip *chip, u8 *out, size_t max);
+diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
+index da5b782a9731..6a397c8bf033 100644
+--- a/drivers/char/tpm/tpm2-cmd.c
++++ b/drivers/char/tpm/tpm2-cmd.c
+@@ -529,7 +529,7 @@ int tpm2_seal_trusted(struct tpm_chip *chip,
+ tpm_buf_destroy(&buf);
+
+ if (rc > 0) {
+- if ((rc & TPM2_RC_HASH) == TPM2_RC_HASH)
++ if (tpm2_rc_value(rc) == TPM2_RC_HASH)
+ rc = -EINVAL;
+ else
+ rc = -EPERM;
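
The tpm2_rc_value() helper added above masks format-one TPM 2.0 return codes (bit 7 set) down to the bare error number, so the parameter/handle index encoded in the upper bits no longer defeats comparisons such as the TPM2_RC_HASH check in tpm2_seal_trusted(). A standalone sketch; the concrete code values below are made up for illustration and are not taken from the TPM specification:

#include <stdio.h>
#include <stdint.h>

#define BIT(n) (1U << (n))

/* Same logic as the new helper: strip the parameter number from
 * format-one response codes, pass format-zero codes through. */
static uint32_t tpm2_rc_value(uint32_t rc)
{
	return (rc & BIT(7)) ? rc & 0xff : rc;
}

int main(void)
{
	uint32_t base = 0x0000008b;                /* made-up format-one error, bit 7 set */
	uint32_t from_param_2 = base | (2 << 8);   /* same error reported for parameter 2 */
	uint32_t unrelated = 0x0000009f;           /* different code that merely contains base's bits */

	/* Stripped comparison matches the right code regardless of parameter number. */
	printf("param-2 code matches base: %d\n", tpm2_rc_value(from_param_2) == base);  /* 1 */

	/* The old bit-mask test also matched unrelated codes; the stripped value does not. */
	printf("old mask test, unrelated:  %d\n", (unrelated & base) == base);            /* 1: false match */
	printf("stripped test, unrelated:  %d\n", tpm2_rc_value(unrelated) == base);      /* 0 */
	return 0;
}
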
+diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile
+index 925081ec14c0..42042c0a936c 100644
+--- a/drivers/clk/Makefile
++++ b/drivers/clk/Makefile
+@@ -87,6 +87,8 @@ obj-y += ti/
+ obj-$(CONFIG_CLK_UNIPHIER) += uniphier/
+ obj-$(CONFIG_ARCH_U8500) += ux500/
+ obj-$(CONFIG_COMMON_CLK_VERSATILE) += versatile/
++ifeq ($(CONFIG_COMMON_CLK), y)
+ obj-$(CONFIG_X86) += x86/
++endif
+ obj-$(CONFIG_ARCH_ZX) += zte/
+ obj-$(CONFIG_ARCH_ZYNQ) += zynq/
+diff --git a/drivers/clk/rockchip/clk-rk3036.c b/drivers/clk/rockchip/clk-rk3036.c
+index 924f560dcf80..dcde70f4c105 100644
+--- a/drivers/clk/rockchip/clk-rk3036.c
++++ b/drivers/clk/rockchip/clk-rk3036.c
+@@ -127,7 +127,7 @@ PNAME(mux_ddrphy_p) = { "dpll_ddr", "gpll_ddr" };
+ PNAME(mux_pll_src_3plls_p) = { "apll", "dpll", "gpll" };
+ PNAME(mux_timer_p) = { "xin24m", "pclk_peri_src" };
+
+-PNAME(mux_pll_src_apll_dpll_gpll_usb480m_p) = { "apll", "dpll", "gpll" "usb480m" };
++PNAME(mux_pll_src_apll_dpll_gpll_usb480m_p) = { "apll", "dpll", "gpll", "usb480m" };
+
+ PNAME(mux_mmc_src_p) = { "apll", "dpll", "gpll", "xin24m" };
+ PNAME(mux_i2s_pre_p) = { "i2s_src", "i2s_frac", "ext_i2s", "xin12m" };
+diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
+index e58639ea53b1..de2f87bc91d5 100644
+--- a/drivers/crypto/caam/caamhash.c
++++ b/drivers/crypto/caam/caamhash.c
+@@ -109,7 +109,6 @@ struct caam_hash_ctx {
+ dma_addr_t sh_desc_digest_dma;
+ struct device *jrdev;
+ u8 key[CAAM_MAX_HASH_KEY_SIZE];
+- dma_addr_t key_dma;
+ int ctx_len;
+ struct alginfo adata;
+ };
+@@ -149,6 +148,7 @@ static inline int map_seq_out_ptr_ctx(u32 *desc, struct device *jrdev,
+ ctx_len, DMA_FROM_DEVICE);
+ if (dma_mapping_error(jrdev, state->ctx_dma)) {
+ dev_err(jrdev, "unable to map ctx\n");
++ state->ctx_dma = 0;
+ return -ENOMEM;
+ }
+
+@@ -209,6 +209,7 @@ static inline int ctx_map_to_sec4_sg(u32 *desc, struct device *jrdev,
+ state->ctx_dma = dma_map_single(jrdev, state->caam_ctx, ctx_len, flag);
+ if (dma_mapping_error(jrdev, state->ctx_dma)) {
+ dev_err(jrdev, "unable to map ctx\n");
++ state->ctx_dma = 0;
+ return -ENOMEM;
+ }
+
+@@ -420,7 +421,6 @@ static int ahash_setkey(struct crypto_ahash *ahash,
+ const u8 *key, unsigned int keylen)
+ {
+ struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+- struct device *jrdev = ctx->jrdev;
+ int blocksize = crypto_tfm_alg_blocksize(&ahash->base);
+ int digestsize = crypto_ahash_digestsize(ahash);
+ int ret;
+@@ -448,28 +448,14 @@ static int ahash_setkey(struct crypto_ahash *ahash,
+ if (ret)
+ goto bad_free_key;
+
+- ctx->key_dma = dma_map_single(jrdev, ctx->key, ctx->adata.keylen_pad,
+- DMA_TO_DEVICE);
+- if (dma_mapping_error(jrdev, ctx->key_dma)) {
+- dev_err(jrdev, "unable to map key i/o memory\n");
+- ret = -ENOMEM;
+- goto error_free_key;
+- }
+ #ifdef DEBUG
+ print_hex_dump(KERN_ERR, "ctx.key@"__stringify(__LINE__)": ",
+ DUMP_PREFIX_ADDRESS, 16, 4, ctx->key,
+ ctx->adata.keylen_pad, 1);
+ #endif
+
+- ret = ahash_set_sh_desc(ahash);
+- if (ret) {
+- dma_unmap_single(jrdev, ctx->key_dma, ctx->adata.keylen_pad,
+- DMA_TO_DEVICE);
+- }
+-
+- error_free_key:
+ kfree(hashed_key);
+- return ret;
++ return ahash_set_sh_desc(ahash);
+ bad_free_key:
+ kfree(hashed_key);
+ crypto_ahash_set_flags(ahash, CRYPTO_TFM_RES_BAD_KEY_LEN);
+@@ -516,8 +502,10 @@ static inline void ahash_unmap_ctx(struct device *dev,
+ struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+ struct caam_hash_state *state = ahash_request_ctx(req);
+
+- if (state->ctx_dma)
++ if (state->ctx_dma) {
+ dma_unmap_single(dev, state->ctx_dma, ctx->ctx_len, flag);
++ state->ctx_dma = 0;
++ }
+ ahash_unmap(dev, edesc, req, dst_len);
+ }
+
+@@ -1497,6 +1485,7 @@ static int ahash_init(struct ahash_request *req)
+ state->finup = ahash_finup_first;
+ state->final = ahash_final_no_ctx;
+
++ state->ctx_dma = 0;
+ state->current_buf = 0;
+ state->buf_dma = 0;
+ state->buflen_0 = 0;
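
Several caamhash hunks above follow one pattern: state->ctx_dma is cleared to 0 at init, on mapping failure and after unmapping, and ahash_unmap_ctx() only unmaps when it is non-zero. A standalone sketch of that "clear the handle after release" idiom with a stand-in resource; the names and the fake map/unmap helpers are illustrative, not the CAAM driver API:

#include <stdio.h>

struct state {
	int ctx_dma;   /* 0 means "not mapped", mirroring the driver's sentinel */
};

static int fake_map(void)     { return 42; }              /* pretend DMA handle */
static void fake_unmap(int h) { printf("unmap %d\n", h); }

static void unmap_ctx(struct state *st)
{
	/* Guard + clear: calling this twice no longer double-unmaps. */
	if (st->ctx_dma) {
		fake_unmap(st->ctx_dma);
		st->ctx_dma = 0;
	}
}

int main(void)
{
	struct state st = { .ctx_dma = 0 };   /* explicit init, as in ahash_init() */

	st.ctx_dma = fake_map();
	unmap_ctx(&st);   /* unmaps once */
	unmap_ctx(&st);   /* now a no-op instead of a double unmap */
	return 0;
}
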
+diff --git a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c
+index 9b0696735ba1..d8659eba73d4 100644
+--- a/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c
++++ b/drivers/gpu/drm/hisilicon/hibmc/hibmc_drm_fbdev.c
+@@ -121,7 +121,7 @@ static int hibmc_drm_fb_create(struct drm_fb_helper *helper,
+
+ hi_fbdev->fb = hibmc_framebuffer_init(priv->dev, &mode_cmd, gobj);
+ if (IS_ERR(hi_fbdev->fb)) {
+- ret = PTR_ERR(info);
++ ret = PTR_ERR(hi_fbdev->fb);
+ DRM_ERROR("failed to initialize framebuffer: %d\n", ret);
+ goto out_release_fbi;
+ }
+diff --git a/drivers/gpu/drm/mxsfb/mxsfb_drv.c b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+index 79a18bf48b54..955441f71500 100644
+--- a/drivers/gpu/drm/mxsfb/mxsfb_drv.c
++++ b/drivers/gpu/drm/mxsfb/mxsfb_drv.c
+@@ -395,8 +395,8 @@ static int mxsfb_probe(struct platform_device *pdev)
+ pdev->id_entry = of_id->data;
+
+ drm = drm_dev_alloc(&mxsfb_driver, &pdev->dev);
+- if (!drm)
+- return -ENOMEM;
++ if (IS_ERR(drm))
++ return PTR_ERR(drm);
+
+ ret = mxsfb_load(drm, 0);
+ if (ret)
+diff --git a/drivers/gpu/drm/sti/sti_gdp.c b/drivers/gpu/drm/sti/sti_gdp.c
+index 81df3097b545..7fd496f99385 100644
+--- a/drivers/gpu/drm/sti/sti_gdp.c
++++ b/drivers/gpu/drm/sti/sti_gdp.c
+@@ -66,7 +66,9 @@ static struct gdp_format_to_str {
+ #define GAM_GDP_ALPHARANGE_255 BIT(5)
+ #define GAM_GDP_AGC_FULL_RANGE 0x00808080
+ #define GAM_GDP_PPT_IGNORE (BIT(1) | BIT(0))
+-#define GAM_GDP_SIZE_MAX 0x7FF
++
++#define GAM_GDP_SIZE_MAX_WIDTH 3840
++#define GAM_GDP_SIZE_MAX_HEIGHT 2160
+
+ #define GDP_NODE_NB_BANK 2
+ #define GDP_NODE_PER_FIELD 2
+@@ -633,8 +635,8 @@ static int sti_gdp_atomic_check(struct drm_plane *drm_plane,
+ /* src_x are in 16.16 format */
+ src_x = state->src_x >> 16;
+ src_y = state->src_y >> 16;
+- src_w = clamp_val(state->src_w >> 16, 0, GAM_GDP_SIZE_MAX);
+- src_h = clamp_val(state->src_h >> 16, 0, GAM_GDP_SIZE_MAX);
++ src_w = clamp_val(state->src_w >> 16, 0, GAM_GDP_SIZE_MAX_WIDTH);
++ src_h = clamp_val(state->src_h >> 16, 0, GAM_GDP_SIZE_MAX_HEIGHT);
+
+ format = sti_gdp_fourcc2format(fb->pixel_format);
+ if (format == -1) {
+@@ -732,8 +734,8 @@ static void sti_gdp_atomic_update(struct drm_plane *drm_plane,
+ /* src_x are in 16.16 format */
+ src_x = state->src_x >> 16;
+ src_y = state->src_y >> 16;
+- src_w = clamp_val(state->src_w >> 16, 0, GAM_GDP_SIZE_MAX);
+- src_h = clamp_val(state->src_h >> 16, 0, GAM_GDP_SIZE_MAX);
++ src_w = clamp_val(state->src_w >> 16, 0, GAM_GDP_SIZE_MAX_WIDTH);
++ src_h = clamp_val(state->src_h >> 16, 0, GAM_GDP_SIZE_MAX_HEIGHT);
+
+ list = sti_gdp_get_free_nodes(gdp);
+ top_field = list->top_field;
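
The sti_gdp change above replaces the single 0x7FF limit with separate maxima for width (3840) and height (2160) when clamping the 16.16 fixed-point source size. A standalone sketch of the clamp; the kernel's clamp_val() is re-implemented here only for the example:

#include <stdio.h>

/* Stand-in for the kernel's clamp_val(): clamp v into [lo, hi]. */
#define clamp_val(v, lo, hi) ((v) < (lo) ? (lo) : ((v) > (hi) ? (hi) : (v)))

#define GAM_GDP_SIZE_MAX_WIDTH  3840u
#define GAM_GDP_SIZE_MAX_HEIGHT 2160u

int main(void)
{
	unsigned int src_w_1616 = 4096u << 16;   /* 4096 pixels in 16.16 fixed point */
	unsigned int src_h_1616 = 2160u << 16;

	unsigned int src_w = clamp_val(src_w_1616 >> 16, 0u, GAM_GDP_SIZE_MAX_WIDTH);
	unsigned int src_h = clamp_val(src_h_1616 >> 16, 0u, GAM_GDP_SIZE_MAX_HEIGHT);

	/* The old code clamped both dimensions to 0x7FF (2047), cutting 4K planes
	 * short; now only genuinely oversized values are reduced. */
	printf("src_w = %u, src_h = %u\n", src_w, src_h);  /* 3840, 2160 */
	return 0;
}
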
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+index 68ef993ab431..88169141bef5 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+@@ -66,8 +66,11 @@ static int ttm_bo_vm_fault_idle(struct ttm_buffer_object *bo,
+ if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT)
+ goto out_unlock;
+
++ ttm_bo_reference(bo);
+ up_read(&vma->vm_mm->mmap_sem);
+ (void) dma_fence_wait(bo->moving, true);
++ ttm_bo_unreserve(bo);
++ ttm_bo_unref(&bo);
+ goto out_unlock;
+ }
+
+@@ -120,8 +123,10 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+
+ if (vmf->flags & FAULT_FLAG_ALLOW_RETRY) {
+ if (!(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
++ ttm_bo_reference(bo);
+ up_read(&vma->vm_mm->mmap_sem);
+ (void) ttm_bo_wait_unreserved(bo);
++ ttm_bo_unref(&bo);
+ }
+
+ return VM_FAULT_RETRY;
+@@ -166,6 +171,13 @@ static int ttm_bo_vm_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+ ret = ttm_bo_vm_fault_idle(bo, vma, vmf);
+ if (unlikely(ret != 0)) {
+ retval = ret;
++
++ if (retval == VM_FAULT_RETRY &&
++ !(vmf->flags & FAULT_FLAG_RETRY_NOWAIT)) {
++ /* The BO has already been unreserved. */
++ return retval;
++ }
++
+ goto out_unlock;
+ }
+
+diff --git a/drivers/hwmon/it87.c b/drivers/hwmon/it87.c
+index b99c1df48156..81853ee85f6a 100644
+--- a/drivers/hwmon/it87.c
++++ b/drivers/hwmon/it87.c
+@@ -2600,7 +2600,7 @@ static int __init it87_find(int sioaddr, unsigned short *address,
+
+ /* Check for pwm4 */
+ reg = superio_inb(sioaddr, IT87_SIO_GPIO4_REG);
+- if (!(reg & BIT(2)))
++ if (reg & BIT(2))
+ sio_data->skip_pwm |= BIT(3);
+
+ /* Check for pwm2, fan2 */
+diff --git a/drivers/leds/leds-ktd2692.c b/drivers/leds/leds-ktd2692.c
+index bf23ba191ad0..45296aaca9da 100644
+--- a/drivers/leds/leds-ktd2692.c
++++ b/drivers/leds/leds-ktd2692.c
+@@ -270,15 +270,15 @@ static int ktd2692_parse_dt(struct ktd2692_context *led, struct device *dev,
+ return -ENXIO;
+
+ led->ctrl_gpio = devm_gpiod_get(dev, "ctrl", GPIOD_ASIS);
+- if (IS_ERR(led->ctrl_gpio)) {
+- ret = PTR_ERR(led->ctrl_gpio);
++ ret = PTR_ERR_OR_ZERO(led->ctrl_gpio);
++ if (ret) {
+ dev_err(dev, "cannot get ctrl-gpios %d\n", ret);
+ return ret;
+ }
+
+ led->aux_gpio = devm_gpiod_get(dev, "aux", GPIOD_ASIS);
+- if (IS_ERR(led->aux_gpio)) {
+- ret = PTR_ERR(led->aux_gpio);
++ ret = PTR_ERR_OR_ZERO(led->aux_gpio);
++ if (ret) {
+ dev_err(dev, "cannot get aux-gpios %d\n", ret);
+ return ret;
+ }
+diff --git a/drivers/mtd/nand/Kconfig b/drivers/mtd/nand/Kconfig
+index 9ce5dcb4abd0..1408d2ab89a6 100644
+--- a/drivers/mtd/nand/Kconfig
++++ b/drivers/mtd/nand/Kconfig
+@@ -426,6 +426,7 @@ config MTD_NAND_ORION
+
+ config MTD_NAND_OXNAS
+ tristate "NAND Flash support for Oxford Semiconductor SoC"
++ depends on ARCH_OXNAS || COMPILE_TEST
+ depends on HAS_IOMEM
+ help
+ This enables the NAND flash controller on Oxford Semiconductor SoCs.
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 4fcc6a84a087..8bc5785da9d5 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2869,7 +2869,8 @@ static int bnxt_alloc_ntp_fltrs(struct bnxt *bp)
+ INIT_HLIST_HEAD(&bp->ntp_fltr_hash_tbl[i]);
+
+ bp->ntp_fltr_count = 0;
+- bp->ntp_fltr_bmap = kzalloc(BITS_TO_LONGS(BNXT_NTP_FLTR_MAX_FLTR),
++ bp->ntp_fltr_bmap = kcalloc(BITS_TO_LONGS(BNXT_NTP_FLTR_MAX_FLTR),
++ sizeof(long),
+ GFP_KERNEL);
+
+ if (!bp->ntp_fltr_bmap)
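
The bnxt hunk above is an undersized-allocation fix: kzalloc() was given a count of longs where a byte count was expected, so the bitmap came out eight times too small on 64-bit; kcalloc(count, sizeof(long), ...) allocates the intended size. A standalone sketch of the arithmetic; BITS_TO_LONGS is re-implemented for the example and the 128-entry count is only an illustrative stand-in for BNXT_NTP_FLTR_MAX_FLTR:

#include <stdio.h>

/* Stand-in for the kernel helper: number of longs needed to hold n bits. */
#define BITS_TO_LONGS(n) (((n) + 8 * sizeof(long) - 1) / (8 * sizeof(long)))

int main(void)
{
	unsigned long max_fltr = 128;   /* illustrative stand-in for BNXT_NTP_FLTR_MAX_FLTR */

	size_t old_bytes = BITS_TO_LONGS(max_fltr);                 /* what kzalloc() was asked for: 2 */
	size_t new_bytes = BITS_TO_LONGS(max_fltr) * sizeof(long);  /* what kcalloc() allocates: 16 */

	printf("bitmap needs %zu bytes, old code allocated %zu\n", new_bytes, old_bytes);
	return 0;
}
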
+diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c
+index baba2db9d9c2..0b92b88513a7 100644
+--- a/drivers/net/ethernet/cadence/macb.c
++++ b/drivers/net/ethernet/cadence/macb.c
+@@ -432,15 +432,17 @@ static int macb_mii_probe(struct net_device *dev)
+ }
+
+ pdata = dev_get_platdata(&bp->pdev->dev);
+- if (pdata && gpio_is_valid(pdata->phy_irq_pin)) {
+- ret = devm_gpio_request(&bp->pdev->dev, pdata->phy_irq_pin,
+- "phy int");
+- if (!ret) {
+- phy_irq = gpio_to_irq(pdata->phy_irq_pin);
+- phydev->irq = (phy_irq < 0) ? PHY_POLL : phy_irq;
++ if (pdata) {
++ if (gpio_is_valid(pdata->phy_irq_pin)) {
++ ret = devm_gpio_request(&bp->pdev->dev,
++ pdata->phy_irq_pin, "phy int");
++ if (!ret) {
++ phy_irq = gpio_to_irq(pdata->phy_irq_pin);
++ phydev->irq = (phy_irq < 0) ? PHY_POLL : phy_irq;
++ }
++ } else {
++ phydev->irq = PHY_POLL;
+ }
+- } else {
+- phydev->irq = PHY_POLL;
+ }
+
+ /* attach the mac to the phy */
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 7074b40ebd7f..dec5d563ab19 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -1244,7 +1244,7 @@ static int geneve_newlink(struct net *net, struct net_device *dev,
+ metadata = true;
+
+ if (data[IFLA_GENEVE_UDP_CSUM] &&
+- !nla_get_u8(data[IFLA_GENEVE_UDP_CSUM]))
++ nla_get_u8(data[IFLA_GENEVE_UDP_CSUM]))
+ info.key.tun_flags |= TUNNEL_CSUM;
+
+ if (data[IFLA_GENEVE_UDP_ZERO_CSUM6_TX] &&
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 8420069594b3..4b14d2f62d62 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -617,7 +617,8 @@ static void macsec_encrypt_done(struct crypto_async_request *base, int err)
+
+ static struct aead_request *macsec_alloc_req(struct crypto_aead *tfm,
+ unsigned char **iv,
+- struct scatterlist **sg)
++ struct scatterlist **sg,
++ int num_frags)
+ {
+ size_t size, iv_offset, sg_offset;
+ struct aead_request *req;
+@@ -629,7 +630,7 @@ static struct aead_request *macsec_alloc_req(struct crypto_aead *tfm,
+
+ size = ALIGN(size, __alignof__(struct scatterlist));
+ sg_offset = size;
+- size += sizeof(struct scatterlist) * (MAX_SKB_FRAGS + 1);
++ size += sizeof(struct scatterlist) * num_frags;
+
+ tmp = kmalloc(size, GFP_ATOMIC);
+ if (!tmp)
+@@ -649,6 +650,7 @@ static struct sk_buff *macsec_encrypt(struct sk_buff *skb,
+ {
+ int ret;
+ struct scatterlist *sg;
++ struct sk_buff *trailer;
+ unsigned char *iv;
+ struct ethhdr *eth;
+ struct macsec_eth_header *hh;
+@@ -723,7 +725,14 @@ static struct sk_buff *macsec_encrypt(struct sk_buff *skb,
+ return ERR_PTR(-EINVAL);
+ }
+
+- req = macsec_alloc_req(tx_sa->key.tfm, &iv, &sg);
++ ret = skb_cow_data(skb, 0, &trailer);
++ if (unlikely(ret < 0)) {
++ macsec_txsa_put(tx_sa);
++ kfree_skb(skb);
++ return ERR_PTR(ret);
++ }
++
++ req = macsec_alloc_req(tx_sa->key.tfm, &iv, &sg, ret);
+ if (!req) {
+ macsec_txsa_put(tx_sa);
+ kfree_skb(skb);
+@@ -732,7 +741,7 @@ static struct sk_buff *macsec_encrypt(struct sk_buff *skb,
+
+ macsec_fill_iv(iv, secy->sci, pn);
+
+- sg_init_table(sg, MAX_SKB_FRAGS + 1);
++ sg_init_table(sg, ret);
+ skb_to_sgvec(skb, sg, 0, skb->len);
+
+ if (tx_sc->encrypt) {
+@@ -914,6 +923,7 @@ static struct sk_buff *macsec_decrypt(struct sk_buff *skb,
+ {
+ int ret;
+ struct scatterlist *sg;
++ struct sk_buff *trailer;
+ unsigned char *iv;
+ struct aead_request *req;
+ struct macsec_eth_header *hdr;
+@@ -924,7 +934,12 @@ static struct sk_buff *macsec_decrypt(struct sk_buff *skb,
+ if (!skb)
+ return ERR_PTR(-ENOMEM);
+
+- req = macsec_alloc_req(rx_sa->key.tfm, &iv, &sg);
++ ret = skb_cow_data(skb, 0, &trailer);
++ if (unlikely(ret < 0)) {
++ kfree_skb(skb);
++ return ERR_PTR(ret);
++ }
++ req = macsec_alloc_req(rx_sa->key.tfm, &iv, &sg, ret);
+ if (!req) {
+ kfree_skb(skb);
+ return ERR_PTR(-ENOMEM);
+@@ -933,7 +948,7 @@ static struct sk_buff *macsec_decrypt(struct sk_buff *skb,
+ hdr = (struct macsec_eth_header *)skb->data;
+ macsec_fill_iv(iv, sci, ntohl(hdr->packet_number));
+
+- sg_init_table(sg, MAX_SKB_FRAGS + 1);
++ sg_init_table(sg, ret);
+ skb_to_sgvec(skb, sg, 0, skb->len);
+
+ if (hdr->tci_an & MACSEC_TCI_E) {
+@@ -2713,7 +2728,7 @@ static netdev_tx_t macsec_start_xmit(struct sk_buff *skb,
+ }
+
+ #define MACSEC_FEATURES \
+- (NETIF_F_SG | NETIF_F_HIGHDMA)
++ (NETIF_F_SG | NETIF_F_HIGHDMA | NETIF_F_FRAGLIST)
+ static struct lock_class_key macsec_netdev_addr_lock_key;
+
+ static int macsec_dev_init(struct net_device *dev)
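
The macsec hunks above size the scatterlist in macsec_alloc_req() to the fragment count returned by skb_cow_data() instead of always reserving MAX_SKB_FRAGS + 1 entries, which is what makes advertising NETIF_F_FRAGLIST safe. A standalone sketch mirroring the combined request/IV/scatterlist size computation; the struct layout and the byte sizes below are assumptions for illustration, not the real kernel structures:

#include <stdio.h>
#include <stddef.h>

#define ALIGN(x, a) (((x) + (a) - 1) & ~((size_t)(a) - 1))
#define MAX_SKB_FRAGS 17   /* typical value, assumed for the sketch */

struct fake_scatterlist { void *page; unsigned int off, len; };  /* stand-in */

/* Mirror of the layout in macsec_alloc_req(): request header, then the IV,
 * then num_frags scatterlist entries, all in one allocation. */
static size_t req_alloc_size(size_t req_size, size_t ivsize, int num_frags)
{
	size_t size = req_size;
	size += ivsize;                                       /* iv_offset = req_size */
	size = ALIGN(size, _Alignof(struct fake_scatterlist));
	size += sizeof(struct fake_scatterlist) * num_frags;  /* sg_offset */
	return size;
}

int main(void)
{
	size_t req_size = 128, ivsize = 12;   /* made-up sizes, only for comparison */

	/* The old code always reserved MAX_SKB_FRAGS + 1 entries... */
	printf("fixed-size allocation:  %zu bytes\n",
	       req_alloc_size(req_size, ivsize, MAX_SKB_FRAGS + 1));
	/* ...which is too small once a fraglist skb needs more entries. */
	printf("30-fragment allocation: %zu bytes\n",
	       req_alloc_size(req_size, ivsize, 30));
	return 0;
}
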
+diff --git a/drivers/net/phy/mdio-mux-bcm-iproc.c b/drivers/net/phy/mdio-mux-bcm-iproc.c
+index 0a0412524cec..0a5f62e0efcc 100644
+--- a/drivers/net/phy/mdio-mux-bcm-iproc.c
++++ b/drivers/net/phy/mdio-mux-bcm-iproc.c
+@@ -203,11 +203,14 @@ static int mdio_mux_iproc_probe(struct platform_device *pdev)
+ &md->mux_handle, md, md->mii_bus);
+ if (rc) {
+ dev_info(md->dev, "mdiomux initialization failed\n");
+- goto out;
++ goto out_register;
+ }
+
+ dev_info(md->dev, "iProc mdiomux registered\n");
+ return 0;
++
++out_register:
++ mdiobus_unregister(bus);
+ out:
+ mdiobus_free(bus);
+ return rc;
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 0d519a9582ca..34d997ca1b27 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -902,6 +902,7 @@ static const struct usb_device_id products[] = {
+ {QMI_FIXED_INTF(0x2357, 0x0201, 4)}, /* TP-LINK HSUPA Modem MA180 */
+ {QMI_FIXED_INTF(0x2357, 0x9000, 4)}, /* TP-LINK MA260 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)}, /* Telit LE922A */
++ {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */
+ {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */
+ {QMI_FIXED_INTF(0x1bc7, 0x1201, 2)}, /* Telit LE920 */
+ {QMI_FIXED_INTF(0x1c9e, 0x9b01, 3)}, /* XS Stick W100-2 from 4G Systems */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index 9e6f60a0ec3e..da8aad4f9912 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -198,7 +198,7 @@ static netdev_tx_t brcmf_netdev_start_xmit(struct sk_buff *skb,
+ int ret;
+ struct brcmf_if *ifp = netdev_priv(ndev);
+ struct brcmf_pub *drvr = ifp->drvr;
+- struct ethhdr *eh = (struct ethhdr *)(skb->data);
++ struct ethhdr *eh;
+
+ brcmf_dbg(DATA, "Enter, bsscfgidx=%d\n", ifp->bsscfgidx);
+
+@@ -211,22 +211,13 @@ static netdev_tx_t brcmf_netdev_start_xmit(struct sk_buff *skb,
+ goto done;
+ }
+
+- /* Make sure there's enough room for any header */
+- if (skb_headroom(skb) < drvr->hdrlen) {
+- struct sk_buff *skb2;
+-
+- brcmf_dbg(INFO, "%s: insufficient headroom\n",
++ /* Make sure there's enough writable headroom */
++ ret = skb_cow_head(skb, drvr->hdrlen);
++ if (ret < 0) {
++ brcmf_err("%s: skb_cow_head failed\n",
+ brcmf_ifname(ifp));
+- drvr->bus_if->tx_realloc++;
+- skb2 = skb_realloc_headroom(skb, drvr->hdrlen);
+ dev_kfree_skb(skb);
+- skb = skb2;
+- if (skb == NULL) {
+- brcmf_err("%s: skb_realloc_headroom failed\n",
+- brcmf_ifname(ifp));
+- ret = -ENOMEM;
+- goto done;
+- }
++ goto done;
+ }
+
+ /* validate length for ether packet */
+@@ -236,6 +227,8 @@ static netdev_tx_t brcmf_netdev_start_xmit(struct sk_buff *skb,
+ goto done;
+ }
+
++ eh = (struct ethhdr *)(skb->data);
++
+ if (eh->h_proto == htons(ETH_P_PAE))
+ atomic_inc(&ifp->pend_8021x_cnt);
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-6000.c b/drivers/net/wireless/intel/iwlwifi/iwl-6000.c
+index 0b9f6a7bc834..39335b7b0c16 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-6000.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-6000.c
+@@ -371,4 +371,4 @@ const struct iwl_cfg iwl6000_3agn_cfg = {
+ MODULE_FIRMWARE(IWL6000_MODULE_FIRMWARE(IWL6000_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL6050_MODULE_FIRMWARE(IWL6050_UCODE_API_MAX));
+ MODULE_FIRMWARE(IWL6005_MODULE_FIRMWARE(IWL6000G2_UCODE_API_MAX));
+-MODULE_FIRMWARE(IWL6030_MODULE_FIRMWARE(IWL6000G2B_UCODE_API_MAX));
++MODULE_FIRMWARE(IWL6030_MODULE_FIRMWARE(IWL6000G2_UCODE_API_MAX));
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index b88e2048ae0b..207d8ae1e116 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -1262,12 +1262,15 @@ static int __iwl_mvm_suspend(struct ieee80211_hw *hw,
+ iwl_trans_d3_suspend(mvm->trans, test, !unified_image);
+ out:
+ if (ret < 0) {
+- iwl_mvm_ref(mvm, IWL_MVM_REF_UCODE_DOWN);
+- if (mvm->restart_fw > 0) {
+- mvm->restart_fw--;
+- ieee80211_restart_hw(mvm->hw);
+- }
+ iwl_mvm_free_nd(mvm);
++
++ if (!unified_image) {
++ iwl_mvm_ref(mvm, IWL_MVM_REF_UCODE_DOWN);
++ if (mvm->restart_fw > 0) {
++ mvm->restart_fw--;
++ ieee80211_restart_hw(mvm->hw);
++ }
++ }
+ }
+ out_noreset:
+ mutex_unlock(&mvm->mutex);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+index 7b7d2a146e30..0bda91ffc608 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+@@ -1056,6 +1056,8 @@ static ssize_t iwl_dbgfs_fw_dbg_collect_write(struct iwl_mvm *mvm,
+
+ if (ret)
+ return ret;
++ if (count == 0)
++ return 0;
+
+ iwl_mvm_fw_dbg_collect(mvm, FW_DBG_TRIGGER_USER, buf,
+ (count - 1), NULL);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw-dbg.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw-dbg.c
+index 2e8e3e8e30a3..94a3486cec89 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw-dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw-dbg.c
+@@ -779,12 +779,16 @@ void iwl_mvm_fw_error_dump(struct iwl_mvm *mvm)
+ struct iwl_fw_error_dump_paging *paging;
+ struct page *pages =
+ mvm->fw_paging_db[i].fw_paging_block;
++ dma_addr_t addr = mvm->fw_paging_db[i].fw_paging_phys;
+
+ dump_data->type = cpu_to_le32(IWL_FW_ERROR_DUMP_PAGING);
+ dump_data->len = cpu_to_le32(sizeof(*paging) +
+ PAGING_BLOCK_SIZE);
+ paging = (void *)dump_data->data;
+ paging->index = cpu_to_le32(i);
++ dma_sync_single_for_cpu(mvm->trans->dev, addr,
++ PAGING_BLOCK_SIZE,
++ DMA_BIDIRECTIONAL);
+ memcpy(paging->data, page_address(pages),
+ PAGING_BLOCK_SIZE);
+ dump_data = iwl_fw_error_next_data(dump_data);
+@@ -816,11 +820,12 @@ void iwl_mvm_fw_error_dump(struct iwl_mvm *mvm)
+ sg_nents(sg_dump_data),
+ fw_error_dump->op_mode_ptr,
+ fw_error_dump->op_mode_len, 0);
+- sg_pcopy_from_buffer(sg_dump_data,
+- sg_nents(sg_dump_data),
+- fw_error_dump->trans_ptr->data,
+- fw_error_dump->trans_ptr->len,
+- fw_error_dump->op_mode_len);
++ if (fw_error_dump->trans_ptr)
++ sg_pcopy_from_buffer(sg_dump_data,
++ sg_nents(sg_dump_data),
++ fw_error_dump->trans_ptr->data,
++ fw_error_dump->trans_ptr->len,
++ fw_error_dump->op_mode_len);
+ dev_coredumpsg(mvm->trans->dev, sg_dump_data, file_len,
+ GFP_KERNEL);
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 872066317fa5..2ec3a91a0f6b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -214,6 +214,10 @@ static int iwl_fill_paging_mem(struct iwl_mvm *mvm, const struct fw_img *image)
+ memcpy(page_address(mvm->fw_paging_db[0].fw_paging_block),
+ image->sec[sec_idx].data,
+ mvm->fw_paging_db[0].fw_paging_size);
++ dma_sync_single_for_device(mvm->trans->dev,
++ mvm->fw_paging_db[0].fw_paging_phys,
++ mvm->fw_paging_db[0].fw_paging_size,
++ DMA_BIDIRECTIONAL);
+
+ IWL_DEBUG_FW(mvm,
+ "Paging: copied %d CSS bytes to first block\n",
+@@ -228,9 +232,16 @@ static int iwl_fill_paging_mem(struct iwl_mvm *mvm, const struct fw_img *image)
+ * loop stop at num_of_paging_blk since that last block is not full.
+ */
+ for (idx = 1; idx < mvm->num_of_paging_blk; idx++) {
+- memcpy(page_address(mvm->fw_paging_db[idx].fw_paging_block),
++ struct iwl_fw_paging *block = &mvm->fw_paging_db[idx];
++
++ memcpy(page_address(block->fw_paging_block),
+ image->sec[sec_idx].data + offset,
+- mvm->fw_paging_db[idx].fw_paging_size);
++ block->fw_paging_size);
++ dma_sync_single_for_device(mvm->trans->dev,
++ block->fw_paging_phys,
++ block->fw_paging_size,
++ DMA_BIDIRECTIONAL);
++
+
+ IWL_DEBUG_FW(mvm,
+ "Paging: copied %d paging bytes to block %d\n",
+@@ -242,9 +253,15 @@ static int iwl_fill_paging_mem(struct iwl_mvm *mvm, const struct fw_img *image)
+
+ /* copy the last paging block */
+ if (mvm->num_of_pages_in_last_blk > 0) {
+- memcpy(page_address(mvm->fw_paging_db[idx].fw_paging_block),
++ struct iwl_fw_paging *block = &mvm->fw_paging_db[idx];
++
++ memcpy(page_address(block->fw_paging_block),
+ image->sec[sec_idx].data + offset,
+ FW_PAGING_SIZE * mvm->num_of_pages_in_last_blk);
++ dma_sync_single_for_device(mvm->trans->dev,
++ block->fw_paging_phys,
++ block->fw_paging_size,
++ DMA_BIDIRECTIONAL);
+
+ IWL_DEBUG_FW(mvm,
+ "Paging: copied %d pages in the last block %d\n",
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 45122dafe922..8c555502185c 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -2411,7 +2411,7 @@ void iwl_mvm_sta_pm_notif(struct iwl_mvm *mvm, struct iwl_rx_cmd_buffer *rxb)
+ return;
+
+ rcu_read_lock();
+- sta = mvm->fw_id_to_mac_id[notif->sta_id];
++ sta = rcu_dereference(mvm->fw_id_to_mac_id[notif->sta_id]);
+ if (WARN_ON(IS_ERR_OR_NULL(sta))) {
+ rcu_read_unlock();
+ return;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index 6c802cee900c..a481eb41f693 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -409,7 +409,7 @@ static void iwl_mvm_release_frames(struct iwl_mvm *mvm,
+
+ /* ignore nssn smaller than head sn - this can happen due to timeout */
+ if (iwl_mvm_is_sn_less(nssn, ssn, reorder_buf->buf_size))
+- return;
++ goto set_timer;
+
+ while (iwl_mvm_is_sn_less(ssn, nssn, reorder_buf->buf_size)) {
+ int index = ssn % reorder_buf->buf_size;
+@@ -432,6 +432,7 @@ static void iwl_mvm_release_frames(struct iwl_mvm *mvm,
+ }
+ reorder_buf->head_sn = nssn;
+
++set_timer:
+ if (reorder_buf->num_stored && !reorder_buf->removed) {
+ u16 index = reorder_buf->head_sn % reorder_buf->buf_size;
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index 09e9e2e3ed04..1137ed71461e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -1486,6 +1486,7 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
+ {
+ struct iwl_mvm_vif *mvmvif = iwl_mvm_vif_from_mac80211(vif);
+ struct iwl_mvm_sta *mvm_sta = iwl_mvm_sta_from_mac80211(sta);
++ u8 sta_id = mvm_sta->sta_id;
+ int ret;
+
+ lockdep_assert_held(&mvm->mutex);
+@@ -1494,7 +1495,7 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
+ kfree(mvm_sta->dup_data);
+
+ if ((vif->type == NL80211_IFTYPE_STATION &&
+- mvmvif->ap_sta_id == mvm_sta->sta_id) ||
++ mvmvif->ap_sta_id == sta_id) ||
+ iwl_mvm_is_dqa_supported(mvm)){
+ ret = iwl_mvm_drain_sta(mvm, mvm_sta, true);
+ if (ret)
+@@ -1510,8 +1511,17 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
+ ret = iwl_mvm_drain_sta(mvm, mvm_sta, false);
+
+ /* If DQA is supported - the queues can be disabled now */
+- if (iwl_mvm_is_dqa_supported(mvm))
++ if (iwl_mvm_is_dqa_supported(mvm)) {
+ iwl_mvm_disable_sta_queues(mvm, vif, mvm_sta);
++ /*
++ * If pending_frames is set at this point, it must be a
++ * driver-internal logic error, since the queues are empty
++ * and were removed successfully.
++ * Warn on it, but set it to 0 anyway to avoid the station
++ * not being removed later in the function.
++ */
++ WARN_ON(atomic_xchg(&mvm->pending_frames[sta_id], 0));
++ }
+
+ /* If there is a TXQ still marked as reserved - free it */
+ if (iwl_mvm_is_dqa_supported(mvm) &&
+@@ -1529,7 +1539,7 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
+ if (WARN((*status != IWL_MVM_QUEUE_RESERVED) &&
+ (*status != IWL_MVM_QUEUE_FREE),
+ "sta_id %d reserved txq %d status %d",
+- mvm_sta->sta_id, reserved_txq, *status)) {
++ sta_id, reserved_txq, *status)) {
+ spin_unlock_bh(&mvm->queue_info_lock);
+ return -EINVAL;
+ }
+@@ -1539,7 +1549,7 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
+ }
+
+ if (vif->type == NL80211_IFTYPE_STATION &&
+- mvmvif->ap_sta_id == mvm_sta->sta_id) {
++ mvmvif->ap_sta_id == sta_id) {
+ /* if associated - we can't remove the AP STA now */
+ if (vif->bss_conf.assoc)
+ return ret;
+@@ -1548,7 +1558,7 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
+ mvmvif->ap_sta_id = IWL_MVM_STATION_COUNT;
+
+ /* clear d0i3_ap_sta_id if no longer relevant */
+- if (mvm->d0i3_ap_sta_id == mvm_sta->sta_id)
++ if (mvm->d0i3_ap_sta_id == sta_id)
+ mvm->d0i3_ap_sta_id = IWL_MVM_STATION_COUNT;
+ }
+ }
+@@ -1557,7 +1567,7 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
+ * This shouldn't happen - the TDLS channel switch should be canceled
+ * before the STA is removed.
+ */
+- if (WARN_ON_ONCE(mvm->tdls_cs.peer.sta_id == mvm_sta->sta_id)) {
++ if (WARN_ON_ONCE(mvm->tdls_cs.peer.sta_id == sta_id)) {
+ mvm->tdls_cs.peer.sta_id = IWL_MVM_STATION_COUNT;
+ cancel_delayed_work(&mvm->tdls_cs.dwork);
+ }
+@@ -1567,21 +1577,20 @@ int iwl_mvm_rm_sta(struct iwl_mvm *mvm,
+ * calls the drain worker.
+ */
+ spin_lock_bh(&mvm_sta->lock);
++
+ /*
+ * There are frames pending on the AC queues for this station.
+ * We need to wait until all the frames are drained...
+ */
+- if (atomic_read(&mvm->pending_frames[mvm_sta->sta_id])) {
+- rcu_assign_pointer(mvm->fw_id_to_mac_id[mvm_sta->sta_id],
++ if (atomic_read(&mvm->pending_frames[sta_id])) {
++ rcu_assign_pointer(mvm->fw_id_to_mac_id[sta_id],
+ ERR_PTR(-EBUSY));
+ spin_unlock_bh(&mvm_sta->lock);
+
+ /* disable TDLS sta queues on drain complete */
+ if (sta->tdls) {
+- mvm->tfd_drained[mvm_sta->sta_id] =
+- mvm_sta->tfd_queue_msk;
+- IWL_DEBUG_TDLS(mvm, "Draining TDLS sta %d\n",
+- mvm_sta->sta_id);
++ mvm->tfd_drained[sta_id] = mvm_sta->tfd_queue_msk;
++ IWL_DEBUG_TDLS(mvm, "Draining TDLS sta %d\n", sta_id);
+ }
+
+ ret = iwl_mvm_drain_sta(mvm, mvm_sta, true);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index 66957ac12ca4..0556d139b719 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -202,7 +202,6 @@ void iwl_mvm_set_tx_cmd(struct iwl_mvm *mvm, struct sk_buff *skb,
+ struct iwl_tx_cmd *tx_cmd,
+ struct ieee80211_tx_info *info, u8 sta_id)
+ {
+- struct ieee80211_tx_info *skb_info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_hdr *hdr = (void *)skb->data;
+ __le16 fc = hdr->frame_control;
+ u32 tx_flags = le32_to_cpu(tx_cmd->tx_flags);
+@@ -284,9 +283,8 @@ void iwl_mvm_set_tx_cmd(struct iwl_mvm *mvm, struct sk_buff *skb,
+ tx_flags |= TX_CMD_FLG_WRITE_TX_POWER;
+
+ tx_cmd->tx_flags = cpu_to_le32(tx_flags);
+- /* Total # bytes to be transmitted */
+- tx_cmd->len = cpu_to_le16((u16)skb->len +
+- (uintptr_t)skb_info->driver_data[0]);
++ /* Total # bytes to be transmitted - PCIe code will adjust for A-MSDU */
++ tx_cmd->len = cpu_to_le16((u16)skb->len);
+ tx_cmd->life_time = cpu_to_le32(TX_CMD_LIFE_TIME_INFINITE);
+ tx_cmd->sta_id = sta_id;
+
+@@ -459,7 +457,6 @@ iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb,
+ struct ieee80211_sta *sta, u8 sta_id)
+ {
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+- struct ieee80211_tx_info *skb_info = IEEE80211_SKB_CB(skb);
+ struct iwl_device_cmd *dev_cmd;
+ struct iwl_tx_cmd *tx_cmd;
+
+@@ -479,12 +476,18 @@ iwl_mvm_set_tx_params(struct iwl_mvm *mvm, struct sk_buff *skb,
+
+ iwl_mvm_set_tx_cmd_rate(mvm, tx_cmd, info, sta, hdr->frame_control);
+
++ return dev_cmd;
++}
++
++static void iwl_mvm_skb_prepare_status(struct sk_buff *skb,
++ struct iwl_device_cmd *cmd)
++{
++ struct ieee80211_tx_info *skb_info = IEEE80211_SKB_CB(skb);
++
+ memset(&skb_info->status, 0, sizeof(skb_info->status));
+ memset(skb_info->driver_data, 0, sizeof(skb_info->driver_data));
+
+- skb_info->driver_data[1] = dev_cmd;
+-
+- return dev_cmd;
++ skb_info->driver_data[1] = cmd;
+ }
+
+ static int iwl_mvm_get_ctrl_vif_queue(struct iwl_mvm *mvm,
+@@ -550,9 +553,6 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb)
+ info.hw_queue != info.control.vif->cab_queue)))
+ return -1;
+
+- /* This holds the amsdu headers length */
+- skb_info->driver_data[0] = (void *)(uintptr_t)0;
+-
+ queue = info.hw_queue;
+
+ /*
+@@ -563,9 +563,10 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb)
+ * (this is not possible for unicast packets as a TLDS discovery
+ * response are sent without a station entry); otherwise use the
+ * AUX station.
+- * In DQA mode, if vif is of type STATION and frames are not multicast,
+- * they should be sent from the BSS queue. For example, TDLS setup
+- * frames should be sent on this queue, as they go through the AP.
++ * In DQA mode, if vif is of type STATION and frames are not multicast
++ * or offchannel, they should be sent from the BSS queue.
++ * For example, TDLS setup frames should be sent on this queue,
++ * as they go through the AP.
+ */
+ sta_id = mvm->aux_sta.sta_id;
+ if (info.control.vif) {
+@@ -587,7 +588,8 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb)
+ if (ap_sta_id != IWL_MVM_STATION_COUNT)
+ sta_id = ap_sta_id;
+ } else if (iwl_mvm_is_dqa_supported(mvm) &&
+- info.control.vif->type == NL80211_IFTYPE_STATION) {
++ info.control.vif->type == NL80211_IFTYPE_STATION &&
++ queue != mvm->aux_queue) {
+ queue = IWL_MVM_DQA_BSS_CLIENT_QUEUE;
+ }
+ }
+@@ -598,6 +600,9 @@ int iwl_mvm_tx_skb_non_sta(struct iwl_mvm *mvm, struct sk_buff *skb)
+ if (!dev_cmd)
+ return -1;
+
++ /* From now on, we cannot access info->control */
++ iwl_mvm_skb_prepare_status(skb, dev_cmd);
++
+ tx_cmd = (struct iwl_tx_cmd *)dev_cmd->payload;
+
+ /* Copy MAC header from skb into command buffer */
+@@ -634,7 +639,7 @@ static int iwl_mvm_tx_tso(struct iwl_mvm *mvm, struct sk_buff *skb,
+ unsigned int num_subframes, tcp_payload_len, subf_len, max_amsdu_len;
+ bool ipv4 = (skb->protocol == htons(ETH_P_IP));
+ u16 ip_base_id = ipv4 ? ntohs(ip_hdr(skb)->id) : 0;
+- u16 amsdu_add, snap_ip_tcp, pad, i = 0;
++ u16 snap_ip_tcp, pad, i = 0;
+ unsigned int dbg_max_amsdu_len;
+ netdev_features_t netdev_features = NETIF_F_CSUM_MASK | NETIF_F_SG;
+ u8 *qc, tid, txf;
+@@ -736,21 +741,6 @@ static int iwl_mvm_tx_tso(struct iwl_mvm *mvm, struct sk_buff *skb,
+
+ /* This skb fits in one single A-MSDU */
+ if (num_subframes * mss >= tcp_payload_len) {
+- struct ieee80211_tx_info *skb_info = IEEE80211_SKB_CB(skb);
+-
+- /*
+- * Compute the length of all the data added for the A-MSDU.
+- * This will be used to compute the length to write in the TX
+- * command. We have: SNAP + IP + TCP for n -1 subframes and
+- * ETH header for n subframes. Note that the original skb
+- * already had one set of SNAP / IP / TCP headers.
+- */
+- num_subframes = DIV_ROUND_UP(tcp_payload_len, mss);
+- amsdu_add = num_subframes * sizeof(struct ethhdr) +
+- (num_subframes - 1) * (snap_ip_tcp + pad);
+- /* This holds the amsdu headers length */
+- skb_info->driver_data[0] = (void *)(uintptr_t)amsdu_add;
+-
+ __skb_queue_tail(mpdus_skb, skb);
+ return 0;
+ }
+@@ -789,14 +779,6 @@ static int iwl_mvm_tx_tso(struct iwl_mvm *mvm, struct sk_buff *skb,
+ ip_hdr(tmp)->id = htons(ip_base_id + i * num_subframes);
+
+ if (tcp_payload_len > mss) {
+- struct ieee80211_tx_info *skb_info =
+- IEEE80211_SKB_CB(tmp);
+-
+- num_subframes = DIV_ROUND_UP(tcp_payload_len, mss);
+- amsdu_add = num_subframes * sizeof(struct ethhdr) +
+- (num_subframes - 1) * (snap_ip_tcp + pad);
+- skb_info->driver_data[0] =
+- (void *)(uintptr_t)amsdu_add;
+ skb_shinfo(tmp)->gso_size = mss;
+ } else {
+ qc = ieee80211_get_qos_ctl((void *)tmp->data);
+@@ -908,7 +890,6 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ goto drop;
+
+ tx_cmd = (struct iwl_tx_cmd *)dev_cmd->payload;
+- /* From now on, we cannot access info->control */
+
+ /*
+ * we handle that entirely ourselves -- for uAPSD the firmware
+@@ -1015,6 +996,9 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ IWL_DEBUG_TX(mvm, "TX to [%d|%d] Q:%d - seq: 0x%x\n", mvmsta->sta_id,
+ tid, txq_id, IEEE80211_SEQ_TO_SN(seq_number));
+
++ /* From now on, we cannot access info->control */
++ iwl_mvm_skb_prepare_status(skb, dev_cmd);
++
+ if (iwl_trans_tx(mvm->trans, skb, dev_cmd, txq_id))
+ goto drop_unlock_sta;
+
+@@ -1024,7 +1008,10 @@ static int iwl_mvm_tx_mpdu(struct iwl_mvm *mvm, struct sk_buff *skb,
+ spin_unlock(&mvmsta->lock);
+
+ /* Increase pending frames count if this isn't AMPDU */
+- if (!is_ampdu)
++ if ((iwl_mvm_is_dqa_supported(mvm) &&
++ mvmsta->tid_data[tx_cmd->tid_tspec].state != IWL_AGG_ON &&
++ mvmsta->tid_data[tx_cmd->tid_tspec].state != IWL_AGG_STARTING) ||
++ (!iwl_mvm_is_dqa_supported(mvm) && !is_ampdu))
+ atomic_inc(&mvm->pending_frames[mvmsta->sta_id]);
+
+ return 0;
+@@ -1040,7 +1027,6 @@ int iwl_mvm_tx_skb(struct iwl_mvm *mvm, struct sk_buff *skb,
+ struct ieee80211_sta *sta)
+ {
+ struct iwl_mvm_sta *mvmsta = iwl_mvm_sta_from_mac80211(sta);
+- struct ieee80211_tx_info *skb_info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_tx_info info;
+ struct sk_buff_head mpdus_skbs;
+ unsigned int payload_len;
+@@ -1054,9 +1040,6 @@ int iwl_mvm_tx_skb(struct iwl_mvm *mvm, struct sk_buff *skb,
+
+ memcpy(&info, skb->cb, sizeof(info));
+
+- /* This holds the amsdu headers length */
+- skb_info->driver_data[0] = (void *)(uintptr_t)0;
+-
+ if (!skb_is_gso(skb))
+ return iwl_mvm_tx_mpdu(mvm, skb, &info, sta);
+
+@@ -1295,8 +1278,6 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+
+ memset(&info->status, 0, sizeof(info->status));
+
+- info->flags &= ~IEEE80211_TX_CTL_AMPDU;
+-
+ /* inform mac80211 about what happened with the frame */
+ switch (status & TX_STATUS_MSK) {
+ case TX_STATUS_SUCCESS:
+@@ -1319,10 +1300,11 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+ (void *)(uintptr_t)le32_to_cpu(tx_resp->initial_rate);
+
+ /* Single frame failure in an AMPDU queue => send BAR */
+- if (txq_id >= mvm->first_agg_queue &&
++ if (info->flags & IEEE80211_TX_CTL_AMPDU &&
+ !(info->flags & IEEE80211_TX_STAT_ACK) &&
+ !(info->flags & IEEE80211_TX_STAT_TX_FILTERED))
+ info->flags |= IEEE80211_TX_STAT_AMPDU_NO_BACK;
++ info->flags &= ~IEEE80211_TX_CTL_AMPDU;
+
+ /* W/A FW bug: seq_ctl is wrong when the status isn't success */
+ if (status != TX_STATUS_SUCCESS) {
+@@ -1357,7 +1339,7 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+ ieee80211_tx_status(mvm->hw, skb);
+ }
+
+- if (txq_id >= mvm->first_agg_queue) {
++ if (iwl_mvm_is_dqa_supported(mvm) || txq_id >= mvm->first_agg_queue) {
+ /* If this is an aggregation queue, we use the ssn since:
+ * ssn = wifi seq_num % 256.
+ * The seq_ctl is the sequence control of the packet to which
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+index cac6d99012b3..e3cede979751 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+@@ -279,7 +279,7 @@ struct iwl_txq {
+ bool frozen;
+ u8 active;
+ bool ampdu;
+- bool block;
++ int block;
+ unsigned long wd_timeout;
+ struct sk_buff_head overflow_q;
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index b10e3633df1a..550102ffc315 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -868,17 +868,13 @@ static int iwl_pcie_load_cpu_sections(struct iwl_trans *trans,
+ int cpu,
+ int *first_ucode_section)
+ {
+- int shift_param;
+ int i, ret = 0;
+ u32 last_read_idx = 0;
+
+- if (cpu == 1) {
+- shift_param = 0;
++ if (cpu == 1)
+ *first_ucode_section = 0;
+- } else {
+- shift_param = 16;
++ else
+ (*first_ucode_section)++;
+- }
+
+ for (i = *first_ucode_section; i < IWL_UCODE_SECTION_MAX; i++) {
+ last_read_idx = i;
+@@ -2960,16 +2956,12 @@ struct iwl_trans *iwl_trans_pcie_alloc(struct pci_dev *pdev,
+ PCIE_LINK_STATE_CLKPM);
+ }
+
+- if (cfg->mq_rx_supported)
+- addr_size = 64;
+- else
+- addr_size = 36;
+-
+ if (cfg->use_tfh) {
++ addr_size = 64;
+ trans_pcie->max_tbs = IWL_TFH_NUM_TBS;
+ trans_pcie->tfd_size = sizeof(struct iwl_tfh_tfd);
+-
+ } else {
++ addr_size = 36;
+ trans_pcie->max_tbs = IWL_NUM_OF_TBS;
+ trans_pcie->tfd_size = sizeof(struct iwl_tfd);
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index e44e5adc2b95..911cf9868107 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -2096,6 +2096,7 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
+ struct iwl_cmd_meta *out_meta,
+ struct iwl_device_cmd *dev_cmd, u16 tb1_len)
+ {
++ struct iwl_tx_cmd *tx_cmd = (void *)dev_cmd->payload;
+ struct iwl_trans_pcie *trans_pcie = txq->trans_pcie;
+ struct ieee80211_hdr *hdr = (void *)skb->data;
+ unsigned int snap_ip_tcp_hdrlen, ip_hdrlen, total_len, hdr_room;
+@@ -2145,6 +2146,13 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
+ */
+ skb_pull(skb, hdr_len + iv_len);
+
++ /*
++ * Remove the length of all the headers that we don't actually
++ * have in the MPDU by themselves, but that we duplicate into
++ * all the different MSDUs inside the A-MSDU.
++ */
++ le16_add_cpu(&tx_cmd->len, -snap_ip_tcp_hdrlen);
++
+ tso_start(skb, &tso);
+
+ while (total_len) {
+@@ -2155,7 +2163,7 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
+ unsigned int hdr_tb_len;
+ dma_addr_t hdr_tb_phys;
+ struct tcphdr *tcph;
+- u8 *iph;
++ u8 *iph, *subf_hdrs_start = hdr_page->pos;
+
+ total_len -= data_left;
+
+@@ -2216,6 +2224,8 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
+ hdr_tb_len, false);
+ trace_iwlwifi_dev_tx_tso_chunk(trans->dev, start_hdr,
+ hdr_tb_len);
++ /* add this subframe's headers' length to the tx_cmd */
++ le16_add_cpu(&tx_cmd->len, hdr_page->pos - subf_hdrs_start);
+
+ /* prepare the start_hdr for the next subframe */
+ start_hdr = hdr_page->pos;
+@@ -2408,9 +2418,10 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
+ tb1_len = len;
+ }
+
+- /* The first TB points to bi-directional DMA data */
+- memcpy(&txq->first_tb_bufs[txq->write_ptr], &dev_cmd->hdr,
+- IWL_FIRST_TB_SIZE);
++ /*
++ * The first TB points to bi-directional DMA data, we'll
++ * memcpy the data into it later.
++ */
+ iwl_pcie_txq_build_tfd(trans, txq, tb0_phys,
+ IWL_FIRST_TB_SIZE, true);
+
+@@ -2434,6 +2445,10 @@ int iwl_trans_pcie_tx(struct iwl_trans *trans, struct sk_buff *skb,
+ goto out_err;
+ }
+
++ /* building the A-MSDU might have changed this data, so memcpy it now */
++ memcpy(&txq->first_tb_bufs[txq->write_ptr], &dev_cmd->hdr,
++ IWL_FIRST_TB_SIZE);
++
+ tfd = iwl_pcie_get_tfd(trans_pcie, txq, txq->write_ptr);
+ /* Set up entry for this TFD in Tx byte-count array */
+ iwl_pcie_txq_update_byte_cnt_tbl(trans, txq, le16_to_cpu(tx_cmd->len),
+diff --git a/drivers/net/wireless/marvell/mwifiex/11n_aggr.c b/drivers/net/wireless/marvell/mwifiex/11n_aggr.c
+index c47d6366875d..a75013ac84d7 100644
+--- a/drivers/net/wireless/marvell/mwifiex/11n_aggr.c
++++ b/drivers/net/wireless/marvell/mwifiex/11n_aggr.c
+@@ -101,13 +101,6 @@ mwifiex_11n_form_amsdu_txpd(struct mwifiex_private *priv,
+ {
+ struct txpd *local_tx_pd;
+ struct mwifiex_txinfo *tx_info = MWIFIEX_SKB_TXCB(skb);
+- unsigned int pad;
+- int headroom = (priv->adapter->iface_type ==
+- MWIFIEX_USB) ? 0 : INTF_HEADER_LEN;
+-
+- pad = ((void *)skb->data - sizeof(*local_tx_pd) -
+- headroom - NULL) & (MWIFIEX_DMA_ALIGN_SZ - 1);
+- skb_push(skb, pad);
+
+ skb_push(skb, sizeof(*local_tx_pd));
+
+@@ -121,12 +114,10 @@ mwifiex_11n_form_amsdu_txpd(struct mwifiex_private *priv,
+ local_tx_pd->bss_num = priv->bss_num;
+ local_tx_pd->bss_type = priv->bss_type;
+ /* Always zero as the data is followed by struct txpd */
+- local_tx_pd->tx_pkt_offset = cpu_to_le16(sizeof(struct txpd) +
+- pad);
++ local_tx_pd->tx_pkt_offset = cpu_to_le16(sizeof(struct txpd));
+ local_tx_pd->tx_pkt_type = cpu_to_le16(PKT_TYPE_AMSDU);
+ local_tx_pd->tx_pkt_length = cpu_to_le16(skb->len -
+- sizeof(*local_tx_pd) -
+- pad);
++ sizeof(*local_tx_pd));
+
+ if (tx_info->flags & MWIFIEX_BUF_FLAG_TDLS_PKT)
+ local_tx_pd->flags |= MWIFIEX_TXPD_FLAGS_TDLS_PACKET;
+@@ -190,7 +181,11 @@ mwifiex_11n_aggregate_pkt(struct mwifiex_private *priv,
+ ra_list_flags);
+ return -1;
+ }
+- skb_reserve(skb_aggr, MWIFIEX_MIN_DATA_HEADER_LEN);
++
++ /* skb_aggr->data is already 64-byte aligned; just reserve the bus
++ * interface header and txpd.
++ */
++ skb_reserve(skb_aggr, headroom + sizeof(struct txpd));
+ tx_info_aggr = MWIFIEX_SKB_TXCB(skb_aggr);
+
+ memset(tx_info_aggr, 0, sizeof(*tx_info_aggr));
+diff --git a/drivers/net/wireless/marvell/mwifiex/debugfs.c b/drivers/net/wireless/marvell/mwifiex/debugfs.c
+index b9284b533294..ae2b69db5994 100644
+--- a/drivers/net/wireless/marvell/mwifiex/debugfs.c
++++ b/drivers/net/wireless/marvell/mwifiex/debugfs.c
+@@ -114,7 +114,8 @@ mwifiex_info_read(struct file *file, char __user *ubuf,
+ if (GET_BSS_ROLE(priv) == MWIFIEX_BSS_ROLE_STA) {
+ p += sprintf(p, "multicast_count=\"%d\"\n",
+ netdev_mc_count(netdev));
+- p += sprintf(p, "essid=\"%s\"\n", info.ssid.ssid);
++ p += sprintf(p, "essid=\"%.*s\"\n", info.ssid.ssid_len,
++ info.ssid.ssid);
+ p += sprintf(p, "bssid=\"%pM\"\n", info.bssid);
+ p += sprintf(p, "channel=\"%d\"\n", (int) info.bss_chan);
+ p += sprintf(p, "country_code = \"%s\"\n", info.country_code);
+diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c
+index e5c3a8aa3929..ab2ab18e0d94 100644
+--- a/drivers/net/wireless/marvell/mwifiex/main.c
++++ b/drivers/net/wireless/marvell/mwifiex/main.c
+@@ -57,8 +57,8 @@ MODULE_PARM_DESC(mfg_mode, "manufacturing mode enable:1, disable:0");
+ * In case of any errors during inittialization, this function also ensures
+ * proper cleanup before exiting.
+ */
+-static int mwifiex_register(void *card, struct mwifiex_if_ops *if_ops,
+- void **padapter)
++static int mwifiex_register(void *card, struct device *dev,
++ struct mwifiex_if_ops *if_ops, void **padapter)
+ {
+ struct mwifiex_adapter *adapter;
+ int i;
+@@ -68,6 +68,7 @@ static int mwifiex_register(void *card, struct mwifiex_if_ops *if_ops,
+ return -ENOMEM;
+
+ *padapter = adapter;
++ adapter->dev = dev;
+ adapter->card = card;
+
+ /* Save interface specific operations in adapter */
+@@ -1569,13 +1570,13 @@ static void mwifiex_probe_of(struct mwifiex_adapter *adapter)
+ struct device *dev = adapter->dev;
+
+ if (!dev->of_node)
+- return;
++ goto err_exit;
+
+ adapter->dt_node = dev->of_node;
+ adapter->irq_wakeup = irq_of_parse_and_map(adapter->dt_node, 0);
+ if (!adapter->irq_wakeup) {
+- dev_info(dev, "fail to parse irq_wakeup from device tree\n");
+- return;
++ dev_dbg(dev, "fail to parse irq_wakeup from device tree\n");
++ goto err_exit;
+ }
+
+ ret = devm_request_irq(dev, adapter->irq_wakeup,
+@@ -1595,7 +1596,7 @@ static void mwifiex_probe_of(struct mwifiex_adapter *adapter)
+ return;
+
+ err_exit:
+- adapter->irq_wakeup = 0;
++ adapter->irq_wakeup = -1;
+ }
+
+ /*
+@@ -1618,12 +1619,11 @@ mwifiex_add_card(void *card, struct completion *fw_done,
+ {
+ struct mwifiex_adapter *adapter;
+
+- if (mwifiex_register(card, if_ops, (void **)&adapter)) {
++ if (mwifiex_register(card, dev, if_ops, (void **)&adapter)) {
+ pr_err("%s: software init failed\n", __func__);
+ goto err_init_sw;
+ }
+
+- adapter->dev = dev;
+ mwifiex_probe_of(adapter);
+
+ adapter->iface_type = iface_type;
+diff --git a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
+index 644f3a248741..1532ac9cee0b 100644
+--- a/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
++++ b/drivers/net/wireless/marvell/mwifiex/sta_ioctl.c
+@@ -1159,8 +1159,6 @@ int mwifiex_set_encode(struct mwifiex_private *priv, struct key_params *kp,
+ encrypt_key.is_rx_seq_valid = true;
+ }
+ } else {
+- if (GET_BSS_ROLE(priv) == MWIFIEX_BSS_ROLE_UAP)
+- return 0;
+ encrypt_key.key_disable = true;
+ if (mac_addr)
+ memcpy(encrypt_key.mac_addr, mac_addr, ETH_ALEN);
+diff --git a/drivers/phy/Kconfig b/drivers/phy/Kconfig
+index e8eb7f225a88..d77024199147 100644
+--- a/drivers/phy/Kconfig
++++ b/drivers/phy/Kconfig
+@@ -440,6 +440,7 @@ config PHY_QCOM_UFS
+ config PHY_TUSB1210
+ tristate "TI TUSB1210 ULPI PHY module"
+ depends on USB_ULPI_BUS
++ depends on EXTCON || !EXTCON # if EXTCON=m, this cannot be built-in
+ select GENERIC_PHY
+ help
+ Support for TI TUSB1210 USB ULPI PHY.
+diff --git a/drivers/platform/x86/intel_pmc_core.c b/drivers/platform/x86/intel_pmc_core.c
+index b130b8c9b9d7..914bcd2edbde 100644
+--- a/drivers/platform/x86/intel_pmc_core.c
++++ b/drivers/platform/x86/intel_pmc_core.c
+@@ -188,8 +188,7 @@ static int pmc_core_check_read_lock_bit(void)
+ u32 value;
+
+ value = pmc_core_reg_read(pmcdev, SPT_PMC_PM_CFG_OFFSET);
+- return test_bit(SPT_PMC_READ_DISABLE_BIT,
+- (unsigned long *)&value);
++ return value & BIT(SPT_PMC_READ_DISABLE_BIT);
+ }
+
+ #if IS_ENABLED(CONFIG_DEBUG_FS)
+@@ -238,8 +237,7 @@ static int pmc_core_mtpmc_link_status(void)
+ u32 value;
+
+ value = pmc_core_reg_read(pmcdev, SPT_PMC_PM_STS_OFFSET);
+- return test_bit(SPT_PMC_MSG_FULL_STS_BIT,
+- (unsigned long *)&value);
++ return value & BIT(SPT_PMC_MSG_FULL_STS_BIT);
+ }
+
+ static int pmc_core_send_msg(u32 *addr_xram)
+diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c
+index e9584330aeed..50171fd3cc6d 100644
+--- a/drivers/power/supply/bq24190_charger.c
++++ b/drivers/power/supply/bq24190_charger.c
+@@ -144,10 +144,7 @@
+ * so the first read after a fault returns the latched value and subsequent
+ * reads return the current value. In order to return the fault status
+ * to the user, have the interrupt handler save the reg's value and retrieve
+- * it in the appropriate health/status routine. Each routine has its own
+- * flag indicating whether it should use the value stored by the last run
+- * of the interrupt handler or do an actual reg read. That way each routine
+- * can report back whatever fault may have occured.
++ * it in the appropriate health/status routine.
+ */
+ struct bq24190_dev_info {
+ struct i2c_client *client;
+@@ -159,10 +156,6 @@ struct bq24190_dev_info {
+ unsigned int gpio_int;
+ unsigned int irq;
+ struct mutex f_reg_lock;
+- bool first_time;
+- bool charger_health_valid;
+- bool battery_health_valid;
+- bool battery_status_valid;
+ u8 f_reg;
+ u8 ss_reg;
+ u8 watchdog;
+@@ -636,21 +629,11 @@ static int bq24190_charger_get_health(struct bq24190_dev_info *bdi,
+ union power_supply_propval *val)
+ {
+ u8 v;
+- int health, ret;
++ int health;
+
+ mutex_lock(&bdi->f_reg_lock);
+-
+- if (bdi->charger_health_valid) {
+- v = bdi->f_reg;
+- bdi->charger_health_valid = false;
+- mutex_unlock(&bdi->f_reg_lock);
+- } else {
+- mutex_unlock(&bdi->f_reg_lock);
+-
+- ret = bq24190_read(bdi, BQ24190_REG_F, &v);
+- if (ret < 0)
+- return ret;
+- }
++ v = bdi->f_reg;
++ mutex_unlock(&bdi->f_reg_lock);
+
+ if (v & BQ24190_REG_F_BOOST_FAULT_MASK) {
+ /*
+@@ -937,18 +920,8 @@ static int bq24190_battery_get_status(struct bq24190_dev_info *bdi,
+ int status, ret;
+
+ mutex_lock(&bdi->f_reg_lock);
+-
+- if (bdi->battery_status_valid) {
+- chrg_fault = bdi->f_reg;
+- bdi->battery_status_valid = false;
+- mutex_unlock(&bdi->f_reg_lock);
+- } else {
+- mutex_unlock(&bdi->f_reg_lock);
+-
+- ret = bq24190_read(bdi, BQ24190_REG_F, &chrg_fault);
+- if (ret < 0)
+- return ret;
+- }
++ chrg_fault = bdi->f_reg;
++ mutex_unlock(&bdi->f_reg_lock);
+
+ chrg_fault &= BQ24190_REG_F_CHRG_FAULT_MASK;
+ chrg_fault >>= BQ24190_REG_F_CHRG_FAULT_SHIFT;
+@@ -996,21 +969,11 @@ static int bq24190_battery_get_health(struct bq24190_dev_info *bdi,
+ union power_supply_propval *val)
+ {
+ u8 v;
+- int health, ret;
++ int health;
+
+ mutex_lock(&bdi->f_reg_lock);
+-
+- if (bdi->battery_health_valid) {
+- v = bdi->f_reg;
+- bdi->battery_health_valid = false;
+- mutex_unlock(&bdi->f_reg_lock);
+- } else {
+- mutex_unlock(&bdi->f_reg_lock);
+-
+- ret = bq24190_read(bdi, BQ24190_REG_F, &v);
+- if (ret < 0)
+- return ret;
+- }
++ v = bdi->f_reg;
++ mutex_unlock(&bdi->f_reg_lock);
+
+ if (v & BQ24190_REG_F_BAT_FAULT_MASK) {
+ health = POWER_SUPPLY_HEALTH_OVERVOLTAGE;
+@@ -1197,9 +1160,12 @@ static const struct power_supply_desc bq24190_battery_desc = {
+ static irqreturn_t bq24190_irq_handler_thread(int irq, void *data)
+ {
+ struct bq24190_dev_info *bdi = data;
+- bool alert_userspace = false;
++ const u8 battery_mask_ss = BQ24190_REG_SS_CHRG_STAT_MASK;
++ const u8 battery_mask_f = BQ24190_REG_F_BAT_FAULT_MASK
++ | BQ24190_REG_F_NTC_FAULT_MASK;
++ bool alert_charger = false, alert_battery = false;
+ u8 ss_reg = 0, f_reg = 0;
+- int ret;
++ int i, ret;
+
+ pm_runtime_get_sync(bdi->dev);
+
+@@ -1209,6 +1175,32 @@ static irqreturn_t bq24190_irq_handler_thread(int irq, void *data)
+ goto out;
+ }
+
++ i = 0;
++ do {
++ ret = bq24190_read(bdi, BQ24190_REG_F, &f_reg);
++ if (ret < 0) {
++ dev_err(bdi->dev, "Can't read F reg: %d\n", ret);
++ goto out;
++ }
++ } while (f_reg && ++i < 2);
++
++ if (f_reg != bdi->f_reg) {
++ dev_info(bdi->dev,
++ "Fault: boost %d, charge %d, battery %d, ntc %d\n",
++ !!(f_reg & BQ24190_REG_F_BOOST_FAULT_MASK),
++ !!(f_reg & BQ24190_REG_F_CHRG_FAULT_MASK),
++ !!(f_reg & BQ24190_REG_F_BAT_FAULT_MASK),
++ !!(f_reg & BQ24190_REG_F_NTC_FAULT_MASK));
++
++ mutex_lock(&bdi->f_reg_lock);
++ if ((bdi->f_reg & battery_mask_f) != (f_reg & battery_mask_f))
++ alert_battery = true;
++ if ((bdi->f_reg & ~battery_mask_f) != (f_reg & ~battery_mask_f))
++ alert_charger = true;
++ bdi->f_reg = f_reg;
++ mutex_unlock(&bdi->f_reg_lock);
++ }
++
+ if (ss_reg != bdi->ss_reg) {
+ /*
+ * The device is in host mode so when PG_STAT goes from 1->0
+@@ -1225,47 +1217,17 @@ static irqreturn_t bq24190_irq_handler_thread(int irq, void *data)
+ ret);
+ }
+
++ if ((bdi->ss_reg & battery_mask_ss) != (ss_reg & battery_mask_ss))
++ alert_battery = true;
++ if ((bdi->ss_reg & ~battery_mask_ss) != (ss_reg & ~battery_mask_ss))
++ alert_charger = true;
+ bdi->ss_reg = ss_reg;
+- alert_userspace = true;
+- }
+-
+- mutex_lock(&bdi->f_reg_lock);
+-
+- ret = bq24190_read(bdi, BQ24190_REG_F, &f_reg);
+- if (ret < 0) {
+- mutex_unlock(&bdi->f_reg_lock);
+- dev_err(bdi->dev, "Can't read F reg: %d\n", ret);
+- goto out;
+ }
+
+- if (f_reg != bdi->f_reg) {
+- bdi->f_reg = f_reg;
+- bdi->charger_health_valid = true;
+- bdi->battery_health_valid = true;
+- bdi->battery_status_valid = true;
+-
+- alert_userspace = true;
+- }
+-
+- mutex_unlock(&bdi->f_reg_lock);
+-
+- /*
+- * Sometimes bq24190 gives a steady trickle of interrupts even
+- * though the watchdog timer is turned off and neither the STATUS
+- * nor FAULT registers have changed. Weed out these sprurious
+- * interrupts so userspace isn't alerted for no reason.
+- * In addition, the chip always generates an interrupt after
+- * register reset so we should ignore that one (the very first
+- * interrupt received).
+- */
+- if (alert_userspace) {
+- if (!bdi->first_time) {
+- power_supply_changed(bdi->charger);
+- power_supply_changed(bdi->battery);
+- } else {
+- bdi->first_time = false;
+- }
+- }
++ if (alert_charger)
++ power_supply_changed(bdi->charger);
++ if (alert_battery)
++ power_supply_changed(bdi->battery);
+
+ out:
+ pm_runtime_put_sync(bdi->dev);
+@@ -1300,6 +1262,10 @@ static int bq24190_hw_init(struct bq24190_dev_info *bdi)
+ goto out;
+
+ ret = bq24190_set_mode_host(bdi);
++ if (ret < 0)
++ goto out;
++
++ ret = bq24190_read(bdi, BQ24190_REG_SS, &bdi->ss_reg);
+ out:
+ pm_runtime_put_sync(bdi->dev);
+ return ret;
+@@ -1375,10 +1341,8 @@ static int bq24190_probe(struct i2c_client *client,
+ bdi->model = id->driver_data;
+ strncpy(bdi->model_name, id->name, I2C_NAME_SIZE);
+ mutex_init(&bdi->f_reg_lock);
+- bdi->first_time = true;
+- bdi->charger_health_valid = false;
+- bdi->battery_health_valid = false;
+- bdi->battery_status_valid = false;
++ bdi->f_reg = 0;
++ bdi->ss_reg = BQ24190_REG_SS_VBUS_STAT_MASK; /* impossible state */
+
+ i2c_set_clientdata(client, bdi);
+
+@@ -1392,22 +1356,13 @@ static int bq24190_probe(struct i2c_client *client,
+ return -EINVAL;
+ }
+
+- ret = devm_request_threaded_irq(dev, bdi->irq, NULL,
+- bq24190_irq_handler_thread,
+- IRQF_TRIGGER_RISING | IRQF_ONESHOT,
+- "bq24190-charger", bdi);
+- if (ret < 0) {
+- dev_err(dev, "Can't set up irq handler\n");
+- goto out1;
+- }
+-
+ pm_runtime_enable(dev);
+ pm_runtime_resume(dev);
+
+ ret = bq24190_hw_init(bdi);
+ if (ret < 0) {
+ dev_err(dev, "Hardware init failed\n");
+- goto out2;
++ goto out1;
+ }
+
+ charger_cfg.drv_data = bdi;
+@@ -1418,7 +1373,7 @@ static int bq24190_probe(struct i2c_client *client,
+ if (IS_ERR(bdi->charger)) {
+ dev_err(dev, "Can't register charger\n");
+ ret = PTR_ERR(bdi->charger);
+- goto out2;
++ goto out1;
+ }
+
+ battery_cfg.drv_data = bdi;
+@@ -1427,24 +1382,34 @@ static int bq24190_probe(struct i2c_client *client,
+ if (IS_ERR(bdi->battery)) {
+ dev_err(dev, "Can't register battery\n");
+ ret = PTR_ERR(bdi->battery);
+- goto out3;
++ goto out2;
+ }
+
+ ret = bq24190_sysfs_create_group(bdi);
+ if (ret) {
+ dev_err(dev, "Can't create sysfs entries\n");
++ goto out3;
++ }
++
++ ret = devm_request_threaded_irq(dev, bdi->irq, NULL,
++ bq24190_irq_handler_thread,
++ IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
++ "bq24190-charger", bdi);
++ if (ret < 0) {
++ dev_err(dev, "Can't set up irq handler\n");
+ goto out4;
+ }
+
+ return 0;
+
+ out4:
+- power_supply_unregister(bdi->battery);
++ bq24190_sysfs_remove_group(bdi);
+ out3:
+- power_supply_unregister(bdi->charger);
++ power_supply_unregister(bdi->battery);
+ out2:
+- pm_runtime_disable(dev);
++ power_supply_unregister(bdi->charger);
+ out1:
++ pm_runtime_disable(dev);
+ if (bdi->gpio_int)
+ gpio_free(bdi->gpio_int);
+
+@@ -1488,12 +1453,13 @@ static int bq24190_pm_resume(struct device *dev)
+ struct i2c_client *client = to_i2c_client(dev);
+ struct bq24190_dev_info *bdi = i2c_get_clientdata(client);
+
+- bdi->charger_health_valid = false;
+- bdi->battery_health_valid = false;
+- bdi->battery_status_valid = false;
++ bdi->f_reg = 0;
++ bdi->ss_reg = BQ24190_REG_SS_VBUS_STAT_MASK; /* impossible state */
+
+ pm_runtime_get_sync(bdi->dev);
+ bq24190_register_reset(bdi);
++ bq24190_set_mode_host(bdi);
++ bq24190_read(bdi, BQ24190_REG_SS, &bdi->ss_reg);
+ pm_runtime_put_sync(bdi->dev);
+
+ /* Things may have changed while suspended so alert upper layer */
+diff --git a/drivers/power/supply/lp8788-charger.c b/drivers/power/supply/lp8788-charger.c
+index 509e2b341bd6..677f7c40b25a 100644
+--- a/drivers/power/supply/lp8788-charger.c
++++ b/drivers/power/supply/lp8788-charger.c
+@@ -651,7 +651,7 @@ static ssize_t lp8788_show_eoc_time(struct device *dev,
+ {
+ struct lp8788_charger *pchg = dev_get_drvdata(dev);
+ char *stime[] = { "400ms", "5min", "10min", "15min",
+- "20min", "25min", "30min" "No timeout" };
++ "20min", "25min", "30min", "No timeout" };
+ u8 val;
+
+ lp8788_read_byte(pchg->lp, LP8788_CHG_EOC, &val);
+diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
+index a4f6b0d95515..f8b0ba015d43 100644
+--- a/drivers/scsi/Kconfig
++++ b/drivers/scsi/Kconfig
+@@ -1477,7 +1477,7 @@ config ATARI_SCSI
+
+ config MAC_SCSI
+ tristate "Macintosh NCR5380 SCSI"
+- depends on MAC && SCSI=y
++ depends on MAC && SCSI
+ select SCSI_SPI_ATTRS
+ help
+ This is the NCR 5380 SCSI controller included on most of the 68030
+diff --git a/drivers/scsi/qedi/qedi_debugfs.c b/drivers/scsi/qedi/qedi_debugfs.c
+index 955936274241..59417199bf36 100644
+--- a/drivers/scsi/qedi/qedi_debugfs.c
++++ b/drivers/scsi/qedi/qedi_debugfs.c
+@@ -14,7 +14,7 @@
+ #include <linux/debugfs.h>
+ #include <linux/module.h>
+
+-int do_not_recover;
++int qedi_do_not_recover;
+ static struct dentry *qedi_dbg_root;
+
+ void
+@@ -74,22 +74,22 @@ qedi_dbg_exit(void)
+ static ssize_t
+ qedi_dbg_do_not_recover_enable(struct qedi_dbg_ctx *qedi_dbg)
+ {
+- if (!do_not_recover)
+- do_not_recover = 1;
++ if (!qedi_do_not_recover)
++ qedi_do_not_recover = 1;
+
+ QEDI_INFO(qedi_dbg, QEDI_LOG_DEBUGFS, "do_not_recover=%d\n",
+- do_not_recover);
++ qedi_do_not_recover);
+ return 0;
+ }
+
+ static ssize_t
+ qedi_dbg_do_not_recover_disable(struct qedi_dbg_ctx *qedi_dbg)
+ {
+- if (do_not_recover)
+- do_not_recover = 0;
++ if (qedi_do_not_recover)
++ qedi_do_not_recover = 0;
+
+ QEDI_INFO(qedi_dbg, QEDI_LOG_DEBUGFS, "do_not_recover=%d\n",
+- do_not_recover);
++ qedi_do_not_recover);
+ return 0;
+ }
+
+@@ -141,7 +141,7 @@ qedi_dbg_do_not_recover_cmd_read(struct file *filp, char __user *buffer,
+ if (*ppos)
+ return 0;
+
+- cnt = sprintf(buffer, "do_not_recover=%d\n", do_not_recover);
++ cnt = sprintf(buffer, "do_not_recover=%d\n", qedi_do_not_recover);
+ cnt = min_t(int, count, cnt - *ppos);
+ *ppos += cnt;
+ return cnt;
+diff --git a/drivers/scsi/qedi/qedi_fw.c b/drivers/scsi/qedi/qedi_fw.c
+index b1d3904ae8fd..d98d73a5f678 100644
+--- a/drivers/scsi/qedi/qedi_fw.c
++++ b/drivers/scsi/qedi/qedi_fw.c
+@@ -1460,9 +1460,9 @@ static void qedi_tmf_work(struct work_struct *work)
+ get_itt(tmf_hdr->rtt), get_itt(ctask->itt), cmd->task_id,
+ qedi_conn->iscsi_conn_id);
+
+- if (do_not_recover) {
++ if (qedi_do_not_recover) {
+ QEDI_ERR(&qedi->dbg_ctx, "DONT SEND CLEANUP/ABORT %d\n",
+- do_not_recover);
++ qedi_do_not_recover);
+ goto abort_ret;
+ }
+
+diff --git a/drivers/scsi/qedi/qedi_gbl.h b/drivers/scsi/qedi/qedi_gbl.h
+index 8e488de88ece..63d793f46064 100644
+--- a/drivers/scsi/qedi/qedi_gbl.h
++++ b/drivers/scsi/qedi/qedi_gbl.h
+@@ -12,8 +12,14 @@
+
+ #include "qedi_iscsi.h"
+
++#ifdef CONFIG_DEBUG_FS
++extern int qedi_do_not_recover;
++#else
++#define qedi_do_not_recover (0)
++#endif
++
+ extern uint qedi_io_tracing;
+-extern int do_not_recover;
++
+ extern struct scsi_host_template qedi_host_template;
+ extern struct iscsi_transport qedi_iscsi_transport;
+ extern const struct qed_iscsi_ops *qedi_ops;
+diff --git a/drivers/scsi/qedi/qedi_iscsi.c b/drivers/scsi/qedi/qedi_iscsi.c
+index d6a205433b66..21bfb4a64cfa 100644
+--- a/drivers/scsi/qedi/qedi_iscsi.c
++++ b/drivers/scsi/qedi/qedi_iscsi.c
+@@ -453,13 +453,9 @@ static int qedi_iscsi_update_conn(struct qedi_ctx *qedi,
+ if (rval) {
+ rval = -ENXIO;
+ QEDI_ERR(&qedi->dbg_ctx, "Could not update connection\n");
+- goto update_conn_err;
+ }
+
+ kfree(conn_info);
+- rval = 0;
+-
+-update_conn_err:
+ return rval;
+ }
+
+@@ -836,7 +832,7 @@ qedi_ep_connect(struct Scsi_Host *shost, struct sockaddr *dst_addr,
+ return ERR_PTR(ret);
+ }
+
+- if (do_not_recover) {
++ if (qedi_do_not_recover) {
+ ret = -ENOMEM;
+ return ERR_PTR(ret);
+ }
+@@ -960,7 +956,7 @@ static int qedi_ep_poll(struct iscsi_endpoint *ep, int timeout_ms)
+ struct qedi_endpoint *qedi_ep;
+ int ret = 0;
+
+- if (do_not_recover)
++ if (qedi_do_not_recover)
+ return 1;
+
+ qedi_ep = ep->dd_data;
+@@ -1028,7 +1024,7 @@ static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
+ }
+
+ if (test_bit(QEDI_IN_RECOVERY, &qedi->flags)) {
+- if (do_not_recover) {
++ if (qedi_do_not_recover) {
+ QEDI_INFO(&qedi->dbg_ctx, QEDI_LOG_INFO,
+ "Do not recover cid=0x%x\n",
+ qedi_ep->iscsi_cid);
+@@ -1042,7 +1038,7 @@ static void qedi_ep_disconnect(struct iscsi_endpoint *ep)
+ }
+ }
+
+- if (do_not_recover)
++ if (qedi_do_not_recover)
+ goto ep_exit_recover;
+
+ switch (qedi_ep->state) {
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index f72fe724074d..61811aec1a44 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -1621,7 +1621,8 @@ qla2x00_abort_all_cmds(scsi_qla_host_t *vha, int res)
+ /* Don't abort commands in adapter during EEH
+ * recovery as it's not accessible/responding.
+ */
+- if (GET_CMD_SP(sp) && !ha->flags.eeh_busy) {
++ if (GET_CMD_SP(sp) && !ha->flags.eeh_busy &&
++ (sp->type == SRB_SCSI_CMD)) {
+ /* Get a reference to the sp and drop the lock.
+ * The reference ensures this sp->done() call
+ * - and not the call in qla2xxx_eh_abort() -
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 8702d9cf8040..251559f2cbe7 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -534,8 +534,7 @@ static int pqi_write_current_time_to_host_wellness(
+ size_t buffer_length;
+ time64_t local_time;
+ unsigned int year;
+- struct timeval time;
+- struct rtc_time tm;
++ struct tm tm;
+
+ buffer_length = sizeof(*buffer);
+
+@@ -552,9 +551,8 @@ static int pqi_write_current_time_to_host_wellness(
+ put_unaligned_le16(sizeof(buffer->time),
+ &buffer->time_length);
+
+- do_gettimeofday(&time);
+- local_time = time.tv_sec - (sys_tz.tz_minuteswest * 60);
+- rtc_time64_to_tm(local_time, &tm);
++ local_time = ktime_get_real_seconds();
++ time64_to_tm(local_time, -sys_tz.tz_minuteswest * 60, &tm);
+ year = tm.tm_year + 1900;
+
+ buffer->time[0] = bin2bcd(tm.tm_hour);
+diff --git a/drivers/spi/spi-armada-3700.c b/drivers/spi/spi-armada-3700.c
+index 0314c6b9e044..e3a33935698c 100644
+--- a/drivers/spi/spi-armada-3700.c
++++ b/drivers/spi/spi-armada-3700.c
+@@ -901,7 +901,6 @@ static int a3700_spi_remove(struct platform_device *pdev)
+ struct a3700_spi *spi = spi_master_get_devdata(master);
+
+ clk_unprepare(spi->clk);
+- spi_master_put(master);
+
+ return 0;
+ }
+diff --git a/drivers/staging/emxx_udc/emxx_udc.c b/drivers/staging/emxx_udc/emxx_udc.c
+index 3f42fa8b0bf3..4a3da2406f48 100644
+--- a/drivers/staging/emxx_udc/emxx_udc.c
++++ b/drivers/staging/emxx_udc/emxx_udc.c
+@@ -3137,7 +3137,7 @@ static const struct {
+ };
+
+ /*-------------------------------------------------------------------------*/
+-static void __init nbu2ss_drv_ep_init(struct nbu2ss_udc *udc)
++static void nbu2ss_drv_ep_init(struct nbu2ss_udc *udc)
+ {
+ int i;
+
+@@ -3168,7 +3168,7 @@ static void __init nbu2ss_drv_ep_init(struct nbu2ss_udc *udc)
+
+ /*-------------------------------------------------------------------------*/
+ /* platform_driver */
+-static int __init nbu2ss_drv_contest_init(
++static int nbu2ss_drv_contest_init(
+ struct platform_device *pdev,
+ struct nbu2ss_udc *udc)
+ {
+diff --git a/drivers/staging/lustre/lustre/llite/lproc_llite.c b/drivers/staging/lustre/lustre/llite/lproc_llite.c
+index 03682c10fc9e..f3ee584157e0 100644
+--- a/drivers/staging/lustre/lustre/llite/lproc_llite.c
++++ b/drivers/staging/lustre/lustre/llite/lproc_llite.c
+@@ -924,27 +924,29 @@ static ssize_t ll_unstable_stats_seq_write(struct file *file,
+ }
+ LPROC_SEQ_FOPS(ll_unstable_stats);
+
+-static ssize_t root_squash_show(struct kobject *kobj, struct attribute *attr,
+- char *buf)
++static int ll_root_squash_seq_show(struct seq_file *m, void *v)
+ {
+- struct ll_sb_info *sbi = container_of(kobj, struct ll_sb_info,
+- ll_kobj);
++ struct super_block *sb = m->private;
++ struct ll_sb_info *sbi = ll_s2sbi(sb);
+ struct root_squash_info *squash = &sbi->ll_squash;
+
+- return sprintf(buf, "%u:%u\n", squash->rsi_uid, squash->rsi_gid);
++ seq_printf(m, "%u:%u\n", squash->rsi_uid, squash->rsi_gid);
++ return 0;
+ }
+
+-static ssize_t root_squash_store(struct kobject *kobj, struct attribute *attr,
+- const char *buffer, size_t count)
++static ssize_t ll_root_squash_seq_write(struct file *file,
++ const char __user *buffer,
++ size_t count, loff_t *off)
+ {
+- struct ll_sb_info *sbi = container_of(kobj, struct ll_sb_info,
+- ll_kobj);
++ struct seq_file *m = file->private_data;
++ struct super_block *sb = m->private;
++ struct ll_sb_info *sbi = ll_s2sbi(sb);
+ struct root_squash_info *squash = &sbi->ll_squash;
+
+ return lprocfs_wr_root_squash(buffer, count, squash,
+- ll_get_fsname(sbi->ll_sb, NULL, 0));
++ ll_get_fsname(sb, NULL, 0));
+ }
+-LUSTRE_RW_ATTR(root_squash);
++LPROC_SEQ_FOPS(ll_root_squash);
+
+ static int ll_nosquash_nids_seq_show(struct seq_file *m, void *v)
+ {
+@@ -997,6 +999,8 @@ static struct lprocfs_vars lprocfs_llite_obd_vars[] = {
+ { "statahead_stats", &ll_statahead_stats_fops, NULL, 0 },
+ { "unstable_stats", &ll_unstable_stats_fops, NULL },
+ { "sbi_flags", &ll_sbi_flags_fops, NULL, 0 },
++ { .name = "root_squash",
++ .fops = &ll_root_squash_fops },
+ { .name = "nosquash_nids",
+ .fops = &ll_nosquash_nids_fops },
+ { NULL }
+@@ -1027,7 +1031,6 @@ static struct attribute *llite_attrs[] = {
+ &lustre_attr_max_easize.attr,
+ &lustre_attr_default_easize.attr,
+ &lustre_attr_xattr_cache.attr,
+- &lustre_attr_root_squash.attr,
+ NULL,
+ };
+
+diff --git a/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c b/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c
+index 13f00b7cbbe5..b1170277fd84 100644
+--- a/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c
++++ b/drivers/staging/lustre/lustre/ptlrpc/pack_generic.c
+@@ -469,6 +469,7 @@ int lustre_shrink_msg(struct lustre_msg *msg, int segment,
+ default:
+ LASSERTF(0, "incorrect message magic: %08x\n", msg->lm_magic);
+ }
++ return 0;
+ }
+ EXPORT_SYMBOL(lustre_shrink_msg);
+
+diff --git a/drivers/staging/wlan-ng/p80211netdev.c b/drivers/staging/wlan-ng/p80211netdev.c
+index 73fcf07254fe..eea50b614638 100644
+--- a/drivers/staging/wlan-ng/p80211netdev.c
++++ b/drivers/staging/wlan-ng/p80211netdev.c
+@@ -237,7 +237,7 @@ static int p80211_convert_to_ether(struct wlandevice *wlandev,
+ struct p80211_hdr_a3 *hdr;
+
+ hdr = (struct p80211_hdr_a3 *)skb->data;
+- if (p80211_rx_typedrop(wlandev, hdr->fc))
++ if (p80211_rx_typedrop(wlandev, le16_to_cpu(hdr->fc)))
+ return CONV_TO_ETHER_SKIPPED;
+
+ /* perform mcast filtering: allow my local address through but reject
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index f4eb807a2616..da31159a03ec 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -1237,7 +1237,8 @@ static int omap8250_probe(struct platform_device *pdev)
+ pm_runtime_put_autosuspend(&pdev->dev);
+ return 0;
+ err:
+- pm_runtime_put(&pdev->dev);
++ pm_runtime_dont_use_autosuspend(&pdev->dev);
++ pm_runtime_put_sync(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+ return ret;
+ }
+@@ -1246,6 +1247,7 @@ static int omap8250_remove(struct platform_device *pdev)
+ {
+ struct omap8250_priv *priv = platform_get_drvdata(pdev);
+
++ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_put_sync(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+ serial8250_unregister_port(priv->line);
+@@ -1345,6 +1347,10 @@ static int omap8250_runtime_suspend(struct device *dev)
+ struct omap8250_priv *priv = dev_get_drvdata(dev);
+ struct uart_8250_port *up;
+
++ /* In case runtime-pm tries this before we are setup */
++ if (!priv)
++ return 0;
++
+ up = serial8250_get_port(priv->line);
+ /*
+ * When using 'no_console_suspend', the console UART must not be
+diff --git a/drivers/usb/chipidea/ci.h b/drivers/usb/chipidea/ci.h
+index cd414559040f..05bc4d631cb9 100644
+--- a/drivers/usb/chipidea/ci.h
++++ b/drivers/usb/chipidea/ci.h
+@@ -428,9 +428,6 @@ int hw_port_test_set(struct ci_hdrc *ci, u8 mode);
+
+ u8 hw_port_test_get(struct ci_hdrc *ci);
+
+-int hw_wait_reg(struct ci_hdrc *ci, enum ci_hw_regs reg, u32 mask,
+- u32 value, unsigned int timeout_ms);
+-
+ void ci_platform_configure(struct ci_hdrc *ci);
+
+ int dbg_create_files(struct ci_hdrc *ci);
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index 3dbb4a21ab44..6e0d614a8075 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -516,38 +516,6 @@ int hw_device_reset(struct ci_hdrc *ci)
+ return 0;
+ }
+
+-/**
+- * hw_wait_reg: wait the register value
+- *
+- * Sometimes, it needs to wait register value before going on.
+- * Eg, when switch to device mode, the vbus value should be lower
+- * than OTGSC_BSV before connects to host.
+- *
+- * @ci: the controller
+- * @reg: register index
+- * @mask: mast bit
+- * @value: the bit value to wait
+- * @timeout_ms: timeout in millisecond
+- *
+- * This function returns an error code if timeout
+- */
+-int hw_wait_reg(struct ci_hdrc *ci, enum ci_hw_regs reg, u32 mask,
+- u32 value, unsigned int timeout_ms)
+-{
+- unsigned long elapse = jiffies + msecs_to_jiffies(timeout_ms);
+-
+- while (hw_read(ci, reg, mask) != value) {
+- if (time_after(jiffies, elapse)) {
+- dev_err(ci->dev, "timeout waiting for %08x in %d\n",
+- mask, reg);
+- return -ETIMEDOUT;
+- }
+- msleep(20);
+- }
+-
+- return 0;
+-}
+-
+ static irqreturn_t ci_irq(int irq, void *data)
+ {
+ struct ci_hdrc *ci = data;
+diff --git a/drivers/usb/chipidea/otg.c b/drivers/usb/chipidea/otg.c
+index 03b6743461d1..0cf149edddd8 100644
+--- a/drivers/usb/chipidea/otg.c
++++ b/drivers/usb/chipidea/otg.c
+@@ -44,12 +44,15 @@ u32 hw_read_otgsc(struct ci_hdrc *ci, u32 mask)
+ else
+ val &= ~OTGSC_BSVIS;
+
+- cable->changed = false;
+-
+ if (cable->state)
+ val |= OTGSC_BSV;
+ else
+ val &= ~OTGSC_BSV;
++
++ if (cable->enabled)
++ val |= OTGSC_BSVIE;
++ else
++ val &= ~OTGSC_BSVIE;
+ }
+
+ cable = &ci->platdata->id_extcon;
+@@ -59,15 +62,18 @@ u32 hw_read_otgsc(struct ci_hdrc *ci, u32 mask)
+ else
+ val &= ~OTGSC_IDIS;
+
+- cable->changed = false;
+-
+ if (cable->state)
+ val |= OTGSC_ID;
+ else
+ val &= ~OTGSC_ID;
++
++ if (cable->enabled)
++ val |= OTGSC_IDIE;
++ else
++ val &= ~OTGSC_IDIE;
+ }
+
+- return val;
++ return val & mask;
+ }
+
+ /**
+@@ -77,6 +83,36 @@ u32 hw_read_otgsc(struct ci_hdrc *ci, u32 mask)
+ */
+ void hw_write_otgsc(struct ci_hdrc *ci, u32 mask, u32 data)
+ {
++ struct ci_hdrc_cable *cable;
++
++ cable = &ci->platdata->vbus_extcon;
++ if (!IS_ERR(cable->edev)) {
++ if (data & mask & OTGSC_BSVIS)
++ cable->changed = false;
++
++ /* Don't enable vbus interrupt if using external notifier */
++ if (data & mask & OTGSC_BSVIE) {
++ cable->enabled = true;
++ data &= ~OTGSC_BSVIE;
++ } else if (mask & OTGSC_BSVIE) {
++ cable->enabled = false;
++ }
++ }
++
++ cable = &ci->platdata->id_extcon;
++ if (!IS_ERR(cable->edev)) {
++ if (data & mask & OTGSC_IDIS)
++ cable->changed = false;
++
++ /* Don't enable id interrupt if using external notifier */
++ if (data & mask & OTGSC_IDIE) {
++ cable->enabled = true;
++ data &= ~OTGSC_IDIE;
++ } else if (mask & OTGSC_IDIE) {
++ cable->enabled = false;
++ }
++ }
++
+ hw_write(ci, OP_OTGSC, mask | OTGSC_INT_STATUS_BITS, data);
+ }
+
+@@ -104,7 +140,31 @@ void ci_handle_vbus_change(struct ci_hdrc *ci)
+ usb_gadget_vbus_disconnect(&ci->gadget);
+ }
+
+-#define CI_VBUS_STABLE_TIMEOUT_MS 5000
++/**
++ * When we switch to device mode, the vbus value should be lower
++ * than OTGSC_BSV before connecting to host.
++ *
++ * @ci: the controller
++ *
++ * This function returns an error code if it times out.
++ */
++static int hw_wait_vbus_lower_bsv(struct ci_hdrc *ci)
++{
++ unsigned long elapse = jiffies + msecs_to_jiffies(5000);
++ u32 mask = OTGSC_BSV;
++
++ while (hw_read_otgsc(ci, mask)) {
++ if (time_after(jiffies, elapse)) {
++ dev_err(ci->dev, "timeout waiting for %08x in OTGSC\n",
++ mask);
++ return -ETIMEDOUT;
++ }
++ msleep(20);
++ }
++
++ return 0;
++}
++
+ static void ci_handle_id_switch(struct ci_hdrc *ci)
+ {
+ enum ci_role role = ci_otg_role(ci);
+@@ -116,9 +176,11 @@ static void ci_handle_id_switch(struct ci_hdrc *ci)
+ ci_role_stop(ci);
+
+ if (role == CI_ROLE_GADGET)
+- /* wait vbus lower than OTGSC_BSV */
+- hw_wait_reg(ci, OP_OTGSC, OTGSC_BSV, 0,
+- CI_VBUS_STABLE_TIMEOUT_MS);
++ /*
++ * wait for vbus to drop below OTGSC_BSV before connecting
++ * to the host
++ */
++ hw_wait_vbus_lower_bsv(ci);
+
+ ci_role_start(ci, role);
+ }
+diff --git a/drivers/usb/dwc2/core.c b/drivers/usb/dwc2/core.c
+index 11d8ae9aead1..439a21b8a056 100644
+--- a/drivers/usb/dwc2/core.c
++++ b/drivers/usb/dwc2/core.c
+@@ -455,7 +455,7 @@ void dwc2_clear_force_mode(struct dwc2_hsotg *hsotg)
+ dwc2_writel(gusbcfg, hsotg->regs + GUSBCFG);
+
+ if (dwc2_iddig_filter_enabled(hsotg))
+- usleep_range(100000, 110000);
++ msleep(100);
+ }
+
+ /*
+diff --git a/drivers/usb/host/ehci-exynos.c b/drivers/usb/host/ehci-exynos.c
+index 42e5b66353ef..7a603f66a9bc 100644
+--- a/drivers/usb/host/ehci-exynos.c
++++ b/drivers/usb/host/ehci-exynos.c
+@@ -77,10 +77,12 @@ static int exynos_ehci_get_phy(struct device *dev,
+ if (IS_ERR(phy)) {
+ ret = PTR_ERR(phy);
+ if (ret == -EPROBE_DEFER) {
++ of_node_put(child);
+ return ret;
+ } else if (ret != -ENOSYS && ret != -ENODEV) {
+ dev_err(dev,
+ "Error retrieving usb2 phy: %d\n", ret);
++ of_node_put(child);
+ return ret;
+ }
+ }
+diff --git a/drivers/usb/host/ohci-exynos.c b/drivers/usb/host/ohci-exynos.c
+index 2cd105be7319..6865b919403f 100644
+--- a/drivers/usb/host/ohci-exynos.c
++++ b/drivers/usb/host/ohci-exynos.c
+@@ -66,10 +66,12 @@ static int exynos_ohci_get_phy(struct device *dev,
+ if (IS_ERR(phy)) {
+ ret = PTR_ERR(phy);
+ if (ret == -EPROBE_DEFER) {
++ of_node_put(child);
+ return ret;
+ } else if (ret != -ENOSYS && ret != -ENODEV) {
+ dev_err(dev,
+ "Error retrieving usb2 phy: %d\n", ret);
++ of_node_put(child);
+ return ret;
+ }
+ }
+diff --git a/drivers/usb/serial/ark3116.c b/drivers/usb/serial/ark3116.c
+index 7812052dc700..754fc3e41005 100644
+--- a/drivers/usb/serial/ark3116.c
++++ b/drivers/usb/serial/ark3116.c
+@@ -373,23 +373,29 @@ static int ark3116_open(struct tty_struct *tty, struct usb_serial_port *port)
+ dev_dbg(&port->dev,
+ "%s - usb_serial_generic_open failed: %d\n",
+ __func__, result);
+- goto err_out;
++ goto err_free;
+ }
+
+ /* remove any data still left: also clears error state */
+ ark3116_read_reg(serial, UART_RX, buf);
+
+ /* read modem status */
+- priv->msr = ark3116_read_reg(serial, UART_MSR, buf);
++ result = ark3116_read_reg(serial, UART_MSR, buf);
++ if (result < 0)
++ goto err_close;
++ priv->msr = *buf;
++
+ /* read line status */
+- priv->lsr = ark3116_read_reg(serial, UART_LSR, buf);
++ result = ark3116_read_reg(serial, UART_LSR, buf);
++ if (result < 0)
++ goto err_close;
++ priv->lsr = *buf;
+
+ result = usb_submit_urb(port->interrupt_in_urb, GFP_KERNEL);
+ if (result) {
+ dev_err(&port->dev, "submit irq_in urb failed %d\n",
+ result);
+- ark3116_close(port);
+- goto err_out;
++ goto err_close;
+ }
+
+ /* activate interrupts */
+@@ -402,8 +408,15 @@ static int ark3116_open(struct tty_struct *tty, struct usb_serial_port *port)
+ if (tty)
+ ark3116_set_termios(tty, port, NULL);
+
+-err_out:
+ kfree(buf);
++
++ return 0;
++
++err_close:
++ usb_serial_generic_close(port);
++err_free:
++ kfree(buf);
++
+ return result;
+ }
+
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index 95aa5233726c..86692d2f8523 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -248,21 +248,11 @@ static int ch341_configure(struct usb_device *dev, struct ch341_private *priv)
+ if (r < 0)
+ goto out;
+
+- /* expect 0xff 0xee */
+- r = ch341_get_status(dev, priv);
+- if (r < 0)
+- goto out;
+-
+ r = ch341_set_baudrate_lcr(dev, priv, priv->lcr);
+ if (r < 0)
+ goto out;
+
+ r = ch341_set_handshake(dev, priv->line_control);
+- if (r < 0)
+- goto out;
+-
+- /* expect 0x9f 0xee */
+- r = ch341_get_status(dev, priv);
+
+ out: kfree(buffer);
+ return r;
+@@ -334,14 +324,9 @@ static void ch341_close(struct usb_serial_port *port)
+ /* open this device, set default parameters */
+ static int ch341_open(struct tty_struct *tty, struct usb_serial_port *port)
+ {
+- struct usb_serial *serial = port->serial;
+ struct ch341_private *priv = usb_get_serial_port_data(port);
+ int r;
+
+- r = ch341_configure(serial->dev, priv);
+- if (r)
+- return r;
+-
+ if (tty)
+ ch341_set_termios(tty, port, NULL);
+
+@@ -353,6 +338,12 @@ static int ch341_open(struct tty_struct *tty, struct usb_serial_port *port)
+ return r;
+ }
+
++ r = ch341_get_status(port->serial->dev, priv);
++ if (r < 0) {
++ dev_err(&port->dev, "failed to read modem status: %d\n", r);
++ goto err_kill_interrupt_urb;
++ }
++
+ r = usb_serial_generic_open(tty, port);
+ if (r)
+ goto err_kill_interrupt_urb;
+@@ -619,6 +610,12 @@ static int ch341_reset_resume(struct usb_serial *serial)
+ ret);
+ return ret;
+ }
++
++ ret = ch341_get_status(port->serial->dev, priv);
++ if (ret < 0) {
++ dev_err(&port->dev, "failed to read modem status: %d\n",
++ ret);
++ }
+ }
+
+ return usb_serial_generic_resume(serial);
+diff --git a/drivers/usb/serial/digi_acceleport.c b/drivers/usb/serial/digi_acceleport.c
+index 30bf0f5db82d..7ab3235febfc 100644
+--- a/drivers/usb/serial/digi_acceleport.c
++++ b/drivers/usb/serial/digi_acceleport.c
+@@ -1398,25 +1398,30 @@ static int digi_read_inb_callback(struct urb *urb)
+ {
+ struct usb_serial_port *port = urb->context;
+ struct digi_port *priv = usb_get_serial_port_data(port);
+- int opcode = ((unsigned char *)urb->transfer_buffer)[0];
+- int len = ((unsigned char *)urb->transfer_buffer)[1];
+- int port_status = ((unsigned char *)urb->transfer_buffer)[2];
+- unsigned char *data = ((unsigned char *)urb->transfer_buffer) + 3;
++ unsigned char *buf = urb->transfer_buffer;
++ int opcode;
++ int len;
++ int port_status;
++ unsigned char *data;
+ int flag, throttled;
+- int status = urb->status;
+-
+- /* do not process callbacks on closed ports */
+- /* but do continue the read chain */
+- if (urb->status == -ENOENT)
+- return 0;
+
+ /* short/multiple packet check */
++ if (urb->actual_length < 2) {
++ dev_warn(&port->dev, "short packet received\n");
++ return -1;
++ }
++
++ opcode = buf[0];
++ len = buf[1];
++
+ if (urb->actual_length != len + 2) {
+- dev_err(&port->dev, "%s: INCOMPLETE OR MULTIPLE PACKET, "
+- "status=%d, port=%d, opcode=%d, len=%d, "
+- "actual_length=%d, status=%d\n", __func__, status,
+- priv->dp_port_num, opcode, len, urb->actual_length,
+- port_status);
++ dev_err(&port->dev, "malformed packet received: port=%d, opcode=%d, len=%d, actual_length=%u\n",
++ priv->dp_port_num, opcode, len, urb->actual_length);
++ return -1;
++ }
++
++ if (opcode == DIGI_CMD_RECEIVE_DATA && len < 1) {
++ dev_err(&port->dev, "malformed data packet received\n");
+ return -1;
+ }
+
+@@ -1430,6 +1435,9 @@ static int digi_read_inb_callback(struct urb *urb)
+
+ /* receive data */
+ if (opcode == DIGI_CMD_RECEIVE_DATA) {
++ port_status = buf[2];
++ data = &buf[3];
++
+ /* get flag from port_status */
+ flag = 0;
+
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index 7d863fda1f18..c6c388bed156 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1439,10 +1439,13 @@ static int read_latency_timer(struct usb_serial_port *port)
+ FTDI_SIO_GET_LATENCY_TIMER_REQUEST_TYPE,
+ 0, priv->interface,
+ buf, 1, WDR_TIMEOUT);
+- if (rv < 0)
++ if (rv < 1) {
+ dev_err(&port->dev, "Unable to read latency timer: %i\n", rv);
+- else
++ if (rv >= 0)
++ rv = -EIO;
++ } else {
+ priv->latency = buf[0];
++ }
+
+ kfree(buf);
+
+diff --git a/drivers/usb/serial/io_edgeport.c b/drivers/usb/serial/io_edgeport.c
+index d50e5773483f..8ab5f5b49ef3 100644
+--- a/drivers/usb/serial/io_edgeport.c
++++ b/drivers/usb/serial/io_edgeport.c
+@@ -492,20 +492,24 @@ static int get_epic_descriptor(struct edgeport_serial *ep)
+ int result;
+ struct usb_serial *serial = ep->serial;
+ struct edgeport_product_info *product_info = &ep->product_info;
+- struct edge_compatibility_descriptor *epic = &ep->epic_descriptor;
++ struct edge_compatibility_descriptor *epic;
+ struct edge_compatibility_bits *bits;
+ struct device *dev = &serial->dev->dev;
+
+ ep->is_epic = 0;
++
++ epic = kmalloc(sizeof(*epic), GFP_KERNEL);
++ if (!epic)
++ return -ENOMEM;
++
+ result = usb_control_msg(serial->dev, usb_rcvctrlpipe(serial->dev, 0),
+ USB_REQUEST_ION_GET_EPIC_DESC,
+ 0xC0, 0x00, 0x00,
+- &ep->epic_descriptor,
+- sizeof(struct edge_compatibility_descriptor),
++ epic, sizeof(*epic),
+ 300);
+-
+- if (result > 0) {
++ if (result == sizeof(*epic)) {
+ ep->is_epic = 1;
++ memcpy(&ep->epic_descriptor, epic, sizeof(*epic));
+ memset(product_info, 0, sizeof(struct edgeport_product_info));
+
+ product_info->NumPorts = epic->NumPorts;
+@@ -534,8 +538,16 @@ static int get_epic_descriptor(struct edgeport_serial *ep)
+ dev_dbg(dev, " IOSPWriteLCR : %s\n", bits->IOSPWriteLCR ? "TRUE": "FALSE");
+ dev_dbg(dev, " IOSPSetBaudRate : %s\n", bits->IOSPSetBaudRate ? "TRUE": "FALSE");
+ dev_dbg(dev, " TrueEdgeport : %s\n", bits->TrueEdgeport ? "TRUE": "FALSE");
++
++ result = 0;
++ } else if (result >= 0) {
++ dev_warn(&serial->interface->dev, "short epic descriptor received: %d\n",
++ result);
++ result = -EIO;
+ }
+
++ kfree(epic);
++
+ return result;
+ }
+
+@@ -2090,8 +2102,7 @@ static int rom_write(struct usb_serial *serial, __u16 extAddr, __u16 addr,
+ * rom_read
+ * reads a number of bytes from the Edgeport device starting at the given
+ * address.
+- * If successful returns the number of bytes read, otherwise it returns
+- * a negative error number of the problem.
++ * Returns zero on success or a negative error number.
+ ****************************************************************************/
+ static int rom_read(struct usb_serial *serial, __u16 extAddr,
+ __u16 addr, __u16 length, __u8 *data)
+@@ -2116,12 +2127,17 @@ static int rom_read(struct usb_serial *serial, __u16 extAddr,
+ USB_REQUEST_ION_READ_ROM,
+ 0xC0, addr, extAddr, transfer_buffer,
+ current_length, 300);
+- if (result < 0)
++ if (result < current_length) {
++ if (result >= 0)
++ result = -EIO;
+ break;
++ }
+ memcpy(data, transfer_buffer, current_length);
+ length -= current_length;
+ addr += current_length;
+ data += current_length;
++
++ result = 0;
+ }
+
+ kfree(transfer_buffer);
+@@ -2575,9 +2591,10 @@ static void get_manufacturing_desc(struct edgeport_serial *edge_serial)
+ EDGE_MANUF_DESC_LEN,
+ (__u8 *)(&edge_serial->manuf_descriptor));
+
+- if (response < 1)
+- dev_err(dev, "error in getting manufacturer descriptor\n");
+- else {
++ if (response < 0) {
++ dev_err(dev, "error in getting manufacturer descriptor: %d\n",
++ response);
++ } else {
+ char string[30];
+ dev_dbg(dev, "**Manufacturer Descriptor\n");
+ dev_dbg(dev, " RomSize: %dK\n",
+@@ -2634,9 +2651,10 @@ static void get_boot_desc(struct edgeport_serial *edge_serial)
+ EDGE_BOOT_DESC_LEN,
+ (__u8 *)(&edge_serial->boot_descriptor));
+
+- if (response < 1)
+- dev_err(dev, "error in getting boot descriptor\n");
+- else {
++ if (response < 0) {
++ dev_err(dev, "error in getting boot descriptor: %d\n",
++ response);
++ } else {
+ dev_dbg(dev, "**Boot Descriptor:\n");
+ dev_dbg(dev, " BootCodeLength: %d\n",
+ le16_to_cpu(edge_serial->boot_descriptor.BootCodeLength));
+@@ -2779,7 +2797,7 @@ static int edge_startup(struct usb_serial *serial)
+ dev_info(&serial->dev->dev, "%s detected\n", edge_serial->name);
+
+ /* Read the epic descriptor */
+- if (get_epic_descriptor(edge_serial) <= 0) {
++ if (get_epic_descriptor(edge_serial) < 0) {
+ /* memcpy descriptor to Supports structures */
+ memcpy(&edge_serial->epic_descriptor.Supports, descriptor,
+ sizeof(struct edge_compatibility_bits));
+diff --git a/drivers/usb/serial/keyspan_pda.c b/drivers/usb/serial/keyspan_pda.c
+index 83523fcf6fb9..d2dab2a341b8 100644
+--- a/drivers/usb/serial/keyspan_pda.c
++++ b/drivers/usb/serial/keyspan_pda.c
+@@ -139,6 +139,7 @@ static void keyspan_pda_rx_interrupt(struct urb *urb)
+ {
+ struct usb_serial_port *port = urb->context;
+ unsigned char *data = urb->transfer_buffer;
++ unsigned int len = urb->actual_length;
+ int retval;
+ int status = urb->status;
+ struct keyspan_pda_private *priv;
+@@ -159,18 +160,26 @@ static void keyspan_pda_rx_interrupt(struct urb *urb)
+ goto exit;
+ }
+
++ if (len < 1) {
++ dev_warn(&port->dev, "short message received\n");
++ goto exit;
++ }
++
+ /* see if the message is data or a status interrupt */
+ switch (data[0]) {
+ case 0:
+ /* rest of message is rx data */
+- if (urb->actual_length) {
+- tty_insert_flip_string(&port->port, data + 1,
+- urb->actual_length - 1);
+- tty_flip_buffer_push(&port->port);
+- }
++ if (len < 2)
++ break;
++ tty_insert_flip_string(&port->port, data + 1, len - 1);
++ tty_flip_buffer_push(&port->port);
+ break;
+ case 1:
+ /* status interrupt */
++ if (len < 3) {
++ dev_warn(&port->dev, "short interrupt message received\n");
++ break;
++ }
+ dev_dbg(&port->dev, "rx int, d1=%d, d2=%d\n", data[1], data[2]);
+ switch (data[1]) {
+ case 1: /* modemline change */
+diff --git a/drivers/usb/serial/mct_u232.c b/drivers/usb/serial/mct_u232.c
+index 885655315de1..edbc81f205c2 100644
+--- a/drivers/usb/serial/mct_u232.c
++++ b/drivers/usb/serial/mct_u232.c
+@@ -322,8 +322,12 @@ static int mct_u232_get_modem_stat(struct usb_serial_port *port,
+ MCT_U232_GET_REQUEST_TYPE,
+ 0, 0, buf, MCT_U232_GET_MODEM_STAT_SIZE,
+ WDR_TIMEOUT);
+- if (rc < 0) {
++ if (rc < MCT_U232_GET_MODEM_STAT_SIZE) {
+ dev_err(&port->dev, "Get MODEM STATus failed (error = %d)\n", rc);
++
++ if (rc >= 0)
++ rc = -EIO;
++
+ *msr = 0;
+ } else {
+ *msr = buf[0];
+diff --git a/drivers/usb/serial/quatech2.c b/drivers/usb/serial/quatech2.c
+index 5709cc93b083..cf29128327d3 100644
+--- a/drivers/usb/serial/quatech2.c
++++ b/drivers/usb/serial/quatech2.c
+@@ -188,22 +188,22 @@ static inline int qt2_setdevice(struct usb_device *dev, u8 *data)
+ }
+
+
+-static inline int qt2_getdevice(struct usb_device *dev, u8 *data)
+-{
+- return usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+- QT_SET_GET_DEVICE, 0xc0, 0, 0,
+- data, 3, QT2_USB_TIMEOUT);
+-}
+-
+ static inline int qt2_getregister(struct usb_device *dev,
+ u8 uart,
+ u8 reg,
+ u8 *data)
+ {
+- return usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+- QT_SET_GET_REGISTER, 0xc0, reg,
+- uart, data, sizeof(*data), QT2_USB_TIMEOUT);
++ int ret;
++
++ ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
++ QT_SET_GET_REGISTER, 0xc0, reg,
++ uart, data, sizeof(*data), QT2_USB_TIMEOUT);
++ if (ret < sizeof(*data)) {
++ if (ret >= 0)
++ ret = -EIO;
++ }
+
++ return ret;
+ }
+
+ static inline int qt2_setregister(struct usb_device *dev,
+@@ -372,9 +372,11 @@ static int qt2_open(struct tty_struct *tty, struct usb_serial_port *port)
+ 0xc0, 0,
+ device_port, data, 2, QT2_USB_TIMEOUT);
+
+- if (status < 0) {
++ if (status < 2) {
+ dev_err(&port->dev, "%s - open port failed %i\n", __func__,
+ status);
++ if (status >= 0)
++ status = -EIO;
+ kfree(data);
+ return status;
+ }
+diff --git a/drivers/usb/serial/ssu100.c b/drivers/usb/serial/ssu100.c
+index 2a156144c76c..55814538ff1f 100644
+--- a/drivers/usb/serial/ssu100.c
++++ b/drivers/usb/serial/ssu100.c
+@@ -80,9 +80,17 @@ static inline int ssu100_setdevice(struct usb_device *dev, u8 *data)
+
+ static inline int ssu100_getdevice(struct usb_device *dev, u8 *data)
+ {
+- return usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+- QT_SET_GET_DEVICE, 0xc0, 0, 0,
+- data, 3, 300);
++ int ret;
++
++ ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
++ QT_SET_GET_DEVICE, 0xc0, 0, 0,
++ data, 3, 300);
++ if (ret < 3) {
++ if (ret >= 0)
++ ret = -EIO;
++ }
++
++ return ret;
+ }
+
+ static inline int ssu100_getregister(struct usb_device *dev,
+@@ -90,10 +98,17 @@ static inline int ssu100_getregister(struct usb_device *dev,
+ unsigned short reg,
+ u8 *data)
+ {
+- return usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+- QT_SET_GET_REGISTER, 0xc0, reg,
+- uart, data, sizeof(*data), 300);
++ int ret;
++
++ ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
++ QT_SET_GET_REGISTER, 0xc0, reg,
++ uart, data, sizeof(*data), 300);
++ if (ret < sizeof(*data)) {
++ if (ret >= 0)
++ ret = -EIO;
++ }
+
++ return ret;
+ }
+
+
+@@ -289,8 +304,10 @@ static int ssu100_open(struct tty_struct *tty, struct usb_serial_port *port)
+ QT_OPEN_CLOSE_CHANNEL,
+ QT_TRANSFER_IN, 0x01,
+ 0, data, 2, 300);
+- if (result < 0) {
++ if (result < 2) {
+ dev_dbg(&port->dev, "%s - open failed %i\n", __func__, result);
++ if (result >= 0)
++ result = -EIO;
+ kfree(data);
+ return result;
+ }
+diff --git a/drivers/usb/serial/ti_usb_3410_5052.c b/drivers/usb/serial/ti_usb_3410_5052.c
+index 64b85b8dedf3..3107bf5d1c96 100644
+--- a/drivers/usb/serial/ti_usb_3410_5052.c
++++ b/drivers/usb/serial/ti_usb_3410_5052.c
+@@ -1553,13 +1553,10 @@ static int ti_command_out_sync(struct ti_device *tdev, __u8 command,
+ (USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_OUT),
+ value, moduleid, data, size, 1000);
+
+- if (status == size)
+- status = 0;
+-
+- if (status > 0)
+- status = -ECOMM;
++ if (status < 0)
++ return status;
+
+- return status;
++ return 0;
+ }
+
+
+@@ -1575,8 +1572,7 @@ static int ti_command_in_sync(struct ti_device *tdev, __u8 command,
+
+ if (status == size)
+ status = 0;
+-
+- if (status > 0)
++ else if (status >= 0)
+ status = -ECOMM;
+
+ return status;
+diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
+index fd8e872d2943..86199f31bc57 100644
+--- a/drivers/xen/events/events_base.c
++++ b/drivers/xen/events/events_base.c
+@@ -1312,6 +1312,9 @@ static int rebind_irq_to_cpu(unsigned irq, unsigned tcpu)
+ if (!VALID_EVTCHN(evtchn))
+ return -1;
+
++ if (!xen_support_evtchn_rebind())
++ return -1;
++
+ /* Send future instances of this interrupt to other vcpu. */
+ bind_vcpu.port = evtchn;
+ bind_vcpu.vcpu = xen_vcpu_nr(tcpu);
+@@ -1645,15 +1648,20 @@ void xen_callback_vector(void)
+ {
+ int rc;
+ uint64_t callback_via;
+-
+- callback_via = HVM_CALLBACK_VECTOR(HYPERVISOR_CALLBACK_VECTOR);
+- rc = xen_set_callback_via(callback_via);
+- BUG_ON(rc);
+- pr_info("Xen HVM callback vector for event delivery is enabled\n");
+- /* in the restore case the vector has already been allocated */
+- if (!test_bit(HYPERVISOR_CALLBACK_VECTOR, used_vectors))
+- alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR,
+- xen_hvm_callback_vector);
++ if (xen_have_vector_callback) {
++ callback_via = HVM_CALLBACK_VECTOR(HYPERVISOR_CALLBACK_VECTOR);
++ rc = xen_set_callback_via(callback_via);
++ if (rc) {
++ pr_err("Request for Xen HVM callback vector failed\n");
++ xen_have_vector_callback = 0;
++ return;
++ }
++ pr_info("Xen HVM callback vector for event delivery is enabled\n");
++ /* in the restore case the vector has already been allocated */
++ if (!test_bit(HYPERVISOR_CALLBACK_VECTOR, used_vectors))
++ alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR,
++ xen_hvm_callback_vector);
++ }
+ }
+ #else
+ void xen_callback_vector(void) {}
+diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
+index 2a165cc8a43c..1c4deac9b0f8 100644
+--- a/drivers/xen/platform-pci.c
++++ b/drivers/xen/platform-pci.c
+@@ -67,7 +67,7 @@ static uint64_t get_callback_via(struct pci_dev *pdev)
+ pin = pdev->pin;
+
+ /* We don't know the GSI. Specify the PCI INTx line instead. */
+- return ((uint64_t)0x01 << HVM_CALLBACK_VIA_TYPE_SHIFT) | /* PCI INTx identifier */
++ return ((uint64_t)0x01 << 56) | /* PCI INTx identifier */
+ ((uint64_t)pci_domain_nr(pdev->bus) << 32) |
+ ((uint64_t)pdev->bus->number << 16) |
+ ((uint64_t)(pdev->devfn & 0xff) << 8) |
+@@ -90,7 +90,7 @@ static int xen_allocate_irq(struct pci_dev *pdev)
+ static int platform_pci_resume(struct pci_dev *pdev)
+ {
+ int err;
+- if (!xen_pv_domain())
++ if (xen_have_vector_callback)
+ return 0;
+ err = xen_set_callback_via(callback_via);
+ if (err) {
+@@ -138,14 +138,7 @@ static int platform_pci_probe(struct pci_dev *pdev,
+ platform_mmio = mmio_addr;
+ platform_mmiolen = mmio_len;
+
+- /*
+- * Xen HVM guests always use the vector callback mechanism.
+- * L1 Dom0 in a nested Xen environment is a PV guest inside in an
+- * HVM environment. It needs the platform-pci driver to get
+- * notifications from L0 Xen, but it cannot use the vector callback
+- * as it is not exported by L1 Xen.
+- */
+- if (xen_pv_domain()) {
++ if (!xen_have_vector_callback) {
+ ret = xen_allocate_irq(pdev);
+ if (ret) {
+ dev_warn(&pdev->dev, "request_irq failed err=%d\n", ret);
+diff --git a/fs/9p/acl.c b/fs/9p/acl.c
+index b3c2cc79c20d..082d227fa56b 100644
+--- a/fs/9p/acl.c
++++ b/fs/9p/acl.c
+@@ -277,6 +277,7 @@ static int v9fs_xattr_set_acl(const struct xattr_handler *handler,
+ case ACL_TYPE_ACCESS:
+ if (acl) {
+ struct iattr iattr;
++ struct posix_acl *old_acl = acl;
+
+ retval = posix_acl_update_mode(inode, &iattr.ia_mode, &acl);
+ if (retval)
+@@ -287,6 +288,7 @@ static int v9fs_xattr_set_acl(const struct xattr_handler *handler,
+ * by the mode bits. So don't
+ * update ACL.
+ */
++ posix_acl_release(old_acl);
+ value = NULL;
+ size = 0;
+ }
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 3c47614a4b32..b94e2a4974a1 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -1422,7 +1422,6 @@ int revalidate_disk(struct gendisk *disk)
+
+ if (disk->fops->revalidate_disk)
+ ret = disk->fops->revalidate_disk(disk);
+- blk_integrity_revalidate(disk);
+ bdev = bdget_disk(disk, 0);
+ if (!bdev)
+ return ret;
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 287fcbd0551e..119b68332cd5 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1441,6 +1441,13 @@ static int sanity_check_raw_super(struct f2fs_sb_info *sbi,
+ return 1;
+ }
+
++ if (le32_to_cpu(raw_super->segment_count) > F2FS_MAX_SEGMENT) {
++ f2fs_msg(sb, KERN_INFO,
++ "Invalid segment count (%u)",
++ le32_to_cpu(raw_super->segment_count));
++ return 1;
++ }
++
+ /* check CP/SIT/NAT/SSA/MAIN_AREA area boundary */
+ if (sanity_check_area_boundary(sbi, bh))
+ return 1;
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index cea41a124a80..3e5972ef5019 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -294,6 +294,12 @@ struct f2fs_nat_block {
+ #define SIT_ENTRY_PER_BLOCK (PAGE_SIZE / sizeof(struct f2fs_sit_entry))
+
+ /*
++ * F2FS uses 4 bytes to represent block address. As a result, supported size of
++ * disk is 16 TB and it equals to 16 * 1024 * 1024 / 2 segments.
++ */
++#define F2FS_MAX_SEGMENT ((16 * 1024 * 1024) / 2)
++
++/*
+ * Note that f2fs_sit_entry->vblocks has the following bit-field information.
+ * [15:10] : allocation type such as CURSEG_XXXX_TYPE
+ * [9:0] : valid block count
+diff --git a/include/linux/genhd.h b/include/linux/genhd.h
+index 76f39754e7b0..76d6a1cd4153 100644
+--- a/include/linux/genhd.h
++++ b/include/linux/genhd.h
+@@ -722,11 +722,9 @@ static inline void part_nr_sects_write(struct hd_struct *part, sector_t size)
+ #if defined(CONFIG_BLK_DEV_INTEGRITY)
+ extern void blk_integrity_add(struct gendisk *);
+ extern void blk_integrity_del(struct gendisk *);
+-extern void blk_integrity_revalidate(struct gendisk *);
+ #else /* CONFIG_BLK_DEV_INTEGRITY */
+ static inline void blk_integrity_add(struct gendisk *disk) { }
+ static inline void blk_integrity_del(struct gendisk *disk) { }
+-static inline void blk_integrity_revalidate(struct gendisk *disk) { }
+ #endif /* CONFIG_BLK_DEV_INTEGRITY */
+
+ #else /* CONFIG_BLOCK */
+diff --git a/include/linux/usb/chipidea.h b/include/linux/usb/chipidea.h
+index 5dd75fa47dd8..f9be467d6695 100644
+--- a/include/linux/usb/chipidea.h
++++ b/include/linux/usb/chipidea.h
+@@ -14,6 +14,7 @@ struct ci_hdrc;
+ * struct ci_hdrc_cable - structure for external connector cable state tracking
+ * @state: current state of the line
+ * @changed: set to true when extcon event happen
++ * @enabled: set to true if we've enabled the vbus or id interrupt
+ * @edev: device which generate events
+ * @ci: driver state of the chipidea device
+ * @nb: hold event notification callback
+@@ -22,6 +23,7 @@ struct ci_hdrc;
+ struct ci_hdrc_cable {
+ bool state;
+ bool changed;
++ bool enabled;
+ struct extcon_dev *edev;
+ struct ci_hdrc *ci;
+ struct notifier_block nb;
+diff --git a/include/net/addrconf.h b/include/net/addrconf.h
+index 8f998afc1384..b8ee8a113e32 100644
+--- a/include/net/addrconf.h
++++ b/include/net/addrconf.h
+@@ -20,6 +20,8 @@
+ #define ADDRCONF_TIMER_FUZZ (HZ / 4)
+ #define ADDRCONF_TIMER_FUZZ_MAX (HZ)
+
++#define ADDRCONF_NOTIFY_PRIORITY 0
++
+ #include <linux/in.h>
+ #include <linux/in6.h>
+
+diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h
+index 9dc2c182a263..f5e625f53367 100644
+--- a/include/net/ip6_route.h
++++ b/include/net/ip6_route.h
+@@ -84,6 +84,7 @@ struct dst_entry *ip6_route_lookup(struct net *net, struct flowi6 *fl6,
+ struct rt6_info *ip6_pol_route(struct net *net, struct fib6_table *table,
+ int ifindex, struct flowi6 *fl6, int flags);
+
++void ip6_route_init_special_entries(void);
+ int ip6_route_init(void);
+ void ip6_route_cleanup(void);
+
+diff --git a/include/xen/xen.h b/include/xen/xen.h
+index f0f0252cff9a..0c0e3ef4c45d 100644
+--- a/include/xen/xen.h
++++ b/include/xen/xen.h
+@@ -38,7 +38,8 @@ extern enum xen_domain_type xen_domain_type;
+ */
+ #include <xen/features.h>
+ #define xen_pvh_domain() (xen_pv_domain() && \
+- xen_feature(XENFEAT_auto_translated_physmap))
++ xen_feature(XENFEAT_auto_translated_physmap) && \
++ xen_have_vector_callback)
+ #else
+ #define xen_pvh_domain() (0)
+ #endif
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index f3c938ba87a2..b54585d67c0c 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -296,7 +296,8 @@ static const char *const bpf_jmp_string[16] = {
+ [BPF_EXIT >> 4] = "exit",
+ };
+
+-static void print_bpf_insn(struct bpf_insn *insn)
++static void print_bpf_insn(const struct bpf_verifier_env *env,
++ const struct bpf_insn *insn)
+ {
+ u8 class = BPF_CLASS(insn->code);
+
+@@ -360,9 +361,19 @@ static void print_bpf_insn(struct bpf_insn *insn)
+ insn->code,
+ bpf_ldst_string[BPF_SIZE(insn->code) >> 3],
+ insn->src_reg, insn->imm);
+- } else if (BPF_MODE(insn->code) == BPF_IMM) {
+- verbose("(%02x) r%d = 0x%x\n",
+- insn->code, insn->dst_reg, insn->imm);
++ } else if (BPF_MODE(insn->code) == BPF_IMM &&
++ BPF_SIZE(insn->code) == BPF_DW) {
++ /* At this point, we already made sure that the second
++ * part of the ldimm64 insn is accessible.
++ */
++ u64 imm = ((u64)(insn + 1)->imm << 32) | (u32)insn->imm;
++ bool map_ptr = insn->src_reg == BPF_PSEUDO_MAP_FD;
++
++ if (map_ptr && !env->allow_ptr_leaks)
++ imm = 0;
++
++ verbose("(%02x) r%d = 0x%llx\n", insn->code,
++ insn->dst_reg, (unsigned long long)imm);
+ } else {
+ verbose("BUG_ld_%02x\n", insn->code);
+ return;
+@@ -1779,6 +1790,17 @@ static int check_alu_op(struct bpf_verifier_env *env, struct bpf_insn *insn)
+ return 0;
+ } else if (opcode == BPF_ADD &&
+ BPF_CLASS(insn->code) == BPF_ALU64 &&
++ dst_reg->type == PTR_TO_STACK &&
++ ((BPF_SRC(insn->code) == BPF_X &&
++ regs[insn->src_reg].type == CONST_IMM) ||
++ BPF_SRC(insn->code) == BPF_K)) {
++ if (BPF_SRC(insn->code) == BPF_X)
++ dst_reg->imm += regs[insn->src_reg].imm;
++ else
++ dst_reg->imm += insn->imm;
++ return 0;
++ } else if (opcode == BPF_ADD &&
++ BPF_CLASS(insn->code) == BPF_ALU64 &&
+ (dst_reg->type == PTR_TO_PACKET ||
+ (BPF_SRC(insn->code) == BPF_X &&
+ regs[insn->src_reg].type == PTR_TO_PACKET))) {
+@@ -2693,7 +2715,7 @@ static int do_check(struct bpf_verifier_env *env)
+
+ if (log_level) {
+ verbose("%d: ", insn_idx);
+- print_bpf_insn(insn);
++ print_bpf_insn(env, insn);
+ }
+
+ err = ext_analyzer_insn_hook(env, insn_idx, prev_insn_idx);
+diff --git a/lib/test_bpf.c b/lib/test_bpf.c
+index 0362da0b66c3..2e385026915c 100644
+--- a/lib/test_bpf.c
++++ b/lib/test_bpf.c
+@@ -4656,6 +4656,51 @@ static struct bpf_test tests[] = {
+ { },
+ { { 0, 1 } },
+ },
++ {
++ /* Mainly testing JIT + imm64 here. */
++ "JMP_JGE_X: ldimm64 test 1",
++ .u.insns_int = {
++ BPF_ALU32_IMM(BPF_MOV, R0, 0),
++ BPF_LD_IMM64(R1, 3),
++ BPF_LD_IMM64(R2, 2),
++ BPF_JMP_REG(BPF_JGE, R1, R2, 2),
++ BPF_LD_IMM64(R0, 0xffffffffffffffffUL),
++ BPF_LD_IMM64(R0, 0xeeeeeeeeeeeeeeeeUL),
++ BPF_EXIT_INSN(),
++ },
++ INTERNAL,
++ { },
++ { { 0, 0xeeeeeeeeU } },
++ },
++ {
++ "JMP_JGE_X: ldimm64 test 2",
++ .u.insns_int = {
++ BPF_ALU32_IMM(BPF_MOV, R0, 0),
++ BPF_LD_IMM64(R1, 3),
++ BPF_LD_IMM64(R2, 2),
++ BPF_JMP_REG(BPF_JGE, R1, R2, 0),
++ BPF_LD_IMM64(R0, 0xffffffffffffffffUL),
++ BPF_EXIT_INSN(),
++ },
++ INTERNAL,
++ { },
++ { { 0, 0xffffffffU } },
++ },
++ {
++ "JMP_JGE_X: ldimm64 test 3",
++ .u.insns_int = {
++ BPF_ALU32_IMM(BPF_MOV, R0, 1),
++ BPF_LD_IMM64(R1, 3),
++ BPF_LD_IMM64(R2, 2),
++ BPF_JMP_REG(BPF_JGE, R1, R2, 4),
++ BPF_LD_IMM64(R0, 0xffffffffffffffffUL),
++ BPF_LD_IMM64(R0, 0xeeeeeeeeeeeeeeeeUL),
++ BPF_EXIT_INSN(),
++ },
++ INTERNAL,
++ { },
++ { { 0, 1 } },
++ },
+ /* BPF_JMP | BPF_JNE | BPF_X */
+ {
+ "JMP_JNE_X: if (3 != 2) return 1",
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 75e3ea7bda08..d64d8d14bb2e 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1059,7 +1059,7 @@ static int rtnl_phys_port_name_fill(struct sk_buff *skb, struct net_device *dev)
+ return err;
+ }
+
+- if (nla_put(skb, IFLA_PHYS_PORT_NAME, strlen(name), name))
++ if (nla_put_string(skb, IFLA_PHYS_PORT_NAME, name))
+ return -EMSGSIZE;
+
+ return 0;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 887995e6df9a..7d6369b31b88 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -1572,6 +1572,8 @@ int ___pskb_trim(struct sk_buff *skb, unsigned int len)
+ skb_set_tail_pointer(skb, len);
+ }
+
++ if (!skb->sk || skb->destructor == sock_edemux)
++ skb_condense(skb);
+ return 0;
+ }
+ EXPORT_SYMBOL(___pskb_trim);
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index 4e49e5cb001c..259fbcd8c479 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -358,6 +358,9 @@ static int raw_send_hdrinc(struct sock *sk, struct flowi4 *fl4,
+ rt->dst.dev->mtu);
+ return -EMSGSIZE;
+ }
++ if (length < sizeof(struct iphdr))
++ return -EINVAL;
++
+ if (flags&MSG_PROBE)
+ goto out;
+
+diff --git a/net/ipv4/tcp_lp.c b/net/ipv4/tcp_lp.c
+index 046fd3910873..d6fb6c067af4 100644
+--- a/net/ipv4/tcp_lp.c
++++ b/net/ipv4/tcp_lp.c
+@@ -264,13 +264,15 @@ static void tcp_lp_pkts_acked(struct sock *sk, const struct ack_sample *sample)
+ {
+ struct tcp_sock *tp = tcp_sk(sk);
+ struct lp *lp = inet_csk_ca(sk);
++ u32 delta;
+
+ if (sample->rtt_us > 0)
+ tcp_lp_rtt_sample(sk, sample->rtt_us);
+
+ /* calc inference */
+- if (tcp_time_stamp > tp->rx_opt.rcv_tsecr)
+- lp->inference = 3 * (tcp_time_stamp - tp->rx_opt.rcv_tsecr);
++ delta = tcp_time_stamp - tp->rx_opt.rcv_tsecr;
++ if ((s32)delta > 0)
++ lp->inference = 3 * delta;
+
+ /* test if within inference */
+ if (lp->last_drop && (tcp_time_stamp - lp->last_drop < lp->inference))
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index 80ff517a7542..64bea51fefde 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -543,6 +543,7 @@ struct sock *tcp_create_openreq_child(const struct sock *sk,
+ newicsk->icsk_ack.last_seg_size = skb->len - newtp->tcp_header_len;
+ newtp->rx_opt.mss_clamp = req->mss;
+ tcp_ecn_openreq_child(newtp, req);
++ newtp->fastopen_req = NULL;
+ newtp->fastopen_rsk = NULL;
+ newtp->syn_data_acked = 0;
+ newtp->rack.mstamp.v64 = 0;
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 8ce50dc3ab8c..b7236ad4d832 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -1257,7 +1257,7 @@ int tcp_fragment(struct sock *sk, struct sk_buff *skb, u32 len,
+ * eventually). The difference is that pulled data not copied, but
+ * immediately discarded.
+ */
+-static void __pskb_trim_head(struct sk_buff *skb, int len)
++static int __pskb_trim_head(struct sk_buff *skb, int len)
+ {
+ struct skb_shared_info *shinfo;
+ int i, k, eat;
+@@ -1267,7 +1267,7 @@ static void __pskb_trim_head(struct sk_buff *skb, int len)
+ __skb_pull(skb, eat);
+ len -= eat;
+ if (!len)
+- return;
++ return 0;
+ }
+ eat = len;
+ k = 0;
+@@ -1293,23 +1293,28 @@ static void __pskb_trim_head(struct sk_buff *skb, int len)
+ skb_reset_tail_pointer(skb);
+ skb->data_len -= len;
+ skb->len = skb->data_len;
++ return len;
+ }
+
+ /* Remove acked data from a packet in the transmit queue. */
+ int tcp_trim_head(struct sock *sk, struct sk_buff *skb, u32 len)
+ {
++ u32 delta_truesize;
++
+ if (skb_unclone(skb, GFP_ATOMIC))
+ return -ENOMEM;
+
+- __pskb_trim_head(skb, len);
++ delta_truesize = __pskb_trim_head(skb, len);
+
+ TCP_SKB_CB(skb)->seq += len;
+ skb->ip_summed = CHECKSUM_PARTIAL;
+
+- skb->truesize -= len;
+- sk->sk_wmem_queued -= len;
+- sk_mem_uncharge(sk, len);
+- sock_set_flag(sk, SOCK_QUEUE_SHRUNK);
++ if (delta_truesize) {
++ skb->truesize -= delta_truesize;
++ sk->sk_wmem_queued -= delta_truesize;
++ sk_mem_uncharge(sk, delta_truesize);
++ sock_set_flag(sk, SOCK_QUEUE_SHRUNK);
++ }
+
+ /* Any change of skb->len requires recalculation of tso factor. */
+ if (tcp_skb_pcount(skb) > 1)
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index ec76bbee2c35..82a55980e03e 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3288,7 +3288,8 @@ static int fixup_permanent_addr(struct inet6_dev *idev,
+ idev->dev, 0, 0);
+ }
+
+- addrconf_dad_start(ifp);
++ if (ifp->state == INET6_IFADDR_STATE_PREDAD)
++ addrconf_dad_start(ifp);
+
+ return 0;
+ }
+@@ -3507,6 +3508,7 @@ static int addrconf_notify(struct notifier_block *this, unsigned long event,
+ */
+ static struct notifier_block ipv6_dev_notf = {
+ .notifier_call = addrconf_notify,
++ .priority = ADDRCONF_NOTIFY_PRIORITY,
+ };
+
+ static void addrconf_type_change(struct net_device *dev, unsigned long event)
+@@ -3643,7 +3645,7 @@ static int addrconf_ifdown(struct net_device *dev, int how)
+ if (keep) {
+ /* set state to skip the notifier below */
+ state = INET6_IFADDR_STATE_DEAD;
+- ifa->state = 0;
++ ifa->state = INET6_IFADDR_STATE_PREDAD;
+ if (!(ifa->flags & IFA_F_NODAD))
+ ifa->flags |= IFA_F_TENTATIVE;
+
+@@ -6323,6 +6325,8 @@ int __init addrconf_init(void)
+ goto errlo;
+ }
+
++ ip6_route_init_special_entries();
++
+ for (i = 0; i < IN6_ADDR_HSIZE; i++)
+ INIT_HLIST_HEAD(&inet6_addr_lst[i]);
+
+diff --git a/net/ipv6/raw.c b/net/ipv6/raw.c
+index 294fb6f743cb..6006b3281a2e 100644
+--- a/net/ipv6/raw.c
++++ b/net/ipv6/raw.c
+@@ -632,6 +632,8 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length,
+ ipv6_local_error(sk, EMSGSIZE, fl6, rt->dst.dev->mtu);
+ return -EMSGSIZE;
+ }
++ if (length < sizeof(struct ipv6hdr))
++ return -EINVAL;
+ if (flags&MSG_PROBE)
+ goto out;
+
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 523681a5c898..d5c2e35c4e26 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -3495,7 +3495,10 @@ static int ip6_route_dev_notify(struct notifier_block *this,
+ struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+ struct net *net = dev_net(dev);
+
+- if (event == NETDEV_REGISTER && (dev->flags & IFF_LOOPBACK)) {
++ if (!(dev->flags & IFF_LOOPBACK))
++ return NOTIFY_OK;
++
++ if (event == NETDEV_REGISTER) {
+ net->ipv6.ip6_null_entry->dst.dev = dev;
+ net->ipv6.ip6_null_entry->rt6i_idev = in6_dev_get(dev);
+ #ifdef CONFIG_IPV6_MULTIPLE_TABLES
+@@ -3504,6 +3507,12 @@ static int ip6_route_dev_notify(struct notifier_block *this,
+ net->ipv6.ip6_blk_hole_entry->dst.dev = dev;
+ net->ipv6.ip6_blk_hole_entry->rt6i_idev = in6_dev_get(dev);
+ #endif
++ } else if (event == NETDEV_UNREGISTER) {
++ in6_dev_put(net->ipv6.ip6_null_entry->rt6i_idev);
++#ifdef CONFIG_IPV6_MULTIPLE_TABLES
++ in6_dev_put(net->ipv6.ip6_prohibit_entry->rt6i_idev);
++ in6_dev_put(net->ipv6.ip6_blk_hole_entry->rt6i_idev);
++#endif
+ }
+
+ return NOTIFY_OK;
+@@ -3810,9 +3819,24 @@ static struct pernet_operations ip6_route_net_late_ops = {
+
+ static struct notifier_block ip6_route_dev_notifier = {
+ .notifier_call = ip6_route_dev_notify,
+- .priority = 0,
++ .priority = ADDRCONF_NOTIFY_PRIORITY - 10,
+ };
+
++void __init ip6_route_init_special_entries(void)
++{
++ /* Registering of the loopback is done before this portion of code,
++ * the loopback reference in rt6_info will not be taken, do it
++ * manually for init_net */
++ init_net.ipv6.ip6_null_entry->dst.dev = init_net.loopback_dev;
++ init_net.ipv6.ip6_null_entry->rt6i_idev = in6_dev_get(init_net.loopback_dev);
++ #ifdef CONFIG_IPV6_MULTIPLE_TABLES
++ init_net.ipv6.ip6_prohibit_entry->dst.dev = init_net.loopback_dev;
++ init_net.ipv6.ip6_prohibit_entry->rt6i_idev = in6_dev_get(init_net.loopback_dev);
++ init_net.ipv6.ip6_blk_hole_entry->dst.dev = init_net.loopback_dev;
++ init_net.ipv6.ip6_blk_hole_entry->rt6i_idev = in6_dev_get(init_net.loopback_dev);
++ #endif
++}
++
+ int __init ip6_route_init(void)
+ {
+ int ret;
+@@ -3839,17 +3863,6 @@ int __init ip6_route_init(void)
+
+ ip6_dst_blackhole_ops.kmem_cachep = ip6_dst_ops_template.kmem_cachep;
+
+- /* Registering of the loopback is done before this portion of code,
+- * the loopback reference in rt6_info will not be taken, do it
+- * manually for init_net */
+- init_net.ipv6.ip6_null_entry->dst.dev = init_net.loopback_dev;
+- init_net.ipv6.ip6_null_entry->rt6i_idev = in6_dev_get(init_net.loopback_dev);
+- #ifdef CONFIG_IPV6_MULTIPLE_TABLES
+- init_net.ipv6.ip6_prohibit_entry->dst.dev = init_net.loopback_dev;
+- init_net.ipv6.ip6_prohibit_entry->rt6i_idev = in6_dev_get(init_net.loopback_dev);
+- init_net.ipv6.ip6_blk_hole_entry->dst.dev = init_net.loopback_dev;
+- init_net.ipv6.ip6_blk_hole_entry->rt6i_idev = in6_dev_get(init_net.loopback_dev);
+- #endif
+ ret = fib6_init();
+ if (ret)
+ goto out_register_subsys;
+diff --git a/net/openvswitch/vport-internal_dev.c b/net/openvswitch/vport-internal_dev.c
+index d5d6caecd072..695acd2f664c 100644
+--- a/net/openvswitch/vport-internal_dev.c
++++ b/net/openvswitch/vport-internal_dev.c
+@@ -151,6 +151,8 @@ static void do_setup(struct net_device *netdev)
+ {
+ ether_setup(netdev);
+
++ netdev->max_mtu = ETH_MAX_MTU;
++
+ netdev->netdev_ops = &internal_dev_netdev_ops;
+
+ netdev->priv_flags &= ~IFF_TX_SKB_SHARING;
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index bc4462694aaf..5cb7e04fa4ba 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2155,7 +2155,20 @@ static void azx_remove(struct pci_dev *pci)
+ /* cancel the pending probing work */
+ chip = card->private_data;
+ hda = container_of(chip, struct hda_intel, chip);
++ /* FIXME: below is an ugly workaround.
++ * Both device_release_driver() and driver_probe_device()
++ * take *both* the device's and its parent's lock before
++ * calling the remove() and probe() callbacks. The codec
++ * probe takes the locks of both the codec itself and its
++ * parent, i.e. the PCI controller dev. Meanwhile, when
++ * the PCI controller is unbound, it takes its lock, too
++ * ==> ouch, a deadlock!
++ * As a workaround, we unlock temporarily here the controller
++ * device during cancel_work_sync() call.
++ */
++ device_unlock(&pci->dev);
+ cancel_work_sync(&hda->probe_work);
++ device_lock(&pci->dev);
+
+ snd_card_free(card);
+ }
+diff --git a/tools/power/cpupower/utils/helpers/cpuid.c b/tools/power/cpupower/utils/helpers/cpuid.c
+index 93b0aa74ca03..39c2c7d067bb 100644
+--- a/tools/power/cpupower/utils/helpers/cpuid.c
++++ b/tools/power/cpupower/utils/helpers/cpuid.c
+@@ -156,6 +156,7 @@ int get_cpu_info(unsigned int cpu, struct cpupower_cpu_info *cpu_info)
+ */
+ case 0x2C: /* Westmere EP - Gulftown */
+ cpu_info->caps |= CPUPOWER_CAP_HAS_TURBO_RATIO;
++ break;
+ case 0x2A: /* SNB */
+ case 0x2D: /* SNB Xeon */
+ case 0x3A: /* IVB */
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index e1aea9e60f33..35e9f50e40b4 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -1357,16 +1357,22 @@ static struct bpf_test tests[] = {
+ .result = ACCEPT,
+ },
+ {
+- "unpriv: obfuscate stack pointer",
++ "stack pointer arithmetic",
+ .insns = {
+- BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
+- BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
+- BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
++ BPF_MOV64_IMM(BPF_REG_1, 4),
++ BPF_JMP_IMM(BPF_JA, 0, 0, 0),
++ BPF_MOV64_REG(BPF_REG_7, BPF_REG_10),
++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -10),
++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_7, -10),
++ BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
++ BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_1),
++ BPF_ST_MEM(0, BPF_REG_2, 4, 0),
++ BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, 8),
++ BPF_ST_MEM(0, BPF_REG_2, 4, 0),
+ BPF_MOV64_IMM(BPF_REG_0, 0),
+ BPF_EXIT_INSN(),
+ },
+- .errstr_unpriv = "R2 pointer arithmetic",
+- .result_unpriv = REJECT,
+ .result = ACCEPT,
+ },
+ {
+diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
+index 8c1cb423cfe6..fefd95043fd7 100644
+--- a/tools/testing/selftests/x86/Makefile
++++ b/tools/testing/selftests/x86/Makefile
+@@ -5,7 +5,7 @@ include ../lib.mk
+ .PHONY: all all_32 all_64 warn_32bit_failure clean
+
+ TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt ptrace_syscall test_mremap_vdso \
+- check_initial_reg_state sigreturn ldt_gdt iopl \
++ check_initial_reg_state sigreturn ldt_gdt iopl mpx-mini-test \
+ protection_keys test_vdso
+ TARGETS_C_32BIT_ONLY := entry_from_vm86 syscall_arg_fault test_syscall_vdso unwind_vdso \
+ test_FCMOV test_FCOMI test_FISTTP \