* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-01-16 13:35 Mike Pagano
From: Mike Pagano @ 2018-01-16 13:35 UTC
To: gentoo-commits
commit: 4d63e24f02c7d8d91b62045130b2b0e601e84891
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Jan 16 13:35:37 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Jan 16 13:35:37 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4d63e24f
Patch to support the namespace user.pax.* on tmpfs. Patch to enable link security restrictions by default. Workaround to enable poweroff on Mac Pro 11. Patch to add UAS disable quirk. See bug #640082. hid-apple patch to enable swapping of the FN and left Control keys, and some additional keys, on some Apple keyboards. Patch to ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs. Bootsplash patch ported by Conrad Kostecki. Patch to enable control of the unaligned access control policy from sysctl. Add Gentoo Linux support config settings and defaults. Kernel patch that enables gcc >= v4.9 optimizations for additional CPUs.
0000_README | 36 +
1500_XATTR_USER_PREFIX.patch | 69 +
...ble-link-security-restrictions-by-default.patch | 22 +
2300_enable-poweroff-on-Mac-Pro-11.patch | 76 +
...age-Disable-UAS-on-JMicron-SATA-enclosure.patch | 40 +
2600_enable-key-swapping-for-apple-mac.patch | 114 ++
2900_dev-root-proc-mount-fix.patch | 38 +
4200_fbcondecor.patch | 2095 ++++++++++++++++++++
4400_alpha-sysctl-uac.patch | 142 ++
...able-additional-cpu-optimizations-for-gcc.patch | 530 +++++
10 files changed, 3162 insertions(+)
diff --git a/0000_README b/0000_README
index 9018993..01553d4 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,42 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1500_XATTR_USER_PREFIX.patch
+From: https://bugs.gentoo.org/show_bug.cgi?id=470644
+Desc: Support for namespace user.pax.* on tmpfs.
+
+Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
+From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
+Desc: Enable link security restrictions by default.
+
+Patch: 2300_enable-poweroff-on-Mac-Pro-11.patch
+From: http://kernel.ubuntu.com/git/ubuntu/ubuntu-xenial.git/patch/drivers/pci/quirks.c?id=5080ff61a438f3dd80b88b423e1a20791d8a774c
+Desc: Workaround to enable poweroff on Mac Pro 11. See bug #601964.
+
+Patch: 2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
+From: https://bugzilla.redhat.com/show_bug.cgi?id=1260207#c5
+Desc: Add UAS disable quirk. See bug #640082.
+
+Patch: 2600_enable-key-swapping-for-apple-mac.patch
+From: https://github.com/free5lot/hid-apple-patched
+Desc: This hid-apple patch enables swapping of the FN and left Control keys, and some additional keys, on some Apple keyboards. See bug #622902.
+
+Patch: 2900_dev-root-proc-mount-fix.patch
+From: https://bugs.gentoo.org/show_bug.cgi?id=438380
+Desc: Ensure that /dev/root doesn't appear in /proc/mounts when booting without an initramfs.
+
+Patch: 4200_fbcondecor.patch
+From: http://www.mepiscommunity.org/fbcondecor
+Desc: Bootsplash ported by Conrad Kostecki. (Bug #637434)
+
+Patch: 4400_alpha-sysctl-uac.patch
+From: Tobias Klausmann (klausman@gentoo.org) and http://bugs.gentoo.org/show_bug.cgi?id=217323
+Desc: Enable control of the unaligned access control policy from sysctl.
+
Patch: 4567_distro-Gentoo-Kconfig.patch
From: Tom Wijsman <TomWij@gentoo.org>
Desc: Add Gentoo Linux support config settings and defaults.
+
+Patch: 5010_enable-additional-cpu-optimizations-for-gcc.patch
+From: https://github.com/graysky2/kernel_gcc_patch/
+Desc: Kernel patch enables gcc >= v4.9 optimizations for additional CPUs.
diff --git a/1500_XATTR_USER_PREFIX.patch b/1500_XATTR_USER_PREFIX.patch
new file mode 100644
index 0000000..bacd032
--- /dev/null
+++ b/1500_XATTR_USER_PREFIX.patch
@@ -0,0 +1,69 @@
+From: Anthony G. Basile <blueness@gentoo.org>
+
+This patch adds support for a restricted user-controlled namespace on
+tmpfs filesystem used to house PaX flags. The namespace must be of the
+form user.pax.* and its value cannot exceed a size of 8 bytes.
+
+This is needed on all Gentoo systems so that XATTR_PAX flags
+are preserved for users who might build packages using portage on
+a tmpfs system with a non-hardened kernel and then switch to a
+hardened kernel with XATTR_PAX enabled.
+
+The namespace is added to any user with Extended Attribute support
+enabled for tmpfs. Users who do not enable xattrs will not have
+the XATTR_PAX flags preserved.
+
+diff --git a/include/uapi/linux/xattr.h b/include/uapi/linux/xattr.h
+index 1590c49..5eab462 100644
+--- a/include/uapi/linux/xattr.h
++++ b/include/uapi/linux/xattr.h
+@@ -73,5 +73,9 @@
+ #define XATTR_POSIX_ACL_DEFAULT "posix_acl_default"
+ #define XATTR_NAME_POSIX_ACL_DEFAULT XATTR_SYSTEM_PREFIX XATTR_POSIX_ACL_DEFAULT
+
++/* User namespace */
++#define XATTR_PAX_PREFIX XATTR_USER_PREFIX "pax."
++#define XATTR_PAX_FLAGS_SUFFIX "flags"
++#define XATTR_NAME_PAX_FLAGS XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX
+
+ #endif /* _UAPI_LINUX_XATTR_H */
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 440e2a7..c377172 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2667,6 +2667,14 @@ static int shmem_xattr_handler_set(const struct xattr_handler *handler,
+ struct shmem_inode_info *info = SHMEM_I(d_inode(dentry));
+
+ name = xattr_full_name(handler, name);
++
++ if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
++ if (strcmp(name, XATTR_NAME_PAX_FLAGS))
++ return -EOPNOTSUPP;
++ if (size > 8)
++ return -EINVAL;
++ }
++
+ return simple_xattr_set(&info->xattrs, name, value, size, flags);
+ }
+
+@@ -2682,6 +2690,12 @@ static const struct xattr_handler shmem_trusted_xattr_handler = {
+ .set = shmem_xattr_handler_set,
+ };
+
++static const struct xattr_handler shmem_user_xattr_handler = {
++ .prefix = XATTR_USER_PREFIX,
++ .get = shmem_xattr_handler_get,
++ .set = shmem_xattr_handler_set,
++};
++
+ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #ifdef CONFIG_TMPFS_POSIX_ACL
+ &posix_acl_access_xattr_handler,
+@@ -2689,6 +2703,7 @@ static const struct xattr_handler *shmem_xattr_handlers[] = {
+ #endif
+ &shmem_security_xattr_handler,
+ &shmem_trusted_xattr_handler,
++ &shmem_user_xattr_handler,
+ NULL
+ };
+
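For illustration, the name and size checks that the hunk above adds to shmem_xattr_handler_set() can be exercised in a small userspace sketch. The macro values mirror the uapi header shown in the patch (XATTR_USER_PREFIX is "user." in the mainline header); this is a stand-alone model, not kernel code:

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Mirrors include/uapi/linux/xattr.h plus the definitions added above */
#define XATTR_USER_PREFIX      "user."
#define XATTR_USER_PREFIX_LEN  (sizeof(XATTR_USER_PREFIX) - 1)
#define XATTR_PAX_PREFIX       XATTR_USER_PREFIX "pax."
#define XATTR_PAX_FLAGS_SUFFIX "flags"
#define XATTR_NAME_PAX_FLAGS   XATTR_PAX_PREFIX XATTR_PAX_FLAGS_SUFFIX

/* Returns 0 if the set is allowed, or a negative errno, mirroring the
 * checks inserted into shmem_xattr_handler_set(). */
static int pax_xattr_check(const char *name, size_t size)
{
	if (!strncmp(name, XATTR_USER_PREFIX, XATTR_USER_PREFIX_LEN)) {
		if (strcmp(name, XATTR_NAME_PAX_FLAGS))
			return -EOPNOTSUPP;	/* only user.pax.flags is allowed */
		if (size > 8)
			return -EINVAL;		/* value cannot exceed 8 bytes */
	}
	return 0;
}
```

Non-user.* namespaces (trusted.*, security.*) pass through unchanged; only the user.* namespace is restricted to the single user.pax.flags name.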
diff --git a/1510_fs-enable-link-security-restrictions-by-default.patch b/1510_fs-enable-link-security-restrictions-by-default.patch
new file mode 100644
index 0000000..639fb3c
--- /dev/null
+++ b/1510_fs-enable-link-security-restrictions-by-default.patch
@@ -0,0 +1,22 @@
+From: Ben Hutchings <ben@decadent.org.uk>
+Subject: fs: Enable link security restrictions by default
+Date: Fri, 02 Nov 2012 05:32:06 +0000
+Bug-Debian: https://bugs.debian.org/609455
+Forwarded: not-needed
+
+This reverts commit 561ec64ae67ef25cac8d72bb9c4bfc955edfd415
+('VFS: don't do protected {sym,hard}links by default').
+
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -651,8 +651,8 @@ static inline void put_link(struct namei
+ path_put(link);
+ }
+
+-int sysctl_protected_symlinks __read_mostly = 0;
+-int sysctl_protected_hardlinks __read_mostly = 0;
++int sysctl_protected_symlinks __read_mostly = 1;
++int sysctl_protected_hardlinks __read_mostly = 1;
+
+ /**
+ * may_follow_link - Check symlink following for unsafe situations
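The defaults restored above gate the checks in may_follow_link(). A much-simplified userspace model of that policy (hypothetical and condensed from fs/namei.c; the real check also involves credentials and audit logging) might look like:

```c
#include <stdbool.h>

/* Default restored by the patch above */
static int sysctl_protected_symlinks = 1;

/* Deny following a symlink found in a sticky, world-writable directory
 * unless the link owner matches the directory owner or the caller. */
static bool may_follow_link_simplified(unsigned int dir_mode, int dir_uid,
				       int link_uid, int caller_uid)
{
	if (!sysctl_protected_symlinks)
		return true;
	/* restriction applies only in sticky (01000), world-writable (0002) dirs */
	if (!((dir_mode & 01000) && (dir_mode & 0002)))
		return true;
	return link_uid == dir_uid || link_uid == caller_uid;
}
```

With the default flipped back to 1, a symlink planted by another user in /tmp (mode 01777) is not followed, which is the class of attack the restrictions exist to stop.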
diff --git a/2300_enable-poweroff-on-Mac-Pro-11.patch b/2300_enable-poweroff-on-Mac-Pro-11.patch
new file mode 100644
index 0000000..063f2a1
--- /dev/null
+++ b/2300_enable-poweroff-on-Mac-Pro-11.patch
@@ -0,0 +1,76 @@
+From 5080ff61a438f3dd80b88b423e1a20791d8a774c Mon Sep 17 00:00:00 2001
+From: Chen Yu <yu.c.chen@intel.com>
+Date: Fri, 19 Aug 2016 10:25:57 -0700
+Subject: UBUNTU: SAUCE: PCI: Workaround to enable poweroff on Mac Pro 11
+
+BugLink: http://bugs.launchpad.net/bugs/1587714
+
+People reported that they can neither power off nor suspend to
+ram their Mac Pro 11. After some investigation it was found that,
+once the PCI bridge 0000:00:1c.0 reassigns its mm windows to
+([mem 0x7fa00000-0x7fbfffff] and
+[mem 0x7fc00000-0x7fdfffff 64bit pref]), the region of ACPI
+io resource 0x1804, where the ACPI Sleep register is located,
+immediately becomes inaccessible; as a result neither poweroff (S5)
+nor suspend to ram (S3) works.
+
+As suggested by Bjorn, further testing shows that an unreported
+device may conflict with the above aperture, which brings
+unpredictable results such as failure to access the io port,
+which blocks poweroff (S5). Moreover, if we reassign the memory
+aperture elsewhere, poweroff works again.
+
+As we do not find any resource declared in _CRS which contains the
+above memory aperture, and Mac OS does not use this pci bridge
+either, we choose a simple workaround: clear the hotplug flag
+(suggested by Yinghai Lu), thus no resource is allocated for this
+pci bridge, and thereby there is no conflict anymore.
+
+Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=103211
+Cc: Bjorn Helgaas <bhelgaas@google.com>
+Cc: Rafael J. Wysocki <rafael@kernel.org>
+Cc: Lukas Wunner <lukas@wunner.de>
+Signed-off-by: Chen Yu <yu.c.chen@intel.com>
+Reference: https://patchwork.kernel.org/patch/9289777/
+Signed-off-by: Kamal Mostafa <kamal@canonical.com>
+Acked-by: Brad Figg <brad.figg@canonical.com>
+Acked-by: Stefan Bader <stefan.bader@canonical.com>
+Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
+---
+ drivers/pci/quirks.c | 20 ++++++++++++++++++++
+ 1 file changed, 20 insertions(+)
+
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 48cfaa0..23968b6 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -2750,6 +2750,26 @@ static void quirk_hotplug_bridge(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_HINT, 0x0020, quirk_hotplug_bridge);
+
+ /*
++ * Apple: Avoid programming the memory/io aperture of 00:1c.0
++ *
++ * BIOS does not declare any resource for 00:1c.0, but with
++ * hotplug flag set, thus the OS allocates:
++ * [mem 0x7fa00000 - 0x7fbfffff]
++ * [mem 0x7fc00000-0x7fdfffff 64bit pref]
++ * which conflicts with an unreported device and
++ * causes unpredictable results such as failing io port accesses.
++ * So clear the hotplug flag to work around it.
++ */
++static void quirk_apple_mbp_poweroff(struct pci_dev *dev)
++{
++ if (dmi_match(DMI_PRODUCT_NAME, "MacBookPro11,4") ||
++ dmi_match(DMI_PRODUCT_NAME, "MacBookPro11,5"))
++ dev->is_hotplug_bridge = 0;
++}
++
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x8c10, quirk_apple_mbp_poweroff);
++
++/*
+ * This is a quirk for the Ricoh MMC controller found as a part of
+ * some mulifunction chips.
+
+--
+cgit v0.11.2
+
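The quirk's decision is a plain DMI product-name match. A userspace stand-in for dmi_match() (hypothetical helper name; the kernel version consults the DMI tables) behaves like:

```c
#include <stdbool.h>
#include <string.h>

/* Mirrors the model check in quirk_apple_mbp_poweroff(): the hotplug
 * flag is cleared only on the two affected MacBook Pro models. */
static bool quirk_clears_hotplug(const char *dmi_product_name)
{
	return !strcmp(dmi_product_name, "MacBookPro11,4") ||
	       !strcmp(dmi_product_name, "MacBookPro11,5");
}
```

All other machines with the same Intel 0x8c10 bridge keep their hotplug flag and resource allocation unchanged.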
diff --git a/2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch b/2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
new file mode 100644
index 0000000..0dd93ef
--- /dev/null
+++ b/2500_usb-storage-Disable-UAS-on-JMicron-SATA-enclosure.patch
@@ -0,0 +1,40 @@
+From d02a55182307c01136b599fd048b4679f259a84e Mon Sep 17 00:00:00 2001
+From: Laura Abbott <labbott@fedoraproject.org>
+Date: Tue, 8 Sep 2015 09:53:38 -0700
+Subject: [PATCH] usb-storage: Disable UAS on JMicron SATA enclosure
+
+Steve Ellis reported incorrect block sizes and alignment
+offsets with a SATA enclosure. Adding a quirk to disable
+UAS fixes the problems.
+
+Reported-by: Steven Ellis <sellis@redhat.com>
+Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
+---
+ drivers/usb/storage/unusual_uas.h | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index c85ea53..216d93d 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -141,12 +141,15 @@ UNUSUAL_DEV(0x2109, 0x0711, 0x0000, 0x9999,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_NO_ATA_1X),
+
+-/* Reported-by: Takeo Nakayama <javhera@gmx.com> */
++/*
++ * Initially Reported-by: Takeo Nakayama <javhera@gmx.com>
++ * UAS Ignore Reported by Steven Ellis <sellis@redhat.com>
++ */
+ UNUSUAL_DEV(0x357d, 0x7788, 0x0000, 0x9999,
+ "JMicron",
+ "JMS566",
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+- US_FL_NO_REPORT_OPCODES),
++ US_FL_NO_REPORT_OPCODES | US_FL_IGNORE_UAS),
+
+ /* Reported-by: Hans de Goede <hdegoede@redhat.com> */
+ UNUSUAL_DEV(0x4971, 0x1012, 0x0000, 0x9999,
+--
+2.4.3
+
diff --git a/2600_enable-key-swapping-for-apple-mac.patch b/2600_enable-key-swapping-for-apple-mac.patch
new file mode 100644
index 0000000..ab228d3
--- /dev/null
+++ b/2600_enable-key-swapping-for-apple-mac.patch
@@ -0,0 +1,114 @@
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -52,6 +52,22 @@
+ "(For people who want to keep Windows PC keyboard muscle memory. "
+ "[0] = as-is, Mac layout. 1 = swapped, Windows layout.)");
+
++static unsigned int swap_fn_leftctrl;
++module_param(swap_fn_leftctrl, uint, 0644);
++MODULE_PARM_DESC(swap_fn_leftctrl, "Swap the Fn and left Control keys. "
++ "(For people who want to keep PC keyboard muscle memory. "
++ "[0] = as-is, Mac layout, 1 = swapped, PC layout)");
++
++static unsigned int rightalt_as_rightctrl;
++module_param(rightalt_as_rightctrl, uint, 0644);
++MODULE_PARM_DESC(rightalt_as_rightctrl, "Use the right Alt key as a right Ctrl key. "
++ "[0] = as-is, Mac layout. 1 = Right Alt is right Ctrl");
++
++static unsigned int ejectcd_as_delete;
++module_param(ejectcd_as_delete, uint, 0644);
++MODULE_PARM_DESC(ejectcd_as_delete, "Use Eject-CD key as Delete key. "
++ "([0] = disabled, 1 = enabled)");
++
+ struct apple_sc {
+ unsigned long quirks;
+ unsigned int fn_on;
+@@ -164,6 +180,21 @@
+ { }
+ };
+
++static const struct apple_key_translation swapped_fn_leftctrl_keys[] = {
++ { KEY_FN, KEY_LEFTCTRL },
++ { }
++};
++
++static const struct apple_key_translation rightalt_as_rightctrl_keys[] = {
++ { KEY_RIGHTALT, KEY_RIGHTCTRL },
++ { }
++};
++
++static const struct apple_key_translation ejectcd_as_delete_keys[] = {
++ { KEY_EJECTCD, KEY_DELETE },
++ { }
++};
++
+ static const struct apple_key_translation *apple_find_translation(
+ const struct apple_key_translation *table, u16 from)
+ {
+@@ -183,9 +214,11 @@
+ struct apple_sc *asc = hid_get_drvdata(hid);
+ const struct apple_key_translation *trans, *table;
+
+- if (usage->code == KEY_FN) {
++ u16 fn_keycode = (swap_fn_leftctrl) ? (KEY_LEFTCTRL) : (KEY_FN);
++
++ if (usage->code == fn_keycode) {
+ asc->fn_on = !!value;
+- input_event(input, usage->type, usage->code, value);
++ input_event(input, usage->type, KEY_FN, value);
+ return 1;
+ }
+
+@@ -264,6 +297,30 @@
+ }
+ }
+
++ if (swap_fn_leftctrl) {
++ trans = apple_find_translation(swapped_fn_leftctrl_keys, usage->code);
++ if (trans) {
++ input_event(input, usage->type, trans->to, value);
++ return 1;
++ }
++ }
++
++ if (ejectcd_as_delete) {
++ trans = apple_find_translation(ejectcd_as_delete_keys, usage->code);
++ if (trans) {
++ input_event(input, usage->type, trans->to, value);
++ return 1;
++ }
++ }
++
++ if (rightalt_as_rightctrl) {
++ trans = apple_find_translation(rightalt_as_rightctrl_keys, usage->code);
++ if (trans) {
++ input_event(input, usage->type, trans->to, value);
++ return 1;
++ }
++ }
++
+ return 0;
+ }
+
+@@ -327,6 +384,21 @@
+
+ for (trans = apple_iso_keyboard; trans->from; trans++)
+ set_bit(trans->to, input->keybit);
++
++ if (swap_fn_leftctrl) {
++ for (trans = swapped_fn_leftctrl_keys; trans->from; trans++)
++ set_bit(trans->to, input->keybit);
++ }
++
++ if (ejectcd_as_delete) {
++ for (trans = ejectcd_as_delete_keys; trans->from; trans++)
++ set_bit(trans->to, input->keybit);
++ }
++
++ if (rightalt_as_rightctrl) {
++ for (trans = rightalt_as_rightctrl_keys; trans->from; trans++)
++ set_bit(trans->to, input->keybit);
++ }
+ }
+
+ static int apple_input_mapping(struct hid_device *hdev, struct hid_input *hi,
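The key-swapping options above all rely on zero-terminated translation tables searched by apple_find_translation(). A self-contained sketch of that lookup (key-code values taken from linux/input-event-codes.h; struct layout simplified relative to hid-apple.c):

```c
#include <stddef.h>

struct apple_key_translation {
	unsigned short from;
	unsigned short to;
};

#define KEY_LEFTCTRL 29		/* from linux/input-event-codes.h */
#define KEY_FN       0x1d0

static const struct apple_key_translation swapped_fn_leftctrl_keys[] = {
	{ KEY_FN, KEY_LEFTCTRL },
	{ }			/* zero 'from' entry terminates the table */
};

/* Linear scan until the zero terminator, as in hid-apple.c */
static const struct apple_key_translation *
find_translation(const struct apple_key_translation *table,
		 unsigned short from)
{
	const struct apple_key_translation *trans;

	for (trans = table; trans->from; trans++)
		if (trans->from == from)
			return trans;
	return NULL;
}
```

In the event hook, a hit in the table causes the translated code to be reported instead of the original, and apple_setup_input() pre-registers every 'to' code in the input device's keybit map so the remapped keys are advertised.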
diff --git a/2900_dev-root-proc-mount-fix.patch b/2900_dev-root-proc-mount-fix.patch
new file mode 100644
index 0000000..60af1eb
--- /dev/null
+++ b/2900_dev-root-proc-mount-fix.patch
@@ -0,0 +1,38 @@
+--- a/init/do_mounts.c 2015-08-19 10:27:16.753852576 -0400
++++ b/init/do_mounts.c 2015-08-19 10:34:25.473850353 -0400
+@@ -490,7 +490,11 @@ void __init change_floppy(char *fmt, ...
+ va_start(args, fmt);
+ vsprintf(buf, fmt, args);
+ va_end(args);
+- fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
++ if (saved_root_name[0])
++ fd = sys_open(saved_root_name, O_RDWR | O_NDELAY, 0);
++ else
++ fd = sys_open("/dev/root", O_RDWR | O_NDELAY, 0);
++
+ if (fd >= 0) {
+ sys_ioctl(fd, FDEJECT, 0);
+ sys_close(fd);
+@@ -534,11 +538,17 @@ void __init mount_root(void)
+ #endif
+ #ifdef CONFIG_BLOCK
+ {
+- int err = create_dev("/dev/root", ROOT_DEV);
+-
+- if (err < 0)
+- pr_emerg("Failed to create /dev/root: %d\n", err);
+- mount_block_root("/dev/root", root_mountflags);
++ if (saved_root_name[0] == '/') {
++ int err = create_dev(saved_root_name, ROOT_DEV);
++ if (err < 0)
++ pr_emerg("Failed to create %s: %d\n", saved_root_name, err);
++ mount_block_root(saved_root_name, root_mountflags);
++ } else {
++ int err = create_dev("/dev/root", ROOT_DEV);
++ if (err < 0)
++ pr_emerg("Failed to create /dev/root: %d\n", err);
++ mount_block_root("/dev/root", root_mountflags);
++ }
+ }
+ #endif
+ }
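The change above boils down to a device-name choice in mount_root(): prefer the name passed via root= when it is an absolute path, otherwise keep the traditional /dev/root. As a sketch (hypothetical helper, not the kernel function itself):

```c
#include <string.h>

/* Mirrors the saved_root_name[0] == '/' test added to mount_root():
 * an absolute root= name is used directly, so /dev/root never appears
 * in /proc/mounts; anything else falls back to /dev/root. */
static const char *root_dev_name(const char *saved_root_name)
{
	if (saved_root_name && saved_root_name[0] == '/')
		return saved_root_name;
	return "/dev/root";
}
```

Note that non-path specifiers such as LABEL= or a bare major:minor still take the /dev/root fallback, since only names starting with '/' are used verbatim.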
diff --git a/4200_fbcondecor.patch b/4200_fbcondecor.patch
new file mode 100644
index 0000000..7151d0f
--- /dev/null
+++ b/4200_fbcondecor.patch
@@ -0,0 +1,2095 @@
+diff --git a/Documentation/fb/00-INDEX b/Documentation/fb/00-INDEX
+index fe85e7c5907a..22309308ba56 100644
+--- a/Documentation/fb/00-INDEX
++++ b/Documentation/fb/00-INDEX
+@@ -23,6 +23,8 @@ ep93xx-fb.txt
+ - info on the driver for EP93xx LCD controller.
+ fbcon.txt
+ - intro to and usage guide for the framebuffer console (fbcon).
++fbcondecor.txt
++ - info on the Framebuffer Console Decoration
+ framebuffer.txt
+ - introduction to frame buffer devices.
+ gxfb.txt
+diff --git a/Documentation/fb/fbcondecor.txt b/Documentation/fb/fbcondecor.txt
+new file mode 100644
+index 000000000000..637209e11ccd
+--- /dev/null
++++ b/Documentation/fb/fbcondecor.txt
+@@ -0,0 +1,207 @@
++What is it?
++-----------
++
++The framebuffer decorations are a kernel feature which allows displaying a
++background picture on selected consoles.
++
++What do I need to get it to work?
++---------------------------------
++
++To get fbcondecor up-and-running you will have to:
++ 1) get a copy of splashutils [1] or a similar program
++ 2) get some fbcondecor themes
++ 3) build the kernel helper program
++ 4) build your kernel with the FB_CON_DECOR option enabled.
++
++To get fbcondecor operational right after fbcon initialization is finished, you
++will have to include a theme and the kernel helper into your initramfs image.
++Please refer to splashutils documentation for instructions on how to do that.
++
++[1] The splashutils package can be downloaded from:
++ http://github.com/alanhaggai/fbsplash
++
++The userspace helper
++--------------------
++
++The userspace fbcondecor helper (by default: /sbin/fbcondecor_helper) is called by the
++kernel whenever an important event occurs and the kernel needs some kind of
++job to be carried out. Important events include console switches and video
++mode switches (the kernel requests background images and configuration
++parameters for the current console). The fbcondecor helper must be accessible at
++all times. If it's not, fbcondecor will be switched off automatically.
++
++It's possible to set the path to the fbcondecor helper by writing it to
++/proc/sys/kernel/fbcondecor.
++
++*****************************************************************************
++
++The information below is mostly technical stuff. There's probably no need to
++read it unless you plan to develop a userspace helper.
++
++The fbcondecor protocol
++-----------------------
++
++The fbcondecor protocol defines a communication interface between the kernel and
++the userspace fbcondecor helper.
++
++The kernel side is responsible for:
++
++ * rendering console text, using an image as a background (instead of a
++ standard solid color fbcon uses),
++ * accepting commands from the user via ioctls on the fbcondecor device,
++ * calling the userspace helper to set things up as soon as the fb subsystem
++ is initialized.
++
++The userspace helper is responsible for everything else, including parsing
++configuration files, decompressing the image files whenever the kernel needs
++it, and communicating with the kernel if necessary.
++
++The fbcondecor protocol specifies how communication is done in both ways:
++kernel->userspace and userspace->kernel.
++
++Kernel -> Userspace
++-------------------
++
++The kernel communicates with the userspace helper by calling it and specifying
++the task to be done in a series of arguments.
++
++The arguments follow the pattern:
++<fbcondecor protocol version> <command> <parameters>
++
++All commands defined in fbcondecor protocol v2 have the following parameters:
++ virtual console
++ framebuffer number
++ theme
++
++Fbcondecor protocol v1 specified an additional 'fbcondecor mode' after the
++framebuffer number. Fbcondecor protocol v1 is deprecated and should not be used.
++
++Fbcondecor protocol v2 specifies the following commands:
++
++getpic
++------
++ The kernel issues this command to request image data. It's up to the
++ userspace helper to find a background image appropriate for the specified
++ theme and the current resolution. The userspace helper should respond by
++ issuing the FBIOCONDECOR_SETPIC ioctl.
++
++init
++----
++ The kernel issues this command after the fbcondecor device is created and
++ the fbcondecor interface is initialized. Upon receiving 'init', the userspace
++ helper should parse the kernel command line (/proc/cmdline) or otherwise
++ decide whether fbcondecor is to be activated.
++
++ To activate fbcondecor on the first console the helper should issue the
++ FBIOCONDECOR_SETCFG, FBIOCONDECOR_SETPIC and FBIOCONDECOR_SETSTATE commands,
++ in the above-mentioned order.
++
++ When the userspace helper is called in an early phase of the boot process
++ (right after the initialization of fbcon), no filesystems will be mounted.
++ The helper program should mount sysfs and then create the appropriate
++ framebuffer, fbcondecor and tty0 devices (if they don't already exist) to get
++ current display settings and to be able to communicate with the kernel side.
++ It should probably also mount the procfs to be able to parse the kernel
++ command line parameters.
++
++ Note that the console sem is not held when the kernel calls fbcondecor_helper
++ with the 'init' command. The fbcondecor helper should perform all ioctls with
++ origin set to FBCON_DECOR_IO_ORIG_USER.
++
++modechange
++----------
++ The kernel issues this command on a mode change. The helper's response should
++ be similar to the response to the 'init' command. Note that this time the
++ console sem is held and all ioctls must be performed with origin set to
++ FBCON_DECOR_IO_ORIG_KERNEL.
++
++
++Userspace -> Kernel
++-------------------
++
++Userspace programs can communicate with fbcondecor via ioctls on the
++fbcondecor device. These ioctls are to be used by both the userspace helper
++(called only by the kernel) and userspace configuration tools (run by the users).
++
++The fbcondecor helper should set the origin field to FBCON_DECOR_IO_ORIG_KERNEL
++when doing the appropriate ioctls. All userspace configuration tools should
++use FBCON_DECOR_IO_ORIG_USER. Failure to set the appropriate value in the origin
++field when performing ioctls from the kernel helper will most likely result
++in a console deadlock.
++
++FBCON_DECOR_IO_ORIG_KERNEL instructs fbcondecor not to try to acquire the console
++semaphore. Not surprisingly, FBCON_DECOR_IO_ORIG_USER instructs it to acquire
++the console sem.
++
++The framebuffer console decoration provides the following ioctls (all defined in
++linux/fb.h):
++
++FBIOCONDECOR_SETPIC
++description: loads a background picture for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct fb_image*
++notes:
++If called for consoles other than the current foreground one, the picture data
++will be ignored.
++
++If the current virtual console is running in an 8-bpp mode, the cmap substruct
++of fb_image has to be filled appropriately: start should be set to 16 (first
++16 colors are reserved for fbcon), len to a value <= 240 and red, green and
++blue should point to valid cmap data. The transp field is ignored. The fields
++dx, dy, bg_color, fg_color in fb_image are ignored as well.
++
++FBIOCONDECOR_SETCFG
++description: sets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++notes: The structure has to be filled with valid data.
++
++FBIOCONDECOR_GETCFG
++description: gets the fbcondecor config for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: struct vc_decor*
++
++FBIOCONDECOR_SETSTATE
++description: sets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++ values: 0 = disabled, 1 = enabled.
++
++FBIOCONDECOR_GETSTATE
++description: gets the fbcondecor state for a virtual console
++argument: struct fbcon_decor_iowrapper*; data: unsigned int*
++ values: as in FBIOCONDECOR_SETSTATE
++
++Info on used structures:
++
++Definition of struct vc_decor can be found in linux/console_decor.h. It's
++heavily commented. Note that the 'theme' field should point to a string
++no longer than FBCON_DECOR_THEME_LEN. When FBIOCONDECOR_GETCFG call is
++performed, the theme field should point to a char buffer of length
++FBCON_DECOR_THEME_LEN.
++
++Definition of struct fbcon_decor_iowrapper can be found in linux/fb.h.
++The fields in this struct have the following meaning:
++
++vc:
++Virtual console number.
++
++origin:
++Specifies if the ioctl is performed as a response to a kernel request. The
++fbcondecor helper should set this field to FBCON_DECOR_IO_ORIG_KERNEL, userspace
++programs should set it to FBCON_DECOR_IO_ORIG_USER. This field is necessary to
++avoid console semaphore deadlocks.
++
++data:
++Pointer to a data structure appropriate for the performed ioctl. Type of
++the data struct is specified in the ioctls description.
++
++*****************************************************************************
++
++Credit
++------
++
++Original 'bootsplash' project & implementation by:
++ Volker Poplawski <volker@poplawski.de>, Stefan Reinauer <stepan@suse.de>,
++ Steffen Winterfeldt <snwint@suse.de>, Michael Schroeder <mls@suse.de>,
++ Ken Wimer <wimer@suse.de>.
++
++Fbcondecor, fbcondecor protocol design, current implementation & docs by:
++ Michal Januszewski <michalj+fbcondecor@gmail.com>
++
+diff --git a/drivers/Makefile b/drivers/Makefile
+index 1d034b680431..9f41f2ea0c8b 100644
+--- a/drivers/Makefile
++++ b/drivers/Makefile
+@@ -23,6 +23,10 @@ obj-y += pci/dwc/
+
+ obj-$(CONFIG_PARISC) += parisc/
+ obj-$(CONFIG_RAPIDIO) += rapidio/
++# tty/ comes before char/ so that the VT console is the boot-time
++# default.
++obj-y += tty/
++obj-y += char/
+ obj-y += video/
+ obj-y += idle/
+
+@@ -53,11 +57,6 @@ obj-$(CONFIG_REGULATOR) += regulator/
+ # reset controllers early, since gpu drivers might rely on them to initialize
+ obj-$(CONFIG_RESET_CONTROLLER) += reset/
+
+-# tty/ comes before char/ so that the VT console is the boot-time
+-# default.
+-obj-y += tty/
+-obj-y += char/
+-
+ # iommu/ comes before gpu as gpu are using iommu controllers
+ obj-$(CONFIG_IOMMU_SUPPORT) += iommu/
+
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index 7f1f1fbcef9e..8439b618dfc0 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -151,6 +151,19 @@ config FRAMEBUFFER_CONSOLE_ROTATION
+ such that other users of the framebuffer will remain normally
+ oriented.
+
++config FB_CON_DECOR
++ bool "Support for the Framebuffer Console Decorations"
++ depends on FRAMEBUFFER_CONSOLE=y && !FB_TILEBLITTING
++ default n
++ ---help---
++ This option enables support for framebuffer console decorations which
++ makes it possible to display images in the background of the system
++ consoles. Note that userspace utilities are necessary in order to take
++ advantage of these features. Refer to Documentation/fb/fbcondecor.txt
++ for more information.
++
++ If unsure, say N.
++
+ config STI_CONSOLE
+ bool "STI text console"
+ depends on PARISC
+diff --git a/drivers/video/console/Makefile b/drivers/video/console/Makefile
+index db07b784bd2c..3e369bd120b8 100644
+--- a/drivers/video/console/Makefile
++++ b/drivers/video/console/Makefile
+@@ -9,4 +9,5 @@ obj-$(CONFIG_STI_CONSOLE) += sticon.o sticore.o
+ obj-$(CONFIG_VGA_CONSOLE) += vgacon.o
+ obj-$(CONFIG_MDA_CONSOLE) += mdacon.o
+
++obj-$(CONFIG_FB_CON_DECOR) += fbcondecor.o cfbcondecor.o
+ obj-$(CONFIG_FB_STI) += sticore.o
+diff --git a/drivers/video/console/cfbcondecor.c b/drivers/video/console/cfbcondecor.c
+new file mode 100644
+index 000000000000..b00960803edc
+--- /dev/null
++++ b/drivers/video/console/cfbcondecor.c
+@@ -0,0 +1,473 @@
++/*
++ * linux/drivers/video/cfbcon_decor.c -- Framebuffer decor render functions
++ *
++ * Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ * Code based upon "Bootdecor" (C) 2001-2003
++ * Volker Poplawski <volker@poplawski.de>,
++ * Stefan Reinauer <stepan@suse.de>,
++ * Steffen Winterfeldt <snwint@suse.de>,
++ * Michael Schroeder <mls@suse.de>,
++ * Ken Wimer <wimer@suse.de>.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file COPYING in the main directory of this archive for
++ * more details.
++ */
++#include <linux/module.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/selection.h>
++#include <linux/slab.h>
++#include <linux/vt_kern.h>
++#include <asm/irq.h>
++
++#include "../fbdev/core/fbcon.h"
++#include "fbcondecor.h"
++
++#define parse_pixel(shift, bpp, type) \
++ do { \
++ if (d & (0x80 >> (shift))) \
++ dd2[(shift)] = fgx; \
++ else \
++ dd2[(shift)] = transparent ? *(type *)decor_src : bgx; \
++ decor_src += (bpp); \
++ } while (0) \
++
++extern int get_color(struct vc_data *vc, struct fb_info *info,
++ u16 c, int is_fg);
++
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc)
++{
++ int i, j, k;
++ int minlen = min(min(info->var.red.length, info->var.green.length),
++ info->var.blue.length);
++ u32 col;
++
++ for (j = i = 0; i < 16; i++) {
++ k = color_table[i];
++
++ col = ((vc->vc_palette[j++] >> (8-minlen))
++ << info->var.red.offset);
++ col |= ((vc->vc_palette[j++] >> (8-minlen))
++ << info->var.green.offset);
++ col |= ((vc->vc_palette[j++] >> (8-minlen))
++ << info->var.blue.offset);
++ ((u32 *)info->pseudo_palette)[k] = col;
++ }
++}
++
++void fbcon_decor_renderc(struct fb_info *info, int ypos, int xpos, int height,
++ int width, u8 *src, u32 fgx, u32 bgx, u8 transparent)
++{
++ unsigned int x, y;
++ u32 dd;
++ int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++ unsigned int d = ypos * info->fix.line_length + xpos * bytespp;
++ unsigned int ds = (ypos * info->var.xres + xpos) * bytespp;
++ u16 dd2[4];
++
++ u8 *decor_src = (u8 *)(info->bgdecor.data + ds);
++ u8 *dst = (u8 *)(info->screen_base + d);
++
++ if ((ypos + height) > info->var.yres || (xpos + width) > info->var.xres)
++ return;
++
++ for (y = 0; y < height; y++) {
++ switch (info->var.bits_per_pixel) {
++
++ case 32:
++ for (x = 0; x < width; x++) {
++
++ if ((x & 7) == 0)
++ d = *src++;
++ if (d & 0x80)
++ dd = fgx;
++ else
++ dd = transparent ?
++ *(u32 *)decor_src : bgx;
++
++ d <<= 1;
++ decor_src += 4;
++ fb_writel(dd, dst);
++ dst += 4;
++ }
++ break;
++ case 24:
++ for (x = 0; x < width; x++) {
++
++ if ((x & 7) == 0)
++ d = *src++;
++ if (d & 0x80)
++ dd = fgx;
++ else
++ dd = transparent ?
++ (*(u32 *)decor_src & 0xffffff) : bgx;
++
++ d <<= 1;
++ decor_src += 3;
++#ifdef __LITTLE_ENDIAN
++ fb_writew(dd & 0xffff, dst);
++ dst += 2;
++ fb_writeb((dd >> 16), dst);
++#else
++ fb_writew(dd >> 8, dst);
++ dst += 2;
++ fb_writeb(dd & 0xff, dst);
++#endif
++ dst++;
++ }
++ break;
++ case 16:
++ for (x = 0; x < width; x += 2) {
++ if ((x & 7) == 0)
++ d = *src++;
++
++ parse_pixel(0, 2, u16);
++ parse_pixel(1, 2, u16);
++#ifdef __LITTLE_ENDIAN
++ dd = dd2[0] | (dd2[1] << 16);
++#else
++ dd = dd2[1] | (dd2[0] << 16);
++#endif
++ d <<= 2;
++ fb_writel(dd, dst);
++ dst += 4;
++ }
++ break;
++
++ case 8:
++ for (x = 0; x < width; x += 4) {
++ if ((x & 7) == 0)
++ d = *src++;
++
++ parse_pixel(0, 1, u8);
++ parse_pixel(1, 1, u8);
++ parse_pixel(2, 1, u8);
++ parse_pixel(3, 1, u8);
++
++#ifdef __LITTLE_ENDIAN
++ dd = dd2[0] | (dd2[1] << 8) | (dd2[2] << 16) | (dd2[3] << 24);
++#else
++ dd = dd2[3] | (dd2[2] << 8) | (dd2[1] << 16) | (dd2[0] << 24);
++#endif
++ d <<= 4;
++ fb_writel(dd, dst);
++ dst += 4;
++ }
++ }
++
++ dst += info->fix.line_length - width * bytespp;
++ decor_src += (info->var.xres - width) * bytespp;
++ }
++}
++
++#define cc2cx(a) \
++ ((info->fix.visual == FB_VISUAL_TRUECOLOR || \
++ info->fix.visual == FB_VISUAL_DIRECTCOLOR) ? \
++ ((u32 *)info->pseudo_palette)[a] : a)
++
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info,
++ const unsigned short *s, int count, int yy, int xx)
++{
++ unsigned short charmask = vc->vc_hi_font_mask ? 0x1ff : 0xff;
++ struct fbcon_ops *ops = info->fbcon_par;
++ int fg_color, bg_color, transparent;
++ u8 *src;
++ u32 bgx, fgx;
++ u16 c = scr_readw(s);
++
++ fg_color = get_color(vc, info, c, 1);
++ bg_color = get_color(vc, info, c, 0);
++
++ /* Don't paint the background image if console is blanked */
++ transparent = ops->blank_state ? 0 :
++ (vc->vc_decor.bg_color == bg_color);
++
++ xx = xx * vc->vc_font.width + vc->vc_decor.tx;
++ yy = yy * vc->vc_font.height + vc->vc_decor.ty;
++
++ fgx = cc2cx(fg_color);
++ bgx = cc2cx(bg_color);
++
++ while (count--) {
++ c = scr_readw(s++);
++ src = vc->vc_font.data + (c & charmask) * vc->vc_font.height *
++ ((vc->vc_font.width + 7) >> 3);
++
++ fbcon_decor_renderc(info, yy, xx, vc->vc_font.height,
++ vc->vc_font.width, src, fgx, bgx, transparent);
++ xx += vc->vc_font.width;
++ }
++}
++
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor)
++{
++ int i;
++ unsigned int dsize, s_pitch;
++ struct fbcon_ops *ops = info->fbcon_par;
++ struct vc_data *vc;
++ u8 *src;
++
++ /* we really don't need any cursors while the console is blanked */
++ if (info->state != FBINFO_STATE_RUNNING || ops->blank_state)
++ return;
++
++ vc = vc_cons[ops->currcon].d;
++
++ src = kmalloc(64 + sizeof(struct fb_image), GFP_ATOMIC);
++ if (!src)
++ return;
++
++ s_pitch = (cursor->image.width + 7) >> 3;
++ dsize = s_pitch * cursor->image.height;
++ if (cursor->enable) {
++ switch (cursor->rop) {
++ case ROP_XOR:
++ for (i = 0; i < dsize; i++)
++ src[i] = cursor->image.data[i] ^ cursor->mask[i];
++ break;
++ case ROP_COPY:
++ default:
++ for (i = 0; i < dsize; i++)
++ src[i] = cursor->image.data[i] & cursor->mask[i];
++ break;
++ }
++ } else
++ memcpy(src, cursor->image.data, dsize);
++
++ fbcon_decor_renderc(info,
++ cursor->image.dy + vc->vc_decor.ty,
++ cursor->image.dx + vc->vc_decor.tx,
++ cursor->image.height,
++ cursor->image.width,
++ (u8 *)src,
++ cc2cx(cursor->image.fg_color),
++ cc2cx(cursor->image.bg_color),
++ cursor->image.bg_color == vc->vc_decor.bg_color);
++
++ kfree(src);
++}
++
++static void decorset(u8 *dst, int height, int width, int dstbytes,
++ u32 bgx, int bpp)
++{
++ int i;
++
++ if (bpp == 8)
++ bgx |= bgx << 8;
++ if (bpp == 16 || bpp == 8)
++ bgx |= bgx << 16;
++
++ while (height-- > 0) {
++ u8 *p = dst;
++
++ switch (bpp) {
++
++ case 32:
++ for (i = 0; i < width; i++) {
++ fb_writel(bgx, p); p += 4;
++ }
++ break;
++ case 24:
++ for (i = 0; i < width; i++) {
++#ifdef __LITTLE_ENDIAN
++ fb_writew((bgx & 0xffff), (u16 *)p); p += 2;
++ fb_writeb((bgx >> 16), p++);
++#else
++ fb_writew((bgx >> 8), (u16 *)p); p += 2;
++ fb_writeb((bgx & 0xff), p++);
++#endif
++ }
++ break;
++ case 16:
++ for (i = 0; i < width/4; i++) {
++ fb_writel(bgx, p); p += 4;
++ fb_writel(bgx, p); p += 4;
++ }
++ if (width & 2) {
++ fb_writel(bgx, p); p += 4;
++ }
++ if (width & 1)
++ fb_writew(bgx, (u16 *)p);
++ break;
++ case 8:
++ for (i = 0; i < width/4; i++) {
++ fb_writel(bgx, p); p += 4;
++ }
++
++ if (width & 2) {
++ fb_writew(bgx, p); p += 2;
++ }
++ if (width & 1)
++ fb_writeb(bgx, (u8 *)p);
++ break;
++
++ }
++ dst += dstbytes;
++ }
++}
++
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes,
++ int srclinebytes, int bpp)
++{
++ int i;
++
++ while (height-- > 0) {
++ u32 *p = (u32 *)dst;
++ u32 *q = (u32 *)src;
++
++ switch (bpp) {
++
++ case 32:
++ for (i = 0; i < width; i++)
++ fb_writel(*q++, p++);
++ break;
++ case 24:
++ for (i = 0; i < (width * 3 / 4); i++)
++ fb_writel(*q++, p++);
++ if ((width * 3) % 4) {
++ if (width & 2) {
++ fb_writeb(*(u8 *)q, (u8 *)p);
++ } else if (width & 1) {
++ fb_writew(*(u16 *)q, (u16 *)p);
++ fb_writeb(*(u8 *)((u16 *)q + 1),
++ (u8 *)((u16 *)p + 2));
++ }
++ }
++ break;
++ case 16:
++ for (i = 0; i < width/4; i++) {
++ fb_writel(*q++, p++);
++ fb_writel(*q++, p++);
++ }
++ if (width & 2)
++ fb_writel(*q++, p++);
++ if (width & 1)
++ fb_writew(*(u16 *)q, (u16 *)p);
++ break;
++ case 8:
++ for (i = 0; i < width/4; i++)
++ fb_writel(*q++, p++);
++
++ if (width & 2) {
++ fb_writew(*(u16 *)q, (u16 *)p);
++ q = (u32 *) ((u16 *)q + 1);
++ p = (u32 *) ((u16 *)p + 1);
++ }
++ if (width & 1)
++ fb_writeb(*(u8 *)q, (u8 *)p);
++ break;
++ }
++
++ dst += linebytes;
++ src += srclinebytes;
++ }
++}
++
++static void decorfill(struct fb_info *info, int sy, int sx, int height,
++ int width)
++{
++ int bytespp = ((info->var.bits_per_pixel + 7) >> 3);
++ int d = sy * info->fix.line_length + sx * bytespp;
++ int ds = (sy * info->var.xres + sx) * bytespp;
++
++ fbcon_decor_copy((u8 *)(info->screen_base + d), (u8 *)(info->bgdecor.data + ds),
++ height, width, info->fix.line_length, info->var.xres * bytespp,
++ info->var.bits_per_pixel);
++}
++
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx,
++ int height, int width)
++{
++ int bgshift = (vc->vc_hi_font_mask) ? 13 : 12;
++ struct fbcon_ops *ops = info->fbcon_par;
++ u8 *dst;
++ int transparent, bg_color = attr_bgcol_ec(bgshift, vc, info);
++
++ transparent = (vc->vc_decor.bg_color == bg_color);
++ sy = sy * vc->vc_font.height + vc->vc_decor.ty;
++ sx = sx * vc->vc_font.width + vc->vc_decor.tx;
++ height *= vc->vc_font.height;
++ width *= vc->vc_font.width;
++
++ /* Don't paint the background image if console is blanked */
++ if (transparent && !ops->blank_state) {
++ decorfill(info, sy, sx, height, width);
++ } else {
++ dst = (u8 *)(info->screen_base + sy * info->fix.line_length +
++ sx * ((info->var.bits_per_pixel + 7) >> 3));
++ decorset(dst, height, width, info->fix.line_length, cc2cx(bg_color),
++ info->var.bits_per_pixel);
++ }
++}
++
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info,
++ int bottom_only)
++{
++ unsigned int tw = vc->vc_cols*vc->vc_font.width;
++ unsigned int th = vc->vc_rows*vc->vc_font.height;
++
++ if (!bottom_only) {
++ /* top margin */
++ decorfill(info, 0, 0, vc->vc_decor.ty, info->var.xres);
++ /* left margin */
++ decorfill(info, vc->vc_decor.ty, 0, th, vc->vc_decor.tx);
++ /* right margin */
++ decorfill(info, vc->vc_decor.ty, vc->vc_decor.tx + tw, th,
++ info->var.xres - vc->vc_decor.tx - tw);
++ }
++ decorfill(info, vc->vc_decor.ty + th, 0,
++ info->var.yres - vc->vc_decor.ty - th, info->var.xres);
++}
++
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y,
++ int sx, int dx, int width)
++{
++ u16 *d = (u16 *) (vc->vc_origin + vc->vc_size_row * y + dx * 2);
++ u16 *s = d + (dx - sx);
++ u16 *start = d;
++ u16 *ls = d;
++ u16 *le = d + width;
++ u16 c;
++ int x = dx;
++ u16 attr = 1;
++
++ do {
++ c = scr_readw(d);
++ if (attr != (c & 0xff00)) {
++ attr = c & 0xff00;
++ if (d > start) {
++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
++ x += d - start;
++ start = d;
++ }
++ }
++ if (s >= ls && s < le && c == scr_readw(s)) {
++ if (d > start) {
++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
++ x += d - start + 1;
++ start = d + 1;
++ } else {
++ x++;
++ start++;
++ }
++ }
++ s++;
++ d++;
++ } while (d < le);
++ if (d > start)
++ fbcon_decor_putcs(vc, info, start, d - start, y, x);
++}
++
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank)
++{
++ if (blank) {
++ decorset((u8 *)info->screen_base, info->var.yres, info->var.xres,
++ info->fix.line_length, 0, info->var.bits_per_pixel);
++ } else {
++ update_screen(vc);
++ fbcon_decor_clear_margins(vc, info, 0);
++ }
++}
++
+diff --git a/drivers/video/console/fbcondecor.c b/drivers/video/console/fbcondecor.c
+new file mode 100644
+index 000000000000..78288a497a60
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.c
+@@ -0,0 +1,549 @@
++/*
++ * linux/drivers/video/console/fbcondecor.c -- Framebuffer console decorations
++ *
++ * Copyright (C) 2004-2009 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ * Code based upon "Bootsplash" (C) 2001-2003
++ * Volker Poplawski <volker@poplawski.de>,
++ * Stefan Reinauer <stepan@suse.de>,
++ * Steffen Winterfeldt <snwint@suse.de>,
++ * Michael Schroeder <mls@suse.de>,
++ * Ken Wimer <wimer@suse.de>.
++ *
++ * Compat ioctl support by Thorsten Klein <TK@Thorsten-Klein.de>.
++ *
++ * This file is subject to the terms and conditions of the GNU General Public
++ * License. See the file COPYING in the main directory of this archive for
++ * more details.
++ *
++ */
++#include <linux/module.h>
++#include <linux/kernel.h>
++#include <linux/string.h>
++#include <linux/types.h>
++#include <linux/fb.h>
++#include <linux/vt_kern.h>
++#include <linux/vmalloc.h>
++#include <linux/unistd.h>
++#include <linux/syscalls.h>
++#include <linux/init.h>
++#include <linux/proc_fs.h>
++#include <linux/workqueue.h>
++#include <linux/kmod.h>
++#include <linux/miscdevice.h>
++#include <linux/device.h>
++#include <linux/fs.h>
++#include <linux/compat.h>
++#include <linux/console.h>
++#include <linux/binfmts.h>
++#include <linux/uaccess.h>
++#include <asm/irq.h>
++
++#include "../fbdev/core/fbcon.h"
++#include "fbcondecor.h"
++
++extern signed char con2fb_map[];
++static int fbcon_decor_enable(struct vc_data *vc);
++
++static int initialized;
++
++char fbcon_decor_path[KMOD_PATH_LEN] = "/sbin/fbcondecor_helper";
++EXPORT_SYMBOL(fbcon_decor_path);
++
++int fbcon_decor_call_helper(char *cmd, unsigned short vc)
++{
++ char *envp[] = {
++ "HOME=/",
++ "PATH=/sbin:/bin",
++ NULL
++ };
++
++ char tfb[5];
++ char tcons[5];
++ unsigned char fb = (int) con2fb_map[vc];
++
++ char *argv[] = {
++ fbcon_decor_path,
++ "2",
++ cmd,
++ tcons,
++ tfb,
++ vc_cons[vc].d->vc_decor.theme,
++ NULL
++ };
++
++ snprintf(tfb, 5, "%d", fb);
++ snprintf(tcons, 5, "%d", vc);
++
++ return call_usermodehelper(fbcon_decor_path, argv, envp, UMH_WAIT_EXEC);
++}
++
++/* Disables fbcondecor on a virtual console; called with console sem held. */
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw)
++{
++ struct fb_info *info;
++
++ if (!vc->vc_decor.state)
++ return -EINVAL;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (info == NULL)
++ return -EINVAL;
++
++ vc->vc_decor.state = 0;
++ vc_resize(vc, info->var.xres / vc->vc_font.width,
++ info->var.yres / vc->vc_font.height);
++
++ if (fg_console == vc->vc_num && redraw) {
++ redraw_screen(vc, 0);
++ update_region(vc, vc->vc_origin +
++ vc->vc_size_row * vc->vc_top,
++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++ }
++
++ printk(KERN_INFO "fbcondecor: switched decor state to 'off' on console %d\n",
++ vc->vc_num);
++
++ return 0;
++}
++
++/* Enables fbcondecor on a virtual console; called with console sem held. */
++static int fbcon_decor_enable(struct vc_data *vc)
++{
++ struct fb_info *info;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (vc->vc_decor.twidth == 0 || vc->vc_decor.theight == 0 ||
++ info == NULL || vc->vc_decor.state || (!info->bgdecor.data &&
++ vc->vc_num == fg_console))
++ return -EINVAL;
++
++ vc->vc_decor.state = 1;
++ vc_resize(vc, vc->vc_decor.twidth / vc->vc_font.width,
++ vc->vc_decor.theight / vc->vc_font.height);
++
++ if (fg_console == vc->vc_num) {
++ redraw_screen(vc, 0);
++ update_region(vc, vc->vc_origin +
++ vc->vc_size_row * vc->vc_top,
++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++ fbcon_decor_clear_margins(vc, info, 0);
++ }
++
++ printk(KERN_INFO "fbcondecor: switched decor state to 'on' on console %d\n",
++ vc->vc_num);
++
++ return 0;
++}
++
++static inline int fbcon_decor_ioctl_dosetstate(struct vc_data *vc, unsigned int state, unsigned char origin)
++{
++ int ret;
++
++ console_lock();
++ if (!state)
++ ret = fbcon_decor_disable(vc, 1);
++ else
++ ret = fbcon_decor_enable(vc);
++ console_unlock();
++
++ return ret;
++}
++
++static inline void fbcon_decor_ioctl_dogetstate(struct vc_data *vc, unsigned int *state)
++{
++ *state = vc->vc_decor.state;
++}
++
++static int fbcon_decor_ioctl_dosetcfg(struct vc_data *vc, struct vc_decor *cfg, unsigned char origin)
++{
++ struct fb_info *info;
++ int len;
++ char *tmp;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (info == NULL || !cfg->twidth || !cfg->theight ||
++ cfg->tx + cfg->twidth > info->var.xres ||
++ cfg->ty + cfg->theight > info->var.yres)
++ return -EINVAL;
++
++ len = strnlen_user(cfg->theme, MAX_ARG_STRLEN);
++ if (!len || len > FBCON_DECOR_THEME_LEN)
++ return -EINVAL;
++ tmp = kmalloc(len, GFP_KERNEL);
++ if (!tmp)
++ return -ENOMEM;
++ if (copy_from_user(tmp, (void __user *)cfg->theme, len)) {
++ kfree(tmp); return -EFAULT; }
++ cfg->theme = tmp;
++ cfg->state = 0;
++
++ console_lock();
++ if (vc->vc_decor.state)
++ fbcon_decor_disable(vc, 1);
++ kfree(vc->vc_decor.theme);
++ vc->vc_decor = *cfg;
++ console_unlock();
++
++ printk(KERN_INFO "fbcondecor: console %d using theme '%s'\n",
++ vc->vc_num, vc->vc_decor.theme);
++ return 0;
++}
++
++static int fbcon_decor_ioctl_dogetcfg(struct vc_data *vc,
++ struct vc_decor *decor)
++{
++ char __user *tmp;
++
++ tmp = decor->theme;
++ *decor = vc->vc_decor;
++ decor->theme = tmp;
++
++ if (vc->vc_decor.theme) {
++ if (copy_to_user(tmp, vc->vc_decor.theme,
++ strlen(vc->vc_decor.theme) + 1))
++ return -EFAULT;
++ } else
++ if (put_user(0, tmp))
++ return -EFAULT;
++
++ return 0;
++}
++
++static int fbcon_decor_ioctl_dosetpic(struct vc_data *vc, struct fb_image *img,
++ unsigned char origin)
++{
++ struct fb_info *info;
++ int len;
++ u8 *tmp;
++
++ if (vc->vc_num != fg_console)
++ return -EINVAL;
++
++ info = registered_fb[(int) con2fb_map[vc->vc_num]];
++
++ if (info == NULL)
++ return -EINVAL;
++
++ if (img->width != info->var.xres || img->height != info->var.yres) {
++ printk(KERN_ERR "fbcondecor: picture dimensions mismatch\n");
++ printk(KERN_ERR "%dx%d vs %dx%d\n", img->width, img->height,
++ info->var.xres, info->var.yres);
++ return -EINVAL;
++ }
++
++ if (img->depth != info->var.bits_per_pixel) {
++ printk(KERN_ERR "fbcondecor: picture depth mismatch\n");
++ return -EINVAL;
++ }
++
++ if (img->depth == 8) {
++ if (!img->cmap.len || !img->cmap.red || !img->cmap.green ||
++ !img->cmap.blue)
++ return -EINVAL;
++
++ tmp = vmalloc(img->cmap.len * 3 * 2);
++ if (!tmp)
++ return -ENOMEM;
++
++ if (copy_from_user(tmp,
++ (void __user *)img->cmap.red,
++ (img->cmap.len << 1)) ||
++ copy_from_user(tmp + (img->cmap.len << 1),
++ (void __user *)img->cmap.green,
++ (img->cmap.len << 1)) ||
++ copy_from_user(tmp + (img->cmap.len << 2),
++ (void __user *)img->cmap.blue,
++ (img->cmap.len << 1))) {
++ vfree(tmp);
++ return -EFAULT;
++ }
++
++ img->cmap.transp = NULL;
++ img->cmap.red = (u16 *)tmp;
++ img->cmap.green = img->cmap.red + img->cmap.len;
++ img->cmap.blue = img->cmap.green + img->cmap.len;
++ } else {
++ img->cmap.red = NULL;
++ }
++
++ len = ((img->depth + 7) >> 3) * img->width * img->height;
++
++ /*
++ * Allocate an additional byte so that we never go outside of the
++ * buffer boundaries in the rendering functions in a 24 bpp mode.
++ */
++ tmp = vmalloc(len + 1);
++
++ if (!tmp)
++ goto out;
++
++ if (copy_from_user(tmp, (void __user *)img->data, len))
++ goto out;
++
++ img->data = tmp;
++
++ console_lock();
++
++ if (info->bgdecor.data)
++ vfree((u8 *)info->bgdecor.data);
++ if (info->bgdecor.cmap.red)
++ vfree(info->bgdecor.cmap.red);
++
++ info->bgdecor = *img;
++
++ if (fbcon_decor_active_vc(vc) && fg_console == vc->vc_num) {
++ redraw_screen(vc, 0);
++ update_region(vc, vc->vc_origin +
++ vc->vc_size_row * vc->vc_top,
++ vc->vc_size_row * (vc->vc_bottom - vc->vc_top) / 2);
++ fbcon_decor_clear_margins(vc, info, 0);
++ }
++
++ console_unlock();
++
++ return 0;
++
++out:
++ if (img->cmap.red)
++ vfree(img->cmap.red);
++
++ if (tmp)
++ vfree(tmp);
++ return -ENOMEM;
++}
++
++static long fbcon_decor_ioctl(struct file *filp, u_int cmd, u_long arg)
++{
++ struct fbcon_decor_iowrapper __user *wrapper = (void __user *) arg;
++ struct vc_data *vc = NULL;
++ unsigned short vc_num = 0;
++ unsigned char origin = 0;
++ void __user *data = NULL;
++
++ if (!access_ok(VERIFY_READ, wrapper,
++ sizeof(struct fbcon_decor_iowrapper)))
++ return -EFAULT;
++
++ __get_user(vc_num, &wrapper->vc);
++ __get_user(origin, &wrapper->origin);
++ __get_user(data, &wrapper->data);
++
++ if (!vc_cons_allocated(vc_num))
++ return -EINVAL;
++
++ vc = vc_cons[vc_num].d;
++
++ switch (cmd) {
++ case FBIOCONDECOR_SETPIC:
++ {
++ struct fb_image img;
++
++ if (copy_from_user(&img, (struct fb_image __user *)data, sizeof(struct fb_image)))
++ return -EFAULT;
++
++ return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++ }
++ case FBIOCONDECOR_SETCFG:
++ {
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++ return -EFAULT;
++
++ return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++ }
++ case FBIOCONDECOR_GETCFG:
++ {
++ int rval;
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg, (struct vc_decor __user *)data, sizeof(struct vc_decor)))
++ return -EFAULT;
++
++ rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++ if (copy_to_user(data, &cfg, sizeof(struct vc_decor)))
++ return -EFAULT;
++ return rval;
++ }
++ case FBIOCONDECOR_SETSTATE:
++ {
++ unsigned int state = 0;
++
++ if (get_user(state, (unsigned int __user *)data))
++ return -EFAULT;
++ return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++ }
++ case FBIOCONDECOR_GETSTATE:
++ {
++ unsigned int state = 0;
++
++ fbcon_decor_ioctl_dogetstate(vc, &state);
++ return put_user(state, (unsigned int __user *)data);
++ }
++
++ default:
++ return -ENOIOCTLCMD;
++ }
++}
++
++#ifdef CONFIG_COMPAT
++
++static long fbcon_decor_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
++{
++ struct fbcon_decor_iowrapper32 __user *wrapper = (void __user *)arg;
++ struct vc_data *vc = NULL;
++ unsigned short vc_num = 0;
++ unsigned char origin = 0;
++ compat_uptr_t data_compat = 0;
++ void __user *data = NULL;
++
++ if (!access_ok(VERIFY_READ, wrapper,
++ sizeof(struct fbcon_decor_iowrapper32)))
++ return -EFAULT;
++
++ __get_user(vc_num, &wrapper->vc);
++ __get_user(origin, &wrapper->origin);
++ __get_user(data_compat, &wrapper->data);
++ data = compat_ptr(data_compat);
++
++ if (!vc_cons_allocated(vc_num))
++ return -EINVAL;
++
++ vc = vc_cons[vc_num].d;
++
++ switch (cmd) {
++ case FBIOCONDECOR_SETPIC32:
++ {
++ struct fb_image32 img_compat;
++ struct fb_image img;
++
++ if (copy_from_user(&img_compat, (struct fb_image32 __user *)data, sizeof(struct fb_image32)))
++ return -EFAULT;
++
++ fb_image_from_compat(img, img_compat);
++
++ return fbcon_decor_ioctl_dosetpic(vc, &img, origin);
++ }
++
++ case FBIOCONDECOR_SETCFG32:
++ {
++ struct vc_decor32 cfg_compat;
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++ return -EFAULT;
++
++ vc_decor_from_compat(cfg, cfg_compat);
++
++ return fbcon_decor_ioctl_dosetcfg(vc, &cfg, origin);
++ }
++
++ case FBIOCONDECOR_GETCFG32:
++ {
++ int rval;
++ struct vc_decor32 cfg_compat;
++ struct vc_decor cfg;
++
++ if (copy_from_user(&cfg_compat, (struct vc_decor32 __user *)data, sizeof(struct vc_decor32)))
++ return -EFAULT;
++ cfg.theme = compat_ptr(cfg_compat.theme);
++
++ rval = fbcon_decor_ioctl_dogetcfg(vc, &cfg);
++
++ vc_decor_to_compat(cfg_compat, cfg);
++
++ if (copy_to_user((struct vc_decor32 __user *)data, &cfg_compat, sizeof(struct vc_decor32)))
++ return -EFAULT;
++ return rval;
++ }
++
++ case FBIOCONDECOR_SETSTATE32:
++ {
++ compat_uint_t state_compat = 0;
++ unsigned int state = 0;
++
++ if (get_user(state_compat, (compat_uint_t __user *)data))
++ return -EFAULT;
++
++ state = (unsigned int)state_compat;
++
++ return fbcon_decor_ioctl_dosetstate(vc, state, origin);
++ }
++
++ case FBIOCONDECOR_GETSTATE32:
++ {
++ compat_uint_t state_compat = 0;
++ unsigned int state = 0;
++
++ fbcon_decor_ioctl_dogetstate(vc, &state);
++ state_compat = (compat_uint_t)state;
++
++ return put_user(state_compat, (compat_uint_t __user *)data);
++ }
++
++ default:
++ return -ENOIOCTLCMD;
++ }
++}
++#else
++ #define fbcon_decor_compat_ioctl NULL
++#endif
++
++static struct file_operations fbcon_decor_ops = {
++ .owner = THIS_MODULE,
++ .unlocked_ioctl = fbcon_decor_ioctl,
++ .compat_ioctl = fbcon_decor_compat_ioctl
++};
++
++static struct miscdevice fbcon_decor_dev = {
++ .minor = MISC_DYNAMIC_MINOR,
++ .name = "fbcondecor",
++ .fops = &fbcon_decor_ops
++};
++
++void fbcon_decor_reset(void)
++{
++ int i;
++
++ for (i = 0; i < num_registered_fb; i++) {
++ registered_fb[i]->bgdecor.data = NULL;
++ registered_fb[i]->bgdecor.cmap.red = NULL;
++ }
++
++ for (i = 0; i < MAX_NR_CONSOLES && vc_cons[i].d; i++) {
++ vc_cons[i].d->vc_decor.state = vc_cons[i].d->vc_decor.twidth =
++ vc_cons[i].d->vc_decor.theight = 0;
++ vc_cons[i].d->vc_decor.theme = NULL;
++ }
++}
++
++int fbcon_decor_init(void)
++{
++ int i;
++
++ fbcon_decor_reset();
++
++ if (initialized)
++ return 0;
++
++ i = misc_register(&fbcon_decor_dev);
++ if (i) {
++ printk(KERN_ERR "fbcondecor: failed to register device\n");
++ return i;
++ }
++
++ fbcon_decor_call_helper("init", 0);
++ initialized = 1;
++ return 0;
++}
++
++int fbcon_decor_exit(void)
++{
++ fbcon_decor_reset();
++ return 0;
++}
+diff --git a/drivers/video/console/fbcondecor.h b/drivers/video/console/fbcondecor.h
+new file mode 100644
+index 000000000000..c49386c16695
+--- /dev/null
++++ b/drivers/video/console/fbcondecor.h
+@@ -0,0 +1,77 @@
++/*
++ * linux/drivers/video/console/fbcondecor.h -- Framebuffer Console Decoration headers
++ *
++ * Copyright (C) 2004 Michal Januszewski <michalj+fbcondecor@gmail.com>
++ *
++ */
++
++#ifndef __FBCON_DECOR_H
++#define __FBCON_DECOR_H
++
++#ifndef _LINUX_FB_H
++#include <linux/fb.h>
++#endif
++
++/* This is needed for vc_cons in fbcmap.c */
++#include <linux/vt_kern.h>
++
++struct fb_cursor;
++struct fb_info;
++struct vc_data;
++
++#ifdef CONFIG_FB_CON_DECOR
++/* fbcondecor.c */
++int fbcon_decor_init(void);
++int fbcon_decor_exit(void);
++int fbcon_decor_call_helper(char *cmd, unsigned short cons);
++int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw);
++
++/* cfbcondecor.c */
++void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx);
++void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor);
++void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width);
++void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only);
++void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank);
++void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width);
++void fbcon_decor_copy(u8 *dst, u8 *src, int height, int width, int linebytes, int srclinesbytes, int bpp);
++void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc);
++
++/* vt.c */
++void acquire_console_sem(void);
++void release_console_sem(void);
++void do_unblank_screen(int entering_gfx);
++
++/* struct vc_data *y */
++#define fbcon_decor_active_vc(y) (y->vc_decor.state && y->vc_decor.theme)
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active_nores(x, y) (x->bgdecor.data && fbcon_decor_active_vc(y))
++
++/* struct fb_info *x, struct vc_data *y */
++#define fbcon_decor_active(x, y) (fbcon_decor_active_nores(x, y) && \
++ x->bgdecor.width == x->var.xres && \
++ x->bgdecor.height == x->var.yres && \
++ x->bgdecor.depth == x->var.bits_per_pixel)
++
++#else /* CONFIG_FB_CON_DECOR */
++
++static inline void fbcon_decor_putcs(struct vc_data *vc, struct fb_info *info, const unsigned short *s, int count, int yy, int xx) {}
++static inline void fbcon_decor_putc(struct vc_data *vc, struct fb_info *info, int c, int ypos, int xpos) {}
++static inline void fbcon_decor_cursor(struct fb_info *info, struct fb_cursor *cursor) {}
++static inline void fbcon_decor_clear(struct vc_data *vc, struct fb_info *info, int sy, int sx, int height, int width) {}
++static inline void fbcon_decor_clear_margins(struct vc_data *vc, struct fb_info *info, int bottom_only) {}
++static inline void fbcon_decor_blank(struct vc_data *vc, struct fb_info *info, int blank) {}
++static inline void fbcon_decor_bmove_redraw(struct vc_data *vc, struct fb_info *info, int y, int sx, int dx, int width) {}
++static inline void fbcon_decor_fix_pseudo_pal(struct fb_info *info, struct vc_data *vc) {}
++static inline int fbcon_decor_call_helper(char *cmd, unsigned short cons) { return 0; }
++static inline int fbcon_decor_init(void) { return 0; }
++static inline int fbcon_decor_exit(void) { return 0; }
++static inline int fbcon_decor_disable(struct vc_data *vc, unsigned char redraw) { return 0; }
++
++#define fbcon_decor_active_vc(y) (0)
++#define fbcon_decor_active_nores(x, y) (0)
++#define fbcon_decor_active(x, y) (0)
++
++#endif /* CONFIG_FB_CON_DECOR */
++
++#endif /* __FBCON_DECOR_H */
+diff --git a/drivers/video/fbdev/Kconfig b/drivers/video/fbdev/Kconfig
+index 5e58f5ec0a28..1daa8c2cb2d8 100644
+--- a/drivers/video/fbdev/Kconfig
++++ b/drivers/video/fbdev/Kconfig
+@@ -1226,7 +1226,6 @@ config FB_MATROX
+ select FB_CFB_FILLRECT
+ select FB_CFB_COPYAREA
+ select FB_CFB_IMAGEBLIT
+- select FB_TILEBLITTING
+ select FB_MACMODES if PPC_PMAC
+ ---help---
+ Say Y here if you have a Matrox Millennium, Matrox Millennium II,
+diff --git a/drivers/video/fbdev/core/bitblit.c b/drivers/video/fbdev/core/bitblit.c
+index 790900d646c0..3f940c93752c 100644
+--- a/drivers/video/fbdev/core/bitblit.c
++++ b/drivers/video/fbdev/core/bitblit.c
+@@ -18,6 +18,7 @@
+ #include <linux/console.h>
+ #include <asm/types.h>
+ #include "fbcon.h"
++#include "../../console/fbcondecor.h"
+
+ /*
+ * Accelerated handlers.
+@@ -55,6 +56,13 @@ static void bit_bmove(struct vc_data *vc, struct fb_info *info, int sy,
+ area.height = height * vc->vc_font.height;
+ area.width = width * vc->vc_font.width;
+
++ if (fbcon_decor_active(info, vc)) {
++ area.sx += vc->vc_decor.tx;
++ area.sy += vc->vc_decor.ty;
++ area.dx += vc->vc_decor.tx;
++ area.dy += vc->vc_decor.ty;
++ }
++
+ info->fbops->fb_copyarea(info, &area);
+ }
+
+@@ -379,11 +387,15 @@ static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
+ cursor.image.depth = 1;
+ cursor.rop = ROP_XOR;
+
+- if (info->fbops->fb_cursor)
+- err = info->fbops->fb_cursor(info, &cursor);
++ if (fbcon_decor_active(info, vc)) {
++ fbcon_decor_cursor(info, &cursor);
++ } else {
++ if (info->fbops->fb_cursor)
++ err = info->fbops->fb_cursor(info, &cursor);
+
+- if (err)
+- soft_cursor(info, &cursor);
++ if (err)
++ soft_cursor(info, &cursor);
++ }
+
+ ops->cursor_reset = 0;
+ }
+diff --git a/drivers/video/fbdev/core/fbcmap.c b/drivers/video/fbdev/core/fbcmap.c
+index 68a113594808..21f977cb59d2 100644
+--- a/drivers/video/fbdev/core/fbcmap.c
++++ b/drivers/video/fbdev/core/fbcmap.c
+@@ -17,6 +17,8 @@
+ #include <linux/slab.h>
+ #include <linux/uaccess.h>
+
++#include "../../console/fbcondecor.h"
++
+ static u16 red2[] __read_mostly = {
+ 0x0000, 0xaaaa
+ };
+@@ -256,9 +258,12 @@ int fb_set_cmap(struct fb_cmap *cmap, struct fb_info *info)
+ break;
+ }
+ }
+- if (rc == 0)
++ if (rc == 0) {
+ fb_copy_cmap(cmap, &info->cmap);
+-
++ if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++ info->fix.visual == FB_VISUAL_DIRECTCOLOR)
++ fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++ }
+ return rc;
+ }
+
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index 04612f938bab..95c349200078 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -80,6 +80,7 @@
+ #include <asm/irq.h>
+
+ #include "fbcon.h"
++#include "../../console/fbcondecor.h"
+
+ #ifdef FBCONDEBUG
+ # define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __func__ , ## args)
+@@ -95,7 +96,7 @@ enum {
+
+ static struct display fb_display[MAX_NR_CONSOLES];
+
+-static signed char con2fb_map[MAX_NR_CONSOLES];
++signed char con2fb_map[MAX_NR_CONSOLES];
+ static signed char con2fb_map_boot[MAX_NR_CONSOLES];
+
+ static int logo_lines;
+@@ -282,7 +283,7 @@ static inline int fbcon_is_inactive(struct vc_data *vc, struct fb_info *info)
+ !vt_force_oops_output(vc);
+ }
+
+-static int get_color(struct vc_data *vc, struct fb_info *info,
++int get_color(struct vc_data *vc, struct fb_info *info,
+ u16 c, int is_fg)
+ {
+ int depth = fb_get_color_depth(&info->var, &info->fix);
+@@ -551,6 +552,9 @@ static int do_fbcon_takeover(int show_logo)
+ info_idx = -1;
+ } else {
+ fbcon_has_console_bind = 1;
++#ifdef CONFIG_FB_CON_DECOR
++ fbcon_decor_init();
++#endif
+ }
+
+ return err;
+@@ -1013,6 +1017,12 @@ static const char *fbcon_startup(void)
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ cols /= vc->vc_font.width;
+ rows /= vc->vc_font.height;
++
++ if (fbcon_decor_active(info, vc)) {
++ cols = vc->vc_decor.twidth / vc->vc_font.width;
++ rows = vc->vc_decor.theight / vc->vc_font.height;
++ }
++
+ vc_resize(vc, cols, rows);
+
+ DPRINTK("mode: %s\n", info->fix.id);
+@@ -1042,7 +1052,7 @@ static void fbcon_init(struct vc_data *vc, int init)
+ cap = info->flags;
+
+ if (vc != svc || logo_shown == FBCON_LOGO_DONTSHOW ||
+- (info->fix.type == FB_TYPE_TEXT))
++ (info->fix.type == FB_TYPE_TEXT) || fbcon_decor_active(info, vc))
+ logo = 0;
+
+ if (var_to_display(p, &info->var, info))
+@@ -1275,6 +1285,11 @@ static void fbcon_clear(struct vc_data *vc, int sy, int sx, int height,
+ fbcon_clear_margins(vc, 0);
+ }
+
++ if (fbcon_decor_active(info, vc)) {
++ fbcon_decor_clear(vc, info, sy, sx, height, width);
++ return;
++ }
++
+ /* Split blits that cross physical y_wrap boundary */
+
+ y_break = p->vrows - p->yscroll;
+@@ -1294,10 +1309,15 @@ static void fbcon_putcs(struct vc_data *vc, const unsigned short *s,
+ struct display *p = &fb_display[vc->vc_num];
+ struct fbcon_ops *ops = info->fbcon_par;
+
+- if (!fbcon_is_inactive(vc, info))
+- ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
+- get_color(vc, info, scr_readw(s), 1),
+- get_color(vc, info, scr_readw(s), 0));
++ if (!fbcon_is_inactive(vc, info)) {
++
++ if (fbcon_decor_active(info, vc))
++ fbcon_decor_putcs(vc, info, s, count, ypos, xpos);
++ else
++ ops->putcs(vc, info, s, count, real_y(p, ypos), xpos,
++ get_color(vc, info, scr_readw(s), 1),
++ get_color(vc, info, scr_readw(s), 0));
++ }
+ }
+
+ static void fbcon_putc(struct vc_data *vc, int c, int ypos, int xpos)
+@@ -1313,8 +1333,12 @@ static void fbcon_clear_margins(struct vc_data *vc, int bottom_only)
+ struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
+ struct fbcon_ops *ops = info->fbcon_par;
+
+- if (!fbcon_is_inactive(vc, info))
+- ops->clear_margins(vc, info, margin_color, bottom_only);
++ if (!fbcon_is_inactive(vc, info)) {
++ if (fbcon_decor_active(info, vc))
++ fbcon_decor_clear_margins(vc, info, bottom_only);
++ else
++ ops->clear_margins(vc, info, margin_color, bottom_only);
++ }
+ }
+
+ static void fbcon_cursor(struct vc_data *vc, int mode)
+@@ -1835,7 +1859,7 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ count = vc->vc_rows;
+ if (softback_top)
+ fbcon_softback_note(vc, t, count);
+- if (logo_shown >= 0)
++ if (logo_shown >= 0 || fbcon_decor_active(info, vc))
+ goto redraw_up;
+ switch (p->scrollmode) {
+ case SCROLL_MOVE:
+@@ -1928,6 +1952,8 @@ static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
+ count = vc->vc_rows;
+ if (logo_shown >= 0)
+ goto redraw_down;
++ if (fbcon_decor_active(info, vc))
++ goto redraw_down;
+ switch (p->scrollmode) {
+ case SCROLL_MOVE:
+ fbcon_redraw_blit(vc, info, p, b - 1, b - t - count,
+@@ -2076,6 +2102,13 @@ static void fbcon_bmove_rec(struct vc_data *vc, struct display *p, int sy, int s
+ }
+ return;
+ }
++
++ if (fbcon_decor_active(info, vc) && sy == dy && height == 1) {
++ /* must use slower redraw bmove to keep background pic intact */
++ fbcon_decor_bmove_redraw(vc, info, sy, sx, dx, width);
++ return;
++ }
++
+ ops->bmove(vc, info, real_y(p, sy), sx, real_y(p, dy), dx,
+ height, width);
+ }
+@@ -2146,8 +2179,8 @@ static int fbcon_resize(struct vc_data *vc, unsigned int width,
+ var.yres = virt_h * virt_fh;
+ x_diff = info->var.xres - var.xres;
+ y_diff = info->var.yres - var.yres;
+- if (x_diff < 0 || x_diff > virt_fw ||
+- y_diff < 0 || y_diff > virt_fh) {
++ if ((x_diff < 0 || x_diff > virt_fw ||
++ y_diff < 0 || y_diff > virt_fh) && !vc->vc_decor.state) {
+ const struct fb_videomode *mode;
+
+ DPRINTK("attempting resize %ix%i\n", var.xres, var.yres);
+@@ -2183,6 +2216,22 @@ static int fbcon_switch(struct vc_data *vc)
+
+ info = registered_fb[con2fb_map[vc->vc_num]];
+ ops = info->fbcon_par;
++ prev_console = ops->currcon;
++ if (prev_console != -1)
++ old_info = registered_fb[con2fb_map[prev_console]];
++
++#ifdef CONFIG_FB_CON_DECOR
++ if (!fbcon_decor_active_vc(vc) && info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++ struct vc_data *vc_curr = vc_cons[prev_console].d;
++
++ if (vc_curr && fbcon_decor_active_vc(vc_curr)) {
++ // Clear the screen to avoid displaying funky colors
++ // during palette updates.
++ memset((u8 *)info->screen_base + info->fix.line_length * info->var.yoffset,
++ 0, info->var.yres * info->fix.line_length);
++ }
++ }
++#endif
+
+ if (softback_top) {
+ if (softback_lines)
+@@ -2201,9 +2250,6 @@ static int fbcon_switch(struct vc_data *vc)
+ logo_shown = FBCON_LOGO_CANSHOW;
+ }
+
+- prev_console = ops->currcon;
+- if (prev_console != -1)
+- old_info = registered_fb[con2fb_map[prev_console]];
+ /*
+ * FIXME: If we have multiple fbdev's loaded, we need to
+ * update all info->currcon. Perhaps, we can place this
+@@ -2247,6 +2293,18 @@ static int fbcon_switch(struct vc_data *vc)
+ fbcon_del_cursor_timer(old_info);
+ }
+
++ if (fbcon_decor_active_vc(vc)) {
++ struct vc_data *vc_curr = vc_cons[prev_console].d;
++
++ if (!vc_curr->vc_decor.theme ||
++ strcmp(vc->vc_decor.theme, vc_curr->vc_decor.theme) ||
++ (fbcon_decor_active_nores(info, vc_curr) &&
++ !fbcon_decor_active(info, vc_curr))) {
++ fbcon_decor_disable(vc, 0);
++ fbcon_decor_call_helper("modechange", vc->vc_num);
++ }
++ }
++
+ if (fbcon_is_inactive(vc, info) ||
+ ops->blank_state != FB_BLANK_UNBLANK)
+ fbcon_del_cursor_timer(info);
+@@ -2355,15 +2413,20 @@ static int fbcon_blank(struct vc_data *vc, int blank, int mode_switch)
+ }
+ }
+
+- if (!fbcon_is_inactive(vc, info)) {
++ if (!fbcon_is_inactive(vc, info)) {
+ if (ops->blank_state != blank) {
+ ops->blank_state = blank;
+ fbcon_cursor(vc, blank ? CM_ERASE : CM_DRAW);
+ ops->cursor_flash = (!blank);
+
+- if (!(info->flags & FBINFO_MISC_USEREVENT))
+- if (fb_blank(info, blank))
+- fbcon_generic_blank(vc, info, blank);
++ if (!(info->flags & FBINFO_MISC_USEREVENT)) {
++ if (fb_blank(info, blank)) {
++ if (fbcon_decor_active(info, vc))
++ fbcon_decor_blank(vc, info, blank);
++ else
++ fbcon_generic_blank(vc, info, blank);
++ }
++ }
+ }
+
+ if (!blank)
+@@ -2546,13 +2609,22 @@ static int fbcon_do_set_font(struct vc_data *vc, int w, int h,
+ set_vc_hi_font(vc, true);
+
+ if (resize) {
++ /* reset wrap/pan */
+ int cols, rows;
+
+ cols = FBCON_SWAP(ops->rotate, info->var.xres, info->var.yres);
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
++
++ if (fbcon_decor_active(info, vc)) {
++ info->var.xoffset = info->var.yoffset = p->yscroll = 0;
++ cols = vc->vc_decor.twidth;
++ rows = vc->vc_decor.theight;
++ }
+ cols /= w;
+ rows /= h;
++
+ vc_resize(vc, cols, rows);
++
+ if (con_is_visible(vc) && softback_buf)
+ fbcon_update_softback(vc);
+ } else if (con_is_visible(vc)
+@@ -2681,7 +2753,11 @@ static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table)
+ int i, j, k, depth;
+ u8 val;
+
+- if (fbcon_is_inactive(vc, info))
++ if (fbcon_is_inactive(vc, info)
++#ifdef CONFIG_FB_CON_DECOR
++ || vc->vc_num != fg_console
++#endif
++ )
+ return;
+
+ if (!con_is_visible(vc))
+@@ -2707,7 +2783,47 @@ static void fbcon_set_palette(struct vc_data *vc, const unsigned char *table)
+ } else
+ fb_copy_cmap(fb_default_cmap(1 << depth), &palette_cmap);
+
+- fb_set_cmap(&palette_cmap, info);
++ if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++ info->fix.visual == FB_VISUAL_DIRECTCOLOR) {
++
++ u16 *red, *green, *blue;
++ int minlen = min(min(info->var.red.length, info->var.green.length),
++ info->var.blue.length);
++
++ struct fb_cmap cmap = {
++ .start = 0,
++ .len = (1 << minlen),
++ .red = NULL,
++ .green = NULL,
++ .blue = NULL,
++ .transp = NULL
++ };
++
++ red = kmalloc(256 * sizeof(u16) * 3, GFP_KERNEL);
++
++ if (!red)
++ goto out;
++
++ green = red + 256;
++ blue = green + 256;
++ cmap.red = red;
++ cmap.green = green;
++ cmap.blue = blue;
++
++ for (i = 0; i < cmap.len; i++)
++ red[i] = green[i] = blue[i] = (0xffff * i)/(cmap.len-1);
++
++ fb_set_cmap(&cmap, info);
++ fbcon_decor_fix_pseudo_pal(info, vc_cons[fg_console].d);
++ kfree(red);
++
++ return;
++
++ } else if (fbcon_decor_active(info, vc_cons[fg_console].d) &&
++ info->var.bits_per_pixel == 8 && info->bgdecor.cmap.red != NULL)
++ fb_set_cmap(&info->bgdecor.cmap, info);
++
++out: fb_set_cmap(&palette_cmap, info);
+ }
+
+ static u16 *fbcon_screen_pos(struct vc_data *vc, int offset)
+@@ -2932,7 +3048,14 @@ static void fbcon_modechanged(struct fb_info *info)
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ cols /= vc->vc_font.width;
+ rows /= vc->vc_font.height;
+- vc_resize(vc, cols, rows);
++
++ if (!fbcon_decor_active_nores(info, vc)) {
++ vc_resize(vc, cols, rows);
++ } else {
++ fbcon_decor_disable(vc, 0);
++ fbcon_decor_call_helper("modechange", vc->vc_num);
++ }
++
+ updatescrollmode(p, info, vc);
+ scrollback_max = 0;
+ scrollback_current = 0;
+@@ -2977,7 +3100,8 @@ static void fbcon_set_all_vcs(struct fb_info *info)
+ rows = FBCON_SWAP(ops->rotate, info->var.yres, info->var.xres);
+ cols /= vc->vc_font.width;
+ rows /= vc->vc_font.height;
+- vc_resize(vc, cols, rows);
++ if (!fbcon_decor_active_nores(info, vc))
++ vc_resize(vc, cols, rows);
+ }
+
+ if (fg != -1)
+@@ -3618,6 +3742,7 @@ static void fbcon_exit(void)
+ }
+ }
+
++ fbcon_decor_exit();
+ fbcon_has_exited = 1;
+ }
+
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index f741ba8df01b..b0141433d249 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -1253,15 +1253,6 @@ struct fb_fix_screeninfo32 {
+ u16 reserved[3];
+ };
+
+-struct fb_cmap32 {
+- u32 start;
+- u32 len;
+- compat_caddr_t red;
+- compat_caddr_t green;
+- compat_caddr_t blue;
+- compat_caddr_t transp;
+-};
+-
+ static int fb_getput_cmap(struct fb_info *info, unsigned int cmd,
+ unsigned long arg)
+ {
+diff --git a/include/linux/console_decor.h b/include/linux/console_decor.h
+new file mode 100644
+index 000000000000..15143556c2aa
+--- /dev/null
++++ b/include/linux/console_decor.h
+@@ -0,0 +1,46 @@
++#ifndef _LINUX_CONSOLE_DECOR_H_
++#define _LINUX_CONSOLE_DECOR_H_ 1
++
++/* A structure used by the framebuffer console decorations (drivers/video/console/fbcondecor.c) */
++struct vc_decor {
++ __u8 bg_color; /* The color that is to be treated as transparent */
++ __u8 state; /* Current decor state: 0 = off, 1 = on */
++ __u16 tx, ty; /* Top left corner coordinates of the text field */
++ __u16 twidth, theight; /* Width and height of the text field */
++ char *theme;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++
++struct vc_decor32 {
++ __u8 bg_color; /* The color that is to be treated as transparent */
++ __u8 state; /* Current decor state: 0 = off, 1 = on */
++ __u16 tx, ty; /* Top left corner coordinates of the text field */
++ __u16 twidth, theight; /* Width and height of the text field */
++ compat_uptr_t theme;
++};
++
++#define vc_decor_from_compat(to, from) \
++ (to).bg_color = (from).bg_color; \
++ (to).state = (from).state; \
++ (to).tx = (from).tx; \
++ (to).ty = (from).ty; \
++ (to).twidth = (from).twidth; \
++ (to).theight = (from).theight; \
++ (to).theme = compat_ptr((from).theme)
++
++#define vc_decor_to_compat(to, from) \
++ (to).bg_color = (from).bg_color; \
++ (to).state = (from).state; \
++ (to).tx = (from).tx; \
++ (to).ty = (from).ty; \
++ (to).twidth = (from).twidth; \
++ (to).theight = (from).theight; \
++ (to).theme = ptr_to_compat((from).theme)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#endif
+diff --git a/include/linux/console_struct.h b/include/linux/console_struct.h
+index c0ec478ea5bf..8bfed6b21fc9 100644
+--- a/include/linux/console_struct.h
++++ b/include/linux/console_struct.h
+@@ -21,6 +21,7 @@ struct vt_struct;
+ struct uni_pagedir;
+
+ #define NPAR 16
++#include <linux/console_decor.h>
+
+ /*
+ * Example: vc_data of a console that was scrolled 3 lines down.
+@@ -141,6 +142,8 @@ struct vc_data {
+ struct uni_pagedir *vc_uni_pagedir;
+ struct uni_pagedir **vc_uni_pagedir_loc; /* [!] Location of uni_pagedir variable for this console */
+ bool vc_panic_force_write; /* when oops/panic this VC can accept forced output/blanking */
++
++ struct vc_decor vc_decor;
+ /* additional information is in vt_kern.h */
+ };
+
+diff --git a/include/linux/fb.h b/include/linux/fb.h
+index bc24e48e396d..ad7d182c7545 100644
+--- a/include/linux/fb.h
++++ b/include/linux/fb.h
+@@ -239,6 +239,34 @@ struct fb_deferred_io {
+ };
+ #endif
+
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_image32 {
++ __u32 dx; /* Where to place image */
++ __u32 dy;
++ __u32 width; /* Size of image */
++ __u32 height;
++ __u32 fg_color; /* Only used when a mono bitmap */
++ __u32 bg_color;
++ __u8 depth; /* Depth of the image */
++ const compat_uptr_t data; /* Pointer to image data */
++ struct fb_cmap32 cmap; /* color map info */
++};
++
++#define fb_image_from_compat(to, from) \
++ (to).dx = (from).dx; \
++ (to).dy = (from).dy; \
++ (to).width = (from).width; \
++ (to).height = (from).height; \
++ (to).fg_color = (from).fg_color; \
++ (to).bg_color = (from).bg_color; \
++ (to).depth = (from).depth; \
++ (to).data = compat_ptr((from).data); \
++ fb_cmap_from_compat((to).cmap, (from).cmap)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /*
+ * Frame buffer operations
+ *
+@@ -509,6 +537,9 @@ struct fb_info {
+ #define FBINFO_STATE_SUSPENDED 1
+ u32 state; /* Hardware state i.e suspend */
+ void *fbcon_par; /* fbcon use-only private area */
++
++ struct fb_image bgdecor;
++
+ /* From here on everything is device dependent */
+ void *par;
+ /* we need the PCI or similar aperture base/size not
+diff --git a/include/uapi/linux/fb.h b/include/uapi/linux/fb.h
+index 6cd9b198b7c6..a228440649fa 100644
+--- a/include/uapi/linux/fb.h
++++ b/include/uapi/linux/fb.h
+@@ -9,6 +9,23 @@
+
+ #define FB_MAX 32 /* sufficient for now */
+
++struct fbcon_decor_iowrapper {
++ unsigned short vc; /* Virtual console */
++ unsigned char origin; /* Point of origin of the request */
++ void *data;
++};
++
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#include <linux/compat.h>
++struct fbcon_decor_iowrapper32 {
++ unsigned short vc; /* Virtual console */
++ unsigned char origin; /* Point of origin of the request */
++ compat_uptr_t data;
++};
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
+ /* ioctls
+ 0x46 is 'F' */
+ #define FBIOGET_VSCREENINFO 0x4600
+@@ -36,6 +53,25 @@
+ #define FBIOGET_DISPINFO 0x4618
+ #define FBIO_WAITFORVSYNC _IOW('F', 0x20, __u32)
+
++#define FBIOCONDECOR_SETCFG _IOWR('F', 0x19, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETCFG _IOR('F', 0x1A, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETSTATE _IOWR('F', 0x1B, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_GETSTATE _IOR('F', 0x1C, struct fbcon_decor_iowrapper)
++#define FBIOCONDECOR_SETPIC _IOWR('F', 0x1D, struct fbcon_decor_iowrapper)
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++#define FBIOCONDECOR_SETCFG32 _IOWR('F', 0x19, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETCFG32 _IOR('F', 0x1A, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETSTATE32 _IOWR('F', 0x1B, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_GETSTATE32 _IOR('F', 0x1C, struct fbcon_decor_iowrapper32)
++#define FBIOCONDECOR_SETPIC32 _IOWR('F', 0x1D, struct fbcon_decor_iowrapper32)
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++#define FBCON_DECOR_THEME_LEN 128 /* Maximum length of a theme name */
++#define FBCON_DECOR_IO_ORIG_KERNEL 0 /* Kernel ioctl origin */
++#define FBCON_DECOR_IO_ORIG_USER 1 /* User ioctl origin */
++
+ #define FB_TYPE_PACKED_PIXELS 0 /* Packed Pixels */
+ #define FB_TYPE_PLANES 1 /* Non interleaved planes */
+ #define FB_TYPE_INTERLEAVED_PLANES 2 /* Interleaved planes */
+@@ -278,6 +314,29 @@ struct fb_var_screeninfo {
+ __u32 reserved[4]; /* Reserved for future compatibility */
+ };
+
++#ifdef __KERNEL__
++#ifdef CONFIG_COMPAT
++struct fb_cmap32 {
++ __u32 start;
++ __u32 len; /* Number of entries */
++ compat_uptr_t red; /* Red values */
++ compat_uptr_t green;
++ compat_uptr_t blue;
++ compat_uptr_t transp; /* transparency, can be NULL */
++};
++
++#define fb_cmap_from_compat(to, from) \
++ (to).start = (from).start; \
++ (to).len = (from).len; \
++ (to).red = compat_ptr((from).red); \
++ (to).green = compat_ptr((from).green); \
++ (to).blue = compat_ptr((from).blue); \
++ (to).transp = compat_ptr((from).transp)
++
++#endif /* CONFIG_COMPAT */
++#endif /* __KERNEL__ */
++
++
+ struct fb_cmap {
+ __u32 start; /* First entry */
+ __u32 len; /* Number of entries */
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index d9c31bc2eaea..e33ac56cc32a 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -150,6 +150,10 @@ static const int cap_last_cap = CAP_LAST_CAP;
+ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #endif
+
++#ifdef CONFIG_FB_CON_DECOR
++extern char fbcon_decor_path[];
++#endif
++
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
+@@ -283,6 +287,15 @@ static struct ctl_table sysctl_base_table[] = {
+ .mode = 0555,
+ .child = dev_table,
+ },
++#ifdef CONFIG_FB_CON_DECOR
++ {
++ .procname = "fbcondecor",
++ .data = &fbcon_decor_path,
++ .maxlen = KMOD_PATH_LEN,
++ .mode = 0644,
++ .proc_handler = &proc_dostring,
++ },
++#endif
+ { }
+ };
+
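[Editorial sketch, not part of the patch] The fbcondecor hunk above registers a `fbcondecor` entry in the kernel sysctl base table, exposing the userspace helper path. On a kernel built with CONFIG_FB_CON_DECOR the value can be read back from procfs; a minimal guarded sketch (the path is taken from the patch, the fallback branch is an assumption for unpatched kernels):

```shell
#!/bin/sh
# Report the fbcondecor userspace-helper path, if the sysctl exists.
# Assumes a kernel patched with 4200_fbcondecor.patch and built with
# CONFIG_FB_CON_DECOR=y; prints a placeholder otherwise.
fbcondecor_path() {
    f=/proc/sys/kernel/fbcondecor
    if [ -r "$f" ]; then
        cat "$f"
    else
        echo "(fbcondecor sysctl not present on this kernel)"
    fi
}
fbcondecor_path
```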
diff --git a/4400_alpha-sysctl-uac.patch b/4400_alpha-sysctl-uac.patch
new file mode 100644
index 0000000..d42b4ed
--- /dev/null
+++ b/4400_alpha-sysctl-uac.patch
@@ -0,0 +1,142 @@
+diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
+index 7f312d8..1eb686b 100644
+--- a/arch/alpha/Kconfig
++++ b/arch/alpha/Kconfig
+@@ -697,6 +697,33 @@ config HZ
+ default 1200 if HZ_1200
+ default 1024
+
++config ALPHA_UAC_SYSCTL
++ bool "Configure UAC policy via sysctl"
++ depends on SYSCTL
++ default y
++ ---help---
++ Configuring the UAC (unaligned access control) policy on a Linux
++ system usually involves setting a compile time define. If you say
++ Y here, you will be able to modify the UAC policy at runtime using
++ the /proc interface.
++
++ The UAC policy defines the action Linux should take when an
++ unaligned memory access occurs. The action can include printing a
++ warning message (NOPRINT), sending a signal to the offending
++ program to help developers debug their applications (SIGBUS), or
++ disabling the transparent fixing (NOFIX).
++
++ The sysctls will be initialized to the compile-time defined UAC
++ policy. You can change these manually, or with the sysctl(8)
++ userspace utility.
++
++ To disable the warning messages at runtime, you would use
++
++ echo 1 > /proc/sys/kernel/uac/noprint
++
++ This is pretty harmless. Say Y if you're not sure.
++
++
+ source "drivers/pci/Kconfig"
+ source "drivers/eisa/Kconfig"
+
+diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
+index 74aceea..cb35d80 100644
+--- a/arch/alpha/kernel/traps.c
++++ b/arch/alpha/kernel/traps.c
+@@ -103,6 +103,49 @@ static char * ireg_name[] = {"v0", "t0", "t1", "t2", "t3", "t4", "t5", "t6",
+ "t10", "t11", "ra", "pv", "at", "gp", "sp", "zero"};
+ #endif
+
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++
++#include <linux/sysctl.h>
++
++static int enabled_noprint = 0;
++static int enabled_sigbus = 0;
++static int enabled_nofix = 0;
++
++struct ctl_table uac_table[] = {
++ {
++ .procname = "noprint",
++ .data = &enabled_noprint,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec,
++ },
++ {
++ .procname = "sigbus",
++ .data = &enabled_sigbus,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec,
++ },
++ {
++ .procname = "nofix",
++ .data = &enabled_nofix,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec,
++ },
++ { }
++};
++
++static int __init init_uac_sysctl(void)
++{
++ /* Initialize sysctls with the #defined UAC policy */
++ enabled_noprint = (test_thread_flag (TS_UAC_NOPRINT)) ? 1 : 0;
++ enabled_sigbus = (test_thread_flag (TS_UAC_SIGBUS)) ? 1 : 0;
++ enabled_nofix = (test_thread_flag (TS_UAC_NOFIX)) ? 1 : 0;
++ return 0;
++}
++#endif
++
+ static void
+ dik_show_code(unsigned int *pc)
+ {
+@@ -785,7 +828,12 @@ do_entUnaUser(void __user * va, unsigned long opcode,
+ /* Check the UAC bits to decide what the user wants us to do
+ with the unaliged access. */
+
++#ifndef CONFIG_ALPHA_UAC_SYSCTL
+ if (!(current_thread_info()->status & TS_UAC_NOPRINT)) {
++#else /* CONFIG_ALPHA_UAC_SYSCTL */
++ if (!(current_thread_info()->status & TS_UAC_NOPRINT) &&
++ !(enabled_noprint)) {
++#endif /* CONFIG_ALPHA_UAC_SYSCTL */
+ if (__ratelimit(&ratelimit)) {
+ printk("%s(%d): unaligned trap at %016lx: %p %lx %ld\n",
+ current->comm, task_pid_nr(current),
+@@ -1090,3 +1138,6 @@ trap_init(void)
+ wrent(entSys, 5);
+ wrent(entDbg, 6);
+ }
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++ __initcall(init_uac_sysctl);
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 87b2fc3..55021a8 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -152,6 +152,11 @@ static unsigned long hung_task_timeout_max = (LONG_MAX/HZ);
+ #ifdef CONFIG_INOTIFY_USER
+ #include <linux/inotify.h>
+ #endif
++
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++extern struct ctl_table uac_table[];
++#endif
++
+ #ifdef CONFIG_SPARC
+ #endif
+
+@@ -1844,6 +1849,13 @@ static struct ctl_table debug_table[] = {
+ .extra2 = &one,
+ },
+ #endif
++#ifdef CONFIG_ALPHA_UAC_SYSCTL
++ {
++ .procname = "uac",
++ .mode = 0555,
++ .child = uac_table,
++ },
++#endif /* CONFIG_ALPHA_UAC_SYSCTL */
+ { }
+ };
+
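[Editorial sketch, not part of the patch] With CONFIG_ALPHA_UAC_SYSCTL enabled, the three knobs registered in uac_table above appear under /proc/sys/kernel/uac/. A hedged sketch that reads all three, degrading gracefully on non-Alpha kernels where the files are absent:

```shell
#!/bin/sh
# Show the current unaligned-access-control policy on Alpha.
# The /proc paths come from the uac_table registered in
# 4400_alpha-sysctl-uac.patch; on other architectures (or without
# CONFIG_ALPHA_UAC_SYSCTL) the files do not exist and a
# placeholder line is printed instead.
uac_show() {
    for knob in noprint sigbus nofix; do
        f="/proc/sys/kernel/uac/$knob"
        if [ -r "$f" ]; then
            printf '%s = %s\n' "$knob" "$(cat "$f")"
        else
            printf '%s = (sysctl not available)\n' "$knob"
        fi
    done
}
uac_show
```

Writing `1` to the corresponding file (as the Kconfig help text shows for `noprint`) flips the policy at runtime.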
diff --git a/5010_enable-additional-cpu-optimizations-for-gcc.patch b/5010_enable-additional-cpu-optimizations-for-gcc.patch
new file mode 100644
index 0000000..c68d072
--- /dev/null
+++ b/5010_enable-additional-cpu-optimizations-for-gcc.patch
@@ -0,0 +1,530 @@
+WARNING
+This patch works with gcc versions 4.9+ and with kernel version 3.15+ and should
+NOT be applied when compiling on older versions of gcc due to key name changes
+of the march flags introduced with the version 4.9 release of gcc.[1]
+
+Use the older version of this patch hosted on the same github for older
+versions of gcc.
+
+FEATURES
+This patch adds additional CPU options to the Linux kernel accessible under:
+ Processor type and features --->
+ Processor family --->
+
+The expanded microarchitectures include:
+* AMD Improved K8-family
+* AMD K10-family
+* AMD Family 10h (Barcelona)
+* AMD Family 14h (Bobcat)
+* AMD Family 16h (Jaguar)
+* AMD Family 15h (Bulldozer)
+* AMD Family 15h (Piledriver)
+* AMD Family 15h (Steamroller)
+* AMD Family 15h (Excavator)
+* AMD Family 17h (Zen)
+* Intel Silvermont low-power processors
+* Intel 1st Gen Core i3/i5/i7 (Nehalem)
+* Intel 1.5 Gen Core i3/i5/i7 (Westmere)
+* Intel 2nd Gen Core i3/i5/i7 (Sandybridge)
+* Intel 3rd Gen Core i3/i5/i7 (Ivybridge)
+* Intel 4th Gen Core i3/i5/i7 (Haswell)
+* Intel 5th Gen Core i3/i5/i7 (Broadwell)
+* Intel 6th Gen Core i3/i5/i7 (Skylake)
+
+It also offers the 'native' option, which "selects the CPU
+to generate code for at compilation time by determining the processor type of
+the compiling machine. Using -march=native enables all instruction subsets
+supported by the local machine and will produce code optimized for the local
+machine under the constraints of the selected instruction set."[3]
+
+MINOR NOTES
+This patch also changes 'atom' to 'bonnell' in accordance with the gcc v4.9
+changes. Note that upstream is using the deprecated 'march=atom' flags when I
+believe it should use the newer 'march=bonnell' flag for atom processors.[2]
+
+It is not recommended to compile on Atom-CPUs with the 'native' option.[4] The
+recommendation is to use the 'atom' option instead.
+
+BENEFITS
+Small but real speed increases are measurable using a make-based benchmark
+comparing a generic kernel to one built with one of the respective microarchs.
+
+See the following experimental evidence supporting this statement:
+https://github.com/graysky2/kernel_gcc_patch
+
+REQUIREMENTS
+linux version >=3.15
+gcc version >=4.9
+
+ACKNOWLEDGMENTS
+This patch builds on the seminal work by Jeroen.[5]
+
+REFERENCES
+1. https://gcc.gnu.org/gcc-4.9/changes.html
+2. https://bugzilla.kernel.org/show_bug.cgi?id=77461
+3. https://gcc.gnu.org/onlinedocs/gcc/x86-Options.html
+4. https://github.com/graysky2/kernel_gcc_patch/issues/15
+5. http://www.linuxforge.net/docs/linux/linux-gcc.php
+
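[Editorial sketch, not part of the patch] Since the patch requires gcc >= 4.9 (older releases still use the pre-rename march flag names), it can help to gate its application on the installed toolchain version. A small sketch using `sort -V` for the comparison (GNU coreutils assumed; the patch file names are the ones from this commit):

```shell
#!/bin/sh
# ver_ge A B: succeed when dotted version string A >= B.
# Used to decide whether this patch (gcc >= 4.9) or the older
# variant hosted on the same github should be applied.
ver_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

gccver="$(gcc -dumpversion 2>/dev/null || echo 0)"
if ver_ge "$gccver" 4.9; then
    echo "gcc $gccver: apply 5010_enable-additional-cpu-optimizations-for-gcc.patch"
else
    echo "gcc $gccver: use the older variant of this patch"
fi
```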
+--- a/arch/x86/include/asm/module.h 2017-08-02 11:41:47.442200461 -0400
++++ b/arch/x86/include/asm/module.h 2017-08-02 12:14:21.204358744 -0400
+@@ -15,6 +15,24 @@
+ #define MODULE_PROC_FAMILY "586MMX "
+ #elif defined CONFIG_MCORE2
+ #define MODULE_PROC_FAMILY "CORE2 "
++#elif defined CONFIG_MNATIVE
++#define MODULE_PROC_FAMILY "NATIVE "
++#elif defined CONFIG_MNEHALEM
++#define MODULE_PROC_FAMILY "NEHALEM "
++#elif defined CONFIG_MWESTMERE
++#define MODULE_PROC_FAMILY "WESTMERE "
++#elif defined CONFIG_MSILVERMONT
++#define MODULE_PROC_FAMILY "SILVERMONT "
++#elif defined CONFIG_MSANDYBRIDGE
++#define MODULE_PROC_FAMILY "SANDYBRIDGE "
++#elif defined CONFIG_MIVYBRIDGE
++#define MODULE_PROC_FAMILY "IVYBRIDGE "
++#elif defined CONFIG_MHASWELL
++#define MODULE_PROC_FAMILY "HASWELL "
++#elif defined CONFIG_MBROADWELL
++#define MODULE_PROC_FAMILY "BROADWELL "
++#elif defined CONFIG_MSKYLAKE
++#define MODULE_PROC_FAMILY "SKYLAKE "
+ #elif defined CONFIG_MATOM
+ #define MODULE_PROC_FAMILY "ATOM "
+ #elif defined CONFIG_M686
+@@ -33,6 +51,26 @@
+ #define MODULE_PROC_FAMILY "K7 "
+ #elif defined CONFIG_MK8
+ #define MODULE_PROC_FAMILY "K8 "
++#elif defined CONFIG_MK8SSE3
++#define MODULE_PROC_FAMILY "K8SSE3 "
++#elif defined CONFIG_MK10
++#define MODULE_PROC_FAMILY "K10 "
++#elif defined CONFIG_MBARCELONA
++#define MODULE_PROC_FAMILY "BARCELONA "
++#elif defined CONFIG_MBOBCAT
++#define MODULE_PROC_FAMILY "BOBCAT "
++#elif defined CONFIG_MBULLDOZER
++#define MODULE_PROC_FAMILY "BULLDOZER "
++#elif defined CONFIG_MPILEDRIVER
++#define MODULE_PROC_FAMILY "PILEDRIVER "
++#elif defined CONFIG_MSTEAMROLLER
++#define MODULE_PROC_FAMILY "STEAMROLLER "
++#elif defined CONFIG_MJAGUAR
++#define MODULE_PROC_FAMILY "JAGUAR "
++#elif defined CONFIG_MEXCAVATOR
++#define MODULE_PROC_FAMILY "EXCAVATOR "
++#elif defined CONFIG_MZEN
++#define MODULE_PROC_FAMILY "ZEN "
+ #elif defined CONFIG_MELAN
+ #define MODULE_PROC_FAMILY "ELAN "
+ #elif defined CONFIG_MCRUSOE
+--- a/arch/x86/Kconfig.cpu 2017-08-02 11:41:47.443200463 -0400
++++ b/arch/x86/Kconfig.cpu 2017-08-02 12:14:37.108956741 -0400
+@@ -115,6 +115,7 @@ config MPENTIUMM
+ config MPENTIUM4
+ bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon"
+ depends on X86_32
++ select X86_P6_NOP
+ ---help---
+ Select this for Intel Pentium 4 chips. This includes the
+ Pentium 4, Pentium D, P4-based Celeron and Xeon, and
+@@ -147,9 +148,8 @@ config MPENTIUM4
+ -Paxville
+ -Dempsey
+
+-
+ config MK6
+- bool "K6/K6-II/K6-III"
++ bool "AMD K6/K6-II/K6-III"
+ depends on X86_32
+ ---help---
+ Select this for an AMD K6-family processor. Enables use of
+@@ -157,7 +157,7 @@ config MK6
+ flags to GCC.
+
+ config MK7
+- bool "Athlon/Duron/K7"
++ bool "AMD Athlon/Duron/K7"
+ depends on X86_32
+ ---help---
+ Select this for an AMD Athlon K7-family processor. Enables use of
+@@ -165,12 +165,83 @@ config MK7
+ flags to GCC.
+
+ config MK8
+- bool "Opteron/Athlon64/Hammer/K8"
++ bool "AMD Opteron/Athlon64/Hammer/K8"
+ ---help---
+ Select this for an AMD Opteron or Athlon64 Hammer-family processor.
+ Enables use of some extended instructions, and passes appropriate
+ optimization flags to GCC.
+
++config MK8SSE3
++ bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3"
++ ---help---
++ Select this for improved AMD Opteron or Athlon64 Hammer-family processors.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MK10
++ bool "AMD 61xx/7x50/PhenomX3/X4/II/K10"
++ ---help---
++ Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50,
++ Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor.
++ Enables use of some extended instructions, and passes appropriate
++ optimization flags to GCC.
++
++config MBARCELONA
++ bool "AMD Barcelona"
++ ---help---
++ Select this for AMD Family 10h Barcelona processors.
++
++ Enables -march=barcelona
++
++config MBOBCAT
++ bool "AMD Bobcat"
++ ---help---
++ Select this for AMD Family 14h Bobcat processors.
++
++ Enables -march=btver1
++
++config MJAGUAR
++ bool "AMD Jaguar"
++ ---help---
++ Select this for AMD Family 16h Jaguar processors.
++
++ Enables -march=btver2
++
++config MBULLDOZER
++ bool "AMD Bulldozer"
++ ---help---
++ Select this for AMD Family 15h Bulldozer processors.
++
++ Enables -march=bdver1
++
++config MPILEDRIVER
++ bool "AMD Piledriver"
++ ---help---
++ Select this for AMD Family 15h Piledriver processors.
++
++ Enables -march=bdver2
++
++config MSTEAMROLLER
++ bool "AMD Steamroller"
++ ---help---
++ Select this for AMD Family 15h Steamroller processors.
++
++ Enables -march=bdver3
++
++config MEXCAVATOR
++ bool "AMD Excavator"
++ ---help---
++ Select this for AMD Family 15h Excavator processors.
++
++ Enables -march=bdver4
++
++config MZEN
++ bool "AMD Zen"
++ ---help---
++ Select this for AMD Family 17h Zen processors.
++
++ Enables -march=znver1
++
+ config MCRUSOE
+ bool "Crusoe"
+ depends on X86_32
+@@ -252,6 +323,7 @@ config MVIAC7
+
+ config MPSC
+ bool "Intel P4 / older Netburst based Xeon"
++ select X86_P6_NOP
+ depends on X86_64
+ ---help---
+ Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey
+@@ -261,8 +333,19 @@ config MPSC
+ using the cpu family field
+ in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one.
+
++config MATOM
++ bool "Intel Atom"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Atom platform. Intel Atom CPUs have an
++ in-order pipelining architecture and thus can benefit from
++ accordingly optimized code. Use a recent GCC with specific Atom
++ support in order to fully benefit from selecting this option.
++
+ config MCORE2
+- bool "Core 2/newer Xeon"
++ bool "Intel Core 2"
++ select X86_P6_NOP
+ ---help---
+
+ Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
+@@ -270,14 +353,79 @@ config MCORE2
+ family in /proc/cpuinfo. Newer ones have 6 and older ones 15
+ (not a typo)
+
+-config MATOM
+- bool "Intel Atom"
++ Enables -march=core2
++
++config MNEHALEM
++ bool "Intel Nehalem"
++ select X86_P6_NOP
+ ---help---
+
+- Select this for the Intel Atom platform. Intel Atom CPUs have an
+- in-order pipelining architecture and thus can benefit from
+- accordingly optimized code. Use a recent GCC with specific Atom
+- support in order to fully benefit from selecting this option.
++ Select this for 1st Gen Core processors in the Nehalem family.
++
++ Enables -march=nehalem
++
++config MWESTMERE
++ bool "Intel Westmere"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Westmere formerly Nehalem-C family.
++
++ Enables -march=westmere
++
++config MSILVERMONT
++ bool "Intel Silvermont"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for the Intel Silvermont platform.
++
++ Enables -march=silvermont
++
++config MSANDYBRIDGE
++ bool "Intel Sandy Bridge"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 2nd Gen Core processors in the Sandy Bridge family.
++
++ Enables -march=sandybridge
++
++config MIVYBRIDGE
++ bool "Intel Ivy Bridge"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 3rd Gen Core processors in the Ivy Bridge family.
++
++ Enables -march=ivybridge
++
++config MHASWELL
++ bool "Intel Haswell"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 4th Gen Core processors in the Haswell family.
++
++ Enables -march=haswell
++
++config MBROADWELL
++ bool "Intel Broadwell"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 5th Gen Core processors in the Broadwell family.
++
++ Enables -march=broadwell
++
++config MSKYLAKE
++ bool "Intel Skylake"
++ select X86_P6_NOP
++ ---help---
++
++ Select this for 6th Gen Core processors in the Skylake family.
++
++ Enables -march=skylake
+
+ config GENERIC_CPU
+ bool "Generic-x86-64"
+@@ -286,6 +434,19 @@ config GENERIC_CPU
+ Generic x86-64 CPU.
+ Run equally well on all x86-64 CPUs.
+
++config MNATIVE
++ bool "Native optimizations autodetected by GCC"
++ ---help---
++
++ GCC 4.2 and above support -march=native, which automatically detects
++ the optimum settings to use based on your processor. -march=native
++ also detects and applies additional settings beyond -march specific
++ to your CPU, (eg. -msse4). Unless you have a specific reason not to
++ (e.g. distcc cross-compiling), you should probably be using
++ -march=native rather than anything listed below.
++
++ Enables -march=native
++
+ endchoice
+
+ config X86_GENERIC
+@@ -310,7 +471,7 @@ config X86_INTERNODE_CACHE_SHIFT
+ config X86_L1_CACHE_SHIFT
+ int
+ default "7" if MPENTIUM4 || MPSC
+- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
++ default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
+ default "4" if MELAN || M486 || MGEODEGX1
+ default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
+
+@@ -341,45 +502,46 @@ config X86_ALIGNMENT_16
+
+ config X86_INTEL_USERCOPY
+ def_bool y
+- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE
+
+ config X86_USE_PPRO_CHECKSUM
+ def_bool y
+- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MATOM || MNATIVE
+
+ config X86_USE_3DNOW
+ def_bool y
+ depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML
+
+-#
+-# P6_NOPs are a relatively minor optimization that require a family >=
+-# 6 processor, except that it is broken on certain VIA chips.
+-# Furthermore, AMD chips prefer a totally different sequence of NOPs
+-# (which work on all CPUs). In addition, it looks like Virtual PC
+-# does not understand them.
+-#
+-# As a result, disallow these if we're not compiling for X86_64 (these
+-# NOPs do work on all x86-64 capable chips); the list of processors in
+-# the right-hand clause are the cores that benefit from this optimization.
+-#
+ config X86_P6_NOP
+- def_bool y
+- depends on X86_64
+- depends on (MCORE2 || MPENTIUM4 || MPSC)
++ default n
++ bool "Support for P6_NOPs on Intel chips"
++ depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE)
++ ---help---
++ P6_NOPs are a relatively minor optimization that require a family >=
++ 6 processor, except that it is broken on certain VIA chips.
++ Furthermore, AMD chips prefer a totally different sequence of NOPs
++ (which work on all CPUs). In addition, it looks like Virtual PC
++ does not understand them.
++
++ As a result, disallow these if we're not compiling for X86_64 (these
++ NOPs do work on all x86-64 capable chips); the list of processors in
++ the right-hand clause are the cores that benefit from this optimization.
++
++ Say Y if you have Intel CPU newer than Pentium Pro, N otherwise.
+
+ config X86_TSC
+ def_bool y
+- depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64
++ depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM) || X86_64
+
+ config X86_CMPXCHG64
+ def_bool y
+- depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM
++ depends on X86_PAE || X86_64 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM || MNATIVE
+
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+ config X86_CMOV
+ def_bool y
+- depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX)
++ depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX)
+
+ config X86_MINIMUM_CPU_FAMILY
+ int
+--- a/arch/x86/Makefile 2017-08-02 11:41:47.443200463 -0400
++++ b/arch/x86/Makefile 2017-08-02 12:14:46.373727353 -0400
+@@ -121,13 +121,40 @@ else
+ KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup)
+
+ # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu)
++ cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
++ cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8)
++ cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10)
++ cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona)
++ cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1)
++ cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2)
++ cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1)
++ cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2)
++ cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3)
++ cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4)
++ cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1)
+ cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona)
+
+ cflags-$(CONFIG_MCORE2) += \
+- $(call cc-option,-march=core2,$(call cc-option,-mtune=generic))
+- cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
++ $(call cc-option,-march=core2,$(call cc-option,-mtune=core2))
++ cflags-$(CONFIG_MNEHALEM) += \
++ $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem))
++ cflags-$(CONFIG_MWESTMERE) += \
++ $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere))
++ cflags-$(CONFIG_MSILVERMONT) += \
++ $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont))
++ cflags-$(CONFIG_MSANDYBRIDGE) += \
++ $(call cc-option,-march=sandybridge,$(call cc-option,-mtune=sandybridge))
++ cflags-$(CONFIG_MIVYBRIDGE) += \
++ $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge))
++ cflags-$(CONFIG_MHASWELL) += \
++ $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell))
++ cflags-$(CONFIG_MBROADWELL) += \
++ $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell))
++ cflags-$(CONFIG_MSKYLAKE) += \
++ $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake))
++ cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \
++ $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic))
+ cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic)
+ KBUILD_CFLAGS += $(cflags-y)
+
+--- a/arch/x86/Makefile_32.cpu 2017-08-02 11:41:47.444200464 -0400
++++ b/arch/x86/Makefile_32.cpu 2017-08-02 12:23:41.636760695 -0400
+@@ -22,7 +22,18 @@ cflags-$(CONFIG_MK6) += -march=k6
+ # Please note, that patches that add -march=athlon-xp and friends are pointless.
+ # They make zero difference whatsosever to performance at this time.
+ cflags-$(CONFIG_MK7) += -march=athlon
++cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native)
+ cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon)
++cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-march=athlon)
++cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon)
++cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon)
++cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon)
++cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon)
++cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon)
++cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon)
++cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3,-march=athlon)
++cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4,-march=athlon)
++cflags-$(CONFIG_MZEN) += $(call cc-option,-march=znver1,-march=athlon)
+ cflags-$(CONFIG_MCRUSOE) += -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0
+ cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586)
+@@ -31,9 +42,12 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-
+ cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686)
+ cflags-$(CONFIG_MVIAC7) += -march=i686
+ cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2)
+-cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \
+- $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic))
+-
++cflags-$(CONFIG_MNEHALEM) += -march=i686 $(call tune,nehalem)
++cflags-$(CONFIG_MWESTMERE) += -march=i686 $(call tune,westmere)
++cflags-$(CONFIG_MSILVERMONT) += -march=i686 $(call tune,silvermont)
++cflags-$(CONFIG_MSANDYBRIDGE) += -march=i686 $(call tune,sandybridge)
++cflags-$(CONFIG_MIVYBRIDGE) += -march=i686 $(call tune,ivybridge)
++cflags-$(CONFIG_MHASWELL) += -march=i686 $(call tune,haswell)
+ # AMD Elan support
+ cflags-$(CONFIG_MELAN) += -march=i486
+
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-02-03 21:20 Mike Pagano
From: Mike Pagano @ 2018-02-03 21:20 UTC (permalink / raw
To: gentoo-commits
commit: cd10231abdae4f2e8ddd323db3031b9d5778b320
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 3 21:19:57 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb 3 21:19:57 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cd10231a
Linux patch 4.15.1
0000_README | 4 +
1000_linux-4.15.1.patch | 1808 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1812 insertions(+)
diff --git a/0000_README b/0000_README
index 01553d4..da07a38 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1000_linux-4.15.1.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.1
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1000_linux-4.15.1.patch b/1000_linux-4.15.1.patch
new file mode 100644
index 0000000..7a10ddd
--- /dev/null
+++ b/1000_linux-4.15.1.patch
@@ -0,0 +1,1808 @@
+diff --git a/Makefile b/Makefile
+index c8b8e902d5a4..af101b556ba0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
+index 3d09e3aca18d..12e8484a8ee7 100644
+--- a/arch/x86/crypto/aesni-intel_asm.S
++++ b/arch/x86/crypto/aesni-intel_asm.S
+@@ -90,30 +90,6 @@ SHIFT_MASK: .octa 0x0f0e0d0c0b0a09080706050403020100
+ ALL_F: .octa 0xffffffffffffffffffffffffffffffff
+ .octa 0x00000000000000000000000000000000
+
+-.section .rodata
+-.align 16
+-.type aad_shift_arr, @object
+-.size aad_shift_arr, 272
+-aad_shift_arr:
+- .octa 0xffffffffffffffffffffffffffffffff
+- .octa 0xffffffffffffffffffffffffffffff0C
+- .octa 0xffffffffffffffffffffffffffff0D0C
+- .octa 0xffffffffffffffffffffffffff0E0D0C
+- .octa 0xffffffffffffffffffffffff0F0E0D0C
+- .octa 0xffffffffffffffffffffff0C0B0A0908
+- .octa 0xffffffffffffffffffff0D0C0B0A0908
+- .octa 0xffffffffffffffffff0E0D0C0B0A0908
+- .octa 0xffffffffffffffff0F0E0D0C0B0A0908
+- .octa 0xffffffffffffff0C0B0A090807060504
+- .octa 0xffffffffffff0D0C0B0A090807060504
+- .octa 0xffffffffff0E0D0C0B0A090807060504
+- .octa 0xffffffff0F0E0D0C0B0A090807060504
+- .octa 0xffffff0C0B0A09080706050403020100
+- .octa 0xffff0D0C0B0A09080706050403020100
+- .octa 0xff0E0D0C0B0A09080706050403020100
+- .octa 0x0F0E0D0C0B0A09080706050403020100
+-
+-
+ .text
+
+
+@@ -257,6 +233,37 @@ aad_shift_arr:
+ pxor \TMP1, \GH # result is in TMP1
+ .endm
+
++# Reads DLEN bytes starting at DPTR and stores in XMMDst
++# where 0 < DLEN < 16
++# Clobbers %rax, DLEN and XMM1
++.macro READ_PARTIAL_BLOCK DPTR DLEN XMM1 XMMDst
++ cmp $8, \DLEN
++ jl _read_lt8_\@
++ mov (\DPTR), %rax
++ MOVQ_R64_XMM %rax, \XMMDst
++ sub $8, \DLEN
++ jz _done_read_partial_block_\@
++ xor %eax, %eax
++_read_next_byte_\@:
++ shl $8, %rax
++ mov 7(\DPTR, \DLEN, 1), %al
++ dec \DLEN
++ jnz _read_next_byte_\@
++ MOVQ_R64_XMM %rax, \XMM1
++ pslldq $8, \XMM1
++ por \XMM1, \XMMDst
++ jmp _done_read_partial_block_\@
++_read_lt8_\@:
++ xor %eax, %eax
++_read_next_byte_lt8_\@:
++ shl $8, %rax
++ mov -1(\DPTR, \DLEN, 1), %al
++ dec \DLEN
++ jnz _read_next_byte_lt8_\@
++ MOVQ_R64_XMM %rax, \XMMDst
++_done_read_partial_block_\@:
++.endm
++
+ /*
+ * if a = number of total plaintext bytes
+ * b = floor(a/16)
+@@ -273,62 +280,30 @@ aad_shift_arr:
+ XMM2 XMM3 XMM4 XMMDst TMP6 TMP7 i i_seq operation
+ MOVADQ SHUF_MASK(%rip), %xmm14
+ mov arg7, %r10 # %r10 = AAD
+- mov arg8, %r12 # %r12 = aadLen
+- mov %r12, %r11
++ mov arg8, %r11 # %r11 = aadLen
+ pxor %xmm\i, %xmm\i
+ pxor \XMM2, \XMM2
+
+ cmp $16, %r11
+- jl _get_AAD_rest8\num_initial_blocks\operation
++ jl _get_AAD_rest\num_initial_blocks\operation
+ _get_AAD_blocks\num_initial_blocks\operation:
+ movdqu (%r10), %xmm\i
+ PSHUFB_XMM %xmm14, %xmm\i # byte-reflect the AAD data
+ pxor %xmm\i, \XMM2
+ GHASH_MUL \XMM2, \TMP3, \TMP1, \TMP2, \TMP4, \TMP5, \XMM1
+ add $16, %r10
+- sub $16, %r12
+ sub $16, %r11
+ cmp $16, %r11
+ jge _get_AAD_blocks\num_initial_blocks\operation
+
+ movdqu \XMM2, %xmm\i
++
++ /* read the last <16B of AAD */
++_get_AAD_rest\num_initial_blocks\operation:
+ cmp $0, %r11
+ je _get_AAD_done\num_initial_blocks\operation
+
+- pxor %xmm\i,%xmm\i
+-
+- /* read the last <16B of AAD. since we have at least 4B of
+- data right after the AAD (the ICV, and maybe some CT), we can
+- read 4B/8B blocks safely, and then get rid of the extra stuff */
+-_get_AAD_rest8\num_initial_blocks\operation:
+- cmp $4, %r11
+- jle _get_AAD_rest4\num_initial_blocks\operation
+- movq (%r10), \TMP1
+- add $8, %r10
+- sub $8, %r11
+- pslldq $8, \TMP1
+- psrldq $8, %xmm\i
+- pxor \TMP1, %xmm\i
+- jmp _get_AAD_rest8\num_initial_blocks\operation
+-_get_AAD_rest4\num_initial_blocks\operation:
+- cmp $0, %r11
+- jle _get_AAD_rest0\num_initial_blocks\operation
+- mov (%r10), %eax
+- movq %rax, \TMP1
+- add $4, %r10
+- sub $4, %r10
+- pslldq $12, \TMP1
+- psrldq $4, %xmm\i
+- pxor \TMP1, %xmm\i
+-_get_AAD_rest0\num_initial_blocks\operation:
+- /* finalize: shift out the extra bytes we read, and align
+- left. since pslldq can only shift by an immediate, we use
+- vpshufb and an array of shuffle masks */
+- movq %r12, %r11
+- salq $4, %r11
+- movdqu aad_shift_arr(%r11), \TMP1
+- PSHUFB_XMM \TMP1, %xmm\i
+-_get_AAD_rest_final\num_initial_blocks\operation:
++ READ_PARTIAL_BLOCK %r10, %r11, \TMP1, %xmm\i
+ PSHUFB_XMM %xmm14, %xmm\i # byte-reflect the AAD data
+ pxor \XMM2, %xmm\i
+ GHASH_MUL %xmm\i, \TMP3, \TMP1, \TMP2, \TMP4, \TMP5, \XMM1
+@@ -532,62 +507,30 @@ _initial_blocks_done\num_initial_blocks\operation:
+ XMM2 XMM3 XMM4 XMMDst TMP6 TMP7 i i_seq operation
+ MOVADQ SHUF_MASK(%rip), %xmm14
+ mov arg7, %r10 # %r10 = AAD
+- mov arg8, %r12 # %r12 = aadLen
+- mov %r12, %r11
++ mov arg8, %r11 # %r11 = aadLen
+ pxor %xmm\i, %xmm\i
+ pxor \XMM2, \XMM2
+
+ cmp $16, %r11
+- jl _get_AAD_rest8\num_initial_blocks\operation
++ jl _get_AAD_rest\num_initial_blocks\operation
+ _get_AAD_blocks\num_initial_blocks\operation:
+ movdqu (%r10), %xmm\i
+ PSHUFB_XMM %xmm14, %xmm\i # byte-reflect the AAD data
+ pxor %xmm\i, \XMM2
+ GHASH_MUL \XMM2, \TMP3, \TMP1, \TMP2, \TMP4, \TMP5, \XMM1
+ add $16, %r10
+- sub $16, %r12
+ sub $16, %r11
+ cmp $16, %r11
+ jge _get_AAD_blocks\num_initial_blocks\operation
+
+ movdqu \XMM2, %xmm\i
++
++ /* read the last <16B of AAD */
++_get_AAD_rest\num_initial_blocks\operation:
+ cmp $0, %r11
+ je _get_AAD_done\num_initial_blocks\operation
+
+- pxor %xmm\i,%xmm\i
+-
+- /* read the last <16B of AAD. since we have at least 4B of
+- data right after the AAD (the ICV, and maybe some PT), we can
+- read 4B/8B blocks safely, and then get rid of the extra stuff */
+-_get_AAD_rest8\num_initial_blocks\operation:
+- cmp $4, %r11
+- jle _get_AAD_rest4\num_initial_blocks\operation
+- movq (%r10), \TMP1
+- add $8, %r10
+- sub $8, %r11
+- pslldq $8, \TMP1
+- psrldq $8, %xmm\i
+- pxor \TMP1, %xmm\i
+- jmp _get_AAD_rest8\num_initial_blocks\operation
+-_get_AAD_rest4\num_initial_blocks\operation:
+- cmp $0, %r11
+- jle _get_AAD_rest0\num_initial_blocks\operation
+- mov (%r10), %eax
+- movq %rax, \TMP1
+- add $4, %r10
+- sub $4, %r10
+- pslldq $12, \TMP1
+- psrldq $4, %xmm\i
+- pxor \TMP1, %xmm\i
+-_get_AAD_rest0\num_initial_blocks\operation:
+- /* finalize: shift out the extra bytes we read, and align
+- left. since pslldq can only shift by an immediate, we use
+- vpshufb and an array of shuffle masks */
+- movq %r12, %r11
+- salq $4, %r11
+- movdqu aad_shift_arr(%r11), \TMP1
+- PSHUFB_XMM \TMP1, %xmm\i
+-_get_AAD_rest_final\num_initial_blocks\operation:
++ READ_PARTIAL_BLOCK %r10, %r11, \TMP1, %xmm\i
+ PSHUFB_XMM %xmm14, %xmm\i # byte-reflect the AAD data
+ pxor \XMM2, %xmm\i
+ GHASH_MUL %xmm\i, \TMP3, \TMP1, \TMP2, \TMP4, \TMP5, \XMM1
+@@ -1386,14 +1329,6 @@ _esb_loop_\@:
+ *
+ * AAD Format with 64-bit Extended Sequence Number
+ *
+-* aadLen:
+-* from the definition of the spec, aadLen can only be 8 or 12 bytes.
+-* The code supports 16 too but for other sizes, the code will fail.
+-*
+-* TLen:
+-* from the definition of the spec, TLen can only be 8, 12 or 16 bytes.
+-* For other sizes, the code will fail.
+-*
+ * poly = x^128 + x^127 + x^126 + x^121 + 1
+ *
+ *****************************************************************************/
+@@ -1487,19 +1422,16 @@ _zero_cipher_left_decrypt:
+ PSHUFB_XMM %xmm10, %xmm0
+
+ ENCRYPT_SINGLE_BLOCK %xmm0, %xmm1 # E(K, Yn)
+- sub $16, %r11
+- add %r13, %r11
+- movdqu (%arg3,%r11,1), %xmm1 # receive the last <16 byte block
+- lea SHIFT_MASK+16(%rip), %r12
+- sub %r13, %r12
+-# adjust the shuffle mask pointer to be able to shift 16-%r13 bytes
+-# (%r13 is the number of bytes in plaintext mod 16)
+- movdqu (%r12), %xmm2 # get the appropriate shuffle mask
+- PSHUFB_XMM %xmm2, %xmm1 # right shift 16-%r13 butes
+
++ lea (%arg3,%r11,1), %r10
++ mov %r13, %r12
++ READ_PARTIAL_BLOCK %r10 %r12 %xmm2 %xmm1
++
++ lea ALL_F+16(%rip), %r12
++ sub %r13, %r12
+ movdqa %xmm1, %xmm2
+ pxor %xmm1, %xmm0 # Ciphertext XOR E(K, Yn)
+- movdqu ALL_F-SHIFT_MASK(%r12), %xmm1
++ movdqu (%r12), %xmm1
+ # get the appropriate mask to mask out top 16-%r13 bytes of %xmm0
+ pand %xmm1, %xmm0 # mask out top 16-%r13 bytes of %xmm0
+ pand %xmm1, %xmm2
+@@ -1508,9 +1440,6 @@ _zero_cipher_left_decrypt:
+
+ pxor %xmm2, %xmm8
+ GHASH_MUL %xmm8, %xmm13, %xmm9, %xmm10, %xmm11, %xmm5, %xmm6
+- # GHASH computation for the last <16 byte block
+- sub %r13, %r11
+- add $16, %r11
+
+ # output %r13 bytes
+ MOVQ_R64_XMM %xmm0, %rax
+@@ -1664,14 +1593,6 @@ ENDPROC(aesni_gcm_dec)
+ *
+ * AAD Format with 64-bit Extended Sequence Number
+ *
+-* aadLen:
+-* from the definition of the spec, aadLen can only be 8 or 12 bytes.
+-* The code supports 16 too but for other sizes, the code will fail.
+-*
+-* TLen:
+-* from the definition of the spec, TLen can only be 8, 12 or 16 bytes.
+-* For other sizes, the code will fail.
+-*
+ * poly = x^128 + x^127 + x^126 + x^121 + 1
+ ***************************************************************************/
+ ENTRY(aesni_gcm_enc)
+@@ -1764,19 +1685,16 @@ _zero_cipher_left_encrypt:
+ movdqa SHUF_MASK(%rip), %xmm10
+ PSHUFB_XMM %xmm10, %xmm0
+
+-
+ ENCRYPT_SINGLE_BLOCK %xmm0, %xmm1 # Encrypt(K, Yn)
+- sub $16, %r11
+- add %r13, %r11
+- movdqu (%arg3,%r11,1), %xmm1 # receive the last <16 byte blocks
+- lea SHIFT_MASK+16(%rip), %r12
++
++ lea (%arg3,%r11,1), %r10
++ mov %r13, %r12
++ READ_PARTIAL_BLOCK %r10 %r12 %xmm2 %xmm1
++
++ lea ALL_F+16(%rip), %r12
+ sub %r13, %r12
+- # adjust the shuffle mask pointer to be able to shift 16-r13 bytes
+- # (%r13 is the number of bytes in plaintext mod 16)
+- movdqu (%r12), %xmm2 # get the appropriate shuffle mask
+- PSHUFB_XMM %xmm2, %xmm1 # shift right 16-r13 byte
+ pxor %xmm1, %xmm0 # Plaintext XOR Encrypt(K, Yn)
+- movdqu ALL_F-SHIFT_MASK(%r12), %xmm1
++ movdqu (%r12), %xmm1
+ # get the appropriate mask to mask out top 16-r13 bytes of xmm0
+ pand %xmm1, %xmm0 # mask out top 16-r13 bytes of xmm0
+ movdqa SHUF_MASK(%rip), %xmm10
+@@ -1785,9 +1703,6 @@ _zero_cipher_left_encrypt:
+ pxor %xmm0, %xmm8
+ GHASH_MUL %xmm8, %xmm13, %xmm9, %xmm10, %xmm11, %xmm5, %xmm6
+ # GHASH computation for the last <16 byte block
+- sub %r13, %r11
+- add $16, %r11
+-
+ movdqa SHUF_MASK(%rip), %xmm10
+ PSHUFB_XMM %xmm10, %xmm0
+
+diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
+index 3bf3dcf29825..34cf1c1f8c98 100644
+--- a/arch/x86/crypto/aesni-intel_glue.c
++++ b/arch/x86/crypto/aesni-intel_glue.c
+@@ -690,8 +690,8 @@ static int common_rfc4106_set_key(struct crypto_aead *aead, const u8 *key,
+ rfc4106_set_hash_subkey(ctx->hash_subkey, key, key_len);
+ }
+
+-static int rfc4106_set_key(struct crypto_aead *parent, const u8 *key,
+- unsigned int key_len)
++static int gcmaes_wrapper_set_key(struct crypto_aead *parent, const u8 *key,
++ unsigned int key_len)
+ {
+ struct cryptd_aead **ctx = crypto_aead_ctx(parent);
+ struct cryptd_aead *cryptd_tfm = *ctx;
+@@ -716,8 +716,8 @@ static int common_rfc4106_set_authsize(struct crypto_aead *aead,
+
+ /* This is the Integrity Check Value (aka the authentication tag length and can
+ * be 8, 12 or 16 bytes long. */
+-static int rfc4106_set_authsize(struct crypto_aead *parent,
+- unsigned int authsize)
++static int gcmaes_wrapper_set_authsize(struct crypto_aead *parent,
++ unsigned int authsize)
+ {
+ struct cryptd_aead **ctx = crypto_aead_ctx(parent);
+ struct cryptd_aead *cryptd_tfm = *ctx;
+@@ -824,7 +824,7 @@ static int gcmaes_decrypt(struct aead_request *req, unsigned int assoclen,
+ if (sg_is_last(req->src) &&
+ (!PageHighMem(sg_page(req->src)) ||
+ req->src->offset + req->src->length <= PAGE_SIZE) &&
+- sg_is_last(req->dst) &&
++ sg_is_last(req->dst) && req->dst->length &&
+ (!PageHighMem(sg_page(req->dst)) ||
+ req->dst->offset + req->dst->length <= PAGE_SIZE)) {
+ one_entry_in_sg = 1;
+@@ -929,7 +929,7 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
+ aes_ctx);
+ }
+
+-static int rfc4106_encrypt(struct aead_request *req)
++static int gcmaes_wrapper_encrypt(struct aead_request *req)
+ {
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct cryptd_aead **ctx = crypto_aead_ctx(tfm);
+@@ -945,7 +945,7 @@ static int rfc4106_encrypt(struct aead_request *req)
+ return crypto_aead_encrypt(req);
+ }
+
+-static int rfc4106_decrypt(struct aead_request *req)
++static int gcmaes_wrapper_decrypt(struct aead_request *req)
+ {
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ struct cryptd_aead **ctx = crypto_aead_ctx(tfm);
+@@ -1117,7 +1117,7 @@ static int generic_gcmaes_decrypt(struct aead_request *req)
+ {
+ __be32 counter = cpu_to_be32(1);
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+- struct aesni_rfc4106_gcm_ctx *ctx = aesni_rfc4106_gcm_ctx_get(tfm);
++ struct generic_gcmaes_ctx *ctx = generic_gcmaes_ctx_get(tfm);
+ void *aes_ctx = &(ctx->aes_key_expanded);
+ u8 iv[16] __attribute__ ((__aligned__(AESNI_ALIGN)));
+
+@@ -1128,6 +1128,30 @@ static int generic_gcmaes_decrypt(struct aead_request *req)
+ aes_ctx);
+ }
+
++static int generic_gcmaes_init(struct crypto_aead *aead)
++{
++ struct cryptd_aead *cryptd_tfm;
++ struct cryptd_aead **ctx = crypto_aead_ctx(aead);
++
++ cryptd_tfm = cryptd_alloc_aead("__driver-generic-gcm-aes-aesni",
++ CRYPTO_ALG_INTERNAL,
++ CRYPTO_ALG_INTERNAL);
++ if (IS_ERR(cryptd_tfm))
++ return PTR_ERR(cryptd_tfm);
++
++ *ctx = cryptd_tfm;
++ crypto_aead_set_reqsize(aead, crypto_aead_reqsize(&cryptd_tfm->base));
++
++ return 0;
++}
++
++static void generic_gcmaes_exit(struct crypto_aead *aead)
++{
++ struct cryptd_aead **ctx = crypto_aead_ctx(aead);
++
++ cryptd_free_aead(*ctx);
++}
++
+ static struct aead_alg aesni_aead_algs[] = { {
+ .setkey = common_rfc4106_set_key,
+ .setauthsize = common_rfc4106_set_authsize,
+@@ -1147,10 +1171,10 @@ static struct aead_alg aesni_aead_algs[] = { {
+ }, {
+ .init = rfc4106_init,
+ .exit = rfc4106_exit,
+- .setkey = rfc4106_set_key,
+- .setauthsize = rfc4106_set_authsize,
+- .encrypt = rfc4106_encrypt,
+- .decrypt = rfc4106_decrypt,
++ .setkey = gcmaes_wrapper_set_key,
++ .setauthsize = gcmaes_wrapper_set_authsize,
++ .encrypt = gcmaes_wrapper_encrypt,
++ .decrypt = gcmaes_wrapper_decrypt,
+ .ivsize = GCM_RFC4106_IV_SIZE,
+ .maxauthsize = 16,
+ .base = {
+@@ -1169,14 +1193,32 @@ static struct aead_alg aesni_aead_algs[] = { {
+ .decrypt = generic_gcmaes_decrypt,
+ .ivsize = GCM_AES_IV_SIZE,
+ .maxauthsize = 16,
++ .base = {
++ .cra_name = "__generic-gcm-aes-aesni",
++ .cra_driver_name = "__driver-generic-gcm-aes-aesni",
++ .cra_priority = 0,
++ .cra_flags = CRYPTO_ALG_INTERNAL,
++ .cra_blocksize = 1,
++ .cra_ctxsize = sizeof(struct generic_gcmaes_ctx),
++ .cra_alignmask = AESNI_ALIGN - 1,
++ .cra_module = THIS_MODULE,
++ },
++}, {
++ .init = generic_gcmaes_init,
++ .exit = generic_gcmaes_exit,
++ .setkey = gcmaes_wrapper_set_key,
++ .setauthsize = gcmaes_wrapper_set_authsize,
++ .encrypt = gcmaes_wrapper_encrypt,
++ .decrypt = gcmaes_wrapper_decrypt,
++ .ivsize = GCM_AES_IV_SIZE,
++ .maxauthsize = 16,
+ .base = {
+ .cra_name = "gcm(aes)",
+ .cra_driver_name = "generic-gcm-aesni",
+ .cra_priority = 400,
+ .cra_flags = CRYPTO_ALG_ASYNC,
+ .cra_blocksize = 1,
+- .cra_ctxsize = sizeof(struct generic_gcmaes_ctx),
+- .cra_alignmask = AESNI_ALIGN - 1,
++ .cra_ctxsize = sizeof(struct cryptd_aead *),
+ .cra_module = THIS_MODULE,
+ },
+ } };
+diff --git a/crypto/Kconfig b/crypto/Kconfig
+index f7911963bb79..9327fbfccf5a 100644
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -130,7 +130,7 @@ config CRYPTO_DH
+
+ config CRYPTO_ECDH
+ tristate "ECDH algorithm"
+- select CRYTPO_KPP
++ select CRYPTO_KPP
+ select CRYPTO_RNG_DEFAULT
+ help
+ Generic implementation of the ECDH algorithm
+diff --git a/crypto/af_alg.c b/crypto/af_alg.c
+index 35d4dcea381f..5231f421ad00 100644
+--- a/crypto/af_alg.c
++++ b/crypto/af_alg.c
+@@ -150,7 +150,7 @@ EXPORT_SYMBOL_GPL(af_alg_release_parent);
+
+ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ {
+- const u32 forbidden = CRYPTO_ALG_INTERNAL;
++ const u32 allowed = CRYPTO_ALG_KERN_DRIVER_ONLY;
+ struct sock *sk = sock->sk;
+ struct alg_sock *ask = alg_sk(sk);
+ struct sockaddr_alg *sa = (void *)uaddr;
+@@ -158,6 +158,10 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ void *private;
+ int err;
+
++ /* If caller uses non-allowed flag, return error. */
++ if ((sa->salg_feat & ~allowed) || (sa->salg_mask & ~allowed))
++ return -EINVAL;
++
+ if (sock->state == SS_CONNECTED)
+ return -EINVAL;
+
+@@ -176,9 +180,7 @@ static int alg_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ if (IS_ERR(type))
+ return PTR_ERR(type);
+
+- private = type->bind(sa->salg_name,
+- sa->salg_feat & ~forbidden,
+- sa->salg_mask & ~forbidden);
++ private = type->bind(sa->salg_name, sa->salg_feat, sa->salg_mask);
+ if (IS_ERR(private)) {
+ module_put(type->owner);
+ return PTR_ERR(private);
+diff --git a/crypto/sha3_generic.c b/crypto/sha3_generic.c
+index 7e8ed96236ce..a68be626017c 100644
+--- a/crypto/sha3_generic.c
++++ b/crypto/sha3_generic.c
+@@ -18,6 +18,7 @@
+ #include <linux/types.h>
+ #include <crypto/sha3.h>
+ #include <asm/byteorder.h>
++#include <asm/unaligned.h>
+
+ #define KECCAK_ROUNDS 24
+
+@@ -149,7 +150,7 @@ static int sha3_update(struct shash_desc *desc, const u8 *data,
+ unsigned int i;
+
+ for (i = 0; i < sctx->rsizw; i++)
+- sctx->st[i] ^= ((u64 *) src)[i];
++ sctx->st[i] ^= get_unaligned_le64(src + 8 * i);
+ keccakf(sctx->st);
+
+ done += sctx->rsiz;
+@@ -174,7 +175,7 @@ static int sha3_final(struct shash_desc *desc, u8 *out)
+ sctx->buf[sctx->rsiz - 1] |= 0x80;
+
+ for (i = 0; i < sctx->rsizw; i++)
+- sctx->st[i] ^= ((u64 *) sctx->buf)[i];
++ sctx->st[i] ^= get_unaligned_le64(sctx->buf + 8 * i);
+
+ keccakf(sctx->st);
+
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index a7ecfde66b7b..ec0917fb7cca 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -4302,6 +4302,18 @@ static int binder_thread_release(struct binder_proc *proc,
+ if (t)
+ spin_lock(&t->lock);
+ }
++
++ /*
++ * If this thread used poll, make sure we remove the waitqueue
++ * from any epoll data structures holding it with POLLFREE.
++ * waitqueue_active() is safe to use here because we're holding
++ * the inner lock.
++ */
++ if ((thread->looper & BINDER_LOOPER_STATE_POLL) &&
++ waitqueue_active(&thread->wait)) {
++ wake_up_poll(&thread->wait, POLLHUP | POLLFREE);
++ }
++
+ binder_inner_proc_unlock(thread->proc);
+
+ if (send_reply)
+diff --git a/drivers/android/binder_alloc.c b/drivers/android/binder_alloc.c
+index 6f6f745605af..bef8c65af8fc 100644
+--- a/drivers/android/binder_alloc.c
++++ b/drivers/android/binder_alloc.c
+@@ -666,7 +666,7 @@ int binder_alloc_mmap_handler(struct binder_alloc *alloc,
+ goto err_already_mapped;
+ }
+
+- area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
++ area = get_vm_area(vma->vm_end - vma->vm_start, VM_ALLOC);
+ if (area == NULL) {
+ ret = -ENOMEM;
+ failure_string = "get_vm_area";
+diff --git a/drivers/bluetooth/hci_serdev.c b/drivers/bluetooth/hci_serdev.c
+index 71664b22ec9d..e0e6461b9200 100644
+--- a/drivers/bluetooth/hci_serdev.c
++++ b/drivers/bluetooth/hci_serdev.c
+@@ -303,6 +303,7 @@ int hci_uart_register_device(struct hci_uart *hu,
+ hci_set_drvdata(hdev, hu);
+
+ INIT_WORK(&hu->write_work, hci_uart_write_work);
++ percpu_init_rwsem(&hu->proto_lock);
+
+ /* Only when vendor specific setup callback is provided, consider
+ * the manufacturer information valid. This avoids filling in the
+diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
+index 0c5a5820b06e..da9d040bccc2 100644
+--- a/drivers/crypto/inside-secure/safexcel_hash.c
++++ b/drivers/crypto/inside-secure/safexcel_hash.c
+@@ -34,6 +34,8 @@ struct safexcel_ahash_req {
+ bool hmac;
+ bool needs_inv;
+
++ int nents;
++
+ u8 state_sz; /* expected sate size, only set once */
+ u32 state[SHA256_DIGEST_SIZE / sizeof(u32)] __aligned(sizeof(u32));
+
+@@ -152,8 +154,10 @@ static int safexcel_handle_req_result(struct safexcel_crypto_priv *priv, int rin
+ memcpy(areq->result, sreq->state,
+ crypto_ahash_digestsize(ahash));
+
+- dma_unmap_sg(priv->dev, areq->src,
+- sg_nents_for_len(areq->src, areq->nbytes), DMA_TO_DEVICE);
++ if (sreq->nents) {
++ dma_unmap_sg(priv->dev, areq->src, sreq->nents, DMA_TO_DEVICE);
++ sreq->nents = 0;
++ }
+
+ safexcel_free_context(priv, async, sreq->state_sz);
+
+@@ -178,7 +182,7 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
+ struct safexcel_command_desc *cdesc, *first_cdesc = NULL;
+ struct safexcel_result_desc *rdesc;
+ struct scatterlist *sg;
+- int i, nents, queued, len, cache_len, extra, n_cdesc = 0, ret = 0;
++ int i, queued, len, cache_len, extra, n_cdesc = 0, ret = 0;
+
+ queued = len = req->len - req->processed;
+ if (queued < crypto_ahash_blocksize(ahash))
+@@ -186,17 +190,31 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
+ else
+ cache_len = queued - areq->nbytes;
+
+- /*
+- * If this is not the last request and the queued data does not fit
+- * into full blocks, cache it for the next send() call.
+- */
+- extra = queued & (crypto_ahash_blocksize(ahash) - 1);
+- if (!req->last_req && extra) {
+- sg_pcopy_to_buffer(areq->src, sg_nents(areq->src),
+- req->cache_next, extra, areq->nbytes - extra);
+-
+- queued -= extra;
+- len -= extra;
++ if (!req->last_req) {
++ /* If this is not the last request and the queued data does not
++ * fit into full blocks, cache it for the next send() call.
++ */
++ extra = queued & (crypto_ahash_blocksize(ahash) - 1);
++ if (!extra)
++ /* If this is not the last request and the queued data
++ * is a multiple of a block, cache the last one for now.
++ */
++ extra = queued - crypto_ahash_blocksize(ahash);
++
++ if (extra) {
++ sg_pcopy_to_buffer(areq->src, sg_nents(areq->src),
++ req->cache_next, extra,
++ areq->nbytes - extra);
++
++ queued -= extra;
++ len -= extra;
++
++ if (!queued) {
++ *commands = 0;
++ *results = 0;
++ return 0;
++ }
++ }
+ }
+
+ spin_lock_bh(&priv->ring[ring].egress_lock);
+@@ -234,15 +252,15 @@ static int safexcel_ahash_send_req(struct crypto_async_request *async, int ring,
+ }
+
+ /* Now handle the current ahash request buffer(s) */
+- nents = dma_map_sg(priv->dev, areq->src,
+- sg_nents_for_len(areq->src, areq->nbytes),
+- DMA_TO_DEVICE);
+- if (!nents) {
++ req->nents = dma_map_sg(priv->dev, areq->src,
++ sg_nents_for_len(areq->src, areq->nbytes),
++ DMA_TO_DEVICE);
++ if (!req->nents) {
+ ret = -ENOMEM;
+ goto cdesc_rollback;
+ }
+
+- for_each_sg(areq->src, sg, nents, i) {
++ for_each_sg(areq->src, sg, req->nents, i) {
+ int sglen = sg_dma_len(sg);
+
+ /* Do not overflow the request */
+diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig
+index 2b4c39fdfa91..86210f75d233 100644
+--- a/drivers/firmware/efi/Kconfig
++++ b/drivers/firmware/efi/Kconfig
+@@ -159,7 +159,10 @@ config RESET_ATTACK_MITIGATION
+ using the TCG Platform Reset Attack Mitigation specification. This
+ protects against an attacker forcibly rebooting the system while it
+ still contains secrets in RAM, booting another OS and extracting the
+- secrets.
++ secrets. This should only be enabled when userland is configured to
++ clear the MemoryOverwriteRequest flag on clean shutdown after secrets
++ have been evicted, since otherwise it will trigger even on clean
++ reboots.
+
+ endmenu
+
+diff --git a/drivers/gpio/gpio-ath79.c b/drivers/gpio/gpio-ath79.c
+index 5fad89dfab7e..3ae7c1876bf4 100644
+--- a/drivers/gpio/gpio-ath79.c
++++ b/drivers/gpio/gpio-ath79.c
+@@ -324,3 +324,6 @@ static struct platform_driver ath79_gpio_driver = {
+ };
+
+ module_platform_driver(ath79_gpio_driver);
++
++MODULE_DESCRIPTION("Atheros AR71XX/AR724X/AR913X GPIO API support");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/gpio/gpio-iop.c b/drivers/gpio/gpio-iop.c
+index 98c7ff2a76e7..8d62db447ec1 100644
+--- a/drivers/gpio/gpio-iop.c
++++ b/drivers/gpio/gpio-iop.c
+@@ -58,3 +58,7 @@ static int __init iop3xx_gpio_init(void)
+ return platform_driver_register(&iop3xx_gpio_driver);
+ }
+ arch_initcall(iop3xx_gpio_init);
++
++MODULE_DESCRIPTION("GPIO handling for Intel IOP3xx processors");
++MODULE_AUTHOR("Lennert Buytenhek <buytenh@wantstofly.org>");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/gpio/gpio-stmpe.c b/drivers/gpio/gpio-stmpe.c
+index e6e5cca624a7..a365feff2377 100644
+--- a/drivers/gpio/gpio-stmpe.c
++++ b/drivers/gpio/gpio-stmpe.c
+@@ -190,6 +190,16 @@ static void stmpe_gpio_irq_sync_unlock(struct irq_data *d)
+ };
+ int i, j;
+
++ /*
++ * STMPE1600: to be able to get IRQ from pins,
++ * a read must be done on GPMR register, or a write in
++ * GPSR or GPCR registers
++ */
++ if (stmpe->partnum == STMPE1600) {
++ stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_LSB]);
++ stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_CSB]);
++ }
++
+ for (i = 0; i < CACHE_NR_REGS; i++) {
+ /* STMPE801 and STMPE1600 don't have RE and FE registers */
+ if ((stmpe->partnum == STMPE801 ||
+@@ -227,21 +237,11 @@ static void stmpe_gpio_irq_unmask(struct irq_data *d)
+ {
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ struct stmpe_gpio *stmpe_gpio = gpiochip_get_data(gc);
+- struct stmpe *stmpe = stmpe_gpio->stmpe;
+ int offset = d->hwirq;
+ int regoffset = offset / 8;
+ int mask = BIT(offset % 8);
+
+ stmpe_gpio->regs[REG_IE][regoffset] |= mask;
+-
+- /*
+- * STMPE1600 workaround: to be able to get IRQ from pins,
+- * a read must be done on GPMR register, or a write in
+- * GPSR or GPCR registers
+- */
+- if (stmpe->partnum == STMPE1600)
+- stmpe_reg_read(stmpe,
+- stmpe->regs[STMPE_IDX_GPMR_LSB + regoffset]);
+ }
+
+ static void stmpe_dbg_show_one(struct seq_file *s,
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 14532d9576e4..f6efcf94f6ad 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -732,6 +732,9 @@ static irqreturn_t lineevent_irq_thread(int irq, void *p)
+ struct gpioevent_data ge;
+ int ret, level;
+
++ /* Do not leak kernel stack to userspace */
++ memset(&ge, 0, sizeof(ge));
++
+ ge.timestamp = ktime_get_real_ns();
+ level = gpiod_get_value_cansleep(le->desc);
+
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index ee71ad9b6cc1..76531796bd3c 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2347,23 +2347,23 @@ static void wacom_remote_destroy_one(struct wacom *wacom, unsigned int index)
+ int i;
+ unsigned long flags;
+
+- spin_lock_irqsave(&remote->remote_lock, flags);
+- remote->remotes[index].registered = false;
+- spin_unlock_irqrestore(&remote->remote_lock, flags);
++ for (i = 0; i < WACOM_MAX_REMOTES; i++) {
++ if (remote->remotes[i].serial == serial) {
+
+- if (remote->remotes[index].battery.battery)
+- devres_release_group(&wacom->hdev->dev,
+- &remote->remotes[index].battery.bat_desc);
++ spin_lock_irqsave(&remote->remote_lock, flags);
++ remote->remotes[i].registered = false;
++ spin_unlock_irqrestore(&remote->remote_lock, flags);
+
+- if (remote->remotes[index].group.name)
+- devres_release_group(&wacom->hdev->dev,
+- &remote->remotes[index]);
++ if (remote->remotes[i].battery.battery)
++ devres_release_group(&wacom->hdev->dev,
++ &remote->remotes[i].battery.bat_desc);
++
++ if (remote->remotes[i].group.name)
++ devres_release_group(&wacom->hdev->dev,
++ &remote->remotes[i]);
+
+- for (i = 0; i < WACOM_MAX_REMOTES; i++) {
+- if (remote->remotes[i].serial == serial) {
+ remote->remotes[i].serial = 0;
+ remote->remotes[i].group.name = NULL;
+- remote->remotes[i].registered = false;
+ remote->remotes[i].battery.battery = NULL;
+ wacom->led.groups[i].select = WACOM_STATUS_UNKNOWN;
+ }
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 16af6886e828..7dbff253c05c 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1924,7 +1924,6 @@ static void wacom_wac_pad_event(struct hid_device *hdev, struct hid_field *field
+ struct wacom_features *features = &wacom_wac->features;
+ unsigned equivalent_usage = wacom_equivalent_usage(usage->hid);
+ int i;
+- bool is_touch_on = value;
+ bool do_report = false;
+
+ /*
+@@ -1969,16 +1968,17 @@ static void wacom_wac_pad_event(struct hid_device *hdev, struct hid_field *field
+ break;
+
+ case WACOM_HID_WD_MUTE_DEVICE:
+- if (wacom_wac->shared->touch_input && value) {
+- wacom_wac->shared->is_touch_on = !wacom_wac->shared->is_touch_on;
+- is_touch_on = wacom_wac->shared->is_touch_on;
+- }
+-
+- /* fall through*/
+ case WACOM_HID_WD_TOUCHONOFF:
+ if (wacom_wac->shared->touch_input) {
++ bool *is_touch_on = &wacom_wac->shared->is_touch_on;
++
++ if (equivalent_usage == WACOM_HID_WD_MUTE_DEVICE && value)
++ *is_touch_on = !(*is_touch_on);
++ else if (equivalent_usage == WACOM_HID_WD_TOUCHONOFF)
++ *is_touch_on = value;
++
+ input_report_switch(wacom_wac->shared->touch_input,
+- SW_MUTE_DEVICE, !is_touch_on);
++ SW_MUTE_DEVICE, !(*is_touch_on));
+ input_sync(wacom_wac->shared->touch_input);
+ }
+ break;
+diff --git a/drivers/iio/adc/stm32-adc.c b/drivers/iio/adc/stm32-adc.c
+index c9d96f935dba..cecf1e5b244c 100644
+--- a/drivers/iio/adc/stm32-adc.c
++++ b/drivers/iio/adc/stm32-adc.c
+@@ -1315,6 +1315,7 @@ static int stm32_adc_set_watermark(struct iio_dev *indio_dev, unsigned int val)
+ {
+ struct stm32_adc *adc = iio_priv(indio_dev);
+ unsigned int watermark = STM32_DMA_BUFFER_SIZE / 2;
++ unsigned int rx_buf_sz = STM32_DMA_BUFFER_SIZE;
+
+ /*
+ * dma cyclic transfers are used, buffer is split into two periods.
+@@ -1323,7 +1324,7 @@ static int stm32_adc_set_watermark(struct iio_dev *indio_dev, unsigned int val)
+ * - one buffer (period) driver can push with iio_trigger_poll().
+ */
+ watermark = min(watermark, val * (unsigned)(sizeof(u16)));
+- adc->rx_buf_sz = watermark * 2;
++ adc->rx_buf_sz = min(rx_buf_sz, watermark * 2 * adc->num_conv);
+
+ return 0;
+ }
+diff --git a/drivers/iio/chemical/ccs811.c b/drivers/iio/chemical/ccs811.c
+index 97bce8345c6a..fbe2431f5b81 100644
+--- a/drivers/iio/chemical/ccs811.c
++++ b/drivers/iio/chemical/ccs811.c
+@@ -96,7 +96,6 @@ static const struct iio_chan_spec ccs811_channels[] = {
+ .channel2 = IIO_MOD_CO2,
+ .modified = 1,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
+- BIT(IIO_CHAN_INFO_OFFSET) |
+ BIT(IIO_CHAN_INFO_SCALE),
+ .scan_index = 0,
+ .scan_type = {
+@@ -255,24 +254,18 @@ static int ccs811_read_raw(struct iio_dev *indio_dev,
+ switch (chan->channel2) {
+ case IIO_MOD_CO2:
+ *val = 0;
+- *val2 = 12834;
++ *val2 = 100;
+ return IIO_VAL_INT_PLUS_MICRO;
+ case IIO_MOD_VOC:
+ *val = 0;
+- *val2 = 84246;
+- return IIO_VAL_INT_PLUS_MICRO;
++ *val2 = 100;
++ return IIO_VAL_INT_PLUS_NANO;
+ default:
+ return -EINVAL;
+ }
+ default:
+ return -EINVAL;
+ }
+- case IIO_CHAN_INFO_OFFSET:
+- if (!(chan->type == IIO_CONCENTRATION &&
+- chan->channel2 == IIO_MOD_CO2))
+- return -EINVAL;
+- *val = -400;
+- return IIO_VAL_INT;
+ default:
+ return -EINVAL;
+ }
+diff --git a/drivers/input/rmi4/rmi_driver.c b/drivers/input/rmi4/rmi_driver.c
+index 141ea228aac6..f5954981e9ee 100644
+--- a/drivers/input/rmi4/rmi_driver.c
++++ b/drivers/input/rmi4/rmi_driver.c
+@@ -41,6 +41,13 @@ void rmi_free_function_list(struct rmi_device *rmi_dev)
+
+ rmi_dbg(RMI_DEBUG_CORE, &rmi_dev->dev, "Freeing function list\n");
+
++ /* Doing it in the reverse order so F01 will be removed last */
++ list_for_each_entry_safe_reverse(fn, tmp,
++ &data->function_list, node) {
++ list_del(&fn->node);
++ rmi_unregister_function(fn);
++ }
++
+ devm_kfree(&rmi_dev->dev, data->irq_memory);
+ data->irq_memory = NULL;
+ data->irq_status = NULL;
+@@ -50,13 +57,6 @@ void rmi_free_function_list(struct rmi_device *rmi_dev)
+
+ data->f01_container = NULL;
+ data->f34_container = NULL;
+-
+- /* Doing it in the reverse order so F01 will be removed last */
+- list_for_each_entry_safe_reverse(fn, tmp,
+- &data->function_list, node) {
+- list_del(&fn->node);
+- rmi_unregister_function(fn);
+- }
+ }
+
+ static int reset_one_function(struct rmi_function *fn)
+diff --git a/drivers/input/rmi4/rmi_f03.c b/drivers/input/rmi4/rmi_f03.c
+index ad71a5e768dc..7ccbb370a9a8 100644
+--- a/drivers/input/rmi4/rmi_f03.c
++++ b/drivers/input/rmi4/rmi_f03.c
+@@ -32,6 +32,7 @@ struct f03_data {
+ struct rmi_function *fn;
+
+ struct serio *serio;
++ bool serio_registered;
+
+ unsigned int overwrite_buttons;
+
+@@ -138,6 +139,37 @@ static int rmi_f03_initialize(struct f03_data *f03)
+ return 0;
+ }
+
++static int rmi_f03_pt_open(struct serio *serio)
++{
++ struct f03_data *f03 = serio->port_data;
++ struct rmi_function *fn = f03->fn;
++ const u8 ob_len = f03->rx_queue_length * RMI_F03_OB_SIZE;
++ const u16 data_addr = fn->fd.data_base_addr + RMI_F03_OB_OFFSET;
++ u8 obs[RMI_F03_QUEUE_LENGTH * RMI_F03_OB_SIZE];
++ int error;
++
++ /*
++ * Consume any pending data. Some devices like to spam with
++ * 0xaa 0x00 announcements which may confuse us as we try to
++ * probe the device.
++ */
++ error = rmi_read_block(fn->rmi_dev, data_addr, &obs, ob_len);
++ if (!error)
++ rmi_dbg(RMI_DEBUG_FN, &fn->dev,
++ "%s: Consumed %*ph (%d) from PS2 guest\n",
++ __func__, ob_len, obs, ob_len);
++
++ return fn->rmi_dev->driver->set_irq_bits(fn->rmi_dev, fn->irq_mask);
++}
++
++static void rmi_f03_pt_close(struct serio *serio)
++{
++ struct f03_data *f03 = serio->port_data;
++ struct rmi_function *fn = f03->fn;
++
++ fn->rmi_dev->driver->clear_irq_bits(fn->rmi_dev, fn->irq_mask);
++}
++
+ static int rmi_f03_register_pt(struct f03_data *f03)
+ {
+ struct serio *serio;
+@@ -148,6 +180,8 @@ static int rmi_f03_register_pt(struct f03_data *f03)
+
+ serio->id.type = SERIO_PS_PSTHRU;
+ serio->write = rmi_f03_pt_write;
++ serio->open = rmi_f03_pt_open;
++ serio->close = rmi_f03_pt_close;
+ serio->port_data = f03;
+
+ strlcpy(serio->name, "Synaptics RMI4 PS/2 pass-through",
+@@ -184,17 +218,27 @@ static int rmi_f03_probe(struct rmi_function *fn)
+ f03->device_count);
+
+ dev_set_drvdata(dev, f03);
+-
+- error = rmi_f03_register_pt(f03);
+- if (error)
+- return error;
+-
+ return 0;
+ }
+
+ static int rmi_f03_config(struct rmi_function *fn)
+ {
+- fn->rmi_dev->driver->set_irq_bits(fn->rmi_dev, fn->irq_mask);
++ struct f03_data *f03 = dev_get_drvdata(&fn->dev);
++ int error;
++
++ if (!f03->serio_registered) {
++ error = rmi_f03_register_pt(f03);
++ if (error)
++ return error;
++
++ f03->serio_registered = true;
++ } else {
++ /*
++ * We must be re-configuring the sensor, just enable
++ * interrupts for this function.
++ */
++ fn->rmi_dev->driver->set_irq_bits(fn->rmi_dev, fn->irq_mask);
++ }
+
+ return 0;
+ }
+@@ -204,7 +248,7 @@ static int rmi_f03_attention(struct rmi_function *fn, unsigned long *irq_bits)
+ struct rmi_device *rmi_dev = fn->rmi_dev;
+ struct rmi_driver_data *drvdata = dev_get_drvdata(&rmi_dev->dev);
+ struct f03_data *f03 = dev_get_drvdata(&fn->dev);
+- u16 data_addr = fn->fd.data_base_addr;
++ const u16 data_addr = fn->fd.data_base_addr + RMI_F03_OB_OFFSET;
+ const u8 ob_len = f03->rx_queue_length * RMI_F03_OB_SIZE;
+ u8 obs[RMI_F03_QUEUE_LENGTH * RMI_F03_OB_SIZE];
+ u8 ob_status;
+@@ -226,8 +270,7 @@ static int rmi_f03_attention(struct rmi_function *fn, unsigned long *irq_bits)
+ drvdata->attn_data.size -= ob_len;
+ } else {
+ /* Grab all of the data registers, and check them for data */
+- error = rmi_read_block(fn->rmi_dev, data_addr + RMI_F03_OB_OFFSET,
+- &obs, ob_len);
++ error = rmi_read_block(fn->rmi_dev, data_addr, &obs, ob_len);
+ if (error) {
+ dev_err(&fn->dev,
+ "%s: Failed to read F03 output buffers: %d\n",
+@@ -266,7 +309,8 @@ static void rmi_f03_remove(struct rmi_function *fn)
+ {
+ struct f03_data *f03 = dev_get_drvdata(&fn->dev);
+
+- serio_unregister_port(f03->serio);
++ if (f03->serio_registered)
++ serio_unregister_port(f03->serio);
+ }
+
+ struct rmi_function_handler rmi_f03_handler = {
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index f4f17552c9b8..4a0ccda4d04b 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -238,8 +238,11 @@ static int mei_me_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ */
+ mei_me_set_pm_domain(dev);
+
+- if (mei_pg_is_enabled(dev))
++ if (mei_pg_is_enabled(dev)) {
+ pm_runtime_put_noidle(&pdev->dev);
++ if (hw->d0i3_supported)
++ pm_runtime_allow(&pdev->dev);
++ }
+
+ dev_dbg(&pdev->dev, "initialization successful.\n");
+
+diff --git a/drivers/mtd/nand/denali_pci.c b/drivers/mtd/nand/denali_pci.c
+index 57fb7ae31412..49cb3e1f8bd0 100644
+--- a/drivers/mtd/nand/denali_pci.c
++++ b/drivers/mtd/nand/denali_pci.c
+@@ -125,3 +125,7 @@ static struct pci_driver denali_pci_driver = {
+ .remove = denali_pci_remove,
+ };
+ module_pci_driver(denali_pci_driver);
++
++MODULE_DESCRIPTION("PCI driver for Denali NAND controller");
++MODULE_AUTHOR("Intel Corporation and its suppliers");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index c208753ff5b7..c69a5b3ae8c8 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -3676,7 +3676,7 @@ static int __igb_close(struct net_device *netdev, bool suspending)
+
+ int igb_close(struct net_device *netdev)
+ {
+- if (netif_device_present(netdev))
++ if (netif_device_present(netdev) || netdev->dismantle)
+ return __igb_close(netdev, false);
+ return 0;
+ }
+diff --git a/drivers/power/reset/zx-reboot.c b/drivers/power/reset/zx-reboot.c
+index 7549c7f74a3c..c03e96e6a041 100644
+--- a/drivers/power/reset/zx-reboot.c
++++ b/drivers/power/reset/zx-reboot.c
+@@ -82,3 +82,7 @@ static struct platform_driver zx_reboot_driver = {
+ },
+ };
+ module_platform_driver(zx_reboot_driver);
++
++MODULE_DESCRIPTION("ZTE SoCs reset driver");
++MODULE_AUTHOR("Jun Nie <jun.nie@linaro.org>");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/scsi/aacraid/aachba.c b/drivers/scsi/aacraid/aachba.c
+index af3e4d3f9735..7173ae53c526 100644
+--- a/drivers/scsi/aacraid/aachba.c
++++ b/drivers/scsi/aacraid/aachba.c
+@@ -913,8 +913,15 @@ static void setinqstr(struct aac_dev *dev, void *data, int tindex)
+ memset(str, ' ', sizeof(*str));
+
+ if (sup_adap_info->adapter_type_text[0]) {
+- char *cp = sup_adap_info->adapter_type_text;
+ int c;
++ char *cp;
++ char *cname = kmemdup(sup_adap_info->adapter_type_text,
++ sizeof(sup_adap_info->adapter_type_text),
++ GFP_ATOMIC);
++ if (!cname)
++ return;
++
++ cp = cname;
+ if ((cp[0] == 'A') && (cp[1] == 'O') && (cp[2] == 'C'))
+ inqstrcpy("SMC", str->vid);
+ else {
+@@ -923,7 +930,7 @@ static void setinqstr(struct aac_dev *dev, void *data, int tindex)
+ ++cp;
+ c = *cp;
+ *cp = '\0';
+- inqstrcpy(sup_adap_info->adapter_type_text, str->vid);
++ inqstrcpy(cname, str->vid);
+ *cp = c;
+ while (*cp && *cp != ' ')
+ ++cp;
+@@ -937,8 +944,8 @@ static void setinqstr(struct aac_dev *dev, void *data, int tindex)
+ cp[sizeof(str->pid)] = '\0';
+ }
+ inqstrcpy (cp, str->pid);
+- if (c)
+- cp[sizeof(str->pid)] = c;
++
++ kfree(cname);
+ } else {
+ struct aac_driver_ident *mp = aac_get_driver_ident(dev->cardtype);
+
+diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c
+index 80a8cb26cdea..d9b20dada109 100644
+--- a/drivers/scsi/aacraid/commsup.c
++++ b/drivers/scsi/aacraid/commsup.c
+@@ -1643,14 +1643,7 @@ static int _aac_reset_adapter(struct aac_dev *aac, int forced, u8 reset_type)
+ out:
+ aac->in_reset = 0;
+ scsi_unblock_requests(host);
+- /*
+- * Issue bus rescan to catch any configuration that might have
+- * occurred
+- */
+- if (!retval) {
+- dev_info(&aac->pdev->dev, "Issuing bus rescan\n");
+- scsi_scan_host(host);
+- }
++
+ if (jafo) {
+ spin_lock_irq(host->host_lock);
+ }
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 3b3d1d050cac..40fc7a590e81 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1834,8 +1834,10 @@ static int storvsc_probe(struct hv_device *device,
+ fc_host_node_name(host) = stor_device->node_name;
+ fc_host_port_name(host) = stor_device->port_name;
+ stor_device->rport = fc_remote_port_add(host, 0, &ids);
+- if (!stor_device->rport)
++ if (!stor_device->rport) {
++ ret = -ENOMEM;
+ goto err_out4;
++ }
+ }
+ #endif
+ return 0;
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 79ddefe4180d..40390d31a93b 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -1668,12 +1668,23 @@ static int spi_imx_remove(struct platform_device *pdev)
+ {
+ struct spi_master *master = platform_get_drvdata(pdev);
+ struct spi_imx_data *spi_imx = spi_master_get_devdata(master);
++ int ret;
+
+ spi_bitbang_stop(&spi_imx->bitbang);
+
++ ret = clk_enable(spi_imx->clk_per);
++ if (ret)
++ return ret;
++
++ ret = clk_enable(spi_imx->clk_ipg);
++ if (ret) {
++ clk_disable(spi_imx->clk_per);
++ return ret;
++ }
++
+ writel(0, spi_imx->base + MXC_CSPICTRL);
+- clk_unprepare(spi_imx->clk_ipg);
+- clk_unprepare(spi_imx->clk_per);
++ clk_disable_unprepare(spi_imx->clk_ipg);
++ clk_disable_unprepare(spi_imx->clk_per);
+ spi_imx_sdma_exit(spi_imx);
+ spi_master_put(master);
+
+diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
+index ee85cbf7c9ae..980ff42128b4 100644
+--- a/drivers/staging/ccree/ssi_cipher.c
++++ b/drivers/staging/ccree/ssi_cipher.c
+@@ -908,6 +908,7 @@ static int ssi_ablkcipher_decrypt(struct ablkcipher_request *req)
+ scatterwalk_map_and_copy(req_ctx->backup_info, req->src,
+ (req->nbytes - ivsize), ivsize, 0);
+ req_ctx->is_giv = false;
++ req_ctx->backup_info = NULL;
+
+ return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src, req->nbytes, req->info, ivsize, (void *)req, DRV_CRYPTO_DIRECTION_DECRYPT);
+ }
+diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
+index 1a3c481fa92a..ce16c3440319 100644
+--- a/drivers/staging/ccree/ssi_driver.c
++++ b/drivers/staging/ccree/ssi_driver.c
+@@ -117,7 +117,7 @@ static irqreturn_t cc_isr(int irq, void *dev_id)
+ irr &= ~SSI_COMP_IRQ_MASK;
+ complete_request(drvdata);
+ }
+-#ifdef CC_SUPPORT_FIPS
++#ifdef CONFIG_CRYPTO_FIPS
+ /* TEE FIPS interrupt */
+ if (likely((irr & SSI_GPR0_IRQ_MASK) != 0)) {
+ /* Mask interrupt - will be unmasked in Deferred service handler */
+diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
+index 8024843521ab..7b256d716251 100644
+--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
++++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c
+@@ -826,14 +826,15 @@ struct kib_conn *kiblnd_create_conn(struct kib_peer *peer, struct rdma_cm_id *cm
+ return conn;
+
+ failed_2:
+- kiblnd_destroy_conn(conn, true);
++ kiblnd_destroy_conn(conn);
++ LIBCFS_FREE(conn, sizeof(*conn));
+ failed_1:
+ LIBCFS_FREE(init_qp_attr, sizeof(*init_qp_attr));
+ failed_0:
+ return NULL;
+ }
+
+-void kiblnd_destroy_conn(struct kib_conn *conn, bool free_conn)
++void kiblnd_destroy_conn(struct kib_conn *conn)
+ {
+ struct rdma_cm_id *cmid = conn->ibc_cmid;
+ struct kib_peer *peer = conn->ibc_peer;
+@@ -896,8 +897,6 @@ void kiblnd_destroy_conn(struct kib_conn *conn, bool free_conn)
+ rdma_destroy_id(cmid);
+ atomic_dec(&net->ibn_nconns);
+ }
+-
+- LIBCFS_FREE(conn, sizeof(*conn));
+ }
+
+ int kiblnd_close_peer_conns_locked(struct kib_peer *peer, int why)
+diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
+index 171eced213f8..b18911d09e9a 100644
+--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
++++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h
+@@ -1016,7 +1016,7 @@ int kiblnd_close_peer_conns_locked(struct kib_peer *peer, int why);
+ struct kib_conn *kiblnd_create_conn(struct kib_peer *peer,
+ struct rdma_cm_id *cmid,
+ int state, int version);
+-void kiblnd_destroy_conn(struct kib_conn *conn, bool free_conn);
++void kiblnd_destroy_conn(struct kib_conn *conn);
+ void kiblnd_close_conn(struct kib_conn *conn, int error);
+ void kiblnd_close_conn_locked(struct kib_conn *conn, int error);
+
+diff --git a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
+index 40e3af5d8b04..2f25642ea1a6 100644
+--- a/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
++++ b/drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c
+@@ -3314,11 +3314,13 @@ kiblnd_connd(void *arg)
+ spin_unlock_irqrestore(lock, flags);
+ dropped_lock = 1;
+
+- kiblnd_destroy_conn(conn, !peer);
++ kiblnd_destroy_conn(conn);
+
+ spin_lock_irqsave(lock, flags);
+- if (!peer)
++ if (!peer) {
++ kfree(conn);
+ continue;
++ }
+
+ conn->ibc_peer = peer;
+ if (peer->ibp_reconnected < KIB_RECONN_HIGH_RACE)
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index 5bb0c42c88dd..7070203e3157 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -252,31 +252,25 @@ static void dw8250_set_termios(struct uart_port *p, struct ktermios *termios,
+ struct ktermios *old)
+ {
+ unsigned int baud = tty_termios_baud_rate(termios);
+- unsigned int target_rate, min_rate, max_rate;
+ struct dw8250_data *d = p->private_data;
+ long rate;
+- int i, ret;
++ int ret;
+
+ if (IS_ERR(d->clk) || !old)
+ goto out;
+
+- /* Find a clk rate within +/-1.6% of an integer multiple of baudx16 */
+- target_rate = baud * 16;
+- min_rate = target_rate - (target_rate >> 6);
+- max_rate = target_rate + (target_rate >> 6);
+-
+- for (i = 1; i <= UART_DIV_MAX; i++) {
+- rate = clk_round_rate(d->clk, i * target_rate);
+- if (rate >= i * min_rate && rate <= i * max_rate)
+- break;
+- }
+- if (i <= UART_DIV_MAX) {
+- clk_disable_unprepare(d->clk);
++ clk_disable_unprepare(d->clk);
++ rate = clk_round_rate(d->clk, baud * 16);
++ if (rate < 0)
++ ret = rate;
++ else if (rate == 0)
++ ret = -ENOENT;
++ else
+ ret = clk_set_rate(d->clk, rate);
+- clk_prepare_enable(d->clk);
+- if (!ret)
+- p->uartclk = rate;
+- }
++ clk_prepare_enable(d->clk);
++
++ if (!ret)
++ p->uartclk = rate;
+
+ out:
+ p->status &= ~UPSTAT_AUTOCTS;
+diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c
+index 1e67a7e4a5fd..160b8906d9b9 100644
+--- a/drivers/tty/serial/8250/8250_of.c
++++ b/drivers/tty/serial/8250/8250_of.c
+@@ -136,8 +136,11 @@ static int of_platform_serial_setup(struct platform_device *ofdev,
+ }
+
+ info->rst = devm_reset_control_get_optional_shared(&ofdev->dev, NULL);
+- if (IS_ERR(info->rst))
++ if (IS_ERR(info->rst)) {
++ ret = PTR_ERR(info->rst);
+ goto err_dispose;
++ }
++
+ ret = reset_control_deassert(info->rst);
+ if (ret)
+ goto err_dispose;
+diff --git a/drivers/tty/serial/8250/8250_uniphier.c b/drivers/tty/serial/8250/8250_uniphier.c
+index 45ef506293ae..28d88ccf5a0c 100644
+--- a/drivers/tty/serial/8250/8250_uniphier.c
++++ b/drivers/tty/serial/8250/8250_uniphier.c
+@@ -250,12 +250,13 @@ static int uniphier_uart_probe(struct platform_device *pdev)
+ up.dl_read = uniphier_serial_dl_read;
+ up.dl_write = uniphier_serial_dl_write;
+
+- priv->line = serial8250_register_8250_port(&up);
+- if (priv->line < 0) {
++ ret = serial8250_register_8250_port(&up);
++ if (ret < 0) {
+ dev_err(dev, "failed to register 8250 port\n");
+ clk_disable_unprepare(priv->clk);
+ return ret;
+ }
++ priv->line = ret;
+
+ platform_set_drvdata(pdev, priv);
+
+diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
+index e4b3d9123a03..e9145ed0d6e7 100644
+--- a/drivers/tty/serial/imx.c
++++ b/drivers/tty/serial/imx.c
+@@ -2238,12 +2238,14 @@ static void serial_imx_enable_wakeup(struct imx_port *sport, bool on)
+ val &= ~UCR3_AWAKEN;
+ writel(val, sport->port.membase + UCR3);
+
+- val = readl(sport->port.membase + UCR1);
+- if (on)
+- val |= UCR1_RTSDEN;
+- else
+- val &= ~UCR1_RTSDEN;
+- writel(val, sport->port.membase + UCR1);
++ if (sport->have_rtscts) {
++ val = readl(sport->port.membase + UCR1);
++ if (on)
++ val |= UCR1_RTSDEN;
++ else
++ val &= ~UCR1_RTSDEN;
++ writel(val, sport->port.membase + UCR1);
++ }
+ }
+
+ static int imx_serial_port_suspend_noirq(struct device *dev)
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index dc60aeea87d8..4b506f2d3522 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -1323,6 +1323,9 @@ struct tty_struct *tty_init_dev(struct tty_driver *driver, int idx)
+ "%s: %s driver does not set tty->port. This will crash the kernel later. Fix the driver!\n",
+ __func__, tty->driver->name);
+
++ retval = tty_ldisc_lock(tty, 5 * HZ);
++ if (retval)
++ goto err_release_lock;
+ tty->port->itty = tty;
+
+ /*
+@@ -1333,6 +1336,7 @@ struct tty_struct *tty_init_dev(struct tty_driver *driver, int idx)
+ retval = tty_ldisc_setup(tty, tty->link);
+ if (retval)
+ goto err_release_tty;
++ tty_ldisc_unlock(tty);
+ /* Return the tty locked so that it cannot vanish under the caller */
+ return tty;
+
+@@ -1345,9 +1349,11 @@ struct tty_struct *tty_init_dev(struct tty_driver *driver, int idx)
+
+ /* call the tty release_tty routine to clean out this slot */
+ err_release_tty:
+- tty_unlock(tty);
++ tty_ldisc_unlock(tty);
+ tty_info_ratelimited(tty, "ldisc open failed (%d), clearing slot %d\n",
+ retval, idx);
++err_release_lock:
++ tty_unlock(tty);
+ release_tty(tty, idx);
+ return ERR_PTR(retval);
+ }
+diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c
+index 24ec5c7e6b20..4e7946c0484b 100644
+--- a/drivers/tty/tty_ldisc.c
++++ b/drivers/tty/tty_ldisc.c
+@@ -337,7 +337,7 @@ static inline void __tty_ldisc_unlock(struct tty_struct *tty)
+ ldsem_up_write(&tty->ldisc_sem);
+ }
+
+-static int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout)
++int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout)
+ {
+ int ret;
+
+@@ -348,7 +348,7 @@ static int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout)
+ return 0;
+ }
+
+-static void tty_ldisc_unlock(struct tty_struct *tty)
++void tty_ldisc_unlock(struct tty_struct *tty)
+ {
+ clear_bit(TTY_LDISC_HALTED, &tty->flags);
+ __tty_ldisc_unlock(tty);
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 8e0636c963a7..18bbe3fedb8b 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -425,7 +425,7 @@ static int acm_submit_read_urb(struct acm *acm, int index, gfp_t mem_flags)
+
+ res = usb_submit_urb(acm->read_urbs[index], mem_flags);
+ if (res) {
+- if (res != -EPERM) {
++ if (res != -EPERM && res != -ENODEV) {
+ dev_err(&acm->data->dev,
+ "urb %d failed submission with %d\n",
+ index, res);
+@@ -1752,6 +1752,9 @@ static const struct usb_device_id acm_ids[] = {
+ { USB_DEVICE(0x0ace, 0x1611), /* ZyDAS 56K USB MODEM - new version */
+ .driver_info = SINGLE_RX_URB, /* firmware bug */
+ },
++ { USB_DEVICE(0x11ca, 0x0201), /* VeriFone Mx870 Gadget Serial */
++ .driver_info = SINGLE_RX_URB,
++ },
+ { USB_DEVICE(0x22b8, 0x7000), /* Motorola Q Phone */
+ .driver_info = NO_UNION_NORMAL, /* has no union descriptor */
+ },
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index b6cf5ab5a0a1..f9bd351637cd 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -3700,7 +3700,8 @@ static void ffs_closed(struct ffs_data *ffs)
+ ci = opts->func_inst.group.cg_item.ci_parent->ci_parent;
+ ffs_dev_unlock();
+
+- unregister_gadget_item(ci);
++ if (test_bit(FFS_FL_BOUND, &ffs->flags))
++ unregister_gadget_item(ci);
+ return;
+ done:
+ ffs_dev_unlock();
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index 1b3efb14aec7..ac0541529499 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -912,7 +912,7 @@ int usb_gadget_ep_match_desc(struct usb_gadget *gadget,
+ return 0;
+
+ /* "high bandwidth" works only at high speed */
+- if (!gadget_is_dualspeed(gadget) && usb_endpoint_maxp(desc) & (3<<11))
++ if (!gadget_is_dualspeed(gadget) && usb_endpoint_maxp_mult(desc) > 1)
+ return 0;
+
+ switch (type) {
+diff --git a/drivers/usb/serial/Kconfig b/drivers/usb/serial/Kconfig
+index a8d5f2e4878d..c66b93664d54 100644
+--- a/drivers/usb/serial/Kconfig
++++ b/drivers/usb/serial/Kconfig
+@@ -63,6 +63,7 @@ config USB_SERIAL_SIMPLE
+ - Google USB serial devices
+ - HP4x calculators
+ - a number of Motorola phones
++ - Motorola Tetra devices
+ - Novatel Wireless GPS receivers
+ - Siemens USB/MPI adapter.
+ - ViVOtech ViVOpay USB device.
+diff --git a/drivers/usb/serial/io_edgeport.c b/drivers/usb/serial/io_edgeport.c
+index 219265ce3711..17283f4b4779 100644
+--- a/drivers/usb/serial/io_edgeport.c
++++ b/drivers/usb/serial/io_edgeport.c
+@@ -2282,7 +2282,6 @@ static int write_cmd_usb(struct edgeport_port *edge_port,
+ /* something went wrong */
+ dev_err(dev, "%s - usb_submit_urb(write command) failed, status = %d\n",
+ __func__, status);
+- usb_kill_urb(urb);
+ usb_free_urb(urb);
+ atomic_dec(&CmdUrbs);
+ return status;
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index b6320e3be429..5db8ed517e0e 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -380,6 +380,9 @@ static void option_instat_callback(struct urb *urb);
+ #define FOUR_G_SYSTEMS_PRODUCT_W14 0x9603
+ #define FOUR_G_SYSTEMS_PRODUCT_W100 0x9b01
+
++/* Fujisoft products */
++#define FUJISOFT_PRODUCT_FS040U 0x9b02
++
+ /* iBall 3.5G connect wireless modem */
+ #define IBALL_3_5G_CONNECT 0x9605
+
+@@ -1894,6 +1897,8 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE(LONGCHEER_VENDOR_ID, FOUR_G_SYSTEMS_PRODUCT_W100),
+ .driver_info = (kernel_ulong_t)&four_g_w100_blacklist
+ },
++ {USB_DEVICE(LONGCHEER_VENDOR_ID, FUJISOFT_PRODUCT_FS040U),
++ .driver_info = (kernel_ulong_t)&net_intf3_blacklist},
+ { USB_DEVICE_INTERFACE_CLASS(LONGCHEER_VENDOR_ID, SPEEDUP_PRODUCT_SU9800, 0xff) },
+ { USB_DEVICE_INTERFACE_CLASS(LONGCHEER_VENDOR_ID, 0x9801, 0xff),
+ .driver_info = (kernel_ulong_t)&net_intf3_blacklist },
+diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c
+index 57ae832a49ff..46dd09da2434 100644
+--- a/drivers/usb/serial/pl2303.c
++++ b/drivers/usb/serial/pl2303.c
+@@ -38,6 +38,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_RSAQ2) },
+ { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_DCU11) },
+ { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_RSAQ3) },
++ { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_CHILITAG) },
+ { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_PHAROS) },
+ { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_ALDIGA) },
+ { USB_DEVICE(PL2303_VENDOR_ID, PL2303_PRODUCT_ID_MMX) },
+diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h
+index f98fd84890de..fcd72396a7b6 100644
+--- a/drivers/usb/serial/pl2303.h
++++ b/drivers/usb/serial/pl2303.h
+@@ -12,6 +12,7 @@
+ #define PL2303_PRODUCT_ID_DCU11 0x1234
+ #define PL2303_PRODUCT_ID_PHAROS 0xaaa0
+ #define PL2303_PRODUCT_ID_RSAQ3 0xaaa2
++#define PL2303_PRODUCT_ID_CHILITAG 0xaaa8
+ #define PL2303_PRODUCT_ID_ALDIGA 0x0611
+ #define PL2303_PRODUCT_ID_MMX 0x0612
+ #define PL2303_PRODUCT_ID_GPRS 0x0609
+diff --git a/drivers/usb/serial/usb-serial-simple.c b/drivers/usb/serial/usb-serial-simple.c
+index 74172fe158df..4ef79e29cb26 100644
+--- a/drivers/usb/serial/usb-serial-simple.c
++++ b/drivers/usb/serial/usb-serial-simple.c
+@@ -77,6 +77,11 @@ DEVICE(vivopay, VIVOPAY_IDS);
+ { USB_DEVICE(0x22b8, 0x2c64) } /* Motorola V950 phone */
+ DEVICE(moto_modem, MOTO_IDS);
+
++/* Motorola Tetra driver */
++#define MOTOROLA_TETRA_IDS() \
++ { USB_DEVICE(0x0cad, 0x9011) } /* Motorola Solutions TETRA PEI */
++DEVICE(motorola_tetra, MOTOROLA_TETRA_IDS);
++
+ /* Novatel Wireless GPS driver */
+ #define NOVATEL_IDS() \
+ { USB_DEVICE(0x09d7, 0x0100) } /* NovAtel FlexPack GPS */
+@@ -107,6 +112,7 @@ static struct usb_serial_driver * const serial_drivers[] = {
+ &google_device,
+ &vivopay_device,
+ &moto_modem_device,
++ &motorola_tetra_device,
+ &novatel_gps_device,
+ &hp4x_device,
+ &suunto_device,
+@@ -122,6 +128,7 @@ static const struct usb_device_id id_table[] = {
+ GOOGLE_IDS(),
+ VIVOPAY_IDS(),
+ MOTO_IDS(),
++ MOTOROLA_TETRA_IDS(),
+ NOVATEL_IDS(),
+ HP4X_IDS(),
+ SUUNTO_IDS(),
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 5d04c40ee40a..3b1b9695177a 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -1076,20 +1076,19 @@ static int uas_post_reset(struct usb_interface *intf)
+ return 0;
+
+ err = uas_configure_endpoints(devinfo);
+- if (err) {
++ if (err && err != ENODEV)
+ shost_printk(KERN_ERR, shost,
+ "%s: alloc streams error %d after reset",
+ __func__, err);
+- return 1;
+- }
+
++ /* we must unblock the host in every case lest we deadlock */
+ spin_lock_irqsave(shost->host_lock, flags);
+ scsi_report_bus_reset(shost, 0);
+ spin_unlock_irqrestore(shost->host_lock, flags);
+
+ scsi_unblock_requests(shost);
+
+- return 0;
++ return err ? 1 : 0;
+ }
+
+ static int uas_suspend(struct usb_interface *intf, pm_message_t message)
+diff --git a/include/linux/tty.h b/include/linux/tty.h
+index 7ac8ba208b1f..0a6c71e0ad01 100644
+--- a/include/linux/tty.h
++++ b/include/linux/tty.h
+@@ -405,6 +405,8 @@ extern const char *tty_name(const struct tty_struct *tty);
+ extern struct tty_struct *tty_kopen(dev_t device);
+ extern void tty_kclose(struct tty_struct *tty);
+ extern int tty_dev_name_to_number(const char *name, dev_t *number);
++extern int tty_ldisc_lock(struct tty_struct *tty, unsigned long timeout);
++extern void tty_ldisc_unlock(struct tty_struct *tty);
+ #else
+ static inline void tty_kref_put(struct tty_struct *tty)
+ { }
+diff --git a/lib/test_firmware.c b/lib/test_firmware.c
+index 64a4c76cba2b..e7008688769b 100644
+--- a/lib/test_firmware.c
++++ b/lib/test_firmware.c
+@@ -371,6 +371,7 @@ static ssize_t config_num_requests_store(struct device *dev,
+ if (test_fw_config->reqs) {
+ pr_err("Must call release_all_firmware prior to changing config\n");
+ rc = -EINVAL;
++ mutex_unlock(&test_fw_mutex);
+ goto out;
+ }
+ mutex_unlock(&test_fw_mutex);
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index ee4613fa5840..f19f4841a97a 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -743,7 +743,7 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ case Opt_fsuuid:
+ ima_log_string(ab, "fsuuid", args[0].from);
+
+- if (uuid_is_null(&entry->fsuuid)) {
++ if (!uuid_is_null(&entry->fsuuid)) {
+ result = -EINVAL;
+ break;
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 9aafc6c86132..1750e00c5bb4 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3154,11 +3154,13 @@ static void alc256_shutup(struct hda_codec *codec)
+ if (hp_pin_sense)
+ msleep(85);
+
++ /* 3k pull low control for Headset jack. */
++ /* NOTE: call this before clearing the pin, otherwise codec stalls */
++ alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
++
+ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+
+- alc_update_coef_idx(codec, 0x46, 0, 3 << 12); /* 3k pull low control for Headset jack. */
+-
+ if (hp_pin_sense)
+ msleep(100);
+
+diff --git a/tools/gpio/gpio-event-mon.c b/tools/gpio/gpio-event-mon.c
+index 1c14c2595158..4b36323ea64b 100644
+--- a/tools/gpio/gpio-event-mon.c
++++ b/tools/gpio/gpio-event-mon.c
+@@ -23,6 +23,7 @@
+ #include <getopt.h>
+ #include <inttypes.h>
+ #include <sys/ioctl.h>
++#include <sys/types.h>
+ #include <linux/gpio.h>
+
+ int monitor_device(const char *device_name,
+diff --git a/tools/usb/usbip/src/usbip_bind.c b/tools/usb/usbip/src/usbip_bind.c
+index fa46141ae68b..e121cfb1746a 100644
+--- a/tools/usb/usbip/src/usbip_bind.c
++++ b/tools/usb/usbip/src/usbip_bind.c
+@@ -144,6 +144,7 @@ static int bind_device(char *busid)
+ int rc;
+ struct udev *udev;
+ struct udev_device *dev;
++ const char *devpath;
+
+ /* Check whether the device with this bus ID exists. */
+ udev = udev_new();
+@@ -152,8 +153,16 @@ static int bind_device(char *busid)
+ err("device with the specified bus ID does not exist");
+ return -1;
+ }
++ devpath = udev_device_get_devpath(dev);
+ udev_unref(udev);
+
++ /* If the device is already attached to vhci_hcd - bail out */
++ if (strstr(devpath, USBIP_VHCI_DRV_NAME)) {
++ err("bind loop detected: device: %s is attached to %s\n",
++ devpath, USBIP_VHCI_DRV_NAME);
++ return -1;
++ }
++
+ rc = unbind_other(busid);
+ if (rc == UNBIND_ST_FAILED) {
+ err("could not unbind driver from device on busid %s", busid);
+diff --git a/tools/usb/usbip/src/usbip_list.c b/tools/usb/usbip/src/usbip_list.c
+index f1b38e866dd7..d65a9f444174 100644
+--- a/tools/usb/usbip/src/usbip_list.c
++++ b/tools/usb/usbip/src/usbip_list.c
+@@ -187,6 +187,7 @@ static int list_devices(bool parsable)
+ const char *busid;
+ char product_name[128];
+ int ret = -1;
++ const char *devpath;
+
+ /* Create libudev context. */
+ udev = udev_new();
+@@ -209,6 +210,14 @@ static int list_devices(bool parsable)
+ path = udev_list_entry_get_name(dev_list_entry);
+ dev = udev_device_new_from_syspath(udev, path);
+
++ /* Ignore devices attached to vhci_hcd */
++ devpath = udev_device_get_devpath(dev);
++ if (strstr(devpath, USBIP_VHCI_DRV_NAME)) {
++ dbg("Skip the device %s already attached to %s\n",
++ devpath, USBIP_VHCI_DRV_NAME);
++ continue;
++ }
++
+ /* Get device information. */
+ idVendor = udev_device_get_sysattr_value(dev, "idVendor");
+ idProduct = udev_device_get_sysattr_value(dev, "idProduct");
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-02-08 0:38 Mike Pagano
From: Mike Pagano @ 2018-02-08 0:38 UTC
To: gentoo-commits
commit: 8f7885176c06072da594527eb21ae23cfd8ddbf5
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 8 00:38:44 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Feb 8 00:38:44 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8f788517
Linux patch 4.15.2
0000_README | 4 +
1001_linux-4.15.2.patch | 3696 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3700 insertions(+)
diff --git a/0000_README b/0000_README
index da07a38..db575f6 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch: 1000_linux-4.15.1.patch
From: http://www.kernel.org
Desc: Linux 4.15.1
+Patch: 1001_linux-4.15.2.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.2
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1001_linux-4.15.2.patch b/1001_linux-4.15.2.patch
new file mode 100644
index 0000000..e9d606b
--- /dev/null
+++ b/1001_linux-4.15.2.patch
@@ -0,0 +1,3696 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 46b26bfee27b..1e762c210f1b 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -2742,8 +2742,6 @@
+ norandmaps Don't use address space randomization. Equivalent to
+ echo 0 > /proc/sys/kernel/randomize_va_space
+
+- noreplace-paravirt [X86,IA-64,PV_OPS] Don't patch paravirt_ops
+-
+ noreplace-smp [X86-32,SMP] Don't replace SMP instructions
+ with UP alternatives
+
+diff --git a/Documentation/speculation.txt b/Documentation/speculation.txt
+new file mode 100644
+index 000000000000..e9e6cbae2841
+--- /dev/null
++++ b/Documentation/speculation.txt
+@@ -0,0 +1,90 @@
++This document explains potential effects of speculation, and how undesirable
++effects can be mitigated portably using common APIs.
++
++===========
++Speculation
++===========
++
++To improve performance and minimize average latencies, many contemporary CPUs
++employ speculative execution techniques such as branch prediction, performing
++work which may be discarded at a later stage.
++
++Typically speculative execution cannot be observed from architectural state,
++such as the contents of registers. However, in some cases it is possible to
++observe its impact on microarchitectural state, such as the presence or
++absence of data in caches. Such state may form side-channels which can be
++observed to extract secret information.
++
++For example, in the presence of branch prediction, it is possible for bounds
++checks to be ignored by code which is speculatively executed. Consider the
++following code:
++
++ int load_array(int *array, unsigned int index)
++ {
++ if (index >= MAX_ARRAY_ELEMS)
++ return 0;
++ else
++ return array[index];
++ }
++
++Which, on arm64, may be compiled to an assembly sequence such as:
++
++ CMP <index>, #MAX_ARRAY_ELEMS
++ B.LT less
++ MOV <returnval>, #0
++ RET
++ less:
++ LDR <returnval>, [<array>, <index>]
++ RET
++
++It is possible that a CPU mis-predicts the conditional branch, and
++speculatively loads array[index], even if index >= MAX_ARRAY_ELEMS. This
++value will subsequently be discarded, but the speculated load may affect
++microarchitectural state which can be subsequently measured.
++
++More complex sequences involving multiple dependent memory accesses may
++result in sensitive information being leaked. Consider the following
++code, building on the prior example:
++
++ int load_dependent_arrays(int *arr1, int *arr2, int index)
++ {
++ int val1, val2,
++
++ val1 = load_array(arr1, index);
++ val2 = load_array(arr2, val1);
++
++ return val2;
++ }
++
++Under speculation, the first call to load_array() may return the value
++of an out-of-bounds address, while the second call will influence
++microarchitectural state dependent on this value. This may provide an
++arbitrary read primitive.
++
++====================================
++Mitigating speculation side-channels
++====================================
++
++The kernel provides a generic API to ensure that bounds checks are
++respected even under speculation. Architectures which are affected by
++speculation-based side-channels are expected to implement these
++primitives.
++
++The array_index_nospec() helper in <linux/nospec.h> can be used to
++prevent information from being leaked via side-channels.
++
++A call to array_index_nospec(index, size) returns a sanitized index
++value that is bounded to [0, size) even under cpu speculation
++conditions.
++
++This can be used to protect the earlier load_array() example:
++
++ int load_array(int *array, unsigned int index)
++ {
++ if (index >= MAX_ARRAY_ELEMS)
++ return 0;
++ else {
++ index = array_index_nospec(index, MAX_ARRAY_ELEMS);
++ return array[index];
++ }
++ }
+diff --git a/Makefile b/Makefile
+index af101b556ba0..54f1bc10b531 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
+index d7d3cc24baf4..21dbdf0e476b 100644
+--- a/arch/x86/entry/common.c
++++ b/arch/x86/entry/common.c
+@@ -21,6 +21,7 @@
+ #include <linux/export.h>
+ #include <linux/context_tracking.h>
+ #include <linux/user-return-notifier.h>
++#include <linux/nospec.h>
+ #include <linux/uprobes.h>
+ #include <linux/livepatch.h>
+ #include <linux/syscalls.h>
+@@ -206,7 +207,7 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
+ * special case only applies after poking regs and before the
+ * very next return to user mode.
+ */
+- current->thread.status &= ~(TS_COMPAT|TS_I386_REGS_POKED);
++ ti->status &= ~(TS_COMPAT|TS_I386_REGS_POKED);
+ #endif
+
+ user_enter_irqoff();
+@@ -282,7 +283,8 @@ __visible void do_syscall_64(struct pt_regs *regs)
+ * regs->orig_ax, which changes the behavior of some syscalls.
+ */
+ if (likely((nr & __SYSCALL_MASK) < NR_syscalls)) {
+- regs->ax = sys_call_table[nr & __SYSCALL_MASK](
++ nr = array_index_nospec(nr & __SYSCALL_MASK, NR_syscalls);
++ regs->ax = sys_call_table[nr](
+ regs->di, regs->si, regs->dx,
+ regs->r10, regs->r8, regs->r9);
+ }
+@@ -304,7 +306,7 @@ static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs)
+ unsigned int nr = (unsigned int)regs->orig_ax;
+
+ #ifdef CONFIG_IA32_EMULATION
+- current->thread.status |= TS_COMPAT;
++ ti->status |= TS_COMPAT;
+ #endif
+
+ if (READ_ONCE(ti->flags) & _TIF_WORK_SYSCALL_ENTRY) {
+@@ -318,6 +320,7 @@ static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs)
+ }
+
+ if (likely(nr < IA32_NR_syscalls)) {
++ nr = array_index_nospec(nr, IA32_NR_syscalls);
+ /*
+ * It's possible that a 32-bit syscall implementation
+ * takes a 64-bit parameter but nonetheless assumes that
+diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
+index 60c4c342316c..2a35b1e0fb90 100644
+--- a/arch/x86/entry/entry_32.S
++++ b/arch/x86/entry/entry_32.S
+@@ -252,7 +252,8 @@ ENTRY(__switch_to_asm)
+ * exist, overwrite the RSB with entries which capture
+ * speculative execution to prevent attack.
+ */
+- FILL_RETURN_BUFFER %ebx, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW
++ /* Clobbers %ebx */
++ FILL_RETURN_BUFFER RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW
+ #endif
+
+ /* restore callee-saved registers */
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index ff6f8022612c..c752abe89d80 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -236,91 +236,20 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
+ pushq %r9 /* pt_regs->r9 */
+ pushq %r10 /* pt_regs->r10 */
+ pushq %r11 /* pt_regs->r11 */
+- sub $(6*8), %rsp /* pt_regs->bp, bx, r12-15 not saved */
+- UNWIND_HINT_REGS extra=0
+-
+- TRACE_IRQS_OFF
+-
+- /*
+- * If we need to do entry work or if we guess we'll need to do
+- * exit work, go straight to the slow path.
+- */
+- movq PER_CPU_VAR(current_task), %r11
+- testl $_TIF_WORK_SYSCALL_ENTRY|_TIF_ALLWORK_MASK, TASK_TI_flags(%r11)
+- jnz entry_SYSCALL64_slow_path
+-
+-entry_SYSCALL_64_fastpath:
+- /*
+- * Easy case: enable interrupts and issue the syscall. If the syscall
+- * needs pt_regs, we'll call a stub that disables interrupts again
+- * and jumps to the slow path.
+- */
+- TRACE_IRQS_ON
+- ENABLE_INTERRUPTS(CLBR_NONE)
+-#if __SYSCALL_MASK == ~0
+- cmpq $__NR_syscall_max, %rax
+-#else
+- andl $__SYSCALL_MASK, %eax
+- cmpl $__NR_syscall_max, %eax
+-#endif
+- ja 1f /* return -ENOSYS (already in pt_regs->ax) */
+- movq %r10, %rcx
+-
+- /*
+- * This call instruction is handled specially in stub_ptregs_64.
+- * It might end up jumping to the slow path. If it jumps, RAX
+- * and all argument registers are clobbered.
+- */
+-#ifdef CONFIG_RETPOLINE
+- movq sys_call_table(, %rax, 8), %rax
+- call __x86_indirect_thunk_rax
+-#else
+- call *sys_call_table(, %rax, 8)
+-#endif
+-.Lentry_SYSCALL_64_after_fastpath_call:
+-
+- movq %rax, RAX(%rsp)
+-1:
++ pushq %rbx /* pt_regs->rbx */
++ pushq %rbp /* pt_regs->rbp */
++ pushq %r12 /* pt_regs->r12 */
++ pushq %r13 /* pt_regs->r13 */
++ pushq %r14 /* pt_regs->r14 */
++ pushq %r15 /* pt_regs->r15 */
++ UNWIND_HINT_REGS
+
+- /*
+- * If we get here, then we know that pt_regs is clean for SYSRET64.
+- * If we see that no exit work is required (which we are required
+- * to check with IRQs off), then we can go straight to SYSRET64.
+- */
+- DISABLE_INTERRUPTS(CLBR_ANY)
+ TRACE_IRQS_OFF
+- movq PER_CPU_VAR(current_task), %r11
+- testl $_TIF_ALLWORK_MASK, TASK_TI_flags(%r11)
+- jnz 1f
+-
+- LOCKDEP_SYS_EXIT
+- TRACE_IRQS_ON /* user mode is traced as IRQs on */
+- movq RIP(%rsp), %rcx
+- movq EFLAGS(%rsp), %r11
+- addq $6*8, %rsp /* skip extra regs -- they were preserved */
+- UNWIND_HINT_EMPTY
+- jmp .Lpop_c_regs_except_rcx_r11_and_sysret
+
+-1:
+- /*
+- * The fast path looked good when we started, but something changed
+- * along the way and we need to switch to the slow path. Calling
+- * raise(3) will trigger this, for example. IRQs are off.
+- */
+- TRACE_IRQS_ON
+- ENABLE_INTERRUPTS(CLBR_ANY)
+- SAVE_EXTRA_REGS
+- movq %rsp, %rdi
+- call syscall_return_slowpath /* returns with IRQs disabled */
+- jmp return_from_SYSCALL_64
+-
+-entry_SYSCALL64_slow_path:
+ /* IRQs are off. */
+- SAVE_EXTRA_REGS
+ movq %rsp, %rdi
+ call do_syscall_64 /* returns with IRQs disabled */
+
+-return_from_SYSCALL_64:
+ TRACE_IRQS_IRETQ /* we're about to change IF */
+
+ /*
+@@ -393,7 +322,6 @@ syscall_return_via_sysret:
+ /* rcx and r11 are already restored (see code above) */
+ UNWIND_HINT_EMPTY
+ POP_EXTRA_REGS
+-.Lpop_c_regs_except_rcx_r11_and_sysret:
+ popq %rsi /* skip r11 */
+ popq %r10
+ popq %r9
+@@ -424,47 +352,6 @@ syscall_return_via_sysret:
+ USERGS_SYSRET64
+ END(entry_SYSCALL_64)
+
+-ENTRY(stub_ptregs_64)
+- /*
+- * Syscalls marked as needing ptregs land here.
+- * If we are on the fast path, we need to save the extra regs,
+- * which we achieve by trying again on the slow path. If we are on
+- * the slow path, the extra regs are already saved.
+- *
+- * RAX stores a pointer to the C function implementing the syscall.
+- * IRQs are on.
+- */
+- cmpq $.Lentry_SYSCALL_64_after_fastpath_call, (%rsp)
+- jne 1f
+-
+- /*
+- * Called from fast path -- disable IRQs again, pop return address
+- * and jump to slow path
+- */
+- DISABLE_INTERRUPTS(CLBR_ANY)
+- TRACE_IRQS_OFF
+- popq %rax
+- UNWIND_HINT_REGS extra=0
+- jmp entry_SYSCALL64_slow_path
+-
+-1:
+- JMP_NOSPEC %rax /* Called from C */
+-END(stub_ptregs_64)
+-
+-.macro ptregs_stub func
+-ENTRY(ptregs_\func)
+- UNWIND_HINT_FUNC
+- leaq \func(%rip), %rax
+- jmp stub_ptregs_64
+-END(ptregs_\func)
+-.endm
+-
+-/* Instantiate ptregs_stub for each ptregs-using syscall */
+-#define __SYSCALL_64_QUAL_(sym)
+-#define __SYSCALL_64_QUAL_ptregs(sym) ptregs_stub sym
+-#define __SYSCALL_64(nr, sym, qual) __SYSCALL_64_QUAL_##qual(sym)
+-#include <asm/syscalls_64.h>
+-
+ /*
+ * %rdi: prev task
+ * %rsi: next task
+@@ -499,7 +386,8 @@ ENTRY(__switch_to_asm)
+ * exist, overwrite the RSB with entries which capture
+ * speculative execution to prevent attack.
+ */
+- FILL_RETURN_BUFFER %r12, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW
++ /* Clobbers %rbx */
++ FILL_RETURN_BUFFER RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW
+ #endif
+
+ /* restore callee-saved registers */
+diff --git a/arch/x86/entry/syscall_64.c b/arch/x86/entry/syscall_64.c
+index 9c09775e589d..c176d2fab1da 100644
+--- a/arch/x86/entry/syscall_64.c
++++ b/arch/x86/entry/syscall_64.c
+@@ -7,14 +7,11 @@
+ #include <asm/asm-offsets.h>
+ #include <asm/syscall.h>
+
+-#define __SYSCALL_64_QUAL_(sym) sym
+-#define __SYSCALL_64_QUAL_ptregs(sym) ptregs_##sym
+-
+-#define __SYSCALL_64(nr, sym, qual) extern asmlinkage long __SYSCALL_64_QUAL_##qual(sym)(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
++#define __SYSCALL_64(nr, sym, qual) extern asmlinkage long sym(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
+ #include <asm/syscalls_64.h>
+ #undef __SYSCALL_64
+
+-#define __SYSCALL_64(nr, sym, qual) [nr] = __SYSCALL_64_QUAL_##qual(sym),
++#define __SYSCALL_64(nr, sym, qual) [nr] = sym,
+
+ extern long sys_ni_syscall(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
+
+diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h
+index 1908214b9125..4d111616524b 100644
+--- a/arch/x86/include/asm/asm-prototypes.h
++++ b/arch/x86/include/asm/asm-prototypes.h
+@@ -38,4 +38,7 @@ INDIRECT_THUNK(dx)
+ INDIRECT_THUNK(si)
+ INDIRECT_THUNK(di)
+ INDIRECT_THUNK(bp)
++asmlinkage void __fill_rsb(void);
++asmlinkage void __clear_rsb(void);
++
+ #endif /* CONFIG_RETPOLINE */
+diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
+index 7fb336210e1b..30d406146016 100644
+--- a/arch/x86/include/asm/barrier.h
++++ b/arch/x86/include/asm/barrier.h
+@@ -24,6 +24,34 @@
+ #define wmb() asm volatile("sfence" ::: "memory")
+ #endif
+
++/**
++ * array_index_mask_nospec() - generate a mask that is ~0UL when the
++ * bounds check succeeds and 0 otherwise
++ * @index: array element index
++ * @size: number of elements in array
++ *
++ * Returns:
++ * 0 - (index < size)
++ */
++static inline unsigned long array_index_mask_nospec(unsigned long index,
++ unsigned long size)
++{
++ unsigned long mask;
++
++ asm ("cmp %1,%2; sbb %0,%0;"
++ :"=r" (mask)
++ :"r"(size),"r" (index)
++ :"cc");
++ return mask;
++}
++
++/* Override the default implementation from linux/nospec.h. */
++#define array_index_mask_nospec array_index_mask_nospec
++
++/* Prevent speculative execution past this barrier. */
++#define barrier_nospec() alternative_2("", "mfence", X86_FEATURE_MFENCE_RDTSC, \
++ "lfence", X86_FEATURE_LFENCE_RDTSC)
++
+ #ifdef CONFIG_X86_PPRO_FENCE
+ #define dma_rmb() rmb()
+ #else
+diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
+index ea9a7dde62e5..70eddb3922ff 100644
+--- a/arch/x86/include/asm/cpufeature.h
++++ b/arch/x86/include/asm/cpufeature.h
+@@ -29,6 +29,7 @@ enum cpuid_leafs
+ CPUID_8000_000A_EDX,
+ CPUID_7_ECX,
+ CPUID_8000_0007_EBX,
++ CPUID_7_EDX,
+ };
+
+ #ifdef CONFIG_X86_FEATURE_NAMES
+@@ -79,8 +80,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
+ CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 15, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 16, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 17, feature_bit) || \
++ CHECK_BIT_IN_MASK_WORD(REQUIRED_MASK, 18, feature_bit) || \
+ REQUIRED_MASK_CHECK || \
+- BUILD_BUG_ON_ZERO(NCAPINTS != 18))
++ BUILD_BUG_ON_ZERO(NCAPINTS != 19))
+
+ #define DISABLED_MASK_BIT_SET(feature_bit) \
+ ( CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 0, feature_bit) || \
+@@ -101,8 +103,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
+ CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 15, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 16, feature_bit) || \
+ CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 17, feature_bit) || \
++ CHECK_BIT_IN_MASK_WORD(DISABLED_MASK, 18, feature_bit) || \
+ DISABLED_MASK_CHECK || \
+- BUILD_BUG_ON_ZERO(NCAPINTS != 18))
++ BUILD_BUG_ON_ZERO(NCAPINTS != 19))
+
+ #define cpu_has(c, bit) \
+ (__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 25b9375c1484..73b5fff159a4 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -13,7 +13,7 @@
+ /*
+ * Defines x86 CPU feature bits
+ */
+-#define NCAPINTS 18 /* N 32-bit words worth of info */
++#define NCAPINTS 19 /* N 32-bit words worth of info */
+ #define NBUGINTS 1 /* N 32-bit bug flags */
+
+ /*
+@@ -203,14 +203,14 @@
+ #define X86_FEATURE_PROC_FEEDBACK ( 7*32+ 9) /* AMD ProcFeedbackInterface */
+ #define X86_FEATURE_SME ( 7*32+10) /* AMD Secure Memory Encryption */
+ #define X86_FEATURE_PTI ( 7*32+11) /* Kernel Page Table Isolation enabled */
+-#define X86_FEATURE_RETPOLINE ( 7*32+12) /* Generic Retpoline mitigation for Spectre variant 2 */
+-#define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* AMD Retpoline mitigation for Spectre variant 2 */
++#define X86_FEATURE_RETPOLINE ( 7*32+12) /* "" Generic Retpoline mitigation for Spectre variant 2 */
++#define X86_FEATURE_RETPOLINE_AMD ( 7*32+13) /* "" AMD Retpoline mitigation for Spectre variant 2 */
+ #define X86_FEATURE_INTEL_PPIN ( 7*32+14) /* Intel Processor Inventory Number */
+-#define X86_FEATURE_AVX512_4VNNIW ( 7*32+16) /* AVX-512 Neural Network Instructions */
+-#define X86_FEATURE_AVX512_4FMAPS ( 7*32+17) /* AVX-512 Multiply Accumulation Single precision */
+
+ #define X86_FEATURE_MBA ( 7*32+18) /* Memory Bandwidth Allocation */
+-#define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* Fill RSB on context switches */
++#define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* "" Fill RSB on context switches */
++
++#define X86_FEATURE_USE_IBPB ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
+
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
+@@ -271,6 +271,9 @@
+ #define X86_FEATURE_CLZERO (13*32+ 0) /* CLZERO instruction */
+ #define X86_FEATURE_IRPERF (13*32+ 1) /* Instructions Retired Count */
+ #define X86_FEATURE_XSAVEERPTR (13*32+ 2) /* Always save/restore FP error pointers */
++#define X86_FEATURE_IBPB (13*32+12) /* Indirect Branch Prediction Barrier */
++#define X86_FEATURE_IBRS (13*32+14) /* Indirect Branch Restricted Speculation */
++#define X86_FEATURE_STIBP (13*32+15) /* Single Thread Indirect Branch Predictors */
+
+ /* Thermal and Power Management Leaf, CPUID level 0x00000006 (EAX), word 14 */
+ #define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */
+@@ -319,6 +322,13 @@
+ #define X86_FEATURE_SUCCOR (17*32+ 1) /* Uncorrectable error containment and recovery */
+ #define X86_FEATURE_SMCA (17*32+ 3) /* Scalable MCA */
+
++/* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
++#define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* AVX-512 Neural Network Instructions */
++#define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
++#define X86_FEATURE_SPEC_CTRL (18*32+26) /* "" Speculation Control (IBRS + IBPB) */
++#define X86_FEATURE_INTEL_STIBP (18*32+27) /* "" Single Thread Indirect Branch Predictors */
++#define X86_FEATURE_ARCH_CAPABILITIES (18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
++
+ /*
+ * BUG word(s)
+ */
+diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
+index b027633e7300..33833d1909af 100644
+--- a/arch/x86/include/asm/disabled-features.h
++++ b/arch/x86/include/asm/disabled-features.h
+@@ -77,6 +77,7 @@
+ #define DISABLED_MASK15 0
+ #define DISABLED_MASK16 (DISABLE_PKU|DISABLE_OSPKE|DISABLE_LA57|DISABLE_UMIP)
+ #define DISABLED_MASK17 0
+-#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 18)
++#define DISABLED_MASK18 0
++#define DISABLED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19)
+
+ #endif /* _ASM_X86_DISABLED_FEATURES_H */
+diff --git a/arch/x86/include/asm/fixmap.h b/arch/x86/include/asm/fixmap.h
+index 64c4a30e0d39..e203169931c7 100644
+--- a/arch/x86/include/asm/fixmap.h
++++ b/arch/x86/include/asm/fixmap.h
+@@ -137,8 +137,10 @@ enum fixed_addresses {
+
+ extern void reserve_top_address(unsigned long reserve);
+
+-#define FIXADDR_SIZE (__end_of_permanent_fixed_addresses << PAGE_SHIFT)
+-#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
++#define FIXADDR_SIZE (__end_of_permanent_fixed_addresses << PAGE_SHIFT)
++#define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
++#define FIXADDR_TOT_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
++#define FIXADDR_TOT_START (FIXADDR_TOP - FIXADDR_TOT_SIZE)
+
+ extern int fixmaps_set;
+
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index e7b983a35506..e520a1e6fc11 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -39,6 +39,13 @@
+
+ /* Intel MSRs. Some also available on other CPUs */
+
++#define MSR_IA32_SPEC_CTRL 0x00000048 /* Speculation Control */
++#define SPEC_CTRL_IBRS (1 << 0) /* Indirect Branch Restricted Speculation */
++#define SPEC_CTRL_STIBP (1 << 1) /* Single Thread Indirect Branch Predictors */
++
++#define MSR_IA32_PRED_CMD 0x00000049 /* Prediction Command */
++#define PRED_CMD_IBPB (1 << 0) /* Indirect Branch Prediction Barrier */
++
+ #define MSR_PPIN_CTL 0x0000004e
+ #define MSR_PPIN 0x0000004f
+
+@@ -57,6 +64,11 @@
+ #define SNB_C3_AUTO_UNDEMOTE (1UL << 28)
+
+ #define MSR_MTRRcap 0x000000fe
++
++#define MSR_IA32_ARCH_CAPABILITIES 0x0000010a
++#define ARCH_CAP_RDCL_NO (1 << 0) /* Not susceptible to Meltdown */
++#define ARCH_CAP_IBRS_ALL (1 << 1) /* Enhanced IBRS support */
++
+ #define MSR_IA32_BBL_CR_CTL 0x00000119
+ #define MSR_IA32_BBL_CR_CTL3 0x0000011e
+
+diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
+index 07962f5f6fba..30df295f6d94 100644
+--- a/arch/x86/include/asm/msr.h
++++ b/arch/x86/include/asm/msr.h
+@@ -214,8 +214,7 @@ static __always_inline unsigned long long rdtsc_ordered(void)
+ * that some other imaginary CPU is updating continuously with a
+ * time stamp.
+ */
+- alternative_2("", "mfence", X86_FEATURE_MFENCE_RDTSC,
+- "lfence", X86_FEATURE_LFENCE_RDTSC);
++ barrier_nospec();
+ return rdtsc();
+ }
+
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 4ad41087ce0e..4d57894635f2 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -1,56 +1,12 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+
+-#ifndef __NOSPEC_BRANCH_H__
+-#define __NOSPEC_BRANCH_H__
++#ifndef _ASM_X86_NOSPEC_BRANCH_H_
++#define _ASM_X86_NOSPEC_BRANCH_H_
+
+ #include <asm/alternative.h>
+ #include <asm/alternative-asm.h>
+ #include <asm/cpufeatures.h>
+
+-/*
+- * Fill the CPU return stack buffer.
+- *
+- * Each entry in the RSB, if used for a speculative 'ret', contains an
+- * infinite 'pause; lfence; jmp' loop to capture speculative execution.
+- *
+- * This is required in various cases for retpoline and IBRS-based
+- * mitigations for the Spectre variant 2 vulnerability. Sometimes to
+- * eliminate potentially bogus entries from the RSB, and sometimes
+- * purely to ensure that it doesn't get empty, which on some CPUs would
+- * allow predictions from other (unwanted!) sources to be used.
+- *
+- * We define a CPP macro such that it can be used from both .S files and
+- * inline assembly. It's possible to do a .macro and then include that
+- * from C via asm(".include <asm/nospec-branch.h>") but let's not go there.
+- */
+-
+-#define RSB_CLEAR_LOOPS 32 /* To forcibly overwrite all entries */
+-#define RSB_FILL_LOOPS 16 /* To avoid underflow */
+-
+-/*
+- * Google experimented with loop-unrolling and this turned out to be
+- * the optimal version — two calls, each with their own speculation
+- * trap should their return address end up getting used, in a loop.
+- */
+-#define __FILL_RETURN_BUFFER(reg, nr, sp) \
+- mov $(nr/2), reg; \
+-771: \
+- call 772f; \
+-773: /* speculation trap */ \
+- pause; \
+- lfence; \
+- jmp 773b; \
+-772: \
+- call 774f; \
+-775: /* speculation trap */ \
+- pause; \
+- lfence; \
+- jmp 775b; \
+-774: \
+- dec reg; \
+- jnz 771b; \
+- add $(BITS_PER_LONG/8) * nr, sp;
+-
+ #ifdef __ASSEMBLY__
+
+ /*
+@@ -121,17 +77,10 @@
+ #endif
+ .endm
+
+- /*
+- * A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
+- * monstrosity above, manually.
+- */
+-.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req
++/* This clobbers the BX register */
++.macro FILL_RETURN_BUFFER nr:req ftr:req
+ #ifdef CONFIG_RETPOLINE
+- ANNOTATE_NOSPEC_ALTERNATIVE
+- ALTERNATIVE "jmp .Lskip_rsb_\@", \
+- __stringify(__FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP)) \
+- \ftr
+-.Lskip_rsb_\@:
++ ALTERNATIVE "", "call __clear_rsb", \ftr
+ #endif
+ .endm
+
+@@ -201,22 +150,25 @@ extern char __indirect_thunk_end[];
+ * On VMEXIT we must ensure that no RSB predictions learned in the guest
+ * can be followed in the host, by overwriting the RSB completely. Both
+ * retpoline and IBRS mitigations for Spectre v2 need this; only on future
+- * CPUs with IBRS_ATT *might* it be avoided.
++ * CPUs with IBRS_ALL *might* it be avoided.
+ */
+ static inline void vmexit_fill_RSB(void)
+ {
+ #ifdef CONFIG_RETPOLINE
+- unsigned long loops;
+-
+- asm volatile (ANNOTATE_NOSPEC_ALTERNATIVE
+- ALTERNATIVE("jmp 910f",
+- __stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1)),
+- X86_FEATURE_RETPOLINE)
+- "910:"
+- : "=r" (loops), ASM_CALL_CONSTRAINT
+- : : "memory" );
++ alternative_input("",
++ "call __fill_rsb",
++ X86_FEATURE_RETPOLINE,
++ ASM_NO_INPUT_CLOBBER(_ASM_BX, "memory"));
+ #endif
+ }
+
++static inline void indirect_branch_prediction_barrier(void)
++{
++ alternative_input("",
++ "call __ibp_barrier",
++ X86_FEATURE_USE_IBPB,
++ ASM_NO_INPUT_CLOBBER("eax", "ecx", "edx", "memory"));
++}
++
+ #endif /* __ASSEMBLY__ */
+-#endif /* __NOSPEC_BRANCH_H__ */
++#endif /* _ASM_X86_NOSPEC_BRANCH_H_ */
+diff --git a/arch/x86/include/asm/pgtable_32_types.h b/arch/x86/include/asm/pgtable_32_types.h
+index ce245b0cdfca..0777e18a1d23 100644
+--- a/arch/x86/include/asm/pgtable_32_types.h
++++ b/arch/x86/include/asm/pgtable_32_types.h
+@@ -44,8 +44,9 @@ extern bool __vmalloc_start_set; /* set once high_memory is set */
+ */
+ #define CPU_ENTRY_AREA_PAGES (NR_CPUS * 40)
+
+-#define CPU_ENTRY_AREA_BASE \
+- ((FIXADDR_START - PAGE_SIZE * (CPU_ENTRY_AREA_PAGES + 1)) & PMD_MASK)
++#define CPU_ENTRY_AREA_BASE \
++ ((FIXADDR_TOT_START - PAGE_SIZE * (CPU_ENTRY_AREA_PAGES + 1)) \
++ & PMD_MASK)
+
+ #define PKMAP_BASE \
+ ((CPU_ENTRY_AREA_BASE - PAGE_SIZE) & PMD_MASK)
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index d3a67fba200a..513f9604c192 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -460,8 +460,6 @@ struct thread_struct {
+ unsigned short gsindex;
+ #endif
+
+- u32 status; /* thread synchronous flags */
+-
+ #ifdef CONFIG_X86_64
+ unsigned long fsbase;
+ unsigned long gsbase;
+@@ -971,4 +969,7 @@ bool xen_set_default_idle(void);
+
+ void stop_this_cpu(void *dummy);
+ void df_debug(struct pt_regs *regs, long error_code);
++
++void __ibp_barrier(void);
++
+ #endif /* _ASM_X86_PROCESSOR_H */
+diff --git a/arch/x86/include/asm/required-features.h b/arch/x86/include/asm/required-features.h
+index d91ba04dd007..fb3a6de7440b 100644
+--- a/arch/x86/include/asm/required-features.h
++++ b/arch/x86/include/asm/required-features.h
+@@ -106,6 +106,7 @@
+ #define REQUIRED_MASK15 0
+ #define REQUIRED_MASK16 (NEED_LA57)
+ #define REQUIRED_MASK17 0
+-#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 18)
++#define REQUIRED_MASK18 0
++#define REQUIRED_MASK_CHECK BUILD_BUG_ON_ZERO(NCAPINTS != 19)
+
+ #endif /* _ASM_X86_REQUIRED_FEATURES_H */
+diff --git a/arch/x86/include/asm/syscall.h b/arch/x86/include/asm/syscall.h
+index e3c95e8e61c5..03eedc21246d 100644
+--- a/arch/x86/include/asm/syscall.h
++++ b/arch/x86/include/asm/syscall.h
+@@ -60,7 +60,7 @@ static inline long syscall_get_error(struct task_struct *task,
+ * TS_COMPAT is set for 32-bit syscall entries and then
+ * remains set until we return to user mode.
+ */
+- if (task->thread.status & (TS_COMPAT|TS_I386_REGS_POKED))
++ if (task->thread_info.status & (TS_COMPAT|TS_I386_REGS_POKED))
+ /*
+ * Sign-extend the value so (int)-EFOO becomes (long)-EFOO
+ * and will match correctly in comparisons.
+@@ -116,7 +116,7 @@ static inline void syscall_get_arguments(struct task_struct *task,
+ unsigned long *args)
+ {
+ # ifdef CONFIG_IA32_EMULATION
+- if (task->thread.status & TS_COMPAT)
++ if (task->thread_info.status & TS_COMPAT)
+ switch (i) {
+ case 0:
+ if (!n--) break;
+@@ -177,7 +177,7 @@ static inline void syscall_set_arguments(struct task_struct *task,
+ const unsigned long *args)
+ {
+ # ifdef CONFIG_IA32_EMULATION
+- if (task->thread.status & TS_COMPAT)
++ if (task->thread_info.status & TS_COMPAT)
+ switch (i) {
+ case 0:
+ if (!n--) break;
+diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
+index 00223333821a..eda3b6823ca4 100644
+--- a/arch/x86/include/asm/thread_info.h
++++ b/arch/x86/include/asm/thread_info.h
+@@ -55,6 +55,7 @@ struct task_struct;
+
+ struct thread_info {
+ unsigned long flags; /* low level flags */
++ u32 status; /* thread synchronous flags */
+ };
+
+ #define INIT_THREAD_INFO(tsk) \
+@@ -221,7 +222,7 @@ static inline int arch_within_stack_frames(const void * const stack,
+ #define in_ia32_syscall() true
+ #else
+ #define in_ia32_syscall() (IS_ENABLED(CONFIG_IA32_EMULATION) && \
+- current->thread.status & TS_COMPAT)
++ current_thread_info()->status & TS_COMPAT)
+ #endif
+
+ /*
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index d33e4a26dc7e..2b8f18ca5874 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -174,6 +174,8 @@ struct tlb_state {
+ struct mm_struct *loaded_mm;
+ u16 loaded_mm_asid;
+ u16 next_asid;
++ /* last user mm's ctx id */
++ u64 last_ctx_id;
+
+ /*
+ * We can be in one of several states:
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index 574dff4d2913..aae77eb8491c 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -124,6 +124,11 @@ extern int __get_user_bad(void);
+
+ #define __uaccess_begin() stac()
+ #define __uaccess_end() clac()
++#define __uaccess_begin_nospec() \
++({ \
++ stac(); \
++ barrier_nospec(); \
++})
+
+ /*
+ * This is a type: either unsigned long, if the argument fits into
+@@ -445,7 +450,7 @@ do { \
+ ({ \
+ int __gu_err; \
+ __inttype(*(ptr)) __gu_val; \
+- __uaccess_begin(); \
++ __uaccess_begin_nospec(); \
+ __get_user_size(__gu_val, (ptr), (size), __gu_err, -EFAULT); \
+ __uaccess_end(); \
+ (x) = (__force __typeof__(*(ptr)))__gu_val; \
+@@ -487,6 +492,10 @@ struct __large_struct { unsigned long buf[100]; };
+ __uaccess_begin(); \
+ barrier();
+
++#define uaccess_try_nospec do { \
++ current->thread.uaccess_err = 0; \
++ __uaccess_begin_nospec(); \
++
+ #define uaccess_catch(err) \
+ __uaccess_end(); \
+ (err) |= (current->thread.uaccess_err ? -EFAULT : 0); \
+@@ -548,7 +557,7 @@ struct __large_struct { unsigned long buf[100]; };
+ * get_user_ex(...);
+ * } get_user_catch(err)
+ */
+-#define get_user_try uaccess_try
++#define get_user_try uaccess_try_nospec
+ #define get_user_catch(err) uaccess_catch(err)
+
+ #define get_user_ex(x, ptr) do { \
+@@ -582,7 +591,7 @@ extern void __cmpxchg_wrong_size(void)
+ __typeof__(ptr) __uval = (uval); \
+ __typeof__(*(ptr)) __old = (old); \
+ __typeof__(*(ptr)) __new = (new); \
+- __uaccess_begin(); \
++ __uaccess_begin_nospec(); \
+ switch (size) { \
+ case 1: \
+ { \
+diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
+index 72950401b223..ba2dc1930630 100644
+--- a/arch/x86/include/asm/uaccess_32.h
++++ b/arch/x86/include/asm/uaccess_32.h
+@@ -29,21 +29,21 @@ raw_copy_from_user(void *to, const void __user *from, unsigned long n)
+ switch (n) {
+ case 1:
+ ret = 0;
+- __uaccess_begin();
++ __uaccess_begin_nospec();
+ __get_user_asm_nozero(*(u8 *)to, from, ret,
+ "b", "b", "=q", 1);
+ __uaccess_end();
+ return ret;
+ case 2:
+ ret = 0;
+- __uaccess_begin();
++ __uaccess_begin_nospec();
+ __get_user_asm_nozero(*(u16 *)to, from, ret,
+ "w", "w", "=r", 2);
+ __uaccess_end();
+ return ret;
+ case 4:
+ ret = 0;
+- __uaccess_begin();
++ __uaccess_begin_nospec();
+ __get_user_asm_nozero(*(u32 *)to, from, ret,
+ "l", "k", "=r", 4);
+ __uaccess_end();
+diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
+index f07ef3c575db..62546b3a398e 100644
+--- a/arch/x86/include/asm/uaccess_64.h
++++ b/arch/x86/include/asm/uaccess_64.h
+@@ -55,31 +55,31 @@ raw_copy_from_user(void *dst, const void __user *src, unsigned long size)
+ return copy_user_generic(dst, (__force void *)src, size);
+ switch (size) {
+ case 1:
+- __uaccess_begin();
++ __uaccess_begin_nospec();
+ __get_user_asm_nozero(*(u8 *)dst, (u8 __user *)src,
+ ret, "b", "b", "=q", 1);
+ __uaccess_end();
+ return ret;
+ case 2:
+- __uaccess_begin();
++ __uaccess_begin_nospec();
+ __get_user_asm_nozero(*(u16 *)dst, (u16 __user *)src,
+ ret, "w", "w", "=r", 2);
+ __uaccess_end();
+ return ret;
+ case 4:
+- __uaccess_begin();
++ __uaccess_begin_nospec();
+ __get_user_asm_nozero(*(u32 *)dst, (u32 __user *)src,
+ ret, "l", "k", "=r", 4);
+ __uaccess_end();
+ return ret;
+ case 8:
+- __uaccess_begin();
++ __uaccess_begin_nospec();
+ __get_user_asm_nozero(*(u64 *)dst, (u64 __user *)src,
+ ret, "q", "", "=r", 8);
+ __uaccess_end();
+ return ret;
+ case 10:
+- __uaccess_begin();
++ __uaccess_begin_nospec();
+ __get_user_asm_nozero(*(u64 *)dst, (u64 __user *)src,
+ ret, "q", "", "=r", 10);
+ if (likely(!ret))
+@@ -89,7 +89,7 @@ raw_copy_from_user(void *dst, const void __user *src, unsigned long size)
+ __uaccess_end();
+ return ret;
+ case 16:
+- __uaccess_begin();
++ __uaccess_begin_nospec();
+ __get_user_asm_nozero(*(u64 *)dst, (u64 __user *)src,
+ ret, "q", "", "=r", 16);
+ if (likely(!ret))
+diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
+index 4817d743c263..a481763a3776 100644
+--- a/arch/x86/kernel/alternative.c
++++ b/arch/x86/kernel/alternative.c
+@@ -46,17 +46,6 @@ static int __init setup_noreplace_smp(char *str)
+ }
+ __setup("noreplace-smp", setup_noreplace_smp);
+
+-#ifdef CONFIG_PARAVIRT
+-static int __initdata_or_module noreplace_paravirt = 0;
+-
+-static int __init setup_noreplace_paravirt(char *str)
+-{
+- noreplace_paravirt = 1;
+- return 1;
+-}
+-__setup("noreplace-paravirt", setup_noreplace_paravirt);
+-#endif
+-
+ #define DPRINTK(fmt, args...) \
+ do { \
+ if (debug_alternative) \
+@@ -298,7 +287,7 @@ recompute_jump(struct alt_instr *a, u8 *orig_insn, u8 *repl_insn, u8 *insnbuf)
+ tgt_rip = next_rip + o_dspl;
+ n_dspl = tgt_rip - orig_insn;
+
+- DPRINTK("target RIP: %p, new_displ: 0x%x", tgt_rip, n_dspl);
++ DPRINTK("target RIP: %px, new_displ: 0x%x", tgt_rip, n_dspl);
+
+ if (tgt_rip - orig_insn >= 0) {
+ if (n_dspl - 2 <= 127)
+@@ -355,7 +344,7 @@ static void __init_or_module noinline optimize_nops(struct alt_instr *a, u8 *ins
+ add_nops(instr + (a->instrlen - a->padlen), a->padlen);
+ local_irq_restore(flags);
+
+- DUMP_BYTES(instr, a->instrlen, "%p: [%d:%d) optimized NOPs: ",
++ DUMP_BYTES(instr, a->instrlen, "%px: [%d:%d) optimized NOPs: ",
+ instr, a->instrlen - a->padlen, a->padlen);
+ }
+
+@@ -376,7 +365,7 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
+ u8 *instr, *replacement;
+ u8 insnbuf[MAX_PATCH_LEN];
+
+- DPRINTK("alt table %p -> %p", start, end);
++ DPRINTK("alt table %px, -> %px", start, end);
+ /*
+ * The scan order should be from start to end. A later scanned
+ * alternative code can overwrite previously scanned alternative code.
+@@ -400,14 +389,14 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
+ continue;
+ }
+
+- DPRINTK("feat: %d*32+%d, old: (%p, len: %d), repl: (%p, len: %d), pad: %d",
++ DPRINTK("feat: %d*32+%d, old: (%px len: %d), repl: (%px, len: %d), pad: %d",
+ a->cpuid >> 5,
+ a->cpuid & 0x1f,
+ instr, a->instrlen,
+ replacement, a->replacementlen, a->padlen);
+
+- DUMP_BYTES(instr, a->instrlen, "%p: old_insn: ", instr);
+- DUMP_BYTES(replacement, a->replacementlen, "%p: rpl_insn: ", replacement);
++ DUMP_BYTES(instr, a->instrlen, "%px: old_insn: ", instr);
++ DUMP_BYTES(replacement, a->replacementlen, "%px: rpl_insn: ", replacement);
+
+ memcpy(insnbuf, replacement, a->replacementlen);
+ insnbuf_sz = a->replacementlen;
+@@ -433,7 +422,7 @@ void __init_or_module noinline apply_alternatives(struct alt_instr *start,
+ a->instrlen - a->replacementlen);
+ insnbuf_sz += a->instrlen - a->replacementlen;
+ }
+- DUMP_BYTES(insnbuf, insnbuf_sz, "%p: final_insn: ", instr);
++ DUMP_BYTES(insnbuf, insnbuf_sz, "%px: final_insn: ", instr);
+
+ text_poke_early(instr, insnbuf, insnbuf_sz);
+ }
+@@ -599,9 +588,6 @@ void __init_or_module apply_paravirt(struct paravirt_patch_site *start,
+ struct paravirt_patch_site *p;
+ char insnbuf[MAX_PATCH_LEN];
+
+- if (noreplace_paravirt)
+- return;
+-
+ for (p = start; p < end; p++) {
+ unsigned int used;
+
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 390b3dc3d438..71949bf2de5a 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -11,6 +11,7 @@
+ #include <linux/init.h>
+ #include <linux/utsname.h>
+ #include <linux/cpu.h>
++#include <linux/module.h>
+
+ #include <asm/nospec-branch.h>
+ #include <asm/cmdline.h>
+@@ -90,20 +91,41 @@ static const char *spectre_v2_strings[] = {
+ };
+
+ #undef pr_fmt
+-#define pr_fmt(fmt) "Spectre V2 mitigation: " fmt
++#define pr_fmt(fmt) "Spectre V2 : " fmt
+
+ static enum spectre_v2_mitigation spectre_v2_enabled = SPECTRE_V2_NONE;
+
++#ifdef RETPOLINE
++static bool spectre_v2_bad_module;
++
++bool retpoline_module_ok(bool has_retpoline)
++{
++ if (spectre_v2_enabled == SPECTRE_V2_NONE || has_retpoline)
++ return true;
++
++ pr_err("System may be vulnerable to spectre v2\n");
++ spectre_v2_bad_module = true;
++ return false;
++}
++
++static inline const char *spectre_v2_module_string(void)
++{
++ return spectre_v2_bad_module ? " - vulnerable module loaded" : "";
++}
++#else
++static inline const char *spectre_v2_module_string(void) { return ""; }
++#endif
++
+ static void __init spec2_print_if_insecure(const char *reason)
+ {
+ if (boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+- pr_info("%s\n", reason);
++ pr_info("%s selected on command line.\n", reason);
+ }
+
+ static void __init spec2_print_if_secure(const char *reason)
+ {
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+- pr_info("%s\n", reason);
++ pr_info("%s selected on command line.\n", reason);
+ }
+
+ static inline bool retp_compiler(void)
+@@ -118,42 +140,68 @@ static inline bool match_option(const char *arg, int arglen, const char *opt)
+ return len == arglen && !strncmp(arg, opt, len);
+ }
+
++static const struct {
++ const char *option;
++ enum spectre_v2_mitigation_cmd cmd;
++ bool secure;
++} mitigation_options[] = {
++ { "off", SPECTRE_V2_CMD_NONE, false },
++ { "on", SPECTRE_V2_CMD_FORCE, true },
++ { "retpoline", SPECTRE_V2_CMD_RETPOLINE, false },
++ { "retpoline,amd", SPECTRE_V2_CMD_RETPOLINE_AMD, false },
++ { "retpoline,generic", SPECTRE_V2_CMD_RETPOLINE_GENERIC, false },
++ { "auto", SPECTRE_V2_CMD_AUTO, false },
++};
++
+ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ {
+ char arg[20];
+- int ret;
+-
+- ret = cmdline_find_option(boot_command_line, "spectre_v2", arg,
+- sizeof(arg));
+- if (ret > 0) {
+- if (match_option(arg, ret, "off")) {
+- goto disable;
+- } else if (match_option(arg, ret, "on")) {
+- spec2_print_if_secure("force enabled on command line.");
+- return SPECTRE_V2_CMD_FORCE;
+- } else if (match_option(arg, ret, "retpoline")) {
+- spec2_print_if_insecure("retpoline selected on command line.");
+- return SPECTRE_V2_CMD_RETPOLINE;
+- } else if (match_option(arg, ret, "retpoline,amd")) {
+- if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {
+- pr_err("retpoline,amd selected but CPU is not AMD. Switching to AUTO select\n");
+- return SPECTRE_V2_CMD_AUTO;
+- }
+- spec2_print_if_insecure("AMD retpoline selected on command line.");
+- return SPECTRE_V2_CMD_RETPOLINE_AMD;
+- } else if (match_option(arg, ret, "retpoline,generic")) {
+- spec2_print_if_insecure("generic retpoline selected on command line.");
+- return SPECTRE_V2_CMD_RETPOLINE_GENERIC;
+- } else if (match_option(arg, ret, "auto")) {
++ int ret, i;
++ enum spectre_v2_mitigation_cmd cmd = SPECTRE_V2_CMD_AUTO;
++
++ if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
++ return SPECTRE_V2_CMD_NONE;
++ else {
++ ret = cmdline_find_option(boot_command_line, "spectre_v2", arg,
++ sizeof(arg));
++ if (ret < 0)
++ return SPECTRE_V2_CMD_AUTO;
++
++ for (i = 0; i < ARRAY_SIZE(mitigation_options); i++) {
++ if (!match_option(arg, ret, mitigation_options[i].option))
++ continue;
++ cmd = mitigation_options[i].cmd;
++ break;
++ }
++
++ if (i >= ARRAY_SIZE(mitigation_options)) {
++ pr_err("unknown option (%s). Switching to AUTO select\n",
++ mitigation_options[i].option);
+ return SPECTRE_V2_CMD_AUTO;
+ }
+ }
+
+- if (!cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
++ if ((cmd == SPECTRE_V2_CMD_RETPOLINE ||
++ cmd == SPECTRE_V2_CMD_RETPOLINE_AMD ||
++ cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC) &&
++ !IS_ENABLED(CONFIG_RETPOLINE)) {
++ pr_err("%s selected but not compiled in. Switching to AUTO select\n",
++ mitigation_options[i].option);
+ return SPECTRE_V2_CMD_AUTO;
+-disable:
+- spec2_print_if_insecure("disabled on command line.");
+- return SPECTRE_V2_CMD_NONE;
++ }
++
++ if (cmd == SPECTRE_V2_CMD_RETPOLINE_AMD &&
++ boot_cpu_data.x86_vendor != X86_VENDOR_AMD) {
++ pr_err("retpoline,amd selected but CPU is not AMD. Switching to AUTO select\n");
++ return SPECTRE_V2_CMD_AUTO;
++ }
++
++ if (mitigation_options[i].secure)
++ spec2_print_if_secure(mitigation_options[i].option);
++ else
++ spec2_print_if_insecure(mitigation_options[i].option);
++
++ return cmd;
+ }
+
+ /* Check for Skylake-like CPUs (for RSB handling) */
+@@ -191,10 +239,10 @@ static void __init spectre_v2_select_mitigation(void)
+ return;
+
+ case SPECTRE_V2_CMD_FORCE:
+- /* FALLTRHU */
+ case SPECTRE_V2_CMD_AUTO:
+- goto retpoline_auto;
+-
++ if (IS_ENABLED(CONFIG_RETPOLINE))
++ goto retpoline_auto;
++ break;
+ case SPECTRE_V2_CMD_RETPOLINE_AMD:
+ if (IS_ENABLED(CONFIG_RETPOLINE))
+ goto retpoline_amd;
+@@ -249,6 +297,12 @@ static void __init spectre_v2_select_mitigation(void)
+ setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
+ pr_info("Filling RSB on context switch\n");
+ }
++
++ /* Initialize Indirect Branch Prediction Barrier if supported */
++ if (boot_cpu_has(X86_FEATURE_IBPB)) {
++ setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
++ pr_info("Enabling Indirect Branch Prediction Barrier\n");
++ }
+ }
+
+ #undef pr_fmt
+@@ -269,7 +323,7 @@ ssize_t cpu_show_spectre_v1(struct device *dev,
+ {
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1))
+ return sprintf(buf, "Not affected\n");
+- return sprintf(buf, "Vulnerable\n");
++ return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+ }
+
+ ssize_t cpu_show_spectre_v2(struct device *dev,
+@@ -278,6 +332,14 @@ ssize_t cpu_show_spectre_v2(struct device *dev,
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+ return sprintf(buf, "Not affected\n");
+
+- return sprintf(buf, "%s\n", spectre_v2_strings[spectre_v2_enabled]);
++ return sprintf(buf, "%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
++ boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
++ spectre_v2_module_string());
+ }
+ #endif
++
++void __ibp_barrier(void)
++{
++ __wrmsr(MSR_IA32_PRED_CMD, PRED_CMD_IBPB, 0);
++}
++EXPORT_SYMBOL_GPL(__ibp_barrier);
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index ef29ad001991..d63f4b5706e4 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -47,6 +47,8 @@
+ #include <asm/pat.h>
+ #include <asm/microcode.h>
+ #include <asm/microcode_intel.h>
++#include <asm/intel-family.h>
++#include <asm/cpu_device_id.h>
+
+ #ifdef CONFIG_X86_LOCAL_APIC
+ #include <asm/uv/uv.h>
+@@ -748,6 +750,26 @@ static void apply_forced_caps(struct cpuinfo_x86 *c)
+ }
+ }
+
++static void init_speculation_control(struct cpuinfo_x86 *c)
++{
++ /*
++ * The Intel SPEC_CTRL CPUID bit implies IBRS and IBPB support,
++ * and they also have a different bit for STIBP support. Also,
++ * a hypervisor might have set the individual AMD bits even on
++ * Intel CPUs, for finer-grained selection of what's available.
++ *
++ * We use the AMD bits in 0x8000_0008 EBX as the generic hardware
++ * features, which are visible in /proc/cpuinfo and used by the
++ * kernel. So set those accordingly from the Intel bits.
++ */
++ if (cpu_has(c, X86_FEATURE_SPEC_CTRL)) {
++ set_cpu_cap(c, X86_FEATURE_IBRS);
++ set_cpu_cap(c, X86_FEATURE_IBPB);
++ }
++ if (cpu_has(c, X86_FEATURE_INTEL_STIBP))
++ set_cpu_cap(c, X86_FEATURE_STIBP);
++}
++
+ void get_cpu_cap(struct cpuinfo_x86 *c)
+ {
+ u32 eax, ebx, ecx, edx;
+@@ -769,6 +791,7 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
+ cpuid_count(0x00000007, 0, &eax, &ebx, &ecx, &edx);
+ c->x86_capability[CPUID_7_0_EBX] = ebx;
+ c->x86_capability[CPUID_7_ECX] = ecx;
++ c->x86_capability[CPUID_7_EDX] = edx;
+ }
+
+ /* Extended state features: level 0x0000000d */
+@@ -841,6 +864,7 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
+ c->x86_capability[CPUID_8000_000A_EDX] = cpuid_edx(0x8000000a);
+
+ init_scattered_cpuid_features(c);
++ init_speculation_control(c);
+
+ /*
+ * Clear/Set all flags overridden by options, after probe.
+@@ -876,6 +900,41 @@ static void identify_cpu_without_cpuid(struct cpuinfo_x86 *c)
+ #endif
+ }
+
++static const __initconst struct x86_cpu_id cpu_no_speculation[] = {
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_CEDARVIEW, X86_FEATURE_ANY },
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_CLOVERVIEW, X86_FEATURE_ANY },
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_LINCROFT, X86_FEATURE_ANY },
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_PENWELL, X86_FEATURE_ANY },
++ { X86_VENDOR_INTEL, 6, INTEL_FAM6_ATOM_PINEVIEW, X86_FEATURE_ANY },
++ { X86_VENDOR_CENTAUR, 5 },
++ { X86_VENDOR_INTEL, 5 },
++ { X86_VENDOR_NSC, 5 },
++ { X86_VENDOR_ANY, 4 },
++ {}
++};
++
++static const __initconst struct x86_cpu_id cpu_no_meltdown[] = {
++ { X86_VENDOR_AMD },
++ {}
++};
++
++static bool __init cpu_vulnerable_to_meltdown(struct cpuinfo_x86 *c)
++{
++ u64 ia32_cap = 0;
++
++ if (x86_match_cpu(cpu_no_meltdown))
++ return false;
++
++ if (cpu_has(c, X86_FEATURE_ARCH_CAPABILITIES))
++ rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);
++
++ /* Rogue Data Cache Load? No! */
++ if (ia32_cap & ARCH_CAP_RDCL_NO)
++ return false;
++
++ return true;
++}
++
+ /*
+ * Do minimum CPU detection early.
+ * Fields really needed: vendor, cpuid_level, family, model, mask,
+@@ -923,11 +982,12 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
+
+ setup_force_cpu_cap(X86_FEATURE_ALWAYS);
+
+- if (c->x86_vendor != X86_VENDOR_AMD)
+- setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
+-
+- setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
+- setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
++ if (!x86_match_cpu(cpu_no_speculation)) {
++ if (cpu_vulnerable_to_meltdown(c))
++ setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
++ setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
++ setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
++ }
+
+ fpu__init_system(c);
+
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index b1af22073e28..319bf989fad1 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -102,6 +102,59 @@ static void probe_xeon_phi_r3mwait(struct cpuinfo_x86 *c)
+ ELF_HWCAP2 |= HWCAP2_RING3MWAIT;
+ }
+
++/*
++ * Early microcode releases for the Spectre v2 mitigation were broken.
++ * Information taken from;
++ * - https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/microcode-update-guidance.pdf
++ * - https://kb.vmware.com/s/article/52345
++ * - Microcode revisions observed in the wild
++ * - Release note from 20180108 microcode release
++ */
++struct sku_microcode {
++ u8 model;
++ u8 stepping;
++ u32 microcode;
++};
++static const struct sku_microcode spectre_bad_microcodes[] = {
++ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0B, 0x84 },
++ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0A, 0x84 },
++ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x09, 0x84 },
++ { INTEL_FAM6_KABYLAKE_MOBILE, 0x0A, 0x84 },
++ { INTEL_FAM6_KABYLAKE_MOBILE, 0x09, 0x84 },
++ { INTEL_FAM6_SKYLAKE_X, 0x03, 0x0100013e },
++ { INTEL_FAM6_SKYLAKE_X, 0x04, 0x0200003c },
++ { INTEL_FAM6_SKYLAKE_MOBILE, 0x03, 0xc2 },
++ { INTEL_FAM6_SKYLAKE_DESKTOP, 0x03, 0xc2 },
++ { INTEL_FAM6_BROADWELL_CORE, 0x04, 0x28 },
++ { INTEL_FAM6_BROADWELL_GT3E, 0x01, 0x1b },
++ { INTEL_FAM6_BROADWELL_XEON_D, 0x02, 0x14 },
++ { INTEL_FAM6_BROADWELL_XEON_D, 0x03, 0x07000011 },
++ { INTEL_FAM6_BROADWELL_X, 0x01, 0x0b000025 },
++ { INTEL_FAM6_HASWELL_ULT, 0x01, 0x21 },
++ { INTEL_FAM6_HASWELL_GT3E, 0x01, 0x18 },
++ { INTEL_FAM6_HASWELL_CORE, 0x03, 0x23 },
++ { INTEL_FAM6_HASWELL_X, 0x02, 0x3b },
++ { INTEL_FAM6_HASWELL_X, 0x04, 0x10 },
++ { INTEL_FAM6_IVYBRIDGE_X, 0x04, 0x42a },
++ /* Updated in the 20180108 release; blacklist until we know otherwise */
++ { INTEL_FAM6_ATOM_GEMINI_LAKE, 0x01, 0x22 },
++ /* Observed in the wild */
++ { INTEL_FAM6_SANDYBRIDGE_X, 0x06, 0x61b },
++ { INTEL_FAM6_SANDYBRIDGE_X, 0x07, 0x712 },
++};
++
++static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
++{
++ int i;
++
++ for (i = 0; i < ARRAY_SIZE(spectre_bad_microcodes); i++) {
++ if (c->x86_model == spectre_bad_microcodes[i].model &&
++ c->x86_mask == spectre_bad_microcodes[i].stepping)
++ return (c->microcode <= spectre_bad_microcodes[i].microcode);
++ }
++ return false;
++}
++
+ static void early_init_intel(struct cpuinfo_x86 *c)
+ {
+ u64 misc_enable;
+@@ -122,6 +175,19 @@ static void early_init_intel(struct cpuinfo_x86 *c)
+ if (c->x86 >= 6 && !cpu_has(c, X86_FEATURE_IA64))
+ c->microcode = intel_get_microcode_revision();
+
++ /* Now if any of them are set, check the blacklist and clear the lot */
++ if ((cpu_has(c, X86_FEATURE_SPEC_CTRL) ||
++ cpu_has(c, X86_FEATURE_INTEL_STIBP) ||
++ cpu_has(c, X86_FEATURE_IBRS) || cpu_has(c, X86_FEATURE_IBPB) ||
++ cpu_has(c, X86_FEATURE_STIBP)) && bad_spectre_microcode(c)) {
++ pr_warn("Intel Spectre v2 broken microcode detected; disabling Speculation Control\n");
++ setup_clear_cpu_cap(X86_FEATURE_IBRS);
++ setup_clear_cpu_cap(X86_FEATURE_IBPB);
++ setup_clear_cpu_cap(X86_FEATURE_STIBP);
++ setup_clear_cpu_cap(X86_FEATURE_SPEC_CTRL);
++ setup_clear_cpu_cap(X86_FEATURE_INTEL_STIBP);
++ }
++
+ /*
+ * Atom erratum AAE44/AAF40/AAG38/AAH41:
+ *
+diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
+index d0e69769abfd..df11f5d604be 100644
+--- a/arch/x86/kernel/cpu/scattered.c
++++ b/arch/x86/kernel/cpu/scattered.c
+@@ -21,8 +21,6 @@ struct cpuid_bit {
+ static const struct cpuid_bit cpuid_bits[] = {
+ { X86_FEATURE_APERFMPERF, CPUID_ECX, 0, 0x00000006, 0 },
+ { X86_FEATURE_EPB, CPUID_ECX, 3, 0x00000006, 0 },
+- { X86_FEATURE_AVX512_4VNNIW, CPUID_EDX, 2, 0x00000007, 0 },
+- { X86_FEATURE_AVX512_4FMAPS, CPUID_EDX, 3, 0x00000007, 0 },
+ { X86_FEATURE_CAT_L3, CPUID_EBX, 1, 0x00000010, 0 },
+ { X86_FEATURE_CAT_L2, CPUID_EBX, 2, 0x00000010, 0 },
+ { X86_FEATURE_CDP_L3, CPUID_ECX, 2, 0x00000010, 1 },
+diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
+index c75466232016..9eb448c7859d 100644
+--- a/arch/x86/kernel/process_64.c
++++ b/arch/x86/kernel/process_64.c
+@@ -557,7 +557,7 @@ static void __set_personality_x32(void)
+ * Pretend to come from a x32 execve.
+ */
+ task_pt_regs(current)->orig_ax = __NR_x32_execve | __X32_SYSCALL_BIT;
+- current->thread.status &= ~TS_COMPAT;
++ current_thread_info()->status &= ~TS_COMPAT;
+ #endif
+ }
+
+@@ -571,7 +571,7 @@ static void __set_personality_ia32(void)
+ current->personality |= force_personality32;
+ /* Prepare the first "return" to user space */
+ task_pt_regs(current)->orig_ax = __NR_ia32_execve;
+- current->thread.status |= TS_COMPAT;
++ current_thread_info()->status |= TS_COMPAT;
+ #endif
+ }
+
+diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
+index f37d18124648..ed5c4cdf0a34 100644
+--- a/arch/x86/kernel/ptrace.c
++++ b/arch/x86/kernel/ptrace.c
+@@ -935,7 +935,7 @@ static int putreg32(struct task_struct *child, unsigned regno, u32 value)
+ */
+ regs->orig_ax = value;
+ if (syscall_get_nr(child, regs) >= 0)
+- child->thread.status |= TS_I386_REGS_POKED;
++ child->thread_info.status |= TS_I386_REGS_POKED;
+ break;
+
+ case offsetof(struct user32, regs.eflags):
+diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
+index b9e00e8f1c9b..4cdc0b27ec82 100644
+--- a/arch/x86/kernel/signal.c
++++ b/arch/x86/kernel/signal.c
+@@ -787,7 +787,7 @@ static inline unsigned long get_nr_restart_syscall(const struct pt_regs *regs)
+ * than the tracee.
+ */
+ #ifdef CONFIG_IA32_EMULATION
+- if (current->thread.status & (TS_COMPAT|TS_I386_REGS_POKED))
++ if (current_thread_info()->status & (TS_COMPAT|TS_I386_REGS_POKED))
+ return __NR_ia32_restart_syscall;
+ #endif
+ #ifdef CONFIG_X86_X32_ABI
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 0099e10eb045..13f5d4217e4f 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -67,9 +67,7 @@ u64 kvm_supported_xcr0(void)
+
+ #define F(x) bit(X86_FEATURE_##x)
+
+-/* These are scattered features in cpufeatures.h. */
+-#define KVM_CPUID_BIT_AVX512_4VNNIW 2
+-#define KVM_CPUID_BIT_AVX512_4FMAPS 3
++/* For scattered features from cpufeatures.h; we currently expose none */
+ #define KF(x) bit(KVM_CPUID_BIT_##x)
+
+ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
+@@ -367,6 +365,10 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
+ F(3DNOWPREFETCH) | F(OSVW) | 0 /* IBS */ | F(XOP) |
+ 0 /* SKINIT, WDT, LWP */ | F(FMA4) | F(TBM);
+
++ /* cpuid 0x80000008.ebx */
++ const u32 kvm_cpuid_8000_0008_ebx_x86_features =
++ F(IBPB) | F(IBRS);
++
+ /* cpuid 0xC0000001.edx */
+ const u32 kvm_cpuid_C000_0001_edx_x86_features =
+ F(XSTORE) | F(XSTORE_EN) | F(XCRYPT) | F(XCRYPT_EN) |
+@@ -392,7 +394,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
+
+ /* cpuid 7.0.edx*/
+ const u32 kvm_cpuid_7_0_edx_x86_features =
+- KF(AVX512_4VNNIW) | KF(AVX512_4FMAPS);
++ F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) |
++ F(ARCH_CAPABILITIES);
+
+ /* all calls to cpuid_count() should be made on the same cpu */
+ get_cpu();
+@@ -477,7 +480,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
+ if (!tdp_enabled || !boot_cpu_has(X86_FEATURE_OSPKE))
+ entry->ecx &= ~F(PKU);
+ entry->edx &= kvm_cpuid_7_0_edx_x86_features;
+- entry->edx &= get_scattered_cpuid_leaf(7, 0, CPUID_EDX);
++ cpuid_mask(&entry->edx, CPUID_7_EDX);
+ } else {
+ entry->ebx = 0;
+ entry->ecx = 0;
+@@ -627,7 +630,14 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
+ if (!g_phys_as)
+ g_phys_as = phys_as;
+ entry->eax = g_phys_as | (virt_as << 8);
+- entry->ebx = entry->edx = 0;
++ entry->edx = 0;
++ /* IBRS and IBPB aren't necessarily present in hardware cpuid */
++ if (boot_cpu_has(X86_FEATURE_IBPB))
++ entry->ebx |= F(IBPB);
++ if (boot_cpu_has(X86_FEATURE_IBRS))
++ entry->ebx |= F(IBRS);
++ entry->ebx &= kvm_cpuid_8000_0008_ebx_x86_features;
++ cpuid_mask(&entry->ebx, CPUID_8000_0008_EBX);
+ break;
+ }
+ case 0x80000019:
+diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
+index c2cea6651279..9a327d5b6d1f 100644
+--- a/arch/x86/kvm/cpuid.h
++++ b/arch/x86/kvm/cpuid.h
+@@ -54,6 +54,7 @@ static const struct cpuid_reg reverse_cpuid[] = {
+ [CPUID_8000_000A_EDX] = {0x8000000a, 0, CPUID_EDX},
+ [CPUID_7_ECX] = { 7, 0, CPUID_ECX},
+ [CPUID_8000_0007_EBX] = {0x80000007, 0, CPUID_EBX},
++ [CPUID_7_EDX] = { 7, 0, CPUID_EDX},
+ };
+
+ static __always_inline struct cpuid_reg x86_feature_cpuid(unsigned x86_feature)
+diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
+index b514b2b2845a..290ecf711aec 100644
+--- a/arch/x86/kvm/emulate.c
++++ b/arch/x86/kvm/emulate.c
+@@ -25,6 +25,7 @@
+ #include <asm/kvm_emulate.h>
+ #include <linux/stringify.h>
+ #include <asm/debugreg.h>
++#include <asm/nospec-branch.h>
+
+ #include "x86.h"
+ #include "tss.h"
+@@ -1021,8 +1022,8 @@ static __always_inline u8 test_cc(unsigned int condition, unsigned long flags)
+ void (*fop)(void) = (void *)em_setcc + 4 * (condition & 0xf);
+
+ flags = (flags & EFLAGS_MASK) | X86_EFLAGS_IF;
+- asm("push %[flags]; popf; call *%[fastop]"
+- : "=a"(rc) : [fastop]"r"(fop), [flags]"r"(flags));
++ asm("push %[flags]; popf; " CALL_NOSPEC
++ : "=a"(rc) : [thunk_target]"r"(fop), [flags]"r"(flags));
+ return rc;
+ }
+
+@@ -5335,9 +5336,9 @@ static int fastop(struct x86_emulate_ctxt *ctxt, void (*fop)(struct fastop *))
+ if (!(ctxt->d & ByteOp))
+ fop += __ffs(ctxt->dst.bytes) * FASTOP_SIZE;
+
+- asm("push %[flags]; popf; call *%[fastop]; pushf; pop %[flags]\n"
++ asm("push %[flags]; popf; " CALL_NOSPEC " ; pushf; pop %[flags]\n"
+ : "+a"(ctxt->dst.val), "+d"(ctxt->src.val), [flags]"+D"(flags),
+- [fastop]"+S"(fop), ASM_CALL_CONSTRAINT
++ [thunk_target]"+S"(fop), ASM_CALL_CONSTRAINT
+ : "c"(ctxt->src2.val));
+
+ ctxt->eflags = (ctxt->eflags & ~EFLAGS_MASK) | (flags & EFLAGS_MASK);
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index f40d0da1f1d3..4e3c79530526 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -184,6 +184,8 @@ struct vcpu_svm {
+ u64 gs_base;
+ } host;
+
++ u64 spec_ctrl;
++
+ u32 *msrpm;
+
+ ulong nmi_iret_rip;
+@@ -249,6 +251,8 @@ static const struct svm_direct_access_msrs {
+ { .index = MSR_CSTAR, .always = true },
+ { .index = MSR_SYSCALL_MASK, .always = true },
+ #endif
++ { .index = MSR_IA32_SPEC_CTRL, .always = false },
++ { .index = MSR_IA32_PRED_CMD, .always = false },
+ { .index = MSR_IA32_LASTBRANCHFROMIP, .always = false },
+ { .index = MSR_IA32_LASTBRANCHTOIP, .always = false },
+ { .index = MSR_IA32_LASTINTFROMIP, .always = false },
+@@ -529,6 +533,7 @@ struct svm_cpu_data {
+ struct kvm_ldttss_desc *tss_desc;
+
+ struct page *save_area;
++ struct vmcb *current_vmcb;
+ };
+
+ static DEFINE_PER_CPU(struct svm_cpu_data *, svm_data);
+@@ -880,6 +885,25 @@ static bool valid_msr_intercept(u32 index)
+ return false;
+ }
+
++static bool msr_write_intercepted(struct kvm_vcpu *vcpu, unsigned msr)
++{
++ u8 bit_write;
++ unsigned long tmp;
++ u32 offset;
++ u32 *msrpm;
++
++ msrpm = is_guest_mode(vcpu) ? to_svm(vcpu)->nested.msrpm:
++ to_svm(vcpu)->msrpm;
++
++ offset = svm_msrpm_offset(msr);
++ bit_write = 2 * (msr & 0x0f) + 1;
++ tmp = msrpm[offset];
++
++ BUG_ON(offset == MSR_INVALID);
++
++ return !!test_bit(bit_write, &tmp);
++}
++
+ static void set_msr_interception(u32 *msrpm, unsigned msr,
+ int read, int write)
+ {
+@@ -1582,6 +1606,8 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+ u32 dummy;
+ u32 eax = 1;
+
++ svm->spec_ctrl = 0;
++
+ if (!init_event) {
+ svm->vcpu.arch.apic_base = APIC_DEFAULT_PHYS_BASE |
+ MSR_IA32_APICBASE_ENABLE;
+@@ -1703,11 +1729,17 @@ static void svm_free_vcpu(struct kvm_vcpu *vcpu)
+ __free_pages(virt_to_page(svm->nested.msrpm), MSRPM_ALLOC_ORDER);
+ kvm_vcpu_uninit(vcpu);
+ kmem_cache_free(kvm_vcpu_cache, svm);
++ /*
++ * The vmcb page can be recycled, causing a false negative in
++ * svm_vcpu_load(). So do a full IBPB now.
++ */
++ indirect_branch_prediction_barrier();
+ }
+
+ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ {
+ struct vcpu_svm *svm = to_svm(vcpu);
++ struct svm_cpu_data *sd = per_cpu(svm_data, cpu);
+ int i;
+
+ if (unlikely(cpu != vcpu->cpu)) {
+@@ -1736,6 +1768,10 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ if (static_cpu_has(X86_FEATURE_RDTSCP))
+ wrmsrl(MSR_TSC_AUX, svm->tsc_aux);
+
++ if (sd->current_vmcb != svm->vmcb) {
++ sd->current_vmcb = svm->vmcb;
++ indirect_branch_prediction_barrier();
++ }
+ avic_vcpu_load(vcpu, cpu);
+ }
+
+@@ -3593,6 +3629,13 @@ static int svm_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ case MSR_VM_CR:
+ msr_info->data = svm->nested.vm_cr_msr;
+ break;
++ case MSR_IA32_SPEC_CTRL:
++ if (!msr_info->host_initiated &&
++ !guest_cpuid_has(vcpu, X86_FEATURE_IBRS))
++ return 1;
++
++ msr_info->data = svm->spec_ctrl;
++ break;
+ case MSR_IA32_UCODE_REV:
+ msr_info->data = 0x01000065;
+ break;
+@@ -3684,6 +3727,49 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ case MSR_IA32_TSC:
+ kvm_write_tsc(vcpu, msr);
+ break;
++ case MSR_IA32_SPEC_CTRL:
++ if (!msr->host_initiated &&
++ !guest_cpuid_has(vcpu, X86_FEATURE_IBRS))
++ return 1;
++
++ /* The STIBP bit doesn't fault even if it's not advertised */
++ if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP))
++ return 1;
++
++ svm->spec_ctrl = data;
++
++ if (!data)
++ break;
++
++ /*
++ * For non-nested:
++ * When it's written (to non-zero) for the first time, pass
++ * it through.
++ *
++ * For nested:
++ * The handling of the MSR bitmap for L2 guests is done in
++ * nested_svm_vmrun_msrpm.
++ * We update the L1 MSR bit as well since it will end up
++ * touching the MSR anyway now.
++ */
++ set_msr_interception(svm->msrpm, MSR_IA32_SPEC_CTRL, 1, 1);
++ break;
++ case MSR_IA32_PRED_CMD:
++ if (!msr->host_initiated &&
++ !guest_cpuid_has(vcpu, X86_FEATURE_IBPB))
++ return 1;
++
++ if (data & ~PRED_CMD_IBPB)
++ return 1;
++
++ if (!data)
++ break;
++
++ wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
++ if (is_guest_mode(vcpu))
++ break;
++ set_msr_interception(svm->msrpm, MSR_IA32_PRED_CMD, 0, 1);
++ break;
+ case MSR_STAR:
+ svm->vmcb->save.star = data;
+ break;
+@@ -4936,6 +5022,15 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+
+ local_irq_enable();
+
++ /*
++ * If this vCPU has touched SPEC_CTRL, restore the guest's value if
++ * it's non-zero. Since vmentry is serialising on affected CPUs, there
++ * is no need to worry about the conditional branch over the wrmsr
++ * being speculatively taken.
++ */
++ if (svm->spec_ctrl)
++ wrmsrl(MSR_IA32_SPEC_CTRL, svm->spec_ctrl);
++
+ asm volatile (
+ "push %%" _ASM_BP "; \n\t"
+ "mov %c[rbx](%[svm]), %%" _ASM_BX " \n\t"
+@@ -5028,6 +5123,27 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ #endif
+ );
+
++ /*
++ * We do not use IBRS in the kernel. If this vCPU has used the
++ * SPEC_CTRL MSR it may have left it on; save the value and
++ * turn it off. This is much more efficient than blindly adding
++ * it to the atomic save/restore list. Especially as the former
++ * (Saving guest MSRs on vmexit) doesn't even exist in KVM.
++ *
++ * For non-nested case:
++ * If the L01 MSR bitmap does not intercept the MSR, then we need to
++ * save it.
++ *
++ * For nested case:
++ * If the L02 MSR bitmap does not intercept the MSR, then we need to
++ * save it.
++ */
++ if (!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL))
++ rdmsrl(MSR_IA32_SPEC_CTRL, svm->spec_ctrl);
++
++ if (svm->spec_ctrl)
++ wrmsrl(MSR_IA32_SPEC_CTRL, 0);
++
+ /* Eliminate branch target predictions from guest mode */
+ vmexit_fill_RSB();
+
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index c829d89e2e63..bee4c49f6dd0 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -34,6 +34,7 @@
+ #include <linux/tboot.h>
+ #include <linux/hrtimer.h>
+ #include <linux/frame.h>
++#include <linux/nospec.h>
+ #include "kvm_cache_regs.h"
+ #include "x86.h"
+
+@@ -111,6 +112,14 @@ static u64 __read_mostly host_xss;
+ static bool __read_mostly enable_pml = 1;
+ module_param_named(pml, enable_pml, bool, S_IRUGO);
+
++#define MSR_TYPE_R 1
++#define MSR_TYPE_W 2
++#define MSR_TYPE_RW 3
++
++#define MSR_BITMAP_MODE_X2APIC 1
++#define MSR_BITMAP_MODE_X2APIC_APICV 2
++#define MSR_BITMAP_MODE_LM 4
++
+ #define KVM_VMX_TSC_MULTIPLIER_MAX 0xffffffffffffffffULL
+
+ /* Guest_tsc -> host_tsc conversion requires 64-bit division. */
+@@ -185,7 +194,6 @@ module_param(ple_window_max, int, S_IRUGO);
+ extern const ulong vmx_return;
+
+ #define NR_AUTOLOAD_MSRS 8
+-#define VMCS02_POOL_SIZE 1
+
+ struct vmcs {
+ u32 revision_id;
+@@ -210,6 +218,7 @@ struct loaded_vmcs {
+ int soft_vnmi_blocked;
+ ktime_t entry_time;
+ s64 vnmi_blocked_time;
++ unsigned long *msr_bitmap;
+ struct list_head loaded_vmcss_on_cpu_link;
+ };
+
+@@ -226,7 +235,7 @@ struct shared_msr_entry {
+ * stored in guest memory specified by VMPTRLD, but is opaque to the guest,
+ * which must access it using VMREAD/VMWRITE/VMCLEAR instructions.
+ * More than one of these structures may exist, if L1 runs multiple L2 guests.
+- * nested_vmx_run() will use the data here to build a vmcs02: a VMCS for the
++ * nested_vmx_run() will use the data here to build the vmcs02: a VMCS for the
+ * underlying hardware which will be used to run L2.
+ * This structure is packed to ensure that its layout is identical across
+ * machines (necessary for live migration).
+@@ -409,13 +418,6 @@ struct __packed vmcs12 {
+ */
+ #define VMCS12_SIZE 0x1000
+
+-/* Used to remember the last vmcs02 used for some recently used vmcs12s */
+-struct vmcs02_list {
+- struct list_head list;
+- gpa_t vmptr;
+- struct loaded_vmcs vmcs02;
+-};
+-
+ /*
+ * The nested_vmx structure is part of vcpu_vmx, and holds information we need
+ * for correct emulation of VMX (i.e., nested VMX) on this vcpu.
+@@ -440,15 +442,15 @@ struct nested_vmx {
+ */
+ bool sync_shadow_vmcs;
+
+- /* vmcs02_list cache of VMCSs recently used to run L2 guests */
+- struct list_head vmcs02_pool;
+- int vmcs02_num;
+ bool change_vmcs01_virtual_x2apic_mode;
+ /* L2 must run next, and mustn't decide to exit to L1. */
+ bool nested_run_pending;
++
++ struct loaded_vmcs vmcs02;
++
+ /*
+- * Guest pages referred to in vmcs02 with host-physical pointers, so
+- * we must keep them pinned while L2 runs.
++ * Guest pages referred to in the vmcs02 with host-physical
++ * pointers, so we must keep them pinned while L2 runs.
+ */
+ struct page *apic_access_page;
+ struct page *virtual_apic_page;
+@@ -457,8 +459,6 @@ struct nested_vmx {
+ bool pi_pending;
+ u16 posted_intr_nv;
+
+- unsigned long *msr_bitmap;
+-
+ struct hrtimer preemption_timer;
+ bool preemption_timer_expired;
+
+@@ -581,6 +581,7 @@ struct vcpu_vmx {
+ struct kvm_vcpu vcpu;
+ unsigned long host_rsp;
+ u8 fail;
++ u8 msr_bitmap_mode;
+ u32 exit_intr_info;
+ u32 idt_vectoring_info;
+ ulong rflags;
+@@ -592,6 +593,10 @@ struct vcpu_vmx {
+ u64 msr_host_kernel_gs_base;
+ u64 msr_guest_kernel_gs_base;
+ #endif
++
++ u64 arch_capabilities;
++ u64 spec_ctrl;
++
+ u32 vm_entry_controls_shadow;
+ u32 vm_exit_controls_shadow;
+ u32 secondary_exec_control;
+@@ -898,21 +903,18 @@ static const unsigned short vmcs_field_to_offset_table[] = {
+
+ static inline short vmcs_field_to_offset(unsigned long field)
+ {
+- BUILD_BUG_ON(ARRAY_SIZE(vmcs_field_to_offset_table) > SHRT_MAX);
++ const size_t size = ARRAY_SIZE(vmcs_field_to_offset_table);
++ unsigned short offset;
+
+- if (field >= ARRAY_SIZE(vmcs_field_to_offset_table))
++ BUILD_BUG_ON(size > SHRT_MAX);
++ if (field >= size)
+ return -ENOENT;
+
+- /*
+- * FIXME: Mitigation for CVE-2017-5753. To be replaced with a
+- * generic mechanism.
+- */
+- asm("lfence");
+-
+- if (vmcs_field_to_offset_table[field] == 0)
++ field = array_index_nospec(field, size);
++ offset = vmcs_field_to_offset_table[field];
++ if (offset == 0)
+ return -ENOENT;
+-
+- return vmcs_field_to_offset_table[field];
++ return offset;
+ }
+
+ static inline struct vmcs12 *get_vmcs12(struct kvm_vcpu *vcpu)
+@@ -935,6 +937,9 @@ static bool vmx_get_nmi_mask(struct kvm_vcpu *vcpu);
+ static void vmx_set_nmi_mask(struct kvm_vcpu *vcpu, bool masked);
+ static bool nested_vmx_is_page_fault_vmexit(struct vmcs12 *vmcs12,
+ u16 error_code);
++static void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu);
++static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
++ u32 msr, int type);
+
+ static DEFINE_PER_CPU(struct vmcs *, vmxarea);
+ static DEFINE_PER_CPU(struct vmcs *, current_vmcs);
+@@ -954,12 +959,6 @@ static DEFINE_PER_CPU(spinlock_t, blocked_vcpu_on_cpu_lock);
+ enum {
+ VMX_IO_BITMAP_A,
+ VMX_IO_BITMAP_B,
+- VMX_MSR_BITMAP_LEGACY,
+- VMX_MSR_BITMAP_LONGMODE,
+- VMX_MSR_BITMAP_LEGACY_X2APIC_APICV,
+- VMX_MSR_BITMAP_LONGMODE_X2APIC_APICV,
+- VMX_MSR_BITMAP_LEGACY_X2APIC,
+- VMX_MSR_BITMAP_LONGMODE_X2APIC,
+ VMX_VMREAD_BITMAP,
+ VMX_VMWRITE_BITMAP,
+ VMX_BITMAP_NR
+@@ -969,12 +968,6 @@ static unsigned long *vmx_bitmap[VMX_BITMAP_NR];
+
+ #define vmx_io_bitmap_a (vmx_bitmap[VMX_IO_BITMAP_A])
+ #define vmx_io_bitmap_b (vmx_bitmap[VMX_IO_BITMAP_B])
+-#define vmx_msr_bitmap_legacy (vmx_bitmap[VMX_MSR_BITMAP_LEGACY])
+-#define vmx_msr_bitmap_longmode (vmx_bitmap[VMX_MSR_BITMAP_LONGMODE])
+-#define vmx_msr_bitmap_legacy_x2apic_apicv (vmx_bitmap[VMX_MSR_BITMAP_LEGACY_X2APIC_APICV])
+-#define vmx_msr_bitmap_longmode_x2apic_apicv (vmx_bitmap[VMX_MSR_BITMAP_LONGMODE_X2APIC_APICV])
+-#define vmx_msr_bitmap_legacy_x2apic (vmx_bitmap[VMX_MSR_BITMAP_LEGACY_X2APIC])
+-#define vmx_msr_bitmap_longmode_x2apic (vmx_bitmap[VMX_MSR_BITMAP_LONGMODE_X2APIC])
+ #define vmx_vmread_bitmap (vmx_bitmap[VMX_VMREAD_BITMAP])
+ #define vmx_vmwrite_bitmap (vmx_bitmap[VMX_VMWRITE_BITMAP])
+
+@@ -1918,6 +1911,52 @@ static void update_exception_bitmap(struct kvm_vcpu *vcpu)
+ vmcs_write32(EXCEPTION_BITMAP, eb);
+ }
+
++/*
++ * Check if MSR is intercepted for currently loaded MSR bitmap.
++ */
++static bool msr_write_intercepted(struct kvm_vcpu *vcpu, u32 msr)
++{
++ unsigned long *msr_bitmap;
++ int f = sizeof(unsigned long);
++
++ if (!cpu_has_vmx_msr_bitmap())
++ return true;
++
++ msr_bitmap = to_vmx(vcpu)->loaded_vmcs->msr_bitmap;
++
++ if (msr <= 0x1fff) {
++ return !!test_bit(msr, msr_bitmap + 0x800 / f);
++ } else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff)) {
++ msr &= 0x1fff;
++ return !!test_bit(msr, msr_bitmap + 0xc00 / f);
++ }
++
++ return true;
++}
++
++/*
++ * Check if MSR is intercepted for L01 MSR bitmap.
++ */
++static bool msr_write_intercepted_l01(struct kvm_vcpu *vcpu, u32 msr)
++{
++ unsigned long *msr_bitmap;
++ int f = sizeof(unsigned long);
++
++ if (!cpu_has_vmx_msr_bitmap())
++ return true;
++
++ msr_bitmap = to_vmx(vcpu)->vmcs01.msr_bitmap;
++
++ if (msr <= 0x1fff) {
++ return !!test_bit(msr, msr_bitmap + 0x800 / f);
++ } else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff)) {
++ msr &= 0x1fff;
++ return !!test_bit(msr, msr_bitmap + 0xc00 / f);
++ }
++
++ return true;
++}
++
+ static void clear_atomic_switch_msr_special(struct vcpu_vmx *vmx,
+ unsigned long entry, unsigned long exit)
+ {
+@@ -2296,6 +2335,7 @@ static void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
+ per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
+ vmcs_load(vmx->loaded_vmcs->vmcs);
++ indirect_branch_prediction_barrier();
+ }
+
+ if (!already_loaded) {
+@@ -2572,36 +2612,6 @@ static void move_msr_up(struct vcpu_vmx *vmx, int from, int to)
+ vmx->guest_msrs[from] = tmp;
+ }
+
+-static void vmx_set_msr_bitmap(struct kvm_vcpu *vcpu)
+-{
+- unsigned long *msr_bitmap;
+-
+- if (is_guest_mode(vcpu))
+- msr_bitmap = to_vmx(vcpu)->nested.msr_bitmap;
+- else if (cpu_has_secondary_exec_ctrls() &&
+- (vmcs_read32(SECONDARY_VM_EXEC_CONTROL) &
+- SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE)) {
+- if (enable_apicv && kvm_vcpu_apicv_active(vcpu)) {
+- if (is_long_mode(vcpu))
+- msr_bitmap = vmx_msr_bitmap_longmode_x2apic_apicv;
+- else
+- msr_bitmap = vmx_msr_bitmap_legacy_x2apic_apicv;
+- } else {
+- if (is_long_mode(vcpu))
+- msr_bitmap = vmx_msr_bitmap_longmode_x2apic;
+- else
+- msr_bitmap = vmx_msr_bitmap_legacy_x2apic;
+- }
+- } else {
+- if (is_long_mode(vcpu))
+- msr_bitmap = vmx_msr_bitmap_longmode;
+- else
+- msr_bitmap = vmx_msr_bitmap_legacy;
+- }
+-
+- vmcs_write64(MSR_BITMAP, __pa(msr_bitmap));
+-}
+-
+ /*
+ * Set up the vmcs to automatically save and restore system
+ * msrs. Don't touch the 64-bit msrs if the guest is in legacy
+@@ -2642,7 +2652,7 @@ static void setup_msrs(struct vcpu_vmx *vmx)
+ vmx->save_nmsrs = save_nmsrs;
+
+ if (cpu_has_vmx_msr_bitmap())
+- vmx_set_msr_bitmap(&vmx->vcpu);
++ vmx_update_msr_bitmap(&vmx->vcpu);
+ }
+
+ /*
+@@ -3276,6 +3286,20 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ case MSR_IA32_TSC:
+ msr_info->data = guest_read_tsc(vcpu);
+ break;
++ case MSR_IA32_SPEC_CTRL:
++ if (!msr_info->host_initiated &&
++ !guest_cpuid_has(vcpu, X86_FEATURE_IBRS) &&
++ !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
++ return 1;
++
++ msr_info->data = to_vmx(vcpu)->spec_ctrl;
++ break;
++ case MSR_IA32_ARCH_CAPABILITIES:
++ if (!msr_info->host_initiated &&
++ !guest_cpuid_has(vcpu, X86_FEATURE_ARCH_CAPABILITIES))
++ return 1;
++ msr_info->data = to_vmx(vcpu)->arch_capabilities;
++ break;
+ case MSR_IA32_SYSENTER_CS:
+ msr_info->data = vmcs_read32(GUEST_SYSENTER_CS);
+ break;
+@@ -3383,6 +3407,70 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ case MSR_IA32_TSC:
+ kvm_write_tsc(vcpu, msr_info);
+ break;
++ case MSR_IA32_SPEC_CTRL:
++ if (!msr_info->host_initiated &&
++ !guest_cpuid_has(vcpu, X86_FEATURE_IBRS) &&
++ !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
++ return 1;
++
++ /* The STIBP bit doesn't fault even if it's not advertised */
++ if (data & ~(SPEC_CTRL_IBRS | SPEC_CTRL_STIBP))
++ return 1;
++
++ vmx->spec_ctrl = data;
++
++ if (!data)
++ break;
++
++ /*
++ * For non-nested:
++ * When it's written (to non-zero) for the first time, pass
++ * it through.
++ *
++ * For nested:
++ * The handling of the MSR bitmap for L2 guests is done in
++ * nested_vmx_merge_msr_bitmap. We should not touch the
++ * vmcs02.msr_bitmap here since it gets completely overwritten
++ * in the merging. We update the vmcs01 here for L1 as well
++ * since it will end up touching the MSR anyway now.
++ */
++ vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap,
++ MSR_IA32_SPEC_CTRL,
++ MSR_TYPE_RW);
++ break;
++ case MSR_IA32_PRED_CMD:
++ if (!msr_info->host_initiated &&
++ !guest_cpuid_has(vcpu, X86_FEATURE_IBPB) &&
++ !guest_cpuid_has(vcpu, X86_FEATURE_SPEC_CTRL))
++ return 1;
++
++ if (data & ~PRED_CMD_IBPB)
++ return 1;
++
++ if (!data)
++ break;
++
++ wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
++
++ /*
++ * For non-nested:
++ * When it's written (to non-zero) for the first time, pass
++ * it through.
++ *
++ * For nested:
++ * The handling of the MSR bitmap for L2 guests is done in
++ * nested_vmx_merge_msr_bitmap. We should not touch the
++ * vmcs02.msr_bitmap here since it gets completely overwritten
++ * in the merging.
++ */
++ vmx_disable_intercept_for_msr(vmx->vmcs01.msr_bitmap, MSR_IA32_PRED_CMD,
++ MSR_TYPE_W);
++ break;
++ case MSR_IA32_ARCH_CAPABILITIES:
++ if (!msr_info->host_initiated)
++ return 1;
++ vmx->arch_capabilities = data;
++ break;
+ case MSR_IA32_CR_PAT:
+ if (vmcs_config.vmentry_ctrl & VM_ENTRY_LOAD_IA32_PAT) {
+ if (!kvm_mtrr_valid(vcpu, MSR_IA32_CR_PAT, data))
+@@ -3837,11 +3925,6 @@ static struct vmcs *alloc_vmcs_cpu(int cpu)
+ return vmcs;
+ }
+
+-static struct vmcs *alloc_vmcs(void)
+-{
+- return alloc_vmcs_cpu(raw_smp_processor_id());
+-}
+-
+ static void free_vmcs(struct vmcs *vmcs)
+ {
+ free_pages((unsigned long)vmcs, vmcs_config.order);
+@@ -3857,9 +3940,38 @@ static void free_loaded_vmcs(struct loaded_vmcs *loaded_vmcs)
+ loaded_vmcs_clear(loaded_vmcs);
+ free_vmcs(loaded_vmcs->vmcs);
+ loaded_vmcs->vmcs = NULL;
++ if (loaded_vmcs->msr_bitmap)
++ free_page((unsigned long)loaded_vmcs->msr_bitmap);
+ WARN_ON(loaded_vmcs->shadow_vmcs != NULL);
+ }
+
++static struct vmcs *alloc_vmcs(void)
++{
++ return alloc_vmcs_cpu(raw_smp_processor_id());
++}
++
++static int alloc_loaded_vmcs(struct loaded_vmcs *loaded_vmcs)
++{
++ loaded_vmcs->vmcs = alloc_vmcs();
++ if (!loaded_vmcs->vmcs)
++ return -ENOMEM;
++
++ loaded_vmcs->shadow_vmcs = NULL;
++ loaded_vmcs_init(loaded_vmcs);
++
++ if (cpu_has_vmx_msr_bitmap()) {
++ loaded_vmcs->msr_bitmap = (unsigned long *)__get_free_page(GFP_KERNEL);
++ if (!loaded_vmcs->msr_bitmap)
++ goto out_vmcs;
++ memset(loaded_vmcs->msr_bitmap, 0xff, PAGE_SIZE);
++ }
++ return 0;
++
++out_vmcs:
++ free_loaded_vmcs(loaded_vmcs);
++ return -ENOMEM;
++}
++
+ static void free_kvm_area(void)
+ {
+ int cpu;
+@@ -4918,10 +5030,8 @@ static void free_vpid(int vpid)
+ spin_unlock(&vmx_vpid_lock);
+ }
+
+-#define MSR_TYPE_R 1
+-#define MSR_TYPE_W 2
+-static void __vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
+- u32 msr, int type)
++static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
++ u32 msr, int type)
+ {
+ int f = sizeof(unsigned long);
+
+@@ -4955,6 +5065,50 @@ static void __vmx_disable_intercept_for_msr(unsigned long *msr_bitmap,
+ }
+ }
+
++static void __always_inline vmx_enable_intercept_for_msr(unsigned long *msr_bitmap,
++ u32 msr, int type)
++{
++ int f = sizeof(unsigned long);
++
++ if (!cpu_has_vmx_msr_bitmap())
++ return;
++
++ /*
++ * See Intel PRM Vol. 3, 20.6.9 (MSR-Bitmap Address). Early manuals
++ * have the write-low and read-high bitmap offsets the wrong way round.
++ * We can control MSRs 0x00000000-0x00001fff and 0xc0000000-0xc0001fff.
++ */
++ if (msr <= 0x1fff) {
++ if (type & MSR_TYPE_R)
++ /* read-low */
++ __set_bit(msr, msr_bitmap + 0x000 / f);
++
++ if (type & MSR_TYPE_W)
++ /* write-low */
++ __set_bit(msr, msr_bitmap + 0x800 / f);
++
++ } else if ((msr >= 0xc0000000) && (msr <= 0xc0001fff)) {
++ msr &= 0x1fff;
++ if (type & MSR_TYPE_R)
++ /* read-high */
++ __set_bit(msr, msr_bitmap + 0x400 / f);
++
++ if (type & MSR_TYPE_W)
++ /* write-high */
++ __set_bit(msr, msr_bitmap + 0xc00 / f);
++
++ }
++}
++
++static void __always_inline vmx_set_intercept_for_msr(unsigned long *msr_bitmap,
++ u32 msr, int type, bool value)
++{
++ if (value)
++ vmx_enable_intercept_for_msr(msr_bitmap, msr, type);
++ else
++ vmx_disable_intercept_for_msr(msr_bitmap, msr, type);
++}
++
+ /*
+ * If a msr is allowed by L0, we should check whether it is allowed by L1.
+ * The corresponding bit will be cleared unless both of L0 and L1 allow it.
+@@ -5001,30 +5155,70 @@ static void nested_vmx_disable_intercept_for_msr(unsigned long *msr_bitmap_l1,
+ }
+ }
+
+-static void vmx_disable_intercept_for_msr(u32 msr, bool longmode_only)
++static u8 vmx_msr_bitmap_mode(struct kvm_vcpu *vcpu)
+ {
+- if (!longmode_only)
+- __vmx_disable_intercept_for_msr(vmx_msr_bitmap_legacy,
+- msr, MSR_TYPE_R | MSR_TYPE_W);
+- __vmx_disable_intercept_for_msr(vmx_msr_bitmap_longmode,
+- msr, MSR_TYPE_R | MSR_TYPE_W);
++ u8 mode = 0;
++
++ if (cpu_has_secondary_exec_ctrls() &&
++ (vmcs_read32(SECONDARY_VM_EXEC_CONTROL) &
++ SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE)) {
++ mode |= MSR_BITMAP_MODE_X2APIC;
++ if (enable_apicv && kvm_vcpu_apicv_active(vcpu))
++ mode |= MSR_BITMAP_MODE_X2APIC_APICV;
++ }
++
++ if (is_long_mode(vcpu))
++ mode |= MSR_BITMAP_MODE_LM;
++
++ return mode;
+ }
+
+-static void vmx_disable_intercept_msr_x2apic(u32 msr, int type, bool apicv_active)
++#define X2APIC_MSR(r) (APIC_BASE_MSR + ((r) >> 4))
++
++static void vmx_update_msr_bitmap_x2apic(unsigned long *msr_bitmap,
++ u8 mode)
+ {
+- if (apicv_active) {
+- __vmx_disable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic_apicv,
+- msr, type);
+- __vmx_disable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic_apicv,
+- msr, type);
+- } else {
+- __vmx_disable_intercept_for_msr(vmx_msr_bitmap_legacy_x2apic,
+- msr, type);
+- __vmx_disable_intercept_for_msr(vmx_msr_bitmap_longmode_x2apic,
+- msr, type);
++ int msr;
++
++ for (msr = 0x800; msr <= 0x8ff; msr += BITS_PER_LONG) {
++ unsigned word = msr / BITS_PER_LONG;
++ msr_bitmap[word] = (mode & MSR_BITMAP_MODE_X2APIC_APICV) ? 0 : ~0;
++ msr_bitmap[word + (0x800 / sizeof(long))] = ~0;
++ }
++
++ if (mode & MSR_BITMAP_MODE_X2APIC) {
++ /*
++ * TPR reads and writes can be virtualized even if virtual interrupt
++ * delivery is not in use.
++ */
++ vmx_disable_intercept_for_msr(msr_bitmap, X2APIC_MSR(APIC_TASKPRI), MSR_TYPE_RW);
++ if (mode & MSR_BITMAP_MODE_X2APIC_APICV) {
++ vmx_enable_intercept_for_msr(msr_bitmap, X2APIC_MSR(APIC_TMCCT), MSR_TYPE_R);
++ vmx_disable_intercept_for_msr(msr_bitmap, X2APIC_MSR(APIC_EOI), MSR_TYPE_W);
++ vmx_disable_intercept_for_msr(msr_bitmap, X2APIC_MSR(APIC_SELF_IPI), MSR_TYPE_W);
++ }
+ }
+ }
+
++static void vmx_update_msr_bitmap(struct kvm_vcpu *vcpu)
++{
++ struct vcpu_vmx *vmx = to_vmx(vcpu);
++ unsigned long *msr_bitmap = vmx->vmcs01.msr_bitmap;
++ u8 mode = vmx_msr_bitmap_mode(vcpu);
++ u8 changed = mode ^ vmx->msr_bitmap_mode;
++
++ if (!changed)
++ return;
++
++ vmx_set_intercept_for_msr(msr_bitmap, MSR_KERNEL_GS_BASE, MSR_TYPE_RW,
++ !(mode & MSR_BITMAP_MODE_LM));
++
++ if (changed & (MSR_BITMAP_MODE_X2APIC | MSR_BITMAP_MODE_X2APIC_APICV))
++ vmx_update_msr_bitmap_x2apic(msr_bitmap, mode);
++
++ vmx->msr_bitmap_mode = mode;
++}
++
+ static bool vmx_get_enable_apicv(struct kvm_vcpu *vcpu)
+ {
+ return enable_apicv;
+@@ -5274,7 +5468,7 @@ static void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
+ }
+
+ if (cpu_has_vmx_msr_bitmap())
+- vmx_set_msr_bitmap(vcpu);
++ vmx_update_msr_bitmap(vcpu);
+ }
+
+ static u32 vmx_exec_control(struct vcpu_vmx *vmx)
+@@ -5461,7 +5655,7 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+ vmcs_write64(VMWRITE_BITMAP, __pa(vmx_vmwrite_bitmap));
+ }
+ if (cpu_has_vmx_msr_bitmap())
+- vmcs_write64(MSR_BITMAP, __pa(vmx_msr_bitmap_legacy));
++ vmcs_write64(MSR_BITMAP, __pa(vmx->vmcs01.msr_bitmap));
+
+ vmcs_write64(VMCS_LINK_POINTER, -1ull); /* 22.3.1.5 */
+
+@@ -5539,6 +5733,8 @@ static void vmx_vcpu_setup(struct vcpu_vmx *vmx)
+ ++vmx->nmsrs;
+ }
+
++ if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
++ rdmsrl(MSR_IA32_ARCH_CAPABILITIES, vmx->arch_capabilities);
+
+ vm_exit_controls_init(vmx, vmcs_config.vmexit_ctrl);
+
+@@ -5567,6 +5763,7 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+ u64 cr0;
+
+ vmx->rmode.vm86_active = 0;
++ vmx->spec_ctrl = 0;
+
+ vmx->vcpu.arch.regs[VCPU_REGS_RDX] = get_rdx_init_val();
+ kvm_set_cr8(vcpu, 0);
+@@ -6744,7 +6941,7 @@ void vmx_enable_tdp(void)
+
+ static __init int hardware_setup(void)
+ {
+- int r = -ENOMEM, i, msr;
++ int r = -ENOMEM, i;
+
+ rdmsrl_safe(MSR_EFER, &host_efer);
+
+@@ -6764,9 +6961,6 @@ static __init int hardware_setup(void)
+
+ memset(vmx_io_bitmap_b, 0xff, PAGE_SIZE);
+
+- memset(vmx_msr_bitmap_legacy, 0xff, PAGE_SIZE);
+- memset(vmx_msr_bitmap_longmode, 0xff, PAGE_SIZE);
+-
+ if (setup_vmcs_config(&vmcs_config) < 0) {
+ r = -EIO;
+ goto out;
+@@ -6835,42 +7029,8 @@ static __init int hardware_setup(void)
+ kvm_tsc_scaling_ratio_frac_bits = 48;
+ }
+
+- vmx_disable_intercept_for_msr(MSR_FS_BASE, false);
+- vmx_disable_intercept_for_msr(MSR_GS_BASE, false);
+- vmx_disable_intercept_for_msr(MSR_KERNEL_GS_BASE, true);
+- vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_CS, false);
+- vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_ESP, false);
+- vmx_disable_intercept_for_msr(MSR_IA32_SYSENTER_EIP, false);
+-
+- memcpy(vmx_msr_bitmap_legacy_x2apic_apicv,
+- vmx_msr_bitmap_legacy, PAGE_SIZE);
+- memcpy(vmx_msr_bitmap_longmode_x2apic_apicv,
+- vmx_msr_bitmap_longmode, PAGE_SIZE);
+- memcpy(vmx_msr_bitmap_legacy_x2apic,
+- vmx_msr_bitmap_legacy, PAGE_SIZE);
+- memcpy(vmx_msr_bitmap_longmode_x2apic,
+- vmx_msr_bitmap_longmode, PAGE_SIZE);
+-
+ set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */
+
+- for (msr = 0x800; msr <= 0x8ff; msr++) {
+- if (msr == 0x839 /* TMCCT */)
+- continue;
+- vmx_disable_intercept_msr_x2apic(msr, MSR_TYPE_R, true);
+- }
+-
+- /*
+- * TPR reads and writes can be virtualized even if virtual interrupt
+- * delivery is not in use.
+- */
+- vmx_disable_intercept_msr_x2apic(0x808, MSR_TYPE_W, true);
+- vmx_disable_intercept_msr_x2apic(0x808, MSR_TYPE_R | MSR_TYPE_W, false);
+-
+- /* EOI */
+- vmx_disable_intercept_msr_x2apic(0x80b, MSR_TYPE_W, true);
+- /* SELF-IPI */
+- vmx_disable_intercept_msr_x2apic(0x83f, MSR_TYPE_W, true);
+-
+ if (enable_ept)
+ vmx_enable_tdp();
+ else
+@@ -6973,94 +7133,6 @@ static int handle_monitor(struct kvm_vcpu *vcpu)
+ return handle_nop(vcpu);
+ }
+
+-/*
+- * To run an L2 guest, we need a vmcs02 based on the L1-specified vmcs12.
+- * We could reuse a single VMCS for all the L2 guests, but we also want the
+- * option to allocate a separate vmcs02 for each separate loaded vmcs12 - this
+- * allows keeping them loaded on the processor, and in the future will allow
+- * optimizations where prepare_vmcs02 doesn't need to set all the fields on
+- * every entry if they never change.
+- * So we keep, in vmx->nested.vmcs02_pool, a cache of size VMCS02_POOL_SIZE
+- * (>=0) with a vmcs02 for each recently loaded vmcs12s, most recent first.
+- *
+- * The following functions allocate and free a vmcs02 in this pool.
+- */
+-
+-/* Get a VMCS from the pool to use as vmcs02 for the current vmcs12. */
+-static struct loaded_vmcs *nested_get_current_vmcs02(struct vcpu_vmx *vmx)
+-{
+- struct vmcs02_list *item;
+- list_for_each_entry(item, &vmx->nested.vmcs02_pool, list)
+- if (item->vmptr == vmx->nested.current_vmptr) {
+- list_move(&item->list, &vmx->nested.vmcs02_pool);
+- return &item->vmcs02;
+- }
+-
+- if (vmx->nested.vmcs02_num >= max(VMCS02_POOL_SIZE, 1)) {
+- /* Recycle the least recently used VMCS. */
+- item = list_last_entry(&vmx->nested.vmcs02_pool,
+- struct vmcs02_list, list);
+- item->vmptr = vmx->nested.current_vmptr;
+- list_move(&item->list, &vmx->nested.vmcs02_pool);
+- return &item->vmcs02;
+- }
+-
+- /* Create a new VMCS */
+- item = kzalloc(sizeof(struct vmcs02_list), GFP_KERNEL);
+- if (!item)
+- return NULL;
+- item->vmcs02.vmcs = alloc_vmcs();
+- item->vmcs02.shadow_vmcs = NULL;
+- if (!item->vmcs02.vmcs) {
+- kfree(item);
+- return NULL;
+- }
+- loaded_vmcs_init(&item->vmcs02);
+- item->vmptr = vmx->nested.current_vmptr;
+- list_add(&(item->list), &(vmx->nested.vmcs02_pool));
+- vmx->nested.vmcs02_num++;
+- return &item->vmcs02;
+-}
+-
+-/* Free and remove from pool a vmcs02 saved for a vmcs12 (if there is one) */
+-static void nested_free_vmcs02(struct vcpu_vmx *vmx, gpa_t vmptr)
+-{
+- struct vmcs02_list *item;
+- list_for_each_entry(item, &vmx->nested.vmcs02_pool, list)
+- if (item->vmptr == vmptr) {
+- free_loaded_vmcs(&item->vmcs02);
+- list_del(&item->list);
+- kfree(item);
+- vmx->nested.vmcs02_num--;
+- return;
+- }
+-}
+-
+-/*
+- * Free all VMCSs saved for this vcpu, except the one pointed by
+- * vmx->loaded_vmcs. We must be running L1, so vmx->loaded_vmcs
+- * must be &vmx->vmcs01.
+- */
+-static void nested_free_all_saved_vmcss(struct vcpu_vmx *vmx)
+-{
+- struct vmcs02_list *item, *n;
+-
+- WARN_ON(vmx->loaded_vmcs != &vmx->vmcs01);
+- list_for_each_entry_safe(item, n, &vmx->nested.vmcs02_pool, list) {
+- /*
+- * Something will leak if the above WARN triggers. Better than
+- * a use-after-free.
+- */
+- if (vmx->loaded_vmcs == &item->vmcs02)
+- continue;
+-
+- free_loaded_vmcs(&item->vmcs02);
+- list_del(&item->list);
+- kfree(item);
+- vmx->nested.vmcs02_num--;
+- }
+-}
+-
+ /*
+ * The following 3 functions, nested_vmx_succeed()/failValid()/failInvalid(),
+ * set the success or error code of an emulated VMX instruction, as specified
+@@ -7241,13 +7313,11 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
+ {
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+ struct vmcs *shadow_vmcs;
++ int r;
+
+- if (cpu_has_vmx_msr_bitmap()) {
+- vmx->nested.msr_bitmap =
+- (unsigned long *)__get_free_page(GFP_KERNEL);
+- if (!vmx->nested.msr_bitmap)
+- goto out_msr_bitmap;
+- }
++ r = alloc_loaded_vmcs(&vmx->nested.vmcs02);
++ if (r < 0)
++ goto out_vmcs02;
+
+ vmx->nested.cached_vmcs12 = kmalloc(VMCS12_SIZE, GFP_KERNEL);
+ if (!vmx->nested.cached_vmcs12)
+@@ -7264,9 +7334,6 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
+ vmx->vmcs01.shadow_vmcs = shadow_vmcs;
+ }
+
+- INIT_LIST_HEAD(&(vmx->nested.vmcs02_pool));
+- vmx->nested.vmcs02_num = 0;
+-
+ hrtimer_init(&vmx->nested.preemption_timer, CLOCK_MONOTONIC,
+ HRTIMER_MODE_REL_PINNED);
+ vmx->nested.preemption_timer.function = vmx_preemption_timer_fn;
+@@ -7278,9 +7345,9 @@ static int enter_vmx_operation(struct kvm_vcpu *vcpu)
+ kfree(vmx->nested.cached_vmcs12);
+
+ out_cached_vmcs12:
+- free_page((unsigned long)vmx->nested.msr_bitmap);
++ free_loaded_vmcs(&vmx->nested.vmcs02);
+
+-out_msr_bitmap:
++out_vmcs02:
+ return -ENOMEM;
+ }
+
+@@ -7423,10 +7490,6 @@ static void free_nested(struct vcpu_vmx *vmx)
+ free_vpid(vmx->nested.vpid02);
+ vmx->nested.posted_intr_nv = -1;
+ vmx->nested.current_vmptr = -1ull;
+- if (vmx->nested.msr_bitmap) {
+- free_page((unsigned long)vmx->nested.msr_bitmap);
+- vmx->nested.msr_bitmap = NULL;
+- }
+ if (enable_shadow_vmcs) {
+ vmx_disable_shadow_vmcs(vmx);
+ vmcs_clear(vmx->vmcs01.shadow_vmcs);
+@@ -7434,7 +7497,7 @@ static void free_nested(struct vcpu_vmx *vmx)
+ vmx->vmcs01.shadow_vmcs = NULL;
+ }
+ kfree(vmx->nested.cached_vmcs12);
+- /* Unpin physical memory we referred to in current vmcs02 */
++ /* Unpin physical memory we referred to in the vmcs02 */
+ if (vmx->nested.apic_access_page) {
+ kvm_release_page_dirty(vmx->nested.apic_access_page);
+ vmx->nested.apic_access_page = NULL;
+@@ -7450,7 +7513,7 @@ static void free_nested(struct vcpu_vmx *vmx)
+ vmx->nested.pi_desc = NULL;
+ }
+
+- nested_free_all_saved_vmcss(vmx);
++ free_loaded_vmcs(&vmx->nested.vmcs02);
+ }
+
+ /* Emulate the VMXOFF instruction */
+@@ -7493,8 +7556,6 @@ static int handle_vmclear(struct kvm_vcpu *vcpu)
+ vmptr + offsetof(struct vmcs12, launch_state),
+ &zero, sizeof(zero));
+
+- nested_free_vmcs02(vmx, vmptr);
+-
+ nested_vmx_succeed(vcpu);
+ return kvm_skip_emulated_instruction(vcpu);
+ }
+@@ -8406,10 +8467,11 @@ static bool nested_vmx_exit_reflected(struct kvm_vcpu *vcpu, u32 exit_reason)
+
+ /*
+ * The host physical addresses of some pages of guest memory
+- * are loaded into VMCS02 (e.g. L1's Virtual APIC Page). The CPU
+- * may write to these pages via their host physical address while
+- * L2 is running, bypassing any address-translation-based dirty
+- * tracking (e.g. EPT write protection).
++ * are loaded into the vmcs02 (e.g. vmcs12's Virtual APIC
++ * Page). The CPU may write to these pages via their host
++ * physical address while L2 is running, bypassing any
++ * address-translation-based dirty tracking (e.g. EPT write
++ * protection).
+ *
+ * Mark them dirty on every exit from L2 to prevent them from
+ * getting out of sync with dirty tracking.
+@@ -8943,7 +9005,7 @@ static void vmx_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
+ }
+ vmcs_write32(SECONDARY_VM_EXEC_CONTROL, sec_exec_control);
+
+- vmx_set_msr_bitmap(vcpu);
++ vmx_update_msr_bitmap(vcpu);
+ }
+
+ static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu, hpa_t hpa)
+@@ -9129,14 +9191,14 @@ static void vmx_handle_external_intr(struct kvm_vcpu *vcpu)
+ #endif
+ "pushf\n\t"
+ __ASM_SIZE(push) " $%c[cs]\n\t"
+- "call *%[entry]\n\t"
++ CALL_NOSPEC
+ :
+ #ifdef CONFIG_X86_64
+ [sp]"=&r"(tmp),
+ #endif
+ ASM_CALL_CONSTRAINT
+ :
+- [entry]"r"(entry),
++ THUNK_TARGET(entry),
+ [ss]"i"(__KERNEL_DS),
+ [cs]"i"(__KERNEL_CS)
+ );
+@@ -9373,6 +9435,15 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+
+ vmx_arm_hv_timer(vcpu);
+
++ /*
++ * If this vCPU has touched SPEC_CTRL, restore the guest's value if
++ * it's non-zero. Since vmentry is serialising on affected CPUs, there
++ * is no need to worry about the conditional branch over the wrmsr
++ * being speculatively taken.
++ */
++ if (vmx->spec_ctrl)
++ wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
++
+ vmx->__launched = vmx->loaded_vmcs->launched;
+ asm(
+ /* Store host registers */
+@@ -9491,6 +9562,27 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ #endif
+ );
+
++ /*
++ * We do not use IBRS in the kernel. If this vCPU has used the
++ * SPEC_CTRL MSR it may have left it on; save the value and
++ * turn it off. This is much more efficient than blindly adding
++ * it to the atomic save/restore list. Especially as the former
++ * (Saving guest MSRs on vmexit) doesn't even exist in KVM.
++ *
++ * For non-nested case:
++ * If the L01 MSR bitmap does not intercept the MSR, then we need to
++ * save it.
++ *
++ * For nested case:
++ * If the L02 MSR bitmap does not intercept the MSR, then we need to
++ * save it.
++ */
++ if (!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL))
++ rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
++
++ if (vmx->spec_ctrl)
++ wrmsrl(MSR_IA32_SPEC_CTRL, 0);
++
+ /* Eliminate branch target predictions from guest mode */
+ vmexit_fill_RSB();
+
+@@ -9604,6 +9696,7 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
+ {
+ int err;
+ struct vcpu_vmx *vmx = kmem_cache_zalloc(kvm_vcpu_cache, GFP_KERNEL);
++ unsigned long *msr_bitmap;
+ int cpu;
+
+ if (!vmx)
+@@ -9636,13 +9729,20 @@ static struct kvm_vcpu *vmx_create_vcpu(struct kvm *kvm, unsigned int id)
+ if (!vmx->guest_msrs)
+ goto free_pml;
+
+- vmx->loaded_vmcs = &vmx->vmcs01;
+- vmx->loaded_vmcs->vmcs = alloc_vmcs();
+- vmx->loaded_vmcs->shadow_vmcs = NULL;
+- if (!vmx->loaded_vmcs->vmcs)
++ err = alloc_loaded_vmcs(&vmx->vmcs01);
++ if (err < 0)
+ goto free_msrs;
+- loaded_vmcs_init(vmx->loaded_vmcs);
+
++ msr_bitmap = vmx->vmcs01.msr_bitmap;
++ vmx_disable_intercept_for_msr(msr_bitmap, MSR_FS_BASE, MSR_TYPE_RW);
++ vmx_disable_intercept_for_msr(msr_bitmap, MSR_GS_BASE, MSR_TYPE_RW);
++ vmx_disable_intercept_for_msr(msr_bitmap, MSR_KERNEL_GS_BASE, MSR_TYPE_RW);
++ vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_CS, MSR_TYPE_RW);
++ vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_ESP, MSR_TYPE_RW);
++ vmx_disable_intercept_for_msr(msr_bitmap, MSR_IA32_SYSENTER_EIP, MSR_TYPE_RW);
++ vmx->msr_bitmap_mode = 0;
++
++ vmx->loaded_vmcs = &vmx->vmcs01;
+ cpu = get_cpu();
+ vmx_vcpu_load(&vmx->vcpu, cpu);
+ vmx->vcpu.cpu = cpu;
+@@ -10105,10 +10205,25 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
+ int msr;
+ struct page *page;
+ unsigned long *msr_bitmap_l1;
+- unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.msr_bitmap;
++ unsigned long *msr_bitmap_l0 = to_vmx(vcpu)->nested.vmcs02.msr_bitmap;
++ /*
++ * pred_cmd & spec_ctrl are trying to verify two things:
++ *
++ * 1. L0 gave a permission to L1 to actually passthrough the MSR. This
++ * ensures that we do not accidentally generate an L02 MSR bitmap
++ * from the L12 MSR bitmap that is too permissive.
++ * 2. That L1 or L2s have actually used the MSR. This avoids
++ * unnecessarily merging of the bitmap if the MSR is unused. This
++ * works properly because we only update the L01 MSR bitmap lazily.
++ * So even if L0 should pass L1 these MSRs, the L01 bitmap is only
++ * updated to reflect this when L1 (or its L2s) actually write to
++ * the MSR.
++ */
++ bool pred_cmd = msr_write_intercepted_l01(vcpu, MSR_IA32_PRED_CMD);
++ bool spec_ctrl = msr_write_intercepted_l01(vcpu, MSR_IA32_SPEC_CTRL);
+
+- /* This shortcut is ok because we support only x2APIC MSRs so far. */
+- if (!nested_cpu_has_virt_x2apic_mode(vmcs12))
++ if (!nested_cpu_has_virt_x2apic_mode(vmcs12) &&
++ !pred_cmd && !spec_ctrl)
+ return false;
+
+ page = kvm_vcpu_gpa_to_page(vcpu, vmcs12->msr_bitmap);
+@@ -10141,6 +10256,19 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
+ MSR_TYPE_W);
+ }
+ }
++
++ if (spec_ctrl)
++ nested_vmx_disable_intercept_for_msr(
++ msr_bitmap_l1, msr_bitmap_l0,
++ MSR_IA32_SPEC_CTRL,
++ MSR_TYPE_R | MSR_TYPE_W);
++
++ if (pred_cmd)
++ nested_vmx_disable_intercept_for_msr(
++ msr_bitmap_l1, msr_bitmap_l0,
++ MSR_IA32_PRED_CMD,
++ MSR_TYPE_W);
++
+ kunmap(page);
+ kvm_release_page_clean(page);
+
+@@ -10682,6 +10810,9 @@ static int prepare_vmcs02(struct kvm_vcpu *vcpu, struct vmcs12 *vmcs12,
+ if (kvm_has_tsc_control)
+ decache_tsc_multiplier(vmx);
+
++ if (cpu_has_vmx_msr_bitmap())
++ vmcs_write64(MSR_BITMAP, __pa(vmx->nested.vmcs02.msr_bitmap));
++
+ if (enable_vpid) {
+ /*
+ * There is no direct mapping between vpid02 and vpid12, the
+@@ -10903,20 +11034,15 @@ static int enter_vmx_non_root_mode(struct kvm_vcpu *vcpu, bool from_vmentry)
+ {
+ struct vcpu_vmx *vmx = to_vmx(vcpu);
+ struct vmcs12 *vmcs12 = get_vmcs12(vcpu);
+- struct loaded_vmcs *vmcs02;
+ u32 msr_entry_idx;
+ u32 exit_qual;
+
+- vmcs02 = nested_get_current_vmcs02(vmx);
+- if (!vmcs02)
+- return -ENOMEM;
+-
+ enter_guest_mode(vcpu);
+
+ if (!(vmcs12->vm_entry_controls & VM_ENTRY_LOAD_DEBUG_CONTROLS))
+ vmx->nested.vmcs01_debugctl = vmcs_read64(GUEST_IA32_DEBUGCTL);
+
+- vmx_switch_vmcs(vcpu, vmcs02);
++ vmx_switch_vmcs(vcpu, &vmx->nested.vmcs02);
+ vmx_segment_cache_clear(vmx);
+
+ if (prepare_vmcs02(vcpu, vmcs12, from_vmentry, &exit_qual)) {
+@@ -11485,7 +11611,7 @@ static void load_vmcs12_host_state(struct kvm_vcpu *vcpu,
+ vmcs_write64(GUEST_IA32_DEBUGCTL, 0);
+
+ if (cpu_has_vmx_msr_bitmap())
+- vmx_set_msr_bitmap(vcpu);
++ vmx_update_msr_bitmap(vcpu);
+
+ if (nested_vmx_load_msr(vcpu, vmcs12->vm_exit_msr_load_addr,
+ vmcs12->vm_exit_msr_load_count))
+@@ -11534,10 +11660,6 @@ static void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
+ vm_exit_controls_reset_shadow(vmx);
+ vmx_segment_cache_clear(vmx);
+
+- /* if no vmcs02 cache requested, remove the one we used */
+- if (VMCS02_POOL_SIZE == 0)
+- nested_free_vmcs02(vmx, vmx->nested.current_vmptr);
+-
+ /* Update any VMCS fields that might have changed while L2 ran */
+ vmcs_write32(VM_EXIT_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+ vmcs_write32(VM_ENTRY_MSR_LOAD_COUNT, vmx->msr_autoload.nr);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index c53298dfbf50..ac381437c291 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -1009,6 +1009,7 @@ static u32 msrs_to_save[] = {
+ #endif
+ MSR_IA32_TSC, MSR_IA32_CR_PAT, MSR_VM_HSAVE_PA,
+ MSR_IA32_FEATURE_CONTROL, MSR_IA32_BNDCFGS, MSR_TSC_AUX,
++ MSR_IA32_SPEC_CTRL, MSR_IA32_ARCH_CAPABILITIES
+ };
+
+ static unsigned num_msrs_to_save;
+diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
+index f23934bbaf4e..69a473919260 100644
+--- a/arch/x86/lib/Makefile
++++ b/arch/x86/lib/Makefile
+@@ -27,6 +27,7 @@ lib-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem.o
+ lib-$(CONFIG_INSTRUCTION_DECODER) += insn.o inat.o insn-eval.o
+ lib-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
+ lib-$(CONFIG_RETPOLINE) += retpoline.o
++OBJECT_FILES_NON_STANDARD_retpoline.o :=y
+
+ obj-y += msr.o msr-reg.o msr-reg-export.o hweight.o
+
+diff --git a/arch/x86/lib/getuser.S b/arch/x86/lib/getuser.S
+index c97d935a29e8..49b167f73215 100644
+--- a/arch/x86/lib/getuser.S
++++ b/arch/x86/lib/getuser.S
+@@ -40,6 +40,8 @@ ENTRY(__get_user_1)
+ mov PER_CPU_VAR(current_task), %_ASM_DX
+ cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
+ jae bad_get_user
++ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
++ and %_ASM_DX, %_ASM_AX
+ ASM_STAC
+ 1: movzbl (%_ASM_AX),%edx
+ xor %eax,%eax
+@@ -54,6 +56,8 @@ ENTRY(__get_user_2)
+ mov PER_CPU_VAR(current_task), %_ASM_DX
+ cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
+ jae bad_get_user
++ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
++ and %_ASM_DX, %_ASM_AX
+ ASM_STAC
+ 2: movzwl -1(%_ASM_AX),%edx
+ xor %eax,%eax
+@@ -68,6 +72,8 @@ ENTRY(__get_user_4)
+ mov PER_CPU_VAR(current_task), %_ASM_DX
+ cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
+ jae bad_get_user
++ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
++ and %_ASM_DX, %_ASM_AX
+ ASM_STAC
+ 3: movl -3(%_ASM_AX),%edx
+ xor %eax,%eax
+@@ -83,6 +89,8 @@ ENTRY(__get_user_8)
+ mov PER_CPU_VAR(current_task), %_ASM_DX
+ cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
+ jae bad_get_user
++ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
++ and %_ASM_DX, %_ASM_AX
+ ASM_STAC
+ 4: movq -7(%_ASM_AX),%rdx
+ xor %eax,%eax
+@@ -94,6 +102,8 @@ ENTRY(__get_user_8)
+ mov PER_CPU_VAR(current_task), %_ASM_DX
+ cmp TASK_addr_limit(%_ASM_DX),%_ASM_AX
+ jae bad_get_user_8
++ sbb %_ASM_DX, %_ASM_DX /* array_index_mask_nospec() */
++ and %_ASM_DX, %_ASM_AX
+ ASM_STAC
+ 4: movl -7(%_ASM_AX),%edx
+ 5: movl -3(%_ASM_AX),%ecx
+diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
+index c909961e678a..480edc3a5e03 100644
+--- a/arch/x86/lib/retpoline.S
++++ b/arch/x86/lib/retpoline.S
+@@ -7,6 +7,7 @@
+ #include <asm/alternative-asm.h>
+ #include <asm/export.h>
+ #include <asm/nospec-branch.h>
++#include <asm/bitsperlong.h>
+
+ .macro THUNK reg
+ .section .text.__x86.indirect_thunk
+@@ -46,3 +47,58 @@ GENERATE_THUNK(r13)
+ GENERATE_THUNK(r14)
+ GENERATE_THUNK(r15)
+ #endif
++
++/*
++ * Fill the CPU return stack buffer.
++ *
++ * Each entry in the RSB, if used for a speculative 'ret', contains an
++ * infinite 'pause; lfence; jmp' loop to capture speculative execution.
++ *
++ * This is required in various cases for retpoline and IBRS-based
++ * mitigations for the Spectre variant 2 vulnerability. Sometimes to
++ * eliminate potentially bogus entries from the RSB, and sometimes
++ * purely to ensure that it doesn't get empty, which on some CPUs would
++ * allow predictions from other (unwanted!) sources to be used.
++ *
++ * Google experimented with loop-unrolling and this turned out to be
++ * the optimal version - two calls, each with their own speculation
++ * trap should their return address end up getting used, in a loop.
++ */
++.macro STUFF_RSB nr:req sp:req
++ mov $(\nr / 2), %_ASM_BX
++ .align 16
++771:
++ call 772f
++773: /* speculation trap */
++ pause
++ lfence
++ jmp 773b
++ .align 16
++772:
++ call 774f
++775: /* speculation trap */
++ pause
++ lfence
++ jmp 775b
++ .align 16
++774:
++ dec %_ASM_BX
++ jnz 771b
++ add $((BITS_PER_LONG/8) * \nr), \sp
++.endm
++
++#define RSB_FILL_LOOPS 16 /* To avoid underflow */
++
++ENTRY(__fill_rsb)
++ STUFF_RSB RSB_FILL_LOOPS, %_ASM_SP
++ ret
++END(__fill_rsb)
++EXPORT_SYMBOL_GPL(__fill_rsb)
++
++#define RSB_CLEAR_LOOPS 32 /* To forcibly overwrite all entries */
++
++ENTRY(__clear_rsb)
++ STUFF_RSB RSB_CLEAR_LOOPS, %_ASM_SP
++ ret
++END(__clear_rsb)
++EXPORT_SYMBOL_GPL(__clear_rsb)
+diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
+index 1b377f734e64..7add8ba06887 100644
+--- a/arch/x86/lib/usercopy_32.c
++++ b/arch/x86/lib/usercopy_32.c
+@@ -331,12 +331,12 @@ do { \
+
+ unsigned long __copy_user_ll(void *to, const void *from, unsigned long n)
+ {
+- stac();
++ __uaccess_begin_nospec();
+ if (movsl_is_ok(to, from, n))
+ __copy_user(to, from, n);
+ else
+ n = __copy_user_intel(to, from, n);
+- clac();
++ __uaccess_end();
+ return n;
+ }
+ EXPORT_SYMBOL(__copy_user_ll);
+@@ -344,7 +344,7 @@ EXPORT_SYMBOL(__copy_user_ll);
+ unsigned long __copy_from_user_ll_nocache_nozero(void *to, const void __user *from,
+ unsigned long n)
+ {
+- stac();
++ __uaccess_begin_nospec();
+ #ifdef CONFIG_X86_INTEL_USERCOPY
+ if (n > 64 && static_cpu_has(X86_FEATURE_XMM2))
+ n = __copy_user_intel_nocache(to, from, n);
+@@ -353,7 +353,7 @@ unsigned long __copy_from_user_ll_nocache_nozero(void *to, const void __user *fr
+ #else
+ __copy_user(to, from, n);
+ #endif
+- clac();
++ __uaccess_end();
+ return n;
+ }
+ EXPORT_SYMBOL(__copy_from_user_ll_nocache_nozero);
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 5bfe61a5e8e3..012d02624848 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -6,13 +6,14 @@
+ #include <linux/interrupt.h>
+ #include <linux/export.h>
+ #include <linux/cpu.h>
++#include <linux/debugfs.h>
+
+ #include <asm/tlbflush.h>
+ #include <asm/mmu_context.h>
++#include <asm/nospec-branch.h>
+ #include <asm/cache.h>
+ #include <asm/apic.h>
+ #include <asm/uv/uv.h>
+-#include <linux/debugfs.h>
+
+ /*
+ * TLB flushing, formerly SMP-only
+@@ -247,6 +248,27 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+ } else {
+ u16 new_asid;
+ bool need_flush;
++ u64 last_ctx_id = this_cpu_read(cpu_tlbstate.last_ctx_id);
++
++ /*
++ * Avoid user/user BTB poisoning by flushing the branch
++ * predictor when switching between processes. This stops
++ * one process from doing Spectre-v2 attacks on another.
++ *
++ * As an optimization, flush indirect branches only when
++ * switching into processes that disable dumping. This
++ * protects high value processes like gpg, without having
++ * too high performance overhead. IBPB is *expensive*!
++ *
++ * This will not flush branches when switching into kernel
++ * threads. It will also not flush if we switch to idle
++ * thread and back to the same process. It will flush if we
++ * switch to a different non-dumpable process.
++ */
++ if (tsk && tsk->mm &&
++ tsk->mm->context.ctx_id != last_ctx_id &&
++ get_dumpable(tsk->mm) != SUID_DUMP_USER)
++ indirect_branch_prediction_barrier();
+
+ if (IS_ENABLED(CONFIG_VMAP_STACK)) {
+ /*
+@@ -292,6 +314,14 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+ trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, 0);
+ }
+
++ /*
++ * Record last user mm's context id, so we can avoid
++ * flushing branch buffer with IBPB if we switch back
++ * to the same user.
++ */
++ if (next != &init_mm)
++ this_cpu_write(cpu_tlbstate.last_ctx_id, next->context.ctx_id);
++
+ this_cpu_write(cpu_tlbstate.loaded_mm, next);
+ this_cpu_write(cpu_tlbstate.loaded_mm_asid, new_asid);
+ }
+@@ -369,6 +399,7 @@ void initialize_tlbstate_and_flush(void)
+ write_cr3(build_cr3(mm->pgd, 0));
+
+ /* Reinitialize tlbstate. */
++ this_cpu_write(cpu_tlbstate.last_ctx_id, mm->context.ctx_id);
+ this_cpu_write(cpu_tlbstate.loaded_mm_asid, 0);
+ this_cpu_write(cpu_tlbstate.next_asid, 1);
+ this_cpu_write(cpu_tlbstate.ctxs[0].ctx_id, mm->context.ctx_id);
+diff --git a/drivers/auxdisplay/img-ascii-lcd.c b/drivers/auxdisplay/img-ascii-lcd.c
+index db040b378224..9180b9bd5821 100644
+--- a/drivers/auxdisplay/img-ascii-lcd.c
++++ b/drivers/auxdisplay/img-ascii-lcd.c
+@@ -441,3 +441,7 @@ static struct platform_driver img_ascii_lcd_driver = {
+ .remove = img_ascii_lcd_remove,
+ };
+ module_platform_driver(img_ascii_lcd_driver);
++
++MODULE_DESCRIPTION("Imagination Technologies ASCII LCD Display");
++MODULE_AUTHOR("Paul Burton <paul.burton@mips.com>");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/fpga/fpga-region.c b/drivers/fpga/fpga-region.c
+index d9ab7c75b14f..e0c73ceba2ed 100644
+--- a/drivers/fpga/fpga-region.c
++++ b/drivers/fpga/fpga-region.c
+@@ -147,6 +147,7 @@ static struct fpga_manager *fpga_region_get_manager(struct fpga_region *region)
+ mgr_node = of_parse_phandle(np, "fpga-mgr", 0);
+ if (mgr_node) {
+ mgr = of_fpga_mgr_get(mgr_node);
++ of_node_put(mgr_node);
+ of_node_put(np);
+ return mgr;
+ }
+@@ -192,10 +193,13 @@ static int fpga_region_get_bridges(struct fpga_region *region,
+ parent_br = region_np->parent;
+
+ /* If overlay has a list of bridges, use it. */
+- if (of_parse_phandle(overlay, "fpga-bridges", 0))
++ br = of_parse_phandle(overlay, "fpga-bridges", 0);
++ if (br) {
++ of_node_put(br);
+ np = overlay;
+- else
++ } else {
+ np = region_np;
++ }
+
+ for (i = 0; ; i++) {
+ br = of_parse_phandle(np, "fpga-bridges", i);
+@@ -203,12 +207,15 @@ static int fpga_region_get_bridges(struct fpga_region *region,
+ break;
+
+ /* If parent bridge is in list, skip it. */
+- if (br == parent_br)
++ if (br == parent_br) {
++ of_node_put(br);
+ continue;
++ }
+
+ /* If node is a bridge, get it and add to list */
+ ret = fpga_bridge_get_to_list(br, region->info,
+ ®ion->bridge_list);
++ of_node_put(br);
+
+ /* If any of the bridges are in use, give up */
+ if (ret == -EBUSY) {
+diff --git a/drivers/iio/accel/kxsd9-i2c.c b/drivers/iio/accel/kxsd9-i2c.c
+index 98fbb628d5bd..38411e1c155b 100644
+--- a/drivers/iio/accel/kxsd9-i2c.c
++++ b/drivers/iio/accel/kxsd9-i2c.c
+@@ -63,3 +63,6 @@ static struct i2c_driver kxsd9_i2c_driver = {
+ .id_table = kxsd9_i2c_id,
+ };
+ module_i2c_driver(kxsd9_i2c_driver);
++
++MODULE_LICENSE("GPL v2");
++MODULE_DESCRIPTION("KXSD9 accelerometer I2C interface");
+diff --git a/drivers/iio/adc/qcom-vadc-common.c b/drivers/iio/adc/qcom-vadc-common.c
+index 47d24ae5462f..fe3d7826783c 100644
+--- a/drivers/iio/adc/qcom-vadc-common.c
++++ b/drivers/iio/adc/qcom-vadc-common.c
+@@ -5,6 +5,7 @@
+ #include <linux/math64.h>
+ #include <linux/log2.h>
+ #include <linux/err.h>
++#include <linux/module.h>
+
+ #include "qcom-vadc-common.h"
+
+@@ -229,3 +230,6 @@ int qcom_vadc_decimation_from_dt(u32 value)
+ return __ffs64(value / VADC_DECIMATION_MIN);
+ }
+ EXPORT_SYMBOL(qcom_vadc_decimation_from_dt);
++
++MODULE_LICENSE("GPL v2");
++MODULE_DESCRIPTION("Qualcomm ADC common functionality");
+diff --git a/drivers/pinctrl/pxa/pinctrl-pxa2xx.c b/drivers/pinctrl/pxa/pinctrl-pxa2xx.c
+index 866aa3ce1ac9..6cf0006d4c8d 100644
+--- a/drivers/pinctrl/pxa/pinctrl-pxa2xx.c
++++ b/drivers/pinctrl/pxa/pinctrl-pxa2xx.c
+@@ -436,3 +436,7 @@ int pxa2xx_pinctrl_exit(struct platform_device *pdev)
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(pxa2xx_pinctrl_exit);
++
++MODULE_AUTHOR("Robert Jarzmik <robert.jarzmik@free.fr>");
++MODULE_DESCRIPTION("Marvell PXA2xx pinctrl driver");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 854995e1cae7..7e7e6eb95b0a 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -974,6 +974,8 @@ static int uart_set_info(struct tty_struct *tty, struct tty_port *port,
+ }
+ } else {
+ retval = uart_startup(tty, state, 1);
++ if (retval == 0)
++ tty_port_set_initialized(port, true);
+ if (retval > 0)
+ retval = 0;
+ }
+diff --git a/include/linux/fdtable.h b/include/linux/fdtable.h
+index 1c65817673db..41615f38bcff 100644
+--- a/include/linux/fdtable.h
++++ b/include/linux/fdtable.h
+@@ -10,6 +10,7 @@
+ #include <linux/compiler.h>
+ #include <linux/spinlock.h>
+ #include <linux/rcupdate.h>
++#include <linux/nospec.h>
+ #include <linux/types.h>
+ #include <linux/init.h>
+ #include <linux/fs.h>
+@@ -82,8 +83,10 @@ static inline struct file *__fcheck_files(struct files_struct *files, unsigned i
+ {
+ struct fdtable *fdt = rcu_dereference_raw(files->fdt);
+
+- if (fd < fdt->max_fds)
++ if (fd < fdt->max_fds) {
++ fd = array_index_nospec(fd, fdt->max_fds);
+ return rcu_dereference_raw(fdt->fd[fd]);
++ }
+ return NULL;
+ }
+
+diff --git a/include/linux/init.h b/include/linux/init.h
+index ea1b31101d9e..506a98151131 100644
+--- a/include/linux/init.h
++++ b/include/linux/init.h
+@@ -5,6 +5,13 @@
+ #include <linux/compiler.h>
+ #include <linux/types.h>
+
++/* Built-in __init functions needn't be compiled with retpoline */
++#if defined(RETPOLINE) && !defined(MODULE)
++#define __noretpoline __attribute__((indirect_branch("keep")))
++#else
++#define __noretpoline
++#endif
++
+ /* These macros are used to mark some functions or
+ * initialized data (doesn't apply to uninitialized data)
+ * as `initialization' functions. The kernel can take this
+@@ -40,7 +47,7 @@
+
+ /* These are for everybody (although not all archs will actually
+ discard it in modules) */
+-#define __init __section(.init.text) __cold __latent_entropy
++#define __init __section(.init.text) __cold __latent_entropy __noretpoline
+ #define __initdata __section(.init.data)
+ #define __initconst __section(.init.rodata)
+ #define __exitdata __section(.exit.data)
+diff --git a/include/linux/module.h b/include/linux/module.h
+index c69b49abe877..1d8f245967be 100644
+--- a/include/linux/module.h
++++ b/include/linux/module.h
+@@ -801,6 +801,15 @@ static inline void module_bug_finalize(const Elf_Ehdr *hdr,
+ static inline void module_bug_cleanup(struct module *mod) {}
+ #endif /* CONFIG_GENERIC_BUG */
+
++#ifdef RETPOLINE
++extern bool retpoline_module_ok(bool has_retpoline);
++#else
++static inline bool retpoline_module_ok(bool has_retpoline)
++{
++ return true;
++}
++#endif
++
+ #ifdef CONFIG_MODULE_SIG
+ static inline bool module_sig_ok(struct module *module)
+ {
+diff --git a/include/linux/nospec.h b/include/linux/nospec.h
+new file mode 100644
+index 000000000000..b99bced39ac2
+--- /dev/null
++++ b/include/linux/nospec.h
+@@ -0,0 +1,72 @@
++// SPDX-License-Identifier: GPL-2.0
++// Copyright(c) 2018 Linus Torvalds. All rights reserved.
++// Copyright(c) 2018 Alexei Starovoitov. All rights reserved.
++// Copyright(c) 2018 Intel Corporation. All rights reserved.
++
++#ifndef _LINUX_NOSPEC_H
++#define _LINUX_NOSPEC_H
++
++/**
++ * array_index_mask_nospec() - generate a ~0 mask when index < size, 0 otherwise
++ * @index: array element index
++ * @size: number of elements in array
++ *
++ * When @index is out of bounds (@index >= @size), the sign bit will be
++ * set. Extend the sign bit to all bits and invert, giving a result of
++ * zero for an out of bounds index, or ~0 if within bounds [0, @size).
++ */
++#ifndef array_index_mask_nospec
++static inline unsigned long array_index_mask_nospec(unsigned long index,
++ unsigned long size)
++{
++ /*
++ * Warn developers about inappropriate array_index_nospec() usage.
++ *
++ * Even if the CPU speculates past the WARN_ONCE branch, the
++ * sign bit of @index is taken into account when generating the
++ * mask.
++ *
++ * This warning is compiled out when the compiler can infer that
++ * @index and @size are less than LONG_MAX.
++ */
++ if (WARN_ONCE(index > LONG_MAX || size > LONG_MAX,
++ "array_index_nospec() limited to range of [0, LONG_MAX]\n"))
++ return 0;
++
++ /*
++ * Always calculate and emit the mask even if the compiler
++ * thinks the mask is not needed. The compiler does not take
++ * into account the value of @index under speculation.
++ */
++ OPTIMIZER_HIDE_VAR(index);
++ return ~(long)(index | (size - 1UL - index)) >> (BITS_PER_LONG - 1);
++}
++#endif
++
++/*
++ * array_index_nospec - sanitize an array index after a bounds check
++ *
++ * For a code sequence like:
++ *
++ * if (index < size) {
++ * index = array_index_nospec(index, size);
++ * val = array[index];
++ * }
++ *
++ * ...if the CPU speculates past the bounds check then
++ * array_index_nospec() will clamp the index within the range of [0,
++ * size).
++ */
++#define array_index_nospec(index, size) \
++({ \
++ typeof(index) _i = (index); \
++ typeof(size) _s = (size); \
++ unsigned long _mask = array_index_mask_nospec(_i, _s); \
++ \
++ BUILD_BUG_ON(sizeof(_i) > sizeof(long)); \
++ BUILD_BUG_ON(sizeof(_s) > sizeof(long)); \
++ \
++ _i &= _mask; \
++ _i; \
++})
++#endif /* _LINUX_NOSPEC_H */
+diff --git a/kernel/module.c b/kernel/module.c
+index dea01ac9cb74..09e48eee4d55 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -2863,6 +2863,15 @@ static int check_modinfo_livepatch(struct module *mod, struct load_info *info)
+ }
+ #endif /* CONFIG_LIVEPATCH */
+
++static void check_modinfo_retpoline(struct module *mod, struct load_info *info)
++{
++ if (retpoline_module_ok(get_modinfo(info, "retpoline")))
++ return;
++
++ pr_warn("%s: loading module not compiled with retpoline compiler.\n",
++ mod->name);
++}
++
+ /* Sets info->hdr and info->len. */
+ static int copy_module_from_user(const void __user *umod, unsigned long len,
+ struct load_info *info)
+@@ -3029,6 +3038,8 @@ static int check_modinfo(struct module *mod, struct load_info *info, int flags)
+ add_taint_module(mod, TAINT_OOT_MODULE, LOCKDEP_STILL_OK);
+ }
+
++ check_modinfo_retpoline(mod, info);
++
+ if (get_modinfo(info, "staging")) {
+ add_taint_module(mod, TAINT_CRAP, LOCKDEP_STILL_OK);
+ pr_warn("%s: module is from the staging directory, the quality "
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 542a4fc0a8d7..4bbcfc1e2d43 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -16,6 +16,7 @@
+ #include <linux/nl80211.h>
+ #include <linux/rtnetlink.h>
+ #include <linux/netlink.h>
++#include <linux/nospec.h>
+ #include <linux/etherdevice.h>
+ #include <net/net_namespace.h>
+ #include <net/genetlink.h>
+@@ -2056,20 +2057,22 @@ static const struct nla_policy txq_params_policy[NL80211_TXQ_ATTR_MAX + 1] = {
+ static int parse_txq_params(struct nlattr *tb[],
+ struct ieee80211_txq_params *txq_params)
+ {
++ u8 ac;
++
+ if (!tb[NL80211_TXQ_ATTR_AC] || !tb[NL80211_TXQ_ATTR_TXOP] ||
+ !tb[NL80211_TXQ_ATTR_CWMIN] || !tb[NL80211_TXQ_ATTR_CWMAX] ||
+ !tb[NL80211_TXQ_ATTR_AIFS])
+ return -EINVAL;
+
+- txq_params->ac = nla_get_u8(tb[NL80211_TXQ_ATTR_AC]);
++ ac = nla_get_u8(tb[NL80211_TXQ_ATTR_AC]);
+ txq_params->txop = nla_get_u16(tb[NL80211_TXQ_ATTR_TXOP]);
+ txq_params->cwmin = nla_get_u16(tb[NL80211_TXQ_ATTR_CWMIN]);
+ txq_params->cwmax = nla_get_u16(tb[NL80211_TXQ_ATTR_CWMAX]);
+ txq_params->aifs = nla_get_u8(tb[NL80211_TXQ_ATTR_AIFS]);
+
+- if (txq_params->ac >= NL80211_NUM_ACS)
++ if (ac >= NL80211_NUM_ACS)
+ return -EINVAL;
+-
++ txq_params->ac = array_index_nospec(ac, NL80211_NUM_ACS);
+ return 0;
+ }
+
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index f51cf977c65b..6510536c06df 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -2165,6 +2165,14 @@ static void add_intree_flag(struct buffer *b, int is_intree)
+ buf_printf(b, "\nMODULE_INFO(intree, \"Y\");\n");
+ }
+
++/* Cannot check for assembler */
++static void add_retpoline(struct buffer *b)
++{
++ buf_printf(b, "\n#ifdef RETPOLINE\n");
++ buf_printf(b, "MODULE_INFO(retpoline, \"Y\");\n");
++ buf_printf(b, "#endif\n");
++}
++
+ static void add_staging_flag(struct buffer *b, const char *name)
+ {
+ static const char *staging_dir = "drivers/staging";
+@@ -2506,6 +2514,7 @@ int main(int argc, char **argv)
+ err |= check_modname_len(mod);
+ add_header(&buf, mod);
+ add_intree_flag(&buf, !external_module);
++ add_retpoline(&buf);
+ add_staging_flag(&buf, mod->name);
+ err |= add_versions(&buf, mod);
+ add_depends(&buf, mod, modules);
+diff --git a/sound/soc/codecs/pcm512x-spi.c b/sound/soc/codecs/pcm512x-spi.c
+index 25c63510ae15..7cdd2dc4fd79 100644
+--- a/sound/soc/codecs/pcm512x-spi.c
++++ b/sound/soc/codecs/pcm512x-spi.c
+@@ -70,3 +70,7 @@ static struct spi_driver pcm512x_spi_driver = {
+ };
+
+ module_spi_driver(pcm512x_spi_driver);
++
++MODULE_DESCRIPTION("ASoC PCM512x codec driver - SPI");
++MODULE_AUTHOR("Mark Brown <broonie@kernel.org>");
++MODULE_LICENSE("GPL v2");
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index f40d46e24bcc..9cd028aa1509 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -543,18 +543,14 @@ static int add_call_destinations(struct objtool_file *file)
+ dest_off = insn->offset + insn->len + insn->immediate;
+ insn->call_dest = find_symbol_by_offset(insn->sec,
+ dest_off);
+- /*
+- * FIXME: Thanks to retpolines, it's now considered
+- * normal for a function to call within itself. So
+- * disable this warning for now.
+- */
+-#if 0
+- if (!insn->call_dest) {
+- WARN_FUNC("can't find call dest symbol at offset 0x%lx",
+- insn->sec, insn->offset, dest_off);
++
++ if (!insn->call_dest && !insn->ignore) {
++ WARN_FUNC("unsupported intra-function call",
++ insn->sec, insn->offset);
++ WARN("If this is a retpoline, please patch it in with alternatives and annotate it with ANNOTATE_NOSPEC_ALTERNATIVE.");
+ return -1;
+ }
+-#endif
++
+ } else if (rela->sym->type == STT_SECTION) {
+ insn->call_dest = find_symbol_by_offset(rela->sym->sec,
+ rela->addend+4);
+@@ -598,7 +594,7 @@ static int handle_group_alt(struct objtool_file *file,
+ struct instruction *orig_insn,
+ struct instruction **new_insn)
+ {
+- struct instruction *last_orig_insn, *last_new_insn, *insn, *fake_jump;
++ struct instruction *last_orig_insn, *last_new_insn, *insn, *fake_jump = NULL;
+ unsigned long dest_off;
+
+ last_orig_insn = NULL;
+@@ -614,28 +610,30 @@ static int handle_group_alt(struct objtool_file *file,
+ last_orig_insn = insn;
+ }
+
+- if (!next_insn_same_sec(file, last_orig_insn)) {
+- WARN("%s: don't know how to handle alternatives at end of section",
+- special_alt->orig_sec->name);
+- return -1;
+- }
+-
+- fake_jump = malloc(sizeof(*fake_jump));
+- if (!fake_jump) {
+- WARN("malloc failed");
+- return -1;
++ if (next_insn_same_sec(file, last_orig_insn)) {
++ fake_jump = malloc(sizeof(*fake_jump));
++ if (!fake_jump) {
++ WARN("malloc failed");
++ return -1;
++ }
++ memset(fake_jump, 0, sizeof(*fake_jump));
++ INIT_LIST_HEAD(&fake_jump->alts);
++ clear_insn_state(&fake_jump->state);
++
++ fake_jump->sec = special_alt->new_sec;
++ fake_jump->offset = -1;
++ fake_jump->type = INSN_JUMP_UNCONDITIONAL;
++ fake_jump->jump_dest = list_next_entry(last_orig_insn, list);
++ fake_jump->ignore = true;
+ }
+- memset(fake_jump, 0, sizeof(*fake_jump));
+- INIT_LIST_HEAD(&fake_jump->alts);
+- clear_insn_state(&fake_jump->state);
+-
+- fake_jump->sec = special_alt->new_sec;
+- fake_jump->offset = -1;
+- fake_jump->type = INSN_JUMP_UNCONDITIONAL;
+- fake_jump->jump_dest = list_next_entry(last_orig_insn, list);
+- fake_jump->ignore = true;
+
+ if (!special_alt->new_len) {
++ if (!fake_jump) {
++ WARN("%s: empty alternative at end of section",
++ special_alt->orig_sec->name);
++ return -1;
++ }
++
+ *new_insn = fake_jump;
+ return 0;
+ }
+@@ -648,6 +646,8 @@ static int handle_group_alt(struct objtool_file *file,
+
+ last_new_insn = insn;
+
++ insn->ignore = orig_insn->ignore_alts;
++
+ if (insn->type != INSN_JUMP_CONDITIONAL &&
+ insn->type != INSN_JUMP_UNCONDITIONAL)
+ continue;
+@@ -656,8 +656,14 @@ static int handle_group_alt(struct objtool_file *file,
+ continue;
+
+ dest_off = insn->offset + insn->len + insn->immediate;
+- if (dest_off == special_alt->new_off + special_alt->new_len)
++ if (dest_off == special_alt->new_off + special_alt->new_len) {
++ if (!fake_jump) {
++ WARN("%s: alternative jump to end of section",
++ special_alt->orig_sec->name);
++ return -1;
++ }
+ insn->jump_dest = fake_jump;
++ }
+
+ if (!insn->jump_dest) {
+ WARN_FUNC("can't find alternative jump destination",
+@@ -672,7 +678,8 @@ static int handle_group_alt(struct objtool_file *file,
+ return -1;
+ }
+
+- list_add(&fake_jump->list, &last_new_insn->list);
++ if (fake_jump)
++ list_add(&fake_jump->list, &last_new_insn->list);
+
+ return 0;
+ }
+@@ -729,10 +736,6 @@ static int add_special_section_alts(struct objtool_file *file)
+ goto out;
+ }
+
+- /* Ignore retpoline alternatives. */
+- if (orig_insn->ignore_alts)
+- continue;
+-
+ new_insn = NULL;
+ if (!special_alt->group || special_alt->new_len) {
+ new_insn = find_insn(file, special_alt->new_sec,
+@@ -1089,11 +1092,11 @@ static int decode_sections(struct objtool_file *file)
+ if (ret)
+ return ret;
+
+- ret = add_call_destinations(file);
++ ret = add_special_section_alts(file);
+ if (ret)
+ return ret;
+
+- ret = add_special_section_alts(file);
++ ret = add_call_destinations(file);
+ if (ret)
+ return ret;
+
+@@ -1720,10 +1723,12 @@ static int validate_branch(struct objtool_file *file, struct instruction *first,
+
+ insn->visited = true;
+
+- list_for_each_entry(alt, &insn->alts, list) {
+- ret = validate_branch(file, alt->insn, state);
+- if (ret)
+- return 1;
++ if (!insn->ignore_alts) {
++ list_for_each_entry(alt, &insn->alts, list) {
++ ret = validate_branch(file, alt->insn, state);
++ if (ret)
++ return 1;
++ }
+ }
+
+ switch (insn->type) {
+diff --git a/tools/objtool/orc_gen.c b/tools/objtool/orc_gen.c
+index e61fe703197b..18384d9be4e1 100644
+--- a/tools/objtool/orc_gen.c
++++ b/tools/objtool/orc_gen.c
+@@ -98,6 +98,11 @@ static int create_orc_entry(struct section *u_sec, struct section *ip_relasec,
+ struct orc_entry *orc;
+ struct rela *rela;
+
++ if (!insn_sec->sym) {
++ WARN("missing symbol for section %s", insn_sec->name);
++ return -1;
++ }
++
+ /* populate ORC data */
+ orc = (struct orc_entry *)u_sec->data->d_buf + idx;
+ memcpy(orc, o, sizeof(*orc));
^ permalink raw reply related [flat|nested] 27+ messages in thread
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-02-12 9:01 Alice Ferrazzi
0 siblings, 0 replies; 27+ messages in thread
From: Alice Ferrazzi @ 2018-02-12 9:01 UTC
To: gentoo-commits
commit: a0d0349612049072e45ad7d28ff2422914b4b2dd
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Mon Feb 12 09:01:01 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Mon Feb 12 09:01:01 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a0d03496
Linux kernel 4.15.3
0000_README | 4 +
1002_linux-4.15.3.patch | 776 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 780 insertions(+)
diff --git a/0000_README b/0000_README
index db575f6..635b977 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-4.15.2.patch
From: http://www.kernel.org
Desc: Linux 4.15.2
+Patch: 1002_linux-4.15.3.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.3
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1002_linux-4.15.3.patch b/1002_linux-4.15.3.patch
new file mode 100644
index 0000000..7d0d7a2
--- /dev/null
+++ b/1002_linux-4.15.3.patch
@@ -0,0 +1,776 @@
+diff --git a/Makefile b/Makefile
+index 54f1bc10b531..13566ad7863a 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
+index 9267cbdb14d2..3ced1ba1fd11 100644
+--- a/crypto/tcrypt.c
++++ b/crypto/tcrypt.c
+@@ -198,11 +198,13 @@ static void sg_init_aead(struct scatterlist *sg, char *xbuf[XBUFSIZE],
+ }
+
+ sg_init_table(sg, np + 1);
+- np--;
++ if (rem)
++ np--;
+ for (k = 0; k < np; k++)
+ sg_set_buf(&sg[k + 1], xbuf[k], PAGE_SIZE);
+
+- sg_set_buf(&sg[k + 1], xbuf[k], rem);
++ if (rem)
++ sg_set_buf(&sg[k + 1], xbuf[k], rem);
+ }
+
+ static void test_aead_speed(const char *algo, int enc, unsigned int secs,
+diff --git a/drivers/gpio/gpio-uniphier.c b/drivers/gpio/gpio-uniphier.c
+index 016d7427ebfa..761d8279abca 100644
+--- a/drivers/gpio/gpio-uniphier.c
++++ b/drivers/gpio/gpio-uniphier.c
+@@ -505,4 +505,4 @@ module_platform_driver(uniphier_gpio_driver);
+
+ MODULE_AUTHOR("Masahiro Yamada <yamada.masahiro@socionext.com>");
+ MODULE_DESCRIPTION("UniPhier GPIO driver");
+-MODULE_LICENSE("GPL");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/media/platform/mtk-vcodec/mtk_vcodec_util.c b/drivers/media/platform/mtk-vcodec/mtk_vcodec_util.c
+index 46768c056193..0c28d0b995cc 100644
+--- a/drivers/media/platform/mtk-vcodec/mtk_vcodec_util.c
++++ b/drivers/media/platform/mtk-vcodec/mtk_vcodec_util.c
+@@ -115,3 +115,6 @@ struct mtk_vcodec_ctx *mtk_vcodec_get_curr_ctx(struct mtk_vcodec_dev *dev)
+ return ctx;
+ }
+ EXPORT_SYMBOL(mtk_vcodec_get_curr_ctx);
++
++MODULE_LICENSE("GPL v2");
++MODULE_DESCRIPTION("Mediatek video codec driver");
+diff --git a/drivers/media/platform/soc_camera/soc_scale_crop.c b/drivers/media/platform/soc_camera/soc_scale_crop.c
+index 270ec613c27c..6164102e6f9f 100644
+--- a/drivers/media/platform/soc_camera/soc_scale_crop.c
++++ b/drivers/media/platform/soc_camera/soc_scale_crop.c
+@@ -420,3 +420,7 @@ void soc_camera_calc_client_output(struct soc_camera_device *icd,
+ mf->height = soc_camera_shift_scale(rect->height, shift, scale_v);
+ }
+ EXPORT_SYMBOL(soc_camera_calc_client_output);
++
++MODULE_DESCRIPTION("soc-camera scaling-cropping functions");
++MODULE_AUTHOR("Guennadi Liakhovetski <kernel@pengutronix.de>");
++MODULE_LICENSE("GPL");
+diff --git a/drivers/media/platform/tegra-cec/tegra_cec.c b/drivers/media/platform/tegra-cec/tegra_cec.c
+index 807c94c70049..92f93a880015 100644
+--- a/drivers/media/platform/tegra-cec/tegra_cec.c
++++ b/drivers/media/platform/tegra-cec/tegra_cec.c
+@@ -493,3 +493,8 @@ static struct platform_driver tegra_cec_driver = {
+ };
+
+ module_platform_driver(tegra_cec_driver);
++
++MODULE_DESCRIPTION("Tegra HDMI CEC driver");
++MODULE_AUTHOR("NVIDIA CORPORATION");
++MODULE_AUTHOR("Cisco Systems, Inc. and/or its affiliates");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+index f7080d0ab874..46b0372dd032 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+@@ -3891,7 +3891,7 @@ static void qlcnic_83xx_flush_mbx_queue(struct qlcnic_adapter *adapter)
+ struct list_head *head = &mbx->cmd_q;
+ struct qlcnic_cmd_args *cmd = NULL;
+
+- spin_lock(&mbx->queue_lock);
++ spin_lock_bh(&mbx->queue_lock);
+
+ while (!list_empty(head)) {
+ cmd = list_entry(head->next, struct qlcnic_cmd_args, list);
+@@ -3902,7 +3902,7 @@ static void qlcnic_83xx_flush_mbx_queue(struct qlcnic_adapter *adapter)
+ qlcnic_83xx_notify_cmd_completion(adapter, cmd);
+ }
+
+- spin_unlock(&mbx->queue_lock);
++ spin_unlock_bh(&mbx->queue_lock);
+ }
+
+ static int qlcnic_83xx_check_mbx_status(struct qlcnic_adapter *adapter)
+@@ -3938,12 +3938,12 @@ static void qlcnic_83xx_dequeue_mbx_cmd(struct qlcnic_adapter *adapter,
+ {
+ struct qlcnic_mailbox *mbx = adapter->ahw->mailbox;
+
+- spin_lock(&mbx->queue_lock);
++ spin_lock_bh(&mbx->queue_lock);
+
+ list_del(&cmd->list);
+ mbx->num_cmds--;
+
+- spin_unlock(&mbx->queue_lock);
++ spin_unlock_bh(&mbx->queue_lock);
+
+ qlcnic_83xx_notify_cmd_completion(adapter, cmd);
+ }
+@@ -4008,7 +4008,7 @@ static int qlcnic_83xx_enqueue_mbx_cmd(struct qlcnic_adapter *adapter,
+ init_completion(&cmd->completion);
+ cmd->rsp_opcode = QLC_83XX_MBX_RESPONSE_UNKNOWN;
+
+- spin_lock(&mbx->queue_lock);
++ spin_lock_bh(&mbx->queue_lock);
+
+ list_add_tail(&cmd->list, &mbx->cmd_q);
+ mbx->num_cmds++;
+@@ -4016,7 +4016,7 @@ static int qlcnic_83xx_enqueue_mbx_cmd(struct qlcnic_adapter *adapter,
+ *timeout = cmd->total_cmds * QLC_83XX_MBX_TIMEOUT;
+ queue_work(mbx->work_q, &mbx->work);
+
+- spin_unlock(&mbx->queue_lock);
++ spin_unlock_bh(&mbx->queue_lock);
+
+ return 0;
+ }
+@@ -4112,15 +4112,15 @@ static void qlcnic_83xx_mailbox_worker(struct work_struct *work)
+ mbx->rsp_status = QLC_83XX_MBX_RESPONSE_WAIT;
+ spin_unlock_irqrestore(&mbx->aen_lock, flags);
+
+- spin_lock(&mbx->queue_lock);
++ spin_lock_bh(&mbx->queue_lock);
+
+ if (list_empty(head)) {
+- spin_unlock(&mbx->queue_lock);
++ spin_unlock_bh(&mbx->queue_lock);
+ return;
+ }
+ cmd = list_entry(head->next, struct qlcnic_cmd_args, list);
+
+- spin_unlock(&mbx->queue_lock);
++ spin_unlock_bh(&mbx->queue_lock);
+
+ mbx_ops->encode_cmd(adapter, cmd);
+ mbx_ops->nofity_fw(adapter, QLC_83XX_MBX_REQUEST);
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index 734286ebe5ef..dd713dff8d22 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -1395,7 +1395,7 @@ DECLARE_RTL_COND(rtl_ocp_tx_cond)
+ {
+ void __iomem *ioaddr = tp->mmio_addr;
+
+- return RTL_R8(IBISR0) & 0x02;
++ return RTL_R8(IBISR0) & 0x20;
+ }
+
+ static void rtl8168ep_stop_cmac(struct rtl8169_private *tp)
+@@ -1403,7 +1403,7 @@ static void rtl8168ep_stop_cmac(struct rtl8169_private *tp)
+ void __iomem *ioaddr = tp->mmio_addr;
+
+ RTL_W8(IBCR2, RTL_R8(IBCR2) & ~0x01);
+- rtl_msleep_loop_wait_low(tp, &rtl_ocp_tx_cond, 50, 2000);
++ rtl_msleep_loop_wait_high(tp, &rtl_ocp_tx_cond, 50, 2000);
+ RTL_W8(IBISR0, RTL_R8(IBISR0) | 0x20);
+ RTL_W8(IBCR0, RTL_R8(IBCR0) & ~0x01);
+ }
+diff --git a/drivers/net/ethernet/rocker/rocker_main.c b/drivers/net/ethernet/rocker/rocker_main.c
+index fc8f8bdf6579..056cb6093630 100644
+--- a/drivers/net/ethernet/rocker/rocker_main.c
++++ b/drivers/net/ethernet/rocker/rocker_main.c
+@@ -2902,6 +2902,12 @@ static int rocker_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ goto err_alloc_ordered_workqueue;
+ }
+
++ err = rocker_probe_ports(rocker);
++ if (err) {
++ dev_err(&pdev->dev, "failed to probe ports\n");
++ goto err_probe_ports;
++ }
++
+ /* Only FIBs pointing to our own netdevs are programmed into
+ * the device, so no need to pass a callback.
+ */
+@@ -2918,22 +2924,16 @@ static int rocker_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+
+ rocker->hw.id = rocker_read64(rocker, SWITCH_ID);
+
+- err = rocker_probe_ports(rocker);
+- if (err) {
+- dev_err(&pdev->dev, "failed to probe ports\n");
+- goto err_probe_ports;
+- }
+-
+ dev_info(&pdev->dev, "Rocker switch with id %*phN\n",
+ (int)sizeof(rocker->hw.id), &rocker->hw.id);
+
+ return 0;
+
+-err_probe_ports:
+- unregister_switchdev_notifier(&rocker_switchdev_notifier);
+ err_register_switchdev_notifier:
+ unregister_fib_notifier(&rocker->fib_nb);
+ err_register_fib_notifier:
++ rocker_remove_ports(rocker);
++err_probe_ports:
+ destroy_workqueue(rocker->rocker_owq);
+ err_alloc_ordered_workqueue:
+ free_irq(rocker_msix_vector(rocker, ROCKER_MSIX_VEC_EVENT), rocker);
+@@ -2961,9 +2961,9 @@ static void rocker_remove(struct pci_dev *pdev)
+ {
+ struct rocker *rocker = pci_get_drvdata(pdev);
+
+- rocker_remove_ports(rocker);
+ unregister_switchdev_notifier(&rocker_switchdev_notifier);
+ unregister_fib_notifier(&rocker->fib_nb);
++ rocker_remove_ports(rocker);
+ rocker_write32(rocker, CONTROL, ROCKER_CONTROL_RESET);
+ destroy_workqueue(rocker->rocker_owq);
+ free_irq(rocker_msix_vector(rocker, ROCKER_MSIX_VEC_EVENT), rocker);
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 728819feab44..e7114c34fe4b 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1245,6 +1245,7 @@ static const struct usb_device_id products[] = {
+ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0125, 4)}, /* Quectel EC25, EC20 R2.0 Mini PCIe */
+ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0121, 4)}, /* Quectel EC21 Mini PCIe */
+ {QMI_FIXED_INTF(0x2c7c, 0x0296, 4)}, /* Quectel BG96 */
++ {QMI_QUIRK_SET_DTR(0x2c7c, 0x0306, 4)}, /* Quectel EP06 Mini PCIe */
+
+ /* 4. Gobi 1000 devices */
+ {QMI_GOBI1K_DEVICE(0x05c6, 0x9212)}, /* Acer Gobi Modem Device */
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index c7bdeb655646..5636c7ca8eba 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -1208,6 +1208,7 @@ static long vhost_net_reset_owner(struct vhost_net *n)
+ }
+ vhost_net_stop(n, &tx_sock, &rx_sock);
+ vhost_net_flush(n);
++ vhost_dev_stop(&n->dev);
+ vhost_dev_reset_owner(&n->dev, umem);
+ vhost_net_vq_reset(n);
+ done:
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index becf86aa4ac6..d6ec5a5a6782 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -280,7 +280,6 @@ struct tcf_block {
+ struct net *net;
+ struct Qdisc *q;
+ struct list_head cb_list;
+- struct work_struct work;
+ };
+
+ static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index ac2ffd5e02b9..0a78ce57872d 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -5828,6 +5828,20 @@ void mem_cgroup_sk_alloc(struct sock *sk)
+ if (!mem_cgroup_sockets_enabled)
+ return;
+
++ /*
++ * Socket cloning can throw us here with sk_memcg already
++ * filled. It won't however, necessarily happen from
++ * process context. So the test for root memcg given
++ * the current task's memcg won't help us in this case.
++ *
++ * Respecting the original socket's memcg is a better
++ * decision in this case.
++ */
++ if (sk->sk_memcg) {
++ css_get(&sk->sk_memcg->css);
++ return;
++ }
++
+ rcu_read_lock();
+ memcg = mem_cgroup_from_task(current);
+ if (memcg == root_mem_cgroup)
+diff --git a/net/core/sock.c b/net/core/sock.c
+index c0b5b2f17412..7571dabfc4cf 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1675,16 +1675,13 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority)
+ newsk->sk_dst_pending_confirm = 0;
+ newsk->sk_wmem_queued = 0;
+ newsk->sk_forward_alloc = 0;
+-
+- /* sk->sk_memcg will be populated at accept() time */
+- newsk->sk_memcg = NULL;
+-
+ atomic_set(&newsk->sk_drops, 0);
+ newsk->sk_send_head = NULL;
+ newsk->sk_userlocks = sk->sk_userlocks & ~SOCK_BINDPORT_LOCK;
+ atomic_set(&newsk->sk_zckey, 0);
+
+ sock_reset_flag(newsk, SOCK_DONE);
++ mem_cgroup_sk_alloc(newsk);
+ cgroup_sk_alloc(&newsk->sk_cgrp_data);
+
+ rcu_read_lock();
+diff --git a/net/core/sock_reuseport.c b/net/core/sock_reuseport.c
+index 5eeb1d20cc38..676092d7bd81 100644
+--- a/net/core/sock_reuseport.c
++++ b/net/core/sock_reuseport.c
+@@ -94,6 +94,16 @@ static struct sock_reuseport *reuseport_grow(struct sock_reuseport *reuse)
+ return more_reuse;
+ }
+
++static void reuseport_free_rcu(struct rcu_head *head)
++{
++ struct sock_reuseport *reuse;
++
++ reuse = container_of(head, struct sock_reuseport, rcu);
++ if (reuse->prog)
++ bpf_prog_destroy(reuse->prog);
++ kfree(reuse);
++}
++
+ /**
+ * reuseport_add_sock - Add a socket to the reuseport group of another.
+ * @sk: New socket to add to the group.
+@@ -102,7 +112,7 @@ static struct sock_reuseport *reuseport_grow(struct sock_reuseport *reuse)
+ */
+ int reuseport_add_sock(struct sock *sk, struct sock *sk2)
+ {
+- struct sock_reuseport *reuse;
++ struct sock_reuseport *old_reuse, *reuse;
+
+ if (!rcu_access_pointer(sk2->sk_reuseport_cb)) {
+ int err = reuseport_alloc(sk2);
+@@ -113,10 +123,13 @@ int reuseport_add_sock(struct sock *sk, struct sock *sk2)
+
+ spin_lock_bh(&reuseport_lock);
+ reuse = rcu_dereference_protected(sk2->sk_reuseport_cb,
+- lockdep_is_held(&reuseport_lock)),
+- WARN_ONCE(rcu_dereference_protected(sk->sk_reuseport_cb,
+- lockdep_is_held(&reuseport_lock)),
+- "socket already in reuseport group");
++ lockdep_is_held(&reuseport_lock));
++ old_reuse = rcu_dereference_protected(sk->sk_reuseport_cb,
++ lockdep_is_held(&reuseport_lock));
++ if (old_reuse && old_reuse->num_socks != 1) {
++ spin_unlock_bh(&reuseport_lock);
++ return -EBUSY;
++ }
+
+ if (reuse->num_socks == reuse->max_socks) {
+ reuse = reuseport_grow(reuse);
+@@ -134,19 +147,11 @@ int reuseport_add_sock(struct sock *sk, struct sock *sk2)
+
+ spin_unlock_bh(&reuseport_lock);
+
++ if (old_reuse)
++ call_rcu(&old_reuse->rcu, reuseport_free_rcu);
+ return 0;
+ }
+
+-static void reuseport_free_rcu(struct rcu_head *head)
+-{
+- struct sock_reuseport *reuse;
+-
+- reuse = container_of(head, struct sock_reuseport, rcu);
+- if (reuse->prog)
+- bpf_prog_destroy(reuse->prog);
+- kfree(reuse);
+-}
+-
+ void reuseport_detach_sock(struct sock *sk)
+ {
+ struct sock_reuseport *reuse;
+diff --git a/net/ipv4/igmp.c b/net/ipv4/igmp.c
+index 2d49717a7421..f0b1fc35dde1 100644
+--- a/net/ipv4/igmp.c
++++ b/net/ipv4/igmp.c
+@@ -386,7 +386,11 @@ static struct sk_buff *igmpv3_newpack(struct net_device *dev, unsigned int mtu)
+ pip->frag_off = htons(IP_DF);
+ pip->ttl = 1;
+ pip->daddr = fl4.daddr;
++
++ rcu_read_lock();
+ pip->saddr = igmpv3_get_srcaddr(dev, &fl4);
++ rcu_read_unlock();
++
+ pip->protocol = IPPROTO_IGMP;
+ pip->tot_len = 0; /* filled in later */
+ ip_select_ident(net, skb, NULL);
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 4ca46dc08e63..3668c4182655 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -475,7 +475,6 @@ struct sock *inet_csk_accept(struct sock *sk, int flags, int *err, bool kern)
+ }
+ spin_unlock_bh(&queue->fastopenq.lock);
+ }
+- mem_cgroup_sk_alloc(newsk);
+ out:
+ release_sock(sk);
+ if (req)
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 8e053ad7cae2..c821f5d68720 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2434,6 +2434,12 @@ int tcp_disconnect(struct sock *sk, int flags)
+
+ WARN_ON(inet->inet_num && !icsk->icsk_bind_hash);
+
++ if (sk->sk_frag.page) {
++ put_page(sk->sk_frag.page);
++ sk->sk_frag.page = NULL;
++ sk->sk_frag.offset = 0;
++ }
++
+ sk->sk_error_report(sk);
+ return err;
+ }
+diff --git a/net/ipv4/tcp_bbr.c b/net/ipv4/tcp_bbr.c
+index 8322f26e770e..25c5a0b60cfc 100644
+--- a/net/ipv4/tcp_bbr.c
++++ b/net/ipv4/tcp_bbr.c
+@@ -481,7 +481,8 @@ static void bbr_advance_cycle_phase(struct sock *sk)
+
+ bbr->cycle_idx = (bbr->cycle_idx + 1) & (CYCLE_LEN - 1);
+ bbr->cycle_mstamp = tp->delivered_mstamp;
+- bbr->pacing_gain = bbr_pacing_gain[bbr->cycle_idx];
++ bbr->pacing_gain = bbr->lt_use_bw ? BBR_UNIT :
++ bbr_pacing_gain[bbr->cycle_idx];
+ }
+
+ /* Gain cycling: cycle pacing gain to converge to fair share of available bw. */
+@@ -490,8 +491,7 @@ static void bbr_update_cycle_phase(struct sock *sk,
+ {
+ struct bbr *bbr = inet_csk_ca(sk);
+
+- if ((bbr->mode == BBR_PROBE_BW) && !bbr->lt_use_bw &&
+- bbr_is_next_cycle_phase(sk, rs))
++ if (bbr->mode == BBR_PROBE_BW && bbr_is_next_cycle_phase(sk, rs))
+ bbr_advance_cycle_phase(sk);
+ }
+
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index f49bd7897e95..2547222589fe 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -186,7 +186,8 @@ static struct rt6_info *addrconf_get_prefix_route(const struct in6_addr *pfx,
+
+ static void addrconf_dad_start(struct inet6_ifaddr *ifp);
+ static void addrconf_dad_work(struct work_struct *w);
+-static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id);
++static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id,
++ bool send_na);
+ static void addrconf_dad_run(struct inet6_dev *idev);
+ static void addrconf_rs_timer(struct timer_list *t);
+ static void __ipv6_ifa_notify(int event, struct inet6_ifaddr *ifa);
+@@ -3833,12 +3834,17 @@ static void addrconf_dad_begin(struct inet6_ifaddr *ifp)
+ idev->cnf.accept_dad < 1) ||
+ !(ifp->flags&IFA_F_TENTATIVE) ||
+ ifp->flags & IFA_F_NODAD) {
++ bool send_na = false;
++
++ if (ifp->flags & IFA_F_TENTATIVE &&
++ !(ifp->flags & IFA_F_OPTIMISTIC))
++ send_na = true;
+ bump_id = ifp->flags & IFA_F_TENTATIVE;
+ ifp->flags &= ~(IFA_F_TENTATIVE|IFA_F_OPTIMISTIC|IFA_F_DADFAILED);
+ spin_unlock(&ifp->lock);
+ read_unlock_bh(&idev->lock);
+
+- addrconf_dad_completed(ifp, bump_id);
++ addrconf_dad_completed(ifp, bump_id, send_na);
+ return;
+ }
+
+@@ -3967,16 +3973,21 @@ static void addrconf_dad_work(struct work_struct *w)
+ }
+
+ if (ifp->dad_probes == 0) {
++ bool send_na = false;
++
+ /*
+ * DAD was successful
+ */
+
++ if (ifp->flags & IFA_F_TENTATIVE &&
++ !(ifp->flags & IFA_F_OPTIMISTIC))
++ send_na = true;
+ bump_id = ifp->flags & IFA_F_TENTATIVE;
+ ifp->flags &= ~(IFA_F_TENTATIVE|IFA_F_OPTIMISTIC|IFA_F_DADFAILED);
+ spin_unlock(&ifp->lock);
+ write_unlock_bh(&idev->lock);
+
+- addrconf_dad_completed(ifp, bump_id);
++ addrconf_dad_completed(ifp, bump_id, send_na);
+
+ goto out;
+ }
+@@ -4014,7 +4025,8 @@ static bool ipv6_lonely_lladdr(struct inet6_ifaddr *ifp)
+ return true;
+ }
+
+-static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id)
++static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id,
++ bool send_na)
+ {
+ struct net_device *dev = ifp->idev->dev;
+ struct in6_addr lladdr;
+@@ -4046,6 +4058,16 @@ static void addrconf_dad_completed(struct inet6_ifaddr *ifp, bool bump_id)
+ if (send_mld)
+ ipv6_mc_dad_complete(ifp->idev);
+
++ /* send unsolicited NA if enabled */
++ if (send_na &&
++ (ifp->idev->cnf.ndisc_notify ||
++ dev_net(dev)->ipv6.devconf_all->ndisc_notify)) {
++ ndisc_send_na(dev, &in6addr_linklocal_allnodes, &ifp->addr,
++ /*router=*/ !!ifp->idev->cnf.forwarding,
++ /*solicited=*/ false, /*override=*/ true,
++ /*inc_opt=*/ true);
++ }
++
+ if (send_rs) {
+ /*
+ * If a host as already performed a random delay
+@@ -4352,9 +4374,11 @@ static void addrconf_verify_rtnl(void)
+ spin_lock(&ifpub->lock);
+ ifpub->regen_count = 0;
+ spin_unlock(&ifpub->lock);
++ rcu_read_unlock_bh();
+ ipv6_create_tempaddr(ifpub, ifp, true);
+ in6_ifa_put(ifpub);
+ in6_ifa_put(ifp);
++ rcu_read_lock_bh();
+ goto restart;
+ }
+ } else if (time_before(ifp->tstamp + ifp->prefered_lft * HZ - regen_advance * HZ, next))
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index c9441ca45399..416917719a6f 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -284,6 +284,7 @@ int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ struct net *net = sock_net(sk);
+ __be32 v4addr = 0;
+ unsigned short snum;
++ bool saved_ipv6only;
+ int addr_type = 0;
+ int err = 0;
+
+@@ -389,19 +390,21 @@ int inet6_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ if (!(addr_type & IPV6_ADDR_MULTICAST))
+ np->saddr = addr->sin6_addr;
+
++ saved_ipv6only = sk->sk_ipv6only;
++ if (addr_type != IPV6_ADDR_ANY && addr_type != IPV6_ADDR_MAPPED)
++ sk->sk_ipv6only = 1;
++
+ /* Make sure we are allowed to bind here. */
+ if ((snum || !inet->bind_address_no_port) &&
+ sk->sk_prot->get_port(sk, snum)) {
++ sk->sk_ipv6only = saved_ipv6only;
+ inet_reset_saddr(sk);
+ err = -EADDRINUSE;
+ goto out;
+ }
+
+- if (addr_type != IPV6_ADDR_ANY) {
++ if (addr_type != IPV6_ADDR_ANY)
+ sk->sk_userlocks |= SOCK_BINDADDR_LOCK;
+- if (addr_type != IPV6_ADDR_MAPPED)
+- sk->sk_ipv6only = 1;
+- }
+ if (snum)
+ sk->sk_userlocks |= SOCK_BINDPORT_LOCK;
+ inet->inet_sport = htons(inet->inet_num);
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index a2e1a864eb46..4fc566ec7e79 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -495,6 +495,7 @@ static void *ipmr_mfc_seq_start(struct seq_file *seq, loff_t *pos)
+ return ERR_PTR(-ENOENT);
+
+ it->mrt = mrt;
++ it->cache = NULL;
+ return *pos ? ipmr_mfc_seq_idx(net, seq->private, *pos - 1)
+ : SEQ_START_TOKEN;
+ }
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index b3cea200c85e..f61a5b613b52 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -566,6 +566,11 @@ static void ndisc_send_unsol_na(struct net_device *dev)
+
+ read_lock_bh(&idev->lock);
+ list_for_each_entry(ifa, &idev->addr_list, if_list) {
++ /* skip tentative addresses until dad completes */
++ if (ifa->flags & IFA_F_TENTATIVE &&
++ !(ifa->flags & IFA_F_OPTIMISTIC))
++ continue;
++
+ ndisc_send_na(dev, &in6addr_linklocal_allnodes, &ifa->addr,
+ /*router=*/ !!idev->cnf.forwarding,
+ /*solicited=*/ false, /*override=*/ true,
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 0458b761f3c5..a560fb1d0230 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1586,12 +1586,19 @@ static void rt6_age_examine_exception(struct rt6_exception_bucket *bucket,
+ * EXPIRES exceptions - e.g. pmtu-generated ones are pruned when
+ * expired, independently from their aging, as per RFC 8201 section 4
+ */
+- if (!(rt->rt6i_flags & RTF_EXPIRES) &&
+- time_after_eq(now, rt->dst.lastuse + gc_args->timeout)) {
+- RT6_TRACE("aging clone %p\n", rt);
++ if (!(rt->rt6i_flags & RTF_EXPIRES)) {
++ if (time_after_eq(now, rt->dst.lastuse + gc_args->timeout)) {
++ RT6_TRACE("aging clone %p\n", rt);
++ rt6_remove_exception(bucket, rt6_ex);
++ return;
++ }
++ } else if (time_after(jiffies, rt->dst.expires)) {
++ RT6_TRACE("purging expired route %p\n", rt);
+ rt6_remove_exception(bucket, rt6_ex);
+ return;
+- } else if (rt->rt6i_flags & RTF_GATEWAY) {
++ }
++
++ if (rt->rt6i_flags & RTF_GATEWAY) {
+ struct neighbour *neigh;
+ __u8 neigh_flags = 0;
+
+@@ -1606,11 +1613,8 @@ static void rt6_age_examine_exception(struct rt6_exception_bucket *bucket,
+ rt6_remove_exception(bucket, rt6_ex);
+ return;
+ }
+- } else if (__rt6_check_expired(rt)) {
+- RT6_TRACE("purging expired route %p\n", rt);
+- rt6_remove_exception(bucket, rt6_ex);
+- return;
+ }
++
+ gc_args->more++;
+ }
+
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index b9d63d2246e6..e6b853f0ee4f 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -217,8 +217,12 @@ static void tcf_chain_flush(struct tcf_chain *chain)
+
+ static void tcf_chain_destroy(struct tcf_chain *chain)
+ {
++ struct tcf_block *block = chain->block;
++
+ list_del(&chain->list);
+ kfree(chain);
++ if (list_empty(&block->chain_list))
++ kfree(block);
+ }
+
+ static void tcf_chain_hold(struct tcf_chain *chain)
+@@ -329,49 +333,34 @@ int tcf_block_get(struct tcf_block **p_block,
+ }
+ EXPORT_SYMBOL(tcf_block_get);
+
+-static void tcf_block_put_final(struct work_struct *work)
+-{
+- struct tcf_block *block = container_of(work, struct tcf_block, work);
+- struct tcf_chain *chain, *tmp;
+-
+- rtnl_lock();
+-
+- /* At this point, all the chains should have refcnt == 1. */
+- list_for_each_entry_safe(chain, tmp, &block->chain_list, list)
+- tcf_chain_put(chain);
+- rtnl_unlock();
+- kfree(block);
+-}
+-
+ /* XXX: Standalone actions are not allowed to jump to any chain, and bound
+ * actions should be all removed after flushing.
+ */
+ void tcf_block_put_ext(struct tcf_block *block, struct Qdisc *q,
+ struct tcf_block_ext_info *ei)
+ {
+- struct tcf_chain *chain;
++ struct tcf_chain *chain, *tmp;
+
+ if (!block)
+ return;
+- /* Hold a refcnt for all chains, except 0, so that they don't disappear
++ /* Hold a refcnt for all chains, so that they don't disappear
+ * while we are iterating.
+ */
+ list_for_each_entry(chain, &block->chain_list, list)
+- if (chain->index)
+- tcf_chain_hold(chain);
++ tcf_chain_hold(chain);
+
+ list_for_each_entry(chain, &block->chain_list, list)
+ tcf_chain_flush(chain);
+
+ tcf_block_offload_unbind(block, q, ei);
+
+- INIT_WORK(&block->work, tcf_block_put_final);
+- /* Wait for existing RCU callbacks to cool down, make sure their works
+- * have been queued before this. We can not flush pending works here
+- * because we are holding the RTNL lock.
+- */
+- rcu_barrier();
+- tcf_queue_work(&block->work);
++ /* At this point, all the chains should have refcnt >= 1. */
++ list_for_each_entry_safe(chain, tmp, &block->chain_list, list)
++ tcf_chain_put(chain);
++
++ /* Finally, put chain 0 and allow block to be freed. */
++ chain = list_first_entry(&block->chain_list, struct tcf_chain, list);
++ tcf_chain_put(chain);
+ }
+ EXPORT_SYMBOL(tcf_block_put_ext);
+
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index 507859cdd1cb..33294b5b2c6a 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -544,6 +544,7 @@ static void u32_remove_hw_knode(struct tcf_proto *tp, u32 handle)
+ static int u32_replace_hw_knode(struct tcf_proto *tp, struct tc_u_knode *n,
+ u32 flags)
+ {
++ struct tc_u_hnode *ht = rtnl_dereference(n->ht_down);
+ struct tcf_block *block = tp->chain->block;
+ struct tc_cls_u32_offload cls_u32 = {};
+ bool skip_sw = tc_skip_sw(flags);
+@@ -563,7 +564,7 @@ static int u32_replace_hw_knode(struct tcf_proto *tp, struct tc_u_knode *n,
+ cls_u32.knode.sel = &n->sel;
+ cls_u32.knode.exts = &n->exts;
+ if (n->ht_down)
+- cls_u32.knode.link_handle = n->ht_down->handle;
++ cls_u32.knode.link_handle = ht->handle;
+
+ err = tc_setup_cb_call(block, NULL, TC_SETUP_CLSU32, &cls_u32, skip_sw);
+ if (err < 0) {
+@@ -840,8 +841,9 @@ static void u32_replace_knode(struct tcf_proto *tp, struct tc_u_common *tp_c,
+ static struct tc_u_knode *u32_init_knode(struct tcf_proto *tp,
+ struct tc_u_knode *n)
+ {
+- struct tc_u_knode *new;
++ struct tc_u_hnode *ht = rtnl_dereference(n->ht_down);
+ struct tc_u32_sel *s = &n->sel;
++ struct tc_u_knode *new;
+
+ new = kzalloc(sizeof(*n) + s->nkeys*sizeof(struct tc_u32_key),
+ GFP_KERNEL);
+@@ -859,11 +861,11 @@ static struct tc_u_knode *u32_init_knode(struct tcf_proto *tp,
+ new->fshift = n->fshift;
+ new->res = n->res;
+ new->flags = n->flags;
+- RCU_INIT_POINTER(new->ht_down, n->ht_down);
++ RCU_INIT_POINTER(new->ht_down, ht);
+
+ /* bump reference count as long as we hold pointer to structure */
+- if (new->ht_down)
+- new->ht_down->refcnt++;
++ if (ht)
++ ht->refcnt++;
+
+ #ifdef CONFIG_CLS_U32_PERF
+ /* Statistics may be incremented by readers during update
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-02-17 14:02 Alice Ferrazzi
0 siblings, 0 replies; 27+ messages in thread
From: Alice Ferrazzi @ 2018-02-17 14:02 UTC
To: gentoo-commits
commit: 22eadc0e10798d00a998ffebb02b9b4d83a7edb3
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 17 14:01:56 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Sat Feb 17 14:01:56 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=22eadc0e
Linux kernel 4.15.4
0000_README | 4 +
1003_linux-4.15.4.patch | 10956 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 10960 insertions(+)
diff --git a/0000_README b/0000_README
index 635b977..ffe8729 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch: 1002_linux-4.15.3.patch
From: http://www.kernel.org
Desc: Linux 4.15.3
+Patch: 1003_linux-4.15.4.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.4
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1003_linux-4.15.4.patch b/1003_linux-4.15.4.patch
new file mode 100644
index 0000000..657bd62
--- /dev/null
+++ b/1003_linux-4.15.4.patch
@@ -0,0 +1,10956 @@
+diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt
+index fc1c884fea10..c1d520de6dfe 100644
+--- a/Documentation/arm64/silicon-errata.txt
++++ b/Documentation/arm64/silicon-errata.txt
+@@ -72,7 +72,7 @@ stable kernels.
+ | Hisilicon | Hip0{6,7} | #161010701 | N/A |
+ | Hisilicon | Hip07 | #161600802 | HISILICON_ERRATUM_161600802 |
+ | | | | |
+-| Qualcomm Tech. | Falkor v1 | E1003 | QCOM_FALKOR_ERRATUM_1003 |
++| Qualcomm Tech. | Kryo/Falkor v1 | E1003 | QCOM_FALKOR_ERRATUM_1003 |
+ | Qualcomm Tech. | Falkor v1 | E1009 | QCOM_FALKOR_ERRATUM_1009 |
+ | Qualcomm Tech. | QDF2400 ITS | E0065 | QCOM_QDF2400_ERRATUM_0065 |
+ | Qualcomm Tech. | Falkor v{1,2} | E1041 | QCOM_FALKOR_ERRATUM_1041 |
+diff --git a/Documentation/devicetree/bindings/media/cec-gpio.txt b/Documentation/devicetree/bindings/media/cec-gpio.txt
+index 46a0bac8b3b9..12fcd55ed153 100644
+--- a/Documentation/devicetree/bindings/media/cec-gpio.txt
++++ b/Documentation/devicetree/bindings/media/cec-gpio.txt
+@@ -4,6 +4,10 @@ The HDMI CEC GPIO module supports CEC implementations where the CEC line
+ is hooked up to a pull-up GPIO line and - optionally - the HPD line is
+ hooked up to another GPIO line.
+
++Please note: the maximum voltage for the CEC line is 3.63V, for the HPD
++line it is 5.3V. So you may need some sort of level conversion circuitry
++when connecting them to a GPIO line.
++
+ Required properties:
+ - compatible: value must be "cec-gpio".
+ - cec-gpios: gpio that the CEC line is connected to. The line should be
+@@ -21,7 +25,7 @@ the following property is optional:
+
+ Example for the Raspberry Pi 3 where the CEC line is connected to
+ pin 26 aka BCM7 aka CE1 on the GPIO pin header and the HPD line is
+-connected to pin 11 aka BCM17:
++connected to pin 11 aka BCM17 (some level shifter is needed for this!):
+
+ #include <dt-bindings/gpio/gpio.h>
+
+diff --git a/Makefile b/Makefile
+index 13566ad7863a..8495e1ca052e 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+@@ -432,7 +432,8 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
+ export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
+
+ export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
+-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_KASAN CFLAGS_UBSAN
++export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE
++export CFLAGS_KASAN CFLAGS_KASAN_NOSANITIZE CFLAGS_UBSAN
+ export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
+ export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
+ export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
+diff --git a/arch/alpha/include/asm/futex.h b/arch/alpha/include/asm/futex.h
+index d2e4da93e68c..ca3322536f72 100644
+--- a/arch/alpha/include/asm/futex.h
++++ b/arch/alpha/include/asm/futex.h
+@@ -20,8 +20,8 @@
+ "3: .subsection 2\n" \
+ "4: br 1b\n" \
+ " .previous\n" \
+- EXC(1b,3b,%1,$31) \
+- EXC(2b,3b,%1,$31) \
++ EXC(1b,3b,$31,%1) \
++ EXC(2b,3b,$31,%1) \
+ : "=&r" (oldval), "=&r"(ret) \
+ : "r" (uaddr), "r"(oparg) \
+ : "memory")
+@@ -82,8 +82,8 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ "3: .subsection 2\n"
+ "4: br 1b\n"
+ " .previous\n"
+- EXC(1b,3b,%0,$31)
+- EXC(2b,3b,%0,$31)
++ EXC(1b,3b,$31,%0)
++ EXC(2b,3b,$31,%0)
+ : "+r"(ret), "=&r"(prev), "=&r"(cmp)
+ : "r"(uaddr), "r"((long)(int)oldval), "r"(newval)
+ : "memory");
+diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
+index ce3a675c0c4b..75a5c35a2067 100644
+--- a/arch/alpha/kernel/osf_sys.c
++++ b/arch/alpha/kernel/osf_sys.c
+@@ -964,8 +964,8 @@ static inline long
+ put_tv32(struct timeval32 __user *o, struct timeval *i)
+ {
+ return copy_to_user(o, &(struct timeval32){
+- .tv_sec = o->tv_sec,
+- .tv_usec = o->tv_usec},
++ .tv_sec = i->tv_sec,
++ .tv_usec = i->tv_usec},
+ sizeof(struct timeval32));
+ }
+
+diff --git a/arch/alpha/kernel/pci_impl.h b/arch/alpha/kernel/pci_impl.h
+index 2e4cb74fdc41..18043af45e2b 100644
+--- a/arch/alpha/kernel/pci_impl.h
++++ b/arch/alpha/kernel/pci_impl.h
+@@ -144,7 +144,8 @@ struct pci_iommu_arena
+ };
+
+ #if defined(CONFIG_ALPHA_SRM) && \
+- (defined(CONFIG_ALPHA_CIA) || defined(CONFIG_ALPHA_LCA))
++ (defined(CONFIG_ALPHA_CIA) || defined(CONFIG_ALPHA_LCA) || \
++ defined(CONFIG_ALPHA_AVANTI))
+ # define NEED_SRM_SAVE_RESTORE
+ #else
+ # undef NEED_SRM_SAVE_RESTORE
+diff --git a/arch/alpha/kernel/process.c b/arch/alpha/kernel/process.c
+index 74bfb1f2d68e..3a885253f486 100644
+--- a/arch/alpha/kernel/process.c
++++ b/arch/alpha/kernel/process.c
+@@ -269,12 +269,13 @@ copy_thread(unsigned long clone_flags, unsigned long usp,
+ application calling fork. */
+ if (clone_flags & CLONE_SETTLS)
+ childti->pcb.unique = regs->r20;
++ else
++ regs->r20 = 0; /* OSF/1 has some strange fork() semantics. */
+ childti->pcb.usp = usp ?: rdusp();
+ *childregs = *regs;
+ childregs->r0 = 0;
+ childregs->r19 = 0;
+ childregs->r20 = 1; /* OSF/1 has some strange fork() semantics. */
+- regs->r20 = 0;
+ stack = ((struct switch_stack *) regs) - 1;
+ *childstack = *stack;
+ childstack->r26 = (unsigned long) ret_from_fork;
+diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
+index 4bd99a7b1c41..f43bd05dede2 100644
+--- a/arch/alpha/kernel/traps.c
++++ b/arch/alpha/kernel/traps.c
+@@ -160,11 +160,16 @@ void show_stack(struct task_struct *task, unsigned long *sp)
+ for(i=0; i < kstack_depth_to_print; i++) {
+ if (((long) stack & (THREAD_SIZE-1)) == 0)
+ break;
+- if (i && ((i % 4) == 0))
+- printk("\n ");
+- printk("%016lx ", *stack++);
++ if ((i % 4) == 0) {
++ if (i)
++ pr_cont("\n");
++ printk(" ");
++ } else {
++ pr_cont(" ");
++ }
++ pr_cont("%016lx", *stack++);
+ }
+- printk("\n");
++ pr_cont("\n");
+ dik_show_trace(sp);
+ }
+
+diff --git a/arch/arm/crypto/crc32-ce-glue.c b/arch/arm/crypto/crc32-ce-glue.c
+index 1b0e0e86ee9c..96e62ec105d0 100644
+--- a/arch/arm/crypto/crc32-ce-glue.c
++++ b/arch/arm/crypto/crc32-ce-glue.c
+@@ -188,6 +188,7 @@ static struct shash_alg crc32_pmull_algs[] = { {
+ .base.cra_name = "crc32",
+ .base.cra_driver_name = "crc32-arm-ce",
+ .base.cra_priority = 200,
++ .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .base.cra_blocksize = 1,
+ .base.cra_module = THIS_MODULE,
+ }, {
+@@ -203,6 +204,7 @@ static struct shash_alg crc32_pmull_algs[] = { {
+ .base.cra_name = "crc32c",
+ .base.cra_driver_name = "crc32c-arm-ce",
+ .base.cra_priority = 200,
++ .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .base.cra_blocksize = 1,
+ .base.cra_module = THIS_MODULE,
+ } };
+diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
+index a9f7d3f47134..fdd9da1555be 100644
+--- a/arch/arm/include/asm/kvm_host.h
++++ b/arch/arm/include/asm/kvm_host.h
+@@ -301,4 +301,10 @@ int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
+ /* All host FP/SIMD state is restored on guest exit, so nothing to save: */
+ static inline void kvm_fpsimd_flush_cpu_state(void) {}
+
++static inline bool kvm_arm_harden_branch_predictor(void)
++{
++ /* No way to detect it yet, pretend it is not there. */
++ return false;
++}
++
+ #endif /* __ARM_KVM_HOST_H__ */
+diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
+index fa6f2174276b..eb46fc81a440 100644
+--- a/arch/arm/include/asm/kvm_mmu.h
++++ b/arch/arm/include/asm/kvm_mmu.h
+@@ -221,6 +221,16 @@ static inline unsigned int kvm_get_vmid_bits(void)
+ return 8;
+ }
+
++static inline void *kvm_get_hyp_vector(void)
++{
++ return kvm_ksym_ref(__kvm_hyp_vector);
++}
++
++static inline int kvm_map_vectors(void)
++{
++ return 0;
++}
++
+ #endif /* !__ASSEMBLY__ */
+
+ #endif /* __ARM_KVM_MMU_H__ */
+diff --git a/arch/arm/include/asm/kvm_psci.h b/arch/arm/include/asm/kvm_psci.h
+deleted file mode 100644
+index 6bda945d31fa..000000000000
+--- a/arch/arm/include/asm/kvm_psci.h
++++ /dev/null
+@@ -1,27 +0,0 @@
+-/*
+- * Copyright (C) 2012 - ARM Ltd
+- * Author: Marc Zyngier <marc.zyngier@arm.com>
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License version 2 as
+- * published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program. If not, see <http://www.gnu.org/licenses/>.
+- */
+-
+-#ifndef __ARM_KVM_PSCI_H__
+-#define __ARM_KVM_PSCI_H__
+-
+-#define KVM_ARM_PSCI_0_1 1
+-#define KVM_ARM_PSCI_0_2 2
+-
+-int kvm_psci_version(struct kvm_vcpu *vcpu);
+-int kvm_psci_call(struct kvm_vcpu *vcpu);
+-
+-#endif /* __ARM_KVM_PSCI_H__ */
+diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
+index cf8bf6bf87c4..910bd8dabb3c 100644
+--- a/arch/arm/kvm/handle_exit.c
++++ b/arch/arm/kvm/handle_exit.c
+@@ -21,7 +21,7 @@
+ #include <asm/kvm_emulate.h>
+ #include <asm/kvm_coproc.h>
+ #include <asm/kvm_mmu.h>
+-#include <asm/kvm_psci.h>
++#include <kvm/arm_psci.h>
+ #include <trace/events/kvm.h>
+
+ #include "trace.h"
+@@ -36,9 +36,9 @@ static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+ kvm_vcpu_hvc_get_imm(vcpu));
+ vcpu->stat.hvc_exit_stat++;
+
+- ret = kvm_psci_call(vcpu);
++ ret = kvm_hvc_call_handler(vcpu);
+ if (ret < 0) {
+- kvm_inject_undefined(vcpu);
++ vcpu_set_reg(vcpu, 0, ~0UL);
+ return 1;
+ }
+
+@@ -47,7 +47,16 @@ static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+
+ static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+ {
+- kvm_inject_undefined(vcpu);
++ /*
++ * "If an SMC instruction executed at Non-secure EL1 is
++ * trapped to EL2 because HCR_EL2.TSC is 1, the exception is a
++ * Trap exception, not a Secure Monitor Call exception [...]"
++ *
++ * We need to advance the PC after the trap, as it would
++ * otherwise return to the same address...
++ */
++ vcpu_set_reg(vcpu, 0, ~0UL);
++ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+ return 1;
+ }
+
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index c9a7e9e1414f..d22f64095ca2 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -522,20 +522,13 @@ config CAVIUM_ERRATUM_30115
+ config QCOM_FALKOR_ERRATUM_1003
+ bool "Falkor E1003: Incorrect translation due to ASID change"
+ default y
+- select ARM64_PAN if ARM64_SW_TTBR0_PAN
+ help
+ On Falkor v1, an incorrect ASID may be cached in the TLB when ASID
+- and BADDR are changed together in TTBRx_EL1. The workaround for this
+- issue is to use a reserved ASID in cpu_do_switch_mm() before
+- switching to the new ASID. Saying Y here selects ARM64_PAN if
+- ARM64_SW_TTBR0_PAN is selected. This is done because implementing and
+- maintaining the E1003 workaround in the software PAN emulation code
+- would be an unnecessary complication. The affected Falkor v1 CPU
+- implements ARMv8.1 hardware PAN support and using hardware PAN
+- support versus software PAN emulation is mutually exclusive at
+- runtime.
+-
+- If unsure, say Y.
++ and BADDR are changed together in TTBRx_EL1. Since we keep the ASID
++ in TTBR1_EL1, this situation only occurs in the entry trampoline and
++ then only for entries in the walk cache, since the leaf translation
++ is unchanged. Work around the erratum by invalidating the walk cache
++ entries for the trampoline before entering the kernel proper.
+
+ config QCOM_FALKOR_ERRATUM_1009
+ bool "Falkor E1009: Prematurely complete a DSB after a TLBI"
+@@ -850,6 +843,35 @@ config FORCE_MAX_ZONEORDER
+ However for 4K, we choose a higher default value, 11 as opposed to 10, giving us
+ 4M allocations matching the default size used by generic code.
+
++config UNMAP_KERNEL_AT_EL0
++ bool "Unmap kernel when running in userspace (aka \"KAISER\")" if EXPERT
++ default y
++ help
++ Speculation attacks against some high-performance processors can
++ be used to bypass MMU permission checks and leak kernel data to
++ userspace. This can be defended against by unmapping the kernel
++ when running in userspace, mapping it back in on exception entry
++ via a trampoline page in the vector table.
++
++ If unsure, say Y.
++
++config HARDEN_BRANCH_PREDICTOR
++ bool "Harden the branch predictor against aliasing attacks" if EXPERT
++ default y
++ help
++ Speculation attacks against some high-performance processors rely on
++ being able to manipulate the branch predictor for a victim context by
++ executing aliasing branches in the attacker context. Such attacks
++ can be partially mitigated against by clearing internal branch
++ predictor state and limiting the prediction logic in some situations.
++
++ This config option will take CPU-specific actions to harden the
++ branch predictor against aliasing attacks and may rely on specific
++ instruction sequences or control bits being set by the system
++ firmware.
++
++ If unsure, say Y.
++
+ menuconfig ARMV8_DEPRECATED
+ bool "Emulate deprecated/obsolete ARMv8 instructions"
+ depends on COMPAT
+diff --git a/arch/arm64/boot/dts/marvell/armada-7040-db.dts b/arch/arm64/boot/dts/marvell/armada-7040-db.dts
+index 52b5341cb270..62b83416b30c 100644
+--- a/arch/arm64/boot/dts/marvell/armada-7040-db.dts
++++ b/arch/arm64/boot/dts/marvell/armada-7040-db.dts
+@@ -61,6 +61,12 @@
+ reg = <0x0 0x0 0x0 0x80000000>;
+ };
+
++ aliases {
++ ethernet0 = &cpm_eth0;
++ ethernet1 = &cpm_eth1;
++ ethernet2 = &cpm_eth2;
++ };
++
+ cpm_reg_usb3_0_vbus: cpm-usb3-0-vbus {
+ compatible = "regulator-fixed";
+ regulator-name = "usb3h0-vbus";
+diff --git a/arch/arm64/boot/dts/marvell/armada-8040-db.dts b/arch/arm64/boot/dts/marvell/armada-8040-db.dts
+index d97b72bed662..d9fffde64c44 100644
+--- a/arch/arm64/boot/dts/marvell/armada-8040-db.dts
++++ b/arch/arm64/boot/dts/marvell/armada-8040-db.dts
+@@ -61,6 +61,13 @@
+ reg = <0x0 0x0 0x0 0x80000000>;
+ };
+
++ aliases {
++ ethernet0 = &cpm_eth0;
++ ethernet1 = &cpm_eth2;
++ ethernet2 = &cps_eth0;
++ ethernet3 = &cps_eth1;
++ };
++
+ cpm_reg_usb3_0_vbus: cpm-usb3-0-vbus {
+ compatible = "regulator-fixed";
+ regulator-name = "cpm-usb3h0-vbus";
+diff --git a/arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts b/arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts
+index b3350827ee55..945f7bd22802 100644
+--- a/arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts
++++ b/arch/arm64/boot/dts/marvell/armada-8040-mcbin.dts
+@@ -62,6 +62,12 @@
+ reg = <0x0 0x0 0x0 0x80000000>;
+ };
+
++ aliases {
++ ethernet0 = &cpm_eth0;
++ ethernet1 = &cps_eth0;
++ ethernet2 = &cps_eth1;
++ };
++
+ /* Regulator labels correspond with schematics */
+ v_3_3: regulator-3-3v {
+ compatible = "regulator-fixed";
+diff --git a/arch/arm64/crypto/crc32-ce-glue.c b/arch/arm64/crypto/crc32-ce-glue.c
+index 624f4137918c..34b4e3d46aab 100644
+--- a/arch/arm64/crypto/crc32-ce-glue.c
++++ b/arch/arm64/crypto/crc32-ce-glue.c
+@@ -185,6 +185,7 @@ static struct shash_alg crc32_pmull_algs[] = { {
+ .base.cra_name = "crc32",
+ .base.cra_driver_name = "crc32-arm64-ce",
+ .base.cra_priority = 200,
++ .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .base.cra_blocksize = 1,
+ .base.cra_module = THIS_MODULE,
+ }, {
+@@ -200,6 +201,7 @@ static struct shash_alg crc32_pmull_algs[] = { {
+ .base.cra_name = "crc32c",
+ .base.cra_driver_name = "crc32c-arm64-ce",
+ .base.cra_priority = 200,
++ .base.cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .base.cra_blocksize = 1,
+ .base.cra_module = THIS_MODULE,
+ } };
+diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
+index b3da6c886835..dd49c3567f20 100644
+--- a/arch/arm64/include/asm/asm-uaccess.h
++++ b/arch/arm64/include/asm/asm-uaccess.h
+@@ -4,6 +4,7 @@
+
+ #include <asm/alternative.h>
+ #include <asm/kernel-pgtable.h>
++#include <asm/mmu.h>
+ #include <asm/sysreg.h>
+ #include <asm/assembler.h>
+
+@@ -13,51 +14,62 @@
+ #ifdef CONFIG_ARM64_SW_TTBR0_PAN
+ .macro __uaccess_ttbr0_disable, tmp1
+ mrs \tmp1, ttbr1_el1 // swapper_pg_dir
++ bic \tmp1, \tmp1, #TTBR_ASID_MASK
+ add \tmp1, \tmp1, #SWAPPER_DIR_SIZE // reserved_ttbr0 at the end of swapper_pg_dir
+ msr ttbr0_el1, \tmp1 // set reserved TTBR0_EL1
+ isb
++ sub \tmp1, \tmp1, #SWAPPER_DIR_SIZE
++ msr ttbr1_el1, \tmp1 // set reserved ASID
++ isb
+ .endm
+
+- .macro __uaccess_ttbr0_enable, tmp1
++ .macro __uaccess_ttbr0_enable, tmp1, tmp2
+ get_thread_info \tmp1
+ ldr \tmp1, [\tmp1, #TSK_TI_TTBR0] // load saved TTBR0_EL1
++ mrs \tmp2, ttbr1_el1
++ extr \tmp2, \tmp2, \tmp1, #48
++ ror \tmp2, \tmp2, #16
++ msr ttbr1_el1, \tmp2 // set the active ASID
++ isb
+ msr ttbr0_el1, \tmp1 // set the non-PAN TTBR0_EL1
+ isb
+ .endm
+
+- .macro uaccess_ttbr0_disable, tmp1
++ .macro uaccess_ttbr0_disable, tmp1, tmp2
+ alternative_if_not ARM64_HAS_PAN
++ save_and_disable_irq \tmp2 // avoid preemption
+ __uaccess_ttbr0_disable \tmp1
++ restore_irq \tmp2
+ alternative_else_nop_endif
+ .endm
+
+- .macro uaccess_ttbr0_enable, tmp1, tmp2
++ .macro uaccess_ttbr0_enable, tmp1, tmp2, tmp3
+ alternative_if_not ARM64_HAS_PAN
+- save_and_disable_irq \tmp2 // avoid preemption
+- __uaccess_ttbr0_enable \tmp1
+- restore_irq \tmp2
++ save_and_disable_irq \tmp3 // avoid preemption
++ __uaccess_ttbr0_enable \tmp1, \tmp2
++ restore_irq \tmp3
+ alternative_else_nop_endif
+ .endm
+ #else
+- .macro uaccess_ttbr0_disable, tmp1
++ .macro uaccess_ttbr0_disable, tmp1, tmp2
+ .endm
+
+- .macro uaccess_ttbr0_enable, tmp1, tmp2
++ .macro uaccess_ttbr0_enable, tmp1, tmp2, tmp3
+ .endm
+ #endif
+
+ /*
+ * These macros are no-ops when UAO is present.
+ */
+- .macro uaccess_disable_not_uao, tmp1
+- uaccess_ttbr0_disable \tmp1
++ .macro uaccess_disable_not_uao, tmp1, tmp2
++ uaccess_ttbr0_disable \tmp1, \tmp2
+ alternative_if ARM64_ALT_PAN_NOT_UAO
+ SET_PSTATE_PAN(1)
+ alternative_else_nop_endif
+ .endm
+
+- .macro uaccess_enable_not_uao, tmp1, tmp2
+- uaccess_ttbr0_enable \tmp1, \tmp2
++ .macro uaccess_enable_not_uao, tmp1, tmp2, tmp3
++ uaccess_ttbr0_enable \tmp1, \tmp2, \tmp3
+ alternative_if ARM64_ALT_PAN_NOT_UAO
+ SET_PSTATE_PAN(0)
+ alternative_else_nop_endif
+diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
+index 8b168280976f..b05565dd50b6 100644
+--- a/arch/arm64/include/asm/assembler.h
++++ b/arch/arm64/include/asm/assembler.h
+@@ -26,7 +26,6 @@
+ #include <asm/asm-offsets.h>
+ #include <asm/cpufeature.h>
+ #include <asm/debug-monitors.h>
+-#include <asm/mmu_context.h>
+ #include <asm/page.h>
+ #include <asm/pgtable-hwdef.h>
+ #include <asm/ptrace.h>
+@@ -109,6 +108,24 @@
+ dmb \opt
+ .endm
+
++/*
++ * Value prediction barrier
++ */
++ .macro csdb
++ hint #20
++ .endm
++
++/*
++ * Sanitise a 64-bit bounded index wrt speculation, returning zero if out
++ * of bounds.
++ */
++ .macro mask_nospec64, idx, limit, tmp
++ sub \tmp, \idx, \limit
++ bic \tmp, \tmp, \idx
++ and \idx, \idx, \tmp, asr #63
++ csdb
++ .endm
++
+ /*
+ * NOP sequence
+ */
+@@ -477,39 +494,8 @@ alternative_endif
+ mrs \rd, sp_el0
+ .endm
+
+-/*
+- * Errata workaround prior to TTBR0_EL1 update
+- *
+- * val: TTBR value with new BADDR, preserved
+- * tmp0: temporary register, clobbered
+- * tmp1: other temporary register, clobbered
+- */
+- .macro pre_ttbr0_update_workaround, val, tmp0, tmp1
+-#ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
+-alternative_if ARM64_WORKAROUND_QCOM_FALKOR_E1003
+- mrs \tmp0, ttbr0_el1
+- mov \tmp1, #FALKOR_RESERVED_ASID
+- bfi \tmp0, \tmp1, #48, #16 // reserved ASID + old BADDR
+- msr ttbr0_el1, \tmp0
+- isb
+- bfi \tmp0, \val, #0, #48 // reserved ASID + new BADDR
+- msr ttbr0_el1, \tmp0
+- isb
+-alternative_else_nop_endif
+-#endif
+- .endm
+-
+-/*
+- * Errata workaround post TTBR0_EL1 update.
+- */
+- .macro post_ttbr0_update_workaround
+-#ifdef CONFIG_CAVIUM_ERRATUM_27456
+-alternative_if ARM64_WORKAROUND_CAVIUM_27456
+- ic iallu
+- dsb nsh
+- isb
+-alternative_else_nop_endif
+-#endif
++ .macro pte_to_phys, phys, pte
++ and \phys, \pte, #(((1 << (48 - PAGE_SHIFT)) - 1) << PAGE_SHIFT)
+ .endm
+
+ /**
+diff --git a/arch/arm64/include/asm/barrier.h b/arch/arm64/include/asm/barrier.h
+index 77651c49ef44..f11518af96a9 100644
+--- a/arch/arm64/include/asm/barrier.h
++++ b/arch/arm64/include/asm/barrier.h
+@@ -32,6 +32,7 @@
+ #define dsb(opt) asm volatile("dsb " #opt : : : "memory")
+
+ #define psb_csync() asm volatile("hint #17" : : : "memory")
++#define csdb() asm volatile("hint #20" : : : "memory")
+
+ #define mb() dsb(sy)
+ #define rmb() dsb(ld)
+@@ -40,6 +41,27 @@
+ #define dma_rmb() dmb(oshld)
+ #define dma_wmb() dmb(oshst)
+
++/*
++ * Generate a mask for array_index__nospec() that is ~0UL when 0 <= idx < sz
++ * and 0 otherwise.
++ */
++#define array_index_mask_nospec array_index_mask_nospec
++static inline unsigned long array_index_mask_nospec(unsigned long idx,
++ unsigned long sz)
++{
++ unsigned long mask;
++
++ asm volatile(
++ " cmp %1, %2\n"
++ " sbc %0, xzr, xzr\n"
++ : "=r" (mask)
++ : "r" (idx), "Ir" (sz)
++ : "cc");
++
++ csdb();
++ return mask;
++}
++
+ #define __smp_mb() dmb(ish)
+ #define __smp_rmb() dmb(ishld)
+ #define __smp_wmb() dmb(ishst)
+diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
+index 2ff7c5e8efab..7049b4802587 100644
+--- a/arch/arm64/include/asm/cpucaps.h
++++ b/arch/arm64/include/asm/cpucaps.h
+@@ -41,7 +41,10 @@
+ #define ARM64_WORKAROUND_CAVIUM_30115 20
+ #define ARM64_HAS_DCPOP 21
+ #define ARM64_SVE 22
++#define ARM64_UNMAP_KERNEL_AT_EL0 23
++#define ARM64_HARDEN_BRANCH_PREDICTOR 24
++#define ARM64_HARDEN_BP_POST_GUEST_EXIT 25
+
+-#define ARM64_NCAPS 23
++#define ARM64_NCAPS 26
+
+ #endif /* __ASM_CPUCAPS_H */
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index cbf08d7cbf30..be7bd19c87ec 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -79,28 +79,37 @@
+ #define ARM_CPU_PART_AEM_V8 0xD0F
+ #define ARM_CPU_PART_FOUNDATION 0xD00
+ #define ARM_CPU_PART_CORTEX_A57 0xD07
++#define ARM_CPU_PART_CORTEX_A72 0xD08
+ #define ARM_CPU_PART_CORTEX_A53 0xD03
+ #define ARM_CPU_PART_CORTEX_A73 0xD09
++#define ARM_CPU_PART_CORTEX_A75 0xD0A
+
+ #define APM_CPU_PART_POTENZA 0x000
+
+ #define CAVIUM_CPU_PART_THUNDERX 0x0A1
+ #define CAVIUM_CPU_PART_THUNDERX_81XX 0x0A2
+ #define CAVIUM_CPU_PART_THUNDERX_83XX 0x0A3
++#define CAVIUM_CPU_PART_THUNDERX2 0x0AF
+
+ #define BRCM_CPU_PART_VULCAN 0x516
+
+ #define QCOM_CPU_PART_FALKOR_V1 0x800
+ #define QCOM_CPU_PART_FALKOR 0xC00
++#define QCOM_CPU_PART_KRYO 0x200
+
+ #define MIDR_CORTEX_A53 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53)
+ #define MIDR_CORTEX_A57 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57)
++#define MIDR_CORTEX_A72 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A72)
+ #define MIDR_CORTEX_A73 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A73)
++#define MIDR_CORTEX_A75 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A75)
+ #define MIDR_THUNDERX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX)
+ #define MIDR_THUNDERX_81XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_81XX)
+ #define MIDR_THUNDERX_83XX MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX_83XX)
++#define MIDR_CAVIUM_THUNDERX2 MIDR_CPU_MODEL(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX2)
++#define MIDR_BRCM_VULCAN MIDR_CPU_MODEL(ARM_CPU_IMP_BRCM, BRCM_CPU_PART_VULCAN)
+ #define MIDR_QCOM_FALKOR_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR_V1)
+ #define MIDR_QCOM_FALKOR MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_FALKOR)
++#define MIDR_QCOM_KRYO MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO)
+
+ #ifndef __ASSEMBLY__
+
+diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
+index c4cd5081d78b..8389050328bb 100644
+--- a/arch/arm64/include/asm/efi.h
++++ b/arch/arm64/include/asm/efi.h
+@@ -121,19 +121,21 @@ static inline void efi_set_pgd(struct mm_struct *mm)
+ if (mm != current->active_mm) {
+ /*
+ * Update the current thread's saved ttbr0 since it is
+- * restored as part of a return from exception. Set
+- * the hardware TTBR0_EL1 using cpu_switch_mm()
+- * directly to enable potential errata workarounds.
++ * restored as part of a return from exception. Enable
++ * access to the valid TTBR0_EL1 and invoke the errata
++ * workaround directly since there is no return from
++ * exception when invoking the EFI run-time services.
+ */
+ update_saved_ttbr0(current, mm);
+- cpu_switch_mm(mm->pgd, mm);
++ uaccess_ttbr0_enable();
++ post_ttbr_update_workaround();
+ } else {
+ /*
+ * Defer the switch to the current thread's TTBR0_EL1
+ * until uaccess_enable(). Restore the current
+ * thread's saved ttbr0 corresponding to its active_mm
+ */
+- cpu_set_reserved_ttbr0();
++ uaccess_ttbr0_disable();
+ update_saved_ttbr0(current, current->active_mm);
+ }
+ }
+diff --git a/arch/arm64/include/asm/fixmap.h b/arch/arm64/include/asm/fixmap.h
+index 4052ec39e8db..ec1e6d6fa14c 100644
+--- a/arch/arm64/include/asm/fixmap.h
++++ b/arch/arm64/include/asm/fixmap.h
+@@ -58,6 +58,11 @@ enum fixed_addresses {
+ FIX_APEI_GHES_NMI,
+ #endif /* CONFIG_ACPI_APEI_GHES */
+
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++ FIX_ENTRY_TRAMP_DATA,
++ FIX_ENTRY_TRAMP_TEXT,
++#define TRAMP_VALIAS (__fix_to_virt(FIX_ENTRY_TRAMP_TEXT))
++#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
+ __end_of_permanent_fixed_addresses,
+
+ /*
+diff --git a/arch/arm64/include/asm/futex.h b/arch/arm64/include/asm/futex.h
+index 5bb2fd4674e7..07fe2479d310 100644
+--- a/arch/arm64/include/asm/futex.h
++++ b/arch/arm64/include/asm/futex.h
+@@ -48,9 +48,10 @@ do { \
+ } while (0)
+
+ static inline int
+-arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
++arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *_uaddr)
+ {
+ int oldval = 0, ret, tmp;
++ u32 __user *uaddr = __uaccess_mask_ptr(_uaddr);
+
+ pagefault_disable();
+
+@@ -88,15 +89,17 @@ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
+ }
+
+ static inline int
+-futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
++futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *_uaddr,
+ u32 oldval, u32 newval)
+ {
+ int ret = 0;
+ u32 val, tmp;
++ u32 __user *uaddr;
+
+- if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
++ if (!access_ok(VERIFY_WRITE, _uaddr, sizeof(u32)))
+ return -EFAULT;
+
++ uaddr = __uaccess_mask_ptr(_uaddr);
+ uaccess_enable();
+ asm volatile("// futex_atomic_cmpxchg_inatomic\n"
+ " prfm pstl1strm, %2\n"
+diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
+index ab4d0a926043..24961b732e65 100644
+--- a/arch/arm64/include/asm/kvm_asm.h
++++ b/arch/arm64/include/asm/kvm_asm.h
+@@ -68,6 +68,8 @@ extern u32 __kvm_get_mdcr_el2(void);
+
+ extern u32 __init_stage2_translation(void);
+
++extern void __qcom_hyp_sanitize_btac_predictors(void);
++
+ #endif
+
+ #endif /* __ARM_KVM_ASM_H__ */
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index ea6cb5b24258..20cd5b514773 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -396,4 +396,9 @@ static inline void kvm_fpsimd_flush_cpu_state(void)
+ sve_flush_cpu_state();
+ }
+
++static inline bool kvm_arm_harden_branch_predictor(void)
++{
++ return cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR);
++}
++
+ #endif /* __ARM64_KVM_HOST_H__ */
+diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
+index 672c8684d5c2..2d6d4bd9de52 100644
+--- a/arch/arm64/include/asm/kvm_mmu.h
++++ b/arch/arm64/include/asm/kvm_mmu.h
+@@ -309,5 +309,43 @@ static inline unsigned int kvm_get_vmid_bits(void)
+ return (cpuid_feature_extract_unsigned_field(reg, ID_AA64MMFR1_VMIDBITS_SHIFT) == 2) ? 16 : 8;
+ }
+
++#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
++#include <asm/mmu.h>
++
++static inline void *kvm_get_hyp_vector(void)
++{
++ struct bp_hardening_data *data = arm64_get_bp_hardening_data();
++ void *vect = kvm_ksym_ref(__kvm_hyp_vector);
++
++ if (data->fn) {
++ vect = __bp_harden_hyp_vecs_start +
++ data->hyp_vectors_slot * SZ_2K;
++
++ if (!has_vhe())
++ vect = lm_alias(vect);
++ }
++
++ return vect;
++}
++
++static inline int kvm_map_vectors(void)
++{
++ return create_hyp_mappings(kvm_ksym_ref(__bp_harden_hyp_vecs_start),
++ kvm_ksym_ref(__bp_harden_hyp_vecs_end),
++ PAGE_HYP_EXEC);
++}
++
++#else
++static inline void *kvm_get_hyp_vector(void)
++{
++ return kvm_ksym_ref(__kvm_hyp_vector);
++}
++
++static inline int kvm_map_vectors(void)
++{
++ return 0;
++}
++#endif
++
+ #endif /* __ASSEMBLY__ */
+ #endif /* __ARM64_KVM_MMU_H__ */
+diff --git a/arch/arm64/include/asm/kvm_psci.h b/arch/arm64/include/asm/kvm_psci.h
+deleted file mode 100644
+index bc39e557c56c..000000000000
+--- a/arch/arm64/include/asm/kvm_psci.h
++++ /dev/null
+@@ -1,27 +0,0 @@
+-/*
+- * Copyright (C) 2012,2013 - ARM Ltd
+- * Author: Marc Zyngier <marc.zyngier@arm.com>
+- *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License version 2 as
+- * published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program. If not, see <http://www.gnu.org/licenses/>.
+- */
+-
+-#ifndef __ARM64_KVM_PSCI_H__
+-#define __ARM64_KVM_PSCI_H__
+-
+-#define KVM_ARM_PSCI_0_1 1
+-#define KVM_ARM_PSCI_0_2 2
+-
+-int kvm_psci_version(struct kvm_vcpu *vcpu);
+-int kvm_psci_call(struct kvm_vcpu *vcpu);
+-
+-#endif /* __ARM64_KVM_PSCI_H__ */
+diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
+index 0d34bf0a89c7..6dd83d75b82a 100644
+--- a/arch/arm64/include/asm/mmu.h
++++ b/arch/arm64/include/asm/mmu.h
+@@ -17,6 +17,10 @@
+ #define __ASM_MMU_H
+
+ #define MMCF_AARCH32 0x1 /* mm context flag for AArch32 executables */
++#define USER_ASID_FLAG (UL(1) << 48)
++#define TTBR_ASID_MASK (UL(0xffff) << 48)
++
++#ifndef __ASSEMBLY__
+
+ typedef struct {
+ atomic64_t id;
+@@ -31,6 +35,49 @@ typedef struct {
+ */
+ #define ASID(mm) ((mm)->context.id.counter & 0xffff)
+
++static inline bool arm64_kernel_unmapped_at_el0(void)
++{
++ return IS_ENABLED(CONFIG_UNMAP_KERNEL_AT_EL0) &&
++ cpus_have_const_cap(ARM64_UNMAP_KERNEL_AT_EL0);
++}
++
++typedef void (*bp_hardening_cb_t)(void);
++
++struct bp_hardening_data {
++ int hyp_vectors_slot;
++ bp_hardening_cb_t fn;
++};
++
++#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
++extern char __bp_harden_hyp_vecs_start[], __bp_harden_hyp_vecs_end[];
++
++DECLARE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
++
++static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
++{
++ return this_cpu_ptr(&bp_hardening_data);
++}
++
++static inline void arm64_apply_bp_hardening(void)
++{
++ struct bp_hardening_data *d;
++
++ if (!cpus_have_const_cap(ARM64_HARDEN_BRANCH_PREDICTOR))
++ return;
++
++ d = arm64_get_bp_hardening_data();
++ if (d->fn)
++ d->fn();
++}
++#else
++static inline struct bp_hardening_data *arm64_get_bp_hardening_data(void)
++{
++ return NULL;
++}
++
++static inline void arm64_apply_bp_hardening(void) { }
++#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */
++
+ extern void paging_init(void);
+ extern void bootmem_init(void);
+ extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
+@@ -41,4 +88,5 @@ extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
+ extern void *fixmap_remap_fdt(phys_addr_t dt_phys);
+ extern void mark_linear_text_alias_ro(void);
+
++#endif /* !__ASSEMBLY__ */
+ #endif
+diff --git a/arch/arm64/include/asm/mmu_context.h b/arch/arm64/include/asm/mmu_context.h
+index 9d155fa9a507..779d7a2ec5ec 100644
+--- a/arch/arm64/include/asm/mmu_context.h
++++ b/arch/arm64/include/asm/mmu_context.h
+@@ -19,8 +19,6 @@
+ #ifndef __ASM_MMU_CONTEXT_H
+ #define __ASM_MMU_CONTEXT_H
+
+-#define FALKOR_RESERVED_ASID 1
+-
+ #ifndef __ASSEMBLY__
+
+ #include <linux/compiler.h>
+@@ -57,6 +55,13 @@ static inline void cpu_set_reserved_ttbr0(void)
+ isb();
+ }
+
++static inline void cpu_switch_mm(pgd_t *pgd, struct mm_struct *mm)
++{
++ BUG_ON(pgd == swapper_pg_dir);
++ cpu_set_reserved_ttbr0();
++ cpu_do_switch_mm(virt_to_phys(pgd),mm);
++}
++
+ /*
+ * TCR.T0SZ value to use when the ID map is active. Usually equals
+ * TCR_T0SZ(VA_BITS), unless system RAM is positioned very high in
+@@ -170,7 +175,7 @@ static inline void update_saved_ttbr0(struct task_struct *tsk,
+ else
+ ttbr = virt_to_phys(mm->pgd) | ASID(mm) << 48;
+
+- task_thread_info(tsk)->ttbr0 = ttbr;
++ WRITE_ONCE(task_thread_info(tsk)->ttbr0, ttbr);
+ }
+ #else
+ static inline void update_saved_ttbr0(struct task_struct *tsk,
+@@ -225,6 +230,7 @@ switch_mm(struct mm_struct *prev, struct mm_struct *next,
+ #define activate_mm(prev,next) switch_mm(prev, next, current)
+
+ void verify_cpu_asid_bits(void);
++void post_ttbr_update_workaround(void);
+
+ #endif /* !__ASSEMBLY__ */
+
+diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
+index eb0c2bd90de9..8df4cb6ac6f7 100644
+--- a/arch/arm64/include/asm/pgtable-hwdef.h
++++ b/arch/arm64/include/asm/pgtable-hwdef.h
+@@ -272,6 +272,7 @@
+ #define TCR_TG1_4K (UL(2) << TCR_TG1_SHIFT)
+ #define TCR_TG1_64K (UL(3) << TCR_TG1_SHIFT)
+
++#define TCR_A1 (UL(1) << 22)
+ #define TCR_ASID16 (UL(1) << 36)
+ #define TCR_TBI0 (UL(1) << 37)
+ #define TCR_HA (UL(1) << 39)
+diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
+index 0a5635fb0ef9..2db84df5eb42 100644
+--- a/arch/arm64/include/asm/pgtable-prot.h
++++ b/arch/arm64/include/asm/pgtable-prot.h
+@@ -34,8 +34,14 @@
+
+ #include <asm/pgtable-types.h>
+
+-#define PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
+-#define PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
++#define _PROT_DEFAULT (PTE_TYPE_PAGE | PTE_AF | PTE_SHARED)
++#define _PROT_SECT_DEFAULT (PMD_TYPE_SECT | PMD_SECT_AF | PMD_SECT_S)
++
++#define PTE_MAYBE_NG (arm64_kernel_unmapped_at_el0() ? PTE_NG : 0)
++#define PMD_MAYBE_NG (arm64_kernel_unmapped_at_el0() ? PMD_SECT_NG : 0)
++
++#define PROT_DEFAULT (_PROT_DEFAULT | PTE_MAYBE_NG)
++#define PROT_SECT_DEFAULT (_PROT_SECT_DEFAULT | PMD_MAYBE_NG)
+
+ #define PROT_DEVICE_nGnRnE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRnE))
+ #define PROT_DEVICE_nGnRE (PROT_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_ATTRINDX(MT_DEVICE_nGnRE))
+@@ -47,23 +53,24 @@
+ #define PROT_SECT_NORMAL (PROT_SECT_DEFAULT | PMD_SECT_PXN | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
+ #define PROT_SECT_NORMAL_EXEC (PROT_SECT_DEFAULT | PMD_SECT_UXN | PMD_ATTRINDX(MT_NORMAL))
+
+-#define _PAGE_DEFAULT (PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
++#define _PAGE_DEFAULT (_PROT_DEFAULT | PTE_ATTRINDX(MT_NORMAL))
++#define _HYP_PAGE_DEFAULT _PAGE_DEFAULT
+
+-#define PAGE_KERNEL __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_WRITE)
+-#define PAGE_KERNEL_RO __pgprot(_PAGE_DEFAULT | PTE_PXN | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
+-#define PAGE_KERNEL_ROX __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_RDONLY)
+-#define PAGE_KERNEL_EXEC __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE)
+-#define PAGE_KERNEL_EXEC_CONT __pgprot(_PAGE_DEFAULT | PTE_UXN | PTE_DIRTY | PTE_WRITE | PTE_CONT)
++#define PAGE_KERNEL __pgprot(PROT_NORMAL)
++#define PAGE_KERNEL_RO __pgprot((PROT_NORMAL & ~PTE_WRITE) | PTE_RDONLY)
++#define PAGE_KERNEL_ROX __pgprot((PROT_NORMAL & ~(PTE_WRITE | PTE_PXN)) | PTE_RDONLY)
++#define PAGE_KERNEL_EXEC __pgprot(PROT_NORMAL & ~PTE_PXN)
++#define PAGE_KERNEL_EXEC_CONT __pgprot((PROT_NORMAL & ~PTE_PXN) | PTE_CONT)
+
+-#define PAGE_HYP __pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
+-#define PAGE_HYP_EXEC __pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
+-#define PAGE_HYP_RO __pgprot(_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
++#define PAGE_HYP __pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_HYP_XN)
++#define PAGE_HYP_EXEC __pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY)
++#define PAGE_HYP_RO __pgprot(_HYP_PAGE_DEFAULT | PTE_HYP | PTE_RDONLY | PTE_HYP_XN)
+ #define PAGE_HYP_DEVICE __pgprot(PROT_DEVICE_nGnRE | PTE_HYP)
+
+-#define PAGE_S2 __pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
+-#define PAGE_S2_DEVICE __pgprot(PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_UXN)
++#define PAGE_S2 __pgprot(_PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_NORMAL) | PTE_S2_RDONLY)
++#define PAGE_S2_DEVICE __pgprot(_PROT_DEFAULT | PTE_S2_MEMATTR(MT_S2_DEVICE_nGnRE) | PTE_S2_RDONLY | PTE_UXN)
+
+-#define PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_PXN | PTE_UXN)
++#define PAGE_NONE __pgprot(((_PAGE_DEFAULT) & ~PTE_VALID) | PTE_PROT_NONE | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
+ #define PAGE_SHARED __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_UXN | PTE_WRITE)
+ #define PAGE_SHARED_EXEC __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_NG | PTE_PXN | PTE_WRITE)
+ #define PAGE_READONLY __pgprot(_PAGE_DEFAULT | PTE_USER | PTE_RDONLY | PTE_NG | PTE_PXN | PTE_UXN)
+diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
+index bdcc7f1c9d06..e74394e7b4a6 100644
+--- a/arch/arm64/include/asm/pgtable.h
++++ b/arch/arm64/include/asm/pgtable.h
+@@ -683,6 +683,7 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
+
+ extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+ extern pgd_t idmap_pg_dir[PTRS_PER_PGD];
++extern pgd_t tramp_pg_dir[PTRS_PER_PGD];
+
+ /*
+ * Encode and decode a swap entry:
+diff --git a/arch/arm64/include/asm/proc-fns.h b/arch/arm64/include/asm/proc-fns.h
+index 14ad6e4e87d1..16cef2e8449e 100644
+--- a/arch/arm64/include/asm/proc-fns.h
++++ b/arch/arm64/include/asm/proc-fns.h
+@@ -35,12 +35,6 @@ extern u64 cpu_do_resume(phys_addr_t ptr, u64 idmap_ttbr);
+
+ #include <asm/memory.h>
+
+-#define cpu_switch_mm(pgd,mm) \
+-do { \
+- BUG_ON(pgd == swapper_pg_dir); \
+- cpu_do_switch_mm(virt_to_phys(pgd),mm); \
+-} while (0)
+-
+ #endif /* __ASSEMBLY__ */
+ #endif /* __KERNEL__ */
+ #endif /* __ASM_PROCFNS_H */
+diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
+index 023cacb946c3..f96a13556887 100644
+--- a/arch/arm64/include/asm/processor.h
++++ b/arch/arm64/include/asm/processor.h
+@@ -21,6 +21,9 @@
+
+ #define TASK_SIZE_64 (UL(1) << VA_BITS)
+
++#define KERNEL_DS UL(-1)
++#define USER_DS (TASK_SIZE_64 - 1)
++
+ #ifndef __ASSEMBLY__
+
+ /*
+diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
+index 08cc88574659..871744973ece 100644
+--- a/arch/arm64/include/asm/sysreg.h
++++ b/arch/arm64/include/asm/sysreg.h
+@@ -437,6 +437,8 @@
+ #define ID_AA64ISAR1_DPB_SHIFT 0
+
+ /* id_aa64pfr0 */
++#define ID_AA64PFR0_CSV3_SHIFT 60
++#define ID_AA64PFR0_CSV2_SHIFT 56
+ #define ID_AA64PFR0_SVE_SHIFT 32
+ #define ID_AA64PFR0_GIC_SHIFT 24
+ #define ID_AA64PFR0_ASIMD_SHIFT 20
+diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
+index af1c76981911..9e82dd79c7db 100644
+--- a/arch/arm64/include/asm/tlbflush.h
++++ b/arch/arm64/include/asm/tlbflush.h
+@@ -23,6 +23,7 @@
+
+ #include <linux/sched.h>
+ #include <asm/cputype.h>
++#include <asm/mmu.h>
+
+ /*
+ * Raw TLBI operations.
+@@ -54,6 +55,11 @@
+
+ #define __tlbi(op, ...) __TLBI_N(op, ##__VA_ARGS__, 1, 0)
+
++#define __tlbi_user(op, arg) do { \
++ if (arm64_kernel_unmapped_at_el0()) \
++ __tlbi(op, (arg) | USER_ASID_FLAG); \
++} while (0)
++
+ /*
+ * TLB Management
+ * ==============
+@@ -115,6 +121,7 @@ static inline void flush_tlb_mm(struct mm_struct *mm)
+
+ dsb(ishst);
+ __tlbi(aside1is, asid);
++ __tlbi_user(aside1is, asid);
+ dsb(ish);
+ }
+
+@@ -125,6 +132,7 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
+
+ dsb(ishst);
+ __tlbi(vale1is, addr);
++ __tlbi_user(vale1is, addr);
+ dsb(ish);
+ }
+
+@@ -151,10 +159,13 @@ static inline void __flush_tlb_range(struct vm_area_struct *vma,
+
+ dsb(ishst);
+ for (addr = start; addr < end; addr += 1 << (PAGE_SHIFT - 12)) {
+- if (last_level)
++ if (last_level) {
+ __tlbi(vale1is, addr);
+- else
++ __tlbi_user(vale1is, addr);
++ } else {
+ __tlbi(vae1is, addr);
++ __tlbi_user(vae1is, addr);
++ }
+ }
+ dsb(ish);
+ }
+@@ -194,6 +205,7 @@ static inline void __flush_tlb_pgtable(struct mm_struct *mm,
+ unsigned long addr = uaddr >> 12 | (ASID(mm) << 48);
+
+ __tlbi(vae1is, addr);
++ __tlbi_user(vae1is, addr);
+ dsb(ish);
+ }
+
+diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
+index fc0f9eb66039..fad8c1b2ca3e 100644
+--- a/arch/arm64/include/asm/uaccess.h
++++ b/arch/arm64/include/asm/uaccess.h
+@@ -35,16 +35,20 @@
+ #include <asm/compiler.h>
+ #include <asm/extable.h>
+
+-#define KERNEL_DS (-1UL)
+ #define get_ds() (KERNEL_DS)
+-
+-#define USER_DS TASK_SIZE_64
+ #define get_fs() (current_thread_info()->addr_limit)
+
+ static inline void set_fs(mm_segment_t fs)
+ {
+ current_thread_info()->addr_limit = fs;
+
++ /*
++ * Prevent a mispredicted conditional call to set_fs from forwarding
++ * the wrong address limit to access_ok under speculation.
++ */
++ dsb(nsh);
++ isb();
++
+ /* On user-mode return, check fs is correct */
+ set_thread_flag(TIF_FSCHECK);
+
+@@ -66,22 +70,32 @@ static inline void set_fs(mm_segment_t fs)
+ * Returns 1 if the range is valid, 0 otherwise.
+ *
+ * This is equivalent to the following test:
+- * (u65)addr + (u65)size <= current->addr_limit
+- *
+- * This needs 65-bit arithmetic.
++ * (u65)addr + (u65)size <= (u65)current->addr_limit + 1
+ */
+-#define __range_ok(addr, size) \
+-({ \
+- unsigned long __addr = (unsigned long)(addr); \
+- unsigned long flag, roksum; \
+- __chk_user_ptr(addr); \
+- asm("adds %1, %1, %3; ccmp %1, %4, #2, cc; cset %0, ls" \
+- : "=&r" (flag), "=&r" (roksum) \
+- : "1" (__addr), "Ir" (size), \
+- "r" (current_thread_info()->addr_limit) \
+- : "cc"); \
+- flag; \
+-})
++static inline unsigned long __range_ok(unsigned long addr, unsigned long size)
++{
++ unsigned long limit = current_thread_info()->addr_limit;
++
++ __chk_user_ptr(addr);
++ asm volatile(
++ // A + B <= C + 1 for all A,B,C, in four easy steps:
++ // 1: X = A + B; X' = X % 2^64
++ " adds %0, %0, %2\n"
++ // 2: Set C = 0 if X > 2^64, to guarantee X' > C in step 4
++ " csel %1, xzr, %1, hi\n"
++ // 3: Set X' = ~0 if X >= 2^64. For X == 2^64, this decrements X'
++ // to compensate for the carry flag being set in step 4. For
++ // X > 2^64, X' merely has to remain nonzero, which it does.
++ " csinv %0, %0, xzr, cc\n"
++ // 4: For X < 2^64, this gives us X' - C - 1 <= 0, where the -1
++ // comes from the carry in being clear. Otherwise, we are
++ // testing X' - C == 0, subject to the previous adjustments.
++ " sbcs xzr, %0, %1\n"
++ " cset %0, ls\n"
++ : "+r" (addr), "+r" (limit) : "Ir" (size) : "cc");
++
++ return addr;
++}
+
+ /*
+ * When dealing with data aborts, watchpoints, or instruction traps we may end
+@@ -90,7 +104,7 @@ static inline void set_fs(mm_segment_t fs)
+ */
+ #define untagged_addr(addr) sign_extend64(addr, 55)
+
+-#define access_ok(type, addr, size) __range_ok(addr, size)
++#define access_ok(type, addr, size) __range_ok((unsigned long)(addr), size)
+ #define user_addr_max get_fs
+
+ #define _ASM_EXTABLE(from, to) \
+@@ -105,17 +119,23 @@ static inline void set_fs(mm_segment_t fs)
+ #ifdef CONFIG_ARM64_SW_TTBR0_PAN
+ static inline void __uaccess_ttbr0_disable(void)
+ {
+- unsigned long ttbr;
++ unsigned long flags, ttbr;
+
++ local_irq_save(flags);
++ ttbr = read_sysreg(ttbr1_el1);
++ ttbr &= ~TTBR_ASID_MASK;
+ /* reserved_ttbr0 placed at the end of swapper_pg_dir */
+- ttbr = read_sysreg(ttbr1_el1) + SWAPPER_DIR_SIZE;
+- write_sysreg(ttbr, ttbr0_el1);
++ write_sysreg(ttbr + SWAPPER_DIR_SIZE, ttbr0_el1);
+ isb();
++ /* Set reserved ASID */
++ write_sysreg(ttbr, ttbr1_el1);
++ isb();
++ local_irq_restore(flags);
+ }
+
+ static inline void __uaccess_ttbr0_enable(void)
+ {
+- unsigned long flags;
++ unsigned long flags, ttbr0, ttbr1;
+
+ /*
+ * Disable interrupts to avoid preemption between reading the 'ttbr0'
+@@ -123,7 +143,17 @@ static inline void __uaccess_ttbr0_enable(void)
+ * roll-over and an update of 'ttbr0'.
+ */
+ local_irq_save(flags);
+- write_sysreg(current_thread_info()->ttbr0, ttbr0_el1);
++ ttbr0 = READ_ONCE(current_thread_info()->ttbr0);
++
++ /* Restore active ASID */
++ ttbr1 = read_sysreg(ttbr1_el1);
++ ttbr1 &= ~TTBR_ASID_MASK; /* safety measure */
++ ttbr1 |= ttbr0 & TTBR_ASID_MASK;
++ write_sysreg(ttbr1, ttbr1_el1);
++ isb();
++
++ /* Restore user page table */
++ write_sysreg(ttbr0, ttbr0_el1);
+ isb();
+ local_irq_restore(flags);
+ }
+@@ -192,6 +222,26 @@ static inline void uaccess_enable_not_uao(void)
+ __uaccess_enable(ARM64_ALT_PAN_NOT_UAO);
+ }
+
++/*
++ * Sanitise a uaccess pointer such that it becomes NULL if above the
++ * current addr_limit.
++ */
++#define uaccess_mask_ptr(ptr) (__typeof__(ptr))__uaccess_mask_ptr(ptr)
++static inline void __user *__uaccess_mask_ptr(const void __user *ptr)
++{
++ void __user *safe_ptr;
++
++ asm volatile(
++ " bics xzr, %1, %2\n"
++ " csel %0, %1, xzr, eq\n"
++ : "=&r" (safe_ptr)
++ : "r" (ptr), "r" (current_thread_info()->addr_limit)
++ : "cc");
++
++ csdb();
++ return safe_ptr;
++}
++
+ /*
+ * The "__xxx" versions of the user access functions do not verify the address
+ * space - it must have been done previously with a separate "access_ok()"
+@@ -244,28 +294,33 @@ do { \
+ (x) = (__force __typeof__(*(ptr)))__gu_val; \
+ } while (0)
+
+-#define __get_user(x, ptr) \
++#define __get_user_check(x, ptr, err) \
+ ({ \
+- int __gu_err = 0; \
+- __get_user_err((x), (ptr), __gu_err); \
+- __gu_err; \
++ __typeof__(*(ptr)) __user *__p = (ptr); \
++ might_fault(); \
++ if (access_ok(VERIFY_READ, __p, sizeof(*__p))) { \
++ __p = uaccess_mask_ptr(__p); \
++ __get_user_err((x), __p, (err)); \
++ } else { \
++ (x) = 0; (err) = -EFAULT; \
++ } \
+ })
+
+ #define __get_user_error(x, ptr, err) \
+ ({ \
+- __get_user_err((x), (ptr), (err)); \
++ __get_user_check((x), (ptr), (err)); \
+ (void)0; \
+ })
+
+-#define get_user(x, ptr) \
++#define __get_user(x, ptr) \
+ ({ \
+- __typeof__(*(ptr)) __user *__p = (ptr); \
+- might_fault(); \
+- access_ok(VERIFY_READ, __p, sizeof(*__p)) ? \
+- __get_user((x), __p) : \
+- ((x) = 0, -EFAULT); \
++ int __gu_err = 0; \
++ __get_user_check((x), (ptr), __gu_err); \
++ __gu_err; \
+ })
+
++#define get_user __get_user
++
+ #define __put_user_asm(instr, alt_instr, reg, x, addr, err, feature) \
+ asm volatile( \
+ "1:"ALTERNATIVE(instr " " reg "1, [%2]\n", \
+@@ -308,43 +363,63 @@ do { \
+ uaccess_disable_not_uao(); \
+ } while (0)
+
+-#define __put_user(x, ptr) \
++#define __put_user_check(x, ptr, err) \
+ ({ \
+- int __pu_err = 0; \
+- __put_user_err((x), (ptr), __pu_err); \
+- __pu_err; \
++ __typeof__(*(ptr)) __user *__p = (ptr); \
++ might_fault(); \
++ if (access_ok(VERIFY_WRITE, __p, sizeof(*__p))) { \
++ __p = uaccess_mask_ptr(__p); \
++ __put_user_err((x), __p, (err)); \
++ } else { \
++ (err) = -EFAULT; \
++ } \
+ })
+
+ #define __put_user_error(x, ptr, err) \
+ ({ \
+- __put_user_err((x), (ptr), (err)); \
++ __put_user_check((x), (ptr), (err)); \
+ (void)0; \
+ })
+
+-#define put_user(x, ptr) \
++#define __put_user(x, ptr) \
+ ({ \
+- __typeof__(*(ptr)) __user *__p = (ptr); \
+- might_fault(); \
+- access_ok(VERIFY_WRITE, __p, sizeof(*__p)) ? \
+- __put_user((x), __p) : \
+- -EFAULT; \
++ int __pu_err = 0; \
++ __put_user_check((x), (ptr), __pu_err); \
++ __pu_err; \
+ })
+
++#define put_user __put_user
++
+ extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
+-#define raw_copy_from_user __arch_copy_from_user
++#define raw_copy_from_user(to, from, n) \
++({ \
++ __arch_copy_from_user((to), __uaccess_mask_ptr(from), (n)); \
++})
++
+ extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
+-#define raw_copy_to_user __arch_copy_to_user
+-extern unsigned long __must_check raw_copy_in_user(void __user *to, const void __user *from, unsigned long n);
+-extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
++#define raw_copy_to_user(to, from, n) \
++({ \
++ __arch_copy_to_user(__uaccess_mask_ptr(to), (from), (n)); \
++})
++
++extern unsigned long __must_check __arch_copy_in_user(void __user *to, const void __user *from, unsigned long n);
++#define raw_copy_in_user(to, from, n) \
++({ \
++ __arch_copy_in_user(__uaccess_mask_ptr(to), \
++ __uaccess_mask_ptr(from), (n)); \
++})
++
+ #define INLINE_COPY_TO_USER
+ #define INLINE_COPY_FROM_USER
+
+-static inline unsigned long __must_check clear_user(void __user *to, unsigned long n)
++extern unsigned long __must_check __arch_clear_user(void __user *to, unsigned long n);
++static inline unsigned long __must_check __clear_user(void __user *to, unsigned long n)
+ {
+ if (access_ok(VERIFY_WRITE, to, n))
+- n = __clear_user(to, n);
++ n = __arch_clear_user(__uaccess_mask_ptr(to), n);
+ return n;
+ }
++#define clear_user __clear_user
+
+ extern long strncpy_from_user(char *dest, const char __user *src, long count);
+
+@@ -358,7 +433,7 @@ extern unsigned long __must_check __copy_user_flushcache(void *to, const void __
+ static inline int __copy_from_user_flushcache(void *dst, const void __user *src, unsigned size)
+ {
+ kasan_check_write(dst, size);
+- return __copy_user_flushcache(dst, src, size);
++ return __copy_user_flushcache(dst, __uaccess_mask_ptr(src), size);
+ }
+ #endif
+
+diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
+index 067baace74a0..0c760db04858 100644
+--- a/arch/arm64/kernel/Makefile
++++ b/arch/arm64/kernel/Makefile
+@@ -53,6 +53,10 @@ arm64-obj-$(CONFIG_ARM64_RELOC_TEST) += arm64-reloc-test.o
+ arm64-reloc-test-y := reloc_test_core.o reloc_test_syms.o
+ arm64-obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
+
++ifeq ($(CONFIG_KVM),y)
++arm64-obj-$(CONFIG_HARDEN_BRANCH_PREDICTOR) += bpi.o
++endif
++
+ obj-y += $(arm64-obj-y) vdso/ probes/
+ obj-m += $(arm64-obj-m)
+ head-y := head.o
+diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
+index 67368c7329c0..66be504edb6c 100644
+--- a/arch/arm64/kernel/arm64ksyms.c
++++ b/arch/arm64/kernel/arm64ksyms.c
+@@ -37,8 +37,8 @@ EXPORT_SYMBOL(clear_page);
+ /* user mem (segment) */
+ EXPORT_SYMBOL(__arch_copy_from_user);
+ EXPORT_SYMBOL(__arch_copy_to_user);
+-EXPORT_SYMBOL(__clear_user);
+-EXPORT_SYMBOL(raw_copy_in_user);
++EXPORT_SYMBOL(__arch_clear_user);
++EXPORT_SYMBOL(__arch_copy_in_user);
+
+ /* physical memory */
+ EXPORT_SYMBOL(memstart_addr);
+diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
+index 71bf088f1e4b..af247d10252f 100644
+--- a/arch/arm64/kernel/asm-offsets.c
++++ b/arch/arm64/kernel/asm-offsets.c
+@@ -24,6 +24,7 @@
+ #include <linux/kvm_host.h>
+ #include <linux/suspend.h>
+ #include <asm/cpufeature.h>
++#include <asm/fixmap.h>
+ #include <asm/thread_info.h>
+ #include <asm/memory.h>
+ #include <asm/smp_plat.h>
+@@ -148,11 +149,14 @@ int main(void)
+ DEFINE(ARM_SMCCC_RES_X2_OFFS, offsetof(struct arm_smccc_res, a2));
+ DEFINE(ARM_SMCCC_QUIRK_ID_OFFS, offsetof(struct arm_smccc_quirk, id));
+ DEFINE(ARM_SMCCC_QUIRK_STATE_OFFS, offsetof(struct arm_smccc_quirk, state));
+-
+ BLANK();
+ DEFINE(HIBERN_PBE_ORIG, offsetof(struct pbe, orig_address));
+ DEFINE(HIBERN_PBE_ADDR, offsetof(struct pbe, address));
+ DEFINE(HIBERN_PBE_NEXT, offsetof(struct pbe, next));
+ DEFINE(ARM64_FTR_SYSVAL, offsetof(struct arm64_ftr_reg, sys_val));
++ BLANK();
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++ DEFINE(TRAMP_VALIAS, TRAMP_VALIAS);
++#endif
+ return 0;
+ }
+diff --git a/arch/arm64/kernel/bpi.S b/arch/arm64/kernel/bpi.S
+new file mode 100644
+index 000000000000..e5de33513b5d
+--- /dev/null
++++ b/arch/arm64/kernel/bpi.S
+@@ -0,0 +1,83 @@
++/*
++ * Contains CPU specific branch predictor invalidation sequences
++ *
++ * Copyright (C) 2018 ARM Ltd.
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program. If not, see <http://www.gnu.org/licenses/>.
++ */
++
++#include <linux/linkage.h>
++#include <linux/arm-smccc.h>
++
++.macro ventry target
++ .rept 31
++ nop
++ .endr
++ b \target
++.endm
++
++.macro vectors target
++ ventry \target + 0x000
++ ventry \target + 0x080
++ ventry \target + 0x100
++ ventry \target + 0x180
++
++ ventry \target + 0x200
++ ventry \target + 0x280
++ ventry \target + 0x300
++ ventry \target + 0x380
++
++ ventry \target + 0x400
++ ventry \target + 0x480
++ ventry \target + 0x500
++ ventry \target + 0x580
++
++ ventry \target + 0x600
++ ventry \target + 0x680
++ ventry \target + 0x700
++ ventry \target + 0x780
++.endm
++
++ .align 11
++ENTRY(__bp_harden_hyp_vecs_start)
++ .rept 4
++ vectors __kvm_hyp_vector
++ .endr
++ENTRY(__bp_harden_hyp_vecs_end)
++
++ENTRY(__qcom_hyp_sanitize_link_stack_start)
++ stp x29, x30, [sp, #-16]!
++ .rept 16
++ bl . + 4
++ .endr
++ ldp x29, x30, [sp], #16
++ENTRY(__qcom_hyp_sanitize_link_stack_end)
++
++.macro smccc_workaround_1 inst
++ sub sp, sp, #(8 * 4)
++ stp x2, x3, [sp, #(8 * 0)]
++ stp x0, x1, [sp, #(8 * 2)]
++ mov w0, #ARM_SMCCC_ARCH_WORKAROUND_1
++ \inst #0
++ ldp x2, x3, [sp, #(8 * 0)]
++ ldp x0, x1, [sp, #(8 * 2)]
++ add sp, sp, #(8 * 4)
++.endm
++
++ENTRY(__smccc_workaround_1_smc_start)
++ smccc_workaround_1 smc
++ENTRY(__smccc_workaround_1_smc_end)
++
++ENTRY(__smccc_workaround_1_hvc_start)
++ smccc_workaround_1 hvc
++ENTRY(__smccc_workaround_1_hvc_end)
+diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S
+index 2a752cb2a0f3..8021b46c9743 100644
+--- a/arch/arm64/kernel/cpu-reset.S
++++ b/arch/arm64/kernel/cpu-reset.S
+@@ -16,7 +16,7 @@
+ #include <asm/virt.h>
+
+ .text
+-.pushsection .idmap.text, "ax"
++.pushsection .idmap.text, "awx"
+
+ /*
+ * __cpu_soft_restart(el2_switch, entry, arg0, arg1, arg2) - Helper for
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 0e27f86ee709..07823595b7f0 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -30,6 +30,20 @@ is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
+ entry->midr_range_max);
+ }
+
++static bool __maybe_unused
++is_kryo_midr(const struct arm64_cpu_capabilities *entry, int scope)
++{
++ u32 model;
++
++ WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
++
++ model = read_cpuid_id();
++ model &= MIDR_IMPLEMENTOR_MASK | (0xf00 << MIDR_PARTNUM_SHIFT) |
++ MIDR_ARCHITECTURE_MASK;
++
++ return model == entry->midr_model;
++}
++
+ static bool
+ has_mismatched_cache_line_size(const struct arm64_cpu_capabilities *entry,
+ int scope)
+@@ -46,6 +60,174 @@ static int cpu_enable_trap_ctr_access(void *__unused)
+ return 0;
+ }
+
++#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
++#include <asm/mmu_context.h>
++#include <asm/cacheflush.h>
++
++DEFINE_PER_CPU_READ_MOSTLY(struct bp_hardening_data, bp_hardening_data);
++
++#ifdef CONFIG_KVM
++extern char __qcom_hyp_sanitize_link_stack_start[];
++extern char __qcom_hyp_sanitize_link_stack_end[];
++extern char __smccc_workaround_1_smc_start[];
++extern char __smccc_workaround_1_smc_end[];
++extern char __smccc_workaround_1_hvc_start[];
++extern char __smccc_workaround_1_hvc_end[];
++
++static void __copy_hyp_vect_bpi(int slot, const char *hyp_vecs_start,
++ const char *hyp_vecs_end)
++{
++ void *dst = lm_alias(__bp_harden_hyp_vecs_start + slot * SZ_2K);
++ int i;
++
++ for (i = 0; i < SZ_2K; i += 0x80)
++ memcpy(dst + i, hyp_vecs_start, hyp_vecs_end - hyp_vecs_start);
++
++ flush_icache_range((uintptr_t)dst, (uintptr_t)dst + SZ_2K);
++}
++
++static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
++ const char *hyp_vecs_start,
++ const char *hyp_vecs_end)
++{
++ static int last_slot = -1;
++ static DEFINE_SPINLOCK(bp_lock);
++ int cpu, slot = -1;
++
++ spin_lock(&bp_lock);
++ for_each_possible_cpu(cpu) {
++ if (per_cpu(bp_hardening_data.fn, cpu) == fn) {
++ slot = per_cpu(bp_hardening_data.hyp_vectors_slot, cpu);
++ break;
++ }
++ }
++
++ if (slot == -1) {
++ last_slot++;
++ BUG_ON(((__bp_harden_hyp_vecs_end - __bp_harden_hyp_vecs_start)
++ / SZ_2K) <= last_slot);
++ slot = last_slot;
++ __copy_hyp_vect_bpi(slot, hyp_vecs_start, hyp_vecs_end);
++ }
++
++ __this_cpu_write(bp_hardening_data.hyp_vectors_slot, slot);
++ __this_cpu_write(bp_hardening_data.fn, fn);
++ spin_unlock(&bp_lock);
++}
++#else
++#define __qcom_hyp_sanitize_link_stack_start NULL
++#define __qcom_hyp_sanitize_link_stack_end NULL
++#define __smccc_workaround_1_smc_start NULL
++#define __smccc_workaround_1_smc_end NULL
++#define __smccc_workaround_1_hvc_start NULL
++#define __smccc_workaround_1_hvc_end NULL
++
++static void __install_bp_hardening_cb(bp_hardening_cb_t fn,
++ const char *hyp_vecs_start,
++ const char *hyp_vecs_end)
++{
++ __this_cpu_write(bp_hardening_data.fn, fn);
++}
++#endif /* CONFIG_KVM */
++
++static void install_bp_hardening_cb(const struct arm64_cpu_capabilities *entry,
++ bp_hardening_cb_t fn,
++ const char *hyp_vecs_start,
++ const char *hyp_vecs_end)
++{
++ u64 pfr0;
++
++ if (!entry->matches(entry, SCOPE_LOCAL_CPU))
++ return;
++
++ pfr0 = read_cpuid(ID_AA64PFR0_EL1);
++ if (cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_CSV2_SHIFT))
++ return;
++
++ __install_bp_hardening_cb(fn, hyp_vecs_start, hyp_vecs_end);
++}
++
++#include <uapi/linux/psci.h>
++#include <linux/arm-smccc.h>
++#include <linux/psci.h>
++
++static void call_smc_arch_workaround_1(void)
++{
++ arm_smccc_1_1_smc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
++}
++
++static void call_hvc_arch_workaround_1(void)
++{
++ arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_WORKAROUND_1, NULL);
++}
++
++static int enable_smccc_arch_workaround_1(void *data)
++{
++ const struct arm64_cpu_capabilities *entry = data;
++ bp_hardening_cb_t cb;
++ void *smccc_start, *smccc_end;
++ struct arm_smccc_res res;
++
++ if (!entry->matches(entry, SCOPE_LOCAL_CPU))
++ return 0;
++
++ if (psci_ops.smccc_version == SMCCC_VERSION_1_0)
++ return 0;
++
++ switch (psci_ops.conduit) {
++ case PSCI_CONDUIT_HVC:
++ arm_smccc_1_1_hvc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
++ ARM_SMCCC_ARCH_WORKAROUND_1, &res);
++ if (res.a0)
++ return 0;
++ cb = call_hvc_arch_workaround_1;
++ smccc_start = __smccc_workaround_1_hvc_start;
++ smccc_end = __smccc_workaround_1_hvc_end;
++ break;
++
++ case PSCI_CONDUIT_SMC:
++ arm_smccc_1_1_smc(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
++ ARM_SMCCC_ARCH_WORKAROUND_1, &res);
++ if (res.a0)
++ return 0;
++ cb = call_smc_arch_workaround_1;
++ smccc_start = __smccc_workaround_1_smc_start;
++ smccc_end = __smccc_workaround_1_smc_end;
++ break;
++
++ default:
++ return 0;
++ }
++
++ install_bp_hardening_cb(entry, cb, smccc_start, smccc_end);
++
++ return 0;
++}
++
++static void qcom_link_stack_sanitization(void)
++{
++ u64 tmp;
++
++ asm volatile("mov %0, x30 \n"
++ ".rept 16 \n"
++ "bl . + 4 \n"
++ ".endr \n"
++ "mov x30, %0 \n"
++ : "=&r" (tmp));
++}
++
++static int qcom_enable_link_stack_sanitization(void *data)
++{
++ const struct arm64_cpu_capabilities *entry = data;
++
++ install_bp_hardening_cb(entry, qcom_link_stack_sanitization,
++ __qcom_hyp_sanitize_link_stack_start,
++ __qcom_hyp_sanitize_link_stack_end);
++
++ return 0;
++}
++#endif /* CONFIG_HARDEN_BRANCH_PREDICTOR */
++
+ #define MIDR_RANGE(model, min, max) \
+ .def_scope = SCOPE_LOCAL_CPU, \
+ .matches = is_affected_midr_range, \
+@@ -169,6 +351,13 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ MIDR_CPU_VAR_REV(0, 0),
+ MIDR_CPU_VAR_REV(0, 0)),
+ },
++ {
++ .desc = "Qualcomm Technologies Kryo erratum 1003",
++ .capability = ARM64_WORKAROUND_QCOM_FALKOR_E1003,
++ .def_scope = SCOPE_LOCAL_CPU,
++ .midr_model = MIDR_QCOM_KRYO,
++ .matches = is_kryo_midr,
++ },
+ #endif
+ #ifdef CONFIG_QCOM_FALKOR_ERRATUM_1009
+ {
+@@ -186,6 +375,47 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ .capability = ARM64_WORKAROUND_858921,
+ MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+ },
++#endif
++#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
++ {
++ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
++ .enable = enable_smccc_arch_workaround_1,
++ },
++ {
++ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
++ .enable = enable_smccc_arch_workaround_1,
++ },
++ {
++ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
++ .enable = enable_smccc_arch_workaround_1,
++ },
++ {
++ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
++ .enable = enable_smccc_arch_workaround_1,
++ },
++ {
++ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
++ MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
++ .enable = qcom_enable_link_stack_sanitization,
++ },
++ {
++ .capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
++ MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
++ },
++ {
++ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
++ MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
++ .enable = enable_smccc_arch_workaround_1,
++ },
++ {
++ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
++ MIDR_ALL_VERSIONS(MIDR_CAVIUM_THUNDERX2),
++ .enable = enable_smccc_arch_workaround_1,
++ },
+ #endif
+ {
+ }
+@@ -200,15 +430,18 @@ void verify_local_cpu_errata_workarounds(void)
+ {
+ const struct arm64_cpu_capabilities *caps = arm64_errata;
+
+- for (; caps->matches; caps++)
+- if (!cpus_have_cap(caps->capability) &&
+- caps->matches(caps, SCOPE_LOCAL_CPU)) {
++ for (; caps->matches; caps++) {
++ if (cpus_have_cap(caps->capability)) {
++ if (caps->enable)
++ caps->enable((void *)caps);
++ } else if (caps->matches(caps, SCOPE_LOCAL_CPU)) {
+ pr_crit("CPU%d: Requires work around for %s, not detected"
+ " at boot time\n",
+ smp_processor_id(),
+ caps->desc ? : "an erratum");
+ cpu_die_early();
+ }
++ }
+ }
+
+ void update_cpu_errata_workarounds(void)
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index a73a5928f09b..46dee071bab1 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -145,6 +145,8 @@ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
+ };
+
+ static const struct arm64_ftr_bits ftr_id_aa64pfr0[] = {
++ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV3_SHIFT, 4, 0),
++ ARM64_FTR_BITS(FTR_HIDDEN, FTR_NONSTRICT, FTR_LOWER_SAFE, ID_AA64PFR0_CSV2_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE_IF_IS_ENABLED(CONFIG_ARM64_SVE),
+ FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_SVE_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64PFR0_GIC_SHIFT, 4, 0),
+@@ -846,6 +848,86 @@ static bool has_no_fpsimd(const struct arm64_cpu_capabilities *entry, int __unus
+ ID_AA64PFR0_FP_SHIFT) < 0;
+ }
+
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++static int __kpti_forced; /* 0: not forced, >0: forced on, <0: forced off */
++
++static bool unmap_kernel_at_el0(const struct arm64_cpu_capabilities *entry,
++ int __unused)
++{
++ char const *str = "command line option";
++ u64 pfr0 = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
++
++ /*
++ * For reasons that aren't entirely clear, enabling KPTI on Cavium
++ * ThunderX leads to apparent I-cache corruption of kernel text, which
++ * ends as well as you might imagine. Don't even try.
++ */
++ if (cpus_have_const_cap(ARM64_WORKAROUND_CAVIUM_27456)) {
++ str = "ARM64_WORKAROUND_CAVIUM_27456";
++ __kpti_forced = -1;
++ }
++
++ /* Forced? */
++ if (__kpti_forced) {
++ pr_info_once("kernel page table isolation forced %s by %s\n",
++ __kpti_forced > 0 ? "ON" : "OFF", str);
++ return __kpti_forced > 0;
++ }
++
++ /* Useful for KASLR robustness */
++ if (IS_ENABLED(CONFIG_RANDOMIZE_BASE))
++ return true;
++
++ /* Don't force KPTI for CPUs that are not vulnerable */
++ switch (read_cpuid_id() & MIDR_CPU_MODEL_MASK) {
++ case MIDR_CAVIUM_THUNDERX2:
++ case MIDR_BRCM_VULCAN:
++ return false;
++ }
++
++ /* Defer to CPU feature registers */
++ return !cpuid_feature_extract_unsigned_field(pfr0,
++ ID_AA64PFR0_CSV3_SHIFT);
++}
++
++static int kpti_install_ng_mappings(void *__unused)
++{
++ typedef void (kpti_remap_fn)(int, int, phys_addr_t);
++ extern kpti_remap_fn idmap_kpti_install_ng_mappings;
++ kpti_remap_fn *remap_fn;
++
++ static bool kpti_applied = false;
++ int cpu = smp_processor_id();
++
++ if (kpti_applied)
++ return 0;
++
++ remap_fn = (void *)__pa_symbol(idmap_kpti_install_ng_mappings);
++
++ cpu_install_idmap();
++ remap_fn(cpu, num_online_cpus(), __pa_symbol(swapper_pg_dir));
++ cpu_uninstall_idmap();
++
++ if (!cpu)
++ kpti_applied = true;
++
++ return 0;
++}
++
++static int __init parse_kpti(char *str)
++{
++ bool enabled;
++ int ret = strtobool(str, &enabled);
++
++ if (ret)
++ return ret;
++
++ __kpti_forced = enabled ? 1 : -1;
++ return 0;
++}
++__setup("kpti=", parse_kpti);
++#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
++
+ static const struct arm64_cpu_capabilities arm64_features[] = {
+ {
+ .desc = "GIC system register CPU interface",
+@@ -932,6 +1014,15 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
+ .def_scope = SCOPE_SYSTEM,
+ .matches = hyp_offset_low,
+ },
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++ {
++ .desc = "Kernel page table isolation (KPTI)",
++ .capability = ARM64_UNMAP_KERNEL_AT_EL0,
++ .def_scope = SCOPE_SYSTEM,
++ .matches = unmap_kernel_at_el0,
++ .enable = kpti_install_ng_mappings,
++ },
++#endif
+ {
+ /* FP/SIMD is not implemented */
+ .capability = ARM64_HAS_NO_FPSIMD,
+@@ -1071,6 +1162,25 @@ static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
+ cap_set_elf_hwcap(hwcaps);
+ }
+
++/*
++ * Check if the current CPU has a given feature capability.
++ * Should be called from non-preemptible context.
++ */
++static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
++ unsigned int cap)
++{
++ const struct arm64_cpu_capabilities *caps;
++
++ if (WARN_ON(preemptible()))
++ return false;
++
++ for (caps = cap_array; caps->matches; caps++)
++ if (caps->capability == cap &&
++ caps->matches(caps, SCOPE_LOCAL_CPU))
++ return true;
++ return false;
++}
++
+ void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
+ const char *info)
+ {
+@@ -1106,7 +1216,7 @@ void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
+ * uses an IPI, giving us a PSTATE that disappears when
+ * we return.
+ */
+- stop_machine(caps->enable, NULL, cpu_online_mask);
++ stop_machine(caps->enable, (void *)caps, cpu_online_mask);
+ }
+ }
+ }
+@@ -1134,8 +1244,9 @@ verify_local_elf_hwcaps(const struct arm64_cpu_capabilities *caps)
+ }
+
+ static void
+-verify_local_cpu_features(const struct arm64_cpu_capabilities *caps)
++verify_local_cpu_features(const struct arm64_cpu_capabilities *caps_list)
+ {
++ const struct arm64_cpu_capabilities *caps = caps_list;
+ for (; caps->matches; caps++) {
+ if (!cpus_have_cap(caps->capability))
+ continue;
+@@ -1143,13 +1254,13 @@ verify_local_cpu_features(const struct arm64_cpu_capabilities *caps)
+ * If the new CPU misses an advertised feature, we cannot proceed
+ * further, park the cpu.
+ */
+- if (!caps->matches(caps, SCOPE_LOCAL_CPU)) {
++ if (!__this_cpu_has_cap(caps_list, caps->capability)) {
+ pr_crit("CPU%d: missing feature: %s\n",
+ smp_processor_id(), caps->desc);
+ cpu_die_early();
+ }
+ if (caps->enable)
+- caps->enable(NULL);
++ caps->enable((void *)caps);
+ }
+ }
+
+@@ -1225,25 +1336,6 @@ static void __init mark_const_caps_ready(void)
+ static_branch_enable(&arm64_const_caps_ready);
+ }
+
+-/*
+- * Check if the current CPU has a given feature capability.
+- * Should be called from non-preemptible context.
+- */
+-static bool __this_cpu_has_cap(const struct arm64_cpu_capabilities *cap_array,
+- unsigned int cap)
+-{
+- const struct arm64_cpu_capabilities *caps;
+-
+- if (WARN_ON(preemptible()))
+- return false;
+-
+- for (caps = cap_array; caps->desc; caps++)
+- if (caps->capability == cap && caps->matches)
+- return caps->matches(caps, SCOPE_LOCAL_CPU);
+-
+- return false;
+-}
+-
+ extern const struct arm64_cpu_capabilities arm64_errata[];
+
+ bool this_cpu_has_cap(unsigned int cap)
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 6d14b8f29b5f..78647eda6d0d 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -28,6 +28,8 @@
+ #include <asm/errno.h>
+ #include <asm/esr.h>
+ #include <asm/irq.h>
++#include <asm/memory.h>
++#include <asm/mmu.h>
+ #include <asm/processor.h>
+ #include <asm/ptrace.h>
+ #include <asm/thread_info.h>
+@@ -69,8 +71,21 @@
+ #define BAD_FIQ 2
+ #define BAD_ERROR 3
+
+- .macro kernel_ventry label
++ .macro kernel_ventry, el, label, regsize = 64
+ .align 7
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++alternative_if ARM64_UNMAP_KERNEL_AT_EL0
++ .if \el == 0
++ .if \regsize == 64
++ mrs x30, tpidrro_el0
++ msr tpidrro_el0, xzr
++ .else
++ mov x30, xzr
++ .endif
++ .endif
++alternative_else_nop_endif
++#endif
++
+ sub sp, sp, #S_FRAME_SIZE
+ #ifdef CONFIG_VMAP_STACK
+ /*
+@@ -82,7 +97,7 @@
+ tbnz x0, #THREAD_SHIFT, 0f
+ sub x0, sp, x0 // x0'' = sp' - x0' = (sp + x0) - sp = x0
+ sub sp, sp, x0 // sp'' = sp' - x0 = (sp + x0) - x0 = sp
+- b \label
++ b el\()\el\()_\label
+
+ 0:
+ /*
+@@ -114,7 +129,12 @@
+ sub sp, sp, x0
+ mrs x0, tpidrro_el0
+ #endif
+- b \label
++ b el\()\el\()_\label
++ .endm
++
++ .macro tramp_alias, dst, sym
++ mov_q \dst, TRAMP_VALIAS
++ add \dst, \dst, #(\sym - .entry.tramp.text)
+ .endm
+
+ .macro kernel_entry, el, regsize = 64
+@@ -147,10 +167,10 @@
+ .else
+ add x21, sp, #S_FRAME_SIZE
+ get_thread_info tsk
+- /* Save the task's original addr_limit and set USER_DS (TASK_SIZE_64) */
++ /* Save the task's original addr_limit and set USER_DS */
+ ldr x20, [tsk, #TSK_TI_ADDR_LIMIT]
+ str x20, [sp, #S_ORIG_ADDR_LIMIT]
+- mov x20, #TASK_SIZE_64
++ mov x20, #USER_DS
+ str x20, [tsk, #TSK_TI_ADDR_LIMIT]
+ /* No need to reset PSTATE.UAO, hardware's already set it to 0 for us */
+ .endif /* \el == 0 */
+@@ -185,7 +205,7 @@ alternative_else_nop_endif
+
+ .if \el != 0
+ mrs x21, ttbr0_el1
+- tst x21, #0xffff << 48 // Check for the reserved ASID
++ tst x21, #TTBR_ASID_MASK // Check for the reserved ASID
+ orr x23, x23, #PSR_PAN_BIT // Set the emulated PAN in the saved SPSR
+ b.eq 1f // TTBR0 access already disabled
+ and x23, x23, #~PSR_PAN_BIT // Clear the emulated PAN in the saved SPSR
+@@ -248,7 +268,7 @@ alternative_else_nop_endif
+ tbnz x22, #22, 1f // Skip re-enabling TTBR0 access if the PSR_PAN_BIT is set
+ .endif
+
+- __uaccess_ttbr0_enable x0
++ __uaccess_ttbr0_enable x0, x1
+
+ .if \el == 0
+ /*
+@@ -257,7 +277,7 @@ alternative_else_nop_endif
+ * Cavium erratum 27456 (broadcast TLBI instructions may cause I-cache
+ * corruption).
+ */
+- post_ttbr0_update_workaround
++ bl post_ttbr_update_workaround
+ .endif
+ 1:
+ .if \el != 0
+@@ -269,18 +289,20 @@ alternative_else_nop_endif
+ .if \el == 0
+ ldr x23, [sp, #S_SP] // load return stack pointer
+ msr sp_el0, x23
++ tst x22, #PSR_MODE32_BIT // native task?
++ b.eq 3f
++
+ #ifdef CONFIG_ARM64_ERRATUM_845719
+ alternative_if ARM64_WORKAROUND_845719
+- tbz x22, #4, 1f
+ #ifdef CONFIG_PID_IN_CONTEXTIDR
+ mrs x29, contextidr_el1
+ msr contextidr_el1, x29
+ #else
+ msr contextidr_el1, xzr
+ #endif
+-1:
+ alternative_else_nop_endif
+ #endif
++3:
+ .endif
+
+ msr elr_el1, x21 // set up the return data
+@@ -302,7 +324,21 @@ alternative_else_nop_endif
+ ldp x28, x29, [sp, #16 * 14]
+ ldr lr, [sp, #S_LR]
+ add sp, sp, #S_FRAME_SIZE // restore sp
+- eret // return to kernel
++
++ .if \el == 0
++alternative_insn eret, nop, ARM64_UNMAP_KERNEL_AT_EL0
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++ bne 4f
++ msr far_el1, x30
++ tramp_alias x30, tramp_exit_native
++ br x30
++4:
++ tramp_alias x30, tramp_exit_compat
++ br x30
++#endif
++ .else
++ eret
++ .endif
+ .endm
+
+ .macro irq_stack_entry
+@@ -342,6 +378,7 @@ alternative_else_nop_endif
+ * x7 is reserved for the system call number in 32-bit mode.
+ */
+ wsc_nr .req w25 // number of system calls
++xsc_nr .req x25 // number of system calls (zero-extended)
+ wscno .req w26 // syscall number
+ xscno .req x26 // syscall number (zero-extended)
+ stbl .req x27 // syscall table pointer
+@@ -367,31 +404,31 @@ tsk .req x28 // current thread_info
+
+ .align 11
+ ENTRY(vectors)
+- kernel_ventry el1_sync_invalid // Synchronous EL1t
+- kernel_ventry el1_irq_invalid // IRQ EL1t
+- kernel_ventry el1_fiq_invalid // FIQ EL1t
+- kernel_ventry el1_error_invalid // Error EL1t
++ kernel_ventry 1, sync_invalid // Synchronous EL1t
++ kernel_ventry 1, irq_invalid // IRQ EL1t
++ kernel_ventry 1, fiq_invalid // FIQ EL1t
++ kernel_ventry 1, error_invalid // Error EL1t
+
+- kernel_ventry el1_sync // Synchronous EL1h
+- kernel_ventry el1_irq // IRQ EL1h
+- kernel_ventry el1_fiq_invalid // FIQ EL1h
+- kernel_ventry el1_error // Error EL1h
++ kernel_ventry 1, sync // Synchronous EL1h
++ kernel_ventry 1, irq // IRQ EL1h
++ kernel_ventry 1, fiq_invalid // FIQ EL1h
++ kernel_ventry 1, error // Error EL1h
+
+- kernel_ventry el0_sync // Synchronous 64-bit EL0
+- kernel_ventry el0_irq // IRQ 64-bit EL0
+- kernel_ventry el0_fiq_invalid // FIQ 64-bit EL0
+- kernel_ventry el0_error // Error 64-bit EL0
++ kernel_ventry 0, sync // Synchronous 64-bit EL0
++ kernel_ventry 0, irq // IRQ 64-bit EL0
++ kernel_ventry 0, fiq_invalid // FIQ 64-bit EL0
++ kernel_ventry 0, error // Error 64-bit EL0
+
+ #ifdef CONFIG_COMPAT
+- kernel_ventry el0_sync_compat // Synchronous 32-bit EL0
+- kernel_ventry el0_irq_compat // IRQ 32-bit EL0
+- kernel_ventry el0_fiq_invalid_compat // FIQ 32-bit EL0
+- kernel_ventry el0_error_compat // Error 32-bit EL0
++ kernel_ventry 0, sync_compat, 32 // Synchronous 32-bit EL0
++ kernel_ventry 0, irq_compat, 32 // IRQ 32-bit EL0
++ kernel_ventry 0, fiq_invalid_compat, 32 // FIQ 32-bit EL0
++ kernel_ventry 0, error_compat, 32 // Error 32-bit EL0
+ #else
+- kernel_ventry el0_sync_invalid // Synchronous 32-bit EL0
+- kernel_ventry el0_irq_invalid // IRQ 32-bit EL0
+- kernel_ventry el0_fiq_invalid // FIQ 32-bit EL0
+- kernel_ventry el0_error_invalid // Error 32-bit EL0
++ kernel_ventry 0, sync_invalid, 32 // Synchronous 32-bit EL0
++ kernel_ventry 0, irq_invalid, 32 // IRQ 32-bit EL0
++ kernel_ventry 0, fiq_invalid, 32 // FIQ 32-bit EL0
++ kernel_ventry 0, error_invalid, 32 // Error 32-bit EL0
+ #endif
+ END(vectors)
+
+@@ -685,12 +722,15 @@ el0_ia:
+ * Instruction abort handling
+ */
+ mrs x26, far_el1
+- enable_daif
++ enable_da_f
++#ifdef CONFIG_TRACE_IRQFLAGS
++ bl trace_hardirqs_off
++#endif
+ ct_user_exit
+ mov x0, x26
+ mov x1, x25
+ mov x2, sp
+- bl do_mem_abort
++ bl do_el0_ia_bp_hardening
+ b ret_to_user
+ el0_fpsimd_acc:
+ /*
+@@ -727,7 +767,10 @@ el0_sp_pc:
+ * Stack or PC alignment exception handling
+ */
+ mrs x26, far_el1
+- enable_daif
++ enable_da_f
++#ifdef CONFIG_TRACE_IRQFLAGS
++ bl trace_hardirqs_off
++#endif
+ ct_user_exit
+ mov x0, x26
+ mov x1, x25
+@@ -785,6 +828,11 @@ el0_irq_naked:
+ #endif
+
+ ct_user_exit
++#ifdef CONFIG_HARDEN_BRANCH_PREDICTOR
++ tbz x22, #55, 1f
++ bl do_el0_irq_bp_hardening
++1:
++#endif
+ irq_handler
+
+ #ifdef CONFIG_TRACE_IRQFLAGS
+@@ -896,6 +944,7 @@ el0_svc_naked: // compat entry point
+ b.ne __sys_trace
+ cmp wscno, wsc_nr // check upper syscall limit
+ b.hs ni_sys
++ mask_nospec64 xscno, xsc_nr, x19 // enforce bounds for syscall number
+ ldr x16, [stbl, xscno, lsl #3] // address in the syscall table
+ blr x16 // call sys_* routine
+ b ret_fast_syscall
+@@ -943,6 +992,117 @@ __ni_sys_trace:
+
+ .popsection // .entry.text
+
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++/*
++ * Exception vectors trampoline.
++ */
++ .pushsection ".entry.tramp.text", "ax"
++
++ .macro tramp_map_kernel, tmp
++ mrs \tmp, ttbr1_el1
++ sub \tmp, \tmp, #(SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
++ bic \tmp, \tmp, #USER_ASID_FLAG
++ msr ttbr1_el1, \tmp
++#ifdef CONFIG_QCOM_FALKOR_ERRATUM_1003
++alternative_if ARM64_WORKAROUND_QCOM_FALKOR_E1003
++ /* ASID already in \tmp[63:48] */
++ movk \tmp, #:abs_g2_nc:(TRAMP_VALIAS >> 12)
++ movk \tmp, #:abs_g1_nc:(TRAMP_VALIAS >> 12)
++ /* 2MB boundary containing the vectors, so we nobble the walk cache */
++ movk \tmp, #:abs_g0_nc:((TRAMP_VALIAS & ~(SZ_2M - 1)) >> 12)
++ isb
++ tlbi vae1, \tmp
++ dsb nsh
++alternative_else_nop_endif
++#endif /* CONFIG_QCOM_FALKOR_ERRATUM_1003 */
++ .endm
++
++ .macro tramp_unmap_kernel, tmp
++ mrs \tmp, ttbr1_el1
++ add \tmp, \tmp, #(SWAPPER_DIR_SIZE + RESERVED_TTBR0_SIZE)
++ orr \tmp, \tmp, #USER_ASID_FLAG
++ msr ttbr1_el1, \tmp
++ /*
++ * We avoid running the post_ttbr_update_workaround here because
++ * it's only needed by Cavium ThunderX, which requires KPTI to be
++ * disabled.
++ */
++ .endm
++
++ .macro tramp_ventry, regsize = 64
++ .align 7
++1:
++ .if \regsize == 64
++ msr tpidrro_el0, x30 // Restored in kernel_ventry
++ .endif
++ /*
++ * Defend against branch aliasing attacks by pushing a dummy
++ * entry onto the return stack and using a RET instruction to
++ * enter the full-fat kernel vectors.
++ */
++ bl 2f
++ b .
++2:
++ tramp_map_kernel x30
++#ifdef CONFIG_RANDOMIZE_BASE
++ adr x30, tramp_vectors + PAGE_SIZE
++alternative_insn isb, nop, ARM64_WORKAROUND_QCOM_FALKOR_E1003
++ ldr x30, [x30]
++#else
++ ldr x30, =vectors
++#endif
++ prfm plil1strm, [x30, #(1b - tramp_vectors)]
++ msr vbar_el1, x30
++ add x30, x30, #(1b - tramp_vectors)
++ isb
++ ret
++ .endm
++
++ .macro tramp_exit, regsize = 64
++ adr x30, tramp_vectors
++ msr vbar_el1, x30
++ tramp_unmap_kernel x30
++ .if \regsize == 64
++ mrs x30, far_el1
++ .endif
++ eret
++ .endm
++
++ .align 11
++ENTRY(tramp_vectors)
++ .space 0x400
++
++ tramp_ventry
++ tramp_ventry
++ tramp_ventry
++ tramp_ventry
++
++ tramp_ventry 32
++ tramp_ventry 32
++ tramp_ventry 32
++ tramp_ventry 32
++END(tramp_vectors)
++
++ENTRY(tramp_exit_native)
++ tramp_exit
++END(tramp_exit_native)
++
++ENTRY(tramp_exit_compat)
++ tramp_exit 32
++END(tramp_exit_compat)
++
++ .ltorg
++ .popsection // .entry.tramp.text
++#ifdef CONFIG_RANDOMIZE_BASE
++ .pushsection ".rodata", "a"
++ .align PAGE_SHIFT
++ .globl __entry_tramp_data_start
++__entry_tramp_data_start:
++ .quad vectors
++ .popsection // .rodata
++#endif /* CONFIG_RANDOMIZE_BASE */
++#endif /* CONFIG_UNMAP_KERNEL_AT_EL0 */
++
+ /*
+ * Special system call wrappers.
+ */
+diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
+index e3cb9fbf96b6..9b655d69c471 100644
+--- a/arch/arm64/kernel/head.S
++++ b/arch/arm64/kernel/head.S
+@@ -371,7 +371,7 @@ ENDPROC(__primary_switched)
+ * end early head section, begin head code that is also used for
+ * hotplug and needs to have the same protections as the text region
+ */
+- .section ".idmap.text","ax"
++ .section ".idmap.text","awx"
+
+ ENTRY(kimage_vaddr)
+ .quad _text - TEXT_OFFSET
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index 6b7dcf4310ac..583fd8154695 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -370,16 +370,14 @@ void tls_preserve_current_state(void)
+
+ static void tls_thread_switch(struct task_struct *next)
+ {
+- unsigned long tpidr, tpidrro;
+-
+ tls_preserve_current_state();
+
+- tpidr = *task_user_tls(next);
+- tpidrro = is_compat_thread(task_thread_info(next)) ?
+- next->thread.tp_value : 0;
++ if (is_compat_thread(task_thread_info(next)))
++ write_sysreg(next->thread.tp_value, tpidrro_el0);
++ else if (!arm64_kernel_unmapped_at_el0())
++ write_sysreg(0, tpidrro_el0);
+
+- write_sysreg(tpidr, tpidr_el0);
+- write_sysreg(tpidrro, tpidrro_el0);
++ write_sysreg(*task_user_tls(next), tpidr_el0);
+ }
+
+ /* Restore the UAO state depending on next's addr_limit */
+diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
+index 10dd16d7902d..bebec8ef9372 100644
+--- a/arch/arm64/kernel/sleep.S
++++ b/arch/arm64/kernel/sleep.S
+@@ -96,7 +96,7 @@ ENTRY(__cpu_suspend_enter)
+ ret
+ ENDPROC(__cpu_suspend_enter)
+
+- .pushsection ".idmap.text", "ax"
++ .pushsection ".idmap.text", "awx"
+ ENTRY(cpu_resume)
+ bl el2_setup // if in EL2 drop to EL1 cleanly
+ bl __cpu_setup
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 7da3e5c366a0..ddfd3c0942f7 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -57,6 +57,17 @@ jiffies = jiffies_64;
+ #define HIBERNATE_TEXT
+ #endif
+
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++#define TRAMP_TEXT \
++ . = ALIGN(PAGE_SIZE); \
++ VMLINUX_SYMBOL(__entry_tramp_text_start) = .; \
++ *(.entry.tramp.text) \
++ . = ALIGN(PAGE_SIZE); \
++ VMLINUX_SYMBOL(__entry_tramp_text_end) = .;
++#else
++#define TRAMP_TEXT
++#endif
++
+ /*
+ * The size of the PE/COFF section that covers the kernel image, which
+ * runs from stext to _edata, must be a round multiple of the PE/COFF
+@@ -113,6 +124,7 @@ SECTIONS
+ HYPERVISOR_TEXT
+ IDMAP_TEXT
+ HIBERNATE_TEXT
++ TRAMP_TEXT
+ *(.fixup)
+ *(.gnu.warning)
+ . = ALIGN(16);
+@@ -214,6 +226,11 @@ SECTIONS
+ . += RESERVED_TTBR0_SIZE;
+ #endif
+
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++ tramp_pg_dir = .;
++ . += PAGE_SIZE;
++#endif
++
+ __pecoff_data_size = ABSOLUTE(. - __initdata_begin);
+ _end = .;
+
+@@ -234,7 +251,10 @@ ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
+ ASSERT(__hibernate_exit_text_end - (__hibernate_exit_text_start & ~(SZ_4K - 1))
+ <= SZ_4K, "Hibernate exit text too big or misaligned")
+ #endif
+-
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++ASSERT((__entry_tramp_text_end - __entry_tramp_text_start) == PAGE_SIZE,
++ "Entry trampoline text too big")
++#endif
+ /*
+ * If padding is applied before .head.text, virt<->phys conversions will fail.
+ */
+diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
+index e60494f1eef9..c6c59356aa88 100644
+--- a/arch/arm64/kvm/handle_exit.c
++++ b/arch/arm64/kvm/handle_exit.c
+@@ -22,12 +22,13 @@
+ #include <linux/kvm.h>
+ #include <linux/kvm_host.h>
+
++#include <kvm/arm_psci.h>
++
+ #include <asm/esr.h>
+ #include <asm/kvm_asm.h>
+ #include <asm/kvm_coproc.h>
+ #include <asm/kvm_emulate.h>
+ #include <asm/kvm_mmu.h>
+-#include <asm/kvm_psci.h>
+ #include <asm/debug-monitors.h>
+
+ #define CREATE_TRACE_POINTS
+@@ -43,7 +44,7 @@ static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+ kvm_vcpu_hvc_get_imm(vcpu));
+ vcpu->stat.hvc_exit_stat++;
+
+- ret = kvm_psci_call(vcpu);
++ ret = kvm_hvc_call_handler(vcpu);
+ if (ret < 0) {
+ vcpu_set_reg(vcpu, 0, ~0UL);
+ return 1;
+@@ -54,7 +55,16 @@ static int handle_hvc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+
+ static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
+ {
++ /*
++ * "If an SMC instruction executed at Non-secure EL1 is
++ * trapped to EL2 because HCR_EL2.TSC is 1, the exception is a
++ * Trap exception, not a Secure Monitor Call exception [...]"
++ *
++ * We need to advance the PC after the trap, as it would
++ * otherwise return to the same address...
++ */
+ vcpu_set_reg(vcpu, 0, ~0UL);
++ kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+ return 1;
+ }
+
+diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
+index 12ee62d6d410..9c45c6af1f58 100644
+--- a/arch/arm64/kvm/hyp/entry.S
++++ b/arch/arm64/kvm/hyp/entry.S
+@@ -196,3 +196,15 @@ alternative_endif
+
+ eret
+ ENDPROC(__fpsimd_guest_restore)
++
++ENTRY(__qcom_hyp_sanitize_btac_predictors)
++ /**
++ * Call SMC64 with Silicon provider serviceID 23<<8 (0xc2001700)
++ * 0xC2000000-0xC200FFFF: assigned to SiP Service Calls
++ * b15-b0: contains SiP functionID
++ */
++ movz x0, #0x1700
++ movk x0, #0xc200, lsl #16
++ smc #0
++ ret
++ENDPROC(__qcom_hyp_sanitize_btac_predictors)
+diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
+index 5170ce1021da..f49b53331d28 100644
+--- a/arch/arm64/kvm/hyp/hyp-entry.S
++++ b/arch/arm64/kvm/hyp/hyp-entry.S
+@@ -15,6 +15,7 @@
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
++#include <linux/arm-smccc.h>
+ #include <linux/linkage.h>
+
+ #include <asm/alternative.h>
+@@ -64,10 +65,11 @@ alternative_endif
+ lsr x0, x1, #ESR_ELx_EC_SHIFT
+
+ cmp x0, #ESR_ELx_EC_HVC64
++ ccmp x0, #ESR_ELx_EC_HVC32, #4, ne
+ b.ne el1_trap
+
+- mrs x1, vttbr_el2 // If vttbr is valid, the 64bit guest
+- cbnz x1, el1_trap // called HVC
++ mrs x1, vttbr_el2 // If vttbr is valid, the guest
++ cbnz x1, el1_hvc_guest // called HVC
+
+ /* Here, we're pretty sure the host called HVC. */
+ ldp x0, x1, [sp], #16
+@@ -100,6 +102,20 @@ alternative_endif
+
+ eret
+
++el1_hvc_guest:
++ /*
++ * Fastest possible path for ARM_SMCCC_ARCH_WORKAROUND_1.
++ * The workaround has already been applied on the host,
++ * so let's quickly get back to the guest. We don't bother
++ * restoring x1, as it can be clobbered anyway.
++ */
++ ldr x1, [sp] // Guest's x0
++ eor w1, w1, #ARM_SMCCC_ARCH_WORKAROUND_1
++ cbnz w1, el1_trap
++ mov x0, x1
++ add sp, sp, #16
++ eret
++
+ el1_trap:
+ /*
+ * x0: ESR_EC
+diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
+index f7c651f3a8c0..0b5ab4d8b57d 100644
+--- a/arch/arm64/kvm/hyp/switch.c
++++ b/arch/arm64/kvm/hyp/switch.c
+@@ -17,6 +17,9 @@
+
+ #include <linux/types.h>
+ #include <linux/jump_label.h>
++#include <uapi/linux/psci.h>
++
++#include <kvm/arm_psci.h>
+
+ #include <asm/kvm_asm.h>
+ #include <asm/kvm_emulate.h>
+@@ -52,7 +55,7 @@ static void __hyp_text __activate_traps_vhe(void)
+ val &= ~(CPACR_EL1_FPEN | CPACR_EL1_ZEN);
+ write_sysreg(val, cpacr_el1);
+
+- write_sysreg(__kvm_hyp_vector, vbar_el1);
++ write_sysreg(kvm_get_hyp_vector(), vbar_el1);
+ }
+
+ static void __hyp_text __activate_traps_nvhe(void)
+@@ -393,6 +396,14 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+ /* 0 falls through to be handled out of EL2 */
+ }
+
++ if (cpus_have_const_cap(ARM64_HARDEN_BP_POST_GUEST_EXIT)) {
++ u32 midr = read_cpuid_id();
++
++ /* Apply BTAC predictors mitigation to all Falkor chips */
++ if ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1)
++ __qcom_hyp_sanitize_btac_predictors();
++ }
++
+ fp_enabled = __fpsimd_enabled();
+
+ __sysreg_save_guest_state(guest_ctxt);
+diff --git a/arch/arm64/lib/clear_user.S b/arch/arm64/lib/clear_user.S
+index e88fb99c1561..21ba0b29621b 100644
+--- a/arch/arm64/lib/clear_user.S
++++ b/arch/arm64/lib/clear_user.S
+@@ -21,7 +21,7 @@
+
+ .text
+
+-/* Prototype: int __clear_user(void *addr, size_t sz)
++/* Prototype: int __arch_clear_user(void *addr, size_t sz)
+ * Purpose : clear some user memory
+ * Params : addr - user memory address to clear
+ * : sz - number of bytes to clear
+@@ -29,8 +29,8 @@
+ *
+ * Alignment fixed up by hardware.
+ */
+-ENTRY(__clear_user)
+- uaccess_enable_not_uao x2, x3
++ENTRY(__arch_clear_user)
++ uaccess_enable_not_uao x2, x3, x4
+ mov x2, x1 // save the size for fixup return
+ subs x1, x1, #8
+ b.mi 2f
+@@ -50,9 +50,9 @@ uao_user_alternative 9f, strh, sttrh, wzr, x0, 2
+ b.mi 5f
+ uao_user_alternative 9f, strb, sttrb, wzr, x0, 0
+ 5: mov x0, #0
+- uaccess_disable_not_uao x2
++ uaccess_disable_not_uao x2, x3
+ ret
+-ENDPROC(__clear_user)
++ENDPROC(__arch_clear_user)
+
+ .section .fixup,"ax"
+ .align 2
+diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
+index 4b5d826895ff..20305d485046 100644
+--- a/arch/arm64/lib/copy_from_user.S
++++ b/arch/arm64/lib/copy_from_user.S
+@@ -64,10 +64,10 @@
+
+ end .req x5
+ ENTRY(__arch_copy_from_user)
+- uaccess_enable_not_uao x3, x4
++ uaccess_enable_not_uao x3, x4, x5
+ add end, x0, x2
+ #include "copy_template.S"
+- uaccess_disable_not_uao x3
++ uaccess_disable_not_uao x3, x4
+ mov x0, #0 // Nothing to copy
+ ret
+ ENDPROC(__arch_copy_from_user)
+diff --git a/arch/arm64/lib/copy_in_user.S b/arch/arm64/lib/copy_in_user.S
+index b24a830419ad..54b75deb1d16 100644
+--- a/arch/arm64/lib/copy_in_user.S
++++ b/arch/arm64/lib/copy_in_user.S
+@@ -64,14 +64,15 @@
+ .endm
+
+ end .req x5
+-ENTRY(raw_copy_in_user)
+- uaccess_enable_not_uao x3, x4
++
++ENTRY(__arch_copy_in_user)
++ uaccess_enable_not_uao x3, x4, x5
+ add end, x0, x2
+ #include "copy_template.S"
+- uaccess_disable_not_uao x3
++ uaccess_disable_not_uao x3, x4
+ mov x0, #0
+ ret
+-ENDPROC(raw_copy_in_user)
++ENDPROC(__arch_copy_in_user)
+
+ .section .fixup,"ax"
+ .align 2
+diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
+index 351f0766f7a6..fda6172d6b88 100644
+--- a/arch/arm64/lib/copy_to_user.S
++++ b/arch/arm64/lib/copy_to_user.S
+@@ -63,10 +63,10 @@
+
+ end .req x5
+ ENTRY(__arch_copy_to_user)
+- uaccess_enable_not_uao x3, x4
++ uaccess_enable_not_uao x3, x4, x5
+ add end, x0, x2
+ #include "copy_template.S"
+- uaccess_disable_not_uao x3
++ uaccess_disable_not_uao x3, x4
+ mov x0, #0
+ ret
+ ENDPROC(__arch_copy_to_user)
+diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
+index 7f1dbe962cf5..91464e7f77cc 100644
+--- a/arch/arm64/mm/cache.S
++++ b/arch/arm64/mm/cache.S
+@@ -49,7 +49,7 @@ ENTRY(flush_icache_range)
+ * - end - virtual end address of region
+ */
+ ENTRY(__flush_cache_user_range)
+- uaccess_ttbr0_enable x2, x3
++ uaccess_ttbr0_enable x2, x3, x4
+ dcache_line_size x2, x3
+ sub x3, x2, #1
+ bic x4, x0, x3
+@@ -72,7 +72,7 @@ USER(9f, ic ivau, x4 ) // invalidate I line PoU
+ isb
+ mov x0, #0
+ 1:
+- uaccess_ttbr0_disable x1
++ uaccess_ttbr0_disable x1, x2
+ ret
+ 9:
+ mov x0, #-EFAULT
+diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
+index 6f4017046323..b1ac80fba578 100644
+--- a/arch/arm64/mm/context.c
++++ b/arch/arm64/mm/context.c
+@@ -39,7 +39,16 @@ static cpumask_t tlb_flush_pending;
+
+ #define ASID_MASK (~GENMASK(asid_bits - 1, 0))
+ #define ASID_FIRST_VERSION (1UL << asid_bits)
+-#define NUM_USER_ASIDS ASID_FIRST_VERSION
++
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++#define NUM_USER_ASIDS (ASID_FIRST_VERSION >> 1)
++#define asid2idx(asid) (((asid) & ~ASID_MASK) >> 1)
++#define idx2asid(idx) (((idx) << 1) & ~ASID_MASK)
++#else
++#define NUM_USER_ASIDS (ASID_FIRST_VERSION)
++#define asid2idx(asid) ((asid) & ~ASID_MASK)
++#define idx2asid(idx) asid2idx(idx)
++#endif
+
+ /* Get the ASIDBits supported by the current CPU */
+ static u32 get_cpu_asid_bits(void)
+@@ -79,13 +88,6 @@ void verify_cpu_asid_bits(void)
+ }
+ }
+
+-static void set_reserved_asid_bits(void)
+-{
+- if (IS_ENABLED(CONFIG_QCOM_FALKOR_ERRATUM_1003) &&
+- cpus_have_const_cap(ARM64_WORKAROUND_QCOM_FALKOR_E1003))
+- __set_bit(FALKOR_RESERVED_ASID, asid_map);
+-}
+-
+ static void flush_context(unsigned int cpu)
+ {
+ int i;
+@@ -94,8 +96,6 @@ static void flush_context(unsigned int cpu)
+ /* Update the list of reserved ASIDs and the ASID bitmap. */
+ bitmap_clear(asid_map, 0, NUM_USER_ASIDS);
+
+- set_reserved_asid_bits();
+-
+ for_each_possible_cpu(i) {
+ asid = atomic64_xchg_relaxed(&per_cpu(active_asids, i), 0);
+ /*
+@@ -107,7 +107,7 @@ static void flush_context(unsigned int cpu)
+ */
+ if (asid == 0)
+ asid = per_cpu(reserved_asids, i);
+- __set_bit(asid & ~ASID_MASK, asid_map);
++ __set_bit(asid2idx(asid), asid_map);
+ per_cpu(reserved_asids, i) = asid;
+ }
+
+@@ -162,16 +162,16 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
+ * We had a valid ASID in a previous life, so try to re-use
+ * it if possible.
+ */
+- asid &= ~ASID_MASK;
+- if (!__test_and_set_bit(asid, asid_map))
++ if (!__test_and_set_bit(asid2idx(asid), asid_map))
+ return newasid;
+ }
+
+ /*
+ * Allocate a free ASID. If we can't find one, take a note of the
+- * currently active ASIDs and mark the TLBs as requiring flushes.
+- * We always count from ASID #1, as we use ASID #0 when setting a
+- * reserved TTBR0 for the init_mm.
++ * currently active ASIDs and mark the TLBs as requiring flushes. We
++ * always count from ASID #2 (index 1), as we use ASID #0 when setting
++ * a reserved TTBR0 for the init_mm and we allocate ASIDs in even/odd
++ * pairs.
+ */
+ asid = find_next_zero_bit(asid_map, NUM_USER_ASIDS, cur_idx);
+ if (asid != NUM_USER_ASIDS)
+@@ -188,7 +188,7 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
+ set_asid:
+ __set_bit(asid, asid_map);
+ cur_idx = asid;
+- return asid | generation;
++ return idx2asid(asid) | generation;
+ }
+
+ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
+@@ -231,6 +231,9 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
+ raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
+
+ switch_mm_fastpath:
++
++ arm64_apply_bp_hardening();
++
+ /*
+ * Defer TTBR0_EL1 setting for user threads to uaccess_enable() when
+ * emulating PAN.
+@@ -239,6 +242,15 @@ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
+ cpu_switch_mm(mm->pgd, mm);
+ }
+
++/* Errata workaround post TTBRx_EL1 update. */
++asmlinkage void post_ttbr_update_workaround(void)
++{
++ asm(ALTERNATIVE("nop; nop; nop",
++ "ic iallu; dsb nsh; isb",
++ ARM64_WORKAROUND_CAVIUM_27456,
++ CONFIG_CAVIUM_ERRATUM_27456));
++}
++
+ static int asids_init(void)
+ {
+ asid_bits = get_cpu_asid_bits();
+@@ -254,8 +266,6 @@ static int asids_init(void)
+ panic("Failed to allocate bitmap for %lu ASIDs\n",
+ NUM_USER_ASIDS);
+
+- set_reserved_asid_bits();
+-
+ pr_info("ASID allocator initialised with %lu entries\n", NUM_USER_ASIDS);
+ return 0;
+ }
+diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
+index 9b7f89df49db..dd8f5197b549 100644
+--- a/arch/arm64/mm/fault.c
++++ b/arch/arm64/mm/fault.c
+@@ -240,7 +240,7 @@ static inline bool is_permission_fault(unsigned int esr, struct pt_regs *regs,
+ if (fsc_type == ESR_ELx_FSC_PERM)
+ return true;
+
+- if (addr < USER_DS && system_uses_ttbr0_pan())
++ if (addr < TASK_SIZE && system_uses_ttbr0_pan())
+ return fsc_type == ESR_ELx_FSC_FAULT &&
+ (regs->pstate & PSR_PAN_BIT);
+
+@@ -414,7 +414,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
+ mm_flags |= FAULT_FLAG_WRITE;
+ }
+
+- if (addr < USER_DS && is_permission_fault(esr, regs, addr)) {
++ if (addr < TASK_SIZE && is_permission_fault(esr, regs, addr)) {
+ /* regs->orig_addr_limit may be 0 if we entered from EL0 */
+ if (regs->orig_addr_limit == KERNEL_DS)
+ die("Accessing user space memory with fs=KERNEL_DS", regs, esr);
+@@ -707,6 +707,29 @@ asmlinkage void __exception do_mem_abort(unsigned long addr, unsigned int esr,
+ arm64_notify_die("", regs, &info, esr);
+ }
+
++asmlinkage void __exception do_el0_irq_bp_hardening(void)
++{
++ /* PC has already been checked in entry.S */
++ arm64_apply_bp_hardening();
++}
++
++asmlinkage void __exception do_el0_ia_bp_hardening(unsigned long addr,
++ unsigned int esr,
++ struct pt_regs *regs)
++{
++ /*
++ * We've taken an instruction abort from userspace and not yet
++ * re-enabled IRQs. If the address is a kernel address, apply
++ * BP hardening prior to enabling IRQs and pre-emption.
++ */
++ if (addr > TASK_SIZE)
++ arm64_apply_bp_hardening();
++
++ local_irq_enable();
++ do_mem_abort(addr, esr, regs);
++}
++
++
+ asmlinkage void __exception do_sp_pc_abort(unsigned long addr,
+ unsigned int esr,
+ struct pt_regs *regs)
+@@ -714,6 +737,12 @@ asmlinkage void __exception do_sp_pc_abort(unsigned long addr,
+ struct siginfo info;
+ struct task_struct *tsk = current;
+
++ if (user_mode(regs)) {
++ if (instruction_pointer(regs) > TASK_SIZE)
++ arm64_apply_bp_hardening();
++ local_irq_enable();
++ }
++
+ if (show_unhandled_signals && unhandled_signal(tsk, SIGBUS))
+ pr_info_ratelimited("%s[%d]: %s exception: pc=%p sp=%p\n",
+ tsk->comm, task_pid_nr(tsk),
+@@ -773,6 +802,9 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,
+ if (interrupts_enabled(regs))
+ trace_hardirqs_off();
+
++ if (user_mode(regs) && instruction_pointer(regs) > TASK_SIZE)
++ arm64_apply_bp_hardening();
++
+ if (!inf->fn(addr, esr, regs)) {
+ rv = 1;
+ } else {
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 267d2b79d52d..451f96f3377c 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -117,6 +117,10 @@ static bool pgattr_change_is_safe(u64 old, u64 new)
+ if ((old | new) & PTE_CONT)
+ return false;
+
++ /* Transitioning from Global to Non-Global is safe */
++ if (((old ^ new) == PTE_NG) && (new & PTE_NG))
++ return true;
++
+ return ((old ^ new) & ~mask) == 0;
+ }
+
+@@ -525,6 +529,37 @@ static int __init parse_rodata(char *arg)
+ }
+ early_param("rodata", parse_rodata);
+
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++static int __init map_entry_trampoline(void)
++{
++ extern char __entry_tramp_text_start[];
++
++ pgprot_t prot = rodata_enabled ? PAGE_KERNEL_ROX : PAGE_KERNEL_EXEC;
++ phys_addr_t pa_start = __pa_symbol(__entry_tramp_text_start);
++
++ /* The trampoline is always mapped and can therefore be global */
++ pgprot_val(prot) &= ~PTE_NG;
++
++ /* Map only the text into the trampoline page table */
++ memset(tramp_pg_dir, 0, PGD_SIZE);
++ __create_pgd_mapping(tramp_pg_dir, pa_start, TRAMP_VALIAS, PAGE_SIZE,
++ prot, pgd_pgtable_alloc, 0);
++
++ /* Map both the text and data into the kernel page table */
++ __set_fixmap(FIX_ENTRY_TRAMP_TEXT, pa_start, prot);
++ if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
++ extern char __entry_tramp_data_start[];
++
++ __set_fixmap(FIX_ENTRY_TRAMP_DATA,
++ __pa_symbol(__entry_tramp_data_start),
++ PAGE_KERNEL_RO);
++ }
++
++ return 0;
++}
++core_initcall(map_entry_trampoline);
++#endif
++
+ /*
+ * Create fine-grained mappings for the kernel.
+ */
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index 95233dfc4c39..08572f95bd8a 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -86,7 +86,7 @@ ENDPROC(cpu_do_suspend)
+ *
+ * x0: Address of context pointer
+ */
+- .pushsection ".idmap.text", "ax"
++ .pushsection ".idmap.text", "awx"
+ ENTRY(cpu_do_resume)
+ ldp x2, x3, [x0]
+ ldp x4, x5, [x0, #16]
+@@ -138,16 +138,30 @@ ENDPROC(cpu_do_resume)
+ * - pgd_phys - physical address of new TTB
+ */
+ ENTRY(cpu_do_switch_mm)
+- pre_ttbr0_update_workaround x0, x2, x3
++ mrs x2, ttbr1_el1
+ mmid x1, x1 // get mm->context.id
+- bfi x0, x1, #48, #16 // set the ASID
+- msr ttbr0_el1, x0 // set TTBR0
++#ifdef CONFIG_ARM64_SW_TTBR0_PAN
++ bfi x0, x1, #48, #16 // set the ASID field in TTBR0
++#endif
++ bfi x2, x1, #48, #16 // set the ASID
++ msr ttbr1_el1, x2 // in TTBR1 (since TCR.A1 is set)
+ isb
+- post_ttbr0_update_workaround
+- ret
++ msr ttbr0_el1, x0 // now update TTBR0
++ isb
++ b post_ttbr_update_workaround // Back to C code...
+ ENDPROC(cpu_do_switch_mm)
+
+- .pushsection ".idmap.text", "ax"
++ .pushsection ".idmap.text", "awx"
++
++.macro __idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
++ adrp \tmp1, empty_zero_page
++ msr ttbr1_el1, \tmp2
++ isb
++ tlbi vmalle1
++ dsb nsh
++ isb
++.endm
++
+ /*
+ * void idmap_cpu_replace_ttbr1(phys_addr_t new_pgd)
+ *
+@@ -157,13 +171,7 @@ ENDPROC(cpu_do_switch_mm)
+ ENTRY(idmap_cpu_replace_ttbr1)
+ save_and_disable_daif flags=x2
+
+- adrp x1, empty_zero_page
+- msr ttbr1_el1, x1
+- isb
+-
+- tlbi vmalle1
+- dsb nsh
+- isb
++ __idmap_cpu_set_reserved_ttbr1 x1, x3
+
+ msr ttbr1_el1, x0
+ isb
+@@ -174,13 +182,197 @@ ENTRY(idmap_cpu_replace_ttbr1)
+ ENDPROC(idmap_cpu_replace_ttbr1)
+ .popsection
+
++#ifdef CONFIG_UNMAP_KERNEL_AT_EL0
++ .pushsection ".idmap.text", "awx"
++
++ .macro __idmap_kpti_get_pgtable_ent, type
++ dc cvac, cur_\()\type\()p // Ensure any existing dirty
++ dmb sy // lines are written back before
++ ldr \type, [cur_\()\type\()p] // loading the entry
++ tbz \type, #0, next_\()\type // Skip invalid entries
++ .endm
++
++ .macro __idmap_kpti_put_pgtable_ent_ng, type
++ orr \type, \type, #PTE_NG // Same bit for blocks and pages
++ str \type, [cur_\()\type\()p] // Update the entry and ensure it
++ dc civac, cur_\()\type\()p // is visible to all CPUs.
++ .endm
++
++/*
++ * void __kpti_install_ng_mappings(int cpu, int num_cpus, phys_addr_t swapper)
++ *
++ * Called exactly once from stop_machine context by each CPU found during boot.
++ */
++__idmap_kpti_flag:
++ .long 1
++ENTRY(idmap_kpti_install_ng_mappings)
++ cpu .req w0
++ num_cpus .req w1
++ swapper_pa .req x2
++ swapper_ttb .req x3
++ flag_ptr .req x4
++ cur_pgdp .req x5
++ end_pgdp .req x6
++ pgd .req x7
++ cur_pudp .req x8
++ end_pudp .req x9
++ pud .req x10
++ cur_pmdp .req x11
++ end_pmdp .req x12
++ pmd .req x13
++ cur_ptep .req x14
++ end_ptep .req x15
++ pte .req x16
++
++ mrs swapper_ttb, ttbr1_el1
++ adr flag_ptr, __idmap_kpti_flag
++
++ cbnz cpu, __idmap_kpti_secondary
++
++ /* We're the boot CPU. Wait for the others to catch up */
++ sevl
++1: wfe
++ ldaxr w18, [flag_ptr]
++ eor w18, w18, num_cpus
++ cbnz w18, 1b
++
++ /* We need to walk swapper, so turn off the MMU. */
++ pre_disable_mmu_workaround
++ mrs x18, sctlr_el1
++ bic x18, x18, #SCTLR_ELx_M
++ msr sctlr_el1, x18
++ isb
++
++ /* Everybody is enjoying the idmap, so we can rewrite swapper. */
++ /* PGD */
++ mov cur_pgdp, swapper_pa
++ add end_pgdp, cur_pgdp, #(PTRS_PER_PGD * 8)
++do_pgd: __idmap_kpti_get_pgtable_ent pgd
++ tbnz pgd, #1, walk_puds
++ __idmap_kpti_put_pgtable_ent_ng pgd
++next_pgd:
++ add cur_pgdp, cur_pgdp, #8
++ cmp cur_pgdp, end_pgdp
++ b.ne do_pgd
++
++ /* Publish the updated tables and nuke all the TLBs */
++ dsb sy
++ tlbi vmalle1is
++ dsb ish
++ isb
++
++ /* We're done: fire up the MMU again */
++ mrs x18, sctlr_el1
++ orr x18, x18, #SCTLR_ELx_M
++ msr sctlr_el1, x18
++ isb
++
++ /* Set the flag to zero to indicate that we're all done */
++ str wzr, [flag_ptr]
++ ret
++
++ /* PUD */
++walk_puds:
++ .if CONFIG_PGTABLE_LEVELS > 3
++ pte_to_phys cur_pudp, pgd
++ add end_pudp, cur_pudp, #(PTRS_PER_PUD * 8)
++do_pud: __idmap_kpti_get_pgtable_ent pud
++ tbnz pud, #1, walk_pmds
++ __idmap_kpti_put_pgtable_ent_ng pud
++next_pud:
++ add cur_pudp, cur_pudp, 8
++ cmp cur_pudp, end_pudp
++ b.ne do_pud
++ b next_pgd
++ .else /* CONFIG_PGTABLE_LEVELS <= 3 */
++ mov pud, pgd
++ b walk_pmds
++next_pud:
++ b next_pgd
++ .endif
++
++ /* PMD */
++walk_pmds:
++ .if CONFIG_PGTABLE_LEVELS > 2
++ pte_to_phys cur_pmdp, pud
++ add end_pmdp, cur_pmdp, #(PTRS_PER_PMD * 8)
++do_pmd: __idmap_kpti_get_pgtable_ent pmd
++ tbnz pmd, #1, walk_ptes
++ __idmap_kpti_put_pgtable_ent_ng pmd
++next_pmd:
++ add cur_pmdp, cur_pmdp, #8
++ cmp cur_pmdp, end_pmdp
++ b.ne do_pmd
++ b next_pud
++ .else /* CONFIG_PGTABLE_LEVELS <= 2 */
++ mov pmd, pud
++ b walk_ptes
++next_pmd:
++ b next_pud
++ .endif
++
++ /* PTE */
++walk_ptes:
++ pte_to_phys cur_ptep, pmd
++ add end_ptep, cur_ptep, #(PTRS_PER_PTE * 8)
++do_pte: __idmap_kpti_get_pgtable_ent pte
++ __idmap_kpti_put_pgtable_ent_ng pte
++next_pte:
++ add cur_ptep, cur_ptep, #8
++ cmp cur_ptep, end_ptep
++ b.ne do_pte
++ b next_pmd
++
++ /* Secondary CPUs end up here */
++__idmap_kpti_secondary:
++ /* Uninstall swapper before surgery begins */
++ __idmap_cpu_set_reserved_ttbr1 x18, x17
++
++ /* Increment the flag to let the boot CPU we're ready */
++1: ldxr w18, [flag_ptr]
++ add w18, w18, #1
++ stxr w17, w18, [flag_ptr]
++ cbnz w17, 1b
++
++ /* Wait for the boot CPU to finish messing around with swapper */
++ sevl
++1: wfe
++ ldxr w18, [flag_ptr]
++ cbnz w18, 1b
++
++ /* All done, act like nothing happened */
++ msr ttbr1_el1, swapper_ttb
++ isb
++ ret
++
++ .unreq cpu
++ .unreq num_cpus
++ .unreq swapper_pa
++ .unreq swapper_ttb
++ .unreq flag_ptr
++ .unreq cur_pgdp
++ .unreq end_pgdp
++ .unreq pgd
++ .unreq cur_pudp
++ .unreq end_pudp
++ .unreq pud
++ .unreq cur_pmdp
++ .unreq end_pmdp
++ .unreq pmd
++ .unreq cur_ptep
++ .unreq end_ptep
++ .unreq pte
++ENDPROC(idmap_kpti_install_ng_mappings)
++ .popsection
++#endif
++
+ /*
+ * __cpu_setup
+ *
+ * Initialise the processor for turning the MMU on. Return in x0 the
+ * value of the SCTLR_EL1 register.
+ */
+- .pushsection ".idmap.text", "ax"
++ .pushsection ".idmap.text", "awx"
+ ENTRY(__cpu_setup)
+ tlbi vmalle1 // Invalidate local TLB
+ dsb nsh
+@@ -224,7 +416,7 @@ ENTRY(__cpu_setup)
+ * both user and kernel.
+ */
+ ldr x10, =TCR_TxSZ(VA_BITS) | TCR_CACHE_FLAGS | TCR_SMP_FLAGS | \
+- TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0
++ TCR_TG_FLAGS | TCR_ASID16 | TCR_TBI0 | TCR_A1
+ tcr_set_idmap_t0sz x10, x9
+
+ /*
+diff --git a/arch/arm64/xen/hypercall.S b/arch/arm64/xen/hypercall.S
+index 401ceb71540c..c5f05c4a4d00 100644
+--- a/arch/arm64/xen/hypercall.S
++++ b/arch/arm64/xen/hypercall.S
+@@ -101,12 +101,12 @@ ENTRY(privcmd_call)
+ * need the explicit uaccess_enable/disable if the TTBR0 PAN emulation
+ * is enabled (it implies that hardware UAO and PAN disabled).
+ */
+- uaccess_ttbr0_enable x6, x7
++ uaccess_ttbr0_enable x6, x7, x8
+ hvc XEN_IMM
+
+ /*
+ * Disable userspace access from kernel once the hyp call completed.
+ */
+- uaccess_ttbr0_disable x6
++ uaccess_ttbr0_disable x6, x7
+ ret
+ ENDPROC(privcmd_call);
+diff --git a/arch/mn10300/mm/misalignment.c b/arch/mn10300/mm/misalignment.c
+index b39a388825ae..8ace89617c1c 100644
+--- a/arch/mn10300/mm/misalignment.c
++++ b/arch/mn10300/mm/misalignment.c
+@@ -437,7 +437,7 @@ asmlinkage void misalignment(struct pt_regs *regs, enum exception_code code)
+
+ info.si_signo = SIGSEGV;
+ info.si_errno = 0;
+- info.si_code = 0;
++ info.si_code = SEGV_MAPERR;
+ info.si_addr = (void *) regs->pc;
+ force_sig_info(SIGSEGV, &info, current);
+ return;
+diff --git a/arch/openrisc/kernel/traps.c b/arch/openrisc/kernel/traps.c
+index 4085d72fa5ae..9e38dc66c9e4 100644
+--- a/arch/openrisc/kernel/traps.c
++++ b/arch/openrisc/kernel/traps.c
+@@ -266,12 +266,12 @@ asmlinkage void do_unaligned_access(struct pt_regs *regs, unsigned long address)
+ siginfo_t info;
+
+ if (user_mode(regs)) {
+- /* Send a SIGSEGV */
+- info.si_signo = SIGSEGV;
++ /* Send a SIGBUS */
++ info.si_signo = SIGBUS;
+ info.si_errno = 0;
+- /* info.si_code has been set above */
+- info.si_addr = (void *)address;
+- force_sig_info(SIGSEGV, &info, current);
++ info.si_code = BUS_ADRALN;
++ info.si_addr = (void __user *)address;
++ force_sig_info(SIGBUS, &info, current);
+ } else {
+ printk("KERNEL: Unaligned Access 0x%.8lx\n", address);
+ show_registers(regs);
+diff --git a/arch/powerpc/crypto/crc32c-vpmsum_glue.c b/arch/powerpc/crypto/crc32c-vpmsum_glue.c
+index f058e0c3e4d4..fd1d6c83f0c0 100644
+--- a/arch/powerpc/crypto/crc32c-vpmsum_glue.c
++++ b/arch/powerpc/crypto/crc32c-vpmsum_glue.c
+@@ -141,6 +141,7 @@ static struct shash_alg alg = {
+ .cra_name = "crc32c",
+ .cra_driver_name = "crc32c-vpmsum",
+ .cra_priority = 200,
++ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .cra_blocksize = CHKSUM_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(u32),
+ .cra_module = THIS_MODULE,
+diff --git a/arch/powerpc/kvm/Kconfig b/arch/powerpc/kvm/Kconfig
+index b12b8eb39c29..648160334abf 100644
+--- a/arch/powerpc/kvm/Kconfig
++++ b/arch/powerpc/kvm/Kconfig
+@@ -68,7 +68,7 @@ config KVM_BOOK3S_64
+ select KVM_BOOK3S_64_HANDLER
+ select KVM
+ select KVM_BOOK3S_PR_POSSIBLE if !KVM_BOOK3S_HV_POSSIBLE
+- select SPAPR_TCE_IOMMU if IOMMU_SUPPORT && (PPC_SERIES || PPC_POWERNV)
++ select SPAPR_TCE_IOMMU if IOMMU_SUPPORT && (PPC_PSERIES || PPC_POWERNV)
+ ---help---
+ Support running unmodified book3s_64 and book3s_32 guest kernels
+ in virtual machines on book3s_64 host processors.
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 2d46037ce936..6c402f6c4940 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -1005,8 +1005,6 @@ static int kvmppc_emulate_doorbell_instr(struct kvm_vcpu *vcpu)
+ struct kvm *kvm = vcpu->kvm;
+ struct kvm_vcpu *tvcpu;
+
+- if (!cpu_has_feature(CPU_FTR_ARCH_300))
+- return EMULATE_FAIL;
+ if (kvmppc_get_last_inst(vcpu, INST_GENERIC, &inst) != EMULATE_DONE)
+ return RESUME_GUEST;
+ if (get_op(inst) != 31)
+@@ -1056,6 +1054,7 @@ static int kvmppc_emulate_doorbell_instr(struct kvm_vcpu *vcpu)
+ return RESUME_GUEST;
+ }
+
++/* Called with vcpu->arch.vcore->lock held */
+ static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ struct task_struct *tsk)
+ {
+@@ -1176,7 +1175,10 @@ static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ swab32(vcpu->arch.emul_inst) :
+ vcpu->arch.emul_inst;
+ if (vcpu->guest_debug & KVM_GUESTDBG_USE_SW_BP) {
++ /* Need vcore unlocked to call kvmppc_get_last_inst */
++ spin_unlock(&vcpu->arch.vcore->lock);
+ r = kvmppc_emulate_debug_inst(run, vcpu);
++ spin_lock(&vcpu->arch.vcore->lock);
+ } else {
+ kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
+ r = RESUME_GUEST;
+@@ -1191,8 +1193,13 @@ static int kvmppc_handle_exit_hv(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ */
+ case BOOK3S_INTERRUPT_H_FAC_UNAVAIL:
+ r = EMULATE_FAIL;
+- if ((vcpu->arch.hfscr >> 56) == FSCR_MSGP_LG)
++ if (((vcpu->arch.hfscr >> 56) == FSCR_MSGP_LG) &&
++ cpu_has_feature(CPU_FTR_ARCH_300)) {
++ /* Need vcore unlocked to call kvmppc_get_last_inst */
++ spin_unlock(&vcpu->arch.vcore->lock);
+ r = kvmppc_emulate_doorbell_instr(vcpu);
++ spin_lock(&vcpu->arch.vcore->lock);
++ }
+ if (r == EMULATE_FAIL) {
+ kvmppc_core_queue_program(vcpu, SRR1_PROGILL);
+ r = RESUME_GUEST;
+@@ -2934,13 +2941,14 @@ static noinline void kvmppc_run_core(struct kvmppc_vcore *vc)
+ /* make sure updates to secondary vcpu structs are visible now */
+ smp_mb();
+
++ preempt_enable();
++
+ for (sub = 0; sub < core_info.n_subcores; ++sub) {
+ pvc = core_info.vc[sub];
+ post_guest_process(pvc, pvc == vc);
+ }
+
+ spin_lock(&vc->lock);
+- preempt_enable();
+
+ out:
+ vc->vcore_state = VCORE_INACTIVE;
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index 9c61f736c75b..ffec37062f3b 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -1423,6 +1423,26 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+ blt deliver_guest_interrupt
+
+ guest_exit_cont: /* r9 = vcpu, r12 = trap, r13 = paca */
++ /* Save more register state */
++ mfdar r6
++ mfdsisr r7
++ std r6, VCPU_DAR(r9)
++ stw r7, VCPU_DSISR(r9)
++ /* don't overwrite fault_dar/fault_dsisr if HDSI */
++ cmpwi r12,BOOK3S_INTERRUPT_H_DATA_STORAGE
++ beq mc_cont
++ std r6, VCPU_FAULT_DAR(r9)
++ stw r7, VCPU_FAULT_DSISR(r9)
++
++ /* See if it is a machine check */
++ cmpwi r12, BOOK3S_INTERRUPT_MACHINE_CHECK
++ beq machine_check_realmode
++mc_cont:
++#ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING
++ addi r3, r9, VCPU_TB_RMEXIT
++ mr r4, r9
++ bl kvmhv_accumulate_time
++#endif
+ #ifdef CONFIG_KVM_XICS
+ /* We are exiting, pull the VP from the XIVE */
+ lwz r0, VCPU_XIVE_PUSHED(r9)
+@@ -1460,26 +1480,6 @@ guest_exit_cont: /* r9 = vcpu, r12 = trap, r13 = paca */
+ eieio
+ 1:
+ #endif /* CONFIG_KVM_XICS */
+- /* Save more register state */
+- mfdar r6
+- mfdsisr r7
+- std r6, VCPU_DAR(r9)
+- stw r7, VCPU_DSISR(r9)
+- /* don't overwrite fault_dar/fault_dsisr if HDSI */
+- cmpwi r12,BOOK3S_INTERRUPT_H_DATA_STORAGE
+- beq mc_cont
+- std r6, VCPU_FAULT_DAR(r9)
+- stw r7, VCPU_FAULT_DSISR(r9)
+-
+- /* See if it is a machine check */
+- cmpwi r12, BOOK3S_INTERRUPT_MACHINE_CHECK
+- beq machine_check_realmode
+-mc_cont:
+-#ifdef CONFIG_KVM_BOOK3S_HV_EXIT_TIMING
+- addi r3, r9, VCPU_TB_RMEXIT
+- mr r4, r9
+- bl kvmhv_accumulate_time
+-#endif
+
+ mr r3, r12
+ /* Increment exit count, poke other threads to exit */
+diff --git a/arch/s390/crypto/crc32-vx.c b/arch/s390/crypto/crc32-vx.c
+index 436865926c26..423ee05887e6 100644
+--- a/arch/s390/crypto/crc32-vx.c
++++ b/arch/s390/crypto/crc32-vx.c
+@@ -239,6 +239,7 @@ static struct shash_alg crc32_vx_algs[] = {
+ .cra_name = "crc32",
+ .cra_driver_name = "crc32-vx",
+ .cra_priority = 200,
++ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .cra_blocksize = CRC32_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct crc_ctx),
+ .cra_module = THIS_MODULE,
+@@ -259,6 +260,7 @@ static struct shash_alg crc32_vx_algs[] = {
+ .cra_name = "crc32be",
+ .cra_driver_name = "crc32be-vx",
+ .cra_priority = 200,
++ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .cra_blocksize = CRC32_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct crc_ctx),
+ .cra_module = THIS_MODULE,
+@@ -279,6 +281,7 @@ static struct shash_alg crc32_vx_algs[] = {
+ .cra_name = "crc32c",
+ .cra_driver_name = "crc32c-vx",
+ .cra_priority = 200,
++ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .cra_blocksize = CRC32_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct crc_ctx),
+ .cra_module = THIS_MODULE,
+diff --git a/arch/sh/kernel/traps_32.c b/arch/sh/kernel/traps_32.c
+index 57cff00cad17..b3770bb26211 100644
+--- a/arch/sh/kernel/traps_32.c
++++ b/arch/sh/kernel/traps_32.c
+@@ -609,7 +609,8 @@ asmlinkage void do_divide_error(unsigned long r4)
+ break;
+ }
+
+- force_sig_info(SIGFPE, &info, current);
++ info.si_signo = SIGFPE;
++ force_sig_info(info.si_signo, &info, current);
+ }
+ #endif
+
+diff --git a/arch/sparc/crypto/crc32c_glue.c b/arch/sparc/crypto/crc32c_glue.c
+index d1064e46efe8..8aa664638c3c 100644
+--- a/arch/sparc/crypto/crc32c_glue.c
++++ b/arch/sparc/crypto/crc32c_glue.c
+@@ -133,6 +133,7 @@ static struct shash_alg alg = {
+ .cra_name = "crc32c",
+ .cra_driver_name = "crc32c-sparc64",
+ .cra_priority = SPARC_CR_OPCODE_PRIORITY,
++ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .cra_blocksize = CHKSUM_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(u32),
+ .cra_alignmask = 7,
+diff --git a/arch/x86/crypto/crc32-pclmul_glue.c b/arch/x86/crypto/crc32-pclmul_glue.c
+index 27226df3f7d8..c8d9cdacbf10 100644
+--- a/arch/x86/crypto/crc32-pclmul_glue.c
++++ b/arch/x86/crypto/crc32-pclmul_glue.c
+@@ -162,6 +162,7 @@ static struct shash_alg alg = {
+ .cra_name = "crc32",
+ .cra_driver_name = "crc32-pclmul",
+ .cra_priority = 200,
++ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .cra_blocksize = CHKSUM_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(u32),
+ .cra_module = THIS_MODULE,
+diff --git a/arch/x86/crypto/crc32c-intel_glue.c b/arch/x86/crypto/crc32c-intel_glue.c
+index c194d5717ae5..5773e1161072 100644
+--- a/arch/x86/crypto/crc32c-intel_glue.c
++++ b/arch/x86/crypto/crc32c-intel_glue.c
+@@ -226,6 +226,7 @@ static struct shash_alg alg = {
+ .cra_name = "crc32c",
+ .cra_driver_name = "crc32c-intel",
+ .cra_priority = 200,
++ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .cra_blocksize = CHKSUM_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(u32),
+ .cra_module = THIS_MODULE,
+diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c
+index e32142bc071d..28c372003e44 100644
+--- a/arch/x86/crypto/poly1305_glue.c
++++ b/arch/x86/crypto/poly1305_glue.c
+@@ -164,7 +164,6 @@ static struct shash_alg alg = {
+ .init = poly1305_simd_init,
+ .update = poly1305_simd_update,
+ .final = crypto_poly1305_final,
+- .setkey = crypto_poly1305_setkey,
+ .descsize = sizeof(struct poly1305_simd_desc_ctx),
+ .base = {
+ .cra_name = "poly1305",
+diff --git a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_init_avx2.c b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_init_avx2.c
+index 36870b26067a..d08805032f01 100644
+--- a/arch/x86/crypto/sha512-mb/sha512_mb_mgr_init_avx2.c
++++ b/arch/x86/crypto/sha512-mb/sha512_mb_mgr_init_avx2.c
+@@ -57,10 +57,12 @@ void sha512_mb_mgr_init_avx2(struct sha512_mb_mgr *state)
+ {
+ unsigned int j;
+
+- state->lens[0] = 0;
+- state->lens[1] = 1;
+- state->lens[2] = 2;
+- state->lens[3] = 3;
++ /* initially all lanes are unused */
++ state->lens[0] = 0xFFFFFFFF00000000;
++ state->lens[1] = 0xFFFFFFFF00000001;
++ state->lens[2] = 0xFFFFFFFF00000002;
++ state->lens[3] = 0xFFFFFFFF00000003;
++
+ state->unused_lanes = 0xFF03020100;
+ for (j = 0; j < 4; j++)
+ state->ldata[j].job_in_lane = NULL;
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index bee4c49f6dd0..6f623848260f 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -5323,14 +5323,15 @@ static int vmx_deliver_nested_posted_interrupt(struct kvm_vcpu *vcpu,
+
+ if (is_guest_mode(vcpu) &&
+ vector == vmx->nested.posted_intr_nv) {
+- /* the PIR and ON have been set by L1. */
+- kvm_vcpu_trigger_posted_interrupt(vcpu, true);
+ /*
+ * If a posted intr is not recognized by hardware,
+ * we will accomplish it in the next vmentry.
+ */
+ vmx->nested.pi_pending = true;
+ kvm_make_request(KVM_REQ_EVENT, vcpu);
++ /* the PIR and ON have been set by L1. */
++ if (!kvm_vcpu_trigger_posted_interrupt(vcpu, true))
++ kvm_vcpu_kick(vcpu);
+ return 0;
+ }
+ return -1;
+@@ -11254,7 +11255,6 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr)
+ if (block_nested_events)
+ return -EBUSY;
+ nested_vmx_inject_exception_vmexit(vcpu, exit_qual);
+- vcpu->arch.exception.pending = false;
+ return 0;
+ }
+
+diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
+index d0b95b7a90b4..6d112d8f799c 100644
+--- a/arch/x86/kvm/x86.h
++++ b/arch/x86/kvm/x86.h
+@@ -12,6 +12,7 @@
+
+ static inline void kvm_clear_exception_queue(struct kvm_vcpu *vcpu)
+ {
++ vcpu->arch.exception.pending = false;
+ vcpu->arch.exception.injected = false;
+ }
+
+diff --git a/arch/xtensa/include/asm/futex.h b/arch/xtensa/include/asm/futex.h
+index eaaf1ebcc7a4..5bfbc1c401d4 100644
+--- a/arch/xtensa/include/asm/futex.h
++++ b/arch/xtensa/include/asm/futex.h
+@@ -92,7 +92,6 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ u32 oldval, u32 newval)
+ {
+ int ret = 0;
+- u32 prev;
+
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(u32)))
+ return -EFAULT;
+@@ -103,26 +102,24 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+
+ __asm__ __volatile__ (
+ " # futex_atomic_cmpxchg_inatomic\n"
+- "1: l32i %1, %3, 0\n"
+- " mov %0, %5\n"
+- " wsr %1, scompare1\n"
+- "2: s32c1i %0, %3, 0\n"
+- "3:\n"
++ " wsr %5, scompare1\n"
++ "1: s32c1i %1, %4, 0\n"
++ " s32i %1, %6, 0\n"
++ "2:\n"
+ " .section .fixup,\"ax\"\n"
+ " .align 4\n"
+- "4: .long 3b\n"
+- "5: l32r %1, 4b\n"
+- " movi %0, %6\n"
++ "3: .long 2b\n"
++ "4: l32r %1, 3b\n"
++ " movi %0, %7\n"
+ " jx %1\n"
+ " .previous\n"
+ " .section __ex_table,\"a\"\n"
+- " .long 1b,5b,2b,5b\n"
++ " .long 1b,4b\n"
+ " .previous\n"
+- : "+r" (ret), "=&r" (prev), "+m" (*uaddr)
+- : "r" (uaddr), "r" (oldval), "r" (newval), "I" (-EFAULT)
++ : "+r" (ret), "+r" (newval), "+m" (*uaddr), "+m" (*uval)
++ : "r" (uaddr), "r" (oldval), "r" (uval), "I" (-EFAULT)
+ : "memory");
+
+- *uval = prev;
+ return ret;
+ }
+
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 3ba4326a63b5..82b92adf3477 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -699,6 +699,15 @@ void blk_cleanup_queue(struct request_queue *q)
+ queue_flag_set(QUEUE_FLAG_DEAD, q);
+ spin_unlock_irq(lock);
+
++ /*
++ * make sure all in-progress dispatch are completed because
++ * blk_freeze_queue() can only complete all requests, and
++ * dispatch may still be in-progress since we dispatch requests
++ * from more than one contexts
++ */
++ if (q->mq_ops)
++ blk_mq_quiesce_queue(q);
++
+ /* for synchronous bio-based driver finish in-flight integrity i/o */
+ blk_flush_integrity();
+
+diff --git a/crypto/ahash.c b/crypto/ahash.c
+index 3a35d67de7d9..266fc1d64f61 100644
+--- a/crypto/ahash.c
++++ b/crypto/ahash.c
+@@ -193,11 +193,18 @@ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen)
+ {
+ unsigned long alignmask = crypto_ahash_alignmask(tfm);
++ int err;
+
+ if ((unsigned long)key & alignmask)
+- return ahash_setkey_unaligned(tfm, key, keylen);
++ err = ahash_setkey_unaligned(tfm, key, keylen);
++ else
++ err = tfm->setkey(tfm, key, keylen);
++
++ if (err)
++ return err;
+
+- return tfm->setkey(tfm, key, keylen);
++ crypto_ahash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(crypto_ahash_setkey);
+
+@@ -368,7 +375,12 @@ EXPORT_SYMBOL_GPL(crypto_ahash_finup);
+
+ int crypto_ahash_digest(struct ahash_request *req)
+ {
+- return crypto_ahash_op(req, crypto_ahash_reqtfm(req)->digest);
++ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
++
++ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
++ return -ENOKEY;
++
++ return crypto_ahash_op(req, tfm->digest);
+ }
+ EXPORT_SYMBOL_GPL(crypto_ahash_digest);
+
+@@ -450,7 +462,6 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
+ struct ahash_alg *alg = crypto_ahash_alg(hash);
+
+ hash->setkey = ahash_nosetkey;
+- hash->has_setkey = false;
+ hash->export = ahash_no_export;
+ hash->import = ahash_no_import;
+
+@@ -465,7 +476,8 @@ static int crypto_ahash_init_tfm(struct crypto_tfm *tfm)
+
+ if (alg->setkey) {
+ hash->setkey = alg->setkey;
+- hash->has_setkey = true;
++ if (!(alg->halg.base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
++ crypto_ahash_set_flags(hash, CRYPTO_TFM_NEED_KEY);
+ }
+ if (alg->export)
+ hash->export = alg->export;
+@@ -649,5 +661,16 @@ struct hash_alg_common *ahash_attr_alg(struct rtattr *rta, u32 type, u32 mask)
+ }
+ EXPORT_SYMBOL_GPL(ahash_attr_alg);
+
++bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg)
++{
++ struct crypto_alg *alg = &halg->base;
++
++ if (alg->cra_type != &crypto_ahash_type)
++ return crypto_shash_alg_has_setkey(__crypto_shash_alg(alg));
++
++ return __crypto_ahash_alg(alg)->setkey != NULL;
++}
++EXPORT_SYMBOL_GPL(crypto_hash_alg_has_setkey);
++
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Asynchronous cryptographic hash type");
+diff --git a/crypto/algif_hash.c b/crypto/algif_hash.c
+index 76d2e716c792..6c9b1927a520 100644
+--- a/crypto/algif_hash.c
++++ b/crypto/algif_hash.c
+@@ -34,11 +34,6 @@ struct hash_ctx {
+ struct ahash_request req;
+ };
+
+-struct algif_hash_tfm {
+- struct crypto_ahash *hash;
+- bool has_key;
+-};
+-
+ static int hash_alloc_result(struct sock *sk, struct hash_ctx *ctx)
+ {
+ unsigned ds;
+@@ -307,7 +302,7 @@ static int hash_check_key(struct socket *sock)
+ int err = 0;
+ struct sock *psk;
+ struct alg_sock *pask;
+- struct algif_hash_tfm *tfm;
++ struct crypto_ahash *tfm;
+ struct sock *sk = sock->sk;
+ struct alg_sock *ask = alg_sk(sk);
+
+@@ -321,7 +316,7 @@ static int hash_check_key(struct socket *sock)
+
+ err = -ENOKEY;
+ lock_sock_nested(psk, SINGLE_DEPTH_NESTING);
+- if (!tfm->has_key)
++ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ goto unlock;
+
+ if (!pask->refcnt++)
+@@ -412,41 +407,17 @@ static struct proto_ops algif_hash_ops_nokey = {
+
+ static void *hash_bind(const char *name, u32 type, u32 mask)
+ {
+- struct algif_hash_tfm *tfm;
+- struct crypto_ahash *hash;
+-
+- tfm = kzalloc(sizeof(*tfm), GFP_KERNEL);
+- if (!tfm)
+- return ERR_PTR(-ENOMEM);
+-
+- hash = crypto_alloc_ahash(name, type, mask);
+- if (IS_ERR(hash)) {
+- kfree(tfm);
+- return ERR_CAST(hash);
+- }
+-
+- tfm->hash = hash;
+-
+- return tfm;
++ return crypto_alloc_ahash(name, type, mask);
+ }
+
+ static void hash_release(void *private)
+ {
+- struct algif_hash_tfm *tfm = private;
+-
+- crypto_free_ahash(tfm->hash);
+- kfree(tfm);
++ crypto_free_ahash(private);
+ }
+
+ static int hash_setkey(void *private, const u8 *key, unsigned int keylen)
+ {
+- struct algif_hash_tfm *tfm = private;
+- int err;
+-
+- err = crypto_ahash_setkey(tfm->hash, key, keylen);
+- tfm->has_key = !err;
+-
+- return err;
++ return crypto_ahash_setkey(private, key, keylen);
+ }
+
+ static void hash_sock_destruct(struct sock *sk)
+@@ -461,11 +432,10 @@ static void hash_sock_destruct(struct sock *sk)
+
+ static int hash_accept_parent_nokey(void *private, struct sock *sk)
+ {
+- struct hash_ctx *ctx;
++ struct crypto_ahash *tfm = private;
+ struct alg_sock *ask = alg_sk(sk);
+- struct algif_hash_tfm *tfm = private;
+- struct crypto_ahash *hash = tfm->hash;
+- unsigned len = sizeof(*ctx) + crypto_ahash_reqsize(hash);
++ struct hash_ctx *ctx;
++ unsigned int len = sizeof(*ctx) + crypto_ahash_reqsize(tfm);
+
+ ctx = sock_kmalloc(sk, len, GFP_KERNEL);
+ if (!ctx)
+@@ -478,7 +448,7 @@ static int hash_accept_parent_nokey(void *private, struct sock *sk)
+
+ ask->private = ctx;
+
+- ahash_request_set_tfm(&ctx->req, hash);
++ ahash_request_set_tfm(&ctx->req, tfm);
+ ahash_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+ crypto_req_done, &ctx->wait);
+
+@@ -489,9 +459,9 @@ static int hash_accept_parent_nokey(void *private, struct sock *sk)
+
+ static int hash_accept_parent(void *private, struct sock *sk)
+ {
+- struct algif_hash_tfm *tfm = private;
++ struct crypto_ahash *tfm = private;
+
+- if (!tfm->has_key && crypto_ahash_has_setkey(tfm->hash))
++ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
+ return -ENOKEY;
+
+ return hash_accept_parent_nokey(private, sk);
+diff --git a/crypto/crc32_generic.c b/crypto/crc32_generic.c
+index aa2a25fc7482..718cbce8d169 100644
+--- a/crypto/crc32_generic.c
++++ b/crypto/crc32_generic.c
+@@ -133,6 +133,7 @@ static struct shash_alg alg = {
+ .cra_name = "crc32",
+ .cra_driver_name = "crc32-generic",
+ .cra_priority = 100,
++ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .cra_blocksize = CHKSUM_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(u32),
+ .cra_module = THIS_MODULE,
+diff --git a/crypto/crc32c_generic.c b/crypto/crc32c_generic.c
+index 4c0a0e271876..372320399622 100644
+--- a/crypto/crc32c_generic.c
++++ b/crypto/crc32c_generic.c
+@@ -146,6 +146,7 @@ static struct shash_alg alg = {
+ .cra_name = "crc32c",
+ .cra_driver_name = "crc32c-generic",
+ .cra_priority = 100,
++ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .cra_blocksize = CHKSUM_BLOCK_SIZE,
+ .cra_alignmask = 3,
+ .cra_ctxsize = sizeof(struct chksum_ctx),
+diff --git a/crypto/cryptd.c b/crypto/cryptd.c
+index bd43cf5be14c..c32b98b5daf8 100644
+--- a/crypto/cryptd.c
++++ b/crypto/cryptd.c
+@@ -893,10 +893,9 @@ static int cryptd_create_hash(struct crypto_template *tmpl, struct rtattr **tb,
+ if (err)
+ goto out_free_inst;
+
+- type = CRYPTO_ALG_ASYNC;
+- if (alg->cra_flags & CRYPTO_ALG_INTERNAL)
+- type |= CRYPTO_ALG_INTERNAL;
+- inst->alg.halg.base.cra_flags = type;
++ inst->alg.halg.base.cra_flags = CRYPTO_ALG_ASYNC |
++ (alg->cra_flags & (CRYPTO_ALG_INTERNAL |
++ CRYPTO_ALG_OPTIONAL_KEY));
+
+ inst->alg.halg.digestsize = salg->digestsize;
+ inst->alg.halg.statesize = salg->statesize;
+@@ -911,7 +910,8 @@ static int cryptd_create_hash(struct crypto_template *tmpl, struct rtattr **tb,
+ inst->alg.finup = cryptd_hash_finup_enqueue;
+ inst->alg.export = cryptd_hash_export;
+ inst->alg.import = cryptd_hash_import;
+- inst->alg.setkey = cryptd_hash_setkey;
++ if (crypto_shash_alg_has_setkey(salg))
++ inst->alg.setkey = cryptd_hash_setkey;
+ inst->alg.digest = cryptd_hash_digest_enqueue;
+
+ err = ahash_register_instance(tmpl, inst);
+diff --git a/crypto/mcryptd.c b/crypto/mcryptd.c
+index eca04d3729b3..e0732d979e3b 100644
+--- a/crypto/mcryptd.c
++++ b/crypto/mcryptd.c
+@@ -517,10 +517,9 @@ static int mcryptd_create_hash(struct crypto_template *tmpl, struct rtattr **tb,
+ if (err)
+ goto out_free_inst;
+
+- type = CRYPTO_ALG_ASYNC;
+- if (alg->cra_flags & CRYPTO_ALG_INTERNAL)
+- type |= CRYPTO_ALG_INTERNAL;
+- inst->alg.halg.base.cra_flags = type;
++ inst->alg.halg.base.cra_flags = CRYPTO_ALG_ASYNC |
++ (alg->cra_flags & (CRYPTO_ALG_INTERNAL |
++ CRYPTO_ALG_OPTIONAL_KEY));
+
+ inst->alg.halg.digestsize = halg->digestsize;
+ inst->alg.halg.statesize = halg->statesize;
+@@ -535,7 +534,8 @@ static int mcryptd_create_hash(struct crypto_template *tmpl, struct rtattr **tb,
+ inst->alg.finup = mcryptd_hash_finup_enqueue;
+ inst->alg.export = mcryptd_hash_export;
+ inst->alg.import = mcryptd_hash_import;
+- inst->alg.setkey = mcryptd_hash_setkey;
++ if (crypto_hash_alg_has_setkey(halg))
++ inst->alg.setkey = mcryptd_hash_setkey;
+ inst->alg.digest = mcryptd_hash_digest_enqueue;
+
+ err = ahash_register_instance(tmpl, inst);
+diff --git a/crypto/poly1305_generic.c b/crypto/poly1305_generic.c
+index b1c2d57dc734..ba39eb308c79 100644
+--- a/crypto/poly1305_generic.c
++++ b/crypto/poly1305_generic.c
+@@ -47,17 +47,6 @@ int crypto_poly1305_init(struct shash_desc *desc)
+ }
+ EXPORT_SYMBOL_GPL(crypto_poly1305_init);
+
+-int crypto_poly1305_setkey(struct crypto_shash *tfm,
+- const u8 *key, unsigned int keylen)
+-{
+- /* Poly1305 requires a unique key for each tag, which implies that
+- * we can't set it on the tfm that gets accessed by multiple users
+- * simultaneously. Instead we expect the key as the first 32 bytes in
+- * the update() call. */
+- return -ENOTSUPP;
+-}
+-EXPORT_SYMBOL_GPL(crypto_poly1305_setkey);
+-
+ static void poly1305_setrkey(struct poly1305_desc_ctx *dctx, const u8 *key)
+ {
+ /* r &= 0xffffffc0ffffffc0ffffffc0fffffff */
+@@ -76,6 +65,11 @@ static void poly1305_setskey(struct poly1305_desc_ctx *dctx, const u8 *key)
+ dctx->s[3] = get_unaligned_le32(key + 12);
+ }
+
++/*
++ * Poly1305 requires a unique key for each tag, which implies that we can't set
++ * it on the tfm that gets accessed by multiple users simultaneously. Instead we
++ * expect the key as the first 32 bytes in the update() call.
++ */
+ unsigned int crypto_poly1305_setdesckey(struct poly1305_desc_ctx *dctx,
+ const u8 *src, unsigned int srclen)
+ {
+@@ -281,7 +275,6 @@ static struct shash_alg poly1305_alg = {
+ .init = crypto_poly1305_init,
+ .update = crypto_poly1305_update,
+ .final = crypto_poly1305_final,
+- .setkey = crypto_poly1305_setkey,
+ .descsize = sizeof(struct poly1305_desc_ctx),
+ .base = {
+ .cra_name = "poly1305",
+diff --git a/crypto/shash.c b/crypto/shash.c
+index e849d3ee2e27..5d732c6bb4b2 100644
+--- a/crypto/shash.c
++++ b/crypto/shash.c
+@@ -58,11 +58,18 @@ int crypto_shash_setkey(struct crypto_shash *tfm, const u8 *key,
+ {
+ struct shash_alg *shash = crypto_shash_alg(tfm);
+ unsigned long alignmask = crypto_shash_alignmask(tfm);
++ int err;
+
+ if ((unsigned long)key & alignmask)
+- return shash_setkey_unaligned(tfm, key, keylen);
++ err = shash_setkey_unaligned(tfm, key, keylen);
++ else
++ err = shash->setkey(tfm, key, keylen);
++
++ if (err)
++ return err;
+
+- return shash->setkey(tfm, key, keylen);
++ crypto_shash_clear_flags(tfm, CRYPTO_TFM_NEED_KEY);
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(crypto_shash_setkey);
+
+@@ -181,6 +188,9 @@ int crypto_shash_digest(struct shash_desc *desc, const u8 *data,
+ struct shash_alg *shash = crypto_shash_alg(tfm);
+ unsigned long alignmask = crypto_shash_alignmask(tfm);
+
++ if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
++ return -ENOKEY;
++
+ if (((unsigned long)data | (unsigned long)out) & alignmask)
+ return shash_digest_unaligned(desc, data, len, out);
+
+@@ -360,7 +370,8 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
+ crt->digest = shash_async_digest;
+ crt->setkey = shash_async_setkey;
+
+- crt->has_setkey = alg->setkey != shash_no_setkey;
++ crypto_ahash_set_flags(crt, crypto_shash_get_flags(shash) &
++ CRYPTO_TFM_NEED_KEY);
+
+ if (alg->export)
+ crt->export = shash_async_export;
+@@ -375,8 +386,14 @@ int crypto_init_shash_ops_async(struct crypto_tfm *tfm)
+ static int crypto_shash_init_tfm(struct crypto_tfm *tfm)
+ {
+ struct crypto_shash *hash = __crypto_shash_cast(tfm);
++ struct shash_alg *alg = crypto_shash_alg(hash);
++
++ hash->descsize = alg->descsize;
++
++ if (crypto_shash_alg_has_setkey(alg) &&
++ !(alg->base.cra_flags & CRYPTO_ALG_OPTIONAL_KEY))
++ crypto_shash_set_flags(hash, CRYPTO_TFM_NEED_KEY);
+
+- hash->descsize = crypto_shash_alg(hash)->descsize;
+ return 0;
+ }
+
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index abeb4df4f22e..b28ce440a06f 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -1867,6 +1867,9 @@ static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc)
+ struct kernfs_node *nfit_kernfs;
+
+ nvdimm = nfit_mem->nvdimm;
++ if (!nvdimm)
++ continue;
++
+ nfit_kernfs = sysfs_get_dirent(nvdimm_kobj(nvdimm)->sd, "nfit");
+ if (nfit_kernfs)
+ nfit_mem->flags_attr = sysfs_get_dirent(nfit_kernfs,
+diff --git a/drivers/acpi/sbshc.c b/drivers/acpi/sbshc.c
+index 2fa8304171e0..7a3431018e0a 100644
+--- a/drivers/acpi/sbshc.c
++++ b/drivers/acpi/sbshc.c
+@@ -275,8 +275,8 @@ static int acpi_smbus_hc_add(struct acpi_device *device)
+ device->driver_data = hc;
+
+ acpi_ec_add_query_handler(hc->ec, hc->query_bit, NULL, smbus_alarm, hc);
+- printk(KERN_INFO PREFIX "SBS HC: EC = 0x%p, offset = 0x%0x, query_bit = 0x%0x\n",
+- hc->ec, hc->offset, hc->query_bit);
++ dev_info(&device->dev, "SBS HC: offset = 0x%0x, query_bit = 0x%0x\n",
++ hc->offset, hc->query_bit);
+
+ return 0;
+ }
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 5443cb71d7ba..44a9d630b7ac 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -268,9 +268,9 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, 0x3b23), board_ahci }, /* PCH AHCI */
+ { PCI_VDEVICE(INTEL, 0x3b24), board_ahci }, /* PCH RAID */
+ { PCI_VDEVICE(INTEL, 0x3b25), board_ahci }, /* PCH RAID */
+- { PCI_VDEVICE(INTEL, 0x3b29), board_ahci }, /* PCH AHCI */
++ { PCI_VDEVICE(INTEL, 0x3b29), board_ahci }, /* PCH M AHCI */
+ { PCI_VDEVICE(INTEL, 0x3b2b), board_ahci }, /* PCH RAID */
+- { PCI_VDEVICE(INTEL, 0x3b2c), board_ahci }, /* PCH RAID */
++ { PCI_VDEVICE(INTEL, 0x3b2c), board_ahci }, /* PCH M RAID */
+ { PCI_VDEVICE(INTEL, 0x3b2f), board_ahci }, /* PCH AHCI */
+ { PCI_VDEVICE(INTEL, 0x19b0), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19b1), board_ahci }, /* DNV AHCI */
+@@ -293,9 +293,9 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, 0x19cE), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x19cF), board_ahci }, /* DNV AHCI */
+ { PCI_VDEVICE(INTEL, 0x1c02), board_ahci }, /* CPT AHCI */
+- { PCI_VDEVICE(INTEL, 0x1c03), board_ahci }, /* CPT AHCI */
++ { PCI_VDEVICE(INTEL, 0x1c03), board_ahci }, /* CPT M AHCI */
+ { PCI_VDEVICE(INTEL, 0x1c04), board_ahci }, /* CPT RAID */
+- { PCI_VDEVICE(INTEL, 0x1c05), board_ahci }, /* CPT RAID */
++ { PCI_VDEVICE(INTEL, 0x1c05), board_ahci }, /* CPT M RAID */
+ { PCI_VDEVICE(INTEL, 0x1c06), board_ahci }, /* CPT RAID */
+ { PCI_VDEVICE(INTEL, 0x1c07), board_ahci }, /* CPT RAID */
+ { PCI_VDEVICE(INTEL, 0x1d02), board_ahci }, /* PBG AHCI */
+@@ -304,20 +304,20 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, 0x2826), board_ahci }, /* PBG RAID */
+ { PCI_VDEVICE(INTEL, 0x2323), board_ahci }, /* DH89xxCC AHCI */
+ { PCI_VDEVICE(INTEL, 0x1e02), board_ahci }, /* Panther Point AHCI */
+- { PCI_VDEVICE(INTEL, 0x1e03), board_ahci }, /* Panther Point AHCI */
++ { PCI_VDEVICE(INTEL, 0x1e03), board_ahci }, /* Panther Point M AHCI */
+ { PCI_VDEVICE(INTEL, 0x1e04), board_ahci }, /* Panther Point RAID */
+ { PCI_VDEVICE(INTEL, 0x1e05), board_ahci }, /* Panther Point RAID */
+ { PCI_VDEVICE(INTEL, 0x1e06), board_ahci }, /* Panther Point RAID */
+- { PCI_VDEVICE(INTEL, 0x1e07), board_ahci }, /* Panther Point RAID */
++ { PCI_VDEVICE(INTEL, 0x1e07), board_ahci }, /* Panther Point M RAID */
+ { PCI_VDEVICE(INTEL, 0x1e0e), board_ahci }, /* Panther Point RAID */
+ { PCI_VDEVICE(INTEL, 0x8c02), board_ahci }, /* Lynx Point AHCI */
+- { PCI_VDEVICE(INTEL, 0x8c03), board_ahci }, /* Lynx Point AHCI */
++ { PCI_VDEVICE(INTEL, 0x8c03), board_ahci }, /* Lynx Point M AHCI */
+ { PCI_VDEVICE(INTEL, 0x8c04), board_ahci }, /* Lynx Point RAID */
+- { PCI_VDEVICE(INTEL, 0x8c05), board_ahci }, /* Lynx Point RAID */
++ { PCI_VDEVICE(INTEL, 0x8c05), board_ahci }, /* Lynx Point M RAID */
+ { PCI_VDEVICE(INTEL, 0x8c06), board_ahci }, /* Lynx Point RAID */
+- { PCI_VDEVICE(INTEL, 0x8c07), board_ahci }, /* Lynx Point RAID */
++ { PCI_VDEVICE(INTEL, 0x8c07), board_ahci }, /* Lynx Point M RAID */
+ { PCI_VDEVICE(INTEL, 0x8c0e), board_ahci }, /* Lynx Point RAID */
+- { PCI_VDEVICE(INTEL, 0x8c0f), board_ahci }, /* Lynx Point RAID */
++ { PCI_VDEVICE(INTEL, 0x8c0f), board_ahci }, /* Lynx Point M RAID */
+ { PCI_VDEVICE(INTEL, 0x9c02), board_ahci }, /* Lynx Point-LP AHCI */
+ { PCI_VDEVICE(INTEL, 0x9c03), board_ahci }, /* Lynx Point-LP AHCI */
+ { PCI_VDEVICE(INTEL, 0x9c04), board_ahci }, /* Lynx Point-LP RAID */
+@@ -358,21 +358,21 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, 0x9c87), board_ahci }, /* Wildcat Point-LP RAID */
+ { PCI_VDEVICE(INTEL, 0x9c8f), board_ahci }, /* Wildcat Point-LP RAID */
+ { PCI_VDEVICE(INTEL, 0x8c82), board_ahci }, /* 9 Series AHCI */
+- { PCI_VDEVICE(INTEL, 0x8c83), board_ahci }, /* 9 Series AHCI */
++ { PCI_VDEVICE(INTEL, 0x8c83), board_ahci }, /* 9 Series M AHCI */
+ { PCI_VDEVICE(INTEL, 0x8c84), board_ahci }, /* 9 Series RAID */
+- { PCI_VDEVICE(INTEL, 0x8c85), board_ahci }, /* 9 Series RAID */
++ { PCI_VDEVICE(INTEL, 0x8c85), board_ahci }, /* 9 Series M RAID */
+ { PCI_VDEVICE(INTEL, 0x8c86), board_ahci }, /* 9 Series RAID */
+- { PCI_VDEVICE(INTEL, 0x8c87), board_ahci }, /* 9 Series RAID */
++ { PCI_VDEVICE(INTEL, 0x8c87), board_ahci }, /* 9 Series M RAID */
+ { PCI_VDEVICE(INTEL, 0x8c8e), board_ahci }, /* 9 Series RAID */
+- { PCI_VDEVICE(INTEL, 0x8c8f), board_ahci }, /* 9 Series RAID */
++ { PCI_VDEVICE(INTEL, 0x8c8f), board_ahci }, /* 9 Series M RAID */
+ { PCI_VDEVICE(INTEL, 0x9d03), board_ahci }, /* Sunrise Point-LP AHCI */
+ { PCI_VDEVICE(INTEL, 0x9d05), board_ahci }, /* Sunrise Point-LP RAID */
+ { PCI_VDEVICE(INTEL, 0x9d07), board_ahci }, /* Sunrise Point-LP RAID */
+ { PCI_VDEVICE(INTEL, 0xa102), board_ahci }, /* Sunrise Point-H AHCI */
+- { PCI_VDEVICE(INTEL, 0xa103), board_ahci }, /* Sunrise Point-H AHCI */
++ { PCI_VDEVICE(INTEL, 0xa103), board_ahci }, /* Sunrise Point-H M AHCI */
+ { PCI_VDEVICE(INTEL, 0xa105), board_ahci }, /* Sunrise Point-H RAID */
+ { PCI_VDEVICE(INTEL, 0xa106), board_ahci }, /* Sunrise Point-H RAID */
+- { PCI_VDEVICE(INTEL, 0xa107), board_ahci }, /* Sunrise Point-H RAID */
++ { PCI_VDEVICE(INTEL, 0xa107), board_ahci }, /* Sunrise Point-H M RAID */
+ { PCI_VDEVICE(INTEL, 0xa10f), board_ahci }, /* Sunrise Point-H RAID */
+ { PCI_VDEVICE(INTEL, 0x2822), board_ahci }, /* Lewisburg RAID*/
+ { PCI_VDEVICE(INTEL, 0x2823), board_ahci }, /* Lewisburg AHCI*/
+@@ -386,6 +386,11 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, 0xa206), board_ahci }, /* Lewisburg RAID*/
+ { PCI_VDEVICE(INTEL, 0xa252), board_ahci }, /* Lewisburg RAID*/
+ { PCI_VDEVICE(INTEL, 0xa256), board_ahci }, /* Lewisburg RAID*/
++ { PCI_VDEVICE(INTEL, 0xa356), board_ahci }, /* Cannon Lake PCH-H RAID */
++ { PCI_VDEVICE(INTEL, 0x0f22), board_ahci }, /* Bay Trail AHCI */
++ { PCI_VDEVICE(INTEL, 0x0f23), board_ahci }, /* Bay Trail AHCI */
++ { PCI_VDEVICE(INTEL, 0x22a3), board_ahci }, /* Cherry Trail AHCI */
++ { PCI_VDEVICE(INTEL, 0x5ae3), board_ahci }, /* Apollo Lake AHCI */
+
+ /* JMicron 360/1/3/5/6, match class to avoid IDE function */
+ { PCI_VENDOR_ID_JMICRON, PCI_ANY_ID, PCI_ANY_ID, PCI_ANY_ID,
+diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
+index 67974796c350..531a0915066b 100644
+--- a/drivers/block/pktcdvd.c
++++ b/drivers/block/pktcdvd.c
+@@ -2579,14 +2579,14 @@ static int pkt_new_dev(struct pktcdvd_device *pd, dev_t dev)
+ bdev = bdget(dev);
+ if (!bdev)
+ return -ENOMEM;
++ ret = blkdev_get(bdev, FMODE_READ | FMODE_NDELAY, NULL);
++ if (ret)
++ return ret;
+ if (!blk_queue_scsi_passthrough(bdev_get_queue(bdev))) {
+ WARN_ONCE(true, "Attempt to register a non-SCSI queue\n");
+- bdput(bdev);
++ blkdev_put(bdev, FMODE_READ | FMODE_NDELAY);
+ return -EINVAL;
+ }
+- ret = blkdev_get(bdev, FMODE_READ | FMODE_NDELAY, NULL);
+- if (ret)
+- return ret;
+
+ /* This is safe, since we have a reference from open(). */
+ __module_get(THIS_MODULE);
+@@ -2745,7 +2745,7 @@ static int pkt_setup_dev(dev_t dev, dev_t* pkt_dev)
+ pd->pkt_dev = MKDEV(pktdev_major, idx);
+ ret = pkt_new_dev(pd, dev);
+ if (ret)
+- goto out_new_dev;
++ goto out_mem2;
+
+ /* inherit events of the host device */
+ disk->events = pd->bdev->bd_disk->events;
+@@ -2763,8 +2763,6 @@ static int pkt_setup_dev(dev_t dev, dev_t* pkt_dev)
+ mutex_unlock(&ctl_mutex);
+ return 0;
+
+-out_new_dev:
+- blk_cleanup_queue(disk->queue);
+ out_mem2:
+ put_disk(disk);
+ out_mem:
+diff --git a/drivers/bluetooth/btsdio.c b/drivers/bluetooth/btsdio.c
+index c8e945d19ffe..20142bc77554 100644
+--- a/drivers/bluetooth/btsdio.c
++++ b/drivers/bluetooth/btsdio.c
+@@ -31,6 +31,7 @@
+ #include <linux/errno.h>
+ #include <linux/skbuff.h>
+
++#include <linux/mmc/host.h>
+ #include <linux/mmc/sdio_ids.h>
+ #include <linux/mmc/sdio_func.h>
+
+@@ -292,6 +293,14 @@ static int btsdio_probe(struct sdio_func *func,
+ tuple = tuple->next;
+ }
+
++ /* BCM43341 devices soldered onto the PCB (non-removable) use an
++ * uart connection for bluetooth, ignore the BT SDIO interface.
++ */
++ if (func->vendor == SDIO_VENDOR_ID_BROADCOM &&
++ func->device == SDIO_DEVICE_ID_BROADCOM_43341 &&
++ !mmc_card_is_removable(func->card->host))
++ return -ENODEV;
++
+ data = devm_kzalloc(&func->dev, sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index f7120c9eb9bd..76980e78ae56 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -23,6 +23,7 @@
+
+ #include <linux/module.h>
+ #include <linux/usb.h>
++#include <linux/usb/quirks.h>
+ #include <linux/firmware.h>
+ #include <linux/of_device.h>
+ #include <linux/of_irq.h>
+@@ -387,9 +388,8 @@ static const struct usb_device_id blacklist_table[] = {
+ #define BTUSB_FIRMWARE_LOADED 7
+ #define BTUSB_FIRMWARE_FAILED 8
+ #define BTUSB_BOOTING 9
+-#define BTUSB_RESET_RESUME 10
+-#define BTUSB_DIAG_RUNNING 11
+-#define BTUSB_OOB_WAKE_ENABLED 12
++#define BTUSB_DIAG_RUNNING 10
++#define BTUSB_OOB_WAKE_ENABLED 11
+
+ struct btusb_data {
+ struct hci_dev *hdev;
+@@ -3120,9 +3120,9 @@ static int btusb_probe(struct usb_interface *intf,
+
+ /* QCA Rome devices lose their updated firmware over suspend,
+ * but the USB hub doesn't notice any status change.
+- * Explicitly request a device reset on resume.
++ * explicitly request a device reset on resume.
+ */
+- set_bit(BTUSB_RESET_RESUME, &data->flags);
++ interface_to_usbdev(intf)->quirks |= USB_QUIRK_RESET_RESUME;
+ }
+
+ #ifdef CONFIG_BT_HCIBTUSB_RTL
+@@ -3133,7 +3133,7 @@ static int btusb_probe(struct usb_interface *intf,
+ * but the USB hub doesn't notice any status change.
+ * Explicitly request a device reset on resume.
+ */
+- set_bit(BTUSB_RESET_RESUME, &data->flags);
++ interface_to_usbdev(intf)->quirks |= USB_QUIRK_RESET_RESUME;
+ }
+ #endif
+
+@@ -3299,14 +3299,6 @@ static int btusb_suspend(struct usb_interface *intf, pm_message_t message)
+ enable_irq(data->oob_wake_irq);
+ }
+
+- /* Optionally request a device reset on resume, but only when
+- * wakeups are disabled. If wakeups are enabled we assume the
+- * device will stay powered up throughout suspend.
+- */
+- if (test_bit(BTUSB_RESET_RESUME, &data->flags) &&
+- !device_may_wakeup(&data->udev->dev))
+- data->udev->reset_resume = 1;
+-
+ return 0;
+ }
+
+diff --git a/drivers/char/ipmi/ipmi_dmi.c b/drivers/char/ipmi/ipmi_dmi.c
+index ab78b3be7e33..c5112b17d7ea 100644
+--- a/drivers/char/ipmi/ipmi_dmi.c
++++ b/drivers/char/ipmi/ipmi_dmi.c
+@@ -106,7 +106,10 @@ static void __init dmi_add_platform_ipmi(unsigned long base_addr,
+ pr_err("ipmi:dmi: Error allocation IPMI platform device\n");
+ return;
+ }
+- pdev->driver_override = override;
++ pdev->driver_override = kasprintf(GFP_KERNEL, "%s",
++ override);
++ if (!pdev->driver_override)
++ goto err;
+
+ if (type == IPMI_DMI_TYPE_SSIF) {
+ set_prop_entry(p[pidx++], "i2c-addr", u16, base_addr);
+diff --git a/drivers/clocksource/timer-stm32.c b/drivers/clocksource/timer-stm32.c
+index 8f2423789ba9..4bfeb9929ab2 100644
+--- a/drivers/clocksource/timer-stm32.c
++++ b/drivers/clocksource/timer-stm32.c
+@@ -106,6 +106,10 @@ static int __init stm32_clockevent_init(struct device_node *np)
+ unsigned long rate, max_delta;
+ int irq, ret, bits, prescaler = 1;
+
++ data = kmemdup(&clock_event_ddata, sizeof(*data), GFP_KERNEL);
++ if (!data)
++ return -ENOMEM;
++
+ clk = of_clk_get(np, 0);
+ if (IS_ERR(clk)) {
+ ret = PTR_ERR(clk);
+@@ -156,8 +160,8 @@ static int __init stm32_clockevent_init(struct device_node *np)
+
+ writel_relaxed(prescaler - 1, data->base + TIM_PSC);
+ writel_relaxed(TIM_EGR_UG, data->base + TIM_EGR);
+- writel_relaxed(TIM_DIER_UIE, data->base + TIM_DIER);
+ writel_relaxed(0, data->base + TIM_SR);
++ writel_relaxed(TIM_DIER_UIE, data->base + TIM_DIER);
+
+ data->periodic_top = DIV_ROUND_CLOSEST(rate, prescaler * HZ);
+
+@@ -184,6 +188,7 @@ static int __init stm32_clockevent_init(struct device_node *np)
+ err_clk_enable:
+ clk_put(clk);
+ err_clk_get:
++ kfree(data);
+ return ret;
+ }
+
+diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
+index ecc56e26f8f6..3b585e4bfac5 100644
+--- a/drivers/cpufreq/cpufreq-dt-platdev.c
++++ b/drivers/cpufreq/cpufreq-dt-platdev.c
+@@ -108,6 +108,14 @@ static const struct of_device_id blacklist[] __initconst = {
+
+ { .compatible = "marvell,armadaxp", },
+
++ { .compatible = "mediatek,mt2701", },
++ { .compatible = "mediatek,mt2712", },
++ { .compatible = "mediatek,mt7622", },
++ { .compatible = "mediatek,mt7623", },
++ { .compatible = "mediatek,mt817x", },
++ { .compatible = "mediatek,mt8173", },
++ { .compatible = "mediatek,mt8176", },
++
+ { .compatible = "nvidia,tegra124", },
+
+ { .compatible = "st,stih407", },
+diff --git a/drivers/crypto/bfin_crc.c b/drivers/crypto/bfin_crc.c
+index a118b9bed669..bfbf8bf77f03 100644
+--- a/drivers/crypto/bfin_crc.c
++++ b/drivers/crypto/bfin_crc.c
+@@ -494,7 +494,8 @@ static struct ahash_alg algs = {
+ .cra_driver_name = DRIVER_NAME,
+ .cra_priority = 100,
+ .cra_flags = CRYPTO_ALG_TYPE_AHASH |
+- CRYPTO_ALG_ASYNC,
++ CRYPTO_ALG_ASYNC |
++ CRYPTO_ALG_OPTIONAL_KEY,
+ .cra_blocksize = CHKSUM_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct bfin_crypto_crc_ctx),
+ .cra_alignmask = 3,
+diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
+index 027e121c6f70..e1d4ae1153c4 100644
+--- a/drivers/crypto/caam/ctrl.c
++++ b/drivers/crypto/caam/ctrl.c
+@@ -228,12 +228,16 @@ static int instantiate_rng(struct device *ctrldev, int state_handle_mask,
+ * without any error (HW optimizations for later
+ * CAAM eras), then try again.
+ */
++ if (ret)
++ break;
++
+ rdsta_val = rd_reg32(&ctrl->r4tst[0].rdsta) & RDSTA_IFMASK;
+ if ((status && status != JRSTA_SSRC_JUMP_HALT_CC) ||
+- !(rdsta_val & (1 << sh_idx)))
++ !(rdsta_val & (1 << sh_idx))) {
+ ret = -EAGAIN;
+- if (ret)
+ break;
++ }
++
+ dev_info(ctrldev, "Instantiated RNG4 SH%d\n", sh_idx);
+ /* Clear the contents before recreating the descriptor */
+ memset(desc, 0x00, CAAM_CMD_SZ * 7);
+diff --git a/drivers/crypto/stm32/stm32_crc32.c b/drivers/crypto/stm32/stm32_crc32.c
+index 090582baecfe..8f09b8430893 100644
+--- a/drivers/crypto/stm32/stm32_crc32.c
++++ b/drivers/crypto/stm32/stm32_crc32.c
+@@ -208,6 +208,7 @@ static struct shash_alg algs[] = {
+ .cra_name = "crc32",
+ .cra_driver_name = DRIVER_NAME,
+ .cra_priority = 200,
++ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .cra_blocksize = CHKSUM_BLOCK_SIZE,
+ .cra_alignmask = 3,
+ .cra_ctxsize = sizeof(struct stm32_crc_ctx),
+@@ -229,6 +230,7 @@ static struct shash_alg algs[] = {
+ .cra_name = "crc32c",
+ .cra_driver_name = DRIVER_NAME,
+ .cra_priority = 200,
++ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .cra_blocksize = CHKSUM_BLOCK_SIZE,
+ .cra_alignmask = 3,
+ .cra_ctxsize = sizeof(struct stm32_crc_ctx),
+diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
+index 9c80e0cb1664..6882fa2f8bad 100644
+--- a/drivers/crypto/talitos.c
++++ b/drivers/crypto/talitos.c
+@@ -1138,6 +1138,10 @@ static int talitos_sg_map(struct device *dev, struct scatterlist *src,
+ struct talitos_private *priv = dev_get_drvdata(dev);
+ bool is_sec1 = has_ftr_sec1(priv);
+
++ if (!src) {
++ to_talitos_ptr(ptr, 0, 0, is_sec1);
++ return 1;
++ }
+ if (sg_count == 1) {
+ to_talitos_ptr(ptr, sg_dma_address(src) + offset, len, is_sec1);
+ return sg_count;
+diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
+index ec5f9d2bc820..80cc2be6483c 100644
+--- a/drivers/dma/dmatest.c
++++ b/drivers/dma/dmatest.c
+@@ -355,7 +355,7 @@ static void dmatest_callback(void *arg)
+ {
+ struct dmatest_done *done = arg;
+ struct dmatest_thread *thread =
+- container_of(arg, struct dmatest_thread, done_wait);
++ container_of(done, struct dmatest_thread, test_done);
+ if (!thread->done) {
+ done->done = true;
+ wake_up_all(done->wait);
+diff --git a/drivers/edac/octeon_edac-lmc.c b/drivers/edac/octeon_edac-lmc.c
+index 9c1ffe3e912b..aeb222ca3ed1 100644
+--- a/drivers/edac/octeon_edac-lmc.c
++++ b/drivers/edac/octeon_edac-lmc.c
+@@ -78,6 +78,7 @@ static void octeon_lmc_edac_poll_o2(struct mem_ctl_info *mci)
+ if (!pvt->inject)
+ int_reg.u64 = cvmx_read_csr(CVMX_LMCX_INT(mci->mc_idx));
+ else {
++ int_reg.u64 = 0;
+ if (pvt->error_type == 1)
+ int_reg.s.sec_err = 1;
+ if (pvt->error_type == 2)
+diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
+index d687ca3d5049..c80ec1d03274 100644
+--- a/drivers/firmware/psci.c
++++ b/drivers/firmware/psci.c
+@@ -59,7 +59,10 @@ bool psci_tos_resident_on(int cpu)
+ return cpu == resident_cpu;
+ }
+
+-struct psci_operations psci_ops;
++struct psci_operations psci_ops = {
++ .conduit = PSCI_CONDUIT_NONE,
++ .smccc_version = SMCCC_VERSION_1_0,
++};
+
+ typedef unsigned long (psci_fn)(unsigned long, unsigned long,
+ unsigned long, unsigned long);
+@@ -210,6 +213,22 @@ static unsigned long psci_migrate_info_up_cpu(void)
+ 0, 0, 0);
+ }
+
++static void set_conduit(enum psci_conduit conduit)
++{
++ switch (conduit) {
++ case PSCI_CONDUIT_HVC:
++ invoke_psci_fn = __invoke_psci_fn_hvc;
++ break;
++ case PSCI_CONDUIT_SMC:
++ invoke_psci_fn = __invoke_psci_fn_smc;
++ break;
++ default:
++ WARN(1, "Unexpected PSCI conduit %d\n", conduit);
++ }
++
++ psci_ops.conduit = conduit;
++}
++
+ static int get_set_conduit_method(struct device_node *np)
+ {
+ const char *method;
+@@ -222,9 +241,9 @@ static int get_set_conduit_method(struct device_node *np)
+ }
+
+ if (!strcmp("hvc", method)) {
+- invoke_psci_fn = __invoke_psci_fn_hvc;
++ set_conduit(PSCI_CONDUIT_HVC);
+ } else if (!strcmp("smc", method)) {
+- invoke_psci_fn = __invoke_psci_fn_smc;
++ set_conduit(PSCI_CONDUIT_SMC);
+ } else {
+ pr_warn("invalid \"method\" property: %s\n", method);
+ return -EINVAL;
+@@ -493,9 +512,36 @@ static void __init psci_init_migrate(void)
+ pr_info("Trusted OS resident on physical CPU 0x%lx\n", cpuid);
+ }
+
++static void __init psci_init_smccc(void)
++{
++ u32 ver = ARM_SMCCC_VERSION_1_0;
++ int feature;
++
++ feature = psci_features(ARM_SMCCC_VERSION_FUNC_ID);
++
++ if (feature != PSCI_RET_NOT_SUPPORTED) {
++ u32 ret;
++ ret = invoke_psci_fn(ARM_SMCCC_VERSION_FUNC_ID, 0, 0, 0);
++ if (ret == ARM_SMCCC_VERSION_1_1) {
++ psci_ops.smccc_version = SMCCC_VERSION_1_1;
++ ver = ret;
++ }
++ }
++
++ /*
++ * Conveniently, the SMCCC and PSCI versions are encoded the
++ * same way. No, this isn't accidental.
++ */
++ pr_info("SMC Calling Convention v%d.%d\n",
++ PSCI_VERSION_MAJOR(ver), PSCI_VERSION_MINOR(ver));
++
++}
++
+ static void __init psci_0_2_set_functions(void)
+ {
+ pr_info("Using standard PSCI v0.2 function IDs\n");
++ psci_ops.get_version = psci_get_version;
++
+ psci_function_id[PSCI_FN_CPU_SUSPEND] =
+ PSCI_FN_NATIVE(0_2, CPU_SUSPEND);
+ psci_ops.cpu_suspend = psci_cpu_suspend;
+@@ -539,6 +585,7 @@ static int __init psci_probe(void)
+ psci_init_migrate();
+
+ if (PSCI_VERSION_MAJOR(ver) >= 1) {
++ psci_init_smccc();
+ psci_init_cpu_suspend();
+ psci_init_system_suspend();
+ }
+@@ -652,9 +699,9 @@ int __init psci_acpi_init(void)
+ pr_info("probing for conduit method from ACPI.\n");
+
+ if (acpi_psci_use_hvc())
+- invoke_psci_fn = __invoke_psci_fn_hvc;
++ set_conduit(PSCI_CONDUIT_HVC);
+ else
+- invoke_psci_fn = __invoke_psci_fn_smc;
++ set_conduit(PSCI_CONDUIT_SMC);
+
+ return psci_probe();
+ }
+diff --git a/drivers/gpu/drm/i915/i915_pci.c b/drivers/gpu/drm/i915/i915_pci.c
+index 6458c309c039..756aad504ed5 100644
+--- a/drivers/gpu/drm/i915/i915_pci.c
++++ b/drivers/gpu/drm/i915/i915_pci.c
+@@ -74,19 +74,19 @@
+ GEN_DEFAULT_PAGE_SIZES, \
+ CURSOR_OFFSETS
+
+-static const struct intel_device_info intel_i830_info __initconst = {
++static const struct intel_device_info intel_i830_info = {
+ GEN2_FEATURES,
+ .platform = INTEL_I830,
+ .is_mobile = 1, .cursor_needs_physical = 1,
+ .num_pipes = 2, /* legal, last one wins */
+ };
+
+-static const struct intel_device_info intel_i845g_info __initconst = {
++static const struct intel_device_info intel_i845g_info = {
+ GEN2_FEATURES,
+ .platform = INTEL_I845G,
+ };
+
+-static const struct intel_device_info intel_i85x_info __initconst = {
++static const struct intel_device_info intel_i85x_info = {
+ GEN2_FEATURES,
+ .platform = INTEL_I85X, .is_mobile = 1,
+ .num_pipes = 2, /* legal, last one wins */
+@@ -94,7 +94,7 @@ static const struct intel_device_info intel_i85x_info __initconst = {
+ .has_fbc = 1,
+ };
+
+-static const struct intel_device_info intel_i865g_info __initconst = {
++static const struct intel_device_info intel_i865g_info = {
+ GEN2_FEATURES,
+ .platform = INTEL_I865G,
+ };
+@@ -108,7 +108,7 @@ static const struct intel_device_info intel_i865g_info __initconst = {
+ GEN_DEFAULT_PAGE_SIZES, \
+ CURSOR_OFFSETS
+
+-static const struct intel_device_info intel_i915g_info __initconst = {
++static const struct intel_device_info intel_i915g_info = {
+ GEN3_FEATURES,
+ .platform = INTEL_I915G, .cursor_needs_physical = 1,
+ .has_overlay = 1, .overlay_needs_physical = 1,
+@@ -116,7 +116,7 @@ static const struct intel_device_info intel_i915g_info __initconst = {
+ .unfenced_needs_alignment = 1,
+ };
+
+-static const struct intel_device_info intel_i915gm_info __initconst = {
++static const struct intel_device_info intel_i915gm_info = {
+ GEN3_FEATURES,
+ .platform = INTEL_I915GM,
+ .is_mobile = 1,
+@@ -128,7 +128,7 @@ static const struct intel_device_info intel_i915gm_info __initconst = {
+ .unfenced_needs_alignment = 1,
+ };
+
+-static const struct intel_device_info intel_i945g_info __initconst = {
++static const struct intel_device_info intel_i945g_info = {
+ GEN3_FEATURES,
+ .platform = INTEL_I945G,
+ .has_hotplug = 1, .cursor_needs_physical = 1,
+@@ -137,7 +137,7 @@ static const struct intel_device_info intel_i945g_info __initconst = {
+ .unfenced_needs_alignment = 1,
+ };
+
+-static const struct intel_device_info intel_i945gm_info __initconst = {
++static const struct intel_device_info intel_i945gm_info = {
+ GEN3_FEATURES,
+ .platform = INTEL_I945GM, .is_mobile = 1,
+ .has_hotplug = 1, .cursor_needs_physical = 1,
+@@ -148,14 +148,14 @@ static const struct intel_device_info intel_i945gm_info __initconst = {
+ .unfenced_needs_alignment = 1,
+ };
+
+-static const struct intel_device_info intel_g33_info __initconst = {
++static const struct intel_device_info intel_g33_info = {
+ GEN3_FEATURES,
+ .platform = INTEL_G33,
+ .has_hotplug = 1,
+ .has_overlay = 1,
+ };
+
+-static const struct intel_device_info intel_pineview_info __initconst = {
++static const struct intel_device_info intel_pineview_info = {
+ GEN3_FEATURES,
+ .platform = INTEL_PINEVIEW, .is_mobile = 1,
+ .has_hotplug = 1,
+@@ -172,7 +172,7 @@ static const struct intel_device_info intel_pineview_info __initconst = {
+ GEN_DEFAULT_PAGE_SIZES, \
+ CURSOR_OFFSETS
+
+-static const struct intel_device_info intel_i965g_info __initconst = {
++static const struct intel_device_info intel_i965g_info = {
+ GEN4_FEATURES,
+ .platform = INTEL_I965G,
+ .has_overlay = 1,
+@@ -180,7 +180,7 @@ static const struct intel_device_info intel_i965g_info __initconst = {
+ .has_snoop = false,
+ };
+
+-static const struct intel_device_info intel_i965gm_info __initconst = {
++static const struct intel_device_info intel_i965gm_info = {
+ GEN4_FEATURES,
+ .platform = INTEL_I965GM,
+ .is_mobile = 1, .has_fbc = 1,
+@@ -190,13 +190,13 @@ static const struct intel_device_info intel_i965gm_info __initconst = {
+ .has_snoop = false,
+ };
+
+-static const struct intel_device_info intel_g45_info __initconst = {
++static const struct intel_device_info intel_g45_info = {
+ GEN4_FEATURES,
+ .platform = INTEL_G45,
+ .ring_mask = RENDER_RING | BSD_RING,
+ };
+
+-static const struct intel_device_info intel_gm45_info __initconst = {
++static const struct intel_device_info intel_gm45_info = {
+ GEN4_FEATURES,
+ .platform = INTEL_GM45,
+ .is_mobile = 1, .has_fbc = 1,
+@@ -213,12 +213,12 @@ static const struct intel_device_info intel_gm45_info __initconst = {
+ GEN_DEFAULT_PAGE_SIZES, \
+ CURSOR_OFFSETS
+
+-static const struct intel_device_info intel_ironlake_d_info __initconst = {
++static const struct intel_device_info intel_ironlake_d_info = {
+ GEN5_FEATURES,
+ .platform = INTEL_IRONLAKE,
+ };
+
+-static const struct intel_device_info intel_ironlake_m_info __initconst = {
++static const struct intel_device_info intel_ironlake_m_info = {
+ GEN5_FEATURES,
+ .platform = INTEL_IRONLAKE,
+ .is_mobile = 1, .has_fbc = 1,
+@@ -241,12 +241,12 @@ static const struct intel_device_info intel_ironlake_m_info __initconst = {
+ GEN6_FEATURES, \
+ .platform = INTEL_SANDYBRIDGE
+
+-static const struct intel_device_info intel_sandybridge_d_gt1_info __initconst = {
++static const struct intel_device_info intel_sandybridge_d_gt1_info = {
+ SNB_D_PLATFORM,
+ .gt = 1,
+ };
+
+-static const struct intel_device_info intel_sandybridge_d_gt2_info __initconst = {
++static const struct intel_device_info intel_sandybridge_d_gt2_info = {
+ SNB_D_PLATFORM,
+ .gt = 2,
+ };
+@@ -257,12 +257,12 @@ static const struct intel_device_info intel_sandybridge_d_gt2_info __initconst =
+ .is_mobile = 1
+
+
+-static const struct intel_device_info intel_sandybridge_m_gt1_info __initconst = {
++static const struct intel_device_info intel_sandybridge_m_gt1_info = {
+ SNB_M_PLATFORM,
+ .gt = 1,
+ };
+
+-static const struct intel_device_info intel_sandybridge_m_gt2_info __initconst = {
++static const struct intel_device_info intel_sandybridge_m_gt2_info = {
+ SNB_M_PLATFORM,
+ .gt = 2,
+ };
+@@ -286,12 +286,12 @@ static const struct intel_device_info intel_sandybridge_m_gt2_info __initconst =
+ .platform = INTEL_IVYBRIDGE, \
+ .has_l3_dpf = 1
+
+-static const struct intel_device_info intel_ivybridge_d_gt1_info __initconst = {
++static const struct intel_device_info intel_ivybridge_d_gt1_info = {
+ IVB_D_PLATFORM,
+ .gt = 1,
+ };
+
+-static const struct intel_device_info intel_ivybridge_d_gt2_info __initconst = {
++static const struct intel_device_info intel_ivybridge_d_gt2_info = {
+ IVB_D_PLATFORM,
+ .gt = 2,
+ };
+@@ -302,17 +302,17 @@ static const struct intel_device_info intel_ivybridge_d_gt2_info __initconst = {
+ .is_mobile = 1, \
+ .has_l3_dpf = 1
+
+-static const struct intel_device_info intel_ivybridge_m_gt1_info __initconst = {
++static const struct intel_device_info intel_ivybridge_m_gt1_info = {
+ IVB_M_PLATFORM,
+ .gt = 1,
+ };
+
+-static const struct intel_device_info intel_ivybridge_m_gt2_info __initconst = {
++static const struct intel_device_info intel_ivybridge_m_gt2_info = {
+ IVB_M_PLATFORM,
+ .gt = 2,
+ };
+
+-static const struct intel_device_info intel_ivybridge_q_info __initconst = {
++static const struct intel_device_info intel_ivybridge_q_info = {
+ GEN7_FEATURES,
+ .platform = INTEL_IVYBRIDGE,
+ .gt = 2,
+@@ -320,7 +320,7 @@ static const struct intel_device_info intel_ivybridge_q_info __initconst = {
+ .has_l3_dpf = 1,
+ };
+
+-static const struct intel_device_info intel_valleyview_info __initconst = {
++static const struct intel_device_info intel_valleyview_info = {
+ .platform = INTEL_VALLEYVIEW,
+ .gen = 7,
+ .is_lp = 1,
+@@ -356,17 +356,17 @@ static const struct intel_device_info intel_valleyview_info __initconst = {
+ .platform = INTEL_HASWELL, \
+ .has_l3_dpf = 1
+
+-static const struct intel_device_info intel_haswell_gt1_info __initconst = {
++static const struct intel_device_info intel_haswell_gt1_info = {
+ HSW_PLATFORM,
+ .gt = 1,
+ };
+
+-static const struct intel_device_info intel_haswell_gt2_info __initconst = {
++static const struct intel_device_info intel_haswell_gt2_info = {
+ HSW_PLATFORM,
+ .gt = 2,
+ };
+
+-static const struct intel_device_info intel_haswell_gt3_info __initconst = {
++static const struct intel_device_info intel_haswell_gt3_info = {
+ HSW_PLATFORM,
+ .gt = 3,
+ };
+@@ -386,17 +386,17 @@ static const struct intel_device_info intel_haswell_gt3_info __initconst = {
+ .gen = 8, \
+ .platform = INTEL_BROADWELL
+
+-static const struct intel_device_info intel_broadwell_gt1_info __initconst = {
++static const struct intel_device_info intel_broadwell_gt1_info = {
+ BDW_PLATFORM,
+ .gt = 1,
+ };
+
+-static const struct intel_device_info intel_broadwell_gt2_info __initconst = {
++static const struct intel_device_info intel_broadwell_gt2_info = {
+ BDW_PLATFORM,
+ .gt = 2,
+ };
+
+-static const struct intel_device_info intel_broadwell_rsvd_info __initconst = {
++static const struct intel_device_info intel_broadwell_rsvd_info = {
+ BDW_PLATFORM,
+ .gt = 3,
+ /* According to the device ID those devices are GT3, they were
+@@ -404,13 +404,13 @@ static const struct intel_device_info intel_broadwell_rsvd_info __initconst = {
+ */
+ };
+
+-static const struct intel_device_info intel_broadwell_gt3_info __initconst = {
++static const struct intel_device_info intel_broadwell_gt3_info = {
+ BDW_PLATFORM,
+ .gt = 3,
+ .ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING,
+ };
+
+-static const struct intel_device_info intel_cherryview_info __initconst = {
++static const struct intel_device_info intel_cherryview_info = {
+ .gen = 8, .num_pipes = 3,
+ .has_hotplug = 1,
+ .is_lp = 1,
+@@ -453,12 +453,12 @@ static const struct intel_device_info intel_cherryview_info __initconst = {
+ .gen = 9, \
+ .platform = INTEL_SKYLAKE
+
+-static const struct intel_device_info intel_skylake_gt1_info __initconst = {
++static const struct intel_device_info intel_skylake_gt1_info = {
+ SKL_PLATFORM,
+ .gt = 1,
+ };
+
+-static const struct intel_device_info intel_skylake_gt2_info __initconst = {
++static const struct intel_device_info intel_skylake_gt2_info = {
+ SKL_PLATFORM,
+ .gt = 2,
+ };
+@@ -468,12 +468,12 @@ static const struct intel_device_info intel_skylake_gt2_info __initconst = {
+ .ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING
+
+
+-static const struct intel_device_info intel_skylake_gt3_info __initconst = {
++static const struct intel_device_info intel_skylake_gt3_info = {
+ SKL_GT3_PLUS_PLATFORM,
+ .gt = 3,
+ };
+
+-static const struct intel_device_info intel_skylake_gt4_info __initconst = {
++static const struct intel_device_info intel_skylake_gt4_info = {
+ SKL_GT3_PLUS_PLATFORM,
+ .gt = 4,
+ };
+@@ -509,13 +509,13 @@ static const struct intel_device_info intel_skylake_gt4_info __initconst = {
+ IVB_CURSOR_OFFSETS, \
+ BDW_COLORS
+
+-static const struct intel_device_info intel_broxton_info __initconst = {
++static const struct intel_device_info intel_broxton_info = {
+ GEN9_LP_FEATURES,
+ .platform = INTEL_BROXTON,
+ .ddb_size = 512,
+ };
+
+-static const struct intel_device_info intel_geminilake_info __initconst = {
++static const struct intel_device_info intel_geminilake_info = {
+ GEN9_LP_FEATURES,
+ .platform = INTEL_GEMINILAKE,
+ .ddb_size = 1024,
+@@ -527,17 +527,17 @@ static const struct intel_device_info intel_geminilake_info __initconst = {
+ .gen = 9, \
+ .platform = INTEL_KABYLAKE
+
+-static const struct intel_device_info intel_kabylake_gt1_info __initconst = {
++static const struct intel_device_info intel_kabylake_gt1_info = {
+ KBL_PLATFORM,
+ .gt = 1,
+ };
+
+-static const struct intel_device_info intel_kabylake_gt2_info __initconst = {
++static const struct intel_device_info intel_kabylake_gt2_info = {
+ KBL_PLATFORM,
+ .gt = 2,
+ };
+
+-static const struct intel_device_info intel_kabylake_gt3_info __initconst = {
++static const struct intel_device_info intel_kabylake_gt3_info = {
+ KBL_PLATFORM,
+ .gt = 3,
+ .ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING,
+@@ -548,17 +548,17 @@ static const struct intel_device_info intel_kabylake_gt3_info __initconst = {
+ .gen = 9, \
+ .platform = INTEL_COFFEELAKE
+
+-static const struct intel_device_info intel_coffeelake_gt1_info __initconst = {
++static const struct intel_device_info intel_coffeelake_gt1_info = {
+ CFL_PLATFORM,
+ .gt = 1,
+ };
+
+-static const struct intel_device_info intel_coffeelake_gt2_info __initconst = {
++static const struct intel_device_info intel_coffeelake_gt2_info = {
+ CFL_PLATFORM,
+ .gt = 2,
+ };
+
+-static const struct intel_device_info intel_coffeelake_gt3_info __initconst = {
++static const struct intel_device_info intel_coffeelake_gt3_info = {
+ CFL_PLATFORM,
+ .gt = 3,
+ .ring_mask = RENDER_RING | BSD_RING | BLT_RING | VEBOX_RING | BSD2_RING,
+@@ -569,7 +569,7 @@ static const struct intel_device_info intel_coffeelake_gt3_info __initconst = {
+ .ddb_size = 1024, \
+ GLK_COLORS
+
+-static const struct intel_device_info intel_cannonlake_gt2_info __initconst = {
++static const struct intel_device_info intel_cannonlake_gt2_info = {
+ GEN10_FEATURES,
+ .is_alpha_support = 1,
+ .platform = INTEL_CANNONLAKE,
+diff --git a/drivers/gpu/drm/i915/intel_dp.c b/drivers/gpu/drm/i915/intel_dp.c
+index 158438bb0389..add4b2434aa3 100644
+--- a/drivers/gpu/drm/i915/intel_dp.c
++++ b/drivers/gpu/drm/i915/intel_dp.c
+@@ -5336,6 +5336,12 @@ intel_dp_init_panel_power_sequencer(struct drm_device *dev,
+ */
+ final->t8 = 1;
+ final->t9 = 1;
++
++ /*
++ * HW has only a 100msec granularity for t11_t12 so round it up
++ * accordingly.
++ */
++ final->t11_t12 = roundup(final->t11_t12, 100 * 10);
+ }
+
+ static void
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 0c3f608131cf..b5f85d6f6bef 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -2643,7 +2643,6 @@ static const struct hid_device_id hid_ignore_list[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_DELORME, USB_DEVICE_ID_DELORME_EARTHMATE) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_DELORME, USB_DEVICE_ID_DELORME_EM_LT20) },
+ { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, 0x0400) },
+- { HID_I2C_DEVICE(USB_VENDOR_ID_ELAN, 0x0401) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ESSENTIAL_REALITY, USB_DEVICE_ID_ESSENTIAL_REALITY_P5) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ETT, USB_DEVICE_ID_TC5UH) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_ETT, USB_DEVICE_ID_TC4UM) },
+@@ -2913,6 +2912,17 @@ bool hid_ignore(struct hid_device *hdev)
+ strncmp(hdev->name, "www.masterkit.ru MA901", 22) == 0)
+ return true;
+ break;
++ case USB_VENDOR_ID_ELAN:
++ /*
++ * Many Elan devices have a product id of 0x0401 and are handled
++ * by the elan_i2c input driver. But the ACPI HID ELAN0800 dev
++ * is not (and cannot be) handled by that driver ->
++ * Ignore all 0x0401 devs except for the ELAN0800 dev.
++ */
++ if (hdev->product == 0x0401 &&
++ strncmp(hdev->name, "ELAN0800", 8) != 0)
++ return true;
++ break;
+ }
+
+ if (hdev->type == HID_TYPE_USBMOUSE &&
+diff --git a/drivers/media/dvb-core/dvb_frontend.c b/drivers/media/dvb-core/dvb_frontend.c
+index 2afaa8226342..46f977177faf 100644
+--- a/drivers/media/dvb-core/dvb_frontend.c
++++ b/drivers/media/dvb-core/dvb_frontend.c
+@@ -2110,7 +2110,7 @@ static int dvb_frontend_handle_ioctl(struct file *file,
+ struct dvb_frontend *fe = dvbdev->priv;
+ struct dvb_frontend_private *fepriv = fe->frontend_priv;
+ struct dtv_frontend_properties *c = &fe->dtv_property_cache;
+- int i, err;
++ int i, err = -EOPNOTSUPP;
+
+ dev_dbg(fe->dvb->device, "%s:\n", __func__);
+
+@@ -2145,6 +2145,7 @@ static int dvb_frontend_handle_ioctl(struct file *file,
+ }
+ }
+ kfree(tvp);
++ err = 0;
+ break;
+ }
+ case FE_GET_PROPERTY: {
+@@ -2196,6 +2197,7 @@ static int dvb_frontend_handle_ioctl(struct file *file,
+ return -EFAULT;
+ }
+ kfree(tvp);
++ err = 0;
+ break;
+ }
+
+diff --git a/drivers/media/dvb-frontends/ascot2e.c b/drivers/media/dvb-frontends/ascot2e.c
+index 0ee0df53b91b..79d5d89bc95e 100644
+--- a/drivers/media/dvb-frontends/ascot2e.c
++++ b/drivers/media/dvb-frontends/ascot2e.c
+@@ -155,7 +155,9 @@ static int ascot2e_write_regs(struct ascot2e_priv *priv,
+
+ static int ascot2e_write_reg(struct ascot2e_priv *priv, u8 reg, u8 val)
+ {
+- return ascot2e_write_regs(priv, reg, &val, 1);
++ u8 tmp = val; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
++
++ return ascot2e_write_regs(priv, reg, &tmp, 1);
+ }
+
+ static int ascot2e_read_regs(struct ascot2e_priv *priv,
+diff --git a/drivers/media/dvb-frontends/cxd2841er.c b/drivers/media/dvb-frontends/cxd2841er.c
+index 48ee9bc00c06..ccbd84fd6428 100644
+--- a/drivers/media/dvb-frontends/cxd2841er.c
++++ b/drivers/media/dvb-frontends/cxd2841er.c
+@@ -257,7 +257,9 @@ static int cxd2841er_write_regs(struct cxd2841er_priv *priv,
+ static int cxd2841er_write_reg(struct cxd2841er_priv *priv,
+ u8 addr, u8 reg, u8 val)
+ {
+- return cxd2841er_write_regs(priv, addr, reg, &val, 1);
++ u8 tmp = val; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
++
++ return cxd2841er_write_regs(priv, addr, reg, &tmp, 1);
+ }
+
+ static int cxd2841er_read_regs(struct cxd2841er_priv *priv,
+diff --git a/drivers/media/dvb-frontends/helene.c b/drivers/media/dvb-frontends/helene.c
+index 4bf5a551ba40..2ab8d83e5576 100644
+--- a/drivers/media/dvb-frontends/helene.c
++++ b/drivers/media/dvb-frontends/helene.c
+@@ -331,7 +331,9 @@ static int helene_write_regs(struct helene_priv *priv,
+
+ static int helene_write_reg(struct helene_priv *priv, u8 reg, u8 val)
+ {
+- return helene_write_regs(priv, reg, &val, 1);
++ u8 tmp = val; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
++
++ return helene_write_regs(priv, reg, &tmp, 1);
+ }
+
+ static int helene_read_regs(struct helene_priv *priv,
+diff --git a/drivers/media/dvb-frontends/horus3a.c b/drivers/media/dvb-frontends/horus3a.c
+index 68d759c4c52e..5c8b405f2ddc 100644
+--- a/drivers/media/dvb-frontends/horus3a.c
++++ b/drivers/media/dvb-frontends/horus3a.c
+@@ -89,7 +89,9 @@ static int horus3a_write_regs(struct horus3a_priv *priv,
+
+ static int horus3a_write_reg(struct horus3a_priv *priv, u8 reg, u8 val)
+ {
+- return horus3a_write_regs(priv, reg, &val, 1);
++ u8 tmp = val; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
++
++ return horus3a_write_regs(priv, reg, &tmp, 1);
+ }
+
+ static int horus3a_enter_power_save(struct horus3a_priv *priv)
+diff --git a/drivers/media/dvb-frontends/itd1000.c b/drivers/media/dvb-frontends/itd1000.c
+index 5bb1e73a10b4..ce7c443d3eac 100644
+--- a/drivers/media/dvb-frontends/itd1000.c
++++ b/drivers/media/dvb-frontends/itd1000.c
+@@ -95,8 +95,9 @@ static int itd1000_read_reg(struct itd1000_state *state, u8 reg)
+
+ static inline int itd1000_write_reg(struct itd1000_state *state, u8 r, u8 v)
+ {
+- int ret = itd1000_write_regs(state, r, &v, 1);
+- state->shadow[r] = v;
++ u8 tmp = v; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
++ int ret = itd1000_write_regs(state, r, &tmp, 1);
++ state->shadow[r] = tmp;
+ return ret;
+ }
+
+diff --git a/drivers/media/dvb-frontends/mt312.c b/drivers/media/dvb-frontends/mt312.c
+index 961b9a2508e0..0b23cbc021b8 100644
+--- a/drivers/media/dvb-frontends/mt312.c
++++ b/drivers/media/dvb-frontends/mt312.c
+@@ -142,7 +142,10 @@ static inline int mt312_readreg(struct mt312_state *state,
+ static inline int mt312_writereg(struct mt312_state *state,
+ const enum mt312_reg_addr reg, const u8 val)
+ {
+- return mt312_write(state, reg, &val, 1);
++ u8 tmp = val; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
++
++
++ return mt312_write(state, reg, &tmp, 1);
+ }
+
+ static inline u32 mt312_div(u32 a, u32 b)
+diff --git a/drivers/media/dvb-frontends/stb0899_drv.c b/drivers/media/dvb-frontends/stb0899_drv.c
+index 02347598277a..db5dde3215f0 100644
+--- a/drivers/media/dvb-frontends/stb0899_drv.c
++++ b/drivers/media/dvb-frontends/stb0899_drv.c
+@@ -539,7 +539,8 @@ int stb0899_write_regs(struct stb0899_state *state, unsigned int reg, u8 *data,
+
+ int stb0899_write_reg(struct stb0899_state *state, unsigned int reg, u8 data)
+ {
+- return stb0899_write_regs(state, reg, &data, 1);
++ u8 tmp = data;
++ return stb0899_write_regs(state, reg, &tmp, 1);
+ }
+
+ /*
+diff --git a/drivers/media/dvb-frontends/stb6100.c b/drivers/media/dvb-frontends/stb6100.c
+index 17a955d0031b..75509bec66e4 100644
+--- a/drivers/media/dvb-frontends/stb6100.c
++++ b/drivers/media/dvb-frontends/stb6100.c
+@@ -226,12 +226,14 @@ static int stb6100_write_reg_range(struct stb6100_state *state, u8 buf[], int st
+
+ static int stb6100_write_reg(struct stb6100_state *state, u8 reg, u8 data)
+ {
++ u8 tmp = data; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
++
+ if (unlikely(reg >= STB6100_NUMREGS)) {
+ dprintk(verbose, FE_ERROR, 1, "Invalid register offset 0x%x", reg);
+ return -EREMOTEIO;
+ }
+- data = (data & stb6100_template[reg].mask) | stb6100_template[reg].set;
+- return stb6100_write_reg_range(state, &data, reg, 1);
++ tmp = (tmp & stb6100_template[reg].mask) | stb6100_template[reg].set;
++ return stb6100_write_reg_range(state, &tmp, reg, 1);
+ }
+
+
+diff --git a/drivers/media/dvb-frontends/stv0367.c b/drivers/media/dvb-frontends/stv0367.c
+index f3529df8211d..1a726196c126 100644
+--- a/drivers/media/dvb-frontends/stv0367.c
++++ b/drivers/media/dvb-frontends/stv0367.c
+@@ -166,7 +166,9 @@ int stv0367_writeregs(struct stv0367_state *state, u16 reg, u8 *data, int len)
+
+ static int stv0367_writereg(struct stv0367_state *state, u16 reg, u8 data)
+ {
+- return stv0367_writeregs(state, reg, &data, 1);
++ u8 tmp = data; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
++
++ return stv0367_writeregs(state, reg, &tmp, 1);
+ }
+
+ static u8 stv0367_readreg(struct stv0367_state *state, u16 reg)
+diff --git a/drivers/media/dvb-frontends/stv090x.c b/drivers/media/dvb-frontends/stv090x.c
+index 7ef469c0c866..2695e1eb6d9c 100644
+--- a/drivers/media/dvb-frontends/stv090x.c
++++ b/drivers/media/dvb-frontends/stv090x.c
+@@ -755,7 +755,9 @@ static int stv090x_write_regs(struct stv090x_state *state, unsigned int reg, u8
+
+ static int stv090x_write_reg(struct stv090x_state *state, unsigned int reg, u8 data)
+ {
+- return stv090x_write_regs(state, reg, &data, 1);
++ u8 tmp = data; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
++
++ return stv090x_write_regs(state, reg, &tmp, 1);
+ }
+
+ static int stv090x_i2c_gate_ctrl(struct stv090x_state *state, int enable)
+diff --git a/drivers/media/dvb-frontends/stv6110x.c b/drivers/media/dvb-frontends/stv6110x.c
+index 66eba38f1014..7e8e01389c55 100644
+--- a/drivers/media/dvb-frontends/stv6110x.c
++++ b/drivers/media/dvb-frontends/stv6110x.c
+@@ -97,7 +97,9 @@ static int stv6110x_write_regs(struct stv6110x_state *stv6110x, int start, u8 da
+
+ static int stv6110x_write_reg(struct stv6110x_state *stv6110x, u8 reg, u8 data)
+ {
+- return stv6110x_write_regs(stv6110x, reg, &data, 1);
++ u8 tmp = data; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
++
++ return stv6110x_write_regs(stv6110x, reg, &tmp, 1);
+ }
+
+ static int stv6110x_init(struct dvb_frontend *fe)
+diff --git a/drivers/media/dvb-frontends/ts2020.c b/drivers/media/dvb-frontends/ts2020.c
+index 931e5c98da8a..b879e1571469 100644
+--- a/drivers/media/dvb-frontends/ts2020.c
++++ b/drivers/media/dvb-frontends/ts2020.c
+@@ -368,7 +368,7 @@ static int ts2020_read_tuner_gain(struct dvb_frontend *fe, unsigned v_agc,
+ gain2 = clamp_t(long, gain2, 0, 13);
+ v_agc = clamp_t(long, v_agc, 400, 1100);
+
+- *_gain = -(gain1 * 2330 +
++ *_gain = -((__s64)gain1 * 2330 +
+ gain2 * 3500 +
+ v_agc * 24 / 10 * 10 +
+ 10000);
+@@ -386,7 +386,7 @@ static int ts2020_read_tuner_gain(struct dvb_frontend *fe, unsigned v_agc,
+ gain3 = clamp_t(long, gain3, 0, 6);
+ v_agc = clamp_t(long, v_agc, 600, 1600);
+
+- *_gain = -(gain1 * 2650 +
++ *_gain = -((__s64)gain1 * 2650 +
+ gain2 * 3380 +
+ gain3 * 2850 +
+ v_agc * 176 / 100 * 10 -
+diff --git a/drivers/media/dvb-frontends/zl10039.c b/drivers/media/dvb-frontends/zl10039.c
+index 623355fc2666..3208b866d1cb 100644
+--- a/drivers/media/dvb-frontends/zl10039.c
++++ b/drivers/media/dvb-frontends/zl10039.c
+@@ -134,7 +134,9 @@ static inline int zl10039_writereg(struct zl10039_state *state,
+ const enum zl10039_reg_addr reg,
+ const u8 val)
+ {
+- return zl10039_write(state, reg, &val, 1);
++ const u8 tmp = val; /* see gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 */
++
++ return zl10039_write(state, reg, &tmp, 1);
+ }
+
+ static int zl10039_init(struct dvb_frontend *fe)
+diff --git a/drivers/media/platform/vivid/vivid-core.h b/drivers/media/platform/vivid/vivid-core.h
+index 5cdf95bdc4d1..0de136b0d8c8 100644
+--- a/drivers/media/platform/vivid/vivid-core.h
++++ b/drivers/media/platform/vivid/vivid-core.h
+@@ -154,6 +154,7 @@ struct vivid_dev {
+ struct v4l2_ctrl_handler ctrl_hdl_streaming;
+ struct v4l2_ctrl_handler ctrl_hdl_sdtv_cap;
+ struct v4l2_ctrl_handler ctrl_hdl_loop_cap;
++ struct v4l2_ctrl_handler ctrl_hdl_fb;
+ struct video_device vid_cap_dev;
+ struct v4l2_ctrl_handler ctrl_hdl_vid_cap;
+ struct video_device vid_out_dev;
+diff --git a/drivers/media/platform/vivid/vivid-ctrls.c b/drivers/media/platform/vivid/vivid-ctrls.c
+index 34731f71cc00..3f9d354827af 100644
+--- a/drivers/media/platform/vivid/vivid-ctrls.c
++++ b/drivers/media/platform/vivid/vivid-ctrls.c
+@@ -120,9 +120,6 @@ static int vivid_user_gen_s_ctrl(struct v4l2_ctrl *ctrl)
+ clear_bit(V4L2_FL_REGISTERED, &dev->radio_rx_dev.flags);
+ clear_bit(V4L2_FL_REGISTERED, &dev->radio_tx_dev.flags);
+ break;
+- case VIVID_CID_CLEAR_FB:
+- vivid_clear_fb(dev);
+- break;
+ case VIVID_CID_BUTTON:
+ dev->button_pressed = 30;
+ break;
+@@ -274,8 +271,28 @@ static const struct v4l2_ctrl_config vivid_ctrl_disconnect = {
+ .type = V4L2_CTRL_TYPE_BUTTON,
+ };
+
++
++/* Framebuffer Controls */
++
++static int vivid_fb_s_ctrl(struct v4l2_ctrl *ctrl)
++{
++ struct vivid_dev *dev = container_of(ctrl->handler,
++ struct vivid_dev, ctrl_hdl_fb);
++
++ switch (ctrl->id) {
++ case VIVID_CID_CLEAR_FB:
++ vivid_clear_fb(dev);
++ break;
++ }
++ return 0;
++}
++
++static const struct v4l2_ctrl_ops vivid_fb_ctrl_ops = {
++ .s_ctrl = vivid_fb_s_ctrl,
++};
++
+ static const struct v4l2_ctrl_config vivid_ctrl_clear_fb = {
+- .ops = &vivid_user_gen_ctrl_ops,
++ .ops = &vivid_fb_ctrl_ops,
+ .id = VIVID_CID_CLEAR_FB,
+ .name = "Clear Framebuffer",
+ .type = V4L2_CTRL_TYPE_BUTTON,
+@@ -1357,6 +1374,7 @@ int vivid_create_controls(struct vivid_dev *dev, bool show_ccs_cap,
+ struct v4l2_ctrl_handler *hdl_streaming = &dev->ctrl_hdl_streaming;
+ struct v4l2_ctrl_handler *hdl_sdtv_cap = &dev->ctrl_hdl_sdtv_cap;
+ struct v4l2_ctrl_handler *hdl_loop_cap = &dev->ctrl_hdl_loop_cap;
++ struct v4l2_ctrl_handler *hdl_fb = &dev->ctrl_hdl_fb;
+ struct v4l2_ctrl_handler *hdl_vid_cap = &dev->ctrl_hdl_vid_cap;
+ struct v4l2_ctrl_handler *hdl_vid_out = &dev->ctrl_hdl_vid_out;
+ struct v4l2_ctrl_handler *hdl_vbi_cap = &dev->ctrl_hdl_vbi_cap;
+@@ -1384,10 +1402,12 @@ int vivid_create_controls(struct vivid_dev *dev, bool show_ccs_cap,
+ v4l2_ctrl_new_custom(hdl_sdtv_cap, &vivid_ctrl_class, NULL);
+ v4l2_ctrl_handler_init(hdl_loop_cap, 1);
+ v4l2_ctrl_new_custom(hdl_loop_cap, &vivid_ctrl_class, NULL);
++ v4l2_ctrl_handler_init(hdl_fb, 1);
++ v4l2_ctrl_new_custom(hdl_fb, &vivid_ctrl_class, NULL);
+ v4l2_ctrl_handler_init(hdl_vid_cap, 55);
+ v4l2_ctrl_new_custom(hdl_vid_cap, &vivid_ctrl_class, NULL);
+ v4l2_ctrl_handler_init(hdl_vid_out, 26);
+- if (!no_error_inj)
++ if (!no_error_inj || dev->has_fb)
+ v4l2_ctrl_new_custom(hdl_vid_out, &vivid_ctrl_class, NULL);
+ v4l2_ctrl_handler_init(hdl_vbi_cap, 21);
+ v4l2_ctrl_new_custom(hdl_vbi_cap, &vivid_ctrl_class, NULL);
+@@ -1561,7 +1581,7 @@ int vivid_create_controls(struct vivid_dev *dev, bool show_ccs_cap,
+ v4l2_ctrl_new_custom(hdl_loop_cap, &vivid_ctrl_loop_video, NULL);
+
+ if (dev->has_fb)
+- v4l2_ctrl_new_custom(hdl_user_gen, &vivid_ctrl_clear_fb, NULL);
++ v4l2_ctrl_new_custom(hdl_fb, &vivid_ctrl_clear_fb, NULL);
+
+ if (dev->has_radio_rx) {
+ v4l2_ctrl_new_custom(hdl_radio_rx, &vivid_ctrl_radio_hw_seek_mode, NULL);
+@@ -1658,6 +1678,7 @@ int vivid_create_controls(struct vivid_dev *dev, bool show_ccs_cap,
+ v4l2_ctrl_add_handler(hdl_vid_cap, hdl_streaming, NULL);
+ v4l2_ctrl_add_handler(hdl_vid_cap, hdl_sdtv_cap, NULL);
+ v4l2_ctrl_add_handler(hdl_vid_cap, hdl_loop_cap, NULL);
++ v4l2_ctrl_add_handler(hdl_vid_cap, hdl_fb, NULL);
+ if (hdl_vid_cap->error)
+ return hdl_vid_cap->error;
+ dev->vid_cap_dev.ctrl_handler = hdl_vid_cap;
+@@ -1666,6 +1687,7 @@ int vivid_create_controls(struct vivid_dev *dev, bool show_ccs_cap,
+ v4l2_ctrl_add_handler(hdl_vid_out, hdl_user_gen, NULL);
+ v4l2_ctrl_add_handler(hdl_vid_out, hdl_user_aud, NULL);
+ v4l2_ctrl_add_handler(hdl_vid_out, hdl_streaming, NULL);
++ v4l2_ctrl_add_handler(hdl_vid_out, hdl_fb, NULL);
+ if (hdl_vid_out->error)
+ return hdl_vid_out->error;
+ dev->vid_out_dev.ctrl_handler = hdl_vid_out;
+@@ -1725,4 +1747,5 @@ void vivid_free_controls(struct vivid_dev *dev)
+ v4l2_ctrl_handler_free(&dev->ctrl_hdl_streaming);
+ v4l2_ctrl_handler_free(&dev->ctrl_hdl_sdtv_cap);
+ v4l2_ctrl_handler_free(&dev->ctrl_hdl_loop_cap);
++ v4l2_ctrl_handler_free(&dev->ctrl_hdl_fb);
+ }
+diff --git a/drivers/media/usb/dvb-usb-v2/lmedm04.c b/drivers/media/usb/dvb-usb-v2/lmedm04.c
+index 5e320fa4a795..be26c029546b 100644
+--- a/drivers/media/usb/dvb-usb-v2/lmedm04.c
++++ b/drivers/media/usb/dvb-usb-v2/lmedm04.c
+@@ -494,18 +494,23 @@ static int lme2510_pid_filter(struct dvb_usb_adapter *adap, int index, u16 pid,
+
+ static int lme2510_return_status(struct dvb_usb_device *d)
+ {
+- int ret = 0;
++ int ret;
+ u8 *data;
+
+- data = kzalloc(10, GFP_KERNEL);
++ data = kzalloc(6, GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+- ret |= usb_control_msg(d->udev, usb_rcvctrlpipe(d->udev, 0),
+- 0x06, 0x80, 0x0302, 0x00, data, 0x0006, 200);
+- info("Firmware Status: %x (%x)", ret , data[2]);
++ ret = usb_control_msg(d->udev, usb_rcvctrlpipe(d->udev, 0),
++ 0x06, 0x80, 0x0302, 0x00,
++ data, 0x6, 200);
++ if (ret != 6)
++ ret = -EINVAL;
++ else
++ ret = data[2];
++
++ info("Firmware Status: %6ph", data);
+
+- ret = (ret < 0) ? -ENODEV : data[2];
+ kfree(data);
+ return ret;
+ }
+@@ -1071,8 +1076,6 @@ static int dm04_lme2510_frontend_attach(struct dvb_usb_adapter *adap)
+
+ if (adap->fe[0]) {
+ info("FE Found M88RS2000");
+- dvb_attach(ts2020_attach, adap->fe[0], &ts2020_config,
+- &d->i2c_adap);
+ st->i2c_tuner_gate_w = 5;
+ st->i2c_tuner_gate_r = 5;
+ st->i2c_tuner_addr = 0x60;
+@@ -1138,17 +1141,18 @@ static int dm04_lme2510_tuner(struct dvb_usb_adapter *adap)
+ ret = st->tuner_config;
+ break;
+ case TUNER_RS2000:
+- ret = st->tuner_config;
++ if (dvb_attach(ts2020_attach, adap->fe[0],
++ &ts2020_config, &d->i2c_adap))
++ ret = st->tuner_config;
+ break;
+ default:
+ break;
+ }
+
+- if (ret)
++ if (ret) {
+ info("TUN Found %s tuner", tun_msg[ret]);
+- else {
+- info("TUN No tuner found --- resetting device");
+- lme_coldreset(d);
++ } else {
++ info("TUN No tuner found");
+ return -ENODEV;
+ }
+
+@@ -1189,6 +1193,7 @@ static int lme2510_get_adapter_count(struct dvb_usb_device *d)
+ static int lme2510_identify_state(struct dvb_usb_device *d, const char **name)
+ {
+ struct lme2510_state *st = d->priv;
++ int status;
+
+ usb_reset_configuration(d->udev);
+
+@@ -1197,12 +1202,16 @@ static int lme2510_identify_state(struct dvb_usb_device *d, const char **name)
+
+ st->dvb_usb_lme2510_firmware = dvb_usb_lme2510_firmware;
+
+- if (lme2510_return_status(d) == 0x44) {
++ status = lme2510_return_status(d);
++ if (status == 0x44) {
+ *name = lme_firmware_switch(d, 0);
+ return COLD;
+ }
+
+- return 0;
++ if (status != 0x47)
++ return -EINVAL;
++
++ return WARM;
+ }
+
+ static int lme2510_get_stream_config(struct dvb_frontend *fe, u8 *ts_type,
+diff --git a/drivers/media/usb/dvb-usb/cxusb.c b/drivers/media/usb/dvb-usb/cxusb.c
+index 37dea0adc695..cfe86b4864b3 100644
+--- a/drivers/media/usb/dvb-usb/cxusb.c
++++ b/drivers/media/usb/dvb-usb/cxusb.c
+@@ -677,6 +677,8 @@ static int dvico_bluebird_xc2028_callback(void *ptr, int component,
+ case XC2028_RESET_CLK:
+ deb_info("%s: XC2028_RESET_CLK %d\n", __func__, arg);
+ break;
++ case XC2028_I2C_FLUSH:
++ break;
+ default:
+ deb_info("%s: unknown command %d, arg %d\n", __func__,
+ command, arg);
+diff --git a/drivers/media/usb/dvb-usb/dib0700_devices.c b/drivers/media/usb/dvb-usb/dib0700_devices.c
+index 366b05529915..a9968fb1e8e4 100644
+--- a/drivers/media/usb/dvb-usb/dib0700_devices.c
++++ b/drivers/media/usb/dvb-usb/dib0700_devices.c
+@@ -430,6 +430,7 @@ static int stk7700ph_xc3028_callback(void *ptr, int component,
+ state->dib7000p_ops.set_gpio(adap->fe_adap[0].fe, 8, 0, 1);
+ break;
+ case XC2028_RESET_CLK:
++ case XC2028_I2C_FLUSH:
+ break;
+ default:
+ err("%s: unknown command %d, arg %d\n", __func__,
+diff --git a/drivers/media/usb/hdpvr/hdpvr-core.c b/drivers/media/usb/hdpvr/hdpvr-core.c
+index dbe29c6c4d8b..1e8cbaf36896 100644
+--- a/drivers/media/usb/hdpvr/hdpvr-core.c
++++ b/drivers/media/usb/hdpvr/hdpvr-core.c
+@@ -292,7 +292,7 @@ static int hdpvr_probe(struct usb_interface *interface,
+ /* register v4l2_device early so it can be used for printks */
+ if (v4l2_device_register(&interface->dev, &dev->v4l2_dev)) {
+ dev_err(&interface->dev, "v4l2_device_register failed\n");
+- goto error;
++ goto error_free_dev;
+ }
+
+ mutex_init(&dev->io_mutex);
+@@ -301,7 +301,7 @@ static int hdpvr_probe(struct usb_interface *interface,
+ dev->usbc_buf = kmalloc(64, GFP_KERNEL);
+ if (!dev->usbc_buf) {
+ v4l2_err(&dev->v4l2_dev, "Out of memory\n");
+- goto error;
++ goto error_v4l2_unregister;
+ }
+
+ init_waitqueue_head(&dev->wait_buffer);
+@@ -339,13 +339,13 @@ static int hdpvr_probe(struct usb_interface *interface,
+ }
+ if (!dev->bulk_in_endpointAddr) {
+ v4l2_err(&dev->v4l2_dev, "Could not find bulk-in endpoint\n");
+- goto error;
++ goto error_put_usb;
+ }
+
+ /* init the device */
+ if (hdpvr_device_init(dev)) {
+ v4l2_err(&dev->v4l2_dev, "device init failed\n");
+- goto error;
++ goto error_put_usb;
+ }
+
+ mutex_lock(&dev->io_mutex);
+@@ -353,7 +353,7 @@ static int hdpvr_probe(struct usb_interface *interface,
+ mutex_unlock(&dev->io_mutex);
+ v4l2_err(&dev->v4l2_dev,
+ "allocating transfer buffers failed\n");
+- goto error;
++ goto error_put_usb;
+ }
+ mutex_unlock(&dev->io_mutex);
+
+@@ -361,7 +361,7 @@ static int hdpvr_probe(struct usb_interface *interface,
+ retval = hdpvr_register_i2c_adapter(dev);
+ if (retval < 0) {
+ v4l2_err(&dev->v4l2_dev, "i2c adapter register failed\n");
+- goto error;
++ goto error_free_buffers;
+ }
+
+ client = hdpvr_register_ir_rx_i2c(dev);
+@@ -394,13 +394,17 @@ static int hdpvr_probe(struct usb_interface *interface,
+ reg_fail:
+ #if IS_ENABLED(CONFIG_I2C)
+ i2c_del_adapter(&dev->i2c_adapter);
++error_free_buffers:
+ #endif
++ hdpvr_free_buffers(dev);
++error_put_usb:
++ usb_put_dev(dev->udev);
++ kfree(dev->usbc_buf);
++error_v4l2_unregister:
++ v4l2_device_unregister(&dev->v4l2_dev);
++error_free_dev:
++ kfree(dev);
+ error:
+- if (dev) {
+- flush_work(&dev->worker);
+- /* this frees allocated memory */
+- hdpvr_delete(dev);
+- }
+ return retval;
+ }
+
+diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+index 821f2aa299ae..cbeea8343a5c 100644
+--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
++++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+@@ -18,8 +18,18 @@
+ #include <linux/videodev2.h>
+ #include <linux/v4l2-subdev.h>
+ #include <media/v4l2-dev.h>
++#include <media/v4l2-fh.h>
++#include <media/v4l2-ctrls.h>
+ #include <media/v4l2-ioctl.h>
+
++/* Use the same argument order as copy_in_user */
++#define assign_in_user(to, from) \
++({ \
++ typeof(*from) __assign_tmp; \
++ \
++ get_user(__assign_tmp, from) || put_user(__assign_tmp, to); \
++})
++
+ static long native_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ {
+ long ret = -ENOIOCTLCMD;
+@@ -46,135 +56,75 @@ struct v4l2_window32 {
+ __u8 global_alpha;
+ };
+
+-static int get_v4l2_window32(struct v4l2_window *kp, struct v4l2_window32 __user *up)
+-{
+- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_window32)) ||
+- copy_from_user(&kp->w, &up->w, sizeof(up->w)) ||
+- get_user(kp->field, &up->field) ||
+- get_user(kp->chromakey, &up->chromakey) ||
+- get_user(kp->clipcount, &up->clipcount) ||
+- get_user(kp->global_alpha, &up->global_alpha))
+- return -EFAULT;
+- if (kp->clipcount > 2048)
+- return -EINVAL;
+- if (kp->clipcount) {
+- struct v4l2_clip32 __user *uclips;
+- struct v4l2_clip __user *kclips;
+- int n = kp->clipcount;
+- compat_caddr_t p;
+-
+- if (get_user(p, &up->clips))
+- return -EFAULT;
+- uclips = compat_ptr(p);
+- kclips = compat_alloc_user_space(n * sizeof(struct v4l2_clip));
+- kp->clips = kclips;
+- while (--n >= 0) {
+- if (copy_in_user(&kclips->c, &uclips->c, sizeof(uclips->c)))
+- return -EFAULT;
+- if (put_user(n ? kclips + 1 : NULL, &kclips->next))
+- return -EFAULT;
+- uclips += 1;
+- kclips += 1;
+- }
+- } else
+- kp->clips = NULL;
+- return 0;
+-}
+-
+-static int put_v4l2_window32(struct v4l2_window *kp, struct v4l2_window32 __user *up)
+-{
+- if (copy_to_user(&up->w, &kp->w, sizeof(kp->w)) ||
+- put_user(kp->field, &up->field) ||
+- put_user(kp->chromakey, &up->chromakey) ||
+- put_user(kp->clipcount, &up->clipcount) ||
+- put_user(kp->global_alpha, &up->global_alpha))
+- return -EFAULT;
+- return 0;
+-}
+-
+-static inline int get_v4l2_pix_format(struct v4l2_pix_format *kp, struct v4l2_pix_format __user *up)
+-{
+- if (copy_from_user(kp, up, sizeof(struct v4l2_pix_format)))
+- return -EFAULT;
+- return 0;
+-}
+-
+-static inline int get_v4l2_pix_format_mplane(struct v4l2_pix_format_mplane *kp,
+- struct v4l2_pix_format_mplane __user *up)
+-{
+- if (copy_from_user(kp, up, sizeof(struct v4l2_pix_format_mplane)))
+- return -EFAULT;
+- return 0;
+-}
+-
+-static inline int put_v4l2_pix_format(struct v4l2_pix_format *kp, struct v4l2_pix_format __user *up)
++static int get_v4l2_window32(struct v4l2_window __user *kp,
++ struct v4l2_window32 __user *up,
++ void __user *aux_buf, u32 aux_space)
+ {
+- if (copy_to_user(up, kp, sizeof(struct v4l2_pix_format)))
+- return -EFAULT;
+- return 0;
+-}
+-
+-static inline int put_v4l2_pix_format_mplane(struct v4l2_pix_format_mplane *kp,
+- struct v4l2_pix_format_mplane __user *up)
+-{
+- if (copy_to_user(up, kp, sizeof(struct v4l2_pix_format_mplane)))
+- return -EFAULT;
+- return 0;
+-}
+-
+-static inline int get_v4l2_vbi_format(struct v4l2_vbi_format *kp, struct v4l2_vbi_format __user *up)
+-{
+- if (copy_from_user(kp, up, sizeof(struct v4l2_vbi_format)))
+- return -EFAULT;
+- return 0;
+-}
+-
+-static inline int put_v4l2_vbi_format(struct v4l2_vbi_format *kp, struct v4l2_vbi_format __user *up)
+-{
+- if (copy_to_user(up, kp, sizeof(struct v4l2_vbi_format)))
++ struct v4l2_clip32 __user *uclips;
++ struct v4l2_clip __user *kclips;
++ compat_caddr_t p;
++ u32 clipcount;
++
++ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
++ copy_in_user(&kp->w, &up->w, sizeof(up->w)) ||
++ assign_in_user(&kp->field, &up->field) ||
++ assign_in_user(&kp->chromakey, &up->chromakey) ||
++ assign_in_user(&kp->global_alpha, &up->global_alpha) ||
++ get_user(clipcount, &up->clipcount) ||
++ put_user(clipcount, &kp->clipcount))
+ return -EFAULT;
+- return 0;
+-}
++ if (clipcount > 2048)
++ return -EINVAL;
++ if (!clipcount)
++ return put_user(NULL, &kp->clips);
+
+-static inline int get_v4l2_sliced_vbi_format(struct v4l2_sliced_vbi_format *kp, struct v4l2_sliced_vbi_format __user *up)
+-{
+- if (copy_from_user(kp, up, sizeof(struct v4l2_sliced_vbi_format)))
++ if (get_user(p, &up->clips))
+ return -EFAULT;
+- return 0;
+-}
+-
+-static inline int put_v4l2_sliced_vbi_format(struct v4l2_sliced_vbi_format *kp, struct v4l2_sliced_vbi_format __user *up)
+-{
+- if (copy_to_user(up, kp, sizeof(struct v4l2_sliced_vbi_format)))
++ uclips = compat_ptr(p);
++ if (aux_space < clipcount * sizeof(*kclips))
+ return -EFAULT;
+- return 0;
+-}
+-
+-static inline int get_v4l2_sdr_format(struct v4l2_sdr_format *kp, struct v4l2_sdr_format __user *up)
+-{
+- if (copy_from_user(kp, up, sizeof(struct v4l2_sdr_format)))
++ kclips = aux_buf;
++ if (put_user(kclips, &kp->clips))
+ return -EFAULT;
+- return 0;
+-}
+
+-static inline int put_v4l2_sdr_format(struct v4l2_sdr_format *kp, struct v4l2_sdr_format __user *up)
+-{
+- if (copy_to_user(up, kp, sizeof(struct v4l2_sdr_format)))
+- return -EFAULT;
++ while (clipcount--) {
++ if (copy_in_user(&kclips->c, &uclips->c, sizeof(uclips->c)))
++ return -EFAULT;
++ if (put_user(clipcount ? kclips + 1 : NULL, &kclips->next))
++ return -EFAULT;
++ uclips++;
++ kclips++;
++ }
+ return 0;
+ }
+
+-static inline int get_v4l2_meta_format(struct v4l2_meta_format *kp, struct v4l2_meta_format __user *up)
++static int put_v4l2_window32(struct v4l2_window __user *kp,
++ struct v4l2_window32 __user *up)
+ {
+- if (copy_from_user(kp, up, sizeof(struct v4l2_meta_format)))
++ struct v4l2_clip __user *kclips = kp->clips;
++ struct v4l2_clip32 __user *uclips;
++ compat_caddr_t p;
++ u32 clipcount;
++
++ if (copy_in_user(&up->w, &kp->w, sizeof(kp->w)) ||
++ assign_in_user(&up->field, &kp->field) ||
++ assign_in_user(&up->chromakey, &kp->chromakey) ||
++ assign_in_user(&up->global_alpha, &kp->global_alpha) ||
++ get_user(clipcount, &kp->clipcount) ||
++ put_user(clipcount, &up->clipcount))
+ return -EFAULT;
+- return 0;
+-}
++ if (!clipcount)
++ return 0;
+
+-static inline int put_v4l2_meta_format(struct v4l2_meta_format *kp, struct v4l2_meta_format __user *up)
+-{
+- if (copy_to_user(up, kp, sizeof(struct v4l2_meta_format)))
++ if (get_user(p, &up->clips))
+ return -EFAULT;
++ uclips = compat_ptr(p);
++ while (clipcount--) {
++ if (copy_in_user(&uclips->c, &kclips->c, sizeof(uclips->c)))
++ return -EFAULT;
++ uclips++;
++ kclips++;
++ }
+ return 0;
+ }
+
+@@ -209,101 +159,164 @@ struct v4l2_create_buffers32 {
+ __u32 reserved[8];
+ };
+
+-static int __get_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up)
++static int __bufsize_v4l2_format(struct v4l2_format32 __user *up, u32 *size)
+ {
+- if (get_user(kp->type, &up->type))
++ u32 type;
++
++ if (get_user(type, &up->type))
+ return -EFAULT;
+
+- switch (kp->type) {
++ switch (type) {
++ case V4L2_BUF_TYPE_VIDEO_OVERLAY:
++ case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY: {
++ u32 clipcount;
++
++ if (get_user(clipcount, &up->fmt.win.clipcount))
++ return -EFAULT;
++ if (clipcount > 2048)
++ return -EINVAL;
++ *size = clipcount * sizeof(struct v4l2_clip);
++ return 0;
++ }
++ default:
++ *size = 0;
++ return 0;
++ }
++}
++
++static int bufsize_v4l2_format(struct v4l2_format32 __user *up, u32 *size)
++{
++ if (!access_ok(VERIFY_READ, up, sizeof(*up)))
++ return -EFAULT;
++ return __bufsize_v4l2_format(up, size);
++}
++
++static int __get_v4l2_format32(struct v4l2_format __user *kp,
++ struct v4l2_format32 __user *up,
++ void __user *aux_buf, u32 aux_space)
++{
++ u32 type;
++
++ if (get_user(type, &up->type) || put_user(type, &kp->type))
++ return -EFAULT;
++
++ switch (type) {
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+- return get_v4l2_pix_format(&kp->fmt.pix, &up->fmt.pix);
++ return copy_in_user(&kp->fmt.pix, &up->fmt.pix,
++ sizeof(kp->fmt.pix)) ? -EFAULT : 0;
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
+- return get_v4l2_pix_format_mplane(&kp->fmt.pix_mp,
+- &up->fmt.pix_mp);
++ return copy_in_user(&kp->fmt.pix_mp, &up->fmt.pix_mp,
++ sizeof(kp->fmt.pix_mp)) ? -EFAULT : 0;
+ case V4L2_BUF_TYPE_VIDEO_OVERLAY:
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:
+- return get_v4l2_window32(&kp->fmt.win, &up->fmt.win);
++ return get_v4l2_window32(&kp->fmt.win, &up->fmt.win,
++ aux_buf, aux_space);
+ case V4L2_BUF_TYPE_VBI_CAPTURE:
+ case V4L2_BUF_TYPE_VBI_OUTPUT:
+- return get_v4l2_vbi_format(&kp->fmt.vbi, &up->fmt.vbi);
++ return copy_in_user(&kp->fmt.vbi, &up->fmt.vbi,
++ sizeof(kp->fmt.vbi)) ? -EFAULT : 0;
+ case V4L2_BUF_TYPE_SLICED_VBI_CAPTURE:
+ case V4L2_BUF_TYPE_SLICED_VBI_OUTPUT:
+- return get_v4l2_sliced_vbi_format(&kp->fmt.sliced, &up->fmt.sliced);
++ return copy_in_user(&kp->fmt.sliced, &up->fmt.sliced,
++ sizeof(kp->fmt.sliced)) ? -EFAULT : 0;
+ case V4L2_BUF_TYPE_SDR_CAPTURE:
+ case V4L2_BUF_TYPE_SDR_OUTPUT:
+- return get_v4l2_sdr_format(&kp->fmt.sdr, &up->fmt.sdr);
++ return copy_in_user(&kp->fmt.sdr, &up->fmt.sdr,
++ sizeof(kp->fmt.sdr)) ? -EFAULT : 0;
+ case V4L2_BUF_TYPE_META_CAPTURE:
+- return get_v4l2_meta_format(&kp->fmt.meta, &up->fmt.meta);
++ return copy_in_user(&kp->fmt.meta, &up->fmt.meta,
++ sizeof(kp->fmt.meta)) ? -EFAULT : 0;
+ default:
+- pr_info("compat_ioctl32: unexpected VIDIOC_FMT type %d\n",
+- kp->type);
+ return -EINVAL;
+ }
+ }
+
+-static int get_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up)
++static int get_v4l2_format32(struct v4l2_format __user *kp,
++ struct v4l2_format32 __user *up,
++ void __user *aux_buf, u32 aux_space)
++{
++ if (!access_ok(VERIFY_READ, up, sizeof(*up)))
++ return -EFAULT;
++ return __get_v4l2_format32(kp, up, aux_buf, aux_space);
++}
++
++static int bufsize_v4l2_create(struct v4l2_create_buffers32 __user *up,
++ u32 *size)
+ {
+- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_format32)))
++ if (!access_ok(VERIFY_READ, up, sizeof(*up)))
+ return -EFAULT;
+- return __get_v4l2_format32(kp, up);
++ return __bufsize_v4l2_format(&up->format, size);
+ }
+
+-static int get_v4l2_create32(struct v4l2_create_buffers *kp, struct v4l2_create_buffers32 __user *up)
++static int get_v4l2_create32(struct v4l2_create_buffers __user *kp,
++ struct v4l2_create_buffers32 __user *up,
++ void __user *aux_buf, u32 aux_space)
+ {
+- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_create_buffers32)) ||
+- copy_from_user(kp, up, offsetof(struct v4l2_create_buffers32, format)))
++ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
++ copy_in_user(kp, up,
++ offsetof(struct v4l2_create_buffers32, format)))
+ return -EFAULT;
+- return __get_v4l2_format32(&kp->format, &up->format);
++ return __get_v4l2_format32(&kp->format, &up->format,
++ aux_buf, aux_space);
+ }
+
+-static int __put_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up)
++static int __put_v4l2_format32(struct v4l2_format __user *kp,
++ struct v4l2_format32 __user *up)
+ {
+- if (put_user(kp->type, &up->type))
++ u32 type;
++
++ if (get_user(type, &kp->type))
+ return -EFAULT;
+
+- switch (kp->type) {
++ switch (type) {
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+- return put_v4l2_pix_format(&kp->fmt.pix, &up->fmt.pix);
++ return copy_in_user(&up->fmt.pix, &kp->fmt.pix,
++ sizeof(kp->fmt.pix)) ? -EFAULT : 0;
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
+- return put_v4l2_pix_format_mplane(&kp->fmt.pix_mp,
+- &up->fmt.pix_mp);
++ return copy_in_user(&up->fmt.pix_mp, &kp->fmt.pix_mp,
++ sizeof(kp->fmt.pix_mp)) ? -EFAULT : 0;
+ case V4L2_BUF_TYPE_VIDEO_OVERLAY:
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:
+ return put_v4l2_window32(&kp->fmt.win, &up->fmt.win);
+ case V4L2_BUF_TYPE_VBI_CAPTURE:
+ case V4L2_BUF_TYPE_VBI_OUTPUT:
+- return put_v4l2_vbi_format(&kp->fmt.vbi, &up->fmt.vbi);
++ return copy_in_user(&up->fmt.vbi, &kp->fmt.vbi,
++ sizeof(kp->fmt.vbi)) ? -EFAULT : 0;
+ case V4L2_BUF_TYPE_SLICED_VBI_CAPTURE:
+ case V4L2_BUF_TYPE_SLICED_VBI_OUTPUT:
+- return put_v4l2_sliced_vbi_format(&kp->fmt.sliced, &up->fmt.sliced);
++ return copy_in_user(&up->fmt.sliced, &kp->fmt.sliced,
++ sizeof(kp->fmt.sliced)) ? -EFAULT : 0;
+ case V4L2_BUF_TYPE_SDR_CAPTURE:
+ case V4L2_BUF_TYPE_SDR_OUTPUT:
+- return put_v4l2_sdr_format(&kp->fmt.sdr, &up->fmt.sdr);
++ return copy_in_user(&up->fmt.sdr, &kp->fmt.sdr,
++ sizeof(kp->fmt.sdr)) ? -EFAULT : 0;
+ case V4L2_BUF_TYPE_META_CAPTURE:
+- return put_v4l2_meta_format(&kp->fmt.meta, &up->fmt.meta);
++ return copy_in_user(&up->fmt.meta, &kp->fmt.meta,
++ sizeof(kp->fmt.meta)) ? -EFAULT : 0;
+ default:
+- pr_info("compat_ioctl32: unexpected VIDIOC_FMT type %d\n",
+- kp->type);
+ return -EINVAL;
+ }
+ }
+
+-static int put_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up)
++static int put_v4l2_format32(struct v4l2_format __user *kp,
++ struct v4l2_format32 __user *up)
+ {
+- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_format32)))
++ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)))
+ return -EFAULT;
+ return __put_v4l2_format32(kp, up);
+ }
+
+-static int put_v4l2_create32(struct v4l2_create_buffers *kp, struct v4l2_create_buffers32 __user *up)
++static int put_v4l2_create32(struct v4l2_create_buffers __user *kp,
++ struct v4l2_create_buffers32 __user *up)
+ {
+- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_create_buffers32)) ||
+- copy_to_user(up, kp, offsetof(struct v4l2_create_buffers32, format)) ||
+- copy_to_user(up->reserved, kp->reserved, sizeof(kp->reserved)))
++ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
++ copy_in_user(up, kp,
++ offsetof(struct v4l2_create_buffers32, format)) ||
++ copy_in_user(up->reserved, kp->reserved, sizeof(kp->reserved)))
+ return -EFAULT;
+ return __put_v4l2_format32(&kp->format, &up->format);
+ }
+@@ -317,25 +330,28 @@ struct v4l2_standard32 {
+ __u32 reserved[4];
+ };
+
+-static int get_v4l2_standard32(struct v4l2_standard *kp, struct v4l2_standard32 __user *up)
++static int get_v4l2_standard32(struct v4l2_standard __user *kp,
++ struct v4l2_standard32 __user *up)
+ {
+ /* other fields are not set by the user, nor used by the driver */
+- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_standard32)) ||
+- get_user(kp->index, &up->index))
++ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
++ assign_in_user(&kp->index, &up->index))
+ return -EFAULT;
+ return 0;
+ }
+
+-static int put_v4l2_standard32(struct v4l2_standard *kp, struct v4l2_standard32 __user *up)
++static int put_v4l2_standard32(struct v4l2_standard __user *kp,
++ struct v4l2_standard32 __user *up)
+ {
+- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_standard32)) ||
+- put_user(kp->index, &up->index) ||
+- put_user(kp->id, &up->id) ||
+- copy_to_user(up->name, kp->name, 24) ||
+- copy_to_user(&up->frameperiod, &kp->frameperiod, sizeof(kp->frameperiod)) ||
+- put_user(kp->framelines, &up->framelines) ||
+- copy_to_user(up->reserved, kp->reserved, 4 * sizeof(__u32)))
+- return -EFAULT;
++ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
++ assign_in_user(&up->index, &kp->index) ||
++ assign_in_user(&up->id, &kp->id) ||
++ copy_in_user(up->name, kp->name, sizeof(up->name)) ||
++ copy_in_user(&up->frameperiod, &kp->frameperiod,
++ sizeof(up->frameperiod)) ||
++ assign_in_user(&up->framelines, &kp->framelines) ||
++ copy_in_user(up->reserved, kp->reserved, sizeof(up->reserved)))
++ return -EFAULT;
+ return 0;
+ }
+
+@@ -374,136 +390,186 @@ struct v4l2_buffer32 {
+ __u32 reserved;
+ };
+
+-static int get_v4l2_plane32(struct v4l2_plane __user *up, struct v4l2_plane32 __user *up32,
+- enum v4l2_memory memory)
++static int get_v4l2_plane32(struct v4l2_plane __user *up,
++ struct v4l2_plane32 __user *up32,
++ enum v4l2_memory memory)
+ {
+- void __user *up_pln;
+- compat_long_t p;
++ compat_ulong_t p;
+
+ if (copy_in_user(up, up32, 2 * sizeof(__u32)) ||
+- copy_in_user(&up->data_offset, &up32->data_offset,
+- sizeof(__u32)))
++ copy_in_user(&up->data_offset, &up32->data_offset,
++ sizeof(up->data_offset)))
+ return -EFAULT;
+
+- if (memory == V4L2_MEMORY_USERPTR) {
+- if (get_user(p, &up32->m.userptr))
+- return -EFAULT;
+- up_pln = compat_ptr(p);
+- if (put_user((unsigned long)up_pln, &up->m.userptr))
++ switch (memory) {
++ case V4L2_MEMORY_MMAP:
++ case V4L2_MEMORY_OVERLAY:
++ if (copy_in_user(&up->m.mem_offset, &up32->m.mem_offset,
++ sizeof(up32->m.mem_offset)))
+ return -EFAULT;
+- } else if (memory == V4L2_MEMORY_DMABUF) {
+- if (copy_in_user(&up->m.fd, &up32->m.fd, sizeof(int)))
++ break;
++ case V4L2_MEMORY_USERPTR:
++ if (get_user(p, &up32->m.userptr) ||
++ put_user((unsigned long)compat_ptr(p), &up->m.userptr))
+ return -EFAULT;
+- } else {
+- if (copy_in_user(&up->m.mem_offset, &up32->m.mem_offset,
+- sizeof(__u32)))
++ break;
++ case V4L2_MEMORY_DMABUF:
++ if (copy_in_user(&up->m.fd, &up32->m.fd, sizeof(up32->m.fd)))
+ return -EFAULT;
++ break;
+ }
+
+ return 0;
+ }
+
+-static int put_v4l2_plane32(struct v4l2_plane __user *up, struct v4l2_plane32 __user *up32,
+- enum v4l2_memory memory)
++static int put_v4l2_plane32(struct v4l2_plane __user *up,
++ struct v4l2_plane32 __user *up32,
++ enum v4l2_memory memory)
+ {
++ unsigned long p;
++
+ if (copy_in_user(up32, up, 2 * sizeof(__u32)) ||
+- copy_in_user(&up32->data_offset, &up->data_offset,
+- sizeof(__u32)))
++ copy_in_user(&up32->data_offset, &up->data_offset,
++ sizeof(up->data_offset)))
+ return -EFAULT;
+
+- /* For MMAP, driver might've set up the offset, so copy it back.
+- * USERPTR stays the same (was userspace-provided), so no copying. */
+- if (memory == V4L2_MEMORY_MMAP)
++ switch (memory) {
++ case V4L2_MEMORY_MMAP:
++ case V4L2_MEMORY_OVERLAY:
+ if (copy_in_user(&up32->m.mem_offset, &up->m.mem_offset,
+- sizeof(__u32)))
++ sizeof(up->m.mem_offset)))
++ return -EFAULT;
++ break;
++ case V4L2_MEMORY_USERPTR:
++ if (get_user(p, &up->m.userptr) ||
++ put_user((compat_ulong_t)ptr_to_compat((__force void *)p),
++ &up32->m.userptr))
+ return -EFAULT;
+- /* For DMABUF, driver might've set up the fd, so copy it back. */
+- if (memory == V4L2_MEMORY_DMABUF)
+- if (copy_in_user(&up32->m.fd, &up->m.fd,
+- sizeof(int)))
++ break;
++ case V4L2_MEMORY_DMABUF:
++ if (copy_in_user(&up32->m.fd, &up->m.fd, sizeof(up->m.fd)))
+ return -EFAULT;
++ break;
++ }
++
++ return 0;
++}
++
++static int bufsize_v4l2_buffer(struct v4l2_buffer32 __user *up, u32 *size)
++{
++ u32 type;
++ u32 length;
++
++ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
++ get_user(type, &up->type) ||
++ get_user(length, &up->length))
++ return -EFAULT;
+
++ if (V4L2_TYPE_IS_MULTIPLANAR(type)) {
++ if (length > VIDEO_MAX_PLANES)
++ return -EINVAL;
++
++ /*
++ * We don't really care if userspace decides to kill itself
++ * by passing a very big length value
++ */
++ *size = length * sizeof(struct v4l2_plane);
++ } else {
++ *size = 0;
++ }
+ return 0;
+ }
+
+-static int get_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user *up)
++static int get_v4l2_buffer32(struct v4l2_buffer __user *kp,
++ struct v4l2_buffer32 __user *up,
++ void __user *aux_buf, u32 aux_space)
+ {
++ u32 type;
++ u32 length;
++ enum v4l2_memory memory;
+ struct v4l2_plane32 __user *uplane32;
+ struct v4l2_plane __user *uplane;
+ compat_caddr_t p;
+ int ret;
+
+- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_buffer32)) ||
+- get_user(kp->index, &up->index) ||
+- get_user(kp->type, &up->type) ||
+- get_user(kp->flags, &up->flags) ||
+- get_user(kp->memory, &up->memory) ||
+- get_user(kp->length, &up->length))
+- return -EFAULT;
++ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
++ assign_in_user(&kp->index, &up->index) ||
++ get_user(type, &up->type) ||
++ put_user(type, &kp->type) ||
++ assign_in_user(&kp->flags, &up->flags) ||
++ get_user(memory, &up->memory) ||
++ put_user(memory, &kp->memory) ||
++ get_user(length, &up->length) ||
++ put_user(length, &kp->length))
++ return -EFAULT;
+
+- if (V4L2_TYPE_IS_OUTPUT(kp->type))
+- if (get_user(kp->bytesused, &up->bytesused) ||
+- get_user(kp->field, &up->field) ||
+- get_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
+- get_user(kp->timestamp.tv_usec,
+- &up->timestamp.tv_usec))
++ if (V4L2_TYPE_IS_OUTPUT(type))
++ if (assign_in_user(&kp->bytesused, &up->bytesused) ||
++ assign_in_user(&kp->field, &up->field) ||
++ assign_in_user(&kp->timestamp.tv_sec,
++ &up->timestamp.tv_sec) ||
++ assign_in_user(&kp->timestamp.tv_usec,
++ &up->timestamp.tv_usec))
+ return -EFAULT;
+
+- if (V4L2_TYPE_IS_MULTIPLANAR(kp->type)) {
+- unsigned int num_planes;
++ if (V4L2_TYPE_IS_MULTIPLANAR(type)) {
++ u32 num_planes = length;
+
+- if (kp->length == 0) {
+- kp->m.planes = NULL;
+- /* num_planes == 0 is legal, e.g. when userspace doesn't
+- * need planes array on DQBUF*/
+- return 0;
+- } else if (kp->length > VIDEO_MAX_PLANES) {
+- return -EINVAL;
++ if (num_planes == 0) {
++ /*
++ * num_planes == 0 is legal, e.g. when userspace doesn't
++ * need planes array on DQBUF
++ */
++ return put_user(NULL, &kp->m.planes);
+ }
++ if (num_planes > VIDEO_MAX_PLANES)
++ return -EINVAL;
+
+ if (get_user(p, &up->m.planes))
+ return -EFAULT;
+
+ uplane32 = compat_ptr(p);
+ if (!access_ok(VERIFY_READ, uplane32,
+- kp->length * sizeof(struct v4l2_plane32)))
++ num_planes * sizeof(*uplane32)))
+ return -EFAULT;
+
+- /* We don't really care if userspace decides to kill itself
+- * by passing a very big num_planes value */
+- uplane = compat_alloc_user_space(kp->length *
+- sizeof(struct v4l2_plane));
+- kp->m.planes = (__force struct v4l2_plane *)uplane;
++ /*
++ * We don't really care if userspace decides to kill itself
++ * by passing a very big num_planes value
++ */
++ if (aux_space < num_planes * sizeof(*uplane))
++ return -EFAULT;
+
+- for (num_planes = 0; num_planes < kp->length; num_planes++) {
+- ret = get_v4l2_plane32(uplane, uplane32, kp->memory);
++ uplane = aux_buf;
++ if (put_user((__force struct v4l2_plane *)uplane,
++ &kp->m.planes))
++ return -EFAULT;
++
++ while (num_planes--) {
++ ret = get_v4l2_plane32(uplane, uplane32, memory);
+ if (ret)
+ return ret;
+- ++uplane;
+- ++uplane32;
++ uplane++;
++ uplane32++;
+ }
+ } else {
+- switch (kp->memory) {
++ switch (memory) {
+ case V4L2_MEMORY_MMAP:
+- if (get_user(kp->m.offset, &up->m.offset))
++ case V4L2_MEMORY_OVERLAY:
++ if (assign_in_user(&kp->m.offset, &up->m.offset))
+ return -EFAULT;
+ break;
+- case V4L2_MEMORY_USERPTR:
+- {
+- compat_long_t tmp;
++ case V4L2_MEMORY_USERPTR: {
++ compat_ulong_t userptr;
+
+- if (get_user(tmp, &up->m.userptr))
+- return -EFAULT;
+-
+- kp->m.userptr = (unsigned long)compat_ptr(tmp);
+- }
+- break;
+- case V4L2_MEMORY_OVERLAY:
+- if (get_user(kp->m.offset, &up->m.offset))
++ if (get_user(userptr, &up->m.userptr) ||
++ put_user((unsigned long)compat_ptr(userptr),
++ &kp->m.userptr))
+ return -EFAULT;
+ break;
++ }
+ case V4L2_MEMORY_DMABUF:
+- if (get_user(kp->m.fd, &up->m.fd))
++ if (assign_in_user(&kp->m.fd, &up->m.fd))
+ return -EFAULT;
+ break;
+ }
+@@ -512,65 +578,70 @@ static int get_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user
+ return 0;
+ }
+
+-static int put_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user *up)
++static int put_v4l2_buffer32(struct v4l2_buffer __user *kp,
++ struct v4l2_buffer32 __user *up)
+ {
++ u32 type;
++ u32 length;
++ enum v4l2_memory memory;
+ struct v4l2_plane32 __user *uplane32;
+ struct v4l2_plane __user *uplane;
+ compat_caddr_t p;
+- int num_planes;
+ int ret;
+
+- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_buffer32)) ||
+- put_user(kp->index, &up->index) ||
+- put_user(kp->type, &up->type) ||
+- put_user(kp->flags, &up->flags) ||
+- put_user(kp->memory, &up->memory))
+- return -EFAULT;
++ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
++ assign_in_user(&up->index, &kp->index) ||
++ get_user(type, &kp->type) ||
++ put_user(type, &up->type) ||
++ assign_in_user(&up->flags, &kp->flags) ||
++ get_user(memory, &kp->memory) ||
++ put_user(memory, &up->memory))
++ return -EFAULT;
+
+- if (put_user(kp->bytesused, &up->bytesused) ||
+- put_user(kp->field, &up->field) ||
+- put_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
+- put_user(kp->timestamp.tv_usec, &up->timestamp.tv_usec) ||
+- copy_to_user(&up->timecode, &kp->timecode, sizeof(struct v4l2_timecode)) ||
+- put_user(kp->sequence, &up->sequence) ||
+- put_user(kp->reserved2, &up->reserved2) ||
+- put_user(kp->reserved, &up->reserved) ||
+- put_user(kp->length, &up->length))
+- return -EFAULT;
++ if (assign_in_user(&up->bytesused, &kp->bytesused) ||
++ assign_in_user(&up->field, &kp->field) ||
++ assign_in_user(&up->timestamp.tv_sec, &kp->timestamp.tv_sec) ||
++ assign_in_user(&up->timestamp.tv_usec, &kp->timestamp.tv_usec) ||
++ copy_in_user(&up->timecode, &kp->timecode, sizeof(kp->timecode)) ||
++ assign_in_user(&up->sequence, &kp->sequence) ||
++ assign_in_user(&up->reserved2, &kp->reserved2) ||
++ assign_in_user(&up->reserved, &kp->reserved) ||
++ get_user(length, &kp->length) ||
++ put_user(length, &up->length))
++ return -EFAULT;
++
++ if (V4L2_TYPE_IS_MULTIPLANAR(type)) {
++ u32 num_planes = length;
+
+- if (V4L2_TYPE_IS_MULTIPLANAR(kp->type)) {
+- num_planes = kp->length;
+ if (num_planes == 0)
+ return 0;
+
+- uplane = (__force struct v4l2_plane __user *)kp->m.planes;
++ if (get_user(uplane, ((__force struct v4l2_plane __user **)&kp->m.planes)))
++ return -EFAULT;
+ if (get_user(p, &up->m.planes))
+ return -EFAULT;
+ uplane32 = compat_ptr(p);
+
+- while (--num_planes >= 0) {
+- ret = put_v4l2_plane32(uplane, uplane32, kp->memory);
++ while (num_planes--) {
++ ret = put_v4l2_plane32(uplane, uplane32, memory);
+ if (ret)
+ return ret;
+ ++uplane;
+ ++uplane32;
+ }
+ } else {
+- switch (kp->memory) {
++ switch (memory) {
+ case V4L2_MEMORY_MMAP:
+- if (put_user(kp->m.offset, &up->m.offset))
++ case V4L2_MEMORY_OVERLAY:
++ if (assign_in_user(&up->m.offset, &kp->m.offset))
+ return -EFAULT;
+ break;
+ case V4L2_MEMORY_USERPTR:
+- if (put_user(kp->m.userptr, &up->m.userptr))
+- return -EFAULT;
+- break;
+- case V4L2_MEMORY_OVERLAY:
+- if (put_user(kp->m.offset, &up->m.offset))
++ if (assign_in_user(&up->m.userptr, &kp->m.userptr))
+ return -EFAULT;
+ break;
+ case V4L2_MEMORY_DMABUF:
+- if (put_user(kp->m.fd, &up->m.fd))
++ if (assign_in_user(&up->m.fd, &kp->m.fd))
+ return -EFAULT;
+ break;
+ }
+@@ -595,30 +666,33 @@ struct v4l2_framebuffer32 {
+ } fmt;
+ };
+
+-static int get_v4l2_framebuffer32(struct v4l2_framebuffer *kp, struct v4l2_framebuffer32 __user *up)
++static int get_v4l2_framebuffer32(struct v4l2_framebuffer __user *kp,
++ struct v4l2_framebuffer32 __user *up)
+ {
+- u32 tmp;
+-
+- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_framebuffer32)) ||
+- get_user(tmp, &up->base) ||
+- get_user(kp->capability, &up->capability) ||
+- get_user(kp->flags, &up->flags) ||
+- copy_from_user(&kp->fmt, &up->fmt, sizeof(up->fmt)))
+- return -EFAULT;
+- kp->base = (__force void *)compat_ptr(tmp);
++ compat_caddr_t tmp;
++
++ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
++ get_user(tmp, &up->base) ||
++ put_user((__force void *)compat_ptr(tmp), &kp->base) ||
++ assign_in_user(&kp->capability, &up->capability) ||
++ assign_in_user(&kp->flags, &up->flags) ||
++ copy_in_user(&kp->fmt, &up->fmt, sizeof(kp->fmt)))
++ return -EFAULT;
+ return 0;
+ }
+
+-static int put_v4l2_framebuffer32(struct v4l2_framebuffer *kp, struct v4l2_framebuffer32 __user *up)
++static int put_v4l2_framebuffer32(struct v4l2_framebuffer __user *kp,
++ struct v4l2_framebuffer32 __user *up)
+ {
+- u32 tmp = (u32)((unsigned long)kp->base);
+-
+- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_framebuffer32)) ||
+- put_user(tmp, &up->base) ||
+- put_user(kp->capability, &up->capability) ||
+- put_user(kp->flags, &up->flags) ||
+- copy_to_user(&up->fmt, &kp->fmt, sizeof(up->fmt)))
+- return -EFAULT;
++ void *base;
++
++ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
++ get_user(base, &kp->base) ||
++ put_user(ptr_to_compat(base), &up->base) ||
++ assign_in_user(&up->capability, &kp->capability) ||
++ assign_in_user(&up->flags, &kp->flags) ||
++ copy_in_user(&up->fmt, &kp->fmt, sizeof(kp->fmt)))
++ return -EFAULT;
+ return 0;
+ }
+
+@@ -634,18 +708,22 @@ struct v4l2_input32 {
+ __u32 reserved[3];
+ };
+
+-/* The 64-bit v4l2_input struct has extra padding at the end of the struct.
+- Otherwise it is identical to the 32-bit version. */
+-static inline int get_v4l2_input32(struct v4l2_input *kp, struct v4l2_input32 __user *up)
++/*
++ * The 64-bit v4l2_input struct has extra padding at the end of the struct.
++ * Otherwise it is identical to the 32-bit version.
++ */
++static inline int get_v4l2_input32(struct v4l2_input __user *kp,
++ struct v4l2_input32 __user *up)
+ {
+- if (copy_from_user(kp, up, sizeof(struct v4l2_input32)))
++ if (copy_in_user(kp, up, sizeof(*up)))
+ return -EFAULT;
+ return 0;
+ }
+
+-static inline int put_v4l2_input32(struct v4l2_input *kp, struct v4l2_input32 __user *up)
++static inline int put_v4l2_input32(struct v4l2_input __user *kp,
++ struct v4l2_input32 __user *up)
+ {
+- if (copy_to_user(up, kp, sizeof(struct v4l2_input32)))
++ if (copy_in_user(up, kp, sizeof(*up)))
+ return -EFAULT;
+ return 0;
+ }
+@@ -669,60 +747,95 @@ struct v4l2_ext_control32 {
+ };
+ } __attribute__ ((packed));
+
+-/* The following function really belong in v4l2-common, but that causes
+- a circular dependency between modules. We need to think about this, but
+- for now this will do. */
+-
+-/* Return non-zero if this control is a pointer type. Currently only
+- type STRING is a pointer type. */
+-static inline int ctrl_is_pointer(u32 id)
++/* Return true if this control is a pointer type. */
++static inline bool ctrl_is_pointer(struct file *file, u32 id)
+ {
+- switch (id) {
+- case V4L2_CID_RDS_TX_PS_NAME:
+- case V4L2_CID_RDS_TX_RADIO_TEXT:
+- return 1;
+- default:
+- return 0;
++ struct video_device *vdev = video_devdata(file);
++ struct v4l2_fh *fh = NULL;
++ struct v4l2_ctrl_handler *hdl = NULL;
++ struct v4l2_query_ext_ctrl qec = { id };
++ const struct v4l2_ioctl_ops *ops = vdev->ioctl_ops;
++
++ if (test_bit(V4L2_FL_USES_V4L2_FH, &vdev->flags))
++ fh = file->private_data;
++
++ if (fh && fh->ctrl_handler)
++ hdl = fh->ctrl_handler;
++ else if (vdev->ctrl_handler)
++ hdl = vdev->ctrl_handler;
++
++ if (hdl) {
++ struct v4l2_ctrl *ctrl = v4l2_ctrl_find(hdl, id);
++
++ return ctrl && ctrl->is_ptr;
+ }
++
++ if (!ops || !ops->vidioc_query_ext_ctrl)
++ return false;
++
++ return !ops->vidioc_query_ext_ctrl(file, fh, &qec) &&
++ (qec.flags & V4L2_CTRL_FLAG_HAS_PAYLOAD);
++}
++
++static int bufsize_v4l2_ext_controls(struct v4l2_ext_controls32 __user *up,
++ u32 *size)
++{
++ u32 count;
++
++ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
++ get_user(count, &up->count))
++ return -EFAULT;
++ if (count > V4L2_CID_MAX_CTRLS)
++ return -EINVAL;
++ *size = count * sizeof(struct v4l2_ext_control);
++ return 0;
+ }
+
+-static int get_v4l2_ext_controls32(struct v4l2_ext_controls *kp, struct v4l2_ext_controls32 __user *up)
++static int get_v4l2_ext_controls32(struct file *file,
++ struct v4l2_ext_controls __user *kp,
++ struct v4l2_ext_controls32 __user *up,
++ void __user *aux_buf, u32 aux_space)
+ {
+ struct v4l2_ext_control32 __user *ucontrols;
+ struct v4l2_ext_control __user *kcontrols;
+- unsigned int n;
++ u32 count;
++ u32 n;
+ compat_caddr_t p;
+
+- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_ext_controls32)) ||
+- get_user(kp->which, &up->which) ||
+- get_user(kp->count, &up->count) ||
+- get_user(kp->error_idx, &up->error_idx) ||
+- copy_from_user(kp->reserved, up->reserved,
+- sizeof(kp->reserved)))
+- return -EFAULT;
+- if (kp->count == 0) {
+- kp->controls = NULL;
+- return 0;
+- } else if (kp->count > V4L2_CID_MAX_CTRLS) {
++ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
++ assign_in_user(&kp->which, &up->which) ||
++ get_user(count, &up->count) ||
++ put_user(count, &kp->count) ||
++ assign_in_user(&kp->error_idx, &up->error_idx) ||
++ copy_in_user(kp->reserved, up->reserved, sizeof(kp->reserved)))
++ return -EFAULT;
++
++ if (count == 0)
++ return put_user(NULL, &kp->controls);
++ if (count > V4L2_CID_MAX_CTRLS)
+ return -EINVAL;
+- }
+ if (get_user(p, &up->controls))
+ return -EFAULT;
+ ucontrols = compat_ptr(p);
+- if (!access_ok(VERIFY_READ, ucontrols,
+- kp->count * sizeof(struct v4l2_ext_control32)))
++ if (!access_ok(VERIFY_READ, ucontrols, count * sizeof(*ucontrols)))
+ return -EFAULT;
+- kcontrols = compat_alloc_user_space(kp->count *
+- sizeof(struct v4l2_ext_control));
+- kp->controls = (__force struct v4l2_ext_control *)kcontrols;
+- for (n = 0; n < kp->count; n++) {
++ if (aux_space < count * sizeof(*kcontrols))
++ return -EFAULT;
++ kcontrols = aux_buf;
++ if (put_user((__force struct v4l2_ext_control *)kcontrols,
++ &kp->controls))
++ return -EFAULT;
++
++ for (n = 0; n < count; n++) {
+ u32 id;
+
+ if (copy_in_user(kcontrols, ucontrols, sizeof(*ucontrols)))
+ return -EFAULT;
++
+ if (get_user(id, &kcontrols->id))
+ return -EFAULT;
+- if (ctrl_is_pointer(id)) {
++
++ if (ctrl_is_pointer(file, id)) {
+ void __user *s;
+
+ if (get_user(p, &ucontrols->string))
+@@ -737,43 +850,55 @@ static int get_v4l2_ext_controls32(struct v4l2_ext_controls *kp, struct v4l2_ext
+ return 0;
+ }
+
+-static int put_v4l2_ext_controls32(struct v4l2_ext_controls *kp, struct v4l2_ext_controls32 __user *up)
++static int put_v4l2_ext_controls32(struct file *file,
++ struct v4l2_ext_controls __user *kp,
++ struct v4l2_ext_controls32 __user *up)
+ {
+ struct v4l2_ext_control32 __user *ucontrols;
+- struct v4l2_ext_control __user *kcontrols =
+- (__force struct v4l2_ext_control __user *)kp->controls;
+- int n = kp->count;
++ struct v4l2_ext_control __user *kcontrols;
++ u32 count;
++ u32 n;
+ compat_caddr_t p;
+
+- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_ext_controls32)) ||
+- put_user(kp->which, &up->which) ||
+- put_user(kp->count, &up->count) ||
+- put_user(kp->error_idx, &up->error_idx) ||
+- copy_to_user(up->reserved, kp->reserved, sizeof(up->reserved)))
+- return -EFAULT;
+- if (!kp->count)
+- return 0;
++ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
++ assign_in_user(&up->which, &kp->which) ||
++ get_user(count, &kp->count) ||
++ put_user(count, &up->count) ||
++ assign_in_user(&up->error_idx, &kp->error_idx) ||
++ copy_in_user(up->reserved, kp->reserved, sizeof(up->reserved)) ||
++ get_user(kcontrols, &kp->controls))
++ return -EFAULT;
+
++ if (!count)
++ return 0;
+ if (get_user(p, &up->controls))
+ return -EFAULT;
+ ucontrols = compat_ptr(p);
+- if (!access_ok(VERIFY_WRITE, ucontrols,
+- n * sizeof(struct v4l2_ext_control32)))
++ if (!access_ok(VERIFY_WRITE, ucontrols, count * sizeof(*ucontrols)))
+ return -EFAULT;
+
+- while (--n >= 0) {
+- unsigned size = sizeof(*ucontrols);
++ for (n = 0; n < count; n++) {
++ unsigned int size = sizeof(*ucontrols);
+ u32 id;
+
+- if (get_user(id, &kcontrols->id))
++ if (get_user(id, &kcontrols->id) ||
++ put_user(id, &ucontrols->id) ||
++ assign_in_user(&ucontrols->size, &kcontrols->size) ||
++ copy_in_user(&ucontrols->reserved2, &kcontrols->reserved2,
++ sizeof(ucontrols->reserved2)))
+ return -EFAULT;
+- /* Do not modify the pointer when copying a pointer control.
+- The contents of the pointer was changed, not the pointer
+- itself. */
+- if (ctrl_is_pointer(id))
++
++ /*
++ * Do not modify the pointer when copying a pointer control.
++ * The contents of the pointer was changed, not the pointer
++ * itself.
++ */
++ if (ctrl_is_pointer(file, id))
+ size -= sizeof(ucontrols->value64);
++
+ if (copy_in_user(ucontrols, kcontrols, size))
+ return -EFAULT;
++
+ ucontrols++;
+ kcontrols++;
+ }
+@@ -793,18 +918,19 @@ struct v4l2_event32 {
+ __u32 reserved[8];
+ };
+
+-static int put_v4l2_event32(struct v4l2_event *kp, struct v4l2_event32 __user *up)
++static int put_v4l2_event32(struct v4l2_event __user *kp,
++ struct v4l2_event32 __user *up)
+ {
+- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_event32)) ||
+- put_user(kp->type, &up->type) ||
+- copy_to_user(&up->u, &kp->u, sizeof(kp->u)) ||
+- put_user(kp->pending, &up->pending) ||
+- put_user(kp->sequence, &up->sequence) ||
+- put_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) ||
+- put_user(kp->timestamp.tv_nsec, &up->timestamp.tv_nsec) ||
+- put_user(kp->id, &up->id) ||
+- copy_to_user(up->reserved, kp->reserved, 8 * sizeof(__u32)))
+- return -EFAULT;
++ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
++ assign_in_user(&up->type, &kp->type) ||
++ copy_in_user(&up->u, &kp->u, sizeof(kp->u)) ||
++ assign_in_user(&up->pending, &kp->pending) ||
++ assign_in_user(&up->sequence, &kp->sequence) ||
++ assign_in_user(&up->timestamp.tv_sec, &kp->timestamp.tv_sec) ||
++ assign_in_user(&up->timestamp.tv_nsec, &kp->timestamp.tv_nsec) ||
++ assign_in_user(&up->id, &kp->id) ||
++ copy_in_user(up->reserved, kp->reserved, sizeof(up->reserved)))
++ return -EFAULT;
+ return 0;
+ }
+
+@@ -816,32 +942,35 @@ struct v4l2_edid32 {
+ compat_caddr_t edid;
+ };
+
+-static int get_v4l2_edid32(struct v4l2_edid *kp, struct v4l2_edid32 __user *up)
++static int get_v4l2_edid32(struct v4l2_edid __user *kp,
++ struct v4l2_edid32 __user *up)
+ {
+- u32 tmp;
+-
+- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_edid32)) ||
+- get_user(kp->pad, &up->pad) ||
+- get_user(kp->start_block, &up->start_block) ||
+- get_user(kp->blocks, &up->blocks) ||
+- get_user(tmp, &up->edid) ||
+- copy_from_user(kp->reserved, up->reserved, sizeof(kp->reserved)))
+- return -EFAULT;
+- kp->edid = (__force u8 *)compat_ptr(tmp);
++ compat_uptr_t tmp;
++
++ if (!access_ok(VERIFY_READ, up, sizeof(*up)) ||
++ assign_in_user(&kp->pad, &up->pad) ||
++ assign_in_user(&kp->start_block, &up->start_block) ||
++ assign_in_user(&kp->blocks, &up->blocks) ||
++ get_user(tmp, &up->edid) ||
++ put_user(compat_ptr(tmp), &kp->edid) ||
++ copy_in_user(kp->reserved, up->reserved, sizeof(kp->reserved)))
++ return -EFAULT;
+ return 0;
+ }
+
+-static int put_v4l2_edid32(struct v4l2_edid *kp, struct v4l2_edid32 __user *up)
++static int put_v4l2_edid32(struct v4l2_edid __user *kp,
++ struct v4l2_edid32 __user *up)
+ {
+- u32 tmp = (u32)((unsigned long)kp->edid);
+-
+- if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_edid32)) ||
+- put_user(kp->pad, &up->pad) ||
+- put_user(kp->start_block, &up->start_block) ||
+- put_user(kp->blocks, &up->blocks) ||
+- put_user(tmp, &up->edid) ||
+- copy_to_user(up->reserved, kp->reserved, sizeof(up->reserved)))
+- return -EFAULT;
++ void *edid;
++
++ if (!access_ok(VERIFY_WRITE, up, sizeof(*up)) ||
++ assign_in_user(&up->pad, &kp->pad) ||
++ assign_in_user(&up->start_block, &kp->start_block) ||
++ assign_in_user(&up->blocks, &kp->blocks) ||
++ get_user(edid, &kp->edid) ||
++ put_user(ptr_to_compat(edid), &up->edid) ||
++ copy_in_user(up->reserved, kp->reserved, sizeof(up->reserved)))
++ return -EFAULT;
+ return 0;
+ }
+
+@@ -873,22 +1002,23 @@ static int put_v4l2_edid32(struct v4l2_edid *kp, struct v4l2_edid32 __user *up)
+ #define VIDIOC_G_OUTPUT32 _IOR ('V', 46, s32)
+ #define VIDIOC_S_OUTPUT32 _IOWR('V', 47, s32)
+
++static int alloc_userspace(unsigned int size, u32 aux_space,
++ void __user **up_native)
++{
++ *up_native = compat_alloc_user_space(size + aux_space);
++ if (!*up_native)
++ return -ENOMEM;
++ if (clear_user(*up_native, size))
++ return -EFAULT;
++ return 0;
++}
++
+ static long do_video_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ {
+- union {
+- struct v4l2_format v2f;
+- struct v4l2_buffer v2b;
+- struct v4l2_framebuffer v2fb;
+- struct v4l2_input v2i;
+- struct v4l2_standard v2s;
+- struct v4l2_ext_controls v2ecs;
+- struct v4l2_event v2ev;
+- struct v4l2_create_buffers v2crt;
+- struct v4l2_edid v2edid;
+- unsigned long vx;
+- int vi;
+- } karg;
+ void __user *up = compat_ptr(arg);
++ void __user *up_native = NULL;
++ void __user *aux_buf;
++ u32 aux_space;
+ int compatible_arg = 1;
+ long err = 0;
+
+@@ -927,30 +1057,52 @@ static long do_video_ioctl(struct file *file, unsigned int cmd, unsigned long ar
+ case VIDIOC_STREAMOFF:
+ case VIDIOC_S_INPUT:
+ case VIDIOC_S_OUTPUT:
+- err = get_user(karg.vi, (s32 __user *)up);
++ err = alloc_userspace(sizeof(unsigned int), 0, &up_native);
++ if (!err && assign_in_user((unsigned int __user *)up_native,
++ (compat_uint_t __user *)up))
++ err = -EFAULT;
+ compatible_arg = 0;
+ break;
+
+ case VIDIOC_G_INPUT:
+ case VIDIOC_G_OUTPUT:
++ err = alloc_userspace(sizeof(unsigned int), 0, &up_native);
+ compatible_arg = 0;
+ break;
+
+ case VIDIOC_G_EDID:
+ case VIDIOC_S_EDID:
+- err = get_v4l2_edid32(&karg.v2edid, up);
++ err = alloc_userspace(sizeof(struct v4l2_edid), 0, &up_native);
++ if (!err)
++ err = get_v4l2_edid32(up_native, up);
+ compatible_arg = 0;
+ break;
+
+ case VIDIOC_G_FMT:
+ case VIDIOC_S_FMT:
+ case VIDIOC_TRY_FMT:
+- err = get_v4l2_format32(&karg.v2f, up);
++ err = bufsize_v4l2_format(up, &aux_space);
++ if (!err)
++ err = alloc_userspace(sizeof(struct v4l2_format),
++ aux_space, &up_native);
++ if (!err) {
++ aux_buf = up_native + sizeof(struct v4l2_format);
++ err = get_v4l2_format32(up_native, up,
++ aux_buf, aux_space);
++ }
+ compatible_arg = 0;
+ break;
+
+ case VIDIOC_CREATE_BUFS:
+- err = get_v4l2_create32(&karg.v2crt, up);
++ err = bufsize_v4l2_create(up, &aux_space);
++ if (!err)
++ err = alloc_userspace(sizeof(struct v4l2_create_buffers),
++ aux_space, &up_native);
++ if (!err) {
++ aux_buf = up_native + sizeof(struct v4l2_create_buffers);
++ err = get_v4l2_create32(up_native, up,
++ aux_buf, aux_space);
++ }
+ compatible_arg = 0;
+ break;
+
+@@ -958,36 +1110,63 @@ static long do_video_ioctl(struct file *file, unsigned int cmd, unsigned long ar
+ case VIDIOC_QUERYBUF:
+ case VIDIOC_QBUF:
+ case VIDIOC_DQBUF:
+- err = get_v4l2_buffer32(&karg.v2b, up);
++ err = bufsize_v4l2_buffer(up, &aux_space);
++ if (!err)
++ err = alloc_userspace(sizeof(struct v4l2_buffer),
++ aux_space, &up_native);
++ if (!err) {
++ aux_buf = up_native + sizeof(struct v4l2_buffer);
++ err = get_v4l2_buffer32(up_native, up,
++ aux_buf, aux_space);
++ }
+ compatible_arg = 0;
+ break;
+
+ case VIDIOC_S_FBUF:
+- err = get_v4l2_framebuffer32(&karg.v2fb, up);
++ err = alloc_userspace(sizeof(struct v4l2_framebuffer), 0,
++ &up_native);
++ if (!err)
++ err = get_v4l2_framebuffer32(up_native, up);
+ compatible_arg = 0;
+ break;
+
+ case VIDIOC_G_FBUF:
++ err = alloc_userspace(sizeof(struct v4l2_framebuffer), 0,
++ &up_native);
+ compatible_arg = 0;
+ break;
+
+ case VIDIOC_ENUMSTD:
+- err = get_v4l2_standard32(&karg.v2s, up);
++ err = alloc_userspace(sizeof(struct v4l2_standard), 0,
++ &up_native);
++ if (!err)
++ err = get_v4l2_standard32(up_native, up);
+ compatible_arg = 0;
+ break;
+
+ case VIDIOC_ENUMINPUT:
+- err = get_v4l2_input32(&karg.v2i, up);
++ err = alloc_userspace(sizeof(struct v4l2_input), 0, &up_native);
++ if (!err)
++ err = get_v4l2_input32(up_native, up);
+ compatible_arg = 0;
+ break;
+
+ case VIDIOC_G_EXT_CTRLS:
+ case VIDIOC_S_EXT_CTRLS:
+ case VIDIOC_TRY_EXT_CTRLS:
+- err = get_v4l2_ext_controls32(&karg.v2ecs, up);
++ err = bufsize_v4l2_ext_controls(up, &aux_space);
++ if (!err)
++ err = alloc_userspace(sizeof(struct v4l2_ext_controls),
++ aux_space, &up_native);
++ if (!err) {
++ aux_buf = up_native + sizeof(struct v4l2_ext_controls);
++ err = get_v4l2_ext_controls32(file, up_native, up,
++ aux_buf, aux_space);
++ }
+ compatible_arg = 0;
+ break;
+ case VIDIOC_DQEVENT:
++ err = alloc_userspace(sizeof(struct v4l2_event), 0, &up_native);
+ compatible_arg = 0;
+ break;
+ }
+@@ -996,26 +1175,26 @@ static long do_video_ioctl(struct file *file, unsigned int cmd, unsigned long ar
+
+ if (compatible_arg)
+ err = native_ioctl(file, cmd, (unsigned long)up);
+- else {
+- mm_segment_t old_fs = get_fs();
++ else
++ err = native_ioctl(file, cmd, (unsigned long)up_native);
+
+- set_fs(KERNEL_DS);
+- err = native_ioctl(file, cmd, (unsigned long)&karg);
+- set_fs(old_fs);
+- }
++ if (err == -ENOTTY)
++ return err;
+
+- /* Special case: even after an error we need to put the
+- results back for these ioctls since the error_idx will
+- contain information on which control failed. */
++ /*
++ * Special case: even after an error we need to put the
++ * results back for these ioctls since the error_idx will
++ * contain information on which control failed.
++ */
+ switch (cmd) {
+ case VIDIOC_G_EXT_CTRLS:
+ case VIDIOC_S_EXT_CTRLS:
+ case VIDIOC_TRY_EXT_CTRLS:
+- if (put_v4l2_ext_controls32(&karg.v2ecs, up))
++ if (put_v4l2_ext_controls32(file, up_native, up))
+ err = -EFAULT;
+ break;
+ case VIDIOC_S_EDID:
+- if (put_v4l2_edid32(&karg.v2edid, up))
++ if (put_v4l2_edid32(up_native, up))
+ err = -EFAULT;
+ break;
+ }
+@@ -1027,43 +1206,46 @@ static long do_video_ioctl(struct file *file, unsigned int cmd, unsigned long ar
+ case VIDIOC_S_OUTPUT:
+ case VIDIOC_G_INPUT:
+ case VIDIOC_G_OUTPUT:
+- err = put_user(((s32)karg.vi), (s32 __user *)up);
++ if (assign_in_user((compat_uint_t __user *)up,
++ ((unsigned int __user *)up_native)))
++ err = -EFAULT;
+ break;
+
+ case VIDIOC_G_FBUF:
+- err = put_v4l2_framebuffer32(&karg.v2fb, up);
++ err = put_v4l2_framebuffer32(up_native, up);
+ break;
+
+ case VIDIOC_DQEVENT:
+- err = put_v4l2_event32(&karg.v2ev, up);
++ err = put_v4l2_event32(up_native, up);
+ break;
+
+ case VIDIOC_G_EDID:
+- err = put_v4l2_edid32(&karg.v2edid, up);
++ err = put_v4l2_edid32(up_native, up);
+ break;
+
+ case VIDIOC_G_FMT:
+ case VIDIOC_S_FMT:
+ case VIDIOC_TRY_FMT:
+- err = put_v4l2_format32(&karg.v2f, up);
++ err = put_v4l2_format32(up_native, up);
+ break;
+
+ case VIDIOC_CREATE_BUFS:
+- err = put_v4l2_create32(&karg.v2crt, up);
++ err = put_v4l2_create32(up_native, up);
+ break;
+
++ case VIDIOC_PREPARE_BUF:
+ case VIDIOC_QUERYBUF:
+ case VIDIOC_QBUF:
+ case VIDIOC_DQBUF:
+- err = put_v4l2_buffer32(&karg.v2b, up);
++ err = put_v4l2_buffer32(up_native, up);
+ break;
+
+ case VIDIOC_ENUMSTD:
+- err = put_v4l2_standard32(&karg.v2s, up);
++ err = put_v4l2_standard32(up_native, up);
+ break;
+
+ case VIDIOC_ENUMINPUT:
+- err = put_v4l2_input32(&karg.v2i, up);
++ err = put_v4l2_input32(up_native, up);
+ break;
+ }
+ return err;
+diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
+index 79614992ee21..89e0878ce0a0 100644
+--- a/drivers/media/v4l2-core/v4l2-ioctl.c
++++ b/drivers/media/v4l2-core/v4l2-ioctl.c
+@@ -1311,52 +1311,50 @@ static int v4l_enum_fmt(const struct v4l2_ioctl_ops *ops,
+ struct file *file, void *fh, void *arg)
+ {
+ struct v4l2_fmtdesc *p = arg;
+- struct video_device *vfd = video_devdata(file);
+- bool is_vid = vfd->vfl_type == VFL_TYPE_GRABBER;
+- bool is_sdr = vfd->vfl_type == VFL_TYPE_SDR;
+- bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
+- bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
+- bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
+- int ret = -EINVAL;
++ int ret = check_fmt(file, p->type);
++
++ if (ret)
++ return ret;
++ ret = -EINVAL;
+
+ switch (p->type) {
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+- if (unlikely(!is_rx || (!is_vid && !is_tch) || !ops->vidioc_enum_fmt_vid_cap))
++ if (unlikely(!ops->vidioc_enum_fmt_vid_cap))
+ break;
+ ret = ops->vidioc_enum_fmt_vid_cap(file, fh, arg);
+ break;
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
+- if (unlikely(!is_rx || !is_vid || !ops->vidioc_enum_fmt_vid_cap_mplane))
++ if (unlikely(!ops->vidioc_enum_fmt_vid_cap_mplane))
+ break;
+ ret = ops->vidioc_enum_fmt_vid_cap_mplane(file, fh, arg);
+ break;
+ case V4L2_BUF_TYPE_VIDEO_OVERLAY:
+- if (unlikely(!is_rx || !is_vid || !ops->vidioc_enum_fmt_vid_overlay))
++ if (unlikely(!ops->vidioc_enum_fmt_vid_overlay))
+ break;
+ ret = ops->vidioc_enum_fmt_vid_overlay(file, fh, arg);
+ break;
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+- if (unlikely(!is_tx || !is_vid || !ops->vidioc_enum_fmt_vid_out))
++ if (unlikely(!ops->vidioc_enum_fmt_vid_out))
+ break;
+ ret = ops->vidioc_enum_fmt_vid_out(file, fh, arg);
+ break;
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
+- if (unlikely(!is_tx || !is_vid || !ops->vidioc_enum_fmt_vid_out_mplane))
++ if (unlikely(!ops->vidioc_enum_fmt_vid_out_mplane))
+ break;
+ ret = ops->vidioc_enum_fmt_vid_out_mplane(file, fh, arg);
+ break;
+ case V4L2_BUF_TYPE_SDR_CAPTURE:
+- if (unlikely(!is_rx || !is_sdr || !ops->vidioc_enum_fmt_sdr_cap))
++ if (unlikely(!ops->vidioc_enum_fmt_sdr_cap))
+ break;
+ ret = ops->vidioc_enum_fmt_sdr_cap(file, fh, arg);
+ break;
+ case V4L2_BUF_TYPE_SDR_OUTPUT:
+- if (unlikely(!is_tx || !is_sdr || !ops->vidioc_enum_fmt_sdr_out))
++ if (unlikely(!ops->vidioc_enum_fmt_sdr_out))
+ break;
+ ret = ops->vidioc_enum_fmt_sdr_out(file, fh, arg);
+ break;
+ case V4L2_BUF_TYPE_META_CAPTURE:
+- if (unlikely(!is_rx || !is_vid || !ops->vidioc_enum_fmt_meta_cap))
++ if (unlikely(!ops->vidioc_enum_fmt_meta_cap))
+ break;
+ ret = ops->vidioc_enum_fmt_meta_cap(file, fh, arg);
+ break;
+@@ -1370,13 +1368,10 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
+ struct file *file, void *fh, void *arg)
+ {
+ struct v4l2_format *p = arg;
+- struct video_device *vfd = video_devdata(file);
+- bool is_vid = vfd->vfl_type == VFL_TYPE_GRABBER;
+- bool is_sdr = vfd->vfl_type == VFL_TYPE_SDR;
+- bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
+- bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
+- bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
+- int ret;
++ int ret = check_fmt(file, p->type);
++
++ if (ret)
++ return ret;
+
+ /*
+ * fmt can't be cleared for these overlay types due to the 'clips'
+@@ -1404,7 +1399,7 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
+
+ switch (p->type) {
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+- if (unlikely(!is_rx || (!is_vid && !is_tch) || !ops->vidioc_g_fmt_vid_cap))
++ if (unlikely(!ops->vidioc_g_fmt_vid_cap))
+ break;
+ p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
+ ret = ops->vidioc_g_fmt_vid_cap(file, fh, arg);
+@@ -1412,23 +1407,15 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
+ p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
+ return ret;
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
+- if (unlikely(!is_rx || !is_vid || !ops->vidioc_g_fmt_vid_cap_mplane))
+- break;
+ return ops->vidioc_g_fmt_vid_cap_mplane(file, fh, arg);
+ case V4L2_BUF_TYPE_VIDEO_OVERLAY:
+- if (unlikely(!is_rx || !is_vid || !ops->vidioc_g_fmt_vid_overlay))
+- break;
+ return ops->vidioc_g_fmt_vid_overlay(file, fh, arg);
+ case V4L2_BUF_TYPE_VBI_CAPTURE:
+- if (unlikely(!is_rx || is_vid || !ops->vidioc_g_fmt_vbi_cap))
+- break;
+ return ops->vidioc_g_fmt_vbi_cap(file, fh, arg);
+ case V4L2_BUF_TYPE_SLICED_VBI_CAPTURE:
+- if (unlikely(!is_rx || is_vid || !ops->vidioc_g_fmt_sliced_vbi_cap))
+- break;
+ return ops->vidioc_g_fmt_sliced_vbi_cap(file, fh, arg);
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+- if (unlikely(!is_tx || !is_vid || !ops->vidioc_g_fmt_vid_out))
++ if (unlikely(!ops->vidioc_g_fmt_vid_out))
+ break;
+ p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
+ ret = ops->vidioc_g_fmt_vid_out(file, fh, arg);
+@@ -1436,32 +1423,18 @@ static int v4l_g_fmt(const struct v4l2_ioctl_ops *ops,
+ p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
+ return ret;
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
+- if (unlikely(!is_tx || !is_vid || !ops->vidioc_g_fmt_vid_out_mplane))
+- break;
+ return ops->vidioc_g_fmt_vid_out_mplane(file, fh, arg);
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:
+- if (unlikely(!is_tx || !is_vid || !ops->vidioc_g_fmt_vid_out_overlay))
+- break;
+ return ops->vidioc_g_fmt_vid_out_overlay(file, fh, arg);
+ case V4L2_BUF_TYPE_VBI_OUTPUT:
+- if (unlikely(!is_tx || is_vid || !ops->vidioc_g_fmt_vbi_out))
+- break;
+ return ops->vidioc_g_fmt_vbi_out(file, fh, arg);
+ case V4L2_BUF_TYPE_SLICED_VBI_OUTPUT:
+- if (unlikely(!is_tx || is_vid || !ops->vidioc_g_fmt_sliced_vbi_out))
+- break;
+ return ops->vidioc_g_fmt_sliced_vbi_out(file, fh, arg);
+ case V4L2_BUF_TYPE_SDR_CAPTURE:
+- if (unlikely(!is_rx || !is_sdr || !ops->vidioc_g_fmt_sdr_cap))
+- break;
+ return ops->vidioc_g_fmt_sdr_cap(file, fh, arg);
+ case V4L2_BUF_TYPE_SDR_OUTPUT:
+- if (unlikely(!is_tx || !is_sdr || !ops->vidioc_g_fmt_sdr_out))
+- break;
+ return ops->vidioc_g_fmt_sdr_out(file, fh, arg);
+ case V4L2_BUF_TYPE_META_CAPTURE:
+- if (unlikely(!is_rx || !is_vid || !ops->vidioc_g_fmt_meta_cap))
+- break;
+ return ops->vidioc_g_fmt_meta_cap(file, fh, arg);
+ }
+ return -EINVAL;
+@@ -1487,12 +1460,10 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
+ {
+ struct v4l2_format *p = arg;
+ struct video_device *vfd = video_devdata(file);
+- bool is_vid = vfd->vfl_type == VFL_TYPE_GRABBER;
+- bool is_sdr = vfd->vfl_type == VFL_TYPE_SDR;
+- bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
+- bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
+- bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
+- int ret;
++ int ret = check_fmt(file, p->type);
++
++ if (ret)
++ return ret;
+
+ ret = v4l_enable_media_source(vfd);
+ if (ret)
+@@ -1501,37 +1472,37 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
+
+ switch (p->type) {
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+- if (unlikely(!is_rx || (!is_vid && !is_tch) || !ops->vidioc_s_fmt_vid_cap))
++ if (unlikely(!ops->vidioc_s_fmt_vid_cap))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.pix);
+ ret = ops->vidioc_s_fmt_vid_cap(file, fh, arg);
+ /* just in case the driver zeroed it again */
+ p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
+- if (is_tch)
++ if (vfd->vfl_type == VFL_TYPE_TOUCH)
+ v4l_pix_format_touch(&p->fmt.pix);
+ return ret;
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
+- if (unlikely(!is_rx || !is_vid || !ops->vidioc_s_fmt_vid_cap_mplane))
++ if (unlikely(!ops->vidioc_s_fmt_vid_cap_mplane))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.pix_mp.xfer_func);
+ return ops->vidioc_s_fmt_vid_cap_mplane(file, fh, arg);
+ case V4L2_BUF_TYPE_VIDEO_OVERLAY:
+- if (unlikely(!is_rx || !is_vid || !ops->vidioc_s_fmt_vid_overlay))
++ if (unlikely(!ops->vidioc_s_fmt_vid_overlay))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.win);
+ return ops->vidioc_s_fmt_vid_overlay(file, fh, arg);
+ case V4L2_BUF_TYPE_VBI_CAPTURE:
+- if (unlikely(!is_rx || is_vid || !ops->vidioc_s_fmt_vbi_cap))
++ if (unlikely(!ops->vidioc_s_fmt_vbi_cap))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.vbi);
+ return ops->vidioc_s_fmt_vbi_cap(file, fh, arg);
+ case V4L2_BUF_TYPE_SLICED_VBI_CAPTURE:
+- if (unlikely(!is_rx || is_vid || !ops->vidioc_s_fmt_sliced_vbi_cap))
++ if (unlikely(!ops->vidioc_s_fmt_sliced_vbi_cap))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.sliced);
+ return ops->vidioc_s_fmt_sliced_vbi_cap(file, fh, arg);
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+- if (unlikely(!is_tx || !is_vid || !ops->vidioc_s_fmt_vid_out))
++ if (unlikely(!ops->vidioc_s_fmt_vid_out))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.pix);
+ ret = ops->vidioc_s_fmt_vid_out(file, fh, arg);
+@@ -1539,37 +1510,37 @@ static int v4l_s_fmt(const struct v4l2_ioctl_ops *ops,
+ p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
+ return ret;
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
+- if (unlikely(!is_tx || !is_vid || !ops->vidioc_s_fmt_vid_out_mplane))
++ if (unlikely(!ops->vidioc_s_fmt_vid_out_mplane))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.pix_mp.xfer_func);
+ return ops->vidioc_s_fmt_vid_out_mplane(file, fh, arg);
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:
+- if (unlikely(!is_tx || !is_vid || !ops->vidioc_s_fmt_vid_out_overlay))
++ if (unlikely(!ops->vidioc_s_fmt_vid_out_overlay))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.win);
+ return ops->vidioc_s_fmt_vid_out_overlay(file, fh, arg);
+ case V4L2_BUF_TYPE_VBI_OUTPUT:
+- if (unlikely(!is_tx || is_vid || !ops->vidioc_s_fmt_vbi_out))
++ if (unlikely(!ops->vidioc_s_fmt_vbi_out))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.vbi);
+ return ops->vidioc_s_fmt_vbi_out(file, fh, arg);
+ case V4L2_BUF_TYPE_SLICED_VBI_OUTPUT:
+- if (unlikely(!is_tx || is_vid || !ops->vidioc_s_fmt_sliced_vbi_out))
++ if (unlikely(!ops->vidioc_s_fmt_sliced_vbi_out))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.sliced);
+ return ops->vidioc_s_fmt_sliced_vbi_out(file, fh, arg);
+ case V4L2_BUF_TYPE_SDR_CAPTURE:
+- if (unlikely(!is_rx || !is_sdr || !ops->vidioc_s_fmt_sdr_cap))
++ if (unlikely(!ops->vidioc_s_fmt_sdr_cap))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.sdr);
+ return ops->vidioc_s_fmt_sdr_cap(file, fh, arg);
+ case V4L2_BUF_TYPE_SDR_OUTPUT:
+- if (unlikely(!is_tx || !is_sdr || !ops->vidioc_s_fmt_sdr_out))
++ if (unlikely(!ops->vidioc_s_fmt_sdr_out))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.sdr);
+ return ops->vidioc_s_fmt_sdr_out(file, fh, arg);
+ case V4L2_BUF_TYPE_META_CAPTURE:
+- if (unlikely(!is_rx || !is_vid || !ops->vidioc_s_fmt_meta_cap))
++ if (unlikely(!ops->vidioc_s_fmt_meta_cap))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.meta);
+ return ops->vidioc_s_fmt_meta_cap(file, fh, arg);
+@@ -1581,19 +1552,16 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
+ struct file *file, void *fh, void *arg)
+ {
+ struct v4l2_format *p = arg;
+- struct video_device *vfd = video_devdata(file);
+- bool is_vid = vfd->vfl_type == VFL_TYPE_GRABBER;
+- bool is_sdr = vfd->vfl_type == VFL_TYPE_SDR;
+- bool is_tch = vfd->vfl_type == VFL_TYPE_TOUCH;
+- bool is_rx = vfd->vfl_dir != VFL_DIR_TX;
+- bool is_tx = vfd->vfl_dir != VFL_DIR_RX;
+- int ret;
++ int ret = check_fmt(file, p->type);
++
++ if (ret)
++ return ret;
+
+ v4l_sanitize_format(p);
+
+ switch (p->type) {
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE:
+- if (unlikely(!is_rx || (!is_vid && !is_tch) || !ops->vidioc_try_fmt_vid_cap))
++ if (unlikely(!ops->vidioc_try_fmt_vid_cap))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.pix);
+ ret = ops->vidioc_try_fmt_vid_cap(file, fh, arg);
+@@ -1601,27 +1569,27 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
+ p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
+ return ret;
+ case V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE:
+- if (unlikely(!is_rx || !is_vid || !ops->vidioc_try_fmt_vid_cap_mplane))
++ if (unlikely(!ops->vidioc_try_fmt_vid_cap_mplane))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.pix_mp.xfer_func);
+ return ops->vidioc_try_fmt_vid_cap_mplane(file, fh, arg);
+ case V4L2_BUF_TYPE_VIDEO_OVERLAY:
+- if (unlikely(!is_rx || !is_vid || !ops->vidioc_try_fmt_vid_overlay))
++ if (unlikely(!ops->vidioc_try_fmt_vid_overlay))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.win);
+ return ops->vidioc_try_fmt_vid_overlay(file, fh, arg);
+ case V4L2_BUF_TYPE_VBI_CAPTURE:
+- if (unlikely(!is_rx || is_vid || !ops->vidioc_try_fmt_vbi_cap))
++ if (unlikely(!ops->vidioc_try_fmt_vbi_cap))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.vbi);
+ return ops->vidioc_try_fmt_vbi_cap(file, fh, arg);
+ case V4L2_BUF_TYPE_SLICED_VBI_CAPTURE:
+- if (unlikely(!is_rx || is_vid || !ops->vidioc_try_fmt_sliced_vbi_cap))
++ if (unlikely(!ops->vidioc_try_fmt_sliced_vbi_cap))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.sliced);
+ return ops->vidioc_try_fmt_sliced_vbi_cap(file, fh, arg);
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT:
+- if (unlikely(!is_tx || !is_vid || !ops->vidioc_try_fmt_vid_out))
++ if (unlikely(!ops->vidioc_try_fmt_vid_out))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.pix);
+ ret = ops->vidioc_try_fmt_vid_out(file, fh, arg);
+@@ -1629,37 +1597,37 @@ static int v4l_try_fmt(const struct v4l2_ioctl_ops *ops,
+ p->fmt.pix.priv = V4L2_PIX_FMT_PRIV_MAGIC;
+ return ret;
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE:
+- if (unlikely(!is_tx || !is_vid || !ops->vidioc_try_fmt_vid_out_mplane))
++ if (unlikely(!ops->vidioc_try_fmt_vid_out_mplane))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.pix_mp.xfer_func);
+ return ops->vidioc_try_fmt_vid_out_mplane(file, fh, arg);
+ case V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY:
+- if (unlikely(!is_tx || !is_vid || !ops->vidioc_try_fmt_vid_out_overlay))
++ if (unlikely(!ops->vidioc_try_fmt_vid_out_overlay))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.win);
+ return ops->vidioc_try_fmt_vid_out_overlay(file, fh, arg);
+ case V4L2_BUF_TYPE_VBI_OUTPUT:
+- if (unlikely(!is_tx || is_vid || !ops->vidioc_try_fmt_vbi_out))
++ if (unlikely(!ops->vidioc_try_fmt_vbi_out))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.vbi);
+ return ops->vidioc_try_fmt_vbi_out(file, fh, arg);
+ case V4L2_BUF_TYPE_SLICED_VBI_OUTPUT:
+- if (unlikely(!is_tx || is_vid || !ops->vidioc_try_fmt_sliced_vbi_out))
++ if (unlikely(!ops->vidioc_try_fmt_sliced_vbi_out))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.sliced);
+ return ops->vidioc_try_fmt_sliced_vbi_out(file, fh, arg);
+ case V4L2_BUF_TYPE_SDR_CAPTURE:
+- if (unlikely(!is_rx || !is_sdr || !ops->vidioc_try_fmt_sdr_cap))
++ if (unlikely(!ops->vidioc_try_fmt_sdr_cap))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.sdr);
+ return ops->vidioc_try_fmt_sdr_cap(file, fh, arg);
+ case V4L2_BUF_TYPE_SDR_OUTPUT:
+- if (unlikely(!is_tx || !is_sdr || !ops->vidioc_try_fmt_sdr_out))
++ if (unlikely(!ops->vidioc_try_fmt_sdr_out))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.sdr);
+ return ops->vidioc_try_fmt_sdr_out(file, fh, arg);
+ case V4L2_BUF_TYPE_META_CAPTURE:
+- if (unlikely(!is_rx || !is_vid || !ops->vidioc_try_fmt_meta_cap))
++ if (unlikely(!ops->vidioc_try_fmt_meta_cap))
+ break;
+ CLEAR_AFTER_FIELD(p, fmt.meta);
+ return ops->vidioc_try_fmt_meta_cap(file, fh, arg);
+@@ -2927,8 +2895,11 @@ video_usercopy(struct file *file, unsigned int cmd, unsigned long arg,
+
+ /* Handles IOCTL */
+ err = func(file, cmd, parg);
+- if (err == -ENOIOCTLCMD)
++ if (err == -ENOTTY || err == -ENOIOCTLCMD) {
+ err = -ENOTTY;
++ goto out;
++ }
++
+ if (err == 0) {
+ if (cmd == VIDIOC_DQBUF)
+ trace_v4l2_dqbuf(video_devdata(file)->minor, parg);
+diff --git a/drivers/mtd/nand/brcmnand/brcmnand.c b/drivers/mtd/nand/brcmnand/brcmnand.c
+index dd56a671ea42..2a978d9832a7 100644
+--- a/drivers/mtd/nand/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/brcmnand/brcmnand.c
+@@ -2193,16 +2193,9 @@ static int brcmnand_setup_dev(struct brcmnand_host *host)
+ if (ctrl->nand_version >= 0x0702)
+ tmp |= ACC_CONTROL_RD_ERASED;
+ tmp &= ~ACC_CONTROL_FAST_PGM_RDIN;
+- if (ctrl->features & BRCMNAND_HAS_PREFETCH) {
+- /*
+- * FIXME: Flash DMA + prefetch may see spurious erased-page ECC
+- * errors
+- */
+- if (has_flash_dma(ctrl))
+- tmp &= ~ACC_CONTROL_PREFETCH;
+- else
+- tmp |= ACC_CONTROL_PREFETCH;
+- }
++ if (ctrl->features & BRCMNAND_HAS_PREFETCH)
++ tmp &= ~ACC_CONTROL_PREFETCH;
++
+ nand_writereg(ctrl, offs, tmp);
+
+ return 0;
+diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c
+index 6135d007a068..9c702b46c6ee 100644
+--- a/drivers/mtd/nand/nand_base.c
++++ b/drivers/mtd/nand/nand_base.c
+@@ -2199,6 +2199,7 @@ EXPORT_SYMBOL(nand_write_oob_syndrome);
+ static int nand_do_read_oob(struct mtd_info *mtd, loff_t from,
+ struct mtd_oob_ops *ops)
+ {
++ unsigned int max_bitflips = 0;
+ int page, realpage, chipnr;
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct mtd_ecc_stats stats;
+@@ -2256,6 +2257,8 @@ static int nand_do_read_oob(struct mtd_info *mtd, loff_t from,
+ nand_wait_ready(mtd);
+ }
+
++ max_bitflips = max_t(unsigned int, max_bitflips, ret);
++
+ readlen -= len;
+ if (!readlen)
+ break;
+@@ -2281,7 +2284,7 @@ static int nand_do_read_oob(struct mtd_info *mtd, loff_t from,
+ if (mtd->ecc_stats.failed - stats.failed)
+ return -EBADMSG;
+
+- return mtd->ecc_stats.corrected - stats.corrected ? -EUCLEAN : 0;
++ return max_bitflips;
+ }
+
+ /**
+diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
+index 82244be3e766..958974821582 100644
+--- a/drivers/mtd/nand/sunxi_nand.c
++++ b/drivers/mtd/nand/sunxi_nand.c
+@@ -1853,8 +1853,14 @@ static int sunxi_nand_hw_common_ecc_ctrl_init(struct mtd_info *mtd,
+
+ /* Add ECC info retrieval from DT */
+ for (i = 0; i < ARRAY_SIZE(strengths); i++) {
+- if (ecc->strength <= strengths[i])
++ if (ecc->strength <= strengths[i]) {
++ /*
++ * Update ecc->strength value with the actual strength
++ * that will be used by the ECC engine.
++ */
++ ecc->strength = strengths[i];
+ break;
++ }
+ }
+
+ if (i >= ARRAY_SIZE(strengths)) {
+diff --git a/drivers/mtd/ubi/block.c b/drivers/mtd/ubi/block.c
+index b210fdb31c98..b1fc28f63882 100644
+--- a/drivers/mtd/ubi/block.c
++++ b/drivers/mtd/ubi/block.c
+@@ -99,6 +99,8 @@ struct ubiblock {
+
+ /* Linked list of all ubiblock instances */
+ static LIST_HEAD(ubiblock_devices);
++static DEFINE_IDR(ubiblock_minor_idr);
++/* Protects ubiblock_devices and ubiblock_minor_idr */
+ static DEFINE_MUTEX(devices_mutex);
+ static int ubiblock_major;
+
+@@ -351,8 +353,6 @@ static const struct blk_mq_ops ubiblock_mq_ops = {
+ .init_request = ubiblock_init_request,
+ };
+
+-static DEFINE_IDR(ubiblock_minor_idr);
+-
+ int ubiblock_create(struct ubi_volume_info *vi)
+ {
+ struct ubiblock *dev;
+@@ -365,14 +365,15 @@ int ubiblock_create(struct ubi_volume_info *vi)
+ /* Check that the volume isn't already handled */
+ mutex_lock(&devices_mutex);
+ if (find_dev_nolock(vi->ubi_num, vi->vol_id)) {
+- mutex_unlock(&devices_mutex);
+- return -EEXIST;
++ ret = -EEXIST;
++ goto out_unlock;
+ }
+- mutex_unlock(&devices_mutex);
+
+ dev = kzalloc(sizeof(struct ubiblock), GFP_KERNEL);
+- if (!dev)
+- return -ENOMEM;
++ if (!dev) {
++ ret = -ENOMEM;
++ goto out_unlock;
++ }
+
+ mutex_init(&dev->dev_mutex);
+
+@@ -437,14 +438,13 @@ int ubiblock_create(struct ubi_volume_info *vi)
+ goto out_free_queue;
+ }
+
+- mutex_lock(&devices_mutex);
+ list_add_tail(&dev->list, &ubiblock_devices);
+- mutex_unlock(&devices_mutex);
+
+ /* Must be the last step: anyone can call file ops from now on */
+ add_disk(dev->gd);
+ dev_info(disk_to_dev(dev->gd), "created from ubi%d:%d(%s)",
+ dev->ubi_num, dev->vol_id, vi->name);
++ mutex_unlock(&devices_mutex);
+ return 0;
+
+ out_free_queue:
+@@ -457,6 +457,8 @@ int ubiblock_create(struct ubi_volume_info *vi)
+ put_disk(dev->gd);
+ out_free_dev:
+ kfree(dev);
++out_unlock:
++ mutex_unlock(&devices_mutex);
+
+ return ret;
+ }
+@@ -478,30 +480,36 @@ static void ubiblock_cleanup(struct ubiblock *dev)
+ int ubiblock_remove(struct ubi_volume_info *vi)
+ {
+ struct ubiblock *dev;
++ int ret;
+
+ mutex_lock(&devices_mutex);
+ dev = find_dev_nolock(vi->ubi_num, vi->vol_id);
+ if (!dev) {
+- mutex_unlock(&devices_mutex);
+- return -ENODEV;
++ ret = -ENODEV;
++ goto out_unlock;
+ }
+
+ /* Found a device, let's lock it so we can check if it's busy */
+ mutex_lock(&dev->dev_mutex);
+ if (dev->refcnt > 0) {
+- mutex_unlock(&dev->dev_mutex);
+- mutex_unlock(&devices_mutex);
+- return -EBUSY;
++ ret = -EBUSY;
++ goto out_unlock_dev;
+ }
+
+ /* Remove from device list */
+ list_del(&dev->list);
+- mutex_unlock(&devices_mutex);
+-
+ ubiblock_cleanup(dev);
+ mutex_unlock(&dev->dev_mutex);
++ mutex_unlock(&devices_mutex);
++
+ kfree(dev);
+ return 0;
++
++out_unlock_dev:
++ mutex_unlock(&dev->dev_mutex);
++out_unlock:
++ mutex_unlock(&devices_mutex);
++ return ret;
+ }
+
+ static int ubiblock_resize(struct ubi_volume_info *vi)
+@@ -630,6 +638,7 @@ static void ubiblock_remove_all(void)
+ struct ubiblock *next;
+ struct ubiblock *dev;
+
++ mutex_lock(&devices_mutex);
+ list_for_each_entry_safe(dev, next, &ubiblock_devices, list) {
+ /* The module is being forcefully removed */
+ WARN_ON(dev->desc);
+@@ -638,6 +647,7 @@ static void ubiblock_remove_all(void)
+ ubiblock_cleanup(dev);
+ kfree(dev);
+ }
++ mutex_unlock(&devices_mutex);
+ }
+
+ int __init ubiblock_init(void)
+diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
+index 85237cf661f9..3fd8d7ff7a02 100644
+--- a/drivers/mtd/ubi/vmt.c
++++ b/drivers/mtd/ubi/vmt.c
+@@ -270,6 +270,12 @@ int ubi_create_volume(struct ubi_device *ubi, struct ubi_mkvol_req *req)
+ vol->last_eb_bytes = vol->usable_leb_size;
+ }
+
++ /* Make volume "available" before it becomes accessible via sysfs */
++ spin_lock(&ubi->volumes_lock);
++ ubi->volumes[vol_id] = vol;
++ ubi->vol_count += 1;
++ spin_unlock(&ubi->volumes_lock);
++
+ /* Register character device for the volume */
+ cdev_init(&vol->cdev, &ubi_vol_cdev_operations);
+ vol->cdev.owner = THIS_MODULE;
+@@ -298,11 +304,6 @@ int ubi_create_volume(struct ubi_device *ubi, struct ubi_mkvol_req *req)
+ if (err)
+ goto out_sysfs;
+
+- spin_lock(&ubi->volumes_lock);
+- ubi->volumes[vol_id] = vol;
+- ubi->vol_count += 1;
+- spin_unlock(&ubi->volumes_lock);
+-
+ ubi_volume_notify(ubi, vol, UBI_VOLUME_ADDED);
+ self_check_volumes(ubi);
+ return err;
+@@ -315,6 +316,10 @@ int ubi_create_volume(struct ubi_device *ubi, struct ubi_mkvol_req *req)
+ */
+ cdev_device_del(&vol->cdev, &vol->dev);
+ out_mapping:
++ spin_lock(&ubi->volumes_lock);
++ ubi->volumes[vol_id] = NULL;
++ ubi->vol_count -= 1;
++ spin_unlock(&ubi->volumes_lock);
+ ubi_eba_destroy_table(eba_tbl);
+ out_acc:
+ spin_lock(&ubi->volumes_lock);
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index b5b8cd6f481c..668b46202507 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -1528,6 +1528,46 @@ static void shutdown_work(struct ubi_device *ubi)
+ }
+ }
+
++/**
++ * erase_aeb - erase a PEB given in UBI attach info PEB
++ * @ubi: UBI device description object
++ * @aeb: UBI attach info PEB
++ * @sync: If true, erase synchronously. Otherwise schedule for erasure
++ */
++static int erase_aeb(struct ubi_device *ubi, struct ubi_ainf_peb *aeb, bool sync)
++{
++ struct ubi_wl_entry *e;
++ int err;
++
++ e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL);
++ if (!e)
++ return -ENOMEM;
++
++ e->pnum = aeb->pnum;
++ e->ec = aeb->ec;
++ ubi->lookuptbl[e->pnum] = e;
++
++ if (sync) {
++ err = sync_erase(ubi, e, false);
++ if (err)
++ goto out_free;
++
++ wl_tree_add(e, &ubi->free);
++ ubi->free_count++;
++ } else {
++ err = schedule_erase(ubi, e, aeb->vol_id, aeb->lnum, 0, false);
++ if (err)
++ goto out_free;
++ }
++
++ return 0;
++
++out_free:
++ wl_entry_destroy(ubi, e);
++
++ return err;
++}
++
+ /**
+ * ubi_wl_init - initialize the WL sub-system using attaching information.
+ * @ubi: UBI device description object
+@@ -1566,18 +1606,10 @@ int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai)
+ list_for_each_entry_safe(aeb, tmp, &ai->erase, u.list) {
+ cond_resched();
+
+- e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL);
+- if (!e)
++ err = erase_aeb(ubi, aeb, false);
++ if (err)
+ goto out_free;
+
+- e->pnum = aeb->pnum;
+- e->ec = aeb->ec;
+- ubi->lookuptbl[e->pnum] = e;
+- if (schedule_erase(ubi, e, aeb->vol_id, aeb->lnum, 0, false)) {
+- wl_entry_destroy(ubi, e);
+- goto out_free;
+- }
+-
+ found_pebs++;
+ }
+
+@@ -1635,6 +1667,8 @@ int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai)
+ ubi_assert(!ubi->lookuptbl[e->pnum]);
+ ubi->lookuptbl[e->pnum] = e;
+ } else {
++ bool sync = false;
++
+ /*
+ * Usually old Fastmap PEBs are scheduled for erasure
+ * and we don't have to care about them but if we face
+@@ -1644,18 +1678,21 @@ int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai)
+ if (ubi->lookuptbl[aeb->pnum])
+ continue;
+
+- e = kmem_cache_alloc(ubi_wl_entry_slab, GFP_KERNEL);
+- if (!e)
+- goto out_free;
++ /*
++ * The fastmap update code might not find a free PEB for
++ * writing the fastmap anchor to and then reuses the
++ * current fastmap anchor PEB. When this PEB gets erased
++ * and a power cut happens before it is written again we
++ * must make sure that the fastmap attach code doesn't
++ * find any outdated fastmap anchors, hence we erase the
++ * outdated fastmap anchor PEBs synchronously here.
++ */
++ if (aeb->vol_id == UBI_FM_SB_VOLUME_ID)
++ sync = true;
+
+- e->pnum = aeb->pnum;
+- e->ec = aeb->ec;
+- ubi_assert(!ubi->lookuptbl[e->pnum]);
+- ubi->lookuptbl[e->pnum] = e;
+- if (schedule_erase(ubi, e, aeb->vol_id, aeb->lnum, 0, false)) {
+- wl_entry_destroy(ubi, e);
++ err = erase_aeb(ubi, aeb, sync);
++ if (err)
+ goto out_free;
+- }
+ }
+
+ found_pebs++;
+diff --git a/drivers/perf/arm_spe_pmu.c b/drivers/perf/arm_spe_pmu.c
+index 8ce262fc2561..51b40aecb776 100644
+--- a/drivers/perf/arm_spe_pmu.c
++++ b/drivers/perf/arm_spe_pmu.c
+@@ -1164,6 +1164,15 @@ static int arm_spe_pmu_device_dt_probe(struct platform_device *pdev)
+ struct arm_spe_pmu *spe_pmu;
+ struct device *dev = &pdev->dev;
+
++ /*
++ * If kernelspace is unmapped when running at EL0, then the SPE
++ * buffer will fault and prematurely terminate the AUX session.
++ */
++ if (arm64_kernel_unmapped_at_el0()) {
++ dev_warn_once(dev, "profiling buffer inaccessible. Try passing \"kpti=off\" on the kernel command line\n");
++ return -EPERM;
++ }
++
+ spe_pmu = devm_kzalloc(dev, sizeof(*spe_pmu), GFP_KERNEL);
+ if (!spe_pmu) {
+ dev_err(dev, "failed to allocate spe_pmu\n");
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index 12a1af45acb9..32209f37b2be 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -425,6 +425,18 @@ static void __intel_gpio_set_direction(void __iomem *padcfg0, bool input)
+ writel(value, padcfg0);
+ }
+
++static void intel_gpio_set_gpio_mode(void __iomem *padcfg0)
++{
++ u32 value;
++
++ /* Put the pad into GPIO mode */
++ value = readl(padcfg0) & ~PADCFG0_PMODE_MASK;
++ /* Disable SCI/SMI/NMI generation */
++ value &= ~(PADCFG0_GPIROUTIOXAPIC | PADCFG0_GPIROUTSCI);
++ value &= ~(PADCFG0_GPIROUTSMI | PADCFG0_GPIROUTNMI);
++ writel(value, padcfg0);
++}
++
+ static int intel_gpio_request_enable(struct pinctrl_dev *pctldev,
+ struct pinctrl_gpio_range *range,
+ unsigned pin)
+@@ -432,7 +444,6 @@ static int intel_gpio_request_enable(struct pinctrl_dev *pctldev,
+ struct intel_pinctrl *pctrl = pinctrl_dev_get_drvdata(pctldev);
+ void __iomem *padcfg0;
+ unsigned long flags;
+- u32 value;
+
+ raw_spin_lock_irqsave(&pctrl->lock, flags);
+
+@@ -442,13 +453,7 @@ static int intel_gpio_request_enable(struct pinctrl_dev *pctldev,
+ }
+
+ padcfg0 = intel_get_padcfg(pctrl, pin, PADCFG0);
+- /* Put the pad into GPIO mode */
+- value = readl(padcfg0) & ~PADCFG0_PMODE_MASK;
+- /* Disable SCI/SMI/NMI generation */
+- value &= ~(PADCFG0_GPIROUTIOXAPIC | PADCFG0_GPIROUTSCI);
+- value &= ~(PADCFG0_GPIROUTSMI | PADCFG0_GPIROUTNMI);
+- writel(value, padcfg0);
+-
++ intel_gpio_set_gpio_mode(padcfg0);
+ /* Disable TX buffer and enable RX (this will be input) */
+ __intel_gpio_set_direction(padcfg0, true);
+
+@@ -935,6 +940,8 @@ static int intel_gpio_irq_type(struct irq_data *d, unsigned type)
+
+ raw_spin_lock_irqsave(&pctrl->lock, flags);
+
++ intel_gpio_set_gpio_mode(reg);
++
+ value = readl(reg);
+
+ value &= ~(PADCFG0_RXEVCFG_MASK | PADCFG0_RXINV);
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c
+index 4a6ea159c65d..c490899b77e5 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08.c
+@@ -896,16 +896,16 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev,
+ goto fail;
+ }
+
+- ret = devm_gpiochip_add_data(dev, &mcp->chip, mcp);
+- if (ret < 0)
+- goto fail;
+-
+ if (mcp->irq && mcp->irq_controller) {
+ ret = mcp23s08_irq_setup(mcp);
+ if (ret)
+ goto fail;
+ }
+
++ ret = devm_gpiochip_add_data(dev, &mcp->chip, mcp);
++ if (ret < 0)
++ goto fail;
++
+ mcp->pinctrl_desc.name = "mcp23xxx-pinctrl";
+ mcp->pinctrl_desc.pctlops = &mcp_pinctrl_ops;
+ mcp->pinctrl_desc.confops = &mcp_pinconf_ops;
+diff --git a/drivers/pinctrl/pinctrl-sx150x.c b/drivers/pinctrl/pinctrl-sx150x.c
+index fb242c542dc9..cbf58a10113d 100644
+--- a/drivers/pinctrl/pinctrl-sx150x.c
++++ b/drivers/pinctrl/pinctrl-sx150x.c
+@@ -1144,6 +1144,27 @@ static int sx150x_probe(struct i2c_client *client,
+ if (ret)
+ return ret;
+
++ /* Pinctrl_desc */
++ pctl->pinctrl_desc.name = "sx150x-pinctrl";
++ pctl->pinctrl_desc.pctlops = &sx150x_pinctrl_ops;
++ pctl->pinctrl_desc.confops = &sx150x_pinconf_ops;
++ pctl->pinctrl_desc.pins = pctl->data->pins;
++ pctl->pinctrl_desc.npins = pctl->data->npins;
++ pctl->pinctrl_desc.owner = THIS_MODULE;
++
++ ret = devm_pinctrl_register_and_init(dev, &pctl->pinctrl_desc,
++ pctl, &pctl->pctldev);
++ if (ret) {
++ dev_err(dev, "Failed to register pinctrl device\n");
++ return ret;
++ }
++
++ ret = pinctrl_enable(pctl->pctldev);
++ if (ret) {
++ dev_err(dev, "Failed to enable pinctrl device\n");
++ return ret;
++ }
++
+ /* Register GPIO controller */
+ pctl->gpio.label = devm_kstrdup(dev, client->name, GFP_KERNEL);
+ pctl->gpio.base = -1;
+@@ -1172,6 +1193,11 @@ static int sx150x_probe(struct i2c_client *client,
+ if (ret)
+ return ret;
+
++ ret = gpiochip_add_pin_range(&pctl->gpio, dev_name(dev),
++ 0, 0, pctl->data->npins);
++ if (ret)
++ return ret;
++
+ /* Add Interrupt support if an irq is specified */
+ if (client->irq > 0) {
+ pctl->irq_chip.name = devm_kstrdup(dev, client->name,
+@@ -1217,20 +1243,6 @@ static int sx150x_probe(struct i2c_client *client,
+ client->irq);
+ }
+
+- /* Pinctrl_desc */
+- pctl->pinctrl_desc.name = "sx150x-pinctrl";
+- pctl->pinctrl_desc.pctlops = &sx150x_pinctrl_ops;
+- pctl->pinctrl_desc.confops = &sx150x_pinconf_ops;
+- pctl->pinctrl_desc.pins = pctl->data->pins;
+- pctl->pinctrl_desc.npins = pctl->data->npins;
+- pctl->pinctrl_desc.owner = THIS_MODULE;
+-
+- pctl->pctldev = pinctrl_register(&pctl->pinctrl_desc, dev, pctl);
+- if (IS_ERR(pctl->pctldev)) {
+- dev_err(dev, "Failed to register pinctrl device\n");
+- return PTR_ERR(pctl->pctldev);
+- }
+-
+ return 0;
+ }
+
+diff --git a/drivers/scsi/cxlflash/main.c b/drivers/scsi/cxlflash/main.c
+index 38b3a9c84fd1..48d366304582 100644
+--- a/drivers/scsi/cxlflash/main.c
++++ b/drivers/scsi/cxlflash/main.c
+@@ -620,6 +620,7 @@ static int cxlflash_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scp)
+ cmd->parent = afu;
+ cmd->hwq_index = hwq_index;
+
++ cmd->sa.ioasc = 0;
+ cmd->rcb.ctx_id = hwq->ctx_hndl;
+ cmd->rcb.msi = SISL_MSI_RRQ_UPDATED;
+ cmd->rcb.port_sel = CHAN2PORTMASK(scp->device->channel);
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index fe3a0da3ec97..57bf43e34863 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -318,6 +318,9 @@ static void scsi_host_dev_release(struct device *dev)
+
+ scsi_proc_hostdir_rm(shost->hostt);
+
++ /* Wait for functions invoked through call_rcu(&shost->rcu, ...) */
++ rcu_barrier();
++
+ if (shost->tmf_work_q)
+ destroy_workqueue(shost->tmf_work_q);
+ if (shost->ehandler)
+@@ -325,6 +328,8 @@ static void scsi_host_dev_release(struct device *dev)
+ if (shost->work_q)
+ destroy_workqueue(shost->work_q);
+
++ destroy_rcu_head(&shost->rcu);
++
+ if (shost->shost_state == SHOST_CREATED) {
+ /*
+ * Free the shost_dev device name here if scsi_host_alloc()
+@@ -399,6 +404,7 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
+ INIT_LIST_HEAD(&shost->starved_list);
+ init_waitqueue_head(&shost->host_wait);
+ mutex_init(&shost->scan_mutex);
++ init_rcu_head(&shost->rcu);
+
+ index = ida_simple_get(&host_index_ida, 0, 0, GFP_KERNEL);
+ if (index < 0)
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 2b7ea7e53e12..a28b2994b009 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -9421,44 +9421,62 @@ lpfc_sli4_pci_mem_setup(struct lpfc_hba *phba)
+ lpfc_sli4_bar0_register_memmap(phba, if_type);
+ }
+
+- if ((if_type == LPFC_SLI_INTF_IF_TYPE_0) &&
+- (pci_resource_start(pdev, PCI_64BIT_BAR2))) {
+- /*
+- * Map SLI4 if type 0 HBA Control Register base to a kernel
+- * virtual address and setup the registers.
+- */
+- phba->pci_bar1_map = pci_resource_start(pdev, PCI_64BIT_BAR2);
+- bar1map_len = pci_resource_len(pdev, PCI_64BIT_BAR2);
+- phba->sli4_hba.ctrl_regs_memmap_p =
+- ioremap(phba->pci_bar1_map, bar1map_len);
+- if (!phba->sli4_hba.ctrl_regs_memmap_p) {
+- dev_printk(KERN_ERR, &pdev->dev,
+- "ioremap failed for SLI4 HBA control registers.\n");
++ if (if_type == LPFC_SLI_INTF_IF_TYPE_0) {
++ if (pci_resource_start(pdev, PCI_64BIT_BAR2)) {
++ /*
++ * Map SLI4 if type 0 HBA Control Register base to a
++ * kernel virtual address and setup the registers.
++ */
++ phba->pci_bar1_map = pci_resource_start(pdev,
++ PCI_64BIT_BAR2);
++ bar1map_len = pci_resource_len(pdev, PCI_64BIT_BAR2);
++ phba->sli4_hba.ctrl_regs_memmap_p =
++ ioremap(phba->pci_bar1_map,
++ bar1map_len);
++ if (!phba->sli4_hba.ctrl_regs_memmap_p) {
++ dev_err(&pdev->dev,
++ "ioremap failed for SLI4 HBA "
++ "control registers.\n");
++ error = -ENOMEM;
++ goto out_iounmap_conf;
++ }
++ phba->pci_bar2_memmap_p =
++ phba->sli4_hba.ctrl_regs_memmap_p;
++ lpfc_sli4_bar1_register_memmap(phba);
++ } else {
++ error = -ENOMEM;
+ goto out_iounmap_conf;
+ }
+- phba->pci_bar2_memmap_p = phba->sli4_hba.ctrl_regs_memmap_p;
+- lpfc_sli4_bar1_register_memmap(phba);
+ }
+
+- if ((if_type == LPFC_SLI_INTF_IF_TYPE_0) &&
+- (pci_resource_start(pdev, PCI_64BIT_BAR4))) {
+- /*
+- * Map SLI4 if type 0 HBA Doorbell Register base to a kernel
+- * virtual address and setup the registers.
+- */
+- phba->pci_bar2_map = pci_resource_start(pdev, PCI_64BIT_BAR4);
+- bar2map_len = pci_resource_len(pdev, PCI_64BIT_BAR4);
+- phba->sli4_hba.drbl_regs_memmap_p =
+- ioremap(phba->pci_bar2_map, bar2map_len);
+- if (!phba->sli4_hba.drbl_regs_memmap_p) {
+- dev_printk(KERN_ERR, &pdev->dev,
+- "ioremap failed for SLI4 HBA doorbell registers.\n");
+- goto out_iounmap_ctrl;
+- }
+- phba->pci_bar4_memmap_p = phba->sli4_hba.drbl_regs_memmap_p;
+- error = lpfc_sli4_bar2_register_memmap(phba, LPFC_VF0);
+- if (error)
++ if (if_type == LPFC_SLI_INTF_IF_TYPE_0) {
++ if (pci_resource_start(pdev, PCI_64BIT_BAR4)) {
++ /*
++ * Map SLI4 if type 0 HBA Doorbell Register base to
++ * a kernel virtual address and setup the registers.
++ */
++ phba->pci_bar2_map = pci_resource_start(pdev,
++ PCI_64BIT_BAR4);
++ bar2map_len = pci_resource_len(pdev, PCI_64BIT_BAR4);
++ phba->sli4_hba.drbl_regs_memmap_p =
++ ioremap(phba->pci_bar2_map,
++ bar2map_len);
++ if (!phba->sli4_hba.drbl_regs_memmap_p) {
++ dev_err(&pdev->dev,
++ "ioremap failed for SLI4 HBA"
++ " doorbell registers.\n");
++ error = -ENOMEM;
++ goto out_iounmap_ctrl;
++ }
++ phba->pci_bar4_memmap_p =
++ phba->sli4_hba.drbl_regs_memmap_p;
++ error = lpfc_sli4_bar2_register_memmap(phba, LPFC_VF0);
++ if (error)
++ goto out_iounmap_all;
++ } else {
++ error = -ENOMEM;
+ goto out_iounmap_all;
++ }
+ }
+
+ return 0;
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 62b56de38ae8..3737c6d3b064 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -220,6 +220,17 @@ static void scsi_eh_reset(struct scsi_cmnd *scmd)
+ }
+ }
+
++static void scsi_eh_inc_host_failed(struct rcu_head *head)
++{
++ struct Scsi_Host *shost = container_of(head, typeof(*shost), rcu);
++ unsigned long flags;
++
++ spin_lock_irqsave(shost->host_lock, flags);
++ shost->host_failed++;
++ scsi_eh_wakeup(shost);
++ spin_unlock_irqrestore(shost->host_lock, flags);
++}
++
+ /**
+ * scsi_eh_scmd_add - add scsi cmd to error handling.
+ * @scmd: scmd to run eh on.
+@@ -242,9 +253,12 @@ void scsi_eh_scmd_add(struct scsi_cmnd *scmd)
+
+ scsi_eh_reset(scmd);
+ list_add_tail(&scmd->eh_entry, &shost->eh_cmd_q);
+- shost->host_failed++;
+- scsi_eh_wakeup(shost);
+ spin_unlock_irqrestore(shost->host_lock, flags);
++ /*
++ * Ensure that all tasks observe the host state change before the
++ * host_failed change.
++ */
++ call_rcu(&shost->rcu, scsi_eh_inc_host_failed);
+ }
+
+ /**
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index d9ca1dfab154..83856ee14851 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -318,22 +318,39 @@ static void scsi_init_cmd_errh(struct scsi_cmnd *cmd)
+ cmd->cmd_len = scsi_command_size(cmd->cmnd);
+ }
+
+-void scsi_device_unbusy(struct scsi_device *sdev)
++/*
++ * Decrement the host_busy counter and wake up the error handler if necessary.
++ * Avoid as follows that the error handler is not woken up if shost->host_busy
++ * == shost->host_failed: use call_rcu() in scsi_eh_scmd_add() in combination
++ * with an RCU read lock in this function to ensure that this function in its
++ * entirety either finishes before scsi_eh_scmd_add() increases the
++ * host_failed counter or that it notices the shost state change made by
++ * scsi_eh_scmd_add().
++ */
++static void scsi_dec_host_busy(struct Scsi_Host *shost)
+ {
+- struct Scsi_Host *shost = sdev->host;
+- struct scsi_target *starget = scsi_target(sdev);
+ unsigned long flags;
+
++ rcu_read_lock();
+ atomic_dec(&shost->host_busy);
+- if (starget->can_queue > 0)
+- atomic_dec(&starget->target_busy);
+-
+- if (unlikely(scsi_host_in_recovery(shost) &&
+- (shost->host_failed || shost->host_eh_scheduled))) {
++ if (unlikely(scsi_host_in_recovery(shost))) {
+ spin_lock_irqsave(shost->host_lock, flags);
+- scsi_eh_wakeup(shost);
++ if (shost->host_failed || shost->host_eh_scheduled)
++ scsi_eh_wakeup(shost);
+ spin_unlock_irqrestore(shost->host_lock, flags);
+ }
++ rcu_read_unlock();
++}
++
++void scsi_device_unbusy(struct scsi_device *sdev)
++{
++ struct Scsi_Host *shost = sdev->host;
++ struct scsi_target *starget = scsi_target(sdev);
++
++ scsi_dec_host_busy(shost);
++
++ if (starget->can_queue > 0)
++ atomic_dec(&starget->target_busy);
+
+ atomic_dec(&sdev->device_busy);
+ }
+@@ -1532,7 +1549,7 @@ static inline int scsi_host_queue_ready(struct request_queue *q,
+ list_add_tail(&sdev->starved_entry, &shost->starved_list);
+ spin_unlock_irq(shost->host_lock);
+ out_dec:
+- atomic_dec(&shost->host_busy);
++ scsi_dec_host_busy(shost);
+ return 0;
+ }
+
+@@ -2020,7 +2037,7 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx,
+ return BLK_STS_OK;
+
+ out_dec_host_busy:
+- atomic_dec(&shost->host_busy);
++ scsi_dec_host_busy(shost);
+ out_dec_target_busy:
+ if (scsi_target(sdev)->can_queue > 0)
+ atomic_dec(&scsi_target(sdev)->target_busy);
+diff --git a/drivers/ssb/Kconfig b/drivers/ssb/Kconfig
+index 71c73766ee22..65af12c3bdb2 100644
+--- a/drivers/ssb/Kconfig
++++ b/drivers/ssb/Kconfig
+@@ -32,7 +32,7 @@ config SSB_BLOCKIO
+
+ config SSB_PCIHOST_POSSIBLE
+ bool
+- depends on SSB && (PCI = y || PCI = SSB) && PCI_DRIVERS_LEGACY
++ depends on SSB && (PCI = y || PCI = SSB) && (PCI_DRIVERS_LEGACY || !MIPS)
+ default y
+
+ config SSB_PCIHOST
+diff --git a/drivers/staging/lustre/lnet/libcfs/linux/linux-crypto-adler.c b/drivers/staging/lustre/lnet/libcfs/linux/linux-crypto-adler.c
+index 2e5d311d2438..db81ed527452 100644
+--- a/drivers/staging/lustre/lnet/libcfs/linux/linux-crypto-adler.c
++++ b/drivers/staging/lustre/lnet/libcfs/linux/linux-crypto-adler.c
+@@ -120,6 +120,7 @@ static struct shash_alg alg = {
+ .cra_name = "adler32",
+ .cra_driver_name = "adler32-zlib",
+ .cra_priority = 100,
++ .cra_flags = CRYPTO_ALG_OPTIONAL_KEY,
+ .cra_blocksize = CHKSUM_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(u32),
+ .cra_module = THIS_MODULE,
+diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
+index ca200d1f310a..5a606a4aee6a 100644
+--- a/drivers/watchdog/Kconfig
++++ b/drivers/watchdog/Kconfig
+@@ -1451,7 +1451,7 @@ config RC32434_WDT
+
+ config INDYDOG
+ tristate "Indy/I2 Hardware Watchdog"
+- depends on SGI_HAS_INDYDOG || (MIPS && COMPILE_TEST)
++ depends on SGI_HAS_INDYDOG
+ help
+ Hardware driver for the Indy's/I2's watchdog. This is a
+ watchdog timer that will reboot the machine after a 60 second
+diff --git a/drivers/watchdog/gpio_wdt.c b/drivers/watchdog/gpio_wdt.c
+index cb66c2f99ff1..7a6279daa8b9 100644
+--- a/drivers/watchdog/gpio_wdt.c
++++ b/drivers/watchdog/gpio_wdt.c
+@@ -80,7 +80,8 @@ static int gpio_wdt_stop(struct watchdog_device *wdd)
+
+ if (!priv->always_running) {
+ gpio_wdt_disable(priv);
+- clear_bit(WDOG_HW_RUNNING, &wdd->status);
++ } else {
++ set_bit(WDOG_HW_RUNNING, &wdd->status);
+ }
+
+ return 0;
+diff --git a/drivers/watchdog/imx2_wdt.c b/drivers/watchdog/imx2_wdt.c
+index 4874b0f18650..518dfa1047cb 100644
+--- a/drivers/watchdog/imx2_wdt.c
++++ b/drivers/watchdog/imx2_wdt.c
+@@ -169,15 +169,21 @@ static int imx2_wdt_ping(struct watchdog_device *wdog)
+ return 0;
+ }
+
+-static int imx2_wdt_set_timeout(struct watchdog_device *wdog,
+- unsigned int new_timeout)
++static void __imx2_wdt_set_timeout(struct watchdog_device *wdog,
++ unsigned int new_timeout)
+ {
+ struct imx2_wdt_device *wdev = watchdog_get_drvdata(wdog);
+
+- wdog->timeout = new_timeout;
+-
+ regmap_update_bits(wdev->regmap, IMX2_WDT_WCR, IMX2_WDT_WCR_WT,
+ WDOG_SEC_TO_COUNT(new_timeout));
++}
++
++static int imx2_wdt_set_timeout(struct watchdog_device *wdog,
++ unsigned int new_timeout)
++{
++ __imx2_wdt_set_timeout(wdog, new_timeout);
++
++ wdog->timeout = new_timeout;
+ return 0;
+ }
+
+@@ -371,7 +377,11 @@ static int imx2_wdt_suspend(struct device *dev)
+
+ /* The watchdog IP block is running */
+ if (imx2_wdt_is_running(wdev)) {
+- imx2_wdt_set_timeout(wdog, IMX2_WDT_MAX_TIME);
++ /*
++ * Don't update wdog->timeout, we'll restore the current value
++ * during resume.
++ */
++ __imx2_wdt_set_timeout(wdog, IMX2_WDT_MAX_TIME);
+ imx2_wdt_ping(wdog);
+ }
+
+diff --git a/fs/afs/addr_list.c b/fs/afs/addr_list.c
+index a537368ba0db..fd9f28b8a933 100644
+--- a/fs/afs/addr_list.c
++++ b/fs/afs/addr_list.c
+@@ -332,11 +332,18 @@ bool afs_iterate_addresses(struct afs_addr_cursor *ac)
+ */
+ int afs_end_cursor(struct afs_addr_cursor *ac)
+ {
+- if (ac->responded && ac->index != ac->start)
+- WRITE_ONCE(ac->alist->index, ac->index);
++ struct afs_addr_list *alist;
++
++ alist = ac->alist;
++ if (alist) {
++ if (ac->responded && ac->index != ac->start)
++ WRITE_ONCE(alist->index, ac->index);
++ afs_put_addrlist(alist);
++ }
+
+- afs_put_addrlist(ac->alist);
++ ac->addr = NULL;
+ ac->alist = NULL;
++ ac->begun = false;
+ return ac->error;
+ }
+
+diff --git a/fs/afs/rotate.c b/fs/afs/rotate.c
+index d04511fb3879..892a4904fd77 100644
+--- a/fs/afs/rotate.c
++++ b/fs/afs/rotate.c
+@@ -334,6 +334,7 @@ bool afs_select_fileserver(struct afs_fs_cursor *fc)
+
+ next_server:
+ _debug("next");
++ afs_end_cursor(&fc->ac);
+ afs_put_cb_interest(afs_v2net(vnode), fc->cbi);
+ fc->cbi = NULL;
+ fc->index++;
+@@ -383,6 +384,7 @@ bool afs_select_fileserver(struct afs_fs_cursor *fc)
+ afs_get_addrlist(alist);
+ read_unlock(&server->fs_lock);
+
++ memset(&fc->ac, 0, sizeof(fc->ac));
+
+ /* Probe the current fileserver if we haven't done so yet. */
+ if (!test_bit(AFS_SERVER_FL_PROBED, &server->flags)) {
+@@ -397,11 +399,8 @@ bool afs_select_fileserver(struct afs_fs_cursor *fc)
+ else
+ afs_put_addrlist(alist);
+
+- fc->ac.addr = NULL;
+ fc->ac.start = READ_ONCE(alist->index);
+ fc->ac.index = fc->ac.start;
+- fc->ac.error = 0;
+- fc->ac.begun = false;
+ goto iterate_address;
+
+ iterate_address:
+@@ -410,16 +409,15 @@ bool afs_select_fileserver(struct afs_fs_cursor *fc)
+ /* Iterate over the current server's address list to try and find an
+ * address on which it will respond to us.
+ */
+- if (afs_iterate_addresses(&fc->ac)) {
+- _leave(" = t");
+- return true;
+- }
++ if (!afs_iterate_addresses(&fc->ac))
++ goto next_server;
+
+- afs_end_cursor(&fc->ac);
+- goto next_server;
++ _leave(" = t");
++ return true;
+
+ failed:
+ fc->flags |= AFS_FS_CURSOR_STOP;
++ afs_end_cursor(&fc->ac);
+ _leave(" = f [failed %d]", fc->ac.error);
+ return false;
+ }
+@@ -458,12 +456,10 @@ bool afs_select_current_fileserver(struct afs_fs_cursor *fc)
+ return false;
+ }
+
++ memset(&fc->ac, 0, sizeof(fc->ac));
+ fc->ac.alist = alist;
+- fc->ac.addr = NULL;
+ fc->ac.start = READ_ONCE(alist->index);
+ fc->ac.index = fc->ac.start;
+- fc->ac.error = 0;
+- fc->ac.begun = false;
+ goto iterate_address;
+
+ case 0:
+diff --git a/fs/afs/server_list.c b/fs/afs/server_list.c
+index 0ab3f8457839..0f8dc4c8f07c 100644
+--- a/fs/afs/server_list.c
++++ b/fs/afs/server_list.c
+@@ -58,7 +58,8 @@ struct afs_server_list *afs_alloc_server_list(struct afs_cell *cell,
+ server = afs_lookup_server(cell, key, &vldb->fs_server[i]);
+ if (IS_ERR(server)) {
+ ret = PTR_ERR(server);
+- if (ret == -ENOENT)
++ if (ret == -ENOENT ||
++ ret == -ENOMEDIUM)
+ continue;
+ goto error_2;
+ }
+diff --git a/fs/afs/vlclient.c b/fs/afs/vlclient.c
+index e372f89fd36a..5d8562f1ad4a 100644
+--- a/fs/afs/vlclient.c
++++ b/fs/afs/vlclient.c
+@@ -23,7 +23,7 @@ static int afs_deliver_vl_get_entry_by_name_u(struct afs_call *call)
+ struct afs_uvldbentry__xdr *uvldb;
+ struct afs_vldb_entry *entry;
+ bool new_only = false;
+- u32 tmp;
++ u32 tmp, nr_servers;
+ int i, ret;
+
+ _enter("");
+@@ -36,6 +36,10 @@ static int afs_deliver_vl_get_entry_by_name_u(struct afs_call *call)
+ uvldb = call->buffer;
+ entry = call->reply[0];
+
++ nr_servers = ntohl(uvldb->nServers);
++ if (nr_servers > AFS_NMAXNSERVERS)
++ nr_servers = AFS_NMAXNSERVERS;
++
+ for (i = 0; i < ARRAY_SIZE(uvldb->name) - 1; i++)
+ entry->name[i] = (u8)ntohl(uvldb->name[i]);
+ entry->name[i] = 0;
+@@ -44,14 +48,14 @@ static int afs_deliver_vl_get_entry_by_name_u(struct afs_call *call)
+ /* If there is a new replication site that we can use, ignore all the
+ * sites that aren't marked as new.
+ */
+- for (i = 0; i < AFS_NMAXNSERVERS; i++) {
++ for (i = 0; i < nr_servers; i++) {
+ tmp = ntohl(uvldb->serverFlags[i]);
+ if (!(tmp & AFS_VLSF_DONTUSE) &&
+ (tmp & AFS_VLSF_NEWREPSITE))
+ new_only = true;
+ }
+
+- for (i = 0; i < AFS_NMAXNSERVERS; i++) {
++ for (i = 0; i < nr_servers; i++) {
+ struct afs_uuid__xdr *xdr;
+ struct afs_uuid *uuid;
+ int j;
+diff --git a/fs/afs/volume.c b/fs/afs/volume.c
+index 684c48293353..b517a588781f 100644
+--- a/fs/afs/volume.c
++++ b/fs/afs/volume.c
+@@ -26,9 +26,8 @@ static struct afs_volume *afs_alloc_volume(struct afs_mount_params *params,
+ unsigned long type_mask)
+ {
+ struct afs_server_list *slist;
+- struct afs_server *server;
+ struct afs_volume *volume;
+- int ret = -ENOMEM, nr_servers = 0, i, j;
++ int ret = -ENOMEM, nr_servers = 0, i;
+
+ for (i = 0; i < vldb->nr_servers; i++)
+ if (vldb->fs_mask[i] & type_mask)
+@@ -58,50 +57,10 @@ static struct afs_volume *afs_alloc_volume(struct afs_mount_params *params,
+
+ refcount_set(&slist->usage, 1);
+ volume->servers = slist;
+-
+- /* Make sure a records exists for each server this volume occupies. */
+- for (i = 0; i < nr_servers; i++) {
+- if (!(vldb->fs_mask[i] & type_mask))
+- continue;
+-
+- server = afs_lookup_server(params->cell, params->key,
+- &vldb->fs_server[i]);
+- if (IS_ERR(server)) {
+- ret = PTR_ERR(server);
+- if (ret == -ENOENT)
+- continue;
+- goto error_2;
+- }
+-
+- /* Insertion-sort by server pointer */
+- for (j = 0; j < slist->nr_servers; j++)
+- if (slist->servers[j].server >= server)
+- break;
+- if (j < slist->nr_servers) {
+- if (slist->servers[j].server == server) {
+- afs_put_server(params->net, server);
+- continue;
+- }
+-
+- memmove(slist->servers + j + 1,
+- slist->servers + j,
+- (slist->nr_servers - j) * sizeof(struct afs_server_entry));
+- }
+-
+- slist->servers[j].server = server;
+- slist->nr_servers++;
+- }
+-
+- if (slist->nr_servers == 0) {
+- ret = -EDESTADDRREQ;
+- goto error_2;
+- }
+-
+ return volume;
+
+-error_2:
+- afs_put_serverlist(params->net, slist);
+ error_1:
++ afs_put_cell(params->net, volume->cell);
+ kfree(volume);
+ error_0:
+ return ERR_PTR(ret);
+@@ -327,7 +286,7 @@ static int afs_update_volume_status(struct afs_volume *volume, struct key *key)
+
+ /* See if the volume's server list got updated. */
+ new = afs_alloc_server_list(volume->cell, key,
+- vldb, (1 << volume->type));
++ vldb, (1 << volume->type));
+ if (IS_ERR(new)) {
+ ret = PTR_ERR(new);
+ goto error_vldb;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index e1a7f3cb5be9..0f57602092cf 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2098,8 +2098,15 @@ static void btrfs_writepage_fixup_worker(struct btrfs_work *work)
+ goto out;
+ }
+
+- btrfs_set_extent_delalloc(inode, page_start, page_end, 0, &cached_state,
+- 0);
++ ret = btrfs_set_extent_delalloc(inode, page_start, page_end, 0,
++ &cached_state, 0);
++ if (ret) {
++ mapping_set_error(page->mapping, ret);
++ end_extent_writepage(page, ret, page_start, page_end);
++ ClearPageChecked(page);
++ goto out;
++ }
++
+ ClearPageChecked(page);
+ set_page_dirty(page);
+ btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE);
+diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
+index a7f79254ecca..8903c4fbf7e6 100644
+--- a/fs/btrfs/raid56.c
++++ b/fs/btrfs/raid56.c
+@@ -1435,14 +1435,13 @@ static int fail_bio_stripe(struct btrfs_raid_bio *rbio,
+ */
+ static void set_bio_pages_uptodate(struct bio *bio)
+ {
+- struct bio_vec bvec;
+- struct bvec_iter iter;
++ struct bio_vec *bvec;
++ int i;
+
+- if (bio_flagged(bio, BIO_CLONED))
+- bio->bi_iter = btrfs_io_bio(bio)->iter;
++ ASSERT(!bio_flagged(bio, BIO_CLONED));
+
+- bio_for_each_segment(bvec, bio, iter)
+- SetPageUptodate(bvec.bv_page);
++ bio_for_each_segment_all(bvec, bio, i)
++ SetPageUptodate(bvec->bv_page);
+ }
+
+ /*
+diff --git a/fs/cifs/cifsencrypt.c b/fs/cifs/cifsencrypt.c
+index 68abbb0db608..f2b0a7f124da 100644
+--- a/fs/cifs/cifsencrypt.c
++++ b/fs/cifs/cifsencrypt.c
+@@ -325,9 +325,8 @@ int calc_lanman_hash(const char *password, const char *cryptkey, bool encrypt,
+ {
+ int i;
+ int rc;
+- char password_with_pad[CIFS_ENCPWD_SIZE];
++ char password_with_pad[CIFS_ENCPWD_SIZE] = {0};
+
+- memset(password_with_pad, 0, CIFS_ENCPWD_SIZE);
+ if (password)
+ strncpy(password_with_pad, password, CIFS_ENCPWD_SIZE);
+
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 0bfc2280436d..f7db2fedfa8c 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -1707,7 +1707,7 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ tmp_end++;
+ if (!(tmp_end < end && tmp_end[1] == delim)) {
+ /* No it is not. Set the password to NULL */
+- kfree(vol->password);
++ kzfree(vol->password);
+ vol->password = NULL;
+ break;
+ }
+@@ -1745,7 +1745,7 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ options = end;
+ }
+
+- kfree(vol->password);
++ kzfree(vol->password);
+ /* Now build new password string */
+ temp_len = strlen(value);
+ vol->password = kzalloc(temp_len+1, GFP_KERNEL);
+@@ -4235,7 +4235,7 @@ cifs_construct_tcon(struct cifs_sb_info *cifs_sb, kuid_t fsuid)
+ reset_cifs_unix_caps(0, tcon, NULL, vol_info);
+ out:
+ kfree(vol_info->username);
+- kfree(vol_info->password);
++ kzfree(vol_info->password);
+ kfree(vol_info);
+
+ return tcon;
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index df9f682708c6..3a85df2a9baf 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -3471,20 +3471,18 @@ static const struct vm_operations_struct cifs_file_vm_ops = {
+
+ int cifs_file_strict_mmap(struct file *file, struct vm_area_struct *vma)
+ {
+- int rc, xid;
++ int xid, rc = 0;
+ struct inode *inode = file_inode(file);
+
+ xid = get_xid();
+
+- if (!CIFS_CACHE_READ(CIFS_I(inode))) {
++ if (!CIFS_CACHE_READ(CIFS_I(inode)))
+ rc = cifs_zap_mapping(inode);
+- if (rc)
+- return rc;
+- }
+-
+- rc = generic_file_mmap(file, vma);
+- if (rc == 0)
++ if (!rc)
++ rc = generic_file_mmap(file, vma);
++ if (!rc)
+ vma->vm_ops = &cifs_file_vm_ops;
++
+ free_xid(xid);
+ return rc;
+ }
+@@ -3494,16 +3492,16 @@ int cifs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ int rc, xid;
+
+ xid = get_xid();
++
+ rc = cifs_revalidate_file(file);
+- if (rc) {
++ if (rc)
+ cifs_dbg(FYI, "Validation prior to mmap failed, error=%d\n",
+ rc);
+- free_xid(xid);
+- return rc;
+- }
+- rc = generic_file_mmap(file, vma);
+- if (rc == 0)
++ if (!rc)
++ rc = generic_file_mmap(file, vma);
++ if (!rc)
+ vma->vm_ops = &cifs_file_vm_ops;
++
+ free_xid(xid);
+ return rc;
+ }
+diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
+index eea93ac15ef0..a0dbced4a45c 100644
+--- a/fs/cifs/misc.c
++++ b/fs/cifs/misc.c
+@@ -98,14 +98,11 @@ sesInfoFree(struct cifs_ses *buf_to_free)
+ kfree(buf_to_free->serverOS);
+ kfree(buf_to_free->serverDomain);
+ kfree(buf_to_free->serverNOS);
+- if (buf_to_free->password) {
+- memset(buf_to_free->password, 0, strlen(buf_to_free->password));
+- kfree(buf_to_free->password);
+- }
++ kzfree(buf_to_free->password);
+ kfree(buf_to_free->user_name);
+ kfree(buf_to_free->domainName);
+- kfree(buf_to_free->auth_key.response);
+- kfree(buf_to_free);
++ kzfree(buf_to_free->auth_key.response);
++ kzfree(buf_to_free);
+ }
+
+ struct cifs_tcon *
+@@ -136,10 +133,7 @@ tconInfoFree(struct cifs_tcon *buf_to_free)
+ }
+ atomic_dec(&tconInfoAllocCount);
+ kfree(buf_to_free->nativeFileSystem);
+- if (buf_to_free->password) {
+- memset(buf_to_free->password, 0, strlen(buf_to_free->password));
+- kfree(buf_to_free->password);
+- }
++ kzfree(buf_to_free->password);
+ kfree(buf_to_free);
+ }
+
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 01346b8b6edb..66af1f8a13cc 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -733,8 +733,7 @@ int smb3_validate_negotiate(const unsigned int xid, struct cifs_tcon *tcon)
+ }
+
+ /* check validate negotiate info response matches what we got earlier */
+- if (pneg_rsp->Dialect !=
+- cpu_to_le16(tcon->ses->server->vals->protocol_id))
++ if (pneg_rsp->Dialect != cpu_to_le16(tcon->ses->server->dialect))
+ goto vneg_out;
+
+ if (pneg_rsp->SecurityMode != cpu_to_le16(tcon->ses->server->sec_mode))
+diff --git a/fs/devpts/inode.c b/fs/devpts/inode.c
+index 7eae33ffa3fc..e31d6ed3ec32 100644
+--- a/fs/devpts/inode.c
++++ b/fs/devpts/inode.c
+@@ -168,11 +168,11 @@ struct vfsmount *devpts_mntget(struct file *filp, struct pts_fs_info *fsi)
+ dput(path.dentry);
+ if (err) {
+ mntput(path.mnt);
+- path.mnt = ERR_PTR(err);
++ return ERR_PTR(err);
+ }
+ if (DEVPTS_SB(path.mnt->mnt_sb) != fsi) {
+ mntput(path.mnt);
+- path.mnt = ERR_PTR(-ENODEV);
++ return ERR_PTR(-ENODEV);
+ }
+ return path.mnt;
+ }
+diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c
+index 9698e51656b1..d8f49c412f50 100644
+--- a/fs/kernfs/file.c
++++ b/fs/kernfs/file.c
+@@ -275,7 +275,7 @@ static ssize_t kernfs_fop_write(struct file *file, const char __user *user_buf,
+ {
+ struct kernfs_open_file *of = kernfs_of(file);
+ const struct kernfs_ops *ops;
+- size_t len;
++ ssize_t len;
+ char *buf;
+
+ if (of->atomic_write_len) {
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index d2972d537469..8c10b0562e75 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -775,10 +775,8 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+
+ spin_lock(&dreq->lock);
+
+- if (test_bit(NFS_IOHDR_ERROR, &hdr->flags)) {
+- dreq->flags = 0;
++ if (test_bit(NFS_IOHDR_ERROR, &hdr->flags))
+ dreq->error = hdr->error;
+- }
+ if (dreq->error == 0) {
+ nfs_direct_good_bytes(dreq, hdr);
+ if (nfs_write_need_commit(hdr)) {
+diff --git a/fs/nfs/filelayout/filelayout.c b/fs/nfs/filelayout/filelayout.c
+index 4e54d8b5413a..d175724ff566 100644
+--- a/fs/nfs/filelayout/filelayout.c
++++ b/fs/nfs/filelayout/filelayout.c
+@@ -895,9 +895,7 @@ fl_pnfs_update_layout(struct inode *ino,
+
+ lseg = pnfs_update_layout(ino, ctx, pos, count, iomode, strict_iomode,
+ gfp_flags);
+- if (!lseg)
+- lseg = ERR_PTR(-ENOMEM);
+- if (IS_ERR(lseg))
++ if (IS_ERR_OR_NULL(lseg))
+ goto out;
+
+ lo = NFS_I(ino)->layout;
+diff --git a/fs/nfs/io.c b/fs/nfs/io.c
+index 20fef85d2bb1..9034b4926909 100644
+--- a/fs/nfs/io.c
++++ b/fs/nfs/io.c
+@@ -99,7 +99,7 @@ static void nfs_block_buffered(struct nfs_inode *nfsi, struct inode *inode)
+ {
+ if (!test_bit(NFS_INO_ODIRECT, &nfsi->flags)) {
+ set_bit(NFS_INO_ODIRECT, &nfsi->flags);
+- nfs_wb_all(inode);
++ nfs_sync_mapping(inode->i_mapping);
+ }
+ }
+
+diff --git a/fs/nfs/nfs4idmap.c b/fs/nfs/nfs4idmap.c
+index 30426c1a1bbd..22dc30a679a0 100644
+--- a/fs/nfs/nfs4idmap.c
++++ b/fs/nfs/nfs4idmap.c
+@@ -568,9 +568,13 @@ static int nfs_idmap_legacy_upcall(struct key_construction *cons,
+ struct idmap_msg *im;
+ struct idmap *idmap = (struct idmap *)aux;
+ struct key *key = cons->key;
+- int ret = -ENOMEM;
++ int ret = -ENOKEY;
++
++ if (!aux)
++ goto out1;
+
+ /* msg and im are freed in idmap_pipe_destroy_msg */
++ ret = -ENOMEM;
+ data = kzalloc(sizeof(*data), GFP_KERNEL);
+ if (!data)
+ goto out1;
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index 77c6729e57f0..65c9c4175145 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -7678,6 +7678,22 @@ nfs4_stat_to_errno(int stat)
+ .p_name = #proc, \
+ }
+
++#if defined(CONFIG_NFS_V4_1)
++#define PROC41(proc, argtype, restype) \
++ PROC(proc, argtype, restype)
++#else
++#define PROC41(proc, argtype, restype) \
++ STUB(proc)
++#endif
++
++#if defined(CONFIG_NFS_V4_2)
++#define PROC42(proc, argtype, restype) \
++ PROC(proc, argtype, restype)
++#else
++#define PROC42(proc, argtype, restype) \
++ STUB(proc)
++#endif
++
+ const struct rpc_procinfo nfs4_procedures[] = {
+ PROC(READ, enc_read, dec_read),
+ PROC(WRITE, enc_write, dec_write),
+@@ -7698,7 +7714,6 @@ const struct rpc_procinfo nfs4_procedures[] = {
+ PROC(ACCESS, enc_access, dec_access),
+ PROC(GETATTR, enc_getattr, dec_getattr),
+ PROC(LOOKUP, enc_lookup, dec_lookup),
+- PROC(LOOKUPP, enc_lookupp, dec_lookupp),
+ PROC(LOOKUP_ROOT, enc_lookup_root, dec_lookup_root),
+ PROC(REMOVE, enc_remove, dec_remove),
+ PROC(RENAME, enc_rename, dec_rename),
+@@ -7717,33 +7732,30 @@ const struct rpc_procinfo nfs4_procedures[] = {
+ PROC(RELEASE_LOCKOWNER, enc_release_lockowner, dec_release_lockowner),
+ PROC(SECINFO, enc_secinfo, dec_secinfo),
+ PROC(FSID_PRESENT, enc_fsid_present, dec_fsid_present),
+-#if defined(CONFIG_NFS_V4_1)
+- PROC(EXCHANGE_ID, enc_exchange_id, dec_exchange_id),
+- PROC(CREATE_SESSION, enc_create_session, dec_create_session),
+- PROC(DESTROY_SESSION, enc_destroy_session, dec_destroy_session),
+- PROC(SEQUENCE, enc_sequence, dec_sequence),
+- PROC(GET_LEASE_TIME, enc_get_lease_time, dec_get_lease_time),
+- PROC(RECLAIM_COMPLETE, enc_reclaim_complete, dec_reclaim_complete),
+- PROC(GETDEVICEINFO, enc_getdeviceinfo, dec_getdeviceinfo),
+- PROC(LAYOUTGET, enc_layoutget, dec_layoutget),
+- PROC(LAYOUTCOMMIT, enc_layoutcommit, dec_layoutcommit),
+- PROC(LAYOUTRETURN, enc_layoutreturn, dec_layoutreturn),
+- PROC(SECINFO_NO_NAME, enc_secinfo_no_name, dec_secinfo_no_name),
+- PROC(TEST_STATEID, enc_test_stateid, dec_test_stateid),
+- PROC(FREE_STATEID, enc_free_stateid, dec_free_stateid),
++ PROC41(EXCHANGE_ID, enc_exchange_id, dec_exchange_id),
++ PROC41(CREATE_SESSION, enc_create_session, dec_create_session),
++ PROC41(DESTROY_SESSION, enc_destroy_session, dec_destroy_session),
++ PROC41(SEQUENCE, enc_sequence, dec_sequence),
++ PROC41(GET_LEASE_TIME, enc_get_lease_time, dec_get_lease_time),
++ PROC41(RECLAIM_COMPLETE,enc_reclaim_complete, dec_reclaim_complete),
++ PROC41(GETDEVICEINFO, enc_getdeviceinfo, dec_getdeviceinfo),
++ PROC41(LAYOUTGET, enc_layoutget, dec_layoutget),
++ PROC41(LAYOUTCOMMIT, enc_layoutcommit, dec_layoutcommit),
++ PROC41(LAYOUTRETURN, enc_layoutreturn, dec_layoutreturn),
++ PROC41(SECINFO_NO_NAME, enc_secinfo_no_name, dec_secinfo_no_name),
++ PROC41(TEST_STATEID, enc_test_stateid, dec_test_stateid),
++ PROC41(FREE_STATEID, enc_free_stateid, dec_free_stateid),
+ STUB(GETDEVICELIST),
+- PROC(BIND_CONN_TO_SESSION,
++ PROC41(BIND_CONN_TO_SESSION,
+ enc_bind_conn_to_session, dec_bind_conn_to_session),
+- PROC(DESTROY_CLIENTID, enc_destroy_clientid, dec_destroy_clientid),
+-#endif /* CONFIG_NFS_V4_1 */
+-#ifdef CONFIG_NFS_V4_2
+- PROC(SEEK, enc_seek, dec_seek),
+- PROC(ALLOCATE, enc_allocate, dec_allocate),
+- PROC(DEALLOCATE, enc_deallocate, dec_deallocate),
+- PROC(LAYOUTSTATS, enc_layoutstats, dec_layoutstats),
+- PROC(CLONE, enc_clone, dec_clone),
+- PROC(COPY, enc_copy, dec_copy),
+-#endif /* CONFIG_NFS_V4_2 */
++ PROC41(DESTROY_CLIENTID,enc_destroy_clientid, dec_destroy_clientid),
++ PROC42(SEEK, enc_seek, dec_seek),
++ PROC42(ALLOCATE, enc_allocate, dec_allocate),
++ PROC42(DEALLOCATE, enc_deallocate, dec_deallocate),
++ PROC42(LAYOUTSTATS, enc_layoutstats, dec_layoutstats),
++ PROC42(CLONE, enc_clone, dec_clone),
++ PROC42(COPY, enc_copy, dec_copy),
++ PROC(LOOKUPP, enc_lookupp, dec_lookupp),
+ };
+
+ static unsigned int nfs_version4_counts[ARRAY_SIZE(nfs4_procedures)];
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index d602fe9e1ac8..eb098ccfefd5 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -2255,7 +2255,7 @@ pnfs_write_through_mds(struct nfs_pageio_descriptor *desc,
+ nfs_pageio_reset_write_mds(desc);
+ mirror->pg_recoalesce = 1;
+ }
+- hdr->release(hdr);
++ hdr->completion_ops->completion(hdr);
+ }
+
+ static enum pnfs_try_status
+@@ -2378,7 +2378,7 @@ pnfs_read_through_mds(struct nfs_pageio_descriptor *desc,
+ nfs_pageio_reset_read_mds(desc);
+ mirror->pg_recoalesce = 1;
+ }
+- hdr->release(hdr);
++ hdr->completion_ops->completion(hdr);
+ }
+
+ /*
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 4a379d7918f2..cf61108f8f8d 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -1837,6 +1837,8 @@ static void nfs_commit_release_pages(struct nfs_commit_data *data)
+ set_bit(NFS_CONTEXT_RESEND_WRITES, &req->wb_context->flags);
+ next:
+ nfs_unlock_and_release_request(req);
++ /* Latency breaker */
++ cond_resched();
+ }
+ nfss = NFS_SERVER(data->inode);
+ if (atomic_long_read(&nfss->writeback) < NFS_CONGESTION_OFF_THRESH)
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index b29b5a185a2c..5a75135f5f53 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -3590,6 +3590,7 @@ nfsd4_verify_open_stid(struct nfs4_stid *s)
+ switch (s->sc_type) {
+ default:
+ break;
++ case 0:
+ case NFS4_CLOSED_STID:
+ case NFS4_CLOSED_DELEG_STID:
+ ret = nfserr_bad_stateid;
+diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
+index 00b6b294272a..94d2f8a8b779 100644
+--- a/fs/overlayfs/inode.c
++++ b/fs/overlayfs/inode.c
+@@ -606,6 +606,16 @@ static int ovl_inode_set(struct inode *inode, void *data)
+ static bool ovl_verify_inode(struct inode *inode, struct dentry *lowerdentry,
+ struct dentry *upperdentry)
+ {
++ if (S_ISDIR(inode->i_mode)) {
++ /* Real lower dir moved to upper layer under us? */
++ if (!lowerdentry && ovl_inode_lower(inode))
++ return false;
++
++ /* Lookup of an uncovered redirect origin? */
++ if (!upperdentry && ovl_inode_upper(inode))
++ return false;
++ }
++
+ /*
+ * Allow non-NULL lower inode in ovl_inode even if lowerdentry is NULL.
+ * This happens when finding a copied up overlay inode for a renamed
+@@ -633,6 +643,8 @@ struct inode *ovl_get_inode(struct dentry *dentry, struct dentry *upperdentry,
+ struct inode *inode;
+ /* Already indexed or could be indexed on copy up? */
+ bool indexed = (index || (ovl_indexdir(dentry->d_sb) && !upperdentry));
++ struct dentry *origin = indexed ? lowerdentry : NULL;
++ bool is_dir;
+
+ if (WARN_ON(upperdentry && indexed && !lowerdentry))
+ return ERR_PTR(-EIO);
+@@ -641,15 +653,19 @@ struct inode *ovl_get_inode(struct dentry *dentry, struct dentry *upperdentry,
+ realinode = d_inode(lowerdentry);
+
+ /*
+- * Copy up origin (lower) may exist for non-indexed upper, but we must
+- * not use lower as hash key in that case.
+- * Hash inodes that are or could be indexed by origin inode and
+- * non-indexed upper inodes that could be hard linked by upper inode.
++ * Copy up origin (lower) may exist for non-indexed non-dir upper, but
++ * we must not use lower as hash key in that case.
++ * Hash non-dir that is or could be indexed by origin inode.
++ * Hash dir that is or could be merged by origin inode.
++ * Hash pure upper and non-indexed non-dir by upper inode.
+ */
+- if (!S_ISDIR(realinode->i_mode) && (upperdentry || indexed)) {
+- struct inode *key = d_inode(indexed ? lowerdentry :
+- upperdentry);
+- unsigned int nlink;
++ is_dir = S_ISDIR(realinode->i_mode);
++ if (is_dir)
++ origin = lowerdentry;
++
++ if (upperdentry || origin) {
++ struct inode *key = d_inode(origin ?: upperdentry);
++ unsigned int nlink = is_dir ? 1 : realinode->i_nlink;
+
+ inode = iget5_locked(dentry->d_sb, (unsigned long) key,
+ ovl_inode_test, ovl_inode_set, key);
+@@ -670,8 +686,9 @@ struct inode *ovl_get_inode(struct dentry *dentry, struct dentry *upperdentry,
+ goto out;
+ }
+
+- nlink = ovl_get_nlink(lowerdentry, upperdentry,
+- realinode->i_nlink);
++ /* Recalculate nlink for non-dir due to indexing */
++ if (!is_dir)
++ nlink = ovl_get_nlink(lowerdentry, upperdentry, nlink);
+ set_nlink(inode, nlink);
+ } else {
+ inode = new_inode(dentry->d_sb);
+@@ -685,7 +702,7 @@ struct inode *ovl_get_inode(struct dentry *dentry, struct dentry *upperdentry,
+ ovl_set_flag(OVL_IMPURE, inode);
+
+ /* Check for non-merge dir that may have whiteouts */
+- if (S_ISDIR(realinode->i_mode)) {
++ if (is_dir) {
+ struct ovl_entry *oe = dentry->d_fsdata;
+
+ if (((upperdentry && lowerdentry) || oe->numlower > 1) ||
+diff --git a/fs/overlayfs/readdir.c b/fs/overlayfs/readdir.c
+index 8c98578d27a1..e258c234f357 100644
+--- a/fs/overlayfs/readdir.c
++++ b/fs/overlayfs/readdir.c
+@@ -593,8 +593,15 @@ static struct ovl_dir_cache *ovl_cache_get_impure(struct path *path)
+ return ERR_PTR(res);
+ }
+ if (list_empty(&cache->entries)) {
+- /* Good oportunity to get rid of an unnecessary "impure" flag */
+- ovl_do_removexattr(ovl_dentry_upper(dentry), OVL_XATTR_IMPURE);
++ /*
++ * A good opportunity to get rid of an unneeded "impure" flag.
++ * Removing the "impure" xattr is best effort.
++ */
++ if (!ovl_want_write(dentry)) {
++ ovl_do_removexattr(ovl_dentry_upper(dentry),
++ OVL_XATTR_IMPURE);
++ ovl_drop_write(dentry);
++ }
+ ovl_clear_flag(OVL_IMPURE, d_inode(dentry));
+ kfree(cache);
+ return NULL;
+@@ -769,10 +776,14 @@ static int ovl_dir_fsync(struct file *file, loff_t start, loff_t end,
+ struct dentry *dentry = file->f_path.dentry;
+ struct file *realfile = od->realfile;
+
++ /* Nothing to sync for lower */
++ if (!OVL_TYPE_UPPER(ovl_path_type(dentry)))
++ return 0;
++
+ /*
+ * Need to check if we started out being a lower dir, but got copied up
+ */
+- if (!od->is_upper && OVL_TYPE_UPPER(ovl_path_type(dentry))) {
++ if (!od->is_upper) {
+ struct inode *inode = file_inode(file);
+
+ realfile = READ_ONCE(od->upperfile);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index 76440feb79f6..e3d5fb651f9a 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -211,6 +211,7 @@ static void ovl_destroy_inode(struct inode *inode)
+ struct ovl_inode *oi = OVL_I(inode);
+
+ dput(oi->__upperdentry);
++ iput(oi->lower);
+ kfree(oi->redirect);
+ ovl_dir_cache_free(inode);
+ mutex_destroy(&oi->lock);
+@@ -520,10 +521,6 @@ static struct dentry *ovl_workdir_create(struct ovl_fs *ofs,
+ bool retried = false;
+ bool locked = false;
+
+- err = mnt_want_write(mnt);
+- if (err)
+- goto out_err;
+-
+ inode_lock_nested(dir, I_MUTEX_PARENT);
+ locked = true;
+
+@@ -588,7 +585,6 @@ static struct dentry *ovl_workdir_create(struct ovl_fs *ofs,
+ goto out_err;
+ }
+ out_unlock:
+- mnt_drop_write(mnt);
+ if (locked)
+ inode_unlock(dir);
+
+@@ -703,7 +699,8 @@ static int ovl_lower_dir(const char *name, struct path *path,
+ * The inodes index feature needs to encode and decode file
+ * handles, so it requires that all layers support them.
+ */
+- if (ofs->config.index && !ovl_can_decode_fh(path->dentry->d_sb)) {
++ if (ofs->config.index && ofs->config.upperdir &&
++ !ovl_can_decode_fh(path->dentry->d_sb)) {
+ ofs->config.index = false;
+ pr_warn("overlayfs: fs on '%s' does not support file handles, falling back to index=off.\n", name);
+ }
+@@ -929,12 +926,17 @@ static int ovl_get_upper(struct ovl_fs *ofs, struct path *upperpath)
+
+ static int ovl_make_workdir(struct ovl_fs *ofs, struct path *workpath)
+ {
++ struct vfsmount *mnt = ofs->upper_mnt;
+ struct dentry *temp;
+ int err;
+
++ err = mnt_want_write(mnt);
++ if (err)
++ return err;
++
+ ofs->workdir = ovl_workdir_create(ofs, OVL_WORKDIR_NAME, false);
+ if (!ofs->workdir)
+- return 0;
++ goto out;
+
+ /*
+ * Upper should support d_type, else whiteouts are visible. Given
+@@ -944,7 +946,7 @@ static int ovl_make_workdir(struct ovl_fs *ofs, struct path *workpath)
+ */
+ err = ovl_check_d_type_supported(workpath);
+ if (err < 0)
+- return err;
++ goto out;
+
+ /*
+ * We allowed this configuration and don't want to break users over
+@@ -968,6 +970,7 @@ static int ovl_make_workdir(struct ovl_fs *ofs, struct path *workpath)
+ if (err) {
+ ofs->noxattr = true;
+ pr_warn("overlayfs: upper fs does not support xattr.\n");
++ err = 0;
+ } else {
+ vfs_removexattr(ofs->workdir, OVL_XATTR_OPAQUE);
+ }
+@@ -979,7 +982,9 @@ static int ovl_make_workdir(struct ovl_fs *ofs, struct path *workpath)
+ pr_warn("overlayfs: upper fs does not support file handles, falling back to index=off.\n");
+ }
+
+- return 0;
++out:
++ mnt_drop_write(mnt);
++ return err;
+ }
+
+ static int ovl_get_workdir(struct ovl_fs *ofs, struct path *upperpath)
+@@ -1026,8 +1031,13 @@ static int ovl_get_workdir(struct ovl_fs *ofs, struct path *upperpath)
+ static int ovl_get_indexdir(struct ovl_fs *ofs, struct ovl_entry *oe,
+ struct path *upperpath)
+ {
++ struct vfsmount *mnt = ofs->upper_mnt;
+ int err;
+
++ err = mnt_want_write(mnt);
++ if (err)
++ return err;
++
+ /* Verify lower root is upper root origin */
+ err = ovl_verify_origin(upperpath->dentry, oe->lowerstack[0].dentry,
+ false, true);
+@@ -1055,6 +1065,7 @@ static int ovl_get_indexdir(struct ovl_fs *ofs, struct ovl_entry *oe,
+ pr_warn("overlayfs: try deleting index dir or mounting with '-o index=off' to disable inodes index.\n");
+
+ out:
++ mnt_drop_write(mnt);
+ return err;
+ }
+
+@@ -1257,11 +1268,16 @@ static int ovl_fill_super(struct super_block *sb, void *data, int silent)
+ if (err)
+ goto out_free_oe;
+
+- if (!ofs->indexdir)
++ /* Force r/o mount with no index dir */
++ if (!ofs->indexdir) {
++ dput(ofs->workdir);
++ ofs->workdir = NULL;
+ sb->s_flags |= SB_RDONLY;
++ }
++
+ }
+
+- /* Show index=off/on in /proc/mounts for any of the reasons above */
++ /* Show index=off in /proc/mounts for forced r/o mount */
+ if (!ofs->indexdir)
+ ofs->config.index = false;
+
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index d6bb1c9f5e7a..06119f34a69d 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -257,7 +257,7 @@ void ovl_inode_init(struct inode *inode, struct dentry *upperdentry,
+ if (upperdentry)
+ OVL_I(inode)->__upperdentry = upperdentry;
+ if (lowerdentry)
+- OVL_I(inode)->lower = d_inode(lowerdentry);
++ OVL_I(inode)->lower = igrab(d_inode(lowerdentry));
+
+ ovl_copyattr(d_inode(upperdentry ?: lowerdentry), inode);
+ }
+@@ -273,7 +273,7 @@ void ovl_inode_update(struct inode *inode, struct dentry *upperdentry)
+ */
+ smp_wmb();
+ OVL_I(inode)->__upperdentry = upperdentry;
+- if (!S_ISDIR(upperinode->i_mode) && inode_unhashed(inode)) {
++ if (inode_unhashed(inode)) {
+ inode->i_private = upperinode;
+ __insert_inode_hash(inode, (unsigned long) upperinode);
+ }
+diff --git a/fs/pipe.c b/fs/pipe.c
+index 6d98566201ef..b37a59f84dd0 100644
+--- a/fs/pipe.c
++++ b/fs/pipe.c
+@@ -610,12 +610,17 @@ static unsigned long account_pipe_buffers(struct user_struct *user,
+
+ static bool too_many_pipe_buffers_soft(unsigned long user_bufs)
+ {
+- return pipe_user_pages_soft && user_bufs >= pipe_user_pages_soft;
++ return pipe_user_pages_soft && user_bufs > pipe_user_pages_soft;
+ }
+
+ static bool too_many_pipe_buffers_hard(unsigned long user_bufs)
+ {
+- return pipe_user_pages_hard && user_bufs >= pipe_user_pages_hard;
++ return pipe_user_pages_hard && user_bufs > pipe_user_pages_hard;
++}
++
++static bool is_unprivileged_user(void)
++{
++ return !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN);
+ }
+
+ struct pipe_inode_info *alloc_pipe_info(void)
+@@ -634,12 +639,12 @@ struct pipe_inode_info *alloc_pipe_info(void)
+
+ user_bufs = account_pipe_buffers(user, 0, pipe_bufs);
+
+- if (too_many_pipe_buffers_soft(user_bufs)) {
++ if (too_many_pipe_buffers_soft(user_bufs) && is_unprivileged_user()) {
+ user_bufs = account_pipe_buffers(user, pipe_bufs, 1);
+ pipe_bufs = 1;
+ }
+
+- if (too_many_pipe_buffers_hard(user_bufs))
++ if (too_many_pipe_buffers_hard(user_bufs) && is_unprivileged_user())
+ goto out_revert_acct;
+
+ pipe->bufs = kcalloc(pipe_bufs, sizeof(struct pipe_buffer),
+@@ -1069,7 +1074,7 @@ static long pipe_set_size(struct pipe_inode_info *pipe, unsigned long arg)
+ if (nr_pages > pipe->buffers &&
+ (too_many_pipe_buffers_hard(user_bufs) ||
+ too_many_pipe_buffers_soft(user_bufs)) &&
+- !capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN)) {
++ is_unprivileged_user()) {
+ ret = -EPERM;
+ goto out_revert_acct;
+ }
+diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
+index 4bc85cb8be6a..e8a93bc8285d 100644
+--- a/fs/proc/kcore.c
++++ b/fs/proc/kcore.c
+@@ -512,23 +512,15 @@ read_kcore(struct file *file, char __user *buffer, size_t buflen, loff_t *fpos)
+ return -EFAULT;
+ } else {
+ if (kern_addr_valid(start)) {
+- unsigned long n;
+-
+ /*
+ * Using bounce buffer to bypass the
+ * hardened user copy kernel text checks.
+ */
+- memcpy(buf, (char *) start, tsz);
+- n = copy_to_user(buffer, buf, tsz);
+- /*
+- * We cannot distinguish between fault on source
+- * and fault on destination. When this happens
+- * we clear too and hope it will trigger the
+- * EFAULT again.
+- */
+- if (n) {
+- if (clear_user(buffer + tsz - n,
+- n))
++ if (probe_kernel_read(buf, (void *) start, tsz)) {
++ if (clear_user(buffer, tsz))
++ return -EFAULT;
++ } else {
++ if (copy_to_user(buffer, buf, tsz))
+ return -EFAULT;
+ }
+ } else {
+diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c
+index 417fe0b29f23..ef820f803176 100644
+--- a/fs/ubifs/dir.c
++++ b/fs/ubifs/dir.c
+@@ -1216,10 +1216,8 @@ static int ubifs_symlink(struct inode *dir, struct dentry *dentry,
+ ostr.len = disk_link.len;
+
+ err = fscrypt_fname_usr_to_disk(inode, &istr, &ostr);
+- if (err) {
+- kfree(sd);
++ if (err)
+ goto out_inode;
+- }
+
+ sd->len = cpu_to_le16(ostr.len);
+ disk_link.name = (char *)sd;
+@@ -1251,11 +1249,10 @@ static int ubifs_symlink(struct inode *dir, struct dentry *dentry,
+ goto out_cancel;
+ mutex_unlock(&dir_ui->ui_mutex);
+
+- ubifs_release_budget(c, &req);
+ insert_inode_hash(inode);
+ d_instantiate(dentry, inode);
+- fscrypt_free_filename(&nm);
+- return 0;
++ err = 0;
++ goto out_fname;
+
+ out_cancel:
+ dir->i_size -= sz_change;
+@@ -1268,6 +1265,7 @@ static int ubifs_symlink(struct inode *dir, struct dentry *dentry,
+ fscrypt_free_filename(&nm);
+ out_budg:
+ ubifs_release_budget(c, &req);
++ kfree(sd);
+ return err;
+ }
+
+diff --git a/include/crypto/hash.h b/include/crypto/hash.h
+index 0ed31fd80242..3880793e280e 100644
+--- a/include/crypto/hash.h
++++ b/include/crypto/hash.h
+@@ -210,7 +210,6 @@ struct crypto_ahash {
+ unsigned int keylen);
+
+ unsigned int reqsize;
+- bool has_setkey;
+ struct crypto_tfm base;
+ };
+
+@@ -410,11 +409,6 @@ static inline void *ahash_request_ctx(struct ahash_request *req)
+ int crypto_ahash_setkey(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen);
+
+-static inline bool crypto_ahash_has_setkey(struct crypto_ahash *tfm)
+-{
+- return tfm->has_setkey;
+-}
+-
+ /**
+ * crypto_ahash_finup() - update and finalize message digest
+ * @req: reference to the ahash_request handle that holds all information
+@@ -487,7 +481,12 @@ static inline int crypto_ahash_export(struct ahash_request *req, void *out)
+ */
+ static inline int crypto_ahash_import(struct ahash_request *req, const void *in)
+ {
+- return crypto_ahash_reqtfm(req)->import(req, in);
++ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
++
++ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
++ return -ENOKEY;
++
++ return tfm->import(req, in);
+ }
+
+ /**
+@@ -503,7 +502,12 @@ static inline int crypto_ahash_import(struct ahash_request *req, const void *in)
+ */
+ static inline int crypto_ahash_init(struct ahash_request *req)
+ {
+- return crypto_ahash_reqtfm(req)->init(req);
++ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
++
++ if (crypto_ahash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
++ return -ENOKEY;
++
++ return tfm->init(req);
+ }
+
+ /**
+@@ -855,7 +859,12 @@ static inline int crypto_shash_export(struct shash_desc *desc, void *out)
+ */
+ static inline int crypto_shash_import(struct shash_desc *desc, const void *in)
+ {
+- return crypto_shash_alg(desc->tfm)->import(desc, in);
++ struct crypto_shash *tfm = desc->tfm;
++
++ if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
++ return -ENOKEY;
++
++ return crypto_shash_alg(tfm)->import(desc, in);
+ }
+
+ /**
+@@ -871,7 +880,12 @@ static inline int crypto_shash_import(struct shash_desc *desc, const void *in)
+ */
+ static inline int crypto_shash_init(struct shash_desc *desc)
+ {
+- return crypto_shash_alg(desc->tfm)->init(desc);
++ struct crypto_shash *tfm = desc->tfm;
++
++ if (crypto_shash_get_flags(tfm) & CRYPTO_TFM_NEED_KEY)
++ return -ENOKEY;
++
++ return crypto_shash_alg(tfm)->init(desc);
+ }
+
+ /**
+diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
+index c2bae8da642c..27040a46d50a 100644
+--- a/include/crypto/internal/hash.h
++++ b/include/crypto/internal/hash.h
+@@ -90,6 +90,8 @@ static inline bool crypto_shash_alg_has_setkey(struct shash_alg *alg)
+ return alg->setkey != shash_no_setkey;
+ }
+
++bool crypto_hash_alg_has_setkey(struct hash_alg_common *halg);
++
+ int crypto_init_ahash_spawn(struct crypto_ahash_spawn *spawn,
+ struct hash_alg_common *alg,
+ struct crypto_instance *inst);
+diff --git a/include/crypto/poly1305.h b/include/crypto/poly1305.h
+index c65567d01e8e..f718a19da82f 100644
+--- a/include/crypto/poly1305.h
++++ b/include/crypto/poly1305.h
+@@ -31,8 +31,6 @@ struct poly1305_desc_ctx {
+ };
+
+ int crypto_poly1305_init(struct shash_desc *desc);
+-int crypto_poly1305_setkey(struct crypto_shash *tfm,
+- const u8 *key, unsigned int keylen);
+ unsigned int crypto_poly1305_setdesckey(struct poly1305_desc_ctx *dctx,
+ const u8 *src, unsigned int srclen);
+ int crypto_poly1305_update(struct shash_desc *desc,
+diff --git a/include/kvm/arm_psci.h b/include/kvm/arm_psci.h
+new file mode 100644
+index 000000000000..e518e4e3dfb5
+--- /dev/null
++++ b/include/kvm/arm_psci.h
+@@ -0,0 +1,51 @@
++/*
++ * Copyright (C) 2012,2013 - ARM Ltd
++ * Author: Marc Zyngier <marc.zyngier@arm.com>
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ *
++ * This program is distributed in the hope that it will be useful,
++ * but WITHOUT ANY WARRANTY; without even the implied warranty of
++ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
++ * GNU General Public License for more details.
++ *
++ * You should have received a copy of the GNU General Public License
++ * along with this program. If not, see <http://www.gnu.org/licenses/>.
++ */
++
++#ifndef __KVM_ARM_PSCI_H__
++#define __KVM_ARM_PSCI_H__
++
++#include <linux/kvm_host.h>
++#include <uapi/linux/psci.h>
++
++#define KVM_ARM_PSCI_0_1 PSCI_VERSION(0, 1)
++#define KVM_ARM_PSCI_0_2 PSCI_VERSION(0, 2)
++#define KVM_ARM_PSCI_1_0 PSCI_VERSION(1, 0)
++
++#define KVM_ARM_PSCI_LATEST KVM_ARM_PSCI_1_0
++
++/*
++ * We need the KVM pointer independently from the vcpu as we can call
++ * this from HYP, and need to apply kern_hyp_va on it...
++ */
++static inline int kvm_psci_version(struct kvm_vcpu *vcpu, struct kvm *kvm)
++{
++ /*
++ * Our PSCI implementation stays the same across versions from
++ * v0.2 onward, only adding the few mandatory functions (such
++ * as FEATURES with 1.0) that are required by newer
++ * revisions. It is thus safe to return the latest.
++ */
++ if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))
++ return KVM_ARM_PSCI_LATEST;
++
++ return KVM_ARM_PSCI_0_1;
++}
++
++
++int kvm_hvc_call_handler(struct kvm_vcpu *vcpu);
++
++#endif /* __KVM_ARM_PSCI_H__ */
+diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
+index 4c5bca38c653..a031897fca76 100644
+--- a/include/linux/arm-smccc.h
++++ b/include/linux/arm-smccc.h
+@@ -14,14 +14,16 @@
+ #ifndef __LINUX_ARM_SMCCC_H
+ #define __LINUX_ARM_SMCCC_H
+
++#include <uapi/linux/const.h>
++
+ /*
+ * This file provides common defines for ARM SMC Calling Convention as
+ * specified in
+ * http://infocenter.arm.com/help/topic/com.arm.doc.den0028a/index.html
+ */
+
+-#define ARM_SMCCC_STD_CALL 0
+-#define ARM_SMCCC_FAST_CALL 1
++#define ARM_SMCCC_STD_CALL _AC(0,U)
++#define ARM_SMCCC_FAST_CALL _AC(1,U)
+ #define ARM_SMCCC_TYPE_SHIFT 31
+
+ #define ARM_SMCCC_SMC_32 0
+@@ -60,6 +62,24 @@
+ #define ARM_SMCCC_QUIRK_NONE 0
+ #define ARM_SMCCC_QUIRK_QCOM_A6 1 /* Save/restore register a6 */
+
++#define ARM_SMCCC_VERSION_1_0 0x10000
++#define ARM_SMCCC_VERSION_1_1 0x10001
++
++#define ARM_SMCCC_VERSION_FUNC_ID \
++ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
++ ARM_SMCCC_SMC_32, \
++ 0, 0)
++
++#define ARM_SMCCC_ARCH_FEATURES_FUNC_ID \
++ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
++ ARM_SMCCC_SMC_32, \
++ 0, 1)
++
++#define ARM_SMCCC_ARCH_WORKAROUND_1 \
++ ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL, \
++ ARM_SMCCC_SMC_32, \
++ 0, 0x8000)
++
+ #ifndef __ASSEMBLY__
+
+ #include <linux/linkage.h>
+@@ -130,5 +150,146 @@ asmlinkage void __arm_smccc_hvc(unsigned long a0, unsigned long a1,
+
+ #define arm_smccc_hvc_quirk(...) __arm_smccc_hvc(__VA_ARGS__)
+
++/* SMCCC v1.1 implementation madness follows */
++#ifdef CONFIG_ARM64
++
++#define SMCCC_SMC_INST "smc #0"
++#define SMCCC_HVC_INST "hvc #0"
++
++#elif defined(CONFIG_ARM)
++#include <asm/opcodes-sec.h>
++#include <asm/opcodes-virt.h>
++
++#define SMCCC_SMC_INST __SMC(0)
++#define SMCCC_HVC_INST __HVC(0)
++
++#endif
++
++#define ___count_args(_0, _1, _2, _3, _4, _5, _6, _7, _8, x, ...) x
++
++#define __count_args(...) \
++ ___count_args(__VA_ARGS__, 7, 6, 5, 4, 3, 2, 1, 0)
++
++#define __constraint_write_0 \
++ "+r" (r0), "=&r" (r1), "=&r" (r2), "=&r" (r3)
++#define __constraint_write_1 \
++ "+r" (r0), "+r" (r1), "=&r" (r2), "=&r" (r3)
++#define __constraint_write_2 \
++ "+r" (r0), "+r" (r1), "+r" (r2), "=&r" (r3)
++#define __constraint_write_3 \
++ "+r" (r0), "+r" (r1), "+r" (r2), "+r" (r3)
++#define __constraint_write_4 __constraint_write_3
++#define __constraint_write_5 __constraint_write_4
++#define __constraint_write_6 __constraint_write_5
++#define __constraint_write_7 __constraint_write_6
++
++#define __constraint_read_0
++#define __constraint_read_1
++#define __constraint_read_2
++#define __constraint_read_3
++#define __constraint_read_4 "r" (r4)
++#define __constraint_read_5 __constraint_read_4, "r" (r5)
++#define __constraint_read_6 __constraint_read_5, "r" (r6)
++#define __constraint_read_7 __constraint_read_6, "r" (r7)
++
++#define __declare_arg_0(a0, res) \
++ struct arm_smccc_res *___res = res; \
++ register u32 r0 asm("r0") = a0; \
++ register unsigned long r1 asm("r1"); \
++ register unsigned long r2 asm("r2"); \
++ register unsigned long r3 asm("r3")
++
++#define __declare_arg_1(a0, a1, res) \
++ struct arm_smccc_res *___res = res; \
++ register u32 r0 asm("r0") = a0; \
++ register typeof(a1) r1 asm("r1") = a1; \
++ register unsigned long r2 asm("r2"); \
++ register unsigned long r3 asm("r3")
++
++#define __declare_arg_2(a0, a1, a2, res) \
++ struct arm_smccc_res *___res = res; \
++ register u32 r0 asm("r0") = a0; \
++ register typeof(a1) r1 asm("r1") = a1; \
++ register typeof(a2) r2 asm("r2") = a2; \
++ register unsigned long r3 asm("r3")
++
++#define __declare_arg_3(a0, a1, a2, a3, res) \
++ struct arm_smccc_res *___res = res; \
++ register u32 r0 asm("r0") = a0; \
++ register typeof(a1) r1 asm("r1") = a1; \
++ register typeof(a2) r2 asm("r2") = a2; \
++ register typeof(a3) r3 asm("r3") = a3
++
++#define __declare_arg_4(a0, a1, a2, a3, a4, res) \
++ __declare_arg_3(a0, a1, a2, a3, res); \
++ register typeof(a4) r4 asm("r4") = a4
++
++#define __declare_arg_5(a0, a1, a2, a3, a4, a5, res) \
++ __declare_arg_4(a0, a1, a2, a3, a4, res); \
++ register typeof(a5) r5 asm("r5") = a5
++
++#define __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res) \
++ __declare_arg_5(a0, a1, a2, a3, a4, a5, res); \
++ register typeof(a6) r6 asm("r6") = a6
++
++#define __declare_arg_7(a0, a1, a2, a3, a4, a5, a6, a7, res) \
++ __declare_arg_6(a0, a1, a2, a3, a4, a5, a6, res); \
++ register typeof(a7) r7 asm("r7") = a7
++
++#define ___declare_args(count, ...) __declare_arg_ ## count(__VA_ARGS__)
++#define __declare_args(count, ...) ___declare_args(count, __VA_ARGS__)
++
++#define ___constraints(count) \
++ : __constraint_write_ ## count \
++ : __constraint_read_ ## count \
++ : "memory"
++#define __constraints(count) ___constraints(count)
++
++/*
++ * We have an output list that is not necessarily used, and GCC feels
++ * entitled to optimise the whole sequence away. "volatile" is what
++ * makes it stick.
++ */
++#define __arm_smccc_1_1(inst, ...) \
++ do { \
++ __declare_args(__count_args(__VA_ARGS__), __VA_ARGS__); \
++ asm volatile(inst "\n" \
++ __constraints(__count_args(__VA_ARGS__))); \
++ if (___res) \
++ *___res = (typeof(*___res)){r0, r1, r2, r3}; \
++ } while (0)
++
++/*
++ * arm_smccc_1_1_smc() - make an SMCCC v1.1 compliant SMC call
++ *
++ * This is a variadic macro taking one to eight source arguments, and
++ * an optional return structure.
++ *
++ * @a0-a7: arguments passed in registers 0 to 7
++ * @res: result values from registers 0 to 3
++ *
++ * This macro is used to make SMC calls following SMC Calling Convention v1.1.
++ * The content of the supplied param are copied to registers 0 to 7 prior
++ * to the SMC instruction. The return values are updated with the content
++ * from register 0 to 3 on return from the SMC instruction if not NULL.
++ */
++#define arm_smccc_1_1_smc(...) __arm_smccc_1_1(SMCCC_SMC_INST, __VA_ARGS__)
++
++/*
++ * arm_smccc_1_1_hvc() - make an SMCCC v1.1 compliant HVC call
++ *
++ * This is a variadic macro taking one to eight source arguments, and
++ * an optional return structure.
++ *
++ * @a0-a7: arguments passed in registers 0 to 7
++ * @res: result values from registers 0 to 3
++ *
++ * This macro is used to make HVC calls following SMC Calling Convention v1.1.
++ * The content of the supplied parameters is copied to registers 0 to 7
++ * prior to the HVC instruction. If @res is not NULL, it is updated with
++ * the content of registers 0 to 3 on return from the HVC instruction.
++ */
++#define arm_smccc_1_1_hvc(...) __arm_smccc_1_1(SMCCC_HVC_INST, __VA_ARGS__)
++
+ #endif /*__ASSEMBLY__*/
+ #endif /*__LINUX_ARM_SMCCC_H*/
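Editorial note: the `__arm_smccc_1_1()` macro above dispatches on an argument count produced by a `__count_args()` helper defined earlier in this header (not shown in this hunk). The trick can be sketched as follows; the names mirror the kernel's, but these definitions are illustrative rather than the exact upstream ones.

```c
/* Argument-counting variadic macros: the caller's arguments shift a
 * descending number list to the right, so whichever number lands in
 * the x slot equals the count of arguments after the function ID. */
#define ___count_args(_0, _1, _2, _3, _4, _5, _6, _7, x, ...) x
#define __count_args(...) \
	___count_args(__VA_ARGS__, 7, 6, 5, 4, 3, 2, 1, 0)
```

With one argument (the function ID alone) the padding numbers fill all eight named slots and `0` lands in `x`; each extra argument pushes the list one slot further, yielding the count at preprocessing time.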
+diff --git a/include/linux/crypto.h b/include/linux/crypto.h
+index 78508ca4b108..29c4257f9c5b 100644
+--- a/include/linux/crypto.h
++++ b/include/linux/crypto.h
+@@ -106,9 +106,17 @@
+ */
+ #define CRYPTO_ALG_INTERNAL 0x00002000
+
++/*
++ * Set if the algorithm has a ->setkey() method but can be used without
++ * calling it first, i.e. there is a default key.
++ */
++#define CRYPTO_ALG_OPTIONAL_KEY 0x00004000
++
+ /*
+ * Transform masks and values (for crt_flags).
+ */
++#define CRYPTO_TFM_NEED_KEY 0x00000001
++
+ #define CRYPTO_TFM_REQ_MASK 0x000fff00
+ #define CRYPTO_TFM_RES_MASK 0xfff00000
+
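Editorial note: the two new flags above live in different spaces — `CRYPTO_ALG_OPTIONAL_KEY` is a static algorithm property, while `CRYPTO_TFM_NEED_KEY` is a per-transform `crt_flags` state bit sitting below the existing request/result mask ranges. A minimal sketch (values restated from the hunk; the helper is illustrative, not a kernel API):

```c
/* Flag values as added/shown in the crypto.h hunk above. */
#define CRYPTO_ALG_OPTIONAL_KEY 0x00004000
#define CRYPTO_TFM_NEED_KEY     0x00000001
#define CRYPTO_TFM_REQ_MASK     0x000fff00
#define CRYPTO_TFM_RES_MASK     0xfff00000

/* Hypothetical helper: report whether a transform still needs setkey(). */
static int needs_key(unsigned int crt_flags)
{
	return !!(crt_flags & CRYPTO_TFM_NEED_KEY);
}
```

The new state bit does not collide with the request or result mask ranges, so it can coexist with the existing `crt_flags` users.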
+diff --git a/include/linux/mtd/map.h b/include/linux/mtd/map.h
+index 3aa56e3104bb..b5b43f94f311 100644
+--- a/include/linux/mtd/map.h
++++ b/include/linux/mtd/map.h
+@@ -270,75 +270,67 @@ void map_destroy(struct mtd_info *mtd);
+ #define INVALIDATE_CACHED_RANGE(map, from, size) \
+ do { if (map->inval_cache) map->inval_cache(map, from, size); } while (0)
+
+-
+-static inline int map_word_equal(struct map_info *map, map_word val1, map_word val2)
+-{
+- int i;
+-
+- for (i = 0; i < map_words(map); i++) {
+- if (val1.x[i] != val2.x[i])
+- return 0;
+- }
+-
+- return 1;
+-}
+-
+-static inline map_word map_word_and(struct map_info *map, map_word val1, map_word val2)
+-{
+- map_word r;
+- int i;
+-
+- for (i = 0; i < map_words(map); i++)
+- r.x[i] = val1.x[i] & val2.x[i];
+-
+- return r;
+-}
+-
+-static inline map_word map_word_clr(struct map_info *map, map_word val1, map_word val2)
+-{
+- map_word r;
+- int i;
+-
+- for (i = 0; i < map_words(map); i++)
+- r.x[i] = val1.x[i] & ~val2.x[i];
+-
+- return r;
+-}
+-
+-static inline map_word map_word_or(struct map_info *map, map_word val1, map_word val2)
+-{
+- map_word r;
+- int i;
+-
+- for (i = 0; i < map_words(map); i++)
+- r.x[i] = val1.x[i] | val2.x[i];
+-
+- return r;
+-}
+-
+-static inline int map_word_andequal(struct map_info *map, map_word val1, map_word val2, map_word val3)
+-{
+- int i;
+-
+- for (i = 0; i < map_words(map); i++) {
+- if ((val1.x[i] & val2.x[i]) != val3.x[i])
+- return 0;
+- }
+-
+- return 1;
+-}
+-
+-static inline int map_word_bitsset(struct map_info *map, map_word val1, map_word val2)
+-{
+- int i;
+-
+- for (i = 0; i < map_words(map); i++) {
+- if (val1.x[i] & val2.x[i])
+- return 1;
+- }
+-
+- return 0;
+-}
++#define map_word_equal(map, val1, val2) \
++({ \
++ int i, ret = 1; \
++ for (i = 0; i < map_words(map); i++) \
++ if ((val1).x[i] != (val2).x[i]) { \
++ ret = 0; \
++ break; \
++ } \
++ ret; \
++})
++
++#define map_word_and(map, val1, val2) \
++({ \
++ map_word r; \
++ int i; \
++ for (i = 0; i < map_words(map); i++) \
++ r.x[i] = (val1).x[i] & (val2).x[i]; \
++ r; \
++})
++
++#define map_word_clr(map, val1, val2) \
++({ \
++ map_word r; \
++ int i; \
++ for (i = 0; i < map_words(map); i++) \
++ r.x[i] = (val1).x[i] & ~(val2).x[i]; \
++ r; \
++})
++
++#define map_word_or(map, val1, val2) \
++({ \
++ map_word r; \
++ int i; \
++ for (i = 0; i < map_words(map); i++) \
++ r.x[i] = (val1).x[i] | (val2).x[i]; \
++ r; \
++})
++
++#define map_word_andequal(map, val1, val2, val3) \
++({ \
++ int i, ret = 1; \
++ for (i = 0; i < map_words(map); i++) { \
++ if (((val1).x[i] & (val2).x[i]) != (val3).x[i]) { \
++ ret = 0; \
++ break; \
++ } \
++ } \
++ ret; \
++})
++
++#define map_word_bitsset(map, val1, val2) \
++({ \
++ int i, ret = 0; \
++ for (i = 0; i < map_words(map); i++) { \
++ if ((val1).x[i] & (val2).x[i]) { \
++ ret = 1; \
++ break; \
++ } \
++ } \
++ ret; \
++})
+
+ static inline map_word map_word_load(struct map_info *map, const void *ptr)
+ {
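Editorial note: the hunk above converts the `map_word` inline functions into macros built on GNU statement expressions, which let a macro contain a loop yet still yield a value. A minimal standalone model (with `map_words()` fixed at 2 purely for illustration):

```c
/* Simplified map_word: two machine words per bus word. */
typedef struct { unsigned long x[2]; } map_word;
#define map_words(map) 2

/* Statement expression: the last expression (ret) is the macro's value. */
#define map_word_equal(map, val1, val2) \
({ \
	int i, ret = 1; \
	for (i = 0; i < map_words(map); i++) \
		if ((val1).x[i] != (val2).x[i]) { ret = 0; break; } \
	ret; \
})

/* Masked comparison, matching the semantics of the inline function
 * removed above: (val1 & val2) == val3, word by word. */
#define map_word_andequal(map, val1, val2, val3) \
({ \
	int i, ret = 1; \
	for (i = 0; i < map_words(map); i++) \
		if (((val1).x[i] & (val2).x[i]) != (val3).x[i]) { ret = 0; break; } \
	ret; \
})
```

Because the loop bound `map_words(map)` is now visible at each expansion site, the compiler can unroll or fold it per call site — the motivation for the macro conversion.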
+diff --git a/include/linux/nfs4.h b/include/linux/nfs4.h
+index 47adac640191..57ffaa20d564 100644
+--- a/include/linux/nfs4.h
++++ b/include/linux/nfs4.h
+@@ -457,7 +457,12 @@ enum lock_type4 {
+
+ #define NFS4_DEBUG 1
+
+-/* Index of predefined Linux client operations */
++/*
++ * Index of predefined Linux client operations
++ *
++ * To ensure that /proc/net/rpc/nfs remains correctly ordered, please
++ * append only to this enum when adding new client operations.
++ */
+
+ enum {
+ NFSPROC4_CLNT_NULL = 0, /* Unused */
+@@ -480,7 +485,6 @@ enum {
+ NFSPROC4_CLNT_ACCESS,
+ NFSPROC4_CLNT_GETATTR,
+ NFSPROC4_CLNT_LOOKUP,
+- NFSPROC4_CLNT_LOOKUPP,
+ NFSPROC4_CLNT_LOOKUP_ROOT,
+ NFSPROC4_CLNT_REMOVE,
+ NFSPROC4_CLNT_RENAME,
+@@ -500,7 +504,6 @@ enum {
+ NFSPROC4_CLNT_SECINFO,
+ NFSPROC4_CLNT_FSID_PRESENT,
+
+- /* nfs41 */
+ NFSPROC4_CLNT_EXCHANGE_ID,
+ NFSPROC4_CLNT_CREATE_SESSION,
+ NFSPROC4_CLNT_DESTROY_SESSION,
+@@ -518,13 +521,14 @@ enum {
+ NFSPROC4_CLNT_BIND_CONN_TO_SESSION,
+ NFSPROC4_CLNT_DESTROY_CLIENTID,
+
+- /* nfs42 */
+ NFSPROC4_CLNT_SEEK,
+ NFSPROC4_CLNT_ALLOCATE,
+ NFSPROC4_CLNT_DEALLOCATE,
+ NFSPROC4_CLNT_LAYOUTSTATS,
+ NFSPROC4_CLNT_CLONE,
+ NFSPROC4_CLNT_COPY,
++
++ NFSPROC4_CLNT_LOOKUPP,
+ };
+
+ /* nfs41 types */
+diff --git a/include/linux/psci.h b/include/linux/psci.h
+index bdea1cb5e1db..347077cf19c6 100644
+--- a/include/linux/psci.h
++++ b/include/linux/psci.h
+@@ -25,7 +25,19 @@ bool psci_tos_resident_on(int cpu);
+ int psci_cpu_init_idle(unsigned int cpu);
+ int psci_cpu_suspend_enter(unsigned long index);
+
++enum psci_conduit {
++ PSCI_CONDUIT_NONE,
++ PSCI_CONDUIT_SMC,
++ PSCI_CONDUIT_HVC,
++};
++
++enum smccc_version {
++ SMCCC_VERSION_1_0,
++ SMCCC_VERSION_1_1,
++};
++
+ struct psci_operations {
++ u32 (*get_version)(void);
+ int (*cpu_suspend)(u32 state, unsigned long entry_point);
+ int (*cpu_off)(u32 state);
+ int (*cpu_on)(unsigned long cpuid, unsigned long entry_point);
+@@ -33,6 +45,8 @@ struct psci_operations {
+ int (*affinity_info)(unsigned long target_affinity,
+ unsigned long lowest_affinity_level);
+ int (*migrate_info_type)(void);
++ enum psci_conduit conduit;
++ enum smccc_version smccc_version;
+ };
+
+ extern struct psci_operations psci_ops;
+diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
+index a8b7bf879ced..1a1df0d21ee3 100644
+--- a/include/scsi/scsi_host.h
++++ b/include/scsi/scsi_host.h
+@@ -571,6 +571,8 @@ struct Scsi_Host {
+ struct blk_mq_tag_set tag_set;
+ };
+
++ struct rcu_head rcu;
++
+ atomic_t host_busy; /* commands actually active on low-level */
+ atomic_t host_blocked;
+
+diff --git a/include/uapi/linux/psci.h b/include/uapi/linux/psci.h
+index 760e52a9640f..b3bcabe380da 100644
+--- a/include/uapi/linux/psci.h
++++ b/include/uapi/linux/psci.h
+@@ -88,6 +88,9 @@
+ (((ver) & PSCI_VERSION_MAJOR_MASK) >> PSCI_VERSION_MAJOR_SHIFT)
+ #define PSCI_VERSION_MINOR(ver) \
+ ((ver) & PSCI_VERSION_MINOR_MASK)
++#define PSCI_VERSION(maj, min) \
++ ((((maj) << PSCI_VERSION_MAJOR_SHIFT) & PSCI_VERSION_MAJOR_MASK) | \
++ ((min) & PSCI_VERSION_MINOR_MASK))
+
+ /* PSCI features decoding (>=1.0) */
+ #define PSCI_1_0_FEATURES_CPU_SUSPEND_PF_SHIFT 1
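Editorial note: the new `PSCI_VERSION()` pack macro above is the inverse of the existing `PSCI_VERSION_MAJOR()`/`PSCI_VERSION_MINOR()` unpack macros. A self-contained round-trip sketch, with the shift/mask values restated from the uapi header:

```c
/* Shift/mask values restated from include/uapi/linux/psci.h. */
#define PSCI_VERSION_MAJOR_SHIFT 16
#define PSCI_VERSION_MINOR_MASK  ((1U << PSCI_VERSION_MAJOR_SHIFT) - 1)
#define PSCI_VERSION_MAJOR_MASK  ~PSCI_VERSION_MINOR_MASK

#define PSCI_VERSION_MAJOR(ver) \
	(((ver) & PSCI_VERSION_MAJOR_MASK) >> PSCI_VERSION_MAJOR_SHIFT)
#define PSCI_VERSION_MINOR(ver) \
	((ver) & PSCI_VERSION_MINOR_MASK)

/* The pack macro added by this patch: major in bits [31:16], minor in [15:0]. */
#define PSCI_VERSION(maj, min) \
	((((maj) << PSCI_VERSION_MAJOR_SHIFT) & PSCI_VERSION_MAJOR_MASK) | \
	 ((min) & PSCI_VERSION_MINOR_MASK))
```

Packing and then unpacking recovers the original major/minor pair, which is what the PSCI v1.0 reporting code added elsewhere in this patch relies on.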
+diff --git a/kernel/async.c b/kernel/async.c
+index 2cbd3dd5940d..a893d6170944 100644
+--- a/kernel/async.c
++++ b/kernel/async.c
+@@ -84,20 +84,24 @@ static atomic_t entry_count;
+
+ static async_cookie_t lowest_in_progress(struct async_domain *domain)
+ {
+- struct list_head *pending;
++ struct async_entry *first = NULL;
+ async_cookie_t ret = ASYNC_COOKIE_MAX;
+ unsigned long flags;
+
+ spin_lock_irqsave(&async_lock, flags);
+
+- if (domain)
+- pending = &domain->pending;
+- else
+- pending = &async_global_pending;
++ if (domain) {
++ if (!list_empty(&domain->pending))
++ first = list_first_entry(&domain->pending,
++ struct async_entry, domain_list);
++ } else {
++ if (!list_empty(&async_global_pending))
++ first = list_first_entry(&async_global_pending,
++ struct async_entry, global_list);
++ }
+
+- if (!list_empty(pending))
+- ret = list_first_entry(pending, struct async_entry,
+- domain_list)->cookie;
++ if (first)
++ ret = first->cookie;
+
+ spin_unlock_irqrestore(&async_lock, flags);
+ return ret;
+diff --git a/kernel/irq/autoprobe.c b/kernel/irq/autoprobe.c
+index 4e8089b319ae..8c82ea26e837 100644
+--- a/kernel/irq/autoprobe.c
++++ b/kernel/irq/autoprobe.c
+@@ -71,7 +71,7 @@ unsigned long probe_irq_on(void)
+ raw_spin_lock_irq(&desc->lock);
+ if (!desc->action && irq_settings_can_probe(desc)) {
+ desc->istate |= IRQS_AUTODETECT | IRQS_WAITING;
+- if (irq_startup(desc, IRQ_NORESEND, IRQ_START_FORCE))
++ if (irq_activate_and_startup(desc, IRQ_NORESEND))
+ desc->istate |= IRQS_PENDING;
+ }
+ raw_spin_unlock_irq(&desc->lock);
+diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
+index 043bfc35b353..c69357a43849 100644
+--- a/kernel/irq/chip.c
++++ b/kernel/irq/chip.c
+@@ -294,11 +294,11 @@ int irq_activate(struct irq_desc *desc)
+ return 0;
+ }
+
+-void irq_activate_and_startup(struct irq_desc *desc, bool resend)
++int irq_activate_and_startup(struct irq_desc *desc, bool resend)
+ {
+ if (WARN_ON(irq_activate(desc)))
+- return;
+- irq_startup(desc, resend, IRQ_START_FORCE);
++ return 0;
++ return irq_startup(desc, resend, IRQ_START_FORCE);
+ }
+
+ static void __irq_disable(struct irq_desc *desc, bool mask);
+diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
+index ab19371eab9b..ca6afa267070 100644
+--- a/kernel/irq/internals.h
++++ b/kernel/irq/internals.h
+@@ -76,7 +76,7 @@ extern void __enable_irq(struct irq_desc *desc);
+ #define IRQ_START_COND false
+
+ extern int irq_activate(struct irq_desc *desc);
+-extern void irq_activate_and_startup(struct irq_desc *desc, bool resend);
++extern int irq_activate_and_startup(struct irq_desc *desc, bool resend);
+ extern int irq_startup(struct irq_desc *desc, bool resend, bool force);
+
+ extern void irq_shutdown(struct irq_desc *desc);
+diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
+index fbd56d6e575b..68fa19a5e7bd 100644
+--- a/kernel/rcu/update.c
++++ b/kernel/rcu/update.c
+@@ -422,11 +422,13 @@ void init_rcu_head(struct rcu_head *head)
+ {
+ debug_object_init(head, &rcuhead_debug_descr);
+ }
++EXPORT_SYMBOL_GPL(init_rcu_head);
+
+ void destroy_rcu_head(struct rcu_head *head)
+ {
+ debug_object_free(head, &rcuhead_debug_descr);
+ }
++EXPORT_SYMBOL_GPL(destroy_rcu_head);
+
+ static bool rcuhead_is_static_object(void *addr)
+ {
+diff --git a/kernel/relay.c b/kernel/relay.c
+index 39a9dfc69486..55da824f4adc 100644
+--- a/kernel/relay.c
++++ b/kernel/relay.c
+@@ -611,7 +611,6 @@ struct rchan *relay_open(const char *base_filename,
+
+ kref_put(&chan->kref, relay_destroy_channel);
+ mutex_unlock(&relay_channels_mutex);
+- kfree(chan);
+ return NULL;
+ }
+ EXPORT_SYMBOL_GPL(relay_open);
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 665ace2fc558..3401f588c916 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -1907,9 +1907,8 @@ static void push_rt_tasks(struct rq *rq)
+ * the rt_loop_next will cause the iterator to perform another scan.
+ *
+ */
+-static int rto_next_cpu(struct rq *rq)
++static int rto_next_cpu(struct root_domain *rd)
+ {
+- struct root_domain *rd = rq->rd;
+ int next;
+ int cpu;
+
+@@ -1985,19 +1984,24 @@ static void tell_cpu_to_push(struct rq *rq)
+ * Otherwise it is finishing up and an ipi needs to be sent.
+ */
+ if (rq->rd->rto_cpu < 0)
+- cpu = rto_next_cpu(rq);
++ cpu = rto_next_cpu(rq->rd);
+
+ raw_spin_unlock(&rq->rd->rto_lock);
+
+ rto_start_unlock(&rq->rd->rto_loop_start);
+
+- if (cpu >= 0)
++ if (cpu >= 0) {
++ /* Make sure the rd does not get freed while pushing */
++ sched_get_rd(rq->rd);
+ irq_work_queue_on(&rq->rd->rto_push_work, cpu);
++ }
+ }
+
+ /* Called from hardirq context */
+ void rto_push_irq_work_func(struct irq_work *work)
+ {
++ struct root_domain *rd =
++ container_of(work, struct root_domain, rto_push_work);
+ struct rq *rq;
+ int cpu;
+
+@@ -2013,18 +2017,20 @@ void rto_push_irq_work_func(struct irq_work *work)
+ raw_spin_unlock(&rq->lock);
+ }
+
+- raw_spin_lock(&rq->rd->rto_lock);
++ raw_spin_lock(&rd->rto_lock);
+
+ /* Pass the IPI to the next rt overloaded queue */
+- cpu = rto_next_cpu(rq);
++ cpu = rto_next_cpu(rd);
+
+- raw_spin_unlock(&rq->rd->rto_lock);
++ raw_spin_unlock(&rd->rto_lock);
+
+- if (cpu < 0)
++ if (cpu < 0) {
++ sched_put_rd(rd);
+ return;
++ }
+
+ /* Try the next RT overloaded CPU */
+- irq_work_queue_on(&rq->rd->rto_push_work, cpu);
++ irq_work_queue_on(&rd->rto_push_work, cpu);
+ }
+ #endif /* HAVE_RT_PUSH_IPI */
+
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index b19552a212de..74b57279e3ff 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -665,6 +665,8 @@ extern struct mutex sched_domains_mutex;
+ extern void init_defrootdomain(void);
+ extern int sched_init_domains(const struct cpumask *cpu_map);
+ extern void rq_attach_root(struct rq *rq, struct root_domain *rd);
++extern void sched_get_rd(struct root_domain *rd);
++extern void sched_put_rd(struct root_domain *rd);
+
+ #ifdef HAVE_RT_PUSH_IPI
+ extern void rto_push_irq_work_func(struct irq_work *work);
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 034cbed7f88b..519b024f4e94 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -259,6 +259,19 @@ void rq_attach_root(struct rq *rq, struct root_domain *rd)
+ call_rcu_sched(&old_rd->rcu, free_rootdomain);
+ }
+
++void sched_get_rd(struct root_domain *rd)
++{
++ atomic_inc(&rd->refcount);
++}
++
++void sched_put_rd(struct root_domain *rd)
++{
++ if (!atomic_dec_and_test(&rd->refcount))
++ return;
++
++ call_rcu_sched(&rd->rcu, free_rootdomain);
++}
++
+ static int init_rootdomain(struct root_domain *rd)
+ {
+ if (!zalloc_cpumask_var(&rd->span, GFP_KERNEL))
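Editorial note: the `sched_get_rd()`/`sched_put_rd()` pair above exists so the RT push-IPI path can pin the `root_domain` before queueing irq work and drop the reference when the IPI loop terminates. A sketch of the pairing, using C11 atomics in place of the kernel's `atomic_t` and a flag in place of `call_rcu_sched()`:

```c
#include <stdatomic.h>
#include <stdbool.h>

struct root_domain {
	atomic_int refcount;
	bool freed;
};

static void sched_get_rd(struct root_domain *rd)
{
	atomic_fetch_add(&rd->refcount, 1);
}

static void sched_put_rd(struct root_domain *rd)
{
	/* Free only when the last reference is dropped. */
	if (atomic_fetch_sub(&rd->refcount, 1) != 1)
		return;
	rd->freed = true;	/* stand-in for call_rcu_sched(&rd->rcu, free_rootdomain) */
}

static bool pinned_rd_survives_one_put(void)
{
	struct root_domain rd = { 1, false };	/* the rq holds the initial ref */

	sched_get_rd(&rd);	/* pin for the in-flight IPI */
	sched_put_rd(&rd);	/* IPI loop finished */
	return !rd.freed;
}

static bool rd_freed_on_last_put(void)
{
	struct root_domain rd = { 1, false };

	sched_get_rd(&rd);
	sched_put_rd(&rd);
	sched_put_rd(&rd);	/* last reference gone */
	return rd.freed;
}
```

The extra reference taken in `tell_cpu_to_push()` guarantees the domain outlives the queued `rto_push_work`, even if the rq detaches from it meanwhile.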
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 554b517c61a0..a7741d14f51b 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -4456,7 +4456,6 @@ unregister_ftrace_function_probe_func(char *glob, struct trace_array *tr,
+ func_g.type = filter_parse_regex(glob, strlen(glob),
+ &func_g.search, ¬);
+ func_g.len = strlen(func_g.search);
+- func_g.search = glob;
+
+ /* we do not support '!' for function probes */
+ if (WARN_ON(not))
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 9d5b78aad4c5..f55c15ab6f93 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -217,7 +217,7 @@ config ENABLE_MUST_CHECK
+ config FRAME_WARN
+ int "Warn for stack frames larger than (needs gcc 4.4)"
+ range 0 8192
+- default 0 if KASAN
++ default 3072 if KASAN_EXTRA
+ default 2048 if GCC_PLUGIN_LATENT_ENTROPY
+ default 1280 if (!64BIT && PARISC)
+ default 1024 if (!64BIT && !PARISC)
+diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
+index bd38aab05929..3d35d062970d 100644
+--- a/lib/Kconfig.kasan
++++ b/lib/Kconfig.kasan
+@@ -20,6 +20,17 @@ config KASAN
+ Currently CONFIG_KASAN doesn't work with CONFIG_DEBUG_SLAB
+ (the resulting kernel does not boot).
+
++config KASAN_EXTRA
++ bool "KAsan: extra checks"
++ depends on KASAN && DEBUG_KERNEL && !COMPILE_TEST
++ help
++ This enables further checks in the kernel address sanitizer. For now
++ it only includes the address-use-after-scope check, which can lead
++ to excessive kernel stack usage, frame size warnings and longer
++ compile time.
++ See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81715 for more details.
++
+ choice
+ prompt "Instrumentation type"
+ depends on KASAN
+diff --git a/lib/ubsan.c b/lib/ubsan.c
+index fb0409df1bcf..50d1d5c25deb 100644
+--- a/lib/ubsan.c
++++ b/lib/ubsan.c
+@@ -265,14 +265,14 @@ void __ubsan_handle_divrem_overflow(struct overflow_data *data,
+ }
+ EXPORT_SYMBOL(__ubsan_handle_divrem_overflow);
+
+-static void handle_null_ptr_deref(struct type_mismatch_data *data)
++static void handle_null_ptr_deref(struct type_mismatch_data_common *data)
+ {
+ unsigned long flags;
+
+- if (suppress_report(&data->location))
++ if (suppress_report(data->location))
+ return;
+
+- ubsan_prologue(&data->location, &flags);
++ ubsan_prologue(data->location, &flags);
+
+ pr_err("%s null pointer of type %s\n",
+ type_check_kinds[data->type_check_kind],
+@@ -281,15 +281,15 @@ static void handle_null_ptr_deref(struct type_mismatch_data *data)
+ ubsan_epilogue(&flags);
+ }
+
+-static void handle_missaligned_access(struct type_mismatch_data *data,
++static void handle_misaligned_access(struct type_mismatch_data_common *data,
+ unsigned long ptr)
+ {
+ unsigned long flags;
+
+- if (suppress_report(&data->location))
++ if (suppress_report(data->location))
+ return;
+
+- ubsan_prologue(&data->location, &flags);
++ ubsan_prologue(data->location, &flags);
+
+ pr_err("%s misaligned address %p for type %s\n",
+ type_check_kinds[data->type_check_kind],
+@@ -299,15 +299,15 @@ static void handle_missaligned_access(struct type_mismatch_data *data,
+ ubsan_epilogue(&flags);
+ }
+
+-static void handle_object_size_mismatch(struct type_mismatch_data *data,
++static void handle_object_size_mismatch(struct type_mismatch_data_common *data,
+ unsigned long ptr)
+ {
+ unsigned long flags;
+
+- if (suppress_report(&data->location))
++ if (suppress_report(data->location))
+ return;
+
+- ubsan_prologue(&data->location, &flags);
++ ubsan_prologue(data->location, &flags);
+ pr_err("%s address %p with insufficient space\n",
+ type_check_kinds[data->type_check_kind],
+ (void *) ptr);
+@@ -315,19 +315,47 @@ static void handle_object_size_mismatch(struct type_mismatch_data *data,
+ ubsan_epilogue(&flags);
+ }
+
+-void __ubsan_handle_type_mismatch(struct type_mismatch_data *data,
++static void ubsan_type_mismatch_common(struct type_mismatch_data_common *data,
+ unsigned long ptr)
+ {
+
+ if (!ptr)
+ handle_null_ptr_deref(data);
+ else if (data->alignment && !IS_ALIGNED(ptr, data->alignment))
+- handle_missaligned_access(data, ptr);
++ handle_misaligned_access(data, ptr);
+ else
+ handle_object_size_mismatch(data, ptr);
+ }
++
++void __ubsan_handle_type_mismatch(struct type_mismatch_data *data,
++ unsigned long ptr)
++{
++ struct type_mismatch_data_common common_data = {
++ .location = &data->location,
++ .type = data->type,
++ .alignment = data->alignment,
++ .type_check_kind = data->type_check_kind
++ };
++
++ ubsan_type_mismatch_common(&common_data, ptr);
++}
+ EXPORT_SYMBOL(__ubsan_handle_type_mismatch);
+
++void __ubsan_handle_type_mismatch_v1(struct type_mismatch_data_v1 *data,
++ unsigned long ptr)
++{
++
++ struct type_mismatch_data_common common_data = {
++ .location = &data->location,
++ .type = data->type,
++ .alignment = 1UL << data->log_alignment,
++ .type_check_kind = data->type_check_kind
++ };
++
++ ubsan_type_mismatch_common(&common_data, ptr);
++}
++EXPORT_SYMBOL(__ubsan_handle_type_mismatch_v1);
++
+ void __ubsan_handle_nonnull_return(struct nonnull_return_data *data)
+ {
+ unsigned long flags;
+diff --git a/lib/ubsan.h b/lib/ubsan.h
+index 88f23557edbe..7e30b26497e0 100644
+--- a/lib/ubsan.h
++++ b/lib/ubsan.h
+@@ -37,6 +37,20 @@ struct type_mismatch_data {
+ unsigned char type_check_kind;
+ };
+
++struct type_mismatch_data_v1 {
++ struct source_location location;
++ struct type_descriptor *type;
++ unsigned char log_alignment;
++ unsigned char type_check_kind;
++};
++
++struct type_mismatch_data_common {
++ struct source_location *location;
++ struct type_descriptor *type;
++ unsigned long alignment;
++ unsigned char type_check_kind;
++};
++
+ struct nonnull_arg_data {
+ struct source_location location;
+ struct source_location attr_location;
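Editorial note: the ubsan changes above adapt two compiler-emitted report layouts — the old one storing a byte-count alignment, the v1 one storing `log2(alignment)` in a single byte — into one `type_mismatch_data_common` struct before the shared handler runs. The conversion step can be sketched with the structs reduced to the field that differs:

```c
struct mismatch_v1 { unsigned char log_alignment; };
struct mismatch_common { unsigned long alignment; };

/* Mirror of the widening done in __ubsan_handle_type_mismatch_v1():
 * v1 stores log2(alignment), the common struct stores a byte count. */
static struct mismatch_common common_from_v1(const struct mismatch_v1 *data)
{
	return (struct mismatch_common){
		.alignment = 1UL << data->log_alignment,
	};
}

static unsigned long v1_alignment(unsigned char log_alignment)
{
	struct mismatch_v1 d = { log_alignment };

	return common_from_v1(&d).alignment;
}
```

Storing the exponent lets the compiler pack the v1 record into a byte while the handler still sees the same alignment value as before.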
+diff --git a/scripts/Makefile.kasan b/scripts/Makefile.kasan
+index 1ce7115aa499..97a56c0b565a 100644
+--- a/scripts/Makefile.kasan
++++ b/scripts/Makefile.kasan
+@@ -30,5 +30,10 @@ else
+ endif
+ endif
+
++ifdef CONFIG_KASAN_EXTRA
+ CFLAGS_KASAN += $(call cc-option, -fsanitize-address-use-after-scope)
+ endif
++
++CFLAGS_KASAN_NOSANITIZE := -fno-builtin
++
++endif
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index 1ca4dcd2d500..015aa9dbad86 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -121,7 +121,7 @@ endif
+ ifeq ($(CONFIG_KASAN),y)
+ _c_flags += $(if $(patsubst n%,, \
+ $(KASAN_SANITIZE_$(basetarget).o)$(KASAN_SANITIZE)y), \
+- $(CFLAGS_KASAN))
++ $(CFLAGS_KASAN), $(CFLAGS_KASAN_NOSANITIZE))
+ endif
+
+ ifeq ($(CONFIG_UBSAN),y)
+diff --git a/sound/soc/intel/skylake/skl-nhlt.c b/sound/soc/intel/skylake/skl-nhlt.c
+index 3eaac41090ca..26b0a5caea5a 100644
+--- a/sound/soc/intel/skylake/skl-nhlt.c
++++ b/sound/soc/intel/skylake/skl-nhlt.c
+@@ -43,7 +43,8 @@ struct nhlt_acpi_table *skl_nhlt_init(struct device *dev)
+ obj = acpi_evaluate_dsm(handle, &osc_guid, 1, 1, NULL);
+ if (obj && obj->type == ACPI_TYPE_BUFFER) {
+ nhlt_ptr = (struct nhlt_resource_desc *)obj->buffer.pointer;
+- nhlt_table = (struct nhlt_acpi_table *)
++ if (nhlt_ptr->length)
++ nhlt_table = (struct nhlt_acpi_table *)
+ memremap(nhlt_ptr->min_addr, nhlt_ptr->length,
+ MEMREMAP_WB);
+ ACPI_FREE(obj);
+diff --git a/sound/soc/rockchip/rockchip_i2s.c b/sound/soc/rockchip/rockchip_i2s.c
+index 908211e1d6fc..eb27f6c24bf7 100644
+--- a/sound/soc/rockchip/rockchip_i2s.c
++++ b/sound/soc/rockchip/rockchip_i2s.c
+@@ -504,6 +504,7 @@ static bool rockchip_i2s_rd_reg(struct device *dev, unsigned int reg)
+ case I2S_INTCR:
+ case I2S_XFER:
+ case I2S_CLR:
++ case I2S_TXDR:
+ case I2S_RXDR:
+ case I2S_FIFOLR:
+ case I2S_INTSR:
+@@ -518,6 +519,9 @@ static bool rockchip_i2s_volatile_reg(struct device *dev, unsigned int reg)
+ switch (reg) {
+ case I2S_INTSR:
+ case I2S_CLR:
++ case I2S_FIFOLR:
++ case I2S_TXDR:
++ case I2S_RXDR:
+ return true;
+ default:
+ return false;
+@@ -527,6 +531,8 @@ static bool rockchip_i2s_volatile_reg(struct device *dev, unsigned int reg)
+ static bool rockchip_i2s_precious_reg(struct device *dev, unsigned int reg)
+ {
+ switch (reg) {
++ case I2S_RXDR:
++ return true;
+ default:
+ return false;
+ }
+diff --git a/sound/soc/soc-acpi.c b/sound/soc/soc-acpi.c
+index f21df28bc28e..d4dd2efea45e 100644
+--- a/sound/soc/soc-acpi.c
++++ b/sound/soc/soc-acpi.c
+@@ -84,11 +84,9 @@ snd_soc_acpi_find_machine(struct snd_soc_acpi_mach *machines)
+
+ for (mach = machines; mach->id[0]; mach++) {
+ if (snd_soc_acpi_check_hid(mach->id) == true) {
+- if (mach->machine_quirk == NULL)
+- return mach;
+-
+- if (mach->machine_quirk(mach) != NULL)
+- return mach;
++ if (mach->machine_quirk)
++ mach = mach->machine_quirk(mach);
++ return mach;
+ }
+ }
+ return NULL;
+diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
+index d9b1e6417fb9..1507117d1185 100644
+--- a/sound/soc/soc-compress.c
++++ b/sound/soc/soc-compress.c
+@@ -944,7 +944,7 @@ static int soc_compr_copy(struct snd_compr_stream *cstream,
+ struct snd_soc_platform *platform = rtd->platform;
+ struct snd_soc_component *component;
+ struct snd_soc_rtdcom_list *rtdcom;
+- int ret = 0, __ret;
++ int ret = 0;
+
+ mutex_lock_nested(&rtd->pcm_mutex, rtd->pcm_subclass);
+
+@@ -965,10 +965,10 @@ static int soc_compr_copy(struct snd_compr_stream *cstream,
+ !component->driver->compr_ops->copy)
+ continue;
+
+- __ret = component->driver->compr_ops->copy(cstream, buf, count);
+- if (__ret < 0)
+- ret = __ret;
++ ret = component->driver->compr_ops->copy(cstream, buf, count);
++ break;
+ }
++
+ err:
+ mutex_unlock(&rtd->pcm_mutex);
+ return ret;
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 9cd028aa1509..2e458eb45586 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -851,8 +851,14 @@ static int add_switch_table(struct objtool_file *file, struct symbol *func,
+ * This is a fairly uncommon pattern which is new for GCC 6. As of this
+ * writing, there are 11 occurrences of it in the allmodconfig kernel.
+ *
++ * As of GCC 7 there are quite a few more of these and the 'in between' code
++ * is significant. Esp. with KASAN enabled some of the code between the mov
++ * and jmpq uses .rodata itself, which can confuse things.
++ *
+ * TODO: Once we have DWARF CFI and smarter instruction decoding logic,
+ * ensure the same register is used in the mov and jump instructions.
++ *
++ * NOTE: RETPOLINE made it harder still to decode dynamic jumps.
+ */
+ static struct rela *find_switch_table(struct objtool_file *file,
+ struct symbol *func,
+@@ -874,12 +880,25 @@ static struct rela *find_switch_table(struct objtool_file *file,
+ text_rela->addend + 4);
+ if (!rodata_rela)
+ return NULL;
++
+ file->ignore_unreachables = true;
+ return rodata_rela;
+ }
+
+ /* case 3 */
+- func_for_each_insn_continue_reverse(file, func, insn) {
++ /*
++ * Backward search using the @first_jump_src links; these help avoid
++ * much of the 'in between' code, which keeps us from getting confused
++ * by it.
++ */
++ for (insn = list_prev_entry(insn, list);
++
++ &insn->list != &file->insn_list &&
++ insn->sec == func->sec &&
++ insn->offset >= func->offset;
++
++ insn = insn->first_jump_src ?: list_prev_entry(insn, list)) {
++
+ if (insn->type == INSN_JUMP_DYNAMIC)
+ break;
+
+@@ -909,14 +928,32 @@ static struct rela *find_switch_table(struct objtool_file *file,
+ return NULL;
+ }
+
++
+ static int add_func_switch_tables(struct objtool_file *file,
+ struct symbol *func)
+ {
+- struct instruction *insn, *prev_jump = NULL;
++ struct instruction *insn, *last = NULL, *prev_jump = NULL;
+ struct rela *rela, *prev_rela = NULL;
+ int ret;
+
+ func_for_each_insn(file, func, insn) {
++ if (!last)
++ last = insn;
++
++ /*
++ * Store back-pointers for unconditional forward jumps such
++ * that find_switch_table() can back-track using those and
++ * avoid some potentially confusing code.
++ */
++ if (insn->type == INSN_JUMP_UNCONDITIONAL && insn->jump_dest &&
++ insn->offset > last->offset &&
++ insn->jump_dest->offset > insn->offset &&
++ !insn->jump_dest->first_jump_src) {
++
++ insn->jump_dest->first_jump_src = insn;
++ last = insn->jump_dest;
++ }
++
+ if (insn->type != INSN_JUMP_DYNAMIC)
+ continue;
+
+diff --git a/tools/objtool/check.h b/tools/objtool/check.h
+index dbadb304a410..23a1d065cae1 100644
+--- a/tools/objtool/check.h
++++ b/tools/objtool/check.h
+@@ -47,6 +47,7 @@ struct instruction {
+ bool alt_group, visited, dead_end, ignore, hint, save, restore, ignore_alts;
+ struct symbol *call_dest;
+ struct instruction *jump_dest;
++ struct instruction *first_jump_src;
+ struct list_head alts;
+ struct symbol *func;
+ struct stack_op stack_op;
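Editorial note: the new `first_jump_src` back-pointer above lets `find_switch_table()` hop from an instruction directly to the earliest unconditional jump targeting it, instead of stepping through every intervening instruction. A simplified model of that backward walk (list pointers reduced to a plain `prev`):

```c
#include <stddef.h>

struct insn {
	struct insn *prev;
	struct insn *first_jump_src;	/* earliest unconditional jump to here */
	int offset;
};

/* Walk backwards, preferring the jump-source shortcut when present. */
static int steps_back_to(struct insn *from, int target_offset)
{
	struct insn *i = from;
	int steps = 0;

	while (i && i->offset != target_offset) {
		i = i->first_jump_src ? i->first_jump_src : i->prev;
		steps++;
	}
	return i ? steps : -1;
}

static int walk_with_shortcut(void)
{
	struct insn a = { NULL, NULL, 0 };
	struct insn b = { &a, NULL, 10 };
	struct insn c = { &b, NULL, 20 };
	struct insn d = { &c, &a, 30 };	/* a was the first jump source of d */

	return steps_back_to(&d, 0);
}

static int walk_without_shortcut(void)
{
	struct insn a = { NULL, NULL, 0 };
	struct insn b = { &a, NULL, 10 };
	struct insn c = { &b, NULL, 20 };
	struct insn d = { &c, NULL, 30 };

	return steps_back_to(&d, 0);
}
```

Skipping the in-between code this way is what keeps KASAN-bloated GCC 7 output from confusing the switch-table search.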
+diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
+index 2e43f9d42bd5..9a866459bff4 100644
+--- a/virt/kvm/arm/arm.c
++++ b/virt/kvm/arm/arm.c
+@@ -31,6 +31,7 @@
+ #include <linux/irqbypass.h>
+ #include <trace/events/kvm.h>
+ #include <kvm/arm_pmu.h>
++#include <kvm/arm_psci.h>
+
+ #define CREATE_TRACE_POINTS
+ #include "trace.h"
+@@ -46,7 +47,6 @@
+ #include <asm/kvm_mmu.h>
+ #include <asm/kvm_emulate.h>
+ #include <asm/kvm_coproc.h>
+-#include <asm/kvm_psci.h>
+ #include <asm/sections.h>
+
+ #ifdef REQUIRES_VIRT
+@@ -1158,7 +1158,7 @@ static void cpu_init_hyp_mode(void *dummy)
+ pgd_ptr = kvm_mmu_get_httbr();
+ stack_page = __this_cpu_read(kvm_arm_hyp_stack_page);
+ hyp_stack_ptr = stack_page + PAGE_SIZE;
+- vector_ptr = (unsigned long)kvm_ksym_ref(__kvm_hyp_vector);
++ vector_ptr = (unsigned long)kvm_get_hyp_vector();
+
+ __cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
+ __cpu_init_stage2();
+@@ -1239,6 +1239,7 @@ static int hyp_init_cpu_pm_notifier(struct notifier_block *self,
+ cpu_hyp_reset();
+
+ return NOTIFY_OK;
++ case CPU_PM_ENTER_FAILED:
+ case CPU_PM_EXIT:
+ if (__this_cpu_read(kvm_arm_hardware_enabled))
+ /* The hardware was enabled before suspend. */
+@@ -1403,6 +1404,12 @@ static int init_hyp_mode(void)
+ goto out_err;
+ }
+
++ err = kvm_map_vectors();
++ if (err) {
++ kvm_err("Cannot map vectors\n");
++ goto out_err;
++ }
++
+ /*
+ * Map the Hyp stack pages
+ */
+diff --git a/virt/kvm/arm/psci.c b/virt/kvm/arm/psci.c
+index f1e363bab5e8..6919352cbf15 100644
+--- a/virt/kvm/arm/psci.c
++++ b/virt/kvm/arm/psci.c
+@@ -15,16 +15,16 @@
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
++#include <linux/arm-smccc.h>
+ #include <linux/preempt.h>
+ #include <linux/kvm_host.h>
+ #include <linux/wait.h>
+
+ #include <asm/cputype.h>
+ #include <asm/kvm_emulate.h>
+-#include <asm/kvm_psci.h>
+ #include <asm/kvm_host.h>
+
+-#include <uapi/linux/psci.h>
++#include <kvm/arm_psci.h>
+
+ /*
+ * This is an implementation of the Power State Coordination Interface
+@@ -33,6 +33,38 @@
+
+ #define AFFINITY_MASK(level) ~((0x1UL << ((level) * MPIDR_LEVEL_BITS)) - 1)
+
++static u32 smccc_get_function(struct kvm_vcpu *vcpu)
++{
++ return vcpu_get_reg(vcpu, 0);
++}
++
++static unsigned long smccc_get_arg1(struct kvm_vcpu *vcpu)
++{
++ return vcpu_get_reg(vcpu, 1);
++}
++
++static unsigned long smccc_get_arg2(struct kvm_vcpu *vcpu)
++{
++ return vcpu_get_reg(vcpu, 2);
++}
++
++static unsigned long smccc_get_arg3(struct kvm_vcpu *vcpu)
++{
++ return vcpu_get_reg(vcpu, 3);
++}
++
++static void smccc_set_retval(struct kvm_vcpu *vcpu,
++ unsigned long a0,
++ unsigned long a1,
++ unsigned long a2,
++ unsigned long a3)
++{
++ vcpu_set_reg(vcpu, 0, a0);
++ vcpu_set_reg(vcpu, 1, a1);
++ vcpu_set_reg(vcpu, 2, a2);
++ vcpu_set_reg(vcpu, 3, a3);
++}
++
+ static unsigned long psci_affinity_mask(unsigned long affinity_level)
+ {
+ if (affinity_level <= 3)
+@@ -78,7 +110,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
+ unsigned long context_id;
+ phys_addr_t target_pc;
+
+- cpu_id = vcpu_get_reg(source_vcpu, 1) & MPIDR_HWID_BITMASK;
++ cpu_id = smccc_get_arg1(source_vcpu) & MPIDR_HWID_BITMASK;
+ if (vcpu_mode_is_32bit(source_vcpu))
+ cpu_id &= ~((u32) 0);
+
+@@ -91,14 +123,14 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
+ if (!vcpu)
+ return PSCI_RET_INVALID_PARAMS;
+ if (!vcpu->arch.power_off) {
+- if (kvm_psci_version(source_vcpu) != KVM_ARM_PSCI_0_1)
++ if (kvm_psci_version(source_vcpu, kvm) != KVM_ARM_PSCI_0_1)
+ return PSCI_RET_ALREADY_ON;
+ else
+ return PSCI_RET_INVALID_PARAMS;
+ }
+
+- target_pc = vcpu_get_reg(source_vcpu, 2);
+- context_id = vcpu_get_reg(source_vcpu, 3);
++ target_pc = smccc_get_arg2(source_vcpu);
++ context_id = smccc_get_arg3(source_vcpu);
+
+ kvm_reset_vcpu(vcpu);
+
+@@ -117,7 +149,7 @@ static unsigned long kvm_psci_vcpu_on(struct kvm_vcpu *source_vcpu)
+ * NOTE: We always update r0 (or x0) because for PSCI v0.1
+ * the general purpose registers are undefined upon CPU_ON.
+ */
+- vcpu_set_reg(vcpu, 0, context_id);
++ smccc_set_retval(vcpu, context_id, 0, 0, 0);
+ vcpu->arch.power_off = false;
+ smp_mb(); /* Make sure the above is visible */
+
+@@ -137,8 +169,8 @@ static unsigned long kvm_psci_vcpu_affinity_info(struct kvm_vcpu *vcpu)
+ struct kvm *kvm = vcpu->kvm;
+ struct kvm_vcpu *tmp;
+
+- target_affinity = vcpu_get_reg(vcpu, 1);
+- lowest_affinity_level = vcpu_get_reg(vcpu, 2);
++ target_affinity = smccc_get_arg1(vcpu);
++ lowest_affinity_level = smccc_get_arg2(vcpu);
+
+ /* Determine target affinity mask */
+ target_affinity_mask = psci_affinity_mask(lowest_affinity_level);
+@@ -200,18 +232,10 @@ static void kvm_psci_system_reset(struct kvm_vcpu *vcpu)
+ kvm_prepare_system_event(vcpu, KVM_SYSTEM_EVENT_RESET);
+ }
+
+-int kvm_psci_version(struct kvm_vcpu *vcpu)
+-{
+- if (test_bit(KVM_ARM_VCPU_PSCI_0_2, vcpu->arch.features))
+- return KVM_ARM_PSCI_0_2;
+-
+- return KVM_ARM_PSCI_0_1;
+-}
+-
+ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
+ {
+ struct kvm *kvm = vcpu->kvm;
+- unsigned long psci_fn = vcpu_get_reg(vcpu, 0) & ~((u32) 0);
++ u32 psci_fn = smccc_get_function(vcpu);
+ unsigned long val;
+ int ret = 1;
+
+@@ -221,7 +245,7 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
+ * Bits[31:16] = Major Version = 0
+ * Bits[15:0] = Minor Version = 2
+ */
+- val = 2;
++ val = KVM_ARM_PSCI_0_2;
+ break;
+ case PSCI_0_2_FN_CPU_SUSPEND:
+ case PSCI_0_2_FN64_CPU_SUSPEND:
+@@ -278,14 +302,56 @@ static int kvm_psci_0_2_call(struct kvm_vcpu *vcpu)
+ break;
+ }
+
+- vcpu_set_reg(vcpu, 0, val);
++ smccc_set_retval(vcpu, val, 0, 0, 0);
++ return ret;
++}
++
++static int kvm_psci_1_0_call(struct kvm_vcpu *vcpu)
++{
++ u32 psci_fn = smccc_get_function(vcpu);
++ u32 feature;
++ unsigned long val;
++ int ret = 1;
++
++ switch(psci_fn) {
++ case PSCI_0_2_FN_PSCI_VERSION:
++ val = KVM_ARM_PSCI_1_0;
++ break;
++ case PSCI_1_0_FN_PSCI_FEATURES:
++ feature = smccc_get_arg1(vcpu);
++ switch(feature) {
++ case PSCI_0_2_FN_PSCI_VERSION:
++ case PSCI_0_2_FN_CPU_SUSPEND:
++ case PSCI_0_2_FN64_CPU_SUSPEND:
++ case PSCI_0_2_FN_CPU_OFF:
++ case PSCI_0_2_FN_CPU_ON:
++ case PSCI_0_2_FN64_CPU_ON:
++ case PSCI_0_2_FN_AFFINITY_INFO:
++ case PSCI_0_2_FN64_AFFINITY_INFO:
++ case PSCI_0_2_FN_MIGRATE_INFO_TYPE:
++ case PSCI_0_2_FN_SYSTEM_OFF:
++ case PSCI_0_2_FN_SYSTEM_RESET:
++ case PSCI_1_0_FN_PSCI_FEATURES:
++ case ARM_SMCCC_VERSION_FUNC_ID:
++ val = 0;
++ break;
++ default:
++ val = PSCI_RET_NOT_SUPPORTED;
++ break;
++ }
++ break;
++ default:
++ return kvm_psci_0_2_call(vcpu);
++ }
++
++ smccc_set_retval(vcpu, val, 0, 0, 0);
+ return ret;
+ }
+
+ static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
+ {
+ struct kvm *kvm = vcpu->kvm;
+- unsigned long psci_fn = vcpu_get_reg(vcpu, 0) & ~((u32) 0);
++ u32 psci_fn = smccc_get_function(vcpu);
+ unsigned long val;
+
+ switch (psci_fn) {
+@@ -303,7 +369,7 @@ static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
+ break;
+ }
+
+- vcpu_set_reg(vcpu, 0, val);
++ smccc_set_retval(vcpu, val, 0, 0, 0);
+ return 1;
+ }
+
+@@ -321,9 +387,11 @@ static int kvm_psci_0_1_call(struct kvm_vcpu *vcpu)
+ * Errors:
+ * -EINVAL: Unrecognized PSCI function
+ */
+-int kvm_psci_call(struct kvm_vcpu *vcpu)
++static int kvm_psci_call(struct kvm_vcpu *vcpu)
+ {
+- switch (kvm_psci_version(vcpu)) {
++ switch (kvm_psci_version(vcpu, vcpu->kvm)) {
++ case KVM_ARM_PSCI_1_0:
++ return kvm_psci_1_0_call(vcpu);
+ case KVM_ARM_PSCI_0_2:
+ return kvm_psci_0_2_call(vcpu);
+ case KVM_ARM_PSCI_0_1:
+@@ -332,3 +400,30 @@ int kvm_psci_call(struct kvm_vcpu *vcpu)
+ return -EINVAL;
+ };
+ }
++
++int kvm_hvc_call_handler(struct kvm_vcpu *vcpu)
++{
++ u32 func_id = smccc_get_function(vcpu);
++ u32 val = PSCI_RET_NOT_SUPPORTED;
++ u32 feature;
++
++ switch (func_id) {
++ case ARM_SMCCC_VERSION_FUNC_ID:
++ val = ARM_SMCCC_VERSION_1_1;
++ break;
++ case ARM_SMCCC_ARCH_FEATURES_FUNC_ID:
++ feature = smccc_get_arg1(vcpu);
++ switch(feature) {
++ case ARM_SMCCC_ARCH_WORKAROUND_1:
++ if (kvm_arm_harden_branch_predictor())
++ val = 0;
++ break;
++ }
++ break;
++ default:
++ return kvm_psci_call(vcpu);
++ }
++
++ smccc_set_retval(vcpu, val, 0, 0, 0);
++ return 1;
++}
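The `kvm_hvc_call_handler()` hunk above dispatches SMCCC fast calls: version queries and feature probes are answered inline, and anything unrecognized falls through to the PSCI handler. A minimal standalone sketch of that dispatch pattern, with hypothetical stand-in types (`struct fake_vcpu` replaces the real guest-register plumbing, and the unhandled default path is modelled as NOT_SUPPORTED rather than calling into PSCI):

```c
/* Hypothetical, simplified sketch of the SMCCC fast-call dispatch
 * pattern introduced by kvm_hvc_call_handler() above.  The guest
 * register accessors are replaced by a plain struct so the control
 * flow can be compiled and followed in isolation. */
#include <assert.h>
#include <stdint.h>

#define ARM_SMCCC_VERSION_FUNC_ID        0x80000000U
#define ARM_SMCCC_ARCH_FEATURES_FUNC_ID  0x80000001U
#define ARM_SMCCC_ARCH_WORKAROUND_1      0x80008000U
#define ARM_SMCCC_VERSION_1_1            0x10001
#define PSCI_RET_NOT_SUPPORTED           (-1)

struct fake_vcpu {
    uint32_t regs[4];   /* x0..x3: function ID, then arguments */
    int64_t  retval;    /* what smccc_set_retval() would put in x0 */
    int      hardened;  /* host has branch-predictor hardening? */
};

/* Mirrors the switch in kvm_hvc_call_handler(): version and feature
 * probes are answered directly; in the real code the default case
 * returns kvm_psci_call(vcpu) instead of falling through. */
static int handle_hvc(struct fake_vcpu *vcpu)
{
    uint32_t func_id = vcpu->regs[0];
    int64_t val = PSCI_RET_NOT_SUPPORTED;

    switch (func_id) {
    case ARM_SMCCC_VERSION_FUNC_ID:
        val = ARM_SMCCC_VERSION_1_1;
        break;
    case ARM_SMCCC_ARCH_FEATURES_FUNC_ID:
        /* Advertise WORKAROUND_1 only when the host mitigates it. */
        if (vcpu->regs[1] == ARM_SMCCC_ARCH_WORKAROUND_1 &&
            vcpu->hardened)
            val = 0;
        break;
    default:
        /* Real code: return kvm_psci_call(vcpu); */
        break;
    }
    vcpu->retval = val;
    return 1;   /* resume the guest */
}
```

This is why the patch also converts the PSCI code from `vcpu_get_reg()`/`vcpu_set_reg()` to the `smccc_get_*()`/`smccc_set_retval()` helpers: both the SMCCC and PSCI paths then share one calling-convention layer.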
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
From: Mike Pagano @ 2018-02-22 23:24 UTC
To: gentoo-commits
commit: 0efeda7197763f237f87dab43a52676839e87f2d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 22 23:24:31 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Feb 22 23:24:31 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0efeda71
Linux patch 4.15.5
0000_README | 4 +
1004_linux-4.15.5.patch | 6693 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6697 insertions(+)
diff --git a/0000_README b/0000_README
index ffe8729..f22a6fe 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch: 1003_linux-4.15.4.patch
From: http://www.kernel.org
Desc: Linux 4.15.4
+Patch: 1004_linux-4.15.5.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.5
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1004_linux-4.15.5.patch b/1004_linux-4.15.5.patch
new file mode 100644
index 0000000..5340f07
--- /dev/null
+++ b/1004_linux-4.15.5.patch
@@ -0,0 +1,6693 @@
+diff --git a/Documentation/devicetree/bindings/dma/snps-dma.txt b/Documentation/devicetree/bindings/dma/snps-dma.txt
+index a122723907ac..99acc712f83a 100644
+--- a/Documentation/devicetree/bindings/dma/snps-dma.txt
++++ b/Documentation/devicetree/bindings/dma/snps-dma.txt
+@@ -64,6 +64,6 @@ Example:
+ reg = <0xe0000000 0x1000>;
+ interrupts = <0 35 0x4>;
+ dmas = <&dmahost 12 0 1>,
+- <&dmahost 13 0 1 0>;
++ <&dmahost 13 1 0>;
+ dma-names = "rx", "rx";
+ };
+diff --git a/Documentation/filesystems/ext4.txt b/Documentation/filesystems/ext4.txt
+index 75236c0c2ac2..d081ce0482cc 100644
+--- a/Documentation/filesystems/ext4.txt
++++ b/Documentation/filesystems/ext4.txt
+@@ -233,7 +233,7 @@ data_err=ignore(*) Just print an error message if an error occurs
+ data_err=abort Abort the journal if an error occurs in a file
+ data buffer in ordered mode.
+
+-grpid Give objects the same group ID as their creator.
++grpid New objects have the group ID of their parent.
+ bsdgroups
+
+ nogrpid (*) New objects have the group ID of their creator.
+diff --git a/Makefile b/Makefile
+index 8495e1ca052e..28c537fbe328 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arm/boot/dts/arm-realview-eb-mp.dtsi b/arch/arm/boot/dts/arm-realview-eb-mp.dtsi
+index 7b8d90b7aeea..29b636fce23f 100644
+--- a/arch/arm/boot/dts/arm-realview-eb-mp.dtsi
++++ b/arch/arm/boot/dts/arm-realview-eb-mp.dtsi
+@@ -150,11 +150,6 @@
+ interrupts = <0 8 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+-&charlcd {
+- interrupt-parent = <&intc>;
+- interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
+-};
+-
+ &serial0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 4 IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm/boot/dts/exynos5410.dtsi b/arch/arm/boot/dts/exynos5410.dtsi
+index 06713ec86f0d..d2174727df9a 100644
+--- a/arch/arm/boot/dts/exynos5410.dtsi
++++ b/arch/arm/boot/dts/exynos5410.dtsi
+@@ -333,7 +333,6 @@
+ &rtc {
+ clocks = <&clock CLK_RTC>;
+ clock-names = "rtc";
+- interrupt-parent = <&pmu_system_controller>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm/boot/dts/lpc3250-ea3250.dts b/arch/arm/boot/dts/lpc3250-ea3250.dts
+index c43adb7b4d7c..58ea0a4e7afa 100644
+--- a/arch/arm/boot/dts/lpc3250-ea3250.dts
++++ b/arch/arm/boot/dts/lpc3250-ea3250.dts
+@@ -156,8 +156,8 @@
+ uda1380: uda1380@18 {
+ compatible = "nxp,uda1380";
+ reg = <0x18>;
+- power-gpio = <&gpio 0x59 0>;
+- reset-gpio = <&gpio 0x51 0>;
++ power-gpio = <&gpio 3 10 0>;
++ reset-gpio = <&gpio 3 2 0>;
+ dac-clk = "wspll";
+ };
+
+diff --git a/arch/arm/boot/dts/lpc3250-phy3250.dts b/arch/arm/boot/dts/lpc3250-phy3250.dts
+index c72eb9845603..1e1c2f517a82 100644
+--- a/arch/arm/boot/dts/lpc3250-phy3250.dts
++++ b/arch/arm/boot/dts/lpc3250-phy3250.dts
+@@ -81,8 +81,8 @@
+ uda1380: uda1380@18 {
+ compatible = "nxp,uda1380";
+ reg = <0x18>;
+- power-gpio = <&gpio 0x59 0>;
+- reset-gpio = <&gpio 0x51 0>;
++ power-gpio = <&gpio 3 10 0>;
++ reset-gpio = <&gpio 3 2 0>;
+ dac-clk = "wspll";
+ };
+
+diff --git a/arch/arm/boot/dts/mt2701.dtsi b/arch/arm/boot/dts/mt2701.dtsi
+index 965ddfbc9953..05557fce0f1d 100644
+--- a/arch/arm/boot/dts/mt2701.dtsi
++++ b/arch/arm/boot/dts/mt2701.dtsi
+@@ -604,6 +604,7 @@
+ compatible = "mediatek,mt2701-hifsys", "syscon";
+ reg = <0 0x1a000000 0 0x1000>;
+ #clock-cells = <1>;
++ #reset-cells = <1>;
+ };
+
+ usb0: usb@1a1c0000 {
+@@ -688,6 +689,7 @@
+ compatible = "mediatek,mt2701-ethsys", "syscon";
+ reg = <0 0x1b000000 0 0x1000>;
+ #clock-cells = <1>;
++ #reset-cells = <1>;
+ };
+
+ eth: ethernet@1b100000 {
+diff --git a/arch/arm/boot/dts/mt7623.dtsi b/arch/arm/boot/dts/mt7623.dtsi
+index 0640fb75bf59..3a442a16ea06 100644
+--- a/arch/arm/boot/dts/mt7623.dtsi
++++ b/arch/arm/boot/dts/mt7623.dtsi
+@@ -758,6 +758,7 @@
+ "syscon";
+ reg = <0 0x1b000000 0 0x1000>;
+ #clock-cells = <1>;
++ #reset-cells = <1>;
+ };
+
+ eth: ethernet@1b100000 {
+diff --git a/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts b/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
+index 688a86378cee..7bf5aa2237c9 100644
+--- a/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
++++ b/arch/arm/boot/dts/mt7623n-bananapi-bpi-r2.dts
+@@ -204,7 +204,7 @@
+ bus-width = <4>;
+ max-frequency = <50000000>;
+ cap-sd-highspeed;
+- cd-gpios = <&pio 261 0>;
++ cd-gpios = <&pio 261 GPIO_ACTIVE_LOW>;
+ vmmc-supply = <&mt6323_vmch_reg>;
+ vqmmc-supply = <&mt6323_vio18_reg>;
+ };
+diff --git a/arch/arm/boot/dts/s5pv210.dtsi b/arch/arm/boot/dts/s5pv210.dtsi
+index 726c5d0dbd5b..b290a5abb901 100644
+--- a/arch/arm/boot/dts/s5pv210.dtsi
++++ b/arch/arm/boot/dts/s5pv210.dtsi
+@@ -463,6 +463,7 @@
+ compatible = "samsung,exynos4210-ohci";
+ reg = <0xec300000 0x100>;
+ interrupts = <23>;
++ interrupt-parent = <&vic1>;
+ clocks = <&clocks CLK_USB_HOST>;
+ clock-names = "usbhost";
+ #address-cells = <1>;
+diff --git a/arch/arm/boot/dts/spear1310-evb.dts b/arch/arm/boot/dts/spear1310-evb.dts
+index 84101e4eebbf..0f5f379323a8 100644
+--- a/arch/arm/boot/dts/spear1310-evb.dts
++++ b/arch/arm/boot/dts/spear1310-evb.dts
+@@ -349,7 +349,7 @@
+ spi0: spi@e0100000 {
+ status = "okay";
+ num-cs = <3>;
+- cs-gpios = <&gpio1 7 0>, <&spics 0>, <&spics 1>;
++ cs-gpios = <&gpio1 7 0>, <&spics 0 0>, <&spics 1 0>;
+
+ stmpe610@0 {
+ compatible = "st,stmpe610";
+diff --git a/arch/arm/boot/dts/spear1340.dtsi b/arch/arm/boot/dts/spear1340.dtsi
+index 5f347054527d..d4dbc4098653 100644
+--- a/arch/arm/boot/dts/spear1340.dtsi
++++ b/arch/arm/boot/dts/spear1340.dtsi
+@@ -142,8 +142,8 @@
+ reg = <0xb4100000 0x1000>;
+ interrupts = <0 105 0x4>;
+ status = "disabled";
+- dmas = <&dwdma0 0x600 0 0 1>, /* 0xC << 11 */
+- <&dwdma0 0x680 0 1 0>; /* 0xD << 7 */
++ dmas = <&dwdma0 12 0 1>,
++ <&dwdma0 13 1 0>;
+ dma-names = "tx", "rx";
+ };
+
+diff --git a/arch/arm/boot/dts/spear13xx.dtsi b/arch/arm/boot/dts/spear13xx.dtsi
+index 17ea0abcdbd7..086b4b333249 100644
+--- a/arch/arm/boot/dts/spear13xx.dtsi
++++ b/arch/arm/boot/dts/spear13xx.dtsi
+@@ -100,7 +100,7 @@
+ reg = <0xb2800000 0x1000>;
+ interrupts = <0 29 0x4>;
+ status = "disabled";
+- dmas = <&dwdma0 0 0 0 0>;
++ dmas = <&dwdma0 0 0 0>;
+ dma-names = "data";
+ };
+
+@@ -290,8 +290,8 @@
+ #size-cells = <0>;
+ interrupts = <0 31 0x4>;
+ status = "disabled";
+- dmas = <&dwdma0 0x2000 0 0 0>, /* 0x4 << 11 */
+- <&dwdma0 0x0280 0 0 0>; /* 0x5 << 7 */
++ dmas = <&dwdma0 4 0 0>,
++ <&dwdma0 5 0 0>;
+ dma-names = "tx", "rx";
+ };
+
+diff --git a/arch/arm/boot/dts/spear600.dtsi b/arch/arm/boot/dts/spear600.dtsi
+index 6b32d20acc9f..00166eb9be86 100644
+--- a/arch/arm/boot/dts/spear600.dtsi
++++ b/arch/arm/boot/dts/spear600.dtsi
+@@ -194,6 +194,7 @@
+ rtc: rtc@fc900000 {
+ compatible = "st,spear600-rtc";
+ reg = <0xfc900000 0x1000>;
++ interrupt-parent = <&vic0>;
+ interrupts = <10>;
+ status = "disabled";
+ };
+diff --git a/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi b/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
+index 68aab50a73ab..733678b75b88 100644
+--- a/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
++++ b/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
+@@ -750,6 +750,7 @@
+ reg = <0x10120000 0x1000>;
+ interrupt-names = "combined";
+ interrupts = <14>;
++ interrupt-parent = <&vica>;
+ clocks = <&clcdclk>, <&hclkclcd>;
+ clock-names = "clcdclk", "apb_pclk";
+ status = "disabled";
+diff --git a/arch/arm/boot/dts/stih407.dtsi b/arch/arm/boot/dts/stih407.dtsi
+index fa149837df14..11fdecd9312e 100644
+--- a/arch/arm/boot/dts/stih407.dtsi
++++ b/arch/arm/boot/dts/stih407.dtsi
+@@ -8,6 +8,7 @@
+ */
+ #include "stih407-clock.dtsi"
+ #include "stih407-family.dtsi"
++#include <dt-bindings/gpio/gpio.h>
+ / {
+ soc {
+ sti-display-subsystem {
+@@ -122,7 +123,7 @@
+ <&clk_s_d2_quadfs 0>,
+ <&clk_s_d2_quadfs 1>;
+
+- hdmi,hpd-gpio = <&pio5 3>;
++ hdmi,hpd-gpio = <&pio5 3 GPIO_ACTIVE_LOW>;
+ reset-names = "hdmi";
+ resets = <&softreset STIH407_HDMI_TX_PHY_SOFTRESET>;
+ ddc = <&hdmiddc>;
+diff --git a/arch/arm/boot/dts/stih410.dtsi b/arch/arm/boot/dts/stih410.dtsi
+index cffa50db5d72..68b5ff91d6a7 100644
+--- a/arch/arm/boot/dts/stih410.dtsi
++++ b/arch/arm/boot/dts/stih410.dtsi
+@@ -9,6 +9,7 @@
+ #include "stih410-clock.dtsi"
+ #include "stih407-family.dtsi"
+ #include "stih410-pinctrl.dtsi"
++#include <dt-bindings/gpio/gpio.h>
+ / {
+ aliases {
+ bdisp0 = &bdisp0;
+@@ -213,7 +214,7 @@
+ <&clk_s_d2_quadfs 0>,
+ <&clk_s_d2_quadfs 1>;
+
+- hdmi,hpd-gpio = <&pio5 3>;
++ hdmi,hpd-gpio = <&pio5 3 GPIO_ACTIVE_LOW>;
+ reset-names = "hdmi";
+ resets = <&softreset STIH407_HDMI_TX_PHY_SOFTRESET>;
+ ddc = <&hdmiddc>;
+diff --git a/arch/arm/mach-pxa/tosa-bt.c b/arch/arm/mach-pxa/tosa-bt.c
+index 107f37210fb9..83606087edc7 100644
+--- a/arch/arm/mach-pxa/tosa-bt.c
++++ b/arch/arm/mach-pxa/tosa-bt.c
+@@ -132,3 +132,7 @@ static struct platform_driver tosa_bt_driver = {
+ },
+ };
+ module_platform_driver(tosa_bt_driver);
++
++MODULE_LICENSE("GPL");
++MODULE_AUTHOR("Dmitry Baryshkov");
++MODULE_DESCRIPTION("Bluetooth built-in chip control");
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 6b2127a6ced1..b84c0ca4f84a 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -906,6 +906,7 @@
+ "dsi_phy_regulator";
+
+ #clock-cells = <1>;
++ #phy-cells = <0>;
+
+ clocks = <&gcc GCC_MDSS_AHB_CLK>;
+ clock-names = "iface_clk";
+@@ -1435,8 +1436,8 @@
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+- qcom,ipc-1 = <&apcs 0 13>;
+- qcom,ipc-6 = <&apcs 0 19>;
++ qcom,ipc-1 = <&apcs 8 13>;
++ qcom,ipc-3 = <&apcs 8 19>;
+
+ apps_smsm: apps@0 {
+ reg = <0>;
+diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
+index 07823595b7f0..52f15cd896e1 100644
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -406,6 +406,15 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
+ .capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
+ MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR_V1),
+ },
++ {
++ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
++ MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
++ .enable = qcom_enable_link_stack_sanitization,
++ },
++ {
++ .capability = ARM64_HARDEN_BP_POST_GUEST_EXIT,
++ MIDR_ALL_VERSIONS(MIDR_QCOM_FALKOR),
++ },
+ {
+ .capability = ARM64_HARDEN_BRANCH_PREDICTOR,
+ MIDR_ALL_VERSIONS(MIDR_BRCM_VULCAN),
+diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
+index 0b5ab4d8b57d..30b5495b82b5 100644
+--- a/arch/arm64/kvm/hyp/switch.c
++++ b/arch/arm64/kvm/hyp/switch.c
+@@ -400,8 +400,10 @@ int __hyp_text __kvm_vcpu_run(struct kvm_vcpu *vcpu)
+ u32 midr = read_cpuid_id();
+
+ /* Apply BTAC predictors mitigation to all Falkor chips */
+- if ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1)
++ if (((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR) ||
++ ((midr & MIDR_CPU_MODEL_MASK) == MIDR_QCOM_FALKOR_V1)) {
+ __qcom_hyp_sanitize_btac_predictors();
++ }
+ }
+
+ fp_enabled = __fpsimd_enabled();
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index 08572f95bd8a..248f2e7b24ab 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -189,7 +189,8 @@ ENDPROC(idmap_cpu_replace_ttbr1)
+ dc cvac, cur_\()\type\()p // Ensure any existing dirty
+ dmb sy // lines are written back before
+ ldr \type, [cur_\()\type\()p] // loading the entry
+- tbz \type, #0, next_\()\type // Skip invalid entries
++ tbz \type, #0, skip_\()\type // Skip invalid and
++ tbnz \type, #11, skip_\()\type // non-global entries
+ .endm
+
+ .macro __idmap_kpti_put_pgtable_ent_ng, type
+@@ -249,8 +250,9 @@ ENTRY(idmap_kpti_install_ng_mappings)
+ add end_pgdp, cur_pgdp, #(PTRS_PER_PGD * 8)
+ do_pgd: __idmap_kpti_get_pgtable_ent pgd
+ tbnz pgd, #1, walk_puds
+- __idmap_kpti_put_pgtable_ent_ng pgd
+ next_pgd:
++ __idmap_kpti_put_pgtable_ent_ng pgd
++skip_pgd:
+ add cur_pgdp, cur_pgdp, #8
+ cmp cur_pgdp, end_pgdp
+ b.ne do_pgd
+@@ -278,8 +280,9 @@ walk_puds:
+ add end_pudp, cur_pudp, #(PTRS_PER_PUD * 8)
+ do_pud: __idmap_kpti_get_pgtable_ent pud
+ tbnz pud, #1, walk_pmds
+- __idmap_kpti_put_pgtable_ent_ng pud
+ next_pud:
++ __idmap_kpti_put_pgtable_ent_ng pud
++skip_pud:
+ add cur_pudp, cur_pudp, 8
+ cmp cur_pudp, end_pudp
+ b.ne do_pud
+@@ -298,8 +301,9 @@ walk_pmds:
+ add end_pmdp, cur_pmdp, #(PTRS_PER_PMD * 8)
+ do_pmd: __idmap_kpti_get_pgtable_ent pmd
+ tbnz pmd, #1, walk_ptes
+- __idmap_kpti_put_pgtable_ent_ng pmd
+ next_pmd:
++ __idmap_kpti_put_pgtable_ent_ng pmd
++skip_pmd:
+ add cur_pmdp, cur_pmdp, #8
+ cmp cur_pmdp, end_pmdp
+ b.ne do_pmd
+@@ -317,7 +321,7 @@ walk_ptes:
+ add end_ptep, cur_ptep, #(PTRS_PER_PTE * 8)
+ do_pte: __idmap_kpti_get_pgtable_ent pte
+ __idmap_kpti_put_pgtable_ent_ng pte
+-next_pte:
++skip_pte:
+ add cur_ptep, cur_ptep, #8
+ cmp cur_ptep, end_ptep
+ b.ne do_pte
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 8e0b3702f1c0..efaa3b130f4d 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -119,12 +119,12 @@ config MIPS_GENERIC
+ select SYS_SUPPORTS_MULTITHREADING
+ select SYS_SUPPORTS_RELOCATABLE
+ select SYS_SUPPORTS_SMARTMIPS
+- select USB_EHCI_BIG_ENDIAN_DESC if BIG_ENDIAN
+- select USB_EHCI_BIG_ENDIAN_MMIO if BIG_ENDIAN
+- select USB_OHCI_BIG_ENDIAN_DESC if BIG_ENDIAN
+- select USB_OHCI_BIG_ENDIAN_MMIO if BIG_ENDIAN
+- select USB_UHCI_BIG_ENDIAN_DESC if BIG_ENDIAN
+- select USB_UHCI_BIG_ENDIAN_MMIO if BIG_ENDIAN
++ select USB_EHCI_BIG_ENDIAN_DESC if CPU_BIG_ENDIAN
++ select USB_EHCI_BIG_ENDIAN_MMIO if CPU_BIG_ENDIAN
++ select USB_OHCI_BIG_ENDIAN_DESC if CPU_BIG_ENDIAN
++ select USB_OHCI_BIG_ENDIAN_MMIO if CPU_BIG_ENDIAN
++ select USB_UHCI_BIG_ENDIAN_DESC if CPU_BIG_ENDIAN
++ select USB_UHCI_BIG_ENDIAN_MMIO if CPU_BIG_ENDIAN
+ select USE_OF
+ help
+ Select this to build a kernel which aims to support multiple boards,
+diff --git a/arch/mips/kernel/cps-vec.S b/arch/mips/kernel/cps-vec.S
+index e68e6e04063a..1025f937ab0e 100644
+--- a/arch/mips/kernel/cps-vec.S
++++ b/arch/mips/kernel/cps-vec.S
+@@ -388,15 +388,16 @@ LEAF(mips_cps_boot_vpes)
+
+ #elif defined(CONFIG_MIPS_MT)
+
+- .set push
+- .set MIPS_ISA_LEVEL_RAW
+- .set mt
+-
+ /* If the core doesn't support MT then return */
+ has_mt t0, 5f
+
+ /* Enter VPE configuration state */
++ .set push
++ .set MIPS_ISA_LEVEL_RAW
++ .set mt
+ dvpe
++ .set pop
++
+ PTR_LA t1, 1f
+ jr.hb t1
+ nop
+@@ -422,6 +423,10 @@ LEAF(mips_cps_boot_vpes)
+ mtc0 t0, CP0_VPECONTROL
+ ehb
+
++ .set push
++ .set MIPS_ISA_LEVEL_RAW
++ .set mt
++
+ /* Skip the VPE if its TC is not halted */
+ mftc0 t0, CP0_TCHALT
+ beqz t0, 2f
+@@ -495,6 +500,8 @@ LEAF(mips_cps_boot_vpes)
+ ehb
+ evpe
+
++ .set pop
++
+ /* Check whether this VPE is meant to be running */
+ li t0, 1
+ sll t0, t0, a1
+@@ -509,7 +516,7 @@ LEAF(mips_cps_boot_vpes)
+ 1: jr.hb t0
+ nop
+
+-2: .set pop
++2:
+
+ #endif /* CONFIG_MIPS_MT_SMP */
+
+diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
+index 702c678de116..e4a1581ce822 100644
+--- a/arch/mips/kernel/setup.c
++++ b/arch/mips/kernel/setup.c
+@@ -375,6 +375,7 @@ static void __init bootmem_init(void)
+ unsigned long reserved_end;
+ unsigned long mapstart = ~0UL;
+ unsigned long bootmap_size;
++ phys_addr_t ramstart = (phys_addr_t)ULLONG_MAX;
+ bool bootmap_valid = false;
+ int i;
+
+@@ -395,7 +396,8 @@ static void __init bootmem_init(void)
+ max_low_pfn = 0;
+
+ /*
+- * Find the highest page frame number we have available.
++ * Find the highest page frame number we have available
++ * and the lowest used RAM address
+ */
+ for (i = 0; i < boot_mem_map.nr_map; i++) {
+ unsigned long start, end;
+@@ -407,6 +409,8 @@ static void __init bootmem_init(void)
+ end = PFN_DOWN(boot_mem_map.map[i].addr
+ + boot_mem_map.map[i].size);
+
++ ramstart = min(ramstart, boot_mem_map.map[i].addr);
++
+ #ifndef CONFIG_HIGHMEM
+ /*
+ * Skip highmem here so we get an accurate max_low_pfn if low
+@@ -436,6 +440,13 @@ static void __init bootmem_init(void)
+ mapstart = max(reserved_end, start);
+ }
+
++ /*
++ * Reserve any memory between the start of RAM and PHYS_OFFSET
++ */
++ if (ramstart > PHYS_OFFSET)
++ add_memory_region(PHYS_OFFSET, ramstart - PHYS_OFFSET,
++ BOOT_MEM_RESERVED);
++
+ if (min_low_pfn >= max_low_pfn)
+ panic("Incorrect memory mapping !!!");
+ if (min_low_pfn > ARCH_PFN_OFFSET) {
+@@ -664,9 +675,6 @@ static int __init early_parse_mem(char *p)
+
+ add_memory_region(start, size, BOOT_MEM_RAM);
+
+- if (start && start > PHYS_OFFSET)
+- add_memory_region(PHYS_OFFSET, start - PHYS_OFFSET,
+- BOOT_MEM_RESERVED);
+ return 0;
+ }
+ early_param("mem", early_parse_mem);
+diff --git a/arch/powerpc/include/asm/topology.h b/arch/powerpc/include/asm/topology.h
+index 88187c285c70..1c02e6900f78 100644
+--- a/arch/powerpc/include/asm/topology.h
++++ b/arch/powerpc/include/asm/topology.h
+@@ -44,6 +44,11 @@ extern int sysfs_add_device_to_node(struct device *dev, int nid);
+ extern void sysfs_remove_device_from_node(struct device *dev, int nid);
+ extern int numa_update_cpu_topology(bool cpus_locked);
+
++static inline void update_numa_cpu_lookup_table(unsigned int cpu, int node)
++{
++ numa_cpu_lookup_table[cpu] = node;
++}
++
+ static inline int early_cpu_to_node(int cpu)
+ {
+ int nid;
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 72be0c32e902..2010e4c827b7 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -1509,14 +1509,15 @@ static int assign_thread_tidr(void)
+ {
+ int index;
+ int err;
++ unsigned long flags;
+
+ again:
+ if (!ida_pre_get(&vas_thread_ida, GFP_KERNEL))
+ return -ENOMEM;
+
+- spin_lock(&vas_thread_id_lock);
++ spin_lock_irqsave(&vas_thread_id_lock, flags);
+ err = ida_get_new_above(&vas_thread_ida, 1, &index);
+- spin_unlock(&vas_thread_id_lock);
++ spin_unlock_irqrestore(&vas_thread_id_lock, flags);
+
+ if (err == -EAGAIN)
+ goto again;
+@@ -1524,9 +1525,9 @@ static int assign_thread_tidr(void)
+ return err;
+
+ if (index > MAX_THREAD_CONTEXT) {
+- spin_lock(&vas_thread_id_lock);
++ spin_lock_irqsave(&vas_thread_id_lock, flags);
+ ida_remove(&vas_thread_ida, index);
+- spin_unlock(&vas_thread_id_lock);
++ spin_unlock_irqrestore(&vas_thread_id_lock, flags);
+ return -ENOMEM;
+ }
+
+@@ -1535,9 +1536,11 @@ static int assign_thread_tidr(void)
+
+ static void free_thread_tidr(int id)
+ {
+- spin_lock(&vas_thread_id_lock);
++ unsigned long flags;
++
++ spin_lock_irqsave(&vas_thread_id_lock, flags);
+ ida_remove(&vas_thread_ida, id);
+- spin_unlock(&vas_thread_id_lock);
++ spin_unlock_irqrestore(&vas_thread_id_lock, flags);
+ }
+
+ /*
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index adb6364f4091..09be66fcea68 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -142,11 +142,6 @@ static void reset_numa_cpu_lookup_table(void)
+ numa_cpu_lookup_table[cpu] = -1;
+ }
+
+-static void update_numa_cpu_lookup_table(unsigned int cpu, int node)
+-{
+- numa_cpu_lookup_table[cpu] = node;
+-}
+-
+ static void map_cpu_to_node(int cpu, int node)
+ {
+ update_numa_cpu_lookup_table(cpu, node);
+diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
+index cfbbee941a76..17ae5c15a9e0 100644
+--- a/arch/powerpc/mm/pgtable-radix.c
++++ b/arch/powerpc/mm/pgtable-radix.c
+@@ -17,6 +17,7 @@
+ #include <linux/of_fdt.h>
+ #include <linux/mm.h>
+ #include <linux/string_helpers.h>
++#include <linux/stop_machine.h>
+
+ #include <asm/pgtable.h>
+ #include <asm/pgalloc.h>
+@@ -671,6 +672,30 @@ static void free_pmd_table(pmd_t *pmd_start, pud_t *pud)
+ pud_clear(pud);
+ }
+
++struct change_mapping_params {
++ pte_t *pte;
++ unsigned long start;
++ unsigned long end;
++ unsigned long aligned_start;
++ unsigned long aligned_end;
++};
++
++static int stop_machine_change_mapping(void *data)
++{
++ struct change_mapping_params *params =
++ (struct change_mapping_params *)data;
++
++ if (!data)
++ return -1;
++
++ spin_unlock(&init_mm.page_table_lock);
++ pte_clear(&init_mm, params->aligned_start, params->pte);
++ create_physical_mapping(params->aligned_start, params->start);
++ create_physical_mapping(params->end, params->aligned_end);
++ spin_lock(&init_mm.page_table_lock);
++ return 0;
++}
++
+ static void remove_pte_table(pte_t *pte_start, unsigned long addr,
+ unsigned long end)
+ {
+@@ -699,6 +724,52 @@ static void remove_pte_table(pte_t *pte_start, unsigned long addr,
+ }
+ }
+
++/*
++ * clear the pte and potentially split the mapping helper
++ */
++static void split_kernel_mapping(unsigned long addr, unsigned long end,
++ unsigned long size, pte_t *pte)
++{
++ unsigned long mask = ~(size - 1);
++ unsigned long aligned_start = addr & mask;
++ unsigned long aligned_end = addr + size;
++ struct change_mapping_params params;
++ bool split_region = false;
++
++ if ((end - addr) < size) {
++ /*
++ * We're going to clear the PTE, but not flushed
++ * the mapping, time to remap and flush. The
++ * effects if visible outside the processor or
++ * if we are running in code close to the
++ * mapping we cleared, we are in trouble.
++ */
++ if (overlaps_kernel_text(aligned_start, addr) ||
++ overlaps_kernel_text(end, aligned_end)) {
++ /*
++ * Hack, just return, don't pte_clear
++ */
++ WARN_ONCE(1, "Linear mapping %lx->%lx overlaps kernel "
++ "text, not splitting\n", addr, end);
++ return;
++ }
++ split_region = true;
++ }
++
++ if (split_region) {
++ params.pte = pte;
++ params.start = addr;
++ params.end = end;
++ params.aligned_start = addr & ~(size - 1);
++ params.aligned_end = min_t(unsigned long, aligned_end,
++ (unsigned long)__va(memblock_end_of_DRAM()));
++ stop_machine(stop_machine_change_mapping, &params, NULL);
++ return;
++ }
++
++ pte_clear(&init_mm, addr, pte);
++}
++
+ static void remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
+ unsigned long end)
+ {
+@@ -714,13 +785,7 @@ static void remove_pmd_table(pmd_t *pmd_start, unsigned long addr,
+ continue;
+
+ if (pmd_huge(*pmd)) {
+- if (!IS_ALIGNED(addr, PMD_SIZE) ||
+- !IS_ALIGNED(next, PMD_SIZE)) {
+- WARN_ONCE(1, "%s: unaligned range\n", __func__);
+- continue;
+- }
+-
+- pte_clear(&init_mm, addr, (pte_t *)pmd);
++ split_kernel_mapping(addr, end, PMD_SIZE, (pte_t *)pmd);
+ continue;
+ }
+
+@@ -745,13 +810,7 @@ static void remove_pud_table(pud_t *pud_start, unsigned long addr,
+ continue;
+
+ if (pud_huge(*pud)) {
+- if (!IS_ALIGNED(addr, PUD_SIZE) ||
+- !IS_ALIGNED(next, PUD_SIZE)) {
+- WARN_ONCE(1, "%s: unaligned range\n", __func__);
+- continue;
+- }
+-
+- pte_clear(&init_mm, addr, (pte_t *)pud);
++ split_kernel_mapping(addr, end, PUD_SIZE, (pte_t *)pud);
+ continue;
+ }
+
+@@ -777,13 +836,7 @@ static void remove_pagetable(unsigned long start, unsigned long end)
+ continue;
+
+ if (pgd_huge(*pgd)) {
+- if (!IS_ALIGNED(addr, PGDIR_SIZE) ||
+- !IS_ALIGNED(next, PGDIR_SIZE)) {
+- WARN_ONCE(1, "%s: unaligned range\n", __func__);
+- continue;
+- }
+-
+- pte_clear(&init_mm, addr, (pte_t *)pgd);
++ split_kernel_mapping(addr, end, PGDIR_SIZE, (pte_t *)pgd);
+ continue;
+ }
+
+diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
+index 813ea22c3e00..eec1367c2f32 100644
+--- a/arch/powerpc/mm/pgtable_64.c
++++ b/arch/powerpc/mm/pgtable_64.c
+@@ -483,6 +483,8 @@ void mmu_partition_table_set_entry(unsigned int lpid, unsigned long dw0,
+ if (old & PATB_HR) {
+ asm volatile(PPC_TLBIE_5(%0,%1,2,0,1) : :
+ "r" (TLBIEL_INVAL_SET_LPID), "r" (lpid));
++ asm volatile(PPC_TLBIE_5(%0,%1,2,1,1) : :
++ "r" (TLBIEL_INVAL_SET_LPID), "r" (lpid));
+ trace_tlbie(lpid, 0, TLBIEL_INVAL_SET_LPID, lpid, 2, 0, 1);
+ } else {
+ asm volatile(PPC_TLBIE_5(%0,%1,2,0,0) : :
+diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
+index 884f4b705b57..913a2b81b177 100644
+--- a/arch/powerpc/mm/tlb-radix.c
++++ b/arch/powerpc/mm/tlb-radix.c
+@@ -600,14 +600,12 @@ void radix__flush_tlb_all(void)
+ */
+ asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
+ : : "r"(rb), "i"(r), "i"(1), "i"(ric), "r"(rs) : "memory");
+- trace_tlbie(0, 0, rb, rs, ric, prs, r);
+ /*
+ * now flush host entires by passing PRS = 0 and LPID == 0
+ */
+ asm volatile(PPC_TLBIE_5(%0, %4, %3, %2, %1)
+ : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(0) : "memory");
+ asm volatile("eieio; tlbsync; ptesync": : :"memory");
+- trace_tlbie(0, 0, rb, 0, ric, prs, r);
+ }
+
+ void radix__flush_tlb_pte_p9_dd1(unsigned long old_pte, struct mm_struct *mm,
+diff --git a/arch/powerpc/platforms/powernv/vas-window.c b/arch/powerpc/platforms/powernv/vas-window.c
+index 2b3eb01ab110..b7c53a51c31b 100644
+--- a/arch/powerpc/platforms/powernv/vas-window.c
++++ b/arch/powerpc/platforms/powernv/vas-window.c
+@@ -1063,16 +1063,16 @@ struct vas_window *vas_tx_win_open(int vasid, enum vas_cop_type cop,
+ rc = PTR_ERR(txwin->paste_kaddr);
+ goto free_window;
+ }
++ } else {
++ /*
++ * A user mapping must ensure that context switch issues
++ * CP_ABORT for this thread.
++ */
++ rc = set_thread_uses_vas();
++ if (rc)
++ goto free_window;
+ }
+
+- /*
+- * Now that we have a send window, ensure context switch issues
+- * CP_ABORT for this thread.
+- */
+- rc = -EINVAL;
+- if (set_thread_uses_vas() < 0)
+- goto free_window;
+-
+ set_vinst_win(vinst, txwin);
+
+ return txwin;
+diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+index a7d14aa7bb7c..09083ad82f7a 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
++++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
+@@ -36,6 +36,7 @@
+ #include <asm/xics.h>
+ #include <asm/xive.h>
+ #include <asm/plpar_wrappers.h>
++#include <asm/topology.h>
+
+ #include "pseries.h"
+ #include "offline_states.h"
+@@ -331,6 +332,7 @@ static void pseries_remove_processor(struct device_node *np)
+ BUG_ON(cpu_online(cpu));
+ set_cpu_present(cpu, false);
+ set_hard_smp_processor_id(cpu, -1);
++ update_numa_cpu_lookup_table(cpu, -1);
+ break;
+ }
+ if (cpu >= nr_cpu_ids)
+diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
+index d9c4c9366049..091f1d0d0af1 100644
+--- a/arch/powerpc/sysdev/xive/spapr.c
++++ b/arch/powerpc/sysdev/xive/spapr.c
+@@ -356,7 +356,8 @@ static int xive_spapr_configure_queue(u32 target, struct xive_q *q, u8 prio,
+
+ rc = plpar_int_get_queue_info(0, target, prio, &esn_page, &esn_size);
+ if (rc) {
+- pr_err("Error %lld getting queue info prio %d\n", rc, prio);
++ pr_err("Error %lld getting queue info CPU %d prio %d\n", rc,
++ target, prio);
+ rc = -EIO;
+ goto fail;
+ }
+@@ -370,7 +371,8 @@ static int xive_spapr_configure_queue(u32 target, struct xive_q *q, u8 prio,
+ /* Configure and enable the queue in HW */
+ rc = plpar_int_set_queue_config(flags, target, prio, qpage_phys, order);
+ if (rc) {
+- pr_err("Error %lld setting queue for prio %d\n", rc, prio);
++ pr_err("Error %lld setting queue for CPU %d prio %d\n", rc,
++ target, prio);
+ rc = -EIO;
+ } else {
+ q->qpage = qpage;
+@@ -389,8 +391,8 @@ static int xive_spapr_setup_queue(unsigned int cpu, struct xive_cpu *xc,
+ if (IS_ERR(qpage))
+ return PTR_ERR(qpage);
+
+- return xive_spapr_configure_queue(cpu, q, prio, qpage,
+- xive_queue_shift);
++ return xive_spapr_configure_queue(get_hard_smp_processor_id(cpu),
++ q, prio, qpage, xive_queue_shift);
+ }
+
+ static void xive_spapr_cleanup_queue(unsigned int cpu, struct xive_cpu *xc,
+@@ -399,10 +401,12 @@ static void xive_spapr_cleanup_queue(unsigned int cpu, struct xive_cpu *xc,
+ struct xive_q *q = &xc->queue[prio];
+ unsigned int alloc_order;
+ long rc;
++ int hw_cpu = get_hard_smp_processor_id(cpu);
+
+- rc = plpar_int_set_queue_config(0, cpu, prio, 0, 0);
++ rc = plpar_int_set_queue_config(0, hw_cpu, prio, 0, 0);
+ if (rc)
+- pr_err("Error %ld setting queue for prio %d\n", rc, prio);
++ pr_err("Error %ld setting queue for CPU %d prio %d\n", rc,
++ hw_cpu, prio);
+
+ alloc_order = xive_alloc_order(xive_queue_shift);
+ free_pages((unsigned long)q->qpage, alloc_order);
+diff --git a/arch/s390/kernel/compat_linux.c b/arch/s390/kernel/compat_linux.c
+index 59eea9c65d3e..79b7a3438d54 100644
+--- a/arch/s390/kernel/compat_linux.c
++++ b/arch/s390/kernel/compat_linux.c
+@@ -110,7 +110,7 @@ COMPAT_SYSCALL_DEFINE2(s390_setregid16, u16, rgid, u16, egid)
+
+ COMPAT_SYSCALL_DEFINE1(s390_setgid16, u16, gid)
+ {
+- return sys_setgid((gid_t)gid);
++ return sys_setgid(low2highgid(gid));
+ }
+
+ COMPAT_SYSCALL_DEFINE2(s390_setreuid16, u16, ruid, u16, euid)
+@@ -120,7 +120,7 @@ COMPAT_SYSCALL_DEFINE2(s390_setreuid16, u16, ruid, u16, euid)
+
+ COMPAT_SYSCALL_DEFINE1(s390_setuid16, u16, uid)
+ {
+- return sys_setuid((uid_t)uid);
++ return sys_setuid(low2highuid(uid));
+ }
+
+ COMPAT_SYSCALL_DEFINE3(s390_setresuid16, u16, ruid, u16, euid, u16, suid)
+@@ -173,12 +173,12 @@ COMPAT_SYSCALL_DEFINE3(s390_getresgid16, u16 __user *, rgidp,
+
+ COMPAT_SYSCALL_DEFINE1(s390_setfsuid16, u16, uid)
+ {
+- return sys_setfsuid((uid_t)uid);
++ return sys_setfsuid(low2highuid(uid));
+ }
+
+ COMPAT_SYSCALL_DEFINE1(s390_setfsgid16, u16, gid)
+ {
+- return sys_setfsgid((gid_t)gid);
++ return sys_setfsgid(low2highgid(gid));
+ }
+
+ static int groups16_to_user(u16 __user *grouplist, struct group_info *group_info)
+diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
+index 3f48f695d5e6..dce7092ab24a 100644
+--- a/arch/x86/entry/calling.h
++++ b/arch/x86/entry/calling.h
+@@ -97,80 +97,69 @@ For 32-bit we have the following conventions - kernel is built with
+
+ #define SIZEOF_PTREGS 21*8
+
+- .macro ALLOC_PT_GPREGS_ON_STACK
+- addq $-(15*8), %rsp
+- .endm
++.macro PUSH_AND_CLEAR_REGS rdx=%rdx rax=%rax
++ /*
++ * Push registers and sanitize registers of values that a
++ * speculation attack might otherwise want to exploit. The
++ * lower registers are likely clobbered well before they
++ * could be put to use in a speculative execution gadget.
++ * Interleave XOR with PUSH for better uop scheduling:
++ */
++ pushq %rdi /* pt_regs->di */
++ pushq %rsi /* pt_regs->si */
++ pushq \rdx /* pt_regs->dx */
++ pushq %rcx /* pt_regs->cx */
++ pushq \rax /* pt_regs->ax */
++ pushq %r8 /* pt_regs->r8 */
++ xorq %r8, %r8 /* nospec r8 */
++ pushq %r9 /* pt_regs->r9 */
++ xorq %r9, %r9 /* nospec r9 */
++ pushq %r10 /* pt_regs->r10 */
++ xorq %r10, %r10 /* nospec r10 */
++ pushq %r11 /* pt_regs->r11 */
++ xorq %r11, %r11 /* nospec r11*/
++ pushq %rbx /* pt_regs->rbx */
++ xorl %ebx, %ebx /* nospec rbx*/
++ pushq %rbp /* pt_regs->rbp */
++ xorl %ebp, %ebp /* nospec rbp*/
++ pushq %r12 /* pt_regs->r12 */
++ xorq %r12, %r12 /* nospec r12*/
++ pushq %r13 /* pt_regs->r13 */
++ xorq %r13, %r13 /* nospec r13*/
++ pushq %r14 /* pt_regs->r14 */
++ xorq %r14, %r14 /* nospec r14*/
++ pushq %r15 /* pt_regs->r15 */
++ xorq %r15, %r15 /* nospec r15*/
++ UNWIND_HINT_REGS
++.endm
+
+- .macro SAVE_C_REGS_HELPER offset=0 rax=1 rcx=1 r8910=1 r11=1
+- .if \r11
+- movq %r11, 6*8+\offset(%rsp)
+- .endif
+- .if \r8910
+- movq %r10, 7*8+\offset(%rsp)
+- movq %r9, 8*8+\offset(%rsp)
+- movq %r8, 9*8+\offset(%rsp)
+- .endif
+- .if \rax
+- movq %rax, 10*8+\offset(%rsp)
+- .endif
+- .if \rcx
+- movq %rcx, 11*8+\offset(%rsp)
+- .endif
+- movq %rdx, 12*8+\offset(%rsp)
+- movq %rsi, 13*8+\offset(%rsp)
+- movq %rdi, 14*8+\offset(%rsp)
+- UNWIND_HINT_REGS offset=\offset extra=0
+- .endm
+- .macro SAVE_C_REGS offset=0
+- SAVE_C_REGS_HELPER \offset, 1, 1, 1, 1
+- .endm
+- .macro SAVE_C_REGS_EXCEPT_RAX_RCX offset=0
+- SAVE_C_REGS_HELPER \offset, 0, 0, 1, 1
+- .endm
+- .macro SAVE_C_REGS_EXCEPT_R891011
+- SAVE_C_REGS_HELPER 0, 1, 1, 0, 0
+- .endm
+- .macro SAVE_C_REGS_EXCEPT_RCX_R891011
+- SAVE_C_REGS_HELPER 0, 1, 0, 0, 0
+- .endm
+- .macro SAVE_C_REGS_EXCEPT_RAX_RCX_R11
+- SAVE_C_REGS_HELPER 0, 0, 0, 1, 0
+- .endm
+-
+- .macro SAVE_EXTRA_REGS offset=0
+- movq %r15, 0*8+\offset(%rsp)
+- movq %r14, 1*8+\offset(%rsp)
+- movq %r13, 2*8+\offset(%rsp)
+- movq %r12, 3*8+\offset(%rsp)
+- movq %rbp, 4*8+\offset(%rsp)
+- movq %rbx, 5*8+\offset(%rsp)
+- UNWIND_HINT_REGS offset=\offset
+- .endm
+-
+- .macro POP_EXTRA_REGS
++.macro POP_REGS pop_rdi=1 skip_r11rcx=0
+ popq %r15
+ popq %r14
+ popq %r13
+ popq %r12
+ popq %rbp
+ popq %rbx
+- .endm
+-
+- .macro POP_C_REGS
++ .if \skip_r11rcx
++ popq %rsi
++ .else
+ popq %r11
++ .endif
+ popq %r10
+ popq %r9
+ popq %r8
+ popq %rax
++ .if \skip_r11rcx
++ popq %rsi
++ .else
+ popq %rcx
++ .endif
+ popq %rdx
+ popq %rsi
++ .if \pop_rdi
+ popq %rdi
+- .endm
+-
+- .macro icebp
+- .byte 0xf1
+- .endm
++ .endif
++.endm
+
+ /*
+ * This is a sneaky trick to help the unwinder find pt_regs on the stack. The
+@@ -178,7 +167,7 @@ For 32-bit we have the following conventions - kernel is built with
+ * is just setting the LSB, which makes it an invalid stack address and is also
+ * a signal to the unwinder that it's a pt_regs pointer in disguise.
+ *
+- * NOTE: This macro must be used *after* SAVE_EXTRA_REGS because it corrupts
++ * NOTE: This macro must be used *after* PUSH_AND_CLEAR_REGS because it corrupts
+ * the original rbp.
+ */
+ .macro ENCODE_FRAME_POINTER ptregs_offset=0
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index c752abe89d80..4fd9044e72e7 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -213,7 +213,7 @@ ENTRY(entry_SYSCALL_64)
+
+ swapgs
+ /*
+- * This path is not taken when PAGE_TABLE_ISOLATION is disabled so it
++ * This path is only taken when PAGE_TABLE_ISOLATION is disabled so it
+ * is not required to switch CR3.
+ */
+ movq %rsp, PER_CPU_VAR(rsp_scratch)
+@@ -227,22 +227,8 @@ ENTRY(entry_SYSCALL_64)
+ pushq %rcx /* pt_regs->ip */
+ GLOBAL(entry_SYSCALL_64_after_hwframe)
+ pushq %rax /* pt_regs->orig_ax */
+- pushq %rdi /* pt_regs->di */
+- pushq %rsi /* pt_regs->si */
+- pushq %rdx /* pt_regs->dx */
+- pushq %rcx /* pt_regs->cx */
+- pushq $-ENOSYS /* pt_regs->ax */
+- pushq %r8 /* pt_regs->r8 */
+- pushq %r9 /* pt_regs->r9 */
+- pushq %r10 /* pt_regs->r10 */
+- pushq %r11 /* pt_regs->r11 */
+- pushq %rbx /* pt_regs->rbx */
+- pushq %rbp /* pt_regs->rbp */
+- pushq %r12 /* pt_regs->r12 */
+- pushq %r13 /* pt_regs->r13 */
+- pushq %r14 /* pt_regs->r14 */
+- pushq %r15 /* pt_regs->r15 */
+- UNWIND_HINT_REGS
++
++ PUSH_AND_CLEAR_REGS rax=$-ENOSYS
+
+ TRACE_IRQS_OFF
+
+@@ -321,15 +307,7 @@ GLOBAL(entry_SYSCALL_64_after_hwframe)
+ syscall_return_via_sysret:
+ /* rcx and r11 are already restored (see code above) */
+ UNWIND_HINT_EMPTY
+- POP_EXTRA_REGS
+- popq %rsi /* skip r11 */
+- popq %r10
+- popq %r9
+- popq %r8
+- popq %rax
+- popq %rsi /* skip rcx */
+- popq %rdx
+- popq %rsi
++ POP_REGS pop_rdi=0 skip_r11rcx=1
+
+ /*
+ * Now all regs are restored except RSP and RDI.
+@@ -559,9 +537,7 @@ END(irq_entries_start)
+ call switch_to_thread_stack
+ 1:
+
+- ALLOC_PT_GPREGS_ON_STACK
+- SAVE_C_REGS
+- SAVE_EXTRA_REGS
++ PUSH_AND_CLEAR_REGS
+ ENCODE_FRAME_POINTER
+
+ testb $3, CS(%rsp)
+@@ -622,15 +598,7 @@ GLOBAL(swapgs_restore_regs_and_return_to_usermode)
+ ud2
+ 1:
+ #endif
+- POP_EXTRA_REGS
+- popq %r11
+- popq %r10
+- popq %r9
+- popq %r8
+- popq %rax
+- popq %rcx
+- popq %rdx
+- popq %rsi
++ POP_REGS pop_rdi=0
+
+ /*
+ * The stack is now user RDI, orig_ax, RIP, CS, EFLAGS, RSP, SS.
+@@ -688,8 +656,7 @@ GLOBAL(restore_regs_and_return_to_kernel)
+ ud2
+ 1:
+ #endif
+- POP_EXTRA_REGS
+- POP_C_REGS
++ POP_REGS
+ addq $8, %rsp /* skip regs->orig_ax */
+ INTERRUPT_RETURN
+
+@@ -904,7 +871,9 @@ ENTRY(\sym)
+ pushq $-1 /* ORIG_RAX: no syscall to restart */
+ .endif
+
+- ALLOC_PT_GPREGS_ON_STACK
++ /* Save all registers in pt_regs */
++ PUSH_AND_CLEAR_REGS
++ ENCODE_FRAME_POINTER
+
+ .if \paranoid < 2
+ testb $3, CS(%rsp) /* If coming from userspace, switch stacks */
+@@ -1117,9 +1086,7 @@ ENTRY(xen_failsafe_callback)
+ addq $0x30, %rsp
+ UNWIND_HINT_IRET_REGS
+ pushq $-1 /* orig_ax = -1 => not a system call */
+- ALLOC_PT_GPREGS_ON_STACK
+- SAVE_C_REGS
+- SAVE_EXTRA_REGS
++ PUSH_AND_CLEAR_REGS
+ ENCODE_FRAME_POINTER
+ jmp error_exit
+ END(xen_failsafe_callback)
+@@ -1156,16 +1123,13 @@ idtentry machine_check do_mce has_error_code=0 paranoid=1
+ #endif
+
+ /*
+- * Save all registers in pt_regs, and switch gs if needed.
++ * Switch gs if needed.
+ * Use slow, but surefire "are we in kernel?" check.
+ * Return: ebx=0: need swapgs on exit, ebx=1: otherwise
+ */
+ ENTRY(paranoid_entry)
+ UNWIND_HINT_FUNC
+ cld
+- SAVE_C_REGS 8
+- SAVE_EXTRA_REGS 8
+- ENCODE_FRAME_POINTER 8
+ movl $1, %ebx
+ movl $MSR_GS_BASE, %ecx
+ rdmsr
+@@ -1204,21 +1168,18 @@ ENTRY(paranoid_exit)
+ jmp .Lparanoid_exit_restore
+ .Lparanoid_exit_no_swapgs:
+ TRACE_IRQS_IRETQ_DEBUG
++ RESTORE_CR3 scratch_reg=%rbx save_reg=%r14
+ .Lparanoid_exit_restore:
+ jmp restore_regs_and_return_to_kernel
+ END(paranoid_exit)
+
+ /*
+- * Save all registers in pt_regs, and switch gs if needed.
++ * Switch gs if needed.
+ * Return: EBX=0: came from user mode; EBX=1: otherwise
+ */
+ ENTRY(error_entry)
+- UNWIND_HINT_FUNC
++ UNWIND_HINT_REGS offset=8
+ cld
+- SAVE_C_REGS 8
+- SAVE_EXTRA_REGS 8
+- ENCODE_FRAME_POINTER 8
+- xorl %ebx, %ebx
+ testb $3, CS+8(%rsp)
+ jz .Lerror_kernelspace
+
+@@ -1399,22 +1360,7 @@ ENTRY(nmi)
+ pushq 1*8(%rdx) /* pt_regs->rip */
+ UNWIND_HINT_IRET_REGS
+ pushq $-1 /* pt_regs->orig_ax */
+- pushq %rdi /* pt_regs->di */
+- pushq %rsi /* pt_regs->si */
+- pushq (%rdx) /* pt_regs->dx */
+- pushq %rcx /* pt_regs->cx */
+- pushq %rax /* pt_regs->ax */
+- pushq %r8 /* pt_regs->r8 */
+- pushq %r9 /* pt_regs->r9 */
+- pushq %r10 /* pt_regs->r10 */
+- pushq %r11 /* pt_regs->r11 */
+- pushq %rbx /* pt_regs->rbx */
+- pushq %rbp /* pt_regs->rbp */
+- pushq %r12 /* pt_regs->r12 */
+- pushq %r13 /* pt_regs->r13 */
+- pushq %r14 /* pt_regs->r14 */
+- pushq %r15 /* pt_regs->r15 */
+- UNWIND_HINT_REGS
++ PUSH_AND_CLEAR_REGS rdx=(%rdx)
+ ENCODE_FRAME_POINTER
+
+ /*
+@@ -1624,7 +1570,8 @@ end_repeat_nmi:
+ * frame to point back to repeat_nmi.
+ */
+ pushq $-1 /* ORIG_RAX: no syscall to restart */
+- ALLOC_PT_GPREGS_ON_STACK
++ PUSH_AND_CLEAR_REGS
++ ENCODE_FRAME_POINTER
+
+ /*
+ * Use paranoid_entry to handle SWAPGS, but no need to use paranoid_exit
+@@ -1648,8 +1595,7 @@ end_repeat_nmi:
+ nmi_swapgs:
+ SWAPGS_UNSAFE_STACK
+ nmi_restore:
+- POP_EXTRA_REGS
+- POP_C_REGS
++ POP_REGS
+
+ /*
+ * Skip orig_ax and the "outermost" frame to point RSP at the "iret"
+diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
+index 98d5358e4041..fd65e016e413 100644
+--- a/arch/x86/entry/entry_64_compat.S
++++ b/arch/x86/entry/entry_64_compat.S
+@@ -85,15 +85,25 @@ ENTRY(entry_SYSENTER_compat)
+ pushq %rcx /* pt_regs->cx */
+ pushq $-ENOSYS /* pt_regs->ax */
+ pushq $0 /* pt_regs->r8 = 0 */
++ xorq %r8, %r8 /* nospec r8 */
+ pushq $0 /* pt_regs->r9 = 0 */
++ xorq %r9, %r9 /* nospec r9 */
+ pushq $0 /* pt_regs->r10 = 0 */
++ xorq %r10, %r10 /* nospec r10 */
+ pushq $0 /* pt_regs->r11 = 0 */
++ xorq %r11, %r11 /* nospec r11 */
+ pushq %rbx /* pt_regs->rbx */
++ xorl %ebx, %ebx /* nospec rbx */
+ pushq %rbp /* pt_regs->rbp (will be overwritten) */
++ xorl %ebp, %ebp /* nospec rbp */
+ pushq $0 /* pt_regs->r12 = 0 */
++ xorq %r12, %r12 /* nospec r12 */
+ pushq $0 /* pt_regs->r13 = 0 */
++ xorq %r13, %r13 /* nospec r13 */
+ pushq $0 /* pt_regs->r14 = 0 */
++ xorq %r14, %r14 /* nospec r14 */
+ pushq $0 /* pt_regs->r15 = 0 */
++ xorq %r15, %r15 /* nospec r15 */
+ cld
+
+ /*
+@@ -214,15 +224,25 @@ GLOBAL(entry_SYSCALL_compat_after_hwframe)
+ pushq %rbp /* pt_regs->cx (stashed in bp) */
+ pushq $-ENOSYS /* pt_regs->ax */
+ pushq $0 /* pt_regs->r8 = 0 */
++ xorq %r8, %r8 /* nospec r8 */
+ pushq $0 /* pt_regs->r9 = 0 */
++ xorq %r9, %r9 /* nospec r9 */
+ pushq $0 /* pt_regs->r10 = 0 */
++ xorq %r10, %r10 /* nospec r10 */
+ pushq $0 /* pt_regs->r11 = 0 */
++ xorq %r11, %r11 /* nospec r11 */
+ pushq %rbx /* pt_regs->rbx */
++ xorl %ebx, %ebx /* nospec rbx */
+ pushq %rbp /* pt_regs->rbp (will be overwritten) */
++ xorl %ebp, %ebp /* nospec rbp */
+ pushq $0 /* pt_regs->r12 = 0 */
++ xorq %r12, %r12 /* nospec r12 */
+ pushq $0 /* pt_regs->r13 = 0 */
++ xorq %r13, %r13 /* nospec r13 */
+ pushq $0 /* pt_regs->r14 = 0 */
++ xorq %r14, %r14 /* nospec r14 */
+ pushq $0 /* pt_regs->r15 = 0 */
++ xorq %r15, %r15 /* nospec r15 */
+
+ /*
+ * User mode is traced as though IRQs are on, and SYSENTER
+@@ -338,15 +358,25 @@ ENTRY(entry_INT80_compat)
+ pushq %rcx /* pt_regs->cx */
+ pushq $-ENOSYS /* pt_regs->ax */
+ pushq $0 /* pt_regs->r8 = 0 */
++ xorq %r8, %r8 /* nospec r8 */
+ pushq $0 /* pt_regs->r9 = 0 */
++ xorq %r9, %r9 /* nospec r9 */
+ pushq $0 /* pt_regs->r10 = 0 */
++ xorq %r10, %r10 /* nospec r10 */
+ pushq $0 /* pt_regs->r11 = 0 */
++ xorq %r11, %r11 /* nospec r11 */
+ pushq %rbx /* pt_regs->rbx */
++ xorl %ebx, %ebx /* nospec rbx */
+ pushq %rbp /* pt_regs->rbp */
++ xorl %ebp, %ebp /* nospec rbp */
+ pushq %r12 /* pt_regs->r12 */
++ xorq %r12, %r12 /* nospec r12 */
+ pushq %r13 /* pt_regs->r13 */
++ xorq %r13, %r13 /* nospec r13 */
+ pushq %r14 /* pt_regs->r14 */
++ xorq %r14, %r14 /* nospec r14 */
+ pushq %r15 /* pt_regs->r15 */
++ xorq %r15, %r15 /* nospec r15 */
+ cld
+
+ /*
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 731153a4681e..56457cb73448 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -3559,7 +3559,7 @@ static int intel_snb_pebs_broken(int cpu)
+ break;
+
+ case INTEL_FAM6_SANDYBRIDGE_X:
+- switch (cpu_data(cpu).x86_mask) {
++ switch (cpu_data(cpu).x86_stepping) {
+ case 6: rev = 0x618; break;
+ case 7: rev = 0x70c; break;
+ }
+diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
+index ae64d0b69729..cf372b90557e 100644
+--- a/arch/x86/events/intel/lbr.c
++++ b/arch/x86/events/intel/lbr.c
+@@ -1186,7 +1186,7 @@ void __init intel_pmu_lbr_init_atom(void)
+ * on PMU interrupt
+ */
+ if (boot_cpu_data.x86_model == 28
+- && boot_cpu_data.x86_mask < 10) {
++ && boot_cpu_data.x86_stepping < 10) {
+ pr_cont("LBR disabled due to erratum");
+ return;
+ }
+diff --git a/arch/x86/events/intel/p6.c b/arch/x86/events/intel/p6.c
+index a5604c352930..408879b0c0d4 100644
+--- a/arch/x86/events/intel/p6.c
++++ b/arch/x86/events/intel/p6.c
+@@ -234,7 +234,7 @@ static __initconst const struct x86_pmu p6_pmu = {
+
+ static __init void p6_pmu_rdpmc_quirk(void)
+ {
+- if (boot_cpu_data.x86_mask < 9) {
++ if (boot_cpu_data.x86_stepping < 9) {
+ /*
+ * PPro erratum 26; fixed in stepping 9 and above.
+ */
+diff --git a/arch/x86/include/asm/acpi.h b/arch/x86/include/asm/acpi.h
+index 8d0ec9df1cbe..f077401869ee 100644
+--- a/arch/x86/include/asm/acpi.h
++++ b/arch/x86/include/asm/acpi.h
+@@ -94,7 +94,7 @@ static inline unsigned int acpi_processor_cstate_check(unsigned int max_cstate)
+ if (boot_cpu_data.x86 == 0x0F &&
+ boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
+ boot_cpu_data.x86_model <= 0x05 &&
+- boot_cpu_data.x86_mask < 0x0A)
++ boot_cpu_data.x86_stepping < 0x0A)
+ return 1;
+ else if (boot_cpu_has(X86_BUG_AMD_APIC_C1E))
+ return 1;
+diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
+index 30d406146016..e1259f043ae9 100644
+--- a/arch/x86/include/asm/barrier.h
++++ b/arch/x86/include/asm/barrier.h
+@@ -40,7 +40,7 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
+
+ asm ("cmp %1,%2; sbb %0,%0;"
+ :"=r" (mask)
+- :"r"(size),"r" (index)
++ :"g"(size),"r" (index)
+ :"cc");
+ return mask;
+ }
+diff --git a/arch/x86/include/asm/bug.h b/arch/x86/include/asm/bug.h
+index 34d99af43994..6804d6642767 100644
+--- a/arch/x86/include/asm/bug.h
++++ b/arch/x86/include/asm/bug.h
+@@ -5,23 +5,20 @@
+ #include <linux/stringify.h>
+
+ /*
+- * Since some emulators terminate on UD2, we cannot use it for WARN.
+- * Since various instruction decoders disagree on the length of UD1,
+- * we cannot use it either. So use UD0 for WARN.
++ * Despite that some emulators terminate on UD2, we use it for WARN().
+ *
+- * (binutils knows about "ud1" but {en,de}codes it as 2 bytes, whereas
+- * our kernel decoder thinks it takes a ModRM byte, which seems consistent
+- * with various things like the Intel SDM instruction encoding rules)
++ * Since various instruction decoders/specs disagree on the encoding of
++ * UD0/UD1.
+ */
+
+-#define ASM_UD0 ".byte 0x0f, 0xff"
++#define ASM_UD0 ".byte 0x0f, 0xff" /* + ModRM (for Intel) */
+ #define ASM_UD1 ".byte 0x0f, 0xb9" /* + ModRM */
+ #define ASM_UD2 ".byte 0x0f, 0x0b"
+
+ #define INSN_UD0 0xff0f
+ #define INSN_UD2 0x0b0f
+
+-#define LEN_UD0 2
++#define LEN_UD2 2
+
+ #ifdef CONFIG_GENERIC_BUG
+
+@@ -77,7 +74,11 @@ do { \
+ unreachable(); \
+ } while (0)
+
+-#define __WARN_FLAGS(flags) _BUG_FLAGS(ASM_UD0, BUGFLAG_WARNING|(flags))
++#define __WARN_FLAGS(flags) \
++do { \
++ _BUG_FLAGS(ASM_UD2, BUGFLAG_WARNING|(flags)); \
++ annotate_reachable(); \
++} while (0)
+
+ #include <asm-generic/bug.h>
+
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 4d57894635f2..76b058533e47 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -6,6 +6,7 @@
+ #include <asm/alternative.h>
+ #include <asm/alternative-asm.h>
+ #include <asm/cpufeatures.h>
++#include <asm/msr-index.h>
+
+ #ifdef __ASSEMBLY__
+
+@@ -164,10 +165,15 @@ static inline void vmexit_fill_RSB(void)
+
+ static inline void indirect_branch_prediction_barrier(void)
+ {
+- alternative_input("",
+- "call __ibp_barrier",
+- X86_FEATURE_USE_IBPB,
+- ASM_NO_INPUT_CLOBBER("eax", "ecx", "edx", "memory"));
++ asm volatile(ALTERNATIVE("",
++ "movl %[msr], %%ecx\n\t"
++ "movl %[val], %%eax\n\t"
++ "movl $0, %%edx\n\t"
++ "wrmsr",
++ X86_FEATURE_USE_IBPB)
++ : : [msr] "i" (MSR_IA32_PRED_CMD),
++ [val] "i" (PRED_CMD_IBPB)
++ : "eax", "ecx", "edx", "memory");
+ }
+
+ #endif /* __ASSEMBLY__ */
+diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
+index 4baa6bceb232..d652a3808065 100644
+--- a/arch/x86/include/asm/page_64.h
++++ b/arch/x86/include/asm/page_64.h
+@@ -52,10 +52,6 @@ static inline void clear_page(void *page)
+
+ void copy_page(void *to, void *from);
+
+-#ifdef CONFIG_X86_MCE
+-#define arch_unmap_kpfn arch_unmap_kpfn
+-#endif
+-
+ #endif /* !__ASSEMBLY__ */
+
+ #ifdef CONFIG_X86_VSYSCALL_EMULATION
+diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
+index 892df375b615..554841fab717 100644
+--- a/arch/x86/include/asm/paravirt.h
++++ b/arch/x86/include/asm/paravirt.h
+@@ -297,9 +297,9 @@ static inline void __flush_tlb_global(void)
+ {
+ PVOP_VCALL0(pv_mmu_ops.flush_tlb_kernel);
+ }
+-static inline void __flush_tlb_single(unsigned long addr)
++static inline void __flush_tlb_one_user(unsigned long addr)
+ {
+- PVOP_VCALL1(pv_mmu_ops.flush_tlb_single, addr);
++ PVOP_VCALL1(pv_mmu_ops.flush_tlb_one_user, addr);
+ }
+
+ static inline void flush_tlb_others(const struct cpumask *cpumask,
+diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
+index 6ec54d01972d..f624f1f10316 100644
+--- a/arch/x86/include/asm/paravirt_types.h
++++ b/arch/x86/include/asm/paravirt_types.h
+@@ -217,7 +217,7 @@ struct pv_mmu_ops {
+ /* TLB operations */
+ void (*flush_tlb_user)(void);
+ void (*flush_tlb_kernel)(void);
+- void (*flush_tlb_single)(unsigned long addr);
++ void (*flush_tlb_one_user)(unsigned long addr);
+ void (*flush_tlb_others)(const struct cpumask *cpus,
+ const struct flush_tlb_info *info);
+
+diff --git a/arch/x86/include/asm/pgtable_32.h b/arch/x86/include/asm/pgtable_32.h
+index e67c0620aec2..e55466760ff8 100644
+--- a/arch/x86/include/asm/pgtable_32.h
++++ b/arch/x86/include/asm/pgtable_32.h
+@@ -61,7 +61,7 @@ void paging_init(void);
+ #define kpte_clear_flush(ptep, vaddr) \
+ do { \
+ pte_clear(&init_mm, (vaddr), (ptep)); \
+- __flush_tlb_one((vaddr)); \
++ __flush_tlb_one_kernel((vaddr)); \
+ } while (0)
+
+ #endif /* !__ASSEMBLY__ */
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 513f9604c192..44c2c4ec6d60 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -91,7 +91,7 @@ struct cpuinfo_x86 {
+ __u8 x86; /* CPU family */
+ __u8 x86_vendor; /* CPU vendor */
+ __u8 x86_model;
+- __u8 x86_mask;
++ __u8 x86_stepping;
+ #ifdef CONFIG_X86_64
+ /* Number of 4K pages in DTLB/ITLB combined(in pages): */
+ int x86_tlbsize;
+@@ -109,7 +109,7 @@ struct cpuinfo_x86 {
+ char x86_vendor_id[16];
+ char x86_model_id[64];
+ /* in KB - valid for CPUS which support this call: */
+- int x86_cache_size;
++ unsigned int x86_cache_size;
+ int x86_cache_alignment; /* In bytes */
+ /* Cache QoS architectural values: */
+ int x86_cache_max_rmid; /* max index */
+@@ -969,7 +969,4 @@ bool xen_set_default_idle(void);
+
+ void stop_this_cpu(void *dummy);
+ void df_debug(struct pt_regs *regs, long error_code);
+-
+-void __ibp_barrier(void);
+-
+ #endif /* _ASM_X86_PROCESSOR_H */
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 2b8f18ca5874..84137c22fdfa 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -140,7 +140,7 @@ static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid)
+ #else
+ #define __flush_tlb() __native_flush_tlb()
+ #define __flush_tlb_global() __native_flush_tlb_global()
+-#define __flush_tlb_single(addr) __native_flush_tlb_single(addr)
++#define __flush_tlb_one_user(addr) __native_flush_tlb_one_user(addr)
+ #endif
+
+ static inline bool tlb_defer_switch_to_init_mm(void)
+@@ -400,7 +400,7 @@ static inline void __native_flush_tlb_global(void)
+ /*
+ * flush one page in the user mapping
+ */
+-static inline void __native_flush_tlb_single(unsigned long addr)
++static inline void __native_flush_tlb_one_user(unsigned long addr)
+ {
+ u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
+
+@@ -437,18 +437,31 @@ static inline void __flush_tlb_all(void)
+ /*
+ * flush one page in the kernel mapping
+ */
+-static inline void __flush_tlb_one(unsigned long addr)
++static inline void __flush_tlb_one_kernel(unsigned long addr)
+ {
+ count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ONE);
+- __flush_tlb_single(addr);
++
++ /*
++ * If PTI is off, then __flush_tlb_one_user() is just INVLPG or its
++ * paravirt equivalent. Even with PCID, this is sufficient: we only
++ * use PCID if we also use global PTEs for the kernel mapping, and
++ * INVLPG flushes global translations across all address spaces.
++ *
++ * If PTI is on, then the kernel is mapped with non-global PTEs, and
++ * __flush_tlb_one_user() will flush the given address for the current
++ * kernel address space and for its usermode counterpart, but it does
++ * not flush it for other address spaces.
++ */
++ __flush_tlb_one_user(addr);
+
+ if (!static_cpu_has(X86_FEATURE_PTI))
+ return;
+
+ /*
+- * __flush_tlb_single() will have cleared the TLB entry for this ASID,
+- * but since kernel space is replicated across all, we must also
+- * invalidate all others.
++ * See above. We need to propagate the flush to all other address
++ * spaces. In principle, we only need to propagate it to kernelmode
++ * address spaces, but the extra bookkeeping we would need is not
++ * worth it.
+ */
+ invalidate_other_asid();
+ }
+diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c
+index 6db28f17ff28..c88e0b127810 100644
+--- a/arch/x86/kernel/amd_nb.c
++++ b/arch/x86/kernel/amd_nb.c
+@@ -235,7 +235,7 @@ int amd_cache_northbridges(void)
+ if (boot_cpu_data.x86 == 0x10 &&
+ boot_cpu_data.x86_model >= 0x8 &&
+ (boot_cpu_data.x86_model > 0x9 ||
+- boot_cpu_data.x86_mask >= 0x1))
++ boot_cpu_data.x86_stepping >= 0x1))
+ amd_northbridges.flags |= AMD_NB_L3_INDEX_DISABLE;
+
+ if (boot_cpu_data.x86 == 0x15)
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index 25ddf02598d2..b203af0855b5 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -546,7 +546,7 @@ static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
+
+ static u32 hsx_deadline_rev(void)
+ {
+- switch (boot_cpu_data.x86_mask) {
++ switch (boot_cpu_data.x86_stepping) {
+ case 0x02: return 0x3a; /* EP */
+ case 0x04: return 0x0f; /* EX */
+ }
+@@ -556,7 +556,7 @@ static u32 hsx_deadline_rev(void)
+
+ static u32 bdx_deadline_rev(void)
+ {
+- switch (boot_cpu_data.x86_mask) {
++ switch (boot_cpu_data.x86_stepping) {
+ case 0x02: return 0x00000011;
+ case 0x03: return 0x0700000e;
+ case 0x04: return 0x0f00000c;
+@@ -568,7 +568,7 @@ static u32 bdx_deadline_rev(void)
+
+ static u32 skx_deadline_rev(void)
+ {
+- switch (boot_cpu_data.x86_mask) {
++ switch (boot_cpu_data.x86_stepping) {
+ case 0x03: return 0x01000136;
+ case 0x04: return 0x02000014;
+ }
+diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c
+index e4b0d92b3ae0..2a7fd56e67b3 100644
+--- a/arch/x86/kernel/apm_32.c
++++ b/arch/x86/kernel/apm_32.c
+@@ -2389,6 +2389,7 @@ static int __init apm_init(void)
+ if (HZ != 100)
+ idle_period = (idle_period * HZ) / 100;
+ if (idle_threshold < 100) {
++ cpuidle_poll_state_init(&apm_idle_driver);
+ if (!cpuidle_register_driver(&apm_idle_driver))
+ if (cpuidle_register_device(&apm_cpuidle_device))
+ cpuidle_unregister_driver(&apm_idle_driver);
+diff --git a/arch/x86/kernel/asm-offsets_32.c b/arch/x86/kernel/asm-offsets_32.c
+index fa1261eefa16..f91ba53e06c8 100644
+--- a/arch/x86/kernel/asm-offsets_32.c
++++ b/arch/x86/kernel/asm-offsets_32.c
+@@ -18,7 +18,7 @@ void foo(void)
+ OFFSET(CPUINFO_x86, cpuinfo_x86, x86);
+ OFFSET(CPUINFO_x86_vendor, cpuinfo_x86, x86_vendor);
+ OFFSET(CPUINFO_x86_model, cpuinfo_x86, x86_model);
+- OFFSET(CPUINFO_x86_mask, cpuinfo_x86, x86_mask);
++ OFFSET(CPUINFO_x86_stepping, cpuinfo_x86, x86_stepping);
+ OFFSET(CPUINFO_cpuid_level, cpuinfo_x86, cpuid_level);
+ OFFSET(CPUINFO_x86_capability, cpuinfo_x86, x86_capability);
+ OFFSET(CPUINFO_x86_vendor_id, cpuinfo_x86, x86_vendor_id);
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index ea831c858195..e7d5a7883632 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -119,7 +119,7 @@ static void init_amd_k6(struct cpuinfo_x86 *c)
+ return;
+ }
+
+- if (c->x86_model == 6 && c->x86_mask == 1) {
++ if (c->x86_model == 6 && c->x86_stepping == 1) {
+ const int K6_BUG_LOOP = 1000000;
+ int n;
+ void (*f_vide)(void);
+@@ -149,7 +149,7 @@ static void init_amd_k6(struct cpuinfo_x86 *c)
+
+ /* K6 with old style WHCR */
+ if (c->x86_model < 8 ||
+- (c->x86_model == 8 && c->x86_mask < 8)) {
++ (c->x86_model == 8 && c->x86_stepping < 8)) {
+ /* We can only write allocate on the low 508Mb */
+ if (mbytes > 508)
+ mbytes = 508;
+@@ -168,7 +168,7 @@ static void init_amd_k6(struct cpuinfo_x86 *c)
+ return;
+ }
+
+- if ((c->x86_model == 8 && c->x86_mask > 7) ||
++ if ((c->x86_model == 8 && c->x86_stepping > 7) ||
+ c->x86_model == 9 || c->x86_model == 13) {
+ /* The more serious chips .. */
+
+@@ -221,7 +221,7 @@ static void init_amd_k7(struct cpuinfo_x86 *c)
+ * are more robust with CLK_CTL set to 200xxxxx instead of 600xxxxx
+ * As per AMD technical note 27212 0.2
+ */
+- if ((c->x86_model == 8 && c->x86_mask >= 1) || (c->x86_model > 8)) {
++ if ((c->x86_model == 8 && c->x86_stepping >= 1) || (c->x86_model > 8)) {
+ rdmsr(MSR_K7_CLK_CTL, l, h);
+ if ((l & 0xfff00000) != 0x20000000) {
+ pr_info("CPU: CLK_CTL MSR was %x. Reprogramming to %x\n",
+@@ -241,12 +241,12 @@ static void init_amd_k7(struct cpuinfo_x86 *c)
+ * but they are not certified as MP capable.
+ */
+ /* Athlon 660/661 is valid. */
+- if ((c->x86_model == 6) && ((c->x86_mask == 0) ||
+- (c->x86_mask == 1)))
++ if ((c->x86_model == 6) && ((c->x86_stepping == 0) ||
++ (c->x86_stepping == 1)))
+ return;
+
+ /* Duron 670 is valid */
+- if ((c->x86_model == 7) && (c->x86_mask == 0))
++ if ((c->x86_model == 7) && (c->x86_stepping == 0))
+ return;
+
+ /*
+@@ -256,8 +256,8 @@ static void init_amd_k7(struct cpuinfo_x86 *c)
+ * See http://www.heise.de/newsticker/data/jow-18.10.01-000 for
+ * more.
+ */
+- if (((c->x86_model == 6) && (c->x86_mask >= 2)) ||
+- ((c->x86_model == 7) && (c->x86_mask >= 1)) ||
++ if (((c->x86_model == 6) && (c->x86_stepping >= 2)) ||
++ ((c->x86_model == 7) && (c->x86_stepping >= 1)) ||
+ (c->x86_model > 7))
+ if (cpu_has(c, X86_FEATURE_MP))
+ return;
+@@ -583,7 +583,7 @@ static void early_init_amd(struct cpuinfo_x86 *c)
+ /* Set MTRR capability flag if appropriate */
+ if (c->x86 == 5)
+ if (c->x86_model == 13 || c->x86_model == 9 ||
+- (c->x86_model == 8 && c->x86_mask >= 8))
++ (c->x86_model == 8 && c->x86_stepping >= 8))
+ set_cpu_cap(c, X86_FEATURE_K6_MTRR);
+ #endif
+ #if defined(CONFIG_X86_LOCAL_APIC) && defined(CONFIG_PCI)
+@@ -769,7 +769,7 @@ static void init_amd_zn(struct cpuinfo_x86 *c)
+ * Fix erratum 1076: CPB feature bit not being set in CPUID. It affects
+ * all up to and including B1.
+ */
+- if (c->x86_model <= 1 && c->x86_mask <= 1)
++ if (c->x86_model <= 1 && c->x86_stepping <= 1)
+ set_cpu_cap(c, X86_FEATURE_CPB);
+ }
+
+@@ -880,11 +880,11 @@ static unsigned int amd_size_cache(struct cpuinfo_x86 *c, unsigned int size)
+ /* AMD errata T13 (order #21922) */
+ if ((c->x86 == 6)) {
+ /* Duron Rev A0 */
+- if (c->x86_model == 3 && c->x86_mask == 0)
++ if (c->x86_model == 3 && c->x86_stepping == 0)
+ size = 64;
+ /* Tbird rev A1/A2 */
+ if (c->x86_model == 4 &&
+- (c->x86_mask == 0 || c->x86_mask == 1))
++ (c->x86_stepping == 0 || c->x86_stepping == 1))
+ size = 256;
+ }
+ return size;
+@@ -1021,7 +1021,7 @@ static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
+ }
+
+ /* OSVW unavailable or ID unknown, match family-model-stepping range */
+- ms = (cpu->x86_model << 4) | cpu->x86_mask;
++ ms = (cpu->x86_model << 4) | cpu->x86_stepping;
+ while ((range = *erratum++))
+ if ((cpu->x86 == AMD_MODEL_RANGE_FAMILY(range)) &&
+ (ms >= AMD_MODEL_RANGE_START(range)) &&
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 71949bf2de5a..d71c8b54b696 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -162,8 +162,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ if (cmdline_find_option_bool(boot_command_line, "nospectre_v2"))
+ return SPECTRE_V2_CMD_NONE;
+ else {
+- ret = cmdline_find_option(boot_command_line, "spectre_v2", arg,
+- sizeof(arg));
++ ret = cmdline_find_option(boot_command_line, "spectre_v2", arg, sizeof(arg));
+ if (ret < 0)
+ return SPECTRE_V2_CMD_AUTO;
+
+@@ -175,8 +174,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ }
+
+ if (i >= ARRAY_SIZE(mitigation_options)) {
+- pr_err("unknown option (%s). Switching to AUTO select\n",
+- mitigation_options[i].option);
++ pr_err("unknown option (%s). Switching to AUTO select\n", arg);
+ return SPECTRE_V2_CMD_AUTO;
+ }
+ }
+@@ -185,8 +183,7 @@ static enum spectre_v2_mitigation_cmd __init spectre_v2_parse_cmdline(void)
+ cmd == SPECTRE_V2_CMD_RETPOLINE_AMD ||
+ cmd == SPECTRE_V2_CMD_RETPOLINE_GENERIC) &&
+ !IS_ENABLED(CONFIG_RETPOLINE)) {
+- pr_err("%s selected but not compiled in. Switching to AUTO select\n",
+- mitigation_options[i].option);
++ pr_err("%s selected but not compiled in. Switching to AUTO select\n", mitigation_options[i].option);
+ return SPECTRE_V2_CMD_AUTO;
+ }
+
+@@ -256,14 +253,14 @@ static void __init spectre_v2_select_mitigation(void)
+ goto retpoline_auto;
+ break;
+ }
+- pr_err("kernel not compiled with retpoline; no mitigation available!");
++ pr_err("Spectre mitigation: kernel not compiled with retpoline; no mitigation available!");
+ return;
+
+ retpoline_auto:
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD) {
+ retpoline_amd:
+ if (!boot_cpu_has(X86_FEATURE_LFENCE_RDTSC)) {
+- pr_err("LFENCE not serializing. Switching to generic retpoline\n");
++ pr_err("Spectre mitigation: LFENCE not serializing, switching to generic retpoline\n");
+ goto retpoline_generic;
+ }
+ mode = retp_compiler() ? SPECTRE_V2_RETPOLINE_AMD :
+@@ -281,7 +278,7 @@ static void __init spectre_v2_select_mitigation(void)
+ pr_info("%s\n", spectre_v2_strings[mode]);
+
+ /*
+- * If neither SMEP or KPTI are available, there is a risk of
++ * If neither SMEP nor PTI are available, there is a risk of
+ * hitting userspace addresses in the RSB after a context switch
+ * from a shallow call stack to a deeper one. To prevent this fill
+ * the entire RSB, even when using IBRS.
+@@ -295,21 +292,20 @@ static void __init spectre_v2_select_mitigation(void)
+ if ((!boot_cpu_has(X86_FEATURE_PTI) &&
+ !boot_cpu_has(X86_FEATURE_SMEP)) || is_skylake_era()) {
+ setup_force_cpu_cap(X86_FEATURE_RSB_CTXSW);
+- pr_info("Filling RSB on context switch\n");
++ pr_info("Spectre v2 mitigation: Filling RSB on context switch\n");
+ }
+
+ /* Initialize Indirect Branch Prediction Barrier if supported */
+ if (boot_cpu_has(X86_FEATURE_IBPB)) {
+ setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
+- pr_info("Enabling Indirect Branch Prediction Barrier\n");
++ pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
+ }
+ }
+
+ #undef pr_fmt
+
+ #ifdef CONFIG_SYSFS
+-ssize_t cpu_show_meltdown(struct device *dev,
+- struct device_attribute *attr, char *buf)
++ssize_t cpu_show_meltdown(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+ if (!boot_cpu_has_bug(X86_BUG_CPU_MELTDOWN))
+ return sprintf(buf, "Not affected\n");
+@@ -318,16 +314,14 @@ ssize_t cpu_show_meltdown(struct device *dev,
+ return sprintf(buf, "Vulnerable\n");
+ }
+
+-ssize_t cpu_show_spectre_v1(struct device *dev,
+- struct device_attribute *attr, char *buf)
++ssize_t cpu_show_spectre_v1(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V1))
+ return sprintf(buf, "Not affected\n");
+ return sprintf(buf, "Mitigation: __user pointer sanitization\n");
+ }
+
+-ssize_t cpu_show_spectre_v2(struct device *dev,
+- struct device_attribute *attr, char *buf)
++ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, char *buf)
+ {
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+ return sprintf(buf, "Not affected\n");
+@@ -337,9 +331,3 @@ ssize_t cpu_show_spectre_v2(struct device *dev,
+ spectre_v2_module_string());
+ }
+ #endif
+-
+-void __ibp_barrier(void)
+-{
+- __wrmsr(MSR_IA32_PRED_CMD, PRED_CMD_IBPB, 0);
+-}
+-EXPORT_SYMBOL_GPL(__ibp_barrier);
+diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
+index 68bc6d9b3132..595be776727d 100644
+--- a/arch/x86/kernel/cpu/centaur.c
++++ b/arch/x86/kernel/cpu/centaur.c
+@@ -136,7 +136,7 @@ static void init_centaur(struct cpuinfo_x86 *c)
+ clear_cpu_cap(c, X86_FEATURE_TSC);
+ break;
+ case 8:
+- switch (c->x86_mask) {
++ switch (c->x86_stepping) {
+ default:
+ name = "2";
+ break;
+@@ -211,7 +211,7 @@ centaur_size_cache(struct cpuinfo_x86 *c, unsigned int size)
+ * - Note, it seems this may only be in engineering samples.
+ */
+ if ((c->x86 == 6) && (c->x86_model == 9) &&
+- (c->x86_mask == 1) && (size == 65))
++ (c->x86_stepping == 1) && (size == 65))
+ size -= 1;
+ return size;
+ }
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index d63f4b5706e4..824aee0117bb 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -731,7 +731,7 @@ void cpu_detect(struct cpuinfo_x86 *c)
+ cpuid(0x00000001, &tfms, &misc, &junk, &cap0);
+ c->x86 = x86_family(tfms);
+ c->x86_model = x86_model(tfms);
+- c->x86_mask = x86_stepping(tfms);
++ c->x86_stepping = x86_stepping(tfms);
+
+ if (cap0 & (1<<19)) {
+ c->x86_clflush_size = ((misc >> 8) & 0xff) * 8;
+@@ -1184,9 +1184,9 @@ static void identify_cpu(struct cpuinfo_x86 *c)
+ int i;
+
+ c->loops_per_jiffy = loops_per_jiffy;
+- c->x86_cache_size = -1;
++ c->x86_cache_size = 0;
+ c->x86_vendor = X86_VENDOR_UNKNOWN;
+- c->x86_model = c->x86_mask = 0; /* So far unknown... */
++ c->x86_model = c->x86_stepping = 0; /* So far unknown... */
+ c->x86_vendor_id[0] = '\0'; /* Unset */
+ c->x86_model_id[0] = '\0'; /* Unset */
+ c->x86_max_cores = 1;
+@@ -1378,8 +1378,8 @@ void print_cpu_info(struct cpuinfo_x86 *c)
+
+ pr_cont(" (family: 0x%x, model: 0x%x", c->x86, c->x86_model);
+
+- if (c->x86_mask || c->cpuid_level >= 0)
+- pr_cont(", stepping: 0x%x)\n", c->x86_mask);
++ if (c->x86_stepping || c->cpuid_level >= 0)
++ pr_cont(", stepping: 0x%x)\n", c->x86_stepping);
+ else
+ pr_cont(")\n");
+ }
+diff --git a/arch/x86/kernel/cpu/cyrix.c b/arch/x86/kernel/cpu/cyrix.c
+index 6b4bb335641f..8949b7ae6d92 100644
+--- a/arch/x86/kernel/cpu/cyrix.c
++++ b/arch/x86/kernel/cpu/cyrix.c
+@@ -215,7 +215,7 @@ static void init_cyrix(struct cpuinfo_x86 *c)
+
+ /* common case step number/rev -- exceptions handled below */
+ c->x86_model = (dir1 >> 4) + 1;
+- c->x86_mask = dir1 & 0xf;
++ c->x86_stepping = dir1 & 0xf;
+
+ /* Now cook; the original recipe is by Channing Corn, from Cyrix.
+ * We do the same thing for each generation: we work out
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index 319bf989fad1..d19e903214b4 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -116,14 +116,13 @@ struct sku_microcode {
+ u32 microcode;
+ };
+ static const struct sku_microcode spectre_bad_microcodes[] = {
+- { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0B, 0x84 },
+- { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0A, 0x84 },
+- { INTEL_FAM6_KABYLAKE_DESKTOP, 0x09, 0x84 },
+- { INTEL_FAM6_KABYLAKE_MOBILE, 0x0A, 0x84 },
+- { INTEL_FAM6_KABYLAKE_MOBILE, 0x09, 0x84 },
++ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0B, 0x80 },
++ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x0A, 0x80 },
++ { INTEL_FAM6_KABYLAKE_DESKTOP, 0x09, 0x80 },
++ { INTEL_FAM6_KABYLAKE_MOBILE, 0x0A, 0x80 },
++ { INTEL_FAM6_KABYLAKE_MOBILE, 0x09, 0x80 },
+ { INTEL_FAM6_SKYLAKE_X, 0x03, 0x0100013e },
+ { INTEL_FAM6_SKYLAKE_X, 0x04, 0x0200003c },
+- { INTEL_FAM6_SKYLAKE_MOBILE, 0x03, 0xc2 },
+ { INTEL_FAM6_SKYLAKE_DESKTOP, 0x03, 0xc2 },
+ { INTEL_FAM6_BROADWELL_CORE, 0x04, 0x28 },
+ { INTEL_FAM6_BROADWELL_GT3E, 0x01, 0x1b },
+@@ -136,8 +135,6 @@ static const struct sku_microcode spectre_bad_microcodes[] = {
+ { INTEL_FAM6_HASWELL_X, 0x02, 0x3b },
+ { INTEL_FAM6_HASWELL_X, 0x04, 0x10 },
+ { INTEL_FAM6_IVYBRIDGE_X, 0x04, 0x42a },
+- /* Updated in the 20180108 release; blacklist until we know otherwise */
+- { INTEL_FAM6_ATOM_GEMINI_LAKE, 0x01, 0x22 },
+ /* Observed in the wild */
+ { INTEL_FAM6_SANDYBRIDGE_X, 0x06, 0x61b },
+ { INTEL_FAM6_SANDYBRIDGE_X, 0x07, 0x712 },
+@@ -149,7 +146,7 @@ static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
+
+ for (i = 0; i < ARRAY_SIZE(spectre_bad_microcodes); i++) {
+ if (c->x86_model == spectre_bad_microcodes[i].model &&
+- c->x86_mask == spectre_bad_microcodes[i].stepping)
++ c->x86_stepping == spectre_bad_microcodes[i].stepping)
+ return (c->microcode <= spectre_bad_microcodes[i].microcode);
+ }
+ return false;
+@@ -196,7 +193,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
+ * need the microcode to have already been loaded... so if it is
+ * not, recommend a BIOS update and disable large pages.
+ */
+- if (c->x86 == 6 && c->x86_model == 0x1c && c->x86_mask <= 2 &&
++ if (c->x86 == 6 && c->x86_model == 0x1c && c->x86_stepping <= 2 &&
+ c->microcode < 0x20e) {
+ pr_warn("Atom PSE erratum detected, BIOS microcode update recommended\n");
+ clear_cpu_cap(c, X86_FEATURE_PSE);
+@@ -212,7 +209,7 @@ static void early_init_intel(struct cpuinfo_x86 *c)
+
+ /* CPUID workaround for 0F33/0F34 CPU */
+ if (c->x86 == 0xF && c->x86_model == 0x3
+- && (c->x86_mask == 0x3 || c->x86_mask == 0x4))
++ && (c->x86_stepping == 0x3 || c->x86_stepping == 0x4))
+ c->x86_phys_bits = 36;
+
+ /*
+@@ -310,7 +307,7 @@ int ppro_with_ram_bug(void)
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+ boot_cpu_data.x86 == 6 &&
+ boot_cpu_data.x86_model == 1 &&
+- boot_cpu_data.x86_mask < 8) {
++ boot_cpu_data.x86_stepping < 8) {
+ pr_info("Pentium Pro with Errata#50 detected. Taking evasive action.\n");
+ return 1;
+ }
+@@ -327,7 +324,7 @@ static void intel_smp_check(struct cpuinfo_x86 *c)
+ * Mask B, Pentium, but not Pentium MMX
+ */
+ if (c->x86 == 5 &&
+- c->x86_mask >= 1 && c->x86_mask <= 4 &&
++ c->x86_stepping >= 1 && c->x86_stepping <= 4 &&
+ c->x86_model <= 3) {
+ /*
+ * Remember we have B step Pentia with bugs
+@@ -370,7 +367,7 @@ static void intel_workarounds(struct cpuinfo_x86 *c)
+ * SEP CPUID bug: Pentium Pro reports SEP but doesn't have it until
+ * model 3 mask 3
+ */
+- if ((c->x86<<8 | c->x86_model<<4 | c->x86_mask) < 0x633)
++ if ((c->x86<<8 | c->x86_model<<4 | c->x86_stepping) < 0x633)
+ clear_cpu_cap(c, X86_FEATURE_SEP);
+
+ /*
+@@ -388,7 +385,7 @@ static void intel_workarounds(struct cpuinfo_x86 *c)
+ * P4 Xeon erratum 037 workaround.
+ * Hardware prefetcher may cause stale data to be loaded into the cache.
+ */
+- if ((c->x86 == 15) && (c->x86_model == 1) && (c->x86_mask == 1)) {
++ if ((c->x86 == 15) && (c->x86_model == 1) && (c->x86_stepping == 1)) {
+ if (msr_set_bit(MSR_IA32_MISC_ENABLE,
+ MSR_IA32_MISC_ENABLE_PREFETCH_DISABLE_BIT) > 0) {
+ pr_info("CPU: C0 stepping P4 Xeon detected.\n");
+@@ -403,7 +400,7 @@ static void intel_workarounds(struct cpuinfo_x86 *c)
+ * Specification Update").
+ */
+ if (boot_cpu_has(X86_FEATURE_APIC) && (c->x86<<8 | c->x86_model<<4) == 0x520 &&
+- (c->x86_mask < 0x6 || c->x86_mask == 0xb))
++ (c->x86_stepping < 0x6 || c->x86_stepping == 0xb))
+ set_cpu_bug(c, X86_BUG_11AP);
+
+
+@@ -650,7 +647,7 @@ static void init_intel(struct cpuinfo_x86 *c)
+ case 6:
+ if (l2 == 128)
+ p = "Celeron (Mendocino)";
+- else if (c->x86_mask == 0 || c->x86_mask == 5)
++ else if (c->x86_stepping == 0 || c->x86_stepping == 5)
+ p = "Celeron-A";
+ break;
+
+diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
+index 99442370de40..18dd8f22e353 100644
+--- a/arch/x86/kernel/cpu/intel_rdt.c
++++ b/arch/x86/kernel/cpu/intel_rdt.c
+@@ -771,7 +771,7 @@ static __init void rdt_quirks(void)
+ cache_alloc_hsw_probe();
+ break;
+ case INTEL_FAM6_SKYLAKE_X:
+- if (boot_cpu_data.x86_mask <= 4)
++ if (boot_cpu_data.x86_stepping <= 4)
+ set_rdt_options("!cmt,!mbmtotal,!mbmlocal,!l3cat");
+ }
+ }
+diff --git a/arch/x86/kernel/cpu/mcheck/mce-internal.h b/arch/x86/kernel/cpu/mcheck/mce-internal.h
+index aa0d5df9dc60..e956eb267061 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce-internal.h
++++ b/arch/x86/kernel/cpu/mcheck/mce-internal.h
+@@ -115,4 +115,19 @@ static inline void mce_unregister_injector_chain(struct notifier_block *nb) { }
+
+ extern struct mca_config mca_cfg;
+
++#ifndef CONFIG_X86_64
++/*
++ * On 32-bit systems it would be difficult to safely unmap a poison page
++ * from the kernel 1:1 map because there are no non-canonical addresses that
++ * we can use to refer to the address without risking a speculative access.
++ * However, this isn't much of an issue because:
++ * 1) Few unmappable pages are in the 1:1 map. Most are in HIGHMEM which
++ * are only mapped into the kernel as needed
++ * 2) Few people would run a 32-bit kernel on a machine that supports
++ * recoverable errors because they have too much memory to boot 32-bit.
++ */
++static inline void mce_unmap_kpfn(unsigned long pfn) {}
++#define mce_unmap_kpfn mce_unmap_kpfn
++#endif
++
+ #endif /* __X86_MCE_INTERNAL_H__ */
+diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
+index 868e412b4f0c..2fe482f6ecd8 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce.c
++++ b/arch/x86/kernel/cpu/mcheck/mce.c
+@@ -106,6 +106,10 @@ static struct irq_work mce_irq_work;
+
+ static void (*quirk_no_way_out)(int bank, struct mce *m, struct pt_regs *regs);
+
++#ifndef mce_unmap_kpfn
++static void mce_unmap_kpfn(unsigned long pfn);
++#endif
++
+ /*
+ * CPU/chipset specific EDAC code can register a notifier call here to print
+ * MCE errors in a human-readable form.
+@@ -582,7 +586,8 @@ static int srao_decode_notifier(struct notifier_block *nb, unsigned long val,
+
+ if (mce_usable_address(mce) && (mce->severity == MCE_AO_SEVERITY)) {
+ pfn = mce->addr >> PAGE_SHIFT;
+- memory_failure(pfn, MCE_VECTOR, 0);
++ if (memory_failure(pfn, MCE_VECTOR, 0))
++ mce_unmap_kpfn(pfn);
+ }
+
+ return NOTIFY_OK;
+@@ -1049,12 +1054,13 @@ static int do_memory_failure(struct mce *m)
+ ret = memory_failure(m->addr >> PAGE_SHIFT, MCE_VECTOR, flags);
+ if (ret)
+ pr_err("Memory error not recovered");
++ else
++ mce_unmap_kpfn(m->addr >> PAGE_SHIFT);
+ return ret;
+ }
+
+-#if defined(arch_unmap_kpfn) && defined(CONFIG_MEMORY_FAILURE)
+-
+-void arch_unmap_kpfn(unsigned long pfn)
++#ifndef mce_unmap_kpfn
++static void mce_unmap_kpfn(unsigned long pfn)
+ {
+ unsigned long decoy_addr;
+
+@@ -1065,7 +1071,7 @@ void arch_unmap_kpfn(unsigned long pfn)
+ * We would like to just call:
+ * set_memory_np((unsigned long)pfn_to_kaddr(pfn), 1);
+ * but doing that would radically increase the odds of a
+- * speculative access to the posion page because we'd have
++ * speculative access to the poison page because we'd have
+ * the virtual address of the kernel 1:1 mapping sitting
+ * around in registers.
+ * Instead we get tricky. We create a non-canonical address
+@@ -1090,7 +1096,6 @@ void arch_unmap_kpfn(unsigned long pfn)
+
+ if (set_memory_np(decoy_addr, 1))
+ pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n", pfn);
+-
+ }
+ #endif
+
+diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
+index f7c55b0e753a..a15db2b4e0d6 100644
+--- a/arch/x86/kernel/cpu/microcode/intel.c
++++ b/arch/x86/kernel/cpu/microcode/intel.c
+@@ -921,7 +921,7 @@ static bool is_blacklisted(unsigned int cpu)
+ */
+ if (c->x86 == 6 &&
+ c->x86_model == INTEL_FAM6_BROADWELL_X &&
+- c->x86_mask == 0x01 &&
++ c->x86_stepping == 0x01 &&
+ llc_size_per_core > 2621440 &&
+ c->microcode < 0x0b000021) {
+ pr_err_once("Erratum BDF90: late loading with revision < 0x0b000021 (0x%x) disabled.\n", c->microcode);
+@@ -944,7 +944,7 @@ static enum ucode_state request_microcode_fw(int cpu, struct device *device,
+ return UCODE_NFOUND;
+
+ sprintf(name, "intel-ucode/%02x-%02x-%02x",
+- c->x86, c->x86_model, c->x86_mask);
++ c->x86, c->x86_model, c->x86_stepping);
+
+ if (request_firmware_direct(&firmware, name, device)) {
+ pr_debug("data file %s load failed\n", name);
+@@ -982,7 +982,7 @@ static struct microcode_ops microcode_intel_ops = {
+
+ static int __init calc_llc_size_per_core(struct cpuinfo_x86 *c)
+ {
+- u64 llc_size = c->x86_cache_size * 1024;
++ u64 llc_size = c->x86_cache_size * 1024ULL;
+
+ do_div(llc_size, c->x86_max_cores);
+
+diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
+index fdc55215d44d..e12ee86906c6 100644
+--- a/arch/x86/kernel/cpu/mtrr/generic.c
++++ b/arch/x86/kernel/cpu/mtrr/generic.c
+@@ -859,7 +859,7 @@ int generic_validate_add_page(unsigned long base, unsigned long size,
+ */
+ if (is_cpu(INTEL) && boot_cpu_data.x86 == 6 &&
+ boot_cpu_data.x86_model == 1 &&
+- boot_cpu_data.x86_mask <= 7) {
++ boot_cpu_data.x86_stepping <= 7) {
+ if (base & ((1 << (22 - PAGE_SHIFT)) - 1)) {
+ pr_warn("mtrr: base(0x%lx000) is not 4 MiB aligned\n", base);
+ return -EINVAL;
+diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
+index 40d5a8a75212..7468de429087 100644
+--- a/arch/x86/kernel/cpu/mtrr/main.c
++++ b/arch/x86/kernel/cpu/mtrr/main.c
+@@ -711,8 +711,8 @@ void __init mtrr_bp_init(void)
+ if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
+ boot_cpu_data.x86 == 0xF &&
+ boot_cpu_data.x86_model == 0x3 &&
+- (boot_cpu_data.x86_mask == 0x3 ||
+- boot_cpu_data.x86_mask == 0x4))
++ (boot_cpu_data.x86_stepping == 0x3 ||
++ boot_cpu_data.x86_stepping == 0x4))
+ phys_addr = 36;
+
+ size_or_mask = SIZE_OR_MASK_BITS(phys_addr);
+diff --git a/arch/x86/kernel/cpu/proc.c b/arch/x86/kernel/cpu/proc.c
+index e7ecedafa1c8..2c8522a39ed5 100644
+--- a/arch/x86/kernel/cpu/proc.c
++++ b/arch/x86/kernel/cpu/proc.c
+@@ -72,8 +72,8 @@ static int show_cpuinfo(struct seq_file *m, void *v)
+ c->x86_model,
+ c->x86_model_id[0] ? c->x86_model_id : "unknown");
+
+- if (c->x86_mask || c->cpuid_level >= 0)
+- seq_printf(m, "stepping\t: %d\n", c->x86_mask);
++ if (c->x86_stepping || c->cpuid_level >= 0)
++ seq_printf(m, "stepping\t: %d\n", c->x86_stepping);
+ else
+ seq_puts(m, "stepping\t: unknown\n");
+ if (c->microcode)
+@@ -91,8 +91,8 @@ static int show_cpuinfo(struct seq_file *m, void *v)
+ }
+
+ /* Cache size */
+- if (c->x86_cache_size >= 0)
+- seq_printf(m, "cache size\t: %d KB\n", c->x86_cache_size);
++ if (c->x86_cache_size)
++ seq_printf(m, "cache size\t: %u KB\n", c->x86_cache_size);
+
+ show_cpuinfo_core(m, c, cpu);
+ show_cpuinfo_misc(m, c);
+diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
+index 1e82f787c160..c87560e1e3ef 100644
+--- a/arch/x86/kernel/early-quirks.c
++++ b/arch/x86/kernel/early-quirks.c
+@@ -527,6 +527,7 @@ static const struct pci_device_id intel_early_ids[] __initconst = {
+ INTEL_SKL_IDS(&gen9_early_ops),
+ INTEL_BXT_IDS(&gen9_early_ops),
+ INTEL_KBL_IDS(&gen9_early_ops),
++ INTEL_CFL_IDS(&gen9_early_ops),
+ INTEL_GLK_IDS(&gen9_early_ops),
+ INTEL_CNL_IDS(&gen9_early_ops),
+ };
+diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
+index c29020907886..b59e4fb40fd9 100644
+--- a/arch/x86/kernel/head_32.S
++++ b/arch/x86/kernel/head_32.S
+@@ -37,7 +37,7 @@
+ #define X86 new_cpu_data+CPUINFO_x86
+ #define X86_VENDOR new_cpu_data+CPUINFO_x86_vendor
+ #define X86_MODEL new_cpu_data+CPUINFO_x86_model
+-#define X86_MASK new_cpu_data+CPUINFO_x86_mask
++#define X86_STEPPING new_cpu_data+CPUINFO_x86_stepping
+ #define X86_HARD_MATH new_cpu_data+CPUINFO_hard_math
+ #define X86_CPUID new_cpu_data+CPUINFO_cpuid_level
+ #define X86_CAPABILITY new_cpu_data+CPUINFO_x86_capability
+@@ -332,7 +332,7 @@ ENTRY(startup_32_smp)
+ shrb $4,%al
+ movb %al,X86_MODEL
+ andb $0x0f,%cl # mask mask revision
+- movb %cl,X86_MASK
++ movb %cl,X86_STEPPING
+ movl %edx,X86_CAPABILITY
+
+ .Lis486:
+diff --git a/arch/x86/kernel/mpparse.c b/arch/x86/kernel/mpparse.c
+index 3a4b12809ab5..bc6bc6689e68 100644
+--- a/arch/x86/kernel/mpparse.c
++++ b/arch/x86/kernel/mpparse.c
+@@ -407,7 +407,7 @@ static inline void __init construct_default_ISA_mptable(int mpc_default_type)
+ processor.apicver = mpc_default_type > 4 ? 0x10 : 0x01;
+ processor.cpuflag = CPU_ENABLED;
+ processor.cpufeature = (boot_cpu_data.x86 << 8) |
+- (boot_cpu_data.x86_model << 4) | boot_cpu_data.x86_mask;
++ (boot_cpu_data.x86_model << 4) | boot_cpu_data.x86_stepping;
+ processor.featureflag = boot_cpu_data.x86_capability[CPUID_1_EDX];
+ processor.reserved[0] = 0;
+ processor.reserved[1] = 0;
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index 041096bdef86..99dc79e76bdc 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -200,9 +200,9 @@ static void native_flush_tlb_global(void)
+ __native_flush_tlb_global();
+ }
+
+-static void native_flush_tlb_single(unsigned long addr)
++static void native_flush_tlb_one_user(unsigned long addr)
+ {
+- __native_flush_tlb_single(addr);
++ __native_flush_tlb_one_user(addr);
+ }
+
+ struct static_key paravirt_steal_enabled;
+@@ -401,7 +401,7 @@ struct pv_mmu_ops pv_mmu_ops __ro_after_init = {
+
+ .flush_tlb_user = native_flush_tlb,
+ .flush_tlb_kernel = native_flush_tlb_global,
+- .flush_tlb_single = native_flush_tlb_single,
++ .flush_tlb_one_user = native_flush_tlb_one_user,
+ .flush_tlb_others = native_flush_tlb_others,
+
+ .pgd_alloc = __paravirt_pgd_alloc,
+diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
+index 307d3bac5f04..11eda21eb697 100644
+--- a/arch/x86/kernel/relocate_kernel_64.S
++++ b/arch/x86/kernel/relocate_kernel_64.S
+@@ -68,6 +68,9 @@ relocate_kernel:
+ movq %cr4, %rax
+ movq %rax, CR4(%r11)
+
++ /* Save CR4. Required to enable the right paging mode later. */
++ movq %rax, %r13
++
+ /* zero out flags, and disable interrupts */
+ pushq $0
+ popfq
+@@ -126,8 +129,13 @@ identity_mapped:
+ /*
+ * Set cr4 to a known state:
+ * - physical address extension enabled
++ * - 5-level paging, if it was enabled before
+ */
+ movl $X86_CR4_PAE, %eax
++ testq $X86_CR4_LA57, %r13
++ jz 1f
++ orl $X86_CR4_LA57, %eax
++1:
+ movq %rax, %cr4
+
+ jmp 1f
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index ed556d50d7ed..844279c3ff4a 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -1431,7 +1431,6 @@ static void remove_siblinginfo(int cpu)
+ cpumask_clear(cpu_llc_shared_mask(cpu));
+ cpumask_clear(topology_sibling_cpumask(cpu));
+ cpumask_clear(topology_core_cpumask(cpu));
+- c->phys_proc_id = 0;
+ c->cpu_core_id = 0;
+ cpumask_clear_cpu(cpu, cpu_sibling_setup_mask);
+ recompute_smt_state();
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 446c9ef8cfc3..3d9b2308e7fa 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -181,7 +181,7 @@ int fixup_bug(struct pt_regs *regs, int trapnr)
+ break;
+
+ case BUG_TRAP_TYPE_WARN:
+- regs->ip += LEN_UD0;
++ regs->ip += LEN_UD2;
+ return 1;
+ }
+
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index 2b8eb4da4d08..cc83bdcb65d1 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -5058,7 +5058,7 @@ void kvm_mmu_uninit_vm(struct kvm *kvm)
+ typedef bool (*slot_level_handler) (struct kvm *kvm, struct kvm_rmap_head *rmap_head);
+
+ /* The caller should hold mmu-lock before calling this function. */
+-static bool
++static __always_inline bool
+ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ slot_level_handler fn, int start_level, int end_level,
+ gfn_t start_gfn, gfn_t end_gfn, bool lock_flush_tlb)
+@@ -5088,7 +5088,7 @@ slot_handle_level_range(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ return flush;
+ }
+
+-static bool
++static __always_inline bool
+ slot_handle_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ slot_level_handler fn, int start_level, int end_level,
+ bool lock_flush_tlb)
+@@ -5099,7 +5099,7 @@ slot_handle_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ lock_flush_tlb);
+ }
+
+-static bool
++static __always_inline bool
+ slot_handle_all_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ slot_level_handler fn, bool lock_flush_tlb)
+ {
+@@ -5107,7 +5107,7 @@ slot_handle_all_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb);
+ }
+
+-static bool
++static __always_inline bool
+ slot_handle_large_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ slot_level_handler fn, bool lock_flush_tlb)
+ {
+@@ -5115,7 +5115,7 @@ slot_handle_large_level(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ PT_MAX_HUGEPAGE_LEVEL, lock_flush_tlb);
+ }
+
+-static bool
++static __always_inline bool
+ slot_handle_leaf(struct kvm *kvm, struct kvm_memory_slot *memslot,
+ slot_level_handler fn, bool lock_flush_tlb)
+ {
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 6f623848260f..561d8937fac5 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -10131,7 +10131,8 @@ static void nested_get_vmcs12_pages(struct kvm_vcpu *vcpu,
+ if (cpu_has_vmx_msr_bitmap() &&
+ nested_cpu_has(vmcs12, CPU_BASED_USE_MSR_BITMAPS) &&
+ nested_vmx_merge_msr_bitmap(vcpu, vmcs12))
+- ;
++ vmcs_set_bits(CPU_BASED_VM_EXEC_CONTROL,
++ CPU_BASED_USE_MSR_BITMAPS);
+ else
+ vmcs_clear_bits(CPU_BASED_VM_EXEC_CONTROL,
+ CPU_BASED_USE_MSR_BITMAPS);
+@@ -10220,8 +10221,8 @@ static inline bool nested_vmx_merge_msr_bitmap(struct kvm_vcpu *vcpu,
+ * updated to reflect this when L1 (or its L2s) actually write to
+ * the MSR.
+ */
+- bool pred_cmd = msr_write_intercepted_l01(vcpu, MSR_IA32_PRED_CMD);
+- bool spec_ctrl = msr_write_intercepted_l01(vcpu, MSR_IA32_SPEC_CTRL);
++ bool pred_cmd = !msr_write_intercepted_l01(vcpu, MSR_IA32_PRED_CMD);
++ bool spec_ctrl = !msr_write_intercepted_l01(vcpu, MSR_IA32_SPEC_CTRL);
+
+ if (!nested_cpu_has_virt_x2apic_mode(vmcs12) &&
+ !pred_cmd && !spec_ctrl)
+diff --git a/arch/x86/lib/cpu.c b/arch/x86/lib/cpu.c
+index d6f848d1211d..2dd1fe13a37b 100644
+--- a/arch/x86/lib/cpu.c
++++ b/arch/x86/lib/cpu.c
+@@ -18,7 +18,7 @@ unsigned int x86_model(unsigned int sig)
+ {
+ unsigned int fam, model;
+
+- fam = x86_family(sig);
++ fam = x86_family(sig);
+
+ model = (sig >> 4) & 0xf;
+
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index 4a837289f2ad..60ae1fe3609f 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -256,7 +256,7 @@ static void __set_pte_vaddr(pud_t *pud, unsigned long vaddr, pte_t new_pte)
+ * It's enough to flush this one mapping.
+ * (PGE mappings get flushed as well)
+ */
+- __flush_tlb_one(vaddr);
++ __flush_tlb_one_kernel(vaddr);
+ }
+
+ void set_pte_vaddr_p4d(p4d_t *p4d_page, unsigned long vaddr, pte_t new_pte)
+diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
+index c45b6ec5357b..e2db83bebc3b 100644
+--- a/arch/x86/mm/ioremap.c
++++ b/arch/x86/mm/ioremap.c
+@@ -820,5 +820,5 @@ void __init __early_set_fixmap(enum fixed_addresses idx,
+ set_pte(pte, pfn_pte(phys >> PAGE_SHIFT, flags));
+ else
+ pte_clear(&init_mm, addr, pte);
+- __flush_tlb_one(addr);
++ __flush_tlb_one_kernel(addr);
+ }
+diff --git a/arch/x86/mm/kmmio.c b/arch/x86/mm/kmmio.c
+index 58477ec3d66d..7c8686709636 100644
+--- a/arch/x86/mm/kmmio.c
++++ b/arch/x86/mm/kmmio.c
+@@ -168,7 +168,7 @@ static int clear_page_presence(struct kmmio_fault_page *f, bool clear)
+ return -1;
+ }
+
+- __flush_tlb_one(f->addr);
++ __flush_tlb_one_kernel(f->addr);
+ return 0;
+ }
+
+diff --git a/arch/x86/mm/pgtable_32.c b/arch/x86/mm/pgtable_32.c
+index c3c5274410a9..9bb7f0ab9fe6 100644
+--- a/arch/x86/mm/pgtable_32.c
++++ b/arch/x86/mm/pgtable_32.c
+@@ -63,7 +63,7 @@ void set_pte_vaddr(unsigned long vaddr, pte_t pteval)
+ * It's enough to flush this one mapping.
+ * (PGE mappings get flushed as well)
+ */
+- __flush_tlb_one(vaddr);
++ __flush_tlb_one_kernel(vaddr);
+ }
+
+ unsigned long __FIXADDR_TOP = 0xfffff000;
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 012d02624848..0c936435ea93 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -492,7 +492,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
+ * flush that changes context.tlb_gen from 2 to 3. If they get
+ * processed on this CPU in reverse order, we'll see
+ * local_tlb_gen == 1, mm_tlb_gen == 3, and end != TLB_FLUSH_ALL.
+- * If we were to use __flush_tlb_single() and set local_tlb_gen to
++ * If we were to use __flush_tlb_one_user() and set local_tlb_gen to
+ * 3, we'd be break the invariant: we'd update local_tlb_gen above
+ * 1 without the full flush that's needed for tlb_gen 2.
+ *
+@@ -513,7 +513,7 @@ static void flush_tlb_func_common(const struct flush_tlb_info *f,
+
+ addr = f->start;
+ while (addr < f->end) {
+- __flush_tlb_single(addr);
++ __flush_tlb_one_user(addr);
+ addr += PAGE_SIZE;
+ }
+ if (local)
+@@ -660,7 +660,7 @@ static void do_kernel_range_flush(void *info)
+
+ /* flush range by one by one 'invlpg' */
+ for (addr = f->start; addr < f->end; addr += PAGE_SIZE)
+- __flush_tlb_one(addr);
++ __flush_tlb_one_kernel(addr);
+ }
+
+ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c
+index 8538a6723171..7d5d53f36a7a 100644
+--- a/arch/x86/platform/uv/tlb_uv.c
++++ b/arch/x86/platform/uv/tlb_uv.c
+@@ -299,7 +299,7 @@ static void bau_process_message(struct msg_desc *mdp, struct bau_control *bcp,
+ local_flush_tlb();
+ stat->d_alltlb++;
+ } else {
+- __flush_tlb_single(msg->address);
++ __flush_tlb_one_user(msg->address);
+ stat->d_onetlb++;
+ }
+ stat->d_requestee++;
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index d85076223a69..aae88fec9941 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -1300,12 +1300,12 @@ static void xen_flush_tlb(void)
+ preempt_enable();
+ }
+
+-static void xen_flush_tlb_single(unsigned long addr)
++static void xen_flush_tlb_one_user(unsigned long addr)
+ {
+ struct mmuext_op *op;
+ struct multicall_space mcs;
+
+- trace_xen_mmu_flush_tlb_single(addr);
++ trace_xen_mmu_flush_tlb_one_user(addr);
+
+ preempt_disable();
+
+@@ -2370,7 +2370,7 @@ static const struct pv_mmu_ops xen_mmu_ops __initconst = {
+
+ .flush_tlb_user = xen_flush_tlb,
+ .flush_tlb_kernel = xen_flush_tlb,
+- .flush_tlb_single = xen_flush_tlb_single,
++ .flush_tlb_one_user = xen_flush_tlb_one_user,
+ .flush_tlb_others = xen_flush_tlb_others,
+
+ .pgd_alloc = xen_pgd_alloc,
+diff --git a/arch/x86/xen/p2m.c b/arch/x86/xen/p2m.c
+index 13b4f19b9131..159a897151d6 100644
+--- a/arch/x86/xen/p2m.c
++++ b/arch/x86/xen/p2m.c
+@@ -694,6 +694,9 @@ int set_foreign_p2m_mapping(struct gnttab_map_grant_ref *map_ops,
+ int i, ret = 0;
+ pte_t *pte;
+
++ if (xen_feature(XENFEAT_auto_translated_physmap))
++ return 0;
++
+ if (kmap_ops) {
+ ret = HYPERVISOR_grant_table_op(GNTTABOP_map_grant_ref,
+ kmap_ops, count);
+@@ -736,6 +739,9 @@ int clear_foreign_p2m_mapping(struct gnttab_unmap_grant_ref *unmap_ops,
+ {
+ int i, ret = 0;
+
++ if (xen_feature(XENFEAT_auto_translated_physmap))
++ return 0;
++
+ for (i = 0; i < count; i++) {
+ unsigned long mfn = __pfn_to_mfn(page_to_pfn(pages[i]));
+ unsigned long pfn = page_to_pfn(pages[i]);
+diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
+index 497cc55a0c16..96f26e026783 100644
+--- a/arch/x86/xen/xen-head.S
++++ b/arch/x86/xen/xen-head.S
+@@ -9,7 +9,9 @@
+
+ #include <asm/boot.h>
+ #include <asm/asm.h>
++#include <asm/msr.h>
+ #include <asm/page_types.h>
++#include <asm/percpu.h>
+ #include <asm/unwind_hints.h>
+
+ #include <xen/interface/elfnote.h>
+@@ -35,6 +37,20 @@ ENTRY(startup_xen)
+ mov %_ASM_SI, xen_start_info
+ mov $init_thread_union+THREAD_SIZE, %_ASM_SP
+
++#ifdef CONFIG_X86_64
++ /* Set up %gs.
++ *
++ * The base of %gs always points to the bottom of the irqstack
++ * union. If the stack protector canary is enabled, it is
++ * located at %gs:40. Note that, on SMP, the boot cpu uses
++ * init data section till per cpu areas are set up.
++ */
++ movl $MSR_GS_BASE,%ecx
++ movq $INIT_PER_CPU_VAR(irq_stack_union),%rax
++ cdq
++ wrmsr
++#endif
++
+ jmp xen_start_kernel
+ END(startup_xen)
+ __FINIT
+diff --git a/block/blk-wbt.c b/block/blk-wbt.c
+index ae8de9780085..f92fc84b5e2c 100644
+--- a/block/blk-wbt.c
++++ b/block/blk-wbt.c
+@@ -697,7 +697,15 @@ u64 wbt_default_latency_nsec(struct request_queue *q)
+
+ static int wbt_data_dir(const struct request *rq)
+ {
+- return rq_data_dir(rq);
++ const int op = req_op(rq);
++
++ if (op == REQ_OP_READ)
++ return READ;
++ else if (op == REQ_OP_WRITE || op == REQ_OP_FLUSH)
++ return WRITE;
++
++ /* don't account */
++ return -1;
+ }
+
+ int wbt_init(struct request_queue *q)
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 110230d86527..6835736daf2d 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -313,6 +313,9 @@ static void __device_link_del(struct device_link *link)
+ dev_info(link->consumer, "Dropping the link to %s\n",
+ dev_name(link->supplier));
+
++ if (link->flags & DL_FLAG_PM_RUNTIME)
++ pm_runtime_drop_link(link->consumer);
++
+ list_del(&link->s_node);
+ list_del(&link->c_node);
+ device_link_free(link);
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index cc93522a6d41..1bbf14338bdb 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -124,11 +124,13 @@ static int atomic_dec_return_safe(atomic_t *v)
+ #define RBD_FEATURE_STRIPINGV2 (1ULL<<1)
+ #define RBD_FEATURE_EXCLUSIVE_LOCK (1ULL<<2)
+ #define RBD_FEATURE_DATA_POOL (1ULL<<7)
++#define RBD_FEATURE_OPERATIONS (1ULL<<8)
+
+ #define RBD_FEATURES_ALL (RBD_FEATURE_LAYERING | \
+ RBD_FEATURE_STRIPINGV2 | \
+ RBD_FEATURE_EXCLUSIVE_LOCK | \
+- RBD_FEATURE_DATA_POOL)
++ RBD_FEATURE_DATA_POOL | \
++ RBD_FEATURE_OPERATIONS)
+
+ /* Features supported by this (client software) implementation. */
+
+diff --git a/drivers/char/hw_random/via-rng.c b/drivers/char/hw_random/via-rng.c
+index d1f5bb534e0e..6e9df558325b 100644
+--- a/drivers/char/hw_random/via-rng.c
++++ b/drivers/char/hw_random/via-rng.c
+@@ -162,7 +162,7 @@ static int via_rng_init(struct hwrng *rng)
+ /* Enable secondary noise source on CPUs where it is present. */
+
+ /* Nehemiah stepping 8 and higher */
+- if ((c->x86_model == 9) && (c->x86_mask > 7))
++ if ((c->x86_model == 9) && (c->x86_stepping > 7))
+ lo |= VIA_NOISESRC2;
+
+ /* Esther */
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index 3a2ca0f79daf..d0c34df0529c 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -629,7 +629,7 @@ static int acpi_cpufreq_blacklist(struct cpuinfo_x86 *c)
+ if (c->x86_vendor == X86_VENDOR_INTEL) {
+ if ((c->x86 == 15) &&
+ (c->x86_model == 6) &&
+- (c->x86_mask == 8)) {
++ (c->x86_stepping == 8)) {
+ pr_info("Intel(R) Xeon(R) 7100 Errata AL30, processors may lock up on frequency changes: disabling acpi-cpufreq\n");
+ return -ENODEV;
+ }
+diff --git a/drivers/cpufreq/longhaul.c b/drivers/cpufreq/longhaul.c
+index c46a12df40dd..d5e27bc7585a 100644
+--- a/drivers/cpufreq/longhaul.c
++++ b/drivers/cpufreq/longhaul.c
+@@ -775,7 +775,7 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
+ break;
+
+ case 7:
+- switch (c->x86_mask) {
++ switch (c->x86_stepping) {
+ case 0:
+ longhaul_version = TYPE_LONGHAUL_V1;
+ cpu_model = CPU_SAMUEL2;
+@@ -787,7 +787,7 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
+ break;
+ case 1 ... 15:
+ longhaul_version = TYPE_LONGHAUL_V2;
+- if (c->x86_mask < 8) {
++ if (c->x86_stepping < 8) {
+ cpu_model = CPU_SAMUEL2;
+ cpuname = "C3 'Samuel 2' [C5B]";
+ } else {
+@@ -814,7 +814,7 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
+ numscales = 32;
+ memcpy(mults, nehemiah_mults, sizeof(nehemiah_mults));
+ memcpy(eblcr, nehemiah_eblcr, sizeof(nehemiah_eblcr));
+- switch (c->x86_mask) {
++ switch (c->x86_stepping) {
+ case 0 ... 1:
+ cpu_model = CPU_NEHEMIAH;
+ cpuname = "C3 'Nehemiah A' [C5XLOE]";
+diff --git a/drivers/cpufreq/p4-clockmod.c b/drivers/cpufreq/p4-clockmod.c
+index fd77812313f3..a25741b1281b 100644
+--- a/drivers/cpufreq/p4-clockmod.c
++++ b/drivers/cpufreq/p4-clockmod.c
+@@ -168,7 +168,7 @@ static int cpufreq_p4_cpu_init(struct cpufreq_policy *policy)
+ #endif
+
+ /* Errata workaround */
+- cpuid = (c->x86 << 8) | (c->x86_model << 4) | c->x86_mask;
++ cpuid = (c->x86 << 8) | (c->x86_model << 4) | c->x86_stepping;
+ switch (cpuid) {
+ case 0x0f07:
+ case 0x0f0a:
+diff --git a/drivers/cpufreq/powernow-k7.c b/drivers/cpufreq/powernow-k7.c
+index 80ac313e6c59..302e9ce793a0 100644
+--- a/drivers/cpufreq/powernow-k7.c
++++ b/drivers/cpufreq/powernow-k7.c
+@@ -131,7 +131,7 @@ static int check_powernow(void)
+ return 0;
+ }
+
+- if ((c->x86_model == 6) && (c->x86_mask == 0)) {
++ if ((c->x86_model == 6) && (c->x86_stepping == 0)) {
+ pr_info("K7 660[A0] core detected, enabling errata workarounds\n");
+ have_a0 = 1;
+ }
+diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
+index b6d7c4c98d0a..da7fdb4b661a 100644
+--- a/drivers/cpufreq/powernv-cpufreq.c
++++ b/drivers/cpufreq/powernv-cpufreq.c
+@@ -288,9 +288,9 @@ static int init_powernv_pstates(void)
+
+ if (id == pstate_max)
+ powernv_pstate_info.max = i;
+- else if (id == pstate_nominal)
++ if (id == pstate_nominal)
+ powernv_pstate_info.nominal = i;
+- else if (id == pstate_min)
++ if (id == pstate_min)
+ powernv_pstate_info.min = i;
+
+ if (powernv_pstate_info.wof_enabled && id == pstate_turbo) {
+diff --git a/drivers/cpufreq/speedstep-centrino.c b/drivers/cpufreq/speedstep-centrino.c
+index 41bc5397f4bb..4fa5adf16c70 100644
+--- a/drivers/cpufreq/speedstep-centrino.c
++++ b/drivers/cpufreq/speedstep-centrino.c
+@@ -37,7 +37,7 @@ struct cpu_id
+ {
+ __u8 x86; /* CPU family */
+ __u8 x86_model; /* model */
+- __u8 x86_mask; /* stepping */
++ __u8 x86_stepping; /* stepping */
+ };
+
+ enum {
+@@ -277,7 +277,7 @@ static int centrino_verify_cpu_id(const struct cpuinfo_x86 *c,
+ {
+ if ((c->x86 == x->x86) &&
+ (c->x86_model == x->x86_model) &&
+- (c->x86_mask == x->x86_mask))
++ (c->x86_stepping == x->x86_stepping))
+ return 1;
+ return 0;
+ }
+diff --git a/drivers/cpufreq/speedstep-lib.c b/drivers/cpufreq/speedstep-lib.c
+index 8085ec9000d1..e3a9962ee410 100644
+--- a/drivers/cpufreq/speedstep-lib.c
++++ b/drivers/cpufreq/speedstep-lib.c
+@@ -272,9 +272,9 @@ unsigned int speedstep_detect_processor(void)
+ ebx = cpuid_ebx(0x00000001);
+ ebx &= 0x000000FF;
+
+- pr_debug("ebx value is %x, x86_mask is %x\n", ebx, c->x86_mask);
++ pr_debug("ebx value is %x, x86_stepping is %x\n", ebx, c->x86_stepping);
+
+- switch (c->x86_mask) {
++ switch (c->x86_stepping) {
+ case 4:
+ /*
+ * B-stepping [M-P4-M]
+@@ -361,7 +361,7 @@ unsigned int speedstep_detect_processor(void)
+ msr_lo, msr_hi);
+ if ((msr_hi & (1<<18)) &&
+ (relaxed_check ? 1 : (msr_hi & (3<<24)))) {
+- if (c->x86_mask == 0x01) {
++ if (c->x86_stepping == 0x01) {
+ pr_debug("early PIII version\n");
+ return SPEEDSTEP_CPU_PIII_C_EARLY;
+ } else
+diff --git a/drivers/crypto/padlock-aes.c b/drivers/crypto/padlock-aes.c
+index 4b6642a25df5..1c6cbda56afe 100644
+--- a/drivers/crypto/padlock-aes.c
++++ b/drivers/crypto/padlock-aes.c
+@@ -512,7 +512,7 @@ static int __init padlock_init(void)
+
+ printk(KERN_NOTICE PFX "Using VIA PadLock ACE for AES algorithm.\n");
+
+- if (c->x86 == 6 && c->x86_model == 15 && c->x86_mask == 2) {
++ if (c->x86 == 6 && c->x86_model == 15 && c->x86_stepping == 2) {
+ ecb_fetch_blocks = MAX_ECB_FETCH_BLOCKS;
+ cbc_fetch_blocks = MAX_CBC_FETCH_BLOCKS;
+ printk(KERN_NOTICE PFX "VIA Nano stepping 2 detected: enabling workaround.\n");
+diff --git a/drivers/crypto/sunxi-ss/sun4i-ss-prng.c b/drivers/crypto/sunxi-ss/sun4i-ss-prng.c
+index 0d01d1624252..63d636424161 100644
+--- a/drivers/crypto/sunxi-ss/sun4i-ss-prng.c
++++ b/drivers/crypto/sunxi-ss/sun4i-ss-prng.c
+@@ -28,7 +28,7 @@ int sun4i_ss_prng_generate(struct crypto_rng *tfm, const u8 *src,
+ algt = container_of(alg, struct sun4i_ss_alg_template, alg.rng);
+ ss = algt->ss;
+
+- spin_lock(&ss->slock);
++ spin_lock_bh(&ss->slock);
+
+ writel(mode, ss->base + SS_CTL);
+
+@@ -51,6 +51,6 @@ int sun4i_ss_prng_generate(struct crypto_rng *tfm, const u8 *src,
+ }
+
+ writel(0, ss->base + SS_CTL);
+- spin_unlock(&ss->slock);
+- return dlen;
++ spin_unlock_bh(&ss->slock);
++ return 0;
+ }
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 78fb496ecb4e..99c4021fc33b 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -737,7 +737,7 @@ struct devfreq *devm_devfreq_add_device(struct device *dev,
+ devfreq = devfreq_add_device(dev, profile, governor_name, data);
+ if (IS_ERR(devfreq)) {
+ devres_free(ptr);
+- return ERR_PTR(-ENOMEM);
++ return devfreq;
+ }
+
+ *ptr = devfreq;
+diff --git a/drivers/dma-buf/reservation.c b/drivers/dma-buf/reservation.c
+index b44d9d7db347..012fa3d1f407 100644
+--- a/drivers/dma-buf/reservation.c
++++ b/drivers/dma-buf/reservation.c
+@@ -455,13 +455,15 @@ long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
+ unsigned long timeout)
+ {
+ struct dma_fence *fence;
+- unsigned seq, shared_count, i = 0;
++ unsigned seq, shared_count;
+ long ret = timeout ? timeout : 1;
++ int i;
+
+ retry:
+ shared_count = 0;
+ seq = read_seqcount_begin(&obj->seq);
+ rcu_read_lock();
++ i = -1;
+
+ fence = rcu_dereference(obj->fence_excl);
+ if (fence && !test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &fence->flags)) {
+@@ -477,14 +479,14 @@ long reservation_object_wait_timeout_rcu(struct reservation_object *obj,
+ fence = NULL;
+ }
+
+- if (!fence && wait_all) {
++ if (wait_all) {
+ struct reservation_object_list *fobj =
+ rcu_dereference(obj->fence);
+
+ if (fobj)
+ shared_count = fobj->shared_count;
+
+- for (i = 0; i < shared_count; ++i) {
++ for (i = 0; !fence && i < shared_count; ++i) {
+ struct dma_fence *lfence = rcu_dereference(fobj->shared[i]);
+
+ if (test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index 8b16ec595fa7..329cb96f886f 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -3147,7 +3147,7 @@ static struct amd64_family_type *per_family_init(struct amd64_pvt *pvt)
+ struct amd64_family_type *fam_type = NULL;
+
+ pvt->ext_model = boot_cpu_data.x86_model >> 4;
+- pvt->stepping = boot_cpu_data.x86_mask;
++ pvt->stepping = boot_cpu_data.x86_stepping;
+ pvt->model = boot_cpu_data.x86_model;
+ pvt->fam = boot_cpu_data.x86;
+
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/rv_smumgr.h b/drivers/gpu/drm/amd/powerplay/smumgr/rv_smumgr.h
+index 58888400f1b8..caebdbebdcd8 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/rv_smumgr.h
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/rv_smumgr.h
+@@ -40,7 +40,7 @@ struct smu_table_entry {
+ uint32_t table_addr_high;
+ uint32_t table_addr_low;
+ uint8_t *table;
+- uint32_t handle;
++ unsigned long handle;
+ };
+
+ struct smu_table_array {
+diff --git a/drivers/gpu/drm/ast/ast_mode.c b/drivers/gpu/drm/ast/ast_mode.c
+index 9555a3542022..831b73392d82 100644
+--- a/drivers/gpu/drm/ast/ast_mode.c
++++ b/drivers/gpu/drm/ast/ast_mode.c
+@@ -644,6 +644,7 @@ static void ast_crtc_commit(struct drm_crtc *crtc)
+ {
+ struct ast_private *ast = crtc->dev->dev_private;
+ ast_set_index_reg_mask(ast, AST_IO_SEQ_PORT, 0x1, 0xdf, 0);
++ ast_crtc_load_lut(crtc);
+ }
+
+
+diff --git a/drivers/gpu/drm/drm_auth.c b/drivers/gpu/drm/drm_auth.c
+index aad468d170a7..d9c0f7573905 100644
+--- a/drivers/gpu/drm/drm_auth.c
++++ b/drivers/gpu/drm/drm_auth.c
+@@ -230,6 +230,12 @@ int drm_dropmaster_ioctl(struct drm_device *dev, void *data,
+ if (!dev->master)
+ goto out_unlock;
+
++ if (file_priv->master->lessor != NULL) {
++ DRM_DEBUG_LEASE("Attempt to drop lessee %d as master\n", file_priv->master->lessee_id);
++ ret = -EINVAL;
++ goto out_unlock;
++ }
++
+ ret = 0;
+ drm_drop_master(dev, file_priv);
+ out_unlock:
+diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
+index 4756b3c9bf2c..9a9214ae0fb5 100644
+--- a/drivers/gpu/drm/qxl/qxl_display.c
++++ b/drivers/gpu/drm/qxl/qxl_display.c
+@@ -289,6 +289,7 @@ static void qxl_crtc_destroy(struct drm_crtc *crtc)
+ {
+ struct qxl_crtc *qxl_crtc = to_qxl_crtc(crtc);
+
++ qxl_bo_unref(&qxl_crtc->cursor_bo);
+ drm_crtc_cleanup(crtc);
+ kfree(qxl_crtc);
+ }
+@@ -495,6 +496,53 @@ static int qxl_primary_atomic_check(struct drm_plane *plane,
+ return 0;
+ }
+
++static int qxl_primary_apply_cursor(struct drm_plane *plane)
++{
++ struct drm_device *dev = plane->dev;
++ struct qxl_device *qdev = dev->dev_private;
++ struct drm_framebuffer *fb = plane->state->fb;
++ struct qxl_crtc *qcrtc = to_qxl_crtc(plane->state->crtc);
++ struct qxl_cursor_cmd *cmd;
++ struct qxl_release *release;
++ int ret = 0;
++
++ if (!qcrtc->cursor_bo)
++ return 0;
++
++ ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd),
++ QXL_RELEASE_CURSOR_CMD,
++ &release, NULL);
++ if (ret)
++ return ret;
++
++ ret = qxl_release_list_add(release, qcrtc->cursor_bo);
++ if (ret)
++ goto out_free_release;
++
++ ret = qxl_release_reserve_list(release, false);
++ if (ret)
++ goto out_free_release;
++
++ cmd = (struct qxl_cursor_cmd *)qxl_release_map(qdev, release);
++ cmd->type = QXL_CURSOR_SET;
++ cmd->u.set.position.x = plane->state->crtc_x + fb->hot_x;
++ cmd->u.set.position.y = plane->state->crtc_y + fb->hot_y;
++
++ cmd->u.set.shape = qxl_bo_physical_address(qdev, qcrtc->cursor_bo, 0);
++
++ cmd->u.set.visible = 1;
++ qxl_release_unmap(qdev, release, &cmd->release_info);
++
++ qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
++ qxl_release_fence_buffer_objects(release);
++
++ return ret;
++
++out_free_release:
++ qxl_release_free(qdev, release);
++ return ret;
++}
++
+ static void qxl_primary_atomic_update(struct drm_plane *plane,
+ struct drm_plane_state *old_state)
+ {
+@@ -510,6 +558,7 @@ static void qxl_primary_atomic_update(struct drm_plane *plane,
+ .x2 = qfb->base.width,
+ .y2 = qfb->base.height
+ };
++ int ret;
+ bool same_shadow = false;
+
+ if (old_state->fb) {
+@@ -531,6 +580,11 @@ static void qxl_primary_atomic_update(struct drm_plane *plane,
+ if (!same_shadow)
+ qxl_io_destroy_primary(qdev);
+ bo_old->is_primary = false;
++
++ ret = qxl_primary_apply_cursor(plane);
++ if (ret)
++ DRM_ERROR(
++ "could not set cursor after creating primary");
+ }
+
+ if (!bo->is_primary) {
+@@ -571,11 +625,12 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
+ struct drm_device *dev = plane->dev;
+ struct qxl_device *qdev = dev->dev_private;
+ struct drm_framebuffer *fb = plane->state->fb;
++ struct qxl_crtc *qcrtc = to_qxl_crtc(plane->state->crtc);
+ struct qxl_release *release;
+ struct qxl_cursor_cmd *cmd;
+ struct qxl_cursor *cursor;
+ struct drm_gem_object *obj;
+- struct qxl_bo *cursor_bo, *user_bo = NULL;
++ struct qxl_bo *cursor_bo = NULL, *user_bo = NULL;
+ int ret;
+ void *user_ptr;
+ int size = 64*64*4;
+@@ -628,6 +683,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
+ cmd->u.set.shape = qxl_bo_physical_address(qdev,
+ cursor_bo, 0);
+ cmd->type = QXL_CURSOR_SET;
++
++ qxl_bo_unref(&qcrtc->cursor_bo);
++ qcrtc->cursor_bo = cursor_bo;
++ cursor_bo = NULL;
+ } else {
+
+ ret = qxl_release_reserve_list(release, true);
+@@ -645,6 +704,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
+ qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
+ qxl_release_fence_buffer_objects(release);
+
++ qxl_bo_unref(&cursor_bo);
++
+ return;
+
+ out_backoff:
+diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
+index 08752c0ffb35..00a1a66b052a 100644
+--- a/drivers/gpu/drm/qxl/qxl_drv.h
++++ b/drivers/gpu/drm/qxl/qxl_drv.h
+@@ -111,6 +111,8 @@ struct qxl_bo_list {
+ struct qxl_crtc {
+ struct drm_crtc base;
+ int index;
++
++ struct qxl_bo *cursor_bo;
+ };
+
+ struct qxl_output {
+diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c
+index d34d1cf33895..95f4db70dd22 100644
+--- a/drivers/gpu/drm/radeon/radeon_uvd.c
++++ b/drivers/gpu/drm/radeon/radeon_uvd.c
+@@ -995,7 +995,7 @@ int radeon_uvd_calc_upll_dividers(struct radeon_device *rdev,
+ /* calc dclk divider with current vco freq */
+ dclk_div = radeon_uvd_calc_upll_post_div(vco_freq, dclk,
+ pd_min, pd_even);
+- if (vclk_div > pd_max)
++ if (dclk_div > pd_max)
+ break; /* vco is too big, it has to stop */
+
+ /* calc score with current vco freq */
+diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
+index ee3e74266a13..97a0a639dad9 100644
+--- a/drivers/gpu/drm/radeon/si_dpm.c
++++ b/drivers/gpu/drm/radeon/si_dpm.c
+@@ -2984,6 +2984,11 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev,
+ (rdev->pdev->device == 0x6667)) {
+ max_sclk = 75000;
+ }
++ if ((rdev->pdev->revision == 0xC3) ||
++ (rdev->pdev->device == 0x6665)) {
++ max_sclk = 60000;
++ max_mclk = 80000;
++ }
+ } else if (rdev->family == CHIP_OLAND) {
+ if ((rdev->pdev->revision == 0xC7) ||
+ (rdev->pdev->revision == 0x80) ||
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index c088703777e2..68eed684dff5 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -175,7 +175,8 @@ void ttm_bo_add_to_lru(struct ttm_buffer_object *bo)
+ list_add_tail(&bo->lru, &man->lru[bo->priority]);
+ kref_get(&bo->list_kref);
+
+- if (bo->ttm && !(bo->ttm->page_flags & TTM_PAGE_FLAG_SG)) {
++ if (bo->ttm && !(bo->ttm->page_flags &
++ (TTM_PAGE_FLAG_SG | TTM_PAGE_FLAG_SWAPPED))) {
+ list_add_tail(&bo->swap,
+ &bo->glob->swap_lru[bo->priority]);
+ kref_get(&bo->list_kref);
+diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+index c8ebb757e36b..b17d0d38f290 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
++++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
+@@ -299,7 +299,7 @@ static void ttm_bo_vm_close(struct vm_area_struct *vma)
+
+ static int ttm_bo_vm_access_kmap(struct ttm_buffer_object *bo,
+ unsigned long offset,
+- void *buf, int len, int write)
++ uint8_t *buf, int len, int write)
+ {
+ unsigned long page = offset >> PAGE_SHIFT;
+ unsigned long bytes_left = len;
+@@ -328,6 +328,7 @@ static int ttm_bo_vm_access_kmap(struct ttm_buffer_object *bo,
+ ttm_bo_kunmap(&map);
+
+ page++;
++ buf += bytes;
+ bytes_left -= bytes;
+ offset = 0;
+ } while (bytes_left);
+diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
+index c13a4fd86b3c..a42744c7665b 100644
+--- a/drivers/hwmon/coretemp.c
++++ b/drivers/hwmon/coretemp.c
+@@ -268,13 +268,13 @@ static int adjust_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
+ for (i = 0; i < ARRAY_SIZE(tjmax_model_table); i++) {
+ const struct tjmax_model *tm = &tjmax_model_table[i];
+ if (c->x86_model == tm->model &&
+- (tm->mask == ANY || c->x86_mask == tm->mask))
++ (tm->mask == ANY || c->x86_stepping == tm->mask))
+ return tm->tjmax;
+ }
+
+ /* Early chips have no MSR for TjMax */
+
+- if (c->x86_model == 0xf && c->x86_mask < 4)
++ if (c->x86_model == 0xf && c->x86_stepping < 4)
+ usemsr_ee = 0;
+
+ if (c->x86_model > 0xe && usemsr_ee) {
+@@ -425,7 +425,7 @@ static int chk_ucode_version(unsigned int cpu)
+ * Readings might stop update when processor visited too deep sleep,
+ * fixed for stepping D0 (6EC).
+ */
+- if (c->x86_model == 0xe && c->x86_mask < 0xc && c->microcode < 0x39) {
++ if (c->x86_model == 0xe && c->x86_stepping < 0xc && c->microcode < 0x39) {
+ pr_err("Errata AE18 not fixed, update BIOS or microcode of the CPU!\n");
+ return -ENODEV;
+ }
+diff --git a/drivers/hwmon/hwmon-vid.c b/drivers/hwmon/hwmon-vid.c
+index ef91b8a67549..84e91286fc4f 100644
+--- a/drivers/hwmon/hwmon-vid.c
++++ b/drivers/hwmon/hwmon-vid.c
+@@ -293,7 +293,7 @@ u8 vid_which_vrm(void)
+ if (c->x86 < 6) /* Any CPU with family lower than 6 */
+ return 0; /* doesn't have VID */
+
+- vrm_ret = find_vrm(c->x86, c->x86_model, c->x86_mask, c->x86_vendor);
++ vrm_ret = find_vrm(c->x86, c->x86_model, c->x86_stepping, c->x86_vendor);
+ if (vrm_ret == 134)
+ vrm_ret = get_via_model_d_vrm();
+ if (vrm_ret == 0)
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index 0721e175664a..b960015cb073 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -226,7 +226,7 @@ static bool has_erratum_319(struct pci_dev *pdev)
+ * and AM3 formats, but that's the best we can do.
+ */
+ return boot_cpu_data.x86_model < 4 ||
+- (boot_cpu_data.x86_model == 4 && boot_cpu_data.x86_mask <= 2);
++ (boot_cpu_data.x86_model == 4 && boot_cpu_data.x86_stepping <= 2);
+ }
+
+ static int k10temp_probe(struct pci_dev *pdev,
+diff --git a/drivers/hwmon/k8temp.c b/drivers/hwmon/k8temp.c
+index 5a632bcf869b..e59f9113fb93 100644
+--- a/drivers/hwmon/k8temp.c
++++ b/drivers/hwmon/k8temp.c
+@@ -187,7 +187,7 @@ static int k8temp_probe(struct pci_dev *pdev,
+ return -ENOMEM;
+
+ model = boot_cpu_data.x86_model;
+- stepping = boot_cpu_data.x86_mask;
++ stepping = boot_cpu_data.x86_stepping;
+
+ /* feature available since SH-C0, exclude older revisions */
+ if ((model == 4 && stepping == 0) ||
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index 465520627e4b..d7d042a20ab4 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -462,7 +462,6 @@ int ib_register_device(struct ib_device *device,
+ struct ib_udata uhw = {.outlen = 0, .inlen = 0};
+ struct device *parent = device->dev.parent;
+
+- WARN_ON_ONCE(!parent);
+ WARN_ON_ONCE(device->dma_device);
+ if (device->dev.dma_ops) {
+ /*
+@@ -471,16 +470,25 @@ int ib_register_device(struct ib_device *device,
+ * into device->dev.
+ */
+ device->dma_device = &device->dev;
+- if (!device->dev.dma_mask)
+- device->dev.dma_mask = parent->dma_mask;
+- if (!device->dev.coherent_dma_mask)
+- device->dev.coherent_dma_mask =
+- parent->coherent_dma_mask;
++ if (!device->dev.dma_mask) {
++ if (parent)
++ device->dev.dma_mask = parent->dma_mask;
++ else
++ WARN_ON_ONCE(true);
++ }
++ if (!device->dev.coherent_dma_mask) {
++ if (parent)
++ device->dev.coherent_dma_mask =
++ parent->coherent_dma_mask;
++ else
++ WARN_ON_ONCE(true);
++ }
+ } else {
+ /*
+ * The caller did not provide custom DMA operations. Use the
+ * DMA mapping operations of the parent device.
+ */
++ WARN_ON_ONCE(!parent);
+ device->dma_device = parent;
+ }
+
+diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
+index e30d86fa1855..8ae1308eecc7 100644
+--- a/drivers/infiniband/core/sysfs.c
++++ b/drivers/infiniband/core/sysfs.c
+@@ -1276,7 +1276,6 @@ int ib_device_register_sysfs(struct ib_device *device,
+ int ret;
+ int i;
+
+- WARN_ON_ONCE(!device->dev.parent);
+ ret = dev_set_name(class_dev, "%s", device->name);
+ if (ret)
+ return ret;
+diff --git a/drivers/infiniband/core/user_mad.c b/drivers/infiniband/core/user_mad.c
+index 4b64dd02e090..3205800f9579 100644
+--- a/drivers/infiniband/core/user_mad.c
++++ b/drivers/infiniband/core/user_mad.c
+@@ -500,7 +500,7 @@ static ssize_t ib_umad_write(struct file *filp, const char __user *buf,
+ }
+
+ memset(&ah_attr, 0, sizeof ah_attr);
+- ah_attr.type = rdma_ah_find_type(file->port->ib_dev,
++ ah_attr.type = rdma_ah_find_type(agent->device,
+ file->port->port_num);
+ rdma_ah_set_dlid(&ah_attr, be16_to_cpu(packet->mad.hdr.lid));
+ rdma_ah_set_sl(&ah_attr, packet->mad.hdr.sl);
+diff --git a/drivers/infiniband/core/uverbs_std_types.c b/drivers/infiniband/core/uverbs_std_types.c
+index c3ee5d9b336d..cca70d36ee15 100644
+--- a/drivers/infiniband/core/uverbs_std_types.c
++++ b/drivers/infiniband/core/uverbs_std_types.c
+@@ -315,7 +315,7 @@ static int uverbs_create_cq_handler(struct ib_device *ib_dev,
+ cq->uobject = &obj->uobject;
+ cq->comp_handler = ib_uverbs_comp_handler;
+ cq->event_handler = ib_uverbs_cq_event_handler;
+- cq->cq_context = &ev_file->ev_queue;
++ cq->cq_context = ev_file ? &ev_file->ev_queue : NULL;
+ obj->uobject.object = cq;
+ obj->uobject.user_handle = user_handle;
+ atomic_set(&cq->usecnt, 0);
+diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
+index 8c8a16791a3f..5caf37ba7fff 100644
+--- a/drivers/infiniband/hw/mlx4/main.c
++++ b/drivers/infiniband/hw/mlx4/main.c
+@@ -2995,9 +2995,8 @@ static void *mlx4_ib_add(struct mlx4_dev *dev)
+ kfree(ibdev->ib_uc_qpns_bitmap);
+
+ err_steer_qp_release:
+- if (ibdev->steering_support == MLX4_STEERING_MODE_DEVICE_MANAGED)
+- mlx4_qp_release_range(dev, ibdev->steer_qpn_base,
+- ibdev->steer_qpn_count);
++ mlx4_qp_release_range(dev, ibdev->steer_qpn_base,
++ ibdev->steer_qpn_count);
+ err_counter:
+ for (i = 0; i < ibdev->num_ports; ++i)
+ mlx4_ib_delete_counters_table(ibdev, &ibdev->counters_table[i]);
+@@ -3102,11 +3101,9 @@ static void mlx4_ib_remove(struct mlx4_dev *dev, void *ibdev_ptr)
+ ibdev->iboe.nb.notifier_call = NULL;
+ }
+
+- if (ibdev->steering_support == MLX4_STEERING_MODE_DEVICE_MANAGED) {
+- mlx4_qp_release_range(dev, ibdev->steer_qpn_base,
+- ibdev->steer_qpn_count);
+- kfree(ibdev->ib_uc_qpns_bitmap);
+- }
++ mlx4_qp_release_range(dev, ibdev->steer_qpn_base,
++ ibdev->steer_qpn_count);
++ kfree(ibdev->ib_uc_qpns_bitmap);
+
+ iounmap(ibdev->uar_map);
+ for (p = 0; p < ibdev->num_ports; ++p)
+diff --git a/drivers/infiniband/hw/qib/qib_rc.c b/drivers/infiniband/hw/qib/qib_rc.c
+index 8f5754fb8579..e4a9ba1dd9ba 100644
+--- a/drivers/infiniband/hw/qib/qib_rc.c
++++ b/drivers/infiniband/hw/qib/qib_rc.c
+@@ -434,13 +434,13 @@ int qib_make_rc_req(struct rvt_qp *qp, unsigned long *flags)
+ qp->s_state = OP(COMPARE_SWAP);
+ put_ib_ateth_swap(wqe->atomic_wr.swap,
+ &ohdr->u.atomic_eth);
+- put_ib_ateth_swap(wqe->atomic_wr.compare_add,
+- &ohdr->u.atomic_eth);
++ put_ib_ateth_compare(wqe->atomic_wr.compare_add,
++ &ohdr->u.atomic_eth);
+ } else {
+ qp->s_state = OP(FETCH_ADD);
+ put_ib_ateth_swap(wqe->atomic_wr.compare_add,
+ &ohdr->u.atomic_eth);
+- put_ib_ateth_swap(0, &ohdr->u.atomic_eth);
++ put_ib_ateth_compare(0, &ohdr->u.atomic_eth);
+ }
+ put_ib_ateth_vaddr(wqe->atomic_wr.remote_addr,
+ &ohdr->u.atomic_eth);
+diff --git a/drivers/infiniband/sw/rxe/rxe_loc.h b/drivers/infiniband/sw/rxe/rxe_loc.h
+index d7472a442a2c..96c3a6c5c4b5 100644
+--- a/drivers/infiniband/sw/rxe/rxe_loc.h
++++ b/drivers/infiniband/sw/rxe/rxe_loc.h
+@@ -237,7 +237,6 @@ int rxe_srq_from_attr(struct rxe_dev *rxe, struct rxe_srq *srq,
+
+ void rxe_release(struct kref *kref);
+
+-void rxe_drain_req_pkts(struct rxe_qp *qp, bool notify);
+ int rxe_completer(void *arg);
+ int rxe_requester(void *arg);
+ int rxe_responder(void *arg);
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index 4469592b839d..137d6c0c49d4 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -824,9 +824,9 @@ void rxe_qp_destroy(struct rxe_qp *qp)
+ }
+
+ /* called when the last reference to the qp is dropped */
+-void rxe_qp_cleanup(struct rxe_pool_entry *arg)
++static void rxe_qp_do_cleanup(struct work_struct *work)
+ {
+- struct rxe_qp *qp = container_of(arg, typeof(*qp), pelem);
++ struct rxe_qp *qp = container_of(work, typeof(*qp), cleanup_work.work);
+
+ rxe_drop_all_mcast_groups(qp);
+
+@@ -859,3 +859,11 @@ void rxe_qp_cleanup(struct rxe_pool_entry *arg)
+ kernel_sock_shutdown(qp->sk, SHUT_RDWR);
+ sock_release(qp->sk);
+ }
++
++/* called when the last reference to the qp is dropped */
++void rxe_qp_cleanup(struct rxe_pool_entry *arg)
++{
++ struct rxe_qp *qp = container_of(arg, typeof(*qp), pelem);
++
++ execute_in_process_context(rxe_qp_do_cleanup, &qp->cleanup_work);
++}
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 26a7f923045b..7bdaf71b8221 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -594,15 +594,8 @@ int rxe_requester(void *arg)
+ rxe_add_ref(qp);
+
+ next_wqe:
+- if (unlikely(!qp->valid)) {
+- rxe_drain_req_pkts(qp, true);
++ if (unlikely(!qp->valid || qp->req.state == QP_STATE_ERROR))
+ goto exit;
+- }
+-
+- if (unlikely(qp->req.state == QP_STATE_ERROR)) {
+- rxe_drain_req_pkts(qp, true);
+- goto exit;
+- }
+
+ if (unlikely(qp->req.state == QP_STATE_RESET)) {
+ qp->req.wqe_index = consumer_index(qp->sq.queue);
+diff --git a/drivers/infiniband/sw/rxe/rxe_resp.c b/drivers/infiniband/sw/rxe/rxe_resp.c
+index 4240866a5331..01f926fd9029 100644
+--- a/drivers/infiniband/sw/rxe/rxe_resp.c
++++ b/drivers/infiniband/sw/rxe/rxe_resp.c
+@@ -1210,7 +1210,7 @@ static enum resp_states do_class_d1e_error(struct rxe_qp *qp)
+ }
+ }
+
+-void rxe_drain_req_pkts(struct rxe_qp *qp, bool notify)
++static void rxe_drain_req_pkts(struct rxe_qp *qp, bool notify)
+ {
+ struct sk_buff *skb;
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index d03002b9d84d..7210a784abb4 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -814,6 +814,8 @@ static int rxe_post_send_kernel(struct rxe_qp *qp, struct ib_send_wr *wr,
+ (queue_count(qp->sq.queue) > 1);
+
+ rxe_run_task(&qp->req.task, must_sched);
++ if (unlikely(qp->req.state == QP_STATE_ERROR))
++ rxe_run_task(&qp->comp.task, 1);
+
+ return err;
+ }
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
+index 0c2dbe45c729..1019f5e7dbdd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
+@@ -35,6 +35,7 @@
+ #define RXE_VERBS_H
+
+ #include <linux/interrupt.h>
++#include <linux/workqueue.h>
+ #include <rdma/rdma_user_rxe.h>
+ #include "rxe_pool.h"
+ #include "rxe_task.h"
+@@ -281,6 +282,8 @@ struct rxe_qp {
+ struct timer_list rnr_nak_timer;
+
+ spinlock_t state_lock; /* guard requester and completer */
++
++ struct execute_work cleanup_work;
+ };
+
+ enum rxe_mem_state {
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index de17b7193299..1c42b00d3be2 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -817,7 +817,8 @@ static void dec_pending(struct dm_io *io, blk_status_t error)
+ queue_io(md, bio);
+ } else {
+ /* done with normal IO or empty flush */
+- bio->bi_status = io_error;
++ if (io_error)
++ bio->bi_status = io_error;
+ bio_endio(bio);
+ }
+ }
+diff --git a/drivers/media/tuners/r820t.c b/drivers/media/tuners/r820t.c
+index ba80376a3b86..d097eb04a0e9 100644
+--- a/drivers/media/tuners/r820t.c
++++ b/drivers/media/tuners/r820t.c
+@@ -396,9 +396,11 @@ static int r820t_write(struct r820t_priv *priv, u8 reg, const u8 *val,
+ return 0;
+ }
+
+-static int r820t_write_reg(struct r820t_priv *priv, u8 reg, u8 val)
++static inline int r820t_write_reg(struct r820t_priv *priv, u8 reg, u8 val)
+ {
+- return r820t_write(priv, reg, &val, 1);
++ u8 tmp = val; /* work around GCC PR81715 with asan-stack=1 */
++
++ return r820t_write(priv, reg, &tmp, 1);
+ }
+
+ static int r820t_read_cache_reg(struct r820t_priv *priv, int reg)
+@@ -411,17 +413,18 @@ static int r820t_read_cache_reg(struct r820t_priv *priv, int reg)
+ return -EINVAL;
+ }
+
+-static int r820t_write_reg_mask(struct r820t_priv *priv, u8 reg, u8 val,
++static inline int r820t_write_reg_mask(struct r820t_priv *priv, u8 reg, u8 val,
+ u8 bit_mask)
+ {
++ u8 tmp = val;
+ int rc = r820t_read_cache_reg(priv, reg);
+
+ if (rc < 0)
+ return rc;
+
+- val = (rc & ~bit_mask) | (val & bit_mask);
++ tmp = (rc & ~bit_mask) | (tmp & bit_mask);
+
+- return r820t_write(priv, reg, &val, 1);
++ return r820t_write(priv, reg, &tmp, 1);
+ }
+
+ static int r820t_read(struct r820t_priv *priv, u8 reg, u8 *val, int len)
+diff --git a/drivers/mmc/host/bcm2835.c b/drivers/mmc/host/bcm2835.c
+index 229dc18f0581..768972af8b85 100644
+--- a/drivers/mmc/host/bcm2835.c
++++ b/drivers/mmc/host/bcm2835.c
+@@ -1265,7 +1265,8 @@ static int bcm2835_add_host(struct bcm2835_host *host)
+ char pio_limit_string[20];
+ int ret;
+
+- mmc->f_max = host->max_clk;
++ if (!mmc->f_max || mmc->f_max > host->max_clk)
++ mmc->f_max = host->max_clk;
+ mmc->f_min = host->max_clk / SDCDIV_MAX_CDIV;
+
+ mmc->max_busy_timeout = ~0 / (mmc->f_max / 1000);
+diff --git a/drivers/mmc/host/meson-gx-mmc.c b/drivers/mmc/host/meson-gx-mmc.c
+index e0862d3f65b3..730fbe01726d 100644
+--- a/drivers/mmc/host/meson-gx-mmc.c
++++ b/drivers/mmc/host/meson-gx-mmc.c
+@@ -716,22 +716,6 @@ static int meson_mmc_clk_phase_tuning(struct mmc_host *mmc, u32 opcode,
+ static int meson_mmc_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ {
+ struct meson_host *host = mmc_priv(mmc);
+- int ret;
+-
+- /*
+- * If this is the initial tuning, try to get a sane Rx starting
+- * phase before doing the actual tuning.
+- */
+- if (!mmc->doing_retune) {
+- ret = meson_mmc_clk_phase_tuning(mmc, opcode, host->rx_clk);
+-
+- if (ret)
+- return ret;
+- }
+-
+- ret = meson_mmc_clk_phase_tuning(mmc, opcode, host->tx_clk);
+- if (ret)
+- return ret;
+
+ return meson_mmc_clk_phase_tuning(mmc, opcode, host->rx_clk);
+ }
+@@ -762,9 +746,8 @@ static void meson_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+ if (!IS_ERR(mmc->supply.vmmc))
+ mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, ios->vdd);
+
+- /* Reset phases */
++ /* Reset rx phase */
+ clk_set_phase(host->rx_clk, 0);
+- clk_set_phase(host->tx_clk, 270);
+
+ break;
+
+diff --git a/drivers/mmc/host/sdhci-of-esdhc.c b/drivers/mmc/host/sdhci-of-esdhc.c
+index 1f424374bbbb..4ffa6b173a21 100644
+--- a/drivers/mmc/host/sdhci-of-esdhc.c
++++ b/drivers/mmc/host/sdhci-of-esdhc.c
+@@ -589,10 +589,18 @@ static void esdhc_pltfm_set_bus_width(struct sdhci_host *host, int width)
+
+ static void esdhc_reset(struct sdhci_host *host, u8 mask)
+ {
++ u32 val;
++
+ sdhci_reset(host, mask);
+
+ sdhci_writel(host, host->ier, SDHCI_INT_ENABLE);
+ sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
++
++ if (mask & SDHCI_RESET_ALL) {
++ val = sdhci_readl(host, ESDHC_TBCTL);
++ val &= ~ESDHC_TB_EN;
++ sdhci_writel(host, val, ESDHC_TBCTL);
++ }
+ }
+
+ /* The SCFG, Supplemental Configuration Unit, provides SoC specific
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index e9290a3439d5..d24306b2b839 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -21,6 +21,7 @@
+ #include <linux/dma-mapping.h>
+ #include <linux/slab.h>
+ #include <linux/scatterlist.h>
++#include <linux/sizes.h>
+ #include <linux/swiotlb.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/pm_runtime.h>
+@@ -502,8 +503,35 @@ static int sdhci_pre_dma_transfer(struct sdhci_host *host,
+ if (data->host_cookie == COOKIE_PRE_MAPPED)
+ return data->sg_count;
+
+- sg_count = dma_map_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+- mmc_get_dma_dir(data));
++ /* Bounce write requests to the bounce buffer */
++ if (host->bounce_buffer) {
++ unsigned int length = data->blksz * data->blocks;
++
++ if (length > host->bounce_buffer_size) {
++ pr_err("%s: asked for transfer of %u bytes exceeds bounce buffer %u bytes\n",
++ mmc_hostname(host->mmc), length,
++ host->bounce_buffer_size);
++ return -EIO;
++ }
++ if (mmc_get_dma_dir(data) == DMA_TO_DEVICE) {
++ /* Copy the data to the bounce buffer */
++ sg_copy_to_buffer(data->sg, data->sg_len,
++ host->bounce_buffer,
++ length);
++ }
++ /* Switch ownership to the DMA */
++ dma_sync_single_for_device(host->mmc->parent,
++ host->bounce_addr,
++ host->bounce_buffer_size,
++ mmc_get_dma_dir(data));
++ /* Just a dummy value */
++ sg_count = 1;
++ } else {
++ /* Just access the data directly from memory */
++ sg_count = dma_map_sg(mmc_dev(host->mmc),
++ data->sg, data->sg_len,
++ mmc_get_dma_dir(data));
++ }
+
+ if (sg_count == 0)
+ return -ENOSPC;
+@@ -673,6 +701,14 @@ static void sdhci_adma_table_post(struct sdhci_host *host,
+ }
+ }
+
++static u32 sdhci_sdma_address(struct sdhci_host *host)
++{
++ if (host->bounce_buffer)
++ return host->bounce_addr;
++ else
++ return sg_dma_address(host->data->sg);
++}
++
+ static u8 sdhci_calc_timeout(struct sdhci_host *host, struct mmc_command *cmd)
+ {
+ u8 count;
+@@ -858,8 +894,8 @@ static void sdhci_prepare_data(struct sdhci_host *host, struct mmc_command *cmd)
+ SDHCI_ADMA_ADDRESS_HI);
+ } else {
+ WARN_ON(sg_cnt != 1);
+- sdhci_writel(host, sg_dma_address(data->sg),
+- SDHCI_DMA_ADDRESS);
++ sdhci_writel(host, sdhci_sdma_address(host),
++ SDHCI_DMA_ADDRESS);
+ }
+ }
+
+@@ -2248,7 +2284,12 @@ static void sdhci_pre_req(struct mmc_host *mmc, struct mmc_request *mrq)
+
+ mrq->data->host_cookie = COOKIE_UNMAPPED;
+
+- if (host->flags & SDHCI_REQ_USE_DMA)
++ /*
++ * No pre-mapping in the pre hook if we're using the bounce buffer,
++ * for that we would need two bounce buffers since one buffer is
++ * in flight when this is getting called.
++ */
++ if (host->flags & SDHCI_REQ_USE_DMA && !host->bounce_buffer)
+ sdhci_pre_dma_transfer(host, mrq->data, COOKIE_PRE_MAPPED);
+ }
+
+@@ -2352,8 +2393,45 @@ static bool sdhci_request_done(struct sdhci_host *host)
+ struct mmc_data *data = mrq->data;
+
+ if (data && data->host_cookie == COOKIE_MAPPED) {
+- dma_unmap_sg(mmc_dev(host->mmc), data->sg, data->sg_len,
+- mmc_get_dma_dir(data));
++ if (host->bounce_buffer) {
++ /*
++ * On reads, copy the bounced data into the
++ * sglist
++ */
++ if (mmc_get_dma_dir(data) == DMA_FROM_DEVICE) {
++ unsigned int length = data->bytes_xfered;
++
++ if (length > host->bounce_buffer_size) {
++ pr_err("%s: bounce buffer is %u bytes but DMA claims to have transferred %u bytes\n",
++ mmc_hostname(host->mmc),
++ host->bounce_buffer_size,
++ data->bytes_xfered);
++ /* Cap it down and continue */
++ length = host->bounce_buffer_size;
++ }
++ dma_sync_single_for_cpu(
++ host->mmc->parent,
++ host->bounce_addr,
++ host->bounce_buffer_size,
++ DMA_FROM_DEVICE);
++ sg_copy_from_buffer(data->sg,
++ data->sg_len,
++ host->bounce_buffer,
++ length);
++ } else {
++ /* No copying, just switch ownership */
++ dma_sync_single_for_cpu(
++ host->mmc->parent,
++ host->bounce_addr,
++ host->bounce_buffer_size,
++ mmc_get_dma_dir(data));
++ }
++ } else {
++ /* Unmap the raw data */
++ dma_unmap_sg(mmc_dev(host->mmc), data->sg,
++ data->sg_len,
++ mmc_get_dma_dir(data));
++ }
+ data->host_cookie = COOKIE_UNMAPPED;
+ }
+ }
+@@ -2636,7 +2714,8 @@ static void sdhci_data_irq(struct sdhci_host *host, u32 intmask)
+ */
+ if (intmask & SDHCI_INT_DMA_END) {
+ u32 dmastart, dmanow;
+- dmastart = sg_dma_address(host->data->sg);
++
++ dmastart = sdhci_sdma_address(host);
+ dmanow = dmastart + host->data->bytes_xfered;
+ /*
+ * Force update to the next DMA block boundary.
+@@ -3217,6 +3296,68 @@ void __sdhci_read_caps(struct sdhci_host *host, u16 *ver, u32 *caps, u32 *caps1)
+ }
+ EXPORT_SYMBOL_GPL(__sdhci_read_caps);
+
++static int sdhci_allocate_bounce_buffer(struct sdhci_host *host)
++{
++ struct mmc_host *mmc = host->mmc;
++ unsigned int max_blocks;
++ unsigned int bounce_size;
++ int ret;
++
++ /*
++ * Cap the bounce buffer at 64KB. Using a bigger bounce buffer
++ * has diminishing returns, this is probably because SD/MMC
++ * cards are usually optimized to handle this size of requests.
++ */
++ bounce_size = SZ_64K;
++ /*
++ * Adjust downwards to maximum request size if this is less
++ * than our segment size, else hammer down the maximum
++ * request size to the maximum buffer size.
++ */
++ if (mmc->max_req_size < bounce_size)
++ bounce_size = mmc->max_req_size;
++ max_blocks = bounce_size / 512;
++
++ /*
++ * When we just support one segment, we can get significant
++ * speedups by the help of a bounce buffer to group scattered
++ * reads/writes together.
++ */
++ host->bounce_buffer = devm_kmalloc(mmc->parent,
++ bounce_size,
++ GFP_KERNEL);
++ if (!host->bounce_buffer) {
++ pr_err("%s: failed to allocate %u bytes for bounce buffer, falling back to single segments\n",
++ mmc_hostname(mmc),
++ bounce_size);
++ /*
++ * Exiting with zero here makes sure we proceed with
++ * mmc->max_segs == 1.
++ */
++ return 0;
++ }
++
++ host->bounce_addr = dma_map_single(mmc->parent,
++ host->bounce_buffer,
++ bounce_size,
++ DMA_BIDIRECTIONAL);
++ ret = dma_mapping_error(mmc->parent, host->bounce_addr);
++ if (ret)
++ /* Again fall back to max_segs == 1 */
++ return 0;
++ host->bounce_buffer_size = bounce_size;
++
++ /* Lie about this since we're bouncing */
++ mmc->max_segs = max_blocks;
++ mmc->max_seg_size = bounce_size;
++ mmc->max_req_size = bounce_size;
++
++ pr_info("%s bounce up to %u segments into one, max segment size %u bytes\n",
++ mmc_hostname(mmc), max_blocks, bounce_size);
++
++ return 0;
++}
++
+ int sdhci_setup_host(struct sdhci_host *host)
+ {
+ struct mmc_host *mmc;
+@@ -3713,6 +3854,13 @@ int sdhci_setup_host(struct sdhci_host *host)
+ */
+ mmc->max_blk_count = (host->quirks & SDHCI_QUIRK_NO_MULTIBLOCK) ? 1 : 65535;
+
++ if (mmc->max_segs == 1) {
++ /* This may alter mmc->*_blk_* parameters */
++ ret = sdhci_allocate_bounce_buffer(host);
++ if (ret)
++ return ret;
++ }
++
+ return 0;
+
+ unreg:
+diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
+index 54bc444c317f..1d7d61e25dbf 100644
+--- a/drivers/mmc/host/sdhci.h
++++ b/drivers/mmc/host/sdhci.h
+@@ -440,6 +440,9 @@ struct sdhci_host {
+
+ int irq; /* Device IRQ */
+ void __iomem *ioaddr; /* Mapped address */
++ char *bounce_buffer; /* For packing SDMA reads/writes */
++ dma_addr_t bounce_addr;
++ unsigned int bounce_buffer_size;
+
+ const struct sdhci_ops *ops; /* Low level hw interface */
+
+diff --git a/drivers/mtd/nand/vf610_nfc.c b/drivers/mtd/nand/vf610_nfc.c
+index 8037d4b48a05..e2583a539b41 100644
+--- a/drivers/mtd/nand/vf610_nfc.c
++++ b/drivers/mtd/nand/vf610_nfc.c
+@@ -752,10 +752,8 @@ static int vf610_nfc_probe(struct platform_device *pdev)
+ if (mtd->oobsize > 64)
+ mtd->oobsize = 64;
+
+- /*
+- * mtd->ecclayout is not specified here because we're using the
+- * default large page ECC layout defined in NAND core.
+- */
++ /* Use default large page ECC layout defined in NAND core */
++ mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
+ if (chip->ecc.strength == 32) {
+ nfc->ecc_mode = ECC_60_BYTE;
+ chip->ecc.bytes = 60;
+diff --git a/drivers/net/ethernet/marvell/mvpp2.c b/drivers/net/ethernet/marvell/mvpp2.c
+index 634b2f41cc9e..908acd4624e8 100644
+--- a/drivers/net/ethernet/marvell/mvpp2.c
++++ b/drivers/net/ethernet/marvell/mvpp2.c
+@@ -7127,6 +7127,7 @@ static void mvpp2_set_rx_mode(struct net_device *dev)
+ int id = port->id;
+ bool allmulti = dev->flags & IFF_ALLMULTI;
+
++retry:
+ mvpp2_prs_mac_promisc_set(priv, id, dev->flags & IFF_PROMISC);
+ mvpp2_prs_mac_multi_set(priv, id, MVPP2_PE_MAC_MC_ALL, allmulti);
+ mvpp2_prs_mac_multi_set(priv, id, MVPP2_PE_MAC_MC_IP6, allmulti);
+@@ -7134,9 +7135,13 @@ static void mvpp2_set_rx_mode(struct net_device *dev)
+ /* Remove all port->id's mcast enries */
+ mvpp2_prs_mcast_del_all(priv, id);
+
+- if (allmulti && !netdev_mc_empty(dev)) {
+- netdev_for_each_mc_addr(ha, dev)
+- mvpp2_prs_mac_da_accept(priv, id, ha->addr, true);
++ if (!allmulti) {
++ netdev_for_each_mc_addr(ha, dev) {
++ if (mvpp2_prs_mac_da_accept(priv, id, ha->addr, true)) {
++ allmulti = true;
++ goto retry;
++ }
++ }
+ }
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx4/qp.c b/drivers/net/ethernet/mellanox/mlx4/qp.c
+index 769598f7b6c8..3aaf4bad6c5a 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/qp.c
++++ b/drivers/net/ethernet/mellanox/mlx4/qp.c
+@@ -287,6 +287,9 @@ void mlx4_qp_release_range(struct mlx4_dev *dev, int base_qpn, int cnt)
+ u64 in_param = 0;
+ int err;
+
++ if (!cnt)
++ return;
++
+ if (mlx4_is_mfunc(dev)) {
+ set_param_l(&in_param, base_qpn);
+ set_param_h(&in_param, cnt);
+diff --git a/drivers/net/wireless/marvell/mwifiex/pcie.c b/drivers/net/wireless/marvell/mwifiex/pcie.c
+index cd314946452c..9511f5fe62f4 100644
+--- a/drivers/net/wireless/marvell/mwifiex/pcie.c
++++ b/drivers/net/wireless/marvell/mwifiex/pcie.c
+@@ -2781,7 +2781,10 @@ static void mwifiex_pcie_card_reset_work(struct mwifiex_adapter *adapter)
+ {
+ struct pcie_service_card *card = adapter->card;
+
+- pci_reset_function(card->dev);
++ /* We can't afford to wait here; remove() might be waiting on us. If we
++ * can't grab the device lock, maybe we'll get another chance later.
++ */
++ pci_try_reset_function(card->dev);
+ }
+
+ static void mwifiex_pcie_work(struct work_struct *work)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
+index 43e18c4c1e68..999ddd947b2a 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c
+@@ -1123,7 +1123,7 @@ static u8 _rtl8821ae_dbi_read(struct rtl_priv *rtlpriv, u16 addr)
+ }
+ if (0 == tmp) {
+ read_addr = REG_DBI_RDATA + addr % 4;
+- ret = rtl_read_word(rtlpriv, read_addr);
++ ret = rtl_read_byte(rtlpriv, read_addr);
+ }
+ return ret;
+ }
+@@ -1165,7 +1165,8 @@ static void _rtl8821ae_enable_aspm_back_door(struct ieee80211_hw *hw)
+ }
+
+ tmp = _rtl8821ae_dbi_read(rtlpriv, 0x70f);
+- _rtl8821ae_dbi_write(rtlpriv, 0x70f, tmp | BIT(7));
++ _rtl8821ae_dbi_write(rtlpriv, 0x70f, tmp | BIT(7) |
++ ASPM_L1_LATENCY << 3);
+
+ tmp = _rtl8821ae_dbi_read(rtlpriv, 0x719);
+ _rtl8821ae_dbi_write(rtlpriv, 0x719, tmp | BIT(3) | BIT(4));
+diff --git a/drivers/net/wireless/realtek/rtlwifi/wifi.h b/drivers/net/wireless/realtek/rtlwifi/wifi.h
+index 92d4859ec906..2a37125b2ef5 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/wifi.h
++++ b/drivers/net/wireless/realtek/rtlwifi/wifi.h
+@@ -99,6 +99,7 @@
+ #define RTL_USB_MAX_RX_COUNT 100
+ #define QBSS_LOAD_SIZE 5
+ #define MAX_WMMELE_LENGTH 64
++#define ASPM_L1_LATENCY 7
+
+ #define TOTAL_CAM_ENTRY 32
+
+diff --git a/drivers/pci/dwc/pci-keystone.c b/drivers/pci/dwc/pci-keystone.c
+index 5bee3af47588..39405598b22d 100644
+--- a/drivers/pci/dwc/pci-keystone.c
++++ b/drivers/pci/dwc/pci-keystone.c
+@@ -178,7 +178,7 @@ static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie,
+ }
+
+ /* interrupt controller is in a child node */
+- *np_temp = of_find_node_by_name(np_pcie, controller);
++ *np_temp = of_get_child_by_name(np_pcie, controller);
+ if (!(*np_temp)) {
+ dev_err(dev, "Node for %s is absent\n", controller);
+ return -EINVAL;
+@@ -187,6 +187,7 @@ static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie,
+ temp = of_irq_count(*np_temp);
+ if (!temp) {
+ dev_err(dev, "No IRQ entries in %s\n", controller);
++ of_node_put(*np_temp);
+ return -EINVAL;
+ }
+
+@@ -204,6 +205,8 @@ static int ks_pcie_get_irq_controller_info(struct keystone_pcie *ks_pcie,
+ break;
+ }
+
++ of_node_put(*np_temp);
++
+ if (temp) {
+ *num_irqs = temp;
+ return 0;
+diff --git a/drivers/pci/host/pcie-iproc-platform.c b/drivers/pci/host/pcie-iproc-platform.c
+index a5073a921a04..32228d41f746 100644
+--- a/drivers/pci/host/pcie-iproc-platform.c
++++ b/drivers/pci/host/pcie-iproc-platform.c
+@@ -92,6 +92,13 @@ static int iproc_pcie_pltfm_probe(struct platform_device *pdev)
+ pcie->need_ob_cfg = true;
+ }
+
++ /*
++ * DT nodes are not used by all platforms that use the iProc PCIe
++ * core driver. For platforms that require explict inbound mapping
++ * configuration, "dma-ranges" would have been present in DT
++ */
++ pcie->need_ib_cfg = of_property_read_bool(np, "dma-ranges");
++
+ /* PHY use is optional */
+ pcie->phy = devm_phy_get(dev, "pcie-phy");
+ if (IS_ERR(pcie->phy)) {
+diff --git a/drivers/pci/host/pcie-iproc.c b/drivers/pci/host/pcie-iproc.c
+index 935909bbe5c4..75836067f538 100644
+--- a/drivers/pci/host/pcie-iproc.c
++++ b/drivers/pci/host/pcie-iproc.c
+@@ -1378,9 +1378,11 @@ int iproc_pcie_setup(struct iproc_pcie *pcie, struct list_head *res)
+ }
+ }
+
+- ret = iproc_pcie_map_dma_ranges(pcie);
+- if (ret && ret != -ENOENT)
+- goto err_power_off_phy;
++ if (pcie->need_ib_cfg) {
++ ret = iproc_pcie_map_dma_ranges(pcie);
++ if (ret && ret != -ENOENT)
++ goto err_power_off_phy;
++ }
+
+ #ifdef CONFIG_ARM
+ pcie->sysdata.private_data = pcie;
+diff --git a/drivers/pci/host/pcie-iproc.h b/drivers/pci/host/pcie-iproc.h
+index a6b55cec9a66..4ac6282f2bfd 100644
+--- a/drivers/pci/host/pcie-iproc.h
++++ b/drivers/pci/host/pcie-iproc.h
+@@ -74,6 +74,7 @@ struct iproc_msi;
+ * @ob: outbound mapping related parameters
+ * @ob_map: outbound mapping related parameters specific to the controller
+ *
++ * @need_ib_cfg: indicates SW needs to configure the inbound mapping window
+ * @ib: inbound mapping related parameters
+ * @ib_map: outbound mapping region related parameters
+ *
+@@ -101,6 +102,7 @@ struct iproc_pcie {
+ struct iproc_pcie_ob ob;
+ const struct iproc_pcie_ob_map *ob_map;
+
++ bool need_ib_cfg;
+ struct iproc_pcie_ib ib;
+ const struct iproc_pcie_ib_map *ib_map;
+
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 7bab0606f1a9..a89d8b990228 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -848,6 +848,13 @@ struct controller *pcie_init(struct pcie_device *dev)
+ if (pdev->hotplug_user_indicators)
+ slot_cap &= ~(PCI_EXP_SLTCAP_AIP | PCI_EXP_SLTCAP_PIP);
+
++ /*
++ * We assume no Thunderbolt controllers support Command Complete events,
++ * but some controllers falsely claim they do.
++ */
++ if (pdev->is_thunderbolt)
++ slot_cap |= PCI_EXP_SLTCAP_NCCS;
++
+ ctrl->slot_cap = slot_cap;
+ mutex_init(&ctrl->ctrl_lock);
+ init_waitqueue_head(&ctrl->queue);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 10684b17d0bd..d22750ea7444 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -1636,8 +1636,8 @@ static void quirk_pcie_mch(struct pci_dev *pdev)
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7520_MCH, quirk_pcie_mch);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7320_MCH, quirk_pcie_mch);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_E7525_MCH, quirk_pcie_mch);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_HUAWEI, 0x1610, quirk_pcie_mch);
+
++DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_HUAWEI, 0x1610, PCI_CLASS_BRIDGE_PCI, 8, quirk_pcie_mch);
+
+ /*
+ * It's possible for the MSI to get corrupted if shpc and acpi
+diff --git a/drivers/platform/x86/apple-gmux.c b/drivers/platform/x86/apple-gmux.c
+index 623d322447a2..7c4eb86c851e 100644
+--- a/drivers/platform/x86/apple-gmux.c
++++ b/drivers/platform/x86/apple-gmux.c
+@@ -24,7 +24,6 @@
+ #include <linux/delay.h>
+ #include <linux/pci.h>
+ #include <linux/vga_switcheroo.h>
+-#include <linux/vgaarb.h>
+ #include <acpi/video.h>
+ #include <asm/io.h>
+
+@@ -54,7 +53,6 @@ struct apple_gmux_data {
+ bool indexed;
+ struct mutex index_lock;
+
+- struct pci_dev *pdev;
+ struct backlight_device *bdev;
+
+ /* switcheroo data */
+@@ -599,23 +597,6 @@ static int gmux_resume(struct device *dev)
+ return 0;
+ }
+
+-static struct pci_dev *gmux_get_io_pdev(void)
+-{
+- struct pci_dev *pdev = NULL;
+-
+- while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_VGA << 8, pdev))) {
+- u16 cmd;
+-
+- pci_read_config_word(pdev, PCI_COMMAND, &cmd);
+- if (!(cmd & PCI_COMMAND_IO))
+- continue;
+-
+- return pdev;
+- }
+-
+- return NULL;
+-}
+-
+ static int is_thunderbolt(struct device *dev, void *data)
+ {
+ return to_pci_dev(dev)->is_thunderbolt;
+@@ -631,7 +612,6 @@ static int gmux_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
+ int ret = -ENXIO;
+ acpi_status status;
+ unsigned long long gpe;
+- struct pci_dev *pdev = NULL;
+
+ if (apple_gmux_data)
+ return -EBUSY;
+@@ -682,7 +662,7 @@ static int gmux_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
+ ver_minor = (version >> 16) & 0xff;
+ ver_release = (version >> 8) & 0xff;
+ } else {
+- pr_info("gmux device not present or IO disabled\n");
++ pr_info("gmux device not present\n");
+ ret = -ENODEV;
+ goto err_release;
+ }
+@@ -690,23 +670,6 @@ static int gmux_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
+ pr_info("Found gmux version %d.%d.%d [%s]\n", ver_major, ver_minor,
+ ver_release, (gmux_data->indexed ? "indexed" : "classic"));
+
+- /*
+- * Apple systems with gmux are EFI based and normally don't use
+- * VGA. In addition changing IO+MEM ownership between IGP and dGPU
+- * disables IO/MEM used for backlight control on some systems.
+- * Lock IO+MEM to GPU with active IO to prevent switch.
+- */
+- pdev = gmux_get_io_pdev();
+- if (pdev && vga_tryget(pdev,
+- VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM)) {
+- pr_err("IO+MEM vgaarb-locking for PCI:%s failed\n",
+- pci_name(pdev));
+- ret = -EBUSY;
+- goto err_release;
+- } else if (pdev)
+- pr_info("locked IO for PCI:%s\n", pci_name(pdev));
+- gmux_data->pdev = pdev;
+-
+ memset(&props, 0, sizeof(props));
+ props.type = BACKLIGHT_PLATFORM;
+ props.max_brightness = gmux_read32(gmux_data, GMUX_PORT_MAX_BRIGHTNESS);
+@@ -822,10 +785,6 @@ static int gmux_probe(struct pnp_dev *pnp, const struct pnp_device_id *id)
+ err_notify:
+ backlight_device_unregister(bdev);
+ err_release:
+- if (gmux_data->pdev)
+- vga_put(gmux_data->pdev,
+- VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM);
+- pci_dev_put(pdev);
+ release_region(gmux_data->iostart, gmux_data->iolen);
+ err_free:
+ kfree(gmux_data);
+@@ -845,11 +804,6 @@ static void gmux_remove(struct pnp_dev *pnp)
+ &gmux_notify_handler);
+ }
+
+- if (gmux_data->pdev) {
+- vga_put(gmux_data->pdev,
+- VGA_RSRC_NORMAL_IO | VGA_RSRC_NORMAL_MEM);
+- pci_dev_put(gmux_data->pdev);
+- }
+ backlight_device_unregister(gmux_data->bdev);
+
+ release_region(gmux_data->iostart, gmux_data->iolen);
+diff --git a/drivers/platform/x86/wmi.c b/drivers/platform/x86/wmi.c
+index daa68acbc900..c0c8945603cb 100644
+--- a/drivers/platform/x86/wmi.c
++++ b/drivers/platform/x86/wmi.c
+@@ -933,7 +933,7 @@ static int wmi_dev_probe(struct device *dev)
+ goto probe_failure;
+ }
+
+- buf = kmalloc(strlen(wdriver->driver.name) + 4, GFP_KERNEL);
++ buf = kmalloc(strlen(wdriver->driver.name) + 5, GFP_KERNEL);
+ if (!buf) {
+ ret = -ENOMEM;
+ goto probe_string_failure;
+diff --git a/drivers/rtc/rtc-opal.c b/drivers/rtc/rtc-opal.c
+index e2a946c0e667..304e891e35fc 100644
+--- a/drivers/rtc/rtc-opal.c
++++ b/drivers/rtc/rtc-opal.c
+@@ -58,6 +58,7 @@ static void tm_to_opal(struct rtc_time *tm, u32 *y_m_d, u64 *h_m_s_ms)
+ static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm)
+ {
+ long rc = OPAL_BUSY;
++ int retries = 10;
+ u32 y_m_d;
+ u64 h_m_s_ms;
+ __be32 __y_m_d;
+@@ -67,8 +68,11 @@ static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm)
+ rc = opal_rtc_read(&__y_m_d, &__h_m_s_ms);
+ if (rc == OPAL_BUSY_EVENT)
+ opal_poll_events(NULL);
+- else
++ else if (retries-- && (rc == OPAL_HARDWARE
++ || rc == OPAL_INTERNAL_ERROR))
+ msleep(10);
++ else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT)
++ break;
+ }
+
+ if (rc != OPAL_SUCCESS)
+@@ -84,6 +88,7 @@ static int opal_get_rtc_time(struct device *dev, struct rtc_time *tm)
+ static int opal_set_rtc_time(struct device *dev, struct rtc_time *tm)
+ {
+ long rc = OPAL_BUSY;
++ int retries = 10;
+ u32 y_m_d = 0;
+ u64 h_m_s_ms = 0;
+
+@@ -92,8 +97,11 @@ static int opal_set_rtc_time(struct device *dev, struct rtc_time *tm)
+ rc = opal_rtc_write(y_m_d, h_m_s_ms);
+ if (rc == OPAL_BUSY_EVENT)
+ opal_poll_events(NULL);
+- else
++ else if (retries-- && (rc == OPAL_HARDWARE
++ || rc == OPAL_INTERNAL_ERROR))
+ msleep(10);
++ else if (rc != OPAL_BUSY && rc != OPAL_BUSY_EVENT)
++ break;
+ }
+
+ return rc == OPAL_SUCCESS ? 0 : -EIO;
+diff --git a/drivers/scsi/smartpqi/Makefile b/drivers/scsi/smartpqi/Makefile
+index 0f42a225a664..e6b779930230 100644
+--- a/drivers/scsi/smartpqi/Makefile
++++ b/drivers/scsi/smartpqi/Makefile
+@@ -1,3 +1,3 @@
+ ccflags-y += -I.
+-obj-m += smartpqi.o
++obj-$(CONFIG_SCSI_SMARTPQI) += smartpqi.o
+ smartpqi-objs := smartpqi_init.o smartpqi_sis.o smartpqi_sas_transport.o
+diff --git a/drivers/target/iscsi/iscsi_target_auth.c b/drivers/target/iscsi/iscsi_target_auth.c
+index f9bc8ec6fb6b..9518ffd8b8ba 100644
+--- a/drivers/target/iscsi/iscsi_target_auth.c
++++ b/drivers/target/iscsi/iscsi_target_auth.c
+@@ -421,7 +421,8 @@ static int chap_server_compute_md5(
+ auth_ret = 0;
+ out:
+ kzfree(desc);
+- crypto_free_shash(tfm);
++ if (tfm)
++ crypto_free_shash(tfm);
+ kfree(challenge);
+ kfree(challenge_binhex);
+ return auth_ret;
+diff --git a/drivers/target/iscsi/iscsi_target_nego.c b/drivers/target/iscsi/iscsi_target_nego.c
+index b686e2ce9c0e..8a5e8d17a942 100644
+--- a/drivers/target/iscsi/iscsi_target_nego.c
++++ b/drivers/target/iscsi/iscsi_target_nego.c
+@@ -432,6 +432,9 @@ static void iscsi_target_sk_data_ready(struct sock *sk)
+ if (test_and_set_bit(LOGIN_FLAGS_READ_ACTIVE, &conn->login_flags)) {
+ write_unlock_bh(&sk->sk_callback_lock);
+ pr_debug("Got LOGIN_FLAGS_READ_ACTIVE=1, conn: %p >>>>\n", conn);
++ if (iscsi_target_sk_data_ready == conn->orig_data_ready)
++ return;
++ conn->orig_data_ready(sk);
+ return;
+ }
+
+diff --git a/drivers/usb/Kconfig b/drivers/usb/Kconfig
+index f699abab1787..65812a2f60b4 100644
+--- a/drivers/usb/Kconfig
++++ b/drivers/usb/Kconfig
+@@ -19,6 +19,14 @@ config USB_EHCI_BIG_ENDIAN_MMIO
+ config USB_EHCI_BIG_ENDIAN_DESC
+ bool
+
++config USB_UHCI_BIG_ENDIAN_MMIO
++ bool
++ default y if SPARC_LEON
++
++config USB_UHCI_BIG_ENDIAN_DESC
++ bool
++ default y if SPARC_LEON
++
+ menuconfig USB_SUPPORT
+ bool "USB support"
+ depends on HAS_IOMEM
+diff --git a/drivers/usb/host/Kconfig b/drivers/usb/host/Kconfig
+index b80a94e632af..2763a640359f 100644
+--- a/drivers/usb/host/Kconfig
++++ b/drivers/usb/host/Kconfig
+@@ -625,14 +625,6 @@ config USB_UHCI_ASPEED
+ bool
+ default y if ARCH_ASPEED
+
+-config USB_UHCI_BIG_ENDIAN_MMIO
+- bool
+- default y if SPARC_LEON
+-
+-config USB_UHCI_BIG_ENDIAN_DESC
+- bool
+- default y if SPARC_LEON
+-
+ config USB_FHCI_HCD
+ tristate "Freescale QE USB Host Controller support"
+ depends on OF_GPIO && QE_GPIO && QUICC_ENGINE
+diff --git a/drivers/video/console/dummycon.c b/drivers/video/console/dummycon.c
+index 9269d5685239..b90ef96e43d6 100644
+--- a/drivers/video/console/dummycon.c
++++ b/drivers/video/console/dummycon.c
+@@ -67,7 +67,6 @@ const struct consw dummy_con = {
+ .con_switch = DUMMY,
+ .con_blank = DUMMY,
+ .con_font_set = DUMMY,
+- .con_font_get = DUMMY,
+ .con_font_default = DUMMY,
+ .con_font_copy = DUMMY,
+ };
+diff --git a/drivers/video/fbdev/atmel_lcdfb.c b/drivers/video/fbdev/atmel_lcdfb.c
+index e06358da4b99..3dee267d7c75 100644
+--- a/drivers/video/fbdev/atmel_lcdfb.c
++++ b/drivers/video/fbdev/atmel_lcdfb.c
+@@ -1119,7 +1119,7 @@ static int atmel_lcdfb_of_init(struct atmel_lcdfb_info *sinfo)
+ goto put_display_node;
+ }
+
+- timings_np = of_find_node_by_name(display_np, "display-timings");
++ timings_np = of_get_child_by_name(display_np, "display-timings");
+ if (!timings_np) {
+ dev_err(dev, "failed to find display-timings node\n");
+ ret = -ENODEV;
+@@ -1140,6 +1140,12 @@ static int atmel_lcdfb_of_init(struct atmel_lcdfb_info *sinfo)
+ fb_add_videomode(&fb_vm, &info->modelist);
+ }
+
++ /*
++ * FIXME: Make sure we are not referencing any fields in display_np
++ * and timings_np and drop our references to them before returning to
++ * avoid leaking the nodes on probe deferral and driver unbind.
++ */
++
+ return 0;
+
+ put_timings_node:
+diff --git a/drivers/video/fbdev/geode/video_gx.c b/drivers/video/fbdev/geode/video_gx.c
+index 6082f653c68a..67773e8bbb95 100644
+--- a/drivers/video/fbdev/geode/video_gx.c
++++ b/drivers/video/fbdev/geode/video_gx.c
+@@ -127,7 +127,7 @@ void gx_set_dclk_frequency(struct fb_info *info)
+ int timeout = 1000;
+
+ /* Rev. 1 Geode GXs use a 14 MHz reference clock instead of 48 MHz. */
+- if (cpu_data(0).x86_mask == 1) {
++ if (cpu_data(0).x86_stepping == 1) {
+ pll_table = gx_pll_table_14MHz;
+ pll_table_len = ARRAY_SIZE(gx_pll_table_14MHz);
+ } else {
+diff --git a/drivers/xen/xenbus/xenbus.h b/drivers/xen/xenbus/xenbus.h
+index 149c5e7efc89..092981171df1 100644
+--- a/drivers/xen/xenbus/xenbus.h
++++ b/drivers/xen/xenbus/xenbus.h
+@@ -76,6 +76,7 @@ struct xb_req_data {
+ struct list_head list;
+ wait_queue_head_t wq;
+ struct xsd_sockmsg msg;
++ uint32_t caller_req_id;
+ enum xsd_sockmsg_type type;
+ char *body;
+ const struct kvec *vec;
+diff --git a/drivers/xen/xenbus/xenbus_comms.c b/drivers/xen/xenbus/xenbus_comms.c
+index 5b081a01779d..d239fc3c5e3d 100644
+--- a/drivers/xen/xenbus/xenbus_comms.c
++++ b/drivers/xen/xenbus/xenbus_comms.c
+@@ -309,6 +309,7 @@ static int process_msg(void)
+ goto out;
+
+ if (req->state == xb_req_state_wait_reply) {
++ req->msg.req_id = req->caller_req_id;
+ req->msg.type = state.msg.type;
+ req->msg.len = state.msg.len;
+ req->body = state.body;
+diff --git a/drivers/xen/xenbus/xenbus_xs.c b/drivers/xen/xenbus/xenbus_xs.c
+index 3e59590c7254..3f3b29398ab8 100644
+--- a/drivers/xen/xenbus/xenbus_xs.c
++++ b/drivers/xen/xenbus/xenbus_xs.c
+@@ -227,6 +227,8 @@ static void xs_send(struct xb_req_data *req, struct xsd_sockmsg *msg)
+ req->state = xb_req_state_queued;
+ init_waitqueue_head(&req->wq);
+
++ /* Save the caller req_id and restore it later in the reply */
++ req->caller_req_id = req->msg.req_id;
+ req->msg.req_id = xs_request_enter(req);
+
+ mutex_lock(&xb_write_mutex);
+@@ -310,6 +312,7 @@ static void *xs_talkv(struct xenbus_transaction t,
+ req->num_vecs = num_vecs;
+ req->cb = xs_wake_up;
+
++ msg.req_id = 0;
+ msg.tx_id = t.id;
+ msg.type = type;
+ msg.len = 0;
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 0f57602092cf..c04183cc2117 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1330,8 +1330,11 @@ static noinline int run_delalloc_nocow(struct inode *inode,
+ leaf = path->nodes[0];
+ if (path->slots[0] >= btrfs_header_nritems(leaf)) {
+ ret = btrfs_next_leaf(root, path);
+- if (ret < 0)
++ if (ret < 0) {
++ if (cow_start != (u64)-1)
++ cur_offset = cow_start;
+ goto error;
++ }
+ if (ret > 0)
+ break;
+ leaf = path->nodes[0];
+@@ -3366,6 +3369,11 @@ int btrfs_orphan_add(struct btrfs_trans_handle *trans,
+ ret = btrfs_orphan_reserve_metadata(trans, inode);
+ ASSERT(!ret);
+ if (ret) {
++ /*
++ * dec doesn't need spin_lock as ->orphan_block_rsv
++ * would be released only if ->orphan_inodes is
++ * zero.
++ */
+ atomic_dec(&root->orphan_inodes);
+ clear_bit(BTRFS_INODE_ORPHAN_META_RESERVED,
+ &inode->runtime_flags);
+@@ -3380,12 +3388,17 @@ int btrfs_orphan_add(struct btrfs_trans_handle *trans,
+ if (insert >= 1) {
+ ret = btrfs_insert_orphan_item(trans, root, btrfs_ino(inode));
+ if (ret) {
+- atomic_dec(&root->orphan_inodes);
+ if (reserve) {
+ clear_bit(BTRFS_INODE_ORPHAN_META_RESERVED,
+ &inode->runtime_flags);
+ btrfs_orphan_release_metadata(inode);
+ }
++ /*
++ * btrfs_orphan_commit_root may race with us and set
++ * ->orphan_block_rsv to zero, in order to avoid that,
++ * decrease ->orphan_inodes after everything is done.
++ */
++ atomic_dec(&root->orphan_inodes);
+ if (ret != -EEXIST) {
+ clear_bit(BTRFS_INODE_HAS_ORPHAN_ITEM,
+ &inode->runtime_flags);
+@@ -3417,28 +3430,26 @@ static int btrfs_orphan_del(struct btrfs_trans_handle *trans,
+ {
+ struct btrfs_root *root = inode->root;
+ int delete_item = 0;
+- int release_rsv = 0;
+ int ret = 0;
+
+- spin_lock(&root->orphan_lock);
+ if (test_and_clear_bit(BTRFS_INODE_HAS_ORPHAN_ITEM,
+ &inode->runtime_flags))
+ delete_item = 1;
+
++ if (delete_item && trans)
++ ret = btrfs_del_orphan_item(trans, root, btrfs_ino(inode));
++
+ if (test_and_clear_bit(BTRFS_INODE_ORPHAN_META_RESERVED,
+ &inode->runtime_flags))
+- release_rsv = 1;
+- spin_unlock(&root->orphan_lock);
++ btrfs_orphan_release_metadata(inode);
+
+- if (delete_item) {
++ /*
++ * btrfs_orphan_commit_root may race with us and set ->orphan_block_rsv
++ * to zero, in order to avoid that, decrease ->orphan_inodes after
++ * everything is done.
++ */
++ if (delete_item)
+ atomic_dec(&root->orphan_inodes);
+- if (trans)
+- ret = btrfs_del_orphan_item(trans, root,
+- btrfs_ino(inode));
+- }
+-
+- if (release_rsv)
+- btrfs_orphan_release_metadata(inode);
+
+ return ret;
+ }
+@@ -5263,7 +5274,7 @@ void btrfs_evict_inode(struct inode *inode)
+ trace_btrfs_inode_evict(inode);
+
+ if (!root) {
+- kmem_cache_free(btrfs_inode_cachep, BTRFS_I(inode));
++ clear_inode(inode);
+ return;
+ }
+
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 7bf9b31561db..b5e1afb30f36 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -28,6 +28,7 @@
+ #include "hash.h"
+ #include "compression.h"
+ #include "qgroup.h"
++#include "inode-map.h"
+
+ /* magic values for the inode_only field in btrfs_log_inode:
+ *
+@@ -2494,6 +2495,9 @@ static noinline int walk_down_log_tree(struct btrfs_trans_handle *trans,
+ clean_tree_block(fs_info, next);
+ btrfs_wait_tree_block_writeback(next);
+ btrfs_tree_unlock(next);
++ } else {
++ if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &next->bflags))
++ clear_extent_buffer_dirty(next);
+ }
+
+ WARN_ON(root_owner !=
+@@ -2574,6 +2578,9 @@ static noinline int walk_up_log_tree(struct btrfs_trans_handle *trans,
+ clean_tree_block(fs_info, next);
+ btrfs_wait_tree_block_writeback(next);
+ btrfs_tree_unlock(next);
++ } else {
++ if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &next->bflags))
++ clear_extent_buffer_dirty(next);
+ }
+
+ WARN_ON(root_owner != BTRFS_TREE_LOG_OBJECTID);
+@@ -2652,6 +2659,9 @@ static int walk_log_tree(struct btrfs_trans_handle *trans,
+ clean_tree_block(fs_info, next);
+ btrfs_wait_tree_block_writeback(next);
+ btrfs_tree_unlock(next);
++ } else {
++ if (test_and_clear_bit(EXTENT_BUFFER_DIRTY, &next->bflags))
++ clear_extent_buffer_dirty(next);
+ }
+
+ WARN_ON(log->root_key.objectid !=
+@@ -3040,13 +3050,14 @@ static void free_log_tree(struct btrfs_trans_handle *trans,
+
+ while (1) {
+ ret = find_first_extent_bit(&log->dirty_log_pages,
+- 0, &start, &end, EXTENT_DIRTY | EXTENT_NEW,
++ 0, &start, &end,
++ EXTENT_DIRTY | EXTENT_NEW | EXTENT_NEED_WAIT,
+ NULL);
+ if (ret)
+ break;
+
+ clear_extent_bits(&log->dirty_log_pages, start, end,
+- EXTENT_DIRTY | EXTENT_NEW);
++ EXTENT_DIRTY | EXTENT_NEW | EXTENT_NEED_WAIT);
+ }
+
+ /*
+@@ -5705,6 +5716,23 @@ int btrfs_recover_log_trees(struct btrfs_root *log_root_tree)
+ path);
+ }
+
++ if (!ret && wc.stage == LOG_WALK_REPLAY_ALL) {
++ struct btrfs_root *root = wc.replay_dest;
++
++ btrfs_release_path(path);
++
++ /*
++ * We have just replayed everything, and the highest
++ * objectid of fs roots probably has changed in case
++ * some inode_item's got replayed.
++ *
++ * root->objectid_mutex is not acquired as log replay
++ * could only happen during mount.
++ */
++ ret = btrfs_find_highest_objectid(root,
++ &root->highest_objectid);
++ }
++
+ key.offset = found_key.offset - 1;
+ wc.replay_dest->log_root = NULL;
+ free_extent_buffer(log->node);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 534a9130f625..4c2f8b57bdc7 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -3767,10 +3767,18 @@ static ssize_t ext4_direct_IO_write(struct kiocb *iocb, struct iov_iter *iter)
+ /* Credits for sb + inode write */
+ handle = ext4_journal_start(inode, EXT4_HT_INODE, 2);
+ if (IS_ERR(handle)) {
+- /* This is really bad luck. We've written the data
+- * but cannot extend i_size. Bail out and pretend
+- * the write failed... */
+- ret = PTR_ERR(handle);
++ /*
++ * We wrote the data but cannot extend
++ * i_size. Bail out. In async io case, we do
++ * not return error here because we have
++ * already submitted the corresponding
++ * bio. Returning error here makes the caller
++ * think that this IO is done and failed
++ * resulting in race with bio's completion
++ * handler.
++ */
++ if (!ret)
++ ret = PTR_ERR(handle);
+ if (inode->i_nlink)
+ ext4_orphan_del(NULL, inode);
+
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 7c46693a14d7..71594382e195 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -742,6 +742,7 @@ __acquires(bitlock)
+ }
+
+ ext4_unlock_group(sb, grp);
++ ext4_commit_super(sb, 1);
+ ext4_handle_error(sb);
+ /*
+ * We only get here in the ERRORS_RO case; relocking the group
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index d5f0d96169c5..8c50d6878aa5 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -736,7 +736,7 @@ int gfs2_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
+ __be64 *ptr;
+ sector_t lblock;
+ sector_t lend;
+- int ret;
++ int ret = 0;
+ int eob;
+ unsigned int len;
+ struct buffer_head *bh;
+@@ -748,12 +748,14 @@ int gfs2_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
+ goto out;
+ }
+
+- if ((flags & IOMAP_REPORT) && gfs2_is_stuffed(ip)) {
+- gfs2_stuffed_iomap(inode, iomap);
+- if (pos >= iomap->length)
+- return -ENOENT;
+- ret = 0;
+- goto out;
++ if (gfs2_is_stuffed(ip)) {
++ if (flags & IOMAP_REPORT) {
++ gfs2_stuffed_iomap(inode, iomap);
++ if (pos >= iomap->length)
++ ret = -ENOENT;
++ goto out;
++ }
++ BUG_ON(!(flags & IOMAP_WRITE));
+ }
+
+ lblock = pos >> inode->i_blkbits;
+@@ -764,7 +766,7 @@ int gfs2_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
+ iomap->type = IOMAP_HOLE;
+ iomap->length = (u64)(lend - lblock) << inode->i_blkbits;
+ iomap->flags = IOMAP_F_MERGED;
+- bmap_lock(ip, 0);
++ bmap_lock(ip, flags & IOMAP_WRITE);
+
+ /*
+ * Directory data blocks have a struct gfs2_meta_header header, so the
+@@ -807,27 +809,28 @@ int gfs2_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
+ iomap->flags |= IOMAP_F_BOUNDARY;
+ iomap->length = (u64)len << inode->i_blkbits;
+
+- ret = 0;
+-
+ out_release:
+ release_metapath(&mp);
+- bmap_unlock(ip, 0);
++ bmap_unlock(ip, flags & IOMAP_WRITE);
+ out:
+ trace_gfs2_iomap_end(ip, iomap, ret);
+ return ret;
+
+ do_alloc:
+- if (!(flags & IOMAP_WRITE)) {
+- if (pos >= i_size_read(inode)) {
++ if (flags & IOMAP_WRITE) {
++ ret = gfs2_iomap_alloc(inode, iomap, flags, &mp);
++ } else if (flags & IOMAP_REPORT) {
++ loff_t size = i_size_read(inode);
++ if (pos >= size)
+ ret = -ENOENT;
+- goto out_release;
+- }
+- ret = 0;
+- iomap->length = hole_size(inode, lblock, &mp);
+- goto out_release;
++ else if (height <= ip->i_height)
++ iomap->length = hole_size(inode, lblock, &mp);
++ else
++ iomap->length = size - pos;
++ } else {
++ if (height <= ip->i_height)
++ iomap->length = hole_size(inode, lblock, &mp);
+ }
+-
+- ret = gfs2_iomap_alloc(inode, iomap, flags, &mp);
+ goto out_release;
+ }
+
+diff --git a/fs/jbd2/transaction.c b/fs/jbd2/transaction.c
+index 8b08044b3120..c0681814c379 100644
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -495,8 +495,10 @@ void jbd2_journal_free_reserved(handle_t *handle)
+ EXPORT_SYMBOL(jbd2_journal_free_reserved);
+
+ /**
+- * int jbd2_journal_start_reserved(handle_t *handle) - start reserved handle
++ * int jbd2_journal_start_reserved() - start reserved handle
+ * @handle: handle to start
++ * @type: for handle statistics
++ * @line_no: for handle statistics
+ *
+ * Start handle that has been previously reserved with jbd2_journal_reserve().
+ * This attaches @handle to the running transaction (or creates one if there's
+@@ -626,6 +628,7 @@ int jbd2_journal_extend(handle_t *handle, int nblocks)
+ * int jbd2_journal_restart() - restart a handle .
+ * @handle: handle to restart
+ * @nblocks: nr credits requested
++ * @gfp_mask: memory allocation flags (for start_this_handle)
+ *
+ * Restart a handle for a multi-transaction filesystem
+ * operation.
+diff --git a/fs/mbcache.c b/fs/mbcache.c
+index b8b8b9ced9f8..46b23bb432fe 100644
+--- a/fs/mbcache.c
++++ b/fs/mbcache.c
+@@ -94,6 +94,7 @@ int mb_cache_entry_create(struct mb_cache *cache, gfp_t mask, u32 key,
+ entry->e_key = key;
+ entry->e_value = value;
+ entry->e_reusable = reusable;
++ entry->e_referenced = 0;
+ head = mb_cache_entry_head(cache, key);
+ hlist_bl_lock(head);
+ hlist_bl_for_each_entry(dup, dup_node, head, e_hash_list) {
+diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
+index 4689940a953c..5193218f5889 100644
+--- a/fs/ocfs2/dlmglue.c
++++ b/fs/ocfs2/dlmglue.c
+@@ -2486,6 +2486,15 @@ int ocfs2_inode_lock_with_page(struct inode *inode,
+ ret = ocfs2_inode_lock_full(inode, ret_bh, ex, OCFS2_LOCK_NONBLOCK);
+ if (ret == -EAGAIN) {
+ unlock_page(page);
++ /*
++ * If we can't get inode lock immediately, we should not return
++ * directly here, since this will lead to a softlockup problem.
++ * The method is to get a blocking lock and immediately unlock
++ * before returning, this can avoid CPU resource waste due to
++ * lots of retries, and benefits fairness in getting lock.
++ */
++ if (ocfs2_inode_lock(inode, ret_bh, ex) == 0)
++ ocfs2_inode_unlock(inode, ex);
+ ret = AOP_TRUNCATED_PAGE;
+ }
+
+diff --git a/fs/seq_file.c b/fs/seq_file.c
+index 4be761c1a03d..eea09f6d8830 100644
+--- a/fs/seq_file.c
++++ b/fs/seq_file.c
+@@ -181,8 +181,11 @@ ssize_t seq_read(struct file *file, char __user *buf, size_t size, loff_t *ppos)
+ * if request is to read from zero offset, reset iterator to first
+ * record as it might have been already advanced by previous requests
+ */
+- if (*ppos == 0)
++ if (*ppos == 0) {
+ m->index = 0;
++ m->version = 0;
++ m->count = 0;
++ }
+
+ /* Don't assume *ppos is where we left it */
+ if (unlikely(*ppos != m->read_pos)) {
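The seq_file hunk above resets not just the record index but also the cached version and the count of leftover buffered bytes whenever a read restarts at offset zero; resetting only `m->index` could replay stale buffered data. A minimal sketch of the same pattern, using a hypothetical simplified iterator (not the kernel `struct seq_file`):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical simplified iterator mirroring the three fields the
 * patch resets: record index, version, and count of bytes still
 * buffered from a previous read. */
struct seq_iter {
	size_t index;      /* which record we are on */
	unsigned version;  /* iterator generation */
	size_t count;      /* leftover buffered bytes */
};

/* On a read restarting at offset 0, drop all cached state so the
 * first record is regenerated from scratch; the bug being fixed was
 * resetting only ->index and replaying stale ->count bytes. */
static void seq_rewind(struct seq_iter *m, long long ppos)
{
	if (ppos == 0) {
		m->index = 0;
		m->version = 0;
		m->count = 0;
	}
}
```

A nonzero offset leaves the cached state alone, matching the patched `seq_read()`.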
+diff --git a/include/drm/i915_pciids.h b/include/drm/i915_pciids.h
+index 972a25633525..c65e4489006d 100644
+--- a/include/drm/i915_pciids.h
++++ b/include/drm/i915_pciids.h
+@@ -392,6 +392,12 @@
+ INTEL_VGA_DEVICE(0x3EA8, info), /* ULT GT3 */ \
+ INTEL_VGA_DEVICE(0x3EA5, info) /* ULT GT3 */
+
++#define INTEL_CFL_IDS(info) \
++ INTEL_CFL_S_GT1_IDS(info), \
++ INTEL_CFL_S_GT2_IDS(info), \
++ INTEL_CFL_H_GT2_IDS(info), \
++ INTEL_CFL_U_GT3_IDS(info)
++
+ /* CNL U 2+2 */
+ #define INTEL_CNL_U_GT2_IDS(info) \
+ INTEL_VGA_DEVICE(0x5A52, info), \
+diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
+index 631354acfa72..73bc63e0a1c4 100644
+--- a/include/linux/compiler-gcc.h
++++ b/include/linux/compiler-gcc.h
+@@ -167,8 +167,6 @@
+
+ #if GCC_VERSION >= 40100
+ # define __compiletime_object_size(obj) __builtin_object_size(obj, 0)
+-
+-#define __nostackprotector __attribute__((__optimize__("no-stack-protector")))
+ #endif
+
+ #if GCC_VERSION >= 40300
+@@ -196,6 +194,11 @@
+ #endif /* __CHECKER__ */
+ #endif /* GCC_VERSION >= 40300 */
+
++#if GCC_VERSION >= 40400
++#define __optimize(level) __attribute__((__optimize__(level)))
++#define __nostackprotector __optimize("no-stack-protector")
++#endif /* GCC_VERSION >= 40400 */
++
+ #if GCC_VERSION >= 40500
+
+ #ifndef __CHECKER__
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index 52e611ab9a6c..5ff818e9a836 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -271,6 +271,10 @@ static __always_inline void __write_once_size(volatile void *p, void *res, int s
+
+ #endif /* __ASSEMBLY__ */
+
++#ifndef __optimize
++# define __optimize(level)
++#endif
++
+ /* Compile time object size, -1 for unknown */
+ #ifndef __compiletime_object_size
+ # define __compiletime_object_size(obj) -1
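Together, the compiler-gcc.h and compiler.h hunks give every compiler a definition of `__optimize()`: GCC >= 4.4 maps it to the `optimize` function attribute, and everything else gets an empty fallback, so annotations like `__nostackprotector` compile everywhere. A sketch of the same conditional-attribute pattern, with a hypothetical macro name to avoid clashing with real headers:

```c
#include <assert.h>

/* Conditional attribute: expand to the real GCC attribute when the
 * compiler supports it, and to nothing otherwise, so annotated
 * functions still compile on other compilers. */
#if defined(__GNUC__) && !defined(__clang__)
# define MY_OPTIMIZE(level) __attribute__((__optimize__(level)))
#else
# define MY_OPTIMIZE(level)
#endif

/* Compiled at -O0 on GCC; a plain function elsewhere. */
static int MY_OPTIMIZE("O0") add_unoptimized(int a, int b)
{
	return a + b;
}
```

The empty fallback is why the kernel defines `__optimize` in compiler.h only `#ifndef __optimize`: the GCC header wins when present.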
+diff --git a/include/linux/cpuidle.h b/include/linux/cpuidle.h
+index 8f7788d23b57..a6989e02d0a0 100644
+--- a/include/linux/cpuidle.h
++++ b/include/linux/cpuidle.h
+@@ -225,7 +225,7 @@ static inline void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev,
+ }
+ #endif
+
+-#ifdef CONFIG_ARCH_HAS_CPU_RELAX
++#if defined(CONFIG_CPU_IDLE) && defined(CONFIG_ARCH_HAS_CPU_RELAX)
+ void cpuidle_poll_state_init(struct cpuidle_driver *drv);
+ #else
+ static inline void cpuidle_poll_state_init(struct cpuidle_driver *drv) {}
+diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
+index 296d1e0ea87b..b708e5169d1d 100644
+--- a/include/linux/jbd2.h
++++ b/include/linux/jbd2.h
+@@ -418,26 +418,41 @@ static inline void jbd_unlock_bh_journal_head(struct buffer_head *bh)
+ #define JI_WAIT_DATA (1 << __JI_WAIT_DATA)
+
+ /**
+- * struct jbd_inode is the structure linking inodes in ordered mode
+- * present in a transaction so that we can sync them during commit.
++ * struct jbd_inode - The jbd_inode type is the structure linking inodes in
++ * ordered mode present in a transaction so that we can sync them during commit.
+ */
+ struct jbd2_inode {
+- /* Which transaction does this inode belong to? Either the running
+- * transaction or the committing one. [j_list_lock] */
++ /**
++ * @i_transaction:
++ *
++ * Which transaction does this inode belong to? Either the running
++ * transaction or the committing one. [j_list_lock]
++ */
+ transaction_t *i_transaction;
+
+- /* Pointer to the running transaction modifying inode's data in case
+- * there is already a committing transaction touching it. [j_list_lock] */
++ /**
++ * @i_next_transaction:
++ *
++ * Pointer to the running transaction modifying inode's data in case
++ * there is already a committing transaction touching it. [j_list_lock]
++ */
+ transaction_t *i_next_transaction;
+
+- /* List of inodes in the i_transaction [j_list_lock] */
++ /**
++ * @i_list: List of inodes in the i_transaction [j_list_lock]
++ */
+ struct list_head i_list;
+
+- /* VFS inode this inode belongs to [constant during the lifetime
+- * of the structure] */
++ /**
++ * @i_vfs_inode:
++ *
++ * VFS inode this inode belongs to [constant for lifetime of structure]
++ */
+ struct inode *i_vfs_inode;
+
+- /* Flags of inode [j_list_lock] */
++ /**
++ * @i_flags: Flags of inode [j_list_lock]
++ */
+ unsigned long i_flags;
+ };
+
+@@ -447,12 +462,20 @@ struct jbd2_revoke_table_s;
+ * struct handle_s - The handle_s type is the concrete type associated with
+ * handle_t.
+ * @h_transaction: Which compound transaction is this update a part of?
++ * @h_journal: Which journal handle belongs to - used iff h_reserved set.
++ * @h_rsv_handle: Handle reserved for finishing the logical operation.
+ * @h_buffer_credits: Number of remaining buffers we are allowed to dirty.
+- * @h_ref: Reference count on this handle
+- * @h_err: Field for caller's use to track errors through large fs operations
+- * @h_sync: flag for sync-on-close
+- * @h_jdata: flag to force data journaling
+- * @h_aborted: flag indicating fatal error on handle
++ * @h_ref: Reference count on this handle.
++ * @h_err: Field for caller's use to track errors through large fs operations.
++ * @h_sync: Flag for sync-on-close.
++ * @h_jdata: Flag to force data journaling.
++ * @h_reserved: Flag for handle for reserved credits.
++ * @h_aborted: Flag indicating fatal error on handle.
++ * @h_type: For handle statistics.
++ * @h_line_no: For handle statistics.
++ * @h_start_jiffies: Handle Start time.
++ * @h_requested_credits: Holds @h_buffer_credits after handle is started.
++ * @saved_alloc_context: Saved context while transaction is open.
+ **/
+
+ /* Docbook can't yet cope with the bit fields, but will leave the documentation
+@@ -462,32 +485,23 @@ struct jbd2_revoke_table_s;
+ struct jbd2_journal_handle
+ {
+ union {
+- /* Which compound transaction is this update a part of? */
+ transaction_t *h_transaction;
+ /* Which journal handle belongs to - used iff h_reserved set */
+ journal_t *h_journal;
+ };
+
+- /* Handle reserved for finishing the logical operation */
+ handle_t *h_rsv_handle;
+-
+- /* Number of remaining buffers we are allowed to dirty: */
+ int h_buffer_credits;
+-
+- /* Reference count on this handle */
+ int h_ref;
+-
+- /* Field for caller's use to track errors through large fs */
+- /* operations */
+ int h_err;
+
+ /* Flags [no locking] */
+- unsigned int h_sync: 1; /* sync-on-close */
+- unsigned int h_jdata: 1; /* force data journaling */
+- unsigned int h_reserved: 1; /* handle with reserved credits */
+- unsigned int h_aborted: 1; /* fatal error on handle */
+- unsigned int h_type: 8; /* for handle statistics */
+- unsigned int h_line_no: 16; /* for handle statistics */
++ unsigned int h_sync: 1;
++ unsigned int h_jdata: 1;
++ unsigned int h_reserved: 1;
++ unsigned int h_aborted: 1;
++ unsigned int h_type: 8;
++ unsigned int h_line_no: 16;
+
+ unsigned long h_start_jiffies;
+ unsigned int h_requested_credits;
+@@ -729,228 +743,253 @@ jbd2_time_diff(unsigned long start, unsigned long end)
+ /**
+ * struct journal_s - The journal_s type is the concrete type associated with
+ * journal_t.
+- * @j_flags: General journaling state flags
+- * @j_errno: Is there an outstanding uncleared error on the journal (from a
+- * prior abort)?
+- * @j_sb_buffer: First part of superblock buffer
+- * @j_superblock: Second part of superblock buffer
+- * @j_format_version: Version of the superblock format
+- * @j_state_lock: Protect the various scalars in the journal
+- * @j_barrier_count: Number of processes waiting to create a barrier lock
+- * @j_barrier: The barrier lock itself
+- * @j_running_transaction: The current running transaction..
+- * @j_committing_transaction: the transaction we are pushing to disk
+- * @j_checkpoint_transactions: a linked circular list of all transactions
+- * waiting for checkpointing
+- * @j_wait_transaction_locked: Wait queue for waiting for a locked transaction
+- * to start committing, or for a barrier lock to be released
+- * @j_wait_done_commit: Wait queue for waiting for commit to complete
+- * @j_wait_commit: Wait queue to trigger commit
+- * @j_wait_updates: Wait queue to wait for updates to complete
+- * @j_wait_reserved: Wait queue to wait for reserved buffer credits to drop
+- * @j_checkpoint_mutex: Mutex for locking against concurrent checkpoints
+- * @j_head: Journal head - identifies the first unused block in the journal
+- * @j_tail: Journal tail - identifies the oldest still-used block in the
+- * journal.
+- * @j_free: Journal free - how many free blocks are there in the journal?
+- * @j_first: The block number of the first usable block
+- * @j_last: The block number one beyond the last usable block
+- * @j_dev: Device where we store the journal
+- * @j_blocksize: blocksize for the location where we store the journal.
+- * @j_blk_offset: starting block offset for into the device where we store the
+- * journal
+- * @j_fs_dev: Device which holds the client fs. For internal journal this will
+- * be equal to j_dev
+- * @j_reserved_credits: Number of buffers reserved from the running transaction
+- * @j_maxlen: Total maximum capacity of the journal region on disk.
+- * @j_list_lock: Protects the buffer lists and internal buffer state.
+- * @j_inode: Optional inode where we store the journal. If present, all journal
+- * block numbers are mapped into this inode via bmap().
+- * @j_tail_sequence: Sequence number of the oldest transaction in the log
+- * @j_transaction_sequence: Sequence number of the next transaction to grant
+- * @j_commit_sequence: Sequence number of the most recently committed
+- * transaction
+- * @j_commit_request: Sequence number of the most recent transaction wanting
+- * commit
+- * @j_uuid: Uuid of client object.
+- * @j_task: Pointer to the current commit thread for this journal
+- * @j_max_transaction_buffers: Maximum number of metadata buffers to allow in a
+- * single compound commit transaction
+- * @j_commit_interval: What is the maximum transaction lifetime before we begin
+- * a commit?
+- * @j_commit_timer: The timer used to wakeup the commit thread
+- * @j_revoke_lock: Protect the revoke table
+- * @j_revoke: The revoke table - maintains the list of revoked blocks in the
+- * current transaction.
+- * @j_revoke_table: alternate revoke tables for j_revoke
+- * @j_wbuf: array of buffer_heads for jbd2_journal_commit_transaction
+- * @j_wbufsize: maximum number of buffer_heads allowed in j_wbuf, the
+- * number that will fit in j_blocksize
+- * @j_last_sync_writer: most recent pid which did a synchronous write
+- * @j_history_lock: Protect the transactions statistics history
+- * @j_proc_entry: procfs entry for the jbd statistics directory
+- * @j_stats: Overall statistics
+- * @j_private: An opaque pointer to fs-private information.
+- * @j_trans_commit_map: Lockdep entity to track transaction commit dependencies
+ */
+-
+ struct journal_s
+ {
+- /* General journaling state flags [j_state_lock] */
++ /**
++ * @j_flags: General journaling state flags [j_state_lock]
++ */
+ unsigned long j_flags;
+
+- /*
++ /**
++ * @j_errno:
++ *
+ * Is there an outstanding uncleared error on the journal (from a prior
+ * abort)? [j_state_lock]
+ */
+ int j_errno;
+
+- /* The superblock buffer */
++ /**
++ * @j_sb_buffer: The first part of the superblock buffer.
++ */
+ struct buffer_head *j_sb_buffer;
++
++ /**
++ * @j_superblock: The second part of the superblock buffer.
++ */
+ journal_superblock_t *j_superblock;
+
+- /* Version of the superblock format */
++ /**
++ * @j_format_version: Version of the superblock format.
++ */
+ int j_format_version;
+
+- /*
+- * Protect the various scalars in the journal
++ /**
++ * @j_state_lock: Protect the various scalars in the journal.
+ */
+ rwlock_t j_state_lock;
+
+- /*
++ /**
++ * @j_barrier_count:
++ *
+ * Number of processes waiting to create a barrier lock [j_state_lock]
+ */
+ int j_barrier_count;
+
+- /* The barrier lock itself */
++ /**
++ * @j_barrier: The barrier lock itself.
++ */
+ struct mutex j_barrier;
+
+- /*
++ /**
++ * @j_running_transaction:
++ *
+ * Transactions: The current running transaction...
+ * [j_state_lock] [caller holding open handle]
+ */
+ transaction_t *j_running_transaction;
+
+- /*
++ /**
++ * @j_committing_transaction:
++ *
+ * the transaction we are pushing to disk
+ * [j_state_lock] [caller holding open handle]
+ */
+ transaction_t *j_committing_transaction;
+
+- /*
++ /**
++ * @j_checkpoint_transactions:
++ *
+ * ... and a linked circular list of all transactions waiting for
+ * checkpointing. [j_list_lock]
+ */
+ transaction_t *j_checkpoint_transactions;
+
+- /*
++ /**
++ * @j_wait_transaction_locked:
++ *
+ * Wait queue for waiting for a locked transaction to start committing,
+- * or for a barrier lock to be released
++ * or for a barrier lock to be released.
+ */
+ wait_queue_head_t j_wait_transaction_locked;
+
+- /* Wait queue for waiting for commit to complete */
++ /**
++ * @j_wait_done_commit: Wait queue for waiting for commit to complete.
++ */
+ wait_queue_head_t j_wait_done_commit;
+
+- /* Wait queue to trigger commit */
++ /**
++ * @j_wait_commit: Wait queue to trigger commit.
++ */
+ wait_queue_head_t j_wait_commit;
+
+- /* Wait queue to wait for updates to complete */
++ /**
++ * @j_wait_updates: Wait queue to wait for updates to complete.
++ */
+ wait_queue_head_t j_wait_updates;
+
+- /* Wait queue to wait for reserved buffer credits to drop */
++ /**
++ * @j_wait_reserved:
++ *
++ * Wait queue to wait for reserved buffer credits to drop.
++ */
+ wait_queue_head_t j_wait_reserved;
+
+- /* Semaphore for locking against concurrent checkpoints */
++ /**
++ * @j_checkpoint_mutex:
++ *
++ * Semaphore for locking against concurrent checkpoints.
++ */
+ struct mutex j_checkpoint_mutex;
+
+- /*
++ /**
++ * @j_chkpt_bhs:
++ *
+ * List of buffer heads used by the checkpoint routine. This
+ * was moved from jbd2_log_do_checkpoint() to reduce stack
+ * usage. Access to this array is controlled by the
+- * j_checkpoint_mutex. [j_checkpoint_mutex]
++ * @j_checkpoint_mutex. [j_checkpoint_mutex]
+ */
+ struct buffer_head *j_chkpt_bhs[JBD2_NR_BATCH];
+-
+- /*
++
++ /**
++ * @j_head:
++ *
+ * Journal head: identifies the first unused block in the journal.
+ * [j_state_lock]
+ */
+ unsigned long j_head;
+
+- /*
++ /**
++ * @j_tail:
++ *
+ * Journal tail: identifies the oldest still-used block in the journal.
+ * [j_state_lock]
+ */
+ unsigned long j_tail;
+
+- /*
++ /**
++ * @j_free:
++ *
+ * Journal free: how many free blocks are there in the journal?
+ * [j_state_lock]
+ */
+ unsigned long j_free;
+
+- /*
+- * Journal start and end: the block numbers of the first usable block
+- * and one beyond the last usable block in the journal. [j_state_lock]
++ /**
++ * @j_first:
++ *
++ * The block number of the first usable block in the journal
++ * [j_state_lock].
+ */
+ unsigned long j_first;
++
++ /**
++ * @j_last:
++ *
++ * The block number one beyond the last usable block in the journal
++ * [j_state_lock].
++ */
+ unsigned long j_last;
+
+- /*
+- * Device, blocksize and starting block offset for the location where we
+- * store the journal.
++ /**
++ * @j_dev: Device where we store the journal.
+ */
+ struct block_device *j_dev;
++
++ /**
++ * @j_blocksize: Block size for the location where we store the journal.
++ */
+ int j_blocksize;
++
++ /**
++ * @j_blk_offset:
++ *
++ * Starting block offset into the device where we store the journal.
++ */
+ unsigned long long j_blk_offset;
++
++ /**
++ * @j_devname: Journal device name.
++ */
+ char j_devname[BDEVNAME_SIZE+24];
+
+- /*
++ /**
++ * @j_fs_dev:
++ *
+ * Device which holds the client fs. For internal journal this will be
+ * equal to j_dev.
+ */
+ struct block_device *j_fs_dev;
+
+- /* Total maximum capacity of the journal region on disk. */
++ /**
++ * @j_maxlen: Total maximum capacity of the journal region on disk.
++ */
+ unsigned int j_maxlen;
+
+- /* Number of buffers reserved from the running transaction */
++ /**
++ * @j_reserved_credits:
++ *
++ * Number of buffers reserved from the running transaction.
++ */
+ atomic_t j_reserved_credits;
+
+- /*
+- * Protects the buffer lists and internal buffer state.
++ /**
++ * @j_list_lock: Protects the buffer lists and internal buffer state.
+ */
+ spinlock_t j_list_lock;
+
+- /* Optional inode where we store the journal. If present, all */
+- /* journal block numbers are mapped into this inode via */
+- /* bmap(). */
++ /**
++ * @j_inode:
++ *
++ * Optional inode where we store the journal. If present, all
++ * journal block numbers are mapped into this inode via bmap().
++ */
+ struct inode *j_inode;
+
+- /*
++ /**
++ * @j_tail_sequence:
++ *
+ * Sequence number of the oldest transaction in the log [j_state_lock]
+ */
+ tid_t j_tail_sequence;
+
+- /*
++ /**
++ * @j_transaction_sequence:
++ *
+ * Sequence number of the next transaction to grant [j_state_lock]
+ */
+ tid_t j_transaction_sequence;
+
+- /*
++ /**
++ * @j_commit_sequence:
++ *
+ * Sequence number of the most recently committed transaction
+ * [j_state_lock].
+ */
+ tid_t j_commit_sequence;
+
+- /*
++ /**
++ * @j_commit_request:
++ *
+ * Sequence number of the most recent transaction wanting commit
+ * [j_state_lock]
+ */
+ tid_t j_commit_request;
+
+- /*
++ /**
++ * @j_uuid:
++ *
+ * Journal uuid: identifies the object (filesystem, LVM volume etc)
+ * backed by this journal. This will eventually be replaced by an array
+ * of uuids, allowing us to index multiple devices within a single
+@@ -958,85 +997,151 @@ struct journal_s
+ */
+ __u8 j_uuid[16];
+
+- /* Pointer to the current commit thread for this journal */
++ /**
++ * @j_task: Pointer to the current commit thread for this journal.
++ */
+ struct task_struct *j_task;
+
+- /*
++ /**
++ * @j_max_transaction_buffers:
++ *
+ * Maximum number of metadata buffers to allow in a single compound
+- * commit transaction
++ * commit transaction.
+ */
+ int j_max_transaction_buffers;
+
+- /*
++ /**
++ * @j_commit_interval:
++ *
+ * What is the maximum transaction lifetime before we begin a commit?
+ */
+ unsigned long j_commit_interval;
+
+- /* The timer used to wakeup the commit thread: */
++ /**
++ * @j_commit_timer: The timer used to wakeup the commit thread.
++ */
+ struct timer_list j_commit_timer;
+
+- /*
+- * The revoke table: maintains the list of revoked blocks in the
+- * current transaction. [j_revoke_lock]
++ /**
++ * @j_revoke_lock: Protect the revoke table.
+ */
+ spinlock_t j_revoke_lock;
++
++ /**
++ * @j_revoke:
++ *
++ * The revoke table - maintains the list of revoked blocks in the
++ * current transaction.
++ */
+ struct jbd2_revoke_table_s *j_revoke;
++
++ /**
++ * @j_revoke_table: Alternate revoke tables for j_revoke.
++ */
+ struct jbd2_revoke_table_s *j_revoke_table[2];
+
+- /*
+- * array of bhs for jbd2_journal_commit_transaction
++ /**
++ * @j_wbuf: Array of bhs for jbd2_journal_commit_transaction.
+ */
+ struct buffer_head **j_wbuf;
++
++ /**
++ * @j_wbufsize:
++ *
++ * Size of @j_wbuf array.
++ */
+ int j_wbufsize;
+
+- /*
+- * this is the pid of hte last person to run a synchronous operation
+- * through the journal
++ /**
++ * @j_last_sync_writer:
++ *
++ * The pid of the last person to run a synchronous operation
++ * through the journal.
+ */
+ pid_t j_last_sync_writer;
+
+- /*
+- * the average amount of time in nanoseconds it takes to commit a
++ /**
++ * @j_average_commit_time:
++ *
++ * The average amount of time in nanoseconds it takes to commit a
+ * transaction to disk. [j_state_lock]
+ */
+ u64 j_average_commit_time;
+
+- /*
+- * minimum and maximum times that we should wait for
+- * additional filesystem operations to get batched into a
+- * synchronous handle in microseconds
++ /**
++ * @j_min_batch_time:
++ *
++ * Minimum time that we should wait for additional filesystem operations
++ * to get batched into a synchronous handle in microseconds.
+ */
+ u32 j_min_batch_time;
++
++ /**
++ * @j_max_batch_time:
++ *
++ * Maximum time that we should wait for additional filesystem operations
++ * to get batched into a synchronous handle in microseconds.
++ */
+ u32 j_max_batch_time;
+
+- /* This function is called when a transaction is closed */
++ /**
++ * @j_commit_callback:
++ *
++ * This function is called when a transaction is closed.
++ */
+ void (*j_commit_callback)(journal_t *,
+ transaction_t *);
+
+ /*
+ * Journal statistics
+ */
++
++ /**
++ * @j_history_lock: Protect the transactions statistics history.
++ */
+ spinlock_t j_history_lock;
++
++ /**
++ * @j_proc_entry: procfs entry for the jbd statistics directory.
++ */
+ struct proc_dir_entry *j_proc_entry;
++
++ /**
++ * @j_stats: Overall statistics.
++ */
+ struct transaction_stats_s j_stats;
+
+- /* Failed journal commit ID */
++ /**
++ * @j_failed_commit: Failed journal commit ID.
++ */
+ unsigned int j_failed_commit;
+
+- /*
++ /**
++ * @j_private:
++ *
+ * An opaque pointer to fs-private information. ext3 puts its
+- * superblock pointer here
++ * superblock pointer here.
+ */
+ void *j_private;
+
+- /* Reference to checksum algorithm driver via cryptoapi */
++ /**
++ * @j_chksum_driver:
++ *
++ * Reference to checksum algorithm driver via cryptoapi.
++ */
+ struct crypto_shash *j_chksum_driver;
+
+- /* Precomputed journal UUID checksum for seeding other checksums */
++ /**
++ * @j_csum_seed:
++ *
++ * Precomputed journal UUID checksum for seeding other checksums.
++ */
+ __u32 j_csum_seed;
+
+ #ifdef CONFIG_DEBUG_LOCK_ALLOC
+- /*
++ /**
++ * @j_trans_commit_map:
++ *
+ * Lockdep entity to track transaction commit dependencies. Handles
+ * hold this "lock" for read, when we wait for commit, we acquire the
+ * "lock" for writing. This matches the properties of jbd2 journalling
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index a0610427e168..b82c4ae92411 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -1238,7 +1238,7 @@ mlx5_get_vector_affinity(struct mlx5_core_dev *dev, int vector)
+ int eqn;
+ int err;
+
+- err = mlx5_vector2eqn(dev, vector, &eqn, &irq);
++ err = mlx5_vector2eqn(dev, MLX5_EQ_VEC_COMP_BASE + vector, &eqn, &irq);
+ if (err)
+ return NULL;
+
+diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
+index c30b32e3c862..10191c28fc04 100644
+--- a/include/linux/mm_inline.h
++++ b/include/linux/mm_inline.h
+@@ -127,10 +127,4 @@ static __always_inline enum lru_list page_lru(struct page *page)
+
+ #define lru_to_page(head) (list_entry((head)->prev, struct page, lru))
+
+-#ifdef arch_unmap_kpfn
+-extern void arch_unmap_kpfn(unsigned long pfn);
+-#else
+-static __always_inline void arch_unmap_kpfn(unsigned long pfn) { }
+-#endif
+-
+ #endif
+diff --git a/include/linux/nospec.h b/include/linux/nospec.h
+index b99bced39ac2..fbc98e2c8228 100644
+--- a/include/linux/nospec.h
++++ b/include/linux/nospec.h
+@@ -19,20 +19,6 @@
+ static inline unsigned long array_index_mask_nospec(unsigned long index,
+ unsigned long size)
+ {
+- /*
+- * Warn developers about inappropriate array_index_nospec() usage.
+- *
+- * Even if the CPU speculates past the WARN_ONCE branch, the
+- * sign bit of @index is taken into account when generating the
+- * mask.
+- *
+- * This warning is compiled out when the compiler can infer that
+- * @index and @size are less than LONG_MAX.
+- */
+- if (WARN_ONCE(index > LONG_MAX || size > LONG_MAX,
+- "array_index_nospec() limited to range of [0, LONG_MAX]\n"))
+- return 0;
+-
+ /*
+ * Always calculate and emit the mask even if the compiler
+ * thinks the mask is not needed. The compiler does not take
+@@ -43,6 +29,26 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
+ }
+ #endif
+
++/*
++ * Warn developers about inappropriate array_index_nospec() usage.
++ *
++ * Even if the CPU speculates past the WARN_ONCE branch, the
++ * sign bit of @index is taken into account when generating the
++ * mask.
++ *
++ * This warning is compiled out when the compiler can infer that
++ * @index and @size are less than LONG_MAX.
++ */
++#define array_index_mask_nospec_check(index, size) \
++({ \
++ if (WARN_ONCE(index > LONG_MAX || size > LONG_MAX, \
++ "array_index_nospec() limited to range of [0, LONG_MAX]\n")) \
++ _mask = 0; \
++ else \
++ _mask = array_index_mask_nospec(index, size); \
++ _mask; \
++})
++
+ /*
+ * array_index_nospec - sanitize an array index after a bounds check
+ *
+@@ -61,7 +67,7 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
+ ({ \
+ typeof(index) _i = (index); \
+ typeof(size) _s = (size); \
+- unsigned long _mask = array_index_mask_nospec(_i, _s); \
++ unsigned long _mask = array_index_mask_nospec_check(_i, _s); \
+ \
+ BUILD_BUG_ON(sizeof(_i) > sizeof(long)); \
+ BUILD_BUG_ON(sizeof(_s) > sizeof(long)); \
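`array_index_mask_nospec()` produces an all-ones mask when `index < size` and zero otherwise, without a conditional branch the CPU could mispredict past. A sketch of the portable C form of that mask (the kernel uses inline asm on some architectures; this assumes, as the moved WARN in the patch documents, that both values are within [0, LONG_MAX]):

```c
#include <assert.h>

/* Branchless bounds mask: ~0UL when 0 <= index < size, 0 otherwise.
 * With index and size both <= LONG_MAX, (index | (size - 1 - index))
 * has its sign bit set exactly when index >= size (size - 1 - index
 * underflows), so the arithmetic right shift of the complement
 * yields the desired all-ones / all-zeros mask. */
static unsigned long index_mask_nospec(unsigned long index,
				       unsigned long size)
{
	return ~(long)(index | (size - 1UL - index))
		>> (sizeof(long) * 8 - 1);
}
```

A caller sanitizes a just-bounds-checked index with `index &= index_mask_nospec(index, size);`, so even a speculated out-of-bounds index collapses to zero.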
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index fd84cda5ed7c..0d6a110dae7c 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -983,9 +983,9 @@ struct ib_wc {
+ u32 invalidate_rkey;
+ } ex;
+ u32 src_qp;
++ u32 slid;
+ int wc_flags;
+ u16 pkey_index;
+- u32 slid;
+ u8 sl;
+ u8 dlid_path_bits;
+ u8 port_num; /* valid only for DR SMPs on switches */
+diff --git a/include/trace/events/xen.h b/include/trace/events/xen.h
+index b8adf05c534e..7dd8f34c37df 100644
+--- a/include/trace/events/xen.h
++++ b/include/trace/events/xen.h
+@@ -368,7 +368,7 @@ TRACE_EVENT(xen_mmu_flush_tlb,
+ TP_printk("%s", "")
+ );
+
+-TRACE_EVENT(xen_mmu_flush_tlb_single,
++TRACE_EVENT(xen_mmu_flush_tlb_one_user,
+ TP_PROTO(unsigned long addr),
+ TP_ARGS(addr),
+ TP_STRUCT__entry(
+diff --git a/kernel/memremap.c b/kernel/memremap.c
+index 403ab9cdb949..4712ce646e04 100644
+--- a/kernel/memremap.c
++++ b/kernel/memremap.c
+@@ -301,7 +301,8 @@ static void devm_memremap_pages_release(struct device *dev, void *data)
+
+ /* pages are dead and unused, undo the arch mapping */
+ align_start = res->start & ~(SECTION_SIZE - 1);
+- align_size = ALIGN(resource_size(res), SECTION_SIZE);
++ align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
++ - align_start;
+
+ mem_hotplug_begin();
+ arch_remove_memory(align_start, align_size);
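The memremap fix changes `align_size` so the unmapped span covers whole sections even when the resource does not start on a section boundary: round the start down, round the end up, and take the difference. Rounding only the size, as the old code did, under-covers a misaligned resource. A standalone sketch of the corrected arithmetic, with `SECTION_SIZE` replaced by a hypothetical constant:

```c
#include <assert.h>

#define SECTION_SZ 0x1000UL	/* hypothetical section size */
#define ALIGN_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Compute the section-aligned span covering [start, start + size),
 * as the patched devm_memremap_pages_release() does: align the end
 * address up, then subtract the aligned start. */
static void aligned_span(unsigned long start, unsigned long size,
			 unsigned long *out_start, unsigned long *out_size)
{
	unsigned long align_start = start & ~(SECTION_SZ - 1);

	*out_start = align_start;
	*out_size = ALIGN_UP(start + size, SECTION_SZ) - align_start;
}
```

For a resource at 0x1800 of length 0x1000, the old `ALIGN(size)` formula would yield one section (0x1000 bytes) even though the range straddles two.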
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index 61e7f0678d33..a764aec3c9a1 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -400,7 +400,6 @@ enum regex_type filter_parse_regex(char *buff, int len, char **search, int *not)
+ for (i = 0; i < len; i++) {
+ if (buff[i] == '*') {
+ if (!i) {
+- *search = buff + 1;
+ type = MATCH_END_ONLY;
+ } else if (i == len - 1) {
+ if (type == MATCH_END_ONLY)
+@@ -410,14 +409,14 @@ enum regex_type filter_parse_regex(char *buff, int len, char **search, int *not)
+ buff[i] = 0;
+ break;
+ } else { /* pattern continues, use full glob */
+- type = MATCH_GLOB;
+- break;
++ return MATCH_GLOB;
+ }
+ } else if (strchr("[?\\", buff[i])) {
+- type = MATCH_GLOB;
+- break;
++ return MATCH_GLOB;
+ }
+ }
++ if (buff[0] == '*')
++ *search = buff + 1;
+
+ return type;
+ }
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index 40592e7b3568..268029ae1be6 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -608,7 +608,7 @@ static int probes_seq_show(struct seq_file *m, void *v)
+
+ /* Don't print "0x (null)" when offset is 0 */
+ if (tu->offset) {
+- seq_printf(m, "0x%p", (void *)tu->offset);
++ seq_printf(m, "0x%px", (void *)tu->offset);
+ } else {
+ switch (sizeof(void *)) {
+ case 4:
+diff --git a/lib/swiotlb.c b/lib/swiotlb.c
+index cea19aaf303c..0d7f46fb993a 100644
+--- a/lib/swiotlb.c
++++ b/lib/swiotlb.c
+@@ -586,7 +586,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *hwdev,
+
+ not_found:
+ spin_unlock_irqrestore(&io_tlb_lock, flags);
+- if (printk_ratelimit())
++ if (!(attrs & DMA_ATTR_NO_WARN) && printk_ratelimit())
+ dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes)\n", size);
+ return SWIOTLB_MAP_ERROR;
+ found:
+@@ -713,6 +713,7 @@ void *
+ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+ dma_addr_t *dma_handle, gfp_t flags)
+ {
++ bool warn = !(flags & __GFP_NOWARN);
+ dma_addr_t dev_addr;
+ void *ret;
+ int order = get_order(size);
+@@ -738,8 +739,8 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+ * GFP_DMA memory; fall back on map_single(), which
+ * will grab memory from the lowest available address range.
+ */
+- phys_addr_t paddr = map_single(hwdev, 0, size,
+- DMA_FROM_DEVICE, 0);
++ phys_addr_t paddr = map_single(hwdev, 0, size, DMA_FROM_DEVICE,
++ warn ? 0 : DMA_ATTR_NO_WARN);
+ if (paddr == SWIOTLB_MAP_ERROR)
+ goto err_warn;
+
+@@ -769,9 +770,11 @@ swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+ return ret;
+
+ err_warn:
+- pr_warn("swiotlb: coherent allocation failed for device %s size=%zu\n",
+- dev_name(hwdev), size);
+- dump_stack();
++ if (warn && printk_ratelimit()) {
++ pr_warn("swiotlb: coherent allocation failed for device %s size=%zu\n",
++ dev_name(hwdev), size);
++ dump_stack();
++ }
+
+ return NULL;
+ }
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 4acdf393a801..c85fa0038848 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1146,8 +1146,6 @@ int memory_failure(unsigned long pfn, int trapno, int flags)
+ return 0;
+ }
+
+- arch_unmap_kpfn(pfn);
+-
+ orig_head = hpage = compound_head(p);
+ num_poisoned_pages_inc();
+
+diff --git a/mm/memory.c b/mm/memory.c
+index 793004608332..93e51ad41ba3 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -81,7 +81,7 @@
+
+ #include "internal.h"
+
+-#ifdef LAST_CPUPID_NOT_IN_PAGE_FLAGS
++#if defined(LAST_CPUPID_NOT_IN_PAGE_FLAGS) && !defined(CONFIG_COMPILE_TEST)
+ #warning Unfortunate NUMA and NUMA Balancing config, growing page-frame for last_cpupid.
+ #endif
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 76c9688b6a0a..d23818c5465a 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1177,9 +1177,10 @@ static void free_one_page(struct zone *zone,
+ }
+
+ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
+- unsigned long zone, int nid)
++ unsigned long zone, int nid, bool zero)
+ {
+- mm_zero_struct_page(page);
++ if (zero)
++ mm_zero_struct_page(page);
+ set_page_links(page, zone, nid, pfn);
+ init_page_count(page);
+ page_mapcount_reset(page);
+@@ -1194,9 +1195,9 @@ static void __meminit __init_single_page(struct page *page, unsigned long pfn,
+ }
+
+ static void __meminit __init_single_pfn(unsigned long pfn, unsigned long zone,
+- int nid)
++ int nid, bool zero)
+ {
+- return __init_single_page(pfn_to_page(pfn), pfn, zone, nid);
++ return __init_single_page(pfn_to_page(pfn), pfn, zone, nid, zero);
+ }
+
+ #ifdef CONFIG_DEFERRED_STRUCT_PAGE_INIT
+@@ -1217,7 +1218,7 @@ static void __meminit init_reserved_page(unsigned long pfn)
+ if (pfn >= zone->zone_start_pfn && pfn < zone_end_pfn(zone))
+ break;
+ }
+- __init_single_pfn(pfn, zid, nid);
++ __init_single_pfn(pfn, zid, nid, true);
+ }
+ #else
+ static inline void init_reserved_page(unsigned long pfn)
+@@ -1514,7 +1515,7 @@ static unsigned long __init deferred_init_range(int nid, int zid,
+ page++;
+ else
+ page = pfn_to_page(pfn);
+- __init_single_page(page, pfn, zid, nid);
++ __init_single_page(page, pfn, zid, nid, true);
+ cond_resched();
+ }
+ }
+@@ -5393,15 +5394,20 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
+ * can be created for invalid pages (for alignment)
+ * check here not to call set_pageblock_migratetype() against
+ * pfn out of zone.
++ *
++ * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
++ * because this is done early in sparse_add_one_section
+ */
+ if (!(pfn & (pageblock_nr_pages - 1))) {
+ struct page *page = pfn_to_page(pfn);
+
+- __init_single_page(page, pfn, zone, nid);
++ __init_single_page(page, pfn, zone, nid,
++ context != MEMMAP_HOTPLUG);
+ set_pageblock_migratetype(page, MIGRATE_MOVABLE);
+ cond_resched();
+ } else {
+- __init_single_pfn(pfn, zone, nid);
++ __init_single_pfn(pfn, zone, nid,
++ context != MEMMAP_HOTPLUG);
+ }
+ }
+ }
+diff --git a/net/9p/trans_virtio.c b/net/9p/trans_virtio.c
+index f3a4efcf1456..3aa5a93ad107 100644
+--- a/net/9p/trans_virtio.c
++++ b/net/9p/trans_virtio.c
+@@ -160,7 +160,8 @@ static void req_done(struct virtqueue *vq)
+ spin_unlock_irqrestore(&chan->lock, flags);
+ /* Wakeup if anyone waiting for VirtIO ring space. */
+ wake_up(chan->vc_wq);
+- p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
++ if (len)
++ p9_client_cb(chan->client, req, REQ_STATUS_RCVD);
+ }
+ }
+
+diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
+index 8ca9915befc8..aae3565c3a92 100644
+--- a/net/mpls/af_mpls.c
++++ b/net/mpls/af_mpls.c
+@@ -8,6 +8,7 @@
+ #include <linux/ipv6.h>
+ #include <linux/mpls.h>
+ #include <linux/netconf.h>
++#include <linux/nospec.h>
+ #include <linux/vmalloc.h>
+ #include <linux/percpu.h>
+ #include <net/ip.h>
+@@ -935,24 +936,27 @@ static int mpls_nh_build_multi(struct mpls_route_config *cfg,
+ return err;
+ }
+
+-static bool mpls_label_ok(struct net *net, unsigned int index,
++static bool mpls_label_ok(struct net *net, unsigned int *index,
+ struct netlink_ext_ack *extack)
+ {
++ bool is_ok = true;
++
+ /* Reserved labels may not be set */
+- if (index < MPLS_LABEL_FIRST_UNRESERVED) {
++ if (*index < MPLS_LABEL_FIRST_UNRESERVED) {
+ NL_SET_ERR_MSG(extack,
+ "Invalid label - must be MPLS_LABEL_FIRST_UNRESERVED or higher");
+- return false;
++ is_ok = false;
+ }
+
+ /* The full 20 bit range may not be supported. */
+- if (index >= net->mpls.platform_labels) {
++ if (is_ok && *index >= net->mpls.platform_labels) {
+ NL_SET_ERR_MSG(extack,
+ "Label >= configured maximum in platform_labels");
+- return false;
++ is_ok = false;
+ }
+
+- return true;
++ *index = array_index_nospec(*index, net->mpls.platform_labels);
++ return is_ok;
+ }
+
+ static int mpls_route_add(struct mpls_route_config *cfg,
+@@ -975,7 +979,7 @@ static int mpls_route_add(struct mpls_route_config *cfg,
+ index = find_free_label(net);
+ }
+
+- if (!mpls_label_ok(net, index, extack))
++ if (!mpls_label_ok(net, &index, extack))
+ goto errout;
+
+ /* Append makes no sense with mpls */
+@@ -1052,7 +1056,7 @@ static int mpls_route_del(struct mpls_route_config *cfg,
+
+ index = cfg->rc_label;
+
+- if (!mpls_label_ok(net, index, extack))
++ if (!mpls_label_ok(net, &index, extack))
+ goto errout;
+
+ mpls_route_update(net, index, NULL, &cfg->rc_nlinfo);
+@@ -1810,7 +1814,7 @@ static int rtm_to_route_config(struct sk_buff *skb,
+ goto errout;
+
+ if (!mpls_label_ok(cfg->rc_nlinfo.nl_net,
+- cfg->rc_label, extack))
++ &cfg->rc_label, extack))
+ goto errout;
+ break;
+ }
+@@ -2137,7 +2141,7 @@ static int mpls_getroute(struct sk_buff *in_skb, struct nlmsghdr *in_nlh,
+ goto errout;
+ }
+
+- if (!mpls_label_ok(net, in_label, extack)) {
++ if (!mpls_label_ok(net, &in_label, extack)) {
+ err = -EINVAL;
+ goto errout;
+ }
+diff --git a/net/sunrpc/xprtrdma/rpc_rdma.c b/net/sunrpc/xprtrdma/rpc_rdma.c
+index a3f2ab283aeb..852b838d37b3 100644
+--- a/net/sunrpc/xprtrdma/rpc_rdma.c
++++ b/net/sunrpc/xprtrdma/rpc_rdma.c
+@@ -143,7 +143,7 @@ static bool rpcrdma_args_inline(struct rpcrdma_xprt *r_xprt,
+ if (xdr->page_len) {
+ remaining = xdr->page_len;
+ offset = offset_in_page(xdr->page_base);
+- count = 0;
++ count = RPCRDMA_MIN_SEND_SGES;
+ while (remaining) {
+ remaining -= min_t(unsigned int,
+ PAGE_SIZE - offset, remaining);
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 8607c029c0dd..8cd7ee4fa0cd 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -509,7 +509,7 @@ rpcrdma_ep_create(struct rpcrdma_ep *ep, struct rpcrdma_ia *ia,
+ pr_warn("rpcrdma: HCA provides only %d send SGEs\n", max_sge);
+ return -ENOMEM;
+ }
+- ia->ri_max_send_sges = max_sge - RPCRDMA_MIN_SEND_SGES;
++ ia->ri_max_send_sges = max_sge;
+
+ if (ia->ri_device->attrs.max_qp_wr <= RPCRDMA_BACKWARD_WRS) {
+ dprintk("RPC: %s: insufficient wqe's available\n",
+@@ -1476,6 +1476,9 @@ __rpcrdma_dma_map_regbuf(struct rpcrdma_ia *ia, struct rpcrdma_regbuf *rb)
+ static void
+ rpcrdma_dma_unmap_regbuf(struct rpcrdma_regbuf *rb)
+ {
++ if (!rb)
++ return;
++
+ if (!rpcrdma_regbuf_is_mapped(rb))
+ return;
+
+@@ -1491,9 +1494,6 @@ rpcrdma_dma_unmap_regbuf(struct rpcrdma_regbuf *rb)
+ void
+ rpcrdma_free_regbuf(struct rpcrdma_regbuf *rb)
+ {
+- if (!rb)
+- return;
+-
+ rpcrdma_dma_unmap_regbuf(rb);
+ kfree(rb);
+ }
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index d01913404581..a42cbbf2c8d9 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1003,7 +1003,7 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf,
+ {
+ struct snd_seq_client *client = file->private_data;
+ int written = 0, len;
+- int err = -EINVAL;
++ int err;
+ struct snd_seq_event event;
+
+ if (!(snd_seq_file_flags(file) & SNDRV_SEQ_LFLG_OUTPUT))
+@@ -1018,11 +1018,15 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf,
+
+ /* allocate the pool now if the pool is not allocated yet */
+ if (client->pool->size > 0 && !snd_seq_write_pool_allocated(client)) {
+- if (snd_seq_pool_init(client->pool) < 0)
++ mutex_lock(&client->ioctl_mutex);
++ err = snd_seq_pool_init(client->pool);
++ mutex_unlock(&client->ioctl_mutex);
++ if (err < 0)
+ return -ENOMEM;
+ }
+
+ /* only process whole events */
++ err = -EINVAL;
+ while (count >= sizeof(struct snd_seq_event)) {
+ /* Read in the event header from the user */
+ len = sizeof(event);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 1750e00c5bb4..4ff1f0ca52fc 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3378,6 +3378,19 @@ static void alc269_fixup_pincfg_no_hp_to_lineout(struct hda_codec *codec,
+ spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP;
+ }
+
++static void alc269_fixup_pincfg_U7x7_headset_mic(struct hda_codec *codec,
++ const struct hda_fixup *fix,
++ int action)
++{
++ unsigned int cfg_headphone = snd_hda_codec_get_pincfg(codec, 0x21);
++ unsigned int cfg_headset_mic = snd_hda_codec_get_pincfg(codec, 0x19);
++
++ if (cfg_headphone && cfg_headset_mic == 0x411111f0)
++ snd_hda_codec_set_pincfg(codec, 0x19,
++ (cfg_headphone & ~AC_DEFCFG_DEVICE) |
++ (AC_JACK_MIC_IN << AC_DEFCFG_DEVICE_SHIFT));
++}
++
+ static void alc269_fixup_hweq(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+@@ -4850,6 +4863,28 @@ static void alc_fixup_tpt440_dock(struct hda_codec *codec,
+ }
+ }
+
++static void alc_fixup_tpt470_dock(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ static const struct hda_pintbl pincfgs[] = {
++ { 0x17, 0x21211010 }, /* dock headphone */
++ { 0x19, 0x21a11010 }, /* dock mic */
++ { }
++ };
++ struct alc_spec *spec = codec->spec;
++
++ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++ spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP;
++ /* Enable DOCK device */
++ snd_hda_codec_write(codec, 0x17, 0,
++ AC_VERB_SET_CONFIG_DEFAULT_BYTES_3, 0);
++ /* Enable DOCK device */
++ snd_hda_codec_write(codec, 0x19, 0,
++ AC_VERB_SET_CONFIG_DEFAULT_BYTES_3, 0);
++ snd_hda_apply_pincfgs(codec, pincfgs);
++ }
++}
++
+ static void alc_shutup_dell_xps13(struct hda_codec *codec)
+ {
+ struct alc_spec *spec = codec->spec;
+@@ -5229,6 +5264,7 @@ enum {
+ ALC269_FIXUP_LIFEBOOK_EXTMIC,
+ ALC269_FIXUP_LIFEBOOK_HP_PIN,
+ ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT,
++ ALC255_FIXUP_LIFEBOOK_U7x7_HEADSET_MIC,
+ ALC269_FIXUP_AMIC,
+ ALC269_FIXUP_DMIC,
+ ALC269VB_FIXUP_AMIC,
+@@ -5324,6 +5360,7 @@ enum {
+ ALC700_FIXUP_INTEL_REFERENCE,
+ ALC274_FIXUP_DELL_BIND_DACS,
+ ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
++ ALC298_FIXUP_TPT470_DOCK,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -5434,6 +5471,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269_fixup_pincfg_no_hp_to_lineout,
+ },
++ [ALC255_FIXUP_LIFEBOOK_U7x7_HEADSET_MIC] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc269_fixup_pincfg_U7x7_headset_mic,
++ },
+ [ALC269_FIXUP_AMIC] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -6149,6 +6190,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC274_FIXUP_DELL_BIND_DACS
+ },
++ [ALC298_FIXUP_TPT470_DOCK] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc_fixup_tpt470_dock,
++ .chained = true,
++ .chain_id = ALC293_FIXUP_LENOVO_SPK_NOISE
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -6199,6 +6246,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
+ SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+ SND_PCI_QUIRK(0x1028, 0x082a, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
++ SND_PCI_QUIRK(0x1028, 0x084b, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
++ SND_PCI_QUIRK(0x1028, 0x084e, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+ SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -6300,6 +6349,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x10cf, 0x159f, "Lifebook E780", ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT),
+ SND_PCI_QUIRK(0x10cf, 0x15dc, "Lifebook T731", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+ SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
++ SND_PCI_QUIRK(0x10cf, 0x1629, "Lifebook U7x7", ALC255_FIXUP_LIFEBOOK_U7x7_HEADSET_MIC),
+ SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+ SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
+ SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
+@@ -6328,8 +6378,16 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x2218, "Thinkpad X1 Carbon 2nd", ALC292_FIXUP_TPT440_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2223, "ThinkPad T550", ALC292_FIXUP_TPT440_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2226, "ThinkPad X250", ALC292_FIXUP_TPT440_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x222d, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x222e, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2231, "Thinkpad T560", ALC292_FIXUP_TPT460),
+ SND_PCI_QUIRK(0x17aa, 0x2233, "Thinkpad", ALC292_FIXUP_TPT460),
++ SND_PCI_QUIRK(0x17aa, 0x2245, "Thinkpad T470", ALC298_FIXUP_TPT470_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x2246, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x2247, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x224b, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x224c, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x224d, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+@@ -6350,7 +6408,12 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x5050, "Thinkpad T560p", ALC292_FIXUP_TPT460),
+ SND_PCI_QUIRK(0x17aa, 0x5051, "Thinkpad L460", ALC292_FIXUP_TPT460),
+ SND_PCI_QUIRK(0x17aa, 0x5053, "Thinkpad T460", ALC292_FIXUP_TPT460),
++ SND_PCI_QUIRK(0x17aa, 0x505d, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x505f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x5062, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x5109, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++ SND_PCI_QUIRK(0x17aa, 0x511e, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x3bf8, "Quanta FL1", ALC269_FIXUP_PCM_44K),
+ SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
+@@ -6612,6 +6675,11 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ {0x12, 0xb7a60130},
+ {0x14, 0x90170110},
+ {0x21, 0x02211020}),
++ SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
++ {0x12, 0x90a60130},
++ {0x14, 0x90170110},
++ {0x14, 0x01011020},
++ {0x21, 0x0221101f}),
+ SND_HDA_PIN_QUIRK(0x10ec0256, 0x1028, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE,
+ ALC256_STANDARD_PINS),
+ SND_HDA_PIN_QUIRK(0x10ec0256, 0x1043, "ASUS", ALC256_FIXUP_ASUS_MIC,
+@@ -6681,6 +6749,10 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ {0x12, 0x90a60120},
+ {0x14, 0x90170110},
+ {0x21, 0x0321101f}),
++ SND_HDA_PIN_QUIRK(0x10ec0289, 0x1028, "Dell", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
++ {0x12, 0xb7a60130},
++ {0x14, 0x90170110},
++ {0x21, 0x04211020}),
+ SND_HDA_PIN_QUIRK(0x10ec0290, 0x103c, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC1,
+ ALC290_STANDARD_PINS,
+ {0x15, 0x04211040},
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 2b4ceda36291..20b28a5a1456 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -347,17 +347,20 @@ static int get_ctl_value_v2(struct usb_mixer_elem_info *cval, int request,
+ int validx, int *value_ret)
+ {
+ struct snd_usb_audio *chip = cval->head.mixer->chip;
+- unsigned char buf[4 + 3 * sizeof(__u32)]; /* enough space for one range */
++ /* enough space for one range */
++ unsigned char buf[sizeof(__u16) + 3 * sizeof(__u32)];
+ unsigned char *val;
+- int idx = 0, ret, size;
++ int idx = 0, ret, val_size, size;
+ __u8 bRequest;
+
++ val_size = uac2_ctl_value_size(cval->val_type);
++
+ if (request == UAC_GET_CUR) {
+ bRequest = UAC2_CS_CUR;
+- size = uac2_ctl_value_size(cval->val_type);
++ size = val_size;
+ } else {
+ bRequest = UAC2_CS_RANGE;
+- size = sizeof(buf);
++ size = sizeof(__u16) + 3 * val_size;
+ }
+
+ memset(buf, 0, sizeof(buf));
+@@ -390,16 +393,17 @@ static int get_ctl_value_v2(struct usb_mixer_elem_info *cval, int request,
+ val = buf + sizeof(__u16);
+ break;
+ case UAC_GET_MAX:
+- val = buf + sizeof(__u16) * 2;
++ val = buf + sizeof(__u16) + val_size;
+ break;
+ case UAC_GET_RES:
+- val = buf + sizeof(__u16) * 3;
++ val = buf + sizeof(__u16) + val_size * 2;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+- *value_ret = convert_signed_value(cval, snd_usb_combine_bytes(val, sizeof(__u16)));
++ *value_ret = convert_signed_value(cval,
++ snd_usb_combine_bytes(val, val_size));
+
+ return 0;
+ }
+diff --git a/sound/usb/pcm.c b/sound/usb/pcm.c
+index b9c9a19f9588..3cbfae6604f9 100644
+--- a/sound/usb/pcm.c
++++ b/sound/usb/pcm.c
+@@ -352,6 +352,15 @@ static int set_sync_ep_implicit_fb_quirk(struct snd_usb_substream *subs,
+ ep = 0x86;
+ iface = usb_ifnum_to_if(dev, 2);
+
++ if (!iface || iface->num_altsetting == 0)
++ return -EINVAL;
++
++ alts = &iface->altsetting[1];
++ goto add_sync_ep;
++ case USB_ID(0x1397, 0x0002):
++ ep = 0x81;
++ iface = usb_ifnum_to_if(dev, 1);
++
+ if (!iface || iface->num_altsetting == 0)
+ return -EINVAL;
+
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index a66ef5777887..ea8f3de92fa4 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1363,8 +1363,11 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ return SNDRV_PCM_FMTBIT_DSD_U32_BE;
+ break;
+
+- /* Amanero Combo384 USB interface with native DSD support */
+- case USB_ID(0x16d0, 0x071a):
++ /* Amanero Combo384 USB based DACs with native DSD support */
++ case USB_ID(0x16d0, 0x071a): /* Amanero - Combo384 */
++ case USB_ID(0x2ab6, 0x0004): /* T+A DAC8DSD-V2.0, MP1000E-V2.0, MP2000R-V2.0, MP2500R-V2.0, MP3100HV-V2.0 */
++ case USB_ID(0x2ab6, 0x0005): /* T+A USB HD Audio 1 */
++ case USB_ID(0x2ab6, 0x0006): /* T+A USB HD Audio 2 */
+ if (fp->altsetting == 2) {
+ switch (le16_to_cpu(chip->dev->descriptor.bcdDevice)) {
+ case 0x199:
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 2e458eb45586..c7fb5c2392ee 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -1935,13 +1935,19 @@ static bool ignore_unreachable_insn(struct instruction *insn)
+ if (is_kasan_insn(insn) || is_ubsan_insn(insn))
+ return true;
+
+- if (insn->type == INSN_JUMP_UNCONDITIONAL && insn->jump_dest) {
+- insn = insn->jump_dest;
+- continue;
++ if (insn->type == INSN_JUMP_UNCONDITIONAL) {
++ if (insn->jump_dest &&
++ insn->jump_dest->func == insn->func) {
++ insn = insn->jump_dest;
++ continue;
++ }
++
++ break;
+ }
+
+ if (insn->offset + insn->len >= insn->func->offset + insn->func->len)
+ break;
++
+ insn = list_next_entry(insn, list);
+ }
+
+diff --git a/tools/testing/selftests/seccomp/seccomp_bpf.c b/tools/testing/selftests/seccomp/seccomp_bpf.c
+index 24dbf634e2dd..0b457e8e0f0c 100644
+--- a/tools/testing/selftests/seccomp/seccomp_bpf.c
++++ b/tools/testing/selftests/seccomp/seccomp_bpf.c
+@@ -1717,7 +1717,7 @@ void tracer_ptrace(struct __test_metadata *_metadata, pid_t tracee,
+
+ if (nr == __NR_getpid)
+ change_syscall(_metadata, tracee, __NR_getppid);
+- if (nr == __NR_open)
++ if (nr == __NR_openat)
+ change_syscall(_metadata, tracee, -1);
+ }
+
+@@ -1792,7 +1792,7 @@ TEST_F(TRACE_syscall, ptrace_syscall_dropped)
+ true);
+
+ /* Tracer should skip the open syscall, resulting in EPERM. */
+- EXPECT_SYSCALL_RETURN(EPERM, syscall(__NR_open));
++ EXPECT_SYSCALL_RETURN(EPERM, syscall(__NR_openat));
+ }
+
+ TEST_F(TRACE_syscall, syscall_allowed)
+diff --git a/tools/testing/selftests/vm/compaction_test.c b/tools/testing/selftests/vm/compaction_test.c
+index a65b016d4c13..1097f04e4d80 100644
+--- a/tools/testing/selftests/vm/compaction_test.c
++++ b/tools/testing/selftests/vm/compaction_test.c
+@@ -137,6 +137,8 @@ int check_compaction(unsigned long mem_free, unsigned int hugepage_size)
+ printf("No of huge pages allocated = %d\n",
+ (atoi(nr_hugepages)));
+
++ lseek(fd, 0, SEEK_SET);
++
+ if (write(fd, initial_nr_hugepages, strlen(initial_nr_hugepages))
+ != strlen(initial_nr_hugepages)) {
+ perror("Failed to write value to /proc/sys/vm/nr_hugepages\n");
+diff --git a/tools/testing/selftests/x86/Makefile b/tools/testing/selftests/x86/Makefile
+index 5d4f10ac2af2..aa6e2d7f6a1f 100644
+--- a/tools/testing/selftests/x86/Makefile
++++ b/tools/testing/selftests/x86/Makefile
+@@ -5,16 +5,26 @@ include ../lib.mk
+
+ .PHONY: all all_32 all_64 warn_32bit_failure clean
+
+-TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt ptrace_syscall test_mremap_vdso \
+- check_initial_reg_state sigreturn ldt_gdt iopl mpx-mini-test ioperm \
++UNAME_M := $(shell uname -m)
++CAN_BUILD_I386 := $(shell ./check_cc.sh $(CC) trivial_32bit_program.c -m32)
++CAN_BUILD_X86_64 := $(shell ./check_cc.sh $(CC) trivial_64bit_program.c)
++
++TARGETS_C_BOTHBITS := single_step_syscall sysret_ss_attrs syscall_nt test_mremap_vdso \
++ check_initial_reg_state sigreturn iopl mpx-mini-test ioperm \
+ protection_keys test_vdso test_vsyscall
+ TARGETS_C_32BIT_ONLY := entry_from_vm86 syscall_arg_fault test_syscall_vdso unwind_vdso \
+ test_FCMOV test_FCOMI test_FISTTP \
+ vdso_restorer
+-TARGETS_C_64BIT_ONLY := fsgsbase sysret_rip 5lvl
++TARGETS_C_64BIT_ONLY := fsgsbase sysret_rip
++# Some selftests require 32bit support enabled also on 64bit systems
++TARGETS_C_32BIT_NEEDED := ldt_gdt ptrace_syscall
+
+-TARGETS_C_32BIT_ALL := $(TARGETS_C_BOTHBITS) $(TARGETS_C_32BIT_ONLY)
++TARGETS_C_32BIT_ALL := $(TARGETS_C_BOTHBITS) $(TARGETS_C_32BIT_ONLY) $(TARGETS_C_32BIT_NEEDED)
+ TARGETS_C_64BIT_ALL := $(TARGETS_C_BOTHBITS) $(TARGETS_C_64BIT_ONLY)
++ifeq ($(CAN_BUILD_I386)$(CAN_BUILD_X86_64),11)
++TARGETS_C_64BIT_ALL += $(TARGETS_C_32BIT_NEEDED)
++endif
++
+ BINARIES_32 := $(TARGETS_C_32BIT_ALL:%=%_32)
+ BINARIES_64 := $(TARGETS_C_64BIT_ALL:%=%_64)
+
+@@ -23,18 +33,16 @@ BINARIES_64 := $(patsubst %,$(OUTPUT)/%,$(BINARIES_64))
+
+ CFLAGS := -O2 -g -std=gnu99 -pthread -Wall -no-pie
+
+-UNAME_M := $(shell uname -m)
+-CAN_BUILD_I386 := $(shell ./check_cc.sh $(CC) trivial_32bit_program.c -m32)
+-CAN_BUILD_X86_64 := $(shell ./check_cc.sh $(CC) trivial_64bit_program.c)
+-
+ ifeq ($(CAN_BUILD_I386),1)
+ all: all_32
+ TEST_PROGS += $(BINARIES_32)
++EXTRA_CFLAGS += -DCAN_BUILD_32
+ endif
+
+ ifeq ($(CAN_BUILD_X86_64),1)
+ all: all_64
+ TEST_PROGS += $(BINARIES_64)
++EXTRA_CFLAGS += -DCAN_BUILD_64
+ endif
+
+ all_32: $(BINARIES_32)
+diff --git a/tools/testing/selftests/x86/mpx-mini-test.c b/tools/testing/selftests/x86/mpx-mini-test.c
+index ec0f6b45ce8b..9c0325e1ea68 100644
+--- a/tools/testing/selftests/x86/mpx-mini-test.c
++++ b/tools/testing/selftests/x86/mpx-mini-test.c
+@@ -315,11 +315,39 @@ static inline void *__si_bounds_upper(siginfo_t *si)
+ return si->si_upper;
+ }
+ #else
++
++/*
++ * This deals with old version of _sigfault in some distros:
++ *
++
++old _sigfault:
++ struct {
++ void *si_addr;
++ } _sigfault;
++
++new _sigfault:
++ struct {
++ void __user *_addr;
++ int _trapno;
++ short _addr_lsb;
++ union {
++ struct {
++ void __user *_lower;
++ void __user *_upper;
++ } _addr_bnd;
++ __u32 _pkey;
++ };
++ } _sigfault;
++ *
++ */
++
+ static inline void **__si_bounds_hack(siginfo_t *si)
+ {
+ void *sigfault = &si->_sifields._sigfault;
+ void *end_sigfault = sigfault + sizeof(si->_sifields._sigfault);
+- void **__si_lower = end_sigfault;
++ int *trapno = (int*)end_sigfault;
++ /* skip _trapno and _addr_lsb */
++ void **__si_lower = (void**)(trapno + 2);
+
+ return __si_lower;
+ }
+@@ -331,7 +359,7 @@ static inline void *__si_bounds_lower(siginfo_t *si)
+
+ static inline void *__si_bounds_upper(siginfo_t *si)
+ {
+- return (*__si_bounds_hack(si)) + sizeof(void *);
++ return *(__si_bounds_hack(si) + 1);
+ }
+ #endif
+
+diff --git a/tools/testing/selftests/x86/protection_keys.c b/tools/testing/selftests/x86/protection_keys.c
+index bc1b0735bb50..f15aa5a76fe3 100644
+--- a/tools/testing/selftests/x86/protection_keys.c
++++ b/tools/testing/selftests/x86/protection_keys.c
+@@ -393,34 +393,6 @@ pid_t fork_lazy_child(void)
+ return forkret;
+ }
+
+-void davecmp(void *_a, void *_b, int len)
+-{
+- int i;
+- unsigned long *a = _a;
+- unsigned long *b = _b;
+-
+- for (i = 0; i < len / sizeof(*a); i++) {
+- if (a[i] == b[i])
+- continue;
+-
+- dprintf3("[%3d]: a: %016lx b: %016lx\n", i, a[i], b[i]);
+- }
+-}
+-
+-void dumpit(char *f)
+-{
+- int fd = open(f, O_RDONLY);
+- char buf[100];
+- int nr_read;
+-
+- dprintf2("maps fd: %d\n", fd);
+- do {
+- nr_read = read(fd, &buf[0], sizeof(buf));
+- write(1, buf, nr_read);
+- } while (nr_read > 0);
+- close(fd);
+-}
+-
+ #define PKEY_DISABLE_ACCESS 0x1
+ #define PKEY_DISABLE_WRITE 0x2
+
+diff --git a/tools/testing/selftests/x86/single_step_syscall.c b/tools/testing/selftests/x86/single_step_syscall.c
+index a48da95c18fd..ddfdd635de16 100644
+--- a/tools/testing/selftests/x86/single_step_syscall.c
++++ b/tools/testing/selftests/x86/single_step_syscall.c
+@@ -119,7 +119,9 @@ static void check_result(void)
+
+ int main()
+ {
++#ifdef CAN_BUILD_32
+ int tmp;
++#endif
+
+ sethandler(SIGTRAP, sigtrap, 0);
+
+@@ -139,12 +141,13 @@ int main()
+ : : "c" (post_nop) : "r11");
+ check_result();
+ #endif
+-
++#ifdef CAN_BUILD_32
+ printf("[RUN]\tSet TF and check int80\n");
+ set_eflags(get_eflags() | X86_EFLAGS_TF);
+ asm volatile ("int $0x80" : "=a" (tmp) : "a" (SYS_getpid)
+ : INT80_CLOBBERS);
+ check_result();
++#endif
+
+ /*
+ * This test is particularly interesting if fast syscalls use
+diff --git a/tools/testing/selftests/x86/test_mremap_vdso.c b/tools/testing/selftests/x86/test_mremap_vdso.c
+index bf0d687c7db7..64f11c8d9b76 100644
+--- a/tools/testing/selftests/x86/test_mremap_vdso.c
++++ b/tools/testing/selftests/x86/test_mremap_vdso.c
+@@ -90,8 +90,12 @@ int main(int argc, char **argv, char **envp)
+ vdso_size += PAGE_SIZE;
+ }
+
++#ifdef __i386__
+ /* Glibc is likely to explode now - exit with raw syscall */
+ asm volatile ("int $0x80" : : "a" (__NR_exit), "b" (!!ret));
++#else /* __x86_64__ */
++ syscall(SYS_exit, ret);
++#endif
+ } else {
+ int status;
+
+diff --git a/tools/testing/selftests/x86/test_vdso.c b/tools/testing/selftests/x86/test_vdso.c
+index 29973cde06d3..235259011704 100644
+--- a/tools/testing/selftests/x86/test_vdso.c
++++ b/tools/testing/selftests/x86/test_vdso.c
+@@ -26,20 +26,59 @@
+ # endif
+ #endif
+
++/* max length of lines in /proc/self/maps - anything longer is skipped here */
++#define MAPS_LINE_LEN 128
++
+ int nerrs = 0;
+
++typedef long (*getcpu_t)(unsigned *, unsigned *, void *);
++
++getcpu_t vgetcpu;
++getcpu_t vdso_getcpu;
++
++static void *vsyscall_getcpu(void)
++{
+ #ifdef __x86_64__
+-# define VSYS(x) (x)
++ FILE *maps;
++ char line[MAPS_LINE_LEN];
++ bool found = false;
++
++ maps = fopen("/proc/self/maps", "r");
++ if (!maps) /* might still be present, but ignore it here, as we test vDSO not vsyscall */
++ return NULL;
++
++ while (fgets(line, MAPS_LINE_LEN, maps)) {
++ char r, x;
++ void *start, *end;
++ char name[MAPS_LINE_LEN];
++
++ /* sscanf() is safe here as strlen(name) >= strlen(line) */
++ if (sscanf(line, "%p-%p %c-%cp %*x %*x:%*x %*u %s",
++ &start, &end, &r, &x, name) != 5)
++ continue;
++
++ if (strcmp(name, "[vsyscall]"))
++ continue;
++
++ /* assume entries are OK, as we test vDSO here not vsyscall */
++ found = true;
++ break;
++ }
++
++ fclose(maps);
++
++ if (!found) {
++ printf("Warning: failed to find vsyscall getcpu\n");
++ return NULL;
++ }
++ return (void *) (0xffffffffff600800);
+ #else
+-# define VSYS(x) 0
++ return NULL;
+ #endif
++}
+
+-typedef long (*getcpu_t)(unsigned *, unsigned *, void *);
+-
+-const getcpu_t vgetcpu = (getcpu_t)VSYS(0xffffffffff600800);
+-getcpu_t vdso_getcpu;
+
+-void fill_function_pointers()
++static void fill_function_pointers()
+ {
+ void *vdso = dlopen("linux-vdso.so.1",
+ RTLD_LAZY | RTLD_LOCAL | RTLD_NOLOAD);
+@@ -54,6 +93,8 @@ void fill_function_pointers()
+ vdso_getcpu = (getcpu_t)dlsym(vdso, "__vdso_getcpu");
+ if (!vdso_getcpu)
+ printf("Warning: failed to find getcpu in vDSO\n");
++
++ vgetcpu = (getcpu_t) vsyscall_getcpu();
+ }
+
+ static long sys_getcpu(unsigned * cpu, unsigned * node,
+diff --git a/tools/testing/selftests/x86/test_vsyscall.c b/tools/testing/selftests/x86/test_vsyscall.c
+index 7a744fa7b786..be81621446f0 100644
+--- a/tools/testing/selftests/x86/test_vsyscall.c
++++ b/tools/testing/selftests/x86/test_vsyscall.c
+@@ -33,6 +33,9 @@
+ # endif
+ #endif
+
++/* max length of lines in /proc/self/maps - anything longer is skipped here */
++#define MAPS_LINE_LEN 128
++
+ static void sethandler(int sig, void (*handler)(int, siginfo_t *, void *),
+ int flags)
+ {
+@@ -98,7 +101,7 @@ static int init_vsys(void)
+ #ifdef __x86_64__
+ int nerrs = 0;
+ FILE *maps;
+- char line[128];
++ char line[MAPS_LINE_LEN];
+ bool found = false;
+
+ maps = fopen("/proc/self/maps", "r");
+@@ -108,10 +111,12 @@ static int init_vsys(void)
+ return 0;
+ }
+
+- while (fgets(line, sizeof(line), maps)) {
++ while (fgets(line, MAPS_LINE_LEN, maps)) {
+ char r, x;
+ void *start, *end;
+- char name[128];
++ char name[MAPS_LINE_LEN];
++
++ /* sscanf() is safe here as strlen(name) >= strlen(line) */
+ if (sscanf(line, "%p-%p %c-%cp %*x %*x:%*x %*u %s",
+ &start, &end, &r, &x, name) != 5)
+ continue;
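The selftest hunks above parse /proc/self/maps with sscanf(), and the added comment argues the %s conversion is safe because name[] is as large as the whole line buffer. A standalone userspace sketch of that parsing (hypothetical helper name; simplified to the r-xp permission shape the format string actually matches):

```c
#include <stdio.h>
#include <string.h>

#define MAPS_LINE_LEN 128

/* Parse a /proc/self/maps-style line the way the selftest hunk does.
 * name[] is as large as the whole line, so the %s conversion cannot
 * overflow it - the point of the patch's "sscanf() is safe" comment. */
static int is_vsyscall_line(const char *line)
{
	char r, x;
	void *start, *end;
	char name[MAPS_LINE_LEN];

	if (sscanf(line, "%p-%p %c-%cp %*x %*x:%*x %*u %s",
		   &start, &end, &r, &x, name) != 5)
		return 0;
	return strcmp(name, "[vsyscall]") == 0;
}
```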
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-02-25 13:46 Alice Ferrazzi
0 siblings, 0 replies; 27+ messages in thread
From: Alice Ferrazzi @ 2018-02-25 13:46 UTC (permalink / raw
To: gentoo-commits
commit: d37109d7587a6faca28251272766319005a4c30b
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Sun Feb 25 13:45:54 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Sun Feb 25 13:45:54 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d37109d7
Linux kernel 4.15.6
0000_README | 4 +
1005_linux-4.15.6.patch | 1556 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1560 insertions(+)
diff --git a/0000_README b/0000_README
index f22a6fe..828cfeb 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1004_linux-4.15.5.patch
From: http://www.kernel.org
Desc: Linux 4.15.5
+Patch: 1005_linux-4.15.6.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.6
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1005_linux-4.15.6.patch b/1005_linux-4.15.6.patch
new file mode 100644
index 0000000..dc80bb9
--- /dev/null
+++ b/1005_linux-4.15.6.patch
@@ -0,0 +1,1556 @@
+diff --git a/Makefile b/Makefile
+index 28c537fbe328..51563c76bdf6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arm/common/bL_switcher_dummy_if.c b/arch/arm/common/bL_switcher_dummy_if.c
+index 4c10c6452678..f4dc1714a79e 100644
+--- a/arch/arm/common/bL_switcher_dummy_if.c
++++ b/arch/arm/common/bL_switcher_dummy_if.c
+@@ -57,3 +57,7 @@ static struct miscdevice bL_switcher_device = {
+ &bL_switcher_fops
+ };
+ module_misc_device(bL_switcher_device);
++
++MODULE_AUTHOR("Nicolas Pitre <nico@linaro.org>");
++MODULE_LICENSE("GPL v2");
++MODULE_DESCRIPTION("big.LITTLE switcher dummy user interface");
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173.dtsi b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+index 26396ef53bde..ea407aff1251 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+@@ -81,6 +81,7 @@
+ reg = <0x000>;
+ enable-method = "psci";
+ cpu-idle-states = <&CPU_SLEEP_0>;
++ #cooling-cells = <2>;
+ };
+
+ cpu1: cpu@1 {
+@@ -97,6 +98,7 @@
+ reg = <0x100>;
+ enable-method = "psci";
+ cpu-idle-states = <&CPU_SLEEP_0>;
++ #cooling-cells = <2>;
+ };
+
+ cpu3: cpu@101 {
+diff --git a/arch/x86/crypto/twofish-x86_64-asm_64-3way.S b/arch/x86/crypto/twofish-x86_64-asm_64-3way.S
+index 1c3b7ceb36d2..e7273a606a07 100644
+--- a/arch/x86/crypto/twofish-x86_64-asm_64-3way.S
++++ b/arch/x86/crypto/twofish-x86_64-asm_64-3way.S
+@@ -55,29 +55,31 @@
+ #define RAB1bl %bl
+ #define RAB2bl %cl
+
++#define CD0 0x0(%rsp)
++#define CD1 0x8(%rsp)
++#define CD2 0x10(%rsp)
++
++# used only before/after all rounds
+ #define RCD0 %r8
+ #define RCD1 %r9
+ #define RCD2 %r10
+
+-#define RCD0d %r8d
+-#define RCD1d %r9d
+-#define RCD2d %r10d
+-
+-#define RX0 %rbp
+-#define RX1 %r11
+-#define RX2 %r12
++# used only during rounds
++#define RX0 %r8
++#define RX1 %r9
++#define RX2 %r10
+
+-#define RX0d %ebp
+-#define RX1d %r11d
+-#define RX2d %r12d
++#define RX0d %r8d
++#define RX1d %r9d
++#define RX2d %r10d
+
+-#define RY0 %r13
+-#define RY1 %r14
+-#define RY2 %r15
++#define RY0 %r11
++#define RY1 %r12
++#define RY2 %r13
+
+-#define RY0d %r13d
+-#define RY1d %r14d
+-#define RY2d %r15d
++#define RY0d %r11d
++#define RY1d %r12d
++#define RY2d %r13d
+
+ #define RT0 %rdx
+ #define RT1 %rsi
+@@ -85,6 +87,8 @@
+ #define RT0d %edx
+ #define RT1d %esi
+
++#define RT1bl %sil
++
+ #define do16bit_ror(rot, op1, op2, T0, T1, tmp1, tmp2, ab, dst) \
+ movzbl ab ## bl, tmp2 ## d; \
+ movzbl ab ## bh, tmp1 ## d; \
+@@ -92,6 +96,11 @@
+ op1##l T0(CTX, tmp2, 4), dst ## d; \
+ op2##l T1(CTX, tmp1, 4), dst ## d;
+
++#define swap_ab_with_cd(ab, cd, tmp) \
++ movq cd, tmp; \
++ movq ab, cd; \
++ movq tmp, ab;
++
+ /*
+ * Combined G1 & G2 function. Reordered with help of rotates to have moves
+ * at begining.
+@@ -110,15 +119,15 @@
+ /* G1,2 && G2,2 */ \
+ do16bit_ror(32, xor, xor, Tx2, Tx3, RT0, RT1, ab ## 0, x ## 0); \
+ do16bit_ror(16, xor, xor, Ty3, Ty0, RT0, RT1, ab ## 0, y ## 0); \
+- xchgq cd ## 0, ab ## 0; \
++ swap_ab_with_cd(ab ## 0, cd ## 0, RT0); \
+ \
+ do16bit_ror(32, xor, xor, Tx2, Tx3, RT0, RT1, ab ## 1, x ## 1); \
+ do16bit_ror(16, xor, xor, Ty3, Ty0, RT0, RT1, ab ## 1, y ## 1); \
+- xchgq cd ## 1, ab ## 1; \
++ swap_ab_with_cd(ab ## 1, cd ## 1, RT0); \
+ \
+ do16bit_ror(32, xor, xor, Tx2, Tx3, RT0, RT1, ab ## 2, x ## 2); \
+ do16bit_ror(16, xor, xor, Ty3, Ty0, RT0, RT1, ab ## 2, y ## 2); \
+- xchgq cd ## 2, ab ## 2;
++ swap_ab_with_cd(ab ## 2, cd ## 2, RT0);
+
+ #define enc_round_end(ab, x, y, n) \
+ addl y ## d, x ## d; \
+@@ -168,6 +177,16 @@
+ decrypt_round3(ba, dc, (n*2)+1); \
+ decrypt_round3(ba, dc, (n*2));
+
++#define push_cd() \
++ pushq RCD2; \
++ pushq RCD1; \
++ pushq RCD0;
++
++#define pop_cd() \
++ popq RCD0; \
++ popq RCD1; \
++ popq RCD2;
++
+ #define inpack3(in, n, xy, m) \
+ movq 4*(n)(in), xy ## 0; \
+ xorq w+4*m(CTX), xy ## 0; \
+@@ -223,11 +242,8 @@ ENTRY(__twofish_enc_blk_3way)
+ * %rdx: src, RIO
+ * %rcx: bool, if true: xor output
+ */
+- pushq %r15;
+- pushq %r14;
+ pushq %r13;
+ pushq %r12;
+- pushq %rbp;
+ pushq %rbx;
+
+ pushq %rcx; /* bool xor */
+@@ -235,40 +251,36 @@ ENTRY(__twofish_enc_blk_3way)
+
+ inpack_enc3();
+
+- encrypt_cycle3(RAB, RCD, 0);
+- encrypt_cycle3(RAB, RCD, 1);
+- encrypt_cycle3(RAB, RCD, 2);
+- encrypt_cycle3(RAB, RCD, 3);
+- encrypt_cycle3(RAB, RCD, 4);
+- encrypt_cycle3(RAB, RCD, 5);
+- encrypt_cycle3(RAB, RCD, 6);
+- encrypt_cycle3(RAB, RCD, 7);
++ push_cd();
++ encrypt_cycle3(RAB, CD, 0);
++ encrypt_cycle3(RAB, CD, 1);
++ encrypt_cycle3(RAB, CD, 2);
++ encrypt_cycle3(RAB, CD, 3);
++ encrypt_cycle3(RAB, CD, 4);
++ encrypt_cycle3(RAB, CD, 5);
++ encrypt_cycle3(RAB, CD, 6);
++ encrypt_cycle3(RAB, CD, 7);
++ pop_cd();
+
+ popq RIO; /* dst */
+- popq %rbp; /* bool xor */
++ popq RT1; /* bool xor */
+
+- testb %bpl, %bpl;
++ testb RT1bl, RT1bl;
+ jnz .L__enc_xor3;
+
+ outunpack_enc3(mov);
+
+ popq %rbx;
+- popq %rbp;
+ popq %r12;
+ popq %r13;
+- popq %r14;
+- popq %r15;
+ ret;
+
+ .L__enc_xor3:
+ outunpack_enc3(xor);
+
+ popq %rbx;
+- popq %rbp;
+ popq %r12;
+ popq %r13;
+- popq %r14;
+- popq %r15;
+ ret;
+ ENDPROC(__twofish_enc_blk_3way)
+
+@@ -278,35 +290,31 @@ ENTRY(twofish_dec_blk_3way)
+ * %rsi: dst
+ * %rdx: src, RIO
+ */
+- pushq %r15;
+- pushq %r14;
+ pushq %r13;
+ pushq %r12;
+- pushq %rbp;
+ pushq %rbx;
+
+ pushq %rsi; /* dst */
+
+ inpack_dec3();
+
+- decrypt_cycle3(RAB, RCD, 7);
+- decrypt_cycle3(RAB, RCD, 6);
+- decrypt_cycle3(RAB, RCD, 5);
+- decrypt_cycle3(RAB, RCD, 4);
+- decrypt_cycle3(RAB, RCD, 3);
+- decrypt_cycle3(RAB, RCD, 2);
+- decrypt_cycle3(RAB, RCD, 1);
+- decrypt_cycle3(RAB, RCD, 0);
++ push_cd();
++ decrypt_cycle3(RAB, CD, 7);
++ decrypt_cycle3(RAB, CD, 6);
++ decrypt_cycle3(RAB, CD, 5);
++ decrypt_cycle3(RAB, CD, 4);
++ decrypt_cycle3(RAB, CD, 3);
++ decrypt_cycle3(RAB, CD, 2);
++ decrypt_cycle3(RAB, CD, 1);
++ decrypt_cycle3(RAB, CD, 0);
++ pop_cd();
+
+ popq RIO; /* dst */
+
+ outunpack_dec3();
+
+ popq %rbx;
+- popq %rbp;
+ popq %r12;
+ popq %r13;
+- popq %r14;
+- popq %r15;
+ ret;
+ ENDPROC(twofish_dec_blk_3way)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index ac381437c291..17f4eca37d22 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -2939,6 +2939,12 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+ pagefault_enable();
+ kvm_x86_ops->vcpu_put(vcpu);
+ vcpu->arch.last_host_tsc = rdtsc();
++ /*
++ * If userspace has set any breakpoints or watchpoints, dr6 is restored
++ * on every vmexit, but if not, we might have a stale dr6 from the
++ * guest. do_debug expects dr6 to be cleared after it runs, do the same.
++ */
++ set_debugreg(0, 6);
+ }
+
+ static int kvm_vcpu_ioctl_get_lapic(struct kvm_vcpu *vcpu,
+diff --git a/block/blk-map.c b/block/blk-map.c
+index d3a94719f03f..db9373bd31ac 100644
+--- a/block/blk-map.c
++++ b/block/blk-map.c
+@@ -119,7 +119,7 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
+ unsigned long align = q->dma_pad_mask | queue_dma_alignment(q);
+ struct bio *bio = NULL;
+ struct iov_iter i;
+- int ret;
++ int ret = -EINVAL;
+
+ if (!iter_is_iovec(iter))
+ goto fail;
+@@ -148,7 +148,7 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
+ __blk_rq_unmap_user(bio);
+ fail:
+ rq->bio = NULL;
+- return -EINVAL;
++ return ret;
+ }
+ EXPORT_SYMBOL(blk_rq_map_user_iov);
+
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index ec0917fb7cca..255eabdca2a4 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -1933,8 +1933,14 @@ static void binder_send_failed_reply(struct binder_transaction *t,
+ &target_thread->todo);
+ wake_up_interruptible(&target_thread->wait);
+ } else {
+- WARN(1, "Unexpected reply error: %u\n",
+- target_thread->reply_error.cmd);
++ /*
++ * Cannot get here for normal operation, but
++ * we can if multiple synchronous transactions
++ * are sent without blocking for responses.
++ * Just ignore the 2nd error in this case.
++ */
++ pr_warn("Unexpected reply error: %u\n",
++ target_thread->reply_error.cmd);
+ }
+ binder_inner_proc_unlock(target_thread->proc);
+ binder_thread_dec_tmpref(target_thread);
+@@ -2135,7 +2141,7 @@ static void binder_transaction_buffer_release(struct binder_proc *proc,
+ int debug_id = buffer->debug_id;
+
+ binder_debug(BINDER_DEBUG_TRANSACTION,
+- "%d buffer release %d, size %zd-%zd, failed at %p\n",
++ "%d buffer release %d, size %zd-%zd, failed at %pK\n",
+ proc->pid, buffer->debug_id,
+ buffer->data_size, buffer->offsets_size, failed_at);
+
+@@ -3647,7 +3653,7 @@ static int binder_thread_write(struct binder_proc *proc,
+ }
+ }
+ binder_debug(BINDER_DEBUG_DEAD_BINDER,
+- "%d:%d BC_DEAD_BINDER_DONE %016llx found %p\n",
++ "%d:%d BC_DEAD_BINDER_DONE %016llx found %pK\n",
+ proc->pid, thread->pid, (u64)cookie,
+ death);
+ if (death == NULL) {
+@@ -4316,6 +4322,15 @@ static int binder_thread_release(struct binder_proc *proc,
+
+ binder_inner_proc_unlock(thread->proc);
+
++ /*
++ * This is needed to avoid races between wake_up_poll() above and
++ * and ep_remove_waitqueue() called for other reasons (eg the epoll file
++ * descriptor being closed); ep_remove_waitqueue() holds an RCU read
++ * lock, so we can be sure it's done after calling synchronize_rcu().
++ */
++ if (thread->looper & BINDER_LOOPER_STATE_POLL)
++ synchronize_rcu();
++
+ if (send_reply)
+ binder_send_failed_reply(send_reply, BR_DEAD_REPLY);
+ binder_release_work(proc, &thread->todo);
+@@ -4331,6 +4346,8 @@ static unsigned int binder_poll(struct file *filp,
+ bool wait_for_proc_work;
+
+ thread = binder_get_thread(proc);
++ if (!thread)
++ return POLLERR;
+
+ binder_inner_proc_lock(thread->proc);
+ thread->looper |= BINDER_LOOPER_STATE_POLL;
+@@ -4974,7 +4991,7 @@ static void print_binder_transaction_ilocked(struct seq_file *m,
+ spin_lock(&t->lock);
+ to_proc = t->to_proc;
+ seq_printf(m,
+- "%s %d: %p from %d:%d to %d:%d code %x flags %x pri %ld r%d",
++ "%s %d: %pK from %d:%d to %d:%d code %x flags %x pri %ld r%d",
+ prefix, t->debug_id, t,
+ t->from ? t->from->proc->pid : 0,
+ t->from ? t->from->pid : 0,
+@@ -4998,7 +5015,7 @@ static void print_binder_transaction_ilocked(struct seq_file *m,
+ }
+ if (buffer->target_node)
+ seq_printf(m, " node %d", buffer->target_node->debug_id);
+- seq_printf(m, " size %zd:%zd data %p\n",
++ seq_printf(m, " size %zd:%zd data %pK\n",
+ buffer->data_size, buffer->offsets_size,
+ buffer->data);
+ }
+diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c
+index 142c6020cec7..5c0496d1ed41 100644
+--- a/drivers/crypto/s5p-sss.c
++++ b/drivers/crypto/s5p-sss.c
+@@ -1926,15 +1926,21 @@ static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode)
+ uint32_t aes_control;
+ unsigned long flags;
+ int err;
++ u8 *iv;
+
+ aes_control = SSS_AES_KEY_CHANGE_MODE;
+ if (mode & FLAGS_AES_DECRYPT)
+ aes_control |= SSS_AES_MODE_DECRYPT;
+
+- if ((mode & FLAGS_AES_MODE_MASK) == FLAGS_AES_CBC)
++ if ((mode & FLAGS_AES_MODE_MASK) == FLAGS_AES_CBC) {
+ aes_control |= SSS_AES_CHAIN_MODE_CBC;
+- else if ((mode & FLAGS_AES_MODE_MASK) == FLAGS_AES_CTR)
++ iv = req->info;
++ } else if ((mode & FLAGS_AES_MODE_MASK) == FLAGS_AES_CTR) {
+ aes_control |= SSS_AES_CHAIN_MODE_CTR;
++ iv = req->info;
++ } else {
++ iv = NULL; /* AES_ECB */
++ }
+
+ if (dev->ctx->keylen == AES_KEYSIZE_192)
+ aes_control |= SSS_AES_KEY_SIZE_192;
+@@ -1965,7 +1971,7 @@ static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode)
+ goto outdata_error;
+
+ SSS_AES_WRITE(dev, AES_CONTROL, aes_control);
+- s5p_set_aes(dev, dev->ctx->aes_key, req->info, dev->ctx->keylen);
++ s5p_set_aes(dev, dev->ctx->aes_key, iv, dev->ctx->keylen);
+
+ s5p_set_dma_indata(dev, dev->sg_src);
+ s5p_set_dma_outdata(dev, dev->sg_dst);
+diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+index 8289ee482f49..09bd6c6c176c 100644
+--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
++++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+@@ -3648,6 +3648,12 @@ static int pvr2_send_request_ex(struct pvr2_hdw *hdw,
+ hdw);
+ hdw->ctl_write_urb->actual_length = 0;
+ hdw->ctl_write_pend_flag = !0;
++ if (usb_urb_ep_type_check(hdw->ctl_write_urb)) {
++ pvr2_trace(
++ PVR2_TRACE_ERROR_LEGS,
++ "Invalid write control endpoint");
++ return -EINVAL;
++ }
+ status = usb_submit_urb(hdw->ctl_write_urb,GFP_KERNEL);
+ if (status < 0) {
+ pvr2_trace(PVR2_TRACE_ERROR_LEGS,
+@@ -3672,6 +3678,12 @@ status);
+ hdw);
+ hdw->ctl_read_urb->actual_length = 0;
+ hdw->ctl_read_pend_flag = !0;
++ if (usb_urb_ep_type_check(hdw->ctl_read_urb)) {
++ pvr2_trace(
++ PVR2_TRACE_ERROR_LEGS,
++ "Invalid read control endpoint");
++ return -EINVAL;
++ }
+ status = usb_submit_urb(hdw->ctl_read_urb,GFP_KERNEL);
+ if (status < 0) {
+ pvr2_trace(PVR2_TRACE_ERROR_LEGS,
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 0ccccbaf530d..e4b10b2d1a08 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -132,6 +132,11 @@
+ #define MEI_DEV_ID_KBP 0xA2BA /* Kaby Point */
+ #define MEI_DEV_ID_KBP_2 0xA2BB /* Kaby Point 2 */
+
++#define MEI_DEV_ID_CNP_LP 0x9DE0 /* Cannon Point LP */
++#define MEI_DEV_ID_CNP_LP_4 0x9DE4 /* Cannon Point LP 4 (iTouch) */
++#define MEI_DEV_ID_CNP_H 0xA360 /* Cannon Point H */
++#define MEI_DEV_ID_CNP_H_4 0xA364 /* Cannon Point H 4 (iTouch) */
++
+ /*
+ * MEI HW Section
+ */
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 4a0ccda4d04b..ea4e152270a3 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -98,6 +98,11 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ {MEI_PCI_DEVICE(MEI_DEV_ID_KBP, MEI_ME_PCH8_CFG)},
+ {MEI_PCI_DEVICE(MEI_DEV_ID_KBP_2, MEI_ME_PCH8_CFG)},
+
++ {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP, MEI_ME_PCH8_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_LP_4, MEI_ME_PCH8_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H, MEI_ME_PCH8_CFG)},
++ {MEI_PCI_DEVICE(MEI_DEV_ID_CNP_H_4, MEI_ME_PCH8_CFG)},
++
+ /* required last entry */
+ {0, }
+ };
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index a8ec589d1359..e29cd5c7d39f 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1317,27 +1317,23 @@ static struct sk_buff *tun_napi_alloc_frags(struct tun_file *tfile,
+ skb->truesize += skb->data_len;
+
+ for (i = 1; i < it->nr_segs; i++) {
++ struct page_frag *pfrag = &current->task_frag;
+ size_t fragsz = it->iov[i].iov_len;
+- unsigned long offset;
+- struct page *page;
+- void *data;
+
+ if (fragsz == 0 || fragsz > PAGE_SIZE) {
+ err = -EINVAL;
+ goto free;
+ }
+
+- local_bh_disable();
+- data = napi_alloc_frag(fragsz);
+- local_bh_enable();
+- if (!data) {
++ if (!skb_page_frag_refill(fragsz, pfrag, GFP_KERNEL)) {
+ err = -ENOMEM;
+ goto free;
+ }
+
+- page = virt_to_head_page(data);
+- offset = data - page_address(page);
+- skb_fill_page_desc(skb, i - 1, page, offset, fragsz);
++ skb_fill_page_desc(skb, i - 1, pfrag->page,
++ pfrag->offset, fragsz);
++ page_ref_inc(pfrag->page);
++ pfrag->offset += fragsz;
+ }
+
+ return skb;
+diff --git a/drivers/soc/qcom/rmtfs_mem.c b/drivers/soc/qcom/rmtfs_mem.c
+index ce35ff748adf..0a43b2e8906f 100644
+--- a/drivers/soc/qcom/rmtfs_mem.c
++++ b/drivers/soc/qcom/rmtfs_mem.c
+@@ -267,3 +267,7 @@ static void qcom_rmtfs_mem_exit(void)
+ unregister_chrdev_region(qcom_rmtfs_mem_major, QCOM_RMTFS_MEM_DEV_MAX);
+ }
+ module_exit(qcom_rmtfs_mem_exit);
++
++MODULE_AUTHOR("Linaro Ltd");
++MODULE_DESCRIPTION("Qualcomm Remote Filesystem memory driver");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
+index 372ce9913e6d..e7541dc90473 100644
+--- a/drivers/staging/android/ashmem.c
++++ b/drivers/staging/android/ashmem.c
+@@ -710,30 +710,32 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
+ size_t pgstart, pgend;
+ int ret = -EINVAL;
+
++ mutex_lock(&ashmem_mutex);
++
+ if (unlikely(!asma->file))
+- return -EINVAL;
++ goto out_unlock;
+
+- if (unlikely(copy_from_user(&pin, p, sizeof(pin))))
+- return -EFAULT;
++ if (unlikely(copy_from_user(&pin, p, sizeof(pin)))) {
++ ret = -EFAULT;
++ goto out_unlock;
++ }
+
+ /* per custom, you can pass zero for len to mean "everything onward" */
+ if (!pin.len)
+ pin.len = PAGE_ALIGN(asma->size) - pin.offset;
+
+ if (unlikely((pin.offset | pin.len) & ~PAGE_MASK))
+- return -EINVAL;
++ goto out_unlock;
+
+ if (unlikely(((__u32)-1) - pin.offset < pin.len))
+- return -EINVAL;
++ goto out_unlock;
+
+ if (unlikely(PAGE_ALIGN(asma->size) < pin.offset + pin.len))
+- return -EINVAL;
++ goto out_unlock;
+
+ pgstart = pin.offset / PAGE_SIZE;
+ pgend = pgstart + (pin.len / PAGE_SIZE) - 1;
+
+- mutex_lock(&ashmem_mutex);
+-
+ switch (cmd) {
+ case ASHMEM_PIN:
+ ret = ashmem_pin(asma, pgstart, pgend);
+@@ -746,6 +748,7 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
+ break;
+ }
+
++out_unlock:
+ mutex_unlock(&ashmem_mutex);
+
+ return ret;
+diff --git a/drivers/staging/android/ion/ion-ioctl.c b/drivers/staging/android/ion/ion-ioctl.c
+index c78989351f9c..6cfed48f376e 100644
+--- a/drivers/staging/android/ion/ion-ioctl.c
++++ b/drivers/staging/android/ion/ion-ioctl.c
+@@ -70,8 +70,10 @@ long ion_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ return -EFAULT;
+
+ ret = validate_ioctl_arg(cmd, &data);
+- if (WARN_ON_ONCE(ret))
++ if (ret) {
++ pr_warn_once("%s: ioctl validate failed\n", __func__);
+ return ret;
++ }
+
+ if (!(dir & _IOC_WRITE))
+ memset(&data, 0, sizeof(data));
+diff --git a/drivers/staging/android/ion/ion_system_heap.c b/drivers/staging/android/ion/ion_system_heap.c
+index 4dc5d7a589c2..b6ece18e6a88 100644
+--- a/drivers/staging/android/ion/ion_system_heap.c
++++ b/drivers/staging/android/ion/ion_system_heap.c
+@@ -371,7 +371,7 @@ static int ion_system_contig_heap_allocate(struct ion_heap *heap,
+ unsigned long i;
+ int ret;
+
+- page = alloc_pages(low_order_gfp_flags, order);
++ page = alloc_pages(low_order_gfp_flags | __GFP_NOWARN, order);
+ if (!page)
+ return -ENOMEM;
+
+diff --git a/drivers/staging/fsl-mc/bus/Kconfig b/drivers/staging/fsl-mc/bus/Kconfig
+index 504c987447f2..eee1c1b277fa 100644
+--- a/drivers/staging/fsl-mc/bus/Kconfig
++++ b/drivers/staging/fsl-mc/bus/Kconfig
+@@ -8,7 +8,7 @@
+
+ config FSL_MC_BUS
+ bool "QorIQ DPAA2 fsl-mc bus driver"
+- depends on OF && (ARCH_LAYERSCAPE || (COMPILE_TEST && (ARM || ARM64 || X86 || PPC)))
++ depends on OF && (ARCH_LAYERSCAPE || (COMPILE_TEST && (ARM || ARM64 || X86_LOCAL_APIC || PPC)))
+ select GENERIC_MSI_IRQ_DOMAIN
+ help
+ Driver to enable the bus infrastructure for the QorIQ DPAA2
+diff --git a/drivers/staging/iio/adc/ad7192.c b/drivers/staging/iio/adc/ad7192.c
+index cadfb96734ed..d4da2807eb55 100644
+--- a/drivers/staging/iio/adc/ad7192.c
++++ b/drivers/staging/iio/adc/ad7192.c
+@@ -141,6 +141,8 @@
+ #define AD7192_GPOCON_P1DAT BIT(1) /* P1 state */
+ #define AD7192_GPOCON_P0DAT BIT(0) /* P0 state */
+
++#define AD7192_EXT_FREQ_MHZ_MIN 2457600
++#define AD7192_EXT_FREQ_MHZ_MAX 5120000
+ #define AD7192_INT_FREQ_MHZ 4915200
+
+ /* NOTE:
+@@ -218,6 +220,12 @@ static int ad7192_calibrate_all(struct ad7192_state *st)
+ ARRAY_SIZE(ad7192_calib_arr));
+ }
+
++static inline bool ad7192_valid_external_frequency(u32 freq)
++{
++ return (freq >= AD7192_EXT_FREQ_MHZ_MIN &&
++ freq <= AD7192_EXT_FREQ_MHZ_MAX);
++}
++
+ static int ad7192_setup(struct ad7192_state *st,
+ const struct ad7192_platform_data *pdata)
+ {
+@@ -243,17 +251,20 @@ static int ad7192_setup(struct ad7192_state *st,
+ id);
+
+ switch (pdata->clock_source_sel) {
+- case AD7192_CLK_EXT_MCLK1_2:
+- case AD7192_CLK_EXT_MCLK2:
+- st->mclk = AD7192_INT_FREQ_MHZ;
+- break;
+ case AD7192_CLK_INT:
+ case AD7192_CLK_INT_CO:
+- if (pdata->ext_clk_hz)
+- st->mclk = pdata->ext_clk_hz;
+- else
+- st->mclk = AD7192_INT_FREQ_MHZ;
++ st->mclk = AD7192_INT_FREQ_MHZ;
+ break;
++ case AD7192_CLK_EXT_MCLK1_2:
++ case AD7192_CLK_EXT_MCLK2:
++ if (ad7192_valid_external_frequency(pdata->ext_clk_hz)) {
++ st->mclk = pdata->ext_clk_hz;
++ break;
++ }
++ dev_err(&st->sd.spi->dev, "Invalid frequency setting %u\n",
++ pdata->ext_clk_hz);
++ ret = -EINVAL;
++ goto out;
+ default:
+ ret = -EINVAL;
+ goto out;
+diff --git a/drivers/staging/iio/impedance-analyzer/ad5933.c b/drivers/staging/iio/impedance-analyzer/ad5933.c
+index 2b28fb9c0048..3bcf49466361 100644
+--- a/drivers/staging/iio/impedance-analyzer/ad5933.c
++++ b/drivers/staging/iio/impedance-analyzer/ad5933.c
+@@ -648,8 +648,6 @@ static int ad5933_register_ring_funcs_and_init(struct iio_dev *indio_dev)
+ /* Ring buffer functions - here trigger setup related */
+ indio_dev->setup_ops = &ad5933_ring_setup_ops;
+
+- indio_dev->modes |= INDIO_BUFFER_HARDWARE;
+-
+ return 0;
+ }
+
+@@ -762,7 +760,7 @@ static int ad5933_probe(struct i2c_client *client,
+ indio_dev->dev.parent = &client->dev;
+ indio_dev->info = &ad5933_info;
+ indio_dev->name = id->name;
+- indio_dev->modes = INDIO_DIRECT_MODE;
++ indio_dev->modes = (INDIO_BUFFER_SOFTWARE | INDIO_DIRECT_MODE);
+ indio_dev->channels = ad5933_channels;
+ indio_dev->num_channels = ARRAY_SIZE(ad5933_channels);
+
+diff --git a/drivers/usb/host/xhci-debugfs.c b/drivers/usb/host/xhci-debugfs.c
+index e26e685d8a57..5851052d4668 100644
+--- a/drivers/usb/host/xhci-debugfs.c
++++ b/drivers/usb/host/xhci-debugfs.c
+@@ -211,7 +211,7 @@ static void xhci_ring_dump_segment(struct seq_file *s,
+ static int xhci_ring_trb_show(struct seq_file *s, void *unused)
+ {
+ int i;
+- struct xhci_ring *ring = s->private;
++ struct xhci_ring *ring = *(struct xhci_ring **)s->private;
+ struct xhci_segment *seg = ring->first_seg;
+
+ for (i = 0; i < ring->num_segs; i++) {
+@@ -387,7 +387,7 @@ void xhci_debugfs_create_endpoint(struct xhci_hcd *xhci,
+
+ snprintf(epriv->name, sizeof(epriv->name), "ep%02d", ep_index);
+ epriv->root = xhci_debugfs_create_ring_dir(xhci,
+- &dev->eps[ep_index].new_ring,
++ &dev->eps[ep_index].ring,
+ epriv->name,
+ spriv->root);
+ spriv->eps[ep_index] = epriv;
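The xhci-debugfs fix above changes the stored private data from a snapshot of the ring pointer to the address of the pointer itself (`&dev->eps[ep_index].ring`), so the show routine always follows whatever ring is current. A minimal userspace sketch of that double-pointer pattern, with made-up struct names standing in for xhci_ring and the debugfs private data:

```c
/* Stand-ins for xhci_ring and the debugfs file's private data; the
 * real fix stores &dev->eps[i].ring and the show routine dereferences
 * it as *(struct xhci_ring **)s->private. */
struct ring { int id; };
struct ring_file { struct ring **ringp; };	/* the pointer's address */

static int ring_show_id(struct ring_file *f)
{
	struct ring *ring = *f->ringp;	/* always the current ring */
	return ring->id;
}
```

Because the file holds the pointer's address rather than its value, replacing the ring (as happens when an endpoint is reconfigured) is immediately visible to readers.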
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index da6dbe3ebd8b..5c1326154e66 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -652,8 +652,6 @@ static void xhci_stop(struct usb_hcd *hcd)
+ return;
+ }
+
+- xhci_debugfs_exit(xhci);
+-
+ spin_lock_irq(&xhci->lock);
+ xhci->xhc_state |= XHCI_STATE_HALTED;
+ xhci->cmd_ring_state = CMD_RING_STATE_STOPPED;
+@@ -685,6 +683,7 @@ static void xhci_stop(struct usb_hcd *hcd)
+
+ xhci_dbg_trace(xhci, trace_xhci_dbg_init, "cleaning up memory");
+ xhci_mem_cleanup(xhci);
++ xhci_debugfs_exit(xhci);
+ xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+ "xhci_stop completed - status = %x",
+ readl(&xhci->op_regs->status));
+@@ -1018,6 +1017,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+
+ xhci_dbg(xhci, "cleaning up memory\n");
+ xhci_mem_cleanup(xhci);
++ xhci_debugfs_exit(xhci);
+ xhci_dbg(xhci, "xhci_stop completed - status = %x\n",
+ readl(&xhci->op_regs->status));
+
+@@ -3551,12 +3551,10 @@ static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev)
+ virt_dev->eps[i].ep_state &= ~EP_STOP_CMD_PENDING;
+ del_timer_sync(&virt_dev->eps[i].stop_cmd_timer);
+ }
+-
++ xhci_debugfs_remove_slot(xhci, udev->slot_id);
+ ret = xhci_disable_slot(xhci, udev->slot_id);
+- if (ret) {
+- xhci_debugfs_remove_slot(xhci, udev->slot_id);
++ if (ret)
+ xhci_free_virt_device(xhci, udev->slot_id);
+- }
+ }
+
+ int xhci_disable_slot(struct xhci_hcd *xhci, u32 slot_id)
+diff --git a/drivers/usb/usbip/stub_dev.c b/drivers/usb/usbip/stub_dev.c
+index e31a6f204397..86037e5b1101 100644
+--- a/drivers/usb/usbip/stub_dev.c
++++ b/drivers/usb/usbip/stub_dev.c
+@@ -73,6 +73,7 @@ static ssize_t store_sockfd(struct device *dev, struct device_attribute *attr,
+ goto err;
+
+ sdev->ud.tcp_socket = socket;
++ sdev->ud.sockfd = sockfd;
+
+ spin_unlock_irq(&sdev->ud.lock);
+
+@@ -172,6 +173,7 @@ static void stub_shutdown_connection(struct usbip_device *ud)
+ if (ud->tcp_socket) {
+ sockfd_put(ud->tcp_socket);
+ ud->tcp_socket = NULL;
++ ud->sockfd = -1;
+ }
+
+ /* 3. free used data */
+@@ -266,6 +268,7 @@ static struct stub_device *stub_device_alloc(struct usb_device *udev)
+ sdev->ud.status = SDEV_ST_AVAILABLE;
+ spin_lock_init(&sdev->ud.lock);
+ sdev->ud.tcp_socket = NULL;
++ sdev->ud.sockfd = -1;
+
+ INIT_LIST_HEAD(&sdev->priv_init);
+ INIT_LIST_HEAD(&sdev->priv_tx);
+diff --git a/drivers/usb/usbip/vhci_hcd.c b/drivers/usb/usbip/vhci_hcd.c
+index c3e1008aa491..20e3d4609583 100644
+--- a/drivers/usb/usbip/vhci_hcd.c
++++ b/drivers/usb/usbip/vhci_hcd.c
+@@ -984,6 +984,7 @@ static void vhci_shutdown_connection(struct usbip_device *ud)
+ if (vdev->ud.tcp_socket) {
+ sockfd_put(vdev->ud.tcp_socket);
+ vdev->ud.tcp_socket = NULL;
++ vdev->ud.sockfd = -1;
+ }
+ pr_info("release socket\n");
+
+@@ -1030,6 +1031,7 @@ static void vhci_device_reset(struct usbip_device *ud)
+ if (ud->tcp_socket) {
+ sockfd_put(ud->tcp_socket);
+ ud->tcp_socket = NULL;
++ ud->sockfd = -1;
+ }
+ ud->status = VDEV_ST_NULL;
+
+diff --git a/drivers/video/fbdev/mmp/core.c b/drivers/video/fbdev/mmp/core.c
+index a0f496049db7..3a6bb6561ba0 100644
+--- a/drivers/video/fbdev/mmp/core.c
++++ b/drivers/video/fbdev/mmp/core.c
+@@ -23,6 +23,7 @@
+ #include <linux/slab.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/export.h>
++#include <linux/module.h>
+ #include <video/mmp_disp.h>
+
+ static struct mmp_overlay *path_get_overlay(struct mmp_path *path,
+@@ -249,3 +250,7 @@ void mmp_unregister_path(struct mmp_path *path)
+ mutex_unlock(&disp_lock);
+ }
+ EXPORT_SYMBOL_GPL(mmp_unregister_path);
++
++MODULE_AUTHOR("Zhou Zhu <zzhu3@marvell.com>");
++MODULE_DESCRIPTION("Marvell MMP display framework");
++MODULE_LICENSE("GPL");
+diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
+index d72b2e7dd500..59c77c1388ae 100644
+--- a/include/linux/ptr_ring.h
++++ b/include/linux/ptr_ring.h
+@@ -451,9 +451,14 @@ static inline int ptr_ring_consume_batched_bh(struct ptr_ring *r,
+ __PTR_RING_PEEK_CALL_v; \
+ })
+
++/* Not all gfp_t flags (besides GFP_KERNEL) are allowed. See
++ * documentation for vmalloc for which of them are legal.
++ */
+ static inline void **__ptr_ring_init_queue_alloc(unsigned int size, gfp_t gfp)
+ {
+- return kcalloc(size, sizeof(void *), gfp);
++ if (size * sizeof(void *) > KMALLOC_MAX_SIZE)
++ return NULL;
++ return kvmalloc_array(size, sizeof(void *), gfp | __GFP_ZERO);
+ }
+
+ static inline void __ptr_ring_set_size(struct ptr_ring *r, int size)
+@@ -586,7 +591,7 @@ static inline int ptr_ring_resize(struct ptr_ring *r, int size, gfp_t gfp,
+ spin_unlock(&(r)->producer_lock);
+ spin_unlock_irqrestore(&(r)->consumer_lock, flags);
+
+- kfree(old);
++ kvfree(old);
+
+ return 0;
+ }
+@@ -626,7 +631,7 @@ static inline int ptr_ring_resize_multiple(struct ptr_ring **rings,
+ }
+
+ for (i = 0; i < nrings; ++i)
+- kfree(queues[i]);
++ kvfree(queues[i]);
+
+ kfree(queues);
+
+@@ -634,7 +639,7 @@ static inline int ptr_ring_resize_multiple(struct ptr_ring **rings,
+
+ nomem:
+ while (--i >= 0)
+- kfree(queues[i]);
++ kvfree(queues[i]);
+
+ kfree(queues);
+
+@@ -649,7 +654,7 @@ static inline void ptr_ring_cleanup(struct ptr_ring *r, void (*destroy)(void *))
+ if (destroy)
+ while ((ptr = ptr_ring_consume(r)))
+ destroy(ptr);
+- kfree(r->queue);
++ kvfree(r->queue);
+ }
+
+ #endif /* _LINUX_PTR_RING_H */
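The ptr_ring change above rejects oversized queues before allocating, switches the allocation to kvmalloc_array() with __GFP_ZERO, and frees with kvfree(). A userspace approximation of the size guard (the limit constant is a stand-in for KMALLOC_MAX_SIZE, and calloc stands in for the zeroing allocator):

```c
#include <stdint.h>
#include <stdlib.h>

#define FAKE_KMALLOC_MAX (1UL << 22)	/* stand-in for KMALLOC_MAX_SIZE */

/* Mirrors the patched __ptr_ring_init_queue_alloc(): refuse queues
 * whose byte size exceeds the limit (checked in 64-bit arithmetic so
 * the multiply cannot wrap), otherwise return zeroed storage. */
static void **queue_alloc(unsigned int size)
{
	if ((uint64_t)size * sizeof(void *) > FAKE_KMALLOC_MAX)
		return NULL;
	return calloc(size, sizeof(void *));
}
```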
+diff --git a/kernel/kcov.c b/kernel/kcov.c
+index 7594c033d98a..2c16f1ab5e10 100644
+--- a/kernel/kcov.c
++++ b/kernel/kcov.c
+@@ -358,7 +358,8 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
+ */
+ if (kcov->mode != KCOV_MODE_INIT || !kcov->area)
+ return -EINVAL;
+- if (kcov->t != NULL)
++ t = current;
++ if (kcov->t != NULL || t->kcov != NULL)
+ return -EBUSY;
+ if (arg == KCOV_TRACE_PC)
+ kcov->mode = KCOV_MODE_TRACE_PC;
+@@ -370,7 +371,6 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
+ #endif
+ else
+ return -EINVAL;
+- t = current;
+ /* Cache in task struct for performance. */
+ t->kcov_size = kcov->size;
+ t->kcov_area = kcov->area;
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 673942094328..ebff729cc956 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -1943,11 +1943,15 @@ void *vmalloc_exec(unsigned long size)
+ }
+
+ #if defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA32)
+-#define GFP_VMALLOC32 GFP_DMA32 | GFP_KERNEL
++#define GFP_VMALLOC32 (GFP_DMA32 | GFP_KERNEL)
+ #elif defined(CONFIG_64BIT) && defined(CONFIG_ZONE_DMA)
+-#define GFP_VMALLOC32 GFP_DMA | GFP_KERNEL
++#define GFP_VMALLOC32 (GFP_DMA | GFP_KERNEL)
+ #else
+-#define GFP_VMALLOC32 GFP_KERNEL
++/*
++ * 64b systems should always have either DMA or DMA32 zones. For others
++ * GFP_DMA32 should do the right thing and use the normal zone.
++ */
++#define GFP_VMALLOC32 GFP_DMA32 | GFP_KERNEL
+ #endif
+
+ /**
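The vmalloc hunk above wraps the first two GFP_VMALLOC32 definitions in parentheses; without them an expression-like macro can bind wrongly against higher-precedence operators at the use site. A tiny illustration with invented flag values:

```c
/* Unparenthesized "A | B" macros interact badly with neighbouring
 * operators; the flag values here are made up for illustration. */
#define VMALLOC32_BAD	0x4 | 0x10
#define VMALLOC32_GOOD	(0x4 | 0x10)

/* '&' binds tighter than '|', so the bad macro expands the mask as
 * 0x4 | (0x10 & 0x10) instead of (0x4 | 0x10) & 0x10. */
static int mask_bad(void)  { return VMALLOC32_BAD & 0x10; }
static int mask_good(void) { return VMALLOC32_GOOD & 0x10; }
```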
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 613fb4066be7..c8c102a3467f 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2815,7 +2815,7 @@ struct sk_buff *__skb_gso_segment(struct sk_buff *skb,
+
+ segs = skb_mac_gso_segment(skb, features);
+
+- if (unlikely(skb_needs_check(skb, tx_path)))
++ if (unlikely(skb_needs_check(skb, tx_path) && !IS_ERR(segs)))
+ skb_warn_bad_offload(skb);
+
+ return segs;
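The __skb_gso_segment fix above adds an IS_ERR() check so the bad-offload warning is not emitted for error-encoded return pointers. A userspace sketch of the ERR_PTR/IS_ERR convention it relies on (simplified from the kernel's err.h):

```c
#include <stdint.h>

#define MAX_ERRNO 4095

/* Negative errno values occupy the top page of the address space, so
 * a single range check distinguishes them from real pointers. */
static inline void *ERR_PTR(long err) { return (void *)err; }
static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```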
+diff --git a/net/core/gen_estimator.c b/net/core/gen_estimator.c
+index 9834cfa21b21..0a3f88f08727 100644
+--- a/net/core/gen_estimator.c
++++ b/net/core/gen_estimator.c
+@@ -159,7 +159,11 @@ int gen_new_estimator(struct gnet_stats_basic_packed *bstats,
+ est->intvl_log = intvl_log;
+ est->cpu_bstats = cpu_bstats;
+
++ if (stats_lock)
++ local_bh_disable();
+ est_fetch_counters(est, &b);
++ if (stats_lock)
++ local_bh_enable();
+ est->last_bytes = b.bytes;
+ est->last_packets = b.packets;
+ old = rcu_dereference_protected(*rate_est, 1);
+diff --git a/net/decnet/af_decnet.c b/net/decnet/af_decnet.c
+index 518cea17b811..ea9b55309483 100644
+--- a/net/decnet/af_decnet.c
++++ b/net/decnet/af_decnet.c
+@@ -1338,6 +1338,12 @@ static int dn_setsockopt(struct socket *sock, int level, int optname, char __use
+ lock_sock(sk);
+ err = __dn_setsockopt(sock, level, optname, optval, optlen, 0);
+ release_sock(sk);
++#ifdef CONFIG_NETFILTER
++ /* we need to exclude all possible ENOPROTOOPTs except default case */
++ if (err == -ENOPROTOOPT && optname != DSO_LINKINFO &&
++ optname != DSO_STREAM && optname != DSO_SEQPACKET)
++ err = nf_setsockopt(sk, PF_DECnet, optname, optval, optlen);
++#endif
+
+ return err;
+ }
+@@ -1445,15 +1451,6 @@ static int __dn_setsockopt(struct socket *sock, int level,int optname, char __us
+ dn_nsp_send_disc(sk, 0x38, 0, sk->sk_allocation);
+ break;
+
+- default:
+-#ifdef CONFIG_NETFILTER
+- return nf_setsockopt(sk, PF_DECnet, optname, optval, optlen);
+-#endif
+- case DSO_LINKINFO:
+- case DSO_STREAM:
+- case DSO_SEQPACKET:
+- return -ENOPROTOOPT;
+-
+ case DSO_MAXWINDOW:
+ if (optlen != sizeof(unsigned long))
+ return -EINVAL;
+@@ -1501,6 +1498,12 @@ static int __dn_setsockopt(struct socket *sock, int level,int optname, char __us
+ return -EINVAL;
+ scp->info_loc = u.info;
+ break;
++
++ case DSO_LINKINFO:
++ case DSO_STREAM:
++ case DSO_SEQPACKET:
++ default:
++ return -ENOPROTOOPT;
+ }
+
+ return 0;
+@@ -1514,6 +1517,20 @@ static int dn_getsockopt(struct socket *sock, int level, int optname, char __use
+ lock_sock(sk);
+ err = __dn_getsockopt(sock, level, optname, optval, optlen, 0);
+ release_sock(sk);
++#ifdef CONFIG_NETFILTER
++ if (err == -ENOPROTOOPT && optname != DSO_STREAM &&
++ optname != DSO_SEQPACKET && optname != DSO_CONACCEPT &&
++ optname != DSO_CONREJECT) {
++ int len;
++
++ if (get_user(len, optlen))
++ return -EFAULT;
++
++ err = nf_getsockopt(sk, PF_DECnet, optname, optval, &len);
++ if (err >= 0)
++ err = put_user(len, optlen);
++ }
++#endif
+
+ return err;
+ }
+@@ -1579,26 +1596,6 @@ static int __dn_getsockopt(struct socket *sock, int level,int optname, char __us
+ r_data = &link;
+ break;
+
+- default:
+-#ifdef CONFIG_NETFILTER
+- {
+- int ret, len;
+-
+- if (get_user(len, optlen))
+- return -EFAULT;
+-
+- ret = nf_getsockopt(sk, PF_DECnet, optname, optval, &len);
+- if (ret >= 0)
+- ret = put_user(len, optlen);
+- return ret;
+- }
+-#endif
+- case DSO_STREAM:
+- case DSO_SEQPACKET:
+- case DSO_CONACCEPT:
+- case DSO_CONREJECT:
+- return -ENOPROTOOPT;
+-
+ case DSO_MAXWINDOW:
+ if (r_len > sizeof(unsigned long))
+ r_len = sizeof(unsigned long);
+@@ -1630,6 +1627,13 @@ static int __dn_getsockopt(struct socket *sock, int level,int optname, char __us
+ r_len = sizeof(unsigned char);
+ r_data = &scp->info_rem;
+ break;
++
++ case DSO_STREAM:
++ case DSO_SEQPACKET:
++ case DSO_CONACCEPT:
++ case DSO_CONREJECT:
++ default:
++ return -ENOPROTOOPT;
+ }
+
+ if (r_data) {
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index 60fb1eb7d7d8..c7df4969f80a 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -1251,11 +1251,8 @@ int ip_setsockopt(struct sock *sk, int level,
+ if (err == -ENOPROTOOPT && optname != IP_HDRINCL &&
+ optname != IP_IPSEC_POLICY &&
+ optname != IP_XFRM_POLICY &&
+- !ip_mroute_opt(optname)) {
+- lock_sock(sk);
++ !ip_mroute_opt(optname))
+ err = nf_setsockopt(sk, PF_INET, optname, optval, optlen);
+- release_sock(sk);
+- }
+ #endif
+ return err;
+ }
+@@ -1280,12 +1277,9 @@ int compat_ip_setsockopt(struct sock *sk, int level, int optname,
+ if (err == -ENOPROTOOPT && optname != IP_HDRINCL &&
+ optname != IP_IPSEC_POLICY &&
+ optname != IP_XFRM_POLICY &&
+- !ip_mroute_opt(optname)) {
+- lock_sock(sk);
+- err = compat_nf_setsockopt(sk, PF_INET, optname,
+- optval, optlen);
+- release_sock(sk);
+- }
++ !ip_mroute_opt(optname))
++ err = compat_nf_setsockopt(sk, PF_INET, optname, optval,
++ optlen);
+ #endif
+ return err;
+ }
+diff --git a/net/ipv4/netfilter/ipt_CLUSTERIP.c b/net/ipv4/netfilter/ipt_CLUSTERIP.c
+index 69060e3abe85..1e4a7209a3d2 100644
+--- a/net/ipv4/netfilter/ipt_CLUSTERIP.c
++++ b/net/ipv4/netfilter/ipt_CLUSTERIP.c
+@@ -431,7 +431,7 @@ static int clusterip_tg_check(const struct xt_tgchk_param *par)
+ struct ipt_clusterip_tgt_info *cipinfo = par->targinfo;
+ const struct ipt_entry *e = par->entryinfo;
+ struct clusterip_config *config;
+- int ret;
++ int ret, i;
+
+ if (par->nft_compat) {
+ pr_err("cannot use CLUSTERIP target from nftables compat\n");
+@@ -450,8 +450,18 @@ static int clusterip_tg_check(const struct xt_tgchk_param *par)
+ pr_info("Please specify destination IP\n");
+ return -EINVAL;
+ }
+-
+- /* FIXME: further sanity checks */
++ if (cipinfo->num_local_nodes > ARRAY_SIZE(cipinfo->local_nodes)) {
++ pr_info("bad num_local_nodes %u\n", cipinfo->num_local_nodes);
++ return -EINVAL;
++ }
++ for (i = 0; i < cipinfo->num_local_nodes; i++) {
++ if (cipinfo->local_nodes[i] - 1 >=
++ sizeof(config->local_nodes) * 8) {
++ pr_info("bad local_nodes[%d] %u\n",
++ i, cipinfo->local_nodes[i]);
++ return -EINVAL;
++ }
++ }
+
+ config = clusterip_config_find_get(par->net, e->ip.dst.s_addr, 1);
+ if (!config) {
+diff --git a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
+index 89af9d88ca21..a5727036a8a8 100644
+--- a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
++++ b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
+@@ -218,15 +218,19 @@ getorigdst(struct sock *sk, int optval, void __user *user, int *len)
+ struct nf_conntrack_tuple tuple;
+
+ memset(&tuple, 0, sizeof(tuple));
++
++ lock_sock(sk);
+ tuple.src.u3.ip = inet->inet_rcv_saddr;
+ tuple.src.u.tcp.port = inet->inet_sport;
+ tuple.dst.u3.ip = inet->inet_daddr;
+ tuple.dst.u.tcp.port = inet->inet_dport;
+ tuple.src.l3num = PF_INET;
+ tuple.dst.protonum = sk->sk_protocol;
++ release_sock(sk);
+
+ /* We only do TCP and SCTP at the moment: is there a better way? */
+- if (sk->sk_protocol != IPPROTO_TCP && sk->sk_protocol != IPPROTO_SCTP) {
++ if (tuple.dst.protonum != IPPROTO_TCP &&
++ tuple.dst.protonum != IPPROTO_SCTP) {
+ pr_debug("SO_ORIGINAL_DST: Not a TCP/SCTP socket\n");
+ return -ENOPROTOOPT;
+ }
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index e8ffb5b5d84e..d78d41fc4b1a 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -923,12 +923,8 @@ int ipv6_setsockopt(struct sock *sk, int level, int optname,
+ #ifdef CONFIG_NETFILTER
+ /* we need to exclude all possible ENOPROTOOPTs except default case */
+ if (err == -ENOPROTOOPT && optname != IPV6_IPSEC_POLICY &&
+- optname != IPV6_XFRM_POLICY) {
+- lock_sock(sk);
+- err = nf_setsockopt(sk, PF_INET6, optname, optval,
+- optlen);
+- release_sock(sk);
+- }
++ optname != IPV6_XFRM_POLICY)
++ err = nf_setsockopt(sk, PF_INET6, optname, optval, optlen);
+ #endif
+ return err;
+ }
+@@ -958,12 +954,9 @@ int compat_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ #ifdef CONFIG_NETFILTER
+ /* we need to exclude all possible ENOPROTOOPTs except default case */
+ if (err == -ENOPROTOOPT && optname != IPV6_IPSEC_POLICY &&
+- optname != IPV6_XFRM_POLICY) {
+- lock_sock(sk);
+- err = compat_nf_setsockopt(sk, PF_INET6, optname,
+- optval, optlen);
+- release_sock(sk);
+- }
++ optname != IPV6_XFRM_POLICY)
++ err = compat_nf_setsockopt(sk, PF_INET6, optname, optval,
++ optlen);
+ #endif
+ return err;
+ }
+diff --git a/net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c b/net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c
+index 3b80a38f62b8..5863579800c1 100644
+--- a/net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c
++++ b/net/ipv6/netfilter/nf_conntrack_l3proto_ipv6.c
+@@ -226,20 +226,27 @@ static const struct nf_hook_ops ipv6_conntrack_ops[] = {
+ static int
+ ipv6_getorigdst(struct sock *sk, int optval, void __user *user, int *len)
+ {
+- const struct inet_sock *inet = inet_sk(sk);
++ struct nf_conntrack_tuple tuple = { .src.l3num = NFPROTO_IPV6 };
+ const struct ipv6_pinfo *inet6 = inet6_sk(sk);
++ const struct inet_sock *inet = inet_sk(sk);
+ const struct nf_conntrack_tuple_hash *h;
+ struct sockaddr_in6 sin6;
+- struct nf_conntrack_tuple tuple = { .src.l3num = NFPROTO_IPV6 };
+ struct nf_conn *ct;
++ __be32 flow_label;
++ int bound_dev_if;
+
++ lock_sock(sk);
+ tuple.src.u3.in6 = sk->sk_v6_rcv_saddr;
+ tuple.src.u.tcp.port = inet->inet_sport;
+ tuple.dst.u3.in6 = sk->sk_v6_daddr;
+ tuple.dst.u.tcp.port = inet->inet_dport;
+ tuple.dst.protonum = sk->sk_protocol;
++ bound_dev_if = sk->sk_bound_dev_if;
++ flow_label = inet6->flow_label;
++ release_sock(sk);
+
+- if (sk->sk_protocol != IPPROTO_TCP && sk->sk_protocol != IPPROTO_SCTP)
++ if (tuple.dst.protonum != IPPROTO_TCP &&
++ tuple.dst.protonum != IPPROTO_SCTP)
+ return -ENOPROTOOPT;
+
+ if (*len < 0 || (unsigned int) *len < sizeof(sin6))
+@@ -257,14 +264,13 @@ ipv6_getorigdst(struct sock *sk, int optval, void __user *user, int *len)
+
+ sin6.sin6_family = AF_INET6;
+ sin6.sin6_port = ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.u.tcp.port;
+- sin6.sin6_flowinfo = inet6->flow_label & IPV6_FLOWINFO_MASK;
++ sin6.sin6_flowinfo = flow_label & IPV6_FLOWINFO_MASK;
+ memcpy(&sin6.sin6_addr,
+ &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.u3.in6,
+ sizeof(sin6.sin6_addr));
+
+ nf_ct_put(ct);
+- sin6.sin6_scope_id = ipv6_iface_scope_id(&sin6.sin6_addr,
+- sk->sk_bound_dev_if);
++ sin6.sin6_scope_id = ipv6_iface_scope_id(&sin6.sin6_addr, bound_dev_if);
+ return copy_to_user(user, &sin6, sizeof(sin6)) ? -EFAULT : 0;
+ }
+
+diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
+index 55802e97f906..d7070d18db20 100644
+--- a/net/netfilter/x_tables.c
++++ b/net/netfilter/x_tables.c
+@@ -39,7 +39,6 @@ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Harald Welte <laforge@netfilter.org>");
+ MODULE_DESCRIPTION("{ip,ip6,arp,eb}_tables backend module");
+
+-#define SMP_ALIGN(x) (((x) + SMP_CACHE_BYTES-1) & ~(SMP_CACHE_BYTES-1))
+ #define XT_PCPU_BLOCK_SIZE 4096
+
+ struct compat_delta {
+@@ -210,6 +209,9 @@ xt_request_find_match(uint8_t nfproto, const char *name, uint8_t revision)
+ {
+ struct xt_match *match;
+
++ if (strnlen(name, XT_EXTENSION_MAXNAMELEN) == XT_EXTENSION_MAXNAMELEN)
++ return ERR_PTR(-EINVAL);
++
+ match = xt_find_match(nfproto, name, revision);
+ if (IS_ERR(match)) {
+ request_module("%st_%s", xt_prefix[nfproto], name);
+@@ -252,6 +254,9 @@ struct xt_target *xt_request_find_target(u8 af, const char *name, u8 revision)
+ {
+ struct xt_target *target;
+
++ if (strnlen(name, XT_EXTENSION_MAXNAMELEN) == XT_EXTENSION_MAXNAMELEN)
++ return ERR_PTR(-EINVAL);
++
+ target = xt_find_target(af, name, revision);
+ if (IS_ERR(target)) {
+ request_module("%st_%s", xt_prefix[af], name);
+@@ -1000,7 +1005,7 @@ struct xt_table_info *xt_alloc_table_info(unsigned int size)
+ return NULL;
+
+ /* Pedantry: prevent them from hitting BUG() in vmalloc.c --RR */
+- if ((SMP_ALIGN(size) >> PAGE_SHIFT) + 2 > totalram_pages)
++ if ((size >> PAGE_SHIFT) + 2 > totalram_pages)
+ return NULL;
+
+ info = kvmalloc(sz, GFP_KERNEL);
+diff --git a/net/netfilter/xt_RATEEST.c b/net/netfilter/xt_RATEEST.c
+index 498b54fd04d7..141c295191f6 100644
+--- a/net/netfilter/xt_RATEEST.c
++++ b/net/netfilter/xt_RATEEST.c
+@@ -39,23 +39,31 @@ static void xt_rateest_hash_insert(struct xt_rateest *est)
+ hlist_add_head(&est->list, &rateest_hash[h]);
+ }
+
+-struct xt_rateest *xt_rateest_lookup(const char *name)
++static struct xt_rateest *__xt_rateest_lookup(const char *name)
+ {
+ struct xt_rateest *est;
+ unsigned int h;
+
+ h = xt_rateest_hash(name);
+- mutex_lock(&xt_rateest_mutex);
+ hlist_for_each_entry(est, &rateest_hash[h], list) {
+ if (strcmp(est->name, name) == 0) {
+ est->refcnt++;
+- mutex_unlock(&xt_rateest_mutex);
+ return est;
+ }
+ }
+- mutex_unlock(&xt_rateest_mutex);
++
+ return NULL;
+ }
++
++struct xt_rateest *xt_rateest_lookup(const char *name)
++{
++ struct xt_rateest *est;
++
++ mutex_lock(&xt_rateest_mutex);
++ est = __xt_rateest_lookup(name);
++ mutex_unlock(&xt_rateest_mutex);
++ return est;
++}
+ EXPORT_SYMBOL_GPL(xt_rateest_lookup);
+
+ void xt_rateest_put(struct xt_rateest *est)
+@@ -100,8 +108,10 @@ static int xt_rateest_tg_checkentry(const struct xt_tgchk_param *par)
+
+ net_get_random_once(&jhash_rnd, sizeof(jhash_rnd));
+
+- est = xt_rateest_lookup(info->name);
++ mutex_lock(&xt_rateest_mutex);
++ est = __xt_rateest_lookup(info->name);
+ if (est) {
++ mutex_unlock(&xt_rateest_mutex);
+ /*
+ * If estimator parameters are specified, they must match the
+ * existing estimator.
+@@ -139,11 +149,13 @@ static int xt_rateest_tg_checkentry(const struct xt_tgchk_param *par)
+
+ info->est = est;
+ xt_rateest_hash_insert(est);
++ mutex_unlock(&xt_rateest_mutex);
+ return 0;
+
+ err2:
+ kfree(est);
+ err1:
++ mutex_unlock(&xt_rateest_mutex);
+ return ret;
+ }
+
+diff --git a/net/netfilter/xt_cgroup.c b/net/netfilter/xt_cgroup.c
+index 1db1ce59079f..891f4e7e8ea7 100644
+--- a/net/netfilter/xt_cgroup.c
++++ b/net/netfilter/xt_cgroup.c
+@@ -52,6 +52,7 @@ static int cgroup_mt_check_v1(const struct xt_mtchk_param *par)
+ return -EINVAL;
+ }
+
++ info->priv = NULL;
+ if (info->has_path) {
+ cgrp = cgroup_get_from_path(info->path);
+ if (IS_ERR(cgrp)) {
+diff --git a/net/rds/connection.c b/net/rds/connection.c
+index 7ee2d5d68b78..9efc82c665b5 100644
+--- a/net/rds/connection.c
++++ b/net/rds/connection.c
+@@ -366,6 +366,8 @@ void rds_conn_shutdown(struct rds_conn_path *cp)
+ * to the conn hash, so we never trigger a reconnect on this
+ * conn - the reconnect is always triggered by the active peer. */
+ cancel_delayed_work_sync(&cp->cp_conn_w);
++ if (conn->c_destroy_in_prog)
++ return;
+ rcu_read_lock();
+ if (!hlist_unhashed(&conn->c_hash_node)) {
+ rcu_read_unlock();
+@@ -445,7 +447,6 @@ void rds_conn_destroy(struct rds_connection *conn)
+ */
+ rds_cong_remove_conn(conn);
+
+- put_net(conn->c_net);
+ kfree(conn->c_path);
+ kmem_cache_free(rds_conn_slab, conn);
+
+diff --git a/net/rds/rds.h b/net/rds/rds.h
+index c349c71babff..d09f6c1facb4 100644
+--- a/net/rds/rds.h
++++ b/net/rds/rds.h
+@@ -150,7 +150,7 @@ struct rds_connection {
+
+ /* Protocol version */
+ unsigned int c_version;
+- struct net *c_net;
++ possible_net_t c_net;
+
+ struct list_head c_map_item;
+ unsigned long c_map_queued;
+@@ -165,13 +165,13 @@ struct rds_connection {
+ static inline
+ struct net *rds_conn_net(struct rds_connection *conn)
+ {
+- return conn->c_net;
++ return read_pnet(&conn->c_net);
+ }
+
+ static inline
+ void rds_conn_net_set(struct rds_connection *conn, struct net *net)
+ {
+- conn->c_net = get_net(net);
++ write_pnet(&conn->c_net, net);
+ }
+
+ #define RDS_FLAG_CONG_BITMAP 0x01
+diff --git a/net/rds/tcp.c b/net/rds/tcp.c
+index ab7356e0ba83..4df21e47d2ab 100644
+--- a/net/rds/tcp.c
++++ b/net/rds/tcp.c
+@@ -307,7 +307,8 @@ static void rds_tcp_conn_free(void *arg)
+ rdsdebug("freeing tc %p\n", tc);
+
+ spin_lock_irqsave(&rds_tcp_conn_lock, flags);
+- list_del(&tc->t_tcp_node);
++ if (!tc->t_tcp_node_detached)
++ list_del(&tc->t_tcp_node);
+ spin_unlock_irqrestore(&rds_tcp_conn_lock, flags);
+
+ kmem_cache_free(rds_tcp_conn_slab, tc);
+@@ -528,12 +529,16 @@ static void rds_tcp_kill_sock(struct net *net)
+ rds_tcp_listen_stop(lsock, &rtn->rds_tcp_accept_w);
+ spin_lock_irq(&rds_tcp_conn_lock);
+ list_for_each_entry_safe(tc, _tc, &rds_tcp_conn_list, t_tcp_node) {
+- struct net *c_net = tc->t_cpath->cp_conn->c_net;
++ struct net *c_net = read_pnet(&tc->t_cpath->cp_conn->c_net);
+
+ if (net != c_net || !tc->t_sock)
+ continue;
+- if (!list_has_conn(&tmp_list, tc->t_cpath->cp_conn))
++ if (!list_has_conn(&tmp_list, tc->t_cpath->cp_conn)) {
+ list_move_tail(&tc->t_tcp_node, &tmp_list);
++ } else {
++ list_del(&tc->t_tcp_node);
++ tc->t_tcp_node_detached = true;
++ }
+ }
+ spin_unlock_irq(&rds_tcp_conn_lock);
+ list_for_each_entry_safe(tc, _tc, &tmp_list, t_tcp_node) {
+@@ -587,7 +592,7 @@ static void rds_tcp_sysctl_reset(struct net *net)
+
+ spin_lock_irq(&rds_tcp_conn_lock);
+ list_for_each_entry_safe(tc, _tc, &rds_tcp_conn_list, t_tcp_node) {
+- struct net *c_net = tc->t_cpath->cp_conn->c_net;
++ struct net *c_net = read_pnet(&tc->t_cpath->cp_conn->c_net);
+
+ if (net != c_net || !tc->t_sock)
+ continue;
+diff --git a/net/rds/tcp.h b/net/rds/tcp.h
+index 864ca7d8f019..c6fa080e9b6d 100644
+--- a/net/rds/tcp.h
++++ b/net/rds/tcp.h
+@@ -12,6 +12,7 @@ struct rds_tcp_incoming {
+ struct rds_tcp_connection {
+
+ struct list_head t_tcp_node;
++ bool t_tcp_node_detached;
+ struct rds_conn_path *t_cpath;
+ /* t_conn_path_lock synchronizes the connection establishment between
+ * rds_tcp_accept_one and rds_tcp_conn_path_connect
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index 33cfe5d3d6cb..8900ea5cbabf 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -867,6 +867,9 @@ int security_bounded_transition(u32 old_sid, u32 new_sid)
+ int index;
+ int rc;
+
++ if (!ss_initialized)
++ return 0;
++
+ read_lock(&policy_rwlock);
+
+ rc = -EINVAL;
+@@ -1413,27 +1416,25 @@ static int security_context_to_sid_core(const char *scontext, u32 scontext_len,
+ if (!scontext_len)
+ return -EINVAL;
+
++ /* Copy the string to allow changes and ensure a NUL terminator */
++ scontext2 = kmemdup_nul(scontext, scontext_len, gfp_flags);
++ if (!scontext2)
++ return -ENOMEM;
++
+ if (!ss_initialized) {
+ int i;
+
+ for (i = 1; i < SECINITSID_NUM; i++) {
+- if (!strcmp(initial_sid_to_string[i], scontext)) {
++ if (!strcmp(initial_sid_to_string[i], scontext2)) {
+ *sid = i;
+- return 0;
++ goto out;
+ }
+ }
+ *sid = SECINITSID_KERNEL;
+- return 0;
++ goto out;
+ }
+ *sid = SECSID_NULL;
+
+- /* Copy the string so that we can modify the copy as we parse it. */
+- scontext2 = kmalloc(scontext_len + 1, gfp_flags);
+- if (!scontext2)
+- return -ENOMEM;
+- memcpy(scontext2, scontext, scontext_len);
+- scontext2[scontext_len] = 0;
+-
+ if (force) {
+ /* Save another copy for storing in uninterpreted form */
+ rc = -ENOMEM;
+diff --git a/sound/soc/ux500/mop500.c b/sound/soc/ux500/mop500.c
+index 070a6880980e..c60a57797640 100644
+--- a/sound/soc/ux500/mop500.c
++++ b/sound/soc/ux500/mop500.c
+@@ -163,3 +163,7 @@ static struct platform_driver snd_soc_mop500_driver = {
+ };
+
+ module_platform_driver(snd_soc_mop500_driver);
++
++MODULE_LICENSE("GPL v2");
++MODULE_DESCRIPTION("ASoC MOP500 board driver");
++MODULE_AUTHOR("Ola Lilja");
+diff --git a/sound/soc/ux500/ux500_pcm.c b/sound/soc/ux500/ux500_pcm.c
+index f12c01dddc8d..d35ba7700f46 100644
+--- a/sound/soc/ux500/ux500_pcm.c
++++ b/sound/soc/ux500/ux500_pcm.c
+@@ -165,3 +165,8 @@ int ux500_pcm_unregister_platform(struct platform_device *pdev)
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(ux500_pcm_unregister_platform);
++
++MODULE_AUTHOR("Ola Lilja");
++MODULE_AUTHOR("Roger Nilsson");
++MODULE_DESCRIPTION("ASoC UX500 driver");
++MODULE_LICENSE("GPL v2");
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-02-26 14:18 Alice Ferrazzi
From: Alice Ferrazzi @ 2018-02-26 14:18 UTC (permalink / raw
To: gentoo-commits
commit: ed41d637e2edb6303fe73f5d8be583a5d93046df
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Mon Feb 26 14:13:45 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Mon Feb 26 14:13:45 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ed41d637
dell-laptop: Allocate buffer on heap rather than globally
0000_README | 4 +
...ocate_buffer_on_heap_rather_than_globally.patch | 489 +++++++++++++++++++++
2 files changed, 493 insertions(+)
diff --git a/0000_README b/0000_README
index 828cfeb..af0f948 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch: 2900_dev-root-proc-mount-fix.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=438380
Desc: Ensure that /dev/root doesn't appear in /proc/mounts when bootint without an initramfs.
+Patch: 2901_allocate_buffer_on_heap_rather_than_globally.patch
+From: https://patchwork.kernel.org/patch/10194287/
+Desc: Patchwork [v2] platform/x86: dell-laptop: Allocate buffer on heap rather than globally
+
Patch: 4200_fbcondecor.patch
From: http://www.mepiscommunity.org/fbcondecor
Desc: Bootsplash ported by Conrad Kostecki. (Bug #637434)
diff --git a/2901_allocate_buffer_on_heap_rather_than_globally.patch b/2901_allocate_buffer_on_heap_rather_than_globally.patch
new file mode 100644
index 0000000..dd2f7ad
--- /dev/null
+++ b/2901_allocate_buffer_on_heap_rather_than_globally.patch
@@ -0,0 +1,489 @@
+From patchwork Wed Jan 31 17:47:35 2018
+Content-Type: text/plain; charset="utf-8"
+MIME-Version: 1.0
+Content-Transfer-Encoding: 8bit
+Subject: [v2] platform/x86: dell-laptop: Allocate buffer on heap rather than
+ globally
+From: Mario Limonciello <mario.limonciello@dell.com>
+X-Patchwork-Id: 10194287
+Message-Id: <1517420855-19374-1-git-send-email-mario.limonciello@dell.com>
+To: dvhart@infradead.org, Andy Shevchenko <andy.shevchenko@gmail.com>
+Cc: pali.rohar@gmail.com, LKML <linux-kernel@vger.kernel.org>,
+ platform-driver-x86@vger.kernel.org,
+ Mario Limonciello <mario.limonciello@dell.com>
+Date: Wed, 31 Jan 2018 11:47:35 -0600
+
+There is no longer a need for the buffer to be defined in
+first 4GB physical address space.
+
+Furthermore there may be race conditions with multiple different functions
+working on a module wide buffer causing incorrect results.
+
+Fixes: 549b4930f057658dc50d8010e66219233119a4d8
+Suggested-by: Pali Rohar <pali.rohar@gmail.com>
+Signed-off-by: Mario Limonciello <mario.limonciello@dell.com>
+Reviewed-by: Pali Rohár <pali.rohar@gmail.com>
+---
+ drivers/platform/x86/dell-laptop.c | 188 ++++++++++++++++++++-----------------
+ 1 file changed, 103 insertions(+), 85 deletions(-)
+
+diff --git a/drivers/platform/x86/dell-laptop.c b/drivers/platform/x86/dell-laptop.c
+index fc2dfc8..a7b1419 100644
+--- a/drivers/platform/x86/dell-laptop.c
++++ b/drivers/platform/x86/dell-laptop.c
+@@ -78,7 +78,6 @@ static struct platform_driver platform_driver = {
+ }
+ };
+
+-static struct calling_interface_buffer *buffer;
+ static struct platform_device *platform_device;
+ static struct backlight_device *dell_backlight_device;
+ static struct rfkill *wifi_rfkill;
+@@ -322,7 +321,8 @@ static const struct dmi_system_id dell_quirks[] __initconst = {
+ { }
+ };
+
+-static void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
++static void dell_fill_request(struct calling_interface_buffer *buffer,
++ u32 arg0, u32 arg1, u32 arg2, u32 arg3)
+ {
+ memset(buffer, 0, sizeof(struct calling_interface_buffer));
+ buffer->input[0] = arg0;
+@@ -331,7 +331,8 @@ static void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
+ buffer->input[3] = arg3;
+ }
+
+-static int dell_send_request(u16 class, u16 select)
++static int dell_send_request(struct calling_interface_buffer *buffer,
++ u16 class, u16 select)
+ {
+ int ret;
+
+@@ -468,21 +469,22 @@ static int dell_rfkill_set(void *data, bool blocked)
+ int disable = blocked ? 1 : 0;
+ unsigned long radio = (unsigned long)data;
+ int hwswitch_bit = (unsigned long)data - 1;
++ struct calling_interface_buffer buffer;
+ int hwswitch;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (ret)
+ return ret;
+- status = buffer->output[1];
++ status = buffer.output[1];
+
+- dell_set_arguments(0x2, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0x2, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (ret)
+ return ret;
+- hwswitch = buffer->output[1];
++ hwswitch = buffer.output[1];
+
+ /* If the hardware switch controls this radio, and the hardware
+ switch is disabled, always disable the radio */
+@@ -490,8 +492,8 @@ static int dell_rfkill_set(void *data, bool blocked)
+ (status & BIT(0)) && !(status & BIT(16)))
+ disable = 1;
+
+- dell_set_arguments(1 | (radio<<8) | (disable << 16), 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 1 | (radio<<8) | (disable << 16), 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ return ret;
+ }
+
+@@ -500,9 +502,11 @@ static void dell_rfkill_update_sw_state(struct rfkill *rfkill, int radio,
+ {
+ if (status & BIT(0)) {
+ /* Has hw-switch, sync sw_state to BIOS */
++ struct calling_interface_buffer buffer;
+ int block = rfkill_blocked(rfkill);
+- dell_set_arguments(1 | (radio << 8) | (block << 16), 0, 0, 0);
+- dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer,
++ 1 | (radio << 8) | (block << 16), 0, 0, 0);
++ dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ } else {
+ /* No hw-switch, sync BIOS state to sw_state */
+ rfkill_set_sw_state(rfkill, !!(status & BIT(radio + 16)));
+@@ -519,21 +523,22 @@ static void dell_rfkill_update_hw_state(struct rfkill *rfkill, int radio,
+ static void dell_rfkill_query(struct rfkill *rfkill, void *data)
+ {
+ int radio = ((unsigned long)data & 0xF);
++ struct calling_interface_buffer buffer;
+ int hwswitch;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- status = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ status = buffer.output[1];
+
+ if (ret != 0 || !(status & BIT(0))) {
+ return;
+ }
+
+- dell_set_arguments(0, 0x2, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- hwswitch = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0x2, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ hwswitch = buffer.output[1];
+
+ if (ret != 0)
+ return;
+@@ -550,22 +555,23 @@ static struct dentry *dell_laptop_dir;
+
+ static int dell_debugfs_show(struct seq_file *s, void *data)
+ {
++ struct calling_interface_buffer buffer;
+ int hwswitch_state;
+ int hwswitch_ret;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (ret)
+ return ret;
+- status = buffer->output[1];
++ status = buffer.output[1];
+
+- dell_set_arguments(0, 0x2, 0, 0);
+- hwswitch_ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0x2, 0, 0);
++ hwswitch_ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (hwswitch_ret)
+ return hwswitch_ret;
+- hwswitch_state = buffer->output[1];
++ hwswitch_state = buffer.output[1];
+
+ seq_printf(s, "return:\t%d\n", ret);
+ seq_printf(s, "status:\t0x%X\n", status);
+@@ -646,22 +652,23 @@ static const struct file_operations dell_debugfs_fops = {
+
+ static void dell_update_rfkill(struct work_struct *ignored)
+ {
++ struct calling_interface_buffer buffer;
+ int hwswitch = 0;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- status = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ status = buffer.output[1];
+
+ if (ret != 0)
+ return;
+
+- dell_set_arguments(0, 0x2, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0x2, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+
+ if (ret == 0 && (status & BIT(0)))
+- hwswitch = buffer->output[1];
++ hwswitch = buffer.output[1];
+
+ if (wifi_rfkill) {
+ dell_rfkill_update_hw_state(wifi_rfkill, 1, status, hwswitch);
+@@ -719,6 +726,7 @@ static struct notifier_block dell_laptop_rbtn_notifier = {
+
+ static int __init dell_setup_rfkill(void)
+ {
++ struct calling_interface_buffer buffer;
+ int status, ret, whitelisted;
+ const char *product;
+
+@@ -734,9 +742,9 @@ static int __init dell_setup_rfkill(void)
+ if (!force_rfkill && !whitelisted)
+ return 0;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- status = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ status = buffer.output[1];
+
+ /* dell wireless info smbios call is not supported */
+ if (ret != 0)
+@@ -889,6 +897,7 @@ static void dell_cleanup_rfkill(void)
+
+ static int dell_send_intensity(struct backlight_device *bd)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+
+@@ -896,17 +905,21 @@ static int dell_send_intensity(struct backlight_device *bd)
+ if (!token)
+ return -ENODEV;
+
+- dell_set_arguments(token->location, bd->props.brightness, 0, 0);
++ dell_fill_request(&buffer,
++ token->location, bd->props.brightness, 0, 0);
+ if (power_supply_is_system_supplied() > 0)
+- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
+ else
+- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
+
+ return ret;
+ }
+
+ static int dell_get_intensity(struct backlight_device *bd)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+
+@@ -914,14 +927,17 @@ static int dell_get_intensity(struct backlight_device *bd)
+ if (!token)
+ return -ENODEV;
+
+- dell_set_arguments(token->location, 0, 0, 0);
++ dell_fill_request(&buffer, token->location, 0, 0, 0);
+ if (power_supply_is_system_supplied() > 0)
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
+ else
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
+
+ if (ret == 0)
+- ret = buffer->output[1];
++ ret = buffer.output[1];
++
+ return ret;
+ }
+
+@@ -1186,31 +1202,33 @@ static enum led_brightness kbd_led_level;
+
+ static int kbd_get_info(struct kbd_info *info)
+ {
++ struct calling_interface_buffer buffer;
+ u8 units;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
+ if (ret)
+ return ret;
+
+- info->modes = buffer->output[1] & 0xFFFF;
+- info->type = (buffer->output[1] >> 24) & 0xFF;
+- info->triggers = buffer->output[2] & 0xFF;
+- units = (buffer->output[2] >> 8) & 0xFF;
+- info->levels = (buffer->output[2] >> 16) & 0xFF;
++ info->modes = buffer.output[1] & 0xFFFF;
++ info->type = (buffer.output[1] >> 24) & 0xFF;
++ info->triggers = buffer.output[2] & 0xFF;
++ units = (buffer.output[2] >> 8) & 0xFF;
++ info->levels = (buffer.output[2] >> 16) & 0xFF;
+
+ if (quirks && quirks->kbd_led_levels_off_1 && info->levels)
+ info->levels--;
+
+ if (units & BIT(0))
+- info->seconds = (buffer->output[3] >> 0) & 0xFF;
++ info->seconds = (buffer.output[3] >> 0) & 0xFF;
+ if (units & BIT(1))
+- info->minutes = (buffer->output[3] >> 8) & 0xFF;
++ info->minutes = (buffer.output[3] >> 8) & 0xFF;
+ if (units & BIT(2))
+- info->hours = (buffer->output[3] >> 16) & 0xFF;
++ info->hours = (buffer.output[3] >> 16) & 0xFF;
+ if (units & BIT(3))
+- info->days = (buffer->output[3] >> 24) & 0xFF;
++ info->days = (buffer.output[3] >> 24) & 0xFF;
+
+ return ret;
+ }
+@@ -1270,31 +1288,34 @@ static int kbd_set_level(struct kbd_state *state, u8 level)
+
+ static int kbd_get_state(struct kbd_state *state)
+ {
++ struct calling_interface_buffer buffer;
+ int ret;
+
+- dell_set_arguments(0x1, 0, 0, 0);
+- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
+ if (ret)
+ return ret;
+
+- state->mode_bit = ffs(buffer->output[1] & 0xFFFF);
++ state->mode_bit = ffs(buffer.output[1] & 0xFFFF);
+ if (state->mode_bit != 0)
+ state->mode_bit--;
+
+- state->triggers = (buffer->output[1] >> 16) & 0xFF;
+- state->timeout_value = (buffer->output[1] >> 24) & 0x3F;
+- state->timeout_unit = (buffer->output[1] >> 30) & 0x3;
+- state->als_setting = buffer->output[2] & 0xFF;
+- state->als_value = (buffer->output[2] >> 8) & 0xFF;
+- state->level = (buffer->output[2] >> 16) & 0xFF;
+- state->timeout_value_ac = (buffer->output[2] >> 24) & 0x3F;
+- state->timeout_unit_ac = (buffer->output[2] >> 30) & 0x3;
++ state->triggers = (buffer.output[1] >> 16) & 0xFF;
++ state->timeout_value = (buffer.output[1] >> 24) & 0x3F;
++ state->timeout_unit = (buffer.output[1] >> 30) & 0x3;
++ state->als_setting = buffer.output[2] & 0xFF;
++ state->als_value = (buffer.output[2] >> 8) & 0xFF;
++ state->level = (buffer.output[2] >> 16) & 0xFF;
++ state->timeout_value_ac = (buffer.output[2] >> 24) & 0x3F;
++ state->timeout_unit_ac = (buffer.output[2] >> 30) & 0x3;
+
+ return ret;
+ }
+
+ static int kbd_set_state(struct kbd_state *state)
+ {
++ struct calling_interface_buffer buffer;
+ int ret;
+ u32 input1;
+ u32 input2;
+@@ -1307,8 +1328,9 @@ static int kbd_set_state(struct kbd_state *state)
+ input2 |= (state->level & 0xFF) << 16;
+ input2 |= (state->timeout_value_ac & 0x3F) << 24;
+ input2 |= (state->timeout_unit_ac & 0x3) << 30;
+- dell_set_arguments(0x2, input1, input2, 0);
+- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
++ dell_fill_request(&buffer, 0x2, input1, input2, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
+
+ return ret;
+ }
+@@ -1335,6 +1357,7 @@ static int kbd_set_state_safe(struct kbd_state *state, struct kbd_state *old)
+
+ static int kbd_set_token_bit(u8 bit)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+
+@@ -1345,14 +1368,15 @@ static int kbd_set_token_bit(u8 bit)
+ if (!token)
+ return -EINVAL;
+
+- dell_set_arguments(token->location, token->value, 0, 0);
+- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
++ dell_fill_request(&buffer, token->location, token->value, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
+
+ return ret;
+ }
+
+ static int kbd_get_token_bit(u8 bit)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+ int val;
+@@ -1364,9 +1388,9 @@ static int kbd_get_token_bit(u8 bit)
+ if (!token)
+ return -EINVAL;
+
+- dell_set_arguments(token->location, 0, 0, 0);
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_STD);
+- val = buffer->output[1];
++ dell_fill_request(&buffer, token->location, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_TOKEN_READ, SELECT_TOKEN_STD);
++ val = buffer.output[1];
+
+ if (ret)
+ return ret;
+@@ -2102,6 +2126,7 @@ static struct notifier_block dell_laptop_notifier = {
+
+ int dell_micmute_led_set(int state)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+
+ if (state == 0)
+@@ -2114,8 +2139,8 @@ int dell_micmute_led_set(int state)
+ if (!token)
+ return -ENODEV;
+
+- dell_set_arguments(token->location, token->value, 0, 0);
+- dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
++ dell_fill_request(&buffer, token->location, token->value, 0, 0);
++ dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
+
+ return state;
+ }
+@@ -2146,13 +2171,6 @@ static int __init dell_init(void)
+ if (ret)
+ goto fail_platform_device2;
+
+- buffer = kzalloc(sizeof(struct calling_interface_buffer), GFP_KERNEL);
+- if (!buffer) {
+- ret = -ENOMEM;
+- goto fail_buffer;
+- }
+-
+-
+ ret = dell_setup_rfkill();
+
+ if (ret) {
+@@ -2177,10 +2195,13 @@ static int __init dell_init(void)
+
+ token = dell_smbios_find_token(BRIGHTNESS_TOKEN);
+ if (token) {
+- dell_set_arguments(token->location, 0, 0, 0);
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
++ struct calling_interface_buffer buffer;
++
++ dell_fill_request(&buffer, token->location, 0, 0, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
+ if (ret)
+- max_intensity = buffer->output[3];
++ max_intensity = buffer.output[3];
+ }
+
+ if (max_intensity) {
+@@ -2214,8 +2235,6 @@ static int __init dell_init(void)
+ fail_get_brightness:
+ backlight_device_unregister(dell_backlight_device);
+ fail_backlight:
+- kfree(buffer);
+-fail_buffer:
+ dell_cleanup_rfkill();
+ fail_rfkill:
+ platform_device_del(platform_device);
+@@ -2235,7 +2254,6 @@ static void __exit dell_exit(void)
+ touchpad_led_exit();
+ kbd_led_exit();
+ backlight_device_unregister(dell_backlight_device);
+- kfree(buffer);
+ dell_cleanup_rfkill();
+ if (platform_device) {
+ platform_device_unregister(platform_device);
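The hunks above convert dell-laptop's SMBIOS calls from one globally allocated `buffer` (kzalloc'd in `dell_init`, freed in `dell_exit`) to a per-call buffer that each function fills via `dell_fill_request()` and passes to `dell_send_request()`. A minimal host-compilable sketch of that calling pattern is below; `dell_send_request` here is a hypothetical stand-in (it just reports the requested token location back in `output[1]`), not the real SMBIOS call, and the struct layout is simplified from the kernel's:

```c
#include <string.h>

/* Simplified shape of the kernel's calling_interface_buffer:
 * four input words and four output words. */
struct calling_interface_buffer {
	unsigned int input[4];
	unsigned int output[4];
};

/* After the patch, each caller fills its own buffer instead of
 * writing into a single shared global one. */
static void dell_fill_request(struct calling_interface_buffer *buf,
			      unsigned int a1, unsigned int a2,
			      unsigned int a3, unsigned int a4)
{
	memset(buf, 0, sizeof(*buf));
	buf->input[0] = a1;
	buf->input[1] = a2;
	buf->input[2] = a3;
	buf->input[3] = a4;
}

/* Hypothetical stand-in for the firmware call: echo the requested
 * token location back in output[1] and report success. */
static int dell_send_request(struct calling_interface_buffer *buf,
			     int class, int select)
{
	(void)class;
	(void)select;
	buf->output[1] = buf->input[0];
	return 0;
}

/* The caller pattern used throughout the patch: a stack-allocated
 * buffer scoped to one request, so no global buffer (and no lock
 * protecting it) is needed. */
int read_token_value(unsigned int location)
{
	struct calling_interface_buffer buffer;	/* per-call, on the stack */
	int ret;

	dell_fill_request(&buffer, location, 0, 0, 0);
	ret = dell_send_request(&buffer, 0 /* CLASS_TOKEN_READ */, 0);
	if (ret)
		return ret;
	return (int)buffer.output[1];
}
```

Because the buffer's lifetime is now one call, the `kzalloc`/`kfree` pair and the `fail_buffer` error label removed in the `dell_init`/`dell_exit` hunks above become unnecessary.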
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-02-26 14:18 Alice Ferrazzi
From: Alice Ferrazzi @ 2018-02-26 14:18 UTC
To: gentoo-commits
commit: f98193ddca07dc67810a96082be4e2a6d8026a66
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Mon Feb 26 14:17:39 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Mon Feb 26 14:17:39 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f98193dd
ia64 fix ptrace
0000_README | 6 +++-
1700_ia64_fix_ptrace.patch | 87 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 92 insertions(+), 1 deletion(-)
diff --git a/0000_README b/0000_README
index af0f948..8e1b91f 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
+Patch: 1700_ia64_fix_ptrace.patch
+From: https://patchwork.kernel.org/patch/10198159/
+Desc: ia64: fix ptrace(PTRACE_GETREGS) (unbreaks strace, gdb).
+
Patch: 2300_enable-poweroff-on-Mac-Pro-11.patch
From: http://kernel.ubuntu.com/git/ubuntu/ubuntu-xenial.git/patch/drivers/pci/quirks.c?id=5080ff61a438f3dd80b88b423e1a20791d8a774c
Desc: Workaround to enable poweroff on Mac Pro 11. See bug #601964.
@@ -93,7 +97,7 @@ Desc: Ensure that /dev/root doesn't appear in /proc/mounts when bootint withou
Patch: 2901_allocate_buffer_on_heap_rather_than_globally.patch
From: https://patchwork.kernel.org/patch/10194287/
-Desc: Patchwork [v2] platform/x86: dell-laptop: Allocate buffer on heap rather than globally
+Desc: Patchwork [v2] platform/x86: dell-laptop: Allocate buffer on heap rather than globally.
Patch: 4200_fbcondecor.patch
From: http://www.mepiscommunity.org/fbcondecor
diff --git a/1700_ia64_fix_ptrace.patch b/1700_ia64_fix_ptrace.patch
new file mode 100644
index 0000000..6173b05
--- /dev/null
+++ b/1700_ia64_fix_ptrace.patch
@@ -0,0 +1,87 @@
+From patchwork Fri Feb 2 22:12:24 2018
+Content-Type: text/plain; charset="utf-8"
+MIME-Version: 1.0
+Content-Transfer-Encoding: 8bit
+Subject: ia64: fix ptrace(PTRACE_GETREGS) (unbreaks strace, gdb)
+From: Sergei Trofimovich <slyfox@gentoo.org>
+X-Patchwork-Id: 10198159
+Message-Id: <20180202221224.16597-1-slyfox@gentoo.org>
+To: Tony Luck <tony.luck@intel.com>, Fenghua Yu <fenghua.yu@intel.com>,
+ linux-ia64@vger.kernel.org, linux-kernel@vger.kernel.org
+Cc: Sergei Trofimovich <slyfox@gentoo.org>
+Date: Fri, 2 Feb 2018 22:12:24 +0000
+
+The strace breakage looks like that:
+./strace: get_regs: get_regs_error: Input/output error
+
+It happens because ia64 needs to load unwind tables
+to read certain registers. Unwind tables fail to load
+due to GCC quirk on the following code:
+
+ extern char __end_unwind[];
+ const struct unw_table_entry *end = (struct unw_table_entry *)table_end;
+ table->end = segment_base + end[-1].end_offset;
+
+GCC does not generate correct code for this single memory
+reference after constant propagation (see https://gcc.gnu.org/PR84184).
+Two triggers are required for bad code generation:
+- '__end_unwind' has alignment lower (char), than
+ 'struct unw_table_entry' (8).
+- symbol offset is negative.
+
+This commit workarounds it by fixing alignment of '__end_unwind'.
+While at it use hidden symbols to generate shorter gp-relative
+relocations.
+
+CC: Tony Luck <tony.luck@intel.com>
+CC: Fenghua Yu <fenghua.yu@intel.com>
+CC: linux-ia64@vger.kernel.org
+CC: linux-kernel@vger.kernel.org
+Bug: https://github.com/strace/strace/issues/33
+Bug: https://gcc.gnu.org/PR84184
+Reported-by: Émeric Maschino <emeric.maschino@gmail.com>
+Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
+Tested-by: stanton_arch@mail.com
+---
+ arch/ia64/include/asm/sections.h | 1 -
+ arch/ia64/kernel/unwind.c | 15 ++++++++++++++-
+ 2 files changed, 14 insertions(+), 2 deletions(-)
+
+diff --git a/arch/ia64/include/asm/sections.h b/arch/ia64/include/asm/sections.h
+index f3481408594e..0fc4f1757a44 100644
+--- a/arch/ia64/include/asm/sections.h
++++ b/arch/ia64/include/asm/sections.h
+@@ -24,7 +24,6 @@ extern char __start_gate_mckinley_e9_patchlist[], __end_gate_mckinley_e9_patchli
+ extern char __start_gate_vtop_patchlist[], __end_gate_vtop_patchlist[];
+ extern char __start_gate_fsyscall_patchlist[], __end_gate_fsyscall_patchlist[];
+ extern char __start_gate_brl_fsys_bubble_down_patchlist[], __end_gate_brl_fsys_bubble_down_patchlist[];
+-extern char __start_unwind[], __end_unwind[];
+ extern char __start_ivt_text[], __end_ivt_text[];
+
+ #undef dereference_function_descriptor
+diff --git a/arch/ia64/kernel/unwind.c b/arch/ia64/kernel/unwind.c
+index e04efa088902..025ba6700790 100644
+--- a/arch/ia64/kernel/unwind.c
++++ b/arch/ia64/kernel/unwind.c
+@@ -2243,7 +2243,20 @@ __initcall(create_gate_table);
+ void __init
+ unw_init (void)
+ {
+- extern char __gp[];
++ #define __ia64_hidden __attribute__((visibility("hidden")))
++ /*
++ * We use hidden symbols to generate more efficient code using
++ * gp-relative addressing.
++ */
++ extern char __gp[] __ia64_hidden;
++ /*
++ * Unwind tables need to have proper alignment as init_unwind_table()
++ * uses negative offsets against '__end_unwind'.
++ * See https://gcc.gnu.org/PR84184
++ */
++ extern const struct unw_table_entry __start_unwind[] __ia64_hidden;
++ extern const struct unw_table_entry __end_unwind[] __ia64_hidden;
++ #undef __ia64_hidden
+ extern void unw_hash_index_t_is_too_narrow (void);
+ long i, off;
+
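The ia64 commit above attributes the miscompilation (GCC PR84184) to the declared type of the section-end symbol: declared as `char []`, `__end_unwind` only promises 1-byte alignment, yet `init_unwind_table()` reads through it with a negative index as if it pointed one past an array of 8-byte-aligned `struct unw_table_entry`. A host-compilable sketch of the before/after declarations and the triggering access pattern follows; the struct layout and symbol names are simplified illustrations, not the kernel's exact definitions:

```c
#include <stdalign.h>

/* Simplified shape of an ia64 unwind table entry (8-byte members). */
struct unw_table_entry {
	unsigned long start_offset;
	unsigned long end_offset;
	unsigned long info_offset;
};

/* Before the fix: a char array, so the compiler may assume only
 * 1-byte alignment for the symbol itself. */
extern char __end_unwind_old[];

/* After the fix: declared with the element type, so the symbol
 * carries the struct's natural (8-byte) alignment. */
extern const struct unw_table_entry __end_unwind_new[];

/* The access pattern that tripped GCC PR84184: a negative index
 * off the section-end pointer to reach the final table entry. */
unsigned long last_end_offset(const struct unw_table_entry *table_end)
{
	const struct unw_table_entry *end = table_end;

	return end[-1].end_offset;	/* read the last entry's field */
}
```

On a correct compiler this works regardless of the extern declaration's type; the patch fixes the declaration's alignment so that GCC's constant propagation no longer generates bad code for the negative-offset load, and the `hidden` visibility additionally allows shorter gp-relative relocations.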
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-02-28 14:57 Alice Ferrazzi
From: Alice Ferrazzi @ 2018-02-28 14:57 UTC
To: gentoo-commits
commit: 006fd7a2fd975934fe667e0ab4c9a35d03ab9ce4
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 28 14:57:21 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Feb 28 14:57:21 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=006fd7a2
linux kernel 4.15.7
0000_README | 4 +
1006_linux-4.15.7.patch | 1984 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1988 insertions(+)
diff --git a/0000_README b/0000_README
index 8e1b91f..2454462 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch: 1005_linux-4.15.6.patch
From: http://www.kernel.org
Desc: Linux 4.15.6
+Patch: 1006_linux-4.15.7.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.7
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1006_linux-4.15.7.patch b/1006_linux-4.15.7.patch
new file mode 100644
index 0000000..b68c91a
--- /dev/null
+++ b/1006_linux-4.15.7.patch
@@ -0,0 +1,1984 @@
+diff --git a/Makefile b/Makefile
+index 51563c76bdf6..49f524444050 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 46dee071bab1..1bcf03b5cd04 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -197,9 +197,11 @@ static const struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
+ };
+
+ static const struct arm64_ftr_bits ftr_ctr[] = {
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RAO */
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_EXACT, 31, 1, 1), /* RES1 */
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 29, 1, 1), /* DIC */
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 28, 1, 1), /* IDC */
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_SAFE, 24, 4, 0), /* CWG */
+- ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 20, 4, 0), /* ERG */
++ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_HIGHER_SAFE, 20, 4, 0), /* ERG */
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, 16, 4, 1), /* DminLine */
+ /*
+ * Linux can handle differing I-cache policies. Userspace JITs will
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index 583fd8154695..d6ca5fccb229 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -221,8 +221,15 @@ void __show_regs(struct pt_regs *regs)
+
+ show_regs_print_info(KERN_DEFAULT);
+ print_pstate(regs);
+- print_symbol("pc : %s\n", regs->pc);
+- print_symbol("lr : %s\n", lr);
++
++ if (!user_mode(regs)) {
++ print_symbol("pc : %s\n", regs->pc);
++ print_symbol("lr : %s\n", lr);
++ } else {
++ printk("pc : %016llx\n", regs->pc);
++ printk("lr : %016llx\n", lr);
++ }
++
+ printk("sp : %016llx\n", sp);
+
+ i = top_reg;
+diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
+index 3d3588fcd1c7..c759d9ca0e5a 100644
+--- a/arch/arm64/kernel/traps.c
++++ b/arch/arm64/kernel/traps.c
+@@ -57,7 +57,7 @@ static const char *handler[]= {
+ "Error"
+ };
+
+-int show_unhandled_signals = 1;
++int show_unhandled_signals = 0;
+
+ static void dump_backtrace_entry(unsigned long where)
+ {
+@@ -526,14 +526,6 @@ asmlinkage long do_ni_syscall(struct pt_regs *regs)
+ }
+ #endif
+
+- if (show_unhandled_signals_ratelimited()) {
+- pr_info("%s[%d]: syscall %d\n", current->comm,
+- task_pid_nr(current), regs->syscallno);
+- dump_instr("", regs);
+- if (user_mode(regs))
+- __show_regs(regs);
+- }
+-
+ return sys_ni_syscall();
+ }
+
+diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
+index 248f2e7b24ab..a233975848cc 100644
+--- a/arch/arm64/mm/proc.S
++++ b/arch/arm64/mm/proc.S
+@@ -155,7 +155,7 @@ ENDPROC(cpu_do_switch_mm)
+
+ .macro __idmap_cpu_set_reserved_ttbr1, tmp1, tmp2
+ adrp \tmp1, empty_zero_page
+- msr ttbr1_el1, \tmp2
++ msr ttbr1_el1, \tmp1
+ isb
+ tlbi vmalle1
+ dsb nsh
+diff --git a/arch/microblaze/Makefile b/arch/microblaze/Makefile
+index 830ee7d42fa0..d269dd4b8279 100644
+--- a/arch/microblaze/Makefile
++++ b/arch/microblaze/Makefile
+@@ -36,16 +36,21 @@ endif
+ CPUFLAGS-$(CONFIG_XILINX_MICROBLAZE0_USE_DIV) += -mno-xl-soft-div
+ CPUFLAGS-$(CONFIG_XILINX_MICROBLAZE0_USE_BARREL) += -mxl-barrel-shift
+ CPUFLAGS-$(CONFIG_XILINX_MICROBLAZE0_USE_PCMP_INSTR) += -mxl-pattern-compare
+-CPUFLAGS-$(CONFIG_BIG_ENDIAN) += -mbig-endian
+-CPUFLAGS-$(CONFIG_LITTLE_ENDIAN) += -mlittle-endian
++
++ifdef CONFIG_CPU_BIG_ENDIAN
++KBUILD_CFLAGS += -mbig-endian
++KBUILD_AFLAGS += -mbig-endian
++LD += -EB
++else
++KBUILD_CFLAGS += -mlittle-endian
++KBUILD_AFLAGS += -mlittle-endian
++LD += -EL
++endif
+
+ CPUFLAGS-1 += $(call cc-option,-mcpu=v$(CPU_VER))
+
+ # r31 holds current when in kernel mode
+-KBUILD_CFLAGS += -ffixed-r31 $(CPUFLAGS-1) $(CPUFLAGS-2)
+-
+-LDFLAGS :=
+-LDFLAGS_vmlinux :=
++KBUILD_CFLAGS += -ffixed-r31 $(CPUFLAGS-y) $(CPUFLAGS-1) $(CPUFLAGS-2)
+
+ head-y := arch/microblaze/kernel/head.o
+ libs-y += arch/microblaze/lib/
+diff --git a/arch/mips/boot/Makefile b/arch/mips/boot/Makefile
+index 1bd5c4f00d19..c22da16d67b8 100644
+--- a/arch/mips/boot/Makefile
++++ b/arch/mips/boot/Makefile
+@@ -126,6 +126,7 @@ $(obj)/vmlinux.its.S: $(addprefix $(srctree)/arch/mips/$(PLATFORM)/,$(ITS_INPUTS
+
+ quiet_cmd_cpp_its_S = ITS $@
+ cmd_cpp_its_S = $(CPP) $(cpp_flags) -P -C -o $@ $< \
++ -D__ASSEMBLY__ \
+ -DKERNEL_NAME="\"Linux $(KERNELRELEASE)\"" \
+ -DVMLINUX_BINARY="\"$(3)\"" \
+ -DVMLINUX_COMPRESSION="\"$(2)\"" \
+diff --git a/arch/mips/include/asm/compat.h b/arch/mips/include/asm/compat.h
+index 49691331ada4..08ec0762ca50 100644
+--- a/arch/mips/include/asm/compat.h
++++ b/arch/mips/include/asm/compat.h
+@@ -86,7 +86,6 @@ struct compat_flock {
+ compat_off_t l_len;
+ s32 l_sysid;
+ compat_pid_t l_pid;
+- short __unused;
+ s32 pad[4];
+ };
+
+diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
+index 3cc471beb50b..bb6f7a2148d7 100644
+--- a/arch/x86/kernel/apic/vector.c
++++ b/arch/x86/kernel/apic/vector.c
+@@ -134,21 +134,40 @@ static void apic_update_vector(struct irq_data *irqd, unsigned int newvec,
+ {
+ struct apic_chip_data *apicd = apic_chip_data(irqd);
+ struct irq_desc *desc = irq_data_to_desc(irqd);
++ bool managed = irqd_affinity_is_managed(irqd);
+
+ lockdep_assert_held(&vector_lock);
+
+ trace_vector_update(irqd->irq, newvec, newcpu, apicd->vector,
+ apicd->cpu);
+
+- /* Setup the vector move, if required */
+- if (apicd->vector && cpu_online(apicd->cpu)) {
++ /*
++ * If there is no vector associated or if the associated vector is
++ * the shutdown vector, which is associated to make PCI/MSI
++ * shutdown mode work, then there is nothing to release. Clear out
++ * prev_vector for this and the offlined target case.
++ */
++ apicd->prev_vector = 0;
++ if (!apicd->vector || apicd->vector == MANAGED_IRQ_SHUTDOWN_VECTOR)
++ goto setnew;
++ /*
++ * If the target CPU of the previous vector is online, then mark
++ * the vector as move in progress and store it for cleanup when the
++ * first interrupt on the new vector arrives. If the target CPU is
++ * offline then the regular release mechanism via the cleanup
++ * vector is not possible and the vector can be immediately freed
++ * in the underlying matrix allocator.
++ */
++ if (cpu_online(apicd->cpu)) {
+ apicd->move_in_progress = true;
+ apicd->prev_vector = apicd->vector;
+ apicd->prev_cpu = apicd->cpu;
+ } else {
+- apicd->prev_vector = 0;
++ irq_matrix_free(vector_matrix, apicd->cpu, apicd->vector,
++ managed);
+ }
+
++setnew:
+ apicd->vector = newvec;
+ apicd->cpu = newcpu;
+ BUG_ON(!IS_ERR_OR_NULL(per_cpu(vector_irq, newcpu)[newvec]));
+diff --git a/arch/x86/oprofile/nmi_int.c b/arch/x86/oprofile/nmi_int.c
+index 174c59774cc9..a7a7677265b6 100644
+--- a/arch/x86/oprofile/nmi_int.c
++++ b/arch/x86/oprofile/nmi_int.c
+@@ -460,7 +460,7 @@ static int nmi_setup(void)
+ goto fail;
+
+ for_each_possible_cpu(cpu) {
+- if (!cpu)
++ if (!IS_ENABLED(CONFIG_SMP) || !cpu)
+ continue;
+
+ memcpy(per_cpu(cpu_msrs, cpu).counters,
+diff --git a/arch/xtensa/mm/init.c b/arch/xtensa/mm/init.c
+index 720fe4e8b497..8dad076661fc 100644
+--- a/arch/xtensa/mm/init.c
++++ b/arch/xtensa/mm/init.c
+@@ -79,19 +79,75 @@ void __init zones_init(void)
+ free_area_init_node(0, zones_size, ARCH_PFN_OFFSET, NULL);
+ }
+
++#ifdef CONFIG_HIGHMEM
++static void __init free_area_high(unsigned long pfn, unsigned long end)
++{
++ for (; pfn < end; pfn++)
++ free_highmem_page(pfn_to_page(pfn));
++}
++
++static void __init free_highpages(void)
++{
++ unsigned long max_low = max_low_pfn;
++ struct memblock_region *mem, *res;
++
++ reset_all_zones_managed_pages();
++ /* set highmem page free */
++ for_each_memblock(memory, mem) {
++ unsigned long start = memblock_region_memory_base_pfn(mem);
++ unsigned long end = memblock_region_memory_end_pfn(mem);
++
++ /* Ignore complete lowmem entries */
++ if (end <= max_low)
++ continue;
++
++ if (memblock_is_nomap(mem))
++ continue;
++
++ /* Truncate partial highmem entries */
++ if (start < max_low)
++ start = max_low;
++
++ /* Find and exclude any reserved regions */
++ for_each_memblock(reserved, res) {
++ unsigned long res_start, res_end;
++
++ res_start = memblock_region_reserved_base_pfn(res);
++ res_end = memblock_region_reserved_end_pfn(res);
++
++ if (res_end < start)
++ continue;
++ if (res_start < start)
++ res_start = start;
++ if (res_start > end)
++ res_start = end;
++ if (res_end > end)
++ res_end = end;
++ if (res_start != start)
++ free_area_high(start, res_start);
++ start = res_end;
++ if (start == end)
++ break;
++ }
++
++ /* And now free anything which remains */
++ if (start < end)
++ free_area_high(start, end);
++ }
++}
++#else
++static void __init free_highpages(void)
++{
++}
++#endif
++
+ /*
+ * Initialize memory pages.
+ */
+
+ void __init mem_init(void)
+ {
+-#ifdef CONFIG_HIGHMEM
+- unsigned long tmp;
+-
+- reset_all_zones_managed_pages();
+- for (tmp = max_low_pfn; tmp < max_pfn; tmp++)
+- free_highmem_page(pfn_to_page(tmp));
+-#endif
++ free_highpages();
+
+ max_mapnr = max_pfn - ARCH_PFN_OFFSET;
+ high_memory = (void *)__va(max_low_pfn << PAGE_SHIFT);
+diff --git a/crypto/asymmetric_keys/pkcs7_verify.c b/crypto/asymmetric_keys/pkcs7_verify.c
+index 39e6de0c2761..97c77f66b20d 100644
+--- a/crypto/asymmetric_keys/pkcs7_verify.c
++++ b/crypto/asymmetric_keys/pkcs7_verify.c
+@@ -270,7 +270,7 @@ static int pkcs7_verify_sig_chain(struct pkcs7_message *pkcs7,
+ sinfo->index);
+ return 0;
+ }
+- ret = public_key_verify_signature(p->pub, p->sig);
++ ret = public_key_verify_signature(p->pub, x509->sig);
+ if (ret < 0)
+ return ret;
+ x509->signer = p;
+@@ -366,8 +366,7 @@ static int pkcs7_verify_one(struct pkcs7_message *pkcs7,
+ *
+ * (*) -EBADMSG if some part of the message was invalid, or:
+ *
+- * (*) 0 if no signature chains were found to be blacklisted or to contain
+- * unsupported crypto, or:
++ * (*) 0 if a signature chain passed verification, or:
+ *
+ * (*) -EKEYREJECTED if a blacklisted key was encountered, or:
+ *
+@@ -423,8 +422,11 @@ int pkcs7_verify(struct pkcs7_message *pkcs7,
+
+ for (sinfo = pkcs7->signed_infos; sinfo; sinfo = sinfo->next) {
+ ret = pkcs7_verify_one(pkcs7, sinfo);
+- if (sinfo->blacklisted && actual_ret == -ENOPKG)
+- actual_ret = -EKEYREJECTED;
++ if (sinfo->blacklisted) {
++ if (actual_ret == -ENOPKG)
++ actual_ret = -EKEYREJECTED;
++ continue;
++ }
+ if (ret < 0) {
+ if (ret == -ENOPKG) {
+ sinfo->unsupported_crypto = true;
+diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
+index de996586762a..e929fe1e4106 100644
+--- a/crypto/asymmetric_keys/public_key.c
++++ b/crypto/asymmetric_keys/public_key.c
+@@ -79,9 +79,11 @@ int public_key_verify_signature(const struct public_key *pkey,
+
+ BUG_ON(!pkey);
+ BUG_ON(!sig);
+- BUG_ON(!sig->digest);
+ BUG_ON(!sig->s);
+
++ if (!sig->digest)
++ return -ENOPKG;
++
+ alg_name = sig->pkey_algo;
+ if (strcmp(sig->pkey_algo, "rsa") == 0) {
+ /* The data wangled by the RSA algorithm is typically padded
+diff --git a/crypto/asymmetric_keys/restrict.c b/crypto/asymmetric_keys/restrict.c
+index 86fb68508952..7c93c7728454 100644
+--- a/crypto/asymmetric_keys/restrict.c
++++ b/crypto/asymmetric_keys/restrict.c
+@@ -67,8 +67,9 @@ __setup("ca_keys=", ca_keys_setup);
+ *
+ * Returns 0 if the new certificate was accepted, -ENOKEY if we couldn't find a
+ * matching parent certificate in the trusted list, -EKEYREJECTED if the
+- * signature check fails or the key is blacklisted and some other error if
+- * there is a matching certificate but the signature check cannot be performed.
++ * signature check fails or the key is blacklisted, -ENOPKG if the signature
++ * uses unsupported crypto, or some other error if there is a matching
++ * certificate but the signature check cannot be performed.
+ */
+ int restrict_link_by_signature(struct key *dest_keyring,
+ const struct key_type *type,
+@@ -88,6 +89,8 @@ int restrict_link_by_signature(struct key *dest_keyring,
+ return -EOPNOTSUPP;
+
+ sig = payload->data[asym_auth];
++ if (!sig)
++ return -ENOPKG;
+ if (!sig->auth_ids[0] && !sig->auth_ids[1])
+ return -ENOKEY;
+
+@@ -139,6 +142,8 @@ static int key_or_keyring_common(struct key *dest_keyring,
+ return -EOPNOTSUPP;
+
+ sig = payload->data[asym_auth];
++ if (!sig)
++ return -ENOPKG;
+ if (!sig->auth_ids[0] && !sig->auth_ids[1])
+ return -ENOKEY;
+
+@@ -222,9 +227,9 @@ static int key_or_keyring_common(struct key *dest_keyring,
+ *
+ * Returns 0 if the new certificate was accepted, -ENOKEY if we
+ * couldn't find a matching parent certificate in the trusted list,
+- * -EKEYREJECTED if the signature check fails, and some other error if
+- * there is a matching certificate but the signature check cannot be
+- * performed.
++ * -EKEYREJECTED if the signature check fails, -ENOPKG if the signature uses
++ * unsupported crypto, or some other error if there is a matching certificate
++ * but the signature check cannot be performed.
+ */
+ int restrict_link_by_key_or_keyring(struct key *dest_keyring,
+ const struct key_type *type,
+@@ -249,9 +254,9 @@ int restrict_link_by_key_or_keyring(struct key *dest_keyring,
+ *
+ * Returns 0 if the new certificate was accepted, -ENOKEY if we
+ * couldn't find a matching parent certificate in the trusted list,
+- * -EKEYREJECTED if the signature check fails, and some other error if
+- * there is a matching certificate but the signature check cannot be
+- * performed.
++ * -EKEYREJECTED if the signature check fails, -ENOPKG if the signature uses
++ * unsupported crypto, or some other error if there is a matching certificate
++ * but the signature check cannot be performed.
+ */
+ int restrict_link_by_key_or_keyring_chain(struct key *dest_keyring,
+ const struct key_type *type,
+diff --git a/drivers/extcon/extcon-intel-int3496.c b/drivers/extcon/extcon-intel-int3496.c
+index c8691b5a9cb0..191e99f06a9a 100644
+--- a/drivers/extcon/extcon-intel-int3496.c
++++ b/drivers/extcon/extcon-intel-int3496.c
+@@ -153,8 +153,9 @@ static int int3496_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- /* queue initial processing of id-pin */
++ /* process id-pin so that we start with the right status */
+ queue_delayed_work(system_wq, &data->work, 0);
++ flush_delayed_work(&data->work);
+
+ platform_set_drvdata(pdev, data);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
+index c13c51af0b68..c53095b3b0fb 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c
+@@ -14,6 +14,16 @@
+
+ #include "amd_acpi.h"
+
++#define AMDGPU_PX_QUIRK_FORCE_ATPX (1 << 0)
++
++struct amdgpu_px_quirk {
++ u32 chip_vendor;
++ u32 chip_device;
++ u32 subsys_vendor;
++ u32 subsys_device;
++ u32 px_quirk_flags;
++};
++
+ struct amdgpu_atpx_functions {
+ bool px_params;
+ bool power_cntl;
+@@ -35,6 +45,7 @@ struct amdgpu_atpx {
+ static struct amdgpu_atpx_priv {
+ bool atpx_detected;
+ bool bridge_pm_usable;
++ unsigned int quirks;
+ /* handle for device - and atpx */
+ acpi_handle dhandle;
+ acpi_handle other_handle;
+@@ -205,13 +216,19 @@ static int amdgpu_atpx_validate(struct amdgpu_atpx *atpx)
+
+ atpx->is_hybrid = false;
+ if (valid_bits & ATPX_MS_HYBRID_GFX_SUPPORTED) {
+- printk("ATPX Hybrid Graphics\n");
+- /*
+- * Disable legacy PM methods only when pcie port PM is usable,
+- * otherwise the device might fail to power off or power on.
+- */
+- atpx->functions.power_cntl = !amdgpu_atpx_priv.bridge_pm_usable;
+- atpx->is_hybrid = true;
++ if (amdgpu_atpx_priv.quirks & AMDGPU_PX_QUIRK_FORCE_ATPX) {
++ printk("ATPX Hybrid Graphics, forcing to ATPX\n");
++ atpx->functions.power_cntl = true;
++ atpx->is_hybrid = false;
++ } else {
++ printk("ATPX Hybrid Graphics\n");
++ /*
++ * Disable legacy PM methods only when pcie port PM is usable,
++ * otherwise the device might fail to power off or power on.
++ */
++ atpx->functions.power_cntl = !amdgpu_atpx_priv.bridge_pm_usable;
++ atpx->is_hybrid = true;
++ }
+ }
+
+ atpx->dgpu_req_power_for_displays = false;
+@@ -547,6 +564,31 @@ static const struct vga_switcheroo_handler amdgpu_atpx_handler = {
+ .get_client_id = amdgpu_atpx_get_client_id,
+ };
+
++static const struct amdgpu_px_quirk amdgpu_px_quirk_list[] = {
++ /* HG _PR3 doesn't seem to work on this A+A weston board */
++ { 0x1002, 0x6900, 0x1002, 0x0124, AMDGPU_PX_QUIRK_FORCE_ATPX },
++ { 0x1002, 0x6900, 0x1028, 0x0812, AMDGPU_PX_QUIRK_FORCE_ATPX },
++ { 0x1002, 0x6900, 0x1028, 0x0813, AMDGPU_PX_QUIRK_FORCE_ATPX },
++ { 0, 0, 0, 0, 0 },
++};
++
++static void amdgpu_atpx_get_quirks(struct pci_dev *pdev)
++{
++ const struct amdgpu_px_quirk *p = amdgpu_px_quirk_list;
++
++ /* Apply PX quirks */
++ while (p && p->chip_device != 0) {
++ if (pdev->vendor == p->chip_vendor &&
++ pdev->device == p->chip_device &&
++ pdev->subsystem_vendor == p->subsys_vendor &&
++ pdev->subsystem_device == p->subsys_device) {
++ amdgpu_atpx_priv.quirks |= p->px_quirk_flags;
++ break;
++ }
++ ++p;
++ }
++}
++
+ /**
+ * amdgpu_atpx_detect - detect whether we have PX
+ *
+@@ -570,6 +612,7 @@ static bool amdgpu_atpx_detect(void)
+
+ parent_pdev = pci_upstream_bridge(pdev);
+ d3_supported |= parent_pdev && parent_pdev->bridge_d3;
++ amdgpu_atpx_get_quirks(pdev);
+ }
+
+ while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) {
+@@ -579,6 +622,7 @@ static bool amdgpu_atpx_detect(void)
+
+ parent_pdev = pci_upstream_bridge(pdev);
+ d3_supported |= parent_pdev && parent_pdev->bridge_d3;
++ amdgpu_atpx_get_quirks(pdev);
+ }
+
+ if (has_atpx && vga_count == 2) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index 57abf7abd7a9..b9cfcffbf80f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -865,8 +865,8 @@ static int amdgpu_cs_ib_vm_chunk(struct amdgpu_device *adev,
+ struct amdgpu_bo_va_mapping *m;
+ struct amdgpu_bo *aobj = NULL;
+ struct amdgpu_cs_chunk *chunk;
++ uint64_t offset, va_start;
+ struct amdgpu_ib *ib;
+- uint64_t offset;
+ uint8_t *kptr;
+
+ chunk = &p->chunks[i];
+@@ -876,14 +876,14 @@ static int amdgpu_cs_ib_vm_chunk(struct amdgpu_device *adev,
+ if (chunk->chunk_id != AMDGPU_CHUNK_ID_IB)
+ continue;
+
+- r = amdgpu_cs_find_mapping(p, chunk_ib->va_start,
+- &aobj, &m);
++ va_start = chunk_ib->va_start & AMDGPU_VA_HOLE_MASK;
++ r = amdgpu_cs_find_mapping(p, va_start, &aobj, &m);
+ if (r) {
+ DRM_ERROR("IB va_start is invalid\n");
+ return r;
+ }
+
+- if ((chunk_ib->va_start + chunk_ib->ib_bytes) >
++ if ((va_start + chunk_ib->ib_bytes) >
+ (m->last + 1) * AMDGPU_GPU_PAGE_SIZE) {
+ DRM_ERROR("IB va_start+ib_bytes is invalid\n");
+ return -EINVAL;
+@@ -896,7 +896,7 @@ static int amdgpu_cs_ib_vm_chunk(struct amdgpu_device *adev,
+ }
+
+ offset = m->start * AMDGPU_GPU_PAGE_SIZE;
+- kptr += chunk_ib->va_start - offset;
++ kptr += va_start - offset;
+
+ memcpy(ib->ptr, kptr, chunk_ib->ib_bytes);
+ amdgpu_bo_kunmap(aobj);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 3573ecdb06ee..3288cbdd2df0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2228,8 +2228,6 @@ int amdgpu_device_init(struct amdgpu_device *adev,
+ * ignore it */
+ vga_client_register(adev->pdev, adev, NULL, amdgpu_vga_set_decode);
+
+- if (amdgpu_runtime_pm == 1)
+- runtime = true;
+ if (amdgpu_device_is_px(ddev))
+ runtime = true;
+ if (!pci_is_thunderbolt_attached(adev->pdev))
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+index e87eedcc0da9..1eac7c3c687b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+@@ -563,6 +563,17 @@ int amdgpu_gem_va_ioctl(struct drm_device *dev, void *data,
+ return -EINVAL;
+ }
+
++ if (args->va_address >= AMDGPU_VA_HOLE_START &&
++ args->va_address < AMDGPU_VA_HOLE_END) {
++ dev_dbg(&dev->pdev->dev,
++ "va_address 0x%LX is in VA hole 0x%LX-0x%LX\n",
++ args->va_address, AMDGPU_VA_HOLE_START,
++ AMDGPU_VA_HOLE_END);
++ return -EINVAL;
++ }
++
++ args->va_address &= AMDGPU_VA_HOLE_MASK;
++
+ if ((args->flags & ~valid_flags) && (args->flags & ~prt_flags)) {
+ dev_err(&dev->pdev->dev, "invalid flags combination 0x%08X\n",
+ args->flags);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+index 720139e182a3..c8b7abf887ed 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_kms.c
+@@ -586,7 +586,9 @@ static int amdgpu_info_ioctl(struct drm_device *dev, void *data, struct drm_file
+ if (amdgpu_sriov_vf(adev))
+ dev_info.ids_flags |= AMDGPU_IDS_FLAGS_PREEMPTION;
+ dev_info.virtual_address_offset = AMDGPU_VA_RESERVED_SIZE;
+- dev_info.virtual_address_max = (uint64_t)adev->vm_manager.max_pfn * AMDGPU_GPU_PAGE_SIZE;
++ dev_info.virtual_address_max =
++ min(adev->vm_manager.max_pfn * AMDGPU_GPU_PAGE_SIZE,
++ AMDGPU_VA_HOLE_START);
+ dev_info.virtual_address_alignment = max((int)PAGE_SIZE, AMDGPU_GPU_PAGE_SIZE);
+ dev_info.pte_fragment_size = (1 << adev->vm_manager.fragment_size) * AMDGPU_GPU_PAGE_SIZE;
+ dev_info.gart_page_size = AMDGPU_GPU_PAGE_SIZE;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+index bae77353447b..aef9ae5cec51 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+@@ -96,6 +96,19 @@ struct amdgpu_bo_list_entry;
+ /* hardcode that limit for now */
+ #define AMDGPU_VA_RESERVED_SIZE (8ULL << 20)
+
++/* VA hole for 48bit addresses on Vega10 */
++#define AMDGPU_VA_HOLE_START 0x0000800000000000ULL
++#define AMDGPU_VA_HOLE_END 0xffff800000000000ULL
++
++/*
++ * Hardware is programmed as if the hole doesn't exists with start and end
++ * address values.
++ *
++ * This mask is used to remove the upper 16bits of the VA and so come up with
++ * the linear addr value.
++ */
++#define AMDGPU_VA_HOLE_MASK 0x0000ffffffffffffULL
++
+ /* max vmids dedicated for process */
+ #define AMDGPU_VM_MAX_RESERVED_VMID 1
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/si_dpm.c b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+index 51fd0c9a20a5..3af322adae76 100644
+--- a/drivers/gpu/drm/amd/amdgpu/si_dpm.c
++++ b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+@@ -3464,6 +3464,11 @@ static void si_apply_state_adjust_rules(struct amdgpu_device *adev,
+ (adev->pdev->device == 0x6667)) {
+ max_sclk = 75000;
+ }
++ if ((adev->pdev->revision == 0xC3) ||
++ (adev->pdev->device == 0x6665)) {
++ max_sclk = 60000;
++ max_mclk = 80000;
++ }
+ } else if (adev->asic_type == CHIP_OLAND) {
+ if ((adev->pdev->revision == 0xC7) ||
+ (adev->pdev->revision == 0x80) ||
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index 4e67fe1e7955..40767fdb6cd3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -659,8 +659,8 @@ static int soc15_common_early_init(void *handle)
+ AMD_CG_SUPPORT_MC_LS |
+ AMD_CG_SUPPORT_SDMA_MGCG |
+ AMD_CG_SUPPORT_SDMA_LS;
+- adev->pg_flags = AMD_PG_SUPPORT_SDMA |
+- AMD_PG_SUPPORT_MMHUB;
++ adev->pg_flags = AMD_PG_SUPPORT_SDMA;
++
+ adev->external_rev_id = 0x1;
+ break;
+ default:
+diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
+index 3a4c2fa7e36d..d582964702ad 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vi.c
++++ b/drivers/gpu/drm/amd/amdgpu/vi.c
+@@ -449,14 +449,19 @@ static bool vi_read_bios_from_rom(struct amdgpu_device *adev,
+
+ static void vi_detect_hw_virtualization(struct amdgpu_device *adev)
+ {
+- uint32_t reg = RREG32(mmBIF_IOV_FUNC_IDENTIFIER);
+- /* bit0: 0 means pf and 1 means vf */
+- /* bit31: 0 means disable IOV and 1 means enable */
+- if (reg & 1)
+- adev->virt.caps |= AMDGPU_SRIOV_CAPS_IS_VF;
+-
+- if (reg & 0x80000000)
+- adev->virt.caps |= AMDGPU_SRIOV_CAPS_ENABLE_IOV;
++ uint32_t reg = 0;
++
++ if (adev->asic_type == CHIP_TONGA ||
++ adev->asic_type == CHIP_FIJI) {
++ reg = RREG32(mmBIF_IOV_FUNC_IDENTIFIER);
++ /* bit0: 0 means pf and 1 means vf */
++ /* bit31: 0 means disable IOV and 1 means enable */
++ if (reg & 1)
++ adev->virt.caps |= AMDGPU_SRIOV_CAPS_IS_VF;
++
++ if (reg & 0x80000000)
++ adev->virt.caps |= AMDGPU_SRIOV_CAPS_ENABLE_IOV;
++ }
+
+ if (reg == 0) {
+ if (is_virtual_machine()) /* passthrough mode exclus sr-iov mode */
+diff --git a/drivers/gpu/drm/cirrus/cirrus_mode.c b/drivers/gpu/drm/cirrus/cirrus_mode.c
+index cd23b1b28259..c91b9b054e3f 100644
+--- a/drivers/gpu/drm/cirrus/cirrus_mode.c
++++ b/drivers/gpu/drm/cirrus/cirrus_mode.c
+@@ -294,22 +294,7 @@ static void cirrus_crtc_prepare(struct drm_crtc *crtc)
+ {
+ }
+
+-/*
+- * This is called after a mode is programmed. It should reverse anything done
+- * by the prepare function
+- */
+-static void cirrus_crtc_commit(struct drm_crtc *crtc)
+-{
+-}
+-
+-/*
+- * The core can pass us a set of gamma values to program. We actually only
+- * use this for 8-bit mode so can't perform smooth fades on deeper modes,
+- * but it's a requirement that we provide the function
+- */
+-static int cirrus_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, u16 *green,
+- u16 *blue, uint32_t size,
+- struct drm_modeset_acquire_ctx *ctx)
++static void cirrus_crtc_load_lut(struct drm_crtc *crtc)
+ {
+ struct drm_device *dev = crtc->dev;
+ struct cirrus_device *cdev = dev->dev_private;
+@@ -317,7 +302,7 @@ static int cirrus_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, u16 *green,
+ int i;
+
+ if (!crtc->enabled)
+- return 0;
++ return;
+
+ r = crtc->gamma_store;
+ g = r + crtc->gamma_size;
+@@ -330,6 +315,27 @@ static int cirrus_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, u16 *green,
+ WREG8(PALETTE_DATA, *g++ >> 8);
+ WREG8(PALETTE_DATA, *b++ >> 8);
+ }
++}
++
++/*
++ * This is called after a mode is programmed. It should reverse anything done
++ * by the prepare function
++ */
++static void cirrus_crtc_commit(struct drm_crtc *crtc)
++{
++ cirrus_crtc_load_lut(crtc);
++}
++
++/*
++ * The core can pass us a set of gamma values to program. We actually only
++ * use this for 8-bit mode so can't perform smooth fades on deeper modes,
++ * but it's a requirement that we provide the function
++ */
++static int cirrus_crtc_gamma_set(struct drm_crtc *crtc, u16 *red, u16 *green,
++ u16 *blue, uint32_t size,
++ struct drm_modeset_acquire_ctx *ctx)
++{
++ cirrus_crtc_load_lut(crtc);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index b16f1d69a0bb..e8c249361d7e 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1778,6 +1778,8 @@ int drm_atomic_helper_setup_commit(struct drm_atomic_state *state,
+ new_crtc_state->event->base.completion = &commit->flip_done;
+ new_crtc_state->event->base.completion_release = release_crtc_commit;
+ drm_crtc_commit_get(commit);
++
++ commit->abort_completion = true;
+ }
+
+ for_each_oldnew_connector_in_state(state, conn, old_conn_state, new_conn_state, i) {
+@@ -3327,8 +3329,21 @@ EXPORT_SYMBOL(drm_atomic_helper_crtc_duplicate_state);
+ void __drm_atomic_helper_crtc_destroy_state(struct drm_crtc_state *state)
+ {
+ if (state->commit) {
++ /*
++ * In the event that a non-blocking commit returns
++ * -ERESTARTSYS before the commit_tail work is queued, we will
++ * have an extra reference to the commit object. Release it, if
++ * the event has not been consumed by the worker.
++ *
++ * state->event may be freed, so we can't directly look at
++ * state->event->base.completion.
++ */
++ if (state->event && state->commit->abort_completion)
++ drm_crtc_commit_put(state->commit);
++
+ kfree(state->commit->event);
+ state->commit->event = NULL;
++
+ drm_crtc_commit_put(state->commit);
+ }
+
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index cb487148359a..16fb76ba6509 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -113,6 +113,9 @@ static const struct edid_quirk {
+ /* AEO model 0 reports 8 bpc, but is a 6 bpc panel */
+ { "AEO", 0, EDID_QUIRK_FORCE_6BPC },
+
++ /* CPT panel of Asus UX303LA reports 8 bpc, but is a 6 bpc panel */
++ { "CPT", 0x17df, EDID_QUIRK_FORCE_6BPC },
++
+ /* Belinea 10 15 55 */
+ { "MAX", 1516, EDID_QUIRK_PREFER_LARGE_60 },
+ { "MAX", 0x77e, EDID_QUIRK_PREFER_LARGE_60 },
+diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
+index c3c79ee6119e..edab571dbc90 100644
+--- a/drivers/gpu/drm/drm_mm.c
++++ b/drivers/gpu/drm/drm_mm.c
+@@ -836,9 +836,24 @@ struct drm_mm_node *drm_mm_scan_color_evict(struct drm_mm_scan *scan)
+ if (!mm->color_adjust)
+ return NULL;
+
+- hole = list_first_entry(&mm->hole_stack, typeof(*hole), hole_stack);
+- hole_start = __drm_mm_hole_node_start(hole);
+- hole_end = hole_start + hole->hole_size;
++ /*
++ * The hole found during scanning should ideally be the first element
++ * in the hole_stack list, but due to side-effects in the driver it
++ * may not be.
++ */
++ list_for_each_entry(hole, &mm->hole_stack, hole_stack) {
++ hole_start = __drm_mm_hole_node_start(hole);
++ hole_end = hole_start + hole->hole_size;
++
++ if (hole_start <= scan->hit_start &&
++ hole_end >= scan->hit_end)
++ break;
++ }
++
++ /* We should only be called after we found the hole previously */
++ DRM_MM_BUG_ON(&hole->hole_stack == &mm->hole_stack);
++ if (unlikely(&hole->hole_stack == &mm->hole_stack))
++ return NULL;
+
+ DRM_MM_BUG_ON(hole_start > scan->hit_start);
+ DRM_MM_BUG_ON(hole_end < scan->hit_end);
+diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
+index bcbc7abe6693..5d0c6504efe8 100644
+--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
++++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
+@@ -552,29 +552,16 @@ void intel_engine_remove_wait(struct intel_engine_cs *engine,
+ spin_unlock_irq(&b->rb_lock);
+ }
+
+-static bool signal_valid(const struct drm_i915_gem_request *request)
+-{
+- return intel_wait_check_request(&request->signaling.wait, request);
+-}
+-
+ static bool signal_complete(const struct drm_i915_gem_request *request)
+ {
+ if (!request)
+ return false;
+
+- /* If another process served as the bottom-half it may have already
+- * signalled that this wait is already completed.
+- */
+- if (intel_wait_complete(&request->signaling.wait))
+- return signal_valid(request);
+-
+- /* Carefully check if the request is complete, giving time for the
++ /*
++ * Carefully check if the request is complete, giving time for the
+ * seqno to be visible or if the GPU hung.
+ */
+- if (__i915_request_irq_complete(request))
+- return true;
+-
+- return false;
++ return __i915_request_irq_complete(request);
+ }
+
+ static struct drm_i915_gem_request *to_signaler(struct rb_node *rb)
+@@ -617,9 +604,13 @@ static int intel_breadcrumbs_signaler(void *arg)
+ request = i915_gem_request_get_rcu(request);
+ rcu_read_unlock();
+ if (signal_complete(request)) {
+- local_bh_disable();
+- dma_fence_signal(&request->fence);
+- local_bh_enable(); /* kick start the tasklets */
++ if (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT,
++ &request->fence.flags)) {
++ local_bh_disable();
++ dma_fence_signal(&request->fence);
++ GEM_BUG_ON(!i915_gem_request_completed(request));
++ local_bh_enable(); /* kick start the tasklets */
++ }
+
+ spin_lock_irq(&b->rb_lock);
+
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index b5f85d6f6bef..721633658544 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -2721,6 +2721,9 @@ static const struct hid_device_id hid_ignore_list[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_MICROCASSYTIME) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_MICROCASSYTEMPERATURE) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_MICROCASSYPH) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_POWERANALYSERCASSY) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_CONVERTERCONTROLLERCASSY) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_MACHINETESTCASSY) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_JWM) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_DMMP) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_UMIP) },
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 5da3d6256d25..a0baa5ba5b84 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -641,6 +641,9 @@
+ #define USB_DEVICE_ID_LD_MICROCASSYTIME 0x1033
+ #define USB_DEVICE_ID_LD_MICROCASSYTEMPERATURE 0x1035
+ #define USB_DEVICE_ID_LD_MICROCASSYPH 0x1038
++#define USB_DEVICE_ID_LD_POWERANALYSERCASSY 0x1040
++#define USB_DEVICE_ID_LD_CONVERTERCONTROLLERCASSY 0x1042
++#define USB_DEVICE_ID_LD_MACHINETESTCASSY 0x1043
+ #define USB_DEVICE_ID_LD_JWM 0x1080
+ #define USB_DEVICE_ID_LD_DMMP 0x1081
+ #define USB_DEVICE_ID_LD_UMIP 0x1090
+diff --git a/drivers/i2c/busses/i2c-bcm2835.c b/drivers/i2c/busses/i2c-bcm2835.c
+index cd07a69e2e93..44deae78913e 100644
+--- a/drivers/i2c/busses/i2c-bcm2835.c
++++ b/drivers/i2c/busses/i2c-bcm2835.c
+@@ -50,6 +50,9 @@
+ #define BCM2835_I2C_S_CLKT BIT(9)
+ #define BCM2835_I2C_S_LEN BIT(10) /* Fake bit for SW error reporting */
+
++#define BCM2835_I2C_FEDL_SHIFT 16
++#define BCM2835_I2C_REDL_SHIFT 0
++
+ #define BCM2835_I2C_CDIV_MIN 0x0002
+ #define BCM2835_I2C_CDIV_MAX 0xFFFE
+
+@@ -81,7 +84,7 @@ static inline u32 bcm2835_i2c_readl(struct bcm2835_i2c_dev *i2c_dev, u32 reg)
+
+ static int bcm2835_i2c_set_divider(struct bcm2835_i2c_dev *i2c_dev)
+ {
+- u32 divider;
++ u32 divider, redl, fedl;
+
+ divider = DIV_ROUND_UP(clk_get_rate(i2c_dev->clk),
+ i2c_dev->bus_clk_rate);
+@@ -100,6 +103,22 @@ static int bcm2835_i2c_set_divider(struct bcm2835_i2c_dev *i2c_dev)
+
+ bcm2835_i2c_writel(i2c_dev, BCM2835_I2C_DIV, divider);
+
++ /*
++ * Number of core clocks to wait after falling edge before
++ * outputting the next data bit. Note that both FEDL and REDL
++ * can't be greater than CDIV/2.
++ */
++ fedl = max(divider / 16, 1u);
++
++ /*
++ * Number of core clocks to wait after rising edge before
++ * sampling the next incoming data bit.
++ */
++ redl = max(divider / 4, 1u);
++
++ bcm2835_i2c_writel(i2c_dev, BCM2835_I2C_DEL,
++ (fedl << BCM2835_I2C_FEDL_SHIFT) |
++ (redl << BCM2835_I2C_REDL_SHIFT));
+ return 0;
+ }
+
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index 418c233075d3..13e849bf9aa0 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -207,7 +207,7 @@ static void i2c_dw_xfer_init(struct dw_i2c_dev *dev)
+ i2c_dw_disable_int(dev);
+
+ /* Enable the adapter */
+- __i2c_dw_enable(dev, true);
++ __i2c_dw_enable_and_wait(dev, true);
+
+ /* Clear and enable interrupts */
+ dw_readl(dev, DW_IC_CLR_INTR);
+diff --git a/drivers/iio/adc/stm32-adc.c b/drivers/iio/adc/stm32-adc.c
+index cecf1e5b244c..b61b52f43179 100644
+--- a/drivers/iio/adc/stm32-adc.c
++++ b/drivers/iio/adc/stm32-adc.c
+@@ -765,8 +765,6 @@ static int stm32h7_adc_enable(struct stm32_adc *adc)
+ int ret;
+ u32 val;
+
+- /* Clear ADRDY by writing one, then enable ADC */
+- stm32_adc_set_bits(adc, STM32H7_ADC_ISR, STM32H7_ADRDY);
+ stm32_adc_set_bits(adc, STM32H7_ADC_CR, STM32H7_ADEN);
+
+ /* Poll for ADRDY to be set (after adc startup time) */
+@@ -774,8 +772,11 @@ static int stm32h7_adc_enable(struct stm32_adc *adc)
+ val & STM32H7_ADRDY,
+ 100, STM32_ADC_TIMEOUT_US);
+ if (ret) {
+- stm32_adc_clr_bits(adc, STM32H7_ADC_CR, STM32H7_ADEN);
++ stm32_adc_set_bits(adc, STM32H7_ADC_CR, STM32H7_ADDIS);
+ dev_err(&indio_dev->dev, "Failed to enable ADC\n");
++ } else {
++ /* Clear ADRDY by writing one */
++ stm32_adc_set_bits(adc, STM32H7_ADC_ISR, STM32H7_ADRDY);
+ }
+
+ return ret;
+diff --git a/drivers/iio/imu/adis_trigger.c b/drivers/iio/imu/adis_trigger.c
+index 0dd5a381be64..457372f36791 100644
+--- a/drivers/iio/imu/adis_trigger.c
++++ b/drivers/iio/imu/adis_trigger.c
+@@ -46,6 +46,10 @@ int adis_probe_trigger(struct adis *adis, struct iio_dev *indio_dev)
+ if (adis->trig == NULL)
+ return -ENOMEM;
+
++ adis->trig->dev.parent = &adis->spi->dev;
++ adis->trig->ops = &adis_trigger_ops;
++ iio_trigger_set_drvdata(adis->trig, adis);
++
+ ret = request_irq(adis->spi->irq,
+ &iio_trigger_generic_data_rdy_poll,
+ IRQF_TRIGGER_RISING,
+@@ -54,9 +58,6 @@ int adis_probe_trigger(struct adis *adis, struct iio_dev *indio_dev)
+ if (ret)
+ goto error_free_trig;
+
+- adis->trig->dev.parent = &adis->spi->dev;
+- adis->trig->ops = &adis_trigger_ops;
+- iio_trigger_set_drvdata(adis->trig, adis);
+ ret = iio_trigger_register(adis->trig);
+
+ indio_dev->trig = iio_trigger_get(adis->trig);
+diff --git a/drivers/iio/industrialio-buffer.c b/drivers/iio/industrialio-buffer.c
+index d2b465140a6b..78482d456c3b 100644
+--- a/drivers/iio/industrialio-buffer.c
++++ b/drivers/iio/industrialio-buffer.c
+@@ -175,7 +175,7 @@ unsigned int iio_buffer_poll(struct file *filp,
+ struct iio_dev *indio_dev = filp->private_data;
+ struct iio_buffer *rb = indio_dev->buffer;
+
+- if (!indio_dev->info)
++ if (!indio_dev->info || rb == NULL)
+ return 0;
+
+ poll_wait(filp, &rb->pollq, wait);
+diff --git a/drivers/iio/proximity/Kconfig b/drivers/iio/proximity/Kconfig
+index fcb1c4ba5e41..f726f9427602 100644
+--- a/drivers/iio/proximity/Kconfig
++++ b/drivers/iio/proximity/Kconfig
+@@ -68,6 +68,8 @@ config SX9500
+
+ config SRF08
+ tristate "Devantech SRF02/SRF08/SRF10 ultrasonic ranger sensor"
++ select IIO_BUFFER
++ select IIO_TRIGGERED_BUFFER
+ depends on I2C
+ help
+ Say Y here to build a driver for Devantech SRF02/SRF08/SRF10
+diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
+index 85b5ee4defa4..4e1f76730855 100644
+--- a/drivers/infiniband/core/rdma_core.c
++++ b/drivers/infiniband/core/rdma_core.c
+@@ -196,7 +196,15 @@ static struct ib_uobject *lookup_get_idr_uobject(const struct uverbs_obj_type *t
+ goto free;
+ }
+
+- uverbs_uobject_get(uobj);
++ /*
++ * The idr_find is guaranteed to return a pointer to something that
++ * isn't freed yet, or NULL, as the free after idr_remove goes through
++ * kfree_rcu(). However the object may still have been released and
++ * kfree() could be called at any time.
++ */
++ if (!kref_get_unless_zero(&uobj->ref))
++ uobj = ERR_PTR(-ENOENT);
++
+ free:
+ rcu_read_unlock();
+ return uobj;
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index 840b24096690..df127c8b9a9f 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -560,9 +560,10 @@ ssize_t ib_uverbs_open_xrcd(struct ib_uverbs_file *file,
+ if (f.file)
+ fdput(f);
+
++ mutex_unlock(&file->device->xrcd_tree_mutex);
++
+ uobj_alloc_commit(&obj->uobject);
+
+- mutex_unlock(&file->device->xrcd_tree_mutex);
+ return in_len;
+
+ err_copy:
+@@ -601,10 +602,8 @@ ssize_t ib_uverbs_close_xrcd(struct ib_uverbs_file *file,
+
+ uobj = uobj_get_write(uobj_get_type(xrcd), cmd.xrcd_handle,
+ file->ucontext);
+- if (IS_ERR(uobj)) {
+- mutex_unlock(&file->device->xrcd_tree_mutex);
++ if (IS_ERR(uobj))
+ return PTR_ERR(uobj);
+- }
+
+ ret = uobj_remove_commit(uobj);
+ return ret ?: in_len;
+@@ -1971,8 +1970,15 @@ static int modify_qp(struct ib_uverbs_file *file,
+ goto release_qp;
+ }
+
++ if ((cmd->base.attr_mask & IB_QP_AV) &&
++ !rdma_is_port_valid(qp->device, cmd->base.dest.port_num)) {
++ ret = -EINVAL;
++ goto release_qp;
++ }
++
+ if ((cmd->base.attr_mask & IB_QP_ALT_PATH) &&
+- !rdma_is_port_valid(qp->device, cmd->base.alt_port_num)) {
++ (!rdma_is_port_valid(qp->device, cmd->base.alt_port_num) ||
++ !rdma_is_port_valid(qp->device, cmd->base.alt_dest.port_num))) {
+ ret = -EINVAL;
+ goto release_qp;
+ }
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 381fd9c096ae..0804239e43f0 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -648,12 +648,21 @@ static int verify_command_mask(struct ib_device *ib_dev, __u32 command)
+ return -1;
+ }
+
++static bool verify_command_idx(u32 command, bool extended)
++{
++ if (extended)
++ return command < ARRAY_SIZE(uverbs_ex_cmd_table);
++
++ return command < ARRAY_SIZE(uverbs_cmd_table);
++}
++
+ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
+ size_t count, loff_t *pos)
+ {
+ struct ib_uverbs_file *file = filp->private_data;
+ struct ib_device *ib_dev;
+ struct ib_uverbs_cmd_hdr hdr;
++ bool extended_command;
+ __u32 command;
+ __u32 flags;
+ int srcu_key;
+@@ -686,6 +695,15 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
+ }
+
+ command = hdr.command & IB_USER_VERBS_CMD_COMMAND_MASK;
++ flags = (hdr.command &
++ IB_USER_VERBS_CMD_FLAGS_MASK) >> IB_USER_VERBS_CMD_FLAGS_SHIFT;
++
++ extended_command = flags & IB_USER_VERBS_CMD_FLAG_EXTENDED;
++ if (!verify_command_idx(command, extended_command)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
+ if (verify_command_mask(ib_dev, command)) {
+ ret = -EOPNOTSUPP;
+ goto out;
+@@ -697,12 +715,8 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
+ goto out;
+ }
+
+- flags = (hdr.command &
+- IB_USER_VERBS_CMD_FLAGS_MASK) >> IB_USER_VERBS_CMD_FLAGS_SHIFT;
+-
+ if (!flags) {
+- if (command >= ARRAY_SIZE(uverbs_cmd_table) ||
+- !uverbs_cmd_table[command]) {
++ if (!uverbs_cmd_table[command]) {
+ ret = -EINVAL;
+ goto out;
+ }
+@@ -723,8 +737,7 @@ static ssize_t ib_uverbs_write(struct file *filp, const char __user *buf,
+ struct ib_udata uhw;
+ size_t written_count = count;
+
+- if (command >= ARRAY_SIZE(uverbs_ex_cmd_table) ||
+- !uverbs_ex_cmd_table[command]) {
++ if (!uverbs_ex_cmd_table[command]) {
+ ret = -ENOSYS;
+ goto out;
+ }
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index b56c3e23f0af..980ae8e7df30 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -688,7 +688,7 @@ static void gic_raise_softirq(const struct cpumask *mask, unsigned int irq)
+ * Ensure that stores to Normal memory are visible to the
+ * other CPUs before issuing the IPI.
+ */
+- smp_wmb();
++ wmb();
+
+ for_each_cpu(cpu, mask) {
+ u64 cluster_id = MPIDR_TO_SGI_CLUSTER_ID(cpu_logical_map(cpu));
+diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
+index ef92a4d2038e..d32268cc1174 100644
+--- a/drivers/irqchip/irq-mips-gic.c
++++ b/drivers/irqchip/irq-mips-gic.c
+@@ -424,8 +424,6 @@ static int gic_shared_irq_domain_map(struct irq_domain *d, unsigned int virq,
+ spin_lock_irqsave(&gic_lock, flags);
+ write_gic_map_pin(intr, GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin);
+ write_gic_map_vp(intr, BIT(mips_cm_vp_id(cpu)));
+- gic_clear_pcpu_masks(intr);
+- set_bit(intr, per_cpu_ptr(pcpu_masks, cpu));
+ irq_data_update_effective_affinity(data, cpumask_of(cpu));
+ spin_unlock_irqrestore(&gic_lock, flags);
+
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index 375ef86a84da..322671076c9c 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -2632,7 +2632,6 @@ void t4_get_regs(struct adapter *adap, void *buf, size_t buf_size)
+ }
+
+ #define EEPROM_STAT_ADDR 0x7bfc
+-#define VPD_SIZE 0x800
+ #define VPD_BASE 0x400
+ #define VPD_BASE_OLD 0
+ #define VPD_LEN 1024
+@@ -2699,15 +2698,6 @@ int t4_get_raw_vpd_params(struct adapter *adapter, struct vpd_params *p)
+ if (!vpd)
+ return -ENOMEM;
+
+- /* We have two VPD data structures stored in the adapter VPD area.
+- * By default, Linux calculates the size of the VPD area by traversing
+- * the first VPD area at offset 0x0, so we need to tell the OS what
+- * our real VPD size is.
+- */
+- ret = pci_set_vpd_size(adapter->pdev, VPD_SIZE);
+- if (ret < 0)
+- goto out;
+-
+ /* Card information normally starts at VPD_BASE but early cards had
+ * it at 0.
+ */
+diff --git a/drivers/net/thunderbolt.c b/drivers/net/thunderbolt.c
+index ca5e375de27c..e0d6760f3219 100644
+--- a/drivers/net/thunderbolt.c
++++ b/drivers/net/thunderbolt.c
+@@ -166,6 +166,8 @@ struct tbnet_ring {
+ * @connected_work: Worker that finalizes the ThunderboltIP connection
+ * setup and enables DMA paths for high speed data
+ * transfers
++ * @disconnect_work: Worker that handles tearing down the ThunderboltIP
++ * connection
+ * @rx_hdr: Copy of the currently processed Rx frame. Used when a
+ * network packet consists of multiple Thunderbolt frames.
+ * In host byte order.
+@@ -190,6 +192,7 @@ struct tbnet {
+ int login_retries;
+ struct delayed_work login_work;
+ struct work_struct connected_work;
++ struct work_struct disconnect_work;
+ struct thunderbolt_ip_frame_header rx_hdr;
+ struct tbnet_ring rx_ring;
+ atomic_t frame_id;
+@@ -445,7 +448,7 @@ static int tbnet_handle_packet(const void *buf, size_t size, void *data)
+ case TBIP_LOGOUT:
+ ret = tbnet_logout_response(net, route, sequence, command_id);
+ if (!ret)
+- tbnet_tear_down(net, false);
++ queue_work(system_long_wq, &net->disconnect_work);
+ break;
+
+ default:
+@@ -659,6 +662,13 @@ static void tbnet_login_work(struct work_struct *work)
+ }
+ }
+
++static void tbnet_disconnect_work(struct work_struct *work)
++{
++ struct tbnet *net = container_of(work, typeof(*net), disconnect_work);
++
++ tbnet_tear_down(net, false);
++}
++
+ static bool tbnet_check_frame(struct tbnet *net, const struct tbnet_frame *tf,
+ const struct thunderbolt_ip_frame_header *hdr)
+ {
+@@ -881,6 +891,7 @@ static int tbnet_stop(struct net_device *dev)
+
+ napi_disable(&net->napi);
+
++ cancel_work_sync(&net->disconnect_work);
+ tbnet_tear_down(net, true);
+
+ tb_ring_free(net->rx_ring.ring);
+@@ -1195,6 +1206,7 @@ static int tbnet_probe(struct tb_service *svc, const struct tb_service_id *id)
+ net = netdev_priv(dev);
+ INIT_DELAYED_WORK(&net->login_work, tbnet_login_work);
+ INIT_WORK(&net->connected_work, tbnet_connected_work);
++ INIT_WORK(&net->disconnect_work, tbnet_disconnect_work);
+ mutex_init(&net->connection_lock);
+ atomic_set(&net->command_id, 0);
+ atomic_set(&net->frame_id, 0);
+@@ -1270,10 +1282,7 @@ static int __maybe_unused tbnet_suspend(struct device *dev)
+ stop_login(net);
+ if (netif_running(net->dev)) {
+ netif_device_detach(net->dev);
+- tb_ring_stop(net->rx_ring.ring);
+- tb_ring_stop(net->tx_ring.ring);
+- tbnet_free_buffers(&net->rx_ring);
+- tbnet_free_buffers(&net->tx_ring);
++ tbnet_tear_down(net, true);
+ }
+
+ return 0;
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index d22750ea7444..d7135140bf40 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3419,22 +3419,29 @@ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_PORT_RIDGE,
+
+ static void quirk_chelsio_extend_vpd(struct pci_dev *dev)
+ {
+- pci_set_vpd_size(dev, 8192);
+-}
+-
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x20, quirk_chelsio_extend_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x21, quirk_chelsio_extend_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x22, quirk_chelsio_extend_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x23, quirk_chelsio_extend_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x24, quirk_chelsio_extend_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x25, quirk_chelsio_extend_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x26, quirk_chelsio_extend_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x30, quirk_chelsio_extend_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x31, quirk_chelsio_extend_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x32, quirk_chelsio_extend_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x35, quirk_chelsio_extend_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x36, quirk_chelsio_extend_vpd);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, 0x37, quirk_chelsio_extend_vpd);
++ int chip = (dev->device & 0xf000) >> 12;
++ int func = (dev->device & 0x0f00) >> 8;
++ int prod = (dev->device & 0x00ff) >> 0;
++
++ /*
++ * If this is a T3-based adapter, there's a 1KB VPD area at offset
++ * 0xc00 which contains the preferred VPD values. If this is a T4 or
++ * later based adapter, the special VPD is at offset 0x400 for the
++ * Physical Functions (the SR-IOV Virtual Functions have no VPD
++ * Capabilities). The PCI VPD Access core routines will normally
++ * compute the size of the VPD by parsing the VPD Data Structure at
++ * offset 0x000. This will result in silent failures when attempting
++ * to accesses these other VPD areas which are beyond those computed
++ * limits.
++ */
++ if (chip == 0x0 && prod >= 0x20)
++ pci_set_vpd_size(dev, 8192);
++ else if (chip >= 0x4 && func < 0x8)
++ pci_set_vpd_size(dev, 2048);
++}
++
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_CHELSIO, PCI_ANY_ID,
++ quirk_chelsio_extend_vpd);
+
+ #ifdef CONFIG_ACPI
+ /*
+diff --git a/drivers/scsi/ibmvscsi/ibmvfc.h b/drivers/scsi/ibmvscsi/ibmvfc.h
+index 9a0696f68f37..b81a53c4a9a8 100644
+--- a/drivers/scsi/ibmvscsi/ibmvfc.h
++++ b/drivers/scsi/ibmvscsi/ibmvfc.h
+@@ -367,7 +367,7 @@ enum ibmvfc_fcp_rsp_info_codes {
+ };
+
+ struct ibmvfc_fcp_rsp_info {
+- __be16 reserved;
++ u8 reserved[3];
+ u8 rsp_code;
+ u8 reserved2[4];
+ }__attribute__((packed, aligned (2)));
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 4024926c1d68..f4a548471f0f 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -226,6 +226,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x1a0a, 0x0200), .driver_info =
+ USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL },
+
++ /* Corsair K70 RGB */
++ { USB_DEVICE(0x1b1c, 0x1b13), .driver_info = USB_QUIRK_DELAY_INIT },
++
+ /* Corsair Strafe RGB */
+ { USB_DEVICE(0x1b1c, 0x1b20), .driver_info = USB_QUIRK_DELAY_INIT },
+
+diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
+index fd3e7ad2eb0e..618b4260f0d9 100644
+--- a/drivers/usb/dwc3/ep0.c
++++ b/drivers/usb/dwc3/ep0.c
+@@ -858,7 +858,12 @@ static void dwc3_ep0_complete_data(struct dwc3 *dwc,
+ trb++;
+ trb->ctrl &= ~DWC3_TRB_CTRL_HWO;
+ trace_dwc3_complete_trb(ep0, trb);
+- ep0->trb_enqueue = 0;
++
++ if (r->direction)
++ dwc->eps[1]->trb_enqueue = 0;
++ else
++ dwc->eps[0]->trb_enqueue = 0;
++
+ dwc->ep0_bounced = false;
+ }
+
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 639dd1b163a0..21abea0ac622 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2744,6 +2744,8 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)
+ break;
+ }
+
++ dwc->eps[1]->endpoint.maxpacket = dwc->gadget.ep0->maxpacket;
++
+ /* Enable USB2 LPM Capability */
+
+ if ((dwc->revision > DWC3_REVISION_194A) &&
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index f9bd351637cd..0ef08a909ba6 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1852,44 +1852,20 @@ static int ffs_func_eps_enable(struct ffs_function *func)
+
+ spin_lock_irqsave(&func->ffs->eps_lock, flags);
+ while(count--) {
+- struct usb_endpoint_descriptor *ds;
+- struct usb_ss_ep_comp_descriptor *comp_desc = NULL;
+- int needs_comp_desc = false;
+- int desc_idx;
+-
+- if (ffs->gadget->speed == USB_SPEED_SUPER) {
+- desc_idx = 2;
+- needs_comp_desc = true;
+- } else if (ffs->gadget->speed == USB_SPEED_HIGH)
+- desc_idx = 1;
+- else
+- desc_idx = 0;
+-
+- /* fall-back to lower speed if desc missing for current speed */
+- do {
+- ds = ep->descs[desc_idx];
+- } while (!ds && --desc_idx >= 0);
+-
+- if (!ds) {
+- ret = -EINVAL;
+- break;
+- }
+-
+ ep->ep->driver_data = ep;
+- ep->ep->desc = ds;
+
+- if (needs_comp_desc) {
+- comp_desc = (struct usb_ss_ep_comp_descriptor *)(ds +
+- USB_DT_ENDPOINT_SIZE);
+- ep->ep->maxburst = comp_desc->bMaxBurst + 1;
+- ep->ep->comp_desc = comp_desc;
++ ret = config_ep_by_speed(func->gadget, &func->function, ep->ep);
++ if (ret) {
++ pr_err("%s: config_ep_by_speed(%s) returned %d\n",
++ __func__, ep->ep->name, ret);
++ break;
+ }
+
+ ret = usb_ep_enable(ep->ep);
+ if (likely(!ret)) {
+ epfile->ep = ep;
+- epfile->in = usb_endpoint_dir_in(ds);
+- epfile->isoc = usb_endpoint_xfer_isoc(ds);
++ epfile->in = usb_endpoint_dir_in(ep->ep->desc);
++ epfile->isoc = usb_endpoint_xfer_isoc(ep->ep->desc);
+ } else {
+ break;
+ }
+@@ -2976,10 +2952,8 @@ static int _ffs_func_bind(struct usb_configuration *c,
+ struct ffs_data *ffs = func->ffs;
+
+ const int full = !!func->ffs->fs_descs_count;
+- const int high = gadget_is_dualspeed(func->gadget) &&
+- func->ffs->hs_descs_count;
+- const int super = gadget_is_superspeed(func->gadget) &&
+- func->ffs->ss_descs_count;
++ const int high = !!func->ffs->hs_descs_count;
++ const int super = !!func->ffs->ss_descs_count;
+
+ int fs_len, hs_len, ss_len, ret, i;
+ struct ffs_ep *eps_ptr;
+diff --git a/drivers/usb/host/ehci-hub.c b/drivers/usb/host/ehci-hub.c
+index facafdf8fb95..d7641cbdee43 100644
+--- a/drivers/usb/host/ehci-hub.c
++++ b/drivers/usb/host/ehci-hub.c
+@@ -774,12 +774,12 @@ static struct urb *request_single_step_set_feature_urb(
+ atomic_inc(&urb->use_count);
+ atomic_inc(&urb->dev->urbnum);
+ urb->setup_dma = dma_map_single(
+- hcd->self.controller,
++ hcd->self.sysdev,
+ urb->setup_packet,
+ sizeof(struct usb_ctrlrequest),
+ DMA_TO_DEVICE);
+ urb->transfer_dma = dma_map_single(
+- hcd->self.controller,
++ hcd->self.sysdev,
+ urb->transfer_buffer,
+ urb->transfer_buffer_length,
+ DMA_FROM_DEVICE);
+diff --git a/drivers/usb/host/ohci-hcd.c b/drivers/usb/host/ohci-hcd.c
+index ee9676349333..84f88fa411cd 100644
+--- a/drivers/usb/host/ohci-hcd.c
++++ b/drivers/usb/host/ohci-hcd.c
+@@ -74,6 +74,7 @@ static const char hcd_name [] = "ohci_hcd";
+
+ #define STATECHANGE_DELAY msecs_to_jiffies(300)
+ #define IO_WATCHDOG_DELAY msecs_to_jiffies(275)
++#define IO_WATCHDOG_OFF 0xffffff00
+
+ #include "ohci.h"
+ #include "pci-quirks.h"
+@@ -231,7 +232,7 @@ static int ohci_urb_enqueue (
+ }
+
+ /* Start up the I/O watchdog timer, if it's not running */
+- if (!timer_pending(&ohci->io_watchdog) &&
++ if (ohci->prev_frame_no == IO_WATCHDOG_OFF &&
+ list_empty(&ohci->eds_in_use) &&
+ !(ohci->flags & OHCI_QUIRK_QEMU)) {
+ ohci->prev_frame_no = ohci_frame_no(ohci);
+@@ -501,6 +502,7 @@ static int ohci_init (struct ohci_hcd *ohci)
+ return 0;
+
+ timer_setup(&ohci->io_watchdog, io_watchdog_func, 0);
++ ohci->prev_frame_no = IO_WATCHDOG_OFF;
+
+ ohci->hcca = dma_alloc_coherent (hcd->self.controller,
+ sizeof(*ohci->hcca), &ohci->hcca_dma, GFP_KERNEL);
+@@ -730,7 +732,7 @@ static void io_watchdog_func(struct timer_list *t)
+ u32 head;
+ struct ed *ed;
+ struct td *td, *td_start, *td_next;
+- unsigned frame_no;
++ unsigned frame_no, prev_frame_no = IO_WATCHDOG_OFF;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ohci->lock, flags);
+@@ -835,7 +837,7 @@ static void io_watchdog_func(struct timer_list *t)
+ }
+ }
+ if (!list_empty(&ohci->eds_in_use)) {
+- ohci->prev_frame_no = frame_no;
++ prev_frame_no = frame_no;
+ ohci->prev_wdh_cnt = ohci->wdh_cnt;
+ ohci->prev_donehead = ohci_readl(ohci,
+ &ohci->regs->donehead);
+@@ -845,6 +847,7 @@ static void io_watchdog_func(struct timer_list *t)
+ }
+
+ done:
++ ohci->prev_frame_no = prev_frame_no;
+ spin_unlock_irqrestore(&ohci->lock, flags);
+ }
+
+@@ -973,6 +976,7 @@ static void ohci_stop (struct usb_hcd *hcd)
+ if (quirk_nec(ohci))
+ flush_work(&ohci->nec_work);
+ del_timer_sync(&ohci->io_watchdog);
++ ohci->prev_frame_no = IO_WATCHDOG_OFF;
+
+ ohci_writel (ohci, OHCI_INTR_MIE, &ohci->regs->intrdisable);
+ ohci_usb_reset(ohci);
+diff --git a/drivers/usb/host/ohci-hub.c b/drivers/usb/host/ohci-hub.c
+index fb7aaa3b9d06..634f3c7bf774 100644
+--- a/drivers/usb/host/ohci-hub.c
++++ b/drivers/usb/host/ohci-hub.c
+@@ -311,8 +311,10 @@ static int ohci_bus_suspend (struct usb_hcd *hcd)
+ rc = ohci_rh_suspend (ohci, 0);
+ spin_unlock_irq (&ohci->lock);
+
+- if (rc == 0)
++ if (rc == 0) {
+ del_timer_sync(&ohci->io_watchdog);
++ ohci->prev_frame_no = IO_WATCHDOG_OFF;
++ }
+ return rc;
+ }
+
+diff --git a/drivers/usb/host/ohci-q.c b/drivers/usb/host/ohci-q.c
+index b2ec8c399363..4ccb85a67bb3 100644
+--- a/drivers/usb/host/ohci-q.c
++++ b/drivers/usb/host/ohci-q.c
+@@ -1019,6 +1019,8 @@ static void finish_unlinks(struct ohci_hcd *ohci)
+ * have modified this list. normally it's just prepending
+ * entries (which we'd ignore), but paranoia won't hurt.
+ */
++ *last = ed->ed_next;
++ ed->ed_next = NULL;
+ modified = 0;
+
+ /* unlink urbs as requested, but rescan the list after
+@@ -1077,21 +1079,22 @@ static void finish_unlinks(struct ohci_hcd *ohci)
+ goto rescan_this;
+
+ /*
+- * If no TDs are queued, take ED off the ed_rm_list.
++ * If no TDs are queued, ED is now idle.
+ * Otherwise, if the HC is running, reschedule.
+- * If not, leave it on the list for further dequeues.
++ * If the HC isn't running, add ED back to the
++ * start of the list for later processing.
+ */
+ if (list_empty(&ed->td_list)) {
+- *last = ed->ed_next;
+- ed->ed_next = NULL;
+ ed->state = ED_IDLE;
+ list_del(&ed->in_use_list);
+ } else if (ohci->rh_state == OHCI_RH_RUNNING) {
+- *last = ed->ed_next;
+- ed->ed_next = NULL;
+ ed_schedule(ohci, ed);
+ } else {
+- last = &ed->ed_next;
++ ed->ed_next = ohci->ed_rm_list;
++ ohci->ed_rm_list = ed;
++ /* Don't loop on the same ED */
++ if (last == &ohci->ed_rm_list)
++ last = &ed->ed_next;
+ }
+
+ if (modified)
+diff --git a/drivers/usb/misc/ldusb.c b/drivers/usb/misc/ldusb.c
+index 5c1a3b852453..98ef7fcbda58 100644
+--- a/drivers/usb/misc/ldusb.c
++++ b/drivers/usb/misc/ldusb.c
+@@ -42,6 +42,9 @@
+ #define USB_DEVICE_ID_LD_MICROCASSYTIME 0x1033 /* USB Product ID of Micro-CASSY Time (reserved) */
+ #define USB_DEVICE_ID_LD_MICROCASSYTEMPERATURE 0x1035 /* USB Product ID of Micro-CASSY Temperature */
+ #define USB_DEVICE_ID_LD_MICROCASSYPH 0x1038 /* USB Product ID of Micro-CASSY pH */
++#define USB_DEVICE_ID_LD_POWERANALYSERCASSY 0x1040 /* USB Product ID of Power Analyser CASSY */
++#define USB_DEVICE_ID_LD_CONVERTERCONTROLLERCASSY 0x1042 /* USB Product ID of Converter Controller CASSY */
++#define USB_DEVICE_ID_LD_MACHINETESTCASSY 0x1043 /* USB Product ID of Machine Test CASSY */
+ #define USB_DEVICE_ID_LD_JWM 0x1080 /* USB Product ID of Joule and Wattmeter */
+ #define USB_DEVICE_ID_LD_DMMP 0x1081 /* USB Product ID of Digital Multimeter P (reserved) */
+ #define USB_DEVICE_ID_LD_UMIP 0x1090 /* USB Product ID of UMI P */
+@@ -84,6 +87,9 @@ static const struct usb_device_id ld_usb_table[] = {
+ { USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_MICROCASSYTIME) },
+ { USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_MICROCASSYTEMPERATURE) },
+ { USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_MICROCASSYPH) },
++ { USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_POWERANALYSERCASSY) },
++ { USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_CONVERTERCONTROLLERCASSY) },
++ { USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_MACHINETESTCASSY) },
+ { USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_JWM) },
+ { USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_DMMP) },
+ { USB_DEVICE(USB_VENDOR_ID_LD, USB_DEVICE_ID_LD_UMIP) },
+diff --git a/drivers/usb/musb/musb_host.c b/drivers/usb/musb/musb_host.c
+index 2627363fb4fe..6f311212f3c5 100644
+--- a/drivers/usb/musb/musb_host.c
++++ b/drivers/usb/musb/musb_host.c
+@@ -393,13 +393,7 @@ static void musb_advance_schedule(struct musb *musb, struct urb *urb,
+ }
+ }
+
+- /*
+- * The pipe must be broken if current urb->status is set, so don't
+- * start next urb.
+- * TODO: to minimize the risk of regression, only check urb->status
+- * for RX, until we have a test case to understand the behavior of TX.
+- */
+- if ((!status || !is_in) && qh && qh->is_ready) {
++ if (qh != NULL && qh->is_ready) {
+ musb_dbg(musb, "... next ep%d %cX urb %p",
+ hw_ep->epnum, is_in ? 'R' : 'T', next_urb(qh));
+ musb_start_urb(musb, is_in, qh);
+diff --git a/drivers/usb/phy/phy-mxs-usb.c b/drivers/usb/phy/phy-mxs-usb.c
+index da031c45395a..fbec863350f6 100644
+--- a/drivers/usb/phy/phy-mxs-usb.c
++++ b/drivers/usb/phy/phy-mxs-usb.c
+@@ -602,6 +602,9 @@ static enum usb_charger_type mxs_phy_charger_detect(struct usb_phy *phy)
+ void __iomem *base = phy->io_priv;
+ enum usb_charger_type chgr_type = UNKNOWN_TYPE;
+
++ if (!regmap)
++ return UNKNOWN_TYPE;
++
+ if (mxs_charger_data_contact_detect(mxs_phy))
+ return chgr_type;
+
+diff --git a/drivers/usb/renesas_usbhs/fifo.c b/drivers/usb/renesas_usbhs/fifo.c
+index 2d24ef3076ef..b295e204a575 100644
+--- a/drivers/usb/renesas_usbhs/fifo.c
++++ b/drivers/usb/renesas_usbhs/fifo.c
+@@ -989,6 +989,10 @@ static int usbhsf_dma_prepare_pop_with_usb_dmac(struct usbhs_pkt *pkt,
+ if ((uintptr_t)pkt->buf & (USBHS_USB_DMAC_XFER_SIZE - 1))
+ goto usbhsf_pio_prepare_pop;
+
++ /* return at this time if the pipe is running */
++ if (usbhs_pipe_is_running(pipe))
++ return 0;
++
+ usbhs_pipe_config_change_bfre(pipe, 1);
+
+ ret = usbhsf_fifo_select(pipe, fifo, 0);
+@@ -1179,6 +1183,7 @@ static int usbhsf_dma_pop_done_with_usb_dmac(struct usbhs_pkt *pkt,
+ usbhsf_fifo_clear(pipe, fifo);
+ pkt->actual = usbhs_dma_calc_received_size(pkt, chan, rcv_len);
+
++ usbhs_pipe_running(pipe, 0);
+ usbhsf_dma_stop(pipe, fifo);
+ usbhsf_dma_unmap(pkt);
+ usbhsf_fifo_unselect(pipe, pipe->fifo);
+diff --git a/drivers/xen/tmem.c b/drivers/xen/tmem.c
+index bf13d1ec51f3..04e7b3b29bac 100644
+--- a/drivers/xen/tmem.c
++++ b/drivers/xen/tmem.c
+@@ -284,6 +284,10 @@ static int tmem_frontswap_store(unsigned type, pgoff_t offset,
+ int pool = tmem_frontswap_poolid;
+ int ret;
+
++ /* THP isn't supported */
++ if (PageTransHuge(page))
++ return -1;
++
+ if (pool < 0)
+ return -1;
+ if (ind64 != ind)
+diff --git a/include/drm/drm_atomic.h b/include/drm/drm_atomic.h
+index 5afd6e364fb6..c63b0b48e884 100644
+--- a/include/drm/drm_atomic.h
++++ b/include/drm/drm_atomic.h
+@@ -134,6 +134,15 @@ struct drm_crtc_commit {
+ * &drm_pending_vblank_event pointer to clean up private events.
+ */
+ struct drm_pending_vblank_event *event;
++
++ /**
++ * @abort_completion:
++ *
++ * A flag that's set after drm_atomic_helper_setup_commit takes a second
++ * reference for the completion of $drm_crtc_state.event. It's used by
++ * the free code to remove the second reference if commit fails.
++ */
++ bool abort_completion;
+ };
+
+ struct __drm_planes_state {
+diff --git a/include/linux/kconfig.h b/include/linux/kconfig.h
+index fec5076eda91..dcde9471897d 100644
+--- a/include/linux/kconfig.h
++++ b/include/linux/kconfig.h
+@@ -4,6 +4,12 @@
+
+ #include <generated/autoconf.h>
+
++#ifdef CONFIG_CPU_BIG_ENDIAN
++#define __BIG_ENDIAN 4321
++#else
++#define __LITTLE_ENDIAN 1234
++#endif
++
+ #define __ARG_PLACEHOLDER_1 0,
+ #define __take_second_arg(__ignored, val, ...) val
+
+@@ -64,4 +70,7 @@
+ */
+ #define IS_ENABLED(option) __or(IS_BUILTIN(option), IS_MODULE(option))
+
++/* Make sure we always have all types and struct attributes defined. */
++#include <linux/compiler_types.h>
++
+ #endif /* __LINUX_KCONFIG_H */
+diff --git a/include/uapi/linux/if_ether.h b/include/uapi/linux/if_ether.h
+index 144de4d2f385..153c9c2eaaa7 100644
+--- a/include/uapi/linux/if_ether.h
++++ b/include/uapi/linux/if_ether.h
+@@ -23,7 +23,6 @@
+ #define _UAPI_LINUX_IF_ETHER_H
+
+ #include <linux/types.h>
+-#include <linux/libc-compat.h>
+
+ /*
+ * IEEE 802.3 Ethernet magic constants. The frame sizes omit the preamble
+@@ -150,6 +149,11 @@
+ * This is an Ethernet frame header.
+ */
+
++/* allow libcs like musl to deactivate this, glibc does not implement this. */
++#ifndef __UAPI_DEF_ETHHDR
++#define __UAPI_DEF_ETHHDR 1
++#endif
++
+ #if __UAPI_DEF_ETHHDR
+ struct ethhdr {
+ unsigned char h_dest[ETH_ALEN]; /* destination eth addr */
+diff --git a/include/uapi/linux/libc-compat.h b/include/uapi/linux/libc-compat.h
+index fc29efaa918c..8254c937c9f4 100644
+--- a/include/uapi/linux/libc-compat.h
++++ b/include/uapi/linux/libc-compat.h
+@@ -264,10 +264,4 @@
+
+ #endif /* __GLIBC__ */
+
+-/* Definitions for if_ether.h */
+-/* allow libcs like musl to deactivate this, glibc does not implement this. */
+-#ifndef __UAPI_DEF_ETHHDR
+-#define __UAPI_DEF_ETHHDR 1
+-#endif
+-
+ #endif /* _UAPI_LIBC_COMPAT_H */
+diff --git a/kernel/irq/matrix.c b/kernel/irq/matrix.c
+index 5187dfe809ac..4c5770407031 100644
+--- a/kernel/irq/matrix.c
++++ b/kernel/irq/matrix.c
+@@ -16,6 +16,7 @@ struct cpumap {
+ unsigned int available;
+ unsigned int allocated;
+ unsigned int managed;
++ bool initialized;
+ bool online;
+ unsigned long alloc_map[IRQ_MATRIX_SIZE];
+ unsigned long managed_map[IRQ_MATRIX_SIZE];
+@@ -81,9 +82,11 @@ void irq_matrix_online(struct irq_matrix *m)
+
+ BUG_ON(cm->online);
+
+- bitmap_zero(cm->alloc_map, m->matrix_bits);
+- cm->available = m->alloc_size - (cm->managed + m->systembits_inalloc);
+- cm->allocated = 0;
++ if (!cm->initialized) {
++ cm->available = m->alloc_size;
++ cm->available -= cm->managed + m->systembits_inalloc;
++ cm->initialized = true;
++ }
+ m->global_available += cm->available;
+ cm->online = true;
+ m->online_maps++;
+@@ -370,14 +373,16 @@ void irq_matrix_free(struct irq_matrix *m, unsigned int cpu,
+ if (WARN_ON_ONCE(bit < m->alloc_start || bit >= m->alloc_end))
+ return;
+
+- if (cm->online) {
+- clear_bit(bit, cm->alloc_map);
+- cm->allocated--;
++ clear_bit(bit, cm->alloc_map);
++ cm->allocated--;
++
++ if (cm->online)
+ m->total_allocated--;
+- if (!managed) {
+- cm->available++;
++
++ if (!managed) {
++ cm->available++;
++ if (cm->online)
+ m->global_available++;
+- }
+ }
+ trace_irq_matrix_free(bit, cpu, m, cm);
+ }
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index d23818c5465a..9f927497f2f5 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -46,6 +46,7 @@
+ #include <linux/stop_machine.h>
+ #include <linux/sort.h>
+ #include <linux/pfn.h>
++#include <xen/xen.h>
+ #include <linux/backing-dev.h>
+ #include <linux/fault-inject.h>
+ #include <linux/page-isolation.h>
+@@ -347,6 +348,9 @@ static inline bool update_defer_init(pg_data_t *pgdat,
+ /* Always populate low zones for address-contrained allocations */
+ if (zone_end < pgdat_end_pfn(pgdat))
+ return true;
++ /* Xen PV domains need page structures early */
++ if (xen_pv_domain())
++ return true;
+ (*nr_initialised)++;
+ if ((*nr_initialised > pgdat->static_init_pgcnt) &&
+ (pfn & (PAGES_PER_SECTION - 1)) == 0) {
+diff --git a/mm/zswap.c b/mm/zswap.c
+index d39581a076c3..597008a44f70 100644
+--- a/mm/zswap.c
++++ b/mm/zswap.c
+@@ -970,6 +970,12 @@ static int zswap_frontswap_store(unsigned type, pgoff_t offset,
+ u8 *src, *dst;
+ struct zswap_header *zhdr;
+
++ /* THP isn't supported */
++ if (PageTransHuge(page)) {
++ ret = -EINVAL;
++ goto reject;
++ }
++
+ if (!zswap_enabled || !tree) {
+ ret = -ENODEV;
+ goto reject;
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index c7df4969f80a..f56aab54e0c8 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -1563,10 +1563,7 @@ int ip_getsockopt(struct sock *sk, int level,
+ if (get_user(len, optlen))
+ return -EFAULT;
+
+- lock_sock(sk);
+- err = nf_getsockopt(sk, PF_INET, optname, optval,
+- &len);
+- release_sock(sk);
++ err = nf_getsockopt(sk, PF_INET, optname, optval, &len);
+ if (err >= 0)
+ err = put_user(len, optlen);
+ return err;
+@@ -1598,9 +1595,7 @@ int compat_ip_getsockopt(struct sock *sk, int level, int optname,
+ if (get_user(len, optlen))
+ return -EFAULT;
+
+- lock_sock(sk);
+ err = compat_nf_getsockopt(sk, PF_INET, optname, optval, &len);
+- release_sock(sk);
+ if (err >= 0)
+ err = put_user(len, optlen);
+ return err;
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index d78d41fc4b1a..24535169663d 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -1367,10 +1367,7 @@ int ipv6_getsockopt(struct sock *sk, int level, int optname,
+ if (get_user(len, optlen))
+ return -EFAULT;
+
+- lock_sock(sk);
+- err = nf_getsockopt(sk, PF_INET6, optname, optval,
+- &len);
+- release_sock(sk);
++ err = nf_getsockopt(sk, PF_INET6, optname, optval, &len);
+ if (err >= 0)
+ err = put_user(len, optlen);
+ }
+@@ -1409,10 +1406,7 @@ int compat_ipv6_getsockopt(struct sock *sk, int level, int optname,
+ if (get_user(len, optlen))
+ return -EFAULT;
+
+- lock_sock(sk);
+- err = compat_nf_getsockopt(sk, PF_INET6,
+- optname, optval, &len);
+- release_sock(sk);
++ err = compat_nf_getsockopt(sk, PF_INET6, optname, optval, &len);
+ if (err >= 0)
+ err = put_user(len, optlen);
+ }
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index fb15d3b97cb2..84f757c5d91a 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -2863,7 +2863,7 @@ cfg80211_beacon_dup(struct cfg80211_beacon_data *beacon)
+ }
+ if (beacon->probe_resp_len) {
+ new_beacon->probe_resp_len = beacon->probe_resp_len;
+- beacon->probe_resp = pos;
++ new_beacon->probe_resp = pos;
+ memcpy(pos, beacon->probe_resp, beacon->probe_resp_len);
+ pos += beacon->probe_resp_len;
+ }
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-02-28 15:16 Alice Ferrazzi
0 siblings, 0 replies; 27+ messages in thread
From: Alice Ferrazzi @ 2018-02-28 15:16 UTC (permalink / raw
To: gentoo-commits
commit: c416f80615aaa67080d64834d2281d57607a4445
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 28 15:16:38 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Feb 28 15:16:38 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c416f806
Removed upstreamed patch.
0000_README | 4 -
...ocate_buffer_on_heap_rather_than_globally.patch | 489 ---------------------
2 files changed, 493 deletions(-)
diff --git a/0000_README b/0000_README
index 2454462..a7bb4af 100644
--- a/0000_README
+++ b/0000_README
@@ -99,10 +99,6 @@ Patch: 2900_dev-root-proc-mount-fix.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=438380
Desc: Ensure that /dev/root doesn't appear in /proc/mounts when bootint without an initramfs.
-Patch: 2901_allocate_buffer_on_heap_rather_than_globally.patch
-From: https://patchwork.kernel.org/patch/10194287/
-Desc: Patchwork [v2] platform/x86: dell-laptop: Allocate buffer on heap rather than globally.
-
Patch: 4200_fbcondecor.patch
From: http://www.mepiscommunity.org/fbcondecor
Desc: Bootsplash ported by Conrad Kostecki. (Bug #637434)
diff --git a/2901_allocate_buffer_on_heap_rather_than_globally.patch b/2901_allocate_buffer_on_heap_rather_than_globally.patch
deleted file mode 100644
index dd2f7ad..0000000
--- a/2901_allocate_buffer_on_heap_rather_than_globally.patch
+++ /dev/null
@@ -1,489 +0,0 @@
-From patchwork Wed Jan 31 17:47:35 2018
-Content-Type: text/plain; charset="utf-8"
-MIME-Version: 1.0
-Content-Transfer-Encoding: 8bit
-Subject: [v2] platform/x86: dell-laptop: Allocate buffer on heap rather than
- globally
-From: Mario Limonciello <mario.limonciello@dell.com>
-X-Patchwork-Id: 10194287
-Message-Id: <1517420855-19374-1-git-send-email-mario.limonciello@dell.com>
-To: dvhart@infradead.org, Andy Shevchenko <andy.shevchenko@gmail.com>
-Cc: pali.rohar@gmail.com, LKML <linux-kernel@vger.kernel.org>,
- platform-driver-x86@vger.kernel.org,
- Mario Limonciello <mario.limonciello@dell.com>
-Date: Wed, 31 Jan 2018 11:47:35 -0600
-
-There is no longer a need for the buffer to be defined in
-first 4GB physical address space.
-
-Furthermore there may be race conditions with multiple different functions
-working on a module wide buffer causing incorrect results.
-
-Fixes: 549b4930f057658dc50d8010e66219233119a4d8
-Suggested-by: Pali Rohar <pali.rohar@gmail.com>
-Signed-off-by: Mario Limonciello <mario.limonciello@dell.com>
-Reviewed-by: Pali Rohár <pali.rohar@gmail.com>
----
- drivers/platform/x86/dell-laptop.c | 188 ++++++++++++++++++++-----------------
- 1 file changed, 103 insertions(+), 85 deletions(-)
-
-diff --git a/drivers/platform/x86/dell-laptop.c b/drivers/platform/x86/dell-laptop.c
-index fc2dfc8..a7b1419 100644
---- a/drivers/platform/x86/dell-laptop.c
-+++ b/drivers/platform/x86/dell-laptop.c
-@@ -78,7 +78,6 @@ static struct platform_driver platform_driver = {
- }
- };
-
--static struct calling_interface_buffer *buffer;
- static struct platform_device *platform_device;
- static struct backlight_device *dell_backlight_device;
- static struct rfkill *wifi_rfkill;
-@@ -322,7 +321,8 @@ static const struct dmi_system_id dell_quirks[] __initconst = {
- { }
- };
-
--static void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
-+static void dell_fill_request(struct calling_interface_buffer *buffer,
-+ u32 arg0, u32 arg1, u32 arg2, u32 arg3)
- {
- memset(buffer, 0, sizeof(struct calling_interface_buffer));
- buffer->input[0] = arg0;
-@@ -331,7 +331,8 @@ static void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
- buffer->input[3] = arg3;
- }
-
--static int dell_send_request(u16 class, u16 select)
-+static int dell_send_request(struct calling_interface_buffer *buffer,
-+ u16 class, u16 select)
- {
- int ret;
-
-@@ -468,21 +469,22 @@ static int dell_rfkill_set(void *data, bool blocked)
- int disable = blocked ? 1 : 0;
- unsigned long radio = (unsigned long)data;
- int hwswitch_bit = (unsigned long)data - 1;
-+ struct calling_interface_buffer buffer;
- int hwswitch;
- int status;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- if (ret)
- return ret;
-- status = buffer->output[1];
-+ status = buffer.output[1];
-
-- dell_set_arguments(0x2, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0x2, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- if (ret)
- return ret;
-- hwswitch = buffer->output[1];
-+ hwswitch = buffer.output[1];
-
- /* If the hardware switch controls this radio, and the hardware
- switch is disabled, always disable the radio */
-@@ -490,8 +492,8 @@ static int dell_rfkill_set(void *data, bool blocked)
- (status & BIT(0)) && !(status & BIT(16)))
- disable = 1;
-
-- dell_set_arguments(1 | (radio<<8) | (disable << 16), 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 1 | (radio<<8) | (disable << 16), 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- return ret;
- }
-
-@@ -500,9 +502,11 @@ static void dell_rfkill_update_sw_state(struct rfkill *rfkill, int radio,
- {
- if (status & BIT(0)) {
- /* Has hw-switch, sync sw_state to BIOS */
-+ struct calling_interface_buffer buffer;
- int block = rfkill_blocked(rfkill);
-- dell_set_arguments(1 | (radio << 8) | (block << 16), 0, 0, 0);
-- dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer,
-+ 1 | (radio << 8) | (block << 16), 0, 0, 0);
-+ dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- } else {
- /* No hw-switch, sync BIOS state to sw_state */
- rfkill_set_sw_state(rfkill, !!(status & BIT(radio + 16)));
-@@ -519,21 +523,22 @@ static void dell_rfkill_update_hw_state(struct rfkill *rfkill, int radio,
- static void dell_rfkill_query(struct rfkill *rfkill, void *data)
- {
- int radio = ((unsigned long)data & 0xF);
-+ struct calling_interface_buffer buffer;
- int hwswitch;
- int status;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-- status = buffer->output[1];
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-+ status = buffer.output[1];
-
- if (ret != 0 || !(status & BIT(0))) {
- return;
- }
-
-- dell_set_arguments(0, 0x2, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-- hwswitch = buffer->output[1];
-+ dell_fill_request(&buffer, 0, 0x2, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-+ hwswitch = buffer.output[1];
-
- if (ret != 0)
- return;
-@@ -550,22 +555,23 @@ static struct dentry *dell_laptop_dir;
-
- static int dell_debugfs_show(struct seq_file *s, void *data)
- {
-+ struct calling_interface_buffer buffer;
- int hwswitch_state;
- int hwswitch_ret;
- int status;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- if (ret)
- return ret;
-- status = buffer->output[1];
-+ status = buffer.output[1];
-
-- dell_set_arguments(0, 0x2, 0, 0);
-- hwswitch_ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0, 0x2, 0, 0);
-+ hwswitch_ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- if (hwswitch_ret)
- return hwswitch_ret;
-- hwswitch_state = buffer->output[1];
-+ hwswitch_state = buffer.output[1];
-
- seq_printf(s, "return:\t%d\n", ret);
- seq_printf(s, "status:\t0x%X\n", status);
-@@ -646,22 +652,23 @@ static const struct file_operations dell_debugfs_fops = {
-
- static void dell_update_rfkill(struct work_struct *ignored)
- {
-+ struct calling_interface_buffer buffer;
- int hwswitch = 0;
- int status;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-- status = buffer->output[1];
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-+ status = buffer.output[1];
-
- if (ret != 0)
- return;
-
-- dell_set_arguments(0, 0x2, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0, 0x2, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-
- if (ret == 0 && (status & BIT(0)))
-- hwswitch = buffer->output[1];
-+ hwswitch = buffer.output[1];
-
- if (wifi_rfkill) {
- dell_rfkill_update_hw_state(wifi_rfkill, 1, status, hwswitch);
-@@ -719,6 +726,7 @@ static struct notifier_block dell_laptop_rbtn_notifier = {
-
- static int __init dell_setup_rfkill(void)
- {
-+ struct calling_interface_buffer buffer;
- int status, ret, whitelisted;
- const char *product;
-
-@@ -734,9 +742,9 @@ static int __init dell_setup_rfkill(void)
- if (!force_rfkill && !whitelisted)
- return 0;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-- status = buffer->output[1];
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-+ status = buffer.output[1];
-
- /* dell wireless info smbios call is not supported */
- if (ret != 0)
-@@ -889,6 +897,7 @@ static void dell_cleanup_rfkill(void)
-
- static int dell_send_intensity(struct backlight_device *bd)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
- int ret;
-
-@@ -896,17 +905,21 @@ static int dell_send_intensity(struct backlight_device *bd)
- if (!token)
- return -ENODEV;
-
-- dell_set_arguments(token->location, bd->props.brightness, 0, 0);
-+ dell_fill_request(&buffer,
-+ token->location, bd->props.brightness, 0, 0);
- if (power_supply_is_system_supplied() > 0)
-- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
- else
-- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
-
- return ret;
- }
-
- static int dell_get_intensity(struct backlight_device *bd)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
- int ret;
-
-@@ -914,14 +927,17 @@ static int dell_get_intensity(struct backlight_device *bd)
- if (!token)
- return -ENODEV;
-
-- dell_set_arguments(token->location, 0, 0, 0);
-+ dell_fill_request(&buffer, token->location, 0, 0, 0);
- if (power_supply_is_system_supplied() > 0)
-- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
- else
-- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
-
- if (ret == 0)
-- ret = buffer->output[1];
-+ ret = buffer.output[1];
-+
- return ret;
- }
-
-@@ -1186,31 +1202,33 @@ static enum led_brightness kbd_led_level;
-
- static int kbd_get_info(struct kbd_info *info)
- {
-+ struct calling_interface_buffer buffer;
- u8 units;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer,
-+ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
- if (ret)
- return ret;
-
-- info->modes = buffer->output[1] & 0xFFFF;
-- info->type = (buffer->output[1] >> 24) & 0xFF;
-- info->triggers = buffer->output[2] & 0xFF;
-- units = (buffer->output[2] >> 8) & 0xFF;
-- info->levels = (buffer->output[2] >> 16) & 0xFF;
-+ info->modes = buffer.output[1] & 0xFFFF;
-+ info->type = (buffer.output[1] >> 24) & 0xFF;
-+ info->triggers = buffer.output[2] & 0xFF;
-+ units = (buffer.output[2] >> 8) & 0xFF;
-+ info->levels = (buffer.output[2] >> 16) & 0xFF;
-
- if (quirks && quirks->kbd_led_levels_off_1 && info->levels)
- info->levels--;
-
- if (units & BIT(0))
-- info->seconds = (buffer->output[3] >> 0) & 0xFF;
-+ info->seconds = (buffer.output[3] >> 0) & 0xFF;
- if (units & BIT(1))
-- info->minutes = (buffer->output[3] >> 8) & 0xFF;
-+ info->minutes = (buffer.output[3] >> 8) & 0xFF;
- if (units & BIT(2))
-- info->hours = (buffer->output[3] >> 16) & 0xFF;
-+ info->hours = (buffer.output[3] >> 16) & 0xFF;
- if (units & BIT(3))
-- info->days = (buffer->output[3] >> 24) & 0xFF;
-+ info->days = (buffer.output[3] >> 24) & 0xFF;
-
- return ret;
- }
-@@ -1270,31 +1288,34 @@ static int kbd_set_level(struct kbd_state *state, u8 level)
-
- static int kbd_get_state(struct kbd_state *state)
- {
-+ struct calling_interface_buffer buffer;
- int ret;
-
-- dell_set_arguments(0x1, 0, 0, 0);
-- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer,
-+ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
- if (ret)
- return ret;
-
-- state->mode_bit = ffs(buffer->output[1] & 0xFFFF);
-+ state->mode_bit = ffs(buffer.output[1] & 0xFFFF);
- if (state->mode_bit != 0)
- state->mode_bit--;
-
-- state->triggers = (buffer->output[1] >> 16) & 0xFF;
-- state->timeout_value = (buffer->output[1] >> 24) & 0x3F;
-- state->timeout_unit = (buffer->output[1] >> 30) & 0x3;
-- state->als_setting = buffer->output[2] & 0xFF;
-- state->als_value = (buffer->output[2] >> 8) & 0xFF;
-- state->level = (buffer->output[2] >> 16) & 0xFF;
-- state->timeout_value_ac = (buffer->output[2] >> 24) & 0x3F;
-- state->timeout_unit_ac = (buffer->output[2] >> 30) & 0x3;
-+ state->triggers = (buffer.output[1] >> 16) & 0xFF;
-+ state->timeout_value = (buffer.output[1] >> 24) & 0x3F;
-+ state->timeout_unit = (buffer.output[1] >> 30) & 0x3;
-+ state->als_setting = buffer.output[2] & 0xFF;
-+ state->als_value = (buffer.output[2] >> 8) & 0xFF;
-+ state->level = (buffer.output[2] >> 16) & 0xFF;
-+ state->timeout_value_ac = (buffer.output[2] >> 24) & 0x3F;
-+ state->timeout_unit_ac = (buffer.output[2] >> 30) & 0x3;
-
- return ret;
- }
-
- static int kbd_set_state(struct kbd_state *state)
- {
-+ struct calling_interface_buffer buffer;
- int ret;
- u32 input1;
- u32 input2;
-@@ -1307,8 +1328,9 @@ static int kbd_set_state(struct kbd_state *state)
- input2 |= (state->level & 0xFF) << 16;
- input2 |= (state->timeout_value_ac & 0x3F) << 24;
- input2 |= (state->timeout_unit_ac & 0x3) << 30;
-- dell_set_arguments(0x2, input1, input2, 0);
-- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
-+ dell_fill_request(&buffer, 0x2, input1, input2, 0);
-+ ret = dell_send_request(&buffer,
-+ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
-
- return ret;
- }
-@@ -1335,6 +1357,7 @@ static int kbd_set_state_safe(struct kbd_state *state, struct kbd_state *old)
-
- static int kbd_set_token_bit(u8 bit)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
- int ret;
-
-@@ -1345,14 +1368,15 @@ static int kbd_set_token_bit(u8 bit)
- if (!token)
- return -EINVAL;
-
-- dell_set_arguments(token->location, token->value, 0, 0);
-- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
-+ dell_fill_request(&buffer, token->location, token->value, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
-
- return ret;
- }
-
- static int kbd_get_token_bit(u8 bit)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
- int ret;
- int val;
-@@ -1364,9 +1388,9 @@ static int kbd_get_token_bit(u8 bit)
- if (!token)
- return -EINVAL;
-
-- dell_set_arguments(token->location, 0, 0, 0);
-- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_STD);
-- val = buffer->output[1];
-+ dell_fill_request(&buffer, token->location, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_TOKEN_READ, SELECT_TOKEN_STD);
-+ val = buffer.output[1];
-
- if (ret)
- return ret;
-@@ -2102,6 +2126,7 @@ static struct notifier_block dell_laptop_notifier = {
-
- int dell_micmute_led_set(int state)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
-
- if (state == 0)
-@@ -2114,8 +2139,8 @@ int dell_micmute_led_set(int state)
- if (!token)
- return -ENODEV;
-
-- dell_set_arguments(token->location, token->value, 0, 0);
-- dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
-+ dell_fill_request(&buffer, token->location, token->value, 0, 0);
-+ dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
-
- return state;
- }
-@@ -2146,13 +2171,6 @@ static int __init dell_init(void)
- if (ret)
- goto fail_platform_device2;
-
-- buffer = kzalloc(sizeof(struct calling_interface_buffer), GFP_KERNEL);
-- if (!buffer) {
-- ret = -ENOMEM;
-- goto fail_buffer;
-- }
--
--
- ret = dell_setup_rfkill();
-
- if (ret) {
-@@ -2177,10 +2195,13 @@ static int __init dell_init(void)
-
- token = dell_smbios_find_token(BRIGHTNESS_TOKEN);
- if (token) {
-- dell_set_arguments(token->location, 0, 0, 0);
-- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
-+ struct calling_interface_buffer buffer;
-+
-+ dell_fill_request(&buffer, token->location, 0, 0, 0);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
- if (ret)
-- max_intensity = buffer->output[3];
-+ max_intensity = buffer.output[3];
- }
-
- if (max_intensity) {
-@@ -2214,8 +2235,6 @@ static int __init dell_init(void)
- fail_get_brightness:
- backlight_device_unregister(dell_backlight_device);
- fail_backlight:
-- kfree(buffer);
--fail_buffer:
- dell_cleanup_rfkill();
- fail_rfkill:
- platform_device_del(platform_device);
-@@ -2235,7 +2254,6 @@ static void __exit dell_exit(void)
- touchpad_led_exit();
- kbd_led_exit();
- backlight_device_unregister(dell_backlight_device);
-- kfree(buffer);
- dell_cleanup_rfkill();
- if (platform_device) {
- platform_device_unregister(platform_device);
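The dell-laptop change in the diff above replaces a single module-global `buffer` with a `struct calling_interface_buffer` that each caller allocates on its own stack and passes explicitly to `dell_fill_request()`/`dell_send_request()`. A minimal userspace sketch of the same refactor pattern follows; the `calling_buffer`, `fill_request`, and `send_request` names are simplified stand-ins for illustration, not the kernel's actual API:

```c
#include <stdint.h>
#include <string.h>

/* Miniature of the refactor: the request buffer is no longer a file-scope
 * global shared by every caller; each caller owns one on its stack. */
struct calling_buffer {
    uint32_t input[4];
    uint32_t output[4];
};

/* Before the patch this helper wrote into a global; now the caller's
 * buffer is an explicit parameter, so concurrent callers cannot race
 * on shared state. */
static void fill_request(struct calling_buffer *buf,
                         uint32_t a0, uint32_t a1, uint32_t a2, uint32_t a3)
{
    memset(buf, 0, sizeof(*buf));
    buf->input[0] = a0;
    buf->input[1] = a1;
    buf->input[2] = a2;
    buf->input[3] = a3;
}

/* Stand-in for the SMBIOS call: pretend firmware echoes input[0] + 1
 * into output[1], mirroring how the driver reads results back. */
static int send_request(struct calling_buffer *buf)
{
    buf->output[1] = buf->input[0] + 1;
    return 0;
}

int demo(void)
{
    struct calling_buffer buffer;          /* stack-allocated, not global */

    fill_request(&buffer, 0x2, 0, 0, 0);
    if (send_request(&buffer))
        return -1;
    return (int)buffer.output[1];          /* read from our own copy */
}
```

This also removes the `kzalloc()`/`kfree()` pair and the `fail_buffer` error label from init/exit, since there is no longer a long-lived allocation to manage.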
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-02-28 19:53 Alice Ferrazzi
From: Alice Ferrazzi @ 2018-02-28 19:53 UTC
To: gentoo-commits
commit: 0933e08b7a49a8d4cd1a5bfbc3fba0411031c225
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 28 19:53:03 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Feb 28 19:53:03 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0933e08b
dell-laptop: Allocate buffer on heap rather than globally
0000_README | 4 +
1700_ia64_fix_ptrace.patch | 543 +++++++++++++++++----
...ocate_buffer_on_heap_rather_than_globally.patch | 458 +++++++++++++++++
3 files changed, 920 insertions(+), 85 deletions(-)
diff --git a/0000_README b/0000_README
index a7bb4af..994c735 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch: 2900_dev-root-proc-mount-fix.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=438380
Desc: Ensure that /dev/root doesn't appear in /proc/mounts when bootint without an initramfs.
+Patch: 2901_allocate_buffer_on_heap_rather_than_globally.patch
+From: https://bugs.gentoo.org/646438
+Desc: Patchwork [v2] platform/x86: dell-laptop: Allocate buffer on heap rather than globally Bug #646438
+
Patch: 4200_fbcondecor.patch
From: http://www.mepiscommunity.org/fbcondecor
Desc: Bootsplash ported by Conrad Kostecki. (Bug #637434)
diff --git a/1700_ia64_fix_ptrace.patch b/1700_ia64_fix_ptrace.patch
index 6173b05..ad48a1a 100644
--- a/1700_ia64_fix_ptrace.patch
+++ b/1700_ia64_fix_ptrace.patch
@@ -1,87 +1,460 @@
-From patchwork Fri Feb 2 22:12:24 2018
-Content-Type: text/plain; charset="utf-8"
-MIME-Version: 1.0
-Content-Transfer-Encoding: 8bit
-Subject: ia64: fix ptrace(PTRACE_GETREGS) (unbreaks strace, gdb)
-From: Sergei Trofimovich <slyfox@gentoo.org>
-X-Patchwork-Id: 10198159
-Message-Id: <20180202221224.16597-1-slyfox@gentoo.org>
-To: Tony Luck <tony.luck@intel.com>, Fenghua Yu <fenghua.yu@intel.com>,
- linux-ia64@vger.kernel.org, linux-kernel@vger.kernel.org
-Cc: Sergei Trofimovich <slyfox@gentoo.org>
-Date: Fri, 2 Feb 2018 22:12:24 +0000
-
-The strace breakage looks like that:
-./strace: get_regs: get_regs_error: Input/output error
-
-It happens because ia64 needs to load unwind tables
-to read certain registers. Unwind tables fail to load
-due to GCC quirk on the following code:
-
- extern char __end_unwind[];
- const struct unw_table_entry *end = (struct unw_table_entry *)table_end;
- table->end = segment_base + end[-1].end_offset;
-
-GCC does not generate correct code for this single memory
-reference after constant propagation (see https://gcc.gnu.org/PR84184).
-Two triggers are required for bad code generation:
-- '__end_unwind' has alignment lower (char), than
- 'struct unw_table_entry' (8).
-- symbol offset is negative.
-
-This commit workarounds it by fixing alignment of '__end_unwind'.
-While at it use hidden symbols to generate shorter gp-relative
-relocations.
-
-CC: Tony Luck <tony.luck@intel.com>
-CC: Fenghua Yu <fenghua.yu@intel.com>
-CC: linux-ia64@vger.kernel.org
-CC: linux-kernel@vger.kernel.org
-Bug: https://github.com/strace/strace/issues/33
-Bug: https://gcc.gnu.org/PR84184
-Reported-by: Émeric Maschino <emeric.maschino@gmail.com>
-Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
-Tested-by: stanton_arch@mail.com
----
- arch/ia64/include/asm/sections.h | 1 -
- arch/ia64/kernel/unwind.c | 15 ++++++++++++++-
- 2 files changed, 14 insertions(+), 2 deletions(-)
-
-diff --git a/arch/ia64/include/asm/sections.h b/arch/ia64/include/asm/sections.h
-index f3481408594e..0fc4f1757a44 100644
---- a/arch/ia64/include/asm/sections.h
-+++ b/arch/ia64/include/asm/sections.h
-@@ -24,7 +24,6 @@ extern char __start_gate_mckinley_e9_patchlist[], __end_gate_mckinley_e9_patchli
- extern char __start_gate_vtop_patchlist[], __end_gate_vtop_patchlist[];
- extern char __start_gate_fsyscall_patchlist[], __end_gate_fsyscall_patchlist[];
- extern char __start_gate_brl_fsys_bubble_down_patchlist[], __end_gate_brl_fsys_bubble_down_patchlist[];
--extern char __start_unwind[], __end_unwind[];
- extern char __start_ivt_text[], __end_ivt_text[];
-
- #undef dereference_function_descriptor
-diff --git a/arch/ia64/kernel/unwind.c b/arch/ia64/kernel/unwind.c
-index e04efa088902..025ba6700790 100644
---- a/arch/ia64/kernel/unwind.c
-+++ b/arch/ia64/kernel/unwind.c
-@@ -2243,7 +2243,20 @@ __initcall(create_gate_table);
- void __init
- unw_init (void)
+diff --git a/drivers/platform/x86/dell-laptop.c b/drivers/platform/x86/dell-laptop.c
+index fc2dfc8..a7b1419 100644
+--- a/drivers/platform/x86/dell-laptop.c
++++ b/drivers/platform/x86/dell-laptop.c
+@@ -78,7 +78,6 @@ static struct platform_driver platform_driver = {
+ }
+ };
+
+-static struct calling_interface_buffer *buffer;
+ static struct platform_device *platform_device;
+ static struct backlight_device *dell_backlight_device;
+ static struct rfkill *wifi_rfkill;
+@@ -322,7 +321,8 @@ static const struct dmi_system_id dell_quirks[] __initconst = {
+ { }
+ };
+
+-static void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
++static void dell_fill_request(struct calling_interface_buffer *buffer,
++ u32 arg0, u32 arg1, u32 arg2, u32 arg3)
+ {
+ memset(buffer, 0, sizeof(struct calling_interface_buffer));
+ buffer->input[0] = arg0;
+@@ -331,7 +331,8 @@ static void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
+ buffer->input[3] = arg3;
+ }
+
+-static int dell_send_request(u16 class, u16 select)
++static int dell_send_request(struct calling_interface_buffer *buffer,
++ u16 class, u16 select)
+ {
+ int ret;
+
+@@ -468,21 +469,22 @@ static int dell_rfkill_set(void *data, bool blocked)
+ int disable = blocked ? 1 : 0;
+ unsigned long radio = (unsigned long)data;
+ int hwswitch_bit = (unsigned long)data - 1;
++ struct calling_interface_buffer buffer;
+ int hwswitch;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (ret)
+ return ret;
+- status = buffer->output[1];
++ status = buffer.output[1];
+
+- dell_set_arguments(0x2, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0x2, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (ret)
+ return ret;
+- hwswitch = buffer->output[1];
++ hwswitch = buffer.output[1];
+
+ /* If the hardware switch controls this radio, and the hardware
+ switch is disabled, always disable the radio */
+@@ -490,8 +492,8 @@ static int dell_rfkill_set(void *data, bool blocked)
+ (status & BIT(0)) && !(status & BIT(16)))
+ disable = 1;
+
+- dell_set_arguments(1 | (radio<<8) | (disable << 16), 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 1 | (radio<<8) | (disable << 16), 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ return ret;
+ }
+
+@@ -500,9 +502,11 @@ static void dell_rfkill_update_sw_state(struct rfkill *rfkill, int radio,
+ {
+ if (status & BIT(0)) {
+ /* Has hw-switch, sync sw_state to BIOS */
++ struct calling_interface_buffer buffer;
+ int block = rfkill_blocked(rfkill);
+- dell_set_arguments(1 | (radio << 8) | (block << 16), 0, 0, 0);
+- dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer,
++ 1 | (radio << 8) | (block << 16), 0, 0, 0);
++ dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ } else {
+ /* No hw-switch, sync BIOS state to sw_state */
+ rfkill_set_sw_state(rfkill, !!(status & BIT(radio + 16)));
+@@ -519,21 +523,22 @@ static void dell_rfkill_update_hw_state(struct rfkill *rfkill, int radio,
+ static void dell_rfkill_query(struct rfkill *rfkill, void *data)
+ {
+ int radio = ((unsigned long)data & 0xF);
++ struct calling_interface_buffer buffer;
+ int hwswitch;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- status = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ status = buffer.output[1];
+
+ if (ret != 0 || !(status & BIT(0))) {
+ return;
+ }
+
+- dell_set_arguments(0, 0x2, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- hwswitch = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0x2, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ hwswitch = buffer.output[1];
+
+ if (ret != 0)
+ return;
+@@ -550,22 +555,23 @@ static struct dentry *dell_laptop_dir;
+
+ static int dell_debugfs_show(struct seq_file *s, void *data)
{
-- extern char __gp[];
-+ #define __ia64_hidden __attribute__((visibility("hidden")))
-+ /*
-+ * We use hidden symbols to generate more efficient code using
-+ * gp-relative addressing.
-+ */
-+ extern char __gp[] __ia64_hidden;
-+ /*
-+ * Unwind tables need to have proper alignment as init_unwind_table()
-+ * uses negative offsets against '__end_unwind'.
-+ * See https://gcc.gnu.org/PR84184
-+ */
-+ extern const struct unw_table_entry __start_unwind[] __ia64_hidden;
-+ extern const struct unw_table_entry __end_unwind[] __ia64_hidden;
-+ #undef __ia64_hidden
- extern void unw_hash_index_t_is_too_narrow (void);
- long i, off;
++ struct calling_interface_buffer buffer;
+ int hwswitch_state;
+ int hwswitch_ret;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (ret)
+ return ret;
+- status = buffer->output[1];
++ status = buffer.output[1];
+
+- dell_set_arguments(0, 0x2, 0, 0);
+- hwswitch_ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0x2, 0, 0);
++ hwswitch_ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (hwswitch_ret)
+ return hwswitch_ret;
+- hwswitch_state = buffer->output[1];
++ hwswitch_state = buffer.output[1];
+
+ seq_printf(s, "return:\t%d\n", ret);
+ seq_printf(s, "status:\t0x%X\n", status);
+@@ -646,22 +652,23 @@ static const struct file_operations dell_debugfs_fops = {
+
+ static void dell_update_rfkill(struct work_struct *ignored)
+ {
++ struct calling_interface_buffer buffer;
+ int hwswitch = 0;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- status = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ status = buffer.output[1];
+
+ if (ret != 0)
+ return;
+
+- dell_set_arguments(0, 0x2, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0x2, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+
+ if (ret == 0 && (status & BIT(0)))
+- hwswitch = buffer->output[1];
++ hwswitch = buffer.output[1];
+
+ if (wifi_rfkill) {
+ dell_rfkill_update_hw_state(wifi_rfkill, 1, status, hwswitch);
+@@ -719,6 +726,7 @@ static struct notifier_block dell_laptop_rbtn_notifier = {
+
+ static int __init dell_setup_rfkill(void)
+ {
++ struct calling_interface_buffer buffer;
+ int status, ret, whitelisted;
+ const char *product;
+
+@@ -734,9 +742,9 @@ static int __init dell_setup_rfkill(void)
+ if (!force_rfkill && !whitelisted)
+ return 0;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- status = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ status = buffer.output[1];
+
+ /* dell wireless info smbios call is not supported */
+ if (ret != 0)
+@@ -889,6 +897,7 @@ static void dell_cleanup_rfkill(void)
+
+ static int dell_send_intensity(struct backlight_device *bd)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+
+@@ -896,17 +905,21 @@ static int dell_send_intensity(struct backlight_device *bd)
+ if (!token)
+ return -ENODEV;
+
+- dell_set_arguments(token->location, bd->props.brightness, 0, 0);
++ dell_fill_request(&buffer,
++ token->location, bd->props.brightness, 0, 0);
+ if (power_supply_is_system_supplied() > 0)
+- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
+ else
+- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
+
+ return ret;
+ }
+
+ static int dell_get_intensity(struct backlight_device *bd)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+
+@@ -914,14 +927,17 @@ static int dell_get_intensity(struct backlight_device *bd)
+ if (!token)
+ return -ENODEV;
+
+- dell_set_arguments(token->location, 0, 0, 0);
++ dell_fill_request(&buffer, token->location, 0, 0, 0);
+ if (power_supply_is_system_supplied() > 0)
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
+ else
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
+
+ if (ret == 0)
+- ret = buffer->output[1];
++ ret = buffer.output[1];
++
+ return ret;
+ }
+
+@@ -1186,31 +1202,33 @@ static enum led_brightness kbd_led_level;
+
+ static int kbd_get_info(struct kbd_info *info)
+ {
++ struct calling_interface_buffer buffer;
+ u8 units;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
+ if (ret)
+ return ret;
+
+- info->modes = buffer->output[1] & 0xFFFF;
+- info->type = (buffer->output[1] >> 24) & 0xFF;
+- info->triggers = buffer->output[2] & 0xFF;
+- units = (buffer->output[2] >> 8) & 0xFF;
+- info->levels = (buffer->output[2] >> 16) & 0xFF;
++ info->modes = buffer.output[1] & 0xFFFF;
++ info->type = (buffer.output[1] >> 24) & 0xFF;
++ info->triggers = buffer.output[2] & 0xFF;
++ units = (buffer.output[2] >> 8) & 0xFF;
++ info->levels = (buffer.output[2] >> 16) & 0xFF;
+
+ if (quirks && quirks->kbd_led_levels_off_1 && info->levels)
+ info->levels--;
+
+ if (units & BIT(0))
+- info->seconds = (buffer->output[3] >> 0) & 0xFF;
++ info->seconds = (buffer.output[3] >> 0) & 0xFF;
+ if (units & BIT(1))
+- info->minutes = (buffer->output[3] >> 8) & 0xFF;
++ info->minutes = (buffer.output[3] >> 8) & 0xFF;
+ if (units & BIT(2))
+- info->hours = (buffer->output[3] >> 16) & 0xFF;
++ info->hours = (buffer.output[3] >> 16) & 0xFF;
+ if (units & BIT(3))
+- info->days = (buffer->output[3] >> 24) & 0xFF;
++ info->days = (buffer.output[3] >> 24) & 0xFF;
+
+ return ret;
+ }
+@@ -1270,31 +1288,34 @@ static int kbd_set_level(struct kbd_state *state, u8 level)
+
+ static int kbd_get_state(struct kbd_state *state)
+ {
++ struct calling_interface_buffer buffer;
+ int ret;
+
+- dell_set_arguments(0x1, 0, 0, 0);
+- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
+ if (ret)
+ return ret;
+
+- state->mode_bit = ffs(buffer->output[1] & 0xFFFF);
++ state->mode_bit = ffs(buffer.output[1] & 0xFFFF);
+ if (state->mode_bit != 0)
+ state->mode_bit--;
+
+- state->triggers = (buffer->output[1] >> 16) & 0xFF;
+- state->timeout_value = (buffer->output[1] >> 24) & 0x3F;
+- state->timeout_unit = (buffer->output[1] >> 30) & 0x3;
+- state->als_setting = buffer->output[2] & 0xFF;
+- state->als_value = (buffer->output[2] >> 8) & 0xFF;
+- state->level = (buffer->output[2] >> 16) & 0xFF;
+- state->timeout_value_ac = (buffer->output[2] >> 24) & 0x3F;
+- state->timeout_unit_ac = (buffer->output[2] >> 30) & 0x3;
++ state->triggers = (buffer.output[1] >> 16) & 0xFF;
++ state->timeout_value = (buffer.output[1] >> 24) & 0x3F;
++ state->timeout_unit = (buffer.output[1] >> 30) & 0x3;
++ state->als_setting = buffer.output[2] & 0xFF;
++ state->als_value = (buffer.output[2] >> 8) & 0xFF;
++ state->level = (buffer.output[2] >> 16) & 0xFF;
++ state->timeout_value_ac = (buffer.output[2] >> 24) & 0x3F;
++ state->timeout_unit_ac = (buffer.output[2] >> 30) & 0x3;
+
+ return ret;
+ }
+
+ static int kbd_set_state(struct kbd_state *state)
+ {
++ struct calling_interface_buffer buffer;
+ int ret;
+ u32 input1;
+ u32 input2;
+@@ -1307,8 +1328,9 @@ static int kbd_set_state(struct kbd_state *state)
+ input2 |= (state->level & 0xFF) << 16;
+ input2 |= (state->timeout_value_ac & 0x3F) << 24;
+ input2 |= (state->timeout_unit_ac & 0x3) << 30;
+- dell_set_arguments(0x2, input1, input2, 0);
+- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
++ dell_fill_request(&buffer, 0x2, input1, input2, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
+
+ return ret;
+ }
+@@ -1335,6 +1357,7 @@ static int kbd_set_state_safe(struct kbd_state *state, struct kbd_state *old)
+
+ static int kbd_set_token_bit(u8 bit)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+
+@@ -1345,14 +1368,15 @@ static int kbd_set_token_bit(u8 bit)
+ if (!token)
+ return -EINVAL;
+
+- dell_set_arguments(token->location, token->value, 0, 0);
+- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
++ dell_fill_request(&buffer, token->location, token->value, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
+
+ return ret;
+ }
+
+ static int kbd_get_token_bit(u8 bit)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+ int val;
+@@ -1364,9 +1388,9 @@ static int kbd_get_token_bit(u8 bit)
+ if (!token)
+ return -EINVAL;
+
+- dell_set_arguments(token->location, 0, 0, 0);
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_STD);
+- val = buffer->output[1];
++ dell_fill_request(&buffer, token->location, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_TOKEN_READ, SELECT_TOKEN_STD);
++ val = buffer.output[1];
+
+ if (ret)
+ return ret;
+@@ -2102,6 +2126,7 @@ static struct notifier_block dell_laptop_notifier = {
+
+ int dell_micmute_led_set(int state)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+
+ if (state == 0)
+@@ -2114,8 +2139,8 @@ int dell_micmute_led_set(int state)
+ if (!token)
+ return -ENODEV;
+
+- dell_set_arguments(token->location, token->value, 0, 0);
+- dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
++ dell_fill_request(&buffer, token->location, token->value, 0, 0);
++ dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
+
+ return state;
+ }
+@@ -2146,13 +2171,6 @@ static int __init dell_init(void)
+ if (ret)
+ goto fail_platform_device2;
+
+- buffer = kzalloc(sizeof(struct calling_interface_buffer), GFP_KERNEL);
+- if (!buffer) {
+- ret = -ENOMEM;
+- goto fail_buffer;
+- }
+-
+-
+ ret = dell_setup_rfkill();
+
+ if (ret) {
+@@ -2177,10 +2195,13 @@ static int __init dell_init(void)
+
+ token = dell_smbios_find_token(BRIGHTNESS_TOKEN);
+ if (token) {
+- dell_set_arguments(token->location, 0, 0, 0);
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
++ struct calling_interface_buffer buffer;
++
++ dell_fill_request(&buffer, token->location, 0, 0, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
+ if (ret)
+- max_intensity = buffer->output[3];
++ max_intensity = buffer.output[3];
+ }
+ if (max_intensity) {
+@@ -2214,8 +2235,6 @@ static int __init dell_init(void)
+ fail_get_brightness:
+ backlight_device_unregister(dell_backlight_device);
+ fail_backlight:
+- kfree(buffer);
+-fail_buffer:
+ dell_cleanup_rfkill();
+ fail_rfkill:
+ platform_device_del(platform_device);
+@@ -2235,7 +2254,6 @@ static void __exit dell_exit(void)
+ touchpad_led_exit();
+ kbd_led_exit();
+ backlight_device_unregister(dell_backlight_device);
+- kfree(buffer);
+ dell_cleanup_rfkill();
+ if (platform_device) {
+ platform_device_unregister(platform_device);
diff --git a/2901_allocate_buffer_on_heap_rather_than_globally.patch b/2901_allocate_buffer_on_heap_rather_than_globally.patch
new file mode 100644
index 0000000..1912d9f
--- /dev/null
+++ b/2901_allocate_buffer_on_heap_rather_than_globally.patch
@@ -0,0 +1,458 @@
+--- dell-laptop.c.orig 2018-02-28 19:24:04.598049000 +0000
++++ linux-4.15.0/drivers/platform/x86/dell-laptop.c 2018-02-28 19:40:00.358049000 +0000
+@@ -78,7 +78,6 @@ static struct platform_driver platform_d
+ }
+ };
+
+-static struct calling_interface_buffer *buffer;
+ static struct platform_device *platform_device;
+ static struct backlight_device *dell_backlight_device;
+ static struct rfkill *wifi_rfkill;
+@@ -286,7 +285,8 @@ static const struct dmi_system_id dell_q
+ { }
+ };
+
+-void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
++void dell_set_arguments(struct calling_interface_buffer *buffer,
++ u32 arg0, u32 arg1, u32 arg2, u32 arg3)
+ {
+ memset(buffer, 0, sizeof(struct calling_interface_buffer));
+ buffer->input[0] = arg0;
+@@ -295,7 +295,8 @@ void dell_set_arguments(u32 arg0, u32 ar
+ buffer->input[3] = arg3;
+ }
+
+-int dell_send_request(u16 class, u16 select)
++int dell_send_request(struct calling_interface_buffer *buffer,
++ u16 class, u16 select)
+ {
+ int ret;
+
+@@ -432,21 +433,22 @@ static int dell_rfkill_set(void *data, b
+ int disable = blocked ? 1 : 0;
+ unsigned long radio = (unsigned long)data;
+ int hwswitch_bit = (unsigned long)data - 1;
++ struct calling_interface_buffer buffer;
+ int hwswitch;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (ret)
+ return ret;
+- status = buffer->output[1];
++ status = buffer.output[1];
+
+- dell_set_arguments(0x2, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0x2, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (ret)
+ return ret;
+- hwswitch = buffer->output[1];
++ hwswitch = buffer.output[1];
+
+ /* If the hardware switch controls this radio, and the hardware
+ switch is disabled, always disable the radio */
+@@ -454,8 +456,8 @@ static int dell_rfkill_set(void *data, b
+ (status & BIT(0)) && !(status & BIT(16)))
+ disable = 1;
+
+- dell_set_arguments(1 | (radio<<8) | (disable << 16), 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 1 | (radio<<8) | (disable << 16), 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ return ret;
+ }
+
+@@ -464,9 +466,11 @@ static void dell_rfkill_update_sw_state(
+ {
+ if (status & BIT(0)) {
+ /* Has hw-switch, sync sw_state to BIOS */
++ struct calling_interface_buffer buffer;
+ int block = rfkill_blocked(rfkill);
+- dell_set_arguments(1 | (radio << 8) | (block << 16), 0, 0, 0);
+- dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer,
++ 1 | (radio << 8) | (block << 16), 0, 0, 0);
++ dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ } else {
+ /* No hw-switch, sync BIOS state to sw_state */
+ rfkill_set_sw_state(rfkill, !!(status & BIT(radio + 16)));
+@@ -483,21 +487,22 @@ static void dell_rfkill_update_hw_state(
+ static void dell_rfkill_query(struct rfkill *rfkill, void *data)
+ {
+ int radio = ((unsigned long)data & 0xF);
++ struct calling_interface_buffer buffer;
+ int hwswitch;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- status = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ status = buffer.output[1];
+
+ if (ret != 0 || !(status & BIT(0))) {
+ return;
+ }
+
+- dell_set_arguments(0, 0x2, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- hwswitch = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0x2, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ hwswitch = buffer.output[1];
+
+ if (ret != 0)
+ return;
+@@ -514,22 +519,23 @@ static struct dentry *dell_laptop_dir;
+
+ static int dell_debugfs_show(struct seq_file *s, void *data)
+ {
++ struct calling_interface_buffer buffer;
+ int hwswitch_state;
+ int hwswitch_ret;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (ret)
+ return ret;
+- status = buffer->output[1];
++ status = buffer.output[1];
+
+- dell_set_arguments(0, 0x2, 0, 0);
+- hwswitch_ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0x2, 0, 0);
++ hwswitch_ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (hwswitch_ret)
+ return hwswitch_ret;
+- hwswitch_state = buffer->output[1];
++ hwswitch_state = buffer.output[1];
+
+ seq_printf(s, "return:\t%d\n", ret);
+ seq_printf(s, "status:\t0x%X\n", status);
+@@ -610,22 +616,23 @@ static const struct file_operations dell
+
+ static void dell_update_rfkill(struct work_struct *ignored)
+ {
++ struct calling_interface_buffer buffer;
+ int hwswitch = 0;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- status = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ status = buffer.output[1];
+
+ if (ret != 0)
+ return;
+
+- dell_set_arguments(0, 0x2, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0x2, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+
+ if (ret == 0 && (status & BIT(0)))
+- hwswitch = buffer->output[1];
++ hwswitch = buffer.output[1];
+
+ if (wifi_rfkill) {
+ dell_rfkill_update_hw_state(wifi_rfkill, 1, status, hwswitch);
+@@ -683,6 +690,7 @@ static struct notifier_block dell_laptop
+
+ static int __init dell_setup_rfkill(void)
+ {
++ struct calling_interface_buffer buffer;
+ int status, ret, whitelisted;
+ const char *product;
+
+@@ -698,9 +706,9 @@ static int __init dell_setup_rfkill(void
+ if (!force_rfkill && !whitelisted)
+ return 0;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- status = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ status = buffer.output[1];
+
+ /* dell wireless info smbios call is not supported */
+ if (ret != 0)
+@@ -853,6 +861,7 @@ static void dell_cleanup_rfkill(void)
+
+ static int dell_send_intensity(struct backlight_device *bd)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+
+@@ -860,17 +869,21 @@ static int dell_send_intensity(struct ba
+ if (!token)
+ return -ENODEV;
+
+- dell_set_arguments(token->location, bd->props.brightness, 0, 0);
++ dell_fill_request(&buffer,
++ token->location, bd->props.brightness, 0, 0);
+ if (power_supply_is_system_supplied() > 0)
+- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
+ else
+- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
+
+ return ret;
+ }
+
+ static int dell_get_intensity(struct backlight_device *bd)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+
+@@ -878,14 +891,17 @@ static int dell_get_intensity(struct bac
+ if (!token)
+ return -ENODEV;
+
+- dell_set_arguments(token->location, 0, 0, 0);
++ dell_fill_request(&buffer, token->location, 0, 0, 0);
+ if (power_supply_is_system_supplied() > 0)
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
+ else
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
+
+ if (ret == 0)
+- ret = buffer->output[1];
++ ret = buffer.output[1];
++
+ return ret;
+ }
+
+@@ -1149,31 +1165,33 @@ static DEFINE_MUTEX(kbd_led_mutex);
+
+ static int kbd_get_info(struct kbd_info *info)
+ {
++ struct calling_interface_buffer buffer;
+ u8 units;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
+ if (ret)
+ return ret;
+
+- info->modes = buffer->output[1] & 0xFFFF;
+- info->type = (buffer->output[1] >> 24) & 0xFF;
+- info->triggers = buffer->output[2] & 0xFF;
+- units = (buffer->output[2] >> 8) & 0xFF;
+- info->levels = (buffer->output[2] >> 16) & 0xFF;
++ info->modes = buffer.output[1] & 0xFFFF;
++ info->type = (buffer.output[1] >> 24) & 0xFF;
++ info->triggers = buffer.output[2] & 0xFF;
++ units = (buffer.output[2] >> 8) & 0xFF;
++ info->levels = (buffer.output[2] >> 16) & 0xFF;
+
+ if (quirks && quirks->kbd_led_levels_off_1 && info->levels)
+ info->levels--;
+
+ if (units & BIT(0))
+- info->seconds = (buffer->output[3] >> 0) & 0xFF;
++ info->seconds = (buffer.output[3] >> 0) & 0xFF;
+ if (units & BIT(1))
+- info->minutes = (buffer->output[3] >> 8) & 0xFF;
++ info->minutes = (buffer.output[3] >> 8) & 0xFF;
+ if (units & BIT(2))
+- info->hours = (buffer->output[3] >> 16) & 0xFF;
++ info->hours = (buffer.output[3] >> 16) & 0xFF;
+ if (units & BIT(3))
+- info->days = (buffer->output[3] >> 24) & 0xFF;
++ info->days = (buffer.output[3] >> 24) & 0xFF;
+
+ return ret;
+ }
+@@ -1233,31 +1251,34 @@ static int kbd_set_level(struct kbd_stat
+
+ static int kbd_get_state(struct kbd_state *state)
+ {
++ struct calling_interface_buffer buffer;
+ int ret;
+
+- dell_set_arguments(0x1, 0, 0, 0);
+- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
+ if (ret)
+ return ret;
+
+- state->mode_bit = ffs(buffer->output[1] & 0xFFFF);
++ state->mode_bit = ffs(buffer.output[1] & 0xFFFF);
+ if (state->mode_bit != 0)
+ state->mode_bit--;
+
+- state->triggers = (buffer->output[1] >> 16) & 0xFF;
+- state->timeout_value = (buffer->output[1] >> 24) & 0x3F;
+- state->timeout_unit = (buffer->output[1] >> 30) & 0x3;
+- state->als_setting = buffer->output[2] & 0xFF;
+- state->als_value = (buffer->output[2] >> 8) & 0xFF;
+- state->level = (buffer->output[2] >> 16) & 0xFF;
+- state->timeout_value_ac = (buffer->output[2] >> 24) & 0x3F;
+- state->timeout_unit_ac = (buffer->output[2] >> 30) & 0x3;
++ state->triggers = (buffer.output[1] >> 16) & 0xFF;
++ state->timeout_value = (buffer.output[1] >> 24) & 0x3F;
++ state->timeout_unit = (buffer.output[1] >> 30) & 0x3;
++ state->als_setting = buffer.output[2] & 0xFF;
++ state->als_value = (buffer.output[2] >> 8) & 0xFF;
++ state->level = (buffer.output[2] >> 16) & 0xFF;
++ state->timeout_value_ac = (buffer.output[2] >> 24) & 0x3F;
++ state->timeout_unit_ac = (buffer.output[2] >> 30) & 0x3;
+
+ return ret;
+ }
+
+ static int kbd_set_state(struct kbd_state *state)
+ {
++ struct calling_interface_buffer buffer;
+ int ret;
+ u32 input1;
+ u32 input2;
+@@ -1270,8 +1291,9 @@ static int kbd_set_state(struct kbd_stat
+ input2 |= (state->level & 0xFF) << 16;
+ input2 |= (state->timeout_value_ac & 0x3F) << 24;
+ input2 |= (state->timeout_unit_ac & 0x3) << 30;
+- dell_set_arguments(0x2, input1, input2, 0);
+- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
++ dell_fill_request(&buffer, 0x2, input1, input2, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
+
+ return ret;
+ }
+@@ -1298,6 +1320,7 @@ static int kbd_set_state_safe(struct kbd
+
+ static int kbd_set_token_bit(u8 bit)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+
+@@ -1308,14 +1331,15 @@ static int kbd_set_token_bit(u8 bit)
+ if (!token)
+ return -EINVAL;
+
+- dell_set_arguments(token->location, token->value, 0, 0);
+- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
++ dell_fill_request(&buffer, token->location, token->value, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
+
+ return ret;
+ }
+
+ static int kbd_get_token_bit(u8 bit)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+ int val;
+@@ -1327,9 +1351,9 @@ static int kbd_get_token_bit(u8 bit)
+ if (!token)
+ return -EINVAL;
+
+- dell_set_arguments(token->location, 0, 0, 0);
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_STD);
+- val = buffer->output[1];
++ dell_fill_request(&buffer, token->location, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_TOKEN_READ, SELECT_TOKEN_STD);
++ val = buffer.output[1];
+
+ if (ret)
+ return ret;
+@@ -2046,6 +2070,7 @@ static struct notifier_block dell_laptop
+
+ int dell_micmute_led_set(int state)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+
+ if (state == 0)
+@@ -2058,8 +2083,8 @@ int dell_micmute_led_set(int state)
+ if (!token)
+ return -ENODEV;
+
+- dell_set_arguments(token->location, token->value, 0, 0);
+- dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
++ dell_fill_request(&buffer, token->location, token->value, 0, 0);
++ dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
+
+ return state;
+ }
+@@ -2090,13 +2115,6 @@ static int __init dell_init(void)
+ if (ret)
+ goto fail_platform_device2;
+
+- buffer = kzalloc(sizeof(struct calling_interface_buffer), GFP_KERNEL);
+- if (!buffer) {
+- ret = -ENOMEM;
+- goto fail_buffer;
+- }
+-
+-
+ ret = dell_setup_rfkill();
+
+ if (ret) {
+@@ -2121,10 +2139,13 @@ static int __init dell_init(void)
+
+ token = dell_smbios_find_token(BRIGHTNESS_TOKEN);
+ if (token) {
+- dell_set_arguments(token->location, 0, 0, 0);
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
++ struct calling_interface_buffer buffer;
++
++ dell_fill_request(&buffer, token->location, 0, 0, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
+ if (ret)
+- max_intensity = buffer->output[3];
++ max_intensity = buffer.output[3];
+ }
+
+ if (max_intensity) {
+@@ -2158,8 +2179,6 @@ static int __init dell_init(void)
+ fail_get_brightness:
+ backlight_device_unregister(dell_backlight_device);
+ fail_backlight:
+- kfree(buffer);
+-fail_buffer:
+ dell_cleanup_rfkill();
+ fail_rfkill:
+ platform_device_del(platform_device);
+@@ -2179,7 +2198,6 @@ static void __exit dell_exit(void)
+ touchpad_led_exit();
+ kbd_led_exit();
+ backlight_device_unregister(dell_backlight_device);
+- kfree(buffer);
+ dell_cleanup_rfkill();
+ if (platform_device) {
+ platform_device_unregister(platform_device);
^ permalink raw reply related [flat|nested] 27+ messages in thread
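The dell-laptop diff archived above replaces one shared, heap-allocated `calling_interface_buffer` with a stack buffer owned by each caller and passed explicitly to `dell_fill_request()` and `dell_send_request()`. A minimal user-space C sketch of that pattern, with an illustrative struct layout and a stubbed-out firmware call rather than the kernel's real SMBIOS implementation:

```c
#include <string.h>

/* Simplified stand-in for the kernel's struct calling_interface_buffer;
 * the real layout lives in the dell-smbios headers. */
struct calling_interface_buffer {
	unsigned int input[4];
	unsigned int output[4];
};

/* After the patch the buffer is owned by each caller (on its stack)
 * and passed down explicitly, instead of living in one shared global. */
static void dell_fill_request(struct calling_interface_buffer *buffer,
			      unsigned int arg0, unsigned int arg1,
			      unsigned int arg2, unsigned int arg3)
{
	memset(buffer, 0, sizeof(*buffer));
	buffer->input[0] = arg0;
	buffer->input[1] = arg1;
	buffer->input[2] = arg2;
	buffer->input[3] = arg3;
}

/* Stub for the firmware call: reflect input[0] (plus one) into
 * output[1] so the calling convention can be exercised without
 * SMBIOS hardware. */
static int dell_send_request(struct calling_interface_buffer *buffer,
			     unsigned short class, unsigned short select)
{
	(void)class;
	(void)select;
	buffer->output[1] = buffer->input[0] + 1;
	return 0;
}

/* Typical call site after the refactoring: one stack buffer per
 * request, read back through buffer.output once the call returns. */
static unsigned int query_token(unsigned int location)
{
	struct calling_interface_buffer buffer;

	dell_fill_request(&buffer, location, 0, 0, 0);
	if (dell_send_request(&buffer, 0, 0))
		return 0;
	return buffer.output[1];
}
```

Because each request carries its own buffer, the `kzalloc()` in `dell_init()`, the `kfree()` calls, and the `fail_buffer` error label all become unnecessary, which is exactly what the hunks at the end of the diff delete.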
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-02-28 20:07 Alice Ferrazzi
0 siblings, 0 replies; 27+ messages in thread
From: Alice Ferrazzi @ 2018-02-28 20:07 UTC (permalink / raw
To: gentoo-commits
commit: ad2bf1670e8e20a07183cf3e6b500e2aa54b9a0a
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 28 20:04:26 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Feb 28 20:05:28 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ad2bf167
ia64 fix ptrace (restore from wrong commit)
1700_ia64_fix_ptrace.patch | 460 ---------------------------------------------
1 file changed, 460 deletions(-)
diff --git a/1700_ia64_fix_ptrace.patch b/1700_ia64_fix_ptrace.patch
deleted file mode 100644
index ad48a1a..0000000
--- a/1700_ia64_fix_ptrace.patch
+++ /dev/null
@@ -1,460 +0,0 @@
-diff --git a/drivers/platform/x86/dell-laptop.c b/drivers/platform/x86/dell-laptop.c
-index fc2dfc8..a7b1419 100644
---- a/drivers/platform/x86/dell-laptop.c
-+++ b/drivers/platform/x86/dell-laptop.c
-@@ -78,7 +78,6 @@ static struct platform_driver platform_driver = {
- }
- };
-
--static struct calling_interface_buffer *buffer;
- static struct platform_device *platform_device;
- static struct backlight_device *dell_backlight_device;
- static struct rfkill *wifi_rfkill;
-@@ -322,7 +321,8 @@ static const struct dmi_system_id dell_quirks[] __initconst = {
- { }
- };
-
--static void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
-+static void dell_fill_request(struct calling_interface_buffer *buffer,
-+ u32 arg0, u32 arg1, u32 arg2, u32 arg3)
- {
- memset(buffer, 0, sizeof(struct calling_interface_buffer));
- buffer->input[0] = arg0;
-@@ -331,7 +331,8 @@ static void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
- buffer->input[3] = arg3;
- }
-
--static int dell_send_request(u16 class, u16 select)
-+static int dell_send_request(struct calling_interface_buffer *buffer,
-+ u16 class, u16 select)
- {
- int ret;
-
-@@ -468,21 +469,22 @@ static int dell_rfkill_set(void *data, bool blocked)
- int disable = blocked ? 1 : 0;
- unsigned long radio = (unsigned long)data;
- int hwswitch_bit = (unsigned long)data - 1;
-+ struct calling_interface_buffer buffer;
- int hwswitch;
- int status;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- if (ret)
- return ret;
-- status = buffer->output[1];
-+ status = buffer.output[1];
-
-- dell_set_arguments(0x2, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0x2, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- if (ret)
- return ret;
-- hwswitch = buffer->output[1];
-+ hwswitch = buffer.output[1];
-
- /* If the hardware switch controls this radio, and the hardware
- switch is disabled, always disable the radio */
-@@ -490,8 +492,8 @@ static int dell_rfkill_set(void *data, bool blocked)
- (status & BIT(0)) && !(status & BIT(16)))
- disable = 1;
-
-- dell_set_arguments(1 | (radio<<8) | (disable << 16), 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 1 | (radio<<8) | (disable << 16), 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- return ret;
- }
-
-@@ -500,9 +502,11 @@ static void dell_rfkill_update_sw_state(struct rfkill *rfkill, int radio,
- {
- if (status & BIT(0)) {
- /* Has hw-switch, sync sw_state to BIOS */
-+ struct calling_interface_buffer buffer;
- int block = rfkill_blocked(rfkill);
-- dell_set_arguments(1 | (radio << 8) | (block << 16), 0, 0, 0);
-- dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer,
-+ 1 | (radio << 8) | (block << 16), 0, 0, 0);
-+ dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- } else {
- /* No hw-switch, sync BIOS state to sw_state */
- rfkill_set_sw_state(rfkill, !!(status & BIT(radio + 16)));
-@@ -519,21 +523,22 @@ static void dell_rfkill_update_hw_state(struct rfkill *rfkill, int radio,
- static void dell_rfkill_query(struct rfkill *rfkill, void *data)
- {
- int radio = ((unsigned long)data & 0xF);
-+ struct calling_interface_buffer buffer;
- int hwswitch;
- int status;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-- status = buffer->output[1];
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-+ status = buffer.output[1];
-
- if (ret != 0 || !(status & BIT(0))) {
- return;
- }
-
-- dell_set_arguments(0, 0x2, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-- hwswitch = buffer->output[1];
-+ dell_fill_request(&buffer, 0, 0x2, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-+ hwswitch = buffer.output[1];
-
- if (ret != 0)
- return;
-@@ -550,22 +555,23 @@ static struct dentry *dell_laptop_dir;
-
- static int dell_debugfs_show(struct seq_file *s, void *data)
- {
-+ struct calling_interface_buffer buffer;
- int hwswitch_state;
- int hwswitch_ret;
- int status;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- if (ret)
- return ret;
-- status = buffer->output[1];
-+ status = buffer.output[1];
-
-- dell_set_arguments(0, 0x2, 0, 0);
-- hwswitch_ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0, 0x2, 0, 0);
-+ hwswitch_ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- if (hwswitch_ret)
- return hwswitch_ret;
-- hwswitch_state = buffer->output[1];
-+ hwswitch_state = buffer.output[1];
-
- seq_printf(s, "return:\t%d\n", ret);
- seq_printf(s, "status:\t0x%X\n", status);
-@@ -646,22 +652,23 @@ static const struct file_operations dell_debugfs_fops = {
-
- static void dell_update_rfkill(struct work_struct *ignored)
- {
-+ struct calling_interface_buffer buffer;
- int hwswitch = 0;
- int status;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-- status = buffer->output[1];
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-+ status = buffer.output[1];
-
- if (ret != 0)
- return;
-
-- dell_set_arguments(0, 0x2, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0, 0x2, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-
- if (ret == 0 && (status & BIT(0)))
-- hwswitch = buffer->output[1];
-+ hwswitch = buffer.output[1];
-
- if (wifi_rfkill) {
- dell_rfkill_update_hw_state(wifi_rfkill, 1, status, hwswitch);
-@@ -719,6 +726,7 @@ static struct notifier_block dell_laptop_rbtn_notifier = {
-
- static int __init dell_setup_rfkill(void)
- {
-+ struct calling_interface_buffer buffer;
- int status, ret, whitelisted;
- const char *product;
-
-@@ -734,9 +742,9 @@ static int __init dell_setup_rfkill(void)
- if (!force_rfkill && !whitelisted)
- return 0;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-- status = buffer->output[1];
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-+ status = buffer.output[1];
-
- /* dell wireless info smbios call is not supported */
- if (ret != 0)
-@@ -889,6 +897,7 @@ static void dell_cleanup_rfkill(void)
-
- static int dell_send_intensity(struct backlight_device *bd)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
- int ret;
-
-@@ -896,17 +905,21 @@ static int dell_send_intensity(struct backlight_device *bd)
- if (!token)
- return -ENODEV;
-
-- dell_set_arguments(token->location, bd->props.brightness, 0, 0);
-+ dell_fill_request(&buffer,
-+ token->location, bd->props.brightness, 0, 0);
- if (power_supply_is_system_supplied() > 0)
-- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
- else
-- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
-
- return ret;
- }
-
- static int dell_get_intensity(struct backlight_device *bd)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
- int ret;
-
-@@ -914,14 +927,17 @@ static int dell_get_intensity(struct backlight_device *bd)
- if (!token)
- return -ENODEV;
-
-- dell_set_arguments(token->location, 0, 0, 0);
-+ dell_fill_request(&buffer, token->location, 0, 0, 0);
- if (power_supply_is_system_supplied() > 0)
-- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
- else
-- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
-
- if (ret == 0)
-- ret = buffer->output[1];
-+ ret = buffer.output[1];
-+
- return ret;
- }
-
-@@ -1186,31 +1202,33 @@ static enum led_brightness kbd_led_level;
-
- static int kbd_get_info(struct kbd_info *info)
- {
-+ struct calling_interface_buffer buffer;
- u8 units;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer,
-+ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
- if (ret)
- return ret;
-
-- info->modes = buffer->output[1] & 0xFFFF;
-- info->type = (buffer->output[1] >> 24) & 0xFF;
-- info->triggers = buffer->output[2] & 0xFF;
-- units = (buffer->output[2] >> 8) & 0xFF;
-- info->levels = (buffer->output[2] >> 16) & 0xFF;
-+ info->modes = buffer.output[1] & 0xFFFF;
-+ info->type = (buffer.output[1] >> 24) & 0xFF;
-+ info->triggers = buffer.output[2] & 0xFF;
-+ units = (buffer.output[2] >> 8) & 0xFF;
-+ info->levels = (buffer.output[2] >> 16) & 0xFF;
-
- if (quirks && quirks->kbd_led_levels_off_1 && info->levels)
- info->levels--;
-
- if (units & BIT(0))
-- info->seconds = (buffer->output[3] >> 0) & 0xFF;
-+ info->seconds = (buffer.output[3] >> 0) & 0xFF;
- if (units & BIT(1))
-- info->minutes = (buffer->output[3] >> 8) & 0xFF;
-+ info->minutes = (buffer.output[3] >> 8) & 0xFF;
- if (units & BIT(2))
-- info->hours = (buffer->output[3] >> 16) & 0xFF;
-+ info->hours = (buffer.output[3] >> 16) & 0xFF;
- if (units & BIT(3))
-- info->days = (buffer->output[3] >> 24) & 0xFF;
-+ info->days = (buffer.output[3] >> 24) & 0xFF;
-
- return ret;
- }
-@@ -1270,31 +1288,34 @@ static int kbd_set_level(struct kbd_state *state, u8 level)
-
- static int kbd_get_state(struct kbd_state *state)
- {
-+ struct calling_interface_buffer buffer;
- int ret;
-
-- dell_set_arguments(0x1, 0, 0, 0);
-- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer,
-+ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
- if (ret)
- return ret;
-
-- state->mode_bit = ffs(buffer->output[1] & 0xFFFF);
-+ state->mode_bit = ffs(buffer.output[1] & 0xFFFF);
- if (state->mode_bit != 0)
- state->mode_bit--;
-
-- state->triggers = (buffer->output[1] >> 16) & 0xFF;
-- state->timeout_value = (buffer->output[1] >> 24) & 0x3F;
-- state->timeout_unit = (buffer->output[1] >> 30) & 0x3;
-- state->als_setting = buffer->output[2] & 0xFF;
-- state->als_value = (buffer->output[2] >> 8) & 0xFF;
-- state->level = (buffer->output[2] >> 16) & 0xFF;
-- state->timeout_value_ac = (buffer->output[2] >> 24) & 0x3F;
-- state->timeout_unit_ac = (buffer->output[2] >> 30) & 0x3;
-+ state->triggers = (buffer.output[1] >> 16) & 0xFF;
-+ state->timeout_value = (buffer.output[1] >> 24) & 0x3F;
-+ state->timeout_unit = (buffer.output[1] >> 30) & 0x3;
-+ state->als_setting = buffer.output[2] & 0xFF;
-+ state->als_value = (buffer.output[2] >> 8) & 0xFF;
-+ state->level = (buffer.output[2] >> 16) & 0xFF;
-+ state->timeout_value_ac = (buffer.output[2] >> 24) & 0x3F;
-+ state->timeout_unit_ac = (buffer.output[2] >> 30) & 0x3;
-
- return ret;
- }
-
- static int kbd_set_state(struct kbd_state *state)
- {
-+ struct calling_interface_buffer buffer;
- int ret;
- u32 input1;
- u32 input2;
-@@ -1307,8 +1328,9 @@ static int kbd_set_state(struct kbd_state *state)
- input2 |= (state->level & 0xFF) << 16;
- input2 |= (state->timeout_value_ac & 0x3F) << 24;
- input2 |= (state->timeout_unit_ac & 0x3) << 30;
-- dell_set_arguments(0x2, input1, input2, 0);
-- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
-+ dell_fill_request(&buffer, 0x2, input1, input2, 0);
-+ ret = dell_send_request(&buffer,
-+ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
-
- return ret;
- }
-@@ -1335,6 +1357,7 @@ static int kbd_set_state_safe(struct kbd_state *state, struct kbd_state *old)
-
- static int kbd_set_token_bit(u8 bit)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
- int ret;
-
-@@ -1345,14 +1368,15 @@ static int kbd_set_token_bit(u8 bit)
- if (!token)
- return -EINVAL;
-
-- dell_set_arguments(token->location, token->value, 0, 0);
-- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
-+ dell_fill_request(&buffer, token->location, token->value, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
-
- return ret;
- }
-
- static int kbd_get_token_bit(u8 bit)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
- int ret;
- int val;
-@@ -1364,9 +1388,9 @@ static int kbd_get_token_bit(u8 bit)
- if (!token)
- return -EINVAL;
-
-- dell_set_arguments(token->location, 0, 0, 0);
-- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_STD);
-- val = buffer->output[1];
-+ dell_fill_request(&buffer, token->location, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_TOKEN_READ, SELECT_TOKEN_STD);
-+ val = buffer.output[1];
-
- if (ret)
- return ret;
-@@ -2102,6 +2126,7 @@ static struct notifier_block dell_laptop_notifier = {
-
- int dell_micmute_led_set(int state)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
-
- if (state == 0)
-@@ -2114,8 +2139,8 @@ int dell_micmute_led_set(int state)
- if (!token)
- return -ENODEV;
-
-- dell_set_arguments(token->location, token->value, 0, 0);
-- dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
-+ dell_fill_request(&buffer, token->location, token->value, 0, 0);
-+ dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
-
- return state;
- }
-@@ -2146,13 +2171,6 @@ static int __init dell_init(void)
- if (ret)
- goto fail_platform_device2;
-
-- buffer = kzalloc(sizeof(struct calling_interface_buffer), GFP_KERNEL);
-- if (!buffer) {
-- ret = -ENOMEM;
-- goto fail_buffer;
-- }
--
--
- ret = dell_setup_rfkill();
-
- if (ret) {
-@@ -2177,10 +2195,13 @@ static int __init dell_init(void)
-
- token = dell_smbios_find_token(BRIGHTNESS_TOKEN);
- if (token) {
-- dell_set_arguments(token->location, 0, 0, 0);
-- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
-+ struct calling_interface_buffer buffer;
-+
-+ dell_fill_request(&buffer, token->location, 0, 0, 0);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
- if (ret)
-- max_intensity = buffer->output[3];
-+ max_intensity = buffer.output[3];
- }
-
- if (max_intensity) {
-@@ -2214,8 +2235,6 @@ static int __init dell_init(void)
- fail_get_brightness:
- backlight_device_unregister(dell_backlight_device);
- fail_backlight:
-- kfree(buffer);
--fail_buffer:
- dell_cleanup_rfkill();
- fail_rfkill:
- platform_device_del(platform_device);
-@@ -2235,7 +2254,6 @@ static void __exit dell_exit(void)
- touchpad_led_exit();
- kbd_led_exit();
- backlight_device_unregister(dell_backlight_device);
-- kfree(buffer);
- dell_cleanup_rfkill();
- if (platform_device) {
- platform_device_unregister(platform_device);
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-02-28 20:07 Alice Ferrazzi
0 siblings, 0 replies; 27+ messages in thread
From: Alice Ferrazzi @ 2018-02-28 20:07 UTC (permalink / raw
To: gentoo-commits
commit: e08bdc6ca7fec4ff403493157ca3ae663bace3a6
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Wed Feb 28 20:06:15 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Wed Feb 28 20:06:15 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e08bdc6c
ia64 fix ptrace patch
1700_ia64_fix_ptrace.patch | 87 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 87 insertions(+)
diff --git a/1700_ia64_fix_ptrace.patch b/1700_ia64_fix_ptrace.patch
new file mode 100644
index 0000000..6173b05
--- /dev/null
+++ b/1700_ia64_fix_ptrace.patch
@@ -0,0 +1,87 @@
+From patchwork Fri Feb 2 22:12:24 2018
+Content-Type: text/plain; charset="utf-8"
+MIME-Version: 1.0
+Content-Transfer-Encoding: 8bit
+Subject: ia64: fix ptrace(PTRACE_GETREGS) (unbreaks strace, gdb)
+From: Sergei Trofimovich <slyfox@gentoo.org>
+X-Patchwork-Id: 10198159
+Message-Id: <20180202221224.16597-1-slyfox@gentoo.org>
+To: Tony Luck <tony.luck@intel.com>, Fenghua Yu <fenghua.yu@intel.com>,
+ linux-ia64@vger.kernel.org, linux-kernel@vger.kernel.org
+Cc: Sergei Trofimovich <slyfox@gentoo.org>
+Date: Fri, 2 Feb 2018 22:12:24 +0000
+
+The strace breakage looks like that:
+./strace: get_regs: get_regs_error: Input/output error
+
+It happens because ia64 needs to load unwind tables
+to read certain registers. Unwind tables fail to load
+due to GCC quirk on the following code:
+
+ extern char __end_unwind[];
+ const struct unw_table_entry *end = (struct unw_table_entry *)table_end;
+ table->end = segment_base + end[-1].end_offset;
+
+GCC does not generate correct code for this single memory
+reference after constant propagation (see https://gcc.gnu.org/PR84184).
+Two triggers are required for bad code generation:
+- '__end_unwind' has alignment lower (char), than
+ 'struct unw_table_entry' (8).
+- symbol offset is negative.
+
+This commit workarounds it by fixing alignment of '__end_unwind'.
+While at it use hidden symbols to generate shorter gp-relative
+relocations.
+
+CC: Tony Luck <tony.luck@intel.com>
+CC: Fenghua Yu <fenghua.yu@intel.com>
+CC: linux-ia64@vger.kernel.org
+CC: linux-kernel@vger.kernel.org
+Bug: https://github.com/strace/strace/issues/33
+Bug: https://gcc.gnu.org/PR84184
+Reported-by: Émeric Maschino <emeric.maschino@gmail.com>
+Signed-off-by: Sergei Trofimovich <slyfox@gentoo.org>
+Tested-by: stanton_arch@mail.com
+---
+ arch/ia64/include/asm/sections.h | 1 -
+ arch/ia64/kernel/unwind.c | 15 ++++++++++++++-
+ 2 files changed, 14 insertions(+), 2 deletions(-)
+
+diff --git a/arch/ia64/include/asm/sections.h b/arch/ia64/include/asm/sections.h
+index f3481408594e..0fc4f1757a44 100644
+--- a/arch/ia64/include/asm/sections.h
++++ b/arch/ia64/include/asm/sections.h
+@@ -24,7 +24,6 @@ extern char __start_gate_mckinley_e9_patchlist[], __end_gate_mckinley_e9_patchli
+ extern char __start_gate_vtop_patchlist[], __end_gate_vtop_patchlist[];
+ extern char __start_gate_fsyscall_patchlist[], __end_gate_fsyscall_patchlist[];
+ extern char __start_gate_brl_fsys_bubble_down_patchlist[], __end_gate_brl_fsys_bubble_down_patchlist[];
+-extern char __start_unwind[], __end_unwind[];
+ extern char __start_ivt_text[], __end_ivt_text[];
+
+ #undef dereference_function_descriptor
+diff --git a/arch/ia64/kernel/unwind.c b/arch/ia64/kernel/unwind.c
+index e04efa088902..025ba6700790 100644
+--- a/arch/ia64/kernel/unwind.c
++++ b/arch/ia64/kernel/unwind.c
+@@ -2243,7 +2243,20 @@ __initcall(create_gate_table);
+ void __init
+ unw_init (void)
+ {
+- extern char __gp[];
++ #define __ia64_hidden __attribute__((visibility("hidden")))
++ /*
++ * We use hidden symbols to generate more efficient code using
++ * gp-relative addressing.
++ */
++ extern char __gp[] __ia64_hidden;
++ /*
++ * Unwind tables need to have proper alignment as init_unwind_table()
++ * uses negative offsets against '__end_unwind'.
++ * See https://gcc.gnu.org/PR84184
++ */
++ extern const struct unw_table_entry __start_unwind[] __ia64_hidden;
++ extern const struct unw_table_entry __end_unwind[] __ia64_hidden;
++ #undef __ia64_hidden
+ extern void unw_hash_index_t_is_too_narrow (void);
+ long i, off;
+
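The `unw_init()` fix above hinges on the declared type of the section-bounding symbols: as `extern char __end_unwind[]` the symbol has alignment 1, while the `end[-1].end_offset` access assumes `struct unw_table_entry` alignment (8), and the negative offset then triggers the GCC bug. A user-space C sketch of the addressing pattern only, with illustrative field values; the miscompile itself reproduces only with the affected GCC on ia64:

```c
#include <stdint.h>

/* Illustrative layout; the real ia64 unw_table_entry has
 * start/end/info offsets as 64-bit fields. */
struct unw_table_entry {
	uint64_t start_offset;
	uint64_t end_offset;
	uint64_t info_offset;
};

/* Stand-in for the unwind section; in the kernel,
 * __start_unwind/__end_unwind are linker-provided symbols
 * bracketing this data. */
static const struct unw_table_entry unwind_section[2] = {
	{ 0x00, 0x10, 0x100 },
	{ 0x10, 0x40, 0x118 },
};

/* The access pattern from the unwind-table setup: the table's end
 * address is read through the *last* entry, one element before the
 * end symbol. Declaring the end symbol with the entry type, as the
 * patch does, gives it the alignment this negative-index access
 * relies on. */
static uint64_t table_end(uint64_t segment_base)
{
	const struct unw_table_entry *end = &unwind_section[2];

	return segment_base + end[-1].end_offset;
}
```

The `__attribute__((visibility("hidden")))` annotation in the patch is a separate, secondary improvement: it lets the compiler emit shorter gp-relative relocations for the symbols, as the commit message notes.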
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-03-01 13:51 Alice Ferrazzi
0 siblings, 0 replies; 27+ messages in thread
From: Alice Ferrazzi @ 2018-03-01 13:51 UTC (permalink / raw
To: gentoo-commits
commit: ea57e40b92f14ab61fb8a8352c4962a324bde09b
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 1 13:46:48 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Thu Mar 1 13:47:06 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ea57e40b
fix Allocate buffer on heap rather than globally patch
bug #646438
...ocate_buffer_on_heap_rather_than_globally.patch | 46 +++++++++++-----------
1 file changed, 24 insertions(+), 22 deletions(-)
diff --git a/2901_allocate_buffer_on_heap_rather_than_globally.patch b/2901_allocate_buffer_on_heap_rather_than_globally.patch
index 1912d9f..eb5fecc 100644
--- a/2901_allocate_buffer_on_heap_rather_than_globally.patch
+++ b/2901_allocate_buffer_on_heap_rather_than_globally.patch
@@ -1,6 +1,8 @@
---- dell-laptop.c.orig 2018-02-28 19:24:04.598049000 +0000
-+++ linux-4.15.0/drivers/platform/x86/dell-laptop.c 2018-02-28 19:40:00.358049000 +0000
-@@ -78,7 +78,6 @@ static struct platform_driver platform_d
+diff --git a/drivers/platform/x86/dell-laptop.c b/drivers/platform/x86/dell-laptop.c
+index cd4725e7e0b5..6e8071d493dc 100644
+--- a/drivers/platform/x86/dell-laptop.c
++++ b/drivers/platform/x86/dell-laptop.c
+@@ -78,7 +78,6 @@ static struct platform_driver platform_driver = {
}
};
@@ -8,27 +10,27 @@
static struct platform_device *platform_device;
static struct backlight_device *dell_backlight_device;
static struct rfkill *wifi_rfkill;
-@@ -286,7 +285,8 @@ static const struct dmi_system_id dell_q
+@@ -286,7 +285,8 @@ static const struct dmi_system_id dell_quirks[] __initconst = {
{ }
};
-void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
-+void dell_set_arguments(struct calling_interface_buffer *buffer,
-+ u32 arg0, u32 arg1, u32 arg2, u32 arg3)
++static void dell_fill_request(struct calling_interface_buffer *buffer,
++ u32 arg0, u32 arg1, u32 arg2, u32 arg3)
{
memset(buffer, 0, sizeof(struct calling_interface_buffer));
buffer->input[0] = arg0;
-@@ -295,7 +295,8 @@ void dell_set_arguments(u32 arg0, u32 ar
+@@ -295,7 +295,8 @@ void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
buffer->input[3] = arg3;
}
-int dell_send_request(u16 class, u16 select)
-+int dell_send_request(struct calling_interface_buffer *buffer,
-+ u16 class, u16 select)
++static int dell_send_request(struct calling_interface_buffer *buffer,
++ u16 class, u16 select)
{
int ret;
-@@ -432,21 +433,22 @@ static int dell_rfkill_set(void *data, b
+@@ -432,21 +433,22 @@ static int dell_rfkill_set(void *data, bool blocked)
int disable = blocked ? 1 : 0;
unsigned long radio = (unsigned long)data;
int hwswitch_bit = (unsigned long)data - 1;
@@ -57,7 +59,7 @@
/* If the hardware switch controls this radio, and the hardware
switch is disabled, always disable the radio */
-@@ -454,8 +456,8 @@ static int dell_rfkill_set(void *data, b
+@@ -454,8 +456,8 @@ static int dell_rfkill_set(void *data, bool blocked)
(status & BIT(0)) && !(status & BIT(16)))
disable = 1;
@@ -68,7 +70,7 @@
return ret;
}
-@@ -464,9 +466,11 @@ static void dell_rfkill_update_sw_state(
+@@ -464,9 +466,11 @@ static void dell_rfkill_update_sw_state(struct rfkill *rfkill, int radio,
{
if (status & BIT(0)) {
/* Has hw-switch, sync sw_state to BIOS */
@@ -82,7 +84,7 @@
} else {
/* No hw-switch, sync BIOS state to sw_state */
rfkill_set_sw_state(rfkill, !!(status & BIT(radio + 16)));
-@@ -483,21 +487,22 @@ static void dell_rfkill_update_hw_state(
+@@ -483,21 +487,22 @@ static void dell_rfkill_update_hw_state(struct rfkill *rfkill, int radio,
static void dell_rfkill_query(struct rfkill *rfkill, void *data)
{
int radio = ((unsigned long)data & 0xF);
@@ -141,7 +143,7 @@
seq_printf(s, "return:\t%d\n", ret);
seq_printf(s, "status:\t0x%X\n", status);
-@@ -610,22 +616,23 @@ static const struct file_operations dell
+@@ -610,22 +616,23 @@ static const struct file_operations dell_debugfs_fops = {
static void dell_update_rfkill(struct work_struct *ignored)
{
@@ -171,7 +173,7 @@
if (wifi_rfkill) {
dell_rfkill_update_hw_state(wifi_rfkill, 1, status, hwswitch);
-@@ -683,6 +690,7 @@ static struct notifier_block dell_laptop
+@@ -683,6 +690,7 @@ static struct notifier_block dell_laptop_rbtn_notifier = {
static int __init dell_setup_rfkill(void)
{
@@ -179,7 +181,7 @@
int status, ret, whitelisted;
const char *product;
-@@ -698,9 +706,9 @@ static int __init dell_setup_rfkill(void
+@@ -698,9 +706,9 @@ static int __init dell_setup_rfkill(void)
if (!force_rfkill && !whitelisted)
return 0;
@@ -200,7 +202,7 @@
struct calling_interface_token *token;
int ret;
-@@ -860,17 +869,21 @@ static int dell_send_intensity(struct ba
+@@ -860,17 +869,21 @@ static int dell_send_intensity(struct backlight_device *bd)
if (!token)
return -ENODEV;
@@ -225,7 +227,7 @@
struct calling_interface_token *token;
int ret;
-@@ -878,14 +891,17 @@ static int dell_get_intensity(struct bac
+@@ -878,14 +891,17 @@ static int dell_get_intensity(struct backlight_device *bd)
if (!token)
return -ENODEV;
@@ -292,7 +294,7 @@
return ret;
}
-@@ -1233,31 +1251,34 @@ static int kbd_set_level(struct kbd_stat
+@@ -1233,31 +1251,34 @@ static int kbd_set_level(struct kbd_state *state, u8 level)
static int kbd_get_state(struct kbd_state *state)
{
@@ -338,7 +340,7 @@
int ret;
u32 input1;
u32 input2;
-@@ -1270,8 +1291,9 @@ static int kbd_set_state(struct kbd_stat
+@@ -1270,8 +1291,9 @@ static int kbd_set_state(struct kbd_state *state)
input2 |= (state->level & 0xFF) << 16;
input2 |= (state->timeout_value_ac & 0x3F) << 24;
input2 |= (state->timeout_unit_ac & 0x3) << 30;
@@ -350,7 +352,7 @@
return ret;
}
-@@ -1298,6 +1320,7 @@ static int kbd_set_state_safe(struct kbd
+@@ -1298,6 +1320,7 @@ static int kbd_set_state_safe(struct kbd_state *state, struct kbd_state *old)
static int kbd_set_token_bit(u8 bit)
{
@@ -389,7 +391,7 @@
if (ret)
return ret;
-@@ -2046,6 +2070,7 @@ static struct notifier_block dell_laptop
+@@ -2046,6 +2070,7 @@ static struct notifier_block dell_laptop_notifier = {
int dell_micmute_led_set(int state)
{
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-03-09 16:38 Alice Ferrazzi
From: Alice Ferrazzi @ 2018-03-09 16:38 UTC
To: gentoo-commits
commit: 278b0fa2dd63fef732577b7d90ad5629e92b2596
Author: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 9 16:37:59 2018 +0000
Commit: Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Fri Mar 9 16:37:59 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=278b0fa2
linux kernel 4.15.8
0000_README | 4 +
1007_linux-4.15.8.patch | 4922 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4926 insertions(+)
diff --git a/0000_README b/0000_README
index 994c735..bd1cdaf 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 1006_linux-4.15.7.patch
From: http://www.kernel.org
Desc: Linux 4.15.7
+Patch: 1007_linux-4.15.8.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.8
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1007_linux-4.15.8.patch b/1007_linux-4.15.8.patch
new file mode 100644
index 0000000..27ea8d6
--- /dev/null
+++ b/1007_linux-4.15.8.patch
@@ -0,0 +1,4922 @@
+diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt
+index 46c7e1085efc..e269541a7d10 100644
+--- a/Documentation/networking/ip-sysctl.txt
++++ b/Documentation/networking/ip-sysctl.txt
+@@ -508,7 +508,7 @@ tcp_rmem - vector of 3 INTEGERs: min, default, max
+ min: Minimal size of receive buffer used by TCP sockets.
+ It is guaranteed to each TCP socket, even under moderate memory
+ pressure.
+- Default: 1 page
++ Default: 4K
+
+ default: initial size of receive buffer used by TCP sockets.
+ This value overrides net.core.rmem_default used by other protocols.
+@@ -666,7 +666,7 @@ tcp_window_scaling - BOOLEAN
+ tcp_wmem - vector of 3 INTEGERs: min, default, max
+ min: Amount of memory reserved for send buffers for TCP sockets.
+ Each TCP socket has rights to use it due to fact of its birth.
+- Default: 1 page
++ Default: 4K
+
+ default: initial size of send buffer used by TCP sockets. This
+ value overrides net.core.wmem_default used by other protocols.
+diff --git a/Makefile b/Makefile
+index 49f524444050..eb18d200a603 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arm/boot/dts/logicpd-som-lv.dtsi b/arch/arm/boot/dts/logicpd-som-lv.dtsi
+index 29cb804d10cc..06cce72508a2 100644
+--- a/arch/arm/boot/dts/logicpd-som-lv.dtsi
++++ b/arch/arm/boot/dts/logicpd-som-lv.dtsi
+@@ -98,6 +98,8 @@
+ };
+
+ &i2c1 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&i2c1_pins>;
+ clock-frequency = <2600000>;
+
+ twl: twl@48 {
+@@ -216,7 +218,12 @@
+ >;
+ };
+
+-
++ i2c1_pins: pinmux_i2c1_pins {
++ pinctrl-single,pins = <
++ OMAP3_CORE1_IOPAD(0x21ba, PIN_INPUT | MUX_MODE0) /* i2c1_scl.i2c1_scl */
++ OMAP3_CORE1_IOPAD(0x21bc, PIN_INPUT | MUX_MODE0) /* i2c1_sda.i2c1_sda */
++ >;
++ };
+ };
+
+ &omap3_pmx_wkup {
+diff --git a/arch/arm/boot/dts/logicpd-torpedo-som.dtsi b/arch/arm/boot/dts/logicpd-torpedo-som.dtsi
+index 6d89736c7b44..cf22b35f0a28 100644
+--- a/arch/arm/boot/dts/logicpd-torpedo-som.dtsi
++++ b/arch/arm/boot/dts/logicpd-torpedo-som.dtsi
+@@ -104,6 +104,8 @@
+ };
+
+ &i2c1 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&i2c1_pins>;
+ clock-frequency = <2600000>;
+
+ twl: twl@48 {
+@@ -211,6 +213,12 @@
+ OMAP3_CORE1_IOPAD(0x21b8, PIN_INPUT | MUX_MODE0) /* hsusb0_data7.hsusb0_data7 */
+ >;
+ };
++ i2c1_pins: pinmux_i2c1_pins {
++ pinctrl-single,pins = <
++ OMAP3_CORE1_IOPAD(0x21ba, PIN_INPUT | MUX_MODE0) /* i2c1_scl.i2c1_scl */
++ OMAP3_CORE1_IOPAD(0x21bc, PIN_INPUT | MUX_MODE0) /* i2c1_sda.i2c1_sda */
++ >;
++ };
+ };
+
+ &uart2 {
+diff --git a/arch/arm/boot/dts/rk3288-phycore-som.dtsi b/arch/arm/boot/dts/rk3288-phycore-som.dtsi
+index 99cfae875e12..5eae4776ffde 100644
+--- a/arch/arm/boot/dts/rk3288-phycore-som.dtsi
++++ b/arch/arm/boot/dts/rk3288-phycore-som.dtsi
+@@ -110,26 +110,6 @@
+ };
+ };
+
+-&cpu0 {
+- cpu0-supply = <&vdd_cpu>;
+- operating-points = <
+- /* KHz uV */
+- 1800000 1400000
+- 1608000 1350000
+- 1512000 1300000
+- 1416000 1200000
+- 1200000 1100000
+- 1008000 1050000
+- 816000 1000000
+- 696000 950000
+- 600000 900000
+- 408000 900000
+- 312000 900000
+- 216000 900000
+- 126000 900000
+- >;
+-};
+-
+ &emmc {
+ status = "okay";
+ bus-width = <8>;
+diff --git a/arch/arm/kvm/hyp/Makefile b/arch/arm/kvm/hyp/Makefile
+index 5638ce0c9524..63d6b404d88e 100644
+--- a/arch/arm/kvm/hyp/Makefile
++++ b/arch/arm/kvm/hyp/Makefile
+@@ -7,6 +7,8 @@ ccflags-y += -fno-stack-protector -DDISABLE_BRANCH_PROFILING
+
+ KVM=../../../../virt/kvm
+
++CFLAGS_ARMV7VE :=$(call cc-option, -march=armv7ve)
++
+ obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/vgic-v2-sr.o
+ obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/vgic-v3-sr.o
+ obj-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/hyp/timer-sr.o
+@@ -15,7 +17,10 @@ obj-$(CONFIG_KVM_ARM_HOST) += tlb.o
+ obj-$(CONFIG_KVM_ARM_HOST) += cp15-sr.o
+ obj-$(CONFIG_KVM_ARM_HOST) += vfp.o
+ obj-$(CONFIG_KVM_ARM_HOST) += banked-sr.o
++CFLAGS_banked-sr.o += $(CFLAGS_ARMV7VE)
++
+ obj-$(CONFIG_KVM_ARM_HOST) += entry.o
+ obj-$(CONFIG_KVM_ARM_HOST) += hyp-entry.o
+ obj-$(CONFIG_KVM_ARM_HOST) += switch.o
++CFLAGS_switch.o += $(CFLAGS_ARMV7VE)
+ obj-$(CONFIG_KVM_ARM_HOST) += s2-setup.o
+diff --git a/arch/arm/kvm/hyp/banked-sr.c b/arch/arm/kvm/hyp/banked-sr.c
+index 111bda8cdebd..be4b8b0a40ad 100644
+--- a/arch/arm/kvm/hyp/banked-sr.c
++++ b/arch/arm/kvm/hyp/banked-sr.c
+@@ -20,6 +20,10 @@
+
+ #include <asm/kvm_hyp.h>
+
++/*
++ * gcc before 4.9 doesn't understand -march=armv7ve, so we have to
++ * trick the assembler.
++ */
+ __asm__(".arch_extension virt");
+
+ void __hyp_text __banked_save_state(struct kvm_cpu_context *ctxt)
+diff --git a/arch/arm/mach-mvebu/Kconfig b/arch/arm/mach-mvebu/Kconfig
+index 9b49867154bf..63fa79f9f121 100644
+--- a/arch/arm/mach-mvebu/Kconfig
++++ b/arch/arm/mach-mvebu/Kconfig
+@@ -42,7 +42,7 @@ config MACH_ARMADA_375
+ depends on ARCH_MULTI_V7
+ select ARMADA_370_XP_IRQ
+ select ARM_ERRATA_720789
+- select ARM_ERRATA_753970
++ select PL310_ERRATA_753970
+ select ARM_GIC
+ select ARMADA_375_CLK
+ select HAVE_ARM_SCU
+@@ -58,7 +58,7 @@ config MACH_ARMADA_38X
+ bool "Marvell Armada 380/385 boards"
+ depends on ARCH_MULTI_V7
+ select ARM_ERRATA_720789
+- select ARM_ERRATA_753970
++ select PL310_ERRATA_753970
+ select ARM_GIC
+ select ARM_GLOBAL_TIMER
+ select CLKSRC_ARM_GLOBAL_TIMER_SCHED_CLOCK
+diff --git a/arch/arm/plat-orion/common.c b/arch/arm/plat-orion/common.c
+index aff6994950ba..a2399fd66e97 100644
+--- a/arch/arm/plat-orion/common.c
++++ b/arch/arm/plat-orion/common.c
+@@ -472,28 +472,27 @@ void __init orion_ge11_init(struct mv643xx_eth_platform_data *eth_data,
+ /*****************************************************************************
+ * Ethernet switch
+ ****************************************************************************/
+-static __initconst const char *orion_ge00_mvmdio_bus_name = "orion-mii";
+-static __initdata struct mdio_board_info
+- orion_ge00_switch_board_info;
++static __initdata struct mdio_board_info orion_ge00_switch_board_info = {
++ .bus_id = "orion-mii",
++ .modalias = "mv88e6085",
++};
+
+ void __init orion_ge00_switch_init(struct dsa_chip_data *d)
+ {
+- struct mdio_board_info *bd;
+ unsigned int i;
+
+ if (!IS_BUILTIN(CONFIG_PHYLIB))
+ return;
+
+- for (i = 0; i < ARRAY_SIZE(d->port_names); i++)
+- if (!strcmp(d->port_names[i], "cpu"))
++ for (i = 0; i < ARRAY_SIZE(d->port_names); i++) {
++ if (!strcmp(d->port_names[i], "cpu")) {
++ d->netdev[i] = &orion_ge00.dev;
+ break;
++ }
++ }
+
+- bd = &orion_ge00_switch_board_info;
+- bd->bus_id = orion_ge00_mvmdio_bus_name;
+- bd->mdio_addr = d->sw_addr;
+- d->netdev[i] = &orion_ge00.dev;
+- strcpy(bd->modalias, "mv88e6085");
+- bd->platform_data = d;
++ orion_ge00_switch_board_info.mdio_addr = d->sw_addr;
++ orion_ge00_switch_board_info.platform_data = d;
+
+ mdiobus_register_board_info(&orion_ge00_switch_board_info, 1);
+ }
+diff --git a/arch/parisc/include/asm/cacheflush.h b/arch/parisc/include/asm/cacheflush.h
+index 3742508cc534..bd5ce31936f5 100644
+--- a/arch/parisc/include/asm/cacheflush.h
++++ b/arch/parisc/include/asm/cacheflush.h
+@@ -26,6 +26,7 @@ void flush_user_icache_range_asm(unsigned long, unsigned long);
+ void flush_kernel_icache_range_asm(unsigned long, unsigned long);
+ void flush_user_dcache_range_asm(unsigned long, unsigned long);
+ void flush_kernel_dcache_range_asm(unsigned long, unsigned long);
++void purge_kernel_dcache_range_asm(unsigned long, unsigned long);
+ void flush_kernel_dcache_page_asm(void *);
+ void flush_kernel_icache_page(void *);
+
+diff --git a/arch/parisc/include/asm/processor.h b/arch/parisc/include/asm/processor.h
+index 0e6ab6e4a4e9..2dbe5580a1a4 100644
+--- a/arch/parisc/include/asm/processor.h
++++ b/arch/parisc/include/asm/processor.h
+@@ -316,6 +316,8 @@ extern int _parisc_requires_coherency;
+ #define parisc_requires_coherency() (0)
+ #endif
+
++extern int running_on_qemu;
++
+ #endif /* __ASSEMBLY__ */
+
+ #endif /* __ASM_PARISC_PROCESSOR_H */
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index 19c0c141bc3f..79089778725b 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -465,10 +465,10 @@ EXPORT_SYMBOL(copy_user_page);
+ int __flush_tlb_range(unsigned long sid, unsigned long start,
+ unsigned long end)
+ {
+- unsigned long flags, size;
++ unsigned long flags;
+
+- size = (end - start);
+- if (size >= parisc_tlb_flush_threshold) {
++ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
++ end - start >= parisc_tlb_flush_threshold) {
+ flush_tlb_all();
+ return 1;
+ }
+@@ -539,13 +539,11 @@ void flush_cache_mm(struct mm_struct *mm)
+ struct vm_area_struct *vma;
+ pgd_t *pgd;
+
+- /* Flush the TLB to avoid speculation if coherency is required. */
+- if (parisc_requires_coherency())
+- flush_tlb_all();
+-
+ /* Flushing the whole cache on each cpu takes forever on
+ rp3440, etc. So, avoid it if the mm isn't too big. */
+- if (mm_total_size(mm) >= parisc_cache_flush_threshold) {
++ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
++ mm_total_size(mm) >= parisc_cache_flush_threshold) {
++ flush_tlb_all();
+ flush_cache_all();
+ return;
+ }
+@@ -553,9 +551,9 @@ void flush_cache_mm(struct mm_struct *mm)
+ if (mm->context == mfsp(3)) {
+ for (vma = mm->mmap; vma; vma = vma->vm_next) {
+ flush_user_dcache_range_asm(vma->vm_start, vma->vm_end);
+- if ((vma->vm_flags & VM_EXEC) == 0)
+- continue;
+- flush_user_icache_range_asm(vma->vm_start, vma->vm_end);
++ if (vma->vm_flags & VM_EXEC)
++ flush_user_icache_range_asm(vma->vm_start, vma->vm_end);
++ flush_tlb_range(vma, vma->vm_start, vma->vm_end);
+ }
+ return;
+ }
+@@ -581,14 +579,9 @@ void flush_cache_mm(struct mm_struct *mm)
+ void flush_cache_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+ {
+- BUG_ON(!vma->vm_mm->context);
+-
+- /* Flush the TLB to avoid speculation if coherency is required. */
+- if (parisc_requires_coherency())
++ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
++ end - start >= parisc_cache_flush_threshold) {
+ flush_tlb_range(vma, start, end);
+-
+- if ((end - start) >= parisc_cache_flush_threshold
+- || vma->vm_mm->context != mfsp(3)) {
+ flush_cache_all();
+ return;
+ }
+@@ -596,6 +589,7 @@ void flush_cache_range(struct vm_area_struct *vma,
+ flush_user_dcache_range_asm(start, end);
+ if (vma->vm_flags & VM_EXEC)
+ flush_user_icache_range_asm(start, end);
++ flush_tlb_range(vma, start, end);
+ }
+
+ void
+@@ -604,8 +598,7 @@ flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long
+ BUG_ON(!vma->vm_mm->context);
+
+ if (pfn_valid(pfn)) {
+- if (parisc_requires_coherency())
+- flush_tlb_page(vma, vmaddr);
++ flush_tlb_page(vma, vmaddr);
+ __flush_cache_page(vma, vmaddr, PFN_PHYS(pfn));
+ }
+ }
+@@ -613,21 +606,33 @@ flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long
+ void flush_kernel_vmap_range(void *vaddr, int size)
+ {
+ unsigned long start = (unsigned long)vaddr;
++ unsigned long end = start + size;
+
+- if ((unsigned long)size > parisc_cache_flush_threshold)
++ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
++ (unsigned long)size >= parisc_cache_flush_threshold) {
++ flush_tlb_kernel_range(start, end);
+ flush_data_cache();
+- else
+- flush_kernel_dcache_range_asm(start, start + size);
++ return;
++ }
++
++ flush_kernel_dcache_range_asm(start, end);
++ flush_tlb_kernel_range(start, end);
+ }
+ EXPORT_SYMBOL(flush_kernel_vmap_range);
+
+ void invalidate_kernel_vmap_range(void *vaddr, int size)
+ {
+ unsigned long start = (unsigned long)vaddr;
++ unsigned long end = start + size;
+
+- if ((unsigned long)size > parisc_cache_flush_threshold)
++ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
++ (unsigned long)size >= parisc_cache_flush_threshold) {
++ flush_tlb_kernel_range(start, end);
+ flush_data_cache();
+- else
+- flush_kernel_dcache_range_asm(start, start + size);
++ return;
++ }
++
++ purge_kernel_dcache_range_asm(start, end);
++ flush_tlb_kernel_range(start, end);
+ }
+ EXPORT_SYMBOL(invalidate_kernel_vmap_range);
+diff --git a/arch/parisc/kernel/pacache.S b/arch/parisc/kernel/pacache.S
+index 2d40c4ff3f69..67b0f7532e83 100644
+--- a/arch/parisc/kernel/pacache.S
++++ b/arch/parisc/kernel/pacache.S
+@@ -1110,6 +1110,28 @@ ENTRY_CFI(flush_kernel_dcache_range_asm)
+ .procend
+ ENDPROC_CFI(flush_kernel_dcache_range_asm)
+
++ENTRY_CFI(purge_kernel_dcache_range_asm)
++ .proc
++ .callinfo NO_CALLS
++ .entry
++
++ ldil L%dcache_stride, %r1
++ ldw R%dcache_stride(%r1), %r23
++ ldo -1(%r23), %r21
++ ANDCM %r26, %r21, %r26
++
++1: cmpb,COND(<<),n %r26, %r25,1b
++ pdc,m %r23(%r26)
++
++ sync
++ syncdma
++ bv %r0(%r2)
++ nop
++ .exit
++
++ .procend
++ENDPROC_CFI(purge_kernel_dcache_range_asm)
++
+ ENTRY_CFI(flush_user_icache_range_asm)
+ .proc
+ .callinfo NO_CALLS
+diff --git a/arch/parisc/kernel/time.c b/arch/parisc/kernel/time.c
+index 4b8fd6dc22da..f7e684560186 100644
+--- a/arch/parisc/kernel/time.c
++++ b/arch/parisc/kernel/time.c
+@@ -76,10 +76,10 @@ irqreturn_t __irq_entry timer_interrupt(int irq, void *dev_id)
+ next_tick = cpuinfo->it_value;
+
+ /* Calculate how many ticks have elapsed. */
++ now = mfctl(16);
+ do {
+ ++ticks_elapsed;
+ next_tick += cpt;
+- now = mfctl(16);
+ } while (next_tick - now > cpt);
+
+ /* Store (in CR16 cycles) up to when we are accounting right now. */
+@@ -103,16 +103,17 @@ irqreturn_t __irq_entry timer_interrupt(int irq, void *dev_id)
+ * if one or the other wrapped. If "now" is "bigger" we'll end up
+ * with a very large unsigned number.
+ */
+- while (next_tick - mfctl(16) > cpt)
++ now = mfctl(16);
++ while (next_tick - now > cpt)
+ next_tick += cpt;
+
+ /* Program the IT when to deliver the next interrupt.
+ * Only bottom 32-bits of next_tick are writable in CR16!
+ * Timer interrupt will be delivered at least a few hundred cycles
+- * after the IT fires, so if we are too close (<= 500 cycles) to the
++ * after the IT fires, so if we are too close (<= 8000 cycles) to the
+ * next cycle, simply skip it.
+ */
+- if (next_tick - mfctl(16) <= 500)
++ if (next_tick - now <= 8000)
+ next_tick += cpt;
+ mtctl(next_tick, 16);
+
+@@ -248,7 +249,7 @@ static int __init init_cr16_clocksource(void)
+ * different sockets, so mark them unstable and lower rating on
+ * multi-socket SMP systems.
+ */
+- if (num_online_cpus() > 1) {
++ if (num_online_cpus() > 1 && !running_on_qemu) {
+ int cpu;
+ unsigned long cpu0_loc;
+ cpu0_loc = per_cpu(cpu_data, 0).cpu_loc;
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 48f41399fc0b..cab32ee824d2 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -629,7 +629,12 @@ void __init mem_init(void)
+ #endif
+
+ mem_init_print_info(NULL);
+-#ifdef CONFIG_DEBUG_KERNEL /* double-sanity-check paranoia */
++
++#if 0
++ /*
++ * Do not expose the virtual kernel memory layout to userspace.
++ * But keep code for debugging purposes.
++ */
+ printk("virtual kernel memory layout:\n"
+ " vmalloc : 0x%px - 0x%px (%4ld MB)\n"
+ " memory : 0x%px - 0x%px (%4ld MB)\n"
+diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
+index 17ae5c15a9e0..804ba030d859 100644
+--- a/arch/powerpc/mm/pgtable-radix.c
++++ b/arch/powerpc/mm/pgtable-radix.c
+@@ -21,6 +21,7 @@
+
+ #include <asm/pgtable.h>
+ #include <asm/pgalloc.h>
++#include <asm/mmu_context.h>
+ #include <asm/dma.h>
+ #include <asm/machdep.h>
+ #include <asm/mmu.h>
+@@ -334,6 +335,22 @@ static void __init radix_init_pgtable(void)
+ "r" (TLBIEL_INVAL_SET_LPID), "r" (0));
+ asm volatile("eieio; tlbsync; ptesync" : : : "memory");
+ trace_tlbie(0, 0, TLBIEL_INVAL_SET_LPID, 0, 2, 1, 1);
++
++ /*
++ * The init_mm context is given the first available (non-zero) PID,
++ * which is the "guard PID" and contains no page table. PIDR should
++ * never be set to zero because that duplicates the kernel address
++ * space at the 0x0... offset (quadrant 0)!
++ *
++ * An arbitrary PID that may later be allocated by the PID allocator
++ * for userspace processes must not be used either, because that
++ * would cause stale user mappings for that PID on CPUs outside of
++ * the TLB invalidation scheme (because it won't be in mm_cpumask).
++ *
++ * So permanently carve out one PID for the purpose of a guard PID.
++ */
++ init_mm.context.id = mmu_base_pid;
++ mmu_base_pid++;
+ }
+
+ static void __init radix_init_partition_table(void)
+@@ -580,6 +597,8 @@ void __init radix__early_init_mmu(void)
+
+ radix_init_iamr();
+ radix_init_pgtable();
++ /* Switch to the guard PID before turning on MMU */
++ radix__switch_mmu_context(NULL, &init_mm);
+ }
+
+ void radix__early_init_mmu_secondary(void)
+@@ -601,6 +620,7 @@ void radix__early_init_mmu_secondary(void)
+ radix_init_amor();
+ }
+ radix_init_iamr();
++ radix__switch_mmu_context(NULL, &init_mm);
+ }
+
+ void radix__mmu_cleanup_all(void)
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index 81d8614e7379..5e1ef9150182 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -48,6 +48,28 @@ static irqreturn_t ras_epow_interrupt(int irq, void *dev_id);
+ static irqreturn_t ras_error_interrupt(int irq, void *dev_id);
+
+
++/*
++ * Enable the hotplug interrupt late because processing them may touch other
++ * devices or systems (e.g. hugepages) that have not been initialized at the
++ * subsys stage.
++ */
++int __init init_ras_hotplug_IRQ(void)
++{
++ struct device_node *np;
++
++ /* Hotplug Events */
++ np = of_find_node_by_path("/event-sources/hot-plug-events");
++ if (np != NULL) {
++ if (dlpar_workqueue_init() == 0)
++ request_event_sources_irqs(np, ras_hotplug_interrupt,
++ "RAS_HOTPLUG");
++ of_node_put(np);
++ }
++
++ return 0;
++}
++machine_late_initcall(pseries, init_ras_hotplug_IRQ);
++
+ /*
+ * Initialize handlers for the set of interrupts caused by hardware errors
+ * and power system events.
+@@ -66,15 +88,6 @@ static int __init init_ras_IRQ(void)
+ of_node_put(np);
+ }
+
+- /* Hotplug Events */
+- np = of_find_node_by_path("/event-sources/hot-plug-events");
+- if (np != NULL) {
+- if (dlpar_workqueue_init() == 0)
+- request_event_sources_irqs(np, ras_hotplug_interrupt,
+- "RAS_HOTPLUG");
+- of_node_put(np);
+- }
+-
+ /* EPOW Events */
+ np = of_find_node_by_path("/event-sources/epow-events");
+ if (np != NULL) {
+diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
+index 024ad8bcc516..5b8089b0d3ee 100644
+--- a/arch/s390/kvm/interrupt.c
++++ b/arch/s390/kvm/interrupt.c
+@@ -170,8 +170,15 @@ static int ckc_interrupts_enabled(struct kvm_vcpu *vcpu)
+
+ static int ckc_irq_pending(struct kvm_vcpu *vcpu)
+ {
+- if (vcpu->arch.sie_block->ckc >= kvm_s390_get_tod_clock_fast(vcpu->kvm))
++ const u64 now = kvm_s390_get_tod_clock_fast(vcpu->kvm);
++ const u64 ckc = vcpu->arch.sie_block->ckc;
++
++ if (vcpu->arch.sie_block->gcr[0] & 0x0020000000000000ul) {
++ if ((s64)ckc >= (s64)now)
++ return 0;
++ } else if (ckc >= now) {
+ return 0;
++ }
+ return ckc_interrupts_enabled(vcpu);
+ }
+
+@@ -1011,13 +1018,19 @@ int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
+
+ static u64 __calculate_sltime(struct kvm_vcpu *vcpu)
+ {
+- u64 now, cputm, sltime = 0;
++ const u64 now = kvm_s390_get_tod_clock_fast(vcpu->kvm);
++ const u64 ckc = vcpu->arch.sie_block->ckc;
++ u64 cputm, sltime = 0;
+
+ if (ckc_interrupts_enabled(vcpu)) {
+- now = kvm_s390_get_tod_clock_fast(vcpu->kvm);
+- sltime = tod_to_ns(vcpu->arch.sie_block->ckc - now);
+- /* already expired or overflow? */
+- if (!sltime || vcpu->arch.sie_block->ckc <= now)
++ if (vcpu->arch.sie_block->gcr[0] & 0x0020000000000000ul) {
++ if ((s64)now < (s64)ckc)
++ sltime = tod_to_ns((s64)ckc - (s64)now);
++ } else if (now < ckc) {
++ sltime = tod_to_ns(ckc - now);
++ }
++ /* already expired */
++ if (!sltime)
+ return 0;
+ if (cpu_timer_interrupts_enabled(vcpu)) {
+ cputm = kvm_s390_get_cpu_timer(vcpu);
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 1371dff2b90d..5c03e371b7b8 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -166,6 +166,28 @@ int kvm_arch_hardware_enable(void)
+ static void kvm_gmap_notifier(struct gmap *gmap, unsigned long start,
+ unsigned long end);
+
++static void kvm_clock_sync_scb(struct kvm_s390_sie_block *scb, u64 delta)
++{
++ u8 delta_idx = 0;
++
++ /*
++ * The TOD jumps by delta, we have to compensate this by adding
++ * -delta to the epoch.
++ */
++ delta = -delta;
++
++ /* sign-extension - we're adding to signed values below */
++ if ((s64)delta < 0)
++ delta_idx = -1;
++
++ scb->epoch += delta;
++ if (scb->ecd & ECD_MEF) {
++ scb->epdx += delta_idx;
++ if (scb->epoch < delta)
++ scb->epdx += 1;
++ }
++}
++
+ /*
+ * This callback is executed during stop_machine(). All CPUs are therefore
+ * temporarily stopped. In order not to change guest behavior, we have to
+@@ -181,13 +203,17 @@ static int kvm_clock_sync(struct notifier_block *notifier, unsigned long val,
+ unsigned long long *delta = v;
+
+ list_for_each_entry(kvm, &vm_list, vm_list) {
+- kvm->arch.epoch -= *delta;
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+- vcpu->arch.sie_block->epoch -= *delta;
++ kvm_clock_sync_scb(vcpu->arch.sie_block, *delta);
++ if (i == 0) {
++ kvm->arch.epoch = vcpu->arch.sie_block->epoch;
++ kvm->arch.epdx = vcpu->arch.sie_block->epdx;
++ }
+ if (vcpu->arch.cputm_enabled)
+ vcpu->arch.cputm_start += *delta;
+ if (vcpu->arch.vsie_block)
+- vcpu->arch.vsie_block->epoch -= *delta;
++ kvm_clock_sync_scb(vcpu->arch.vsie_block,
++ *delta);
+ }
+ }
+ return NOTIFY_OK;
+@@ -889,12 +915,9 @@ static int kvm_s390_set_tod_ext(struct kvm *kvm, struct kvm_device_attr *attr)
+ if (copy_from_user(&gtod, (void __user *)attr->addr, sizeof(gtod)))
+ return -EFAULT;
+
+- if (test_kvm_facility(kvm, 139))
+- kvm_s390_set_tod_clock_ext(kvm, &gtod);
+- else if (gtod.epoch_idx == 0)
+- kvm_s390_set_tod_clock(kvm, gtod.tod);
+- else
++ if (!test_kvm_facility(kvm, 139) && gtod.epoch_idx)
+ return -EINVAL;
++ kvm_s390_set_tod_clock(kvm, &gtod);
+
+ VM_EVENT(kvm, 3, "SET: TOD extension: 0x%x, TOD base: 0x%llx",
+ gtod.epoch_idx, gtod.tod);
+@@ -919,13 +942,14 @@ static int kvm_s390_set_tod_high(struct kvm *kvm, struct kvm_device_attr *attr)
+
+ static int kvm_s390_set_tod_low(struct kvm *kvm, struct kvm_device_attr *attr)
+ {
+- u64 gtod;
++ struct kvm_s390_vm_tod_clock gtod = { 0 };
+
+- if (copy_from_user(&gtod, (void __user *)attr->addr, sizeof(gtod)))
++ if (copy_from_user(&gtod.tod, (void __user *)attr->addr,
++ sizeof(gtod.tod)))
+ return -EFAULT;
+
+- kvm_s390_set_tod_clock(kvm, gtod);
+- VM_EVENT(kvm, 3, "SET: TOD base: 0x%llx", gtod);
++ kvm_s390_set_tod_clock(kvm, &gtod);
++ VM_EVENT(kvm, 3, "SET: TOD base: 0x%llx", gtod.tod);
+ return 0;
+ }
+
+@@ -2361,6 +2385,7 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
+ mutex_lock(&vcpu->kvm->lock);
+ preempt_disable();
+ vcpu->arch.sie_block->epoch = vcpu->kvm->arch.epoch;
++ vcpu->arch.sie_block->epdx = vcpu->kvm->arch.epdx;
+ preempt_enable();
+ mutex_unlock(&vcpu->kvm->lock);
+ if (!kvm_is_ucontrol(vcpu->kvm)) {
+@@ -2947,8 +2972,8 @@ static int kvm_s390_handle_requests(struct kvm_vcpu *vcpu)
+ return 0;
+ }
+
+-void kvm_s390_set_tod_clock_ext(struct kvm *kvm,
+- const struct kvm_s390_vm_tod_clock *gtod)
++void kvm_s390_set_tod_clock(struct kvm *kvm,
++ const struct kvm_s390_vm_tod_clock *gtod)
+ {
+ struct kvm_vcpu *vcpu;
+ struct kvm_s390_tod_clock_ext htod;
+@@ -2960,10 +2985,12 @@ void kvm_s390_set_tod_clock_ext(struct kvm *kvm,
+ get_tod_clock_ext((char *)&htod);
+
+ kvm->arch.epoch = gtod->tod - htod.tod;
+- kvm->arch.epdx = gtod->epoch_idx - htod.epoch_idx;
+-
+- if (kvm->arch.epoch > gtod->tod)
+- kvm->arch.epdx -= 1;
++ kvm->arch.epdx = 0;
++ if (test_kvm_facility(kvm, 139)) {
++ kvm->arch.epdx = gtod->epoch_idx - htod.epoch_idx;
++ if (kvm->arch.epoch > gtod->tod)
++ kvm->arch.epdx -= 1;
++ }
+
+ kvm_s390_vcpu_block_all(kvm);
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+@@ -2976,22 +3003,6 @@ void kvm_s390_set_tod_clock_ext(struct kvm *kvm,
+ mutex_unlock(&kvm->lock);
+ }
+
+-void kvm_s390_set_tod_clock(struct kvm *kvm, u64 tod)
+-{
+- struct kvm_vcpu *vcpu;
+- int i;
+-
+- mutex_lock(&kvm->lock);
+- preempt_disable();
+- kvm->arch.epoch = tod - get_tod_clock();
+- kvm_s390_vcpu_block_all(kvm);
+- kvm_for_each_vcpu(i, vcpu, kvm)
+- vcpu->arch.sie_block->epoch = kvm->arch.epoch;
+- kvm_s390_vcpu_unblock_all(kvm);
+- preempt_enable();
+- mutex_unlock(&kvm->lock);
+-}
+-
+ /**
+ * kvm_arch_fault_in_page - fault-in guest page if necessary
+ * @vcpu: The corresponding virtual cpu
+diff --git a/arch/s390/kvm/kvm-s390.h b/arch/s390/kvm/kvm-s390.h
+index 5e46ba429bcb..efa186f065fb 100644
+--- a/arch/s390/kvm/kvm-s390.h
++++ b/arch/s390/kvm/kvm-s390.h
+@@ -268,9 +268,8 @@ int kvm_s390_handle_sigp(struct kvm_vcpu *vcpu);
+ int kvm_s390_handle_sigp_pei(struct kvm_vcpu *vcpu);
+
+ /* implemented in kvm-s390.c */
+-void kvm_s390_set_tod_clock_ext(struct kvm *kvm,
+- const struct kvm_s390_vm_tod_clock *gtod);
+-void kvm_s390_set_tod_clock(struct kvm *kvm, u64 tod);
++void kvm_s390_set_tod_clock(struct kvm *kvm,
++ const struct kvm_s390_vm_tod_clock *gtod);
+ long kvm_arch_fault_in_page(struct kvm_vcpu *vcpu, gpa_t gpa, int writable);
+ int kvm_s390_store_status_unloaded(struct kvm_vcpu *vcpu, unsigned long addr);
+ int kvm_s390_vcpu_store_status(struct kvm_vcpu *vcpu, unsigned long addr);
+diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
+index 0714bfa56da0..23bebdbbf490 100644
+--- a/arch/s390/kvm/priv.c
++++ b/arch/s390/kvm/priv.c
+@@ -81,9 +81,10 @@ int kvm_s390_handle_e3(struct kvm_vcpu *vcpu)
+ /* Handle SCK (SET CLOCK) interception */
+ static int handle_set_clock(struct kvm_vcpu *vcpu)
+ {
++ struct kvm_s390_vm_tod_clock gtod = { 0 };
+ int rc;
+ u8 ar;
+- u64 op2, val;
++ u64 op2;
+
+ if (vcpu->arch.sie_block->gpsw.mask & PSW_MASK_PSTATE)
+ return kvm_s390_inject_program_int(vcpu, PGM_PRIVILEGED_OP);
+@@ -91,12 +92,12 @@ static int handle_set_clock(struct kvm_vcpu *vcpu)
+ op2 = kvm_s390_get_base_disp_s(vcpu, &ar);
+ if (op2 & 7) /* Operand must be on a doubleword boundary */
+ return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
+- rc = read_guest(vcpu, op2, ar, &val, sizeof(val));
++ rc = read_guest(vcpu, op2, ar, &gtod.tod, sizeof(gtod.tod));
+ if (rc)
+ return kvm_s390_inject_prog_cond(vcpu, rc);
+
+- VCPU_EVENT(vcpu, 3, "SCK: setting guest TOD to 0x%llx", val);
+- kvm_s390_set_tod_clock(vcpu->kvm, val);
++ VCPU_EVENT(vcpu, 3, "SCK: setting guest TOD to 0x%llx", gtod.tod);
++ kvm_s390_set_tod_clock(vcpu->kvm, &gtod);
+
+ kvm_s390_set_psw_cc(vcpu, 0);
+ return 0;
+diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
+index e42b8943cb1a..cd0dba7a2293 100644
+--- a/arch/x86/include/asm/pgtable.h
++++ b/arch/x86/include/asm/pgtable.h
+@@ -350,14 +350,14 @@ static inline pmd_t pmd_set_flags(pmd_t pmd, pmdval_t set)
+ {
+ pmdval_t v = native_pmd_val(pmd);
+
+- return __pmd(v | set);
++ return native_make_pmd(v | set);
+ }
+
+ static inline pmd_t pmd_clear_flags(pmd_t pmd, pmdval_t clear)
+ {
+ pmdval_t v = native_pmd_val(pmd);
+
+- return __pmd(v & ~clear);
++ return native_make_pmd(v & ~clear);
+ }
+
+ static inline pmd_t pmd_mkold(pmd_t pmd)
+@@ -409,14 +409,14 @@ static inline pud_t pud_set_flags(pud_t pud, pudval_t set)
+ {
+ pudval_t v = native_pud_val(pud);
+
+- return __pud(v | set);
++ return native_make_pud(v | set);
+ }
+
+ static inline pud_t pud_clear_flags(pud_t pud, pudval_t clear)
+ {
+ pudval_t v = native_pud_val(pud);
+
+- return __pud(v & ~clear);
++ return native_make_pud(v & ~clear);
+ }
+
+ static inline pud_t pud_mkold(pud_t pud)
+diff --git a/arch/x86/include/asm/pgtable_32.h b/arch/x86/include/asm/pgtable_32.h
+index e55466760ff8..b3ec519e3982 100644
+--- a/arch/x86/include/asm/pgtable_32.h
++++ b/arch/x86/include/asm/pgtable_32.h
+@@ -32,6 +32,7 @@ extern pmd_t initial_pg_pmd[];
+ static inline void pgtable_cache_init(void) { }
+ static inline void check_pgt_cache(void) { }
+ void paging_init(void);
++void sync_initial_page_table(void);
+
+ /*
+ * Define this if things work differently on an i386 and an i486:
+diff --git a/arch/x86/include/asm/pgtable_64.h b/arch/x86/include/asm/pgtable_64.h
+index 81462e9a34f6..1149d2112b2e 100644
+--- a/arch/x86/include/asm/pgtable_64.h
++++ b/arch/x86/include/asm/pgtable_64.h
+@@ -28,6 +28,7 @@ extern pgd_t init_top_pgt[];
+ #define swapper_pg_dir init_top_pgt
+
+ extern void paging_init(void);
++static inline void sync_initial_page_table(void) { }
+
+ #define pte_ERROR(e) \
+ pr_err("%s:%d: bad pte %p(%016lx)\n", \
+diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
+index 3696398a9475..246f15b4e64c 100644
+--- a/arch/x86/include/asm/pgtable_types.h
++++ b/arch/x86/include/asm/pgtable_types.h
+@@ -323,6 +323,11 @@ static inline pudval_t native_pud_val(pud_t pud)
+ #else
+ #include <asm-generic/pgtable-nopud.h>
+
++static inline pud_t native_make_pud(pudval_t val)
++{
++ return (pud_t) { .p4d.pgd = native_make_pgd(val) };
++}
++
+ static inline pudval_t native_pud_val(pud_t pud)
+ {
+ return native_pgd_val(pud.p4d.pgd);
+@@ -344,6 +349,11 @@ static inline pmdval_t native_pmd_val(pmd_t pmd)
+ #else
+ #include <asm-generic/pgtable-nopmd.h>
+
++static inline pmd_t native_make_pmd(pmdval_t val)
++{
++ return (pmd_t) { .pud.p4d.pgd = native_make_pgd(val) };
++}
++
+ static inline pmdval_t native_pmd_val(pmd_t pmd)
+ {
+ return native_pgd_val(pmd.pud.p4d.pgd);
+diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
+index 68d7ab81c62f..1fbe6b9fff37 100644
+--- a/arch/x86/kernel/setup.c
++++ b/arch/x86/kernel/setup.c
+@@ -1205,20 +1205,13 @@ void __init setup_arch(char **cmdline_p)
+
+ kasan_init();
+
+-#ifdef CONFIG_X86_32
+- /* sync back kernel address range */
+- clone_pgd_range(initial_page_table + KERNEL_PGD_BOUNDARY,
+- swapper_pg_dir + KERNEL_PGD_BOUNDARY,
+- KERNEL_PGD_PTRS);
+-
+ /*
+- * sync back low identity map too. It is used for example
+- * in the 32-bit EFI stub.
++ * Sync back kernel address range.
++ *
++ * FIXME: Can the later sync in setup_cpu_entry_areas() replace
++ * this call?
+ */
+- clone_pgd_range(initial_page_table,
+- swapper_pg_dir + KERNEL_PGD_BOUNDARY,
+- min(KERNEL_PGD_PTRS, KERNEL_PGD_BOUNDARY));
+-#endif
++ sync_initial_page_table();
+
+ tboot_probe();
+
+diff --git a/arch/x86/kernel/setup_percpu.c b/arch/x86/kernel/setup_percpu.c
+index 497aa766fab3..ea554f812ee1 100644
+--- a/arch/x86/kernel/setup_percpu.c
++++ b/arch/x86/kernel/setup_percpu.c
+@@ -287,24 +287,15 @@ void __init setup_per_cpu_areas(void)
+ /* Setup cpu initialized, callin, callout masks */
+ setup_cpu_local_masks();
+
+-#ifdef CONFIG_X86_32
+ /*
+ * Sync back kernel address range again. We already did this in
+ * setup_arch(), but percpu data also needs to be available in
+ * the smpboot asm. We can't reliably pick up percpu mappings
+ * using vmalloc_fault(), because exception dispatch needs
+ * percpu data.
++ *
++ * FIXME: Can the later sync in setup_cpu_entry_areas() replace
++ * this call?
+ */
+- clone_pgd_range(initial_page_table + KERNEL_PGD_BOUNDARY,
+- swapper_pg_dir + KERNEL_PGD_BOUNDARY,
+- KERNEL_PGD_PTRS);
+-
+- /*
+- * sync back low identity map too. It is used for example
+- * in the 32-bit EFI stub.
+- */
+- clone_pgd_range(initial_page_table,
+- swapper_pg_dir + KERNEL_PGD_BOUNDARY,
+- min(KERNEL_PGD_PTRS, KERNEL_PGD_BOUNDARY));
+-#endif
++ sync_initial_page_table();
+ }
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index e2c1fb8d35ce..dbb8b476b41b 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -1993,14 +1993,13 @@ void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value)
+
+ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
+ {
+- struct kvm_lapic *apic;
++ struct kvm_lapic *apic = vcpu->arch.apic;
+ int i;
+
+- apic_debug("%s\n", __func__);
++ if (!apic)
++ return;
+
+- ASSERT(vcpu);
+- apic = vcpu->arch.apic;
+- ASSERT(apic != NULL);
++ apic_debug("%s\n", __func__);
+
+ /* Stop the timer in case it's a reset to an active apic */
+ hrtimer_cancel(&apic->lapic_timer.timer);
+@@ -2156,7 +2155,6 @@ int kvm_create_lapic(struct kvm_vcpu *vcpu)
+ */
+ vcpu->arch.apic_base = MSR_IA32_APICBASE_ENABLE;
+ static_key_slow_inc(&apic_sw_disabled.key); /* sw disabled at reset */
+- kvm_lapic_reset(vcpu, false);
+ kvm_iodevice_init(&apic->dev, &apic_mmio_ops);
+
+ return 0;
+@@ -2560,7 +2558,6 @@ void kvm_apic_accept_events(struct kvm_vcpu *vcpu)
+
+ pe = xchg(&apic->pending_events, 0);
+ if (test_bit(KVM_APIC_INIT, &pe)) {
+- kvm_lapic_reset(vcpu, true);
+ kvm_vcpu_reset(vcpu, true);
+ if (kvm_vcpu_is_bsp(apic->vcpu))
+ vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index cc83bdcb65d1..e080dbe55360 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -3017,7 +3017,7 @@ static int kvm_handle_bad_page(struct kvm_vcpu *vcpu, gfn_t gfn, kvm_pfn_t pfn)
+ return RET_PF_RETRY;
+ }
+
+- return -EFAULT;
++ return RET_PF_EMULATE;
+ }
+
+ static void transparent_hugepage_adjust(struct kvm_vcpu *vcpu,
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 4e3c79530526..3505afabce5d 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -45,6 +45,7 @@
+ #include <asm/debugreg.h>
+ #include <asm/kvm_para.h>
+ #include <asm/irq_remapping.h>
++#include <asm/microcode.h>
+ #include <asm/nospec-branch.h>
+
+ #include <asm/virtext.h>
+@@ -5029,7 +5030,7 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ * being speculatively taken.
+ */
+ if (svm->spec_ctrl)
+- wrmsrl(MSR_IA32_SPEC_CTRL, svm->spec_ctrl);
++ native_wrmsrl(MSR_IA32_SPEC_CTRL, svm->spec_ctrl);
+
+ asm volatile (
+ "push %%" _ASM_BP "; \n\t"
+@@ -5138,11 +5139,11 @@ static void svm_vcpu_run(struct kvm_vcpu *vcpu)
+ * If the L02 MSR bitmap does not intercept the MSR, then we need to
+ * save it.
+ */
+- if (!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL))
+- rdmsrl(MSR_IA32_SPEC_CTRL, svm->spec_ctrl);
++ if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
++ svm->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
+
+ if (svm->spec_ctrl)
+- wrmsrl(MSR_IA32_SPEC_CTRL, 0);
++ native_wrmsrl(MSR_IA32_SPEC_CTRL, 0);
+
+ /* Eliminate branch target predictions from guest mode */
+ vmexit_fill_RSB();
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 561d8937fac5..87b453eeae40 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -51,6 +51,7 @@
+ #include <asm/apic.h>
+ #include <asm/irq_remapping.h>
+ #include <asm/mmu_context.h>
++#include <asm/microcode.h>
+ #include <asm/nospec-branch.h>
+
+ #include "trace.h"
+@@ -9443,7 +9444,7 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ * being speculatively taken.
+ */
+ if (vmx->spec_ctrl)
+- wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
++ native_wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
+
+ vmx->__launched = vmx->loaded_vmcs->launched;
+ asm(
+@@ -9578,11 +9579,11 @@ static void __noclone vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ * If the L02 MSR bitmap does not intercept the MSR, then we need to
+ * save it.
+ */
+- if (!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL))
+- rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
++ if (unlikely(!msr_write_intercepted(vcpu, MSR_IA32_SPEC_CTRL)))
++ vmx->spec_ctrl = native_read_msr(MSR_IA32_SPEC_CTRL);
+
+ if (vmx->spec_ctrl)
+- wrmsrl(MSR_IA32_SPEC_CTRL, 0);
++ native_wrmsrl(MSR_IA32_SPEC_CTRL, 0);
+
+ /* Eliminate branch target predictions from guest mode */
+ vmexit_fill_RSB();
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 17f4eca37d22..a10da5052072 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -7835,6 +7835,8 @@ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
+
+ void kvm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
+ {
++ kvm_lapic_reset(vcpu, init_event);
++
+ vcpu->arch.hflags = 0;
+
+ vcpu->arch.smi_pending = 0;
+@@ -8279,10 +8281,8 @@ int __x86_set_memory_region(struct kvm *kvm, int id, gpa_t gpa, u32 size)
+ return r;
+ }
+
+- if (!size) {
+- r = vm_munmap(old.userspace_addr, old.npages * PAGE_SIZE);
+- WARN_ON(r < 0);
+- }
++ if (!size)
++ vm_munmap(old.userspace_addr, old.npages * PAGE_SIZE);
+
+ return 0;
+ }
+diff --git a/arch/x86/mm/cpu_entry_area.c b/arch/x86/mm/cpu_entry_area.c
+index b9283cc27622..476d810639a8 100644
+--- a/arch/x86/mm/cpu_entry_area.c
++++ b/arch/x86/mm/cpu_entry_area.c
+@@ -163,4 +163,10 @@ void __init setup_cpu_entry_areas(void)
+
+ for_each_possible_cpu(cpu)
+ setup_cpu_entry_area(cpu);
++
++ /*
++ * This is the last essential update to swapper_pgdir which needs
++ * to be synchronized to initial_page_table on 32bit.
++ */
++ sync_initial_page_table();
+ }
+diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
+index 135c9a7898c7..3141e67ec24c 100644
+--- a/arch/x86/mm/init_32.c
++++ b/arch/x86/mm/init_32.c
+@@ -453,6 +453,21 @@ static inline void permanent_kmaps_init(pgd_t *pgd_base)
+ }
+ #endif /* CONFIG_HIGHMEM */
+
++void __init sync_initial_page_table(void)
++{
++ clone_pgd_range(initial_page_table + KERNEL_PGD_BOUNDARY,
++ swapper_pg_dir + KERNEL_PGD_BOUNDARY,
++ KERNEL_PGD_PTRS);
++
++ /*
++ * sync back low identity map too. It is used for example
++ * in the 32-bit EFI stub.
++ */
++ clone_pgd_range(initial_page_table,
++ swapper_pg_dir + KERNEL_PGD_BOUNDARY,
++ min(KERNEL_PGD_PTRS, KERNEL_PGD_BOUNDARY));
++}
++
+ void __init native_pagetable_init(void)
+ {
+ unsigned long pfn, va;
+diff --git a/arch/x86/platform/intel-mid/intel-mid.c b/arch/x86/platform/intel-mid/intel-mid.c
+index 86676cec99a1..09dd7f3cf621 100644
+--- a/arch/x86/platform/intel-mid/intel-mid.c
++++ b/arch/x86/platform/intel-mid/intel-mid.c
+@@ -79,7 +79,7 @@ static void intel_mid_power_off(void)
+
+ static void intel_mid_reboot(void)
+ {
+- intel_scu_ipc_simple_command(IPCMSG_COLD_BOOT, 0);
++ intel_scu_ipc_simple_command(IPCMSG_COLD_RESET, 0);
+ }
+
+ static unsigned long __init intel_mid_calibrate_tsc(void)
+diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
+index d9f96cc5d743..1d83152c761b 100644
+--- a/arch/x86/xen/suspend.c
++++ b/arch/x86/xen/suspend.c
+@@ -1,12 +1,15 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/types.h>
+ #include <linux/tick.h>
++#include <linux/percpu-defs.h>
+
+ #include <xen/xen.h>
+ #include <xen/interface/xen.h>
+ #include <xen/grant_table.h>
+ #include <xen/events.h>
+
++#include <asm/cpufeatures.h>
++#include <asm/msr-index.h>
+ #include <asm/xen/hypercall.h>
+ #include <asm/xen/page.h>
+ #include <asm/fixmap.h>
+@@ -15,6 +18,8 @@
+ #include "mmu.h"
+ #include "pmu.h"
+
++static DEFINE_PER_CPU(u64, spec_ctrl);
++
+ void xen_arch_pre_suspend(void)
+ {
+ xen_save_time_memory_area();
+@@ -35,6 +40,9 @@ void xen_arch_post_suspend(int cancelled)
+
+ static void xen_vcpu_notify_restore(void *data)
+ {
++ if (xen_pv_domain() && boot_cpu_has(X86_FEATURE_SPEC_CTRL))
++ wrmsrl(MSR_IA32_SPEC_CTRL, this_cpu_read(spec_ctrl));
++
+ /* Boot processor notified via generic timekeeping_resume() */
+ if (smp_processor_id() == 0)
+ return;
+@@ -44,7 +52,15 @@ static void xen_vcpu_notify_restore(void *data)
+
+ static void xen_vcpu_notify_suspend(void *data)
+ {
++ u64 tmp;
++
+ tick_suspend_local();
++
++ if (xen_pv_domain() && boot_cpu_has(X86_FEATURE_SPEC_CTRL)) {
++ rdmsrl(MSR_IA32_SPEC_CTRL, tmp);
++ this_cpu_write(spec_ctrl, tmp);
++ wrmsrl(MSR_IA32_SPEC_CTRL, 0);
++ }
+ }
+
+ void xen_arch_resume(void)
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 82b92adf3477..b725d9e340c2 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -2401,7 +2401,7 @@ blk_qc_t submit_bio(struct bio *bio)
+ unsigned int count;
+
+ if (unlikely(bio_op(bio) == REQ_OP_WRITE_SAME))
+- count = queue_logical_block_size(bio->bi_disk->queue);
++ count = queue_logical_block_size(bio->bi_disk->queue) >> 9;
+ else
+ count = bio_sectors(bio);
+
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 3d3797327491..5629f18b51bd 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -655,7 +655,6 @@ static void __blk_mq_requeue_request(struct request *rq)
+
+ trace_block_rq_requeue(q, rq);
+ wbt_requeue(q->rq_wb, &rq->issue_stat);
+- blk_mq_sched_requeue_request(rq);
+
+ if (test_and_clear_bit(REQ_ATOM_STARTED, &rq->atomic_flags)) {
+ if (q->dma_drain_size && blk_rq_bytes(rq))
+@@ -667,6 +666,9 @@ void blk_mq_requeue_request(struct request *rq, bool kick_requeue_list)
+ {
+ __blk_mq_requeue_request(rq);
+
++ /* this request will be re-inserted to io scheduler queue */
++ blk_mq_sched_requeue_request(rq);
++
+ BUG_ON(blk_queued_rq(rq));
+ blk_mq_add_to_requeue_list(rq, true, kick_requeue_list);
+ }
+diff --git a/block/ioctl.c b/block/ioctl.c
+index 1668506d8ed8..3884d810efd2 100644
+--- a/block/ioctl.c
++++ b/block/ioctl.c
+@@ -225,7 +225,7 @@ static int blk_ioctl_discard(struct block_device *bdev, fmode_t mode,
+
+ if (start + len > i_size_read(bdev->bd_inode))
+ return -EINVAL;
+- truncate_inode_pages_range(mapping, start, start + len);
++ truncate_inode_pages_range(mapping, start, start + len - 1);
+ return blkdev_issue_discard(bdev, start >> 9, len >> 9,
+ GFP_KERNEL, flags);
+ }
+diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
+index f95c60774ce8..0d6d25e32e1f 100644
+--- a/block/kyber-iosched.c
++++ b/block/kyber-iosched.c
+@@ -833,6 +833,7 @@ static struct elevator_type kyber_sched = {
+ .limit_depth = kyber_limit_depth,
+ .prepare_request = kyber_prepare_request,
+ .finish_request = kyber_finish_request,
++ .requeue_request = kyber_finish_request,
+ .completed_request = kyber_completed_request,
+ .dispatch_request = kyber_dispatch_request,
+ .has_work = kyber_has_work,
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index 4d0979e02a28..b6d58cc58f5f 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -66,10 +66,37 @@ static int set_copy_dsdt(const struct dmi_system_id *id)
+ return 0;
+ }
+ #endif
++static int set_gbl_term_list(const struct dmi_system_id *id)
++{
++ acpi_gbl_parse_table_as_term_list = 1;
++ return 0;
++}
+
+-static const struct dmi_system_id dsdt_dmi_table[] __initconst = {
++static const struct dmi_system_id acpi_quirks_dmi_table[] __initconst = {
++ /*
++ * Touchpad on Dell XPS 9570/Precision M5530 doesn't work under I2C
++ * mode.
++ * https://bugzilla.kernel.org/show_bug.cgi?id=198515
++ */
++ {
++ .callback = set_gbl_term_list,
++ .ident = "Dell Precision M5530",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Precision M5530"),
++ },
++ },
++ {
++ .callback = set_gbl_term_list,
++ .ident = "Dell XPS 15 9570",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "XPS 15 9570"),
++ },
++ },
+ /*
+ * Invoke DSDT corruption work-around on all Toshiba Satellite.
++ * DSDT will be copied to memory.
+ * https://bugzilla.kernel.org/show_bug.cgi?id=14679
+ */
+ {
+@@ -83,7 +110,7 @@ static const struct dmi_system_id dsdt_dmi_table[] __initconst = {
+ {}
+ };
+ #else
+-static const struct dmi_system_id dsdt_dmi_table[] __initconst = {
++static const struct dmi_system_id acpi_quirks_dmi_table[] __initconst = {
+ {}
+ };
+ #endif
+@@ -1001,11 +1028,8 @@ void __init acpi_early_init(void)
+
+ acpi_permanent_mmap = true;
+
+- /*
+- * If the machine falls into the DMI check table,
+- * DSDT will be copied to memory
+- */
+- dmi_check_system(dsdt_dmi_table);
++ /* Check machine-specific quirks */
++ dmi_check_system(acpi_quirks_dmi_table);
+
+ status = acpi_reallocate_root_table();
+ if (ACPI_FAILURE(status)) {
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 76980e78ae56..e71e54c478da 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -21,6 +21,7 @@
+ *
+ */
+
++#include <linux/dmi.h>
+ #include <linux/module.h>
+ #include <linux/usb.h>
+ #include <linux/usb/quirks.h>
+@@ -376,6 +377,21 @@ static const struct usb_device_id blacklist_table[] = {
+ { } /* Terminating entry */
+ };
+
++/* The Bluetooth USB module build into some devices needs to be reset on resume,
++ * this is a problem with the platform (likely shutting off all power) not with
++ * the module itself. So we use a DMI list to match known broken platforms.
++ */
++static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
++ {
++ /* Lenovo Yoga 920 (QCA Rome device 0cf3:e300) */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo YOGA 920"),
++ },
++ },
++ {}
++};
++
+ #define BTUSB_MAX_ISOC_FRAMES 10
+
+ #define BTUSB_INTR_RUNNING 0
+@@ -3031,6 +3047,9 @@ static int btusb_probe(struct usb_interface *intf,
+ hdev->send = btusb_send_frame;
+ hdev->notify = btusb_notify;
+
++ if (dmi_check_system(btusb_needs_reset_resume_table))
++ interface_to_usbdev(intf)->quirks |= USB_QUIRK_RESET_RESUME;
++
+ #ifdef CONFIG_PM
+ err = btusb_config_oob_wake(hdev);
+ if (err)
+@@ -3117,12 +3136,6 @@ static int btusb_probe(struct usb_interface *intf,
+ if (id->driver_info & BTUSB_QCA_ROME) {
+ data->setup_on_usb = btusb_setup_qca;
+ hdev->set_bdaddr = btusb_set_bdaddr_ath3012;
+-
+- /* QCA Rome devices lose their updated firmware over suspend,
+- * but the USB hub doesn't notice any status change.
+- * explicitly request a device reset on resume.
+- */
+- interface_to_usbdev(intf)->quirks |= USB_QUIRK_RESET_RESUME;
+ }
+
+ #ifdef CONFIG_BT_HCIBTUSB_RTL
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index 71fad747c0c7..7499b0cd8326 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -2045,6 +2045,7 @@ static int try_smi_init(struct smi_info *new_smi)
+ int rv = 0;
+ int i;
+ char *init_name = NULL;
++ bool platform_device_registered = false;
+
+ pr_info(PFX "Trying %s-specified %s state machine at %s address 0x%lx, slave address 0x%x, irq %d\n",
+ ipmi_addr_src_to_str(new_smi->io.addr_source),
+@@ -2173,6 +2174,7 @@ static int try_smi_init(struct smi_info *new_smi)
+ rv);
+ goto out_err;
+ }
++ platform_device_registered = true;
+ }
+
+ dev_set_drvdata(new_smi->io.dev, new_smi);
+@@ -2279,10 +2281,11 @@ static int try_smi_init(struct smi_info *new_smi)
+ }
+
+ if (new_smi->pdev) {
+- platform_device_unregister(new_smi->pdev);
++ if (platform_device_registered)
++ platform_device_unregister(new_smi->pdev);
++ else
++ platform_device_put(new_smi->pdev);
+ new_smi->pdev = NULL;
+- } else if (new_smi->pdev) {
+- platform_device_put(new_smi->pdev);
+ }
+
+ kfree(init_name);
+diff --git a/drivers/char/tpm/st33zp24/st33zp24.c b/drivers/char/tpm/st33zp24/st33zp24.c
+index 4d1dc8b46877..f95b9c75175b 100644
+--- a/drivers/char/tpm/st33zp24/st33zp24.c
++++ b/drivers/char/tpm/st33zp24/st33zp24.c
+@@ -457,7 +457,7 @@ static int st33zp24_recv(struct tpm_chip *chip, unsigned char *buf,
+ size_t count)
+ {
+ int size = 0;
+- int expected;
++ u32 expected;
+
+ if (!chip)
+ return -EBUSY;
+@@ -474,7 +474,7 @@ static int st33zp24_recv(struct tpm_chip *chip, unsigned char *buf,
+ }
+
+ expected = be32_to_cpu(*(__be32 *)(buf + 2));
+- if (expected > count) {
++ if (expected > count || expected < TPM_HEADER_SIZE) {
+ size = -EIO;
+ goto out;
+ }
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 1d6729be4cd6..3cec403a80b3 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -1228,6 +1228,10 @@ int tpm_get_random(u32 chip_num, u8 *out, size_t max)
+ break;
+
+ recd = be32_to_cpu(tpm_cmd.params.getrandom_out.rng_data_len);
++ if (recd > num_bytes) {
++ total = -EFAULT;
++ break;
++ }
+
+ rlength = be32_to_cpu(tpm_cmd.header.out.length);
+ if (rlength < offsetof(struct tpm_getrandom_out, rng_data) +
+diff --git a/drivers/char/tpm/tpm2-cmd.c b/drivers/char/tpm/tpm2-cmd.c
+index f40d20671a78..f6be08483ae6 100644
+--- a/drivers/char/tpm/tpm2-cmd.c
++++ b/drivers/char/tpm/tpm2-cmd.c
+@@ -683,6 +683,10 @@ static int tpm2_unseal_cmd(struct tpm_chip *chip,
+ if (!rc) {
+ data_len = be16_to_cpup(
+ (__be16 *) &buf.data[TPM_HEADER_SIZE + 4]);
++ if (data_len < MIN_KEY_SIZE || data_len > MAX_KEY_SIZE + 1) {
++ rc = -EFAULT;
++ goto out;
++ }
+
+ rlength = be32_to_cpu(((struct tpm2_cmd *)&buf)
+ ->header.out.length);
+diff --git a/drivers/char/tpm/tpm_i2c_infineon.c b/drivers/char/tpm/tpm_i2c_infineon.c
+index 79d6bbb58e39..d5b44cadac56 100644
+--- a/drivers/char/tpm/tpm_i2c_infineon.c
++++ b/drivers/char/tpm/tpm_i2c_infineon.c
+@@ -473,7 +473,8 @@ static int recv_data(struct tpm_chip *chip, u8 *buf, size_t count)
+ static int tpm_tis_i2c_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ {
+ int size = 0;
+- int expected, status;
++ int status;
++ u32 expected;
+
+ if (count < TPM_HEADER_SIZE) {
+ size = -EIO;
+@@ -488,7 +489,7 @@ static int tpm_tis_i2c_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ }
+
+ expected = be32_to_cpu(*(__be32 *)(buf + 2));
+- if ((size_t) expected > count) {
++ if (((size_t) expected > count) || (expected < TPM_HEADER_SIZE)) {
+ size = -EIO;
+ goto out;
+ }
+diff --git a/drivers/char/tpm/tpm_i2c_nuvoton.c b/drivers/char/tpm/tpm_i2c_nuvoton.c
+index c6428771841f..caa86b19c76d 100644
+--- a/drivers/char/tpm/tpm_i2c_nuvoton.c
++++ b/drivers/char/tpm/tpm_i2c_nuvoton.c
+@@ -281,7 +281,11 @@ static int i2c_nuvoton_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ struct device *dev = chip->dev.parent;
+ struct i2c_client *client = to_i2c_client(dev);
+ s32 rc;
+- int expected, status, burst_count, retries, size = 0;
++ int status;
++ int burst_count;
++ int retries;
++ int size = 0;
++ u32 expected;
+
+ if (count < TPM_HEADER_SIZE) {
+ i2c_nuvoton_ready(chip); /* return to idle */
+@@ -323,7 +327,7 @@ static int i2c_nuvoton_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ * to machine native
+ */
+ expected = be32_to_cpu(*(__be32 *) (buf + 2));
+- if (expected > count) {
++ if (expected > count || expected < size) {
+ dev_err(dev, "%s() expected > count\n", __func__);
+ size = -EIO;
+ continue;
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index fdde971bc810..7561922bc8f8 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -202,7 +202,8 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ {
+ struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+ int size = 0;
+- int expected, status;
++ int status;
++ u32 expected;
+
+ if (count < TPM_HEADER_SIZE) {
+ size = -EIO;
+@@ -217,7 +218,7 @@ static int tpm_tis_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ }
+
+ expected = be32_to_cpu(*(__be32 *) (buf + 2));
+- if (expected > count) {
++ if (expected > count || expected < TPM_HEADER_SIZE) {
+ size = -EIO;
+ goto out;
+ }
+diff --git a/drivers/cpufreq/s3c24xx-cpufreq.c b/drivers/cpufreq/s3c24xx-cpufreq.c
+index 7b596fa38ad2..6bebc1f9f55a 100644
+--- a/drivers/cpufreq/s3c24xx-cpufreq.c
++++ b/drivers/cpufreq/s3c24xx-cpufreq.c
+@@ -351,7 +351,13 @@ struct clk *s3c_cpufreq_clk_get(struct device *dev, const char *name)
+ static int s3c_cpufreq_init(struct cpufreq_policy *policy)
+ {
+ policy->clk = clk_arm;
+- return cpufreq_generic_init(policy, ftab, cpu_cur.info->latency);
++
++ policy->cpuinfo.transition_latency = cpu_cur.info->latency;
++
++ if (ftab)
++ return cpufreq_table_validate_and_show(policy, ftab);
++
++ return 0;
+ }
+
+ static int __init s3c_cpufreq_initclks(void)
+diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
+index f34430f99fd8..872100215ca0 100644
+--- a/drivers/edac/sb_edac.c
++++ b/drivers/edac/sb_edac.c
+@@ -279,7 +279,7 @@ static const u32 correrrthrsld[] = {
+ * sbridge structs
+ */
+
+-#define NUM_CHANNELS 4 /* Max channels per MC */
++#define NUM_CHANNELS 6 /* Max channels per MC */
+ #define MAX_DIMMS 3 /* Max DIMMS per channel */
+ #define KNL_MAX_CHAS 38 /* KNL max num. of Cache Home Agents */
+ #define KNL_MAX_CHANNELS 6 /* KNL max num. of PCI channels */
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 4e4dee0ec2de..926542fbc892 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -8554,6 +8554,10 @@ static int remove_and_add_spares(struct mddev *mddev,
+ int removed = 0;
+ bool remove_some = false;
+
++ if (this && test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
++ /* Mustn't remove devices when resync thread is running */
++ return 0;
++
+ rdev_for_each(rdev, mddev) {
+ if ((this == NULL || rdev == this) &&
+ rdev->raid_disk >= 0 &&
+diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c
+index 50bce68ffd66..65d157fe76d1 100644
+--- a/drivers/media/dvb-frontends/m88ds3103.c
++++ b/drivers/media/dvb-frontends/m88ds3103.c
+@@ -1262,11 +1262,12 @@ static int m88ds3103_select(struct i2c_mux_core *muxc, u32 chan)
+ * New users must use I2C client binding directly!
+ */
+ struct dvb_frontend *m88ds3103_attach(const struct m88ds3103_config *cfg,
+- struct i2c_adapter *i2c, struct i2c_adapter **tuner_i2c_adapter)
++ struct i2c_adapter *i2c,
++ struct i2c_adapter **tuner_i2c_adapter)
+ {
+ struct i2c_client *client;
+ struct i2c_board_info board_info;
+- struct m88ds3103_platform_data pdata;
++ struct m88ds3103_platform_data pdata = {};
+
+ pdata.clk = cfg->clock;
+ pdata.i2c_wr_max = cfg->i2c_wr_max;
+@@ -1409,6 +1410,8 @@ static int m88ds3103_probe(struct i2c_client *client,
+ case M88DS3103_CHIP_ID:
+ break;
+ default:
++ ret = -ENODEV;
++ dev_err(&client->dev, "Unknown device. Chip_id=%02x\n", dev->chip_id);
+ goto err_kfree;
+ }
+
+diff --git a/drivers/mmc/host/dw_mmc-exynos.c b/drivers/mmc/host/dw_mmc-exynos.c
+index 35026795be28..fa41d9422d57 100644
+--- a/drivers/mmc/host/dw_mmc-exynos.c
++++ b/drivers/mmc/host/dw_mmc-exynos.c
+@@ -487,6 +487,7 @@ static unsigned long exynos_dwmmc_caps[4] = {
+
+ static const struct dw_mci_drv_data exynos_drv_data = {
+ .caps = exynos_dwmmc_caps,
++ .num_caps = ARRAY_SIZE(exynos_dwmmc_caps),
+ .init = dw_mci_exynos_priv_init,
+ .set_ios = dw_mci_exynos_set_ios,
+ .parse_dt = dw_mci_exynos_parse_dt,
+diff --git a/drivers/mmc/host/dw_mmc-k3.c b/drivers/mmc/host/dw_mmc-k3.c
+index 73fd75c3c824..89cdb3d533bb 100644
+--- a/drivers/mmc/host/dw_mmc-k3.c
++++ b/drivers/mmc/host/dw_mmc-k3.c
+@@ -135,6 +135,9 @@ static int dw_mci_hi6220_parse_dt(struct dw_mci *host)
+ if (priv->ctrl_id < 0)
+ priv->ctrl_id = 0;
+
++ if (priv->ctrl_id >= TIMING_MODE)
++ return -EINVAL;
++
+ host->priv = priv;
+ return 0;
+ }
+@@ -207,6 +210,7 @@ static int dw_mci_hi6220_execute_tuning(struct dw_mci_slot *slot, u32 opcode)
+
+ static const struct dw_mci_drv_data hi6220_data = {
+ .caps = dw_mci_hi6220_caps,
++ .num_caps = ARRAY_SIZE(dw_mci_hi6220_caps),
+ .switch_voltage = dw_mci_hi6220_switch_voltage,
+ .set_ios = dw_mci_hi6220_set_ios,
+ .parse_dt = dw_mci_hi6220_parse_dt,
+diff --git a/drivers/mmc/host/dw_mmc-rockchip.c b/drivers/mmc/host/dw_mmc-rockchip.c
+index a3f1c2b30145..339295212935 100644
+--- a/drivers/mmc/host/dw_mmc-rockchip.c
++++ b/drivers/mmc/host/dw_mmc-rockchip.c
+@@ -319,6 +319,7 @@ static const struct dw_mci_drv_data rk2928_drv_data = {
+
+ static const struct dw_mci_drv_data rk3288_drv_data = {
+ .caps = dw_mci_rk3288_dwmmc_caps,
++ .num_caps = ARRAY_SIZE(dw_mci_rk3288_dwmmc_caps),
+ .set_ios = dw_mci_rk3288_set_ios,
+ .execute_tuning = dw_mci_rk3288_execute_tuning,
+ .parse_dt = dw_mci_rk3288_parse_dt,
+diff --git a/drivers/mmc/host/dw_mmc-zx.c b/drivers/mmc/host/dw_mmc-zx.c
+index d38e94ae2b85..c06b5393312f 100644
+--- a/drivers/mmc/host/dw_mmc-zx.c
++++ b/drivers/mmc/host/dw_mmc-zx.c
+@@ -195,6 +195,7 @@ static unsigned long zx_dwmmc_caps[3] = {
+
+ static const struct dw_mci_drv_data zx_drv_data = {
+ .caps = zx_dwmmc_caps,
++ .num_caps = ARRAY_SIZE(zx_dwmmc_caps),
+ .execute_tuning = dw_mci_zx_execute_tuning,
+ .prepare_hs400_tuning = dw_mci_zx_prepare_hs400_tuning,
+ .parse_dt = dw_mci_zx_parse_dt,
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index 0aa39975f33b..d9b4acefed31 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -165,6 +165,8 @@ static int dw_mci_regs_show(struct seq_file *s, void *v)
+ {
+ struct dw_mci *host = s->private;
+
++ pm_runtime_get_sync(host->dev);
++
+ seq_printf(s, "STATUS:\t0x%08x\n", mci_readl(host, STATUS));
+ seq_printf(s, "RINTSTS:\t0x%08x\n", mci_readl(host, RINTSTS));
+ seq_printf(s, "CMD:\t0x%08x\n", mci_readl(host, CMD));
+@@ -172,6 +174,8 @@ static int dw_mci_regs_show(struct seq_file *s, void *v)
+ seq_printf(s, "INTMASK:\t0x%08x\n", mci_readl(host, INTMASK));
+ seq_printf(s, "CLKENA:\t0x%08x\n", mci_readl(host, CLKENA));
+
++ pm_runtime_put_autosuspend(host->dev);
++
+ return 0;
+ }
+
+@@ -2778,12 +2782,57 @@ static irqreturn_t dw_mci_interrupt(int irq, void *dev_id)
+ return IRQ_HANDLED;
+ }
+
++static int dw_mci_init_slot_caps(struct dw_mci_slot *slot)
++{
++ struct dw_mci *host = slot->host;
++ const struct dw_mci_drv_data *drv_data = host->drv_data;
++ struct mmc_host *mmc = slot->mmc;
++ int ctrl_id;
++
++ if (host->pdata->caps)
++ mmc->caps = host->pdata->caps;
++
++ /*
++ * Support MMC_CAP_ERASE by default.
++ * It needs to use trim/discard/erase commands.
++ */
++ mmc->caps |= MMC_CAP_ERASE;
++
++ if (host->pdata->pm_caps)
++ mmc->pm_caps = host->pdata->pm_caps;
++
++ if (host->dev->of_node) {
++ ctrl_id = of_alias_get_id(host->dev->of_node, "mshc");
++ if (ctrl_id < 0)
++ ctrl_id = 0;
++ } else {
++ ctrl_id = to_platform_device(host->dev)->id;
++ }
++
++ if (drv_data && drv_data->caps) {
++ if (ctrl_id >= drv_data->num_caps) {
++ dev_err(host->dev, "invalid controller id %d\n",
++ ctrl_id);
++ return -EINVAL;
++ }
++ mmc->caps |= drv_data->caps[ctrl_id];
++ }
++
++ if (host->pdata->caps2)
++ mmc->caps2 = host->pdata->caps2;
++
++ /* Process SDIO IRQs through the sdio_irq_work. */
++ if (mmc->caps & MMC_CAP_SDIO_IRQ)
++ mmc->caps2 |= MMC_CAP2_SDIO_IRQ_NOTHREAD;
++
++ return 0;
++}
++
+ static int dw_mci_init_slot(struct dw_mci *host)
+ {
+ struct mmc_host *mmc;
+ struct dw_mci_slot *slot;
+- const struct dw_mci_drv_data *drv_data = host->drv_data;
+- int ctrl_id, ret;
++ int ret;
+ u32 freq[2];
+
+ mmc = mmc_alloc_host(sizeof(struct dw_mci_slot), host->dev);
+@@ -2817,38 +2866,13 @@ static int dw_mci_init_slot(struct dw_mci *host)
+ if (!mmc->ocr_avail)
+ mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
+
+- if (host->pdata->caps)
+- mmc->caps = host->pdata->caps;
+-
+- /*
+- * Support MMC_CAP_ERASE by default.
+- * It needs to use trim/discard/erase commands.
+- */
+- mmc->caps |= MMC_CAP_ERASE;
+-
+- if (host->pdata->pm_caps)
+- mmc->pm_caps = host->pdata->pm_caps;
+-
+- if (host->dev->of_node) {
+- ctrl_id = of_alias_get_id(host->dev->of_node, "mshc");
+- if (ctrl_id < 0)
+- ctrl_id = 0;
+- } else {
+- ctrl_id = to_platform_device(host->dev)->id;
+- }
+- if (drv_data && drv_data->caps)
+- mmc->caps |= drv_data->caps[ctrl_id];
+-
+- if (host->pdata->caps2)
+- mmc->caps2 = host->pdata->caps2;
+-
+ ret = mmc_of_parse(mmc);
+ if (ret)
+ goto err_host_allocated;
+
+- /* Process SDIO IRQs through the sdio_irq_work. */
+- if (mmc->caps & MMC_CAP_SDIO_IRQ)
+- mmc->caps2 |= MMC_CAP2_SDIO_IRQ_NOTHREAD;
++ ret = dw_mci_init_slot_caps(slot);
++ if (ret)
++ goto err_host_allocated;
+
+ /* Useful defaults if platform data is unset. */
+ if (host->use_dma == TRANS_MODE_IDMAC) {
+diff --git a/drivers/mmc/host/dw_mmc.h b/drivers/mmc/host/dw_mmc.h
+index e3124f06a47e..1424bd490dd1 100644
+--- a/drivers/mmc/host/dw_mmc.h
++++ b/drivers/mmc/host/dw_mmc.h
+@@ -543,6 +543,7 @@ struct dw_mci_slot {
+ /**
+ * dw_mci driver data - dw-mshc implementation specific driver data.
+ * @caps: mmc subsystem specified capabilities of the controller(s).
++ * @num_caps: number of capabilities specified by @caps.
+ * @init: early implementation specific initialization.
+ * @set_ios: handle bus specific extensions.
+ * @parse_dt: parse implementation specific device tree properties.
+@@ -554,6 +555,7 @@ struct dw_mci_slot {
+ */
+ struct dw_mci_drv_data {
+ unsigned long *caps;
++ u32 num_caps;
+ int (*init)(struct dw_mci *host);
+ void (*set_ios)(struct dw_mci *host, struct mmc_ios *ios);
+ int (*parse_dt)(struct dw_mci *host);
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index 3e4f04fd5175..bf93e8b0b191 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -593,9 +593,36 @@ static void byt_read_dsm(struct sdhci_pci_slot *slot)
+ slot->chip->rpm_retune = intel_host->d3_retune;
+ }
+
+-static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
++static int intel_execute_tuning(struct mmc_host *mmc, u32 opcode)
++{
++ int err = sdhci_execute_tuning(mmc, opcode);
++ struct sdhci_host *host = mmc_priv(mmc);
++
++ if (err)
++ return err;
++
++ /*
++ * Tuning can leave the IP in an active state (Buffer Read Enable bit
++ * set) which prevents the entry to low power states (i.e. S0i3). Data
++ * reset will clear it.
++ */
++ sdhci_reset(host, SDHCI_RESET_DATA);
++
++ return 0;
++}
++
++static void byt_probe_slot(struct sdhci_pci_slot *slot)
+ {
++ struct mmc_host_ops *ops = &slot->host->mmc_host_ops;
++
+ byt_read_dsm(slot);
++
++ ops->execute_tuning = intel_execute_tuning;
++}
++
++static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
++{
++ byt_probe_slot(slot);
+ slot->host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
+ MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR |
+ MMC_CAP_CMD_DURING_TFR |
+@@ -650,7 +677,7 @@ static int ni_byt_sdio_probe_slot(struct sdhci_pci_slot *slot)
+ {
+ int err;
+
+- byt_read_dsm(slot);
++ byt_probe_slot(slot);
+
+ err = ni_set_max_freq(slot);
+ if (err)
+@@ -663,7 +690,7 @@ static int ni_byt_sdio_probe_slot(struct sdhci_pci_slot *slot)
+
+ static int byt_sdio_probe_slot(struct sdhci_pci_slot *slot)
+ {
+- byt_read_dsm(slot);
++ byt_probe_slot(slot);
+ slot->host->mmc->caps |= MMC_CAP_POWER_OFF_CARD | MMC_CAP_NONREMOVABLE |
+ MMC_CAP_WAIT_WHILE_BUSY;
+ return 0;
+@@ -671,7 +698,7 @@ static int byt_sdio_probe_slot(struct sdhci_pci_slot *slot)
+
+ static int byt_sd_probe_slot(struct sdhci_pci_slot *slot)
+ {
+- byt_read_dsm(slot);
++ byt_probe_slot(slot);
+ slot->host->mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY |
+ MMC_CAP_AGGRESSIVE_PM | MMC_CAP_CD_WAKE;
+ slot->cd_idx = 0;
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+index a74a8fbad53a..2e6075ce5dca 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+@@ -595,7 +595,7 @@ static void xgbe_isr_task(unsigned long data)
+
+ reissue_mask = 1 << 0;
+ if (!pdata->per_channel_irq)
+- reissue_mask |= 0xffff < 4;
++ reissue_mask |= 0xffff << 4;
+
+ XP_IOWRITE(pdata, XP_INT_REISSUE_EN, reissue_mask);
+ }
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
+index 3e5833cf1fab..eb23f9ba1a9a 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-pci.c
+@@ -426,6 +426,8 @@ static int xgbe_pci_resume(struct pci_dev *pdev)
+ struct net_device *netdev = pdata->netdev;
+ int ret = 0;
+
++ XP_IOWRITE(pdata, XP_INT_EN, 0x1fffff);
++
+ pdata->lpm_ctrl &= ~MDIO_CTRL1_LPOWER;
+ XMDIO_WRITE(pdata, MDIO_MMD_PCS, MDIO_CTRL1, pdata->lpm_ctrl);
+
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+index d699bf88d18f..6044fdcf6056 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+@@ -156,7 +156,7 @@ int cudbg_collect_cim_la(struct cudbg_init *pdbg_init,
+
+ if (is_t6(padap->params.chip)) {
+ size = padap->params.cim_la_size / 10 + 1;
+- size *= 11 * sizeof(u32);
++ size *= 10 * sizeof(u32);
+ } else {
+ size = padap->params.cim_la_size / 8;
+ size *= 8 * sizeof(u32);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_cudbg.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_cudbg.c
+index 29cc625e9833..97465101e0b9 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_cudbg.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_cudbg.c
+@@ -97,7 +97,7 @@ static u32 cxgb4_get_entity_length(struct adapter *adap, u32 entity)
+ case CUDBG_CIM_LA:
+ if (is_t6(adap->params.chip)) {
+ len = adap->params.cim_la_size / 10 + 1;
+- len *= 11 * sizeof(u32);
++ len *= 10 * sizeof(u32);
+ } else {
+ len = adap->params.cim_la_size / 8;
+ len *= 8 * sizeof(u32);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 62a18914f00f..a7113e702f58 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -1878,6 +1878,14 @@ static void ixgbe_dma_sync_frag(struct ixgbe_ring *rx_ring,
+ ixgbe_rx_pg_size(rx_ring),
+ DMA_FROM_DEVICE,
+ IXGBE_RX_DMA_ATTR);
++ } else if (ring_uses_build_skb(rx_ring)) {
++ unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK;
++
++ dma_sync_single_range_for_cpu(rx_ring->dev,
++ IXGBE_CB(skb)->dma,
++ offset,
++ skb_headlen(skb),
++ DMA_FROM_DEVICE);
+ } else {
+ struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index d8aefeed124d..0d352d4cf48c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -1911,13 +1911,16 @@ static void mlx5e_build_rq_param(struct mlx5e_priv *priv,
+ param->wq.linear = 1;
+ }
+
+-static void mlx5e_build_drop_rq_param(struct mlx5e_rq_param *param)
++static void mlx5e_build_drop_rq_param(struct mlx5_core_dev *mdev,
++ struct mlx5e_rq_param *param)
+ {
+ void *rqc = param->rqc;
+ void *wq = MLX5_ADDR_OF(rqc, rqc, wq);
+
+ MLX5_SET(wq, wq, wq_type, MLX5_WQ_TYPE_LINKED_LIST);
+ MLX5_SET(wq, wq, log_wq_stride, ilog2(sizeof(struct mlx5e_rx_wqe)));
++
++ param->wq.buf_numa_node = dev_to_node(&mdev->pdev->dev);
+ }
+
+ static void mlx5e_build_sq_param_common(struct mlx5e_priv *priv,
+@@ -2774,6 +2777,9 @@ static int mlx5e_alloc_drop_cq(struct mlx5_core_dev *mdev,
+ struct mlx5e_cq *cq,
+ struct mlx5e_cq_param *param)
+ {
++ param->wq.buf_numa_node = dev_to_node(&mdev->pdev->dev);
++ param->wq.db_numa_node = dev_to_node(&mdev->pdev->dev);
++
+ return mlx5e_alloc_cq_common(mdev, param, cq);
+ }
+
+@@ -2785,7 +2791,7 @@ static int mlx5e_open_drop_rq(struct mlx5_core_dev *mdev,
+ struct mlx5e_cq *cq = &drop_rq->cq;
+ int err;
+
+- mlx5e_build_drop_rq_param(&rq_param);
++ mlx5e_build_drop_rq_param(mdev, &rq_param);
+
+ err = mlx5e_alloc_drop_cq(mdev, cq, &cq_param);
+ if (err)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 5b499c7a698f..36611b64a91c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -36,6 +36,7 @@
+ #include <linux/tcp.h>
+ #include <linux/bpf_trace.h>
+ #include <net/busy_poll.h>
++#include <net/ip6_checksum.h>
+ #include "en.h"
+ #include "en_tc.h"
+ #include "eswitch.h"
+@@ -547,20 +548,33 @@ bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq)
+ return true;
+ }
+
++static void mlx5e_lro_update_tcp_hdr(struct mlx5_cqe64 *cqe, struct tcphdr *tcp)
++{
++ u8 l4_hdr_type = get_cqe_l4_hdr_type(cqe);
++ u8 tcp_ack = (l4_hdr_type == CQE_L4_HDR_TYPE_TCP_ACK_NO_DATA) ||
++ (l4_hdr_type == CQE_L4_HDR_TYPE_TCP_ACK_AND_DATA);
++
++ tcp->check = 0;
++ tcp->psh = get_cqe_lro_tcppsh(cqe);
++
++ if (tcp_ack) {
++ tcp->ack = 1;
++ tcp->ack_seq = cqe->lro_ack_seq_num;
++ tcp->window = cqe->lro_tcp_win;
++ }
++}
++
+ static void mlx5e_lro_update_hdr(struct sk_buff *skb, struct mlx5_cqe64 *cqe,
+ u32 cqe_bcnt)
+ {
+ struct ethhdr *eth = (struct ethhdr *)(skb->data);
+ struct tcphdr *tcp;
+ int network_depth = 0;
++ __wsum check;
+ __be16 proto;
+ u16 tot_len;
+ void *ip_p;
+
+- u8 l4_hdr_type = get_cqe_l4_hdr_type(cqe);
+- u8 tcp_ack = (l4_hdr_type == CQE_L4_HDR_TYPE_TCP_ACK_NO_DATA) ||
+- (l4_hdr_type == CQE_L4_HDR_TYPE_TCP_ACK_AND_DATA);
+-
+ proto = __vlan_get_protocol(skb, eth->h_proto, &network_depth);
+
+ tot_len = cqe_bcnt - network_depth;
+@@ -577,23 +591,30 @@ static void mlx5e_lro_update_hdr(struct sk_buff *skb, struct mlx5_cqe64 *cqe,
+ ipv4->check = 0;
+ ipv4->check = ip_fast_csum((unsigned char *)ipv4,
+ ipv4->ihl);
++
++ mlx5e_lro_update_tcp_hdr(cqe, tcp);
++ check = csum_partial(tcp, tcp->doff * 4,
++ csum_unfold((__force __sum16)cqe->check_sum));
++ /* Almost done, don't forget the pseudo header */
++ tcp->check = csum_tcpudp_magic(ipv4->saddr, ipv4->daddr,
++ tot_len - sizeof(struct iphdr),
++ IPPROTO_TCP, check);
+ } else {
++ u16 payload_len = tot_len - sizeof(struct ipv6hdr);
+ struct ipv6hdr *ipv6 = ip_p;
+
+ tcp = ip_p + sizeof(struct ipv6hdr);
+ skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
+
+ ipv6->hop_limit = cqe->lro_min_ttl;
+- ipv6->payload_len = cpu_to_be16(tot_len -
+- sizeof(struct ipv6hdr));
+- }
+-
+- tcp->psh = get_cqe_lro_tcppsh(cqe);
+-
+- if (tcp_ack) {
+- tcp->ack = 1;
+- tcp->ack_seq = cqe->lro_ack_seq_num;
+- tcp->window = cqe->lro_tcp_win;
++ ipv6->payload_len = cpu_to_be16(payload_len);
++
++ mlx5e_lro_update_tcp_hdr(cqe, tcp);
++ check = csum_partial(tcp, tcp->doff * 4,
++ csum_unfold((__force __sum16)cqe->check_sum));
++ /* Almost done, don't forget the pseudo header */
++ tcp->check = csum_ipv6_magic(&ipv6->saddr, &ipv6->daddr, payload_len,
++ IPPROTO_TCP, check);
+ }
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
+index 5a4608281f38..707976482c09 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_selftest.c
+@@ -216,7 +216,8 @@ mlx5e_test_loopback_validate(struct sk_buff *skb,
+ if (iph->protocol != IPPROTO_UDP)
+ goto out;
+
+- udph = udp_hdr(skb);
++ /* Don't assume skb_transport_header() was set */
++ udph = (struct udphdr *)((u8 *)iph + 4 * iph->ihl);
+ if (udph->dest != htons(9))
+ goto out;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index 569b42a01026..11b4f1089d1c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -176,7 +176,7 @@ static inline u16 mlx5e_calc_min_inline(enum mlx5_inline_modes mode,
+ default:
+ hlen = mlx5e_skb_l2_header_offset(skb);
+ }
+- return min_t(u16, hlen, skb->len);
++ return min_t(u16, hlen, skb_headlen(skb));
+ }
+
+ static inline void mlx5e_tx_skb_pull_inline(unsigned char **skb_data,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index dfaad9ecb2b8..a681693631aa 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -1755,8 +1755,11 @@ _mlx5_add_flow_rules(struct mlx5_flow_table *ft,
+
+ /* Collect all fgs which has a matching match_criteria */
+ err = build_match_list(&match_head, ft, spec);
+- if (err)
++ if (err) {
++ if (take_write)
++ up_write_ref_node(&ft->node);
+ return ERR_PTR(err);
++ }
+
+ if (!take_write)
+ up_read_ref_node(&ft->node);
+@@ -1765,8 +1768,11 @@ _mlx5_add_flow_rules(struct mlx5_flow_table *ft,
+ dest_num, version);
+ free_match_list(&match_head);
+ if (!IS_ERR(rule) ||
+- (PTR_ERR(rule) != -ENOENT && PTR_ERR(rule) != -EAGAIN))
++ (PTR_ERR(rule) != -ENOENT && PTR_ERR(rule) != -EAGAIN)) {
++ if (take_write)
++ up_write_ref_node(&ft->node);
+ return rule;
++ }
+
+ if (!take_write) {
+ nested_down_write_ref_node(&ft->node, FS_LOCK_GRANDPARENT);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+index 7042c855a5d6..7e50dbc8282c 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_router.c
+@@ -737,6 +737,9 @@ static struct mlxsw_sp_vr *mlxsw_sp_vr_create(struct mlxsw_sp *mlxsw_sp,
+ u32 tb_id,
+ struct netlink_ext_ack *extack)
+ {
++ struct mlxsw_sp_mr_table *mr4_table;
++ struct mlxsw_sp_fib *fib4;
++ struct mlxsw_sp_fib *fib6;
+ struct mlxsw_sp_vr *vr;
+ int err;
+
+@@ -745,29 +748,30 @@ static struct mlxsw_sp_vr *mlxsw_sp_vr_create(struct mlxsw_sp *mlxsw_sp,
+ NL_SET_ERR_MSG(extack, "spectrum: Exceeded number of supported virtual routers");
+ return ERR_PTR(-EBUSY);
+ }
+- vr->fib4 = mlxsw_sp_fib_create(vr, MLXSW_SP_L3_PROTO_IPV4);
+- if (IS_ERR(vr->fib4))
+- return ERR_CAST(vr->fib4);
+- vr->fib6 = mlxsw_sp_fib_create(vr, MLXSW_SP_L3_PROTO_IPV6);
+- if (IS_ERR(vr->fib6)) {
+- err = PTR_ERR(vr->fib6);
++ fib4 = mlxsw_sp_fib_create(vr, MLXSW_SP_L3_PROTO_IPV4);
++ if (IS_ERR(fib4))
++ return ERR_CAST(fib4);
++ fib6 = mlxsw_sp_fib_create(vr, MLXSW_SP_L3_PROTO_IPV6);
++ if (IS_ERR(fib6)) {
++ err = PTR_ERR(fib6);
+ goto err_fib6_create;
+ }
+- vr->mr4_table = mlxsw_sp_mr_table_create(mlxsw_sp, vr->id,
+- MLXSW_SP_L3_PROTO_IPV4);
+- if (IS_ERR(vr->mr4_table)) {
+- err = PTR_ERR(vr->mr4_table);
++ mr4_table = mlxsw_sp_mr_table_create(mlxsw_sp, vr->id,
++ MLXSW_SP_L3_PROTO_IPV4);
++ if (IS_ERR(mr4_table)) {
++ err = PTR_ERR(mr4_table);
+ goto err_mr_table_create;
+ }
++ vr->fib4 = fib4;
++ vr->fib6 = fib6;
++ vr->mr4_table = mr4_table;
+ vr->tb_id = tb_id;
+ return vr;
+
+ err_mr_table_create:
+- mlxsw_sp_fib_destroy(vr->fib6);
+- vr->fib6 = NULL;
++ mlxsw_sp_fib_destroy(fib6);
+ err_fib6_create:
+- mlxsw_sp_fib_destroy(vr->fib4);
+- vr->fib4 = NULL;
++ mlxsw_sp_fib_destroy(fib4);
+ return ERR_PTR(err);
+ }
+
+@@ -3761,6 +3765,9 @@ mlxsw_sp_fib4_entry_offload_unset(struct mlxsw_sp_fib_entry *fib_entry)
+ struct mlxsw_sp_nexthop_group *nh_grp = fib_entry->nh_group;
+ int i;
+
++ if (!list_is_singular(&nh_grp->fib_list))
++ return;
++
+ for (i = 0; i < nh_grp->count; i++) {
+ struct mlxsw_sp_nexthop *nh = &nh_grp->nexthops[i];
+
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+index 593ad31be749..161bcdc012f0 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+@@ -1203,6 +1203,7 @@ static int __mlxsw_sp_port_fdb_uc_op(struct mlxsw_sp *mlxsw_sp, u8 local_port,
+ bool dynamic)
+ {
+ char *sfd_pl;
++ u8 num_rec;
+ int err;
+
+ sfd_pl = kmalloc(MLXSW_REG_SFD_LEN, GFP_KERNEL);
+@@ -1212,9 +1213,16 @@ static int __mlxsw_sp_port_fdb_uc_op(struct mlxsw_sp *mlxsw_sp, u8 local_port,
+ mlxsw_reg_sfd_pack(sfd_pl, mlxsw_sp_sfd_op(adding), 0);
+ mlxsw_reg_sfd_uc_pack(sfd_pl, 0, mlxsw_sp_sfd_rec_policy(dynamic),
+ mac, fid, action, local_port);
++ num_rec = mlxsw_reg_sfd_num_rec_get(sfd_pl);
+ err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfd), sfd_pl);
+- kfree(sfd_pl);
++ if (err)
++ goto out;
++
++ if (num_rec != mlxsw_reg_sfd_num_rec_get(sfd_pl))
++ err = -EBUSY;
+
++out:
++ kfree(sfd_pl);
+ return err;
+ }
+
+@@ -1239,6 +1247,7 @@ static int mlxsw_sp_port_fdb_uc_lag_op(struct mlxsw_sp *mlxsw_sp, u16 lag_id,
+ bool adding, bool dynamic)
+ {
+ char *sfd_pl;
++ u8 num_rec;
+ int err;
+
+ sfd_pl = kmalloc(MLXSW_REG_SFD_LEN, GFP_KERNEL);
+@@ -1249,9 +1258,16 @@ static int mlxsw_sp_port_fdb_uc_lag_op(struct mlxsw_sp *mlxsw_sp, u16 lag_id,
+ mlxsw_reg_sfd_uc_lag_pack(sfd_pl, 0, mlxsw_sp_sfd_rec_policy(dynamic),
+ mac, fid, MLXSW_REG_SFD_REC_ACTION_NOP,
+ lag_vid, lag_id);
++ num_rec = mlxsw_reg_sfd_num_rec_get(sfd_pl);
+ err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfd), sfd_pl);
+- kfree(sfd_pl);
++ if (err)
++ goto out;
++
++ if (num_rec != mlxsw_reg_sfd_num_rec_get(sfd_pl))
++ err = -EBUSY;
+
++out:
++ kfree(sfd_pl);
+ return err;
+ }
+
+@@ -1296,6 +1312,7 @@ static int mlxsw_sp_port_mdb_op(struct mlxsw_sp *mlxsw_sp, const char *addr,
+ u16 fid, u16 mid_idx, bool adding)
+ {
+ char *sfd_pl;
++ u8 num_rec;
+ int err;
+
+ sfd_pl = kmalloc(MLXSW_REG_SFD_LEN, GFP_KERNEL);
+@@ -1305,7 +1322,15 @@ static int mlxsw_sp_port_mdb_op(struct mlxsw_sp *mlxsw_sp, const char *addr,
+ mlxsw_reg_sfd_pack(sfd_pl, mlxsw_sp_sfd_op(adding), 0);
+ mlxsw_reg_sfd_mc_pack(sfd_pl, 0, addr, fid,
+ MLXSW_REG_SFD_REC_ACTION_NOP, mid_idx);
++ num_rec = mlxsw_reg_sfd_num_rec_get(sfd_pl);
+ err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sfd), sfd_pl);
++ if (err)
++ goto out;
++
++ if (num_rec != mlxsw_reg_sfd_num_rec_get(sfd_pl))
++ err = -EBUSY;
++
++out:
+ kfree(sfd_pl);
+ return err;
+ }
+diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
+index a73600dceb8b..a1ffc3ed77f9 100644
+--- a/drivers/net/ethernet/ti/cpsw.c
++++ b/drivers/net/ethernet/ti/cpsw.c
+@@ -1618,6 +1618,7 @@ static netdev_tx_t cpsw_ndo_start_xmit(struct sk_buff *skb,
+ q_idx = q_idx % cpsw->tx_ch_num;
+
+ txch = cpsw->txv[q_idx].ch;
++ txq = netdev_get_tx_queue(ndev, q_idx);
+ ret = cpsw_tx_packet_submit(priv, skb, txch);
+ if (unlikely(ret != 0)) {
+ cpsw_err(priv, tx_err, "desc submit failed\n");
+@@ -1628,15 +1629,26 @@ static netdev_tx_t cpsw_ndo_start_xmit(struct sk_buff *skb,
+ * tell the kernel to stop sending us tx frames.
+ */
+ if (unlikely(!cpdma_check_free_tx_desc(txch))) {
+- txq = netdev_get_tx_queue(ndev, q_idx);
+ netif_tx_stop_queue(txq);
++
++ /* Barrier, so that stop_queue visible to other cpus */
++ smp_mb__after_atomic();
++
++ if (cpdma_check_free_tx_desc(txch))
++ netif_tx_wake_queue(txq);
+ }
+
+ return NETDEV_TX_OK;
+ fail:
+ ndev->stats.tx_dropped++;
+- txq = netdev_get_tx_queue(ndev, skb_get_queue_mapping(skb));
+ netif_tx_stop_queue(txq);
++
++ /* Barrier, so that stop_queue visible to other cpus */
++ smp_mb__after_atomic();
++
++ if (cpdma_check_free_tx_desc(txch))
++ netif_tx_wake_queue(txq);
++
+ return NETDEV_TX_BUSY;
+ }
+
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index ed10d1fc8f59..39de77a8bb63 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -841,10 +841,10 @@ void phy_start(struct phy_device *phydev)
+ break;
+ case PHY_HALTED:
+ /* if phy was suspended, bring the physical link up again */
+- phy_resume(phydev);
++ __phy_resume(phydev);
+
+ /* make sure interrupts are re-enabled for the PHY */
+- if (phydev->irq != PHY_POLL) {
++ if (phy_interrupt_is_valid(phydev)) {
+ err = phy_enable_interrupts(phydev);
+ if (err < 0)
+ break;
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index b15b31ca2618..d312b314825e 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -135,9 +135,7 @@ static int mdio_bus_phy_resume(struct device *dev)
+ if (!mdio_bus_phy_may_suspend(phydev))
+ goto no_resume;
+
+- mutex_lock(&phydev->lock);
+ ret = phy_resume(phydev);
+- mutex_unlock(&phydev->lock);
+ if (ret < 0)
+ return ret;
+
+@@ -1028,9 +1026,7 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
+ if (err)
+ goto error;
+
+- mutex_lock(&phydev->lock);
+ phy_resume(phydev);
+- mutex_unlock(&phydev->lock);
+ phy_led_triggers_register(phydev);
+
+ return err;
+@@ -1156,7 +1152,7 @@ int phy_suspend(struct phy_device *phydev)
+ }
+ EXPORT_SYMBOL(phy_suspend);
+
+-int phy_resume(struct phy_device *phydev)
++int __phy_resume(struct phy_device *phydev)
+ {
+ struct phy_driver *phydrv = to_phy_driver(phydev->mdio.dev.driver);
+ int ret = 0;
+@@ -1173,6 +1169,18 @@ int phy_resume(struct phy_device *phydev)
+
+ return ret;
+ }
++EXPORT_SYMBOL(__phy_resume);
++
++int phy_resume(struct phy_device *phydev)
++{
++ int ret;
++
++ mutex_lock(&phydev->lock);
++ ret = __phy_resume(phydev);
++ mutex_unlock(&phydev->lock);
++
++ return ret;
++}
+ EXPORT_SYMBOL(phy_resume);
+
+ int phy_loopback(struct phy_device *phydev, bool enable)
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index 264d4af0bf69..9f79f9274c50 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -3161,6 +3161,15 @@ ppp_connect_channel(struct channel *pch, int unit)
+ goto outl;
+
+ ppp_lock(ppp);
++ spin_lock_bh(&pch->downl);
++ if (!pch->chan) {
++ /* Don't connect unregistered channels */
++ spin_unlock_bh(&pch->downl);
++ ppp_unlock(ppp);
++ ret = -ENOTCONN;
++ goto outl;
++ }
++ spin_unlock_bh(&pch->downl);
+ if (pch->file.hdrlen > ppp->file.hdrlen)
+ ppp->file.hdrlen = pch->file.hdrlen;
+ hdrlen = pch->file.hdrlen + 2; /* for protocol bytes */
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index e29cd5c7d39f..f50cf06c9353 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1471,6 +1471,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
+ else
+ *skb_xdp = 0;
+
++ preempt_disable();
+ rcu_read_lock();
+ xdp_prog = rcu_dereference(tun->xdp_prog);
+ if (xdp_prog && !*skb_xdp) {
+@@ -1490,9 +1491,11 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
+ get_page(alloc_frag->page);
+ alloc_frag->offset += buflen;
+ err = xdp_do_redirect(tun->dev, &xdp, xdp_prog);
++ xdp_do_flush_map();
+ if (err)
+ goto err_redirect;
+ rcu_read_unlock();
++ preempt_enable();
+ return NULL;
+ case XDP_TX:
+ xdp_xmit = true;
+@@ -1514,6 +1517,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
+ skb = build_skb(buf, buflen);
+ if (!skb) {
+ rcu_read_unlock();
++ preempt_enable();
+ return ERR_PTR(-ENOMEM);
+ }
+
+@@ -1526,10 +1530,12 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
+ skb->dev = tun->dev;
+ generic_xdp_tx(skb, xdp_prog);
+ rcu_read_unlock();
++ preempt_enable();
+ return NULL;
+ }
+
+ rcu_read_unlock();
++ preempt_enable();
+
+ return skb;
+
+@@ -1537,6 +1543,7 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
+ put_page(alloc_frag->page);
+ err_xdp:
+ rcu_read_unlock();
++ preempt_enable();
+ this_cpu_inc(tun->pcpu_stats->rx_dropped);
+ return NULL;
+ }
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 559b215c0169..5907a8d0e921 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -2040,8 +2040,9 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ }
+
+ /* Make sure NAPI is not using any XDP TX queues for RX. */
+- for (i = 0; i < vi->max_queue_pairs; i++)
+- napi_disable(&vi->rq[i].napi);
++ if (netif_running(dev))
++ for (i = 0; i < vi->max_queue_pairs; i++)
++ napi_disable(&vi->rq[i].napi);
+
+ netif_set_real_num_rx_queues(dev, curr_qp + xdp_qp);
+ err = _virtnet_set_queues(vi, curr_qp + xdp_qp);
+@@ -2060,7 +2061,8 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ }
+ if (old_prog)
+ bpf_prog_put(old_prog);
+- virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
++ if (netif_running(dev))
++ virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
+ }
+
+ return 0;
+diff --git a/drivers/net/wan/hdlc_ppp.c b/drivers/net/wan/hdlc_ppp.c
+index afeca6bcdade..ab8b3cbbb205 100644
+--- a/drivers/net/wan/hdlc_ppp.c
++++ b/drivers/net/wan/hdlc_ppp.c
+@@ -574,7 +574,10 @@ static void ppp_timer(struct timer_list *t)
+ ppp_cp_event(proto->dev, proto->pid, TO_GOOD, 0, 0,
+ 0, NULL);
+ proto->restart_counter--;
+- } else
++ } else if (netif_carrier_ok(proto->dev))
++ ppp_cp_event(proto->dev, proto->pid, TO_GOOD, 0, 0,
++ 0, NULL);
++ else
+ ppp_cp_event(proto->dev, proto->pid, TO_BAD, 0, 0,
+ 0, NULL);
+ break;
+diff --git a/drivers/platform/x86/dell-laptop.c b/drivers/platform/x86/dell-laptop.c
+index cd4725e7e0b5..c864430b9fcf 100644
+--- a/drivers/platform/x86/dell-laptop.c
++++ b/drivers/platform/x86/dell-laptop.c
+@@ -78,7 +78,6 @@ static struct platform_driver platform_driver = {
+ }
+ };
+
+-static struct calling_interface_buffer *buffer;
+ static struct platform_device *platform_device;
+ static struct backlight_device *dell_backlight_device;
+ static struct rfkill *wifi_rfkill;
+@@ -286,7 +285,8 @@ static const struct dmi_system_id dell_quirks[] __initconst = {
+ { }
+ };
+
+-void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
++static void dell_fill_request(struct calling_interface_buffer *buffer,
++ u32 arg0, u32 arg1, u32 arg2, u32 arg3)
+ {
+ memset(buffer, 0, sizeof(struct calling_interface_buffer));
+ buffer->input[0] = arg0;
+@@ -295,7 +295,8 @@ void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
+ buffer->input[3] = arg3;
+ }
+
+-int dell_send_request(u16 class, u16 select)
++static int dell_send_request(struct calling_interface_buffer *buffer,
++ u16 class, u16 select)
+ {
+ int ret;
+
+@@ -432,21 +433,22 @@ static int dell_rfkill_set(void *data, bool blocked)
+ int disable = blocked ? 1 : 0;
+ unsigned long radio = (unsigned long)data;
+ int hwswitch_bit = (unsigned long)data - 1;
++ struct calling_interface_buffer buffer;
+ int hwswitch;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (ret)
+ return ret;
+- status = buffer->output[1];
++ status = buffer.output[1];
+
+- dell_set_arguments(0x2, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0x2, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (ret)
+ return ret;
+- hwswitch = buffer->output[1];
++ hwswitch = buffer.output[1];
+
+ /* If the hardware switch controls this radio, and the hardware
+ switch is disabled, always disable the radio */
+@@ -454,8 +456,8 @@ static int dell_rfkill_set(void *data, bool blocked)
+ (status & BIT(0)) && !(status & BIT(16)))
+ disable = 1;
+
+- dell_set_arguments(1 | (radio<<8) | (disable << 16), 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 1 | (radio<<8) | (disable << 16), 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ return ret;
+ }
+
+@@ -464,9 +466,11 @@ static void dell_rfkill_update_sw_state(struct rfkill *rfkill, int radio,
+ {
+ if (status & BIT(0)) {
+ /* Has hw-switch, sync sw_state to BIOS */
++ struct calling_interface_buffer buffer;
+ int block = rfkill_blocked(rfkill);
+- dell_set_arguments(1 | (radio << 8) | (block << 16), 0, 0, 0);
+- dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer,
++ 1 | (radio << 8) | (block << 16), 0, 0, 0);
++ dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ } else {
+ /* No hw-switch, sync BIOS state to sw_state */
+ rfkill_set_sw_state(rfkill, !!(status & BIT(radio + 16)));
+@@ -483,21 +487,22 @@ static void dell_rfkill_update_hw_state(struct rfkill *rfkill, int radio,
+ static void dell_rfkill_query(struct rfkill *rfkill, void *data)
+ {
+ int radio = ((unsigned long)data & 0xF);
++ struct calling_interface_buffer buffer;
+ int hwswitch;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- status = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ status = buffer.output[1];
+
+ if (ret != 0 || !(status & BIT(0))) {
+ return;
+ }
+
+- dell_set_arguments(0, 0x2, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- hwswitch = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0x2, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ hwswitch = buffer.output[1];
+
+ if (ret != 0)
+ return;
+@@ -514,22 +519,23 @@ static struct dentry *dell_laptop_dir;
+
+ static int dell_debugfs_show(struct seq_file *s, void *data)
+ {
++ struct calling_interface_buffer buffer;
+ int hwswitch_state;
+ int hwswitch_ret;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (ret)
+ return ret;
+- status = buffer->output[1];
++ status = buffer.output[1];
+
+- dell_set_arguments(0, 0x2, 0, 0);
+- hwswitch_ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0x2, 0, 0);
++ hwswitch_ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+ if (hwswitch_ret)
+ return hwswitch_ret;
+- hwswitch_state = buffer->output[1];
++ hwswitch_state = buffer.output[1];
+
+ seq_printf(s, "return:\t%d\n", ret);
+ seq_printf(s, "status:\t0x%X\n", status);
+@@ -610,22 +616,23 @@ static const struct file_operations dell_debugfs_fops = {
+
+ static void dell_update_rfkill(struct work_struct *ignored)
+ {
++ struct calling_interface_buffer buffer;
+ int hwswitch = 0;
+ int status;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- status = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ status = buffer.output[1];
+
+ if (ret != 0)
+ return;
+
+- dell_set_arguments(0, 0x2, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
++ dell_fill_request(&buffer, 0, 0x2, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
+
+ if (ret == 0 && (status & BIT(0)))
+- hwswitch = buffer->output[1];
++ hwswitch = buffer.output[1];
+
+ if (wifi_rfkill) {
+ dell_rfkill_update_hw_state(wifi_rfkill, 1, status, hwswitch);
+@@ -683,6 +690,7 @@ static struct notifier_block dell_laptop_rbtn_notifier = {
+
+ static int __init dell_setup_rfkill(void)
+ {
++ struct calling_interface_buffer buffer;
+ int status, ret, whitelisted;
+ const char *product;
+
+@@ -698,9 +706,9 @@ static int __init dell_setup_rfkill(void)
+ if (!force_rfkill && !whitelisted)
+ return 0;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
+- status = buffer->output[1];
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
++ status = buffer.output[1];
+
+ /* dell wireless info smbios call is not supported */
+ if (ret != 0)
+@@ -853,6 +861,7 @@ static void dell_cleanup_rfkill(void)
+
+ static int dell_send_intensity(struct backlight_device *bd)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+
+@@ -860,17 +869,21 @@ static int dell_send_intensity(struct backlight_device *bd)
+ if (!token)
+ return -ENODEV;
+
+- dell_set_arguments(token->location, bd->props.brightness, 0, 0);
++ dell_fill_request(&buffer,
++ token->location, bd->props.brightness, 0, 0);
+ if (power_supply_is_system_supplied() > 0)
+- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
+ else
+- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
+
+ return ret;
+ }
+
+ static int dell_get_intensity(struct backlight_device *bd)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+
+@@ -878,14 +891,17 @@ static int dell_get_intensity(struct backlight_device *bd)
+ if (!token)
+ return -ENODEV;
+
+- dell_set_arguments(token->location, 0, 0, 0);
++ dell_fill_request(&buffer, token->location, 0, 0, 0);
+ if (power_supply_is_system_supplied() > 0)
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
+ else
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
+
+ if (ret == 0)
+- ret = buffer->output[1];
++ ret = buffer.output[1];
++
+ return ret;
+ }
+
+@@ -1149,31 +1165,33 @@ static DEFINE_MUTEX(kbd_led_mutex);
+
+ static int kbd_get_info(struct kbd_info *info)
+ {
++ struct calling_interface_buffer buffer;
+ u8 units;
+ int ret;
+
+- dell_set_arguments(0, 0, 0, 0);
+- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
++ dell_fill_request(&buffer, 0, 0, 0, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
+ if (ret)
+ return ret;
+
+- info->modes = buffer->output[1] & 0xFFFF;
+- info->type = (buffer->output[1] >> 24) & 0xFF;
+- info->triggers = buffer->output[2] & 0xFF;
+- units = (buffer->output[2] >> 8) & 0xFF;
+- info->levels = (buffer->output[2] >> 16) & 0xFF;
++ info->modes = buffer.output[1] & 0xFFFF;
++ info->type = (buffer.output[1] >> 24) & 0xFF;
++ info->triggers = buffer.output[2] & 0xFF;
++ units = (buffer.output[2] >> 8) & 0xFF;
++ info->levels = (buffer.output[2] >> 16) & 0xFF;
+
+ if (quirks && quirks->kbd_led_levels_off_1 && info->levels)
+ info->levels--;
+
+ if (units & BIT(0))
+- info->seconds = (buffer->output[3] >> 0) & 0xFF;
++ info->seconds = (buffer.output[3] >> 0) & 0xFF;
+ if (units & BIT(1))
+- info->minutes = (buffer->output[3] >> 8) & 0xFF;
++ info->minutes = (buffer.output[3] >> 8) & 0xFF;
+ if (units & BIT(2))
+- info->hours = (buffer->output[3] >> 16) & 0xFF;
++ info->hours = (buffer.output[3] >> 16) & 0xFF;
+ if (units & BIT(3))
+- info->days = (buffer->output[3] >> 24) & 0xFF;
++ info->days = (buffer.output[3] >> 24) & 0xFF;
+
+ return ret;
+ }
+@@ -1233,31 +1251,34 @@ static int kbd_set_level(struct kbd_state *state, u8 level)
+
+ static int kbd_get_state(struct kbd_state *state)
+ {
++ struct calling_interface_buffer buffer;
+ int ret;
+
+- dell_set_arguments(0x1, 0, 0, 0);
+- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
++ dell_fill_request(&buffer, 0x1, 0, 0, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
+ if (ret)
+ return ret;
+
+- state->mode_bit = ffs(buffer->output[1] & 0xFFFF);
++ state->mode_bit = ffs(buffer.output[1] & 0xFFFF);
+ if (state->mode_bit != 0)
+ state->mode_bit--;
+
+- state->triggers = (buffer->output[1] >> 16) & 0xFF;
+- state->timeout_value = (buffer->output[1] >> 24) & 0x3F;
+- state->timeout_unit = (buffer->output[1] >> 30) & 0x3;
+- state->als_setting = buffer->output[2] & 0xFF;
+- state->als_value = (buffer->output[2] >> 8) & 0xFF;
+- state->level = (buffer->output[2] >> 16) & 0xFF;
+- state->timeout_value_ac = (buffer->output[2] >> 24) & 0x3F;
+- state->timeout_unit_ac = (buffer->output[2] >> 30) & 0x3;
++ state->triggers = (buffer.output[1] >> 16) & 0xFF;
++ state->timeout_value = (buffer.output[1] >> 24) & 0x3F;
++ state->timeout_unit = (buffer.output[1] >> 30) & 0x3;
++ state->als_setting = buffer.output[2] & 0xFF;
++ state->als_value = (buffer.output[2] >> 8) & 0xFF;
++ state->level = (buffer.output[2] >> 16) & 0xFF;
++ state->timeout_value_ac = (buffer.output[2] >> 24) & 0x3F;
++ state->timeout_unit_ac = (buffer.output[2] >> 30) & 0x3;
+
+ return ret;
+ }
+
+ static int kbd_set_state(struct kbd_state *state)
+ {
++ struct calling_interface_buffer buffer;
+ int ret;
+ u32 input1;
+ u32 input2;
+@@ -1270,8 +1291,9 @@ static int kbd_set_state(struct kbd_state *state)
+ input2 |= (state->level & 0xFF) << 16;
+ input2 |= (state->timeout_value_ac & 0x3F) << 24;
+ input2 |= (state->timeout_unit_ac & 0x3) << 30;
+- dell_set_arguments(0x2, input1, input2, 0);
+- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
++ dell_fill_request(&buffer, 0x2, input1, input2, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
+
+ return ret;
+ }
+@@ -1298,6 +1320,7 @@ static int kbd_set_state_safe(struct kbd_state *state, struct kbd_state *old)
+
+ static int kbd_set_token_bit(u8 bit)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+
+@@ -1308,14 +1331,15 @@ static int kbd_set_token_bit(u8 bit)
+ if (!token)
+ return -EINVAL;
+
+- dell_set_arguments(token->location, token->value, 0, 0);
+- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
++ dell_fill_request(&buffer, token->location, token->value, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
+
+ return ret;
+ }
+
+ static int kbd_get_token_bit(u8 bit)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+ int ret;
+ int val;
+@@ -1327,9 +1351,9 @@ static int kbd_get_token_bit(u8 bit)
+ if (!token)
+ return -EINVAL;
+
+- dell_set_arguments(token->location, 0, 0, 0);
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_STD);
+- val = buffer->output[1];
++ dell_fill_request(&buffer, token->location, 0, 0, 0);
++ ret = dell_send_request(&buffer, CLASS_TOKEN_READ, SELECT_TOKEN_STD);
++ val = buffer.output[1];
+
+ if (ret)
+ return ret;
+@@ -2046,6 +2070,7 @@ static struct notifier_block dell_laptop_notifier = {
+
+ int dell_micmute_led_set(int state)
+ {
++ struct calling_interface_buffer buffer;
+ struct calling_interface_token *token;
+
+ if (state == 0)
+@@ -2058,8 +2083,8 @@ int dell_micmute_led_set(int state)
+ if (!token)
+ return -ENODEV;
+
+- dell_set_arguments(token->location, token->value, 0, 0);
+- dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
++ dell_fill_request(&buffer, token->location, token->value, 0, 0);
++ dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
+
+ return state;
+ }
+@@ -2090,13 +2115,6 @@ static int __init dell_init(void)
+ if (ret)
+ goto fail_platform_device2;
+
+- buffer = kzalloc(sizeof(struct calling_interface_buffer), GFP_KERNEL);
+- if (!buffer) {
+- ret = -ENOMEM;
+- goto fail_buffer;
+- }
+-
+-
+ ret = dell_setup_rfkill();
+
+ if (ret) {
+@@ -2121,10 +2139,13 @@ static int __init dell_init(void)
+
+ token = dell_smbios_find_token(BRIGHTNESS_TOKEN);
+ if (token) {
+- dell_set_arguments(token->location, 0, 0, 0);
+- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
++ struct calling_interface_buffer buffer;
++
++ dell_fill_request(&buffer, token->location, 0, 0, 0);
++ ret = dell_send_request(&buffer,
++ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
+ if (ret)
+- max_intensity = buffer->output[3];
++ max_intensity = buffer.output[3];
+ }
+
+ if (max_intensity) {
+@@ -2158,8 +2179,6 @@ static int __init dell_init(void)
+ fail_get_brightness:
+ backlight_device_unregister(dell_backlight_device);
+ fail_backlight:
+- kfree(buffer);
+-fail_buffer:
+ dell_cleanup_rfkill();
+ fail_rfkill:
+ platform_device_del(platform_device);
+@@ -2179,7 +2198,6 @@ static void __exit dell_exit(void)
+ touchpad_led_exit();
+ kbd_led_exit();
+ backlight_device_unregister(dell_backlight_device);
+- kfree(buffer);
+ dell_cleanup_rfkill();
+ if (platform_device) {
+ platform_device_unregister(platform_device);
+diff --git a/drivers/s390/net/qeth_core.h b/drivers/s390/net/qeth_core.h
+index badf42acbf95..185b3cd48b88 100644
+--- a/drivers/s390/net/qeth_core.h
++++ b/drivers/s390/net/qeth_core.h
+@@ -581,6 +581,11 @@ struct qeth_cmd_buffer {
+ void (*callback) (struct qeth_channel *, struct qeth_cmd_buffer *);
+ };
+
++static inline struct qeth_ipa_cmd *__ipa_cmd(struct qeth_cmd_buffer *iob)
++{
++ return (struct qeth_ipa_cmd *)(iob->data + IPA_PDU_HEADER_SIZE);
++}
++
+ /**
+ * definition of a qeth channel, used for read and write
+ */
+@@ -836,7 +841,7 @@ struct qeth_trap_id {
+ */
+ static inline int qeth_get_elements_for_range(addr_t start, addr_t end)
+ {
+- return PFN_UP(end - 1) - PFN_DOWN(start);
++ return PFN_UP(end) - PFN_DOWN(start);
+ }
+
+ static inline int qeth_get_micros(void)
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index 3614df68830f..61e9d0bca197 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -2057,7 +2057,7 @@ int qeth_send_control_data(struct qeth_card *card, int len,
+ unsigned long flags;
+ struct qeth_reply *reply = NULL;
+ unsigned long timeout, event_timeout;
+- struct qeth_ipa_cmd *cmd;
++ struct qeth_ipa_cmd *cmd = NULL;
+
+ QETH_CARD_TEXT(card, 2, "sendctl");
+
+@@ -2071,22 +2071,26 @@ int qeth_send_control_data(struct qeth_card *card, int len,
+ }
+ reply->callback = reply_cb;
+ reply->param = reply_param;
+- if (card->state == CARD_STATE_DOWN)
+- reply->seqno = QETH_IDX_COMMAND_SEQNO;
+- else
+- reply->seqno = card->seqno.ipa++;
++
+ init_waitqueue_head(&reply->wait_q);
+- spin_lock_irqsave(&card->lock, flags);
+- list_add_tail(&reply->list, &card->cmd_waiter_list);
+- spin_unlock_irqrestore(&card->lock, flags);
+
+ while (atomic_cmpxchg(&card->write.irq_pending, 0, 1)) ;
+- qeth_prepare_control_data(card, len, iob);
+
+- if (IS_IPA(iob->data))
++ if (IS_IPA(iob->data)) {
++ cmd = __ipa_cmd(iob);
++ cmd->hdr.seqno = card->seqno.ipa++;
++ reply->seqno = cmd->hdr.seqno;
+ event_timeout = QETH_IPA_TIMEOUT;
+- else
++ } else {
++ reply->seqno = QETH_IDX_COMMAND_SEQNO;
+ event_timeout = QETH_TIMEOUT;
++ }
++ qeth_prepare_control_data(card, len, iob);
++
++ spin_lock_irqsave(&card->lock, flags);
++ list_add_tail(&reply->list, &card->cmd_waiter_list);
++ spin_unlock_irqrestore(&card->lock, flags);
++
+ timeout = jiffies + event_timeout;
+
+ QETH_CARD_TEXT(card, 6, "noirqpnd");
+@@ -2111,9 +2115,8 @@ int qeth_send_control_data(struct qeth_card *card, int len,
+
+ /* we have only one long running ipassist, since we can ensure
+ process context of this command we can sleep */
+- cmd = (struct qeth_ipa_cmd *)(iob->data+IPA_PDU_HEADER_SIZE);
+- if ((cmd->hdr.command == IPA_CMD_SETIP) &&
+- (cmd->hdr.prot_version == QETH_PROT_IPV4)) {
++ if (cmd && cmd->hdr.command == IPA_CMD_SETIP &&
++ cmd->hdr.prot_version == QETH_PROT_IPV4) {
+ if (!wait_event_timeout(reply->wait_q,
+ atomic_read(&reply->received), event_timeout))
+ goto time_err;
+@@ -2868,7 +2871,7 @@ static void qeth_fill_ipacmd_header(struct qeth_card *card,
+ memset(cmd, 0, sizeof(struct qeth_ipa_cmd));
+ cmd->hdr.command = command;
+ cmd->hdr.initiator = IPA_CMD_INITIATOR_HOST;
+- cmd->hdr.seqno = card->seqno.ipa;
++ /* cmd->hdr.seqno is set by qeth_send_control_data() */
+ cmd->hdr.adapter_type = qeth_get_ipa_adp_type(card->info.link_type);
+ cmd->hdr.rel_adapter_no = (__u8) card->info.portno;
+ if (card->options.layer2)
+@@ -3833,10 +3836,12 @@ EXPORT_SYMBOL_GPL(qeth_get_elements_for_frags);
+ int qeth_get_elements_no(struct qeth_card *card,
+ struct sk_buff *skb, int extra_elems, int data_offset)
+ {
+- int elements = qeth_get_elements_for_range(
+- (addr_t)skb->data + data_offset,
+- (addr_t)skb->data + skb_headlen(skb)) +
+- qeth_get_elements_for_frags(skb);
++ addr_t end = (addr_t)skb->data + skb_headlen(skb);
++ int elements = qeth_get_elements_for_frags(skb);
++ addr_t start = (addr_t)skb->data + data_offset;
++
++ if (start != end)
++ elements += qeth_get_elements_for_range(start, end);
+
+ if ((elements + extra_elems) > QETH_MAX_BUFFER_ELEMENTS(card)) {
+ QETH_DBF_MESSAGE(2, "Invalid size of IP packet "
+diff --git a/drivers/s390/net/qeth_l3.h b/drivers/s390/net/qeth_l3.h
+index e5833837b799..8727b9517de8 100644
+--- a/drivers/s390/net/qeth_l3.h
++++ b/drivers/s390/net/qeth_l3.h
+@@ -40,8 +40,40 @@ struct qeth_ipaddr {
+ unsigned int pfxlen;
+ } a6;
+ } u;
+-
+ };
++
++static inline bool qeth_l3_addr_match_ip(struct qeth_ipaddr *a1,
++ struct qeth_ipaddr *a2)
++{
++ if (a1->proto != a2->proto)
++ return false;
++ if (a1->proto == QETH_PROT_IPV6)
++ return ipv6_addr_equal(&a1->u.a6.addr, &a2->u.a6.addr);
++ return a1->u.a4.addr == a2->u.a4.addr;
++}
++
++static inline bool qeth_l3_addr_match_all(struct qeth_ipaddr *a1,
++ struct qeth_ipaddr *a2)
++{
++ /* Assumes that the pair was obtained via qeth_l3_addr_find_by_ip(),
++ * so 'proto' and 'addr' match for sure.
++ *
++ * For ucast:
++ * - 'mac' is always 0.
++ * - 'mask'/'pfxlen' for RXIP/VIPA is always 0. For NORMAL, matching
++ * values are required to avoid mixups in takeover eligibility.
++ *
++ * For mcast,
++ * - 'mac' is mapped from the IP, and thus always matches.
++ * - 'mask'/'pfxlen' is always 0.
++ */
++ if (a1->type != a2->type)
++ return false;
++ if (a1->proto == QETH_PROT_IPV6)
++ return a1->u.a6.pfxlen == a2->u.a6.pfxlen;
++ return a1->u.a4.mask == a2->u.a4.mask;
++}
++
+ static inline u64 qeth_l3_ipaddr_hash(struct qeth_ipaddr *addr)
+ {
+ u64 ret = 0;
+diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
+index ef0961e18686..33131c594627 100644
+--- a/drivers/s390/net/qeth_l3_main.c
++++ b/drivers/s390/net/qeth_l3_main.c
+@@ -150,6 +150,24 @@ int qeth_l3_string_to_ipaddr(const char *buf, enum qeth_prot_versions proto,
+ return -EINVAL;
+ }
+
++static struct qeth_ipaddr *qeth_l3_find_addr_by_ip(struct qeth_card *card,
++ struct qeth_ipaddr *query)
++{
++ u64 key = qeth_l3_ipaddr_hash(query);
++ struct qeth_ipaddr *addr;
++
++ if (query->is_multicast) {
++ hash_for_each_possible(card->ip_mc_htable, addr, hnode, key)
++ if (qeth_l3_addr_match_ip(addr, query))
++ return addr;
++ } else {
++ hash_for_each_possible(card->ip_htable, addr, hnode, key)
++ if (qeth_l3_addr_match_ip(addr, query))
++ return addr;
++ }
++ return NULL;
++}
++
+ static void qeth_l3_convert_addr_to_bits(u8 *addr, u8 *bits, int len)
+ {
+ int i, j;
+@@ -203,34 +221,6 @@ static bool qeth_l3_is_addr_covered_by_ipato(struct qeth_card *card,
+ return rc;
+ }
+
+-inline int
+-qeth_l3_ipaddrs_is_equal(struct qeth_ipaddr *addr1, struct qeth_ipaddr *addr2)
+-{
+- return addr1->proto == addr2->proto &&
+- !memcmp(&addr1->u, &addr2->u, sizeof(addr1->u)) &&
+- !memcmp(&addr1->mac, &addr2->mac, sizeof(addr1->mac));
+-}
+-
+-static struct qeth_ipaddr *
+-qeth_l3_ip_from_hash(struct qeth_card *card, struct qeth_ipaddr *tmp_addr)
+-{
+- struct qeth_ipaddr *addr;
+-
+- if (tmp_addr->is_multicast) {
+- hash_for_each_possible(card->ip_mc_htable, addr,
+- hnode, qeth_l3_ipaddr_hash(tmp_addr))
+- if (qeth_l3_ipaddrs_is_equal(tmp_addr, addr))
+- return addr;
+- } else {
+- hash_for_each_possible(card->ip_htable, addr,
+- hnode, qeth_l3_ipaddr_hash(tmp_addr))
+- if (qeth_l3_ipaddrs_is_equal(tmp_addr, addr))
+- return addr;
+- }
+-
+- return NULL;
+-}
+-
+ int qeth_l3_delete_ip(struct qeth_card *card, struct qeth_ipaddr *tmp_addr)
+ {
+ int rc = 0;
+@@ -245,23 +235,18 @@ int qeth_l3_delete_ip(struct qeth_card *card, struct qeth_ipaddr *tmp_addr)
+ QETH_CARD_HEX(card, 4, ((char *)&tmp_addr->u.a6.addr) + 8, 8);
+ }
+
+- addr = qeth_l3_ip_from_hash(card, tmp_addr);
+- if (!addr)
++ addr = qeth_l3_find_addr_by_ip(card, tmp_addr);
++ if (!addr || !qeth_l3_addr_match_all(addr, tmp_addr))
+ return -ENOENT;
+
+ addr->ref_counter--;
+- if (addr->ref_counter > 0 && (addr->type == QETH_IP_TYPE_NORMAL ||
+- addr->type == QETH_IP_TYPE_RXIP))
++ if (addr->type == QETH_IP_TYPE_NORMAL && addr->ref_counter > 0)
+ return rc;
+ if (addr->in_progress)
+ return -EINPROGRESS;
+
+- if (!qeth_card_hw_is_reachable(card)) {
+- addr->disp_flag = QETH_DISP_ADDR_DELETE;
+- return 0;
+- }
+-
+- rc = qeth_l3_deregister_addr_entry(card, addr);
++ if (qeth_card_hw_is_reachable(card))
++ rc = qeth_l3_deregister_addr_entry(card, addr);
+
+ hash_del(&addr->hnode);
+ kfree(addr);
+@@ -273,6 +258,7 @@ int qeth_l3_add_ip(struct qeth_card *card, struct qeth_ipaddr *tmp_addr)
+ {
+ int rc = 0;
+ struct qeth_ipaddr *addr;
++ char buf[40];
+
+ QETH_CARD_TEXT(card, 4, "addip");
+
+@@ -283,8 +269,20 @@ int qeth_l3_add_ip(struct qeth_card *card, struct qeth_ipaddr *tmp_addr)
+ QETH_CARD_HEX(card, 4, ((char *)&tmp_addr->u.a6.addr) + 8, 8);
+ }
+
+- addr = qeth_l3_ip_from_hash(card, tmp_addr);
+- if (!addr) {
++ addr = qeth_l3_find_addr_by_ip(card, tmp_addr);
++ if (addr) {
++ if (tmp_addr->type != QETH_IP_TYPE_NORMAL)
++ return -EADDRINUSE;
++ if (qeth_l3_addr_match_all(addr, tmp_addr)) {
++ addr->ref_counter++;
++ return 0;
++ }
++ qeth_l3_ipaddr_to_string(tmp_addr->proto, (u8 *)&tmp_addr->u,
++ buf);
++ dev_warn(&card->gdev->dev,
++ "Registering IP address %s failed\n", buf);
++ return -EADDRINUSE;
++ } else {
+ addr = qeth_l3_get_addr_buffer(tmp_addr->proto);
+ if (!addr)
+ return -ENOMEM;
+@@ -324,19 +322,15 @@ int qeth_l3_add_ip(struct qeth_card *card, struct qeth_ipaddr *tmp_addr)
+ (rc == IPA_RC_LAN_OFFLINE)) {
+ addr->disp_flag = QETH_DISP_ADDR_DO_NOTHING;
+ if (addr->ref_counter < 1) {
+- qeth_l3_delete_ip(card, addr);
++ qeth_l3_deregister_addr_entry(card, addr);
++ hash_del(&addr->hnode);
+ kfree(addr);
+ }
+ } else {
+ hash_del(&addr->hnode);
+ kfree(addr);
+ }
+- } else {
+- if (addr->type == QETH_IP_TYPE_NORMAL ||
+- addr->type == QETH_IP_TYPE_RXIP)
+- addr->ref_counter++;
+ }
+-
+ return rc;
+ }
+
+@@ -404,11 +398,7 @@ static void qeth_l3_recover_ip(struct qeth_card *card)
+ spin_lock_bh(&card->ip_lock);
+
+ hash_for_each_safe(card->ip_htable, i, tmp, addr, hnode) {
+- if (addr->disp_flag == QETH_DISP_ADDR_DELETE) {
+- qeth_l3_deregister_addr_entry(card, addr);
+- hash_del(&addr->hnode);
+- kfree(addr);
+- } else if (addr->disp_flag == QETH_DISP_ADDR_ADD) {
++ if (addr->disp_flag == QETH_DISP_ADDR_ADD) {
+ if (addr->proto == QETH_PROT_IPV4) {
+ addr->in_progress = 1;
+ spin_unlock_bh(&card->ip_lock);
+@@ -724,12 +714,7 @@ int qeth_l3_add_vipa(struct qeth_card *card, enum qeth_prot_versions proto,
+ return -ENOMEM;
+
+ spin_lock_bh(&card->ip_lock);
+-
+- if (qeth_l3_ip_from_hash(card, ipaddr))
+- rc = -EEXIST;
+- else
+- qeth_l3_add_ip(card, ipaddr);
+-
++ rc = qeth_l3_add_ip(card, ipaddr);
+ spin_unlock_bh(&card->ip_lock);
+
+ kfree(ipaddr);
+@@ -792,12 +777,7 @@ int qeth_l3_add_rxip(struct qeth_card *card, enum qeth_prot_versions proto,
+ return -ENOMEM;
+
+ spin_lock_bh(&card->ip_lock);
+-
+- if (qeth_l3_ip_from_hash(card, ipaddr))
+- rc = -EEXIST;
+- else
+- qeth_l3_add_ip(card, ipaddr);
+-
++ rc = qeth_l3_add_ip(card, ipaddr);
+ spin_unlock_bh(&card->ip_lock);
+
+ kfree(ipaddr);
+@@ -1405,8 +1385,9 @@ qeth_l3_add_mc_to_hash(struct qeth_card *card, struct in_device *in4_dev)
+ memcpy(tmp->mac, buf, sizeof(tmp->mac));
+ tmp->is_multicast = 1;
+
+- ipm = qeth_l3_ip_from_hash(card, tmp);
++ ipm = qeth_l3_find_addr_by_ip(card, tmp);
+ if (ipm) {
++ /* for mcast, by-IP match means full match */
+ ipm->disp_flag = QETH_DISP_ADDR_DO_NOTHING;
+ } else {
+ ipm = qeth_l3_get_addr_buffer(QETH_PROT_IPV4);
+@@ -1489,8 +1470,9 @@ qeth_l3_add_mc6_to_hash(struct qeth_card *card, struct inet6_dev *in6_dev)
+ sizeof(struct in6_addr));
+ tmp->is_multicast = 1;
+
+- ipm = qeth_l3_ip_from_hash(card, tmp);
++ ipm = qeth_l3_find_addr_by_ip(card, tmp);
+ if (ipm) {
++ /* for mcast, by-IP match means full match */
+ ipm->disp_flag = QETH_DISP_ADDR_DO_NOTHING;
+ continue;
+ }
+@@ -2629,11 +2611,12 @@ static void qeth_tso_fill_header(struct qeth_card *card,
+ static int qeth_l3_get_elements_no_tso(struct qeth_card *card,
+ struct sk_buff *skb, int extra_elems)
+ {
+- addr_t tcpdptr = (addr_t)tcp_hdr(skb) + tcp_hdrlen(skb);
+- int elements = qeth_get_elements_for_range(
+- tcpdptr,
+- (addr_t)skb->data + skb_headlen(skb)) +
+- qeth_get_elements_for_frags(skb);
++ addr_t start = (addr_t)tcp_hdr(skb) + tcp_hdrlen(skb);
++ addr_t end = (addr_t)skb->data + skb_headlen(skb);
++ int elements = qeth_get_elements_for_frags(skb);
++
++ if (start != end)
++ elements += qeth_get_elements_for_range(start, end);
+
+ if ((elements + extra_elems) > QETH_MAX_BUFFER_ELEMENTS(card)) {
+ QETH_DBF_MESSAGE(2,
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index e30e29ae4819..45657e2b1ff7 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -338,11 +338,12 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
+ {
+ struct page *page[1];
+ struct vm_area_struct *vma;
++ struct vm_area_struct *vmas[1];
+ int ret;
+
+ if (mm == current->mm) {
+- ret = get_user_pages_fast(vaddr, 1, !!(prot & IOMMU_WRITE),
+- page);
++ ret = get_user_pages_longterm(vaddr, 1, !!(prot & IOMMU_WRITE),
++ page, vmas);
+ } else {
+ unsigned int flags = 0;
+
+@@ -351,7 +352,18 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
+
+ down_read(&mm->mmap_sem);
+ ret = get_user_pages_remote(NULL, mm, vaddr, 1, flags, page,
+- NULL, NULL);
++ vmas, NULL);
++ /*
++ * The lifetime of a vaddr_get_pfn() page pin is
++ * userspace-controlled. In the fs-dax case this could
++ * lead to indefinite stalls in filesystem operations.
++ * Disallow attempts to pin fs-dax pages via this
++ * interface.
++ */
++ if (ret > 0 && vma_is_fsdax(vmas[0])) {
++ ret = -EOPNOTSUPP;
++ put_page(page[0]);
++ }
+ up_read(&mm->mmap_sem);
+ }
+
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index a28bba801264..27baaff96880 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -423,7 +423,7 @@ static ssize_t btrfs_nodesize_show(struct kobject *kobj,
+ {
+ struct btrfs_fs_info *fs_info = to_fs_info(kobj);
+
+- return snprintf(buf, PAGE_SIZE, "%u\n", fs_info->super_copy->nodesize);
++ return snprintf(buf, PAGE_SIZE, "%u\n", fs_info->nodesize);
+ }
+
+ BTRFS_ATTR(, nodesize, btrfs_nodesize_show);
+@@ -433,8 +433,7 @@ static ssize_t btrfs_sectorsize_show(struct kobject *kobj,
+ {
+ struct btrfs_fs_info *fs_info = to_fs_info(kobj);
+
+- return snprintf(buf, PAGE_SIZE, "%u\n",
+- fs_info->super_copy->sectorsize);
++ return snprintf(buf, PAGE_SIZE, "%u\n", fs_info->sectorsize);
+ }
+
+ BTRFS_ATTR(, sectorsize, btrfs_sectorsize_show);
+@@ -444,8 +443,7 @@ static ssize_t btrfs_clone_alignment_show(struct kobject *kobj,
+ {
+ struct btrfs_fs_info *fs_info = to_fs_info(kobj);
+
+- return snprintf(buf, PAGE_SIZE, "%u\n",
+- fs_info->super_copy->sectorsize);
++ return snprintf(buf, PAGE_SIZE, "%u\n", fs_info->sectorsize);
+ }
+
+ BTRFS_ATTR(, clone_alignment, btrfs_clone_alignment_show);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 5a8c2649af2f..10d12b3de001 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1723,19 +1723,23 @@ static void update_super_roots(struct btrfs_fs_info *fs_info)
+
+ super = fs_info->super_copy;
+
++ /* update latest btrfs_super_block::chunk_root refs */
+ root_item = &fs_info->chunk_root->root_item;
+- super->chunk_root = root_item->bytenr;
+- super->chunk_root_generation = root_item->generation;
+- super->chunk_root_level = root_item->level;
++ btrfs_set_super_chunk_root(super, root_item->bytenr);
++ btrfs_set_super_chunk_root_generation(super, root_item->generation);
++ btrfs_set_super_chunk_root_level(super, root_item->level);
+
++ /* update latest btrfs_super_block::root refs */
+ root_item = &fs_info->tree_root->root_item;
+- super->root = root_item->bytenr;
+- super->generation = root_item->generation;
+- super->root_level = root_item->level;
++ btrfs_set_super_root(super, root_item->bytenr);
++ btrfs_set_super_generation(super, root_item->generation);
++ btrfs_set_super_root_level(super, root_item->level);
++
+ if (btrfs_test_opt(fs_info, SPACE_CACHE))
+- super->cache_generation = root_item->generation;
++ btrfs_set_super_cache_generation(super, root_item->generation);
+ if (test_bit(BTRFS_FS_UPDATE_UUID_TREE_GEN, &fs_info->flags))
+- super->uuid_tree_generation = root_item->generation;
++ btrfs_set_super_uuid_tree_generation(super,
++ root_item->generation);
+ }
+
+ int btrfs_transaction_in_commit(struct btrfs_fs_info *info)
+diff --git a/fs/direct-io.c b/fs/direct-io.c
+index 3aafb3343a65..b76110e96d62 100644
+--- a/fs/direct-io.c
++++ b/fs/direct-io.c
+@@ -1252,8 +1252,7 @@ do_blockdev_direct_IO(struct kiocb *iocb, struct inode *inode,
+ */
+ if (dio->is_async && iov_iter_rw(iter) == WRITE) {
+ retval = 0;
+- if ((iocb->ki_filp->f_flags & O_DSYNC) ||
+- IS_SYNC(iocb->ki_filp->f_mapping->host))
++ if (iocb->ki_flags & IOCB_DSYNC)
+ retval = dio_set_defer_completion(dio);
+ else if (!dio->inode->i_sb->s_dio_done_wq) {
+ /*
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 511fbaabf624..79421287ff5e 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -3204,7 +3204,7 @@ static inline bool vma_is_fsdax(struct vm_area_struct *vma)
+ if (!vma_is_dax(vma))
+ return false;
+ inode = file_inode(vma->vm_file);
+- if (inode->i_mode == S_IFCHR)
++ if (S_ISCHR(inode->i_mode))
+ return false; /* device-dax */
+ return true;
+ }
+diff --git a/include/linux/nospec.h b/include/linux/nospec.h
+index fbc98e2c8228..132e3f5a2e0d 100644
+--- a/include/linux/nospec.h
++++ b/include/linux/nospec.h
+@@ -72,7 +72,6 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
+ BUILD_BUG_ON(sizeof(_i) > sizeof(long)); \
+ BUILD_BUG_ON(sizeof(_s) > sizeof(long)); \
+ \
+- _i &= _mask; \
+- _i; \
++ (typeof(_i)) (_i & _mask); \
+ })
+ #endif /* _LINUX_NOSPEC_H */
+diff --git a/include/linux/phy.h b/include/linux/phy.h
+index dc82a07cb4fd..123cd703741d 100644
+--- a/include/linux/phy.h
++++ b/include/linux/phy.h
+@@ -819,6 +819,7 @@ void phy_device_remove(struct phy_device *phydev);
+ int phy_init_hw(struct phy_device *phydev);
+ int phy_suspend(struct phy_device *phydev);
+ int phy_resume(struct phy_device *phydev);
++int __phy_resume(struct phy_device *phydev);
+ int phy_loopback(struct phy_device *phydev, bool enable);
+ struct phy_device *phy_attach(struct net_device *dev, const char *bus_id,
+ phy_interface_t interface);
+diff --git a/include/net/udplite.h b/include/net/udplite.h
+index 81bdbf97319b..9185e45b997f 100644
+--- a/include/net/udplite.h
++++ b/include/net/udplite.h
+@@ -64,6 +64,7 @@ static inline int udplite_checksum_init(struct sk_buff *skb, struct udphdr *uh)
+ UDP_SKB_CB(skb)->cscov = cscov;
+ if (skb->ip_summed == CHECKSUM_COMPLETE)
+ skb->ip_summed = CHECKSUM_NONE;
++ skb->csum_valid = 0;
+ }
+
+ return 0;
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index aa9d2a2b1210..cf8e4df808cf 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -1104,7 +1104,12 @@ static void __hrtimer_init(struct hrtimer *timer, clockid_t clock_id,
+
+ cpu_base = raw_cpu_ptr(&hrtimer_bases);
+
+- if (clock_id == CLOCK_REALTIME && mode != HRTIMER_MODE_ABS)
++ /*
++ * POSIX magic: Relative CLOCK_REALTIME timers are not affected by
++ * clock modifications, so they needs to become CLOCK_MONOTONIC to
++ * ensure POSIX compliance.
++ */
++ if (clock_id == CLOCK_REALTIME && mode & HRTIMER_MODE_REL)
+ clock_id = CLOCK_MONOTONIC;
+
+ base = hrtimer_clockid_to_base(clock_id);
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index 0bcf00e3ce48..e9eb29a0edc5 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -1886,6 +1886,12 @@ int timers_dead_cpu(unsigned int cpu)
+ raw_spin_lock_irq(&new_base->lock);
+ raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING);
+
++ /*
++ * The current CPUs base clock might be stale. Update it
++ * before moving the timers over.
++ */
++ forward_timer_base(new_base);
++
+ BUG_ON(old_base->running_timer);
+
+ for (i = 0; i < WHEEL_SIZE; i++)
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index 01c3957b2de6..062ac753a101 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -1849,7 +1849,7 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
+ {
+ const int default_width = 2 * sizeof(void *);
+
+- if (!ptr && *fmt != 'K') {
++ if (!ptr && *fmt != 'K' && *fmt != 'x') {
+ /*
+ * Print (null) with the same width as a pointer so it makes
+ * tabular output look nice.
+diff --git a/net/bridge/br_sysfs_if.c b/net/bridge/br_sysfs_if.c
+index 0254c35b2bf0..126a8ea73c96 100644
+--- a/net/bridge/br_sysfs_if.c
++++ b/net/bridge/br_sysfs_if.c
+@@ -255,6 +255,9 @@ static ssize_t brport_show(struct kobject *kobj,
+ struct brport_attribute *brport_attr = to_brport_attr(attr);
+ struct net_bridge_port *p = to_brport(kobj);
+
++ if (!brport_attr->show)
++ return -EINVAL;
++
+ return brport_attr->show(p, buf);
+ }
+
+diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c
+index 51935270c651..9896f4975353 100644
+--- a/net/bridge/br_vlan.c
++++ b/net/bridge/br_vlan.c
+@@ -168,6 +168,8 @@ static struct net_bridge_vlan *br_vlan_get_master(struct net_bridge *br, u16 vid
+ masterv = br_vlan_find(vg, vid);
+ if (WARN_ON(!masterv))
+ return NULL;
++ refcount_set(&masterv->refcnt, 1);
++ return masterv;
+ }
+ refcount_inc(&masterv->refcnt);
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index c8c102a3467f..a2a89acd0de8 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2366,8 +2366,11 @@ EXPORT_SYMBOL(netdev_set_num_tc);
+ */
+ int netif_set_real_num_tx_queues(struct net_device *dev, unsigned int txq)
+ {
++ bool disabling;
+ int rc;
+
++ disabling = txq < dev->real_num_tx_queues;
++
+ if (txq < 1 || txq > dev->num_tx_queues)
+ return -EINVAL;
+
+@@ -2383,15 +2386,19 @@ int netif_set_real_num_tx_queues(struct net_device *dev, unsigned int txq)
+ if (dev->num_tc)
+ netif_setup_tc(dev, txq);
+
+- if (txq < dev->real_num_tx_queues) {
++ dev->real_num_tx_queues = txq;
++
++ if (disabling) {
++ synchronize_net();
+ qdisc_reset_all_tx_gt(dev, txq);
+ #ifdef CONFIG_XPS
+ netif_reset_xps_queues_gt(dev, txq);
+ #endif
+ }
++ } else {
++ dev->real_num_tx_queues = txq;
+ }
+
+- dev->real_num_tx_queues = txq;
+ return 0;
+ }
+ EXPORT_SYMBOL(netif_set_real_num_tx_queues);
+diff --git a/net/core/gen_estimator.c b/net/core/gen_estimator.c
+index 0a3f88f08727..98fd12721221 100644
+--- a/net/core/gen_estimator.c
++++ b/net/core/gen_estimator.c
+@@ -66,6 +66,7 @@ struct net_rate_estimator {
+ static void est_fetch_counters(struct net_rate_estimator *e,
+ struct gnet_stats_basic_packed *b)
+ {
++ memset(b, 0, sizeof(*b));
+ if (e->stats_lock)
+ spin_lock(e->stats_lock);
+
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index c586597da20d..7d36a950d961 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -646,6 +646,11 @@ int fib_nh_match(struct fib_config *cfg, struct fib_info *fi,
+ fi->fib_nh, cfg, extack))
+ return 1;
+ }
++#ifdef CONFIG_IP_ROUTE_CLASSID
++ if (cfg->fc_flow &&
++ cfg->fc_flow != fi->fib_nh->nh_tclassid)
++ return 1;
++#endif
+ if ((!cfg->fc_oif || cfg->fc_oif == fi->fib_nh->nh_oif) &&
+ (!cfg->fc_gw || cfg->fc_gw == fi->fib_nh->nh_gw))
+ return 0;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 4e153b23bcec..f746e49dd585 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -128,10 +128,13 @@ static int ip_rt_redirect_silence __read_mostly = ((HZ / 50) << (9 + 1));
+ static int ip_rt_error_cost __read_mostly = HZ;
+ static int ip_rt_error_burst __read_mostly = 5 * HZ;
+ static int ip_rt_mtu_expires __read_mostly = 10 * 60 * HZ;
+-static int ip_rt_min_pmtu __read_mostly = 512 + 20 + 20;
++static u32 ip_rt_min_pmtu __read_mostly = 512 + 20 + 20;
+ static int ip_rt_min_advmss __read_mostly = 256;
+
+ static int ip_rt_gc_timeout __read_mostly = RT_GC_TIMEOUT;
++
++static int ip_min_valid_pmtu __read_mostly = IPV4_MIN_MTU;
++
+ /*
+ * Interface to generic destination cache.
+ */
+@@ -1829,6 +1832,8 @@ int fib_multipath_hash(const struct fib_info *fi, const struct flowi4 *fl4,
+ return skb_get_hash_raw(skb) >> 1;
+ memset(&hash_keys, 0, sizeof(hash_keys));
+ skb_flow_dissect_flow_keys(skb, &keys, flag);
++
++ hash_keys.control.addr_type = FLOW_DISSECTOR_KEY_IPV4_ADDRS;
+ hash_keys.addrs.v4addrs.src = keys.addrs.v4addrs.src;
+ hash_keys.addrs.v4addrs.dst = keys.addrs.v4addrs.dst;
+ hash_keys.ports.src = keys.ports.src;
+@@ -2934,7 +2939,8 @@ static struct ctl_table ipv4_route_table[] = {
+ .data = &ip_rt_min_pmtu,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = proc_dointvec,
++ .proc_handler = proc_dointvec_minmax,
++ .extra1 = &ip_min_valid_pmtu,
+ },
+ {
+ .procname = "min_adv_mss",
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 45f750e85714..0228f494b0a5 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -1977,11 +1977,6 @@ void tcp_enter_loss(struct sock *sk)
+ /* F-RTO RFC5682 sec 3.1 step 1: retransmit SND.UNA if no previous
+ * loss recovery is underway except recurring timeout(s) on
+ * the same SND.UNA (sec 3.2). Disable F-RTO on path MTU probing
+- *
+- * In theory F-RTO can be used repeatedly during loss recovery.
+- * In practice this interacts badly with broken middle-boxes that
+- * falsely raise the receive window, which results in repeated
+- * timeouts and stop-and-go behavior.
+ */
+ tp->frto = net->ipv4.sysctl_tcp_frto &&
+ (new_recovery || icsk->icsk_retransmits) &&
+@@ -2637,18 +2632,14 @@ static void tcp_process_loss(struct sock *sk, int flag, bool is_dupack,
+ tcp_try_undo_loss(sk, false))
+ return;
+
+- /* The ACK (s)acks some never-retransmitted data meaning not all
+- * the data packets before the timeout were lost. Therefore we
+- * undo the congestion window and state. This is essentially
+- * the operation in F-RTO (RFC5682 section 3.1 step 3.b). Since
+- * a retransmitted skb is permantly marked, we can apply such an
+- * operation even if F-RTO was not used.
+- */
+- if ((flag & FLAG_ORIG_SACK_ACKED) &&
+- tcp_try_undo_loss(sk, tp->undo_marker))
+- return;
+-
+ if (tp->frto) { /* F-RTO RFC5682 sec 3.1 (sack enhanced version). */
++ /* Step 3.b. A timeout is spurious if not all data are
++ * lost, i.e., never-retransmitted data are (s)acked.
++ */
++ if ((flag & FLAG_ORIG_SACK_ACKED) &&
++ tcp_try_undo_loss(sk, true))
++ return;
++
+ if (after(tp->snd_nxt, tp->high_seq)) {
+ if (flag & FLAG_DATA_SACKED || is_dupack)
+ tp->frto = 0; /* Step 3.a. loss was real */
+@@ -3988,6 +3979,7 @@ void tcp_reset(struct sock *sk)
+ /* This barrier is coupled with smp_rmb() in tcp_poll() */
+ smp_wmb();
+
++ tcp_write_queue_purge(sk);
+ tcp_done(sk);
+
+ if (!sock_flag(sk, SOCK_DEAD))
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 94e28350f420..3b051b9b3743 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -705,7 +705,8 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb)
+ */
+ if (sk) {
+ arg.bound_dev_if = sk->sk_bound_dev_if;
+- trace_tcp_send_reset(sk, skb);
++ if (sk_fullsock(sk))
++ trace_tcp_send_reset(sk, skb);
+ }
+
+ BUILD_BUG_ON(offsetof(struct sock, sk_bound_dev_if) !=
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index a4d214c7b506..580912de16c2 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -1730,7 +1730,7 @@ u32 tcp_tso_autosize(const struct sock *sk, unsigned int mss_now,
+ */
+ segs = max_t(u32, bytes / mss_now, min_tso_segs);
+
+- return min_t(u32, segs, sk->sk_gso_max_segs);
++ return segs;
+ }
+ EXPORT_SYMBOL(tcp_tso_autosize);
+
+@@ -1742,9 +1742,10 @@ static u32 tcp_tso_segs(struct sock *sk, unsigned int mss_now)
+ const struct tcp_congestion_ops *ca_ops = inet_csk(sk)->icsk_ca_ops;
+ u32 tso_segs = ca_ops->tso_segs_goal ? ca_ops->tso_segs_goal(sk) : 0;
+
+- return tso_segs ? :
+- tcp_tso_autosize(sk, mss_now,
+- sock_net(sk)->ipv4.sysctl_tcp_min_tso_segs);
++ if (!tso_segs)
++ tso_segs = tcp_tso_autosize(sk, mss_now,
++ sock_net(sk)->ipv4.sysctl_tcp_min_tso_segs);
++ return min_t(u32, tso_segs, sk->sk_gso_max_segs);
+ }
+
+ /* Returns the portion of skb which can be sent right away */
+@@ -2026,6 +2027,24 @@ static inline void tcp_mtu_check_reprobe(struct sock *sk)
+ }
+ }
+
++static bool tcp_can_coalesce_send_queue_head(struct sock *sk, int len)
++{
++ struct sk_buff *skb, *next;
++
++ skb = tcp_send_head(sk);
++ tcp_for_write_queue_from_safe(skb, next, sk) {
++ if (len <= skb->len)
++ break;
++
++ if (unlikely(TCP_SKB_CB(skb)->eor))
++ return false;
++
++ len -= skb->len;
++ }
++
++ return true;
++}
++
+ /* Create a new MTU probe if we are ready.
+ * MTU probe is regularly attempting to increase the path MTU by
+ * deliberately sending larger packets. This discovers routing
+@@ -2098,6 +2117,9 @@ static int tcp_mtu_probe(struct sock *sk)
+ return 0;
+ }
+
++ if (!tcp_can_coalesce_send_queue_head(sk, probe_size))
++ return -1;
++
+ /* We're allowed to probe. Build it now. */
+ nskb = sk_stream_alloc_skb(sk, probe_size, GFP_ATOMIC, false);
+ if (!nskb)
+@@ -2133,6 +2155,10 @@ static int tcp_mtu_probe(struct sock *sk)
+ /* We've eaten all the data from this skb.
+ * Throw it away. */
+ TCP_SKB_CB(nskb)->tcp_flags |= TCP_SKB_CB(skb)->tcp_flags;
++ /* If this is the last SKB we copy and eor is set
++ * we need to propagate it to the new skb.
++ */
++ TCP_SKB_CB(nskb)->eor = TCP_SKB_CB(skb)->eor;
+ tcp_unlink_write_queue(skb, sk);
+ sk_wmem_free_skb(sk, skb);
+ } else {
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index e4ff25c947c5..590f9ed90c1f 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -2031,6 +2031,11 @@ static inline int udp4_csum_init(struct sk_buff *skb, struct udphdr *uh,
+ err = udplite_checksum_init(skb, uh);
+ if (err)
+ return err;
++
++ if (UDP_SKB_CB(skb)->partial_cov) {
++ skb->csum = inet_compute_pseudo(skb, proto);
++ return 0;
++ }
+ }
+
+ /* Note, we are only interested in != 0 or == 0, thus the
+diff --git a/net/ipv6/ip6_checksum.c b/net/ipv6/ip6_checksum.c
+index ec43d18b5ff9..547515e8450a 100644
+--- a/net/ipv6/ip6_checksum.c
++++ b/net/ipv6/ip6_checksum.c
+@@ -73,6 +73,11 @@ int udp6_csum_init(struct sk_buff *skb, struct udphdr *uh, int proto)
+ err = udplite_checksum_init(skb, uh);
+ if (err)
+ return err;
++
++ if (UDP_SKB_CB(skb)->partial_cov) {
++ skb->csum = ip6_compute_pseudo(skb, proto);
++ return 0;
++ }
+ }
+
+ /* To support RFC 6936 (allow zero checksum in UDP/IPV6 for tunnels)
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index 3873d3877135..3a1775a62973 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -182,7 +182,7 @@ static void ipip6_tunnel_clone_6rd(struct net_device *dev, struct sit_net *sitn)
+ #ifdef CONFIG_IPV6_SIT_6RD
+ struct ip_tunnel *t = netdev_priv(dev);
+
+- if (t->dev == sitn->fb_tunnel_dev) {
++ if (dev == sitn->fb_tunnel_dev) {
+ ipv6_addr_set(&t->ip6rd.prefix, htonl(0x20020000), 0, 0, 0);
+ t->ip6rd.relay_prefix = 0;
+ t->ip6rd.prefixlen = 16;
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index 7178476b3d2f..6378f6fbc89f 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -943,7 +943,8 @@ static void tcp_v6_send_reset(const struct sock *sk, struct sk_buff *skb)
+
+ if (sk) {
+ oif = sk->sk_bound_dev_if;
+- trace_tcp_send_reset(sk, skb);
++ if (sk_fullsock(sk))
++ trace_tcp_send_reset(sk, skb);
+ }
+
+ tcp_v6_send_response(sk, skb, seq, ack_seq, 0, 0, 0, oif, key, 1, 0, 0);
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 115918ad8eca..861b67c34191 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -136,51 +136,6 @@ l2tp_session_id_hash_2(struct l2tp_net *pn, u32 session_id)
+
+ }
+
+-/* Lookup the tunnel socket, possibly involving the fs code if the socket is
+- * owned by userspace. A struct sock returned from this function must be
+- * released using l2tp_tunnel_sock_put once you're done with it.
+- */
+-static struct sock *l2tp_tunnel_sock_lookup(struct l2tp_tunnel *tunnel)
+-{
+- int err = 0;
+- struct socket *sock = NULL;
+- struct sock *sk = NULL;
+-
+- if (!tunnel)
+- goto out;
+-
+- if (tunnel->fd >= 0) {
+- /* Socket is owned by userspace, who might be in the process
+- * of closing it. Look the socket up using the fd to ensure
+- * consistency.
+- */
+- sock = sockfd_lookup(tunnel->fd, &err);
+- if (sock)
+- sk = sock->sk;
+- } else {
+- /* Socket is owned by kernelspace */
+- sk = tunnel->sock;
+- sock_hold(sk);
+- }
+-
+-out:
+- return sk;
+-}
+-
+-/* Drop a reference to a tunnel socket obtained via. l2tp_tunnel_sock_put */
+-static void l2tp_tunnel_sock_put(struct sock *sk)
+-{
+- struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
+- if (tunnel) {
+- if (tunnel->fd >= 0) {
+- /* Socket is owned by userspace */
+- sockfd_put(sk->sk_socket);
+- }
+- sock_put(sk);
+- }
+- sock_put(sk);
+-}
+-
+ /* Session hash list.
+ * The session_id SHOULD be random according to RFC2661, but several
+ * L2TP implementations (Cisco and Microsoft) use incrementing
+@@ -193,6 +148,13 @@ l2tp_session_id_hash(struct l2tp_tunnel *tunnel, u32 session_id)
+ return &tunnel->session_hlist[hash_32(session_id, L2TP_HASH_BITS)];
+ }
+
++void l2tp_tunnel_free(struct l2tp_tunnel *tunnel)
++{
++ sock_put(tunnel->sock);
++ /* the tunnel is freed in the socket destructor */
++}
++EXPORT_SYMBOL(l2tp_tunnel_free);
++
+ /* Lookup a tunnel. A new reference is held on the returned tunnel. */
+ struct l2tp_tunnel *l2tp_tunnel_get(const struct net *net, u32 tunnel_id)
+ {
+@@ -345,13 +307,11 @@ int l2tp_session_register(struct l2tp_session *session,
+ }
+
+ l2tp_tunnel_inc_refcount(tunnel);
+- sock_hold(tunnel->sock);
+ hlist_add_head_rcu(&session->global_hlist, g_head);
+
+ spin_unlock_bh(&pn->l2tp_session_hlist_lock);
+ } else {
+ l2tp_tunnel_inc_refcount(tunnel);
+- sock_hold(tunnel->sock);
+ }
+
+ hlist_add_head(&session->hlist, head);
+@@ -975,7 +935,7 @@ int l2tp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ {
+ struct l2tp_tunnel *tunnel;
+
+- tunnel = l2tp_sock_to_tunnel(sk);
++ tunnel = l2tp_tunnel(sk);
+ if (tunnel == NULL)
+ goto pass_up;
+
+@@ -983,13 +943,10 @@ int l2tp_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
+ tunnel->name, skb->len);
+
+ if (l2tp_udp_recv_core(tunnel, skb, tunnel->recv_payload_hook))
+- goto pass_up_put;
++ goto pass_up;
+
+- sock_put(sk);
+ return 0;
+
+-pass_up_put:
+- sock_put(sk);
+ pass_up:
+ return 1;
+ }
+@@ -1216,14 +1173,12 @@ EXPORT_SYMBOL_GPL(l2tp_xmit_skb);
+ static void l2tp_tunnel_destruct(struct sock *sk)
+ {
+ struct l2tp_tunnel *tunnel = l2tp_tunnel(sk);
+- struct l2tp_net *pn;
+
+ if (tunnel == NULL)
+ goto end;
+
+ l2tp_info(tunnel, L2TP_MSG_CONTROL, "%s: closing...\n", tunnel->name);
+
+-
+ /* Disable udp encapsulation */
+ switch (tunnel->encap) {
+ case L2TP_ENCAPTYPE_UDP:
+@@ -1240,18 +1195,11 @@ static void l2tp_tunnel_destruct(struct sock *sk)
+ sk->sk_destruct = tunnel->old_sk_destruct;
+ sk->sk_user_data = NULL;
+
+- /* Remove the tunnel struct from the tunnel list */
+- pn = l2tp_pernet(tunnel->l2tp_net);
+- spin_lock_bh(&pn->l2tp_tunnel_list_lock);
+- list_del_rcu(&tunnel->list);
+- spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
+-
+- tunnel->sock = NULL;
+- l2tp_tunnel_dec_refcount(tunnel);
+-
+ /* Call the original destructor */
+ if (sk->sk_destruct)
+ (*sk->sk_destruct)(sk);
++
++ kfree_rcu(tunnel, rcu);
+ end:
+ return;
+ }
+@@ -1312,49 +1260,43 @@ EXPORT_SYMBOL_GPL(l2tp_tunnel_closeall);
+ /* Tunnel socket destroy hook for UDP encapsulation */
+ static void l2tp_udp_encap_destroy(struct sock *sk)
+ {
+- struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
+- if (tunnel) {
+- l2tp_tunnel_closeall(tunnel);
+- sock_put(sk);
+- }
++ struct l2tp_tunnel *tunnel = l2tp_tunnel(sk);
++
++ if (tunnel)
++ l2tp_tunnel_delete(tunnel);
+ }
+
+ /* Workqueue tunnel deletion function */
+ static void l2tp_tunnel_del_work(struct work_struct *work)
+ {
+- struct l2tp_tunnel *tunnel = NULL;
+- struct socket *sock = NULL;
+- struct sock *sk = NULL;
+-
+- tunnel = container_of(work, struct l2tp_tunnel, del_work);
++ struct l2tp_tunnel *tunnel = container_of(work, struct l2tp_tunnel,
++ del_work);
++ struct sock *sk = tunnel->sock;
++ struct socket *sock = sk->sk_socket;
++ struct l2tp_net *pn;
+
+ l2tp_tunnel_closeall(tunnel);
+
+- sk = l2tp_tunnel_sock_lookup(tunnel);
+- if (!sk)
+- goto out;
+-
+- sock = sk->sk_socket;
+-
+- /* If the tunnel socket was created by userspace, then go through the
+- * inet layer to shut the socket down, and let userspace close it.
+- * Otherwise, if we created the socket directly within the kernel, use
++ /* If the tunnel socket was created within the kernel, use
+ * the sk API to release it here.
+- * In either case the tunnel resources are freed in the socket
+- * destructor when the tunnel socket goes away.
+ */
+- if (tunnel->fd >= 0) {
+- if (sock)
+- inet_shutdown(sock, 2);
+- } else {
++ if (tunnel->fd < 0) {
+ if (sock) {
+ kernel_sock_shutdown(sock, SHUT_RDWR);
+ sock_release(sock);
+ }
+ }
+
+- l2tp_tunnel_sock_put(sk);
+-out:
++ /* Remove the tunnel struct from the tunnel list */
++ pn = l2tp_pernet(tunnel->l2tp_net);
++ spin_lock_bh(&pn->l2tp_tunnel_list_lock);
++ list_del_rcu(&tunnel->list);
++ spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
++
++ /* drop initial ref */
++ l2tp_tunnel_dec_refcount(tunnel);
++
++ /* drop workqueue ref */
+ l2tp_tunnel_dec_refcount(tunnel);
+ }
+
+@@ -1607,13 +1549,22 @@ int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id, u32
+ sk->sk_user_data = tunnel;
+ }
+
++ /* Bump the reference count. The tunnel context is deleted
++ * only when this drops to zero. A reference is also held on
++ * the tunnel socket to ensure that it is not released while
++ * the tunnel is extant. Must be done before sk_destruct is
++ * set.
++ */
++ refcount_set(&tunnel->ref_count, 1);
++ sock_hold(sk);
++ tunnel->sock = sk;
++ tunnel->fd = fd;
++
+ /* Hook on the tunnel socket destructor so that we can cleanup
+ * if the tunnel socket goes away.
+ */
+ tunnel->old_sk_destruct = sk->sk_destruct;
+ sk->sk_destruct = &l2tp_tunnel_destruct;
+- tunnel->sock = sk;
+- tunnel->fd = fd;
+ lockdep_set_class_and_name(&sk->sk_lock.slock, &l2tp_socket_class, "l2tp_sock");
+
+ sk->sk_allocation = GFP_ATOMIC;
+@@ -1623,11 +1574,6 @@ int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id, u32
+
+ /* Add tunnel to our list */
+ INIT_LIST_HEAD(&tunnel->list);
+-
+- /* Bump the reference count. The tunnel context is deleted
+- * only when this drops to zero. Must be done before list insertion
+- */
+- refcount_set(&tunnel->ref_count, 1);
+ spin_lock_bh(&pn->l2tp_tunnel_list_lock);
+ list_add_rcu(&tunnel->list, &pn->l2tp_tunnel_list);
+ spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
+@@ -1668,8 +1614,6 @@ void l2tp_session_free(struct l2tp_session *session)
+
+ if (tunnel) {
+ BUG_ON(tunnel->magic != L2TP_TUNNEL_MAGIC);
+- sock_put(tunnel->sock);
+- session->tunnel = NULL;
+ l2tp_tunnel_dec_refcount(tunnel);
+ }
+
+diff --git a/net/l2tp/l2tp_core.h b/net/l2tp/l2tp_core.h
+index 9534e16965cc..8ecb1d357445 100644
+--- a/net/l2tp/l2tp_core.h
++++ b/net/l2tp/l2tp_core.h
+@@ -219,27 +219,8 @@ static inline void *l2tp_session_priv(struct l2tp_session *session)
+ return &session->priv[0];
+ }
+
+-static inline struct l2tp_tunnel *l2tp_sock_to_tunnel(struct sock *sk)
+-{
+- struct l2tp_tunnel *tunnel;
+-
+- if (sk == NULL)
+- return NULL;
+-
+- sock_hold(sk);
+- tunnel = (struct l2tp_tunnel *)(sk->sk_user_data);
+- if (tunnel == NULL) {
+- sock_put(sk);
+- goto out;
+- }
+-
+- BUG_ON(tunnel->magic != L2TP_TUNNEL_MAGIC);
+-
+-out:
+- return tunnel;
+-}
+-
+ struct l2tp_tunnel *l2tp_tunnel_get(const struct net *net, u32 tunnel_id);
++void l2tp_tunnel_free(struct l2tp_tunnel *tunnel);
+
+ struct l2tp_session *l2tp_session_get(const struct net *net,
+ struct l2tp_tunnel *tunnel,
+@@ -288,7 +269,7 @@ static inline void l2tp_tunnel_inc_refcount(struct l2tp_tunnel *tunnel)
+ static inline void l2tp_tunnel_dec_refcount(struct l2tp_tunnel *tunnel)
+ {
+ if (refcount_dec_and_test(&tunnel->ref_count))
+- kfree_rcu(tunnel, rcu);
++ l2tp_tunnel_free(tunnel);
+ }
+
+ /* Session reference counts. Incremented when code obtains a reference
+diff --git a/net/l2tp/l2tp_ip.c b/net/l2tp/l2tp_ip.c
+index ff61124fdf59..3428fba6f2b7 100644
+--- a/net/l2tp/l2tp_ip.c
++++ b/net/l2tp/l2tp_ip.c
+@@ -234,17 +234,13 @@ static void l2tp_ip_close(struct sock *sk, long timeout)
+ static void l2tp_ip_destroy_sock(struct sock *sk)
+ {
+ struct sk_buff *skb;
+- struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
++ struct l2tp_tunnel *tunnel = sk->sk_user_data;
+
+ while ((skb = __skb_dequeue_tail(&sk->sk_write_queue)) != NULL)
+ kfree_skb(skb);
+
+- if (tunnel) {
+- l2tp_tunnel_closeall(tunnel);
+- sock_put(sk);
+- }
+-
+- sk_refcnt_debug_dec(sk);
++ if (tunnel)
++ l2tp_tunnel_delete(tunnel);
+ }
+
+ static int l2tp_ip_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len)
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index 192344688c06..6f009eaa5fbe 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -248,16 +248,14 @@ static void l2tp_ip6_close(struct sock *sk, long timeout)
+
+ static void l2tp_ip6_destroy_sock(struct sock *sk)
+ {
+- struct l2tp_tunnel *tunnel = l2tp_sock_to_tunnel(sk);
++ struct l2tp_tunnel *tunnel = sk->sk_user_data;
+
+ lock_sock(sk);
+ ip6_flush_pending_frames(sk);
+ release_sock(sk);
+
+- if (tunnel) {
+- l2tp_tunnel_closeall(tunnel);
+- sock_put(sk);
+- }
++ if (tunnel)
++ l2tp_tunnel_delete(tunnel);
+
+ inet6_destroy_sock(sk);
+ }
+diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
+index b412fc3351dc..5ea718609fe8 100644
+--- a/net/l2tp/l2tp_ppp.c
++++ b/net/l2tp/l2tp_ppp.c
+@@ -416,20 +416,28 @@ static int pppol2tp_xmit(struct ppp_channel *chan, struct sk_buff *skb)
+ * Session (and tunnel control) socket create/destroy.
+ *****************************************************************************/
+
++static void pppol2tp_put_sk(struct rcu_head *head)
++{
++ struct pppol2tp_session *ps;
++
++ ps = container_of(head, typeof(*ps), rcu);
++ sock_put(ps->__sk);
++}
++
+ /* Called by l2tp_core when a session socket is being closed.
+ */
+ static void pppol2tp_session_close(struct l2tp_session *session)
+ {
+- struct sock *sk;
+-
+- BUG_ON(session->magic != L2TP_SESSION_MAGIC);
++ struct pppol2tp_session *ps;
+
+- sk = pppol2tp_session_get_sock(session);
+- if (sk) {
+- if (sk->sk_socket)
+- inet_shutdown(sk->sk_socket, SEND_SHUTDOWN);
+- sock_put(sk);
+- }
++ ps = l2tp_session_priv(session);
++ mutex_lock(&ps->sk_lock);
++ ps->__sk = rcu_dereference_protected(ps->sk,
++ lockdep_is_held(&ps->sk_lock));
++ RCU_INIT_POINTER(ps->sk, NULL);
++ if (ps->__sk)
++ call_rcu(&ps->rcu, pppol2tp_put_sk);
++ mutex_unlock(&ps->sk_lock);
+ }
+
+ /* Really kill the session socket. (Called from sock_put() if
+@@ -449,14 +457,6 @@ static void pppol2tp_session_destruct(struct sock *sk)
+ }
+ }
+
+-static void pppol2tp_put_sk(struct rcu_head *head)
+-{
+- struct pppol2tp_session *ps;
+-
+- ps = container_of(head, typeof(*ps), rcu);
+- sock_put(ps->__sk);
+-}
+-
+ /* Called when the PPPoX socket (session) is closed.
+ */
+ static int pppol2tp_release(struct socket *sock)
+@@ -480,26 +480,17 @@ static int pppol2tp_release(struct socket *sock)
+ sock_orphan(sk);
+ sock->sk = NULL;
+
++ /* If the socket is associated with a session,
++ * l2tp_session_delete will call pppol2tp_session_close which
++ * will drop the session's ref on the socket.
++ */
+ session = pppol2tp_sock_to_session(sk);
+-
+- if (session != NULL) {
+- struct pppol2tp_session *ps;
+-
++ if (session) {
+ l2tp_session_delete(session);
+-
+- ps = l2tp_session_priv(session);
+- mutex_lock(&ps->sk_lock);
+- ps->__sk = rcu_dereference_protected(ps->sk,
+- lockdep_is_held(&ps->sk_lock));
+- RCU_INIT_POINTER(ps->sk, NULL);
+- mutex_unlock(&ps->sk_lock);
+- call_rcu(&ps->rcu, pppol2tp_put_sk);
+-
+- /* Rely on the sock_put() call at the end of the function for
+- * dropping the reference held by pppol2tp_sock_to_session().
+- * The last reference will be dropped by pppol2tp_put_sk().
+- */
++ /* drop the ref obtained by pppol2tp_sock_to_session */
++ sock_put(sk);
+ }
++
+ release_sock(sk);
+
+ /* This will delete the session context via
+@@ -796,6 +787,7 @@ static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
+
+ out_no_ppp:
+ /* This is how we get the session context from the socket. */
++ sock_hold(sk);
+ sk->sk_user_data = session;
+ rcu_assign_pointer(ps->sk, sk);
+ mutex_unlock(&ps->sk_lock);
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index 84a4e4c3be4b..ca9c0544c856 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -2275,7 +2275,7 @@ int __netlink_dump_start(struct sock *ssk, struct sk_buff *skb,
+ if (cb->start) {
+ ret = cb->start(cb);
+ if (ret)
+- goto error_unlock;
++ goto error_put;
+ }
+
+ nlk->cb_running = true;
+@@ -2295,6 +2295,8 @@ int __netlink_dump_start(struct sock *ssk, struct sk_buff *skb,
+ */
+ return -EINTR;
+
++error_put:
++ module_put(control->module);
+ error_unlock:
+ sock_put(sk);
+ mutex_unlock(nlk->cb_mutex);
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index d444daf1ac04..6f02499ef007 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -1081,6 +1081,7 @@ static int genlmsg_mcast(struct sk_buff *skb, u32 portid, unsigned long group,
+ {
+ struct sk_buff *tmp;
+ struct net *net, *prev = NULL;
++ bool delivered = false;
+ int err;
+
+ for_each_net_rcu(net) {
+@@ -1092,14 +1093,21 @@ static int genlmsg_mcast(struct sk_buff *skb, u32 portid, unsigned long group,
+ }
+ err = nlmsg_multicast(prev->genl_sock, tmp,
+ portid, group, flags);
+- if (err)
++ if (!err)
++ delivered = true;
++ else if (err != -ESRCH)
+ goto error;
+ }
+
+ prev = net;
+ }
+
+- return nlmsg_multicast(prev->genl_sock, skb, portid, group, flags);
++ err = nlmsg_multicast(prev->genl_sock, skb, portid, group, flags);
++ if (!err)
++ delivered = true;
++ else if (err != -ESRCH)
++ goto error;
++ return delivered ? 0 : -ESRCH;
+ error:
+ kfree_skb(skb);
+ return err;
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index 42410e910aff..cf73dc006c3b 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -445,7 +445,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ (char *)&opt, sizeof(opt));
+ if (ret == 0) {
+ ret = kernel_sendmsg(conn->params.local->socket, &msg,
+- iov, 1, iov[0].iov_len);
++ iov, 2, len);
+
+ opt = IPV6_PMTUDISC_DO;
+ kernel_setsockopt(conn->params.local->socket,
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index e6b853f0ee4f..2e437bbd3358 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -1054,13 +1054,18 @@ static int tc_dump_tfilter(struct sk_buff *skb, struct netlink_callback *cb)
+ nla_get_u32(tca[TCA_CHAIN]) != chain->index)
+ continue;
+ if (!tcf_chain_dump(chain, q, parent, skb, cb,
+- index_start, &index))
++ index_start, &index)) {
++ err = -EMSGSIZE;
+ break;
++ }
+ }
+
+ cb->args[0] = index;
+
+ out:
++ /* If we did no progress, the error (EMSGSIZE) is real */
++ if (skb->len == 0 && err)
++ return err;
+ return skb->len;
+ }
+
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index 33294b5b2c6a..425cc341fd41 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -397,10 +397,12 @@ static int u32_init(struct tcf_proto *tp)
+ static int u32_destroy_key(struct tcf_proto *tp, struct tc_u_knode *n,
+ bool free_pf)
+ {
++ struct tc_u_hnode *ht = rtnl_dereference(n->ht_down);
++
+ tcf_exts_destroy(&n->exts);
+ tcf_exts_put_net(&n->exts);
+- if (n->ht_down)
+- n->ht_down->refcnt--;
++ if (ht && --ht->refcnt == 0)
++ kfree(ht);
+ #ifdef CONFIG_CLS_U32_PERF
+ if (free_pf)
+ free_percpu(n->pf);
+@@ -653,16 +655,15 @@ static void u32_destroy(struct tcf_proto *tp)
+
+ hlist_del(&tp_c->hnode);
+
+- for (ht = rtnl_dereference(tp_c->hlist);
+- ht;
+- ht = rtnl_dereference(ht->next)) {
+- ht->refcnt--;
+- u32_clear_hnode(tp, ht);
+- }
+-
+ while ((ht = rtnl_dereference(tp_c->hlist)) != NULL) {
++ u32_clear_hnode(tp, ht);
+ RCU_INIT_POINTER(tp_c->hlist, ht->next);
+- kfree_rcu(ht, rcu);
++
++ /* u32_destroy_key() will later free ht for us, if it's
++ * still referenced by some knode
++ */
++ if (--ht->refcnt == 0)
++ kfree_rcu(ht, rcu);
+ }
+
+ idr_destroy(&tp_c->handle_idr);
+@@ -928,7 +929,8 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ if (TC_U32_KEY(n->handle) == 0)
+ return -EINVAL;
+
+- if (n->flags != flags)
++ if ((n->flags ^ flags) &
++ ~(TCA_CLS_FLAGS_IN_HW | TCA_CLS_FLAGS_NOT_IN_HW))
+ return -EINVAL;
+
+ new = u32_init_knode(tp, n);
+diff --git a/net/sctp/input.c b/net/sctp/input.c
+index 141c9c466ec1..0247cc432e02 100644
+--- a/net/sctp/input.c
++++ b/net/sctp/input.c
+@@ -897,15 +897,12 @@ int sctp_hash_transport(struct sctp_transport *t)
+ rhl_for_each_entry_rcu(transport, tmp, list, node)
+ if (transport->asoc->ep == t->asoc->ep) {
+ rcu_read_unlock();
+- err = -EEXIST;
+- goto out;
++ return -EEXIST;
+ }
+ rcu_read_unlock();
+
+ err = rhltable_insert_key(&sctp_transport_hashtable, &arg,
+ &t->node, sctp_hash_params);
+-
+-out:
+ if (err)
+ pr_err_once("insert transport fail, errno %d\n", err);
+
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index 5d4c15bf66d2..e35d4f73d2df 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -326,8 +326,10 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ final_p = fl6_update_dst(fl6, rcu_dereference(np->opt), &final);
+ bdst = ip6_dst_lookup_flow(sk, fl6, final_p);
+
+- if (!IS_ERR(bdst) &&
+- ipv6_chk_addr(dev_net(bdst->dev),
++ if (IS_ERR(bdst))
++ continue;
++
++ if (ipv6_chk_addr(dev_net(bdst->dev),
+ &laddr->a.v6.sin6_addr, bdst->dev, 1)) {
+ if (!IS_ERR_OR_NULL(dst))
+ dst_release(dst);
+@@ -336,8 +338,10 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ }
+
+ bmatchlen = sctp_v6_addr_match_len(daddr, &laddr->a);
+- if (matchlen > bmatchlen)
++ if (matchlen > bmatchlen) {
++ dst_release(bdst);
+ continue;
++ }
+
+ if (!IS_ERR_OR_NULL(dst))
+ dst_release(dst);
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 6a38c2503649..91813e686c67 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -514,22 +514,20 @@ static void sctp_v4_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ if (IS_ERR(rt))
+ continue;
+
+- if (!dst)
+- dst = &rt->dst;
+-
+ /* Ensure the src address belongs to the output
+ * interface.
+ */
+ odev = __ip_dev_find(sock_net(sk), laddr->a.v4.sin_addr.s_addr,
+ false);
+ if (!odev || odev->ifindex != fl4->flowi4_oif) {
+- if (&rt->dst != dst)
++ if (!dst)
++ dst = &rt->dst;
++ else
+ dst_release(&rt->dst);
+ continue;
+ }
+
+- if (dst != &rt->dst)
+- dst_release(dst);
++ dst_release(dst);
+ dst = &rt->dst;
+ break;
+ }
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index 9bf575f2e8ed..ea4226e382f9 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -1378,9 +1378,14 @@ static struct sctp_chunk *_sctp_make_chunk(const struct sctp_association *asoc,
+ struct sctp_chunk *retval;
+ struct sk_buff *skb;
+ struct sock *sk;
++ int chunklen;
++
++ chunklen = SCTP_PAD4(sizeof(*chunk_hdr) + paylen);
++ if (chunklen > SCTP_MAX_CHUNK_LEN)
++ goto nodata;
+
+ /* No need to allocate LL here, as this is only a chunk. */
+- skb = alloc_skb(SCTP_PAD4(sizeof(*chunk_hdr) + paylen), gfp);
++ skb = alloc_skb(chunklen, gfp);
+ if (!skb)
+ goto nodata;
+
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 736719c8314e..3a780337c393 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -45,17 +45,27 @@ MODULE_AUTHOR("Mellanox Technologies");
+ MODULE_DESCRIPTION("Transport Layer Security Support");
+ MODULE_LICENSE("Dual BSD/GPL");
+
++enum {
++ TLSV4,
++ TLSV6,
++ TLS_NUM_PROTS,
++};
++
+ enum {
+ TLS_BASE_TX,
+ TLS_SW_TX,
+ TLS_NUM_CONFIG,
+ };
+
+-static struct proto tls_prots[TLS_NUM_CONFIG];
++static struct proto *saved_tcpv6_prot;
++static DEFINE_MUTEX(tcpv6_prot_mutex);
++static struct proto tls_prots[TLS_NUM_PROTS][TLS_NUM_CONFIG];
+
+ static inline void update_sk_prot(struct sock *sk, struct tls_context *ctx)
+ {
+- sk->sk_prot = &tls_prots[ctx->tx_conf];
++ int ip_ver = sk->sk_family == AF_INET6 ? TLSV6 : TLSV4;
++
++ sk->sk_prot = &tls_prots[ip_ver][ctx->tx_conf];
+ }
+
+ int wait_on_pending_writer(struct sock *sk, long *timeo)
+@@ -450,8 +460,21 @@ static int tls_setsockopt(struct sock *sk, int level, int optname,
+ return do_tls_setsockopt(sk, optname, optval, optlen);
+ }
+
++static void build_protos(struct proto *prot, struct proto *base)
++{
++ prot[TLS_BASE_TX] = *base;
++ prot[TLS_BASE_TX].setsockopt = tls_setsockopt;
++ prot[TLS_BASE_TX].getsockopt = tls_getsockopt;
++ prot[TLS_BASE_TX].close = tls_sk_proto_close;
++
++ prot[TLS_SW_TX] = prot[TLS_BASE_TX];
++ prot[TLS_SW_TX].sendmsg = tls_sw_sendmsg;
++ prot[TLS_SW_TX].sendpage = tls_sw_sendpage;
++}
++
+ static int tls_init(struct sock *sk)
+ {
++ int ip_ver = sk->sk_family == AF_INET6 ? TLSV6 : TLSV4;
+ struct inet_connection_sock *icsk = inet_csk(sk);
+ struct tls_context *ctx;
+ int rc = 0;
+@@ -476,6 +499,17 @@ static int tls_init(struct sock *sk)
+ ctx->getsockopt = sk->sk_prot->getsockopt;
+ ctx->sk_proto_close = sk->sk_prot->close;
+
++ /* Build IPv6 TLS whenever the address of tcpv6_prot changes */
++ if (ip_ver == TLSV6 &&
++ unlikely(sk->sk_prot != smp_load_acquire(&saved_tcpv6_prot))) {
++ mutex_lock(&tcpv6_prot_mutex);
++ if (likely(sk->sk_prot != saved_tcpv6_prot)) {
++ build_protos(tls_prots[TLSV6], sk->sk_prot);
++ smp_store_release(&saved_tcpv6_prot, sk->sk_prot);
++ }
++ mutex_unlock(&tcpv6_prot_mutex);
++ }
++
+ ctx->tx_conf = TLS_BASE_TX;
+ update_sk_prot(sk, ctx);
+ out:
+@@ -488,21 +522,9 @@ static struct tcp_ulp_ops tcp_tls_ulp_ops __read_mostly = {
+ .init = tls_init,
+ };
+
+-static void build_protos(struct proto *prot, struct proto *base)
+-{
+- prot[TLS_BASE_TX] = *base;
+- prot[TLS_BASE_TX].setsockopt = tls_setsockopt;
+- prot[TLS_BASE_TX].getsockopt = tls_getsockopt;
+- prot[TLS_BASE_TX].close = tls_sk_proto_close;
+-
+- prot[TLS_SW_TX] = prot[TLS_BASE_TX];
+- prot[TLS_SW_TX].sendmsg = tls_sw_sendmsg;
+- prot[TLS_SW_TX].sendpage = tls_sw_sendpage;
+-}
+-
+ static int __init tls_register(void)
+ {
+- build_protos(tls_prots, &tcp_prot);
++ build_protos(tls_prots[TLSV4], &tcp_prot);
+
+ tcp_register_ulp(&tcp_tls_ulp_ops);
+
+diff --git a/sound/core/control.c b/sound/core/control.c
+index 56b3e2d49c82..af7e6165e21e 100644
+--- a/sound/core/control.c
++++ b/sound/core/control.c
+@@ -888,7 +888,7 @@ static int snd_ctl_elem_read(struct snd_card *card,
+
+ index_offset = snd_ctl_get_ioff(kctl, &control->id);
+ vd = &kctl->vd[index_offset];
+- if (!(vd->access & SNDRV_CTL_ELEM_ACCESS_READ) && kctl->get == NULL)
++ if (!(vd->access & SNDRV_CTL_ELEM_ACCESS_READ) || kctl->get == NULL)
+ return -EPERM;
+
+ snd_ctl_build_ioff(&control->id, kctl, index_offset);
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index c71dcacea807..96143df19b21 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -181,7 +181,7 @@ static const struct kernel_param_ops param_ops_xint = {
+ };
+ #define param_check_xint param_check_int
+
+-static int power_save = CONFIG_SND_HDA_POWER_SAVE_DEFAULT;
++static int power_save = -1;
+ module_param(power_save, xint, 0644);
+ MODULE_PARM_DESC(power_save, "Automatic power-saving timeout "
+ "(in second, 0 = disable).");
+@@ -2186,6 +2186,24 @@ static int azx_probe(struct pci_dev *pci,
+ return err;
+ }
+
++#ifdef CONFIG_PM
++/* On some boards setting power_save to a non 0 value leads to clicking /
++ * popping sounds when ever we enter/leave powersaving mode. Ideally we would
++ * figure out how to avoid these sounds, but that is not always feasible.
++ * So we keep a list of devices where we disable powersaving as its known
++ * to causes problems on these devices.
++ */
++static struct snd_pci_quirk power_save_blacklist[] = {
++ /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
++ SND_PCI_QUIRK(0x1849, 0x0c0c, "Asrock B85M-ITX", 0),
++ /* https://bugzilla.redhat.com/show_bug.cgi?id=1525104 */
++ SND_PCI_QUIRK(0x1043, 0x8733, "Asus Prime X370-Pro", 0),
++ /* https://bugzilla.kernel.org/show_bug.cgi?id=198611 */
++ SND_PCI_QUIRK(0x17aa, 0x2227, "Lenovo X1 Carbon 3rd Gen", 0),
++ {}
++};
++#endif /* CONFIG_PM */
++
+ /* number of codec slots for each chipset: 0 = default slots (i.e. 4) */
+ static unsigned int azx_max_codecs[AZX_NUM_DRIVERS] = {
+ [AZX_DRIVER_NVIDIA] = 8,
+@@ -2198,6 +2216,7 @@ static int azx_probe_continue(struct azx *chip)
+ struct hdac_bus *bus = azx_bus(chip);
+ struct pci_dev *pci = chip->pci;
+ int dev = chip->dev_index;
++ int val;
+ int err;
+
+ hda->probe_continued = 1;
+@@ -2278,7 +2297,22 @@ static int azx_probe_continue(struct azx *chip)
+
+ chip->running = 1;
+ azx_add_card_list(chip);
+- snd_hda_set_power_save(&chip->bus, power_save * 1000);
++
++ val = power_save;
++#ifdef CONFIG_PM
++ if (val == -1) {
++ const struct snd_pci_quirk *q;
++
++ val = CONFIG_SND_HDA_POWER_SAVE_DEFAULT;
++ q = snd_pci_quirk_lookup(chip->pci, power_save_blacklist);
++ if (q && val) {
++ dev_info(chip->card->dev, "device %04x:%04x is on the power_save blacklist, forcing power_save to 0\n",
++ q->subvendor, q->subdevice);
++ val = 0;
++ }
++ }
++#endif /* CONFIG_PM */
++ snd_hda_set_power_save(&chip->bus, val * 1000);
+ if (azx_has_pm_runtime(chip) || hda->use_vga_switcheroo)
+ pm_runtime_put_autosuspend(&pci->dev);
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 4ff1f0ca52fc..8fe38c18e29d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4875,13 +4875,14 @@ static void alc_fixup_tpt470_dock(struct hda_codec *codec,
+
+ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+ spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP;
++ snd_hda_apply_pincfgs(codec, pincfgs);
++ } else if (action == HDA_FIXUP_ACT_INIT) {
+ /* Enable DOCK device */
+ snd_hda_codec_write(codec, 0x17, 0,
+ AC_VERB_SET_CONFIG_DEFAULT_BYTES_3, 0);
+ /* Enable DOCK device */
+ snd_hda_codec_write(codec, 0x19, 0,
+ AC_VERB_SET_CONFIG_DEFAULT_BYTES_3, 0);
+- snd_hda_apply_pincfgs(codec, pincfgs);
+ }
+ }
+
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 8a59d4782a0f..69bf5cf1e91e 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3277,4 +3277,51 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ }
+ },
+
++{
++ /*
++ * Bower's & Wilkins PX headphones only support the 48 kHz sample rate
++ * even though it advertises more. The capture interface doesn't work
++ * even on windows.
++ */
++ USB_DEVICE(0x19b5, 0x0021),
++ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++ .ifnum = QUIRK_ANY_INTERFACE,
++ .type = QUIRK_COMPOSITE,
++ .data = (const struct snd_usb_audio_quirk[]) {
++ {
++ .ifnum = 0,
++ .type = QUIRK_AUDIO_STANDARD_MIXER,
++ },
++ /* Capture */
++ {
++ .ifnum = 1,
++ .type = QUIRK_IGNORE_INTERFACE,
++ },
++ /* Playback */
++ {
++ .ifnum = 2,
++ .type = QUIRK_AUDIO_FIXED_ENDPOINT,
++ .data = &(const struct audioformat) {
++ .formats = SNDRV_PCM_FMTBIT_S16_LE,
++ .channels = 2,
++ .iface = 2,
++ .altsetting = 1,
++ .altset_idx = 1,
++ .attributes = UAC_EP_CS_ATTR_FILL_MAX |
++ UAC_EP_CS_ATTR_SAMPLE_RATE,
++ .endpoint = 0x03,
++ .ep_attr = USB_ENDPOINT_XFER_ISOC,
++ .rates = SNDRV_PCM_RATE_48000,
++ .rate_min = 48000,
++ .rate_max = 48000,
++ .nr_rates = 1,
++ .rate_table = (unsigned int[]) {
++ 48000
++ }
++ }
++ },
++ }
++ }
++},
++
+ #undef USB_DEVICE_VENDOR_SPEC
+diff --git a/sound/x86/intel_hdmi_audio.c b/sound/x86/intel_hdmi_audio.c
+index a0951505c7f5..697872d8308e 100644
+--- a/sound/x86/intel_hdmi_audio.c
++++ b/sound/x86/intel_hdmi_audio.c
+@@ -1827,6 +1827,8 @@ static int hdmi_lpe_audio_probe(struct platform_device *pdev)
+ ctx->port = port;
+ ctx->pipe = -1;
+
++ spin_lock_init(&ctx->had_spinlock);
++ mutex_init(&ctx->mutex);
+ INIT_WORK(&ctx->hdmi_audio_wq, had_audio_wq);
+
+ ret = snd_pcm_new(card, INTEL_HAD, port, MAX_PB_STREAMS,
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 210bf820385a..e536977e7b6d 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -974,8 +974,7 @@ int __kvm_set_memory_region(struct kvm *kvm,
+ /* Check for overlaps */
+ r = -EEXIST;
+ kvm_for_each_memslot(slot, __kvm_memslots(kvm, as_id)) {
+- if ((slot->id >= KVM_USER_MEM_SLOTS) ||
+- (slot->id == id))
++ if (slot->id == id)
+ continue;
+ if (!((base_gfn + npages <= slot->base_gfn) ||
+ (base_gfn >= slot->base_gfn + slot->npages)))
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-03-09 22:51 Mike Pagano
From: Mike Pagano @ 2018-03-09 22:51 UTC
To: gentoo-commits
commit: 186dad837693de84ff4792013172b41a991bfa70
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 9 22:50:57 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar 9 22:50:57 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=186dad83
Remove redundant patch 2901_allocate_buffer_on_heap_rather_than_globally.patch
0000_README | 4 -
...ocate_buffer_on_heap_rather_than_globally.patch | 460 ---------------------
2 files changed, 464 deletions(-)
diff --git a/0000_README b/0000_README
index bd1cdaf..552a9c3 100644
--- a/0000_README
+++ b/0000_README
@@ -103,10 +103,6 @@ Patch: 2900_dev-root-proc-mount-fix.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=438380
Desc: Ensure that /dev/root doesn't appear in /proc/mounts when bootint without an initramfs.
-Patch: 2901_allocate_buffer_on_heap_rather_than_globally.patch
-From: https://bugs.gentoo.org/646438
-Desc: Patchwork [v2] platform/x86: dell-laptop: Allocate buffer on heap rather than globally Bug #646438
-
Patch: 4200_fbcondecor.patch
From: http://www.mepiscommunity.org/fbcondecor
Desc: Bootsplash ported by Conrad Kostecki. (Bug #637434)
diff --git a/2901_allocate_buffer_on_heap_rather_than_globally.patch b/2901_allocate_buffer_on_heap_rather_than_globally.patch
deleted file mode 100644
index eb5fecc..0000000
--- a/2901_allocate_buffer_on_heap_rather_than_globally.patch
+++ /dev/null
@@ -1,460 +0,0 @@
-diff --git a/drivers/platform/x86/dell-laptop.c b/drivers/platform/x86/dell-laptop.c
-index cd4725e7e0b5..6e8071d493dc 100644
---- a/drivers/platform/x86/dell-laptop.c
-+++ b/drivers/platform/x86/dell-laptop.c
-@@ -78,7 +78,6 @@ static struct platform_driver platform_driver = {
- }
- };
-
--static struct calling_interface_buffer *buffer;
- static struct platform_device *platform_device;
- static struct backlight_device *dell_backlight_device;
- static struct rfkill *wifi_rfkill;
-@@ -286,7 +285,8 @@ static const struct dmi_system_id dell_quirks[] __initconst = {
- { }
- };
-
--void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
-+static void dell_fill_request(struct calling_interface_buffer *buffer,
-+ u32 arg0, u32 arg1, u32 arg2, u32 arg3)
- {
- memset(buffer, 0, sizeof(struct calling_interface_buffer));
- buffer->input[0] = arg0;
-@@ -295,7 +295,8 @@ void dell_set_arguments(u32 arg0, u32 arg1, u32 arg2, u32 arg3)
- buffer->input[3] = arg3;
- }
-
--int dell_send_request(u16 class, u16 select)
-+static int dell_send_request(struct calling_interface_buffer *buffer,
-+ u16 class, u16 select)
- {
- int ret;
-
-@@ -432,21 +433,22 @@ static int dell_rfkill_set(void *data, bool blocked)
- int disable = blocked ? 1 : 0;
- unsigned long radio = (unsigned long)data;
- int hwswitch_bit = (unsigned long)data - 1;
-+ struct calling_interface_buffer buffer;
- int hwswitch;
- int status;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- if (ret)
- return ret;
-- status = buffer->output[1];
-+ status = buffer.output[1];
-
-- dell_set_arguments(0x2, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0x2, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- if (ret)
- return ret;
-- hwswitch = buffer->output[1];
-+ hwswitch = buffer.output[1];
-
- /* If the hardware switch controls this radio, and the hardware
- switch is disabled, always disable the radio */
-@@ -454,8 +456,8 @@ static int dell_rfkill_set(void *data, bool blocked)
- (status & BIT(0)) && !(status & BIT(16)))
- disable = 1;
-
-- dell_set_arguments(1 | (radio<<8) | (disable << 16), 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 1 | (radio<<8) | (disable << 16), 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- return ret;
- }
-
-@@ -464,9 +466,11 @@ static void dell_rfkill_update_sw_state(struct rfkill *rfkill, int radio,
- {
- if (status & BIT(0)) {
- /* Has hw-switch, sync sw_state to BIOS */
-+ struct calling_interface_buffer buffer;
- int block = rfkill_blocked(rfkill);
-- dell_set_arguments(1 | (radio << 8) | (block << 16), 0, 0, 0);
-- dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer,
-+ 1 | (radio << 8) | (block << 16), 0, 0, 0);
-+ dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- } else {
- /* No hw-switch, sync BIOS state to sw_state */
- rfkill_set_sw_state(rfkill, !!(status & BIT(radio + 16)));
-@@ -483,21 +487,22 @@ static void dell_rfkill_update_hw_state(struct rfkill *rfkill, int radio,
- static void dell_rfkill_query(struct rfkill *rfkill, void *data)
- {
- int radio = ((unsigned long)data & 0xF);
-+ struct calling_interface_buffer buffer;
- int hwswitch;
- int status;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-- status = buffer->output[1];
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-+ status = buffer.output[1];
-
- if (ret != 0 || !(status & BIT(0))) {
- return;
- }
-
-- dell_set_arguments(0, 0x2, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-- hwswitch = buffer->output[1];
-+ dell_fill_request(&buffer, 0, 0x2, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-+ hwswitch = buffer.output[1];
-
- if (ret != 0)
- return;
-@@ -514,22 +519,23 @@ static struct dentry *dell_laptop_dir;
-
- static int dell_debugfs_show(struct seq_file *s, void *data)
- {
-+ struct calling_interface_buffer buffer;
- int hwswitch_state;
- int hwswitch_ret;
- int status;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- if (ret)
- return ret;
-- status = buffer->output[1];
-+ status = buffer.output[1];
-
-- dell_set_arguments(0, 0x2, 0, 0);
-- hwswitch_ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0, 0x2, 0, 0);
-+ hwswitch_ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
- if (hwswitch_ret)
- return hwswitch_ret;
-- hwswitch_state = buffer->output[1];
-+ hwswitch_state = buffer.output[1];
-
- seq_printf(s, "return:\t%d\n", ret);
- seq_printf(s, "status:\t0x%X\n", status);
-@@ -610,22 +616,23 @@ static const struct file_operations dell_debugfs_fops = {
-
- static void dell_update_rfkill(struct work_struct *ignored)
- {
-+ struct calling_interface_buffer buffer;
- int hwswitch = 0;
- int status;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-- status = buffer->output[1];
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-+ status = buffer.output[1];
-
- if (ret != 0)
- return;
-
-- dell_set_arguments(0, 0x2, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-+ dell_fill_request(&buffer, 0, 0x2, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-
- if (ret == 0 && (status & BIT(0)))
-- hwswitch = buffer->output[1];
-+ hwswitch = buffer.output[1];
-
- if (wifi_rfkill) {
- dell_rfkill_update_hw_state(wifi_rfkill, 1, status, hwswitch);
-@@ -683,6 +690,7 @@ static struct notifier_block dell_laptop_rbtn_notifier = {
-
- static int __init dell_setup_rfkill(void)
- {
-+ struct calling_interface_buffer buffer;
- int status, ret, whitelisted;
- const char *product;
-
-@@ -698,9 +706,9 @@ static int __init dell_setup_rfkill(void)
- if (!force_rfkill && !whitelisted)
- return 0;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_INFO, SELECT_RFKILL);
-- status = buffer->output[1];
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_INFO, SELECT_RFKILL);
-+ status = buffer.output[1];
-
- /* dell wireless info smbios call is not supported */
- if (ret != 0)
-@@ -853,6 +861,7 @@ static void dell_cleanup_rfkill(void)
-
- static int dell_send_intensity(struct backlight_device *bd)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
- int ret;
-
-@@ -860,17 +869,21 @@ static int dell_send_intensity(struct backlight_device *bd)
- if (!token)
- return -ENODEV;
-
-- dell_set_arguments(token->location, bd->props.brightness, 0, 0);
-+ dell_fill_request(&buffer,
-+ token->location, bd->props.brightness, 0, 0);
- if (power_supply_is_system_supplied() > 0)
-- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_WRITE, SELECT_TOKEN_AC);
- else
-- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_WRITE, SELECT_TOKEN_BAT);
-
- return ret;
- }
-
- static int dell_get_intensity(struct backlight_device *bd)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
- int ret;
-
-@@ -878,14 +891,17 @@ static int dell_get_intensity(struct backlight_device *bd)
- if (!token)
- return -ENODEV;
-
-- dell_set_arguments(token->location, 0, 0, 0);
-+ dell_fill_request(&buffer, token->location, 0, 0, 0);
- if (power_supply_is_system_supplied() > 0)
-- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
- else
-- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_READ, SELECT_TOKEN_BAT);
-
- if (ret == 0)
-- ret = buffer->output[1];
-+ ret = buffer.output[1];
-+
- return ret;
- }
-
-@@ -1149,31 +1165,33 @@ static DEFINE_MUTEX(kbd_led_mutex);
-
- static int kbd_get_info(struct kbd_info *info)
- {
-+ struct calling_interface_buffer buffer;
- u8 units;
- int ret;
-
-- dell_set_arguments(0, 0, 0, 0);
-- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer,
-+ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
- if (ret)
- return ret;
-
-- info->modes = buffer->output[1] & 0xFFFF;
-- info->type = (buffer->output[1] >> 24) & 0xFF;
-- info->triggers = buffer->output[2] & 0xFF;
-- units = (buffer->output[2] >> 8) & 0xFF;
-- info->levels = (buffer->output[2] >> 16) & 0xFF;
-+ info->modes = buffer.output[1] & 0xFFFF;
-+ info->type = (buffer.output[1] >> 24) & 0xFF;
-+ info->triggers = buffer.output[2] & 0xFF;
-+ units = (buffer.output[2] >> 8) & 0xFF;
-+ info->levels = (buffer.output[2] >> 16) & 0xFF;
-
- if (quirks && quirks->kbd_led_levels_off_1 && info->levels)
- info->levels--;
-
- if (units & BIT(0))
-- info->seconds = (buffer->output[3] >> 0) & 0xFF;
-+ info->seconds = (buffer.output[3] >> 0) & 0xFF;
- if (units & BIT(1))
-- info->minutes = (buffer->output[3] >> 8) & 0xFF;
-+ info->minutes = (buffer.output[3] >> 8) & 0xFF;
- if (units & BIT(2))
-- info->hours = (buffer->output[3] >> 16) & 0xFF;
-+ info->hours = (buffer.output[3] >> 16) & 0xFF;
- if (units & BIT(3))
-- info->days = (buffer->output[3] >> 24) & 0xFF;
-+ info->days = (buffer.output[3] >> 24) & 0xFF;
-
- return ret;
- }
-@@ -1233,31 +1251,34 @@ static int kbd_set_level(struct kbd_state *state, u8 level)
-
- static int kbd_get_state(struct kbd_state *state)
- {
-+ struct calling_interface_buffer buffer;
- int ret;
-
-- dell_set_arguments(0x1, 0, 0, 0);
-- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
-+ dell_fill_request(&buffer, 0, 0, 0, 0);
-+ ret = dell_send_request(&buffer,
-+ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
- if (ret)
- return ret;
-
-- state->mode_bit = ffs(buffer->output[1] & 0xFFFF);
-+ state->mode_bit = ffs(buffer.output[1] & 0xFFFF);
- if (state->mode_bit != 0)
- state->mode_bit--;
-
-- state->triggers = (buffer->output[1] >> 16) & 0xFF;
-- state->timeout_value = (buffer->output[1] >> 24) & 0x3F;
-- state->timeout_unit = (buffer->output[1] >> 30) & 0x3;
-- state->als_setting = buffer->output[2] & 0xFF;
-- state->als_value = (buffer->output[2] >> 8) & 0xFF;
-- state->level = (buffer->output[2] >> 16) & 0xFF;
-- state->timeout_value_ac = (buffer->output[2] >> 24) & 0x3F;
-- state->timeout_unit_ac = (buffer->output[2] >> 30) & 0x3;
-+ state->triggers = (buffer.output[1] >> 16) & 0xFF;
-+ state->timeout_value = (buffer.output[1] >> 24) & 0x3F;
-+ state->timeout_unit = (buffer.output[1] >> 30) & 0x3;
-+ state->als_setting = buffer.output[2] & 0xFF;
-+ state->als_value = (buffer.output[2] >> 8) & 0xFF;
-+ state->level = (buffer.output[2] >> 16) & 0xFF;
-+ state->timeout_value_ac = (buffer.output[2] >> 24) & 0x3F;
-+ state->timeout_unit_ac = (buffer.output[2] >> 30) & 0x3;
-
- return ret;
- }
-
- static int kbd_set_state(struct kbd_state *state)
- {
-+ struct calling_interface_buffer buffer;
- int ret;
- u32 input1;
- u32 input2;
-@@ -1270,8 +1291,9 @@ static int kbd_set_state(struct kbd_state *state)
- input2 |= (state->level & 0xFF) << 16;
- input2 |= (state->timeout_value_ac & 0x3F) << 24;
- input2 |= (state->timeout_unit_ac & 0x3) << 30;
-- dell_set_arguments(0x2, input1, input2, 0);
-- ret = dell_send_request(CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
-+ dell_fill_request(&buffer, 0x2, input1, input2, 0);
-+ ret = dell_send_request(&buffer,
-+ CLASS_KBD_BACKLIGHT, SELECT_KBD_BACKLIGHT);
-
- return ret;
- }
-@@ -1298,6 +1320,7 @@ static int kbd_set_state_safe(struct kbd_state *state, struct kbd_state *old)
-
- static int kbd_set_token_bit(u8 bit)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
- int ret;
-
-@@ -1308,14 +1331,15 @@ static int kbd_set_token_bit(u8 bit)
- if (!token)
- return -EINVAL;
-
-- dell_set_arguments(token->location, token->value, 0, 0);
-- ret = dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
-+ dell_fill_request(&buffer, token->location, token->value, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
-
- return ret;
- }
-
- static int kbd_get_token_bit(u8 bit)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
- int ret;
- int val;
-@@ -1327,9 +1351,9 @@ static int kbd_get_token_bit(u8 bit)
- if (!token)
- return -EINVAL;
-
-- dell_set_arguments(token->location, 0, 0, 0);
-- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_STD);
-- val = buffer->output[1];
-+ dell_fill_request(&buffer, token->location, 0, 0, 0);
-+ ret = dell_send_request(&buffer, CLASS_TOKEN_READ, SELECT_TOKEN_STD);
-+ val = buffer.output[1];
-
- if (ret)
- return ret;
-@@ -2046,6 +2070,7 @@ static struct notifier_block dell_laptop_notifier = {
-
- int dell_micmute_led_set(int state)
- {
-+ struct calling_interface_buffer buffer;
- struct calling_interface_token *token;
-
- if (state == 0)
-@@ -2058,8 +2083,8 @@ int dell_micmute_led_set(int state)
- if (!token)
- return -ENODEV;
-
-- dell_set_arguments(token->location, token->value, 0, 0);
-- dell_send_request(CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
-+ dell_fill_request(&buffer, token->location, token->value, 0, 0);
-+ dell_send_request(&buffer, CLASS_TOKEN_WRITE, SELECT_TOKEN_STD);
-
- return state;
- }
-@@ -2090,13 +2115,6 @@ static int __init dell_init(void)
- if (ret)
- goto fail_platform_device2;
-
-- buffer = kzalloc(sizeof(struct calling_interface_buffer), GFP_KERNEL);
-- if (!buffer) {
-- ret = -ENOMEM;
-- goto fail_buffer;
-- }
--
--
- ret = dell_setup_rfkill();
-
- if (ret) {
-@@ -2121,10 +2139,13 @@ static int __init dell_init(void)
-
- token = dell_smbios_find_token(BRIGHTNESS_TOKEN);
- if (token) {
-- dell_set_arguments(token->location, 0, 0, 0);
-- ret = dell_send_request(CLASS_TOKEN_READ, SELECT_TOKEN_AC);
-+ struct calling_interface_buffer buffer;
-+
-+ dell_fill_request(&buffer, token->location, 0, 0, 0);
-+ ret = dell_send_request(&buffer,
-+ CLASS_TOKEN_READ, SELECT_TOKEN_AC);
- if (ret)
-- max_intensity = buffer->output[3];
-+ max_intensity = buffer.output[3];
- }
-
- if (max_intensity) {
-@@ -2158,8 +2179,6 @@ static int __init dell_init(void)
- fail_get_brightness:
- backlight_device_unregister(dell_backlight_device);
- fail_backlight:
-- kfree(buffer);
--fail_buffer:
- dell_cleanup_rfkill();
- fail_rfkill:
- platform_device_del(platform_device);
-@@ -2179,7 +2198,6 @@ static void __exit dell_exit(void)
- touchpad_led_exit();
- kbd_led_exit();
- backlight_device_unregister(dell_backlight_device);
-- kfree(buffer);
- dell_cleanup_rfkill();
- if (platform_device) {
- platform_device_unregister(platform_device);
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-03-11 17:39 Mike Pagano
From: Mike Pagano @ 2018-03-11 17:39 UTC
To: gentoo-commits
commit: 5e0efe43ac3cd3191ca0b0ae3bac9313cbe72c3f
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 11 17:38:59 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Mar 11 17:38:59 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5e0efe43
Linux patch 4.15.9
0000_README | 4 +
1008_linux-4.15.9.patch | 680 ++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 684 insertions(+)
diff --git a/0000_README b/0000_README
index 552a9c3..bce11f7 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 1007_linux-4.15.8.patch
From: http://www.kernel.org
Desc: Linux 4.15.8
+Patch: 1008_linux-4.15.9.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.9
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1008_linux-4.15.9.patch b/1008_linux-4.15.9.patch
new file mode 100644
index 0000000..40befd5
--- /dev/null
+++ b/1008_linux-4.15.9.patch
@@ -0,0 +1,680 @@
+diff --git a/Documentation/virtual/kvm/cpuid.txt b/Documentation/virtual/kvm/cpuid.txt
+index 3c65feb83010..a81c97a4b4a5 100644
+--- a/Documentation/virtual/kvm/cpuid.txt
++++ b/Documentation/virtual/kvm/cpuid.txt
+@@ -54,6 +54,10 @@ KVM_FEATURE_PV_UNHALT || 7 || guest checks this feature bit
+ || || before enabling paravirtualized
+ || || spinlock support.
+ ------------------------------------------------------------------------------
++KVM_FEATURE_ASYNC_PF_VMEXIT || 10 || paravirtualized async PF VM exit
++ || || can be enabled by setting bit 2
++ || || when writing to msr 0x4b564d02
++------------------------------------------------------------------------------
+ KVM_FEATURE_CLOCKSOURCE_STABLE_BIT || 24 || host will warn if no guest-side
+ || || per-cpu warps are expected in
+ || || kvmclock.
+diff --git a/Documentation/virtual/kvm/msr.txt b/Documentation/virtual/kvm/msr.txt
+index 1ebecc115dc6..f3f0d57ced8e 100644
+--- a/Documentation/virtual/kvm/msr.txt
++++ b/Documentation/virtual/kvm/msr.txt
+@@ -170,7 +170,8 @@ MSR_KVM_ASYNC_PF_EN: 0x4b564d02
+ when asynchronous page faults are enabled on the vcpu 0 when
+ disabled. Bit 1 is 1 if asynchronous page faults can be injected
+ when vcpu is in cpl == 0. Bit 2 is 1 if asynchronous page faults
+- are delivered to L1 as #PF vmexits.
++ are delivered to L1 as #PF vmexits. Bit 2 can be set only if
++ KVM_FEATURE_ASYNC_PF_VMEXIT is present in CPUID.
+
+ First 4 byte of 64 byte memory location will be written to by
+ the hypervisor at the time of asynchronous page fault (APF)
+diff --git a/Makefile b/Makefile
+index eb18d200a603..0420f9a0c70f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index bb32f7f6dd0f..be155f70f108 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -238,8 +238,9 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ off = offsetof(struct bpf_array, map.max_entries);
+ emit_a64_mov_i64(tmp, off, ctx);
+ emit(A64_LDR32(tmp, r2, tmp), ctx);
++ emit(A64_MOV(0, r3, r3), ctx);
+ emit(A64_CMP(0, r3, tmp), ctx);
+- emit(A64_B_(A64_COND_GE, jmp_offset), ctx);
++ emit(A64_B_(A64_COND_CS, jmp_offset), ctx);
+
+ /* if (tail_call_cnt > MAX_TAIL_CALL_CNT)
+ * goto out;
+@@ -247,7 +248,7 @@ static int emit_bpf_tail_call(struct jit_ctx *ctx)
+ */
+ emit_a64_mov_i64(tmp, MAX_TAIL_CALL_CNT, ctx);
+ emit(A64_CMP(1, tcc, tmp), ctx);
+- emit(A64_B_(A64_COND_GT, jmp_offset), ctx);
++ emit(A64_B_(A64_COND_HI, jmp_offset), ctx);
+ emit(A64_ADD_I(1, tcc, tcc, 1), ctx);
+
+ /* prog = array->ptrs[index];
+diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
+index d183b4801bdb..35591fb09042 100644
+--- a/arch/powerpc/net/bpf_jit_comp64.c
++++ b/arch/powerpc/net/bpf_jit_comp64.c
+@@ -242,6 +242,7 @@ static void bpf_jit_emit_tail_call(u32 *image, struct codegen_context *ctx, u32
+ * goto out;
+ */
+ PPC_LWZ(b2p[TMP_REG_1], b2p_bpf_array, offsetof(struct bpf_array, map.max_entries));
++ PPC_RLWINM(b2p_index, b2p_index, 0, 0, 31);
+ PPC_CMPLW(b2p_index, b2p[TMP_REG_1]);
+ PPC_BCC(COND_GE, out);
+
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 76b058533e47..81a1be326571 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -177,4 +177,41 @@ static inline void indirect_branch_prediction_barrier(void)
+ }
+
+ #endif /* __ASSEMBLY__ */
++
++/*
++ * Below is used in the eBPF JIT compiler and emits the byte sequence
++ * for the following assembly:
++ *
++ * With retpolines configured:
++ *
++ * callq do_rop
++ * spec_trap:
++ * pause
++ * lfence
++ * jmp spec_trap
++ * do_rop:
++ * mov %rax,(%rsp)
++ * retq
++ *
++ * Without retpolines configured:
++ *
++ * jmp *%rax
++ */
++#ifdef CONFIG_RETPOLINE
++# define RETPOLINE_RAX_BPF_JIT_SIZE 17
++# define RETPOLINE_RAX_BPF_JIT() \
++ EMIT1_off32(0xE8, 7); /* callq do_rop */ \
++ /* spec_trap: */ \
++ EMIT2(0xF3, 0x90); /* pause */ \
++ EMIT3(0x0F, 0xAE, 0xE8); /* lfence */ \
++ EMIT2(0xEB, 0xF9); /* jmp spec_trap */ \
++ /* do_rop: */ \
++ EMIT4(0x48, 0x89, 0x04, 0x24); /* mov %rax,(%rsp) */ \
++ EMIT1(0xC3); /* retq */
++#else
++# define RETPOLINE_RAX_BPF_JIT_SIZE 2
++# define RETPOLINE_RAX_BPF_JIT() \
++ EMIT2(0xFF, 0xE0); /* jmp *%rax */
++#endif
++
+ #endif /* _ASM_X86_NOSPEC_BRANCH_H_ */
+diff --git a/arch/x86/include/uapi/asm/kvm_para.h b/arch/x86/include/uapi/asm/kvm_para.h
+index 09cc06483bed..989db885de97 100644
+--- a/arch/x86/include/uapi/asm/kvm_para.h
++++ b/arch/x86/include/uapi/asm/kvm_para.h
+@@ -25,6 +25,7 @@
+ #define KVM_FEATURE_STEAL_TIME 5
+ #define KVM_FEATURE_PV_EOI 6
+ #define KVM_FEATURE_PV_UNHALT 7
++#define KVM_FEATURE_ASYNC_PF_VMEXIT 10
+
+ /* The last 8 bits are used to indicate how to interpret the flags field
+ * in pvclock structure. If no bits are set, all flags are ignored.
+diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
+index b40ffbf156c1..0a93e83b774a 100644
+--- a/arch/x86/kernel/kvm.c
++++ b/arch/x86/kernel/kvm.c
+@@ -341,10 +341,10 @@ static void kvm_guest_cpu_init(void)
+ #endif
+ pa |= KVM_ASYNC_PF_ENABLED;
+
+- /* Async page fault support for L1 hypervisor is optional */
+- if (wrmsr_safe(MSR_KVM_ASYNC_PF_EN,
+- (pa | KVM_ASYNC_PF_DELIVERY_AS_PF_VMEXIT) & 0xffffffff, pa >> 32) < 0)
+- wrmsrl(MSR_KVM_ASYNC_PF_EN, pa);
++ if (kvm_para_has_feature(KVM_FEATURE_ASYNC_PF_VMEXIT))
++ pa |= KVM_ASYNC_PF_DELIVERY_AS_PF_VMEXIT;
++
++ wrmsrl(MSR_KVM_ASYNC_PF_EN, pa);
+ __this_cpu_write(apf_reason.enabled, 1);
+ printk(KERN_INFO"KVM setup async PF for cpu %d\n",
+ smp_processor_id());
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 13f5d4217e4f..4f544f2a7b06 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -597,7 +597,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
+ (1 << KVM_FEATURE_ASYNC_PF) |
+ (1 << KVM_FEATURE_PV_EOI) |
+ (1 << KVM_FEATURE_CLOCKSOURCE_STABLE_BIT) |
+- (1 << KVM_FEATURE_PV_UNHALT);
++ (1 << KVM_FEATURE_PV_UNHALT) |
++ (1 << KVM_FEATURE_ASYNC_PF_VMEXIT);
+
+ if (sched_info_on())
+ entry->eax |= (1 << KVM_FEATURE_STEAL_TIME);
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 0554e8aef4d5..940aac70b4da 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -13,6 +13,7 @@
+ #include <linux/if_vlan.h>
+ #include <asm/cacheflush.h>
+ #include <asm/set_memory.h>
++#include <asm/nospec-branch.h>
+ #include <linux/bpf.h>
+
+ int bpf_jit_enable __read_mostly;
+@@ -287,7 +288,7 @@ static void emit_bpf_tail_call(u8 **pprog)
+ EMIT2(0x89, 0xD2); /* mov edx, edx */
+ EMIT3(0x39, 0x56, /* cmp dword ptr [rsi + 16], edx */
+ offsetof(struct bpf_array, map.max_entries));
+-#define OFFSET1 43 /* number of bytes to jump */
++#define OFFSET1 (41 + RETPOLINE_RAX_BPF_JIT_SIZE) /* number of bytes to jump */
+ EMIT2(X86_JBE, OFFSET1); /* jbe out */
+ label1 = cnt;
+
+@@ -296,7 +297,7 @@ static void emit_bpf_tail_call(u8 **pprog)
+ */
+ EMIT2_off32(0x8B, 0x85, 36); /* mov eax, dword ptr [rbp + 36] */
+ EMIT3(0x83, 0xF8, MAX_TAIL_CALL_CNT); /* cmp eax, MAX_TAIL_CALL_CNT */
+-#define OFFSET2 32
++#define OFFSET2 (30 + RETPOLINE_RAX_BPF_JIT_SIZE)
+ EMIT2(X86_JA, OFFSET2); /* ja out */
+ label2 = cnt;
+ EMIT3(0x83, 0xC0, 0x01); /* add eax, 1 */
+@@ -310,7 +311,7 @@ static void emit_bpf_tail_call(u8 **pprog)
+ * goto out;
+ */
+ EMIT3(0x48, 0x85, 0xC0); /* test rax,rax */
+-#define OFFSET3 10
++#define OFFSET3 (8 + RETPOLINE_RAX_BPF_JIT_SIZE)
+ EMIT2(X86_JE, OFFSET3); /* je out */
+ label3 = cnt;
+
+@@ -323,7 +324,7 @@ static void emit_bpf_tail_call(u8 **pprog)
+ * rdi == ctx (1st arg)
+ * rax == prog->bpf_func + prologue_size
+ */
+- EMIT2(0xFF, 0xE0); /* jmp rax */
++ RETPOLINE_RAX_BPF_JIT();
+
+ /* out: */
+ BUILD_BUG_ON(cnt - label1 != OFFSET1);
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 8027de465d47..f43b51452596 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -6289,14 +6289,14 @@ _base_reset_handler(struct MPT3SAS_ADAPTER *ioc, int reset_phase)
+ }
+
+ /**
+- * _wait_for_commands_to_complete - reset controller
++ * mpt3sas_wait_for_commands_to_complete - reset controller
+ * @ioc: Pointer to MPT_ADAPTER structure
+ *
+ * This function waiting(3s) for all pending commands to complete
+ * prior to putting controller in reset.
+ */
+-static void
+-_wait_for_commands_to_complete(struct MPT3SAS_ADAPTER *ioc)
++void
++mpt3sas_wait_for_commands_to_complete(struct MPT3SAS_ADAPTER *ioc)
+ {
+ u32 ioc_state;
+ unsigned long flags;
+@@ -6375,7 +6375,7 @@ mpt3sas_base_hard_reset_handler(struct MPT3SAS_ADAPTER *ioc,
+ is_fault = 1;
+ }
+ _base_reset_handler(ioc, MPT3_IOC_PRE_RESET);
+- _wait_for_commands_to_complete(ioc);
++ mpt3sas_wait_for_commands_to_complete(ioc);
+ _base_mask_interrupts(ioc);
+ r = _base_make_ioc_ready(ioc, type);
+ if (r)
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.h b/drivers/scsi/mpt3sas/mpt3sas_base.h
+index 60f42ca3954f..69022b10a3d8 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.h
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.h
+@@ -1435,6 +1435,9 @@ void mpt3sas_base_update_missing_delay(struct MPT3SAS_ADAPTER *ioc,
+
+ int mpt3sas_port_enable(struct MPT3SAS_ADAPTER *ioc);
+
++void
++mpt3sas_wait_for_commands_to_complete(struct MPT3SAS_ADAPTER *ioc);
++
+
+ /* scsih shared API */
+ u8 mpt3sas_scsih_event_callback(struct MPT3SAS_ADAPTER *ioc, u8 msix_index,
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index b258f210120a..741b0a28c2e3 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -2998,7 +2998,8 @@ scsih_abort(struct scsi_cmnd *scmd)
+ _scsih_tm_display_info(ioc, scmd);
+
+ sas_device_priv_data = scmd->device->hostdata;
+- if (!sas_device_priv_data || !sas_device_priv_data->sas_target) {
++ if (!sas_device_priv_data || !sas_device_priv_data->sas_target ||
++ ioc->remove_host) {
+ sdev_printk(KERN_INFO, scmd->device,
+ "device been deleted! scmd(%p)\n", scmd);
+ scmd->result = DID_NO_CONNECT << 16;
+@@ -3060,7 +3061,8 @@ scsih_dev_reset(struct scsi_cmnd *scmd)
+ _scsih_tm_display_info(ioc, scmd);
+
+ sas_device_priv_data = scmd->device->hostdata;
+- if (!sas_device_priv_data || !sas_device_priv_data->sas_target) {
++ if (!sas_device_priv_data || !sas_device_priv_data->sas_target ||
++ ioc->remove_host) {
+ sdev_printk(KERN_INFO, scmd->device,
+ "device been deleted! scmd(%p)\n", scmd);
+ scmd->result = DID_NO_CONNECT << 16;
+@@ -3122,7 +3124,8 @@ scsih_target_reset(struct scsi_cmnd *scmd)
+ _scsih_tm_display_info(ioc, scmd);
+
+ sas_device_priv_data = scmd->device->hostdata;
+- if (!sas_device_priv_data || !sas_device_priv_data->sas_target) {
++ if (!sas_device_priv_data || !sas_device_priv_data->sas_target ||
++ ioc->remove_host) {
+ starget_printk(KERN_INFO, starget, "target been deleted! scmd(%p)\n",
+ scmd);
+ scmd->result = DID_NO_CONNECT << 16;
+@@ -3179,7 +3182,7 @@ scsih_host_reset(struct scsi_cmnd *scmd)
+ ioc->name, scmd);
+ scsi_print_command(scmd);
+
+- if (ioc->is_driver_loading) {
++ if (ioc->is_driver_loading || ioc->remove_host) {
+ pr_info(MPT3SAS_FMT "Blocking the host reset\n",
+ ioc->name);
+ r = FAILED;
+@@ -4611,7 +4614,7 @@ _scsih_flush_running_cmds(struct MPT3SAS_ADAPTER *ioc)
+ _scsih_set_satl_pending(scmd, false);
+ mpt3sas_base_free_smid(ioc, smid);
+ scsi_dma_unmap(scmd);
+- if (ioc->pci_error_recovery)
++ if (ioc->pci_error_recovery || ioc->remove_host)
+ scmd->result = DID_NO_CONNECT << 16;
+ else
+ scmd->result = DID_RESET << 16;
+@@ -9901,6 +9904,10 @@ static void scsih_remove(struct pci_dev *pdev)
+ unsigned long flags;
+
+ ioc->remove_host = 1;
++
++ mpt3sas_wait_for_commands_to_complete(ioc);
++ _scsih_flush_running_cmds(ioc);
++
+ _scsih_fw_event_cleanup_queue(ioc);
+
+ spin_lock_irqsave(&ioc->fw_event_lock, flags);
+@@ -9977,6 +9984,10 @@ scsih_shutdown(struct pci_dev *pdev)
+ unsigned long flags;
+
+ ioc->remove_host = 1;
++
++ mpt3sas_wait_for_commands_to_complete(ioc);
++ _scsih_flush_running_cmds(ioc);
++
+ _scsih_fw_event_cleanup_queue(ioc);
+
+ spin_lock_irqsave(&ioc->fw_event_lock, flags);
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index ab94d304a634..8596aa31c75e 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -26,8 +26,10 @@ static void bpf_array_free_percpu(struct bpf_array *array)
+ {
+ int i;
+
+- for (i = 0; i < array->map.max_entries; i++)
++ for (i = 0; i < array->map.max_entries; i++) {
+ free_percpu(array->pptrs[i]);
++ cond_resched();
++ }
+ }
+
+ static int bpf_array_alloc_percpu(struct bpf_array *array)
+@@ -43,6 +45,7 @@ static int bpf_array_alloc_percpu(struct bpf_array *array)
+ return -ENOMEM;
+ }
+ array->pptrs[i] = ptr;
++ cond_resched();
+ }
+
+ return 0;
+@@ -52,11 +55,11 @@ static int bpf_array_alloc_percpu(struct bpf_array *array)
+ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
+ {
+ bool percpu = attr->map_type == BPF_MAP_TYPE_PERCPU_ARRAY;
+- int numa_node = bpf_map_attr_numa_node(attr);
++ int ret, numa_node = bpf_map_attr_numa_node(attr);
+ u32 elem_size, index_mask, max_entries;
+ bool unpriv = !capable(CAP_SYS_ADMIN);
++ u64 cost, array_size, mask64;
+ struct bpf_array *array;
+- u64 array_size, mask64;
+
+ /* check sanity of attributes */
+ if (attr->max_entries == 0 || attr->key_size != 4 ||
+@@ -101,8 +104,19 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
+ array_size += (u64) max_entries * elem_size;
+
+ /* make sure there is no u32 overflow later in round_up() */
+- if (array_size >= U32_MAX - PAGE_SIZE)
++ cost = array_size;
++ if (cost >= U32_MAX - PAGE_SIZE)
+ return ERR_PTR(-ENOMEM);
++ if (percpu) {
++ cost += (u64)attr->max_entries * elem_size * num_possible_cpus();
++ if (cost >= U32_MAX - PAGE_SIZE)
++ return ERR_PTR(-ENOMEM);
++ }
++ cost = round_up(cost, PAGE_SIZE) >> PAGE_SHIFT;
++
++ ret = bpf_map_precharge_memlock(cost);
++ if (ret < 0)
++ return ERR_PTR(ret);
+
+ /* allocate all map elements and zero-initialize them */
+ array = bpf_map_area_alloc(array_size, numa_node);
+@@ -118,20 +132,13 @@ static struct bpf_map *array_map_alloc(union bpf_attr *attr)
+ array->map.max_entries = attr->max_entries;
+ array->map.map_flags = attr->map_flags;
+ array->map.numa_node = numa_node;
++ array->map.pages = cost;
+ array->elem_size = elem_size;
+
+- if (!percpu)
+- goto out;
+-
+- array_size += (u64) attr->max_entries * elem_size * num_possible_cpus();
+-
+- if (array_size >= U32_MAX - PAGE_SIZE ||
+- bpf_array_alloc_percpu(array)) {
++ if (percpu && bpf_array_alloc_percpu(array)) {
+ bpf_map_area_free(array);
+ return ERR_PTR(-ENOMEM);
+ }
+-out:
+- array->map.pages = round_up(array_size, PAGE_SIZE) >> PAGE_SHIFT;
+
+ return &array->map;
+ }
+diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
+index 885e45479680..424f89ac4adc 100644
+--- a/kernel/bpf/lpm_trie.c
++++ b/kernel/bpf/lpm_trie.c
+@@ -560,7 +560,10 @@ static void trie_free(struct bpf_map *map)
+ struct lpm_trie_node __rcu **slot;
+ struct lpm_trie_node *node;
+
+- raw_spin_lock(&trie->lock);
++ /* Wait for outstanding programs to complete
++ * update/lookup/delete/get_next_key and free the trie.
++ */
++ synchronize_rcu();
+
+ /* Always start at the root and walk down to a node that has no
+ * children. Then free that node, nullify its reference in the parent
+@@ -571,10 +574,9 @@ static void trie_free(struct bpf_map *map)
+ slot = &trie->root;
+
+ for (;;) {
+- node = rcu_dereference_protected(*slot,
+- lockdep_is_held(&trie->lock));
++ node = rcu_dereference_protected(*slot, 1);
+ if (!node)
+- goto unlock;
++ goto out;
+
+ if (rcu_access_pointer(node->child[0])) {
+ slot = &node->child[0];
+@@ -592,8 +594,8 @@ static void trie_free(struct bpf_map *map)
+ }
+ }
+
+-unlock:
+- raw_spin_unlock(&trie->lock);
++out:
++ kfree(trie);
+ }
+
+ static int trie_get_next_key(struct bpf_map *map, void *key, void *next_key)
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 13551e623501..7125ddbb24df 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -985,6 +985,13 @@ static bool is_ctx_reg(struct bpf_verifier_env *env, int regno)
+ return reg->type == PTR_TO_CTX;
+ }
+
++static bool is_pkt_reg(struct bpf_verifier_env *env, int regno)
++{
++ const struct bpf_reg_state *reg = cur_regs(env) + regno;
++
++ return type_is_pkt_pointer(reg->type);
++}
++
+ static int check_pkt_ptr_alignment(struct bpf_verifier_env *env,
+ const struct bpf_reg_state *reg,
+ int off, int size, bool strict)
+@@ -1045,10 +1052,10 @@ static int check_generic_ptr_alignment(struct bpf_verifier_env *env,
+ }
+
+ static int check_ptr_alignment(struct bpf_verifier_env *env,
+- const struct bpf_reg_state *reg,
+- int off, int size)
++ const struct bpf_reg_state *reg, int off,
++ int size, bool strict_alignment_once)
+ {
+- bool strict = env->strict_alignment;
++ bool strict = env->strict_alignment || strict_alignment_once;
+ const char *pointer_desc = "";
+
+ switch (reg->type) {
+@@ -1108,9 +1115,9 @@ static void coerce_reg_to_size(struct bpf_reg_state *reg, int size)
+ * if t==write && value_regno==-1, some unknown value is stored into memory
+ * if t==read && value_regno==-1, don't care what we read from memory
+ */
+-static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regno, int off,
+- int bpf_size, enum bpf_access_type t,
+- int value_regno)
++static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regno,
++ int off, int bpf_size, enum bpf_access_type t,
++ int value_regno, bool strict_alignment_once)
+ {
+ struct bpf_verifier_state *state = env->cur_state;
+ struct bpf_reg_state *regs = cur_regs(env);
+@@ -1122,7 +1129,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ return size;
+
+ /* alignment checks will add in reg->off themselves */
+- err = check_ptr_alignment(env, reg, off, size);
++ err = check_ptr_alignment(env, reg, off, size, strict_alignment_once);
+ if (err)
+ return err;
+
+@@ -1265,21 +1272,23 @@ static int check_xadd(struct bpf_verifier_env *env, int insn_idx, struct bpf_ins
+ return -EACCES;
+ }
+
+- if (is_ctx_reg(env, insn->dst_reg)) {
+- verbose(env, "BPF_XADD stores into R%d context is not allowed\n",
+- insn->dst_reg);
++ if (is_ctx_reg(env, insn->dst_reg) ||
++ is_pkt_reg(env, insn->dst_reg)) {
++ verbose(env, "BPF_XADD stores into R%d %s is not allowed\n",
++ insn->dst_reg, is_ctx_reg(env, insn->dst_reg) ?
++ "context" : "packet");
+ return -EACCES;
+ }
+
+ /* check whether atomic_add can read the memory */
+ err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
+- BPF_SIZE(insn->code), BPF_READ, -1);
++ BPF_SIZE(insn->code), BPF_READ, -1, true);
+ if (err)
+ return err;
+
+ /* check whether atomic_add can write into the same memory */
+ return check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
+- BPF_SIZE(insn->code), BPF_WRITE, -1);
++ BPF_SIZE(insn->code), BPF_WRITE, -1, true);
+ }
+
+ /* Does this register contain a constant zero? */
+@@ -1763,7 +1772,8 @@ static int check_call(struct bpf_verifier_env *env, int func_id, int insn_idx)
+ * is inferred from register state.
+ */
+ for (i = 0; i < meta.access_size; i++) {
+- err = check_mem_access(env, insn_idx, meta.regno, i, BPF_B, BPF_WRITE, -1);
++ err = check_mem_access(env, insn_idx, meta.regno, i, BPF_B,
++ BPF_WRITE, -1, false);
+ if (err)
+ return err;
+ }
+@@ -3933,7 +3943,7 @@ static int do_check(struct bpf_verifier_env *env)
+ */
+ err = check_mem_access(env, insn_idx, insn->src_reg, insn->off,
+ BPF_SIZE(insn->code), BPF_READ,
+- insn->dst_reg);
++ insn->dst_reg, false);
+ if (err)
+ return err;
+
+@@ -3985,7 +3995,7 @@ static int do_check(struct bpf_verifier_env *env)
+ /* check that memory (dst_reg + off) is writeable */
+ err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
+ BPF_SIZE(insn->code), BPF_WRITE,
+- insn->src_reg);
++ insn->src_reg, false);
+ if (err)
+ return err;
+
+@@ -4020,7 +4030,7 @@ static int do_check(struct bpf_verifier_env *env)
+ /* check that memory (dst_reg + off) is writeable */
+ err = check_mem_access(env, insn_idx, insn->dst_reg, insn->off,
+ BPF_SIZE(insn->code), BPF_WRITE,
+- -1);
++ -1, false);
+ if (err)
+ return err;
+
+diff --git a/tools/testing/selftests/bpf/test_verifier.c b/tools/testing/selftests/bpf/test_verifier.c
+index 5ed4175c4ff8..0694527acaa0 100644
+--- a/tools/testing/selftests/bpf/test_verifier.c
++++ b/tools/testing/selftests/bpf/test_verifier.c
+@@ -2254,6 +2254,32 @@ static struct bpf_test tests[] = {
+ .result_unpriv = REJECT,
+ .result = ACCEPT,
+ },
++ {
++ "runtime/jit: pass negative index to tail_call",
++ .insns = {
++ BPF_MOV64_IMM(BPF_REG_3, -1),
++ BPF_LD_MAP_FD(BPF_REG_2, 0),
++ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
++ BPF_FUNC_tail_call),
++ BPF_MOV64_IMM(BPF_REG_0, 0),
++ BPF_EXIT_INSN(),
++ },
++ .fixup_prog = { 1 },
++ .result = ACCEPT,
++ },
++ {
++ "runtime/jit: pass > 32bit index to tail_call",
++ .insns = {
++ BPF_LD_IMM64(BPF_REG_3, 0x100000000ULL),
++ BPF_LD_MAP_FD(BPF_REG_2, 0),
++ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
++ BPF_FUNC_tail_call),
++ BPF_MOV64_IMM(BPF_REG_0, 0),
++ BPF_EXIT_INSN(),
++ },
++ .fixup_prog = { 2 },
++ .result = ACCEPT,
++ },
+ {
+ "stack pointer arithmetic",
+ .insns = {
+@@ -8826,6 +8852,64 @@ static struct bpf_test tests[] = {
+ .result = REJECT,
+ .prog_type = BPF_PROG_TYPE_CGROUP_SOCK,
+ },
++ {
++ "xadd/w check unaligned stack",
++ .insns = {
++ BPF_MOV64_IMM(BPF_REG_0, 1),
++ BPF_STX_MEM(BPF_DW, BPF_REG_10, BPF_REG_0, -8),
++ BPF_STX_XADD(BPF_W, BPF_REG_10, BPF_REG_0, -7),
++ BPF_LDX_MEM(BPF_DW, BPF_REG_0, BPF_REG_10, -8),
++ BPF_EXIT_INSN(),
++ },
++ .result = REJECT,
++ .errstr = "misaligned stack access off",
++ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
++ },
++ {
++ "xadd/w check unaligned map",
++ .insns = {
++ BPF_ST_MEM(BPF_DW, BPF_REG_10, -8, 0),
++ BPF_MOV64_REG(BPF_REG_2, BPF_REG_10),
++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, -8),
++ BPF_LD_MAP_FD(BPF_REG_1, 0),
++ BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0,
++ BPF_FUNC_map_lookup_elem),
++ BPF_JMP_IMM(BPF_JNE, BPF_REG_0, 0, 1),
++ BPF_EXIT_INSN(),
++ BPF_MOV64_IMM(BPF_REG_1, 1),
++ BPF_STX_XADD(BPF_W, BPF_REG_0, BPF_REG_1, 3),
++ BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 3),
++ BPF_EXIT_INSN(),
++ },
++ .fixup_map1 = { 3 },
++ .result = REJECT,
++ .errstr = "misaligned value access off",
++ .prog_type = BPF_PROG_TYPE_SCHED_CLS,
++ },
++ {
++ "xadd/w check unaligned pkt",
++ .insns = {
++ BPF_LDX_MEM(BPF_W, BPF_REG_2, BPF_REG_1,
++ offsetof(struct xdp_md, data)),
++ BPF_LDX_MEM(BPF_W, BPF_REG_3, BPF_REG_1,
++ offsetof(struct xdp_md, data_end)),
++ BPF_MOV64_REG(BPF_REG_1, BPF_REG_2),
++ BPF_ALU64_IMM(BPF_ADD, BPF_REG_1, 8),
++ BPF_JMP_REG(BPF_JLT, BPF_REG_1, BPF_REG_3, 2),
++ BPF_MOV64_IMM(BPF_REG_0, 99),
++ BPF_JMP_IMM(BPF_JA, 0, 0, 6),
++ BPF_MOV64_IMM(BPF_REG_0, 1),
++ BPF_ST_MEM(BPF_W, BPF_REG_2, 0, 0),
++ BPF_ST_MEM(BPF_W, BPF_REG_2, 3, 0),
++ BPF_STX_XADD(BPF_W, BPF_REG_2, BPF_REG_0, 1),
++ BPF_STX_XADD(BPF_W, BPF_REG_2, BPF_REG_0, 2),
++ BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_2, 1),
++ BPF_EXIT_INSN(),
++ },
++ .result = REJECT,
++ .errstr = "BPF_XADD stores into R2 packet",
++ .prog_type = BPF_PROG_TYPE_XDP,
++ },
+ };
+
+ static int probe_filter_length(const struct bpf_insn *fp)
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-03-15 10:26 Mike Pagano
From: Mike Pagano @ 2018-03-15 10:26 UTC
To: gentoo-commits
commit: 769b138afbb1b415122f8af9d94069d9c2d8b28d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 15 10:26:07 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Mar 15 10:26:07 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=769b138a
Linux patch 4.15.10
0000_README | 4 +
1009_linux-4.15.10.patch | 6492 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6496 insertions(+)
diff --git a/0000_README b/0000_README
index bce11f7..172ed03 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 1008_linux-4.15.9.patch
From: http://www.kernel.org
Desc: Linux 4.15.9
+Patch: 1009_linux-4.15.10.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.10
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1009_linux-4.15.10.patch b/1009_linux-4.15.10.patch
new file mode 100644
index 0000000..8b78977
--- /dev/null
+++ b/1009_linux-4.15.10.patch
@@ -0,0 +1,6492 @@
+diff --git a/Documentation/devicetree/bindings/power/mti,mips-cpc.txt b/Documentation/devicetree/bindings/power/mti,mips-cpc.txt
+new file mode 100644
+index 000000000000..c6b82511ae8a
+--- /dev/null
++++ b/Documentation/devicetree/bindings/power/mti,mips-cpc.txt
+@@ -0,0 +1,8 @@
++Binding for MIPS Cluster Power Controller (CPC).
++
++This binding allows a system to specify where the CPC registers are
++located.
++
++Required properties:
++compatible : Should be "mti,mips-cpc".
++regs: Should describe the address & size of the CPC register region.
+diff --git a/Documentation/sphinx/kerneldoc.py b/Documentation/sphinx/kerneldoc.py
+index 39aa9e8697cc..fbedcc39460b 100644
+--- a/Documentation/sphinx/kerneldoc.py
++++ b/Documentation/sphinx/kerneldoc.py
+@@ -36,8 +36,7 @@ import glob
+
+ from docutils import nodes, statemachine
+ from docutils.statemachine import ViewList
+-from docutils.parsers.rst import directives
+-from sphinx.util.compat import Directive
++from docutils.parsers.rst import directives, Directive
+ from sphinx.ext.autodoc import AutodocReporter
+
+ __version__ = '1.0'
+diff --git a/MAINTAINERS b/MAINTAINERS
+index 845fc25812f1..8e5d2e5d85bf 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -9107,6 +9107,7 @@ MIPS GENERIC PLATFORM
+ M: Paul Burton <paul.burton@mips.com>
+ L: linux-mips@linux-mips.org
+ S: Supported
++F: Documentation/devicetree/bindings/power/mti,mips-cpc.txt
+ F: arch/mips/generic/
+ F: arch/mips/tools/generic-board-config.sh
+
+diff --git a/Makefile b/Makefile
+index 0420f9a0c70f..7eed0f168b13 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+@@ -487,6 +487,11 @@ KBUILD_CFLAGS += $(CLANG_TARGET) $(CLANG_GCC_TC)
+ KBUILD_AFLAGS += $(CLANG_TARGET) $(CLANG_GCC_TC)
+ endif
+
++RETPOLINE_CFLAGS_GCC := -mindirect-branch=thunk-extern -mindirect-branch-register
++RETPOLINE_CFLAGS_CLANG := -mretpoline-external-thunk
++RETPOLINE_CFLAGS := $(call cc-option,$(RETPOLINE_CFLAGS_GCC),$(call cc-option,$(RETPOLINE_CFLAGS_CLANG)))
++export RETPOLINE_CFLAGS
++
+ ifeq ($(config-targets),1)
+ # ===========================================================================
+ # *config targets only - make sure prerequisites are updated, and descend
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 451f96f3377c..5bdc2c4db9ad 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -107,7 +107,7 @@ static bool pgattr_change_is_safe(u64 old, u64 new)
+ * The following mapping attributes may be updated in live
+ * kernel mappings without the need for break-before-make.
+ */
+- static const pteval_t mask = PTE_PXN | PTE_RDONLY | PTE_WRITE;
++ static const pteval_t mask = PTE_PXN | PTE_RDONLY | PTE_WRITE | PTE_NG;
+
+ /* creating or taking down mappings is always safe */
+ if (old == 0 || new == 0)
+@@ -117,9 +117,9 @@ static bool pgattr_change_is_safe(u64 old, u64 new)
+ if ((old | new) & PTE_CONT)
+ return false;
+
+- /* Transitioning from Global to Non-Global is safe */
+- if (((old ^ new) == PTE_NG) && (new & PTE_NG))
+- return true;
++ /* Transitioning from Non-Global to Global is unsafe */
++ if (old & ~new & PTE_NG)
++ return false;
+
+ return ((old ^ new) & ~mask) == 0;
+ }
+diff --git a/arch/mips/ath25/board.c b/arch/mips/ath25/board.c
+index 9ab48ff80c1c..6d11ae581ea7 100644
+--- a/arch/mips/ath25/board.c
++++ b/arch/mips/ath25/board.c
+@@ -135,6 +135,8 @@ int __init ath25_find_config(phys_addr_t base, unsigned long size)
+ }
+
+ board_data = kzalloc(BOARD_CONFIG_BUFSZ, GFP_KERNEL);
++ if (!board_data)
++ goto error;
+ ath25_board.config = (struct ath25_boarddata *)board_data;
+ memcpy_fromio(board_data, bcfg, 0x100);
+ if (broken_boarddata) {
+diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c
+index 5b3a3f6a9ad3..d99f5242169e 100644
+--- a/arch/mips/cavium-octeon/octeon-irq.c
++++ b/arch/mips/cavium-octeon/octeon-irq.c
+@@ -2277,6 +2277,8 @@ static int __init octeon_irq_init_cib(struct device_node *ciu_node,
+ }
+
+ host_data = kzalloc(sizeof(*host_data), GFP_KERNEL);
++ if (!host_data)
++ return -ENOMEM;
+ raw_spin_lock_init(&host_data->lock);
+
+ addr = of_get_address(ciu_node, 0, NULL, NULL);
+diff --git a/arch/mips/kernel/mips-cpc.c b/arch/mips/kernel/mips-cpc.c
+index 19c88d770054..fcf9af492d60 100644
+--- a/arch/mips/kernel/mips-cpc.c
++++ b/arch/mips/kernel/mips-cpc.c
+@@ -10,6 +10,8 @@
+
+ #include <linux/errno.h>
+ #include <linux/percpu.h>
++#include <linux/of.h>
++#include <linux/of_address.h>
+ #include <linux/spinlock.h>
+
+ #include <asm/mips-cps.h>
+@@ -22,6 +24,17 @@ static DEFINE_PER_CPU_ALIGNED(unsigned long, cpc_core_lock_flags);
+
+ phys_addr_t __weak mips_cpc_default_phys_base(void)
+ {
++ struct device_node *cpc_node;
++ struct resource res;
++ int err;
++
++ cpc_node = of_find_compatible_node(of_root, NULL, "mti,mips-cpc");
++ if (cpc_node) {
++ err = of_address_to_resource(cpc_node, 0, &res);
++ if (!err)
++ return res.start;
++ }
++
+ return 0;
+ }
+
+diff --git a/arch/mips/kernel/smp-bmips.c b/arch/mips/kernel/smp-bmips.c
+index 87dcac2447c8..382d12eb88f0 100644
+--- a/arch/mips/kernel/smp-bmips.c
++++ b/arch/mips/kernel/smp-bmips.c
+@@ -168,11 +168,11 @@ static void bmips_prepare_cpus(unsigned int max_cpus)
+ return;
+ }
+
+- if (request_irq(IPI0_IRQ, bmips_ipi_interrupt, IRQF_PERCPU,
+- "smp_ipi0", NULL))
++ if (request_irq(IPI0_IRQ, bmips_ipi_interrupt,
++ IRQF_PERCPU | IRQF_NO_SUSPEND, "smp_ipi0", NULL))
+ panic("Can't request IPI0 interrupt");
+- if (request_irq(IPI1_IRQ, bmips_ipi_interrupt, IRQF_PERCPU,
+- "smp_ipi1", NULL))
++ if (request_irq(IPI1_IRQ, bmips_ipi_interrupt,
++ IRQF_PERCPU | IRQF_NO_SUSPEND, "smp_ipi1", NULL))
+ panic("Can't request IPI1 interrupt");
+ }
+
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 5c03e371b7b8..004684eaa827 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -2118,6 +2118,7 @@ static void sca_add_vcpu(struct kvm_vcpu *vcpu)
+ /* we still need the basic sca for the ipte control */
+ vcpu->arch.sie_block->scaoh = (__u32)(((__u64)sca) >> 32);
+ vcpu->arch.sie_block->scaol = (__u32)(__u64)sca;
++ return;
+ }
+ read_lock(&vcpu->kvm->arch.sca_lock);
+ if (vcpu->kvm->arch.use_esca) {
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 20da391b5f32..7bb4eb14a2e0 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -432,6 +432,7 @@ config GOLDFISH
+ config RETPOLINE
+ bool "Avoid speculative indirect branches in kernel"
+ default y
++ select STACK_VALIDATION if HAVE_STACK_VALIDATION
+ help
+ Compile kernel with the retpoline compiler options to guard against
+ kernel-to-user data leaks by avoiding speculative indirect
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index fad55160dcb9..498c1b812300 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -232,10 +232,9 @@ KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
+
+ # Avoid indirect branches in kernel to deal with Spectre
+ ifdef CONFIG_RETPOLINE
+- RETPOLINE_CFLAGS += $(call cc-option,-mindirect-branch=thunk-extern -mindirect-branch-register)
+- ifneq ($(RETPOLINE_CFLAGS),)
+- KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) -DRETPOLINE
+- endif
++ifneq ($(RETPOLINE_CFLAGS),)
++ KBUILD_CFLAGS += $(RETPOLINE_CFLAGS) -DRETPOLINE
++endif
+ endif
+
+ archscripts: scripts_basic
+diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
+index dce7092ab24a..5d10b7a85cad 100644
+--- a/arch/x86/entry/calling.h
++++ b/arch/x86/entry/calling.h
+@@ -97,7 +97,7 @@ For 32-bit we have the following conventions - kernel is built with
+
+ #define SIZEOF_PTREGS 21*8
+
+-.macro PUSH_AND_CLEAR_REGS rdx=%rdx rax=%rax
++.macro PUSH_AND_CLEAR_REGS rdx=%rdx rax=%rax save_ret=0
+ /*
+ * Push registers and sanitize registers of values that a
+ * speculation attack might otherwise want to exploit. The
+@@ -105,32 +105,41 @@ For 32-bit we have the following conventions - kernel is built with
+ * could be put to use in a speculative execution gadget.
+ * Interleave XOR with PUSH for better uop scheduling:
+ */
++ .if \save_ret
++ pushq %rsi /* pt_regs->si */
++ movq 8(%rsp), %rsi /* temporarily store the return address in %rsi */
++ movq %rdi, 8(%rsp) /* pt_regs->di (overwriting original return address) */
++ .else
+ pushq %rdi /* pt_regs->di */
+ pushq %rsi /* pt_regs->si */
++ .endif
+ pushq \rdx /* pt_regs->dx */
+ pushq %rcx /* pt_regs->cx */
+ pushq \rax /* pt_regs->ax */
+ pushq %r8 /* pt_regs->r8 */
+- xorq %r8, %r8 /* nospec r8 */
++ xorl %r8d, %r8d /* nospec r8 */
+ pushq %r9 /* pt_regs->r9 */
+- xorq %r9, %r9 /* nospec r9 */
++ xorl %r9d, %r9d /* nospec r9 */
+ pushq %r10 /* pt_regs->r10 */
+- xorq %r10, %r10 /* nospec r10 */
++ xorl %r10d, %r10d /* nospec r10 */
+ pushq %r11 /* pt_regs->r11 */
+- xorq %r11, %r11 /* nospec r11*/
++ xorl %r11d, %r11d /* nospec r11*/
+ pushq %rbx /* pt_regs->rbx */
+ xorl %ebx, %ebx /* nospec rbx*/
+ pushq %rbp /* pt_regs->rbp */
+ xorl %ebp, %ebp /* nospec rbp*/
+ pushq %r12 /* pt_regs->r12 */
+- xorq %r12, %r12 /* nospec r12*/
++ xorl %r12d, %r12d /* nospec r12*/
+ pushq %r13 /* pt_regs->r13 */
+- xorq %r13, %r13 /* nospec r13*/
++ xorl %r13d, %r13d /* nospec r13*/
+ pushq %r14 /* pt_regs->r14 */
+- xorq %r14, %r14 /* nospec r14*/
++ xorl %r14d, %r14d /* nospec r14*/
+ pushq %r15 /* pt_regs->r15 */
+- xorq %r15, %r15 /* nospec r15*/
++ xorl %r15d, %r15d /* nospec r15*/
+ UNWIND_HINT_REGS
++ .if \save_ret
++ pushq %rsi /* return address on top of stack */
++ .endif
+ .endm
+
+ .macro POP_REGS pop_rdi=1 skip_r11rcx=0
+diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
+index 2a35b1e0fb90..60c4c342316c 100644
+--- a/arch/x86/entry/entry_32.S
++++ b/arch/x86/entry/entry_32.S
+@@ -252,8 +252,7 @@ ENTRY(__switch_to_asm)
+ * exist, overwrite the RSB with entries which capture
+ * speculative execution to prevent attack.
+ */
+- /* Clobbers %ebx */
+- FILL_RETURN_BUFFER RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW
++ FILL_RETURN_BUFFER %ebx, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW
+ #endif
+
+ /* restore callee-saved registers */
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 4fd9044e72e7..50dcbf640850 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -364,8 +364,7 @@ ENTRY(__switch_to_asm)
+ * exist, overwrite the RSB with entries which capture
+ * speculative execution to prevent attack.
+ */
+- /* Clobbers %rbx */
+- FILL_RETURN_BUFFER RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW
++ FILL_RETURN_BUFFER %r12, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_CTXSW
+ #endif
+
+ /* restore callee-saved registers */
+@@ -871,12 +870,8 @@ ENTRY(\sym)
+ pushq $-1 /* ORIG_RAX: no syscall to restart */
+ .endif
+
+- /* Save all registers in pt_regs */
+- PUSH_AND_CLEAR_REGS
+- ENCODE_FRAME_POINTER
+-
+ .if \paranoid < 2
+- testb $3, CS(%rsp) /* If coming from userspace, switch stacks */
++ testb $3, CS-ORIG_RAX(%rsp) /* If coming from userspace, switch stacks */
+ jnz .Lfrom_usermode_switch_stack_\@
+ .endif
+
+@@ -1123,13 +1118,15 @@ idtentry machine_check do_mce has_error_code=0 paranoid=1
+ #endif
+
+ /*
+- * Switch gs if needed.
++ * Save all registers in pt_regs, and switch gs if needed.
+ * Use slow, but surefire "are we in kernel?" check.
+ * Return: ebx=0: need swapgs on exit, ebx=1: otherwise
+ */
+ ENTRY(paranoid_entry)
+ UNWIND_HINT_FUNC
+ cld
++ PUSH_AND_CLEAR_REGS save_ret=1
++ ENCODE_FRAME_POINTER 8
+ movl $1, %ebx
+ movl $MSR_GS_BASE, %ecx
+ rdmsr
+@@ -1174,12 +1171,14 @@ ENTRY(paranoid_exit)
+ END(paranoid_exit)
+
+ /*
+- * Switch gs if needed.
++ * Save all registers in pt_regs, and switch GS if needed.
+ * Return: EBX=0: came from user mode; EBX=1: otherwise
+ */
+ ENTRY(error_entry)
+- UNWIND_HINT_REGS offset=8
++ UNWIND_HINT_FUNC
+ cld
++ PUSH_AND_CLEAR_REGS save_ret=1
++ ENCODE_FRAME_POINTER 8
+ testb $3, CS+8(%rsp)
+ jz .Lerror_kernelspace
+
+@@ -1570,8 +1569,6 @@ end_repeat_nmi:
+ * frame to point back to repeat_nmi.
+ */
+ pushq $-1 /* ORIG_RAX: no syscall to restart */
+- PUSH_AND_CLEAR_REGS
+- ENCODE_FRAME_POINTER
+
+ /*
+ * Use paranoid_entry to handle SWAPGS, but no need to use paranoid_exit
+diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
+index fd65e016e413..364ea4a207be 100644
+--- a/arch/x86/entry/entry_64_compat.S
++++ b/arch/x86/entry/entry_64_compat.S
+@@ -85,25 +85,25 @@ ENTRY(entry_SYSENTER_compat)
+ pushq %rcx /* pt_regs->cx */
+ pushq $-ENOSYS /* pt_regs->ax */
+ pushq $0 /* pt_regs->r8 = 0 */
+- xorq %r8, %r8 /* nospec r8 */
++ xorl %r8d, %r8d /* nospec r8 */
+ pushq $0 /* pt_regs->r9 = 0 */
+- xorq %r9, %r9 /* nospec r9 */
++ xorl %r9d, %r9d /* nospec r9 */
+ pushq $0 /* pt_regs->r10 = 0 */
+- xorq %r10, %r10 /* nospec r10 */
++ xorl %r10d, %r10d /* nospec r10 */
+ pushq $0 /* pt_regs->r11 = 0 */
+- xorq %r11, %r11 /* nospec r11 */
++ xorl %r11d, %r11d /* nospec r11 */
+ pushq %rbx /* pt_regs->rbx */
+ xorl %ebx, %ebx /* nospec rbx */
+ pushq %rbp /* pt_regs->rbp (will be overwritten) */
+ xorl %ebp, %ebp /* nospec rbp */
+ pushq $0 /* pt_regs->r12 = 0 */
+- xorq %r12, %r12 /* nospec r12 */
++ xorl %r12d, %r12d /* nospec r12 */
+ pushq $0 /* pt_regs->r13 = 0 */
+- xorq %r13, %r13 /* nospec r13 */
++ xorl %r13d, %r13d /* nospec r13 */
+ pushq $0 /* pt_regs->r14 = 0 */
+- xorq %r14, %r14 /* nospec r14 */
++ xorl %r14d, %r14d /* nospec r14 */
+ pushq $0 /* pt_regs->r15 = 0 */
+- xorq %r15, %r15 /* nospec r15 */
++ xorl %r15d, %r15d /* nospec r15 */
+ cld
+
+ /*
+@@ -224,25 +224,25 @@ GLOBAL(entry_SYSCALL_compat_after_hwframe)
+ pushq %rbp /* pt_regs->cx (stashed in bp) */
+ pushq $-ENOSYS /* pt_regs->ax */
+ pushq $0 /* pt_regs->r8 = 0 */
+- xorq %r8, %r8 /* nospec r8 */
++ xorl %r8d, %r8d /* nospec r8 */
+ pushq $0 /* pt_regs->r9 = 0 */
+- xorq %r9, %r9 /* nospec r9 */
++ xorl %r9d, %r9d /* nospec r9 */
+ pushq $0 /* pt_regs->r10 = 0 */
+- xorq %r10, %r10 /* nospec r10 */
++ xorl %r10d, %r10d /* nospec r10 */
+ pushq $0 /* pt_regs->r11 = 0 */
+- xorq %r11, %r11 /* nospec r11 */
++ xorl %r11d, %r11d /* nospec r11 */
+ pushq %rbx /* pt_regs->rbx */
+ xorl %ebx, %ebx /* nospec rbx */
+ pushq %rbp /* pt_regs->rbp (will be overwritten) */
+ xorl %ebp, %ebp /* nospec rbp */
+ pushq $0 /* pt_regs->r12 = 0 */
+- xorq %r12, %r12 /* nospec r12 */
++ xorl %r12d, %r12d /* nospec r12 */
+ pushq $0 /* pt_regs->r13 = 0 */
+- xorq %r13, %r13 /* nospec r13 */
++ xorl %r13d, %r13d /* nospec r13 */
+ pushq $0 /* pt_regs->r14 = 0 */
+- xorq %r14, %r14 /* nospec r14 */
++ xorl %r14d, %r14d /* nospec r14 */
+ pushq $0 /* pt_regs->r15 = 0 */
+- xorq %r15, %r15 /* nospec r15 */
++ xorl %r15d, %r15d /* nospec r15 */
+
+ /*
+ * User mode is traced as though IRQs are on, and SYSENTER
+@@ -298,9 +298,9 @@ sysret32_from_system_call:
+ */
+ SWITCH_TO_USER_CR3_NOSTACK scratch_reg=%r8 scratch_reg2=%r9
+
+- xorq %r8, %r8
+- xorq %r9, %r9
+- xorq %r10, %r10
++ xorl %r8d, %r8d
++ xorl %r9d, %r9d
++ xorl %r10d, %r10d
+ swapgs
+ sysretl
+ END(entry_SYSCALL_compat)
+@@ -358,25 +358,25 @@ ENTRY(entry_INT80_compat)
+ pushq %rcx /* pt_regs->cx */
+ pushq $-ENOSYS /* pt_regs->ax */
+ pushq $0 /* pt_regs->r8 = 0 */
+- xorq %r8, %r8 /* nospec r8 */
++ xorl %r8d, %r8d /* nospec r8 */
+ pushq $0 /* pt_regs->r9 = 0 */
+- xorq %r9, %r9 /* nospec r9 */
++ xorl %r9d, %r9d /* nospec r9 */
+ pushq $0 /* pt_regs->r10 = 0 */
+- xorq %r10, %r10 /* nospec r10 */
++ xorl %r10d, %r10d /* nospec r10 */
+ pushq $0 /* pt_regs->r11 = 0 */
+- xorq %r11, %r11 /* nospec r11 */
++ xorl %r11d, %r11d /* nospec r11 */
+ pushq %rbx /* pt_regs->rbx */
+ xorl %ebx, %ebx /* nospec rbx */
+ pushq %rbp /* pt_regs->rbp */
+ xorl %ebp, %ebp /* nospec rbp */
+ pushq %r12 /* pt_regs->r12 */
+- xorq %r12, %r12 /* nospec r12 */
++ xorl %r12d, %r12d /* nospec r12 */
+ pushq %r13 /* pt_regs->r13 */
+- xorq %r13, %r13 /* nospec r13 */
++ xorl %r13d, %r13d /* nospec r13 */
+ pushq %r14 /* pt_regs->r14 */
+- xorq %r14, %r14 /* nospec r14 */
++ xorl %r14d, %r14d /* nospec r14 */
+ pushq %r15 /* pt_regs->r15 */
+- xorq %r15, %r15 /* nospec r15 */
++ xorl %r15d, %r15d /* nospec r15 */
+ cld
+
+ /*
+diff --git a/arch/x86/include/asm/apm.h b/arch/x86/include/asm/apm.h
+index 4d4015ddcf26..c356098b6fb9 100644
+--- a/arch/x86/include/asm/apm.h
++++ b/arch/x86/include/asm/apm.h
+@@ -7,6 +7,8 @@
+ #ifndef _ASM_X86_MACH_DEFAULT_APM_H
+ #define _ASM_X86_MACH_DEFAULT_APM_H
+
++#include <asm/nospec-branch.h>
++
+ #ifdef APM_ZERO_SEGS
+ # define APM_DO_ZERO_SEGS \
+ "pushl %%ds\n\t" \
+@@ -32,6 +34,7 @@ static inline void apm_bios_call_asm(u32 func, u32 ebx_in, u32 ecx_in,
+ * N.B. We do NOT need a cld after the BIOS call
+ * because we always save and restore the flags.
+ */
++ firmware_restrict_branch_speculation_start();
+ __asm__ __volatile__(APM_DO_ZERO_SEGS
+ "pushl %%edi\n\t"
+ "pushl %%ebp\n\t"
+@@ -44,6 +47,7 @@ static inline void apm_bios_call_asm(u32 func, u32 ebx_in, u32 ecx_in,
+ "=S" (*esi)
+ : "a" (func), "b" (ebx_in), "c" (ecx_in)
+ : "memory", "cc");
++ firmware_restrict_branch_speculation_end();
+ }
+
+ static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
+@@ -56,6 +60,7 @@ static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
+ * N.B. We do NOT need a cld after the BIOS call
+ * because we always save and restore the flags.
+ */
++ firmware_restrict_branch_speculation_start();
+ __asm__ __volatile__(APM_DO_ZERO_SEGS
+ "pushl %%edi\n\t"
+ "pushl %%ebp\n\t"
+@@ -68,6 +73,7 @@ static inline bool apm_bios_call_simple_asm(u32 func, u32 ebx_in,
+ "=S" (si)
+ : "a" (func), "b" (ebx_in), "c" (ecx_in)
+ : "memory", "cc");
++ firmware_restrict_branch_speculation_end();
+ return error;
+ }
+
+diff --git a/arch/x86/include/asm/asm-prototypes.h b/arch/x86/include/asm/asm-prototypes.h
+index 4d111616524b..1908214b9125 100644
+--- a/arch/x86/include/asm/asm-prototypes.h
++++ b/arch/x86/include/asm/asm-prototypes.h
+@@ -38,7 +38,4 @@ INDIRECT_THUNK(dx)
+ INDIRECT_THUNK(si)
+ INDIRECT_THUNK(di)
+ INDIRECT_THUNK(bp)
+-asmlinkage void __fill_rsb(void);
+-asmlinkage void __clear_rsb(void);
+-
+ #endif /* CONFIG_RETPOLINE */
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 73b5fff159a4..66c14347c502 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -211,6 +211,7 @@
+ #define X86_FEATURE_RSB_CTXSW ( 7*32+19) /* "" Fill RSB on context switches */
+
+ #define X86_FEATURE_USE_IBPB ( 7*32+21) /* "" Indirect Branch Prediction Barrier enabled */
++#define X86_FEATURE_USE_IBRS_FW ( 7*32+22) /* "" Use IBRS during runtime firmware calls */
+
+ /* Virtualization flags: Linux defined, word 8 */
+ #define X86_FEATURE_TPR_SHADOW ( 8*32+ 0) /* Intel TPR Shadow */
+diff --git a/arch/x86/include/asm/efi.h b/arch/x86/include/asm/efi.h
+index 85f6ccb80b91..a399c1ebf6f0 100644
+--- a/arch/x86/include/asm/efi.h
++++ b/arch/x86/include/asm/efi.h
+@@ -6,6 +6,7 @@
+ #include <asm/pgtable.h>
+ #include <asm/processor-flags.h>
+ #include <asm/tlb.h>
++#include <asm/nospec-branch.h>
+
+ /*
+ * We map the EFI regions needed for runtime services non-contiguously,
+@@ -36,8 +37,18 @@
+
+ extern asmlinkage unsigned long efi_call_phys(void *, ...);
+
+-#define arch_efi_call_virt_setup() kernel_fpu_begin()
+-#define arch_efi_call_virt_teardown() kernel_fpu_end()
++#define arch_efi_call_virt_setup() \
++({ \
++ kernel_fpu_begin(); \
++ firmware_restrict_branch_speculation_start(); \
++})
++
++#define arch_efi_call_virt_teardown() \
++({ \
++ firmware_restrict_branch_speculation_end(); \
++ kernel_fpu_end(); \
++})
++
+
+ /*
+ * Wrap all the virtual calls in a way that forces the parameters on the stack.
+@@ -73,6 +84,7 @@ struct efi_scratch {
+ efi_sync_low_kernel_mappings(); \
+ preempt_disable(); \
+ __kernel_fpu_begin(); \
++ firmware_restrict_branch_speculation_start(); \
+ \
+ if (efi_scratch.use_pgd) { \
+ efi_scratch.prev_cr3 = __read_cr3(); \
+@@ -91,6 +103,7 @@ struct efi_scratch {
+ __flush_tlb_all(); \
+ } \
+ \
++ firmware_restrict_branch_speculation_end(); \
+ __kernel_fpu_end(); \
+ preempt_enable(); \
+ })
+diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
+index c931b88982a0..1de72ce514cd 100644
+--- a/arch/x86/include/asm/mmu_context.h
++++ b/arch/x86/include/asm/mmu_context.h
+@@ -74,6 +74,7 @@ static inline void *ldt_slot_va(int slot)
+ return (void *)(LDT_BASE_ADDR + LDT_SLOT_STRIDE * slot);
+ #else
+ BUG();
++ return (void *)fix_to_virt(FIX_HOLE);
+ #endif
+ }
+
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index 81a1be326571..d0dabeae0505 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -8,6 +8,50 @@
+ #include <asm/cpufeatures.h>
+ #include <asm/msr-index.h>
+
++/*
++ * Fill the CPU return stack buffer.
++ *
++ * Each entry in the RSB, if used for a speculative 'ret', contains an
++ * infinite 'pause; lfence; jmp' loop to capture speculative execution.
++ *
++ * This is required in various cases for retpoline and IBRS-based
++ * mitigations for the Spectre variant 2 vulnerability. Sometimes to
++ * eliminate potentially bogus entries from the RSB, and sometimes
++ * purely to ensure that it doesn't get empty, which on some CPUs would
++ * allow predictions from other (unwanted!) sources to be used.
++ *
++ * We define a CPP macro such that it can be used from both .S files and
++ * inline assembly. It's possible to do a .macro and then include that
++ * from C via asm(".include <asm/nospec-branch.h>") but let's not go there.
++ */
++
++#define RSB_CLEAR_LOOPS 32 /* To forcibly overwrite all entries */
++#define RSB_FILL_LOOPS 16 /* To avoid underflow */
++
++/*
++ * Google experimented with loop-unrolling and this turned out to be
++ * the optimal version — two calls, each with their own speculation
++ * trap should their return address end up getting used, in a loop.
++ */
++#define __FILL_RETURN_BUFFER(reg, nr, sp) \
++ mov $(nr/2), reg; \
++771: \
++ call 772f; \
++773: /* speculation trap */ \
++ pause; \
++ lfence; \
++ jmp 773b; \
++772: \
++ call 774f; \
++775: /* speculation trap */ \
++ pause; \
++ lfence; \
++ jmp 775b; \
++774: \
++ dec reg; \
++ jnz 771b; \
++ add $(BITS_PER_LONG/8) * nr, sp;
++
+ #ifdef __ASSEMBLY__
+
+ /*
+@@ -23,6 +67,18 @@
+ .popsection
+ .endm
+
++/*
++ * This should be used immediately before an indirect jump/call. It tells
++ * objtool the subsequent indirect jump/call is vouched safe for retpoline
++ * builds.
++ */
++.macro ANNOTATE_RETPOLINE_SAFE
++ .Lannotate_\@:
++ .pushsection .discard.retpoline_safe
++ _ASM_PTR .Lannotate_\@
++ .popsection
++.endm
++
+ /*
+ * These are the bare retpoline primitives for indirect jmp and call.
+ * Do not use these directly; they only exist to make the ALTERNATIVE
+@@ -59,9 +115,9 @@
+ .macro JMP_NOSPEC reg:req
+ #ifdef CONFIG_RETPOLINE
+ ANNOTATE_NOSPEC_ALTERNATIVE
+- ALTERNATIVE_2 __stringify(jmp *\reg), \
++ ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; jmp *\reg), \
+ __stringify(RETPOLINE_JMP \reg), X86_FEATURE_RETPOLINE, \
+- __stringify(lfence; jmp *\reg), X86_FEATURE_RETPOLINE_AMD
++ __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; jmp *\reg), X86_FEATURE_RETPOLINE_AMD
+ #else
+ jmp *\reg
+ #endif
+@@ -70,18 +126,25 @@
+ .macro CALL_NOSPEC reg:req
+ #ifdef CONFIG_RETPOLINE
+ ANNOTATE_NOSPEC_ALTERNATIVE
+- ALTERNATIVE_2 __stringify(call *\reg), \
++ ALTERNATIVE_2 __stringify(ANNOTATE_RETPOLINE_SAFE; call *\reg), \
+ __stringify(RETPOLINE_CALL \reg), X86_FEATURE_RETPOLINE,\
+- __stringify(lfence; call *\reg), X86_FEATURE_RETPOLINE_AMD
++ __stringify(lfence; ANNOTATE_RETPOLINE_SAFE; call *\reg), X86_FEATURE_RETPOLINE_AMD
+ #else
+ call *\reg
+ #endif
+ .endm
+
+-/* This clobbers the BX register */
+-.macro FILL_RETURN_BUFFER nr:req ftr:req
++ /*
++ * A simpler FILL_RETURN_BUFFER macro. Don't make people use the CPP
++ * monstrosity above, manually.
++ */
++.macro FILL_RETURN_BUFFER reg:req nr:req ftr:req
+ #ifdef CONFIG_RETPOLINE
+- ALTERNATIVE "", "call __clear_rsb", \ftr
++ ANNOTATE_NOSPEC_ALTERNATIVE
++ ALTERNATIVE "jmp .Lskip_rsb_\@", \
++ __stringify(__FILL_RETURN_BUFFER(\reg,\nr,%_ASM_SP)) \
++ \ftr
++.Lskip_rsb_\@:
+ #endif
+ .endm
+
+@@ -93,6 +156,12 @@
+ ".long 999b - .\n\t" \
+ ".popsection\n\t"
+
++#define ANNOTATE_RETPOLINE_SAFE \
++ "999:\n\t" \
++ ".pushsection .discard.retpoline_safe\n\t" \
++ _ASM_PTR " 999b\n\t" \
++ ".popsection\n\t"
++
+ #if defined(CONFIG_X86_64) && defined(RETPOLINE)
+
+ /*
+@@ -102,6 +171,7 @@
+ # define CALL_NOSPEC \
+ ANNOTATE_NOSPEC_ALTERNATIVE \
+ ALTERNATIVE( \
++ ANNOTATE_RETPOLINE_SAFE \
+ "call *%[thunk_target]\n", \
+ "call __x86_indirect_thunk_%V[thunk_target]\n", \
+ X86_FEATURE_RETPOLINE)
+@@ -156,26 +226,54 @@ extern char __indirect_thunk_end[];
+ static inline void vmexit_fill_RSB(void)
+ {
+ #ifdef CONFIG_RETPOLINE
+- alternative_input("",
+- "call __fill_rsb",
+- X86_FEATURE_RETPOLINE,
+- ASM_NO_INPUT_CLOBBER(_ASM_BX, "memory"));
++ unsigned long loops;
++
++ asm volatile (ANNOTATE_NOSPEC_ALTERNATIVE
++ ALTERNATIVE("jmp 910f",
++ __stringify(__FILL_RETURN_BUFFER(%0, RSB_CLEAR_LOOPS, %1)),
++ X86_FEATURE_RETPOLINE)
++ "910:"
++ : "=r" (loops), ASM_CALL_CONSTRAINT
++ : : "memory" );
+ #endif
+ }
+
++#define alternative_msr_write(_msr, _val, _feature) \
++ asm volatile(ALTERNATIVE("", \
++ "movl %[msr], %%ecx\n\t" \
++ "movl %[val], %%eax\n\t" \
++ "movl $0, %%edx\n\t" \
++ "wrmsr", \
++ _feature) \
++ : : [msr] "i" (_msr), [val] "i" (_val) \
++ : "eax", "ecx", "edx", "memory")
++
+ static inline void indirect_branch_prediction_barrier(void)
+ {
+- asm volatile(ALTERNATIVE("",
+- "movl %[msr], %%ecx\n\t"
+- "movl %[val], %%eax\n\t"
+- "movl $0, %%edx\n\t"
+- "wrmsr",
+- X86_FEATURE_USE_IBPB)
+- : : [msr] "i" (MSR_IA32_PRED_CMD),
+- [val] "i" (PRED_CMD_IBPB)
+- : "eax", "ecx", "edx", "memory");
++ alternative_msr_write(MSR_IA32_PRED_CMD, PRED_CMD_IBPB,
++ X86_FEATURE_USE_IBPB);
+ }
+
++/*
++ * With retpoline, we must use IBRS to restrict branch prediction
++ * before calling into firmware.
++ *
++ * (Implemented as CPP macros due to header hell.)
++ */
++#define firmware_restrict_branch_speculation_start() \
++do { \
++ preempt_disable(); \
++ alternative_msr_write(MSR_IA32_SPEC_CTRL, SPEC_CTRL_IBRS, \
++ X86_FEATURE_USE_IBRS_FW); \
++} while (0)
++
++#define firmware_restrict_branch_speculation_end() \
++do { \
++ alternative_msr_write(MSR_IA32_SPEC_CTRL, 0, \
++ X86_FEATURE_USE_IBRS_FW); \
++ preempt_enable(); \
++} while (0)
++
+ #endif /* __ASSEMBLY__ */
+
+ /*
+diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
+index 554841fab717..c83a2f418cea 100644
+--- a/arch/x86/include/asm/paravirt.h
++++ b/arch/x86/include/asm/paravirt.h
+@@ -7,6 +7,7 @@
+ #ifdef CONFIG_PARAVIRT
+ #include <asm/pgtable_types.h>
+ #include <asm/asm.h>
++#include <asm/nospec-branch.h>
+
+ #include <asm/paravirt_types.h>
+
+@@ -879,23 +880,27 @@ extern void default_banner(void);
+
+ #define INTERRUPT_RETURN \
+ PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_iret), CLBR_NONE, \
+- jmp PARA_INDIRECT(pv_cpu_ops+PV_CPU_iret))
++ ANNOTATE_RETPOLINE_SAFE; \
++ jmp PARA_INDIRECT(pv_cpu_ops+PV_CPU_iret);)
+
+ #define DISABLE_INTERRUPTS(clobbers) \
+ PARA_SITE(PARA_PATCH(pv_irq_ops, PV_IRQ_irq_disable), clobbers, \
+ PV_SAVE_REGS(clobbers | CLBR_CALLEE_SAVE); \
++ ANNOTATE_RETPOLINE_SAFE; \
+ call PARA_INDIRECT(pv_irq_ops+PV_IRQ_irq_disable); \
+ PV_RESTORE_REGS(clobbers | CLBR_CALLEE_SAVE);)
+
+ #define ENABLE_INTERRUPTS(clobbers) \
+ PARA_SITE(PARA_PATCH(pv_irq_ops, PV_IRQ_irq_enable), clobbers, \
+ PV_SAVE_REGS(clobbers | CLBR_CALLEE_SAVE); \
++ ANNOTATE_RETPOLINE_SAFE; \
+ call PARA_INDIRECT(pv_irq_ops+PV_IRQ_irq_enable); \
+ PV_RESTORE_REGS(clobbers | CLBR_CALLEE_SAVE);)
+
+ #ifdef CONFIG_X86_32
+ #define GET_CR0_INTO_EAX \
+ push %ecx; push %edx; \
++ ANNOTATE_RETPOLINE_SAFE; \
+ call PARA_INDIRECT(pv_cpu_ops+PV_CPU_read_cr0); \
+ pop %edx; pop %ecx
+ #else /* !CONFIG_X86_32 */
+@@ -917,21 +922,25 @@ extern void default_banner(void);
+ */
+ #define SWAPGS \
+ PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_swapgs), CLBR_NONE, \
+- call PARA_INDIRECT(pv_cpu_ops+PV_CPU_swapgs) \
++ ANNOTATE_RETPOLINE_SAFE; \
++ call PARA_INDIRECT(pv_cpu_ops+PV_CPU_swapgs); \
+ )
+
+ #define GET_CR2_INTO_RAX \
+- call PARA_INDIRECT(pv_mmu_ops+PV_MMU_read_cr2)
++ ANNOTATE_RETPOLINE_SAFE; \
++ call PARA_INDIRECT(pv_mmu_ops+PV_MMU_read_cr2);
+
+ #define USERGS_SYSRET64 \
+ PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_usergs_sysret64), \
+ CLBR_NONE, \
+- jmp PARA_INDIRECT(pv_cpu_ops+PV_CPU_usergs_sysret64))
++ ANNOTATE_RETPOLINE_SAFE; \
++ jmp PARA_INDIRECT(pv_cpu_ops+PV_CPU_usergs_sysret64);)
+
+ #ifdef CONFIG_DEBUG_ENTRY
+ #define SAVE_FLAGS(clobbers) \
+ PARA_SITE(PARA_PATCH(pv_irq_ops, PV_IRQ_save_fl), clobbers, \
+ PV_SAVE_REGS(clobbers | CLBR_CALLEE_SAVE); \
++ ANNOTATE_RETPOLINE_SAFE; \
+ call PARA_INDIRECT(pv_irq_ops+PV_IRQ_save_fl); \
+ PV_RESTORE_REGS(clobbers | CLBR_CALLEE_SAVE);)
+ #endif
+diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
+index f624f1f10316..180bc0bff0fb 100644
+--- a/arch/x86/include/asm/paravirt_types.h
++++ b/arch/x86/include/asm/paravirt_types.h
+@@ -43,6 +43,7 @@
+ #include <asm/desc_defs.h>
+ #include <asm/kmap_types.h>
+ #include <asm/pgtable_types.h>
++#include <asm/nospec-branch.h>
+
+ struct page;
+ struct thread_struct;
+@@ -392,7 +393,9 @@ int paravirt_disable_iospace(void);
+ * offset into the paravirt_patch_template structure, and can therefore be
+ * freely converted back into a structure offset.
+ */
+-#define PARAVIRT_CALL "call *%c[paravirt_opptr];"
++#define PARAVIRT_CALL \
++ ANNOTATE_RETPOLINE_SAFE \
++ "call *%c[paravirt_opptr];"
+
+ /*
+ * These macros are intended to wrap calls through one of the paravirt
+diff --git a/arch/x86/include/asm/refcount.h b/arch/x86/include/asm/refcount.h
+index 4e44250e7d0d..d65171120e90 100644
+--- a/arch/x86/include/asm/refcount.h
++++ b/arch/x86/include/asm/refcount.h
+@@ -67,13 +67,13 @@ static __always_inline __must_check
+ bool refcount_sub_and_test(unsigned int i, refcount_t *r)
+ {
+ GEN_BINARY_SUFFIXED_RMWcc(LOCK_PREFIX "subl", REFCOUNT_CHECK_LT_ZERO,
+- r->refs.counter, "er", i, "%0", e);
++ r->refs.counter, "er", i, "%0", e, "cx");
+ }
+
+ static __always_inline __must_check bool refcount_dec_and_test(refcount_t *r)
+ {
+ GEN_UNARY_SUFFIXED_RMWcc(LOCK_PREFIX "decl", REFCOUNT_CHECK_LT_ZERO,
+- r->refs.counter, "%0", e);
++ r->refs.counter, "%0", e, "cx");
+ }
+
+ static __always_inline __must_check
+diff --git a/arch/x86/include/asm/rmwcc.h b/arch/x86/include/asm/rmwcc.h
+index f91c365e57c3..4914a3e7c803 100644
+--- a/arch/x86/include/asm/rmwcc.h
++++ b/arch/x86/include/asm/rmwcc.h
+@@ -2,8 +2,7 @@
+ #ifndef _ASM_X86_RMWcc
+ #define _ASM_X86_RMWcc
+
+-#define __CLOBBERS_MEM "memory"
+-#define __CLOBBERS_MEM_CC_CX "memory", "cc", "cx"
++#define __CLOBBERS_MEM(clb...) "memory", ## clb
+
+ #if !defined(__GCC_ASM_FLAG_OUTPUTS__) && defined(CC_HAVE_ASM_GOTO)
+
+@@ -40,18 +39,19 @@ do { \
+ #endif /* defined(__GCC_ASM_FLAG_OUTPUTS__) || !defined(CC_HAVE_ASM_GOTO) */
+
+ #define GEN_UNARY_RMWcc(op, var, arg0, cc) \
+- __GEN_RMWcc(op " " arg0, var, cc, __CLOBBERS_MEM)
++ __GEN_RMWcc(op " " arg0, var, cc, __CLOBBERS_MEM())
+
+-#define GEN_UNARY_SUFFIXED_RMWcc(op, suffix, var, arg0, cc) \
++#define GEN_UNARY_SUFFIXED_RMWcc(op, suffix, var, arg0, cc, clobbers...)\
+ __GEN_RMWcc(op " " arg0 "\n\t" suffix, var, cc, \
+- __CLOBBERS_MEM_CC_CX)
++ __CLOBBERS_MEM(clobbers))
+
+ #define GEN_BINARY_RMWcc(op, var, vcon, val, arg0, cc) \
+ __GEN_RMWcc(op __BINARY_RMWcc_ARG arg0, var, cc, \
+- __CLOBBERS_MEM, vcon (val))
++ __CLOBBERS_MEM(), vcon (val))
+
+-#define GEN_BINARY_SUFFIXED_RMWcc(op, suffix, var, vcon, val, arg0, cc) \
++#define GEN_BINARY_SUFFIXED_RMWcc(op, suffix, var, vcon, val, arg0, cc, \
++ clobbers...) \
+ __GEN_RMWcc(op __BINARY_RMWcc_ARG arg0 "\n\t" suffix, var, cc, \
+- __CLOBBERS_MEM_CC_CX, vcon (val))
++ __CLOBBERS_MEM(clobbers), vcon (val))
+
+ #endif /* _ASM_X86_RMWcc */
+diff --git a/arch/x86/include/asm/sections.h b/arch/x86/include/asm/sections.h
+index d6baf23782bc..5c019d23d06b 100644
+--- a/arch/x86/include/asm/sections.h
++++ b/arch/x86/include/asm/sections.h
+@@ -10,6 +10,7 @@ extern struct exception_table_entry __stop___ex_table[];
+
+ #if defined(CONFIG_X86_64)
+ extern char __end_rodata_hpage_align[];
++extern char __entry_trampoline_start[], __entry_trampoline_end[];
+ #endif
+
+ #endif /* _ASM_X86_SECTIONS_H */
+diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
+index 461f53d27708..a4189762b266 100644
+--- a/arch/x86/include/asm/smp.h
++++ b/arch/x86/include/asm/smp.h
+@@ -129,6 +129,7 @@ static inline void arch_send_call_function_ipi_mask(const struct cpumask *mask)
+ void cpu_disable_common(void);
+ void native_smp_prepare_boot_cpu(void);
+ void native_smp_prepare_cpus(unsigned int max_cpus);
++void calculate_max_logical_packages(void);
+ void native_smp_cpus_done(unsigned int max_cpus);
+ void common_cpu_up(unsigned int cpunum, struct task_struct *tidle);
+ int native_cpu_up(unsigned int cpunum, struct task_struct *tidle);
+diff --git a/arch/x86/include/uapi/asm/mce.h b/arch/x86/include/uapi/asm/mce.h
+index 91723461dc1f..435db58a7bad 100644
+--- a/arch/x86/include/uapi/asm/mce.h
++++ b/arch/x86/include/uapi/asm/mce.h
+@@ -30,6 +30,7 @@ struct mce {
+ __u64 synd; /* MCA_SYND MSR: only valid on SMCA systems */
+ __u64 ipid; /* MCA_IPID MSR: only valid on SMCA systems */
+ __u64 ppin; /* Protected Processor Inventory Number */
++ __u32 microcode;/* Microcode revision */
+ };
+
+ #define MCE_GET_RECORD_LEN _IOR('M', 1, int)
+diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
+index 8a7963421460..93d5f55cd8b6 100644
+--- a/arch/x86/kernel/apic/io_apic.c
++++ b/arch/x86/kernel/apic/io_apic.c
+@@ -1603,7 +1603,7 @@ static void __init delay_with_tsc(void)
+ do {
+ rep_nop();
+ now = rdtsc();
+- } while ((now - start) < 40000000000UL / HZ &&
++ } while ((now - start) < 40000000000ULL / HZ &&
+ time_before_eq(jiffies, end));
+ }
+
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index d71c8b54b696..bfca937bdcc3 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -300,6 +300,15 @@ static void __init spectre_v2_select_mitigation(void)
+ setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
+ pr_info("Spectre v2 mitigation: Enabling Indirect Branch Prediction Barrier\n");
+ }
++
++ /*
++ * Retpoline means the kernel is safe because it has no indirect
++ * branches. But firmware isn't, so use IBRS to protect that.
++ */
++ if (boot_cpu_has(X86_FEATURE_IBRS)) {
++ setup_force_cpu_cap(X86_FEATURE_USE_IBRS_FW);
++ pr_info("Enabling Restricted Speculation for firmware calls\n");
++ }
+ }
+
+ #undef pr_fmt
+@@ -326,8 +335,9 @@ ssize_t cpu_show_spectre_v2(struct device *dev, struct device_attribute *attr, c
+ if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+ return sprintf(buf, "Not affected\n");
+
+- return sprintf(buf, "%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
++ return sprintf(buf, "%s%s%s%s\n", spectre_v2_strings[spectre_v2_enabled],
+ boot_cpu_has(X86_FEATURE_USE_IBPB) ? ", IBPB" : "",
++ boot_cpu_has(X86_FEATURE_USE_IBRS_FW) ? ", IBRS_FW" : "",
+ spectre_v2_module_string());
+ }
+ #endif
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index d19e903214b4..4aa9fd379390 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -144,6 +144,13 @@ static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
+ {
+ int i;
+
++ /*
++	 * We know that hypervisors may lie to us about the microcode version,
++	 * so we may as well hope that the correct version is running.
++ */
++ if (cpu_has(c, X86_FEATURE_HYPERVISOR))
++ return false;
++
+ for (i = 0; i < ARRAY_SIZE(spectre_bad_microcodes); i++) {
+ if (c->x86_model == spectre_bad_microcodes[i].model &&
+ c->x86_stepping == spectre_bad_microcodes[i].stepping)
+diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
+index 2fe482f6ecd8..7a16a0fd1cb1 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce.c
++++ b/arch/x86/kernel/cpu/mcheck/mce.c
+@@ -57,6 +57,9 @@
+
+ static DEFINE_MUTEX(mce_log_mutex);
+
++/* sysfs synchronization */
++static DEFINE_MUTEX(mce_sysfs_mutex);
++
+ #define CREATE_TRACE_POINTS
+ #include <trace/events/mce.h>
+
+@@ -131,6 +134,8 @@ void mce_setup(struct mce *m)
+
+ if (this_cpu_has(X86_FEATURE_INTEL_PPIN))
+ rdmsrl(MSR_PPIN, m->ppin);
++
++ m->microcode = boot_cpu_data.microcode;
+ }
+
+ DEFINE_PER_CPU(struct mce, injectm);
+@@ -263,7 +268,7 @@ static void __print_mce(struct mce *m)
+ */
+ pr_emerg(HW_ERR "PROCESSOR %u:%x TIME %llu SOCKET %u APIC %x microcode %x\n",
+ m->cpuvendor, m->cpuid, m->time, m->socketid, m->apicid,
+- cpu_data(m->extcpu).microcode);
++ m->microcode);
+ }
+
+ static void print_mce(struct mce *m)
+@@ -2078,6 +2083,7 @@ static ssize_t set_ignore_ce(struct device *s,
+ if (kstrtou64(buf, 0, &new) < 0)
+ return -EINVAL;
+
++ mutex_lock(&mce_sysfs_mutex);
+ if (mca_cfg.ignore_ce ^ !!new) {
+ if (new) {
+ /* disable ce features */
+@@ -2090,6 +2096,8 @@ static ssize_t set_ignore_ce(struct device *s,
+ on_each_cpu(mce_enable_ce, (void *)1, 1);
+ }
+ }
++ mutex_unlock(&mce_sysfs_mutex);
++
+ return size;
+ }
+
+@@ -2102,6 +2110,7 @@ static ssize_t set_cmci_disabled(struct device *s,
+ if (kstrtou64(buf, 0, &new) < 0)
+ return -EINVAL;
+
++ mutex_lock(&mce_sysfs_mutex);
+ if (mca_cfg.cmci_disabled ^ !!new) {
+ if (new) {
+ /* disable cmci */
+@@ -2113,6 +2122,8 @@ static ssize_t set_cmci_disabled(struct device *s,
+ on_each_cpu(mce_enable_ce, NULL, 1);
+ }
+ }
++ mutex_unlock(&mce_sysfs_mutex);
++
+ return size;
+ }
+
+@@ -2120,8 +2131,19 @@ static ssize_t store_int_with_restart(struct device *s,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+ {
+- ssize_t ret = device_store_int(s, attr, buf, size);
++ unsigned long old_check_interval = check_interval;
++ ssize_t ret = device_store_ulong(s, attr, buf, size);
++
++ if (check_interval == old_check_interval)
++ return ret;
++
++ if (check_interval < 1)
++ check_interval = 1;
++
++ mutex_lock(&mce_sysfs_mutex);
+ mce_restart();
++ mutex_unlock(&mce_sysfs_mutex);
++
+ return ret;
+ }
+
+diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
+index 04a625f0fcda..0f545b3cf926 100644
+--- a/arch/x86/kernel/head_64.S
++++ b/arch/x86/kernel/head_64.S
+@@ -23,6 +23,7 @@
+ #include <asm/nops.h>
+ #include "../entry/calling.h"
+ #include <asm/export.h>
++#include <asm/nospec-branch.h>
+
+ #ifdef CONFIG_PARAVIRT
+ #include <asm/asm-offsets.h>
+@@ -134,6 +135,7 @@ ENTRY(secondary_startup_64)
+
+ /* Ensure I am executing from virtual addresses */
+ movq $1f, %rax
++ ANNOTATE_RETPOLINE_SAFE
+ jmp *%rax
+ 1:
+ UNWIND_HINT_EMPTY
+diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
+index bd36f3c33cd0..0715f827607c 100644
+--- a/arch/x86/kernel/kprobes/core.c
++++ b/arch/x86/kernel/kprobes/core.c
+@@ -1168,10 +1168,18 @@ NOKPROBE_SYMBOL(longjmp_break_handler);
+
+ bool arch_within_kprobe_blacklist(unsigned long addr)
+ {
++ bool is_in_entry_trampoline_section = false;
++
++#ifdef CONFIG_X86_64
++ is_in_entry_trampoline_section =
++ (addr >= (unsigned long)__entry_trampoline_start &&
++ addr < (unsigned long)__entry_trampoline_end);
++#endif
+ return (addr >= (unsigned long)__kprobes_text_start &&
+ addr < (unsigned long)__kprobes_text_end) ||
+ (addr >= (unsigned long)__entry_text_start &&
+- addr < (unsigned long)__entry_text_end);
++ addr < (unsigned long)__entry_text_end) ||
++ is_in_entry_trampoline_section;
+ }
+
+ int __init arch_init_kprobes(void)
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 844279c3ff4a..d0829a6e1bf5 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -1282,11 +1282,10 @@ void __init native_smp_prepare_boot_cpu(void)
+ cpu_set_state_online(me);
+ }
+
+-void __init native_smp_cpus_done(unsigned int max_cpus)
++void __init calculate_max_logical_packages(void)
+ {
+ int ncpus;
+
+- pr_debug("Boot done\n");
+ /*
+ * Today neither Intel nor AMD support heterogenous systems so
+ * extrapolate the boot cpu's data to all packages.
+@@ -1294,6 +1293,13 @@ void __init native_smp_cpus_done(unsigned int max_cpus)
+ ncpus = cpu_data(0).booted_cores * topology_max_smt_threads();
+ __max_logical_packages = DIV_ROUND_UP(nr_cpu_ids, ncpus);
+ pr_info("Max logical packages: %u\n", __max_logical_packages);
++}
++
++void __init native_smp_cpus_done(unsigned int max_cpus)
++{
++ pr_debug("Boot done\n");
++
++ calculate_max_logical_packages();
+
+ if (x86_has_numa_in_package)
+ set_sched_topology(x86_numa_in_package_topology);
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index 9b138a06c1a4..b854ebf5851b 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -118,9 +118,11 @@ SECTIONS
+
+ #ifdef CONFIG_X86_64
+ . = ALIGN(PAGE_SIZE);
++ VMLINUX_SYMBOL(__entry_trampoline_start) = .;
+ _entry_trampoline = .;
+ *(.entry_trampoline)
+ . = ALIGN(PAGE_SIZE);
++ VMLINUX_SYMBOL(__entry_trampoline_end) = .;
+ ASSERT(. - _entry_trampoline == PAGE_SIZE, "entry trampoline is too big");
+ #endif
+
+diff --git a/arch/x86/lib/Makefile b/arch/x86/lib/Makefile
+index 69a473919260..f23934bbaf4e 100644
+--- a/arch/x86/lib/Makefile
++++ b/arch/x86/lib/Makefile
+@@ -27,7 +27,6 @@ lib-$(CONFIG_RWSEM_XCHGADD_ALGORITHM) += rwsem.o
+ lib-$(CONFIG_INSTRUCTION_DECODER) += insn.o inat.o insn-eval.o
+ lib-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
+ lib-$(CONFIG_RETPOLINE) += retpoline.o
+-OBJECT_FILES_NON_STANDARD_retpoline.o :=y
+
+ obj-y += msr.o msr-reg.o msr-reg-export.o hweight.o
+
+diff --git a/arch/x86/lib/retpoline.S b/arch/x86/lib/retpoline.S
+index 480edc3a5e03..c909961e678a 100644
+--- a/arch/x86/lib/retpoline.S
++++ b/arch/x86/lib/retpoline.S
+@@ -7,7 +7,6 @@
+ #include <asm/alternative-asm.h>
+ #include <asm/export.h>
+ #include <asm/nospec-branch.h>
+-#include <asm/bitsperlong.h>
+
+ .macro THUNK reg
+ .section .text.__x86.indirect_thunk
+@@ -47,58 +46,3 @@ GENERATE_THUNK(r13)
+ GENERATE_THUNK(r14)
+ GENERATE_THUNK(r15)
+ #endif
+-
+-/*
+- * Fill the CPU return stack buffer.
+- *
+- * Each entry in the RSB, if used for a speculative 'ret', contains an
+- * infinite 'pause; lfence; jmp' loop to capture speculative execution.
+- *
+- * This is required in various cases for retpoline and IBRS-based
+- * mitigations for the Spectre variant 2 vulnerability. Sometimes to
+- * eliminate potentially bogus entries from the RSB, and sometimes
+- * purely to ensure that it doesn't get empty, which on some CPUs would
+- * allow predictions from other (unwanted!) sources to be used.
+- *
+- * Google experimented with loop-unrolling and this turned out to be
+- * the optimal version - two calls, each with their own speculation
+- * trap should their return address end up getting used, in a loop.
+- */
+-.macro STUFF_RSB nr:req sp:req
+- mov $(\nr / 2), %_ASM_BX
+- .align 16
+-771:
+- call 772f
+-773: /* speculation trap */
+- pause
+- lfence
+- jmp 773b
+- .align 16
+-772:
+- call 774f
+-775: /* speculation trap */
+- pause
+- lfence
+- jmp 775b
+- .align 16
+-774:
+- dec %_ASM_BX
+- jnz 771b
+- add $((BITS_PER_LONG/8) * \nr), \sp
+-.endm
+-
+-#define RSB_FILL_LOOPS 16 /* To avoid underflow */
+-
+-ENTRY(__fill_rsb)
+- STUFF_RSB RSB_FILL_LOOPS, %_ASM_SP
+- ret
+-END(__fill_rsb)
+-EXPORT_SYMBOL_GPL(__fill_rsb)
+-
+-#define RSB_CLEAR_LOOPS 32 /* To forcibly overwrite all entries */
+-
+-ENTRY(__clear_rsb)
+- STUFF_RSB RSB_CLEAR_LOOPS, %_ASM_SP
+- ret
+-END(__clear_rsb)
+-EXPORT_SYMBOL_GPL(__clear_rsb)
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index 800de815519c..c88573d90f3e 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -1248,10 +1248,6 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
+ tsk = current;
+ mm = tsk->mm;
+
+- /*
+- * Detect and handle instructions that would cause a page fault for
+- * both a tracked kernel page and a userspace page.
+- */
+ prefetchw(&mm->mmap_sem);
+
+ if (unlikely(kmmio_fault(regs, address)))
+diff --git a/arch/x86/mm/mem_encrypt_boot.S b/arch/x86/mm/mem_encrypt_boot.S
+index 01f682cf77a8..40a6085063d6 100644
+--- a/arch/x86/mm/mem_encrypt_boot.S
++++ b/arch/x86/mm/mem_encrypt_boot.S
+@@ -15,6 +15,7 @@
+ #include <asm/page.h>
+ #include <asm/processor-flags.h>
+ #include <asm/msr-index.h>
++#include <asm/nospec-branch.h>
+
+ .text
+ .code64
+@@ -59,6 +60,7 @@ ENTRY(sme_encrypt_execute)
+ movq %rax, %r8 /* Workarea encryption routine */
+ addq $PAGE_SIZE, %r8 /* Workarea intermediate copy buffer */
+
++ ANNOTATE_RETPOLINE_SAFE
+ call *%rax /* Call the encryption routine */
+
+ pop %r12
+diff --git a/arch/x86/realmode/rm/trampoline_64.S b/arch/x86/realmode/rm/trampoline_64.S
+index de53bd15df5a..24bb7598774e 100644
+--- a/arch/x86/realmode/rm/trampoline_64.S
++++ b/arch/x86/realmode/rm/trampoline_64.S
+@@ -102,7 +102,7 @@ ENTRY(startup_32)
+ * don't we'll eventually crash trying to execute encrypted
+ * instructions.
+ */
+- bt $TH_FLAGS_SME_ACTIVE_BIT, pa_tr_flags
++ btl $TH_FLAGS_SME_ACTIVE_BIT, pa_tr_flags
+ jnc .Ldone
+ movl $MSR_K8_SYSCFG, %ecx
+ rdmsr
+diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
+index 77c959cf81e7..7a43b2ae19f1 100644
+--- a/arch/x86/xen/smp.c
++++ b/arch/x86/xen/smp.c
+@@ -122,6 +122,8 @@ void __init xen_smp_cpus_done(unsigned int max_cpus)
+
+ if (xen_hvm_domain())
+ native_smp_cpus_done(max_cpus);
++ else
++ calculate_max_logical_packages();
+
+ if (xen_have_vcpu_info_placement)
+ return;
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index d5fe720cf149..89d2ee00cced 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -266,7 +266,7 @@ static int lo_write_bvec(struct file *file, struct bio_vec *bvec, loff_t *ppos)
+ struct iov_iter i;
+ ssize_t bw;
+
+- iov_iter_bvec(&i, ITER_BVEC, bvec, 1, bvec->bv_len);
++ iov_iter_bvec(&i, ITER_BVEC | WRITE, bvec, 1, bvec->bv_len);
+
+ file_start_write(file);
+ bw = vfs_iter_write(file, &i, ppos, 0);
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 3cec403a80b3..5294442505cb 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -413,6 +413,9 @@ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
+ if (chip->dev.parent)
+ pm_runtime_get_sync(chip->dev.parent);
+
++ if (chip->ops->clk_enable != NULL)
++ chip->ops->clk_enable(chip, true);
++
+ /* Store the decision as chip->locality will be changed. */
+ need_locality = chip->locality == -1;
+
+@@ -489,6 +492,9 @@ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
+ chip->locality = -1;
+ }
+ out_no_locality:
++ if (chip->ops->clk_enable != NULL)
++ chip->ops->clk_enable(chip, false);
++
+ if (chip->dev.parent)
+ pm_runtime_put_sync(chip->dev.parent);
+
+diff --git a/drivers/char/tpm/tpm_tis.c b/drivers/char/tpm/tpm_tis.c
+index e2d1055fb814..f08949a5f678 100644
+--- a/drivers/char/tpm/tpm_tis.c
++++ b/drivers/char/tpm/tpm_tis.c
+@@ -133,93 +133,14 @@ static int check_acpi_tpm2(struct device *dev)
+ }
+ #endif
+
+-#ifdef CONFIG_X86
+-#define INTEL_LEGACY_BLK_BASE_ADDR 0xFED08000
+-#define ILB_REMAP_SIZE 0x100
+-#define LPC_CNTRL_REG_OFFSET 0x84
+-#define LPC_CLKRUN_EN (1 << 2)
+-
+-static void __iomem *ilb_base_addr;
+-
+-static inline bool is_bsw(void)
+-{
+- return ((boot_cpu_data.x86_model == INTEL_FAM6_ATOM_AIRMONT) ? 1 : 0);
+-}
+-
+-/**
+- * tpm_platform_begin_xfer() - clear LPC CLKRUN_EN i.e. clocks will be running
+- */
+-static void tpm_platform_begin_xfer(void)
+-{
+- u32 clkrun_val;
+-
+- if (!is_bsw())
+- return;
+-
+- clkrun_val = ioread32(ilb_base_addr + LPC_CNTRL_REG_OFFSET);
+-
+- /* Disable LPC CLKRUN# */
+- clkrun_val &= ~LPC_CLKRUN_EN;
+- iowrite32(clkrun_val, ilb_base_addr + LPC_CNTRL_REG_OFFSET);
+-
+- /*
+- * Write any random value on port 0x80 which is on LPC, to make
+- * sure LPC clock is running before sending any TPM command.
+- */
+- outb(0xCC, 0x80);
+-
+-}
+-
+-/**
+- * tpm_platform_end_xfer() - set LPC CLKRUN_EN i.e. clocks can be turned off
+- */
+-static void tpm_platform_end_xfer(void)
+-{
+- u32 clkrun_val;
+-
+- if (!is_bsw())
+- return;
+-
+- clkrun_val = ioread32(ilb_base_addr + LPC_CNTRL_REG_OFFSET);
+-
+- /* Enable LPC CLKRUN# */
+- clkrun_val |= LPC_CLKRUN_EN;
+- iowrite32(clkrun_val, ilb_base_addr + LPC_CNTRL_REG_OFFSET);
+-
+- /*
+- * Write any random value on port 0x80 which is on LPC, to make
+- * sure LPC clock is running before sending any TPM command.
+- */
+- outb(0xCC, 0x80);
+-
+-}
+-#else
+-static inline bool is_bsw(void)
+-{
+- return false;
+-}
+-
+-static void tpm_platform_begin_xfer(void)
+-{
+-}
+-
+-static void tpm_platform_end_xfer(void)
+-{
+-}
+-#endif
+-
+ static int tpm_tcg_read_bytes(struct tpm_tis_data *data, u32 addr, u16 len,
+ u8 *result)
+ {
+ struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
+
+- tpm_platform_begin_xfer();
+-
+ while (len--)
+ *result++ = ioread8(phy->iobase + addr);
+
+- tpm_platform_end_xfer();
+-
+ return 0;
+ }
+
+@@ -228,13 +149,9 @@ static int tpm_tcg_write_bytes(struct tpm_tis_data *data, u32 addr, u16 len,
+ {
+ struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
+
+- tpm_platform_begin_xfer();
+-
+ while (len--)
+ iowrite8(*value++, phy->iobase + addr);
+
+- tpm_platform_end_xfer();
+-
+ return 0;
+ }
+
+@@ -242,12 +159,8 @@ static int tpm_tcg_read16(struct tpm_tis_data *data, u32 addr, u16 *result)
+ {
+ struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
+
+- tpm_platform_begin_xfer();
+-
+ *result = ioread16(phy->iobase + addr);
+
+- tpm_platform_end_xfer();
+-
+ return 0;
+ }
+
+@@ -255,12 +168,8 @@ static int tpm_tcg_read32(struct tpm_tis_data *data, u32 addr, u32 *result)
+ {
+ struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
+
+- tpm_platform_begin_xfer();
+-
+ *result = ioread32(phy->iobase + addr);
+
+- tpm_platform_end_xfer();
+-
+ return 0;
+ }
+
+@@ -268,12 +177,8 @@ static int tpm_tcg_write32(struct tpm_tis_data *data, u32 addr, u32 value)
+ {
+ struct tpm_tis_tcg_phy *phy = to_tpm_tis_tcg_phy(data);
+
+- tpm_platform_begin_xfer();
+-
+ iowrite32(value, phy->iobase + addr);
+
+- tpm_platform_end_xfer();
+-
+ return 0;
+ }
+
+@@ -461,11 +366,6 @@ static int __init init_tis(void)
+ if (rc)
+ goto err_force;
+
+-#ifdef CONFIG_X86
+- if (is_bsw())
+- ilb_base_addr = ioremap(INTEL_LEGACY_BLK_BASE_ADDR,
+- ILB_REMAP_SIZE);
+-#endif
+ rc = platform_driver_register(&tis_drv);
+ if (rc)
+ goto err_platform;
+@@ -484,10 +384,6 @@ static int __init init_tis(void)
+ err_platform:
+ if (force_pdev)
+ platform_device_unregister(force_pdev);
+-#ifdef CONFIG_X86
+- if (is_bsw())
+- iounmap(ilb_base_addr);
+-#endif
+ err_force:
+ return rc;
+ }
+@@ -497,10 +393,6 @@ static void __exit cleanup_tis(void)
+ pnp_unregister_driver(&tis_pnp_driver);
+ platform_driver_unregister(&tis_drv);
+
+-#ifdef CONFIG_X86
+- if (is_bsw())
+- iounmap(ilb_base_addr);
+-#endif
+ if (force_pdev)
+ platform_device_unregister(force_pdev);
+ }
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 7561922bc8f8..08ae49dee8b1 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -31,6 +31,8 @@
+ #include "tpm.h"
+ #include "tpm_tis_core.h"
+
++static void tpm_tis_clkrun_enable(struct tpm_chip *chip, bool value);
++
+ /* Before we attempt to access the TPM we must see that the valid bit is set.
+ * The specification says that this bit is 0 at reset and remains 0 until the
+ * 'TPM has gone through its self test and initialization and has established
+@@ -422,19 +424,28 @@ static bool tpm_tis_update_timeouts(struct tpm_chip *chip,
+ int i, rc;
+ u32 did_vid;
+
++ if (chip->ops->clk_enable != NULL)
++ chip->ops->clk_enable(chip, true);
++
+ rc = tpm_tis_read32(priv, TPM_DID_VID(0), &did_vid);
+ if (rc < 0)
+- return rc;
++ goto out;
+
+ for (i = 0; i != ARRAY_SIZE(vendor_timeout_overrides); i++) {
+ if (vendor_timeout_overrides[i].did_vid != did_vid)
+ continue;
+ memcpy(timeout_cap, vendor_timeout_overrides[i].timeout_us,
+ sizeof(vendor_timeout_overrides[i].timeout_us));
+- return true;
++ rc = true;
+ }
+
+- return false;
++ rc = false;
++
++out:
++ if (chip->ops->clk_enable != NULL)
++ chip->ops->clk_enable(chip, false);
++
++ return rc;
+ }
+
+ /*
+@@ -654,14 +665,73 @@ void tpm_tis_remove(struct tpm_chip *chip)
+ u32 interrupt;
+ int rc;
+
++ tpm_tis_clkrun_enable(chip, true);
++
+ rc = tpm_tis_read32(priv, reg, &interrupt);
+ if (rc < 0)
+ interrupt = 0;
+
+ tpm_tis_write32(priv, reg, ~TPM_GLOBAL_INT_ENABLE & interrupt);
++
++ tpm_tis_clkrun_enable(chip, false);
++
++ if (priv->ilb_base_addr)
++ iounmap(priv->ilb_base_addr);
+ }
+ EXPORT_SYMBOL_GPL(tpm_tis_remove);
+
++/**
++ * tpm_tis_clkrun_enable() - Keep clkrun protocol disabled for entire duration
++ * of a single TPM command
++ * @chip: TPM chip to use
++ * @value: 1 - Disable CLKRUN protocol, so that clocks are free running
++ * 0 - Enable CLKRUN protocol
++ * Call this function directly in tpm_tis_remove() in error or driver removal
++ * path, since the chip->ops is set to NULL in tpm_chip_unregister().
++ */
++static void tpm_tis_clkrun_enable(struct tpm_chip *chip, bool value)
++{
++ struct tpm_tis_data *data = dev_get_drvdata(&chip->dev);
++ u32 clkrun_val;
++
++ if (!IS_ENABLED(CONFIG_X86) || !is_bsw() ||
++ !data->ilb_base_addr)
++ return;
++
++ if (value) {
++ data->clkrun_enabled++;
++ if (data->clkrun_enabled > 1)
++ return;
++ clkrun_val = ioread32(data->ilb_base_addr + LPC_CNTRL_OFFSET);
++
++ /* Disable LPC CLKRUN# */
++ clkrun_val &= ~LPC_CLKRUN_EN;
++ iowrite32(clkrun_val, data->ilb_base_addr + LPC_CNTRL_OFFSET);
++
++ /*
++ * Write any random value on port 0x80 which is on LPC, to make
++ * sure LPC clock is running before sending any TPM command.
++ */
++ outb(0xCC, 0x80);
++ } else {
++ data->clkrun_enabled--;
++ if (data->clkrun_enabled)
++ return;
++
++ clkrun_val = ioread32(data->ilb_base_addr + LPC_CNTRL_OFFSET);
++
++ /* Enable LPC CLKRUN# */
++ clkrun_val |= LPC_CLKRUN_EN;
++ iowrite32(clkrun_val, data->ilb_base_addr + LPC_CNTRL_OFFSET);
++
++ /*
++ * Write any random value on port 0x80 which is on LPC, to make
++ * sure LPC clock is running before sending any TPM command.
++ */
++ outb(0xCC, 0x80);
++ }
++}
++
+ static const struct tpm_class_ops tpm_tis = {
+ .flags = TPM_OPS_AUTO_STARTUP,
+ .status = tpm_tis_status,
+@@ -674,6 +744,7 @@ static const struct tpm_class_ops tpm_tis = {
+ .req_canceled = tpm_tis_req_canceled,
+ .request_locality = request_locality,
+ .relinquish_locality = release_locality,
++ .clk_enable = tpm_tis_clkrun_enable,
+ };
+
+ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+@@ -681,6 +752,7 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ acpi_handle acpi_dev_handle)
+ {
+ u32 vendor, intfcaps, intmask;
++ u32 clkrun_val;
+ u8 rid;
+ int rc, probe;
+ struct tpm_chip *chip;
+@@ -701,6 +773,23 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ priv->phy_ops = phy_ops;
+ dev_set_drvdata(&chip->dev, priv);
+
++ if (is_bsw()) {
++ priv->ilb_base_addr = ioremap(INTEL_LEGACY_BLK_BASE_ADDR,
++ ILB_REMAP_SIZE);
++ if (!priv->ilb_base_addr)
++ return -ENOMEM;
++
++ clkrun_val = ioread32(priv->ilb_base_addr + LPC_CNTRL_OFFSET);
++ /* Check if CLKRUN# is already not enabled in the LPC bus */
++ if (!(clkrun_val & LPC_CLKRUN_EN)) {
++ iounmap(priv->ilb_base_addr);
++ priv->ilb_base_addr = NULL;
++ }
++ }
++
++ if (chip->ops->clk_enable != NULL)
++ chip->ops->clk_enable(chip, true);
++
+ if (wait_startup(chip, 0) != 0) {
+ rc = -ENODEV;
+ goto out_err;
+@@ -791,9 +880,20 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ }
+ }
+
+- return tpm_chip_register(chip);
++ rc = tpm_chip_register(chip);
++ if (rc)
++ goto out_err;
++
++ if (chip->ops->clk_enable != NULL)
++ chip->ops->clk_enable(chip, false);
++
++ return 0;
+ out_err:
++ if ((chip->ops != NULL) && (chip->ops->clk_enable != NULL))
++ chip->ops->clk_enable(chip, false);
++
+ tpm_tis_remove(chip);
++
+ return rc;
+ }
+ EXPORT_SYMBOL_GPL(tpm_tis_core_init);
+@@ -805,22 +905,31 @@ static void tpm_tis_reenable_interrupts(struct tpm_chip *chip)
+ u32 intmask;
+ int rc;
+
++ if (chip->ops->clk_enable != NULL)
++ chip->ops->clk_enable(chip, true);
++
+ /* reenable interrupts that device may have lost or
+ * BIOS/firmware may have disabled
+ */
+ rc = tpm_tis_write8(priv, TPM_INT_VECTOR(priv->locality), priv->irq);
+ if (rc < 0)
+- return;
++ goto out;
+
+ rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
+ if (rc < 0)
+- return;
++ goto out;
+
+ intmask |= TPM_INTF_CMD_READY_INT
+ | TPM_INTF_LOCALITY_CHANGE_INT | TPM_INTF_DATA_AVAIL_INT
+ | TPM_INTF_STS_VALID_INT | TPM_GLOBAL_INT_ENABLE;
+
+ tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
++
++out:
++ if (chip->ops->clk_enable != NULL)
++ chip->ops->clk_enable(chip, false);
++
++ return;
+ }
+
+ int tpm_tis_resume(struct device *dev)
+diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
+index 6bbac319ff3b..d5c6a2e952b3 100644
+--- a/drivers/char/tpm/tpm_tis_core.h
++++ b/drivers/char/tpm/tpm_tis_core.h
+@@ -79,6 +79,11 @@ enum tis_defaults {
+ #define TPM_DID_VID(l) (0x0F00 | ((l) << 12))
+ #define TPM_RID(l) (0x0F04 | ((l) << 12))
+
++#define LPC_CNTRL_OFFSET 0x84
++#define LPC_CLKRUN_EN (1 << 2)
++#define INTEL_LEGACY_BLK_BASE_ADDR 0xFED08000
++#define ILB_REMAP_SIZE 0x100
++
+ enum tpm_tis_flags {
+ TPM_TIS_ITPM_WORKAROUND = BIT(0),
+ };
+@@ -89,6 +94,8 @@ struct tpm_tis_data {
+ int irq;
+ bool irq_tested;
+ unsigned int flags;
++ void __iomem *ilb_base_addr;
++ u16 clkrun_enabled;
+ wait_queue_head_t int_queue;
+ wait_queue_head_t read_queue;
+ const struct tpm_tis_phy_ops *phy_ops;
+@@ -144,6 +151,15 @@ static inline int tpm_tis_write32(struct tpm_tis_data *data, u32 addr,
+ return data->phy_ops->write32(data, addr, value);
+ }
+
++static inline bool is_bsw(void)
++{
++#ifdef CONFIG_X86
++ return ((boot_cpu_data.x86_model == INTEL_FAM6_ATOM_AIRMONT) ? 1 : 0);
++#else
++ return false;
++#endif
++}
++
+ void tpm_tis_remove(struct tpm_chip *chip);
+ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ const struct tpm_tis_phy_ops *phy_ops,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index 57afad79f55d..8fa850a070e0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -540,6 +540,9 @@ int amdgpu_acpi_pcie_performance_request(struct amdgpu_device *adev,
+ size_t size;
+ u32 retry = 3;
+
++ if (amdgpu_acpi_pcie_notify_device_ready(adev))
++ return -EINVAL;
++
+ /* Get the device handle */
+ handle = ACPI_HANDLE(&adev->pdev->dev);
+ if (!handle)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+index df9cbc78e168..21e7ae159dff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+@@ -737,9 +737,11 @@ amdgpu_connector_lvds_detect(struct drm_connector *connector, bool force)
+ enum drm_connector_status ret = connector_status_disconnected;
+ int r;
+
+- r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
+- return connector_status_disconnected;
++ if (!drm_kms_helper_is_poll_worker()) {
++ r = pm_runtime_get_sync(connector->dev->dev);
++ if (r < 0)
++ return connector_status_disconnected;
++ }
+
+ if (encoder) {
+ struct amdgpu_encoder *amdgpu_encoder = to_amdgpu_encoder(encoder);
+@@ -758,8 +760,12 @@ amdgpu_connector_lvds_detect(struct drm_connector *connector, bool force)
+ /* check acpi lid status ??? */
+
+ amdgpu_connector_update_scratch_regs(connector, ret);
+- pm_runtime_mark_last_busy(connector->dev->dev);
+- pm_runtime_put_autosuspend(connector->dev->dev);
++
++ if (!drm_kms_helper_is_poll_worker()) {
++ pm_runtime_mark_last_busy(connector->dev->dev);
++ pm_runtime_put_autosuspend(connector->dev->dev);
++ }
++
+ return ret;
+ }
+
+@@ -869,9 +875,11 @@ amdgpu_connector_vga_detect(struct drm_connector *connector, bool force)
+ enum drm_connector_status ret = connector_status_disconnected;
+ int r;
+
+- r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
+- return connector_status_disconnected;
++ if (!drm_kms_helper_is_poll_worker()) {
++ r = pm_runtime_get_sync(connector->dev->dev);
++ if (r < 0)
++ return connector_status_disconnected;
++ }
+
+ encoder = amdgpu_connector_best_single_encoder(connector);
+ if (!encoder)
+@@ -925,8 +933,10 @@ amdgpu_connector_vga_detect(struct drm_connector *connector, bool force)
+ amdgpu_connector_update_scratch_regs(connector, ret);
+
+ out:
+- pm_runtime_mark_last_busy(connector->dev->dev);
+- pm_runtime_put_autosuspend(connector->dev->dev);
++ if (!drm_kms_helper_is_poll_worker()) {
++ pm_runtime_mark_last_busy(connector->dev->dev);
++ pm_runtime_put_autosuspend(connector->dev->dev);
++ }
+
+ return ret;
+ }
+@@ -989,9 +999,11 @@ amdgpu_connector_dvi_detect(struct drm_connector *connector, bool force)
+ enum drm_connector_status ret = connector_status_disconnected;
+ bool dret = false, broken_edid = false;
+
+- r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
+- return connector_status_disconnected;
++ if (!drm_kms_helper_is_poll_worker()) {
++ r = pm_runtime_get_sync(connector->dev->dev);
++ if (r < 0)
++ return connector_status_disconnected;
++ }
+
+ if (!force && amdgpu_connector_check_hpd_status_unchanged(connector)) {
+ ret = connector->status;
+@@ -1116,8 +1128,10 @@ amdgpu_connector_dvi_detect(struct drm_connector *connector, bool force)
+ amdgpu_connector_update_scratch_regs(connector, ret);
+
+ exit:
+- pm_runtime_mark_last_busy(connector->dev->dev);
+- pm_runtime_put_autosuspend(connector->dev->dev);
++ if (!drm_kms_helper_is_poll_worker()) {
++ pm_runtime_mark_last_busy(connector->dev->dev);
++ pm_runtime_put_autosuspend(connector->dev->dev);
++ }
+
+ return ret;
+ }
+@@ -1360,9 +1374,11 @@ amdgpu_connector_dp_detect(struct drm_connector *connector, bool force)
+ struct drm_encoder *encoder = amdgpu_connector_best_single_encoder(connector);
+ int r;
+
+- r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
+- return connector_status_disconnected;
++ if (!drm_kms_helper_is_poll_worker()) {
++ r = pm_runtime_get_sync(connector->dev->dev);
++ if (r < 0)
++ return connector_status_disconnected;
++ }
+
+ if (!force && amdgpu_connector_check_hpd_status_unchanged(connector)) {
+ ret = connector->status;
+@@ -1430,8 +1446,10 @@ amdgpu_connector_dp_detect(struct drm_connector *connector, bool force)
+
+ amdgpu_connector_update_scratch_regs(connector, ret);
+ out:
+- pm_runtime_mark_last_busy(connector->dev->dev);
+- pm_runtime_put_autosuspend(connector->dev->dev);
++ if (!drm_kms_helper_is_poll_worker()) {
++ pm_runtime_mark_last_busy(connector->dev->dev);
++ pm_runtime_put_autosuspend(connector->dev->dev);
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+index e8bd50cf9785..9df2a8c7d35d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+@@ -297,12 +297,15 @@ int amdgpu_uvd_suspend(struct amdgpu_device *adev)
+ if (adev->uvd.vcpu_bo == NULL)
+ return 0;
+
+- for (i = 0; i < adev->uvd.max_handles; ++i)
+- if (atomic_read(&adev->uvd.handles[i]))
+- break;
++ /* only valid for physical mode */
++ if (adev->asic_type < CHIP_POLARIS10) {
++ for (i = 0; i < adev->uvd.max_handles; ++i)
++ if (atomic_read(&adev->uvd.handles[i]))
++ break;
+
+- if (i == AMDGPU_MAX_UVD_HANDLES)
+- return 0;
++ if (i == adev->uvd.max_handles)
++ return 0;
++ }
+
+ cancel_delayed_work_sync(&adev->uvd.idle_work);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
+index 419ba0ce7ee5..356ca560c80e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
+@@ -4403,34 +4403,8 @@ static void gfx_v7_0_gpu_early_init(struct amdgpu_device *adev)
+ case CHIP_KAVERI:
+ adev->gfx.config.max_shader_engines = 1;
+ adev->gfx.config.max_tile_pipes = 4;
+- if ((adev->pdev->device == 0x1304) ||
+- (adev->pdev->device == 0x1305) ||
+- (adev->pdev->device == 0x130C) ||
+- (adev->pdev->device == 0x130F) ||
+- (adev->pdev->device == 0x1310) ||
+- (adev->pdev->device == 0x1311) ||
+- (adev->pdev->device == 0x131C)) {
+- adev->gfx.config.max_cu_per_sh = 8;
+- adev->gfx.config.max_backends_per_se = 2;
+- } else if ((adev->pdev->device == 0x1309) ||
+- (adev->pdev->device == 0x130A) ||
+- (adev->pdev->device == 0x130D) ||
+- (adev->pdev->device == 0x1313) ||
+- (adev->pdev->device == 0x131D)) {
+- adev->gfx.config.max_cu_per_sh = 6;
+- adev->gfx.config.max_backends_per_se = 2;
+- } else if ((adev->pdev->device == 0x1306) ||
+- (adev->pdev->device == 0x1307) ||
+- (adev->pdev->device == 0x130B) ||
+- (adev->pdev->device == 0x130E) ||
+- (adev->pdev->device == 0x1315) ||
+- (adev->pdev->device == 0x131B)) {
+- adev->gfx.config.max_cu_per_sh = 4;
+- adev->gfx.config.max_backends_per_se = 1;
+- } else {
+- adev->gfx.config.max_cu_per_sh = 3;
+- adev->gfx.config.max_backends_per_se = 1;
+- }
++ adev->gfx.config.max_cu_per_sh = 8;
++ adev->gfx.config.max_backends_per_se = 2;
+ adev->gfx.config.max_sh_per_se = 1;
+ adev->gfx.config.max_texture_channel_caches = 4;
+ adev->gfx.config.max_gprs = 256;
+diff --git a/drivers/gpu/drm/amd/amdgpu/si.c b/drivers/gpu/drm/amd/amdgpu/si.c
+index 8284d5dbfc30..4c178feeb4bd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/si.c
++++ b/drivers/gpu/drm/amd/amdgpu/si.c
+@@ -31,6 +31,7 @@
+ #include "amdgpu_uvd.h"
+ #include "amdgpu_vce.h"
+ #include "atom.h"
++#include "amd_pcie.h"
+ #include "amdgpu_powerplay.h"
+ #include "sid.h"
+ #include "si_ih.h"
+@@ -1461,8 +1462,8 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev)
+ {
+ struct pci_dev *root = adev->pdev->bus->self;
+ int bridge_pos, gpu_pos;
+- u32 speed_cntl, mask, current_data_rate;
+- int ret, i;
++ u32 speed_cntl, current_data_rate;
++ int i;
+ u16 tmp16;
+
+ if (pci_is_root_bus(adev->pdev->bus))
+@@ -1474,23 +1475,20 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev)
+ if (adev->flags & AMD_IS_APU)
+ return;
+
+- ret = drm_pcie_get_speed_cap_mask(adev->ddev, &mask);
+- if (ret != 0)
+- return;
+-
+- if (!(mask & (DRM_PCIE_SPEED_50 | DRM_PCIE_SPEED_80)))
++ if (!(adev->pm.pcie_gen_mask & (CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2 |
++ CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)))
+ return;
+
+ speed_cntl = RREG32_PCIE_PORT(PCIE_LC_SPEED_CNTL);
+ current_data_rate = (speed_cntl & LC_CURRENT_DATA_RATE_MASK) >>
+ LC_CURRENT_DATA_RATE_SHIFT;
+- if (mask & DRM_PCIE_SPEED_80) {
++ if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) {
+ if (current_data_rate == 2) {
+ DRM_INFO("PCIE gen 3 link speeds already enabled\n");
+ return;
+ }
+ DRM_INFO("enabling PCIE gen 3 link speeds, disable with amdgpu.pcie_gen2=0\n");
+- } else if (mask & DRM_PCIE_SPEED_50) {
++ } else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2) {
+ if (current_data_rate == 1) {
+ DRM_INFO("PCIE gen 2 link speeds already enabled\n");
+ return;
+@@ -1506,7 +1504,7 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev)
+ if (!gpu_pos)
+ return;
+
+- if (mask & DRM_PCIE_SPEED_80) {
++ if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3) {
+ if (current_data_rate != 2) {
+ u16 bridge_cfg, gpu_cfg;
+ u16 bridge_cfg2, gpu_cfg2;
+@@ -1589,9 +1587,9 @@ static void si_pcie_gen3_enable(struct amdgpu_device *adev)
+
+ pci_read_config_word(adev->pdev, gpu_pos + PCI_EXP_LNKCTL2, &tmp16);
+ tmp16 &= ~0xf;
+- if (mask & DRM_PCIE_SPEED_80)
++ if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
+ tmp16 |= 3;
+- else if (mask & DRM_PCIE_SPEED_50)
++ else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN2)
+ tmp16 |= 2;
+ else
+ tmp16 |= 1;
+diff --git a/drivers/gpu/drm/amd/amdgpu/si_dpm.c b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+index 3af322adae76..ea80b7ca5c37 100644
+--- a/drivers/gpu/drm/amd/amdgpu/si_dpm.c
++++ b/drivers/gpu/drm/amd/amdgpu/si_dpm.c
+@@ -26,6 +26,7 @@
+ #include "amdgpu_pm.h"
+ #include "amdgpu_dpm.h"
+ #include "amdgpu_atombios.h"
++#include "amd_pcie.h"
+ #include "sid.h"
+ #include "r600_dpm.h"
+ #include "si_dpm.h"
+@@ -3331,29 +3332,6 @@ static void btc_apply_voltage_delta_rules(struct amdgpu_device *adev,
+ }
+ }
+
+-static enum amdgpu_pcie_gen r600_get_pcie_gen_support(struct amdgpu_device *adev,
+- u32 sys_mask,
+- enum amdgpu_pcie_gen asic_gen,
+- enum amdgpu_pcie_gen default_gen)
+-{
+- switch (asic_gen) {
+- case AMDGPU_PCIE_GEN1:
+- return AMDGPU_PCIE_GEN1;
+- case AMDGPU_PCIE_GEN2:
+- return AMDGPU_PCIE_GEN2;
+- case AMDGPU_PCIE_GEN3:
+- return AMDGPU_PCIE_GEN3;
+- default:
+- if ((sys_mask & DRM_PCIE_SPEED_80) && (default_gen == AMDGPU_PCIE_GEN3))
+- return AMDGPU_PCIE_GEN3;
+- else if ((sys_mask & DRM_PCIE_SPEED_50) && (default_gen == AMDGPU_PCIE_GEN2))
+- return AMDGPU_PCIE_GEN2;
+- else
+- return AMDGPU_PCIE_GEN1;
+- }
+- return AMDGPU_PCIE_GEN1;
+-}
+-
+ static void r600_calculate_u_and_p(u32 i, u32 r_c, u32 p_b,
+ u32 *p, u32 *u)
+ {
+@@ -5028,10 +5006,11 @@ static int si_populate_smc_acpi_state(struct amdgpu_device *adev,
+ table->ACPIState.levels[0].vddc.index,
+ &table->ACPIState.levels[0].std_vddc);
+ }
+- table->ACPIState.levels[0].gen2PCIE = (u8)r600_get_pcie_gen_support(adev,
+- si_pi->sys_pcie_mask,
+- si_pi->boot_pcie_gen,
+- AMDGPU_PCIE_GEN1);
++ table->ACPIState.levels[0].gen2PCIE =
++ (u8)amdgpu_get_pcie_gen_support(adev,
++ si_pi->sys_pcie_mask,
++ si_pi->boot_pcie_gen,
++ AMDGPU_PCIE_GEN1);
+
+ if (si_pi->vddc_phase_shed_control)
+ si_populate_phase_shedding_value(adev,
+@@ -7172,10 +7151,10 @@ static void si_parse_pplib_clock_info(struct amdgpu_device *adev,
+ pl->vddc = le16_to_cpu(clock_info->si.usVDDC);
+ pl->vddci = le16_to_cpu(clock_info->si.usVDDCI);
+ pl->flags = le32_to_cpu(clock_info->si.ulFlags);
+- pl->pcie_gen = r600_get_pcie_gen_support(adev,
+- si_pi->sys_pcie_mask,
+- si_pi->boot_pcie_gen,
+- clock_info->si.ucPCIEGen);
++ pl->pcie_gen = amdgpu_get_pcie_gen_support(adev,
++ si_pi->sys_pcie_mask,
++ si_pi->boot_pcie_gen,
++ clock_info->si.ucPCIEGen);
+
+ /* patch up vddc if necessary */
+ ret = si_get_leakage_voltage_from_leakage_index(adev, pl->vddc,
+@@ -7330,7 +7309,6 @@ static int si_dpm_init(struct amdgpu_device *adev)
+ struct si_power_info *si_pi;
+ struct atom_clock_dividers dividers;
+ int ret;
+- u32 mask;
+
+ si_pi = kzalloc(sizeof(struct si_power_info), GFP_KERNEL);
+ if (si_pi == NULL)
+@@ -7340,11 +7318,9 @@ static int si_dpm_init(struct amdgpu_device *adev)
+ eg_pi = &ni_pi->eg;
+ pi = &eg_pi->rv7xx;
+
+- ret = drm_pcie_get_speed_cap_mask(adev->ddev, &mask);
+- if (ret)
+- si_pi->sys_pcie_mask = 0;
+- else
+- si_pi->sys_pcie_mask = mask;
++ si_pi->sys_pcie_mask =
++ (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_MASK) >>
++ CAIL_PCIE_LINK_SPEED_SUPPORT_SHIFT;
+ si_pi->force_pcie_gen = AMDGPU_PCIE_GEN_INVALID;
+ si_pi->boot_pcie_gen = si_get_current_pcie_speed(adev);
+
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+index e230cc44a0a7..bd6cab5a9f43 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+@@ -200,7 +200,8 @@ bool dc_stream_set_cursor_attributes(
+ for (i = 0; i < MAX_PIPES; i++) {
+ struct pipe_ctx *pipe_ctx = &res_ctx->pipe_ctx[i];
+
+- if (pipe_ctx->stream != stream || (!pipe_ctx->plane_res.xfm && !pipe_ctx->plane_res.dpp))
++ if (pipe_ctx->stream != stream || (!pipe_ctx->plane_res.xfm &&
++ !pipe_ctx->plane_res.dpp) || !pipe_ctx->plane_res.ipp)
+ continue;
+ if (pipe_ctx->top_pipe && pipe_ctx->plane_state != pipe_ctx->top_pipe->plane_state)
+ continue;
+@@ -276,7 +277,8 @@ bool dc_stream_set_cursor_position(
+ if (pipe_ctx->stream != stream ||
+ (!pipe_ctx->plane_res.mi && !pipe_ctx->plane_res.hubp) ||
+ !pipe_ctx->plane_state ||
+- (!pipe_ctx->plane_res.xfm && !pipe_ctx->plane_res.dpp))
++ (!pipe_ctx->plane_res.xfm && !pipe_ctx->plane_res.dpp) ||
++ !pipe_ctx->plane_res.ipp)
+ continue;
+
+ if (pipe_ctx->plane_state->address.type
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
+index fe88852b4774..00c728260616 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dce_link_encoder.c
+@@ -683,6 +683,7 @@ void dce110_link_encoder_construct(
+ {
+ struct bp_encoder_cap_info bp_cap_info = {0};
+ const struct dc_vbios_funcs *bp_funcs = init_data->ctx->dc_bios->funcs;
++ enum bp_result result = BP_RESULT_OK;
+
+ enc110->base.funcs = &dce110_lnk_enc_funcs;
+ enc110->base.ctx = init_data->ctx;
+@@ -757,15 +758,24 @@ void dce110_link_encoder_construct(
+ enc110->base.preferred_engine = ENGINE_ID_UNKNOWN;
+ }
+
++ /* default to one to mirror Windows behavior */
++ enc110->base.features.flags.bits.HDMI_6GB_EN = 1;
++
++ result = bp_funcs->get_encoder_cap_info(enc110->base.ctx->dc_bios,
++ enc110->base.id, &bp_cap_info);
++
+ /* Override features with DCE-specific values */
+- if (BP_RESULT_OK == bp_funcs->get_encoder_cap_info(
+- enc110->base.ctx->dc_bios, enc110->base.id,
+- &bp_cap_info)) {
++ if (BP_RESULT_OK == result) {
+ enc110->base.features.flags.bits.IS_HBR2_CAPABLE =
+ bp_cap_info.DP_HBR2_EN;
+ enc110->base.features.flags.bits.IS_HBR3_CAPABLE =
+ bp_cap_info.DP_HBR3_EN;
+ enc110->base.features.flags.bits.HDMI_6GB_EN = bp_cap_info.HDMI_6GB_EN;
++ } else {
++ dm_logger_write(enc110->base.ctx->logger, LOG_WARNING,
++ "%s: Failed to get encoder_cap_info from VBIOS with error code %d!\n",
++ __func__,
++ result);
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index e33ec7fc5d09..6688cdb216e9 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -2791,10 +2791,13 @@ static int smu7_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+ PHM_PlatformCaps_DisableMclkSwitchingForFrameLock);
+
+
+- disable_mclk_switching = ((1 < info.display_count) ||
+- disable_mclk_switching_for_frame_lock ||
+- smu7_vblank_too_short(hwmgr, mode_info.vblank_time_us) ||
+- (mode_info.refresh_rate > 120));
++ if (info.display_count == 0)
++ disable_mclk_switching = false;
++ else
++ disable_mclk_switching = ((1 < info.display_count) ||
++ disable_mclk_switching_for_frame_lock ||
++ smu7_vblank_too_short(hwmgr, mode_info.vblank_time_us) ||
++ (mode_info.refresh_rate > 120));
+
+ sclk = smu7_ps->performance_levels[0].engine_clock;
+ mclk = smu7_ps->performance_levels[0].memory_clock;
+@@ -4569,13 +4572,6 @@ static int smu7_set_power_profile_state(struct pp_hwmgr *hwmgr,
+ int tmp_result, result = 0;
+ uint32_t sclk_mask = 0, mclk_mask = 0;
+
+- if (hwmgr->chip_id == CHIP_FIJI) {
+- if (request->type == AMD_PP_GFX_PROFILE)
+- smu7_enable_power_containment(hwmgr);
+- else if (request->type == AMD_PP_COMPUTE_PROFILE)
+- smu7_disable_power_containment(hwmgr);
+- }
+-
+ if (hwmgr->dpm_level != AMD_DPM_FORCED_LEVEL_AUTO)
+ return -EINVAL;
+
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
+index f8d838c2c8ee..9acbefb33bd6 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
+@@ -3208,10 +3208,13 @@ static int vega10_apply_state_adjust_rules(struct pp_hwmgr *hwmgr,
+ disable_mclk_switching_for_vr = PP_CAP(PHM_PlatformCaps_DisableMclkSwitchForVR);
+ force_mclk_high = PP_CAP(PHM_PlatformCaps_ForceMclkHigh);
+
+- disable_mclk_switching = (info.display_count > 1) ||
+- disable_mclk_switching_for_frame_lock ||
+- disable_mclk_switching_for_vr ||
+- force_mclk_high;
++ if (info.display_count == 0)
++ disable_mclk_switching = false;
++ else
++ disable_mclk_switching = (info.display_count > 1) ||
++ disable_mclk_switching_for_frame_lock ||
++ disable_mclk_switching_for_vr ||
++ force_mclk_high;
+
+ sclk = vega10_ps->performance_levels[0].gfx_clock;
+ mclk = vega10_ps->performance_levels[0].mem_clock;
+diff --git a/drivers/gpu/drm/drm_framebuffer.c b/drivers/gpu/drm/drm_framebuffer.c
+index 279c1035c12d..5e1f1e2deb52 100644
+--- a/drivers/gpu/drm/drm_framebuffer.c
++++ b/drivers/gpu/drm/drm_framebuffer.c
+@@ -118,6 +118,10 @@ int drm_mode_addfb(struct drm_device *dev,
+ r.pixel_format = drm_mode_legacy_fb_format(or->bpp, or->depth);
+ r.handles[0] = or->handle;
+
++ if (r.pixel_format == DRM_FORMAT_XRGB2101010 &&
++ dev->driver->driver_features & DRIVER_PREFER_XBGR_30BPP)
++ r.pixel_format = DRM_FORMAT_XBGR2101010;
++
+ ret = drm_mode_addfb2(dev, &r, file_priv);
+ if (ret)
+ return ret;
+diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
+index 6dc2dde5b672..7a6b2dc08913 100644
+--- a/drivers/gpu/drm/drm_probe_helper.c
++++ b/drivers/gpu/drm/drm_probe_helper.c
+@@ -654,6 +654,26 @@ static void output_poll_execute(struct work_struct *work)
+ schedule_delayed_work(delayed_work, DRM_OUTPUT_POLL_PERIOD);
+ }
+
++/**
++ * drm_kms_helper_is_poll_worker - is %current task an output poll worker?
++ *
++ * Determine if %current task is an output poll worker. This can be used
++ * to select distinct code paths for output polling versus other contexts.
++ *
++ * One use case is to avoid a deadlock between the output poll worker and
++ * the autosuspend worker wherein the latter waits for polling to finish
++ * upon calling drm_kms_helper_poll_disable(), while the former waits for
++ * runtime suspend to finish upon calling pm_runtime_get_sync() in a
++ * connector ->detect hook.
++ */
++bool drm_kms_helper_is_poll_worker(void)
++{
++ struct work_struct *work = current_work();
++
++ return work && work->func == output_poll_execute;
++}
++EXPORT_SYMBOL(drm_kms_helper_is_poll_worker);
++
+ /**
+ * drm_kms_helper_poll_disable - disable output polling
+ * @dev: drm_device
+diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
+index 2cf10d17acfb..62004ea403c6 100644
+--- a/drivers/gpu/drm/i915/i915_drv.c
++++ b/drivers/gpu/drm/i915/i915_drv.c
+@@ -1827,6 +1827,8 @@ static int i915_drm_resume_early(struct drm_device *dev)
+ if (IS_GEN9_LP(dev_priv) ||
+ !(dev_priv->suspended_to_idle && dev_priv->csr.dmc_payload))
+ intel_power_domains_init_hw(dev_priv, true);
++ else
++ intel_display_set_init_power(dev_priv, true);
+
+ i915_gem_sanitize(dev_priv);
+
+diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+index 435ed95df144..3d0ae387691f 100644
+--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+@@ -505,6 +505,8 @@ eb_add_vma(struct i915_execbuffer *eb, unsigned int i, struct i915_vma *vma)
+ list_add_tail(&vma->exec_link, &eb->unbound);
+ if (drm_mm_node_allocated(&vma->node))
+ err = i915_vma_unbind(vma);
++ if (unlikely(err))
++ vma->exec_flags = NULL;
+ }
+ return err;
+ }
+@@ -2419,7 +2421,7 @@ i915_gem_do_execbuffer(struct drm_device *dev,
+ if (out_fence) {
+ if (err == 0) {
+ fd_install(out_fence_fd, out_fence->file);
+- args->rsvd2 &= GENMASK_ULL(0, 31); /* keep in-fence */
++ args->rsvd2 &= GENMASK_ULL(31, 0); /* keep in-fence */
+ args->rsvd2 |= (u64)out_fence_fd << 32;
+ out_fence_fd = -1;
+ } else {
+diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
+index 59ee808f8fd9..cc2a10f22c3d 100644
+--- a/drivers/gpu/drm/i915/i915_perf.c
++++ b/drivers/gpu/drm/i915/i915_perf.c
+@@ -1301,9 +1301,8 @@ static void i915_oa_stream_destroy(struct i915_perf_stream *stream)
+ */
+ mutex_lock(&dev_priv->drm.struct_mutex);
+ dev_priv->perf.oa.exclusive_stream = NULL;
+- mutex_unlock(&dev_priv->drm.struct_mutex);
+-
+ dev_priv->perf.oa.ops.disable_metric_set(dev_priv);
++ mutex_unlock(&dev_priv->drm.struct_mutex);
+
+ free_oa_buffer(dev_priv);
+
+@@ -1755,22 +1754,13 @@ static int gen8_switch_to_updated_kernel_context(struct drm_i915_private *dev_pr
+ * Note: it's only the RCS/Render context that has any OA state.
+ */
+ static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv,
+- const struct i915_oa_config *oa_config,
+- bool interruptible)
++ const struct i915_oa_config *oa_config)
+ {
+ struct i915_gem_context *ctx;
+ int ret;
+ unsigned int wait_flags = I915_WAIT_LOCKED;
+
+- if (interruptible) {
+- ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+- if (ret)
+- return ret;
+-
+- wait_flags |= I915_WAIT_INTERRUPTIBLE;
+- } else {
+- mutex_lock(&dev_priv->drm.struct_mutex);
+- }
++ lockdep_assert_held(&dev_priv->drm.struct_mutex);
+
+ /* Switch away from any user context. */
+ ret = gen8_switch_to_updated_kernel_context(dev_priv, oa_config);
+@@ -1818,8 +1808,6 @@ static int gen8_configure_all_contexts(struct drm_i915_private *dev_priv,
+ }
+
+ out:
+- mutex_unlock(&dev_priv->drm.struct_mutex);
+-
+ return ret;
+ }
+
+@@ -1862,7 +1850,7 @@ static int gen8_enable_metric_set(struct drm_i915_private *dev_priv,
+ * to make sure all slices/subslices are ON before writing to NOA
+ * registers.
+ */
+- ret = gen8_configure_all_contexts(dev_priv, oa_config, true);
++ ret = gen8_configure_all_contexts(dev_priv, oa_config);
+ if (ret)
+ return ret;
+
+@@ -1877,7 +1865,7 @@ static int gen8_enable_metric_set(struct drm_i915_private *dev_priv,
+ static void gen8_disable_metric_set(struct drm_i915_private *dev_priv)
+ {
+ /* Reset all contexts' slices/subslices configurations. */
+- gen8_configure_all_contexts(dev_priv, NULL, false);
++ gen8_configure_all_contexts(dev_priv, NULL);
+
+ I915_WRITE(GDT_CHICKEN_BITS, (I915_READ(GDT_CHICKEN_BITS) &
+ ~GT_NOA_ENABLE));
+@@ -2127,6 +2115,10 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
+ if (ret)
+ goto err_oa_buf_alloc;
+
++ ret = i915_mutex_lock_interruptible(&dev_priv->drm);
++ if (ret)
++ goto err_lock;
++
+ ret = dev_priv->perf.oa.ops.enable_metric_set(dev_priv,
+ stream->oa_config);
+ if (ret)
+@@ -2134,23 +2126,17 @@ static int i915_oa_stream_init(struct i915_perf_stream *stream,
+
+ stream->ops = &i915_oa_stream_ops;
+
+- /* Lock device for exclusive_stream access late because
+- * enable_metric_set() might lock as well on gen8+.
+- */
+- ret = i915_mutex_lock_interruptible(&dev_priv->drm);
+- if (ret)
+- goto err_lock;
+-
+ dev_priv->perf.oa.exclusive_stream = stream;
+
+ mutex_unlock(&dev_priv->drm.struct_mutex);
+
+ return 0;
+
+-err_lock:
++err_enable:
+ dev_priv->perf.oa.ops.disable_metric_set(dev_priv);
++ mutex_unlock(&dev_priv->drm.struct_mutex);
+
+-err_enable:
++err_lock:
+ free_oa_buffer(dev_priv);
+
+ err_oa_buf_alloc:
+diff --git a/drivers/gpu/drm/i915/intel_audio.c b/drivers/gpu/drm/i915/intel_audio.c
+index 0ddba16fde1b..538a762f7318 100644
+--- a/drivers/gpu/drm/i915/intel_audio.c
++++ b/drivers/gpu/drm/i915/intel_audio.c
+@@ -754,11 +754,11 @@ static struct intel_encoder *get_saved_enc(struct drm_i915_private *dev_priv,
+ {
+ struct intel_encoder *encoder;
+
+- if (WARN_ON(pipe >= INTEL_INFO(dev_priv)->num_pipes))
+- return NULL;
+-
+ /* MST */
+ if (pipe >= 0) {
++ if (WARN_ON(pipe >= ARRAY_SIZE(dev_priv->av_enc_map)))
++ return NULL;
++
+ encoder = dev_priv->av_enc_map[pipe];
+ /*
+ * when bootup, audio driver may not know it is
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index 50f8443641b8..a83e18c72f7b 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -14463,6 +14463,8 @@ static void sanitize_watermarks(struct drm_device *dev)
+
+ cs->wm.need_postvbl_update = true;
+ dev_priv->display.optimize_watermarks(intel_state, cs);
++
++ to_intel_crtc_state(crtc->state)->wm = cs->wm;
+ }
+
+ put_state:
+diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
+index 4dea833f9d1b..847cda4c017c 100644
+--- a/drivers/gpu/drm/i915/intel_hdmi.c
++++ b/drivers/gpu/drm/i915/intel_hdmi.c
+@@ -1573,12 +1573,20 @@ intel_hdmi_set_edid(struct drm_connector *connector)
+ struct intel_hdmi *intel_hdmi = intel_attached_hdmi(connector);
+ struct edid *edid;
+ bool connected = false;
++ struct i2c_adapter *i2c;
+
+ intel_display_power_get(dev_priv, POWER_DOMAIN_GMBUS);
+
+- edid = drm_get_edid(connector,
+- intel_gmbus_get_adapter(dev_priv,
+- intel_hdmi->ddc_bus));
++ i2c = intel_gmbus_get_adapter(dev_priv, intel_hdmi->ddc_bus);
++
++ edid = drm_get_edid(connector, i2c);
++
++ if (!edid && !intel_gmbus_is_forced_bit(i2c)) {
++ DRM_DEBUG_KMS("HDMI GMBUS EDID read failed, retry using GPIO bit-banging\n");
++ intel_gmbus_force_bit(i2c, true);
++ edid = drm_get_edid(connector, i2c);
++ intel_gmbus_force_bit(i2c, false);
++ }
+
+ intel_hdmi_dp_dual_mode_detect(connector, edid != NULL);
+
+diff --git a/drivers/gpu/drm/i915/intel_runtime_pm.c b/drivers/gpu/drm/i915/intel_runtime_pm.c
+index 7e115f3927f6..d169bfb98368 100644
+--- a/drivers/gpu/drm/i915/intel_runtime_pm.c
++++ b/drivers/gpu/drm/i915/intel_runtime_pm.c
+@@ -1844,6 +1844,7 @@ void intel_display_power_put(struct drm_i915_private *dev_priv,
+ CNL_DISPLAY_POWERWELL_2_POWER_DOMAINS | \
+ BIT_ULL(POWER_DOMAIN_MODESET) | \
+ BIT_ULL(POWER_DOMAIN_AUX_A) | \
++ BIT_ULL(POWER_DOMAIN_GMBUS) | \
+ BIT_ULL(POWER_DOMAIN_INIT))
+
+ static const struct i915_power_well_ops i9xx_always_on_power_well_ops = {
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index 69d6e61a01ec..6ed9cb053dfa 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -570,9 +570,15 @@ nouveau_connector_detect(struct drm_connector *connector, bool force)
+ nv_connector->edid = NULL;
+ }
+
+- ret = pm_runtime_get_sync(connector->dev->dev);
+- if (ret < 0 && ret != -EACCES)
+- return conn_status;
++ /* Outputs are only polled while runtime active, so acquiring a
++ * runtime PM ref here is unnecessary (and would deadlock upon
++ * runtime suspend because it waits for polling to finish).
++ */
++ if (!drm_kms_helper_is_poll_worker()) {
++ ret = pm_runtime_get_sync(connector->dev->dev);
++ if (ret < 0 && ret != -EACCES)
++ return conn_status;
++ }
+
+ nv_encoder = nouveau_connector_ddc_detect(connector);
+ if (nv_encoder && (i2c = nv_encoder->i2c) != NULL) {
+@@ -647,8 +653,10 @@ nouveau_connector_detect(struct drm_connector *connector, bool force)
+
+ out:
+
+- pm_runtime_mark_last_busy(connector->dev->dev);
+- pm_runtime_put_autosuspend(connector->dev->dev);
++ if (!drm_kms_helper_is_poll_worker()) {
++ pm_runtime_mark_last_busy(connector->dev->dev);
++ pm_runtime_put_autosuspend(connector->dev->dev);
++ }
+
+ return conn_status;
+ }
+diff --git a/drivers/gpu/drm/nouveau/nv50_display.c b/drivers/gpu/drm/nouveau/nv50_display.c
+index 584466ef688f..325bff420f5a 100644
+--- a/drivers/gpu/drm/nouveau/nv50_display.c
++++ b/drivers/gpu/drm/nouveau/nv50_display.c
+@@ -4426,6 +4426,7 @@ nv50_display_create(struct drm_device *dev)
+ nouveau_display(dev)->fini = nv50_display_fini;
+ disp->disp = &nouveau_display(dev)->disp;
+ dev->mode_config.funcs = &nv50_disp_func;
++ dev->driver->driver_features |= DRIVER_PREFER_XBGR_30BPP;
+ if (nouveau_atomic)
+ dev->driver->driver_features |= DRIVER_ATOMIC;
+
+diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
+index a6511918f632..8ce36cf42055 100644
+--- a/drivers/gpu/drm/radeon/cik.c
++++ b/drivers/gpu/drm/radeon/cik.c
+@@ -3228,35 +3228,8 @@ static void cik_gpu_init(struct radeon_device *rdev)
+ case CHIP_KAVERI:
+ rdev->config.cik.max_shader_engines = 1;
+ rdev->config.cik.max_tile_pipes = 4;
+- if ((rdev->pdev->device == 0x1304) ||
+- (rdev->pdev->device == 0x1305) ||
+- (rdev->pdev->device == 0x130C) ||
+- (rdev->pdev->device == 0x130F) ||
+- (rdev->pdev->device == 0x1310) ||
+- (rdev->pdev->device == 0x1311) ||
+- (rdev->pdev->device == 0x131C)) {
+- rdev->config.cik.max_cu_per_sh = 8;
+- rdev->config.cik.max_backends_per_se = 2;
+- } else if ((rdev->pdev->device == 0x1309) ||
+- (rdev->pdev->device == 0x130A) ||
+- (rdev->pdev->device == 0x130D) ||
+- (rdev->pdev->device == 0x1313) ||
+- (rdev->pdev->device == 0x131D)) {
+- rdev->config.cik.max_cu_per_sh = 6;
+- rdev->config.cik.max_backends_per_se = 2;
+- } else if ((rdev->pdev->device == 0x1306) ||
+- (rdev->pdev->device == 0x1307) ||
+- (rdev->pdev->device == 0x130B) ||
+- (rdev->pdev->device == 0x130E) ||
+- (rdev->pdev->device == 0x1315) ||
+- (rdev->pdev->device == 0x1318) ||
+- (rdev->pdev->device == 0x131B)) {
+- rdev->config.cik.max_cu_per_sh = 4;
+- rdev->config.cik.max_backends_per_se = 1;
+- } else {
+- rdev->config.cik.max_cu_per_sh = 3;
+- rdev->config.cik.max_backends_per_se = 1;
+- }
++ rdev->config.cik.max_cu_per_sh = 8;
++ rdev->config.cik.max_backends_per_se = 2;
+ rdev->config.cik.max_sh_per_se = 1;
+ rdev->config.cik.max_texture_channel_caches = 4;
+ rdev->config.cik.max_gprs = 256;
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index 59dcefb2df3b..30e129684c7c 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -900,9 +900,11 @@ radeon_lvds_detect(struct drm_connector *connector, bool force)
+ enum drm_connector_status ret = connector_status_disconnected;
+ int r;
+
+- r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
+- return connector_status_disconnected;
++ if (!drm_kms_helper_is_poll_worker()) {
++ r = pm_runtime_get_sync(connector->dev->dev);
++ if (r < 0)
++ return connector_status_disconnected;
++ }
+
+ if (encoder) {
+ struct radeon_encoder *radeon_encoder = to_radeon_encoder(encoder);
+@@ -925,8 +927,12 @@ radeon_lvds_detect(struct drm_connector *connector, bool force)
+ /* check acpi lid status ??? */
+
+ radeon_connector_update_scratch_regs(connector, ret);
+- pm_runtime_mark_last_busy(connector->dev->dev);
+- pm_runtime_put_autosuspend(connector->dev->dev);
++
++ if (!drm_kms_helper_is_poll_worker()) {
++ pm_runtime_mark_last_busy(connector->dev->dev);
++ pm_runtime_put_autosuspend(connector->dev->dev);
++ }
++
+ return ret;
+ }
+
+@@ -1040,9 +1046,11 @@ radeon_vga_detect(struct drm_connector *connector, bool force)
+ enum drm_connector_status ret = connector_status_disconnected;
+ int r;
+
+- r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
+- return connector_status_disconnected;
++ if (!drm_kms_helper_is_poll_worker()) {
++ r = pm_runtime_get_sync(connector->dev->dev);
++ if (r < 0)
++ return connector_status_disconnected;
++ }
+
+ encoder = radeon_best_single_encoder(connector);
+ if (!encoder)
+@@ -1109,8 +1117,10 @@ radeon_vga_detect(struct drm_connector *connector, bool force)
+ radeon_connector_update_scratch_regs(connector, ret);
+
+ out:
+- pm_runtime_mark_last_busy(connector->dev->dev);
+- pm_runtime_put_autosuspend(connector->dev->dev);
++ if (!drm_kms_helper_is_poll_worker()) {
++ pm_runtime_mark_last_busy(connector->dev->dev);
++ pm_runtime_put_autosuspend(connector->dev->dev);
++ }
+
+ return ret;
+ }
+@@ -1174,9 +1184,11 @@ radeon_tv_detect(struct drm_connector *connector, bool force)
+ if (!radeon_connector->dac_load_detect)
+ return ret;
+
+- r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
+- return connector_status_disconnected;
++ if (!drm_kms_helper_is_poll_worker()) {
++ r = pm_runtime_get_sync(connector->dev->dev);
++ if (r < 0)
++ return connector_status_disconnected;
++ }
+
+ encoder = radeon_best_single_encoder(connector);
+ if (!encoder)
+@@ -1188,8 +1200,12 @@ radeon_tv_detect(struct drm_connector *connector, bool force)
+ if (ret == connector_status_connected)
+ ret = radeon_connector_analog_encoder_conflict_solve(connector, encoder, ret, false);
+ radeon_connector_update_scratch_regs(connector, ret);
+- pm_runtime_mark_last_busy(connector->dev->dev);
+- pm_runtime_put_autosuspend(connector->dev->dev);
++
++ if (!drm_kms_helper_is_poll_worker()) {
++ pm_runtime_mark_last_busy(connector->dev->dev);
++ pm_runtime_put_autosuspend(connector->dev->dev);
++ }
++
+ return ret;
+ }
+
+@@ -1252,9 +1268,11 @@ radeon_dvi_detect(struct drm_connector *connector, bool force)
+ enum drm_connector_status ret = connector_status_disconnected;
+ bool dret = false, broken_edid = false;
+
+- r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
+- return connector_status_disconnected;
++ if (!drm_kms_helper_is_poll_worker()) {
++ r = pm_runtime_get_sync(connector->dev->dev);
++ if (r < 0)
++ return connector_status_disconnected;
++ }
+
+ if (radeon_connector->detected_hpd_without_ddc) {
+ force = true;
+@@ -1437,8 +1455,10 @@ radeon_dvi_detect(struct drm_connector *connector, bool force)
+ }
+
+ exit:
+- pm_runtime_mark_last_busy(connector->dev->dev);
+- pm_runtime_put_autosuspend(connector->dev->dev);
++ if (!drm_kms_helper_is_poll_worker()) {
++ pm_runtime_mark_last_busy(connector->dev->dev);
++ pm_runtime_put_autosuspend(connector->dev->dev);
++ }
+
+ return ret;
+ }
+@@ -1689,9 +1709,11 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
+ if (radeon_dig_connector->is_mst)
+ return connector_status_disconnected;
+
+- r = pm_runtime_get_sync(connector->dev->dev);
+- if (r < 0)
+- return connector_status_disconnected;
++ if (!drm_kms_helper_is_poll_worker()) {
++ r = pm_runtime_get_sync(connector->dev->dev);
++ if (r < 0)
++ return connector_status_disconnected;
++ }
+
+ if (!force && radeon_check_hpd_status_unchanged(connector)) {
+ ret = connector->status;
+@@ -1778,8 +1800,10 @@ radeon_dp_detect(struct drm_connector *connector, bool force)
+ }
+
+ out:
+- pm_runtime_mark_last_busy(connector->dev->dev);
+- pm_runtime_put_autosuspend(connector->dev->dev);
++ if (!drm_kms_helper_is_poll_worker()) {
++ pm_runtime_mark_last_busy(connector->dev->dev);
++ pm_runtime_put_autosuspend(connector->dev->dev);
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index ffc10cadcf34..32b577c776b9 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -1397,6 +1397,10 @@ int radeon_device_init(struct radeon_device *rdev,
+ if ((rdev->flags & RADEON_IS_PCI) &&
+ (rdev->family <= CHIP_RS740))
+ rdev->need_dma32 = true;
++#ifdef CONFIG_PPC64
++ if (rdev->family == CHIP_CEDAR)
++ rdev->need_dma32 = true;
++#endif
+
+ dma_bits = rdev->need_dma32 ? 32 : 40;
+ r = pci_set_dma_mask(rdev->pdev, DMA_BIT_MASK(dma_bits));
+diff --git a/drivers/gpu/drm/radeon/radeon_pm.c b/drivers/gpu/drm/radeon/radeon_pm.c
+index 326ad068c15a..4b6542538ff9 100644
+--- a/drivers/gpu/drm/radeon/radeon_pm.c
++++ b/drivers/gpu/drm/radeon/radeon_pm.c
+@@ -47,7 +47,6 @@ static bool radeon_pm_in_vbl(struct radeon_device *rdev);
+ static bool radeon_pm_debug_check_in_vbl(struct radeon_device *rdev, bool finish);
+ static void radeon_pm_update_profile(struct radeon_device *rdev);
+ static void radeon_pm_set_clocks(struct radeon_device *rdev);
+-static void radeon_pm_compute_clocks_dpm(struct radeon_device *rdev);
+
+ int radeon_pm_get_type_index(struct radeon_device *rdev,
+ enum radeon_pm_state_type ps_type,
+@@ -80,8 +79,6 @@ void radeon_pm_acpi_event_handler(struct radeon_device *rdev)
+ radeon_dpm_enable_bapm(rdev, rdev->pm.dpm.ac_power);
+ }
+ mutex_unlock(&rdev->pm.mutex);
+- /* allow new DPM state to be picked */
+- radeon_pm_compute_clocks_dpm(rdev);
+ } else if (rdev->pm.pm_method == PM_METHOD_PROFILE) {
+ if (rdev->pm.profile == PM_PROFILE_AUTO) {
+ mutex_lock(&rdev->pm.mutex);
+@@ -885,8 +882,7 @@ static struct radeon_ps *radeon_dpm_pick_power_state(struct radeon_device *rdev,
+ dpm_state = POWER_STATE_TYPE_INTERNAL_3DPERF;
+ /* balanced states don't exist at the moment */
+ if (dpm_state == POWER_STATE_TYPE_BALANCED)
+- dpm_state = rdev->pm.dpm.ac_power ?
+- POWER_STATE_TYPE_PERFORMANCE : POWER_STATE_TYPE_BATTERY;
++ dpm_state = POWER_STATE_TYPE_PERFORMANCE;
+
+ restart_search:
+ /* Pick the best power state based on current conditions */
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index d7d042a20ab4..4dff06ab771e 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -534,14 +534,14 @@ int ib_register_device(struct ib_device *device,
+ ret = device->query_device(device, &device->attrs, &uhw);
+ if (ret) {
+ pr_warn("Couldn't query the device attributes\n");
+- goto cache_cleanup;
++ goto cg_cleanup;
+ }
+
+ ret = ib_device_register_sysfs(device, port_callback);
+ if (ret) {
+ pr_warn("Couldn't register device %s with driver model\n",
+ device->name);
+- goto cache_cleanup;
++ goto cg_cleanup;
+ }
+
+ device->reg_state = IB_DEV_REGISTERED;
+@@ -557,6 +557,8 @@ int ib_register_device(struct ib_device *device,
+ mutex_unlock(&device_mutex);
+ return 0;
+
++cg_cleanup:
++ ib_device_unregister_rdmacg(device);
+ cache_cleanup:
+ ib_cache_cleanup_one(device);
+ ib_cache_release_one(device);
+diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
+index 4e1f76730855..9cb801d1fe54 100644
+--- a/drivers/infiniband/core/rdma_core.c
++++ b/drivers/infiniband/core/rdma_core.c
+@@ -407,13 +407,13 @@ static int __must_check remove_commit_fd_uobject(struct ib_uobject *uobj,
+ return ret;
+ }
+
+-static void lockdep_check(struct ib_uobject *uobj, bool exclusive)
++static void assert_uverbs_usecnt(struct ib_uobject *uobj, bool exclusive)
+ {
+ #ifdef CONFIG_LOCKDEP
+ if (exclusive)
+- WARN_ON(atomic_read(&uobj->usecnt) > 0);
++ WARN_ON(atomic_read(&uobj->usecnt) != -1);
+ else
+- WARN_ON(atomic_read(&uobj->usecnt) == -1);
++ WARN_ON(atomic_read(&uobj->usecnt) <= 0);
+ #endif
+ }
+
+@@ -452,7 +452,7 @@ int __must_check rdma_remove_commit_uobject(struct ib_uobject *uobj)
+ WARN(true, "ib_uverbs: Cleanup is running while removing an uobject\n");
+ return 0;
+ }
+- lockdep_check(uobj, true);
++ assert_uverbs_usecnt(uobj, true);
+ ret = _rdma_remove_commit_uobject(uobj, RDMA_REMOVE_DESTROY);
+
+ up_read(&ucontext->cleanup_rwsem);
+@@ -482,7 +482,7 @@ int rdma_explicit_destroy(struct ib_uobject *uobject)
+ WARN(true, "ib_uverbs: Cleanup is running while removing an uobject\n");
+ return 0;
+ }
+- lockdep_check(uobject, true);
++ assert_uverbs_usecnt(uobject, true);
+ ret = uobject->type->type_class->remove_commit(uobject,
+ RDMA_REMOVE_DESTROY);
+ if (ret)
+@@ -569,7 +569,7 @@ static void lookup_put_fd_uobject(struct ib_uobject *uobj, bool exclusive)
+
+ void rdma_lookup_put_uobject(struct ib_uobject *uobj, bool exclusive)
+ {
+- lockdep_check(uobj, exclusive);
++ assert_uverbs_usecnt(uobj, exclusive);
+ uobj->type->type_class->lookup_put(uobj, exclusive);
+ /*
+ * In order to unlock an object, either decrease its usecnt for
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index eb85b546e223..c8b3a45e9edc 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -1148,6 +1148,9 @@ static ssize_t ucma_init_qp_attr(struct ucma_file *file,
+ if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
+ return -EFAULT;
+
++ if (cmd.qp_state > IB_QPS_ERR)
++ return -EINVAL;
++
+ ctx = ucma_get_ctx(file, cmd.id);
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
+@@ -1293,6 +1296,9 @@ static ssize_t ucma_set_option(struct ucma_file *file, const char __user *inbuf,
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
+
++ if (unlikely(cmd.optval > KMALLOC_MAX_SIZE))
++ return -EINVAL;
++
+ optval = memdup_user((void __user *) (unsigned long) cmd.optval,
+ cmd.optlen);
+ if (IS_ERR(optval)) {
+diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
+index 18705cbcdc8c..8b179238f405 100644
+--- a/drivers/infiniband/hw/mlx5/cq.c
++++ b/drivers/infiniband/hw/mlx5/cq.c
+@@ -1177,7 +1177,12 @@ static int resize_user(struct mlx5_ib_dev *dev, struct mlx5_ib_cq *cq,
+ if (ucmd.reserved0 || ucmd.reserved1)
+ return -EINVAL;
+
+- umem = ib_umem_get(context, ucmd.buf_addr, entries * ucmd.cqe_size,
++ /* check multiplication overflow */
++ if (ucmd.cqe_size && SIZE_MAX / ucmd.cqe_size <= entries - 1)
++ return -EINVAL;
++
++ umem = ib_umem_get(context, ucmd.buf_addr,
++ (size_t)ucmd.cqe_size * entries,
+ IB_ACCESS_LOCAL_WRITE, 1);
+ if (IS_ERR(umem)) {
+ err = PTR_ERR(umem);
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index d109fe8290a7..3832edd867ed 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -1813,7 +1813,6 @@ mlx5_ib_sg_to_klms(struct mlx5_ib_mr *mr,
+
+ mr->ibmr.iova = sg_dma_address(sg) + sg_offset;
+ mr->ibmr.length = 0;
+- mr->ndescs = sg_nents;
+
+ for_each_sg(sgl, sg, sg_nents, i) {
+ if (unlikely(i >= mr->max_descs))
+@@ -1825,6 +1824,7 @@ mlx5_ib_sg_to_klms(struct mlx5_ib_mr *mr,
+
+ sg_offset = 0;
+ }
++ mr->ndescs = i;
+
+ if (sg_offset_p)
+ *sg_offset_p = sg_offset;
+diff --git a/drivers/input/keyboard/matrix_keypad.c b/drivers/input/keyboard/matrix_keypad.c
+index 1f316d66e6f7..41614c185918 100644
+--- a/drivers/input/keyboard/matrix_keypad.c
++++ b/drivers/input/keyboard/matrix_keypad.c
+@@ -218,8 +218,10 @@ static void matrix_keypad_stop(struct input_dev *dev)
+ {
+ struct matrix_keypad *keypad = input_get_drvdata(dev);
+
++ spin_lock_irq(&keypad->lock);
+ keypad->stopped = true;
+- mb();
++ spin_unlock_irq(&keypad->lock);
++
+ flush_work(&keypad->work.work);
+ /*
+ * matrix_keypad_scan() will leave IRQs enabled;
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index cd9f61cb3fc6..ee5466a374bf 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -173,7 +173,6 @@ static const char * const smbus_pnp_ids[] = {
+ "LEN0046", /* X250 */
+ "LEN004a", /* W541 */
+ "LEN200f", /* T450s */
+- "LEN2018", /* T460p */
+ NULL
+ };
+
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index b4d28928dec5..14bdaf1cef2c 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -951,6 +951,7 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c)
+ uint32_t rtime = cpu_to_le32(get_seconds());
+ struct uuid_entry *u;
+ char buf[BDEVNAME_SIZE];
++ struct cached_dev *exist_dc, *t;
+
+ bdevname(dc->bdev, buf);
+
+@@ -974,6 +975,16 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c)
+ return -EINVAL;
+ }
+
++ /* Check whether already attached */
++ list_for_each_entry_safe(exist_dc, t, &c->cached_devs, list) {
++ if (!memcmp(dc->sb.uuid, exist_dc->sb.uuid, 16)) {
++ pr_err("Tried to attach %s but duplicate UUID already attached",
++ buf);
++
++ return -EINVAL;
++ }
++ }
++
+ u = uuid_find(c, dc->sb.uuid);
+
+ if (u &&
+@@ -1191,7 +1202,7 @@ static void register_bdev(struct cache_sb *sb, struct page *sb_page,
+
+ return;
+ err:
+- pr_notice("error opening %s: %s", bdevname(bdev, name), err);
++ pr_notice("error %s: %s", bdevname(bdev, name), err);
+ bcache_device_stop(&dc->disk);
+ }
+
+@@ -1859,6 +1870,8 @@ static int register_cache(struct cache_sb *sb, struct page *sb_page,
+ const char *err = NULL; /* must be set for any error case */
+ int ret = 0;
+
++ bdevname(bdev, name);
++
+ memcpy(&ca->sb, sb, sizeof(struct cache_sb));
+ ca->bdev = bdev;
+ ca->bdev->bd_holder = ca;
+@@ -1867,11 +1880,12 @@ static int register_cache(struct cache_sb *sb, struct page *sb_page,
+ ca->sb_bio.bi_io_vec[0].bv_page = sb_page;
+ get_page(sb_page);
+
+- if (blk_queue_discard(bdev_get_queue(ca->bdev)))
++ if (blk_queue_discard(bdev_get_queue(bdev)))
+ ca->discard = CACHE_DISCARD(&ca->sb);
+
+ ret = cache_alloc(ca);
+ if (ret != 0) {
++ blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL);
+ if (ret == -ENOMEM)
+ err = "cache_alloc(): -ENOMEM";
+ else
+@@ -1894,14 +1908,14 @@ static int register_cache(struct cache_sb *sb, struct page *sb_page,
+ goto out;
+ }
+
+- pr_info("registered cache device %s", bdevname(bdev, name));
++ pr_info("registered cache device %s", name);
+
+ out:
+ kobject_put(&ca->kobj);
+
+ err:
+ if (err)
+- pr_notice("error opening %s: %s", bdevname(bdev, name), err);
++ pr_notice("error %s: %s", name, err);
+
+ return ret;
+ }
+@@ -1990,6 +2004,7 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr,
+ if (err)
+ goto err_close;
+
++ err = "failed to register device";
+ if (SB_IS_BDEV(sb)) {
+ struct cached_dev *dc = kzalloc(sizeof(*dc), GFP_KERNEL);
+ if (!dc)
+@@ -2004,7 +2019,7 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr,
+ goto err_close;
+
+ if (register_cache(sb, sb_page, bdev, ca) != 0)
+- goto err_close;
++ goto err;
+ }
+ out:
+ if (sb_page)
+@@ -2017,7 +2032,7 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr,
+ err_close:
+ blkdev_put(bdev, FMODE_READ|FMODE_WRITE|FMODE_EXCL);
+ err:
+- pr_info("error opening %s: %s", path, err);
++ pr_info("error %s: %s", path, err);
+ ret = -EINVAL;
+ goto out;
+ }
+diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
+index c546b567f3b5..b3454e8c0956 100644
+--- a/drivers/md/dm-bufio.c
++++ b/drivers/md/dm-bufio.c
+@@ -386,9 +386,6 @@ static void __cache_size_refresh(void)
+ static void *alloc_buffer_data(struct dm_bufio_client *c, gfp_t gfp_mask,
+ enum data_mode *data_mode)
+ {
+- unsigned noio_flag;
+- void *ptr;
+-
+ if (c->block_size <= DM_BUFIO_BLOCK_SIZE_SLAB_LIMIT) {
+ *data_mode = DATA_MODE_SLAB;
+ return kmem_cache_alloc(DM_BUFIO_CACHE(c), gfp_mask);
+@@ -412,16 +409,15 @@ static void *alloc_buffer_data(struct dm_bufio_client *c, gfp_t gfp_mask,
+ * all allocations done by this process (including pagetables) are done
+ * as if GFP_NOIO was specified.
+ */
++ if (gfp_mask & __GFP_NORETRY) {
++ unsigned noio_flag = memalloc_noio_save();
++ void *ptr = __vmalloc(c->block_size, gfp_mask, PAGE_KERNEL);
+
+- if (gfp_mask & __GFP_NORETRY)
+- noio_flag = memalloc_noio_save();
+-
+- ptr = __vmalloc(c->block_size, gfp_mask, PAGE_KERNEL);
+-
+- if (gfp_mask & __GFP_NORETRY)
+ memalloc_noio_restore(noio_flag);
++ return ptr;
++ }
+
+- return ptr;
++ return __vmalloc(c->block_size, gfp_mask, PAGE_KERNEL);
+ }
+
+ /*
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index f6d4a50f1bdb..829ac22b72fc 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -3455,7 +3455,7 @@ static int __init init_mac80211_hwsim(void)
+
+ spin_lock_init(&hwsim_radio_lock);
+
+- hwsim_wq = alloc_workqueue("hwsim_wq",WQ_MEM_RECLAIM,0);
++ hwsim_wq = alloc_workqueue("hwsim_wq", 0, 0);
+ if (!hwsim_wq)
+ return -ENOMEM;
+
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 839650e0926a..3551fbd6fe41 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2950,7 +2950,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
+
+ if (new)
+ nvme_mpath_add_disk(ns->head);
+- nvme_mpath_add_disk_links(ns);
+ return;
+ out_unlink_ns:
+ mutex_lock(&ctrl->subsys->lock);
+@@ -2970,7 +2969,6 @@ static void nvme_ns_remove(struct nvme_ns *ns)
+ return;
+
+ if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
+- nvme_mpath_remove_disk_links(ns);
+ sysfs_remove_group(&disk_to_dev(ns->disk)->kobj,
+ &nvme_ns_id_attr_group);
+ if (ns->ndev)
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 1218a9fca846..cf16905d25e2 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -245,25 +245,6 @@ void nvme_mpath_add_disk(struct nvme_ns_head *head)
+ head->disk->disk_name);
+ }
+
+-void nvme_mpath_add_disk_links(struct nvme_ns *ns)
+-{
+- struct kobject *slave_disk_kobj, *holder_disk_kobj;
+-
+- if (!ns->head->disk)
+- return;
+-
+- slave_disk_kobj = &disk_to_dev(ns->disk)->kobj;
+- if (sysfs_create_link(ns->head->disk->slave_dir, slave_disk_kobj,
+- kobject_name(slave_disk_kobj)))
+- return;
+-
+- holder_disk_kobj = &disk_to_dev(ns->head->disk)->kobj;
+- if (sysfs_create_link(ns->disk->part0.holder_dir, holder_disk_kobj,
+- kobject_name(holder_disk_kobj)))
+- sysfs_remove_link(ns->head->disk->slave_dir,
+- kobject_name(slave_disk_kobj));
+-}
+-
+ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
+ {
+ if (!head->disk)
+@@ -278,14 +259,3 @@ void nvme_mpath_remove_disk(struct nvme_ns_head *head)
+ blk_cleanup_queue(head->disk->queue);
+ put_disk(head->disk);
+ }
+-
+-void nvme_mpath_remove_disk_links(struct nvme_ns *ns)
+-{
+- if (!ns->head->disk)
+- return;
+-
+- sysfs_remove_link(ns->disk->part0.holder_dir,
+- kobject_name(&disk_to_dev(ns->head->disk)->kobj));
+- sysfs_remove_link(ns->head->disk->slave_dir,
+- kobject_name(&disk_to_dev(ns->disk)->kobj));
+-}
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index a00eabd06427..55c49a1aa231 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -405,9 +405,7 @@ bool nvme_req_needs_failover(struct request *req);
+ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl);
+ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl,struct nvme_ns_head *head);
+ void nvme_mpath_add_disk(struct nvme_ns_head *head);
+-void nvme_mpath_add_disk_links(struct nvme_ns *ns);
+ void nvme_mpath_remove_disk(struct nvme_ns_head *head);
+-void nvme_mpath_remove_disk_links(struct nvme_ns *ns);
+
+ static inline void nvme_mpath_clear_current_path(struct nvme_ns *ns)
+ {
+@@ -448,12 +446,6 @@ static inline void nvme_mpath_add_disk(struct nvme_ns_head *head)
+ static inline void nvme_mpath_remove_disk(struct nvme_ns_head *head)
+ {
+ }
+-static inline void nvme_mpath_add_disk_links(struct nvme_ns *ns)
+-{
+-}
+-static inline void nvme_mpath_remove_disk_links(struct nvme_ns *ns)
+-{
+-}
+ static inline void nvme_mpath_clear_current_path(struct nvme_ns *ns)
+ {
+ }
+diff --git a/drivers/pci/dwc/pcie-designware-host.c b/drivers/pci/dwc/pcie-designware-host.c
+index 81e2157a7cfb..bc3e2d8d0cce 100644
+--- a/drivers/pci/dwc/pcie-designware-host.c
++++ b/drivers/pci/dwc/pcie-designware-host.c
+@@ -607,7 +607,7 @@ void dw_pcie_setup_rc(struct pcie_port *pp)
+ /* setup bus numbers */
+ val = dw_pcie_readl_dbi(pci, PCI_PRIMARY_BUS);
+ val &= 0xff000000;
+- val |= 0x00010100;
++ val |= 0x00ff0100;
+ dw_pcie_writel_dbi(pci, PCI_PRIMARY_BUS, val);
+
+ /* setup command register */
+diff --git a/drivers/regulator/stm32-vrefbuf.c b/drivers/regulator/stm32-vrefbuf.c
+index 72c8b3e1022b..e0a9c445ed67 100644
+--- a/drivers/regulator/stm32-vrefbuf.c
++++ b/drivers/regulator/stm32-vrefbuf.c
+@@ -51,7 +51,7 @@ static int stm32_vrefbuf_enable(struct regulator_dev *rdev)
+ * arbitrary timeout.
+ */
+ ret = readl_poll_timeout(priv->base + STM32_VREFBUF_CSR, val,
+- !(val & STM32_VRR), 650, 10000);
++ val & STM32_VRR, 650, 10000);
+ if (ret) {
+ dev_err(&rdev->dev, "stm32 vrefbuf timed out!\n");
+ val = readl_relaxed(priv->base + STM32_VREFBUF_CSR);
+diff --git a/drivers/scsi/hosts.c b/drivers/scsi/hosts.c
+index 57bf43e34863..dd9464920456 100644
+--- a/drivers/scsi/hosts.c
++++ b/drivers/scsi/hosts.c
+@@ -328,8 +328,6 @@ static void scsi_host_dev_release(struct device *dev)
+ if (shost->work_q)
+ destroy_workqueue(shost->work_q);
+
+- destroy_rcu_head(&shost->rcu);
+-
+ if (shost->shost_state == SHOST_CREATED) {
+ /*
+ * Free the shost_dev device name here if scsi_host_alloc()
+@@ -404,7 +402,6 @@ struct Scsi_Host *scsi_host_alloc(struct scsi_host_template *sht, int privsize)
+ INIT_LIST_HEAD(&shost->starved_list);
+ init_waitqueue_head(&shost->host_wait);
+ mutex_init(&shost->scan_mutex);
+- init_rcu_head(&shost->rcu);
+
+ index = ida_simple_get(&host_index_ida, 0, 0, GFP_KERNEL);
+ if (index < 0)
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 01a9b8971e88..93ff92e2363f 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -315,6 +315,29 @@ struct srb_cmd {
+ /* To identify if a srb is of T10-CRC type. @sp => srb_t pointer */
+ #define IS_PROT_IO(sp) (sp->flags & SRB_CRC_CTX_DSD_VALID)
+
++/*
++ * 24 bit port ID type definition.
++ */
++typedef union {
++ uint32_t b24 : 24;
++
++ struct {
++#ifdef __BIG_ENDIAN
++ uint8_t domain;
++ uint8_t area;
++ uint8_t al_pa;
++#elif defined(__LITTLE_ENDIAN)
++ uint8_t al_pa;
++ uint8_t area;
++ uint8_t domain;
++#else
++#error "__BIG_ENDIAN or __LITTLE_ENDIAN must be defined!"
++#endif
++ uint8_t rsvd_1;
++ } b;
++} port_id_t;
++#define INVALID_PORT_ID 0xFFFFFF
++
+ struct els_logo_payload {
+ uint8_t opcode;
+ uint8_t rsvd[3];
+@@ -338,6 +361,7 @@ struct ct_arg {
+ u32 rsp_size;
+ void *req;
+ void *rsp;
++ port_id_t id;
+ };
+
+ /*
+@@ -499,6 +523,7 @@ typedef struct srb {
+ const char *name;
+ int iocbs;
+ struct qla_qpair *qpair;
++ struct list_head elem;
+ u32 gen1; /* scratch */
+ u32 gen2; /* scratch */
+ union {
+@@ -2164,28 +2189,6 @@ struct imm_ntfy_from_isp {
+ #define REQUEST_ENTRY_SIZE (sizeof(request_t))
+
+
+-/*
+- * 24 bit port ID type definition.
+- */
+-typedef union {
+- uint32_t b24 : 24;
+-
+- struct {
+-#ifdef __BIG_ENDIAN
+- uint8_t domain;
+- uint8_t area;
+- uint8_t al_pa;
+-#elif defined(__LITTLE_ENDIAN)
+- uint8_t al_pa;
+- uint8_t area;
+- uint8_t domain;
+-#else
+-#error "__BIG_ENDIAN or __LITTLE_ENDIAN must be defined!"
+-#endif
+- uint8_t rsvd_1;
+- } b;
+-} port_id_t;
+-#define INVALID_PORT_ID 0xFFFFFF
+
+ /*
+ * Switch info gathering structure.
+@@ -4107,6 +4110,7 @@ typedef struct scsi_qla_host {
+ #define LOOP_READY 5
+ #define LOOP_DEAD 6
+
++ unsigned long relogin_jif;
+ unsigned long dpc_flags;
+ #define RESET_MARKER_NEEDED 0 /* Send marker to ISP. */
+ #define RESET_ACTIVE 1
+@@ -4252,6 +4256,7 @@ typedef struct scsi_qla_host {
+ uint8_t n2n_node_name[WWN_SIZE];
+ uint8_t n2n_port_name[WWN_SIZE];
+ uint16_t n2n_id;
++ struct list_head gpnid_list;
+ } scsi_qla_host_t;
+
+ struct qla27xx_image_status {
+diff --git a/drivers/scsi/qla2xxx/qla_gs.c b/drivers/scsi/qla2xxx/qla_gs.c
+index bc3db6abc9a0..7d715e58901f 100644
+--- a/drivers/scsi/qla2xxx/qla_gs.c
++++ b/drivers/scsi/qla2xxx/qla_gs.c
+@@ -175,6 +175,9 @@ qla2x00_chk_ms_status(scsi_qla_host_t *vha, ms_iocb_entry_t *ms_pkt,
+ set_bit(LOCAL_LOOP_UPDATE, &vha->dpc_flags);
+ }
+ break;
++ case CS_TIMEOUT:
++ rval = QLA_FUNCTION_TIMEOUT;
++ /* drop through */
+ default:
+ ql_dbg(ql_dbg_disc, vha, 0x2033,
+ "%s failed, completion status (%x) on port_id: "
+@@ -2833,7 +2836,7 @@ void qla24xx_handle_gidpn_event(scsi_qla_host_t *vha, struct event_arg *ea)
+ }
+ } else { /* fcport->d_id.b24 != ea->id.b24 */
+ fcport->d_id.b24 = ea->id.b24;
+- if (fcport->deleted == QLA_SESS_DELETED) {
++ if (fcport->deleted != QLA_SESS_DELETED) {
+ ql_dbg(ql_dbg_disc, vha, 0x2021,
+ "%s %d %8phC post del sess\n",
+ __func__, __LINE__, fcport->port_name);
+@@ -2889,9 +2892,22 @@ static void qla2x00_async_gidpn_sp_done(void *s, int res)
+ ea.rc = res;
+ ea.event = FCME_GIDPN_DONE;
+
+- ql_dbg(ql_dbg_disc, vha, 0x204f,
+- "Async done-%s res %x, WWPN %8phC ID %3phC \n",
+- sp->name, res, fcport->port_name, id);
++ if (res == QLA_FUNCTION_TIMEOUT) {
++ ql_dbg(ql_dbg_disc, sp->vha, 0xffff,
++ "Async done-%s WWPN %8phC timed out.\n",
++ sp->name, fcport->port_name);
++ qla24xx_post_gidpn_work(sp->vha, fcport);
++ sp->free(sp);
++ return;
++ } else if (res) {
++ ql_dbg(ql_dbg_disc, sp->vha, 0xffff,
++ "Async done-%s fail res %x, WWPN %8phC\n",
++ sp->name, res, fcport->port_name);
++ } else {
++ ql_dbg(ql_dbg_disc, vha, 0x204f,
++ "Async done-%s good WWPN %8phC ID %3phC\n",
++ sp->name, fcport->port_name, id);
++ }
+
+ qla2x00_fcport_event_handler(vha, &ea);
+
+@@ -3155,43 +3171,136 @@ void qla24xx_async_gpnid_done(scsi_qla_host_t *vha, srb_t *sp)
+
+ void qla24xx_handle_gpnid_event(scsi_qla_host_t *vha, struct event_arg *ea)
+ {
+- fc_port_t *fcport;
+- unsigned long flags;
++ fc_port_t *fcport, *conflict, *t;
+
+- spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
+- fcport = qla2x00_find_fcport_by_wwpn(vha, ea->port_name, 1);
+- spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
++ ql_dbg(ql_dbg_disc, vha, 0xffff,
++ "%s %d port_id: %06x\n",
++ __func__, __LINE__, ea->id.b24);
+
+- if (fcport) {
+- /* cable moved. just plugged in */
+- fcport->rscn_gen++;
+- fcport->d_id = ea->id;
+- fcport->scan_state = QLA_FCPORT_FOUND;
+- fcport->flags |= FCF_FABRIC_DEVICE;
+-
+- switch (fcport->disc_state) {
+- case DSC_DELETED:
+- ql_dbg(ql_dbg_disc, vha, 0x210d,
+- "%s %d %8phC login\n", __func__, __LINE__,
+- fcport->port_name);
+- qla24xx_fcport_handle_login(vha, fcport);
+- break;
+- case DSC_DELETE_PEND:
+- break;
+- default:
+- ql_dbg(ql_dbg_disc, vha, 0x2064,
+- "%s %d %8phC post del sess\n",
+- __func__, __LINE__, fcport->port_name);
+- qlt_schedule_sess_for_deletion_lock(fcport);
+- break;
++ if (ea->rc) {
++ /* cable is disconnected */
++ list_for_each_entry_safe(fcport, t, &vha->vp_fcports, list) {
++ if (fcport->d_id.b24 == ea->id.b24) {
++ ql_dbg(ql_dbg_disc, vha, 0xffff,
++ "%s %d %8phC DS %d\n",
++ __func__, __LINE__,
++ fcport->port_name,
++ fcport->disc_state);
++ fcport->scan_state = QLA_FCPORT_SCAN;
++ switch (fcport->disc_state) {
++ case DSC_DELETED:
++ case DSC_DELETE_PEND:
++ break;
++ default:
++ ql_dbg(ql_dbg_disc, vha, 0xffff,
++ "%s %d %8phC post del sess\n",
++ __func__, __LINE__,
++ fcport->port_name);
++ qlt_schedule_sess_for_deletion_lock
++ (fcport);
++ break;
++ }
++ }
+ }
+ } else {
+- /* create new fcport */
+- ql_dbg(ql_dbg_disc, vha, 0x2065,
+- "%s %d %8phC post new sess\n",
+- __func__, __LINE__, ea->port_name);
++ /* cable is connected */
++ fcport = qla2x00_find_fcport_by_wwpn(vha, ea->port_name, 1);
++ if (fcport) {
++ list_for_each_entry_safe(conflict, t, &vha->vp_fcports,
++ list) {
++ if ((conflict->d_id.b24 == ea->id.b24) &&
++ (fcport != conflict)) {
++ /* 2 fcports with conflict Nport ID or
++ * an existing fcport is having nport ID
++ * conflict with new fcport.
++ */
++
++ ql_dbg(ql_dbg_disc, vha, 0xffff,
++ "%s %d %8phC DS %d\n",
++ __func__, __LINE__,
++ conflict->port_name,
++ conflict->disc_state);
++ conflict->scan_state = QLA_FCPORT_SCAN;
++ switch (conflict->disc_state) {
++ case DSC_DELETED:
++ case DSC_DELETE_PEND:
++ break;
++ default:
++ ql_dbg(ql_dbg_disc, vha, 0xffff,
++ "%s %d %8phC post del sess\n",
++ __func__, __LINE__,
++ conflict->port_name);
++ qlt_schedule_sess_for_deletion_lock
++ (conflict);
++ break;
++ }
++ }
++ }
+
+- qla24xx_post_newsess_work(vha, &ea->id, ea->port_name, NULL);
++ fcport->rscn_gen++;
++ fcport->scan_state = QLA_FCPORT_FOUND;
++ fcport->flags |= FCF_FABRIC_DEVICE;
++ switch (fcport->disc_state) {
++ case DSC_LOGIN_COMPLETE:
++ /* recheck session is still intact. */
++ ql_dbg(ql_dbg_disc, vha, 0x210d,
++ "%s %d %8phC revalidate session with ADISC\n",
++ __func__, __LINE__, fcport->port_name);
++ qla24xx_post_gpdb_work(vha, fcport,
++ PDO_FORCE_ADISC);
++ break;
++ case DSC_DELETED:
++ ql_dbg(ql_dbg_disc, vha, 0x210d,
++ "%s %d %8phC login\n", __func__, __LINE__,
++ fcport->port_name);
++ fcport->d_id = ea->id;
++ qla24xx_fcport_handle_login(vha, fcport);
++ break;
++ case DSC_DELETE_PEND:
++ fcport->d_id = ea->id;
++ break;
++ default:
++ fcport->d_id = ea->id;
++ break;
++ }
++ } else {
++ list_for_each_entry_safe(conflict, t, &vha->vp_fcports,
++ list) {
++ if (conflict->d_id.b24 == ea->id.b24) {
++ /* 2 fcports with conflict Nport ID or
++ * an existing fcport is having nport ID
++ * conflict with new fcport.
++ */
++ ql_dbg(ql_dbg_disc, vha, 0xffff,
++ "%s %d %8phC DS %d\n",
++ __func__, __LINE__,
++ conflict->port_name,
++ conflict->disc_state);
++
++ conflict->scan_state = QLA_FCPORT_SCAN;
++ switch (conflict->disc_state) {
++ case DSC_DELETED:
++ case DSC_DELETE_PEND:
++ break;
++ default:
++ ql_dbg(ql_dbg_disc, vha, 0xffff,
++ "%s %d %8phC post del sess\n",
++ __func__, __LINE__,
++ conflict->port_name);
++ qlt_schedule_sess_for_deletion_lock
++ (conflict);
++ break;
++ }
++ }
++ }
++
++ /* create new fcport */
++ ql_dbg(ql_dbg_disc, vha, 0x2065,
++ "%s %d %8phC post new sess\n",
++ __func__, __LINE__, ea->port_name);
++ qla24xx_post_newsess_work(vha, &ea->id,
++ ea->port_name, NULL);
++ }
+ }
+ }
+
+@@ -3205,11 +3314,18 @@ static void qla2x00_async_gpnid_sp_done(void *s, int res)
+ (struct ct_sns_rsp *)sp->u.iocb_cmd.u.ctarg.rsp;
+ struct event_arg ea;
+ struct qla_work_evt *e;
++ unsigned long flags;
+
+- ql_dbg(ql_dbg_disc, vha, 0x2066,
+- "Async done-%s res %x ID %3phC. %8phC\n",
+- sp->name, res, ct_req->req.port_id.port_id,
+- ct_rsp->rsp.gpn_id.port_name);
++ if (res)
++ ql_dbg(ql_dbg_disc, vha, 0x2066,
++ "Async done-%s fail res %x rscn gen %d ID %3phC. %8phC\n",
++ sp->name, res, sp->gen1, ct_req->req.port_id.port_id,
++ ct_rsp->rsp.gpn_id.port_name);
++ else
++ ql_dbg(ql_dbg_disc, vha, 0x2066,
++ "Async done-%s good rscn gen %d ID %3phC. %8phC\n",
++ sp->name, sp->gen1, ct_req->req.port_id.port_id,
++ ct_rsp->rsp.gpn_id.port_name);
+
+ memset(&ea, 0, sizeof(ea));
+ memcpy(ea.port_name, ct_rsp->rsp.gpn_id.port_name, WWN_SIZE);
+@@ -3220,6 +3336,23 @@ static void qla2x00_async_gpnid_sp_done(void *s, int res)
+ ea.rc = res;
+ ea.event = FCME_GPNID_DONE;
+
++ spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
++ list_del(&sp->elem);
++ spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
++
++ if (res) {
++ if (res == QLA_FUNCTION_TIMEOUT) {
++ qla24xx_post_gpnid_work(sp->vha, &ea.id);
++ sp->free(sp);
++ return;
++ }
++ } else if (sp->gen1) {
++ /* There was another RSCN for this Nport ID */
++ qla24xx_post_gpnid_work(sp->vha, &ea.id);
++ sp->free(sp);
++ return;
++ }
++
+ qla2x00_fcport_event_handler(vha, &ea);
+
+ e = qla2x00_alloc_work(vha, QLA_EVT_GPNID_DONE);
+@@ -3253,8 +3386,9 @@ int qla24xx_async_gpnid(scsi_qla_host_t *vha, port_id_t *id)
+ {
+ int rval = QLA_FUNCTION_FAILED;
+ struct ct_sns_req *ct_req;
+- srb_t *sp;
++ srb_t *sp, *tsp;
+ struct ct_sns_pkt *ct_sns;
++ unsigned long flags;
+
+ if (!vha->flags.online)
+ goto done;
+@@ -3265,8 +3399,22 @@ int qla24xx_async_gpnid(scsi_qla_host_t *vha, port_id_t *id)
+
+ sp->type = SRB_CT_PTHRU_CMD;
+ sp->name = "gpnid";
++ sp->u.iocb_cmd.u.ctarg.id = *id;
++ sp->gen1 = 0;
+ qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha) + 2);
+
++ spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
++ list_for_each_entry(tsp, &vha->gpnid_list, elem) {
++ if (tsp->u.iocb_cmd.u.ctarg.id.b24 == id->b24) {
++ tsp->gen1++;
++ spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
++ sp->free(sp);
++ goto done;
++ }
++ }
++ list_add_tail(&sp->elem, &vha->gpnid_list);
++ spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
++
+ sp->u.iocb_cmd.u.ctarg.req = dma_alloc_coherent(&vha->hw->pdev->dev,
+ sizeof(struct ct_sns_pkt), &sp->u.iocb_cmd.u.ctarg.req_dma,
+ GFP_KERNEL);
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 1bafa043f9f1..6082389f25c3 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -863,6 +863,7 @@ void qla24xx_handle_gpdb_event(scsi_qla_host_t *vha, struct event_arg *ea)
+ int rval = ea->rc;
+ fc_port_t *fcport = ea->fcport;
+ unsigned long flags;
++ u16 opt = ea->sp->u.iocb_cmd.u.mbx.out_mb[10];
+
+ fcport->flags &= ~FCF_ASYNC_SENT;
+
+@@ -893,7 +894,8 @@ void qla24xx_handle_gpdb_event(scsi_qla_host_t *vha, struct event_arg *ea)
+ }
+
+ spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
+- ea->fcport->login_gen++;
++ if (opt != PDO_FORCE_ADISC)
++ ea->fcport->login_gen++;
+ ea->fcport->deleted = 0;
+ ea->fcport->logout_on_delete = 1;
+
+@@ -917,6 +919,16 @@ void qla24xx_handle_gpdb_event(scsi_qla_host_t *vha, struct event_arg *ea)
+
+ qla24xx_post_gpsc_work(vha, fcport);
+ }
++ } else if (ea->fcport->login_succ) {
++ /*
++ * We have an existing session. A late RSCN delivery
++ * must have triggered the session to be re-validate.
++ * session is still valid.
++ */
++ ql_dbg(ql_dbg_disc, vha, 0x20d6,
++ "%s %d %8phC session revalidate success\n",
++ __func__, __LINE__, fcport->port_name);
++ fcport->disc_state = DSC_LOGIN_COMPLETE;
+ }
+ spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+ } /* gpdb event */
+@@ -963,7 +975,7 @@ int qla24xx_fcport_handle_login(struct scsi_qla_host *vha, fc_port_t *fcport)
+ ql_dbg(ql_dbg_disc, vha, 0x20bd,
+ "%s %d %8phC post gnl\n",
+ __func__, __LINE__, fcport->port_name);
+- qla24xx_async_gnl(vha, fcport);
++ qla24xx_post_gnl_work(vha, fcport);
+ } else {
+ ql_dbg(ql_dbg_disc, vha, 0x20bf,
+ "%s %d %8phC post login\n",
+@@ -1040,9 +1052,8 @@ void qla24xx_handle_rscn_event(fc_port_t *fcport, struct event_arg *ea)
+ switch (fcport->disc_state) {
+ case DSC_DELETED:
+ case DSC_LOGIN_COMPLETE:
+- qla24xx_post_gidpn_work(fcport->vha, fcport);
++ qla24xx_post_gpnid_work(fcport->vha, &ea->id);
+ break;
+-
+ default:
+ break;
+ }
+@@ -1132,7 +1143,7 @@ void qla24xx_handle_relogin_event(scsi_qla_host_t *vha,
+ ql_dbg(ql_dbg_disc, vha, 0x20e9, "%s %d %8phC post gidpn\n",
+ __func__, __LINE__, fcport->port_name);
+
+- qla24xx_async_gidpn(vha, fcport);
++ qla24xx_post_gidpn_work(vha, fcport);
+ return;
+ }
+
+@@ -1347,6 +1358,7 @@ qla24xx_abort_sp_done(void *ptr, int res)
+ srb_t *sp = ptr;
+ struct srb_iocb *abt = &sp->u.iocb_cmd;
+
++ del_timer(&sp->u.iocb_cmd.timer);
+ complete(&abt->u.abt.comp);
+ }
+
+@@ -1452,6 +1464,8 @@ static void
+ qla24xx_handle_plogi_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+ {
+ port_id_t cid; /* conflict Nport id */
++ u16 lid;
++ struct fc_port *conflict_fcport;
+
+ switch (ea->data[0]) {
+ case MBS_COMMAND_COMPLETE:
+@@ -1467,8 +1481,12 @@ qla24xx_handle_plogi_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+ qla24xx_post_prli_work(vha, ea->fcport);
+ } else {
+ ql_dbg(ql_dbg_disc, vha, 0x20ea,
+- "%s %d %8phC post gpdb\n",
+- __func__, __LINE__, ea->fcport->port_name);
++ "%s %d %8phC LoopID 0x%x in use with %06x. post gnl\n",
++ __func__, __LINE__, ea->fcport->port_name,
++ ea->fcport->loop_id, ea->fcport->d_id.b24);
++
++ set_bit(ea->fcport->loop_id, vha->hw->loop_id_map);
++ ea->fcport->loop_id = FC_NO_LOOP_ID;
+ ea->fcport->chip_reset = vha->hw->base_qpair->chip_reset;
+ ea->fcport->logout_on_delete = 1;
+ ea->fcport->send_els_logo = 0;
+@@ -1513,8 +1531,38 @@ qla24xx_handle_plogi_done_event(struct scsi_qla_host *vha, struct event_arg *ea)
+ ea->fcport->d_id.b.domain, ea->fcport->d_id.b.area,
+ ea->fcport->d_id.b.al_pa);
+
+- qla2x00_clear_loop_id(ea->fcport);
+- qla24xx_post_gidpn_work(vha, ea->fcport);
++ lid = ea->iop[1] & 0xffff;
++ qlt_find_sess_invalidate_other(vha,
++ wwn_to_u64(ea->fcport->port_name),
++ ea->fcport->d_id, lid, &conflict_fcport);
++
++ if (conflict_fcport) {
++ /*
++ * Another fcport share the same loop_id/nport id.
++ * Conflict fcport needs to finish cleanup before this
++ * fcport can proceed to login.
++ */
++ conflict_fcport->conflict = ea->fcport;
++ ea->fcport->login_pause = 1;
++
++ ql_dbg(ql_dbg_disc, vha, 0x20ed,
++ "%s %d %8phC NPortId %06x inuse with loopid 0x%x. post gidpn\n",
++ __func__, __LINE__, ea->fcport->port_name,
++ ea->fcport->d_id.b24, lid);
++ qla2x00_clear_loop_id(ea->fcport);
++ qla24xx_post_gidpn_work(vha, ea->fcport);
++ } else {
++ ql_dbg(ql_dbg_disc, vha, 0x20ed,
++ "%s %d %8phC NPortId %06x inuse with loopid 0x%x. sched delete\n",
++ __func__, __LINE__, ea->fcport->port_name,
++ ea->fcport->d_id.b24, lid);
++
++ qla2x00_clear_loop_id(ea->fcport);
++ set_bit(lid, vha->hw->loop_id_map);
++ ea->fcport->loop_id = lid;
++ ea->fcport->keep_nport_handle = 0;
++ qlt_schedule_sess_for_deletion(ea->fcport, false);
++ }
+ break;
+ }
+ return;
+@@ -8173,9 +8221,6 @@ int qla2xxx_delete_qpair(struct scsi_qla_host *vha, struct qla_qpair *qpair)
+ int ret = QLA_FUNCTION_FAILED;
+ struct qla_hw_data *ha = qpair->hw;
+
+- if (!vha->flags.qpairs_req_created && !vha->flags.qpairs_rsp_created)
+- goto fail;
+-
+ qpair->delete_in_progress = 1;
+ while (atomic_read(&qpair->ref_count))
+ msleep(500);
+@@ -8183,6 +8228,7 @@ int qla2xxx_delete_qpair(struct scsi_qla_host *vha, struct qla_qpair *qpair)
+ ret = qla25xx_delete_req_que(vha, qpair->req);
+ if (ret != QLA_SUCCESS)
+ goto fail;
++
+ ret = qla25xx_delete_rsp_que(vha, qpair->rsp);
+ if (ret != QLA_SUCCESS)
+ goto fail;
+diff --git a/drivers/scsi/qla2xxx/qla_iocb.c b/drivers/scsi/qla2xxx/qla_iocb.c
+index d810a447cb4a..8ea59586f4f1 100644
+--- a/drivers/scsi/qla2xxx/qla_iocb.c
++++ b/drivers/scsi/qla2xxx/qla_iocb.c
+@@ -2392,26 +2392,13 @@ qla2x00_els_dcmd_iocb_timeout(void *data)
+ srb_t *sp = data;
+ fc_port_t *fcport = sp->fcport;
+ struct scsi_qla_host *vha = sp->vha;
+- struct qla_hw_data *ha = vha->hw;
+ struct srb_iocb *lio = &sp->u.iocb_cmd;
+- unsigned long flags = 0;
+
+ ql_dbg(ql_dbg_io, vha, 0x3069,
+ "%s Timeout, hdl=%x, portid=%02x%02x%02x\n",
+ sp->name, sp->handle, fcport->d_id.b.domain, fcport->d_id.b.area,
+ fcport->d_id.b.al_pa);
+
+- /* Abort the exchange */
+- spin_lock_irqsave(&ha->hardware_lock, flags);
+- if (ha->isp_ops->abort_command(sp)) {
+- ql_dbg(ql_dbg_io, vha, 0x3070,
+- "mbx abort_command failed.\n");
+- } else {
+- ql_dbg(ql_dbg_io, vha, 0x3071,
+- "mbx abort_command success.\n");
+- }
+- spin_unlock_irqrestore(&ha->hardware_lock, flags);
+-
+ complete(&lio->u.els_logo.comp);
+ }
+
+diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
+index 2fd79129bb2a..85382387a52b 100644
+--- a/drivers/scsi/qla2xxx/qla_isr.c
++++ b/drivers/scsi/qla2xxx/qla_isr.c
+@@ -1574,7 +1574,7 @@ qla24xx_els_ct_entry(scsi_qla_host_t *vha, struct req_que *req,
+ /* borrowing sts_entry_24xx.comp_status.
+ same location as ct_entry_24xx.comp_status
+ */
+- res = qla2x00_chk_ms_status(vha, (ms_iocb_entry_t *)pkt,
++ res = qla2x00_chk_ms_status(sp->vha, (ms_iocb_entry_t *)pkt,
+ (struct ct_sns_rsp *)sp->u.iocb_cmd.u.ctarg.rsp,
+ sp->name);
+ sp->done(sp, res);
+@@ -2369,7 +2369,6 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
+ int res = 0;
+ uint16_t state_flags = 0;
+ uint16_t retry_delay = 0;
+- uint8_t no_logout = 0;
+
+ sts = (sts_entry_t *) pkt;
+ sts24 = (struct sts_entry_24xx *) pkt;
+@@ -2640,7 +2639,6 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
+ break;
+
+ case CS_PORT_LOGGED_OUT:
+- no_logout = 1;
+ case CS_PORT_CONFIG_CHG:
+ case CS_PORT_BUSY:
+ case CS_INCOMPLETE:
+@@ -2671,9 +2669,6 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
+ port_state_str[atomic_read(&fcport->state)],
+ comp_status);
+
+- if (no_logout)
+- fcport->logout_on_delete = 0;
+-
+ qla2x00_mark_device_lost(fcport->vha, fcport, 1, 1);
+ qlt_schedule_sess_for_deletion_lock(fcport);
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index cb717d47339f..e2b5fa47bb57 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -6160,8 +6160,7 @@ int __qla24xx_parse_gpdb(struct scsi_qla_host *vha, fc_port_t *fcport,
+ }
+
+ /* Check for logged in state. */
+- if (current_login_state != PDS_PRLI_COMPLETE &&
+- last_login_state != PDS_PRLI_COMPLETE) {
++ if (current_login_state != PDS_PRLI_COMPLETE) {
+ ql_dbg(ql_dbg_mbx, vha, 0x119a,
+ "Unable to verify login-state (%x/%x) for loop_id %x.\n",
+ current_login_state, last_login_state, fcport->loop_id);
+diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
+index bd9f14bf7ac2..e538e6308885 100644
+--- a/drivers/scsi/qla2xxx/qla_mid.c
++++ b/drivers/scsi/qla2xxx/qla_mid.c
+@@ -343,15 +343,21 @@ qla2x00_do_dpc_vp(scsi_qla_host_t *vha)
+ "FCPort update end.\n");
+ }
+
+- if ((test_and_clear_bit(RELOGIN_NEEDED, &vha->dpc_flags)) &&
+- !test_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags) &&
+- atomic_read(&vha->loop_state) != LOOP_DOWN) {
+-
+- ql_dbg(ql_dbg_dpc, vha, 0x4018,
+- "Relogin needed scheduled.\n");
+- qla2x00_relogin(vha);
+- ql_dbg(ql_dbg_dpc, vha, 0x4019,
+- "Relogin needed end.\n");
++ if (test_bit(RELOGIN_NEEDED, &vha->dpc_flags) &&
++ !test_bit(LOOP_RESYNC_NEEDED, &vha->dpc_flags) &&
++ atomic_read(&vha->loop_state) != LOOP_DOWN) {
++
++ if (!vha->relogin_jif ||
++ time_after_eq(jiffies, vha->relogin_jif)) {
++ vha->relogin_jif = jiffies + HZ;
++ clear_bit(RELOGIN_NEEDED, &vha->dpc_flags);
++
++ ql_dbg(ql_dbg_dpc, vha, 0x4018,
++ "Relogin needed scheduled.\n");
++ qla2x00_relogin(vha);
++ ql_dbg(ql_dbg_dpc, vha, 0x4019,
++ "Relogin needed end.\n");
++ }
+ }
+
+ if (test_and_clear_bit(RESET_MARKER_NEEDED, &vha->dpc_flags) &&
+@@ -569,14 +575,15 @@ qla25xx_free_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp)
+ int
+ qla25xx_delete_req_que(struct scsi_qla_host *vha, struct req_que *req)
+ {
+- int ret = -1;
++ int ret = QLA_SUCCESS;
+
+- if (req) {
++ if (req && vha->flags.qpairs_req_created) {
+ req->options |= BIT_0;
+ ret = qla25xx_init_req_que(vha, req);
++ if (ret != QLA_SUCCESS)
++ return QLA_FUNCTION_FAILED;
+ }
+- if (ret == QLA_SUCCESS)
+- qla25xx_free_req_que(vha, req);
++ qla25xx_free_req_que(vha, req);
+
+ return ret;
+ }
+@@ -584,14 +591,15 @@ qla25xx_delete_req_que(struct scsi_qla_host *vha, struct req_que *req)
+ int
+ qla25xx_delete_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp)
+ {
+- int ret = -1;
++ int ret = QLA_SUCCESS;
+
+- if (rsp) {
++ if (rsp && vha->flags.qpairs_rsp_created) {
+ rsp->options |= BIT_0;
+ ret = qla25xx_init_rsp_que(vha, rsp);
++ if (ret != QLA_SUCCESS)
++ return QLA_FUNCTION_FAILED;
+ }
+- if (ret == QLA_SUCCESS)
+- qla25xx_free_rsp_que(vha, rsp);
++ qla25xx_free_rsp_que(vha, rsp);
+
+ return ret;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 46f2d0cf7c0d..1f69e89b950f 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -3011,9 +3011,6 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ base_vha = qla2x00_create_host(sht, ha);
+ if (!base_vha) {
+ ret = -ENOMEM;
+- qla2x00_mem_free(ha);
+- qla2x00_free_req_que(ha, req);
+- qla2x00_free_rsp_que(ha, rsp);
+ goto probe_hw_failed;
+ }
+
+@@ -3074,7 +3071,7 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ /* Set up the irqs */
+ ret = qla2x00_request_irqs(ha, rsp);
+ if (ret)
+- goto probe_init_failed;
++ goto probe_hw_failed;
+
+ /* Alloc arrays of request and response ring ptrs */
+ if (!qla2x00_alloc_queues(ha, req, rsp)) {
+@@ -3193,10 +3190,11 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ host->can_queue, base_vha->req,
+ base_vha->mgmt_svr_loop_id, host->sg_tablesize);
+
++ ha->wq = alloc_workqueue("qla2xxx_wq", WQ_MEM_RECLAIM, 0);
++
+ if (ha->mqenable) {
+ bool mq = false;
+ bool startit = false;
+- ha->wq = alloc_workqueue("qla2xxx_wq", WQ_MEM_RECLAIM, 0);
+
+ if (QLA_TGT_MODE_ENABLED()) {
+ mq = true;
+@@ -3390,6 +3388,9 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ scsi_host_put(base_vha->host);
+
+ probe_hw_failed:
++ qla2x00_mem_free(ha);
++ qla2x00_free_req_que(ha, req);
++ qla2x00_free_rsp_que(ha, rsp);
+ qla2x00_clear_drv_active(ha);
+
+ iospace_config_failed:
+@@ -4514,6 +4515,7 @@ struct scsi_qla_host *qla2x00_create_host(struct scsi_host_template *sht,
+ INIT_LIST_HEAD(&vha->qp_list);
+ INIT_LIST_HEAD(&vha->gnl.fcports);
+ INIT_LIST_HEAD(&vha->nvme_rport_list);
++ INIT_LIST_HEAD(&vha->gpnid_list);
+
+ spin_lock_init(&vha->work_lock);
+ spin_lock_init(&vha->cmd_list_lock);
+@@ -4748,20 +4750,49 @@ void qla24xx_create_new_sess(struct scsi_qla_host *vha, struct qla_work_evt *e)
+ } else {
+ list_add_tail(&fcport->list, &vha->vp_fcports);
+
+- if (pla) {
+- qlt_plogi_ack_link(vha, pla, fcport,
+- QLT_PLOGI_LINK_SAME_WWN);
+- pla->ref_count--;
+- }
++ }
++ if (pla) {
++ qlt_plogi_ack_link(vha, pla, fcport,
++ QLT_PLOGI_LINK_SAME_WWN);
++ pla->ref_count--;
+ }
+ }
+ spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
+
+ if (fcport) {
+- if (pla)
++ if (pla) {
+ qlt_plogi_ack_unref(vha, pla);
+- else
+- qla24xx_async_gffid(vha, fcport);
++ } else {
++ spin_lock_irqsave(&vha->hw->tgt.sess_lock, flags);
++ tfcp = qla2x00_find_fcport_by_nportid(vha,
++ &e->u.new_sess.id, 1);
++ if (tfcp && (tfcp != fcport)) {
++ /*
++ * We have a conflict fcport with same NportID.
++ */
++ ql_dbg(ql_dbg_disc, vha, 0xffff,
++ "%s %8phC found conflict b4 add. DS %d LS %d\n",
++ __func__, tfcp->port_name, tfcp->disc_state,
++ tfcp->fw_login_state);
++
++ switch (tfcp->disc_state) {
++ case DSC_DELETED:
++ break;
++ case DSC_DELETE_PEND:
++ fcport->login_pause = 1;
++ tfcp->conflict = fcport;
++ break;
++ default:
++ fcport->login_pause = 1;
++ tfcp->conflict = fcport;
++ qlt_schedule_sess_for_deletion_lock
++ (tfcp);
++ break;
++ }
++ }
++ spin_unlock_irqrestore(&vha->hw->tgt.sess_lock, flags);
++ qla24xx_async_gnl(vha, fcport);
++ }
+ }
+
+ if (free_fcport) {
+@@ -4874,7 +4905,7 @@ void qla2x00_relogin(struct scsi_qla_host *vha)
+ */
+ if (atomic_read(&fcport->state) != FCS_ONLINE &&
+ fcport->login_retry && !(fcport->flags & FCF_ASYNC_SENT)) {
+- fcport->login_retry--;
++
+ if (fcport->flags & FCF_FABRIC_DEVICE) {
+ ql_dbg(ql_dbg_disc, fcport->vha, 0x2108,
+ "%s %8phC DS %d LS %d\n", __func__,
+@@ -4885,6 +4916,7 @@ void qla2x00_relogin(struct scsi_qla_host *vha)
+ ea.fcport = fcport;
+ qla2x00_fcport_event_handler(vha, &ea);
+ } else {
++ fcport->login_retry--;
+ status = qla2x00_local_device_login(vha,
+ fcport);
+ if (status == QLA_SUCCESS) {
+@@ -5867,16 +5899,21 @@ qla2x00_do_dpc(void *data)
+ }
+
+ /* Retry each device up to login retry count */
+- if ((test_and_clear_bit(RELOGIN_NEEDED,
+- &base_vha->dpc_flags)) &&
++ if (test_bit(RELOGIN_NEEDED, &base_vha->dpc_flags) &&
+ !test_bit(LOOP_RESYNC_NEEDED, &base_vha->dpc_flags) &&
+ atomic_read(&base_vha->loop_state) != LOOP_DOWN) {
+
+- ql_dbg(ql_dbg_dpc, base_vha, 0x400d,
+- "Relogin scheduled.\n");
+- qla2x00_relogin(base_vha);
+- ql_dbg(ql_dbg_dpc, base_vha, 0x400e,
+- "Relogin end.\n");
++ if (!base_vha->relogin_jif ||
++ time_after_eq(jiffies, base_vha->relogin_jif)) {
++ base_vha->relogin_jif = jiffies + HZ;
++ clear_bit(RELOGIN_NEEDED, &base_vha->dpc_flags);
++
++ ql_dbg(ql_dbg_dpc, base_vha, 0x400d,
++ "Relogin scheduled.\n");
++ qla2x00_relogin(base_vha);
++ ql_dbg(ql_dbg_dpc, base_vha, 0x400e,
++ "Relogin end.\n");
++ }
+ }
+ loop_resync_check:
+ if (test_and_clear_bit(LOOP_RESYNC_NEEDED,
+@@ -6608,9 +6645,14 @@ qla83xx_disable_laser(scsi_qla_host_t *vha)
+
+ static int qla2xxx_map_queues(struct Scsi_Host *shost)
+ {
++ int rc;
+ scsi_qla_host_t *vha = (scsi_qla_host_t *)shost->hostdata;
+
+- return blk_mq_pci_map_queues(&shost->tag_set, vha->hw->pdev);
++ if (USER_CTRL_IRQ(vha->hw))
++ rc = blk_mq_map_queues(&shost->tag_set);
++ else
++ rc = blk_mq_pci_map_queues(&shost->tag_set, vha->hw->pdev);
++ return rc;
+ }
+
+ static const struct pci_error_handlers qla2xxx_err_handler = {
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index 18069edd4773..cb35bb1ae305 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -665,7 +665,7 @@ int qla24xx_async_notify_ack(scsi_qla_host_t *vha, fc_port_t *fcport,
+ qla2x00_init_timer(sp, qla2x00_get_async_timeout(vha)+2);
+
+ sp->u.iocb_cmd.u.nack.ntfy = ntfy;
+-
++ sp->u.iocb_cmd.timeout = qla2x00_async_iocb_timeout;
+ sp->done = qla2x00_async_nack_sp_done;
+
+ rval = qla2x00_start_sp(sp);
+@@ -890,6 +890,17 @@ qlt_plogi_ack_link(struct scsi_qla_host *vha, struct qlt_plogi_ack_t *pla,
+ iocb->u.isp24.port_id[1], iocb->u.isp24.port_id[0],
+ pla->ref_count, pla, link);
+
++ if (link == QLT_PLOGI_LINK_CONFLICT) {
++ switch (sess->disc_state) {
++ case DSC_DELETED:
++ case DSC_DELETE_PEND:
++ pla->ref_count--;
++ return;
++ default:
++ break;
++ }
++ }
++
+ if (sess->plogi_link[link])
+ qlt_plogi_ack_unref(vha, sess->plogi_link[link]);
+
+@@ -974,7 +985,7 @@ static void qlt_free_session_done(struct work_struct *work)
+ qlt_send_first_logo(vha, &logo);
+ }
+
+- if (sess->logout_on_delete) {
++ if (sess->logout_on_delete && sess->loop_id != FC_NO_LOOP_ID) {
+ int rc;
+
+ rc = qla2x00_post_async_logout_work(vha, sess, NULL);
+@@ -1033,8 +1044,7 @@ static void qlt_free_session_done(struct work_struct *work)
+ sess->login_succ = 0;
+ }
+
+- if (sess->chip_reset != ha->base_qpair->chip_reset)
+- qla2x00_clear_loop_id(sess);
++ qla2x00_clear_loop_id(sess);
+
+ if (sess->conflict) {
+ sess->conflict->login_pause = 0;
+@@ -1205,7 +1215,8 @@ void qlt_schedule_sess_for_deletion(struct fc_port *sess,
+ ql_dbg(ql_dbg_tgt, sess->vha, 0xe001,
+ "Scheduling sess %p for deletion\n", sess);
+
+- schedule_work(&sess->del_work);
++ INIT_WORK(&sess->del_work, qla24xx_delete_sess_fn);
++ queue_work(sess->vha->hw->wq, &sess->del_work);
+ }
+
+ void qlt_schedule_sess_for_deletion_lock(struct fc_port *sess)
+@@ -1560,8 +1571,11 @@ static void qlt_release(struct qla_tgt *tgt)
+
+ btree_destroy64(&tgt->lun_qpair_map);
+
+- if (ha->tgt.tgt_ops && ha->tgt.tgt_ops->remove_target)
+- ha->tgt.tgt_ops->remove_target(vha);
++ if (vha->vp_idx)
++ if (ha->tgt.tgt_ops &&
++ ha->tgt.tgt_ops->remove_target &&
++ vha->vha_tgt.target_lport_ptr)
++ ha->tgt.tgt_ops->remove_target(vha);
+
+ vha->vha_tgt.qla_tgt = NULL;
+
+@@ -3708,7 +3722,7 @@ static int qlt_term_ctio_exchange(struct qla_qpair *qpair, void *ctio,
+ term = 1;
+
+ if (term)
+- qlt_term_ctio_exchange(qpair, ctio, cmd, status);
++ qlt_send_term_exchange(qpair, cmd, &cmd->atio, 1, 0);
+
+ return term;
+ }
+@@ -4584,9 +4598,9 @@ qlt_find_sess_invalidate_other(scsi_qla_host_t *vha, uint64_t wwn,
+ "Invalidating sess %p loop_id %d wwn %llx.\n",
+ other_sess, other_sess->loop_id, other_wwn);
+
+-
+ other_sess->keep_nport_handle = 1;
+- *conflict_sess = other_sess;
++ if (other_sess->disc_state != DSC_DELETED)
++ *conflict_sess = other_sess;
+ qlt_schedule_sess_for_deletion(other_sess,
+ true);
+ }
+@@ -4733,6 +4747,10 @@ static int qlt_24xx_handle_els(struct scsi_qla_host *vha,
+ sess->d_id = port_id;
+ sess->login_gen++;
+
++ ql_dbg(ql_dbg_disc, vha, 0x20f9,
++ "%s %d %8phC DS %d\n",
++ __func__, __LINE__, sess->port_name, sess->disc_state);
++
+ switch (sess->disc_state) {
+ case DSC_DELETED:
+ qlt_plogi_ack_unref(vha, pla);
+@@ -4782,12 +4800,20 @@ static int qlt_24xx_handle_els(struct scsi_qla_host *vha,
+ }
+
+ if (conflict_sess) {
+- ql_dbg(ql_dbg_tgt_mgt, vha, 0xf09b,
+- "PRLI with conflicting sess %p port %8phC\n",
+- conflict_sess, conflict_sess->port_name);
+- qlt_send_term_imm_notif(vha, iocb, 1);
+- res = 0;
+- break;
++ switch (conflict_sess->disc_state) {
++ case DSC_DELETED:
++ case DSC_DELETE_PEND:
++ break;
++ default:
++ ql_dbg(ql_dbg_tgt_mgt, vha, 0xf09b,
++ "PRLI with conflicting sess %p port %8phC\n",
++ conflict_sess, conflict_sess->port_name);
++ conflict_sess->fw_login_state =
++ DSC_LS_PORT_UNAVAIL;
++ qlt_send_term_imm_notif(vha, iocb, 1);
++ res = 0;
++ break;
++ }
+ }
+
+ if (sess != NULL) {
+@@ -5755,7 +5781,7 @@ static fc_port_t *qlt_get_port_database(struct scsi_qla_host *vha,
+ unsigned long flags;
+ u8 newfcport = 0;
+
+- fcport = kzalloc(sizeof(*fcport), GFP_KERNEL);
++ fcport = qla2x00_alloc_fcport(vha, GFP_KERNEL);
+ if (!fcport) {
+ ql_dbg(ql_dbg_tgt_mgt, vha, 0xf06f,
+ "qla_target(%d): Allocation of tmp FC port failed",
+@@ -5784,6 +5810,7 @@ static fc_port_t *qlt_get_port_database(struct scsi_qla_host *vha,
+ tfcp->port_type = fcport->port_type;
+ tfcp->supported_classes = fcport->supported_classes;
+ tfcp->flags |= fcport->flags;
++ tfcp->scan_state = QLA_FCPORT_FOUND;
+
+ del = fcport;
+ fcport = tfcp;
+diff --git a/drivers/scsi/scsi_error.c b/drivers/scsi/scsi_error.c
+index 3737c6d3b064..61628581c6a2 100644
+--- a/drivers/scsi/scsi_error.c
++++ b/drivers/scsi/scsi_error.c
+@@ -222,7 +222,8 @@ static void scsi_eh_reset(struct scsi_cmnd *scmd)
+
+ static void scsi_eh_inc_host_failed(struct rcu_head *head)
+ {
+- struct Scsi_Host *shost = container_of(head, typeof(*shost), rcu);
++ struct scsi_cmnd *scmd = container_of(head, typeof(*scmd), rcu);
++ struct Scsi_Host *shost = scmd->device->host;
+ unsigned long flags;
+
+ spin_lock_irqsave(shost->host_lock, flags);
+@@ -258,7 +259,7 @@ void scsi_eh_scmd_add(struct scsi_cmnd *scmd)
+ * Ensure that all tasks observe the host state change before the
+ * host_failed change.
+ */
+- call_rcu(&shost->rcu, scsi_eh_inc_host_failed);
++ call_rcu(&scmd->rcu, scsi_eh_inc_host_failed);
+ }
+
+ /**
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index 83856ee14851..8f9a2e50d742 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -670,6 +670,7 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
+ if (!blk_rq_is_scsi(req)) {
+ WARN_ON_ONCE(!(cmd->flags & SCMD_INITIALIZED));
+ cmd->flags &= ~SCMD_INITIALIZED;
++ destroy_rcu_head(&cmd->rcu);
+ }
+
+ if (req->mq_ctx) {
+@@ -1150,6 +1151,7 @@ void scsi_initialize_rq(struct request *rq)
+ struct scsi_cmnd *cmd = blk_mq_rq_to_pdu(rq);
+
+ scsi_req_init(&cmd->req);
++ init_rcu_head(&cmd->rcu);
+ cmd->jiffies_at_alloc = jiffies;
+ cmd->retries = 0;
+ }
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index eb30f3e09a47..71458f493cf8 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -428,8 +428,6 @@ static inline int virtqueue_add(struct virtqueue *_vq,
+ i = virtio16_to_cpu(_vq->vdev, vq->vring.desc[i].next);
+ }
+
+- vq->vq.num_free += total_sg;
+-
+ if (indirect)
+ kfree(desc);
+
+diff --git a/drivers/watchdog/hpwdt.c b/drivers/watchdog/hpwdt.c
+index 67fbe35ce7cf..b0a158073abd 100644
+--- a/drivers/watchdog/hpwdt.c
++++ b/drivers/watchdog/hpwdt.c
+@@ -28,16 +28,7 @@
+ #include <linux/types.h>
+ #include <linux/uaccess.h>
+ #include <linux/watchdog.h>
+-#ifdef CONFIG_HPWDT_NMI_DECODING
+-#include <linux/dmi.h>
+-#include <linux/spinlock.h>
+-#include <linux/nmi.h>
+-#include <linux/kdebug.h>
+-#include <linux/notifier.h>
+-#include <asm/set_memory.h>
+-#endif /* CONFIG_HPWDT_NMI_DECODING */
+ #include <asm/nmi.h>
+-#include <asm/frame.h>
+
+ #define HPWDT_VERSION "1.4.0"
+ #define SECS_TO_TICKS(secs) ((secs) * 1000 / 128)
+@@ -48,10 +39,14 @@
+ static unsigned int soft_margin = DEFAULT_MARGIN; /* in seconds */
+ static unsigned int reload; /* the computed soft_margin */
+ static bool nowayout = WATCHDOG_NOWAYOUT;
++#ifdef CONFIG_HPWDT_NMI_DECODING
++static unsigned int allow_kdump = 1;
++#endif
+ static char expect_release;
+ static unsigned long hpwdt_is_open;
+
+ static void __iomem *pci_mem_addr; /* the PCI-memory address */
++static unsigned long __iomem *hpwdt_nmistat;
+ static unsigned long __iomem *hpwdt_timer_reg;
+ static unsigned long __iomem *hpwdt_timer_con;
+
+@@ -62,373 +57,6 @@ static const struct pci_device_id hpwdt_devices[] = {
+ };
+ MODULE_DEVICE_TABLE(pci, hpwdt_devices);
+
+-#ifdef CONFIG_HPWDT_NMI_DECODING
+-#define PCI_BIOS32_SD_VALUE 0x5F32335F /* "_32_" */
+-#define CRU_BIOS_SIGNATURE_VALUE 0x55524324
+-#define PCI_BIOS32_PARAGRAPH_LEN 16
+-#define PCI_ROM_BASE1 0x000F0000
+-#define ROM_SIZE 0x10000
+-
+-struct bios32_service_dir {
+- u32 signature;
+- u32 entry_point;
+- u8 revision;
+- u8 length;
+- u8 checksum;
+- u8 reserved[5];
+-};
+-
+-/* type 212 */
+-struct smbios_cru64_info {
+- u8 type;
+- u8 byte_length;
+- u16 handle;
+- u32 signature;
+- u64 physical_address;
+- u32 double_length;
+- u32 double_offset;
+-};
+-#define SMBIOS_CRU64_INFORMATION 212
+-
+-/* type 219 */
+-struct smbios_proliant_info {
+- u8 type;
+- u8 byte_length;
+- u16 handle;
+- u32 power_features;
+- u32 omega_features;
+- u32 reserved;
+- u32 misc_features;
+-};
+-#define SMBIOS_ICRU_INFORMATION 219
+-
+-
+-struct cmn_registers {
+- union {
+- struct {
+- u8 ral;
+- u8 rah;
+- u16 rea2;
+- };
+- u32 reax;
+- } u1;
+- union {
+- struct {
+- u8 rbl;
+- u8 rbh;
+- u8 reb2l;
+- u8 reb2h;
+- };
+- u32 rebx;
+- } u2;
+- union {
+- struct {
+- u8 rcl;
+- u8 rch;
+- u16 rec2;
+- };
+- u32 recx;
+- } u3;
+- union {
+- struct {
+- u8 rdl;
+- u8 rdh;
+- u16 red2;
+- };
+- u32 redx;
+- } u4;
+-
+- u32 resi;
+- u32 redi;
+- u16 rds;
+- u16 res;
+- u32 reflags;
+-} __attribute__((packed));
+-
+-static unsigned int hpwdt_nmi_decoding;
+-static unsigned int allow_kdump = 1;
+-static unsigned int is_icru;
+-static unsigned int is_uefi;
+-static DEFINE_SPINLOCK(rom_lock);
+-static void *cru_rom_addr;
+-static struct cmn_registers cmn_regs;
+-
+-extern asmlinkage void asminline_call(struct cmn_registers *pi86Regs,
+- unsigned long *pRomEntry);
+-
+-#ifdef CONFIG_X86_32
+-/* --32 Bit Bios------------------------------------------------------------ */
+-
+-#define HPWDT_ARCH 32
+-
+-asm(".text \n\t"
+- ".align 4 \n\t"
+- ".globl asminline_call \n"
+- "asminline_call: \n\t"
+- "pushl %ebp \n\t"
+- "movl %esp, %ebp \n\t"
+- "pusha \n\t"
+- "pushf \n\t"
+- "push %es \n\t"
+- "push %ds \n\t"
+- "pop %es \n\t"
+- "movl 8(%ebp),%eax \n\t"
+- "movl 4(%eax),%ebx \n\t"
+- "movl 8(%eax),%ecx \n\t"
+- "movl 12(%eax),%edx \n\t"
+- "movl 16(%eax),%esi \n\t"
+- "movl 20(%eax),%edi \n\t"
+- "movl (%eax),%eax \n\t"
+- "push %cs \n\t"
+- "call *12(%ebp) \n\t"
+- "pushf \n\t"
+- "pushl %eax \n\t"
+- "movl 8(%ebp),%eax \n\t"
+- "movl %ebx,4(%eax) \n\t"
+- "movl %ecx,8(%eax) \n\t"
+- "movl %edx,12(%eax) \n\t"
+- "movl %esi,16(%eax) \n\t"
+- "movl %edi,20(%eax) \n\t"
+- "movw %ds,24(%eax) \n\t"
+- "movw %es,26(%eax) \n\t"
+- "popl %ebx \n\t"
+- "movl %ebx,(%eax) \n\t"
+- "popl %ebx \n\t"
+- "movl %ebx,28(%eax) \n\t"
+- "pop %es \n\t"
+- "popf \n\t"
+- "popa \n\t"
+- "leave \n\t"
+- "ret \n\t"
+- ".previous");
+-
+-
+-/*
+- * cru_detect
+- *
+- * Routine Description:
+- * This function uses the 32-bit BIOS Service Directory record to
+- * search for a $CRU record.
+- *
+- * Return Value:
+- * 0 : SUCCESS
+- * <0 : FAILURE
+- */
+-static int cru_detect(unsigned long map_entry,
+- unsigned long map_offset)
+-{
+- void *bios32_map;
+- unsigned long *bios32_entrypoint;
+- unsigned long cru_physical_address;
+- unsigned long cru_length;
+- unsigned long physical_bios_base = 0;
+- unsigned long physical_bios_offset = 0;
+- int retval = -ENODEV;
+-
+- bios32_map = ioremap(map_entry, (2 * PAGE_SIZE));
+-
+- if (bios32_map == NULL)
+- return -ENODEV;
+-
+- bios32_entrypoint = bios32_map + map_offset;
+-
+- cmn_regs.u1.reax = CRU_BIOS_SIGNATURE_VALUE;
+-
+- set_memory_x((unsigned long)bios32_map, 2);
+- asminline_call(&cmn_regs, bios32_entrypoint);
+-
+- if (cmn_regs.u1.ral != 0) {
+- pr_warn("Call succeeded but with an error: 0x%x\n",
+- cmn_regs.u1.ral);
+- } else {
+- physical_bios_base = cmn_regs.u2.rebx;
+- physical_bios_offset = cmn_regs.u4.redx;
+- cru_length = cmn_regs.u3.recx;
+- cru_physical_address =
+- physical_bios_base + physical_bios_offset;
+-
+- /* If the values look OK, then map it in. */
+- if ((physical_bios_base + physical_bios_offset)) {
+- cru_rom_addr =
+- ioremap(cru_physical_address, cru_length);
+- if (cru_rom_addr) {
+- set_memory_x((unsigned long)cru_rom_addr & PAGE_MASK,
+- (cru_length + PAGE_SIZE - 1) >> PAGE_SHIFT);
+- retval = 0;
+- }
+- }
+-
+- pr_debug("CRU Base Address: 0x%lx\n", physical_bios_base);
+- pr_debug("CRU Offset Address: 0x%lx\n", physical_bios_offset);
+- pr_debug("CRU Length: 0x%lx\n", cru_length);
+- pr_debug("CRU Mapped Address: %p\n", &cru_rom_addr);
+- }
+- iounmap(bios32_map);
+- return retval;
+-}
+-
+-/*
+- * bios_checksum
+- */
+-static int bios_checksum(const char __iomem *ptr, int len)
+-{
+- char sum = 0;
+- int i;
+-
+- /*
+- * calculate checksum of size bytes. This should add up
+- * to zero if we have a valid header.
+- */
+- for (i = 0; i < len; i++)
+- sum += ptr[i];
+-
+- return ((sum == 0) && (len > 0));
+-}
+-
+-/*
+- * bios32_present
+- *
+- * Routine Description:
+- * This function finds the 32-bit BIOS Service Directory
+- *
+- * Return Value:
+- * 0 : SUCCESS
+- * <0 : FAILURE
+- */
+-static int bios32_present(const char __iomem *p)
+-{
+- struct bios32_service_dir *bios_32_ptr;
+- int length;
+- unsigned long map_entry, map_offset;
+-
+- bios_32_ptr = (struct bios32_service_dir *) p;
+-
+- /*
+- * Search for signature by checking equal to the swizzled value
+- * instead of calling another routine to perform a strcmp.
+- */
+- if (bios_32_ptr->signature == PCI_BIOS32_SD_VALUE) {
+- length = bios_32_ptr->length * PCI_BIOS32_PARAGRAPH_LEN;
+- if (bios_checksum(p, length)) {
+- /*
+- * According to the spec, we're looking for the
+- * first 4KB-aligned address below the entrypoint
+- * listed in the header. The Service Directory code
+- * is guaranteed to occupy no more than 2 4KB pages.
+- */
+- map_entry = bios_32_ptr->entry_point & ~(PAGE_SIZE - 1);
+- map_offset = bios_32_ptr->entry_point - map_entry;
+-
+- return cru_detect(map_entry, map_offset);
+- }
+- }
+- return -ENODEV;
+-}
+-
+-static int detect_cru_service(void)
+-{
+- char __iomem *p, *q;
+- int rc = -1;
+-
+- /*
+- * Search from 0x0f0000 through 0x0fffff, inclusive.
+- */
+- p = ioremap(PCI_ROM_BASE1, ROM_SIZE);
+- if (p == NULL)
+- return -ENOMEM;
+-
+- for (q = p; q < p + ROM_SIZE; q += 16) {
+- rc = bios32_present(q);
+- if (!rc)
+- break;
+- }
+- iounmap(p);
+- return rc;
+-}
+-/* ------------------------------------------------------------------------- */
+-#endif /* CONFIG_X86_32 */
+-#ifdef CONFIG_X86_64
+-/* --64 Bit Bios------------------------------------------------------------ */
+-
+-#define HPWDT_ARCH 64
+-
+-asm(".text \n\t"
+- ".align 4 \n\t"
+- ".globl asminline_call \n\t"
+- ".type asminline_call, @function \n\t"
+- "asminline_call: \n\t"
+- FRAME_BEGIN
+- "pushq %rax \n\t"
+- "pushq %rbx \n\t"
+- "pushq %rdx \n\t"
+- "pushq %r12 \n\t"
+- "pushq %r9 \n\t"
+- "movq %rsi, %r12 \n\t"
+- "movq %rdi, %r9 \n\t"
+- "movl 4(%r9),%ebx \n\t"
+- "movl 8(%r9),%ecx \n\t"
+- "movl 12(%r9),%edx \n\t"
+- "movl 16(%r9),%esi \n\t"
+- "movl 20(%r9),%edi \n\t"
+- "movl (%r9),%eax \n\t"
+- "call *%r12 \n\t"
+- "pushfq \n\t"
+- "popq %r12 \n\t"
+- "movl %eax, (%r9) \n\t"
+- "movl %ebx, 4(%r9) \n\t"
+- "movl %ecx, 8(%r9) \n\t"
+- "movl %edx, 12(%r9) \n\t"
+- "movl %esi, 16(%r9) \n\t"
+- "movl %edi, 20(%r9) \n\t"
+- "movq %r12, %rax \n\t"
+- "movl %eax, 28(%r9) \n\t"
+- "popq %r9 \n\t"
+- "popq %r12 \n\t"
+- "popq %rdx \n\t"
+- "popq %rbx \n\t"
+- "popq %rax \n\t"
+- FRAME_END
+- "ret \n\t"
+- ".previous");
+-
+-/*
+- * dmi_find_cru
+- *
+- * Routine Description:
+- * This function checks whether or not a SMBIOS/DMI record is
+- * the 64bit CRU info or not
+- */
+-static void dmi_find_cru(const struct dmi_header *dm, void *dummy)
+-{
+- struct smbios_cru64_info *smbios_cru64_ptr;
+- unsigned long cru_physical_address;
+-
+- if (dm->type == SMBIOS_CRU64_INFORMATION) {
+- smbios_cru64_ptr = (struct smbios_cru64_info *) dm;
+- if (smbios_cru64_ptr->signature == CRU_BIOS_SIGNATURE_VALUE) {
+- cru_physical_address =
+- smbios_cru64_ptr->physical_address +
+- smbios_cru64_ptr->double_offset;
+- cru_rom_addr = ioremap(cru_physical_address,
+- smbios_cru64_ptr->double_length);
+- set_memory_x((unsigned long)cru_rom_addr & PAGE_MASK,
+- smbios_cru64_ptr->double_length >> PAGE_SHIFT);
+- }
+- }
+-}
+-
+-static int detect_cru_service(void)
+-{
+- cru_rom_addr = NULL;
+-
+- dmi_walk(dmi_find_cru, NULL);
+-
+- /* if cru_rom_addr has been set then we found a CRU service */
+- return ((cru_rom_addr != NULL) ? 0 : -ENODEV);
+-}
+-/* ------------------------------------------------------------------------- */
+-#endif /* CONFIG_X86_64 */
+-#endif /* CONFIG_HPWDT_NMI_DECODING */
+
+ /*
+ * Watchdog operations
+@@ -475,32 +103,22 @@ static int hpwdt_time_left(void)
+ }
+
+ #ifdef CONFIG_HPWDT_NMI_DECODING
++static int hpwdt_my_nmi(void)
++{
++ return ioread8(hpwdt_nmistat) & 0x6;
++}
++
+ /*
+ * NMI Handler
+ */
+ static int hpwdt_pretimeout(unsigned int ulReason, struct pt_regs *regs)
+ {
+- unsigned long rom_pl;
+- static int die_nmi_called;
+-
+- if (!hpwdt_nmi_decoding)
++ if ((ulReason == NMI_UNKNOWN) && !hpwdt_my_nmi())
+ return NMI_DONE;
+
+- spin_lock_irqsave(&rom_lock, rom_pl);
+- if (!die_nmi_called && !is_icru && !is_uefi)
+- asminline_call(&cmn_regs, cru_rom_addr);
+- die_nmi_called = 1;
+- spin_unlock_irqrestore(&rom_lock, rom_pl);
+-
+ if (allow_kdump)
+ hpwdt_stop();
+
+- if (!is_icru && !is_uefi) {
+- if (cmn_regs.u1.ral == 0) {
+- nmi_panic(regs, "An NMI occurred, but unable to determine source.\n");
+- return NMI_HANDLED;
+- }
+- }
+ nmi_panic(regs, "An NMI occurred. Depending on your system the reason "
+ "for the NMI is logged in any one of the following "
+ "resources:\n"
+@@ -666,84 +284,11 @@ static struct miscdevice hpwdt_miscdev = {
+ * Init & Exit
+ */
+
+-#ifdef CONFIG_HPWDT_NMI_DECODING
+-#ifdef CONFIG_X86_LOCAL_APIC
+-static void hpwdt_check_nmi_decoding(struct pci_dev *dev)
+-{
+- /*
+- * If nmi_watchdog is turned off then we can turn on
+- * our nmi decoding capability.
+- */
+- hpwdt_nmi_decoding = 1;
+-}
+-#else
+-static void hpwdt_check_nmi_decoding(struct pci_dev *dev)
+-{
+- dev_warn(&dev->dev, "NMI decoding is disabled. "
+- "Your kernel does not support a NMI Watchdog.\n");
+-}
+-#endif /* CONFIG_X86_LOCAL_APIC */
+-
+-/*
+- * dmi_find_icru
+- *
+- * Routine Description:
+- * This function checks whether or not we are on an iCRU-based server.
+- * This check is independent of architecture and needs to be made for
+- * any ProLiant system.
+- */
+-static void dmi_find_icru(const struct dmi_header *dm, void *dummy)
+-{
+- struct smbios_proliant_info *smbios_proliant_ptr;
+-
+- if (dm->type == SMBIOS_ICRU_INFORMATION) {
+- smbios_proliant_ptr = (struct smbios_proliant_info *) dm;
+- if (smbios_proliant_ptr->misc_features & 0x01)
+- is_icru = 1;
+- if (smbios_proliant_ptr->misc_features & 0x408)
+- is_uefi = 1;
+- }
+-}
+
+ static int hpwdt_init_nmi_decoding(struct pci_dev *dev)
+ {
++#ifdef CONFIG_HPWDT_NMI_DECODING
+ int retval;
+-
+- /*
+- * On typical CRU-based systems we need to map that service in
+- * the BIOS. For 32 bit Operating Systems we need to go through
+- * the 32 Bit BIOS Service Directory. For 64 bit Operating
+- * Systems we get that service through SMBIOS.
+- *
+- * On systems that support the new iCRU service all we need to
+- * do is call dmi_walk to get the supported flag value and skip
+- * the old cru detect code.
+- */
+- dmi_walk(dmi_find_icru, NULL);
+- if (!is_icru && !is_uefi) {
+-
+- /*
+- * We need to map the ROM to get the CRU service.
+- * For 32 bit Operating Systems we need to go through the 32 Bit
+- * BIOS Service Directory
+- * For 64 bit Operating Systems we get that service through SMBIOS.
+- */
+- retval = detect_cru_service();
+- if (retval < 0) {
+- dev_warn(&dev->dev,
+- "Unable to detect the %d Bit CRU Service.\n",
+- HPWDT_ARCH);
+- return retval;
+- }
+-
+- /*
+- * We know this is the only CRU call we need to make so lets keep as
+- * few instructions as possible once the NMI comes in.
+- */
+- cmn_regs.u1.rah = 0x0D;
+- cmn_regs.u1.ral = 0x02;
+- }
+-
+ /*
+ * Only one function can register for NMI_UNKNOWN
+ */
+@@ -771,44 +316,25 @@ static int hpwdt_init_nmi_decoding(struct pci_dev *dev)
+ dev_warn(&dev->dev,
+ "Unable to register a die notifier (err=%d).\n",
+ retval);
+- if (cru_rom_addr)
+- iounmap(cru_rom_addr);
+ return retval;
++#endif /* CONFIG_HPWDT_NMI_DECODING */
++ return 0;
+ }
+
+ static void hpwdt_exit_nmi_decoding(void)
+ {
++#ifdef CONFIG_HPWDT_NMI_DECODING
+ unregister_nmi_handler(NMI_UNKNOWN, "hpwdt");
+ unregister_nmi_handler(NMI_SERR, "hpwdt");
+ unregister_nmi_handler(NMI_IO_CHECK, "hpwdt");
+- if (cru_rom_addr)
+- iounmap(cru_rom_addr);
+-}
+-#else /* !CONFIG_HPWDT_NMI_DECODING */
+-static void hpwdt_check_nmi_decoding(struct pci_dev *dev)
+-{
+-}
+-
+-static int hpwdt_init_nmi_decoding(struct pci_dev *dev)
+-{
+- return 0;
++#endif
+ }
+
+-static void hpwdt_exit_nmi_decoding(void)
+-{
+-}
+-#endif /* CONFIG_HPWDT_NMI_DECODING */
+-
+ static int hpwdt_init_one(struct pci_dev *dev,
+ const struct pci_device_id *ent)
+ {
+ int retval;
+
+- /*
+- * Check if we can do NMI decoding or not
+- */
+- hpwdt_check_nmi_decoding(dev);
+-
+ /*
+ * First let's find out if we are on an iLO2+ server. We will
+ * not run on a legacy ASM box.
+@@ -842,6 +368,7 @@ static int hpwdt_init_one(struct pci_dev *dev,
+ retval = -ENOMEM;
+ goto error_pci_iomap;
+ }
++ hpwdt_nmistat = pci_mem_addr + 0x6e;
+ hpwdt_timer_reg = pci_mem_addr + 0x70;
+ hpwdt_timer_con = pci_mem_addr + 0x72;
+
+@@ -912,6 +439,6 @@ MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started (default="
+ #ifdef CONFIG_HPWDT_NMI_DECODING
+ module_param(allow_kdump, int, 0);
+ MODULE_PARM_DESC(allow_kdump, "Start a kernel dump after NMI occurs");
+-#endif /* !CONFIG_HPWDT_NMI_DECODING */
++#endif /* CONFIG_HPWDT_NMI_DECODING */
+
+ module_pci_driver(hpwdt_driver);
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 8c10b0562e75..621c517b325c 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -86,10 +86,10 @@ struct nfs_direct_req {
+ struct nfs_direct_mirror mirrors[NFS_PAGEIO_DESCRIPTOR_MIRROR_MAX];
+ int mirror_count;
+
++ loff_t io_start; /* Start offset for I/O */
+ ssize_t count, /* bytes actually processed */
+ max_count, /* max expected count */
+ bytes_left, /* bytes left to be sent */
+- io_start, /* start of IO */
+ error; /* any reported error */
+ struct completion completion; /* wait for i/o completion */
+
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index eb098ccfefd5..b99200828d08 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -292,8 +292,11 @@ pnfs_detach_layout_hdr(struct pnfs_layout_hdr *lo)
+ void
+ pnfs_put_layout_hdr(struct pnfs_layout_hdr *lo)
+ {
+- struct inode *inode = lo->plh_inode;
++ struct inode *inode;
+
++ if (!lo)
++ return;
++ inode = lo->plh_inode;
+ pnfs_layoutreturn_before_put_layout_hdr(lo);
+
+ if (refcount_dec_and_lock(&lo->plh_refcount, &inode->i_lock)) {
+@@ -1241,10 +1244,12 @@ bool pnfs_roc(struct inode *ino,
+ spin_lock(&ino->i_lock);
+ lo = nfsi->layout;
+ if (!lo || !pnfs_layout_is_valid(lo) ||
+- test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags))
++ test_bit(NFS_LAYOUT_BULK_RECALL, &lo->plh_flags)) {
++ lo = NULL;
+ goto out_noroc;
++ }
++ pnfs_get_layout_hdr(lo);
+ if (test_bit(NFS_LAYOUT_RETURN_LOCK, &lo->plh_flags)) {
+- pnfs_get_layout_hdr(lo);
+ spin_unlock(&ino->i_lock);
+ wait_on_bit(&lo->plh_flags, NFS_LAYOUT_RETURN,
+ TASK_UNINTERRUPTIBLE);
+@@ -1312,10 +1317,12 @@ bool pnfs_roc(struct inode *ino,
+ struct pnfs_layoutdriver_type *ld = NFS_SERVER(ino)->pnfs_curr_ld;
+ if (ld->prepare_layoutreturn)
+ ld->prepare_layoutreturn(args);
++ pnfs_put_layout_hdr(lo);
+ return true;
+ }
+ if (layoutreturn)
+ pnfs_send_layoutreturn(lo, &stateid, iomode, true);
++ pnfs_put_layout_hdr(lo);
+ return false;
+ }
+
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index cf61108f8f8d..8607ad8626f6 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -1878,40 +1878,43 @@ int nfs_generic_commit_list(struct inode *inode, struct list_head *head,
+ return status;
+ }
+
+-int nfs_commit_inode(struct inode *inode, int how)
++static int __nfs_commit_inode(struct inode *inode, int how,
++ struct writeback_control *wbc)
+ {
+ LIST_HEAD(head);
+ struct nfs_commit_info cinfo;
+ int may_wait = how & FLUSH_SYNC;
+- int error = 0;
+- int res;
++ int ret, nscan;
+
+ nfs_init_cinfo_from_inode(&cinfo, inode);
+ nfs_commit_begin(cinfo.mds);
+- res = nfs_scan_commit(inode, &head, &cinfo);
+- if (res)
+- error = nfs_generic_commit_list(inode, &head, how, &cinfo);
++ for (;;) {
++ ret = nscan = nfs_scan_commit(inode, &head, &cinfo);
++ if (ret <= 0)
++ break;
++ ret = nfs_generic_commit_list(inode, &head, how, &cinfo);
++ if (ret < 0)
++ break;
++ ret = 0;
++ if (wbc && wbc->sync_mode == WB_SYNC_NONE) {
++ if (nscan < wbc->nr_to_write)
++ wbc->nr_to_write -= nscan;
++ else
++ wbc->nr_to_write = 0;
++ }
++ if (nscan < INT_MAX)
++ break;
++ cond_resched();
++ }
+ nfs_commit_end(cinfo.mds);
+- if (res == 0)
+- return res;
+- if (error < 0)
+- goto out_error;
+- if (!may_wait)
+- goto out_mark_dirty;
+- error = wait_on_commit(cinfo.mds);
+- if (error < 0)
+- return error;
+- return res;
+-out_error:
+- res = error;
+- /* Note: If we exit without ensuring that the commit is complete,
+- * we must mark the inode as dirty. Otherwise, future calls to
+- * sync_inode() with the WB_SYNC_ALL flag set will fail to ensure
+- * that the data is on the disk.
+- */
+-out_mark_dirty:
+- __mark_inode_dirty(inode, I_DIRTY_DATASYNC);
+- return res;
++ if (ret || !may_wait)
++ return ret;
++ return wait_on_commit(cinfo.mds);
++}
++
++int nfs_commit_inode(struct inode *inode, int how)
++{
++ return __nfs_commit_inode(inode, how, NULL);
+ }
+ EXPORT_SYMBOL_GPL(nfs_commit_inode);
+
+@@ -1921,11 +1924,11 @@ int nfs_write_inode(struct inode *inode, struct writeback_control *wbc)
+ int flags = FLUSH_SYNC;
+ int ret = 0;
+
+- /* no commits means nothing needs to be done */
+- if (!atomic_long_read(&nfsi->commit_info.ncommit))
+- return ret;
+-
+ if (wbc->sync_mode == WB_SYNC_NONE) {
++ /* no commits means nothing needs to be done */
++ if (!atomic_long_read(&nfsi->commit_info.ncommit))
++ goto check_requests_outstanding;
++
+ /* Don't commit yet if this is a non-blocking flush and there
+ * are a lot of outstanding writes for this mapping.
+ */
+@@ -1936,16 +1939,16 @@ int nfs_write_inode(struct inode *inode, struct writeback_control *wbc)
+ flags = 0;
+ }
+
+- ret = nfs_commit_inode(inode, flags);
+- if (ret >= 0) {
+- if (wbc->sync_mode == WB_SYNC_NONE) {
+- if (ret < wbc->nr_to_write)
+- wbc->nr_to_write -= ret;
+- else
+- wbc->nr_to_write = 0;
+- }
+- return 0;
+- }
++ ret = __nfs_commit_inode(inode, flags, wbc);
++ if (!ret) {
++ if (flags & FLUSH_SYNC)
++ return 0;
++ } else if (atomic_long_read(&nfsi->commit_info.ncommit))
++ goto out_mark_dirty;
++
++check_requests_outstanding:
++ if (!atomic_read(&nfsi->commit_info.rpcs_out))
++ return ret;
+ out_mark_dirty:
+ __mark_inode_dirty(inode, I_DIRTY_DATASYNC);
+ return ret;
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index beb945e1963c..ef3e7ea76296 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -678,9 +678,6 @@ struct dentry *ovl_lookup(struct inode *dir, struct dentry *dentry,
+ stack[ctr].layer = lower.layer;
+ ctr++;
+
+- if (d.stop)
+- break;
+-
+ /*
+ * Following redirects can have security consequences: it's like
+ * a symlink into the lower layer without the permission checks.
+@@ -697,6 +694,9 @@ struct dentry *ovl_lookup(struct inode *dir, struct dentry *dentry,
+ goto out_put;
+ }
+
++ if (d.stop)
++ break;
++
+ if (d.redirect && d.redirect[0] == '/' && poe != roe) {
+ poe = roe;
+
+diff --git a/include/drm/drm_crtc_helper.h b/include/drm/drm_crtc_helper.h
+index 76e237bd989b..6914633037a5 100644
+--- a/include/drm/drm_crtc_helper.h
++++ b/include/drm/drm_crtc_helper.h
+@@ -77,5 +77,6 @@ void drm_kms_helper_hotplug_event(struct drm_device *dev);
+
+ void drm_kms_helper_poll_disable(struct drm_device *dev);
+ void drm_kms_helper_poll_enable(struct drm_device *dev);
++bool drm_kms_helper_is_poll_worker(void);
+
+ #endif
+diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
+index 412e83a4d3db..29c839ed656b 100644
+--- a/include/drm/drm_drv.h
++++ b/include/drm/drm_drv.h
+@@ -55,6 +55,7 @@ struct drm_mode_create_dumb;
+ #define DRIVER_ATOMIC 0x10000
+ #define DRIVER_KMS_LEGACY_CONTEXT 0x20000
+ #define DRIVER_SYNCOBJ 0x40000
++#define DRIVER_PREFER_XBGR_30BPP 0x80000
+
+ /**
+ * struct drm_driver - DRM driver structure
+diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
+index 3b609edffa8f..be3aef6839f6 100644
+--- a/include/linux/compiler-clang.h
++++ b/include/linux/compiler-clang.h
+@@ -19,3 +19,8 @@
+
+ #define randomized_struct_fields_start struct {
+ #define randomized_struct_fields_end };
++
++/* Clang doesn't have a way to turn it off per-function, yet. */
++#ifdef __noretpoline
++#undef __noretpoline
++#endif
+diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
+index 73bc63e0a1c4..673fbf904fe5 100644
+--- a/include/linux/compiler-gcc.h
++++ b/include/linux/compiler-gcc.h
+@@ -93,6 +93,10 @@
+ #define __weak __attribute__((weak))
+ #define __alias(symbol) __attribute__((alias(#symbol)))
+
++#ifdef RETPOLINE
++#define __noretpoline __attribute__((indirect_branch("keep")))
++#endif
++
+ /*
+ * it doesn't make sense on ARM (currently the only user of __naked)
+ * to trace naked functions because then mcount is called without
+diff --git a/include/linux/init.h b/include/linux/init.h
+index 506a98151131..bc27cf03c41e 100644
+--- a/include/linux/init.h
++++ b/include/linux/init.h
+@@ -6,10 +6,10 @@
+ #include <linux/types.h>
+
+ /* Built-in __init functions needn't be compiled with retpoline */
+-#if defined(RETPOLINE) && !defined(MODULE)
+-#define __noretpoline __attribute__((indirect_branch("keep")))
++#if defined(__noretpoline) && !defined(MODULE)
++#define __noinitretpoline __noretpoline
+ #else
+-#define __noretpoline
++#define __noinitretpoline
+ #endif
+
+ /* These macros are used to mark some functions or
+@@ -47,7 +47,7 @@
+
+ /* These are for everybody (although not all archs will actually
+ discard it in modules) */
+-#define __init __section(.init.text) __cold __latent_entropy __noretpoline
++#define __init __section(.init.text) __cold __latent_entropy __noinitretpoline
+ #define __initdata __section(.init.data)
+ #define __initconst __section(.init.rodata)
+ #define __exitdata __section(.exit.data)
+diff --git a/include/linux/nospec.h b/include/linux/nospec.h
+index 132e3f5a2e0d..e791ebc65c9c 100644
+--- a/include/linux/nospec.h
++++ b/include/linux/nospec.h
+@@ -5,6 +5,7 @@
+
+ #ifndef _LINUX_NOSPEC_H
+ #define _LINUX_NOSPEC_H
++#include <asm/barrier.h>
+
+ /**
+ * array_index_mask_nospec() - generate a ~0 mask when index < size, 0 otherwise
+@@ -29,26 +30,6 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
+ }
+ #endif
+
+-/*
+- * Warn developers about inappropriate array_index_nospec() usage.
+- *
+- * Even if the CPU speculates past the WARN_ONCE branch, the
+- * sign bit of @index is taken into account when generating the
+- * mask.
+- *
+- * This warning is compiled out when the compiler can infer that
+- * @index and @size are less than LONG_MAX.
+- */
+-#define array_index_mask_nospec_check(index, size) \
+-({ \
+- if (WARN_ONCE(index > LONG_MAX || size > LONG_MAX, \
+- "array_index_nospec() limited to range of [0, LONG_MAX]\n")) \
+- _mask = 0; \
+- else \
+- _mask = array_index_mask_nospec(index, size); \
+- _mask; \
+-})
+-
+ /*
+ * array_index_nospec - sanitize an array index after a bounds check
+ *
+@@ -67,7 +48,7 @@ static inline unsigned long array_index_mask_nospec(unsigned long index,
+ ({ \
+ typeof(index) _i = (index); \
+ typeof(size) _s = (size); \
+- unsigned long _mask = array_index_mask_nospec_check(_i, _s); \
++ unsigned long _mask = array_index_mask_nospec(_i, _s); \
+ \
+ BUILD_BUG_ON(sizeof(_i) > sizeof(long)); \
+ BUILD_BUG_ON(sizeof(_s) > sizeof(long)); \
+diff --git a/include/linux/tpm.h b/include/linux/tpm.h
+index 5a090f5ab335..881312d85574 100644
+--- a/include/linux/tpm.h
++++ b/include/linux/tpm.h
+@@ -50,6 +50,7 @@ struct tpm_class_ops {
+ unsigned long *timeout_cap);
+ int (*request_locality)(struct tpm_chip *chip, int loc);
+ void (*relinquish_locality)(struct tpm_chip *chip, int loc);
++ void (*clk_enable)(struct tpm_chip *chip, bool value);
+ };
+
+ #if defined(CONFIG_TCG_TPM) || defined(CONFIG_TCG_TPM_MODULE)
+diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h
+index 4a54ef96aff5..bc0cda180c8b 100644
+--- a/include/linux/workqueue.h
++++ b/include/linux/workqueue.h
+@@ -465,6 +465,7 @@ extern bool cancel_delayed_work_sync(struct delayed_work *dwork);
+
+ extern void workqueue_set_max_active(struct workqueue_struct *wq,
+ int max_active);
++extern struct work_struct *current_work(void);
+ extern bool current_is_workqueue_rescuer(void);
+ extern bool workqueue_congested(int cpu, struct workqueue_struct *wq);
+ extern unsigned int work_busy(struct work_struct *work);
+diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
+index 7fb57e905526..7bc752fc98de 100644
+--- a/include/scsi/scsi_cmnd.h
++++ b/include/scsi/scsi_cmnd.h
+@@ -69,6 +69,9 @@ struct scsi_cmnd {
+ struct list_head list; /* scsi_cmnd participates in queue lists */
+ struct list_head eh_entry; /* entry for the host eh_cmd_q */
+ struct delayed_work abort_work;
++
++ struct rcu_head rcu;
++
+ int eh_eflags; /* Used by error handlr */
+
+ /*
+diff --git a/include/scsi/scsi_host.h b/include/scsi/scsi_host.h
+index 1a1df0d21ee3..a8b7bf879ced 100644
+--- a/include/scsi/scsi_host.h
++++ b/include/scsi/scsi_host.h
+@@ -571,8 +571,6 @@ struct Scsi_Host {
+ struct blk_mq_tag_set tag_set;
+ };
+
+- struct rcu_head rcu;
+-
+ atomic_t host_busy; /* commands actually active on low-level */
+ atomic_t host_blocked;
+
+diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
+index ce5b669003b2..ea8212118404 100644
+--- a/kernel/bpf/cpumap.c
++++ b/kernel/bpf/cpumap.c
+@@ -339,7 +339,7 @@ static int cpu_map_kthread_run(void *data)
+
+ struct bpf_cpu_map_entry *__cpu_map_entry_alloc(u32 qsize, u32 cpu, int map_id)
+ {
+- gfp_t gfp = GFP_ATOMIC|__GFP_NOWARN;
++ gfp_t gfp = GFP_KERNEL | __GFP_NOWARN;
+ struct bpf_cpu_map_entry *rcpu;
+ int numa, err;
+
+diff --git a/kernel/panic.c b/kernel/panic.c
+index 2cfef408fec9..4b794f1d8561 100644
+--- a/kernel/panic.c
++++ b/kernel/panic.c
+@@ -640,7 +640,7 @@ device_initcall(register_warn_debugfs);
+ */
+ __visible void __stack_chk_fail(void)
+ {
+- panic("stack-protector: Kernel stack is corrupted in: %p\n",
++ panic("stack-protector: Kernel stack is corrupted in: %pB\n",
+ __builtin_return_address(0));
+ }
+ EXPORT_SYMBOL(__stack_chk_fail);
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index f699122dab32..34f1e1a2ec12 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -4168,6 +4168,22 @@ void workqueue_set_max_active(struct workqueue_struct *wq, int max_active)
+ }
+ EXPORT_SYMBOL_GPL(workqueue_set_max_active);
+
++/**
++ * current_work - retrieve %current task's work struct
++ *
++ * Determine if %current task is a workqueue worker and what it's working on.
++ * Useful to find out the context that the %current task is running in.
++ *
++ * Return: work struct if %current task is a workqueue worker, %NULL otherwise.
++ */
++struct work_struct *current_work(void)
++{
++ struct worker *worker = current_wq_worker();
++
++ return worker ? worker->current_work : NULL;
++}
++EXPORT_SYMBOL(current_work);
++
+ /**
+ * current_is_workqueue_rescuer - is %current workqueue rescuer?
+ *
+diff --git a/lib/bug.c b/lib/bug.c
+index c1b0fad31b10..1077366f496b 100644
+--- a/lib/bug.c
++++ b/lib/bug.c
+@@ -150,6 +150,8 @@ enum bug_trap_type report_bug(unsigned long bugaddr, struct pt_regs *regs)
+ return BUG_TRAP_TYPE_NONE;
+
+ bug = find_bug(bugaddr);
++ if (!bug)
++ return BUG_TRAP_TYPE_NONE;
+
+ file = NULL;
+ line = 0;
+@@ -191,7 +193,7 @@ enum bug_trap_type report_bug(unsigned long bugaddr, struct pt_regs *regs)
+ if (file)
+ pr_crit("kernel BUG at %s:%u!\n", file, line);
+ else
+- pr_crit("Kernel BUG at %p [verbose debug info unavailable]\n",
++ pr_crit("Kernel BUG at %pB [verbose debug info unavailable]\n",
+ (void *)bugaddr);
+
+ return BUG_TRAP_TYPE_BUG;
+diff --git a/mm/memblock.c b/mm/memblock.c
+index 46aacdfa4f4d..d25b5a456cca 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -1107,7 +1107,7 @@ unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn,
+ struct memblock_type *type = &memblock.memory;
+ unsigned int right = type->cnt;
+ unsigned int mid, left = 0;
+- phys_addr_t addr = PFN_PHYS(pfn + 1);
++ phys_addr_t addr = PFN_PHYS(++pfn);
+
+ do {
+ mid = (right + left) / 2;
+@@ -1118,15 +1118,15 @@ unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn,
+ type->regions[mid].size))
+ left = mid + 1;
+ else {
+- /* addr is within the region, so pfn + 1 is valid */
+- return min(pfn + 1, max_pfn);
++ /* addr is within the region, so pfn is valid */
++ return pfn;
+ }
+ } while (left < right);
+
+ if (right == type->cnt)
+- return max_pfn;
++ return -1UL;
+ else
+- return min(PHYS_PFN(type->regions[right].base), max_pfn);
++ return PHYS_PFN(type->regions[right].base);
+ }
+
+ /**
+diff --git a/net/bridge/netfilter/ebt_among.c b/net/bridge/netfilter/ebt_among.c
+index 279527f8b1fe..59baaecd3e54 100644
+--- a/net/bridge/netfilter/ebt_among.c
++++ b/net/bridge/netfilter/ebt_among.c
+@@ -172,18 +172,35 @@ ebt_among_mt(const struct sk_buff *skb, struct xt_action_param *par)
+ return true;
+ }
+
++static bool poolsize_invalid(const struct ebt_mac_wormhash *w)
++{
++ return w && w->poolsize >= (INT_MAX / sizeof(struct ebt_mac_wormhash_tuple));
++}
++
+ static int ebt_among_mt_check(const struct xt_mtchk_param *par)
+ {
+ const struct ebt_among_info *info = par->matchinfo;
+ const struct ebt_entry_match *em =
+ container_of(par->matchinfo, const struct ebt_entry_match, data);
+- int expected_length = sizeof(struct ebt_among_info);
++ unsigned int expected_length = sizeof(struct ebt_among_info);
+ const struct ebt_mac_wormhash *wh_dst, *wh_src;
+ int err;
+
++ if (expected_length > em->match_size)
++ return -EINVAL;
++
+ wh_dst = ebt_among_wh_dst(info);
+- wh_src = ebt_among_wh_src(info);
++ if (poolsize_invalid(wh_dst))
++ return -EINVAL;
++
+ expected_length += ebt_mac_wormhash_size(wh_dst);
++ if (expected_length > em->match_size)
++ return -EINVAL;
++
++ wh_src = ebt_among_wh_src(info);
++ if (poolsize_invalid(wh_src))
++ return -EINVAL;
++
+ expected_length += ebt_mac_wormhash_size(wh_src);
+
+ if (em->match_size != EBT_ALIGN(expected_length)) {
+diff --git a/net/bridge/netfilter/ebtables.c b/net/bridge/netfilter/ebtables.c
+index 37817d25b63d..895ba1cd9750 100644
+--- a/net/bridge/netfilter/ebtables.c
++++ b/net/bridge/netfilter/ebtables.c
+@@ -2053,7 +2053,9 @@ static int ebt_size_mwt(struct compat_ebt_entry_mwt *match32,
+ if (match_kern)
+ match_kern->match_size = ret;
+
+- WARN_ON(type == EBT_COMPAT_TARGET && size_left);
++ if (WARN_ON(type == EBT_COMPAT_TARGET && size_left))
++ return -EINVAL;
++
+ match32 = (struct compat_ebt_entry_mwt *) buf;
+ }
+
+@@ -2109,6 +2111,15 @@ static int size_entry_mwt(struct ebt_entry *entry, const unsigned char *base,
+ *
+ * offsets are relative to beginning of struct ebt_entry (i.e., 0).
+ */
++ for (i = 0; i < 4 ; ++i) {
++ if (offsets[i] >= *total)
++ return -EINVAL;
++ if (i == 0)
++ continue;
++ if (offsets[i-1] > offsets[i])
++ return -EINVAL;
++ }
++
+ for (i = 0, j = 1 ; j < 4 ; j++, i++) {
+ struct compat_ebt_entry_mwt *match32;
+ unsigned int size;
+diff --git a/net/ipv4/netfilter/arp_tables.c b/net/ipv4/netfilter/arp_tables.c
+index 0c3c944a7b72..8e5185ad6310 100644
+--- a/net/ipv4/netfilter/arp_tables.c
++++ b/net/ipv4/netfilter/arp_tables.c
+@@ -257,6 +257,10 @@ unsigned int arpt_do_table(struct sk_buff *skb,
+ }
+ if (table_base + v
+ != arpt_next_entry(e)) {
++ if (unlikely(stackidx >= private->stacksize)) {
++ verdict = NF_DROP;
++ break;
++ }
+ jumpstack[stackidx++] = e;
+ }
+
+diff --git a/net/ipv4/netfilter/ip_tables.c b/net/ipv4/netfilter/ip_tables.c
+index 2e0d339028bb..a74a81624983 100644
+--- a/net/ipv4/netfilter/ip_tables.c
++++ b/net/ipv4/netfilter/ip_tables.c
+@@ -335,8 +335,13 @@ ipt_do_table(struct sk_buff *skb,
+ continue;
+ }
+ if (table_base + v != ipt_next_entry(e) &&
+- !(e->ip.flags & IPT_F_GOTO))
++ !(e->ip.flags & IPT_F_GOTO)) {
++ if (unlikely(stackidx >= private->stacksize)) {
++ verdict = NF_DROP;
++ break;
++ }
+ jumpstack[stackidx++] = e;
++ }
+
+ e = get_entry(table_base, v);
+ continue;
+diff --git a/net/ipv4/netfilter/ipt_CLUSTERIP.c b/net/ipv4/netfilter/ipt_CLUSTERIP.c
+index 1e4a7209a3d2..77a01c484807 100644
+--- a/net/ipv4/netfilter/ipt_CLUSTERIP.c
++++ b/net/ipv4/netfilter/ipt_CLUSTERIP.c
+@@ -107,12 +107,6 @@ clusterip_config_entry_put(struct net *net, struct clusterip_config *c)
+
+ local_bh_disable();
+ if (refcount_dec_and_lock(&c->entries, &cn->lock)) {
+- list_del_rcu(&c->list);
+- spin_unlock(&cn->lock);
+- local_bh_enable();
+-
+- unregister_netdevice_notifier(&c->notifier);
+-
+ /* In case anyone still accesses the file, the open/close
+ * functions are also incrementing the refcount on their own,
+ * so it's safe to remove the entry even if it's in use. */
+@@ -120,6 +114,12 @@ clusterip_config_entry_put(struct net *net, struct clusterip_config *c)
+ if (cn->procdir)
+ proc_remove(c->pde);
+ #endif
++ list_del_rcu(&c->list);
++ spin_unlock(&cn->lock);
++ local_bh_enable();
++
++ unregister_netdevice_notifier(&c->notifier);
++
+ return;
+ }
+ local_bh_enable();
+diff --git a/net/ipv6/netfilter.c b/net/ipv6/netfilter.c
+index 39970e212ad5..9bf260459f83 100644
+--- a/net/ipv6/netfilter.c
++++ b/net/ipv6/netfilter.c
+@@ -21,18 +21,19 @@
+ int ip6_route_me_harder(struct net *net, struct sk_buff *skb)
+ {
+ const struct ipv6hdr *iph = ipv6_hdr(skb);
++ struct sock *sk = sk_to_full_sk(skb->sk);
+ unsigned int hh_len;
+ struct dst_entry *dst;
+ struct flowi6 fl6 = {
+- .flowi6_oif = skb->sk ? skb->sk->sk_bound_dev_if : 0,
++ .flowi6_oif = sk ? sk->sk_bound_dev_if : 0,
+ .flowi6_mark = skb->mark,
+- .flowi6_uid = sock_net_uid(net, skb->sk),
++ .flowi6_uid = sock_net_uid(net, sk),
+ .daddr = iph->daddr,
+ .saddr = iph->saddr,
+ };
+ int err;
+
+- dst = ip6_route_output(net, skb->sk, &fl6);
++ dst = ip6_route_output(net, sk, &fl6);
+ err = dst->error;
+ if (err) {
+ IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTNOROUTES);
+@@ -50,7 +51,7 @@ int ip6_route_me_harder(struct net *net, struct sk_buff *skb)
+ if (!(IP6CB(skb)->flags & IP6SKB_XFRM_TRANSFORMED) &&
+ xfrm_decode_session(skb, flowi6_to_flowi(&fl6), AF_INET6) == 0) {
+ skb_dst_set(skb, NULL);
+- dst = xfrm_lookup(net, dst, flowi6_to_flowi(&fl6), skb->sk, 0);
++ dst = xfrm_lookup(net, dst, flowi6_to_flowi(&fl6), sk, 0);
+ if (IS_ERR(dst))
+ return PTR_ERR(dst);
+ skb_dst_set(skb, dst);
+diff --git a/net/ipv6/netfilter/ip6_tables.c b/net/ipv6/netfilter/ip6_tables.c
+index 1d7ae9366335..51f3bc632c7c 100644
+--- a/net/ipv6/netfilter/ip6_tables.c
++++ b/net/ipv6/netfilter/ip6_tables.c
+@@ -357,6 +357,10 @@ ip6t_do_table(struct sk_buff *skb,
+ }
+ if (table_base + v != ip6t_next_entry(e) &&
+ !(e->ipv6.flags & IP6T_F_GOTO)) {
++ if (unlikely(stackidx >= private->stacksize)) {
++ verdict = NF_DROP;
++ break;
++ }
+ jumpstack[stackidx++] = e;
+ }
+
+diff --git a/net/ipv6/netfilter/nf_nat_l3proto_ipv6.c b/net/ipv6/netfilter/nf_nat_l3proto_ipv6.c
+index 1d2fb9267d6f..6a203fa82dbd 100644
+--- a/net/ipv6/netfilter/nf_nat_l3proto_ipv6.c
++++ b/net/ipv6/netfilter/nf_nat_l3proto_ipv6.c
+@@ -99,6 +99,10 @@ static bool nf_nat_ipv6_manip_pkt(struct sk_buff *skb,
+ !l4proto->manip_pkt(skb, &nf_nat_l3proto_ipv6, iphdroff, hdroff,
+ target, maniptype))
+ return false;
++
++ /* must reload, offset might have changed */
++ ipv6h = (void *)skb->data + iphdroff;
++
+ manip_addr:
+ if (maniptype == NF_NAT_MANIP_SRC)
+ ipv6h->saddr = target->src.u3.in6;
+diff --git a/net/netfilter/nf_nat_proto_common.c b/net/netfilter/nf_nat_proto_common.c
+index fbce552a796e..7d7466dbf663 100644
+--- a/net/netfilter/nf_nat_proto_common.c
++++ b/net/netfilter/nf_nat_proto_common.c
+@@ -41,7 +41,7 @@ void nf_nat_l4proto_unique_tuple(const struct nf_nat_l3proto *l3proto,
+ const struct nf_conn *ct,
+ u16 *rover)
+ {
+- unsigned int range_size, min, i;
++ unsigned int range_size, min, max, i;
+ __be16 *portptr;
+ u_int16_t off;
+
+@@ -71,7 +71,10 @@ void nf_nat_l4proto_unique_tuple(const struct nf_nat_l3proto *l3proto,
+ }
+ } else {
+ min = ntohs(range->min_proto.all);
+- range_size = ntohs(range->max_proto.all) - min + 1;
++ max = ntohs(range->max_proto.all);
++ if (unlikely(max < min))
++ swap(max, min);
++ range_size = max - min + 1;
+ }
+
+ if (range->flags & NF_NAT_RANGE_PROTO_RANDOM) {
+diff --git a/net/netfilter/xt_IDLETIMER.c b/net/netfilter/xt_IDLETIMER.c
+index ee3421ad108d..18b7412ab99a 100644
+--- a/net/netfilter/xt_IDLETIMER.c
++++ b/net/netfilter/xt_IDLETIMER.c
+@@ -146,11 +146,11 @@ static int idletimer_tg_create(struct idletimer_tg_info *info)
+ timer_setup(&info->timer->timer, idletimer_tg_expired, 0);
+ info->timer->refcnt = 1;
+
++ INIT_WORK(&info->timer->work, idletimer_tg_work);
++
+ mod_timer(&info->timer->timer,
+ msecs_to_jiffies(info->timeout * 1000) + jiffies);
+
+- INIT_WORK(&info->timer->work, idletimer_tg_work);
+-
+ return 0;
+
+ out_free_attr:
+@@ -191,7 +191,10 @@ static int idletimer_tg_checkentry(const struct xt_tgchk_param *par)
+ pr_debug("timeout value is zero\n");
+ return -EINVAL;
+ }
+-
++ if (info->timeout >= INT_MAX / 1000) {
++ pr_debug("timeout value is too big\n");
++ return -EINVAL;
++ }
+ if (info->label[0] == '\0' ||
+ strnlen(info->label,
+ MAX_IDLETIMER_LABEL_SIZE) == MAX_IDLETIMER_LABEL_SIZE) {
+diff --git a/net/netfilter/xt_LED.c b/net/netfilter/xt_LED.c
+index 0971634e5444..18d3af5e1098 100644
+--- a/net/netfilter/xt_LED.c
++++ b/net/netfilter/xt_LED.c
+@@ -142,9 +142,10 @@ static int led_tg_check(const struct xt_tgchk_param *par)
+ goto exit_alloc;
+ }
+
+- /* See if we need to set up a timer */
+- if (ledinfo->delay > 0)
+- timer_setup(&ledinternal->timer, led_timeout_callback, 0);
++ /* Since the letinternal timer can be shared between multiple targets,
++ * always set it up, even if the current target does not need it
++ */
++ timer_setup(&ledinternal->timer, led_timeout_callback, 0);
+
+ list_add_tail(&ledinternal->list, &xt_led_triggers);
+
+@@ -181,8 +182,7 @@ static void led_tg_destroy(const struct xt_tgdtor_param *par)
+
+ list_del(&ledinternal->list);
+
+- if (ledinfo->delay > 0)
+- del_timer_sync(&ledinternal->timer);
++ del_timer_sync(&ledinternal->timer);
+
+ led_trigger_unregister(&ledinternal->netfilter_led_trigger);
+
+diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
+index 5da8746f7b88..b8a3e740ffd4 100644
+--- a/net/netfilter/xt_hashlimit.c
++++ b/net/netfilter/xt_hashlimit.c
+@@ -774,7 +774,7 @@ hashlimit_mt_common(const struct sk_buff *skb, struct xt_action_param *par,
+ if (!dh->rateinfo.prev_window &&
+ (dh->rateinfo.current_rate <= dh->rateinfo.burst)) {
+ spin_unlock(&dh->lock);
+- rcu_read_unlock_bh();
++ local_bh_enable();
+ return !(cfg->mode & XT_HASHLIMIT_INVERT);
+ } else {
+ goto overlimit;
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 6451c5013e06..af465e681b9b 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -1369,8 +1369,10 @@ static int smc_create(struct net *net, struct socket *sock, int protocol,
+ smc->use_fallback = false; /* assume rdma capability first */
+ rc = sock_create_kern(net, PF_INET, SOCK_STREAM,
+ IPPROTO_TCP, &smc->clcsock);
+- if (rc)
++ if (rc) {
+ sk_common_release(sk);
++ goto out;
++ }
+ smc->sk.sk_sndbuf = max(smc->clcsock->sk->sk_sndbuf, SMC_BUF_MIN_SIZE);
+ smc->sk.sk_rcvbuf = max(smc->clcsock->sk->sk_rcvbuf, SMC_BUF_MIN_SIZE);
+
+diff --git a/scripts/Makefile.build b/scripts/Makefile.build
+index 47cddf32aeba..4f2b25d43ec9 100644
+--- a/scripts/Makefile.build
++++ b/scripts/Makefile.build
+@@ -256,6 +256,8 @@ __objtool_obj := $(objtree)/tools/objtool/objtool
+
+ objtool_args = $(if $(CONFIG_UNWINDER_ORC),orc generate,check)
+
++objtool_args += $(if $(part-of-module), --module,)
++
+ ifndef CONFIG_FRAME_POINTER
+ objtool_args += --no-fp
+ endif
+@@ -264,6 +266,12 @@ objtool_args += --no-unreachable
+ else
+ objtool_args += $(call cc-ifversion, -lt, 0405, --no-unreachable)
+ endif
++ifdef CONFIG_RETPOLINE
++ifneq ($(RETPOLINE_CFLAGS),)
++ objtool_args += --retpoline
++endif
++endif
++
+
+ ifdef CONFIG_MODVERSIONS
+ objtool_o = $(@D)/.tmp_$(@F)
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index 015aa9dbad86..06cf4c00fe88 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -287,11 +287,11 @@ cmd_dt_S_dtb= \
+ echo '\#include <asm-generic/vmlinux.lds.h>'; \
+ echo '.section .dtb.init.rodata,"a"'; \
+ echo '.balign STRUCT_ALIGNMENT'; \
+- echo '.global __dtb_$(*F)_begin'; \
+- echo '__dtb_$(*F)_begin:'; \
++ echo '.global __dtb_$(subst -,_,$(*F))_begin'; \
++ echo '__dtb_$(subst -,_,$(*F))_begin:'; \
+ echo '.incbin "$<" '; \
+- echo '__dtb_$(*F)_end:'; \
+- echo '.global __dtb_$(*F)_end'; \
++ echo '__dtb_$(subst -,_,$(*F))_end:'; \
++ echo '.global __dtb_$(subst -,_,$(*F))_end'; \
+ echo '.balign STRUCT_ALIGNMENT'; \
+ ) > $@
+
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index a42cbbf2c8d9..35ff97bfd492 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -910,7 +910,8 @@ int snd_seq_dispatch_event(struct snd_seq_event_cell *cell, int atomic, int hop)
+ static int snd_seq_client_enqueue_event(struct snd_seq_client *client,
+ struct snd_seq_event *event,
+ struct file *file, int blocking,
+- int atomic, int hop)
++ int atomic, int hop,
++ struct mutex *mutexp)
+ {
+ struct snd_seq_event_cell *cell;
+ int err;
+@@ -948,7 +949,8 @@ static int snd_seq_client_enqueue_event(struct snd_seq_client *client,
+ return -ENXIO; /* queue is not allocated */
+
+ /* allocate an event cell */
+- err = snd_seq_event_dup(client->pool, event, &cell, !blocking || atomic, file);
++ err = snd_seq_event_dup(client->pool, event, &cell, !blocking || atomic,
++ file, mutexp);
+ if (err < 0)
+ return err;
+
+@@ -1017,12 +1019,11 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf,
+ return -ENXIO;
+
+ /* allocate the pool now if the pool is not allocated yet */
++ mutex_lock(&client->ioctl_mutex);
+ if (client->pool->size > 0 && !snd_seq_write_pool_allocated(client)) {
+- mutex_lock(&client->ioctl_mutex);
+ err = snd_seq_pool_init(client->pool);
+- mutex_unlock(&client->ioctl_mutex);
+ if (err < 0)
+- return -ENOMEM;
++ goto out;
+ }
+
+ /* only process whole events */
+@@ -1073,7 +1074,7 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf,
+ /* ok, enqueue it */
+ err = snd_seq_client_enqueue_event(client, &event, file,
+ !(file->f_flags & O_NONBLOCK),
+- 0, 0);
++ 0, 0, &client->ioctl_mutex);
+ if (err < 0)
+ break;
+
+@@ -1084,6 +1085,8 @@ static ssize_t snd_seq_write(struct file *file, const char __user *buf,
+ written += len;
+ }
+
++ out:
++ mutex_unlock(&client->ioctl_mutex);
+ return written ? written : err;
+ }
+
+@@ -1838,6 +1841,9 @@ static int snd_seq_ioctl_set_client_pool(struct snd_seq_client *client,
+ (! snd_seq_write_pool_allocated(client) ||
+ info->output_pool != client->pool->size)) {
+ if (snd_seq_write_pool_allocated(client)) {
++ /* is the pool in use? */
++ if (atomic_read(&client->pool->counter))
++ return -EBUSY;
+ /* remove all existing cells */
+ snd_seq_pool_mark_closing(client->pool);
+ snd_seq_queue_client_leave_cells(client->number);
+@@ -2260,7 +2266,8 @@ static int kernel_client_enqueue(int client, struct snd_seq_event *ev,
+ if (! cptr->accept_output)
+ result = -EPERM;
+ else /* send it */
+- result = snd_seq_client_enqueue_event(cptr, ev, file, blocking, atomic, hop);
++ result = snd_seq_client_enqueue_event(cptr, ev, file, blocking,
++ atomic, hop, NULL);
+
+ snd_seq_client_unlock(cptr);
+ return result;
+diff --git a/sound/core/seq/seq_fifo.c b/sound/core/seq/seq_fifo.c
+index a8c2822e0198..72c0302a55d2 100644
+--- a/sound/core/seq/seq_fifo.c
++++ b/sound/core/seq/seq_fifo.c
+@@ -125,7 +125,7 @@ int snd_seq_fifo_event_in(struct snd_seq_fifo *f,
+ return -EINVAL;
+
+ snd_use_lock_use(&f->use_lock);
+- err = snd_seq_event_dup(f->pool, event, &cell, 1, NULL); /* always non-blocking */
++ err = snd_seq_event_dup(f->pool, event, &cell, 1, NULL, NULL); /* always non-blocking */
+ if (err < 0) {
+ if ((err == -ENOMEM) || (err == -EAGAIN))
+ atomic_inc(&f->overflow);
+diff --git a/sound/core/seq/seq_memory.c b/sound/core/seq/seq_memory.c
+index f763682584a8..ab1112e90f88 100644
+--- a/sound/core/seq/seq_memory.c
++++ b/sound/core/seq/seq_memory.c
+@@ -220,7 +220,8 @@ void snd_seq_cell_free(struct snd_seq_event_cell * cell)
+ */
+ static int snd_seq_cell_alloc(struct snd_seq_pool *pool,
+ struct snd_seq_event_cell **cellp,
+- int nonblock, struct file *file)
++ int nonblock, struct file *file,
++ struct mutex *mutexp)
+ {
+ struct snd_seq_event_cell *cell;
+ unsigned long flags;
+@@ -244,7 +245,11 @@ static int snd_seq_cell_alloc(struct snd_seq_pool *pool,
+ set_current_state(TASK_INTERRUPTIBLE);
+ add_wait_queue(&pool->output_sleep, &wait);
+ spin_unlock_irq(&pool->lock);
++ if (mutexp)
++ mutex_unlock(mutexp);
+ schedule();
++ if (mutexp)
++ mutex_lock(mutexp);
+ spin_lock_irq(&pool->lock);
+ remove_wait_queue(&pool->output_sleep, &wait);
+ /* interrupted? */
+@@ -287,7 +292,7 @@ static int snd_seq_cell_alloc(struct snd_seq_pool *pool,
+ */
+ int snd_seq_event_dup(struct snd_seq_pool *pool, struct snd_seq_event *event,
+ struct snd_seq_event_cell **cellp, int nonblock,
+- struct file *file)
++ struct file *file, struct mutex *mutexp)
+ {
+ int ncells, err;
+ unsigned int extlen;
+@@ -304,7 +309,7 @@ int snd_seq_event_dup(struct snd_seq_pool *pool, struct snd_seq_event *event,
+ if (ncells >= pool->total_elements)
+ return -ENOMEM;
+
+- err = snd_seq_cell_alloc(pool, &cell, nonblock, file);
++ err = snd_seq_cell_alloc(pool, &cell, nonblock, file, mutexp);
+ if (err < 0)
+ return err;
+
+@@ -330,7 +335,8 @@ int snd_seq_event_dup(struct snd_seq_pool *pool, struct snd_seq_event *event,
+ int size = sizeof(struct snd_seq_event);
+ if (len < size)
+ size = len;
+- err = snd_seq_cell_alloc(pool, &tmp, nonblock, file);
++ err = snd_seq_cell_alloc(pool, &tmp, nonblock, file,
++ mutexp);
+ if (err < 0)
+ goto __error;
+ if (cell->event.data.ext.ptr == NULL)
+diff --git a/sound/core/seq/seq_memory.h b/sound/core/seq/seq_memory.h
+index 32f959c17786..3abe306c394a 100644
+--- a/sound/core/seq/seq_memory.h
++++ b/sound/core/seq/seq_memory.h
+@@ -66,7 +66,8 @@ struct snd_seq_pool {
+ void snd_seq_cell_free(struct snd_seq_event_cell *cell);
+
+ int snd_seq_event_dup(struct snd_seq_pool *pool, struct snd_seq_event *event,
+- struct snd_seq_event_cell **cellp, int nonblock, struct file *file);
++ struct snd_seq_event_cell **cellp, int nonblock,
++ struct file *file, struct mutex *mutexp);
+
+ /* return number of unused (free) cells */
+ static inline int snd_seq_unused_cells(struct snd_seq_pool *pool)
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 37e1cf8218ff..5b4dbcec6de8 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -957,6 +957,8 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK(0x1025, 0x054c, "Acer Aspire 3830TG", CXT_FIXUP_ASPIRE_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x054f, "Acer Aspire 4830T", CXT_FIXUP_ASPIRE_DMIC),
+ SND_PCI_QUIRK(0x103c, 0x8079, "HP EliteBook 840 G3", CXT_FIXUP_HP_DOCK),
++ SND_PCI_QUIRK(0x103c, 0x807C, "HP EliteBook 820 G3", CXT_FIXUP_HP_DOCK),
++ SND_PCI_QUIRK(0x103c, 0x80FD, "HP ProBook 640 G2", CXT_FIXUP_HP_DOCK),
+ SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE),
+ SND_PCI_QUIRK(0x103c, 0x8115, "HP Z1 Gen3", CXT_FIXUP_HP_GATE_MIC),
+ SND_PCI_QUIRK(0x103c, 0x814f, "HP ZBook 15u G3", CXT_FIXUP_MUTE_LED_GPIO),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 8fe38c18e29d..18bab5ffbe4a 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5152,6 +5152,16 @@ static void alc298_fixup_speaker_volume(struct hda_codec *codec,
+ }
+ }
+
++/* disable DAC3 (0x06) selection on NID 0x17 as it has no volume amp control */
++static void alc295_fixup_disable_dac3(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++ hda_nid_t conn[2] = { 0x02, 0x03 };
++ snd_hda_override_conn_list(codec, 0x17, 2, conn);
++ }
++}
++
+ /* Hook to update amp GPIO4 for automute */
+ static void alc280_hp_gpio4_automute_hook(struct hda_codec *codec,
+ struct hda_jack_callback *jack)
+@@ -5344,6 +5354,7 @@ enum {
+ ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY,
+ ALC255_FIXUP_DELL_SPK_NOISE,
+ ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
++ ALC295_FIXUP_DISABLE_DAC3,
+ ALC280_FIXUP_HP_HEADSET_MIC,
+ ALC221_FIXUP_HP_FRONT_MIC,
+ ALC292_FIXUP_TPT460,
+@@ -5358,10 +5369,12 @@ enum {
+ ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE,
+ ALC233_FIXUP_LENOVO_MULTI_CODECS,
+ ALC294_FIXUP_LENOVO_MIC_LOCATION,
++ ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE,
+ ALC700_FIXUP_INTEL_REFERENCE,
+ ALC274_FIXUP_DELL_BIND_DACS,
+ ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
+ ALC298_FIXUP_TPT470_DOCK,
++ ALC255_FIXUP_DUMMY_LINEOUT_VERB,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -6076,6 +6089,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC298_FIXUP_DELL_AIO_MIC_NO_PRESENCE,
+ },
++ [ALC295_FIXUP_DISABLE_DAC3] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc295_fixup_disable_dac3,
++ },
+ [ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -6161,6 +6178,18 @@ static const struct hda_fixup alc269_fixups[] = {
+ { }
+ },
+ },
++ [ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x16, 0x0101102f }, /* Rear Headset HP */
++ { 0x19, 0x02a1913c }, /* use as Front headset mic, without its own jack detect */
++ { 0x1a, 0x01a19030 }, /* Rear Headset MIC */
++ { 0x1b, 0x02011020 },
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
++ },
+ [ALC700_FIXUP_INTEL_REFERENCE] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+@@ -6197,6 +6226,15 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC293_FIXUP_LENOVO_SPK_NOISE
+ },
++ [ALC255_FIXUP_DUMMY_LINEOUT_VERB] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x14, 0x0201101f },
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -6245,10 +6283,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE),
+ SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
+ SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
++ SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3),
+ SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
++ SND_PCI_QUIRK(0x1028, 0x080c, "Dell WYSE", ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x082a, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
+ SND_PCI_QUIRK(0x1028, 0x084b, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+ SND_PCI_QUIRK(0x1028, 0x084e, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
++ SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB),
+ SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x1586, "HP", ALC269_FIXUP_HP_MUTE_LED_MIC2),
+@@ -6386,9 +6427,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x2245, "Thinkpad T470", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2246, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x2247, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x2249, "Thinkpad", ALC292_FIXUP_TPT460),
+ SND_PCI_QUIRK(0x17aa, 0x224b, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x224c, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x224d, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
++ SND_PCI_QUIRK(0x17aa, 0x225d, "Thinkpad T480", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+@@ -6750,7 +6793,7 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ {0x12, 0x90a60120},
+ {0x14, 0x90170110},
+ {0x21, 0x0321101f}),
+- SND_HDA_PIN_QUIRK(0x10ec0289, 0x1028, "Dell", ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
++ SND_HDA_PIN_QUIRK(0x10ec0289, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
+ {0x12, 0xb7a60130},
+ {0x14, 0x90170110},
+ {0x21, 0x04211020}),
+diff --git a/tools/objtool/builtin-check.c b/tools/objtool/builtin-check.c
+index 57254f5b2779..694abc628e9b 100644
+--- a/tools/objtool/builtin-check.c
++++ b/tools/objtool/builtin-check.c
+@@ -29,7 +29,7 @@
+ #include "builtin.h"
+ #include "check.h"
+
+-bool no_fp, no_unreachable;
++bool no_fp, no_unreachable, retpoline, module;
+
+ static const char * const check_usage[] = {
+ "objtool check [<options>] file.o",
+@@ -39,6 +39,8 @@ static const char * const check_usage[] = {
+ const struct option check_options[] = {
+ OPT_BOOLEAN('f', "no-fp", &no_fp, "Skip frame pointer validation"),
+ OPT_BOOLEAN('u', "no-unreachable", &no_unreachable, "Skip 'unreachable instruction' warnings"),
++ OPT_BOOLEAN('r', "retpoline", &retpoline, "Validate retpoline assumptions"),
++ OPT_BOOLEAN('m', "module", &module, "Indicates the object will be part of a kernel module"),
+ OPT_END(),
+ };
+
+@@ -53,5 +55,5 @@ int cmd_check(int argc, const char **argv)
+
+ objname = argv[0];
+
+- return check(objname, no_fp, no_unreachable, false);
++ return check(objname, false);
+ }
+diff --git a/tools/objtool/builtin-orc.c b/tools/objtool/builtin-orc.c
+index 91e8e19ff5e0..77ea2b97117d 100644
+--- a/tools/objtool/builtin-orc.c
++++ b/tools/objtool/builtin-orc.c
+@@ -25,7 +25,6 @@
+ */
+
+ #include <string.h>
+-#include <subcmd/parse-options.h>
+ #include "builtin.h"
+ #include "check.h"
+
+@@ -36,9 +35,6 @@ static const char *orc_usage[] = {
+ NULL,
+ };
+
+-extern const struct option check_options[];
+-extern bool no_fp, no_unreachable;
+-
+ int cmd_orc(int argc, const char **argv)
+ {
+ const char *objname;
+@@ -54,7 +50,7 @@ int cmd_orc(int argc, const char **argv)
+
+ objname = argv[0];
+
+- return check(objname, no_fp, no_unreachable, true);
++ return check(objname, true);
+ }
+
+ if (!strcmp(argv[0], "dump")) {
+diff --git a/tools/objtool/builtin.h b/tools/objtool/builtin.h
+index dd526067fed5..28ff40e19a14 100644
+--- a/tools/objtool/builtin.h
++++ b/tools/objtool/builtin.h
+@@ -17,6 +17,11 @@
+ #ifndef _BUILTIN_H
+ #define _BUILTIN_H
+
++#include <subcmd/parse-options.h>
++
++extern const struct option check_options[];
++extern bool no_fp, no_unreachable, retpoline, module;
++
+ extern int cmd_check(int argc, const char **argv);
+ extern int cmd_orc(int argc, const char **argv);
+
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index c7fb5c2392ee..9d01d0b1084e 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -18,6 +18,7 @@
+ #include <string.h>
+ #include <stdlib.h>
+
++#include "builtin.h"
+ #include "check.h"
+ #include "elf.h"
+ #include "special.h"
+@@ -33,7 +34,6 @@ struct alternative {
+ };
+
+ const char *objname;
+-static bool no_fp;
+ struct cfi_state initial_func_cfi;
+
+ struct instruction *find_insn(struct objtool_file *file,
+@@ -496,6 +496,7 @@ static int add_jump_destinations(struct objtool_file *file)
+ * disguise, so convert them accordingly.
+ */
+ insn->type = INSN_JUMP_DYNAMIC;
++ insn->retpoline_safe = true;
+ continue;
+ } else {
+ /* sibling call */
+@@ -547,7 +548,8 @@ static int add_call_destinations(struct objtool_file *file)
+ if (!insn->call_dest && !insn->ignore) {
+ WARN_FUNC("unsupported intra-function call",
+ insn->sec, insn->offset);
+- WARN("If this is a retpoline, please patch it in with alternatives and annotate it with ANNOTATE_NOSPEC_ALTERNATIVE.");
++ if (retpoline)
++ WARN("If this is a retpoline, please patch it in with alternatives and annotate it with ANNOTATE_NOSPEC_ALTERNATIVE.");
+ return -1;
+ }
+
+@@ -922,7 +924,11 @@ static struct rela *find_switch_table(struct objtool_file *file,
+ if (find_symbol_containing(file->rodata, text_rela->addend))
+ continue;
+
+- return find_rela_by_dest(file->rodata, text_rela->addend);
++ rodata_rela = find_rela_by_dest(file->rodata, text_rela->addend);
++ if (!rodata_rela)
++ continue;
++
++ return rodata_rela;
+ }
+
+ return NULL;
+@@ -1107,6 +1113,41 @@ static int read_unwind_hints(struct objtool_file *file)
+ return 0;
+ }
+
++static int read_retpoline_hints(struct objtool_file *file)
++{
++ struct section *sec;
++ struct instruction *insn;
++ struct rela *rela;
++
++ sec = find_section_by_name(file->elf, ".rela.discard.retpoline_safe");
++ if (!sec)
++ return 0;
++
++ list_for_each_entry(rela, &sec->rela_list, list) {
++ if (rela->sym->type != STT_SECTION) {
++ WARN("unexpected relocation symbol type in %s", sec->name);
++ return -1;
++ }
++
++ insn = find_insn(file, rela->sym->sec, rela->addend);
++ if (!insn) {
++ WARN("bad .discard.retpoline_safe entry");
++ return -1;
++ }
++
++ if (insn->type != INSN_JUMP_DYNAMIC &&
++ insn->type != INSN_CALL_DYNAMIC) {
++ WARN_FUNC("retpoline_safe hint not an indirect jump/call",
++ insn->sec, insn->offset);
++ return -1;
++ }
++
++ insn->retpoline_safe = true;
++ }
++
++ return 0;
++}
++
+ static int decode_sections(struct objtool_file *file)
+ {
+ int ret;
+@@ -1145,6 +1186,10 @@ static int decode_sections(struct objtool_file *file)
+ if (ret)
+ return ret;
+
++ ret = read_retpoline_hints(file);
++ if (ret)
++ return ret;
++
+ return 0;
+ }
+
+@@ -1890,6 +1935,38 @@ static int validate_unwind_hints(struct objtool_file *file)
+ return warnings;
+ }
+
++static int validate_retpoline(struct objtool_file *file)
++{
++ struct instruction *insn;
++ int warnings = 0;
++
++ for_each_insn(file, insn) {
++ if (insn->type != INSN_JUMP_DYNAMIC &&
++ insn->type != INSN_CALL_DYNAMIC)
++ continue;
++
++ if (insn->retpoline_safe)
++ continue;
++
++ /*
++ * .init.text code is ran before userspace and thus doesn't
++ * strictly need retpolines, except for modules which are
++ * loaded late, they very much do need retpoline in their
++ * .init.text
++ */
++ if (!strcmp(insn->sec->name, ".init.text") && !module)
++ continue;
++
++ WARN_FUNC("indirect %s found in RETPOLINE build",
++ insn->sec, insn->offset,
++ insn->type == INSN_JUMP_DYNAMIC ? "jump" : "call");
++
++ warnings++;
++ }
++
++ return warnings;
++}
++
+ static bool is_kasan_insn(struct instruction *insn)
+ {
+ return (insn->type == INSN_CALL &&
+@@ -2021,13 +2098,12 @@ static void cleanup(struct objtool_file *file)
+ elf_close(file->elf);
+ }
+
+-int check(const char *_objname, bool _no_fp, bool no_unreachable, bool orc)
++int check(const char *_objname, bool orc)
+ {
+ struct objtool_file file;
+ int ret, warnings = 0;
+
+ objname = _objname;
+- no_fp = _no_fp;
+
+ file.elf = elf_open(objname, orc ? O_RDWR : O_RDONLY);
+ if (!file.elf)
+@@ -2051,6 +2127,13 @@ int check(const char *_objname, bool _no_fp, bool no_unreachable, bool orc)
+ if (list_empty(&file.insn_list))
+ goto out;
+
++ if (retpoline) {
++ ret = validate_retpoline(&file);
++ if (ret < 0)
++ return ret;
++ warnings += ret;
++ }
++
+ ret = validate_functions(&file);
+ if (ret < 0)
+ goto out;
+diff --git a/tools/objtool/check.h b/tools/objtool/check.h
+index 23a1d065cae1..c6b68fcb926f 100644
+--- a/tools/objtool/check.h
++++ b/tools/objtool/check.h
+@@ -45,6 +45,7 @@ struct instruction {
+ unsigned char type;
+ unsigned long immediate;
+ bool alt_group, visited, dead_end, ignore, hint, save, restore, ignore_alts;
++ bool retpoline_safe;
+ struct symbol *call_dest;
+ struct instruction *jump_dest;
+ struct instruction *first_jump_src;
+@@ -63,7 +64,7 @@ struct objtool_file {
+ bool ignore_unreachables, c_file, hints;
+ };
+
+-int check(const char *objname, bool no_fp, bool no_unreachable, bool orc);
++int check(const char *objname, bool orc);
+
+ struct instruction *find_insn(struct objtool_file *file,
+ struct section *sec, unsigned long offset);
+diff --git a/tools/perf/util/trigger.h b/tools/perf/util/trigger.h
+index 370138e7e35c..88223bc7c82b 100644
+--- a/tools/perf/util/trigger.h
++++ b/tools/perf/util/trigger.h
+@@ -12,7 +12,7 @@
+ * States and transits:
+ *
+ *
+- * OFF--(on)--> READY --(hit)--> HIT
++ * OFF--> ON --> READY --(hit)--> HIT
+ * ^ |
+ * | (ready)
+ * | |
+@@ -27,8 +27,9 @@ struct trigger {
+ volatile enum {
+ TRIGGER_ERROR = -2,
+ TRIGGER_OFF = -1,
+- TRIGGER_READY = 0,
+- TRIGGER_HIT = 1,
++ TRIGGER_ON = 0,
++ TRIGGER_READY = 1,
++ TRIGGER_HIT = 2,
+ } state;
+ const char *name;
+ };
+@@ -50,7 +51,7 @@ static inline bool trigger_is_error(struct trigger *t)
+ static inline void trigger_on(struct trigger *t)
+ {
+ TRIGGER_WARN_ONCE(t, TRIGGER_OFF);
+- t->state = TRIGGER_READY;
++ t->state = TRIGGER_ON;
+ }
+
+ static inline void trigger_ready(struct trigger *t)
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-03-19 12:02 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2018-03-19 12:02 UTC
To: gentoo-commits
commit: 89e744181b46d6751c181899d789820432d371db
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Mar 19 12:02:08 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Mar 19 12:02:08 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=89e74418
Linux patch 4.15.11
0000_README | 4 +
1010_linux-4.15.11.patch | 3788 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3792 insertions(+)
diff --git a/0000_README b/0000_README
index 172ed03..6b57403 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 1009_linux-4.15.10.patch
From: http://www.kernel.org
Desc: Linux 4.15.10
+Patch: 1010_linux-4.15.11.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.11
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1010_linux-4.15.11.patch b/1010_linux-4.15.11.patch
new file mode 100644
index 0000000..f914c5d
--- /dev/null
+++ b/1010_linux-4.15.11.patch
@@ -0,0 +1,3788 @@
+diff --git a/Documentation/devicetree/bindings/serial/fsl-imx-uart.txt b/Documentation/devicetree/bindings/serial/fsl-imx-uart.txt
+index 860a9559839a..89b92c1314cf 100644
+--- a/Documentation/devicetree/bindings/serial/fsl-imx-uart.txt
++++ b/Documentation/devicetree/bindings/serial/fsl-imx-uart.txt
+@@ -9,7 +9,8 @@ Optional properties:
+ - fsl,irda-mode : Indicate the uart supports irda mode
+ - fsl,dte-mode : Indicate the uart works in DTE mode. The uart works
+ in DCE mode by default.
+-- rs485-rts-delay, rs485-rx-during-tx, linux,rs485-enabled-at-boot-time: see rs485.txt
++- rs485-rts-delay, rs485-rts-active-low, rs485-rx-during-tx,
++ linux,rs485-enabled-at-boot-time: see rs485.txt
+
+ Please check Documentation/devicetree/bindings/serial/serial.txt
+ for the complete list of generic properties.
+diff --git a/Documentation/devicetree/bindings/serial/fsl-lpuart.txt b/Documentation/devicetree/bindings/serial/fsl-lpuart.txt
+index 59567b51cf09..6bd3f2e93d61 100644
+--- a/Documentation/devicetree/bindings/serial/fsl-lpuart.txt
++++ b/Documentation/devicetree/bindings/serial/fsl-lpuart.txt
+@@ -16,7 +16,8 @@ Required properties:
+ Optional properties:
+ - dmas: A list of two dma specifiers, one for each entry in dma-names.
+ - dma-names: should contain "tx" and "rx".
+-- rs485-rts-delay, rs485-rx-during-tx, linux,rs485-enabled-at-boot-time: see rs485.txt
++- rs485-rts-delay, rs485-rts-active-low, rs485-rx-during-tx,
++ linux,rs485-enabled-at-boot-time: see rs485.txt
+
+ Note: Optional properties for DMA support. Write them both or both not.
+
+diff --git a/Documentation/devicetree/bindings/serial/omap_serial.txt b/Documentation/devicetree/bindings/serial/omap_serial.txt
+index 43eac675f21f..4b0f05adb228 100644
+--- a/Documentation/devicetree/bindings/serial/omap_serial.txt
++++ b/Documentation/devicetree/bindings/serial/omap_serial.txt
+@@ -20,6 +20,7 @@ Optional properties:
+ node and a DMA channel number.
+ - dma-names : "rx" for receive channel, "tx" for transmit channel.
+ - rs485-rts-delay, rs485-rx-during-tx, linux,rs485-enabled-at-boot-time: see rs485.txt
++- rs485-rts-active-high: drive RTS high when sending (default is low).
+
+ Example:
+
+diff --git a/Documentation/devicetree/bindings/serial/rs485.txt b/Documentation/devicetree/bindings/serial/rs485.txt
+index b8415936dfdb..b7c29f74ebb2 100644
+--- a/Documentation/devicetree/bindings/serial/rs485.txt
++++ b/Documentation/devicetree/bindings/serial/rs485.txt
+@@ -12,6 +12,7 @@ Optional properties:
+ * b is the delay between end of data sent and rts signal in milliseconds
+ it corresponds to the delay after sending data and actual release of the line.
+ If this property is not specified, <0 0> is assumed.
++- rs485-rts-active-low: drive RTS low when sending (default is high).
+ - linux,rs485-enabled-at-boot-time: empty property telling to enable the rs485
+ feature at boot time. It can be disabled later with proper ioctl.
+ - rs485-rx-during-tx: empty property that enables the receiving of data even
+diff --git a/Documentation/devicetree/bindings/usb/usb-xhci.txt b/Documentation/devicetree/bindings/usb/usb-xhci.txt
+index ae6e484a8d7c..4143ef222540 100644
+--- a/Documentation/devicetree/bindings/usb/usb-xhci.txt
++++ b/Documentation/devicetree/bindings/usb/usb-xhci.txt
+@@ -12,6 +12,7 @@ Required properties:
+ - "renesas,xhci-r8a7793" for r8a7793 SoC
+ - "renesas,xhci-r8a7795" for r8a7795 SoC
+ - "renesas,xhci-r8a7796" for r8a7796 SoC
++ - "renesas,xhci-r8a77965" for r8a77965 SoC
+ - "renesas,rcar-gen2-xhci" for a generic R-Car Gen2 compatible device
+ - "renesas,rcar-gen3-xhci" for a generic R-Car Gen3 compatible device
+ - "xhci-platform" (deprecated)
+diff --git a/Makefile b/Makefile
+index 7eed0f168b13..74c0f5e8dd55 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arm/boot/dts/am335x-pepper.dts b/arch/arm/boot/dts/am335x-pepper.dts
+index 03c7d77023c6..9fb7426070ce 100644
+--- a/arch/arm/boot/dts/am335x-pepper.dts
++++ b/arch/arm/boot/dts/am335x-pepper.dts
+@@ -139,7 +139,7 @@
+ &audio_codec {
+ status = "okay";
+
+- gpio-reset = <&gpio1 16 GPIO_ACTIVE_LOW>;
++ reset-gpios = <&gpio1 16 GPIO_ACTIVE_LOW>;
+ AVDD-supply = <&ldo3_reg>;
+ IOVDD-supply = <&ldo3_reg>;
+ DRVDD-supply = <&ldo3_reg>;
+diff --git a/arch/arm/boot/dts/exynos4412-trats2.dts b/arch/arm/boot/dts/exynos4412-trats2.dts
+index 220cdf109405..9f4672ba9943 100644
+--- a/arch/arm/boot/dts/exynos4412-trats2.dts
++++ b/arch/arm/boot/dts/exynos4412-trats2.dts
+@@ -454,7 +454,7 @@
+ reg = <0>;
+ vdd3-supply = <&lcd_vdd3_reg>;
+ vci-supply = <&ldo25_reg>;
+- reset-gpios = <&gpy4 5 GPIO_ACTIVE_HIGH>;
++ reset-gpios = <&gpf2 1 GPIO_ACTIVE_HIGH>;
+ power-on-delay= <50>;
+ reset-delay = <100>;
+ init-delay = <100>;
+diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts
+index 669c51c00c00..5362139d5312 100644
+--- a/arch/arm/boot/dts/omap3-n900.dts
++++ b/arch/arm/boot/dts/omap3-n900.dts
+@@ -558,7 +558,7 @@
+ tlv320aic3x: tlv320aic3x@18 {
+ compatible = "ti,tlv320aic3x";
+ reg = <0x18>;
+- gpio-reset = <&gpio2 28 GPIO_ACTIVE_HIGH>; /* 60 */
++ reset-gpios = <&gpio2 28 GPIO_ACTIVE_LOW>; /* 60 */
+ ai3x-gpio-func = <
+ 0 /* AIC3X_GPIO1_FUNC_DISABLED */
+ 5 /* AIC3X_GPIO2_FUNC_DIGITAL_MIC_INPUT */
+@@ -575,7 +575,7 @@
+ tlv320aic3x_aux: tlv320aic3x@19 {
+ compatible = "ti,tlv320aic3x";
+ reg = <0x19>;
+- gpio-reset = <&gpio2 28 GPIO_ACTIVE_HIGH>; /* 60 */
++ reset-gpios = <&gpio2 28 GPIO_ACTIVE_LOW>; /* 60 */
+
+ AVDD-supply = <&vmmc2>;
+ DRVDD-supply = <&vmmc2>;
+diff --git a/arch/arm/boot/dts/r8a7791-koelsch.dts b/arch/arm/boot/dts/r8a7791-koelsch.dts
+index e164eda69baf..4126de417922 100644
+--- a/arch/arm/boot/dts/r8a7791-koelsch.dts
++++ b/arch/arm/boot/dts/r8a7791-koelsch.dts
+@@ -278,6 +278,12 @@
+ };
+ };
+
++ cec_clock: cec-clock {
++ compatible = "fixed-clock";
++ #clock-cells = <0>;
++ clock-frequency = <12000000>;
++ };
++
+ hdmi-out {
+ compatible = "hdmi-connector";
+ type = "a";
+@@ -640,12 +646,6 @@
+ };
+ };
+
+- cec_clock: cec-clock {
+- compatible = "fixed-clock";
+- #clock-cells = <0>;
+- clock-frequency = <12000000>;
+- };
+-
+ hdmi@39 {
+ compatible = "adi,adv7511w";
+ reg = <0x39>;
+diff --git a/arch/arm64/boot/dts/renesas/salvator-common.dtsi b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+index dbe2648649db..7c49b3e8c8fb 100644
+--- a/arch/arm64/boot/dts/renesas/salvator-common.dtsi
++++ b/arch/arm64/boot/dts/renesas/salvator-common.dtsi
+@@ -263,6 +263,7 @@
+ reg = <0>;
+ interrupt-parent = <&gpio2>;
+ interrupts = <11 IRQ_TYPE_LEVEL_LOW>;
++ reset-gpios = <&gpio2 10 GPIO_ACTIVE_LOW>;
+ };
+ };
+
+diff --git a/arch/powerpc/include/asm/code-patching.h b/arch/powerpc/include/asm/code-patching.h
+index abef812de7f8..2c895e8d07f7 100644
+--- a/arch/powerpc/include/asm/code-patching.h
++++ b/arch/powerpc/include/asm/code-patching.h
+@@ -33,6 +33,7 @@ int patch_branch(unsigned int *addr, unsigned long target, int flags);
+ int patch_instruction(unsigned int *addr, unsigned int instr);
+
+ int instr_is_relative_branch(unsigned int instr);
++int instr_is_relative_link_branch(unsigned int instr);
+ int instr_is_branch_to_addr(const unsigned int *instr, unsigned long addr);
+ unsigned long branch_target(const unsigned int *instr);
+ unsigned int translate_branch(const unsigned int *dest,
+diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
+index 735cfa35298a..998f7b7aaa9e 100644
+--- a/arch/powerpc/include/asm/kvm_book3s_64.h
++++ b/arch/powerpc/include/asm/kvm_book3s_64.h
+@@ -122,13 +122,13 @@ static inline int kvmppc_hpte_page_shifts(unsigned long h, unsigned long l)
+ lphi = (l >> 16) & 0xf;
+ switch ((l >> 12) & 0xf) {
+ case 0:
+- return !lphi ? 24 : -1; /* 16MB */
++ return !lphi ? 24 : 0; /* 16MB */
+ break;
+ case 1:
+ return 16; /* 64kB */
+ break;
+ case 3:
+- return !lphi ? 34 : -1; /* 16GB */
++ return !lphi ? 34 : 0; /* 16GB */
+ break;
+ case 7:
+ return (16 << 8) + 12; /* 64kB in 4kB */
+@@ -140,7 +140,7 @@ static inline int kvmppc_hpte_page_shifts(unsigned long h, unsigned long l)
+ return (24 << 8) + 12; /* 16MB in 4kB */
+ break;
+ }
+- return -1;
++ return 0;
+ }
+
+ static inline int kvmppc_hpte_base_page_shift(unsigned long h, unsigned long l)
+@@ -159,7 +159,11 @@ static inline int kvmppc_hpte_actual_page_shift(unsigned long h, unsigned long l
+
+ static inline unsigned long kvmppc_actual_pgsz(unsigned long v, unsigned long r)
+ {
+- return 1ul << kvmppc_hpte_actual_page_shift(v, r);
++ int shift = kvmppc_hpte_actual_page_shift(v, r);
++
++ if (shift)
++ return 1ul << shift;
++ return 0;
+ }
+
+ static inline int kvmppc_pgsize_lp_encoding(int base_shift, int actual_shift)
+@@ -232,7 +236,7 @@ static inline unsigned long compute_tlbie_rb(unsigned long v, unsigned long r,
+ va_low ^= v >> (SID_SHIFT_1T - 16);
+ va_low &= 0x7ff;
+
+- if (b_pgshift == 12) {
++ if (b_pgshift <= 12) {
+ if (a_pgshift > 12) {
+ sllp = (a_pgshift == 16) ? 5 : 4;
+ rb |= sllp << 5; /* AP field */
+diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
+index 2748584b767d..fde89f04f276 100644
+--- a/arch/powerpc/kernel/entry_64.S
++++ b/arch/powerpc/kernel/entry_64.S
+@@ -939,9 +939,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
+ beq 1f
+ rlwinm r7,r7,0,~PACA_IRQ_HARD_DIS
+ stb r7,PACAIRQHAPPENED(r13)
+-1: li r0,0
+- stb r0,PACASOFTIRQEN(r13);
+- TRACE_DISABLE_INTS
++1:
++#if defined(CONFIG_TRACE_IRQFLAGS) && defined(CONFIG_BUG)
++ /* The interrupt should not have soft enabled. */
++ lbz r7,PACASOFTIRQEN(r13)
++1: tdnei r7,0
++ EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
++#endif
+ b .Ldo_restore
+
+ /*
+diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c
+index 759104b99f9f..180c16f04063 100644
+--- a/arch/powerpc/kernel/module_64.c
++++ b/arch/powerpc/kernel/module_64.c
+@@ -487,7 +487,17 @@ static bool is_early_mcount_callsite(u32 *instruction)
+ restore r2. */
+ static int restore_r2(u32 *instruction, struct module *me)
+ {
+- if (is_early_mcount_callsite(instruction - 1))
++ u32 *prev_insn = instruction - 1;
++
++ if (is_early_mcount_callsite(prev_insn))
++ return 1;
++
++ /*
++ * Make sure the branch isn't a sibling call. Sibling calls aren't
++ * "link" branches and they don't return, so they don't need the r2
++ * restore afterwards.
++ */
++ if (!instr_is_relative_link_branch(*prev_insn))
+ return 1;
+
+ if (*instruction != PPC_INST_NOP) {
+diff --git a/arch/powerpc/kvm/book3s_64_mmu_radix.c b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+index 58618f644c56..0c854816e653 100644
+--- a/arch/powerpc/kvm/book3s_64_mmu_radix.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_radix.c
+@@ -573,7 +573,7 @@ long kvmppc_hv_get_dirty_log_radix(struct kvm *kvm,
+ j = i + 1;
+ if (npages) {
+ set_dirty_bits(map, i, npages);
+- i = j + npages;
++ j = i + npages;
+ }
+ }
+ return 0;
+diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
+index d469224c4ada..096d4e4d31e6 100644
+--- a/arch/powerpc/lib/code-patching.c
++++ b/arch/powerpc/lib/code-patching.c
+@@ -302,6 +302,11 @@ int instr_is_relative_branch(unsigned int instr)
+ return instr_is_branch_iform(instr) || instr_is_branch_bform(instr);
+ }
+
++int instr_is_relative_link_branch(unsigned int instr)
++{
++ return instr_is_relative_branch(instr) && (instr & BRANCH_SET_LINK);
++}
++
+ static unsigned long branch_iform_target(const unsigned int *instr)
+ {
+ signed long imm;
+diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
+index 1f790cf9d38f..3b7427aa7d85 100644
+--- a/arch/x86/kernel/machine_kexec_64.c
++++ b/arch/x86/kernel/machine_kexec_64.c
+@@ -542,6 +542,7 @@ int arch_kexec_apply_relocations_add(const Elf64_Ehdr *ehdr,
+ goto overflow;
+ break;
+ case R_X86_64_PC32:
++ case R_X86_64_PLT32:
+ value -= (u64)address;
+ *(u32 *)location = value;
+ break;
+diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
+index da0c160e5589..f58336af095c 100644
+--- a/arch/x86/kernel/module.c
++++ b/arch/x86/kernel/module.c
+@@ -191,6 +191,7 @@ int apply_relocate_add(Elf64_Shdr *sechdrs,
+ goto overflow;
+ break;
+ case R_X86_64_PC32:
++ case R_X86_64_PLT32:
+ if (*(u32 *)loc != 0)
+ goto invalid_relocation;
+ val -= (u64)loc;
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index e080dbe55360..fe2cb4cfa75b 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -4951,6 +4951,16 @@ int kvm_mmu_page_fault(struct kvm_vcpu *vcpu, gva_t cr2, u64 error_code,
+ if (mmio_info_in_cache(vcpu, cr2, direct))
+ emulation_type = 0;
+ emulate:
++ /*
++ * On AMD platforms, under certain conditions insn_len may be zero on #NPF.
++ * This can happen if a guest gets a page-fault on data access but the HW
++ * table walker is not able to read the instruction page (e.g instruction
++ * page is not present in memory). In those cases we simply restart the
++ * guest.
++ */
++ if (unlikely(insn && !insn_len))
++ return 1;
++
+ er = x86_emulate_instruction(vcpu, cr2, emulation_type, insn, insn_len);
+
+ switch (er) {
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 3505afabce5d..76bbca2c8159 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -2178,7 +2178,8 @@ static int pf_interception(struct vcpu_svm *svm)
+ u64 error_code = svm->vmcb->control.exit_info_1;
+
+ return kvm_handle_page_fault(&svm->vcpu, error_code, fault_address,
+- svm->vmcb->control.insn_bytes,
++ static_cpu_has(X86_FEATURE_DECODEASSISTS) ?
++ svm->vmcb->control.insn_bytes : NULL,
+ svm->vmcb->control.insn_len);
+ }
+
+@@ -2189,7 +2190,8 @@ static int npf_interception(struct vcpu_svm *svm)
+
+ trace_kvm_page_fault(fault_address, error_code);
+ return kvm_mmu_page_fault(&svm->vcpu, fault_address, error_code,
+- svm->vmcb->control.insn_bytes,
++ static_cpu_has(X86_FEATURE_DECODEASSISTS) ?
++ svm->vmcb->control.insn_bytes : NULL,
+ svm->vmcb->control.insn_len);
+ }
+
+diff --git a/arch/x86/tools/relocs.c b/arch/x86/tools/relocs.c
+index 5d73c443e778..220e97841e49 100644
+--- a/arch/x86/tools/relocs.c
++++ b/arch/x86/tools/relocs.c
+@@ -770,9 +770,12 @@ static int do_reloc64(struct section *sec, Elf_Rel *rel, ElfW(Sym) *sym,
+ break;
+
+ case R_X86_64_PC32:
++ case R_X86_64_PLT32:
+ /*
+ * PC relative relocations don't need to be adjusted unless
+ * referencing a percpu symbol.
++ *
++ * NB: R_X86_64_PLT32 can be treated as R_X86_64_PC32.
+ */
+ if (is_percpu_sym(sym, symname))
+ add_reloc(&relocs32neg, offset);
+diff --git a/crypto/ecc.c b/crypto/ecc.c
+index 633a9bcdc574..18f32f2a5e1c 100644
+--- a/crypto/ecc.c
++++ b/crypto/ecc.c
+@@ -964,7 +964,7 @@ int ecc_gen_privkey(unsigned int curve_id, unsigned int ndigits, u64 *privkey)
+ * DRBG with a security strength of 256.
+ */
+ if (crypto_get_default_rng())
+- err = -EFAULT;
++ return -EFAULT;
+
+ err = crypto_rng_get_bytes(crypto_default_rng, (u8 *)priv, nbytes);
+ crypto_put_default_rng();
+diff --git a/crypto/keywrap.c b/crypto/keywrap.c
+index 744e35134c45..ec5c6a087c90 100644
+--- a/crypto/keywrap.c
++++ b/crypto/keywrap.c
+@@ -188,7 +188,7 @@ static int crypto_kw_decrypt(struct blkcipher_desc *desc,
+ }
+
+ /* Perform authentication check */
+- if (block.A != cpu_to_be64(0xa6a6a6a6a6a6a6a6))
++ if (block.A != cpu_to_be64(0xa6a6a6a6a6a6a6a6ULL))
+ ret = -EBADMSG;
+
+ memzero_explicit(&block, sizeof(struct crypto_kw_block));
+@@ -221,7 +221,7 @@ static int crypto_kw_encrypt(struct blkcipher_desc *desc,
+ * Place the predefined IV into block A -- for encrypt, the caller
+ * does not need to provide an IV, but he needs to fetch the final IV.
+ */
+- block.A = cpu_to_be64(0xa6a6a6a6a6a6a6a6);
++ block.A = cpu_to_be64(0xa6a6a6a6a6a6a6a6ULL);
+
+ /*
+ * src scatterlist is read-only. dst scatterlist is r/w. During the
+diff --git a/drivers/base/Kconfig b/drivers/base/Kconfig
+index 2415ad9f6dd4..49fd50fccd48 100644
+--- a/drivers/base/Kconfig
++++ b/drivers/base/Kconfig
+@@ -249,6 +249,7 @@ config DMA_SHARED_BUFFER
+ bool
+ default n
+ select ANON_INODES
++ select IRQ_WORK
+ help
+ This option enables the framework for buffer-sharing between
+ multiple drivers. A buffer is associated with a file using driver
+diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c
+index 9b6b6023193b..dde7caac7f9f 100644
+--- a/drivers/char/agp/intel-gtt.c
++++ b/drivers/char/agp/intel-gtt.c
+@@ -872,6 +872,8 @@ void intel_gtt_insert_sg_entries(struct sg_table *st,
+ }
+ }
+ wmb();
++ if (intel_private.driver->chipset_flush)
++ intel_private.driver->chipset_flush();
+ }
+ EXPORT_SYMBOL(intel_gtt_insert_sg_entries);
+
+diff --git a/drivers/clk/meson/gxbb.c b/drivers/clk/meson/gxbb.c
+index ae385310e980..2ac9f3fa9578 100644
+--- a/drivers/clk/meson/gxbb.c
++++ b/drivers/clk/meson/gxbb.c
+@@ -1386,7 +1386,7 @@ static MESON_GATE(gxbb_pl301, HHI_GCLK_MPEG0, 6);
+ static MESON_GATE(gxbb_periphs, HHI_GCLK_MPEG0, 7);
+ static MESON_GATE(gxbb_spicc, HHI_GCLK_MPEG0, 8);
+ static MESON_GATE(gxbb_i2c, HHI_GCLK_MPEG0, 9);
+-static MESON_GATE(gxbb_sar_adc, HHI_GCLK_MPEG0, 10);
++static MESON_GATE(gxbb_sana, HHI_GCLK_MPEG0, 10);
+ static MESON_GATE(gxbb_smart_card, HHI_GCLK_MPEG0, 11);
+ static MESON_GATE(gxbb_rng0, HHI_GCLK_MPEG0, 12);
+ static MESON_GATE(gxbb_uart0, HHI_GCLK_MPEG0, 13);
+@@ -1437,7 +1437,7 @@ static MESON_GATE(gxbb_usb0_ddr_bridge, HHI_GCLK_MPEG2, 9);
+ static MESON_GATE(gxbb_mmc_pclk, HHI_GCLK_MPEG2, 11);
+ static MESON_GATE(gxbb_dvin, HHI_GCLK_MPEG2, 12);
+ static MESON_GATE(gxbb_uart2, HHI_GCLK_MPEG2, 15);
+-static MESON_GATE(gxbb_sana, HHI_GCLK_MPEG2, 22);
++static MESON_GATE(gxbb_sar_adc, HHI_GCLK_MPEG2, 22);
+ static MESON_GATE(gxbb_vpu_intr, HHI_GCLK_MPEG2, 25);
+ static MESON_GATE(gxbb_sec_ahb_ahb3_bridge, HHI_GCLK_MPEG2, 26);
+ static MESON_GATE(gxbb_clk81_a53, HHI_GCLK_MPEG2, 29);
+diff --git a/drivers/clk/qcom/gcc-msm8916.c b/drivers/clk/qcom/gcc-msm8916.c
+index 3410ee68d4bc..2057809219f4 100644
+--- a/drivers/clk/qcom/gcc-msm8916.c
++++ b/drivers/clk/qcom/gcc-msm8916.c
+@@ -1438,6 +1438,7 @@ static const struct freq_tbl ftbl_codec_clk[] = {
+
+ static struct clk_rcg2 codec_digcodec_clk_src = {
+ .cmd_rcgr = 0x1c09c,
++ .mnd_width = 8,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll1_emclk_sleep_map,
+ .freq_tbl = ftbl_codec_clk,
+diff --git a/drivers/clk/renesas/r8a77970-cpg-mssr.c b/drivers/clk/renesas/r8a77970-cpg-mssr.c
+index 72f98527473a..f55842917e8d 100644
+--- a/drivers/clk/renesas/r8a77970-cpg-mssr.c
++++ b/drivers/clk/renesas/r8a77970-cpg-mssr.c
+@@ -105,6 +105,7 @@ static const struct mssr_mod_clk r8a77970_mod_clks[] __initconst = {
+ DEF_MOD("vspd0", 623, R8A77970_CLK_S2D1),
+ DEF_MOD("csi40", 716, R8A77970_CLK_CSI0),
+ DEF_MOD("du0", 724, R8A77970_CLK_S2D1),
++ DEF_MOD("lvds", 727, R8A77970_CLK_S2D1),
+ DEF_MOD("vin3", 808, R8A77970_CLK_S2D1),
+ DEF_MOD("vin2", 809, R8A77970_CLK_S2D1),
+ DEF_MOD("vin1", 810, R8A77970_CLK_S2D1),
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 41d148af7748..d6a3038a128d 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -637,6 +637,8 @@ static int cpufreq_parse_governor(char *str_governor, unsigned int *policy,
+ *governor = t;
+ err = 0;
+ }
++ if (t && !try_module_get(t->owner))
++ t = NULL;
+
+ mutex_unlock(&cpufreq_governor_mutex);
+ }
+@@ -765,6 +767,10 @@ static ssize_t store_scaling_governor(struct cpufreq_policy *policy,
+ return -EINVAL;
+
+ ret = cpufreq_set_policy(policy, &new_policy);
++
++ if (new_policy.governor)
++ module_put(new_policy.governor->owner);
++
+ return ret ? ret : count;
+ }
+
+diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
+index f9f08fce4356..ad14b69a052e 100644
+--- a/drivers/crypto/caam/caamalg_qi.c
++++ b/drivers/crypto/caam/caamalg_qi.c
+@@ -668,7 +668,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
+ qm_sg_ents = 1 + !!ivsize + mapped_src_nents +
+ (mapped_dst_nents > 1 ? mapped_dst_nents : 0);
+ if (unlikely(qm_sg_ents > CAAM_QI_MAX_AEAD_SG)) {
+- dev_err(qidev, "Insufficient S/G entries: %d > %lu\n",
++ dev_err(qidev, "Insufficient S/G entries: %d > %zu\n",
+ qm_sg_ents, CAAM_QI_MAX_AEAD_SG);
+ caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
+ iv_dma, ivsize, op_type, 0, 0);
+@@ -905,7 +905,7 @@ static struct ablkcipher_edesc *ablkcipher_edesc_alloc(struct ablkcipher_request
+
+ qm_sg_ents += mapped_dst_nents > 1 ? mapped_dst_nents : 0;
+ if (unlikely(qm_sg_ents > CAAM_QI_MAX_ABLKCIPHER_SG)) {
+- dev_err(qidev, "Insufficient S/G entries: %d > %lu\n",
++ dev_err(qidev, "Insufficient S/G entries: %d > %zu\n",
+ qm_sg_ents, CAAM_QI_MAX_ABLKCIPHER_SG);
+ caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
+ iv_dma, ivsize, op_type, 0, 0);
+@@ -1058,7 +1058,7 @@ static struct ablkcipher_edesc *ablkcipher_giv_edesc_alloc(
+ }
+
+ if (unlikely(qm_sg_ents > CAAM_QI_MAX_ABLKCIPHER_SG)) {
+- dev_err(qidev, "Insufficient S/G entries: %d > %lu\n",
++ dev_err(qidev, "Insufficient S/G entries: %d > %zu\n",
+ qm_sg_ents, CAAM_QI_MAX_ABLKCIPHER_SG);
+ caam_unmap(qidev, req->src, req->dst, src_nents, dst_nents,
+ iv_dma, ivsize, GIVENCRYPT, 0, 0);
+diff --git a/drivers/crypto/cavium/cpt/cptvf_reqmanager.c b/drivers/crypto/cavium/cpt/cptvf_reqmanager.c
+index 169e66231bcf..b0ba4331944b 100644
+--- a/drivers/crypto/cavium/cpt/cptvf_reqmanager.c
++++ b/drivers/crypto/cavium/cpt/cptvf_reqmanager.c
+@@ -459,7 +459,8 @@ int process_request(struct cpt_vf *cptvf, struct cpt_request_info *req)
+ info->completion_addr = kzalloc(sizeof(union cpt_res_s), GFP_KERNEL);
+ if (unlikely(!info->completion_addr)) {
+ dev_err(&pdev->dev, "Unable to allocate memory for completion_addr\n");
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto request_cleanup;
+ }
+
+ result = (union cpt_res_s *)info->completion_addr;
+diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
+index 4eed7171e2ae..38fe59b5c689 100644
+--- a/drivers/crypto/chelsio/chcr_algo.c
++++ b/drivers/crypto/chelsio/chcr_algo.c
+@@ -2414,7 +2414,7 @@ static inline int chcr_hash_dma_map(struct device *dev,
+ error = dma_map_sg(dev, req->src, sg_nents(req->src),
+ DMA_TO_DEVICE);
+ if (!error)
+- return error;
++ return -ENOMEM;
+ req_ctx->is_sg_map = 1;
+ return 0;
+ }
+diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
+index 0350829ba62e..dd1edfb27b61 100644
+--- a/drivers/dma-buf/dma-fence-array.c
++++ b/drivers/dma-buf/dma-fence-array.c
+@@ -31,6 +31,14 @@ static const char *dma_fence_array_get_timeline_name(struct dma_fence *fence)
+ return "unbound";
+ }
+
++static void irq_dma_fence_array_work(struct irq_work *wrk)
++{
++ struct dma_fence_array *array = container_of(wrk, typeof(*array), work);
++
++ dma_fence_signal(&array->base);
++ dma_fence_put(&array->base);
++}
++
+ static void dma_fence_array_cb_func(struct dma_fence *f,
+ struct dma_fence_cb *cb)
+ {
+@@ -39,8 +47,9 @@ static void dma_fence_array_cb_func(struct dma_fence *f,
+ struct dma_fence_array *array = array_cb->array;
+
+ if (atomic_dec_and_test(&array->num_pending))
+- dma_fence_signal(&array->base);
+- dma_fence_put(&array->base);
++ irq_work_queue(&array->work);
++ else
++ dma_fence_put(&array->base);
+ }
+
+ static bool dma_fence_array_enable_signaling(struct dma_fence *fence)
+@@ -136,6 +145,7 @@ struct dma_fence_array *dma_fence_array_create(int num_fences,
+ spin_lock_init(&array->lock);
+ dma_fence_init(&array->base, &dma_fence_array_ops, &array->lock,
+ context, seqno);
++ init_irq_work(&array->work, irq_dma_fence_array_work);
+
+ array->num_fences = num_fences;
+ atomic_set(&array->num_pending, signal_on_any ? 1 : num_fences);
+diff --git a/drivers/dma/qcom/hidma_ll.c b/drivers/dma/qcom/hidma_ll.c
+index 4999e266b2de..7c6e2ff212a2 100644
+--- a/drivers/dma/qcom/hidma_ll.c
++++ b/drivers/dma/qcom/hidma_ll.c
+@@ -393,6 +393,8 @@ static int hidma_ll_reset(struct hidma_lldev *lldev)
+ */
+ static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev, int cause)
+ {
++ unsigned long irqflags;
++
+ if (cause & HIDMA_ERR_INT_MASK) {
+ dev_err(lldev->dev, "error 0x%x, disabling...\n",
+ cause);
+@@ -410,6 +412,10 @@ static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev, int cause)
+ return;
+ }
+
++ spin_lock_irqsave(&lldev->lock, irqflags);
++ writel_relaxed(cause, lldev->evca + HIDMA_EVCA_IRQ_CLR_REG);
++ spin_unlock_irqrestore(&lldev->lock, irqflags);
++
+ /*
+ * Fine tuned for this HW...
+ *
+@@ -421,9 +427,6 @@ static void hidma_ll_int_handler_internal(struct hidma_lldev *lldev, int cause)
+ * Try to consume as many EVREs as possible.
+ */
+ hidma_handle_tre_completion(lldev);
+-
+- /* We consumed TREs or there are pending TREs or EVREs. */
+- writel_relaxed(cause, lldev->evca + HIDMA_EVCA_IRQ_CLR_REG);
+ }
+
+ irqreturn_t hidma_ll_inthandler(int chirq, void *arg)
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index f6efcf94f6ad..7a5cf5b08c54 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -460,6 +460,15 @@ static int linehandle_create(struct gpio_device *gdev, void __user *ip)
+ if (lflags & ~GPIOHANDLE_REQUEST_VALID_FLAGS)
+ return -EINVAL;
+
++ /*
++ * Do not allow OPEN_SOURCE & OPEN_DRAIN flags in a single request. If
++ * the hardware actually supports enabling both at the same time the
++ * electrical result would be disastrous.
++ */
++ if ((lflags & GPIOHANDLE_REQUEST_OPEN_DRAIN) &&
++ (lflags & GPIOHANDLE_REQUEST_OPEN_SOURCE))
++ return -EINVAL;
++
+ /* OPEN_DRAIN and OPEN_SOURCE flags only make sense for output mode. */
+ if (!(lflags & GPIOHANDLE_REQUEST_OUTPUT) &&
+ ((lflags & GPIOHANDLE_REQUEST_OPEN_DRAIN) ||
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index 5432af39a674..f7fa7675215c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -265,6 +265,9 @@ uint32_t get_max_engine_clock_in_mhz(struct kgd_dev *kgd)
+ {
+ struct amdgpu_device *adev = (struct amdgpu_device *)kgd;
+
+- /* The sclk is in quantas of 10kHz */
+- return adev->pm.dpm.dyn_state.max_clock_voltage_on_ac.sclk / 100;
++ /* the sclk is in quantas of 10kHz */
++ if (amdgpu_sriov_vf(adev))
++ return adev->clock.default_sclk / 100;
++
++ return amdgpu_dpm_get_sclk(adev, false) / 100;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
+index a4bf21f8f1c1..bbbc40d630a0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_sync.c
+@@ -191,9 +191,6 @@ int amdgpu_sync_resv(struct amdgpu_device *adev,
+ f = reservation_object_get_excl(resv);
+ r = amdgpu_sync_fence(adev, sync, f);
+
+- if (explicit_sync)
+- return r;
+-
+ flist = reservation_object_get_list(resv);
+ if (!flist || r)
+ return r;
+@@ -212,11 +209,11 @@ int amdgpu_sync_resv(struct amdgpu_device *adev,
+ (fence_owner == AMDGPU_FENCE_OWNER_VM)))
+ continue;
+
+- /* Ignore fence from the same owner as
++ /* Ignore fence from the same owner and explicit one as
+ * long as it isn't undefined.
+ */
+ if (owner != AMDGPU_FENCE_OWNER_UNDEFINED &&
+- fence_owner == owner)
++ (fence_owner == owner || explicit_sync))
+ continue;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
+index a8829af120c1..39460eb1e71a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
++++ b/drivers/gpu/drm/amd/amdgpu/dce_virtual.c
+@@ -437,6 +437,8 @@ static int dce_virtual_sw_fini(void *handle)
+ drm_kms_helper_poll_fini(adev->ddev);
+
+ drm_mode_config_cleanup(adev->ddev);
++ /* clear crtcs pointer to avoid dce irq finish routine access freed data */
++ memset(adev->mode_info.crtcs, 0, sizeof(adev->mode_info.crtcs[0]) * AMDGPU_MAX_CRTCS);
+ adev->mode_info.mode_config_initialized = false;
+ return 0;
+ }
+@@ -723,7 +725,7 @@ static void dce_virtual_set_crtc_vblank_interrupt_state(struct amdgpu_device *ad
+ int crtc,
+ enum amdgpu_interrupt_state state)
+ {
+- if (crtc >= adev->mode_info.num_crtc) {
++ if (crtc >= adev->mode_info.num_crtc || !adev->mode_info.crtcs[crtc]) {
+ DRM_DEBUG("invalid crtc %d\n", crtc);
+ return;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c b/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
+index b4906d2f30d3..0031f8f34db5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
++++ b/drivers/gpu/drm/amd/amdgpu/mxgpu_ai.c
+@@ -282,9 +282,17 @@ static int xgpu_ai_mailbox_rcv_irq(struct amdgpu_device *adev,
+ /* see what event we get */
+ r = xgpu_ai_mailbox_rcv_msg(adev, IDH_FLR_NOTIFICATION);
+
+- /* only handle FLR_NOTIFY now */
+- if (!r)
+- schedule_work(&adev->virt.flr_work);
++	/* Sometimes the interrupt is delayed before being injected into the VM,
++	 * in which case the IDH_FLR_NOTIFICATION is overwritten by the VF FLR
++	 * from the GIM side; the receive of the message above can then fail,
++	 * so schedule the flr_work anyway.
++	 */
++ if (r) {
++ DRM_ERROR("FLR_NOTIFICATION is missed\n");
++ xgpu_ai_mailbox_send_ack(adev);
++ }
++
++ schedule_work(&adev->virt.flr_work);
+ }
+
+ return 0;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+index 19ce59028d6b..e0b78fd9804d 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c
+@@ -501,11 +501,17 @@ static ssize_t sysprops_show(struct kobject *kobj, struct attribute *attr,
+ return ret;
+ }
+
++static void kfd_topology_kobj_release(struct kobject *kobj)
++{
++ kfree(kobj);
++}
++
+ static const struct sysfs_ops sysprops_ops = {
+ .show = sysprops_show,
+ };
+
+ static struct kobj_type sysprops_type = {
++ .release = kfd_topology_kobj_release,
+ .sysfs_ops = &sysprops_ops,
+ };
+
+@@ -541,6 +547,7 @@ static const struct sysfs_ops iolink_ops = {
+ };
+
+ static struct kobj_type iolink_type = {
++ .release = kfd_topology_kobj_release,
+ .sysfs_ops = &iolink_ops,
+ };
+
+@@ -568,6 +575,7 @@ static const struct sysfs_ops mem_ops = {
+ };
+
+ static struct kobj_type mem_type = {
++ .release = kfd_topology_kobj_release,
+ .sysfs_ops = &mem_ops,
+ };
+
+@@ -607,6 +615,7 @@ static const struct sysfs_ops cache_ops = {
+ };
+
+ static struct kobj_type cache_type = {
++ .release = kfd_topology_kobj_release,
+ .sysfs_ops = &cache_ops,
+ };
+
+@@ -729,6 +738,7 @@ static const struct sysfs_ops node_ops = {
+ };
+
+ static struct kobj_type node_type = {
++ .release = kfd_topology_kobj_release,
+ .sysfs_ops = &node_ops,
+ };
+
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 16fb76ba6509..96afdb4d3ecf 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -3843,8 +3843,7 @@ EXPORT_SYMBOL(drm_edid_get_monitor_name);
+ * @edid: EDID to parse
+ *
+ * Fill the ELD (EDID-Like Data) buffer for passing to the audio driver. The
+- * Conn_Type, HDCP and Port_ID ELD fields are left for the graphics driver to
+- * fill in.
++ * HDCP and Port_ID ELD fields are left for the graphics driver to fill in.
+ */
+ void drm_edid_to_eld(struct drm_connector *connector, struct edid *edid)
+ {
+@@ -3925,6 +3924,12 @@ void drm_edid_to_eld(struct drm_connector *connector, struct edid *edid)
+ }
+ eld[5] |= total_sad_count << 4;
+
++ if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort ||
++ connector->connector_type == DRM_MODE_CONNECTOR_eDP)
++ eld[DRM_ELD_SAD_COUNT_CONN_TYPE] |= DRM_ELD_CONN_TYPE_DP;
++ else
++ eld[DRM_ELD_SAD_COUNT_CONN_TYPE] |= DRM_ELD_CONN_TYPE_HDMI;
++
+ eld[DRM_ELD_BASELINE_ELD_LEN] =
+ DIV_ROUND_UP(drm_eld_calc_baseline_block_size(eld), 4);
+
+diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c
+index 3717b3df34a4..32d9bcf5be7f 100644
+--- a/drivers/gpu/drm/drm_vblank.c
++++ b/drivers/gpu/drm/drm_vblank.c
+@@ -663,14 +663,16 @@ bool drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev,
+ delta_ns = div_s64(1000000LL * (vpos * mode->crtc_htotal + hpos),
+ mode->crtc_clock);
+
+- /* save this only for debugging purposes */
+- ts_etime = ktime_to_timespec64(etime);
+- ts_vblank_time = ktime_to_timespec64(*vblank_time);
+ /* Subtract time delta from raw timestamp to get final
+ * vblank_time timestamp for end of vblank.
+ */
+- etime = ktime_sub_ns(etime, delta_ns);
+- *vblank_time = etime;
++ *vblank_time = ktime_sub_ns(etime, delta_ns);
++
++ if ((drm_debug & DRM_UT_VBL) == 0)
++ return true;
++
++ ts_etime = ktime_to_timespec64(etime);
++ ts_vblank_time = ktime_to_timespec64(*vblank_time);
+
+ DRM_DEBUG_VBL("crtc %u : v p(%d,%d)@ %lld.%06ld -> %lld.%06ld [e %d us, %d rep]\n",
+ pipe, hpos, vpos,
+diff --git a/drivers/gpu/drm/etnaviv/Kconfig b/drivers/gpu/drm/etnaviv/Kconfig
+index a29b8f59eb15..3f58b4077767 100644
+--- a/drivers/gpu/drm/etnaviv/Kconfig
++++ b/drivers/gpu/drm/etnaviv/Kconfig
+@@ -6,6 +6,7 @@ config DRM_ETNAVIV
+ depends on MMU
+ select SHMEM
+ select SYNC_FILE
++ select THERMAL if DRM_ETNAVIV_THERMAL
+ select TMPFS
+ select WANT_DEV_COREDUMP
+ select CMA if HAVE_DMA_CONTIGUOUS
+@@ -13,6 +14,14 @@ config DRM_ETNAVIV
+ help
+ DRM driver for Vivante GPUs.
+
++config DRM_ETNAVIV_THERMAL
++ bool "enable ETNAVIV thermal throttling"
++ depends on DRM_ETNAVIV
++ default y
++ help
++ Compile in support for thermal throttling.
++ Say Y unless you want to risk burning your SoC.
++
+ config DRM_ETNAVIV_REGISTER_LOGGING
+ bool "enable ETNAVIV register logging"
+ depends on DRM_ETNAVIV
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index e19cbe05da2a..968cbc2be9c4 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -1738,7 +1738,7 @@ static int etnaviv_gpu_bind(struct device *dev, struct device *master,
+ struct etnaviv_gpu *gpu = dev_get_drvdata(dev);
+ int ret;
+
+- if (IS_ENABLED(CONFIG_THERMAL)) {
++ if (IS_ENABLED(CONFIG_DRM_ETNAVIV_THERMAL)) {
+ gpu->cooling = thermal_of_cooling_device_register(dev->of_node,
+ (char *)dev_name(dev), gpu, &cooling_ops);
+ if (IS_ERR(gpu->cooling))
+@@ -1751,7 +1751,8 @@ static int etnaviv_gpu_bind(struct device *dev, struct device *master,
+ ret = etnaviv_gpu_clk_enable(gpu);
+ #endif
+ if (ret < 0) {
+- thermal_cooling_device_unregister(gpu->cooling);
++ if (IS_ENABLED(CONFIG_DRM_ETNAVIV_THERMAL))
++ thermal_cooling_device_unregister(gpu->cooling);
+ return ret;
+ }
+
+@@ -1808,7 +1809,8 @@ static void etnaviv_gpu_unbind(struct device *dev, struct device *master,
+
+ gpu->drm = NULL;
+
+- thermal_cooling_device_unregister(gpu->cooling);
++ if (IS_ENABLED(CONFIG_DRM_ETNAVIV_THERMAL))
++ thermal_cooling_device_unregister(gpu->cooling);
+ gpu->cooling = NULL;
+ }
+
+diff --git a/drivers/gpu/drm/i915/intel_guc_fw.c b/drivers/gpu/drm/i915/intel_guc_fw.c
+index ef67a36354c5..cd05bdea46d3 100644
+--- a/drivers/gpu/drm/i915/intel_guc_fw.c
++++ b/drivers/gpu/drm/i915/intel_guc_fw.c
+@@ -39,9 +39,6 @@
+ #define KBL_FW_MAJOR 9
+ #define KBL_FW_MINOR 14
+
+-#define GLK_FW_MAJOR 10
+-#define GLK_FW_MINOR 56
+-
+ #define GUC_FW_PATH(platform, major, minor) \
+ "i915/" __stringify(platform) "_guc_ver" __stringify(major) "_" __stringify(minor) ".bin"
+
+@@ -54,8 +51,6 @@ MODULE_FIRMWARE(I915_BXT_GUC_UCODE);
+ #define I915_KBL_GUC_UCODE GUC_FW_PATH(kbl, KBL_FW_MAJOR, KBL_FW_MINOR)
+ MODULE_FIRMWARE(I915_KBL_GUC_UCODE);
+
+-#define I915_GLK_GUC_UCODE GUC_FW_PATH(glk, GLK_FW_MAJOR, GLK_FW_MINOR)
+-
+ /**
+ * intel_guc_fw_select() - selects GuC firmware for uploading
+ *
+@@ -85,10 +80,6 @@ int intel_guc_fw_select(struct intel_guc *guc)
+ guc->fw.path = I915_KBL_GUC_UCODE;
+ guc->fw.major_ver_wanted = KBL_FW_MAJOR;
+ guc->fw.minor_ver_wanted = KBL_FW_MINOR;
+- } else if (IS_GEMINILAKE(dev_priv)) {
+- guc->fw.path = I915_GLK_GUC_UCODE;
+- guc->fw.major_ver_wanted = GLK_FW_MAJOR;
+- guc->fw.minor_ver_wanted = GLK_FW_MINOR;
+ } else {
+ DRM_ERROR("No GuC firmware known for platform with GuC!\n");
+ return -ENOENT;
+diff --git a/drivers/gpu/drm/i915/intel_huc.c b/drivers/gpu/drm/i915/intel_huc.c
+index c8a48cbc2b7d..c3460c4a7b54 100644
+--- a/drivers/gpu/drm/i915/intel_huc.c
++++ b/drivers/gpu/drm/i915/intel_huc.c
+@@ -54,10 +54,6 @@
+ #define KBL_HUC_FW_MINOR 00
+ #define KBL_BLD_NUM 1810
+
+-#define GLK_HUC_FW_MAJOR 02
+-#define GLK_HUC_FW_MINOR 00
+-#define GLK_BLD_NUM 1748
+-
+ #define HUC_FW_PATH(platform, major, minor, bld_num) \
+ "i915/" __stringify(platform) "_huc_ver" __stringify(major) "_" \
+ __stringify(minor) "_" __stringify(bld_num) ".bin"
+@@ -74,9 +70,6 @@ MODULE_FIRMWARE(I915_BXT_HUC_UCODE);
+ KBL_HUC_FW_MINOR, KBL_BLD_NUM)
+ MODULE_FIRMWARE(I915_KBL_HUC_UCODE);
+
+-#define I915_GLK_HUC_UCODE HUC_FW_PATH(glk, GLK_HUC_FW_MAJOR, \
+- GLK_HUC_FW_MINOR, GLK_BLD_NUM)
+-
+ /**
+ * intel_huc_select_fw() - selects HuC firmware for loading
+ * @huc: intel_huc struct
+@@ -103,10 +96,6 @@ void intel_huc_select_fw(struct intel_huc *huc)
+ huc->fw.path = I915_KBL_HUC_UCODE;
+ huc->fw.major_ver_wanted = KBL_HUC_FW_MAJOR;
+ huc->fw.minor_ver_wanted = KBL_HUC_FW_MINOR;
+- } else if (IS_GEMINILAKE(dev_priv)) {
+- huc->fw.path = I915_GLK_HUC_UCODE;
+- huc->fw.major_ver_wanted = GLK_HUC_FW_MAJOR;
+- huc->fw.minor_ver_wanted = GLK_HUC_FW_MINOR;
+ } else {
+ DRM_ERROR("No HuC firmware known for platform with HuC!\n");
+ return;
+diff --git a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
+index 890fd6ff397c..d964d454e4ae 100644
+--- a/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
++++ b/drivers/gpu/drm/panel/panel-raspberrypi-touchscreen.c
+@@ -221,7 +221,7 @@ static struct rpi_touchscreen *panel_to_ts(struct drm_panel *panel)
+ return container_of(panel, struct rpi_touchscreen, base);
+ }
+
+-static u8 rpi_touchscreen_i2c_read(struct rpi_touchscreen *ts, u8 reg)
++static int rpi_touchscreen_i2c_read(struct rpi_touchscreen *ts, u8 reg)
+ {
+ return i2c_smbus_read_byte_data(ts->i2c, reg);
+ }
+diff --git a/drivers/gpu/drm/sun4i/sun8i_mixer.h b/drivers/gpu/drm/sun4i/sun8i_mixer.h
+index 4785ac090b8c..c142fbb8661e 100644
+--- a/drivers/gpu/drm/sun4i/sun8i_mixer.h
++++ b/drivers/gpu/drm/sun4i/sun8i_mixer.h
+@@ -80,7 +80,7 @@
+
+ #define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_EN BIT(0)
+ #define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_ALPHA_MODE_MASK GENMASK(2, 1)
+-#define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_FBFMT_MASK GENMASK(11, 8)
++#define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_FBFMT_MASK GENMASK(12, 8)
+ #define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_ALPHA_MASK GENMASK(31, 24)
+ #define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_ALPHA_MODE_DEF (1 << 1)
+ #define SUN8I_MIXER_CHAN_UI_LAYER_ATTR_FBFMT_ARGB8888 (0 << 8)
+diff --git a/drivers/hid/hid-elo.c b/drivers/hid/hid-elo.c
+index 0cd4f7216239..5eea6fe0d7bd 100644
+--- a/drivers/hid/hid-elo.c
++++ b/drivers/hid/hid-elo.c
+@@ -42,6 +42,12 @@ static int elo_input_configured(struct hid_device *hdev,
+ {
+ struct input_dev *input = hidinput->input;
+
++ /*
++ * ELO devices have one Button usage in GenDesk field, which makes
++ * hid-input map it to BTN_LEFT; that confuses userspace, which then
++ * considers the device to be a mouse/touchpad instead of touchscreen.
++ */
++ clear_bit(BTN_LEFT, input->keybit);
+ set_bit(BTN_TOUCH, input->keybit);
+ set_bit(ABS_PRESSURE, input->absbit);
+ input_set_abs_params(input, ABS_PRESSURE, 0, 256, 0, 0);
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 65ea23be9677..397592959238 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -778,9 +778,11 @@ static int mt_touch_event(struct hid_device *hid, struct hid_field *field,
+ }
+
+ static void mt_process_mt_event(struct hid_device *hid, struct hid_field *field,
+- struct hid_usage *usage, __s32 value)
++ struct hid_usage *usage, __s32 value,
++ bool first_packet)
+ {
+ struct mt_device *td = hid_get_drvdata(hid);
++ __s32 cls = td->mtclass.name;
+ __s32 quirks = td->mtclass.quirks;
+ struct input_dev *input = field->hidinput->input;
+
+@@ -837,6 +839,15 @@ static void mt_process_mt_event(struct hid_device *hid, struct hid_field *field,
+ break;
+
+ default:
++ /*
++ * For Win8 PTP touchpads we should only look at
++ * non finger/touch events in the first_packet of
++ * a (possible) multi-packet frame.
++ */
++ if ((cls == MT_CLS_WIN_8 || cls == MT_CLS_WIN_8_DUAL) &&
++ !first_packet)
++ return;
++
+ if (usage->type)
+ input_event(input, usage->type, usage->code,
+ value);
+@@ -856,6 +867,7 @@ static void mt_touch_report(struct hid_device *hid, struct hid_report *report)
+ {
+ struct mt_device *td = hid_get_drvdata(hid);
+ struct hid_field *field;
++ bool first_packet;
+ unsigned count;
+ int r, n;
+
+@@ -874,6 +886,7 @@ static void mt_touch_report(struct hid_device *hid, struct hid_report *report)
+ td->num_expected = value;
+ }
+
++ first_packet = td->num_received == 0;
+ for (r = 0; r < report->maxfield; r++) {
+ field = report->field[r];
+ count = field->report_count;
+@@ -883,7 +896,7 @@ static void mt_touch_report(struct hid_device *hid, struct hid_report *report)
+
+ for (n = 0; n < count; n++)
+ mt_process_mt_event(hid, field, &field->usage[n],
+- field->value[n]);
++ field->value[n], first_packet);
+ }
+
+ if (td->num_received >= td->num_expected)
+diff --git a/drivers/iio/adc/ina2xx-adc.c b/drivers/iio/adc/ina2xx-adc.c
+index 84a43871f7dc..651dc9df2a7b 100644
+--- a/drivers/iio/adc/ina2xx-adc.c
++++ b/drivers/iio/adc/ina2xx-adc.c
+@@ -44,7 +44,6 @@
+
+ #define INA226_MASK_ENABLE 0x06
+ #define INA226_CVRF BIT(3)
+-#define INA219_CNVR BIT(1)
+
+ #define INA2XX_MAX_REGISTERS 8
+
+@@ -79,6 +78,11 @@
+ #define INA226_ITS_MASK GENMASK(5, 3)
+ #define INA226_SHIFT_ITS(val) ((val) << 3)
+
++/* INA219 Bus voltage register, low bits are flags */
++#define INA219_OVF BIT(0)
++#define INA219_CNVR BIT(1)
++#define INA219_BUS_VOLTAGE_SHIFT 3
++
+ /* Cosmetic macro giving the sampling period for a full P=UxI cycle */
+ #define SAMPLING_PERIOD(c) ((c->int_time_vbus + c->int_time_vshunt) \
+ * c->avg)
+@@ -112,7 +116,7 @@ struct ina2xx_config {
+ u16 config_default;
+ int calibration_factor;
+ int shunt_div;
+- int bus_voltage_shift;
++ int bus_voltage_shift; /* position of lsb */
+ int bus_voltage_lsb; /* uV */
+ int power_lsb; /* uW */
+ enum ina2xx_ids chip_id;
+@@ -135,7 +139,7 @@ static const struct ina2xx_config ina2xx_config[] = {
+ .config_default = INA219_CONFIG_DEFAULT,
+ .calibration_factor = 40960000,
+ .shunt_div = 100,
+- .bus_voltage_shift = 3,
++ .bus_voltage_shift = INA219_BUS_VOLTAGE_SHIFT,
+ .bus_voltage_lsb = 4000,
+ .power_lsb = 20000,
+ .chip_id = ina219,
+@@ -170,6 +174,9 @@ static int ina2xx_read_raw(struct iio_dev *indio_dev,
+ else
+ *val = regval;
+
++ if (chan->address == INA2XX_BUS_VOLTAGE)
++ *val >>= chip->config->bus_voltage_shift;
++
+ return IIO_VAL_INT;
+
+ case IIO_CHAN_INFO_OVERSAMPLING_RATIO:
+@@ -203,9 +210,9 @@ static int ina2xx_read_raw(struct iio_dev *indio_dev,
+ return IIO_VAL_FRACTIONAL;
+
+ case INA2XX_BUS_VOLTAGE:
+- /* processed (mV) = raw*lsb (uV) / (1000 << shift) */
++ /* processed (mV) = raw * lsb (uV) / 1000 */
+ *val = chip->config->bus_voltage_lsb;
+- *val2 = 1000 << chip->config->bus_voltage_shift;
++ *val2 = 1000;
+ return IIO_VAL_FRACTIONAL;
+
+ case INA2XX_POWER:
+@@ -532,7 +539,7 @@ static ssize_t ina2xx_shunt_resistor_store(struct device *dev,
+ * Sampling Freq is a consequence of the integration times of
+ * the Voltage channels.
+ */
+-#define INA219_CHAN_VOLTAGE(_index, _address) { \
++#define INA219_CHAN_VOLTAGE(_index, _address, _shift) { \
+ .type = IIO_VOLTAGE, \
+ .address = (_address), \
+ .indexed = 1, \
+@@ -544,7 +551,8 @@ static ssize_t ina2xx_shunt_resistor_store(struct device *dev,
+ .scan_index = (_index), \
+ .scan_type = { \
+ .sign = 'u', \
+- .realbits = 16, \
++ .shift = _shift, \
++ .realbits = 16 - _shift, \
+ .storagebits = 16, \
+ .endianness = IIO_LE, \
+ } \
+@@ -579,8 +587,8 @@ static const struct iio_chan_spec ina226_channels[] = {
+ };
+
+ static const struct iio_chan_spec ina219_channels[] = {
+- INA219_CHAN_VOLTAGE(0, INA2XX_SHUNT_VOLTAGE),
+- INA219_CHAN_VOLTAGE(1, INA2XX_BUS_VOLTAGE),
++ INA219_CHAN_VOLTAGE(0, INA2XX_SHUNT_VOLTAGE, 0),
++ INA219_CHAN_VOLTAGE(1, INA2XX_BUS_VOLTAGE, INA219_BUS_VOLTAGE_SHIFT),
+ INA219_CHAN(IIO_POWER, 2, INA2XX_POWER),
+ INA219_CHAN(IIO_CURRENT, 3, INA2XX_CURRENT),
+ IIO_CHAN_SOFT_TIMESTAMP(4),
+diff --git a/drivers/iio/health/max30102.c b/drivers/iio/health/max30102.c
+index 147a8c14235f..a14fc2eb1fe9 100644
+--- a/drivers/iio/health/max30102.c
++++ b/drivers/iio/health/max30102.c
+@@ -329,20 +329,31 @@ static int max30102_read_temp(struct max30102_data *data, int *val)
+ return 0;
+ }
+
+-static int max30102_get_temp(struct max30102_data *data, int *val)
++static int max30102_get_temp(struct max30102_data *data, int *val, bool en)
+ {
+ int ret;
+
++ if (en) {
++ ret = max30102_set_powermode(data, true);
++ if (ret)
++ return ret;
++ }
++
+ /* start acquisition */
+ ret = regmap_update_bits(data->regmap, MAX30102_REG_TEMP_CONFIG,
+ MAX30102_REG_TEMP_CONFIG_TEMP_EN,
+ MAX30102_REG_TEMP_CONFIG_TEMP_EN);
+ if (ret)
+- return ret;
++ goto out;
+
+ msleep(35);
++ ret = max30102_read_temp(data, val);
++
++out:
++ if (en)
++ max30102_set_powermode(data, false);
+
+- return max30102_read_temp(data, val);
++ return ret;
+ }
+
+ static int max30102_read_raw(struct iio_dev *indio_dev,
+@@ -355,20 +366,19 @@ static int max30102_read_raw(struct iio_dev *indio_dev,
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ /*
+- * Temperature reading can only be acquired while engine
+- * is running
++ * Temperature reading can only be acquired when not in
++ * shutdown; leave shutdown briefly when buffer not running
+ */
+ mutex_lock(&indio_dev->mlock);
+-
+ if (!iio_buffer_enabled(indio_dev))
+- ret = -EBUSY;
+- else {
+- ret = max30102_get_temp(data, val);
+- if (!ret)
+- ret = IIO_VAL_INT;
+- }
+-
++ ret = max30102_get_temp(data, val, true);
++ else
++ ret = max30102_get_temp(data, val, false);
+ mutex_unlock(&indio_dev->mlock);
++ if (ret)
++ return ret;
++
++ ret = IIO_VAL_INT;
+ break;
+ case IIO_CHAN_INFO_SCALE:
+ *val = 1000; /* 62.5 */
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 3832edd867ed..1961c6a45437 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -1206,6 +1206,9 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ int err;
+ bool use_umr = true;
+
++ if (!IS_ENABLED(CONFIG_INFINIBAND_USER_MEM))
++ return ERR_PTR(-EINVAL);
++
+ mlx5_ib_dbg(dev, "start 0x%llx, virt_addr 0x%llx, length 0x%llx, access_flags 0x%x\n",
+ start, virt_addr, length, access_flags);
+
+diff --git a/drivers/leds/leds-pm8058.c b/drivers/leds/leds-pm8058.c
+index a52674327857..8988ba3b2d65 100644
+--- a/drivers/leds/leds-pm8058.c
++++ b/drivers/leds/leds-pm8058.c
+@@ -106,7 +106,7 @@ static int pm8058_led_probe(struct platform_device *pdev)
+ if (!led)
+ return -ENOMEM;
+
+- led->ledtype = (u32)of_device_get_match_data(&pdev->dev);
++ led->ledtype = (u32)(unsigned long)of_device_get_match_data(&pdev->dev);
+
+ map = dev_get_regmap(pdev->dev.parent, NULL);
+ if (!map) {
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
+index f7810cc869ac..025e033ecb78 100644
+--- a/drivers/md/dm-mpath.c
++++ b/drivers/md/dm-mpath.c
+@@ -1968,8 +1968,9 @@ static int multipath_busy(struct dm_target *ti)
+ *---------------------------------------------------------------*/
+ static struct target_type multipath_target = {
+ .name = "multipath",
+- .version = {1, 12, 0},
+- .features = DM_TARGET_SINGLETON | DM_TARGET_IMMUTABLE,
++ .version = {1, 13, 0},
++ .features = DM_TARGET_SINGLETON | DM_TARGET_IMMUTABLE |
++ DM_TARGET_PASSES_INTEGRITY,
+ .module = THIS_MODULE,
+ .ctr = multipath_ctr,
+ .dtr = multipath_dtr,
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 6319d846e0ad..d0f330a5d0cb 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -675,15 +675,11 @@ static struct raid_type *get_raid_type_by_ll(const int level, const int layout)
+ return NULL;
+ }
+
+-/*
+- * Conditionally change bdev capacity of @rs
+- * in case of a disk add/remove reshape
+- */
+-static void rs_set_capacity(struct raid_set *rs)
++/* Adjust rdev sectors */
++static void rs_set_rdev_sectors(struct raid_set *rs)
+ {
+ struct mddev *mddev = &rs->md;
+ struct md_rdev *rdev;
+- struct gendisk *gendisk = dm_disk(dm_table_get_md(rs->ti->table));
+
+ /*
+ * raid10 sets rdev->sector to the device size, which
+@@ -692,8 +688,16 @@ static void rs_set_capacity(struct raid_set *rs)
+ rdev_for_each(rdev, mddev)
+ if (!test_bit(Journal, &rdev->flags))
+ rdev->sectors = mddev->dev_sectors;
++}
+
+- set_capacity(gendisk, mddev->array_sectors);
++/*
++ * Change bdev capacity of @rs in case of a disk add/remove reshape
++ */
++static void rs_set_capacity(struct raid_set *rs)
++{
++ struct gendisk *gendisk = dm_disk(dm_table_get_md(rs->ti->table));
++
++ set_capacity(gendisk, rs->md.array_sectors);
+ revalidate_disk(gendisk);
+ }
+
+@@ -1674,8 +1678,11 @@ static void do_table_event(struct work_struct *ws)
+ struct raid_set *rs = container_of(ws, struct raid_set, md.event_work);
+
+ smp_rmb(); /* Make sure we access most actual mddev properties */
+- if (!rs_is_reshaping(rs))
++ if (!rs_is_reshaping(rs)) {
++ if (rs_is_raid10(rs))
++ rs_set_rdev_sectors(rs);
+ rs_set_capacity(rs);
++ }
+ dm_table_event(rs->ti->table);
+ }
+
+@@ -3842,11 +3849,10 @@ static int raid_preresume(struct dm_target *ti)
+ mddev->resync_min = mddev->recovery_cp;
+ }
+
+- rs_set_capacity(rs);
+-
+ /* Check for any reshape request unless new raid set */
+ if (test_and_clear_bit(RT_FLAG_RESHAPE_RS, &rs->runtime_flags)) {
+ /* Initiate a reshape. */
++ rs_set_rdev_sectors(rs);
+ mddev_lock_nointr(mddev);
+ r = rs_start_reshape(rs);
+ mddev_unlock(mddev);
+@@ -3875,6 +3881,10 @@ static void raid_resume(struct dm_target *ti)
+ mddev->ro = 0;
+ mddev->in_sync = 0;
+
++ /* Only reduce raid set size before running a disk removing reshape. */
++ if (mddev->delta_disks < 0)
++ rs_set_capacity(rs);
++
+ /*
+ * Keep the RAID set frozen if reshape/rebuild flags are set.
+ * The RAID set is unfrozen once the next table load/resume,
+diff --git a/drivers/media/platform/davinci/vpif_capture.c b/drivers/media/platform/davinci/vpif_capture.c
+index fca4dc829f73..e45916f69def 100644
+--- a/drivers/media/platform/davinci/vpif_capture.c
++++ b/drivers/media/platform/davinci/vpif_capture.c
+@@ -1550,6 +1550,8 @@ vpif_capture_get_pdata(struct platform_device *pdev)
+ sizeof(*chan->inputs) *
+ VPIF_CAPTURE_NUM_CHANNELS,
+ GFP_KERNEL);
++ if (!chan->inputs)
++ return NULL;
+
+ chan->input_count++;
+ chan->inputs[i].input.type = V4L2_INPUT_TYPE_CAMERA;
+diff --git a/drivers/media/platform/vsp1/vsp1_drv.c b/drivers/media/platform/vsp1/vsp1_drv.c
+index 962e4c304076..eed9516e25e1 100644
+--- a/drivers/media/platform/vsp1/vsp1_drv.c
++++ b/drivers/media/platform/vsp1/vsp1_drv.c
+@@ -571,7 +571,13 @@ static int __maybe_unused vsp1_pm_suspend(struct device *dev)
+ {
+ struct vsp1_device *vsp1 = dev_get_drvdata(dev);
+
+- vsp1_pipelines_suspend(vsp1);
++ /*
++ * When used as part of a display pipeline, the VSP is stopped and
++ * restarted explicitly by the DU.
++ */
++ if (!vsp1->drm)
++ vsp1_pipelines_suspend(vsp1);
++
+ pm_runtime_force_suspend(vsp1->dev);
+
+ return 0;
+@@ -582,7 +588,13 @@ static int __maybe_unused vsp1_pm_resume(struct device *dev)
+ struct vsp1_device *vsp1 = dev_get_drvdata(dev);
+
+ pm_runtime_force_resume(vsp1->dev);
+- vsp1_pipelines_resume(vsp1);
++
++ /*
++ * When used as part of a display pipeline, the VSP is stopped and
++ * restarted explicitly by the DU.
++ */
++ if (!vsp1->drm)
++ vsp1_pipelines_resume(vsp1);
+
+ return 0;
+ }
+diff --git a/drivers/media/usb/cpia2/cpia2_v4l.c b/drivers/media/usb/cpia2/cpia2_v4l.c
+index 3dedd83f0b19..a1c59f19cf2d 100644
+--- a/drivers/media/usb/cpia2/cpia2_v4l.c
++++ b/drivers/media/usb/cpia2/cpia2_v4l.c
+@@ -808,7 +808,7 @@ static int cpia2_querybuf(struct file *file, void *fh, struct v4l2_buffer *buf)
+ struct camera_data *cam = video_drvdata(file);
+
+ if(buf->type != V4L2_BUF_TYPE_VIDEO_CAPTURE ||
+- buf->index > cam->num_frames)
++ buf->index >= cam->num_frames)
+ return -EINVAL;
+
+ buf->m.offset = cam->buffers[buf->index].data - cam->frame_buffer;
+@@ -859,7 +859,7 @@ static int cpia2_qbuf(struct file *file, void *fh, struct v4l2_buffer *buf)
+
+ if(buf->type != V4L2_BUF_TYPE_VIDEO_CAPTURE ||
+ buf->memory != V4L2_MEMORY_MMAP ||
+- buf->index > cam->num_frames)
++ buf->index >= cam->num_frames)
+ return -EINVAL;
+
+ DBG("QBUF #%d\n", buf->index);
+diff --git a/drivers/mmc/core/mmc_test.c b/drivers/mmc/core/mmc_test.c
+index 478869805b96..789afef66fce 100644
+--- a/drivers/mmc/core/mmc_test.c
++++ b/drivers/mmc/core/mmc_test.c
+@@ -2328,10 +2328,17 @@ static int mmc_test_reset(struct mmc_test_card *test)
+ int err;
+
+ err = mmc_hw_reset(host);
+- if (!err)
++ if (!err) {
++ /*
++ * Reset will re-enable the card's command queue, but tests
++ * expect it to be disabled.
++ */
++ if (card->ext_csd.cmdq_en)
++ mmc_cmdq_disable(card);
+ return RESULT_OK;
+- else if (err == -EOPNOTSUPP)
++ } else if (err == -EOPNOTSUPP) {
+ return RESULT_UNSUP_HOST;
++ }
+
+ return RESULT_FAIL;
+ }
+diff --git a/drivers/mtd/nand/fsl_ifc_nand.c b/drivers/mtd/nand/fsl_ifc_nand.c
+index 9e03bac7f34c..bbdd68a54d68 100644
+--- a/drivers/mtd/nand/fsl_ifc_nand.c
++++ b/drivers/mtd/nand/fsl_ifc_nand.c
+@@ -916,6 +916,13 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
+ if (ctrl->version >= FSL_IFC_VERSION_1_1_0)
+ fsl_ifc_sram_init(priv);
+
++ /*
++ * As IFC version 2.0.0 has 16KB of internal SRAM as compared to older
++ * versions which had 8KB. Hence bufnum mask needs to be updated.
++ */
++ if (ctrl->version >= FSL_IFC_VERSION_2_0_0)
++ priv->bufnum_mask = (priv->bufnum_mask * 2) + 1;
++
+ return 0;
+ }
+
+diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c
+index 9c702b46c6ee..e38edfa766f2 100644
+--- a/drivers/mtd/nand/nand_base.c
++++ b/drivers/mtd/nand/nand_base.c
+@@ -710,7 +710,8 @@ static void nand_command(struct mtd_info *mtd, unsigned int command,
+ chip->cmd_ctrl(mtd, readcmd, ctrl);
+ ctrl &= ~NAND_CTRL_CHANGE;
+ }
+- chip->cmd_ctrl(mtd, command, ctrl);
++ if (command != NAND_CMD_NONE)
++ chip->cmd_ctrl(mtd, command, ctrl);
+
+ /* Address cycle, when necessary */
+ ctrl = NAND_CTRL_ALE | NAND_CTRL_CHANGE;
+@@ -738,6 +739,7 @@ static void nand_command(struct mtd_info *mtd, unsigned int command,
+ */
+ switch (command) {
+
++ case NAND_CMD_NONE:
+ case NAND_CMD_PAGEPROG:
+ case NAND_CMD_ERASE1:
+ case NAND_CMD_ERASE2:
+@@ -831,7 +833,9 @@ static void nand_command_lp(struct mtd_info *mtd, unsigned int command,
+ }
+
+ /* Command latch cycle */
+- chip->cmd_ctrl(mtd, command, NAND_NCE | NAND_CLE | NAND_CTRL_CHANGE);
++ if (command != NAND_CMD_NONE)
++ chip->cmd_ctrl(mtd, command,
++ NAND_NCE | NAND_CLE | NAND_CTRL_CHANGE);
+
+ if (column != -1 || page_addr != -1) {
+ int ctrl = NAND_CTRL_CHANGE | NAND_NCE | NAND_ALE;
+@@ -866,6 +870,7 @@ static void nand_command_lp(struct mtd_info *mtd, unsigned int command,
+ */
+ switch (command) {
+
++ case NAND_CMD_NONE:
+ case NAND_CMD_CACHEDPROG:
+ case NAND_CMD_PAGEPROG:
+ case NAND_CMD_ERASE1:
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 61ca4eb7c6fa..6a9ee65099a6 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1706,12 +1706,16 @@ static int bnxt_async_event_process(struct bnxt *bp,
+
+ if (BNXT_VF(bp))
+ goto async_event_process_exit;
+- if (data1 & 0x20000) {
++
++ /* print unsupported speed warning in forced speed mode only */
++ if (!(link_info->autoneg & BNXT_AUTONEG_SPEED) &&
++ (data1 & 0x20000)) {
+ u16 fw_speed = link_info->force_link_speed;
+ u32 speed = bnxt_fw_to_ethtool_speed(fw_speed);
+
+- netdev_warn(bp->dev, "Link speed %d no longer supported\n",
+- speed);
++ if (speed != SPEED_UNKNOWN)
++ netdev_warn(bp->dev, "Link speed %d no longer supported\n",
++ speed);
+ }
+ set_bit(BNXT_LINK_SPEED_CHNG_SP_EVENT, &bp->sp_event);
+ /* fall thru */
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+index d8fee26cd45e..aa484d72f38c 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c
+@@ -148,9 +148,6 @@ static int bnxt_tc_parse_actions(struct bnxt *bp,
+ }
+ }
+
+- if (rc)
+- return rc;
+-
+ if (actions->flags & BNXT_TC_ACTION_FLAG_FWD) {
+ if (actions->flags & BNXT_TC_ACTION_FLAG_TUNNEL_ENCAP) {
+ /* dst_fid is PF's fid */
+@@ -164,7 +161,7 @@ static int bnxt_tc_parse_actions(struct bnxt *bp,
+ }
+ }
+
+- return rc;
++ return 0;
+ }
+
+ #define GET_KEY(flow_cmd, key_type) \
+diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+index a063c36c4c58..3e6286d402ef 100644
+--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
++++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+@@ -1833,6 +1833,11 @@ static int nicvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ nic->pdev = pdev;
+ nic->pnicvf = nic;
+ nic->max_queues = qcount;
++ /* If no of CPUs are too low, there won't be any queues left
++ * for XDP_TX, hence double it.
++ */
++ if (!nic->t88)
++ nic->max_queues *= 2;
+
+ /* MAP VF's configuration registers */
+ nic->reg_base = pcim_iomap(pdev, PCI_CFG_REG_BAR_NUM, 0);
+diff --git a/drivers/net/ieee802154/adf7242.c b/drivers/net/ieee802154/adf7242.c
+index 400fdbd3a120..8381f4279bc7 100644
+--- a/drivers/net/ieee802154/adf7242.c
++++ b/drivers/net/ieee802154/adf7242.c
+@@ -888,7 +888,7 @@ static const struct ieee802154_ops adf7242_ops = {
+ .set_cca_ed_level = adf7242_set_cca_ed_level,
+ };
+
+-static void adf7242_debug(u8 irq1)
++static void adf7242_debug(struct adf7242_local *lp, u8 irq1)
+ {
+ #ifdef DEBUG
+ u8 stat;
+@@ -932,7 +932,7 @@ static irqreturn_t adf7242_isr(int irq, void *data)
+ dev_err(&lp->spi->dev, "%s :ERROR IRQ1 = 0x%X\n",
+ __func__, irq1);
+
+- adf7242_debug(irq1);
++ adf7242_debug(lp, irq1);
+
+ xmit = test_bit(FLAG_XMIT, &lp->flags);
+
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index 77cc4fbaeace..e92f31a53339 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -322,6 +322,10 @@ static int ipvlan_rcv_frame(struct ipvl_addr *addr, struct sk_buff **pskb,
+ if (dev_forward_skb(ipvlan->dev, skb) == NET_RX_SUCCESS)
+ success = true;
+ } else {
++ if (!ether_addr_equal_64bits(eth_hdr(skb)->h_dest,
++ ipvlan->phy_dev->dev_addr))
++ skb->pkt_type = PACKET_OTHERHOST;
++
+ ret = RX_HANDLER_ANOTHER;
+ success = true;
+ }
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index f5438d0978ca..a69ad39ee57e 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -410,6 +410,9 @@ static int veth_newlink(struct net *src_net, struct net_device *dev,
+ if (ifmp && (dev->ifindex != 0))
+ peer->ifindex = ifmp->ifi_index;
+
++ peer->gso_max_size = dev->gso_max_size;
++ peer->gso_max_segs = dev->gso_max_segs;
++
+ err = register_netdevice(peer);
+ put_net(net);
+ net = NULL;
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 5907a8d0e921..f42ee452072b 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -261,9 +261,12 @@ static void virtqueue_napi_complete(struct napi_struct *napi,
+ int opaque;
+
+ opaque = virtqueue_enable_cb_prepare(vq);
+- if (napi_complete_done(napi, processed) &&
+- unlikely(virtqueue_poll(vq, opaque)))
+- virtqueue_napi_schedule(napi, vq);
++ if (napi_complete_done(napi, processed)) {
++ if (unlikely(virtqueue_poll(vq, opaque)))
++ virtqueue_napi_schedule(napi, vq);
++ } else {
++ virtqueue_disable_cb(vq);
++ }
+ }
+
+ static void skb_xmit_done(struct virtqueue *vq)
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 0a947eef348d..c6460e7f6d78 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -6201,6 +6201,16 @@ static int ath10k_sta_state(struct ieee80211_hw *hw,
+ "mac vdev %d peer delete %pM sta %pK (sta gone)\n",
+ arvif->vdev_id, sta->addr, sta);
+
++ if (sta->tdls) {
++ ret = ath10k_mac_tdls_peer_update(ar, arvif->vdev_id,
++ sta,
++ WMI_TDLS_PEER_STATE_TEARDOWN);
++ if (ret)
++ ath10k_warn(ar, "failed to update tdls peer state for %pM state %d: %i\n",
++ sta->addr,
++ WMI_TDLS_PEER_STATE_TEARDOWN, ret);
++ }
++
+ ret = ath10k_peer_delete(ar, arvif->vdev_id, sta->addr);
+ if (ret)
+ ath10k_warn(ar, "failed to delete peer %pM for vdev %d: %i\n",
+diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
+index c02b21cff38d..3cb8fe4374c5 100644
+--- a/drivers/net/wireless/ath/ath10k/wmi.h
++++ b/drivers/net/wireless/ath/ath10k/wmi.h
+@@ -5236,7 +5236,8 @@ enum wmi_10_4_vdev_param {
+ #define WMI_VDEV_PARAM_TXBF_MU_TX_BFER BIT(3)
+
+ #define WMI_TXBF_STS_CAP_OFFSET_LSB 4
+-#define WMI_TXBF_STS_CAP_OFFSET_MASK 0xf0
++#define WMI_TXBF_STS_CAP_OFFSET_MASK 0x70
++#define WMI_TXBF_CONF_IMPLICIT_BF BIT(7)
+ #define WMI_BF_SOUND_DIM_OFFSET_LSB 8
+ #define WMI_BF_SOUND_DIM_OFFSET_MASK 0xf00
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+index c69515ed72df..fbfa5eafcc93 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+@@ -1877,12 +1877,10 @@ static int rs_switch_to_column(struct iwl_mvm *mvm,
+ struct rs_rate *rate = &search_tbl->rate;
+ const struct rs_tx_column *column = &rs_tx_columns[col_id];
+ const struct rs_tx_column *curr_column = &rs_tx_columns[tbl->column];
+- u32 sz = (sizeof(struct iwl_scale_tbl_info) -
+- (sizeof(struct iwl_rate_scale_data) * IWL_RATE_COUNT));
+ unsigned long rate_mask = 0;
+ u32 rate_idx = 0;
+
+- memcpy(search_tbl, tbl, sz);
++ memcpy(search_tbl, tbl, offsetof(struct iwl_scale_tbl_info, win));
+
+ rate->sgi = column->sgi;
+ rate->ant = column->ant;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
+index 03ffd84786ca..50255944525e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c
+@@ -595,6 +595,12 @@ static void iwl_mvm_dump_lmac_error_log(struct iwl_mvm *mvm, u32 base)
+
+ void iwl_mvm_dump_nic_error_log(struct iwl_mvm *mvm)
+ {
++ if (!test_bit(STATUS_DEVICE_ENABLED, &mvm->trans->status)) {
++ IWL_ERR(mvm,
++ "DEVICE_ENABLED bit is not set. Aborting dump.\n");
++ return;
++ }
++
+ iwl_mvm_dump_lmac_error_log(mvm, mvm->error_event_table[0]);
+
+ if (mvm->error_event_table[1])
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 829ac22b72fc..731d59ee6efe 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -729,16 +729,21 @@ static int hwsim_fops_ps_write(void *dat, u64 val)
+ val != PS_MANUAL_POLL)
+ return -EINVAL;
+
+- old_ps = data->ps;
+- data->ps = val;
+-
+- local_bh_disable();
+ if (val == PS_MANUAL_POLL) {
++ if (data->ps != PS_ENABLED)
++ return -EINVAL;
++ local_bh_disable();
+ ieee80211_iterate_active_interfaces_atomic(
+ data->hw, IEEE80211_IFACE_ITER_NORMAL,
+ hwsim_send_ps_poll, data);
+- data->ps_poll_pending = true;
+- } else if (old_ps == PS_DISABLED && val != PS_DISABLED) {
++ local_bh_enable();
++ return 0;
++ }
++ old_ps = data->ps;
++ data->ps = val;
++
++ local_bh_disable();
++ if (old_ps == PS_DISABLED && val != PS_DISABLED) {
+ ieee80211_iterate_active_interfaces_atomic(
+ data->hw, IEEE80211_IFACE_ITER_NORMAL,
+ hwsim_send_nullfunc_ps, data);
+diff --git a/drivers/net/wireless/marvell/mwifiex/cfg80211.c b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+index 6e0d9a9c5cfb..f32401197f7c 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/marvell/mwifiex/cfg80211.c
+@@ -1116,6 +1116,12 @@ mwifiex_cfg80211_change_virtual_intf(struct wiphy *wiphy,
+ struct mwifiex_private *priv = mwifiex_netdev_get_priv(dev);
+ enum nl80211_iftype curr_iftype = dev->ieee80211_ptr->iftype;
+
++ if (priv->scan_request) {
++ mwifiex_dbg(priv->adapter, ERROR,
++ "change virtual interface: scan in process\n");
++ return -EBUSY;
++ }
++
+ switch (curr_iftype) {
+ case NL80211_IFTYPE_ADHOC:
+ switch (type) {
+diff --git a/drivers/pinctrl/sh-pfc/pfc-r8a7791.c b/drivers/pinctrl/sh-pfc/pfc-r8a7791.c
+index 10bd35f8c894..c01ef02d326b 100644
+--- a/drivers/pinctrl/sh-pfc/pfc-r8a7791.c
++++ b/drivers/pinctrl/sh-pfc/pfc-r8a7791.c
+@@ -4826,6 +4826,10 @@ static const char * const can0_groups[] = {
+ "can0_data_d",
+ "can0_data_e",
+ "can0_data_f",
++ /*
++ * Retained for backwards compatibility, use can_clk_groups in new
++ * designs.
++ */
+ "can_clk",
+ "can_clk_b",
+ "can_clk_c",
+@@ -4837,6 +4841,21 @@ static const char * const can1_groups[] = {
+ "can1_data_b",
+ "can1_data_c",
+ "can1_data_d",
++ /*
++ * Retained for backwards compatibility, use can_clk_groups in new
++ * designs.
++ */
++ "can_clk",
++ "can_clk_b",
++ "can_clk_c",
++ "can_clk_d",
++};
++
++/*
++ * can_clk_groups allows for independent configuration, use can_clk function
++ * in new designs.
++ */
++static const char * const can_clk_groups[] = {
+ "can_clk",
+ "can_clk_b",
+ "can_clk_c",
+@@ -5308,7 +5327,7 @@ static const char * const vin2_groups[] = {
+ };
+
+ static const struct {
+- struct sh_pfc_function common[56];
++ struct sh_pfc_function common[57];
+ struct sh_pfc_function r8a779x[2];
+ } pinmux_functions = {
+ .common = {
+@@ -5316,6 +5335,7 @@ static const struct {
+ SH_PFC_FUNCTION(avb),
+ SH_PFC_FUNCTION(can0),
+ SH_PFC_FUNCTION(can1),
++ SH_PFC_FUNCTION(can_clk),
+ SH_PFC_FUNCTION(du),
+ SH_PFC_FUNCTION(du0),
+ SH_PFC_FUNCTION(du1),
+diff --git a/drivers/pinctrl/sh-pfc/pfc-r8a7795-es1.c b/drivers/pinctrl/sh-pfc/pfc-r8a7795-es1.c
+index 1d4d84f34d60..292e35d4d2f4 100644
+--- a/drivers/pinctrl/sh-pfc/pfc-r8a7795-es1.c
++++ b/drivers/pinctrl/sh-pfc/pfc-r8a7795-es1.c
+@@ -1397,7 +1397,7 @@ static const u16 pinmux_data[] = {
+ PINMUX_IPSR_MSEL(IP16_27_24, AUDIO_CLKOUT_B, SEL_ADG_1),
+ PINMUX_IPSR_MSEL(IP16_27_24, SSI_SCK2_B, SEL_SSI_1),
+ PINMUX_IPSR_MSEL(IP16_27_24, TS_SDEN1_D, SEL_TSIF1_3),
+- PINMUX_IPSR_MSEL(IP16_27_24, STP_ISEN_1_D, SEL_SSP1_1_2),
++ PINMUX_IPSR_MSEL(IP16_27_24, STP_ISEN_1_D, SEL_SSP1_1_3),
+ PINMUX_IPSR_MSEL(IP16_27_24, STP_OPWM_0_E, SEL_SSP1_0_4),
+ PINMUX_IPSR_MSEL(IP16_27_24, RIF3_D0_B, SEL_DRIF3_1),
+ PINMUX_IPSR_MSEL(IP16_27_24, TCLK2_B, SEL_TIMER_TMU_1),
+diff --git a/drivers/power/supply/ab8500_charger.c b/drivers/power/supply/ab8500_charger.c
+index 4ebbcce45c48..5a76c6d343de 100644
+--- a/drivers/power/supply/ab8500_charger.c
++++ b/drivers/power/supply/ab8500_charger.c
+@@ -3218,11 +3218,13 @@ static int ab8500_charger_init_hw_registers(struct ab8500_charger *di)
+ }
+
+ /* Enable backup battery charging */
+- abx500_mask_and_set_register_interruptible(di->dev,
++ ret = abx500_mask_and_set_register_interruptible(di->dev,
+ AB8500_RTC, AB8500_RTC_CTRL_REG,
+ RTC_BUP_CH_ENA, RTC_BUP_CH_ENA);
+- if (ret < 0)
++ if (ret < 0) {
+ dev_err(di->dev, "%s mask and set failed\n", __func__);
++ goto out;
++ }
+
+ if (is_ab8540(di->parent)) {
+ ret = abx500_mask_and_set_register_interruptible(di->dev,
+diff --git a/drivers/power/supply/sbs-manager.c b/drivers/power/supply/sbs-manager.c
+index ccb4217b9638..cb6e8f66c7a2 100644
+--- a/drivers/power/supply/sbs-manager.c
++++ b/drivers/power/supply/sbs-manager.c
+@@ -183,7 +183,7 @@ static int sbsm_select(struct i2c_mux_core *muxc, u32 chan)
+ return ret;
+
+ /* chan goes from 1 ... 4 */
+- reg = 1 << BIT(SBSM_SMB_BAT_OFFSET + chan);
++ reg = BIT(SBSM_SMB_BAT_OFFSET + chan);
+ ret = sbsm_write_word(data->client, SBSM_CMD_BATSYSSTATE, reg);
+ if (ret)
+ dev_err(dev, "Failed to select channel %i\n", chan);
+diff --git a/drivers/pwm/pwm-stmpe.c b/drivers/pwm/pwm-stmpe.c
+index e464582a390a..3439f1e902cb 100644
+--- a/drivers/pwm/pwm-stmpe.c
++++ b/drivers/pwm/pwm-stmpe.c
+@@ -145,7 +145,7 @@ static int stmpe_24xx_pwm_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ break;
+
+ case 2:
+- offset = STMPE24XX_PWMIC1;
++ offset = STMPE24XX_PWMIC2;
+ break;
+
+ default:
+diff --git a/drivers/rtc/rtc-brcmstb-waketimer.c b/drivers/rtc/rtc-brcmstb-waketimer.c
+index 796ac792a381..6cee61201c30 100644
+--- a/drivers/rtc/rtc-brcmstb-waketimer.c
++++ b/drivers/rtc/rtc-brcmstb-waketimer.c
+@@ -253,7 +253,7 @@ static int brcmstb_waketmr_probe(struct platform_device *pdev)
+ ret = devm_request_irq(dev, timer->irq, brcmstb_waketmr_irq, 0,
+ "brcmstb-waketimer", timer);
+ if (ret < 0)
+- return ret;
++ goto err_clk;
+
+ timer->reboot_notifier.notifier_call = brcmstb_waketmr_reboot;
+ register_reboot_notifier(&timer->reboot_notifier);
+@@ -262,12 +262,21 @@ static int brcmstb_waketmr_probe(struct platform_device *pdev)
+ &brcmstb_waketmr_ops, THIS_MODULE);
+ if (IS_ERR(timer->rtc)) {
+ dev_err(dev, "unable to register device\n");
+- unregister_reboot_notifier(&timer->reboot_notifier);
+- return PTR_ERR(timer->rtc);
++ ret = PTR_ERR(timer->rtc);
++ goto err_notifier;
+ }
+
+ dev_info(dev, "registered, with irq %d\n", timer->irq);
+
++ return 0;
++
++err_notifier:
++ unregister_reboot_notifier(&timer->reboot_notifier);
++
++err_clk:
++ if (timer->clk)
++ clk_disable_unprepare(timer->clk);
++
+ return ret;
+ }
+
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index 517ae570e507..0e63c88441a3 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -416,6 +416,9 @@ lpfc_nvme_ls_req(struct nvme_fc_local_port *pnvme_lport,
+ lport = (struct lpfc_nvme_lport *)pnvme_lport->private;
+ vport = lport->vport;
+
++ if (vport->load_flag & FC_UNLOADING)
++ return -ENODEV;
++
+ if (vport->load_flag & FC_UNLOADING)
+ return -ENODEV;
+
+@@ -534,6 +537,9 @@ lpfc_nvme_ls_abort(struct nvme_fc_local_port *pnvme_lport,
+ vport = lport->vport;
+ phba = vport->phba;
+
++ if (vport->load_flag & FC_UNLOADING)
++ return;
++
+ ndlp = lpfc_findnode_did(vport, pnvme_rport->port_id);
+ if (!ndlp) {
+ lpfc_printf_vlog(vport, KERN_ERR, LOG_NVME_ABTS,
+@@ -1260,6 +1266,11 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
+ goto out_fail;
+ }
+
++ if (vport->load_flag & FC_UNLOADING) {
++ ret = -ENODEV;
++ goto out_fail;
++ }
++
+ /* Validate pointers. */
+ if (!pnvme_lport || !pnvme_rport || !freqpriv) {
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_IOERR | LOG_NODE,
+@@ -1487,6 +1498,9 @@ lpfc_nvme_fcp_abort(struct nvme_fc_local_port *pnvme_lport,
+ vport = lport->vport;
+ phba = vport->phba;
+
++ if (vport->load_flag & FC_UNLOADING)
++ return;
++
+ /* Announce entry to new IO submit field. */
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_ABTS,
+ "6002 Abort Request to rport DID x%06x "
+diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
+index 84cf1b9079f7..2b50aecc2722 100644
+--- a/drivers/scsi/lpfc/lpfc_nvmet.c
++++ b/drivers/scsi/lpfc/lpfc_nvmet.c
+@@ -632,6 +632,9 @@ lpfc_nvmet_xmt_ls_rsp(struct nvmet_fc_target_port *tgtport,
+ struct ulp_bde64 bpl;
+ int rc;
+
++ if (phba->pport->load_flag & FC_UNLOADING)
++ return -ENODEV;
++
+ if (phba->pport->load_flag & FC_UNLOADING)
+ return -ENODEV;
+
+@@ -721,6 +724,11 @@ lpfc_nvmet_xmt_fcp_op(struct nvmet_fc_target_port *tgtport,
+ goto aerr;
+ }
+
++ if (phba->pport->load_flag & FC_UNLOADING) {
++ rc = -ENODEV;
++ goto aerr;
++ }
++
+ #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+ if (ctxp->ts_cmd_nvme) {
+ if (rsp->op == NVMET_FCOP_RSP)
+@@ -820,6 +828,9 @@ lpfc_nvmet_xmt_fcp_abort(struct nvmet_fc_target_port *tgtport,
+ struct lpfc_hba *phba = ctxp->phba;
+ unsigned long flags;
+
++ if (phba->pport->load_flag & FC_UNLOADING)
++ return;
++
+ if (phba->pport->load_flag & FC_UNLOADING)
+ return;
+
+diff --git a/drivers/scsi/scsi_devinfo.c b/drivers/scsi/scsi_devinfo.c
+index dfb8da83fa50..0a17b9e5dc35 100644
+--- a/drivers/scsi/scsi_devinfo.c
++++ b/drivers/scsi/scsi_devinfo.c
+@@ -181,7 +181,7 @@ static struct {
+ {"HITACHI", "6586-", "*", BLIST_SPARSELUN | BLIST_LARGELUN},
+ {"HITACHI", "6588-", "*", BLIST_SPARSELUN | BLIST_LARGELUN},
+ {"HP", "A6189A", NULL, BLIST_SPARSELUN | BLIST_LARGELUN}, /* HP VA7400 */
+- {"HP", "OPEN-", "*", BLIST_REPORTLUN2}, /* HP XP Arrays */
++ {"HP", "OPEN-", "*", BLIST_REPORTLUN2 | BLIST_TRY_VPD_PAGES}, /* HP XP Arrays */
+ {"HP", "NetRAID-4M", NULL, BLIST_FORCELUN},
+ {"HP", "HSV100", NULL, BLIST_REPORTLUN2 | BLIST_NOSTARTONADD},
+ {"HP", "C1557A", NULL, BLIST_FORCELUN},
+@@ -590,17 +590,12 @@ blist_flags_t scsi_get_device_flags_keyed(struct scsi_device *sdev,
+ int key)
+ {
+ struct scsi_dev_info_list *devinfo;
+- int err;
+
+ devinfo = scsi_dev_info_list_find(vendor, model, key);
+ if (!IS_ERR(devinfo))
+ return devinfo->flags;
+
+- err = PTR_ERR(devinfo);
+- if (err != -ENOENT)
+- return err;
+-
+- /* nothing found, return nothing */
++ /* key or device not found: return nothing */
+ if (key != SCSI_DEVINFO_GLOBAL)
+ return 0;
+
+diff --git a/drivers/scsi/scsi_dh.c b/drivers/scsi/scsi_dh.c
+index 2b785d09d5bd..b88b5dbbc444 100644
+--- a/drivers/scsi/scsi_dh.c
++++ b/drivers/scsi/scsi_dh.c
+@@ -56,10 +56,13 @@ static const struct scsi_dh_blist scsi_dh_blist[] = {
+ {"IBM", "1815", "rdac", },
+ {"IBM", "1818", "rdac", },
+ {"IBM", "3526", "rdac", },
++ {"IBM", "3542", "rdac", },
++ {"IBM", "3552", "rdac", },
+ {"SGI", "TP9", "rdac", },
+ {"SGI", "IS", "rdac", },
+- {"STK", "OPENstorage D280", "rdac", },
++ {"STK", "OPENstorage", "rdac", },
+ {"STK", "FLEXLINE 380", "rdac", },
++ {"STK", "BladeCtlr", "rdac", },
+ {"SUN", "CSM", "rdac", },
+ {"SUN", "LCSM100", "rdac", },
+ {"SUN", "STK6580_6780", "rdac", },
+diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
+index 27793b9f54c0..9049a189c8e5 100644
+--- a/drivers/scsi/sd_zbc.c
++++ b/drivers/scsi/sd_zbc.c
+@@ -486,7 +486,7 @@ static int sd_zbc_check_capacity(struct scsi_disk *sdkp, unsigned char *buf)
+ */
+ static int sd_zbc_check_zone_size(struct scsi_disk *sdkp)
+ {
+- u64 zone_blocks;
++ u64 zone_blocks = 0;
+ sector_t block = 0;
+ unsigned char *buf;
+ unsigned char *rec;
+@@ -504,10 +504,8 @@ static int sd_zbc_check_zone_size(struct scsi_disk *sdkp)
+
+ /* Do a report zone to get the same field */
+ ret = sd_zbc_report_zones(sdkp, buf, SD_ZBC_BUF_SIZE, 0);
+- if (ret) {
+- zone_blocks = 0;
+- goto out;
+- }
++ if (ret)
++ goto out_free;
+
+ same = buf[4] & 0x0f;
+ if (same > 0) {
+@@ -547,7 +545,7 @@ static int sd_zbc_check_zone_size(struct scsi_disk *sdkp)
+ ret = sd_zbc_report_zones(sdkp, buf,
+ SD_ZBC_BUF_SIZE, block);
+ if (ret)
+- return ret;
++ goto out_free;
+ }
+
+ } while (block < sdkp->capacity);
+@@ -555,35 +553,32 @@ static int sd_zbc_check_zone_size(struct scsi_disk *sdkp)
+ zone_blocks = sdkp->zone_blocks;
+
+ out:
+- kfree(buf);
+-
+ if (!zone_blocks) {
+ if (sdkp->first_scan)
+ sd_printk(KERN_NOTICE, sdkp,
+ "Devices with non constant zone "
+ "size are not supported\n");
+- return -ENODEV;
+- }
+-
+- if (!is_power_of_2(zone_blocks)) {
++ ret = -ENODEV;
++ } else if (!is_power_of_2(zone_blocks)) {
+ if (sdkp->first_scan)
+ sd_printk(KERN_NOTICE, sdkp,
+ "Devices with non power of 2 zone "
+ "size are not supported\n");
+- return -ENODEV;
+- }
+-
+- if (logical_to_sectors(sdkp->device, zone_blocks) > UINT_MAX) {
++ ret = -ENODEV;
++ } else if (logical_to_sectors(sdkp->device, zone_blocks) > UINT_MAX) {
+ if (sdkp->first_scan)
+ sd_printk(KERN_NOTICE, sdkp,
+ "Zone size too large\n");
+- return -ENODEV;
++ ret = -ENODEV;
++ } else {
++ sdkp->zone_blocks = zone_blocks;
++ sdkp->zone_shift = ilog2(zone_blocks);
+ }
+
+- sdkp->zone_blocks = zone_blocks;
+- sdkp->zone_shift = ilog2(zone_blocks);
++out_free:
++ kfree(buf);
+
+- return 0;
++ return ret;
+ }
+
+ static int sd_zbc_setup(struct scsi_disk *sdkp)
+diff --git a/drivers/scsi/ses.c b/drivers/scsi/ses.c
+index 11826c5c2dd4..62f04c0511cf 100644
+--- a/drivers/scsi/ses.c
++++ b/drivers/scsi/ses.c
+@@ -615,13 +615,16 @@ static void ses_enclosure_data_process(struct enclosure_device *edev,
+ }
+
+ static void ses_match_to_enclosure(struct enclosure_device *edev,
+- struct scsi_device *sdev)
++ struct scsi_device *sdev,
++ int refresh)
+ {
++ struct scsi_device *edev_sdev = to_scsi_device(edev->edev.parent);
+ struct efd efd = {
+ .addr = 0,
+ };
+
+- ses_enclosure_data_process(edev, to_scsi_device(edev->edev.parent), 0);
++ if (refresh)
++ ses_enclosure_data_process(edev, edev_sdev, 0);
+
+ if (scsi_is_sas_rphy(sdev->sdev_target->dev.parent))
+ efd.addr = sas_get_address(sdev);
+@@ -652,7 +655,7 @@ static int ses_intf_add(struct device *cdev,
+ struct enclosure_device *prev = NULL;
+
+ while ((edev = enclosure_find(&sdev->host->shost_gendev, prev)) != NULL) {
+- ses_match_to_enclosure(edev, sdev);
++ ses_match_to_enclosure(edev, sdev, 1);
+ prev = edev;
+ }
+ return -ENODEV;
+@@ -768,7 +771,7 @@ static int ses_intf_add(struct device *cdev,
+ shost_for_each_device(tmp_sdev, sdev->host) {
+ if (tmp_sdev->lun != 0 || scsi_device_enclosure(tmp_sdev))
+ continue;
+- ses_match_to_enclosure(edev, tmp_sdev);
++ ses_match_to_enclosure(edev, tmp_sdev, 0);
+ }
+
+ return 0;
+diff --git a/drivers/spi/spi-imx.c b/drivers/spi/spi-imx.c
+index 40390d31a93b..6f57592a7f95 100644
+--- a/drivers/spi/spi-imx.c
++++ b/drivers/spi/spi-imx.c
+@@ -1622,6 +1622,11 @@ static int spi_imx_probe(struct platform_device *pdev)
+ spi_imx->devtype_data->intctrl(spi_imx, 0);
+
+ master->dev.of_node = pdev->dev.of_node;
++ ret = spi_bitbang_start(&spi_imx->bitbang);
++ if (ret) {
++ dev_err(&pdev->dev, "bitbang start failed with %d\n", ret);
++ goto out_clk_put;
++ }
+
+ /* Request GPIO CS lines, if any */
+ if (!spi_imx->slave_mode && master->cs_gpios) {
+@@ -1640,12 +1645,6 @@ static int spi_imx_probe(struct platform_device *pdev)
+ }
+ }
+
+- ret = spi_bitbang_start(&spi_imx->bitbang);
+- if (ret) {
+- dev_err(&pdev->dev, "bitbang start failed with %d\n", ret);
+- goto out_clk_put;
+- }
+-
+ dev_info(&pdev->dev, "probed\n");
+
+ clk_disable(spi_imx->clk_ipg);
+diff --git a/drivers/spi/spi-sun6i.c b/drivers/spi/spi-sun6i.c
+index fb38234249a8..8533f4edd00a 100644
+--- a/drivers/spi/spi-sun6i.c
++++ b/drivers/spi/spi-sun6i.c
+@@ -541,7 +541,7 @@ static int sun6i_spi_probe(struct platform_device *pdev)
+
+ static int sun6i_spi_remove(struct platform_device *pdev)
+ {
+- pm_runtime_disable(&pdev->dev);
++ pm_runtime_force_suspend(&pdev->dev);
+
+ return 0;
+ }
+diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
+index e7541dc90473..d9941b0c468d 100644
+--- a/drivers/staging/android/ashmem.c
++++ b/drivers/staging/android/ashmem.c
+@@ -334,24 +334,23 @@ static loff_t ashmem_llseek(struct file *file, loff_t offset, int origin)
+ mutex_lock(&ashmem_mutex);
+
+ if (asma->size == 0) {
+- ret = -EINVAL;
+- goto out;
++ mutex_unlock(&ashmem_mutex);
++ return -EINVAL;
+ }
+
+ if (!asma->file) {
+- ret = -EBADF;
+- goto out;
++ mutex_unlock(&ashmem_mutex);
++ return -EBADF;
+ }
+
++ mutex_unlock(&ashmem_mutex);
++
+ ret = vfs_llseek(asma->file, offset, origin);
+ if (ret < 0)
+- goto out;
++ return ret;
+
+ /** Copy f_pos from backing file, since f_ops->llseek() sets it */
+ file->f_pos = asma->file->f_pos;
+-
+-out:
+- mutex_unlock(&ashmem_mutex);
+ return ret;
+ }
+
+diff --git a/drivers/staging/comedi/drivers.c b/drivers/staging/comedi/drivers.c
+index 0b43db6371c6..c11c22bd6d13 100644
+--- a/drivers/staging/comedi/drivers.c
++++ b/drivers/staging/comedi/drivers.c
+@@ -484,8 +484,7 @@ unsigned int comedi_nsamples_left(struct comedi_subdevice *s,
+ struct comedi_cmd *cmd = &async->cmd;
+
+ if (cmd->stop_src == TRIG_COUNT) {
+- unsigned int nscans = nsamples / cmd->scan_end_arg;
+- unsigned int scans_left = __comedi_nscans_left(s, nscans);
++ unsigned int scans_left = __comedi_nscans_left(s, cmd->stop_arg);
+ unsigned int scan_pos =
+ comedi_bytes_to_samples(s, async->scan_progress);
+ unsigned long long samples_left = 0;
+diff --git a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
+index 0d8ed002adcb..c8a8e3abfc3a 100644
+--- a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
++++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
+@@ -249,7 +249,7 @@ static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
+ vaddr = dpaa2_iova_to_virt(priv->iommu_domain, addr);
+ dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUF_SIZE, DMA_FROM_DEVICE);
+
+- fas = dpaa2_get_fas(vaddr);
++ fas = dpaa2_get_fas(vaddr, false);
+ prefetch(fas);
+ buf_data = vaddr + dpaa2_fd_get_offset(fd);
+ prefetch(buf_data);
+@@ -385,7 +385,7 @@ static int build_sg_fd(struct dpaa2_eth_priv *priv,
+ * on TX confirmation. We are clearing FAS (Frame Annotation Status)
+ * field from the hardware annotation area
+ */
+- fas = dpaa2_get_fas(sgt_buf);
++ fas = dpaa2_get_fas(sgt_buf, true);
+ memset(fas, 0, DPAA2_FAS_SIZE);
+
+ sgt = (struct dpaa2_sg_entry *)(sgt_buf + priv->tx_data_offset);
+@@ -458,7 +458,7 @@ static int build_single_fd(struct dpaa2_eth_priv *priv,
+ * on TX confirmation. We are clearing FAS (Frame Annotation Status)
+ * field from the hardware annotation area
+ */
+- fas = dpaa2_get_fas(buffer_start);
++ fas = dpaa2_get_fas(buffer_start, true);
+ memset(fas, 0, DPAA2_FAS_SIZE);
+
+ /* Store a backpointer to the skb at the beginning of the buffer
+@@ -510,7 +510,7 @@ static void free_tx_fd(const struct dpaa2_eth_priv *priv,
+
+ fd_addr = dpaa2_fd_get_addr(fd);
+ skbh = dpaa2_iova_to_virt(priv->iommu_domain, fd_addr);
+- fas = dpaa2_get_fas(skbh);
++ fas = dpaa2_get_fas(skbh, true);
+
+ if (fd_format == dpaa2_fd_single) {
+ skb = *skbh;
+diff --git a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
+index 5b3ab9f62d5e..3a4e9395acdc 100644
+--- a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
++++ b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
+@@ -153,10 +153,15 @@ struct dpaa2_fas {
+ #define DPAA2_FAS_SIZE (sizeof(struct dpaa2_fas))
+
+ /* Accessors for the hardware annotation fields that we use */
+-#define dpaa2_get_hwa(buf_addr) \
+- ((void *)(buf_addr) + DPAA2_ETH_SWA_SIZE)
+-#define dpaa2_get_fas(buf_addr) \
+- (struct dpaa2_fas *)(dpaa2_get_hwa(buf_addr) + DPAA2_FAS_OFFSET)
++static inline void *dpaa2_get_hwa(void *buf_addr, bool swa)
++{
++ return buf_addr + (swa ? DPAA2_ETH_SWA_SIZE : 0);
++}
++
++static inline struct dpaa2_fas *dpaa2_get_fas(void *buf_addr, bool swa)
++{
++ return dpaa2_get_hwa(buf_addr, swa) + DPAA2_FAS_OFFSET;
++}
+
+ /* Error and status bits in the frame annotation status word */
+ /* Debug frame, otherwise supposed to be discarded */
+diff --git a/drivers/staging/rtlwifi/rtl8822be/fw.c b/drivers/staging/rtlwifi/rtl8822be/fw.c
+index f45487122517..483ea85943c3 100644
+--- a/drivers/staging/rtlwifi/rtl8822be/fw.c
++++ b/drivers/staging/rtlwifi/rtl8822be/fw.c
+@@ -464,6 +464,8 @@ bool rtl8822b_halmac_cb_write_data_rsvd_page(struct rtl_priv *rtlpriv, u8 *buf,
+ int count;
+
+ skb = dev_alloc_skb(size);
++ if (!skb)
++ return false;
+ memcpy((u8 *)skb_put(skb, size), buf, size);
+
+ if (!_rtl8822be_send_bcn_or_cmd_packet(rtlpriv->hw, skb, BEACON_QUEUE))
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 54adf8d56350..38850672c57e 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -4698,6 +4698,17 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ { PCI_VENDOR_ID_INTASHIELD, PCI_DEVICE_ID_INTASHIELD_IS400,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, /* 135a.0dc0 */
+ pbn_b2_4_115200 },
++ /*
++ * BrainBoxes UC-260
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x0D21,
++ PCI_ANY_ID, PCI_ANY_ID,
++ PCI_CLASS_COMMUNICATION_MULTISERIAL << 8, 0xffff00,
++ pbn_b2_4_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x0E34,
++ PCI_ANY_ID, PCI_ANY_ID,
++ PCI_CLASS_COMMUNICATION_MULTISERIAL << 8, 0xffff00,
++ pbn_b2_4_115200 },
+ /*
+ * Perle PCI-RAS cards
+ */
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index efa25611ca0c..ae9f1dcbf3fc 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -1734,6 +1734,7 @@ static void atmel_get_ip_name(struct uart_port *port)
+ switch (version) {
+ case 0x302:
+ case 0x10213:
++ case 0x10302:
+ dev_dbg(port->dev, "This version is usart\n");
+ atmel_port->has_frac_baudrate = true;
+ atmel_port->has_hw_timer = true;
+diff --git a/drivers/tty/serial/earlycon.c b/drivers/tty/serial/earlycon.c
+index 4c8b80f1c688..ade1bbbeb9c9 100644
+--- a/drivers/tty/serial/earlycon.c
++++ b/drivers/tty/serial/earlycon.c
+@@ -250,11 +250,12 @@ int __init of_setup_earlycon(const struct earlycon_id *match,
+ }
+ port->mapbase = addr;
+ port->uartclk = BASE_BAUD * 16;
+- port->membase = earlycon_map(port->mapbase, SZ_4K);
+
+ val = of_get_flat_dt_prop(node, "reg-offset", NULL);
+ if (val)
+ port->mapbase += be32_to_cpu(*val);
++ port->membase = earlycon_map(port->mapbase, SZ_4K);
++
+ val = of_get_flat_dt_prop(node, "reg-shift", NULL);
+ if (val)
+ port->regshift = be32_to_cpu(*val);
+diff --git a/drivers/tty/serial/serial_core.c b/drivers/tty/serial/serial_core.c
+index 7e7e6eb95b0a..6e0ab3333f62 100644
+--- a/drivers/tty/serial/serial_core.c
++++ b/drivers/tty/serial/serial_core.c
+@@ -1144,6 +1144,8 @@ static int uart_do_autoconfig(struct tty_struct *tty,struct uart_state *state)
+ uport->ops->config_port(uport, flags);
+
+ ret = uart_startup(tty, state, 1);
++ if (ret == 0)
++ tty_port_set_initialized(port, true);
+ if (ret > 0)
+ ret = 0;
+ }
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index d9f399c4e90c..db2ebeca7788 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -885,6 +885,8 @@ static void sci_receive_chars(struct uart_port *port)
+ /* Tell the rest of the system the news. New characters! */
+ tty_flip_buffer_push(tport);
+ } else {
++ /* TTY buffers full; read from RX reg to prevent lockup */
++ serial_port_in(port, SCxRDR);
+ serial_port_in(port, SCxSR); /* dummy read */
+ sci_clear_SCxSR(port, SCxSR_RDxF_CLEAR(port));
+ }
+diff --git a/drivers/usb/core/ledtrig-usbport.c b/drivers/usb/core/ledtrig-usbport.c
+index 9dbb429cd471..f1fde5165068 100644
+--- a/drivers/usb/core/ledtrig-usbport.c
++++ b/drivers/usb/core/ledtrig-usbport.c
+@@ -137,11 +137,17 @@ static bool usbport_trig_port_observed(struct usbport_trig_data *usbport_data,
+ if (!led_np)
+ return false;
+
+- /* Get node of port being added */
++ /*
++ * Get node of port being added
++ *
++ * FIXME: This is really the device node of the connected device
++ */
+ port_np = usb_of_get_child_node(usb_dev->dev.of_node, port1);
+ if (!port_np)
+ return false;
+
++ of_node_put(port_np);
++
+ /* Amount of trigger sources for this LED */
+ count = of_count_phandle_with_args(led_np, "trigger-sources",
+ "#trigger-source-cells");
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index 77001bcfc504..a4025760dd84 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -150,6 +150,10 @@ int usb_control_msg(struct usb_device *dev, unsigned int pipe, __u8 request,
+
+ ret = usb_internal_control_msg(dev, pipe, dr, data, size, timeout);
+
++ /* Linger a bit, prior to the next control message. */
++ if (dev->quirks & USB_QUIRK_DELAY_CTRL_MSG)
++ msleep(200);
++
+ kfree(dr);
+
+ return ret;
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index f4a548471f0f..54b019e267c5 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -230,7 +230,8 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x1b1c, 0x1b13), .driver_info = USB_QUIRK_DELAY_INIT },
+
+ /* Corsair Strafe RGB */
+- { USB_DEVICE(0x1b1c, 0x1b20), .driver_info = USB_QUIRK_DELAY_INIT },
++ { USB_DEVICE(0x1b1c, 0x1b20), .driver_info = USB_QUIRK_DELAY_INIT |
++ USB_QUIRK_DELAY_CTRL_MSG },
+
+ /* Corsair K70 LUX */
+ { USB_DEVICE(0x1b1c, 0x1b36), .driver_info = USB_QUIRK_DELAY_INIT },
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 07832509584f..51de21ef3cdc 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -174,7 +174,7 @@ void dwc3_set_mode(struct dwc3 *dwc, u32 mode)
+ dwc->desired_dr_role = mode;
+ spin_unlock_irqrestore(&dwc->lock, flags);
+
+- queue_work(system_power_efficient_wq, &dwc->drd_work);
++ queue_work(system_freezable_wq, &dwc->drd_work);
+ }
+
+ u32 dwc3_core_fifo_space(struct dwc3_ep *dep, u8 type)
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 0ef08a909ba6..569882e04213 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1535,7 +1535,6 @@ ffs_fs_kill_sb(struct super_block *sb)
+ if (sb->s_fs_info) {
+ ffs_release_dev(sb->s_fs_info);
+ ffs_data_closed(sb->s_fs_info);
+- ffs_data_put(sb->s_fs_info);
+ }
+ }
+
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 1aad89b8aba0..f68f364c77d5 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -122,6 +122,9 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ if (pdev->vendor == PCI_VENDOR_ID_AMD && usb_amd_find_chipset_info())
+ xhci->quirks |= XHCI_AMD_PLL_FIX;
+
++ if (pdev->vendor == PCI_VENDOR_ID_AMD && pdev->device == 0x43bb)
++ xhci->quirks |= XHCI_SUSPEND_DELAY;
++
+ if (pdev->vendor == PCI_VENDOR_ID_AMD)
+ xhci->quirks |= XHCI_TRUST_TX_LENGTH;
+
+diff --git a/drivers/usb/host/xhci-rcar.c b/drivers/usb/host/xhci-rcar.c
+index f0b559660007..f33ffc2bc4ed 100644
+--- a/drivers/usb/host/xhci-rcar.c
++++ b/drivers/usb/host/xhci-rcar.c
+@@ -83,6 +83,10 @@ static const struct soc_device_attribute rcar_quirks_match[] = {
+ .soc_id = "r8a7796",
+ .data = (void *)RCAR_XHCI_FIRMWARE_V3,
+ },
++ {
++ .soc_id = "r8a77965",
++ .data = (void *)RCAR_XHCI_FIRMWARE_V3,
++ },
+ { /* sentinel */ },
+ };
+
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 5c1326154e66..a7c99e121cc6 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -880,6 +880,9 @@ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup)
+ clear_bit(HCD_FLAG_POLL_RH, &xhci->shared_hcd->flags);
+ del_timer_sync(&xhci->shared_hcd->rh_timer);
+
++ if (xhci->quirks & XHCI_SUSPEND_DELAY)
++ usleep_range(1000, 1500);
++
+ spin_lock_irq(&xhci->lock);
+ clear_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
+ clear_bit(HCD_FLAG_HW_ACCESSIBLE, &xhci->shared_hcd->flags);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 99a014a920d3..6740448779fe 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -718,11 +718,12 @@ struct xhci_ep_ctx {
+ /* bits 10:14 are Max Primary Streams */
+ /* bit 15 is Linear Stream Array */
+ /* Interval - period between requests to an endpoint - 125u increments. */
+-#define EP_INTERVAL(p) (((p) & 0xff) << 16)
+-#define EP_INTERVAL_TO_UFRAMES(p) (1 << (((p) >> 16) & 0xff))
+-#define CTX_TO_EP_INTERVAL(p) (((p) >> 16) & 0xff)
+-#define EP_MAXPSTREAMS_MASK (0x1f << 10)
+-#define EP_MAXPSTREAMS(p) (((p) << 10) & EP_MAXPSTREAMS_MASK)
++#define EP_INTERVAL(p) (((p) & 0xff) << 16)
++#define EP_INTERVAL_TO_UFRAMES(p) (1 << (((p) >> 16) & 0xff))
++#define CTX_TO_EP_INTERVAL(p) (((p) >> 16) & 0xff)
++#define EP_MAXPSTREAMS_MASK (0x1f << 10)
++#define EP_MAXPSTREAMS(p) (((p) << 10) & EP_MAXPSTREAMS_MASK)
++#define CTX_TO_EP_MAXPSTREAMS(p) (((p) & EP_MAXPSTREAMS_MASK) >> 10)
+ /* Endpoint is set up with a Linear Stream Array (vs. Secondary Stream Array) */
+ #define EP_HAS_LSA (1 << 15)
+ /* hosts with LEC=1 use bits 31:24 as ESIT high bits. */
+@@ -1823,6 +1824,7 @@ struct xhci_hcd {
+ /* Reserved. It was XHCI_U2_DISABLE_WAKE */
+ #define XHCI_ASMEDIA_MODIFY_FLOWCONTROL (1 << 28)
+ #define XHCI_HW_LPM_DISABLE (1 << 29)
++#define XHCI_SUSPEND_DELAY (1 << 30)
+
+ unsigned int num_active_eps;
+ unsigned int limit_active_eps;
+@@ -2537,21 +2539,22 @@ static inline const char *xhci_decode_ep_context(u32 info, u32 info2, u64 deq,
+ u8 burst;
+ u8 cerr;
+ u8 mult;
+- u8 lsa;
+- u8 hid;
++
++ bool lsa;
++ bool hid;
+
+ esit = CTX_TO_MAX_ESIT_PAYLOAD_HI(info) << 16 |
+ CTX_TO_MAX_ESIT_PAYLOAD(tx_info);
+
+ ep_state = info & EP_STATE_MASK;
+- max_pstr = info & EP_MAXPSTREAMS_MASK;
++ max_pstr = CTX_TO_EP_MAXPSTREAMS(info);
+ interval = CTX_TO_EP_INTERVAL(info);
+ mult = CTX_TO_EP_MULT(info) + 1;
+- lsa = info & EP_HAS_LSA;
++ lsa = !!(info & EP_HAS_LSA);
+
+ cerr = (info2 & (3 << 1)) >> 1;
+ ep_type = CTX_TO_EP_TYPE(info2);
+- hid = info2 & (1 << 7);
++ hid = !!(info2 & (1 << 7));
+ burst = CTX_TO_MAX_BURST(info2);
+ maxp = MAX_PACKET_DECODED(info2);
+
+diff --git a/drivers/usb/mon/mon_text.c b/drivers/usb/mon/mon_text.c
+index f5e1bb5e5217..984f7e12a6a5 100644
+--- a/drivers/usb/mon/mon_text.c
++++ b/drivers/usb/mon/mon_text.c
+@@ -85,6 +85,8 @@ struct mon_reader_text {
+
+ wait_queue_head_t wait;
+ int printf_size;
++ size_t printf_offset;
++ size_t printf_togo;
+ char *printf_buf;
+ struct mutex printf_lock;
+
+@@ -376,75 +378,103 @@ static int mon_text_open(struct inode *inode, struct file *file)
+ return rc;
+ }
+
+-/*
+- * For simplicity, we read one record in one system call and throw out
+- * what does not fit. This means that the following does not work:
+- * dd if=/dbg/usbmon/0t bs=10
+- * Also, we do not allow seeks and do not bother advancing the offset.
+- */
++static ssize_t mon_text_copy_to_user(struct mon_reader_text *rp,
++ char __user * const buf, const size_t nbytes)
++{
++ const size_t togo = min(nbytes, rp->printf_togo);
++
++ if (copy_to_user(buf, &rp->printf_buf[rp->printf_offset], togo))
++ return -EFAULT;
++ rp->printf_togo -= togo;
++ rp->printf_offset += togo;
++ return togo;
++}
++
++/* ppos is not advanced since the llseek operation is not permitted. */
+ static ssize_t mon_text_read_t(struct file *file, char __user *buf,
+- size_t nbytes, loff_t *ppos)
++ size_t nbytes, loff_t *ppos)
+ {
+ struct mon_reader_text *rp = file->private_data;
+ struct mon_event_text *ep;
+ struct mon_text_ptr ptr;
++ ssize_t ret;
+
+- ep = mon_text_read_wait(rp, file);
+- if (IS_ERR(ep))
+- return PTR_ERR(ep);
+ mutex_lock(&rp->printf_lock);
+- ptr.cnt = 0;
+- ptr.pbuf = rp->printf_buf;
+- ptr.limit = rp->printf_size;
+-
+- mon_text_read_head_t(rp, &ptr, ep);
+- mon_text_read_statset(rp, &ptr, ep);
+- ptr.cnt += snprintf(ptr.pbuf + ptr.cnt, ptr.limit - ptr.cnt,
+- " %d", ep->length);
+- mon_text_read_data(rp, &ptr, ep);
+-
+- if (copy_to_user(buf, rp->printf_buf, ptr.cnt))
+- ptr.cnt = -EFAULT;
++
++ if (rp->printf_togo == 0) {
++
++ ep = mon_text_read_wait(rp, file);
++ if (IS_ERR(ep)) {
++ mutex_unlock(&rp->printf_lock);
++ return PTR_ERR(ep);
++ }
++ ptr.cnt = 0;
++ ptr.pbuf = rp->printf_buf;
++ ptr.limit = rp->printf_size;
++
++ mon_text_read_head_t(rp, &ptr, ep);
++ mon_text_read_statset(rp, &ptr, ep);
++ ptr.cnt += snprintf(ptr.pbuf + ptr.cnt, ptr.limit - ptr.cnt,
++ " %d", ep->length);
++ mon_text_read_data(rp, &ptr, ep);
++
++ rp->printf_togo = ptr.cnt;
++ rp->printf_offset = 0;
++
++ kmem_cache_free(rp->e_slab, ep);
++ }
++
++ ret = mon_text_copy_to_user(rp, buf, nbytes);
+ mutex_unlock(&rp->printf_lock);
+- kmem_cache_free(rp->e_slab, ep);
+- return ptr.cnt;
++ return ret;
+ }
+
++/* ppos is not advanced since the llseek operation is not permitted. */
+ static ssize_t mon_text_read_u(struct file *file, char __user *buf,
+- size_t nbytes, loff_t *ppos)
++ size_t nbytes, loff_t *ppos)
+ {
+ struct mon_reader_text *rp = file->private_data;
+ struct mon_event_text *ep;
+ struct mon_text_ptr ptr;
++ ssize_t ret;
+
+- ep = mon_text_read_wait(rp, file);
+- if (IS_ERR(ep))
+- return PTR_ERR(ep);
+ mutex_lock(&rp->printf_lock);
+- ptr.cnt = 0;
+- ptr.pbuf = rp->printf_buf;
+- ptr.limit = rp->printf_size;
+
+- mon_text_read_head_u(rp, &ptr, ep);
+- if (ep->type == 'E') {
+- mon_text_read_statset(rp, &ptr, ep);
+- } else if (ep->xfertype == USB_ENDPOINT_XFER_ISOC) {
+- mon_text_read_isostat(rp, &ptr, ep);
+- mon_text_read_isodesc(rp, &ptr, ep);
+- } else if (ep->xfertype == USB_ENDPOINT_XFER_INT) {
+- mon_text_read_intstat(rp, &ptr, ep);
+- } else {
+- mon_text_read_statset(rp, &ptr, ep);
++ if (rp->printf_togo == 0) {
++
++ ep = mon_text_read_wait(rp, file);
++ if (IS_ERR(ep)) {
++ mutex_unlock(&rp->printf_lock);
++ return PTR_ERR(ep);
++ }
++ ptr.cnt = 0;
++ ptr.pbuf = rp->printf_buf;
++ ptr.limit = rp->printf_size;
++
++ mon_text_read_head_u(rp, &ptr, ep);
++ if (ep->type == 'E') {
++ mon_text_read_statset(rp, &ptr, ep);
++ } else if (ep->xfertype == USB_ENDPOINT_XFER_ISOC) {
++ mon_text_read_isostat(rp, &ptr, ep);
++ mon_text_read_isodesc(rp, &ptr, ep);
++ } else if (ep->xfertype == USB_ENDPOINT_XFER_INT) {
++ mon_text_read_intstat(rp, &ptr, ep);
++ } else {
++ mon_text_read_statset(rp, &ptr, ep);
++ }
++ ptr.cnt += snprintf(ptr.pbuf + ptr.cnt, ptr.limit - ptr.cnt,
++ " %d", ep->length);
++ mon_text_read_data(rp, &ptr, ep);
++
++ rp->printf_togo = ptr.cnt;
++ rp->printf_offset = 0;
++
++ kmem_cache_free(rp->e_slab, ep);
+ }
+- ptr.cnt += snprintf(ptr.pbuf + ptr.cnt, ptr.limit - ptr.cnt,
+- " %d", ep->length);
+- mon_text_read_data(rp, &ptr, ep);
+
+- if (copy_to_user(buf, rp->printf_buf, ptr.cnt))
+- ptr.cnt = -EFAULT;
++ ret = mon_text_copy_to_user(rp, buf, nbytes);
+ mutex_unlock(&rp->printf_lock);
+- kmem_cache_free(rp->e_slab, ep);
+- return ptr.cnt;
++ return ret;
+ }
+
+ static struct mon_event_text *mon_text_read_wait(struct mon_reader_text *rp,
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 3b1b9695177a..6034c39b67d1 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -1076,7 +1076,7 @@ static int uas_post_reset(struct usb_interface *intf)
+ return 0;
+
+ err = uas_configure_endpoints(devinfo);
+- if (err && err != ENODEV)
++ if (err && err != -ENODEV)
+ shost_printk(KERN_ERR, shost,
+ "%s: alloc streams error %d after reset",
+ __func__, err);
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index f72d045ee9ef..89e1c743e652 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2124,6 +2124,13 @@ UNUSUAL_DEV( 0x152d, 0x2566, 0x0114, 0x0114,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_BROKEN_FUA ),
+
++/* Reported by Teijo Kinnunen <teijo.kinnunen@code-q.fi> */
++UNUSUAL_DEV( 0x152d, 0x2567, 0x0117, 0x0117,
++ "JMicron",
++ "USB to ATA/ATAPI Bridge",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_BROKEN_FUA ),
++
+ /* Reported-by George Cherian <george.cherian@cavium.com> */
+ UNUSUAL_DEV(0x152d, 0x9561, 0x0000, 0x9999,
+ "JMicron",
+diff --git a/drivers/usb/typec/fusb302/fusb302.c b/drivers/usb/typec/fusb302/fusb302.c
+index 72cb060b3fca..d20008576b8f 100644
+--- a/drivers/usb/typec/fusb302/fusb302.c
++++ b/drivers/usb/typec/fusb302/fusb302.c
+@@ -1543,6 +1543,21 @@ static int fusb302_pd_read_message(struct fusb302_chip *chip,
+ fusb302_log(chip, "PD message header: %x", msg->header);
+ fusb302_log(chip, "PD message len: %d", len);
+
++ /*
++ * Check if we've read off a GoodCRC message. If so then indicate to
++ * TCPM that the previous transmission has completed. Otherwise we pass
++ * the received message over to TCPM for processing.
++ *
++ * We make this check here instead of basing the reporting decision on
++ * the IRQ event type, as it's possible for the chip to report the
++ * TX_SUCCESS and GCRCSENT events out of order on occasion, so we need
++ * to check the message type to ensure correct reporting to TCPM.
++ */
++ if ((!len) && (pd_header_type_le(msg->header) == PD_CTRL_GOOD_CRC))
++ tcpm_pd_transmit_complete(chip->tcpm_port, TCPC_TX_SUCCESS);
++ else
++ tcpm_pd_receive(chip->tcpm_port, msg);
++
+ return ret;
+ }
+
+@@ -1650,13 +1665,12 @@ static irqreturn_t fusb302_irq_intn(int irq, void *dev_id)
+
+ if (interrupta & FUSB_REG_INTERRUPTA_TX_SUCCESS) {
+ fusb302_log(chip, "IRQ: PD tx success");
+- /* read out the received good CRC */
+ ret = fusb302_pd_read_message(chip, &pd_msg);
+ if (ret < 0) {
+- fusb302_log(chip, "cannot read in GCRC, ret=%d", ret);
++ fusb302_log(chip,
++ "cannot read in PD message, ret=%d", ret);
+ goto done;
+ }
+- tcpm_pd_transmit_complete(chip->tcpm_port, TCPC_TX_SUCCESS);
+ }
+
+ if (interrupta & FUSB_REG_INTERRUPTA_HARDRESET) {
+@@ -1677,7 +1691,6 @@ static irqreturn_t fusb302_irq_intn(int irq, void *dev_id)
+ "cannot read in PD message, ret=%d", ret);
+ goto done;
+ }
+- tcpm_pd_receive(chip->tcpm_port, &pd_msg);
+ }
+ done:
+ mutex_unlock(&chip->lock);
+diff --git a/drivers/usb/usbip/vudc_sysfs.c b/drivers/usb/usbip/vudc_sysfs.c
+index 1adc8af292ec..59a4d1bcda1e 100644
+--- a/drivers/usb/usbip/vudc_sysfs.c
++++ b/drivers/usb/usbip/vudc_sysfs.c
+@@ -105,10 +105,14 @@ static ssize_t store_sockfd(struct device *dev, struct device_attribute *attr,
+ if (rv != 0)
+ return -EINVAL;
+
++ if (!udc) {
++ dev_err(dev, "no device");
++ return -ENODEV;
++ }
+ spin_lock_irqsave(&udc->lock, flags);
+ /* Don't export what we don't have */
+- if (!udc || !udc->driver || !udc->pullup) {
+- dev_err(dev, "no device or gadget not bound");
++ if (!udc->driver || !udc->pullup) {
++ dev_err(dev, "gadget not bound");
+ ret = -ENODEV;
+ goto unlock;
+ }
+diff --git a/drivers/video/hdmi.c b/drivers/video/hdmi.c
+index 1cf907ecded4..111a0ab6280a 100644
+--- a/drivers/video/hdmi.c
++++ b/drivers/video/hdmi.c
+@@ -321,6 +321,17 @@ int hdmi_vendor_infoframe_init(struct hdmi_vendor_infoframe *frame)
+ }
+ EXPORT_SYMBOL(hdmi_vendor_infoframe_init);
+
++static int hdmi_vendor_infoframe_length(const struct hdmi_vendor_infoframe *frame)
++{
++ /* for side by side (half) we also need to provide 3D_Ext_Data */
++ if (frame->s3d_struct >= HDMI_3D_STRUCTURE_SIDE_BY_SIDE_HALF)
++ return 6;
++ else if (frame->vic != 0 || frame->s3d_struct != HDMI_3D_STRUCTURE_INVALID)
++ return 5;
++ else
++ return 4;
++}
++
+ /**
+ * hdmi_vendor_infoframe_pack() - write a HDMI vendor infoframe to binary buffer
+ * @frame: HDMI infoframe
+@@ -341,19 +352,11 @@ ssize_t hdmi_vendor_infoframe_pack(struct hdmi_vendor_infoframe *frame,
+ u8 *ptr = buffer;
+ size_t length;
+
+- /* empty info frame */
+- if (frame->vic == 0 && frame->s3d_struct == HDMI_3D_STRUCTURE_INVALID)
+- return -EINVAL;
+-
+ /* only one of those can be supplied */
+ if (frame->vic != 0 && frame->s3d_struct != HDMI_3D_STRUCTURE_INVALID)
+ return -EINVAL;
+
+- /* for side by side (half) we also need to provide 3D_Ext_Data */
+- if (frame->s3d_struct >= HDMI_3D_STRUCTURE_SIDE_BY_SIDE_HALF)
+- frame->length = 6;
+- else
+- frame->length = 5;
++ frame->length = hdmi_vendor_infoframe_length(frame);
+
+ length = HDMI_INFOFRAME_HEADER_SIZE + frame->length;
+
+@@ -372,14 +375,16 @@ ssize_t hdmi_vendor_infoframe_pack(struct hdmi_vendor_infoframe *frame,
+ ptr[5] = 0x0c;
+ ptr[6] = 0x00;
+
+- if (frame->vic) {
+- ptr[7] = 0x1 << 5; /* video format */
+- ptr[8] = frame->vic;
+- } else {
++ if (frame->s3d_struct != HDMI_3D_STRUCTURE_INVALID) {
+ ptr[7] = 0x2 << 5; /* video format */
+ ptr[8] = (frame->s3d_struct & 0xf) << 4;
+ if (frame->s3d_struct >= HDMI_3D_STRUCTURE_SIDE_BY_SIDE_HALF)
+ ptr[9] = (frame->s3d_ext_data & 0xf) << 4;
++ } else if (frame->vic) {
++ ptr[7] = 0x1 << 5; /* video format */
++ ptr[8] = frame->vic;
++ } else {
++ ptr[7] = 0x0 << 5; /* video format */
+ }
+
+ hdmi_infoframe_set_checksum(buffer, length);
+@@ -1165,7 +1170,7 @@ hdmi_vendor_any_infoframe_unpack(union hdmi_vendor_any_infoframe *frame,
+
+ if (ptr[0] != HDMI_INFOFRAME_TYPE_VENDOR ||
+ ptr[1] != 1 ||
+- (ptr[2] != 5 && ptr[2] != 6))
++ (ptr[2] != 4 && ptr[2] != 5 && ptr[2] != 6))
+ return -EINVAL;
+
+ length = ptr[2];
+@@ -1193,16 +1198,22 @@ hdmi_vendor_any_infoframe_unpack(union hdmi_vendor_any_infoframe *frame,
+
+ hvf->length = length;
+
+- if (hdmi_video_format == 0x1) {
+- hvf->vic = ptr[4];
+- } else if (hdmi_video_format == 0x2) {
++ if (hdmi_video_format == 0x2) {
++ if (length != 5 && length != 6)
++ return -EINVAL;
+ hvf->s3d_struct = ptr[4] >> 4;
+ if (hvf->s3d_struct >= HDMI_3D_STRUCTURE_SIDE_BY_SIDE_HALF) {
+- if (length == 6)
+- hvf->s3d_ext_data = ptr[5] >> 4;
+- else
++ if (length != 6)
+ return -EINVAL;
++ hvf->s3d_ext_data = ptr[5] >> 4;
+ }
++ } else if (hdmi_video_format == 0x1) {
++ if (length != 5)
++ return -EINVAL;
++ hvf->vic = ptr[4];
++ } else {
++ if (length != 4)
++ return -EINVAL;
+ }
+
+ return 0;
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index 27baaff96880..a28bba801264 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -423,7 +423,7 @@ static ssize_t btrfs_nodesize_show(struct kobject *kobj,
+ {
+ struct btrfs_fs_info *fs_info = to_fs_info(kobj);
+
+- return snprintf(buf, PAGE_SIZE, "%u\n", fs_info->nodesize);
++ return snprintf(buf, PAGE_SIZE, "%u\n", fs_info->super_copy->nodesize);
+ }
+
+ BTRFS_ATTR(, nodesize, btrfs_nodesize_show);
+@@ -433,7 +433,8 @@ static ssize_t btrfs_sectorsize_show(struct kobject *kobj,
+ {
+ struct btrfs_fs_info *fs_info = to_fs_info(kobj);
+
+- return snprintf(buf, PAGE_SIZE, "%u\n", fs_info->sectorsize);
++ return snprintf(buf, PAGE_SIZE, "%u\n",
++ fs_info->super_copy->sectorsize);
+ }
+
+ BTRFS_ATTR(, sectorsize, btrfs_sectorsize_show);
+@@ -443,7 +444,8 @@ static ssize_t btrfs_clone_alignment_show(struct kobject *kobj,
+ {
+ struct btrfs_fs_info *fs_info = to_fs_info(kobj);
+
+- return snprintf(buf, PAGE_SIZE, "%u\n", fs_info->sectorsize);
++ return snprintf(buf, PAGE_SIZE, "%u\n",
++ fs_info->super_copy->sectorsize);
+ }
+
+ BTRFS_ATTR(, clone_alignment, btrfs_clone_alignment_show);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 10d12b3de001..5a8c2649af2f 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -1723,23 +1723,19 @@ static void update_super_roots(struct btrfs_fs_info *fs_info)
+
+ super = fs_info->super_copy;
+
+- /* update latest btrfs_super_block::chunk_root refs */
+ root_item = &fs_info->chunk_root->root_item;
+- btrfs_set_super_chunk_root(super, root_item->bytenr);
+- btrfs_set_super_chunk_root_generation(super, root_item->generation);
+- btrfs_set_super_chunk_root_level(super, root_item->level);
++ super->chunk_root = root_item->bytenr;
++ super->chunk_root_generation = root_item->generation;
++ super->chunk_root_level = root_item->level;
+
+- /* update latest btrfs_super_block::root refs */
+ root_item = &fs_info->tree_root->root_item;
+- btrfs_set_super_root(super, root_item->bytenr);
+- btrfs_set_super_generation(super, root_item->generation);
+- btrfs_set_super_root_level(super, root_item->level);
+-
++ super->root = root_item->bytenr;
++ super->generation = root_item->generation;
++ super->root_level = root_item->level;
+ if (btrfs_test_opt(fs_info, SPACE_CACHE))
+- btrfs_set_super_cache_generation(super, root_item->generation);
++ super->cache_generation = root_item->generation;
+ if (test_bit(BTRFS_FS_UPDATE_UUID_TREE_GEN, &fs_info->flags))
+- btrfs_set_super_uuid_tree_generation(super,
+- root_item->generation);
++ super->uuid_tree_generation = root_item->generation;
+ }
+
+ int btrfs_transaction_in_commit(struct btrfs_fs_info *info)
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index 8c50d6878aa5..f13e986696f7 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -305,21 +305,22 @@ static void gfs2_metapath_ra(struct gfs2_glock *gl,
+ }
+ }
+
+-/**
+- * lookup_mp_height - helper function for lookup_metapath
+- * @ip: the inode
+- * @mp: the metapath
+- * @h: the height which needs looking up
+- */
+-static int lookup_mp_height(struct gfs2_inode *ip, struct metapath *mp, int h)
++static int __fillup_metapath(struct gfs2_inode *ip, struct metapath *mp,
++ unsigned int x, unsigned int h)
+ {
+- __be64 *ptr = metapointer(h, mp);
+- u64 dblock = be64_to_cpu(*ptr);
+-
+- if (!dblock)
+- return h + 1;
++ for (; x < h; x++) {
++ __be64 *ptr = metapointer(x, mp);
++ u64 dblock = be64_to_cpu(*ptr);
++ int ret;
+
+- return gfs2_meta_indirect_buffer(ip, h + 1, dblock, &mp->mp_bh[h + 1]);
++ if (!dblock)
++ break;
++ ret = gfs2_meta_indirect_buffer(ip, x + 1, dblock, &mp->mp_bh[x + 1]);
++ if (ret)
++ return ret;
++ }
++ mp->mp_aheight = x + 1;
++ return 0;
+ }
+
+ /**
+@@ -336,25 +337,12 @@ static int lookup_mp_height(struct gfs2_inode *ip, struct metapath *mp, int h)
+ * at which it found the unallocated block. Blocks which are found are
+ * added to the mp->mp_bh[] list.
+ *
+- * Returns: error or height of metadata tree
++ * Returns: error
+ */
+
+ static int lookup_metapath(struct gfs2_inode *ip, struct metapath *mp)
+ {
+- unsigned int end_of_metadata = ip->i_height - 1;
+- unsigned int x;
+- int ret;
+-
+- for (x = 0; x < end_of_metadata; x++) {
+- ret = lookup_mp_height(ip, mp, x);
+- if (ret)
+- goto out;
+- }
+-
+- ret = ip->i_height;
+-out:
+- mp->mp_aheight = ret;
+- return ret;
++ return __fillup_metapath(ip, mp, 0, ip->i_height - 1);
+ }
+
+ /**
+@@ -365,25 +353,21 @@ static int lookup_metapath(struct gfs2_inode *ip, struct metapath *mp)
+ *
+ * Similar to lookup_metapath, but does lookups for a range of heights
+ *
+- * Returns: error or height of metadata tree
++ * Returns: error
+ */
+
+ static int fillup_metapath(struct gfs2_inode *ip, struct metapath *mp, int h)
+ {
+- unsigned int start_h = h - 1;
+- int ret;
++ unsigned int x = 0;
+
+ if (h) {
+ /* find the first buffer we need to look up. */
+- while (start_h > 0 && mp->mp_bh[start_h] == NULL)
+- start_h--;
+- for (; start_h < h; start_h++) {
+- ret = lookup_mp_height(ip, mp, start_h);
+- if (ret)
+- return ret;
++ for (x = h - 1; x > 0; x--) {
++ if (mp->mp_bh[x])
++ break;
+ }
+ }
+- return ip->i_height;
++ return __fillup_metapath(ip, mp, x, h);
+ }
+
+ static inline void release_metapath(struct metapath *mp)
+@@ -790,7 +774,7 @@ int gfs2_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
+ goto do_alloc;
+
+ ret = lookup_metapath(ip, &mp);
+- if (ret < 0)
++ if (ret)
+ goto out_release;
+
+ if (mp.mp_aheight != ip->i_height)
+@@ -827,9 +811,6 @@ int gfs2_iomap_begin(struct inode *inode, loff_t pos, loff_t length,
+ iomap->length = hole_size(inode, lblock, &mp);
+ else
+ iomap->length = size - pos;
+- } else {
+- if (height <= ip->i_height)
+- iomap->length = hole_size(inode, lblock, &mp);
+ }
+ goto out_release;
+ }
+@@ -1339,7 +1320,9 @@ static int trunc_dealloc(struct gfs2_inode *ip, u64 newsize)
+
+ mp.mp_bh[0] = dibh;
+ ret = lookup_metapath(ip, &mp);
+- if (ret == ip->i_height)
++ if (ret)
++ goto out_metapath;
++ if (mp.mp_aheight == ip->i_height)
+ state = DEALLOC_MP_FULL; /* We have a complete metapath */
+ else
+ state = DEALLOC_FILL_MP; /* deal with partial metapath */
+@@ -1435,16 +1418,16 @@ static int trunc_dealloc(struct gfs2_inode *ip, u64 newsize)
+ case DEALLOC_FILL_MP:
+ /* Fill the buffers out to the current height. */
+ ret = fillup_metapath(ip, &mp, mp_h);
+- if (ret < 0)
++ if (ret)
+ goto out;
+
+ /* If buffers found for the entire strip height */
+- if ((ret == ip->i_height) && (mp_h == strip_h)) {
++ if (mp.mp_aheight - 1 == strip_h) {
+ state = DEALLOC_MP_FULL;
+ break;
+ }
+- if (ret < ip->i_height) /* We have a partial height */
+- mp_h = ret - 1;
++ if (mp.mp_aheight < ip->i_height) /* We have a partial height */
++ mp_h = mp.mp_aheight - 1;
+
+ /* If we find a non-null block pointer, crawl a bit
+ higher up in the metapath and try again, otherwise
+diff --git a/fs/namei.c b/fs/namei.c
+index 9cc91fb7f156..4e3fc58dae72 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -1133,9 +1133,6 @@ static int follow_automount(struct path *path, struct nameidata *nd,
+ path->dentry->d_inode)
+ return -EISDIR;
+
+- if (path->dentry->d_sb->s_user_ns != &init_user_ns)
+- return -EACCES;
+-
+ nd->total_link_count++;
+ if (nd->total_link_count >= 40)
+ return -ELOOP;
+diff --git a/include/linux/dma-fence-array.h b/include/linux/dma-fence-array.h
+index 332a5420243c..bc8940ca280d 100644
+--- a/include/linux/dma-fence-array.h
++++ b/include/linux/dma-fence-array.h
+@@ -21,6 +21,7 @@
+ #define __LINUX_DMA_FENCE_ARRAY_H
+
+ #include <linux/dma-fence.h>
++#include <linux/irq_work.h>
+
+ /**
+ * struct dma_fence_array_cb - callback helper for fence array
+@@ -47,6 +48,8 @@ struct dma_fence_array {
+ unsigned num_fences;
+ atomic_t num_pending;
+ struct dma_fence **fences;
++
++ struct irq_work work;
+ };
+
+ extern const struct dma_fence_ops dma_fence_array_ops;
+diff --git a/include/linux/usb/quirks.h b/include/linux/usb/quirks.h
+index f1fcec2fd5f8..b7a99ce56bc9 100644
+--- a/include/linux/usb/quirks.h
++++ b/include/linux/usb/quirks.h
+@@ -63,4 +63,7 @@
+ */
+ #define USB_QUIRK_DISCONNECT_SUSPEND BIT(12)
+
++/* Device needs a pause after every control message. */
++#define USB_QUIRK_DELAY_CTRL_MSG BIT(13)
++
+ #endif /* __LINUX_USB_QUIRKS_H */
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 7125ddbb24df..6b4a72e5f874 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -279,7 +279,7 @@ static void print_verifier_state(struct bpf_verifier_env *env,
+ for (i = 0; i < state->allocated_stack / BPF_REG_SIZE; i++) {
+ if (state->stack[i].slot_type[0] == STACK_SPILL)
+ verbose(env, " fp%d=%s",
+- -MAX_BPF_STACK + i * BPF_REG_SIZE,
++ (-i - 1) * BPF_REG_SIZE,
+ reg_type_str[state->stack[i].spilled_ptr.type]);
+ }
+ verbose(env, "\n");
+diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
+index f24582d4dad3..6dca260eeccf 100644
+--- a/kernel/locking/locktorture.c
++++ b/kernel/locking/locktorture.c
+@@ -715,8 +715,7 @@ static void __torture_print_stats(char *page,
+ {
+ bool fail = 0;
+ int i, n_stress;
+- long max = 0;
+- long min = statp[0].n_lock_acquired;
++ long max = 0, min = statp ? statp[0].n_lock_acquired : 0;
+ long long sum = 0;
+
+ n_stress = write ? cxt.nrealwriters_stress : cxt.nrealreaders_stress;
+@@ -823,7 +822,7 @@ static void lock_torture_cleanup(void)
+ * such, only perform the underlying torture-specific cleanups,
+ * and avoid anything related to locktorture.
+ */
+- if (!cxt.lwsa)
++ if (!cxt.lwsa && !cxt.lrsa)
+ goto end;
+
+ if (writer_tasks) {
+@@ -898,6 +897,13 @@ static int __init lock_torture_init(void)
+ firsterr = -EINVAL;
+ goto unwind;
+ }
++
++ if (nwriters_stress == 0 && nreaders_stress == 0) {
++ pr_alert("lock-torture: must run at least one locking thread\n");
++ firsterr = -EINVAL;
++ goto unwind;
++ }
++
+ if (cxt.cur_ops->init)
+ cxt.cur_ops->init();
+
+@@ -921,17 +927,19 @@ static int __init lock_torture_init(void)
+ #endif
+
+ /* Initialize the statistics so that each run gets its own numbers. */
++ if (nwriters_stress) {
++ lock_is_write_held = 0;
++ cxt.lwsa = kmalloc(sizeof(*cxt.lwsa) * cxt.nrealwriters_stress, GFP_KERNEL);
++ if (cxt.lwsa == NULL) {
++ VERBOSE_TOROUT_STRING("cxt.lwsa: Out of memory");
++ firsterr = -ENOMEM;
++ goto unwind;
++ }
+
+- lock_is_write_held = 0;
+- cxt.lwsa = kmalloc(sizeof(*cxt.lwsa) * cxt.nrealwriters_stress, GFP_KERNEL);
+- if (cxt.lwsa == NULL) {
+- VERBOSE_TOROUT_STRING("cxt.lwsa: Out of memory");
+- firsterr = -ENOMEM;
+- goto unwind;
+- }
+- for (i = 0; i < cxt.nrealwriters_stress; i++) {
+- cxt.lwsa[i].n_lock_fail = 0;
+- cxt.lwsa[i].n_lock_acquired = 0;
++ for (i = 0; i < cxt.nrealwriters_stress; i++) {
++ cxt.lwsa[i].n_lock_fail = 0;
++ cxt.lwsa[i].n_lock_acquired = 0;
++ }
+ }
+
+ if (cxt.cur_ops->readlock) {
+@@ -948,19 +956,21 @@ static int __init lock_torture_init(void)
+ cxt.nrealreaders_stress = cxt.nrealwriters_stress;
+ }
+
+- lock_is_read_held = 0;
+- cxt.lrsa = kmalloc(sizeof(*cxt.lrsa) * cxt.nrealreaders_stress, GFP_KERNEL);
+- if (cxt.lrsa == NULL) {
+- VERBOSE_TOROUT_STRING("cxt.lrsa: Out of memory");
+- firsterr = -ENOMEM;
+- kfree(cxt.lwsa);
+- cxt.lwsa = NULL;
+- goto unwind;
+- }
+-
+- for (i = 0; i < cxt.nrealreaders_stress; i++) {
+- cxt.lrsa[i].n_lock_fail = 0;
+- cxt.lrsa[i].n_lock_acquired = 0;
++ if (nreaders_stress) {
++ lock_is_read_held = 0;
++ cxt.lrsa = kmalloc(sizeof(*cxt.lrsa) * cxt.nrealreaders_stress, GFP_KERNEL);
++ if (cxt.lrsa == NULL) {
++ VERBOSE_TOROUT_STRING("cxt.lrsa: Out of memory");
++ firsterr = -ENOMEM;
++ kfree(cxt.lwsa);
++ cxt.lwsa = NULL;
++ goto unwind;
++ }
++
++ for (i = 0; i < cxt.nrealreaders_stress; i++) {
++ cxt.lrsa[i].n_lock_fail = 0;
++ cxt.lrsa[i].n_lock_acquired = 0;
++ }
+ }
+ }
+
+@@ -990,12 +1000,14 @@ static int __init lock_torture_init(void)
+ goto unwind;
+ }
+
+- writer_tasks = kzalloc(cxt.nrealwriters_stress * sizeof(writer_tasks[0]),
+- GFP_KERNEL);
+- if (writer_tasks == NULL) {
+- VERBOSE_TOROUT_ERRSTRING("writer_tasks: Out of memory");
+- firsterr = -ENOMEM;
+- goto unwind;
++ if (nwriters_stress) {
++ writer_tasks = kzalloc(cxt.nrealwriters_stress * sizeof(writer_tasks[0]),
++ GFP_KERNEL);
++ if (writer_tasks == NULL) {
++ VERBOSE_TOROUT_ERRSTRING("writer_tasks: Out of memory");
++ firsterr = -ENOMEM;
++ goto unwind;
++ }
+ }
+
+ if (cxt.cur_ops->readlock) {
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index a7bf32aabfda..5a31a85bbd84 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -508,7 +508,8 @@ void resched_cpu(int cpu)
+ unsigned long flags;
+
+ raw_spin_lock_irqsave(&rq->lock, flags);
+- resched_curr(rq);
++ if (cpu_online(cpu) || cpu == smp_processor_id())
++ resched_curr(rq);
+ raw_spin_unlock_irqrestore(&rq->lock, flags);
+ }
+
+diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
+index 3401f588c916..89a086ed2b16 100644
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -2218,7 +2218,7 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
+ if (p->nr_cpus_allowed > 1 && rq->rt.overloaded)
+ queue_push_tasks(rq);
+ #endif /* CONFIG_SMP */
+- if (p->prio < rq->curr->prio)
++ if (p->prio < rq->curr->prio && cpu_online(cpu_of(rq)))
+ resched_curr(rq);
+ }
+ }
+diff --git a/lib/usercopy.c b/lib/usercopy.c
+index 15e2e6fb060e..3744b2a8e591 100644
+--- a/lib/usercopy.c
++++ b/lib/usercopy.c
+@@ -20,7 +20,7 @@ EXPORT_SYMBOL(_copy_from_user);
+ #endif
+
+ #ifndef INLINE_COPY_TO_USER
+-unsigned long _copy_to_user(void *to, const void __user *from, unsigned long n)
++unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
+ {
+ might_fault();
+ if (likely(access_ok(VERIFY_WRITE, to, n))) {
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 13b16f90e1cf..0fa844293710 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -1474,7 +1474,7 @@ static void ieee80211_setup_sdata(struct ieee80211_sub_if_data *sdata,
+ break;
+ case NL80211_IFTYPE_UNSPECIFIED:
+ case NUM_NL80211_IFTYPES:
+- BUG();
++ WARN_ON(1);
+ break;
+ }
+
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index cac003fddf3e..b8db13708370 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -762,10 +762,6 @@ struct Qdisc *dev_graft_qdisc(struct netdev_queue *dev_queue,
+ root_lock = qdisc_lock(oqdisc);
+ spin_lock_bh(root_lock);
+
+- /* Prune old scheduler */
+- if (oqdisc && refcount_read(&oqdisc->refcnt) <= 1)
+- qdisc_reset(oqdisc);
+-
+ /* ... and graft new one */
+ if (qdisc == NULL)
+ qdisc = &noop_qdisc;
+@@ -916,6 +912,16 @@ static bool some_qdisc_is_busy(struct net_device *dev)
+ return false;
+ }
+
++static void dev_qdisc_reset(struct net_device *dev,
++ struct netdev_queue *dev_queue,
++ void *none)
++{
++ struct Qdisc *qdisc = dev_queue->qdisc_sleeping;
++
++ if (qdisc)
++ qdisc_reset(qdisc);
++}
++
+ /**
+ * dev_deactivate_many - deactivate transmissions on several devices
+ * @head: list of devices to deactivate
+@@ -926,7 +932,6 @@ static bool some_qdisc_is_busy(struct net_device *dev)
+ void dev_deactivate_many(struct list_head *head)
+ {
+ struct net_device *dev;
+- bool sync_needed = false;
+
+ list_for_each_entry(dev, head, close_list) {
+ netdev_for_each_tx_queue(dev, dev_deactivate_queue,
+@@ -936,20 +941,25 @@ void dev_deactivate_many(struct list_head *head)
+ &noop_qdisc);
+
+ dev_watchdog_down(dev);
+- sync_needed |= !dev->dismantle;
+ }
+
+ /* Wait for outstanding qdisc-less dev_queue_xmit calls.
+ * This is avoided if all devices are in dismantle phase :
+ * Caller will call synchronize_net() for us
+ */
+- if (sync_needed)
+- synchronize_net();
++ synchronize_net();
+
+ /* Wait for outstanding qdisc_run calls. */
+- list_for_each_entry(dev, head, close_list)
++ list_for_each_entry(dev, head, close_list) {
+ while (some_qdisc_is_busy(dev))
+ yield();
++ /* The new qdisc is assigned at this point so we can safely
++ * unwind stale skb lists and qdisc statistics
++ */
++ netdev_for_each_tx_queue(dev, dev_qdisc_reset, NULL);
++ if (dev_ingress_queue(dev))
++ dev_qdisc_reset(dev, dev_ingress_queue(dev), NULL);
++ }
+ }
+
+ void dev_deactivate(struct net_device *dev)
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index bd6b0e7a0ee4..c135ed9bc8c4 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -1256,7 +1256,7 @@ EXPORT_SYMBOL(xfrm_policy_delete);
+
+ int xfrm_sk_policy_insert(struct sock *sk, int dir, struct xfrm_policy *pol)
+ {
+- struct net *net = xp_net(pol);
++ struct net *net = sock_net(sk);
+ struct xfrm_policy *old_pol;
+
+ #ifdef CONFIG_XFRM_SUB_POLICY
+diff --git a/net/xfrm/xfrm_replay.c b/net/xfrm/xfrm_replay.c
+index 8b23c5bcf8e8..02501817227b 100644
+--- a/net/xfrm/xfrm_replay.c
++++ b/net/xfrm/xfrm_replay.c
+@@ -666,7 +666,7 @@ static int xfrm_replay_overflow_offload_esn(struct xfrm_state *x, struct sk_buff
+ if (unlikely(oseq < replay_esn->oseq)) {
+ XFRM_SKB_CB(skb)->seq.output.hi = ++oseq_hi;
+ xo->seq.hi = oseq_hi;
+-
++ replay_esn->oseq_hi = oseq_hi;
+ if (replay_esn->oseq_hi == 0) {
+ replay_esn->oseq--;
+ replay_esn->oseq_hi--;
+@@ -678,7 +678,6 @@ static int xfrm_replay_overflow_offload_esn(struct xfrm_state *x, struct sk_buff
+ }
+
+ replay_esn->oseq = oseq;
+- replay_esn->oseq_hi = oseq_hi;
+
+ if (xfrm_aevent_is_on(net))
+ x->repl->notify(x, XFRM_REPLAY_UPDATE);
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index a3785f538018..54e21f19d722 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -2056,6 +2056,13 @@ int xfrm_user_policy(struct sock *sk, int optname, u8 __user *optval, int optlen
+ struct xfrm_mgr *km;
+ struct xfrm_policy *pol = NULL;
+
++ if (!optval && !optlen) {
++ xfrm_sk_policy_insert(sk, XFRM_POLICY_IN, NULL);
++ xfrm_sk_policy_insert(sk, XFRM_POLICY_OUT, NULL);
++ __sk_dst_reset(sk);
++ return 0;
++ }
++
+ if (optlen <= 0 || optlen > PAGE_SIZE)
+ return -EMSGSIZE;
+
+diff --git a/security/integrity/ima/ima_appraise.c b/security/integrity/ima/ima_appraise.c
+index 65fbcf3c32c7..d32e6a1d931a 100644
+--- a/security/integrity/ima/ima_appraise.c
++++ b/security/integrity/ima/ima_appraise.c
+@@ -223,7 +223,8 @@ int ima_appraise_measurement(enum ima_hooks func,
+ if (opened & FILE_CREATED)
+ iint->flags |= IMA_NEW_FILE;
+ if ((iint->flags & IMA_NEW_FILE) &&
+- !(iint->flags & IMA_DIGSIG_REQUIRED))
++ (!(iint->flags & IMA_DIGSIG_REQUIRED) ||
++ (inode->i_size == 0)))
+ status = INTEGRITY_PASS;
+ goto out;
+ }
+diff --git a/sound/soc/codecs/rt5651.c b/sound/soc/codecs/rt5651.c
+index 831b297978a4..45a73049cf64 100644
+--- a/sound/soc/codecs/rt5651.c
++++ b/sound/soc/codecs/rt5651.c
+@@ -1722,6 +1722,7 @@ static const struct regmap_config rt5651_regmap = {
+ .num_reg_defaults = ARRAY_SIZE(rt5651_reg),
+ .ranges = rt5651_ranges,
+ .num_ranges = ARRAY_SIZE(rt5651_ranges),
++ .use_single_rw = true,
+ };
+
+ #if defined(CONFIG_OF)
+diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c
+index f2bb4feba3b6..0b11a2e01b2f 100644
+--- a/sound/soc/codecs/sgtl5000.c
++++ b/sound/soc/codecs/sgtl5000.c
+@@ -871,15 +871,26 @@ static int sgtl5000_pcm_hw_params(struct snd_pcm_substream *substream,
+ static int sgtl5000_set_bias_level(struct snd_soc_codec *codec,
+ enum snd_soc_bias_level level)
+ {
++ struct sgtl5000_priv *sgtl = snd_soc_codec_get_drvdata(codec);
++ int ret;
++
+ switch (level) {
+ case SND_SOC_BIAS_ON:
+ case SND_SOC_BIAS_PREPARE:
+ case SND_SOC_BIAS_STANDBY:
++ regcache_cache_only(sgtl->regmap, false);
++ ret = regcache_sync(sgtl->regmap);
++ if (ret) {
++ regcache_cache_only(sgtl->regmap, true);
++ return ret;
++ }
++
+ snd_soc_update_bits(codec, SGTL5000_CHIP_ANA_POWER,
+ SGTL5000_REFTOP_POWERUP,
+ SGTL5000_REFTOP_POWERUP);
+ break;
+ case SND_SOC_BIAS_OFF:
++ regcache_cache_only(sgtl->regmap, true);
+ snd_soc_update_bits(codec, SGTL5000_CHIP_ANA_POWER,
+ SGTL5000_REFTOP_POWERUP, 0);
+ break;
+diff --git a/sound/soc/codecs/wm_adsp.c b/sound/soc/codecs/wm_adsp.c
+index 66e32f5d2917..989d093abda7 100644
+--- a/sound/soc/codecs/wm_adsp.c
++++ b/sound/soc/codecs/wm_adsp.c
+@@ -1204,12 +1204,14 @@ static int wmfw_add_ctl(struct wm_adsp *dsp, struct wm_coeff_ctl *ctl)
+ kcontrol->put = wm_coeff_put_acked;
+ break;
+ default:
+- kcontrol->get = wm_coeff_get;
+- kcontrol->put = wm_coeff_put;
+-
+- ctl->bytes_ext.max = ctl->len;
+- ctl->bytes_ext.get = wm_coeff_tlv_get;
+- ctl->bytes_ext.put = wm_coeff_tlv_put;
++ if (kcontrol->access & SNDRV_CTL_ELEM_ACCESS_TLV_CALLBACK) {
++ ctl->bytes_ext.max = ctl->len;
++ ctl->bytes_ext.get = wm_coeff_tlv_get;
++ ctl->bytes_ext.put = wm_coeff_tlv_put;
++ } else {
++ kcontrol->get = wm_coeff_get;
++ kcontrol->put = wm_coeff_put;
++ }
+ break;
+ }
+
+diff --git a/sound/soc/nuc900/nuc900-ac97.c b/sound/soc/nuc900/nuc900-ac97.c
+index b6615affe571..fde974d52bb2 100644
+--- a/sound/soc/nuc900/nuc900-ac97.c
++++ b/sound/soc/nuc900/nuc900-ac97.c
+@@ -67,7 +67,7 @@ static unsigned short nuc900_ac97_read(struct snd_ac97 *ac97,
+
+ /* polling the AC_R_FINISH */
+ while (!(AUDIO_READ(nuc900_audio->mmio + ACTL_ACCON) & AC_R_FINISH)
+- && timeout--)
++ && --timeout)
+ mdelay(1);
+
+ if (!timeout) {
+@@ -121,7 +121,7 @@ static void nuc900_ac97_write(struct snd_ac97 *ac97, unsigned short reg,
+
+ /* polling the AC_W_FINISH */
+ while ((AUDIO_READ(nuc900_audio->mmio + ACTL_ACCON) & AC_W_FINISH)
+- && timeout--)
++ && --timeout)
+ mdelay(1);
+
+ if (!timeout)
+diff --git a/sound/soc/sunxi/sun4i-i2s.c b/sound/soc/sunxi/sun4i-i2s.c
+index 04f92583a969..b4af5ce78ecb 100644
+--- a/sound/soc/sunxi/sun4i-i2s.c
++++ b/sound/soc/sunxi/sun4i-i2s.c
+@@ -104,7 +104,7 @@
+
+ #define SUN8I_I2S_CHAN_CFG_REG 0x30
+ #define SUN8I_I2S_CHAN_CFG_RX_SLOT_NUM_MASK GENMASK(6, 4)
+-#define SUN8I_I2S_CHAN_CFG_RX_SLOT_NUM(chan) (chan - 1)
++#define SUN8I_I2S_CHAN_CFG_RX_SLOT_NUM(chan) ((chan - 1) << 4)
+ #define SUN8I_I2S_CHAN_CFG_TX_SLOT_NUM_MASK GENMASK(2, 0)
+ #define SUN8I_I2S_CHAN_CFG_TX_SLOT_NUM(chan) (chan - 1)
+
+diff --git a/tools/perf/arch/s390/annotate/instructions.c b/tools/perf/arch/s390/annotate/instructions.c
+index e0e466c650df..8c72b44444cb 100644
+--- a/tools/perf/arch/s390/annotate/instructions.c
++++ b/tools/perf/arch/s390/annotate/instructions.c
+@@ -18,7 +18,8 @@ static struct ins_ops *s390__associate_ins_ops(struct arch *arch, const char *na
+ if (!strcmp(name, "br"))
+ ops = &ret_ops;
+
+- arch__associate_ins_ops(arch, name, ops);
++ if (ops)
++ arch__associate_ins_ops(arch, name, ops);
+ return ops;
+ }
+
+diff --git a/tools/perf/util/annotate.c b/tools/perf/util/annotate.c
+index 3369c7830260..6a631acf8bb7 100644
+--- a/tools/perf/util/annotate.c
++++ b/tools/perf/util/annotate.c
+@@ -322,6 +322,8 @@ static int comment__symbol(char *raw, char *comment, u64 *addrp, char **namep)
+ return 0;
+
+ *addrp = strtoull(comment, &endptr, 16);
++ if (endptr == comment)
++ return 0;
+ name = strchr(endptr, '<');
+ if (name == NULL)
+ return -1;
+@@ -435,8 +437,8 @@ static int mov__parse(struct arch *arch, struct ins_operands *ops, struct map *m
+ return 0;
+
+ comment = ltrim(comment);
+- comment__symbol(ops->source.raw, comment, &ops->source.addr, &ops->source.name);
+- comment__symbol(ops->target.raw, comment, &ops->target.addr, &ops->target.name);
++ comment__symbol(ops->source.raw, comment + 1, &ops->source.addr, &ops->source.name);
++ comment__symbol(ops->target.raw, comment + 1, &ops->target.addr, &ops->target.name);
+
+ return 0;
+
+@@ -480,7 +482,7 @@ static int dec__parse(struct arch *arch __maybe_unused, struct ins_operands *ops
+ return 0;
+
+ comment = ltrim(comment);
+- comment__symbol(ops->target.raw, comment, &ops->target.addr, &ops->target.name);
++ comment__symbol(ops->target.raw, comment + 1, &ops->target.addr, &ops->target.name);
+
+ return 0;
+ }
+diff --git a/tools/perf/util/session.c b/tools/perf/util/session.c
+index 5c412310f266..1ea5b95ee93e 100644
+--- a/tools/perf/util/session.c
++++ b/tools/perf/util/session.c
+@@ -1350,10 +1350,11 @@ static s64 perf_session__process_user_event(struct perf_session *session,
+ {
+ struct ordered_events *oe = &session->ordered_events;
+ struct perf_tool *tool = session->tool;
++ struct perf_sample sample = { .time = 0, };
+ int fd = perf_data__fd(session->data);
+ int err;
+
+- dump_event(session->evlist, event, file_offset, NULL);
++ dump_event(session->evlist, event, file_offset, &sample);
+
+ /* These events are processed right away */
+ switch (event->header.type) {
+diff --git a/tools/testing/selftests/firmware/fw_filesystem.sh b/tools/testing/selftests/firmware/fw_filesystem.sh
+index b1f20fef36c7..f9508e1a4058 100755
+--- a/tools/testing/selftests/firmware/fw_filesystem.sh
++++ b/tools/testing/selftests/firmware/fw_filesystem.sh
+@@ -45,7 +45,10 @@ test_finish()
+ if [ "$HAS_FW_LOADER_USER_HELPER" = "yes" ]; then
+ echo "$OLD_TIMEOUT" >/sys/class/firmware/timeout
+ fi
+- echo -n "$OLD_PATH" >/sys/module/firmware_class/parameters/path
++ if [ "$OLD_FWPATH" = "" ]; then
++ OLD_FWPATH=" "
++ fi
++ echo -n "$OLD_FWPATH" >/sys/module/firmware_class/parameters/path
+ rm -f "$FW"
+ rmdir "$FWPATH"
+ }
+diff --git a/tools/testing/selftests/rcutorture/bin/configinit.sh b/tools/testing/selftests/rcutorture/bin/configinit.sh
+index 51f66a7ce876..c15f270e121d 100755
+--- a/tools/testing/selftests/rcutorture/bin/configinit.sh
++++ b/tools/testing/selftests/rcutorture/bin/configinit.sh
+@@ -51,7 +51,7 @@ then
+ mkdir $builddir
+ fi
+ else
+- echo Bad build directory: \"$builddir\"
++ echo Bad build directory: \"$buildloc\"
+ exit 2
+ fi
+ fi
+diff --git a/tools/usb/usbip/src/usbipd.c b/tools/usb/usbip/src/usbipd.c
+index 009afb4a3aae..c6dad2a13c80 100644
+--- a/tools/usb/usbip/src/usbipd.c
++++ b/tools/usb/usbip/src/usbipd.c
+@@ -456,7 +456,7 @@ static void set_signal(void)
+ sigaction(SIGTERM, &act, NULL);
+ sigaction(SIGINT, &act, NULL);
+ act.sa_handler = SIG_IGN;
+- sigaction(SIGCLD, &act, NULL);
++ sigaction(SIGCHLD, &act, NULL);
+ }
+
+ static const char *pid_file;
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-03-21 14:42 Mike Pagano
From: Mike Pagano @ 2018-03-21 14:42 UTC
To: gentoo-commits
commit: eaeb5c4a6f5cf7ae7aa3f783c83f06ce942641a9
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 21 14:42:31 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 21 14:42:31 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=eaeb5c4a
Linux patch 4.15.12
0000_README | 4 +
1011_linux-4.15.12.patch | 1976 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1980 insertions(+)
diff --git a/0000_README b/0000_README
index 6b57403..2ad5c9b 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch: 1010_linux-4.15.11.patch
From: http://www.kernel.org
Desc: Linux 4.15.11
+Patch: 1011_linux-4.15.12.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.12
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1011_linux-4.15.12.patch b/1011_linux-4.15.12.patch
new file mode 100644
index 0000000..b55e91e
--- /dev/null
+++ b/1011_linux-4.15.12.patch
@@ -0,0 +1,1976 @@
+diff --git a/Documentation/devicetree/bindings/usb/dwc2.txt b/Documentation/devicetree/bindings/usb/dwc2.txt
+index e64d903bcbe8..46da5f184460 100644
+--- a/Documentation/devicetree/bindings/usb/dwc2.txt
++++ b/Documentation/devicetree/bindings/usb/dwc2.txt
+@@ -19,7 +19,7 @@ Required properties:
+ configured in FS mode;
+ - "st,stm32f4x9-hsotg": The DWC2 USB HS controller instance in STM32F4x9 SoCs
+ configured in HS mode;
+- - "st,stm32f7xx-hsotg": The DWC2 USB HS controller instance in STM32F7xx SoCs
++ - "st,stm32f7-hsotg": The DWC2 USB HS controller instance in STM32F7 SoCs
+ configured in HS mode;
+ - reg : Should contain 1 register range (address and length)
+ - interrupts : Should contain 1 interrupt
+diff --git a/Makefile b/Makefile
+index 74c0f5e8dd55..2e6ba1553dff 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/parisc/kernel/cache.c b/arch/parisc/kernel/cache.c
+index 79089778725b..e3b45546d589 100644
+--- a/arch/parisc/kernel/cache.c
++++ b/arch/parisc/kernel/cache.c
+@@ -543,7 +543,8 @@ void flush_cache_mm(struct mm_struct *mm)
+ rp3440, etc. So, avoid it if the mm isn't too big. */
+ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
+ mm_total_size(mm) >= parisc_cache_flush_threshold) {
+- flush_tlb_all();
++ if (mm->context)
++ flush_tlb_all();
+ flush_cache_all();
+ return;
+ }
+@@ -571,6 +572,8 @@ void flush_cache_mm(struct mm_struct *mm)
+ pfn = pte_pfn(*ptep);
+ if (!pfn_valid(pfn))
+ continue;
++ if (unlikely(mm->context))
++ flush_tlb_page(vma, addr);
+ __flush_cache_page(vma, addr, PFN_PHYS(pfn));
+ }
+ }
+@@ -579,26 +582,46 @@ void flush_cache_mm(struct mm_struct *mm)
+ void flush_cache_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+ {
++ pgd_t *pgd;
++ unsigned long addr;
++
+ if ((!IS_ENABLED(CONFIG_SMP) || !arch_irqs_disabled()) &&
+ end - start >= parisc_cache_flush_threshold) {
+- flush_tlb_range(vma, start, end);
++ if (vma->vm_mm->context)
++ flush_tlb_range(vma, start, end);
+ flush_cache_all();
+ return;
+ }
+
+- flush_user_dcache_range_asm(start, end);
+- if (vma->vm_flags & VM_EXEC)
+- flush_user_icache_range_asm(start, end);
+- flush_tlb_range(vma, start, end);
++ if (vma->vm_mm->context == mfsp(3)) {
++ flush_user_dcache_range_asm(start, end);
++ if (vma->vm_flags & VM_EXEC)
++ flush_user_icache_range_asm(start, end);
++ flush_tlb_range(vma, start, end);
++ return;
++ }
++
++ pgd = vma->vm_mm->pgd;
++ for (addr = vma->vm_start; addr < vma->vm_end; addr += PAGE_SIZE) {
++ unsigned long pfn;
++ pte_t *ptep = get_ptep(pgd, addr);
++ if (!ptep)
++ continue;
++ pfn = pte_pfn(*ptep);
++ if (pfn_valid(pfn)) {
++ if (unlikely(vma->vm_mm->context))
++ flush_tlb_page(vma, addr);
++ __flush_cache_page(vma, addr, PFN_PHYS(pfn));
++ }
++ }
+ }
+
+ void
+ flush_cache_page(struct vm_area_struct *vma, unsigned long vmaddr, unsigned long pfn)
+ {
+- BUG_ON(!vma->vm_mm->context);
+-
+ if (pfn_valid(pfn)) {
+- flush_tlb_page(vma, vmaddr);
++ if (likely(vma->vm_mm->context))
++ flush_tlb_page(vma, vmaddr);
+ __flush_cache_page(vma, vmaddr, PFN_PHYS(pfn));
+ }
+ }
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index 66c14347c502..23a65439c37c 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -314,6 +314,7 @@
+ #define X86_FEATURE_VPCLMULQDQ (16*32+10) /* Carry-Less Multiplication Double Quadword */
+ #define X86_FEATURE_AVX512_VNNI (16*32+11) /* Vector Neural Network Instructions */
+ #define X86_FEATURE_AVX512_BITALG (16*32+12) /* Support for VPOPCNT[B,W] and VPSHUF-BITQMB instructions */
++#define X86_FEATURE_TME (16*32+13) /* Intel Total Memory Encryption */
+ #define X86_FEATURE_AVX512_VPOPCNTDQ (16*32+14) /* POPCNT for vectors of DW/QW */
+ #define X86_FEATURE_LA57 (16*32+16) /* 5-level page tables */
+ #define X86_FEATURE_RDPID (16*32+22) /* RDPID instruction */
+@@ -326,6 +327,7 @@
+ /* Intel-defined CPU features, CPUID level 0x00000007:0 (EDX), word 18 */
+ #define X86_FEATURE_AVX512_4VNNIW (18*32+ 2) /* AVX-512 Neural Network Instructions */
+ #define X86_FEATURE_AVX512_4FMAPS (18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
++#define X86_FEATURE_PCONFIG (18*32+18) /* Intel PCONFIG */
+ #define X86_FEATURE_SPEC_CTRL (18*32+26) /* "" Speculation Control (IBRS + IBPB) */
+ #define X86_FEATURE_INTEL_STIBP (18*32+27) /* "" Single Thread Indirect Branch Predictors */
+ #define X86_FEATURE_ARCH_CAPABILITIES (18*32+29) /* IA32_ARCH_CAPABILITIES MSR (Intel) */
+diff --git a/arch/x86/include/asm/nospec-branch.h b/arch/x86/include/asm/nospec-branch.h
+index d0dabeae0505..f928ad9b143f 100644
+--- a/arch/x86/include/asm/nospec-branch.h
++++ b/arch/x86/include/asm/nospec-branch.h
+@@ -183,7 +183,10 @@
+ * otherwise we'll run out of registers. We don't care about CET
+ * here, anyway.
+ */
+-# define CALL_NOSPEC ALTERNATIVE("call *%[thunk_target]\n", \
++# define CALL_NOSPEC \
++ ALTERNATIVE( \
++ ANNOTATE_RETPOLINE_SAFE \
++ "call *%[thunk_target]\n", \
+ " jmp 904f;\n" \
+ " .align 16\n" \
+ "901: call 903f;\n" \
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index 4aa9fd379390..c3af167d0a70 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -105,7 +105,7 @@ static void probe_xeon_phi_r3mwait(struct cpuinfo_x86 *c)
+ /*
+ * Early microcode releases for the Spectre v2 mitigation were broken.
+ * Information taken from;
+- * - https://newsroom.intel.com/wp-content/uploads/sites/11/2018/01/microcode-update-guidance.pdf
++ * - https://newsroom.intel.com/wp-content/uploads/sites/11/2018/03/microcode-update-guidance.pdf
+ * - https://kb.vmware.com/s/article/52345
+ * - Microcode revisions observed in the wild
+ * - Release note from 20180108 microcode release
+@@ -123,7 +123,6 @@ static const struct sku_microcode spectre_bad_microcodes[] = {
+ { INTEL_FAM6_KABYLAKE_MOBILE, 0x09, 0x80 },
+ { INTEL_FAM6_SKYLAKE_X, 0x03, 0x0100013e },
+ { INTEL_FAM6_SKYLAKE_X, 0x04, 0x0200003c },
+- { INTEL_FAM6_SKYLAKE_DESKTOP, 0x03, 0xc2 },
+ { INTEL_FAM6_BROADWELL_CORE, 0x04, 0x28 },
+ { INTEL_FAM6_BROADWELL_GT3E, 0x01, 0x1b },
+ { INTEL_FAM6_BROADWELL_XEON_D, 0x02, 0x14 },
+diff --git a/arch/x86/kernel/vm86_32.c b/arch/x86/kernel/vm86_32.c
+index 5edb27f1a2c4..9d0b5af7db91 100644
+--- a/arch/x86/kernel/vm86_32.c
++++ b/arch/x86/kernel/vm86_32.c
+@@ -727,7 +727,8 @@ void handle_vm86_fault(struct kernel_vm86_regs *regs, long error_code)
+ return;
+
+ check_vip:
+- if (VEFLAGS & X86_EFLAGS_VIP) {
++ if ((VEFLAGS & (X86_EFLAGS_VIP | X86_EFLAGS_VIF)) ==
++ (X86_EFLAGS_VIP | X86_EFLAGS_VIF)) {
+ save_v86_state(regs, VM86_STI);
+ return;
+ }
+diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
+index fe2cb4cfa75b..37277859a2a1 100644
+--- a/arch/x86/kvm/mmu.c
++++ b/arch/x86/kvm/mmu.c
+@@ -2758,8 +2758,10 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
+ else
+ pte_access &= ~ACC_WRITE_MASK;
+
++ if (!kvm_is_mmio_pfn(pfn))
++ spte |= shadow_me_mask;
++
+ spte |= (u64)pfn << PAGE_SHIFT;
+- spte |= shadow_me_mask;
+
+ if (pte_access & ACC_WRITE_MASK) {
+
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index c88573d90f3e..25a30b5d6582 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -330,7 +330,7 @@ static noinline int vmalloc_fault(unsigned long address)
+ if (!pmd_k)
+ return -1;
+
+- if (pmd_huge(*pmd_k))
++ if (pmd_large(*pmd_k))
+ return 0;
+
+ pte_k = pte_offset_kernel(pmd_k, address);
+@@ -475,7 +475,7 @@ static noinline int vmalloc_fault(unsigned long address)
+ if (pud_none(*pud) || pud_pfn(*pud) != pud_pfn(*pud_ref))
+ BUG();
+
+- if (pud_huge(*pud))
++ if (pud_large(*pud))
+ return 0;
+
+ pmd = pmd_offset(pud, address);
+@@ -486,7 +486,7 @@ static noinline int vmalloc_fault(unsigned long address)
+ if (pmd_none(*pmd) || pmd_pfn(*pmd) != pmd_pfn(*pmd_ref))
+ BUG();
+
+- if (pmd_huge(*pmd))
++ if (pmd_large(*pmd))
+ return 0;
+
+ pte_ref = pte_offset_kernel(pmd_ref, address);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+index 21e7ae159dff..9f72993a6175 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+@@ -69,25 +69,18 @@ void amdgpu_connector_hotplug(struct drm_connector *connector)
+ /* don't do anything if sink is not display port, i.e.,
+ * passive dp->(dvi|hdmi) adaptor
+ */
+- if (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) {
+- int saved_dpms = connector->dpms;
+- /* Only turn off the display if it's physically disconnected */
+- if (!amdgpu_display_hpd_sense(adev, amdgpu_connector->hpd.hpd)) {
+- drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
+- } else if (amdgpu_atombios_dp_needs_link_train(amdgpu_connector)) {
+- /* Don't try to start link training before we
+- * have the dpcd */
+- if (amdgpu_atombios_dp_get_dpcd(amdgpu_connector))
+- return;
+-
+- /* set it to OFF so that drm_helper_connector_dpms()
+- * won't return immediately since the current state
+- * is ON at this point.
+- */
+- connector->dpms = DRM_MODE_DPMS_OFF;
+- drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
+- }
+- connector->dpms = saved_dpms;
++ if (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT &&
++ amdgpu_display_hpd_sense(adev, amdgpu_connector->hpd.hpd) &&
++ amdgpu_atombios_dp_needs_link_train(amdgpu_connector)) {
++ /* Don't start link training before we have the DPCD */
++ if (amdgpu_atombios_dp_get_dpcd(amdgpu_connector))
++ return;
++
++ /* Turn the connector off and back on immediately, which
++ * will trigger link training
++ */
++ drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
++ drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
+ }
+ }
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+index 1eac7c3c687b..e0eef2c41190 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c
+@@ -36,8 +36,6 @@ void amdgpu_gem_object_free(struct drm_gem_object *gobj)
+ struct amdgpu_bo *robj = gem_to_amdgpu_bo(gobj);
+
+ if (robj) {
+- if (robj->gem_base.import_attach)
+- drm_prime_gem_destroy(&robj->gem_base, robj->tbo.sg);
+ amdgpu_mn_unregister(robj);
+ amdgpu_bo_unref(&robj);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index ea25164e7f4b..828252dc1d91 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -44,6 +44,8 @@ static void amdgpu_ttm_bo_destroy(struct ttm_buffer_object *tbo)
+
+ amdgpu_bo_kunmap(bo);
+
++ if (bo->gem_base.import_attach)
++ drm_prime_gem_destroy(&bo->gem_base, bo->tbo.sg);
+ drm_gem_object_release(&bo->gem_base);
+ amdgpu_bo_unref(&bo->parent);
+ if (!list_empty(&bo->shadow_list)) {
+diff --git a/drivers/gpu/drm/nouveau/nouveau_backlight.c b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+index 380f340204e8..f56f60f695e1 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_backlight.c
++++ b/drivers/gpu/drm/nouveau/nouveau_backlight.c
+@@ -268,13 +268,13 @@ nouveau_backlight_init(struct drm_device *dev)
+ struct nvif_device *device = &drm->client.device;
+ struct drm_connector *connector;
+
++ INIT_LIST_HEAD(&drm->bl_connectors);
++
+ if (apple_gmux_present()) {
+ NV_INFO(drm, "Apple GMUX detected: not registering Nouveau backlight interface\n");
+ return 0;
+ }
+
+- INIT_LIST_HEAD(&drm->bl_connectors);
+-
+ list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+ if (connector->connector_type != DRM_MODE_CONNECTOR_LVDS &&
+ connector->connector_type != DRM_MODE_CONNECTOR_eDP)
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+index e35d3e17cd7c..c6e3d0dd1070 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+@@ -1354,7 +1354,7 @@ nvkm_vmm_get_locked(struct nvkm_vmm *vmm, bool getref, bool mapref, bool sparse,
+
+ tail = this->addr + this->size;
+ if (vmm->func->page_block && next && next->page != p)
+- tail = ALIGN_DOWN(addr, vmm->func->page_block);
++ tail = ALIGN_DOWN(tail, vmm->func->page_block);
+
+ if (addr <= tail && tail - addr >= size) {
+ rb_erase(&this->tree, &vmm->free);
+diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
+index cf3deb283da5..065c058f7b5f 100644
+--- a/drivers/gpu/drm/radeon/radeon_gem.c
++++ b/drivers/gpu/drm/radeon/radeon_gem.c
+@@ -34,8 +34,6 @@ void radeon_gem_object_free(struct drm_gem_object *gobj)
+ struct radeon_bo *robj = gem_to_radeon_bo(gobj);
+
+ if (robj) {
+- if (robj->gem_base.import_attach)
+- drm_prime_gem_destroy(&robj->gem_base, robj->tbo.sg);
+ radeon_mn_unregister(robj);
+ radeon_bo_unref(&robj);
+ }
+diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
+index 093594976126..baadb706c276 100644
+--- a/drivers/gpu/drm/radeon/radeon_object.c
++++ b/drivers/gpu/drm/radeon/radeon_object.c
+@@ -82,6 +82,8 @@ static void radeon_ttm_bo_destroy(struct ttm_buffer_object *tbo)
+ mutex_unlock(&bo->rdev->gem.mutex);
+ radeon_bo_clear_surface_reg(bo);
+ WARN_ON_ONCE(!list_empty(&bo->va));
++ if (bo->gem_base.import_attach)
++ drm_prime_gem_destroy(&bo->gem_base, bo->tbo.sg);
+ drm_gem_object_release(&bo->gem_base);
+ kfree(bo);
+ }
+diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c
+index 42713511b53b..524e6134642e 100644
+--- a/drivers/infiniband/sw/rdmavt/mr.c
++++ b/drivers/infiniband/sw/rdmavt/mr.c
+@@ -489,11 +489,13 @@ static int rvt_check_refs(struct rvt_mregion *mr, const char *t)
+ unsigned long timeout;
+ struct rvt_dev_info *rdi = ib_to_rvt(mr->pd->device);
+
+- if (percpu_ref_is_zero(&mr->refcount))
+- return 0;
+- /* avoid dma mr */
+- if (mr->lkey)
++ if (mr->lkey) {
++ /* avoid dma mr */
+ rvt_dereg_clean_qps(mr);
++ /* @mr was indexed on rcu protected @lkey_table */
++ synchronize_rcu();
++ }
++
+ timeout = wait_for_completion_timeout(&mr->comp, 5 * HZ);
+ if (!timeout) {
+ rvt_pr_err(rdi,
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 06f025fd5726..12c325066deb 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -1412,7 +1412,7 @@ static struct irq_chip its_irq_chip = {
+ * This gives us (((1UL << id_bits) - 8192) >> 5) possible allocations.
+ */
+ #define IRQS_PER_CHUNK_SHIFT 5
+-#define IRQS_PER_CHUNK (1 << IRQS_PER_CHUNK_SHIFT)
++#define IRQS_PER_CHUNK (1UL << IRQS_PER_CHUNK_SHIFT)
+ #define ITS_MAX_LPI_NRBITS 16 /* 64K LPIs */
+
+ static unsigned long *lpi_bitmap;
+@@ -2119,11 +2119,10 @@ static struct its_device *its_create_device(struct its_node *its, u32 dev_id,
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ /*
+- * At least one bit of EventID is being used, hence a minimum
+- * of two entries. No, the architecture doesn't let you
+- * express an ITT with a single entry.
++ * We allocate at least one chunk worth of LPIs bet device,
++ * and thus that many ITEs. The device may require less though.
+ */
+- nr_ites = max(2UL, roundup_pow_of_two(nvecs));
++ nr_ites = max(IRQS_PER_CHUNK, roundup_pow_of_two(nvecs));
+ sz = nr_ites * its->ite_size;
+ sz = max(sz, ITS_ITT_ALIGN) + ITS_ITT_ALIGN - 1;
+ itt = kzalloc(sz, GFP_KERNEL);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 3551fbd6fe41..935593032123 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2052,6 +2052,22 @@ static const struct attribute_group *nvme_subsys_attrs_groups[] = {
+ NULL,
+ };
+
++static int nvme_active_ctrls(struct nvme_subsystem *subsys)
++{
++ int count = 0;
++ struct nvme_ctrl *ctrl;
++
++ mutex_lock(&subsys->lock);
++ list_for_each_entry(ctrl, &subsys->ctrls, subsys_entry) {
++ if (ctrl->state != NVME_CTRL_DELETING &&
++ ctrl->state != NVME_CTRL_DEAD)
++ count++;
++ }
++ mutex_unlock(&subsys->lock);
++
++ return count;
++}
++
+ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+ {
+ struct nvme_subsystem *subsys, *found;
+@@ -2090,7 +2106,7 @@ static int nvme_init_subsystem(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+ * Verify that the subsystem actually supports multiple
+ * controllers, else bail out.
+ */
+- if (!(id->cmic & (1 << 1))) {
++ if (nvme_active_ctrls(found) && !(id->cmic & (1 << 1))) {
+ dev_err(ctrl->device,
+ "ignoring ctrl due to duplicate subnqn (%s).\n",
+ found->subnqn);
+diff --git a/drivers/phy/broadcom/phy-brcm-usb-init.c b/drivers/phy/broadcom/phy-brcm-usb-init.c
+index 1e7ce0b6f299..1b7febc43da9 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb-init.c
++++ b/drivers/phy/broadcom/phy-brcm-usb-init.c
+@@ -50,6 +50,8 @@
+ #define USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK 0x80000000 /* option */
+ #define USB_CTRL_EBRIDGE 0x0c
+ #define USB_CTRL_EBRIDGE_ESTOP_SCB_REQ_MASK 0x00020000 /* option */
++#define USB_CTRL_OBRIDGE 0x10
++#define USB_CTRL_OBRIDGE_LS_KEEP_ALIVE_MASK 0x08000000
+ #define USB_CTRL_MDIO 0x14
+ #define USB_CTRL_MDIO2 0x18
+ #define USB_CTRL_UTMI_CTL_1 0x2c
+@@ -71,6 +73,7 @@
+ #define USB_CTRL_USB30_CTL1_USB3_IPP_MASK 0x20000000 /* option */
+ #define USB_CTRL_USB30_PCTL 0x70
+ #define USB_CTRL_USB30_PCTL_PHY3_SOFT_RESETB_MASK 0x00000002
++#define USB_CTRL_USB30_PCTL_PHY3_IDDQ_OVERRIDE_MASK 0x00008000
+ #define USB_CTRL_USB30_PCTL_PHY3_SOFT_RESETB_P1_MASK 0x00020000
+ #define USB_CTRL_USB_DEVICE_CTL1 0x90
+ #define USB_CTRL_USB_DEVICE_CTL1_PORT_MODE_MASK 0x00000003 /* option */
+@@ -116,7 +119,6 @@ enum {
+ USB_CTRL_SETUP_STRAP_IPP_SEL_SELECTOR,
+ USB_CTRL_SETUP_OC3_DISABLE_SELECTOR,
+ USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_SELECTOR,
+- USB_CTRL_EBRIDGE_ESTOP_SCB_REQ_SELECTOR,
+ USB_CTRL_USB_PM_BDC_SOFT_RESETB_SELECTOR,
+ USB_CTRL_USB_PM_XHC_SOFT_RESETB_SELECTOR,
+ USB_CTRL_USB_PM_USB_PWRDN_SELECTOR,
+@@ -203,7 +205,6 @@ usb_reg_bits_map_table[BRCM_FAMILY_COUNT][USB_CTRL_SELECTOR_COUNT] = {
+ USB_CTRL_SETUP_STRAP_IPP_SEL_MASK,
+ USB_CTRL_SETUP_OC3_DISABLE_MASK,
+ 0, /* USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK */
+- USB_CTRL_EBRIDGE_ESTOP_SCB_REQ_MASK,
+ 0, /* USB_CTRL_USB_PM_BDC_SOFT_RESETB_MASK */
+ USB_CTRL_USB_PM_XHC_SOFT_RESETB_MASK,
+ USB_CTRL_USB_PM_USB_PWRDN_MASK,
+@@ -225,7 +226,6 @@ usb_reg_bits_map_table[BRCM_FAMILY_COUNT][USB_CTRL_SELECTOR_COUNT] = {
+ 0, /* USB_CTRL_SETUP_STRAP_IPP_SEL_MASK */
+ USB_CTRL_SETUP_OC3_DISABLE_MASK,
+ USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK,
+- USB_CTRL_EBRIDGE_ESTOP_SCB_REQ_MASK,
+ 0, /* USB_CTRL_USB_PM_BDC_SOFT_RESETB_MASK */
+ USB_CTRL_USB_PM_XHC_SOFT_RESETB_VAR_MASK,
+ 0, /* USB_CTRL_USB_PM_USB_PWRDN_MASK */
+@@ -247,7 +247,6 @@ usb_reg_bits_map_table[BRCM_FAMILY_COUNT][USB_CTRL_SELECTOR_COUNT] = {
+ USB_CTRL_SETUP_STRAP_IPP_SEL_MASK,
+ USB_CTRL_SETUP_OC3_DISABLE_MASK,
+ 0, /* USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK */
+- USB_CTRL_EBRIDGE_ESTOP_SCB_REQ_MASK,
+ USB_CTRL_USB_PM_BDC_SOFT_RESETB_MASK,
+ USB_CTRL_USB_PM_XHC_SOFT_RESETB_MASK,
+ USB_CTRL_USB_PM_USB_PWRDN_MASK,
+@@ -269,7 +268,6 @@ usb_reg_bits_map_table[BRCM_FAMILY_COUNT][USB_CTRL_SELECTOR_COUNT] = {
+ 0, /* USB_CTRL_SETUP_STRAP_IPP_SEL_MASK */
+ USB_CTRL_SETUP_OC3_DISABLE_MASK,
+ USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK,
+- USB_CTRL_EBRIDGE_ESTOP_SCB_REQ_MASK,
+ 0, /* USB_CTRL_USB_PM_BDC_SOFT_RESETB_MASK */
+ USB_CTRL_USB_PM_XHC_SOFT_RESETB_VAR_MASK,
+ 0, /* USB_CTRL_USB_PM_USB_PWRDN_MASK */
+@@ -291,7 +289,6 @@ usb_reg_bits_map_table[BRCM_FAMILY_COUNT][USB_CTRL_SELECTOR_COUNT] = {
+ 0, /* USB_CTRL_SETUP_STRAP_IPP_SEL_MASK */
+ USB_CTRL_SETUP_OC3_DISABLE_MASK,
+ 0, /* USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK */
+- USB_CTRL_EBRIDGE_ESTOP_SCB_REQ_MASK,
+ 0, /* USB_CTRL_USB_PM_BDC_SOFT_RESETB_MASK */
+ USB_CTRL_USB_PM_XHC_SOFT_RESETB_VAR_MASK,
+ USB_CTRL_USB_PM_USB_PWRDN_MASK,
+@@ -313,7 +310,6 @@ usb_reg_bits_map_table[BRCM_FAMILY_COUNT][USB_CTRL_SELECTOR_COUNT] = {
+ 0, /* USB_CTRL_SETUP_STRAP_IPP_SEL_MASK */
+ 0, /* USB_CTRL_SETUP_OC3_DISABLE_MASK */
+ USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK,
+- 0, /* USB_CTRL_EBRIDGE_ESTOP_SCB_REQ_MASK */
+ 0, /* USB_CTRL_USB_PM_BDC_SOFT_RESETB_MASK */
+ 0, /* USB_CTRL_USB_PM_XHC_SOFT_RESETB_MASK */
+ 0, /* USB_CTRL_USB_PM_USB_PWRDN_MASK */
+@@ -335,7 +331,6 @@ usb_reg_bits_map_table[BRCM_FAMILY_COUNT][USB_CTRL_SELECTOR_COUNT] = {
+ USB_CTRL_SETUP_STRAP_IPP_SEL_MASK,
+ USB_CTRL_SETUP_OC3_DISABLE_MASK,
+ 0, /* USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK */
+- 0, /* USB_CTRL_EBRIDGE_ESTOP_SCB_REQ_MASK */
+ USB_CTRL_USB_PM_BDC_SOFT_RESETB_MASK,
+ USB_CTRL_USB_PM_XHC_SOFT_RESETB_MASK,
+ USB_CTRL_USB_PM_USB_PWRDN_MASK,
+@@ -357,7 +352,6 @@ usb_reg_bits_map_table[BRCM_FAMILY_COUNT][USB_CTRL_SELECTOR_COUNT] = {
+ 0, /* USB_CTRL_SETUP_STRAP_IPP_SEL_MASK */
+ USB_CTRL_SETUP_OC3_DISABLE_MASK,
+ USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK,
+- 0, /* USB_CTRL_EBRIDGE_ESTOP_SCB_REQ_MASK */
+ 0, /* USB_CTRL_USB_PM_BDC_SOFT_RESETB_MASK */
+ 0, /* USB_CTRL_USB_PM_XHC_SOFT_RESETB_MASK */
+ 0, /* USB_CTRL_USB_PM_USB_PWRDN_MASK */
+@@ -379,7 +373,6 @@ usb_reg_bits_map_table[BRCM_FAMILY_COUNT][USB_CTRL_SELECTOR_COUNT] = {
+ USB_CTRL_SETUP_STRAP_IPP_SEL_MASK,
+ USB_CTRL_SETUP_OC3_DISABLE_MASK,
+ 0, /* USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK */
+- USB_CTRL_EBRIDGE_ESTOP_SCB_REQ_MASK,
+ USB_CTRL_USB_PM_BDC_SOFT_RESETB_MASK,
+ USB_CTRL_USB_PM_XHC_SOFT_RESETB_MASK,
+ USB_CTRL_USB_PM_USB_PWRDN_MASK,
+@@ -401,7 +394,6 @@ usb_reg_bits_map_table[BRCM_FAMILY_COUNT][USB_CTRL_SELECTOR_COUNT] = {
+ USB_CTRL_SETUP_STRAP_IPP_SEL_MASK,
+ USB_CTRL_SETUP_OC3_DISABLE_MASK,
+ 0, /* USB_CTRL_PLL_CTL_PLL_IDDQ_PWRDN_MASK */
+- USB_CTRL_EBRIDGE_ESTOP_SCB_REQ_MASK,
+ USB_CTRL_USB_PM_BDC_SOFT_RESETB_MASK,
+ USB_CTRL_USB_PM_XHC_SOFT_RESETB_MASK,
+ USB_CTRL_USB_PM_USB_PWRDN_MASK,
+@@ -926,6 +918,7 @@ void brcm_usb_init_common(struct brcm_usb_init_params *params)
+ USB_CTRL_UNSET_FAMILY(params, USB_PM, BDC_SOFT_RESETB);
+ break;
+ default:
++ USB_CTRL_UNSET_FAMILY(params, USB_PM, BDC_SOFT_RESETB);
+ USB_CTRL_SET_FAMILY(params, USB_PM, BDC_SOFT_RESETB);
+ break;
+ }
+@@ -952,13 +945,17 @@ void brcm_usb_init_eohci(struct brcm_usb_init_params *params)
+ * Don't enable this so the memory controller doesn't read
+ * into memory holes. NOTE: This bit is low true on 7366C0.
+ */
+- USB_CTRL_SET_FAMILY(params, EBRIDGE, ESTOP_SCB_REQ);
++ USB_CTRL_SET(ctrl, EBRIDGE, ESTOP_SCB_REQ);
+
+ /* Setup the endian bits */
+ reg = brcmusb_readl(USB_CTRL_REG(ctrl, SETUP));
+ reg &= ~USB_CTRL_SETUP_ENDIAN_BITS;
+ reg |= USB_CTRL_MASK_FAMILY(params, SETUP, ENDIAN);
+ brcmusb_writel(reg, USB_CTRL_REG(ctrl, SETUP));
++
++ if (params->selected_family == BRCM_FAMILY_7271A0)
++ /* Enable LS keep alive fix for certain keyboards */
++ USB_CTRL_SET(ctrl, OBRIDGE, LS_KEEP_ALIVE);
+ }
+
+ void brcm_usb_init_xhci(struct brcm_usb_init_params *params)
+@@ -1003,6 +1000,7 @@ void brcm_usb_uninit_eohci(struct brcm_usb_init_params *params)
+ void brcm_usb_uninit_xhci(struct brcm_usb_init_params *params)
+ {
+ brcmusb_xhci_soft_reset(params, 1);
++ USB_CTRL_SET(params->ctrl_regs, USB30_PCTL, PHY3_IDDQ_OVERRIDE);
+ }
+
+ void brcm_usb_set_family_map(struct brcm_usb_init_params *params)
+diff --git a/drivers/phy/broadcom/phy-brcm-usb.c b/drivers/phy/broadcom/phy-brcm-usb.c
+index 195b98139e5f..d1dab36fa5b7 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb.c
++++ b/drivers/phy/broadcom/phy-brcm-usb.c
+@@ -338,9 +338,9 @@ static int brcm_usb_phy_probe(struct platform_device *pdev)
+ ARRAY_SIZE(brcm_dr_mode_to_name),
+ mode, &priv->ini.mode);
+ }
+- if (of_property_read_bool(dn, "brcm,has_xhci"))
++ if (of_property_read_bool(dn, "brcm,has-xhci"))
+ priv->has_xhci = true;
+- if (of_property_read_bool(dn, "brcm,has_eohci"))
++ if (of_property_read_bool(dn, "brcm,has-eohci"))
+ priv->has_eohci = true;
+
+ err = brcm_usb_phy_dvr_init(dev, priv, dn);
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 6082389f25c3..7b44a2c68a45 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -102,11 +102,16 @@ qla2x00_async_iocb_timeout(void *data)
+ struct srb_iocb *lio = &sp->u.iocb_cmd;
+ struct event_arg ea;
+
+- ql_dbg(ql_dbg_disc, fcport->vha, 0x2071,
+- "Async-%s timeout - hdl=%x portid=%06x %8phC.\n",
+- sp->name, sp->handle, fcport->d_id.b24, fcport->port_name);
++ if (fcport) {
++ ql_dbg(ql_dbg_disc, fcport->vha, 0x2071,
++ "Async-%s timeout - hdl=%x portid=%06x %8phC.\n",
++ sp->name, sp->handle, fcport->d_id.b24, fcport->port_name);
+
+- fcport->flags &= ~FCF_ASYNC_SENT;
++ fcport->flags &= ~FCF_ASYNC_SENT;
++ } else {
++ pr_info("Async-%s timeout - hdl=%x.\n",
++ sp->name, sp->handle);
++ }
+
+ switch (sp->type) {
+ case SRB_LOGIN_CMD:
+diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
+index e538e6308885..522d585a1a08 100644
+--- a/drivers/scsi/qla2xxx/qla_mid.c
++++ b/drivers/scsi/qla2xxx/qla_mid.c
+@@ -582,8 +582,9 @@ qla25xx_delete_req_que(struct scsi_qla_host *vha, struct req_que *req)
+ ret = qla25xx_init_req_que(vha, req);
+ if (ret != QLA_SUCCESS)
+ return QLA_FUNCTION_FAILED;
++
++ qla25xx_free_req_que(vha, req);
+ }
+- qla25xx_free_req_que(vha, req);
+
+ return ret;
+ }
+@@ -598,8 +599,9 @@ qla25xx_delete_rsp_que(struct scsi_qla_host *vha, struct rsp_que *rsp)
+ ret = qla25xx_init_rsp_que(vha, rsp);
+ if (ret != QLA_SUCCESS)
+ return QLA_FUNCTION_FAILED;
++
++ qla25xx_free_rsp_que(vha, rsp);
+ }
+- qla25xx_free_rsp_que(vha, rsp);
+
+ return ret;
+ }
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 1f69e89b950f..1204c1d59bc4 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -449,7 +449,7 @@ static int qla2x00_alloc_queues(struct qla_hw_data *ha, struct req_que *req,
+ ha->req_q_map[0] = req;
+ set_bit(0, ha->rsp_qid_map);
+ set_bit(0, ha->req_qid_map);
+- return 1;
++ return 0;
+
+ fail_qpair_map:
+ kfree(ha->base_qpair);
+@@ -466,6 +466,9 @@ static int qla2x00_alloc_queues(struct qla_hw_data *ha, struct req_que *req,
+
+ static void qla2x00_free_req_que(struct qla_hw_data *ha, struct req_que *req)
+ {
++ if (!ha->req_q_map)
++ return;
++
+ if (IS_QLAFX00(ha)) {
+ if (req && req->ring_fx00)
+ dma_free_coherent(&ha->pdev->dev,
+@@ -476,14 +479,17 @@ static void qla2x00_free_req_que(struct qla_hw_data *ha, struct req_que *req)
+ (req->length + 1) * sizeof(request_t),
+ req->ring, req->dma);
+
+- if (req)
++ if (req) {
+ kfree(req->outstanding_cmds);
+-
+- kfree(req);
++ kfree(req);
++ }
+ }
+
+ static void qla2x00_free_rsp_que(struct qla_hw_data *ha, struct rsp_que *rsp)
+ {
++ if (!ha->rsp_q_map)
++ return;
++
+ if (IS_QLAFX00(ha)) {
+ if (rsp && rsp->ring)
+ dma_free_coherent(&ha->pdev->dev,
+@@ -494,7 +500,8 @@ static void qla2x00_free_rsp_que(struct qla_hw_data *ha, struct rsp_que *rsp)
+ (rsp->length + 1) * sizeof(response_t),
+ rsp->ring, rsp->dma);
+ }
+- kfree(rsp);
++ if (rsp)
++ kfree(rsp);
+ }
+
+ static void qla2x00_free_queues(struct qla_hw_data *ha)
+@@ -1717,6 +1724,8 @@ qla2x00_abort_all_cmds(scsi_qla_host_t *vha, int res)
+ struct qla_tgt_cmd *cmd;
+ uint8_t trace = 0;
+
++ if (!ha->req_q_map)
++ return;
+ spin_lock_irqsave(&ha->hardware_lock, flags);
+ for (que = 0; que < ha->max_req_queues; que++) {
+ req = ha->req_q_map[que];
+@@ -3071,14 +3080,14 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ /* Set up the irqs */
+ ret = qla2x00_request_irqs(ha, rsp);
+ if (ret)
+- goto probe_hw_failed;
++ goto probe_failed;
+
+ /* Alloc arrays of request and response ring ptrs */
+- if (!qla2x00_alloc_queues(ha, req, rsp)) {
++ if (qla2x00_alloc_queues(ha, req, rsp)) {
+ ql_log(ql_log_fatal, base_vha, 0x003d,
+ "Failed to allocate memory for queue pointers..."
+ "aborting.\n");
+- goto probe_init_failed;
++ goto probe_failed;
+ }
+
+ if (ha->mqenable && shost_use_blk_mq(host)) {
+@@ -3363,15 +3372,6 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+
+ return 0;
+
+-probe_init_failed:
+- qla2x00_free_req_que(ha, req);
+- ha->req_q_map[0] = NULL;
+- clear_bit(0, ha->req_qid_map);
+- qla2x00_free_rsp_que(ha, rsp);
+- ha->rsp_q_map[0] = NULL;
+- clear_bit(0, ha->rsp_qid_map);
+- ha->max_req_queues = ha->max_rsp_queues = 0;
+-
+ probe_failed:
+ if (base_vha->timer_active)
+ qla2x00_stop_timer(base_vha);
+@@ -4451,11 +4451,17 @@ qla2x00_mem_free(struct qla_hw_data *ha)
+ if (ha->init_cb)
+ dma_free_coherent(&ha->pdev->dev, ha->init_cb_size,
+ ha->init_cb, ha->init_cb_dma);
+- vfree(ha->optrom_buffer);
+- kfree(ha->nvram);
+- kfree(ha->npiv_info);
+- kfree(ha->swl);
+- kfree(ha->loop_id_map);
++
++ if (ha->optrom_buffer)
++ vfree(ha->optrom_buffer);
++ if (ha->nvram)
++ kfree(ha->nvram);
++ if (ha->npiv_info)
++ kfree(ha->npiv_info);
++ if (ha->swl)
++ kfree(ha->swl);
++ if (ha->loop_id_map)
++ kfree(ha->loop_id_map);
+
+ ha->srb_mempool = NULL;
+ ha->ctx_mempool = NULL;
+@@ -4471,6 +4477,15 @@ qla2x00_mem_free(struct qla_hw_data *ha)
+ ha->ex_init_cb_dma = 0;
+ ha->async_pd = NULL;
+ ha->async_pd_dma = 0;
++ ha->loop_id_map = NULL;
++ ha->npiv_info = NULL;
++ ha->optrom_buffer = NULL;
++ ha->swl = NULL;
++ ha->nvram = NULL;
++ ha->mctp_dump = NULL;
++ ha->dcbx_tlv = NULL;
++ ha->xgmac_data = NULL;
++ ha->sfp_data = NULL;
+
+ ha->s_dma_pool = NULL;
+ ha->dl_dma_pool = NULL;
+diff --git a/drivers/scsi/qla2xxx/qla_target.c b/drivers/scsi/qla2xxx/qla_target.c
+index cb35bb1ae305..46bb4d057293 100644
+--- a/drivers/scsi/qla2xxx/qla_target.c
++++ b/drivers/scsi/qla2xxx/qla_target.c
+@@ -982,6 +982,7 @@ static void qlt_free_session_done(struct work_struct *work)
+
+ logo.id = sess->d_id;
+ logo.cmd_count = 0;
++ sess->send_els_logo = 0;
+ qlt_send_first_logo(vha, &logo);
+ }
+
+diff --git a/drivers/usb/dwc2/params.c b/drivers/usb/dwc2/params.c
+index 03fd20f0b496..c4a47496d2fb 100644
+--- a/drivers/usb/dwc2/params.c
++++ b/drivers/usb/dwc2/params.c
+@@ -137,7 +137,7 @@ static void dwc2_set_stm32f4x9_fsotg_params(struct dwc2_hsotg *hsotg)
+ p->activate_stm_fs_transceiver = true;
+ }
+
+-static void dwc2_set_stm32f7xx_hsotg_params(struct dwc2_hsotg *hsotg)
++static void dwc2_set_stm32f7_hsotg_params(struct dwc2_hsotg *hsotg)
+ {
+ struct dwc2_core_params *p = &hsotg->params;
+
+@@ -164,8 +164,8 @@ const struct of_device_id dwc2_of_match_table[] = {
+ { .compatible = "st,stm32f4x9-fsotg",
+ .data = dwc2_set_stm32f4x9_fsotg_params },
+ { .compatible = "st,stm32f4x9-hsotg" },
+- { .compatible = "st,stm32f7xx-hsotg",
+- .data = dwc2_set_stm32f7xx_hsotg_params },
++ { .compatible = "st,stm32f7-hsotg",
++ .data = dwc2_set_stm32f7_hsotg_params },
+ {},
+ };
+ MODULE_DEVICE_TABLE(of, dwc2_of_match_table);
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 51de21ef3cdc..b417d9aeaeeb 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -100,6 +100,8 @@ static void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode)
+ reg &= ~(DWC3_GCTL_PRTCAPDIR(DWC3_GCTL_PRTCAP_OTG));
+ reg |= DWC3_GCTL_PRTCAPDIR(mode);
+ dwc3_writel(dwc->regs, DWC3_GCTL, reg);
++
++ dwc->current_dr_role = mode;
+ }
+
+ static void __dwc3_set_mode(struct work_struct *work)
+@@ -133,8 +135,6 @@ static void __dwc3_set_mode(struct work_struct *work)
+
+ dwc3_set_prtcap(dwc, dwc->desired_dr_role);
+
+- dwc->current_dr_role = dwc->desired_dr_role;
+-
+ spin_unlock_irqrestore(&dwc->lock, flags);
+
+ switch (dwc->desired_dr_role) {
+@@ -218,7 +218,7 @@ static int dwc3_core_soft_reset(struct dwc3 *dwc)
+ * XHCI driver will reset the host block. If dwc3 was configured for
+ * host-only mode, then we can return early.
+ */
+- if (dwc->dr_mode == USB_DR_MODE_HOST)
++ if (dwc->current_dr_role == DWC3_GCTL_PRTCAP_HOST)
+ return 0;
+
+ reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+@@ -915,7 +915,6 @@ static int dwc3_core_init_mode(struct dwc3 *dwc)
+
+ switch (dwc->dr_mode) {
+ case USB_DR_MODE_PERIPHERAL:
+- dwc->current_dr_role = DWC3_GCTL_PRTCAP_DEVICE;
+ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE);
+
+ if (dwc->usb2_phy)
+@@ -931,7 +930,6 @@ static int dwc3_core_init_mode(struct dwc3 *dwc)
+ }
+ break;
+ case USB_DR_MODE_HOST:
+- dwc->current_dr_role = DWC3_GCTL_PRTCAP_HOST;
+ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST);
+
+ if (dwc->usb2_phy)
+@@ -1279,7 +1277,7 @@ static int dwc3_remove(struct platform_device *pdev)
+ }
+
+ #ifdef CONFIG_PM
+-static int dwc3_suspend_common(struct dwc3 *dwc)
++static int dwc3_suspend_common(struct dwc3 *dwc, pm_message_t msg)
+ {
+ unsigned long flags;
+
+@@ -1291,6 +1289,10 @@ static int dwc3_suspend_common(struct dwc3 *dwc)
+ dwc3_core_exit(dwc);
+ break;
+ case DWC3_GCTL_PRTCAP_HOST:
++ /* do nothing during host runtime_suspend */
++ if (!PMSG_IS_AUTO(msg))
++ dwc3_core_exit(dwc);
++ break;
+ default:
+ /* do nothing */
+ break;
+@@ -1299,7 +1301,7 @@ static int dwc3_suspend_common(struct dwc3 *dwc)
+ return 0;
+ }
+
+-static int dwc3_resume_common(struct dwc3 *dwc)
++static int dwc3_resume_common(struct dwc3 *dwc, pm_message_t msg)
+ {
+ unsigned long flags;
+ int ret;
+@@ -1315,6 +1317,13 @@ static int dwc3_resume_common(struct dwc3 *dwc)
+ spin_unlock_irqrestore(&dwc->lock, flags);
+ break;
+ case DWC3_GCTL_PRTCAP_HOST:
++ /* nothing to do on host runtime_resume */
++ if (!PMSG_IS_AUTO(msg)) {
++ ret = dwc3_core_init(dwc);
++ if (ret)
++ return ret;
++ }
++ break;
+ default:
+ /* do nothing */
+ break;
+@@ -1326,12 +1335,11 @@ static int dwc3_resume_common(struct dwc3 *dwc)
+ static int dwc3_runtime_checks(struct dwc3 *dwc)
+ {
+ switch (dwc->current_dr_role) {
+- case USB_DR_MODE_PERIPHERAL:
+- case USB_DR_MODE_OTG:
++ case DWC3_GCTL_PRTCAP_DEVICE:
+ if (dwc->connected)
+ return -EBUSY;
+ break;
+- case USB_DR_MODE_HOST:
++ case DWC3_GCTL_PRTCAP_HOST:
+ default:
+ /* do nothing */
+ break;
+@@ -1348,7 +1356,7 @@ static int dwc3_runtime_suspend(struct device *dev)
+ if (dwc3_runtime_checks(dwc))
+ return -EBUSY;
+
+- ret = dwc3_suspend_common(dwc);
++ ret = dwc3_suspend_common(dwc, PMSG_AUTO_SUSPEND);
+ if (ret)
+ return ret;
+
+@@ -1364,7 +1372,7 @@ static int dwc3_runtime_resume(struct device *dev)
+
+ device_init_wakeup(dev, false);
+
+- ret = dwc3_resume_common(dwc);
++ ret = dwc3_resume_common(dwc, PMSG_AUTO_RESUME);
+ if (ret)
+ return ret;
+
+@@ -1411,7 +1419,7 @@ static int dwc3_suspend(struct device *dev)
+ struct dwc3 *dwc = dev_get_drvdata(dev);
+ int ret;
+
+- ret = dwc3_suspend_common(dwc);
++ ret = dwc3_suspend_common(dwc, PMSG_SUSPEND);
+ if (ret)
+ return ret;
+
+@@ -1427,7 +1435,7 @@ static int dwc3_resume(struct device *dev)
+
+ pinctrl_pm_select_default_state(dev);
+
+- ret = dwc3_resume_common(dwc);
++ ret = dwc3_resume_common(dwc, PMSG_RESUME);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 4a4a4c98508c..6d4e7a66cedd 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -158,13 +158,15 @@
+ #define DWC3_GDBGFIFOSPACE_TYPE(n) (((n) << 5) & 0x1e0)
+ #define DWC3_GDBGFIFOSPACE_SPACE_AVAILABLE(n) (((n) >> 16) & 0xffff)
+
+-#define DWC3_TXFIFOQ 1
+-#define DWC3_RXFIFOQ 3
+-#define DWC3_TXREQQ 5
+-#define DWC3_RXREQQ 7
+-#define DWC3_RXINFOQ 9
+-#define DWC3_DESCFETCHQ 13
+-#define DWC3_EVENTQ 15
++#define DWC3_TXFIFOQ 0
++#define DWC3_RXFIFOQ 1
++#define DWC3_TXREQQ 2
++#define DWC3_RXREQQ 3
++#define DWC3_RXINFOQ 4
++#define DWC3_PSTATQ 5
++#define DWC3_DESCFETCHQ 6
++#define DWC3_EVENTQ 7
++#define DWC3_AUXEVENTQ 8
+
+ /* Global RX Threshold Configuration Register */
+ #define DWC3_GRXTHRCFG_MAXRXBURSTSIZE(n) (((n) & 0x1f) << 19)
+diff --git a/drivers/usb/dwc3/dwc3-of-simple.c b/drivers/usb/dwc3/dwc3-of-simple.c
+index 7ae0eefc7cc7..e54c3622eb28 100644
+--- a/drivers/usb/dwc3/dwc3-of-simple.c
++++ b/drivers/usb/dwc3/dwc3-of-simple.c
+@@ -143,6 +143,7 @@ static int dwc3_of_simple_remove(struct platform_device *pdev)
+ clk_disable_unprepare(simple->clks[i]);
+ clk_put(simple->clks[i]);
+ }
++ simple->num_clocks = 0;
+
+ reset_control_assert(simple->resets);
+ reset_control_put(simple->resets);
+diff --git a/drivers/usb/gadget/udc/bdc/bdc_pci.c b/drivers/usb/gadget/udc/bdc/bdc_pci.c
+index 1e940f054cb8..6dbc489513cd 100644
+--- a/drivers/usb/gadget/udc/bdc/bdc_pci.c
++++ b/drivers/usb/gadget/udc/bdc/bdc_pci.c
+@@ -77,6 +77,7 @@ static int bdc_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ if (ret) {
+ dev_err(&pci->dev,
+ "couldn't add resources to bdc device\n");
++ platform_device_put(bdc);
+ return ret;
+ }
+
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 6e87af248367..409cde4e6a51 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -2410,7 +2410,7 @@ static int renesas_usb3_remove(struct platform_device *pdev)
+ __renesas_usb3_ep_free_request(usb3->ep0_req);
+ if (usb3->phy)
+ phy_put(usb3->phy);
+- pm_runtime_disable(usb3_to_dev(usb3));
++ pm_runtime_disable(&pdev->dev);
+
+ return 0;
+ }
+diff --git a/fs/aio.c b/fs/aio.c
+index a062d75109cb..6bcd3fb5265a 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -68,9 +68,9 @@ struct aio_ring {
+ #define AIO_RING_PAGES 8
+
+ struct kioctx_table {
+- struct rcu_head rcu;
+- unsigned nr;
+- struct kioctx *table[];
++ struct rcu_head rcu;
++ unsigned nr;
++ struct kioctx __rcu *table[];
+ };
+
+ struct kioctx_cpu {
+@@ -115,7 +115,8 @@ struct kioctx {
+ struct page **ring_pages;
+ long nr_pages;
+
+- struct work_struct free_work;
++ struct rcu_head free_rcu;
++ struct work_struct free_work; /* see free_ioctx() */
+
+ /*
+ * signals when all in-flight requests are done
+@@ -329,7 +330,7 @@ static int aio_ring_mremap(struct vm_area_struct *vma)
+ for (i = 0; i < table->nr; i++) {
+ struct kioctx *ctx;
+
+- ctx = table->table[i];
++ ctx = rcu_dereference(table->table[i]);
+ if (ctx && ctx->aio_ring_file == file) {
+ if (!atomic_read(&ctx->dead)) {
+ ctx->user_id = ctx->mmap_base = vma->vm_start;
+@@ -588,6 +589,12 @@ static int kiocb_cancel(struct aio_kiocb *kiocb)
+ return cancel(&kiocb->common);
+ }
+
++/*
++ * free_ioctx() should be RCU delayed to synchronize against the RCU
++ * protected lookup_ioctx() and also needs process context to call
++ * aio_free_ring(), so the double bouncing through kioctx->free_rcu and
++ * ->free_work.
++ */
+ static void free_ioctx(struct work_struct *work)
+ {
+ struct kioctx *ctx = container_of(work, struct kioctx, free_work);
+@@ -601,6 +608,14 @@ static void free_ioctx(struct work_struct *work)
+ kmem_cache_free(kioctx_cachep, ctx);
+ }
+
++static void free_ioctx_rcufn(struct rcu_head *head)
++{
++ struct kioctx *ctx = container_of(head, struct kioctx, free_rcu);
++
++ INIT_WORK(&ctx->free_work, free_ioctx);
++ schedule_work(&ctx->free_work);
++}
++
+ static void free_ioctx_reqs(struct percpu_ref *ref)
+ {
+ struct kioctx *ctx = container_of(ref, struct kioctx, reqs);
+@@ -609,8 +624,8 @@ static void free_ioctx_reqs(struct percpu_ref *ref)
+ if (ctx->rq_wait && atomic_dec_and_test(&ctx->rq_wait->count))
+ complete(&ctx->rq_wait->comp);
+
+- INIT_WORK(&ctx->free_work, free_ioctx);
+- schedule_work(&ctx->free_work);
++ /* Synchronize against RCU protected table->table[] dereferences */
++ call_rcu(&ctx->free_rcu, free_ioctx_rcufn);
+ }
+
+ /*
+@@ -651,9 +666,9 @@ static int ioctx_add_table(struct kioctx *ctx, struct mm_struct *mm)
+ while (1) {
+ if (table)
+ for (i = 0; i < table->nr; i++)
+- if (!table->table[i]) {
++ if (!rcu_access_pointer(table->table[i])) {
+ ctx->id = i;
+- table->table[i] = ctx;
++ rcu_assign_pointer(table->table[i], ctx);
+ spin_unlock(&mm->ioctx_lock);
+
+ /* While kioctx setup is in progress,
+@@ -834,11 +849,11 @@ static int kill_ioctx(struct mm_struct *mm, struct kioctx *ctx,
+ }
+
+ table = rcu_dereference_raw(mm->ioctx_table);
+- WARN_ON(ctx != table->table[ctx->id]);
+- table->table[ctx->id] = NULL;
++ WARN_ON(ctx != rcu_access_pointer(table->table[ctx->id]));
++ RCU_INIT_POINTER(table->table[ctx->id], NULL);
+ spin_unlock(&mm->ioctx_lock);
+
+- /* percpu_ref_kill() will do the necessary call_rcu() */
++ /* free_ioctx_reqs() will do the necessary RCU synchronization */
+ wake_up_all(&ctx->wait);
+
+ /*
+@@ -880,7 +895,8 @@ void exit_aio(struct mm_struct *mm)
+
+ skipped = 0;
+ for (i = 0; i < table->nr; ++i) {
+- struct kioctx *ctx = table->table[i];
++ struct kioctx *ctx =
++ rcu_dereference_protected(table->table[i], true);
+
+ if (!ctx) {
+ skipped++;
+@@ -1069,7 +1085,7 @@ static struct kioctx *lookup_ioctx(unsigned long ctx_id)
+ if (!table || id >= table->nr)
+ goto out;
+
+- ctx = table->table[id];
++ ctx = rcu_dereference(table->table[id]);
+ if (ctx && ctx->user_id == ctx_id) {
+ percpu_ref_get(&ctx->users);
+ ret = ctx;
+diff --git a/fs/btrfs/backref.c b/fs/btrfs/backref.c
+index 7d0dc100a09a..8a9df8003345 100644
+--- a/fs/btrfs/backref.c
++++ b/fs/btrfs/backref.c
+@@ -1263,7 +1263,16 @@ static int find_parent_nodes(struct btrfs_trans_handle *trans,
+ while (node) {
+ ref = rb_entry(node, struct prelim_ref, rbnode);
+ node = rb_next(&ref->rbnode);
+- WARN_ON(ref->count < 0);
++ /*
++ * ref->count < 0 can happen here if there are delayed
++ * refs with a node->action of BTRFS_DROP_DELAYED_REF.
++ * prelim_ref_insert() relies on this when merging
++ * identical refs to keep the overall count correct.
++ * prelim_ref_insert() will merge only those refs
++ * which compare identically. Any refs having
++ * e.g. different offsets would not be merged,
++ * and would retain their original ref->count < 0.
++ */
+ if (roots && ref->count && ref->root_id && ref->parent == 0) {
+ if (sc && sc->root_objectid &&
+ ref->root_id != sc->root_objectid) {
+@@ -1509,6 +1518,7 @@ int btrfs_check_shared(struct btrfs_root *root, u64 inum, u64 bytenr)
+ if (!node)
+ break;
+ bytenr = node->val;
++ shared.share_count = 0;
+ cond_resched();
+ }
+
+diff --git a/fs/btrfs/raid56.c b/fs/btrfs/raid56.c
+index 8903c4fbf7e6..8a3e42412506 100644
+--- a/fs/btrfs/raid56.c
++++ b/fs/btrfs/raid56.c
+@@ -1351,6 +1351,7 @@ static int find_bio_stripe(struct btrfs_raid_bio *rbio,
+ stripe_start = stripe->physical;
+ if (physical >= stripe_start &&
+ physical < stripe_start + rbio->stripe_len &&
++ stripe->dev->bdev &&
+ bio->bi_disk == stripe->dev->bdev->bd_disk &&
+ bio->bi_partno == stripe->dev->bdev->bd_partno) {
+ return i;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index a25684287501..6631f48c6a11 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -574,6 +574,7 @@ static void btrfs_free_stale_device(struct btrfs_device *cur_dev)
+ btrfs_sysfs_remove_fsid(fs_devs);
+ list_del(&fs_devs->list);
+ free_fs_devices(fs_devs);
++ break;
+ } else {
+ fs_devs->num_devices--;
+ list_del(&dev->dev_list);
+@@ -4737,10 +4738,13 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
+ ndevs = min(ndevs, devs_max);
+
+ /*
+- * the primary goal is to maximize the number of stripes, so use as many
+- * devices as possible, even if the stripes are not maximum sized.
++ * The primary goal is to maximize the number of stripes, so use as
++ * many devices as possible, even if the stripes are not maximum sized.
++ *
++ * The DUP profile stores more than one stripe per device, the
++ * max_avail is the total size so we have to adjust.
+ */
+- stripe_size = devices_info[ndevs-1].max_avail;
++ stripe_size = div_u64(devices_info[ndevs - 1].max_avail, dev_stripes);
+ num_stripes = ndevs * dev_stripes;
+
+ /*
+@@ -4775,8 +4779,6 @@ static int __btrfs_alloc_chunk(struct btrfs_trans_handle *trans,
+ stripe_size = devices_info[ndevs-1].max_avail;
+ }
+
+- stripe_size = div_u64(stripe_size, dev_stripes);
+-
+ /* align to BTRFS_STRIPE_LEN */
+ stripe_size = round_down(stripe_size, BTRFS_STRIPE_LEN);
+
+@@ -7091,10 +7093,24 @@ int btrfs_run_dev_stats(struct btrfs_trans_handle *trans,
+
+ mutex_lock(&fs_devices->device_list_mutex);
+ list_for_each_entry(device, &fs_devices->devices, dev_list) {
+- if (!device->dev_stats_valid || !btrfs_dev_stats_dirty(device))
++ stats_cnt = atomic_read(&device->dev_stats_ccnt);
++ if (!device->dev_stats_valid || stats_cnt == 0)
+ continue;
+
+- stats_cnt = atomic_read(&device->dev_stats_ccnt);
++
++ /*
++ * There is a LOAD-LOAD control dependency between the value of
++ * dev_stats_ccnt and updating the on-disk values which requires
++ * reading the in-memory counters. Such control dependencies
++ * require explicit read memory barriers.
++ *
++ * This memory barriers pairs with smp_mb__before_atomic in
++ * btrfs_dev_stat_inc/btrfs_dev_stat_set and with the full
++ * barrier implied by atomic_xchg in
++ * btrfs_dev_stats_read_and_reset
++ */
++ smp_rmb();
++
+ ret = update_dev_stat_item(trans, fs_info, device);
+ if (!ret)
+ atomic_sub(stats_cnt, &device->dev_stats_ccnt);
+diff --git a/fs/btrfs/volumes.h b/fs/btrfs/volumes.h
+index ff15208344a7..52ee7b094f3f 100644
+--- a/fs/btrfs/volumes.h
++++ b/fs/btrfs/volumes.h
+@@ -498,6 +498,12 @@ static inline void btrfs_dev_stat_inc(struct btrfs_device *dev,
+ int index)
+ {
+ atomic_inc(dev->dev_stat_values + index);
++ /*
++ * This memory barrier orders stores updating statistics before stores
++ * updating dev_stats_ccnt.
++ *
++ * It pairs with smp_rmb() in btrfs_run_dev_stats().
++ */
+ smp_mb__before_atomic();
+ atomic_inc(&dev->dev_stats_ccnt);
+ }
+@@ -523,6 +529,12 @@ static inline void btrfs_dev_stat_set(struct btrfs_device *dev,
+ int index, unsigned long val)
+ {
+ atomic_set(dev->dev_stat_values + index, val);
++ /*
++ * This memory barrier orders stores updating statistics before stores
++ * updating dev_stats_ccnt.
++ *
++ * It pairs with smp_rmb() in btrfs_run_dev_stats().
++ */
+ smp_mb__before_atomic();
+ atomic_inc(&dev->dev_stats_ccnt);
+ }
+diff --git a/fs/dcache.c b/fs/dcache.c
+index 5c7df1df81ff..eb2c297a87d0 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -644,11 +644,16 @@ static inline struct dentry *lock_parent(struct dentry *dentry)
+ spin_unlock(&parent->d_lock);
+ goto again;
+ }
+- rcu_read_unlock();
+- if (parent != dentry)
++ if (parent != dentry) {
+ spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
+- else
++ if (unlikely(dentry->d_lockref.count < 0)) {
++ spin_unlock(&parent->d_lock);
++ parent = NULL;
++ }
++ } else {
+ parent = NULL;
++ }
++ rcu_read_unlock();
+ return parent;
+ }
+
+diff --git a/fs/namei.c b/fs/namei.c
+index 4e3fc58dae72..ee19c4ef24b2 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -578,9 +578,10 @@ static int __nd_alloc_stack(struct nameidata *nd)
+ static bool path_connected(const struct path *path)
+ {
+ struct vfsmount *mnt = path->mnt;
++ struct super_block *sb = mnt->mnt_sb;
+
+- /* Only bind mounts can have disconnected paths */
+- if (mnt->mnt_root == mnt->mnt_sb->s_root)
++ /* Bind mounts and multi-root filesystems can have disconnected paths */
++ if (!(sb->s_iflags & SB_I_MULTIROOT) && (mnt->mnt_root == sb->s_root))
+ return true;
+
+ return is_subdir(path->dentry, mnt->mnt_root);
+diff --git a/fs/nfs/super.c b/fs/nfs/super.c
+index 29bacdc56f6a..5e470e233c83 100644
+--- a/fs/nfs/super.c
++++ b/fs/nfs/super.c
+@@ -2631,6 +2631,8 @@ struct dentry *nfs_fs_mount_common(struct nfs_server *server,
+ /* initial superblock/root creation */
+ mount_info->fill_super(s, mount_info);
+ nfs_get_cache_cookie(s, mount_info->parsed, mount_info->cloned);
++ if (!(server->flags & NFS_MOUNT_UNSHARED))
++ s->s_iflags |= SB_I_MULTIROOT;
+ }
+
+ mntroot = nfs_get_root(s, mount_info->mntfh, dev_name);
+diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
+index 3861d61fb265..3ce946063ffe 100644
+--- a/fs/xfs/xfs_icache.c
++++ b/fs/xfs/xfs_icache.c
+@@ -295,6 +295,7 @@ xfs_reinit_inode(
+ uint32_t generation = inode->i_generation;
+ uint64_t version = inode->i_version;
+ umode_t mode = inode->i_mode;
++ dev_t dev = inode->i_rdev;
+
+ error = inode_init_always(mp->m_super, inode);
+
+@@ -302,6 +303,7 @@ xfs_reinit_inode(
+ inode->i_generation = generation;
+ inode->i_version = version;
+ inode->i_mode = mode;
++ inode->i_rdev = dev;
+ return error;
+ }
+
+diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
+index 8c896540a72c..ff58c2933fdf 100644
+--- a/include/kvm/arm_vgic.h
++++ b/include/kvm/arm_vgic.h
+@@ -349,6 +349,7 @@ void kvm_vgic_put(struct kvm_vcpu *vcpu);
+ bool kvm_vcpu_has_pending_irqs(struct kvm_vcpu *vcpu);
+ void kvm_vgic_sync_hwstate(struct kvm_vcpu *vcpu);
+ void kvm_vgic_flush_hwstate(struct kvm_vcpu *vcpu);
++void kvm_vgic_reset_mapped_irq(struct kvm_vcpu *vcpu, u32 vintid);
+
+ void vgic_v3_dispatch_sgi(struct kvm_vcpu *vcpu, u64 reg);
+
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 79421287ff5e..d8af431d9c91 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1312,6 +1312,7 @@ extern int send_sigurg(struct fown_struct *fown);
+ #define SB_I_CGROUPWB 0x00000001 /* cgroup-aware writeback enabled */
+ #define SB_I_NOEXEC 0x00000002 /* Ignore executables on this fs */
+ #define SB_I_NODEV 0x00000004 /* Ignore devices on this fs */
++#define SB_I_MULTIROOT 0x00000008 /* Multiple roots to the dentry tree */
+
+ /* sb->s_iflags to limit user namespace mounts */
+ #define SB_I_USERNS_VISIBLE 0x00000010 /* fstype already mounted */
+diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h
+index c00c4c33e432..b26eccc78fb1 100644
+--- a/include/linux/irqchip/arm-gic-v3.h
++++ b/include/linux/irqchip/arm-gic-v3.h
+@@ -503,6 +503,7 @@
+
+ #define ICH_HCR_EN (1 << 0)
+ #define ICH_HCR_UIE (1 << 1)
++#define ICH_HCR_NPIE (1 << 3)
+ #define ICH_HCR_TC (1 << 10)
+ #define ICH_HCR_TALL0 (1 << 11)
+ #define ICH_HCR_TALL1 (1 << 12)
+diff --git a/include/linux/irqchip/arm-gic.h b/include/linux/irqchip/arm-gic.h
+index d3453ee072fc..68d8b1f73682 100644
+--- a/include/linux/irqchip/arm-gic.h
++++ b/include/linux/irqchip/arm-gic.h
+@@ -84,6 +84,7 @@
+
+ #define GICH_HCR_EN (1 << 0)
+ #define GICH_HCR_UIE (1 << 1)
++#define GICH_HCR_NPIE (1 << 3)
+
+ #define GICH_LR_VIRTUALID (0x3ff << 0)
+ #define GICH_LR_PHYSID_CPUID_SHIFT (10)
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index c2db7e905f7d..012881461058 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -1762,10 +1762,9 @@ static int snd_pcm_oss_get_formats(struct snd_pcm_oss_file *pcm_oss_file)
+ return -ENOMEM;
+ _snd_pcm_hw_params_any(params);
+ err = snd_pcm_hw_refine(substream, params);
+- format_mask = hw_param_mask_c(params, SNDRV_PCM_HW_PARAM_FORMAT);
+- kfree(params);
+ if (err < 0)
+- return err;
++ goto error;
++ format_mask = hw_param_mask_c(params, SNDRV_PCM_HW_PARAM_FORMAT);
+ for (fmt = 0; fmt < 32; ++fmt) {
+ if (snd_mask_test(format_mask, fmt)) {
+ int f = snd_pcm_oss_format_to(fmt);
+@@ -1773,7 +1772,10 @@ static int snd_pcm_oss_get_formats(struct snd_pcm_oss_file *pcm_oss_file)
+ formats |= f;
+ }
+ }
+- return formats;
++
++ error:
++ kfree(params);
++ return err < 0 ? err : formats;
+ }
+
+ static int snd_pcm_oss_set_format(struct snd_pcm_oss_file *pcm_oss_file, int format)
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 35ff97bfd492..6204b886309a 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -255,12 +255,12 @@ static int seq_free_client1(struct snd_seq_client *client)
+
+ if (!client)
+ return 0;
+- snd_seq_delete_all_ports(client);
+- snd_seq_queue_client_leave(client->number);
+ spin_lock_irqsave(&clients_lock, flags);
+ clienttablock[client->number] = 1;
+ clienttab[client->number] = NULL;
+ spin_unlock_irqrestore(&clients_lock, flags);
++ snd_seq_delete_all_ports(client);
++ snd_seq_queue_client_leave(client->number);
+ snd_use_lock_sync(&client->use_lock);
+ snd_seq_queue_client_termination(client->number);
+ if (client->pool)
+diff --git a/sound/core/seq/seq_prioq.c b/sound/core/seq/seq_prioq.c
+index bc1c8488fc2a..2bc6759e4adc 100644
+--- a/sound/core/seq/seq_prioq.c
++++ b/sound/core/seq/seq_prioq.c
+@@ -87,7 +87,7 @@ void snd_seq_prioq_delete(struct snd_seq_prioq **fifo)
+ if (f->cells > 0) {
+ /* drain prioQ */
+ while (f->cells > 0)
+- snd_seq_cell_free(snd_seq_prioq_cell_out(f));
++ snd_seq_cell_free(snd_seq_prioq_cell_out(f, NULL));
+ }
+
+ kfree(f);
+@@ -214,8 +214,18 @@ int snd_seq_prioq_cell_in(struct snd_seq_prioq * f,
+ return 0;
+ }
+
++/* return 1 if the current time >= event timestamp */
++static int event_is_ready(struct snd_seq_event *ev, void *current_time)
++{
++ if ((ev->flags & SNDRV_SEQ_TIME_STAMP_MASK) == SNDRV_SEQ_TIME_STAMP_TICK)
++ return snd_seq_compare_tick_time(current_time, &ev->time.tick);
++ else
++ return snd_seq_compare_real_time(current_time, &ev->time.time);
++}
++
+ /* dequeue cell from prioq */
+-struct snd_seq_event_cell *snd_seq_prioq_cell_out(struct snd_seq_prioq *f)
++struct snd_seq_event_cell *snd_seq_prioq_cell_out(struct snd_seq_prioq *f,
++ void *current_time)
+ {
+ struct snd_seq_event_cell *cell;
+ unsigned long flags;
+@@ -227,6 +237,8 @@ struct snd_seq_event_cell *snd_seq_prioq_cell_out(struct snd_seq_prioq *f)
+ spin_lock_irqsave(&f->lock, flags);
+
+ cell = f->head;
++ if (cell && current_time && !event_is_ready(&cell->event, current_time))
++ cell = NULL;
+ if (cell) {
+ f->head = cell->next;
+
+@@ -252,18 +264,6 @@ int snd_seq_prioq_avail(struct snd_seq_prioq * f)
+ return f->cells;
+ }
+
+-
+-/* peek at cell at the head of the prioq */
+-struct snd_seq_event_cell *snd_seq_prioq_cell_peek(struct snd_seq_prioq * f)
+-{
+- if (f == NULL) {
+- pr_debug("ALSA: seq: snd_seq_prioq_cell_in() called with NULL prioq\n");
+- return NULL;
+- }
+- return f->head;
+-}
+-
+-
+ static inline int prioq_match(struct snd_seq_event_cell *cell,
+ int client, int timestamp)
+ {
+diff --git a/sound/core/seq/seq_prioq.h b/sound/core/seq/seq_prioq.h
+index d38bb78d9345..2c315ca10fc4 100644
+--- a/sound/core/seq/seq_prioq.h
++++ b/sound/core/seq/seq_prioq.h
+@@ -44,14 +44,12 @@ void snd_seq_prioq_delete(struct snd_seq_prioq **fifo);
+ int snd_seq_prioq_cell_in(struct snd_seq_prioq *f, struct snd_seq_event_cell *cell);
+
+ /* dequeue cell from prioq */
+-struct snd_seq_event_cell *snd_seq_prioq_cell_out(struct snd_seq_prioq *f);
++struct snd_seq_event_cell *snd_seq_prioq_cell_out(struct snd_seq_prioq *f,
++ void *current_time);
+
+ /* return number of events available in prioq */
+ int snd_seq_prioq_avail(struct snd_seq_prioq *f);
+
+-/* peek at cell at the head of the prioq */
+-struct snd_seq_event_cell *snd_seq_prioq_cell_peek(struct snd_seq_prioq *f);
+-
+ /* client left queue */
+ void snd_seq_prioq_leave(struct snd_seq_prioq *f, int client, int timestamp);
+
+diff --git a/sound/core/seq/seq_queue.c b/sound/core/seq/seq_queue.c
+index 79e0c5604ef8..1a6dc4ff44a6 100644
+--- a/sound/core/seq/seq_queue.c
++++ b/sound/core/seq/seq_queue.c
+@@ -277,30 +277,20 @@ void snd_seq_check_queue(struct snd_seq_queue *q, int atomic, int hop)
+
+ __again:
+ /* Process tick queue... */
+- while ((cell = snd_seq_prioq_cell_peek(q->tickq)) != NULL) {
+- if (snd_seq_compare_tick_time(&q->timer->tick.cur_tick,
+- &cell->event.time.tick)) {
+- cell = snd_seq_prioq_cell_out(q->tickq);
+- if (cell)
+- snd_seq_dispatch_event(cell, atomic, hop);
+- } else {
+- /* event remains in the queue */
++ for (;;) {
++ cell = snd_seq_prioq_cell_out(q->tickq,
++ &q->timer->tick.cur_tick);
++ if (!cell)
+ break;
+- }
++ snd_seq_dispatch_event(cell, atomic, hop);
+ }
+
+-
+ /* Process time queue... */
+- while ((cell = snd_seq_prioq_cell_peek(q->timeq)) != NULL) {
+- if (snd_seq_compare_real_time(&q->timer->cur_time,
+- &cell->event.time.time)) {
+- cell = snd_seq_prioq_cell_out(q->timeq);
+- if (cell)
+- snd_seq_dispatch_event(cell, atomic, hop);
+- } else {
+- /* event remains in the queue */
++ for (;;) {
++ cell = snd_seq_prioq_cell_out(q->timeq, &q->timer->cur_time);
++ if (!cell)
+ break;
+- }
++ snd_seq_dispatch_event(cell, atomic, hop);
+ }
+
+ /* free lock */
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 96143df19b21..d5017adf9feb 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -181,11 +181,15 @@ static const struct kernel_param_ops param_ops_xint = {
+ };
+ #define param_check_xint param_check_int
+
+-static int power_save = -1;
++static int power_save = CONFIG_SND_HDA_POWER_SAVE_DEFAULT;
+ module_param(power_save, xint, 0644);
+ MODULE_PARM_DESC(power_save, "Automatic power-saving timeout "
+ "(in second, 0 = disable).");
+
++static bool pm_blacklist = true;
++module_param(pm_blacklist, bool, 0644);
++MODULE_PARM_DESC(pm_blacklist, "Enable power-management blacklist");
++
+ /* reset the HD-audio controller in power save mode.
+ * this may give more power-saving, but will take longer time to
+ * wake up.
+@@ -2300,10 +2304,9 @@ static int azx_probe_continue(struct azx *chip)
+
+ val = power_save;
+ #ifdef CONFIG_PM
+- if (val == -1) {
++ if (pm_blacklist) {
+ const struct snd_pci_quirk *q;
+
+- val = CONFIG_SND_HDA_POWER_SAVE_DEFAULT;
+ q = snd_pci_quirk_lookup(chip->pci, power_save_blacklist);
+ if (q && val) {
+ dev_info(chip->card->dev, "device %04x:%04x is on the power_save blacklist, forcing power_save to 0\n",
+diff --git a/tools/testing/selftests/x86/entry_from_vm86.c b/tools/testing/selftests/x86/entry_from_vm86.c
+index 361466a2eaef..ade443a88421 100644
+--- a/tools/testing/selftests/x86/entry_from_vm86.c
++++ b/tools/testing/selftests/x86/entry_from_vm86.c
+@@ -95,6 +95,10 @@ asm (
+ "int3\n\t"
+ "vmcode_int80:\n\t"
+ "int $0x80\n\t"
++ "vmcode_popf_hlt:\n\t"
++ "push %ax\n\t"
++ "popf\n\t"
++ "hlt\n\t"
+ "vmcode_umip:\n\t"
+ /* addressing via displacements */
+ "smsw (2052)\n\t"
+@@ -124,8 +128,8 @@ asm (
+
+ extern unsigned char vmcode[], end_vmcode[];
+ extern unsigned char vmcode_bound[], vmcode_sysenter[], vmcode_syscall[],
+- vmcode_sti[], vmcode_int3[], vmcode_int80[], vmcode_umip[],
+- vmcode_umip_str[], vmcode_umip_sldt[];
++ vmcode_sti[], vmcode_int3[], vmcode_int80[], vmcode_popf_hlt[],
++ vmcode_umip[], vmcode_umip_str[], vmcode_umip_sldt[];
+
+ /* Returns false if the test was skipped. */
+ static bool do_test(struct vm86plus_struct *v86, unsigned long eip,
+@@ -175,7 +179,7 @@ static bool do_test(struct vm86plus_struct *v86, unsigned long eip,
+ (VM86_TYPE(ret) == rettype && VM86_ARG(ret) == retarg)) {
+ printf("[OK]\tReturned correctly\n");
+ } else {
+- printf("[FAIL]\tIncorrect return reason\n");
++ printf("[FAIL]\tIncorrect return reason (started at eip = 0x%lx, ended at eip = 0x%lx)\n", eip, v86->regs.eip);
+ nerrs++;
+ }
+
+@@ -264,6 +268,9 @@ int main(void)
+ v86.regs.ds = load_addr / 16;
+ v86.regs.es = load_addr / 16;
+
++ /* Use the end of the page as our stack. */
++ v86.regs.esp = 4096;
++
+ assert((v86.regs.cs & 3) == 0); /* Looks like RPL = 0 */
+
+ /* #BR -- should deliver SIG??? */
+@@ -295,6 +302,23 @@ int main(void)
+ v86.regs.eflags &= ~X86_EFLAGS_IF;
+ do_test(&v86, vmcode_sti - vmcode, VM86_STI, 0, "STI with VIP set");
+
++ /* POPF with VIP set but IF clear: should not trap */
++ v86.regs.eflags = X86_EFLAGS_VIP;
++ v86.regs.eax = 0;
++ do_test(&v86, vmcode_popf_hlt - vmcode, VM86_UNKNOWN, 0, "POPF with VIP set and IF clear");
++
++ /* POPF with VIP set and IF set: should trap */
++ v86.regs.eflags = X86_EFLAGS_VIP;
++ v86.regs.eax = X86_EFLAGS_IF;
++ do_test(&v86, vmcode_popf_hlt - vmcode, VM86_STI, 0, "POPF with VIP and IF set");
++
++ /* POPF with VIP clear and IF set: should not trap */
++ v86.regs.eflags = 0;
++ v86.regs.eax = X86_EFLAGS_IF;
++ do_test(&v86, vmcode_popf_hlt - vmcode, VM86_UNKNOWN, 0, "POPF with VIP clear and IF set");
++
++ v86.regs.eflags = 0;
++
+ /* INT3 -- should cause #BP */
+ do_test(&v86, vmcode_int3 - vmcode, VM86_TRAP, 3, "INT3");
+
+@@ -318,7 +342,7 @@ int main(void)
+ clearhandler(SIGSEGV);
+
+ /* Make sure nothing explodes if we fork. */
+- if (fork() > 0)
++ if (fork() == 0)
+ return 0;
+
+ return (nerrs == 0 ? 0 : 1);
+diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
+index cc29a8148328..811631a1296c 100644
+--- a/virt/kvm/arm/arch_timer.c
++++ b/virt/kvm/arm/arch_timer.c
+@@ -589,6 +589,7 @@ void kvm_timer_sync_hwstate(struct kvm_vcpu *vcpu)
+
+ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu)
+ {
++ struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
+ struct arch_timer_context *vtimer = vcpu_vtimer(vcpu);
+ struct arch_timer_context *ptimer = vcpu_ptimer(vcpu);
+
+@@ -602,6 +603,9 @@ int kvm_timer_vcpu_reset(struct kvm_vcpu *vcpu)
+ ptimer->cnt_ctl = 0;
+ kvm_timer_update_state(vcpu);
+
++ if (timer->enabled && irqchip_in_kernel(vcpu->kvm))
++ kvm_vgic_reset_mapped_irq(vcpu, vtimer->irq.irq);
++
+ return 0;
+ }
+
+@@ -773,7 +777,7 @@ int kvm_timer_hyp_init(bool has_gic)
+ }
+ }
+
+- kvm_info("virtual timer IRQ%d\n", host_vtimer_irq);
++ kvm_debug("virtual timer IRQ%d\n", host_vtimer_irq);
+
+ cpuhp_setup_state(CPUHP_AP_KVM_ARM_TIMER_STARTING,
+ "kvm/arm/timer:starting", kvm_timer_starting_cpu,
+diff --git a/virt/kvm/arm/hyp/vgic-v3-sr.c b/virt/kvm/arm/hyp/vgic-v3-sr.c
+index f5c3d6d7019e..b89ce5432214 100644
+--- a/virt/kvm/arm/hyp/vgic-v3-sr.c
++++ b/virt/kvm/arm/hyp/vgic-v3-sr.c
+@@ -215,7 +215,8 @@ void __hyp_text __vgic_v3_save_state(struct kvm_vcpu *vcpu)
+ * are now visible to the system register interface.
+ */
+ if (!cpu_if->vgic_sre) {
+- dsb(st);
++ dsb(sy);
++ isb();
+ cpu_if->vgic_vmcr = read_gicreg(ICH_VMCR_EL2);
+ }
+
+diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
+index 9dea96380339..b69798a7880e 100644
+--- a/virt/kvm/arm/mmu.c
++++ b/virt/kvm/arm/mmu.c
+@@ -1760,9 +1760,9 @@ int kvm_mmu_init(void)
+ */
+ BUG_ON((hyp_idmap_start ^ (hyp_idmap_end - 1)) & PAGE_MASK);
+
+- kvm_info("IDMAP page: %lx\n", hyp_idmap_start);
+- kvm_info("HYP VA range: %lx:%lx\n",
+- kern_hyp_va(PAGE_OFFSET), kern_hyp_va(~0UL));
++ kvm_debug("IDMAP page: %lx\n", hyp_idmap_start);
++ kvm_debug("HYP VA range: %lx:%lx\n",
++ kern_hyp_va(PAGE_OFFSET), kern_hyp_va(~0UL));
+
+ if (hyp_idmap_start >= kern_hyp_va(PAGE_OFFSET) &&
+ hyp_idmap_start < kern_hyp_va(~0UL) &&
+diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
+index 80897102da26..028d2ba05b7b 100644
+--- a/virt/kvm/arm/vgic/vgic-v2.c
++++ b/virt/kvm/arm/vgic/vgic-v2.c
+@@ -37,6 +37,13 @@ void vgic_v2_init_lrs(void)
+ vgic_v2_write_lr(i, 0);
+ }
+
++void vgic_v2_set_npie(struct kvm_vcpu *vcpu)
++{
++ struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2;
++
++ cpuif->vgic_hcr |= GICH_HCR_NPIE;
++}
++
+ void vgic_v2_set_underflow(struct kvm_vcpu *vcpu)
+ {
+ struct vgic_v2_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v2;
+@@ -64,7 +71,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
+ int lr;
+ unsigned long flags;
+
+- cpuif->vgic_hcr &= ~GICH_HCR_UIE;
++ cpuif->vgic_hcr &= ~(GICH_HCR_UIE | GICH_HCR_NPIE);
+
+ for (lr = 0; lr < vgic_cpu->used_lrs; lr++) {
+ u32 val = cpuif->vgic_lr[lr];
+@@ -381,7 +388,7 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
+ kvm_vgic_global_state.type = VGIC_V2;
+ kvm_vgic_global_state.max_gic_vcpus = VGIC_V2_MAX_CPUS;
+
+- kvm_info("vgic-v2@%llx\n", info->vctrl.start);
++ kvm_debug("vgic-v2@%llx\n", info->vctrl.start);
+
+ return 0;
+ out:
+diff --git a/virt/kvm/arm/vgic/vgic-v3.c b/virt/kvm/arm/vgic/vgic-v3.c
+index f47e8481fa45..f667c7e86b8f 100644
+--- a/virt/kvm/arm/vgic/vgic-v3.c
++++ b/virt/kvm/arm/vgic/vgic-v3.c
+@@ -26,6 +26,13 @@ static bool group1_trap;
+ static bool common_trap;
+ static bool gicv4_enable;
+
++void vgic_v3_set_npie(struct kvm_vcpu *vcpu)
++{
++ struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3;
++
++ cpuif->vgic_hcr |= ICH_HCR_NPIE;
++}
++
+ void vgic_v3_set_underflow(struct kvm_vcpu *vcpu)
+ {
+ struct vgic_v3_cpu_if *cpuif = &vcpu->arch.vgic_cpu.vgic_v3;
+@@ -47,7 +54,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
+ int lr;
+ unsigned long flags;
+
+- cpuif->vgic_hcr &= ~ICH_HCR_UIE;
++ cpuif->vgic_hcr &= ~(ICH_HCR_UIE | ICH_HCR_NPIE);
+
+ for (lr = 0; lr < vgic_cpu->used_lrs; lr++) {
+ u64 val = cpuif->vgic_lr[lr];
+diff --git a/virt/kvm/arm/vgic/vgic.c b/virt/kvm/arm/vgic/vgic.c
+index ecb8e25f5fe5..04816ecdf9ce 100644
+--- a/virt/kvm/arm/vgic/vgic.c
++++ b/virt/kvm/arm/vgic/vgic.c
+@@ -460,6 +460,32 @@ int kvm_vgic_map_phys_irq(struct kvm_vcpu *vcpu, unsigned int host_irq,
+ return ret;
+ }
+
++/**
++ * kvm_vgic_reset_mapped_irq - Reset a mapped IRQ
++ * @vcpu: The VCPU pointer
++ * @vintid: The INTID of the interrupt
++ *
++ * Reset the active and pending states of a mapped interrupt. Kernel
++ * subsystems injecting mapped interrupts should reset their interrupt lines
++ * when we are doing a reset of the VM.
++ */
++void kvm_vgic_reset_mapped_irq(struct kvm_vcpu *vcpu, u32 vintid)
++{
++ struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, vintid);
++ unsigned long flags;
++
++ if (!irq->hw)
++ goto out;
++
++ spin_lock_irqsave(&irq->irq_lock, flags);
++ irq->active = false;
++ irq->pending_latch = false;
++ irq->line_level = false;
++ spin_unlock_irqrestore(&irq->irq_lock, flags);
++out:
++ vgic_put_irq(vcpu->kvm, irq);
++}
++
+ int kvm_vgic_unmap_phys_irq(struct kvm_vcpu *vcpu, unsigned int vintid)
+ {
+ struct vgic_irq *irq;
+@@ -649,22 +675,37 @@ static inline void vgic_set_underflow(struct kvm_vcpu *vcpu)
+ vgic_v3_set_underflow(vcpu);
+ }
+
++static inline void vgic_set_npie(struct kvm_vcpu *vcpu)
++{
++ if (kvm_vgic_global_state.type == VGIC_V2)
++ vgic_v2_set_npie(vcpu);
++ else
++ vgic_v3_set_npie(vcpu);
++}
++
+ /* Requires the ap_list_lock to be held. */
+-static int compute_ap_list_depth(struct kvm_vcpu *vcpu)
++static int compute_ap_list_depth(struct kvm_vcpu *vcpu,
++ bool *multi_sgi)
+ {
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+ struct vgic_irq *irq;
+ int count = 0;
+
++ *multi_sgi = false;
++
+ DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
+
+ list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
+ spin_lock(&irq->irq_lock);
+ /* GICv2 SGIs can count for more than one... */
+- if (vgic_irq_is_sgi(irq->intid) && irq->source)
+- count += hweight8(irq->source);
+- else
++ if (vgic_irq_is_sgi(irq->intid) && irq->source) {
++ int w = hweight8(irq->source);
++
++ count += w;
++ *multi_sgi |= (w > 1);
++ } else {
+ count++;
++ }
+ spin_unlock(&irq->irq_lock);
+ }
+ return count;
+@@ -675,28 +716,43 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
+ {
+ struct vgic_cpu *vgic_cpu = &vcpu->arch.vgic_cpu;
+ struct vgic_irq *irq;
+- int count = 0;
++ int count;
++ bool npie = false;
++ bool multi_sgi;
++ u8 prio = 0xff;
+
+ DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
+
+- if (compute_ap_list_depth(vcpu) > kvm_vgic_global_state.nr_lr)
++ count = compute_ap_list_depth(vcpu, &multi_sgi);
++ if (count > kvm_vgic_global_state.nr_lr || multi_sgi)
+ vgic_sort_ap_list(vcpu);
+
++ count = 0;
++
+ list_for_each_entry(irq, &vgic_cpu->ap_list_head, ap_list) {
+ spin_lock(&irq->irq_lock);
+
+- if (unlikely(vgic_target_oracle(irq) != vcpu))
+- goto next;
+-
+ /*
+- * If we get an SGI with multiple sources, try to get
+- * them in all at once.
++ * If we have multi-SGIs in the pipeline, we need to
++ * guarantee that they are all seen before any IRQ of
++ * lower priority. In that case, we need to filter out
++ * these interrupts by exiting early. This is easy as
++ * the AP list has been sorted already.
+ */
+- do {
++ if (multi_sgi && irq->priority > prio) {
++ spin_unlock(&irq->irq_lock);
++ break;
++ }
++
++ if (likely(vgic_target_oracle(irq) == vcpu)) {
+ vgic_populate_lr(vcpu, irq, count++);
+- } while (irq->source && count < kvm_vgic_global_state.nr_lr);
+
+-next:
++ if (irq->source) {
++ npie = true;
++ prio = irq->priority;
++ }
++ }
++
+ spin_unlock(&irq->irq_lock);
+
+ if (count == kvm_vgic_global_state.nr_lr) {
+@@ -707,6 +763,9 @@ static void vgic_flush_lr_state(struct kvm_vcpu *vcpu)
+ }
+ }
+
++ if (npie)
++ vgic_set_npie(vcpu);
++
+ vcpu->arch.vgic_cpu.used_lrs = count;
+
+ /* Nuke remaining LRs */
+diff --git a/virt/kvm/arm/vgic/vgic.h b/virt/kvm/arm/vgic/vgic.h
+index efbcf8f96f9c..d434ebd67599 100644
+--- a/virt/kvm/arm/vgic/vgic.h
++++ b/virt/kvm/arm/vgic/vgic.h
+@@ -151,6 +151,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu);
+ void vgic_v2_populate_lr(struct kvm_vcpu *vcpu, struct vgic_irq *irq, int lr);
+ void vgic_v2_clear_lr(struct kvm_vcpu *vcpu, int lr);
+ void vgic_v2_set_underflow(struct kvm_vcpu *vcpu);
++void vgic_v2_set_npie(struct kvm_vcpu *vcpu);
+ int vgic_v2_has_attr_regs(struct kvm_device *dev, struct kvm_device_attr *attr);
+ int vgic_v2_dist_uaccess(struct kvm_vcpu *vcpu, bool is_write,
+ int offset, u32 *val);
+@@ -180,6 +181,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu);
+ void vgic_v3_populate_lr(struct kvm_vcpu *vcpu, struct vgic_irq *irq, int lr);
+ void vgic_v3_clear_lr(struct kvm_vcpu *vcpu, int lr);
+ void vgic_v3_set_underflow(struct kvm_vcpu *vcpu);
++void vgic_v3_set_npie(struct kvm_vcpu *vcpu);
+ void vgic_v3_set_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcr);
+ void vgic_v3_get_vmcr(struct kvm_vcpu *vcpu, struct vgic_vmcr *vmcr);
+ void vgic_v3_enable(struct kvm_vcpu *vcpu);
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-03-25 13:37 Mike Pagano
From: Mike Pagano @ 2018-03-25 13:37 UTC (permalink / raw
To: gentoo-commits
commit: bbfab546bd200ea52bed26d55ab0cdc7e92c0a82
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 25 13:37:05 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Mar 25 13:37:05 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bbfab546
Linux patch 4.15.13
0000_README | 4 +
1012_linux-4.15.13.patch | 2806 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2810 insertions(+)
diff --git a/0000_README b/0000_README
index 2ad5c9b..2b51c4d 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch: 1011_linux-4.15.12.patch
From: http://www.kernel.org
Desc: Linux 4.15.12
+Patch: 1012_linux-4.15.13.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.13
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1012_linux-4.15.13.patch b/1012_linux-4.15.13.patch
new file mode 100644
index 0000000..5685162
--- /dev/null
+++ b/1012_linux-4.15.13.patch
@@ -0,0 +1,2806 @@
+diff --git a/Documentation/devicetree/bindings/display/panel/toppoly,td028ttec1.txt b/Documentation/devicetree/bindings/display/panel/toppoly,td028ttec1.txt
+deleted file mode 100644
+index 7175dc3740ac..000000000000
+--- a/Documentation/devicetree/bindings/display/panel/toppoly,td028ttec1.txt
++++ /dev/null
+@@ -1,30 +0,0 @@
+-Toppoly TD028TTEC1 Panel
+-========================
+-
+-Required properties:
+-- compatible: "toppoly,td028ttec1"
+-
+-Optional properties:
+-- label: a symbolic name for the panel
+-
+-Required nodes:
+-- Video port for DPI input
+-
+-Example
+--------
+-
+-lcd-panel: td028ttec1@0 {
+- compatible = "toppoly,td028ttec1";
+- reg = <0>;
+- spi-max-frequency = <100000>;
+- spi-cpol;
+- spi-cpha;
+-
+- label = "lcd";
+- port {
+- lcd_in: endpoint {
+- remote-endpoint = <&dpi_out>;
+- };
+- };
+-};
+-
+diff --git a/Documentation/devicetree/bindings/display/panel/toshiba,lt089ac29000.txt b/Documentation/devicetree/bindings/display/panel/toshiba,lt089ac29000.txt
+index 4c0caaf246c9..89826116628c 100644
+--- a/Documentation/devicetree/bindings/display/panel/toshiba,lt089ac29000.txt
++++ b/Documentation/devicetree/bindings/display/panel/toshiba,lt089ac29000.txt
+@@ -1,7 +1,7 @@
+ Toshiba 8.9" WXGA (1280x768) TFT LCD panel
+
+ Required properties:
+-- compatible: should be "toshiba,lt089ac29000.txt"
++- compatible: should be "toshiba,lt089ac29000"
+ - power-supply: as specified in the base binding
+
+ This binding is compatible with the simple-panel binding, which is specified
+diff --git a/Documentation/devicetree/bindings/display/panel/tpo,td028ttec1.txt b/Documentation/devicetree/bindings/display/panel/tpo,td028ttec1.txt
+new file mode 100644
+index 000000000000..ed34253d9fb1
+--- /dev/null
++++ b/Documentation/devicetree/bindings/display/panel/tpo,td028ttec1.txt
+@@ -0,0 +1,30 @@
++Toppoly TD028TTEC1 Panel
++========================
++
++Required properties:
++- compatible: "tpo,td028ttec1"
++
++Optional properties:
++- label: a symbolic name for the panel
++
++Required nodes:
++- Video port for DPI input
++
++Example
++-------
++
++lcd-panel: td028ttec1@0 {
++ compatible = "tpo,td028ttec1";
++ reg = <0>;
++ spi-max-frequency = <100000>;
++ spi-cpol;
++ spi-cpha;
++
++ label = "lcd";
++ port {
++ lcd_in: endpoint {
++ remote-endpoint = <&dpi_out>;
++ };
++ };
++};
++
+diff --git a/Makefile b/Makefile
+index 2e6ba1553dff..82245e654d10 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/alpha/kernel/console.c b/arch/alpha/kernel/console.c
+index 8e9a41966881..5476279329a6 100644
+--- a/arch/alpha/kernel/console.c
++++ b/arch/alpha/kernel/console.c
+@@ -21,6 +21,7 @@
+ struct pci_controller *pci_vga_hose;
+ static struct resource alpha_vga = {
+ .name = "alpha-vga+",
++ .flags = IORESOURCE_IO,
+ .start = 0x3C0,
+ .end = 0x3DF
+ };
+diff --git a/arch/arm/boot/dts/aspeed-ast2500-evb.dts b/arch/arm/boot/dts/aspeed-ast2500-evb.dts
+index 602bc10fdaf4..7472ed355d4b 100644
+--- a/arch/arm/boot/dts/aspeed-ast2500-evb.dts
++++ b/arch/arm/boot/dts/aspeed-ast2500-evb.dts
+@@ -16,7 +16,7 @@
+ bootargs = "console=ttyS4,115200 earlyprintk";
+ };
+
+- memory {
++ memory@80000000 {
+ reg = <0x80000000 0x20000000>;
+ };
+ };
+diff --git a/drivers/bluetooth/btqcomsmd.c b/drivers/bluetooth/btqcomsmd.c
+index 663bed63b871..2c9a5fc9137d 100644
+--- a/drivers/bluetooth/btqcomsmd.c
++++ b/drivers/bluetooth/btqcomsmd.c
+@@ -88,7 +88,8 @@ static int btqcomsmd_send(struct hci_dev *hdev, struct sk_buff *skb)
+ break;
+ }
+
+- kfree_skb(skb);
++ if (!ret)
++ kfree_skb(skb);
+
+ return ret;
+ }
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index bbd7db7384e6..05ec530b8a3a 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -932,6 +932,9 @@ static int qca_setup(struct hci_uart *hu)
+ if (!ret) {
+ set_bit(STATE_IN_BAND_SLEEP_ENABLED, &qca->flags);
+ qca_debugfs_init(hdev);
++ } else if (ret == -ENOENT) {
++ /* No patch/nvm-config found, run with original fw/config */
++ ret = 0;
+ }
+
+ /* Setup bdaddr */
+diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
+index 657b8770b6b9..91bb98c42a1c 100644
+--- a/drivers/char/hw_random/core.c
++++ b/drivers/char/hw_random/core.c
+@@ -306,6 +306,10 @@ static int enable_best_rng(void)
+ ret = ((new_rng == current_rng) ? 0 : set_current_rng(new_rng));
+ if (!ret)
+ cur_rng_set_by_user = 0;
++ } else {
++ drop_current_rng();
++ cur_rng_set_by_user = 0;
++ ret = 0;
+ }
+
+ return ret;
+diff --git a/drivers/char/mem.c b/drivers/char/mem.c
+index 6aefe5370e5b..052011bcf100 100644
+--- a/drivers/char/mem.c
++++ b/drivers/char/mem.c
+@@ -107,6 +107,8 @@ static ssize_t read_mem(struct file *file, char __user *buf,
+ phys_addr_t p = *ppos;
+ ssize_t read, sz;
+ void *ptr;
++ char *bounce;
++ int err;
+
+ if (p != *ppos)
+ return 0;
+@@ -129,15 +131,22 @@ static ssize_t read_mem(struct file *file, char __user *buf,
+ }
+ #endif
+
++ bounce = kmalloc(PAGE_SIZE, GFP_KERNEL);
++ if (!bounce)
++ return -ENOMEM;
++
+ while (count > 0) {
+ unsigned long remaining;
+ int allowed;
+
+ sz = size_inside_page(p, count);
+
++ err = -EPERM;
+ allowed = page_is_allowed(p >> PAGE_SHIFT);
+ if (!allowed)
+- return -EPERM;
++ goto failed;
++
++ err = -EFAULT;
+ if (allowed == 2) {
+ /* Show zeros for restricted memory. */
+ remaining = clear_user(buf, sz);
+@@ -149,24 +158,32 @@ static ssize_t read_mem(struct file *file, char __user *buf,
+ */
+ ptr = xlate_dev_mem_ptr(p);
+ if (!ptr)
+- return -EFAULT;
+-
+- remaining = copy_to_user(buf, ptr, sz);
++ goto failed;
+
++ err = probe_kernel_read(bounce, ptr, sz);
+ unxlate_dev_mem_ptr(p, ptr);
++ if (err)
++ goto failed;
++
++ remaining = copy_to_user(buf, bounce, sz);
+ }
+
+ if (remaining)
+- return -EFAULT;
++ goto failed;
+
+ buf += sz;
+ p += sz;
+ count -= sz;
+ read += sz;
+ }
++ kfree(bounce);
+
+ *ppos += read;
+ return read;
++
++failed:
++ kfree(bounce);
++ return err;
+ }
+
+ static ssize_t write_mem(struct file *file, const char __user *buf,
+diff --git a/drivers/clk/at91/pmc.c b/drivers/clk/at91/pmc.c
+index 775af473fe11..5c2b26de303e 100644
+--- a/drivers/clk/at91/pmc.c
++++ b/drivers/clk/at91/pmc.c
+@@ -107,10 +107,20 @@ static int pmc_suspend(void)
+ return 0;
+ }
+
++static bool pmc_ready(unsigned int mask)
++{
++ unsigned int status;
++
++ regmap_read(pmcreg, AT91_PMC_SR, &status);
++
++ return ((status & mask) == mask) ? 1 : 0;
++}
++
+ static void pmc_resume(void)
+ {
+- int i, ret = 0;
++ int i;
+ u32 tmp;
++ u32 mask = AT91_PMC_MCKRDY | AT91_PMC_LOCKA;
+
+ regmap_read(pmcreg, AT91_PMC_MCKR, &tmp);
+ if (pmc_cache.mckr != tmp)
+@@ -134,13 +144,11 @@ static void pmc_resume(void)
+ AT91_PMC_PCR_CMD);
+ }
+
+- if (pmc_cache.uckr & AT91_PMC_UPLLEN) {
+- ret = regmap_read_poll_timeout(pmcreg, AT91_PMC_SR, tmp,
+- !(tmp & AT91_PMC_LOCKU),
+- 10, 5000);
+- if (ret)
+- pr_crit("USB PLL didn't lock when resuming\n");
+- }
++ if (pmc_cache.uckr & AT91_PMC_UPLLEN)
++ mask |= AT91_PMC_LOCKU;
++
++ while (!pmc_ready(mask))
++ cpu_relax();
+ }
+
+ static struct syscore_ops pmc_syscore_ops = {
+diff --git a/drivers/clk/clk-axi-clkgen.c b/drivers/clk/clk-axi-clkgen.c
+index 5e918e7afaba..95a6e9834392 100644
+--- a/drivers/clk/clk-axi-clkgen.c
++++ b/drivers/clk/clk-axi-clkgen.c
+@@ -40,6 +40,10 @@
+ #define MMCM_REG_FILTER1 0x4e
+ #define MMCM_REG_FILTER2 0x4f
+
++#define MMCM_CLKOUT_NOCOUNT BIT(6)
++
++#define MMCM_CLK_DIV_NOCOUNT BIT(12)
++
+ struct axi_clkgen {
+ void __iomem *base;
+ struct clk_hw clk_hw;
+@@ -315,12 +319,27 @@ static unsigned long axi_clkgen_recalc_rate(struct clk_hw *clk_hw,
+ unsigned int reg;
+ unsigned long long tmp;
+
+- axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLKOUT0_1, &reg);
+- dout = (reg & 0x3f) + ((reg >> 6) & 0x3f);
++ axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLKOUT0_2, &reg);
++ if (reg & MMCM_CLKOUT_NOCOUNT) {
++ dout = 1;
++ } else {
++ axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLKOUT0_1, &reg);
++ dout = (reg & 0x3f) + ((reg >> 6) & 0x3f);
++ }
++
+ axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLK_DIV, &reg);
+- d = (reg & 0x3f) + ((reg >> 6) & 0x3f);
+- axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLK_FB1, &reg);
+- m = (reg & 0x3f) + ((reg >> 6) & 0x3f);
++ if (reg & MMCM_CLK_DIV_NOCOUNT)
++ d = 1;
++ else
++ d = (reg & 0x3f) + ((reg >> 6) & 0x3f);
++
++ axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLK_FB2, &reg);
++ if (reg & MMCM_CLKOUT_NOCOUNT) {
++ m = 1;
++ } else {
++ axi_clkgen_mmcm_read(axi_clkgen, MMCM_REG_CLK_FB1, &reg);
++ m = (reg & 0x3f) + ((reg >> 6) & 0x3f);
++ }
+
+ if (d == 0 || dout == 0)
+ return 0;
+diff --git a/drivers/clk/clk-si5351.c b/drivers/clk/clk-si5351.c
+index 20d90769cced..653b0f38d475 100644
+--- a/drivers/clk/clk-si5351.c
++++ b/drivers/clk/clk-si5351.c
+@@ -72,7 +72,7 @@ static const char * const si5351_input_names[] = {
+ "xtal", "clkin"
+ };
+ static const char * const si5351_pll_names[] = {
+- "plla", "pllb", "vxco"
++ "si5351_plla", "si5351_pllb", "si5351_vxco"
+ };
+ static const char * const si5351_msynth_names[] = {
+ "ms0", "ms1", "ms2", "ms3", "ms4", "ms5", "ms6", "ms7"
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index b56c11f51baf..7782c3e4abba 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -1642,16 +1642,37 @@ static void clk_change_rate(struct clk_core *core)
+ clk_pm_runtime_put(core);
+ }
+
++static unsigned long clk_core_req_round_rate_nolock(struct clk_core *core,
++ unsigned long req_rate)
++{
++ int ret;
++ struct clk_rate_request req;
++
++ lockdep_assert_held(&prepare_lock);
++
++ if (!core)
++ return 0;
++
++ clk_core_get_boundaries(core, &req.min_rate, &req.max_rate);
++ req.rate = req_rate;
++
++ ret = clk_core_round_rate_nolock(core, &req);
++
++ return ret ? 0 : req.rate;
++}
++
+ static int clk_core_set_rate_nolock(struct clk_core *core,
+ unsigned long req_rate)
+ {
+ struct clk_core *top, *fail_clk;
+- unsigned long rate = req_rate;
++ unsigned long rate;
+ int ret = 0;
+
+ if (!core)
+ return 0;
+
++ rate = clk_core_req_round_rate_nolock(core, req_rate);
++
+ /* bail early if nothing to do */
+ if (rate == clk_core_get_rate_nolock(core))
+ return 0;
+@@ -1660,7 +1681,7 @@ static int clk_core_set_rate_nolock(struct clk_core *core,
+ return -EBUSY;
+
+ /* calculate new rates and get the topmost changed clock */
+- top = clk_calc_new_rates(core, rate);
++ top = clk_calc_new_rates(core, req_rate);
+ if (!top)
+ return -EINVAL;
+
+@@ -2570,6 +2591,21 @@ static int __clk_core_init(struct clk_core *core)
+ rate = 0;
+ core->rate = core->req_rate = rate;
+
++ /*
++ * Enable CLK_IS_CRITICAL clocks so newly added critical clocks
++ * don't get accidentally disabled when walking the orphan tree and
++ * reparenting clocks
++ */
++ if (core->flags & CLK_IS_CRITICAL) {
++ unsigned long flags;
++
++ clk_core_prepare(core);
++
++ flags = clk_enable_lock();
++ clk_core_enable(core);
++ clk_enable_unlock(flags);
++ }
++
+ /*
+ * walk the list of orphan clocks and reparent any that newly finds a
+ * parent.
+@@ -2578,10 +2614,13 @@ static int __clk_core_init(struct clk_core *core)
+ struct clk_core *parent = __clk_init_parent(orphan);
+
+ /*
+- * we could call __clk_set_parent, but that would result in a
+- * redundant call to the .set_rate op, if it exists
++ * We need to use __clk_set_parent_before() and _after() to
++ * to properly migrate any prepare/enable count of the orphan
++ * clock. This is important for CLK_IS_CRITICAL clocks, which
++ * are enabled during init but might not have a parent yet.
+ */
+ if (parent) {
++ /* update the clk tree topology */
+ __clk_set_parent_before(orphan, parent);
+ __clk_set_parent_after(orphan, parent, NULL);
+ __clk_recalc_accuracies(orphan);
+@@ -2600,16 +2639,6 @@ static int __clk_core_init(struct clk_core *core)
+ if (core->ops->init)
+ core->ops->init(core->hw);
+
+- if (core->flags & CLK_IS_CRITICAL) {
+- unsigned long flags;
+-
+- clk_core_prepare(core);
+-
+- flags = clk_enable_lock();
+- clk_core_enable(core);
+- clk_enable_unlock(flags);
+- }
+-
+ kref_init(&core->ref);
+ out:
+ clk_pm_runtime_put(core);
+@@ -2684,7 +2713,13 @@ struct clk *clk_register(struct device *dev, struct clk_hw *hw)
+ ret = -ENOMEM;
+ goto fail_name;
+ }
++
++ if (WARN_ON(!hw->init->ops)) {
++ ret = -EINVAL;
++ goto fail_ops;
++ }
+ core->ops = hw->init->ops;
++
+ if (dev && pm_runtime_enabled(dev))
+ core->dev = dev;
+ if (dev && dev->driver)
+@@ -2746,6 +2781,7 @@ struct clk *clk_register(struct device *dev, struct clk_hw *hw)
+ kfree_const(core->parent_names[i]);
+ kfree(core->parent_names);
+ fail_parent_names:
++fail_ops:
+ kfree_const(core->name);
+ fail_name:
+ kfree(core);
+diff --git a/drivers/cpufreq/longhaul.c b/drivers/cpufreq/longhaul.c
+index d5e27bc7585a..859a62ea6120 100644
+--- a/drivers/cpufreq/longhaul.c
++++ b/drivers/cpufreq/longhaul.c
+@@ -894,7 +894,7 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
+ if ((longhaul_version != TYPE_LONGHAUL_V1) && (scale_voltage != 0))
+ longhaul_setup_voltagescaling();
+
+- policy->cpuinfo.transition_latency = 200000; /* nsec */
++ policy->transition_delay_us = 200000; /* usec */
+
+ return cpufreq_table_validate_and_show(policy, longhaul_table);
+ }
+diff --git a/drivers/crypto/axis/artpec6_crypto.c b/drivers/crypto/axis/artpec6_crypto.c
+index 456278440863..22df6b55e172 100644
+--- a/drivers/crypto/axis/artpec6_crypto.c
++++ b/drivers/crypto/axis/artpec6_crypto.c
+@@ -22,6 +22,7 @@
+ #include <linux/slab.h>
+
+ #include <crypto/aes.h>
++#include <crypto/gcm.h>
+ #include <crypto/internal/aead.h>
+ #include <crypto/internal/hash.h>
+ #include <crypto/internal/skcipher.h>
+@@ -1934,7 +1935,7 @@ static int artpec6_crypto_prepare_aead(struct aead_request *areq)
+
+ memcpy(req_ctx->hw_ctx.J0, areq->iv, crypto_aead_ivsize(cipher));
+ // The HW omits the initial increment of the counter field.
+- crypto_inc(req_ctx->hw_ctx.J0+12, 4);
++ memcpy(req_ctx->hw_ctx.J0 + GCM_AES_IV_SIZE, "\x00\x00\x00\x01", 4);
+
+ ret = artpec6_crypto_setup_out_descr(common, &req_ctx->hw_ctx,
+ sizeof(struct artpec6_crypto_aead_hw_ctx), false, false);
+@@ -2956,7 +2957,7 @@ static struct aead_alg aead_algos[] = {
+ .setkey = artpec6_crypto_aead_set_key,
+ .encrypt = artpec6_crypto_aead_encrypt,
+ .decrypt = artpec6_crypto_aead_decrypt,
+- .ivsize = AES_BLOCK_SIZE,
++ .ivsize = GCM_AES_IV_SIZE,
+ .maxauthsize = AES_BLOCK_SIZE,
+
+ .base = {
+diff --git a/drivers/dma/ti-dma-crossbar.c b/drivers/dma/ti-dma-crossbar.c
+index 7df910e7c348..9272b173c746 100644
+--- a/drivers/dma/ti-dma-crossbar.c
++++ b/drivers/dma/ti-dma-crossbar.c
+@@ -54,7 +54,15 @@ struct ti_am335x_xbar_map {
+
+ static inline void ti_am335x_xbar_write(void __iomem *iomem, int event, u8 val)
+ {
+- writeb_relaxed(val, iomem + event);
++ /*
++ * TPCC_EVT_MUX_60_63 register layout is different than the
++ * rest, in the sense, that event 63 is mapped to lowest byte
++ * and event 60 is mapped to highest, handle it separately.
++ */
++ if (event >= 60 && event <= 63)
++ writeb_relaxed(val, iomem + (63 - event % 4));
++ else
++ writeb_relaxed(val, iomem + event);
+ }
+
+ static void ti_am335x_xbar_free(struct device *dev, void *route_data)
+diff --git a/drivers/dma/xilinx/zynqmp_dma.c b/drivers/dma/xilinx/zynqmp_dma.c
+index 1ee1241ca797..5cc8ed31f26b 100644
+--- a/drivers/dma/xilinx/zynqmp_dma.c
++++ b/drivers/dma/xilinx/zynqmp_dma.c
+@@ -838,7 +838,8 @@ static void zynqmp_dma_chan_remove(struct zynqmp_dma_chan *chan)
+ if (!chan)
+ return;
+
+- devm_free_irq(chan->zdev->dev, chan->irq, chan);
++ if (chan->irq)
++ devm_free_irq(chan->zdev->dev, chan->irq, chan);
+ tasklet_kill(&chan->tasklet);
+ list_del(&chan->common.device_node);
+ clk_disable_unprepare(chan->clk_apb);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+index b18c2b96691f..522a8742a60b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
+@@ -187,6 +187,7 @@ struct amdgpu_ring {
+ uint64_t eop_gpu_addr;
+ u32 doorbell_index;
+ bool use_doorbell;
++ bool use_pollmem;
+ unsigned wptr_offs;
+ unsigned fence_offs;
+ uint64_t current_ctx;
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
+index 6d06f8eb659f..cc4fc2e43b7b 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
+@@ -355,7 +355,7 @@ static uint64_t sdma_v3_0_ring_get_wptr(struct amdgpu_ring *ring)
+ struct amdgpu_device *adev = ring->adev;
+ u32 wptr;
+
+- if (ring->use_doorbell) {
++ if (ring->use_doorbell || ring->use_pollmem) {
+ /* XXX check if swapping is necessary on BE */
+ wptr = ring->adev->wb.wb[ring->wptr_offs] >> 2;
+ } else {
+@@ -380,10 +380,13 @@ static void sdma_v3_0_ring_set_wptr(struct amdgpu_ring *ring)
+
+ if (ring->use_doorbell) {
+ u32 *wb = (u32 *)&adev->wb.wb[ring->wptr_offs];
+-
+ /* XXX check if swapping is necessary on BE */
+ WRITE_ONCE(*wb, (lower_32_bits(ring->wptr) << 2));
+ WDOORBELL32(ring->doorbell_index, lower_32_bits(ring->wptr) << 2);
++ } else if (ring->use_pollmem) {
++ u32 *wb = (u32 *)&adev->wb.wb[ring->wptr_offs];
++
++ WRITE_ONCE(*wb, (lower_32_bits(ring->wptr) << 2));
+ } else {
+ int me = (ring == &ring->adev->sdma.instance[0].ring) ? 0 : 1;
+
+@@ -718,10 +721,14 @@ static int sdma_v3_0_gfx_resume(struct amdgpu_device *adev)
+ WREG32(mmSDMA0_GFX_RB_WPTR_POLL_ADDR_HI + sdma_offsets[i],
+ upper_32_bits(wptr_gpu_addr));
+ wptr_poll_cntl = RREG32(mmSDMA0_GFX_RB_WPTR_POLL_CNTL + sdma_offsets[i]);
+- if (amdgpu_sriov_vf(adev))
+- wptr_poll_cntl = REG_SET_FIELD(wptr_poll_cntl, SDMA0_GFX_RB_WPTR_POLL_CNTL, F32_POLL_ENABLE, 1);
++ if (ring->use_pollmem)
++ wptr_poll_cntl = REG_SET_FIELD(wptr_poll_cntl,
++ SDMA0_GFX_RB_WPTR_POLL_CNTL,
++ ENABLE, 1);
+ else
+- wptr_poll_cntl = REG_SET_FIELD(wptr_poll_cntl, SDMA0_GFX_RB_WPTR_POLL_CNTL, F32_POLL_ENABLE, 0);
++ wptr_poll_cntl = REG_SET_FIELD(wptr_poll_cntl,
++ SDMA0_GFX_RB_WPTR_POLL_CNTL,
++ ENABLE, 0);
+ WREG32(mmSDMA0_GFX_RB_WPTR_POLL_CNTL + sdma_offsets[i], wptr_poll_cntl);
+
+ /* enable DMA RB */
+@@ -1203,9 +1210,13 @@ static int sdma_v3_0_sw_init(void *handle)
+ for (i = 0; i < adev->sdma.num_instances; i++) {
+ ring = &adev->sdma.instance[i].ring;
+ ring->ring_obj = NULL;
+- ring->use_doorbell = true;
+- ring->doorbell_index = (i == 0) ?
+- AMDGPU_DOORBELL_sDMA_ENGINE0 : AMDGPU_DOORBELL_sDMA_ENGINE1;
++ if (!amdgpu_sriov_vf(adev)) {
++ ring->use_doorbell = true;
++ ring->doorbell_index = (i == 0) ?
++ AMDGPU_DOORBELL_sDMA_ENGINE0 : AMDGPU_DOORBELL_sDMA_ENGINE1;
++ } else {
++ ring->use_pollmem = true;
++ }
+
+ sprintf(ring->name, "sdma%d", i);
+ r = amdgpu_ring_init(adev, ring, 1024,
+diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
+index 81fe6d6740ce..07376de9ff4c 100644
+--- a/drivers/gpu/drm/msm/msm_gem.c
++++ b/drivers/gpu/drm/msm/msm_gem.c
+@@ -93,14 +93,17 @@ static struct page **get_pages(struct drm_gem_object *obj)
+ return p;
+ }
+
++ msm_obj->pages = p;
++
+ msm_obj->sgt = drm_prime_pages_to_sg(p, npages);
+ if (IS_ERR(msm_obj->sgt)) {
++ void *ptr = ERR_CAST(msm_obj->sgt);
++
+ dev_err(dev->dev, "failed to allocate sgt\n");
+- return ERR_CAST(msm_obj->sgt);
++ msm_obj->sgt = NULL;
++ return ptr;
+ }
+
+- msm_obj->pages = p;
+-
+ /* For non-cached buffers, ensure the new pages are clean
+ * because display controller, GPU, etc. are not coherent:
+ */
+@@ -135,7 +138,10 @@ static void put_pages(struct drm_gem_object *obj)
+ if (msm_obj->flags & (MSM_BO_WC|MSM_BO_UNCACHED))
+ dma_unmap_sg(obj->dev->dev, msm_obj->sgt->sgl,
+ msm_obj->sgt->nents, DMA_BIDIRECTIONAL);
+- sg_free_table(msm_obj->sgt);
++
++ if (msm_obj->sgt)
++ sg_free_table(msm_obj->sgt);
++
+ kfree(msm_obj->sgt);
+
+ if (use_pages(obj))
+diff --git a/drivers/gpu/drm/omapdrm/displays/panel-tpo-td028ttec1.c b/drivers/gpu/drm/omapdrm/displays/panel-tpo-td028ttec1.c
+index 0a38a0e8c925..a0dfa14f4fab 100644
+--- a/drivers/gpu/drm/omapdrm/displays/panel-tpo-td028ttec1.c
++++ b/drivers/gpu/drm/omapdrm/displays/panel-tpo-td028ttec1.c
+@@ -452,6 +452,8 @@ static int td028ttec1_panel_remove(struct spi_device *spi)
+ }
+
+ static const struct of_device_id td028ttec1_of_match[] = {
++ { .compatible = "omapdss,tpo,td028ttec1", },
++ /* keep to not break older DTB */
+ { .compatible = "omapdss,toppoly,td028ttec1", },
+ {},
+ };
+@@ -471,6 +473,7 @@ static struct spi_driver td028ttec1_spi_driver = {
+
+ module_spi_driver(td028ttec1_spi_driver);
+
++MODULE_ALIAS("spi:tpo,td028ttec1");
+ MODULE_ALIAS("spi:toppoly,td028ttec1");
+ MODULE_AUTHOR("H. Nikolaus Schaller <hns@goldelico.com>");
+ MODULE_DESCRIPTION("Toppoly TD028TTEC1 panel driver");
+diff --git a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
+index c60a85e82c6d..fd05f7e9f43f 100644
+--- a/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
++++ b/drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
+@@ -298,7 +298,12 @@ static int dmm_txn_commit(struct dmm_txn *txn, bool wait)
+ msecs_to_jiffies(100))) {
+ dev_err(dmm->dev, "timed out waiting for done\n");
+ ret = -ETIMEDOUT;
++ goto cleanup;
+ }
++
++ /* Check the engine status before continue */
++ ret = wait_status(engine, DMM_PATSTATUS_READY |
++ DMM_PATSTATUS_VALID | DMM_PATSTATUS_DONE);
+ }
+
+ cleanup:
+diff --git a/drivers/gpu/drm/tilcdc/tilcdc_regs.h b/drivers/gpu/drm/tilcdc/tilcdc_regs.h
+index 9d528c0a67a4..5048ebb86835 100644
+--- a/drivers/gpu/drm/tilcdc/tilcdc_regs.h
++++ b/drivers/gpu/drm/tilcdc/tilcdc_regs.h
+@@ -133,7 +133,7 @@ static inline void tilcdc_write64(struct drm_device *dev, u32 reg, u64 data)
+ struct tilcdc_drm_private *priv = dev->dev_private;
+ volatile void __iomem *addr = priv->mmio + reg;
+
+-#ifdef iowrite64
++#if defined(iowrite64) && !defined(iowrite64_is_nonatomic)
+ iowrite64(data, addr);
+ #else
+ __iowmb();
+diff --git a/drivers/hwtracing/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c
+index bef49a3a5ca7..4b46c494be5e 100644
+--- a/drivers/hwtracing/coresight/coresight-tpiu.c
++++ b/drivers/hwtracing/coresight/coresight-tpiu.c
+@@ -46,8 +46,11 @@
+ #define TPIU_ITATBCTR0 0xef8
+
+ /** register definition **/
++/* FFSR - 0x300 */
++#define FFSR_FT_STOPPED BIT(1)
+ /* FFCR - 0x304 */
+ #define FFCR_FON_MAN BIT(6)
++#define FFCR_STOP_FI BIT(12)
+
+ /**
+ * @base: memory mapped base address for this component.
+@@ -85,10 +88,14 @@ static void tpiu_disable_hw(struct tpiu_drvdata *drvdata)
+ {
+ CS_UNLOCK(drvdata->base);
+
+- /* Clear formatter controle reg. */
+- writel_relaxed(0x0, drvdata->base + TPIU_FFCR);
++ /* Clear formatter and stop on flush */
++ writel_relaxed(FFCR_STOP_FI, drvdata->base + TPIU_FFCR);
+ /* Generate manual flush */
+- writel_relaxed(FFCR_FON_MAN, drvdata->base + TPIU_FFCR);
++ writel_relaxed(FFCR_STOP_FI | FFCR_FON_MAN, drvdata->base + TPIU_FFCR);
++ /* Wait for flush to complete */
++ coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN, 0);
++ /* Wait for formatter to stop */
++ coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED, 1);
+
+ CS_LOCK(drvdata->base);
+ }
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 6294a7001d33..67aece2f5d8d 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -3013,7 +3013,8 @@ static int cma_port_is_unique(struct rdma_bind_list *bind_list,
+ continue;
+
+ /* different dest port -> unique */
+- if (!cma_any_port(cur_daddr) &&
++ if (!cma_any_port(daddr) &&
++ !cma_any_port(cur_daddr) &&
+ (dport != cur_dport))
+ continue;
+
+@@ -3024,7 +3025,8 @@ static int cma_port_is_unique(struct rdma_bind_list *bind_list,
+ continue;
+
+ /* different dst address -> unique */
+- if (!cma_any_addr(cur_daddr) &&
++ if (!cma_any_addr(daddr) &&
++ !cma_any_addr(cur_daddr) &&
+ cma_addr_cmp(daddr, cur_daddr))
+ continue;
+
+@@ -3322,13 +3324,13 @@ int rdma_bind_addr(struct rdma_cm_id *id, struct sockaddr *addr)
+ }
+ #endif
+ }
++ daddr = cma_dst_addr(id_priv);
++ daddr->sa_family = addr->sa_family;
++
+ ret = cma_get_port(id_priv);
+ if (ret)
+ goto err2;
+
+- daddr = cma_dst_addr(id_priv);
+- daddr->sa_family = addr->sa_family;
+-
+ return 0;
+ err2:
+ if (id_priv->cma_dev)
+@@ -4114,6 +4116,9 @@ int rdma_join_multicast(struct rdma_cm_id *id, struct sockaddr *addr,
+ struct cma_multicast *mc;
+ int ret;
+
++ if (!id->device)
++ return -EINVAL;
++
+ id_priv = container_of(id, struct rdma_id_private, id);
+ if (!cma_comp(id_priv, RDMA_CM_ADDR_BOUND) &&
+ !cma_comp(id_priv, RDMA_CM_ADDR_RESOLVED))
+@@ -4432,7 +4437,7 @@ static int cma_get_id_stats(struct sk_buff *skb, struct netlink_callback *cb)
+ RDMA_NL_RDMA_CM_ATTR_SRC_ADDR))
+ goto out;
+ if (ibnl_put_attr(skb, nlh,
+- rdma_addr_size(cma_src_addr(id_priv)),
++ rdma_addr_size(cma_dst_addr(id_priv)),
+ cma_dst_addr(id_priv),
+ RDMA_NL_RDMA_CM_ATTR_DST_ADDR))
+ goto out;
+diff --git a/drivers/infiniband/core/iwpm_util.c b/drivers/infiniband/core/iwpm_util.c
+index 3c4faadb8cdd..81528f64061a 100644
+--- a/drivers/infiniband/core/iwpm_util.c
++++ b/drivers/infiniband/core/iwpm_util.c
+@@ -654,6 +654,7 @@ int iwpm_send_mapinfo(u8 nl_client, int iwpm_pid)
+ }
+ skb_num++;
+ spin_lock_irqsave(&iwpm_mapinfo_lock, flags);
++ ret = -EINVAL;
+ for (i = 0; i < IWPM_MAPINFO_HASH_SIZE; i++) {
+ hlist_for_each_entry(map_info, &iwpm_hash_bucket[i],
+ hlist_node) {
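The iwpm_util.c hunk above presets `ret = -EINVAL` before the hash-table walk so that an empty result returns an error instead of whatever stale value `ret` last held. A minimal userspace sketch of that "assume failure until the loop finds something" pattern, with hypothetical names (`find_mapping` is not a kernel function):

```c
#include <assert.h>

/* Sketch of the default-to-error pattern from the iwpm_send_mapinfo()
 * hunk: preset the error before the search loop; a hit overwrites it,
 * an empty walk leaves it in place. -22 stands in for -EINVAL. */
static int find_mapping(const int *table, int n, int key)
{
    int ret = -22; /* assume failure until a match is found */

    for (int i = 0; i < n; i++) {
        if (table[i] == key) {
            ret = 0; /* success path overwrites the default */
            break;
        }
    }
    return ret;
}
```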
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index c8b3a45e9edc..77ca9da570a2 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -1348,7 +1348,7 @@ static ssize_t ucma_process_join(struct ucma_file *file,
+ return -ENOSPC;
+
+ addr = (struct sockaddr *) &cmd->addr;
+- if (!cmd->addr_size || (cmd->addr_size != rdma_addr_size(addr)))
++ if (cmd->addr_size != rdma_addr_size(addr))
+ return -EINVAL;
+
+ if (cmd->join_flags == RDMA_MC_JOIN_FLAG_FULLMEMBER)
+@@ -1416,6 +1416,9 @@ static ssize_t ucma_join_ip_multicast(struct ucma_file *file,
+ join_cmd.uid = cmd.uid;
+ join_cmd.id = cmd.id;
+ join_cmd.addr_size = rdma_addr_size((struct sockaddr *) &cmd.addr);
++ if (!join_cmd.addr_size)
++ return -EINVAL;
++
+ join_cmd.join_flags = RDMA_MC_JOIN_FLAG_FULLMEMBER;
+ memcpy(&join_cmd.addr, &cmd.addr, join_cmd.addr_size);
+
+@@ -1431,6 +1434,9 @@ static ssize_t ucma_join_multicast(struct ucma_file *file,
+ if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
+ return -EFAULT;
+
++ if (!rdma_addr_size((struct sockaddr *)&cmd.addr))
++ return -EINVAL;
++
+ return ucma_process_join(file, &cmd, out_len);
+ }
+
+diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
+index 130606c3b07c..9a4e899d94b3 100644
+--- a/drivers/infiniband/core/umem.c
++++ b/drivers/infiniband/core/umem.c
+@@ -352,7 +352,7 @@ int ib_umem_copy_from(void *dst, struct ib_umem *umem, size_t offset,
+ return -EINVAL;
+ }
+
+- ret = sg_pcopy_to_buffer(umem->sg_head.sgl, umem->nmap, dst, length,
++ ret = sg_pcopy_to_buffer(umem->sg_head.sgl, umem->npages, dst, length,
+ offset + ib_umem_offset(umem));
+
+ if (ret < 0)
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index cffe5966aef9..47b39c3e9812 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -1130,7 +1130,7 @@ static void destroy_raw_packet_qp_sq(struct mlx5_ib_dev *dev,
+ ib_umem_release(sq->ubuffer.umem);
+ }
+
+-static int get_rq_pas_size(void *qpc)
++static size_t get_rq_pas_size(void *qpc)
+ {
+ u32 log_page_size = MLX5_GET(qpc, qpc, log_page_size) + 12;
+ u32 log_rq_stride = MLX5_GET(qpc, qpc, log_rq_stride);
+@@ -1146,7 +1146,8 @@ static int get_rq_pas_size(void *qpc)
+ }
+
+ static int create_raw_packet_qp_rq(struct mlx5_ib_dev *dev,
+- struct mlx5_ib_rq *rq, void *qpin)
++ struct mlx5_ib_rq *rq, void *qpin,
++ size_t qpinlen)
+ {
+ struct mlx5_ib_qp *mqp = rq->base.container_mibqp;
+ __be64 *pas;
+@@ -1155,9 +1156,12 @@ static int create_raw_packet_qp_rq(struct mlx5_ib_dev *dev,
+ void *rqc;
+ void *wq;
+ void *qpc = MLX5_ADDR_OF(create_qp_in, qpin, qpc);
+- int inlen;
++ size_t rq_pas_size = get_rq_pas_size(qpc);
++ size_t inlen;
+ int err;
+- u32 rq_pas_size = get_rq_pas_size(qpc);
++
++ if (qpinlen < rq_pas_size + MLX5_BYTE_OFF(create_qp_in, pas))
++ return -EINVAL;
+
+ inlen = MLX5_ST_SZ_BYTES(create_rq_in) + rq_pas_size;
+ in = kvzalloc(inlen, GFP_KERNEL);
+@@ -1246,7 +1250,7 @@ static void destroy_raw_packet_qp_tir(struct mlx5_ib_dev *dev,
+ }
+
+ static int create_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+- u32 *in,
++ u32 *in, size_t inlen,
+ struct ib_pd *pd)
+ {
+ struct mlx5_ib_raw_packet_qp *raw_packet_qp = &qp->raw_packet_qp;
+@@ -1278,7 +1282,7 @@ static int create_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+ rq->flags |= MLX5_IB_RQ_CVLAN_STRIPPING;
+ if (qp->flags & MLX5_IB_QP_PCI_WRITE_END_PADDING)
+ rq->flags |= MLX5_IB_RQ_PCI_WRITE_END_PADDING;
+- err = create_raw_packet_qp_rq(dev, rq, in);
++ err = create_raw_packet_qp_rq(dev, rq, in, inlen);
+ if (err)
+ goto err_destroy_sq;
+
+@@ -1836,11 +1840,16 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
+ }
+ }
+
++ if (inlen < 0) {
++ err = -EINVAL;
++ goto err;
++ }
++
+ if (init_attr->qp_type == IB_QPT_RAW_PACKET ||
+ qp->flags & MLX5_IB_QP_UNDERLAY) {
+ qp->raw_packet_qp.sq.ubuffer.buf_addr = ucmd.sq_buf_addr;
+ raw_packet_qp_copy_info(qp, &qp->raw_packet_qp);
+- err = create_raw_packet_qp(dev, qp, in, pd);
++ err = create_raw_packet_qp(dev, qp, in, inlen, pd);
+ } else {
+ err = mlx5_core_create_qp(dev->mdev, &base->mqp, in, inlen);
+ }
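The mlx5/qp.c hunks above do two related things: they move the length bookkeeping from `int` to `size_t`, and they validate the caller-supplied `qpinlen` before trusting a size computed from caller-controlled QPC fields. A hedged sketch of the check (names are illustrative, and the subtraction form additionally avoids `size_t` wraparound in the comparison):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the bounds check added in create_raw_packet_qp_rq(): a size
 * derived from untrusted fields must fit inside the buffer the caller
 * actually provided. Comparing via subtraction (after checking the
 * offset) cannot wrap, unlike `qpinlen < rq_pas_size + pas_offset`
 * when rq_pas_size is attacker-controlled. -22 stands in for -EINVAL. */
static int check_rq_pas(size_t qpinlen, size_t rq_pas_size, size_t pas_offset)
{
    if (qpinlen < pas_offset || qpinlen - pas_offset < rq_pas_size)
        return -22;
    return 0;
}
```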
+diff --git a/drivers/infiniband/hw/mlx5/srq.c b/drivers/infiniband/hw/mlx5/srq.c
+index 6d5fadad9090..3c7522d025f2 100644
+--- a/drivers/infiniband/hw/mlx5/srq.c
++++ b/drivers/infiniband/hw/mlx5/srq.c
+@@ -241,8 +241,8 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,
+ {
+ struct mlx5_ib_dev *dev = to_mdev(pd->device);
+ struct mlx5_ib_srq *srq;
+- int desc_size;
+- int buf_size;
++ size_t desc_size;
++ size_t buf_size;
+ int err;
+ struct mlx5_srq_attr in = {0};
+ __u32 max_srq_wqes = 1 << MLX5_CAP_GEN(dev->mdev, log_max_srq_sz);
+@@ -266,15 +266,18 @@ struct ib_srq *mlx5_ib_create_srq(struct ib_pd *pd,
+
+ desc_size = sizeof(struct mlx5_wqe_srq_next_seg) +
+ srq->msrq.max_gs * sizeof(struct mlx5_wqe_data_seg);
++ if (desc_size == 0 || srq->msrq.max_gs > desc_size)
++ return ERR_PTR(-EINVAL);
+ desc_size = roundup_pow_of_two(desc_size);
+- desc_size = max_t(int, 32, desc_size);
++ desc_size = max_t(size_t, 32, desc_size);
++ if (desc_size < sizeof(struct mlx5_wqe_srq_next_seg))
++ return ERR_PTR(-EINVAL);
+ srq->msrq.max_avail_gather = (desc_size - sizeof(struct mlx5_wqe_srq_next_seg)) /
+ sizeof(struct mlx5_wqe_data_seg);
+ srq->msrq.wqe_shift = ilog2(desc_size);
+ buf_size = srq->msrq.max * desc_size;
+- mlx5_ib_dbg(dev, "desc_size 0x%x, req wr 0x%x, srq size 0x%x, max_gs 0x%x, max_avail_gather 0x%x\n",
+- desc_size, init_attr->attr.max_wr, srq->msrq.max, srq->msrq.max_gs,
+- srq->msrq.max_avail_gather);
++ if (buf_size < desc_size)
++ return ERR_PTR(-EINVAL);
+ in.type = init_attr->srq_type;
+
+ if (pd->uobject)
+diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_stats.c b/drivers/infiniband/hw/ocrdma/ocrdma_stats.c
+index e528d7acb7f6..fb78b16ce671 100644
+--- a/drivers/infiniband/hw/ocrdma/ocrdma_stats.c
++++ b/drivers/infiniband/hw/ocrdma/ocrdma_stats.c
+@@ -834,7 +834,7 @@ void ocrdma_add_port_stats(struct ocrdma_dev *dev)
+
+ dev->reset_stats.type = OCRDMA_RESET_STATS;
+ dev->reset_stats.dev = dev;
+- if (!debugfs_create_file("reset_stats", S_IRUSR, dev->dir,
++ if (!debugfs_create_file("reset_stats", 0200, dev->dir,
+ &dev->reset_stats, &ocrdma_dbg_ops))
+ goto err;
+
+diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
+index e529622cefad..28b0f0a82039 100644
+--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
++++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_cq.c
+@@ -114,6 +114,7 @@ struct ib_cq *pvrdma_create_cq(struct ib_device *ibdev,
+ union pvrdma_cmd_resp rsp;
+ struct pvrdma_cmd_create_cq *cmd = &req.create_cq;
+ struct pvrdma_cmd_create_cq_resp *resp = &rsp.create_cq_resp;
++ struct pvrdma_create_cq_resp cq_resp = {0};
+ struct pvrdma_create_cq ucmd;
+
+ BUILD_BUG_ON(sizeof(struct pvrdma_cqe) != 64);
+@@ -198,6 +199,7 @@ struct ib_cq *pvrdma_create_cq(struct ib_device *ibdev,
+
+ cq->ibcq.cqe = resp->cqe;
+ cq->cq_handle = resp->cq_handle;
++ cq_resp.cqn = resp->cq_handle;
+ spin_lock_irqsave(&dev->cq_tbl_lock, flags);
+ dev->cq_tbl[cq->cq_handle % dev->dsr->caps.max_cq] = cq;
+ spin_unlock_irqrestore(&dev->cq_tbl_lock, flags);
+@@ -206,7 +208,7 @@ struct ib_cq *pvrdma_create_cq(struct ib_device *ibdev,
+ cq->uar = &(to_vucontext(context)->uar);
+
+ /* Copy udata back. */
+- if (ib_copy_to_udata(udata, &cq->cq_handle, sizeof(__u32))) {
++ if (ib_copy_to_udata(udata, &cq_resp, sizeof(cq_resp))) {
+ dev_warn(&dev->pdev->dev,
+ "failed to copy back udata\n");
+ pvrdma_destroy_cq(&cq->ibcq);
+diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
+index 5acebb1ef631..af235967a9c2 100644
+--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
++++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_srq.c
+@@ -113,6 +113,7 @@ struct ib_srq *pvrdma_create_srq(struct ib_pd *pd,
+ union pvrdma_cmd_resp rsp;
+ struct pvrdma_cmd_create_srq *cmd = &req.create_srq;
+ struct pvrdma_cmd_create_srq_resp *resp = &rsp.create_srq_resp;
++ struct pvrdma_create_srq_resp srq_resp = {0};
+ struct pvrdma_create_srq ucmd;
+ unsigned long flags;
+ int ret;
+@@ -204,12 +205,13 @@ struct ib_srq *pvrdma_create_srq(struct ib_pd *pd,
+ }
+
+ srq->srq_handle = resp->srqn;
++ srq_resp.srqn = resp->srqn;
+ spin_lock_irqsave(&dev->srq_tbl_lock, flags);
+ dev->srq_tbl[srq->srq_handle % dev->dsr->caps.max_srq] = srq;
+ spin_unlock_irqrestore(&dev->srq_tbl_lock, flags);
+
+ /* Copy udata back. */
+- if (ib_copy_to_udata(udata, &srq->srq_handle, sizeof(__u32))) {
++ if (ib_copy_to_udata(udata, &srq_resp, sizeof(srq_resp))) {
+ dev_warn(&dev->pdev->dev, "failed to copy back udata\n");
+ pvrdma_destroy_srq(&srq->ibsrq);
+ return ERR_PTR(-EINVAL);
+diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
+index 16b96616ef7e..a51463cd2f37 100644
+--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
++++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_verbs.c
+@@ -447,6 +447,7 @@ struct ib_pd *pvrdma_alloc_pd(struct ib_device *ibdev,
+ union pvrdma_cmd_resp rsp;
+ struct pvrdma_cmd_create_pd *cmd = &req.create_pd;
+ struct pvrdma_cmd_create_pd_resp *resp = &rsp.create_pd_resp;
++ struct pvrdma_alloc_pd_resp pd_resp = {0};
+ int ret;
+ void *ptr;
+
+@@ -475,9 +476,10 @@ struct ib_pd *pvrdma_alloc_pd(struct ib_device *ibdev,
+ pd->privileged = !context;
+ pd->pd_handle = resp->pd_handle;
+ pd->pdn = resp->pd_handle;
++ pd_resp.pdn = resp->pd_handle;
+
+ if (context) {
+- if (ib_copy_to_udata(udata, &pd->pdn, sizeof(__u32))) {
++ if (ib_copy_to_udata(udata, &pd_resp, sizeof(pd_resp))) {
+ dev_warn(&dev->pdev->dev,
+ "failed to copy back protection domain\n");
+ pvrdma_dealloc_pd(&pd->ibpd);
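All three pvrdma hunks above follow the same shape: instead of copying a bare `u32` handle to user space, build the full uapi response in a zero-initialized struct and copy `sizeof(resp)`. That both matches the uapi ABI and guarantees padding/reserved bytes cannot leak kernel stack contents. A userspace sketch with invented names (`fake_create_cq_resp` is not the real uapi struct; `memcpy` stands in for `ib_copy_to_udata()`):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the uapi response struct. */
struct fake_create_cq_resp {
    uint32_t cqn;
    uint32_t reserved; /* would leak stack garbage if left uninitialized */
};

/* Sketch of the pvrdma fix: zero-initialize the whole response, fill the
 * meaningful field, then copy the full struct. */
static void fill_resp(struct fake_create_cq_resp *out, uint32_t handle)
{
    struct fake_create_cq_resp resp = {0}; /* every byte zeroed first */

    resp.cqn = handle;
    memcpy(out, &resp, sizeof(resp)); /* stands in for ib_copy_to_udata() */
}
```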
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index 8880351df179..160c5d9bca4c 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -775,6 +775,22 @@ static void path_rec_completion(int status,
+ spin_lock_irqsave(&priv->lock, flags);
+
+ if (!IS_ERR_OR_NULL(ah)) {
++ /*
++ * pathrec.dgid is used as the database key from the LLADDR,
++ * it must remain unchanged even if the SA returns a different
++ * GID to use in the AH.
++ */
++ if (memcmp(pathrec->dgid.raw, path->pathrec.dgid.raw,
++ sizeof(union ib_gid))) {
++ ipoib_dbg(
++ priv,
++ "%s got PathRec for gid %pI6 while asked for %pI6\n",
++ dev->name, pathrec->dgid.raw,
++ path->pathrec.dgid.raw);
++ memcpy(pathrec->dgid.raw, path->pathrec.dgid.raw,
++ sizeof(union ib_gid));
++ }
++
+ path->pathrec = *pathrec;
+
+ old_ah = path->ah;
+@@ -2207,8 +2223,10 @@ static struct net_device *ipoib_add_port(const char *format,
+ int result = -ENOMEM;
+
+ priv = ipoib_intf_alloc(hca, port, format);
+- if (!priv)
++ if (!priv) {
++ pr_warn("%s, %d: ipoib_intf_alloc failed\n", hca->name, port);
+ goto alloc_mem_failed;
++ }
+
+ SET_NETDEV_DEV(priv->dev, hca->dev.parent);
+ priv->dev->dev_id = port - 1;
+@@ -2337,8 +2355,7 @@ static void ipoib_add_one(struct ib_device *device)
+ }
+
+ if (!count) {
+- pr_err("Failed to init port, removing it\n");
+- ipoib_remove_one(device, dev_list);
++ kfree(dev_list);
+ return;
+ }
+
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
+index 1b02283ce20e..fff40b097947 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.c
++++ b/drivers/infiniband/ulp/isert/ib_isert.c
+@@ -2124,6 +2124,9 @@ isert_rdma_rw_ctx_post(struct isert_cmd *cmd, struct isert_conn *conn,
+ u32 rkey, offset;
+ int ret;
+
++ if (cmd->ctx_init_done)
++ goto rdma_ctx_post;
++
+ if (dir == DMA_FROM_DEVICE) {
+ addr = cmd->write_va;
+ rkey = cmd->write_stag;
+@@ -2151,11 +2154,15 @@ isert_rdma_rw_ctx_post(struct isert_cmd *cmd, struct isert_conn *conn,
+ se_cmd->t_data_sg, se_cmd->t_data_nents,
+ offset, addr, rkey, dir);
+ }
++
+ if (ret < 0) {
+ isert_err("Cmd: %p failed to prepare RDMA res\n", cmd);
+ return ret;
+ }
+
++ cmd->ctx_init_done = true;
++
++rdma_ctx_post:
+ ret = rdma_rw_ctx_post(&cmd->rw, conn->qp, port_num, cqe, chain_wr);
+ if (ret < 0)
+ isert_err("Cmd: %p failed to post RDMA res\n", cmd);
+diff --git a/drivers/infiniband/ulp/isert/ib_isert.h b/drivers/infiniband/ulp/isert/ib_isert.h
+index d6fd248320ae..3b296bac4f60 100644
+--- a/drivers/infiniband/ulp/isert/ib_isert.h
++++ b/drivers/infiniband/ulp/isert/ib_isert.h
+@@ -126,6 +126,7 @@ struct isert_cmd {
+ struct rdma_rw_ctx rw;
+ struct work_struct comp_work;
+ struct scatterlist sg;
++ bool ctx_init_done;
+ };
+
+ static inline struct isert_cmd *tx_desc_to_cmd(struct iser_tx_desc *desc)
+diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
+index ed1cf7c5a43b..6643277e321e 100644
+--- a/drivers/iommu/intel-svm.c
++++ b/drivers/iommu/intel-svm.c
+@@ -129,6 +129,7 @@ int intel_svm_enable_prq(struct intel_iommu *iommu)
+ pr_err("IOMMU: %s: Failed to request IRQ for page request queue\n",
+ iommu->name);
+ dmar_free_hwirq(irq);
++ iommu->pr_irq = 0;
+ goto err;
+ }
+ dmar_writeq(iommu->reg + DMAR_PQH_REG, 0ULL);
+@@ -144,9 +145,11 @@ int intel_svm_finish_prq(struct intel_iommu *iommu)
+ dmar_writeq(iommu->reg + DMAR_PQT_REG, 0ULL);
+ dmar_writeq(iommu->reg + DMAR_PQA_REG, 0ULL);
+
+- free_irq(iommu->pr_irq, iommu);
+- dmar_free_hwirq(iommu->pr_irq);
+- iommu->pr_irq = 0;
++ if (iommu->pr_irq) {
++ free_irq(iommu->pr_irq, iommu);
++ dmar_free_hwirq(iommu->pr_irq);
++ iommu->pr_irq = 0;
++ }
+
+ free_pages((unsigned long)iommu->prq, PRQ_ORDER);
+ iommu->prq = NULL;
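The intel-svm.c pair of hunks makes IRQ teardown idempotent: the setup error path zeroes `pr_irq` after freeing it, and `intel_svm_finish_prq()` frees only when the handle is nonzero, then clears it. A sketch of that guard with a counting mock in place of `free_irq()` (names are illustrative):

```c
#include <assert.h>

static int freed_count; /* counts calls into the fake free routine */

static void fake_free_irq(int irq)
{
    (void)irq;
    freed_count++;
}

/* Sketch of the guard added in intel_svm_finish_prq(): free the IRQ only
 * if it was actually set up, then clear the handle so a second teardown
 * call is a harmless no-op rather than a double free. */
static void finish_prq(int *pr_irq)
{
    if (*pr_irq) {
        fake_free_irq(*pr_irq);
        *pr_irq = 0;
    }
}
```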
+diff --git a/drivers/media/dvb-frontends/si2168.c b/drivers/media/dvb-frontends/si2168.c
+index 41d9c513b7e8..539399dac551 100644
+--- a/drivers/media/dvb-frontends/si2168.c
++++ b/drivers/media/dvb-frontends/si2168.c
+@@ -14,6 +14,8 @@
+ * GNU General Public License for more details.
+ */
+
++#include <linux/delay.h>
++
+ #include "si2168_priv.h"
+
+ static const struct dvb_frontend_ops si2168_ops;
+@@ -435,6 +437,7 @@ static int si2168_init(struct dvb_frontend *fe)
+ if (ret)
+ goto err;
+
++ udelay(100);
+ memcpy(cmd.args, "\x85", 1);
+ cmd.wlen = 1;
+ cmd.rlen = 1;
+diff --git a/drivers/media/pci/bt8xx/bt878.c b/drivers/media/pci/bt8xx/bt878.c
+index a5f52137d306..d4bc78b4fcb5 100644
+--- a/drivers/media/pci/bt8xx/bt878.c
++++ b/drivers/media/pci/bt8xx/bt878.c
+@@ -422,8 +422,7 @@ static int bt878_probe(struct pci_dev *dev, const struct pci_device_id *pci_id)
+ bt878_num);
+ if (bt878_num >= BT878_MAX) {
+ printk(KERN_ERR "bt878: Too many devices inserted\n");
+- result = -ENOMEM;
+- goto fail0;
++ return -ENOMEM;
+ }
+ if (pci_enable_device(dev))
+ return -EIO;
+diff --git a/drivers/media/platform/davinci/vpif_capture.c b/drivers/media/platform/davinci/vpif_capture.c
+index e45916f69def..a288d58fd29c 100644
+--- a/drivers/media/platform/davinci/vpif_capture.c
++++ b/drivers/media/platform/davinci/vpif_capture.c
+@@ -1397,9 +1397,9 @@ static int vpif_async_bound(struct v4l2_async_notifier *notifier,
+ vpif_obj.config->chan_config->inputs[i].subdev_name =
+ (char *)to_of_node(subdev->fwnode)->full_name;
+ vpif_dbg(2, debug,
+- "%s: setting input %d subdev_name = %pOF\n",
++ "%s: setting input %d subdev_name = %s\n",
+ __func__, i,
+- to_of_node(subdev->fwnode));
++ vpif_obj.config->chan_config->inputs[i].subdev_name);
+ return 0;
+ }
+ }
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+index bc68dbbcaec1..cac27ad510de 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
+@@ -1309,6 +1309,12 @@ static int s5p_mfc_probe(struct platform_device *pdev)
+ goto err_dma;
+ }
+
++ /*
++ * Load fails if fs isn't mounted. Try loading anyway.
++ * _open() will load it, if it fails now. Ignore failure.

++ */
++ s5p_mfc_load_firmware(dev);
++
+ mutex_init(&dev->mfc_mutex);
+ init_waitqueue_head(&dev->queue);
+ dev->hw_lock = 0;
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc_common.h b/drivers/media/platform/s5p-mfc/s5p_mfc_common.h
+index 4220914529b2..76119a8cc477 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc_common.h
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc_common.h
+@@ -290,6 +290,8 @@ struct s5p_mfc_priv_buf {
+ * @mfc_cmds: cmd structure holding HW commands function pointers
+ * @mfc_regs: structure holding MFC registers
+ * @fw_ver: loaded firmware sub-version
++ * @fw_get_done: flag set when request_firmware() is complete and
++ * copied into fw_buf
+ * risc_on: flag indicates RISC is on or off
+ *
+ */
+@@ -336,6 +338,7 @@ struct s5p_mfc_dev {
+ struct s5p_mfc_hw_cmds *mfc_cmds;
+ const struct s5p_mfc_regs *mfc_regs;
+ enum s5p_mfc_fw_ver fw_ver;
++ bool fw_get_done;
+ bool risc_on; /* indicates if RISC is on or off */
+ };
+
+diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c b/drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c
+index 69ef9c23a99a..d94e59e79fe9 100644
+--- a/drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c
++++ b/drivers/media/platform/s5p-mfc/s5p_mfc_ctrl.c
+@@ -55,6 +55,9 @@ int s5p_mfc_load_firmware(struct s5p_mfc_dev *dev)
+ * into kernel. */
+ mfc_debug_enter();
+
++ if (dev->fw_get_done)
++ return 0;
++
+ for (i = MFC_FW_MAX_VERSIONS - 1; i >= 0; i--) {
+ if (!dev->variant->fw_name[i])
+ continue;
+@@ -82,6 +85,7 @@ int s5p_mfc_load_firmware(struct s5p_mfc_dev *dev)
+ }
+ memcpy(dev->fw_buf.virt, fw_blob->data, fw_blob->size);
+ wmb();
++ dev->fw_get_done = true;
+ release_firmware(fw_blob);
+ mfc_debug_leave();
+ return 0;
+@@ -93,6 +97,7 @@ int s5p_mfc_release_firmware(struct s5p_mfc_dev *dev)
+ /* Before calling this function one has to make sure
+ * that MFC is no longer processing */
+ s5p_mfc_release_priv_buf(dev, &dev->fw_buf);
++ dev->fw_get_done = false;
+ return 0;
+ }
+
+diff --git a/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c b/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c
+index a0acee7671b1..8bd19e61846d 100644
+--- a/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c
++++ b/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c
+@@ -83,7 +83,7 @@ static void c8sectpfe_timer_interrupt(struct timer_list *t)
+ static void channel_swdemux_tsklet(unsigned long data)
+ {
+ struct channel_info *channel = (struct channel_info *)data;
+- struct c8sectpfei *fei = channel->fei;
++ struct c8sectpfei *fei;
+ unsigned long wp, rp;
+ int pos, num_packets, n, size;
+ u8 *buf;
+@@ -91,6 +91,8 @@ static void channel_swdemux_tsklet(unsigned long data)
+ if (unlikely(!channel || !channel->irec))
+ return;
+
++ fei = channel->fei;
++
+ wp = readl(channel->irec + DMA_PRDS_BUSWP_TP(0));
+ rp = readl(channel->irec + DMA_PRDS_BUSRP_TP(0));
+
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index ccfa98af1dd3..b737a9540331 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -2623,6 +2623,7 @@ static int mmc_ext_csd_open(struct inode *inode, struct file *filp)
+
+ if (n != EXT_CSD_STR_LEN) {
+ err = -EINVAL;
++ kfree(ext_csd);
+ goto out_free;
+ }
+
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index 1f0f44f4dd5f..af194640dbc6 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -2959,6 +2959,14 @@ static int mmc_pm_notify(struct notifier_block *notify_block,
+ if (!err)
+ break;
+
++ if (!mmc_card_is_removable(host)) {
++ dev_warn(mmc_dev(host),
++ "pre_suspend failed for non-removable host: "
++ "%d\n", err);
++ /* Avoid removing non-removable hosts */
++ break;
++ }
++
+ /* Calling bus_ops->remove() with a claimed host can deadlock */
+ host->bus_ops->remove(host);
+ mmc_claim_host(host);
+diff --git a/drivers/mmc/host/sdhci-xenon.c b/drivers/mmc/host/sdhci-xenon.c
+index 0842bbc2d7ad..4d0791f6ec23 100644
+--- a/drivers/mmc/host/sdhci-xenon.c
++++ b/drivers/mmc/host/sdhci-xenon.c
+@@ -230,7 +230,14 @@ static void xenon_set_power(struct sdhci_host *host, unsigned char mode,
+ mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, vdd);
+ }
+
++static void xenon_voltage_switch(struct sdhci_host *host)
++{
++ /* Wait for 5ms after setting the 1.8V signal enable bit */
++ usleep_range(5000, 5500);
++}
++
+ static const struct sdhci_ops sdhci_xenon_ops = {
++ .voltage_switch = xenon_voltage_switch,
+ .set_clock = sdhci_set_clock,
+ .set_power = xenon_set_power,
+ .set_bus_width = sdhci_set_bus_width,
+diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
+index 88ddfb92122b..c2653ac499e1 100644
+--- a/drivers/net/hyperv/hyperv_net.h
++++ b/drivers/net/hyperv/hyperv_net.h
+@@ -635,14 +635,27 @@ struct nvsp_message {
+ #define NETVSC_MTU 65535
+ #define NETVSC_MTU_MIN ETH_MIN_MTU
+
+-#define NETVSC_RECEIVE_BUFFER_SIZE (1024*1024*16) /* 16MB */
+-#define NETVSC_RECEIVE_BUFFER_SIZE_LEGACY (1024*1024*15) /* 15MB */
+-#define NETVSC_SEND_BUFFER_SIZE (1024 * 1024 * 15) /* 15MB */
++/* Max buffer sizes allowed by a host */
++#define NETVSC_RECEIVE_BUFFER_SIZE (1024 * 1024 * 31) /* 31MB */
++#define NETVSC_RECEIVE_BUFFER_SIZE_LEGACY (1024 * 1024 * 15) /* 15MB */
++#define NETVSC_RECEIVE_BUFFER_DEFAULT (1024 * 1024 * 16)
++
++#define NETVSC_SEND_BUFFER_SIZE (1024 * 1024 * 15) /* 15MB */
++#define NETVSC_SEND_BUFFER_DEFAULT (1024 * 1024)
++
+ #define NETVSC_INVALID_INDEX -1
+
+ #define NETVSC_SEND_SECTION_SIZE 6144
+ #define NETVSC_RECV_SECTION_SIZE 1728
+
++/* Default size of TX buf: 1MB, RX buf: 16MB */
++#define NETVSC_MIN_TX_SECTIONS 10
++#define NETVSC_DEFAULT_TX (NETVSC_SEND_BUFFER_DEFAULT \
++ / NETVSC_SEND_SECTION_SIZE)
++#define NETVSC_MIN_RX_SECTIONS 10
++#define NETVSC_DEFAULT_RX (NETVSC_RECEIVE_BUFFER_DEFAULT \
++ / NETVSC_RECV_SECTION_SIZE)
++
+ #define NETVSC_RECEIVE_BUFFER_ID 0xcafe
+ #define NETVSC_SEND_BUFFER_ID 0
+
+diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
+index bfc79698b8f4..1e4f512fb90d 100644
+--- a/drivers/net/hyperv/netvsc.c
++++ b/drivers/net/hyperv/netvsc.c
+@@ -267,6 +267,11 @@ static int netvsc_init_buf(struct hv_device *device,
+ buf_size = device_info->recv_sections * device_info->recv_section_size;
+ buf_size = roundup(buf_size, PAGE_SIZE);
+
++ /* Legacy hosts only allow smaller receive buffer */
++ if (net_device->nvsp_version <= NVSP_PROTOCOL_VERSION_2)
++ buf_size = min_t(unsigned int, buf_size,
++ NETVSC_RECEIVE_BUFFER_SIZE_LEGACY);
++
+ net_device->recv_buf = vzalloc(buf_size);
+ if (!net_device->recv_buf) {
+ netdev_err(ndev,
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 5129647d420c..dde3251da004 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -46,10 +46,6 @@
+ #include "hyperv_net.h"
+
+ #define RING_SIZE_MIN 64
+-#define NETVSC_MIN_TX_SECTIONS 10
+-#define NETVSC_DEFAULT_TX 192 /* ~1M */
+-#define NETVSC_MIN_RX_SECTIONS 10 /* ~64K */
+-#define NETVSC_DEFAULT_RX 10485 /* Max ~16M */
+
+ #define LINKCHANGE_INT (2 * HZ)
+ #define VF_TAKEOVER_INT (HZ / 10)
+diff --git a/drivers/net/phy/meson-gxl.c b/drivers/net/phy/meson-gxl.c
+index 842eb871a6e3..f431c83ba0b5 100644
+--- a/drivers/net/phy/meson-gxl.c
++++ b/drivers/net/phy/meson-gxl.c
+@@ -26,27 +26,53 @@
+
+ static int meson_gxl_config_init(struct phy_device *phydev)
+ {
++ int ret;
++
+ /* Enable Analog and DSP register Bank access by */
+- phy_write(phydev, 0x14, 0x0000);
+- phy_write(phydev, 0x14, 0x0400);
+- phy_write(phydev, 0x14, 0x0000);
+- phy_write(phydev, 0x14, 0x0400);
++ ret = phy_write(phydev, 0x14, 0x0000);
++ if (ret)
++ return ret;
++ ret = phy_write(phydev, 0x14, 0x0400);
++ if (ret)
++ return ret;
++ ret = phy_write(phydev, 0x14, 0x0000);
++ if (ret)
++ return ret;
++ ret = phy_write(phydev, 0x14, 0x0400);
++ if (ret)
++ return ret;
+
+ /* Write Analog register 23 */
+- phy_write(phydev, 0x17, 0x8E0D);
+- phy_write(phydev, 0x14, 0x4417);
++ ret = phy_write(phydev, 0x17, 0x8E0D);
++ if (ret)
++ return ret;
++ ret = phy_write(phydev, 0x14, 0x4417);
++ if (ret)
++ return ret;
+
+ /* Enable fractional PLL */
+- phy_write(phydev, 0x17, 0x0005);
+- phy_write(phydev, 0x14, 0x5C1B);
++ ret = phy_write(phydev, 0x17, 0x0005);
++ if (ret)
++ return ret;
++ ret = phy_write(phydev, 0x14, 0x5C1B);
++ if (ret)
++ return ret;
+
+ /* Program fraction FR_PLL_DIV1 */
+- phy_write(phydev, 0x17, 0x029A);
+- phy_write(phydev, 0x14, 0x5C1D);
++ ret = phy_write(phydev, 0x17, 0x029A);
++ if (ret)
++ return ret;
++ ret = phy_write(phydev, 0x14, 0x5C1D);
++ if (ret)
++ return ret;
+
+ /* Program fraction FR_PLL_DIV1 */
+- phy_write(phydev, 0x17, 0xAAAA);
+- phy_write(phydev, 0x14, 0x5C1C);
++ ret = phy_write(phydev, 0x17, 0xAAAA);
++ if (ret)
++ return ret;
++ ret = phy_write(phydev, 0x14, 0x5C1C);
++ if (ret)
++ return ret;
+
+ return 0;
+ }
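The meson-gxl.c hunk above converts a series of fire-and-forget `phy_write()` calls into checked ones that propagate the first error. The patch unrolls each check; an equivalent, more compact table-driven form of the same pattern is sketched below with a mock write function (hypothetical — the driver itself does not use a table):

```c
#include <assert.h>
#include <stdint.h>

struct reg_val {
    uint8_t  reg;
    uint16_t val;
};

/* Mock register write: fails with -5 (-EIO) for one designated register
 * so the early-return path can be exercised. */
static uint8_t failing_reg;

static int mock_phy_write(uint8_t reg, uint16_t val)
{
    (void)val;
    return reg == failing_reg ? -5 : 0;
}

/* Table-driven form of the error-checked write sequence the hunk
 * introduces: stop at and propagate the first failing write. */
static int write_sequence(const struct reg_val *seq, int n)
{
    for (int i = 0; i < n; i++) {
        int ret = mock_phy_write(seq[i].reg, seq[i].val);

        if (ret)
            return ret;
    }
    return 0;
}
```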
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index 9dfc1c4c954f..10b77ac781ca 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -315,12 +315,12 @@ static void sfp_sm_probe_phy(struct sfp *sfp)
+ msleep(T_PHY_RESET_MS);
+
+ phy = mdiobus_scan(sfp->i2c_mii, SFP_PHY_ADDR);
+- if (IS_ERR(phy)) {
+- dev_err(sfp->dev, "mdiobus scan returned %ld\n", PTR_ERR(phy));
++ if (phy == ERR_PTR(-ENODEV)) {
++ dev_info(sfp->dev, "no PHY detected\n");
+ return;
+ }
+- if (!phy) {
+- dev_info(sfp->dev, "no PHY detected\n");
++ if (IS_ERR(phy)) {
++ dev_err(sfp->dev, "mdiobus scan returned %ld\n", PTR_ERR(phy));
+ return;
+ }
+
+@@ -683,20 +683,19 @@ static int sfp_module_eeprom(struct sfp *sfp, struct ethtool_eeprom *ee,
+ len = min_t(unsigned int, last, ETH_MODULE_SFF_8079_LEN);
+ len -= first;
+
+- ret = sfp->read(sfp, false, first, data, len);
++ ret = sfp_read(sfp, false, first, data, len);
+ if (ret < 0)
+ return ret;
+
+ first += len;
+ data += len;
+ }
+- if (first >= ETH_MODULE_SFF_8079_LEN &&
+- first < ETH_MODULE_SFF_8472_LEN) {
++ if (first < ETH_MODULE_SFF_8472_LEN && last > ETH_MODULE_SFF_8079_LEN) {
+ len = min_t(unsigned int, last, ETH_MODULE_SFF_8472_LEN);
+ len -= first;
+ first -= ETH_MODULE_SFF_8079_LEN;
+
+- ret = sfp->read(sfp, true, first, data, len);
++ ret = sfp_read(sfp, true, first, data, len);
+ if (ret < 0)
+ return ret;
+ }
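The reordering in the sfp.c probe hunk matters because `mdiobus_scan()` reports "no device" as `ERR_PTR(-ENODEV)`, which is itself an error pointer: the specific value must be tested before the generic `IS_ERR()` check, or the benign "no PHY" case would be logged as a hard error. A userspace sketch with simplified re-creations of the kernel's error-pointer helpers (the real ones live in `include/linux/err.h`):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified userspace versions of the kernel's error-pointer helpers:
 * the top MAX_ERRNO addresses encode errno values -MAX_ERRNO..-1. */
#define MAX_ERRNO 4095
#define ERR_PTR(err) ((void *)(intptr_t)(err))
#define IS_ERR(ptr)  ((uintptr_t)(ptr) >= (uintptr_t)-MAX_ERRNO)

enum scan_result { SCAN_NO_PHY, SCAN_ERROR, SCAN_OK };

/* Sketch of the reordered checks in sfp_sm_probe_phy(): the specific
 * ERR_PTR(-ENODEV) case is tested first; only then does the generic
 * IS_ERR() catch genuine failures. -19 stands in for -ENODEV. */
static enum scan_result classify_scan(void *phy)
{
    if (phy == ERR_PTR(-19))
        return SCAN_NO_PHY;
    if (IS_ERR(phy))
        return SCAN_ERROR;
    return SCAN_OK;
}
```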
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index e7114c34fe4b..76ac48095c29 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -826,7 +826,7 @@ static int qmi_wwan_resume(struct usb_interface *intf)
+
+ static const struct driver_info qmi_wwan_info = {
+ .description = "WWAN/QMI device",
+- .flags = FLAG_WWAN,
++ .flags = FLAG_WWAN | FLAG_SEND_ZLP,
+ .bind = qmi_wwan_bind,
+ .unbind = qmi_wwan_unbind,
+ .manage_power = qmi_wwan_manage_power,
+@@ -835,7 +835,7 @@ static const struct driver_info qmi_wwan_info = {
+
+ static const struct driver_info qmi_wwan_info_quirk_dtr = {
+ .description = "WWAN/QMI device",
+- .flags = FLAG_WWAN,
++ .flags = FLAG_WWAN | FLAG_SEND_ZLP,
+ .bind = qmi_wwan_bind,
+ .unbind = qmi_wwan_unbind,
+ .manage_power = qmi_wwan_manage_power,
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index c6460e7f6d78..397a5b6b50b1 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -2563,7 +2563,7 @@ static void ath10k_peer_assoc_h_qos(struct ath10k *ar,
+ }
+ break;
+ case WMI_VDEV_TYPE_STA:
+- if (vif->bss_conf.qos)
++ if (sta->wme)
+ arg->peer_flags |= arvif->ar->wmi.peer_flags->qos;
+ break;
+ case WMI_VDEV_TYPE_IBSS:
+diff --git a/drivers/net/wireless/realtek/rtlwifi/base.c b/drivers/net/wireless/realtek/rtlwifi/base.c
+index cad2272ae21b..704741d6f495 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/base.c
++++ b/drivers/net/wireless/realtek/rtlwifi/base.c
+@@ -1726,7 +1726,7 @@ int rtl_tx_agg_oper(struct ieee80211_hw *hw,
+ void rtl_rx_ampdu_apply(struct rtl_priv *rtlpriv)
+ {
+ struct rtl_btc_ops *btc_ops = rtlpriv->btcoexist.btc_ops;
+- u8 reject_agg, ctrl_agg_size = 0, agg_size;
++ u8 reject_agg = 0, ctrl_agg_size = 0, agg_size = 0;
+
+ if (rtlpriv->cfg->ops->get_btc_status())
+ btc_ops->btc_get_ampdu_cfg(rtlpriv, &reject_agg,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.c b/drivers/net/wireless/realtek/rtlwifi/pci.c
+index c2575b0b9440..1b7715fd13b1 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.c
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.c
+@@ -1555,7 +1555,14 @@ int rtl_pci_reset_trx_ring(struct ieee80211_hw *hw)
+ dev_kfree_skb_irq(skb);
+ ring->idx = (ring->idx + 1) % ring->entries;
+ }
++
++ if (rtlpriv->use_new_trx_flow) {
++ rtlpci->tx_ring[i].cur_tx_rp = 0;
++ rtlpci->tx_ring[i].cur_tx_wp = 0;
++ }
++
+ ring->idx = 0;
++ ring->entries = rtlpci->txringcount[i];
+ }
+ }
+ spin_unlock_irqrestore(&rtlpriv->locks.irq_th_lock, flags);
+diff --git a/drivers/pci/dwc/pcie-designware-ep.c b/drivers/pci/dwc/pcie-designware-ep.c
+index d53d5f168363..7c621877a939 100644
+--- a/drivers/pci/dwc/pcie-designware-ep.c
++++ b/drivers/pci/dwc/pcie-designware-ep.c
+@@ -197,20 +197,14 @@ static int dw_pcie_ep_map_addr(struct pci_epc *epc, phys_addr_t addr,
+ static int dw_pcie_ep_get_msi(struct pci_epc *epc)
+ {
+ int val;
+- u32 lower_addr;
+- u32 upper_addr;
+ struct dw_pcie_ep *ep = epc_get_drvdata(epc);
+ struct dw_pcie *pci = to_dw_pcie_from_ep(ep);
+
+- val = dw_pcie_readb_dbi(pci, MSI_MESSAGE_CONTROL);
+- val = (val & MSI_CAP_MME_MASK) >> MSI_CAP_MME_SHIFT;
+-
+- lower_addr = dw_pcie_readl_dbi(pci, MSI_MESSAGE_ADDR_L32);
+- upper_addr = dw_pcie_readl_dbi(pci, MSI_MESSAGE_ADDR_U32);
+-
+- if (!(lower_addr || upper_addr))
++ val = dw_pcie_readw_dbi(pci, MSI_MESSAGE_CONTROL);
++ if (!(val & MSI_CAP_MSI_EN_MASK))
+ return -EINVAL;
+
++ val = (val & MSI_CAP_MME_MASK) >> MSI_CAP_MME_SHIFT;
+ return val;
+ }
+
+diff --git a/drivers/pci/dwc/pcie-designware.h b/drivers/pci/dwc/pcie-designware.h
+index e5d9d77b778e..cb493bcae8b4 100644
+--- a/drivers/pci/dwc/pcie-designware.h
++++ b/drivers/pci/dwc/pcie-designware.h
+@@ -101,6 +101,7 @@
+ #define MSI_MESSAGE_CONTROL 0x52
+ #define MSI_CAP_MMC_SHIFT 1
+ #define MSI_CAP_MME_SHIFT 4
++#define MSI_CAP_MSI_EN_MASK 0x1
+ #define MSI_CAP_MME_MASK (7 << MSI_CAP_MME_SHIFT)
+ #define MSI_MESSAGE_ADDR_L32 0x54
+ #define MSI_MESSAGE_ADDR_U32 0x58
+diff --git a/drivers/pci/endpoint/pci-ep-cfs.c b/drivers/pci/endpoint/pci-ep-cfs.c
+index 4f74386c1ced..5508cd32afcd 100644
+--- a/drivers/pci/endpoint/pci-ep-cfs.c
++++ b/drivers/pci/endpoint/pci-ep-cfs.c
+@@ -109,7 +109,10 @@ static int pci_epc_epf_link(struct config_item *epc_item,
+ goto err_add_epf;
+
+ func_no = find_first_zero_bit(&epc_group->function_num_map,
+- sizeof(epc_group->function_num_map));
++ BITS_PER_LONG);
++ if (func_no >= BITS_PER_LONG)
++ return -EINVAL;
++
+ set_bit(func_no, &epc_group->function_num_map);
+ epf->func_no = func_no;
+
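The pci-ep-cfs.c bug above is a classic bits-versus-bytes confusion: `sizeof(epc_group->function_num_map)` is a byte count (8 on 64-bit), but `find_first_zero_bit()` expects a bit count, so only the first 8 function slots were ever usable; the fix passes `BITS_PER_LONG` and checks for exhaustion. A minimal single-word reimplementation of the helper's contract makes the difference visible:

```c
#include <assert.h>
#include <limits.h>

#define BITS_PER_LONG (sizeof(long) * CHAR_BIT)

/* Minimal find_first_zero_bit over a single word, mirroring the kernel
 * helper's contract: scan at most `nbits` bits and return `nbits` when
 * no zero bit is found (i.e. "full"). */
static unsigned long find_first_zero_bit(unsigned long word,
                                         unsigned long nbits)
{
    for (unsigned long i = 0; i < nbits; i++)
        if (!(word & (1UL << i)))
            return i;
    return nbits;
}
```

With bits 0..7 already set, passing `sizeof` as the limit reports the map as full even though bit 8 is free, while `BITS_PER_LONG` finds it.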
+diff --git a/drivers/pci/host/pcie-rcar.c b/drivers/pci/host/pcie-rcar.c
+index 52ab3cb0a0bf..95ca4a1feba4 100644
+--- a/drivers/pci/host/pcie-rcar.c
++++ b/drivers/pci/host/pcie-rcar.c
+@@ -1123,7 +1123,9 @@ static int rcar_pcie_probe(struct platform_device *pdev)
+
+ INIT_LIST_HEAD(&pcie->resources);
+
+- rcar_pcie_parse_request_of_pci_ranges(pcie);
++ err = rcar_pcie_parse_request_of_pci_ranges(pcie);
++ if (err)
++ goto err_free_bridge;
+
+ err = rcar_pcie_get_resources(pcie);
+ if (err < 0) {
+@@ -1178,6 +1180,7 @@ static int rcar_pcie_probe(struct platform_device *pdev)
+
+ err_free_resource_list:
+ pci_free_resource_list(&pcie->resources);
++err_free_bridge:
+ pci_free_host_bridge(bridge);
+
+ return err;
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index 9783e10da3a9..3b9b4d50cd98 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -43,18 +43,6 @@
+ #define ASPM_STATE_ALL (ASPM_STATE_L0S | ASPM_STATE_L1 | \
+ ASPM_STATE_L1SS)
+
+-/*
+- * When L1 substates are enabled, the LTR L1.2 threshold is a timing parameter
+- * that decides whether L1.1 or L1.2 is entered (Refer PCIe spec for details).
+- * Not sure is there is a way to "calculate" this on the fly, but maybe we
+- * could turn it into a parameter in future. This value has been taken from
+- * the following files from Intel's coreboot (which is the only code I found
+- * to have used this):
+- * https://www.coreboot.org/pipermail/coreboot-gerrit/2015-March/021134.html
+- * https://review.coreboot.org/#/c/8832/
+- */
+-#define LTR_L1_2_THRESHOLD_BITS ((1 << 21) | (1 << 23) | (1 << 30))
+-
+ struct aspm_latency {
+ u32 l0s; /* L0s latency (nsec) */
+ u32 l1; /* L1 latency (nsec) */
+@@ -333,6 +321,32 @@ static u32 calc_l1ss_pwron(struct pci_dev *pdev, u32 scale, u32 val)
+ return 0;
+ }
+
++static void encode_l12_threshold(u32 threshold_us, u32 *scale, u32 *value)
++{
++ u64 threshold_ns = threshold_us * 1000;
++
++ /* See PCIe r3.1, sec 7.33.3 and sec 6.18 */
++ if (threshold_ns < 32) {
++ *scale = 0;
++ *value = threshold_ns;
++ } else if (threshold_ns < 1024) {
++ *scale = 1;
++ *value = threshold_ns >> 5;
++ } else if (threshold_ns < 32768) {
++ *scale = 2;
++ *value = threshold_ns >> 10;
++ } else if (threshold_ns < 1048576) {
++ *scale = 3;
++ *value = threshold_ns >> 15;
++ } else if (threshold_ns < 33554432) {
++ *scale = 4;
++ *value = threshold_ns >> 20;
++ } else {
++ *scale = 5;
++ *value = threshold_ns >> 25;
++ }
++}
++
+ struct aspm_register_info {
+ u32 support:2;
+ u32 enabled:2;
+@@ -443,6 +457,7 @@ static void aspm_calc_l1ss_info(struct pcie_link_state *link,
+ struct aspm_register_info *dwreg)
+ {
+ u32 val1, val2, scale1, scale2;
++ u32 t_common_mode, t_power_on, l1_2_threshold, scale, value;
+
+ link->l1ss.up_cap_ptr = upreg->l1ss_cap_ptr;
+ link->l1ss.dw_cap_ptr = dwreg->l1ss_cap_ptr;
+@@ -454,16 +469,7 @@ static void aspm_calc_l1ss_info(struct pcie_link_state *link,
+ /* Choose the greater of the two Port Common_Mode_Restore_Times */
+ val1 = (upreg->l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8;
+ val2 = (dwreg->l1ss_cap & PCI_L1SS_CAP_CM_RESTORE_TIME) >> 8;
+- if (val1 > val2)
+- link->l1ss.ctl1 |= val1 << 8;
+- else
+- link->l1ss.ctl1 |= val2 << 8;
+-
+- /*
+- * We currently use LTR L1.2 threshold to be fixed constant picked from
+- * Intel's coreboot.
+- */
+- link->l1ss.ctl1 |= LTR_L1_2_THRESHOLD_BITS;
++ t_common_mode = max(val1, val2);
+
+ /* Choose the greater of the two Port T_POWER_ON times */
+ val1 = (upreg->l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_VALUE) >> 19;
+@@ -472,10 +478,27 @@ static void aspm_calc_l1ss_info(struct pcie_link_state *link,
+ scale2 = (dwreg->l1ss_cap & PCI_L1SS_CAP_P_PWR_ON_SCALE) >> 16;
+
+ if (calc_l1ss_pwron(link->pdev, scale1, val1) >
+- calc_l1ss_pwron(link->downstream, scale2, val2))
++ calc_l1ss_pwron(link->downstream, scale2, val2)) {
+ link->l1ss.ctl2 |= scale1 | (val1 << 3);
+- else
++ t_power_on = calc_l1ss_pwron(link->pdev, scale1, val1);
++ } else {
+ link->l1ss.ctl2 |= scale2 | (val2 << 3);
++ t_power_on = calc_l1ss_pwron(link->downstream, scale2, val2);
++ }
++
++ /*
++ * Set LTR_L1.2_THRESHOLD to the time required to transition the
++ * Link from L0 to L1.2 and back to L0 so we enter L1.2 only if
++ * downstream devices report (via LTR) that they can tolerate at
++ * least that much latency.
++ *
++ * Based on PCIe r3.1, sec 5.5.3.3.1, Figures 5-16 and 5-17, and
++ * Table 5-11. T(POWER_OFF) is at most 2us and T(L1.2) is at
++ * least 4us.
++ */
++ l1_2_threshold = 2 + 4 + t_common_mode + t_power_on;
++ encode_l12_threshold(l1_2_threshold, &scale, &value);
++ link->l1ss.ctl1 |= t_common_mode << 8 | scale << 29 | value << 16;
+ }
+
+ static void pcie_aspm_cap_init(struct pcie_link_state *link, int blacklist)
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 4c8d5b23e4d0..2c0dbfcff3e6 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -1189,19 +1189,16 @@ struct pinctrl_state *pinctrl_lookup_state(struct pinctrl *p,
+ EXPORT_SYMBOL_GPL(pinctrl_lookup_state);
+
+ /**
+- * pinctrl_select_state() - select/activate/program a pinctrl state to HW
++ * pinctrl_commit_state() - select/activate/program a pinctrl state to HW
+ * @p: the pinctrl handle for the device that requests configuration
+ * @state: the state handle to select/activate/program
+ */
+-int pinctrl_select_state(struct pinctrl *p, struct pinctrl_state *state)
++static int pinctrl_commit_state(struct pinctrl *p, struct pinctrl_state *state)
+ {
+ struct pinctrl_setting *setting, *setting2;
+ struct pinctrl_state *old_state = p->state;
+ int ret;
+
+- if (p->state == state)
+- return 0;
+-
+ if (p->state) {
+ /*
+ * For each pinmux setting in the old state, forget SW's record
+@@ -1265,6 +1262,19 @@ int pinctrl_select_state(struct pinctrl *p, struct pinctrl_state *state)
+
+ return ret;
+ }
++
++/**
++ * pinctrl_select_state() - select/activate/program a pinctrl state to HW
++ * @p: the pinctrl handle for the device that requests configuration
++ * @state: the state handle to select/activate/program
++ */
++int pinctrl_select_state(struct pinctrl *p, struct pinctrl_state *state)
++{
++ if (p->state == state)
++ return 0;
++
++ return pinctrl_commit_state(p, state);
++}
+ EXPORT_SYMBOL_GPL(pinctrl_select_state);
+
+ static void devm_pinctrl_release(struct device *dev, void *res)
+@@ -1430,7 +1440,7 @@ void pinctrl_unregister_map(const struct pinctrl_map *map)
+ int pinctrl_force_sleep(struct pinctrl_dev *pctldev)
+ {
+ if (!IS_ERR(pctldev->p) && !IS_ERR(pctldev->hog_sleep))
+- return pinctrl_select_state(pctldev->p, pctldev->hog_sleep);
++ return pinctrl_commit_state(pctldev->p, pctldev->hog_sleep);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(pinctrl_force_sleep);
+@@ -1442,7 +1452,7 @@ EXPORT_SYMBOL_GPL(pinctrl_force_sleep);
+ int pinctrl_force_default(struct pinctrl_dev *pctldev)
+ {
+ if (!IS_ERR(pctldev->p) && !IS_ERR(pctldev->hog_default))
+- return pinctrl_select_state(pctldev->p, pctldev->hog_default);
++ return pinctrl_commit_state(pctldev->p, pctldev->hog_default);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(pinctrl_force_default);
+diff --git a/drivers/pinctrl/pinctrl-rockchip.c b/drivers/pinctrl/pinctrl-rockchip.c
+index 2ba17548ad5b..073de6a9ed34 100644
+--- a/drivers/pinctrl/pinctrl-rockchip.c
++++ b/drivers/pinctrl/pinctrl-rockchip.c
+@@ -2014,8 +2014,16 @@ static int rockchip_gpio_get_direction(struct gpio_chip *chip, unsigned offset)
+ {
+ struct rockchip_pin_bank *bank = gpiochip_get_data(chip);
+ u32 data;
++ int ret;
+
++ ret = clk_enable(bank->clk);
++ if (ret < 0) {
++ dev_err(bank->drvdata->dev,
++ "failed to enable clock for bank %s\n", bank->name);
++ return ret;
++ }
+ data = readl_relaxed(bank->reg_base + GPIO_SWPORT_DDR);
++ clk_disable(bank->clk);
+
+ return !(data & BIT(offset));
+ }
+diff --git a/drivers/platform/chrome/cros_ec_proto.c b/drivers/platform/chrome/cros_ec_proto.c
+index 8dfa7fcb1248..e7bbdf947bbc 100644
+--- a/drivers/platform/chrome/cros_ec_proto.c
++++ b/drivers/platform/chrome/cros_ec_proto.c
+@@ -60,12 +60,14 @@ static int send_command(struct cros_ec_device *ec_dev,
+ struct cros_ec_command *msg)
+ {
+ int ret;
++ int (*xfer_fxn)(struct cros_ec_device *ec, struct cros_ec_command *msg);
+
+ if (ec_dev->proto_version > 2)
+- ret = ec_dev->pkt_xfer(ec_dev, msg);
++ xfer_fxn = ec_dev->pkt_xfer;
+ else
+- ret = ec_dev->cmd_xfer(ec_dev, msg);
++ xfer_fxn = ec_dev->cmd_xfer;
+
++ ret = (*xfer_fxn)(ec_dev, msg);
+ if (msg->result == EC_RES_IN_PROGRESS) {
+ int i;
+ struct cros_ec_command *status_msg;
+@@ -88,7 +90,7 @@ static int send_command(struct cros_ec_device *ec_dev,
+ for (i = 0; i < EC_COMMAND_RETRIES; i++) {
+ usleep_range(10000, 11000);
+
+- ret = ec_dev->cmd_xfer(ec_dev, status_msg);
++ ret = (*xfer_fxn)(ec_dev, status_msg);
+ if (ret < 0)
+ break;
+
+diff --git a/drivers/platform/chrome/cros_ec_sysfs.c b/drivers/platform/chrome/cros_ec_sysfs.c
+index f3baf9973989..24f1630a8b3f 100644
+--- a/drivers/platform/chrome/cros_ec_sysfs.c
++++ b/drivers/platform/chrome/cros_ec_sysfs.c
+@@ -187,7 +187,7 @@ static ssize_t show_ec_version(struct device *dev,
+ count += scnprintf(buf + count, PAGE_SIZE - count,
+ "Build info: EC error %d\n", msg->result);
+ else {
+- msg->data[sizeof(msg->data) - 1] = '\0';
++ msg->data[EC_HOST_PARAM_SIZE - 1] = '\0';
+ count += scnprintf(buf + count, PAGE_SIZE - count,
+ "Build info: %s\n", msg->data);
+ }
+diff --git a/drivers/rtc/rtc-ac100.c b/drivers/rtc/rtc-ac100.c
+index 9e336184491c..0e358d4b6738 100644
+--- a/drivers/rtc/rtc-ac100.c
++++ b/drivers/rtc/rtc-ac100.c
+@@ -567,6 +567,12 @@ static int ac100_rtc_probe(struct platform_device *pdev)
+ return chip->irq;
+ }
+
++ chip->rtc = devm_rtc_allocate_device(&pdev->dev);
++ if (IS_ERR(chip->rtc))
++ return PTR_ERR(chip->rtc);
++
++ chip->rtc->ops = &ac100_rtc_ops;
++
+ ret = devm_request_threaded_irq(&pdev->dev, chip->irq, NULL,
+ ac100_rtc_irq,
+ IRQF_SHARED | IRQF_ONESHOT,
+@@ -586,17 +592,16 @@ static int ac100_rtc_probe(struct platform_device *pdev)
+ /* clear counter alarm pending interrupts */
+ regmap_write(chip->regmap, AC100_ALM_INT_STA, AC100_ALM_INT_ENABLE);
+
+- chip->rtc = devm_rtc_device_register(&pdev->dev, "rtc-ac100",
+- &ac100_rtc_ops, THIS_MODULE);
+- if (IS_ERR(chip->rtc)) {
+- dev_err(&pdev->dev, "unable to register device\n");
+- return PTR_ERR(chip->rtc);
+- }
+-
+ ret = ac100_rtc_register_clks(chip);
+ if (ret)
+ return ret;
+
++ ret = rtc_register_device(chip->rtc);
++ if (ret) {
++ dev_err(&pdev->dev, "unable to register device\n");
++ return ret;
++ }
++
+ dev_info(&pdev->dev, "RTC enabled\n");
+
+ return 0;
+diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
+index f77673ab4a84..3a8ec82e685a 100644
+--- a/drivers/scsi/lpfc/lpfc_ct.c
++++ b/drivers/scsi/lpfc/lpfc_ct.c
+@@ -471,6 +471,7 @@ lpfc_prep_node_fc4type(struct lpfc_vport *vport, uint32_t Did, uint8_t fc4_type)
+ "Parse GID_FTrsp: did:x%x flg:x%x x%x",
+ Did, ndlp->nlp_flag, vport->fc_flag);
+
++ ndlp->nlp_fc4_type &= ~(NLP_FC4_FCP | NLP_FC4_NVME);
+ /* By default, the driver expects to support FCP FC4 */
+ if (fc4_type == FC_TYPE_FCP)
+ ndlp->nlp_fc4_type |= NLP_FC4_FCP;
+diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c
+index 39d5b146202e..537ee0c44198 100644
+--- a/drivers/scsi/lpfc/lpfc_els.c
++++ b/drivers/scsi/lpfc/lpfc_els.c
+@@ -2088,6 +2088,10 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ ndlp = (struct lpfc_nodelist *) cmdiocb->context1;
+ spin_lock_irq(shost->host_lock);
+ ndlp->nlp_flag &= ~NLP_PRLI_SND;
++
++ /* Driver supports multiple FC4 types. Counters matter. */
++ vport->fc_prli_sent--;
++ ndlp->fc4_prli_sent--;
+ spin_unlock_irq(shost->host_lock);
+
+ lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD,
+@@ -2095,9 +2099,6 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ irsp->ulpStatus, irsp->un.ulpWord[4],
+ ndlp->nlp_DID);
+
+- /* Ddriver supports multiple FC4 types. Counters matter. */
+- vport->fc_prli_sent--;
+-
+ /* PRLI completes to NPort <nlp_DID> */
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS,
+ "0103 PRLI completes to NPort x%06x "
+@@ -2111,7 +2112,6 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+
+ if (irsp->ulpStatus) {
+ /* Check for retry */
+- ndlp->fc4_prli_sent--;
+ if (lpfc_els_retry(phba, cmdiocb, rspiocb)) {
+ /* ELS command is being retried */
+ goto out;
+@@ -2190,6 +2190,15 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ ndlp->nlp_fc4_type |= NLP_FC4_NVME;
+ local_nlp_type = ndlp->nlp_fc4_type;
+
++ /* This routine will issue 1 or 2 PRLIs, so zero all the ndlp
++ * fields here before any of them can complete.
++ */
++ ndlp->nlp_type &= ~(NLP_FCP_TARGET | NLP_FCP_INITIATOR);
++ ndlp->nlp_type &= ~(NLP_NVME_TARGET | NLP_NVME_INITIATOR);
++ ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE;
++ ndlp->nlp_flag &= ~NLP_FIRSTBURST;
++ ndlp->nvme_fb_size = 0;
++
+ send_next_prli:
+ if (local_nlp_type & NLP_FC4_FCP) {
+ /* Payload is 4 + 16 = 20 x14 bytes. */
+@@ -2298,6 +2307,13 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ elsiocb->iocb_cmpl = lpfc_cmpl_els_prli;
+ spin_lock_irq(shost->host_lock);
+ ndlp->nlp_flag |= NLP_PRLI_SND;
++
++ /* The vport counters are used for lpfc_scan_finished, but
++ * the ndlp is used to track outstanding PRLIs for different
++ * FC4 types.
++ */
++ vport->fc_prli_sent++;
++ ndlp->fc4_prli_sent++;
+ spin_unlock_irq(shost->host_lock);
+ if (lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, elsiocb, 0) ==
+ IOCB_ERROR) {
+@@ -2308,12 +2324,6 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ return 1;
+ }
+
+- /* The vport counters are used for lpfc_scan_finished, but
+- * the ndlp is used to track outstanding PRLIs for different
+- * FC4 types.
+- */
+- vport->fc_prli_sent++;
+- ndlp->fc4_prli_sent++;
+
+ /* The driver supports 2 FC4 types. Make sure
+ * a PRLI is issued for all types before exiting.
+diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c
+index b6957d944b9a..d489f6827cc1 100644
+--- a/drivers/scsi/lpfc/lpfc_nportdisc.c
++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c
+@@ -390,6 +390,11 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ break;
+ }
+
++ ndlp->nlp_type &= ~(NLP_FCP_TARGET | NLP_FCP_INITIATOR);
++ ndlp->nlp_type &= ~(NLP_NVME_TARGET | NLP_NVME_INITIATOR);
++ ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE;
++ ndlp->nlp_flag &= ~NLP_FIRSTBURST;
++
+ /* Check for Nport to NPort pt2pt protocol */
+ if ((vport->fc_flag & FC_PT2PT) &&
+ !(vport->fc_flag & FC_PT2PT_PLOGI)) {
+@@ -742,9 +747,6 @@ lpfc_rcv_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ lp = (uint32_t *) pcmd->virt;
+ npr = (PRLI *) ((uint8_t *) lp + sizeof (uint32_t));
+
+- ndlp->nlp_type &= ~(NLP_FCP_TARGET | NLP_FCP_INITIATOR);
+- ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE;
+- ndlp->nlp_flag &= ~NLP_FIRSTBURST;
+ if ((npr->prliType == PRLI_FCP_TYPE) ||
+ (npr->prliType == PRLI_NVME_TYPE)) {
+ if (npr->initiatorFunc) {
+@@ -769,8 +771,12 @@ lpfc_rcv_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ * type. Target mode does not issue gft_id so doesn't get
+ * the fc4 type set until now.
+ */
+- if ((phba->nvmet_support) && (npr->prliType == PRLI_NVME_TYPE))
++ if (phba->nvmet_support && (npr->prliType == PRLI_NVME_TYPE)) {
+ ndlp->nlp_fc4_type |= NLP_FC4_NVME;
++ lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
++ }
++ if (npr->prliType == PRLI_FCP_TYPE)
++ ndlp->nlp_fc4_type |= NLP_FC4_FCP;
+ }
+ if (rport) {
+ /* We need to update the rport role values */
+@@ -1552,7 +1558,6 @@ lpfc_rcv_prli_reglogin_issue(struct lpfc_vport *vport,
+ if (ndlp->nlp_flag & NLP_RPI_REGISTERED) {
+ lpfc_rcv_prli(vport, ndlp, cmdiocb);
+ lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp);
+- lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
+ } else {
+ /* RPI registration has not completed. Reject the PRLI
+ * to prevent an illegal state transition when the
+@@ -1564,10 +1569,11 @@ lpfc_rcv_prli_reglogin_issue(struct lpfc_vport *vport,
+ ndlp->nlp_rpi, ndlp->nlp_state,
+ ndlp->nlp_flag);
+ memset(&stat, 0, sizeof(struct ls_rjt));
+- stat.un.b.lsRjtRsnCode = LSRJT_UNABLE_TPC;
+- stat.un.b.lsRjtRsnCodeExp = LSEXP_CMD_IN_PROGRESS;
++ stat.un.b.lsRjtRsnCode = LSRJT_LOGICAL_BSY;
++ stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE;
+ lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb,
+ ndlp, NULL);
++ return ndlp->nlp_state;
+ }
+ } else {
+ /* Initiator mode. */
+@@ -1922,13 +1928,6 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ return ndlp->nlp_state;
+ }
+
+- /* Check out PRLI rsp */
+- ndlp->nlp_type &= ~(NLP_FCP_TARGET | NLP_FCP_INITIATOR);
+- ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE;
+-
+- /* NVME or FCP first burst must be negotiated for each PRLI. */
+- ndlp->nlp_flag &= ~NLP_FIRSTBURST;
+- ndlp->nvme_fb_size = 0;
+ if (npr && (npr->acceptRspCode == PRLI_REQ_EXECUTED) &&
+ (npr->prliType == PRLI_FCP_TYPE)) {
+ lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC,
+@@ -1945,8 +1944,6 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ if (npr->Retry)
+ ndlp->nlp_fcp_info |= NLP_FCP_2_DEVICE;
+
+- /* PRLI completed. Decrement count. */
+- ndlp->fc4_prli_sent--;
+ } else if (nvpr &&
+ (bf_get_be32(prli_acc_rsp_code, nvpr) ==
+ PRLI_REQ_EXECUTED) &&
+@@ -1991,8 +1988,6 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ be32_to_cpu(nvpr->word5),
+ ndlp->nlp_flag, ndlp->nlp_fcp_info,
+ ndlp->nlp_type);
+- /* PRLI completed. Decrement count. */
+- ndlp->fc4_prli_sent--;
+ }
+ if (!(ndlp->nlp_type & NLP_FCP_TARGET) &&
+ (vport->port_type == LPFC_NPIV_PORT) &&
+@@ -2016,7 +2011,8 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp,
+ ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE;
+ if (ndlp->nlp_type & (NLP_FCP_TARGET | NLP_NVME_TARGET))
+ lpfc_nlp_set_state(vport, ndlp, NLP_STE_MAPPED_NODE);
+- else
++ else if (ndlp->nlp_type &
++ (NLP_FCP_INITIATOR | NLP_NVME_INITIATOR))
+ lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE);
+ } else
+ lpfc_printf_vlog(vport,
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 65dc4fea6352..be2992509b8c 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -216,36 +216,30 @@ inline void megasas_return_cmd_fusion(struct megasas_instance *instance,
+ /**
+ * megasas_fire_cmd_fusion - Sends command to the FW
+ * @instance: Adapter soft state
+- * @req_desc: 32bit or 64bit Request descriptor
++ * @req_desc: 64bit Request descriptor
+ *
+- * Perform PCI Write. Ventura supports 32 bit Descriptor.
+- * Prior to Ventura (12G) MR controller supports 64 bit Descriptor.
++ * Perform PCI Write.
+ */
+
+ static void
+ megasas_fire_cmd_fusion(struct megasas_instance *instance,
+ union MEGASAS_REQUEST_DESCRIPTOR_UNION *req_desc)
+ {
+- if (instance->adapter_type == VENTURA_SERIES)
+- writel(le32_to_cpu(req_desc->u.low),
+- &instance->reg_set->inbound_single_queue_port);
+- else {
+ #if defined(writeq) && defined(CONFIG_64BIT)
+- u64 req_data = (((u64)le32_to_cpu(req_desc->u.high) << 32) |
+- le32_to_cpu(req_desc->u.low));
++ u64 req_data = (((u64)le32_to_cpu(req_desc->u.high) << 32) |
++ le32_to_cpu(req_desc->u.low));
+
+- writeq(req_data, &instance->reg_set->inbound_low_queue_port);
++ writeq(req_data, &instance->reg_set->inbound_low_queue_port);
+ #else
+- unsigned long flags;
+- spin_lock_irqsave(&instance->hba_lock, flags);
+- writel(le32_to_cpu(req_desc->u.low),
+- &instance->reg_set->inbound_low_queue_port);
+- writel(le32_to_cpu(req_desc->u.high),
+- &instance->reg_set->inbound_high_queue_port);
+- mmiowb();
+- spin_unlock_irqrestore(&instance->hba_lock, flags);
++ unsigned long flags;
++ spin_lock_irqsave(&instance->hba_lock, flags);
++ writel(le32_to_cpu(req_desc->u.low),
++ &instance->reg_set->inbound_low_queue_port);
++ writel(le32_to_cpu(req_desc->u.high),
++ &instance->reg_set->inbound_high_queue_port);
++ mmiowb();
++ spin_unlock_irqrestore(&instance->hba_lock, flags);
+ #endif
+- }
+ }
+
+ /**
+@@ -982,7 +976,6 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
+ const char *sys_info;
+ MFI_CAPABILITIES *drv_ops;
+ u32 scratch_pad_2;
+- unsigned long flags;
+ struct timeval tv;
+ bool cur_fw_64bit_dma_capable;
+
+@@ -1121,14 +1114,7 @@ megasas_ioc_init_fusion(struct megasas_instance *instance)
+ break;
+ }
+
+- /* For Ventura also IOC INIT required 64 bit Descriptor write. */
+- spin_lock_irqsave(&instance->hba_lock, flags);
+- writel(le32_to_cpu(req_desc.u.low),
+- &instance->reg_set->inbound_low_queue_port);
+- writel(le32_to_cpu(req_desc.u.high),
+- &instance->reg_set->inbound_high_queue_port);
+- mmiowb();
+- spin_unlock_irqrestore(&instance->hba_lock, flags);
++ megasas_fire_cmd_fusion(instance, &req_desc);
+
+ wait_and_poll(instance, cmd, MFI_POLL_TIMEOUT_SECS);
+
+diff --git a/drivers/soc/qcom/smsm.c b/drivers/soc/qcom/smsm.c
+index 403bea9d546b..50214b620865 100644
+--- a/drivers/soc/qcom/smsm.c
++++ b/drivers/soc/qcom/smsm.c
+@@ -496,8 +496,10 @@ static int qcom_smsm_probe(struct platform_device *pdev)
+ if (!smsm->hosts)
+ return -ENOMEM;
+
+- local_node = of_find_node_with_property(of_node_get(pdev->dev.of_node),
+- "#qcom,smem-state-cells");
++ for_each_child_of_node(pdev->dev.of_node, local_node) {
++ if (of_find_property(local_node, "#qcom,smem-state-cells", NULL))
++ break;
++ }
+ if (!local_node) {
+ dev_err(&pdev->dev, "no state entry\n");
+ return -EINVAL;
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index fcd261f98b9f..bf34e9b238af 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -55,6 +55,8 @@ struct sh_msiof_spi_priv {
+ void *rx_dma_page;
+ dma_addr_t tx_dma_addr;
+ dma_addr_t rx_dma_addr;
++ bool native_cs_inited;
++ bool native_cs_high;
+ bool slave_aborted;
+ };
+
+@@ -528,8 +530,7 @@ static int sh_msiof_spi_setup(struct spi_device *spi)
+ {
+ struct device_node *np = spi->master->dev.of_node;
+ struct sh_msiof_spi_priv *p = spi_master_get_devdata(spi->master);
+-
+- pm_runtime_get_sync(&p->pdev->dev);
++ u32 clr, set, tmp;
+
+ if (!np) {
+ /*
+@@ -539,19 +540,31 @@ static int sh_msiof_spi_setup(struct spi_device *spi)
+ spi->cs_gpio = (uintptr_t)spi->controller_data;
+ }
+
+- /* Configure pins before deasserting CS */
+- sh_msiof_spi_set_pin_regs(p, !!(spi->mode & SPI_CPOL),
+- !!(spi->mode & SPI_CPHA),
+- !!(spi->mode & SPI_3WIRE),
+- !!(spi->mode & SPI_LSB_FIRST),
+- !!(spi->mode & SPI_CS_HIGH));
+-
+- if (spi->cs_gpio >= 0)
++ if (spi->cs_gpio >= 0) {
+ gpio_set_value(spi->cs_gpio, !(spi->mode & SPI_CS_HIGH));
++ return 0;
++ }
+
++ if (spi_controller_is_slave(p->master))
++ return 0;
+
+- pm_runtime_put(&p->pdev->dev);
++ if (p->native_cs_inited &&
++ (p->native_cs_high == !!(spi->mode & SPI_CS_HIGH)))
++ return 0;
+
++ /* Configure native chip select mode/polarity early */
++ clr = MDR1_SYNCMD_MASK;
++ set = MDR1_TRMD | TMDR1_PCON | MDR1_SYNCMD_SPI;
++ if (spi->mode & SPI_CS_HIGH)
++ clr |= BIT(MDR1_SYNCAC_SHIFT);
++ else
++ set |= BIT(MDR1_SYNCAC_SHIFT);
++ pm_runtime_get_sync(&p->pdev->dev);
++ tmp = sh_msiof_read(p, TMDR1) & ~clr;
++ sh_msiof_write(p, TMDR1, tmp | set);
++ pm_runtime_put(&p->pdev->dev);
++ p->native_cs_high = spi->mode & SPI_CS_HIGH;
++ p->native_cs_inited = true;
+ return 0;
+ }
+
+diff --git a/drivers/staging/android/ashmem.c b/drivers/staging/android/ashmem.c
+index d9941b0c468d..893b2836089c 100644
+--- a/drivers/staging/android/ashmem.c
++++ b/drivers/staging/android/ashmem.c
+@@ -709,16 +709,14 @@ static int ashmem_pin_unpin(struct ashmem_area *asma, unsigned long cmd,
+ size_t pgstart, pgend;
+ int ret = -EINVAL;
+
++ if (unlikely(copy_from_user(&pin, p, sizeof(pin))))
++ return -EFAULT;
++
+ mutex_lock(&ashmem_mutex);
+
+ if (unlikely(!asma->file))
+ goto out_unlock;
+
+- if (unlikely(copy_from_user(&pin, p, sizeof(pin)))) {
+- ret = -EFAULT;
+- goto out_unlock;
+- }
+-
+ /* per custom, you can pass zero for len to mean "everything onward" */
+ if (!pin.len)
+ pin.len = PAGE_ALIGN(asma->size) - pin.offset;
+diff --git a/drivers/tty/Kconfig b/drivers/tty/Kconfig
+index cc2b4d9433ed..b811442c5ce6 100644
+--- a/drivers/tty/Kconfig
++++ b/drivers/tty/Kconfig
+@@ -394,10 +394,14 @@ config GOLDFISH_TTY
+ depends on GOLDFISH
+ select SERIAL_CORE
+ select SERIAL_CORE_CONSOLE
+- select SERIAL_EARLYCON
+ help
+ Console and system TTY driver for the Goldfish virtual platform.
+
++config GOLDFISH_TTY_EARLY_CONSOLE
++ bool
++ default y if GOLDFISH_TTY=y
++ select SERIAL_EARLYCON
++
+ config DA_TTY
+ bool "DA TTY"
+ depends on METAG_DA
+diff --git a/drivers/tty/goldfish.c b/drivers/tty/goldfish.c
+index 7f657bb5113c..1c1bd0afcd48 100644
+--- a/drivers/tty/goldfish.c
++++ b/drivers/tty/goldfish.c
+@@ -433,6 +433,7 @@ static int goldfish_tty_remove(struct platform_device *pdev)
+ return 0;
+ }
+
++#ifdef CONFIG_GOLDFISH_TTY_EARLY_CONSOLE
+ static void gf_early_console_putchar(struct uart_port *port, int ch)
+ {
+ __raw_writel(ch, port->membase);
+@@ -456,6 +457,7 @@ static int __init gf_earlycon_setup(struct earlycon_device *device,
+ }
+
+ OF_EARLYCON_DECLARE(early_gf_tty, "google,goldfish-tty", gf_earlycon_setup);
++#endif
+
+ static const struct of_device_id goldfish_tty_of_match[] = {
+ { .compatible = "google,goldfish-tty", },
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index 7070203e3157..cd1b94a0f451 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -509,7 +509,8 @@ static int dw8250_probe(struct platform_device *pdev)
+ /* If no clock rate is defined, fail. */
+ if (!p->uartclk) {
+ dev_err(dev, "clock rate not defined\n");
+- return -EINVAL;
++ err = -EINVAL;
++ goto err_clk;
+ }
+
+ data->pclk = devm_clk_get(dev, "apb_pclk");
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 38850672c57e..a93f77ab3da0 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -3387,11 +3387,9 @@ static int serial_pci_is_class_communication(struct pci_dev *dev)
+ /*
+ * If it is not a communications device or the programming
+ * interface is greater than 6, give up.
+- *
+- * (Should we try to make guesses for multiport serial devices
+- * later?)
+ */
+ if ((((dev->class >> 8) != PCI_CLASS_COMMUNICATION_SERIAL) &&
++ ((dev->class >> 8) != PCI_CLASS_COMMUNICATION_MULTISERIAL) &&
+ ((dev->class >> 8) != PCI_CLASS_COMMUNICATION_MODEM)) ||
+ (dev->class & 0xff) > 6)
+ return -ENODEV;
+@@ -3428,6 +3426,12 @@ serial_pci_guess_board(struct pci_dev *dev, struct pciserial_board *board)
+ {
+ int num_iomem, num_port, first_port = -1, i;
+
++ /*
++ * Should we try to make guesses for multiport serial devices later?
++ */
++ if ((dev->class >> 8) == PCI_CLASS_COMMUNICATION_MULTISERIAL)
++ return -ENODEV;
++
+ num_iomem = num_port = 0;
+ for (i = 0; i < PCI_NUM_BAR_RESOURCES; i++) {
+ if (pci_resource_flags(dev, i) & IORESOURCE_IO) {
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 4b506f2d3522..688bd25aa6b0 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -1482,6 +1482,8 @@ static void release_tty(struct tty_struct *tty, int idx)
+ if (tty->link)
+ tty->link->port->itty = NULL;
+ tty_buffer_cancel_work(tty->port);
++ if (tty->link)
++ tty_buffer_cancel_work(tty->link->port);
+
+ tty_kref_put(tty->link);
+ tty_kref_put(tty);
+diff --git a/drivers/video/console/vgacon.c b/drivers/video/console/vgacon.c
+index 445b1dc5d441..a17ba1465815 100644
+--- a/drivers/video/console/vgacon.c
++++ b/drivers/video/console/vgacon.c
+@@ -422,7 +422,10 @@ static const char *vgacon_startup(void)
+ vga_video_port_val = VGA_CRT_DM;
+ if ((screen_info.orig_video_ega_bx & 0xff) != 0x10) {
+ static struct resource ega_console_resource =
+- { .name = "ega", .start = 0x3B0, .end = 0x3BF };
++ { .name = "ega",
++ .flags = IORESOURCE_IO,
++ .start = 0x3B0,
++ .end = 0x3BF };
+ vga_video_type = VIDEO_TYPE_EGAM;
+ vga_vram_size = 0x8000;
+ display_desc = "EGA+";
+@@ -430,9 +433,15 @@ static const char *vgacon_startup(void)
+ &ega_console_resource);
+ } else {
+ static struct resource mda1_console_resource =
+- { .name = "mda", .start = 0x3B0, .end = 0x3BB };
++ { .name = "mda",
++ .flags = IORESOURCE_IO,
++ .start = 0x3B0,
++ .end = 0x3BB };
+ static struct resource mda2_console_resource =
+- { .name = "mda", .start = 0x3BF, .end = 0x3BF };
++ { .name = "mda",
++ .flags = IORESOURCE_IO,
++ .start = 0x3BF,
++ .end = 0x3BF };
+ vga_video_type = VIDEO_TYPE_MDA;
+ vga_vram_size = 0x2000;
+ display_desc = "*MDA";
+@@ -454,15 +463,21 @@ static const char *vgacon_startup(void)
+ vga_vram_size = 0x8000;
+
+ if (!screen_info.orig_video_isVGA) {
+- static struct resource ega_console_resource
+- = { .name = "ega", .start = 0x3C0, .end = 0x3DF };
++ static struct resource ega_console_resource =
++ { .name = "ega",
++ .flags = IORESOURCE_IO,
++ .start = 0x3C0,
++ .end = 0x3DF };
+ vga_video_type = VIDEO_TYPE_EGAC;
+ display_desc = "EGA";
+ request_resource(&ioport_resource,
+ &ega_console_resource);
+ } else {
+- static struct resource vga_console_resource
+- = { .name = "vga+", .start = 0x3C0, .end = 0x3DF };
++ static struct resource vga_console_resource =
++ { .name = "vga+",
++ .flags = IORESOURCE_IO,
++ .start = 0x3C0,
++ .end = 0x3DF };
+ vga_video_type = VIDEO_TYPE_VGAC;
+ display_desc = "VGA+";
+ request_resource(&ioport_resource,
+@@ -494,7 +509,10 @@ static const char *vgacon_startup(void)
+ }
+ } else {
+ static struct resource cga_console_resource =
+- { .name = "cga", .start = 0x3D4, .end = 0x3D5 };
++ { .name = "cga",
++ .flags = IORESOURCE_IO,
++ .start = 0x3D4,
++ .end = 0x3D5 };
+ vga_video_type = VIDEO_TYPE_CGA;
+ vga_vram_size = 0x2000;
+ display_desc = "*CGA";
+diff --git a/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td028ttec1.c b/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td028ttec1.c
+index 57e9e146ff74..4aeb908f2d1e 100644
+--- a/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td028ttec1.c
++++ b/drivers/video/fbdev/omap2/omapfb/displays/panel-tpo-td028ttec1.c
+@@ -455,6 +455,8 @@ static int td028ttec1_panel_remove(struct spi_device *spi)
+ }
+
+ static const struct of_device_id td028ttec1_of_match[] = {
++ { .compatible = "omapdss,tpo,td028ttec1", },
++ /* keep to not break older DTB */
+ { .compatible = "omapdss,toppoly,td028ttec1", },
+ {},
+ };
+@@ -474,6 +476,7 @@ static struct spi_driver td028ttec1_spi_driver = {
+
+ module_spi_driver(td028ttec1_spi_driver);
+
++MODULE_ALIAS("spi:tpo,td028ttec1");
+ MODULE_ALIAS("spi:toppoly,td028ttec1");
+ MODULE_AUTHOR("H. Nikolaus Schaller <hns@goldelico.com>");
+ MODULE_DESCRIPTION("Toppoly TD028TTEC1 panel driver");
+diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
+index 1e971a50d7fb..95b96f3cb36f 100644
+--- a/drivers/watchdog/watchdog_dev.c
++++ b/drivers/watchdog/watchdog_dev.c
+@@ -769,6 +769,7 @@ static int watchdog_open(struct inode *inode, struct file *file)
+ {
+ struct watchdog_core_data *wd_data;
+ struct watchdog_device *wdd;
++ bool hw_running;
+ int err;
+
+ /* Get the corresponding watchdog device */
+@@ -788,7 +789,8 @@ static int watchdog_open(struct inode *inode, struct file *file)
+ * If the /dev/watchdog device is open, we don't want the module
+ * to be unloaded.
+ */
+- if (!watchdog_hw_running(wdd) && !try_module_get(wdd->ops->owner)) {
++ hw_running = watchdog_hw_running(wdd);
++ if (!hw_running && !try_module_get(wdd->ops->owner)) {
+ err = -EBUSY;
+ goto out_clear;
+ }
+@@ -799,7 +801,7 @@ static int watchdog_open(struct inode *inode, struct file *file)
+
+ file->private_data = wd_data;
+
+- if (!watchdog_hw_running(wdd))
++ if (!hw_running)
+ kref_get(&wd_data->kref);
+
+ /* dev/watchdog is a virtual (and thus non-seekable) filesystem */
+@@ -965,14 +967,13 @@ static int watchdog_cdev_register(struct watchdog_device *wdd, dev_t devno)
+ * and schedule an immediate ping.
+ */
+ if (watchdog_hw_running(wdd)) {
+- if (handle_boot_enabled) {
+- __module_get(wdd->ops->owner);
+- kref_get(&wd_data->kref);
++ __module_get(wdd->ops->owner);
++ kref_get(&wd_data->kref);
++ if (handle_boot_enabled)
+ queue_delayed_work(watchdog_wq, &wd_data->work, 0);
+- } else {
++ else
+ pr_info("watchdog%d running and kernel based pre-userspace handler disabled\n",
+- wdd->id);
+- }
++ wdd->id);
+ }
+
+ return 0;
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index 008ea0b627d0..effeeb4f556f 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1363,14 +1363,14 @@ nfsd4_layoutget(struct svc_rqst *rqstp,
+ const struct nfsd4_layout_ops *ops;
+ struct nfs4_layout_stateid *ls;
+ __be32 nfserr;
+- int accmode;
++ int accmode = NFSD_MAY_READ_IF_EXEC;
+
+ switch (lgp->lg_seg.iomode) {
+ case IOMODE_READ:
+- accmode = NFSD_MAY_READ;
++ accmode |= NFSD_MAY_READ;
+ break;
+ case IOMODE_RW:
+- accmode = NFSD_MAY_READ | NFSD_MAY_WRITE;
++ accmode |= NFSD_MAY_READ | NFSD_MAY_WRITE;
+ break;
+ default:
+ dprintk("%s: invalid iomode %d\n",
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index b82c4ae92411..c8198ed8b180 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -442,8 +442,8 @@ struct mlx5_core_srq {
+ struct mlx5_core_rsc_common common; /* must be first */
+ u32 srqn;
+ int max;
+- int max_gs;
+- int max_avail_gather;
++ size_t max_gs;
++ size_t max_avail_gather;
+ int wqe_shift;
+ void (*event) (struct mlx5_core_srq *, enum mlx5_event);
+
+diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
+index 4c223ab30293..04b2f613dd06 100644
+--- a/include/uapi/linux/bpf.h
++++ b/include/uapi/linux/bpf.h
+@@ -995,7 +995,8 @@ struct bpf_perf_event_value {
+ #define BPF_DEVCG_DEV_CHAR (1ULL << 1)
+
+ struct bpf_cgroup_dev_ctx {
+- __u32 access_type; /* (access << 16) | type */
++ /* access_type encoded as (BPF_DEVCG_ACC_* << 16) | BPF_DEVCG_DEV_* */
++ __u32 access_type;
+ __u32 major;
+ __u32 minor;
+ };
+diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
+index b789ab78d28f..c1c0b60d3f2f 100644
+--- a/kernel/bpf/cgroup.c
++++ b/kernel/bpf/cgroup.c
+@@ -568,6 +568,8 @@ static bool cgroup_dev_is_valid_access(int off, int size,
+ enum bpf_access_type type,
+ struct bpf_insn_access_aux *info)
+ {
++ const int size_default = sizeof(__u32);
++
+ if (type == BPF_WRITE)
+ return false;
+
+@@ -576,8 +578,17 @@ static bool cgroup_dev_is_valid_access(int off, int size,
+ /* The verifier guarantees that size > 0. */
+ if (off % size != 0)
+ return false;
+- if (size != sizeof(__u32))
+- return false;
++
++ switch (off) {
++ case bpf_ctx_range(struct bpf_cgroup_dev_ctx, access_type):
++ bpf_ctx_record_field_size(info, size_default);
++ if (!bpf_ctx_narrow_access_ok(off, size, size_default))
++ return false;
++ break;
++ default:
++ if (size != size_default)
++ return false;
++ }
+
+ return true;
+ }
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 45ffd3d045d2..01bbbfe2c2a7 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -303,8 +303,10 @@ static int erspan_rcv(struct sk_buff *skb, struct tnl_ptk_info *tpi,
+ return PACKET_REJECT;
+
+ md = ip_tunnel_info_opts(&tun_dst->u.tun_info);
+- if (!md)
++ if (!md) {
++ dst_release((struct dst_entry *)tun_dst);
+ return PACKET_REJECT;
++ }
+
+ md->index = index;
+ info = &tun_dst->u.tun_info;
+@@ -408,11 +410,13 @@ static int gre_rcv(struct sk_buff *skb)
+ if (unlikely(tpi.proto == htons(ETH_P_ERSPAN))) {
+ if (erspan_rcv(skb, &tpi, hdr_len) == PACKET_RCVD)
+ return 0;
++ goto out;
+ }
+
+ if (ipgre_rcv(skb, &tpi, hdr_len) == PACKET_RCVD)
+ return 0;
+
++out:
+ icmp_send(skb, ICMP_DEST_UNREACH, ICMP_PORT_UNREACH, 0);
+ drop:
+ kfree_skb(skb);
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 580912de16c2..871c7e28cccf 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -2440,15 +2440,12 @@ bool tcp_schedule_loss_probe(struct sock *sk, bool advancing_rto)
+
+ early_retrans = sock_net(sk)->ipv4.sysctl_tcp_early_retrans;
+ /* Schedule a loss probe in 2*RTT for SACK capable connections
+- * in Open state, that are either limited by cwnd or application.
++ * not in loss recovery, that are either limited by cwnd or application.
+ */
+ if ((early_retrans != 3 && early_retrans != 4) ||
+ !tp->packets_out || !tcp_is_sack(tp) ||
+- icsk->icsk_ca_state != TCP_CA_Open)
+- return false;
+-
+- if ((tp->snd_cwnd > tcp_packets_in_flight(tp)) &&
+- !tcp_write_queue_empty(sk))
++ (icsk->icsk_ca_state != TCP_CA_Open &&
++ icsk->icsk_ca_state != TCP_CA_CWR))
+ return false;
+
+ /* Probe timeout is 2*rtt. Add minimum RTO to account
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index 8c184f84f353..fa3ae1cb50d3 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -626,6 +626,7 @@ static void vti6_link_config(struct ip6_tnl *t)
+ {
+ struct net_device *dev = t->dev;
+ struct __ip6_tnl_parm *p = &t->parms;
++ struct net_device *tdev = NULL;
+
+ memcpy(dev->dev_addr, &p->laddr, sizeof(struct in6_addr));
+ memcpy(dev->broadcast, &p->raddr, sizeof(struct in6_addr));
+@@ -638,6 +639,25 @@ static void vti6_link_config(struct ip6_tnl *t)
+ dev->flags |= IFF_POINTOPOINT;
+ else
+ dev->flags &= ~IFF_POINTOPOINT;
++
++ if (p->flags & IP6_TNL_F_CAP_XMIT) {
++ int strict = (ipv6_addr_type(&p->raddr) &
++ (IPV6_ADDR_MULTICAST | IPV6_ADDR_LINKLOCAL));
++ struct rt6_info *rt = rt6_lookup(t->net,
++ &p->raddr, &p->laddr,
++ p->link, strict);
++
++ if (rt)
++ tdev = rt->dst.dev;
++ ip6_rt_put(rt);
++ }
++
++ if (!tdev && p->link)
++ tdev = __dev_get_by_index(t->net, p->link);
++
++ if (tdev)
++ dev->mtu = max_t(int, tdev->mtu - dev->hard_header_len,
++ IPV6_MIN_MTU);
+ }
+
+ /**
+diff --git a/security/Kconfig b/security/Kconfig
+index b0cb9a5f9448..3709db95027f 100644
+--- a/security/Kconfig
++++ b/security/Kconfig
+@@ -154,6 +154,7 @@ config HARDENED_USERCOPY
+ bool "Harden memory copies between kernel and userspace"
+ depends on HAVE_HARDENED_USERCOPY_ALLOCATOR
+ select BUG
++ imply STRICT_DEVMEM
+ help
+ This option checks for obviously wrong memory regions when
+ copying memory to/from the kernel (via copy_to_user() and
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 5aa45f89da93..cda652a12880 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -387,6 +387,8 @@ bpf_object__init_prog_names(struct bpf_object *obj)
+ continue;
+ if (sym.st_shndx != prog->idx)
+ continue;
++ if (GELF_ST_BIND(sym.st_info) != STB_GLOBAL)
++ continue;
+
+ name = elf_strptr(obj->efile.elf,
+ obj->efile.strtabidx,
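The watchdog_dev.c hunk above fixes a read-twice race: the old code called watchdog_hw_running() once before try_module_get() and again before kref_get(), so if the state changed in between, the module reference and the kref could get taken or dropped unpaired. The fix samples the state once into a local and reuses that snapshot. The sketch below is a hypothetical userspace model of that pattern (watchdog_hw_running, module_refs, and kref_count are stand-ins, not the kernel API); it deliberately flips the flag between the two checks to show the snapshot keeps both references paired.

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the kernel helpers used in the patch:
 * watchdog_hw_running(), try_module_get() and kref_get(). */
static bool hw_running_flag;
static int module_refs;
static int kref_count;

static bool watchdog_hw_running(void)
{
	return hw_running_flag;
}

/* Sketch of the fixed open path: the hardware-running state is read
 * exactly once, so the module reference and the kref are either both
 * taken or both skipped, even if the flag changes in between. */
static void watchdog_open_sketch(void)
{
	bool hw_running = watchdog_hw_running();	/* single snapshot */

	if (!hw_running)
		module_refs++;		/* try_module_get() in the driver */

	/* Simulate the state flipping concurrently between the two
	 * checks -- the window the patch closes. */
	hw_running_flag = !hw_running_flag;

	if (!hw_running)		/* reuses the snapshot, not a re-read */
		kref_count++;		/* kref_get() in the driver */
}
```

With the pre-patch code, the second check would re-read the (now flipped) flag and take only one of the two references, leaking or underflowing a refcount on release.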
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-03-28 17:03 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2018-03-28 17:03 UTC (permalink / raw)
To: gentoo-commits
commit: dfca21c3d8cf051f97cc37a7e27b7640b98e0472
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Mar 28 17:03:40 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Mar 28 17:03:40 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=dfca21c3
Linux patch 4.15.14
0000_README | 4 +
1013_linux-4.15.14.patch | 3891 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 3895 insertions(+)
diff --git a/0000_README b/0000_README
index 2b51c4d..f4d8a80 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch: 1012_linux-4.15.13.patch
From: http://www.kernel.org
Desc: Linux 4.15.13
+Patch: 1013_linux-4.15.14.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.14
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1013_linux-4.15.14.patch b/1013_linux-4.15.14.patch
new file mode 100644
index 0000000..9227e36
--- /dev/null
+++ b/1013_linux-4.15.14.patch
@@ -0,0 +1,3891 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-iio b/Documentation/ABI/testing/sysfs-bus-iio
+index 2e3f919485f4..fdec308a5041 100644
+--- a/Documentation/ABI/testing/sysfs-bus-iio
++++ b/Documentation/ABI/testing/sysfs-bus-iio
+@@ -32,7 +32,7 @@ Description:
+ Description of the physical chip / device for device X.
+ Typically a part number.
+
+-What: /sys/bus/iio/devices/iio:deviceX/timestamp_clock
++What: /sys/bus/iio/devices/iio:deviceX/current_timestamp_clock
+ KernelVersion: 4.5
+ Contact: linux-iio@vger.kernel.org
+ Description:
+diff --git a/Makefile b/Makefile
+index 82245e654d10..a5e561900daf 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+@@ -798,6 +798,15 @@ KBUILD_CFLAGS += $(call cc-disable-warning, pointer-sign)
+ # disable invalid "can't wrap" optimizations for signed / pointers
+ KBUILD_CFLAGS += $(call cc-option,-fno-strict-overflow)
+
++# clang sets -fmerge-all-constants by default as optimization, but this
++# is non-conforming behavior for C and in fact breaks the kernel, so we
++# need to disable it here generally.
++KBUILD_CFLAGS += $(call cc-option,-fno-merge-all-constants)
++
++# for gcc -fno-merge-all-constants disables everything, but it is fine
++# to have actual conforming behavior enabled.
++KBUILD_CFLAGS += $(call cc-option,-fmerge-constants)
++
+ # Make sure -fstack-check isn't enabled (like gentoo apparently did)
+ KBUILD_CFLAGS += $(call cc-option,-fno-stack-check,)
+
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index 5bdc2c4db9ad..f15dc3dfecf8 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -941,3 +941,13 @@ int pmd_clear_huge(pmd_t *pmd)
+ pmd_clear(pmd);
+ return 1;
+ }
++
++int pud_free_pmd_page(pud_t *pud)
++{
++ return pud_none(*pud);
++}
++
++int pmd_free_pte_page(pmd_t *pmd)
++{
++ return pmd_none(*pmd);
++}
+diff --git a/arch/h8300/include/asm/byteorder.h b/arch/h8300/include/asm/byteorder.h
+index ecff2d1ca5a3..6eaa7ad5fc2c 100644
+--- a/arch/h8300/include/asm/byteorder.h
++++ b/arch/h8300/include/asm/byteorder.h
+@@ -2,7 +2,6 @@
+ #ifndef __H8300_BYTEORDER_H__
+ #define __H8300_BYTEORDER_H__
+
+-#define __BIG_ENDIAN __ORDER_BIG_ENDIAN__
+ #include <linux/byteorder/big_endian.h>
+
+ #endif
+diff --git a/arch/mips/lantiq/Kconfig b/arch/mips/lantiq/Kconfig
+index 692ae85a3e3d..8e3a1fc2bc39 100644
+--- a/arch/mips/lantiq/Kconfig
++++ b/arch/mips/lantiq/Kconfig
+@@ -13,6 +13,8 @@ choice
+ config SOC_AMAZON_SE
+ bool "Amazon SE"
+ select SOC_TYPE_XWAY
++ select MFD_SYSCON
++ select MFD_CORE
+
+ config SOC_XWAY
+ bool "XWAY"
+diff --git a/arch/mips/lantiq/xway/sysctrl.c b/arch/mips/lantiq/xway/sysctrl.c
+index 52500d3b7004..e0af39b33e28 100644
+--- a/arch/mips/lantiq/xway/sysctrl.c
++++ b/arch/mips/lantiq/xway/sysctrl.c
+@@ -549,9 +549,9 @@ void __init ltq_soc_init(void)
+ clkdev_add_static(ltq_ar9_cpu_hz(), ltq_ar9_fpi_hz(),
+ ltq_ar9_fpi_hz(), CLOCK_250M);
+ clkdev_add_pmu("1f203018.usb2-phy", "phy", 1, 0, PMU_USB0_P);
+- clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0);
++ clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0 | PMU_AHBM);
+ clkdev_add_pmu("1f203034.usb2-phy", "phy", 1, 0, PMU_USB1_P);
+- clkdev_add_pmu("1e106000.usb", "otg", 1, 0, PMU_USB1);
++ clkdev_add_pmu("1e106000.usb", "otg", 1, 0, PMU_USB1 | PMU_AHBM);
+ clkdev_add_pmu("1e180000.etop", "switch", 1, 0, PMU_SWITCH);
+ clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_SDIO);
+ clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
+@@ -560,7 +560,7 @@ void __init ltq_soc_init(void)
+ } else {
+ clkdev_add_static(ltq_danube_cpu_hz(), ltq_danube_fpi_hz(),
+ ltq_danube_fpi_hz(), ltq_danube_pp32_hz());
+- clkdev_add_pmu("1f203018.usb2-phy", "ctrl", 1, 0, PMU_USB0);
++ clkdev_add_pmu("1e101000.usb", "otg", 1, 0, PMU_USB0 | PMU_AHBM);
+ clkdev_add_pmu("1f203018.usb2-phy", "phy", 1, 0, PMU_USB0_P);
+ clkdev_add_pmu("1e103000.sdio", NULL, 1, 0, PMU_SDIO);
+ clkdev_add_pmu("1e103100.deu", NULL, 1, 0, PMU_DEU);
+diff --git a/arch/mips/ralink/mt7621.c b/arch/mips/ralink/mt7621.c
+index 1b274742077d..d2718de60b9b 100644
+--- a/arch/mips/ralink/mt7621.c
++++ b/arch/mips/ralink/mt7621.c
+@@ -170,6 +170,28 @@ void prom_soc_init(struct ralink_soc_info *soc_info)
+ u32 n1;
+ u32 rev;
+
++ /* Early detection of CMP support */
++ mips_cm_probe();
++ mips_cpc_probe();
++
++ if (mips_cps_numiocu(0)) {
++ /*
++ * mips_cm_probe() wipes out bootloader
++ * config for CM regions and we have to configure them
++ * again. This SoC cannot talk to pamlbus devices
++ * witout proper iocu region set up.
++ *
++ * FIXME: it would be better to do this with values
++ * from DT, but we need this very early because
++ * without this we cannot talk to pretty much anything
++ * including serial.
++ */
++ write_gcr_reg0_base(MT7621_PALMBUS_BASE);
++ write_gcr_reg0_mask(~MT7621_PALMBUS_SIZE |
++ CM_GCR_REGn_MASK_CMTGT_IOCU0);
++ __sync();
++ }
++
+ n0 = __raw_readl(sysc + SYSC_REG_CHIP_NAME0);
+ n1 = __raw_readl(sysc + SYSC_REG_CHIP_NAME1);
+
+@@ -194,26 +216,6 @@ void prom_soc_init(struct ralink_soc_info *soc_info)
+
+ rt2880_pinmux_data = mt7621_pinmux_data;
+
+- /* Early detection of CMP support */
+- mips_cm_probe();
+- mips_cpc_probe();
+-
+- if (mips_cps_numiocu(0)) {
+- /*
+- * mips_cm_probe() wipes out bootloader
+- * config for CM regions and we have to configure them
+- * again. This SoC cannot talk to pamlbus devices
+- * witout proper iocu region set up.
+- *
+- * FIXME: it would be better to do this with values
+- * from DT, but we need this very early because
+- * without this we cannot talk to pretty much anything
+- * including serial.
+- */
+- write_gcr_reg0_base(MT7621_PALMBUS_BASE);
+- write_gcr_reg0_mask(~MT7621_PALMBUS_SIZE |
+- CM_GCR_REGn_MASK_CMTGT_IOCU0);
+- }
+
+ if (!register_cps_smp_ops())
+ return;
+diff --git a/arch/mips/ralink/reset.c b/arch/mips/ralink/reset.c
+index 64543d66e76b..e9531fea23a2 100644
+--- a/arch/mips/ralink/reset.c
++++ b/arch/mips/ralink/reset.c
+@@ -96,16 +96,9 @@ static void ralink_restart(char *command)
+ unreachable();
+ }
+
+-static void ralink_halt(void)
+-{
+- local_irq_disable();
+- unreachable();
+-}
+-
+ static int __init mips_reboot_setup(void)
+ {
+ _machine_restart = ralink_restart;
+- _machine_halt = ralink_halt;
+
+ return 0;
+ }
+diff --git a/arch/x86/Makefile b/arch/x86/Makefile
+index 498c1b812300..1c4d012550ec 100644
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -223,6 +223,15 @@ KBUILD_CFLAGS += $(cfi) $(cfi-sigframe) $(cfi-sections) $(asinstr) $(avx_instr)
+
+ LDFLAGS := -m elf_$(UTS_MACHINE)
+
++#
++# The 64-bit kernel must be aligned to 2MB. Pass -z max-page-size=0x200000 to
++# the linker to force 2MB page size regardless of the default page size used
++# by the linker.
++#
++ifdef CONFIG_X86_64
++LDFLAGS += $(call ld-option, -z max-page-size=0x200000)
++endif
++
+ # Speed up the build
+ KBUILD_CFLAGS += -pipe
+ # Workaround for a gcc prelease that unfortunately was shipped in a suse release
+diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
+index 98761a1576ce..252fee320816 100644
+--- a/arch/x86/boot/compressed/misc.c
++++ b/arch/x86/boot/compressed/misc.c
+@@ -309,6 +309,10 @@ static void parse_elf(void *output)
+
+ switch (phdr->p_type) {
+ case PT_LOAD:
++#ifdef CONFIG_X86_64
++ if ((phdr->p_align % 0x200000) != 0)
++ error("Alignment of LOAD segment isn't multiple of 2MB");
++#endif
+ #ifdef CONFIG_RELOCATABLE
+ dest = output;
+ dest += (phdr->p_paddr - LOAD_PHYSICAL_ADDR);
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index 50dcbf640850..3a7d58384479 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -1097,7 +1097,7 @@ apicinterrupt3 HYPERVISOR_CALLBACK_VECTOR \
+ #endif /* CONFIG_HYPERV */
+
+ idtentry debug do_debug has_error_code=0 paranoid=1 shift_ist=DEBUG_STACK
+-idtentry int3 do_int3 has_error_code=0 paranoid=1 shift_ist=DEBUG_STACK
++idtentry int3 do_int3 has_error_code=0
+ idtentry stack_segment do_stack_segment has_error_code=1
+
+ #ifdef CONFIG_XEN
+diff --git a/arch/x86/entry/vsyscall/vsyscall_64.c b/arch/x86/entry/vsyscall/vsyscall_64.c
+index 577fa8adb785..542392b6aab6 100644
+--- a/arch/x86/entry/vsyscall/vsyscall_64.c
++++ b/arch/x86/entry/vsyscall/vsyscall_64.c
+@@ -355,7 +355,7 @@ void __init set_vsyscall_pgtable_user_bits(pgd_t *root)
+ set_pgd(pgd, __pgd(pgd_val(*pgd) | _PAGE_USER));
+ p4d = p4d_offset(pgd, VSYSCALL_ADDR);
+ #if CONFIG_PGTABLE_LEVELS >= 5
+- p4d->p4d |= _PAGE_USER;
++ set_p4d(p4d, __p4d(p4d_val(*p4d) | _PAGE_USER));
+ #endif
+ pud = pud_offset(p4d, VSYSCALL_ADDR);
+ set_pud(pud, __pud(pud_val(*pud) | _PAGE_USER));
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 56457cb73448..9b18a227fff7 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -3194,7 +3194,7 @@ static unsigned bdw_limit_period(struct perf_event *event, unsigned left)
+ X86_CONFIG(.event=0xc0, .umask=0x01)) {
+ if (left < 128)
+ left = 128;
+- left &= ~0x3fu;
++ left &= ~0x3fULL;
+ }
+ return left;
+ }
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index 6d8044ab1060..7da7a79eba53 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -3562,24 +3562,27 @@ static struct intel_uncore_type *skx_msr_uncores[] = {
+ NULL,
+ };
+
++/*
++ * To determine the number of CHAs, it should read bits 27:0 in the CAPID6
++ * register which located at Device 30, Function 3, Offset 0x9C. PCI ID 0x2083.
++ */
++#define SKX_CAPID6 0x9c
++#define SKX_CHA_BIT_MASK GENMASK(27, 0)
++
+ static int skx_count_chabox(void)
+ {
+- struct pci_dev *chabox_dev = NULL;
+- int bus, count = 0;
++ struct pci_dev *dev = NULL;
++ u32 val = 0;
+
+- while (1) {
+- chabox_dev = pci_get_device(PCI_VENDOR_ID_INTEL, 0x208d, chabox_dev);
+- if (!chabox_dev)
+- break;
+- if (count == 0)
+- bus = chabox_dev->bus->number;
+- if (bus != chabox_dev->bus->number)
+- break;
+- count++;
+- }
++ dev = pci_get_device(PCI_VENDOR_ID_INTEL, 0x2083, dev);
++ if (!dev)
++ goto out;
+
+- pci_dev_put(chabox_dev);
+- return count;
++ pci_read_config_dword(dev, SKX_CAPID6, &val);
++ val &= SKX_CHA_BIT_MASK;
++out:
++ pci_dev_put(dev);
++ return hweight32(val);
+ }
+
+ void skx_uncore_cpu_init(void)
+@@ -3606,7 +3609,7 @@ static struct intel_uncore_type skx_uncore_imc = {
+ };
+
+ static struct attribute *skx_upi_uncore_formats_attr[] = {
+- &format_attr_event_ext.attr,
++ &format_attr_event.attr,
+ &format_attr_umask_ext.attr,
+ &format_attr_edge.attr,
+ &format_attr_inv.attr,
+diff --git a/arch/x86/include/asm/vmx.h b/arch/x86/include/asm/vmx.h
+index 8b6780751132..5db8b0b10766 100644
+--- a/arch/x86/include/asm/vmx.h
++++ b/arch/x86/include/asm/vmx.h
+@@ -352,6 +352,7 @@ enum vmcs_field {
+ #define INTR_TYPE_NMI_INTR (2 << 8) /* NMI */
+ #define INTR_TYPE_HARD_EXCEPTION (3 << 8) /* processor exception */
+ #define INTR_TYPE_SOFT_INTR (4 << 8) /* software interrupt */
++#define INTR_TYPE_PRIV_SW_EXCEPTION (5 << 8) /* ICE breakpoint - undocumented */
+ #define INTR_TYPE_SOFT_EXCEPTION (6 << 8) /* software exception */
+
+ /* GUEST_INTERRUPTIBILITY_INFO flags. */
+diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
+index 56d99be3706a..50bee5fe1140 100644
+--- a/arch/x86/kernel/idt.c
++++ b/arch/x86/kernel/idt.c
+@@ -160,7 +160,6 @@ static const __initconst struct idt_data early_pf_idts[] = {
+ */
+ static const __initconst struct idt_data dbg_idts[] = {
+ INTG(X86_TRAP_DB, debug),
+- INTG(X86_TRAP_BP, int3),
+ };
+ #endif
+
+@@ -183,7 +182,6 @@ gate_desc debug_idt_table[IDT_ENTRIES] __page_aligned_bss;
+ static const __initconst struct idt_data ist_idts[] = {
+ ISTG(X86_TRAP_DB, debug, DEBUG_STACK),
+ ISTG(X86_TRAP_NMI, nmi, NMI_STACK),
+- SISTG(X86_TRAP_BP, int3, DEBUG_STACK),
+ ISTG(X86_TRAP_DF, double_fault, DOUBLEFAULT_STACK),
+ #ifdef CONFIG_X86_MCE
+ ISTG(X86_TRAP_MC, &machine_check, MCE_STACK),
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 3d9b2308e7fa..03f3d7695dac 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -577,7 +577,6 @@ do_general_protection(struct pt_regs *regs, long error_code)
+ }
+ NOKPROBE_SYMBOL(do_general_protection);
+
+-/* May run on IST stack. */
+ dotraplinkage void notrace do_int3(struct pt_regs *regs, long error_code)
+ {
+ #ifdef CONFIG_DYNAMIC_FTRACE
+@@ -592,6 +591,13 @@ dotraplinkage void notrace do_int3(struct pt_regs *regs, long error_code)
+ if (poke_int3_handler(regs))
+ return;
+
++ /*
++ * Use ist_enter despite the fact that we don't use an IST stack.
++ * We can be called from a kprobe in non-CONTEXT_KERNEL kernel
++ * mode or even during context tracking state changes.
++ *
++ * This means that we can't schedule. That's okay.
++ */
+ ist_enter(regs);
+ RCU_LOCKDEP_WARN(!rcu_is_watching(), "entry code didn't wake RCU");
+ #ifdef CONFIG_KGDB_LOW_LEVEL_TRAP
+@@ -609,15 +615,10 @@ dotraplinkage void notrace do_int3(struct pt_regs *regs, long error_code)
+ SIGTRAP) == NOTIFY_STOP)
+ goto exit;
+
+- /*
+- * Let others (NMI) know that the debug stack is in use
+- * as we may switch to the interrupt stack.
+- */
+- debug_stack_usage_inc();
+ cond_local_irq_enable(regs);
+ do_trap(X86_TRAP_BP, SIGTRAP, "int3", regs, error_code, NULL);
+ cond_local_irq_disable(regs);
+- debug_stack_usage_dec();
++
+ exit:
+ ist_exit(regs);
+ }
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 87b453eeae40..2beb77c319e8 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -1079,6 +1079,13 @@ static inline bool is_machine_check(u32 intr_info)
+ (INTR_TYPE_HARD_EXCEPTION | MC_VECTOR | INTR_INFO_VALID_MASK);
+ }
+
++/* Undocumented: icebp/int1 */
++static inline bool is_icebp(u32 intr_info)
++{
++ return (intr_info & (INTR_INFO_INTR_TYPE_MASK | INTR_INFO_VALID_MASK))
++ == (INTR_TYPE_PRIV_SW_EXCEPTION | INTR_INFO_VALID_MASK);
++}
++
+ static inline bool cpu_has_vmx_msr_bitmap(void)
+ {
+ return vmcs_config.cpu_based_exec_ctrl & CPU_BASED_USE_MSR_BITMAPS;
+@@ -6173,7 +6180,7 @@ static int handle_exception(struct kvm_vcpu *vcpu)
+ (KVM_GUESTDBG_SINGLESTEP | KVM_GUESTDBG_USE_HW_BP))) {
+ vcpu->arch.dr6 &= ~15;
+ vcpu->arch.dr6 |= dr6 | DR6_RTM;
+- if (!(dr6 & ~DR6_RESERVED)) /* icebp */
++ if (is_icebp(intr_info))
+ skip_emulated_instruction(vcpu);
+
+ kvm_queue_exception(vcpu, DB_VECTOR);
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index 004abf9ebf12..34cda7e0551b 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -702,4 +702,52 @@ int pmd_clear_huge(pmd_t *pmd)
+
+ return 0;
+ }
++
++/**
++ * pud_free_pmd_page - Clear pud entry and free pmd page.
++ * @pud: Pointer to a PUD.
++ *
++ * Context: The pud range has been unmaped and TLB purged.
++ * Return: 1 if clearing the entry succeeded. 0 otherwise.
++ */
++int pud_free_pmd_page(pud_t *pud)
++{
++ pmd_t *pmd;
++ int i;
++
++ if (pud_none(*pud))
++ return 1;
++
++ pmd = (pmd_t *)pud_page_vaddr(*pud);
++
++ for (i = 0; i < PTRS_PER_PMD; i++)
++ if (!pmd_free_pte_page(&pmd[i]))
++ return 0;
++
++ pud_clear(pud);
++ free_page((unsigned long)pmd);
++
++ return 1;
++}
++
++/**
++ * pmd_free_pte_page - Clear pmd entry and free pte page.
++ * @pmd: Pointer to a PMD.
++ *
++ * Context: The pmd range has been unmaped and TLB purged.
++ * Return: 1 if clearing the entry succeeded. 0 otherwise.
++ */
++int pmd_free_pte_page(pmd_t *pmd)
++{
++ pte_t *pte;
++
++ if (pmd_none(*pmd))
++ return 1;
++
++ pte = (pte_t *)pmd_page_vaddr(*pmd);
++ pmd_clear(pmd);
++ free_page((unsigned long)pte);
++
++ return 1;
++}
+ #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 940aac70b4da..bb77606d04e0 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -1156,7 +1156,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ * may converge on the last pass. In such case do one more
+ * pass to emit the final image
+ */
+- for (pass = 0; pass < 10 || image; pass++) {
++ for (pass = 0; pass < 20 || image; pass++) {
+ proglen = do_jit(prog, addrs, image, oldproglen, &ctx);
+ if (proglen <= 0) {
+ image = NULL;
+@@ -1183,6 +1183,7 @@ struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
+ }
+ }
+ oldproglen = proglen;
++ cond_resched();
+ }
+
+ if (bpf_jit_enable > 1)
+diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
+index 2dd15e967c3f..0167e526b04f 100644
+--- a/arch/x86/platform/efi/efi_64.c
++++ b/arch/x86/platform/efi/efi_64.c
+@@ -228,7 +228,7 @@ int __init efi_alloc_page_tables(void)
+ if (!pud) {
+ if (CONFIG_PGTABLE_LEVELS > 4)
+ free_page((unsigned long) pgd_page_vaddr(*pgd));
+- free_page((unsigned long)efi_pgd);
++ free_pages((unsigned long)efi_pgd, PGD_ALLOCATION_ORDER);
+ return -ENOMEM;
+ }
+
+diff --git a/drivers/acpi/acpi_watchdog.c b/drivers/acpi/acpi_watchdog.c
+index 11b113f8e367..ebb626ffb5fa 100644
+--- a/drivers/acpi/acpi_watchdog.c
++++ b/drivers/acpi/acpi_watchdog.c
+@@ -74,10 +74,10 @@ void __init acpi_watchdog_init(void)
+ res.start = gas->address;
+ if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
+ res.flags = IORESOURCE_MEM;
+- res.end = res.start + ALIGN(gas->access_width, 4);
++ res.end = res.start + ALIGN(gas->access_width, 4) - 1;
+ } else if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
+ res.flags = IORESOURCE_IO;
+- res.end = res.start + gas->access_width;
++ res.end = res.start + gas->access_width - 1;
+ } else {
+ pr_warn("Unsupported address space: %u\n",
+ gas->space_id);
+diff --git a/drivers/acpi/numa.c b/drivers/acpi/numa.c
+index 917f1cc0fda4..8fb74d9011da 100644
+--- a/drivers/acpi/numa.c
++++ b/drivers/acpi/numa.c
+@@ -103,25 +103,27 @@ int acpi_map_pxm_to_node(int pxm)
+ */
+ int acpi_map_pxm_to_online_node(int pxm)
+ {
+- int node, n, dist, min_dist;
++ int node, min_node;
+
+ node = acpi_map_pxm_to_node(pxm);
+
+ if (node == NUMA_NO_NODE)
+ node = 0;
+
++ min_node = node;
+ if (!node_online(node)) {
+- min_dist = INT_MAX;
++ int min_dist = INT_MAX, dist, n;
++
+ for_each_online_node(n) {
+ dist = node_distance(node, n);
+ if (dist < min_dist) {
+ min_dist = dist;
+- node = n;
++ min_node = n;
+ }
+ }
+ }
+
+- return node;
++ return min_node;
+ }
+ EXPORT_SYMBOL(acpi_map_pxm_to_online_node);
+
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 44a9d630b7ac..2badef1271fd 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -542,7 +542,9 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ .driver_data = board_ahci_yes_fbs },
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9230),
+ .driver_data = board_ahci_yes_fbs },
+- { PCI_DEVICE(PCI_VENDOR_ID_TTI, 0x0642),
++ { PCI_DEVICE(PCI_VENDOR_ID_TTI, 0x0642), /* highpoint rocketraid 642L */
++ .driver_data = board_ahci_yes_fbs },
++ { PCI_DEVICE(PCI_VENDOR_ID_TTI, 0x0645), /* highpoint rocketraid 644L */
+ .driver_data = board_ahci_yes_fbs },
+
+ /* Promise */
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 3c09122bf038..7431ccd03316 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -4530,6 +4530,25 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ { "PIONEER DVD-RW DVR-212D", NULL, ATA_HORKAGE_NOSETXFER },
+ { "PIONEER DVD-RW DVR-216D", NULL, ATA_HORKAGE_NOSETXFER },
+
++ /* Crucial BX100 SSD 500GB has broken LPM support */
++ { "CT500BX100SSD1", NULL, ATA_HORKAGE_NOLPM },
++
++ /* 512GB MX100 with MU01 firmware has both queued TRIM and LPM issues */
++ { "Crucial_CT512MX100*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
++ ATA_HORKAGE_ZERO_AFTER_TRIM |
++ ATA_HORKAGE_NOLPM, },
++ /* 512GB MX100 with newer firmware has only LPM issues */
++ { "Crucial_CT512MX100*", NULL, ATA_HORKAGE_ZERO_AFTER_TRIM |
++ ATA_HORKAGE_NOLPM, },
++
++ /* 480GB+ M500 SSDs have both queued TRIM and LPM issues */
++ { "Crucial_CT480M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
++ ATA_HORKAGE_ZERO_AFTER_TRIM |
++ ATA_HORKAGE_NOLPM, },
++ { "Crucial_CT960M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
++ ATA_HORKAGE_ZERO_AFTER_TRIM |
++ ATA_HORKAGE_NOLPM, },
++
+ /* devices that don't properly handle queued TRIM commands */
+ { "Micron_M500_*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
+ ATA_HORKAGE_ZERO_AFTER_TRIM, },
+@@ -4541,7 +4560,9 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ { "Crucial_CT*MX100*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM |
+ ATA_HORKAGE_ZERO_AFTER_TRIM, },
+- { "Samsung SSD 8*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
++ { "Samsung SSD 840*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
++ ATA_HORKAGE_ZERO_AFTER_TRIM, },
++ { "Samsung SSD 850*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
+ ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ { "FCCT*M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM |
+ ATA_HORKAGE_ZERO_AFTER_TRIM, },
+@@ -5401,8 +5422,7 @@ void ata_qc_issue(struct ata_queued_cmd *qc)
+ * We guarantee to LLDs that they will have at least one
+ * non-zero sg if the command is a data command.
+ */
+- if (WARN_ON_ONCE(ata_is_data(prot) &&
+- (!qc->sg || !qc->n_elem || !qc->nbytes)))
++ if (ata_is_data(prot) && (!qc->sg || !qc->n_elem || !qc->nbytes))
+ goto sys_err;
+
+ if (ata_is_dma(prot) || (ata_is_pio(prot) &&
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 66be961c93a4..197e110f8ac7 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -3316,6 +3316,12 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
+ goto invalid_fld;
+ }
+
++ /* We may not issue NCQ commands to devices not supporting NCQ */
++ if (ata_is_ncq(tf->protocol) && !ata_ncq_enabled(dev)) {
++ fp = 1;
++ goto invalid_fld;
++ }
++
+ /* sanity check for pio multi commands */
+ if ((cdb[1] & 0xe0) && !is_multi_taskfile(tf)) {
+ fp = 1;
+@@ -4309,7 +4315,9 @@ static inline int __ata_scsi_queuecmd(struct scsi_cmnd *scmd,
+ if (likely((scsi_op != ATA_16) || !atapi_passthru16)) {
+ /* relay SCSI command to ATAPI device */
+ int len = COMMAND_SIZE(scsi_op);
+- if (unlikely(len > scmd->cmd_len || len > dev->cdb_len))
++ if (unlikely(len > scmd->cmd_len ||
++ len > dev->cdb_len ||
++ scmd->cmd_len > ATAPI_CDB_LEN))
+ goto bad_cdb_len;
+
+ xlat_func = atapi_xlat;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index e71e54c478da..2f57e8b88a7a 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -230,7 +230,6 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x0930, 0x0227), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0b05, 0x17d0), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x0036), .driver_info = BTUSB_ATH3012 },
+- { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x3008), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x311d), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0x311e), .driver_info = BTUSB_ATH3012 },
+@@ -263,6 +262,7 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x0489, 0xe03c), .driver_info = BTUSB_ATH3012 },
+
+ /* QCA ROME chipset */
++ { USB_DEVICE(0x0cf3, 0x3004), .driver_info = BTUSB_QCA_ROME },
+ { USB_DEVICE(0x0cf3, 0xe007), .driver_info = BTUSB_QCA_ROME },
+ { USB_DEVICE(0x0cf3, 0xe009), .driver_info = BTUSB_QCA_ROME },
+ { USB_DEVICE(0x0cf3, 0xe300), .driver_info = BTUSB_QCA_ROME },
+@@ -383,10 +383,10 @@ static const struct usb_device_id blacklist_table[] = {
+ */
+ static const struct dmi_system_id btusb_needs_reset_resume_table[] = {
+ {
+- /* Lenovo Yoga 920 (QCA Rome device 0cf3:e300) */
++ /* Dell OptiPlex 3060 (QCA ROME device 0cf3:e007) */
+ .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo YOGA 920"),
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 3060"),
+ },
+ },
+ {}
+diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
+index 44301a3d9963..a07f6451694a 100644
+--- a/drivers/clk/bcm/clk-bcm2835.c
++++ b/drivers/clk/bcm/clk-bcm2835.c
+@@ -449,17 +449,17 @@ struct bcm2835_pll_ana_bits {
+ static const struct bcm2835_pll_ana_bits bcm2835_ana_default = {
+ .mask0 = 0,
+ .set0 = 0,
+- .mask1 = (u32)~(A2W_PLL_KI_MASK | A2W_PLL_KP_MASK),
++ .mask1 = A2W_PLL_KI_MASK | A2W_PLL_KP_MASK,
+ .set1 = (2 << A2W_PLL_KI_SHIFT) | (8 << A2W_PLL_KP_SHIFT),
+- .mask3 = (u32)~A2W_PLL_KA_MASK,
++ .mask3 = A2W_PLL_KA_MASK,
+ .set3 = (2 << A2W_PLL_KA_SHIFT),
+ .fb_prediv_mask = BIT(14),
+ };
+
+ static const struct bcm2835_pll_ana_bits bcm2835_ana_pllh = {
+- .mask0 = (u32)~(A2W_PLLH_KA_MASK | A2W_PLLH_KI_LOW_MASK),
++ .mask0 = A2W_PLLH_KA_MASK | A2W_PLLH_KI_LOW_MASK,
+ .set0 = (2 << A2W_PLLH_KA_SHIFT) | (2 << A2W_PLLH_KI_LOW_SHIFT),
+- .mask1 = (u32)~(A2W_PLLH_KI_HIGH_MASK | A2W_PLLH_KP_MASK),
++ .mask1 = A2W_PLLH_KI_HIGH_MASK | A2W_PLLH_KP_MASK,
+ .set1 = (6 << A2W_PLLH_KP_SHIFT),
+ .mask3 = 0,
+ .set3 = 0,
+@@ -623,8 +623,10 @@ static int bcm2835_pll_on(struct clk_hw *hw)
+ ~A2W_PLL_CTRL_PWRDN);
+
+ /* Take the PLL out of reset. */
++ spin_lock(&cprman->regs_lock);
+ cprman_write(cprman, data->cm_ctrl_reg,
+ cprman_read(cprman, data->cm_ctrl_reg) & ~CM_PLL_ANARST);
++ spin_unlock(&cprman->regs_lock);
+
+ /* Wait for the PLL to lock. */
+ timeout = ktime_add_ns(ktime_get(), LOCK_TIMEOUT_NS);
+@@ -701,9 +703,11 @@ static int bcm2835_pll_set_rate(struct clk_hw *hw,
+ }
+
+ /* Unmask the reference clock from the oscillator. */
++ spin_lock(&cprman->regs_lock);
+ cprman_write(cprman, A2W_XOSC_CTRL,
+ cprman_read(cprman, A2W_XOSC_CTRL) |
+ data->reference_enable_mask);
++ spin_unlock(&cprman->regs_lock);
+
+ if (do_ana_setup_first)
+ bcm2835_pll_write_ana(cprman, data->ana_reg_base, ana);
+diff --git a/drivers/clk/sunxi-ng/ccu-sun6i-a31.c b/drivers/clk/sunxi-ng/ccu-sun6i-a31.c
+index 72b16ed1012b..3b97f60540ad 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun6i-a31.c
++++ b/drivers/clk/sunxi-ng/ccu-sun6i-a31.c
+@@ -762,7 +762,7 @@ static struct ccu_mp out_a_clk = {
+ .features = CCU_FEATURE_FIXED_PREDIV,
+ .hw.init = CLK_HW_INIT_PARENTS("out-a",
+ clk_out_parents,
+- &ccu_div_ops,
++ &ccu_mp_ops,
+ 0),
+ },
+ };
+@@ -783,7 +783,7 @@ static struct ccu_mp out_b_clk = {
+ .features = CCU_FEATURE_FIXED_PREDIV,
+ .hw.init = CLK_HW_INIT_PARENTS("out-b",
+ clk_out_parents,
+- &ccu_div_ops,
++ &ccu_mp_ops,
+ 0),
+ },
+ };
+@@ -804,7 +804,7 @@ static struct ccu_mp out_c_clk = {
+ .features = CCU_FEATURE_FIXED_PREDIV,
+ .hw.init = CLK_HW_INIT_PARENTS("out-c",
+ clk_out_parents,
+- &ccu_div_ops,
++ &ccu_mp_ops,
+ 0),
+ },
+ };
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index bb5fa895fb64..97ecfd16ee82 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3090,8 +3090,6 @@ static int amdgpu_dm_plane_init(struct amdgpu_display_manager *dm,
+
+ switch (aplane->base.type) {
+ case DRM_PLANE_TYPE_PRIMARY:
+- aplane->base.format_default = true;
+-
+ res = drm_universal_plane_init(
+ dm->adev->ddev,
+ &aplane->base,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+index 9bd142f65f9b..e1acc10e35a2 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+@@ -109,7 +109,7 @@ enum dc_edid_status dm_helpers_parse_edid_caps(
+ struct cea_sad *sad = &sads[i];
+
+ edid_caps->audio_modes[i].format_code = sad->format;
+- edid_caps->audio_modes[i].channel_count = sad->channels;
++ edid_caps->audio_modes[i].channel_count = sad->channels + 1;
+ edid_caps->audio_modes[i].sample_rate = sad->freq;
+ edid_caps->audio_modes[i].sample_size = sad->byte2;
+ }
+diff --git a/drivers/gpu/drm/drm_framebuffer.c b/drivers/gpu/drm/drm_framebuffer.c
+index 5e1f1e2deb52..343cb5fa9b03 100644
+--- a/drivers/gpu/drm/drm_framebuffer.c
++++ b/drivers/gpu/drm/drm_framebuffer.c
+@@ -458,6 +458,12 @@ int drm_mode_getfb(struct drm_device *dev,
+ if (!fb)
+ return -ENOENT;
+
++ /* Multi-planar framebuffers need getfb2. */
++ if (fb->format->num_planes > 1) {
++ ret = -EINVAL;
++ goto out;
++ }
++
+ r->height = fb->height;
+ r->width = fb->width;
+ r->depth = fb->format->depth;
+@@ -481,6 +487,7 @@ int drm_mode_getfb(struct drm_device *dev,
+ ret = -ENODEV;
+ }
+
++out:
+ drm_framebuffer_put(fb);
+
+ return ret;
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index 30e129684c7c..fd0d0c758a94 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -90,25 +90,18 @@ void radeon_connector_hotplug(struct drm_connector *connector)
+ /* don't do anything if sink is not display port, i.e.,
+ * passive dp->(dvi|hdmi) adaptor
+ */
+- if (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT) {
+- int saved_dpms = connector->dpms;
+- /* Only turn off the display if it's physically disconnected */
+- if (!radeon_hpd_sense(rdev, radeon_connector->hpd.hpd)) {
+- drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
+- } else if (radeon_dp_needs_link_train(radeon_connector)) {
+- /* Don't try to start link training before we
+- * have the dpcd */
+- if (!radeon_dp_getdpcd(radeon_connector))
+- return;
+-
+- /* set it to OFF so that drm_helper_connector_dpms()
+- * won't return immediately since the current state
+- * is ON at this point.
+- */
+- connector->dpms = DRM_MODE_DPMS_OFF;
+- drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
+- }
+- connector->dpms = saved_dpms;
++ if (dig_connector->dp_sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT &&
++ radeon_hpd_sense(rdev, radeon_connector->hpd.hpd) &&
++ radeon_dp_needs_link_train(radeon_connector)) {
++ /* Don't start link training before we have the DPCD */
++ if (!radeon_dp_getdpcd(radeon_connector))
++ return;
++
++ /* Turn the connector off and back on immediately, which
++ * will trigger link training
++ */
++ drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
++ drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
+ }
+ }
+ }
+diff --git a/drivers/gpu/drm/udl/udl_fb.c b/drivers/gpu/drm/udl/udl_fb.c
+index b5b335c9b2bb..2ebdc6d5a76e 100644
+--- a/drivers/gpu/drm/udl/udl_fb.c
++++ b/drivers/gpu/drm/udl/udl_fb.c
+@@ -159,10 +159,15 @@ static int udl_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
+ {
+ unsigned long start = vma->vm_start;
+ unsigned long size = vma->vm_end - vma->vm_start;
+- unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
++ unsigned long offset;
+ unsigned long page, pos;
+
+- if (offset + size > info->fix.smem_len)
++ if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT))
++ return -EINVAL;
++
++ offset = vma->vm_pgoff << PAGE_SHIFT;
++
++ if (offset > info->fix.smem_len || size > info->fix.smem_len - offset)
+ return -EINVAL;
+
+ pos = (unsigned long)info->fix.smem_start + offset;
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+index 184340d486c3..86d25f18aa99 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -1337,6 +1337,19 @@ static void __vmw_svga_disable(struct vmw_private *dev_priv)
+ */
+ void vmw_svga_disable(struct vmw_private *dev_priv)
+ {
++ /*
++ * Disabling SVGA will turn off device modesetting capabilities, so
++ * notify KMS about that so that it doesn't cache atomic state that
++ * isn't valid anymore, for example crtcs turned on.
++ * Strictly we'd want to do this under the SVGA lock (or an SVGA mutex),
++ * but vmw_kms_lost_device() takes the reservation sem and thus we'll
++ * end up with lock order reversal. Thus, a master may actually perform
++ * a new modeset just after we call vmw_kms_lost_device() and race with
++ * vmw_svga_disable(), but that should at worst cause atomic KMS state
++ * to be inconsistent with the device, causing modesetting problems.
++ *
++ */
++ vmw_kms_lost_device(dev_priv->dev);
+ ttm_write_lock(&dev_priv->reservation_sem, false);
+ spin_lock(&dev_priv->svga_lock);
+ if (dev_priv->bdev.man[TTM_PL_VRAM].use_type) {
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+index 7e5f30e234b1..8c65cc3b0dda 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
+@@ -938,6 +938,7 @@ int vmw_kms_present(struct vmw_private *dev_priv,
+ int vmw_kms_update_layout_ioctl(struct drm_device *dev, void *data,
+ struct drm_file *file_priv);
+ void vmw_kms_legacy_hotspot_clear(struct vmw_private *dev_priv);
++void vmw_kms_lost_device(struct drm_device *dev);
+
+ int vmw_dumb_create(struct drm_file *file_priv,
+ struct drm_device *dev,
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index fcd58145d0da..dfaaf9a2d81e 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -31,7 +31,6 @@
+ #include <drm/drm_atomic_helper.h>
+ #include <drm/drm_rect.h>
+
+-
+ /* Might need a hrtimer here? */
+ #define VMWGFX_PRESENT_RATE ((HZ / 60 > 0) ? HZ / 60 : 1)
+
+@@ -2531,9 +2530,12 @@ void vmw_kms_helper_buffer_finish(struct vmw_private *dev_priv,
+ * Helper to be used if an error forces the caller to undo the actions of
+ * vmw_kms_helper_resource_prepare.
+ */
+-void vmw_kms_helper_resource_revert(struct vmw_resource *res)
++void vmw_kms_helper_resource_revert(struct vmw_validation_ctx *ctx)
+ {
+- vmw_kms_helper_buffer_revert(res->backup);
++ struct vmw_resource *res = ctx->res;
++
++ vmw_kms_helper_buffer_revert(ctx->buf);
++ vmw_dmabuf_unreference(&ctx->buf);
+ vmw_resource_unreserve(res, false, NULL, 0);
+ mutex_unlock(&res->dev_priv->cmdbuf_mutex);
+ }
+@@ -2550,10 +2552,14 @@ void vmw_kms_helper_resource_revert(struct vmw_resource *res)
+ * interrupted by a signal.
+ */
+ int vmw_kms_helper_resource_prepare(struct vmw_resource *res,
+- bool interruptible)
++ bool interruptible,
++ struct vmw_validation_ctx *ctx)
+ {
+ int ret = 0;
+
++ ctx->buf = NULL;
++ ctx->res = res;
++
+ if (interruptible)
+ ret = mutex_lock_interruptible(&res->dev_priv->cmdbuf_mutex);
+ else
+@@ -2572,6 +2578,8 @@ int vmw_kms_helper_resource_prepare(struct vmw_resource *res,
+ res->dev_priv->has_mob);
+ if (ret)
+ goto out_unreserve;
++
++ ctx->buf = vmw_dmabuf_reference(res->backup);
+ }
+ ret = vmw_resource_validate(res);
+ if (ret)
+@@ -2579,7 +2587,7 @@ int vmw_kms_helper_resource_prepare(struct vmw_resource *res,
+ return 0;
+
+ out_revert:
+- vmw_kms_helper_buffer_revert(res->backup);
++ vmw_kms_helper_buffer_revert(ctx->buf);
+ out_unreserve:
+ vmw_resource_unreserve(res, false, NULL, 0);
+ out_unlock:
+@@ -2595,11 +2603,13 @@ int vmw_kms_helper_resource_prepare(struct vmw_resource *res,
+ * @out_fence: Optional pointer to a fence pointer. If non-NULL, a
+ * ref-counted fence pointer is returned here.
+ */
+-void vmw_kms_helper_resource_finish(struct vmw_resource *res,
+- struct vmw_fence_obj **out_fence)
++void vmw_kms_helper_resource_finish(struct vmw_validation_ctx *ctx,
++ struct vmw_fence_obj **out_fence)
+ {
+- if (res->backup || out_fence)
+- vmw_kms_helper_buffer_finish(res->dev_priv, NULL, res->backup,
++ struct vmw_resource *res = ctx->res;
++
++ if (ctx->buf || out_fence)
++ vmw_kms_helper_buffer_finish(res->dev_priv, NULL, ctx->buf,
+ out_fence, NULL);
+
+ vmw_resource_unreserve(res, false, NULL, 0);
+@@ -2865,3 +2875,14 @@ int vmw_kms_set_config(struct drm_mode_set *set,
+
+ return drm_atomic_helper_set_config(set, ctx);
+ }
++
++
++/**
++ * vmw_kms_lost_device - Notify kms that modesetting capabilities will be lost
++ *
++ * @dev: Pointer to the drm device
++ */
++void vmw_kms_lost_device(struct drm_device *dev)
++{
++ drm_atomic_helper_shutdown(dev);
++}
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
+index cd9da2dd79af..3d2ca280eaa7 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
+@@ -240,6 +240,11 @@ struct vmw_display_unit {
+ int set_gui_y;
+ };
+
++struct vmw_validation_ctx {
++ struct vmw_resource *res;
++ struct vmw_dma_buffer *buf;
++};
++
+ #define vmw_crtc_to_du(x) \
+ container_of(x, struct vmw_display_unit, crtc)
+ #define vmw_connector_to_du(x) \
+@@ -296,9 +301,10 @@ void vmw_kms_helper_buffer_finish(struct vmw_private *dev_priv,
+ struct drm_vmw_fence_rep __user *
+ user_fence_rep);
+ int vmw_kms_helper_resource_prepare(struct vmw_resource *res,
+- bool interruptible);
+-void vmw_kms_helper_resource_revert(struct vmw_resource *res);
+-void vmw_kms_helper_resource_finish(struct vmw_resource *res,
++ bool interruptible,
++ struct vmw_validation_ctx *ctx);
++void vmw_kms_helper_resource_revert(struct vmw_validation_ctx *ctx);
++void vmw_kms_helper_resource_finish(struct vmw_validation_ctx *ctx,
+ struct vmw_fence_obj **out_fence);
+ int vmw_kms_readback(struct vmw_private *dev_priv,
+ struct drm_file *file_priv,
+@@ -439,5 +445,4 @@ int vmw_kms_stdu_dma(struct vmw_private *dev_priv,
+
+ int vmw_kms_set_config(struct drm_mode_set *set,
+ struct drm_modeset_acquire_ctx *ctx);
+-
+ #endif
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
+index 63a4cd794b73..3ec9eae831b8 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
+@@ -909,12 +909,13 @@ int vmw_kms_sou_do_surface_dirty(struct vmw_private *dev_priv,
+ struct vmw_framebuffer_surface *vfbs =
+ container_of(framebuffer, typeof(*vfbs), base);
+ struct vmw_kms_sou_surface_dirty sdirty;
++ struct vmw_validation_ctx ctx;
+ int ret;
+
+ if (!srf)
+ srf = &vfbs->surface->res;
+
+- ret = vmw_kms_helper_resource_prepare(srf, true);
++ ret = vmw_kms_helper_resource_prepare(srf, true, &ctx);
+ if (ret)
+ return ret;
+
+@@ -933,7 +934,7 @@ int vmw_kms_sou_do_surface_dirty(struct vmw_private *dev_priv,
+ ret = vmw_kms_helper_dirty(dev_priv, framebuffer, clips, vclips,
+ dest_x, dest_y, num_clips, inc,
+ &sdirty.base);
+- vmw_kms_helper_resource_finish(srf, out_fence);
++ vmw_kms_helper_resource_finish(&ctx, out_fence);
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+index b68d74888ab1..6b969e5dea2a 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
+@@ -980,12 +980,13 @@ int vmw_kms_stdu_surface_dirty(struct vmw_private *dev_priv,
+ struct vmw_framebuffer_surface *vfbs =
+ container_of(framebuffer, typeof(*vfbs), base);
+ struct vmw_stdu_dirty sdirty;
++ struct vmw_validation_ctx ctx;
+ int ret;
+
+ if (!srf)
+ srf = &vfbs->surface->res;
+
+- ret = vmw_kms_helper_resource_prepare(srf, true);
++ ret = vmw_kms_helper_resource_prepare(srf, true, &ctx);
+ if (ret)
+ return ret;
+
+@@ -1008,7 +1009,7 @@ int vmw_kms_stdu_surface_dirty(struct vmw_private *dev_priv,
+ dest_x, dest_y, num_clips, inc,
+ &sdirty.base);
+ out_finish:
+- vmw_kms_helper_resource_finish(srf, out_fence);
++ vmw_kms_helper_resource_finish(&ctx, out_fence);
+
+ return ret;
+ }
+diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
+index 12eb8caa4263..3f8dde8d59ba 100644
+--- a/drivers/hv/ring_buffer.c
++++ b/drivers/hv/ring_buffer.c
+@@ -394,13 +394,24 @@ __hv_pkt_iter_next(struct vmbus_channel *channel,
+ }
+ EXPORT_SYMBOL_GPL(__hv_pkt_iter_next);
+
++/* How many bytes were read in this iterator cycle */
++static u32 hv_pkt_iter_bytes_read(const struct hv_ring_buffer_info *rbi,
++ u32 start_read_index)
++{
++ if (rbi->priv_read_index >= start_read_index)
++ return rbi->priv_read_index - start_read_index;
++ else
++ return rbi->ring_datasize - start_read_index +
++ rbi->priv_read_index;
++}
++
+ /*
+ * Update host ring buffer after iterating over packets.
+ */
+ void hv_pkt_iter_close(struct vmbus_channel *channel)
+ {
+ struct hv_ring_buffer_info *rbi = &channel->inbound;
+- u32 orig_write_sz = hv_get_bytes_to_write(rbi);
++ u32 curr_write_sz, pending_sz, bytes_read, start_read_index;
+
+ /*
+ * Make sure all reads are done before we update the read index since
+@@ -408,8 +419,12 @@ void hv_pkt_iter_close(struct vmbus_channel *channel)
+ * is updated.
+ */
+ virt_rmb();
++ start_read_index = rbi->ring_buffer->read_index;
+ rbi->ring_buffer->read_index = rbi->priv_read_index;
+
++ if (!rbi->ring_buffer->feature_bits.feat_pending_send_sz)
++ return;
++
+ /*
+ * Issue a full memory barrier before making the signaling decision.
+ * Here is the reason for having this barrier:
+@@ -423,26 +438,29 @@ void hv_pkt_iter_close(struct vmbus_channel *channel)
+ */
+ virt_mb();
+
+- /* If host has disabled notifications then skip */
+- if (rbi->ring_buffer->interrupt_mask)
++ pending_sz = READ_ONCE(rbi->ring_buffer->pending_send_sz);
++ if (!pending_sz)
+ return;
+
+- if (rbi->ring_buffer->feature_bits.feat_pending_send_sz) {
+- u32 pending_sz = READ_ONCE(rbi->ring_buffer->pending_send_sz);
++ /*
++ * Ensure the read of write_index in hv_get_bytes_to_write()
++ * happens after the read of pending_send_sz.
++ */
++ virt_rmb();
++ curr_write_sz = hv_get_bytes_to_write(rbi);
++ bytes_read = hv_pkt_iter_bytes_read(rbi, start_read_index);
+
+- /*
+- * If there was space before we began iteration,
+- * then host was not blocked. Also handles case where
+- * pending_sz is zero then host has nothing pending
+- * and does not need to be signaled.
+- */
+- if (orig_write_sz > pending_sz)
+- return;
++ /*
++ * If there was space before we began iteration,
++ * then host was not blocked.
++ */
+
+- /* If pending write will not fit, don't give false hope. */
+- if (hv_get_bytes_to_write(rbi) < pending_sz)
+- return;
+- }
++ if (curr_write_sz - bytes_read > pending_sz)
++ return;
++
++ /* If pending write will not fit, don't give false hope. */
++ if (curr_write_sz <= pending_sz)
++ return;
+
+ vmbus_setevent(channel);
+ }
+diff --git a/drivers/hwmon/k10temp.c b/drivers/hwmon/k10temp.c
+index b960015cb073..051a72eecb24 100644
+--- a/drivers/hwmon/k10temp.c
++++ b/drivers/hwmon/k10temp.c
+@@ -86,6 +86,7 @@ static const struct tctl_offset tctl_offset_table[] = {
+ { 0x17, "AMD Ryzen 7 1800X", 20000 },
+ { 0x17, "AMD Ryzen Threadripper 1950X", 27000 },
+ { 0x17, "AMD Ryzen Threadripper 1920X", 27000 },
++ { 0x17, "AMD Ryzen Threadripper 1900X", 27000 },
+ { 0x17, "AMD Ryzen Threadripper 1950", 10000 },
+ { 0x17, "AMD Ryzen Threadripper 1920", 10000 },
+ { 0x17, "AMD Ryzen Threadripper 1910", 10000 },
+@@ -128,7 +129,10 @@ static ssize_t temp1_input_show(struct device *dev,
+
+ data->read_tempreg(data->pdev, ®val);
+ temp = (regval >> 21) * 125;
+- temp -= data->temp_offset;
++ if (temp > data->temp_offset)
++ temp -= data->temp_offset;
++ else
++ temp = 0;
+
+ return sprintf(buf, "%u\n", temp);
+ }
+diff --git a/drivers/iio/accel/st_accel_core.c b/drivers/iio/accel/st_accel_core.c
+index 460aa58e0159..3e6fd5a8ac5b 100644
+--- a/drivers/iio/accel/st_accel_core.c
++++ b/drivers/iio/accel/st_accel_core.c
+@@ -951,7 +951,7 @@ int st_accel_common_probe(struct iio_dev *indio_dev)
+ if (!pdata)
+ pdata = (struct st_sensors_platform_data *)&default_accel_pdata;
+
+- err = st_sensors_init_sensor(indio_dev, adata->dev->platform_data);
++ err = st_sensors_init_sensor(indio_dev, pdata);
+ if (err < 0)
+ goto st_accel_power_off;
+
+diff --git a/drivers/iio/adc/meson_saradc.c b/drivers/iio/adc/meson_saradc.c
+index 36047147ce7c..0d237fd69769 100644
+--- a/drivers/iio/adc/meson_saradc.c
++++ b/drivers/iio/adc/meson_saradc.c
+@@ -462,8 +462,10 @@ static int meson_sar_adc_lock(struct iio_dev *indio_dev)
+ regmap_read(priv->regmap, MESON_SAR_ADC_DELAY, &val);
+ } while (val & MESON_SAR_ADC_DELAY_BL30_BUSY && timeout--);
+
+- if (timeout < 0)
++ if (timeout < 0) {
++ mutex_unlock(&indio_dev->mlock);
+ return -ETIMEDOUT;
++ }
+ }
+
+ return 0;
+diff --git a/drivers/iio/chemical/ccs811.c b/drivers/iio/chemical/ccs811.c
+index fbe2431f5b81..1ea9f5513b02 100644
+--- a/drivers/iio/chemical/ccs811.c
++++ b/drivers/iio/chemical/ccs811.c
+@@ -133,6 +133,9 @@ static int ccs811_start_sensor_application(struct i2c_client *client)
+ if (ret < 0)
+ return ret;
+
++ if ((ret & CCS811_STATUS_FW_MODE_APPLICATION))
++ return 0;
++
+ if ((ret & CCS811_STATUS_APP_VALID_MASK) !=
+ CCS811_STATUS_APP_VALID_LOADED)
+ return -EIO;
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+index 4fdb7fcc3ea8..cebc6bd31b79 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+@@ -130,6 +130,7 @@ struct st_lsm6dsx_sensor {
+ * @irq: Device interrupt line (I2C or SPI).
+ * @lock: Mutex to protect read and write operations.
+ * @fifo_lock: Mutex to prevent concurrent access to the hw FIFO.
++ * @conf_lock: Mutex to prevent concurrent FIFO configuration update.
+ * @fifo_mode: FIFO operating mode supported by the device.
+ * @enable_mask: Enabled sensor bitmask.
+ * @sip: Total number of samples (acc/gyro) in a given pattern.
+@@ -144,6 +145,7 @@ struct st_lsm6dsx_hw {
+
+ struct mutex lock;
+ struct mutex fifo_lock;
++ struct mutex conf_lock;
+
+ enum st_lsm6dsx_fifo_mode fifo_mode;
+ u8 enable_mask;
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+index 755c472e8a05..c899d658f6be 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+@@ -325,38 +325,40 @@ static int st_lsm6dsx_update_fifo(struct iio_dev *iio_dev, bool enable)
+ struct st_lsm6dsx_hw *hw = sensor->hw;
+ int err;
+
++ mutex_lock(&hw->conf_lock);
++
+ if (hw->fifo_mode != ST_LSM6DSX_FIFO_BYPASS) {
+ err = st_lsm6dsx_flush_fifo(hw);
+ if (err < 0)
+- return err;
++ goto out;
+ }
+
+ if (enable) {
+ err = st_lsm6dsx_sensor_enable(sensor);
+ if (err < 0)
+- return err;
++ goto out;
+ } else {
+ err = st_lsm6dsx_sensor_disable(sensor);
+ if (err < 0)
+- return err;
++ goto out;
+ }
+
+ err = st_lsm6dsx_set_fifo_odr(sensor, enable);
+ if (err < 0)
+- return err;
++ goto out;
+
+ err = st_lsm6dsx_update_decimators(hw);
+ if (err < 0)
+- return err;
++ goto out;
+
+ err = st_lsm6dsx_update_watermark(sensor, sensor->watermark);
+ if (err < 0)
+- return err;
++ goto out;
+
+ if (hw->enable_mask) {
+ err = st_lsm6dsx_set_fifo_mode(hw, ST_LSM6DSX_FIFO_CONT);
+ if (err < 0)
+- return err;
++ goto out;
+
+ /*
+ * store enable buffer timestamp as reference to compute
+@@ -365,7 +367,10 @@ static int st_lsm6dsx_update_fifo(struct iio_dev *iio_dev, bool enable)
+ sensor->ts = iio_get_time_ns(iio_dev);
+ }
+
+- return 0;
++out:
++ mutex_unlock(&hw->conf_lock);
++
++ return err;
+ }
+
+ static irqreturn_t st_lsm6dsx_handler_irq(int irq, void *private)
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index 239c735242be..4d43c956d676 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -448,7 +448,7 @@ static int st_lsm6dsx_read_oneshot(struct st_lsm6dsx_sensor *sensor,
+
+ st_lsm6dsx_sensor_disable(sensor);
+
+- *val = (s16)data;
++ *val = (s16)le16_to_cpu(data);
+
+ return IIO_VAL_INT;
+ }
+@@ -528,7 +528,12 @@ static int st_lsm6dsx_set_watermark(struct iio_dev *iio_dev, unsigned int val)
+ if (val < 1 || val > hw->settings->max_fifo_size)
+ return -EINVAL;
+
++ mutex_lock(&hw->conf_lock);
++
+ err = st_lsm6dsx_update_watermark(sensor, val);
++
++ mutex_unlock(&hw->conf_lock);
++
+ if (err < 0)
+ return err;
+
+@@ -739,6 +744,7 @@ int st_lsm6dsx_probe(struct device *dev, int irq, int hw_id, const char *name,
+
+ mutex_init(&hw->lock);
+ mutex_init(&hw->fifo_lock);
++ mutex_init(&hw->conf_lock);
+
+ hw->dev = dev;
+ hw->irq = irq;
+diff --git a/drivers/iio/pressure/st_pressure_core.c b/drivers/iio/pressure/st_pressure_core.c
+index 349e5c713c03..4ddb6cf7d401 100644
+--- a/drivers/iio/pressure/st_pressure_core.c
++++ b/drivers/iio/pressure/st_pressure_core.c
+@@ -640,7 +640,7 @@ int st_press_common_probe(struct iio_dev *indio_dev)
+ press_data->sensor_settings->drdy_irq.int2.addr))
+ pdata = (struct st_sensors_platform_data *)&default_press_pdata;
+
+- err = st_sensors_init_sensor(indio_dev, press_data->dev->platform_data);
++ err = st_sensors_init_sensor(indio_dev, pdata);
+ if (err < 0)
+ goto st_press_power_off;
+
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 1961c6a45437..c51c602f06d6 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -838,7 +838,8 @@ static int mr_umem_get(struct ib_pd *pd, u64 start, u64 length,
+ *umem = ib_umem_get(pd->uobject->context, start, length,
+ access_flags, 0);
+ err = PTR_ERR_OR_ZERO(*umem);
+- if (err < 0) {
++ if (err) {
++ *umem = NULL;
+ mlx5_ib_err(dev, "umem get failed (%d)\n", err);
+ return err;
+ }
+@@ -1415,6 +1416,7 @@ int mlx5_ib_rereg_user_mr(struct ib_mr *ib_mr, int flags, u64 start,
+ if (err) {
+ mlx5_ib_warn(dev, "Failed to rereg UMR\n");
+ ib_umem_release(mr->umem);
++ mr->umem = NULL;
+ clean_mr(dev, mr);
+ return err;
+ }
+@@ -1498,14 +1500,11 @@ static int clean_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
+ u32 key = mr->mmkey.key;
+
+ err = destroy_mkey(dev, mr);
+- kfree(mr);
+ if (err) {
+ mlx5_ib_warn(dev, "failed to destroy mkey 0x%x (%d)\n",
+ key, err);
+ return err;
+ }
+- } else {
+- mlx5_mr_cache_free(dev, mr);
+ }
+
+ return 0;
+@@ -1548,6 +1547,11 @@ static int dereg_mr(struct mlx5_ib_dev *dev, struct mlx5_ib_mr *mr)
+ atomic_sub(npages, &dev->mdev->priv.reg_pages);
+ }
+
++ if (!mr->allocated_from_cache)
++ kfree(mr);
++ else
++ mlx5_mr_cache_free(dev, mr);
++
+ return 0;
+ }
+
+diff --git a/drivers/media/platform/tegra-cec/tegra_cec.c b/drivers/media/platform/tegra-cec/tegra_cec.c
+index 92f93a880015..aba488cd0e64 100644
+--- a/drivers/media/platform/tegra-cec/tegra_cec.c
++++ b/drivers/media/platform/tegra-cec/tegra_cec.c
+@@ -172,16 +172,13 @@ static irqreturn_t tegra_cec_irq_handler(int irq, void *data)
+ }
+ }
+
+- if (status & (TEGRA_CEC_INT_STAT_RX_REGISTER_OVERRUN |
+- TEGRA_CEC_INT_STAT_RX_BUS_ANOMALY_DETECTED |
+- TEGRA_CEC_INT_STAT_RX_START_BIT_DETECTED |
+- TEGRA_CEC_INT_STAT_RX_BUS_ERROR_DETECTED)) {
++ if (status & TEGRA_CEC_INT_STAT_RX_START_BIT_DETECTED) {
+ cec_write(cec, TEGRA_CEC_INT_STAT,
+- (TEGRA_CEC_INT_STAT_RX_REGISTER_OVERRUN |
+- TEGRA_CEC_INT_STAT_RX_BUS_ANOMALY_DETECTED |
+- TEGRA_CEC_INT_STAT_RX_START_BIT_DETECTED |
+- TEGRA_CEC_INT_STAT_RX_BUS_ERROR_DETECTED));
+- } else if (status & TEGRA_CEC_INT_STAT_RX_REGISTER_FULL) {
++ TEGRA_CEC_INT_STAT_RX_START_BIT_DETECTED);
++ cec->rx_done = false;
++ cec->rx_buf_cnt = 0;
++ }
++ if (status & TEGRA_CEC_INT_STAT_RX_REGISTER_FULL) {
+ u32 v;
+
+ cec_write(cec, TEGRA_CEC_INT_STAT,
+@@ -255,7 +252,7 @@ static int tegra_cec_adap_enable(struct cec_adapter *adap, bool enable)
+ TEGRA_CEC_INT_MASK_TX_BUS_ANOMALY_DETECTED |
+ TEGRA_CEC_INT_MASK_TX_FRAME_TRANSMITTED |
+ TEGRA_CEC_INT_MASK_RX_REGISTER_FULL |
+- TEGRA_CEC_INT_MASK_RX_REGISTER_OVERRUN);
++ TEGRA_CEC_INT_MASK_RX_START_BIT_DETECTED);
+
+ cec_write(cec, TEGRA_CEC_HW_CONTROL, TEGRA_CEC_HWCTRL_TX_RX_MODE);
+ return 0;
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index b737a9540331..df5fe43072d6 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -66,6 +66,7 @@ MODULE_ALIAS("mmc:block");
+ #define MMC_BLK_TIMEOUT_MS (10 * 60 * 1000) /* 10 minute timeout */
+ #define MMC_SANITIZE_REQ_TIMEOUT 240000
+ #define MMC_EXTRACT_INDEX_FROM_ARG(x) ((x & 0x00FF0000) >> 16)
++#define MMC_EXTRACT_VALUE_FROM_ARG(x) ((x & 0x0000FF00) >> 8)
+
+ #define mmc_req_rel_wr(req) ((req->cmd_flags & REQ_FUA) && \
+ (rq_data_dir(req) == WRITE))
+@@ -579,6 +580,24 @@ static int __mmc_blk_ioctl_cmd(struct mmc_card *card, struct mmc_blk_data *md,
+ return data.error;
+ }
+
++ /*
++ * Make sure the cache of the PARTITION_CONFIG register and
++ * PARTITION_ACCESS bits is updated in case the ioctl ext_csd write
++ * changed it successfully.
++ */
++ if ((MMC_EXTRACT_INDEX_FROM_ARG(cmd.arg) == EXT_CSD_PART_CONFIG) &&
++ (cmd.opcode == MMC_SWITCH)) {
++ struct mmc_blk_data *main_md = dev_get_drvdata(&card->dev);
++ u8 value = MMC_EXTRACT_VALUE_FROM_ARG(cmd.arg);
++
++ /*
++ * Update cache so the next mmc_blk_part_switch call operates
++ * on up-to-date data.
++ */
++ card->ext_csd.part_config = value;
++ main_md->part_curr = value & EXT_CSD_PART_CONFIG_ACC_MASK;
++ }
++
+ /*
+ * According to the SD specs, some commands require a delay after
+ * issuing the command.
+diff --git a/drivers/mmc/core/card.h b/drivers/mmc/core/card.h
+index 79a5b985ccf5..9c821eedd156 100644
+--- a/drivers/mmc/core/card.h
++++ b/drivers/mmc/core/card.h
+@@ -82,6 +82,7 @@ struct mmc_fixup {
+ #define CID_MANFID_APACER 0x27
+ #define CID_MANFID_KINGSTON 0x70
+ #define CID_MANFID_HYNIX 0x90
++#define CID_MANFID_NUMONYX 0xFE
+
+ #define END_FIXUP { NULL }
+
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index 75d317623852..5153577754f0 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -109,6 +109,12 @@ static const struct mmc_fixup mmc_ext_csd_fixups[] = {
+ */
+ MMC_FIXUP_EXT_CSD_REV(CID_NAME_ANY, CID_MANFID_HYNIX,
+ 0x014a, add_quirk, MMC_QUIRK_BROKEN_HPI, 5),
++ /*
++ * Certain Micron (Numonyx) eMMC 4.5 cards might get broken when HPI
++ * feature is used so disable the HPI feature for such buggy cards.
++ */
++ MMC_FIXUP_EXT_CSD_REV(CID_NAME_ANY, CID_MANFID_NUMONYX,
++ 0x014e, add_quirk, MMC_QUIRK_BROKEN_HPI, 6),
+
+ END_FIXUP
+ };
+diff --git a/drivers/mmc/host/dw_mmc-exynos.c b/drivers/mmc/host/dw_mmc-exynos.c
+index fa41d9422d57..a84aa3f1ae85 100644
+--- a/drivers/mmc/host/dw_mmc-exynos.c
++++ b/drivers/mmc/host/dw_mmc-exynos.c
+@@ -165,9 +165,15 @@ static void dw_mci_exynos_set_clksel_timing(struct dw_mci *host, u32 timing)
+ static int dw_mci_exynos_runtime_resume(struct device *dev)
+ {
+ struct dw_mci *host = dev_get_drvdata(dev);
++ int ret;
++
++ ret = dw_mci_runtime_resume(dev);
++ if (ret)
++ return ret;
+
+ dw_mci_exynos_config_smu(host);
+- return dw_mci_runtime_resume(dev);
++
++ return ret;
+ }
+
+ /**
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index d9b4acefed31..06d47414d0c1 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -413,7 +413,9 @@ static inline void dw_mci_set_cto(struct dw_mci *host)
+ cto_div = (mci_readl(host, CLKDIV) & 0xff) * 2;
+ if (cto_div == 0)
+ cto_div = 1;
+- cto_ms = DIV_ROUND_UP(MSEC_PER_SEC * cto_clks * cto_div, host->bus_hz);
++
++ cto_ms = DIV_ROUND_UP_ULL((u64)MSEC_PER_SEC * cto_clks * cto_div,
++ host->bus_hz);
+
+ /* add a bit spare time */
+ cto_ms += 10;
+@@ -562,6 +564,7 @@ static int dw_mci_idmac_init(struct dw_mci *host)
+ (sizeof(struct idmac_desc_64addr) *
+ (i + 1))) >> 32;
+ /* Initialize reserved and buffer size fields to "0" */
++ p->des0 = 0;
+ p->des1 = 0;
+ p->des2 = 0;
+ p->des3 = 0;
+@@ -584,6 +587,7 @@ static int dw_mci_idmac_init(struct dw_mci *host)
+ i++, p++) {
+ p->des3 = cpu_to_le32(host->sg_dma +
+ (sizeof(struct idmac_desc) * (i + 1)));
++ p->des0 = 0;
+ p->des1 = 0;
+ }
+
+@@ -1799,8 +1803,8 @@ static bool dw_mci_reset(struct dw_mci *host)
+ }
+
+ if (host->use_dma == TRANS_MODE_IDMAC)
+- /* It is also recommended that we reset and reprogram idmac */
+- dw_mci_idmac_reset(host);
++ /* It is also required that we reinit idmac */
++ dw_mci_idmac_init(host);
+
+ ret = true;
+
+@@ -1948,8 +1952,9 @@ static void dw_mci_set_drto(struct dw_mci *host)
+ drto_div = (mci_readl(host, CLKDIV) & 0xff) * 2;
+ if (drto_div == 0)
+ drto_div = 1;
+- drto_ms = DIV_ROUND_UP(MSEC_PER_SEC * drto_clks * drto_div,
+- host->bus_hz);
++
++ drto_ms = DIV_ROUND_UP_ULL((u64)MSEC_PER_SEC * drto_clks * drto_div,
++ host->bus_hz);
+
+ /* add a bit spare time */
+ drto_ms += 10;
+diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
+index de8c902059b8..7d80a8bb96fe 100644
+--- a/drivers/mtd/mtdchar.c
++++ b/drivers/mtd/mtdchar.c
+@@ -479,7 +479,7 @@ static int shrink_ecclayout(struct mtd_info *mtd,
+ for (i = 0; i < MTD_MAX_ECCPOS_ENTRIES;) {
+ u32 eccpos;
+
+- ret = mtd_ooblayout_ecc(mtd, section, &oobregion);
++ ret = mtd_ooblayout_ecc(mtd, section++, &oobregion);
+ if (ret < 0) {
+ if (ret != -ERANGE)
+ return ret;
+@@ -526,7 +526,7 @@ static int get_oobinfo(struct mtd_info *mtd, struct nand_oobinfo *to)
+ for (i = 0; i < ARRAY_SIZE(to->eccpos);) {
+ u32 eccpos;
+
+- ret = mtd_ooblayout_ecc(mtd, section, &oobregion);
++ ret = mtd_ooblayout_ecc(mtd, section++, &oobregion);
+ if (ret < 0) {
+ if (ret != -ERANGE)
+ return ret;
+diff --git a/drivers/mtd/nand/fsl_ifc_nand.c b/drivers/mtd/nand/fsl_ifc_nand.c
+index bbdd68a54d68..4005b427023c 100644
+--- a/drivers/mtd/nand/fsl_ifc_nand.c
++++ b/drivers/mtd/nand/fsl_ifc_nand.c
+@@ -173,14 +173,9 @@ static void set_addr(struct mtd_info *mtd, int column, int page_addr, int oob)
+
+ /* returns nonzero if entire page is blank */
+ static int check_read_ecc(struct mtd_info *mtd, struct fsl_ifc_ctrl *ctrl,
+- u32 *eccstat, unsigned int bufnum)
++ u32 eccstat, unsigned int bufnum)
+ {
+- u32 reg = eccstat[bufnum / 4];
+- int errors;
+-
+- errors = (reg >> ((3 - bufnum % 4) * 8)) & 15;
+-
+- return errors;
++ return (eccstat >> ((3 - bufnum % 4) * 8)) & 15;
+ }
+
+ /*
+@@ -193,7 +188,7 @@ static void fsl_ifc_run_command(struct mtd_info *mtd)
+ struct fsl_ifc_ctrl *ctrl = priv->ctrl;
+ struct fsl_ifc_nand_ctrl *nctrl = ifc_nand_ctrl;
+ struct fsl_ifc_runtime __iomem *ifc = ctrl->rregs;
+- u32 eccstat[4];
++ u32 eccstat;
+ int i;
+
+ /* set the chip select for NAND Transaction */
+@@ -228,19 +223,17 @@ static void fsl_ifc_run_command(struct mtd_info *mtd)
+ if (nctrl->eccread) {
+ int errors;
+ int bufnum = nctrl->page & priv->bufnum_mask;
+- int sector = bufnum * chip->ecc.steps;
+- int sector_end = sector + chip->ecc.steps - 1;
++ int sector_start = bufnum * chip->ecc.steps;
++ int sector_end = sector_start + chip->ecc.steps - 1;
+ __be32 *eccstat_regs;
+
+- if (ctrl->version >= FSL_IFC_VERSION_2_0_0)
+- eccstat_regs = ifc->ifc_nand.v2_nand_eccstat;
+- else
+- eccstat_regs = ifc->ifc_nand.v1_nand_eccstat;
++ eccstat_regs = ifc->ifc_nand.nand_eccstat;
++ eccstat = ifc_in32(&eccstat_regs[sector_start / 4]);
+
+- for (i = sector / 4; i <= sector_end / 4; i++)
+- eccstat[i] = ifc_in32(&eccstat_regs[i]);
++ for (i = sector_start; i <= sector_end; i++) {
++ if (i != sector_start && !(i % 4))
++ eccstat = ifc_in32(&eccstat_regs[i / 4]);
+
+- for (i = sector; i <= sector_end; i++) {
+ errors = check_read_ecc(mtd, ctrl, eccstat, i);
+
+ if (errors == 15) {
+@@ -626,6 +619,7 @@ static int fsl_ifc_wait(struct mtd_info *mtd, struct nand_chip *chip)
+ struct fsl_ifc_ctrl *ctrl = priv->ctrl;
+ struct fsl_ifc_runtime __iomem *ifc = ctrl->rregs;
+ u32 nand_fsr;
++ int status;
+
+ /* Use READ_STATUS command, but wait for the device to be ready */
+ ifc_out32((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
+@@ -640,12 +634,12 @@ static int fsl_ifc_wait(struct mtd_info *mtd, struct nand_chip *chip)
+ fsl_ifc_run_command(mtd);
+
+ nand_fsr = ifc_in32(&ifc->ifc_nand.nand_fsr);
+-
++ status = nand_fsr >> 24;
+ /*
+ * The chip always seems to report that it is
+ * write-protected, even when it is not.
+ */
+- return nand_fsr | NAND_STATUS_WP;
++ return status | NAND_STATUS_WP;
+ }
+
+ /*
+diff --git a/drivers/net/can/cc770/cc770.c b/drivers/net/can/cc770/cc770.c
+index 1e37313054f3..6da69af103e6 100644
+--- a/drivers/net/can/cc770/cc770.c
++++ b/drivers/net/can/cc770/cc770.c
+@@ -390,37 +390,23 @@ static int cc770_get_berr_counter(const struct net_device *dev,
+ return 0;
+ }
+
+-static netdev_tx_t cc770_start_xmit(struct sk_buff *skb, struct net_device *dev)
++static void cc770_tx(struct net_device *dev, int mo)
+ {
+ struct cc770_priv *priv = netdev_priv(dev);
+- struct net_device_stats *stats = &dev->stats;
+- struct can_frame *cf = (struct can_frame *)skb->data;
+- unsigned int mo = obj2msgobj(CC770_OBJ_TX);
++ struct can_frame *cf = (struct can_frame *)priv->tx_skb->data;
+ u8 dlc, rtr;
+ u32 id;
+ int i;
+
+- if (can_dropped_invalid_skb(dev, skb))
+- return NETDEV_TX_OK;
+-
+- if ((cc770_read_reg(priv,
+- msgobj[mo].ctrl1) & TXRQST_UNC) == TXRQST_SET) {
+- netdev_err(dev, "TX register is still occupied!\n");
+- return NETDEV_TX_BUSY;
+- }
+-
+- netif_stop_queue(dev);
+-
+ dlc = cf->can_dlc;
+ id = cf->can_id;
+- if (cf->can_id & CAN_RTR_FLAG)
+- rtr = 0;
+- else
+- rtr = MSGCFG_DIR;
++ rtr = cf->can_id & CAN_RTR_FLAG ? 0 : MSGCFG_DIR;
++
++ cc770_write_reg(priv, msgobj[mo].ctrl0,
++ MSGVAL_RES | TXIE_RES | RXIE_RES | INTPND_RES);
+ cc770_write_reg(priv, msgobj[mo].ctrl1,
+ RMTPND_RES | TXRQST_RES | CPUUPD_SET | NEWDAT_RES);
+- cc770_write_reg(priv, msgobj[mo].ctrl0,
+- MSGVAL_SET | TXIE_SET | RXIE_RES | INTPND_RES);
++
+ if (id & CAN_EFF_FLAG) {
+ id &= CAN_EFF_MASK;
+ cc770_write_reg(priv, msgobj[mo].config,
+@@ -439,22 +425,30 @@ static netdev_tx_t cc770_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ for (i = 0; i < dlc; i++)
+ cc770_write_reg(priv, msgobj[mo].data[i], cf->data[i]);
+
+- /* Store echo skb before starting the transfer */
+- can_put_echo_skb(skb, dev, 0);
+-
+ cc770_write_reg(priv, msgobj[mo].ctrl1,
+- RMTPND_RES | TXRQST_SET | CPUUPD_RES | NEWDAT_UNC);
++ RMTPND_UNC | TXRQST_SET | CPUUPD_RES | NEWDAT_UNC);
++ cc770_write_reg(priv, msgobj[mo].ctrl0,
++ MSGVAL_SET | TXIE_SET | RXIE_SET | INTPND_UNC);
++}
+
+- stats->tx_bytes += dlc;
++static netdev_tx_t cc770_start_xmit(struct sk_buff *skb, struct net_device *dev)
++{
++ struct cc770_priv *priv = netdev_priv(dev);
++ unsigned int mo = obj2msgobj(CC770_OBJ_TX);
+
++ if (can_dropped_invalid_skb(dev, skb))
++ return NETDEV_TX_OK;
+
+- /*
+- * HM: We had some cases of repeated IRQs so make sure the
+- * INT is acknowledged I know it's already further up, but
+- * doing again fixed the issue
+- */
+- cc770_write_reg(priv, msgobj[mo].ctrl0,
+- MSGVAL_UNC | TXIE_UNC | RXIE_UNC | INTPND_RES);
++ netif_stop_queue(dev);
++
++ if ((cc770_read_reg(priv,
++ msgobj[mo].ctrl1) & TXRQST_UNC) == TXRQST_SET) {
++ netdev_err(dev, "TX register is still occupied!\n");
++ return NETDEV_TX_BUSY;
++ }
++
++ priv->tx_skb = skb;
++ cc770_tx(dev, mo);
+
+ return NETDEV_TX_OK;
+ }
+@@ -680,19 +674,46 @@ static void cc770_tx_interrupt(struct net_device *dev, unsigned int o)
+ struct cc770_priv *priv = netdev_priv(dev);
+ struct net_device_stats *stats = &dev->stats;
+ unsigned int mo = obj2msgobj(o);
++ struct can_frame *cf;
++ u8 ctrl1;
++
++ ctrl1 = cc770_read_reg(priv, msgobj[mo].ctrl1);
+
+- /* Nothing more to send, switch off interrupts */
+ cc770_write_reg(priv, msgobj[mo].ctrl0,
+ MSGVAL_RES | TXIE_RES | RXIE_RES | INTPND_RES);
+- /*
+- * We had some cases of repeated IRQ so make sure the
+- * INT is acknowledged
++ cc770_write_reg(priv, msgobj[mo].ctrl1,
++ RMTPND_RES | TXRQST_RES | MSGLST_RES | NEWDAT_RES);
++
++ if (unlikely(!priv->tx_skb)) {
++ netdev_err(dev, "missing tx skb in tx interrupt\n");
++ return;
++ }
++
++ if (unlikely(ctrl1 & MSGLST_SET)) {
++ stats->rx_over_errors++;
++ stats->rx_errors++;
++ }
++
++ /* When the CC770 is sending an RTR message and it receives a regular
++ * message that matches the id of the RTR message, it will overwrite the
++ * outgoing message in the TX register. When this happens we must
++ * process the received message and try to transmit the outgoing skb
++ * again.
+ */
+- cc770_write_reg(priv, msgobj[mo].ctrl0,
+- MSGVAL_UNC | TXIE_UNC | RXIE_UNC | INTPND_RES);
++ if (unlikely(ctrl1 & NEWDAT_SET)) {
++ cc770_rx(dev, mo, ctrl1);
++ cc770_tx(dev, mo);
++ return;
++ }
+
++ cf = (struct can_frame *)priv->tx_skb->data;
++ stats->tx_bytes += cf->can_dlc;
+ stats->tx_packets++;
++
++ can_put_echo_skb(priv->tx_skb, dev, 0);
+ can_get_echo_skb(dev, 0);
++ priv->tx_skb = NULL;
++
+ netif_wake_queue(dev);
+ }
+
+@@ -804,6 +825,7 @@ struct net_device *alloc_cc770dev(int sizeof_priv)
+ priv->can.do_set_bittiming = cc770_set_bittiming;
+ priv->can.do_set_mode = cc770_set_mode;
+ priv->can.ctrlmode_supported = CAN_CTRLMODE_3_SAMPLES;
++ priv->tx_skb = NULL;
+
+ memcpy(priv->obj_flags, cc770_obj_flags, sizeof(cc770_obj_flags));
+
+diff --git a/drivers/net/can/cc770/cc770.h b/drivers/net/can/cc770/cc770.h
+index a1739db98d91..95752e1d1283 100644
+--- a/drivers/net/can/cc770/cc770.h
++++ b/drivers/net/can/cc770/cc770.h
+@@ -193,6 +193,8 @@ struct cc770_priv {
+ u8 cpu_interface; /* CPU interface register */
+ u8 clkout; /* Clock out register */
+ u8 bus_config; /* Bus conffiguration register */
++
++ struct sk_buff *tx_skb;
+ };
+
+ struct net_device *alloc_cc770dev(int sizeof_priv);
+diff --git a/drivers/net/can/ifi_canfd/ifi_canfd.c b/drivers/net/can/ifi_canfd/ifi_canfd.c
+index 2772d05ff11c..fedd927ba6ed 100644
+--- a/drivers/net/can/ifi_canfd/ifi_canfd.c
++++ b/drivers/net/can/ifi_canfd/ifi_canfd.c
+@@ -30,6 +30,7 @@
+ #define IFI_CANFD_STCMD_ERROR_ACTIVE BIT(2)
+ #define IFI_CANFD_STCMD_ERROR_PASSIVE BIT(3)
+ #define IFI_CANFD_STCMD_BUSOFF BIT(4)
++#define IFI_CANFD_STCMD_ERROR_WARNING BIT(5)
+ #define IFI_CANFD_STCMD_BUSMONITOR BIT(16)
+ #define IFI_CANFD_STCMD_LOOPBACK BIT(18)
+ #define IFI_CANFD_STCMD_DISABLE_CANFD BIT(24)
+@@ -52,7 +53,10 @@
+ #define IFI_CANFD_TXSTCMD_OVERFLOW BIT(13)
+
+ #define IFI_CANFD_INTERRUPT 0xc
++#define IFI_CANFD_INTERRUPT_ERROR_BUSOFF BIT(0)
+ #define IFI_CANFD_INTERRUPT_ERROR_WARNING BIT(1)
++#define IFI_CANFD_INTERRUPT_ERROR_STATE_CHG BIT(2)
++#define IFI_CANFD_INTERRUPT_ERROR_REC_TEC_INC BIT(3)
+ #define IFI_CANFD_INTERRUPT_ERROR_COUNTER BIT(10)
+ #define IFI_CANFD_INTERRUPT_TXFIFO_EMPTY BIT(16)
+ #define IFI_CANFD_INTERRUPT_TXFIFO_REMOVE BIT(22)
+@@ -61,6 +65,10 @@
+ #define IFI_CANFD_INTERRUPT_SET_IRQ ((u32)BIT(31))
+
+ #define IFI_CANFD_IRQMASK 0x10
++#define IFI_CANFD_IRQMASK_ERROR_BUSOFF BIT(0)
++#define IFI_CANFD_IRQMASK_ERROR_WARNING BIT(1)
++#define IFI_CANFD_IRQMASK_ERROR_STATE_CHG BIT(2)
++#define IFI_CANFD_IRQMASK_ERROR_REC_TEC_INC BIT(3)
+ #define IFI_CANFD_IRQMASK_SET_ERR BIT(7)
+ #define IFI_CANFD_IRQMASK_SET_TS BIT(15)
+ #define IFI_CANFD_IRQMASK_TXFIFO_EMPTY BIT(16)
+@@ -136,6 +144,8 @@
+ #define IFI_CANFD_SYSCLOCK 0x50
+
+ #define IFI_CANFD_VER 0x54
++#define IFI_CANFD_VER_REV_MASK 0xff
++#define IFI_CANFD_VER_REV_MIN_SUPPORTED 0x15
+
+ #define IFI_CANFD_IP_ID 0x58
+ #define IFI_CANFD_IP_ID_VALUE 0xD073CAFD
+@@ -220,7 +230,10 @@ static void ifi_canfd_irq_enable(struct net_device *ndev, bool enable)
+
+ if (enable) {
+ enirq = IFI_CANFD_IRQMASK_TXFIFO_EMPTY |
+- IFI_CANFD_IRQMASK_RXFIFO_NEMPTY;
++ IFI_CANFD_IRQMASK_RXFIFO_NEMPTY |
++ IFI_CANFD_IRQMASK_ERROR_STATE_CHG |
++ IFI_CANFD_IRQMASK_ERROR_WARNING |
++ IFI_CANFD_IRQMASK_ERROR_BUSOFF;
+ if (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
+ enirq |= IFI_CANFD_INTERRUPT_ERROR_COUNTER;
+ }
+@@ -361,12 +374,13 @@ static int ifi_canfd_handle_lost_msg(struct net_device *ndev)
+ return 1;
+ }
+
+-static int ifi_canfd_handle_lec_err(struct net_device *ndev, const u32 errctr)
++static int ifi_canfd_handle_lec_err(struct net_device *ndev)
+ {
+ struct ifi_canfd_priv *priv = netdev_priv(ndev);
+ struct net_device_stats *stats = &ndev->stats;
+ struct can_frame *cf;
+ struct sk_buff *skb;
++ u32 errctr = readl(priv->base + IFI_CANFD_ERROR_CTR);
+ const u32 errmask = IFI_CANFD_ERROR_CTR_OVERLOAD_FIRST |
+ IFI_CANFD_ERROR_CTR_ACK_ERROR_FIRST |
+ IFI_CANFD_ERROR_CTR_BIT0_ERROR_FIRST |
+@@ -449,6 +463,11 @@ static int ifi_canfd_handle_state_change(struct net_device *ndev,
+
+ switch (new_state) {
+ case CAN_STATE_ERROR_ACTIVE:
++ /* error active state */
++ priv->can.can_stats.error_warning++;
++ priv->can.state = CAN_STATE_ERROR_ACTIVE;
++ break;
++ case CAN_STATE_ERROR_WARNING:
+ /* error warning state */
+ priv->can.can_stats.error_warning++;
+ priv->can.state = CAN_STATE_ERROR_WARNING;
+@@ -477,7 +496,7 @@ static int ifi_canfd_handle_state_change(struct net_device *ndev,
+ ifi_canfd_get_berr_counter(ndev, &bec);
+
+ switch (new_state) {
+- case CAN_STATE_ERROR_ACTIVE:
++ case CAN_STATE_ERROR_WARNING:
+ /* error warning state */
+ cf->can_id |= CAN_ERR_CRTL;
+ cf->data[1] = (bec.txerr > bec.rxerr) ?
+@@ -510,22 +529,21 @@ static int ifi_canfd_handle_state_change(struct net_device *ndev,
+ return 1;
+ }
+
+-static int ifi_canfd_handle_state_errors(struct net_device *ndev, u32 stcmd)
++static int ifi_canfd_handle_state_errors(struct net_device *ndev)
+ {
+ struct ifi_canfd_priv *priv = netdev_priv(ndev);
++ u32 stcmd = readl(priv->base + IFI_CANFD_STCMD);
+ int work_done = 0;
+- u32 isr;
+
+- /*
+- * The ErrWarn condition is a little special, since the bit is
+- * located in the INTERRUPT register instead of STCMD register.
+- */
+- isr = readl(priv->base + IFI_CANFD_INTERRUPT);
+- if ((isr & IFI_CANFD_INTERRUPT_ERROR_WARNING) &&
++ if ((stcmd & IFI_CANFD_STCMD_ERROR_ACTIVE) &&
++ (priv->can.state != CAN_STATE_ERROR_ACTIVE)) {
++ netdev_dbg(ndev, "Error, entered active state\n");
++ work_done += ifi_canfd_handle_state_change(ndev,
++ CAN_STATE_ERROR_ACTIVE);
++ }
++
++ if ((stcmd & IFI_CANFD_STCMD_ERROR_WARNING) &&
+ (priv->can.state != CAN_STATE_ERROR_WARNING)) {
+- /* Clear the interrupt */
+- writel(IFI_CANFD_INTERRUPT_ERROR_WARNING,
+- priv->base + IFI_CANFD_INTERRUPT);
+ netdev_dbg(ndev, "Error, entered warning state\n");
+ work_done += ifi_canfd_handle_state_change(ndev,
+ CAN_STATE_ERROR_WARNING);
+@@ -552,18 +570,11 @@ static int ifi_canfd_poll(struct napi_struct *napi, int quota)
+ {
+ struct net_device *ndev = napi->dev;
+ struct ifi_canfd_priv *priv = netdev_priv(ndev);
+- const u32 stcmd_state_mask = IFI_CANFD_STCMD_ERROR_PASSIVE |
+- IFI_CANFD_STCMD_BUSOFF;
+- int work_done = 0;
+-
+- u32 stcmd = readl(priv->base + IFI_CANFD_STCMD);
+ u32 rxstcmd = readl(priv->base + IFI_CANFD_RXSTCMD);
+- u32 errctr = readl(priv->base + IFI_CANFD_ERROR_CTR);
++ int work_done = 0;
+
+ /* Handle bus state changes */
+- if ((stcmd & stcmd_state_mask) ||
+- ((stcmd & IFI_CANFD_STCMD_ERROR_ACTIVE) == 0))
+- work_done += ifi_canfd_handle_state_errors(ndev, stcmd);
++ work_done += ifi_canfd_handle_state_errors(ndev);
+
+ /* Handle lost messages on RX */
+ if (rxstcmd & IFI_CANFD_RXSTCMD_OVERFLOW)
+@@ -571,7 +582,7 @@ static int ifi_canfd_poll(struct napi_struct *napi, int quota)
+
+ /* Handle lec errors on the bus */
+ if (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
+- work_done += ifi_canfd_handle_lec_err(ndev, errctr);
++ work_done += ifi_canfd_handle_lec_err(ndev);
+
+ /* Handle normal messages on RX */
+ if (!(rxstcmd & IFI_CANFD_RXSTCMD_EMPTY))
+@@ -592,12 +603,13 @@ static irqreturn_t ifi_canfd_isr(int irq, void *dev_id)
+ struct net_device_stats *stats = &ndev->stats;
+ const u32 rx_irq_mask = IFI_CANFD_INTERRUPT_RXFIFO_NEMPTY |
+ IFI_CANFD_INTERRUPT_RXFIFO_NEMPTY_PER |
++ IFI_CANFD_INTERRUPT_ERROR_COUNTER |
++ IFI_CANFD_INTERRUPT_ERROR_STATE_CHG |
+ IFI_CANFD_INTERRUPT_ERROR_WARNING |
+- IFI_CANFD_INTERRUPT_ERROR_COUNTER;
++ IFI_CANFD_INTERRUPT_ERROR_BUSOFF;
+ const u32 tx_irq_mask = IFI_CANFD_INTERRUPT_TXFIFO_EMPTY |
+ IFI_CANFD_INTERRUPT_TXFIFO_REMOVE;
+- const u32 clr_irq_mask = ~((u32)(IFI_CANFD_INTERRUPT_SET_IRQ |
+- IFI_CANFD_INTERRUPT_ERROR_WARNING));
++ const u32 clr_irq_mask = ~((u32)IFI_CANFD_INTERRUPT_SET_IRQ);
+ u32 isr;
+
+ isr = readl(priv->base + IFI_CANFD_INTERRUPT);
+@@ -933,7 +945,7 @@ static int ifi_canfd_plat_probe(struct platform_device *pdev)
+ struct resource *res;
+ void __iomem *addr;
+ int irq, ret;
+- u32 id;
++ u32 id, rev;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ addr = devm_ioremap_resource(dev, res);
+@@ -947,6 +959,13 @@ static int ifi_canfd_plat_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+
++ rev = readl(addr + IFI_CANFD_VER) & IFI_CANFD_VER_REV_MASK;
++ if (rev < IFI_CANFD_VER_REV_MIN_SUPPORTED) {
++ dev_err(dev, "This block is too old (rev %i), minimum supported is rev %i\n",
++ rev, IFI_CANFD_VER_REV_MIN_SUPPORTED);
++ return -EINVAL;
++ }
++
+ ndev = alloc_candev(sizeof(*priv), 1);
+ if (!ndev)
+ return -ENOMEM;
+diff --git a/drivers/net/can/peak_canfd/peak_canfd.c b/drivers/net/can/peak_canfd/peak_canfd.c
+index 55513411a82e..ed8561d4a90f 100644
+--- a/drivers/net/can/peak_canfd/peak_canfd.c
++++ b/drivers/net/can/peak_canfd/peak_canfd.c
+@@ -262,7 +262,6 @@ static int pucan_handle_can_rx(struct peak_canfd_priv *priv,
+
+ spin_lock_irqsave(&priv->echo_lock, flags);
+ can_get_echo_skb(priv->ndev, msg->client);
+- spin_unlock_irqrestore(&priv->echo_lock, flags);
+
+ /* count bytes of the echo instead of skb */
+ stats->tx_bytes += cf_len;
+@@ -271,6 +270,7 @@ static int pucan_handle_can_rx(struct peak_canfd_priv *priv,
+ /* restart tx queue (a slot is free) */
+ netif_wake_queue(priv->ndev);
+
++ spin_unlock_irqrestore(&priv->echo_lock, flags);
+ return 0;
+ }
+
+@@ -333,7 +333,6 @@ static int pucan_handle_status(struct peak_canfd_priv *priv,
+
+ /* this STATUS is the CNF of the RX_BARRIER: Tx path can be setup */
+ if (pucan_status_is_rx_barrier(msg)) {
+- unsigned long flags;
+
+ if (priv->enable_tx_path) {
+ int err = priv->enable_tx_path(priv);
+@@ -342,16 +341,8 @@ static int pucan_handle_status(struct peak_canfd_priv *priv,
+ return err;
+ }
+
+- /* restart network queue only if echo skb array is free */
+- spin_lock_irqsave(&priv->echo_lock, flags);
+-
+- if (!priv->can.echo_skb[priv->echo_idx]) {
+- spin_unlock_irqrestore(&priv->echo_lock, flags);
+-
+- netif_wake_queue(ndev);
+- } else {
+- spin_unlock_irqrestore(&priv->echo_lock, flags);
+- }
++ /* start network queue (echo_skb array is empty) */
++ netif_start_queue(ndev);
+
+ return 0;
+ }
+@@ -726,11 +717,6 @@ static netdev_tx_t peak_canfd_start_xmit(struct sk_buff *skb,
+ */
+ should_stop_tx_queue = !!(priv->can.echo_skb[priv->echo_idx]);
+
+- spin_unlock_irqrestore(&priv->echo_lock, flags);
+-
+- /* write the skb on the interface */
+- priv->write_tx_msg(priv, msg);
+-
+ /* stop network tx queue if not enough room to save one more msg too */
+ if (priv->can.ctrlmode & CAN_CTRLMODE_FD)
+ should_stop_tx_queue |= (room_left <
+@@ -742,6 +728,11 @@ static netdev_tx_t peak_canfd_start_xmit(struct sk_buff *skb,
+ if (should_stop_tx_queue)
+ netif_stop_queue(ndev);
+
++ spin_unlock_irqrestore(&priv->echo_lock, flags);
++
++ /* write the skb on the interface */
++ priv->write_tx_msg(priv, msg);
++
+ return NETDEV_TX_OK;
+ }
+
+diff --git a/drivers/net/can/peak_canfd/peak_pciefd_main.c b/drivers/net/can/peak_canfd/peak_pciefd_main.c
+index 788c3464a3b0..3c51a884db87 100644
+--- a/drivers/net/can/peak_canfd/peak_pciefd_main.c
++++ b/drivers/net/can/peak_canfd/peak_pciefd_main.c
+@@ -349,8 +349,12 @@ static irqreturn_t pciefd_irq_handler(int irq, void *arg)
+ priv->tx_pages_free++;
+ spin_unlock_irqrestore(&priv->tx_lock, flags);
+
+- /* wake producer up */
+- netif_wake_queue(priv->ucan.ndev);
++ /* wake producer up (only if enough room in echo_skb array) */
++ spin_lock_irqsave(&priv->ucan.echo_lock, flags);
++ if (!priv->ucan.can.echo_skb[priv->ucan.echo_idx])
++ netif_wake_queue(priv->ucan.ndev);
++
++ spin_unlock_irqrestore(&priv->ucan.echo_lock, flags);
+ }
+
+ /* re-enable Rx DMA transfer for this CAN */
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
+index 2ee54133efa1..82064e909784 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c
+@@ -462,25 +462,23 @@ static int brcmf_p2p_set_firmware(struct brcmf_if *ifp, u8 *p2p_mac)
+ * @dev_addr: optional device address.
+ *
+ * P2P needs mac addresses for P2P device and interface. If no device
+- * address it specified, these are derived from the primary net device, ie.
+- * the permanent ethernet address of the device.
++ * address it specified, these are derived from a random ethernet
++ * address.
+ */
+ static void brcmf_p2p_generate_bss_mac(struct brcmf_p2p_info *p2p, u8 *dev_addr)
+ {
+- struct brcmf_if *pri_ifp = p2p->bss_idx[P2PAPI_BSSCFG_PRIMARY].vif->ifp;
+- bool local_admin = false;
++ bool random_addr = false;
+
+- if (!dev_addr || is_zero_ether_addr(dev_addr)) {
+- dev_addr = pri_ifp->mac_addr;
+- local_admin = true;
+- }
++ if (!dev_addr || is_zero_ether_addr(dev_addr))
++ random_addr = true;
+
+- /* Generate the P2P Device Address. This consists of the device's
+- * primary MAC address with the locally administered bit set.
++ /* Generate the P2P Device Address obtaining a random ethernet
++ * address with the locally administered bit set.
+ */
+- memcpy(p2p->dev_addr, dev_addr, ETH_ALEN);
+- if (local_admin)
+- p2p->dev_addr[0] |= 0x02;
++ if (random_addr)
++ eth_random_addr(p2p->dev_addr);
++ else
++ memcpy(p2p->dev_addr, dev_addr, ETH_ALEN);
+
+ /* Generate the P2P Interface Address. If the discovery and connection
+ * BSSCFGs need to simultaneously co-exist, then this address must be
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
+index 7cd1ffa7d4a7..7df0a02f9b34 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c
+@@ -1124,7 +1124,8 @@ static void _rtl8723be_enable_aspm_back_door(struct ieee80211_hw *hw)
+
+ /* Configuration Space offset 0x70f BIT7 is used to control L0S */
+ tmp8 = _rtl8723be_dbi_read(rtlpriv, 0x70f);
+- _rtl8723be_dbi_write(rtlpriv, 0x70f, tmp8 | BIT(7));
++ _rtl8723be_dbi_write(rtlpriv, 0x70f, tmp8 | BIT(7) |
++ ASPM_L1_LATENCY << 3);
+
+ /* Configuration Space offset 0x719 Bit3 is for L1
+ * BIT4 is for clock request
+diff --git a/drivers/nvdimm/blk.c b/drivers/nvdimm/blk.c
+index 345acca576b3..1bd7b3734751 100644
+--- a/drivers/nvdimm/blk.c
++++ b/drivers/nvdimm/blk.c
+@@ -278,8 +278,6 @@ static int nsblk_attach_disk(struct nd_namespace_blk *nsblk)
+ disk->queue = q;
+ disk->flags = GENHD_FL_EXT_DEVT;
+ nvdimm_namespace_disk_name(&nsblk->common, disk->disk_name);
+- set_capacity(disk, 0);
+- device_add_disk(dev, disk);
+
+ if (devm_add_action_or_reset(dev, nd_blk_release_disk, disk))
+ return -ENOMEM;
+@@ -292,6 +290,7 @@ static int nsblk_attach_disk(struct nd_namespace_blk *nsblk)
+ }
+
+ set_capacity(disk, available_disk_size >> SECTOR_SHIFT);
++ device_add_disk(dev, disk);
+ revalidate_disk(disk);
+ return 0;
+ }
+diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
+index c586bcdb5190..c625df951fa1 100644
+--- a/drivers/nvdimm/btt.c
++++ b/drivers/nvdimm/btt.c
+@@ -1545,8 +1545,6 @@ static int btt_blk_init(struct btt *btt)
+ queue_flag_set_unlocked(QUEUE_FLAG_NONROT, btt->btt_queue);
+ btt->btt_queue->queuedata = btt;
+
+- set_capacity(btt->btt_disk, 0);
+- device_add_disk(&btt->nd_btt->dev, btt->btt_disk);
+ if (btt_meta_size(btt)) {
+ int rc = nd_integrity_init(btt->btt_disk, btt_meta_size(btt));
+
+@@ -1558,6 +1556,7 @@ static int btt_blk_init(struct btt *btt)
+ }
+ }
+ set_capacity(btt->btt_disk, btt->nlba * btt->sector_size >> 9);
++ device_add_disk(&btt->nd_btt->dev, btt->btt_disk);
+ btt->nd_btt->size = btt->nlba * (u64)btt->sector_size;
+ revalidate_disk(btt->btt_disk);
+
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index d7135140bf40..a2f18c4a0346 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -3906,6 +3906,8 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9230,
+ quirk_dma_func1_alias);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TTI, 0x0642,
+ quirk_dma_func1_alias);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_TTI, 0x0645,
++ quirk_dma_func1_alias);
+ /* https://bugs.gentoo.org/show_bug.cgi?id=497630 */
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_JMICRON,
+ PCI_DEVICE_ID_JMICRON_JMB388_ESD,
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos-arm.c b/drivers/pinctrl/samsung/pinctrl-exynos-arm.c
+index 071084d3ee9c..92aeea174a56 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos-arm.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos-arm.c
+@@ -129,7 +129,7 @@ static const struct samsung_pin_bank_data s5pv210_pin_bank[] __initconst = {
+ EXYNOS_PIN_BANK_EINTW(8, 0xc60, "gph3", 0x0c),
+ };
+
+-const struct samsung_pin_ctrl s5pv210_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl s5pv210_pin_ctrl[] __initconst = {
+ {
+ /* pin-controller instance 0 data */
+ .pin_banks = s5pv210_pin_bank,
+@@ -142,6 +142,11 @@ const struct samsung_pin_ctrl s5pv210_pin_ctrl[] __initconst = {
+ },
+ };
+
++const struct samsung_pinctrl_of_match_data s5pv210_of_data __initconst = {
++ .ctrl = s5pv210_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(s5pv210_pin_ctrl),
++};
++
+ /* Pad retention control code for accessing PMU regmap */
+ static atomic_t exynos_shared_retention_refcnt;
+
+@@ -204,7 +209,7 @@ static const struct samsung_retention_data exynos3250_retention_data __initconst
+ * Samsung pinctrl driver data for Exynos3250 SoC. Exynos3250 SoC includes
+ * two gpio/pin-mux/pinconfig controllers.
+ */
+-const struct samsung_pin_ctrl exynos3250_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl exynos3250_pin_ctrl[] __initconst = {
+ {
+ /* pin-controller instance 0 data */
+ .pin_banks = exynos3250_pin_banks0,
+@@ -225,6 +230,11 @@ const struct samsung_pin_ctrl exynos3250_pin_ctrl[] __initconst = {
+ },
+ };
+
++const struct samsung_pinctrl_of_match_data exynos3250_of_data __initconst = {
++ .ctrl = exynos3250_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(exynos3250_pin_ctrl),
++};
++
+ /* pin banks of exynos4210 pin-controller 0 */
+ static const struct samsung_pin_bank_data exynos4210_pin_banks0[] __initconst = {
+ EXYNOS_PIN_BANK_EINTG(8, 0x000, "gpa0", 0x00),
+@@ -308,7 +318,7 @@ static const struct samsung_retention_data exynos4_audio_retention_data __initco
+ * Samsung pinctrl driver data for Exynos4210 SoC. Exynos4210 SoC includes
+ * three gpio/pin-mux/pinconfig controllers.
+ */
+-const struct samsung_pin_ctrl exynos4210_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl exynos4210_pin_ctrl[] __initconst = {
+ {
+ /* pin-controller instance 0 data */
+ .pin_banks = exynos4210_pin_banks0,
+@@ -334,6 +344,11 @@ const struct samsung_pin_ctrl exynos4210_pin_ctrl[] __initconst = {
+ },
+ };
+
++const struct samsung_pinctrl_of_match_data exynos4210_of_data __initconst = {
++ .ctrl = exynos4210_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(exynos4210_pin_ctrl),
++};
++
+ /* pin banks of exynos4x12 pin-controller 0 */
+ static const struct samsung_pin_bank_data exynos4x12_pin_banks0[] __initconst = {
+ EXYNOS_PIN_BANK_EINTG(8, 0x000, "gpa0", 0x00),
+@@ -396,7 +411,7 @@ static const struct samsung_pin_bank_data exynos4x12_pin_banks3[] __initconst =
+ * Samsung pinctrl driver data for Exynos4x12 SoC. Exynos4x12 SoC includes
+ * four gpio/pin-mux/pinconfig controllers.
+ */
+-const struct samsung_pin_ctrl exynos4x12_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl exynos4x12_pin_ctrl[] __initconst = {
+ {
+ /* pin-controller instance 0 data */
+ .pin_banks = exynos4x12_pin_banks0,
+@@ -432,6 +447,11 @@ const struct samsung_pin_ctrl exynos4x12_pin_ctrl[] __initconst = {
+ },
+ };
+
++const struct samsung_pinctrl_of_match_data exynos4x12_of_data __initconst = {
++ .ctrl = exynos4x12_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(exynos4x12_pin_ctrl),
++};
++
+ /* pin banks of exynos5250 pin-controller 0 */
+ static const struct samsung_pin_bank_data exynos5250_pin_banks0[] __initconst = {
+ EXYNOS_PIN_BANK_EINTG(8, 0x000, "gpa0", 0x00),
+@@ -492,7 +512,7 @@ static const struct samsung_pin_bank_data exynos5250_pin_banks3[] __initconst =
+ * Samsung pinctrl driver data for Exynos5250 SoC. Exynos5250 SoC includes
+ * four gpio/pin-mux/pinconfig controllers.
+ */
+-const struct samsung_pin_ctrl exynos5250_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl exynos5250_pin_ctrl[] __initconst = {
+ {
+ /* pin-controller instance 0 data */
+ .pin_banks = exynos5250_pin_banks0,
+@@ -528,6 +548,11 @@ const struct samsung_pin_ctrl exynos5250_pin_ctrl[] __initconst = {
+ },
+ };
+
++const struct samsung_pinctrl_of_match_data exynos5250_of_data __initconst = {
++ .ctrl = exynos5250_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(exynos5250_pin_ctrl),
++};
++
+ /* pin banks of exynos5260 pin-controller 0 */
+ static const struct samsung_pin_bank_data exynos5260_pin_banks0[] __initconst = {
+ EXYNOS_PIN_BANK_EINTG(4, 0x000, "gpa0", 0x00),
+@@ -572,7 +597,7 @@ static const struct samsung_pin_bank_data exynos5260_pin_banks2[] __initconst =
+ * Samsung pinctrl driver data for Exynos5260 SoC. Exynos5260 SoC includes
+ * three gpio/pin-mux/pinconfig controllers.
+ */
+-const struct samsung_pin_ctrl exynos5260_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl exynos5260_pin_ctrl[] __initconst = {
+ {
+ /* pin-controller instance 0 data */
+ .pin_banks = exynos5260_pin_banks0,
+@@ -592,6 +617,11 @@ const struct samsung_pin_ctrl exynos5260_pin_ctrl[] __initconst = {
+ },
+ };
+
++const struct samsung_pinctrl_of_match_data exynos5260_of_data __initconst = {
++ .ctrl = exynos5260_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(exynos5260_pin_ctrl),
++};
++
+ /* pin banks of exynos5410 pin-controller 0 */
+ static const struct samsung_pin_bank_data exynos5410_pin_banks0[] __initconst = {
+ EXYNOS_PIN_BANK_EINTG(8, 0x000, "gpa0", 0x00),
+@@ -662,7 +692,7 @@ static const struct samsung_pin_bank_data exynos5410_pin_banks3[] __initconst =
+ * Samsung pinctrl driver data for Exynos5410 SoC. Exynos5410 SoC includes
+ * four gpio/pin-mux/pinconfig controllers.
+ */
+-const struct samsung_pin_ctrl exynos5410_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl exynos5410_pin_ctrl[] __initconst = {
+ {
+ /* pin-controller instance 0 data */
+ .pin_banks = exynos5410_pin_banks0,
+@@ -695,6 +725,11 @@ const struct samsung_pin_ctrl exynos5410_pin_ctrl[] __initconst = {
+ },
+ };
+
++const struct samsung_pinctrl_of_match_data exynos5410_of_data __initconst = {
++ .ctrl = exynos5410_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(exynos5410_pin_ctrl),
++};
++
+ /* pin banks of exynos5420 pin-controller 0 */
+ static const struct samsung_pin_bank_data exynos5420_pin_banks0[] __initconst = {
+ EXYNOS_PIN_BANK_EINTG(8, 0x000, "gpy7", 0x00),
+@@ -779,7 +814,7 @@ static const struct samsung_retention_data exynos5420_retention_data __initconst
+ * Samsung pinctrl driver data for Exynos5420 SoC. Exynos5420 SoC includes
+ * four gpio/pin-mux/pinconfig controllers.
+ */
+-const struct samsung_pin_ctrl exynos5420_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl exynos5420_pin_ctrl[] __initconst = {
+ {
+ /* pin-controller instance 0 data */
+ .pin_banks = exynos5420_pin_banks0,
+@@ -813,3 +848,8 @@ const struct samsung_pin_ctrl exynos5420_pin_ctrl[] __initconst = {
+ .retention_data = &exynos4_audio_retention_data,
+ },
+ };
++
++const struct samsung_pinctrl_of_match_data exynos5420_of_data __initconst = {
++ .ctrl = exynos5420_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(exynos5420_pin_ctrl),
++};
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+index 08e9fdb58fd2..0ab88fc268ea 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+@@ -180,7 +180,7 @@ static const struct samsung_retention_data exynos5433_fsys_retention_data __init
+ * Samsung pinctrl driver data for Exynos5433 SoC. Exynos5433 SoC includes
+ * ten gpio/pin-mux/pinconfig controllers.
+ */
+-const struct samsung_pin_ctrl exynos5433_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl exynos5433_pin_ctrl[] __initconst = {
+ {
+ /* pin-controller instance 0 data */
+ .pin_banks = exynos5433_pin_banks0,
+@@ -265,6 +265,11 @@ const struct samsung_pin_ctrl exynos5433_pin_ctrl[] __initconst = {
+ },
+ };
+
++const struct samsung_pinctrl_of_match_data exynos5433_of_data __initconst = {
++ .ctrl = exynos5433_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(exynos5433_pin_ctrl),
++};
++
+ /* pin banks of exynos7 pin-controller - ALIVE */
+ static const struct samsung_pin_bank_data exynos7_pin_banks0[] __initconst = {
+ EXYNOS_PIN_BANK_EINTW(8, 0x000, "gpa0", 0x00),
+@@ -344,7 +349,7 @@ static const struct samsung_pin_bank_data exynos7_pin_banks9[] __initconst = {
+ EXYNOS_PIN_BANK_EINTG(4, 0x020, "gpz1", 0x04),
+ };
+
+-const struct samsung_pin_ctrl exynos7_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl exynos7_pin_ctrl[] __initconst = {
+ {
+ /* pin-controller instance 0 Alive data */
+ .pin_banks = exynos7_pin_banks0,
+@@ -397,3 +402,8 @@ const struct samsung_pin_ctrl exynos7_pin_ctrl[] __initconst = {
+ .eint_gpio_init = exynos_eint_gpio_init,
+ },
+ };
++
++const struct samsung_pinctrl_of_match_data exynos7_of_data __initconst = {
++ .ctrl = exynos7_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(exynos7_pin_ctrl),
++};
+diff --git a/drivers/pinctrl/samsung/pinctrl-s3c24xx.c b/drivers/pinctrl/samsung/pinctrl-s3c24xx.c
+index edf27264b603..67da1cf18b68 100644
+--- a/drivers/pinctrl/samsung/pinctrl-s3c24xx.c
++++ b/drivers/pinctrl/samsung/pinctrl-s3c24xx.c
+@@ -570,7 +570,7 @@ static const struct samsung_pin_bank_data s3c2412_pin_banks[] __initconst = {
+ PIN_BANK_2BIT(13, 0x080, "gpj"),
+ };
+
+-const struct samsung_pin_ctrl s3c2412_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl s3c2412_pin_ctrl[] __initconst = {
+ {
+ .pin_banks = s3c2412_pin_banks,
+ .nr_banks = ARRAY_SIZE(s3c2412_pin_banks),
+@@ -578,6 +578,11 @@ const struct samsung_pin_ctrl s3c2412_pin_ctrl[] __initconst = {
+ },
+ };
+
++const struct samsung_pinctrl_of_match_data s3c2412_of_data __initconst = {
++ .ctrl = s3c2412_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(s3c2412_pin_ctrl),
++};
++
+ static const struct samsung_pin_bank_data s3c2416_pin_banks[] __initconst = {
+ PIN_BANK_A(27, 0x000, "gpa"),
+ PIN_BANK_2BIT(11, 0x010, "gpb"),
+@@ -592,7 +597,7 @@ static const struct samsung_pin_bank_data s3c2416_pin_banks[] __initconst = {
+ PIN_BANK_2BIT(2, 0x100, "gpm"),
+ };
+
+-const struct samsung_pin_ctrl s3c2416_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl s3c2416_pin_ctrl[] __initconst = {
+ {
+ .pin_banks = s3c2416_pin_banks,
+ .nr_banks = ARRAY_SIZE(s3c2416_pin_banks),
+@@ -600,6 +605,11 @@ const struct samsung_pin_ctrl s3c2416_pin_ctrl[] __initconst = {
+ },
+ };
+
++const struct samsung_pinctrl_of_match_data s3c2416_of_data __initconst = {
++ .ctrl = s3c2416_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(s3c2416_pin_ctrl),
++};
++
+ static const struct samsung_pin_bank_data s3c2440_pin_banks[] __initconst = {
+ PIN_BANK_A(25, 0x000, "gpa"),
+ PIN_BANK_2BIT(11, 0x010, "gpb"),
+@@ -612,7 +622,7 @@ static const struct samsung_pin_bank_data s3c2440_pin_banks[] __initconst = {
+ PIN_BANK_2BIT(13, 0x0d0, "gpj"),
+ };
+
+-const struct samsung_pin_ctrl s3c2440_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl s3c2440_pin_ctrl[] __initconst = {
+ {
+ .pin_banks = s3c2440_pin_banks,
+ .nr_banks = ARRAY_SIZE(s3c2440_pin_banks),
+@@ -620,6 +630,11 @@ const struct samsung_pin_ctrl s3c2440_pin_ctrl[] __initconst = {
+ },
+ };
+
++const struct samsung_pinctrl_of_match_data s3c2440_of_data __initconst = {
++ .ctrl = s3c2440_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(s3c2440_pin_ctrl),
++};
++
+ static const struct samsung_pin_bank_data s3c2450_pin_banks[] __initconst = {
+ PIN_BANK_A(28, 0x000, "gpa"),
+ PIN_BANK_2BIT(11, 0x010, "gpb"),
+@@ -635,10 +650,15 @@ static const struct samsung_pin_bank_data s3c2450_pin_banks[] __initconst = {
+ PIN_BANK_2BIT(2, 0x100, "gpm"),
+ };
+
+-const struct samsung_pin_ctrl s3c2450_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl s3c2450_pin_ctrl[] __initconst = {
+ {
+ .pin_banks = s3c2450_pin_banks,
+ .nr_banks = ARRAY_SIZE(s3c2450_pin_banks),
+ .eint_wkup_init = s3c24xx_eint_init,
+ },
+ };
++
++const struct samsung_pinctrl_of_match_data s3c2450_of_data __initconst = {
++ .ctrl = s3c2450_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(s3c2450_pin_ctrl),
++};
+diff --git a/drivers/pinctrl/samsung/pinctrl-s3c64xx.c b/drivers/pinctrl/samsung/pinctrl-s3c64xx.c
+index e63663b32907..0bdc1e683181 100644
+--- a/drivers/pinctrl/samsung/pinctrl-s3c64xx.c
++++ b/drivers/pinctrl/samsung/pinctrl-s3c64xx.c
+@@ -794,7 +794,7 @@ static const struct samsung_pin_bank_data s3c64xx_pin_banks0[] __initconst = {
+ * Samsung pinctrl driver data for S3C64xx SoC. S3C64xx SoC includes
+ * one gpio/pin-mux/pinconfig controller.
+ */
+-const struct samsung_pin_ctrl s3c64xx_pin_ctrl[] __initconst = {
++static const struct samsung_pin_ctrl s3c64xx_pin_ctrl[] __initconst = {
+ {
+ /* pin-controller instance 1 data */
+ .pin_banks = s3c64xx_pin_banks0,
+@@ -803,3 +803,8 @@ const struct samsung_pin_ctrl s3c64xx_pin_ctrl[] __initconst = {
+ .eint_wkup_init = s3c64xx_eint_eint0_init,
+ },
+ };
++
++const struct samsung_pinctrl_of_match_data s3c64xx_of_data __initconst = {
++ .ctrl = s3c64xx_pin_ctrl,
++ .num_ctrl = ARRAY_SIZE(s3c64xx_pin_ctrl),
++};
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index e04f7fe0a65d..26e8fab736f1 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -947,12 +947,33 @@ static int samsung_gpiolib_register(struct platform_device *pdev,
+ return 0;
+ }
+
++static const struct samsung_pin_ctrl *
++samsung_pinctrl_get_soc_data_for_of_alias(struct platform_device *pdev)
++{
++ struct device_node *node = pdev->dev.of_node;
++ const struct samsung_pinctrl_of_match_data *of_data;
++ int id;
++
++ id = of_alias_get_id(node, "pinctrl");
++ if (id < 0) {
++ dev_err(&pdev->dev, "failed to get alias id\n");
++ return NULL;
++ }
++
++ of_data = of_device_get_match_data(&pdev->dev);
++ if (id >= of_data->num_ctrl) {
++ dev_err(&pdev->dev, "invalid alias id %d\n", id);
++ return NULL;
++ }
++
++ return &(of_data->ctrl[id]);
++}
++
+ /* retrieve the soc specific data */
+ static const struct samsung_pin_ctrl *
+ samsung_pinctrl_get_soc_data(struct samsung_pinctrl_drv_data *d,
+ struct platform_device *pdev)
+ {
+- int id;
+ struct device_node *node = pdev->dev.of_node;
+ struct device_node *np;
+ const struct samsung_pin_bank_data *bdata;
+@@ -962,13 +983,9 @@ samsung_pinctrl_get_soc_data(struct samsung_pinctrl_drv_data *d,
+ void __iomem *virt_base[SAMSUNG_PINCTRL_NUM_RESOURCES];
+ unsigned int i;
+
+- id = of_alias_get_id(node, "pinctrl");
+- if (id < 0) {
+- dev_err(&pdev->dev, "failed to get alias id\n");
++ ctrl = samsung_pinctrl_get_soc_data_for_of_alias(pdev);
++ if (!ctrl)
+ return ERR_PTR(-ENOENT);
+- }
+- ctrl = of_device_get_match_data(&pdev->dev);
+- ctrl += id;
+
+ d->suspend = ctrl->suspend;
+ d->resume = ctrl->resume;
+@@ -1193,41 +1210,41 @@ static int __maybe_unused samsung_pinctrl_resume(struct device *dev)
+ static const struct of_device_id samsung_pinctrl_dt_match[] = {
+ #ifdef CONFIG_PINCTRL_EXYNOS_ARM
+ { .compatible = "samsung,exynos3250-pinctrl",
+- .data = exynos3250_pin_ctrl },
++ .data = &exynos3250_of_data },
+ { .compatible = "samsung,exynos4210-pinctrl",
+- .data = exynos4210_pin_ctrl },
++ .data = &exynos4210_of_data },
+ { .compatible = "samsung,exynos4x12-pinctrl",
+- .data = exynos4x12_pin_ctrl },
++ .data = &exynos4x12_of_data },
+ { .compatible = "samsung,exynos5250-pinctrl",
+- .data = exynos5250_pin_ctrl },
++ .data = &exynos5250_of_data },
+ { .compatible = "samsung,exynos5260-pinctrl",
+- .data = exynos5260_pin_ctrl },
++ .data = &exynos5260_of_data },
+ { .compatible = "samsung,exynos5410-pinctrl",
+- .data = exynos5410_pin_ctrl },
++ .data = &exynos5410_of_data },
+ { .compatible = "samsung,exynos5420-pinctrl",
+- .data = exynos5420_pin_ctrl },
++ .data = &exynos5420_of_data },
+ { .compatible = "samsung,s5pv210-pinctrl",
+- .data = s5pv210_pin_ctrl },
++ .data = &s5pv210_of_data },
+ #endif
+ #ifdef CONFIG_PINCTRL_EXYNOS_ARM64
+ { .compatible = "samsung,exynos5433-pinctrl",
+- .data = exynos5433_pin_ctrl },
++ .data = &exynos5433_of_data },
+ { .compatible = "samsung,exynos7-pinctrl",
+- .data = exynos7_pin_ctrl },
++ .data = &exynos7_of_data },
+ #endif
+ #ifdef CONFIG_PINCTRL_S3C64XX
+ { .compatible = "samsung,s3c64xx-pinctrl",
+- .data = s3c64xx_pin_ctrl },
++ .data = &s3c64xx_of_data },
+ #endif
+ #ifdef CONFIG_PINCTRL_S3C24XX
+ { .compatible = "samsung,s3c2412-pinctrl",
+- .data = s3c2412_pin_ctrl },
++ .data = &s3c2412_of_data },
+ { .compatible = "samsung,s3c2416-pinctrl",
+- .data = s3c2416_pin_ctrl },
++ .data = &s3c2416_of_data },
+ { .compatible = "samsung,s3c2440-pinctrl",
+- .data = s3c2440_pin_ctrl },
++ .data = &s3c2440_of_data },
+ { .compatible = "samsung,s3c2450-pinctrl",
+- .data = s3c2450_pin_ctrl },
++ .data = &s3c2450_of_data },
+ #endif
+ {},
+ };
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.h b/drivers/pinctrl/samsung/pinctrl-samsung.h
+index 9af07af6cad6..ae932e0c05f2 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.h
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.h
+@@ -285,6 +285,16 @@ struct samsung_pinctrl_drv_data {
+ void (*resume)(struct samsung_pinctrl_drv_data *);
+ };
+
++/**
++ * struct samsung_pinctrl_of_match_data: OF match device specific configuration data.
++ * @ctrl: array of pin controller data.
++ * @num_ctrl: size of array @ctrl.
++ */
++struct samsung_pinctrl_of_match_data {
++ const struct samsung_pin_ctrl *ctrl;
++ unsigned int num_ctrl;
++};
++
+ /**
+ * struct samsung_pin_group: represent group of pins of a pinmux function.
+ * @name: name of the pin group, used to lookup the group.
+@@ -313,20 +323,20 @@ struct samsung_pmx_func {
+ };
+
+ /* list of all exported SoC specific data */
+-extern const struct samsung_pin_ctrl exynos3250_pin_ctrl[];
+-extern const struct samsung_pin_ctrl exynos4210_pin_ctrl[];
+-extern const struct samsung_pin_ctrl exynos4x12_pin_ctrl[];
+-extern const struct samsung_pin_ctrl exynos5250_pin_ctrl[];
+-extern const struct samsung_pin_ctrl exynos5260_pin_ctrl[];
+-extern const struct samsung_pin_ctrl exynos5410_pin_ctrl[];
+-extern const struct samsung_pin_ctrl exynos5420_pin_ctrl[];
+-extern const struct samsung_pin_ctrl exynos5433_pin_ctrl[];
+-extern const struct samsung_pin_ctrl exynos7_pin_ctrl[];
+-extern const struct samsung_pin_ctrl s3c64xx_pin_ctrl[];
+-extern const struct samsung_pin_ctrl s3c2412_pin_ctrl[];
+-extern const struct samsung_pin_ctrl s3c2416_pin_ctrl[];
+-extern const struct samsung_pin_ctrl s3c2440_pin_ctrl[];
+-extern const struct samsung_pin_ctrl s3c2450_pin_ctrl[];
+-extern const struct samsung_pin_ctrl s5pv210_pin_ctrl[];
++extern const struct samsung_pinctrl_of_match_data exynos3250_of_data;
++extern const struct samsung_pinctrl_of_match_data exynos4210_of_data;
++extern const struct samsung_pinctrl_of_match_data exynos4x12_of_data;
++extern const struct samsung_pinctrl_of_match_data exynos5250_of_data;
++extern const struct samsung_pinctrl_of_match_data exynos5260_of_data;
++extern const struct samsung_pinctrl_of_match_data exynos5410_of_data;
++extern const struct samsung_pinctrl_of_match_data exynos5420_of_data;
++extern const struct samsung_pinctrl_of_match_data exynos5433_of_data;
++extern const struct samsung_pinctrl_of_match_data exynos7_of_data;
++extern const struct samsung_pinctrl_of_match_data s3c64xx_of_data;
++extern const struct samsung_pinctrl_of_match_data s3c2412_of_data;
++extern const struct samsung_pinctrl_of_match_data s3c2416_of_data;
++extern const struct samsung_pinctrl_of_match_data s3c2440_of_data;
++extern const struct samsung_pinctrl_of_match_data s3c2450_of_data;
++extern const struct samsung_pinctrl_of_match_data s5pv210_of_data;
+
+ #endif /* __PINCTRL_SAMSUNG_H */
+diff --git a/drivers/staging/android/ion/ion_cma_heap.c b/drivers/staging/android/ion/ion_cma_heap.c
+index 86196ffd2faf..fa3e4b7e0c9f 100644
+--- a/drivers/staging/android/ion/ion_cma_heap.c
++++ b/drivers/staging/android/ion/ion_cma_heap.c
+@@ -21,6 +21,7 @@
+ #include <linux/err.h>
+ #include <linux/cma.h>
+ #include <linux/scatterlist.h>
++#include <linux/highmem.h>
+
+ #include "ion.h"
+
+@@ -51,6 +52,22 @@ static int ion_cma_allocate(struct ion_heap *heap, struct ion_buffer *buffer,
+ if (!pages)
+ return -ENOMEM;
+
++ if (PageHighMem(pages)) {
++ unsigned long nr_clear_pages = nr_pages;
++ struct page *page = pages;
++
++ while (nr_clear_pages > 0) {
++ void *vaddr = kmap_atomic(page);
++
++ memset(vaddr, 0, PAGE_SIZE);
++ kunmap_atomic(vaddr);
++ page++;
++ nr_clear_pages--;
++ }
++ } else {
++ memset(page_address(pages), 0, size);
++ }
++
+ table = kmalloc(sizeof(*table), GFP_KERNEL);
+ if (!table)
+ goto err;
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 88b902c525d7..b4e57c5a8bba 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -1727,7 +1727,7 @@ static void reset_terminal(struct vc_data *vc, int do_clear)
+ default_attr(vc);
+ update_attr(vc);
+
+- vc->vc_tab_stop[0] = 0x01010100;
++ vc->vc_tab_stop[0] =
+ vc->vc_tab_stop[1] =
+ vc->vc_tab_stop[2] =
+ vc->vc_tab_stop[3] =
+@@ -1771,7 +1771,7 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ vc->vc_pos -= (vc->vc_x << 1);
+ while (vc->vc_x < vc->vc_cols - 1) {
+ vc->vc_x++;
+- if (vc->vc_tab_stop[vc->vc_x >> 5] & (1 << (vc->vc_x & 31)))
++ if (vc->vc_tab_stop[7 & (vc->vc_x >> 5)] & (1 << (vc->vc_x & 31)))
+ break;
+ }
+ vc->vc_pos += (vc->vc_x << 1);
+@@ -1831,7 +1831,7 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ lf(vc);
+ return;
+ case 'H':
+- vc->vc_tab_stop[vc->vc_x >> 5] |= (1 << (vc->vc_x & 31));
++ vc->vc_tab_stop[7 & (vc->vc_x >> 5)] |= (1 << (vc->vc_x & 31));
+ return;
+ case 'Z':
+ respond_ID(tty);
+@@ -2024,7 +2024,7 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ return;
+ case 'g':
+ if (!vc->vc_par[0])
+- vc->vc_tab_stop[vc->vc_x >> 5] &= ~(1 << (vc->vc_x & 31));
++ vc->vc_tab_stop[7 & (vc->vc_x >> 5)] &= ~(1 << (vc->vc_x & 31));
+ else if (vc->vc_par[0] == 3) {
+ vc->vc_tab_stop[0] =
+ vc->vc_tab_stop[1] =
+diff --git a/drivers/watchdog/wdat_wdt.c b/drivers/watchdog/wdat_wdt.c
+index 6d1fbda0f461..0da9943d405f 100644
+--- a/drivers/watchdog/wdat_wdt.c
++++ b/drivers/watchdog/wdat_wdt.c
+@@ -392,7 +392,7 @@ static int wdat_wdt_probe(struct platform_device *pdev)
+
+ memset(&r, 0, sizeof(r));
+ r.start = gas->address;
+- r.end = r.start + gas->access_width;
++ r.end = r.start + gas->access_width - 1;
+ if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
+ r.flags = IORESOURCE_MEM;
+ } else if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index 8a85f3f53446..8a5bde8b1444 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -118,6 +118,16 @@ static void huge_pagevec_release(struct pagevec *pvec)
+ pagevec_reinit(pvec);
+ }
+
++/*
++ * Mask used when checking the page offset value passed in via system
++ * calls. This value will be converted to a loff_t which is signed.
++ * Therefore, we want to check the upper PAGE_SHIFT + 1 bits of the
++ * value. The extra bit (- 1 in the shift value) is to take the sign
++ * bit into account.
++ */
++#define PGOFF_LOFFT_MAX \
++ (((1UL << (PAGE_SHIFT + 1)) - 1) << (BITS_PER_LONG - (PAGE_SHIFT + 1)))
++
+ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ {
+ struct inode *inode = file_inode(file);
+@@ -137,12 +147,13 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+ vma->vm_ops = &hugetlb_vm_ops;
+
+ /*
+- * Offset passed to mmap (before page shift) could have been
+- * negative when represented as a (l)off_t.
++ * page based offset in vm_pgoff could be sufficiently large to
++ * overflow a (l)off_t when converted to byte offset.
+ */
+- if (((loff_t)vma->vm_pgoff << PAGE_SHIFT) < 0)
++ if (vma->vm_pgoff & PGOFF_LOFFT_MAX)
+ return -EINVAL;
+
++ /* must be huge page aligned */
+ if (vma->vm_pgoff & (~huge_page_mask(h) >> PAGE_SHIFT))
+ return -EINVAL;
+
+diff --git a/fs/ncpfs/ncplib_kernel.c b/fs/ncpfs/ncplib_kernel.c
+index 804adfebba2f..3e047eb4cc7c 100644
+--- a/fs/ncpfs/ncplib_kernel.c
++++ b/fs/ncpfs/ncplib_kernel.c
+@@ -981,6 +981,10 @@ ncp_read_kernel(struct ncp_server *server, const char *file_id,
+ goto out;
+ }
+ *bytes_read = ncp_reply_be16(server, 0);
++ if (*bytes_read > to_read) {
++ result = -EINVAL;
++ goto out;
++ }
+ source = ncp_reply_data(server, 2 + (offset & 1));
+
+ memcpy(target, source, *bytes_read);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 5a75135f5f53..974b1be7f148 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -268,6 +268,35 @@ free_blocked_lock(struct nfsd4_blocked_lock *nbl)
+ kfree(nbl);
+ }
+
++static void
++remove_blocked_locks(struct nfs4_lockowner *lo)
++{
++ struct nfs4_client *clp = lo->lo_owner.so_client;
++ struct nfsd_net *nn = net_generic(clp->net, nfsd_net_id);
++ struct nfsd4_blocked_lock *nbl;
++ LIST_HEAD(reaplist);
++
++ /* Dequeue all blocked locks */
++ spin_lock(&nn->blocked_locks_lock);
++ while (!list_empty(&lo->lo_blocked)) {
++ nbl = list_first_entry(&lo->lo_blocked,
++ struct nfsd4_blocked_lock,
++ nbl_list);
++ list_del_init(&nbl->nbl_list);
++ list_move(&nbl->nbl_lru, &reaplist);
++ }
++ spin_unlock(&nn->blocked_locks_lock);
++
++ /* Now free them */
++ while (!list_empty(&reaplist)) {
++ nbl = list_first_entry(&reaplist, struct nfsd4_blocked_lock,
++ nbl_lru);
++ list_del_init(&nbl->nbl_lru);
++ posix_unblock_lock(&nbl->nbl_lock);
++ free_blocked_lock(nbl);
++ }
++}
++
+ static int
+ nfsd4_cb_notify_lock_done(struct nfsd4_callback *cb, struct rpc_task *task)
+ {
+@@ -1866,6 +1895,7 @@ static __be32 mark_client_expired_locked(struct nfs4_client *clp)
+ static void
+ __destroy_client(struct nfs4_client *clp)
+ {
++ int i;
+ struct nfs4_openowner *oo;
+ struct nfs4_delegation *dp;
+ struct list_head reaplist;
+@@ -1895,6 +1925,16 @@ __destroy_client(struct nfs4_client *clp)
+ nfs4_get_stateowner(&oo->oo_owner);
+ release_openowner(oo);
+ }
++ for (i = 0; i < OWNER_HASH_SIZE; i++) {
++ struct nfs4_stateowner *so, *tmp;
++
++ list_for_each_entry_safe(so, tmp, &clp->cl_ownerstr_hashtbl[i],
++ so_strhash) {
++ /* Should be no openowners at this point */
++ WARN_ON_ONCE(so->so_is_open_owner);
++ remove_blocked_locks(lockowner(so));
++ }
++ }
+ nfsd4_return_all_client_layouts(clp);
+ nfsd4_shutdown_callback(clp);
+ if (clp->cl_cb_conn.cb_xprt)
+@@ -6358,6 +6398,7 @@ nfsd4_release_lockowner(struct svc_rqst *rqstp,
+ }
+ spin_unlock(&clp->cl_lock);
+ free_ol_stateid_reaplist(&reaplist);
++ remove_blocked_locks(lo);
+ nfs4_put_stateowner(&lo->lo_owner);
+
+ return status;
+@@ -7143,6 +7184,8 @@ nfs4_state_destroy_net(struct net *net)
+ }
+ }
+
++ WARN_ON(!list_empty(&nn->blocked_locks_lru));
++
+ for (i = 0; i < CLIENT_HASH_SIZE; i++) {
+ while (!list_empty(&nn->unconf_id_hashtbl[i])) {
+ clp = list_entry(nn->unconf_id_hashtbl[i].next, struct nfs4_client, cl_idhash);
+@@ -7209,7 +7252,6 @@ nfs4_state_shutdown_net(struct net *net)
+ struct nfs4_delegation *dp = NULL;
+ struct list_head *pos, *next, reaplist;
+ struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+- struct nfsd4_blocked_lock *nbl;
+
+ cancel_delayed_work_sync(&nn->laundromat_work);
+ locks_end_grace(&nn->nfsd4_manager);
+@@ -7230,24 +7272,6 @@ nfs4_state_shutdown_net(struct net *net)
+ nfs4_put_stid(&dp->dl_stid);
+ }
+
+- BUG_ON(!list_empty(&reaplist));
+- spin_lock(&nn->blocked_locks_lock);
+- while (!list_empty(&nn->blocked_locks_lru)) {
+- nbl = list_first_entry(&nn->blocked_locks_lru,
+- struct nfsd4_blocked_lock, nbl_lru);
+- list_move(&nbl->nbl_lru, &reaplist);
+- list_del_init(&nbl->nbl_list);
+- }
+- spin_unlock(&nn->blocked_locks_lock);
+-
+- while (!list_empty(&reaplist)) {
+- nbl = list_first_entry(&reaplist,
+- struct nfsd4_blocked_lock, nbl_lru);
+- list_del_init(&nbl->nbl_lru);
+- posix_unblock_lock(&nbl->nbl_lock);
+- free_blocked_lock(nbl);
+- }
+-
+ nfsd4_client_tracking_exit(net);
+ nfs4_state_destroy_net(net);
+ }
+diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
+index 868e68561f91..dfc861caa478 100644
+--- a/include/asm-generic/pgtable.h
++++ b/include/asm-generic/pgtable.h
+@@ -976,6 +976,8 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot);
+ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot);
+ int pud_clear_huge(pud_t *pud);
+ int pmd_clear_huge(pmd_t *pmd);
++int pud_free_pmd_page(pud_t *pud);
++int pmd_free_pte_page(pmd_t *pmd);
+ #else /* !CONFIG_HAVE_ARCH_HUGE_VMAP */
+ static inline int p4d_set_huge(p4d_t *p4d, phys_addr_t addr, pgprot_t prot)
+ {
+@@ -1001,6 +1003,14 @@ static inline int pmd_clear_huge(pmd_t *pmd)
+ {
+ return 0;
+ }
++static inline int pud_free_pmd_page(pud_t *pud)
++{
++ return 0;
++}
++static inline int pmd_free_pte_page(pmd_t *pmd)
++{
++ return 0;
++}
+ #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */
+
+ #ifndef __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
+diff --git a/include/linux/fsl_ifc.h b/include/linux/fsl_ifc.h
+index c332f0a45607..3fdfede2f0f3 100644
+--- a/include/linux/fsl_ifc.h
++++ b/include/linux/fsl_ifc.h
+@@ -734,11 +734,7 @@ struct fsl_ifc_nand {
+ u32 res19[0x10];
+ __be32 nand_fsr;
+ u32 res20;
+- /* The V1 nand_eccstat is actually 4 words that overlaps the
+- * V2 nand_eccstat.
+- */
+- __be32 v1_nand_eccstat[2];
+- __be32 v2_nand_eccstat[6];
++ __be32 nand_eccstat[8];
+ u32 res21[0x1c];
+ __be32 nanndcr;
+ u32 res22[0x2];
+diff --git a/include/linux/memblock.h b/include/linux/memblock.h
+index 7ed0f7782d16..9efd592c5da4 100644
+--- a/include/linux/memblock.h
++++ b/include/linux/memblock.h
+@@ -187,7 +187,6 @@ int memblock_search_pfn_nid(unsigned long pfn, unsigned long *start_pfn,
+ unsigned long *end_pfn);
+ void __next_mem_pfn_range(int *idx, int nid, unsigned long *out_start_pfn,
+ unsigned long *out_end_pfn, int *out_nid);
+-unsigned long memblock_next_valid_pfn(unsigned long pfn, unsigned long max_pfn);
+
+ /**
+ * for_each_mem_pfn_range - early memory pfn range iterator
+diff --git a/include/trace/events/mmc.h b/include/trace/events/mmc.h
+index 200f731be557..7b706ff21335 100644
+--- a/include/trace/events/mmc.h
++++ b/include/trace/events/mmc.h
+@@ -86,8 +86,8 @@ TRACE_EVENT(mmc_request_start,
+ __entry->stop_flags, __entry->stop_retries,
+ __entry->sbc_opcode, __entry->sbc_arg,
+ __entry->sbc_flags, __entry->sbc_retries,
+- __entry->blocks, __entry->blk_addr,
+- __entry->blksz, __entry->data_flags, __entry->tag,
++ __entry->blocks, __entry->blksz,
++ __entry->blk_addr, __entry->data_flags, __entry->tag,
+ __entry->can_retune, __entry->doing_retune,
+ __entry->retune_now, __entry->need_retune,
+ __entry->hold_retune, __entry->retune_period)
+diff --git a/include/uapi/linux/usb/audio.h b/include/uapi/linux/usb/audio.h
+index 17a022c5b414..da3315ed1bcd 100644
+--- a/include/uapi/linux/usb/audio.h
++++ b/include/uapi/linux/usb/audio.h
+@@ -370,7 +370,7 @@ static inline __u8 uac_processing_unit_bControlSize(struct uac_processing_unit_d
+ {
+ return (protocol == UAC_VERSION_1) ?
+ desc->baSourceID[desc->bNrInPins + 4] :
+- desc->baSourceID[desc->bNrInPins + 6];
++ 2; /* in UAC2, this value is constant */
+ }
+
+ static inline __u8 *uac_processing_unit_bmControls(struct uac_processing_unit_descriptor *desc,
+@@ -378,7 +378,7 @@ static inline __u8 *uac_processing_unit_bmControls(struct uac_processing_unit_de
+ {
+ return (protocol == UAC_VERSION_1) ?
+ &desc->baSourceID[desc->bNrInPins + 5] :
+- &desc->baSourceID[desc->bNrInPins + 7];
++ &desc->baSourceID[desc->bNrInPins + 6];
+ }
+
+ static inline __u8 uac_processing_unit_iProcessing(struct uac_processing_unit_descriptor *desc,
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 5cb783fc8224..b719351b48d2 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1687,7 +1687,7 @@ SYSCALL_DEFINE3(bpf, int, cmd, union bpf_attr __user *, uattr, unsigned int, siz
+ union bpf_attr attr = {};
+ int err;
+
+- if (!capable(CAP_SYS_ADMIN) && sysctl_unprivileged_bpf_disabled)
++ if (sysctl_unprivileged_bpf_disabled && !capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ err = check_uarg_tail_zero(uattr, sizeof(attr), size);
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 7e4c44538119..2522fac782af 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -3183,6 +3183,16 @@ static int cgroup_enable_threaded(struct cgroup *cgrp)
+ if (cgroup_is_threaded(cgrp))
+ return 0;
+
++ /*
++ * If @cgroup is populated or has domain controllers enabled, it
++ * can't be switched. While the below cgroup_can_be_thread_root()
++ * test can catch the same conditions, that's only when @parent is
++ * not mixable, so let's check it explicitly.
++ */
++ if (cgroup_is_populated(cgrp) ||
++ cgrp->subtree_control & ~cgrp_dfl_threaded_ss_mask)
++ return -EOPNOTSUPP;
++
+ /* we're joining the parent's domain, ensure its validity */
+ if (!cgroup_is_valid_domain(dom_cgrp) ||
+ !cgroup_can_be_thread_root(dom_cgrp))
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 5d8f4031f8d5..385480a5aa45 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -2246,7 +2246,7 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
+ struct perf_event_context *task_ctx,
+ enum event_type_t event_type)
+ {
+- enum event_type_t ctx_event_type = event_type & EVENT_ALL;
++ enum event_type_t ctx_event_type;
+ bool cpu_event = !!(event_type & EVENT_CPU);
+
+ /*
+@@ -2256,6 +2256,8 @@ static void ctx_resched(struct perf_cpu_context *cpuctx,
+ if (event_type & EVENT_PINNED)
+ event_type |= EVENT_FLEXIBLE;
+
++ ctx_event_type = event_type & EVENT_ALL;
++
+ perf_pmu_disable(cpuctx->ctx.pmu);
+ if (task_ctx)
+ task_ctx_sched_out(cpuctx, task_ctx, event_type);
+diff --git a/kernel/module.c b/kernel/module.c
+index 09e48eee4d55..0c4530763f02 100644
+--- a/kernel/module.c
++++ b/kernel/module.c
+@@ -4223,7 +4223,7 @@ static int modules_open(struct inode *inode, struct file *file)
+ m->private = kallsyms_show_value() ? NULL : (void *)8ul;
+ }
+
+- return 0;
++ return err;
+ }
+
+ static const struct file_operations proc_modules_operations = {
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 5a31a85bbd84..b4f7f890c1b9 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -6611,13 +6611,18 @@ static int tg_cfs_schedulable_down(struct task_group *tg, void *data)
+ parent_quota = parent_b->hierarchical_quota;
+
+ /*
+- * Ensure max(child_quota) <= parent_quota, inherit when no
++ * Ensure max(child_quota) <= parent_quota. On cgroup2,
++ * always take the min. On cgroup1, only inherit when no
+ * limit is set:
+ */
+- if (quota == RUNTIME_INF)
+- quota = parent_quota;
+- else if (parent_quota != RUNTIME_INF && quota > parent_quota)
+- return -EINVAL;
++ if (cgroup_subsys_on_dfl(cpu_cgrp_subsys)) {
++ quota = min(quota, parent_quota);
++ } else {
++ if (quota == RUNTIME_INF)
++ quota = parent_quota;
++ else if (parent_quota != RUNTIME_INF && quota > parent_quota)
++ return -EINVAL;
++ }
+ }
+ cfs_b->hierarchical_quota = quota;
+
+diff --git a/kernel/time/posix-timers.c b/kernel/time/posix-timers.c
+index ec999f32c840..708992708332 100644
+--- a/kernel/time/posix-timers.c
++++ b/kernel/time/posix-timers.c
+@@ -50,6 +50,7 @@
+ #include <linux/export.h>
+ #include <linux/hashtable.h>
+ #include <linux/compat.h>
++#include <linux/nospec.h>
+
+ #include "timekeeping.h"
+ #include "posix-timers.h"
+@@ -1346,11 +1347,15 @@ static const struct k_clock * const posix_clocks[] = {
+
+ static const struct k_clock *clockid_to_kclock(const clockid_t id)
+ {
+- if (id < 0)
++ clockid_t idx = id;
++
++ if (id < 0) {
+ return (id & CLOCKFD_MASK) == CLOCKFD ?
+ &clock_posix_dynamic : &clock_posix_cpu;
++ }
+
+- if (id >= ARRAY_SIZE(posix_clocks) || !posix_clocks[id])
++ if (id >= ARRAY_SIZE(posix_clocks))
+ return NULL;
+- return posix_clocks[id];
++
++ return posix_clocks[array_index_nospec(idx, ARRAY_SIZE(posix_clocks))];
+ }
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 40207c2a4113..fe2429b382ca 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -636,7 +636,41 @@ static const struct bpf_func_proto bpf_get_stackid_proto_tp = {
+ .arg3_type = ARG_ANYTHING,
+ };
+
+-BPF_CALL_3(bpf_perf_prog_read_value_tp, struct bpf_perf_event_data_kern *, ctx,
++static const struct bpf_func_proto *tp_prog_func_proto(enum bpf_func_id func_id)
++{
++ switch (func_id) {
++ case BPF_FUNC_perf_event_output:
++ return &bpf_perf_event_output_proto_tp;
++ case BPF_FUNC_get_stackid:
++ return &bpf_get_stackid_proto_tp;
++ default:
++ return tracing_func_proto(func_id);
++ }
++}
++
++static bool tp_prog_is_valid_access(int off, int size, enum bpf_access_type type,
++ struct bpf_insn_access_aux *info)
++{
++ if (off < sizeof(void *) || off >= PERF_MAX_TRACE_SIZE)
++ return false;
++ if (type != BPF_READ)
++ return false;
++ if (off % size != 0)
++ return false;
++
++ BUILD_BUG_ON(PERF_MAX_TRACE_SIZE % sizeof(__u64));
++ return true;
++}
++
++const struct bpf_verifier_ops tracepoint_verifier_ops = {
++ .get_func_proto = tp_prog_func_proto,
++ .is_valid_access = tp_prog_is_valid_access,
++};
++
++const struct bpf_prog_ops tracepoint_prog_ops = {
++};
++
++BPF_CALL_3(bpf_perf_prog_read_value, struct bpf_perf_event_data_kern *, ctx,
+ struct bpf_perf_event_value *, buf, u32, size)
+ {
+ int err = -EINVAL;
+@@ -653,8 +687,8 @@ BPF_CALL_3(bpf_perf_prog_read_value_tp, struct bpf_perf_event_data_kern *, ctx,
+ return err;
+ }
+
+-static const struct bpf_func_proto bpf_perf_prog_read_value_proto_tp = {
+- .func = bpf_perf_prog_read_value_tp,
++static const struct bpf_func_proto bpf_perf_prog_read_value_proto = {
++ .func = bpf_perf_prog_read_value,
+ .gpl_only = true,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_CTX,
+@@ -662,7 +696,7 @@ static const struct bpf_func_proto bpf_perf_prog_read_value_proto_tp = {
+ .arg3_type = ARG_CONST_SIZE,
+ };
+
+-static const struct bpf_func_proto *tp_prog_func_proto(enum bpf_func_id func_id)
++static const struct bpf_func_proto *pe_prog_func_proto(enum bpf_func_id func_id)
+ {
+ switch (func_id) {
+ case BPF_FUNC_perf_event_output:
+@@ -670,34 +704,12 @@ static const struct bpf_func_proto *tp_prog_func_proto(enum bpf_func_id func_id)
+ case BPF_FUNC_get_stackid:
+ return &bpf_get_stackid_proto_tp;
+ case BPF_FUNC_perf_prog_read_value:
+- return &bpf_perf_prog_read_value_proto_tp;
++ return &bpf_perf_prog_read_value_proto;
+ default:
+ return tracing_func_proto(func_id);
+ }
+ }
+
+-static bool tp_prog_is_valid_access(int off, int size, enum bpf_access_type type,
+- struct bpf_insn_access_aux *info)
+-{
+- if (off < sizeof(void *) || off >= PERF_MAX_TRACE_SIZE)
+- return false;
+- if (type != BPF_READ)
+- return false;
+- if (off % size != 0)
+- return false;
+-
+- BUILD_BUG_ON(PERF_MAX_TRACE_SIZE % sizeof(__u64));
+- return true;
+-}
+-
+-const struct bpf_verifier_ops tracepoint_verifier_ops = {
+- .get_func_proto = tp_prog_func_proto,
+- .is_valid_access = tp_prog_is_valid_access,
+-};
+-
+-const struct bpf_prog_ops tracepoint_prog_ops = {
+-};
+-
+ static bool pe_prog_is_valid_access(int off, int size, enum bpf_access_type type,
+ struct bpf_insn_access_aux *info)
+ {
+@@ -754,7 +766,7 @@ static u32 pe_prog_convert_ctx_access(enum bpf_access_type type,
+ }
+
+ const struct bpf_verifier_ops perf_event_verifier_ops = {
+- .get_func_proto = tp_prog_func_proto,
++ .get_func_proto = pe_prog_func_proto,
+ .is_valid_access = pe_prog_is_valid_access,
+ .convert_ctx_access = pe_prog_convert_ctx_access,
+ };
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 492700c5fb4d..fccf00a66298 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -635,7 +635,7 @@ static int create_trace_kprobe(int argc, char **argv)
+ char *symbol = NULL, *event = NULL, *group = NULL;
+ int maxactive = 0;
+ char *arg;
+- unsigned long offset = 0;
++ long offset = 0;
+ void *addr = NULL;
+ char buf[MAX_EVENT_NAME_LEN];
+
+@@ -723,7 +723,7 @@ static int create_trace_kprobe(int argc, char **argv)
+ symbol = argv[1];
+ /* TODO: support .init module functions */
+ ret = traceprobe_split_symbol_offset(symbol, &offset);
+- if (ret) {
++ if (ret || offset < 0 || offset > UINT_MAX) {
+ pr_info("Failed to parse either an address or a symbol.\n");
+ return ret;
+ }
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index d59357308677..daf54bda4dc8 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -320,7 +320,7 @@ static fetch_func_t get_fetch_size_function(const struct fetch_type *type,
+ }
+
+ /* Split symbol and offset. */
+-int traceprobe_split_symbol_offset(char *symbol, unsigned long *offset)
++int traceprobe_split_symbol_offset(char *symbol, long *offset)
+ {
+ char *tmp;
+ int ret;
+@@ -328,13 +328,11 @@ int traceprobe_split_symbol_offset(char *symbol, unsigned long *offset)
+ if (!offset)
+ return -EINVAL;
+
+- tmp = strchr(symbol, '+');
++ tmp = strpbrk(symbol, "+-");
+ if (tmp) {
+- /* skip sign because kstrtoul doesn't accept '+' */
+- ret = kstrtoul(tmp + 1, 0, offset);
++ ret = kstrtol(tmp, 0, offset);
+ if (ret)
+ return ret;
+-
+ *tmp = '\0';
+ } else
+ *offset = 0;
+diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
+index fb66e3eaa192..a0d750e3d17c 100644
+--- a/kernel/trace/trace_probe.h
++++ b/kernel/trace/trace_probe.h
+@@ -353,7 +353,7 @@ extern int traceprobe_conflict_field_name(const char *name,
+ extern void traceprobe_update_arg(struct probe_arg *arg);
+ extern void traceprobe_free_probe_arg(struct probe_arg *arg);
+
+-extern int traceprobe_split_symbol_offset(char *symbol, unsigned long *offset);
++extern int traceprobe_split_symbol_offset(char *symbol, long *offset);
+
+ /* Sum up total data length for dynamic arraies (strings) */
+ static nokprobe_inline int
+diff --git a/lib/ioremap.c b/lib/ioremap.c
+index b808a390e4c3..54e5bbaa3200 100644
+--- a/lib/ioremap.c
++++ b/lib/ioremap.c
+@@ -91,7 +91,8 @@ static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
+
+ if (ioremap_pmd_enabled() &&
+ ((next - addr) == PMD_SIZE) &&
+- IS_ALIGNED(phys_addr + addr, PMD_SIZE)) {
++ IS_ALIGNED(phys_addr + addr, PMD_SIZE) &&
++ pmd_free_pte_page(pmd)) {
+ if (pmd_set_huge(pmd, phys_addr + addr, prot))
+ continue;
+ }
+@@ -117,7 +118,8 @@ static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
+
+ if (ioremap_pud_enabled() &&
+ ((next - addr) == PUD_SIZE) &&
+- IS_ALIGNED(phys_addr + addr, PUD_SIZE)) {
++ IS_ALIGNED(phys_addr + addr, PUD_SIZE) &&
++ pud_free_pmd_page(pud)) {
+ if (pud_set_huge(pud, phys_addr + addr, prot))
+ continue;
+ }
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 0e7ded98d114..4ed6c89e95c3 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2791,11 +2791,13 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
+
+ list_for_each_safe(pos, next, &list) {
+ page = list_entry((void *)pos, struct page, mapping);
+- lock_page(page);
++ if (!trylock_page(page))
++ goto next;
+ /* split_huge_page() removes page from list on success */
+ if (!split_huge_page(page))
+ split++;
+ unlock_page(page);
++next:
+ put_page(page);
+ }
+
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 9a334f5fb730..d01912be98da 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -18,6 +18,7 @@
+ #include <linux/bootmem.h>
+ #include <linux/sysfs.h>
+ #include <linux/slab.h>
++#include <linux/mmdebug.h>
+ #include <linux/sched/signal.h>
+ #include <linux/rmap.h>
+ #include <linux/string_helpers.h>
+@@ -4354,6 +4355,12 @@ int hugetlb_reserve_pages(struct inode *inode,
+ struct resv_map *resv_map;
+ long gbl_reserve;
+
++ /* This should never happen */
++ if (from > to) {
++ VM_WARN(1, "%s called with a negative range\n", __func__);
++ return -EINVAL;
++ }
++
+ /*
+ * Only apply hugepage reservation if asked. At fault time, an
+ * attempt will be made for VM_NORESERVE to allocate a page
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index ea4ff259b671..33255bf91074 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -530,7 +530,12 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
+ goto out;
+ }
+
+- VM_BUG_ON_PAGE(PageCompound(page), page);
++ /* TODO: teach khugepaged to collapse THP mapped with pte */
++ if (PageCompound(page)) {
++ result = SCAN_PAGE_COMPOUND;
++ goto out;
++ }
++
+ VM_BUG_ON_PAGE(!PageAnon(page), page);
+
+ /*
+diff --git a/mm/memblock.c b/mm/memblock.c
+index d25b5a456cca..028429d5bc38 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -1101,34 +1101,6 @@ void __init_memblock __next_mem_pfn_range(int *idx, int nid,
+ *out_nid = r->nid;
+ }
+
+-unsigned long __init_memblock memblock_next_valid_pfn(unsigned long pfn,
+- unsigned long max_pfn)
+-{
+- struct memblock_type *type = &memblock.memory;
+- unsigned int right = type->cnt;
+- unsigned int mid, left = 0;
+- phys_addr_t addr = PFN_PHYS(++pfn);
+-
+- do {
+- mid = (right + left) / 2;
+-
+- if (addr < type->regions[mid].base)
+- right = mid;
+- else if (addr >= (type->regions[mid].base +
+- type->regions[mid].size))
+- left = mid + 1;
+- else {
+- /* addr is within the region, so pfn is valid */
+- return pfn;
+- }
+- } while (left < right);
+-
+- if (right == type->cnt)
+- return -1UL;
+- else
+- return PHYS_PFN(type->regions[right].base);
+-}
+-
+ /**
+ * memblock_set_node - set node ID on memblock regions
+ * @base: base of area to set node ID for
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 9f927497f2f5..d8ee1effa4a6 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -3588,7 +3588,7 @@ static bool __need_fs_reclaim(gfp_t gfp_mask)
+ return false;
+
+ /* this guy won't enter reclaim */
+- if ((current->flags & PF_MEMALLOC) && !(gfp_mask & __GFP_NOMEMALLOC))
++ if (current->flags & PF_MEMALLOC)
+ return false;
+
+ /* We're only interested __GFP_FS allocations for now */
+@@ -5348,17 +5348,8 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
+ if (context != MEMMAP_EARLY)
+ goto not_early;
+
+- if (!early_pfn_valid(pfn)) {
+-#ifdef CONFIG_HAVE_MEMBLOCK_NODE_MAP
+- /*
+- * Skip to the pfn preceding the next valid one (or
+- * end_pfn), such that we hit a valid pfn (or end_pfn)
+- * on our next iteration of the loop.
+- */
+- pfn = memblock_next_valid_pfn(pfn, end_pfn) - 1;
+-#endif
++ if (!early_pfn_valid(pfn))
+ continue;
+- }
+ if (!early_pfn_in_nid(pfn, nid))
+ continue;
+ if (!update_defer_init(pgdat, pfn, end_pfn, &nr_initialised))
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 7fbe67be86fa..29a369c2067f 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -493,36 +493,45 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
+ info = list_entry(pos, struct shmem_inode_info, shrinklist);
+ inode = &info->vfs_inode;
+
+- if (nr_to_split && split >= nr_to_split) {
+- iput(inode);
+- continue;
+- }
++ if (nr_to_split && split >= nr_to_split)
++ goto leave;
+
+- page = find_lock_page(inode->i_mapping,
++ page = find_get_page(inode->i_mapping,
+ (inode->i_size & HPAGE_PMD_MASK) >> PAGE_SHIFT);
+ if (!page)
+ goto drop;
+
++ /* No huge page at the end of the file: nothing to split */
+ if (!PageTransHuge(page)) {
+- unlock_page(page);
+ put_page(page);
+ goto drop;
+ }
+
++ /*
++ * Leave the inode on the list if we failed to lock
++ * the page at this time.
++ *
++ * Waiting for the lock may lead to deadlock in the
++ * reclaim path.
++ */
++ if (!trylock_page(page)) {
++ put_page(page);
++ goto leave;
++ }
++
+ ret = split_huge_page(page);
+ unlock_page(page);
+ put_page(page);
+
+- if (ret) {
+- /* split failed: leave it on the list */
+- iput(inode);
+- continue;
+- }
++ /* If split failed leave the inode on the list */
++ if (ret)
++ goto leave;
+
+ split++;
+ drop:
+ list_del_init(&info->shrinklist);
+ removed++;
++leave:
+ iput(inode);
+ }
+
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 47d5ced51f2d..503fa224c6b6 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1846,6 +1846,20 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
+ if (stat.nr_writeback && stat.nr_writeback == nr_taken)
+ set_bit(PGDAT_WRITEBACK, &pgdat->flags);
+
++ /*
++ * If dirty pages are scanned that are not queued for IO, it
++ * implies that flushers are not doing their job. This can
++ * happen when memory pressure pushes dirty pages to the end of
++ * the LRU before the dirty limits are breached and the dirty
++ * data has expired. It can also happen when the proportion of
++ * dirty pages grows not through writes but through memory
++ * pressure reclaiming all the clean cache. And in some cases,
++ * the flushers simply cannot keep up with the allocation
++ * rate. Nudge the flusher threads in case they are asleep.
++ */
++ if (stat.nr_unqueued_dirty == nr_taken)
++ wakeup_flusher_threads(WB_REASON_VMSCAN);
++
+ /*
+ * Legacy memcg will stall in page writeback so avoid forcibly
+ * stalling here.
+@@ -1858,22 +1872,9 @@ shrink_inactive_list(unsigned long nr_to_scan, struct lruvec *lruvec,
+ if (stat.nr_dirty && stat.nr_dirty == stat.nr_congested)
+ set_bit(PGDAT_CONGESTED, &pgdat->flags);
+
+- /*
+- * If dirty pages are scanned that are not queued for IO, it
+- * implies that flushers are not doing their job. This can
+- * happen when memory pressure pushes dirty pages to the end of
+- * the LRU before the dirty limits are breached and the dirty
+- * data has expired. It can also happen when the proportion of
+- * dirty pages grows not through writes but through memory
+- * pressure reclaiming all the clean cache. And in some cases,
+- * the flushers simply cannot keep up with the allocation
+- * rate. Nudge the flusher threads in case they are asleep, but
+- * also allow kswapd to start writing pages during reclaim.
+- */
+- if (stat.nr_unqueued_dirty == nr_taken) {
+- wakeup_flusher_threads(WB_REASON_VMSCAN);
++ /* Allow kswapd to start writing pages during reclaim. */
++ if (stat.nr_unqueued_dirty == nr_taken)
+ set_bit(PGDAT_DIRTY, &pgdat->flags);
+- }
+
+ /*
+ * If kswapd scans pages marked marked for immediate
+diff --git a/sound/drivers/aloop.c b/sound/drivers/aloop.c
+index 0333143a1fa7..1063a4377502 100644
+--- a/sound/drivers/aloop.c
++++ b/sound/drivers/aloop.c
+@@ -192,6 +192,11 @@ static inline void loopback_timer_stop(struct loopback_pcm *dpcm)
+ dpcm->timer.expires = 0;
+ }
+
++static inline void loopback_timer_stop_sync(struct loopback_pcm *dpcm)
++{
++ del_timer_sync(&dpcm->timer);
++}
++
+ #define CABLE_VALID_PLAYBACK (1 << SNDRV_PCM_STREAM_PLAYBACK)
+ #define CABLE_VALID_CAPTURE (1 << SNDRV_PCM_STREAM_CAPTURE)
+ #define CABLE_VALID_BOTH (CABLE_VALID_PLAYBACK|CABLE_VALID_CAPTURE)
+@@ -326,6 +331,8 @@ static int loopback_prepare(struct snd_pcm_substream *substream)
+ struct loopback_cable *cable = dpcm->cable;
+ int bps, salign;
+
++ loopback_timer_stop_sync(dpcm);
++
+ salign = (snd_pcm_format_width(runtime->format) *
+ runtime->channels) / 8;
+ bps = salign * runtime->rate;
+@@ -659,7 +666,9 @@ static void free_cable(struct snd_pcm_substream *substream)
+ return;
+ if (cable->streams[!substream->stream]) {
+ /* other stream is still alive */
++ spin_lock_irq(&cable->lock);
+ cable->streams[substream->stream] = NULL;
++ spin_unlock_irq(&cable->lock);
+ } else {
+ /* free the cable */
+ loopback->cables[substream->number][dev] = NULL;
+@@ -698,7 +707,6 @@ static int loopback_open(struct snd_pcm_substream *substream)
+ loopback->cables[substream->number][dev] = cable;
+ }
+ dpcm->cable = cable;
+- cable->streams[substream->stream] = dpcm;
+
+ snd_pcm_hw_constraint_integer(runtime, SNDRV_PCM_HW_PARAM_PERIODS);
+
+@@ -730,6 +738,11 @@ static int loopback_open(struct snd_pcm_substream *substream)
+ runtime->hw = loopback_pcm_hardware;
+ else
+ runtime->hw = cable->hw;
++
++ spin_lock_irq(&cable->lock);
++ cable->streams[substream->stream] = dpcm;
++ spin_unlock_irq(&cable->lock);
++
+ unlock:
+ if (err < 0) {
+ free_cable(substream);
+@@ -744,7 +757,7 @@ static int loopback_close(struct snd_pcm_substream *substream)
+ struct loopback *loopback = substream->private_data;
+ struct loopback_pcm *dpcm = substream->runtime->private_data;
+
+- loopback_timer_stop(dpcm);
++ loopback_timer_stop_sync(dpcm);
+ mutex_lock(&loopback->cable_lock);
+ free_cable(substream);
+ mutex_unlock(&loopback->cable_lock);
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index d5017adf9feb..c507c69029e3 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -375,6 +375,7 @@ enum {
+ ((pci)->device == 0x160c))
+
+ #define IS_BXT(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0x5a98)
++#define IS_CFL(pci) ((pci)->vendor == 0x8086 && (pci)->device == 0xa348)
+
+ static char *driver_short_names[] = {
+ [AZX_DRIVER_ICH] = "HDA Intel",
+@@ -1744,6 +1745,10 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci,
+ else
+ chip->bdl_pos_adj = bdl_pos_adj[dev];
+
++ /* Workaround for a communication error on CFL (bko#199007) */
++ if (IS_CFL(pci))
++ chip->polling_mode = 1;
++
+ err = azx_bus_init(chip, model[dev], &pci_hda_io_ops);
+ if (err < 0) {
+ kfree(hda);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 18bab5ffbe4a..206774703a33 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3130,6 +3130,8 @@ static void alc256_init(struct hda_codec *codec)
+
+ alc_update_coef_idx(codec, 0x46, 3 << 12, 0);
+ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
++ alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */
++ alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15);
+ }
+
+ static void alc256_shutup(struct hda_codec *codec)
+@@ -3509,8 +3511,12 @@ static void alc269_fixup_mic_mute_hook(void *private_data, int enabled)
+ pinval = snd_hda_codec_get_pin_target(codec, spec->mute_led_nid);
+ pinval &= ~AC_PINCTL_VREFEN;
+ pinval |= enabled ? AC_PINCTL_VREF_HIZ : AC_PINCTL_VREF_80;
+- if (spec->mute_led_nid)
++ if (spec->mute_led_nid) {
++ /* temporarily power up/down for setting VREF */
++ snd_hda_power_up_pm(codec);
+ snd_hda_set_pin_ctl_cache(codec, spec->mute_led_nid, pinval);
++ snd_hda_power_down_pm(codec);
++ }
+ }
+
+ /* Make sure the led works even in runtime suspend */
+@@ -5375,6 +5381,7 @@ enum {
+ ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
+ ALC298_FIXUP_TPT470_DOCK,
+ ALC255_FIXUP_DUMMY_LINEOUT_VERB,
++ ALC255_FIXUP_DELL_HEADSET_MIC,
+ };
+
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -6235,6 +6242,13 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE
+ },
++ [ALC255_FIXUP_DELL_HEADSET_MIC] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */
++ { }
++ },
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -6289,6 +6303,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x082a, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE),
+ SND_PCI_QUIRK(0x1028, 0x084b, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+ SND_PCI_QUIRK(0x1028, 0x084e, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
++ SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1028, 0x0872, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1028, 0x0873, "Dell Precision 3930", ALC255_FIXUP_DUMMY_LINEOUT_VERB),
+ SND_PCI_QUIRK(0x1028, 0x164a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x164b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+@@ -7032,6 +7048,8 @@ static int patch_alc269(struct hda_codec *codec)
+ break;
+ case 0x10ec0257:
+ spec->codec_variant = ALC269_TYPE_ALC257;
++ spec->shutup = alc256_shutup;
++ spec->init_hook = alc256_init;
+ spec->gen.mixer_nid = 0;
+ break;
+ case 0x10ec0215:
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 59af5a8419e2..00be6e8c35a2 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -967,7 +967,7 @@ static void print_metric_csv(void *ctx,
+ char buf[64], *vals, *ends;
+
+ if (unit == NULL || fmt == NULL) {
+- fprintf(out, "%s%s%s%s", csv_sep, csv_sep, csv_sep, csv_sep);
++ fprintf(out, "%s%s", csv_sep, csv_sep);
+ return;
+ }
+ snprintf(buf, sizeof(buf), fmt, val);
+diff --git a/tools/testing/selftests/x86/ptrace_syscall.c b/tools/testing/selftests/x86/ptrace_syscall.c
+index 1ae1c5a7392e..6f22238f3217 100644
+--- a/tools/testing/selftests/x86/ptrace_syscall.c
++++ b/tools/testing/selftests/x86/ptrace_syscall.c
+@@ -183,8 +183,10 @@ static void test_ptrace_syscall_restart(void)
+ if (ptrace(PTRACE_TRACEME, 0, 0, 0) != 0)
+ err(1, "PTRACE_TRACEME");
+
++ pid_t pid = getpid(), tid = syscall(SYS_gettid);
++
+ printf("\tChild will make one syscall\n");
+- raise(SIGSTOP);
++ syscall(SYS_tgkill, pid, tid, SIGSTOP);
+
+ syscall(SYS_gettid, 10, 11, 12, 13, 14, 15);
+ _exit(0);
+@@ -301,9 +303,11 @@ static void test_restart_under_ptrace(void)
+ if (ptrace(PTRACE_TRACEME, 0, 0, 0) != 0)
+ err(1, "PTRACE_TRACEME");
+
++ pid_t pid = getpid(), tid = syscall(SYS_gettid);
++
+ printf("\tChild will take a nap until signaled\n");
+ setsigign(SIGUSR1, SA_RESTART);
+- raise(SIGSTOP);
++ syscall(SYS_tgkill, pid, tid, SIGSTOP);
+
+ syscall(SYS_pause, 0, 0, 0, 0, 0, 0);
+ _exit(0);
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-03-31 22:20 Mike Pagano
From: Mike Pagano @ 2018-03-31 22:20 UTC
To: gentoo-commits
commit: b2dfe994a979d5ace0f18e467c1e82bfa4d3ab30
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 31 22:19:50 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Mar 31 22:19:50 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b2dfe994
Linux patch 4.15.15
0000_README | 4 +
1014_linux-4.15.15.patch | 1683 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1687 insertions(+)
diff --git a/0000_README b/0000_README
index f4d8a80..f1a4ce6 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch: 1013_linux-4.15.14.patch
From: http://www.kernel.org
Desc: Linux 4.15.14
+Patch: 1014_linux-4.15.15.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.15
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1014_linux-4.15.15.patch b/1014_linux-4.15.15.patch
new file mode 100644
index 0000000..ab1089f
--- /dev/null
+++ b/1014_linux-4.15.15.patch
@@ -0,0 +1,1683 @@
+diff --git a/Makefile b/Makefile
+index a5e561900daf..20c9b7bfeed4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/drivers/net/ethernet/arc/emac_rockchip.c b/drivers/net/ethernet/arc/emac_rockchip.c
+index 16f9bee992fe..0f6576802607 100644
+--- a/drivers/net/ethernet/arc/emac_rockchip.c
++++ b/drivers/net/ethernet/arc/emac_rockchip.c
+@@ -169,8 +169,10 @@ static int emac_rockchip_probe(struct platform_device *pdev)
+ /* Optional regulator for PHY */
+ priv->regulator = devm_regulator_get_optional(dev, "phy");
+ if (IS_ERR(priv->regulator)) {
+- if (PTR_ERR(priv->regulator) == -EPROBE_DEFER)
+- return -EPROBE_DEFER;
++ if (PTR_ERR(priv->regulator) == -EPROBE_DEFER) {
++ err = -EPROBE_DEFER;
++ goto out_clk_disable;
++ }
+ dev_err(dev, "no regulator found\n");
+ priv->regulator = NULL;
+ }
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index 087f01b4dc3a..f239ef2e6f23 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -855,10 +855,12 @@ static void bcm_sysport_tx_reclaim_one(struct bcm_sysport_tx_ring *ring,
+ static unsigned int __bcm_sysport_tx_reclaim(struct bcm_sysport_priv *priv,
+ struct bcm_sysport_tx_ring *ring)
+ {
+- unsigned int c_index, last_c_index, last_tx_cn, num_tx_cbs;
+ unsigned int pkts_compl = 0, bytes_compl = 0;
+ struct net_device *ndev = priv->netdev;
++ unsigned int txbds_processed = 0;
+ struct bcm_sysport_cb *cb;
++ unsigned int txbds_ready;
++ unsigned int c_index;
+ u32 hw_ind;
+
+ /* Clear status before servicing to reduce spurious interrupts */
+@@ -871,29 +873,23 @@ static unsigned int __bcm_sysport_tx_reclaim(struct bcm_sysport_priv *priv,
+ /* Compute how many descriptors have been processed since last call */
+ hw_ind = tdma_readl(priv, TDMA_DESC_RING_PROD_CONS_INDEX(ring->index));
+ c_index = (hw_ind >> RING_CONS_INDEX_SHIFT) & RING_CONS_INDEX_MASK;
+- ring->p_index = (hw_ind & RING_PROD_INDEX_MASK);
+-
+- last_c_index = ring->c_index;
+- num_tx_cbs = ring->size;
+-
+- c_index &= (num_tx_cbs - 1);
+-
+- if (c_index >= last_c_index)
+- last_tx_cn = c_index - last_c_index;
+- else
+- last_tx_cn = num_tx_cbs - last_c_index + c_index;
++ txbds_ready = (c_index - ring->c_index) & RING_CONS_INDEX_MASK;
+
+ netif_dbg(priv, tx_done, ndev,
+- "ring=%d c_index=%d last_tx_cn=%d last_c_index=%d\n",
+- ring->index, c_index, last_tx_cn, last_c_index);
++ "ring=%d old_c_index=%u c_index=%u txbds_ready=%u\n",
++ ring->index, ring->c_index, c_index, txbds_ready);
+
+- while (last_tx_cn-- > 0) {
+- cb = ring->cbs + last_c_index;
++ while (txbds_processed < txbds_ready) {
++ cb = &ring->cbs[ring->clean_index];
+ bcm_sysport_tx_reclaim_one(ring, cb, &bytes_compl, &pkts_compl);
+
+ ring->desc_count++;
+- last_c_index++;
+- last_c_index &= (num_tx_cbs - 1);
++ txbds_processed++;
++
++ if (likely(ring->clean_index < ring->size - 1))
++ ring->clean_index++;
++ else
++ ring->clean_index = 0;
+ }
+
+ u64_stats_update_begin(&priv->syncp);
+@@ -1406,6 +1402,7 @@ static int bcm_sysport_init_tx_ring(struct bcm_sysport_priv *priv,
+ netif_tx_napi_add(priv->netdev, &ring->napi, bcm_sysport_tx_poll, 64);
+ ring->index = index;
+ ring->size = size;
++ ring->clean_index = 0;
+ ring->alloc_size = ring->size;
+ ring->desc_cpu = p;
+ ring->desc_count = ring->size;
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.h b/drivers/net/ethernet/broadcom/bcmsysport.h
+index f5a984c1c986..19c91c76e327 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.h
++++ b/drivers/net/ethernet/broadcom/bcmsysport.h
+@@ -706,7 +706,7 @@ struct bcm_sysport_tx_ring {
+ unsigned int desc_count; /* Number of descriptors */
+ unsigned int curr_desc; /* Current descriptor */
+ unsigned int c_index; /* Last consumer index */
+- unsigned int p_index; /* Current producer index */
++ unsigned int clean_index; /* Current clean index */
+ struct bcm_sysport_cb *cbs; /* Transmit control blocks */
+ struct dma_desc *desc_cpu; /* CPU view of the descriptor */
+ struct bcm_sysport_priv *priv; /* private context backpointer */
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index 7caa8da48421..e4ec32a9ca15 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -2008,7 +2008,6 @@ static inline int dpaa_xmit(struct dpaa_priv *priv,
+ }
+
+ if (unlikely(err < 0)) {
+- percpu_stats->tx_errors++;
+ percpu_stats->tx_fifo_errors++;
+ return err;
+ }
+@@ -2278,7 +2277,6 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
+ vaddr = phys_to_virt(addr);
+ prefetch(vaddr + qm_fd_get_offset(fd));
+
+- fd_format = qm_fd_get_format(fd);
+ /* The only FD types that we may receive are contig and S/G */
+ WARN_ON((fd_format != qm_fd_contig) && (fd_format != qm_fd_sg));
+
+@@ -2311,8 +2309,10 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
+
+ skb_len = skb->len;
+
+- if (unlikely(netif_receive_skb(skb) == NET_RX_DROP))
++ if (unlikely(netif_receive_skb(skb) == NET_RX_DROP)) {
++ percpu_stats->rx_dropped++;
+ return qman_cb_dqrr_consume;
++ }
+
+ percpu_stats->rx_packets++;
+ percpu_stats->rx_bytes += skb_len;
+@@ -2860,7 +2860,7 @@ static int dpaa_remove(struct platform_device *pdev)
+ struct device *dev;
+ int err;
+
+- dev = &pdev->dev;
++ dev = pdev->dev.parent;
+ net_dev = dev_get_drvdata(dev);
+
+ priv = netdev_priv(net_dev);
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index a74300a4459c..febadd39e29a 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3578,6 +3578,8 @@ fec_drv_remove(struct platform_device *pdev)
+ fec_enet_mii_remove(fep);
+ if (fep->reg_phy)
+ regulator_disable(fep->reg_phy);
++ pm_runtime_put(&pdev->dev);
++ pm_runtime_disable(&pdev->dev);
+ if (of_phy_is_fixed_link(np))
+ of_phy_deregister_fixed_link(np);
+ of_node_put(fep->phy_node);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+index 93728c694e6d..0a9adc5962fb 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+@@ -385,13 +385,13 @@ static const struct mlxsw_sp_sb_cm mlxsw_sp_sb_cms_egress[] = {
+
+ static const struct mlxsw_sp_sb_cm mlxsw_sp_cpu_port_sb_cms[] = {
+ MLXSW_SP_CPU_PORT_SB_CM,
++ MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 0),
++ MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 0),
++ MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 0),
++ MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 0),
++ MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 0),
+ MLXSW_SP_CPU_PORT_SB_CM,
+- MLXSW_SP_CPU_PORT_SB_CM,
+- MLXSW_SP_CPU_PORT_SB_CM,
+- MLXSW_SP_CPU_PORT_SB_CM,
+- MLXSW_SP_CPU_PORT_SB_CM,
+- MLXSW_SP_CPU_PORT_SB_CM,
+- MLXSW_SP_SB_CM(10000, 0, 0),
++ MLXSW_SP_SB_CM(MLXSW_PORT_MAX_MTU, 0, 0),
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+index 409041eab189..fba7f5c34b85 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_iwarp.c
+@@ -1681,6 +1681,13 @@ qed_iwarp_parse_rx_pkt(struct qed_hwfn *p_hwfn,
+ iph = (struct iphdr *)((u8 *)(ethh) + eth_hlen);
+
+ if (eth_type == ETH_P_IP) {
++ if (iph->protocol != IPPROTO_TCP) {
++ DP_NOTICE(p_hwfn,
++ "Unexpected ip protocol on ll2 %x\n",
++ iph->protocol);
++ return -EINVAL;
++ }
++
+ cm_info->local_ip[0] = ntohl(iph->daddr);
+ cm_info->remote_ip[0] = ntohl(iph->saddr);
+ cm_info->ip_version = TCP_IPV4;
+@@ -1689,6 +1696,14 @@ qed_iwarp_parse_rx_pkt(struct qed_hwfn *p_hwfn,
+ *payload_len = ntohs(iph->tot_len) - ip_hlen;
+ } else if (eth_type == ETH_P_IPV6) {
+ ip6h = (struct ipv6hdr *)iph;
++
++ if (ip6h->nexthdr != IPPROTO_TCP) {
++ DP_NOTICE(p_hwfn,
++ "Unexpected ip protocol on ll2 %x\n",
++ iph->protocol);
++ return -EINVAL;
++ }
++
+ for (i = 0; i < 4; i++) {
+ cm_info->local_ip[i] =
+ ntohl(ip6h->daddr.in6_u.u6_addr32[i]);
+@@ -1906,8 +1921,8 @@ qed_iwarp_update_fpdu_length(struct qed_hwfn *p_hwfn,
+ /* Missing lower byte is now available */
+ mpa_len = fpdu->fpdu_length | *mpa_data;
+ fpdu->fpdu_length = QED_IWARP_FPDU_LEN_WITH_PAD(mpa_len);
+- fpdu->mpa_frag_len = fpdu->fpdu_length;
+ /* one byte of hdr */
++ fpdu->mpa_frag_len = 1;
+ fpdu->incomplete_bytes = fpdu->fpdu_length - 1;
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_RDMA,
+diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
+index 8f9b3eb82137..cdcccecfc24a 100644
+--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
++++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
+@@ -2066,8 +2066,6 @@ static int qede_load(struct qede_dev *edev, enum qede_load_mode mode,
+ link_params.link_up = true;
+ edev->ops->common->set_link(edev->cdev, &link_params);
+
+- qede_rdma_dev_event_open(edev);
+-
+ edev->state = QEDE_STATE_OPEN;
+
+ DP_INFO(edev, "Ending successfully qede load\n");
+@@ -2168,12 +2166,14 @@ static void qede_link_update(void *dev, struct qed_link_output *link)
+ DP_NOTICE(edev, "Link is up\n");
+ netif_tx_start_all_queues(edev->ndev);
+ netif_carrier_on(edev->ndev);
++ qede_rdma_dev_event_open(edev);
+ }
+ } else {
+ if (netif_carrier_ok(edev->ndev)) {
+ DP_NOTICE(edev, "Link is down\n");
+ netif_tx_disable(edev->ndev);
+ netif_carrier_off(edev->ndev);
++ qede_rdma_dev_event_close(edev);
+ }
+ }
+ }
+diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
+index a1ffc3ed77f9..c08d74cd1fd2 100644
+--- a/drivers/net/ethernet/ti/cpsw.c
++++ b/drivers/net/ethernet/ti/cpsw.c
+@@ -996,7 +996,8 @@ static void _cpsw_adjust_link(struct cpsw_slave *slave,
+ /* set speed_in input in case RMII mode is used in 100Mbps */
+ if (phy->speed == 100)
+ mac_control |= BIT(15);
+- else if (phy->speed == 10)
++ /* in band mode only works in 10Mbps RGMII mode */
++ else if ((phy->speed == 10) && phy_interface_is_rgmii(phy))
+ mac_control |= BIT(18); /* In Band mode */
+
+ if (priv->rx_pause)
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
+index a0f2be81d52e..4884f6149b0a 100644
+--- a/drivers/net/macvlan.c
++++ b/drivers/net/macvlan.c
+@@ -1036,7 +1036,7 @@ static netdev_features_t macvlan_fix_features(struct net_device *dev,
+ lowerdev_features &= (features | ~NETIF_F_LRO);
+ features = netdev_increment_features(lowerdev_features, features, mask);
+ features |= ALWAYS_ON_FEATURES;
+- features &= ~NETIF_F_NETNS_LOCAL;
++ features &= (ALWAYS_ON_FEATURES | MACVLAN_FEATURES);
+
+ return features;
+ }
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 39de77a8bb63..dba6d17ad885 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -614,6 +614,91 @@ static void phy_error(struct phy_device *phydev)
+ phy_trigger_machine(phydev, false);
+ }
+
++/**
++ * phy_disable_interrupts - Disable the PHY interrupts from the PHY side
++ * @phydev: target phy_device struct
++ */
++static int phy_disable_interrupts(struct phy_device *phydev)
++{
++ int err;
++
++ /* Disable PHY interrupts */
++ err = phy_config_interrupt(phydev, PHY_INTERRUPT_DISABLED);
++ if (err)
++ goto phy_err;
++
++ /* Clear the interrupt */
++ err = phy_clear_interrupt(phydev);
++ if (err)
++ goto phy_err;
++
++ return 0;
++
++phy_err:
++ phy_error(phydev);
++
++ return err;
++}
++
++/**
++ * phy_change - Called by the phy_interrupt to handle PHY changes
++ * @phydev: phy_device struct that interrupted
++ */
++static irqreturn_t phy_change(struct phy_device *phydev)
++{
++ if (phy_interrupt_is_valid(phydev)) {
++ if (phydev->drv->did_interrupt &&
++ !phydev->drv->did_interrupt(phydev))
++ goto ignore;
++
++ if (phy_disable_interrupts(phydev))
++ goto phy_err;
++ }
++
++ mutex_lock(&phydev->lock);
++ if ((PHY_RUNNING == phydev->state) || (PHY_NOLINK == phydev->state))
++ phydev->state = PHY_CHANGELINK;
++ mutex_unlock(&phydev->lock);
++
++ if (phy_interrupt_is_valid(phydev)) {
++ atomic_dec(&phydev->irq_disable);
++ enable_irq(phydev->irq);
++
++ /* Reenable interrupts */
++ if (PHY_HALTED != phydev->state &&
++ phy_config_interrupt(phydev, PHY_INTERRUPT_ENABLED))
++ goto irq_enable_err;
++ }
++
++ /* reschedule state queue work to run as soon as possible */
++ phy_trigger_machine(phydev, true);
++ return IRQ_HANDLED;
++
++ignore:
++ atomic_dec(&phydev->irq_disable);
++ enable_irq(phydev->irq);
++ return IRQ_NONE;
++
++irq_enable_err:
++ disable_irq(phydev->irq);
++ atomic_inc(&phydev->irq_disable);
++phy_err:
++ phy_error(phydev);
++ return IRQ_NONE;
++}
++
++/**
++ * phy_change_work - Scheduled by the phy_mac_interrupt to handle PHY changes
++ * @work: work_struct that describes the work to be done
++ */
++void phy_change_work(struct work_struct *work)
++{
++ struct phy_device *phydev =
++ container_of(work, struct phy_device, phy_queue);
++
++ phy_change(phydev);
++}
++
+ /**
+ * phy_interrupt - PHY interrupt handler
+ * @irq: interrupt line
+@@ -632,9 +717,7 @@ static irqreturn_t phy_interrupt(int irq, void *phy_dat)
+ disable_irq_nosync(irq);
+ atomic_inc(&phydev->irq_disable);
+
+- phy_change(phydev);
+-
+- return IRQ_HANDLED;
++ return phy_change(phydev);
+ }
+
+ /**
+@@ -651,32 +734,6 @@ static int phy_enable_interrupts(struct phy_device *phydev)
+ return phy_config_interrupt(phydev, PHY_INTERRUPT_ENABLED);
+ }
+
+-/**
+- * phy_disable_interrupts - Disable the PHY interrupts from the PHY side
+- * @phydev: target phy_device struct
+- */
+-static int phy_disable_interrupts(struct phy_device *phydev)
+-{
+- int err;
+-
+- /* Disable PHY interrupts */
+- err = phy_config_interrupt(phydev, PHY_INTERRUPT_DISABLED);
+- if (err)
+- goto phy_err;
+-
+- /* Clear the interrupt */
+- err = phy_clear_interrupt(phydev);
+- if (err)
+- goto phy_err;
+-
+- return 0;
+-
+-phy_err:
+- phy_error(phydev);
+-
+- return err;
+-}
+-
+ /**
+ * phy_start_interrupts - request and enable interrupts for a PHY device
+ * @phydev: target phy_device struct
+@@ -727,64 +784,6 @@ int phy_stop_interrupts(struct phy_device *phydev)
+ }
+ EXPORT_SYMBOL(phy_stop_interrupts);
+
+-/**
+- * phy_change - Called by the phy_interrupt to handle PHY changes
+- * @phydev: phy_device struct that interrupted
+- */
+-void phy_change(struct phy_device *phydev)
+-{
+- if (phy_interrupt_is_valid(phydev)) {
+- if (phydev->drv->did_interrupt &&
+- !phydev->drv->did_interrupt(phydev))
+- goto ignore;
+-
+- if (phy_disable_interrupts(phydev))
+- goto phy_err;
+- }
+-
+- mutex_lock(&phydev->lock);
+- if ((PHY_RUNNING == phydev->state) || (PHY_NOLINK == phydev->state))
+- phydev->state = PHY_CHANGELINK;
+- mutex_unlock(&phydev->lock);
+-
+- if (phy_interrupt_is_valid(phydev)) {
+- atomic_dec(&phydev->irq_disable);
+- enable_irq(phydev->irq);
+-
+- /* Reenable interrupts */
+- if (PHY_HALTED != phydev->state &&
+- phy_config_interrupt(phydev, PHY_INTERRUPT_ENABLED))
+- goto irq_enable_err;
+- }
+-
+- /* reschedule state queue work to run as soon as possible */
+- phy_trigger_machine(phydev, true);
+- return;
+-
+-ignore:
+- atomic_dec(&phydev->irq_disable);
+- enable_irq(phydev->irq);
+- return;
+-
+-irq_enable_err:
+- disable_irq(phydev->irq);
+- atomic_inc(&phydev->irq_disable);
+-phy_err:
+- phy_error(phydev);
+-}
+-
+-/**
+- * phy_change_work - Scheduled by the phy_mac_interrupt to handle PHY changes
+- * @work: work_struct that describes the work to be done
+- */
+-void phy_change_work(struct work_struct *work)
+-{
+- struct phy_device *phydev =
+- container_of(work, struct phy_device, phy_queue);
+-
+- phy_change(phydev);
+-}
+-
+ /**
+ * phy_stop - Bring down the PHY link, and stop checking the status
+ * @phydev: target phy_device struct
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index d312b314825e..a1e7ea4d4b16 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -999,10 +999,17 @@ int phy_attach_direct(struct net_device *dev, struct phy_device *phydev,
+ err = sysfs_create_link(&phydev->mdio.dev.kobj, &dev->dev.kobj,
+ "attached_dev");
+ if (!err) {
+- err = sysfs_create_link(&dev->dev.kobj, &phydev->mdio.dev.kobj,
+- "phydev");
+- if (err)
+- goto error;
++ err = sysfs_create_link_nowarn(&dev->dev.kobj,
++ &phydev->mdio.dev.kobj,
++ "phydev");
++ if (err) {
++ dev_err(&dev->dev, "could not add device link to %s err %d\n",
++ kobject_name(&phydev->mdio.dev.kobj),
++ err);
++			/* non-fatal - some net drivers can use one netdevice
++			 * with more than one phy
++			 */
++ }
+
+ phydev->sysfs_links = true;
+ }
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index 9f79f9274c50..d37183aec313 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -257,7 +257,7 @@ struct ppp_net {
+ /* Prototypes. */
+ static int ppp_unattached_ioctl(struct net *net, struct ppp_file *pf,
+ struct file *file, unsigned int cmd, unsigned long arg);
+-static void ppp_xmit_process(struct ppp *ppp);
++static void ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb);
+ static void ppp_send_frame(struct ppp *ppp, struct sk_buff *skb);
+ static void ppp_push(struct ppp *ppp);
+ static void ppp_channel_push(struct channel *pch);
+@@ -513,13 +513,12 @@ static ssize_t ppp_write(struct file *file, const char __user *buf,
+ goto out;
+ }
+
+- skb_queue_tail(&pf->xq, skb);
+-
+ switch (pf->kind) {
+ case INTERFACE:
+- ppp_xmit_process(PF_TO_PPP(pf));
++ ppp_xmit_process(PF_TO_PPP(pf), skb);
+ break;
+ case CHANNEL:
++ skb_queue_tail(&pf->xq, skb);
+ ppp_channel_push(PF_TO_CHANNEL(pf));
+ break;
+ }
+@@ -1267,8 +1266,8 @@ ppp_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ put_unaligned_be16(proto, pp);
+
+ skb_scrub_packet(skb, !net_eq(ppp->ppp_net, dev_net(dev)));
+- skb_queue_tail(&ppp->file.xq, skb);
+- ppp_xmit_process(ppp);
++ ppp_xmit_process(ppp, skb);
++
+ return NETDEV_TX_OK;
+
+ outf:
+@@ -1420,13 +1419,14 @@ static void ppp_setup(struct net_device *dev)
+ */
+
+ /* Called to do any work queued up on the transmit side that can now be done */
+-static void __ppp_xmit_process(struct ppp *ppp)
++static void __ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb)
+ {
+- struct sk_buff *skb;
+-
+ ppp_xmit_lock(ppp);
+ if (!ppp->closing) {
+ ppp_push(ppp);
++
++ if (skb)
++ skb_queue_tail(&ppp->file.xq, skb);
+ while (!ppp->xmit_pending &&
+ (skb = skb_dequeue(&ppp->file.xq)))
+ ppp_send_frame(ppp, skb);
+@@ -1440,7 +1440,7 @@ static void __ppp_xmit_process(struct ppp *ppp)
+ ppp_xmit_unlock(ppp);
+ }
+
+-static void ppp_xmit_process(struct ppp *ppp)
++static void ppp_xmit_process(struct ppp *ppp, struct sk_buff *skb)
+ {
+ local_bh_disable();
+
+@@ -1448,7 +1448,7 @@ static void ppp_xmit_process(struct ppp *ppp)
+ goto err;
+
+ (*this_cpu_ptr(ppp->xmit_recursion))++;
+- __ppp_xmit_process(ppp);
++ __ppp_xmit_process(ppp, skb);
+ (*this_cpu_ptr(ppp->xmit_recursion))--;
+
+ local_bh_enable();
+@@ -1458,6 +1458,8 @@ static void ppp_xmit_process(struct ppp *ppp)
+ err:
+ local_bh_enable();
+
++ kfree_skb(skb);
++
+ if (net_ratelimit())
+ netdev_err(ppp->dev, "recursion detected\n");
+ }
+@@ -1942,7 +1944,7 @@ static void __ppp_channel_push(struct channel *pch)
+ if (skb_queue_empty(&pch->file.xq)) {
+ ppp = pch->ppp;
+ if (ppp)
+- __ppp_xmit_process(ppp);
++ __ppp_xmit_process(ppp, NULL);
+ }
+ }
+
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index a468439969df..56c701b73c12 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -2395,7 +2395,7 @@ static int team_nl_send_options_get(struct team *team, u32 portid, u32 seq,
+ if (!nlh) {
+ err = __send_and_alloc_skb(&skb, team, portid, send_func);
+ if (err)
+- goto errout;
++ return err;
+ goto send_done;
+ }
+
+@@ -2681,7 +2681,7 @@ static int team_nl_send_port_list_get(struct team *team, u32 portid, u32 seq,
+ if (!nlh) {
+ err = __send_and_alloc_skb(&skb, team, portid, send_func);
+ if (err)
+- goto errout;
++ return err;
+ goto send_done;
+ }
+
+diff --git a/drivers/s390/net/qeth_core_main.c b/drivers/s390/net/qeth_core_main.c
+index 61e9d0bca197..eeabbcf7a4e2 100644
+--- a/drivers/s390/net/qeth_core_main.c
++++ b/drivers/s390/net/qeth_core_main.c
+@@ -526,8 +526,7 @@ static inline int qeth_is_cq(struct qeth_card *card, unsigned int queue)
+ queue == card->qdio.no_in_queues - 1;
+ }
+
+-
+-static int qeth_issue_next_read(struct qeth_card *card)
++static int __qeth_issue_next_read(struct qeth_card *card)
+ {
+ int rc;
+ struct qeth_cmd_buffer *iob;
+@@ -558,6 +557,17 @@ static int qeth_issue_next_read(struct qeth_card *card)
+ return rc;
+ }
+
++static int qeth_issue_next_read(struct qeth_card *card)
++{
++ int ret;
++
++ spin_lock_irq(get_ccwdev_lock(CARD_RDEV(card)));
++ ret = __qeth_issue_next_read(card);
++ spin_unlock_irq(get_ccwdev_lock(CARD_RDEV(card)));
++
++ return ret;
++}
++
+ static struct qeth_reply *qeth_alloc_reply(struct qeth_card *card)
+ {
+ struct qeth_reply *reply;
+@@ -961,7 +971,7 @@ void qeth_clear_thread_running_bit(struct qeth_card *card, unsigned long thread)
+ spin_lock_irqsave(&card->thread_mask_lock, flags);
+ card->thread_running_mask &= ~thread;
+ spin_unlock_irqrestore(&card->thread_mask_lock, flags);
+- wake_up(&card->wait_q);
++ wake_up_all(&card->wait_q);
+ }
+ EXPORT_SYMBOL_GPL(qeth_clear_thread_running_bit);
+
+@@ -1165,6 +1175,7 @@ static void qeth_irq(struct ccw_device *cdev, unsigned long intparm,
+ }
+ rc = qeth_get_problem(cdev, irb);
+ if (rc) {
++ card->read_or_write_problem = 1;
+ qeth_clear_ipacmd_list(card);
+ qeth_schedule_recovery(card);
+ goto out;
+@@ -1183,7 +1194,7 @@ static void qeth_irq(struct ccw_device *cdev, unsigned long intparm,
+ return;
+ if (channel == &card->read &&
+ channel->state == CH_STATE_UP)
+- qeth_issue_next_read(card);
++ __qeth_issue_next_read(card);
+
+ iob = channel->iob;
+ index = channel->buf_no;
+@@ -5022,8 +5033,6 @@ static void qeth_core_free_card(struct qeth_card *card)
+ QETH_DBF_HEX(SETUP, 2, &card, sizeof(void *));
+ qeth_clean_channel(&card->read);
+ qeth_clean_channel(&card->write);
+- if (card->dev)
+- free_netdev(card->dev);
+ qeth_free_qdio_buffers(card);
+ unregister_service_level(&card->qeth_service_level);
+ kfree(card);
+diff --git a/drivers/s390/net/qeth_l2_main.c b/drivers/s390/net/qeth_l2_main.c
+index 5863ea170ff2..42d56b3bed82 100644
+--- a/drivers/s390/net/qeth_l2_main.c
++++ b/drivers/s390/net/qeth_l2_main.c
+@@ -933,8 +933,8 @@ static void qeth_l2_remove_device(struct ccwgroup_device *cgdev)
+ qeth_l2_set_offline(cgdev);
+
+ if (card->dev) {
+- netif_napi_del(&card->napi);
+ unregister_netdev(card->dev);
++ free_netdev(card->dev);
+ card->dev = NULL;
+ }
+ return;
+diff --git a/drivers/s390/net/qeth_l3_main.c b/drivers/s390/net/qeth_l3_main.c
+index 33131c594627..5287eab5c600 100644
+--- a/drivers/s390/net/qeth_l3_main.c
++++ b/drivers/s390/net/qeth_l3_main.c
+@@ -3042,8 +3042,8 @@ static void qeth_l3_remove_device(struct ccwgroup_device *cgdev)
+ qeth_l3_set_offline(cgdev);
+
+ if (card->dev) {
+- netif_napi_del(&card->napi);
+ unregister_netdev(card->dev);
++ free_netdev(card->dev);
+ card->dev = NULL;
+ }
+
+diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c
+index e4f5bb056fd2..ba3cfa8e279b 100644
+--- a/drivers/soc/fsl/qbman/qman.c
++++ b/drivers/soc/fsl/qbman/qman.c
+@@ -2443,39 +2443,21 @@ struct cgr_comp {
+ struct completion completion;
+ };
+
+-static int qman_delete_cgr_thread(void *p)
++static void qman_delete_cgr_smp_call(void *p)
+ {
+- struct cgr_comp *cgr_comp = (struct cgr_comp *)p;
+- int ret;
+-
+- ret = qman_delete_cgr(cgr_comp->cgr);
+- complete(&cgr_comp->completion);
+-
+- return ret;
++ qman_delete_cgr((struct qman_cgr *)p);
+ }
+
+ void qman_delete_cgr_safe(struct qman_cgr *cgr)
+ {
+- struct task_struct *thread;
+- struct cgr_comp cgr_comp;
+-
+ preempt_disable();
+ if (qman_cgr_cpus[cgr->cgrid] != smp_processor_id()) {
+- init_completion(&cgr_comp.completion);
+- cgr_comp.cgr = cgr;
+- thread = kthread_create(qman_delete_cgr_thread, &cgr_comp,
+- "cgr_del");
+-
+- if (IS_ERR(thread))
+- goto out;
+-
+- kthread_bind(thread, qman_cgr_cpus[cgr->cgrid]);
+- wake_up_process(thread);
+- wait_for_completion(&cgr_comp.completion);
++ smp_call_function_single(qman_cgr_cpus[cgr->cgrid],
++ qman_delete_cgr_smp_call, cgr, true);
+ preempt_enable();
+ return;
+ }
+-out:
++
+ qman_delete_cgr(cgr);
+ preempt_enable();
+ }
+diff --git a/fs/sysfs/symlink.c b/fs/sysfs/symlink.c
+index aecb15f84557..808f018fa976 100644
+--- a/fs/sysfs/symlink.c
++++ b/fs/sysfs/symlink.c
+@@ -107,6 +107,7 @@ int sysfs_create_link_nowarn(struct kobject *kobj, struct kobject *target,
+ {
+ return sysfs_do_create_link(kobj, target, name, 0);
+ }
++EXPORT_SYMBOL_GPL(sysfs_create_link_nowarn);
+
+ /**
+ * sysfs_delete_link - remove symlink in object's directory.
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 8b7fd8eeccee..cb8a9ce149de 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -755,13 +755,13 @@ struct sock_cgroup_data {
+ * updaters and return part of the previous pointer as the prioidx or
+ * classid. Such races are short-lived and the result isn't critical.
+ */
+-static inline u16 sock_cgroup_prioidx(struct sock_cgroup_data *skcd)
++static inline u16 sock_cgroup_prioidx(const struct sock_cgroup_data *skcd)
+ {
+ /* fallback to 1 which is always the ID of the root cgroup */
+ return (skcd->is_data & 1) ? skcd->prioidx : 1;
+ }
+
+-static inline u32 sock_cgroup_classid(struct sock_cgroup_data *skcd)
++static inline u32 sock_cgroup_classid(const struct sock_cgroup_data *skcd)
+ {
+ /* fallback to 0 which is the unconfigured default classid */
+ return (skcd->is_data & 1) ? skcd->classid : 0;
+diff --git a/include/linux/phy.h b/include/linux/phy.h
+index 123cd703741d..ea0cbd6d9556 100644
+--- a/include/linux/phy.h
++++ b/include/linux/phy.h
+@@ -897,7 +897,6 @@ int phy_driver_register(struct phy_driver *new_driver, struct module *owner);
+ int phy_drivers_register(struct phy_driver *new_driver, int n,
+ struct module *owner);
+ void phy_state_machine(struct work_struct *work);
+-void phy_change(struct phy_device *phydev);
+ void phy_change_work(struct work_struct *work);
+ void phy_mac_interrupt(struct phy_device *phydev, int new_link);
+ void phy_start_machine(struct phy_device *phydev);
+diff --git a/include/linux/rhashtable.h b/include/linux/rhashtable.h
+index 361c08e35dbc..7fd514f36e74 100644
+--- a/include/linux/rhashtable.h
++++ b/include/linux/rhashtable.h
+@@ -750,8 +750,10 @@ static inline void *__rhashtable_insert_fast(
+ if (!key ||
+ (params.obj_cmpfn ?
+ params.obj_cmpfn(&arg, rht_obj(ht, head)) :
+- rhashtable_compare(&arg, rht_obj(ht, head))))
++ rhashtable_compare(&arg, rht_obj(ht, head)))) {
++ pprev = &head->next;
+ continue;
++ }
+
+ data = rht_obj(ht, head);
+
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index d6ec5a5a6782..d794aebb3157 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -735,6 +735,16 @@ static inline void __qdisc_drop(struct sk_buff *skb, struct sk_buff **to_free)
+ *to_free = skb;
+ }
+
++static inline void __qdisc_drop_all(struct sk_buff *skb,
++ struct sk_buff **to_free)
++{
++ if (skb->prev)
++ skb->prev->next = *to_free;
++ else
++ skb->next = *to_free;
++ *to_free = skb;
++}
++
+ static inline unsigned int __qdisc_queue_drop_head(struct Qdisc *sch,
+ struct qdisc_skb_head *qh,
+ struct sk_buff **to_free)
+@@ -855,6 +865,15 @@ static inline int qdisc_drop(struct sk_buff *skb, struct Qdisc *sch,
+ return NET_XMIT_DROP;
+ }
+
++static inline int qdisc_drop_all(struct sk_buff *skb, struct Qdisc *sch,
++ struct sk_buff **to_free)
++{
++ __qdisc_drop_all(skb, to_free);
++ qdisc_qstats_drop(sch);
++
++ return NET_XMIT_DROP;
++}
++
+ /* Length to Time (L2T) lookup in a qdisc_rate_table, to determine how
+ long it will take to send a packet given its size.
+ */
+diff --git a/lib/rhashtable.c b/lib/rhashtable.c
+index ddd7dde87c3c..b734ce731a7a 100644
+--- a/lib/rhashtable.c
++++ b/lib/rhashtable.c
+@@ -537,8 +537,10 @@ static void *rhashtable_lookup_one(struct rhashtable *ht,
+ if (!key ||
+ (ht->p.obj_cmpfn ?
+ ht->p.obj_cmpfn(&arg, rht_obj(ht, head)) :
+- rhashtable_compare(&arg, rht_obj(ht, head))))
++ rhashtable_compare(&arg, rht_obj(ht, head)))) {
++ pprev = &head->next;
+ continue;
++ }
+
+ if (!ht->rhlist)
+ return rht_obj(ht, head);
+diff --git a/lib/test_rhashtable.c b/lib/test_rhashtable.c
+index 8e83cbdc049c..6f2e3dc44a80 100644
+--- a/lib/test_rhashtable.c
++++ b/lib/test_rhashtable.c
+@@ -79,6 +79,21 @@ struct thread_data {
+ struct test_obj *objs;
+ };
+
++static u32 my_hashfn(const void *data, u32 len, u32 seed)
++{
++ const struct test_obj_rhl *obj = data;
++
++ return (obj->value.id % 10) << RHT_HASH_RESERVED_SPACE;
++}
++
++static int my_cmpfn(struct rhashtable_compare_arg *arg, const void *obj)
++{
++ const struct test_obj_rhl *test_obj = obj;
++ const struct test_obj_val *val = arg->key;
++
++ return test_obj->value.id - val->id;
++}
++
+ static struct rhashtable_params test_rht_params = {
+ .head_offset = offsetof(struct test_obj, node),
+ .key_offset = offsetof(struct test_obj, value),
+@@ -87,6 +102,17 @@ static struct rhashtable_params test_rht_params = {
+ .nulls_base = (3U << RHT_BASE_SHIFT),
+ };
+
++static struct rhashtable_params test_rht_params_dup = {
++ .head_offset = offsetof(struct test_obj_rhl, list_node),
++ .key_offset = offsetof(struct test_obj_rhl, value),
++ .key_len = sizeof(struct test_obj_val),
++ .hashfn = jhash,
++ .obj_hashfn = my_hashfn,
++ .obj_cmpfn = my_cmpfn,
++ .nelem_hint = 128,
++ .automatic_shrinking = false,
++};
++
+ static struct semaphore prestart_sem;
+ static struct semaphore startup_sem = __SEMAPHORE_INITIALIZER(startup_sem, 0);
+
+@@ -469,6 +495,112 @@ static int __init test_rhashtable_max(struct test_obj *array,
+ return err;
+ }
+
++static unsigned int __init print_ht(struct rhltable *rhlt)
++{
++ struct rhashtable *ht;
++ const struct bucket_table *tbl;
++ char buff[512] = "";
++ unsigned int i, cnt = 0;
++
++ ht = &rhlt->ht;
++ tbl = rht_dereference(ht->tbl, ht);
++ for (i = 0; i < tbl->size; i++) {
++ struct rhash_head *pos, *next;
++ struct test_obj_rhl *p;
++
++ pos = rht_dereference(tbl->buckets[i], ht);
++ next = !rht_is_a_nulls(pos) ? rht_dereference(pos->next, ht) : NULL;
++
++ if (!rht_is_a_nulls(pos)) {
++ sprintf(buff, "%s\nbucket[%d] -> ", buff, i);
++ }
++
++ while (!rht_is_a_nulls(pos)) {
++ struct rhlist_head *list = container_of(pos, struct rhlist_head, rhead);
++ sprintf(buff, "%s[[", buff);
++ do {
++ pos = &list->rhead;
++ list = rht_dereference(list->next, ht);
++ p = rht_obj(ht, pos);
++
++ sprintf(buff, "%s val %d (tid=%d)%s", buff, p->value.id, p->value.tid,
++ list? ", " : " ");
++ cnt++;
++ } while (list);
++
++ pos = next,
++ next = !rht_is_a_nulls(pos) ?
++ rht_dereference(pos->next, ht) : NULL;
++
++ sprintf(buff, "%s]]%s", buff, !rht_is_a_nulls(pos) ? " -> " : "");
++ }
++ }
++ printk(KERN_ERR "\n---- ht: ----%s\n-------------\n", buff);
++
++ return cnt;
++}
++
++static int __init test_insert_dup(struct test_obj_rhl *rhl_test_objects,
++ int cnt, bool slow)
++{
++ struct rhltable rhlt;
++ unsigned int i, ret;
++ const char *key;
++ int err = 0;
++
++ err = rhltable_init(&rhlt, &test_rht_params_dup);
++ if (WARN_ON(err))
++ return err;
++
++ for (i = 0; i < cnt; i++) {
++ rhl_test_objects[i].value.tid = i;
++ key = rht_obj(&rhlt.ht, &rhl_test_objects[i].list_node.rhead);
++ key += test_rht_params_dup.key_offset;
++
++ if (slow) {
++ err = PTR_ERR(rhashtable_insert_slow(&rhlt.ht, key,
++ &rhl_test_objects[i].list_node.rhead));
++ if (err == -EAGAIN)
++ err = 0;
++ } else
++ err = rhltable_insert(&rhlt,
++ &rhl_test_objects[i].list_node,
++ test_rht_params_dup);
++ if (WARN(err, "error %d on element %d/%d (%s)\n", err, i, cnt, slow? "slow" : "fast"))
++ goto skip_print;
++ }
++
++ ret = print_ht(&rhlt);
++ WARN(ret != cnt, "missing rhltable elements (%d != %d, %s)\n", ret, cnt, slow? "slow" : "fast");
++
++skip_print:
++ rhltable_destroy(&rhlt);
++
++ return 0;
++}
++
++static int __init test_insert_duplicates_run(void)
++{
++ struct test_obj_rhl rhl_test_objects[3] = {};
++
++ pr_info("test inserting duplicates\n");
++
++ /* two different values that map to same bucket */
++ rhl_test_objects[0].value.id = 1;
++ rhl_test_objects[1].value.id = 21;
++
++ /* and another duplicate with same as [0] value
++ * which will be second on the bucket list */
++ rhl_test_objects[2].value.id = rhl_test_objects[0].value.id;
++
++ test_insert_dup(rhl_test_objects, 2, false);
++ test_insert_dup(rhl_test_objects, 3, false);
++ test_insert_dup(rhl_test_objects, 2, true);
++ test_insert_dup(rhl_test_objects, 3, true);
++
++ return 0;
++}
++
+ static int thread_lookup_test(struct thread_data *tdata)
+ {
+ unsigned int entries = tdata->entries;
+@@ -617,6 +749,8 @@ static int __init test_rht_init(void)
+ do_div(total_time, runs);
+ pr_info("Average test time: %llu\n", total_time);
+
++ test_insert_duplicates_run();
++
+ if (!tcount)
+ return 0;
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index a2a89acd0de8..f3fbd10a0632 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3247,15 +3247,23 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
+ #if IS_ENABLED(CONFIG_CGROUP_NET_PRIO)
+ static void skb_update_prio(struct sk_buff *skb)
+ {
+- struct netprio_map *map = rcu_dereference_bh(skb->dev->priomap);
++ const struct netprio_map *map;
++ const struct sock *sk;
++ unsigned int prioidx;
+
+- if (!skb->priority && skb->sk && map) {
+- unsigned int prioidx =
+- sock_cgroup_prioidx(&skb->sk->sk_cgrp_data);
++ if (skb->priority)
++ return;
++ map = rcu_dereference_bh(skb->dev->priomap);
++ if (!map)
++ return;
++ sk = skb_to_full_sk(skb);
++ if (!sk)
++ return;
+
+- if (prioidx < map->priomap_len)
+- skb->priority = map->priomap[prioidx];
+- }
++ prioidx = sock_cgroup_prioidx(&sk->sk_cgrp_data);
++
++ if (prioidx < map->priomap_len)
++ skb->priority = map->priomap[prioidx];
+ }
+ #else
+ #define skb_update_prio(skb)
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index 7d430c1d9c3e..5ba973311025 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -1776,7 +1776,7 @@ static int devlink_dpipe_tables_fill(struct genl_info *info,
+ if (!nlh) {
+ err = devlink_dpipe_send_and_alloc_skb(&skb, info);
+ if (err)
+- goto err_skb_send_alloc;
++ return err;
+ goto send_done;
+ }
+
+@@ -1785,7 +1785,6 @@ static int devlink_dpipe_tables_fill(struct genl_info *info,
+ nla_put_failure:
+ err = -EMSGSIZE;
+ err_table_put:
+-err_skb_send_alloc:
+ genlmsg_cancel(skb, hdr);
+ nlmsg_free(skb);
+ return err;
+@@ -2051,7 +2050,7 @@ static int devlink_dpipe_entries_fill(struct genl_info *info,
+ table->counters_enabled,
+ &dump_ctx);
+ if (err)
+- goto err_entries_dump;
++ return err;
+
+ send_done:
+ nlh = nlmsg_put(dump_ctx.skb, info->snd_portid, info->snd_seq,
+@@ -2059,16 +2058,10 @@ static int devlink_dpipe_entries_fill(struct genl_info *info,
+ if (!nlh) {
+ err = devlink_dpipe_send_and_alloc_skb(&dump_ctx.skb, info);
+ if (err)
+- goto err_skb_send_alloc;
++ return err;
+ goto send_done;
+ }
+ return genlmsg_reply(dump_ctx.skb, info);
+-
+-err_entries_dump:
+-err_skb_send_alloc:
+- genlmsg_cancel(dump_ctx.skb, dump_ctx.hdr);
+- nlmsg_free(dump_ctx.skb);
+- return err;
+ }
+
+ static int devlink_nl_cmd_dpipe_entries_get(struct sk_buff *skb,
+@@ -2207,7 +2200,7 @@ static int devlink_dpipe_headers_fill(struct genl_info *info,
+ if (!nlh) {
+ err = devlink_dpipe_send_and_alloc_skb(&skb, info);
+ if (err)
+- goto err_skb_send_alloc;
++ return err;
+ goto send_done;
+ }
+ return genlmsg_reply(skb, info);
+@@ -2215,7 +2208,6 @@ static int devlink_dpipe_headers_fill(struct genl_info *info,
+ nla_put_failure:
+ err = -EMSGSIZE;
+ err_table_put:
+-err_skb_send_alloc:
+ genlmsg_cancel(skb, hdr);
+ nlmsg_free(skb);
+ return err;
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 08f574081315..3538ba8771e9 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -4173,7 +4173,7 @@ int sock_queue_err_skb(struct sock *sk, struct sk_buff *skb)
+
+ skb_queue_tail(&sk->sk_error_queue, skb);
+ if (!sock_flag(sk, SOCK_DEAD))
+- sk->sk_data_ready(sk);
++ sk->sk_error_report(sk);
+ return 0;
+ }
+ EXPORT_SYMBOL(sock_queue_err_skb);
+diff --git a/net/dccp/proto.c b/net/dccp/proto.c
+index 9d43c1f40274..ff3b058cf58c 100644
+--- a/net/dccp/proto.c
++++ b/net/dccp/proto.c
+@@ -789,6 +789,11 @@ int dccp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ if (skb == NULL)
+ goto out_release;
+
++ if (sk->sk_state == DCCP_CLOSED) {
++ rc = -ENOTCONN;
++ goto out_discard;
++ }
++
+ skb_reserve(skb, sk->sk_prot->max_header);
+ rc = memcpy_from_msg(skb_put(skb, len), msg, len);
+ if (rc != 0)
+diff --git a/net/dsa/legacy.c b/net/dsa/legacy.c
+index 84611d7fcfa2..3c9cee268b8a 100644
+--- a/net/dsa/legacy.c
++++ b/net/dsa/legacy.c
+@@ -194,7 +194,7 @@ static int dsa_switch_setup_one(struct dsa_switch *ds,
+ ds->ports[i].dn = cd->port_dn[i];
+ ds->ports[i].cpu_dp = dst->cpu_dp;
+
+- if (dsa_is_user_port(ds, i))
++ if (!dsa_is_user_port(ds, i))
+ continue;
+
+ ret = dsa_slave_create(&ds->ports[i]);
+diff --git a/net/ieee802154/6lowpan/core.c b/net/ieee802154/6lowpan/core.c
+index 974765b7d92a..e9f0489e4229 100644
+--- a/net/ieee802154/6lowpan/core.c
++++ b/net/ieee802154/6lowpan/core.c
+@@ -206,9 +206,13 @@ static inline void lowpan_netlink_fini(void)
+ static int lowpan_device_event(struct notifier_block *unused,
+ unsigned long event, void *ptr)
+ {
+- struct net_device *wdev = netdev_notifier_info_to_dev(ptr);
++ struct net_device *ndev = netdev_notifier_info_to_dev(ptr);
++ struct wpan_dev *wpan_dev;
+
+- if (wdev->type != ARPHRD_IEEE802154)
++ if (ndev->type != ARPHRD_IEEE802154)
++ return NOTIFY_DONE;
++ wpan_dev = ndev->ieee802154_ptr;
++ if (!wpan_dev)
+ return NOTIFY_DONE;
+
+ switch (event) {
+@@ -217,8 +221,8 @@ static int lowpan_device_event(struct notifier_block *unused,
+ * also delete possible lowpan interfaces which belongs
+ * to the wpan interface.
+ */
+- if (wdev->ieee802154_ptr->lowpan_dev)
+- lowpan_dellink(wdev->ieee802154_ptr->lowpan_dev, NULL);
++ if (wpan_dev->lowpan_dev)
++ lowpan_dellink(wpan_dev->lowpan_dev, NULL);
+ break;
+ default:
+ return NOTIFY_DONE;
+diff --git a/net/ipv4/inet_fragment.c b/net/ipv4/inet_fragment.c
+index 26a3d0315728..e8ec28999f5c 100644
+--- a/net/ipv4/inet_fragment.c
++++ b/net/ipv4/inet_fragment.c
+@@ -119,6 +119,9 @@ static void inet_frag_secret_rebuild(struct inet_frags *f)
+
+ static bool inet_fragq_should_evict(const struct inet_frag_queue *q)
+ {
++ if (!hlist_unhashed(&q->list_evictor))
++ return false;
++
+ return q->net->low_thresh == 0 ||
+ frag_mem_limit(q->net) >= q->net->low_thresh;
+ }
+diff --git a/net/ipv4/ip_sockglue.c b/net/ipv4/ip_sockglue.c
+index f56aab54e0c8..1e70ed5244ea 100644
+--- a/net/ipv4/ip_sockglue.c
++++ b/net/ipv4/ip_sockglue.c
+@@ -258,7 +258,8 @@ int ip_cmsg_send(struct sock *sk, struct msghdr *msg, struct ipcm_cookie *ipc,
+ src_info = (struct in6_pktinfo *)CMSG_DATA(cmsg);
+ if (!ipv6_addr_v4mapped(&src_info->ipi6_addr))
+ return -EINVAL;
+- ipc->oif = src_info->ipi6_ifindex;
++ if (src_info->ipi6_ifindex)
++ ipc->oif = src_info->ipi6_ifindex;
+ ipc->addr = src_info->ipi6_addr.s6_addr32[3];
+ continue;
+ }
+@@ -288,7 +289,8 @@ int ip_cmsg_send(struct sock *sk, struct msghdr *msg, struct ipcm_cookie *ipc,
+ if (cmsg->cmsg_len != CMSG_LEN(sizeof(struct in_pktinfo)))
+ return -EINVAL;
+ info = (struct in_pktinfo *)CMSG_DATA(cmsg);
+- ipc->oif = info->ipi_ifindex;
++ if (info->ipi_ifindex)
++ ipc->oif = info->ipi_ifindex;
+ ipc->addr = info->ipi_spec_dst.s_addr;
+ break;
+ }
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index c821f5d68720..2eb91b97a062 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -3542,6 +3542,7 @@ int tcp_abort(struct sock *sk, int err)
+
+ bh_unlock_sock(sk);
+ local_bh_enable();
++ tcp_write_queue_purge(sk);
+ release_sock(sk);
+ return 0;
+ }
+diff --git a/net/ipv4/tcp_timer.c b/net/ipv4/tcp_timer.c
+index 388158c9d9f6..c721140a7d79 100644
+--- a/net/ipv4/tcp_timer.c
++++ b/net/ipv4/tcp_timer.c
+@@ -34,6 +34,7 @@ static void tcp_write_err(struct sock *sk)
+ sk->sk_err = sk->sk_err_soft ? : ETIMEDOUT;
+ sk->sk_error_report(sk);
+
++ tcp_write_queue_purge(sk);
+ tcp_done(sk);
+ __NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPABORTONTIMEOUT);
+ }
+diff --git a/net/ipv6/datagram.c b/net/ipv6/datagram.c
+index a1f918713006..287112da3c06 100644
+--- a/net/ipv6/datagram.c
++++ b/net/ipv6/datagram.c
+@@ -146,10 +146,12 @@ int __ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr,
+ struct sockaddr_in6 *usin = (struct sockaddr_in6 *) uaddr;
+ struct inet_sock *inet = inet_sk(sk);
+ struct ipv6_pinfo *np = inet6_sk(sk);
+- struct in6_addr *daddr;
++ struct in6_addr *daddr, old_daddr;
++ __be32 fl6_flowlabel = 0;
++ __be32 old_fl6_flowlabel;
++ __be16 old_dport;
+ int addr_type;
+ int err;
+- __be32 fl6_flowlabel = 0;
+
+ if (usin->sin6_family == AF_INET) {
+ if (__ipv6_only_sock(sk))
+@@ -239,9 +241,13 @@ int __ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr,
+ }
+ }
+
++ /* save the current peer information before updating it */
++ old_daddr = sk->sk_v6_daddr;
++ old_fl6_flowlabel = np->flow_label;
++ old_dport = inet->inet_dport;
++
+ sk->sk_v6_daddr = *daddr;
+ np->flow_label = fl6_flowlabel;
+-
+ inet->inet_dport = usin->sin6_port;
+
+ /*
+@@ -251,11 +257,12 @@ int __ip6_datagram_connect(struct sock *sk, struct sockaddr *uaddr,
+
+ err = ip6_datagram_dst_update(sk, true);
+ if (err) {
+- /* Reset daddr and dport so that udp_v6_early_demux()
+- * fails to find this socket
++ /* Restore the socket peer info, to keep it consistent with
++ * the old socket state
+ */
+- memset(&sk->sk_v6_daddr, 0, sizeof(sk->sk_v6_daddr));
+- inet->inet_dport = 0;
++ sk->sk_v6_daddr = old_daddr;
++ np->flow_label = old_fl6_flowlabel;
++ inet->inet_dport = old_dport;
+ goto out;
+ }
+
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index f61a5b613b52..ba5e04c6ae17 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -1554,7 +1554,8 @@ static void ndisc_fill_redirect_hdr_option(struct sk_buff *skb,
+ *(opt++) = (rd_len >> 3);
+ opt += 6;
+
+- memcpy(opt, ipv6_hdr(orig_skb), rd_len - 8);
++ skb_copy_bits(orig_skb, skb_network_offset(orig_skb), opt,
++ rd_len - 8);
+ }
+
+ void ndisc_send_redirect(struct sk_buff *skb, const struct in6_addr *target)
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index a560fb1d0230..08a2a65d3304 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1510,7 +1510,30 @@ static void rt6_exceptions_remove_prefsrc(struct rt6_info *rt)
+ }
+ }
+
+-static void rt6_exceptions_update_pmtu(struct rt6_info *rt, int mtu)
++static bool rt6_mtu_change_route_allowed(struct inet6_dev *idev,
++ struct rt6_info *rt, int mtu)
++{
++ /* If the new MTU is lower than the route PMTU, this new MTU will be the
++ * lowest MTU in the path: always allow updating the route PMTU to
++ * reflect PMTU decreases.
++ *
++ * If the new MTU is higher, and the route PMTU is equal to the local
++ * MTU, this means the old MTU is the lowest in the path, so allow
++ * updating it: if other nodes now have lower MTUs, PMTU discovery will
++ * handle this.
++ */
++
++ if (dst_mtu(&rt->dst) >= mtu)
++ return true;
++
++ if (dst_mtu(&rt->dst) == idev->cnf.mtu6)
++ return true;
++
++ return false;
++}
++
++static void rt6_exceptions_update_pmtu(struct inet6_dev *idev,
++ struct rt6_info *rt, int mtu)
+ {
+ struct rt6_exception_bucket *bucket;
+ struct rt6_exception *rt6_ex;
+@@ -1519,20 +1542,22 @@ static void rt6_exceptions_update_pmtu(struct rt6_info *rt, int mtu)
+ bucket = rcu_dereference_protected(rt->rt6i_exception_bucket,
+ lockdep_is_held(&rt6_exception_lock));
+
+- if (bucket) {
+- for (i = 0; i < FIB6_EXCEPTION_BUCKET_SIZE; i++) {
+- hlist_for_each_entry(rt6_ex, &bucket->chain, hlist) {
+- struct rt6_info *entry = rt6_ex->rt6i;
+- /* For RTF_CACHE with rt6i_pmtu == 0
+- * (i.e. a redirected route),
+- * the metrics of its rt->dst.from has already
+- * been updated.
+- */
+- if (entry->rt6i_pmtu && entry->rt6i_pmtu > mtu)
+- entry->rt6i_pmtu = mtu;
+- }
+- bucket++;
++ if (!bucket)
++ return;
++
++ for (i = 0; i < FIB6_EXCEPTION_BUCKET_SIZE; i++) {
++ hlist_for_each_entry(rt6_ex, &bucket->chain, hlist) {
++ struct rt6_info *entry = rt6_ex->rt6i;
++
++ /* For RTF_CACHE with rt6i_pmtu == 0 (i.e. a redirected
++ * route), the metrics of its rt->dst.from have already
++ * been updated.
++ */
++ if (entry->rt6i_pmtu &&
++ rt6_mtu_change_route_allowed(idev, entry, mtu))
++ entry->rt6i_pmtu = mtu;
+ }
++ bucket++;
+ }
+ }
+
+@@ -3521,25 +3546,13 @@ static int rt6_mtu_change_route(struct rt6_info *rt, void *p_arg)
+ Since RFC 1981 doesn't include administrative MTU increase
+ update PMTU increase is a MUST. (i.e. jumbo frame)
+ */
+- /*
+- If new MTU is less than route PMTU, this new MTU will be the
+- lowest MTU in the path, update the route PMTU to reflect PMTU
+- decreases; if new MTU is greater than route PMTU, and the
+- old MTU is the lowest MTU in the path, update the route PMTU
+- to reflect the increase. In this case if the other nodes' MTU
+- also have the lowest MTU, TOO BIG MESSAGE will be lead to
+- PMTU discovery.
+- */
+ if (rt->dst.dev == arg->dev &&
+- dst_metric_raw(&rt->dst, RTAX_MTU) &&
+ !dst_metric_locked(&rt->dst, RTAX_MTU)) {
+ spin_lock_bh(&rt6_exception_lock);
+- if (dst_mtu(&rt->dst) >= arg->mtu ||
+- (dst_mtu(&rt->dst) < arg->mtu &&
+- dst_mtu(&rt->dst) == idev->cnf.mtu6)) {
++ if (dst_metric_raw(&rt->dst, RTAX_MTU) &&
++ rt6_mtu_change_route_allowed(idev, rt, arg->mtu))
+ dst_metric_set(&rt->dst, RTAX_MTU, arg->mtu);
+- }
+- rt6_exceptions_update_pmtu(rt, arg->mtu);
++ rt6_exceptions_update_pmtu(idev, rt, arg->mtu);
+ spin_unlock_bh(&rt6_exception_lock);
+ }
+ return 0;
+diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c
+index bd6cc688bd19..7a78dcfda68a 100644
+--- a/net/ipv6/seg6_iptunnel.c
++++ b/net/ipv6/seg6_iptunnel.c
+@@ -93,7 +93,8 @@ static void set_tun_src(struct net *net, struct net_device *dev,
+ /* encapsulate an IPv6 packet within an outer IPv6 header with a given SRH */
+ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
+ {
+- struct net *net = dev_net(skb_dst(skb)->dev);
++ struct dst_entry *dst = skb_dst(skb);
++ struct net *net = dev_net(dst->dev);
+ struct ipv6hdr *hdr, *inner_hdr;
+ struct ipv6_sr_hdr *isrh;
+ int hdrlen, tot_len, err;
+@@ -134,7 +135,7 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
+ isrh->nexthdr = proto;
+
+ hdr->daddr = isrh->segments[isrh->first_segment];
+- set_tun_src(net, skb->dev, &hdr->daddr, &hdr->saddr);
++ set_tun_src(net, ip6_dst_idev(dst)->dev, &hdr->daddr, &hdr->saddr);
+
+ #ifdef CONFIG_IPV6_SEG6_HMAC
+ if (sr_has_hmac(isrh)) {
+@@ -418,7 +419,7 @@ static int seg6_build_state(struct nlattr *nla,
+
+ slwt = seg6_lwt_lwtunnel(newts);
+
+- err = dst_cache_init(&slwt->cache, GFP_KERNEL);
++ err = dst_cache_init(&slwt->cache, GFP_ATOMIC);
+ if (err) {
+ kfree(newts);
+ return err;
+diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
+index 148533169b1d..ca98276c2709 100644
+--- a/net/iucv/af_iucv.c
++++ b/net/iucv/af_iucv.c
+@@ -2433,9 +2433,11 @@ static int afiucv_iucv_init(void)
+ af_iucv_dev->driver = &af_iucv_driver;
+ err = device_register(af_iucv_dev);
+ if (err)
+- goto out_driver;
++ goto out_iucv_dev;
+ return 0;
+
++out_iucv_dev:
++ put_device(af_iucv_dev);
+ out_driver:
+ driver_unregister(&af_iucv_driver);
+ out_iucv:
+diff --git a/net/kcm/kcmsock.c b/net/kcm/kcmsock.c
+index 4a8d407f8902..3f15ffd356da 100644
+--- a/net/kcm/kcmsock.c
++++ b/net/kcm/kcmsock.c
+@@ -1381,24 +1381,32 @@ static int kcm_attach(struct socket *sock, struct socket *csock,
+ .parse_msg = kcm_parse_func_strparser,
+ .read_sock_done = kcm_read_sock_done,
+ };
+- int err;
++ int err = 0;
+
+ csk = csock->sk;
+ if (!csk)
+ return -EINVAL;
+
++ lock_sock(csk);
++
+ /* Only allow TCP sockets to be attached for now */
+ if ((csk->sk_family != AF_INET && csk->sk_family != AF_INET6) ||
+- csk->sk_protocol != IPPROTO_TCP)
+- return -EOPNOTSUPP;
++ csk->sk_protocol != IPPROTO_TCP) {
++ err = -EOPNOTSUPP;
++ goto out;
++ }
+
+ /* Don't allow listeners or closed sockets */
+- if (csk->sk_state == TCP_LISTEN || csk->sk_state == TCP_CLOSE)
+- return -EOPNOTSUPP;
++ if (csk->sk_state == TCP_LISTEN || csk->sk_state == TCP_CLOSE) {
++ err = -EOPNOTSUPP;
++ goto out;
++ }
+
+ psock = kmem_cache_zalloc(kcm_psockp, GFP_KERNEL);
+- if (!psock)
+- return -ENOMEM;
++ if (!psock) {
++ err = -ENOMEM;
++ goto out;
++ }
+
+ psock->mux = mux;
+ psock->sk = csk;
+@@ -1407,7 +1415,7 @@ static int kcm_attach(struct socket *sock, struct socket *csock,
+ err = strp_init(&psock->strp, csk, &cb);
+ if (err) {
+ kmem_cache_free(kcm_psockp, psock);
+- return err;
++ goto out;
+ }
+
+ write_lock_bh(&csk->sk_callback_lock);
+@@ -1419,7 +1427,8 @@ static int kcm_attach(struct socket *sock, struct socket *csock,
+ write_unlock_bh(&csk->sk_callback_lock);
+ strp_done(&psock->strp);
+ kmem_cache_free(kcm_psockp, psock);
+- return -EALREADY;
++ err = -EALREADY;
++ goto out;
+ }
+
+ psock->save_data_ready = csk->sk_data_ready;
+@@ -1455,7 +1464,10 @@ static int kcm_attach(struct socket *sock, struct socket *csock,
+ /* Schedule RX work in case there are already bytes queued */
+ strp_check_rcv(&psock->strp);
+
+- return 0;
++out:
++ release_sock(csk);
++
++ return err;
+ }
+
+ static int kcm_attach_ioctl(struct socket *sock, struct kcm_attach *info)
+@@ -1507,6 +1519,7 @@ static void kcm_unattach(struct kcm_psock *psock)
+
+ if (WARN_ON(psock->rx_kcm)) {
+ write_unlock_bh(&csk->sk_callback_lock);
++ release_sock(csk);
+ return;
+ }
+
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 861b67c34191..e8b26afeb194 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -1466,9 +1466,14 @@ int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id, u32
+ encap = cfg->encap;
+
+ /* Quick sanity checks */
++ err = -EPROTONOSUPPORT;
++ if (sk->sk_type != SOCK_DGRAM) {
++ pr_debug("tunl %hu: fd %d wrong socket type\n",
++ tunnel_id, fd);
++ goto err;
++ }
+ switch (encap) {
+ case L2TP_ENCAPTYPE_UDP:
+- err = -EPROTONOSUPPORT;
+ if (sk->sk_protocol != IPPROTO_UDP) {
+ pr_err("tunl %hu: fd %d wrong protocol, got %d, expected %d\n",
+ tunnel_id, fd, sk->sk_protocol, IPPROTO_UDP);
+@@ -1476,7 +1481,6 @@ int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id, u32
+ }
+ break;
+ case L2TP_ENCAPTYPE_IP:
+- err = -EPROTONOSUPPORT;
+ if (sk->sk_protocol != IPPROTO_L2TP) {
+ pr_err("tunl %hu: fd %d wrong protocol, got %d, expected %d\n",
+ tunnel_id, fd, sk->sk_protocol, IPPROTO_L2TP);
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 6f02499ef007..b9ce82c9440f 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -1106,7 +1106,7 @@ static int genlmsg_mcast(struct sk_buff *skb, u32 portid, unsigned long group,
+ if (!err)
+ delivered = true;
+ else if (err != -ESRCH)
+- goto error;
++ return err;
+ return delivered ? 0 : -ESRCH;
+ error:
+ kfree_skb(skb);
+diff --git a/net/openvswitch/meter.c b/net/openvswitch/meter.c
+index 3fbfc78991ac..0d961f09d0c7 100644
+--- a/net/openvswitch/meter.c
++++ b/net/openvswitch/meter.c
+@@ -242,14 +242,20 @@ static struct dp_meter *dp_meter_create(struct nlattr **a)
+
+ band->type = nla_get_u32(attr[OVS_BAND_ATTR_TYPE]);
+ band->rate = nla_get_u32(attr[OVS_BAND_ATTR_RATE]);
++ if (band->rate == 0) {
++ err = -EINVAL;
++ goto exit_free_meter;
++ }
++
+ band->burst_size = nla_get_u32(attr[OVS_BAND_ATTR_BURST]);
+ /* Figure out max delta_t that is enough to fill any bucket.
+ * Keep max_delta_t size to the bucket units:
+ * pkts => 1/1000 packets, kilobits => bits.
++ *
++ * Start with a full bucket.
+ */
+- band_max_delta_t = (band->burst_size + band->rate) * 1000;
+- /* Start with a full bucket. */
+- band->bucket = band_max_delta_t;
++ band->bucket = (band->burst_size + band->rate) * 1000;
++ band_max_delta_t = band->bucket / band->rate;
+ if (band_max_delta_t > meter->max_delta_t)
+ meter->max_delta_t = band_max_delta_t;
+ band++;
+diff --git a/net/sched/act_tunnel_key.c b/net/sched/act_tunnel_key.c
+index 30c96274c638..22bf1a376b91 100644
+--- a/net/sched/act_tunnel_key.c
++++ b/net/sched/act_tunnel_key.c
+@@ -153,6 +153,7 @@ static int tunnel_key_init(struct net *net, struct nlattr *nla,
+ metadata->u.tun_info.mode |= IP_TUNNEL_INFO_TX;
+ break;
+ default:
++ ret = -EINVAL;
+ goto err_out;
+ }
+
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index dd70924cbcdf..2aeca57f9bd0 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -509,7 +509,7 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ }
+
+ if (unlikely(sch->q.qlen >= sch->limit))
+- return qdisc_drop(skb, sch, to_free);
++ return qdisc_drop_all(skb, sch, to_free);
+
+ qdisc_qstats_backlog_inc(sch, skb);
+
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-04-08 14:31 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2018-04-08 14:31 UTC (permalink / raw
To: gentoo-commits
commit: 14243423be4baa539a5f9b13c2ddbeed904b7cad
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Apr 8 14:30:59 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Apr 8 14:30:59 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=14243423
Linux patch 4.15.16
0000_README | 4 +
1015_linux-4.15.16.patch | 2899 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2903 insertions(+)
diff --git a/0000_README b/0000_README
index f1a4ce6..ba8435c 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch: 1014_linux-4.15.15.patch
From: http://www.kernel.org
Desc: Linux 4.15.15
+Patch: 1015_linux-4.15.16.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.16
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1015_linux-4.15.16.patch b/1015_linux-4.15.16.patch
new file mode 100644
index 0000000..bec9a07
--- /dev/null
+++ b/1015_linux-4.15.16.patch
@@ -0,0 +1,2899 @@
+diff --git a/Documentation/devicetree/bindings/serial/8250.txt b/Documentation/devicetree/bindings/serial/8250.txt
+index dad3b2ec66d4..aeb6db4e35c3 100644
+--- a/Documentation/devicetree/bindings/serial/8250.txt
++++ b/Documentation/devicetree/bindings/serial/8250.txt
+@@ -24,6 +24,7 @@ Required properties:
+ - "ti,da830-uart"
+ - "aspeed,ast2400-vuart"
+ - "aspeed,ast2500-vuart"
++ - "nuvoton,npcm750-uart"
+ - "serial" if the port type is unknown.
+ - reg : offset and length of the register set for the device.
+ - interrupts : should contain uart interrupt.
+diff --git a/Makefile b/Makefile
+index 20c9b7bfeed4..b28f0f721ec7 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arm/boot/dts/am335x-pepper.dts b/arch/arm/boot/dts/am335x-pepper.dts
+index 9fb7426070ce..03c7d77023c6 100644
+--- a/arch/arm/boot/dts/am335x-pepper.dts
++++ b/arch/arm/boot/dts/am335x-pepper.dts
+@@ -139,7 +139,7 @@
+ &audio_codec {
+ status = "okay";
+
+- reset-gpios = <&gpio1 16 GPIO_ACTIVE_LOW>;
++ gpio-reset = <&gpio1 16 GPIO_ACTIVE_LOW>;
+ AVDD-supply = <&ldo3_reg>;
+ IOVDD-supply = <&ldo3_reg>;
+ DRVDD-supply = <&ldo3_reg>;
+diff --git a/arch/arm/boot/dts/dra76-evm.dts b/arch/arm/boot/dts/dra76-evm.dts
+index b024a65c6e27..f64aab450315 100644
+--- a/arch/arm/boot/dts/dra76-evm.dts
++++ b/arch/arm/boot/dts/dra76-evm.dts
+@@ -148,6 +148,7 @@
+ compatible = "ti,tps65917";
+ reg = <0x58>;
+ ti,system-power-controller;
++ ti,palmas-override-powerhold;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts
+index 5362139d5312..669c51c00c00 100644
+--- a/arch/arm/boot/dts/omap3-n900.dts
++++ b/arch/arm/boot/dts/omap3-n900.dts
+@@ -558,7 +558,7 @@
+ tlv320aic3x: tlv320aic3x@18 {
+ compatible = "ti,tlv320aic3x";
+ reg = <0x18>;
+- reset-gpios = <&gpio2 28 GPIO_ACTIVE_LOW>; /* 60 */
++ gpio-reset = <&gpio2 28 GPIO_ACTIVE_HIGH>; /* 60 */
+ ai3x-gpio-func = <
+ 0 /* AIC3X_GPIO1_FUNC_DISABLED */
+ 5 /* AIC3X_GPIO2_FUNC_DIGITAL_MIC_INPUT */
+@@ -575,7 +575,7 @@
+ tlv320aic3x_aux: tlv320aic3x@19 {
+ compatible = "ti,tlv320aic3x";
+ reg = <0x19>;
+- reset-gpios = <&gpio2 28 GPIO_ACTIVE_LOW>; /* 60 */
++ gpio-reset = <&gpio2 28 GPIO_ACTIVE_HIGH>; /* 60 */
+
+ AVDD-supply = <&vmmc2>;
+ DRVDD-supply = <&vmmc2>;
+diff --git a/arch/arm/boot/dts/sun6i-a31s-sinovoip-bpi-m2.dts b/arch/arm/boot/dts/sun6i-a31s-sinovoip-bpi-m2.dts
+index 51e6f1d21c32..b2758dd8ce43 100644
+--- a/arch/arm/boot/dts/sun6i-a31s-sinovoip-bpi-m2.dts
++++ b/arch/arm/boot/dts/sun6i-a31s-sinovoip-bpi-m2.dts
+@@ -42,7 +42,6 @@
+
+ /dts-v1/;
+ #include "sun6i-a31s.dtsi"
+-#include "sunxi-common-regulators.dtsi"
+ #include <dt-bindings/gpio/gpio.h>
+
+ / {
+@@ -99,6 +98,7 @@
+ pinctrl-0 = <&gmac_pins_rgmii_a>, <&gmac_phy_reset_pin_bpi_m2>;
+ phy = <&phy1>;
+ phy-mode = "rgmii";
++ phy-supply = <&reg_dldo1>;
+ snps,reset-gpio = <&pio 0 21 GPIO_ACTIVE_HIGH>; /* PA21 */
+ snps,reset-active-low;
+ snps,reset-delays-us = <0 10000 30000>;
+@@ -118,7 +118,7 @@
+ &mmc0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc0_pins_a>, <&mmc0_cd_pin_bpi_m2>;
+- vmmc-supply = <&reg_vcc3v0>;
++ vmmc-supply = <&reg_dcdc1>;
+ bus-width = <4>;
+ cd-gpios = <&pio 0 4 GPIO_ACTIVE_HIGH>; /* PA4 */
+ cd-inverted;
+@@ -132,7 +132,7 @@
+ &mmc2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc2_pins_a>;
+- vmmc-supply = <&reg_vcc3v0>;
++ vmmc-supply = <&reg_aldo1>;
+ mmc-pwrseq = <&mmc2_pwrseq>;
+ bus-width = <4>;
+ non-removable;
+@@ -163,6 +163,8 @@
+ reg = <0x68>;
+ interrupt-parent = <&nmi_intc>;
+ interrupts = <0 IRQ_TYPE_LEVEL_LOW>;
++ eldoin-supply = <&reg_dcdc1>;
++ x-powers,drive-vbus-en;
+ };
+ };
+
+@@ -193,7 +195,28 @@
+
+ #include "axp22x.dtsi"
+
++&reg_aldo1 {
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ regulator-name = "vcc-wifi";
++};
++
++&reg_aldo2 {
++ regulator-always-on;
++ regulator-min-microvolt = <2500000>;
++ regulator-max-microvolt = <2500000>;
++ regulator-name = "vcc-gmac";
++};
++
++&reg_aldo3 {
++ regulator-always-on;
++ regulator-min-microvolt = <3000000>;
++ regulator-max-microvolt = <3000000>;
++ regulator-name = "avcc";
++};
++
+ &reg_dc5ldo {
++ regulator-always-on;
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1320000>;
+ regulator-name = "vdd-cpus";
+@@ -233,6 +256,40 @@
+ regulator-name = "vcc-dram";
+ };
+
++&reg_dldo1 {
++ regulator-min-microvolt = <3000000>;
++ regulator-max-microvolt = <3000000>;
++ regulator-name = "vcc-mac";
++};
++
++&reg_dldo2 {
++ regulator-min-microvolt = <2800000>;
++ regulator-max-microvolt = <2800000>;
++ regulator-name = "avdd-csi";
++};
++
++&reg_dldo3 {
++ regulator-always-on;
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ regulator-name = "vcc-pb";
++};
++
++&reg_eldo1 {
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ regulator-name = "vdd-csi";
++ status = "okay";
++};
++
++&reg_ldo_io1 {
++ regulator-always-on;
++ regulator-min-microvolt = <1800000>;
++ regulator-max-microvolt = <1800000>;
++ regulator-name = "vcc-pm-cpus";
++ status = "okay";
++};
++
+ &uart0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart0_pins_a>;
+diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile
+index 30ef8e291271..c9919c2b7ad1 100644
+--- a/arch/arm/crypto/Makefile
++++ b/arch/arm/crypto/Makefile
+@@ -54,6 +54,7 @@ crct10dif-arm-ce-y := crct10dif-ce-core.o crct10dif-ce-glue.o
+ crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o
+ chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o
+
++ifdef REGENERATE_ARM_CRYPTO
+ quiet_cmd_perl = PERL $@
+ cmd_perl = $(PERL) $(<) > $(@)
+
+@@ -62,5 +63,6 @@ $(src)/sha256-core.S_shipped: $(src)/sha256-armv4.pl
+
+ $(src)/sha512-core.S_shipped: $(src)/sha512-armv4.pl
+ $(call cmd,perl)
++endif
+
+ .PRECIOUS: $(obj)/sha256-core.S $(obj)/sha512-core.S
+diff --git a/arch/arm/plat-omap/include/plat/sram.h b/arch/arm/plat-omap/include/plat/sram.h
+index fb061cf0d736..30a07730807a 100644
+--- a/arch/arm/plat-omap/include/plat/sram.h
++++ b/arch/arm/plat-omap/include/plat/sram.h
+@@ -5,13 +5,4 @@ void omap_map_sram(unsigned long start, unsigned long size,
+ unsigned long skip, int cached);
+ void omap_sram_reset(void);
+
+-extern void *omap_sram_push_address(unsigned long size);
+-
+-/* Macro to push a function to the internal SRAM, using the fncpy API */
+-#define omap_sram_push(funcp, size) ({ \
+- typeof(&(funcp)) _res = NULL; \
+- void *_sram_address = omap_sram_push_address(size); \
+- if (_sram_address) \
+- _res = fncpy(_sram_address, &(funcp), size); \
+- _res; \
+-})
++extern void *omap_sram_push(void *funcp, unsigned long size);
+diff --git a/arch/arm/plat-omap/sram.c b/arch/arm/plat-omap/sram.c
+index a5bc92d7e476..921840acf65c 100644
+--- a/arch/arm/plat-omap/sram.c
++++ b/arch/arm/plat-omap/sram.c
+@@ -23,6 +23,7 @@
+ #include <asm/fncpy.h>
+ #include <asm/tlb.h>
+ #include <asm/cacheflush.h>
++#include <asm/set_memory.h>
+
+ #include <asm/mach/map.h>
+
+@@ -42,7 +43,7 @@ static void __iomem *omap_sram_ceil;
+ * Note that fncpy requires the returned address to be aligned
+ * to an 8-byte boundary.
+ */
+-void *omap_sram_push_address(unsigned long size)
++static void *omap_sram_push_address(unsigned long size)
+ {
+ unsigned long available, new_ceil = (unsigned long)omap_sram_ceil;
+
+@@ -60,6 +61,30 @@ void *omap_sram_push_address(unsigned long size)
+ return (void *)omap_sram_ceil;
+ }
+
++void *omap_sram_push(void *funcp, unsigned long size)
++{
++ void *sram;
++ unsigned long base;
++ int pages;
++ void *dst = NULL;
++
++ sram = omap_sram_push_address(size);
++ if (!sram)
++ return NULL;
++
++ base = (unsigned long)sram & PAGE_MASK;
++ pages = PAGE_ALIGN(size) / PAGE_SIZE;
++
++ set_memory_rw(base, pages);
++
++ dst = fncpy(sram, funcp, size);
++
++ set_memory_ro(base, pages);
++ set_memory_x(base, pages);
++
++ return dst;
++}
++
+ /*
+ * The SRAM context is lost during off-idle and stack
+ * needs to be reset.
+@@ -75,6 +100,9 @@ void omap_sram_reset(void)
+ void __init omap_map_sram(unsigned long start, unsigned long size,
+ unsigned long skip, int cached)
+ {
++ unsigned long base;
++ int pages;
++
+ if (size == 0)
+ return;
+
+@@ -95,4 +123,10 @@ void __init omap_map_sram(unsigned long start, unsigned long size,
+ */
+ memset_io(omap_sram_base + omap_sram_skip, 0,
+ omap_sram_size - omap_sram_skip);
++
++ base = (unsigned long)omap_sram_base;
++ pages = PAGE_ALIGN(omap_sram_size) / PAGE_SIZE;
++
++ set_memory_ro(base, pages);
++ set_memory_x(base, pages);
+ }
+diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
+index a71a48e71fff..aa7496be311d 100644
+--- a/arch/arm/vfp/vfpmodule.c
++++ b/arch/arm/vfp/vfpmodule.c
+@@ -648,7 +648,7 @@ int vfp_restore_user_hwstate(struct user_vfp __user *ufp,
+ */
+ static int vfp_dying_cpu(unsigned int cpu)
+ {
+- vfp_force_reload(cpu, current_thread_info());
++ vfp_current_hw_state[cpu] = NULL;
+ return 0;
+ }
+
+diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
+index b5edc5918c28..12fd81af1d1c 100644
+--- a/arch/arm64/crypto/Makefile
++++ b/arch/arm64/crypto/Makefile
+@@ -58,6 +58,7 @@ CFLAGS_aes-glue-ce.o := -DUSE_V8_CRYPTO_EXTENSIONS
+ $(obj)/aes-glue-%.o: $(src)/aes-glue.c FORCE
+ $(call if_changed_rule,cc_o_c)
+
++ifdef REGENERATE_ARM64_CRYPTO
+ quiet_cmd_perlasm = PERLASM $@
+ cmd_perlasm = $(PERL) $(<) void $(@)
+
+@@ -66,5 +67,6 @@ $(src)/sha256-core.S_shipped: $(src)/sha512-armv8.pl
+
+ $(src)/sha512-core.S_shipped: $(src)/sha512-armv8.pl
+ $(call cmd,perlasm)
++endif
+
+ .PRECIOUS: $(obj)/sha256-core.S $(obj)/sha512-core.S
+diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
+index c9448e19847a..62290feddd56 100644
+--- a/arch/powerpc/include/asm/book3s/64/mmu.h
++++ b/arch/powerpc/include/asm/book3s/64/mmu.h
+@@ -87,6 +87,9 @@ typedef struct {
+ /* Number of bits in the mm_cpumask */
+ atomic_t active_cpus;
+
++ /* Number of users of the external (Nest) MMU */
++ atomic_t copros;
++
+ /* NPU NMMU context */
+ struct npu_context *npu_context;
+
+diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
+index e2a2b8400490..4f8a03c74e3f 100644
+--- a/arch/powerpc/include/asm/mmu_context.h
++++ b/arch/powerpc/include/asm/mmu_context.h
+@@ -92,15 +92,23 @@ static inline void dec_mm_active_cpus(struct mm_struct *mm)
+ static inline void mm_context_add_copro(struct mm_struct *mm)
+ {
+ /*
+- * On hash, should only be called once over the lifetime of
+- * the context, as we can't decrement the active cpus count
+- * and flush properly for the time being.
++ * If any copro is in use, increment the active CPU count
++ * in order to force TLB invalidations to be global as to
++ * propagate to the Nest MMU.
+ */
+- inc_mm_active_cpus(mm);
++ if (atomic_inc_return(&mm->context.copros) == 1)
++ inc_mm_active_cpus(mm);
+ }
+
+ static inline void mm_context_remove_copro(struct mm_struct *mm)
+ {
++ int c;
++
++ c = atomic_dec_if_positive(&mm->context.copros);
++
++ /* Detect imbalance between add and remove */
++ WARN_ON(c < 0);
++
+ /*
+ * Need to broadcast a global flush of the full mm before
+ * decrementing active_cpus count, as the next TLBI may be
+@@ -111,7 +119,7 @@ static inline void mm_context_remove_copro(struct mm_struct *mm)
+ * for the time being. Invalidations will remain global if
+ * used on hash.
+ */
+- if (radix_enabled()) {
++ if (c == 0 && radix_enabled()) {
+ flush_all_mm(mm);
+ dec_mm_active_cpus(mm);
+ }
+diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
+index 2dc10bf646b8..9b060a41185e 100644
+--- a/arch/powerpc/kernel/exceptions-64s.S
++++ b/arch/powerpc/kernel/exceptions-64s.S
+@@ -706,7 +706,7 @@ EXC_COMMON_BEGIN(bad_addr_slb)
+ ld r3, PACA_EXSLB+EX_DAR(r13)
+ std r3, _DAR(r1)
+ beq cr6, 2f
+- li r10, 0x480 /* fix trap number for I-SLB miss */
++ li r10, 0x481 /* fix trap number for I-SLB miss */
+ std r10, _TRAP(r1)
+ 2: bl save_nvgprs
+ addi r3, r1, STACK_FRAME_OVERHEAD
+diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
+index b7a84522e652..c933632aa08d 100644
+--- a/arch/powerpc/kernel/irq.c
++++ b/arch/powerpc/kernel/irq.c
+@@ -475,6 +475,14 @@ void force_external_irq_replay(void)
+ */
+ WARN_ON(!arch_irqs_disabled());
+
++ /*
++ * Interrupts must always be hard disabled before irq_happened is
++ * modified (to prevent lost update in case of interrupt between
++ * load and store).
++ */
++ __hard_irq_disable();
++ local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
++
+ /* Indicate in the PACA that we have an interrupt to replay */
+ local_paca->irq_happened |= PACA_IRQ_EE;
+ }
+diff --git a/arch/powerpc/mm/mmu_context_book3s64.c b/arch/powerpc/mm/mmu_context_book3s64.c
+index 59c0766ae4e0..5066276229f8 100644
+--- a/arch/powerpc/mm/mmu_context_book3s64.c
++++ b/arch/powerpc/mm/mmu_context_book3s64.c
+@@ -171,6 +171,7 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+ mm_iommu_init(mm);
+ #endif
+ atomic_set(&mm->context.active_cpus, 0);
++ atomic_set(&mm->context.copros, 0);
+
+ return 0;
+ }
+diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
+index 913a2b81b177..91afe00abbb0 100644
+--- a/arch/powerpc/mm/tlb-radix.c
++++ b/arch/powerpc/mm/tlb-radix.c
+@@ -85,7 +85,23 @@ static inline void _tlbiel_pid(unsigned long pid, unsigned long ric)
+ static inline void _tlbie_pid(unsigned long pid, unsigned long ric)
+ {
+ asm volatile("ptesync": : :"memory");
+- __tlbie_pid(pid, ric);
++
++ /*
++ * Workaround the fact that the "ric" argument to __tlbie_pid
++ * must be a compile-time contraint to match the "i" constraint
++ * in the asm statement.
++ */
++ switch (ric) {
++ case RIC_FLUSH_TLB:
++ __tlbie_pid(pid, RIC_FLUSH_TLB);
++ break;
++ case RIC_FLUSH_PWC:
++ __tlbie_pid(pid, RIC_FLUSH_PWC);
++ break;
++ case RIC_FLUSH_ALL:
++ default:
++ __tlbie_pid(pid, RIC_FLUSH_ALL);
++ }
+ asm volatile("eieio; tlbsync; ptesync": : :"memory");
+ }
+
+@@ -245,6 +261,16 @@ void radix__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmadd
+ }
+ EXPORT_SYMBOL(radix__local_flush_tlb_page);
+
++static bool mm_needs_flush_escalation(struct mm_struct *mm)
++{
++ /*
++ * P9 nest MMU has issues with the page walk cache
++ * caching PTEs and not flushing them properly when
++ * RIC = 0 for a PID/LPID invalidate
++ */
++ return atomic_read(&mm->context.copros) != 0;
++}
++
+ #ifdef CONFIG_SMP
+ void radix__flush_tlb_mm(struct mm_struct *mm)
+ {
+@@ -255,9 +281,12 @@ void radix__flush_tlb_mm(struct mm_struct *mm)
+ return;
+
+ preempt_disable();
+- if (!mm_is_thread_local(mm))
+- _tlbie_pid(pid, RIC_FLUSH_TLB);
+- else
++ if (!mm_is_thread_local(mm)) {
++ if (mm_needs_flush_escalation(mm))
++ _tlbie_pid(pid, RIC_FLUSH_ALL);
++ else
++ _tlbie_pid(pid, RIC_FLUSH_TLB);
++ } else
+ _tlbiel_pid(pid, RIC_FLUSH_TLB);
+ preempt_enable();
+ }
+@@ -369,10 +398,14 @@ void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+ }
+
+ if (full) {
+- if (local)
++ if (local) {
+ _tlbiel_pid(pid, RIC_FLUSH_TLB);
+- else
+- _tlbie_pid(pid, RIC_FLUSH_TLB);
++ } else {
++ if (mm_needs_flush_escalation(mm))
++ _tlbie_pid(pid, RIC_FLUSH_ALL);
++ else
++ _tlbie_pid(pid, RIC_FLUSH_TLB);
++ }
+ } else {
+ bool hflush = false;
+ unsigned long hstart, hend;
+@@ -482,6 +515,9 @@ static inline void __radix__flush_tlb_range_psize(struct mm_struct *mm,
+ }
+
+ if (full) {
++ if (!local && mm_needs_flush_escalation(mm))
++ also_pwc = true;
++
+ if (local)
+ _tlbiel_pid(pid, also_pwc ? RIC_FLUSH_ALL : RIC_FLUSH_TLB);
+ else
+diff --git a/arch/x86/crypto/cast5_avx_glue.c b/arch/x86/crypto/cast5_avx_glue.c
+index dbea6020ffe7..575292a33bdf 100644
+--- a/arch/x86/crypto/cast5_avx_glue.c
++++ b/arch/x86/crypto/cast5_avx_glue.c
+@@ -66,8 +66,6 @@ static int ecb_crypt(struct blkcipher_desc *desc, struct blkcipher_walk *walk,
+ void (*fn)(struct cast5_ctx *ctx, u8 *dst, const u8 *src);
+ int err;
+
+- fn = (enc) ? cast5_ecb_enc_16way : cast5_ecb_dec_16way;
+-
+ err = blkcipher_walk_virt(desc, walk);
+ desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+
+@@ -79,6 +77,7 @@ static int ecb_crypt(struct blkcipher_desc *desc, struct blkcipher_walk *walk,
+
+ /* Process multi-block batch */
+ if (nbytes >= bsize * CAST5_PARALLEL_BLOCKS) {
++ fn = (enc) ? cast5_ecb_enc_16way : cast5_ecb_dec_16way;
+ do {
+ fn(ctx, wdst, wsrc);
+
+diff --git a/arch/x86/include/asm/hw_irq.h b/arch/x86/include/asm/hw_irq.h
+index 2851077b6051..32e666e1231e 100644
+--- a/arch/x86/include/asm/hw_irq.h
++++ b/arch/x86/include/asm/hw_irq.h
+@@ -36,6 +36,7 @@ extern asmlinkage void kvm_posted_intr_wakeup_ipi(void);
+ extern asmlinkage void kvm_posted_intr_nested_ipi(void);
+ extern asmlinkage void error_interrupt(void);
+ extern asmlinkage void irq_work_interrupt(void);
++extern asmlinkage void uv_bau_message_intr1(void);
+
+ extern asmlinkage void spurious_interrupt(void);
+ extern asmlinkage void thermal_interrupt(void);
+diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
+index 50bee5fe1140..2c3a1b4294eb 100644
+--- a/arch/x86/kernel/idt.c
++++ b/arch/x86/kernel/idt.c
+@@ -140,6 +140,9 @@ static const __initconst struct idt_data apic_idts[] = {
+ # ifdef CONFIG_IRQ_WORK
+ INTG(IRQ_WORK_VECTOR, irq_work_interrupt),
+ # endif
++#ifdef CONFIG_X86_UV
++ INTG(UV_BAU_MESSAGE, uv_bau_message_intr1),
++#endif
+ INTG(SPURIOUS_APIC_VECTOR, spurious_interrupt),
+ INTG(ERROR_APIC_VECTOR, error_interrupt),
+ #endif
+diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c
+index 7d5d53f36a7a..0b530c53de1f 100644
+--- a/arch/x86/platform/uv/tlb_uv.c
++++ b/arch/x86/platform/uv/tlb_uv.c
+@@ -2254,8 +2254,6 @@ static int __init uv_bau_init(void)
+ init_uvhub(uvhub, vector, uv_base_pnode);
+ }
+
+- alloc_intr_gate(vector, uv_bau_message_intr1);
+-
+ for_each_possible_blade(uvhub) {
+ if (uv_blade_nr_possible_cpus(uvhub)) {
+ unsigned long val;
+diff --git a/block/bio.c b/block/bio.c
+index 9ef6cf3addb3..2dd0c1305be5 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -43,9 +43,9 @@
+ * break badly! cannot be bigger than what you can fit into an
+ * unsigned short
+ */
+-#define BV(x) { .nr_vecs = x, .name = "biovec-"__stringify(x) }
++#define BV(x, n) { .nr_vecs = x, .name = "biovec-"#n }
+ static struct biovec_slab bvec_slabs[BVEC_POOL_NR] __read_mostly = {
+- BV(1), BV(4), BV(16), BV(64), BV(128), BV(BIO_MAX_PAGES),
++ BV(1, 1), BV(4, 4), BV(16, 16), BV(64, 64), BV(128, 128), BV(BIO_MAX_PAGES, max),
+ };
+ #undef BV
+
+diff --git a/block/partitions/msdos.c b/block/partitions/msdos.c
+index 0af3a3db6fb0..82c44f7df911 100644
+--- a/block/partitions/msdos.c
++++ b/block/partitions/msdos.c
+@@ -301,7 +301,9 @@ static void parse_bsd(struct parsed_partitions *state,
+ continue;
+ bsd_start = le32_to_cpu(p->p_offset);
+ bsd_size = le32_to_cpu(p->p_size);
+- if (memcmp(flavour, "bsd\0", 4) == 0)
++ /* FreeBSD has relative offset if C partition offset is zero */
++ if (memcmp(flavour, "bsd\0", 4) == 0 &&
++ le32_to_cpu(l->d_partitions[2].p_offset) == 0)
+ bsd_start += offset;
+ if (offset == bsd_start && size == bsd_size)
+ /* full parent partition, we have it already */
+diff --git a/crypto/ahash.c b/crypto/ahash.c
+index 266fc1d64f61..c03cc177870b 100644
+--- a/crypto/ahash.c
++++ b/crypto/ahash.c
+@@ -92,13 +92,14 @@ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
+
+ if (nbytes && walk->offset & alignmask && !err) {
+ walk->offset = ALIGN(walk->offset, alignmask + 1);
+- walk->data += walk->offset;
+-
+ nbytes = min(nbytes,
+ ((unsigned int)(PAGE_SIZE)) - walk->offset);
+ walk->entrylen -= nbytes;
+
+- return nbytes;
++ if (nbytes) {
++ walk->data += walk->offset;
++ return nbytes;
++ }
+ }
+
+ if (walk->flags & CRYPTO_ALG_ASYNC)
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index cbbd7c50ad19..1d813a6d3fec 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -313,7 +313,7 @@ static void exit_crypt(struct skcipher_request *req)
+ rctx->left = 0;
+
+ if (rctx->ext)
+- kfree(rctx->ext);
++ kzfree(rctx->ext);
+ }
+
+ static int do_encrypt(struct skcipher_request *req, int err)
+diff --git a/crypto/testmgr.h b/crypto/testmgr.h
+index a714b6293959..a3cd18476e40 100644
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -548,7 +548,7 @@ static const struct akcipher_testvec rsa_tv_template[] = {
+ static const struct akcipher_testvec pkcs1pad_rsa_tv_template[] = {
+ {
+ .key =
+- "\x30\x82\x03\x1f\x02\x01\x10\x02\x82\x01\x01\x00\xd7\x1e\x77\x82"
++ "\x30\x82\x03\x1f\x02\x01\x00\x02\x82\x01\x01\x00\xd7\x1e\x77\x82"
+ "\x8c\x92\x31\xe7\x69\x02\xa2\xd5\x5c\x78\xde\xa2\x0c\x8f\xfe\x28"
+ "\x59\x31\xdf\x40\x9c\x60\x61\x06\xb9\x2f\x62\x40\x80\x76\xcb\x67"
+ "\x4a\xb5\x59\x56\x69\x17\x07\xfa\xf9\x4c\xbd\x6c\x37\x7a\x46\x7d"
+@@ -597,8 +597,8 @@ static const struct akcipher_testvec pkcs1pad_rsa_tv_template[] = {
+ "\xfe\xf8\x27\x1b\xd6\x55\x60\x5e\x48\xb7\x6d\x9a\xa8\x37\xf9\x7a"
+ "\xde\x1b\xcd\x5d\x1a\x30\xd4\xe9\x9e\x5b\x3c\x15\xf8\x9c\x1f\xda"
+ "\xd1\x86\x48\x55\xce\x83\xee\x8e\x51\xc7\xde\x32\x12\x47\x7d\x46"
+- "\xb8\x35\xdf\x41\x02\x01\x30\x02\x01\x30\x02\x01\x30\x02\x01\x30"
+- "\x02\x01\x30",
++ "\xb8\x35\xdf\x41\x02\x01\x00\x02\x01\x00\x02\x01\x00\x02\x01\x00"
++ "\x02\x01\x00",
+ .key_len = 804,
+ /*
+ * m is SHA256 hash of following message:
+diff --git a/drivers/base/arch_topology.c b/drivers/base/arch_topology.c
+index 4de87b0b53c8..8bfd498d3090 100644
+--- a/drivers/base/arch_topology.c
++++ b/drivers/base/arch_topology.c
+@@ -175,11 +175,11 @@ bool __init topology_parse_cpu_capacity(struct device_node *cpu_node, int cpu)
+ }
+
+ #ifdef CONFIG_CPU_FREQ
+-static cpumask_var_t cpus_to_visit __initdata;
+-static void __init parsing_done_workfn(struct work_struct *work);
+-static __initdata DECLARE_WORK(parsing_done_work, parsing_done_workfn);
++static cpumask_var_t cpus_to_visit;
++static void parsing_done_workfn(struct work_struct *work);
++static DECLARE_WORK(parsing_done_work, parsing_done_workfn);
+
+-static int __init
++static int
+ init_cpu_capacity_callback(struct notifier_block *nb,
+ unsigned long val,
+ void *data)
+@@ -215,7 +215,7 @@ init_cpu_capacity_callback(struct notifier_block *nb,
+ return 0;
+ }
+
+-static struct notifier_block init_cpu_capacity_notifier __initdata = {
++static struct notifier_block init_cpu_capacity_notifier = {
+ .notifier_call = init_cpu_capacity_callback,
+ };
+
+@@ -248,7 +248,7 @@ static int __init register_cpufreq_notifier(void)
+ }
+ core_initcall(register_cpufreq_notifier);
+
+-static void __init parsing_done_workfn(struct work_struct *work)
++static void parsing_done_workfn(struct work_struct *work)
+ {
+ cpufreq_unregister_notifier(&init_cpu_capacity_notifier,
+ CPUFREQ_POLICY_NOTIFIER);
+diff --git a/drivers/char/mem.c b/drivers/char/mem.c
+index 052011bcf100..ffeb60d3434c 100644
+--- a/drivers/char/mem.c
++++ b/drivers/char/mem.c
+@@ -137,7 +137,7 @@ static ssize_t read_mem(struct file *file, char __user *buf,
+
+ while (count > 0) {
+ unsigned long remaining;
+- int allowed;
++ int allowed, probe;
+
+ sz = size_inside_page(p, count);
+
+@@ -160,9 +160,9 @@ static ssize_t read_mem(struct file *file, char __user *buf,
+ if (!ptr)
+ goto failed;
+
+- err = probe_kernel_read(bounce, ptr, sz);
++ probe = probe_kernel_read(bounce, ptr, sz);
+ unxlate_dev_mem_ptr(p, ptr);
+- if (err)
++ if (probe)
+ goto failed;
+
+ remaining = copy_to_user(buf, bounce, sz);
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index d6a3038a128d..41d148af7748 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -637,8 +637,6 @@ static int cpufreq_parse_governor(char *str_governor, unsigned int *policy,
+ *governor = t;
+ err = 0;
+ }
+- if (t && !try_module_get(t->owner))
+- t = NULL;
+
+ mutex_unlock(&cpufreq_governor_mutex);
+ }
+@@ -767,10 +765,6 @@ static ssize_t store_scaling_governor(struct cpufreq_policy *policy,
+ return -EINVAL;
+
+ ret = cpufreq_set_policy(policy, &new_policy);
+-
+- if (new_policy.governor)
+- module_put(new_policy.governor->owner);
+-
+ return ret ? ret : count;
+ }
+
+diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
+index e1d4ae1153c4..39f70411f28f 100644
+--- a/drivers/crypto/caam/ctrl.c
++++ b/drivers/crypto/caam/ctrl.c
+@@ -813,9 +813,6 @@ static int caam_probe(struct platform_device *pdev)
+ return 0;
+
+ caam_remove:
+-#ifdef CONFIG_DEBUG_FS
+- debugfs_remove_recursive(ctrlpriv->dfs_root);
+-#endif
+ caam_remove(pdev);
+ return ret;
+
+diff --git a/drivers/crypto/ccp/ccp-crypto-rsa.c b/drivers/crypto/ccp/ccp-crypto-rsa.c
+index e6db8672d89c..05850dfd7940 100644
+--- a/drivers/crypto/ccp/ccp-crypto-rsa.c
++++ b/drivers/crypto/ccp/ccp-crypto-rsa.c
+@@ -60,10 +60,9 @@ static int ccp_rsa_complete(struct crypto_async_request *async_req, int ret)
+
+ static unsigned int ccp_rsa_maxsize(struct crypto_akcipher *tfm)
+ {
+- if (ccp_version() > CCP_VERSION(3, 0))
+- return CCP5_RSA_MAXMOD;
+- else
+- return CCP_RSA_MAXMOD;
++ struct ccp_ctx *ctx = akcipher_tfm_ctx(tfm);
++
++ return ctx->u.rsa.n_len;
+ }
+
+ static int ccp_rsa_crypt(struct akcipher_request *req, bool encrypt)
+diff --git a/drivers/crypto/inside-secure/safexcel.c b/drivers/crypto/inside-secure/safexcel.c
+index 4bcef78a08aa..d4c81cb73bee 100644
+--- a/drivers/crypto/inside-secure/safexcel.c
++++ b/drivers/crypto/inside-secure/safexcel.c
+@@ -789,7 +789,7 @@ static int safexcel_probe(struct platform_device *pdev)
+ return PTR_ERR(priv->base);
+ }
+
+- priv->clk = of_clk_get(dev->of_node, 0);
++ priv->clk = devm_clk_get(&pdev->dev, NULL);
+ if (!IS_ERR(priv->clk)) {
+ ret = clk_prepare_enable(priv->clk);
+ if (ret) {
+diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
+index 6882fa2f8bad..c805d0122c0b 100644
+--- a/drivers/crypto/talitos.c
++++ b/drivers/crypto/talitos.c
+@@ -832,8 +832,6 @@ struct talitos_ctx {
+ unsigned int keylen;
+ unsigned int enckeylen;
+ unsigned int authkeylen;
+- dma_addr_t dma_buf;
+- dma_addr_t dma_hw_context;
+ };
+
+ #define HASH_MAX_BLOCK_SIZE SHA512_BLOCK_SIZE
+@@ -1130,10 +1128,10 @@ static int sg_to_link_tbl_offset(struct scatterlist *sg, int sg_count,
+ return count;
+ }
+
+-static int talitos_sg_map(struct device *dev, struct scatterlist *src,
+- unsigned int len, struct talitos_edesc *edesc,
+- struct talitos_ptr *ptr,
+- int sg_count, unsigned int offset, int tbl_off)
++static int talitos_sg_map_ext(struct device *dev, struct scatterlist *src,
++ unsigned int len, struct talitos_edesc *edesc,
++ struct talitos_ptr *ptr, int sg_count,
++ unsigned int offset, int tbl_off, int elen)
+ {
+ struct talitos_private *priv = dev_get_drvdata(dev);
+ bool is_sec1 = has_ftr_sec1(priv);
+@@ -1142,6 +1140,7 @@ static int talitos_sg_map(struct device *dev, struct scatterlist *src,
+ to_talitos_ptr(ptr, 0, 0, is_sec1);
+ return 1;
+ }
++ to_talitos_ptr_ext_set(ptr, elen, is_sec1);
+ if (sg_count == 1) {
+ to_talitos_ptr(ptr, sg_dma_address(src) + offset, len, is_sec1);
+ return sg_count;
+@@ -1150,7 +1149,7 @@ static int talitos_sg_map(struct device *dev, struct scatterlist *src,
+ to_talitos_ptr(ptr, edesc->dma_link_tbl + offset, len, is_sec1);
+ return sg_count;
+ }
+- sg_count = sg_to_link_tbl_offset(src, sg_count, offset, len,
++ sg_count = sg_to_link_tbl_offset(src, sg_count, offset, len + elen,
+ &edesc->link_tbl[tbl_off]);
+ if (sg_count == 1) {
+ /* Only one segment now, so no link tbl needed*/
+@@ -1164,6 +1163,15 @@ static int talitos_sg_map(struct device *dev, struct scatterlist *src,
+ return sg_count;
+ }
+
++static int talitos_sg_map(struct device *dev, struct scatterlist *src,
++ unsigned int len, struct talitos_edesc *edesc,
++ struct talitos_ptr *ptr, int sg_count,
++ unsigned int offset, int tbl_off)
++{
++ return talitos_sg_map_ext(dev, src, len, edesc, ptr, sg_count, offset,
++ tbl_off, 0);
++}
++
+ /*
+ * fill in and submit ipsec_esp descriptor
+ */
+@@ -1181,7 +1189,7 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
+ unsigned int ivsize = crypto_aead_ivsize(aead);
+ int tbl_off = 0;
+ int sg_count, ret;
+- int sg_link_tbl_len;
++ int elen = 0;
+ bool sync_needed = false;
+ struct talitos_private *priv = dev_get_drvdata(dev);
+ bool is_sec1 = has_ftr_sec1(priv);
+@@ -1223,17 +1231,11 @@ static int ipsec_esp(struct talitos_edesc *edesc, struct aead_request *areq,
+ * extent is bytes of HMAC postpended to ciphertext,
+ * typically 12 for ipsec
+ */
+- sg_link_tbl_len = cryptlen;
+-
+- if (is_ipsec_esp) {
+- to_talitos_ptr_ext_set(&desc->ptr[4], authsize, is_sec1);
+-
+- if (desc->hdr & DESC_HDR_MODE1_MDEU_CICV)
+- sg_link_tbl_len += authsize;
+- }
++ if (is_ipsec_esp && (desc->hdr & DESC_HDR_MODE1_MDEU_CICV))
++ elen = authsize;
+
+- ret = talitos_sg_map(dev, areq->src, sg_link_tbl_len, edesc,
+- &desc->ptr[4], sg_count, areq->assoclen, tbl_off);
++ ret = talitos_sg_map_ext(dev, areq->src, cryptlen, edesc, &desc->ptr[4],
++ sg_count, areq->assoclen, tbl_off, elen);
+
+ if (ret > 1) {
+ tbl_off += ret;
+@@ -1690,9 +1692,30 @@ static void common_nonsnoop_hash_unmap(struct device *dev,
+ struct ahash_request *areq)
+ {
+ struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
++ struct talitos_private *priv = dev_get_drvdata(dev);
++ bool is_sec1 = has_ftr_sec1(priv);
++ struct talitos_desc *desc = &edesc->desc;
++ struct talitos_desc *desc2 = desc + 1;
++
++ unmap_single_talitos_ptr(dev, &edesc->desc.ptr[5], DMA_FROM_DEVICE);
++ if (desc->next_desc &&
++ desc->ptr[5].ptr != desc2->ptr[5].ptr)
++ unmap_single_talitos_ptr(dev, &desc2->ptr[5], DMA_FROM_DEVICE);
+
+ talitos_sg_unmap(dev, edesc, req_ctx->psrc, NULL, 0, 0);
+
++ /* When using hashctx-in, must unmap it. */
++ if (from_talitos_ptr_len(&edesc->desc.ptr[1], is_sec1))
++ unmap_single_talitos_ptr(dev, &edesc->desc.ptr[1],
++ DMA_TO_DEVICE);
++ else if (desc->next_desc)
++ unmap_single_talitos_ptr(dev, &desc2->ptr[1],
++ DMA_TO_DEVICE);
++
++ if (is_sec1 && req_ctx->nbuf)
++ unmap_single_talitos_ptr(dev, &desc->ptr[3],
++ DMA_TO_DEVICE);
++
+ if (edesc->dma_len)
+ dma_unmap_single(dev, edesc->dma_link_tbl, edesc->dma_len,
+ DMA_BIDIRECTIONAL);
+@@ -1766,8 +1789,10 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc,
+
+ /* hash context in */
+ if (!req_ctx->first || req_ctx->swinit) {
+- to_talitos_ptr(&desc->ptr[1], ctx->dma_hw_context,
+- req_ctx->hw_context_size, is_sec1);
++ map_single_talitos_ptr(dev, &desc->ptr[1],
++ req_ctx->hw_context_size,
++ (char *)req_ctx->hw_context,
++ DMA_TO_DEVICE);
+ req_ctx->swinit = 0;
+ }
+ /* Indicate next op is not the first. */
+@@ -1793,10 +1818,9 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc,
+ * data in
+ */
+ if (is_sec1 && req_ctx->nbuf) {
+- dma_addr_t dma_buf = ctx->dma_buf + req_ctx->buf_idx *
+- HASH_MAX_BLOCK_SIZE;
+-
+- to_talitos_ptr(&desc->ptr[3], dma_buf, req_ctx->nbuf, is_sec1);
++ map_single_talitos_ptr(dev, &desc->ptr[3], req_ctx->nbuf,
++ req_ctx->buf[req_ctx->buf_idx],
++ DMA_TO_DEVICE);
+ } else {
+ sg_count = talitos_sg_map(dev, req_ctx->psrc, length, edesc,
+ &desc->ptr[3], sg_count, offset, 0);
+@@ -1812,8 +1836,9 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc,
+ crypto_ahash_digestsize(tfm),
+ areq->result, DMA_FROM_DEVICE);
+ else
+- to_talitos_ptr(&desc->ptr[5], ctx->dma_hw_context,
+- req_ctx->hw_context_size, is_sec1);
++ map_single_talitos_ptr(dev, &desc->ptr[5],
++ req_ctx->hw_context_size,
++ req_ctx->hw_context, DMA_FROM_DEVICE);
+
+ /* last DWORD empty */
+
+@@ -1832,9 +1857,14 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc,
+ desc->hdr |= DESC_HDR_MODE0_MDEU_CONT;
+ desc->hdr &= ~DESC_HDR_DONE_NOTIFY;
+
+- to_talitos_ptr(&desc2->ptr[1], ctx->dma_hw_context,
+- req_ctx->hw_context_size, is_sec1);
+-
++ if (desc->ptr[1].ptr)
++ copy_talitos_ptr(&desc2->ptr[1], &desc->ptr[1],
++ is_sec1);
++ else
++ map_single_talitos_ptr(dev, &desc2->ptr[1],
++ req_ctx->hw_context_size,
++ req_ctx->hw_context,
++ DMA_TO_DEVICE);
+ copy_talitos_ptr(&desc2->ptr[2], &desc->ptr[2], is_sec1);
+ sg_count = talitos_sg_map(dev, req_ctx->psrc, length, edesc,
+ &desc2->ptr[3], sg_count, offset, 0);
+@@ -1842,8 +1872,10 @@ static int common_nonsnoop_hash(struct talitos_edesc *edesc,
+ sync_needed = true;
+ copy_talitos_ptr(&desc2->ptr[5], &desc->ptr[5], is_sec1);
+ if (req_ctx->last)
+- to_talitos_ptr(&desc->ptr[5], ctx->dma_hw_context,
+- req_ctx->hw_context_size, is_sec1);
++ map_single_talitos_ptr(dev, &desc->ptr[5],
++ req_ctx->hw_context_size,
++ req_ctx->hw_context,
++ DMA_FROM_DEVICE);
+
+ next_desc = dma_map_single(dev, &desc2->hdr1, TALITOS_DESC_SIZE,
+ DMA_BIDIRECTIONAL);
+@@ -1881,12 +1913,8 @@ static struct talitos_edesc *ahash_edesc_alloc(struct ahash_request *areq,
+ static int ahash_init(struct ahash_request *areq)
+ {
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+- struct talitos_ctx *ctx = crypto_ahash_ctx(tfm);
+- struct device *dev = ctx->dev;
+ struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
+ unsigned int size;
+- struct talitos_private *priv = dev_get_drvdata(dev);
+- bool is_sec1 = has_ftr_sec1(priv);
+
+ /* Initialize the context */
+ req_ctx->buf_idx = 0;
+@@ -1898,18 +1926,6 @@ static int ahash_init(struct ahash_request *areq)
+ : TALITOS_MDEU_CONTEXT_SIZE_SHA384_SHA512;
+ req_ctx->hw_context_size = size;
+
+- if (ctx->dma_hw_context)
+- dma_unmap_single(dev, ctx->dma_hw_context, size,
+- DMA_BIDIRECTIONAL);
+- ctx->dma_hw_context = dma_map_single(dev, req_ctx->hw_context, size,
+- DMA_BIDIRECTIONAL);
+- if (ctx->dma_buf)
+- dma_unmap_single(dev, ctx->dma_buf, sizeof(req_ctx->buf),
+- DMA_TO_DEVICE);
+- if (is_sec1)
+- ctx->dma_buf = dma_map_single(dev, req_ctx->buf,
+- sizeof(req_ctx->buf),
+- DMA_TO_DEVICE);
+ return 0;
+ }
+
+@@ -1920,9 +1936,6 @@ static int ahash_init(struct ahash_request *areq)
+ static int ahash_init_sha224_swinit(struct ahash_request *areq)
+ {
+ struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
+- struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+- struct talitos_ctx *ctx = crypto_ahash_ctx(tfm);
+- struct device *dev = ctx->dev;
+
+ ahash_init(areq);
+ req_ctx->swinit = 1;/* prevent h/w initting context with sha256 values*/
+@@ -1940,9 +1953,6 @@ static int ahash_init_sha224_swinit(struct ahash_request *areq)
+ req_ctx->hw_context[8] = 0;
+ req_ctx->hw_context[9] = 0;
+
+- dma_sync_single_for_device(dev, ctx->dma_hw_context,
+- req_ctx->hw_context_size, DMA_TO_DEVICE);
+-
+ return 0;
+ }
+
+@@ -2046,13 +2056,6 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes)
+ /* request SEC to INIT hash. */
+ if (req_ctx->first && !req_ctx->swinit)
+ edesc->desc.hdr |= DESC_HDR_MODE0_MDEU_INIT;
+- if (is_sec1) {
+- dma_addr_t dma_buf = ctx->dma_buf + req_ctx->buf_idx *
+- HASH_MAX_BLOCK_SIZE;
+-
+- dma_sync_single_for_device(dev, dma_buf,
+- req_ctx->nbuf, DMA_TO_DEVICE);
+- }
+
+ /* When the tfm context has a keylen, it's an HMAC.
+ * A first or last (ie. not middle) descriptor must request HMAC.
+@@ -2106,12 +2109,7 @@ static int ahash_export(struct ahash_request *areq, void *out)
+ {
+ struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
+ struct talitos_export_state *export = out;
+- struct crypto_ahash *ahash = crypto_ahash_reqtfm(areq);
+- struct talitos_ctx *ctx = crypto_ahash_ctx(ahash);
+- struct device *dev = ctx->dev;
+
+- dma_sync_single_for_cpu(dev, ctx->dma_hw_context,
+- req_ctx->hw_context_size, DMA_FROM_DEVICE);
+ memcpy(export->hw_context, req_ctx->hw_context,
+ req_ctx->hw_context_size);
+ memcpy(export->buf, req_ctx->buf[req_ctx->buf_idx], req_ctx->nbuf);
+@@ -2130,31 +2128,14 @@ static int ahash_import(struct ahash_request *areq, const void *in)
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+ const struct talitos_export_state *export = in;
+ unsigned int size;
+- struct talitos_ctx *ctx = crypto_ahash_ctx(tfm);
+- struct device *dev = ctx->dev;
+- struct talitos_private *priv = dev_get_drvdata(dev);
+- bool is_sec1 = has_ftr_sec1(priv);
+
+ memset(req_ctx, 0, sizeof(*req_ctx));
+ size = (crypto_ahash_digestsize(tfm) <= SHA256_DIGEST_SIZE)
+ ? TALITOS_MDEU_CONTEXT_SIZE_MD5_SHA1_SHA256
+ : TALITOS_MDEU_CONTEXT_SIZE_SHA384_SHA512;
+ req_ctx->hw_context_size = size;
+- if (ctx->dma_hw_context)
+- dma_unmap_single(dev, ctx->dma_hw_context, size,
+- DMA_BIDIRECTIONAL);
+-
+ memcpy(req_ctx->hw_context, export->hw_context, size);
+- ctx->dma_hw_context = dma_map_single(dev, req_ctx->hw_context, size,
+- DMA_BIDIRECTIONAL);
+- if (ctx->dma_buf)
+- dma_unmap_single(dev, ctx->dma_buf, sizeof(req_ctx->buf),
+- DMA_TO_DEVICE);
+ memcpy(req_ctx->buf[0], export->buf, export->nbuf);
+- if (is_sec1)
+- ctx->dma_buf = dma_map_single(dev, req_ctx->buf,
+- sizeof(req_ctx->buf),
+- DMA_TO_DEVICE);
+ req_ctx->swinit = export->swinit;
+ req_ctx->first = export->first;
+ req_ctx->last = export->last;
+@@ -3064,27 +3045,6 @@ static void talitos_cra_exit(struct crypto_tfm *tfm)
+ dma_unmap_single(dev, ctx->dma_key, ctx->keylen, DMA_TO_DEVICE);
+ }
+
+-static void talitos_cra_exit_ahash(struct crypto_tfm *tfm)
+-{
+- struct talitos_ctx *ctx = crypto_tfm_ctx(tfm);
+- struct device *dev = ctx->dev;
+- unsigned int size;
+-
+- talitos_cra_exit(tfm);
+-
+- size = (crypto_ahash_digestsize(__crypto_ahash_cast(tfm)) <=
+- SHA256_DIGEST_SIZE)
+- ? TALITOS_MDEU_CONTEXT_SIZE_MD5_SHA1_SHA256
+- : TALITOS_MDEU_CONTEXT_SIZE_SHA384_SHA512;
+-
+- if (ctx->dma_hw_context)
+- dma_unmap_single(dev, ctx->dma_hw_context, size,
+- DMA_BIDIRECTIONAL);
+- if (ctx->dma_buf)
+- dma_unmap_single(dev, ctx->dma_buf, HASH_MAX_BLOCK_SIZE * 2,
+- DMA_TO_DEVICE);
+-}
+-
+ /*
+ * given the alg's descriptor header template, determine whether descriptor
+ * type and primary/secondary execution units required match the hw
+@@ -3183,7 +3143,7 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev,
+ case CRYPTO_ALG_TYPE_AHASH:
+ alg = &t_alg->algt.alg.hash.halg.base;
+ alg->cra_init = talitos_cra_init_ahash;
+- alg->cra_exit = talitos_cra_exit_ahash;
++ alg->cra_exit = talitos_cra_exit;
+ alg->cra_type = &crypto_ahash_type;
+ t_alg->algt.alg.hash.init = ahash_init;
+ t_alg->algt.alg.hash.update = ahash_update;
+diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
+index 58a3755544b2..38e53d6b8127 100644
+--- a/drivers/gpu/drm/i915/intel_ddi.c
++++ b/drivers/gpu/drm/i915/intel_ddi.c
+@@ -2208,8 +2208,7 @@ static void intel_ddi_pre_enable_dp(struct intel_encoder *encoder,
+ intel_prepare_dp_ddi_buffers(encoder);
+
+ intel_ddi_init_dp_buf_reg(encoder);
+- if (!is_mst)
+- intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
++ intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_ON);
+ intel_dp_start_link_train(intel_dp);
+ if (port != PORT_A || INTEL_GEN(dev_priv) >= 9)
+ intel_dp_stop_link_train(intel_dp);
+@@ -2294,19 +2293,12 @@ static void intel_ddi_post_disable_dp(struct intel_encoder *encoder,
+ struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+ struct intel_digital_port *dig_port = enc_to_dig_port(&encoder->base);
+ struct intel_dp *intel_dp = &dig_port->dp;
+- /*
+- * old_crtc_state and old_conn_state are NULL when called from
+- * DP_MST. The main connector associated with this port is never
+- * bound to a crtc for MST.
+- */
+- bool is_mst = !old_crtc_state;
+
+ /*
+ * Power down sink before disabling the port, otherwise we end
+ * up getting interrupts from the sink on detecting link loss.
+ */
+- if (!is_mst)
+- intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
++ intel_dp_sink_dpms(intel_dp, DRM_MODE_DPMS_OFF);
+
+ intel_disable_ddi_buf(encoder);
+
+diff --git a/drivers/i2c/busses/i2c-stm32f7.c b/drivers/i2c/busses/i2c-stm32f7.c
+index b445b3bb0bb1..f273e28c39db 100644
+--- a/drivers/i2c/busses/i2c-stm32f7.c
++++ b/drivers/i2c/busses/i2c-stm32f7.c
+@@ -888,6 +888,11 @@ static int stm32f7_i2c_probe(struct platform_device *pdev)
+ }
+
+ setup = of_device_get_match_data(&pdev->dev);
++ if (!setup) {
++ dev_err(&pdev->dev, "Can't get device data\n");
++ ret = -ENODEV;
++ goto clk_free;
++ }
+ i2c_dev->setup = *setup;
+
+ ret = device_property_read_u32(i2c_dev->dev, "i2c-scl-rising-time-ns",
+diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
+index f4e8185bccd3..0885b8cfbe0b 100644
+--- a/drivers/infiniband/core/addr.c
++++ b/drivers/infiniband/core/addr.c
+@@ -207,6 +207,22 @@ int rdma_addr_size(struct sockaddr *addr)
+ }
+ EXPORT_SYMBOL(rdma_addr_size);
+
++int rdma_addr_size_in6(struct sockaddr_in6 *addr)
++{
++ int ret = rdma_addr_size((struct sockaddr *) addr);
++
++ return ret <= sizeof(*addr) ? ret : 0;
++}
++EXPORT_SYMBOL(rdma_addr_size_in6);
++
++int rdma_addr_size_kss(struct __kernel_sockaddr_storage *addr)
++{
++ int ret = rdma_addr_size((struct sockaddr *) addr);
++
++ return ret <= sizeof(*addr) ? ret : 0;
++}
++EXPORT_SYMBOL(rdma_addr_size_kss);
++
+ static struct rdma_addr_client self;
+
+ void rdma_addr_register_client(struct rdma_addr_client *client)
+@@ -598,6 +614,15 @@ static void process_one_req(struct work_struct *_work)
+ list_del(&req->list);
+ mutex_unlock(&lock);
+
++ /*
++ * Although the work will normally have been canceled by the
++ * workqueue, it can still be requeued as long as it is on the
++ * req_list, so it could have been requeued before we grabbed &lock.
++ * We need to cancel it after it is removed from req_list to really be
++ * sure it is safe to free.
++ */
++ cancel_delayed_work(&req->work);
++
+ req->callback(req->status, (struct sockaddr *)&req->src_addr,
+ req->addr, req->context);
+ put_client(req->client);
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index 77ca9da570a2..722235bed075 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -132,7 +132,7 @@ static inline struct ucma_context *_ucma_find_context(int id,
+ ctx = idr_find(&ctx_idr, id);
+ if (!ctx)
+ ctx = ERR_PTR(-ENOENT);
+- else if (ctx->file != file)
++ else if (ctx->file != file || !ctx->cm_id)
+ ctx = ERR_PTR(-EINVAL);
+ return ctx;
+ }
+@@ -456,6 +456,7 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf,
+ struct rdma_ucm_create_id cmd;
+ struct rdma_ucm_create_id_resp resp;
+ struct ucma_context *ctx;
++ struct rdma_cm_id *cm_id;
+ enum ib_qp_type qp_type;
+ int ret;
+
+@@ -476,10 +477,10 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf,
+ return -ENOMEM;
+
+ ctx->uid = cmd.uid;
+- ctx->cm_id = rdma_create_id(current->nsproxy->net_ns,
+- ucma_event_handler, ctx, cmd.ps, qp_type);
+- if (IS_ERR(ctx->cm_id)) {
+- ret = PTR_ERR(ctx->cm_id);
++ cm_id = rdma_create_id(current->nsproxy->net_ns,
++ ucma_event_handler, ctx, cmd.ps, qp_type);
++ if (IS_ERR(cm_id)) {
++ ret = PTR_ERR(cm_id);
+ goto err1;
+ }
+
+@@ -489,14 +490,19 @@ static ssize_t ucma_create_id(struct ucma_file *file, const char __user *inbuf,
+ ret = -EFAULT;
+ goto err2;
+ }
++
++ ctx->cm_id = cm_id;
+ return 0;
+
+ err2:
+- rdma_destroy_id(ctx->cm_id);
++ rdma_destroy_id(cm_id);
+ err1:
+ mutex_lock(&mut);
+ idr_remove(&ctx_idr, ctx->id);
+ mutex_unlock(&mut);
++ mutex_lock(&file->mut);
++ list_del(&ctx->list);
++ mutex_unlock(&file->mut);
+ kfree(ctx);
+ return ret;
+ }
+@@ -626,6 +632,9 @@ static ssize_t ucma_bind_ip(struct ucma_file *file, const char __user *inbuf,
+ if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
+ return -EFAULT;
+
++ if (!rdma_addr_size_in6(&cmd.addr))
++ return -EINVAL;
++
+ ctx = ucma_get_ctx(file, cmd.id);
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
+@@ -639,22 +648,21 @@ static ssize_t ucma_bind(struct ucma_file *file, const char __user *inbuf,
+ int in_len, int out_len)
+ {
+ struct rdma_ucm_bind cmd;
+- struct sockaddr *addr;
+ struct ucma_context *ctx;
+ int ret;
+
+ if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
+ return -EFAULT;
+
+- addr = (struct sockaddr *) &cmd.addr;
+- if (cmd.reserved || !cmd.addr_size || (cmd.addr_size != rdma_addr_size(addr)))
++ if (cmd.reserved || !cmd.addr_size ||
++ cmd.addr_size != rdma_addr_size_kss(&cmd.addr))
+ return -EINVAL;
+
+ ctx = ucma_get_ctx(file, cmd.id);
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
+
+- ret = rdma_bind_addr(ctx->cm_id, addr);
++ ret = rdma_bind_addr(ctx->cm_id, (struct sockaddr *) &cmd.addr);
+ ucma_put_ctx(ctx);
+ return ret;
+ }
+@@ -670,13 +678,16 @@ static ssize_t ucma_resolve_ip(struct ucma_file *file,
+ if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
+ return -EFAULT;
+
++ if (!rdma_addr_size_in6(&cmd.src_addr) ||
++ !rdma_addr_size_in6(&cmd.dst_addr))
++ return -EINVAL;
++
+ ctx = ucma_get_ctx(file, cmd.id);
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
+
+ ret = rdma_resolve_addr(ctx->cm_id, (struct sockaddr *) &cmd.src_addr,
+- (struct sockaddr *) &cmd.dst_addr,
+- cmd.timeout_ms);
++ (struct sockaddr *) &cmd.dst_addr, cmd.timeout_ms);
+ ucma_put_ctx(ctx);
+ return ret;
+ }
+@@ -686,24 +697,23 @@ static ssize_t ucma_resolve_addr(struct ucma_file *file,
+ int in_len, int out_len)
+ {
+ struct rdma_ucm_resolve_addr cmd;
+- struct sockaddr *src, *dst;
+ struct ucma_context *ctx;
+ int ret;
+
+ if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
+ return -EFAULT;
+
+- src = (struct sockaddr *) &cmd.src_addr;
+- dst = (struct sockaddr *) &cmd.dst_addr;
+- if (cmd.reserved || (cmd.src_size && (cmd.src_size != rdma_addr_size(src))) ||
+- !cmd.dst_size || (cmd.dst_size != rdma_addr_size(dst)))
++ if (cmd.reserved ||
++ (cmd.src_size && (cmd.src_size != rdma_addr_size_kss(&cmd.src_addr))) ||
++ !cmd.dst_size || (cmd.dst_size != rdma_addr_size_kss(&cmd.dst_addr)))
+ return -EINVAL;
+
+ ctx = ucma_get_ctx(file, cmd.id);
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
+
+- ret = rdma_resolve_addr(ctx->cm_id, src, dst, cmd.timeout_ms);
++ ret = rdma_resolve_addr(ctx->cm_id, (struct sockaddr *) &cmd.src_addr,
++ (struct sockaddr *) &cmd.dst_addr, cmd.timeout_ms);
+ ucma_put_ctx(ctx);
+ return ret;
+ }
+@@ -1155,6 +1165,11 @@ static ssize_t ucma_init_qp_attr(struct ucma_file *file,
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
+
++ if (!ctx->cm_id->device) {
++ ret = -EINVAL;
++ goto out;
++ }
++
+ resp.qp_attr_mask = 0;
+ memset(&qp_attr, 0, sizeof qp_attr);
+ qp_attr.qp_state = cmd.qp_state;
+@@ -1320,7 +1335,7 @@ static ssize_t ucma_notify(struct ucma_file *file, const char __user *inbuf,
+ {
+ struct rdma_ucm_notify cmd;
+ struct ucma_context *ctx;
+- int ret;
++ int ret = -EINVAL;
+
+ if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
+ return -EFAULT;
+@@ -1329,7 +1344,9 @@ static ssize_t ucma_notify(struct ucma_file *file, const char __user *inbuf,
+ if (IS_ERR(ctx))
+ return PTR_ERR(ctx);
+
+- ret = rdma_notify(ctx->cm_id, (enum ib_event_type) cmd.event);
++ if (ctx->cm_id->device)
++ ret = rdma_notify(ctx->cm_id, (enum ib_event_type)cmd.event);
++
+ ucma_put_ctx(ctx);
+ return ret;
+ }
+@@ -1415,7 +1432,7 @@ static ssize_t ucma_join_ip_multicast(struct ucma_file *file,
+ join_cmd.response = cmd.response;
+ join_cmd.uid = cmd.uid;
+ join_cmd.id = cmd.id;
+- join_cmd.addr_size = rdma_addr_size((struct sockaddr *) &cmd.addr);
++ join_cmd.addr_size = rdma_addr_size_in6(&cmd.addr);
+ if (!join_cmd.addr_size)
+ return -EINVAL;
+
+@@ -1434,7 +1451,7 @@ static ssize_t ucma_join_multicast(struct ucma_file *file,
+ if (copy_from_user(&cmd, inbuf, sizeof(cmd)))
+ return -EFAULT;
+
+- if (!rdma_addr_size((struct sockaddr *)&cmd.addr))
++ if (!rdma_addr_size_kss(&cmd.addr))
+ return -EINVAL;
+
+ return ucma_process_join(file, &cmd, out_len);
+diff --git a/drivers/input/mouse/alps.c b/drivers/input/mouse/alps.c
+index dbe57da8c1a1..4a3bc168a4a7 100644
+--- a/drivers/input/mouse/alps.c
++++ b/drivers/input/mouse/alps.c
+@@ -2544,13 +2544,31 @@ static int alps_update_btn_info_ss4_v2(unsigned char otp[][4],
+ }
+
+ static int alps_update_dual_info_ss4_v2(unsigned char otp[][4],
+- struct alps_data *priv)
++ struct alps_data *priv,
++ struct psmouse *psmouse)
+ {
+ bool is_dual = false;
++ int reg_val = 0;
++ struct ps2dev *ps2dev = &psmouse->ps2dev;
+
+- if (IS_SS4PLUS_DEV(priv->dev_id))
++ if (IS_SS4PLUS_DEV(priv->dev_id)) {
+ is_dual = (otp[0][0] >> 4) & 0x01;
+
++ if (!is_dual) {
++ /* For support TrackStick of Thinkpad L/E series */
++ if (alps_exit_command_mode(psmouse) == 0 &&
++ alps_enter_command_mode(psmouse) == 0) {
++ reg_val = alps_command_mode_read_reg(psmouse,
++ 0xD7);
++ }
++ alps_exit_command_mode(psmouse);
++ ps2_command(ps2dev, NULL, PSMOUSE_CMD_ENABLE);
++
++ if (reg_val == 0x0C || reg_val == 0x1D)
++ is_dual = true;
++ }
++ }
++
+ if (is_dual)
+ priv->flags |= ALPS_DUALPOINT |
+ ALPS_DUALPOINT_WITH_PRESSURE;
+@@ -2573,7 +2591,7 @@ static int alps_set_defaults_ss4_v2(struct psmouse *psmouse,
+
+ alps_update_btn_info_ss4_v2(otp, priv);
+
+- alps_update_dual_info_ss4_v2(otp, priv);
++ alps_update_dual_info_ss4_v2(otp, priv, psmouse);
+
+ return 0;
+ }
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index 6cbbdc6e9687..b353d494ad40 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -530,6 +530,20 @@ static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = {
+ { }
+ };
+
++static const struct dmi_system_id i8042_dmi_forcemux_table[] __initconst = {
++ {
++ /*
++ * Sony Vaio VGN-CS series require MUX or the touch sensor
++ * buttons will disturb touchpad operation
++ */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "VGN-CS"),
++ },
++ },
++ { }
++};
++
+ /*
+ * On some Asus laptops, just running self tests cause problems.
+ */
+@@ -620,6 +634,13 @@ static const struct dmi_system_id __initconst i8042_dmi_reset_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "20046"),
+ },
+ },
++ {
++ /* Lenovo ThinkPad L460 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L460"),
++ },
++ },
+ {
+ /* Clevo P650RS, 650RP6, Sager NP8152-S, and others */
+ .matches = {
+@@ -1163,6 +1184,9 @@ static int __init i8042_platform_init(void)
+ if (dmi_check_system(i8042_dmi_nomux_table))
+ i8042_nomux = true;
+
++ if (dmi_check_system(i8042_dmi_forcemux_table))
++ i8042_nomux = false;
++
+ if (dmi_check_system(i8042_dmi_notimeout_table))
+ i8042_notimeout = true;
+
+diff --git a/drivers/media/usb/usbtv/usbtv-core.c b/drivers/media/usb/usbtv/usbtv-core.c
+index 127f8a0c098b..0c2e628e8723 100644
+--- a/drivers/media/usb/usbtv/usbtv-core.c
++++ b/drivers/media/usb/usbtv/usbtv-core.c
+@@ -112,6 +112,8 @@ static int usbtv_probe(struct usb_interface *intf,
+ return 0;
+
+ usbtv_audio_fail:
++ /* we must not free at this point */
++ usb_get_dev(usbtv->udev);
+ usbtv_video_free(usbtv);
+
+ usbtv_video_fail:
+diff --git a/drivers/misc/mei/main.c b/drivers/misc/mei/main.c
+index e825f013e54e..22efc039f302 100644
+--- a/drivers/misc/mei/main.c
++++ b/drivers/misc/mei/main.c
+@@ -507,7 +507,6 @@ static long mei_ioctl(struct file *file, unsigned int cmd, unsigned long data)
+ break;
+
+ default:
+- dev_err(dev->dev, ": unsupported ioctl %d.\n", cmd);
+ rets = -ENOIOCTLCMD;
+ }
+
+diff --git a/drivers/mtd/chips/jedec_probe.c b/drivers/mtd/chips/jedec_probe.c
+index 7c0b27d132b1..b479bd81120b 100644
+--- a/drivers/mtd/chips/jedec_probe.c
++++ b/drivers/mtd/chips/jedec_probe.c
+@@ -1889,6 +1889,8 @@ static inline u32 jedec_read_mfr(struct map_info *map, uint32_t base,
+ do {
+ uint32_t ofs = cfi_build_cmd_addr(0 + (bank << 8), map, cfi);
+ mask = (1 << (cfi->device_type * 8)) - 1;
++ if (ofs >= map->size)
++ return 0;
+ result = map_read(map, base + ofs);
+ bank++;
+ } while ((result.x[0] & mask) == CFI_MFR_CONTINUATION);
+diff --git a/drivers/mtd/nand/atmel/pmecc.c b/drivers/mtd/nand/atmel/pmecc.c
+index fcbe4fd6e684..ca0a70389ba9 100644
+--- a/drivers/mtd/nand/atmel/pmecc.c
++++ b/drivers/mtd/nand/atmel/pmecc.c
+@@ -426,7 +426,7 @@ static int get_strength(struct atmel_pmecc_user *user)
+
+ static int get_sectorsize(struct atmel_pmecc_user *user)
+ {
+- return user->cache.cfg & PMECC_LOOKUP_TABLE_SIZE_1024 ? 1024 : 512;
++ return user->cache.cfg & PMECC_CFG_SECTOR1024 ? 1024 : 512;
+ }
+
+ static void atmel_pmecc_gen_syndrome(struct atmel_pmecc_user *user, int sector)
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
+index 86944bc3b273..74bd260ca02a 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
+@@ -666,7 +666,7 @@ static void hns_gmac_get_strings(u32 stringset, u8 *data)
+
+ static int hns_gmac_get_sset_count(int stringset)
+ {
+- if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
++ if (stringset == ETH_SS_STATS)
+ return ARRAY_SIZE(g_gmac_stats_string);
+
+ return 0;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
+index b62816c1574e..93e71e27401b 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
+@@ -422,7 +422,7 @@ void hns_ppe_update_stats(struct hns_ppe_cb *ppe_cb)
+
+ int hns_ppe_get_sset_count(int stringset)
+ {
+- if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
++ if (stringset == ETH_SS_STATS)
+ return ETH_PPE_STATIC_NUM;
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
+index 6f3570cfb501..e2e28532e4dc 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
+@@ -876,7 +876,7 @@ void hns_rcb_get_stats(struct hnae_queue *queue, u64 *data)
+ */
+ int hns_rcb_get_ring_sset_count(int stringset)
+ {
+- if (stringset == ETH_SS_STATS || stringset == ETH_SS_PRIV_FLAGS)
++ if (stringset == ETH_SS_STATS)
+ return HNS_RING_STATIC_REG_NUM;
+
+ return 0;
+diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
+index 7ea7f8a4aa2a..2e14a3ae1d8b 100644
+--- a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
+@@ -993,8 +993,10 @@ int hns_get_sset_count(struct net_device *netdev, int stringset)
+ cnt--;
+
+ return cnt;
+- } else {
++ } else if (stringset == ETH_SS_STATS) {
+ return (HNS_NET_STATS_CNT + ops->get_sset_count(h, stringset));
++ } else {
++ return -EOPNOTSUPP;
+ }
+ }
+
+diff --git a/drivers/parport/parport_pc.c b/drivers/parport/parport_pc.c
+index 489492b608cf..380916bff9e0 100644
+--- a/drivers/parport/parport_pc.c
++++ b/drivers/parport/parport_pc.c
+@@ -2646,6 +2646,7 @@ enum parport_pc_pci_cards {
+ netmos_9901,
+ netmos_9865,
+ quatech_sppxp100,
++ wch_ch382l,
+ };
+
+
+@@ -2708,6 +2709,7 @@ static struct parport_pc_pci {
+ /* netmos_9901 */ { 1, { { 0, -1 }, } },
+ /* netmos_9865 */ { 1, { { 0, -1 }, } },
+ /* quatech_sppxp100 */ { 1, { { 0, 1 }, } },
++ /* wch_ch382l */ { 1, { { 2, -1 }, } },
+ };
+
+ static const struct pci_device_id parport_pc_pci_tbl[] = {
+@@ -2797,6 +2799,8 @@ static const struct pci_device_id parport_pc_pci_tbl[] = {
+ /* Quatech SPPXP-100 Parallel port PCI ExpressCard */
+ { PCI_VENDOR_ID_QUATECH, PCI_DEVICE_ID_QUATECH_SPPXP_100,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, quatech_sppxp100 },
++ /* WCH CH382L PCI-E single parallel port card */
++ { 0x1c00, 0x3050, 0x1c00, 0x3050, 0, 0, wch_ch382l },
+ { 0, } /* terminate list */
+ };
+ MODULE_DEVICE_TABLE(pci, parport_pc_pci_tbl);
+diff --git a/drivers/phy/qualcomm/phy-qcom-ufs.c b/drivers/phy/qualcomm/phy-qcom-ufs.c
+index c5ff4525edef..c5493ea51282 100644
+--- a/drivers/phy/qualcomm/phy-qcom-ufs.c
++++ b/drivers/phy/qualcomm/phy-qcom-ufs.c
+@@ -675,3 +675,8 @@ int ufs_qcom_phy_power_off(struct phy *generic_phy)
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(ufs_qcom_phy_power_off);
++
++MODULE_AUTHOR("Yaniv Gardi <ygardi@codeaurora.org>");
++MODULE_AUTHOR("Vivek Gautam <vivek.gautam@codeaurora.org>");
++MODULE_DESCRIPTION("Universal Flash Storage (UFS) QCOM PHY");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/staging/comedi/drivers/ni_mio_common.c b/drivers/staging/comedi/drivers/ni_mio_common.c
+index 398347fedc47..2cac160993bb 100644
+--- a/drivers/staging/comedi/drivers/ni_mio_common.c
++++ b/drivers/staging/comedi/drivers/ni_mio_common.c
+@@ -1284,6 +1284,8 @@ static void ack_a_interrupt(struct comedi_device *dev, unsigned short a_status)
+ ack |= NISTC_INTA_ACK_AI_START;
+ if (a_status & NISTC_AI_STATUS1_STOP)
+ ack |= NISTC_INTA_ACK_AI_STOP;
++ if (a_status & NISTC_AI_STATUS1_OVER)
++ ack |= NISTC_INTA_ACK_AI_ERR;
+ if (ack)
+ ni_stc_writew(dev, ack, NISTC_INTA_ACK_REG);
+ }
+diff --git a/drivers/tty/serial/8250/8250_of.c b/drivers/tty/serial/8250/8250_of.c
+index 160b8906d9b9..9835b1c1cbe1 100644
+--- a/drivers/tty/serial/8250/8250_of.c
++++ b/drivers/tty/serial/8250/8250_of.c
+@@ -316,6 +316,7 @@ static const struct of_device_id of_platform_serial_table[] = {
+ { .compatible = "mrvl,mmp-uart",
+ .data = (void *)PORT_XSCALE, },
+ { .compatible = "ti,da830-uart", .data = (void *)PORT_DA830, },
++ { .compatible = "nuvoton,npcm750-uart", .data = (void *)PORT_NPCM, },
+ { /* end of list */ },
+ };
+ MODULE_DEVICE_TABLE(of, of_platform_serial_table);
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 11434551ac0a..2cda4b28c78a 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -47,6 +47,10 @@
+ #define UART_EXAR_SLEEP 0x8b /* Sleep mode */
+ #define UART_EXAR_DVID 0x8d /* Device identification */
+
++/* Nuvoton NPCM timeout register */
++#define UART_NPCM_TOR 7
++#define UART_NPCM_TOIE BIT(7) /* Timeout Interrupt Enable */
++
+ /*
+ * Debugging.
+ */
+@@ -293,6 +297,15 @@ static const struct serial8250_config uart_config[] = {
+ UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT,
+ .flags = UART_CAP_FIFO,
+ },
++ [PORT_NPCM] = {
++ .name = "Nuvoton 16550",
++ .fifo_size = 16,
++ .tx_loadsz = 16,
++ .fcr = UART_FCR_ENABLE_FIFO | UART_FCR_R_TRIG_10 |
++ UART_FCR_CLEAR_RCVR | UART_FCR_CLEAR_XMIT,
++ .rxtrig_bytes = {1, 4, 8, 14},
++ .flags = UART_CAP_FIFO,
++ },
+ };
+
+ /* Uart divisor latch read */
+@@ -2161,6 +2174,15 @@ int serial8250_do_startup(struct uart_port *port)
+ UART_DA830_PWREMU_MGMT_FREE);
+ }
+
++ if (port->type == PORT_NPCM) {
++ /*
++ * Nuvoton calls the scratch register 'UART_TOR' (timeout
++ * register). Enable it, and set TIOC (timeout interrupt
++ * comparator) to be 0x20 for correct operation.
++ */
++ serial_port_out(port, UART_NPCM_TOR, UART_NPCM_TOIE | 0x20);
++ }
++
+ #ifdef CONFIG_SERIAL_8250_RSA
+ /*
+ * If this is an RSA port, see if we can kick it up to the
+@@ -2483,6 +2505,15 @@ static unsigned int xr17v35x_get_divisor(struct uart_8250_port *up,
+ return quot_16 >> 4;
+ }
+
++/* Nuvoton NPCM UARTs have a custom divisor calculation */
++static unsigned int npcm_get_divisor(struct uart_8250_port *up,
++ unsigned int baud)
++{
++ struct uart_port *port = &up->port;
++
++ return DIV_ROUND_CLOSEST(port->uartclk, 16 * baud + 2) - 2;
++}
++
+ static unsigned int serial8250_get_divisor(struct uart_8250_port *up,
+ unsigned int baud,
+ unsigned int *frac)
+@@ -2503,6 +2534,8 @@ static unsigned int serial8250_get_divisor(struct uart_8250_port *up,
+ quot = 0x8002;
+ else if (up->port.type == PORT_XR17V35X)
+ quot = xr17v35x_get_divisor(up, baud, frac);
++ else if (up->port.type == PORT_NPCM)
++ quot = npcm_get_divisor(up, baud);
+ else
+ quot = uart_get_divisor(port, baud);
+
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index b4e57c5a8bba..f97251f39c26 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -1354,6 +1354,11 @@ static void csi_m(struct vc_data *vc)
+ case 3:
+ vc->vc_italic = 1;
+ break;
++ case 21:
++ /*
++ * No console drivers support double underline, so
++ * convert it to a single underline.
++ */
+ case 4:
+ vc->vc_underline = 1;
+ break;
+@@ -1389,7 +1394,6 @@ static void csi_m(struct vc_data *vc)
+ vc->vc_disp_ctrl = 1;
+ vc->vc_toggle_meta = 1;
+ break;
+- case 21:
+ case 22:
+ vc->vc_intensity = 1;
+ break;
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 06d502b3e913..de1e759dd512 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -155,6 +155,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x12B8, 0xEC62) }, /* Link G4+ ECU */
+ { USB_DEVICE(0x13AD, 0x9999) }, /* Baltech card reader */
+ { USB_DEVICE(0x1555, 0x0004) }, /* Owen AC4 USB-RS485 Converter */
++ { USB_DEVICE(0x155A, 0x1006) }, /* ELDAT Easywave RX09 */
+ { USB_DEVICE(0x166A, 0x0201) }, /* Clipsal 5500PACA C-Bus Pascal Automation Controller */
+ { USB_DEVICE(0x166A, 0x0301) }, /* Clipsal 5800PC C-Bus Wireless PC Interface */
+ { USB_DEVICE(0x166A, 0x0303) }, /* Clipsal 5500PCU C-Bus USB interface */
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index fc68952c994a..e456617216f9 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -769,6 +769,7 @@ static const struct usb_device_id id_table_combined[] = {
+ .driver_info = (kernel_ulong_t)&ftdi_NDI_device_quirk },
+ { USB_DEVICE(TELLDUS_VID, TELLDUS_TELLSTICK_PID) },
+ { USB_DEVICE(NOVITUS_VID, NOVITUS_BONO_E_PID) },
++ { USB_DEVICE(FTDI_VID, RTSYSTEMS_USB_VX8_PID) },
+ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_S03_PID) },
+ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_59_PID) },
+ { USB_DEVICE(RTSYSTEMS_VID, RTSYSTEMS_USB_57A_PID) },
+@@ -931,6 +932,7 @@ static const struct usb_device_id id_table_combined[] = {
+ { USB_DEVICE(FTDI_VID, FTDI_SCIENCESCOPE_LS_LOGBOOK_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_SCIENCESCOPE_HS_LOGBOOK_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_CINTERION_MC55I_PID) },
++ { USB_DEVICE(FTDI_VID, FTDI_FHE_PID) },
+ { USB_DEVICE(FTDI_VID, FTDI_DOTEC_PID) },
+ { USB_DEVICE(QIHARDWARE_VID, MILKYMISTONE_JTAGSERIAL_PID),
+ .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 8b4ecd2bd297..975d02666c5a 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -923,6 +923,9 @@
+ /*
+ * RT Systems programming cables for various ham radios
+ */
++/* This device uses the VID of FTDI */
++#define RTSYSTEMS_USB_VX8_PID 0x9e50 /* USB-VX8 USB to 7 pin modular plug for Yaesu VX-8 radio */
++
+ #define RTSYSTEMS_VID 0x2100 /* Vendor ID */
+ #define RTSYSTEMS_USB_S03_PID 0x9001 /* RTS-03 USB to Serial Adapter */
+ #define RTSYSTEMS_USB_59_PID 0x9e50 /* USB-59 USB to 8 pin plug */
+@@ -1441,6 +1444,12 @@
+ */
+ #define FTDI_CINTERION_MC55I_PID 0xA951
+
++/*
++ * Product: FirmwareHubEmulator
++ * Manufacturer: Harman Becker Automotive Systems
++ */
++#define FTDI_FHE_PID 0xA9A0
++
+ /*
+ * Product: Comet Caller ID decoder
+ * Manufacturer: Crucible Technologies
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index c04183cc2117..ae07b3a2fb5d 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1257,6 +1257,8 @@ static noinline int csum_exist_in_range(struct btrfs_fs_info *fs_info,
+ list_del(&sums->list);
+ kfree(sums);
+ }
++ if (ret < 0)
++ return ret;
+ return 1;
+ }
+
+@@ -1389,10 +1391,23 @@ static noinline int run_delalloc_nocow(struct inode *inode,
+ goto out_check;
+ if (btrfs_extent_readonly(fs_info, disk_bytenr))
+ goto out_check;
+- if (btrfs_cross_ref_exist(root, ino,
+- found_key.offset -
+- extent_offset, disk_bytenr))
++ ret = btrfs_cross_ref_exist(root, ino,
++ found_key.offset -
++ extent_offset, disk_bytenr);
++ if (ret) {
++ /*
++ * ret could be -EIO if the above fails to read
++ * metadata.
++ */
++ if (ret < 0) {
++ if (cow_start != (u64)-1)
++ cur_offset = cow_start;
++ goto error;
++ }
++
++ WARN_ON_ONCE(nolock);
+ goto out_check;
++ }
+ disk_bytenr += extent_offset;
+ disk_bytenr += cur_offset - found_key.offset;
+ num_bytes = min(end + 1, extent_end) - cur_offset;
+@@ -1410,10 +1425,22 @@ static noinline int run_delalloc_nocow(struct inode *inode,
+ * this ensure that csum for a given extent are
+ * either valid or do not exist.
+ */
+- if (csum_exist_in_range(fs_info, disk_bytenr,
+- num_bytes)) {
++ ret = csum_exist_in_range(fs_info, disk_bytenr,
++ num_bytes);
++ if (ret) {
+ if (!nolock)
+ btrfs_end_write_no_snapshotting(root);
++
++ /*
++ * ret could be -EIO if the above fails to read
++ * metadata.
++ */
++ if (ret < 0) {
++ if (cow_start != (u64)-1)
++ cur_offset = cow_start;
++ goto error;
++ }
++ WARN_ON_ONCE(nolock);
+ goto out_check;
+ }
+ if (!btrfs_inc_nocow_writers(fs_info, disk_bytenr)) {
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 5c17125f45c7..0024d3e61bcd 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -635,7 +635,8 @@ static ssize_t ceph_sync_read(struct kiocb *iocb, struct iov_iter *to,
+ struct ceph_aio_request {
+ struct kiocb *iocb;
+ size_t total_len;
+- int write;
++ bool write;
++ bool should_dirty;
+ int error;
+ struct list_head osd_reqs;
+ unsigned num_reqs;
+@@ -745,7 +746,7 @@ static void ceph_aio_complete_req(struct ceph_osd_request *req)
+ }
+ }
+
+- ceph_put_page_vector(osd_data->pages, num_pages, !aio_req->write);
++ ceph_put_page_vector(osd_data->pages, num_pages, aio_req->should_dirty);
+ ceph_osdc_put_request(req);
+
+ if (rc < 0)
+@@ -842,6 +843,7 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
+ size_t count = iov_iter_count(iter);
+ loff_t pos = iocb->ki_pos;
+ bool write = iov_iter_rw(iter) == WRITE;
++ bool should_dirty = !write && iter_is_iovec(iter);
+
+ if (write && ceph_snap(file_inode(file)) != CEPH_NOSNAP)
+ return -EROFS;
+@@ -909,6 +911,7 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
+ if (aio_req) {
+ aio_req->iocb = iocb;
+ aio_req->write = write;
++ aio_req->should_dirty = should_dirty;
+ INIT_LIST_HEAD(&aio_req->osd_reqs);
+ if (write) {
+ aio_req->mtime = mtime;
+@@ -966,7 +969,7 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
+ len = ret;
+ }
+
+- ceph_put_page_vector(pages, num_pages, !write);
++ ceph_put_page_vector(pages, num_pages, should_dirty);
+
+ ceph_osdc_put_request(req);
+ if (ret < 0)
+diff --git a/include/linux/bitmap.h b/include/linux/bitmap.h
+index 3489253e38fc..1347b33399e1 100644
+--- a/include/linux/bitmap.h
++++ b/include/linux/bitmap.h
+@@ -271,12 +271,20 @@ static inline void bitmap_complement(unsigned long *dst, const unsigned long *sr
+ __bitmap_complement(dst, src, nbits);
+ }
+
++#ifdef __LITTLE_ENDIAN
++#define BITMAP_MEM_ALIGNMENT 8
++#else
++#define BITMAP_MEM_ALIGNMENT (8 * sizeof(unsigned long))
++#endif
++#define BITMAP_MEM_MASK (BITMAP_MEM_ALIGNMENT - 1)
++
+ static inline int bitmap_equal(const unsigned long *src1,
+ const unsigned long *src2, unsigned int nbits)
+ {
+ if (small_const_nbits(nbits))
+ return !((*src1 ^ *src2) & BITMAP_LAST_WORD_MASK(nbits));
+- if (__builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
++ if (__builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
++ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
+ return !memcmp(src1, src2, nbits / 8);
+ return __bitmap_equal(src1, src2, nbits);
+ }
+@@ -327,8 +335,10 @@ static __always_inline void bitmap_set(unsigned long *map, unsigned int start,
+ {
+ if (__builtin_constant_p(nbits) && nbits == 1)
+ __set_bit(start, map);
+- else if (__builtin_constant_p(start & 7) && IS_ALIGNED(start, 8) &&
+- __builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
++ else if (__builtin_constant_p(start & BITMAP_MEM_MASK) &&
++ IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
++ __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
++ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
+ memset((char *)map + start / 8, 0xff, nbits / 8);
+ else
+ __bitmap_set(map, start, nbits);
+@@ -339,8 +349,10 @@ static __always_inline void bitmap_clear(unsigned long *map, unsigned int start,
+ {
+ if (__builtin_constant_p(nbits) && nbits == 1)
+ __clear_bit(start, map);
+- else if (__builtin_constant_p(start & 7) && IS_ALIGNED(start, 8) &&
+- __builtin_constant_p(nbits & 7) && IS_ALIGNED(nbits, 8))
++ else if (__builtin_constant_p(start & BITMAP_MEM_MASK) &&
++ IS_ALIGNED(start, BITMAP_MEM_ALIGNMENT) &&
++ __builtin_constant_p(nbits & BITMAP_MEM_MASK) &&
++ IS_ALIGNED(nbits, BITMAP_MEM_ALIGNMENT))
+ memset((char *)map + start / 8, 0, nbits / 8);
+ else
+ __bitmap_clear(map, start, nbits);
+diff --git a/include/linux/netfilter/x_tables.h b/include/linux/netfilter/x_tables.h
+index 33f7530f96b9..8e46c35d654b 100644
+--- a/include/linux/netfilter/x_tables.h
++++ b/include/linux/netfilter/x_tables.h
+@@ -285,6 +285,8 @@ unsigned int *xt_alloc_entry_offsets(unsigned int size);
+ bool xt_find_jump_offset(const unsigned int *offsets,
+ unsigned int target, unsigned int size);
+
++int xt_check_proc_name(const char *name, unsigned int size);
++
+ int xt_check_match(struct xt_mtchk_param *, unsigned int size, u_int8_t proto,
+ bool inv_proto);
+ int xt_check_target(struct xt_tgchk_param *, unsigned int size, u_int8_t proto,
+diff --git a/include/rdma/ib_addr.h b/include/rdma/ib_addr.h
+index 18c564f60e93..60e5cabee928 100644
+--- a/include/rdma/ib_addr.h
++++ b/include/rdma/ib_addr.h
+@@ -130,6 +130,8 @@ void rdma_copy_addr(struct rdma_dev_addr *dev_addr,
+ const unsigned char *dst_dev_addr);
+
+ int rdma_addr_size(struct sockaddr *addr);
++int rdma_addr_size_in6(struct sockaddr_in6 *addr);
++int rdma_addr_size_kss(struct __kernel_sockaddr_storage *addr);
+
+ int rdma_addr_find_smac_by_sgid(union ib_gid *sgid, u8 *smac, u16 *vlan_id);
+ int rdma_addr_find_l2_eth_by_grh(const union ib_gid *sgid,
+diff --git a/include/uapi/linux/serial_core.h b/include/uapi/linux/serial_core.h
+index 1c8413f93e3d..dce5f9dae121 100644
+--- a/include/uapi/linux/serial_core.h
++++ b/include/uapi/linux/serial_core.h
+@@ -76,6 +76,9 @@
+ #define PORT_SUNZILOG 38
+ #define PORT_SUNSAB 39
+
++/* Nuvoton UART */
++#define PORT_NPCM 40
++
+ /* Intel EG20 */
+ #define PORT_PCH_8LINE 44
+ #define PORT_PCH_2LINE 45
+diff --git a/ipc/shm.c b/ipc/shm.c
+index 7acda23430aa..50e88fc060b1 100644
+--- a/ipc/shm.c
++++ b/ipc/shm.c
+@@ -386,6 +386,17 @@ static int shm_fault(struct vm_fault *vmf)
+ return sfd->vm_ops->fault(vmf);
+ }
+
++static int shm_split(struct vm_area_struct *vma, unsigned long addr)
++{
++ struct file *file = vma->vm_file;
++ struct shm_file_data *sfd = shm_file_data(file);
++
++ if (sfd->vm_ops && sfd->vm_ops->split)
++ return sfd->vm_ops->split(vma, addr);
++
++ return 0;
++}
++
+ #ifdef CONFIG_NUMA
+ static int shm_set_policy(struct vm_area_struct *vma, struct mempolicy *new)
+ {
+@@ -510,6 +521,7 @@ static const struct vm_operations_struct shm_vm_ops = {
+ .open = shm_open, /* callback for a new vm-area open */
+ .close = shm_close, /* callback for when the vm-area is released */
+ .fault = shm_fault,
++ .split = shm_split,
+ #if defined(CONFIG_NUMA)
+ .set_policy = shm_set_policy,
+ .get_policy = shm_get_policy,
+diff --git a/kernel/events/hw_breakpoint.c b/kernel/events/hw_breakpoint.c
+index 3f8cb1e14588..253ae2da13c3 100644
+--- a/kernel/events/hw_breakpoint.c
++++ b/kernel/events/hw_breakpoint.c
+@@ -427,16 +427,9 @@ EXPORT_SYMBOL_GPL(register_user_hw_breakpoint);
+ * modify_user_hw_breakpoint - modify a user-space hardware breakpoint
+ * @bp: the breakpoint structure to modify
+ * @attr: new breakpoint attributes
+- * @triggered: callback to trigger when we hit the breakpoint
+- * @tsk: pointer to 'task_struct' of the process to which the address belongs
+ */
+ int modify_user_hw_breakpoint(struct perf_event *bp, struct perf_event_attr *attr)
+ {
+- u64 old_addr = bp->attr.bp_addr;
+- u64 old_len = bp->attr.bp_len;
+- int old_type = bp->attr.bp_type;
+- int err = 0;
+-
+ /*
+ * modify_user_hw_breakpoint can be invoked with IRQs disabled and hence it
+ * will not be possible to raise IPIs that invoke __perf_event_disable.
+@@ -451,27 +444,18 @@ int modify_user_hw_breakpoint(struct perf_event *bp, struct perf_event_attr *att
+ bp->attr.bp_addr = attr->bp_addr;
+ bp->attr.bp_type = attr->bp_type;
+ bp->attr.bp_len = attr->bp_len;
++ bp->attr.disabled = 1;
+
+- if (attr->disabled)
+- goto end;
+-
+- err = validate_hw_breakpoint(bp);
+- if (!err)
+- perf_event_enable(bp);
++ if (!attr->disabled) {
++ int err = validate_hw_breakpoint(bp);
+
+- if (err) {
+- bp->attr.bp_addr = old_addr;
+- bp->attr.bp_type = old_type;
+- bp->attr.bp_len = old_len;
+- if (!bp->attr.disabled)
+- perf_event_enable(bp);
++ if (err)
++ return err;
+
+- return err;
++ perf_event_enable(bp);
++ bp->attr.disabled = 0;
+ }
+
+-end:
+- bp->attr.disabled = attr->disabled;
+-
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(modify_user_hw_breakpoint);
+diff --git a/mm/percpu-km.c b/mm/percpu-km.c
+index d2a76642c4ae..0d88d7bd5706 100644
+--- a/mm/percpu-km.c
++++ b/mm/percpu-km.c
+@@ -34,7 +34,7 @@
+ #include <linux/log2.h>
+
+ static int pcpu_populate_chunk(struct pcpu_chunk *chunk,
+- int page_start, int page_end)
++ int page_start, int page_end, gfp_t gfp)
+ {
+ return 0;
+ }
+@@ -45,18 +45,18 @@ static void pcpu_depopulate_chunk(struct pcpu_chunk *chunk,
+ /* nada */
+ }
+
+-static struct pcpu_chunk *pcpu_create_chunk(void)
++static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp)
+ {
+ const int nr_pages = pcpu_group_sizes[0] >> PAGE_SHIFT;
+ struct pcpu_chunk *chunk;
+ struct page *pages;
+ int i;
+
+- chunk = pcpu_alloc_chunk();
++ chunk = pcpu_alloc_chunk(gfp);
+ if (!chunk)
+ return NULL;
+
+- pages = alloc_pages(GFP_KERNEL, order_base_2(nr_pages));
++ pages = alloc_pages(gfp | GFP_KERNEL, order_base_2(nr_pages));
+ if (!pages) {
+ pcpu_free_chunk(chunk);
+ return NULL;
+diff --git a/mm/percpu-vm.c b/mm/percpu-vm.c
+index 9158e5a81391..0af71eb2fff0 100644
+--- a/mm/percpu-vm.c
++++ b/mm/percpu-vm.c
+@@ -37,7 +37,7 @@ static struct page **pcpu_get_pages(void)
+ lockdep_assert_held(&pcpu_alloc_mutex);
+
+ if (!pages)
+- pages = pcpu_mem_zalloc(pages_size);
++ pages = pcpu_mem_zalloc(pages_size, 0);
+ return pages;
+ }
+
+@@ -73,18 +73,21 @@ static void pcpu_free_pages(struct pcpu_chunk *chunk,
+ * @pages: array to put the allocated pages into, indexed by pcpu_page_idx()
+ * @page_start: page index of the first page to be allocated
+ * @page_end: page index of the last page to be allocated + 1
++ * @gfp: allocation flags passed to the underlying allocator
+ *
+ * Allocate pages [@page_start,@page_end) into @pages for all units.
+ * The allocation is for @chunk. Percpu core doesn't care about the
+ * content of @pages and will pass it verbatim to pcpu_map_pages().
+ */
+ static int pcpu_alloc_pages(struct pcpu_chunk *chunk,
+- struct page **pages, int page_start, int page_end)
++ struct page **pages, int page_start, int page_end,
++ gfp_t gfp)
+ {
+- const gfp_t gfp = GFP_KERNEL | __GFP_HIGHMEM;
+ unsigned int cpu, tcpu;
+ int i;
+
++ gfp |= GFP_KERNEL | __GFP_HIGHMEM;
++
+ for_each_possible_cpu(cpu) {
+ for (i = page_start; i < page_end; i++) {
+ struct page **pagep = &pages[pcpu_page_idx(cpu, i)];
+@@ -262,6 +265,7 @@ static void pcpu_post_map_flush(struct pcpu_chunk *chunk,
+ * @chunk: chunk of interest
+ * @page_start: the start page
+ * @page_end: the end page
++ * @gfp: allocation flags passed to the underlying memory allocator
+ *
+ * For each cpu, populate and map pages [@page_start,@page_end) into
+ * @chunk.
+@@ -270,7 +274,7 @@ static void pcpu_post_map_flush(struct pcpu_chunk *chunk,
+ * pcpu_alloc_mutex, does GFP_KERNEL allocation.
+ */
+ static int pcpu_populate_chunk(struct pcpu_chunk *chunk,
+- int page_start, int page_end)
++ int page_start, int page_end, gfp_t gfp)
+ {
+ struct page **pages;
+
+@@ -278,7 +282,7 @@ static int pcpu_populate_chunk(struct pcpu_chunk *chunk,
+ if (!pages)
+ return -ENOMEM;
+
+- if (pcpu_alloc_pages(chunk, pages, page_start, page_end))
++ if (pcpu_alloc_pages(chunk, pages, page_start, page_end, gfp))
+ return -ENOMEM;
+
+ if (pcpu_map_pages(chunk, pages, page_start, page_end)) {
+@@ -325,12 +329,12 @@ static void pcpu_depopulate_chunk(struct pcpu_chunk *chunk,
+ pcpu_free_pages(chunk, pages, page_start, page_end);
+ }
+
+-static struct pcpu_chunk *pcpu_create_chunk(void)
++static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp)
+ {
+ struct pcpu_chunk *chunk;
+ struct vm_struct **vms;
+
+- chunk = pcpu_alloc_chunk();
++ chunk = pcpu_alloc_chunk(gfp);
+ if (!chunk)
+ return NULL;
+
+diff --git a/mm/percpu.c b/mm/percpu.c
+index 50e7fdf84055..3248afb3862c 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -447,10 +447,12 @@ static void pcpu_next_fit_region(struct pcpu_chunk *chunk, int alloc_bits,
+ /**
+ * pcpu_mem_zalloc - allocate memory
+ * @size: bytes to allocate
++ * @gfp: allocation flags
+ *
+ * Allocate @size bytes. If @size is smaller than PAGE_SIZE,
+- * kzalloc() is used; otherwise, vzalloc() is used. The returned
+- * memory is always zeroed.
++ * kzalloc() is used; otherwise, the equivalent of vzalloc() is used.
++ * This is to facilitate passing through whitelisted flags. The
++ * returned memory is always zeroed.
+ *
+ * CONTEXT:
+ * Does GFP_KERNEL allocation.
+@@ -458,15 +460,16 @@ static void pcpu_next_fit_region(struct pcpu_chunk *chunk, int alloc_bits,
+ * RETURNS:
+ * Pointer to the allocated area on success, NULL on failure.
+ */
+-static void *pcpu_mem_zalloc(size_t size)
++static void *pcpu_mem_zalloc(size_t size, gfp_t gfp)
+ {
+ if (WARN_ON_ONCE(!slab_is_available()))
+ return NULL;
+
+ if (size <= PAGE_SIZE)
+- return kzalloc(size, GFP_KERNEL);
++ return kzalloc(size, gfp | GFP_KERNEL);
+ else
+- return vzalloc(size);
++ return __vmalloc(size, gfp | GFP_KERNEL | __GFP_ZERO,
++ PAGE_KERNEL);
+ }
+
+ /**
+@@ -1154,12 +1157,12 @@ static struct pcpu_chunk * __init pcpu_alloc_first_chunk(unsigned long tmp_addr,
+ return chunk;
+ }
+
+-static struct pcpu_chunk *pcpu_alloc_chunk(void)
++static struct pcpu_chunk *pcpu_alloc_chunk(gfp_t gfp)
+ {
+ struct pcpu_chunk *chunk;
+ int region_bits;
+
+- chunk = pcpu_mem_zalloc(pcpu_chunk_struct_size);
++ chunk = pcpu_mem_zalloc(pcpu_chunk_struct_size, gfp);
+ if (!chunk)
+ return NULL;
+
+@@ -1168,17 +1171,17 @@ static struct pcpu_chunk *pcpu_alloc_chunk(void)
+ region_bits = pcpu_chunk_map_bits(chunk);
+
+ chunk->alloc_map = pcpu_mem_zalloc(BITS_TO_LONGS(region_bits) *
+- sizeof(chunk->alloc_map[0]));
++ sizeof(chunk->alloc_map[0]), gfp);
+ if (!chunk->alloc_map)
+ goto alloc_map_fail;
+
+ chunk->bound_map = pcpu_mem_zalloc(BITS_TO_LONGS(region_bits + 1) *
+- sizeof(chunk->bound_map[0]));
++ sizeof(chunk->bound_map[0]), gfp);
+ if (!chunk->bound_map)
+ goto bound_map_fail;
+
+ chunk->md_blocks = pcpu_mem_zalloc(pcpu_chunk_nr_blocks(chunk) *
+- sizeof(chunk->md_blocks[0]));
++ sizeof(chunk->md_blocks[0]), gfp);
+ if (!chunk->md_blocks)
+ goto md_blocks_fail;
+
+@@ -1277,9 +1280,10 @@ static void pcpu_chunk_depopulated(struct pcpu_chunk *chunk,
+ * pcpu_addr_to_page - translate address to physical address
+ * pcpu_verify_alloc_info - check alloc_info is acceptable during init
+ */
+-static int pcpu_populate_chunk(struct pcpu_chunk *chunk, int off, int size);
++static int pcpu_populate_chunk(struct pcpu_chunk *chunk, int off, int size,
++ gfp_t gfp);
+ static void pcpu_depopulate_chunk(struct pcpu_chunk *chunk, int off, int size);
+-static struct pcpu_chunk *pcpu_create_chunk(void);
++static struct pcpu_chunk *pcpu_create_chunk(gfp_t gfp);
+ static void pcpu_destroy_chunk(struct pcpu_chunk *chunk);
+ static struct page *pcpu_addr_to_page(void *addr);
+ static int __init pcpu_verify_alloc_info(const struct pcpu_alloc_info *ai);
+@@ -1421,7 +1425,7 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
+ }
+
+ if (list_empty(&pcpu_slot[pcpu_nr_slots - 1])) {
+- chunk = pcpu_create_chunk();
++ chunk = pcpu_create_chunk(0);
+ if (!chunk) {
+ err = "failed to allocate new chunk";
+ goto fail;
+@@ -1450,7 +1454,7 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
+ page_start, page_end) {
+ WARN_ON(chunk->immutable);
+
+- ret = pcpu_populate_chunk(chunk, rs, re);
++ ret = pcpu_populate_chunk(chunk, rs, re, 0);
+
+ spin_lock_irqsave(&pcpu_lock, flags);
+ if (ret) {
+@@ -1561,10 +1565,17 @@ void __percpu *__alloc_reserved_percpu(size_t size, size_t align)
+ * pcpu_balance_workfn - manage the amount of free chunks and populated pages
+ * @work: unused
+ *
+- * Reclaim all fully free chunks except for the first one.
++ * Reclaim all fully free chunks except for the first one. This is also
++ * responsible for maintaining the pool of empty populated pages. However,
++ * it is possible that this is called when physical memory is scarce causing
++ * OOM killer to be triggered. We should avoid doing so until an actual
++ * allocation causes the failure as it is possible that requests can be
++ * serviced from already backed regions.
+ */
+ static void pcpu_balance_workfn(struct work_struct *work)
+ {
++ /* gfp flags passed to underlying allocators */
++ const gfp_t gfp = __GFP_NORETRY | __GFP_NOWARN;
+ LIST_HEAD(to_free);
+ struct list_head *free_head = &pcpu_slot[pcpu_nr_slots - 1];
+ struct pcpu_chunk *chunk, *next;
+@@ -1645,7 +1656,7 @@ static void pcpu_balance_workfn(struct work_struct *work)
+ chunk->nr_pages) {
+ int nr = min(re - rs, nr_to_pop);
+
+- ret = pcpu_populate_chunk(chunk, rs, rs + nr);
++ ret = pcpu_populate_chunk(chunk, rs, rs + nr, gfp);
+ if (!ret) {
+ nr_to_pop -= nr;
+ spin_lock_irq(&pcpu_lock);
+@@ -1662,7 +1673,7 @@ static void pcpu_balance_workfn(struct work_struct *work)
+
+ if (nr_to_pop) {
+ /* ran out of chunks to populate, create a new one and retry */
+- chunk = pcpu_create_chunk();
++ chunk = pcpu_create_chunk(gfp);
+ if (chunk) {
+ spin_lock_irq(&pcpu_lock);
+ pcpu_chunk_relocate(chunk, -1);
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index 01117ae84f1d..a2ddae2f37d7 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -2296,8 +2296,14 @@ static u8 smp_cmd_security_req(struct l2cap_conn *conn, struct sk_buff *skb)
+ else
+ sec_level = authreq_to_seclevel(auth);
+
+- if (smp_sufficient_security(hcon, sec_level, SMP_USE_LTK))
++ if (smp_sufficient_security(hcon, sec_level, SMP_USE_LTK)) {
++ /* If link is already encrypted with sufficient security we
++ * still need refresh encryption as per Core Spec 5.0 Vol 3,
++ * Part H 2.4.6
++ */
++ smp_ltk_encrypt(conn, hcon->sec_level);
+ return 0;
++ }
+
+ if (sec_level > hcon->pending_sec_level)
+ hcon->pending_sec_level = sec_level;
+diff --git a/net/bridge/netfilter/ebt_among.c b/net/bridge/netfilter/ebt_among.c
+index 59baaecd3e54..0b589a6b365c 100644
+--- a/net/bridge/netfilter/ebt_among.c
++++ b/net/bridge/netfilter/ebt_among.c
+@@ -177,6 +177,28 @@ static bool poolsize_invalid(const struct ebt_mac_wormhash *w)
+ return w && w->poolsize >= (INT_MAX / sizeof(struct ebt_mac_wormhash_tuple));
+ }
+
++static bool wormhash_offset_invalid(int off, unsigned int len)
++{
++ if (off == 0) /* not present */
++ return false;
++
++ if (off < (int)sizeof(struct ebt_among_info) ||
++ off % __alignof__(struct ebt_mac_wormhash))
++ return true;
++
++ off += sizeof(struct ebt_mac_wormhash);
++
++ return off > len;
++}
++
++static bool wormhash_sizes_valid(const struct ebt_mac_wormhash *wh, int a, int b)
++{
++ if (a == 0)
++ a = sizeof(struct ebt_among_info);
++
++ return ebt_mac_wormhash_size(wh) + a == b;
++}
++
+ static int ebt_among_mt_check(const struct xt_mtchk_param *par)
+ {
+ const struct ebt_among_info *info = par->matchinfo;
+@@ -189,6 +211,10 @@ static int ebt_among_mt_check(const struct xt_mtchk_param *par)
+ if (expected_length > em->match_size)
+ return -EINVAL;
+
++ if (wormhash_offset_invalid(info->wh_dst_ofs, em->match_size) ||
++ wormhash_offset_invalid(info->wh_src_ofs, em->match_size))
++ return -EINVAL;
++
+ wh_dst = ebt_among_wh_dst(info);
+ if (poolsize_invalid(wh_dst))
+ return -EINVAL;
+@@ -201,6 +227,14 @@ static int ebt_among_mt_check(const struct xt_mtchk_param *par)
+ if (poolsize_invalid(wh_src))
+ return -EINVAL;
+
++ if (info->wh_src_ofs < info->wh_dst_ofs) {
++ if (!wormhash_sizes_valid(wh_src, info->wh_src_ofs, info->wh_dst_ofs))
++ return -EINVAL;
++ } else {
++ if (!wormhash_sizes_valid(wh_dst, info->wh_dst_ofs, info->wh_src_ofs))
++ return -EINVAL;
++ }
++
+ expected_length += ebt_mac_wormhash_size(wh_src);
+
+ if (em->match_size != EBT_ALIGN(expected_length)) {
+diff --git a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
+index a5727036a8a8..a9670b642ba4 100644
+--- a/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
++++ b/net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c
+@@ -159,8 +159,20 @@ static unsigned int ipv4_conntrack_local(void *priv,
+ ip_hdrlen(skb) < sizeof(struct iphdr))
+ return NF_ACCEPT;
+
+- if (ip_is_fragment(ip_hdr(skb))) /* IP_NODEFRAG setsockopt set */
++ if (ip_is_fragment(ip_hdr(skb))) { /* IP_NODEFRAG setsockopt set */
++ enum ip_conntrack_info ctinfo;
++ struct nf_conn *tmpl;
++
++ tmpl = nf_ct_get(skb, &ctinfo);
++ if (tmpl && nf_ct_is_template(tmpl)) {
++ /* when skipping ct, clear templates to avoid fooling
++ * later targets/matches
++ */
++ skb->_nfct = 0;
++ nf_ct_put(tmpl);
++ }
+ return NF_ACCEPT;
++ }
+
+ return nf_conntrack_in(state->net, PF_INET, state->hook, skb);
+ }
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index fa3ae1cb50d3..8c184f84f353 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -626,7 +626,6 @@ static void vti6_link_config(struct ip6_tnl *t)
+ {
+ struct net_device *dev = t->dev;
+ struct __ip6_tnl_parm *p = &t->parms;
+- struct net_device *tdev = NULL;
+
+ memcpy(dev->dev_addr, &p->laddr, sizeof(struct in6_addr));
+ memcpy(dev->broadcast, &p->raddr, sizeof(struct in6_addr));
+@@ -639,25 +638,6 @@ static void vti6_link_config(struct ip6_tnl *t)
+ dev->flags |= IFF_POINTOPOINT;
+ else
+ dev->flags &= ~IFF_POINTOPOINT;
+-
+- if (p->flags & IP6_TNL_F_CAP_XMIT) {
+- int strict = (ipv6_addr_type(&p->raddr) &
+- (IPV6_ADDR_MULTICAST | IPV6_ADDR_LINKLOCAL));
+- struct rt6_info *rt = rt6_lookup(t->net,
+- &p->raddr, &p->laddr,
+- p->link, strict);
+-
+- if (rt)
+- tdev = rt->dst.dev;
+- ip6_rt_put(rt);
+- }
+-
+- if (!tdev && p->link)
+- tdev = __dev_get_by_index(t->net, p->link);
+-
+- if (tdev)
+- dev->mtu = max_t(int, tdev->mtu - dev->hard_header_len,
+- IPV6_MIN_MTU);
+ }
+
+ /**
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 08a2a65d3304..1f0d94439c77 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1627,11 +1627,10 @@ static void rt6_age_examine_exception(struct rt6_exception_bucket *bucket,
+ struct neighbour *neigh;
+ __u8 neigh_flags = 0;
+
+- neigh = dst_neigh_lookup(&rt->dst, &rt->rt6i_gateway);
+- if (neigh) {
++ neigh = __ipv6_neigh_lookup_noref(rt->dst.dev, &rt->rt6i_gateway);
++ if (neigh)
+ neigh_flags = neigh->flags;
+- neigh_release(neigh);
+- }
++
+ if (!(neigh_flags & NTF_ROUTER)) {
+ RT6_TRACE("purging route %p via non-router but gateway\n",
+ rt);
+@@ -1655,7 +1654,8 @@ void rt6_age_exceptions(struct rt6_info *rt,
+ if (!rcu_access_pointer(rt->rt6i_exception_bucket))
+ return;
+
+- spin_lock_bh(&rt6_exception_lock);
++ rcu_read_lock_bh();
++ spin_lock(&rt6_exception_lock);
+ bucket = rcu_dereference_protected(rt->rt6i_exception_bucket,
+ lockdep_is_held(&rt6_exception_lock));
+
+@@ -1669,7 +1669,8 @@ void rt6_age_exceptions(struct rt6_info *rt,
+ bucket++;
+ }
+ }
+- spin_unlock_bh(&rt6_exception_lock);
++ spin_unlock(&rt6_exception_lock);
++ rcu_read_unlock_bh();
+ }
+
+ struct rt6_info *ip6_pol_route(struct net *net, struct fib6_table *table,
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index e8b26afeb194..083fa2ffee15 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -111,6 +111,13 @@ struct l2tp_net {
+ spinlock_t l2tp_session_hlist_lock;
+ };
+
++#if IS_ENABLED(CONFIG_IPV6)
++static bool l2tp_sk_is_v6(struct sock *sk)
++{
++ return sk->sk_family == PF_INET6 &&
++ !ipv6_addr_v4mapped(&sk->sk_v6_daddr);
++}
++#endif
+
+ static inline struct l2tp_tunnel *l2tp_tunnel(struct sock *sk)
+ {
+@@ -1058,7 +1065,7 @@ static int l2tp_xmit_core(struct l2tp_session *session, struct sk_buff *skb,
+ /* Queue the packet to IP for output */
+ skb->ignore_df = 1;
+ #if IS_ENABLED(CONFIG_IPV6)
+- if (tunnel->sock->sk_family == PF_INET6 && !tunnel->v4mapped)
++ if (l2tp_sk_is_v6(tunnel->sock))
+ error = inet6_csk_xmit(tunnel->sock, skb, NULL);
+ else
+ #endif
+@@ -1121,6 +1128,15 @@ int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len
+ goto out_unlock;
+ }
+
++ /* The user-space may change the connection status for the user-space
++ * provided socket at run time: we must check it under the socket lock
++ */
++ if (tunnel->fd >= 0 && sk->sk_state != TCP_ESTABLISHED) {
++ kfree_skb(skb);
++ ret = NET_XMIT_DROP;
++ goto out_unlock;
++ }
++
+ /* Get routing info from the tunnel socket */
+ skb_dst_drop(skb);
+ skb_dst_set(skb, dst_clone(__sk_dst_check(sk, 0)));
+@@ -1140,7 +1156,7 @@ int l2tp_xmit_skb(struct l2tp_session *session, struct sk_buff *skb, int hdr_len
+
+ /* Calculate UDP checksum if configured to do so */
+ #if IS_ENABLED(CONFIG_IPV6)
+- if (sk->sk_family == PF_INET6 && !tunnel->v4mapped)
++ if (l2tp_sk_is_v6(sk))
+ udp6_set_csum(udp_get_no_check6_tx(sk),
+ skb, &inet6_sk(sk)->saddr,
+ &sk->sk_v6_daddr, udp_len);
+@@ -1520,24 +1536,6 @@ int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id, u32
+ if (cfg != NULL)
+ tunnel->debug = cfg->debug;
+
+-#if IS_ENABLED(CONFIG_IPV6)
+- if (sk->sk_family == PF_INET6) {
+- struct ipv6_pinfo *np = inet6_sk(sk);
+-
+- if (ipv6_addr_v4mapped(&np->saddr) &&
+- ipv6_addr_v4mapped(&sk->sk_v6_daddr)) {
+- struct inet_sock *inet = inet_sk(sk);
+-
+- tunnel->v4mapped = true;
+- inet->inet_saddr = np->saddr.s6_addr32[3];
+- inet->inet_rcv_saddr = sk->sk_v6_rcv_saddr.s6_addr32[3];
+- inet->inet_daddr = sk->sk_v6_daddr.s6_addr32[3];
+- } else {
+- tunnel->v4mapped = false;
+- }
+- }
+-#endif
+-
+ /* Mark socket as an encapsulation socket. See net/ipv4/udp.c */
+ tunnel->encap = encap;
+ if (encap == L2TP_ENCAPTYPE_UDP) {
+diff --git a/net/l2tp/l2tp_core.h b/net/l2tp/l2tp_core.h
+index 8ecb1d357445..3fddfb207113 100644
+--- a/net/l2tp/l2tp_core.h
++++ b/net/l2tp/l2tp_core.h
+@@ -193,9 +193,6 @@ struct l2tp_tunnel {
+ struct sock *sock; /* Parent socket */
+ int fd; /* Parent fd, if tunnel socket
+ * was created by userspace */
+-#if IS_ENABLED(CONFIG_IPV6)
+- bool v4mapped;
+-#endif
+
+ struct work_struct del_work;
+
+diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
+index d7070d18db20..a33def530adf 100644
+--- a/net/netfilter/x_tables.c
++++ b/net/netfilter/x_tables.c
+@@ -423,6 +423,36 @@ textify_hooks(char *buf, size_t size, unsigned int mask, uint8_t nfproto)
+ return buf;
+ }
+
++/**
++ * xt_check_proc_name - check that name is suitable for /proc file creation
++ *
++ * @name: file name candidate
++ * @size: length of buffer
++ *
++ * some x_tables modules wish to create a file in /proc.
++ * This function makes sure that the name is suitable for this
++ * purpose, it checks that name is NUL terminated and isn't a 'special'
++ * name, like "..".
++ *
++ * returns negative number on error or 0 if name is useable.
++ */
++int xt_check_proc_name(const char *name, unsigned int size)
++{
++ if (name[0] == '\0')
++ return -EINVAL;
++
++ if (strnlen(name, size) == size)
++ return -ENAMETOOLONG;
++
++ if (strcmp(name, ".") == 0 ||
++ strcmp(name, "..") == 0 ||
++ strchr(name, '/'))
++ return -EINVAL;
++
++ return 0;
++}
++EXPORT_SYMBOL(xt_check_proc_name);
++
+ int xt_check_match(struct xt_mtchk_param *par,
+ unsigned int size, u_int8_t proto, bool inv_proto)
+ {
+@@ -1008,7 +1038,12 @@ struct xt_table_info *xt_alloc_table_info(unsigned int size)
+ if ((size >> PAGE_SHIFT) + 2 > totalram_pages)
+ return NULL;
+
+- info = kvmalloc(sz, GFP_KERNEL);
++ /* __GFP_NORETRY is not fully supported by kvmalloc but it should
++ * work reasonably well if sz is too large and bail out rather
++ * than shoot all processes down before realizing there is nothing
++ * more to reclaim.
++ */
++ info = kvmalloc(sz, GFP_KERNEL | __GFP_NORETRY);
+ if (!info)
+ return NULL;
+
+diff --git a/net/netfilter/xt_hashlimit.c b/net/netfilter/xt_hashlimit.c
+index b8a3e740ffd4..0c034597b9b8 100644
+--- a/net/netfilter/xt_hashlimit.c
++++ b/net/netfilter/xt_hashlimit.c
+@@ -915,8 +915,9 @@ static int hashlimit_mt_check_v1(const struct xt_mtchk_param *par)
+ struct hashlimit_cfg3 cfg = {};
+ int ret;
+
+- if (info->name[sizeof(info->name) - 1] != '\0')
+- return -EINVAL;
++ ret = xt_check_proc_name(info->name, sizeof(info->name));
++ if (ret)
++ return ret;
+
+ ret = cfg_copy(&cfg, (void *)&info->cfg, 1);
+
+@@ -933,8 +934,9 @@ static int hashlimit_mt_check_v2(const struct xt_mtchk_param *par)
+ struct hashlimit_cfg3 cfg = {};
+ int ret;
+
+- if (info->name[sizeof(info->name) - 1] != '\0')
+- return -EINVAL;
++ ret = xt_check_proc_name(info->name, sizeof(info->name));
++ if (ret)
++ return ret;
+
+ ret = cfg_copy(&cfg, (void *)&info->cfg, 2);
+
+@@ -948,9 +950,11 @@ static int hashlimit_mt_check_v2(const struct xt_mtchk_param *par)
+ static int hashlimit_mt_check(const struct xt_mtchk_param *par)
+ {
+ struct xt_hashlimit_mtinfo3 *info = par->matchinfo;
++ int ret;
+
+- if (info->name[sizeof(info->name) - 1] != '\0')
+- return -EINVAL;
++ ret = xt_check_proc_name(info->name, sizeof(info->name));
++ if (ret)
++ return ret;
+
+ return hashlimit_mt_check_common(par, &info->hinfo, &info->cfg,
+ info->name, 3);
+diff --git a/net/netfilter/xt_recent.c b/net/netfilter/xt_recent.c
+index 245fa350a7a8..cf96d230e5a3 100644
+--- a/net/netfilter/xt_recent.c
++++ b/net/netfilter/xt_recent.c
+@@ -361,9 +361,9 @@ static int recent_mt_check(const struct xt_mtchk_param *par,
+ info->hit_count, XT_RECENT_MAX_NSTAMPS - 1);
+ return -EINVAL;
+ }
+- if (info->name[0] == '\0' ||
+- strnlen(info->name, XT_RECENT_NAME_LEN) == XT_RECENT_NAME_LEN)
+- return -EINVAL;
++ ret = xt_check_proc_name(info->name, sizeof(info->name));
++ if (ret)
++ return ret;
+
+ if (ip_pkt_list_tot && info->hit_count < ip_pkt_list_tot)
+ nstamp_mask = roundup_pow_of_two(ip_pkt_list_tot) - 1;
+diff --git a/net/xfrm/xfrm_ipcomp.c b/net/xfrm/xfrm_ipcomp.c
+index ccfdc7115a83..a00ec715aa46 100644
+--- a/net/xfrm/xfrm_ipcomp.c
++++ b/net/xfrm/xfrm_ipcomp.c
+@@ -283,7 +283,7 @@ static struct crypto_comp * __percpu *ipcomp_alloc_tfms(const char *alg_name)
+ struct crypto_comp *tfm;
+
+ /* This can be any valid CPU ID so we don't need locking. */
+- tfm = __this_cpu_read(*pos->tfms);
++ tfm = this_cpu_read(*pos->tfms);
+
+ if (!strcmp(crypto_comp_name(tfm), alg_name)) {
+ pos->users++;
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 54e21f19d722..f9d2f2233f09 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -2056,6 +2056,11 @@ int xfrm_user_policy(struct sock *sk, int optname, u8 __user *optval, int optlen
+ struct xfrm_mgr *km;
+ struct xfrm_policy *pol = NULL;
+
++#ifdef CONFIG_COMPAT
++ if (in_compat_syscall())
++ return -EOPNOTSUPP;
++#endif
++
+ if (!optval && !optlen) {
+ xfrm_sk_policy_insert(sk, XFRM_POLICY_IN, NULL);
+ xfrm_sk_policy_insert(sk, XFRM_POLICY_OUT, NULL);
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index 7f52b8eb177d..080035f056d9 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -121,22 +121,17 @@ static inline int verify_replay(struct xfrm_usersa_info *p,
+ struct nlattr *rt = attrs[XFRMA_REPLAY_ESN_VAL];
+ struct xfrm_replay_state_esn *rs;
+
+- if (p->flags & XFRM_STATE_ESN) {
+- if (!rt)
+- return -EINVAL;
++ if (!rt)
++ return (p->flags & XFRM_STATE_ESN) ? -EINVAL : 0;
+
+- rs = nla_data(rt);
++ rs = nla_data(rt);
+
+- if (rs->bmp_len > XFRMA_REPLAY_ESN_MAX / sizeof(rs->bmp[0]) / 8)
+- return -EINVAL;
+-
+- if (nla_len(rt) < (int)xfrm_replay_state_esn_len(rs) &&
+- nla_len(rt) != sizeof(*rs))
+- return -EINVAL;
+- }
++ if (rs->bmp_len > XFRMA_REPLAY_ESN_MAX / sizeof(rs->bmp[0]) / 8)
++ return -EINVAL;
+
+- if (!rt)
+- return 0;
++ if (nla_len(rt) < (int)xfrm_replay_state_esn_len(rs) &&
++ nla_len(rt) != sizeof(*rs))
++ return -EINVAL;
+
+ /* As only ESP and AH support ESN feature. */
+ if ((p->id.proto != IPPROTO_ESP) && (p->id.proto != IPPROTO_AH))
+diff --git a/sound/core/oss/pcm_oss.c b/sound/core/oss/pcm_oss.c
+index 012881461058..d6e9a18fd821 100644
+--- a/sound/core/oss/pcm_oss.c
++++ b/sound/core/oss/pcm_oss.c
+@@ -1326,7 +1326,7 @@ static ssize_t snd_pcm_oss_write2(struct snd_pcm_substream *substream, const cha
+ static ssize_t snd_pcm_oss_write1(struct snd_pcm_substream *substream, const char __user *buf, size_t bytes)
+ {
+ size_t xfer = 0;
+- ssize_t tmp;
++ ssize_t tmp = 0;
+ struct snd_pcm_runtime *runtime = substream->runtime;
+
+ if (atomic_read(&substream->mmap_count))
+@@ -1433,7 +1433,7 @@ static ssize_t snd_pcm_oss_read2(struct snd_pcm_substream *substream, char *buf,
+ static ssize_t snd_pcm_oss_read1(struct snd_pcm_substream *substream, char __user *buf, size_t bytes)
+ {
+ size_t xfer = 0;
+- ssize_t tmp;
++ ssize_t tmp = 0;
+ struct snd_pcm_runtime *runtime = substream->runtime;
+
+ if (atomic_read(&substream->mmap_count))
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index f08772568c17..b01cc015a93c 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -3422,7 +3422,7 @@ int snd_pcm_lib_default_mmap(struct snd_pcm_substream *substream,
+ area,
+ substream->runtime->dma_area,
+ substream->runtime->dma_addr,
+- area->vm_end - area->vm_start);
++ substream->runtime->dma_bytes);
+ #endif /* CONFIG_X86 */
+ /* mmap with fault handler */
+ area->vm_ops = &snd_pcm_vm_ops_data_fault;
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index ea8f3de92fa4..794224e1d6df 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1171,6 +1171,7 @@ static bool is_teac_dsd_dac(unsigned int id)
+ switch (id) {
+ case USB_ID(0x0644, 0x8043): /* TEAC UD-501/UD-503/NT-503 */
+ case USB_ID(0x0644, 0x8044): /* Esoteric D-05X */
++ case USB_ID(0x0644, 0x804a): /* TEAC UD-301 */
+ return true;
+ }
+ return false;
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-04-12 12:20 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2018-04-12 12:20 UTC (permalink / raw
To: gentoo-commits
commit: b57564a42c932434a1633d677be3e73f9716454d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Apr 12 12:20:02 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Apr 12 12:20:02 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b57564a4
Linux patch 4.15.17
0000_README | 4 +
1016_linux-4.15.17.patch | 6212 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6216 insertions(+)
diff --git a/0000_README b/0000_README
index ba8435c..f973683 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch: 1015_linux-4.15.16.patch
From: http://www.kernel.org
Desc: Linux 4.15.16
+Patch: 1016_linux-4.15.17.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.17
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1016_linux-4.15.17.patch b/1016_linux-4.15.17.patch
new file mode 100644
index 0000000..1b23b1b
--- /dev/null
+++ b/1016_linux-4.15.17.patch
@@ -0,0 +1,6212 @@
+diff --git a/Makefile b/Makefile
+index b28f0f721ec7..cfff73b62eb5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/arm/boot/dts/ls1021a.dtsi b/arch/arm/boot/dts/ls1021a.dtsi
+index 9319e1f0f1d8..379b4a03cfe2 100644
+--- a/arch/arm/boot/dts/ls1021a.dtsi
++++ b/arch/arm/boot/dts/ls1021a.dtsi
+@@ -155,7 +155,7 @@
+ };
+
+ esdhc: esdhc@1560000 {
+- compatible = "fsl,esdhc";
++ compatible = "fsl,ls1021a-esdhc", "fsl,esdhc";
+ reg = <0x0 0x1560000 0x0 0x10000>;
+ interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
+ clock-frequency = <0>;
+diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
+index b1ac80fba578..301417ae2ba8 100644
+--- a/arch/arm64/mm/context.c
++++ b/arch/arm64/mm/context.c
+@@ -194,26 +194,29 @@ static u64 new_context(struct mm_struct *mm, unsigned int cpu)
+ void check_and_switch_context(struct mm_struct *mm, unsigned int cpu)
+ {
+ unsigned long flags;
+- u64 asid;
++ u64 asid, old_active_asid;
+
+ asid = atomic64_read(&mm->context.id);
+
+ /*
+ * The memory ordering here is subtle.
+- * If our ASID matches the current generation, then we update
+- * our active_asids entry with a relaxed xchg. Racing with a
+- * concurrent rollover means that either:
++ * If our active_asids is non-zero and the ASID matches the current
++ * generation, then we update the active_asids entry with a relaxed
++ * cmpxchg. Racing with a concurrent rollover means that either:
+ *
+- * - We get a zero back from the xchg and end up waiting on the
++ * - We get a zero back from the cmpxchg and end up waiting on the
+ * lock. Taking the lock synchronises with the rollover and so
+ * we are forced to see the updated generation.
+ *
+- * - We get a valid ASID back from the xchg, which means the
++ * - We get a valid ASID back from the cmpxchg, which means the
+ * relaxed xchg in flush_context will treat us as reserved
+ * because atomic RmWs are totally ordered for a given location.
+ */
+- if (!((asid ^ atomic64_read(&asid_generation)) >> asid_bits)
+- && atomic64_xchg_relaxed(&per_cpu(active_asids, cpu), asid))
++ old_active_asid = atomic64_read(&per_cpu(active_asids, cpu));
++ if (old_active_asid &&
++ !((asid ^ atomic64_read(&asid_generation)) >> asid_bits) &&
++ atomic64_cmpxchg_relaxed(&per_cpu(active_asids, cpu),
++ old_active_asid, asid))
+ goto switch_mm_fastpath;
+
+ raw_spin_lock_irqsave(&cpu_asid_lock, flags);
+diff --git a/arch/x86/include/asm/microcode.h b/arch/x86/include/asm/microcode.h
+index 55520cec8b27..6cf0e4cb7b97 100644
+--- a/arch/x86/include/asm/microcode.h
++++ b/arch/x86/include/asm/microcode.h
+@@ -37,7 +37,13 @@ struct cpu_signature {
+
+ struct device;
+
+-enum ucode_state { UCODE_ERROR, UCODE_OK, UCODE_NFOUND };
++enum ucode_state {
++ UCODE_OK = 0,
++ UCODE_NEW,
++ UCODE_UPDATED,
++ UCODE_NFOUND,
++ UCODE_ERROR,
++};
+
+ struct microcode_ops {
+ enum ucode_state (*request_microcode_user) (int cpu,
+@@ -54,7 +60,7 @@ struct microcode_ops {
+ * are being called.
+ * See also the "Synchronization" section in microcode_core.c.
+ */
+- int (*apply_microcode) (int cpu);
++ enum ucode_state (*apply_microcode) (int cpu);
+ int (*collect_cpu_info) (int cpu, struct cpu_signature *csig);
+ };
+
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 44c2c4ec6d60..a5fc8f8bfb83 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -969,4 +969,5 @@ bool xen_set_default_idle(void);
+
+ void stop_this_cpu(void *dummy);
+ void df_debug(struct pt_regs *regs, long error_code);
++void microcode_check(void);
+ #endif /* _ASM_X86_PROCESSOR_H */
+diff --git a/arch/x86/kernel/aperture_64.c b/arch/x86/kernel/aperture_64.c
+index f5d92bc3b884..2c4d5ece7456 100644
+--- a/arch/x86/kernel/aperture_64.c
++++ b/arch/x86/kernel/aperture_64.c
+@@ -30,6 +30,7 @@
+ #include <asm/dma.h>
+ #include <asm/amd_nb.h>
+ #include <asm/x86_init.h>
++#include <linux/crash_dump.h>
+
+ /*
+ * Using 512M as goal, in case kexec will load kernel_big
+@@ -56,6 +57,33 @@ int fallback_aper_force __initdata;
+
+ int fix_aperture __initdata = 1;
+
++#ifdef CONFIG_PROC_VMCORE
++/*
++ * If the first kernel maps the aperture over e820 RAM, the kdump kernel will
++ * use the same range because it will remain configured in the northbridge.
++ * Trying to dump this area via /proc/vmcore may crash the machine, so exclude
++ * it from vmcore.
++ */
++static unsigned long aperture_pfn_start, aperture_page_count;
++
++static int gart_oldmem_pfn_is_ram(unsigned long pfn)
++{
++ return likely((pfn < aperture_pfn_start) ||
++ (pfn >= aperture_pfn_start + aperture_page_count));
++}
++
++static void exclude_from_vmcore(u64 aper_base, u32 aper_order)
++{
++ aperture_pfn_start = aper_base >> PAGE_SHIFT;
++ aperture_page_count = (32 * 1024 * 1024) << aper_order >> PAGE_SHIFT;
++ WARN_ON(register_oldmem_pfn_is_ram(&gart_oldmem_pfn_is_ram));
++}
++#else
++static void exclude_from_vmcore(u64 aper_base, u32 aper_order)
++{
++}
++#endif
++
+ /* This code runs before the PCI subsystem is initialized, so just
+ access the northbridge directly. */
+
+@@ -435,8 +463,16 @@ int __init gart_iommu_hole_init(void)
+
+ out:
+ if (!fix && !fallback_aper_force) {
+- if (last_aper_base)
++ if (last_aper_base) {
++ /*
++ * If this is the kdump kernel, the first kernel
++ * may have allocated the range over its e820 RAM
++ * and fixed up the northbridge
++ */
++ exclude_from_vmcore(last_aper_base, last_aper_order);
++
+ return 1;
++ }
+ return 0;
+ }
+
+@@ -473,6 +509,14 @@ int __init gart_iommu_hole_init(void)
+ return 0;
+ }
+
++ /*
++ * If this is the kdump kernel _and_ the first kernel did not
++ * configure the aperture in the northbridge, this range may
++ * overlap with the first kernel's memory. We can't access the
++ * range through vmcore even though it should be part of the dump.
++ */
++ exclude_from_vmcore(aper_alloc, aper_order);
++
+ /* Fix up the north bridges */
+ for (i = 0; i < amd_nb_bus_dev_ranges[i].dev_limit; i++) {
+ int bus, dev_base, dev_limit;
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 824aee0117bb..348cf4821240 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1749,3 +1749,33 @@ static int __init init_cpu_syscore(void)
+ return 0;
+ }
+ core_initcall(init_cpu_syscore);
++
++/*
++ * The microcode loader calls this upon late microcode load to recheck features,
++ * only when microcode has been updated. Caller holds microcode_mutex and CPU
++ * hotplug lock.
++ */
++void microcode_check(void)
++{
++ struct cpuinfo_x86 info;
++
++ perf_check_microcode();
++
++ /* Reload CPUID max function as it might've changed. */
++ info.cpuid_level = cpuid_eax(0);
++
++ /*
++ * Copy all capability leafs to pick up the synthetic ones so that
++ * memcmp() below doesn't fail on that. The ones coming from CPUID will
++ * get overwritten in get_cpu_cap().
++ */
++ memcpy(&info.x86_capability, &boot_cpu_data.x86_capability, sizeof(info.x86_capability));
++
++ get_cpu_cap(&info);
++
++ if (!memcmp(&info.x86_capability, &boot_cpu_data.x86_capability, sizeof(info.x86_capability)))
++ return;
++
++ pr_warn("x86/CPU: CPU features have changed after loading microcode, but might not take effect.\n");
++ pr_warn("x86/CPU: Please consider either early loading through initrd/built-in or a potential BIOS update.\n");
++}
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 330b8462d426..48179928ff38 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -339,7 +339,7 @@ int __init save_microcode_in_initrd_amd(unsigned int cpuid_1_eax)
+ return -EINVAL;
+
+ ret = load_microcode_amd(true, x86_family(cpuid_1_eax), desc.data, desc.size);
+- if (ret != UCODE_OK)
++ if (ret > UCODE_UPDATED)
+ return -EINVAL;
+
+ return 0;
+@@ -498,7 +498,7 @@ static unsigned int verify_patch_size(u8 family, u32 patch_size,
+ return patch_size;
+ }
+
+-static int apply_microcode_amd(int cpu)
++static enum ucode_state apply_microcode_amd(int cpu)
+ {
+ struct cpuinfo_x86 *c = &cpu_data(cpu);
+ struct microcode_amd *mc_amd;
+@@ -512,7 +512,7 @@ static int apply_microcode_amd(int cpu)
+
+ p = find_patch(cpu);
+ if (!p)
+- return 0;
++ return UCODE_NFOUND;
+
+ mc_amd = p->data;
+ uci->mc = p->data;
+@@ -523,13 +523,13 @@ static int apply_microcode_amd(int cpu)
+ if (rev >= mc_amd->hdr.patch_id) {
+ c->microcode = rev;
+ uci->cpu_sig.rev = rev;
+- return 0;
++ return UCODE_OK;
+ }
+
+ if (__apply_microcode_amd(mc_amd)) {
+ pr_err("CPU%d: update failed for patch_level=0x%08x\n",
+ cpu, mc_amd->hdr.patch_id);
+- return -1;
++ return UCODE_ERROR;
+ }
+ pr_info("CPU%d: new patch_level=0x%08x\n", cpu,
+ mc_amd->hdr.patch_id);
+@@ -537,7 +537,7 @@ static int apply_microcode_amd(int cpu)
+ uci->cpu_sig.rev = mc_amd->hdr.patch_id;
+ c->microcode = mc_amd->hdr.patch_id;
+
+- return 0;
++ return UCODE_UPDATED;
+ }
+
+ static int install_equiv_cpu_table(const u8 *buf)
+@@ -683,27 +683,35 @@ static enum ucode_state __load_microcode_amd(u8 family, const u8 *data,
+ static enum ucode_state
+ load_microcode_amd(bool save, u8 family, const u8 *data, size_t size)
+ {
++ struct ucode_patch *p;
+ enum ucode_state ret;
+
+ /* free old equiv table */
+ free_equiv_cpu_table();
+
+ ret = __load_microcode_amd(family, data, size);
+-
+- if (ret != UCODE_OK)
++ if (ret != UCODE_OK) {
+ cleanup();
++ return ret;
++ }
+
+-#ifdef CONFIG_X86_32
+- /* save BSP's matching patch for early load */
+- if (save) {
+- struct ucode_patch *p = find_patch(0);
+- if (p) {
+- memset(amd_ucode_patch, 0, PATCH_MAX_SIZE);
+- memcpy(amd_ucode_patch, p->data, min_t(u32, ksize(p->data),
+- PATCH_MAX_SIZE));
+- }
++ p = find_patch(0);
++ if (!p) {
++ return ret;
++ } else {
++ if (boot_cpu_data.microcode == p->patch_id)
++ return ret;
++
++ ret = UCODE_NEW;
+ }
+-#endif
++
++ /* save BSP's matching patch for early load */
++ if (!save)
++ return ret;
++
++ memset(amd_ucode_patch, 0, PATCH_MAX_SIZE);
++ memcpy(amd_ucode_patch, p->data, min_t(u32, ksize(p->data), PATCH_MAX_SIZE));
++
+ return ret;
+ }
+
+diff --git a/arch/x86/kernel/cpu/microcode/core.c b/arch/x86/kernel/cpu/microcode/core.c
+index e4fc595cd6ea..021c90464cc2 100644
+--- a/arch/x86/kernel/cpu/microcode/core.c
++++ b/arch/x86/kernel/cpu/microcode/core.c
+@@ -22,13 +22,16 @@
+ #define pr_fmt(fmt) "microcode: " fmt
+
+ #include <linux/platform_device.h>
++#include <linux/stop_machine.h>
+ #include <linux/syscore_ops.h>
+ #include <linux/miscdevice.h>
+ #include <linux/capability.h>
+ #include <linux/firmware.h>
+ #include <linux/kernel.h>
++#include <linux/delay.h>
+ #include <linux/mutex.h>
+ #include <linux/cpu.h>
++#include <linux/nmi.h>
+ #include <linux/fs.h>
+ #include <linux/mm.h>
+
+@@ -64,6 +67,11 @@ LIST_HEAD(microcode_cache);
+ */
+ static DEFINE_MUTEX(microcode_mutex);
+
++/*
++ * Serialize late loading so that CPUs get updated one-by-one.
++ */
++static DEFINE_SPINLOCK(update_lock);
++
+ struct ucode_cpu_info ucode_cpu_info[NR_CPUS];
+
+ struct cpu_info_ctx {
+@@ -373,26 +381,23 @@ static int collect_cpu_info(int cpu)
+ return ret;
+ }
+
+-struct apply_microcode_ctx {
+- int err;
+-};
+-
+ static void apply_microcode_local(void *arg)
+ {
+- struct apply_microcode_ctx *ctx = arg;
++ enum ucode_state *err = arg;
+
+- ctx->err = microcode_ops->apply_microcode(smp_processor_id());
++ *err = microcode_ops->apply_microcode(smp_processor_id());
+ }
+
+ static int apply_microcode_on_target(int cpu)
+ {
+- struct apply_microcode_ctx ctx = { .err = 0 };
++ enum ucode_state err;
+ int ret;
+
+- ret = smp_call_function_single(cpu, apply_microcode_local, &ctx, 1);
+- if (!ret)
+- ret = ctx.err;
+-
++ ret = smp_call_function_single(cpu, apply_microcode_local, &err, 1);
++ if (!ret) {
++ if (err == UCODE_ERROR)
++ ret = 1;
++ }
+ return ret;
+ }
+
+@@ -489,31 +494,124 @@ static void __exit microcode_dev_exit(void)
+ /* fake device for request_firmware */
+ static struct platform_device *microcode_pdev;
+
+-static int reload_for_cpu(int cpu)
++/*
++ * Late loading dance. Why the heavy-handed stomp_machine effort?
++ *
++ * - HT siblings must be idle and not execute other code while the other sibling
++ * is loading microcode in order to avoid any negative interactions caused by
++ * the loading.
++ *
++ * - In addition, microcode update on the cores must be serialized until this
++ * requirement can be relaxed in the future. Right now, this is conservative
++ * and good.
++ */
++#define SPINUNIT 100 /* 100 nsec */
++
++static int check_online_cpus(void)
+ {
+- struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+- enum ucode_state ustate;
+- int err = 0;
++ if (num_online_cpus() == num_present_cpus())
++ return 0;
+
+- if (!uci->valid)
+- return err;
++ pr_err("Not all CPUs online, aborting microcode update.\n");
+
+- ustate = microcode_ops->request_microcode_fw(cpu, &microcode_pdev->dev, true);
+- if (ustate == UCODE_OK)
+- apply_microcode_on_target(cpu);
+- else
+- if (ustate == UCODE_ERROR)
+- err = -EINVAL;
+- return err;
++ return -EINVAL;
++}
++
++static atomic_t late_cpus_in;
++static atomic_t late_cpus_out;
++
++static int __wait_for_cpus(atomic_t *t, long long timeout)
++{
++ int all_cpus = num_online_cpus();
++
++ atomic_inc(t);
++
++ while (atomic_read(t) < all_cpus) {
++ if (timeout < SPINUNIT) {
++ pr_err("Timeout while waiting for CPUs rendezvous, remaining: %d\n",
++ all_cpus - atomic_read(t));
++ return 1;
++ }
++
++ ndelay(SPINUNIT);
++ timeout -= SPINUNIT;
++
++ touch_nmi_watchdog();
++ }
++ return 0;
++}
++
++/*
++ * Returns:
++ * < 0 - on error
++ * 0 - no update done
++ * 1 - microcode was updated
++ */
++static int __reload_late(void *info)
++{
++ int cpu = smp_processor_id();
++ enum ucode_state err;
++ int ret = 0;
++
++ /*
++ * Wait for all CPUs to arrive. A load will not be attempted unless all
++ * CPUs show up.
++ * */
++ if (__wait_for_cpus(&late_cpus_in, NSEC_PER_SEC))
++ return -1;
++
++ spin_lock(&update_lock);
++ apply_microcode_local(&err);
++ spin_unlock(&update_lock);
++
++ if (err > UCODE_NFOUND) {
++ pr_warn("Error reloading microcode on CPU %d\n", cpu);
++ return -1;
++ /* siblings return UCODE_OK because their engine got updated already */
++ } else if (err == UCODE_UPDATED || err == UCODE_OK) {
++ ret = 1;
++ } else {
++ return ret;
++ }
++
++ /*
++ * Increase the wait timeout to a safe value here since we're
++ * serializing the microcode update and that could take a while on a
++ * large number of CPUs. And that is fine as the *actual* timeout will
++ * be determined by the last CPU finished updating and thus cut short.
++ */
++ if (__wait_for_cpus(&late_cpus_out, NSEC_PER_SEC * num_online_cpus()))
++ panic("Timeout during microcode update!\n");
++
++ return ret;
++}
++
++/*
++ * Reload microcode late on all CPUs. Wait for a sec until they
++ * all gather together.
++ */
++static int microcode_reload_late(void)
++{
++ int ret;
++
++ atomic_set(&late_cpus_in, 0);
++ atomic_set(&late_cpus_out, 0);
++
++ ret = stop_machine_cpuslocked(__reload_late, NULL, cpu_online_mask);
++ if (ret > 0)
++ microcode_check();
++
++ return ret;
+ }
+
+ static ssize_t reload_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+ {
++ enum ucode_state tmp_ret = UCODE_OK;
++ int bsp = boot_cpu_data.cpu_index;
+ unsigned long val;
+- int cpu;
+- ssize_t ret = 0, tmp_ret;
++ ssize_t ret = 0;
+
+ ret = kstrtoul(buf, 0, &val);
+ if (ret)
+@@ -522,23 +620,24 @@ static ssize_t reload_store(struct device *dev,
+ if (val != 1)
+ return size;
+
++ tmp_ret = microcode_ops->request_microcode_fw(bsp, &microcode_pdev->dev, true);
++ if (tmp_ret != UCODE_NEW)
++ return size;
++
+ get_online_cpus();
+- mutex_lock(&microcode_mutex);
+- for_each_online_cpu(cpu) {
+- tmp_ret = reload_for_cpu(cpu);
+- if (tmp_ret != 0)
+- pr_warn("Error reloading microcode on CPU %d\n", cpu);
+
+- /* save retval of the first encountered reload error */
+- if (!ret)
+- ret = tmp_ret;
+- }
+- if (!ret)
+- perf_check_microcode();
++ ret = check_online_cpus();
++ if (ret)
++ goto put;
++
++ mutex_lock(&microcode_mutex);
++ ret = microcode_reload_late();
+ mutex_unlock(&microcode_mutex);
++
++put:
+ put_online_cpus();
+
+- if (!ret)
++ if (ret >= 0)
+ ret = size;
+
+ return ret;
+@@ -606,10 +705,8 @@ static enum ucode_state microcode_init_cpu(int cpu, bool refresh_fw)
+ if (system_state != SYSTEM_RUNNING)
+ return UCODE_NFOUND;
+
+- ustate = microcode_ops->request_microcode_fw(cpu, &microcode_pdev->dev,
+- refresh_fw);
+-
+- if (ustate == UCODE_OK) {
++ ustate = microcode_ops->request_microcode_fw(cpu, &microcode_pdev->dev, refresh_fw);
++ if (ustate == UCODE_NEW) {
+ pr_debug("CPU%d updated upon init\n", cpu);
+ apply_microcode_on_target(cpu);
+ }
+diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
+index a15db2b4e0d6..32b8e5724f96 100644
+--- a/arch/x86/kernel/cpu/microcode/intel.c
++++ b/arch/x86/kernel/cpu/microcode/intel.c
+@@ -589,6 +589,23 @@ static int apply_microcode_early(struct ucode_cpu_info *uci, bool early)
+ if (!mc)
+ return 0;
+
++ /*
++ * Save us the MSR write below - which is a particular expensive
++ * operation - when the other hyperthread has updated the microcode
++ * already.
++ */
++ rev = intel_get_microcode_revision();
++ if (rev >= mc->hdr.rev) {
++ uci->cpu_sig.rev = rev;
++ return UCODE_OK;
++ }
++
++ /*
++ * Writeback and invalidate caches before updating microcode to avoid
++ * internal issues depending on what the microcode is updating.
++ */
++ native_wbinvd();
++
+ /* write microcode via MSR 0x79 */
+ native_wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)mc->bits);
+
+@@ -772,27 +789,44 @@ static int collect_cpu_info(int cpu_num, struct cpu_signature *csig)
+ return 0;
+ }
+
+-static int apply_microcode_intel(int cpu)
++static enum ucode_state apply_microcode_intel(int cpu)
+ {
++ struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
++ struct cpuinfo_x86 *c = &cpu_data(cpu);
+ struct microcode_intel *mc;
+- struct ucode_cpu_info *uci;
+- struct cpuinfo_x86 *c;
+ static int prev_rev;
+ u32 rev;
+
+ /* We should bind the task to the CPU */
+ if (WARN_ON(raw_smp_processor_id() != cpu))
+- return -1;
++ return UCODE_ERROR;
+
+- uci = ucode_cpu_info + cpu;
+- mc = uci->mc;
++ /* Look for a newer patch in our cache: */
++ mc = find_patch(uci);
+ if (!mc) {
+- /* Look for a newer patch in our cache: */
+- mc = find_patch(uci);
++ mc = uci->mc;
+ if (!mc)
+- return 0;
++ return UCODE_NFOUND;
+ }
+
++ /*
++ * Save us the MSR write below - which is a particular expensive
++ * operation - when the other hyperthread has updated the microcode
++ * already.
++ */
++ rev = intel_get_microcode_revision();
++ if (rev >= mc->hdr.rev) {
++ uci->cpu_sig.rev = rev;
++ c->microcode = rev;
++ return UCODE_OK;
++ }
++
++ /*
++ * Writeback and invalidate caches before updating microcode to avoid
++ * internal issues depending on what the microcode is updating.
++ */
++ native_wbinvd();
++
+ /* write microcode via MSR 0x79 */
+ wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)mc->bits);
+
+@@ -801,7 +835,7 @@ static int apply_microcode_intel(int cpu)
+ if (rev != mc->hdr.rev) {
+ pr_err("CPU%d update to revision 0x%x failed\n",
+ cpu, mc->hdr.rev);
+- return -1;
++ return UCODE_ERROR;
+ }
+
+ if (rev != prev_rev) {
+@@ -813,12 +847,10 @@ static int apply_microcode_intel(int cpu)
+ prev_rev = rev;
+ }
+
+- c = &cpu_data(cpu);
+-
+ uci->cpu_sig.rev = rev;
+ c->microcode = rev;
+
+- return 0;
++ return UCODE_UPDATED;
+ }
+
+ static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size,
+@@ -830,6 +862,7 @@ static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size,
+ unsigned int leftover = size;
+ unsigned int curr_mc_size = 0, new_mc_size = 0;
+ unsigned int csig, cpf;
++ enum ucode_state ret = UCODE_OK;
+
+ while (leftover) {
+ struct microcode_header_intel mc_header;
+@@ -871,6 +904,7 @@ static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size,
+ new_mc = mc;
+ new_mc_size = mc_size;
+ mc = NULL; /* trigger new vmalloc */
++ ret = UCODE_NEW;
+ }
+
+ ucode_ptr += mc_size;
+@@ -900,7 +934,7 @@ static enum ucode_state generic_load_microcode(int cpu, void *data, size_t size,
+ pr_debug("CPU%d found a matching microcode update with version 0x%x (current=0x%x)\n",
+ cpu, new_rev, uci->cpu_sig.rev);
+
+- return UCODE_OK;
++ return ret;
+ }
+
+ static int get_ucode_fw(void *to, const void *from, size_t n)
+diff --git a/arch/x86/xen/mmu_hvm.c b/arch/x86/xen/mmu_hvm.c
+index 2cfcfe4f6b2a..dd2ad82eee80 100644
+--- a/arch/x86/xen/mmu_hvm.c
++++ b/arch/x86/xen/mmu_hvm.c
+@@ -75,6 +75,6 @@ void __init xen_hvm_init_mmu_ops(void)
+ if (is_pagetable_dying_supported())
+ pv_mmu_ops.exit_mmap = xen_hvm_exit_mmap;
+ #ifdef CONFIG_PROC_VMCORE
+- register_oldmem_pfn_is_ram(&xen_oldmem_pfn_is_ram);
++ WARN_ON(register_oldmem_pfn_is_ram(&xen_oldmem_pfn_is_ram));
+ #endif
+ }
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index da1525ec4c87..d819dc77fe65 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -775,10 +775,11 @@ static void bfq_pd_offline(struct blkg_policy_data *pd)
+ unsigned long flags;
+ int i;
+
++ spin_lock_irqsave(&bfqd->lock, flags);
++
+ if (!entity) /* root group */
+- return;
++ goto put_async_queues;
+
+- spin_lock_irqsave(&bfqd->lock, flags);
+ /*
+ * Empty all service_trees belonging to this group before
+ * deactivating the group itself.
+@@ -809,6 +810,8 @@ static void bfq_pd_offline(struct blkg_policy_data *pd)
+ }
+
+ __bfq_deactivate_entity(entity, false);
++
++put_async_queues:
+ bfq_put_async_queues(bfqd, bfqg);
+
+ spin_unlock_irqrestore(&bfqd->lock, flags);
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 5629f18b51bd..ab88ff3314a7 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1996,7 +1996,8 @@ static void blk_mq_exit_hctx(struct request_queue *q,
+ {
+ blk_mq_debugfs_unregister_hctx(hctx);
+
+- blk_mq_tag_idle(hctx);
++ if (blk_mq_hw_queue_mapped(hctx))
++ blk_mq_tag_idle(hctx);
+
+ if (set->ops->exit_request)
+ set->ops->exit_request(set, hctx->fq->flush_rq, hctx_idx);
+@@ -2388,6 +2389,9 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
+ struct blk_mq_hw_ctx **hctxs = q->queue_hw_ctx;
+
+ blk_mq_sysfs_unregister(q);
++
++ /* protect against switching io scheduler */
++ mutex_lock(&q->sysfs_lock);
+ for (i = 0; i < set->nr_hw_queues; i++) {
+ int node;
+
+@@ -2432,6 +2436,7 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
+ }
+ }
+ q->nr_hw_queues = i;
++ mutex_unlock(&q->sysfs_lock);
+ blk_mq_sysfs_register(q);
+ }
+
+@@ -2603,9 +2608,27 @@ static int blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
+
+ static int blk_mq_update_queue_map(struct blk_mq_tag_set *set)
+ {
+- if (set->ops->map_queues)
++ if (set->ops->map_queues) {
++ int cpu;
++ /*
++ * transport .map_queues is usually done in the following
++ * way:
++ *
++ * for (queue = 0; queue < set->nr_hw_queues; queue++) {
++ * mask = get_cpu_mask(queue)
++ * for_each_cpu(cpu, mask)
++ * set->mq_map[cpu] = queue;
++ * }
++ *
++ * When we need to remap, the table has to be cleared for
++ * killing stale mapping since one CPU may not be mapped
++ * to any hw queue.
++ */
++ for_each_possible_cpu(cpu)
++ set->mq_map[cpu] = 0;
++
+ return set->ops->map_queues(set);
+- else
++ } else
+ return blk_mq_map_queues(set);
+ }
+
+diff --git a/crypto/Makefile b/crypto/Makefile
+index d674884b2d51..daa69360e054 100644
+--- a/crypto/Makefile
++++ b/crypto/Makefile
+@@ -99,6 +99,7 @@ obj-$(CONFIG_CRYPTO_TWOFISH_COMMON) += twofish_common.o
+ obj-$(CONFIG_CRYPTO_SERPENT) += serpent_generic.o
+ CFLAGS_serpent_generic.o := $(call cc-option,-fsched-pressure) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79149
+ obj-$(CONFIG_CRYPTO_AES) += aes_generic.o
++CFLAGS_aes_generic.o := $(call cc-ifversion, -ge, 0701, -Os) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=83356
+ obj-$(CONFIG_CRYPTO_AES_TI) += aes_ti.o
+ obj-$(CONFIG_CRYPTO_CAMELLIA) += camellia_generic.o
+ obj-$(CONFIG_CRYPTO_CAST_COMMON) += cast_common.o
+diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
+index 0972ec0e2eb8..f53ccc680238 100644
+--- a/drivers/acpi/acpi_video.c
++++ b/drivers/acpi/acpi_video.c
+@@ -80,8 +80,8 @@ MODULE_PARM_DESC(report_key_events,
+ static bool device_id_scheme = false;
+ module_param(device_id_scheme, bool, 0444);
+
+-static bool only_lcd = false;
+-module_param(only_lcd, bool, 0444);
++static int only_lcd = -1;
++module_param(only_lcd, int, 0444);
+
+ static int register_count;
+ static DEFINE_MUTEX(register_count_mutex);
+@@ -2136,6 +2136,16 @@ int acpi_video_register(void)
+ goto leave;
+ }
+
++ /*
++ * We're seeing a lot of bogus backlight interfaces on newer machines
++ * without a LCD such as desktops, servers and HDMI sticks. Checking
++ * the lcd flag fixes this, so enable this on any machines which are
++ * win8 ready (where we also prefer the native backlight driver, so
++ * normally the acpi_video code should not register there anyways).
++ */
++ if (only_lcd == -1)
++ only_lcd = acpi_osi_is_win8();
++
+ dmi_check_system(video_dmi_table);
+
+ ret = acpi_bus_register_driver(&acpi_video_bus);
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 0252c9b9af3d..d9f38c645e4a 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -1516,7 +1516,7 @@ static int acpi_ec_setup(struct acpi_ec *ec, bool handle_events)
+ }
+
+ acpi_handle_info(ec->handle,
+- "GPE=0x%lx, EC_CMD/EC_SC=0x%lx, EC_DATA=0x%lx\n",
++ "GPE=0x%x, EC_CMD/EC_SC=0x%lx, EC_DATA=0x%lx\n",
+ ec->gpe, ec->command_addr, ec->data_addr);
+ return ret;
+ }
+diff --git a/drivers/acpi/ec_sys.c b/drivers/acpi/ec_sys.c
+index 6c7dd7af789e..dd70d6c2bca0 100644
+--- a/drivers/acpi/ec_sys.c
++++ b/drivers/acpi/ec_sys.c
+@@ -128,7 +128,7 @@ static int acpi_ec_add_debugfs(struct acpi_ec *ec, unsigned int ec_device_count)
+ return -ENOMEM;
+ }
+
+- if (!debugfs_create_x32("gpe", 0444, dev_dir, (u32 *)&first_ec->gpe))
++ if (!debugfs_create_x32("gpe", 0444, dev_dir, &first_ec->gpe))
+ goto error;
+ if (!debugfs_create_bool("use_global_lock", 0444, dev_dir,
+ &first_ec->global_lock))
+diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
+index 7f43423de43c..1d0a501bc7f0 100644
+--- a/drivers/acpi/internal.h
++++ b/drivers/acpi/internal.h
+@@ -159,7 +159,7 @@ static inline void acpi_early_processor_osc(void) {}
+ -------------------------------------------------------------------------- */
+ struct acpi_ec {
+ acpi_handle handle;
+- unsigned long gpe;
++ u32 gpe;
+ unsigned long command_addr;
+ unsigned long data_addr;
+ bool global_lock;
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 0c80bea05bcb..b4501873354e 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -1032,15 +1032,12 @@ static int genpd_prepare(struct device *dev)
+ static int genpd_finish_suspend(struct device *dev, bool poweroff)
+ {
+ struct generic_pm_domain *genpd;
+- int ret;
++ int ret = 0;
+
+ genpd = dev_to_genpd(dev);
+ if (IS_ERR(genpd))
+ return -EINVAL;
+
+- if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
+- return 0;
+-
+ if (poweroff)
+ ret = pm_generic_poweroff_noirq(dev);
+ else
+@@ -1048,10 +1045,18 @@ static int genpd_finish_suspend(struct device *dev, bool poweroff)
+ if (ret)
+ return ret;
+
++ if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
++ return 0;
++
+ if (genpd->dev_ops.stop && genpd->dev_ops.start) {
+ ret = pm_runtime_force_suspend(dev);
+- if (ret)
++ if (ret) {
++ if (poweroff)
++ pm_generic_restore_noirq(dev);
++ else
++ pm_generic_resume_noirq(dev);
+ return ret;
++ }
+ }
+
+ genpd_lock(genpd);
+@@ -1085,7 +1090,7 @@ static int genpd_suspend_noirq(struct device *dev)
+ static int genpd_resume_noirq(struct device *dev)
+ {
+ struct generic_pm_domain *genpd;
+- int ret = 0;
++ int ret;
+
+ dev_dbg(dev, "%s()\n", __func__);
+
+@@ -1094,21 +1099,20 @@ static int genpd_resume_noirq(struct device *dev)
+ return -EINVAL;
+
+ if (dev->power.wakeup_path && genpd_is_active_wakeup(genpd))
+- return 0;
++ return pm_generic_resume_noirq(dev);
+
+ genpd_lock(genpd);
+ genpd_sync_power_on(genpd, true, 0);
+ genpd->suspended_count--;
+ genpd_unlock(genpd);
+
+- if (genpd->dev_ops.stop && genpd->dev_ops.start)
++ if (genpd->dev_ops.stop && genpd->dev_ops.start) {
+ ret = pm_runtime_force_resume(dev);
++ if (ret)
++ return ret;
++ }
+
+- ret = pm_generic_resume_noirq(dev);
+- if (ret)
+- return ret;
+-
+- return ret;
++ return pm_generic_resume_noirq(dev);
+ }
+
+ /**
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 2f57e8b88a7a..0fec82469536 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -272,6 +272,7 @@ static const struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x0489, 0xe09f), .driver_info = BTUSB_QCA_ROME },
+ { USB_DEVICE(0x0489, 0xe0a2), .driver_info = BTUSB_QCA_ROME },
+ { USB_DEVICE(0x04ca, 0x3011), .driver_info = BTUSB_QCA_ROME },
++ { USB_DEVICE(0x04ca, 0x3015), .driver_info = BTUSB_QCA_ROME },
+ { USB_DEVICE(0x04ca, 0x3016), .driver_info = BTUSB_QCA_ROME },
+
+ /* Broadcom BCM2035 */
+diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c
+index 707c2d1b84c7..7d98f9a17636 100644
+--- a/drivers/bluetooth/hci_bcm.c
++++ b/drivers/bluetooth/hci_bcm.c
+@@ -379,7 +379,7 @@ static int bcm_close(struct hci_uart *hu)
+ pm_runtime_disable(bdev->dev);
+ pm_runtime_set_suspended(bdev->dev);
+
+- if (device_can_wakeup(bdev->dev)) {
++ if (bdev->irq > 0) {
+ devm_free_irq(bdev->dev, bdev->irq, bdev);
+ device_init_wakeup(bdev->dev, false);
+ }
+@@ -577,11 +577,9 @@ static int bcm_suspend_device(struct device *dev)
+ }
+
+ /* Suspend the device */
+- if (bdev->device_wakeup) {
+- gpiod_set_value(bdev->device_wakeup, false);
+- bt_dev_dbg(bdev, "suspend, delaying 15 ms");
+- mdelay(15);
+- }
++ gpiod_set_value(bdev->device_wakeup, false);
++ bt_dev_dbg(bdev, "suspend, delaying 15 ms");
++ mdelay(15);
+
+ return 0;
+ }
+@@ -592,11 +590,9 @@ static int bcm_resume_device(struct device *dev)
+
+ bt_dev_dbg(bdev, "");
+
+- if (bdev->device_wakeup) {
+- gpiod_set_value(bdev->device_wakeup, true);
+- bt_dev_dbg(bdev, "resume, delaying 15 ms");
+- mdelay(15);
+- }
++ gpiod_set_value(bdev->device_wakeup, true);
++ bt_dev_dbg(bdev, "resume, delaying 15 ms");
++ mdelay(15);
+
+ /* When this executes, the device has woken up already */
+ if (bdev->is_suspended && bdev->hu) {
+@@ -632,7 +628,7 @@ static int bcm_suspend(struct device *dev)
+ if (pm_runtime_active(dev))
+ bcm_suspend_device(dev);
+
+- if (device_may_wakeup(dev)) {
++ if (device_may_wakeup(dev) && bdev->irq > 0) {
+ error = enable_irq_wake(bdev->irq);
+ if (!error)
+ bt_dev_dbg(bdev, "BCM irq: enabled");
+@@ -662,7 +658,7 @@ static int bcm_resume(struct device *dev)
+ if (!bdev->hu)
+ goto unlock;
+
+- if (device_may_wakeup(dev)) {
++ if (device_may_wakeup(dev) && bdev->irq > 0) {
+ disable_irq_wake(bdev->irq);
+ bt_dev_dbg(bdev, "BCM irq: disabled");
+ }
+@@ -779,8 +775,7 @@ static int bcm_get_resources(struct bcm_device *dev)
+
+ dev->clk = devm_clk_get(dev->dev, NULL);
+
+- dev->device_wakeup = devm_gpiod_get_optional(dev->dev,
+- "device-wakeup",
++ dev->device_wakeup = devm_gpiod_get_optional(dev->dev, "device-wakeup",
+ GPIOD_OUT_LOW);
+ if (IS_ERR(dev->device_wakeup))
+ return PTR_ERR(dev->device_wakeup);
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index 5294442505cb..0f1dc35e7078 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -328,7 +328,7 @@ unsigned long tpm_calc_ordinal_duration(struct tpm_chip *chip,
+ }
+ EXPORT_SYMBOL_GPL(tpm_calc_ordinal_duration);
+
+-static bool tpm_validate_command(struct tpm_chip *chip,
++static int tpm_validate_command(struct tpm_chip *chip,
+ struct tpm_space *space,
+ const u8 *cmd,
+ size_t len)
+@@ -340,10 +340,10 @@ static bool tpm_validate_command(struct tpm_chip *chip,
+ unsigned int nr_handles;
+
+ if (len < TPM_HEADER_SIZE)
+- return false;
++ return -EINVAL;
+
+ if (!space)
+- return true;
++ return 0;
+
+ if (chip->flags & TPM_CHIP_FLAG_TPM2 && chip->nr_commands) {
+ cc = be32_to_cpu(header->ordinal);
+@@ -352,7 +352,7 @@ static bool tpm_validate_command(struct tpm_chip *chip,
+ if (i < 0) {
+ dev_dbg(&chip->dev, "0x%04X is an invalid command\n",
+ cc);
+- return false;
++ return -EOPNOTSUPP;
+ }
+
+ attrs = chip->cc_attrs_tbl[i];
+@@ -362,11 +362,11 @@ static bool tpm_validate_command(struct tpm_chip *chip,
+ goto err_len;
+ }
+
+- return true;
++ return 0;
+ err_len:
+ dev_dbg(&chip->dev,
+ "%s: insufficient command length %zu", __func__, len);
+- return false;
++ return -EINVAL;
+ }
+
+ /**
+@@ -391,8 +391,20 @@ ssize_t tpm_transmit(struct tpm_chip *chip, struct tpm_space *space,
+ unsigned long stop;
+ bool need_locality;
+
+- if (!tpm_validate_command(chip, space, buf, bufsiz))
+- return -EINVAL;
++ rc = tpm_validate_command(chip, space, buf, bufsiz);
++ if (rc == -EINVAL)
++ return rc;
++ /*
++ * If the command is not implemented by the TPM, synthesize a
++ * response with a TPM2_RC_COMMAND_CODE return for user-space.
++ */
++ if (rc == -EOPNOTSUPP) {
++ header->length = cpu_to_be32(sizeof(*header));
++ header->tag = cpu_to_be16(TPM2_ST_NO_SESSIONS);
++ header->return_code = cpu_to_be32(TPM2_RC_COMMAND_CODE |
++ TSS2_RESMGR_TPM_RC_LAYER);
++ return bufsiz;
++ }
+
+ if (bufsiz > TPM_BUFSIZE)
+ bufsiz = TPM_BUFSIZE;
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 528cffbd49d3..f6f56dfda6c7 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -93,12 +93,17 @@ enum tpm2_structures {
+ TPM2_ST_SESSIONS = 0x8002,
+ };
+
++/* Indicates from what layer of the software stack the error comes from */
++#define TSS2_RC_LAYER_SHIFT 16
++#define TSS2_RESMGR_TPM_RC_LAYER (11 << TSS2_RC_LAYER_SHIFT)
++
+ enum tpm2_return_codes {
+ TPM2_RC_SUCCESS = 0x0000,
+ TPM2_RC_HASH = 0x0083, /* RC_FMT1 */
+ TPM2_RC_HANDLE = 0x008B,
+ TPM2_RC_INITIALIZE = 0x0100, /* RC_VER1 */
+ TPM2_RC_DISABLED = 0x0120,
++ TPM2_RC_COMMAND_CODE = 0x0143,
+ TPM2_RC_TESTING = 0x090A, /* RC_WARN */
+ TPM2_RC_REFERENCE_H0 = 0x0910,
+ };
+diff --git a/drivers/clk/clk-divider.c b/drivers/clk/clk-divider.c
+index 4ed516cb7276..b49942b9fe50 100644
+--- a/drivers/clk/clk-divider.c
++++ b/drivers/clk/clk-divider.c
+@@ -118,12 +118,11 @@ static unsigned int _get_val(const struct clk_div_table *table,
+ unsigned long divider_recalc_rate(struct clk_hw *hw, unsigned long parent_rate,
+ unsigned int val,
+ const struct clk_div_table *table,
+- unsigned long flags)
++ unsigned long flags, unsigned long width)
+ {
+- struct clk_divider *divider = to_clk_divider(hw);
+ unsigned int div;
+
+- div = _get_div(table, val, flags, divider->width);
++ div = _get_div(table, val, flags, width);
+ if (!div) {
+ WARN(!(flags & CLK_DIVIDER_ALLOW_ZERO),
+ "%s: Zero divisor and CLK_DIVIDER_ALLOW_ZERO not set\n",
+@@ -145,7 +144,7 @@ static unsigned long clk_divider_recalc_rate(struct clk_hw *hw,
+ val &= div_mask(divider->width);
+
+ return divider_recalc_rate(hw, parent_rate, val, divider->table,
+- divider->flags);
++ divider->flags, divider->width);
+ }
+
+ static bool _is_valid_table_div(const struct clk_div_table *table,
+diff --git a/drivers/clk/hisilicon/clkdivider-hi6220.c b/drivers/clk/hisilicon/clkdivider-hi6220.c
+index a1c1f684ad58..9f46cf9dcc65 100644
+--- a/drivers/clk/hisilicon/clkdivider-hi6220.c
++++ b/drivers/clk/hisilicon/clkdivider-hi6220.c
+@@ -56,7 +56,7 @@ static unsigned long hi6220_clkdiv_recalc_rate(struct clk_hw *hw,
+ val &= div_mask(dclk->width);
+
+ return divider_recalc_rate(hw, parent_rate, val, dclk->table,
+- CLK_DIVIDER_ROUND_CLOSEST);
++ CLK_DIVIDER_ROUND_CLOSEST, dclk->width);
+ }
+
+ static long hi6220_clkdiv_round_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/clk/meson/clk-mpll.c b/drivers/clk/meson/clk-mpll.c
+index 44a5a535ca63..5144360e2c80 100644
+--- a/drivers/clk/meson/clk-mpll.c
++++ b/drivers/clk/meson/clk-mpll.c
+@@ -98,7 +98,7 @@ static void params_from_rate(unsigned long requested_rate,
+ *sdm = SDM_DEN - 1;
+ } else {
+ *n2 = div;
+- *sdm = DIV_ROUND_UP(rem * SDM_DEN, requested_rate);
++ *sdm = DIV_ROUND_UP_ULL((u64)rem * SDM_DEN, requested_rate);
+ }
+ }
+
+diff --git a/drivers/clk/nxp/clk-lpc32xx.c b/drivers/clk/nxp/clk-lpc32xx.c
+index 7b359afd620e..a6438f50e6db 100644
+--- a/drivers/clk/nxp/clk-lpc32xx.c
++++ b/drivers/clk/nxp/clk-lpc32xx.c
+@@ -956,7 +956,7 @@ static unsigned long clk_divider_recalc_rate(struct clk_hw *hw,
+ val &= div_mask(divider->width);
+
+ return divider_recalc_rate(hw, parent_rate, val, divider->table,
+- divider->flags);
++ divider->flags, divider->width);
+ }
+
+ static long clk_divider_round_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/clk/qcom/clk-regmap-divider.c b/drivers/clk/qcom/clk-regmap-divider.c
+index 53484912301e..928fcc16ee27 100644
+--- a/drivers/clk/qcom/clk-regmap-divider.c
++++ b/drivers/clk/qcom/clk-regmap-divider.c
+@@ -59,7 +59,7 @@ static unsigned long div_recalc_rate(struct clk_hw *hw,
+ div &= BIT(divider->width) - 1;
+
+ return divider_recalc_rate(hw, parent_rate, div, NULL,
+- CLK_DIVIDER_ROUND_CLOSEST);
++ CLK_DIVIDER_ROUND_CLOSEST, divider->width);
+ }
+
+ const struct clk_ops clk_regmap_div_ops = {
+diff --git a/drivers/clk/sunxi-ng/ccu-sun8i-a83t.c b/drivers/clk/sunxi-ng/ccu-sun8i-a83t.c
+index 5cedcd0d8be8..aeafa7a4fff5 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun8i-a83t.c
++++ b/drivers/clk/sunxi-ng/ccu-sun8i-a83t.c
+@@ -493,8 +493,8 @@ static SUNXI_CCU_MUX_WITH_GATE(tcon0_clk, "tcon0", tcon0_parents,
+ 0x118, 24, 3, BIT(31), CLK_SET_RATE_PARENT);
+
+ static const char * const tcon1_parents[] = { "pll-video1" };
+-static SUNXI_CCU_MUX_WITH_GATE(tcon1_clk, "tcon1", tcon1_parents,
+- 0x11c, 24, 3, BIT(31), CLK_SET_RATE_PARENT);
++static SUNXI_CCU_M_WITH_MUX_GATE(tcon1_clk, "tcon1", tcon1_parents,
++ 0x11c, 0, 4, 24, 2, BIT(31), CLK_SET_RATE_PARENT);
+
+ static SUNXI_CCU_GATE(csi_misc_clk, "csi-misc", "osc24M", 0x130, BIT(16), 0);
+
+diff --git a/drivers/clk/sunxi-ng/ccu_div.c b/drivers/clk/sunxi-ng/ccu_div.c
+index baa3cf96507b..302a18efd39f 100644
+--- a/drivers/clk/sunxi-ng/ccu_div.c
++++ b/drivers/clk/sunxi-ng/ccu_div.c
+@@ -71,7 +71,7 @@ static unsigned long ccu_div_recalc_rate(struct clk_hw *hw,
+ parent_rate);
+
+ val = divider_recalc_rate(hw, parent_rate, val, cd->div.table,
+- cd->div.flags);
++ cd->div.flags, cd->div.width);
+
+ if (cd->common.features & CCU_FEATURE_FIXED_POSTDIV)
+ val /= cd->fixed_post_div;
+diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
+index da7fdb4b661a..37381238bf69 100644
+--- a/drivers/cpufreq/powernv-cpufreq.c
++++ b/drivers/cpufreq/powernv-cpufreq.c
+@@ -41,11 +41,9 @@
+ #define POWERNV_MAX_PSTATES 256
+ #define PMSR_PSAFE_ENABLE (1UL << 30)
+ #define PMSR_SPR_EM_DISABLE (1UL << 31)
+-#define PMSR_MAX(x) ((x >> 32) & 0xFF)
++#define MAX_PSTATE_SHIFT 32
+ #define LPSTATE_SHIFT 48
+ #define GPSTATE_SHIFT 56
+-#define GET_LPSTATE(x) (((x) >> LPSTATE_SHIFT) & 0xFF)
+-#define GET_GPSTATE(x) (((x) >> GPSTATE_SHIFT) & 0xFF)
+
+ #define MAX_RAMP_DOWN_TIME 5120
+ /*
+@@ -94,6 +92,7 @@ struct global_pstate_info {
+ };
+
+ static struct cpufreq_frequency_table powernv_freqs[POWERNV_MAX_PSTATES+1];
++u32 pstate_sign_prefix;
+ static bool rebooting, throttled, occ_reset;
+
+ static const char * const throttle_reason[] = {
+@@ -148,6 +147,20 @@ static struct powernv_pstate_info {
+ bool wof_enabled;
+ } powernv_pstate_info;
+
++static inline int extract_pstate(u64 pmsr_val, unsigned int shift)
++{
++ int ret = ((pmsr_val >> shift) & 0xFF);
++
++ if (!ret)
++ return ret;
++
++ return (pstate_sign_prefix | ret);
++}
++
++#define extract_local_pstate(x) extract_pstate(x, LPSTATE_SHIFT)
++#define extract_global_pstate(x) extract_pstate(x, GPSTATE_SHIFT)
++#define extract_max_pstate(x) extract_pstate(x, MAX_PSTATE_SHIFT)
++
+ /* Use following macros for conversions between pstate_id and index */
+ static inline int idx_to_pstate(unsigned int i)
+ {
+@@ -278,6 +291,9 @@ static int init_powernv_pstates(void)
+
+ powernv_pstate_info.nr_pstates = nr_pstates;
+ pr_debug("NR PStates %d\n", nr_pstates);
++
++ pstate_sign_prefix = pstate_min & ~0xFF;
++
+ for (i = 0; i < nr_pstates; i++) {
+ u32 id = be32_to_cpu(pstate_ids[i]);
+ u32 freq = be32_to_cpu(pstate_freqs[i]);
+@@ -438,17 +454,10 @@ struct powernv_smp_call_data {
+ static void powernv_read_cpu_freq(void *arg)
+ {
+ unsigned long pmspr_val;
+- s8 local_pstate_id;
+ struct powernv_smp_call_data *freq_data = arg;
+
+ pmspr_val = get_pmspr(SPRN_PMSR);
+-
+- /*
+- * The local pstate id corresponds bits 48..55 in the PMSR.
+- * Note: Watch out for the sign!
+- */
+- local_pstate_id = (pmspr_val >> 48) & 0xFF;
+- freq_data->pstate_id = local_pstate_id;
++ freq_data->pstate_id = extract_local_pstate(pmspr_val);
+ freq_data->freq = pstate_id_to_freq(freq_data->pstate_id);
+
+ pr_debug("cpu %d pmsr %016lX pstate_id %d frequency %d kHz\n",
+@@ -522,7 +531,7 @@ static void powernv_cpufreq_throttle_check(void *data)
+ chip = this_cpu_read(chip_info);
+
+ /* Check for Pmax Capping */
+- pmsr_pmax = (s8)PMSR_MAX(pmsr);
++ pmsr_pmax = extract_max_pstate(pmsr);
+ pmsr_pmax_idx = pstate_to_idx(pmsr_pmax);
+ if (pmsr_pmax_idx != powernv_pstate_info.max) {
+ if (chip->throttled)
+@@ -645,8 +654,8 @@ void gpstate_timer_handler(struct timer_list *t)
+ * value. Hence, read from PMCR to get correct data.
+ */
+ val = get_pmspr(SPRN_PMCR);
+- freq_data.gpstate_id = (s8)GET_GPSTATE(val);
+- freq_data.pstate_id = (s8)GET_LPSTATE(val);
++ freq_data.gpstate_id = extract_global_pstate(val);
++ freq_data.pstate_id = extract_local_pstate(val);
+ if (freq_data.gpstate_id == freq_data.pstate_id) {
+ reset_gpstates(policy);
+ spin_unlock(&gpstates->gpstate_lock);
+diff --git a/drivers/crypto/amcc/crypto4xx_alg.c b/drivers/crypto/amcc/crypto4xx_alg.c
+index eeaf27859d80..ea83d0bff0e9 100644
+--- a/drivers/crypto/amcc/crypto4xx_alg.c
++++ b/drivers/crypto/amcc/crypto4xx_alg.c
+@@ -256,10 +256,6 @@ static inline bool crypto4xx_aead_need_fallback(struct aead_request *req,
+ if (is_ccm && !(req->iv[0] == 1 || req->iv[0] == 3))
+ return true;
+
+- /* CCM - fix CBC MAC mismatch in special case */
+- if (is_ccm && decrypt && !req->assoclen)
+- return true;
+-
+ return false;
+ }
+
+@@ -330,7 +326,7 @@ int crypto4xx_setkey_aes_ccm(struct crypto_aead *cipher, const u8 *key,
+ sa = (struct dynamic_sa_ctl *) ctx->sa_in;
+ sa->sa_contents.w = SA_AES_CCM_CONTENTS | (keylen << 2);
+
+- set_dynamic_sa_command_0(sa, SA_NOT_SAVE_HASH, SA_NOT_SAVE_IV,
++ set_dynamic_sa_command_0(sa, SA_SAVE_HASH, SA_NOT_SAVE_IV,
+ SA_LOAD_HASH_FROM_SA, SA_LOAD_IV_FROM_STATE,
+ SA_NO_HEADER_PROC, SA_HASH_ALG_CBC_MAC,
+ SA_CIPHER_ALG_AES,
+diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c
+index c44954e274bc..33256b4a302e 100644
+--- a/drivers/crypto/amcc/crypto4xx_core.c
++++ b/drivers/crypto/amcc/crypto4xx_core.c
+@@ -570,15 +570,14 @@ static void crypto4xx_aead_done(struct crypto4xx_device *dev,
+ struct pd_uinfo *pd_uinfo,
+ struct ce_pd *pd)
+ {
+- struct aead_request *aead_req;
+- struct crypto4xx_ctx *ctx;
++ struct aead_request *aead_req = container_of(pd_uinfo->async_req,
++ struct aead_request, base);
+ struct scatterlist *dst = pd_uinfo->dest_va;
++ size_t cp_len = crypto_aead_authsize(
++ crypto_aead_reqtfm(aead_req));
++ u32 icv[cp_len];
+ int err = 0;
+
+- aead_req = container_of(pd_uinfo->async_req, struct aead_request,
+- base);
+- ctx = crypto_tfm_ctx(aead_req->base.tfm);
+-
+ if (pd_uinfo->using_sd) {
+ crypto4xx_copy_pkt_to_dst(dev, pd, pd_uinfo,
+ pd->pd_ctl_len.bf.pkt_len,
+@@ -590,38 +589,39 @@ static void crypto4xx_aead_done(struct crypto4xx_device *dev,
+
+ if (pd_uinfo->sa_va->sa_command_0.bf.dir == DIR_OUTBOUND) {
+ /* append icv at the end */
+- size_t cp_len = crypto_aead_authsize(
+- crypto_aead_reqtfm(aead_req));
+- u32 icv[cp_len];
+-
+ crypto4xx_memcpy_from_le32(icv, pd_uinfo->sr_va->save_digest,
+ cp_len);
+
+ scatterwalk_map_and_copy(icv, dst, aead_req->cryptlen,
+ cp_len, 1);
++ } else {
++ /* check icv at the end */
++ scatterwalk_map_and_copy(icv, aead_req->src,
++ aead_req->assoclen + aead_req->cryptlen -
++ cp_len, cp_len, 0);
++
++ crypto4xx_memcpy_from_le32(icv, icv, cp_len);
++
++ if (crypto_memneq(icv, pd_uinfo->sr_va->save_digest, cp_len))
++ err = -EBADMSG;
+ }
+
+ crypto4xx_ret_sg_desc(dev, pd_uinfo);
+
+ if (pd->pd_ctl.bf.status & 0xff) {
+- if (pd->pd_ctl.bf.status & 0x1) {
+- /* authentication error */
+- err = -EBADMSG;
+- } else {
+- if (!__ratelimit(&dev->aead_ratelimit)) {
+- if (pd->pd_ctl.bf.status & 2)
+- pr_err("pad fail error\n");
+- if (pd->pd_ctl.bf.status & 4)
+- pr_err("seqnum fail\n");
+- if (pd->pd_ctl.bf.status & 8)
+- pr_err("error _notify\n");
+- pr_err("aead return err status = 0x%02x\n",
+- pd->pd_ctl.bf.status & 0xff);
+- pr_err("pd pad_ctl = 0x%08x\n",
+- pd->pd_ctl.bf.pd_pad_ctl);
+- }
+- err = -EINVAL;
++ if (!__ratelimit(&dev->aead_ratelimit)) {
++ if (pd->pd_ctl.bf.status & 2)
++ pr_err("pad fail error\n");
++ if (pd->pd_ctl.bf.status & 4)
++ pr_err("seqnum fail\n");
++ if (pd->pd_ctl.bf.status & 8)
++ pr_err("error _notify\n");
++ pr_err("aead return err status = 0x%02x\n",
++ pd->pd_ctl.bf.status & 0xff);
++ pr_err("pd pad_ctl = 0x%08x\n",
++ pd->pd_ctl.bf.pd_pad_ctl);
+ }
++ err = -EINVAL;
+ }
+
+ if (pd_uinfo->state & PD_ENTRY_BUSY)
+diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
+index 99c4021fc33b..fe2af6aa88fc 100644
+--- a/drivers/devfreq/devfreq.c
++++ b/drivers/devfreq/devfreq.c
+@@ -996,7 +996,8 @@ static ssize_t governor_store(struct device *dev, struct device_attribute *attr,
+ if (df->governor == governor) {
+ ret = 0;
+ goto out;
+- } else if (df->governor->immutable || governor->immutable) {
++ } else if ((df->governor && df->governor->immutable) ||
++ governor->immutable) {
+ ret = -EINVAL;
+ goto out;
+ }
+diff --git a/drivers/edac/mv64x60_edac.c b/drivers/edac/mv64x60_edac.c
+index ec5d695bbb72..3c68bb525d5d 100644
+--- a/drivers/edac/mv64x60_edac.c
++++ b/drivers/edac/mv64x60_edac.c
+@@ -758,7 +758,7 @@ static int mv64x60_mc_err_probe(struct platform_device *pdev)
+ /* Non-ECC RAM? */
+ printk(KERN_WARNING "%s: No ECC DIMMs discovered\n", __func__);
+ res = -ENODEV;
+- goto err2;
++ goto err;
+ }
+
+ edac_dbg(3, "init mci\n");
+diff --git a/drivers/gpio/gpio-thunderx.c b/drivers/gpio/gpio-thunderx.c
+index b5adb79a631a..d16e9d4a129b 100644
+--- a/drivers/gpio/gpio-thunderx.c
++++ b/drivers/gpio/gpio-thunderx.c
+@@ -553,8 +553,10 @@ static int thunderx_gpio_probe(struct pci_dev *pdev,
+ txgpio->irqd = irq_domain_create_hierarchy(irq_get_irq_data(txgpio->msix_entries[0].vector)->domain,
+ 0, 0, of_node_to_fwnode(dev->of_node),
+ &thunderx_gpio_irqd_ops, txgpio);
+- if (!txgpio->irqd)
++ if (!txgpio->irqd) {
++ err = -ENOMEM;
+ goto out;
++ }
+
+ /* Push on irq_data and the domain for each line. */
+ for (i = 0; i < ngpio; i++) {
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 7a5cf5b08c54..ec6e922123cb 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -2468,7 +2468,7 @@ EXPORT_SYMBOL_GPL(gpiod_direction_output_raw);
+ */
+ int gpiod_direction_output(struct gpio_desc *desc, int value)
+ {
+- struct gpio_chip *gc = desc->gdev->chip;
++ struct gpio_chip *gc;
+ int ret;
+
+ VALIDATE_DESC(desc);
+@@ -2485,6 +2485,7 @@ int gpiod_direction_output(struct gpio_desc *desc, int value)
+ return -EIO;
+ }
+
++ gc = desc->gdev->chip;
+ if (test_bit(FLAG_OPEN_DRAIN, &desc->flags)) {
+ /* First see if we can enable open drain in hardware */
+ ret = gpio_set_drive_single_ended(gc, gpio_chip_hwgpio(desc),
+@@ -3646,7 +3647,8 @@ struct gpio_desc *__must_check gpiod_get_index(struct device *dev,
+ return desc;
+ }
+
+- status = gpiod_request(desc, con_id);
++ /* If a connection label was passed use that, else use the device name as label */
++ status = gpiod_request(desc, con_id ? con_id : dev_name(dev));
+ if (status < 0)
+ return ERR_PTR(status);
+
+diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/smu7_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/smu7_smumgr.c
+index 7f5359a97ef2..885d9d802670 100644
+--- a/drivers/gpu/drm/amd/powerplay/smumgr/smu7_smumgr.c
++++ b/drivers/gpu/drm/amd/powerplay/smumgr/smu7_smumgr.c
+@@ -648,6 +648,12 @@ int smu7_init(struct pp_hwmgr *hwmgr)
+
+ int smu7_smu_fini(struct pp_hwmgr *hwmgr)
+ {
++ struct smu7_smumgr *smu_data = (struct smu7_smumgr *)(hwmgr->smu_backend);
++
++ smu_free_memory(hwmgr->device, (void *) smu_data->header_buffer.handle);
++ if (!cgs_is_virtualization_enabled(hwmgr->device))
++ smu_free_memory(hwmgr->device, (void *) smu_data->smu_buffer.handle);
++
+ kfree(hwmgr->smu_backend);
+ hwmgr->smu_backend = NULL;
+ cgs_rel_firmware(hwmgr->device, CGS_UCODE_ID_SMU);
+diff --git a/drivers/gpu/drm/i915/intel_bios.c b/drivers/gpu/drm/i915/intel_bios.c
+index fd23023df7c1..3a0728a212fb 100644
+--- a/drivers/gpu/drm/i915/intel_bios.c
++++ b/drivers/gpu/drm/i915/intel_bios.c
+@@ -1107,6 +1107,7 @@ static void sanitize_aux_ch(struct drm_i915_private *dev_priv,
+ }
+
+ static const u8 cnp_ddc_pin_map[] = {
++ [0] = 0, /* N/A */
+ [DDC_BUS_DDI_B] = GMBUS_PIN_1_BXT,
+ [DDC_BUS_DDI_C] = GMBUS_PIN_2_BXT,
+ [DDC_BUS_DDI_D] = GMBUS_PIN_4_CNP, /* sic */
+@@ -1115,9 +1116,14 @@ static const u8 cnp_ddc_pin_map[] = {
+
+ static u8 map_ddc_pin(struct drm_i915_private *dev_priv, u8 vbt_pin)
+ {
+- if (HAS_PCH_CNP(dev_priv) &&
+- vbt_pin > 0 && vbt_pin < ARRAY_SIZE(cnp_ddc_pin_map))
+- return cnp_ddc_pin_map[vbt_pin];
++ if (HAS_PCH_CNP(dev_priv)) {
++ if (vbt_pin < ARRAY_SIZE(cnp_ddc_pin_map)) {
++ return cnp_ddc_pin_map[vbt_pin];
++ } else {
++ DRM_DEBUG_KMS("Ignoring alternate pin: VBT claims DDC pin %d, which is not valid for this platform\n", vbt_pin);
++ return 0;
++ }
++ }
+
+ return vbt_pin;
+ }
+diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c
+index 05022ea2a007..bfb3d689f47d 100644
+--- a/drivers/gpu/drm/msm/adreno/adreno_device.c
++++ b/drivers/gpu/drm/msm/adreno/adreno_device.c
+@@ -125,11 +125,14 @@ struct msm_gpu *adreno_load_gpu(struct drm_device *dev)
+ {
+ struct msm_drm_private *priv = dev->dev_private;
+ struct platform_device *pdev = priv->gpu_pdev;
+- struct msm_gpu *gpu = platform_get_drvdata(priv->gpu_pdev);
++ struct msm_gpu *gpu = NULL;
+ int ret;
+
++ if (pdev)
++ gpu = platform_get_drvdata(pdev);
++
+ if (!gpu) {
+- dev_err(dev->dev, "no adreno device\n");
++ dev_err_once(dev->dev, "no GPU device was found\n");
+ return NULL;
+ }
+
+diff --git a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_14nm.c b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_14nm.c
+index fe15aa64086f..71fe60e5f01f 100644
+--- a/drivers/gpu/drm/msm/dsi/pll/dsi_pll_14nm.c
++++ b/drivers/gpu/drm/msm/dsi/pll/dsi_pll_14nm.c
+@@ -698,7 +698,7 @@ static unsigned long dsi_pll_14nm_postdiv_recalc_rate(struct clk_hw *hw,
+ val &= div_mask(width);
+
+ return divider_recalc_rate(hw, parent_rate, val, NULL,
+- postdiv->flags);
++ postdiv->flags, width);
+ }
+
+ static long dsi_pll_14nm_postdiv_round_rate(struct clk_hw *hw,
+diff --git a/drivers/hwmon/ina2xx.c b/drivers/hwmon/ina2xx.c
+index 62e38fa8cda2..e362a932fe8c 100644
+--- a/drivers/hwmon/ina2xx.c
++++ b/drivers/hwmon/ina2xx.c
+@@ -95,18 +95,20 @@ enum ina2xx_ids { ina219, ina226 };
+
+ struct ina2xx_config {
+ u16 config_default;
+- int calibration_factor;
++ int calibration_value;
+ int registers;
+ int shunt_div;
+ int bus_voltage_shift;
+ int bus_voltage_lsb; /* uV */
+- int power_lsb; /* uW */
++ int power_lsb_factor;
+ };
+
+ struct ina2xx_data {
+ const struct ina2xx_config *config;
+
+ long rshunt;
++ long current_lsb_uA;
++ long power_lsb_uW;
+ struct mutex config_lock;
+ struct regmap *regmap;
+
+@@ -116,21 +118,21 @@ struct ina2xx_data {
+ static const struct ina2xx_config ina2xx_config[] = {
+ [ina219] = {
+ .config_default = INA219_CONFIG_DEFAULT,
+- .calibration_factor = 40960000,
++ .calibration_value = 4096,
+ .registers = INA219_REGISTERS,
+ .shunt_div = 100,
+ .bus_voltage_shift = 3,
+ .bus_voltage_lsb = 4000,
+- .power_lsb = 20000,
++ .power_lsb_factor = 20,
+ },
+ [ina226] = {
+ .config_default = INA226_CONFIG_DEFAULT,
+- .calibration_factor = 5120000,
++ .calibration_value = 2048,
+ .registers = INA226_REGISTERS,
+ .shunt_div = 400,
+ .bus_voltage_shift = 0,
+ .bus_voltage_lsb = 1250,
+- .power_lsb = 25000,
++ .power_lsb_factor = 25,
+ },
+ };
+
+@@ -169,12 +171,16 @@ static u16 ina226_interval_to_reg(int interval)
+ return INA226_SHIFT_AVG(avg_bits);
+ }
+
++/*
++ * The calibration register is set to the best value, which eliminates
++ * truncation errors when the hardware calculates the current register.
++ * According to the datasheet (eq. 3) the best values are 2048 for the
++ * ina226 and 4096 for the ina219; they are hardcoded as calibration_value.
++ */
+ static int ina2xx_calibrate(struct ina2xx_data *data)
+ {
+- u16 val = DIV_ROUND_CLOSEST(data->config->calibration_factor,
+- data->rshunt);
+-
+- return regmap_write(data->regmap, INA2XX_CALIBRATION, val);
++ return regmap_write(data->regmap, INA2XX_CALIBRATION,
++ data->config->calibration_value);
+ }
+
+ /*
+@@ -187,10 +193,6 @@ static int ina2xx_init(struct ina2xx_data *data)
+ if (ret < 0)
+ return ret;
+
+- /*
+- * Set current LSB to 1mA, shunt is in uOhms
+- * (equation 13 in datasheet).
+- */
+ return ina2xx_calibrate(data);
+ }
+
+@@ -268,15 +270,15 @@ static int ina2xx_get_value(struct ina2xx_data *data, u8 reg,
+ val = DIV_ROUND_CLOSEST(val, 1000);
+ break;
+ case INA2XX_POWER:
+- val = regval * data->config->power_lsb;
++ val = regval * data->power_lsb_uW;
+ break;
+ case INA2XX_CURRENT:
+- /* signed register, LSB=1mA (selected), in mA */
+- val = (s16)regval;
++ /* signed register, result in mA */
++ val = regval * data->current_lsb_uA;
++ val = DIV_ROUND_CLOSEST(val, 1000);
+ break;
+ case INA2XX_CALIBRATION:
+- val = DIV_ROUND_CLOSEST(data->config->calibration_factor,
+- regval);
++ val = regval;
+ break;
+ default:
+ /* programmer goofed */
+@@ -304,9 +306,32 @@ static ssize_t ina2xx_show_value(struct device *dev,
+ ina2xx_get_value(data, attr->index, regval));
+ }
+
+-static ssize_t ina2xx_set_shunt(struct device *dev,
+- struct device_attribute *da,
+- const char *buf, size_t count)
++/*
++ * To keep the calibration register value fixed, the product of
++ * current_lsb and shunt_resistor must also stay fixed, equal to
++ * shunt_voltage_lsb = 1 / shunt_div scaled by 10^9 so that the
++ * scale is preserved.
++ */
++static int ina2xx_set_shunt(struct ina2xx_data *data, long val)
++{
++ unsigned int dividend = DIV_ROUND_CLOSEST(1000000000,
++ data->config->shunt_div);
++ if (val <= 0 || val > dividend)
++ return -EINVAL;
++
++ mutex_lock(&data->config_lock);
++ data->rshunt = val;
++ data->current_lsb_uA = DIV_ROUND_CLOSEST(dividend, val);
++ data->power_lsb_uW = data->config->power_lsb_factor *
++ data->current_lsb_uA;
++ mutex_unlock(&data->config_lock);
++
++ return 0;
++}
++
++static ssize_t ina2xx_store_shunt(struct device *dev,
++ struct device_attribute *da,
++ const char *buf, size_t count)
+ {
+ unsigned long val;
+ int status;
+@@ -316,18 +341,9 @@ static ssize_t ina2xx_set_shunt(struct device *dev,
+ if (status < 0)
+ return status;
+
+- if (val == 0 ||
+- /* Values greater than the calibration factor make no sense. */
+- val > data->config->calibration_factor)
+- return -EINVAL;
+-
+- mutex_lock(&data->config_lock);
+- data->rshunt = val;
+- status = ina2xx_calibrate(data);
+- mutex_unlock(&data->config_lock);
++ status = ina2xx_set_shunt(data, val);
+ if (status < 0)
+ return status;
+-
+ return count;
+ }
+
+@@ -387,7 +403,7 @@ static SENSOR_DEVICE_ATTR(power1_input, S_IRUGO, ina2xx_show_value, NULL,
+
+ /* shunt resistance */
+ static SENSOR_DEVICE_ATTR(shunt_resistor, S_IRUGO | S_IWUSR,
+- ina2xx_show_value, ina2xx_set_shunt,
++ ina2xx_show_value, ina2xx_store_shunt,
+ INA2XX_CALIBRATION);
+
+ /* update interval (ina226 only) */
+@@ -448,10 +464,7 @@ static int ina2xx_probe(struct i2c_client *client,
+ val = INA2XX_RSHUNT_DEFAULT;
+ }
+
+- if (val <= 0 || val > data->config->calibration_factor)
+- return -ENODEV;
+-
+- data->rshunt = val;
++ ina2xx_set_shunt(data, val);
+
+ ina2xx_regmap_config.max_register = data->config->registers;
+
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 67aece2f5d8d..2fb7f2586353 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -4449,6 +4449,7 @@ static int cma_get_id_stats(struct sk_buff *skb, struct netlink_callback *cb)
+ id_stats->qp_type = id->qp_type;
+
+ i_id++;
++ nlmsg_end(skb, nlh);
+ }
+
+ cb->args[1] = 0;
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index 722235bed075..d6fa38f8604f 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -914,13 +914,14 @@ static ssize_t ucma_query_path(struct ucma_context *ctx,
+
+ resp->path_data[i].flags = IB_PATH_GMP | IB_PATH_PRIMARY |
+ IB_PATH_BIDIRECTIONAL;
+- if (rec->rec_type == SA_PATH_REC_TYPE_IB) {
+- ib_sa_pack_path(rec, &resp->path_data[i].path_rec);
+- } else {
++ if (rec->rec_type == SA_PATH_REC_TYPE_OPA) {
+ struct sa_path_rec ib;
+
+ sa_convert_path_opa_to_ib(&ib, rec);
+ ib_sa_pack_path(&ib, &resp->path_data[i].path_rec);
++
++ } else {
++ ib_sa_pack_path(rec, &resp->path_data[i].path_rec);
+ }
+ }
+
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 8e18445714a9..0d925b3d3d47 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -2463,11 +2463,14 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ roce_set_bit(qpc_mask->byte_28_at_fl, V2_QPC_BYTE_28_LBI_S, 0);
+ }
+
+- roce_set_field(context->byte_140_raq, V2_QPC_BYTE_140_RR_MAX_M,
+- V2_QPC_BYTE_140_RR_MAX_S,
+- ilog2((unsigned int)attr->max_dest_rd_atomic));
+- roce_set_field(qpc_mask->byte_140_raq, V2_QPC_BYTE_140_RR_MAX_M,
+- V2_QPC_BYTE_140_RR_MAX_S, 0);
++ if ((attr_mask & IB_QP_MAX_DEST_RD_ATOMIC) &&
++ attr->max_dest_rd_atomic) {
++ roce_set_field(context->byte_140_raq, V2_QPC_BYTE_140_RR_MAX_M,
++ V2_QPC_BYTE_140_RR_MAX_S,
++ fls(attr->max_dest_rd_atomic - 1));
++ roce_set_field(qpc_mask->byte_140_raq, V2_QPC_BYTE_140_RR_MAX_M,
++ V2_QPC_BYTE_140_RR_MAX_S, 0);
++ }
+
+ roce_set_field(context->byte_56_dqpn_err, V2_QPC_BYTE_56_DQPN_M,
+ V2_QPC_BYTE_56_DQPN_S, attr->dest_qp_num);
+@@ -2557,12 +2560,6 @@ static int modify_qp_init_to_rtr(struct ib_qp *ibqp,
+ V2_QPC_BYTE_168_LP_SGEN_INI_M,
+ V2_QPC_BYTE_168_LP_SGEN_INI_S, 0);
+
+- roce_set_field(context->byte_208_irrl, V2_QPC_BYTE_208_SR_MAX_M,
+- V2_QPC_BYTE_208_SR_MAX_S,
+- ilog2((unsigned int)attr->max_rd_atomic));
+- roce_set_field(qpc_mask->byte_208_irrl, V2_QPC_BYTE_208_SR_MAX_M,
+- V2_QPC_BYTE_208_SR_MAX_S, 0);
+-
+ roce_set_field(context->byte_28_at_fl, V2_QPC_BYTE_28_SL_M,
+ V2_QPC_BYTE_28_SL_S, rdma_ah_get_sl(&attr->ah_attr));
+ roce_set_field(qpc_mask->byte_28_at_fl, V2_QPC_BYTE_28_SL_M,
+@@ -2766,6 +2763,14 @@ static int modify_qp_rtr_to_rts(struct ib_qp *ibqp,
+ roce_set_field(qpc_mask->byte_196_sq_psn, V2_QPC_BYTE_196_SQ_MAX_PSN_M,
+ V2_QPC_BYTE_196_SQ_MAX_PSN_S, 0);
+
++ if ((attr_mask & IB_QP_MAX_QP_RD_ATOMIC) && attr->max_rd_atomic) {
++ roce_set_field(context->byte_208_irrl, V2_QPC_BYTE_208_SR_MAX_M,
++ V2_QPC_BYTE_208_SR_MAX_S,
++ fls(attr->max_rd_atomic - 1));
++ roce_set_field(qpc_mask->byte_208_irrl,
++ V2_QPC_BYTE_208_SR_MAX_M,
++ V2_QPC_BYTE_208_SR_MAX_S, 0);
++ }
+ return 0;
+ }
+
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.c b/drivers/infiniband/hw/i40iw/i40iw_cm.c
+index 77870f9e1736..726d7143475f 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_cm.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_cm.c
+@@ -125,7 +125,8 @@ static u8 i40iw_derive_hw_ird_setting(u16 cm_ird)
+ * @conn_ird: connection IRD
+ * @conn_ord: connection ORD
+ */
+-static void i40iw_record_ird_ord(struct i40iw_cm_node *cm_node, u16 conn_ird, u16 conn_ord)
++static void i40iw_record_ird_ord(struct i40iw_cm_node *cm_node, u32 conn_ird,
++ u32 conn_ord)
+ {
+ if (conn_ird > I40IW_MAX_IRD_SIZE)
+ conn_ird = I40IW_MAX_IRD_SIZE;
+@@ -3849,7 +3850,7 @@ int i40iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
+ }
+
+ cm_node->apbvt_set = true;
+- i40iw_record_ird_ord(cm_node, (u16)conn_param->ird, (u16)conn_param->ord);
++ i40iw_record_ird_ord(cm_node, conn_param->ird, conn_param->ord);
+ if (cm_node->send_rdma0_op == SEND_RDMA_READ_ZERO &&
+ !cm_node->ord_size)
+ cm_node->ord_size = 1;
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_ctrl.c b/drivers/infiniband/hw/i40iw/i40iw_ctrl.c
+index da9821a10e0d..1b9ca09d3cee 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_ctrl.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_ctrl.c
+@@ -3928,8 +3928,10 @@ enum i40iw_status_code i40iw_config_fpm_values(struct i40iw_sc_dev *dev, u32 qp_
+ hmc_info->hmc_obj[I40IW_HMC_IW_APBVT_ENTRY].cnt = 1;
+ hmc_info->hmc_obj[I40IW_HMC_IW_MR].cnt = mrwanted;
+
+- hmc_info->hmc_obj[I40IW_HMC_IW_XF].cnt = I40IW_MAX_WQ_ENTRIES * qpwanted;
+- hmc_info->hmc_obj[I40IW_HMC_IW_Q1].cnt = 4 * I40IW_MAX_IRD_SIZE * qpwanted;
++ hmc_info->hmc_obj[I40IW_HMC_IW_XF].cnt =
++ roundup_pow_of_two(I40IW_MAX_WQ_ENTRIES * qpwanted);
++ hmc_info->hmc_obj[I40IW_HMC_IW_Q1].cnt =
++ roundup_pow_of_two(2 * I40IW_MAX_IRD_SIZE * qpwanted);
+ hmc_info->hmc_obj[I40IW_HMC_IW_XFFL].cnt =
+ hmc_info->hmc_obj[I40IW_HMC_IW_XF].cnt / hmc_fpm_misc->xf_block_size;
+ hmc_info->hmc_obj[I40IW_HMC_IW_Q1FL].cnt =
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_d.h b/drivers/infiniband/hw/i40iw/i40iw_d.h
+index 029083cb81d5..4b65e4140bd7 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_d.h
++++ b/drivers/infiniband/hw/i40iw/i40iw_d.h
+@@ -97,6 +97,7 @@
+ #define RDMA_OPCODE_MASK 0x0f
+ #define RDMA_READ_REQ_OPCODE 1
+ #define Q2_BAD_FRAME_OFFSET 72
++#define Q2_FPSN_OFFSET 64
+ #define CQE_MAJOR_DRV 0x8000
+
+ #define I40IW_TERM_SENT 0x01
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_puda.c b/drivers/infiniband/hw/i40iw/i40iw_puda.c
+index 796a815b53fd..f64b6700f43f 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_puda.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_puda.c
+@@ -1378,7 +1378,7 @@ static void i40iw_ieq_handle_exception(struct i40iw_puda_rsrc *ieq,
+ u32 *hw_host_ctx = (u32 *)qp->hw_host_ctx;
+ u32 rcv_wnd = hw_host_ctx[23];
+ /* first partial seq # in q2 */
+- u32 fps = qp->q2_buf[16];
++ u32 fps = *(u32 *)(qp->q2_buf + Q2_FPSN_OFFSET);
+ struct list_head *rxlist = &pfpdu->rxlist;
+ struct list_head *plist;
+
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 262c1aa2e028..f8b06102cc5d 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -682,7 +682,8 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
+ MLX5_RX_HASH_SRC_PORT_TCP |
+ MLX5_RX_HASH_DST_PORT_TCP |
+ MLX5_RX_HASH_SRC_PORT_UDP |
+- MLX5_RX_HASH_DST_PORT_UDP;
++ MLX5_RX_HASH_DST_PORT_UDP |
++ MLX5_RX_HASH_INNER;
+ resp.response_length += sizeof(resp.rss_caps);
+ }
+ } else {
+diff --git a/drivers/infiniband/sw/rdmavt/cq.c b/drivers/infiniband/sw/rdmavt/cq.c
+index 97d71e49c092..88fa4d44ab5f 100644
+--- a/drivers/infiniband/sw/rdmavt/cq.c
++++ b/drivers/infiniband/sw/rdmavt/cq.c
+@@ -198,7 +198,7 @@ struct ib_cq *rvt_create_cq(struct ib_device *ibdev,
+ return ERR_PTR(-EINVAL);
+
+ /* Allocate the completion queue structure. */
+- cq = kzalloc(sizeof(*cq), GFP_KERNEL);
++ cq = kzalloc_node(sizeof(*cq), GFP_KERNEL, rdi->dparms.node);
+ if (!cq)
+ return ERR_PTR(-ENOMEM);
+
+@@ -214,7 +214,9 @@ struct ib_cq *rvt_create_cq(struct ib_device *ibdev,
+ sz += sizeof(struct ib_uverbs_wc) * (entries + 1);
+ else
+ sz += sizeof(struct ib_wc) * (entries + 1);
+- wc = vmalloc_user(sz);
++ wc = udata ?
++ vmalloc_user(sz) :
++ vzalloc_node(sz, rdi->dparms.node);
+ if (!wc) {
+ ret = ERR_PTR(-ENOMEM);
+ goto bail_cq;
+@@ -369,7 +371,9 @@ int rvt_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata)
+ sz += sizeof(struct ib_uverbs_wc) * (cqe + 1);
+ else
+ sz += sizeof(struct ib_wc) * (cqe + 1);
+- wc = vmalloc_user(sz);
++ wc = udata ?
++ vmalloc_user(sz) :
++ vzalloc_node(sz, rdi->dparms.node);
+ if (!wc)
+ return -ENOMEM;
+
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+index 71ea9e26666c..c075d6850ed3 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+@@ -766,12 +766,14 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_
+ skb_orphan(skb);
+ skb_dst_drop(skb);
+
+- if (netif_queue_stopped(dev))
+- if (ib_req_notify_cq(priv->send_cq, IB_CQ_NEXT_COMP |
+- IB_CQ_REPORT_MISSED_EVENTS)) {
++ if (netif_queue_stopped(dev)) {
++ rc = ib_req_notify_cq(priv->send_cq, IB_CQ_NEXT_COMP |
++ IB_CQ_REPORT_MISSED_EVENTS);
++ if (unlikely(rc < 0))
+ ipoib_warn(priv, "IPoIB/CM:request notify on send CQ failed\n");
++ else if (rc)
+ napi_schedule(&priv->send_napi);
+- }
++ }
+
+ rc = post_send(priv, tx, tx->tx_head & (ipoib_sendq_size - 1), tx_req);
+ if (unlikely(rc)) {
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+index e6151a29c412..28658080e761 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+@@ -644,7 +644,7 @@ int ipoib_send(struct net_device *dev, struct sk_buff *skb,
+
+ if (netif_queue_stopped(dev))
+ if (ib_req_notify_cq(priv->send_cq, IB_CQ_NEXT_COMP |
+- IB_CQ_REPORT_MISSED_EVENTS))
++ IB_CQ_REPORT_MISSED_EVENTS) < 0)
+ ipoib_warn(priv, "request notify on send CQ failed\n");
+
+ rc = post_send(priv, priv->tx_head & (ipoib_sendq_size - 1),
+diff --git a/drivers/input/touchscreen/goodix.c b/drivers/input/touchscreen/goodix.c
+index 69d0b8cbc71f..ecec8eb17f28 100644
+--- a/drivers/input/touchscreen/goodix.c
++++ b/drivers/input/touchscreen/goodix.c
+@@ -878,8 +878,10 @@ static int __maybe_unused goodix_suspend(struct device *dev)
+ int error;
+
+ /* We need gpio pins to suspend/resume */
+- if (!ts->gpiod_int || !ts->gpiod_rst)
++ if (!ts->gpiod_int || !ts->gpiod_rst) {
++ disable_irq(client->irq);
+ return 0;
++ }
+
+ wait_for_completion(&ts->firmware_loading_complete);
+
+@@ -919,8 +921,10 @@ static int __maybe_unused goodix_resume(struct device *dev)
+ struct goodix_ts_data *ts = i2c_get_clientdata(client);
+ int error;
+
+- if (!ts->gpiod_int || !ts->gpiod_rst)
++ if (!ts->gpiod_int || !ts->gpiod_rst) {
++ enable_irq(client->irq);
+ return 0;
++ }
+
+ /*
+ * Exit sleep mode by outputting HIGH level to INT pin
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 980ae8e7df30..45ced9e48c5d 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -1331,6 +1331,10 @@ gic_acpi_parse_madt_gicc(struct acpi_subtable_header *header,
+ u32 size = reg == GIC_PIDR2_ARCH_GICv4 ? SZ_64K * 4 : SZ_64K * 2;
+ void __iomem *redist_base;
+
++ /* GICC entries which have !ACPI_MADT_ENABLED are not usable, so skip them */
++ if (!(gicc->flags & ACPI_MADT_ENABLED))
++ return 0;
++
+ redist_base = ioremap(gicc->gicr_base_address, size);
+ if (!redist_base)
+ return -ENOMEM;
+@@ -1380,6 +1384,13 @@ static int __init gic_acpi_match_gicc(struct acpi_subtable_header *header,
+ if ((gicc->flags & ACPI_MADT_ENABLED) && gicc->gicr_base_address)
+ return 0;
+
++ /*
++ * It is perfectly valid for firmware to pass a disabled GICC entry;
++ * don't treat that as an error, just skip the entry instead of failing the probe.
++ */
++ if (!(gicc->flags & ACPI_MADT_ENABLED))
++ return 0;
++
+ return -ENODEV;
+ }
+
+diff --git a/drivers/irqchip/irq-ompic.c b/drivers/irqchip/irq-ompic.c
+index cf6d0c455518..e66ef4373b1e 100644
+--- a/drivers/irqchip/irq-ompic.c
++++ b/drivers/irqchip/irq-ompic.c
+@@ -171,9 +171,9 @@ static int __init ompic_of_init(struct device_node *node,
+
+ /* Setup the device */
+ ompic_base = ioremap(res.start, resource_size(&res));
+- if (IS_ERR(ompic_base)) {
++ if (!ompic_base) {
+ pr_err("ompic: unable to map registers");
+- return PTR_ERR(ompic_base);
++ return -ENOMEM;
+ }
+
+ irq = irq_of_parse_and_map(node, 0);
+diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
+index a0cc1bc6d884..6cc6c0f9c3a9 100644
+--- a/drivers/md/bcache/alloc.c
++++ b/drivers/md/bcache/alloc.c
+@@ -525,15 +525,21 @@ struct open_bucket {
+
+ /*
+ * We keep multiple buckets open for writes, and try to segregate different
+- * write streams for better cache utilization: first we look for a bucket where
+- * the last write to it was sequential with the current write, and failing that
+- * we look for a bucket that was last used by the same task.
++ * write streams for better cache utilization: first we try to segregate flash
++ * only volume write streams from cached devices, secondly we look for a bucket
++ * where the last write to it was sequential with the current write, and
++ * failing that we look for a bucket that was last used by the same task.
+ *
+ * The ideas is if you've got multiple tasks pulling data into the cache at the
+ * same time, you'll get better cache utilization if you try to segregate their
+ * data and preserve locality.
+ *
+- * For example, say you've starting Firefox at the same time you're copying a
++ * For example, dirty sectors of a flash only volume are not reclaimable;
++ * if they are mixed with dirty sectors of a cached device, such buckets
++ * stay marked dirty and won't be reclaimed, even though the cached
++ * device's dirty data has been written back to the backing device.
++ *
++ * And say you're starting Firefox at the same time you're copying a
+ * bunch of files. Firefox will likely end up being fairly hot and stay in the
+ * cache awhile, but the data you copied might not be; if you wrote all that
+ * data to the same buckets it'd get invalidated at the same time.
+@@ -550,7 +556,10 @@ static struct open_bucket *pick_data_bucket(struct cache_set *c,
+ struct open_bucket *ret, *ret_task = NULL;
+
+ list_for_each_entry_reverse(ret, &c->data_buckets, list)
+- if (!bkey_cmp(&ret->key, search))
++ if (UUID_FLASH_ONLY(&c->uuids[KEY_INODE(&ret->key)]) !=
++ UUID_FLASH_ONLY(&c->uuids[KEY_INODE(search)]))
++ continue;
++ else if (!bkey_cmp(&ret->key, search))
+ goto found;
+ else if (ret->last_write_point == write_point)
+ ret_task = ret;
+diff --git a/drivers/md/bcache/request.c b/drivers/md/bcache/request.c
+index 643c3021624f..d1faaba6b93f 100644
+--- a/drivers/md/bcache/request.c
++++ b/drivers/md/bcache/request.c
+@@ -576,6 +576,7 @@ static void cache_lookup(struct closure *cl)
+ {
+ struct search *s = container_of(cl, struct search, iop.cl);
+ struct bio *bio = &s->bio.bio;
++ struct cached_dev *dc;
+ int ret;
+
+ bch_btree_op_init(&s->op, -1);
+@@ -588,6 +589,27 @@ static void cache_lookup(struct closure *cl)
+ return;
+ }
+
++ /*
++ * We might hit an error while searching the btree. If that happens, ret
++ * is negative; in that case we must not recover data from the backing
++ * device (when the cache device is dirty) because we don't know whether
++ * all the bkeys the read request covers are clean.
++ *
++ * Also note that in this case s->iop.status still holds its initial
++ * value from before s->bio.bio was submitted.
++ */
++ if (ret < 0) {
++ BUG_ON(ret == -EINTR);
++ if (s->d && s->d->c &&
++ !UUID_FLASH_ONLY(&s->d->c->uuids[s->d->id])) {
++ dc = container_of(s->d, struct cached_dev, disk);
++ if (dc && atomic_read(&dc->has_dirty))
++ s->recoverable = false;
++ }
++ if (!s->iop.status)
++ s->iop.status = BLK_STS_IOERR;
++ }
++
+ closure_return(cl);
+ }
+
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index 14bdaf1cef2c..47785eb22aab 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -906,6 +906,12 @@ static void cached_dev_detach_finish(struct work_struct *w)
+
+ mutex_lock(&bch_register_lock);
+
++ cancel_delayed_work_sync(&dc->writeback_rate_update);
++ if (!IS_ERR_OR_NULL(dc->writeback_thread)) {
++ kthread_stop(dc->writeback_thread);
++ dc->writeback_thread = NULL;
++ }
++
+ memset(&dc->sb.set_uuid, 0, 16);
+ SET_BDEV_STATE(&dc->sb, BDEV_STATE_NONE);
+
+diff --git a/drivers/media/v4l2-core/videobuf2-core.c b/drivers/media/v4l2-core/videobuf2-core.c
+index a8589d96ef72..2bacadf50247 100644
+--- a/drivers/media/v4l2-core/videobuf2-core.c
++++ b/drivers/media/v4l2-core/videobuf2-core.c
+@@ -332,6 +332,10 @@ static int __vb2_queue_alloc(struct vb2_queue *q, enum vb2_memory memory,
+ struct vb2_buffer *vb;
+ int ret;
+
++ /* Ensure that q->num_buffers+num_buffers is below VB2_MAX_FRAME */
++ num_buffers = min_t(unsigned int, num_buffers,
++ VB2_MAX_FRAME - q->num_buffers);
++
+ for (buffer = 0; buffer < num_buffers; ++buffer) {
+ /* Allocate videobuf buffer structures */
+ vb = kzalloc(q->buf_struct_size, GFP_KERNEL);
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index bf93e8b0b191..b8fa17a759dd 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -805,6 +805,8 @@ static int intel_mrfld_mmc_probe_slot(struct sdhci_pci_slot *slot)
+ slot->host->quirks2 |= SDHCI_QUIRK2_NO_1_8_V;
+ break;
+ case INTEL_MRFLD_SDIO:
++ /* Advertise 2.0v for compatibility with the SDIO card's OCR */
++ slot->host->ocr_mask = MMC_VDD_20_21 | MMC_VDD_165_195;
+ slot->host->mmc->caps |= MMC_CAP_NONREMOVABLE |
+ MMC_CAP_POWER_OFF_CARD;
+ break;
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index d24306b2b839..3a5f305fd442 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -1470,6 +1470,13 @@ void sdhci_set_power_noreg(struct sdhci_host *host, unsigned char mode,
+ if (mode != MMC_POWER_OFF) {
+ switch (1 << vdd) {
+ case MMC_VDD_165_195:
++ /*
++ * Without a regulator, SDHCI does not support 2.0v
++ * so we only get here if the driver deliberately
++ * added the 2.0v range to ocr_avail. Map it to 1.8v
++ * for the purpose of turning on the power.
++ */
++ case MMC_VDD_20_21:
+ pwr = SDHCI_POWER_180;
+ break;
+ case MMC_VDD_29_30:
+diff --git a/drivers/mtd/tests/oobtest.c b/drivers/mtd/tests/oobtest.c
+index 1cb3f7758fb6..766b2c385682 100644
+--- a/drivers/mtd/tests/oobtest.c
++++ b/drivers/mtd/tests/oobtest.c
+@@ -193,6 +193,9 @@ static int verify_eraseblock(int ebnum)
+ ops.datbuf = NULL;
+ ops.oobbuf = readbuf;
+ err = mtd_read_oob(mtd, addr, &ops);
++ if (mtd_is_bitflip(err))
++ err = 0;
++
+ if (err || ops.oobretlen != use_len) {
+ pr_err("error: readoob failed at %#llx\n",
+ (long long)addr);
+@@ -227,6 +230,9 @@ static int verify_eraseblock(int ebnum)
+ ops.datbuf = NULL;
+ ops.oobbuf = readbuf;
+ err = mtd_read_oob(mtd, addr, &ops);
++ if (mtd_is_bitflip(err))
++ err = 0;
++
+ if (err || ops.oobretlen != mtd->oobavail) {
+ pr_err("error: readoob failed at %#llx\n",
+ (long long)addr);
+@@ -286,6 +292,9 @@ static int verify_eraseblock_in_one_go(int ebnum)
+
+ /* read entire block's OOB at one go */
+ err = mtd_read_oob(mtd, addr, &ops);
++ if (mtd_is_bitflip(err))
++ err = 0;
++
+ if (err || ops.oobretlen != len) {
+ pr_err("error: readoob failed at %#llx\n",
+ (long long)addr);
+@@ -527,6 +536,9 @@ static int __init mtd_oobtest_init(void)
+ pr_info("attempting to start read past end of OOB\n");
+ pr_info("an error is expected...\n");
+ err = mtd_read_oob(mtd, addr0, &ops);
++ if (mtd_is_bitflip(err))
++ err = 0;
++
+ if (err) {
+ pr_info("error occurred as expected\n");
+ err = 0;
+@@ -571,6 +583,9 @@ static int __init mtd_oobtest_init(void)
+ pr_info("attempting to read past end of device\n");
+ pr_info("an error is expected...\n");
+ err = mtd_read_oob(mtd, mtd->size - mtd->writesize, &ops);
++ if (mtd_is_bitflip(err))
++ err = 0;
++
+ if (err) {
+ pr_info("error occurred as expected\n");
+ err = 0;
+@@ -615,6 +630,9 @@ static int __init mtd_oobtest_init(void)
+ pr_info("attempting to read past end of device\n");
+ pr_info("an error is expected...\n");
+ err = mtd_read_oob(mtd, mtd->size - mtd->writesize, &ops);
++ if (mtd_is_bitflip(err))
++ err = 0;
++
+ if (err) {
+ pr_info("error occurred as expected\n");
+ err = 0;
+@@ -684,6 +702,9 @@ static int __init mtd_oobtest_init(void)
+ ops.datbuf = NULL;
+ ops.oobbuf = readbuf;
+ err = mtd_read_oob(mtd, addr, &ops);
++ if (mtd_is_bitflip(err))
++ err = 0;
++
+ if (err)
+ goto out;
+ if (memcmpshow(addr, readbuf, writebuf,
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index c669554d70bb..b7b113018853 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1528,39 +1528,6 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ goto err_close;
+ }
+
+- /* If the mode uses primary, then the following is handled by
+- * bond_change_active_slave().
+- */
+- if (!bond_uses_primary(bond)) {
+- /* set promiscuity level to new slave */
+- if (bond_dev->flags & IFF_PROMISC) {
+- res = dev_set_promiscuity(slave_dev, 1);
+- if (res)
+- goto err_close;
+- }
+-
+- /* set allmulti level to new slave */
+- if (bond_dev->flags & IFF_ALLMULTI) {
+- res = dev_set_allmulti(slave_dev, 1);
+- if (res)
+- goto err_close;
+- }
+-
+- netif_addr_lock_bh(bond_dev);
+-
+- dev_mc_sync_multiple(slave_dev, bond_dev);
+- dev_uc_sync_multiple(slave_dev, bond_dev);
+-
+- netif_addr_unlock_bh(bond_dev);
+- }
+-
+- if (BOND_MODE(bond) == BOND_MODE_8023AD) {
+- /* add lacpdu mc addr to mc list */
+- u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
+-
+- dev_mc_add(slave_dev, lacpdu_multicast);
+- }
+-
+ res = vlan_vids_add_by_dev(slave_dev, bond_dev);
+ if (res) {
+ netdev_err(bond_dev, "Couldn't add bond vlan ids to %s\n",
+@@ -1725,6 +1692,40 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ goto err_upper_unlink;
+ }
+
++ /* If the mode uses primary, then the following is handled by
++ * bond_change_active_slave().
++ */
++ if (!bond_uses_primary(bond)) {
++ /* set promiscuity level to new slave */
++ if (bond_dev->flags & IFF_PROMISC) {
++ res = dev_set_promiscuity(slave_dev, 1);
++ if (res)
++ goto err_sysfs_del;
++ }
++
++ /* set allmulti level to new slave */
++ if (bond_dev->flags & IFF_ALLMULTI) {
++ res = dev_set_allmulti(slave_dev, 1);
++ if (res) {
++ if (bond_dev->flags & IFF_PROMISC)
++ dev_set_promiscuity(slave_dev, -1);
++ goto err_sysfs_del;
++ }
++ }
++
++ netif_addr_lock_bh(bond_dev);
++ dev_mc_sync_multiple(slave_dev, bond_dev);
++ dev_uc_sync_multiple(slave_dev, bond_dev);
++ netif_addr_unlock_bh(bond_dev);
++
++ if (BOND_MODE(bond) == BOND_MODE_8023AD) {
++ /* add lacpdu mc addr to mc list */
++ u8 lacpdu_multicast[ETH_ALEN] = MULTICAST_LACPDU_ADDR;
++
++ dev_mc_add(slave_dev, lacpdu_multicast);
++ }
++ }
++
+ bond->slave_cnt++;
+ bond_compute_features(bond);
+ bond_set_carrier(bond);
+@@ -1748,6 +1749,9 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ return 0;
+
+ /* Undo stages on error */
++err_sysfs_del:
++ bond_sysfs_slave_del(new_slave);
++
+ err_upper_unlink:
+ bond_upper_dev_unlink(bond, new_slave);
+
+@@ -1755,9 +1759,6 @@ int bond_enslave(struct net_device *bond_dev, struct net_device *slave_dev,
+ netdev_rx_handler_unregister(slave_dev);
+
+ err_detach:
+- if (!bond_uses_primary(bond))
+- bond_hw_addr_flush(bond_dev, slave_dev);
+-
+ vlan_vids_del_by_dev(slave_dev, bond_dev);
+ if (rcu_access_pointer(bond->primary_slave) == new_slave)
+ RCU_INIT_POINTER(bond->primary_slave, NULL);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c
+index 14d7e673c656..129b914a434c 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c
++++ b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c
+@@ -2619,8 +2619,8 @@ void t4vf_sge_stop(struct adapter *adapter)
+ int t4vf_sge_init(struct adapter *adapter)
+ {
+ struct sge_params *sge_params = &adapter->params.sge;
+- u32 fl0 = sge_params->sge_fl_buffer_size[0];
+- u32 fl1 = sge_params->sge_fl_buffer_size[1];
++ u32 fl_small_pg = sge_params->sge_fl_buffer_size[0];
++ u32 fl_large_pg = sge_params->sge_fl_buffer_size[1];
+ struct sge *s = &adapter->sge;
+
+ /*
+@@ -2628,9 +2628,20 @@ int t4vf_sge_init(struct adapter *adapter)
+ * the Physical Function Driver. Ideally we should be able to deal
+ * with _any_ configuration. Practice is different ...
+ */
+- if (fl0 != PAGE_SIZE || (fl1 != 0 && fl1 <= fl0)) {
++
++ /* We only bother using the Large Page logic if the Large Page Buffer
++ * is larger than our Page Size Buffer.
++ */
++ if (fl_large_pg <= fl_small_pg)
++ fl_large_pg = 0;
++
++ /* The Page Size Buffer must be exactly equal to our Page Size and the
++ * Large Page Size Buffer should be 0 (per above) or a power of 2.
++ */
++ if (fl_small_pg != PAGE_SIZE ||
++ (fl_large_pg & (fl_large_pg - 1)) != 0) {
+ dev_err(adapter->pdev_dev, "bad SGE FL buffer sizes [%d, %d]\n",
+- fl0, fl1);
++ fl_small_pg, fl_large_pg);
+ return -EINVAL;
+ }
+ if ((sge_params->sge_control & RXPKTCPLMODE_F) !=
+@@ -2642,8 +2653,8 @@ int t4vf_sge_init(struct adapter *adapter)
+ /*
+ * Now translate the adapter parameters into our internal forms.
+ */
+- if (fl1)
+- s->fl_pg_order = ilog2(fl1) - PAGE_SHIFT;
++ if (fl_large_pg)
++ s->fl_pg_order = ilog2(fl_large_pg) - PAGE_SHIFT;
+ s->stat_len = ((sge_params->sge_control & EGRSTATUSPAGESIZE_F)
+ ? 128 : 64);
+ s->pktshift = PKTSHIFT_G(sge_params->sge_control);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 59ed806a52c3..3a3c7fe50e13 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -2189,6 +2189,10 @@ static int hclge_get_autoneg(struct hnae3_handle *handle)
+ {
+ struct hclge_vport *vport = hclge_get_vport(handle);
+ struct hclge_dev *hdev = vport->back;
++ struct phy_device *phydev = hdev->hw.mac.phydev;
++
++ if (phydev)
++ return phydev->autoneg;
+
+ hclge_query_autoneg_result(hdev);
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+index 7069e9408d7d..22be638be40f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c
+@@ -17,6 +17,7 @@
+ #define HCLGE_PHY_SUPPORTED_FEATURES (SUPPORTED_Autoneg | \
+ SUPPORTED_TP | \
+ SUPPORTED_Pause | \
++ SUPPORTED_Asym_Pause | \
+ PHY_10BT_FEATURES | \
+ PHY_100BT_FEATURES | \
+ PHY_1000BT_FEATURES)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c
+index 59415090ff0f..a685368ab25b 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_enet.c
+@@ -1055,6 +1055,8 @@ hns3_nic_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats)
+ u64 rx_bytes = 0;
+ u64 tx_pkts = 0;
+ u64 rx_pkts = 0;
++ u64 tx_drop = 0;
++ u64 rx_drop = 0;
+
+ for (idx = 0; idx < queue_num; idx++) {
+ /* fetch the tx stats */
+@@ -1063,6 +1065,8 @@ hns3_nic_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats)
+ start = u64_stats_fetch_begin_irq(&ring->syncp);
+ tx_bytes += ring->stats.tx_bytes;
+ tx_pkts += ring->stats.tx_pkts;
++ tx_drop += ring->stats.tx_busy;
++ tx_drop += ring->stats.sw_err_cnt;
+ } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+
+ /* fetch the rx stats */
+@@ -1071,6 +1075,9 @@ hns3_nic_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats)
+ start = u64_stats_fetch_begin_irq(&ring->syncp);
+ rx_bytes += ring->stats.rx_bytes;
+ rx_pkts += ring->stats.rx_pkts;
++ rx_drop += ring->stats.non_vld_descs;
++ rx_drop += ring->stats.err_pkt_len;
++ rx_drop += ring->stats.l2_err;
+ } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+ }
+
+@@ -1086,8 +1093,8 @@ hns3_nic_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats)
+ stats->rx_missed_errors = netdev->stats.rx_missed_errors;
+
+ stats->tx_errors = netdev->stats.tx_errors;
+- stats->rx_dropped = netdev->stats.rx_dropped;
+- stats->tx_dropped = netdev->stats.tx_dropped;
++ stats->rx_dropped = rx_drop + netdev->stats.rx_dropped;
++ stats->tx_dropped = tx_drop + netdev->stats.tx_dropped;
+ stats->collisions = netdev->stats.collisions;
+ stats->rx_over_errors = netdev->stats.rx_over_errors;
+ stats->rx_frame_errors = netdev->stats.rx_frame_errors;
+@@ -1317,6 +1324,8 @@ static int hns3_nic_change_mtu(struct net_device *netdev, int new_mtu)
+ return ret;
+ }
+
++ netdev->mtu = new_mtu;
++
+ /* if the netdev was running earlier, bring it up again */
+ if (if_running && hns3_nic_net_open(netdev))
+ ret = -EINVAL;
+@@ -2785,8 +2794,12 @@ int hns3_uninit_all_ring(struct hns3_nic_priv *priv)
+ h->ae_algo->ops->reset_queue(h, i);
+
+ hns3_fini_ring(priv->ring_data[i].ring);
++ devm_kfree(priv->dev, priv->ring_data[i].ring);
+ hns3_fini_ring(priv->ring_data[i + h->kinfo.num_tqps].ring);
++ devm_kfree(priv->dev,
++ priv->ring_data[i + h->kinfo.num_tqps].ring);
+ }
++ devm_kfree(priv->dev, priv->ring_data);
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_ethtool.c
+index a21470c72da3..8974be4011e5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hns3_ethtool.c
+@@ -23,7 +23,8 @@ struct hns3_stats {
+ #define HNS3_TQP_STAT(_string, _member) { \
+ .stats_string = _string, \
+ .stats_size = FIELD_SIZEOF(struct ring_stats, _member), \
+- .stats_offset = offsetof(struct hns3_enet_ring, stats), \
++ .stats_offset = offsetof(struct hns3_enet_ring, stats) +\
++ offsetof(struct ring_stats, _member), \
+ } \
+
+ static const struct hns3_stats hns3_txq_stats[] = {
+@@ -455,13 +456,13 @@ static u64 *hns3_get_stats_tqps(struct hnae3_handle *handle, u64 *data)
+ struct hnae3_knic_private_info *kinfo = &handle->kinfo;
+ struct hns3_enet_ring *ring;
+ u8 *stat;
+- u32 i;
++ int i, j;
+
+ /* get stats for Tx */
+ for (i = 0; i < kinfo->num_tqps; i++) {
+ ring = nic_priv->ring_data[i].ring;
+- for (i = 0; i < HNS3_TXQ_STATS_COUNT; i++) {
+- stat = (u8 *)ring + hns3_txq_stats[i].stats_offset;
++ for (j = 0; j < HNS3_TXQ_STATS_COUNT; j++) {
++ stat = (u8 *)ring + hns3_txq_stats[j].stats_offset;
+ *data++ = *(u64 *)stat;
+ }
+ }
+@@ -469,8 +470,8 @@ static u64 *hns3_get_stats_tqps(struct hnae3_handle *handle, u64 *data)
+ /* get stats for Rx */
+ for (i = 0; i < kinfo->num_tqps; i++) {
+ ring = nic_priv->ring_data[i + kinfo->num_tqps].ring;
+- for (i = 0; i < HNS3_RXQ_STATS_COUNT; i++) {
+- stat = (u8 *)ring + hns3_rxq_stats[i].stats_offset;
++ for (j = 0; j < HNS3_RXQ_STATS_COUNT; j++) {
++ stat = (u8 *)ring + hns3_rxq_stats[j].stats_offset;
+ *data++ = *(u64 *)stat;
+ }
+ }
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index b65f5f3ac034..6e064c04ce0b 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -2484,6 +2484,12 @@ static irqreturn_t ibmvnic_interrupt_rx(int irq, void *instance)
+ struct ibmvnic_sub_crq_queue *scrq = instance;
+ struct ibmvnic_adapter *adapter = scrq->adapter;
+
++ /* When booting a kdump kernel we can hit pending interrupts
++ * prior to completing driver initialization.
++ */
++ if (unlikely(adapter->state != VNIC_OPEN))
++ return IRQ_NONE;
++
+ adapter->rx_stats_buffers[scrq->scrq_num].interrupts++;
+
+ if (napi_schedule_prep(&adapter->napi[scrq->scrq_num])) {
+diff --git a/drivers/net/ethernet/intel/i40evf/i40evf_main.c b/drivers/net/ethernet/intel/i40evf/i40evf_main.c
+index 7b2a4eba92e2..0b23bf6d7873 100644
+--- a/drivers/net/ethernet/intel/i40evf/i40evf_main.c
++++ b/drivers/net/ethernet/intel/i40evf/i40evf_main.c
+@@ -1796,7 +1796,11 @@ static void i40evf_disable_vf(struct i40evf_adapter *adapter)
+
+ adapter->flags |= I40EVF_FLAG_PF_COMMS_FAILED;
+
+- if (netif_running(adapter->netdev)) {
++ /* We don't use netif_running() because it may be true prior to
++ * ndo_open() returning, so we can't assume it means all our open
++ * tasks have finished, since we're not holding the rtnl_lock here.
++ */
++ if (adapter->state == __I40EVF_RUNNING) {
+ set_bit(__I40E_VSI_DOWN, adapter->vsi.state);
+ netif_carrier_off(adapter->netdev);
+ netif_tx_disable(adapter->netdev);
+@@ -1854,6 +1858,7 @@ static void i40evf_reset_task(struct work_struct *work)
+ struct i40evf_mac_filter *f;
+ u32 reg_val;
+ int i = 0, err;
++ bool running;
+
+ while (test_and_set_bit(__I40EVF_IN_CLIENT_TASK,
+ &adapter->crit_section))
+@@ -1913,7 +1918,13 @@ static void i40evf_reset_task(struct work_struct *work)
+ }
+
+ continue_reset:
+- if (netif_running(netdev)) {
++ /* We don't use netif_running() because it may be true prior to
++ * ndo_open() returning, so we can't assume it means all our open
++ * tasks have finished, since we're not holding the rtnl_lock here.
++ */
++ running = (adapter->state == __I40EVF_RUNNING);
++
++ if (running) {
+ netif_carrier_off(netdev);
+ netif_tx_stop_all_queues(netdev);
+ adapter->link_up = false;
+@@ -1964,7 +1975,10 @@ static void i40evf_reset_task(struct work_struct *work)
+
+ mod_timer(&adapter->watchdog_timer, jiffies + 2);
+
+- if (netif_running(adapter->netdev)) {
++ /* We were running when the reset started, so we need to restore some
++ * state here.
++ */
++ if (running) {
+ /* allocate transmit descriptors */
+ err = i40evf_setup_all_tx_resources(adapter);
+ if (err)
+diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
+index 9efe1771423c..523e1108c9df 100644
+--- a/drivers/net/ethernet/marvell/sky2.c
++++ b/drivers/net/ethernet/marvell/sky2.c
+@@ -5087,7 +5087,7 @@ static int sky2_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ INIT_WORK(&hw->restart_work, sky2_restart);
+
+ pci_set_drvdata(pdev, hw);
+- pdev->d3_delay = 150;
++ pdev->d3_delay = 200;
+
+ return 0;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_dcb_nl.c b/drivers/net/ethernet/mellanox/mlx4/en_dcb_nl.c
+index 5f41dc92aa68..752a72499b4f 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_dcb_nl.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_dcb_nl.c
+@@ -156,57 +156,63 @@ static int mlx4_en_dcbnl_getnumtcs(struct net_device *netdev, int tcid, u8 *num)
+ static u8 mlx4_en_dcbnl_set_all(struct net_device *netdev)
+ {
+ struct mlx4_en_priv *priv = netdev_priv(netdev);
++ struct mlx4_en_port_profile *prof = priv->prof;
+ struct mlx4_en_dev *mdev = priv->mdev;
++ u8 tx_pause, tx_ppp, rx_pause, rx_ppp;
+
+ if (!(priv->dcbx_cap & DCB_CAP_DCBX_VER_CEE))
+ return 1;
+
+ if (priv->cee_config.pfc_state) {
+ int tc;
++ rx_ppp = prof->rx_ppp;
++ tx_ppp = prof->tx_ppp;
+
+- priv->prof->rx_pause = 0;
+- priv->prof->tx_pause = 0;
+ for (tc = 0; tc < CEE_DCBX_MAX_PRIO; tc++) {
+ u8 tc_mask = 1 << tc;
+
+ switch (priv->cee_config.dcb_pfc[tc]) {
+ case pfc_disabled:
+- priv->prof->tx_ppp &= ~tc_mask;
+- priv->prof->rx_ppp &= ~tc_mask;
++ tx_ppp &= ~tc_mask;
++ rx_ppp &= ~tc_mask;
+ break;
+ case pfc_enabled_full:
+- priv->prof->tx_ppp |= tc_mask;
+- priv->prof->rx_ppp |= tc_mask;
++ tx_ppp |= tc_mask;
++ rx_ppp |= tc_mask;
+ break;
+ case pfc_enabled_tx:
+- priv->prof->tx_ppp |= tc_mask;
+- priv->prof->rx_ppp &= ~tc_mask;
++ tx_ppp |= tc_mask;
++ rx_ppp &= ~tc_mask;
+ break;
+ case pfc_enabled_rx:
+- priv->prof->tx_ppp &= ~tc_mask;
+- priv->prof->rx_ppp |= tc_mask;
++ tx_ppp &= ~tc_mask;
++ rx_ppp |= tc_mask;
+ break;
+ default:
+ break;
+ }
+ }
+- en_dbg(DRV, priv, "Set pfc on\n");
++ rx_pause = !!(rx_ppp || tx_ppp) ? 0 : prof->rx_pause;
++ tx_pause = !!(rx_ppp || tx_ppp) ? 0 : prof->tx_pause;
+ } else {
+- priv->prof->rx_pause = 1;
+- priv->prof->tx_pause = 1;
+- en_dbg(DRV, priv, "Set pfc off\n");
++ rx_ppp = 0;
++ tx_ppp = 0;
++ rx_pause = prof->rx_pause;
++ tx_pause = prof->tx_pause;
+ }
+
+ if (mlx4_SET_PORT_general(mdev->dev, priv->port,
+ priv->rx_skb_size + ETH_FCS_LEN,
+- priv->prof->tx_pause,
+- priv->prof->tx_ppp,
+- priv->prof->rx_pause,
+- priv->prof->rx_ppp)) {
++ tx_pause, tx_ppp, rx_pause, rx_ppp)) {
+ en_err(priv, "Failed setting pause params\n");
+ return 1;
+ }
+
++ prof->tx_ppp = tx_ppp;
++ prof->rx_ppp = rx_ppp;
++ prof->tx_pause = tx_pause;
++ prof->rx_pause = rx_pause;
++
+ return 0;
+ }
+
+@@ -310,6 +316,7 @@ static int mlx4_en_ets_validate(struct mlx4_en_priv *priv, struct ieee_ets *ets)
+ }
+
+ switch (ets->tc_tsa[i]) {
++ case IEEE_8021QAZ_TSA_VENDOR:
+ case IEEE_8021QAZ_TSA_STRICT:
+ break;
+ case IEEE_8021QAZ_TSA_ETS:
+@@ -347,6 +354,10 @@ static int mlx4_en_config_port_scheduler(struct mlx4_en_priv *priv,
+ /* higher TC means higher priority => lower pg */
+ for (i = IEEE_8021QAZ_MAX_TCS - 1; i >= 0; i--) {
+ switch (ets->tc_tsa[i]) {
++ case IEEE_8021QAZ_TSA_VENDOR:
++ pg[i] = MLX4_EN_TC_VENDOR;
++ tc_tx_bw[i] = MLX4_EN_BW_MAX;
++ break;
+ case IEEE_8021QAZ_TSA_STRICT:
+ pg[i] = num_strict++;
+ tc_tx_bw[i] = MLX4_EN_BW_MAX;
+@@ -403,6 +414,7 @@ static int mlx4_en_dcbnl_ieee_setpfc(struct net_device *dev,
+ struct mlx4_en_priv *priv = netdev_priv(dev);
+ struct mlx4_en_port_profile *prof = priv->prof;
+ struct mlx4_en_dev *mdev = priv->mdev;
++ u32 tx_pause, tx_ppp, rx_pause, rx_ppp;
+ int err;
+
+ en_dbg(DRV, priv, "cap: 0x%x en: 0x%x mbc: 0x%x delay: %d\n",
+@@ -411,23 +423,26 @@ static int mlx4_en_dcbnl_ieee_setpfc(struct net_device *dev,
+ pfc->mbc,
+ pfc->delay);
+
+- prof->rx_pause = !pfc->pfc_en;
+- prof->tx_pause = !pfc->pfc_en;
+- prof->rx_ppp = pfc->pfc_en;
+- prof->tx_ppp = pfc->pfc_en;
++ rx_pause = prof->rx_pause && !pfc->pfc_en;
++ tx_pause = prof->tx_pause && !pfc->pfc_en;
++ rx_ppp = pfc->pfc_en;
++ tx_ppp = pfc->pfc_en;
+
+ err = mlx4_SET_PORT_general(mdev->dev, priv->port,
+ priv->rx_skb_size + ETH_FCS_LEN,
+- prof->tx_pause,
+- prof->tx_ppp,
+- prof->rx_pause,
+- prof->rx_ppp);
+- if (err)
++ tx_pause, tx_ppp, rx_pause, rx_ppp);
++ if (err) {
+ en_err(priv, "Failed setting pause params\n");
+- else
+- mlx4_en_update_pfc_stats_bitmap(mdev->dev, &priv->stats_bitmap,
+- prof->rx_ppp, prof->rx_pause,
+- prof->tx_ppp, prof->tx_pause);
++ return err;
++ }
++
++ mlx4_en_update_pfc_stats_bitmap(mdev->dev, &priv->stats_bitmap,
++ rx_ppp, rx_pause, tx_ppp, tx_pause);
++
++ prof->tx_ppp = tx_ppp;
++ prof->rx_ppp = rx_ppp;
++ prof->rx_pause = rx_pause;
++ prof->tx_pause = tx_pause;
+
+ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+index bf1f04164885..c5ab626f4cba 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_ethtool.c
+@@ -1046,27 +1046,32 @@ static int mlx4_en_set_pauseparam(struct net_device *dev,
+ {
+ struct mlx4_en_priv *priv = netdev_priv(dev);
+ struct mlx4_en_dev *mdev = priv->mdev;
++ u8 tx_pause, tx_ppp, rx_pause, rx_ppp;
+ int err;
+
+ if (pause->autoneg)
+ return -EINVAL;
+
+- priv->prof->tx_pause = pause->tx_pause != 0;
+- priv->prof->rx_pause = pause->rx_pause != 0;
++ tx_pause = !!(pause->tx_pause);
++ rx_pause = !!(pause->rx_pause);
++ rx_ppp = priv->prof->rx_ppp && !(tx_pause || rx_pause);
++ tx_ppp = priv->prof->tx_ppp && !(tx_pause || rx_pause);
++
+ err = mlx4_SET_PORT_general(mdev->dev, priv->port,
+ priv->rx_skb_size + ETH_FCS_LEN,
+- priv->prof->tx_pause,
+- priv->prof->tx_ppp,
+- priv->prof->rx_pause,
+- priv->prof->rx_ppp);
+- if (err)
+- en_err(priv, "Failed setting pause params\n");
+- else
+- mlx4_en_update_pfc_stats_bitmap(mdev->dev, &priv->stats_bitmap,
+- priv->prof->rx_ppp,
+- priv->prof->rx_pause,
+- priv->prof->tx_ppp,
+- priv->prof->tx_pause);
++ tx_pause, tx_ppp, rx_pause, rx_ppp);
++ if (err) {
++ en_err(priv, "Failed setting pause params, err = %d\n", err);
++ return err;
++ }
++
++ mlx4_en_update_pfc_stats_bitmap(mdev->dev, &priv->stats_bitmap,
++ rx_ppp, rx_pause, tx_ppp, tx_pause);
++
++ priv->prof->tx_pause = tx_pause;
++ priv->prof->rx_pause = rx_pause;
++ priv->prof->tx_ppp = tx_ppp;
++ priv->prof->rx_ppp = rx_ppp;
+
+ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_main.c b/drivers/net/ethernet/mellanox/mlx4/en_main.c
+index 2c2965497ed3..d25e16d2c319 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_main.c
+@@ -163,9 +163,9 @@ static void mlx4_en_get_profile(struct mlx4_en_dev *mdev)
+ params->udp_rss = 0;
+ }
+ for (i = 1; i <= MLX4_MAX_PORTS; i++) {
+- params->prof[i].rx_pause = 1;
++ params->prof[i].rx_pause = !(pfcrx || pfctx);
+ params->prof[i].rx_ppp = pfcrx;
+- params->prof[i].tx_pause = 1;
++ params->prof[i].tx_pause = !(pfcrx || pfctx);
+ params->prof[i].tx_ppp = pfctx;
+ params->prof[i].tx_ring_size = MLX4_EN_DEF_TX_RING_SIZE;
+ params->prof[i].rx_ring_size = MLX4_EN_DEF_RX_RING_SIZE;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+index 99051a294fa6..21bc17fa3854 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+@@ -3336,6 +3336,13 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
+ priv->msg_enable = MLX4_EN_MSG_LEVEL;
+ #ifdef CONFIG_MLX4_EN_DCB
+ if (!mlx4_is_slave(priv->mdev->dev)) {
++ u8 prio;
++
++ for (prio = 0; prio < IEEE_8021QAZ_MAX_TCS; ++prio) {
++ priv->ets.prio_tc[prio] = prio;
++ priv->ets.tc_tsa[prio] = IEEE_8021QAZ_TSA_VENDOR;
++ }
++
+ priv->dcbx_cap = DCB_CAP_DCBX_VER_CEE | DCB_CAP_DCBX_HOST |
+ DCB_CAP_DCBX_VER_IEEE;
+ priv->flags |= MLX4_EN_DCB_ENABLED;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+index 2b72677eccd4..7db3d0d9bfce 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
++++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+@@ -479,6 +479,7 @@ struct mlx4_en_frag_info {
+ #define MLX4_EN_BW_MIN 1
+ #define MLX4_EN_BW_MAX 100 /* Utilize 100% of the line */
+
++#define MLX4_EN_TC_VENDOR 0
+ #define MLX4_EN_TC_ETS 7
+
+ enum dcb_pfc_type {
+diff --git a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+index 606a0e0beeae..29e50f787349 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
++++ b/drivers/net/ethernet/mellanox/mlx4/resource_tracker.c
+@@ -5088,6 +5088,7 @@ static void rem_slave_fs_rule(struct mlx4_dev *dev, int slave)
+ &tracker->res_tree[RES_FS_RULE]);
+ list_del(&fs_rule->com.list);
+ spin_unlock_irq(mlx4_tlock(dev));
++ kfree(fs_rule->mirr_mbox);
+ kfree(fs_rule);
+ state = 0;
+ break;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index ea5fff2c3143..f909d0dbae10 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -492,6 +492,9 @@ static int mlx5e_get_coalesce(struct net_device *netdev,
+ return mlx5e_ethtool_get_coalesce(priv, coal);
+ }
+
++#define MLX5E_MAX_COAL_TIME MLX5_MAX_CQ_PERIOD
++#define MLX5E_MAX_COAL_FRAMES MLX5_MAX_CQ_COUNT
++
+ static void
+ mlx5e_set_priv_channels_coalesce(struct mlx5e_priv *priv, struct ethtool_coalesce *coal)
+ {
+@@ -526,6 +529,20 @@ int mlx5e_ethtool_set_coalesce(struct mlx5e_priv *priv,
+ if (!MLX5_CAP_GEN(mdev, cq_moderation))
+ return -EOPNOTSUPP;
+
++ if (coal->tx_coalesce_usecs > MLX5E_MAX_COAL_TIME ||
++ coal->rx_coalesce_usecs > MLX5E_MAX_COAL_TIME) {
++ netdev_info(priv->netdev, "%s: maximum coalesce time supported is %lu usecs\n",
++ __func__, MLX5E_MAX_COAL_TIME);
++ return -ERANGE;
++ }
++
++ if (coal->tx_max_coalesced_frames > MLX5E_MAX_COAL_FRAMES ||
++ coal->rx_max_coalesced_frames > MLX5E_MAX_COAL_FRAMES) {
++ netdev_info(priv->netdev, "%s: maximum coalesced frames supported is %lu\n",
++ __func__, MLX5E_MAX_COAL_FRAMES);
++ return -ERANGE;
++ }
++
+ mutex_lock(&priv->state_lock);
+ new_channels.params = priv->channels.params;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 0d352d4cf48c..f5a704c7d143 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -2715,6 +2715,9 @@ int mlx5e_open(struct net_device *netdev)
+ mlx5_set_port_admin_status(priv->mdev, MLX5_PORT_UP);
+ mutex_unlock(&priv->state_lock);
+
++ if (mlx5e_vxlan_allowed(priv->mdev))
++ udp_tunnel_get_rx_info(netdev);
++
+ return err;
+ }
+
+@@ -4075,7 +4078,7 @@ void mlx5e_build_nic_params(struct mlx5_core_dev *mdev,
+ struct mlx5e_params *params,
+ u16 max_channels)
+ {
+- u8 cq_period_mode = 0;
++ u8 rx_cq_period_mode;
+ u32 link_speed = 0;
+ u32 pci_bw = 0;
+
+@@ -4111,12 +4114,12 @@ void mlx5e_build_nic_params(struct mlx5_core_dev *mdev,
+ params->lro_timeout = mlx5e_choose_lro_timeout(mdev, MLX5E_DEFAULT_LRO_TIMEOUT);
+
+ /* CQ moderation params */
+- cq_period_mode = MLX5_CAP_GEN(mdev, cq_period_start_from_cqe) ?
++ rx_cq_period_mode = MLX5_CAP_GEN(mdev, cq_period_start_from_cqe) ?
+ MLX5_CQ_PERIOD_MODE_START_FROM_CQE :
+ MLX5_CQ_PERIOD_MODE_START_FROM_EQE;
+ params->rx_am_enabled = MLX5_CAP_GEN(mdev, cq_moderation);
+- mlx5e_set_rx_cq_mode_params(params, cq_period_mode);
+- mlx5e_set_tx_cq_mode_params(params, cq_period_mode);
++ mlx5e_set_rx_cq_mode_params(params, rx_cq_period_mode);
++ mlx5e_set_tx_cq_mode_params(params, MLX5_CQ_PERIOD_MODE_START_FROM_EQE);
+
+ /* TX inline */
+ params->tx_max_inline = mlx5e_get_max_inline_cap(mdev);
+@@ -4428,12 +4431,6 @@ static void mlx5e_nic_enable(struct mlx5e_priv *priv)
+ #ifdef CONFIG_MLX5_CORE_EN_DCB
+ mlx5e_dcbnl_init_app(priv);
+ #endif
+- /* Device already registered: sync netdev system state */
+- if (mlx5e_vxlan_allowed(mdev)) {
+- rtnl_lock();
+- udp_tunnel_get_rx_info(netdev);
+- rtnl_unlock();
+- }
+
+ queue_work(priv->wq, &priv->set_rx_mode_work);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 3409d86eb06b..dfa8c6a28a6c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -44,6 +44,11 @@
+ #include "en_tc.h"
+ #include "fs_core.h"
+
++#define MLX5E_REP_PARAMS_LOG_SQ_SIZE \
++ max(0x6, MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE)
++#define MLX5E_REP_PARAMS_LOG_RQ_SIZE \
++ max(0x6, MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE)
++
+ static const char mlx5e_rep_driver_name[] = "mlx5e_rep";
+
+ static void mlx5e_rep_get_drvinfo(struct net_device *dev,
+@@ -231,7 +236,7 @@ void mlx5e_remove_sqs_fwd_rules(struct mlx5e_priv *priv)
+ static void mlx5e_rep_neigh_update_init_interval(struct mlx5e_rep_priv *rpriv)
+ {
+ #if IS_ENABLED(CONFIG_IPV6)
+- unsigned long ipv6_interval = NEIGH_VAR(&ipv6_stub->nd_tbl->parms,
++ unsigned long ipv6_interval = NEIGH_VAR(&nd_tbl.parms,
+ DELAY_PROBE_TIME);
+ #else
+ unsigned long ipv6_interval = ~0UL;
+@@ -367,7 +372,7 @@ static int mlx5e_rep_netevent_event(struct notifier_block *nb,
+ case NETEVENT_NEIGH_UPDATE:
+ n = ptr;
+ #if IS_ENABLED(CONFIG_IPV6)
+- if (n->tbl != ipv6_stub->nd_tbl && n->tbl != &arp_tbl)
++ if (n->tbl != &nd_tbl && n->tbl != &arp_tbl)
+ #else
+ if (n->tbl != &arp_tbl)
+ #endif
+@@ -415,7 +420,7 @@ static int mlx5e_rep_netevent_event(struct notifier_block *nb,
+ * done per device delay prob time parameter.
+ */
+ #if IS_ENABLED(CONFIG_IPV6)
+- if (!p->dev || (p->tbl != ipv6_stub->nd_tbl && p->tbl != &arp_tbl))
++ if (!p->dev || (p->tbl != &nd_tbl && p->tbl != &arp_tbl))
+ #else
+ if (!p->dev || p->tbl != &arp_tbl)
+ #endif
+@@ -611,7 +616,6 @@ static int mlx5e_rep_open(struct net_device *dev)
+ struct mlx5e_priv *priv = netdev_priv(dev);
+ struct mlx5e_rep_priv *rpriv = priv->ppriv;
+ struct mlx5_eswitch_rep *rep = rpriv->rep;
+- struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+ int err;
+
+ mutex_lock(&priv->state_lock);
+@@ -619,8 +623,9 @@ static int mlx5e_rep_open(struct net_device *dev)
+ if (err)
+ goto unlock;
+
+- if (!mlx5_eswitch_set_vport_state(esw, rep->vport,
+- MLX5_ESW_VPORT_ADMIN_STATE_UP))
++ if (!mlx5_modify_vport_admin_state(priv->mdev,
++ MLX5_QUERY_VPORT_STATE_IN_OP_MOD_ESW_VPORT,
++ rep->vport, MLX5_ESW_VPORT_ADMIN_STATE_UP))
+ netif_carrier_on(dev);
+
+ unlock:
+@@ -633,11 +638,12 @@ static int mlx5e_rep_close(struct net_device *dev)
+ struct mlx5e_priv *priv = netdev_priv(dev);
+ struct mlx5e_rep_priv *rpriv = priv->ppriv;
+ struct mlx5_eswitch_rep *rep = rpriv->rep;
+- struct mlx5_eswitch *esw = priv->mdev->priv.eswitch;
+ int ret;
+
+ mutex_lock(&priv->state_lock);
+- (void)mlx5_eswitch_set_vport_state(esw, rep->vport, MLX5_ESW_VPORT_ADMIN_STATE_DOWN);
++ mlx5_modify_vport_admin_state(priv->mdev,
++ MLX5_QUERY_VPORT_STATE_IN_OP_MOD_ESW_VPORT,
++ rep->vport, MLX5_ESW_VPORT_ADMIN_STATE_DOWN);
+ ret = mlx5e_close_locked(dev);
+ mutex_unlock(&priv->state_lock);
+ return ret;
+@@ -823,9 +829,9 @@ static void mlx5e_build_rep_params(struct mlx5_core_dev *mdev,
+ MLX5_CQ_PERIOD_MODE_START_FROM_CQE :
+ MLX5_CQ_PERIOD_MODE_START_FROM_EQE;
+
+- params->log_sq_size = MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE;
++ params->log_sq_size = MLX5E_REP_PARAMS_LOG_SQ_SIZE;
+ params->rq_wq_type = MLX5_WQ_TYPE_LINKED_LIST;
+- params->log_rq_size = MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE;
++ params->log_rq_size = MLX5E_REP_PARAMS_LOG_RQ_SIZE;
+
+ params->rx_am_enabled = MLX5_CAP_GEN(mdev, cq_moderation);
+ mlx5e_set_rx_cq_mode_params(params, cq_period_mode);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 36611b64a91c..f7d9aab2b3b6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -1196,7 +1196,9 @@ static inline void mlx5i_complete_rx_cqe(struct mlx5e_rq *rq,
+ u32 cqe_bcnt,
+ struct sk_buff *skb)
+ {
++ struct hwtstamp_config *tstamp;
+ struct net_device *netdev;
++ struct mlx5e_priv *priv;
+ char *pseudo_header;
+ u32 qpn;
+ u8 *dgid;
+@@ -1215,6 +1217,9 @@ static inline void mlx5i_complete_rx_cqe(struct mlx5e_rq *rq,
+ return;
+ }
+
++ priv = mlx5i_epriv(netdev);
++ tstamp = &priv->tstamp;
++
+ g = (be32_to_cpu(cqe->flags_rqpn) >> 28) & 3;
+ dgid = skb->data + MLX5_IB_GRH_DGID_OFFSET;
+ if ((!g) || dgid[0] != 0xff)
+@@ -1235,7 +1240,7 @@ static inline void mlx5i_complete_rx_cqe(struct mlx5e_rq *rq,
+ skb->ip_summed = CHECKSUM_COMPLETE;
+ skb->csum = csum_unfold((__force __sum16)cqe->check_sum);
+
+- if (unlikely(mlx5e_rx_hw_stamp(rq->tstamp)))
++ if (unlikely(mlx5e_rx_hw_stamp(tstamp)))
+ skb_hwtstamps(skb)->hwtstamp =
+ mlx5_timecounter_cyc2time(rq->clock, get_cqe_ts(cqe));
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 55979ec2e88a..dfab6b08db70 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -495,7 +495,7 @@ void mlx5e_tc_update_neigh_used_value(struct mlx5e_neigh_hash_entry *nhe)
+ tbl = &arp_tbl;
+ #if IS_ENABLED(CONFIG_IPV6)
+ else if (m_neigh->family == AF_INET6)
+- tbl = ipv6_stub->nd_tbl;
++ tbl = &nd_tbl;
+ #endif
+ else
+ return;
+@@ -2102,19 +2102,19 @@ int mlx5e_configure_flower(struct mlx5e_priv *priv,
+ if (err != -EAGAIN)
+ flow->flags |= MLX5E_TC_FLOW_OFFLOADED;
+
++ if (!(flow->flags & MLX5E_TC_FLOW_ESWITCH) ||
++ !(flow->esw_attr->action & MLX5_FLOW_CONTEXT_ACTION_ENCAP))
++ kvfree(parse_attr);
++
+ err = rhashtable_insert_fast(&tc->ht, &flow->node,
+ tc->ht_params);
+- if (err)
+- goto err_del_rule;
++ if (err) {
++ mlx5e_tc_del_flow(priv, flow);
++ kfree(flow);
++ }
+
+- if (flow->flags & MLX5E_TC_FLOW_ESWITCH &&
+- !(flow->esw_attr->action & MLX5_FLOW_CONTEXT_ACTION_ENCAP))
+- kvfree(parse_attr);
+ return err;
+
+-err_del_rule:
+- mlx5e_tc_del_flow(priv, flow);
+-
+ err_free:
+ kvfree(parse_attr);
+ kfree(flow);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/vport.c b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+index a1296a62497d..71153c0f1605 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/vport.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/vport.c
+@@ -36,6 +36,9 @@
+ #include <linux/mlx5/vport.h>
+ #include "mlx5_core.h"
+
++/* Mutex to hold while enabling or disabling RoCE */
++static DEFINE_MUTEX(mlx5_roce_en_lock);
++
+ static int _mlx5_query_vport_state(struct mlx5_core_dev *mdev, u8 opmod,
+ u16 vport, u32 *out, int outlen)
+ {
+@@ -998,17 +1001,35 @@ static int mlx5_nic_vport_update_roce_state(struct mlx5_core_dev *mdev,
+
+ int mlx5_nic_vport_enable_roce(struct mlx5_core_dev *mdev)
+ {
+- if (atomic_inc_return(&mdev->roce.roce_en) != 1)
+- return 0;
+- return mlx5_nic_vport_update_roce_state(mdev, MLX5_VPORT_ROCE_ENABLED);
++ int err = 0;
++
++ mutex_lock(&mlx5_roce_en_lock);
++ if (!mdev->roce.roce_en)
++ err = mlx5_nic_vport_update_roce_state(mdev, MLX5_VPORT_ROCE_ENABLED);
++
++ if (!err)
++ mdev->roce.roce_en++;
++ mutex_unlock(&mlx5_roce_en_lock);
++
++ return err;
+ }
+ EXPORT_SYMBOL_GPL(mlx5_nic_vport_enable_roce);
+
+ int mlx5_nic_vport_disable_roce(struct mlx5_core_dev *mdev)
+ {
+- if (atomic_dec_return(&mdev->roce.roce_en) != 0)
+- return 0;
+- return mlx5_nic_vport_update_roce_state(mdev, MLX5_VPORT_ROCE_DISABLED);
++ int err = 0;
++
++ mutex_lock(&mlx5_roce_en_lock);
++ if (mdev->roce.roce_en) {
++ mdev->roce.roce_en--;
++ if (mdev->roce.roce_en == 0)
++ err = mlx5_nic_vport_update_roce_state(mdev, MLX5_VPORT_ROCE_DISABLED);
++
++ if (err)
++ mdev->roce.roce_en++;
++ }
++ mutex_unlock(&mlx5_roce_en_lock);
++ return err;
+ }
+ EXPORT_SYMBOL_GPL(mlx5_nic_vport_disable_roce);
+
+diff --git a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
+index 14a6d1ba51a9..54fe044ceef8 100644
+--- a/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
++++ b/drivers/net/ethernet/netronome/nfp/nfpcore/nfp_nsp.c
+@@ -68,10 +68,11 @@
+ /* CPP address to retrieve the data from */
+ #define NSP_BUFFER 0x10
+ #define NSP_BUFFER_CPP GENMASK_ULL(63, 40)
+-#define NSP_BUFFER_PCIE GENMASK_ULL(39, 38)
+-#define NSP_BUFFER_ADDRESS GENMASK_ULL(37, 0)
++#define NSP_BUFFER_ADDRESS GENMASK_ULL(39, 0)
+
+ #define NSP_DFLT_BUFFER 0x18
++#define NSP_DFLT_BUFFER_CPP GENMASK_ULL(63, 40)
++#define NSP_DFLT_BUFFER_ADDRESS GENMASK_ULL(39, 0)
+
+ #define NSP_DFLT_BUFFER_CONFIG 0x20
+ #define NSP_DFLT_BUFFER_SIZE_MB GENMASK_ULL(7, 0)
+@@ -412,8 +413,8 @@ static int nfp_nsp_command_buf(struct nfp_nsp *nsp, u16 code, u32 option,
+ if (err < 0)
+ return err;
+
+- cpp_id = FIELD_GET(NSP_BUFFER_CPP, reg) << 8;
+- cpp_buf = FIELD_GET(NSP_BUFFER_ADDRESS, reg);
++ cpp_id = FIELD_GET(NSP_DFLT_BUFFER_CPP, reg) << 8;
++ cpp_buf = FIELD_GET(NSP_DFLT_BUFFER_ADDRESS, reg);
+
+ if (in_buf && in_size) {
+ err = nfp_cpp_write(cpp, cpp_id, cpp_buf, in_buf, in_size);
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index dd713dff8d22..3a0c450552d6 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -8699,12 +8699,12 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ goto err_out_msi_5;
+ }
+
++ pci_set_drvdata(pdev, dev);
++
+ rc = register_netdev(dev);
+ if (rc < 0)
+ goto err_out_cnt_6;
+
+- pci_set_drvdata(pdev, dev);
+-
+ netif_info(tp, probe, dev, "%s at 0x%p, %pM, XID %08x IRQ %d\n",
+ rtl_chip_infos[chipset].name, ioaddr, dev->dev_addr,
+ (u32)(RTL_R32(TxConfig) & 0x9cf0f8ff), pdev->irq);
+diff --git a/drivers/net/ppp/pptp.c b/drivers/net/ppp/pptp.c
+index 6dde9a0cfe76..9b70a3af678e 100644
+--- a/drivers/net/ppp/pptp.c
++++ b/drivers/net/ppp/pptp.c
+@@ -464,7 +464,6 @@ static int pptp_connect(struct socket *sock, struct sockaddr *uservaddr,
+ po->chan.mtu = dst_mtu(&rt->dst);
+ if (!po->chan.mtu)
+ po->chan.mtu = PPP_MRU;
+- ip_rt_put(rt);
+ po->chan.mtu -= PPTP_HEADER_OVERHEAD;
+
+ po->chan.hdrlen = 2 + sizeof(struct pptp_gre_header);
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 56c701b73c12..befed2d22bf4 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -1197,11 +1197,6 @@ static int team_port_add(struct team *team, struct net_device *port_dev)
+ goto err_dev_open;
+ }
+
+- netif_addr_lock_bh(dev);
+- dev_uc_sync_multiple(port_dev, dev);
+- dev_mc_sync_multiple(port_dev, dev);
+- netif_addr_unlock_bh(dev);
+-
+ err = vlan_vids_add_by_dev(port_dev, dev);
+ if (err) {
+ netdev_err(dev, "Failed to add vlan ids to device %s\n",
+@@ -1241,6 +1236,11 @@ static int team_port_add(struct team *team, struct net_device *port_dev)
+ goto err_option_port_add;
+ }
+
++ netif_addr_lock_bh(dev);
++ dev_uc_sync_multiple(port_dev, dev);
++ dev_mc_sync_multiple(port_dev, dev);
++ netif_addr_unlock_bh(dev);
++
+ port->index = -1;
+ list_add_tail_rcu(&port->list, &team->port_list);
+ team_port_enable(team, port);
+@@ -1265,8 +1265,6 @@ static int team_port_add(struct team *team, struct net_device *port_dev)
+ vlan_vids_del_by_dev(port_dev, dev);
+
+ err_vids_add:
+- dev_uc_unsync(port_dev, dev);
+- dev_mc_unsync(port_dev, dev);
+ dev_close(port_dev);
+
+ err_dev_open:
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index ec56ff29aac4..02048263c1fb 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -2863,8 +2863,7 @@ static int lan78xx_bind(struct lan78xx_net *dev, struct usb_interface *intf)
+ if (ret < 0) {
+ netdev_warn(dev->net,
+ "lan78xx_setup_irq_domain() failed : %d", ret);
+- kfree(pdata);
+- return ret;
++ goto out1;
+ }
+
+ dev->net->hard_header_len += TX_OVERHEAD;
+@@ -2872,14 +2871,32 @@ static int lan78xx_bind(struct lan78xx_net *dev, struct usb_interface *intf)
+
+ /* Init all registers */
+ ret = lan78xx_reset(dev);
++ if (ret) {
++ netdev_warn(dev->net, "Registers INIT FAILED....");
++ goto out2;
++ }
+
+ ret = lan78xx_mdio_init(dev);
++ if (ret) {
++ netdev_warn(dev->net, "MDIO INIT FAILED.....");
++ goto out2;
++ }
+
+ dev->net->flags |= IFF_MULTICAST;
+
+ pdata->wol = WAKE_MAGIC;
+
+ return ret;
++
++out2:
++ lan78xx_remove_irq_domain(dev);
++
++out1:
++ netdev_warn(dev->net, "Bind routine FAILED");
++ cancel_work_sync(&pdata->set_multicast);
++ cancel_work_sync(&pdata->set_vlan);
++ kfree(pdata);
++ return ret;
+ }
+
+ static void lan78xx_unbind(struct lan78xx_net *dev, struct usb_interface *intf)
+@@ -2891,6 +2908,8 @@ static void lan78xx_unbind(struct lan78xx_net *dev, struct usb_interface *intf)
+ lan78xx_remove_mdio(dev);
+
+ if (pdata) {
++ cancel_work_sync(&pdata->set_multicast);
++ cancel_work_sync(&pdata->set_vlan);
+ netif_dbg(dev, ifdown, dev->net, "free pdata");
+ kfree(pdata);
+ pdata = NULL;
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index 139c61c8244a..ac40924fe437 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -578,12 +578,13 @@ static int vrf_finish_output(struct net *net, struct sock *sk, struct sk_buff *s
+ if (!IS_ERR(neigh)) {
+ sock_confirm_neigh(skb, neigh);
+ ret = neigh_output(neigh, skb);
++ rcu_read_unlock_bh();
++ return ret;
+ }
+
+ rcu_read_unlock_bh();
+ err:
+- if (unlikely(ret < 0))
+- vrf_tx_error(skb->dev, skb);
++ vrf_tx_error(skb->dev, skb);
+ return ret;
+ }
+
+diff --git a/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c b/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
+index ecc96312a370..6fe0c6abe0d6 100644
+--- a/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
++++ b/drivers/net/wireless/ralink/rt2x00/rt2x00mac.c
+@@ -142,15 +142,25 @@ void rt2x00mac_tx(struct ieee80211_hw *hw,
+ if (!rt2x00dev->ops->hw->set_rts_threshold &&
+ (tx_info->control.rates[0].flags & (IEEE80211_TX_RC_USE_RTS_CTS |
+ IEEE80211_TX_RC_USE_CTS_PROTECT))) {
+- if (rt2x00queue_available(queue) <= 1)
+- goto exit_fail;
++ if (rt2x00queue_available(queue) <= 1) {
++ /*
++ * Recheck for full queue under lock to avoid race
++ * conditions with rt2x00lib_txdone().
++ */
++ spin_lock(&queue->tx_lock);
++ if (rt2x00queue_threshold(queue))
++ rt2x00queue_pause_queue(queue);
++ spin_unlock(&queue->tx_lock);
++
++ goto exit_free_skb;
++ }
+
+ if (rt2x00mac_tx_rts_cts(rt2x00dev, queue, skb))
+- goto exit_fail;
++ goto exit_free_skb;
+ }
+
+ if (unlikely(rt2x00queue_write_tx_frame(queue, skb, control->sta, false)))
+- goto exit_fail;
++ goto exit_free_skb;
+
+ /*
+ * Pausing queue has to be serialized with rt2x00lib_txdone(). Note
+@@ -164,10 +174,6 @@ void rt2x00mac_tx(struct ieee80211_hw *hw,
+
+ return;
+
+- exit_fail:
+- spin_lock(&queue->tx_lock);
+- rt2x00queue_pause_queue(queue);
+- spin_unlock(&queue->tx_lock);
+ exit_free_skb:
+ ieee80211_free_txskb(hw, skb);
+ }
+diff --git a/drivers/net/wireless/ti/wl1251/main.c b/drivers/net/wireless/ti/wl1251/main.c
+index 6d02c660b4ab..037defd10b91 100644
+--- a/drivers/net/wireless/ti/wl1251/main.c
++++ b/drivers/net/wireless/ti/wl1251/main.c
+@@ -1200,8 +1200,7 @@ static void wl1251_op_bss_info_changed(struct ieee80211_hw *hw,
+ WARN_ON(wl->bss_type != BSS_TYPE_STA_BSS);
+
+ enable = bss_conf->arp_addr_cnt == 1 && bss_conf->assoc;
+- wl1251_acx_arp_ip_filter(wl, enable, addr);
+-
++ ret = wl1251_acx_arp_ip_filter(wl, enable, addr);
+ if (ret < 0)
+ goto out_sleep;
+ }
+diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c
+index 894c2ccb3891..8b5e640d8686 100644
+--- a/drivers/nvme/host/fabrics.c
++++ b/drivers/nvme/host/fabrics.c
+@@ -869,32 +869,41 @@ nvmf_create_ctrl(struct device *dev, const char *buf, size_t count)
+ goto out_unlock;
+ }
+
++ if (!try_module_get(ops->module)) {
++ ret = -EBUSY;
++ goto out_unlock;
++ }
++
+ ret = nvmf_check_required_opts(opts, ops->required_opts);
+ if (ret)
+- goto out_unlock;
++ goto out_module_put;
+ ret = nvmf_check_allowed_opts(opts, NVMF_ALLOWED_OPTS |
+ ops->allowed_opts | ops->required_opts);
+ if (ret)
+- goto out_unlock;
++ goto out_module_put;
+
+ ctrl = ops->create_ctrl(dev, opts);
+ if (IS_ERR(ctrl)) {
+ ret = PTR_ERR(ctrl);
+- goto out_unlock;
++ goto out_module_put;
+ }
+
+ if (strcmp(ctrl->subsys->subnqn, opts->subsysnqn)) {
+ dev_warn(ctrl->device,
+ "controller returned incorrect NQN: \"%s\".\n",
+ ctrl->subsys->subnqn);
++ module_put(ops->module);
+ up_read(&nvmf_transports_rwsem);
+ nvme_delete_ctrl_sync(ctrl);
+ return ERR_PTR(-EINVAL);
+ }
+
++ module_put(ops->module);
+ up_read(&nvmf_transports_rwsem);
+ return ctrl;
+
++out_module_put:
++ module_put(ops->module);
+ out_unlock:
+ up_read(&nvmf_transports_rwsem);
+ out_free_opts:
+diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
+index 9ba614953607..25b19f722f5b 100644
+--- a/drivers/nvme/host/fabrics.h
++++ b/drivers/nvme/host/fabrics.h
+@@ -108,6 +108,7 @@ struct nvmf_ctrl_options {
+ * fabric implementation of NVMe fabrics.
+ * @entry: Used by the fabrics library to add the new
+ * registration entry to its linked-list internal tree.
++ * @module: Transport module reference
+ * @name: Name of the NVMe fabric driver implementation.
+ * @required_opts: sysfs command-line options that must be specified
+ * when adding a new NVMe controller.
+@@ -126,6 +127,7 @@ struct nvmf_ctrl_options {
+ */
+ struct nvmf_transport_ops {
+ struct list_head entry;
++ struct module *module;
+ const char *name;
+ int required_opts;
+ int allowed_opts;
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 794e66e4aa20..306aee47c8ce 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -3380,6 +3380,7 @@ nvme_fc_create_ctrl(struct device *dev, struct nvmf_ctrl_options *opts)
+
+ static struct nvmf_transport_ops nvme_fc_transport = {
+ .name = "fc",
++ .module = THIS_MODULE,
+ .required_opts = NVMF_OPT_TRADDR | NVMF_OPT_HOST_TRADDR,
+ .allowed_opts = NVMF_OPT_RECONNECT_DELAY | NVMF_OPT_CTRL_LOSS_TMO,
+ .create_ctrl = nvme_fc_create_ctrl,
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 2a0bba7f50cf..d49b1e74f304 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -2018,6 +2018,7 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
+
+ static struct nvmf_transport_ops nvme_rdma_transport = {
+ .name = "rdma",
++ .module = THIS_MODULE,
+ .required_opts = NVMF_OPT_TRADDR,
+ .allowed_opts = NVMF_OPT_TRSVCID | NVMF_OPT_RECONNECT_DELAY |
+ NVMF_OPT_HOST_TRADDR | NVMF_OPT_CTRL_LOSS_TMO,
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index 6a018a0bd6ce..cd1adb9e7e9d 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -204,6 +204,10 @@ struct fcloop_lport {
+ struct completion unreg_done;
+ };
+
++struct fcloop_lport_priv {
++ struct fcloop_lport *lport;
++};
++
+ struct fcloop_rport {
+ struct nvme_fc_remote_port *remoteport;
+ struct nvmet_fc_target_port *targetport;
+@@ -370,6 +374,7 @@ fcloop_tgt_fcprqst_done_work(struct work_struct *work)
+
+ spin_lock(&tfcp_req->reqlock);
+ fcpreq = tfcp_req->fcpreq;
++ tfcp_req->fcpreq = NULL;
+ spin_unlock(&tfcp_req->reqlock);
+
+ if (tport->remoteport && fcpreq) {
+@@ -611,11 +616,7 @@ fcloop_fcp_abort(struct nvme_fc_local_port *localport,
+
+ if (!tfcp_req)
+ /* abort has already been called */
+- return;
+-
+- if (rport->targetport)
+- nvmet_fc_rcv_fcp_abort(rport->targetport,
+- &tfcp_req->tgt_fcp_req);
++ goto finish;
+
+ /* break initiator/target relationship for io */
+ spin_lock(&tfcp_req->reqlock);
+@@ -623,6 +624,11 @@ fcloop_fcp_abort(struct nvme_fc_local_port *localport,
+ tfcp_req->fcpreq = NULL;
+ spin_unlock(&tfcp_req->reqlock);
+
++ if (rport->targetport)
++ nvmet_fc_rcv_fcp_abort(rport->targetport,
++ &tfcp_req->tgt_fcp_req);
++
++finish:
+ /* post the aborted io completion */
+ fcpreq->status = -ECANCELED;
+ schedule_work(&inireq->iniwork);
+@@ -657,7 +663,8 @@ fcloop_nport_get(struct fcloop_nport *nport)
+ static void
+ fcloop_localport_delete(struct nvme_fc_local_port *localport)
+ {
+- struct fcloop_lport *lport = localport->private;
++ struct fcloop_lport_priv *lport_priv = localport->private;
++ struct fcloop_lport *lport = lport_priv->lport;
+
+ /* release any threads waiting for the unreg to complete */
+ complete(&lport->unreg_done);
+@@ -697,7 +704,7 @@ static struct nvme_fc_port_template fctemplate = {
+ .max_dif_sgl_segments = FCLOOP_SGL_SEGS,
+ .dma_boundary = FCLOOP_DMABOUND_4G,
+ /* sizes of additional private data for data structures */
+- .local_priv_sz = sizeof(struct fcloop_lport),
++ .local_priv_sz = sizeof(struct fcloop_lport_priv),
+ .remote_priv_sz = sizeof(struct fcloop_rport),
+ .lsrqst_priv_sz = sizeof(struct fcloop_lsreq),
+ .fcprqst_priv_sz = sizeof(struct fcloop_ini_fcpreq),
+@@ -728,11 +735,17 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
+ struct fcloop_ctrl_options *opts;
+ struct nvme_fc_local_port *localport;
+ struct fcloop_lport *lport;
+- int ret;
++ struct fcloop_lport_priv *lport_priv;
++ unsigned long flags;
++ int ret = -ENOMEM;
++
++ lport = kzalloc(sizeof(*lport), GFP_KERNEL);
++ if (!lport)
++ return -ENOMEM;
+
+ opts = kzalloc(sizeof(*opts), GFP_KERNEL);
+ if (!opts)
+- return -ENOMEM;
++ goto out_free_lport;
+
+ ret = fcloop_parse_options(opts, buf);
+ if (ret)
+@@ -752,23 +765,25 @@ fcloop_create_local_port(struct device *dev, struct device_attribute *attr,
+
+ ret = nvme_fc_register_localport(&pinfo, &fctemplate, NULL, &localport);
+ if (!ret) {
+- unsigned long flags;
+-
+ /* success */
+- lport = localport->private;
++ lport_priv = localport->private;
++ lport_priv->lport = lport;
++
+ lport->localport = localport;
+ INIT_LIST_HEAD(&lport->lport_list);
+
+ spin_lock_irqsave(&fcloop_lock, flags);
+ list_add_tail(&lport->lport_list, &fcloop_lports);
+ spin_unlock_irqrestore(&fcloop_lock, flags);
+-
+- /* mark all of the input buffer consumed */
+- ret = count;
+ }
+
+ out_free_opts:
+ kfree(opts);
++out_free_lport:
++ /* free only if we're going to fail */
++ if (ret)
++ kfree(lport);
++
+ return ret ? ret : count;
+ }
+
+@@ -790,6 +805,8 @@ __wait_localport_unreg(struct fcloop_lport *lport)
+
+ wait_for_completion(&lport->unreg_done);
+
++ kfree(lport);
++
+ return ret;
+ }
+
+diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c
+index 1e21b286f299..fdfcc961029f 100644
+--- a/drivers/nvme/target/loop.c
++++ b/drivers/nvme/target/loop.c
+@@ -686,6 +686,7 @@ static struct nvmet_fabrics_ops nvme_loop_ops = {
+
+ static struct nvmf_transport_ops nvme_loop_transport = {
+ .name = "loop",
++ .module = THIS_MODULE,
+ .create_ctrl = nvme_loop_create_ctrl,
+ };
+
+diff --git a/drivers/pinctrl/intel/pinctrl-baytrail.c b/drivers/pinctrl/intel/pinctrl-baytrail.c
+index 9c1ca29c60b7..6b52ea1440a6 100644
+--- a/drivers/pinctrl/intel/pinctrl-baytrail.c
++++ b/drivers/pinctrl/intel/pinctrl-baytrail.c
+@@ -46,6 +46,9 @@
+ #define BYT_TRIG_POS BIT(25)
+ #define BYT_TRIG_LVL BIT(24)
+ #define BYT_DEBOUNCE_EN BIT(20)
++#define BYT_GLITCH_FILTER_EN BIT(19)
++#define BYT_GLITCH_F_SLOW_CLK BIT(17)
++#define BYT_GLITCH_F_FAST_CLK BIT(16)
+ #define BYT_PULL_STR_SHIFT 9
+ #define BYT_PULL_STR_MASK (3 << BYT_PULL_STR_SHIFT)
+ #define BYT_PULL_STR_2K (0 << BYT_PULL_STR_SHIFT)
+@@ -1579,6 +1582,9 @@ static int byt_irq_type(struct irq_data *d, unsigned int type)
+ */
+ value &= ~(BYT_DIRECT_IRQ_EN | BYT_TRIG_POS | BYT_TRIG_NEG |
+ BYT_TRIG_LVL);
++ /* Enable glitch filtering */
++ value |= BYT_GLITCH_FILTER_EN | BYT_GLITCH_F_SLOW_CLK |
++ BYT_GLITCH_F_FAST_CLK;
+
+ writel(value, reg);
+
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index d51ebd1da65e..9dc7590e07cb 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -785,6 +785,14 @@ static int charger_init_hw_regs(struct axp288_chrg_info *info)
+ return 0;
+ }
+
++static void axp288_charger_cancel_work(void *data)
++{
++ struct axp288_chrg_info *info = data;
++
++ cancel_work_sync(&info->otg.work);
++ cancel_work_sync(&info->cable.work);
++}
++
+ static int axp288_charger_probe(struct platform_device *pdev)
+ {
+ int ret, i, pirq;
+@@ -836,6 +844,11 @@ static int axp288_charger_probe(struct platform_device *pdev)
+ return ret;
+ }
+
++ /* Cancel our work on cleanup, register this before the notifiers */
++ ret = devm_add_action(dev, axp288_charger_cancel_work, info);
++ if (ret)
++ return ret;
++
+ /* Register for extcon notification */
+ INIT_WORK(&info->cable.work, axp288_charger_extcon_evt_worker);
+ info->cable.nb[0].notifier_call = axp288_charger_handle_cable0_evt;
+diff --git a/drivers/rtc/rtc-ac100.c b/drivers/rtc/rtc-ac100.c
+index 0e358d4b6738..8ff9dc3fe5bf 100644
+--- a/drivers/rtc/rtc-ac100.c
++++ b/drivers/rtc/rtc-ac100.c
+@@ -137,13 +137,15 @@ static unsigned long ac100_clkout_recalc_rate(struct clk_hw *hw,
+ div = (reg >> AC100_CLKOUT_PRE_DIV_SHIFT) &
+ ((1 << AC100_CLKOUT_PRE_DIV_WIDTH) - 1);
+ prate = divider_recalc_rate(hw, prate, div,
+- ac100_clkout_prediv, 0);
++ ac100_clkout_prediv, 0,
++ AC100_CLKOUT_PRE_DIV_WIDTH);
+ }
+
+ div = (reg >> AC100_CLKOUT_DIV_SHIFT) &
+ (BIT(AC100_CLKOUT_DIV_WIDTH) - 1);
+ return divider_recalc_rate(hw, prate, div, NULL,
+- CLK_DIVIDER_POWER_OF_TWO);
++ CLK_DIVIDER_POWER_OF_TWO,
++ AC100_CLKOUT_DIV_WIDTH);
+ }
+
+ static long ac100_clkout_round_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/scsi/libiscsi.c b/drivers/scsi/libiscsi.c
+index 9c50d2d9f27c..785d1c55d152 100644
+--- a/drivers/scsi/libiscsi.c
++++ b/drivers/scsi/libiscsi.c
+@@ -1696,6 +1696,15 @@ int iscsi_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *sc)
+ */
+ switch (session->state) {
+ case ISCSI_STATE_FAILED:
++ /*
++ * cmds should fail during shutdown, if the session
++ * state is bad, allowing completion to happen
++ */
++ if (unlikely(system_state != SYSTEM_RUNNING)) {
++ reason = FAILURE_SESSION_FAILED;
++ sc->result = DID_NO_CONNECT << 16;
++ break;
++ }
+ case ISCSI_STATE_IN_RECOVERY:
+ reason = FAILURE_SESSION_IN_RECOVERY;
+ sc->result = DID_IMM_RETRY << 16;
+@@ -1978,6 +1987,19 @@ enum blk_eh_timer_return iscsi_eh_cmd_timed_out(struct scsi_cmnd *sc)
+ }
+
+ if (session->state != ISCSI_STATE_LOGGED_IN) {
++ /*
++ * During shutdown, if session is prematurely disconnected,
++ * recovery won't happen and there will be hung cmds. Not
++ * handling cmds would trigger EH, also bad in this case.
++ * Instead, handle cmd, allow completion to happen and let
++ * upper layer to deal with the result.
++ */
++ if (unlikely(system_state != SYSTEM_RUNNING)) {
++ sc->result = DID_NO_CONNECT << 16;
++ ISCSI_DBG_EH(session, "sc on shutdown, handled\n");
++ rc = BLK_EH_HANDLED;
++ goto done;
++ }
+ /*
+ * We are probably in the middle of iscsi recovery so let
+ * that complete and handle the error.
+@@ -2082,7 +2104,7 @@ enum blk_eh_timer_return iscsi_eh_cmd_timed_out(struct scsi_cmnd *sc)
+ task->last_timeout = jiffies;
+ spin_unlock(&session->frwd_lock);
+ ISCSI_DBG_EH(session, "return %s\n", rc == BLK_EH_RESET_TIMER ?
+- "timer reset" : "nh");
++ "timer reset" : "shutdown or nh");
+ return rc;
+ }
+ EXPORT_SYMBOL_GPL(iscsi_eh_cmd_timed_out);
+diff --git a/drivers/scsi/libsas/sas_event.c b/drivers/scsi/libsas/sas_event.c
+index 0bb9eefc08c8..5d7254aa2dd2 100644
+--- a/drivers/scsi/libsas/sas_event.c
++++ b/drivers/scsi/libsas/sas_event.c
+@@ -29,7 +29,8 @@
+
+ int sas_queue_work(struct sas_ha_struct *ha, struct sas_work *sw)
+ {
+- int rc = 0;
++ /* it's added to the defer_q when draining so return succeed */
++ int rc = 1;
+
+ if (!test_bit(SAS_HA_REGISTERED, &ha->state))
+ return 0;
+@@ -44,19 +45,15 @@ int sas_queue_work(struct sas_ha_struct *ha, struct sas_work *sw)
+ return rc;
+ }
+
+-static int sas_queue_event(int event, unsigned long *pending,
+- struct sas_work *work,
++static int sas_queue_event(int event, struct sas_work *work,
+ struct sas_ha_struct *ha)
+ {
+- int rc = 0;
++ unsigned long flags;
++ int rc;
+
+- if (!test_and_set_bit(event, pending)) {
+- unsigned long flags;
+-
+- spin_lock_irqsave(&ha->lock, flags);
+- rc = sas_queue_work(ha, work);
+- spin_unlock_irqrestore(&ha->lock, flags);
+- }
++ spin_lock_irqsave(&ha->lock, flags);
++ rc = sas_queue_work(ha, work);
++ spin_unlock_irqrestore(&ha->lock, flags);
+
+ return rc;
+ }
+@@ -66,6 +63,7 @@ void __sas_drain_work(struct sas_ha_struct *ha)
+ {
+ struct workqueue_struct *wq = ha->core.shost->work_q;
+ struct sas_work *sw, *_sw;
++ int ret;
+
+ set_bit(SAS_HA_DRAINING, &ha->state);
+ /* flush submitters */
+@@ -78,7 +76,10 @@ void __sas_drain_work(struct sas_ha_struct *ha)
+ clear_bit(SAS_HA_DRAINING, &ha->state);
+ list_for_each_entry_safe(sw, _sw, &ha->defer_q, drain_node) {
+ list_del_init(&sw->drain_node);
+- sas_queue_work(ha, sw);
++ ret = sas_queue_work(ha, sw);
++ if (ret != 1)
++ sas_free_event(to_asd_sas_event(&sw->work));
++
+ }
+ spin_unlock_irq(&ha->lock);
+ }
+@@ -119,29 +120,68 @@ void sas_enable_revalidation(struct sas_ha_struct *ha)
+ if (!test_and_clear_bit(ev, &d->pending))
+ continue;
+
+- sas_queue_event(ev, &d->pending, &d->disc_work[ev].work, ha);
++ sas_queue_event(ev, &d->disc_work[ev].work, ha);
+ }
+ mutex_unlock(&ha->disco_mutex);
+ }
+
++
++static void sas_port_event_worker(struct work_struct *work)
++{
++ struct asd_sas_event *ev = to_asd_sas_event(work);
++
++ sas_port_event_fns[ev->event](work);
++ sas_free_event(ev);
++}
++
++static void sas_phy_event_worker(struct work_struct *work)
++{
++ struct asd_sas_event *ev = to_asd_sas_event(work);
++
++ sas_phy_event_fns[ev->event](work);
++ sas_free_event(ev);
++}
++
+ static int sas_notify_port_event(struct asd_sas_phy *phy, enum port_event event)
+ {
++ struct asd_sas_event *ev;
+ struct sas_ha_struct *ha = phy->ha;
++ int ret;
+
+ BUG_ON(event >= PORT_NUM_EVENTS);
+
+- return sas_queue_event(event, &phy->port_events_pending,
+- &phy->port_events[event].work, ha);
++ ev = sas_alloc_event(phy);
++ if (!ev)
++ return -ENOMEM;
++
++ INIT_SAS_EVENT(ev, sas_port_event_worker, phy, event);
++
++ ret = sas_queue_event(event, &ev->work, ha);
++ if (ret != 1)
++ sas_free_event(ev);
++
++ return ret;
+ }
+
+ int sas_notify_phy_event(struct asd_sas_phy *phy, enum phy_event event)
+ {
++ struct asd_sas_event *ev;
+ struct sas_ha_struct *ha = phy->ha;
++ int ret;
+
+ BUG_ON(event >= PHY_NUM_EVENTS);
+
+- return sas_queue_event(event, &phy->phy_events_pending,
+- &phy->phy_events[event].work, ha);
++ ev = sas_alloc_event(phy);
++ if (!ev)
++ return -ENOMEM;
++
++ INIT_SAS_EVENT(ev, sas_phy_event_worker, phy, event);
++
++ ret = sas_queue_event(event, &ev->work, ha);
++ if (ret != 1)
++ sas_free_event(ev);
++
++ return ret;
+ }
+
+ int sas_init_events(struct sas_ha_struct *sas_ha)
+diff --git a/drivers/scsi/libsas/sas_expander.c b/drivers/scsi/libsas/sas_expander.c
+index 3183d63de4da..39e42744aa33 100644
+--- a/drivers/scsi/libsas/sas_expander.c
++++ b/drivers/scsi/libsas/sas_expander.c
+@@ -293,6 +293,7 @@ static void sas_set_ex_phy(struct domain_device *dev, int phy_id, void *rsp)
+ phy->phy->minimum_linkrate = dr->pmin_linkrate;
+ phy->phy->maximum_linkrate = dr->pmax_linkrate;
+ phy->phy->negotiated_linkrate = phy->linkrate;
++ phy->phy->enabled = (phy->linkrate != SAS_PHY_DISABLED);
+
+ skip:
+ if (new_phy)
+@@ -686,7 +687,7 @@ int sas_smp_get_phy_events(struct sas_phy *phy)
+ res = smp_execute_task(dev, req, RPEL_REQ_SIZE,
+ resp, RPEL_RESP_SIZE);
+
+- if (!res)
++ if (res)
+ goto out;
+
+ phy->invalid_dword_count = scsi_to_u32(&resp[12]);
+@@ -695,6 +696,7 @@ int sas_smp_get_phy_events(struct sas_phy *phy)
+ phy->phy_reset_problem_count = scsi_to_u32(&resp[24]);
+
+ out:
++ kfree(req);
+ kfree(resp);
+ return res;
+
+diff --git a/drivers/scsi/libsas/sas_init.c b/drivers/scsi/libsas/sas_init.c
+index 64fa6f53cb8b..e04f6d6f5aff 100644
+--- a/drivers/scsi/libsas/sas_init.c
++++ b/drivers/scsi/libsas/sas_init.c
+@@ -39,6 +39,7 @@
+ #include "../scsi_sas_internal.h"
+
+ static struct kmem_cache *sas_task_cache;
++static struct kmem_cache *sas_event_cache;
+
+ struct sas_task *sas_alloc_task(gfp_t flags)
+ {
+@@ -364,8 +365,6 @@ void sas_prep_resume_ha(struct sas_ha_struct *ha)
+ struct asd_sas_phy *phy = ha->sas_phy[i];
+
+ memset(phy->attached_sas_addr, 0, SAS_ADDR_SIZE);
+- phy->port_events_pending = 0;
+- phy->phy_events_pending = 0;
+ phy->frame_rcvd_size = 0;
+ }
+ }
+@@ -555,20 +554,42 @@ sas_domain_attach_transport(struct sas_domain_function_template *dft)
+ }
+ EXPORT_SYMBOL_GPL(sas_domain_attach_transport);
+
++
++struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy)
++{
++ gfp_t flags = in_interrupt() ? GFP_ATOMIC : GFP_KERNEL;
++
++ return kmem_cache_zalloc(sas_event_cache, flags);
++}
++
++void sas_free_event(struct asd_sas_event *event)
++{
++ kmem_cache_free(sas_event_cache, event);
++}
++
+ /* ---------- SAS Class register/unregister ---------- */
+
+ static int __init sas_class_init(void)
+ {
+ sas_task_cache = KMEM_CACHE(sas_task, SLAB_HWCACHE_ALIGN);
+ if (!sas_task_cache)
+- return -ENOMEM;
++ goto out;
++
++ sas_event_cache = KMEM_CACHE(asd_sas_event, SLAB_HWCACHE_ALIGN);
++ if (!sas_event_cache)
++ goto free_task_kmem;
+
+ return 0;
++free_task_kmem:
++ kmem_cache_destroy(sas_task_cache);
++out:
++ return -ENOMEM;
+ }
+
+ static void __exit sas_class_exit(void)
+ {
+ kmem_cache_destroy(sas_task_cache);
++ kmem_cache_destroy(sas_event_cache);
+ }
+
+ MODULE_AUTHOR("Luben Tuikov <luben_tuikov@adaptec.com>");
+diff --git a/drivers/scsi/libsas/sas_internal.h b/drivers/scsi/libsas/sas_internal.h
+index c07e08136491..d8826a747690 100644
+--- a/drivers/scsi/libsas/sas_internal.h
++++ b/drivers/scsi/libsas/sas_internal.h
+@@ -61,6 +61,9 @@ int sas_show_oob_mode(enum sas_oob_mode oob_mode, char *buf);
+ int sas_register_phys(struct sas_ha_struct *sas_ha);
+ void sas_unregister_phys(struct sas_ha_struct *sas_ha);
+
++struct asd_sas_event *sas_alloc_event(struct asd_sas_phy *phy);
++void sas_free_event(struct asd_sas_event *event);
++
+ int sas_register_ports(struct sas_ha_struct *sas_ha);
+ void sas_unregister_ports(struct sas_ha_struct *sas_ha);
+
+@@ -99,6 +102,9 @@ void sas_hae_reset(struct work_struct *work);
+
+ void sas_free_device(struct kref *kref);
+
++extern const work_func_t sas_phy_event_fns[PHY_NUM_EVENTS];
++extern const work_func_t sas_port_event_fns[PORT_NUM_EVENTS];
++
+ #ifdef CONFIG_SCSI_SAS_HOST_SMP
+ extern void sas_smp_host_handler(struct bsg_job *job, struct Scsi_Host *shost);
+ #else
+diff --git a/drivers/scsi/libsas/sas_phy.c b/drivers/scsi/libsas/sas_phy.c
+index cdee446c29e1..59f82929b0a3 100644
+--- a/drivers/scsi/libsas/sas_phy.c
++++ b/drivers/scsi/libsas/sas_phy.c
+@@ -35,7 +35,6 @@ static void sas_phye_loss_of_signal(struct work_struct *work)
+ struct asd_sas_event *ev = to_asd_sas_event(work);
+ struct asd_sas_phy *phy = ev->phy;
+
+- clear_bit(PHYE_LOSS_OF_SIGNAL, &phy->phy_events_pending);
+ phy->error = 0;
+ sas_deform_port(phy, 1);
+ }
+@@ -45,7 +44,6 @@ static void sas_phye_oob_done(struct work_struct *work)
+ struct asd_sas_event *ev = to_asd_sas_event(work);
+ struct asd_sas_phy *phy = ev->phy;
+
+- clear_bit(PHYE_OOB_DONE, &phy->phy_events_pending);
+ phy->error = 0;
+ }
+
+@@ -58,8 +56,6 @@ static void sas_phye_oob_error(struct work_struct *work)
+ struct sas_internal *i =
+ to_sas_internal(sas_ha->core.shost->transportt);
+
+- clear_bit(PHYE_OOB_ERROR, &phy->phy_events_pending);
+-
+ sas_deform_port(phy, 1);
+
+ if (!port && phy->enabled && i->dft->lldd_control_phy) {
+@@ -88,8 +84,6 @@ static void sas_phye_spinup_hold(struct work_struct *work)
+ struct sas_internal *i =
+ to_sas_internal(sas_ha->core.shost->transportt);
+
+- clear_bit(PHYE_SPINUP_HOLD, &phy->phy_events_pending);
+-
+ phy->error = 0;
+ i->dft->lldd_control_phy(phy, PHY_FUNC_RELEASE_SPINUP_HOLD, NULL);
+ }
+@@ -99,8 +93,6 @@ static void sas_phye_resume_timeout(struct work_struct *work)
+ struct asd_sas_event *ev = to_asd_sas_event(work);
+ struct asd_sas_phy *phy = ev->phy;
+
+- clear_bit(PHYE_RESUME_TIMEOUT, &phy->phy_events_pending);
+-
+ /* phew, lldd got the phy back in the nick of time */
+ if (!phy->suspended) {
+ dev_info(&phy->phy->dev, "resume timeout cancelled\n");
+@@ -119,39 +111,12 @@ int sas_register_phys(struct sas_ha_struct *sas_ha)
+ {
+ int i;
+
+- static const work_func_t sas_phy_event_fns[PHY_NUM_EVENTS] = {
+- [PHYE_LOSS_OF_SIGNAL] = sas_phye_loss_of_signal,
+- [PHYE_OOB_DONE] = sas_phye_oob_done,
+- [PHYE_OOB_ERROR] = sas_phye_oob_error,
+- [PHYE_SPINUP_HOLD] = sas_phye_spinup_hold,
+- [PHYE_RESUME_TIMEOUT] = sas_phye_resume_timeout,
+-
+- };
+-
+- static const work_func_t sas_port_event_fns[PORT_NUM_EVENTS] = {
+- [PORTE_BYTES_DMAED] = sas_porte_bytes_dmaed,
+- [PORTE_BROADCAST_RCVD] = sas_porte_broadcast_rcvd,
+- [PORTE_LINK_RESET_ERR] = sas_porte_link_reset_err,
+- [PORTE_TIMER_EVENT] = sas_porte_timer_event,
+- [PORTE_HARD_RESET] = sas_porte_hard_reset,
+- };
+-
+ /* Now register the phys. */
+ for (i = 0; i < sas_ha->num_phys; i++) {
+- int k;
+ struct asd_sas_phy *phy = sas_ha->sas_phy[i];
+
+ phy->error = 0;
+ INIT_LIST_HEAD(&phy->port_phy_el);
+- for (k = 0; k < PORT_NUM_EVENTS; k++) {
+- INIT_SAS_WORK(&phy->port_events[k].work, sas_port_event_fns[k]);
+- phy->port_events[k].phy = phy;
+- }
+-
+- for (k = 0; k < PHY_NUM_EVENTS; k++) {
+- INIT_SAS_WORK(&phy->phy_events[k].work, sas_phy_event_fns[k]);
+- phy->phy_events[k].phy = phy;
+- }
+
+ phy->port = NULL;
+ phy->ha = sas_ha;
+@@ -179,3 +144,12 @@ int sas_register_phys(struct sas_ha_struct *sas_ha)
+
+ return 0;
+ }
++
++const work_func_t sas_phy_event_fns[PHY_NUM_EVENTS] = {
++ [PHYE_LOSS_OF_SIGNAL] = sas_phye_loss_of_signal,
++ [PHYE_OOB_DONE] = sas_phye_oob_done,
++ [PHYE_OOB_ERROR] = sas_phye_oob_error,
++ [PHYE_SPINUP_HOLD] = sas_phye_spinup_hold,
++ [PHYE_RESUME_TIMEOUT] = sas_phye_resume_timeout,
++
++};
+diff --git a/drivers/scsi/libsas/sas_port.c b/drivers/scsi/libsas/sas_port.c
+index d3c5297c6c89..93266283f51f 100644
+--- a/drivers/scsi/libsas/sas_port.c
++++ b/drivers/scsi/libsas/sas_port.c
+@@ -261,8 +261,6 @@ void sas_porte_bytes_dmaed(struct work_struct *work)
+ struct asd_sas_event *ev = to_asd_sas_event(work);
+ struct asd_sas_phy *phy = ev->phy;
+
+- clear_bit(PORTE_BYTES_DMAED, &phy->port_events_pending);
+-
+ sas_form_port(phy);
+ }
+
+@@ -273,8 +271,6 @@ void sas_porte_broadcast_rcvd(struct work_struct *work)
+ unsigned long flags;
+ u32 prim;
+
+- clear_bit(PORTE_BROADCAST_RCVD, &phy->port_events_pending);
+-
+ spin_lock_irqsave(&phy->sas_prim_lock, flags);
+ prim = phy->sas_prim;
+ spin_unlock_irqrestore(&phy->sas_prim_lock, flags);
+@@ -288,8 +284,6 @@ void sas_porte_link_reset_err(struct work_struct *work)
+ struct asd_sas_event *ev = to_asd_sas_event(work);
+ struct asd_sas_phy *phy = ev->phy;
+
+- clear_bit(PORTE_LINK_RESET_ERR, &phy->port_events_pending);
+-
+ sas_deform_port(phy, 1);
+ }
+
+@@ -298,8 +292,6 @@ void sas_porte_timer_event(struct work_struct *work)
+ struct asd_sas_event *ev = to_asd_sas_event(work);
+ struct asd_sas_phy *phy = ev->phy;
+
+- clear_bit(PORTE_TIMER_EVENT, &phy->port_events_pending);
+-
+ sas_deform_port(phy, 1);
+ }
+
+@@ -308,8 +300,6 @@ void sas_porte_hard_reset(struct work_struct *work)
+ struct asd_sas_event *ev = to_asd_sas_event(work);
+ struct asd_sas_phy *phy = ev->phy;
+
+- clear_bit(PORTE_HARD_RESET, &phy->port_events_pending);
+-
+ sas_deform_port(phy, 1);
+ }
+
+@@ -353,3 +343,11 @@ void sas_unregister_ports(struct sas_ha_struct *sas_ha)
+ sas_deform_port(sas_ha->sas_phy[i], 0);
+
+ }
++
++const work_func_t sas_port_event_fns[PORT_NUM_EVENTS] = {
++ [PORTE_BYTES_DMAED] = sas_porte_bytes_dmaed,
++ [PORTE_BROADCAST_RCVD] = sas_porte_broadcast_rcvd,
++ [PORTE_LINK_RESET_ERR] = sas_porte_link_reset_err,
++ [PORTE_TIMER_EVENT] = sas_porte_timer_event,
++ [PORTE_HARD_RESET] = sas_porte_hard_reset,
++};
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index cc54bdb5c712..d4129469a73c 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -6822,7 +6822,6 @@ static void megasas_detach_one(struct pci_dev *pdev)
+ u32 pd_seq_map_sz;
+
+ instance = pci_get_drvdata(pdev);
+- instance->unload = 1;
+ host = instance->host;
+ fusion = instance->ctrl_context;
+
+@@ -6833,6 +6832,7 @@ static void megasas_detach_one(struct pci_dev *pdev)
+ if (instance->fw_crash_state != UNAVAILABLE)
+ megasas_free_host_crash_buffer(instance);
+ scsi_remove_host(instance->host);
++ instance->unload = 1;
+
+ if (megasas_wait_for_adapter_operational(instance))
+ goto skip_firing_dcmds;
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fp.c b/drivers/scsi/megaraid/megaraid_sas_fp.c
+index bfad9bfc313f..f2ffde430ec1 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fp.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fp.c
+@@ -168,7 +168,7 @@ static struct MR_LD_SPAN *MR_LdSpanPtrGet(u32 ld, u32 span,
+ /*
+ * This function will Populate Driver Map using firmware raid map
+ */
+-void MR_PopulateDrvRaidMap(struct megasas_instance *instance)
++static int MR_PopulateDrvRaidMap(struct megasas_instance *instance)
+ {
+ struct fusion_context *fusion = instance->ctrl_context;
+ struct MR_FW_RAID_MAP_ALL *fw_map_old = NULL;
+@@ -259,7 +259,7 @@ void MR_PopulateDrvRaidMap(struct megasas_instance *instance)
+ ld_count = (u16)le16_to_cpu(fw_map_ext->ldCount);
+ if (ld_count > MAX_LOGICAL_DRIVES_EXT) {
+ dev_dbg(&instance->pdev->dev, "megaraid_sas: LD count exposed in RAID map in not valid\n");
+- return;
++ return 1;
+ }
+
+ pDrvRaidMap->ldCount = (__le16)cpu_to_le16(ld_count);
+@@ -285,6 +285,12 @@ void MR_PopulateDrvRaidMap(struct megasas_instance *instance)
+ fusion->ld_map[(instance->map_id & 1)];
+ pFwRaidMap = &fw_map_old->raidMap;
+ ld_count = (u16)le32_to_cpu(pFwRaidMap->ldCount);
++ if (ld_count > MAX_LOGICAL_DRIVES) {
++ dev_dbg(&instance->pdev->dev,
++ "LD count exposed in RAID map in not valid\n");
++ return 1;
++ }
++
+ pDrvRaidMap->totalSize = pFwRaidMap->totalSize;
+ pDrvRaidMap->ldCount = (__le16)cpu_to_le16(ld_count);
+ pDrvRaidMap->fpPdIoTimeoutSec = pFwRaidMap->fpPdIoTimeoutSec;
+@@ -300,6 +306,8 @@ void MR_PopulateDrvRaidMap(struct megasas_instance *instance)
+ sizeof(struct MR_DEV_HANDLE_INFO) *
+ MAX_RAIDMAP_PHYSICAL_DEVICES);
+ }
++
++ return 0;
+ }
+
+ /*
+@@ -317,8 +325,8 @@ u8 MR_ValidateMapInfo(struct megasas_instance *instance)
+ u16 ld;
+ u32 expected_size;
+
+-
+- MR_PopulateDrvRaidMap(instance);
++ if (MR_PopulateDrvRaidMap(instance))
++ return 0;
+
+ fusion = instance->ctrl_context;
+ drv_map = fusion->ld_drv_map[(instance->map_id & 1)];
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index 741b0a28c2e3..fecc19eb1d25 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -4761,19 +4761,6 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+ return 0;
+ }
+
+- /*
+- * Bug work around for firmware SATL handling. The loop
+- * is based on atomic operations and ensures consistency
+- * since we're lockless at this point
+- */
+- do {
+- if (test_bit(0, &sas_device_priv_data->ata_command_pending)) {
+- scmd->result = SAM_STAT_BUSY;
+- scmd->scsi_done(scmd);
+- return 0;
+- }
+- } while (_scsih_set_satl_pending(scmd, true));
+-
+ sas_target_priv_data = sas_device_priv_data->sas_target;
+
+ /* invalid device handle */
+@@ -4799,6 +4786,19 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+ sas_device_priv_data->block)
+ return SCSI_MLQUEUE_DEVICE_BUSY;
+
++ /*
++ * Bug work around for firmware SATL handling. The loop
++ * is based on atomic operations and ensures consistency
++ * since we're lockless at this point
++ */
++ do {
++ if (test_bit(0, &sas_device_priv_data->ata_command_pending)) {
++ scmd->result = SAM_STAT_BUSY;
++ scmd->scsi_done(scmd);
++ return 0;
++ }
++ } while (_scsih_set_satl_pending(scmd, true));
++
+ if (scmd->sc_data_direction == DMA_FROM_DEVICE)
+ mpi_control = MPI2_SCSIIO_CONTROL_READ;
+ else if (scmd->sc_data_direction == DMA_TO_DEVICE)
+@@ -4826,6 +4826,7 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+ if (!smid) {
+ pr_err(MPT3SAS_FMT "%s: failed obtaining a smid\n",
+ ioc->name, __func__);
++ _scsih_set_satl_pending(scmd, false);
+ goto out;
+ }
+ mpi_request = mpt3sas_base_get_msg_frame(ioc, smid);
+@@ -4857,6 +4858,7 @@ scsih_qcmd(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
+ pcie_device = sas_target_priv_data->pcie_dev;
+ if (ioc->build_sg_scmd(ioc, scmd, smid, pcie_device)) {
+ mpt3sas_base_free_smid(ioc, smid);
++ _scsih_set_satl_pending(scmd, false);
+ goto out;
+ }
+ } else
+diff --git a/drivers/spi/spi-sh-msiof.c b/drivers/spi/spi-sh-msiof.c
+index bf34e9b238af..e83e93dc0859 100644
+--- a/drivers/spi/spi-sh-msiof.c
++++ b/drivers/spi/spi-sh-msiof.c
+@@ -797,11 +797,21 @@ static int sh_msiof_dma_once(struct sh_msiof_spi_priv *p, const void *tx,
+ goto stop_dma;
+ }
+
+- /* wait for tx fifo to be emptied / rx fifo to be filled */
++ /* wait for tx/rx DMA completion */
+ ret = sh_msiof_wait_for_completion(p);
+ if (ret)
+ goto stop_reset;
+
++ if (!rx) {
++ reinit_completion(&p->done);
++ sh_msiof_write(p, IER, IER_TEOFE);
++
++ /* wait for tx fifo to be emptied */
++ ret = sh_msiof_wait_for_completion(p);
++ if (ret)
++ goto stop_reset;
++ }
++
+ /* clear status bits */
+ sh_msiof_reset_str(p);
+
+diff --git a/drivers/staging/lustre/lnet/libcfs/linux/linux-cpu.c b/drivers/staging/lustre/lnet/libcfs/linux/linux-cpu.c
+index 51823ce71773..a013f7eb208f 100644
+--- a/drivers/staging/lustre/lnet/libcfs/linux/linux-cpu.c
++++ b/drivers/staging/lustre/lnet/libcfs/linux/linux-cpu.c
+@@ -529,19 +529,20 @@ EXPORT_SYMBOL(cfs_cpt_spread_node);
+ int
+ cfs_cpt_current(struct cfs_cpt_table *cptab, int remap)
+ {
+- int cpu = smp_processor_id();
+- int cpt = cptab->ctb_cpu2cpt[cpu];
++ int cpu;
++ int cpt;
+
+- if (cpt < 0) {
+- if (!remap)
+- return cpt;
++ preempt_disable();
++ cpu = smp_processor_id();
++ cpt = cptab->ctb_cpu2cpt[cpu];
+
++ if (cpt < 0 && remap) {
+ /* don't return negative value for safety of upper layer,
+ * instead we shadow the unknown cpu to a valid partition ID
+ */
+ cpt = cpu % cptab->ctb_nparts;
+ }
+-
++ preempt_enable();
+ return cpt;
+ }
+ EXPORT_SYMBOL(cfs_cpt_current);
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index a415d87f22d2..3ab96d0f705e 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -805,6 +805,13 @@ tcmu_queue_cmd_ring(struct tcmu_cmd *tcmu_cmd)
+ int ret;
+ DEFINE_WAIT(__wait);
+
++ /*
++ * Don't leave commands partially setup because the unmap
++ * thread might need the blocks to make forward progress.
++ */
++ tcmu_cmd_free_data(tcmu_cmd, tcmu_cmd->dbi_cur);
++ tcmu_cmd_reset_dbi_cur(tcmu_cmd);
++
+ prepare_to_wait(&udev->wait_cmdr, &__wait, TASK_INTERRUPTIBLE);
+
+ pr_debug("sleeping for ring space\n");
+diff --git a/drivers/thermal/hisi_thermal.c b/drivers/thermal/hisi_thermal.c
+index 2d855a96cdd9..761d0559c268 100644
+--- a/drivers/thermal/hisi_thermal.c
++++ b/drivers/thermal/hisi_thermal.c
+@@ -527,7 +527,7 @@ static void hisi_thermal_toggle_sensor(struct hisi_thermal_sensor *sensor,
+ static int hisi_thermal_probe(struct platform_device *pdev)
+ {
+ struct hisi_thermal_data *data;
+- int const (*platform_probe)(struct hisi_thermal_data *);
++ int (*platform_probe)(struct hisi_thermal_data *);
+ struct device *dev = &pdev->dev;
+ int ret;
+
+diff --git a/drivers/thermal/int340x_thermal/int3400_thermal.c b/drivers/thermal/int340x_thermal/int3400_thermal.c
+index 8ee38f55c7f3..43b90fd577e4 100644
+--- a/drivers/thermal/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/int340x_thermal/int3400_thermal.c
+@@ -319,17 +319,21 @@ static int int3400_thermal_probe(struct platform_device *pdev)
+
+ result = sysfs_create_group(&pdev->dev.kobj, &uuid_attribute_group);
+ if (result)
+- goto free_zone;
++ goto free_rel_misc;
+
+ result = acpi_install_notify_handler(
+ priv->adev->handle, ACPI_DEVICE_NOTIFY, int3400_notify,
+ (void *)priv);
+ if (result)
+- goto free_zone;
++ goto free_sysfs;
+
+ return 0;
+
+-free_zone:
++free_sysfs:
++ sysfs_remove_group(&pdev->dev.kobj, &uuid_attribute_group);
++free_rel_misc:
++ if (!priv->rel_misc_dev_res)
++ acpi_thermal_rel_misc_device_remove(priv->adev->handle);
+ thermal_zone_device_unregister(priv->thermal);
+ free_art_trt:
+ kfree(priv->trts);
+diff --git a/drivers/thermal/power_allocator.c b/drivers/thermal/power_allocator.c
+index b4d3116cfdaf..3055f9a12a17 100644
+--- a/drivers/thermal/power_allocator.c
++++ b/drivers/thermal/power_allocator.c
+@@ -523,6 +523,7 @@ static void allow_maximum_power(struct thermal_zone_device *tz)
+ struct thermal_instance *instance;
+ struct power_allocator_params *params = tz->governor_data;
+
++ mutex_lock(&tz->lock);
+ list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+ if ((instance->trip != params->trip_max_desired_temperature) ||
+ (!cdev_is_power_actor(instance->cdev)))
+@@ -534,6 +535,7 @@ static void allow_maximum_power(struct thermal_zone_device *tz)
+ mutex_unlock(&instance->cdev->lock);
+ thermal_cdev_update(instance->cdev);
+ }
++ mutex_unlock(&tz->lock);
+ }
+
+ /**
+diff --git a/drivers/tty/n_gsm.c b/drivers/tty/n_gsm.c
+index 5131bdc9e765..db33fc50bfaa 100644
+--- a/drivers/tty/n_gsm.c
++++ b/drivers/tty/n_gsm.c
+@@ -1451,6 +1451,10 @@ static void gsm_dlci_open(struct gsm_dlci *dlci)
+ * in which case an opening port goes back to closed and a closing port
+ * is simply put into closed state (any further frames from the other
+ * end will get a DM response)
++ *
++ * Some control dlci can stay in ADM mode with other dlci working just
++ * fine. In that case we can just keep the control dlci open after the
++ * DLCI_OPENING retries time out.
+ */
+
+ static void gsm_dlci_t1(struct timer_list *t)
+@@ -1464,8 +1468,15 @@ static void gsm_dlci_t1(struct timer_list *t)
+ if (dlci->retries) {
+ gsm_command(dlci->gsm, dlci->addr, SABM|PF);
+ mod_timer(&dlci->t1, jiffies + gsm->t1 * HZ / 100);
+- } else
++ } else if (!dlci->addr && gsm->control == (DM | PF)) {
++ if (debug & 8)
++ pr_info("DLCI %d opening in ADM mode.\n",
++ dlci->addr);
++ gsm_dlci_open(dlci);
++ } else {
+ gsm_dlci_close(dlci);
++ }
++
+ break;
+ case DLCI_CLOSING:
+ dlci->retries--;
+@@ -1483,8 +1494,8 @@ static void gsm_dlci_t1(struct timer_list *t)
+ * @dlci: DLCI to open
+ *
+ * Commence opening a DLCI from the Linux side. We issue SABM messages
+- * to the modem which should then reply with a UA, at which point we
+- * will move into open state. Opening is done asynchronously with retry
++ * to the modem which should then reply with a UA or ADM, at which point
++ * we will move into open state. Opening is done asynchronously with retry
+ * running off timers and the responses.
+ */
+
+diff --git a/drivers/tty/serdev/core.c b/drivers/tty/serdev/core.c
+index 1bef39828ca7..571ce1f69d8d 100644
+--- a/drivers/tty/serdev/core.c
++++ b/drivers/tty/serdev/core.c
+@@ -54,6 +54,11 @@ static int serdev_uevent(struct device *dev, struct kobj_uevent_env *env)
+ int rc;
+
+ /* TODO: platform modalias */
++
++ /* ACPI enumerated controllers do not have a modalias */
++ if (!dev->of_node && dev->type == &serdev_ctrl_type)
++ return 0;
++
+ rc = acpi_device_uevent_modalias(dev, env);
+ if (rc != -ENODEV)
+ return rc;
+diff --git a/drivers/uio/uio_hv_generic.c b/drivers/uio/uio_hv_generic.c
+index 48d5327d38d4..fe5cdda80b2c 100644
+--- a/drivers/uio/uio_hv_generic.c
++++ b/drivers/uio/uio_hv_generic.c
+@@ -124,6 +124,13 @@ hv_uio_probe(struct hv_device *dev,
+ if (ret)
+ goto fail;
+
++ /* Communicating with host has to be via shared memory not hypercall */
++ if (!dev->channel->offermsg.monitor_allocated) {
++ dev_err(&dev->device, "vmbus channel requires hypercall\n");
++ ret = -ENOTSUPP;
++ goto fail_close;
++ }
++
+ dev->channel->inbound.ring_buffer->interrupt_mask = 1;
+ set_channel_read_mode(dev->channel, HV_CALL_DIRECT);
+
+diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
+index 5636c7ca8eba..0020ae906bf9 100644
+--- a/drivers/vhost/net.c
++++ b/drivers/vhost/net.c
+@@ -618,7 +618,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk)
+
+ if (!len && vq->busyloop_timeout) {
+ /* Both tx vq and rx socket were polled here */
+- mutex_lock(&vq->mutex);
++ mutex_lock_nested(&vq->mutex, 1);
+ vhost_disable_notify(&net->dev, vq);
+
+ preempt_disable();
+@@ -751,7 +751,7 @@ static void handle_rx(struct vhost_net *net)
+ struct iov_iter fixup;
+ __virtio16 num_buffers;
+
+- mutex_lock(&vq->mutex);
++ mutex_lock_nested(&vq->mutex, 0);
+ sock = vq->private_data;
+ if (!sock)
+ goto out;
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 5727b186b3ca..a5622a8364cb 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -213,8 +213,7 @@ int vhost_poll_start(struct vhost_poll *poll, struct file *file)
+ if (mask)
+ vhost_poll_wakeup(&poll->wait, 0, 0, (void *)mask);
+ if (mask & POLLERR) {
+- if (poll->wqh)
+- remove_wait_queue(poll->wqh, &poll->wait);
++ vhost_poll_stop(poll);
+ ret = -EINVAL;
+ }
+
+@@ -1257,14 +1256,12 @@ static int vq_log_access_ok(struct vhost_virtqueue *vq,
+ /* Caller should have vq mutex and device mutex */
+ int vhost_vq_access_ok(struct vhost_virtqueue *vq)
+ {
+- if (vq->iotlb) {
+- /* When device IOTLB was used, the access validation
+- * will be validated during prefetching.
+- */
+- return 1;
+- }
+- return vq_access_ok(vq, vq->num, vq->desc, vq->avail, vq->used) &&
+- vq_log_access_ok(vq, vq->log_base);
++ int ret = vq_log_access_ok(vq, vq->log_base);
++
++ if (ret || vq->iotlb)
++ return ret;
++
++ return vq_access_ok(vq, vq->num, vq->desc, vq->avail, vq->used);
+ }
+ EXPORT_SYMBOL_GPL(vhost_vq_access_ok);
+
+diff --git a/drivers/video/backlight/corgi_lcd.c b/drivers/video/backlight/corgi_lcd.c
+index d7c239ea3d09..f5574060f9c8 100644
+--- a/drivers/video/backlight/corgi_lcd.c
++++ b/drivers/video/backlight/corgi_lcd.c
+@@ -177,7 +177,7 @@ static int corgi_ssp_lcdtg_send(struct corgi_lcd *lcd, int adrs, uint8_t data)
+ struct spi_message msg;
+ struct spi_transfer xfer = {
+ .len = 1,
+- .cs_change = 1,
++ .cs_change = 0,
+ .tx_buf = lcd->buf,
+ };
+
+diff --git a/drivers/video/backlight/tdo24m.c b/drivers/video/backlight/tdo24m.c
+index eab1f842f9c0..e4bd63e9db6b 100644
+--- a/drivers/video/backlight/tdo24m.c
++++ b/drivers/video/backlight/tdo24m.c
+@@ -369,7 +369,7 @@ static int tdo24m_probe(struct spi_device *spi)
+
+ spi_message_init(m);
+
+- x->cs_change = 1;
++ x->cs_change = 0;
+ x->tx_buf = &lcd->buf[0];
+ spi_message_add_tail(x, m);
+
+diff --git a/drivers/video/backlight/tosa_lcd.c b/drivers/video/backlight/tosa_lcd.c
+index 6a41ea92737a..4dc5ee8debeb 100644
+--- a/drivers/video/backlight/tosa_lcd.c
++++ b/drivers/video/backlight/tosa_lcd.c
+@@ -49,7 +49,7 @@ static int tosa_tg_send(struct spi_device *spi, int adrs, uint8_t data)
+ struct spi_message msg;
+ struct spi_transfer xfer = {
+ .len = 1,
+- .cs_change = 1,
++ .cs_change = 0,
+ .tx_buf = buf,
+ };
+
+diff --git a/drivers/video/fbdev/vfb.c b/drivers/video/fbdev/vfb.c
+index da653a080394..54127905bfe7 100644
+--- a/drivers/video/fbdev/vfb.c
++++ b/drivers/video/fbdev/vfb.c
+@@ -239,8 +239,23 @@ static int vfb_check_var(struct fb_var_screeninfo *var,
+ */
+ static int vfb_set_par(struct fb_info *info)
+ {
++ switch (info->var.bits_per_pixel) {
++ case 1:
++ info->fix.visual = FB_VISUAL_MONO01;
++ break;
++ case 8:
++ info->fix.visual = FB_VISUAL_PSEUDOCOLOR;
++ break;
++ case 16:
++ case 24:
++ case 32:
++ info->fix.visual = FB_VISUAL_TRUECOLOR;
++ break;
++ }
++
+ info->fix.line_length = get_line_length(info->var.xres_virtual,
+ info->var.bits_per_pixel);
++
+ return 0;
+ }
+
+@@ -450,6 +465,8 @@ static int vfb_probe(struct platform_device *dev)
+ goto err2;
+ platform_set_drvdata(dev, info);
+
++ vfb_set_par(info);
++
+ fb_info(info, "Virtual frame buffer device, using %ldK of video memory\n",
+ videomemorysize >> 10);
+ return 0;
+diff --git a/drivers/watchdog/dw_wdt.c b/drivers/watchdog/dw_wdt.c
+index 36be987ff9ef..c2f4ff516230 100644
+--- a/drivers/watchdog/dw_wdt.c
++++ b/drivers/watchdog/dw_wdt.c
+@@ -127,14 +127,27 @@ static int dw_wdt_start(struct watchdog_device *wdd)
+
+ dw_wdt_set_timeout(wdd, wdd->timeout);
+
+- set_bit(WDOG_HW_RUNNING, &wdd->status);
+-
+ writel(WDOG_CONTROL_REG_WDT_EN_MASK,
+ dw_wdt->regs + WDOG_CONTROL_REG_OFFSET);
+
+ return 0;
+ }
+
++static int dw_wdt_stop(struct watchdog_device *wdd)
++{
++ struct dw_wdt *dw_wdt = to_dw_wdt(wdd);
++
++ if (!dw_wdt->rst) {
++ set_bit(WDOG_HW_RUNNING, &wdd->status);
++ return 0;
++ }
++
++ reset_control_assert(dw_wdt->rst);
++ reset_control_deassert(dw_wdt->rst);
++
++ return 0;
++}
++
+ static int dw_wdt_restart(struct watchdog_device *wdd,
+ unsigned long action, void *data)
+ {
+@@ -173,6 +186,7 @@ static const struct watchdog_info dw_wdt_ident = {
+ static const struct watchdog_ops dw_wdt_ops = {
+ .owner = THIS_MODULE,
+ .start = dw_wdt_start,
++ .stop = dw_wdt_stop,
+ .ping = dw_wdt_ping,
+ .set_timeout = dw_wdt_set_timeout,
+ .get_timeleft = dw_wdt_get_timeleft,
+diff --git a/fs/dcache.c b/fs/dcache.c
+index eb2c297a87d0..485d9d158429 100644
+--- a/fs/dcache.c
++++ b/fs/dcache.c
+@@ -468,9 +468,11 @@ static void dentry_lru_add(struct dentry *dentry)
+ * d_drop() is used mainly for stuff that wants to invalidate a dentry for some
+ * reason (NFS timeouts or autofs deletes).
+ *
+- * __d_drop requires dentry->d_lock.
++ * __d_drop requires dentry->d_lock
++ * ___d_drop doesn't mark dentry as "unhashed"
++ * (dentry->d_hash.pprev will be LIST_POISON2, not NULL).
+ */
+-void __d_drop(struct dentry *dentry)
++static void ___d_drop(struct dentry *dentry)
+ {
+ if (!d_unhashed(dentry)) {
+ struct hlist_bl_head *b;
+@@ -486,12 +488,17 @@ void __d_drop(struct dentry *dentry)
+
+ hlist_bl_lock(b);
+ __hlist_bl_del(&dentry->d_hash);
+- dentry->d_hash.pprev = NULL;
+ hlist_bl_unlock(b);
+ /* After this call, in-progress rcu-walk path lookup will fail. */
+ write_seqcount_invalidate(&dentry->d_seq);
+ }
+ }
++
++void __d_drop(struct dentry *dentry)
++{
++ ___d_drop(dentry);
++ dentry->d_hash.pprev = NULL;
++}
+ EXPORT_SYMBOL(__d_drop);
+
+ void d_drop(struct dentry *dentry)
+@@ -2386,7 +2393,7 @@ EXPORT_SYMBOL(d_delete);
+ static void __d_rehash(struct dentry *entry)
+ {
+ struct hlist_bl_head *b = d_hash(entry->d_name.hash);
+- BUG_ON(!d_unhashed(entry));
++
+ hlist_bl_lock(b);
+ hlist_bl_add_head_rcu(&entry->d_hash, b);
+ hlist_bl_unlock(b);
+@@ -2821,9 +2828,9 @@ static void __d_move(struct dentry *dentry, struct dentry *target,
+ write_seqcount_begin_nested(&target->d_seq, DENTRY_D_LOCK_NESTED);
+
+ /* unhash both */
+- /* __d_drop does write_seqcount_barrier, but they're OK to nest. */
+- __d_drop(dentry);
+- __d_drop(target);
++ /* ___d_drop does write_seqcount_barrier, but they're OK to nest. */
++ ___d_drop(dentry);
++ ___d_drop(target);
+
+ /* Switch the names.. */
+ if (exchange)
+@@ -2835,6 +2842,8 @@ static void __d_move(struct dentry *dentry, struct dentry *target,
+ __d_rehash(dentry);
+ if (exchange)
+ __d_rehash(target);
++ else
++ target->d_hash.pprev = NULL;
+
+ /* ... and switch them in the tree */
+ if (IS_ROOT(dentry)) {
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 7874bbd7311d..84a011a522a1 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1186,14 +1186,14 @@ static int f2fs_collapse_range(struct inode *inode, loff_t offset, loff_t len)
+ pg_start = offset >> PAGE_SHIFT;
+ pg_end = (offset + len) >> PAGE_SHIFT;
+
++ /* avoid gc operation during block exchange */
++ down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
++
+ down_write(&F2FS_I(inode)->i_mmap_sem);
+ /* write out all dirty pages from offset */
+ ret = filemap_write_and_wait_range(inode->i_mapping, offset, LLONG_MAX);
+ if (ret)
+- goto out;
+-
+- /* avoid gc operation during block exchange */
+- down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
++ goto out_unlock;
+
+ truncate_pagecache(inode, offset);
+
+@@ -1212,9 +1212,8 @@ static int f2fs_collapse_range(struct inode *inode, loff_t offset, loff_t len)
+ if (!ret)
+ f2fs_i_size_write(inode, new_size);
+ out_unlock:
+- up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+-out:
+ up_write(&F2FS_I(inode)->i_mmap_sem);
++ up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+ return ret;
+ }
+
+@@ -1385,6 +1384,9 @@ static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len)
+
+ f2fs_balance_fs(sbi, true);
+
++ /* avoid gc operation during block exchange */
++ down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
++
+ down_write(&F2FS_I(inode)->i_mmap_sem);
+ ret = truncate_blocks(inode, i_size_read(inode), true);
+ if (ret)
+@@ -1395,9 +1397,6 @@ static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len)
+ if (ret)
+ goto out;
+
+- /* avoid gc operation during block exchange */
+- down_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+-
+ truncate_pagecache(inode, offset);
+
+ pg_start = offset >> PAGE_SHIFT;
+@@ -1425,10 +1424,9 @@ static int f2fs_insert_range(struct inode *inode, loff_t offset, loff_t len)
+
+ if (!ret)
+ f2fs_i_size_write(inode, new_size);
+-
+- up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+ out:
+ up_write(&F2FS_I(inode)->i_mmap_sem);
++ up_write(&F2FS_I(inode)->dio_rwsem[WRITE]);
+ return ret;
+ }
+
+diff --git a/include/linux/clk-provider.h b/include/linux/clk-provider.h
+index 7c925e6211f1..48171b349b88 100644
+--- a/include/linux/clk-provider.h
++++ b/include/linux/clk-provider.h
+@@ -412,7 +412,7 @@ extern const struct clk_ops clk_divider_ro_ops;
+
+ unsigned long divider_recalc_rate(struct clk_hw *hw, unsigned long parent_rate,
+ unsigned int val, const struct clk_div_table *table,
+- unsigned long flags);
++ unsigned long flags, unsigned long width);
+ long divider_round_rate_parent(struct clk_hw *hw, struct clk_hw *parent,
+ unsigned long rate, unsigned long *prate,
+ const struct clk_div_table *table,
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index c8198ed8b180..907adbf99a9c 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -826,7 +826,7 @@ struct mlx5_core_dev {
+ struct mlx5e_resources mlx5e_res;
+ struct {
+ struct mlx5_rsvd_gids reserved_gids;
+- atomic_t roce_en;
++ u32 roce_en;
+ } roce;
+ #ifdef CONFIG_MLX5_FPGA
+ struct mlx5_fpga_device *fpga;
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index ef789e1d679e..ddb1fc7bd938 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -4402,8 +4402,8 @@ do { \
+ WARN(1, "netdevice: %s%s\n" format, netdev_name(dev), \
+ netdev_reg_state(dev), ##args)
+
+-#define netdev_WARN_ONCE(dev, condition, format, arg...) \
+- WARN_ONCE(1, "netdevice: %s%s\n" format, netdev_name(dev) \
++#define netdev_WARN_ONCE(dev, format, args...) \
++ WARN_ONCE(1, "netdevice: %s%s\n" format, netdev_name(dev), \
+ netdev_reg_state(dev), ##args)
+
+ /* netif printk helpers, similar to netdev_printk */
+diff --git a/include/scsi/libsas.h b/include/scsi/libsas.h
+index 6df6fe0c2198..61c84d536a7e 100644
+--- a/include/scsi/libsas.h
++++ b/include/scsi/libsas.h
+@@ -292,6 +292,7 @@ struct asd_sas_port {
+ struct asd_sas_event {
+ struct sas_work work;
+ struct asd_sas_phy *phy;
++ int event;
+ };
+
+ static inline struct asd_sas_event *to_asd_sas_event(struct work_struct *work)
+@@ -301,17 +302,21 @@ static inline struct asd_sas_event *to_asd_sas_event(struct work_struct *work)
+ return ev;
+ }
+
++static inline void INIT_SAS_EVENT(struct asd_sas_event *ev,
++ void (*fn)(struct work_struct *),
++ struct asd_sas_phy *phy, int event)
++{
++ INIT_SAS_WORK(&ev->work, fn);
++ ev->phy = phy;
++ ev->event = event;
++}
++
++
+ /* The phy pretty much is controlled by the LLDD.
+ * The class only reads those fields.
+ */
+ struct asd_sas_phy {
+ /* private: */
+- struct asd_sas_event port_events[PORT_NUM_EVENTS];
+- struct asd_sas_event phy_events[PHY_NUM_EVENTS];
+-
+- unsigned long port_events_pending;
+- unsigned long phy_events_pending;
+-
+ int error;
+ int suspended;
+
+diff --git a/include/uapi/rdma/mlx5-abi.h b/include/uapi/rdma/mlx5-abi.h
+index a33e0517d3fd..062d14f07b61 100644
+--- a/include/uapi/rdma/mlx5-abi.h
++++ b/include/uapi/rdma/mlx5-abi.h
+@@ -307,7 +307,7 @@ enum mlx5_rx_hash_fields {
+ MLX5_RX_HASH_SRC_PORT_UDP = 1 << 6,
+ MLX5_RX_HASH_DST_PORT_UDP = 1 << 7,
+ /* Save bits for future fields */
+- MLX5_RX_HASH_INNER = 1 << 31
++ MLX5_RX_HASH_INNER = (1UL << 31),
+ };
+
+ struct mlx5_ib_create_qp_rss {
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index f7e83f6d2e64..236452ebbd9e 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -29,6 +29,7 @@
+ #include <linux/net_tstamp.h>
+ #include <linux/etherdevice.h>
+ #include <linux/ethtool.h>
++#include <linux/phy.h>
+ #include <net/arp.h>
+ #include <net/switchdev.h>
+
+@@ -665,8 +666,11 @@ static int vlan_ethtool_get_ts_info(struct net_device *dev,
+ {
+ const struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
+ const struct ethtool_ops *ops = vlan->real_dev->ethtool_ops;
++ struct phy_device *phydev = vlan->real_dev->phydev;
+
+- if (ops->get_ts_info) {
++ if (phydev && phydev->drv && phydev->drv->ts_info) {
++ return phydev->drv->ts_info(phydev, info);
++ } else if (ops->get_ts_info) {
+ return ops->get_ts_info(vlan->real_dev, info);
+ } else {
+ info->so_timestamping = SOF_TIMESTAMPING_RX_SOFTWARE |
+diff --git a/net/core/dev.c b/net/core/dev.c
+index f3fbd10a0632..af4d670f5619 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -1027,7 +1027,7 @@ bool dev_valid_name(const char *name)
+ {
+ if (*name == '\0')
+ return false;
+- if (strlen(name) >= IFNAMSIZ)
++ if (strnlen(name, IFNAMSIZ) == IFNAMSIZ)
+ return false;
+ if (!strcmp(name, ".") || !strcmp(name, ".."))
+ return false;
+@@ -2719,7 +2719,7 @@ __be16 skb_network_protocol(struct sk_buff *skb, int *depth)
+ if (unlikely(!pskb_may_pull(skb, sizeof(struct ethhdr))))
+ return 0;
+
+- eth = (struct ethhdr *)skb_mac_header(skb);
++ eth = (struct ethhdr *)skb->data;
+ type = eth->h_proto;
+ }
+
+diff --git a/net/dsa/dsa_priv.h b/net/dsa/dsa_priv.h
+index 7d036696e8c4..65292b9fa01a 100644
+--- a/net/dsa/dsa_priv.h
++++ b/net/dsa/dsa_priv.h
+@@ -117,6 +117,7 @@ static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
+ struct dsa_port *cpu_dp = dev->dsa_ptr;
+ struct dsa_switch_tree *dst = cpu_dp->dst;
+ struct dsa_switch *ds;
++ struct dsa_port *slave_port;
+
+ if (device < 0 || device >= DSA_MAX_SWITCHES)
+ return NULL;
+@@ -128,7 +129,12 @@ static inline struct net_device *dsa_master_find_slave(struct net_device *dev,
+ if (port < 0 || port >= ds->num_ports)
+ return NULL;
+
+- return ds->ports[port].slave;
++ slave_port = &ds->ports[port];
++
++ if (unlikely(slave_port->type != DSA_PORT_TYPE_USER))
++ return NULL;
++
++ return slave_port->slave;
+ }
+
+ /* port.c */
+diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
+index 6c231b43974d..e981e05594c5 100644
+--- a/net/ipv4/arp.c
++++ b/net/ipv4/arp.c
+@@ -437,7 +437,7 @@ static int arp_filter(__be32 sip, __be32 tip, struct net_device *dev)
+ /*unsigned long now; */
+ struct net *net = dev_net(dev);
+
+- rt = ip_route_output(net, sip, tip, 0, 0);
++ rt = ip_route_output(net, sip, tip, 0, l3mdev_master_ifindex_rcu(dev));
+ if (IS_ERR(rt))
+ return 1;
+ if (rt->dst.dev != dev) {
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index 7d36a950d961..9d512922243f 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -1746,18 +1746,20 @@ void fib_select_multipath(struct fib_result *res, int hash)
+ bool first = false;
+
+ for_nexthops(fi) {
++ if (net->ipv4.sysctl_fib_multipath_use_neigh) {
++ if (!fib_good_nh(nh))
++ continue;
++ if (!first) {
++ res->nh_sel = nhsel;
++ first = true;
++ }
++ }
++
+ if (hash > atomic_read(&nh->nh_upper_bound))
+ continue;
+
+- if (!net->ipv4.sysctl_fib_multipath_use_neigh ||
+- fib_good_nh(nh)) {
+- res->nh_sel = nhsel;
+- return;
+- }
+- if (!first) {
+- res->nh_sel = nhsel;
+- first = true;
+- }
++ res->nh_sel = nhsel;
++ return;
+ } endfor_nexthops(fi);
+ }
+ #endif
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 6d21068f9b55..a70a1d6db157 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -253,13 +253,14 @@ static struct net_device *__ip_tunnel_create(struct net *net,
+ struct net_device *dev;
+ char name[IFNAMSIZ];
+
+- if (parms->name[0])
++ err = -E2BIG;
++ if (parms->name[0]) {
++ if (!dev_valid_name(parms->name))
++ goto failed;
+ strlcpy(name, parms->name, IFNAMSIZ);
+- else {
+- if (strlen(ops->kind) > (IFNAMSIZ - 3)) {
+- err = -E2BIG;
++ } else {
++ if (strlen(ops->kind) > (IFNAMSIZ - 3))
+ goto failed;
+- }
+ strlcpy(name, ops->kind, IFNAMSIZ);
+ strncat(name, "%d", 2);
+ }
+diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c
+index 873549228ccb..9f9f38dd6775 100644
+--- a/net/ipv6/ip6_gre.c
++++ b/net/ipv6/ip6_gre.c
+@@ -319,11 +319,13 @@ static struct ip6_tnl *ip6gre_tunnel_locate(struct net *net,
+ if (t || !create)
+ return t;
+
+- if (parms->name[0])
++ if (parms->name[0]) {
++ if (!dev_valid_name(parms->name))
++ return NULL;
+ strlcpy(name, parms->name, IFNAMSIZ);
+- else
++ } else {
+ strcpy(name, "ip6gre%d");
+-
++ }
+ dev = alloc_netdev(sizeof(*t), name, NET_NAME_UNKNOWN,
+ ip6gre_tunnel_setup);
+ if (!dev)
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index 3763dc01e374..ffbb81609016 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -138,6 +138,14 @@ static int ip6_finish_output(struct net *net, struct sock *sk, struct sk_buff *s
+ return ret;
+ }
+
++#if defined(CONFIG_NETFILTER) && defined(CONFIG_XFRM)
++ /* Policy lookup after SNAT yielded a new policy */
++ if (skb_dst(skb)->xfrm) {
++ IPCB(skb)->flags |= IPSKB_REROUTED;
++ return dst_output(net, sk, skb);
++ }
++#endif
++
+ if ((skb->len > ip6_skb_dst_mtu(skb) && !skb_is_gso(skb)) ||
+ dst_allfrag(skb_dst(skb)) ||
+ (IP6CB(skb)->frag_max_size && skb->len > IP6CB(skb)->frag_max_size))
+@@ -367,6 +375,11 @@ static int ip6_forward_proxy_check(struct sk_buff *skb)
+ static inline int ip6_forward_finish(struct net *net, struct sock *sk,
+ struct sk_buff *skb)
+ {
++ struct dst_entry *dst = skb_dst(skb);
++
++ __IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTFORWDATAGRAMS);
++ __IP6_ADD_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTOCTETS, skb->len);
++
+ return dst_output(net, sk, skb);
+ }
+
+@@ -560,8 +573,6 @@ int ip6_forward(struct sk_buff *skb)
+
+ hdr->hop_limit--;
+
+- __IP6_INC_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTFORWDATAGRAMS);
+- __IP6_ADD_STATS(net, ip6_dst_idev(dst), IPSTATS_MIB_OUTOCTETS, skb->len);
+ return NF_HOOK(NFPROTO_IPV6, NF_INET_FORWARD,
+ net, NULL, skb, skb->dev, dst->dev,
+ ip6_forward_finish);
+@@ -1237,7 +1248,7 @@ static int __ip6_append_data(struct sock *sk,
+ const struct sockcm_cookie *sockc)
+ {
+ struct sk_buff *skb, *skb_prev = NULL;
+- unsigned int maxfraglen, fragheaderlen, mtu, orig_mtu;
++ unsigned int maxfraglen, fragheaderlen, mtu, orig_mtu, pmtu;
+ int exthdrlen = 0;
+ int dst_exthdrlen = 0;
+ int hh_len;
+@@ -1273,6 +1284,12 @@ static int __ip6_append_data(struct sock *sk,
+ sizeof(struct frag_hdr) : 0) +
+ rt->rt6i_nfheader_len;
+
++ /* as per RFC 7112 section 5, the entire IPv6 Header Chain must fit
++ * the first fragment
++ */
++ if (headersize + transhdrlen > mtu)
++ goto emsgsize;
++
+ if (cork->length + length > mtu - headersize && ipc6->dontfrag &&
+ (sk->sk_protocol == IPPROTO_UDP ||
+ sk->sk_protocol == IPPROTO_RAW)) {
+@@ -1288,9 +1305,8 @@ static int __ip6_append_data(struct sock *sk,
+
+ if (cork->length + length > maxnonfragsize - headersize) {
+ emsgsize:
+- ipv6_local_error(sk, EMSGSIZE, fl6,
+- mtu - headersize +
+- sizeof(struct ipv6hdr));
++ pmtu = max_t(int, mtu - headersize + sizeof(struct ipv6hdr), 0);
++ ipv6_local_error(sk, EMSGSIZE, fl6, pmtu);
+ return -EMSGSIZE;
+ }
+
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index 1ee5584c3555..38e0952e2396 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -297,13 +297,16 @@ static struct ip6_tnl *ip6_tnl_create(struct net *net, struct __ip6_tnl_parm *p)
+ struct net_device *dev;
+ struct ip6_tnl *t;
+ char name[IFNAMSIZ];
+- int err = -ENOMEM;
++ int err = -E2BIG;
+
+- if (p->name[0])
++ if (p->name[0]) {
++ if (!dev_valid_name(p->name))
++ goto failed;
+ strlcpy(name, p->name, IFNAMSIZ);
+- else
++ } else {
+ sprintf(name, "ip6tnl%%d");
+-
++ }
++ err = -ENOMEM;
+ dev = alloc_netdev(sizeof(*t), name, NET_NAME_UNKNOWN,
+ ip6_tnl_dev_setup);
+ if (!dev)
+diff --git a/net/ipv6/ip6_vti.c b/net/ipv6/ip6_vti.c
+index 8c184f84f353..15c51686e076 100644
+--- a/net/ipv6/ip6_vti.c
++++ b/net/ipv6/ip6_vti.c
+@@ -212,10 +212,13 @@ static struct ip6_tnl *vti6_tnl_create(struct net *net, struct __ip6_tnl_parm *p
+ char name[IFNAMSIZ];
+ int err;
+
+- if (p->name[0])
++ if (p->name[0]) {
++ if (!dev_valid_name(p->name))
++ goto failed;
+ strlcpy(name, p->name, IFNAMSIZ);
+- else
++ } else {
+ sprintf(name, "ip6_vti%%d");
++ }
+
+ dev = alloc_netdev(sizeof(*t), name, NET_NAME_UNKNOWN, vti6_dev_setup);
+ if (!dev)
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 1f0d94439c77..065518620dc2 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -922,6 +922,9 @@ static struct rt6_info *ip6_pol_route_lookup(struct net *net,
+ struct rt6_info *rt, *rt_cache;
+ struct fib6_node *fn;
+
++ if (fl6->flowi6_flags & FLOWI_FLAG_SKIP_NH_OIF)
++ flags &= ~RT6_LOOKUP_F_IFACE;
++
+ rcu_read_lock();
+ fn = fib6_lookup(&table->tb6_root, &fl6->daddr, &fl6->saddr);
+ restart:
+diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c
+index 7a78dcfda68a..f343e6f0fc95 100644
+--- a/net/ipv6/seg6_iptunnel.c
++++ b/net/ipv6/seg6_iptunnel.c
+@@ -16,6 +16,7 @@
+ #include <linux/net.h>
+ #include <linux/module.h>
+ #include <net/ip.h>
++#include <net/ip_tunnels.h>
+ #include <net/lwtunnel.h>
+ #include <net/netevent.h>
+ #include <net/netns/generic.h>
+@@ -211,11 +212,6 @@ static int seg6_do_srh(struct sk_buff *skb)
+
+ tinfo = seg6_encap_lwtunnel(dst->lwtstate);
+
+- if (likely(!skb->encapsulation)) {
+- skb_reset_inner_headers(skb);
+- skb->encapsulation = 1;
+- }
+-
+ switch (tinfo->mode) {
+ case SEG6_IPTUN_MODE_INLINE:
+ if (skb->protocol != htons(ETH_P_IPV6))
+@@ -224,10 +220,12 @@ static int seg6_do_srh(struct sk_buff *skb)
+ err = seg6_do_srh_inline(skb, tinfo->srh);
+ if (err)
+ return err;
+-
+- skb_reset_inner_headers(skb);
+ break;
+ case SEG6_IPTUN_MODE_ENCAP:
++ err = iptunnel_handle_offloads(skb, SKB_GSO_IPXIP6);
++ if (err)
++ return err;
++
+ if (skb->protocol == htons(ETH_P_IPV6))
+ proto = IPPROTO_IPV6;
+ else if (skb->protocol == htons(ETH_P_IP))
+@@ -239,6 +237,8 @@ static int seg6_do_srh(struct sk_buff *skb)
+ if (err)
+ return err;
+
++ skb_set_inner_transport_header(skb, skb_transport_offset(skb));
++ skb_set_inner_protocol(skb, skb->protocol);
+ skb->protocol = htons(ETH_P_IPV6);
+ break;
+ case SEG6_IPTUN_MODE_L2ENCAP:
+@@ -262,8 +262,6 @@ static int seg6_do_srh(struct sk_buff *skb)
+ ipv6_hdr(skb)->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
+ skb_set_transport_header(skb, sizeof(struct ipv6hdr));
+
+- skb_set_inner_protocol(skb, skb->protocol);
+-
+ return 0;
+ }
+
+diff --git a/net/ipv6/sit.c b/net/ipv6/sit.c
+index 3a1775a62973..5a0725d7aabc 100644
+--- a/net/ipv6/sit.c
++++ b/net/ipv6/sit.c
+@@ -250,11 +250,13 @@ static struct ip_tunnel *ipip6_tunnel_locate(struct net *net,
+ if (!create)
+ goto failed;
+
+- if (parms->name[0])
++ if (parms->name[0]) {
++ if (!dev_valid_name(parms->name))
++ goto failed;
+ strlcpy(name, parms->name, IFNAMSIZ);
+- else
++ } else {
+ strcpy(name, "sit%d");
+-
++ }
+ dev = alloc_netdev(sizeof(*t), name, NET_NAME_UNKNOWN,
+ ipip6_tunnel_setup);
+ if (!dev)
+diff --git a/net/l2tp/l2tp_netlink.c b/net/l2tp/l2tp_netlink.c
+index a1f24fb2be98..7e9c50125556 100644
+--- a/net/l2tp/l2tp_netlink.c
++++ b/net/l2tp/l2tp_netlink.c
+@@ -761,6 +761,8 @@ static int l2tp_nl_session_send(struct sk_buff *skb, u32 portid, u32 seq, int fl
+
+ if ((session->ifname[0] &&
+ nla_put_string(skb, L2TP_ATTR_IFNAME, session->ifname)) ||
++ (session->offset &&
++ nla_put_u16(skb, L2TP_ATTR_OFFSET, session->offset)) ||
+ (session->cookie_len &&
+ nla_put(skb, L2TP_ATTR_COOKIE, session->cookie_len,
+ &session->cookie[0])) ||
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 84f757c5d91a..288640471c2f 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -2373,10 +2373,17 @@ static int ieee80211_set_tx_power(struct wiphy *wiphy,
+ struct ieee80211_sub_if_data *sdata;
+ enum nl80211_tx_power_setting txp_type = type;
+ bool update_txp_type = false;
++ bool has_monitor = false;
+
+ if (wdev) {
+ sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
+
++ if (sdata->vif.type == NL80211_IFTYPE_MONITOR) {
++ sdata = rtnl_dereference(local->monitor_sdata);
++ if (!sdata)
++ return -EOPNOTSUPP;
++ }
++
+ switch (type) {
+ case NL80211_TX_POWER_AUTOMATIC:
+ sdata->user_power_level = IEEE80211_UNSET_POWER_LEVEL;
+@@ -2415,15 +2422,34 @@ static int ieee80211_set_tx_power(struct wiphy *wiphy,
+
+ mutex_lock(&local->iflist_mtx);
+ list_for_each_entry(sdata, &local->interfaces, list) {
++ if (sdata->vif.type == NL80211_IFTYPE_MONITOR) {
++ has_monitor = true;
++ continue;
++ }
+ sdata->user_power_level = local->user_power_level;
+ if (txp_type != sdata->vif.bss_conf.txpower_type)
+ update_txp_type = true;
+ sdata->vif.bss_conf.txpower_type = txp_type;
+ }
+- list_for_each_entry(sdata, &local->interfaces, list)
++ list_for_each_entry(sdata, &local->interfaces, list) {
++ if (sdata->vif.type == NL80211_IFTYPE_MONITOR)
++ continue;
+ ieee80211_recalc_txpower(sdata, update_txp_type);
++ }
+ mutex_unlock(&local->iflist_mtx);
+
++ if (has_monitor) {
++ sdata = rtnl_dereference(local->monitor_sdata);
++ if (sdata) {
++ sdata->user_power_level = local->user_power_level;
++ if (txp_type != sdata->vif.bss_conf.txpower_type)
++ update_txp_type = true;
++ sdata->vif.bss_conf.txpower_type = txp_type;
++
++ ieee80211_recalc_txpower(sdata, update_txp_type);
++ }
++ }
++
+ return 0;
+ }
+
+diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h
+index c7f93fd9ca7a..4d82fe7d627c 100644
+--- a/net/mac80211/driver-ops.h
++++ b/net/mac80211/driver-ops.h
+@@ -165,7 +165,8 @@ static inline void drv_bss_info_changed(struct ieee80211_local *local,
+ if (WARN_ON_ONCE(sdata->vif.type == NL80211_IFTYPE_P2P_DEVICE ||
+ sdata->vif.type == NL80211_IFTYPE_NAN ||
+ (sdata->vif.type == NL80211_IFTYPE_MONITOR &&
+- !sdata->vif.mu_mimo_owner)))
++ !sdata->vif.mu_mimo_owner &&
++ !(changed & BSS_CHANGED_TXPOWER))))
+ return;
+
+ if (!check_sdata_in_driver(sdata))
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index ca9c0544c856..1245aa1d6e1c 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -1052,6 +1052,9 @@ static int netlink_connect(struct socket *sock, struct sockaddr *addr,
+ if (addr->sa_family != AF_NETLINK)
+ return -EINVAL;
+
++ if (alen < sizeof(struct sockaddr_nl))
++ return -EINVAL;
++
+ if ((nladdr->nl_groups || nladdr->nl_pid) &&
+ !netlink_allowed(sock, NL_CFG_F_NONROOT_SEND))
+ return -EPERM;
+diff --git a/net/rds/bind.c b/net/rds/bind.c
+index 75d43dc8e96b..5aa3a64aa4f0 100644
+--- a/net/rds/bind.c
++++ b/net/rds/bind.c
+@@ -114,6 +114,7 @@ static int rds_add_bound(struct rds_sock *rs, __be32 addr, __be16 *port)
+ rs, &addr, (int)ntohs(*port));
+ break;
+ } else {
++ rs->rs_bound_addr = 0;
+ rds_sock_put(rs);
+ ret = -ENOMEM;
+ break;
+diff --git a/net/sched/act_api.c b/net/sched/act_api.c
+index 4d33a50a8a6d..e3386f1f485c 100644
+--- a/net/sched/act_api.c
++++ b/net/sched/act_api.c
+@@ -135,8 +135,10 @@ static int tcf_dump_walker(struct tcf_idrinfo *idrinfo, struct sk_buff *skb,
+ continue;
+
+ nest = nla_nest_start(skb, n_i);
+- if (!nest)
++ if (!nest) {
++ index--;
+ goto nla_put_failure;
++ }
+ err = tcf_action_dump_1(skb, p, 0, 0);
+ if (err < 0) {
+ index--;
+diff --git a/net/sched/act_bpf.c b/net/sched/act_bpf.c
+index 5ef8ce8c83d4..502159bdded3 100644
+--- a/net/sched/act_bpf.c
++++ b/net/sched/act_bpf.c
+@@ -248,10 +248,14 @@ static int tcf_bpf_init_from_efd(struct nlattr **tb, struct tcf_bpf_cfg *cfg)
+
+ static void tcf_bpf_cfg_cleanup(const struct tcf_bpf_cfg *cfg)
+ {
+- if (cfg->is_ebpf)
+- bpf_prog_put(cfg->filter);
+- else
+- bpf_prog_destroy(cfg->filter);
++ struct bpf_prog *filter = cfg->filter;
++
++ if (filter) {
++ if (cfg->is_ebpf)
++ bpf_prog_put(filter);
++ else
++ bpf_prog_destroy(filter);
++ }
+
+ kfree(cfg->bpf_ops);
+ kfree(cfg->bpf_name);
+diff --git a/net/sched/act_sample.c b/net/sched/act_sample.c
+index 9438969290a6..2298d91c4c83 100644
+--- a/net/sched/act_sample.c
++++ b/net/sched/act_sample.c
+@@ -103,7 +103,8 @@ static void tcf_sample_cleanup(struct tc_action *a, int bind)
+
+ psample_group = rtnl_dereference(s->psample_group);
+ RCU_INIT_POINTER(s->psample_group, NULL);
+- psample_group_put(psample_group);
++ if (psample_group)
++ psample_group_put(psample_group);
+ }
+
+ static bool tcf_sample_dev_ok_push(struct net_device *dev)
+diff --git a/net/sched/act_skbmod.c b/net/sched/act_skbmod.c
+index b642ad3d39dd..6d10b3af479b 100644
+--- a/net/sched/act_skbmod.c
++++ b/net/sched/act_skbmod.c
+@@ -190,7 +190,8 @@ static void tcf_skbmod_cleanup(struct tc_action *a, int bind)
+ struct tcf_skbmod_params *p;
+
+ p = rcu_dereference_protected(d->skbmod_p, 1);
+- kfree_rcu(p, rcu);
++ if (p)
++ kfree_rcu(p, rcu);
+ }
+
+ static int tcf_skbmod_dump(struct sk_buff *skb, struct tc_action *a,
+diff --git a/net/sched/act_tunnel_key.c b/net/sched/act_tunnel_key.c
+index 22bf1a376b91..7cb63616805d 100644
+--- a/net/sched/act_tunnel_key.c
++++ b/net/sched/act_tunnel_key.c
+@@ -208,11 +208,12 @@ static void tunnel_key_release(struct tc_action *a, int bind)
+ struct tcf_tunnel_key_params *params;
+
+ params = rcu_dereference_protected(t->params, 1);
++ if (params) {
++ if (params->tcft_action == TCA_TUNNEL_KEY_ACT_SET)
++ dst_release(&params->tcft_enc_metadata->dst);
+
+- if (params->tcft_action == TCA_TUNNEL_KEY_ACT_SET)
+- dst_release(&params->tcft_enc_metadata->dst);
+-
+- kfree_rcu(params, rcu);
++ kfree_rcu(params, rcu);
++ }
+ }
+
+ static int tunnel_key_dump_addresses(struct sk_buff *skb,
+diff --git a/net/sched/act_vlan.c b/net/sched/act_vlan.c
+index 97f717a13ad5..788a8daf9230 100644
+--- a/net/sched/act_vlan.c
++++ b/net/sched/act_vlan.c
+@@ -225,7 +225,8 @@ static void tcf_vlan_cleanup(struct tc_action *a, int bind)
+ struct tcf_vlan_params *p;
+
+ p = rcu_dereference_protected(v->vlan_p, 1);
+- kfree_rcu(p, rcu);
++ if (p)
++ kfree_rcu(p, rcu);
+ }
+
+ static int tcf_vlan_dump(struct sk_buff *skb, struct tc_action *a,
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index 425cc341fd41..8d25f38cc1ad 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -478,6 +478,7 @@ static int u32_delete_key(struct tcf_proto *tp, struct tc_u_knode *key)
+ RCU_INIT_POINTER(*kp, key->next);
+
+ tcf_unbind_filter(tp, &key->res);
++ idr_remove(&ht->handle_idr, key->handle);
+ tcf_exts_get_net(&key->exts);
+ call_rcu(&key->rcu, u32_delete_key_freepf_rcu);
+ return 0;
+diff --git a/net/sched/sch_red.c b/net/sched/sch_red.c
+index f0747eb87dc4..cca57e93a810 100644
+--- a/net/sched/sch_red.c
++++ b/net/sched/sch_red.c
+@@ -157,7 +157,6 @@ static int red_offload(struct Qdisc *sch, bool enable)
+ .handle = sch->handle,
+ .parent = sch->parent,
+ };
+- int err;
+
+ if (!tc_can_offload(dev) || !dev->netdev_ops->ndo_setup_tc)
+ return -EOPNOTSUPP;
+@@ -172,14 +171,7 @@ static int red_offload(struct Qdisc *sch, bool enable)
+ opt.command = TC_RED_DESTROY;
+ }
+
+- err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_RED, &opt);
+-
+- if (!err && enable)
+- sch->flags |= TCQ_F_OFFLOADED;
+- else
+- sch->flags &= ~TCQ_F_OFFLOADED;
+-
+- return err;
++ return dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_RED, &opt);
+ }
+
+ static void red_destroy(struct Qdisc *sch)
+@@ -294,12 +286,22 @@ static int red_dump_offload_stats(struct Qdisc *sch, struct tc_red_qopt *opt)
+ .stats.qstats = &sch->qstats,
+ },
+ };
++ int err;
++
++ sch->flags &= ~TCQ_F_OFFLOADED;
+
+- if (!(sch->flags & TCQ_F_OFFLOADED))
++ if (!tc_can_offload(dev) || !dev->netdev_ops->ndo_setup_tc)
++ return 0;
++
++ err = dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_RED,
++ &hw_stats);
++ if (err == -EOPNOTSUPP)
+ return 0;
+
+- return dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_RED,
+- &hw_stats);
++ if (!err)
++ sch->flags |= TCQ_F_OFFLOADED;
++
++ return err;
+ }
+
+ static int red_dump(struct Qdisc *sch, struct sk_buff *skb)
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index e35d4f73d2df..f6d3d0c1e133 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -728,8 +728,10 @@ static int sctp_v6_addr_to_user(struct sctp_sock *sp, union sctp_addr *addr)
+ sctp_v6_map_v4(addr);
+ }
+
+- if (addr->sa.sa_family == AF_INET)
++ if (addr->sa.sa_family == AF_INET) {
++ memset(addr->v4.sin_zero, 0, sizeof(addr->v4.sin_zero));
+ return sizeof(struct sockaddr_in);
++ }
+ return sizeof(struct sockaddr_in6);
+ }
+
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 039fcb618c34..5e6ff7ac07d1 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -338,11 +338,14 @@ static struct sctp_af *sctp_sockaddr_af(struct sctp_sock *opt,
+ if (!opt->pf->af_supported(addr->sa.sa_family, opt))
+ return NULL;
+
+- /* V4 mapped address are really of AF_INET family */
+- if (addr->sa.sa_family == AF_INET6 &&
+- ipv6_addr_v4mapped(&addr->v6.sin6_addr) &&
+- !opt->pf->af_supported(AF_INET, opt))
+- return NULL;
++ if (addr->sa.sa_family == AF_INET6) {
++ if (len < SIN6_LEN_RFC2133)
++ return NULL;
++ /* V4 mapped address are really of AF_INET family */
++ if (ipv6_addr_v4mapped(&addr->v6.sin6_addr) &&
++ !opt->pf->af_supported(AF_INET, opt))
++ return NULL;
++ }
+
+ /* If we get this far, af is valid. */
+ af = sctp_get_af_specific(addr->sa.sa_family);
+diff --git a/net/strparser/strparser.c b/net/strparser/strparser.c
+index 1fdab5c4eda8..b9283ce5cd85 100644
+--- a/net/strparser/strparser.c
++++ b/net/strparser/strparser.c
+@@ -60,7 +60,7 @@ static void strp_abort_strp(struct strparser *strp, int err)
+ struct sock *sk = strp->sk;
+
+ /* Report an error on the lower socket */
+- sk->sk_err = err;
++ sk->sk_err = -err;
+ sk->sk_error_report(sk);
+ }
+ }
+@@ -458,7 +458,7 @@ static void strp_msg_timeout(struct work_struct *w)
+ /* Message assembly timed out */
+ STRP_STATS_INCR(strp->stats.msg_timeouts);
+ strp->cb.lock(strp);
+- strp->cb.abort_parser(strp, ETIMEDOUT);
++ strp->cb.abort_parser(strp, -ETIMEDOUT);
+ strp->cb.unlock(strp);
+ }
+
+diff --git a/sound/soc/intel/atom/sst/sst_stream.c b/sound/soc/intel/atom/sst/sst_stream.c
+index 65e257b17a7e..20f5066fefb9 100644
+--- a/sound/soc/intel/atom/sst/sst_stream.c
++++ b/sound/soc/intel/atom/sst/sst_stream.c
+@@ -220,7 +220,7 @@ int sst_send_byte_stream_mrfld(struct intel_sst_drv *sst_drv_ctx,
+ sst_free_block(sst_drv_ctx, block);
+ out:
+ test_and_clear_bit(pvt_id, &sst_drv_ctx->pvt_id);
+- return 0;
++ return ret;
+ }
+
+ /*
+diff --git a/sound/soc/intel/boards/cht_bsw_rt5645.c b/sound/soc/intel/boards/cht_bsw_rt5645.c
+index 18d129caa974..f898ee140cdc 100644
+--- a/sound/soc/intel/boards/cht_bsw_rt5645.c
++++ b/sound/soc/intel/boards/cht_bsw_rt5645.c
+@@ -118,6 +118,7 @@ static const struct snd_soc_dapm_widget cht_dapm_widgets[] = {
+ SND_SOC_DAPM_HP("Headphone", NULL),
+ SND_SOC_DAPM_MIC("Headset Mic", NULL),
+ SND_SOC_DAPM_MIC("Int Mic", NULL),
++ SND_SOC_DAPM_MIC("Int Analog Mic", NULL),
+ SND_SOC_DAPM_SPK("Ext Spk", NULL),
+ SND_SOC_DAPM_SUPPLY("Platform Clock", SND_SOC_NOPM, 0, 0,
+ platform_clock_control, SND_SOC_DAPM_PRE_PMU | SND_SOC_DAPM_POST_PMD),
+@@ -128,6 +129,8 @@ static const struct snd_soc_dapm_route cht_rt5645_audio_map[] = {
+ {"IN1N", NULL, "Headset Mic"},
+ {"DMIC L1", NULL, "Int Mic"},
+ {"DMIC R1", NULL, "Int Mic"},
++ {"IN2P", NULL, "Int Analog Mic"},
++ {"IN2N", NULL, "Int Analog Mic"},
+ {"Headphone", NULL, "HPOL"},
+ {"Headphone", NULL, "HPOR"},
+ {"Ext Spk", NULL, "SPOL"},
+@@ -135,6 +138,9 @@ static const struct snd_soc_dapm_route cht_rt5645_audio_map[] = {
+ {"Headphone", NULL, "Platform Clock"},
+ {"Headset Mic", NULL, "Platform Clock"},
+ {"Int Mic", NULL, "Platform Clock"},
++ {"Int Analog Mic", NULL, "Platform Clock"},
++ {"Int Analog Mic", NULL, "micbias1"},
++ {"Int Analog Mic", NULL, "micbias2"},
+ {"Ext Spk", NULL, "Platform Clock"},
+ };
+
+@@ -189,6 +195,7 @@ static const struct snd_kcontrol_new cht_mc_controls[] = {
+ SOC_DAPM_PIN_SWITCH("Headphone"),
+ SOC_DAPM_PIN_SWITCH("Headset Mic"),
+ SOC_DAPM_PIN_SWITCH("Int Mic"),
++ SOC_DAPM_PIN_SWITCH("Int Analog Mic"),
+ SOC_DAPM_PIN_SWITCH("Ext Spk"),
+ };
+
+diff --git a/sound/soc/intel/skylake/skl-messages.c b/sound/soc/intel/skylake/skl-messages.c
+index 61b5bfa79d13..97572fb38387 100644
+--- a/sound/soc/intel/skylake/skl-messages.c
++++ b/sound/soc/intel/skylake/skl-messages.c
+@@ -404,7 +404,11 @@ int skl_resume_dsp(struct skl *skl)
+ if (skl->skl_sst->is_first_boot == true)
+ return 0;
+
++ /* disable dynamic clock gating during fw and lib download */
++ ctx->enable_miscbdcge(ctx->dev, false);
++
+ ret = skl_dsp_wake(ctx->dsp);
++ ctx->enable_miscbdcge(ctx->dev, true);
+ if (ret < 0)
+ return ret;
+
+diff --git a/sound/soc/intel/skylake/skl-pcm.c b/sound/soc/intel/skylake/skl-pcm.c
+index 1dd97479e0c0..32b30f99d2c8 100644
+--- a/sound/soc/intel/skylake/skl-pcm.c
++++ b/sound/soc/intel/skylake/skl-pcm.c
+@@ -1343,7 +1343,11 @@ static int skl_platform_soc_probe(struct snd_soc_platform *platform)
+ return -EIO;
+ }
+
++ /* disable dynamic clock gating during fw and lib download */
++ skl->skl_sst->enable_miscbdcge(platform->dev, false);
++
+ ret = ops->init_fw(platform->dev, skl->skl_sst);
++ skl->skl_sst->enable_miscbdcge(platform->dev, true);
+ if (ret < 0) {
+ dev_err(platform->dev, "Failed to boot first fw: %d\n", ret);
+ return ret;
+diff --git a/tools/perf/arch/powerpc/util/sym-handling.c b/tools/perf/arch/powerpc/util/sym-handling.c
+index 9c4e23d8c8ce..53d83d7e6a09 100644
+--- a/tools/perf/arch/powerpc/util/sym-handling.c
++++ b/tools/perf/arch/powerpc/util/sym-handling.c
+@@ -64,6 +64,14 @@ int arch__compare_symbol_names_n(const char *namea, const char *nameb,
+
+ return strncmp(namea, nameb, n);
+ }
++
++const char *arch__normalize_symbol_name(const char *name)
++{
++ /* Skip over initial dot */
++ if (name && *name == '.')
++ name++;
++ return name;
++}
+ #endif
+
+ #if defined(_CALL_ELF) && _CALL_ELF == 2
+diff --git a/tools/perf/builtin-record.c b/tools/perf/builtin-record.c
+index 003255910c05..36b6213884b5 100644
+--- a/tools/perf/builtin-record.c
++++ b/tools/perf/builtin-record.c
+@@ -1781,8 +1781,8 @@ int cmd_record(int argc, const char **argv)
+ goto out;
+ }
+
+- /* Enable ignoring missing threads when -u option is defined. */
+- rec->opts.ignore_missing_thread = rec->opts.target.uid != UINT_MAX;
++ /* Enable ignoring missing threads when -u/-p option is defined. */
++ rec->opts.ignore_missing_thread = rec->opts.target.uid != UINT_MAX || rec->opts.target.pid;
+
+ err = -ENOMEM;
+ if (perf_evlist__create_maps(rec->evlist, &rec->opts.target) < 0)
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index af5dd038195e..0551a69bd4a5 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -162,12 +162,28 @@ static int hist_iter__branch_callback(struct hist_entry_iter *iter,
+ struct hist_entry *he = iter->he;
+ struct report *rep = arg;
+ struct branch_info *bi;
++ struct perf_sample *sample = iter->sample;
++ struct perf_evsel *evsel = iter->evsel;
++ int err;
++
++ if (!ui__has_annotation())
++ return 0;
++
++ hist__account_cycles(sample->branch_stack, al, sample,
++ rep->nonany_branch_mode);
+
+ bi = he->branch_info;
++ err = addr_map_symbol__inc_samples(&bi->from, sample, evsel->idx);
++ if (err)
++ goto out;
++
++ err = addr_map_symbol__inc_samples(&bi->to, sample, evsel->idx);
++
+ branch_type_count(&rep->brtype_stat, &bi->flags,
+ bi->from.addr, bi->to.addr);
+
+- return 0;
++out:
++ return err;
+ }
+
+ static int process_sample_event(struct perf_tool *tool,
+diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
+index d5fbcf8c7aa7..242d345beda4 100644
+--- a/tools/perf/util/evsel.c
++++ b/tools/perf/util/evsel.c
+@@ -36,6 +36,7 @@
+ #include "debug.h"
+ #include "trace-event.h"
+ #include "stat.h"
++#include "memswap.h"
+ #include "util/parse-branch-options.h"
+
+ #include "sane_ctype.h"
+@@ -1596,10 +1597,46 @@ static int __open_attr__fprintf(FILE *fp, const char *name, const char *val,
+ return fprintf(fp, " %-32s %s\n", name, val);
+ }
+
++static void perf_evsel__remove_fd(struct perf_evsel *pos,
++ int nr_cpus, int nr_threads,
++ int thread_idx)
++{
++ for (int cpu = 0; cpu < nr_cpus; cpu++)
++ for (int thread = thread_idx; thread < nr_threads - 1; thread++)
++ FD(pos, cpu, thread) = FD(pos, cpu, thread + 1);
++}
++
++static int update_fds(struct perf_evsel *evsel,
++ int nr_cpus, int cpu_idx,
++ int nr_threads, int thread_idx)
++{
++ struct perf_evsel *pos;
++
++ if (cpu_idx >= nr_cpus || thread_idx >= nr_threads)
++ return -EINVAL;
++
++ evlist__for_each_entry(evsel->evlist, pos) {
++ nr_cpus = pos != evsel ? nr_cpus : cpu_idx;
++
++ perf_evsel__remove_fd(pos, nr_cpus, nr_threads, thread_idx);
++
++ /*
++ * Since fds for next evsel has not been created,
++ * there is no need to iterate whole event list.
++ */
++ if (pos == evsel)
++ break;
++ }
++ return 0;
++}
++
+ static bool ignore_missing_thread(struct perf_evsel *evsel,
++ int nr_cpus, int cpu,
+ struct thread_map *threads,
+ int thread, int err)
+ {
++ pid_t ignore_pid = thread_map__pid(threads, thread);
++
+ if (!evsel->ignore_missing_thread)
+ return false;
+
+@@ -1615,11 +1652,18 @@ static bool ignore_missing_thread(struct perf_evsel *evsel,
+ if (threads->nr == 1)
+ return false;
+
++ /*
++ * We should remove fd for missing_thread first
++ * because thread_map__remove() will decrease threads->nr.
++ */
++ if (update_fds(evsel, nr_cpus, cpu, threads->nr, thread))
++ return false;
++
+ if (thread_map__remove(threads, thread))
+ return false;
+
+ pr_warning("WARNING: Ignored open failure for pid %d\n",
+- thread_map__pid(threads, thread));
++ ignore_pid);
+ return true;
+ }
+
+@@ -1724,7 +1768,7 @@ int perf_evsel__open(struct perf_evsel *evsel, struct cpu_map *cpus,
+ if (fd < 0) {
+ err = -errno;
+
+- if (ignore_missing_thread(evsel, threads, thread, err)) {
++ if (ignore_missing_thread(evsel, cpus->nr, cpu, threads, thread, err)) {
+ /*
+ * We just removed 1 thread, so take a step
+ * back on thread index and lower the upper
+@@ -2120,14 +2164,27 @@ int perf_evsel__parse_sample(struct perf_evsel *evsel, union perf_event *event,
+ if (type & PERF_SAMPLE_RAW) {
+ OVERFLOW_CHECK_u64(array);
+ u.val64 = *array;
+- if (WARN_ONCE(swapped,
+- "Endianness of raw data not corrected!\n")) {
+- /* undo swap of u64, then swap on individual u32s */
++
++ /*
++ * Undo swap of u64, then swap on individual u32s,
++ * get the size of the raw area and undo all of the
++ * swap. The pevent interface handles endianity by
++ * itself.
++ */
++ if (swapped) {
+ u.val64 = bswap_64(u.val64);
+ u.val32[0] = bswap_32(u.val32[0]);
+ u.val32[1] = bswap_32(u.val32[1]);
+ }
+ data->raw_size = u.val32[0];
++
++ /*
++ * The raw data is aligned on 64bits including the
++ * u32 size, so it's safe to use mem_bswap_64.
++ */
++ if (swapped)
++ mem_bswap_64((void *) array, data->raw_size);
++
+ array = (void *)array + sizeof(u32);
+
+ OVERFLOW_CHECK(array, data->raw_size, max_size);
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index b7aaf9b2294d..68786bb7790e 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -2625,6 +2625,14 @@ static int get_new_event_name(char *buf, size_t len, const char *base,
+
+ out:
+ free(nbase);
++
++ /* Final validation */
++ if (ret >= 0 && !is_c_func_name(buf)) {
++ pr_warning("Internal error: \"%s\" is an invalid event name.\n",
++ buf);
++ ret = -EINVAL;
++ }
++
+ return ret;
+ }
+
+@@ -2792,16 +2800,32 @@ static int find_probe_functions(struct map *map, char *name,
+ int found = 0;
+ struct symbol *sym;
+ struct rb_node *tmp;
++ const char *norm, *ver;
++ char *buf = NULL;
+
+ if (map__load(map) < 0)
+ return 0;
+
+ map__for_each_symbol(map, sym, tmp) {
+- if (strglobmatch(sym->name, name)) {
++ norm = arch__normalize_symbol_name(sym->name);
++ if (!norm)
++ continue;
++
++ /* We don't care about default symbol or not */
++ ver = strchr(norm, '@');
++ if (ver) {
++ buf = strndup(norm, ver - norm);
++ if (!buf)
++ return -ENOMEM;
++ norm = buf;
++ }
++ if (strglobmatch(norm, name)) {
+ found++;
+ if (syms && found < probe_conf.max_probes)
+ syms[found - 1] = sym;
+ }
++ if (buf)
++ zfree(&buf);
+ }
+
+ return found;
+@@ -2847,7 +2871,7 @@ static int find_probe_trace_events_from_map(struct perf_probe_event *pev,
+ * same name but different addresses, this lists all the symbols.
+ */
+ num_matched_functions = find_probe_functions(map, pp->function, syms);
+- if (num_matched_functions == 0) {
++ if (num_matched_functions <= 0) {
+ pr_err("Failed to find symbol %s in %s\n", pp->function,
+ pev->target ? : "kernel");
+ ret = -ENOENT;
+diff --git a/tools/perf/util/python-ext-sources b/tools/perf/util/python-ext-sources
+index b4f2f06722a7..7aa0ea64544e 100644
+--- a/tools/perf/util/python-ext-sources
++++ b/tools/perf/util/python-ext-sources
+@@ -10,6 +10,7 @@ util/ctype.c
+ util/evlist.c
+ util/evsel.c
+ util/cpumap.c
++util/memswap.c
+ util/mmap.c
+ util/namespaces.c
+ ../lib/bitmap.c
+diff --git a/tools/perf/util/symbol.c b/tools/perf/util/symbol.c
+index 1b67a8639dfe..cc065d4bfafc 100644
+--- a/tools/perf/util/symbol.c
++++ b/tools/perf/util/symbol.c
+@@ -94,6 +94,11 @@ static int prefix_underscores_count(const char *str)
+ return tail - str;
+ }
+
++const char * __weak arch__normalize_symbol_name(const char *name)
++{
++ return name;
++}
++
+ int __weak arch__compare_symbol_names(const char *namea, const char *nameb)
+ {
+ return strcmp(namea, nameb);
+diff --git a/tools/perf/util/symbol.h b/tools/perf/util/symbol.h
+index a4f0075b4e5c..0563f33c1eb3 100644
+--- a/tools/perf/util/symbol.h
++++ b/tools/perf/util/symbol.h
+@@ -349,6 +349,7 @@ bool elf__needs_adjust_symbols(GElf_Ehdr ehdr);
+ void arch__sym_update(struct symbol *s, GElf_Sym *sym);
+ #endif
+
++const char *arch__normalize_symbol_name(const char *name);
+ #define SYMBOL_A 0
+ #define SYMBOL_B 1
+
+diff --git a/tools/perf/util/util.c b/tools/perf/util/util.c
+index a789f952b3e9..443892dabedb 100644
+--- a/tools/perf/util/util.c
++++ b/tools/perf/util/util.c
+@@ -210,7 +210,7 @@ static int copyfile_offset(int ifd, loff_t off_in, int ofd, loff_t off_out, u64
+
+ size -= ret;
+ off_in += ret;
+- off_out -= ret;
++ off_out += ret;
+ }
+ munmap(ptr, off_in + size);
+
+diff --git a/tools/testing/selftests/net/msg_zerocopy.c b/tools/testing/selftests/net/msg_zerocopy.c
+index 3ab6ec403905..e11fe84de0fd 100644
+--- a/tools/testing/selftests/net/msg_zerocopy.c
++++ b/tools/testing/selftests/net/msg_zerocopy.c
+@@ -259,22 +259,28 @@ static int setup_ip6h(struct ipv6hdr *ip6h, uint16_t payload_len)
+ return sizeof(*ip6h);
+ }
+
+-static void setup_sockaddr(int domain, const char *str_addr, void *sockaddr)
++
++static void setup_sockaddr(int domain, const char *str_addr,
++ struct sockaddr_storage *sockaddr)
+ {
+ struct sockaddr_in6 *addr6 = (void *) sockaddr;
+ struct sockaddr_in *addr4 = (void *) sockaddr;
+
+ switch (domain) {
+ case PF_INET:
++ memset(addr4, 0, sizeof(*addr4));
+ addr4->sin_family = AF_INET;
+ addr4->sin_port = htons(cfg_port);
+- if (inet_pton(AF_INET, str_addr, &(addr4->sin_addr)) != 1)
++ if (str_addr &&
++ inet_pton(AF_INET, str_addr, &(addr4->sin_addr)) != 1)
+ error(1, 0, "ipv4 parse error: %s", str_addr);
+ break;
+ case PF_INET6:
++ memset(addr6, 0, sizeof(*addr6));
+ addr6->sin6_family = AF_INET6;
+ addr6->sin6_port = htons(cfg_port);
+- if (inet_pton(AF_INET6, str_addr, &(addr6->sin6_addr)) != 1)
++ if (str_addr &&
++ inet_pton(AF_INET6, str_addr, &(addr6->sin6_addr)) != 1)
+ error(1, 0, "ipv6 parse error: %s", str_addr);
+ break;
+ default:
+@@ -603,6 +609,7 @@ static void parse_opts(int argc, char **argv)
+ sizeof(struct tcphdr) -
+ 40 /* max tcp options */;
+ int c;
++ char *daddr = NULL, *saddr = NULL;
+
+ cfg_payload_len = max_payload_len;
+
+@@ -627,7 +634,7 @@ static void parse_opts(int argc, char **argv)
+ cfg_cpu = strtol(optarg, NULL, 0);
+ break;
+ case 'D':
+- setup_sockaddr(cfg_family, optarg, &cfg_dst_addr);
++ daddr = optarg;
+ break;
+ case 'i':
+ cfg_ifindex = if_nametoindex(optarg);
+@@ -638,7 +645,7 @@ static void parse_opts(int argc, char **argv)
+ cfg_cork_mixed = true;
+ break;
+ case 'p':
+- cfg_port = htons(strtoul(optarg, NULL, 0));
++ cfg_port = strtoul(optarg, NULL, 0);
+ break;
+ case 'r':
+ cfg_rx = true;
+@@ -647,7 +654,7 @@ static void parse_opts(int argc, char **argv)
+ cfg_payload_len = strtoul(optarg, NULL, 0);
+ break;
+ case 'S':
+- setup_sockaddr(cfg_family, optarg, &cfg_src_addr);
++ saddr = optarg;
+ break;
+ case 't':
+ cfg_runtime_ms = 200 + strtoul(optarg, NULL, 10) * 1000;
+@@ -660,6 +667,8 @@ static void parse_opts(int argc, char **argv)
+ break;
+ }
+ }
++ setup_sockaddr(cfg_family, daddr, &cfg_dst_addr);
++ setup_sockaddr(cfg_family, saddr, &cfg_src_addr);
+
+ if (cfg_payload_len > max_payload_len)
+ error(1, 0, "-s: payload exceeds max (%d)", max_payload_len);
* [gentoo-commits] proj/linux-patches:4.15 commit in: /
@ 2018-04-19 10:44 Mike Pagano
0 siblings, 0 replies; 27+ messages in thread
From: Mike Pagano @ 2018-04-19 10:44 UTC (permalink / raw
To: gentoo-commits
commit: 0632b05dfe0edf20e50d473d3c174fda502d6808
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Apr 19 10:44:15 2018 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Apr 19 10:44:15 2018 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0632b05d
Linux patch 4.15.18
0000_README | 4 +
1017_linux-4.15.18.patch | 2152 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 2156 insertions(+)
diff --git a/0000_README b/0000_README
index f973683..e2ca4c8 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch: 1016_linux-4.15.17.patch
From: http://www.kernel.org
Desc: Linux 4.15.17
+Patch: 1017_linux-4.15.18.patch
+From: http://www.kernel.org
+Desc: Linux 4.15.18
+
Patch: 1500_XATTR_USER_PREFIX.patch
From: https://bugs.gentoo.org/show_bug.cgi?id=470644
Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1017_linux-4.15.18.patch b/1017_linux-4.15.18.patch
new file mode 100644
index 0000000..8aac24f
--- /dev/null
+++ b/1017_linux-4.15.18.patch
@@ -0,0 +1,2152 @@
+diff --git a/Makefile b/Makefile
+index cfff73b62eb5..83152471e1a9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 15
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+
+diff --git a/arch/parisc/kernel/drivers.c b/arch/parisc/kernel/drivers.c
+index 29b99b8964aa..d4240aa7f8b1 100644
+--- a/arch/parisc/kernel/drivers.c
++++ b/arch/parisc/kernel/drivers.c
+@@ -651,6 +651,10 @@ static int match_pci_device(struct device *dev, int index,
+ (modpath->mod == PCI_FUNC(devfn)));
+ }
+
++ /* index might be out of bounds for bc[] */
++ if (index >= 6)
++ return 0;
++
+ id = PCI_SLOT(pdev->devfn) | (PCI_FUNC(pdev->devfn) << 5);
+ return (modpath->bc[index] == id);
+ }
+diff --git a/arch/parisc/kernel/hpmc.S b/arch/parisc/kernel/hpmc.S
+index 8d072c44f300..781c3b9a3e46 100644
+--- a/arch/parisc/kernel/hpmc.S
++++ b/arch/parisc/kernel/hpmc.S
+@@ -84,6 +84,7 @@ END(hpmc_pim_data)
+ .text
+
+ .import intr_save, code
++ .align 16
+ ENTRY_CFI(os_hpmc)
+ .os_hpmc:
+
+@@ -300,12 +301,15 @@ os_hpmc_6:
+
+ b .
+ nop
++ .align 16 /* make function length multiple of 16 bytes */
+ ENDPROC_CFI(os_hpmc)
+ .os_hpmc_end:
+
+
+ __INITRODATA
++.globl os_hpmc_size
+ .align 4
+- .export os_hpmc_size
++ .type os_hpmc_size, @object
++ .size os_hpmc_size, 4
+ os_hpmc_size:
+ .word .os_hpmc_end-.os_hpmc
+diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+index 26c11f678fbf..d981dfdf8319 100644
+--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
++++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+@@ -470,8 +470,6 @@ static void do_tlbies(struct kvm *kvm, unsigned long *rbvalues,
+ for (i = 0; i < npages; ++i) {
+ asm volatile(PPC_TLBIE_5(%0,%1,0,0,0) : :
+ "r" (rbvalues[i]), "r" (kvm->arch.lpid));
+- trace_tlbie(kvm->arch.lpid, 0, rbvalues[i],
+- kvm->arch.lpid, 0, 0, 0);
+ }
+ asm volatile("eieio; tlbsync; ptesync" : : : "memory");
+ kvm->arch.tlbie_lock = 0;
+@@ -481,8 +479,6 @@ static void do_tlbies(struct kvm *kvm, unsigned long *rbvalues,
+ for (i = 0; i < npages; ++i) {
+ asm volatile(PPC_TLBIEL(%0,%1,0,0,0) : :
+ "r" (rbvalues[i]), "r" (0));
+- trace_tlbie(kvm->arch.lpid, 1, rbvalues[i],
+- 0, 0, 0, 0);
+ }
+ asm volatile("ptesync" : : : "memory");
+ }
+diff --git a/arch/s390/kernel/compat_signal.c b/arch/s390/kernel/compat_signal.c
+index ef246940b44c..f19e90856e49 100644
+--- a/arch/s390/kernel/compat_signal.c
++++ b/arch/s390/kernel/compat_signal.c
+@@ -379,7 +379,7 @@ static int setup_frame32(struct ksignal *ksig, sigset_t *set,
+ if (put_compat_sigset((compat_sigset_t __user *)frame->sc.oldmask,
+ set, sizeof(compat_sigset_t)))
+ return -EFAULT;
+- if (__put_user(ptr_to_compat(&frame->sc), &frame->sc.sregs))
++ if (__put_user(ptr_to_compat(&frame->sregs), &frame->sc.sregs))
+ return -EFAULT;
+
+ /* Store registers needed to create the signal frame */
+diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c
+index 8ecb8726ac47..66c470be1b58 100644
+--- a/arch/s390/kernel/ipl.c
++++ b/arch/s390/kernel/ipl.c
+@@ -779,6 +779,7 @@ static ssize_t reipl_generic_loadparm_store(struct ipl_parameter_block *ipb,
+ /* copy and convert to ebcdic */
+ memcpy(ipb->hdr.loadparm, buf, lp_len);
+ ASCEBC(ipb->hdr.loadparm, LOADPARM_LEN);
++ ipb->hdr.flags |= DIAG308_FLAGS_LP_VALID;
+ return len;
+ }
+
+diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
+index 98722773391d..f01eef8b392e 100644
+--- a/arch/x86/include/asm/apic.h
++++ b/arch/x86/include/asm/apic.h
+@@ -319,7 +319,7 @@ struct apic {
+ /* Probe, setup and smpboot functions */
+ int (*probe)(void);
+ int (*acpi_madt_oem_check)(char *oem_id, char *oem_table_id);
+- int (*apic_id_valid)(int apicid);
++ int (*apic_id_valid)(u32 apicid);
+ int (*apic_id_registered)(void);
+
+ bool (*check_apicid_used)(physid_mask_t *map, int apicid);
+@@ -492,7 +492,7 @@ static inline unsigned int read_apic_id(void)
+ return apic->get_apic_id(reg);
+ }
+
+-extern int default_apic_id_valid(int apicid);
++extern int default_apic_id_valid(u32 apicid);
+ extern int default_acpi_madt_oem_check(char *, char *);
+ extern void default_setup_apic_routing(void);
+
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index f4c463df8b08..125ac4eecffc 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -200,7 +200,7 @@ acpi_parse_x2apic(struct acpi_subtable_header *header, const unsigned long end)
+ {
+ struct acpi_madt_local_x2apic *processor = NULL;
+ #ifdef CONFIG_X86_X2APIC
+- int apic_id;
++ u32 apic_id;
+ u8 enabled;
+ #endif
+
+@@ -222,10 +222,13 @@ acpi_parse_x2apic(struct acpi_subtable_header *header, const unsigned long end)
+ * to not preallocating memory for all NR_CPUS
+ * when we use CPU hotplug.
+ */
+- if (!apic->apic_id_valid(apic_id) && enabled)
+- printk(KERN_WARNING PREFIX "x2apic entry ignored\n");
+- else
+- acpi_register_lapic(apic_id, processor->uid, enabled);
++ if (!apic->apic_id_valid(apic_id)) {
++ if (enabled)
++ pr_warn(PREFIX "x2apic entry ignored\n");
++ return 0;
++ }
++
++ acpi_register_lapic(apic_id, processor->uid, enabled);
+ #else
+ printk(KERN_WARNING PREFIX "x2apic entry ignored\n");
+ #endif
+diff --git a/arch/x86/kernel/apic/apic_common.c b/arch/x86/kernel/apic/apic_common.c
+index a360801779ae..02b4839478b1 100644
+--- a/arch/x86/kernel/apic/apic_common.c
++++ b/arch/x86/kernel/apic/apic_common.c
+@@ -40,7 +40,7 @@ int default_check_phys_apicid_present(int phys_apicid)
+ return physid_isset(phys_apicid, phys_cpu_present_map);
+ }
+
+-int default_apic_id_valid(int apicid)
++int default_apic_id_valid(u32 apicid)
+ {
+ return (apicid < 255);
+ }
+diff --git a/arch/x86/kernel/apic/apic_numachip.c b/arch/x86/kernel/apic/apic_numachip.c
+index 134e04506ab4..78778b54f904 100644
+--- a/arch/x86/kernel/apic/apic_numachip.c
++++ b/arch/x86/kernel/apic/apic_numachip.c
+@@ -56,7 +56,7 @@ static u32 numachip2_set_apic_id(unsigned int id)
+ return id << 24;
+ }
+
+-static int numachip_apic_id_valid(int apicid)
++static int numachip_apic_id_valid(u32 apicid)
+ {
+ /* Trust what bootloader passes in MADT */
+ return 1;
+diff --git a/arch/x86/kernel/apic/x2apic.h b/arch/x86/kernel/apic/x2apic.h
+index b107de381cb5..a49b3604027f 100644
+--- a/arch/x86/kernel/apic/x2apic.h
++++ b/arch/x86/kernel/apic/x2apic.h
+@@ -1,6 +1,6 @@
+ /* Common bits for X2APIC cluster/physical modes. */
+
+-int x2apic_apic_id_valid(int apicid);
++int x2apic_apic_id_valid(u32 apicid);
+ int x2apic_apic_id_registered(void);
+ void __x2apic_send_IPI_dest(unsigned int apicid, int vector, unsigned int dest);
+ unsigned int x2apic_get_apic_id(unsigned long id);
+diff --git a/arch/x86/kernel/apic/x2apic_phys.c b/arch/x86/kernel/apic/x2apic_phys.c
+index f8d9d69994e6..e972405eb2b5 100644
+--- a/arch/x86/kernel/apic/x2apic_phys.c
++++ b/arch/x86/kernel/apic/x2apic_phys.c
+@@ -101,7 +101,7 @@ static int x2apic_phys_probe(void)
+ }
+
+ /* Common x2apic functions, also used by x2apic_cluster */
+-int x2apic_apic_id_valid(int apicid)
++int x2apic_apic_id_valid(u32 apicid)
+ {
+ return 1;
+ }
+diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
+index e1b8e8bf6b3c..f6cce056324a 100644
+--- a/arch/x86/kernel/apic/x2apic_uv_x.c
++++ b/arch/x86/kernel/apic/x2apic_uv_x.c
+@@ -554,7 +554,7 @@ static void uv_send_IPI_all(int vector)
+ uv_send_IPI_mask(cpu_online_mask, vector);
+ }
+
+-static int uv_apic_id_valid(int apicid)
++static int uv_apic_id_valid(u32 apicid)
+ {
+ return 1;
+ }
+diff --git a/arch/x86/kernel/cpu/mcheck/mce_amd.c b/arch/x86/kernel/cpu/mcheck/mce_amd.c
+index 486f640b02ef..f3bbc7bde471 100644
+--- a/arch/x86/kernel/cpu/mcheck/mce_amd.c
++++ b/arch/x86/kernel/cpu/mcheck/mce_amd.c
+@@ -416,6 +416,21 @@ static u32 get_block_address(unsigned int cpu, u32 current_addr, u32 low, u32 hi
+ {
+ u32 addr = 0, offset = 0;
+
++ if ((bank >= mca_cfg.banks) || (block >= NR_BLOCKS))
++ return addr;
++
++ /* Get address from already initialized block. */
++ if (per_cpu(threshold_banks, cpu)) {
++ struct threshold_bank *bankp = per_cpu(threshold_banks, cpu)[bank];
++
++ if (bankp && bankp->blocks) {
++ struct threshold_block *blockp = &bankp->blocks[block];
++
++ if (blockp)
++ return blockp->address;
++ }
++ }
++
+ if (mce_flags.smca) {
+ if (!block) {
+ addr = MSR_AMD64_SMCA_MCx_MISC(bank);
+diff --git a/arch/x86/xen/apic.c b/arch/x86/xen/apic.c
+index de58533d3664..2fa79e2e73ea 100644
+--- a/arch/x86/xen/apic.c
++++ b/arch/x86/xen/apic.c
+@@ -112,7 +112,7 @@ static int xen_madt_oem_check(char *oem_id, char *oem_table_id)
+ return xen_pv_domain();
+ }
+
+-static int xen_id_always_valid(int apicid)
++static int xen_id_always_valid(u32 apicid)
+ {
+ return 1;
+ }
+diff --git a/block/blk-core.c b/block/blk-core.c
+index b725d9e340c2..322c47ffac3b 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -823,7 +823,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
+ bool success = false;
+ int ret;
+
+- rcu_read_lock_sched();
++ rcu_read_lock();
+ if (percpu_ref_tryget_live(&q->q_usage_counter)) {
+ /*
+ * The code that sets the PREEMPT_ONLY flag is
+@@ -836,7 +836,7 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
+ percpu_ref_put(&q->q_usage_counter);
+ }
+ }
+- rcu_read_unlock_sched();
++ rcu_read_unlock();
+
+ if (success)
+ return 0;
+diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
+index 9f8cffc8a701..3eb169f15842 100644
+--- a/block/blk-mq-cpumap.c
++++ b/block/blk-mq-cpumap.c
+@@ -16,11 +16,6 @@
+
+ static int cpu_to_queue_index(unsigned int nr_queues, const int cpu)
+ {
+- /*
+- * Non present CPU will be mapped to queue index 0.
+- */
+- if (!cpu_present(cpu))
+- return 0;
+ return cpu % nr_queues;
+ }
+
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index ab88ff3314a7..fb5f2704e621 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1096,7 +1096,12 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
+ blk_status_t ret;
+
+ rq = list_first_entry(list, struct request, queuelist);
+- if (!blk_mq_get_driver_tag(rq, &hctx, false)) {
++
++ hctx = blk_mq_map_queue(rq->q, rq->mq_ctx->cpu);
++ if (!got_budget && !blk_mq_get_dispatch_budget(hctx))
++ break;
++
++ if (!blk_mq_get_driver_tag(rq, NULL, false)) {
+ /*
+ * The initial allocation attempt failed, so we need to
+ * rerun the hardware queue when a tag is freed. The
+@@ -1105,8 +1110,7 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
+ * we'll re-run it below.
+ */
+ if (!blk_mq_mark_tag_wait(&hctx, rq)) {
+- if (got_budget)
+- blk_mq_put_dispatch_budget(hctx);
++ blk_mq_put_dispatch_budget(hctx);
+ /*
+ * For non-shared tags, the RESTART check
+ * will suffice.
+@@ -1117,11 +1121,6 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
+ }
+ }
+
+- if (!got_budget && !blk_mq_get_dispatch_budget(hctx)) {
+- blk_mq_put_driver_tag(rq);
+- break;
+- }
+-
+ list_del_init(&rq->queuelist);
+
+ bd.rq = rq;
+@@ -1619,11 +1618,11 @@ static void __blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+ if (q->elevator)
+ goto insert;
+
+- if (!blk_mq_get_driver_tag(rq, NULL, false))
++ if (!blk_mq_get_dispatch_budget(hctx))
+ goto insert;
+
+- if (!blk_mq_get_dispatch_budget(hctx)) {
+- blk_mq_put_driver_tag(rq);
++ if (!blk_mq_get_driver_tag(rq, NULL, false)) {
++ blk_mq_put_dispatch_budget(hctx);
+ goto insert;
+ }
+
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index b28ce440a06f..add21ba1bc86 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -2998,15 +2998,21 @@ static void acpi_nfit_scrub(struct work_struct *work)
+ static int acpi_nfit_register_regions(struct acpi_nfit_desc *acpi_desc)
+ {
+ struct nfit_spa *nfit_spa;
+- int rc;
+
+- list_for_each_entry(nfit_spa, &acpi_desc->spas, list)
+- if (nfit_spa_type(nfit_spa->spa) == NFIT_SPA_DCR) {
+- /* BLK regions don't need to wait for ars results */
+- rc = acpi_nfit_register_region(acpi_desc, nfit_spa);
+- if (rc)
+- return rc;
+- }
++ list_for_each_entry(nfit_spa, &acpi_desc->spas, list) {
++ int rc, type = nfit_spa_type(nfit_spa->spa);
++
++ /* PMEM and VMEM will be registered by the ARS workqueue */
++ if (type == NFIT_SPA_PM || type == NFIT_SPA_VOLATILE)
++ continue;
++ /* BLK apertures belong to BLK region registration below */
++ if (type == NFIT_SPA_BDW)
++ continue;
++ /* BLK regions don't need to wait for ARS results */
++ rc = acpi_nfit_register_region(acpi_desc, nfit_spa);
++ if (rc)
++ return rc;
++ }
+
+ acpi_desc->ars_start_flags = 0;
+ if (!acpi_desc->cancel)
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 89d2ee00cced..69ddd171587f 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -1103,11 +1103,15 @@ loop_set_status(struct loop_device *lo, const struct loop_info64 *info)
+ if (info->lo_encrypt_type) {
+ unsigned int type = info->lo_encrypt_type;
+
+- if (type >= MAX_LO_CRYPT)
+- return -EINVAL;
++ if (type >= MAX_LO_CRYPT) {
++ err = -EINVAL;
++ goto exit;
++ }
+ xfer = xfer_funcs[type];
+- if (xfer == NULL)
+- return -EINVAL;
++ if (xfer == NULL) {
++ err = -EINVAL;
++ goto exit;
++ }
+ } else
+ xfer = NULL;
+
+diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c
+index 7d98f9a17636..51a04a08cc9f 100644
+--- a/drivers/bluetooth/hci_bcm.c
++++ b/drivers/bluetooth/hci_bcm.c
+@@ -701,22 +701,6 @@ static const struct acpi_gpio_mapping acpi_bcm_int_first_gpios[] = {
+ #ifdef CONFIG_ACPI
+ /* IRQ polarity of some chipsets are not defined correctly in ACPI table. */
+ static const struct dmi_system_id bcm_active_low_irq_dmi_table[] = {
+- {
+- .ident = "Asus T100TA",
+- .matches = {
+- DMI_EXACT_MATCH(DMI_SYS_VENDOR,
+- "ASUSTeK COMPUTER INC."),
+- DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100TA"),
+- },
+- },
+- {
+- .ident = "Asus T100CHI",
+- .matches = {
+- DMI_EXACT_MATCH(DMI_SYS_VENDOR,
+- "ASUSTeK COMPUTER INC."),
+- DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100CHI"),
+- },
+- },
+ { /* Handle ThinkPad 8 tablets with BCM2E55 chipset ACPI ID */
+ .ident = "Lenovo ThinkPad 8",
+ .matches = {
+@@ -744,7 +728,9 @@ static int bcm_resource(struct acpi_resource *ares, void *data)
+ switch (ares->type) {
+ case ACPI_RESOURCE_TYPE_EXTENDED_IRQ:
+ irq = &ares->data.extended_irq;
+- dev->irq_active_low = irq->polarity == ACPI_ACTIVE_LOW;
++ if (irq->polarity != ACPI_ACTIVE_LOW)
++ dev_info(dev->dev, "ACPI Interrupt resource is active-high, this is usually wrong, treating the IRQ as active-low\n");
++ dev->irq_active_low = true;
+ break;
+
+ case ACPI_RESOURCE_TYPE_GPIO:
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index 7499b0cd8326..c33e579d8911 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -252,6 +252,9 @@ struct smi_info {
+ /* Default driver model device. */
+ struct platform_device *pdev;
+
++ /* Have we added the device group to the device? */
++ bool dev_group_added;
++
+ /* Counters and things for the proc filesystem. */
+ atomic_t stats[SI_NUM_STATS];
+
+@@ -2025,8 +2028,8 @@ int ipmi_si_add_smi(struct si_sm_io *io)
+ if (initialized) {
+ rv = try_smi_init(new_smi);
+ if (rv) {
+- mutex_unlock(&smi_infos_lock);
+ cleanup_one_si(new_smi);
++ mutex_unlock(&smi_infos_lock);
+ return rv;
+ }
+ }
+@@ -2185,6 +2188,7 @@ static int try_smi_init(struct smi_info *new_smi)
+ rv);
+ goto out_err_stop_timer;
+ }
++ new_smi->dev_group_added = true;
+
+ rv = ipmi_register_smi(&handlers,
+ new_smi,
+@@ -2238,7 +2242,10 @@ static int try_smi_init(struct smi_info *new_smi)
+ return 0;
+
+ out_err_remove_attrs:
+- device_remove_group(new_smi->io.dev, &ipmi_si_dev_attr_group);
++ if (new_smi->dev_group_added) {
++ device_remove_group(new_smi->io.dev, &ipmi_si_dev_attr_group);
++ new_smi->dev_group_added = false;
++ }
+ dev_set_drvdata(new_smi->io.dev, NULL);
+
+ out_err_stop_timer:
+@@ -2286,6 +2293,7 @@ static int try_smi_init(struct smi_info *new_smi)
+ else
+ platform_device_put(new_smi->pdev);
+ new_smi->pdev = NULL;
++ new_smi->io.dev = NULL;
+ }
+
+ kfree(init_name);
+@@ -2382,8 +2390,10 @@ static void cleanup_one_si(struct smi_info *to_clean)
+ }
+ }
+
+- device_remove_group(to_clean->io.dev, &ipmi_si_dev_attr_group);
+- dev_set_drvdata(to_clean->io.dev, NULL);
++ if (to_clean->dev_group_added)
++ device_remove_group(to_clean->io.dev, &ipmi_si_dev_attr_group);
++ if (to_clean->io.dev)
++ dev_set_drvdata(to_clean->io.dev, NULL);
+
+ list_del(&to_clean->link);
+
+diff --git a/drivers/gpu/drm/i915/intel_dp_link_training.c b/drivers/gpu/drm/i915/intel_dp_link_training.c
+index 05907fa8a553..cf8fef8b6f58 100644
+--- a/drivers/gpu/drm/i915/intel_dp_link_training.c
++++ b/drivers/gpu/drm/i915/intel_dp_link_training.c
+@@ -328,14 +328,22 @@ intel_dp_start_link_train(struct intel_dp *intel_dp)
+ return;
+
+ failure_handling:
+- DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
+- intel_connector->base.base.id,
+- intel_connector->base.name,
+- intel_dp->link_rate, intel_dp->lane_count);
+- if (!intel_dp_get_link_train_fallback_values(intel_dp,
+- intel_dp->link_rate,
+- intel_dp->lane_count))
+- /* Schedule a Hotplug Uevent to userspace to start modeset */
+- schedule_work(&intel_connector->modeset_retry_work);
++ /* Dont fallback and prune modes if its eDP */
++ if (!intel_dp_is_edp(intel_dp)) {
++ DRM_DEBUG_KMS("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
++ intel_connector->base.base.id,
++ intel_connector->base.name,
++ intel_dp->link_rate, intel_dp->lane_count);
++ if (!intel_dp_get_link_train_fallback_values(intel_dp,
++ intel_dp->link_rate,
++ intel_dp->lane_count))
++ /* Schedule a Hotplug Uevent to userspace to start modeset */
++ schedule_work(&intel_connector->modeset_retry_work);
++ } else {
++ DRM_ERROR("[CONNECTOR:%d:%s] Link Training failed at link rate = %d, lane count = %d",
++ intel_connector->base.base.id,
++ intel_connector->base.name,
++ intel_dp->link_rate, intel_dp->lane_count);
++ }
+ return;
+ }
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index c21020b69114..55ee5e87073a 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -71,7 +71,7 @@ static const struct vmbus_device vmbus_devs[] = {
+ /* PCIE */
+ { .dev_type = HV_PCIE,
+ HV_PCIE_GUID,
+- .perf_device = true,
++ .perf_device = false,
+ },
+
+ /* Synthetic Frame Buffer */
+diff --git a/drivers/hwmon/ina2xx.c b/drivers/hwmon/ina2xx.c
+index e362a932fe8c..e9e6aeabbf84 100644
+--- a/drivers/hwmon/ina2xx.c
++++ b/drivers/hwmon/ina2xx.c
+@@ -454,6 +454,7 @@ static int ina2xx_probe(struct i2c_client *client,
+
+ /* set the device type */
+ data->config = &ina2xx_config[chip];
++ mutex_init(&data->config_lock);
+
+ if (of_property_read_u32(dev->of_node, "shunt-resistor", &val) < 0) {
+ struct ina2xx_platform_data *pdata = dev_get_platdata(dev);
+@@ -480,8 +481,6 @@ static int ina2xx_probe(struct i2c_client *client,
+ return -ENODEV;
+ }
+
+- mutex_init(&data->config_lock);
+-
+ data->groups[group++] = &ina2xx_group;
+ if (id->driver_data == ina226)
+ data->groups[group++] = &ina226_group;
+diff --git a/drivers/media/platform/vsp1/vsp1_dl.c b/drivers/media/platform/vsp1/vsp1_dl.c
+index 4257451f1bd8..0b86ed01e85d 100644
+--- a/drivers/media/platform/vsp1/vsp1_dl.c
++++ b/drivers/media/platform/vsp1/vsp1_dl.c
+@@ -509,7 +509,8 @@ static bool vsp1_dl_list_hw_update_pending(struct vsp1_dl_manager *dlm)
+ return !!(vsp1_read(vsp1, VI6_DL_BODY_SIZE)
+ & VI6_DL_BODY_SIZE_UPD);
+ else
+- return !!(vsp1_read(vsp1, VI6_CMD(dlm->index) & VI6_CMD_UPDHDR));
++ return !!(vsp1_read(vsp1, VI6_CMD(dlm->index))
++ & VI6_CMD_UPDHDR);
+ }
+
+ static void vsp1_dl_list_hw_enqueue(struct vsp1_dl_list *dl)
+diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+index cbeea8343a5c..6730fd08ef03 100644
+--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
++++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+@@ -101,7 +101,7 @@ static int get_v4l2_window32(struct v4l2_window __user *kp,
+ static int put_v4l2_window32(struct v4l2_window __user *kp,
+ struct v4l2_window32 __user *up)
+ {
+- struct v4l2_clip __user *kclips = kp->clips;
++ struct v4l2_clip __user *kclips;
+ struct v4l2_clip32 __user *uclips;
+ compat_caddr_t p;
+ u32 clipcount;
+@@ -116,6 +116,8 @@ static int put_v4l2_window32(struct v4l2_window __user *kp,
+ if (!clipcount)
+ return 0;
+
++ if (get_user(kclips, &kp->clips))
++ return -EFAULT;
+ if (get_user(p, &up->clips))
+ return -EFAULT;
+ uclips = compat_ptr(p);
+diff --git a/drivers/net/slip/slhc.c b/drivers/net/slip/slhc.c
+index 5782733959f0..f4e93f5fc204 100644
+--- a/drivers/net/slip/slhc.c
++++ b/drivers/net/slip/slhc.c
+@@ -509,6 +509,10 @@ slhc_uncompress(struct slcompress *comp, unsigned char *icp, int isize)
+ if(x < 0 || x > comp->rslot_limit)
+ goto bad;
+
++ /* Check if the cstate is initialized */
++ if (!comp->rstate[x].initialized)
++ goto bad;
++
+ comp->flags &=~ SLF_TOSS;
+ comp->recv_current = x;
+ } else {
+@@ -673,6 +677,7 @@ slhc_remember(struct slcompress *comp, unsigned char *icp, int isize)
+ if (cs->cs_tcp.doff > 5)
+ memcpy(cs->cs_tcpopt, icp + ihl*4 + sizeof(struct tcphdr), (cs->cs_tcp.doff - 5) * 4);
+ cs->cs_hsize = ihl*2 + cs->cs_tcp.doff*2;
++ cs->initialized = true;
+ /* Put headers back on packet
+ * Neither header checksum is recalculated
+ */
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index 05dca3e5c93d..178b956501a7 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -895,6 +895,12 @@ static const struct usb_device_id products[] = {
+ USB_CDC_SUBCLASS_ETHERNET,
+ USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long)&wwan_info,
++}, {
++ /* Cinterion AHS3 modem by GEMALTO */
++ USB_DEVICE_AND_INTERFACE_INFO(0x1e2d, 0x0055, USB_CLASS_COMM,
++ USB_CDC_SUBCLASS_ETHERNET,
++ USB_CDC_PROTO_NONE),
++ .driver_info = (unsigned long)&wwan_info,
+ }, {
+ USB_INTERFACE_INFO(USB_CLASS_COMM, USB_CDC_SUBCLASS_ETHERNET,
+ USB_CDC_PROTO_NONE),
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 02048263c1fb..e0d52ad4842d 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -928,7 +928,8 @@ static int lan78xx_read_otp(struct lan78xx_net *dev, u32 offset,
+ offset += 0x100;
+ else
+ ret = -EINVAL;
+- ret = lan78xx_read_raw_otp(dev, offset, length, data);
++ if (!ret)
++ ret = lan78xx_read_raw_otp(dev, offset, length, data);
+ }
+
+ return ret;
+diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
+index 396bf05c6bf6..d8b041f48ca8 100644
+--- a/drivers/net/wireless/ath/ath9k/xmit.c
++++ b/drivers/net/wireless/ath/ath9k/xmit.c
+@@ -2892,6 +2892,8 @@ void ath_tx_node_cleanup(struct ath_softc *sc, struct ath_node *an)
+ struct ath_txq *txq;
+ int tidno;
+
++ rcu_read_lock();
++
+ for (tidno = 0; tidno < IEEE80211_NUM_TIDS; tidno++) {
+ tid = ath_node_to_tid(an, tidno);
+ txq = tid->txq;
+@@ -2909,6 +2911,8 @@ void ath_tx_node_cleanup(struct ath_softc *sc, struct ath_node *an)
+ if (!an->sta)
+ break; /* just one multicast ath_atx_tid */
+ }
++
++ rcu_read_unlock();
+ }
+
+ #ifdef CONFIG_ATH9K_TX99
+diff --git a/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c b/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
+index 121b94f09714..9a1d15b3ce45 100644
+--- a/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
++++ b/drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c
+@@ -1450,6 +1450,7 @@ static int rtl8187_probe(struct usb_interface *intf,
+ goto err_free_dev;
+ }
+ mutex_init(&priv->io_mutex);
++ mutex_init(&priv->conf_mutex);
+
+ SET_IEEE80211_DEV(dev, &intf->dev);
+ usb_set_intfdata(intf, dev);
+@@ -1625,7 +1626,6 @@ static int rtl8187_probe(struct usb_interface *intf,
+ printk(KERN_ERR "rtl8187: Cannot register device\n");
+ goto err_free_dmabuf;
+ }
+- mutex_init(&priv->conf_mutex);
+ skb_queue_head_init(&priv->b_tx_status.queue);
+
+ wiphy_info(dev->wiphy, "hwaddr %pM, %s V%d + %s, rfkill mask %d\n",
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 935593032123..bbe69e147b48 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -2720,6 +2720,7 @@ static int __nvme_check_ids(struct nvme_subsystem *subsys,
+
+ list_for_each_entry(h, &subsys->nsheads, entry) {
+ if (nvme_ns_ids_valid(&new->ids) &&
++ !list_empty(&h->list) &&
+ nvme_ns_ids_equal(&new->ids, &h->ids))
+ return -EINVAL;
+ }
+diff --git a/drivers/pci/host/pci-hyperv.c b/drivers/pci/host/pci-hyperv.c
+index 6b8d060d07de..1cdc12938b24 100644
+--- a/drivers/pci/host/pci-hyperv.c
++++ b/drivers/pci/host/pci-hyperv.c
+@@ -457,7 +457,6 @@ struct hv_pcibus_device {
+ spinlock_t device_list_lock; /* Protect lists below */
+ void __iomem *cfg_addr;
+
+- struct semaphore enum_sem;
+ struct list_head resources_for_children;
+
+ struct list_head children;
+@@ -471,6 +470,8 @@ struct hv_pcibus_device {
+ struct retarget_msi_interrupt retarget_msi_interrupt_params;
+
+ spinlock_t retarget_msi_interrupt_lock;
++
++ struct workqueue_struct *wq;
+ };
+
+ /*
+@@ -530,6 +531,8 @@ struct hv_pci_compl {
+ s32 completion_status;
+ };
+
++static void hv_pci_onchannelcallback(void *context);
++
+ /**
+ * hv_pci_generic_compl() - Invoked for a completion packet
+ * @context: Set up by the sender of the packet.
+@@ -674,6 +677,31 @@ static void _hv_pcifront_read_config(struct hv_pci_dev *hpdev, int where,
+ }
+ }
+
++static u16 hv_pcifront_get_vendor_id(struct hv_pci_dev *hpdev)
++{
++ u16 ret;
++ unsigned long flags;
++ void __iomem *addr = hpdev->hbus->cfg_addr + CFG_PAGE_OFFSET +
++ PCI_VENDOR_ID;
++
++ spin_lock_irqsave(&hpdev->hbus->config_lock, flags);
++
++ /* Choose the function to be read. (See comment above) */
++ writel(hpdev->desc.win_slot.slot, hpdev->hbus->cfg_addr);
++ /* Make sure the function was chosen before we start reading. */
++ mb();
++ /* Read from that function's config space. */
++ ret = readw(addr);
++ /*
++ * mb() is not required here, because the spin_unlock_irqrestore()
++ * is a barrier.
++ */
++
++ spin_unlock_irqrestore(&hpdev->hbus->config_lock, flags);
++
++ return ret;
++}
++
+ /**
+ * _hv_pcifront_write_config() - Internal PCI config write
+ * @hpdev: The PCI driver's representation of the device
+@@ -1116,8 +1144,37 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
+ * Since this function is called with IRQ locks held, can't
+ * do normal wait for completion; instead poll.
+ */
+- while (!try_wait_for_completion(&comp.comp_pkt.host_event))
++ while (!try_wait_for_completion(&comp.comp_pkt.host_event)) {
++ /* 0xFFFF means an invalid PCI VENDOR ID. */
++ if (hv_pcifront_get_vendor_id(hpdev) == 0xFFFF) {
++ dev_err_once(&hbus->hdev->device,
++ "the device has gone\n");
++ goto free_int_desc;
++ }
++
++ /*
++ * When the higher level interrupt code calls us with
++ * interrupt disabled, we must poll the channel by calling
++ * the channel callback directly when channel->target_cpu is
++ * the current CPU. When the higher level interrupt code
++ * calls us with interrupt enabled, let's add the
++ * local_bh_disable()/enable() to avoid race.
++ */
++ local_bh_disable();
++
++ if (hbus->hdev->channel->target_cpu == smp_processor_id())
++ hv_pci_onchannelcallback(hbus);
++
++ local_bh_enable();
++
++ if (hpdev->state == hv_pcichild_ejecting) {
++ dev_err_once(&hbus->hdev->device,
++ "the device is being ejected\n");
++ goto free_int_desc;
++ }
++
+ udelay(100);
++ }
+
+ if (comp.comp_pkt.completion_status < 0) {
+ dev_err(&hbus->hdev->device,
+@@ -1600,12 +1657,8 @@ static struct hv_pci_dev *get_pcichild_wslot(struct hv_pcibus_device *hbus,
+ * It must also treat the omission of a previously observed device as
+ * notification that the device no longer exists.
+ *
+- * Note that this function is a work item, and it may not be
+- * invoked in the order that it was queued. Back to back
+- * updates of the list of present devices may involve queuing
+- * multiple work items, and this one may run before ones that
+- * were sent later. As such, this function only does something
+- * if is the last one in the queue.
++ * Note that this function is serialized with hv_eject_device_work(),
++ * because both are pushed to the ordered workqueue hbus->wq.
+ */
+ static void pci_devices_present_work(struct work_struct *work)
+ {
+@@ -1626,11 +1679,6 @@ static void pci_devices_present_work(struct work_struct *work)
+
+ INIT_LIST_HEAD(&removed);
+
+- if (down_interruptible(&hbus->enum_sem)) {
+- put_hvpcibus(hbus);
+- return;
+- }
+-
+ /* Pull this off the queue and process it if it was the last one. */
+ spin_lock_irqsave(&hbus->device_list_lock, flags);
+ while (!list_empty(&hbus->dr_list)) {
+@@ -1647,7 +1695,6 @@ static void pci_devices_present_work(struct work_struct *work)
+ spin_unlock_irqrestore(&hbus->device_list_lock, flags);
+
+ if (!dr) {
+- up(&hbus->enum_sem);
+ put_hvpcibus(hbus);
+ return;
+ }
+@@ -1734,7 +1781,6 @@ static void pci_devices_present_work(struct work_struct *work)
+ break;
+ }
+
+- up(&hbus->enum_sem);
+ put_hvpcibus(hbus);
+ kfree(dr);
+ }
+@@ -1780,7 +1826,7 @@ static void hv_pci_devices_present(struct hv_pcibus_device *hbus,
+ spin_unlock_irqrestore(&hbus->device_list_lock, flags);
+
+ get_hvpcibus(hbus);
+- schedule_work(&dr_wrk->wrk);
++ queue_work(hbus->wq, &dr_wrk->wrk);
+ }
+
+ /**
+@@ -1858,7 +1904,7 @@ static void hv_pci_eject_device(struct hv_pci_dev *hpdev)
+ get_pcichild(hpdev, hv_pcidev_ref_pnp);
+ INIT_WORK(&hpdev->wrk, hv_eject_device_work);
+ get_hvpcibus(hpdev->hbus);
+- schedule_work(&hpdev->wrk);
++ queue_work(hpdev->hbus->wq, &hpdev->wrk);
+ }
+
+ /**
+@@ -2471,13 +2517,18 @@ static int hv_pci_probe(struct hv_device *hdev,
+ spin_lock_init(&hbus->config_lock);
+ spin_lock_init(&hbus->device_list_lock);
+ spin_lock_init(&hbus->retarget_msi_interrupt_lock);
+- sema_init(&hbus->enum_sem, 1);
+ init_completion(&hbus->remove_event);
++ hbus->wq = alloc_ordered_workqueue("hv_pci_%x", 0,
++ hbus->sysdata.domain);
++ if (!hbus->wq) {
++ ret = -ENOMEM;
++ goto free_bus;
++ }
+
+ ret = vmbus_open(hdev->channel, pci_ring_size, pci_ring_size, NULL, 0,
+ hv_pci_onchannelcallback, hbus);
+ if (ret)
+- goto free_bus;
++ goto destroy_wq;
+
+ hv_set_drvdata(hdev, hbus);
+
+@@ -2546,6 +2597,8 @@ static int hv_pci_probe(struct hv_device *hdev,
+ hv_free_config_window(hbus);
+ close:
+ vmbus_close(hdev->channel);
++destroy_wq:
++ destroy_workqueue(hbus->wq);
+ free_bus:
+ free_page((unsigned long)hbus);
+ return ret;
+@@ -2625,6 +2678,7 @@ static int hv_pci_remove(struct hv_device *hdev)
+ irq_domain_free_fwnode(hbus->sysdata.fwnode);
+ put_hvpcibus(hbus);
+ wait_for_completion(&hbus->remove_event);
++ destroy_workqueue(hbus->wq);
+ free_page((unsigned long)hbus);
+ return 0;
+ }
+diff --git a/drivers/s390/cio/qdio_main.c b/drivers/s390/cio/qdio_main.c
+index 95b0efe28afb..d4c63daf4479 100644
+--- a/drivers/s390/cio/qdio_main.c
++++ b/drivers/s390/cio/qdio_main.c
+@@ -127,7 +127,7 @@ static inline int qdio_check_ccq(struct qdio_q *q, unsigned int ccq)
+ static int qdio_do_eqbs(struct qdio_q *q, unsigned char *state,
+ int start, int count, int auto_ack)
+ {
+- int rc, tmp_count = count, tmp_start = start, nr = q->nr, retried = 0;
++ int rc, tmp_count = count, tmp_start = start, nr = q->nr;
+ unsigned int ccq = 0;
+
+ qperf_inc(q, eqbs);
+@@ -150,14 +150,7 @@ static int qdio_do_eqbs(struct qdio_q *q, unsigned char *state,
+ qperf_inc(q, eqbs_partial);
+ DBF_DEV_EVENT(DBF_WARN, q->irq_ptr, "EQBS part:%02x",
+ tmp_count);
+- /*
+- * Retry once, if that fails bail out and process the
+- * extracted buffers before trying again.
+- */
+- if (!retried++)
+- goto again;
+- else
+- return count - tmp_count;
++ return count - tmp_count;
+ }
+
+ DBF_ERROR("%4x EQBS ERROR", SCH_NO(q));
+@@ -213,7 +206,10 @@ static int qdio_do_sqbs(struct qdio_q *q, unsigned char state, int start,
+ return 0;
+ }
+
+-/* returns number of examined buffers and their common state in *state */
++/*
++ * Returns number of examined buffers and their common state in *state.
++ * Requested number of buffers-to-examine must be > 0.
++ */
+ static inline int get_buf_states(struct qdio_q *q, unsigned int bufnr,
+ unsigned char *state, unsigned int count,
+ int auto_ack, int merge_pending)
+@@ -224,17 +220,23 @@ static inline int get_buf_states(struct qdio_q *q, unsigned int bufnr,
+ if (is_qebsm(q))
+ return qdio_do_eqbs(q, state, bufnr, count, auto_ack);
+
+- for (i = 0; i < count; i++) {
+- if (!__state) {
+- __state = q->slsb.val[bufnr];
+- if (merge_pending && __state == SLSB_P_OUTPUT_PENDING)
+- __state = SLSB_P_OUTPUT_EMPTY;
+- } else if (merge_pending) {
+- if ((q->slsb.val[bufnr] & __state) != __state)
+- break;
+- } else if (q->slsb.val[bufnr] != __state)
+- break;
++ /* get initial state: */
++ __state = q->slsb.val[bufnr];
++ if (merge_pending && __state == SLSB_P_OUTPUT_PENDING)
++ __state = SLSB_P_OUTPUT_EMPTY;
++
++ for (i = 1; i < count; i++) {
+ bufnr = next_buf(bufnr);
++
++ /* merge PENDING into EMPTY: */
++ if (merge_pending &&
++ q->slsb.val[bufnr] == SLSB_P_OUTPUT_PENDING &&
++ __state == SLSB_P_OUTPUT_EMPTY)
++ continue;
++
++ /* stop if next state differs from initial state: */
++ if (q->slsb.val[bufnr] != __state)
++ break;
+ }
+ *state = __state;
+ return i;
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 1204c1d59bc4..a879c3f4e20c 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -466,9 +466,6 @@ static int qla2x00_alloc_queues(struct qla_hw_data *ha, struct req_que *req,
+
+ static void qla2x00_free_req_que(struct qla_hw_data *ha, struct req_que *req)
+ {
+- if (!ha->req_q_map)
+- return;
+-
+ if (IS_QLAFX00(ha)) {
+ if (req && req->ring_fx00)
+ dma_free_coherent(&ha->pdev->dev,
+@@ -479,17 +476,14 @@ static void qla2x00_free_req_que(struct qla_hw_data *ha, struct req_que *req)
+ (req->length + 1) * sizeof(request_t),
+ req->ring, req->dma);
+
+- if (req) {
++ if (req)
+ kfree(req->outstanding_cmds);
+- kfree(req);
+- }
++
++ kfree(req);
+ }
+
+ static void qla2x00_free_rsp_que(struct qla_hw_data *ha, struct rsp_que *rsp)
+ {
+- if (!ha->rsp_q_map)
+- return;
+-
+ if (IS_QLAFX00(ha)) {
+ if (rsp && rsp->ring)
+ dma_free_coherent(&ha->pdev->dev,
+@@ -500,8 +494,7 @@ static void qla2x00_free_rsp_que(struct qla_hw_data *ha, struct rsp_que *rsp)
+ (rsp->length + 1) * sizeof(response_t),
+ rsp->ring, rsp->dma);
+ }
+- if (rsp)
+- kfree(rsp);
++ kfree(rsp);
+ }
+
+ static void qla2x00_free_queues(struct qla_hw_data *ha)
+@@ -3083,7 +3076,8 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ goto probe_failed;
+
+ /* Alloc arrays of request and response ring ptrs */
+- if (qla2x00_alloc_queues(ha, req, rsp)) {
++ ret = qla2x00_alloc_queues(ha, req, rsp);
++ if (ret) {
+ ql_log(ql_log_fatal, base_vha, 0x003d,
+ "Failed to allocate memory for queue pointers..."
+ "aborting.\n");
+@@ -3384,8 +3378,15 @@ qla2x00_probe_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ }
+
+ qla2x00_free_device(base_vha);
+-
+ scsi_host_put(base_vha->host);
++ /*
++ * Need to NULL out local req/rsp after
++ * qla2x00_free_device => qla2x00_free_queues frees
++ * what these are pointing to. Or else we'll
++ * fall over below in qla2x00_free_req/rsp_que.
++ */
++ req = NULL;
++ rsp = NULL;
+
+ probe_hw_failed:
+ qla2x00_mem_free(ha);
+@@ -4078,6 +4079,7 @@ qla2x00_mem_alloc(struct qla_hw_data *ha, uint16_t req_len, uint16_t rsp_len,
+ (*rsp)->dma = 0;
+ fail_rsp_ring:
+ kfree(*rsp);
++ *rsp = NULL;
+ fail_rsp:
+ dma_free_coherent(&ha->pdev->dev, ((*req)->length + 1) *
+ sizeof(request_t), (*req)->ring, (*req)->dma);
+@@ -4085,6 +4087,7 @@ qla2x00_mem_alloc(struct qla_hw_data *ha, uint16_t req_len, uint16_t rsp_len,
+ (*req)->dma = 0;
+ fail_req_ring:
+ kfree(*req);
++ *req = NULL;
+ fail_req:
+ dma_free_coherent(&ha->pdev->dev, sizeof(struct ct_sns_pkt),
+ ha->ct_sns, ha->ct_sns_dma);
+@@ -4452,16 +4455,11 @@ qla2x00_mem_free(struct qla_hw_data *ha)
+ dma_free_coherent(&ha->pdev->dev, ha->init_cb_size,
+ ha->init_cb, ha->init_cb_dma);
+
+- if (ha->optrom_buffer)
+- vfree(ha->optrom_buffer);
+- if (ha->nvram)
+- kfree(ha->nvram);
+- if (ha->npiv_info)
+- kfree(ha->npiv_info);
+- if (ha->swl)
+- kfree(ha->swl);
+- if (ha->loop_id_map)
+- kfree(ha->loop_id_map);
++ vfree(ha->optrom_buffer);
++ kfree(ha->nvram);
++ kfree(ha->npiv_info);
++ kfree(ha->swl);
++ kfree(ha->loop_id_map);
+
+ ha->srb_mempool = NULL;
+ ha->ctx_mempool = NULL;
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index a5622a8364cb..31bdfd296ced 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -756,7 +756,7 @@ static int vhost_copy_to_user(struct vhost_virtqueue *vq, void __user *to,
+ struct iov_iter t;
+ void __user *uaddr = vhost_vq_meta_fetch(vq,
+ (u64)(uintptr_t)to, size,
+- VHOST_ADDR_DESC);
++ VHOST_ADDR_USED);
+
+ if (uaddr)
+ return __copy_to_user(uaddr, from, size);
+@@ -1256,10 +1256,12 @@ static int vq_log_access_ok(struct vhost_virtqueue *vq,
+ /* Caller should have vq mutex and device mutex */
+ int vhost_vq_access_ok(struct vhost_virtqueue *vq)
+ {
+- int ret = vq_log_access_ok(vq, vq->log_base);
++ if (!vq_log_access_ok(vq, vq->log_base))
++ return 0;
+
+- if (ret || vq->iotlb)
+- return ret;
++ /* Access validation occurs at prefetch time with IOTLB */
++ if (vq->iotlb)
++ return 1;
+
+ return vq_access_ok(vq, vq->num, vq->desc, vq->avail, vq->used);
+ }
+diff --git a/drivers/xen/xenbus/xenbus_dev_frontend.c b/drivers/xen/xenbus/xenbus_dev_frontend.c
+index f3b089b7c0b6..d2edbc79384a 100644
+--- a/drivers/xen/xenbus/xenbus_dev_frontend.c
++++ b/drivers/xen/xenbus/xenbus_dev_frontend.c
+@@ -365,7 +365,7 @@ void xenbus_dev_queue_reply(struct xb_req_data *req)
+ if (WARN_ON(rc))
+ goto out;
+ }
+- } else if (req->msg.type == XS_TRANSACTION_END) {
++ } else if (req->type == XS_TRANSACTION_END) {
+ trans = xenbus_get_transaction(u, req->msg.tx_id);
+ if (WARN_ON(!trans))
+ goto out;
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index d844dcb80570..3a48ea72704c 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -191,8 +191,9 @@ static void select_policy(struct f2fs_sb_info *sbi, int gc_type,
+ if (gc_type != FG_GC && p->max_search > sbi->max_victim_search)
+ p->max_search = sbi->max_victim_search;
+
+- /* let's select beginning hot/small space first */
+- if (type == CURSEG_HOT_DATA || IS_NODESEG(type))
++ /* let's select beginning hot/small space first in no_heap mode*/
++ if (test_opt(sbi, NOHEAP) &&
++ (type == CURSEG_HOT_DATA || IS_NODESEG(type)))
+ p->offset = 0;
+ else
+ p->offset = SIT_I(sbi)->last_victim[p->gc_mode];
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index c117e0913f2a..203543b61244 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -2155,7 +2155,8 @@ static unsigned int __get_next_segno(struct f2fs_sb_info *sbi, int type)
+ if (sbi->segs_per_sec != 1)
+ return CURSEG_I(sbi, type)->segno;
+
+- if (type == CURSEG_HOT_DATA || IS_NODESEG(type))
++ if (test_opt(sbi, NOHEAP) &&
++ (type == CURSEG_HOT_DATA || IS_NODESEG(type)))
+ return 0;
+
+ if (SIT_I(sbi)->last_victim[ALLOC_NEXT])
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index 8a5bde8b1444..e26a8c14fc6f 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -148,10 +148,14 @@ static int hugetlbfs_file_mmap(struct file *file, struct vm_area_struct *vma)
+
+ /*
+ * page based offset in vm_pgoff could be sufficiently large to
+- * overflow a (l)off_t when converted to byte offset.
++ * overflow a loff_t when converted to byte offset. This can
++ * only happen on architectures where sizeof(loff_t) ==
++ * sizeof(unsigned long). So, only check in those instances.
+ */
+- if (vma->vm_pgoff & PGOFF_LOFFT_MAX)
+- return -EINVAL;
++ if (sizeof(unsigned long) == sizeof(loff_t)) {
++ if (vma->vm_pgoff & PGOFF_LOFFT_MAX)
++ return -EINVAL;
++ }
+
+ /* must be huge page aligned */
+ if (vma->vm_pgoff & (~huge_page_mask(h) >> PAGE_SHIFT))
+diff --git a/fs/namei.c b/fs/namei.c
+index ee19c4ef24b2..747fcb6f10c5 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -222,9 +222,10 @@ getname_kernel(const char * filename)
+ if (len <= EMBEDDED_NAME_MAX) {
+ result->name = (char *)result->iname;
+ } else if (len <= PATH_MAX) {
++ const size_t size = offsetof(struct filename, iname[1]);
+ struct filename *tmp;
+
+- tmp = kmalloc(sizeof(*tmp), GFP_KERNEL);
++ tmp = kmalloc(size, GFP_KERNEL);
+ if (unlikely(!tmp)) {
+ __putname(result);
+ return ERR_PTR(-ENOMEM);
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index effeeb4f556f..1aad5db515c7 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -32,6 +32,7 @@
+ * NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
++#include <linux/fs_struct.h>
+ #include <linux/file.h>
+ #include <linux/falloc.h>
+ #include <linux/slab.h>
+@@ -252,11 +253,13 @@ do_open_lookup(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate, stru
+ * Note: create modes (UNCHECKED,GUARDED...) are the same
+ * in NFSv4 as in v3 except EXCLUSIVE4_1.
+ */
++ current->fs->umask = open->op_umask;
+ status = do_nfsd_create(rqstp, current_fh, open->op_fname.data,
+ open->op_fname.len, &open->op_iattr,
+ *resfh, open->op_createmode,
+ (u32 *)open->op_verf.data,
+ &open->op_truncate, &open->op_created);
++ current->fs->umask = 0;
+
+ if (!status && open->op_label.len)
+ nfsd4_security_inode_setsecctx(*resfh, &open->op_label, open->op_bmval);
+@@ -603,6 +606,7 @@ nfsd4_create(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ if (status)
+ return status;
+
++ current->fs->umask = create->cr_umask;
+ switch (create->cr_type) {
+ case NF4LNK:
+ status = nfsd_symlink(rqstp, &cstate->current_fh,
+@@ -611,20 +615,22 @@ nfsd4_create(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ break;
+
+ case NF4BLK:
++ status = nfserr_inval;
+ rdev = MKDEV(create->cr_specdata1, create->cr_specdata2);
+ if (MAJOR(rdev) != create->cr_specdata1 ||
+ MINOR(rdev) != create->cr_specdata2)
+- return nfserr_inval;
++ goto out_umask;
+ status = nfsd_create(rqstp, &cstate->current_fh,
+ create->cr_name, create->cr_namelen,
+ &create->cr_iattr, S_IFBLK, rdev, &resfh);
+ break;
+
+ case NF4CHR:
++ status = nfserr_inval;
+ rdev = MKDEV(create->cr_specdata1, create->cr_specdata2);
+ if (MAJOR(rdev) != create->cr_specdata1 ||
+ MINOR(rdev) != create->cr_specdata2)
+- return nfserr_inval;
++ goto out_umask;
+ status = nfsd_create(rqstp, &cstate->current_fh,
+ create->cr_name, create->cr_namelen,
+ &create->cr_iattr,S_IFCHR, rdev, &resfh);
+@@ -668,6 +674,8 @@ nfsd4_create(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ fh_dup2(&cstate->current_fh, &resfh);
+ out:
+ fh_put(&resfh);
++out_umask:
++ current->fs->umask = 0;
+ return status;
+ }
+
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 2c61c6b8ae09..df2b8849a63b 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -33,7 +33,6 @@
+ * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+-#include <linux/fs_struct.h>
+ #include <linux/file.h>
+ #include <linux/slab.h>
+ #include <linux/namei.h>
+@@ -683,7 +682,7 @@ nfsd4_decode_create(struct nfsd4_compoundargs *argp, struct nfsd4_create *create
+
+ status = nfsd4_decode_fattr(argp, create->cr_bmval, &create->cr_iattr,
+ &create->cr_acl, &create->cr_label,
+-					&current->fs->umask);
++ &create->cr_umask);
+ if (status)
+ goto out;
+
+@@ -928,7 +927,6 @@ nfsd4_decode_open(struct nfsd4_compoundargs *argp, struct nfsd4_open *open)
+ case NFS4_OPEN_NOCREATE:
+ break;
+ case NFS4_OPEN_CREATE:
+- current->fs->umask = 0;
+ READ_BUF(4);
+ open->op_createmode = be32_to_cpup(p++);
+ switch (open->op_createmode) {
+@@ -936,7 +934,7 @@ nfsd4_decode_open(struct nfsd4_compoundargs *argp, struct nfsd4_open *open)
+ case NFS4_CREATE_GUARDED:
+ status = nfsd4_decode_fattr(argp, open->op_bmval,
+ &open->op_iattr, &open->op_acl, &open->op_label,
+-				&current->fs->umask);
++ &open->op_umask);
+ if (status)
+ goto out;
+ break;
+@@ -951,7 +949,7 @@ nfsd4_decode_open(struct nfsd4_compoundargs *argp, struct nfsd4_open *open)
+ COPYMEM(open->op_verf.data, NFS4_VERIFIER_SIZE);
+ status = nfsd4_decode_fattr(argp, open->op_bmval,
+ &open->op_iattr, &open->op_acl, &open->op_label,
+-				&current->fs->umask);
++ &open->op_umask);
+ if (status)
+ goto out;
+ break;
+diff --git a/fs/nfsd/xdr4.h b/fs/nfsd/xdr4.h
+index bc29511b6405..f47c392cbd57 100644
+--- a/fs/nfsd/xdr4.h
++++ b/fs/nfsd/xdr4.h
+@@ -118,6 +118,7 @@ struct nfsd4_create {
+ } u;
+ u32 cr_bmval[3]; /* request */
+ struct iattr cr_iattr; /* request */
++ int cr_umask; /* request */
+ struct nfsd4_change_info cr_cinfo; /* response */
+ struct nfs4_acl *cr_acl;
+ struct xdr_netobj cr_label;
+@@ -228,6 +229,7 @@ struct nfsd4_open {
+ u32 op_why_no_deleg; /* response - DELEG_NONE_EXT only */
+ u32 op_create; /* request */
+ u32 op_createmode; /* request */
++ int op_umask; /* request */
+ u32 op_bmval[3]; /* request */
+ struct iattr op_iattr; /* UNCHECKED4, GUARDED4, EXCLUSIVE4_1 */
+ nfs4_verifier op_verf __attribute__((aligned(32)));
+diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
+index 94d2f8a8b779..0dbbfedef54c 100644
+--- a/fs/overlayfs/inode.c
++++ b/fs/overlayfs/inode.c
+@@ -110,13 +110,10 @@ int ovl_getattr(const struct path *path, struct kstat *stat,
+ * that the upper hardlink is not broken.
+ */
+ if (is_dir || lowerstat.nlink == 1 ||
+- ovl_test_flag(OVL_INDEX, d_inode(dentry)))
++ ovl_test_flag(OVL_INDEX, d_inode(dentry))) {
+ stat->ino = lowerstat.ino;
+-
+- if (samefs)
+- WARN_ON_ONCE(stat->dev != lowerstat.dev);
+- else
+ stat->dev = ovl_get_pseudo_dev(dentry);
++ }
+ }
+ if (samefs) {
+ /*
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index ef3e7ea76296..dc917119d8a9 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -55,6 +55,15 @@ static int ovl_check_redirect(struct dentry *dentry, struct ovl_lookup_data *d,
+ if (s == next)
+ goto invalid;
+ }
++ /*
++ * One of the ancestor path elements in an absolute path
++ * lookup in ovl_lookup_layer() could have been opaque and
++ * that will stop further lookup in lower layers (d->stop=true)
++ * But we have found an absolute redirect in decendant path
++ * element and that should force continue lookup in lower
++ * layers (reset d->stop).
++ */
++ d->stop = false;
+ } else {
+ if (strchr(buf, '/') != NULL)
+ goto invalid;
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 95ccc1eef558..b619a190ff12 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -895,7 +895,7 @@ struct hci_conn *hci_connect_le_scan(struct hci_dev *hdev, bdaddr_t *dst,
+ u16 conn_timeout);
+ struct hci_conn *hci_connect_le(struct hci_dev *hdev, bdaddr_t *dst,
+ u8 dst_type, u8 sec_level, u16 conn_timeout,
+- u8 role);
++ u8 role, bdaddr_t *direct_rpa);
+ struct hci_conn *hci_connect_acl(struct hci_dev *hdev, bdaddr_t *dst,
+ u8 sec_level, u8 auth_type);
+ struct hci_conn *hci_connect_sco(struct hci_dev *hdev, int type, bdaddr_t *dst,
+diff --git a/include/net/slhc_vj.h b/include/net/slhc_vj.h
+index 8716d5942b65..8fcf8908a694 100644
+--- a/include/net/slhc_vj.h
++++ b/include/net/slhc_vj.h
+@@ -127,6 +127,7 @@ typedef __u32 int32;
+ */
+ struct cstate {
+ byte_t cs_this; /* connection id number (xmit) */
++ bool initialized; /* true if initialized */
+ struct cstate *next; /* next in ring (xmit) */
+ struct iphdr cs_ip; /* ip/tcp hdr from most recent packet */
+ struct tcphdr cs_tcp;
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 385480a5aa45..e20da29c4a9b 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -4112,6 +4112,9 @@ static void _free_event(struct perf_event *event)
+ if (event->ctx)
+ put_ctx(event->ctx);
+
++ if (event->hw.target)
++ put_task_struct(event->hw.target);
++
+ exclusive_event_destroy(event);
+ module_put(event->pmu->module);
+
+@@ -9456,6 +9459,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ * and we cannot use the ctx information because we need the
+ * pmu before we get a ctx.
+ */
++ get_task_struct(task);
+ event->hw.target = task;
+ }
+
+@@ -9571,6 +9575,8 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ perf_detach_cgroup(event);
+ if (event->ns)
+ put_pid_ns(event->ns);
++ if (event->hw.target)
++ put_task_struct(event->hw.target);
+ kfree(event);
+
+ return ERR_PTR(err);
+diff --git a/lib/bitmap.c b/lib/bitmap.c
+index d8f0c094b18e..96017c066319 100644
+--- a/lib/bitmap.c
++++ b/lib/bitmap.c
+@@ -607,7 +607,7 @@ static int __bitmap_parselist(const char *buf, unsigned int buflen,
+ /* if no digit is after '-', it's wrong*/
+ if (at_start && in_range)
+ return -EINVAL;
+- if (!(a <= b) || !(used_size <= group_size))
++ if (!(a <= b) || group_size == 0 || !(used_size <= group_size))
+ return -EINVAL;
+ if (b >= nmaskbits)
+ return -ERANGE;
+diff --git a/lib/test_bitmap.c b/lib/test_bitmap.c
+index aa1f2669bdd5..0ddf293cfac3 100644
+--- a/lib/test_bitmap.c
++++ b/lib/test_bitmap.c
+@@ -218,6 +218,10 @@ static const struct test_bitmap_parselist parselist_tests[] __initconst = {
+ {-EINVAL, "-1", NULL, 8, 0},
+ {-EINVAL, "-0", NULL, 8, 0},
+ {-EINVAL, "10-1", NULL, 8, 0},
++ {-EINVAL, "0-31:", NULL, 8, 0},
++ {-EINVAL, "0-31:0", NULL, 8, 0},
++ {-EINVAL, "0-31:0/0", NULL, 8, 0},
++ {-EINVAL, "0-31:1/0", NULL, 8, 0},
+ {-EINVAL, "0-31:10/1", NULL, 8, 0},
+ };
+
+diff --git a/mm/gup.c b/mm/gup.c
+index e0d82b6706d7..8fc23a60487d 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -1816,9 +1816,12 @@ int get_user_pages_fast(unsigned long start, int nr_pages, int write,
+ len = (unsigned long) nr_pages << PAGE_SHIFT;
+ end = start + len;
+
++ if (nr_pages <= 0)
++ return 0;
++
+ if (unlikely(!access_ok(write ? VERIFY_WRITE : VERIFY_READ,
+ (void __user *)start, len)))
+- return 0;
++ return -EFAULT;
+
+ if (gup_fast_permitted(start, nr_pages, write)) {
+ local_irq_disable();
+diff --git a/mm/gup_benchmark.c b/mm/gup_benchmark.c
+index 5c8e2abeaa15..0f44759486e2 100644
+--- a/mm/gup_benchmark.c
++++ b/mm/gup_benchmark.c
+@@ -23,7 +23,7 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
+ struct page **pages;
+
+ nr_pages = gup->size / PAGE_SIZE;
+- pages = kvmalloc(sizeof(void *) * nr_pages, GFP_KERNEL);
++ pages = kvzalloc(sizeof(void *) * nr_pages, GFP_KERNEL);
+ if (!pages)
+ return -ENOMEM;
+
+@@ -41,6 +41,8 @@ static int __gup_benchmark_ioctl(unsigned int cmd,
+ }
+
+ nr = get_user_pages_fast(addr, nr, gup->flags & 1, pages + i);
++ if (nr <= 0)
++ break;
+ i += nr;
+ }
+ end_time = ktime_get();
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index a9682534c377..45ff5dc124cc 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -749,18 +749,31 @@ static bool conn_use_rpa(struct hci_conn *conn)
+ }
+
+ static void hci_req_add_le_create_conn(struct hci_request *req,
+- struct hci_conn *conn)
++ struct hci_conn *conn,
++ bdaddr_t *direct_rpa)
+ {
+ struct hci_cp_le_create_conn cp;
+ struct hci_dev *hdev = conn->hdev;
+ u8 own_addr_type;
+
+- /* Update random address, but set require_privacy to false so
+- * that we never connect with an non-resolvable address.
++ /* If direct address was provided we use it instead of current
++ * address.
+ */
+- if (hci_update_random_address(req, false, conn_use_rpa(conn),
+- &own_addr_type))
+- return;
++ if (direct_rpa) {
++ if (bacmp(&req->hdev->random_addr, direct_rpa))
++ hci_req_add(req, HCI_OP_LE_SET_RANDOM_ADDR, 6,
++ direct_rpa);
++
++ /* direct address is always RPA */
++ own_addr_type = ADDR_LE_DEV_RANDOM;
++ } else {
++ /* Update random address, but set require_privacy to false so
++ * that we never connect with an non-resolvable address.
++ */
++ if (hci_update_random_address(req, false, conn_use_rpa(conn),
++ &own_addr_type))
++ return;
++ }
+
+ memset(&cp, 0, sizeof(cp));
+
+@@ -825,7 +838,7 @@ static void hci_req_directed_advertising(struct hci_request *req,
+
+ struct hci_conn *hci_connect_le(struct hci_dev *hdev, bdaddr_t *dst,
+ u8 dst_type, u8 sec_level, u16 conn_timeout,
+- u8 role)
++ u8 role, bdaddr_t *direct_rpa)
+ {
+ struct hci_conn_params *params;
+ struct hci_conn *conn;
+@@ -940,7 +953,7 @@ struct hci_conn *hci_connect_le(struct hci_dev *hdev, bdaddr_t *dst,
+ hci_dev_set_flag(hdev, HCI_LE_SCAN_INTERRUPTED);
+ }
+
+- hci_req_add_le_create_conn(&req, conn);
++ hci_req_add_le_create_conn(&req, conn, direct_rpa);
+
+ create_conn:
+ err = hci_req_run(&req, create_le_conn_complete);
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index cd3bbb766c24..139707cd9d35 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -4648,7 +4648,8 @@ static void hci_le_conn_update_complete_evt(struct hci_dev *hdev,
+ /* This function requires the caller holds hdev->lock */
+ static struct hci_conn *check_pending_le_conn(struct hci_dev *hdev,
+ bdaddr_t *addr,
+- u8 addr_type, u8 adv_type)
++ u8 addr_type, u8 adv_type,
++ bdaddr_t *direct_rpa)
+ {
+ struct hci_conn *conn;
+ struct hci_conn_params *params;
+@@ -4699,7 +4700,8 @@ static struct hci_conn *check_pending_le_conn(struct hci_dev *hdev,
+ }
+
+ conn = hci_connect_le(hdev, addr, addr_type, BT_SECURITY_LOW,
+- HCI_LE_AUTOCONN_TIMEOUT, HCI_ROLE_MASTER);
++ HCI_LE_AUTOCONN_TIMEOUT, HCI_ROLE_MASTER,
++ direct_rpa);
+ if (!IS_ERR(conn)) {
+ /* If HCI_AUTO_CONN_EXPLICIT is set, conn is already owned
+ * by higher layer that tried to connect, if no then
+@@ -4808,8 +4810,13 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ bdaddr_type = irk->addr_type;
+ }
+
+- /* Check if we have been requested to connect to this device */
+- conn = check_pending_le_conn(hdev, bdaddr, bdaddr_type, type);
++ /* Check if we have been requested to connect to this device.
++ *
++ * direct_addr is set only for directed advertising reports (it is NULL
++ * for advertising reports) and is already verified to be RPA above.
++ */
++ conn = check_pending_le_conn(hdev, bdaddr, bdaddr_type, type,
++ direct_addr);
+ if (conn && type == LE_ADV_IND) {
+ /* Store report for later inclusion by
+ * mgmt_device_connected
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index fc6615d59165..9b7907ebfa01 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -7156,7 +7156,7 @@ int l2cap_chan_connect(struct l2cap_chan *chan, __le16 psm, u16 cid,
+ hcon = hci_connect_le(hdev, dst, dst_type,
+ chan->sec_level,
+ HCI_LE_CONN_TIMEOUT,
+- HCI_ROLE_SLAVE);
++ HCI_ROLE_SLAVE, NULL);
+ else
+ hcon = hci_connect_le_scan(hdev, dst, dst_type,
+ chan->sec_level,
+diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c
+index 01bbbfe2c2a7..3686434d134b 100644
+--- a/net/ipv4/ip_gre.c
++++ b/net/ipv4/ip_gre.c
+@@ -796,8 +796,14 @@ static void ipgre_link_update(struct net_device *dev, bool set_mtu)
+ tunnel->encap.type == TUNNEL_ENCAP_NONE) {
+ dev->features |= NETIF_F_GSO_SOFTWARE;
+ dev->hw_features |= NETIF_F_GSO_SOFTWARE;
++ } else {
++ dev->features &= ~NETIF_F_GSO_SOFTWARE;
++ dev->hw_features &= ~NETIF_F_GSO_SOFTWARE;
+ }
+ dev->features |= NETIF_F_LLTX;
++ } else {
++ dev->hw_features &= ~NETIF_F_GSO_SOFTWARE;
++ dev->features &= ~(NETIF_F_LLTX | NETIF_F_GSO_SOFTWARE);
+ }
+ }
+
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 083fa2ffee15..bf525f7e1e72 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -335,26 +335,6 @@ int l2tp_session_register(struct l2tp_session *session,
+ }
+ EXPORT_SYMBOL_GPL(l2tp_session_register);
+
+-/* Lookup a tunnel by id
+- */
+-struct l2tp_tunnel *l2tp_tunnel_find(const struct net *net, u32 tunnel_id)
+-{
+- struct l2tp_tunnel *tunnel;
+- struct l2tp_net *pn = l2tp_pernet(net);
+-
+- rcu_read_lock_bh();
+- list_for_each_entry_rcu(tunnel, &pn->l2tp_tunnel_list, list) {
+- if (tunnel->tunnel_id == tunnel_id) {
+- rcu_read_unlock_bh();
+- return tunnel;
+- }
+- }
+- rcu_read_unlock_bh();
+-
+- return NULL;
+-}
+-EXPORT_SYMBOL_GPL(l2tp_tunnel_find);
+-
+ struct l2tp_tunnel *l2tp_tunnel_find_nth(const struct net *net, int nth)
+ {
+ struct l2tp_net *pn = l2tp_pernet(net);
+@@ -1445,74 +1425,11 @@ int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id, u32
+ {
+ struct l2tp_tunnel *tunnel = NULL;
+ int err;
+- struct socket *sock = NULL;
+- struct sock *sk = NULL;
+- struct l2tp_net *pn;
+ enum l2tp_encap_type encap = L2TP_ENCAPTYPE_UDP;
+
+- /* Get the tunnel socket from the fd, which was opened by
+- * the userspace L2TP daemon. If not specified, create a
+- * kernel socket.
+- */
+- if (fd < 0) {
+- err = l2tp_tunnel_sock_create(net, tunnel_id, peer_tunnel_id,
+- cfg, &sock);
+- if (err < 0)
+- goto err;
+- } else {
+- sock = sockfd_lookup(fd, &err);
+- if (!sock) {
+- pr_err("tunl %u: sockfd_lookup(fd=%d) returned %d\n",
+- tunnel_id, fd, err);
+- err = -EBADF;
+- goto err;
+- }
+-
+- /* Reject namespace mismatches */
+- if (!net_eq(sock_net(sock->sk), net)) {
+- pr_err("tunl %u: netns mismatch\n", tunnel_id);
+- err = -EINVAL;
+- goto err;
+- }
+- }
+-
+- sk = sock->sk;
+-
+ if (cfg != NULL)
+ encap = cfg->encap;
+
+- /* Quick sanity checks */
+- err = -EPROTONOSUPPORT;
+- if (sk->sk_type != SOCK_DGRAM) {
+- pr_debug("tunl %hu: fd %d wrong socket type\n",
+- tunnel_id, fd);
+- goto err;
+- }
+- switch (encap) {
+- case L2TP_ENCAPTYPE_UDP:
+- if (sk->sk_protocol != IPPROTO_UDP) {
+- pr_err("tunl %hu: fd %d wrong protocol, got %d, expected %d\n",
+- tunnel_id, fd, sk->sk_protocol, IPPROTO_UDP);
+- goto err;
+- }
+- break;
+- case L2TP_ENCAPTYPE_IP:
+- if (sk->sk_protocol != IPPROTO_L2TP) {
+- pr_err("tunl %hu: fd %d wrong protocol, got %d, expected %d\n",
+- tunnel_id, fd, sk->sk_protocol, IPPROTO_L2TP);
+- goto err;
+- }
+- break;
+- }
+-
+- /* Check if this socket has already been prepped */
+- tunnel = l2tp_tunnel(sk);
+- if (tunnel != NULL) {
+- /* This socket has already been prepped */
+- err = -EBUSY;
+- goto err;
+- }
+-
+ tunnel = kzalloc(sizeof(struct l2tp_tunnel), GFP_KERNEL);
+ if (tunnel == NULL) {
+ err = -ENOMEM;
+@@ -1529,72 +1446,126 @@ int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id, u32
+ rwlock_init(&tunnel->hlist_lock);
+ tunnel->acpt_newsess = true;
+
+- /* The net we belong to */
+- tunnel->l2tp_net = net;
+- pn = l2tp_pernet(net);
+-
+ if (cfg != NULL)
+ tunnel->debug = cfg->debug;
+
+- /* Mark socket as an encapsulation socket. See net/ipv4/udp.c */
+ tunnel->encap = encap;
+- if (encap == L2TP_ENCAPTYPE_UDP) {
+- struct udp_tunnel_sock_cfg udp_cfg = { };
+-
+- udp_cfg.sk_user_data = tunnel;
+- udp_cfg.encap_type = UDP_ENCAP_L2TPINUDP;
+- udp_cfg.encap_rcv = l2tp_udp_encap_recv;
+- udp_cfg.encap_destroy = l2tp_udp_encap_destroy;
+-
+- setup_udp_tunnel_sock(net, sock, &udp_cfg);
+- } else {
+- sk->sk_user_data = tunnel;
+- }
+
+- /* Bump the reference count. The tunnel context is deleted
+- * only when this drops to zero. A reference is also held on
+- * the tunnel socket to ensure that it is not released while
+- * the tunnel is extant. Must be done before sk_destruct is
+- * set.
+- */
+ refcount_set(&tunnel->ref_count, 1);
+- sock_hold(sk);
+- tunnel->sock = sk;
+ tunnel->fd = fd;
+
+- /* Hook on the tunnel socket destructor so that we can cleanup
+- * if the tunnel socket goes away.
+- */
+- tunnel->old_sk_destruct = sk->sk_destruct;
+- sk->sk_destruct = &l2tp_tunnel_destruct;
+- lockdep_set_class_and_name(&sk->sk_lock.slock, &l2tp_socket_class, "l2tp_sock");
+-
+- sk->sk_allocation = GFP_ATOMIC;
+-
+ /* Init delete workqueue struct */
+ INIT_WORK(&tunnel->del_work, l2tp_tunnel_del_work);
+
+- /* Add tunnel to our list */
+ INIT_LIST_HEAD(&tunnel->list);
+- spin_lock_bh(&pn->l2tp_tunnel_list_lock);
+- list_add_rcu(&tunnel->list, &pn->l2tp_tunnel_list);
+- spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
+
+ err = 0;
+ err:
+ if (tunnelp)
+ *tunnelp = tunnel;
+
+- /* If tunnel's socket was created by the kernel, it doesn't
+- * have a file.
+- */
+- if (sock && sock->file)
+- sockfd_put(sock);
+-
+ return err;
+ }
+ EXPORT_SYMBOL_GPL(l2tp_tunnel_create);
+
++static int l2tp_validate_socket(const struct sock *sk, const struct net *net,
++ enum l2tp_encap_type encap)
++{
++ if (!net_eq(sock_net(sk), net))
++ return -EINVAL;
++
++ if (sk->sk_type != SOCK_DGRAM)
++ return -EPROTONOSUPPORT;
++
++ if ((encap == L2TP_ENCAPTYPE_UDP && sk->sk_protocol != IPPROTO_UDP) ||
++ (encap == L2TP_ENCAPTYPE_IP && sk->sk_protocol != IPPROTO_L2TP))
++ return -EPROTONOSUPPORT;
++
++ if (sk->sk_user_data)
++ return -EBUSY;
++
++ return 0;
++}
++
++int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
++ struct l2tp_tunnel_cfg *cfg)
++{
++ struct l2tp_tunnel *tunnel_walk;
++ struct l2tp_net *pn;
++ struct socket *sock;
++ struct sock *sk;
++ int ret;
++
++ if (tunnel->fd < 0) {
++ ret = l2tp_tunnel_sock_create(net, tunnel->tunnel_id,
++ tunnel->peer_tunnel_id, cfg,
++ &sock);
++ if (ret < 0)
++ goto err;
++ } else {
++ sock = sockfd_lookup(tunnel->fd, &ret);
++ if (!sock)
++ goto err;
++
++ ret = l2tp_validate_socket(sock->sk, net, tunnel->encap);
++ if (ret < 0)
++ goto err_sock;
++ }
++
++ sk = sock->sk;
++
++ sock_hold(sk);
++ tunnel->sock = sk;
++ tunnel->l2tp_net = net;
++
++ pn = l2tp_pernet(net);
++
++ spin_lock_bh(&pn->l2tp_tunnel_list_lock);
++ list_for_each_entry(tunnel_walk, &pn->l2tp_tunnel_list, list) {
++ if (tunnel_walk->tunnel_id == tunnel->tunnel_id) {
++ spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
++
++ ret = -EEXIST;
++ goto err_sock;
++ }
++ }
++ list_add_rcu(&tunnel->list, &pn->l2tp_tunnel_list);
++ spin_unlock_bh(&pn->l2tp_tunnel_list_lock);
++
++ if (tunnel->encap == L2TP_ENCAPTYPE_UDP) {
++ struct udp_tunnel_sock_cfg udp_cfg = {
++ .sk_user_data = tunnel,
++ .encap_type = UDP_ENCAP_L2TPINUDP,
++ .encap_rcv = l2tp_udp_encap_recv,
++ .encap_destroy = l2tp_udp_encap_destroy,
++ };
++
++ setup_udp_tunnel_sock(net, sock, &udp_cfg);
++ } else {
++ sk->sk_user_data = tunnel;
++ }
++
++ tunnel->old_sk_destruct = sk->sk_destruct;
++ sk->sk_destruct = &l2tp_tunnel_destruct;
++ lockdep_set_class_and_name(&sk->sk_lock.slock, &l2tp_socket_class,
++ "l2tp_sock");
++ sk->sk_allocation = GFP_ATOMIC;
++
++ if (tunnel->fd >= 0)
++ sockfd_put(sock);
++
++ return 0;
++
++err_sock:
++ if (tunnel->fd < 0)
++ sock_release(sock);
++ else
++ sockfd_put(sock);
++err:
++ return ret;
++}
++EXPORT_SYMBOL_GPL(l2tp_tunnel_register);
++
+ /* This function is used by the netlink TUNNEL_DELETE command.
+ */
+ void l2tp_tunnel_delete(struct l2tp_tunnel *tunnel)
+diff --git a/net/l2tp/l2tp_core.h b/net/l2tp/l2tp_core.h
+index 3fddfb207113..13e50ac774db 100644
+--- a/net/l2tp/l2tp_core.h
++++ b/net/l2tp/l2tp_core.h
+@@ -225,12 +225,14 @@ struct l2tp_session *l2tp_session_get(const struct net *net,
+ struct l2tp_session *l2tp_session_get_nth(struct l2tp_tunnel *tunnel, int nth);
+ struct l2tp_session *l2tp_session_get_by_ifname(const struct net *net,
+ const char *ifname);
+-struct l2tp_tunnel *l2tp_tunnel_find(const struct net *net, u32 tunnel_id);
+ struct l2tp_tunnel *l2tp_tunnel_find_nth(const struct net *net, int nth);
+
+ int l2tp_tunnel_create(struct net *net, int fd, int version, u32 tunnel_id,
+ u32 peer_tunnel_id, struct l2tp_tunnel_cfg *cfg,
+ struct l2tp_tunnel **tunnelp);
++int l2tp_tunnel_register(struct l2tp_tunnel *tunnel, struct net *net,
++ struct l2tp_tunnel_cfg *cfg);
++
+ void l2tp_tunnel_closeall(struct l2tp_tunnel *tunnel);
+ void l2tp_tunnel_delete(struct l2tp_tunnel *tunnel);
+ struct l2tp_session *l2tp_session_create(int priv_size,
+diff --git a/net/l2tp/l2tp_netlink.c b/net/l2tp/l2tp_netlink.c
+index 7e9c50125556..c56cb2c17d88 100644
+--- a/net/l2tp/l2tp_netlink.c
++++ b/net/l2tp/l2tp_netlink.c
+@@ -236,12 +236,6 @@ static int l2tp_nl_cmd_tunnel_create(struct sk_buff *skb, struct genl_info *info
+ if (info->attrs[L2TP_ATTR_DEBUG])
+ cfg.debug = nla_get_u32(info->attrs[L2TP_ATTR_DEBUG]);
+
+- tunnel = l2tp_tunnel_find(net, tunnel_id);
+- if (tunnel != NULL) {
+- ret = -EEXIST;
+- goto out;
+- }
+-
+ ret = -EINVAL;
+ switch (cfg.encap) {
+ case L2TP_ENCAPTYPE_UDP:
+@@ -251,9 +245,19 @@ static int l2tp_nl_cmd_tunnel_create(struct sk_buff *skb, struct genl_info *info
+ break;
+ }
+
+- if (ret >= 0)
+- ret = l2tp_tunnel_notify(&l2tp_nl_family, info,
+- tunnel, L2TP_CMD_TUNNEL_CREATE);
++ if (ret < 0)
++ goto out;
++
++ l2tp_tunnel_inc_refcount(tunnel);
++ ret = l2tp_tunnel_register(tunnel, net, &cfg);
++ if (ret < 0) {
++ kfree(tunnel);
++ goto out;
++ }
++ ret = l2tp_tunnel_notify(&l2tp_nl_family, info, tunnel,
++ L2TP_CMD_TUNNEL_CREATE);
++ l2tp_tunnel_dec_refcount(tunnel);
++
+ out:
+ return ret;
+ }
+diff --git a/net/l2tp/l2tp_ppp.c b/net/l2tp/l2tp_ppp.c
+index 5ea718609fe8..92ff5bb4e3d5 100644
+--- a/net/l2tp/l2tp_ppp.c
++++ b/net/l2tp/l2tp_ppp.c
+@@ -698,6 +698,15 @@ static int pppol2tp_connect(struct socket *sock, struct sockaddr *uservaddr,
+ error = l2tp_tunnel_create(sock_net(sk), fd, ver, tunnel_id, peer_tunnel_id, &tcfg, &tunnel);
+ if (error < 0)
+ goto end;
++
++ l2tp_tunnel_inc_refcount(tunnel);
++ error = l2tp_tunnel_register(tunnel, sock_net(sk),
++ &tcfg);
++ if (error < 0) {
++ kfree(tunnel);
++ goto end;
++ }
++ drop_tunnel = true;
+ }
+ } else {
+ /* Error if we can't find the tunnel */
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index cf84f7b37cd9..9d2ce1459cec 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -2055,6 +2055,7 @@ ip_set_net_exit(struct net *net)
+
+ inst->is_deleted = true; /* flag for ip_set_nfnl_put */
+
++ nfnl_lock(NFNL_SUBSYS_IPSET);
+ for (i = 0; i < inst->ip_set_max; i++) {
+ set = ip_set(inst, i);
+ if (set) {
+@@ -2062,6 +2063,7 @@ ip_set_net_exit(struct net *net)
+ ip_set_destroy_set(set);
+ }
+ }
++ nfnl_unlock(NFNL_SUBSYS_IPSET);
+ kfree(rcu_dereference_protected(inst->ip_set_list, 1));
+ }
+
+diff --git a/net/rds/send.c b/net/rds/send.c
+index f72466c63f0c..23f2d81e7967 100644
+--- a/net/rds/send.c
++++ b/net/rds/send.c
+@@ -1,5 +1,5 @@
+ /*
+- * Copyright (c) 2006 Oracle. All rights reserved.
++ * Copyright (c) 2006, 2018 Oracle and/or its affiliates. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+@@ -986,10 +986,15 @@ static int rds_send_mprds_hash(struct rds_sock *rs, struct rds_connection *conn)
+ if (conn->c_npaths == 0 && hash != 0) {
+ rds_send_ping(conn, 0);
+
+- if (conn->c_npaths == 0) {
+- wait_event_interruptible(conn->c_hs_waitq,
+- (conn->c_npaths != 0));
+- }
++ /* The underlying connection is not up yet. Need to wait
++ * until it is up to be sure that the non-zero c_path can be
++ * used. But if we are interrupted, we have to use the zero
++ * c_path in case the connection ends up being non-MP capable.
++ */
++ if (conn->c_npaths == 0)
++ if (wait_event_interruptible(conn->c_hs_waitq,
++ conn->c_npaths != 0))
++ hash = 0;
+ if (conn->c_npaths == 1)
+ hash = 0;
+ }
+diff --git a/net/sunrpc/auth_gss/gss_krb5_crypto.c b/net/sunrpc/auth_gss/gss_krb5_crypto.c
+index 12649c9fedab..8654494b4d0a 100644
+--- a/net/sunrpc/auth_gss/gss_krb5_crypto.c
++++ b/net/sunrpc/auth_gss/gss_krb5_crypto.c
+@@ -237,9 +237,6 @@ make_checksum_hmac_md5(struct krb5_ctx *kctx, char *header, int hdrlen,
+
+ ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP, NULL, NULL);
+
+- err = crypto_ahash_init(req);
+- if (err)
+- goto out;
+ err = crypto_ahash_setkey(hmac_md5, cksumkey, kctx->gk5e->keylength);
+ if (err)
+ goto out;
+diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
+index d4fa04d91439..a23b0ca19fd0 100644
+--- a/security/apparmor/apparmorfs.c
++++ b/security/apparmor/apparmorfs.c
+@@ -1189,9 +1189,7 @@ static int seq_ns_level_show(struct seq_file *seq, void *v)
+ static int seq_ns_name_show(struct seq_file *seq, void *v)
+ {
+ struct aa_label *label = begin_current_label_crit_section();
+-
+- seq_printf(seq, "%s\n", aa_ns_name(labels_ns(label),
+- labels_ns(label), true));
++ seq_printf(seq, "%s\n", labels_ns(label)->base.name);
+ end_current_label_crit_section(label);
+
+ return 0;
+diff --git a/security/apparmor/include/audit.h b/security/apparmor/include/audit.h
+index 4ac095118717..2ebc00a579fd 100644
+--- a/security/apparmor/include/audit.h
++++ b/security/apparmor/include/audit.h
+@@ -126,6 +126,10 @@ struct apparmor_audit_data {
+ const char *target;
+ kuid_t ouid;
+ } fs;
++ struct {
++ int rlim;
++ unsigned long max;
++ } rlim;
+ int signal;
+ };
+ };
+@@ -134,10 +138,6 @@ struct apparmor_audit_data {
+ const char *ns;
+ long pos;
+ } iface;
+- struct {
+- int rlim;
+- unsigned long max;
+- } rlim;
+ struct {
+ const char *src_name;
+ const char *type;
+diff --git a/security/apparmor/include/sig_names.h b/security/apparmor/include/sig_names.h
+index 92e62fe95292..5ca47c50dfa7 100644
+--- a/security/apparmor/include/sig_names.h
++++ b/security/apparmor/include/sig_names.h
+@@ -2,6 +2,8 @@
+
+ #define SIGUNKNOWN 0
+ #define MAXMAPPED_SIG 35
++#define MAXMAPPED_SIGNAME (MAXMAPPED_SIG + 1)
++
+ /* provide a mapping of arch signal to internal signal # for mediation
+ * those that are always an alias SIGCLD for SIGCLHD and SIGPOLL for SIGIO
+ * map to the same entry those that may/or may not get a separate entry
+@@ -56,7 +58,7 @@ static const int sig_map[MAXMAPPED_SIG] = {
+ };
+
+ /* this table is ordered post sig_map[sig] mapping */
+-static const char *const sig_names[MAXMAPPED_SIG + 1] = {
++static const char *const sig_names[MAXMAPPED_SIGNAME] = {
+ "unknown",
+ "hup",
+ "int",
+diff --git a/security/apparmor/ipc.c b/security/apparmor/ipc.c
+index b40678f3c1d5..586facd35f7c 100644
+--- a/security/apparmor/ipc.c
++++ b/security/apparmor/ipc.c
+@@ -174,7 +174,7 @@ static void audit_signal_cb(struct audit_buffer *ab, void *va)
+ audit_signal_mask(ab, aad(sa)->denied);
+ }
+ }
+- if (aad(sa)->signal < MAXMAPPED_SIG)
++ if (aad(sa)->signal < MAXMAPPED_SIGNAME)
+ audit_log_format(ab, " signal=%s", sig_names[aad(sa)->signal]);
+ else
+ audit_log_format(ab, " signal=rtmin+%d",