From mboxrd@z Thu Jan 1 00:00:00 1970 Return-Path: Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) by finch.gentoo.org (Postfix) with ESMTP id 310F859CB4 for ; Wed, 20 Apr 2016 11:21:58 +0000 (UTC) Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 822D721C004; Wed, 20 Apr 2016 11:21:57 +0000 (UTC) Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id C6C2821C004 for ; Wed, 20 Apr 2016 11:21:56 +0000 (UTC) Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id 3CD9C340BBF for ; Wed, 20 Apr 2016 11:21:55 +0000 (UTC) Received: from localhost.localdomain (localhost [127.0.0.1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id 7B653188 for ; Wed, 20 Apr 2016 11:21:52 +0000 (UTC) From: "Mike Pagano" To: gentoo-commits@lists.gentoo.org Content-Transfer-Encoding: 8bit Content-type: text/plain; charset=UTF-8 Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano" Message-ID: <1461151298.2fa9b8276f2566bcb2de0fff6e3d0a0ff63b821d.mpagano@gentoo> Subject: [gentoo-commits] proj/linux-patches:3.18 commit in: / X-VCS-Repository: proj/linux-patches X-VCS-Files: 0000_README 1030_linux-3.18.31.patch X-VCS-Directories: / X-VCS-Committer: mpagano X-VCS-Committer-Name: Mike Pagano X-VCS-Revision: 2fa9b8276f2566bcb2de0fff6e3d0a0ff63b821d X-VCS-Branch: 3.18 Date: Wed, 20 Apr 2016 11:21:52 +0000 (UTC) Precedence: bulk List-Post: List-Help: List-Unsubscribe: List-Subscribe: List-Id: Gentoo Linux mail X-BeenThere: gentoo-commits@lists.gentoo.org X-Archives-Salt: cc669fa9-592e-4a48-b4c2-611f0df9f79c X-Archives-Hash: b276ff9edd0a442d6b66300bea907ef8 commit: 2fa9b8276f2566bcb2de0fff6e3d0a0ff63b821d Author: Mike Pagano gentoo org> AuthorDate: Wed Apr 20 11:21:38 2016 +0000 Commit: Mike Pagano gentoo org> CommitDate: Wed Apr 20 11:21:38 2016 +0000 URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2fa9b827 Linux patch 3.18.31 0000_README | 4 + 1030_linux-3.18.31.patch | 6783 ++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 6787 insertions(+) diff --git a/0000_README b/0000_README index eee1fbe..8c2614b 100644 --- a/0000_README +++ b/0000_README @@ -163,6 +163,10 @@ Patch: 1029_linux-3.18.30.patch From: http://www.kernel.org Desc: Linux 3.18.30 +Patch: 1030_linux-3.18.31.patch +From: http://www.kernel.org +Desc: Linux 3.18.31 + Patch: 1500_XATTR_USER_PREFIX.patch From: https://bugs.gentoo.org/show_bug.cgi?id=470644 Desc: Support for namespace user.pax.* on tmpfs. diff --git a/1030_linux-3.18.31.patch b/1030_linux-3.18.31.patch new file mode 100644 index 0000000..7b0e526 --- /dev/null +++ b/1030_linux-3.18.31.patch @@ -0,0 +1,6783 @@ +diff --git a/Documentation/filesystems/efivarfs.txt b/Documentation/filesystems/efivarfs.txt +index c477af086e65..686a64bba775 100644 +--- a/Documentation/filesystems/efivarfs.txt ++++ b/Documentation/filesystems/efivarfs.txt +@@ -14,3 +14,10 @@ filesystem. 
+ efivarfs is typically mounted like this, + + mount -t efivarfs none /sys/firmware/efi/efivars ++ ++Due to the presence of numerous firmware bugs where removing non-standard ++UEFI variables causes the system firmware to fail to POST, efivarfs ++files that are not well-known standardized variables are created ++as immutable files. This doesn't prevent removal - "chattr -i" will work - ++but it does prevent this kind of failure from being accomplished ++accidentally. +diff --git a/MAINTAINERS b/MAINTAINERS +index c721042e7e45..090eaae42181 100644 +--- a/MAINTAINERS ++++ b/MAINTAINERS +@@ -9004,10 +9004,12 @@ S: Maintained + F: drivers/net/ethernet/dlink/sundance.c + + SUPERH ++M: Yoshinori Sato ++M: Rich Felker + L: linux-sh@vger.kernel.org + W: http://www.linux-sh.org + Q: http://patchwork.kernel.org/project/linux-sh/list/ +-S: Orphan ++S: Maintained + F: Documentation/sh/ + F: arch/sh/ + F: drivers/sh/ +diff --git a/Makefile b/Makefile +index cdc9cf7cb4dd..a05c9336722d 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,6 +1,6 @@ + VERSION = 3 + PATCHLEVEL = 18 +-SUBLEVEL = 30 ++SUBLEVEL = 31 + EXTRAVERSION = + NAME = Diseased Newt + +diff --git a/arch/arm/boot/dts/armada-375.dtsi b/arch/arm/boot/dts/armada-375.dtsi +index de6571445cef..34a4f07b5546 100644 +--- a/arch/arm/boot/dts/armada-375.dtsi ++++ b/arch/arm/boot/dts/armada-375.dtsi +@@ -450,7 +450,7 @@ + }; + + sata@a0000 { +- compatible = "marvell,orion-sata"; ++ compatible = "marvell,armada-370-sata"; + reg = <0xa0000 0x5000>; + interrupts = ; + clocks = <&gateclk 14>, <&gateclk 20>; +diff --git a/arch/arm/boot/dts/sun5i-a10s.dtsi b/arch/arm/boot/dts/sun5i-a10s.dtsi +index 531272c0e526..fd2bcd3b7a9a 100644 +--- a/arch/arm/boot/dts/sun5i-a10s.dtsi ++++ b/arch/arm/boot/dts/sun5i-a10s.dtsi +@@ -526,7 +526,7 @@ + }; + + rtp: rtp@01c25000 { +- compatible = "allwinner,sun4i-a10-ts"; ++ compatible = "allwinner,sun5i-a13-ts"; + reg = <0x01c25000 0x100>; + interrupts = <29>; + }; +diff --git a/arch/arm/boot/dts/sun5i-a13.dtsi b/arch/arm/boot/dts/sun5i-a13.dtsi +index b131068f4f35..f9b019991f86 100644 +--- a/arch/arm/boot/dts/sun5i-a13.dtsi ++++ b/arch/arm/boot/dts/sun5i-a13.dtsi +@@ -474,7 +474,7 @@ + }; + + rtp: rtp@01c25000 { +- compatible = "allwinner,sun4i-a10-ts"; ++ compatible = "allwinner,sun5i-a13-ts"; + reg = <0x01c25000 0x100>; + interrupts = <29>; + }; +diff --git a/arch/arm/boot/dts/sun7i-a20.dtsi b/arch/arm/boot/dts/sun7i-a20.dtsi +index 82097c905c48..dcff77817c22 100644 +--- a/arch/arm/boot/dts/sun7i-a20.dtsi ++++ b/arch/arm/boot/dts/sun7i-a20.dtsi +@@ -896,7 +896,7 @@ + }; + + rtp: rtp@01c25000 { +- compatible = "allwinner,sun4i-a10-ts"; ++ compatible = "allwinner,sun5i-a13-ts"; + reg = <0x01c25000 0x100>; + interrupts = <0 29 4>; + }; +diff --git a/arch/arm/include/asm/psci.h b/arch/arm/include/asm/psci.h +index c25ef3ec6d1f..e3789fb02c9c 100644 +--- a/arch/arm/include/asm/psci.h ++++ b/arch/arm/include/asm/psci.h +@@ -37,7 +37,7 @@ struct psci_operations { + extern struct psci_operations psci_ops; + extern struct smp_operations psci_smp_ops; + +-#ifdef CONFIG_ARM_PSCI ++#if defined(CONFIG_SMP) && defined(CONFIG_ARM_PSCI) + int psci_init(void); + bool psci_smp_available(void); + #else +diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile +index 2d54c55400ed..37c4fd6aeb7a 100644 +--- a/arch/arm64/Makefile ++++ b/arch/arm64/Makefile +@@ -20,6 +20,8 @@ LIBGCC := $(shell $(CC) $(KBUILD_CFLAGS) -print-libgcc-file-name) + KBUILD_DEFCONFIG := defconfig + + KBUILD_CFLAGS += -mgeneral-regs-only ++KBUILD_CFLAGS += $(call cc-option, 
-mpc-relative-literal-loads) ++ + ifeq ($(CONFIG_CPU_BIG_ENDIAN), y) + KBUILD_CPPFLAGS += -mbig-endian + AS += -EB +diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h +index 41a43bf26492..fba3e59e0c78 100644 +--- a/arch/arm64/include/asm/pgtable.h ++++ b/arch/arm64/include/asm/pgtable.h +@@ -34,7 +34,7 @@ + /* + * VMALLOC and SPARSEMEM_VMEMMAP ranges. + * +- * VMEMAP_SIZE: allows the whole VA space to be covered by a struct page array ++ * VMEMAP_SIZE: allows the whole linear region to be covered by a struct page array + * (rounded up to PUD_SIZE). + * VMALLOC_START: beginning of the kernel VA space + * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space, +@@ -44,7 +44,9 @@ + #define VMALLOC_START (UL(0xffffffffffffffff) << VA_BITS) + #define VMALLOC_END (PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K) + +-#define vmemmap ((struct page *)(VMALLOC_END + SZ_64K)) ++#define VMEMMAP_START (VMALLOC_END + SZ_64K) ++#define vmemmap ((struct page *)VMEMMAP_START - \ ++ SECTION_ALIGN_DOWN(memstart_addr >> PAGE_SHIFT)) + + #define FIRST_USER_ADDRESS 0 + +diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c +index f752943f75d2..43245e15413e 100644 +--- a/arch/arm64/mm/init.c ++++ b/arch/arm64/mm/init.c +@@ -286,8 +286,8 @@ void __init mem_init(void) + " .data : 0x%p" " - 0x%p" " (%6ld KB)\n", + MLG(VMALLOC_START, VMALLOC_END), + #ifdef CONFIG_SPARSEMEM_VMEMMAP +- MLG((unsigned long)vmemmap, +- (unsigned long)vmemmap + VMEMMAP_SIZE), ++ MLG(VMEMMAP_START, ++ VMEMMAP_START + VMEMMAP_SIZE), + MLM((unsigned long)virt_to_page(PAGE_OFFSET), + (unsigned long)virt_to_page(high_memory)), + #endif +diff --git a/arch/powerpc/kernel/module_64.c b/arch/powerpc/kernel/module_64.c +index 68384514506b..e77dbaeb88ff 100644 +--- a/arch/powerpc/kernel/module_64.c ++++ b/arch/powerpc/kernel/module_64.c +@@ -335,7 +335,7 @@ static void dedotify(Elf64_Sym *syms, unsigned int numsyms, char *strtab) + if (syms[i].st_shndx == SHN_UNDEF) { + char *name = strtab + syms[i].st_name; + if (name[0] == '.') +- memmove(name, name+1, strlen(name)); ++ syms[i].st_name++; + } + } + } +diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h +index c030900320e0..d2d23f88464f 100644 +--- a/arch/s390/include/asm/pci.h ++++ b/arch/s390/include/asm/pci.h +@@ -44,11 +44,7 @@ struct zpci_fmb { + u64 rpcit_ops; + u64 dma_rbytes; + u64 dma_wbytes; +- /* software counters */ +- atomic64_t allocated_pages; +- atomic64_t mapped_pages; +- atomic64_t unmapped_pages; +-} __packed __aligned(16); ++} __packed __aligned(64); + + #define ZPCI_MSI_VEC_BITS 11 + #define ZPCI_MSI_VEC_MAX (1 << ZPCI_MSI_VEC_BITS) +@@ -114,6 +110,10 @@ struct zpci_dev { + /* Function measurement block */ + struct zpci_fmb *fmb; + u16 fmb_update; /* update interval */ ++ /* software counters */ ++ atomic64_t allocated_pages; ++ atomic64_t mapped_pages; ++ atomic64_t unmapped_pages; + + enum pci_bus_speed max_bus_speed; + +diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c +index 2fa7b14b9c08..944818617718 100644 +--- a/arch/s390/pci/pci.c ++++ b/arch/s390/pci/pci.c +@@ -190,6 +190,11 @@ int zpci_fmb_enable_device(struct zpci_dev *zdev) + return -ENOMEM; + WARN_ON((u64) zdev->fmb & 0xf); + ++ /* reset software counters */ ++ atomic64_set(&zdev->allocated_pages, 0); ++ atomic64_set(&zdev->mapped_pages, 0); ++ atomic64_set(&zdev->unmapped_pages, 0); ++ + args.fmb_addr = virt_to_phys(zdev->fmb); + return mod_pci(zdev, ZPCI_MOD_FC_SET_MEASURE, 0, &args); + } +@@ -840,8 +845,11 @@ static inline int barsize(u8 
size) + + static int zpci_mem_init(void) + { ++ BUILD_BUG_ON(!is_power_of_2(__alignof__(struct zpci_fmb)) || ++ __alignof__(struct zpci_fmb) < sizeof(struct zpci_fmb)); ++ + zdev_fmb_cache = kmem_cache_create("PCI_FMB_cache", sizeof(struct zpci_fmb), +- 16, 0, NULL); ++ __alignof__(struct zpci_fmb), 0, NULL); + if (!zdev_fmb_cache) + goto error_zdev; + +diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c +index eec598c5939f..8eeccd7d7f79 100644 +--- a/arch/s390/pci/pci_debug.c ++++ b/arch/s390/pci/pci_debug.c +@@ -31,12 +31,25 @@ static char *pci_perf_names[] = { + "Refresh operations", + "DMA read bytes", + "DMA write bytes", +- /* software counters */ ++}; ++ ++static char *pci_sw_names[] = { + "Allocated pages", + "Mapped pages", + "Unmapped pages", + }; + ++static void pci_sw_counter_show(struct seq_file *m) ++{ ++ struct zpci_dev *zdev = m->private; ++ atomic64_t *counter = &zdev->allocated_pages; ++ int i; ++ ++ for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++) ++ seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i], ++ atomic64_read(counter)); ++} ++ + static int pci_perf_show(struct seq_file *m, void *v) + { + struct zpci_dev *zdev = m->private; +@@ -63,12 +76,8 @@ static int pci_perf_show(struct seq_file *m, void *v) + for (i = 4; i < 6; i++) + seq_printf(m, "%26s:\t%llu\n", + pci_perf_names[i], *(stat + i)); +- /* software counters */ +- for (i = 6; i < ARRAY_SIZE(pci_perf_names); i++) +- seq_printf(m, "%26s:\t%llu\n", +- pci_perf_names[i], +- atomic64_read((atomic64_t *) (stat + i))); + ++ pci_sw_counter_show(m); + return 0; + } + +diff --git a/arch/s390/pci/pci_dma.c b/arch/s390/pci/pci_dma.c +index 4cbb29a4d615..6fd8d5836138 100644 +--- a/arch/s390/pci/pci_dma.c ++++ b/arch/s390/pci/pci_dma.c +@@ -300,7 +300,7 @@ static dma_addr_t s390_dma_map_pages(struct device *dev, struct page *page, + flags |= ZPCI_TABLE_PROTECTED; + + if (!dma_update_trans(zdev, pa, dma_addr, size, flags)) { +- atomic64_add(nr_pages, &zdev->fmb->mapped_pages); ++ atomic64_add(nr_pages, &zdev->mapped_pages); + return dma_addr + (offset & ~PAGE_MASK); + } + +@@ -328,7 +328,7 @@ static void s390_dma_unmap_pages(struct device *dev, dma_addr_t dma_addr, + zpci_err_hex(&dma_addr, sizeof(dma_addr)); + } + +- atomic64_add(npages, &zdev->fmb->unmapped_pages); ++ atomic64_add(npages, &zdev->unmapped_pages); + iommu_page_index = (dma_addr - zdev->start_dma) >> PAGE_SHIFT; + dma_free_iommu(zdev, iommu_page_index, npages); + } +@@ -357,7 +357,7 @@ static void *s390_dma_alloc(struct device *dev, size_t size, + return NULL; + } + +- atomic64_add(size / PAGE_SIZE, &zdev->fmb->allocated_pages); ++ atomic64_add(size / PAGE_SIZE, &zdev->allocated_pages); + if (dma_handle) + *dma_handle = map; + return (void *) pa; +@@ -370,7 +370,7 @@ static void s390_dma_free(struct device *dev, size_t size, + struct zpci_dev *zdev = get_zdev(to_pci_dev(dev)); + + size = PAGE_ALIGN(size); +- atomic64_sub(size / PAGE_SIZE, &zdev->fmb->allocated_pages); ++ atomic64_sub(size / PAGE_SIZE, &zdev->allocated_pages); + s390_dma_unmap_pages(dev, dma_handle, size, DMA_BIDIRECTIONAL, NULL); + free_pages((unsigned long) pa, get_order(size)); + } +diff --git a/arch/um/drivers/mconsole_kern.c b/arch/um/drivers/mconsole_kern.c +index 29880c9b324e..e22e57298522 100644 +--- a/arch/um/drivers/mconsole_kern.c ++++ b/arch/um/drivers/mconsole_kern.c +@@ -133,7 +133,7 @@ void mconsole_proc(struct mc_request *req) + ptr += strlen("proc"); + ptr = skip_spaces(ptr); + +- file = file_open_root(mnt->mnt_root, mnt, ptr, O_RDONLY); ++ file = 
file_open_root(mnt->mnt_root, mnt, ptr, O_RDONLY, 0); + if (IS_ERR(file)) { + mconsole_reply(req, "Failed to open file", 1, 0); + printk(KERN_ERR "open /proc/%s: %ld\n", ptr, PTR_ERR(file)); +diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h +index 465b309af254..dbaf844ddcb1 100644 +--- a/arch/x86/include/asm/apic.h ++++ b/arch/x86/include/asm/apic.h +@@ -651,8 +651,8 @@ static inline void entering_irq(void) + + static inline void entering_ack_irq(void) + { +- ack_APIC_irq(); + entering_irq(); ++ ack_APIC_irq(); + } + + static inline void exiting_irq(void) +diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h +index 8dfc9fd094a3..024fa1a20f15 100644 +--- a/arch/x86/include/asm/perf_event.h ++++ b/arch/x86/include/asm/perf_event.h +@@ -159,6 +159,14 @@ struct x86_pmu_capability { + */ + #define INTEL_PMC_IDX_FIXED_BTS (INTEL_PMC_IDX_FIXED + 16) + ++#define GLOBAL_STATUS_COND_CHG BIT_ULL(63) ++#define GLOBAL_STATUS_BUFFER_OVF BIT_ULL(62) ++#define GLOBAL_STATUS_UNC_OVF BIT_ULL(61) ++#define GLOBAL_STATUS_ASIF BIT_ULL(60) ++#define GLOBAL_STATUS_COUNTERS_FROZEN BIT_ULL(59) ++#define GLOBAL_STATUS_LBRS_FROZEN BIT_ULL(58) ++#define GLOBAL_STATUS_TRACE_TOPAPMI BIT_ULL(55) ++ + /* + * IBS cpuid feature detection + */ +diff --git a/arch/x86/include/asm/xen/hypervisor.h b/arch/x86/include/asm/xen/hypervisor.h +index d866959e5685..d2ad00a42234 100644 +--- a/arch/x86/include/asm/xen/hypervisor.h ++++ b/arch/x86/include/asm/xen/hypervisor.h +@@ -57,4 +57,6 @@ static inline bool xen_x2apic_para_available(void) + } + #endif + ++extern void xen_set_iopl_mask(unsigned mask); ++ + #endif /* _ASM_X86_XEN_HYPERVISOR_H */ +diff --git a/arch/x86/include/uapi/asm/msr-index.h b/arch/x86/include/uapi/asm/msr-index.h +index e21331ce368f..177889cd0505 100644 +--- a/arch/x86/include/uapi/asm/msr-index.h ++++ b/arch/x86/include/uapi/asm/msr-index.h +@@ -69,6 +69,12 @@ + #define MSR_LBR_CORE_FROM 0x00000040 + #define MSR_LBR_CORE_TO 0x00000060 + ++#define MSR_LBR_INFO_0 0x00000dc0 /* ... 0xddf for _31 */ ++#define LBR_INFO_MISPRED BIT_ULL(63) ++#define LBR_INFO_IN_TX BIT_ULL(62) ++#define LBR_INFO_ABORT BIT_ULL(61) ++#define LBR_INFO_CYCLES 0xffff ++ + #define MSR_IA32_PEBS_ENABLE 0x000003f1 + #define MSR_IA32_DS_AREA 0x00000600 + #define MSR_IA32_PERF_CAPABILITIES 0x00000345 +diff --git a/arch/x86/kernel/ioport.c b/arch/x86/kernel/ioport.c +index 4ddaf66ea35f..792621a32457 100644 +--- a/arch/x86/kernel/ioport.c ++++ b/arch/x86/kernel/ioport.c +@@ -96,9 +96,14 @@ asmlinkage long sys_ioperm(unsigned long from, unsigned long num, int turn_on) + SYSCALL_DEFINE1(iopl, unsigned int, level) + { + struct pt_regs *regs = current_pt_regs(); +- unsigned int old = (regs->flags >> 12) & 3; + struct thread_struct *t = ¤t->thread; + ++ /* ++ * Careful: the IOPL bits in regs->flags are undefined under Xen PV ++ * and changing them has no effect. ++ */ ++ unsigned int old = t->iopl >> X86_EFLAGS_IOPL_BIT; ++ + if (level > 3) + return -EINVAL; + /* Trying to gain more privileges? 
*/ +@@ -106,8 +111,9 @@ SYSCALL_DEFINE1(iopl, unsigned int, level) + if (!capable(CAP_SYS_RAWIO)) + return -EPERM; + } +- regs->flags = (regs->flags & ~X86_EFLAGS_IOPL) | (level << 12); +- t->iopl = level << 12; ++ regs->flags = (regs->flags & ~X86_EFLAGS_IOPL) | ++ (level << X86_EFLAGS_IOPL_BIT); ++ t->iopl = level << X86_EFLAGS_IOPL_BIT; + set_iopl_mask(t->iopl); + + return 0; +diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c +index 54cfd5ebd96c..f547f866e86c 100644 +--- a/arch/x86/kernel/process_64.c ++++ b/arch/x86/kernel/process_64.c +@@ -49,6 +49,7 @@ + #include + #include + #include ++#include + + asmlinkage extern void ret_from_fork(void); + +@@ -424,6 +425,17 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p) + task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV)) + __switch_to_xtra(prev_p, next_p, tss); + ++#ifdef CONFIG_XEN ++ /* ++ * On Xen PV, IOPL bits in pt_regs->flags have no effect, and ++ * current_pt_regs()->flags may not match the current task's ++ * intended IOPL. We need to switch it manually. ++ */ ++ if (unlikely(xen_pv_domain() && ++ prev->iopl != next->iopl)) ++ xen_set_iopl_mask(next->iopl); ++#endif ++ + return prev_p; + } + +diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c +index 1406ffde3e35..b0a706d063cb 100644 +--- a/arch/x86/kvm/i8254.c ++++ b/arch/x86/kvm/i8254.c +@@ -244,7 +244,7 @@ static void kvm_pit_ack_irq(struct kvm_irq_ack_notifier *kian) + * PIC is being reset. Handle it gracefully here + */ + atomic_inc(&ps->pending); +- else if (value > 0) ++ else if (value > 0 && ps->reinject) + /* in this case, we had multiple outstanding pit interrupts + * that we needed to inject. Reinject + */ +@@ -287,7 +287,9 @@ static void pit_do_work(struct kthread_work *work) + * last one has been acked. 
+ */ + spin_lock(&ps->inject_lock); +- if (ps->irq_ack) { ++ if (!ps->reinject) ++ inject = 1; ++ else if (ps->irq_ack) { + ps->irq_ack = 0; + inject = 1; + } +@@ -316,10 +318,10 @@ static enum hrtimer_restart pit_timer_fn(struct hrtimer *data) + struct kvm_kpit_state *ps = container_of(data, struct kvm_kpit_state, timer); + struct kvm_pit *pt = ps->kvm->arch.vpit; + +- if (ps->reinject || !atomic_read(&ps->pending)) { ++ if (ps->reinject) + atomic_inc(&ps->pending); +- queue_kthread_work(&pt->worker, &pt->expired); +- } ++ ++ queue_kthread_work(&pt->worker, &pt->expired); + + if (ps->is_periodic) { + hrtimer_add_expires_ns(&ps->timer, ps->period); +diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c +index 49cce7508a54..d9c11f3f5b18 100644 +--- a/arch/x86/kvm/vmx.c ++++ b/arch/x86/kvm/vmx.c +@@ -6740,6 +6740,7 @@ static int handle_invept(struct kvm_vcpu *vcpu) + if (!(types & (1UL << type))) { + nested_vmx_failValid(vcpu, + VMXERR_INVALID_OPERAND_TO_INVEPT_INVVPID); ++ skip_emulated_instruction(vcpu); + return 1; + } + +diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c +index 9fbf7c7fcbd9..d77189c351a8 100644 +--- a/arch/x86/kvm/x86.c ++++ b/arch/x86/kvm/x86.c +@@ -3668,13 +3668,13 @@ static int kvm_vm_ioctl_get_pit(struct kvm *kvm, struct kvm_pit_state *ps) + + static int kvm_vm_ioctl_set_pit(struct kvm *kvm, struct kvm_pit_state *ps) + { +- int r = 0; +- ++ int i; + mutex_lock(&kvm->arch.vpit->pit_state.lock); + memcpy(&kvm->arch.vpit->pit_state, ps, sizeof(struct kvm_pit_state)); +- kvm_pit_load_count(kvm, 0, ps->channels[0].count, 0); ++ for (i = 0; i < 3; i++) ++ kvm_pit_load_count(kvm, i, ps->channels[i].count, 0); + mutex_unlock(&kvm->arch.vpit->pit_state.lock); +- return r; ++ return 0; + } + + static int kvm_vm_ioctl_get_pit2(struct kvm *kvm, struct kvm_pit_state2 *ps) +@@ -3693,6 +3693,7 @@ static int kvm_vm_ioctl_get_pit2(struct kvm *kvm, struct kvm_pit_state2 *ps) + static int kvm_vm_ioctl_set_pit2(struct kvm *kvm, struct kvm_pit_state2 *ps) + { + int r = 0, start = 0; ++ int i; + u32 prev_legacy, cur_legacy; + mutex_lock(&kvm->arch.vpit->pit_state.lock); + prev_legacy = kvm->arch.vpit->pit_state.flags & KVM_PIT_FLAGS_HPET_LEGACY; +@@ -3702,7 +3703,8 @@ static int kvm_vm_ioctl_set_pit2(struct kvm *kvm, struct kvm_pit_state2 *ps) + memcpy(&kvm->arch.vpit->pit_state.channels, &ps->channels, + sizeof(kvm->arch.vpit->pit_state.channels)); + kvm->arch.vpit->pit_state.flags = ps->flags; +- kvm_pit_load_count(kvm, 0, kvm->arch.vpit->pit_state.channels[0].count, start); ++ for (i = 0; i < 3; i++) ++ kvm_pit_load_count(kvm, i, kvm->arch.vpit->pit_state.channels[i].count, start); + mutex_unlock(&kvm->arch.vpit->pit_state.lock); + return r; + } +diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c +index 9a2b7101ae8a..f16af96c60a2 100644 +--- a/arch/x86/pci/fixup.c ++++ b/arch/x86/pci/fixup.c +@@ -553,3 +553,10 @@ static void twinhead_reserve_killing_zone(struct pci_dev *dev) + } + } + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x27B9, twinhead_reserve_killing_zone); ++ ++static void pci_bdwep_bar(struct pci_dev *dev) ++{ ++ dev->non_compliant_bars = 1; ++} ++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fa0, pci_bdwep_bar); ++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fc0, pci_bdwep_bar); +diff --git a/arch/x86/pci/intel_mid_pci.c b/arch/x86/pci/intel_mid_pci.c +index b9958c364075..44b9271580b5 100644 +--- a/arch/x86/pci/intel_mid_pci.c ++++ b/arch/x86/pci/intel_mid_pci.c +@@ -210,6 +210,9 @@ static int intel_mid_pci_irq_enable(struct pci_dev *dev) + { + int polarity; 
+ ++ if (dev->irq_managed && dev->irq > 0) ++ return 0; ++ + if (intel_mid_identify_cpu() == INTEL_MID_CPU_CHIP_TANGIER) + polarity = 0; /* active high */ + else +@@ -224,13 +227,18 @@ static int intel_mid_pci_irq_enable(struct pci_dev *dev) + if (mp_map_gsi_to_irq(dev->irq, IOAPIC_MAP_ALLOC) < 0) + return -EBUSY; + ++ dev->irq_managed = 1; ++ + return 0; + } + + static void intel_mid_pci_irq_disable(struct pci_dev *dev) + { +- if (!mp_should_keep_irq(&dev->dev) && dev->irq > 0) ++ if (!mp_should_keep_irq(&dev->dev) && dev->irq_managed && ++ dev->irq > 0) { + mp_unmap_irq(dev->irq); ++ dev->irq_managed = 0; ++ } + } + + struct pci_ops intel_mid_pci_ops = { +diff --git a/arch/x86/pci/irq.c b/arch/x86/pci/irq.c +index eb500c2592ad..a47e2dea0972 100644 +--- a/arch/x86/pci/irq.c ++++ b/arch/x86/pci/irq.c +@@ -1202,6 +1202,9 @@ static int pirq_enable_irq(struct pci_dev *dev) + int irq; + struct io_apic_irq_attr irq_attr; + ++ if (dev->irq_managed && dev->irq > 0) ++ return 0; ++ + irq = IO_APIC_get_PCI_irq_vector(dev->bus->number, + PCI_SLOT(dev->devfn), + pin - 1, &irq_attr); +@@ -1228,6 +1231,7 @@ static int pirq_enable_irq(struct pci_dev *dev) + } + dev = temp_dev; + if (irq >= 0) { ++ dev->irq_managed = 1; + dev->irq = irq; + dev_info(&dev->dev, "PCI->APIC IRQ transform: " + "INT %c -> IRQ %d\n", 'A' + pin - 1, irq); +@@ -1257,8 +1261,9 @@ static int pirq_enable_irq(struct pci_dev *dev) + static void pirq_disable_irq(struct pci_dev *dev) + { + if (io_apic_assign_pci_irqs && !mp_should_keep_irq(&dev->dev) && +- dev->irq) { ++ dev->irq_managed && dev->irq) { + mp_unmap_irq(dev->irq); + dev->irq = 0; ++ dev->irq_managed = 0; + } + } +diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c +index 7e365d231a93..6ba1ec961aaa 100644 +--- a/arch/x86/xen/enlighten.c ++++ b/arch/x86/xen/enlighten.c +@@ -956,7 +956,7 @@ static void xen_load_sp0(struct tss_struct *tss, + xen_mc_issue(PARAVIRT_LAZY_CPU); + } + +-static void xen_set_iopl_mask(unsigned mask) ++void xen_set_iopl_mask(unsigned mask) + { + struct physdev_set_iopl set_iopl; + +diff --git a/arch/xtensa/kernel/head.S b/arch/xtensa/kernel/head.S +index aeeb3cc8a410..288b61f080fe 100644 +--- a/arch/xtensa/kernel/head.S ++++ b/arch/xtensa/kernel/head.S +@@ -123,7 +123,7 @@ ENTRY(_startup) + wsr a0, icountlevel + + .set _index, 0 +- .rept XCHAL_NUM_DBREAK - 1 ++ .rept XCHAL_NUM_DBREAK + wsr a0, SREG_DBREAKC + _index + .set _index, _index + 1 + .endr +diff --git a/arch/xtensa/mm/cache.c b/arch/xtensa/mm/cache.c +index d75aa1476da7..1a804a2f9a5b 100644 +--- a/arch/xtensa/mm/cache.c ++++ b/arch/xtensa/mm/cache.c +@@ -97,11 +97,11 @@ void clear_user_highpage(struct page *page, unsigned long vaddr) + unsigned long paddr; + void *kvaddr = coherent_kvaddr(page, TLBTEMP_BASE_1, vaddr, &paddr); + +- pagefault_disable(); ++ preempt_disable(); + kmap_invalidate_coherent(page, vaddr); + set_bit(PG_arch_1, &page->flags); + clear_page_alias(kvaddr, paddr); +- pagefault_enable(); ++ preempt_enable(); + } + + void copy_user_highpage(struct page *dst, struct page *src, +@@ -113,11 +113,11 @@ void copy_user_highpage(struct page *dst, struct page *src, + void *src_vaddr = coherent_kvaddr(src, TLBTEMP_BASE_2, vaddr, + &src_paddr); + +- pagefault_disable(); ++ preempt_disable(); + kmap_invalidate_coherent(dst, vaddr); + set_bit(PG_arch_1, &dst->flags); + copy_page_alias(dst_vaddr, src_vaddr, dst_paddr, src_paddr); +- pagefault_enable(); ++ preempt_enable(); + } + + #endif /* DCACHE_WAY_SIZE > PAGE_SIZE */ +diff --git a/arch/xtensa/platforms/iss/console.c 
b/arch/xtensa/platforms/iss/console.c +index 70cb408bc20d..92d785fefb6d 100644 +--- a/arch/xtensa/platforms/iss/console.c ++++ b/arch/xtensa/platforms/iss/console.c +@@ -100,21 +100,23 @@ static void rs_poll(unsigned long priv) + { + struct tty_port *port = (struct tty_port *)priv; + int i = 0; ++ int rd = 1; + unsigned char c; + + spin_lock(&timer_lock); + + while (simc_poll(0)) { +- simc_read(0, &c, 1); ++ rd = simc_read(0, &c, 1); ++ if (rd <= 0) ++ break; + tty_insert_flip_char(port, c, TTY_NORMAL); + i++; + } + + if (i) + tty_flip_buffer_push(port); +- +- +- mod_timer(&serial_timer, jiffies + SERIAL_TIMER_VALUE); ++ if (rd) ++ mod_timer(&serial_timer, jiffies + SERIAL_TIMER_VALUE); + spin_unlock(&timer_lock); + } + +diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c +index 83187f497c7c..d2cacc7f079f 100644 +--- a/crypto/algif_skcipher.c ++++ b/crypto/algif_skcipher.c +@@ -31,6 +31,11 @@ struct skcipher_sg_list { + struct scatterlist sg[0]; + }; + ++struct skcipher_tfm { ++ struct crypto_ablkcipher *skcipher; ++ bool has_key; ++}; ++ + struct skcipher_ctx { + struct list_head tsgl; + struct af_alg_sgl rsgl; +@@ -544,19 +549,139 @@ static struct proto_ops algif_skcipher_ops = { + .poll = skcipher_poll, + }; + ++static int skcipher_check_key(struct socket *sock) ++{ ++ int err = 0; ++ struct sock *psk; ++ struct alg_sock *pask; ++ struct skcipher_tfm *tfm; ++ struct sock *sk = sock->sk; ++ struct alg_sock *ask = alg_sk(sk); ++ ++ lock_sock(sk); ++ if (ask->refcnt) ++ goto unlock_child; ++ ++ psk = ask->parent; ++ pask = alg_sk(ask->parent); ++ tfm = pask->private; ++ ++ err = -ENOKEY; ++ lock_sock_nested(psk, SINGLE_DEPTH_NESTING); ++ if (!tfm->has_key) ++ goto unlock; ++ ++ if (!pask->refcnt++) ++ sock_hold(psk); ++ ++ ask->refcnt = 1; ++ sock_put(psk); ++ ++ err = 0; ++ ++unlock: ++ release_sock(psk); ++unlock_child: ++ release_sock(sk); ++ ++ return err; ++} ++ ++static int skcipher_sendmsg_nokey(struct kiocb *unused, struct socket *sock, ++ struct msghdr *msg, size_t size) ++{ ++ int err; ++ ++ err = skcipher_check_key(sock); ++ if (err) ++ return err; ++ ++ return skcipher_sendmsg(NULL, sock, msg, size); ++} ++ ++static ssize_t skcipher_sendpage_nokey(struct socket *sock, struct page *page, ++ int offset, size_t size, int flags) ++{ ++ int err; ++ ++ err = skcipher_check_key(sock); ++ if (err) ++ return err; ++ ++ return skcipher_sendpage(sock, page, offset, size, flags); ++} ++ ++static int skcipher_recvmsg_nokey(struct kiocb *unused, struct socket *sock, ++ struct msghdr *msg, size_t ignored, int flags) ++{ ++ int err; ++ ++ err = skcipher_check_key(sock); ++ if (err) ++ return err; ++ ++ return skcipher_recvmsg(NULL, sock, msg, ignored, flags); ++} ++ ++static struct proto_ops algif_skcipher_ops_nokey = { ++ .family = PF_ALG, ++ ++ .connect = sock_no_connect, ++ .socketpair = sock_no_socketpair, ++ .getname = sock_no_getname, ++ .ioctl = sock_no_ioctl, ++ .listen = sock_no_listen, ++ .shutdown = sock_no_shutdown, ++ .getsockopt = sock_no_getsockopt, ++ .mmap = sock_no_mmap, ++ .bind = sock_no_bind, ++ .accept = sock_no_accept, ++ .setsockopt = sock_no_setsockopt, ++ ++ .release = af_alg_release, ++ .sendmsg = skcipher_sendmsg_nokey, ++ .sendpage = skcipher_sendpage_nokey, ++ .recvmsg = skcipher_recvmsg_nokey, ++ .poll = skcipher_poll, ++}; ++ + static void *skcipher_bind(const char *name, u32 type, u32 mask) + { +- return crypto_alloc_ablkcipher(name, type, mask); ++ struct skcipher_tfm *tfm; ++ struct crypto_ablkcipher *skcipher; ++ ++ tfm = kzalloc(sizeof(*tfm), 
GFP_KERNEL); ++ if (!tfm) ++ return ERR_PTR(-ENOMEM); ++ ++ skcipher = crypto_alloc_ablkcipher(name, type, mask); ++ if (IS_ERR(skcipher)) { ++ kfree(tfm); ++ return ERR_CAST(skcipher); ++ } ++ ++ tfm->skcipher = skcipher; ++ ++ return tfm; + } + + static void skcipher_release(void *private) + { +- crypto_free_ablkcipher(private); ++ struct skcipher_tfm *tfm = private; ++ ++ crypto_free_ablkcipher(tfm->skcipher); ++ kfree(tfm); + } + + static int skcipher_setkey(void *private, const u8 *key, unsigned int keylen) + { +- return crypto_ablkcipher_setkey(private, key, keylen); ++ struct skcipher_tfm *tfm = private; ++ int err; ++ ++ err = crypto_ablkcipher_setkey(tfm->skcipher, key, keylen); ++ tfm->has_key = !err; ++ ++ return err; + } + + static void skcipher_sock_destruct(struct sock *sk) +@@ -571,24 +696,27 @@ static void skcipher_sock_destruct(struct sock *sk) + af_alg_release_parent(sk); + } + +-static int skcipher_accept_parent(void *private, struct sock *sk) ++static int skcipher_accept_parent_nokey(void *private, struct sock *sk) + { + struct skcipher_ctx *ctx; + struct alg_sock *ask = alg_sk(sk); +- unsigned int len = sizeof(*ctx) + crypto_ablkcipher_reqsize(private); ++ struct skcipher_tfm *tfm = private; ++ struct crypto_ablkcipher *skcipher = tfm->skcipher; ++ unsigned int len = sizeof(*ctx) + crypto_ablkcipher_reqsize(skcipher); + + ctx = sock_kmalloc(sk, len, GFP_KERNEL); + if (!ctx) + return -ENOMEM; + +- ctx->iv = sock_kmalloc(sk, crypto_ablkcipher_ivsize(private), ++ ctx->iv = sock_kmalloc(sk, crypto_ablkcipher_ivsize(skcipher), + GFP_KERNEL); + if (!ctx->iv) { + sock_kfree_s(sk, ctx, len); + return -ENOMEM; + } + +- memset(ctx->iv, 0, crypto_ablkcipher_ivsize(private)); ++ memset(ctx->iv, 0, crypto_ablkcipher_ivsize(skcipher)); ++ + + INIT_LIST_HEAD(&ctx->tsgl); + ctx->len = len; +@@ -600,7 +728,7 @@ static int skcipher_accept_parent(void *private, struct sock *sk) + + ask->private = ctx; + +- ablkcipher_request_set_tfm(&ctx->req, private); ++ ablkcipher_request_set_tfm(&ctx->req, skcipher); + ablkcipher_request_set_callback(&ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG, + af_alg_complete, &ctx->completion); + +@@ -609,12 +737,24 @@ static int skcipher_accept_parent(void *private, struct sock *sk) + return 0; + } + ++static int skcipher_accept_parent(void *private, struct sock *sk) ++{ ++ struct skcipher_tfm *tfm = private; ++ ++ if (!tfm->has_key) ++ return -ENOKEY; ++ ++ return skcipher_accept_parent_nokey(private, sk); ++} ++ + static const struct af_alg_type algif_type_skcipher = { + .bind = skcipher_bind, + .release = skcipher_release, + .setkey = skcipher_setkey, + .accept = skcipher_accept_parent, ++ .accept_nokey = skcipher_accept_parent_nokey, + .ops = &algif_skcipher_ops, ++ .ops_nokey = &algif_skcipher_ops_nokey, + .name = "skcipher", + .owner = THIS_MODULE + }; +diff --git a/drivers/acpi/pci_irq.c b/drivers/acpi/pci_irq.c +index 6e6b80eb0bba..5f1fdca65e5f 100644 +--- a/drivers/acpi/pci_irq.c ++++ b/drivers/acpi/pci_irq.c +@@ -413,6 +413,9 @@ int acpi_pci_irq_enable(struct pci_dev *dev) + return 0; + } + ++ if (dev->irq_managed && dev->irq > 0) ++ return 0; ++ + entry = acpi_pci_irq_lookup(dev, pin); + if (!entry) { + /* +@@ -456,6 +459,7 @@ int acpi_pci_irq_enable(struct pci_dev *dev) + return rc; + } + dev->irq = rc; ++ dev->irq_managed = 1; + + if (link) + snprintf(link_desc, sizeof(link_desc), " -> Link[%s]", link); +@@ -478,7 +482,7 @@ void acpi_pci_irq_disable(struct pci_dev *dev) + u8 pin; + + pin = dev->pin; +- if (!pin) ++ if (!pin || !dev->irq_managed || 
dev->irq <= 0) + return; + + /* Keep IOAPIC pin configuration when suspending */ +@@ -506,6 +510,9 @@ void acpi_pci_irq_disable(struct pci_dev *dev) + */ + + dev_dbg(&dev->dev, "PCI INT %c disabled\n", pin_name(pin)); +- if (gsi >= 0 && dev->irq > 0) ++ if (gsi >= 0) { + acpi_unregister_gsi(gsi); ++ dev->irq = 0; ++ dev->irq_managed = 0; ++ } + } +diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c +index 1bd5f523f8fd..d7e0b9b806e9 100644 +--- a/drivers/block/mtip32xx/mtip32xx.c ++++ b/drivers/block/mtip32xx/mtip32xx.c +@@ -705,7 +705,7 @@ static void mtip_handle_tfe(struct driver_data *dd) + fail_reason = "thermal shutdown"; + } + if (buf[288] == 0xBF) { +- set_bit(MTIP_DDF_SEC_LOCK_BIT, &dd->dd_flag); ++ set_bit(MTIP_DDF_REBUILD_FAILED_BIT, &dd->dd_flag); + dev_info(&dd->pdev->dev, + "Drive indicates rebuild has failed. Secure erase required.\n"); + fail_all_ncq_cmds = 1; +@@ -896,6 +896,10 @@ static inline irqreturn_t mtip_handle_irq(struct driver_data *data) + + /* Acknowledge the interrupt status on the port.*/ + port_stat = readl(port->mmio + PORT_IRQ_STAT); ++ if (unlikely(port_stat == 0xFFFFFFFF)) { ++ mtip_check_surprise_removal(dd->pdev); ++ return IRQ_HANDLED; ++ } + writel(port_stat, port->mmio + PORT_IRQ_STAT); + + /* Demux port status */ +@@ -991,15 +995,11 @@ static bool mtip_pause_ncq(struct mtip_port *port, + reply = port->rxfis + RX_FIS_D2H_REG; + task_file_data = readl(port->mmio+PORT_TFDATA); + +- if (fis->command == ATA_CMD_SEC_ERASE_UNIT) +- clear_bit(MTIP_DDF_SEC_LOCK_BIT, &port->dd->dd_flag); +- + if ((task_file_data & 1)) + return false; + + if (fis->command == ATA_CMD_SEC_ERASE_PREP) { + set_bit(MTIP_PF_SE_ACTIVE_BIT, &port->flags); +- set_bit(MTIP_DDF_SEC_LOCK_BIT, &port->dd->dd_flag); + port->ic_pause_timer = jiffies; + return true; + } else if ((fis->command == ATA_CMD_DOWNLOAD_MICRO) && +@@ -1011,6 +1011,8 @@ static bool mtip_pause_ncq(struct mtip_port *port, + ((fis->command == 0xFC) && + (fis->features == 0x27 || fis->features == 0x72 || + fis->features == 0x62 || fis->features == 0x26))) { ++ clear_bit(MTIP_DDF_SEC_LOCK_BIT, &port->dd->dd_flag); ++ clear_bit(MTIP_DDF_REBUILD_FAILED_BIT, &port->dd->dd_flag); + /* Com reset after secure erase or lowlevel format */ + mtip_restart_port(port); + return false; +@@ -1102,6 +1104,7 @@ static int mtip_exec_internal_command(struct mtip_port *port, + struct mtip_cmd *int_cmd; + struct driver_data *dd = port->dd; + int rv = 0; ++ unsigned long start; + + /* Make sure the buffer is 8 byte aligned. This is asic specific. 
*/ + if (buffer & 0x00000007) { +@@ -1164,6 +1167,8 @@ static int mtip_exec_internal_command(struct mtip_port *port, + /* Populate the command header */ + int_cmd->command_header->byte_count = 0; + ++ start = jiffies; ++ + /* Issue the command to the hardware */ + mtip_issue_non_ncq_command(port, MTIP_TAG_INTERNAL); + +@@ -1172,10 +1177,12 @@ static int mtip_exec_internal_command(struct mtip_port *port, + if ((rv = wait_for_completion_interruptible_timeout( + &wait, + msecs_to_jiffies(timeout))) <= 0) { ++ + if (rv == -ERESTARTSYS) { /* interrupted */ + dev_err(&dd->pdev->dev, +- "Internal command [%02X] was interrupted after %lu ms\n", +- fis->command, timeout); ++ "Internal command [%02X] was interrupted after %u ms\n", ++ fis->command, ++ jiffies_to_msecs(jiffies - start)); + rv = -EINTR; + goto exec_ic_exit; + } else if (rv == 0) /* timeout */ +@@ -2780,48 +2787,6 @@ static void mtip_hw_debugfs_exit(struct driver_data *dd) + debugfs_remove_recursive(dd->dfs_node); + } + +-static int mtip_free_orphan(struct driver_data *dd) +-{ +- struct kobject *kobj; +- +- if (dd->bdev) { +- if (dd->bdev->bd_holders >= 1) +- return -2; +- +- bdput(dd->bdev); +- dd->bdev = NULL; +- } +- +- mtip_hw_debugfs_exit(dd); +- +- spin_lock(&rssd_index_lock); +- ida_remove(&rssd_index_ida, dd->index); +- spin_unlock(&rssd_index_lock); +- +- if (!test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag) && +- test_bit(MTIP_DDF_REBUILD_FAILED_BIT, &dd->dd_flag)) { +- put_disk(dd->disk); +- } else { +- if (dd->disk) { +- kobj = kobject_get(&disk_to_dev(dd->disk)->kobj); +- if (kobj) { +- mtip_hw_sysfs_exit(dd, kobj); +- kobject_put(kobj); +- } +- del_gendisk(dd->disk); +- dd->disk = NULL; +- } +- if (dd->queue) { +- dd->queue->queuedata = NULL; +- blk_cleanup_queue(dd->queue); +- blk_mq_free_tag_set(&dd->tags); +- dd->queue = NULL; +- } +- } +- kfree(dd); +- return 0; +-} +- + /* + * Perform any init/resume time hardware setup + * +@@ -2969,7 +2934,6 @@ static int mtip_service_thread(void *data) + unsigned long slot, slot_start, slot_wrap; + unsigned int num_cmd_slots = dd->slot_groups * 32; + struct mtip_port *port = dd->port; +- int ret; + + while (1) { + if (kthread_should_stop() || +@@ -3055,18 +3019,6 @@ restart_eh: + if (kthread_should_stop()) + goto st_out; + } +- +- while (1) { +- ret = mtip_free_orphan(dd); +- if (!ret) { +- /* NOTE: All data structures are invalid, do not +- * access any here */ +- return 0; +- } +- msleep_interruptible(1000); +- if (kthread_should_stop()) +- goto st_out; +- } + st_out: + return 0; + } +@@ -3178,7 +3130,7 @@ static int mtip_hw_get_identify(struct driver_data *dd) + if (buf[288] == 0xBF) { + dev_info(&dd->pdev->dev, + "Drive indicates rebuild has failed.\n"); +- /* TODO */ ++ set_bit(MTIP_DDF_REBUILD_FAILED_BIT, &dd->dd_flag); + } + } + +@@ -3352,20 +3304,25 @@ out1: + return rv; + } + +-static void mtip_standby_drive(struct driver_data *dd) ++static int mtip_standby_drive(struct driver_data *dd) + { +- if (dd->sr) +- return; ++ int rv = 0; + ++ if (dd->sr || !dd->port) ++ return -ENODEV; + /* + * Send standby immediate (E0h) to the drive so that it + * saves its state. 
+ */ + if (!test_bit(MTIP_PF_REBUILD_BIT, &dd->port->flags) && +- !test_bit(MTIP_DDF_SEC_LOCK_BIT, &dd->dd_flag)) +- if (mtip_standby_immediate(dd->port)) ++ !test_bit(MTIP_DDF_REBUILD_FAILED_BIT, &dd->dd_flag) && ++ !test_bit(MTIP_DDF_SEC_LOCK_BIT, &dd->dd_flag)) { ++ rv = mtip_standby_immediate(dd->port); ++ if (rv) + dev_warn(&dd->pdev->dev, + "STANDBY IMMEDIATE failed\n"); ++ } ++ return rv; + } + + /* +@@ -3394,6 +3351,7 @@ static int mtip_hw_exit(struct driver_data *dd) + /* Release the IRQ. */ + irq_set_affinity_hint(dd->pdev->irq, NULL); + devm_free_irq(&dd->pdev->dev, dd->pdev->irq, dd); ++ msleep(1000); + + /* Free dma regions */ + mtip_dma_free(dd); +@@ -3422,8 +3380,7 @@ static int mtip_hw_shutdown(struct driver_data *dd) + * Send standby immediate (E0h) to the drive so that it + * saves its state. + */ +- if (!dd->sr && dd->port) +- mtip_standby_immediate(dd->port); ++ mtip_standby_drive(dd); + + return 0; + } +@@ -3446,7 +3403,7 @@ static int mtip_hw_suspend(struct driver_data *dd) + * Send standby immediate (E0h) to the drive + * so that it saves its state. + */ +- if (mtip_standby_immediate(dd->port) != 0) { ++ if (mtip_standby_drive(dd) != 0) { + dev_err(&dd->pdev->dev, + "Failed standby-immediate command\n"); + return -EFAULT; +@@ -3684,6 +3641,28 @@ static int mtip_block_getgeo(struct block_device *dev, + return 0; + } + ++static int mtip_block_open(struct block_device *dev, fmode_t mode) ++{ ++ struct driver_data *dd; ++ ++ if (dev && dev->bd_disk) { ++ dd = (struct driver_data *) dev->bd_disk->private_data; ++ ++ if (dd) { ++ if (test_bit(MTIP_DDF_REMOVAL_BIT, ++ &dd->dd_flag)) { ++ return -ENODEV; ++ } ++ return 0; ++ } ++ } ++ return -ENODEV; ++} ++ ++void mtip_block_release(struct gendisk *disk, fmode_t mode) ++{ ++} ++ + /* + * Block device operation function. + * +@@ -3691,6 +3670,8 @@ static int mtip_block_getgeo(struct block_device *dev, + * layer. 
+ */ + static const struct block_device_operations mtip_block_ops = { ++ .open = mtip_block_open, ++ .release = mtip_block_release, + .ioctl = mtip_block_ioctl, + #ifdef CONFIG_COMPAT + .compat_ioctl = mtip_block_compat_ioctl, +@@ -3729,10 +3710,9 @@ static int mtip_submit_request(struct blk_mq_hw_ctx *hctx, struct request *rq) + rq_data_dir(rq))) { + return -ENODATA; + } +- if (unlikely(test_bit(MTIP_DDF_SEC_LOCK_BIT, &dd->dd_flag))) ++ if (unlikely(test_bit(MTIP_DDF_SEC_LOCK_BIT, &dd->dd_flag) || ++ test_bit(MTIP_DDF_REBUILD_FAILED_BIT, &dd->dd_flag))) + return -ENODATA; +- if (test_bit(MTIP_DDF_REBUILD_FAILED_BIT, &dd->dd_flag)) +- return -ENXIO; + } + + if (rq->cmd_flags & REQ_DISCARD) { +@@ -4065,52 +4045,51 @@ static int mtip_block_remove(struct driver_data *dd) + { + struct kobject *kobj; + +- if (!dd->sr) { +- mtip_hw_debugfs_exit(dd); ++ mtip_hw_debugfs_exit(dd); + +- if (dd->mtip_svc_handler) { +- set_bit(MTIP_PF_SVC_THD_STOP_BIT, &dd->port->flags); +- wake_up_interruptible(&dd->port->svc_wait); +- kthread_stop(dd->mtip_svc_handler); +- } ++ if (dd->mtip_svc_handler) { ++ set_bit(MTIP_PF_SVC_THD_STOP_BIT, &dd->port->flags); ++ wake_up_interruptible(&dd->port->svc_wait); ++ kthread_stop(dd->mtip_svc_handler); ++ } + +- /* Clean up the sysfs attributes, if created */ +- if (test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag)) { +- kobj = kobject_get(&disk_to_dev(dd->disk)->kobj); +- if (kobj) { +- mtip_hw_sysfs_exit(dd, kobj); +- kobject_put(kobj); +- } ++ /* Clean up the sysfs attributes, if created */ ++ if (test_bit(MTIP_DDF_INIT_DONE_BIT, &dd->dd_flag)) { ++ kobj = kobject_get(&disk_to_dev(dd->disk)->kobj); ++ if (kobj) { ++ mtip_hw_sysfs_exit(dd, kobj); ++ kobject_put(kobj); + } ++ } + ++ if (!dd->sr) + mtip_standby_drive(dd); +- +- /* +- * Delete our gendisk structure. This also removes the device +- * from /dev +- */ +- if (dd->bdev) { +- bdput(dd->bdev); +- dd->bdev = NULL; +- } +- if (dd->disk) { +- if (dd->disk->queue) { +- del_gendisk(dd->disk); +- blk_cleanup_queue(dd->queue); +- blk_mq_free_tag_set(&dd->tags); +- dd->queue = NULL; +- } else +- put_disk(dd->disk); +- } +- dd->disk = NULL; +- +- spin_lock(&rssd_index_lock); +- ida_remove(&rssd_index_ida, dd->index); +- spin_unlock(&rssd_index_lock); +- } else { ++ else + dev_info(&dd->pdev->dev, "device %s surprise removal\n", + dd->disk->disk_name); ++ ++ /* ++ * Delete our gendisk structure. This also removes the device ++ * from /dev ++ */ ++ if (dd->bdev) { ++ bdput(dd->bdev); ++ dd->bdev = NULL; + } ++ if (dd->disk) { ++ del_gendisk(dd->disk); ++ if (dd->disk->queue) { ++ blk_cleanup_queue(dd->queue); ++ blk_mq_free_tag_set(&dd->tags); ++ dd->queue = NULL; ++ } ++ put_disk(dd->disk); ++ } ++ dd->disk = NULL; ++ ++ spin_lock(&rssd_index_lock); ++ ida_remove(&rssd_index_ida, dd->index); ++ spin_unlock(&rssd_index_lock); + + /* De-initialize the protocol layer. 
*/ + mtip_hw_exit(dd); +@@ -4139,12 +4118,12 @@ static int mtip_block_shutdown(struct driver_data *dd) + dev_info(&dd->pdev->dev, + "Shutting down %s ...\n", dd->disk->disk_name); + ++ del_gendisk(dd->disk); + if (dd->disk->queue) { +- del_gendisk(dd->disk); + blk_cleanup_queue(dd->queue); + blk_mq_free_tag_set(&dd->tags); +- } else +- put_disk(dd->disk); ++ } ++ put_disk(dd->disk); + dd->disk = NULL; + dd->queue = NULL; + } +@@ -4484,7 +4463,7 @@ static void mtip_pci_remove(struct pci_dev *pdev) + struct driver_data *dd = pci_get_drvdata(pdev); + unsigned long flags, to; + +- set_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag); ++ set_bit(MTIP_DDF_REMOVAL_BIT, &dd->dd_flag); + + spin_lock_irqsave(&dev_lock, flags); + list_del_init(&dd->online_list); +@@ -4501,11 +4480,18 @@ static void mtip_pci_remove(struct pci_dev *pdev) + } while (atomic_read(&dd->irq_workers_active) != 0 && + time_before(jiffies, to)); + ++ fsync_bdev(dd->bdev); ++ + if (atomic_read(&dd->irq_workers_active) != 0) { + dev_warn(&dd->pdev->dev, + "Completion workers still active!\n"); + } + ++ if (dd->sr) ++ blk_mq_stop_hw_queues(dd->queue); ++ ++ set_bit(MTIP_DDF_REMOVE_PENDING_BIT, &dd->dd_flag); ++ + /* Clean up the block layer. */ + mtip_block_remove(dd); + +@@ -4523,10 +4509,8 @@ static void mtip_pci_remove(struct pci_dev *pdev) + list_del_init(&dd->remove_list); + spin_unlock_irqrestore(&dev_lock, flags); + +- if (!dd->sr) +- kfree(dd); +- else +- set_bit(MTIP_DDF_REMOVE_DONE_BIT, &dd->dd_flag); ++ kfree(dd); ++ set_bit(MTIP_DDF_REMOVE_DONE_BIT, &dd->dd_flag); + + pcim_iounmap_regions(pdev, 1 << MTIP_ABAR); + pci_set_drvdata(pdev, NULL); +diff --git a/drivers/block/mtip32xx/mtip32xx.h b/drivers/block/mtip32xx/mtip32xx.h +index ba1b31ee22ec..76695265dffb 100644 +--- a/drivers/block/mtip32xx/mtip32xx.h ++++ b/drivers/block/mtip32xx/mtip32xx.h +@@ -155,6 +155,7 @@ enum { + MTIP_DDF_RESUME_BIT = 6, + MTIP_DDF_INIT_DONE_BIT = 7, + MTIP_DDF_REBUILD_FAILED_BIT = 8, ++ MTIP_DDF_REMOVAL_BIT = 9, + + MTIP_DDF_STOP_IO = ((1 << MTIP_DDF_REMOVE_PENDING_BIT) | + (1 << MTIP_DDF_SEC_LOCK_BIT) | +diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c +index e527a3e13939..cbea17da6fe6 100644 +--- a/drivers/bluetooth/ath3k.c ++++ b/drivers/bluetooth/ath3k.c +@@ -82,6 +82,7 @@ static const struct usb_device_id ath3k_table[] = { + { USB_DEVICE(0x0489, 0xe05f) }, + { USB_DEVICE(0x0489, 0xe076) }, + { USB_DEVICE(0x0489, 0xe078) }, ++ { USB_DEVICE(0x0489, 0xe095) }, + { USB_DEVICE(0x04c5, 0x1330) }, + { USB_DEVICE(0x04CA, 0x3004) }, + { USB_DEVICE(0x04CA, 0x3005) }, +@@ -92,6 +93,7 @@ static const struct usb_device_id ath3k_table[] = { + { USB_DEVICE(0x04CA, 0x300d) }, + { USB_DEVICE(0x04CA, 0x300f) }, + { USB_DEVICE(0x04CA, 0x3010) }, ++ { USB_DEVICE(0x04CA, 0x3014) }, + { USB_DEVICE(0x0930, 0x0219) }, + { USB_DEVICE(0x0930, 0x0220) }, + { USB_DEVICE(0x0930, 0x0227) }, +@@ -111,10 +113,12 @@ static const struct usb_device_id ath3k_table[] = { + { USB_DEVICE(0x13d3, 0x3362) }, + { USB_DEVICE(0x13d3, 0x3375) }, + { USB_DEVICE(0x13d3, 0x3393) }, ++ { USB_DEVICE(0x13d3, 0x3395) }, + { USB_DEVICE(0x13d3, 0x3402) }, + { USB_DEVICE(0x13d3, 0x3408) }, + { USB_DEVICE(0x13d3, 0x3423) }, + { USB_DEVICE(0x13d3, 0x3432) }, ++ { USB_DEVICE(0x13d3, 0x3472) }, + { USB_DEVICE(0x13d3, 0x3474) }, + + /* Atheros AR5BBU12 with sflash firmware */ +@@ -142,6 +146,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = { + { USB_DEVICE(0x0489, 0xe05f), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0489, 0xe076), .driver_info = BTUSB_ATH3012 }, 
+ { USB_DEVICE(0x0489, 0xe078), .driver_info = BTUSB_ATH3012 }, ++ { USB_DEVICE(0x0489, 0xe095), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 }, +@@ -152,6 +157,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = { + { USB_DEVICE(0x04ca, 0x300d), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 }, ++ { USB_DEVICE(0x04ca, 0x3014), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0930, 0x0220), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0930, 0x0227), .driver_info = BTUSB_ATH3012 }, +@@ -171,10 +177,12 @@ static const struct usb_device_id ath3k_blist_tbl[] = { + { USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 }, ++ { USB_DEVICE(0x13d3, 0x3395), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3408), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3423), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 }, ++ { USB_DEVICE(0x13d3, 0x3472), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3474), .driver_info = BTUSB_ATH3012 }, + + /* Atheros AR5BBU22 with sflash firmware */ +diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c +index b1e4866dc8d5..b3334daab009 100644 +--- a/drivers/bluetooth/btusb.c ++++ b/drivers/bluetooth/btusb.c +@@ -174,6 +174,7 @@ static const struct usb_device_id blacklist_table[] = { + { USB_DEVICE(0x0489, 0xe05f), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0489, 0xe076), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0489, 0xe078), .driver_info = BTUSB_ATH3012 }, ++ { USB_DEVICE(0x0489, 0xe095), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04c5, 0x1330), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 }, +@@ -184,6 +185,7 @@ static const struct usb_device_id blacklist_table[] = { + { USB_DEVICE(0x04ca, 0x300d), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04ca, 0x300f), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x04ca, 0x3010), .driver_info = BTUSB_ATH3012 }, ++ { USB_DEVICE(0x04ca, 0x3014), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0930, 0x0219), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0930, 0x0220), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x0930, 0x0227), .driver_info = BTUSB_ATH3012 }, +@@ -203,10 +205,12 @@ static const struct usb_device_id blacklist_table[] = { + { USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3375), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3393), .driver_info = BTUSB_ATH3012 }, ++ { USB_DEVICE(0x13d3, 0x3395), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3402), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3408), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3423), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 }, ++ { USB_DEVICE(0x13d3, 0x3472), .driver_info = BTUSB_ATH3012 }, + { USB_DEVICE(0x13d3, 0x3474), .driver_info = 
BTUSB_ATH3012 }, + + /* Atheros AR5BBU12 with sflash firmware */ +diff --git a/drivers/bus/imx-weim.c b/drivers/bus/imx-weim.c +index 75c9681f8021..0ff86b980d05 100644 +--- a/drivers/bus/imx-weim.c ++++ b/drivers/bus/imx-weim.c +@@ -150,7 +150,7 @@ static int __init weim_parse_dt(struct platform_device *pdev, + return ret; + } + +- for_each_child_of_node(pdev->dev.of_node, child) { ++ for_each_available_child_of_node(pdev->dev.of_node, child) { + if (!child->name) + continue; + +diff --git a/drivers/clk/rockchip/clk-rk3188.c b/drivers/clk/rockchip/clk-rk3188.c +index 8088b384ce6e..63c80a319a0e 100644 +--- a/drivers/clk/rockchip/clk-rk3188.c ++++ b/drivers/clk/rockchip/clk-rk3188.c +@@ -713,6 +713,8 @@ static const char *rk3188_critical_clocks[] __initconst = { + "aclk_cpu", + "aclk_peri", + "hclk_peri", ++ "pclk_cpu", ++ "pclk_peri", + }; + + static void __init rk3188_common_clk_init(struct device_node *np) +diff --git a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c +index 8e162ad82085..5c93afb1841a 100644 +--- a/drivers/crypto/ccp/ccp-crypto-aes-cmac.c ++++ b/drivers/crypto/ccp/ccp-crypto-aes-cmac.c +@@ -201,6 +201,39 @@ static int ccp_aes_cmac_digest(struct ahash_request *req) + return ccp_aes_cmac_finup(req); + } + ++static int ccp_aes_cmac_export(struct ahash_request *req, void *out) ++{ ++ struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx(req); ++ struct ccp_aes_cmac_exp_ctx state; ++ ++ state.null_msg = rctx->null_msg; ++ memcpy(state.iv, rctx->iv, sizeof(state.iv)); ++ state.buf_count = rctx->buf_count; ++ memcpy(state.buf, rctx->buf, sizeof(state.buf)); ++ ++ /* 'out' may not be aligned so memcpy from local variable */ ++ memcpy(out, &state, sizeof(state)); ++ ++ return 0; ++} ++ ++static int ccp_aes_cmac_import(struct ahash_request *req, const void *in) ++{ ++ struct ccp_aes_cmac_req_ctx *rctx = ahash_request_ctx(req); ++ struct ccp_aes_cmac_exp_ctx state; ++ ++ /* 'in' may not be aligned so memcpy to local variable */ ++ memcpy(&state, in, sizeof(state)); ++ ++ memset(rctx, 0, sizeof(*rctx)); ++ rctx->null_msg = state.null_msg; ++ memcpy(rctx->iv, state.iv, sizeof(rctx->iv)); ++ rctx->buf_count = state.buf_count; ++ memcpy(rctx->buf, state.buf, sizeof(rctx->buf)); ++ ++ return 0; ++} ++ + static int ccp_aes_cmac_setkey(struct crypto_ahash *tfm, const u8 *key, + unsigned int key_len) + { +@@ -332,10 +365,13 @@ int ccp_register_aes_cmac_algs(struct list_head *head) + alg->final = ccp_aes_cmac_final; + alg->finup = ccp_aes_cmac_finup; + alg->digest = ccp_aes_cmac_digest; ++ alg->export = ccp_aes_cmac_export; ++ alg->import = ccp_aes_cmac_import; + alg->setkey = ccp_aes_cmac_setkey; + + halg = &alg->halg; + halg->digestsize = AES_BLOCK_SIZE; ++ halg->statesize = sizeof(struct ccp_aes_cmac_exp_ctx); + + base = &halg->base; + snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, "cmac(aes)"); +diff --git a/drivers/crypto/ccp/ccp-crypto-sha.c b/drivers/crypto/ccp/ccp-crypto-sha.c +index 96531571f7cf..b368e985a086 100644 +--- a/drivers/crypto/ccp/ccp-crypto-sha.c ++++ b/drivers/crypto/ccp/ccp-crypto-sha.c +@@ -193,6 +193,43 @@ static int ccp_sha_digest(struct ahash_request *req) + return ccp_sha_finup(req); + } + ++static int ccp_sha_export(struct ahash_request *req, void *out) ++{ ++ struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req); ++ struct ccp_sha_exp_ctx state; ++ ++ state.type = rctx->type; ++ state.msg_bits = rctx->msg_bits; ++ state.first = rctx->first; ++ memcpy(state.ctx, rctx->ctx, sizeof(state.ctx)); ++ state.buf_count = rctx->buf_count; 
++ memcpy(state.buf, rctx->buf, sizeof(state.buf)); ++ ++ /* 'out' may not be aligned so memcpy from local variable */ ++ memcpy(out, &state, sizeof(state)); ++ ++ return 0; ++} ++ ++static int ccp_sha_import(struct ahash_request *req, const void *in) ++{ ++ struct ccp_sha_req_ctx *rctx = ahash_request_ctx(req); ++ struct ccp_sha_exp_ctx state; ++ ++ /* 'in' may not be aligned so memcpy to local variable */ ++ memcpy(&state, in, sizeof(state)); ++ ++ memset(rctx, 0, sizeof(*rctx)); ++ rctx->type = state.type; ++ rctx->msg_bits = state.msg_bits; ++ rctx->first = state.first; ++ memcpy(rctx->ctx, state.ctx, sizeof(rctx->ctx)); ++ rctx->buf_count = state.buf_count; ++ memcpy(rctx->buf, state.buf, sizeof(rctx->buf)); ++ ++ return 0; ++} ++ + static int ccp_sha_setkey(struct crypto_ahash *tfm, const u8 *key, + unsigned int key_len) + { +@@ -388,9 +425,12 @@ static int ccp_register_sha_alg(struct list_head *head, + alg->final = ccp_sha_final; + alg->finup = ccp_sha_finup; + alg->digest = ccp_sha_digest; ++ alg->export = ccp_sha_export; ++ alg->import = ccp_sha_import; + + halg = &alg->halg; + halg->digestsize = def->digest_size; ++ halg->statesize = sizeof(struct ccp_sha_exp_ctx); + + base = &halg->base; + snprintf(base->cra_name, CRYPTO_MAX_ALG_NAME, "%s", def->name); +diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h +index 9aa4ae184f7f..7a0bb029ac8e 100644 +--- a/drivers/crypto/ccp/ccp-crypto.h ++++ b/drivers/crypto/ccp/ccp-crypto.h +@@ -132,6 +132,15 @@ struct ccp_aes_cmac_req_ctx { + struct ccp_cmd cmd; + }; + ++struct ccp_aes_cmac_exp_ctx { ++ unsigned int null_msg; ++ ++ u8 iv[AES_BLOCK_SIZE]; ++ ++ unsigned int buf_count; ++ u8 buf[AES_BLOCK_SIZE]; ++}; ++ + /***** SHA related defines *****/ + #define MAX_SHA_CONTEXT_SIZE SHA256_DIGEST_SIZE + #define MAX_SHA_BLOCK_SIZE SHA256_BLOCK_SIZE +@@ -174,6 +183,19 @@ struct ccp_sha_req_ctx { + struct ccp_cmd cmd; + }; + ++struct ccp_sha_exp_ctx { ++ enum ccp_sha_type type; ++ ++ u64 msg_bits; ++ ++ unsigned int first; ++ ++ u8 ctx[MAX_SHA_CONTEXT_SIZE]; ++ ++ unsigned int buf_count; ++ u8 buf[MAX_SHA_BLOCK_SIZE]; ++}; ++ + /***** Common Context Structure *****/ + struct ccp_ctx { + int (*complete)(struct crypto_async_request *req, int ret); +diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c +index c7236ba92b8e..88acf836fe05 100644 +--- a/drivers/edac/amd64_edac.c ++++ b/drivers/edac/amd64_edac.c +@@ -1316,7 +1316,7 @@ static u64 f1x_get_norm_dct_addr(struct amd64_pvt *pvt, u8 range, + u64 chan_off; + u64 dram_base = get_dram_base(pvt, range); + u64 hole_off = f10_dhar_offset(pvt); +- u64 dct_sel_base_off = (pvt->dct_sel_hi & 0xFFFFFC00) << 16; ++ u64 dct_sel_base_off = (u64)(pvt->dct_sel_hi & 0xFFFFFC00) << 16; + + if (hi_rng) { + /* +diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c +index 15697c630139..b97f5f0c5c0a 100644 +--- a/drivers/edac/sb_edac.c ++++ b/drivers/edac/sb_edac.c +@@ -991,8 +991,8 @@ static void get_memory_layout(const struct mem_ctl_info *mci) + edac_dbg(0, "TAD#%d: up to %u.%03u GB (0x%016Lx), socket interleave %d, memory interleave %d, TGT: %d, %d, %d, %d, reg=0x%08x\n", + n_tads, gb, (mb*1000)/1024, + ((u64)tmp_mb) << 20L, +- (u32)TAD_SOCK(reg), +- (u32)TAD_CH(reg), ++ (u32)(1 << TAD_SOCK(reg)), ++ (u32)TAD_CH(reg) + 1, + (u32)TAD_TGT0(reg), + (u32)TAD_TGT1(reg), + (u32)TAD_TGT2(reg), +@@ -1264,7 +1264,7 @@ static int get_memory_error_data(struct mem_ctl_info *mci, + } + + ch_way = TAD_CH(reg) + 1; +- sck_way = TAD_SOCK(reg) + 1; ++ sck_way = 1 << TAD_SOCK(reg); + + 
if (ch_way == 3) + idx = addr >> 6; +@@ -1321,7 +1321,7 @@ static int get_memory_error_data(struct mem_ctl_info *mci, + n_tads, + addr, + limit, +- (u32)TAD_SOCK(reg), ++ sck_way, + ch_way, + offset, + idx, +@@ -1336,18 +1336,12 @@ static int get_memory_error_data(struct mem_ctl_info *mci, + offset, addr); + return -EINVAL; + } +- addr -= offset; +- /* Store the low bits [0:6] of the addr */ +- ch_addr = addr & 0x7f; +- /* Remove socket wayness and remove 6 bits */ +- addr >>= 6; +- addr = div_u64(addr, sck_xch); +-#if 0 +- /* Divide by channel way */ +- addr = addr / ch_way; +-#endif +- /* Recover the last 6 bits */ +- ch_addr |= addr << 6; ++ ++ ch_addr = addr - offset; ++ ch_addr >>= (6 + shiftup); ++ ch_addr /= ch_way * sck_way; ++ ch_addr <<= (6 + shiftup); ++ ch_addr |= addr & ((1 << (6 + shiftup)) - 1); + + /* + * Step 3) Decode rank +diff --git a/drivers/firmware/efi/efivars.c b/drivers/firmware/efi/efivars.c +index f256ecd8a176..9790d7707119 100644 +--- a/drivers/firmware/efi/efivars.c ++++ b/drivers/firmware/efi/efivars.c +@@ -221,7 +221,7 @@ sanity_check(struct efi_variable *var, efi_char16_t *name, efi_guid_t vendor, + } + + if ((attributes & ~EFI_VARIABLE_MASK) != 0 || +- efivar_validate(name, data, size) == false) { ++ efivar_validate(vendor, name, data, size) == false) { + printk(KERN_ERR "efivars: Malformed variable content\n"); + return -EINVAL; + } +@@ -447,7 +447,8 @@ static ssize_t efivar_create(struct file *filp, struct kobject *kobj, + } + + if ((attributes & ~EFI_VARIABLE_MASK) != 0 || +- efivar_validate(name, data, size) == false) { ++ efivar_validate(new_var->VendorGuid, name, data, ++ size) == false) { + printk(KERN_ERR "efivars: Malformed variable content\n"); + return -EINVAL; + } +@@ -535,50 +536,44 @@ static ssize_t efivar_delete(struct file *filp, struct kobject *kobj, + * efivar_create_sysfs_entry - create a new entry in sysfs + * @new_var: efivar entry to create + * +- * Returns 1 on failure, 0 on success ++ * Returns 0 on success, negative error code on failure + */ + static int + efivar_create_sysfs_entry(struct efivar_entry *new_var) + { +- int i, short_name_size; ++ int short_name_size; + char *short_name; +- unsigned long variable_name_size; +- efi_char16_t *variable_name; +- +- variable_name = new_var->var.VariableName; +- variable_name_size = ucs2_strlen(variable_name) * sizeof(efi_char16_t); ++ unsigned long utf8_name_size; ++ efi_char16_t *variable_name = new_var->var.VariableName; ++ int ret; + + /* +- * Length of the variable bytes in ASCII, plus the '-' separator, ++ * Length of the variable bytes in UTF8, plus the '-' separator, + * plus the GUID, plus trailing NUL + */ +- short_name_size = variable_name_size / sizeof(efi_char16_t) +- + 1 + EFI_VARIABLE_GUID_LEN + 1; +- +- short_name = kzalloc(short_name_size, GFP_KERNEL); ++ utf8_name_size = ucs2_utf8size(variable_name); ++ short_name_size = utf8_name_size + 1 + EFI_VARIABLE_GUID_LEN + 1; + ++ short_name = kmalloc(short_name_size, GFP_KERNEL); + if (!short_name) +- return 1; ++ return -ENOMEM; ++ ++ ucs2_as_utf8(short_name, variable_name, short_name_size); + +- /* Convert Unicode to normal chars (assume top bits are 0), +- ala UTF-8 */ +- for (i=0; i < (int)(variable_name_size / sizeof(efi_char16_t)); i++) { +- short_name[i] = variable_name[i] & 0xFF; +- } + /* This is ugly, but necessary to separate one vendor's + private variables from another's. 
*/ + +- *(short_name + strlen(short_name)) = '-'; ++ short_name[utf8_name_size] = '-'; + efi_guid_unparse(&new_var->var.VendorGuid, +- short_name + strlen(short_name)); ++ short_name + utf8_name_size + 1); + + new_var->kobj.kset = efivars_kset; + +- i = kobject_init_and_add(&new_var->kobj, &efivar_ktype, ++ ret = kobject_init_and_add(&new_var->kobj, &efivar_ktype, + NULL, "%s", short_name); + kfree(short_name); +- if (i) +- return 1; ++ if (ret) ++ return ret; + + kobject_uevent(&new_var->kobj, KOBJ_ADD); + efivar_entry_add(new_var, &efivar_sysfs_list); +diff --git a/drivers/firmware/efi/vars.c b/drivers/firmware/efi/vars.c +index 70a0fb10517f..7f2ea21c730d 100644 +--- a/drivers/firmware/efi/vars.c ++++ b/drivers/firmware/efi/vars.c +@@ -165,67 +165,133 @@ validate_ascii_string(efi_char16_t *var_name, int match, u8 *buffer, + } + + struct variable_validate { ++ efi_guid_t vendor; + char *name; + bool (*validate)(efi_char16_t *var_name, int match, u8 *data, + unsigned long len); + }; + ++/* ++ * This is the list of variables we need to validate, as well as the ++ * whitelist for what we think is safe not to default to immutable. ++ * ++ * If it has a validate() method that's not NULL, it'll go into the ++ * validation routine. If not, it is assumed valid, but still used for ++ * whitelisting. ++ * ++ * Note that it's sorted by {vendor,name}, but globbed names must come after ++ * any other name with the same prefix. ++ */ + static const struct variable_validate variable_validate[] = { +- { "BootNext", validate_uint16 }, +- { "BootOrder", validate_boot_order }, +- { "DriverOrder", validate_boot_order }, +- { "Boot*", validate_load_option }, +- { "Driver*", validate_load_option }, +- { "ConIn", validate_device_path }, +- { "ConInDev", validate_device_path }, +- { "ConOut", validate_device_path }, +- { "ConOutDev", validate_device_path }, +- { "ErrOut", validate_device_path }, +- { "ErrOutDev", validate_device_path }, +- { "Timeout", validate_uint16 }, +- { "Lang", validate_ascii_string }, +- { "PlatformLang", validate_ascii_string }, +- { "", NULL }, ++ { EFI_GLOBAL_VARIABLE_GUID, "BootNext", validate_uint16 }, ++ { EFI_GLOBAL_VARIABLE_GUID, "BootOrder", validate_boot_order }, ++ { EFI_GLOBAL_VARIABLE_GUID, "Boot*", validate_load_option }, ++ { EFI_GLOBAL_VARIABLE_GUID, "DriverOrder", validate_boot_order }, ++ { EFI_GLOBAL_VARIABLE_GUID, "Driver*", validate_load_option }, ++ { EFI_GLOBAL_VARIABLE_GUID, "ConIn", validate_device_path }, ++ { EFI_GLOBAL_VARIABLE_GUID, "ConInDev", validate_device_path }, ++ { EFI_GLOBAL_VARIABLE_GUID, "ConOut", validate_device_path }, ++ { EFI_GLOBAL_VARIABLE_GUID, "ConOutDev", validate_device_path }, ++ { EFI_GLOBAL_VARIABLE_GUID, "ErrOut", validate_device_path }, ++ { EFI_GLOBAL_VARIABLE_GUID, "ErrOutDev", validate_device_path }, ++ { EFI_GLOBAL_VARIABLE_GUID, "Lang", validate_ascii_string }, ++ { EFI_GLOBAL_VARIABLE_GUID, "OsIndications", NULL }, ++ { EFI_GLOBAL_VARIABLE_GUID, "PlatformLang", validate_ascii_string }, ++ { EFI_GLOBAL_VARIABLE_GUID, "Timeout", validate_uint16 }, ++ { LINUX_EFI_CRASH_GUID, "*", NULL }, ++ { NULL_GUID, "", NULL }, + }; + ++static bool ++variable_matches(const char *var_name, size_t len, const char *match_name, ++ int *match) ++{ ++ for (*match = 0; ; (*match)++) { ++ char c = match_name[*match]; ++ char u = var_name[*match]; ++ ++ /* Wildcard in the matching name means we've matched */ ++ if (c == '*') ++ return true; ++ ++ /* Case sensitive match */ ++ if (!c && *match == len) ++ return true; ++ ++ if (c != u) ++ return false; ++ 
++ if (!c) ++ return true; ++ } ++ return true; ++} ++ + bool +-efivar_validate(efi_char16_t *var_name, u8 *data, unsigned long len) ++efivar_validate(efi_guid_t vendor, efi_char16_t *var_name, u8 *data, ++ unsigned long data_size) + { + int i; +- u16 *unicode_name = var_name; ++ unsigned long utf8_size; ++ u8 *utf8_name; + +- for (i = 0; variable_validate[i].validate != NULL; i++) { +- const char *name = variable_validate[i].name; +- int match; ++ utf8_size = ucs2_utf8size(var_name); ++ utf8_name = kmalloc(utf8_size + 1, GFP_KERNEL); ++ if (!utf8_name) ++ return false; + +- for (match = 0; ; match++) { +- char c = name[match]; +- u16 u = unicode_name[match]; ++ ucs2_as_utf8(utf8_name, var_name, utf8_size); ++ utf8_name[utf8_size] = '\0'; + +- /* All special variables are plain ascii */ +- if (u > 127) +- return true; ++ for (i = 0; variable_validate[i].name[0] != '\0'; i++) { ++ const char *name = variable_validate[i].name; ++ int match = 0; + +- /* Wildcard in the matching name means we've matched */ +- if (c == '*') +- return variable_validate[i].validate(var_name, +- match, data, len); ++ if (efi_guidcmp(vendor, variable_validate[i].vendor)) ++ continue; + +- /* Case sensitive match */ +- if (c != u) ++ if (variable_matches(utf8_name, utf8_size+1, name, &match)) { ++ if (variable_validate[i].validate == NULL) + break; +- +- /* Reached the end of the string while matching */ +- if (!c) +- return variable_validate[i].validate(var_name, +- match, data, len); ++ kfree(utf8_name); ++ return variable_validate[i].validate(var_name, match, ++ data, data_size); + } + } +- ++ kfree(utf8_name); + return true; + } + EXPORT_SYMBOL_GPL(efivar_validate); + ++bool ++efivar_variable_is_removable(efi_guid_t vendor, const char *var_name, ++ size_t len) ++{ ++ int i; ++ bool found = false; ++ int match = 0; ++ ++ /* ++ * Check if our variable is in the validated variables list ++ */ ++ for (i = 0; variable_validate[i].name[0] != '\0'; i++) { ++ if (efi_guidcmp(variable_validate[i].vendor, vendor)) ++ continue; ++ ++ if (variable_matches(var_name, len, ++ variable_validate[i].name, &match)) { ++ found = true; ++ break; ++ } ++ } ++ ++ /* ++ * If it's in our list, it is removable. 
++ */ ++ return found; ++} ++EXPORT_SYMBOL_GPL(efivar_variable_is_removable); ++ + static efi_status_t + check_var_size(u32 attributes, unsigned long size) + { +@@ -852,7 +918,7 @@ int efivar_entry_set_get_size(struct efivar_entry *entry, u32 attributes, + + *set = false; + +- if (efivar_validate(name, data, *size) == false) ++ if (efivar_validate(*vendor, name, data, *size) == false) + return -EINVAL; + + /* +diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c +index 72deb52fd1d0..563d3d2c54a9 100644 +--- a/drivers/gpu/drm/drm_dp_mst_topology.c ++++ b/drivers/gpu/drm/drm_dp_mst_topology.c +@@ -996,18 +996,27 @@ static bool drm_dp_port_setup_pdt(struct drm_dp_mst_port *port) + return send_link; + } + +-static void drm_dp_check_port_guid(struct drm_dp_mst_branch *mstb, +- struct drm_dp_mst_port *port) ++static void drm_dp_check_mstb_guid(struct drm_dp_mst_branch *mstb, u8 *guid) + { + int ret; +- if (port->dpcd_rev >= 0x12) { +- port->guid_valid = drm_dp_validate_guid(mstb->mgr, port->guid); +- if (!port->guid_valid) { +- ret = drm_dp_send_dpcd_write(mstb->mgr, +- port, +- DP_GUID, +- 16, port->guid); +- port->guid_valid = true; ++ ++ memcpy(mstb->guid, guid, 16); ++ ++ if (!drm_dp_validate_guid(mstb->mgr, mstb->guid)) { ++ if (mstb->port_parent) { ++ ret = drm_dp_send_dpcd_write( ++ mstb->mgr, ++ mstb->port_parent, ++ DP_GUID, ++ 16, ++ mstb->guid); ++ } else { ++ ++ ret = drm_dp_dpcd_write( ++ mstb->mgr->aux, ++ DP_GUID, ++ mstb->guid, ++ 16); + } + } + } +@@ -1064,7 +1073,6 @@ static void drm_dp_add_port(struct drm_dp_mst_branch *mstb, + port->dpcd_rev = port_msg->dpcd_revision; + port->num_sdp_streams = port_msg->num_sdp_streams; + port->num_sdp_stream_sinks = port_msg->num_sdp_stream_sinks; +- memcpy(port->guid, port_msg->peer_guid, 16); + + /* manage mstb port lists with mgr lock - take a reference + for this list */ +@@ -1077,11 +1085,9 @@ static void drm_dp_add_port(struct drm_dp_mst_branch *mstb, + + if (old_ddps != port->ddps) { + if (port->ddps) { +- drm_dp_check_port_guid(mstb, port); + if (!port->input) + drm_dp_send_enum_path_resources(mstb->mgr, mstb, port); + } else { +- port->guid_valid = false; + port->available_pbn = 0; + } + } +@@ -1126,10 +1132,8 @@ static void drm_dp_update_port(struct drm_dp_mst_branch *mstb, + + if (old_ddps != port->ddps) { + if (port->ddps) { +- drm_dp_check_port_guid(mstb, port); + dowork = true; + } else { +- port->guid_valid = false; + port->available_pbn = 0; + } + } +@@ -1185,13 +1189,14 @@ static struct drm_dp_mst_branch *get_mst_branch_device_by_guid_helper( + struct drm_dp_mst_branch *found_mstb; + struct drm_dp_mst_port *port; + ++ if (memcmp(mstb->guid, guid, 16) == 0) ++ return mstb; ++ ++ + list_for_each_entry(port, &mstb->ports, next) { + if (!port->mstb) + continue; + +- if (port->guid_valid && memcmp(port->guid, guid, 16) == 0) +- return port->mstb; +- + found_mstb = get_mst_branch_device_by_guid_helper(port->mstb, guid); + + if (found_mstb) +@@ -1210,10 +1215,7 @@ static struct drm_dp_mst_branch *drm_dp_get_mst_branch_device_by_guid( + /* find the port by iterating down */ + mutex_lock(&mgr->lock); + +- if (mgr->guid_valid && memcmp(mgr->guid, guid, 16) == 0) +- mstb = mgr->mst_primary; +- else +- mstb = get_mst_branch_device_by_guid_helper(mgr->mst_primary, guid); ++ mstb = get_mst_branch_device_by_guid_helper(mgr->mst_primary, guid); + + if (mstb) + kref_get(&mstb->kref); +@@ -1525,6 +1527,9 @@ static int drm_dp_send_link_address(struct drm_dp_mst_topology_mgr *mgr, + 
txmsg->reply.u.link_addr.ports[i].num_sdp_streams, + txmsg->reply.u.link_addr.ports[i].num_sdp_stream_sinks); + } ++ ++ drm_dp_check_mstb_guid(mstb, txmsg->reply.u.link_addr.guid); ++ + for (i = 0; i < txmsg->reply.u.link_addr.nports; i++) { + drm_dp_add_port(mstb, mgr->dev, &txmsg->reply.u.link_addr.ports[i]); + } +@@ -1928,31 +1933,17 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms + mgr->mst_primary = mstb; + kref_get(&mgr->mst_primary->kref); + +- { +- struct drm_dp_payload reset_pay; +- reset_pay.start_slot = 0; +- reset_pay.num_slots = 0x3f; +- drm_dp_dpcd_write_payload(mgr, 0, &reset_pay); +- } +- + ret = drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, +- DP_MST_EN | DP_UP_REQ_EN | DP_UPSTREAM_IS_SRC); ++ DP_MST_EN | DP_UP_REQ_EN | DP_UPSTREAM_IS_SRC); + if (ret < 0) { + goto out_unlock; + } + +- +- /* sort out guid */ +- ret = drm_dp_dpcd_read(mgr->aux, DP_GUID, mgr->guid, 16); +- if (ret != 16) { +- DRM_DEBUG_KMS("failed to read DP GUID %d\n", ret); +- goto out_unlock; +- } +- +- mgr->guid_valid = drm_dp_validate_guid(mgr, mgr->guid); +- if (!mgr->guid_valid) { +- ret = drm_dp_dpcd_write(mgr->aux, DP_GUID, mgr->guid, 16); +- mgr->guid_valid = true; ++ { ++ struct drm_dp_payload reset_pay; ++ reset_pay.start_slot = 0; ++ reset_pay.num_slots = 0x3f; ++ drm_dp_dpcd_write_payload(mgr, 0, &reset_pay); + } + + queue_work(system_long_wq, &mgr->work); +@@ -2174,6 +2165,7 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr) + } + + drm_dp_update_port(mstb, &msg.u.conn_stat); ++ + DRM_DEBUG_KMS("Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n", msg.u.conn_stat.port_number, msg.u.conn_stat.legacy_device_plug_status, msg.u.conn_stat.displayport_device_plug_status, msg.u.conn_stat.message_capability_status, msg.u.conn_stat.input_port, msg.u.conn_stat.peer_device_type); + (*mgr->cbs->hotplug)(mgr); + +diff --git a/drivers/gpu/drm/gma500/gem.c b/drivers/gpu/drm/gma500/gem.c +index c707fa6fca85..e3bdc8b1c32c 100644 +--- a/drivers/gpu/drm/gma500/gem.c ++++ b/drivers/gpu/drm/gma500/gem.c +@@ -130,7 +130,7 @@ int psb_gem_create(struct drm_file *file, struct drm_device *dev, u64 size, + return ret; + } + /* We have the initial and handle reference but need only one now */ +- drm_gem_object_unreference(&r->gem); ++ drm_gem_object_unreference_unlocked(&r->gem); + *handlep = handle; + return 0; + } +diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c +index d8a5db204a81..01701105653d 100644 +--- a/drivers/gpu/drm/radeon/atombios_encoders.c ++++ b/drivers/gpu/drm/radeon/atombios_encoders.c +@@ -872,8 +872,6 @@ atombios_dig_encoder_setup(struct drm_encoder *encoder, int action, int panel_mo + else + args.v1.ucLaneNum = 4; + +- if (ENCODER_MODE_IS_DP(args.v1.ucEncoderMode) && (dp_clock == 270000)) +- args.v1.ucConfig |= ATOM_ENCODER_CONFIG_DPLINKRATE_2_70GHZ; + switch (radeon_encoder->encoder_id) { + case ENCODER_OBJECT_ID_INTERNAL_UNIPHY: + args.v1.ucConfig = ATOM_ENCODER_CONFIG_V2_TRANSMITTER1; +@@ -890,6 +888,10 @@ atombios_dig_encoder_setup(struct drm_encoder *encoder, int action, int panel_mo + args.v1.ucConfig |= ATOM_ENCODER_CONFIG_LINKB; + else + args.v1.ucConfig |= ATOM_ENCODER_CONFIG_LINKA; ++ ++ if (ENCODER_MODE_IS_DP(args.v1.ucEncoderMode) && (dp_clock == 270000)) ++ args.v1.ucConfig |= ATOM_ENCODER_CONFIG_DPLINKRATE_2_70GHZ; ++ + break; + case 2: + case 3: +diff --git a/drivers/gpu/drm/radeon/radeon_atpx_handler.c b/drivers/gpu/drm/radeon/radeon_atpx_handler.c +index 
8bc7d0bbd3c8..1523cf94bcdc 100644 +--- a/drivers/gpu/drm/radeon/radeon_atpx_handler.c ++++ b/drivers/gpu/drm/radeon/radeon_atpx_handler.c +@@ -62,6 +62,10 @@ bool radeon_has_atpx(void) { + return radeon_atpx_priv.atpx_detected; + } + ++bool radeon_has_atpx_dgpu_power_cntl(void) { ++ return radeon_atpx_priv.atpx.functions.power_cntl; ++} ++ + /** + * radeon_atpx_call - call an ATPX method + * +@@ -141,10 +145,6 @@ static void radeon_atpx_parse_functions(struct radeon_atpx_functions *f, u32 mas + */ + static int radeon_atpx_validate(struct radeon_atpx *atpx) + { +- /* make sure required functions are enabled */ +- /* dGPU power control is required */ +- atpx->functions.power_cntl = true; +- + if (atpx->functions.px_params) { + union acpi_object *info; + struct atpx_px_params output; +diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c +index e206795e3693..eb5f88aa76ab 100644 +--- a/drivers/gpu/drm/radeon/radeon_device.c ++++ b/drivers/gpu/drm/radeon/radeon_device.c +@@ -103,6 +103,12 @@ static const char radeon_family_name[][16] = { + "LAST", + }; + ++#if defined(CONFIG_VGA_SWITCHEROO) ++bool radeon_has_atpx_dgpu_power_cntl(void); ++#else ++static inline bool radeon_has_atpx_dgpu_power_cntl(void) { return false; } ++#endif ++ + #define RADEON_PX_QUIRK_DISABLE_PX (1 << 0) + #define RADEON_PX_QUIRK_LONG_WAKEUP (1 << 1) + +@@ -1395,7 +1401,7 @@ int radeon_device_init(struct radeon_device *rdev, + * ignore it */ + vga_client_register(rdev->pdev, rdev, NULL, radeon_vga_set_decode); + +- if (rdev->flags & RADEON_IS_PX) ++ if ((rdev->flags & RADEON_IS_PX) && radeon_has_atpx_dgpu_power_cntl()) + runtime = true; + vga_switcheroo_register_client(rdev->pdev, &radeon_switcheroo_ops, runtime); + if (runtime) +@@ -1665,7 +1671,6 @@ int radeon_resume_kms(struct drm_device *dev, bool resume, bool fbcon) + } + + drm_kms_helper_poll_enable(dev); +- drm_helper_hpd_irq_event(dev); + + /* set the power state here in case we are a PX system or headless */ + if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled) +diff --git a/drivers/gpu/drm/radeon/radeon_fb.c b/drivers/gpu/drm/radeon/radeon_fb.c +index 0ea1db83d573..a5dc86ae0bdd 100644 +--- a/drivers/gpu/drm/radeon/radeon_fb.c ++++ b/drivers/gpu/drm/radeon/radeon_fb.c +@@ -303,7 +303,8 @@ out_unref: + + void radeon_fb_output_poll_changed(struct radeon_device *rdev) + { +- drm_fb_helper_hotplug_event(&rdev->mode_info.rfbdev->helper); ++ if (rdev->mode_info.rfbdev) ++ drm_fb_helper_hotplug_event(&rdev->mode_info.rfbdev->helper); + } + + static int radeon_fbdev_destroy(struct drm_device *dev, struct radeon_fbdev *rfbdev) +@@ -343,6 +344,10 @@ int radeon_fbdev_init(struct radeon_device *rdev) + int bpp_sel = 32; + int ret; + ++ /* don't enable fbdev if no connectors */ ++ if (list_empty(&rdev->ddev->mode_config.connector_list)) ++ return 0; ++ + /* select 8 bpp console on RN50 or 16MB cards */ + if (ASIC_IS_RN50(rdev) || rdev->mc.real_vram_size <= (32*1024*1024)) + bpp_sel = 8; +@@ -386,7 +391,8 @@ void radeon_fbdev_fini(struct radeon_device *rdev) + + void radeon_fbdev_set_suspend(struct radeon_device *rdev, int state) + { +- fb_set_suspend(rdev->mode_info.rfbdev->helper.fbdev, state); ++ if (rdev->mode_info.rfbdev) ++ fb_set_suspend(rdev->mode_info.rfbdev->helper.fbdev, state); + } + + int radeon_fbdev_total_size(struct radeon_device *rdev) +@@ -401,7 +407,22 @@ int radeon_fbdev_total_size(struct radeon_device *rdev) + + bool radeon_fbdev_robj_is_fb(struct radeon_device *rdev, struct radeon_bo *robj) + { ++ if 
(!rdev->mode_info.rfbdev) ++ return false; ++ + if (robj == gem_to_radeon_bo(rdev->mode_info.rfbdev->rfb.obj)) + return true; + return false; + } ++ ++void radeon_fb_add_connector(struct radeon_device *rdev, struct drm_connector *connector) ++{ ++ if (rdev->mode_info.rfbdev) ++ drm_fb_helper_add_one_connector(&rdev->mode_info.rfbdev->helper, connector); ++} ++ ++void radeon_fb_remove_connector(struct radeon_device *rdev, struct drm_connector *connector) ++{ ++ if (rdev->mode_info.rfbdev) ++ drm_fb_helper_remove_one_connector(&rdev->mode_info.rfbdev->helper, connector); ++} +diff --git a/drivers/gpu/drm/radeon/radeon_mode.h b/drivers/gpu/drm/radeon/radeon_mode.h +index 04db2fdd8692..3a395f405fff 100644 +--- a/drivers/gpu/drm/radeon/radeon_mode.h ++++ b/drivers/gpu/drm/radeon/radeon_mode.h +@@ -921,6 +921,10 @@ bool radeon_fbdev_robj_is_fb(struct radeon_device *rdev, struct radeon_bo *robj) + void radeon_fb_output_poll_changed(struct radeon_device *rdev); + + void radeon_crtc_handle_vblank(struct radeon_device *rdev, int crtc_id); ++ ++void radeon_fb_add_connector(struct radeon_device *rdev, struct drm_connector *connector); ++void radeon_fb_remove_connector(struct radeon_device *rdev, struct drm_connector *connector); ++ + void radeon_crtc_handle_flip(struct radeon_device *rdev, int crtc_id); + + int radeon_align_pitch(struct radeon_device *rdev, int width, int bpp, bool tiled); +diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c +index ab52d1b30161..cb4bc0dadba5 100644 +--- a/drivers/hid/hid-core.c ++++ b/drivers/hid/hid-core.c +@@ -2553,8 +2553,10 @@ int hid_add_device(struct hid_device *hdev) + /* + * Scan generic devices for group information + */ +- if (hid_ignore_special_drivers || +- !hid_match_id(hdev, hid_have_special_driver)) { ++ if (hid_ignore_special_drivers) { ++ hdev->group = HID_GROUP_GENERIC; ++ } else if (!hdev->group && ++ !hid_match_id(hdev, hid_have_special_driver)) { + ret = hid_scan_report(hdev); + if (ret) + hid_warn(hdev, "bad device descriptor (%d)\n", ret); +diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c +index 51e25b9407f2..e24e22687f33 100644 +--- a/drivers/hid/hid-multitouch.c ++++ b/drivers/hid/hid-multitouch.c +@@ -312,8 +312,19 @@ static void mt_feature_mapping(struct hid_device *hdev, + break; + } + +- td->inputmode = field->report->id; +- td->inputmode_index = usage->usage_index; ++ if (td->inputmode < 0) { ++ td->inputmode = field->report->id; ++ td->inputmode_index = usage->usage_index; ++ } else { ++ /* ++ * Some elan panels wrongly declare 2 input mode ++ * features, and silently ignore when we set the ++ * value in the second field. Skip the second feature ++ * and hope for the best. 
++ */ ++ dev_info(&hdev->dev, ++ "Ignoring the extra HID_DG_INPUTMODE\n"); ++ } + + break; + case HID_DG_CONTACTMAX: +diff --git a/drivers/hid/i2c-hid/i2c-hid.c b/drivers/hid/i2c-hid/i2c-hid.c +index 6d7c9c580ceb..fdcce357f395 100644 +--- a/drivers/hid/i2c-hid/i2c-hid.c ++++ b/drivers/hid/i2c-hid/i2c-hid.c +@@ -277,17 +277,21 @@ static int i2c_hid_set_or_send_report(struct i2c_client *client, u8 reportType, + u16 dataRegister = le16_to_cpu(ihid->hdesc.wDataRegister); + u16 outputRegister = le16_to_cpu(ihid->hdesc.wOutputRegister); + u16 maxOutputLength = le16_to_cpu(ihid->hdesc.wMaxOutputLength); ++ u16 size; ++ int args_len; ++ int index = 0; ++ ++ i2c_hid_dbg(ihid, "%s\n", __func__); ++ ++ if (data_len > ihid->bufsize) ++ return -EINVAL; + +- /* hid_hw_* already checked that data_len < HID_MAX_BUFFER_SIZE */ +- u16 size = 2 /* size */ + ++ size = 2 /* size */ + + (reportID ? 1 : 0) /* reportID */ + + data_len /* buf */; +- int args_len = (reportID >= 0x0F ? 1 : 0) /* optional third byte */ + ++ args_len = (reportID >= 0x0F ? 1 : 0) /* optional third byte */ + + 2 /* dataRegister */ + + size /* args */; +- int index = 0; +- +- i2c_hid_dbg(ihid, "%s\n", __func__); + + if (!use_data && maxOutputLength == 0) + return -ENOSYS; +diff --git a/drivers/iio/dac/mcp4725.c b/drivers/iio/dac/mcp4725.c +index 43d14588448d..b4dde8315210 100644 +--- a/drivers/iio/dac/mcp4725.c ++++ b/drivers/iio/dac/mcp4725.c +@@ -300,6 +300,7 @@ static int mcp4725_probe(struct i2c_client *client, + data->client = client; + + indio_dev->dev.parent = &client->dev; ++ indio_dev->name = id->name; + indio_dev->info = &mcp4725_info; + indio_dev->channels = &mcp4725_channel; + indio_dev->num_channels = 1; +diff --git a/drivers/iio/imu/adis_buffer.c b/drivers/iio/imu/adis_buffer.c +index cb32b593f1c5..36607d52fee0 100644 +--- a/drivers/iio/imu/adis_buffer.c ++++ b/drivers/iio/imu/adis_buffer.c +@@ -43,7 +43,7 @@ int adis_update_scan_mode(struct iio_dev *indio_dev, + return -ENOMEM; + + rx = adis->buffer; +- tx = rx + indio_dev->scan_bytes; ++ tx = rx + scan_count; + + spi_message_init(&adis->msg); + +diff --git a/drivers/iio/pressure/mpl115.c b/drivers/iio/pressure/mpl115.c +index f5ecd6e19f5d..a0d7deeac62f 100644 +--- a/drivers/iio/pressure/mpl115.c ++++ b/drivers/iio/pressure/mpl115.c +@@ -117,7 +117,7 @@ static int mpl115_read_raw(struct iio_dev *indio_dev, + *val = ret >> 6; + return IIO_VAL_INT; + case IIO_CHAN_INFO_OFFSET: +- *val = 605; ++ *val = -605; + *val2 = 750000; + return IIO_VAL_INT_PLUS_MICRO; + case IIO_CHAN_INFO_SCALE: +diff --git a/drivers/infiniband/hw/cxgb3/iwch_cm.c b/drivers/infiniband/hw/cxgb3/iwch_cm.c +index cb78b1e9bcd9..f504ba73e5dc 100644 +--- a/drivers/infiniband/hw/cxgb3/iwch_cm.c ++++ b/drivers/infiniband/hw/cxgb3/iwch_cm.c +@@ -149,7 +149,7 @@ static int iwch_l2t_send(struct t3cdev *tdev, struct sk_buff *skb, struct l2t_en + error = l2t_send(tdev, skb, l2e); + if (error < 0) + kfree_skb(skb); +- return error; ++ return error < 0 ? error : 0; + } + + int iwch_cxgb3_ofld_send(struct t3cdev *tdev, struct sk_buff *skb) +@@ -165,7 +165,7 @@ int iwch_cxgb3_ofld_send(struct t3cdev *tdev, struct sk_buff *skb) + error = cxgb3_ofld_send(tdev, skb); + if (error < 0) + kfree_skb(skb); +- return error; ++ return error < 0 ? 
error : 0; + } + + static void release_tid(struct t3cdev *tdev, u32 hwtid, struct sk_buff *skb) +diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c +index ad4af66a4cbb..9fc0326c1da7 100644 +--- a/drivers/infiniband/ulp/srpt/ib_srpt.c ++++ b/drivers/infiniband/ulp/srpt/ib_srpt.c +@@ -1745,47 +1745,6 @@ send_sense: + return -1; + } + +-/** +- * srpt_rx_mgmt_fn_tag() - Process a task management function by tag. +- * @ch: RDMA channel of the task management request. +- * @fn: Task management function to perform. +- * @req_tag: Tag of the SRP task management request. +- * @mgmt_ioctx: I/O context of the task management request. +- * +- * Returns zero if the target core will process the task management +- * request asynchronously. +- * +- * Note: It is assumed that the initiator serializes tag-based task management +- * requests. +- */ +-static int srpt_rx_mgmt_fn_tag(struct srpt_send_ioctx *ioctx, u64 tag) +-{ +- struct srpt_device *sdev; +- struct srpt_rdma_ch *ch; +- struct srpt_send_ioctx *target; +- int ret, i; +- +- ret = -EINVAL; +- ch = ioctx->ch; +- BUG_ON(!ch); +- BUG_ON(!ch->sport); +- sdev = ch->sport->sdev; +- BUG_ON(!sdev); +- spin_lock_irq(&sdev->spinlock); +- for (i = 0; i < ch->rq_size; ++i) { +- target = ch->ioctx_ring[i]; +- if (target->cmd.se_lun == ioctx->cmd.se_lun && +- target->tag == tag && +- srpt_get_cmd_state(target) != SRPT_STATE_DONE) { +- ret = 0; +- /* now let the target core abort &target->cmd; */ +- break; +- } +- } +- spin_unlock_irq(&sdev->spinlock); +- return ret; +-} +- + static int srp_tmr_to_tcm(int fn) + { + switch (fn) { +@@ -1820,7 +1779,6 @@ static void srpt_handle_tsk_mgmt(struct srpt_rdma_ch *ch, + struct se_cmd *cmd; + struct se_session *sess = ch->sess; + uint64_t unpacked_lun; +- uint32_t tag = 0; + int tcm_tmr; + int rc; + +@@ -1836,25 +1794,10 @@ static void srpt_handle_tsk_mgmt(struct srpt_rdma_ch *ch, + srpt_set_cmd_state(send_ioctx, SRPT_STATE_MGMT); + send_ioctx->tag = srp_tsk->tag; + tcm_tmr = srp_tmr_to_tcm(srp_tsk->tsk_mgmt_func); +- if (tcm_tmr < 0) { +- send_ioctx->cmd.se_tmr_req->response = +- TMR_TASK_MGMT_FUNCTION_NOT_SUPPORTED; +- goto fail; +- } + unpacked_lun = srpt_unpack_lun((uint8_t *)&srp_tsk->lun, + sizeof(srp_tsk->lun)); +- +- if (srp_tsk->tsk_mgmt_func == SRP_TSK_ABORT_TASK) { +- rc = srpt_rx_mgmt_fn_tag(send_ioctx, srp_tsk->task_tag); +- if (rc < 0) { +- send_ioctx->cmd.se_tmr_req->response = +- TMR_TASK_DOES_NOT_EXIST; +- goto fail; +- } +- tag = srp_tsk->task_tag; +- } + rc = target_submit_tmr(&send_ioctx->cmd, sess, NULL, unpacked_lun, +- srp_tsk, tcm_tmr, GFP_KERNEL, tag, ++ srp_tsk, tcm_tmr, GFP_KERNEL, srp_tsk->task_tag, + TARGET_SCF_ACK_KREF); + if (rc != 0) { + send_ioctx->cmd.se_tmr_req->response = TMR_FUNCTION_REJECTED; +diff --git a/drivers/input/misc/ati_remote2.c b/drivers/input/misc/ati_remote2.c +index f63341f20b91..e8c6a4842e91 100644 +--- a/drivers/input/misc/ati_remote2.c ++++ b/drivers/input/misc/ati_remote2.c +@@ -817,26 +817,49 @@ static int ati_remote2_probe(struct usb_interface *interface, const struct usb_d + + ar2->udev = udev; + ++ /* Sanity check, first interface must have an endpoint */ ++ if (alt->desc.bNumEndpoints < 1 || !alt->endpoint) { ++ dev_err(&interface->dev, ++ "%s(): interface 0 must have an endpoint\n", __func__); ++ r = -ENODEV; ++ goto fail1; ++ } + ar2->intf[0] = interface; + ar2->ep[0] = &alt->endpoint[0].desc; + ++ /* Sanity check, the device must have two interfaces */ + ar2->intf[1] = usb_ifnum_to_if(udev, 1); ++ if 
((udev->actconfig->desc.bNumInterfaces < 2) || !ar2->intf[1]) { ++ dev_err(&interface->dev, "%s(): need 2 interfaces, found %d\n", ++ __func__, udev->actconfig->desc.bNumInterfaces); ++ r = -ENODEV; ++ goto fail1; ++ } ++ + r = usb_driver_claim_interface(&ati_remote2_driver, ar2->intf[1], ar2); + if (r) + goto fail1; ++ ++ /* Sanity check, second interface must have an endpoint */ + alt = ar2->intf[1]->cur_altsetting; ++ if (alt->desc.bNumEndpoints < 1 || !alt->endpoint) { ++ dev_err(&interface->dev, ++ "%s(): interface 1 must have an endpoint\n", __func__); ++ r = -ENODEV; ++ goto fail2; ++ } + ar2->ep[1] = &alt->endpoint[0].desc; + + r = ati_remote2_urb_init(ar2); + if (r) +- goto fail2; ++ goto fail3; + + ar2->channel_mask = channel_mask; + ar2->mode_mask = mode_mask; + + r = ati_remote2_setup(ar2, ar2->channel_mask); + if (r) +- goto fail2; ++ goto fail3; + + usb_make_path(udev, ar2->phys, sizeof(ar2->phys)); + strlcat(ar2->phys, "/input0", sizeof(ar2->phys)); +@@ -845,11 +868,11 @@ static int ati_remote2_probe(struct usb_interface *interface, const struct usb_d + + r = sysfs_create_group(&udev->dev.kobj, &ati_remote2_attr_group); + if (r) +- goto fail2; ++ goto fail3; + + r = ati_remote2_input_init(ar2); + if (r) +- goto fail3; ++ goto fail4; + + usb_set_intfdata(interface, ar2); + +@@ -857,10 +880,11 @@ static int ati_remote2_probe(struct usb_interface *interface, const struct usb_d + + return 0; + +- fail3: ++ fail4: + sysfs_remove_group(&udev->dev.kobj, &ati_remote2_attr_group); +- fail2: ++ fail3: + ati_remote2_urb_cleanup(ar2); ++ fail2: + usb_driver_release_interface(&ati_remote2_driver, ar2->intf[1]); + fail1: + kfree(ar2); +diff --git a/drivers/input/misc/ims-pcu.c b/drivers/input/misc/ims-pcu.c +index afed8e2b2f94..41ef29b516f3 100644 +--- a/drivers/input/misc/ims-pcu.c ++++ b/drivers/input/misc/ims-pcu.c +@@ -1663,6 +1663,8 @@ static int ims_pcu_parse_cdc_data(struct usb_interface *intf, struct ims_pcu *pc + + pcu->ctrl_intf = usb_ifnum_to_if(pcu->udev, + union_desc->bMasterInterface0); ++ if (!pcu->ctrl_intf) ++ return -EINVAL; + + alt = pcu->ctrl_intf->cur_altsetting; + pcu->ep_ctrl = &alt->endpoint[0].desc; +@@ -1670,6 +1672,8 @@ static int ims_pcu_parse_cdc_data(struct usb_interface *intf, struct ims_pcu *pc + + pcu->data_intf = usb_ifnum_to_if(pcu->udev, + union_desc->bSlaveInterface0); ++ if (!pcu->data_intf) ++ return -EINVAL; + + alt = pcu->data_intf->cur_altsetting; + if (alt->desc.bNumEndpoints != 2) { +diff --git a/drivers/input/misc/powermate.c b/drivers/input/misc/powermate.c +index 63b539d3daba..84909a12ff36 100644 +--- a/drivers/input/misc/powermate.c ++++ b/drivers/input/misc/powermate.c +@@ -307,6 +307,9 @@ static int powermate_probe(struct usb_interface *intf, const struct usb_device_i + int error = -ENOMEM; + + interface = intf->cur_altsetting; ++ if (interface->desc.bNumEndpoints < 1) ++ return -EINVAL; ++ + endpoint = &interface->endpoint[0].desc; + if (!usb_endpoint_is_int_in(endpoint)) + return -EIO; +diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c +index 826ef3df2dc7..1de070a239be 100644 +--- a/drivers/input/mouse/synaptics.c ++++ b/drivers/input/mouse/synaptics.c +@@ -840,8 +840,9 @@ static void synaptics_report_ext_buttons(struct psmouse *psmouse, + if (!SYN_CAP_MULTI_BUTTON_NO(priv->ext_cap)) + return; + +- /* Bug in FW 8.1, buttons are reported only when ExtBit is 1 */ +- if (SYN_ID_FULL(priv->identity) == 0x801 && ++ /* Bug in FW 8.1 & 8.2, buttons are reported only when ExtBit is 1 */ ++ if 
((SYN_ID_FULL(priv->identity) == 0x801 || ++ SYN_ID_FULL(priv->identity) == 0x802) && + !((psmouse->packet[0] ^ psmouse->packet[3]) & 0x02)) + return; + +diff --git a/drivers/irqchip/irq-omap-intc.c b/drivers/irqchip/irq-omap-intc.c +index c03f140acbae..0ee0546a6a6f 100644 +--- a/drivers/irqchip/irq-omap-intc.c ++++ b/drivers/irqchip/irq-omap-intc.c +@@ -48,6 +48,7 @@ + #define INTC_ILR0 0x0100 + + #define ACTIVEIRQ_MASK 0x7f /* omap2/3 active interrupt bits */ ++#define SPURIOUSIRQ_MASK (0x1ffffff << 7) + #define INTCPS_NR_ILR_REGS 128 + #define INTCPS_NR_MIR_REGS 4 + +@@ -331,37 +332,36 @@ static int __init omap_init_irq(u32 base, struct device_node *node) + static asmlinkage void __exception_irq_entry + omap_intc_handle_irq(struct pt_regs *regs) + { +- u32 irqnr = 0; +- int handled_irq = 0; +- int i; +- +- do { +- for (i = 0; i < omap_nr_pending; i++) { +- irqnr = intc_readl(INTC_PENDING_IRQ0 + (0x20 * i)); +- if (irqnr) +- goto out; +- } +- +-out: +- if (!irqnr) +- break; ++ extern unsigned long irq_err_count; ++ u32 irqnr; + +- irqnr = intc_readl(INTC_SIR); +- irqnr &= ACTIVEIRQ_MASK; +- +- if (irqnr) { +- handle_domain_irq(domain, irqnr, regs); +- handled_irq = 1; +- } +- } while (irqnr); ++ irqnr = intc_readl(INTC_SIR); + + /* +- * If an irq is masked or deasserted while active, we will +- * keep ending up here with no irq handled. So remove it from +- * the INTC with an ack. ++ * A spurious IRQ can result if interrupt that triggered the ++ * sorting is no longer active during the sorting (10 INTC ++ * functional clock cycles after interrupt assertion). Or a ++ * change in interrupt mask affected the result during sorting ++ * time. There is no special handling required except ignoring ++ * the SIR register value just read and retrying. ++ * See section 6.2.5 of AM335x TRM Literature Number: SPRUH73K ++ * ++ * Many a times, a spurious interrupt situation has been fixed ++ * by adding a flush for the posted write acking the IRQ in ++ * the device driver. Typically, this is going be the device ++ * driver whose interrupt was handled just before the spurious ++ * IRQ occurred. Pay attention to those device drivers if you ++ * run into hitting the spurious IRQ condition below. 
+ */ +- if (!handled_irq) ++ if (unlikely((irqnr & SPURIOUSIRQ_MASK) == SPURIOUSIRQ_MASK)) { ++ pr_err_once("%s: spurious irq!\n", __func__); ++ irq_err_count++; + omap_ack_irq(NULL); ++ return; ++ } ++ ++ irqnr &= ACTIVEIRQ_MASK; ++ handle_domain_irq(domain, irqnr, regs); + } + + void __init omap2_init_irq(void) +diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c +index 42522c8f13c6..2a102834c2ee 100644 +--- a/drivers/md/bcache/super.c ++++ b/drivers/md/bcache/super.c +@@ -1046,8 +1046,12 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c) + */ + atomic_set(&dc->count, 1); + +- if (bch_cached_dev_writeback_start(dc)) ++ /* Block writeback thread, but spawn it */ ++ down_write(&dc->writeback_lock); ++ if (bch_cached_dev_writeback_start(dc)) { ++ up_write(&dc->writeback_lock); + return -ENOMEM; ++ } + + if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) { + bch_sectors_dirty_init(dc); +@@ -1059,6 +1063,9 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c) + bch_cached_dev_run(dc); + bcache_device_link(&dc->disk, c, "bdev"); + ++ /* Allow the writeback thread to proceed */ ++ up_write(&dc->writeback_lock); ++ + pr_info("Caching %s as %s on set %pU", + bdevname(dc->bdev, buf), dc->disk.disk->disk_name, + dc->disk.c->sb.set_uuid); +@@ -1397,6 +1404,9 @@ static void cache_set_flush(struct closure *cl) + struct btree *b; + unsigned i; + ++ if (!c) ++ closure_return(cl); ++ + bch_cache_accounting_destroy(&c->accounting); + + kobject_put(&c->internal); +@@ -1862,11 +1872,12 @@ static int cache_alloc(struct cache_sb *sb, struct cache *ca) + return 0; + } + +-static void register_cache(struct cache_sb *sb, struct page *sb_page, ++static int register_cache(struct cache_sb *sb, struct page *sb_page, + struct block_device *bdev, struct cache *ca) + { + char name[BDEVNAME_SIZE]; +- const char *err = "cannot allocate memory"; ++ const char *err = NULL; ++ int ret = 0; + + memcpy(&ca->sb, sb, sizeof(struct cache_sb)); + ca->bdev = bdev; +@@ -1881,27 +1892,35 @@ static void register_cache(struct cache_sb *sb, struct page *sb_page, + if (blk_queue_discard(bdev_get_queue(ca->bdev))) + ca->discard = CACHE_DISCARD(&ca->sb); + +- if (cache_alloc(sb, ca) != 0) ++ ret = cache_alloc(sb, ca); ++ if (ret != 0) + goto err; + +- err = "error creating kobject"; +- if (kobject_add(&ca->kobj, &part_to_dev(bdev->bd_part)->kobj, "bcache")) +- goto err; ++ if (kobject_add(&ca->kobj, &part_to_dev(bdev->bd_part)->kobj, "bcache")) { ++ err = "error calling kobject_add"; ++ ret = -ENOMEM; ++ goto out; ++ } + + mutex_lock(&bch_register_lock); + err = register_cache_set(ca); + mutex_unlock(&bch_register_lock); + +- if (err) +- goto err; ++ if (err) { ++ ret = -ENODEV; ++ goto out; ++ } + + pr_info("registered cache device %s", bdevname(bdev, name)); ++ + out: + kobject_put(&ca->kobj); +- return; ++ + err: +- pr_notice("error opening %s: %s", bdevname(bdev, name), err); +- goto out; ++ if (err) ++ pr_notice("error opening %s: %s", bdevname(bdev, name), err); ++ ++ return ret; + } + + /* Global interfaces/init */ +@@ -1999,7 +2018,8 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr, + if (!ca) + goto err_close; + +- register_cache(sb, sb_page, bdev, ca); ++ if (register_cache(sb, sb_page, bdev, ca) != 0) ++ goto err_close; + } + out: + if (sb_page) +diff --git a/drivers/md/multipath.c b/drivers/md/multipath.c +index 399272f9c042..2df218da1791 100644 +--- a/drivers/md/multipath.c ++++ b/drivers/md/multipath.c +@@ -129,7 +129,9 @@ static void 
multipath_make_request(struct mddev *mddev, struct bio * bio) + } + multipath = conf->multipaths + mp_bh->path; + +- mp_bh->bio = *bio; ++ bio_init(&mp_bh->bio); ++ __bio_clone_fast(&mp_bh->bio, bio); ++ + mp_bh->bio.bi_iter.bi_sector += multipath->rdev->data_offset; + mp_bh->bio.bi_bdev = multipath->rdev->bdev; + mp_bh->bio.bi_rw |= REQ_FAILFAST_TRANSPORT; +diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c +index 5fa7549ba409..cdd8770de1c2 100644 +--- a/drivers/md/raid5.c ++++ b/drivers/md/raid5.c +@@ -6245,8 +6245,8 @@ static int run(struct mddev *mddev) + } + + if (discard_supported && +- mddev->queue->limits.max_discard_sectors >= stripe && +- mddev->queue->limits.discard_granularity >= stripe) ++ mddev->queue->limits.max_discard_sectors >= (stripe >> 9) && ++ mddev->queue->limits.discard_granularity >= stripe) + queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, + mddev->queue); + else +diff --git a/drivers/media/i2c/adv7511.c b/drivers/media/i2c/adv7511.c +index f98acf4aafd4..621e4c058297 100644 +--- a/drivers/media/i2c/adv7511.c ++++ b/drivers/media/i2c/adv7511.c +@@ -833,12 +833,23 @@ static void adv7511_dbg_dump_edid(int lvl, int debug, struct v4l2_subdev *sd, in + } + } + ++static void adv7511_notify_no_edid(struct v4l2_subdev *sd) ++{ ++ struct adv7511_state *state = get_adv7511_state(sd); ++ struct adv7511_edid_detect ed; ++ ++ /* We failed to read the EDID, so send an event for this. */ ++ ed.present = false; ++ ed.segment = adv7511_rd(sd, 0xc4); ++ v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed); ++ v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, 0x0); ++} ++ + static void adv7511_edid_handler(struct work_struct *work) + { + struct delayed_work *dwork = to_delayed_work(work); + struct adv7511_state *state = container_of(dwork, struct adv7511_state, edid_handler); + struct v4l2_subdev *sd = &state->sd; +- struct adv7511_edid_detect ed; + + v4l2_dbg(1, debug, sd, "%s:\n", __func__); + +@@ -863,9 +874,7 @@ static void adv7511_edid_handler(struct work_struct *work) + } + + /* We failed to read the EDID, so send an event for this. */ +- ed.present = false; +- ed.segment = adv7511_rd(sd, 0xc4); +- v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed); ++ adv7511_notify_no_edid(sd); + v4l2_dbg(1, debug, sd, "%s: no edid found\n", __func__); + } + +@@ -936,7 +945,6 @@ static void adv7511_check_monitor_present_status(struct v4l2_subdev *sd) + /* update read only ctrls */ + v4l2_ctrl_s_ctrl(state->hotplug_ctrl, adv7511_have_hotplug(sd) ? 0x1 : 0x0); + v4l2_ctrl_s_ctrl(state->rx_sense_ctrl, adv7511_have_rx_sense(sd) ? 0x1 : 0x0); +- v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, state->edid.segments ? 
0x1 : 0x0); + + if ((status & MASK_ADV7511_HPD_DETECT) && ((status & MASK_ADV7511_MSEN_DETECT) || state->edid.segments)) { + v4l2_dbg(1, debug, sd, "%s: hotplug and (rx-sense or edid)\n", __func__); +@@ -966,6 +974,7 @@ static void adv7511_check_monitor_present_status(struct v4l2_subdev *sd) + } + adv7511_s_power(sd, false); + memset(&state->edid, 0, sizeof(struct adv7511_state_edid)); ++ adv7511_notify_no_edid(sd); + } + } + +@@ -1042,6 +1051,7 @@ static bool adv7511_check_edid_status(struct v4l2_subdev *sd) + } + /* one more segment read ok */ + state->edid.segments = segment + 1; ++ v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, 0x1); + if (((state->edid.data[0x7e] >> 1) + 1) > state->edid.segments) { + /* Request next EDID segment */ + v4l2_dbg(1, debug, sd, "%s: request segment %d\n", __func__, state->edid.segments); +@@ -1061,7 +1071,6 @@ static bool adv7511_check_edid_status(struct v4l2_subdev *sd) + ed.present = true; + ed.segment = 0; + state->edid_detect_counter++; +- v4l2_ctrl_s_ctrl(state->have_edid0_ctrl, state->edid.segments ? 0x1 : 0x0); + v4l2_subdev_notify(sd, ADV7511_EDID_DETECT, (void *)&ed); + return ed.present; + } +diff --git a/drivers/media/pci/bt8xx/bttv-driver.c b/drivers/media/pci/bt8xx/bttv-driver.c +index 4a8176c09fc9..dbb00bfd3d78 100644 +--- a/drivers/media/pci/bt8xx/bttv-driver.c ++++ b/drivers/media/pci/bt8xx/bttv-driver.c +@@ -2332,6 +2332,19 @@ static int bttv_g_fmt_vid_overlay(struct file *file, void *priv, + return 0; + } + ++static void bttv_get_width_mask_vid_cap(const struct bttv_format *fmt, ++ unsigned int *width_mask, ++ unsigned int *width_bias) ++{ ++ if (fmt->flags & FORMAT_FLAGS_PLANAR) { ++ *width_mask = ~15; /* width must be a multiple of 16 pixels */ ++ *width_bias = 8; /* nearest */ ++ } else { ++ *width_mask = ~3; /* width must be a multiple of 4 pixels */ ++ *width_bias = 2; /* nearest */ ++ } ++} ++ + static int bttv_try_fmt_vid_cap(struct file *file, void *priv, + struct v4l2_format *f) + { +@@ -2341,6 +2354,7 @@ static int bttv_try_fmt_vid_cap(struct file *file, void *priv, + enum v4l2_field field; + __s32 width, height; + __s32 height2; ++ unsigned int width_mask, width_bias; + int rc; + + fmt = format_by_fourcc(f->fmt.pix.pixelformat); +@@ -2373,9 +2387,9 @@ static int bttv_try_fmt_vid_cap(struct file *file, void *priv, + width = f->fmt.pix.width; + height = f->fmt.pix.height; + ++ bttv_get_width_mask_vid_cap(fmt, &width_mask, &width_bias); + rc = limit_scaled_size_lock(fh, &width, &height, field, +- /* width_mask: 4 pixels */ ~3, +- /* width_bias: nearest */ 2, ++ width_mask, width_bias, + /* adjust_size */ 1, + /* adjust_crop */ 0); + if (0 != rc) +@@ -2408,6 +2422,7 @@ static int bttv_s_fmt_vid_cap(struct file *file, void *priv, + struct bttv_fh *fh = priv; + struct bttv *btv = fh->btv; + __s32 width, height; ++ unsigned int width_mask, width_bias; + enum v4l2_field field; + + retval = bttv_switch_type(fh, f->type); +@@ -2422,9 +2437,10 @@ static int bttv_s_fmt_vid_cap(struct file *file, void *priv, + height = f->fmt.pix.height; + field = f->fmt.pix.field; + ++ fmt = format_by_fourcc(f->fmt.pix.pixelformat); ++ bttv_get_width_mask_vid_cap(fmt, &width_mask, &width_bias); + retval = limit_scaled_size_lock(fh, &width, &height, f->fmt.pix.field, +- /* width_mask: 4 pixels */ ~3, +- /* width_bias: nearest */ 2, ++ width_mask, width_bias, + /* adjust_size */ 1, + /* adjust_crop */ 1); + if (0 != retval) +@@ -2432,8 +2448,6 @@ static int bttv_s_fmt_vid_cap(struct file *file, void *priv, + + f->fmt.pix.field = field; + +- fmt = 
format_by_fourcc(f->fmt.pix.pixelformat); +- + /* update our state informations */ + fh->fmt = fmt; + fh->cap.field = f->fmt.pix.field; +diff --git a/drivers/media/pci/saa7134/saa7134-video.c b/drivers/media/pci/saa7134/saa7134-video.c +index fc4a427cb51f..af268ca40393 100644 +--- a/drivers/media/pci/saa7134/saa7134-video.c ++++ b/drivers/media/pci/saa7134/saa7134-video.c +@@ -1230,10 +1230,13 @@ static int saa7134_g_fmt_vid_cap(struct file *file, void *priv, + f->fmt.pix.height = dev->height; + f->fmt.pix.field = dev->field; + f->fmt.pix.pixelformat = dev->fmt->fourcc; +- f->fmt.pix.bytesperline = +- (f->fmt.pix.width * dev->fmt->depth) >> 3; ++ if (dev->fmt->planar) ++ f->fmt.pix.bytesperline = f->fmt.pix.width; ++ else ++ f->fmt.pix.bytesperline = ++ (f->fmt.pix.width * dev->fmt->depth) / 8; + f->fmt.pix.sizeimage = +- f->fmt.pix.height * f->fmt.pix.bytesperline; ++ (f->fmt.pix.height * f->fmt.pix.width * dev->fmt->depth) / 8; + f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M; + return 0; + } +@@ -1309,10 +1312,13 @@ static int saa7134_try_fmt_vid_cap(struct file *file, void *priv, + if (f->fmt.pix.height > maxh) + f->fmt.pix.height = maxh; + f->fmt.pix.width &= ~0x03; +- f->fmt.pix.bytesperline = +- (f->fmt.pix.width * fmt->depth) >> 3; ++ if (fmt->planar) ++ f->fmt.pix.bytesperline = f->fmt.pix.width; ++ else ++ f->fmt.pix.bytesperline = ++ (f->fmt.pix.width * fmt->depth) / 8; + f->fmt.pix.sizeimage = +- f->fmt.pix.height * f->fmt.pix.bytesperline; ++ (f->fmt.pix.height * f->fmt.pix.width * fmt->depth) / 8; + f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M; + + return 0; +diff --git a/drivers/media/tuners/si2157.c b/drivers/media/tuners/si2157.c +index cf97142e01e6..446917439ae5 100644 +--- a/drivers/media/tuners/si2157.c ++++ b/drivers/media/tuners/si2157.c +@@ -157,6 +157,11 @@ static int si2157_init(struct dvb_frontend *fe) + + for (remaining = fw->size; remaining > 0; remaining -= 17) { + len = fw->data[fw->size - remaining]; ++ if (len > SI2157_ARGLEN) { ++ dev_err(&s->client->dev, "Bad firmware length\n"); ++ ret = -EINVAL; ++ goto err; ++ } + memcpy(cmd.args, &fw->data[(fw->size - remaining) + 1], len); + cmd.wlen = len; + cmd.rlen = 1; +diff --git a/drivers/media/usb/pwc/pwc-if.c b/drivers/media/usb/pwc/pwc-if.c +index 15b754da4a2c..f3309269847a 100644 +--- a/drivers/media/usb/pwc/pwc-if.c ++++ b/drivers/media/usb/pwc/pwc-if.c +@@ -91,6 +91,7 @@ static const struct usb_device_id pwc_device_table [] = { + { USB_DEVICE(0x0471, 0x0312) }, + { USB_DEVICE(0x0471, 0x0313) }, /* the 'new' 720K */ + { USB_DEVICE(0x0471, 0x0329) }, /* Philips SPC 900NC PC Camera */ ++ { USB_DEVICE(0x0471, 0x032C) }, /* Philips SPC 880NC PC Camera */ + { USB_DEVICE(0x069A, 0x0001) }, /* Askey */ + { USB_DEVICE(0x046D, 0x08B0) }, /* Logitech QuickCam Pro 3000 */ + { USB_DEVICE(0x046D, 0x08B1) }, /* Logitech QuickCam Notebook Pro */ +@@ -799,6 +800,11 @@ static int usb_pwc_probe(struct usb_interface *intf, const struct usb_device_id + name = "Philips SPC 900NC webcam"; + type_id = 740; + break; ++ case 0x032C: ++ PWC_INFO("Philips SPC 880NC USB webcam detected.\n"); ++ name = "Philips SPC 880NC webcam"; ++ type_id = 740; ++ break; + default: + return -ENODEV; + break; +diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c +index e502a5fb2994..e77d3fc50abd 100644 +--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c ++++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c +@@ -392,7 +392,8 @@ static int get_v4l2_buffer32(struct v4l2_buffer *kp, struct 
v4l2_buffer32 __user + get_user(kp->index, &up->index) || + get_user(kp->type, &up->type) || + get_user(kp->flags, &up->flags) || +- get_user(kp->memory, &up->memory)) ++ get_user(kp->memory, &up->memory) || ++ get_user(kp->length, &up->length)) + return -EFAULT; + + if (V4L2_TYPE_IS_OUTPUT(kp->type)) +@@ -404,9 +405,6 @@ static int get_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user + return -EFAULT; + + if (V4L2_TYPE_IS_MULTIPLANAR(kp->type)) { +- if (get_user(kp->length, &up->length)) +- return -EFAULT; +- + num_planes = kp->length; + if (num_planes == 0) { + kp->m.planes = NULL; +@@ -439,16 +437,14 @@ static int get_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user + } else { + switch (kp->memory) { + case V4L2_MEMORY_MMAP: +- if (get_user(kp->length, &up->length) || +- get_user(kp->m.offset, &up->m.offset)) ++ if (get_user(kp->m.offset, &up->m.offset)) + return -EFAULT; + break; + case V4L2_MEMORY_USERPTR: + { + compat_long_t tmp; + +- if (get_user(kp->length, &up->length) || +- get_user(tmp, &up->m.userptr)) ++ if (get_user(tmp, &up->m.userptr)) + return -EFAULT; + + kp->m.userptr = (unsigned long)compat_ptr(tmp); +@@ -490,7 +486,8 @@ static int put_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user + copy_to_user(&up->timecode, &kp->timecode, sizeof(struct v4l2_timecode)) || + put_user(kp->sequence, &up->sequence) || + put_user(kp->reserved2, &up->reserved2) || +- put_user(kp->reserved, &up->reserved)) ++ put_user(kp->reserved, &up->reserved) || ++ put_user(kp->length, &up->length)) + return -EFAULT; + + if (V4L2_TYPE_IS_MULTIPLANAR(kp->type)) { +@@ -513,13 +510,11 @@ static int put_v4l2_buffer32(struct v4l2_buffer *kp, struct v4l2_buffer32 __user + } else { + switch (kp->memory) { + case V4L2_MEMORY_MMAP: +- if (put_user(kp->length, &up->length) || +- put_user(kp->m.offset, &up->m.offset)) ++ if (put_user(kp->m.offset, &up->m.offset)) + return -EFAULT; + break; + case V4L2_MEMORY_USERPTR: +- if (put_user(kp->length, &up->length) || +- put_user(kp->m.userptr, &up->m.userptr)) ++ if (put_user(kp->m.userptr, &up->m.userptr)) + return -EFAULT; + break; + case V4L2_MEMORY_OVERLAY: +diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c +index e4a07546f8b6..fc9004a74346 100644 +--- a/drivers/mmc/host/mmc_spi.c ++++ b/drivers/mmc/host/mmc_spi.c +@@ -1436,6 +1436,12 @@ static int mmc_spi_probe(struct spi_device *spi) + host->pdata->cd_debounce); + if (status != 0) + goto fail_add_host; ++ ++ /* The platform has a CD GPIO signal that may support ++ * interrupts, so let mmc_gpiod_request_cd_irq() decide ++ * if polling is needed or not. ++ */ ++ mmc->caps &= ~MMC_CAP_NEEDS_POLL; + mmc_gpiod_request_cd_irq(mmc); + } + +diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c +index 9109287e47ac..8e7605894dfd 100644 +--- a/drivers/mmc/host/sdhci.c ++++ b/drivers/mmc/host/sdhci.c +@@ -660,9 +660,20 @@ static u8 sdhci_calc_timeout(struct sdhci_host *host, struct mmc_command *cmd) + if (!data) + target_timeout = cmd->busy_timeout * 1000; + else { +- target_timeout = data->timeout_ns / 1000; +- if (host->clock) +- target_timeout += data->timeout_clks / host->clock; ++ target_timeout = DIV_ROUND_UP(data->timeout_ns, 1000); ++ if (host->clock && data->timeout_clks) { ++ unsigned long long val; ++ ++ /* ++ * data->timeout_clks is in units of clock cycles. ++ * host->clock is in Hz. target_timeout is in us. ++ * Hence, us = 1000000 * cycles / Hz. Round up. 
++ */ ++ val = 1000000 * data->timeout_clks; ++ if (do_div(val, host->clock)) ++ target_timeout++; ++ target_timeout += val; ++ } + } + + /* +@@ -2978,14 +2989,14 @@ int sdhci_add_host(struct sdhci_host *host) + if (caps[0] & SDHCI_TIMEOUT_CLK_UNIT) + host->timeout_clk *= 1000; + ++ if (override_timeout_clk) ++ host->timeout_clk = override_timeout_clk; ++ + mmc->max_busy_timeout = host->ops->get_max_timeout_count ? + host->ops->get_max_timeout_count(host) : 1 << 27; + mmc->max_busy_timeout /= host->timeout_clk; + } + +- if (override_timeout_clk) +- host->timeout_clk = override_timeout_clk; +- + mmc->caps |= MMC_CAP_SDIO_IRQ | MMC_CAP_ERASE | MMC_CAP_CMD23; + mmc->caps2 |= MMC_CAP2_SDIO_IRQ_NOTHREAD; + +diff --git a/drivers/mtd/onenand/onenand_base.c b/drivers/mtd/onenand/onenand_base.c +index 635ee0027691..c3f327ed7c12 100644 +--- a/drivers/mtd/onenand/onenand_base.c ++++ b/drivers/mtd/onenand/onenand_base.c +@@ -2605,6 +2605,7 @@ static int onenand_default_block_markbad(struct mtd_info *mtd, loff_t ofs) + */ + static int onenand_block_markbad(struct mtd_info *mtd, loff_t ofs) + { ++ struct onenand_chip *this = mtd->priv; + int ret; + + ret = onenand_block_isbad(mtd, ofs); +@@ -2616,7 +2617,7 @@ static int onenand_block_markbad(struct mtd_info *mtd, loff_t ofs) + } + + onenand_get_device(mtd, FL_WRITING); +- ret = mtd_block_markbad(mtd, ofs); ++ ret = this->block_markbad(mtd, ofs); + onenand_release_device(mtd); + return ret; + } +diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c +index fb34708aa5f4..3bd7e3d8fe5a 100644 +--- a/drivers/net/ethernet/marvell/mvneta.c ++++ b/drivers/net/ethernet/marvell/mvneta.c +@@ -3078,7 +3078,7 @@ static int mvneta_probe(struct platform_device *pdev) + dev->features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO; + dev->hw_features |= dev->features; + dev->vlan_features |= dev->features; +- dev->priv_flags |= IFF_UNICAST_FLT; ++ dev->priv_flags |= IFF_UNICAST_FLT | IFF_LIVE_ADDR_CHANGE; + dev->gso_max_segs = MVNETA_MAX_TSO_SEGS; + + err = register_netdev(dev); +diff --git a/drivers/net/irda/irtty-sir.c b/drivers/net/irda/irtty-sir.c +index 24b6dddd7f2f..16219162566b 100644 +--- a/drivers/net/irda/irtty-sir.c ++++ b/drivers/net/irda/irtty-sir.c +@@ -430,16 +430,6 @@ static int irtty_open(struct tty_struct *tty) + + /* Module stuff handled via irda_ldisc.owner - Jean II */ + +- /* First make sure we're not already connected. 
*/ +- if (tty->disc_data != NULL) { +- priv = tty->disc_data; +- if (priv && priv->magic == IRTTY_MAGIC) { +- ret = -EEXIST; +- goto out; +- } +- tty->disc_data = NULL; /* ### */ +- } +- + /* stop the underlying driver */ + irtty_stop_receiver(tty, TRUE); + if (tty->ops->stop) +diff --git a/drivers/net/rionet.c b/drivers/net/rionet.c +index dac7a0d9bb46..18cc2c8d5447 100644 +--- a/drivers/net/rionet.c ++++ b/drivers/net/rionet.c +@@ -280,7 +280,7 @@ static void rionet_outb_msg_event(struct rio_mport *mport, void *dev_id, int mbo + struct net_device *ndev = dev_id; + struct rionet_private *rnet = netdev_priv(ndev); + +- spin_lock(&rnet->lock); ++ spin_lock(&rnet->tx_lock); + + if (netif_msg_intr(rnet)) + printk(KERN_INFO +@@ -299,7 +299,7 @@ static void rionet_outb_msg_event(struct rio_mport *mport, void *dev_id, int mbo + if (rnet->tx_cnt < RIONET_TX_RING_SIZE) + netif_wake_queue(ndev); + +- spin_unlock(&rnet->lock); ++ spin_unlock(&rnet->tx_lock); + } + + static int rionet_open(struct net_device *ndev) +diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c +index dc566b38645f..2ede604ff17a 100644 +--- a/drivers/of/of_reserved_mem.c ++++ b/drivers/of/of_reserved_mem.c +@@ -31,11 +31,13 @@ int __init __weak early_init_dt_alloc_reserved_memory_arch(phys_addr_t size, + phys_addr_t align, phys_addr_t start, phys_addr_t end, bool nomap, + phys_addr_t *res_base) + { ++ phys_addr_t base; + /* + * We use __memblock_alloc_base() because memblock_alloc_base() + * panic()s on allocation failure. + */ +- phys_addr_t base = __memblock_alloc_base(size, align, end); ++ end = !end ? MEMBLOCK_ALLOC_ANYWHERE : end; ++ base = __memblock_alloc_base(size, align, end); + if (!base) + return -ENOMEM; + +diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c +index 3010ffc9029d..3f2d424c723f 100644 +--- a/drivers/pci/probe.c ++++ b/drivers/pci/probe.c +@@ -177,6 +177,9 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type, + struct pci_bus_region region, inverted_region; + bool bar_too_big = false, bar_too_high = false, bar_invalid = false; + ++ if (dev->non_compliant_bars) ++ return 0; ++ + mask = type ? PCI_ROM_ADDRESS_MASK : ~0; + + /* No printks while decoding is disabled! */ +@@ -985,6 +988,8 @@ void set_pcie_port_type(struct pci_dev *pdev) + { + int pos; + u16 reg16; ++ int type; ++ struct pci_dev *parent; + + pos = pci_find_capability(pdev, PCI_CAP_ID_EXP); + if (!pos) +@@ -994,6 +999,22 @@ void set_pcie_port_type(struct pci_dev *pdev) + pdev->pcie_flags_reg = reg16; + pci_read_config_word(pdev, pos + PCI_EXP_DEVCAP, ®16); + pdev->pcie_mpss = reg16 & PCI_EXP_DEVCAP_PAYLOAD; ++ ++ /* ++ * A Root Port is always the upstream end of a Link. No PCIe ++ * component has two Links. Two Links are connected by a Switch ++ * that has a Port on each Link and internal logic to connect the ++ * two Ports. 
++ */ ++ type = pci_pcie_type(pdev); ++ if (type == PCI_EXP_TYPE_ROOT_PORT) ++ pdev->has_secondary_link = 1; ++ else if (type == PCI_EXP_TYPE_UPSTREAM || ++ type == PCI_EXP_TYPE_DOWNSTREAM) { ++ parent = pci_upstream_bridge(pdev); ++ if (!parent->has_secondary_link) ++ pdev->has_secondary_link = 1; ++ } + } + + void set_pcie_hotplug_bridge(struct pci_dev *pdev) +@@ -1110,6 +1131,7 @@ int pci_cfg_space_size(struct pci_dev *dev) + int pci_setup_device(struct pci_dev *dev) + { + u32 class; ++ u16 cmd; + u8 hdr_type; + struct pci_slot *slot; + int pos = 0; +@@ -1157,6 +1179,16 @@ int pci_setup_device(struct pci_dev *dev) + /* device class may be changed after fixup */ + class = dev->class >> 8; + ++ if (dev->non_compliant_bars) { ++ pci_read_config_word(dev, PCI_COMMAND, &cmd); ++ if (cmd & (PCI_COMMAND_IO | PCI_COMMAND_MEMORY)) { ++ dev_info(&dev->dev, "device has non-compliant BARs; disabling IO/MEM decoding\n"); ++ cmd &= ~PCI_COMMAND_IO; ++ cmd &= ~PCI_COMMAND_MEMORY; ++ pci_write_config_word(dev, PCI_COMMAND, cmd); ++ } ++ } ++ + switch (dev->hdr_type) { /* header type */ + case PCI_HEADER_TYPE_NORMAL: /* standard header */ + if (class == PCI_CLASS_BRIDGE_PCI) +diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c +index 4c82e8e26125..def22e8345cb 100644 +--- a/drivers/platform/x86/ideapad-laptop.c ++++ b/drivers/platform/x86/ideapad-laptop.c +@@ -839,6 +839,20 @@ static const struct dmi_system_id no_hw_rfkill_list[] = { + }, + }, + { ++ .ident = "Lenovo ideapad Y700-15ISK", ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), ++ DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad Y700-15ISK"), ++ }, ++ }, ++ { ++ .ident = "Lenovo ideapad Y700 Touch-15ISK", ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), ++ DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad Y700 Touch-15ISK"), ++ }, ++ }, ++ { + .ident = "Lenovo ideapad Y700-17ISK", + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), +diff --git a/drivers/scsi/aacraid/commsup.c b/drivers/scsi/aacraid/commsup.c +index cab190af6345..6b32ddcefc11 100644 +--- a/drivers/scsi/aacraid/commsup.c ++++ b/drivers/scsi/aacraid/commsup.c +@@ -83,9 +83,12 @@ static int fib_map_alloc(struct aac_dev *dev) + + void aac_fib_map_free(struct aac_dev *dev) + { +- pci_free_consistent(dev->pdev, +- dev->max_fib_size * (dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB), +- dev->hw_fib_va, dev->hw_fib_pa); ++ if (dev->hw_fib_va && dev->max_fib_size) { ++ pci_free_consistent(dev->pdev, ++ (dev->max_fib_size * ++ (dev->scsi_host_ptr->can_queue + AAC_NUM_MGT_FIB)), ++ dev->hw_fib_va, dev->hw_fib_pa); ++ } + dev->hw_fib_va = NULL; + dev->hw_fib_pa = 0; + } +diff --git a/drivers/scsi/be2iscsi/be_main.c b/drivers/scsi/be2iscsi/be_main.c +index be4586b788d1..3ed37dc28b3c 100644 +--- a/drivers/scsi/be2iscsi/be_main.c ++++ b/drivers/scsi/be2iscsi/be_main.c +@@ -4435,6 +4435,7 @@ put_shost: + scsi_host_put(phba->shost); + free_kset: + iscsi_boot_destroy_kset(phba->boot_kset); ++ phba->boot_kset = NULL; + return -ENOMEM; + } + +diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c +index a1866c0c6a57..bc09f1d196c9 100644 +--- a/drivers/scsi/sg.c ++++ b/drivers/scsi/sg.c +@@ -652,7 +652,8 @@ sg_write(struct file *filp, const char __user *buf, size_t count, loff_t * ppos) + else + hp->dxfer_direction = (mxsize > 0) ? 
SG_DXFER_FROM_DEV : SG_DXFER_NONE; + hp->dxfer_len = mxsize; +- if (hp->dxfer_direction == SG_DXFER_TO_DEV) ++ if ((hp->dxfer_direction == SG_DXFER_TO_DEV) || ++ (hp->dxfer_direction == SG_DXFER_TO_FROM_DEV)) + hp->dxferp = (char __user *)buf + cmd_size; + else + hp->dxferp = NULL; +diff --git a/drivers/staging/comedi/drivers/ni_mio_common.c b/drivers/staging/comedi/drivers/ni_mio_common.c +index 320b080149b6..b7588e4e3644 100644 +--- a/drivers/staging/comedi/drivers/ni_mio_common.c ++++ b/drivers/staging/comedi/drivers/ni_mio_common.c +@@ -248,24 +248,24 @@ static void ni_writel(struct comedi_device *dev, uint32_t data, int reg) + { + if (dev->mmio) + writel(data, dev->mmio + reg); +- +- outl(data, dev->iobase + reg); ++ else ++ outl(data, dev->iobase + reg); + } + + static void ni_writew(struct comedi_device *dev, uint16_t data, int reg) + { + if (dev->mmio) + writew(data, dev->mmio + reg); +- +- outw(data, dev->iobase + reg); ++ else ++ outw(data, dev->iobase + reg); + } + + static void ni_writeb(struct comedi_device *dev, uint8_t data, int reg) + { + if (dev->mmio) + writeb(data, dev->mmio + reg); +- +- outb(data, dev->iobase + reg); ++ else ++ outb(data, dev->iobase + reg); + } + + static uint32_t ni_readl(struct comedi_device *dev, int reg) +diff --git a/drivers/staging/comedi/drivers/ni_tiocmd.c b/drivers/staging/comedi/drivers/ni_tiocmd.c +index 26e7291c4a51..8059db6c7a76 100644 +--- a/drivers/staging/comedi/drivers/ni_tiocmd.c ++++ b/drivers/staging/comedi/drivers/ni_tiocmd.c +@@ -94,7 +94,7 @@ static int ni_tio_input_inttrig(struct comedi_device *dev, + unsigned long flags; + int ret = 0; + +- if (trig_num != cmd->start_src) ++ if (trig_num != cmd->start_arg) + return -EINVAL; + + spin_lock_irqsave(&counter->lock, flags); +diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c +index b6051c82f386..1a487f9c2f78 100644 +--- a/drivers/target/target_core_transport.c ++++ b/drivers/target/target_core_transport.c +@@ -2537,8 +2537,6 @@ void target_wait_for_sess_cmds(struct se_session *se_sess) + + list_for_each_entry_safe(se_cmd, tmp_cmd, + &se_sess->sess_wait_list, se_cmd_list) { +- list_del_init(&se_cmd->se_cmd_list); +- + pr_debug("Waiting for se_cmd: %p t_state: %d, fabric state:" + " %d\n", se_cmd, se_cmd->t_state, + se_cmd->se_tfo->get_cmd_state(se_cmd)); +diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c +index 82164eb7d5b4..f23813fe1a0d 100644 +--- a/drivers/thermal/thermal_core.c ++++ b/drivers/thermal/thermal_core.c +@@ -391,6 +391,10 @@ static void handle_thermal_trip(struct thermal_zone_device *tz, int trip) + { + enum thermal_trip_type type; + ++ /* Ignore disabled trip points */ ++ if (test_bit(trip, &tz->trips_disabled)) ++ return; ++ + tz->ops->get_trip_type(tz, trip, &type); + + if (type == THERMAL_TRIP_CRITICAL || type == THERMAL_TRIP_HOT) +@@ -1484,6 +1488,7 @@ struct thermal_zone_device *thermal_zone_device_register(const char *type, + { + struct thermal_zone_device *tz; + enum thermal_trip_type trip_type; ++ int trip_temp; + int result; + int count; + int passive = 0; +@@ -1554,9 +1559,15 @@ struct thermal_zone_device *thermal_zone_device_register(const char *type, + goto unregister; + + for (count = 0; count < trips; count++) { +- tz->ops->get_trip_type(tz, count, &trip_type); ++ if (tz->ops->get_trip_type(tz, count, &trip_type)) ++ set_bit(count, &tz->trips_disabled); + if (trip_type == THERMAL_TRIP_PASSIVE) + passive = 1; ++ if (tz->ops->get_trip_temp(tz, count, &trip_temp)) ++ set_bit(count, 
&tz->trips_disabled); ++ /* Check for bogus trip points */ ++ if (trip_temp == 0) ++ set_bit(count, &tz->trips_disabled); + } + + if (!passive) { +diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c +index 7ec2b06069c9..0dd514e86fdc 100644 +--- a/drivers/usb/class/cdc-acm.c ++++ b/drivers/usb/class/cdc-acm.c +@@ -1109,6 +1109,9 @@ static int acm_probe(struct usb_interface *intf, + if (quirks == NO_UNION_NORMAL) { + data_interface = usb_ifnum_to_if(usb_dev, 1); + control_interface = usb_ifnum_to_if(usb_dev, 0); ++ /* we would crash */ ++ if (!data_interface || !control_interface) ++ return -ENODEV; + goto skip_normal_probe; + } + +diff --git a/drivers/usb/core/driver.c b/drivers/usb/core/driver.c +index d7a6d8bc510b..66be3b43de9f 100644 +--- a/drivers/usb/core/driver.c ++++ b/drivers/usb/core/driver.c +@@ -499,11 +499,15 @@ static int usb_unbind_interface(struct device *dev) + int usb_driver_claim_interface(struct usb_driver *driver, + struct usb_interface *iface, void *priv) + { +- struct device *dev = &iface->dev; ++ struct device *dev; + struct usb_device *udev; + int retval = 0; + int lpm_disable_error; + ++ if (!iface) ++ return -ENODEV; ++ ++ dev = &iface->dev; + if (dev->driver) + return -EBUSY; + +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c +index fd9a20f2b6e5..d8e1d5c1b9d2 100644 +--- a/drivers/usb/core/hub.c ++++ b/drivers/usb/core/hub.c +@@ -4242,7 +4242,7 @@ hub_port_init (struct usb_hub *hub, struct usb_device *udev, int port1, + { + struct usb_device *hdev = hub->hdev; + struct usb_hcd *hcd = bus_to_hcd(hdev->bus); +- int i, j, retval; ++ int retries, operations, retval, i; + unsigned delay = HUB_SHORT_RESET_TIME; + enum usb_device_speed oldspeed = udev->speed; + const char *speed; +@@ -4344,7 +4344,7 @@ hub_port_init (struct usb_hub *hub, struct usb_device *udev, int port1, + * first 8 bytes of the device descriptor to get the ep0 maxpacket + * value. + */ +- for (i = 0; i < GET_DESCRIPTOR_TRIES; (++i, msleep(100))) { ++ for (retries = 0; retries < GET_DESCRIPTOR_TRIES; (++retries, msleep(100))) { + bool did_new_scheme = false; + + if (use_new_scheme(udev, retry_counter)) { +@@ -4371,7 +4371,7 @@ hub_port_init (struct usb_hub *hub, struct usb_device *udev, int port1, + * 255 is for WUSB devices, we actually need to use + * 512 (WUSB1.0[4.8.1]). + */ +- for (j = 0; j < 3; ++j) { ++ for (operations = 0; operations < 3; ++operations) { + buf->bMaxPacketSize0 = 0; + r = usb_control_msg(udev, usb_rcvaddr0pipe(), + USB_REQ_GET_DESCRIPTOR, USB_DIR_IN, +@@ -4391,7 +4391,13 @@ hub_port_init (struct usb_hub *hub, struct usb_device *udev, int port1, + r = -EPROTO; + break; + } +- if (r == 0) ++ /* ++ * Some devices time out if they are powered on ++ * when already connected. They need a second ++ * reset. But only on the first attempt, ++ * lest we get into a time out/reset loop ++ */ ++ if (r == 0 || (r == -ETIMEDOUT && retries == 0)) + break; + } + udev->descriptor.bMaxPacketSize0 = +@@ -4423,7 +4429,7 @@ hub_port_init (struct usb_hub *hub, struct usb_device *udev, int port1, + * authorization will assign the final address. 
+ */ + if (udev->wusb == 0) { +- for (j = 0; j < SET_ADDRESS_TRIES; ++j) { ++ for (operations = 0; operations < SET_ADDRESS_TRIES; ++operations) { + retval = hub_set_address(udev, devnum); + if (retval >= 0) + break; +diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c +index c6bfd13f6c92..1950e87b4219 100644 +--- a/drivers/usb/misc/iowarrior.c ++++ b/drivers/usb/misc/iowarrior.c +@@ -787,6 +787,12 @@ static int iowarrior_probe(struct usb_interface *interface, + iface_desc = interface->cur_altsetting; + dev->product_id = le16_to_cpu(udev->descriptor.idProduct); + ++ if (iface_desc->desc.bNumEndpoints < 1) { ++ dev_err(&interface->dev, "Invalid number of endpoints\n"); ++ retval = -EINVAL; ++ goto error; ++ } ++ + /* set up the endpoint information */ + for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) { + endpoint = &iface_desc->endpoint[i].desc; +diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c +index 2ef0f0abe246..c6b4af82b7ed 100644 +--- a/drivers/usb/storage/uas.c ++++ b/drivers/usb/storage/uas.c +@@ -820,7 +820,7 @@ static struct scsi_host_template uas_host_template = { + .slave_configure = uas_slave_configure, + .eh_abort_handler = uas_eh_abort_handler, + .eh_bus_reset_handler = uas_eh_bus_reset_handler, +- .can_queue = 65536, /* Is there a limit on the _host_ ? */ ++ .can_queue = MAX_CMNDS, + .this_id = -1, + .sg_tablesize = SG_NONE, + .cmd_per_lun = 1, /* until we override it */ +diff --git a/drivers/watchdog/rc32434_wdt.c b/drivers/watchdog/rc32434_wdt.c +index 71e78ef4b736..3a75f3b53452 100644 +--- a/drivers/watchdog/rc32434_wdt.c ++++ b/drivers/watchdog/rc32434_wdt.c +@@ -237,7 +237,7 @@ static long rc32434_wdt_ioctl(struct file *file, unsigned int cmd, + return -EINVAL; + /* Fall through */ + case WDIOC_GETTIMEOUT: +- return copy_to_user(argp, &timeout, sizeof(int)); ++ return copy_to_user(argp, &timeout, sizeof(int)) ? 
-EFAULT : 0; + default: + return -ENOTTY; + } +diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c +index 4dabeb893b7c..dcc1aae4c951 100644 +--- a/fs/btrfs/async-thread.c ++++ b/fs/btrfs/async-thread.c +@@ -316,8 +316,8 @@ static inline void __btrfs_queue_work(struct __btrfs_workqueue *wq, + list_add_tail(&work->ordered_list, &wq->ordered_list); + spin_unlock_irqrestore(&wq->list_lock, flags); + } +- queue_work(wq->normal_wq, &work->normal_work); + trace_btrfs_work_queued(work); ++ queue_work(wq->normal_wq, &work->normal_work); + } + + void btrfs_queue_work(struct btrfs_workqueue *wq, +diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c +index b1709831b1a1..5177954e1a2c 100644 +--- a/fs/btrfs/disk-io.c ++++ b/fs/btrfs/disk-io.c +@@ -2418,6 +2418,7 @@ int open_ctree(struct super_block *sb, + if (btrfs_check_super_csum(bh->b_data)) { + printk(KERN_ERR "BTRFS: superblock checksum mismatch\n"); + err = -EINVAL; ++ brelse(bh); + goto fail_alloc; + } + +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index 211f19aa56ba..c8d287fff7bc 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -6092,7 +6092,7 @@ out_unlock_inode: + static int btrfs_link(struct dentry *old_dentry, struct inode *dir, + struct dentry *dentry) + { +- struct btrfs_trans_handle *trans; ++ struct btrfs_trans_handle *trans = NULL; + struct btrfs_root *root = BTRFS_I(dir)->root; + struct inode *inode = old_dentry->d_inode; + u64 index; +@@ -6118,6 +6118,7 @@ static int btrfs_link(struct dentry *old_dentry, struct inode *dir, + trans = btrfs_start_transaction(root, 5); + if (IS_ERR(trans)) { + err = PTR_ERR(trans); ++ trans = NULL; + goto fail; + } + +@@ -6151,9 +6152,10 @@ static int btrfs_link(struct dentry *old_dentry, struct inode *dir, + btrfs_log_new_name(trans, inode, NULL, parent); + } + +- btrfs_end_transaction(trans, root); + btrfs_balance_delayed_items(root); + fail: ++ if (trans) ++ btrfs_end_transaction(trans, root); + if (drop_inode) { + inode_dec_link_count(inode); + iput(inode); +@@ -8053,15 +8055,28 @@ int btrfs_readpage(struct file *file, struct page *page) + static int btrfs_writepage(struct page *page, struct writeback_control *wbc) + { + struct extent_io_tree *tree; +- ++ struct inode *inode = page->mapping->host; ++ int ret; + + if (current->flags & PF_MEMALLOC) { + redirty_page_for_writepage(wbc, page); + unlock_page(page); + return 0; + } ++ ++ /* ++ * If we are under memory pressure we will call this directly from the ++ * VM, we need to make sure we have the inode referenced for the ordered ++ * extent. If not just return like we didn't do anything. 
++ */ ++ if (!igrab(inode)) { ++ redirty_page_for_writepage(wbc, page); ++ return AOP_WRITEPAGE_ACTIVATE; ++ } + tree = &BTRFS_I(page->mapping->host)->io_tree; +- return extent_write_full_page(tree, page, btrfs_get_extent, wbc); ++ ret = extent_write_full_page(tree, page, btrfs_get_extent, wbc); ++ btrfs_add_delayed_iput(inode); ++ return ret; + } + + static int btrfs_writepages(struct address_space *mapping, +@@ -9137,9 +9152,11 @@ static int btrfs_symlink(struct inode *dir, struct dentry *dentry, + /* + * 2 items for inode item and ref + * 2 items for dir items ++ * 1 item for updating parent inode item ++ * 1 item for the inline extent item + * 1 item for xattr if selinux is on + */ +- trans = btrfs_start_transaction(root, 5); ++ trans = btrfs_start_transaction(root, 7); + if (IS_ERR(trans)) + return PTR_ERR(trans); + +diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c +index 874828dd0a86..3cc2d1dfd7bf 100644 +--- a/fs/btrfs/send.c ++++ b/fs/btrfs/send.c +@@ -1461,7 +1461,21 @@ static int read_symlink(struct btrfs_root *root, + ret = btrfs_search_slot(NULL, root, &key, path, 0, 0); + if (ret < 0) + goto out; +- BUG_ON(ret); ++ if (ret) { ++ /* ++ * An empty symlink inode. Can happen in rare error paths when ++ * creating a symlink (transaction committed before the inode ++ * eviction handler removed the symlink inode items and a crash ++ * happened in between or the subvol was snapshoted in between). ++ * Print an informative message to dmesg/syslog so that the user ++ * can delete the symlink. ++ */ ++ btrfs_err(root->fs_info, ++ "Found empty symlink inode %llu at root %llu", ++ ino, root->root_key.objectid); ++ ret = -EIO; ++ goto out; ++ } + + ei = btrfs_item_ptr(path->nodes[0], path->slots[0], + struct btrfs_file_extent_item); +diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c +index 2105555657fc..7ceaaf2010f9 100644 +--- a/fs/btrfs/super.c ++++ b/fs/btrfs/super.c +@@ -1775,6 +1775,8 @@ static int btrfs_calc_avail_data_space(struct btrfs_root *root, u64 *free_bytes) + * there are other factors that may change the result (like a new metadata + * chunk). + * ++ * If metadata is exhausted, f_bavail will be 0. ++ * + * FIXME: not accurate for mixed block groups, total and free/used are ok, + * available appears slightly larger. + */ +@@ -1786,11 +1788,13 @@ static int btrfs_statfs(struct dentry *dentry, struct kstatfs *buf) + struct btrfs_space_info *found; + u64 total_used = 0; + u64 total_free_data = 0; ++ u64 total_free_meta = 0; + int bits = dentry->d_sb->s_blocksize_bits; + __be32 *fsid = (__be32 *)fs_info->fsid; + unsigned factor = 1; + struct btrfs_block_rsv *block_rsv = &fs_info->global_block_rsv; + int ret; ++ u64 thresh = 0; + + /* + * holding chunk_muext to avoid allocating new chunks, holding +@@ -1818,6 +1822,8 @@ static int btrfs_statfs(struct dentry *dentry, struct kstatfs *buf) + } + } + } ++ if (found->flags & BTRFS_BLOCK_GROUP_METADATA) ++ total_free_meta += found->disk_total - found->disk_used; + + total_used += found->disk_used; + } +@@ -1845,6 +1851,24 @@ static int btrfs_statfs(struct dentry *dentry, struct kstatfs *buf) + mutex_unlock(&fs_info->chunk_mutex); + mutex_unlock(&fs_info->fs_devices->device_list_mutex); + ++ /* ++ * We calculate the remaining metadata space minus global reserve. If ++ * this is (supposedly) smaller than zero, there's no space. But this ++ * does not hold in practice, the exhausted state happens where's still ++ * some positive delta. So we apply some guesswork and compare the ++ * delta to a 4M threshold. 
(Practically observed delta was ~2M.) ++ * ++ * We probably cannot calculate the exact threshold value because this ++ * depends on the internal reservations requested by various ++ * operations, so some operations that consume a few metadata will ++ * succeed even if the Avail is zero. But this is better than the other ++ * way around. ++ */ ++ thresh = 4 * 1024 * 1024; ++ ++ if (total_free_meta - thresh < block_rsv->size) ++ buf->f_bavail = 0; ++ + buf->f_type = BTRFS_SUPER_MAGIC; + buf->f_bsize = dentry->d_sb->s_blocksize; + buf->f_namelen = BTRFS_NAME_LEN; +diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c +index d47289c715c8..25df49239ceb 100644 +--- a/fs/btrfs/volumes.c ++++ b/fs/btrfs/volumes.c +@@ -162,6 +162,7 @@ static struct btrfs_device *__alloc_device(void) + spin_lock_init(&dev->reada_lock); + atomic_set(&dev->reada_in_flight, 0); + atomic_set(&dev->dev_stats_ccnt, 0); ++ btrfs_device_data_ordered_init(dev); + INIT_RADIX_TREE(&dev->reada_zones, GFP_NOFS & ~__GFP_WAIT); + INIT_RADIX_TREE(&dev->reada_extents, GFP_NOFS & ~__GFP_WAIT); + +diff --git a/fs/coredump.c b/fs/coredump.c +index 00d75e82f6f2..7eb6181184ea 100644 +--- a/fs/coredump.c ++++ b/fs/coredump.c +@@ -32,6 +32,9 @@ + #include + #include + #include ++#include ++#include ++#include + + #include + #include +@@ -621,6 +624,8 @@ void do_coredump(const siginfo_t *siginfo) + } + } else { + struct inode *inode; ++ int open_flags = O_CREAT | O_RDWR | O_NOFOLLOW | ++ O_LARGEFILE | O_EXCL; + + if (cprm.limit < binfmt->min_coredump) + goto fail_unlock; +@@ -659,10 +664,27 @@ void do_coredump(const siginfo_t *siginfo) + * what matters is that at least one of the two processes + * writes its coredump successfully, not which one. + */ +- cprm.file = filp_open(cn.corename, +- O_CREAT | 2 | O_NOFOLLOW | +- O_LARGEFILE | O_EXCL, +- 0600); ++ if (need_suid_safe) { ++ /* ++ * Using user namespaces, normal user tasks can change ++ * their current->fs->root to point to arbitrary ++ * directories. Since the intention of the "only dump ++ * with a fully qualified path" rule is to control where ++ * coredumps may be placed using root privileges, ++ * current->fs->root must not be used. Instead, use the ++ * root directory of init_task. 
++ */ ++ struct path root; ++ ++ task_lock(&init_task); ++ get_fs_root(init_task.fs, &root); ++ task_unlock(&init_task); ++ cprm.file = file_open_root(root.dentry, root.mnt, ++ cn.corename, open_flags, 0600); ++ path_put(&root); ++ } else { ++ cprm.file = filp_open(cn.corename, open_flags, 0600); ++ } + if (IS_ERR(cprm.file)) + goto fail_unlock; + +diff --git a/fs/efivarfs/file.c b/fs/efivarfs/file.c +index cdb2971192a5..174bb20042b3 100644 +--- a/fs/efivarfs/file.c ++++ b/fs/efivarfs/file.c +@@ -10,6 +10,7 @@ + #include + #include + #include ++#include + + #include "internal.h" + +@@ -103,9 +104,79 @@ out_free: + return size; + } + ++static int ++efivarfs_ioc_getxflags(struct file *file, void __user *arg) ++{ ++ struct inode *inode = file->f_mapping->host; ++ unsigned int i_flags; ++ unsigned int flags = 0; ++ ++ i_flags = inode->i_flags; ++ if (i_flags & S_IMMUTABLE) ++ flags |= FS_IMMUTABLE_FL; ++ ++ if (copy_to_user(arg, &flags, sizeof(flags))) ++ return -EFAULT; ++ return 0; ++} ++ ++static int ++efivarfs_ioc_setxflags(struct file *file, void __user *arg) ++{ ++ struct inode *inode = file->f_mapping->host; ++ unsigned int flags; ++ unsigned int i_flags = 0; ++ int error; ++ ++ if (!inode_owner_or_capable(inode)) ++ return -EACCES; ++ ++ if (copy_from_user(&flags, arg, sizeof(flags))) ++ return -EFAULT; ++ ++ if (flags & ~FS_IMMUTABLE_FL) ++ return -EOPNOTSUPP; ++ ++ if (!capable(CAP_LINUX_IMMUTABLE)) ++ return -EPERM; ++ ++ if (flags & FS_IMMUTABLE_FL) ++ i_flags |= S_IMMUTABLE; ++ ++ ++ error = mnt_want_write_file(file); ++ if (error) ++ return error; ++ ++ mutex_lock(&inode->i_mutex); ++ inode->i_flags &= ~S_IMMUTABLE; ++ inode->i_flags |= i_flags; ++ mutex_unlock(&inode->i_mutex); ++ ++ mnt_drop_write_file(file); ++ ++ return 0; ++} ++ ++long ++efivarfs_file_ioctl(struct file *file, unsigned int cmd, unsigned long p) ++{ ++ void __user *arg = (void __user *)p; ++ ++ switch (cmd) { ++ case FS_IOC_GETFLAGS: ++ return efivarfs_ioc_getxflags(file, arg); ++ case FS_IOC_SETFLAGS: ++ return efivarfs_ioc_setxflags(file, arg); ++ } ++ ++ return -ENOTTY; ++} ++ + const struct file_operations efivarfs_file_operations = { + .open = simple_open, + .read = efivarfs_file_read, + .write = efivarfs_file_write, + .llseek = no_llseek, ++ .unlocked_ioctl = efivarfs_file_ioctl, + }; +diff --git a/fs/efivarfs/inode.c b/fs/efivarfs/inode.c +index 07ab49745e31..7e7318f10575 100644 +--- a/fs/efivarfs/inode.c ++++ b/fs/efivarfs/inode.c +@@ -15,7 +15,8 @@ + #include "internal.h" + + struct inode *efivarfs_get_inode(struct super_block *sb, +- const struct inode *dir, int mode, dev_t dev) ++ const struct inode *dir, int mode, ++ dev_t dev, bool is_removable) + { + struct inode *inode = new_inode(sb); + +@@ -23,6 +24,7 @@ struct inode *efivarfs_get_inode(struct super_block *sb, + inode->i_ino = get_next_ino(); + inode->i_mode = mode; + inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME; ++ inode->i_flags = is_removable ? 
0 : S_IMMUTABLE; + switch (mode & S_IFMT) { + case S_IFREG: + inode->i_fop = &efivarfs_file_operations; +@@ -102,22 +104,17 @@ static void efivarfs_hex_to_guid(const char *str, efi_guid_t *guid) + static int efivarfs_create(struct inode *dir, struct dentry *dentry, + umode_t mode, bool excl) + { +- struct inode *inode; ++ struct inode *inode = NULL; + struct efivar_entry *var; + int namelen, i = 0, err = 0; ++ bool is_removable = false; + + if (!efivarfs_valid_name(dentry->d_name.name, dentry->d_name.len)) + return -EINVAL; + +- inode = efivarfs_get_inode(dir->i_sb, dir, mode, 0); +- if (!inode) +- return -ENOMEM; +- + var = kzalloc(sizeof(struct efivar_entry), GFP_KERNEL); +- if (!var) { +- err = -ENOMEM; +- goto out; +- } ++ if (!var) ++ return -ENOMEM; + + /* length of the variable name itself: remove GUID and separator */ + namelen = dentry->d_name.len - EFI_VARIABLE_GUID_LEN - 1; +@@ -125,6 +122,16 @@ static int efivarfs_create(struct inode *dir, struct dentry *dentry, + efivarfs_hex_to_guid(dentry->d_name.name + namelen + 1, + &var->var.VendorGuid); + ++ if (efivar_variable_is_removable(var->var.VendorGuid, ++ dentry->d_name.name, namelen)) ++ is_removable = true; ++ ++ inode = efivarfs_get_inode(dir->i_sb, dir, mode, 0, is_removable); ++ if (!inode) { ++ err = -ENOMEM; ++ goto out; ++ } ++ + for (i = 0; i < namelen; i++) + var->var.VariableName[i] = dentry->d_name.name[i]; + +@@ -138,7 +145,8 @@ static int efivarfs_create(struct inode *dir, struct dentry *dentry, + out: + if (err) { + kfree(var); +- iput(inode); ++ if (inode) ++ iput(inode); + } + return err; + } +diff --git a/fs/efivarfs/internal.h b/fs/efivarfs/internal.h +index b5ff16addb7c..b4505188e799 100644 +--- a/fs/efivarfs/internal.h ++++ b/fs/efivarfs/internal.h +@@ -15,7 +15,8 @@ extern const struct file_operations efivarfs_file_operations; + extern const struct inode_operations efivarfs_dir_inode_operations; + extern bool efivarfs_valid_name(const char *str, int len); + extern struct inode *efivarfs_get_inode(struct super_block *sb, +- const struct inode *dir, int mode, dev_t dev); ++ const struct inode *dir, int mode, dev_t dev, ++ bool is_removable); + + extern struct list_head efivarfs_list; + +diff --git a/fs/efivarfs/super.c b/fs/efivarfs/super.c +index c2f421c30ccd..b57db0c6c2af 100644 +--- a/fs/efivarfs/super.c ++++ b/fs/efivarfs/super.c +@@ -118,8 +118,9 @@ static int efivarfs_callback(efi_char16_t *name16, efi_guid_t vendor, + struct dentry *dentry, *root = sb->s_root; + unsigned long size = 0; + char *name; +- int len, i; ++ int len; + int err = -ENOMEM; ++ bool is_removable = false; + + entry = kzalloc(sizeof(*entry), GFP_KERNEL); + if (!entry) +@@ -128,15 +129,17 @@ static int efivarfs_callback(efi_char16_t *name16, efi_guid_t vendor, + memcpy(entry->var.VariableName, name16, name_size); + memcpy(&(entry->var.VendorGuid), &vendor, sizeof(efi_guid_t)); + +- len = ucs2_strlen(entry->var.VariableName); ++ len = ucs2_utf8size(entry->var.VariableName); + + /* name, plus '-', plus GUID, plus NUL*/ + name = kmalloc(len + 1 + EFI_VARIABLE_GUID_LEN + 1, GFP_KERNEL); + if (!name) + goto fail; + +- for (i = 0; i < len; i++) +- name[i] = entry->var.VariableName[i] & 0xFF; ++ ucs2_as_utf8(name, entry->var.VariableName, len); ++ ++ if (efivar_variable_is_removable(entry->var.VendorGuid, name, len)) ++ is_removable = true; + + name[len] = '-'; + +@@ -144,7 +147,8 @@ static int efivarfs_callback(efi_char16_t *name16, efi_guid_t vendor, + + name[len + EFI_VARIABLE_GUID_LEN+1] = '\0'; + +- inode = efivarfs_get_inode(sb, 
root->d_inode, S_IFREG | 0644, 0); ++ inode = efivarfs_get_inode(sb, root->d_inode, S_IFREG | 0644, 0, ++ is_removable); + if (!inode) + goto fail_name; + +@@ -200,7 +204,7 @@ static int efivarfs_fill_super(struct super_block *sb, void *data, int silent) + sb->s_d_op = &efivarfs_d_ops; + sb->s_time_gran = 1; + +- inode = efivarfs_get_inode(sb, NULL, S_IFDIR | 0755, 0); ++ inode = efivarfs_get_inode(sb, NULL, S_IFDIR | 0755, 0, true); + if (!inode) + return -ENOMEM; + inode->i_op = &efivarfs_dir_inode_operations; +diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c +index 165f309bafcc..f498c34b4688 100644 +--- a/fs/ext4/move_extent.c ++++ b/fs/ext4/move_extent.c +@@ -397,6 +397,7 @@ data_copy: + *err = ext4_get_block(orig_inode, orig_blk_offset + i, bh, 0); + if (*err < 0) + break; ++ bh = bh->b_this_page; + } + if (!*err) + *err = block_commit_write(pagep[0], from, from + replaced_size); +diff --git a/fs/fhandle.c b/fs/fhandle.c +index d59712dfa3e7..ca3c3dd01789 100644 +--- a/fs/fhandle.c ++++ b/fs/fhandle.c +@@ -228,7 +228,7 @@ long do_handle_open(int mountdirfd, + path_put(&path); + return fd; + } +- file = file_open_root(path.dentry, path.mnt, "", open_flag); ++ file = file_open_root(path.dentry, path.mnt, "", open_flag, 0); + if (IS_ERR(file)) { + put_unused_fd(fd); + retval = PTR_ERR(file); +diff --git a/fs/fuse/file.c b/fs/fuse/file.c +index caa8d95b24e8..e2a2c14a90ee 100644 +--- a/fs/fuse/file.c ++++ b/fs/fuse/file.c +@@ -1088,6 +1088,7 @@ static ssize_t fuse_fill_write_pages(struct fuse_req *req, + tmp = iov_iter_copy_from_user_atomic(page, ii, offset, bytes); + flush_dcache_page(page); + ++ iov_iter_advance(ii, tmp); + if (!tmp) { + unlock_page(page); + page_cache_release(page); +@@ -1100,7 +1101,6 @@ static ssize_t fuse_fill_write_pages(struct fuse_req *req, + req->page_descs[req->num_pages].length = tmp; + req->num_pages++; + +- iov_iter_advance(ii, tmp); + count += tmp; + pos += tmp; + offset += tmp; +diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c +index 07e87ec45709..985e95b9b4ef 100644 +--- a/fs/jbd2/journal.c ++++ b/fs/jbd2/journal.c +@@ -1423,11 +1423,12 @@ out: + /** + * jbd2_mark_journal_empty() - Mark on disk journal as empty. + * @journal: The journal to update. ++ * @write_op: With which operation should we write the journal sb + * + * Update a journal's dynamic superblock fields to show that journal is empty. + * Write updated superblock to disk waiting for IO to complete. 
+ */ +-static void jbd2_mark_journal_empty(journal_t *journal) ++static void jbd2_mark_journal_empty(journal_t *journal, int write_op) + { + journal_superblock_t *sb = journal->j_superblock; + +@@ -1445,7 +1446,7 @@ static void jbd2_mark_journal_empty(journal_t *journal) + sb->s_start = cpu_to_be32(0); + read_unlock(&journal->j_state_lock); + +- jbd2_write_superblock(journal, WRITE_FUA); ++ jbd2_write_superblock(journal, write_op); + + /* Log is no longer empty */ + write_lock(&journal->j_state_lock); +@@ -1730,7 +1731,13 @@ int jbd2_journal_destroy(journal_t *journal) + if (journal->j_sb_buffer) { + if (!is_journal_aborted(journal)) { + mutex_lock(&journal->j_checkpoint_mutex); +- jbd2_mark_journal_empty(journal); ++ ++ write_lock(&journal->j_state_lock); ++ journal->j_tail_sequence = ++ ++journal->j_transaction_sequence; ++ write_unlock(&journal->j_state_lock); ++ ++ jbd2_mark_journal_empty(journal, WRITE_FLUSH_FUA); + mutex_unlock(&journal->j_checkpoint_mutex); + } else + err = -EIO; +@@ -1990,7 +1997,7 @@ int jbd2_journal_flush(journal_t *journal) + * the magic code for a fully-recovered superblock. Any future + * commits of data to the journal will restore the current + * s_start value. */ +- jbd2_mark_journal_empty(journal); ++ jbd2_mark_journal_empty(journal, WRITE_FUA); + mutex_unlock(&journal->j_checkpoint_mutex); + write_lock(&journal->j_state_lock); + J_ASSERT(!journal->j_running_transaction); +@@ -2036,7 +2043,7 @@ int jbd2_journal_wipe(journal_t *journal, int write) + if (write) { + /* Lock to make assertions happy... */ + mutex_lock(&journal->j_checkpoint_mutex); +- jbd2_mark_journal_empty(journal); ++ jbd2_mark_journal_empty(journal, WRITE_FUA); + mutex_unlock(&journal->j_checkpoint_mutex); + } + +diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c +index 6ed585935d5e..606d5aa33d76 100644 +--- a/fs/nfsd/nfs4proc.c ++++ b/fs/nfsd/nfs4proc.c +@@ -878,6 +878,7 @@ nfsd4_secinfo(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate, + &exp, &dentry); + if (err) + return err; ++ fh_unlock(&cstate->current_fh); + if (dentry->d_inode == NULL) { + exp_put(exp); + err = nfserr_noent; +diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c +index 0fd2f1c76e60..dc7fd83409da 100644 +--- a/fs/nfsd/nfs4xdr.c ++++ b/fs/nfsd/nfs4xdr.c +@@ -1061,8 +1061,9 @@ nfsd4_decode_rename(struct nfsd4_compoundargs *argp, struct nfsd4_rename *rename + + READ_BUF(4); + rename->rn_snamelen = be32_to_cpup(p++); +- READ_BUF(rename->rn_snamelen + 4); ++ READ_BUF(rename->rn_snamelen); + SAVEMEM(rename->rn_sname, rename->rn_snamelen); ++ READ_BUF(4); + rename->rn_tnamelen = be32_to_cpup(p++); + READ_BUF(rename->rn_tnamelen); + SAVEMEM(rename->rn_tname, rename->rn_tnamelen); +@@ -1144,13 +1145,14 @@ nfsd4_decode_setclientid(struct nfsd4_compoundargs *argp, struct nfsd4_setclient + READ_BUF(8); + setclientid->se_callback_prog = be32_to_cpup(p++); + setclientid->se_callback_netid_len = be32_to_cpup(p++); +- +- READ_BUF(setclientid->se_callback_netid_len + 4); ++ READ_BUF(setclientid->se_callback_netid_len); + SAVEMEM(setclientid->se_callback_netid_val, setclientid->se_callback_netid_len); ++ READ_BUF(4); + setclientid->se_callback_addr_len = be32_to_cpup(p++); + +- READ_BUF(setclientid->se_callback_addr_len + 4); ++ READ_BUF(setclientid->se_callback_addr_len); + SAVEMEM(setclientid->se_callback_addr_val, setclientid->se_callback_addr_len); ++ READ_BUF(4); + setclientid->se_callback_ident = be32_to_cpup(p++); + + DECODE_TAIL; +@@ -1646,8 +1648,9 @@ nfsd4_decode_compound(struct nfsd4_compoundargs *argp) + + 
READ_BUF(4); + argp->taglen = be32_to_cpup(p++); +- READ_BUF(argp->taglen + 8); ++ READ_BUF(argp->taglen); + SAVEMEM(argp->tag, argp->taglen); ++ READ_BUF(8); + argp->minorversion = be32_to_cpup(p++); + argp->opcnt = be32_to_cpup(p++); + max_reply += 4 + (XDR_QUADLEN(argp->taglen) << 2); +diff --git a/fs/ocfs2/dlm/dlmconvert.c b/fs/ocfs2/dlm/dlmconvert.c +index e36d63ff1783..f90931335c6b 100644 +--- a/fs/ocfs2/dlm/dlmconvert.c ++++ b/fs/ocfs2/dlm/dlmconvert.c +@@ -262,6 +262,7 @@ enum dlm_status dlmconvert_remote(struct dlm_ctxt *dlm, + struct dlm_lock *lock, int flags, int type) + { + enum dlm_status status; ++ u8 old_owner = res->owner; + + mlog(0, "type=%d, convert_type=%d, busy=%d\n", lock->ml.type, + lock->ml.convert_type, res->state & DLM_LOCK_RES_IN_PROGRESS); +@@ -287,6 +288,19 @@ enum dlm_status dlmconvert_remote(struct dlm_ctxt *dlm, + status = DLM_DENIED; + goto bail; + } ++ ++ if (lock->ml.type == type && lock->ml.convert_type == LKM_IVMODE) { ++ mlog(0, "last convert request returned DLM_RECOVERING, but " ++ "owner has already queued and sent ast to me. res %.*s, " ++ "(cookie=%u:%llu, type=%d, conv=%d)\n", ++ res->lockname.len, res->lockname.name, ++ dlm_get_lock_cookie_node(be64_to_cpu(lock->ml.cookie)), ++ dlm_get_lock_cookie_seq(be64_to_cpu(lock->ml.cookie)), ++ lock->ml.type, lock->ml.convert_type); ++ status = DLM_NORMAL; ++ goto bail; ++ } ++ + res->state |= DLM_LOCK_RES_IN_PROGRESS; + /* move lock to local convert queue */ + /* do not alter lock refcount. switching lists. */ +@@ -316,11 +330,19 @@ enum dlm_status dlmconvert_remote(struct dlm_ctxt *dlm, + spin_lock(&res->spinlock); + res->state &= ~DLM_LOCK_RES_IN_PROGRESS; + lock->convert_pending = 0; +- /* if it failed, move it back to granted queue */ ++ /* if it failed, move it back to granted queue. ++ * if master returns DLM_NORMAL and then down before sending ast, ++ * it may have already been moved to granted queue, reset to ++ * DLM_RECOVERING and retry convert */ + if (status != DLM_NORMAL) { + if (status != DLM_NOTQUEUED) + dlm_error(status); + dlm_revert_pending_convert(res, lock); ++ } else if ((res->state & DLM_LOCK_RES_RECOVERING) || ++ (old_owner != res->owner)) { ++ mlog(0, "res %.*s is in recovering or has been recovered.\n", ++ res->lockname.len, res->lockname.name); ++ status = DLM_RECOVERING; + } + bail: + spin_unlock(&res->spinlock); +diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c +index 450363628f98..3a78afb14b16 100644 +--- a/fs/ocfs2/dlm/dlmrecovery.c ++++ b/fs/ocfs2/dlm/dlmrecovery.c +@@ -2056,7 +2056,6 @@ void dlm_move_lockres_to_recovery_list(struct dlm_ctxt *dlm, + dlm_lock_get(lock); + if (lock->convert_pending) { + /* move converting lock back to granted */ +- BUG_ON(i != DLM_CONVERTING_LIST); + mlog(0, "node died with convert pending " + "on %.*s. 
move back to granted list.\n", + res->lockname.len, res->lockname.name); +diff --git a/fs/open.c b/fs/open.c +index d058ff1b841b..1651f35d50f5 100644 +--- a/fs/open.c ++++ b/fs/open.c +@@ -968,14 +968,12 @@ struct file *filp_open(const char *filename, int flags, umode_t mode) + EXPORT_SYMBOL(filp_open); + + struct file *file_open_root(struct dentry *dentry, struct vfsmount *mnt, +- const char *filename, int flags) ++ const char *filename, int flags, umode_t mode) + { + struct open_flags op; +- int err = build_open_flags(flags, 0, &op); ++ int err = build_open_flags(flags, mode, &op); + if (err) + return ERR_PTR(err); +- if (flags & O_CREAT) +- return ERR_PTR(-EINVAL); + if (!filename && (flags & O_DIRECTORY)) + if (!dentry->d_inode->i_op->lookup) + return ERR_PTR(-ENOTDIR); +diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c +index 8774ebb5d80a..e696ba32a2b5 100644 +--- a/fs/overlayfs/inode.c ++++ b/fs/overlayfs/inode.c +@@ -49,15 +49,15 @@ int ovl_setattr(struct dentry *dentry, struct iattr *attr) + if (err) + goto out; + +- upperdentry = ovl_dentry_upper(dentry); +- if (upperdentry) { ++ err = ovl_copy_up(dentry); ++ if (!err) { ++ upperdentry = ovl_dentry_upper(dentry); ++ + mutex_lock(&upperdentry->d_inode->i_mutex); + err = notify_change(upperdentry, attr, NULL); + if (!err) + ovl_copyattr(upperdentry->d_inode, dentry->d_inode); + mutex_unlock(&upperdentry->d_inode->i_mutex); +- } else { +- err = ovl_copy_up_last(dentry, attr, false); + } + ovl_drop_write(dentry); + out: +diff --git a/fs/proc/array.c b/fs/proc/array.c +index cd3653e4f35c..16226e24ec48 100644 +--- a/fs/proc/array.c ++++ b/fs/proc/array.c +@@ -391,7 +391,7 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns, + + state = *get_task_state(task); + vsize = eip = esp = 0; +- permitted = ptrace_may_access(task, PTRACE_MODE_READ | PTRACE_MODE_NOAUDIT); ++ permitted = ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS | PTRACE_MODE_NOAUDIT); + mm = get_task_mm(task); + if (mm) { + vsize = task_vsize(mm); +diff --git a/fs/proc/base.c b/fs/proc/base.c +index 7dc3ea89ef1a..76b296fe93c9 100644 +--- a/fs/proc/base.c ++++ b/fs/proc/base.c +@@ -211,7 +211,7 @@ static int proc_pid_cmdline(struct seq_file *m, struct pid_namespace *ns, + static int proc_pid_auxv(struct seq_file *m, struct pid_namespace *ns, + struct pid *pid, struct task_struct *task) + { +- struct mm_struct *mm = mm_access(task, PTRACE_MODE_READ); ++ struct mm_struct *mm = mm_access(task, PTRACE_MODE_READ_FSCREDS); + if (mm && !IS_ERR(mm)) { + unsigned int nwords = 0; + do { +@@ -239,7 +239,7 @@ static int proc_pid_wchan(struct seq_file *m, struct pid_namespace *ns, + wchan = get_wchan(task); + + if (lookup_symbol_name(wchan, symname) < 0) +- if (!ptrace_may_access(task, PTRACE_MODE_READ)) ++ if (!ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS)) + return 0; + else + return seq_printf(m, "%lu", wchan); +@@ -253,7 +253,7 @@ static int lock_trace(struct task_struct *task) + int err = mutex_lock_killable(&task->signal->cred_guard_mutex); + if (err) + return err; +- if (!ptrace_may_access(task, PTRACE_MODE_ATTACH)) { ++ if (!ptrace_may_access(task, PTRACE_MODE_ATTACH_FSCREDS)) { + mutex_unlock(&task->signal->cred_guard_mutex); + return -EPERM; + } +@@ -496,7 +496,7 @@ static int proc_fd_access_allowed(struct inode *inode) + */ + task = get_proc_task(inode); + if (task) { +- allowed = ptrace_may_access(task, PTRACE_MODE_READ); ++ allowed = ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS); + put_task_struct(task); + } + return allowed; +@@ -531,7 
+531,7 @@ static bool has_pid_permissions(struct pid_namespace *pid, + return true; + if (in_group_p(pid->pid_gid)) + return true; +- return ptrace_may_access(task, PTRACE_MODE_READ); ++ return ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS); + } + + +@@ -608,7 +608,7 @@ struct mm_struct *proc_mem_open(struct inode *inode, unsigned int mode) + struct mm_struct *mm = ERR_PTR(-ESRCH); + + if (task) { +- mm = mm_access(task, mode); ++ mm = mm_access(task, mode | PTRACE_MODE_FSCREDS); + put_task_struct(task); + + if (!IS_ERR_OR_NULL(mm)) { +@@ -1670,7 +1670,7 @@ static int map_files_d_revalidate(struct dentry *dentry, unsigned int flags) + if (!task) + goto out_notask; + +- mm = mm_access(task, PTRACE_MODE_READ); ++ mm = mm_access(task, PTRACE_MODE_READ_FSCREDS); + if (IS_ERR_OR_NULL(mm)) + goto out; + +@@ -1802,7 +1802,7 @@ static struct dentry *proc_map_files_lookup(struct inode *dir, + goto out; + + result = -EACCES; +- if (!ptrace_may_access(task, PTRACE_MODE_READ)) ++ if (!ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS)) + goto out_put_task; + + result = -ENOENT; +@@ -1859,7 +1859,7 @@ proc_map_files_readdir(struct file *file, struct dir_context *ctx) + goto out; + + ret = -EACCES; +- if (!ptrace_may_access(task, PTRACE_MODE_READ)) ++ if (!ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS)) + goto out_put_task; + + ret = 0; +@@ -2338,7 +2338,7 @@ static int do_io_accounting(struct task_struct *task, struct seq_file *m, int wh + if (result) + return result; + +- if (!ptrace_may_access(task, PTRACE_MODE_READ)) { ++ if (!ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS)) { + result = -EACCES; + goto out_unlock; + } +diff --git a/fs/proc/namespaces.c b/fs/proc/namespaces.c +index 89026095f2b5..0bdad6b11a16 100644 +--- a/fs/proc/namespaces.c ++++ b/fs/proc/namespaces.c +@@ -119,7 +119,7 @@ static void *proc_ns_follow_link(struct dentry *dentry, struct nameidata *nd) + if (!task) + goto out; + +- if (!ptrace_may_access(task, PTRACE_MODE_READ)) ++ if (!ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS)) + goto out_put_task; + + ns_path.dentry = proc_ns_get_dentry(sb, task, ei->ns.ns_ops); +@@ -152,7 +152,7 @@ static int proc_ns_readlink(struct dentry *dentry, char __user *buffer, int bufl + if (!task) + goto out; + +- if (!ptrace_may_access(task, PTRACE_MODE_READ)) ++ if (!ptrace_may_access(task, PTRACE_MODE_READ_FSCREDS)) + goto out_put_task; + + res = -ENOENT; +diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c +index 05fea2ac116c..18aaccb290c2 100644 +--- a/fs/quota/dquot.c ++++ b/fs/quota/dquot.c +@@ -1384,7 +1384,7 @@ static int dquot_active(const struct inode *inode) + static void __dquot_initialize(struct inode *inode, int type) + { + int cnt, init_needed = 0; +- struct dquot *got[MAXQUOTAS]; ++ struct dquot *got[MAXQUOTAS] = {}; + struct super_block *sb = inode->i_sb; + qsize_t rsv; + +@@ -1394,7 +1394,6 @@ static void __dquot_initialize(struct inode *inode, int type) + /* First get references to structures we might need. 
*/ + for (cnt = 0; cnt < MAXQUOTAS; cnt++) { + struct kqid qid; +- got[cnt] = NULL; + if (type != -1 && cnt != type) + continue; + /* +diff --git a/fs/splice.c b/fs/splice.c +index 75c6058eabf2..cf0cb768a1a7 100644 +--- a/fs/splice.c ++++ b/fs/splice.c +@@ -186,6 +186,9 @@ ssize_t splice_to_pipe(struct pipe_inode_info *pipe, + unsigned int spd_pages = spd->nr_pages; + int ret, do_wakeup, page_nr; + ++ if (!spd_pages) ++ return 0; ++ + ret = 0; + do_wakeup = 0; + page_nr = 0; +diff --git a/fs/xfs/xfs_attr_list.c b/fs/xfs/xfs_attr_list.c +index 62db83ab6cbc..ae64625937e5 100644 +--- a/fs/xfs/xfs_attr_list.c ++++ b/fs/xfs/xfs_attr_list.c +@@ -205,8 +205,10 @@ xfs_attr_shortform_list(xfs_attr_list_context_t *context) + sbp->namelen, + sbp->valuelen, + &sbp->name[sbp->namelen]); +- if (error) ++ if (error) { ++ kmem_free(sbuf); + return error; ++ } + if (context->seen_enough) + break; + cursor->offset++; +@@ -454,14 +456,13 @@ xfs_attr3_leaf_list_int( + args.rmtblkcnt = xfs_attr3_rmt_blocks( + args.dp->i_mount, valuelen); + retval = xfs_attr_rmtval_get(&args); +- if (retval) +- return retval; +- retval = context->put_listent(context, +- entry->flags, +- name_rmt->name, +- (int)name_rmt->namelen, +- valuelen, +- args.value); ++ if (!retval) ++ retval = context->put_listent(context, ++ entry->flags, ++ name_rmt->name, ++ (int)name_rmt->namelen, ++ valuelen, ++ args.value); + kmem_free(args.value); + } else { + retval = context->put_listent(context, +diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h +index c30266e94806..8ef0ccbf8167 100644 +--- a/include/asm-generic/bitops/lock.h ++++ b/include/asm-generic/bitops/lock.h +@@ -29,16 +29,16 @@ do { \ + * @nr: the bit to set + * @addr: the address to start counting from + * +- * This operation is like clear_bit_unlock, however it is not atomic. +- * It does provide release barrier semantics so it can be used to unlock +- * a bit lock, however it would only be used if no other CPU can modify +- * any bits in the memory until the lock is released (a good example is +- * if the bit lock itself protects access to the other bits in the word). ++ * A weaker form of clear_bit_unlock() as used by __bit_lock_unlock(). If all ++ * the bits in the word are protected by this lock some archs can use weaker ++ * ops to safely unlock. ++ * ++ * See for example x86's implementation. + */ + #define __clear_bit_unlock(nr, addr) \ + do { \ +- smp_mb(); \ +- __clear_bit(nr, addr); \ ++ smp_mb__before_atomic(); \ ++ clear_bit(nr, addr); \ + } while (0) + + #endif /* _ASM_GENERIC_BITOPS_LOCK_H_ */ +diff --git a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h +index 338fc1053835..e4196de8b77c 100644 +--- a/include/drm/drm_dp_mst_helper.h ++++ b/include/drm/drm_dp_mst_helper.h +@@ -44,8 +44,6 @@ struct drm_dp_vcpi { + /** + * struct drm_dp_mst_port - MST port + * @kref: reference count for this port. +- * @guid_valid: for DP 1.2 devices if we have validated the GUID. +- * @guid: guid for DP 1.2 device on this port. + * @port_num: port number + * @input: if this port is an input port. + * @mcs: message capability status - DP 1.2 spec. +@@ -70,10 +68,6 @@ struct drm_dp_vcpi { + struct drm_dp_mst_port { + struct kref kref; + +- /* if dpcd 1.2 device is on this port - its GUID info */ +- bool guid_valid; +- u8 guid[16]; +- + u8 port_num; + bool input; + bool mcs; +@@ -107,10 +101,12 @@ struct drm_dp_mst_port { + * @tx_slots: transmission slots for this device. + * @last_seqno: last sequence number used to talk to this. 
+ * @link_address_sent: if a link address message has been sent to this device yet. ++ * @guid: guid for DP 1.2 branch device. port under this branch can be ++ * identified by port #. + * + * This structure represents an MST branch device, there is one +- * primary branch device at the root, along with any others connected +- * to downstream ports ++ * primary branch device at the root, along with any other branches connected ++ * to downstream port of parent branches. + */ + struct drm_dp_mst_branch { + struct kref kref; +@@ -129,6 +125,9 @@ struct drm_dp_mst_branch { + struct drm_dp_sideband_msg_tx *tx_slots[2]; + int last_seqno; + bool link_address_sent; ++ ++ /* global unique identifier to identify branch devices */ ++ u8 guid[16]; + }; + + +@@ -401,11 +400,9 @@ struct drm_dp_payload { + * @conn_base_id: DRM connector ID this mgr is connected to. + * @down_rep_recv: msg receiver state for down replies. + * @up_req_recv: msg receiver state for up requests. +- * @lock: protects mst state, primary, guid, dpcd. ++ * @lock: protects mst state, primary, dpcd. + * @mst_state: if this manager is enabled for an MST capable port. + * @mst_primary: pointer to the primary branch device. +- * @guid_valid: GUID valid for the primary branch device. +- * @guid: GUID for primary port. + * @dpcd: cache of DPCD for primary port. + * @pbn_div: PBN to slots divisor. + * +@@ -427,13 +424,11 @@ struct drm_dp_mst_topology_mgr { + struct drm_dp_sideband_msg_rx up_req_recv; + + /* pointer to info about the initial MST device */ +- struct mutex lock; /* protects mst_state + primary + guid + dpcd */ ++ struct mutex lock; /* protects mst_state + primary + dpcd */ + + bool mst_state; + struct drm_dp_mst_branch *mst_primary; +- /* primary MST device GUID */ +- bool guid_valid; +- u8 guid[16]; ++ + u8 dpcd[DP_RECEIVER_CAP_SIZE]; + u8 sink_count; + int pbn_div; +diff --git a/include/linux/efi.h b/include/linux/efi.h +index 0949f9c7e872..777c57596863 100644 +--- a/include/linux/efi.h ++++ b/include/linux/efi.h +@@ -1155,7 +1155,10 @@ int efivar_entry_iter(int (*func)(struct efivar_entry *, void *), + struct efivar_entry *efivar_entry_find(efi_char16_t *name, efi_guid_t guid, + struct list_head *head, bool remove); + +-bool efivar_validate(efi_char16_t *var_name, u8 *data, unsigned long len); ++bool efivar_validate(efi_guid_t vendor, efi_char16_t *var_name, u8 *data, ++ unsigned long data_size); ++bool efivar_variable_is_removable(efi_guid_t vendor, const char *name, ++ size_t len); + + extern struct work_struct efivar_work; + void efivar_run_worker(void); +diff --git a/include/linux/fs.h b/include/linux/fs.h +index 58f6ab319996..2a41353033d3 100644 +--- a/include/linux/fs.h ++++ b/include/linux/fs.h +@@ -2067,7 +2067,7 @@ extern long do_sys_open(int dfd, const char __user *filename, int flags, + extern struct file *file_open_name(struct filename *, int, umode_t); + extern struct file *filp_open(const char *, int, umode_t); + extern struct file *file_open_root(struct dentry *, struct vfsmount *, +- const char *, int); ++ const char *, int, umode_t); + extern struct file * dentry_open(const struct path *, int, const struct cred *); + extern int filp_close(struct file *, fl_owner_t id); + +diff --git a/include/linux/kernel.h b/include/linux/kernel.h +index 3d770f5564b8..0fe0cb8a5862 100644 +--- a/include/linux/kernel.h ++++ b/include/linux/kernel.h +@@ -590,7 +590,7 @@ do { \ + + #define do_trace_printk(fmt, args...) 
\ + do { \ +- static const char *trace_printk_fmt \ ++ static const char *trace_printk_fmt __used \ + __attribute__((section("__trace_printk_fmt"))) = \ + __builtin_constant_p(fmt) ? fmt : NULL; \ + \ +@@ -634,7 +634,7 @@ int __trace_printk(unsigned long ip, const char *fmt, ...); + */ + + #define trace_puts(str) ({ \ +- static const char *trace_printk_fmt \ ++ static const char *trace_printk_fmt __used \ + __attribute__((section("__trace_printk_fmt"))) = \ + __builtin_constant_p(str) ? str : NULL; \ + \ +@@ -656,7 +656,7 @@ extern void trace_dump_stack(int skip); + #define ftrace_vprintk(fmt, vargs) \ + do { \ + if (__builtin_constant_p(fmt)) { \ +- static const char *trace_printk_fmt \ ++ static const char *trace_printk_fmt __used \ + __attribute__((section("__trace_printk_fmt"))) = \ + __builtin_constant_p(fmt) ? fmt : NULL; \ + \ +diff --git a/include/linux/mm.h b/include/linux/mm.h +index 86a977bf4f79..9eef3a1f2291 100644 +--- a/include/linux/mm.h ++++ b/include/linux/mm.h +@@ -598,7 +598,7 @@ static inline compound_page_dtor *get_compound_page_dtor(struct page *page) + return (compound_page_dtor *)page[1].lru.next; + } + +-static inline int compound_order(struct page *page) ++static inline unsigned int compound_order(struct page *page) + { + if (!PageHead(page)) + return 0; +@@ -1730,7 +1730,8 @@ extern void si_meminfo(struct sysinfo * val); + extern void si_meminfo_node(struct sysinfo *val, int nid); + + extern __printf(3, 4) +-void warn_alloc_failed(gfp_t gfp_mask, int order, const char *fmt, ...); ++void warn_alloc_failed(gfp_t gfp_mask, unsigned int order, ++ const char *fmt, ...); + + extern void setup_per_cpu_pageset(void); + +diff --git a/include/linux/module.h b/include/linux/module.h +index 71f282a4e307..18edb9660da0 100644 +--- a/include/linux/module.h ++++ b/include/linux/module.h +@@ -224,6 +224,12 @@ struct module_ref { + unsigned long decs; + } __attribute((aligned(2 * sizeof(unsigned long)))); + ++struct mod_kallsyms { ++ Elf_Sym *symtab; ++ unsigned int num_symtab; ++ char *strtab; ++}; ++ + struct module { + enum module_state state; + +@@ -311,14 +317,9 @@ struct module { + #endif + + #ifdef CONFIG_KALLSYMS +- /* +- * We keep the symbol and string tables for kallsyms. +- * The core_* fields below are temporary, loader-only (they +- * could really be discarded after module init). 
+- */ +- Elf_Sym *symtab, *core_symtab; +- unsigned int num_symtab, core_num_syms; +- char *strtab, *core_strtab; ++ /* Protected by RCU and/or module_mutex: use rcu_dereference() */ ++ struct mod_kallsyms *kallsyms; ++ struct mod_kallsyms core_kallsyms; + + /* Section attributes */ + struct module_sect_attrs *sect_attrs; +diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h +index 2baeee12f48e..e942558b3585 100644 +--- a/include/linux/pageblock-flags.h ++++ b/include/linux/pageblock-flags.h +@@ -44,7 +44,7 @@ enum pageblock_bits { + #ifdef CONFIG_HUGETLB_PAGE_SIZE_VARIABLE + + /* Huge page sizes are variable */ +-extern int pageblock_order; ++extern unsigned int pageblock_order; + + #else /* CONFIG_HUGETLB_PAGE_SIZE_VARIABLE */ + +diff --git a/include/linux/pci.h b/include/linux/pci.h +index 7a3484490867..88f1faf3de43 100644 +--- a/include/linux/pci.h ++++ b/include/linux/pci.h +@@ -355,6 +355,9 @@ struct pci_dev { + unsigned int __aer_firmware_first:1; + unsigned int broken_intx_masking:1; + unsigned int io_window_1k:1; /* Intel P2P bridge 1K I/O windows */ ++ unsigned int irq_managed:1; ++ unsigned int has_secondary_link:1; ++ unsigned int non_compliant_bars:1; /* broken BARs; ignore them */ + pci_dev_flags_t dev_flags; + atomic_t enable_cnt; /* pci_enable_device has been called */ + +diff --git a/include/linux/poison.h b/include/linux/poison.h +index 2110a81c5e2a..253c9b4198ef 100644 +--- a/include/linux/poison.h ++++ b/include/linux/poison.h +@@ -19,8 +19,8 @@ + * under normal circumstances, used to verify that nobody uses + * non-initialized list entries. + */ +-#define LIST_POISON1 ((void *) 0x00100100 + POISON_POINTER_DELTA) +-#define LIST_POISON2 ((void *) 0x00200200 + POISON_POINTER_DELTA) ++#define LIST_POISON1 ((void *) 0x100 + POISON_POINTER_DELTA) ++#define LIST_POISON2 ((void *) 0x200 + POISON_POINTER_DELTA) + + /********** include/linux/timer.h **********/ + /* +diff --git a/include/linux/ptrace.h b/include/linux/ptrace.h +index cc79eff4a1ad..608d90444b6f 100644 +--- a/include/linux/ptrace.h ++++ b/include/linux/ptrace.h +@@ -56,7 +56,29 @@ extern void exit_ptrace(struct task_struct *tracer); + #define PTRACE_MODE_READ 0x01 + #define PTRACE_MODE_ATTACH 0x02 + #define PTRACE_MODE_NOAUDIT 0x04 +-/* Returns true on success, false on denial. */ ++#define PTRACE_MODE_FSCREDS 0x08 ++#define PTRACE_MODE_REALCREDS 0x10 ++ ++/* shorthands for READ/ATTACH and FSCREDS/REALCREDS combinations */ ++#define PTRACE_MODE_READ_FSCREDS (PTRACE_MODE_READ | PTRACE_MODE_FSCREDS) ++#define PTRACE_MODE_READ_REALCREDS (PTRACE_MODE_READ | PTRACE_MODE_REALCREDS) ++#define PTRACE_MODE_ATTACH_FSCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_FSCREDS) ++#define PTRACE_MODE_ATTACH_REALCREDS (PTRACE_MODE_ATTACH | PTRACE_MODE_REALCREDS) ++ ++/** ++ * ptrace_may_access - check whether the caller is permitted to access ++ * a target task. ++ * @task: target task ++ * @mode: selects type of access and caller credentials ++ * ++ * Returns true on success, false on denial. ++ * ++ * One of the flags PTRACE_MODE_FSCREDS and PTRACE_MODE_REALCREDS must ++ * be set in @mode to specify whether the access was requested through ++ * a filesystem syscall (should use effective capabilities and fsuid ++ * of the caller) or through an explicit syscall such as ++ * process_vm_writev or ptrace (and should use the real credentials). 
++ */ + extern bool ptrace_may_access(struct task_struct *task, unsigned int mode); + + static inline int ptrace_reparented(struct task_struct *child) +diff --git a/include/linux/thermal.h b/include/linux/thermal.h +index 041f9b4e0074..96c305167a1e 100644 +--- a/include/linux/thermal.h ++++ b/include/linux/thermal.h +@@ -175,6 +175,7 @@ struct thermal_attr { + * @trip_hyst_attrs: attributes for trip points for sysfs: trip hysteresis + * @devdata: private pointer for device private data + * @trips: number of trip points the thermal zone supports ++ * @trips_disabled; bitmap for disabled trips + * @passive_delay: number of milliseconds to wait between polls when + * performing passive cooling. Currenty only used by the + * step-wise governor +@@ -211,6 +212,7 @@ struct thermal_zone_device { + struct thermal_attr *trip_hyst_attrs; + void *devdata; + int trips; ++ unsigned long trips_disabled; /* bitmap for disabled trips */ + int passive_delay; + int polling_delay; + int temperature; +diff --git a/include/linux/tty.h b/include/linux/tty.h +index 5171ef8f7b85..4858a3b79b7a 100644 +--- a/include/linux/tty.h ++++ b/include/linux/tty.h +@@ -574,7 +574,7 @@ static inline int tty_ldisc_receive_buf(struct tty_ldisc *ld, unsigned char *p, + count = ld->ops->receive_buf2(ld->tty, p, f, count); + else { + count = min_t(int, count, ld->tty->receive_room); +- if (count) ++ if (count && ld->ops->receive_buf) + ld->ops->receive_buf(ld->tty, p, f, count); + } + return count; +diff --git a/include/linux/ucs2_string.h b/include/linux/ucs2_string.h +index cbb20afdbc01..bb679b48f408 100644 +--- a/include/linux/ucs2_string.h ++++ b/include/linux/ucs2_string.h +@@ -11,4 +11,8 @@ unsigned long ucs2_strlen(const ucs2_char_t *s); + unsigned long ucs2_strsize(const ucs2_char_t *data, unsigned long maxlength); + int ucs2_strncmp(const ucs2_char_t *a, const ucs2_char_t *b, size_t len); + ++unsigned long ucs2_utf8size(const ucs2_char_t *src); ++unsigned long ucs2_as_utf8(u8 *dest, const ucs2_char_t *src, ++ unsigned long maxlength); ++ + #endif /* _LINUX_UCS2_STRING_H_ */ +diff --git a/kernel/events/core.c b/kernel/events/core.c +index ff181a5a5562..44a47ac6c1e8 100644 +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -3188,7 +3188,7 @@ find_lively_task_by_vpid(pid_t vpid) + + /* Reuse ptrace permission checks for now. 
*/ + err = -EACCES; +- if (!ptrace_may_access(task, PTRACE_MODE_READ)) ++ if (!ptrace_may_access(task, PTRACE_MODE_READ_REALCREDS)) + goto errout; + + return task; +diff --git a/kernel/futex.c b/kernel/futex.c +index 1c43013e6edc..d9d63806f55f 100644 +--- a/kernel/futex.c ++++ b/kernel/futex.c +@@ -2763,7 +2763,7 @@ SYSCALL_DEFINE3(get_robust_list, int, pid, + } + + ret = -EPERM; +- if (!ptrace_may_access(p, PTRACE_MODE_READ)) ++ if (!ptrace_may_access(p, PTRACE_MODE_READ_REALCREDS)) + goto err_unlock; + + head = p->robust_list; +diff --git a/kernel/futex_compat.c b/kernel/futex_compat.c +index 55c8c9349cfe..4ae3232e7a28 100644 +--- a/kernel/futex_compat.c ++++ b/kernel/futex_compat.c +@@ -155,7 +155,7 @@ COMPAT_SYSCALL_DEFINE3(get_robust_list, int, pid, + } + + ret = -EPERM; +- if (!ptrace_may_access(p, PTRACE_MODE_READ)) ++ if (!ptrace_may_access(p, PTRACE_MODE_READ_REALCREDS)) + goto err_unlock; + + head = p->compat_robust_list; +diff --git a/kernel/kcmp.c b/kernel/kcmp.c +index 0aa69ea1d8fd..3a47fa998fe0 100644 +--- a/kernel/kcmp.c ++++ b/kernel/kcmp.c +@@ -122,8 +122,8 @@ SYSCALL_DEFINE5(kcmp, pid_t, pid1, pid_t, pid2, int, type, + &task2->signal->cred_guard_mutex); + if (ret) + goto err; +- if (!ptrace_may_access(task1, PTRACE_MODE_READ) || +- !ptrace_may_access(task2, PTRACE_MODE_READ)) { ++ if (!ptrace_may_access(task1, PTRACE_MODE_READ_REALCREDS) || ++ !ptrace_may_access(task2, PTRACE_MODE_READ_REALCREDS)) { + ret = -EPERM; + goto err_unlock; + } +diff --git a/kernel/module.c b/kernel/module.c +index 3da0c001d985..1df11b175a24 100644 +--- a/kernel/module.c ++++ b/kernel/module.c +@@ -179,6 +179,9 @@ struct load_info { + struct _ddebug *debug; + unsigned int num_debug; + bool sig_ok; ++#ifdef CONFIG_KALLSYMS ++ unsigned long mod_kallsyms_init_off; ++#endif + struct { + unsigned int sym, str, mod, vers, info, pcpu; + } index; +@@ -2325,8 +2328,20 @@ static void layout_symtab(struct module *mod, struct load_info *info) + strsect->sh_entsize = get_offset(mod, &mod->init_size, strsect, + info->index.str) | INIT_OFFSET_MASK; + pr_debug("\t%s\n", info->secstrings + strsect->sh_name); ++ ++ /* We'll tack temporary mod_kallsyms on the end. */ ++ mod->init_size = ALIGN(mod->init_size, ++ __alignof__(struct mod_kallsyms)); ++ info->mod_kallsyms_init_off = mod->init_size; ++ mod->init_size += sizeof(struct mod_kallsyms); ++ mod->init_size = debug_align(mod->init_size); + } + ++/* ++ * We use the full symtab and strtab which layout_symtab arranged to ++ * be appended to the init section. Later we switch to the cut-down ++ * core-only ones. ++ */ + static void add_kallsyms(struct module *mod, const struct load_info *info) + { + unsigned int i, ndst; +@@ -2335,28 +2350,33 @@ static void add_kallsyms(struct module *mod, const struct load_info *info) + char *s; + Elf_Shdr *symsec = &info->sechdrs[info->index.sym]; + +- mod->symtab = (void *)symsec->sh_addr; +- mod->num_symtab = symsec->sh_size / sizeof(Elf_Sym); ++ /* Set up to point into init section. */ ++ mod->kallsyms = mod->module_init + info->mod_kallsyms_init_off; ++ ++ mod->kallsyms->symtab = (void *)symsec->sh_addr; ++ mod->kallsyms->num_symtab = symsec->sh_size / sizeof(Elf_Sym); + /* Make sure we get permanent strtab: don't use info->strtab. */ +- mod->strtab = (void *)info->sechdrs[info->index.str].sh_addr; ++ mod->kallsyms->strtab = (void *)info->sechdrs[info->index.str].sh_addr; + + /* Set types up while we still have access to sections. 
*/ +- for (i = 0; i < mod->num_symtab; i++) +- mod->symtab[i].st_info = elf_type(&mod->symtab[i], info); +- +- mod->core_symtab = dst = mod->module_core + info->symoffs; +- mod->core_strtab = s = mod->module_core + info->stroffs; +- src = mod->symtab; +- for (ndst = i = 0; i < mod->num_symtab; i++) { ++ for (i = 0; i < mod->kallsyms->num_symtab; i++) ++ mod->kallsyms->symtab[i].st_info ++ = elf_type(&mod->kallsyms->symtab[i], info); ++ ++ /* Now populate the cut down core kallsyms for after init. */ ++ mod->core_kallsyms.symtab = dst = mod->module_core + info->symoffs; ++ mod->core_kallsyms.strtab = s = mod->module_core + info->stroffs; ++ src = mod->kallsyms->symtab; ++ for (ndst = i = 0; i < mod->kallsyms->num_symtab; i++) { + if (i == 0 || + is_core_symbol(src+i, info->sechdrs, info->hdr->e_shnum)) { + dst[ndst] = src[i]; +- dst[ndst++].st_name = s - mod->core_strtab; +- s += strlcpy(s, &mod->strtab[src[i].st_name], ++ dst[ndst++].st_name = s - mod->core_kallsyms.strtab; ++ s += strlcpy(s, &mod->kallsyms->strtab[src[i].st_name], + KSYM_NAME_LEN) + 1; + } + } +- mod->core_num_syms = ndst; ++ mod->core_kallsyms.num_symtab = ndst; + } + #else + static inline void layout_symtab(struct module *mod, struct load_info *info) +@@ -3076,9 +3096,8 @@ static int do_init_module(struct module *mod) + module_put(mod); + trim_init_extable(mod); + #ifdef CONFIG_KALLSYMS +- mod->num_symtab = mod->core_num_syms; +- mod->symtab = mod->core_symtab; +- mod->strtab = mod->core_strtab; ++ /* Switch to core kallsyms now init is done: kallsyms may be walking! */ ++ rcu_assign_pointer(mod->kallsyms, &mod->core_kallsyms); + #endif + unset_module_init_ro_nx(mod); + module_free(mod, mod->module_init); +@@ -3401,6 +3420,11 @@ static inline int is_arm_mapping_symbol(const char *str) + && (str[2] == '\0' || str[2] == '.'); + } + ++static const char *symname(struct mod_kallsyms *kallsyms, unsigned int symnum) ++{ ++ return kallsyms->strtab + kallsyms->symtab[symnum].st_name; ++} ++ + static const char *get_ksymbol(struct module *mod, + unsigned long addr, + unsigned long *size, +@@ -3408,6 +3432,7 @@ static const char *get_ksymbol(struct module *mod, + { + unsigned int i, best = 0; + unsigned long nextval; ++ struct mod_kallsyms *kallsyms = rcu_dereference_sched(mod->kallsyms); + + /* At worse, next value is at end of module */ + if (within_module_init(addr, mod)) +@@ -3417,32 +3442,32 @@ static const char *get_ksymbol(struct module *mod, + + /* Scan for closest preceding symbol, and next symbol. (ELF + starts real symbols at 1). */ +- for (i = 1; i < mod->num_symtab; i++) { +- if (mod->symtab[i].st_shndx == SHN_UNDEF) ++ for (i = 1; i < kallsyms->num_symtab; i++) { ++ if (kallsyms->symtab[i].st_shndx == SHN_UNDEF) + continue; + + /* We ignore unnamed symbols: they're uninformative + * and inserted at a whim. 
*/ +- if (mod->symtab[i].st_value <= addr +- && mod->symtab[i].st_value > mod->symtab[best].st_value +- && *(mod->strtab + mod->symtab[i].st_name) != '\0' +- && !is_arm_mapping_symbol(mod->strtab + mod->symtab[i].st_name)) ++ if (*symname(kallsyms, i) == '\0' ++ || is_arm_mapping_symbol(symname(kallsyms, i))) ++ continue; ++ ++ if (kallsyms->symtab[i].st_value <= addr ++ && kallsyms->symtab[i].st_value > kallsyms->symtab[best].st_value) + best = i; +- if (mod->symtab[i].st_value > addr +- && mod->symtab[i].st_value < nextval +- && *(mod->strtab + mod->symtab[i].st_name) != '\0' +- && !is_arm_mapping_symbol(mod->strtab + mod->symtab[i].st_name)) +- nextval = mod->symtab[i].st_value; ++ if (kallsyms->symtab[i].st_value > addr ++ && kallsyms->symtab[i].st_value < nextval) ++ nextval = kallsyms->symtab[i].st_value; + } + + if (!best) + return NULL; + + if (size) +- *size = nextval - mod->symtab[best].st_value; ++ *size = nextval - kallsyms->symtab[best].st_value; + if (offset) +- *offset = addr - mod->symtab[best].st_value; +- return mod->strtab + mod->symtab[best].st_name; ++ *offset = addr - kallsyms->symtab[best].st_value; ++ return symname(kallsyms, best); + } + + /* For kallsyms to ask for address resolution. NULL means not found. Careful +@@ -3535,19 +3560,21 @@ int module_get_kallsym(unsigned int symnum, unsigned long *value, char *type, + + preempt_disable(); + list_for_each_entry_rcu(mod, &modules, list) { ++ struct mod_kallsyms *kallsyms; ++ + if (mod->state == MODULE_STATE_UNFORMED) + continue; +- if (symnum < mod->num_symtab) { +- *value = mod->symtab[symnum].st_value; +- *type = mod->symtab[symnum].st_info; +- strlcpy(name, mod->strtab + mod->symtab[symnum].st_name, +- KSYM_NAME_LEN); ++ kallsyms = rcu_dereference_sched(mod->kallsyms); ++ if (symnum < kallsyms->num_symtab) { ++ *value = kallsyms->symtab[symnum].st_value; ++ *type = kallsyms->symtab[symnum].st_info; ++ strlcpy(name, symname(kallsyms, symnum), KSYM_NAME_LEN); + strlcpy(module_name, mod->name, MODULE_NAME_LEN); + *exported = is_exported(name, *value, mod); + preempt_enable(); + return 0; + } +- symnum -= mod->num_symtab; ++ symnum -= kallsyms->num_symtab; + } + preempt_enable(); + return -ERANGE; +@@ -3556,11 +3583,12 @@ int module_get_kallsym(unsigned int symnum, unsigned long *value, char *type, + static unsigned long mod_find_symname(struct module *mod, const char *name) + { + unsigned int i; ++ struct mod_kallsyms *kallsyms = rcu_dereference_sched(mod->kallsyms); + +- for (i = 0; i < mod->num_symtab; i++) +- if (strcmp(name, mod->strtab+mod->symtab[i].st_name) == 0 && +- mod->symtab[i].st_info != 'U') +- return mod->symtab[i].st_value; ++ for (i = 0; i < kallsyms->num_symtab; i++) ++ if (strcmp(name, symname(kallsyms, i)) == 0 && ++ kallsyms->symtab[i].st_info != 'U') ++ return kallsyms->symtab[i].st_value; + return 0; + } + +@@ -3597,11 +3625,14 @@ int module_kallsyms_on_each_symbol(int (*fn)(void *, const char *, + int ret; + + list_for_each_entry(mod, &modules, list) { ++ /* We hold module_mutex: no need for rcu_dereference_sched */ ++ struct mod_kallsyms *kallsyms = mod->kallsyms; ++ + if (mod->state == MODULE_STATE_UNFORMED) + continue; +- for (i = 0; i < mod->num_symtab; i++) { +- ret = fn(data, mod->strtab + mod->symtab[i].st_name, +- mod, mod->symtab[i].st_value); ++ for (i = 0; i < kallsyms->num_symtab; i++) { ++ ret = fn(data, symname(kallsyms, i), ++ mod, kallsyms->symtab[i].st_value); + if (ret != 0) + return ret; + } +diff --git a/kernel/ptrace.c b/kernel/ptrace.c +index dcd968232d42..0856b9720598 
100644 +--- a/kernel/ptrace.c ++++ b/kernel/ptrace.c +@@ -219,6 +219,14 @@ static int ptrace_has_cap(struct user_namespace *ns, unsigned int mode) + static int __ptrace_may_access(struct task_struct *task, unsigned int mode) + { + const struct cred *cred = current_cred(), *tcred; ++ int dumpable = 0; ++ kuid_t caller_uid; ++ kgid_t caller_gid; ++ ++ if (!(mode & PTRACE_MODE_FSCREDS) == !(mode & PTRACE_MODE_REALCREDS)) { ++ WARN(1, "denying ptrace access check without PTRACE_MODE_*CREDS\n"); ++ return -EPERM; ++ } + + /* May we inspect the given task? + * This check is used both for attaching with ptrace +@@ -228,18 +236,33 @@ static int __ptrace_may_access(struct task_struct *task, unsigned int mode) + * because setting up the necessary parent/child relationship + * or halting the specified task is impossible. + */ +- int dumpable = 0; ++ + /* Don't let security modules deny introspection */ + if (same_thread_group(task, current)) + return 0; + rcu_read_lock(); ++ if (mode & PTRACE_MODE_FSCREDS) { ++ caller_uid = cred->fsuid; ++ caller_gid = cred->fsgid; ++ } else { ++ /* ++ * Using the euid would make more sense here, but something ++ * in userland might rely on the old behavior, and this ++ * shouldn't be a security problem since ++ * PTRACE_MODE_REALCREDS implies that the caller explicitly ++ * used a syscall that requests access to another process ++ * (and not a filesystem syscall to procfs). ++ */ ++ caller_uid = cred->uid; ++ caller_gid = cred->gid; ++ } + tcred = __task_cred(task); +- if (uid_eq(cred->uid, tcred->euid) && +- uid_eq(cred->uid, tcred->suid) && +- uid_eq(cred->uid, tcred->uid) && +- gid_eq(cred->gid, tcred->egid) && +- gid_eq(cred->gid, tcred->sgid) && +- gid_eq(cred->gid, tcred->gid)) ++ if (uid_eq(caller_uid, tcred->euid) && ++ uid_eq(caller_uid, tcred->suid) && ++ uid_eq(caller_uid, tcred->uid) && ++ gid_eq(caller_gid, tcred->egid) && ++ gid_eq(caller_gid, tcred->sgid) && ++ gid_eq(caller_gid, tcred->gid)) + goto ok; + if (ptrace_has_cap(tcred->user_ns, mode)) + goto ok; +@@ -306,7 +329,7 @@ static int ptrace_attach(struct task_struct *task, long request, + goto out; + + task_lock(task); +- retval = __ptrace_may_access(task, PTRACE_MODE_ATTACH); ++ retval = __ptrace_may_access(task, PTRACE_MODE_ATTACH_REALCREDS); + task_unlock(task); + if (retval) + goto unlock_creds; +diff --git a/kernel/resource.c b/kernel/resource.c +index 0bcebffc4e77..e3011e1a5c8d 100644 +--- a/kernel/resource.c ++++ b/kernel/resource.c +@@ -1073,9 +1073,10 @@ struct resource * __request_region(struct resource *parent, + if (!conflict) + break; + if (conflict != parent) { +- parent = conflict; +- if (!(conflict->flags & IORESOURCE_BUSY)) ++ if (!(conflict->flags & IORESOURCE_BUSY)) { ++ parent = conflict; + continue; ++ } + } + if (conflict->flags & flags & IORESOURCE_MUXED) { + add_wait_queue(&muxed_resource_wait, &wait); +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index d650e1e593b8..4317f0156092 100644 +--- a/kernel/sched/core.c ++++ b/kernel/sched/core.c +@@ -6416,7 +6416,7 @@ static void sched_init_numa(void) + + sched_domains_numa_masks[i][j] = mask; + +- for (k = 0; k < nr_node_ids; k++) { ++ for_each_node(k) { + if (node_distance(j, k) > sched_domains_numa_distance[i]) + continue; + +diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c +index 8394b1ee600c..87b8576cbd50 100644 +--- a/kernel/sched/cputime.c ++++ b/kernel/sched/cputime.c +@@ -259,21 +259,21 @@ static __always_inline bool steal_account_process_tick(void) + #ifdef CONFIG_PARAVIRT + if 
(static_key_false(&paravirt_steal_enabled)) {
+ u64 steal;
+- cputime_t steal_ct;
++ unsigned long steal_jiffies;
+
+ steal = paravirt_steal_clock(smp_processor_id());
+ steal -= this_rq()->prev_steal_time;
+
+ /*
+- * cputime_t may be less precise than nsecs (eg: if it's
+- * based on jiffies). Lets cast the result to cputime
++ * steal is in nsecs but our caller is expecting steal
++ * time in jiffies. Lets cast the result to jiffies
+ * granularity and account the rest on the next rounds.
+ */
+- steal_ct = nsecs_to_cputime(steal);
+- this_rq()->prev_steal_time += cputime_to_nsecs(steal_ct);
++ steal_jiffies = nsecs_to_jiffies(steal);
++ this_rq()->prev_steal_time += jiffies_to_nsecs(steal_jiffies);
+
+- account_steal_time(steal_ct);
+- return steal_ct;
++ account_steal_time(jiffies_to_cputime(steal_jiffies));
++ return steal_jiffies;
+ }
+ #endif
+ return false;
+diff --git a/kernel/sysctl_binary.c b/kernel/sysctl_binary.c
+index 9a4f750a2963..b99a55863de1 100644
+--- a/kernel/sysctl_binary.c
++++ b/kernel/sysctl_binary.c
+@@ -1320,7 +1320,7 @@ static ssize_t binary_sysctl(const int *name, int nlen,
+ }
+
+ mnt = task_active_pid_ns(current)->proc_mnt;
+- file = file_open_root(mnt->mnt_root, mnt, pathname, flags);
++ file = file_open_root(mnt->mnt_root, mnt, pathname, flags, 0);
+ result = PTR_ERR(file);
+ if (IS_ERR(file))
+ goto out_putname;
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 72c71345db81..2d5909fae96d 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -4679,7 +4679,10 @@ static ssize_t tracing_splice_read_pipe(struct file *filp,
+
+ spd.nr_pages = i;
+
+- ret = splice_to_pipe(pipe, &spd);
++ if (i)
++ ret = splice_to_pipe(pipe, &spd);
++ else
++ ret = 0;
+ out:
+ splice_shrink_spd(&spd);
+ return ret;
+diff --git a/kernel/trace/trace_printk.c b/kernel/trace/trace_printk.c
+index 2900817ba65c..14ffaa59a9e9 100644
+--- a/kernel/trace/trace_printk.c
++++ b/kernel/trace/trace_printk.c
+@@ -291,6 +291,9 @@ static int t_show(struct seq_file *m, void *v)
+ const char *str = *fmt;
+ int i;
+
++ if (!*fmt)
++ return 0;
++
+ seq_printf(m, "0x%lx : \"", *(unsigned long *)fmt);
+
+ /*
+diff --git a/lib/ucs2_string.c b/lib/ucs2_string.c
+index 6f500ef2301d..f0b323abb4c6 100644
+--- a/lib/ucs2_string.c
++++ b/lib/ucs2_string.c
+@@ -49,3 +49,65 @@ ucs2_strncmp(const ucs2_char_t *a, const ucs2_char_t *b, size_t len)
+ }
+ }
+ EXPORT_SYMBOL(ucs2_strncmp);
++
++unsigned long
++ucs2_utf8size(const ucs2_char_t *src)
++{
++ unsigned long i;
++ unsigned long j = 0;
++
++ for (i = 0; i < ucs2_strlen(src); i++) {
++ u16 c = src[i];
++
++ if (c >= 0x800)
++ j += 3;
++ else if (c >= 0x80)
++ j += 2;
++ else
++ j += 1;
++ }
++
++ return j;
++}
++EXPORT_SYMBOL(ucs2_utf8size);
++
++/*
++ * copy at most maxlength bytes of whole utf8 characters to dest from the
++ * ucs2 string src.
++ *
++ * The return value is the number of characters copied, not including the
++ * final NUL character.
++ */ ++unsigned long ++ucs2_as_utf8(u8 *dest, const ucs2_char_t *src, unsigned long maxlength) ++{ ++ unsigned int i; ++ unsigned long j = 0; ++ unsigned long limit = ucs2_strnlen(src, maxlength); ++ ++ for (i = 0; maxlength && i < limit; i++) { ++ u16 c = src[i]; ++ ++ if (c >= 0x800) { ++ if (maxlength < 3) ++ break; ++ maxlength -= 3; ++ dest[j++] = 0xe0 | (c & 0xf000) >> 12; ++ dest[j++] = 0x80 | (c & 0x0fc0) >> 6; ++ dest[j++] = 0x80 | (c & 0x003f); ++ } else if (c >= 0x80) { ++ if (maxlength < 2) ++ break; ++ maxlength -= 2; ++ dest[j++] = 0xc0 | (c & 0x7c0) >> 6; ++ dest[j++] = 0x80 | (c & 0x03f); ++ } else { ++ maxlength -= 1; ++ dest[j++] = c & 0x7f; ++ } ++ } ++ if (maxlength) ++ dest[j] = '\0'; ++ return j; ++} ++EXPORT_SYMBOL(ucs2_as_utf8); +diff --git a/mm/bootmem.c b/mm/bootmem.c +index 477be696511d..a23dd1934654 100644 +--- a/mm/bootmem.c ++++ b/mm/bootmem.c +@@ -164,7 +164,7 @@ void __init free_bootmem_late(unsigned long physaddr, unsigned long size) + end = PFN_DOWN(physaddr + size); + + for (; cursor < end; cursor++) { +- __free_pages_bootmem(pfn_to_page(cursor), 0); ++ __free_pages_bootmem(pfn_to_page(cursor), cursor, 0); + totalram_pages++; + } + } +@@ -172,7 +172,7 @@ void __init free_bootmem_late(unsigned long physaddr, unsigned long size) + static unsigned long __init free_all_bootmem_core(bootmem_data_t *bdata) + { + struct page *page; +- unsigned long *map, start, end, pages, count = 0; ++ unsigned long *map, start, end, pages, cur, count = 0; + + if (!bdata->node_bootmem_map) + return 0; +@@ -210,17 +210,17 @@ static unsigned long __init free_all_bootmem_core(bootmem_data_t *bdata) + if (IS_ALIGNED(start, BITS_PER_LONG) && vec == ~0UL) { + int order = ilog2(BITS_PER_LONG); + +- __free_pages_bootmem(pfn_to_page(start), order); ++ __free_pages_bootmem(pfn_to_page(start), start, order); + count += BITS_PER_LONG; + start += BITS_PER_LONG; + } else { +- unsigned long cur = start; ++ cur = start; + + start = ALIGN(start + 1, BITS_PER_LONG); + while (vec && cur != start) { + if (vec & 1) { + page = pfn_to_page(cur); +- __free_pages_bootmem(page, 0); ++ __free_pages_bootmem(page, cur, 0); + count++; + } + vec >>= 1; +@@ -229,12 +229,13 @@ static unsigned long __init free_all_bootmem_core(bootmem_data_t *bdata) + } + } + ++ cur = bdata->node_min_pfn; + page = virt_to_page(bdata->node_bootmem_map); + pages = bdata->node_low_pfn - bdata->node_min_pfn; + pages = bootmem_bootmap_pages(pages); + count += pages; + while (pages--) +- __free_pages_bootmem(page++, 0); ++ __free_pages_bootmem(page++, cur++, 0); + + bdebug("nid=%td released=%lx\n", bdata - bootmem_node_data, count); + +diff --git a/mm/hugetlb.c b/mm/hugetlb.c +index 77c8d03b4278..549bf5ac3d6e 100644 +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -681,7 +681,7 @@ static int hstate_next_node_to_free(struct hstate *h, nodemask_t *nodes_allowed) + + #if defined(CONFIG_CMA) && defined(CONFIG_X86_64) + static void destroy_compound_gigantic_page(struct page *page, +- unsigned long order) ++ unsigned int order) + { + int i; + int nr_pages = 1 << order; +@@ -697,7 +697,7 @@ static void destroy_compound_gigantic_page(struct page *page, + __ClearPageHead(page); + } + +-static void free_gigantic_page(struct page *page, unsigned order) ++static void free_gigantic_page(struct page *page, unsigned int order) + { + free_contig_range(page_to_pfn(page), 1 << order); + } +@@ -741,7 +741,7 @@ static bool zone_spans_last_pfn(const struct zone *zone, + return zone_spans_pfn(zone, last_pfn); + } + +-static struct page 
*alloc_gigantic_page(int nid, unsigned order) ++static struct page *alloc_gigantic_page(int nid, unsigned int order) + { + unsigned long nr_pages = 1 << order; + unsigned long ret, pfn, flags; +@@ -777,7 +777,7 @@ static struct page *alloc_gigantic_page(int nid, unsigned order) + } + + static void prep_new_huge_page(struct hstate *h, struct page *page, int nid); +-static void prep_compound_gigantic_page(struct page *page, unsigned long order); ++static void prep_compound_gigantic_page(struct page *page, unsigned int order); + + static struct page *alloc_fresh_gigantic_page_node(struct hstate *h, int nid) + { +@@ -810,9 +810,9 @@ static int alloc_fresh_gigantic_page(struct hstate *h, + static inline bool gigantic_page_supported(void) { return true; } + #else + static inline bool gigantic_page_supported(void) { return false; } +-static inline void free_gigantic_page(struct page *page, unsigned order) { } ++static inline void free_gigantic_page(struct page *page, unsigned int order) { } + static inline void destroy_compound_gigantic_page(struct page *page, +- unsigned long order) { } ++ unsigned int order) { } + static inline int alloc_fresh_gigantic_page(struct hstate *h, + nodemask_t *nodes_allowed) { return 0; } + #endif +@@ -932,7 +932,7 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid) + put_page(page); /* free it into the hugepage allocator */ + } + +-static void prep_compound_gigantic_page(struct page *page, unsigned long order) ++static void prep_compound_gigantic_page(struct page *page, unsigned int order) + { + int i; + int nr_pages = 1 << order; +@@ -1490,7 +1490,8 @@ found: + return 1; + } + +-static void __init prep_compound_huge_page(struct page *page, int order) ++static void __init prep_compound_huge_page(struct page *page, ++ unsigned int order) + { + if (unlikely(order > (MAX_ORDER - 1))) + prep_compound_gigantic_page(page, order); +@@ -2197,7 +2198,7 @@ static int __init hugetlb_init(void) + module_init(hugetlb_init); + + /* Should be called on processing a hugepagesz=... option */ +-void __init hugetlb_add_hstate(unsigned order) ++void __init hugetlb_add_hstate(unsigned int order) + { + struct hstate *h; + unsigned long i; +diff --git a/mm/internal.h b/mm/internal.h +index a4f90ba7068e..858c8bf8aaa4 100644 +--- a/mm/internal.h ++++ b/mm/internal.h +@@ -133,8 +133,9 @@ __find_buddy_index(unsigned long page_idx, unsigned int order) + } + + extern int __isolate_free_page(struct page *page, unsigned int order); +-extern void __free_pages_bootmem(struct page *page, unsigned int order); +-extern void prep_compound_page(struct page *page, unsigned long order); ++extern void __free_pages_bootmem(struct page *page, unsigned long pfn, ++ unsigned int order); ++extern void prep_compound_page(struct page *page, unsigned int order); + #ifdef CONFIG_MEMORY_FAILURE + extern bool is_free_buddy_page(struct page *page); + #endif +@@ -192,7 +193,7 @@ isolate_migratepages_range(struct compact_control *cc, + * page cannot be allocated or merged in parallel. Alternatively, it must + * handle invalid values gracefully, and use page_order_unsafe() below. 
+ */ +-static inline unsigned long page_order(struct page *page) ++static inline unsigned int page_order(struct page *page) + { + /* PageBuddy() must be checked by the caller */ + return page_private(page); +diff --git a/mm/memblock.c b/mm/memblock.c +index 6ecb0d937fb5..eda16393e4f6 100644 +--- a/mm/memblock.c ++++ b/mm/memblock.c +@@ -1305,7 +1305,7 @@ void __init __memblock_free_late(phys_addr_t base, phys_addr_t size) + end = PFN_DOWN(base + size); + + for (; cursor < end; cursor++) { +- __free_pages_bootmem(pfn_to_page(cursor), 0); ++ __free_pages_bootmem(pfn_to_page(cursor), cursor, 0); + totalram_pages++; + } + } +diff --git a/mm/nobootmem.c b/mm/nobootmem.c +index 90b50468333e..4bea539921df 100644 +--- a/mm/nobootmem.c ++++ b/mm/nobootmem.c +@@ -77,7 +77,7 @@ void __init free_bootmem_late(unsigned long addr, unsigned long size) + end = PFN_DOWN(addr + size); + + for (; cursor < end; cursor++) { +- __free_pages_bootmem(pfn_to_page(cursor), 0); ++ __free_pages_bootmem(pfn_to_page(cursor), cursor, 0); + totalram_pages++; + } + } +@@ -92,7 +92,7 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end) + while (start + (1UL << order) > end) + order--; + +- __free_pages_bootmem(pfn_to_page(start), order); ++ __free_pages_bootmem(pfn_to_page(start), start, order); + + start += (1UL << order); + } +diff --git a/mm/page_alloc.c b/mm/page_alloc.c +index c32cb64a1277..73b00abbe282 100644 +--- a/mm/page_alloc.c ++++ b/mm/page_alloc.c +@@ -159,7 +159,7 @@ bool pm_suspended_storage(void) + #endif /* CONFIG_PM_SLEEP */ + + #ifdef CONFIG_HUGETLB_PAGE_SIZE_VARIABLE +-int pageblock_order __read_mostly; ++unsigned int pageblock_order __read_mostly; + #endif + + static void __free_pages_ok(struct page *page, unsigned int order); +@@ -361,7 +361,7 @@ static void free_compound_page(struct page *page) + __free_pages_ok(page, compound_order(page)); + } + +-void prep_compound_page(struct page *page, unsigned long order) ++void prep_compound_page(struct page *page, unsigned int order) + { + int i; + int nr_pages = 1 << order; +@@ -546,7 +546,9 @@ static inline void __free_one_page(struct page *page, + unsigned long combined_idx; + unsigned long uninitialized_var(buddy_idx); + struct page *buddy; +- int max_order = MAX_ORDER; ++ unsigned int max_order; ++ ++ max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1); + + VM_BUG_ON(!zone_is_initialized(zone)); + +@@ -555,28 +557,20 @@ static inline void __free_one_page(struct page *page, + return; + + VM_BUG_ON(migratetype == -1); +- if (is_migrate_isolate(migratetype)) { +- /* +- * We restrict max order of merging to prevent merge +- * between freepages on isolate pageblock and normal +- * pageblock. Without this, pageblock isolation +- * could cause incorrect freepage accounting. +- */ +- max_order = min(MAX_ORDER, pageblock_order + 1); +- } else { ++ if (likely(!is_migrate_isolate(migratetype))) + __mod_zone_freepage_state(zone, 1 << order, migratetype); +- } + +- page_idx = pfn & ((1 << max_order) - 1); ++ page_idx = pfn & ((1 << MAX_ORDER) - 1); + + VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page); + VM_BUG_ON_PAGE(bad_range(zone, page), page); + ++continue_merging: + while (order < max_order - 1) { + buddy_idx = __find_buddy_index(page_idx, order); + buddy = page + (buddy_idx - page_idx); + if (!page_is_buddy(page, buddy, order)) +- break; ++ goto done_merging; + /* + * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page, + * merge with it and move up one order. 
+@@ -598,6 +592,32 @@ static inline void __free_one_page(struct page *page, + page_idx = combined_idx; + order++; + } ++ if (max_order < MAX_ORDER) { ++ /* If we are here, it means order is >= pageblock_order. ++ * We want to prevent merge between freepages on isolate ++ * pageblock and normal pageblock. Without this, pageblock ++ * isolation could cause incorrect freepage or CMA accounting. ++ * ++ * We don't want to hit this code for the more frequent ++ * low-order merging. ++ */ ++ if (unlikely(has_isolate_pageblock(zone))) { ++ int buddy_mt; ++ ++ buddy_idx = __find_buddy_index(page_idx, order); ++ buddy = page + (buddy_idx - page_idx); ++ buddy_mt = get_pageblock_migratetype(buddy); ++ ++ if (migratetype != buddy_mt ++ && (is_migrate_isolate(migratetype) || ++ is_migrate_isolate(buddy_mt))) ++ goto done_merging; ++ } ++ max_order++; ++ goto continue_merging; ++ } ++ ++done_merging: + set_page_order(page, order); + + /* +@@ -780,7 +800,8 @@ static void __free_pages_ok(struct page *page, unsigned int order) + local_irq_restore(flags); + } + +-void __init __free_pages_bootmem(struct page *page, unsigned int order) ++void __init __free_pages_bootmem(struct page *page, unsigned long pfn, ++ unsigned int order) + { + unsigned int nr_pages = 1 << order; + struct page *p = page; +@@ -993,7 +1014,7 @@ int move_freepages(struct zone *zone, + int migratetype) + { + struct page *page; +- unsigned long order; ++ unsigned int order; + int pages_moved = 0; + + #ifndef CONFIG_HOLES_IN_ZONE +@@ -1079,7 +1100,7 @@ static void change_pageblock_range(struct page *pageblock_page, + static int try_to_steal_freepages(struct zone *zone, struct page *page, + int start_type, int fallback_type) + { +- int current_order = page_order(page); ++ unsigned int current_order = page_order(page); + + /* + * When borrowing from MIGRATE_CMA, we need to release the excess +@@ -2142,7 +2163,7 @@ static DEFINE_RATELIMIT_STATE(nopage_rs, + DEFAULT_RATELIMIT_INTERVAL, + DEFAULT_RATELIMIT_BURST); + +-void warn_alloc_failed(gfp_t gfp_mask, int order, const char *fmt, ...) ++void warn_alloc_failed(gfp_t gfp_mask, unsigned int order, const char *fmt, ...) + { + unsigned int filter = SHOW_MEM_FILTER_NODES; + +@@ -2176,7 +2197,7 @@ void warn_alloc_failed(gfp_t gfp_mask, int order, const char *fmt, ...) 
+ va_end(args); + } + +- pr_warn("%s: page allocation failure: order:%d, mode:0x%x\n", ++ pr_warn("%s: page allocation failure: order:%u, mode:0x%x\n", + current->comm, order, gfp_mask); + + dump_stack(); +@@ -2949,7 +2970,8 @@ void free_kmem_pages(unsigned long addr, unsigned int order) + } + } + +-static void *make_alloc_exact(unsigned long addr, unsigned order, size_t size) ++static void *make_alloc_exact(unsigned long addr, unsigned int order, ++ size_t size) + { + if (addr) { + unsigned long alloc_end = addr + (PAGE_SIZE << order); +@@ -3001,7 +3023,7 @@ EXPORT_SYMBOL(alloc_pages_exact); + */ + void * __meminit alloc_pages_exact_nid(int nid, size_t size, gfp_t gfp_mask) + { +- unsigned order = get_order(size); ++ unsigned int order = get_order(size); + struct page *p = alloc_pages_node(nid, gfp_mask, order); + if (!p) + return NULL; +@@ -3300,7 +3322,8 @@ void show_free_areas(unsigned int filter) + } + + for_each_populated_zone(zone) { +- unsigned long nr[MAX_ORDER], flags, order, total = 0; ++ unsigned int order; ++ unsigned long nr[MAX_ORDER], flags, total = 0; + unsigned char types[MAX_ORDER]; + + if (skip_free_areas_node(filter, zone_to_nid(zone))) +@@ -3649,7 +3672,7 @@ static void build_zonelists(pg_data_t *pgdat) + nodemask_t used_mask; + int local_node, prev_node; + struct zonelist *zonelist; +- int order = current_zonelist_order; ++ unsigned int order = current_zonelist_order; + + /* initialize zonelists */ + for (i = 0; i < MAX_ZONELISTS; i++) { +@@ -6320,7 +6343,8 @@ int alloc_contig_range(unsigned long start, unsigned long end, + unsigned migratetype) + { + unsigned long outer_start, outer_end; +- int ret = 0, order; ++ unsigned int order; ++ int ret = 0; + + struct compact_control cc = { + .nr_migratepages = 0, +diff --git a/mm/process_vm_access.c b/mm/process_vm_access.c +index 5077afcd9e11..b2dfa8cf6e4c 100644 +--- a/mm/process_vm_access.c ++++ b/mm/process_vm_access.c +@@ -197,7 +197,7 @@ static ssize_t process_vm_rw_core(pid_t pid, struct iov_iter *iter, + goto free_proc_pages; + } + +- mm = mm_access(task, PTRACE_MODE_ATTACH); ++ mm = mm_access(task, PTRACE_MODE_ATTACH_REALCREDS); + if (!mm || IS_ERR(mm)) { + rc = IS_ERR(mm) ? PTR_ERR(mm) : -ESRCH; + /* +diff --git a/net/core/datagram.c b/net/core/datagram.c +index 3a402a7b20e9..2850ab32f28a 100644 +--- a/net/core/datagram.c ++++ b/net/core/datagram.c +@@ -130,6 +130,35 @@ out_noerr: + goto out; + } + ++static struct sk_buff *skb_set_peeked(struct sk_buff *skb) ++{ ++ struct sk_buff *nskb; ++ ++ if (skb->peeked) ++ return skb; ++ ++ /* We have to unshare an skb before modifying it. */ ++ if (!skb_shared(skb)) ++ goto done; ++ ++ nskb = skb_clone(skb, GFP_ATOMIC); ++ if (!nskb) ++ return ERR_PTR(-ENOMEM); ++ ++ skb->prev->next = nskb; ++ skb->next->prev = nskb; ++ nskb->prev = skb->prev; ++ nskb->next = skb->next; ++ ++ consume_skb(skb); ++ skb = nskb; ++ ++done: ++ skb->peeked = 1; ++ ++ return skb; ++} ++ + /** + * __skb_recv_datagram - Receive a datagram skbuff + * @sk: socket +@@ -164,7 +193,9 @@ out_noerr: + struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned int flags, + int *peeked, int *off, int *err) + { ++ struct sk_buff_head *queue = &sk->sk_receive_queue; + struct sk_buff *skb, *last; ++ unsigned long cpu_flags; + long timeo; + /* + * Caller is allowed not to check sk->sk_err before skb_recv_datagram() +@@ -183,8 +214,6 @@ struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned int flags, + * Look at current nfs client by the way... + * However, this function was correct in any case. 
8) + */ +- unsigned long cpu_flags; +- struct sk_buff_head *queue = &sk->sk_receive_queue; + int _off = *off; + + last = (struct sk_buff *)queue; +@@ -198,7 +227,12 @@ struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned int flags, + _off -= skb->len; + continue; + } +- skb->peeked = 1; ++ ++ skb = skb_set_peeked(skb); ++ error = PTR_ERR(skb); ++ if (IS_ERR(skb)) ++ goto unlock_err; ++ + atomic_inc(&skb->users); + } else + __skb_unlink(skb, queue); +@@ -222,6 +256,8 @@ struct sk_buff *__skb_recv_datagram(struct sock *sk, unsigned int flags, + + return NULL; + ++unlock_err: ++ spin_unlock_irqrestore(&queue->lock, cpu_flags); + no_packet: + *err = error; + return NULL; +diff --git a/scripts/coccinelle/iterators/use_after_iter.cocci b/scripts/coccinelle/iterators/use_after_iter.cocci +index f085f5968c52..ce8cc9c006e5 100644 +--- a/scripts/coccinelle/iterators/use_after_iter.cocci ++++ b/scripts/coccinelle/iterators/use_after_iter.cocci +@@ -123,7 +123,7 @@ list_remove_head(x,c,...) + | + sizeof(<+...c...+>) + | +-&c->member ++ &c->member + | + c = E + | +diff --git a/security/commoncap.c b/security/commoncap.c +index bab0611afc1e..6849e6c12987 100644 +--- a/security/commoncap.c ++++ b/security/commoncap.c +@@ -142,12 +142,17 @@ int cap_ptrace_access_check(struct task_struct *child, unsigned int mode) + { + int ret = 0; + const struct cred *cred, *child_cred; ++ const kernel_cap_t *caller_caps; + + rcu_read_lock(); + cred = current_cred(); + child_cred = __task_cred(child); ++ if (mode & PTRACE_MODE_FSCREDS) ++ caller_caps = &cred->cap_effective; ++ else ++ caller_caps = &cred->cap_permitted; + if (cred->user_ns == child_cred->user_ns && +- cap_issubset(child_cred->cap_permitted, cred->cap_permitted)) ++ cap_issubset(child_cred->cap_permitted, *caller_caps)) + goto out; + if (ns_capable(child_cred->user_ns, CAP_SYS_PTRACE)) + goto out; +diff --git a/security/keys/encrypted-keys/encrypted.c b/security/keys/encrypted-keys/encrypted.c +index 7bed4ad7cd76..0a374a2ce030 100644 +--- a/security/keys/encrypted-keys/encrypted.c ++++ b/security/keys/encrypted-keys/encrypted.c +@@ -845,6 +845,8 @@ static int encrypted_update(struct key *key, struct key_preparsed_payload *prep) + size_t datalen = prep->datalen; + int ret = 0; + ++ if (test_bit(KEY_FLAG_NEGATIVE, &key->flags)) ++ return -ENOKEY; + if (datalen <= 0 || datalen > 32767 || !prep->data) + return -EINVAL; + +diff --git a/security/keys/trusted.c b/security/keys/trusted.c +index c0594cb07ada..aeb38f1a12e7 100644 +--- a/security/keys/trusted.c ++++ b/security/keys/trusted.c +@@ -984,13 +984,16 @@ static void trusted_rcu_free(struct rcu_head *rcu) + */ + static int trusted_update(struct key *key, struct key_preparsed_payload *prep) + { +- struct trusted_key_payload *p = key->payload.data; ++ struct trusted_key_payload *p; + struct trusted_key_payload *new_p; + struct trusted_key_options *new_o; + size_t datalen = prep->datalen; + char *datablob; + int ret = 0; + ++ if (test_bit(KEY_FLAG_NEGATIVE, &key->flags)) ++ return -ENOKEY; ++ p = key->payload.data; + if (!p->migratable) + return -EPERM; + if (datalen <= 0 || datalen > 32767 || !prep->data) +diff --git a/security/keys/user_defined.c b/security/keys/user_defined.c +index 36b47bbd3d8c..7cf22260bdff 100644 +--- a/security/keys/user_defined.c ++++ b/security/keys/user_defined.c +@@ -120,7 +120,10 @@ int user_update(struct key *key, struct key_preparsed_payload *prep) + + if (ret == 0) { + /* attach the new data, displacing the old */ +- zap = key->payload.data; ++ if 
(!test_bit(KEY_FLAG_NEGATIVE, &key->flags)) ++ zap = key->payload.data; ++ else ++ zap = NULL; + rcu_assign_keypointer(key, upayload); + key->expiry = 0; + } +diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c +index 9d3c64af1ca9..cddf5d17bbc4 100644 +--- a/security/smack/smack_lsm.c ++++ b/security/smack/smack_lsm.c +@@ -308,12 +308,10 @@ static int smk_copy_rules(struct list_head *nhead, struct list_head *ohead, + */ + static inline unsigned int smk_ptrace_mode(unsigned int mode) + { +- switch (mode) { +- case PTRACE_MODE_READ: +- return MAY_READ; +- case PTRACE_MODE_ATTACH: ++ if (mode & PTRACE_MODE_ATTACH) + return MAY_READWRITE; +- } ++ if (mode & PTRACE_MODE_READ) ++ return MAY_READ; + + return 0; + } +diff --git a/security/yama/yama_lsm.c b/security/yama/yama_lsm.c +index 13c88fbcf037..0038834b558e 100644 +--- a/security/yama/yama_lsm.c ++++ b/security/yama/yama_lsm.c +@@ -292,7 +292,7 @@ int yama_ptrace_access_check(struct task_struct *child, + return rc; + + /* require ptrace target be a child of ptracer on attach */ +- if (mode == PTRACE_MODE_ATTACH) { ++ if (mode & PTRACE_MODE_ATTACH) { + switch (ptrace_scope) { + case YAMA_SCOPE_DISABLED: + /* No additional restrictions. */ +@@ -318,7 +318,7 @@ int yama_ptrace_access_check(struct task_struct *child, + } + } + +- if (rc) { ++ if (rc && (mode & PTRACE_MODE_NOAUDIT) == 0) { + printk_ratelimited(KERN_NOTICE + "ptrace of pid %d was attempted by: %s (pid %d)\n", + child->pid, current->comm, current->pid); +diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c +index 6c6e35aba989..e5d39ed42e85 100644 +--- a/sound/pci/hda/hda_generic.c ++++ b/sound/pci/hda/hda_generic.c +@@ -705,9 +705,6 @@ static void activate_amp(struct hda_codec *codec, hda_nid_t nid, int dir, + unsigned int caps; + unsigned int mask, val; + +- if (!enable && is_active_nid(codec, nid, dir, idx_to_check)) +- return; +- + caps = query_amp_caps(codec, nid, dir); + val = get_amp_val_to_activate(codec, nid, dir, caps, enable); + mask = get_amp_mask_to_modify(codec, nid, dir, idx_to_check, caps); +@@ -718,12 +715,22 @@ static void activate_amp(struct hda_codec *codec, hda_nid_t nid, int dir, + update_amp(codec, nid, dir, idx, mask, val); + } + ++static void check_and_activate_amp(struct hda_codec *codec, hda_nid_t nid, ++ int dir, int idx, int idx_to_check, ++ bool enable) ++{ ++ /* check whether the given amp is still used by others */ ++ if (!enable && is_active_nid(codec, nid, dir, idx_to_check)) ++ return; ++ activate_amp(codec, nid, dir, idx, idx_to_check, enable); ++} ++ + static void activate_amp_out(struct hda_codec *codec, struct nid_path *path, + int i, bool enable) + { + hda_nid_t nid = path->path[i]; + init_amp(codec, nid, HDA_OUTPUT, 0); +- activate_amp(codec, nid, HDA_OUTPUT, 0, 0, enable); ++ check_and_activate_amp(codec, nid, HDA_OUTPUT, 0, 0, enable); + } + + static void activate_amp_in(struct hda_codec *codec, struct nid_path *path, +@@ -751,9 +758,16 @@ static void activate_amp_in(struct hda_codec *codec, struct nid_path *path, + * when aa-mixer is available, we need to enable the path as well + */ + for (n = 0; n < nums; n++) { +- if (n != idx && (!add_aamix || conn[n] != spec->mixer_merge_nid)) +- continue; +- activate_amp(codec, nid, HDA_INPUT, n, idx, enable); ++ if (n != idx) { ++ if (conn[n] != spec->mixer_merge_nid) ++ continue; ++ /* when aamix is disabled, force to off */ ++ if (!add_aamix) { ++ activate_amp(codec, nid, HDA_INPUT, n, n, false); ++ continue; ++ } ++ } ++ check_and_activate_amp(codec, nid, 
HDA_INPUT, n, idx, enable); + } + } + +@@ -1473,6 +1487,12 @@ static bool map_singles(struct hda_codec *codec, int outs, + return found; + } + ++static inline bool has_aamix_out_paths(struct hda_gen_spec *spec) ++{ ++ return spec->aamix_out_paths[0] || spec->aamix_out_paths[1] || ++ spec->aamix_out_paths[2]; ++} ++ + /* create a new path including aamix if available, and return its index */ + static int check_aamix_out_path(struct hda_codec *codec, int path_idx) + { +@@ -2315,25 +2335,51 @@ static void update_aamix_paths(struct hda_codec *codec, bool do_mix, + } + } + ++/* re-initialize the output paths; only called from loopback_mixing_put() */ ++static void update_output_paths(struct hda_codec *codec, int num_outs, ++ const int *paths) ++{ ++ struct hda_gen_spec *spec = codec->spec; ++ struct nid_path *path; ++ int i; ++ ++ for (i = 0; i < num_outs; i++) { ++ path = snd_hda_get_path_from_idx(codec, paths[i]); ++ if (path) ++ snd_hda_activate_path(codec, path, path->active, ++ spec->aamix_mode); ++ } ++} ++ + static int loopback_mixing_put(struct snd_kcontrol *kcontrol, + struct snd_ctl_elem_value *ucontrol) + { + struct hda_codec *codec = snd_kcontrol_chip(kcontrol); + struct hda_gen_spec *spec = codec->spec; ++ const struct auto_pin_cfg *cfg = &spec->autocfg; + unsigned int val = ucontrol->value.enumerated.item[0]; + + if (val == spec->aamix_mode) + return 0; + spec->aamix_mode = val; +- update_aamix_paths(codec, val, spec->out_paths[0], +- spec->aamix_out_paths[0], +- spec->autocfg.line_out_type); +- update_aamix_paths(codec, val, spec->hp_paths[0], +- spec->aamix_out_paths[1], +- AUTO_PIN_HP_OUT); +- update_aamix_paths(codec, val, spec->speaker_paths[0], +- spec->aamix_out_paths[2], +- AUTO_PIN_SPEAKER_OUT); ++ if (has_aamix_out_paths(spec)) { ++ update_aamix_paths(codec, val, spec->out_paths[0], ++ spec->aamix_out_paths[0], ++ cfg->line_out_type); ++ update_aamix_paths(codec, val, spec->hp_paths[0], ++ spec->aamix_out_paths[1], ++ AUTO_PIN_HP_OUT); ++ update_aamix_paths(codec, val, spec->speaker_paths[0], ++ spec->aamix_out_paths[2], ++ AUTO_PIN_SPEAKER_OUT); ++ } else { ++ update_output_paths(codec, cfg->line_outs, spec->out_paths); ++ if (cfg->line_out_type != AUTO_PIN_HP_OUT) ++ update_output_paths(codec, cfg->hp_outs, spec->hp_paths); ++ if (cfg->line_out_type != AUTO_PIN_SPEAKER_OUT) ++ update_output_paths(codec, cfg->speaker_outs, ++ spec->speaker_paths); ++ } + return 1; + } + +@@ -2351,12 +2397,13 @@ static int create_loopback_mixing_ctl(struct hda_codec *codec) + + if (!spec->mixer_nid) + return 0; +- if (!(spec->aamix_out_paths[0] || spec->aamix_out_paths[1] || +- spec->aamix_out_paths[2])) +- return 0; + if (!snd_hda_gen_add_kctl(spec, NULL, &loopback_mixing_enum)) + return -ENOMEM; + spec->have_aamix_ctl = 1; ++ /* if no explicit aamix path is present (e.g. 
for Realtek codecs), ++ * enable aamix as default -- just for compatibility ++ */ ++ spec->aamix_mode = !has_aamix_out_paths(spec); + return 0; + } + +@@ -5236,6 +5283,8 @@ static void init_aamix_paths(struct hda_codec *codec) + + if (!spec->have_aamix_ctl) + return; ++ if (!has_aamix_out_paths(spec)) ++ return; + update_aamix_paths(codec, spec->aamix_mode, spec->out_paths[0], + spec->aamix_out_paths[0], + spec->autocfg.line_out_type); +diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c +index e9eec080e46c..087228a0bcea 100644 +--- a/sound/pci/hda/patch_cirrus.c ++++ b/sound/pci/hda/patch_cirrus.c +@@ -174,8 +174,12 @@ static void cs_automute(struct hda_codec *codec) + snd_hda_gen_update_outputs(codec); + + if (spec->gpio_eapd_hp || spec->gpio_eapd_speaker) { +- spec->gpio_data = spec->gen.hp_jack_present ? +- spec->gpio_eapd_hp : spec->gpio_eapd_speaker; ++ if (spec->gen.automute_speaker) ++ spec->gpio_data = spec->gen.hp_jack_present ? ++ spec->gpio_eapd_hp : spec->gpio_eapd_speaker; ++ else ++ spec->gpio_data = ++ spec->gpio_eapd_hp | spec->gpio_eapd_speaker; + snd_hda_codec_write(codec, 0x01, 0, + AC_VERB_SET_GPIO_DATA, spec->gpio_data); + } +diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c +index 8e8ccde973df..f6c5cbbbcfea 100644 +--- a/sound/pci/hda/patch_hdmi.c ++++ b/sound/pci/hda/patch_hdmi.c +@@ -1555,6 +1555,8 @@ static bool hdmi_present_sense(struct hdmi_spec_per_pin *per_pin, int repoll) + + mutex_lock(&per_pin->lock); + pin_eld->monitor_present = !!(present & AC_PINSENSE_PRESENCE); ++ eld->monitor_present = pin_eld->monitor_present; ++ + if (pin_eld->monitor_present) + eld->eld_valid = !!(present & AC_PINSENSE_ELDV); + else +@@ -3356,6 +3358,9 @@ static const struct hda_codec_preset snd_hda_preset_hdmi[] = { + { .id = 0x10de0070, .name = "GPU 70 HDMI/DP", .patch = patch_nvhdmi }, + { .id = 0x10de0071, .name = "GPU 71 HDMI/DP", .patch = patch_nvhdmi }, + { .id = 0x10de0072, .name = "GPU 72 HDMI/DP", .patch = patch_nvhdmi }, ++{ .id = 0x10de007d, .name = "GPU 7d HDMI/DP", .patch = patch_nvhdmi }, ++{ .id = 0x10de0082, .name = "GPU 82 HDMI/DP", .patch = patch_nvhdmi }, ++{ .id = 0x10de0083, .name = "GPU 83 HDMI/DP", .patch = patch_nvhdmi }, + { .id = 0x10de8001, .name = "MCP73 HDMI", .patch = patch_nvhdmi_2ch }, + { .id = 0x11069f80, .name = "VX900 HDMI/DP", .patch = patch_via_hdmi }, + { .id = 0x11069f81, .name = "VX900 HDMI/DP", .patch = patch_via_hdmi }, +@@ -3417,6 +3422,7 @@ MODULE_ALIAS("snd-hda-codec-id:10de0067"); + MODULE_ALIAS("snd-hda-codec-id:10de0070"); + MODULE_ALIAS("snd-hda-codec-id:10de0071"); + MODULE_ALIAS("snd-hda-codec-id:10de0072"); ++MODULE_ALIAS("snd-hda-codec-id:10de007d"); + MODULE_ALIAS("snd-hda-codec-id:10de8001"); + MODULE_ALIAS("snd-hda-codec-id:11069f80"); + MODULE_ALIAS("snd-hda-codec-id:11069f81"); +diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c +index 0405e9753c04..1bc0be907ead 100644 +--- a/sound/pci/hda/patch_realtek.c ++++ b/sound/pci/hda/patch_realtek.c +@@ -4618,6 +4618,7 @@ enum { + ALC290_FIXUP_SUBWOOFER, + ALC290_FIXUP_SUBWOOFER_HSJACK, + ALC269_FIXUP_THINKPAD_ACPI, ++ ALC269_FIXUP_DMIC_THINKPAD_ACPI, + ALC255_FIXUP_DELL1_MIC_NO_PRESENCE, + ALC255_FIXUP_DELL2_MIC_NO_PRESENCE, + ALC255_FIXUP_HEADSET_MODE, +@@ -5052,6 +5053,12 @@ static const struct hda_fixup alc269_fixups[] = { + .type = HDA_FIXUP_FUNC, + .v.func = hda_fixup_thinkpad_acpi, + }, ++ [ALC269_FIXUP_DMIC_THINKPAD_ACPI] = { ++ .type = HDA_FIXUP_FUNC, ++ .v.func = alc_fixup_inv_dmic, ++ .chained = true, ++ 
.chain_id = ALC269_FIXUP_THINKPAD_ACPI, ++ }, + [ALC255_FIXUP_DELL1_MIC_NO_PRESENCE] = { + .type = HDA_FIXUP_PINS, + .v.pins = (const struct hda_pintbl[]) { +@@ -5386,6 +5393,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = { + SND_PCI_QUIRK(0x17aa, 0x2226, "ThinkPad X250", ALC292_FIXUP_TPT440_DOCK), + SND_PCI_QUIRK(0x17aa, 0x2233, "Thinkpad", ALC293_FIXUP_LENOVO_SPK_NOISE), + SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), ++ SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY), ++ SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI), + SND_PCI_QUIRK(0x17aa, 0x3977, "IdeaPad S210", ALC283_FIXUP_INT_MIC), + SND_PCI_QUIRK(0x17aa, 0x3978, "IdeaPad Y410P", ALC269_FIXUP_NO_SHUTUP), + SND_PCI_QUIRK(0x17aa, 0x5013, "Thinkpad", ALC269_FIXUP_LIMIT_INT_MIC_BOOST), +diff --git a/sound/pci/intel8x0.c b/sound/pci/intel8x0.c +index 4a28252a42b9..b85ca1a2f75e 100644 +--- a/sound/pci/intel8x0.c ++++ b/sound/pci/intel8x0.c +@@ -2894,6 +2894,7 @@ static void intel8x0_measure_ac97_clock(struct intel8x0 *chip) + + static struct snd_pci_quirk intel8x0_clock_list[] = { + SND_PCI_QUIRK(0x0e11, 0x008a, "AD1885", 41000), ++ SND_PCI_QUIRK(0x1014, 0x0581, "AD1981B", 48000), + SND_PCI_QUIRK(0x1028, 0x00be, "AD1885", 44100), + SND_PCI_QUIRK(0x1028, 0x0177, "AD1980", 48000), + SND_PCI_QUIRK(0x1028, 0x01ad, "AD1981B", 48000), +diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c +index b4edae1d4f5d..0d7f1ced1d3b 100644 +--- a/sound/usb/mixer.c ++++ b/sound/usb/mixer.c +@@ -805,12 +805,12 @@ static struct usb_feature_control_info audio_feature_info[] = { + { "Tone Control - Treble", USB_MIXER_S8 }, + { "Graphic Equalizer", USB_MIXER_S8 }, /* FIXME: not implemeted yet */ + { "Auto Gain Control", USB_MIXER_BOOLEAN }, +- { "Delay Control", USB_MIXER_U16 }, ++ { "Delay Control", USB_MIXER_U16 }, /* FIXME: U32 in UAC2 */ + { "Bass Boost", USB_MIXER_BOOLEAN }, + { "Loudness", USB_MIXER_BOOLEAN }, + /* UAC2 specific */ +- { "Input Gain Control", USB_MIXER_U16 }, +- { "Input Gain Pad Control", USB_MIXER_BOOLEAN }, ++ { "Input Gain Control", USB_MIXER_S16 }, ++ { "Input Gain Pad Control", USB_MIXER_S16 }, + { "Phase Inverter Control", USB_MIXER_BOOLEAN }, + }; + +diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c +index 42c1d0171a5a..7da345b0cdaf 100644 +--- a/sound/usb/quirks.c ++++ b/sound/usb/quirks.c +@@ -168,6 +168,12 @@ static int create_fixed_stream_quirk(struct snd_usb_audio *chip, + } + alts = &iface->altsetting[fp->altset_idx]; + altsd = get_iface_desc(alts); ++ if (altsd->bNumEndpoints < 1) { ++ kfree(fp); ++ kfree(rate_table); ++ return -EINVAL; ++ } ++ + fp->protocol = altsd->bInterfaceProtocol; + + if (fp->datainterval == 0) +@@ -1108,6 +1114,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip) + switch (chip->usb_id) { + case USB_ID(0x045E, 0x075D): /* MS Lifecam Cinema */ + case USB_ID(0x045E, 0x076D): /* MS Lifecam HD-5000 */ ++ case USB_ID(0x045E, 0x076E): /* MS Lifecam HD-5001 */ + case USB_ID(0x045E, 0x076F): /* MS Lifecam HD-6000 */ + case USB_ID(0x045E, 0x0772): /* MS Lifecam Studio */ + case USB_ID(0x045E, 0x0779): /* MS Lifecam HD-3000 */ +diff --git a/tools/hv/Makefile b/tools/hv/Makefile +index bd22f786a60c..50715181d9a9 100644 +--- a/tools/hv/Makefile ++++ b/tools/hv/Makefile +@@ -5,9 +5,11 @@ PTHREAD_LIBS = -lpthread + WARNINGS = -Wall -Wextra + CFLAGS = $(WARNINGS) -g $(PTHREAD_LIBS) + +-all: hv_kvp_daemon hv_vss_daemon ++CFLAGS += -D__EXPORTED_HEADERS__ -I../../include/uapi 
-I../../include ++ ++all: hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon + %: %.c + $(CC) $(CFLAGS) -o $@ $^ + + clean: +- $(RM) hv_kvp_daemon hv_vss_daemon ++ $(RM) hv_kvp_daemon hv_vss_daemon hv_fcopy_daemon +diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c +index e243ad962a4d..bc7ca4ffcd2d 100644 +--- a/tools/perf/util/pmu.c ++++ b/tools/perf/util/pmu.c +@@ -221,13 +221,12 @@ static int pmu_aliases_parse(char *dir, struct list_head *head) + { + struct dirent *evt_ent; + DIR *event_dir; +- int ret = 0; + + event_dir = opendir(dir); + if (!event_dir) + return -EINVAL; + +- while (!ret && (evt_ent = readdir(event_dir))) { ++ while ((evt_ent = readdir(event_dir))) { + char path[PATH_MAX]; + char *name = evt_ent->d_name; + FILE *file; +@@ -243,17 +242,19 @@ static int pmu_aliases_parse(char *dir, struct list_head *head) + + snprintf(path, PATH_MAX, "%s/%s", dir, name); + +- ret = -EINVAL; + file = fopen(path, "r"); +- if (!file) +- break; ++ if (!file) { ++ pr_debug("Cannot open %s\n", path); ++ continue; ++ } + +- ret = perf_pmu__new_alias(head, dir, name, file); ++ if (perf_pmu__new_alias(head, dir, name, file) < 0) ++ pr_debug("Cannot set up %s\n", name); + fclose(file); + } + + closedir(event_dir); +- return ret; ++ return 0; + } + + /* +diff --git a/tools/testing/selftests/efivarfs/efivarfs.sh b/tools/testing/selftests/efivarfs/efivarfs.sh +index 77edcdcc016b..057278448515 100644 +--- a/tools/testing/selftests/efivarfs/efivarfs.sh ++++ b/tools/testing/selftests/efivarfs/efivarfs.sh +@@ -88,7 +88,11 @@ test_delete() + exit 1 + fi + +- rm $file ++ rm $file 2>/dev/null ++ if [ $? -ne 0 ]; then ++ chattr -i $file ++ rm $file ++ fi + + if [ -e $file ]; then + echo "$file couldn't be deleted" >&2 +@@ -111,6 +115,7 @@ test_zero_size_delete() + exit 1 + fi + ++ chattr -i $file + printf "$attrs" > $file + + if [ -e $file ]; then +@@ -141,7 +146,11 @@ test_valid_filenames() + echo "$file could not be created" >&2 + ret=1 + else +- rm $file ++ rm $file 2>/dev/null ++ if [ $? -ne 0 ]; then ++ chattr -i $file ++ rm $file ++ fi + fi + done + +@@ -174,7 +183,11 @@ test_invalid_filenames() + + if [ -e $file ]; then + echo "Creating $file should have failed" >&2 +- rm $file ++ rm $file 2>/dev/null ++ if [ $? 
-ne 0 ]; then ++ chattr -i $file ++ rm $file ++ fi + ret=1 + fi + done +diff --git a/tools/testing/selftests/efivarfs/open-unlink.c b/tools/testing/selftests/efivarfs/open-unlink.c +index 8c0764407b3c..4af74f733036 100644 +--- a/tools/testing/selftests/efivarfs/open-unlink.c ++++ b/tools/testing/selftests/efivarfs/open-unlink.c +@@ -1,10 +1,68 @@ ++#include + #include + #include + #include + #include ++#include + #include + #include + #include ++#include ++ ++static int set_immutable(const char *path, int immutable) ++{ ++ unsigned int flags; ++ int fd; ++ int rc; ++ int error; ++ ++ fd = open(path, O_RDONLY); ++ if (fd < 0) ++ return fd; ++ ++ rc = ioctl(fd, FS_IOC_GETFLAGS, &flags); ++ if (rc < 0) { ++ error = errno; ++ close(fd); ++ errno = error; ++ return rc; ++ } ++ ++ if (immutable) ++ flags |= FS_IMMUTABLE_FL; ++ else ++ flags &= ~FS_IMMUTABLE_FL; ++ ++ rc = ioctl(fd, FS_IOC_SETFLAGS, &flags); ++ error = errno; ++ close(fd); ++ errno = error; ++ return rc; ++} ++ ++static int get_immutable(const char *path) ++{ ++ unsigned int flags; ++ int fd; ++ int rc; ++ int error; ++ ++ fd = open(path, O_RDONLY); ++ if (fd < 0) ++ return fd; ++ ++ rc = ioctl(fd, FS_IOC_GETFLAGS, &flags); ++ if (rc < 0) { ++ error = errno; ++ close(fd); ++ errno = error; ++ return rc; ++ } ++ close(fd); ++ if (flags & FS_IMMUTABLE_FL) ++ return 1; ++ return 0; ++} + + int main(int argc, char **argv) + { +@@ -27,7 +85,7 @@ int main(int argc, char **argv) + buf[4] = 0; + + /* create a test variable */ +- fd = open(path, O_WRONLY | O_CREAT); ++ fd = open(path, O_WRONLY | O_CREAT, 0600); + if (fd < 0) { + perror("open(O_WRONLY)"); + return EXIT_FAILURE; +@@ -41,6 +99,18 @@ int main(int argc, char **argv) + + close(fd); + ++ rc = get_immutable(path); ++ if (rc < 0) { ++ perror("ioctl(FS_IOC_GETFLAGS)"); ++ return EXIT_FAILURE; ++ } else if (rc) { ++ rc = set_immutable(path, 0); ++ if (rc < 0) { ++ perror("ioctl(FS_IOC_SETFLAGS)"); ++ return EXIT_FAILURE; ++ } ++ } ++ + fd = open(path, O_RDONLY); + if (fd < 0) { + perror("open"); +diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c +index 329c3c91bb68..2c9d47fbc498 100644 +--- a/virt/kvm/kvm_main.c ++++ b/virt/kvm/kvm_main.c +@@ -460,6 +460,16 @@ static struct kvm *kvm_create_vm(unsigned long type) + if (!kvm) + return ERR_PTR(-ENOMEM); + ++ spin_lock_init(&kvm->mmu_lock); ++ atomic_inc(¤t->mm->mm_count); ++ kvm->mm = current->mm; ++ kvm_eventfd_init(kvm); ++ mutex_init(&kvm->lock); ++ mutex_init(&kvm->irq_lock); ++ mutex_init(&kvm->slots_lock); ++ atomic_set(&kvm->users_count, 1); ++ INIT_LIST_HEAD(&kvm->devices); ++ + r = kvm_arch_init_vm(kvm, type); + if (r) + goto out_err_no_disable; +@@ -500,16 +510,6 @@ static struct kvm *kvm_create_vm(unsigned long type) + goto out_err; + } + +- spin_lock_init(&kvm->mmu_lock); +- kvm->mm = current->mm; +- atomic_inc(&kvm->mm->mm_count); +- kvm_eventfd_init(kvm); +- mutex_init(&kvm->lock); +- mutex_init(&kvm->irq_lock); +- mutex_init(&kvm->slots_lock); +- atomic_set(&kvm->users_count, 1); +- INIT_LIST_HEAD(&kvm->devices); +- + r = kvm_init_mmu_notifier(kvm); + if (r) + goto out_err; +@@ -531,6 +531,7 @@ out_err_no_disable: + kfree(kvm->buses[i]); + kvfree(kvm->memslots); + kvm_arch_free_vm(kvm); ++ mmdrop(current->mm); + return ERR_PTR(r); + } +