From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: <gentoo-commits+bounces-914155-garchives=archives.gentoo.org@lists.gentoo.org>
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by finch.gentoo.org (Postfix) with ESMTPS id 53A7D1395E2 for <garchives@archives.gentoo.org>; Tue, 29 Nov 2016 17:45:33 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id A966721C06A; Tue, 29 Nov 2016 17:45:32 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id 6098221C06A for <gentoo-commits@lists.gentoo.org>; Tue, 29 Nov 2016 17:45:32 +0000 (UTC)
Received: from oystercatcher.gentoo.org (unknown [IPv6:2a01:4f8:202:4333:225:90ff:fed9:fc84]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id EE9D23414E9 for <gentoo-commits@lists.gentoo.org>; Tue, 29 Nov 2016 17:45:30 +0000 (UTC)
Received: from localhost.localdomain (localhost [127.0.0.1]) by oystercatcher.gentoo.org (Postfix) with ESMTP id C1F0449A for <gentoo-commits@lists.gentoo.org>; Tue, 29 Nov 2016 17:45:29 +0000 (UTC)
From: "Alice Ferrazzi" <alicef@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Alice Ferrazzi" <alicef@gentoo.org>
Message-ID: <1480441466.ba7372d114117fe69256e0716b71fbcf3df440c5.alicef@gentoo>
Subject: [gentoo-commits] proj/linux-patches:3.12 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1067_linux-3.12.68.patch
X-VCS-Directories: /
X-VCS-Committer: alicef
X-VCS-Committer-Name: Alice Ferrazzi
X-VCS-Revision: ba7372d114117fe69256e0716b71fbcf3df440c5
X-VCS-Branch: 3.12
Date: Tue, 29 Nov 2016 17:45:29 +0000 (UTC)
Precedence: bulk
List-Post: <mailto:gentoo-commits@lists.gentoo.org>
List-Help: <mailto:gentoo-commits+help@lists.gentoo.org>
List-Unsubscribe: <mailto:gentoo-commits+unsubscribe@lists.gentoo.org>
List-Subscribe: <mailto:gentoo-commits+subscribe@lists.gentoo.org>
List-Id: Gentoo Linux mail <gentoo-commits.gentoo.org>
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Archives-Salt: 0b952701-ba93-439b-b8d6-9b673f4e5a3f
X-Archives-Hash: 5b62a674418380e0fa1d441dfa6432e2

commit:     ba7372d114117fe69256e0716b71fbcf3df440c5
Author:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
AuthorDate: Tue Nov 29 17:44:26 2016 +0000
Commit:     Alice Ferrazzi <alicef <AT> gentoo <DOT> org>
CommitDate: Tue Nov 29 17:44:26 2016 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ba7372d1

Linux patch 3.12.68

 0000_README              |    4 +
 1067_linux-3.12.68.patch | 4739 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4743 insertions(+)

diff --git a/0000_README b/0000_README
index 1f30ddb..b783e9c 100644
--- a/0000_README
+++ b/0000_README
@@ -310,6 +310,10 @@
 Patch: 1066_linux-3.12.67.patch
 From: http://www.kernel.org
 Desc: Linux 3.12.67
 
+Patch: 1067_linux-3.12.68.patch
+From: http://www.kernel.org
+Desc: Linux 3.12.68
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
diff --git a/1067_linux-3.12.68.patch b/1067_linux-3.12.68.patch
new file mode 100644
index 0000000..b04202a
--- /dev/null
+++ b/1067_linux-3.12.68.patch
@@ -0,0 +1,4739 @@
+diff --git a/Makefile b/Makefile
+index 32dbd8513eee..6d86f39be8ce 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 3
+ PATCHLEVEL = 12
+-SUBLEVEL = 67
++SUBLEVEL = 68
+ EXTRAVERSION =
+ NAME = One Giant Leap for Frogkind
+ 
+@@ -378,11 +378,12 @@ KBUILD_CFLAGS := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
+ -Werror-implicit-function-declaration \
+ -Wno-format-security \
+ -fno-delete-null-pointer-checks \
+- -std=gnu89
++ -std=gnu89 $(call cc-option,-fno-PIE)
++
+ 
+ KBUILD_AFLAGS_KERNEL :=
+ KBUILD_CFLAGS_KERNEL :=
+-KBUILD_AFLAGS := -D__ASSEMBLY__
++KBUILD_AFLAGS := -D__ASSEMBLY__ $(call cc-option,-fno-PIE)
+ KBUILD_AFLAGS_MODULE := -DMODULE
+ KBUILD_CFLAGS_MODULE := -DMODULE
+ KBUILD_LDFLAGS_MODULE := -T $(srctree)/scripts/module-common.lds
+diff --git a/arch/arm/include/asm/floppy.h b/arch/arm/include/asm/floppy.h
+index c9f03eccc9d8..b5a0466db549 100644
+--- a/arch/arm/include/asm/floppy.h
++++ b/arch/arm/include/asm/floppy.h
+@@ -17,7 +17,7 @@
+ 
+ #define fd_outb(val,port) \
+ do { \
+- if ((port) == FD_DOR) \
++ if ((port) == (u32)FD_DOR) \
+ fd_setdor((val)); \
+ else \
+ outb((val),(port)); \
+diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
+index 883a162083af..05863e3ee2e7 100644
+--- a/arch/mips/include/asm/kvm_host.h
++++ b/arch/mips/include/asm/kvm_host.h
+@@ -375,7 +375,10 @@ struct kvm_vcpu_arch {
+ /* Host KSEG0 address of the EI/DI offset */
+ void *kseg0_commpage;
+ 
+- u32 io_gpr; /* GPR used as IO source/target */
++ /* Resume PC after MMIO completion */
++ unsigned long io_pc;
++ /* GPR used as IO source/target */
++ u32 io_gpr;
+ 
+ /* Used to calibrate the virutal count register for the guest */
+ int32_t host_cp0_count;
+@@ -386,8 +389,6 @@ struct kvm_vcpu_arch {
+ /* Bitmask of pending exceptions to be cleared */
+ unsigned long pending_exceptions_clr;
+ 
+- unsigned long pending_load_cause;
+-
+ /* Save/Restore the entryhi register when are are preempted/scheduled back in */
+ unsigned long preempt_entryhi;
+ 
+diff --git a/arch/mips/kvm/kvm_mips_emul.c b/arch/mips/kvm/kvm_mips_emul.c
+index 8ab9958767bb..716285497e0e 100644
+--- a/arch/mips/kvm/kvm_mips_emul.c
++++ b/arch/mips/kvm/kvm_mips_emul.c
+@@ -254,15 +254,15 @@ enum emulation_result kvm_mips_emul_eret(struct kvm_vcpu *vcpu)
+ struct mips_coproc *cop0 = vcpu->arch.cop0;
+ enum emulation_result er = EMULATE_DONE;
+ 
+- if (kvm_read_c0_guest_status(cop0) & ST0_EXL) {
++ if (kvm_read_c0_guest_status(cop0) & ST0_ERL) {
++ kvm_clear_c0_guest_status(cop0, ST0_ERL);
++ vcpu->arch.pc = kvm_read_c0_guest_errorepc(cop0);
++ } else if (kvm_read_c0_guest_status(cop0) & ST0_EXL) {
+ kvm_debug("[%#lx] ERET to %#lx\n", vcpu->arch.pc,
+ kvm_read_c0_guest_epc(cop0));
+ kvm_clear_c0_guest_status(cop0, ST0_EXL);
+ vcpu->arch.pc = kvm_read_c0_guest_epc(cop0);
+ 
+- } else if (kvm_read_c0_guest_status(cop0) & ST0_ERL) {
+- kvm_clear_c0_guest_status(cop0, ST0_ERL);
+- vcpu->arch.pc = kvm_read_c0_guest_errorepc(cop0);
+ } else {
+ printk("[%#lx] ERET when MIPS_SR_EXL|MIPS_SR_ERL == 0\n",
+ vcpu->arch.pc);
+@@ -325,7 +325,7 @@ static void kvm_mips_invalidate_guest_tlb(struct kvm_vcpu *vcpu,
+ bool user;
+ 
+ /* No need to flush for entries which are already invalid */
+- if (!((tlb->tlb_lo[0] | tlb->tlb_lo[1]) & ENTRYLO_V))
++ if (!((tlb->tlb_lo0 | tlb->tlb_lo1) & MIPS3_PG_V))
+ return;
+ /* User address space doesn't need flushing for KSeg2/3 changes */
+ user = tlb->tlb_hi < KVM_GUEST_KSEG0;
+@@ -372,10 +372,8 @@ enum emulation_result kvm_mips_emul_tlbwi(struct kvm_vcpu *vcpu)
+ }
+ 
+ tlb = &vcpu->arch.guest_tlb[index];
+-#if 1
+ 
+ kvm_mips_invalidate_guest_tlb(vcpu, tlb);
+-#endif
+ 
+ tlb->tlb_mask = kvm_read_c0_guest_pagemask(cop0);
+ tlb->tlb_hi = kvm_read_c0_guest_entryhi(cop0);
+@@ -414,9 +412,7 @@ enum emulation_result kvm_mips_emul_tlbwr(struct kvm_vcpu *vcpu)
+ 
+ tlb = &vcpu->arch.guest_tlb[index];
+ 
+-#if 1
+ kvm_mips_invalidate_guest_tlb(vcpu, tlb);
+-#endif
+ 
+ tlb->tlb_mask = kvm_read_c0_guest_pagemask(cop0);
+ tlb->tlb_hi = kvm_read_c0_guest_entryhi(cop0);
+@@ -822,6 +818,7 @@ kvm_mips_emulate_load(uint32_t inst, uint32_t cause,
+ struct kvm_run *run, struct kvm_vcpu *vcpu)
+ {
+ enum emulation_result er = EMULATE_DO_MMIO;
++ unsigned long curr_pc;
+ int32_t op, base, rt, offset;
+ uint32_t bytes;
+ 
+@@ -830,7 +827,18 @@ kvm_mips_emulate_load(uint32_t inst, uint32_t cause,
+ offset = inst & 0xffff;
+ op = (inst >> 26) & 0x3f;
+ 
+- vcpu->arch.pending_load_cause = cause;
++ /*
++ * Find the resume PC now while we have safe and easy access to the
++ * prior branch instruction, and save it for
++ * kvm_mips_complete_mmio_load() to restore later.
++ */
++ curr_pc = vcpu->arch.pc;
++ er = update_pc(vcpu, cause);
++ if (er == EMULATE_FAIL)
++ return er;
++ vcpu->arch.io_pc = vcpu->arch.pc;
++ vcpu->arch.pc = curr_pc;
++
+ vcpu->arch.io_gpr = rt;
+ 
+ switch (op) {
+@@ -1659,7 +1667,6 @@ kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run)
+ {
+ unsigned long *gpr = &vcpu->arch.gprs[vcpu->arch.io_gpr];
+ enum emulation_result er = EMULATE_DONE;
+- unsigned long curr_pc;
+ 
+ if (run->mmio.len > sizeof(*gpr)) {
+ printk("Bad MMIO length: %d", run->mmio.len);
+@@ -1667,14 +1674,8 @@ kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run)
+ goto done;
+ }
+ 
+- /*
+- * Update PC and hold onto current PC in case there is
+- * an error and we want to rollback the PC
+- */
+- curr_pc = vcpu->arch.pc;
+- er = update_pc(vcpu, vcpu->arch.pending_load_cause);
+- if (er == EMULATE_FAIL)
+- return er;
++ /* Restore saved resume PC */
++ vcpu->arch.pc = vcpu->arch.io_pc;
+ 
+ switch (run->mmio.len) {
+ case 4:
+@@ -1696,12 +1697,6 @@ kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu, struct kvm_run *run)
+ break;
+ }
+ 
+- if (vcpu->arch.pending_load_cause & CAUSEF_BD)
+- kvm_debug
+- ("[%#lx] Completing %d byte BD Load to gpr %d (0x%08lx) type %d\n",
+- vcpu->arch.pc, run->mmio.len, vcpu->arch.io_gpr, *gpr,
+- vcpu->mmio_needed);
+-
+ done:
+ return er;
+ }
+diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
+index e205ef598e97..c247cf5a31cb 100644
+--- a/arch/mips/mm/init.c
++++ b/arch/mips/mm/init.c
+@@ -74,6 +74,7 @@
+ */
+ unsigned long empty_zero_page, zero_page_mask;
+ EXPORT_SYMBOL_GPL(empty_zero_page);
++EXPORT_SYMBOL(zero_page_mask);
+ 
+ /*
+ * Not static inline because used by IP27 special magic initialization code
+diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
+index e767ab733e32..69caa82c50d3 100644
+--- a/arch/parisc/kernel/syscall.S
++++ b/arch/parisc/kernel/syscall.S
+@@ -106,8 +106,6 @@ linux_gateway_entry:
+ mtsp %r0,%sr4 /* get kernel space into sr4 */
+ mtsp %r0,%sr5 /* get kernel space into sr5 */
+ mtsp %r0,%sr6 /* get kernel space into sr6 */
+- mfsp %sr7,%r1 /* save user sr7 */
+- mtsp %r1,%sr3 /* and store it in sr3 */
+ 
+ #ifdef CONFIG_64BIT
+ /* for now we can *always* set the W bit on entry to the syscall
+@@ -133,6 +131,14 @@ linux_gateway_entry:
+ depdi 0, 31, 32, %r21
+ 1:
+ #endif
++
++ /* We use a rsm/ssm pair to prevent sr3 from being clobbered
++ * by external interrupts.
++ */
++ mfsp %sr7,%r1 /* save user sr7 */
++ rsm PSW_SM_I, %r0 /* disable interrupts */
++ mtsp %r1,%sr3 /* and store it in sr3 */
++
+ mfctl %cr30,%r1
+ xor %r1,%r30,%r30 /* ye olde xor trick */
+ xor %r1,%r30,%r1
+@@ -147,6 +153,7 @@
+ */
+ 
+ mtsp %r0,%sr7 /* get kernel space into sr7 */
++ ssm PSW_SM_I, %r0 /* enable interrupts */
+ STREGM %r1,FRAME_SIZE(%r30) /* save r1 (usp) here for now */
+ mfctl %cr30,%r1 /* get task ptr in %r1 */
+ LDREG TI_TASK(%r1),%r1
+diff --git a/arch/s390/hypfs/hypfs_diag.c b/arch/s390/hypfs/hypfs_diag.c
+index 5eeffeefae06..d73124df5d32 100644
+--- a/arch/s390/hypfs/hypfs_diag.c
++++ b/arch/s390/hypfs/hypfs_diag.c
+@@ -517,11 +517,11 @@ static int diag224(void *ptr)
+ static int diag224_get_name_table(void)
+ {
+ /* memory must be below 2GB */
+- diag224_cpu_names = kmalloc(PAGE_SIZE, GFP_KERNEL | GFP_DMA);
++ diag224_cpu_names = (char *) __get_free_page(GFP_KERNEL | GFP_DMA);
+ if (!diag224_cpu_names)
+ return -ENOMEM;
+ if (diag224(diag224_cpu_names)) {
+- kfree(diag224_cpu_names);
++ free_page((unsigned long) diag224_cpu_names);
+ return -EOPNOTSUPP;
+ }
+ EBCASC(diag224_cpu_names + 16, (*diag224_cpu_names + 1) * 16);
+@@ -530,7 +530,7 @@ static int diag224_get_name_table(void)
+ 
+ static void diag224_delete_name_table(void)
+ {
+- kfree(diag224_cpu_names);
++ free_page((unsigned long) diag224_cpu_names);
+ }
+ 
+ static int diag224_idx2name(int index, char *name)
+diff --git a/arch/s390/mm/init.c b/arch/s390/mm/init.c
+index ad446b0c55b6..1b30d5488f82 100644
+--- a/arch/s390/mm/init.c
++++ b/arch/s390/mm/init.c
+@@ -43,6 +43,7 @@ pgd_t swapper_pg_dir[PTRS_PER_PGD] __attribute__((__aligned__(PAGE_SIZE)));
+ 
+ unsigned long empty_zero_page, zero_page_mask;
+ EXPORT_SYMBOL(empty_zero_page);
++EXPORT_SYMBOL(zero_page_mask);
+ 
+ static void __init setup_zero_pages(void)
+ {
+diff --git a/arch/sparc/include/asm/mmu_64.h b/arch/sparc/include/asm/mmu_64.h
+index f668797ae234..4994815fccc7 100644
+--- a/arch/sparc/include/asm/mmu_64.h
++++ b/arch/sparc/include/asm/mmu_64.h
+@@ -92,7 +92,8 @@ struct tsb_config {
+ typedef struct {
+ spinlock_t lock;
+ unsigned long sparc64_ctx_val;
+- unsigned long huge_pte_count;
++ unsigned long hugetlb_pte_count;
++ unsigned long thp_pte_count;
+ struct tsb_config tsb_block[MM_NUM_TSBS];
+ struct hv_tsb_descr tsb_descr[MM_NUM_TSBS];
+ } mm_context_t;
+diff --git a/arch/sparc/kernel/dtlb_prot.S b/arch/sparc/kernel/dtlb_prot.S
+index d668ca149e64..4087a62f96b0 100644
+--- a/arch/sparc/kernel/dtlb_prot.S
++++ b/arch/sparc/kernel/dtlb_prot.S
+@@ -25,13 +25,13 @@
+ 
+ /* PROT ** ICACHE line 2: More real fault processing */
+ ldxa [%g4] ASI_DMMU, %g5 ! Put tagaccess in %g5
++ srlx %g5, PAGE_SHIFT, %g5
++ sllx %g5, PAGE_SHIFT, %g5 ! Clear context ID bits
+ bgu,pn %xcc, winfix_trampoline ! Yes, perform winfixup
+ mov FAULT_CODE_DTLB | FAULT_CODE_WRITE, %g4
+ ba,pt %xcc, sparc64_realfault_common ! Nope, normal fault
+ nop
+ nop
+- nop
+- nop
+ 
+ /* PROT ** ICACHE line 3: Unused... */
+ nop
+diff --git a/arch/sparc/kernel/jump_label.c b/arch/sparc/kernel/jump_label.c
+index 48565c11e82a..6d0dacb5812d 100644
+--- a/arch/sparc/kernel/jump_label.c
++++ b/arch/sparc/kernel/jump_label.c
+@@ -13,19 +13,30 @@
+ void arch_jump_label_transform(struct jump_entry *entry,
+ enum jump_label_type type)
+ {
+- u32 val;
+ u32 *insn = (u32 *) (unsigned long) entry->code;
++ u32 val;
+ 
+ if (type == JUMP_LABEL_ENABLE) {
+ s32 off = (s32)entry->target - (s32)entry->code;
++ bool use_v9_branch = false;
++
++ BUG_ON(off & 3);
+ 
+ #ifdef CONFIG_SPARC64
+- /* ba,pt %xcc, . + (off << 2) */
+- val = 0x10680000 | ((u32) off >> 2);
+-#else
+- /* ba . + (off << 2) */
+- val = 0x10800000 | ((u32) off >> 2);
++ if (off <= 0xfffff && off >= -0x100000)
++ use_v9_branch = true;
+ #endif
++ if (use_v9_branch) {
++ /* WDISP19 - target is . + immed << 2 */
++ /* ba,pt %xcc, . + off */
++ val = 0x10680000 | (((u32) off >> 2) & 0x7ffff);
++ } else {
++ /* WDISP22 - target is . + immed << 2 */
++ BUG_ON(off > 0x7fffff);
++ BUG_ON(off < -0x800000);
++ /* ba . + off */
++ val = 0x10800000 | (((u32) off >> 2) & 0x3fffff);
++ }
+ } else {
+ val = 0x01000000;
+ }
+diff --git a/arch/sparc/kernel/ktlb.S b/arch/sparc/kernel/ktlb.S
+index ef0d8e9e1210..f22bec0db645 100644
+--- a/arch/sparc/kernel/ktlb.S
++++ b/arch/sparc/kernel/ktlb.S
+@@ -20,6 +20,10 @@ kvmap_itlb:
+ mov TLB_TAG_ACCESS, %g4
+ ldxa [%g4] ASI_IMMU, %g4
+ 
++ /* The kernel executes in context zero, therefore we do not
++ * need to clear the context ID bits out of %g4 here.
++ */
++
+ /* sun4v_itlb_miss branches here with the missing virtual
+ * address already loaded into %g4
+ */
+@@ -128,6 +132,10 @@ kvmap_dtlb:
+ mov TLB_TAG_ACCESS, %g4
+ ldxa [%g4] ASI_DMMU, %g4
+ 
++ /* The kernel executes in context zero, therefore we do not
++ * need to clear the context ID bits out of %g4 here.
++ */
++
+ /* sun4v_dtlb_miss branches here with the missing virtual
+ * address already loaded into %g4
+ */
+@@ -251,6 +259,10 @@ kvmap_dtlb_longpath:
+ nop
+ .previous
+ 
++ /* The kernel executes in context zero, therefore we do not
++ * need to clear the context ID bits out of %g5 here.
++ */
++
+ be,pt %xcc, sparc64_realfault_common
+ mov FAULT_CODE_DTLB, %g4
+ ba,pt %xcc, winfix_trampoline
+diff --git a/arch/sparc/kernel/tsb.S b/arch/sparc/kernel/tsb.S
+index be98685c14c6..d568c8207af7 100644
+--- a/arch/sparc/kernel/tsb.S
++++ b/arch/sparc/kernel/tsb.S
+@@ -29,13 +29,17 @@
+ */
+ tsb_miss_dtlb:
+ mov TLB_TAG_ACCESS, %g4
++ ldxa [%g4] ASI_DMMU, %g4
++ srlx %g4, PAGE_SHIFT, %g4
+ ba,pt %xcc, tsb_miss_page_table_walk
+- ldxa [%g4] ASI_DMMU, %g4
++ sllx %g4, PAGE_SHIFT, %g4
+ 
+ tsb_miss_itlb:
+ mov TLB_TAG_ACCESS, %g4
++ ldxa [%g4] ASI_IMMU, %g4
++ srlx %g4, PAGE_SHIFT, %g4
+ ba,pt %xcc, tsb_miss_page_table_walk
+- ldxa [%g4] ASI_IMMU, %g4
++ sllx %g4, PAGE_SHIFT, %g4
+ 
+ /* At this point we have:
+ * %g1 -- PAGE_SIZE TSB entry address
+@@ -284,6 +288,10 @@ tsb_do_dtlb_fault:
+ nop
+ .previous
+ 
++ /* Clear context ID bits. */
++ srlx %g5, PAGE_SHIFT, %g5
++ sllx %g5, PAGE_SHIFT, %g5
++
+ be,pt %xcc, sparc64_realfault_common
+ mov FAULT_CODE_DTLB, %g4
+ ba,pt %xcc, winfix_trampoline
+diff --git a/arch/sparc/mm/fault_64.c b/arch/sparc/mm/fault_64.c
+index c7009d7762b1..a21917c8f44f 100644
+--- a/arch/sparc/mm/fault_64.c
++++ b/arch/sparc/mm/fault_64.c
+@@ -478,14 +478,14 @@ good_area:
+ up_read(&mm->mmap_sem);
+ 
+ mm_rss = get_mm_rss(mm);
+-#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
+- mm_rss -= (mm->context.huge_pte_count * (HPAGE_SIZE / PAGE_SIZE));
++#if defined(CONFIG_TRANSPARENT_HUGEPAGE)
++ mm_rss -= (mm->context.thp_pte_count * (HPAGE_SIZE / PAGE_SIZE));
+ #endif
+ if (unlikely(mm_rss >
+ mm->context.tsb_block[MM_TSB_BASE].tsb_rss_limit))
+ tsb_grow(mm, MM_TSB_BASE, mm_rss);
+ #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
+- mm_rss = mm->context.huge_pte_count;
++ mm_rss = mm->context.hugetlb_pte_count + mm->context.thp_pte_count;
+ if (unlikely(mm_rss >
+ mm->context.tsb_block[MM_TSB_HUGE].tsb_rss_limit)) {
+ if (mm->context.tsb_block[MM_TSB_HUGE].tsb)
+diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
+index d941cd024f22..387ae1e9b462 100644
+--- a/arch/sparc/mm/hugetlbpage.c
++++ b/arch/sparc/mm/hugetlbpage.c
+@@ -184,7 +184,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+ int i;
+ 
+ if (!pte_present(*ptep) && pte_present(entry))
+- mm->context.huge_pte_count++;
++ mm->context.hugetlb_pte_count++;
+ 
+ addr &= HPAGE_MASK;
+ for (i = 0; i < (1 << HUGETLB_PAGE_ORDER); i++) {
+@@ -203,7 +203,7 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+ 
+ entry = *ptep;
+ if (pte_present(entry))
+- mm->context.huge_pte_count--;
++ mm->context.hugetlb_pte_count--;
+ 
+ addr &= HPAGE_MASK;
+ 
+diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
+index 9633e0706d6e..4650a3840305 100644
+--- a/arch/sparc/mm/init_64.c
++++ b/arch/sparc/mm/init_64.c
+@@ -353,7 +353,8 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *
+ spin_lock_irqsave(&mm->context.lock, flags);
+ 
+ #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
+- if (mm->context.huge_pte_count && is_hugetlb_pte(pte))
++ if ((mm->context.hugetlb_pte_count || mm->context.thp_pte_count) &&
++ is_hugetlb_pte(pte))
+ __update_mmu_tsb_insert(mm, MM_TSB_HUGE, REAL_HPAGE_SHIFT,
+ address, pte_val(pte));
+ else
+diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
+index c24d0aa2b615..56b820924b07 100644
+--- a/arch/sparc/mm/tlb.c
++++ b/arch/sparc/mm/tlb.c
+@@ -166,9 +166,9 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
+ 
+ if ((pmd_val(pmd) ^ pmd_val(orig)) & _PAGE_PMD_HUGE) {
+ if (pmd_val(pmd) & _PAGE_PMD_HUGE)
+- mm->context.huge_pte_count++;
++ mm->context.thp_pte_count++;
+ else
+- mm->context.huge_pte_count--;
++ mm->context.thp_pte_count--;
+ 
+ /* Do not try to allocate the TSB hash table if we
+ * don't have one already. We have various locks held
+diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
+index 10a69f47745a..48a09e48d444 100644
+--- a/arch/sparc/mm/tsb.c
++++ b/arch/sparc/mm/tsb.c
+@@ -26,6 +26,20 @@ static inline int tag_compare(unsigned long tag, unsigned long vaddr)
+ return (tag == (vaddr >> 22));
+ }
+ 
++static void flush_tsb_kernel_range_scan(unsigned long start, unsigned long end)
++{
++ unsigned long idx;
++
++ for (idx = 0; idx < KERNEL_TSB_NENTRIES; idx++) {
++ struct tsb *ent = &swapper_tsb[idx];
++ unsigned long match = idx << 13;
++
++ match |= (ent->tag << 22);
++ if (match >= start && match < end)
++ ent->tag = (1UL << TSB_TAG_INVALID_BIT);
++ }
++}
++
+ /* TSB flushes need only occur on the processor initiating the address
+ * space modification, not on each cpu the address space has run on.
+ * Only the TLB flush needs that treatment.
+@@ -35,6 +49,9 @@ void flush_tsb_kernel_range(unsigned long start, unsigned long end)
+ {
+ unsigned long v;
+ 
++ if ((end - start) >> PAGE_SHIFT >= 2 * KERNEL_TSB_NENTRIES)
++ return flush_tsb_kernel_range_scan(start, end);
++
+ for (v = start; v < end; v += PAGE_SIZE) {
+ unsigned long hash = tsb_hash(v, PAGE_SHIFT,
+ KERNEL_TSB_NENTRIES);
+@@ -467,7 +484,7 @@ retry_tsb_alloc:
+ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+ {
+ #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
+- unsigned long huge_pte_count;
++ unsigned long total_huge_pte_count;
+ #endif
+ unsigned int i;
+ 
+@@ -476,12 +493,14 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+ mm->context.sparc64_ctx_val = 0UL;
+ 
+ #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
+- /* We reset it to zero because the fork() page copying
++ /* We reset them to zero because the fork() page copying
+ * will re-increment the counters as the parent PTEs are
+ * copied into the child address space.
+ */
+- huge_pte_count = mm->context.huge_pte_count;
+- mm->context.huge_pte_count = 0;
++ total_huge_pte_count = mm->context.hugetlb_pte_count +
++ mm->context.thp_pte_count;
++ mm->context.hugetlb_pte_count = 0;
++ mm->context.thp_pte_count = 0;
+ #endif
+ 
+ /* copy_mm() copies over the parent's mm_struct before calling
+@@ -497,8 +516,8 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
+ tsb_grow(mm, MM_TSB_BASE, get_mm_rss(mm));
+ 
+ #if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
+- if (unlikely(huge_pte_count))
+- tsb_grow(mm, MM_TSB_HUGE, huge_pte_count);
++ if (unlikely(total_huge_pte_count))
++ tsb_grow(mm, MM_TSB_HUGE, total_huge_pte_count);
+ #endif
+ 
+ if (unlikely(!mm->context.tsb_block[MM_TSB_BASE].tsb))
+diff --git a/arch/sparc/mm/ultra.S b/arch/sparc/mm/ultra.S
+index b4f4733abc6e..5d2fd6cd3189 100644
+--- a/arch/sparc/mm/ultra.S
++++ b/arch/sparc/mm/ultra.S
+@@ -30,7 +30,7 @@
+ .text
+ .align 32
+ .globl __flush_tlb_mm
+-__flush_tlb_mm: /* 18 insns */
++__flush_tlb_mm: /* 19 insns */
+ /* %o0=(ctx & TAG_CONTEXT_BITS), %o1=SECONDARY_CONTEXT */
+ ldxa [%o1] ASI_DMMU, %g2
+ cmp %g2, %o0
+@@ -81,7 +81,7 @@ __flush_tlb_page: /* 22 insns */
+ 
+ .align 32
+ .globl __flush_tlb_pending
+-__flush_tlb_pending: /* 26 insns */
++__flush_tlb_pending: /* 27 insns */
+ /* %o0 = context, %o1 = nr, %o2 = vaddrs[] */
+ rdpr %pstate, %g7
+ sllx %o1, 3, %o1
+@@ -113,12 +113,14 @@ __flush_tlb_pending: /* 26 insns */
+ 
+ .align 32
+ .globl __flush_tlb_kernel_range
+-__flush_tlb_kernel_range: /* 16 insns */
++__flush_tlb_kernel_range: /* 31 insns */
+ /* %o0=start, %o1=end */
+ cmp %o0, %o1
+ be,pn %xcc, 2f
++ sub %o1, %o0, %o3
++ srlx %o3, 18, %o4
++ brnz,pn %o4, __spitfire_flush_tlb_kernel_range_slow
+ sethi %hi(PAGE_SIZE), %o4
+- sub %o1, %o0, %o3
+ sub %o3, %o4, %o3
+ or %o0, 0x20, %o0 ! Nucleus
+ 1: stxa %g0, [%o0 + %o3] ASI_DMMU_DEMAP
+@@ -131,6 +133,41 @@ __flush_tlb_kernel_range: /* 16 insns */
+ retl
+ nop
+ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++
++__spitfire_flush_tlb_kernel_range_slow:
++ mov 63 * 8, %o4
++1: ldxa [%o4] ASI_ITLB_DATA_ACCESS, %o3
++ andcc %o3, 0x40, %g0 /* _PAGE_L_4U */
++ bne,pn %xcc, 2f
++ mov TLB_TAG_ACCESS, %o3
++ stxa %g0, [%o3] ASI_IMMU
++ stxa %g0, [%o4] ASI_ITLB_DATA_ACCESS
++ membar #Sync
++2: ldxa [%o4] ASI_DTLB_DATA_ACCESS, %o3
++ andcc %o3, 0x40, %g0
++ bne,pn %xcc, 2f
++ mov TLB_TAG_ACCESS, %o3
++ stxa %g0, [%o3] ASI_DMMU
++ stxa %g0, [%o4] ASI_DTLB_DATA_ACCESS
++ membar #Sync
++2: sub %o4, 8, %o4
++ brgez,pt %o4, 1b
++ nop
++ retl
++ nop
+ 
+ __spitfire_flush_tlb_mm_slow:
+ rdpr %pstate, %g1
+@@ -285,6 +322,40 @@ __cheetah_flush_tlb_pending: /* 27 insns */
+ retl
+ wrpr %g7, 0x0, %pstate
+ 
++__cheetah_flush_tlb_kernel_range: /* 31 insns */
++ /* %o0=start, %o1=end */
++ cmp %o0, %o1
++ be,pn %xcc, 2f
++ sub %o1, %o0, %o3
++ srlx %o3, 18, %o4
++ brnz,pn %o4, 3f
++ sethi %hi(PAGE_SIZE), %o4
++ sub %o3, %o4, %o3
++ or %o0, 0x20, %o0 ! Nucleus
++1: stxa %g0, [%o0 + %o3] ASI_DMMU_DEMAP
++ stxa %g0, [%o0 + %o3] ASI_IMMU_DEMAP
++ membar #Sync
++ brnz,pt %o3, 1b
++ sub %o3, %o4, %o3
++2: sethi %hi(KERNBASE), %o3
++ flush %o3
++ retl
++ nop
++3: mov 0x80, %o4
++ stxa %g0, [%o4] ASI_DMMU_DEMAP
++ membar #Sync
++ stxa %g0, [%o4] ASI_IMMU_DEMAP
++ membar #Sync
++ retl
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++
+ #ifdef DCACHE_ALIASING_POSSIBLE
+ __cheetah_flush_dcache_page: /* 11 insns */
+ sethi %hi(PAGE_OFFSET), %g1
+@@ -309,19 +380,28 @@ __hypervisor_tlb_tl0_error:
+ ret
+ restore
+ 
+-__hypervisor_flush_tlb_mm: /* 10 insns */
++__hypervisor_flush_tlb_mm: /* 19 insns */
+ mov %o0, %o2 /* ARG2: mmu context */
+ mov 0, %o0 /* ARG0: CPU lists unimplemented */
+ mov 0, %o1 /* ARG1: CPU lists unimplemented */
+ mov HV_MMU_ALL, %o3 /* ARG3: flags */
+ mov HV_FAST_MMU_DEMAP_CTX, %o5
+ ta HV_FAST_TRAP
+- brnz,pn %o0, __hypervisor_tlb_tl0_error
++ brnz,pn %o0, 1f
+ mov HV_FAST_MMU_DEMAP_CTX, %o1
+ retl
+ nop
++1: sethi %hi(__hypervisor_tlb_tl0_error), %o5
++ jmpl %o5 + %lo(__hypervisor_tlb_tl0_error), %g0
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
+ 
+-__hypervisor_flush_tlb_page: /* 11 insns */
++__hypervisor_flush_tlb_page: /* 22 insns */
+ /* %o0 = context, %o1 = vaddr */
+ mov %o0, %g2
+ mov %o1, %o0 /* ARG0: vaddr + IMMU-bit */
+@@ -330,12 +410,23 @@ __hypervisor_flush_tlb_page: /* 11 insns */
+ srlx %o0, PAGE_SHIFT, %o0
+ sllx %o0, PAGE_SHIFT, %o0
+ ta HV_MMU_UNMAP_ADDR_TRAP
+- brnz,pn %o0, __hypervisor_tlb_tl0_error
++ brnz,pn %o0, 1f
+ mov HV_MMU_UNMAP_ADDR_TRAP, %o1
+ retl
+ nop
++1: sethi %hi(__hypervisor_tlb_tl0_error), %o2
++ jmpl %o2 + %lo(__hypervisor_tlb_tl0_error), %g0
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
+ 
+-__hypervisor_flush_tlb_pending: /* 16 insns */
++__hypervisor_flush_tlb_pending: /* 27 insns */
+ /* %o0 = context, %o1 = nr, %o2 = vaddrs[] */
+ sllx %o1, 3, %g1
+ mov %o2, %g2
+@@ -347,31 +438,57 @@ __hypervisor_flush_tlb_pending: /* 16 insns */
+ srlx %o0, PAGE_SHIFT, %o0
+ sllx %o0, PAGE_SHIFT, %o0
+ ta HV_MMU_UNMAP_ADDR_TRAP
+- brnz,pn %o0, __hypervisor_tlb_tl0_error
++ brnz,pn %o0, 1f
+ mov HV_MMU_UNMAP_ADDR_TRAP, %o1
+ brnz,pt %g1, 1b
+ nop
+ retl
+ nop
++1: sethi %hi(__hypervisor_tlb_tl0_error), %o2
++ jmpl %o2 + %lo(__hypervisor_tlb_tl0_error), %g0
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
+ 
+-__hypervisor_flush_tlb_kernel_range: /* 16 insns */
++__hypervisor_flush_tlb_kernel_range: /* 31 insns */
+ /* %o0=start, %o1=end */
+ cmp %o0, %o1
+ be,pn %xcc, 2f
+- sethi %hi(PAGE_SIZE), %g3
+- mov %o0, %g1
+- sub %o1, %g1, %g2
++ sub %o1, %o0, %g2
++ srlx %g2, 18, %g3
++ brnz,pn %g3, 4f
++ mov %o0, %g1
++ sethi %hi(PAGE_SIZE), %g3
+ sub %g2, %g3, %g2
+ 1: add %g1, %g2, %o0 /* ARG0: virtual address */
+ mov 0, %o1 /* ARG1: mmu context */
+ mov HV_MMU_ALL, %o2 /* ARG2: flags */
+ ta HV_MMU_UNMAP_ADDR_TRAP
+- brnz,pn %o0, __hypervisor_tlb_tl0_error
++ brnz,pn %o0, 3f
+ mov HV_MMU_UNMAP_ADDR_TRAP, %o1
+ brnz,pt %g2, 1b
+ sub %g2, %g3, %g2
+ 2: retl
+ nop
++3: sethi %hi(__hypervisor_tlb_tl0_error), %o2
++ jmpl %o2 + %lo(__hypervisor_tlb_tl0_error), %g0
++ nop
++4: mov 0, %o0 /* ARG0: CPU lists unimplemented */
++ mov 0, %o1 /* ARG1: CPU lists unimplemented */
++ mov 0, %o2 /* ARG2: mmu context == nucleus */
++ mov HV_MMU_ALL, %o3 /* ARG3: flags */
++ mov HV_FAST_MMU_DEMAP_CTX, %o5
++ ta HV_FAST_TRAP
++ brnz,pn %o0, 3b
++ mov HV_FAST_MMU_DEMAP_CTX, %o1
++ retl
++ nop
+ 
+ #ifdef DCACHE_ALIASING_POSSIBLE
+ /* XXX Niagara and friends have an 8K cache, so no aliasing is
+@@ -394,43 +511,6 @@ tlb_patch_one:
+ retl
+ nop
+ 
+- .globl cheetah_patch_cachetlbops
+-cheetah_patch_cachetlbops:
+- save %sp, -128, %sp
+-
+- sethi %hi(__flush_tlb_mm), %o0
+- or %o0, %lo(__flush_tlb_mm), %o0
+- sethi %hi(__cheetah_flush_tlb_mm), %o1
+- or %o1, %lo(__cheetah_flush_tlb_mm), %o1
+- call tlb_patch_one
+- mov 19, %o2
+-
+- sethi %hi(__flush_tlb_page), %o0
+- or %o0, %lo(__flush_tlb_page), %o0
+- sethi %hi(__cheetah_flush_tlb_page), %o1
+- or %o1, %lo(__cheetah_flush_tlb_page), %o1
+- call tlb_patch_one
+- mov 22, %o2
+-
+- sethi %hi(__flush_tlb_pending), %o0
+- or %o0, %lo(__flush_tlb_pending), %o0
+- sethi %hi(__cheetah_flush_tlb_pending), %o1
+- or %o1, %lo(__cheetah_flush_tlb_pending), %o1
+- call tlb_patch_one
+- mov 27, %o2
+-
+-#ifdef DCACHE_ALIASING_POSSIBLE
+- sethi %hi(__flush_dcache_page), %o0
+- or %o0, %lo(__flush_dcache_page), %o0
+- sethi %hi(__cheetah_flush_dcache_page), %o1
+- or %o1, %lo(__cheetah_flush_dcache_page), %o1
+- call tlb_patch_one
+- mov 11, %o2
+-#endif /* DCACHE_ALIASING_POSSIBLE */
+-
+- ret
+- restore
+-
+ #ifdef CONFIG_SMP
+ /* These are all called by the slaves of a cross call, at
+ * trap level 1, with interrupts fully disabled.
+@@ -447,7 +527,7 @@ cheetah_patch_cachetlbops:
+ */
+ .align 32
+ .globl xcall_flush_tlb_mm
+-xcall_flush_tlb_mm: /* 21 insns */
++xcall_flush_tlb_mm: /* 24 insns */
+ mov PRIMARY_CONTEXT, %g2
+ ldxa [%g2] ASI_DMMU, %g3
+ srlx %g3, CTX_PGSZ1_NUC_SHIFT, %g4
+@@ -469,9 +549,12 @@ xcall_flush_tlb_mm: /* 21 insns */
+ nop
+ nop
+ nop
++ nop
++ nop
++ nop
+ 
+ .globl xcall_flush_tlb_page
+-xcall_flush_tlb_page: /* 17 insns */
++xcall_flush_tlb_page: /* 20 insns */
+ /* %g5=context, %g1=vaddr */
+ mov PRIMARY_CONTEXT, %g4
+ ldxa [%g4] ASI_DMMU, %g2
+@@ -490,15 +573,20 @@ xcall_flush_tlb_page: /* 17 insns */
+ retry
+ nop
+ nop
++ nop
++ nop
++ nop
+ 
+ .globl xcall_flush_tlb_kernel_range
+-xcall_flush_tlb_kernel_range: /* 25 insns */
++xcall_flush_tlb_kernel_range: /* 44 insns */
+ sethi %hi(PAGE_SIZE - 1), %g2
+ or %g2, %lo(PAGE_SIZE - 1), %g2
+ andn %g1, %g2, %g1
+ andn %g7, %g2, %g7
+ sub %g7, %g1, %g3
+- add %g2, 1, %g2
++ srlx %g3, 18, %g2
++ brnz,pn %g2, 2f
++ add %g2, 1, %g2
+ sub %g3, %g2, %g3
+ or %g1, 0x20, %g1 ! Nucleus
+ 1: stxa %g0, [%g1 + %g3] ASI_DMMU_DEMAP
+@@ -507,8 +595,25 @@ xcall_flush_tlb_kernel_range: /* 25 insns */
+ brnz,pt %g3, 1b
+ sub %g3, %g2, %g3
+ retry
+- nop
+- nop
++2: mov 63 * 8, %g1
++1: ldxa [%g1] ASI_ITLB_DATA_ACCESS, %g2
++ andcc %g2, 0x40, %g0 /* _PAGE_L_4U */
++ bne,pn %xcc, 2f
++ mov TLB_TAG_ACCESS, %g2
++ stxa %g0, [%g2] ASI_IMMU
++ stxa %g0, [%g1] ASI_ITLB_DATA_ACCESS
++ membar #Sync
++2: ldxa [%g1] ASI_DTLB_DATA_ACCESS, %g2
++ andcc %g2, 0x40, %g0
++ bne,pn %xcc, 2f
++ mov TLB_TAG_ACCESS, %g2
++ stxa %g0, [%g2] ASI_DMMU
++ stxa %g0, [%g1] ASI_DTLB_DATA_ACCESS
++ membar #Sync
++2: sub %g1, 8, %g1
++ brgez,pt %g1, 1b
++ nop
++ retry
+ nop
+ nop
+ nop
+@@ -637,6 +742,52 @@ xcall_fetch_glob_pmu_n4:
+ 
+ retry
+ 
++__cheetah_xcall_flush_tlb_kernel_range: /* 44 insns */
++ sethi %hi(PAGE_SIZE - 1), %g2
++ or %g2, %lo(PAGE_SIZE - 1), %g2
++ andn %g1, %g2, %g1
++ andn %g7, %g2, %g7
++ sub %g7, %g1, %g3
++ srlx %g3, 18, %g2
++ brnz,pn %g2, 2f
++ add %g2, 1, %g2
++ sub %g3, %g2, %g3
++ or %g1, 0x20, %g1 ! Nucleus
++1: stxa %g0, [%g1 + %g3] ASI_DMMU_DEMAP
++ stxa %g0, [%g1 + %g3] ASI_IMMU_DEMAP
++ membar #Sync
++ brnz,pt %g3, 1b
++ sub %g3, %g2, %g3
++ retry
++2: mov 0x80, %g2
++ stxa %g0, [%g2] ASI_DMMU_DEMAP
++ membar #Sync
++ stxa %g0, [%g2] ASI_IMMU_DEMAP
++ membar #Sync
++ retry
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++ nop
++
+ #ifdef DCACHE_ALIASING_POSSIBLE
+ .align 32
+ .globl xcall_flush_dcache_page_cheetah
+@@ -700,7 +851,7 @@ __hypervisor_tlb_xcall_error:
+ ba,a,pt %xcc, rtrap
+ 
+ .globl __hypervisor_xcall_flush_tlb_mm
+-__hypervisor_xcall_flush_tlb_mm: /* 21 insns */
++__hypervisor_xcall_flush_tlb_mm: /* 24 insns */
+ /* %g5=ctx, g1,g2,g3,g4,g7=scratch, %g6=unusable */
+ mov %o0, %g2
+ mov %o1, %g3
+@@ -714,7 +865,7 @@ __hypervisor_xcall_flush_tlb_mm: /* 21 insns */
+ mov HV_FAST_MMU_DEMAP_CTX, %o5
+ ta HV_FAST_TRAP
+ mov HV_FAST_MMU_DEMAP_CTX, %g6
+- brnz,pn %o0, __hypervisor_tlb_xcall_error
++ brnz,pn %o0, 1f
+ mov %o0, %g5
+ mov %g2, %o0
+ mov %g3, %o1
+@@ -723,9 +874,12 @@ __hypervisor_xcall_flush_tlb_mm: /* 21 insns */
+ mov %g7, %o5
+ membar #Sync
+ retry
++1: sethi %hi(__hypervisor_tlb_xcall_error), %g4
++ jmpl %g4 + %lo(__hypervisor_tlb_xcall_error), %g0
++ nop
+ 
+ .globl __hypervisor_xcall_flush_tlb_page
+-__hypervisor_xcall_flush_tlb_page: /* 17 insns */
++__hypervisor_xcall_flush_tlb_page: /* 20 insns */
+ /* %g5=ctx, %g1=vaddr */
+ mov %o0, %g2
+ mov %o1, %g3
+@@ -737,42 +891,64 @@ __hypervisor_xcall_flush_tlb_page: /* 17 insns */
+ sllx %o0, PAGE_SHIFT, %o0
+ ta HV_MMU_UNMAP_ADDR_TRAP
+ mov HV_MMU_UNMAP_ADDR_TRAP, %g6
+- brnz,a,pn %o0, __hypervisor_tlb_xcall_error
++ brnz,a,pn %o0, 1f
+ mov %o0, %g5
+ mov %g2, %o0
+ mov %g3, %o1
+ mov %g4, %o2
+ membar #Sync
+ retry
++1: sethi %hi(__hypervisor_tlb_xcall_error), %g4
++ jmpl %g4 + %lo(__hypervisor_tlb_xcall_error), %g0
++ nop
+ 
+ .globl __hypervisor_xcall_flush_tlb_kernel_range
+-__hypervisor_xcall_flush_tlb_kernel_range: /* 25 insns */
++__hypervisor_xcall_flush_tlb_kernel_range: /* 44 insns */
+ /* %g1=start, %g7=end, g2,g3,g4,g5,g6=scratch */
+ sethi %hi(PAGE_SIZE - 1), %g2
+ or %g2, %lo(PAGE_SIZE - 1), %g2
+ andn %g1, %g2, %g1
+ andn %g7, %g2, %g7
+ sub %g7, %g1, %g3
++ srlx %g3, 18, %g7
+ add %g2, 1, %g2
+ sub %g3, %g2, %g3
+ mov %o0, %g2
+ mov %o1, %g4
+- mov %o2, %g7
++ brnz,pn %g7, 2f
++ mov %o2, %g7
+ 1: add %g1, %g3, %o0 /* ARG0: virtual address */
+ mov 0, %o1 /* ARG1: mmu context */
+ mov HV_MMU_ALL, %o2 /* ARG2: flags */
+ ta HV_MMU_UNMAP_ADDR_TRAP
+ mov HV_MMU_UNMAP_ADDR_TRAP, %g6
+- brnz,pn %o0, __hypervisor_tlb_xcall_error
++ brnz,pn %o0, 1f
+ mov %o0, %g5
+ sethi %hi(PAGE_SIZE), %o2
+ brnz,pt %g3, 1b
+ sub %g3, %o2, %g3
+- mov %g2, %o0
++5: mov %g2, %o0
+ mov %g4, %o1
+ mov %g7, %o2
+ membar #Sync
+ retry
++1: sethi %hi(__hypervisor_tlb_xcall_error), %g4
++ jmpl %g4 + %lo(__hypervisor_tlb_xcall_error), %g0
++ nop
++2: mov %o3, %g1
++ mov %o5, %g3
++ mov 0, %o0 /* ARG0: CPU lists unimplemented */
++ mov 0, %o1 /* ARG1: CPU lists unimplemented */
++ mov 0, %o2 /* ARG2: mmu context == nucleus */
++ mov HV_MMU_ALL, %o3 /* ARG3: flags */
++ mov HV_FAST_MMU_DEMAP_CTX, %o5
++ ta HV_FAST_TRAP
++ mov %g1, %o3
++ brz,pt %o0, 5b
++ mov %g3, %o5
++ mov HV_FAST_MMU_DEMAP_CTX, %g6
++ ba,pt %xcc, 1b
++ clr %g5
+ 
+ /* These just get rescheduled to PIL vectors. */
+ .globl xcall_call_function
+@@ -809,6 +985,58 @@ xcall_kgdb_capture:
+ 
+ #endif /* CONFIG_SMP */
+ 
++ .globl cheetah_patch_cachetlbops
++cheetah_patch_cachetlbops:
++ save %sp, -128, %sp
++
++ sethi %hi(__flush_tlb_mm), %o0
++ or %o0, %lo(__flush_tlb_mm), %o0
++ sethi %hi(__cheetah_flush_tlb_mm), %o1
++ or %o1, %lo(__cheetah_flush_tlb_mm), %o1
++ call tlb_patch_one
++ mov 19, %o2
++
++ sethi %hi(__flush_tlb_page), %o0
++ or %o0, %lo(__flush_tlb_page), %o0
++ sethi %hi(__cheetah_flush_tlb_page), %o1
++ or %o1, %lo(__cheetah_flush_tlb_page), %o1
++ call tlb_patch_one
++ mov 22, %o2
++
++ sethi %hi(__flush_tlb_pending), %o0
++ or %o0, %lo(__flush_tlb_pending), %o0
++ sethi %hi(__cheetah_flush_tlb_pending), %o1
++ or %o1, %lo(__cheetah_flush_tlb_pending), %o1
++ call tlb_patch_one
++ mov 27, %o2
++
++ sethi %hi(__flush_tlb_kernel_range), %o0
++ or %o0, %lo(__flush_tlb_kernel_range), %o0
++ sethi %hi(__cheetah_flush_tlb_kernel_range), %o1
++ or %o1, %lo(__cheetah_flush_tlb_kernel_range), %o1
++ call tlb_patch_one
++ mov 31, %o2
++
++#ifdef DCACHE_ALIASING_POSSIBLE
++ sethi %hi(__flush_dcache_page), %o0
++ or %o0, %lo(__flush_dcache_page), %o0
++ sethi %hi(__cheetah_flush_dcache_page), %o1
++ or %o1, %lo(__cheetah_flush_dcache_page), %o1
++ call tlb_patch_one
++ mov 11, %o2
++#endif /* DCACHE_ALIASING_POSSIBLE */
++
++#ifdef CONFIG_SMP
++ sethi %hi(xcall_flush_tlb_kernel_range), %o0
++ or %o0, %lo(xcall_flush_tlb_kernel_range), %o0
++ sethi %hi(__cheetah_xcall_flush_tlb_kernel_range), %o1
++ or %o1, %lo(__cheetah_xcall_flush_tlb_kernel_range), %o1
++ call tlb_patch_one
++ mov 44, %o2
++#endif /* CONFIG_SMP */
++
++ ret
++ restore
+ 
+ .globl hypervisor_patch_cachetlbops
+ hypervisor_patch_cachetlbops:
+@@ -819,28 +1047,28 @@ hypervisor_patch_cachetlbops:
+ sethi %hi(__hypervisor_flush_tlb_mm), %o1
+ or %o1, %lo(__hypervisor_flush_tlb_mm), %o1
+ call tlb_patch_one
+- mov 10, %o2
++ mov 19, %o2
+ 
+ sethi %hi(__flush_tlb_page), %o0
+ or %o0, %lo(__flush_tlb_page), %o0
+ sethi %hi(__hypervisor_flush_tlb_page), %o1
+ or %o1, %lo(__hypervisor_flush_tlb_page), %o1
+ call tlb_patch_one
+- mov 11, %o2
++ mov 22, %o2
+ 
+ sethi %hi(__flush_tlb_pending), %o0
+ or %o0, %lo(__flush_tlb_pending), %o0
+ sethi %hi(__hypervisor_flush_tlb_pending), %o1
+ or %o1, %lo(__hypervisor_flush_tlb_pending), %o1
+ call tlb_patch_one
+- mov 16, %o2
++ mov 27, %o2
+ 
+ sethi %hi(__flush_tlb_kernel_range), %o0
+ or %o0, %lo(__flush_tlb_kernel_range), %o0
+ sethi %hi(__hypervisor_flush_tlb_kernel_range), %o1
+ or %o1, %lo(__hypervisor_flush_tlb_kernel_range), %o1
+ call tlb_patch_one
+- mov 16, %o2
++ mov 31, %o2
+ 
+ #ifdef DCACHE_ALIASING_POSSIBLE
+ sethi %hi(__flush_dcache_page), %o0
+@@ -857,21 +1085,21 @@ hypervisor_patch_cachetlbops:
+ sethi %hi(__hypervisor_xcall_flush_tlb_mm), %o1
+ or %o1, %lo(__hypervisor_xcall_flush_tlb_mm), %o1
+ call tlb_patch_one
+- mov 21, %o2
++ mov 24, %o2
+ 
+ sethi %hi(xcall_flush_tlb_page), %o0
+ or %o0, %lo(xcall_flush_tlb_page), %o0
+ sethi %hi(__hypervisor_xcall_flush_tlb_page), %o1
+ or %o1, %lo(__hypervisor_xcall_flush_tlb_page), %o1
+ call tlb_patch_one
+- mov 17, %o2
++ mov 20, %o2
+ 
+ sethi %hi(xcall_flush_tlb_kernel_range), %o0
+ or %o0, %lo(xcall_flush_tlb_kernel_range), %o0
+ sethi %hi(__hypervisor_xcall_flush_tlb_kernel_range), %o1
+ or %o1, %lo(__hypervisor_xcall_flush_tlb_kernel_range), %o1
+ call tlb_patch_one
+- mov 25, %o2
++ mov 44, %o2
+ #endif /* CONFIG_SMP */
+ 
+ ret
+diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
+index 68c05398bba9..7aadd3cea843 100644
+--- a/arch/x86/include/asm/hugetlb.h
++++ b/arch/x86/include/asm/hugetlb.h
+@@ -4,6 +4,7 @@
+ #include <asm/page.h>
+ #include <asm-generic/hugetlb.h>
+ 
++#define hugepages_supported() cpu_has_pse
+ 
+ static inline int is_hugepage_only_range(struct mm_struct *mm,
+ unsigned long addr,
+diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
+index 5838fa911aa0..d4d6eb8c08a8 100644
+--- a/arch/x86/include/asm/uaccess.h
++++ b/arch/x86/include/asm/uaccess.h
+@@ -384,7 +384,7 @@ do { \
+ asm volatile("1: mov"itype" %1,%"rtype"0\n" \
+ "2:\n" \
+ _ASM_EXTABLE_EX(1b, 2b) \
+- : ltype(x) : "m" (__m(addr)))
++ : ltype(x) : "m" (__m(addr)), "0" (0))
+ 
+ #define __put_user_nocheck(x, ptr, size) \
+ ({ \
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 06b37a671b12..8562aff68884 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -178,7 +178,18 @@ static void kvm_on_user_return(struct user_return_notifier *urn)
+ struct kvm_shared_msrs *locals
+ = container_of(urn, struct kvm_shared_msrs, urn);
+ struct kvm_shared_msr_values *values;
++ unsigned long flags;
+ 
++ /*
++ * Disabling irqs at this point since the following code could be
++ * interrupted and executed through kvm_arch_hardware_disable()
++ */
++ local_irq_save(flags);
++ if (locals->registered) {
++ locals->registered = false;
++ user_return_notifier_unregister(urn);
++ }
++ local_irq_restore(flags);
+ for (slot = 0; slot < shared_msrs_global.nr; ++slot) {
+ values = &locals->values[slot];
+ if (values->host != values->curr) {
+@@ -186,8 +197,6 @@ static void kvm_on_user_return(struct user_return_notifier *urn)
+ values->curr = values->host;
+ }
+ }
+- locals->registered = false;
+- user_return_notifier_unregister(urn);
+ }
+ 
+ static void shared_msr_update(unsigned slot, u32 msr)
+@@ -3225,6 +3234,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ };
+ case KVM_SET_VAPIC_ADDR: {
+ struct kvm_vapic_addr va;
++ int idx;
+ 
+ r = -EINVAL;
+ if (!irqchip_in_kernel(vcpu->kvm))
+@@ -3232,7 +3242,9 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ r = -EFAULT;
+ if (copy_from_user(&va, argp, sizeof va))
+ goto out;
++ idx = srcu_read_lock(&vcpu->kvm->srcu);
+ r = kvm_lapic_set_vapic_addr(vcpu, va.vapic_addr);
++ srcu_read_unlock(&vcpu->kvm->srcu, idx);
+ break;
+ }
+ case KVM_X86_SETUP_MCE: {
+@@ -6662,11 +6674,13 @@ void kvm_put_guest_fpu(struct kvm_vcpu *vcpu)
+ 
+ void kvm_arch_vcpu_free(struct kvm_vcpu *vcpu)
+ {
++ void *wbinvd_dirty_mask = vcpu->arch.wbinvd_dirty_mask;
++
+ kvmclock_reset(vcpu);
+ 
+- free_cpumask_var(vcpu->arch.wbinvd_dirty_mask);
+ fx_free(vcpu);
+ kvm_x86_ops->vcpu_free(vcpu);
++ free_cpumask_var(wbinvd_dirty_mask);
+ }
+ 
+ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
+diff --git a/arch/x86/xen/mmu.c b/arch/x86/xen/mmu.c
+index fdc3ba28ca38..53b061c9ad7e 100644
+--- a/arch/x86/xen/mmu.c
++++ b/arch/x86/xen/mmu.c
+@@ -1187,7 +1187,7 @@ static void __init xen_cleanhighmap(unsigned long vaddr,
+ 
+ /* NOTE: The loop is more greedy than the cleanup_highmap variant.
+ * We include the PMD passed in on _both_ boundaries. */
+- for (; vaddr <= vaddr_end && (pmd < (level2_kernel_pgt + PAGE_SIZE));
++ for (; vaddr <= vaddr_end && (pmd < (level2_kernel_pgt + PTRS_PER_PMD));
+ pmd++, vaddr += PMD_SIZE) {
+ if (pmd_none(*pmd))
+ continue;
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 8ec37bbdd699..74529dc575a2 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -677,7 +677,7 @@ static int ghes_proc(struct ghes *ghes)
+ ghes_do_proc(ghes, ghes->estatus);
+ out:
+ ghes_clear_estatus(ghes);
+- return 0;
++ return rc;
+ }
+ 
+ static void ghes_add_timer(struct ghes *ghes)
+diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
+index 55635edf563b..342cb53db293 100644
+--- a/drivers/block/drbd/drbd_main.c
++++ b/drivers/block/drbd/drbd_main.c
+@@ -1771,7 +1771,7 @@ int drbd_send(struct drbd_tconn *tconn, struct socket *sock,
+ * do we need to block DRBD_SIG if sock == &meta.socket ??
+ * otherwise wake_asender() might interrupt some send_*Ack !
+ */
+- rv = kernel_sendmsg(sock, &msg, &iov, 1, size);
++ rv = kernel_sendmsg(sock, &msg, &iov, 1, iov.iov_len);
+ if (rv == -EAGAIN) {
+ if (we_should_drop_the_connection(tconn, sock))
+ break;
+diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c
+index f6b96ba57b32..15a3ec940723 100644
+--- a/drivers/char/virtio_console.c
++++ b/drivers/char/virtio_console.c
+@@ -1533,19 +1533,29 @@ static void remove_port_data(struct port *port)
+ spin_lock_irq(&port->inbuf_lock);
+ /* Remove unused data this port might have received. */
+ discard_port_data(port);
++ spin_unlock_irq(&port->inbuf_lock);
+ 
+ /* Remove buffers we queued up for the Host to send us data in. */
+- while ((buf = virtqueue_detach_unused_buf(port->in_vq)))
+- free_buf(buf, true);
+- spin_unlock_irq(&port->inbuf_lock);
++ do {
++ spin_lock_irq(&port->inbuf_lock);
++ buf = virtqueue_detach_unused_buf(port->in_vq);
++ spin_unlock_irq(&port->inbuf_lock);
++ if (buf)
++ free_buf(buf, true);
++ } while (buf);
+ 
+ spin_lock_irq(&port->outvq_lock);
+ reclaim_consumed_buffers(port);
++ spin_unlock_irq(&port->outvq_lock);
+ 
+ /* Free pending buffers from the out-queue. */
+- while ((buf = virtqueue_detach_unused_buf(port->out_vq)))
+- free_buf(buf, true);
+- spin_unlock_irq(&port->outvq_lock);
++ do {
++ spin_lock_irq(&port->outvq_lock);
++ buf = virtqueue_detach_unused_buf(port->out_vq);
++ spin_unlock_irq(&port->outvq_lock);
++ if (buf)
++ free_buf(buf, true);
++ } while (buf);
+ }
+ 
+ /*
+diff --git a/drivers/firewire/net.c b/drivers/firewire/net.c
+index 4af0a7bad7f2..2a260443061d 100644
+--- a/drivers/firewire/net.c
++++ b/drivers/firewire/net.c
+@@ -73,13 +73,13 @@ struct rfc2734_header {
+ 
+ #define fwnet_get_hdr_lf(h) (((h)->w0 & 0xc0000000) >> 30)
+ #define fwnet_get_hdr_ether_type(h) (((h)->w0 & 0x0000ffff))
+-#define fwnet_get_hdr_dg_size(h) (((h)->w0 & 0x0fff0000) >> 16)
++#define fwnet_get_hdr_dg_size(h) ((((h)->w0 & 0x0fff0000) >> 16) + 1)
+ #define fwnet_get_hdr_fg_off(h) (((h)->w0 & 0x00000fff))
+ #define fwnet_get_hdr_dgl(h) (((h)->w1 & 0xffff0000) >> 16)
+ 
+-#define fwnet_set_hdr_lf(lf) ((lf) << 30)
++#define fwnet_set_hdr_lf(lf) ((lf) << 30)
+ #define fwnet_set_hdr_ether_type(et) (et)
+-#define fwnet_set_hdr_dg_size(dgs) ((dgs) << 16)
++#define fwnet_set_hdr_dg_size(dgs) (((dgs) - 1) << 16)
+ #define fwnet_set_hdr_fg_off(fgo) (fgo)
+ 
+ #define fwnet_set_hdr_dgl(dgl) ((dgl) << 16)
+@@ -591,6 +591,9 @@ static int fwnet_incoming_packet(struct fwnet_device *dev, __be32 *buf, int len,
+ int retval;
+ u16 ether_type;
+ 
++ if (len <= RFC2374_UNFRAG_HDR_SIZE)
++ return 0;
++
+ hdr.w0 = be32_to_cpu(buf[0]);
+ lf = fwnet_get_hdr_lf(&hdr);
+ if (lf == RFC2374_HDR_UNFRAG) {
+@@ -615,7 +618,12 @@ static int fwnet_incoming_packet(struct fwnet_device *dev, __be32 *buf, int len,
+ return fwnet_finish_incoming_packet(net, skb, source_node_id,
+ is_broadcast, ether_type);
+ }
++
+ /* A datagram fragment has been received, now the fun begins. */
++
++ if (len <= RFC2374_FRAG_HDR_SIZE)
++ return 0;
++
+ hdr.w1 = ntohl(buf[1]);
+ buf += 2;
+ len -= RFC2374_FRAG_HDR_SIZE;
+@@ -627,7 +635,10 @@ static int fwnet_incoming_packet(struct fwnet_device *dev, __be32 *buf, int len,
+ fg_off = fwnet_get_hdr_fg_off(&hdr);
+ }
+ datagram_label = fwnet_get_hdr_dgl(&hdr);
+- dg_size = fwnet_get_hdr_dg_size(&hdr); /* ??? + 1 */
++ dg_size = fwnet_get_hdr_dg_size(&hdr);
++
++ if (fg_off + len > dg_size)
++ return 0;
+ 
+ spin_lock_irqsave(&dev->lock, flags);
+ 
+@@ -735,6 +746,22 @@ static void fwnet_receive_packet(struct fw_card *card, struct fw_request *r,
+ fw_send_response(card, r, rcode);
+ }
+ 
++static int gasp_source_id(__be32 *p)
++{
++ return be32_to_cpu(p[0]) >> 16;
++}
++
++static u32 gasp_specifier_id(__be32 *p)
++{
++ return (be32_to_cpu(p[0]) & 0xffff) << 8 |
++ (be32_to_cpu(p[1]) & 0xff000000) >> 24;
++}
++
++static u32 gasp_version(__be32 *p)
++{
++ return be32_to_cpu(p[1]) & 0xffffff;
++}
++
+ static void fwnet_receive_broadcast(struct fw_iso_context *context,
+ u32 cycle, size_t header_length, void *header, void *data)
+ {
+@@ -744,9 +771,6 @@ static void fwnet_receive_broadcast(struct fw_iso_context *context,
+ __be32 *buf_ptr;
+ int retval;
+ u32 length;
+- u16 source_node_id;
+- u32 specifier_id;
+- u32 ver;
+ unsigned long offset;
+ unsigned long flags;
+ 
+@@ -763,22 +787,17 @@ static void fwnet_receive_broadcast(struct fw_iso_context *context,
+ 
+ spin_unlock_irqrestore(&dev->lock, flags);
+ 
+- specifier_id = (be32_to_cpu(buf_ptr[0]) & 0xffff) << 8
+- | (be32_to_cpu(buf_ptr[1]) & 0xff000000) >> 24;
+- ver = be32_to_cpu(buf_ptr[1]) & 0xffffff;
+- source_node_id = be32_to_cpu(buf_ptr[0]) >> 16;
+-
+- if (specifier_id == IANA_SPECIFIER_ID &&
+- (ver == RFC2734_SW_VERSION
++ if (length > IEEE1394_GASP_HDR_SIZE &&
++ gasp_specifier_id(buf_ptr) == IANA_SPECIFIER_ID &&
++ (gasp_version(buf_ptr) == RFC2734_SW_VERSION
+ #if IS_ENABLED(CONFIG_IPV6)
+- || ver == RFC3146_SW_VERSION
++ || gasp_version(buf_ptr) == RFC3146_SW_VERSION
+ #endif
+- )) {
+- buf_ptr += 2;
+- length -= IEEE1394_GASP_HDR_SIZE;
+- fwnet_incoming_packet(dev, buf_ptr, length, source_node_id,
++ ))
++ fwnet_incoming_packet(dev, buf_ptr + 2,
++ length - IEEE1394_GASP_HDR_SIZE,
++ gasp_source_id(buf_ptr),
+ context->card->generation, true);
+- }
+ 
+ packet.payload_length = dev->rcv_buffer_size;
+ packet.interrupt = 1;
+diff --git a/drivers/gpu/drm/exynos/exynos_drm_core.c b/drivers/gpu/drm/exynos/exynos_drm_core.c
+index 1bef6dc77478..6d521497e3b4 100644
+--- a/drivers/gpu/drm/exynos/exynos_drm_core.c
++++ b/drivers/gpu/drm/exynos/exynos_drm_core.c
+@@ -204,7 +204,7 @@ int exynos_drm_subdrv_open(struct drm_device *dev, struct drm_file *file)
+ return 0;
+ 
+ err:
+- list_for_each_entry_reverse(subdrv, &subdrv->list, list) {
++ list_for_each_entry_continue_reverse(subdrv, &exynos_drm_subdrv_list, list) {
+ if (subdrv->close)
+ subdrv->close(dev, subdrv->dev, file);
+ }
+diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
+index 7dcf2ffddccf..a10125442041 100644
+--- a/drivers/gpu/drm/radeon/ni.c
++++ b/drivers/gpu/drm/radeon/ni.c
+@@ -1322,9 +1322,7 @@ static void cayman_pcie_gart_fini(struct radeon_device *rdev)
+ void cayman_cp_int_cntl_setup(struct radeon_device *rdev,
+ int ring, u32 cp_int_cntl)
+ {
+- u32 srbm_gfx_cntl = RREG32(SRBM_GFX_CNTL) & ~3;
+-
+- WREG32(SRBM_GFX_CNTL, srbm_gfx_cntl | (ring & 3));
++ WREG32(SRBM_GFX_CNTL, RINGID(ring));
+ WREG32(CP_INT_CNTL, cp_int_cntl);
+ }
+ 
+diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
+index c1281fc39040..3265792f1990 100644
+--- a/drivers/gpu/drm/radeon/si_dpm.c
++++ b/drivers/gpu/drm/radeon/si_dpm.c
+@@ -2934,6 +2934,49 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev,
+ int i;
+ struct si_dpm_quirk *p = si_dpm_quirk_list;
+ 
++ /* limit all SI kickers */
++ if (rdev->family == CHIP_PITCAIRN) {
++ if ((rdev->pdev->revision == 0x81) ||
++ (rdev->pdev->device == 0x6810) ||
++ (rdev->pdev->device == 0x6811) ||
++ (rdev->pdev->device == 0x6816) ||
++ (rdev->pdev->device == 0x6817) ||
++ (rdev->pdev->device == 0x6806))
++ max_mclk = 120000;
++ } else if (rdev->family == CHIP_VERDE) {
++ if ((rdev->pdev->revision == 0x81) ||
++ (rdev->pdev->revision == 0x83) ||
++ (rdev->pdev->revision == 0x87) ||
++ (rdev->pdev->device == 0x6820) ||
++ (rdev->pdev->device == 0x6821) ||
++ (rdev->pdev->device == 0x6822) ||
++ (rdev->pdev->device == 0x6823) ||
++ (rdev->pdev->device == 0x682A) ||
++ (rdev->pdev->device == 0x682B)) {
++ max_sclk = 75000;
++ max_mclk = 80000;
++ }
++ } else if (rdev->family == CHIP_OLAND) {
++ if ((rdev->pdev->revision == 0xC7) ||
++ (rdev->pdev->revision == 0x80) ||
++ (rdev->pdev->revision == 0x81) ||
++ (rdev->pdev->revision == 0x83) ||
++ (rdev->pdev->device == 0x6604) ||
++ (rdev->pdev->device == 0x6605)) {
++ max_sclk = 75000;
++ max_mclk = 80000;
++ }
++ } else if (rdev->family == CHIP_HAINAN) {
++ if ((rdev->pdev->revision == 0x81) ||
++ (rdev->pdev->revision == 0x83) ||
++ (rdev->pdev->revision == 0xC3) ||
++ (rdev->pdev->device == 0x6664) ||
++ (rdev->pdev->device == 0x6665) ||
++ (rdev->pdev->device == 0x6667)) {
++ max_sclk = 75000;
++ max_mclk = 80000;
++ }
++ }
+ /* Apply dpm quirks */
+ while (p && p->chip_device != 0) {
+ if (rdev->pdev->vendor == p->chip_vendor &&
+@@ -3008,16 +3051,6 @@ static void si_apply_state_adjust_rules(struct radeon_device *rdev,
+ ps->performance_levels[i].sclk = max_sclk;
+ }
+ }
+- /* limit mclk on all R7 370 parts for stability */
+- if (rdev->pdev->device == 0x6811 &&
+- rdev->pdev->revision == 0x81)
+- max_mclk = 120000;
+- /* limit sclk/mclk on Jet parts for stability */
+- if (rdev->pdev->device == 0x6665 &&
+- rdev->pdev->revision == 0xc3) {
+- max_sclk = 75000;
+- max_mclk = 80000;
+- }
+ 
+ /* XXX validate the min clocks required for display */
+ 
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index d7d54e7449fa..d183ff679fe5 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -707,6 +707,7 @@ static void hid_scan_collection(struct hid_parser *parser, unsigned type)
+ (hid->product == USB_DEVICE_ID_MS_TYPE_COVER_PRO_3 ||
+ hid->product == USB_DEVICE_ID_MS_TYPE_COVER_PRO_3_2 ||
+ hid->product == USB_DEVICE_ID_MS_TYPE_COVER_PRO_3_JP ||
++ hid->product == USB_DEVICE_ID_MS_TYPE_COVER_PRO_4_JP ||
+ hid->product == USB_DEVICE_ID_MS_TYPE_COVER_3 ||
+ hid->product == USB_DEVICE_ID_MS_POWER_COVER) &&
+ hid->group == HID_GROUP_MULTITOUCH)
+@@ -1818,6 +1819,7 @@ static const struct hid_device_id hid_have_special_driver[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_PRO_3) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_PRO_3_2) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_PRO_3_JP) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_PRO_4_JP) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_DIGITAL_MEDIA_7K) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_DIGITAL_MEDIA_600) },
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 132ed653b54e..16583e6621d4 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -165,6 +165,8 @@
+ #define USB_DEVICE_ID_ATEN_2PORTKVM 0x2204
+ #define USB_DEVICE_ID_ATEN_4PORTKVM 0x2205
+ #define USB_DEVICE_ID_ATEN_4PORTKVMC 0x2208
++#define USB_DEVICE_ID_ATEN_CS682 0x2213
++#define USB_DEVICE_ID_ATEN_CS692 0x8021
+ 
+ #define USB_VENDOR_ID_ATMEL 0x03eb
+ #define USB_DEVICE_ID_ATMEL_MULTITOUCH 0x211c
+@@ -661,6 +663,7 @@
+ #define USB_DEVICE_ID_MS_TYPE_COVER_PRO_3 0x07dc
+ #define USB_DEVICE_ID_MS_TYPE_COVER_PRO_3_2 0x07e2
+ #define USB_DEVICE_ID_MS_TYPE_COVER_PRO_3_JP 0x07dd
++#define USB_DEVICE_ID_MS_TYPE_COVER_PRO_4_JP 0x07e9
+ #define USB_DEVICE_ID_MS_TYPE_COVER_3 0x07de
+ #define USB_DEVICE_ID_MS_POWER_COVER 0x07da
+ 
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index 5fbb46fe6ebf..bd7460541486 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -895,6 +895,7 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ case HID_UP_HPVENDOR2:
+ set_bit(EV_REP, input->evbit);
+ switch (usage->hid & HID_USAGE) {
++ case 0x001: map_key_clear(KEY_MICMUTE); break;
+ case 0x003: map_key_clear(KEY_BRIGHTNESSDOWN); break;
+ case 0x004: map_key_clear(KEY_BRIGHTNESSUP); break;
+ default: goto ignore;
+diff --git a/drivers/hid/hid-microsoft.c b/drivers/hid/hid-microsoft.c
+index 8dfc58ac9d52..607e57122458 100644
+--- a/drivers/hid/hid-microsoft.c
++++ b/drivers/hid/hid-microsoft.c
+@@ -268,6 +268,8 @@ static const struct hid_device_id ms_devices[] = {
+ .driver_data = MS_HIDINPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_PRO_3_JP),
+ .driver_data = MS_HIDINPUT },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_PRO_4_JP),
++ .driver_data = MS_HIDINPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3),
+ .driver_data = MS_HIDINPUT },
+ { HID_USB_DEVICE(USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_POWER_COVER),
+diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c
+index d63f7e45b539..3fd5fa9385ae 100644
+--- a/drivers/hid/usbhid/hid-quirks.c
++++ b/drivers/hid/usbhid/hid-quirks.c
+@@ -60,6 +60,8 @@ static const struct hid_blacklist {
+ { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_2PORTKVM, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVM, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_4PORTKVMC, HID_QUIRK_NOGET },
++ { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_CS682, HID_QUIRK_NOGET },
++ { USB_VENDOR_ID_ATEN, USB_DEVICE_ID_ATEN_CS692, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_FIGHTERSTICK, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_COMBATSTICK, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_FLIGHT_SIM_ECLIPSE_YOKE, HID_QUIRK_NOGET },
+@@ -89,6 +91,7 @@ static const struct hid_blacklist {
+ { USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_PRO_3, HID_QUIRK_NO_INIT_REPORTS },
+ { USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_PRO_3_2, HID_QUIRK_NO_INIT_REPORTS },
+ { USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_PRO_3_JP, HID_QUIRK_NO_INIT_REPORTS },
++ { USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_PRO_4_JP, HID_QUIRK_NO_INIT_REPORTS },
+ { USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3, HID_QUIRK_NO_INIT_REPORTS },
+ { USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_POWER_COVER, HID_QUIRK_NO_INIT_REPORTS },
+ { USB_VENDOR_ID_MSI, USB_DEVICE_ID_MSI_GX680R_LED_PANEL, HID_QUIRK_NO_INIT_REPORTS },
+diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c
+index 665b7dac6b7d..74d7025a05e6 100644
+--- a/drivers/hv/hv_util.c
++++ b/drivers/hv/hv_util.c
+@@ -276,10 +276,14 @@ static void heartbeat_onchannelcallback(void *context)
+ u8 *hbeat_txf_buf = util_heartbeat.recv_buffer;
+ struct icmsg_negotiate *negop = NULL;
+ 
+- vmbus_recvpacket(channel, hbeat_txf_buf,
+- PAGE_SIZE, &recvlen, &requestid);
++ while (1) {
++
++ vmbus_recvpacket(channel, hbeat_txf_buf,
++ PAGE_SIZE, &recvlen, &requestid);
++
++ if (!recvlen)
++ break;
+ 
+- if (recvlen > 0) {
+ icmsghdrp = (struct icmsg_hdr *)&hbeat_txf_buf[
+ sizeof(struct vmbuspipe_hdr)];
+ 
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index c410217fbe89..951a4f6a3b11 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -79,6 +79,8 @@ static struct ib_cm {
+ __be32 random_id_operand;
+ struct list_head timewait_list;
+ struct workqueue_struct *wq;
++ /* Sync on cm change port state */
++ spinlock_t state_lock;
+ } cm;
+ 
+ /* Counter indexes ordered by attribute ID */
+@@ -160,6 +162,8 @@ struct cm_port {
+ struct ib_mad_agent *mad_agent;
+ struct kobject port_obj;
+ u8 port_num;
++ struct list_head cm_priv_prim_list;
++ struct list_head cm_priv_altr_list;
+ struct cm_counter_group counter_group[CM_COUNTER_GROUPS];
+ };
+ 
+@@ -237,6 +241,12 @@ struct cm_id_private {
+ u8 service_timeout;
+ u8 target_ack_delay;
+ 
++ struct list_head prim_list;
++ struct list_head altr_list;
++ /* Indicates that the send port mad is registered and av is set */
++ int prim_send_port_not_ready;
++ int altr_send_port_not_ready;
++
+ struct list_head work_list;
+ atomic_t work_count;
+ };
+@@ -255,19 +265,46 @@ static int cm_alloc_msg(struct cm_id_private *cm_id_priv,
+ struct ib_mad_agent *mad_agent;
+ struct ib_mad_send_buf *m;
+ struct ib_ah *ah;
++ struct cm_av *av;
++ unsigned long flags, flags2;
++ int ret = 0;
+ 
++ /* don't let the port to be released till the agent is down */
++ spin_lock_irqsave(&cm.state_lock, flags2);
++ spin_lock_irqsave(&cm.lock, flags);
++ if (!cm_id_priv->prim_send_port_not_ready)
++ av = &cm_id_priv->av;
++ else if (!cm_id_priv->altr_send_port_not_ready &&
++ (cm_id_priv->alt_av.port))
++ av = &cm_id_priv->alt_av;
++ else {
++ pr_info("%s: not valid CM id\n", __func__);
++ ret = -ENODEV;
++ spin_unlock_irqrestore(&cm.lock, flags);
++ goto out;
++ }
++ spin_unlock_irqrestore(&cm.lock, flags);
++ /* Make sure the port haven't released the mad yet */
+ mad_agent = cm_id_priv->av.port->mad_agent;
+- ah = ib_create_ah(mad_agent->qp->pd, &cm_id_priv->av.ah_attr);
+- if (IS_ERR(ah))
+- return PTR_ERR(ah);
++ if (!mad_agent) {
++ pr_info("%s: not a valid MAD agent\n", __func__);
++ ret = -ENODEV;
++ goto out;
++ }
++ ah = ib_create_ah(mad_agent->qp->pd, &av->ah_attr);
++ if (IS_ERR(ah)) {
++ ret = PTR_ERR(ah);
++ goto out;
++ }
+ 
+ m = ib_create_send_mad(mad_agent, cm_id_priv->id.remote_cm_qpn,
+- cm_id_priv->av.pkey_index,
++ av->pkey_index,
+ 0, IB_MGMT_MAD_HDR, IB_MGMT_MAD_DATA,
+ GFP_ATOMIC);
+ if (IS_ERR(m)) {
+ ib_destroy_ah(ah);
+- return PTR_ERR(m);
++ ret = PTR_ERR(m);
++ goto out;
+ }
+ 
+ /* Timeout set by caller if response is expected. */
+@@ -277,7 +314,10 @@ static int cm_alloc_msg(struct cm_id_private *cm_id_priv,
+ atomic_inc(&cm_id_priv->refcount);
+ m->context[0] = cm_id_priv;
+ *msg = m;
+- return 0;
++
++out:
++ spin_unlock_irqrestore(&cm.state_lock, flags2);
++ return ret;
+ }
+ 
+ static int cm_alloc_response_msg(struct cm_port *port,
+@@ -346,7 +386,8 @@ static void cm_init_av_for_response(struct cm_port *port, struct ib_wc *wc,
+ grh, &av->ah_attr);
+ }
+ 
+-static int cm_init_av_by_path(struct ib_sa_path_rec *path, struct cm_av *av)
++static int cm_init_av_by_path(struct ib_sa_path_rec *path, struct cm_av *av,
++ struct cm_id_private *cm_id_priv)
+ {
+ struct cm_device *cm_dev;
+ struct cm_port *port = NULL;
+@@ -376,7 +417,18 @@ static int cm_init_av_by_path(struct ib_sa_path_rec *path, struct cm_av *av)
+ ib_init_ah_from_path(cm_dev->ib_device, port->port_num, path,
+ &av->ah_attr);
+ av->timeout = path->packet_life_time + 1;
+- return 0;
++
++ spin_lock_irqsave(&cm.lock, flags);
++ if (&cm_id_priv->av == av)
++ list_add_tail(&cm_id_priv->prim_list, &port->cm_priv_prim_list);
++ else if (&cm_id_priv->alt_av == av)
++ list_add_tail(&cm_id_priv->altr_list, &port->cm_priv_altr_list);
++ else
++ ret = -EINVAL;
++
++ spin_unlock_irqrestore(&cm.lock, flags);
++
++ return ret;
+ }
+ 
+ static int cm_alloc_id(struct cm_id_private *cm_id_priv)
+@@ -716,6 +768,8 @@ struct ib_cm_id *ib_create_cm_id(struct ib_device *device,
+ spin_lock_init(&cm_id_priv->lock);
+ init_completion(&cm_id_priv->comp);
+ INIT_LIST_HEAD(&cm_id_priv->work_list);
++ INIT_LIST_HEAD(&cm_id_priv->prim_list);
++ INIT_LIST_HEAD(&cm_id_priv->altr_list);
+ atomic_set(&cm_id_priv->work_count, -1);
+ atomic_set(&cm_id_priv->refcount, 1);
+ return &cm_id_priv->id;
+@@ -914,6 +968,15 @@ retest:
+ break;
+ }
+ 
++ spin_lock_irq(&cm.lock);
++ if (!list_empty(&cm_id_priv->altr_list) &&
++ (!cm_id_priv->altr_send_port_not_ready))
++ list_del(&cm_id_priv->altr_list);
++ if (!list_empty(&cm_id_priv->prim_list) &&
++ (!cm_id_priv->prim_send_port_not_ready))
++ list_del(&cm_id_priv->prim_list);
++ spin_unlock_irq(&cm.lock);
++
+ cm_free_id(cm_id->local_id);
+ cm_deref_id(cm_id_priv);
+ wait_for_completion(&cm_id_priv->comp);
+@@ -1137,12 +1200,13 @@ int ib_send_cm_req(struct ib_cm_id *cm_id,
+ goto out;
+ }
+ 
+- ret = cm_init_av_by_path(param->primary_path, &cm_id_priv->av);
++ ret = cm_init_av_by_path(param->primary_path, &cm_id_priv->av,
++ cm_id_priv);
+ if (ret)
+ goto error1;
+ if (param->alternate_path) {
+ ret = cm_init_av_by_path(param->alternate_path,
+- &cm_id_priv->alt_av);
++ &cm_id_priv->alt_av, cm_id_priv);
+ if (ret)
+ goto error1;
+ }
+@@ -1562,7 +1626,8 @@ static int cm_req_handler(struct cm_work *work)
+ 
+ cm_process_routed_req(req_msg, work->mad_recv_wc->wc);
+ cm_format_paths_from_req(req_msg, &work->path[0], &work->path[1]);
+- ret = cm_init_av_by_path(&work->path[0], &cm_id_priv->av);
++ ret = cm_init_av_by_path(&work->path[0], &cm_id_priv->av,
++ cm_id_priv);
+ if (ret) {
+ ib_get_cached_gid(work->port->cm_dev->ib_device,
+ work->port->port_num, 0, &work->path[0].sgid);
+@@ -1572,7 +1637,8 @@ static int cm_req_handler(struct cm_work *work)
+ goto rejected;
+ }
+ if (req_msg->alt_local_lid) {
+- ret = cm_init_av_by_path(&work->path[1], &cm_id_priv->alt_av);
++ ret = cm_init_av_by_path(&work->path[1], &cm_id_priv->alt_av,
++ cm_id_priv);
+ if (ret) {
+ ib_send_cm_rej(cm_id, IB_CM_REJ_INVALID_ALT_GID,
+ &work->path[0].sgid,
+@@ -2627,7 +2693,8 @@ int ib_send_cm_lap(struct ib_cm_id *cm_id,
+ goto out;
+ }
+ 
+- ret = cm_init_av_by_path(alternate_path, &cm_id_priv->alt_av);
++ ret = cm_init_av_by_path(alternate_path, &cm_id_priv->alt_av,
++ cm_id_priv);
+ if (ret)
+ goto out;
+ cm_id_priv->alt_av.timeout =
+@@ -2739,7 +2806,8 @@ static int cm_lap_handler(struct cm_work *work)
+ cm_init_av_for_response(work->port, work->mad_recv_wc->wc,
+ work->mad_recv_wc->recv_buf.grh,
+ &cm_id_priv->av);
+- cm_init_av_by_path(param->alternate_path, &cm_id_priv->alt_av);
++ cm_init_av_by_path(param->alternate_path, &cm_id_priv->alt_av,
++ cm_id_priv);
+ ret = atomic_inc_and_test(&cm_id_priv->work_count);
+ if (!ret)
+ list_add_tail(&work->list, &cm_id_priv->work_list);
+@@ -2931,7 +2999,7 @@ int ib_send_cm_sidr_req(struct ib_cm_id *cm_id,
+ return -EINVAL;
+ 
+ cm_id_priv = container_of(cm_id, struct cm_id_private, id);
+- ret = cm_init_av_by_path(param->path, &cm_id_priv->av);
++ ret = cm_init_av_by_path(param->path, &cm_id_priv->av, cm_id_priv);
+ if (ret)
+ goto out;
+ 
+@@ -3352,7 +3420,9 @@ out:
+ static int cm_migrate(struct ib_cm_id *cm_id)
+ {
+ struct cm_id_private *cm_id_priv;
++ struct cm_av tmp_av;
+ unsigned long flags;
++ int tmp_send_port_not_ready;
+ int ret = 0;
+ 
+ cm_id_priv = container_of(cm_id, struct cm_id_private, id);
+@@ -3361,7 +3431,14 @@ static int cm_migrate(struct ib_cm_id *cm_id)
+ (cm_id->lap_state == IB_CM_LAP_UNINIT ||
+ cm_id->lap_state == IB_CM_LAP_IDLE)) {
+ cm_id->lap_state = IB_CM_LAP_IDLE;
++ /* Swap address vector */
++ tmp_av = cm_id_priv->av;
+ cm_id_priv->av = cm_id_priv->alt_av;
++ cm_id_priv->alt_av = tmp_av;
++ /* Swap port send ready state */
++ tmp_send_port_not_ready = cm_id_priv->prim_send_port_not_ready;
++ cm_id_priv->prim_send_port_not_ready = cm_id_priv->altr_send_port_not_ready;
++ cm_id_priv->altr_send_port_not_ready = tmp_send_port_not_ready;
+ } else
+ ret = -EINVAL;
+ spin_unlock_irqrestore(&cm_id_priv->lock, flags);
+@@ -3767,6 +3844,9 @@ static void cm_add_one(struct ib_device *ib_device)
+ port->cm_dev = cm_dev;
+ port->port_num = i;
+ 
++ INIT_LIST_HEAD(&port->cm_priv_prim_list);
++ INIT_LIST_HEAD(&port->cm_priv_altr_list);
++
+ ret = cm_create_port_fs(port);
+ if (ret)
+ goto error1;
+@@ -3813,6 +3893,8 @@ static void cm_remove_one(struct ib_device *ib_device)
+ {
+ struct cm_device *cm_dev;
+ struct cm_port *port;
++ struct cm_id_private *cm_id_priv;
++ struct ib_mad_agent *cur_mad_agent;
+ struct ib_port_modify port_modify = {
+ .clr_port_cap_mask = IB_PORT_CM_SUP
+ };
+@@ -3830,10 +3912,22 @@ static void cm_remove_one(struct ib_device *ib_device)
+ for (i = 1; i <= ib_device->phys_port_cnt; i++) {
+ port = cm_dev->port[i-1];
+ ib_modify_port(ib_device, port->port_num, 0, &port_modify);
+- ib_unregister_mad_agent(port->mad_agent);
++ /* Mark all the cm_id's as not valid */
++ spin_lock_irq(&cm.lock);
++ list_for_each_entry(cm_id_priv, &port->cm_priv_altr_list, altr_list)
++ cm_id_priv->altr_send_port_not_ready = 1;
++ list_for_each_entry(cm_id_priv, &port->cm_priv_prim_list, prim_list)
++ cm_id_priv->prim_send_port_not_ready = 1;
++ spin_unlock_irq(&cm.lock);
+ flush_workqueue(cm.wq);
++ spin_lock_irq(&cm.state_lock);
++ cur_mad_agent = port->mad_agent;
++ port->mad_agent = NULL;
++ spin_unlock_irq(&cm.state_lock);
++ ib_unregister_mad_agent(cur_mad_agent);
+ cm_remove_port_fs(port);
+ }
++
+ device_unregister(cm_dev->device);
+ kfree(cm_dev);
+ }
+@@ -3846,6 +3940,7 @@ static int __init ib_cm_init(void)
+ INIT_LIST_HEAD(&cm.device_list);
+ rwlock_init(&cm.device_lock);
+ spin_lock_init(&cm.lock);
++ spin_lock_init(&cm.state_lock);
cm.listen_service_table = RB_ROOT; + cm.listen_service_id = be64_to_cpu(IB_CM_ASSIGN_SERVICE_ID); + cm.remote_id_table = RB_ROOT; +diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c +index ee5222168b68..2afdd52f29d1 100644 +--- a/drivers/infiniband/core/uverbs_main.c ++++ b/drivers/infiniband/core/uverbs_main.c +@@ -237,12 +237,9 @@ static int ib_uverbs_cleanup_ucontext(struct ib_uverbs_file *file, + container_of(uobj, struct ib_uqp_object, uevent.uobject); + + idr_remove_uobj(&ib_uverbs_qp_idr, uobj); +- if (qp != qp->real_qp) { +- ib_close_qp(qp); +- } else { ++ if (qp == qp->real_qp) + ib_uverbs_detach_umcast(qp, uqp); +- ib_destroy_qp(qp); +- } ++ ib_destroy_qp(qp); + ib_uverbs_release_uevent(file, &uqp->uevent); + kfree(uqp); + } +diff --git a/drivers/infiniband/hw/mlx4/cq.c b/drivers/infiniband/hw/mlx4/cq.c +index d5e60f44ba5a..5b8a62c6bc8d 100644 +--- a/drivers/infiniband/hw/mlx4/cq.c ++++ b/drivers/infiniband/hw/mlx4/cq.c +@@ -239,11 +239,14 @@ struct ib_cq *mlx4_ib_create_cq(struct ib_device *ibdev, int entries, int vector + if (context) + if (ib_copy_to_udata(udata, &cq->mcq.cqn, sizeof (__u32))) { + err = -EFAULT; +- goto err_dbmap; ++ goto err_cq_free; + } + + return &cq->ibcq; + ++err_cq_free: ++ mlx4_cq_free(dev->dev, &cq->mcq); ++ + err_dbmap: + if (context) + mlx4_ib_db_unmap_user(to_mucontext(context), &cq->db); +diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c +index 706833ab7e7e..e5a6d839f1d1 100644 +--- a/drivers/infiniband/hw/mlx5/cq.c ++++ b/drivers/infiniband/hw/mlx5/cq.c +@@ -684,8 +684,7 @@ struct ib_cq *mlx5_ib_create_cq(struct ib_device *ibdev, int entries, + if (err) + goto err_create; + } else { +- /* for now choose 64 bytes till we have a proper interface */ +- cqe_size = 64; ++ cqe_size = cache_line_size() == 128 ? 
128 : 64; + err = create_cq_kernel(dev, cq, entries, cqe_size, &cqb, + &index, &inlen); + if (err) +diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c +index b1a6cb3a2809..1300a377aca8 100644 +--- a/drivers/infiniband/hw/mlx5/main.c ++++ b/drivers/infiniband/hw/mlx5/main.c +@@ -959,12 +959,13 @@ static void mlx5_ib_event(struct mlx5_core_dev *dev, enum mlx5_dev_event event, + { + struct mlx5_ib_dev *ibdev = container_of(dev, struct mlx5_ib_dev, mdev); + struct ib_event ibev; ++ bool fatal = false; + u8 port = 0; + + switch (event) { + case MLX5_DEV_EVENT_SYS_ERROR: +- ibdev->ib_active = false; + ibev.event = IB_EVENT_DEVICE_FATAL; ++ fatal = true; + break; + + case MLX5_DEV_EVENT_PORT_UP: +@@ -1012,6 +1013,9 @@ static void mlx5_ib_event(struct mlx5_core_dev *dev, enum mlx5_dev_event event, + + if (ibdev->ib_active) + ib_dispatch_event(&ibev); ++ ++ if (fatal) ++ ibdev->ib_active = false; + } + + static void get_ext_port_caps(struct mlx5_ib_dev *dev) +diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h +index d9ab5c5e8e82..ccb36fb565de 100644 +--- a/drivers/input/serio/i8042-x86ia64io.h ++++ b/drivers/input/serio/i8042-x86ia64io.h +@@ -776,6 +776,13 @@ static const struct dmi_system_id __initconst i8042_dmi_kbdreset_table[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "P34"), + }, + }, ++ { ++ /* Schenker XMG C504 - Elantech touchpad */ ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "XMG"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "C504"), ++ }, ++ }, + { } + }; + +diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c +index 73353a97aafb..71f9cd108590 100644 +--- a/drivers/iommu/amd_iommu.c ++++ b/drivers/iommu/amd_iommu.c +@@ -2032,6 +2032,9 @@ static void dma_ops_domain_free(struct dma_ops_domain *dom) + kfree(dom->aperture[i]); + } + ++ if (dom->domain.id) ++ domain_id_free(dom->domain.id); ++ + kfree(dom); + } + +diff --git a/drivers/media/usb/dvb-usb/dib0700_core.c b/drivers/media/usb/dvb-usb/dib0700_core.c +index bf2a908d74cf..452ef7bc630c 100644 +--- a/drivers/media/usb/dvb-usb/dib0700_core.c ++++ b/drivers/media/usb/dvb-usb/dib0700_core.c +@@ -674,7 +674,7 @@ static void dib0700_rc_urb_completion(struct urb *purb) + { + struct dvb_usb_device *d = purb->context; + struct dib0700_rc_response *poll_reply; +- u32 uninitialized_var(keycode); ++ u32 keycode; + u8 toggle; + + deb_info("%s()\n", __func__); +@@ -713,7 +713,8 @@ static void dib0700_rc_urb_completion(struct urb *purb) + if ((poll_reply->system == 0x00) && (poll_reply->data == 0x00) + && (poll_reply->not_data == 0xff)) { + poll_reply->data_state = 2; +- break; ++ rc_repeat(d->rc_dev); ++ goto resubmit; + } + + if ((poll_reply->system ^ poll_reply->not_system) != 0xff) { +diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c +index f421586f29fb..a1f0f73245c5 100644 +--- a/drivers/mfd/mfd-core.c ++++ b/drivers/mfd/mfd-core.c +@@ -265,6 +265,8 @@ int mfd_clone_cell(const char *cell, const char **clones, size_t n_clones) + clones[i]); + } + ++ put_device(dev); ++ + return 0; + } + EXPORT_SYMBOL(mfd_clone_cell); +diff --git a/drivers/misc/mei/nfc.c b/drivers/misc/mei/nfc.c +index 4b7ea3fb143c..1f8f856946cd 100644 +--- a/drivers/misc/mei/nfc.c ++++ b/drivers/misc/mei/nfc.c +@@ -292,7 +292,7 @@ static int mei_nfc_if_version(struct mei_nfc_dev *ndev) + return -ENOMEM; + + bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length); +- if (bytes_recv < 0 || bytes_recv < sizeof(struct mei_nfc_reply)) { ++ if (bytes_recv < if_version_length) { + 
dev_err(&dev->pdev->dev, "Could not read IF version\n"); + ret = -EIO; + goto err; +diff --git a/drivers/mmc/host/mxs-mmc.c b/drivers/mmc/host/mxs-mmc.c +index e1fa3ef735e0..f8aac3044670 100644 +--- a/drivers/mmc/host/mxs-mmc.c ++++ b/drivers/mmc/host/mxs-mmc.c +@@ -675,13 +675,13 @@ static int mxs_mmc_probe(struct platform_device *pdev) + + platform_set_drvdata(pdev, mmc); + ++ spin_lock_init(&host->lock); ++ + ret = devm_request_irq(&pdev->dev, irq_err, mxs_mmc_irq_handler, 0, + DRIVER_NAME, host); + if (ret) + goto out_free_dma; + +- spin_lock_init(&host->lock); +- + ret = mmc_add_host(mmc); + if (ret) + goto out_free_dma; +diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c +index 85cd77c9cd12..87bf356d274a 100644 +--- a/drivers/mtd/ubi/fastmap.c ++++ b/drivers/mtd/ubi/fastmap.c +@@ -438,10 +438,11 @@ static int scan_pool(struct ubi_device *ubi, struct ubi_attach_info *ai, + unsigned long long ec = be64_to_cpu(ech->ec); + unmap_peb(ai, pnum); + dbg_bld("Adding PEB to free: %i", pnum); ++ + if (err == UBI_IO_FF_BITFLIPS) +- add_aeb(ai, free, pnum, ec, 1); +- else +- add_aeb(ai, free, pnum, ec, 0); ++ scrub = 1; ++ ++ add_aeb(ai, free, pnum, ec, scrub); + continue; + } else if (err == 0 || err == UBI_IO_BITFLIPS) { + dbg_bld("Found non empty PEB:%i in pool", pnum); +diff --git a/drivers/net/ethernet/smsc/smc91x.c b/drivers/net/ethernet/smsc/smc91x.c +index 73be7f3982e6..af9e7d775348 100644 +--- a/drivers/net/ethernet/smsc/smc91x.c ++++ b/drivers/net/ethernet/smsc/smc91x.c +@@ -533,7 +533,7 @@ static inline void smc_rcv(struct net_device *dev) + #define smc_special_lock(lock, flags) spin_lock_irqsave(lock, flags) + #define smc_special_unlock(lock, flags) spin_unlock_irqrestore(lock, flags) + #else +-#define smc_special_trylock(lock, flags) (flags == flags) ++#define smc_special_trylock(lock, flags) ((void)flags, true) + #define smc_special_lock(lock, flags) do { flags = 0; } while (0) + #define smc_special_unlock(lock, flags) do { flags = 0; } while (0) + #endif +diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c +index 98ce4feb9a79..576c3236fa40 100644 +--- a/drivers/net/macvtap.c ++++ b/drivers/net/macvtap.c +@@ -67,7 +67,7 @@ static struct cdev macvtap_cdev; + static const struct proto_ops macvtap_socket_ops; + + #define TUN_OFFLOADS (NETIF_F_HW_CSUM | NETIF_F_TSO_ECN | NETIF_F_TSO | \ +- NETIF_F_TSO6 | NETIF_F_UFO) ++ NETIF_F_TSO6) + #define RX_OFFLOADS (NETIF_F_GRO | NETIF_F_LRO) + #define TAP_FEATURES (NETIF_F_GSO | NETIF_F_SG | NETIF_F_FRAGLIST) + +@@ -566,6 +566,8 @@ static int macvtap_skb_from_vnet_hdr(struct sk_buff *skb, + gso_type = SKB_GSO_TCPV6; + break; + case VIRTIO_NET_HDR_GSO_UDP: ++ pr_warn_once("macvtap: %s: using disabled UFO feature; please fix this program\n", ++ current->comm); + gso_type = SKB_GSO_UDP; + if (skb->protocol == htons(ETH_P_IPV6)) + ipv6_proxy_select_ident(skb); +@@ -613,8 +615,6 @@ static int macvtap_skb_to_vnet_hdr(const struct sk_buff *skb, + vnet_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV4; + else if (sinfo->gso_type & SKB_GSO_TCPV6) + vnet_hdr->gso_type = VIRTIO_NET_HDR_GSO_TCPV6; +- else if (sinfo->gso_type & SKB_GSO_UDP) +- vnet_hdr->gso_type = VIRTIO_NET_HDR_GSO_UDP; + else + BUG(); + if (sinfo->gso_type & SKB_GSO_TCP_ECN) +@@ -962,9 +962,6 @@ static int set_offload(struct macvtap_queue *q, unsigned long arg) + if (arg & TUN_F_TSO6) + feature_mask |= NETIF_F_TSO6; + } +- +- if (arg & TUN_F_UFO) +- feature_mask |= NETIF_F_UFO; + } + + /* tun/tap driver inverts the usage for TSO offloads, where +@@ -975,7 +972,7 @@ static int 
set_offload(struct macvtap_queue *q, unsigned long arg) + * When user space turns off TSO, we turn off GSO/LRO so that + * user-space will not receive TSO frames. + */ +- if (feature_mask & (NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_UFO)) ++ if (feature_mask & (NETIF_F_TSO | NETIF_F_TSO6)) + features |= RX_OFFLOADS; + else + features &= ~RX_OFFLOADS; +@@ -1076,7 +1073,7 @@ static long macvtap_ioctl(struct file *file, unsigned int cmd, + case TUNSETOFFLOAD: + /* let the user check for future flags */ + if (arg & ~(TUN_F_CSUM | TUN_F_TSO4 | TUN_F_TSO6 | +- TUN_F_TSO_ECN | TUN_F_UFO)) ++ TUN_F_TSO_ECN)) + return -EINVAL; + + rtnl_lock(); +diff --git a/drivers/net/tun.c b/drivers/net/tun.c +index 813750d09680..46f9cb21ec56 100644 +--- a/drivers/net/tun.c ++++ b/drivers/net/tun.c +@@ -173,7 +173,7 @@ struct tun_struct { + struct net_device *dev; + netdev_features_t set_features; + #define TUN_USER_FEATURES (NETIF_F_HW_CSUM|NETIF_F_TSO_ECN|NETIF_F_TSO| \ +- NETIF_F_TSO6|NETIF_F_UFO) ++ NETIF_F_TSO6) + + int vnet_hdr_sz; + int sndbuf; +@@ -1113,10 +1113,20 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile, + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6; + break; + case VIRTIO_NET_HDR_GSO_UDP: ++ { ++ static bool warned; ++ ++ if (!warned) { ++ warned = true; ++ netdev_warn(tun->dev, ++ "%s: using disabled UFO feature; please fix this program\n", ++ current->comm); ++ } + skb_shinfo(skb)->gso_type = SKB_GSO_UDP; + if (skb->protocol == htons(ETH_P_IPV6)) + ipv6_proxy_select_ident(skb); + break; ++ } + default: + tun->dev->stats.rx_frame_errors++; + kfree_skb(skb); +@@ -1220,8 +1230,6 @@ static ssize_t tun_put_user(struct tun_struct *tun, + gso.gso_type = VIRTIO_NET_HDR_GSO_TCPV4; + else if (sinfo->gso_type & SKB_GSO_TCPV6) + gso.gso_type = VIRTIO_NET_HDR_GSO_TCPV6; +- else if (sinfo->gso_type & SKB_GSO_UDP) +- gso.gso_type = VIRTIO_NET_HDR_GSO_UDP; + else { + pr_err("unexpected GSO type: " + "0x%x, gso_size %d, hdr_len %d\n", +@@ -1750,11 +1758,6 @@ static int set_offload(struct tun_struct *tun, unsigned long arg) + features |= NETIF_F_TSO6; + arg &= ~(TUN_F_TSO4|TUN_F_TSO6); + } +- +- if (arg & TUN_F_UFO) { +- features |= NETIF_F_UFO; +- arg &= ~TUN_F_UFO; +- } + } + + /* This gives the user a way to test for new features in future by +diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c +index 5d080516d0c5..421642af8d06 100644 +--- a/drivers/net/virtio_net.c ++++ b/drivers/net/virtio_net.c +@@ -438,8 +438,17 @@ static void receive_buf(struct receive_queue *rq, void *buf, unsigned int len) + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV4; + break; + case VIRTIO_NET_HDR_GSO_UDP: ++ { ++ static bool warned; ++ ++ if (!warned) { ++ warned = true; ++ netdev_warn(dev, ++ "host using disabled UFO feature; please fix it\n"); ++ } + skb_shinfo(skb)->gso_type = SKB_GSO_UDP; + break; ++ } + case VIRTIO_NET_HDR_GSO_TCPV6: + skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6; + break; +@@ -754,8 +763,6 @@ static int xmit_skb(struct send_queue *sq, struct sk_buff *skb) + hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV4; + else if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) + hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_TCPV6; +- else if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP) +- hdr->hdr.gso_type = VIRTIO_NET_HDR_GSO_UDP; + else + BUG(); + if (skb_shinfo(skb)->gso_type & SKB_GSO_TCP_ECN) +@@ -1572,7 +1579,7 @@ static int virtnet_probe(struct virtio_device *vdev) + dev->features |= NETIF_F_HW_CSUM | NETIF_F_SG; + + if (virtio_has_feature(vdev, VIRTIO_NET_F_GSO)) { +- dev->hw_features |= NETIF_F_TSO | 
NETIF_F_UFO ++ dev->hw_features |= NETIF_F_TSO + | NETIF_F_TSO_ECN | NETIF_F_TSO6; + } + /* Individual feature bits: what can host handle? */ +@@ -1582,11 +1589,9 @@ static int virtnet_probe(struct virtio_device *vdev) + dev->hw_features |= NETIF_F_TSO6; + if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_ECN)) + dev->hw_features |= NETIF_F_TSO_ECN; +- if (virtio_has_feature(vdev, VIRTIO_NET_F_HOST_UFO)) +- dev->hw_features |= NETIF_F_UFO; + + if (gso) +- dev->features |= dev->hw_features & (NETIF_F_ALL_TSO|NETIF_F_UFO); ++ dev->features |= dev->hw_features & NETIF_F_ALL_TSO; + /* (!csum && gso) case will be fixed by register_netdev() */ + } + if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_CSUM)) +@@ -1621,8 +1626,7 @@ static int virtnet_probe(struct virtio_device *vdev) + /* If we can receive ANY GSO packets, we must allocate large ones. */ + if (virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO4) || + virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_TSO6) || +- virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN) || +- virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_UFO)) ++ virtio_has_feature(vdev, VIRTIO_NET_F_GUEST_ECN)) + vi->big_packets = true; + + if (virtio_has_feature(vdev, VIRTIO_NET_F_MRG_RXBUF)) +@@ -1808,9 +1812,9 @@ static struct virtio_device_id id_table[] = { + static unsigned int features[] = { + VIRTIO_NET_F_CSUM, VIRTIO_NET_F_GUEST_CSUM, + VIRTIO_NET_F_GSO, VIRTIO_NET_F_MAC, +- VIRTIO_NET_F_HOST_TSO4, VIRTIO_NET_F_HOST_UFO, VIRTIO_NET_F_HOST_TSO6, ++ VIRTIO_NET_F_HOST_TSO4, VIRTIO_NET_F_HOST_TSO6, + VIRTIO_NET_F_HOST_ECN, VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6, +- VIRTIO_NET_F_GUEST_ECN, VIRTIO_NET_F_GUEST_UFO, ++ VIRTIO_NET_F_GUEST_ECN, + VIRTIO_NET_F_MRG_RXBUF, VIRTIO_NET_F_STATUS, VIRTIO_NET_F_CTRL_VQ, + VIRTIO_NET_F_CTRL_RX, VIRTIO_NET_F_CTRL_VLAN, + VIRTIO_NET_F_GUEST_ANNOUNCE, VIRTIO_NET_F_MQ, +diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c +index 019dbc1fae11..cb245bd510a2 100644 +--- a/drivers/pci/quirks.c ++++ b/drivers/pci/quirks.c +@@ -339,19 +339,52 @@ static void quirk_s3_64M(struct pci_dev *dev) + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_868, quirk_s3_64M); + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_S3, PCI_DEVICE_ID_S3_968, quirk_s3_64M); + ++static void quirk_io(struct pci_dev *dev, int pos, unsigned size, ++ const char *name) ++{ ++ u32 region; ++ struct pci_bus_region bus_region; ++ struct resource *res = dev->resource + pos; ++ ++ pci_read_config_dword(dev, PCI_BASE_ADDRESS_0 + (pos << 2), ®ion); ++ ++ if (!region) ++ return; ++ ++ res->name = pci_name(dev); ++ res->flags = region & ~PCI_BASE_ADDRESS_IO_MASK; ++ res->flags |= ++ (IORESOURCE_IO | IORESOURCE_PCI_FIXED | IORESOURCE_SIZEALIGN); ++ region &= ~(size - 1); ++ ++ /* Convert from PCI bus to resource space */ ++ bus_region.start = region; ++ bus_region.end = region + size - 1; ++ pcibios_bus_to_resource(dev, res, &bus_region); ++ ++ dev_info(&dev->dev, FW_BUG "%s quirk: reg 0x%x: %pR\n", ++ name, PCI_BASE_ADDRESS_0 + (pos << 2), res); ++} ++ + /* + * Some CS5536 BIOSes (for example, the Soekris NET5501 board w/ comBIOS + * ver. 1.33 20070103) don't set the correct ISA PCI region header info. + * BAR0 should be 8 bytes; instead, it may be set to something like 8k + * (which conflicts w/ BAR1's memory range). ++ * ++ * CS553x's ISA PCI BARs may also be read-only (ref: ++ * https://bugzilla.kernel.org/show_bug.cgi?id=85991 - Comment #4 forward). 
+ */ + static void quirk_cs5536_vsa(struct pci_dev *dev) + { ++ static char *name = "CS5536 ISA bridge"; ++ + if (pci_resource_len(dev, 0) != 8) { +- struct resource *res = &dev->resource[0]; +- res->end = res->start + 8 - 1; +- dev_info(&dev->dev, "CS5536 ISA bridge bug detected " +- "(incorrect header); workaround applied.\n"); ++ quirk_io(dev, 0, 8, name); /* SMB */ ++ quirk_io(dev, 1, 256, name); /* GPIO */ ++ quirk_io(dev, 2, 64, name); /* MFGPT */ ++ dev_info(&dev->dev, "%s bug detected (incorrect header); workaround applied\n", ++ name); + } + } + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AMD, PCI_DEVICE_ID_AMD_CS5536_ISA, quirk_cs5536_vsa); +diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c +index 2ca95042a0b9..c244e7dc6d66 100644 +--- a/drivers/pwm/core.c ++++ b/drivers/pwm/core.c +@@ -293,6 +293,8 @@ int pwmchip_remove(struct pwm_chip *chip) + unsigned int i; + int ret = 0; + ++ pwmchip_sysfs_unexport_children(chip); ++ + mutex_lock(&pwm_lock); + + for (i = 0; i < chip->npwm; i++) { +diff --git a/drivers/pwm/sysfs.c b/drivers/pwm/sysfs.c +index 8c20332d4825..809b5ab9074c 100644 +--- a/drivers/pwm/sysfs.c ++++ b/drivers/pwm/sysfs.c +@@ -348,6 +348,24 @@ void pwmchip_sysfs_unexport(struct pwm_chip *chip) + } + } + ++void pwmchip_sysfs_unexport_children(struct pwm_chip *chip) ++{ ++ struct device *parent; ++ unsigned int i; ++ ++ parent = class_find_device(&pwm_class, NULL, chip, ++ pwmchip_sysfs_match); ++ if (!parent) ++ return; ++ ++ for (i = 0; i < chip->npwm; i++) { ++ struct pwm_device *pwm = &chip->pwms[i]; ++ ++ if (test_bit(PWMF_EXPORTED, &pwm->flags)) ++ pwm_unexport_child(parent, pwm); ++ } ++} ++ + static int __init pwm_sysfs_init(void) + { + return class_register(&pwm_class); +diff --git a/drivers/scsi/arcmsr/arcmsr_hba.c b/drivers/scsi/arcmsr/arcmsr_hba.c +index 66dda86e62e1..8d9477cc3227 100644 +--- a/drivers/scsi/arcmsr/arcmsr_hba.c ++++ b/drivers/scsi/arcmsr/arcmsr_hba.c +@@ -2069,18 +2069,9 @@ static int arcmsr_queue_command_lck(struct scsi_cmnd *cmd, + struct AdapterControlBlock *acb = (struct AdapterControlBlock *) host->hostdata; + struct CommandControlBlock *ccb; + int target = cmd->device->id; +- int lun = cmd->device->lun; +- uint8_t scsicmd = cmd->cmnd[0]; + cmd->scsi_done = done; + cmd->host_scribble = NULL; + cmd->result = 0; +- if ((scsicmd == SYNCHRONIZE_CACHE) ||(scsicmd == SEND_DIAGNOSTIC)){ +- if(acb->devstate[target][lun] == ARECA_RAID_GONE) { +- cmd->result = (DID_NO_CONNECT << 16); +- } +- cmd->scsi_done(cmd); +- return 0; +- } + if (target == 16) { + /* virtual device for iop message transfer */ + arcmsr_handle_virtual_command(acb, cmd); +diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h +index deb1ed816c49..50e8d5912776 100644 +--- a/drivers/scsi/megaraid/megaraid_sas.h ++++ b/drivers/scsi/megaraid/megaraid_sas.h +@@ -1637,7 +1637,7 @@ struct megasas_instance_template { + }; + + #define MEGASAS_IS_LOGICAL(scp) \ +- (scp->device->channel < MEGASAS_MAX_PD_CHANNELS) ? 0 : 1 ++ ((scp->device->channel < MEGASAS_MAX_PD_CHANNELS) ? 
0 : 1) + + #define MEGASAS_DEV_INDEX(inst, scp) \ + ((scp->device->channel % 2) * MEGASAS_MAX_DEV_PER_CHANNEL) + \ +diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c +index 8c3270c809c8..11eafc3f4ca0 100644 +--- a/drivers/scsi/megaraid/megaraid_sas_base.c ++++ b/drivers/scsi/megaraid/megaraid_sas_base.c +@@ -1537,16 +1537,13 @@ megasas_queue_command_lck(struct scsi_cmnd *scmd, void (*done) (struct scsi_cmnd + goto out_done; + } + +- switch (scmd->cmnd[0]) { +- case SYNCHRONIZE_CACHE: +- /* +- * FW takes care of flush cache on its own +- * No need to send it down +- */ ++ /* ++ * FW takes care of flush cache on its own for Virtual Disk. ++ * No need to send it down for VD. For JBOD send SYNCHRONIZE_CACHE to FW. ++ */ ++ if ((scmd->cmnd[0] == SYNCHRONIZE_CACHE) && MEGASAS_IS_LOGICAL(scmd)) { + scmd->result = DID_OK << 16; + goto out_done; +- default: +- break; + } + + if (instance->instancet->build_and_issue_cmd(instance, scmd)) { +diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c +index 01c0ffa31276..39f2d7d138cf 100644 +--- a/drivers/scsi/scsi_debug.c ++++ b/drivers/scsi/scsi_debug.c +@@ -3502,6 +3502,7 @@ static void __exit scsi_debug_exit(void) + bus_unregister(&pseudo_lld_bus); + root_device_unregister(pseudo_primary); + ++ vfree(map_storep); + if (dif_storep) + vfree(dif_storep); + +diff --git a/drivers/staging/android/binder.c b/drivers/staging/android/binder.c +index 69fd236345cb..a29a383d160d 100644 +--- a/drivers/staging/android/binder.c ++++ b/drivers/staging/android/binder.c +@@ -994,7 +994,7 @@ static int binder_dec_node(struct binder_node *node, int strong, int internal) + + + static struct binder_ref *binder_get_ref(struct binder_proc *proc, +- uint32_t desc) ++ u32 desc, bool need_strong_ref) + { + struct rb_node *n = proc->refs_by_desc.rb_node; + struct binder_ref *ref; +@@ -1002,12 +1002,16 @@ static struct binder_ref *binder_get_ref(struct binder_proc *proc, + while (n) { + ref = rb_entry(n, struct binder_ref, rb_node_desc); + +- if (desc < ref->desc) ++ if (desc < ref->desc) { + n = n->rb_left; +- else if (desc > ref->desc) ++ } else if (desc > ref->desc) { + n = n->rb_right; +- else ++ } else if (need_strong_ref && !ref->strong) { ++ binder_user_error("tried to use weak ref as strong ref\n"); ++ return NULL; ++ } else { + return ref; ++ } + } + return NULL; + } +@@ -1270,7 +1274,10 @@ static void binder_transaction_buffer_release(struct binder_proc *proc, + } break; + case BINDER_TYPE_HANDLE: + case BINDER_TYPE_WEAK_HANDLE: { +- struct binder_ref *ref = binder_get_ref(proc, fp->handle); ++ struct binder_ref *ref; ++ ++ ref = binder_get_ref(proc, fp->handle, ++ fp->type == BINDER_TYPE_HANDLE); + if (ref == NULL) { + pr_err("transaction release %d bad handle %d\n", + debug_id, fp->handle); +@@ -1362,7 +1369,7 @@ static void binder_transaction(struct binder_proc *proc, + } else { + if (tr->target.handle) { + struct binder_ref *ref; +- ref = binder_get_ref(proc, tr->target.handle); ++ ref = binder_get_ref(proc, tr->target.handle, true); + if (ref == NULL) { + binder_user_error("%d:%d got transaction to invalid handle\n", + proc->pid, thread->pid); +@@ -1534,7 +1541,9 @@ static void binder_transaction(struct binder_proc *proc, + fp->type = BINDER_TYPE_HANDLE; + else + fp->type = BINDER_TYPE_WEAK_HANDLE; ++ fp->binder = NULL; + fp->handle = ref->desc; ++ fp->cookie = NULL; + binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE, + &thread->todo); + +@@ -1546,7 +1555,10 @@ static void binder_transaction(struct 
binder_proc *proc, + } break; + case BINDER_TYPE_HANDLE: + case BINDER_TYPE_WEAK_HANDLE: { +- struct binder_ref *ref = binder_get_ref(proc, fp->handle); ++ struct binder_ref *ref; ++ ++ ref = binder_get_ref(proc, fp->handle, ++ fp->type == BINDER_TYPE_HANDLE); + if (ref == NULL) { + binder_user_error("%d:%d got transaction with invalid handle, %d\n", + proc->pid, +@@ -1574,7 +1586,9 @@ static void binder_transaction(struct binder_proc *proc, + return_error = BR_FAILED_REPLY; + goto err_binder_get_ref_for_node_failed; + } ++ fp->binder = NULL; + fp->handle = new_ref->desc; ++ fp->cookie = NULL; + binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL); + trace_binder_transaction_ref_to_ref(t, ref, + new_ref); +@@ -1621,6 +1635,7 @@ static void binder_transaction(struct binder_proc *proc, + binder_debug(BINDER_DEBUG_TRANSACTION, + " fd %d -> %d\n", fp->handle, target_fd); + /* TODO: fput? */ ++ fp->binder = NULL; + fp->handle = target_fd; + } break; + +@@ -1739,7 +1754,9 @@ int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread, + ref->desc); + } + } else +- ref = binder_get_ref(proc, target); ++ ref = binder_get_ref(proc, target, ++ cmd == BC_ACQUIRE || ++ cmd == BC_RELEASE); + if (ref == NULL) { + binder_user_error("%d:%d refcount change on invalid ref %d\n", + proc->pid, thread->pid, target); +@@ -1934,7 +1951,7 @@ int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread, + if (get_user(cookie, (void __user * __user *)ptr)) + return -EFAULT; + ptr += sizeof(void *); +- ref = binder_get_ref(proc, target); ++ ref = binder_get_ref(proc, target, false); + if (ref == NULL) { + binder_user_error("%d:%d %s invalid ref %d\n", + proc->pid, thread->pid, +diff --git a/drivers/staging/iio/impedance-analyzer/ad5933.c b/drivers/staging/iio/impedance-analyzer/ad5933.c +index bc23d66a7a1e..1ff17352abde 100644 +--- a/drivers/staging/iio/impedance-analyzer/ad5933.c ++++ b/drivers/staging/iio/impedance-analyzer/ad5933.c +@@ -646,6 +646,7 @@ static void ad5933_work(struct work_struct *work) + struct iio_dev *indio_dev = i2c_get_clientdata(st->client); + signed short buf[2]; + unsigned char status; ++ int ret; + + mutex_lock(&indio_dev->mlock); + if (st->state == AD5933_CTRL_INIT_START_FREQ) { +@@ -653,19 +654,22 @@ static void ad5933_work(struct work_struct *work) + ad5933_cmd(st, AD5933_CTRL_START_SWEEP); + st->state = AD5933_CTRL_START_SWEEP; + schedule_delayed_work(&st->work, st->poll_time_jiffies); +- mutex_unlock(&indio_dev->mlock); +- return; ++ goto out; + } + +- ad5933_i2c_read(st->client, AD5933_REG_STATUS, 1, &status); ++ ret = ad5933_i2c_read(st->client, AD5933_REG_STATUS, 1, &status); ++ if (ret) ++ goto out; + + if (status & AD5933_STAT_DATA_VALID) { + int scan_count = bitmap_weight(indio_dev->active_scan_mask, + indio_dev->masklength); +- ad5933_i2c_read(st->client, ++ ret = ad5933_i2c_read(st->client, + test_bit(1, indio_dev->active_scan_mask) ? 
+ AD5933_REG_REAL_DATA : AD5933_REG_IMAG_DATA, + scan_count * 2, (u8 *)buf); ++ if (ret) ++ goto out; + + if (scan_count == 2) { + buf[0] = be16_to_cpu(buf[0]); +@@ -677,8 +681,7 @@ static void ad5933_work(struct work_struct *work) + } else { + /* no data available - try again later */ + schedule_delayed_work(&st->work, st->poll_time_jiffies); +- mutex_unlock(&indio_dev->mlock); +- return; ++ goto out; + } + + if (status & AD5933_STAT_SWEEP_DONE) { +@@ -690,7 +693,7 @@ static void ad5933_work(struct work_struct *work) + ad5933_cmd(st, AD5933_CTRL_INC_FREQ); + schedule_delayed_work(&st->work, st->poll_time_jiffies); + } +- ++out: + mutex_unlock(&indio_dev->mlock); + } + +diff --git a/drivers/staging/nvec/nvec_ps2.c b/drivers/staging/nvec/nvec_ps2.c +index 06dbb02085a9..90e7d841825b 100644 +--- a/drivers/staging/nvec/nvec_ps2.c ++++ b/drivers/staging/nvec/nvec_ps2.c +@@ -104,13 +104,12 @@ static int nvec_mouse_probe(struct platform_device *pdev) + { + struct nvec_chip *nvec = dev_get_drvdata(pdev->dev.parent); + struct serio *ser_dev; +- char mouse_reset[] = { NVEC_PS2, SEND_COMMAND, PSMOUSE_RST, 3 }; + + ser_dev = kzalloc(sizeof(struct serio), GFP_KERNEL); + if (ser_dev == NULL) + return -ENOMEM; + +- ser_dev->id.type = SERIO_PS_PSTHRU; ++ ser_dev->id.type = SERIO_8042; + ser_dev->write = ps2_sendcommand; + ser_dev->start = ps2_startstreaming; + ser_dev->stop = ps2_stopstreaming; +@@ -125,9 +124,6 @@ static int nvec_mouse_probe(struct platform_device *pdev) + + serio_register_port(ser_dev); + +- /* mouse reset */ +- nvec_write_async(nvec, mouse_reset, sizeof(mouse_reset)); +- + return 0; + } + +diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c +index 6458e11e8e9d..b6877aa58b0f 100644 +--- a/drivers/tty/tty_ldisc.c ++++ b/drivers/tty/tty_ldisc.c +@@ -415,6 +415,10 @@ EXPORT_SYMBOL_GPL(tty_ldisc_flush); + * they are not on hot paths so a little discipline won't do + * any harm. + * ++ * The line discipline-related tty_struct fields are reset to ++ * prevent the ldisc driver from re-using stale information for ++ * the new ldisc instance. 
++ * + * Locking: takes termios_rwsem + */ + +@@ -423,6 +427,9 @@ static void tty_set_termios_ldisc(struct tty_struct *tty, int num) + down_write(&tty->termios_rwsem); + tty->termios.c_line = num; + up_write(&tty->termios_rwsem); ++ ++ tty->disc_data = NULL; ++ tty->receive_room = 0; + } + + /** +diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c +index 19aba5091408..75c059c56a23 100644 +--- a/drivers/tty/vt/vt.c ++++ b/drivers/tty/vt/vt.c +@@ -863,10 +863,15 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc, + if (new_cols == vc->vc_cols && new_rows == vc->vc_rows) + return 0; + ++ if (new_screen_size > (4 << 20)) ++ return -EINVAL; + newscreen = kmalloc(new_screen_size, GFP_USER); + if (!newscreen) + return -ENOMEM; + ++ if (vc == sel_cons) ++ clear_selection(); ++ + old_rows = vc->vc_rows; + old_row_size = vc->vc_size_row; + +@@ -1164,7 +1169,7 @@ static void csi_J(struct vc_data *vc, int vpar) + break; + case 3: /* erase scroll-back buffer (and whole display) */ + scr_memsetw(vc->vc_screenbuf, vc->vc_video_erase_char, +- vc->vc_screenbuf_size >> 1); ++ vc->vc_screenbuf_size); + set_origin(vc); + if (CON_IS_VISIBLE(vc)) + update_screen(vc); +diff --git a/drivers/usb/gadget/u_ether.c b/drivers/usb/gadget/u_ether.c +index 2aae0d61bb19..0a974d448a56 100644 +--- a/drivers/usb/gadget/u_ether.c ++++ b/drivers/usb/gadget/u_ether.c +@@ -583,13 +583,6 @@ static netdev_tx_t eth_start_xmit(struct sk_buff *skb, + + req->length = length; + +- /* throttle high/super speed IRQ rate back slightly */ +- if (gadget_is_dualspeed(dev->gadget)) +- req->no_interrupt = (dev->gadget->speed == USB_SPEED_HIGH || +- dev->gadget->speed == USB_SPEED_SUPER) +- ? ((atomic_read(&dev->tx_qlen) % dev->qmult) != 0) +- : 0; +- + retval = usb_ep_queue(in, req, GFP_ATOMIC); + switch (retval) { + default: +diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c +index aedc7e479a23..1ee8c97ae6be 100644 +--- a/drivers/usb/host/xhci-pci.c ++++ b/drivers/usb/host/xhci-pci.c +@@ -37,6 +37,7 @@ + + #define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI 0x8c31 + #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31 ++#define PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_XHCI 0x9cb1 + #define PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI 0x22b5 + #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI 0xa12f + #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_LP_XHCI 0x9d2f +@@ -129,7 +130,8 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci) + xhci->quirks |= XHCI_SPURIOUS_REBOOT; + } + if (pdev->vendor == PCI_VENDOR_ID_INTEL && +- pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI) { ++ (pdev->device == PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI || ++ pdev->device == PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_XHCI)) { + xhci->quirks |= XHCI_SPURIOUS_REBOOT; + xhci->quirks |= XHCI_SPURIOUS_WAKEUP; + } +diff --git a/drivers/usb/musb/musb_cppi41.c b/drivers/usb/musb/musb_cppi41.c +index cce32e91fd9e..83bee312df8d 100644 +--- a/drivers/usb/musb/musb_cppi41.c ++++ b/drivers/usb/musb/musb_cppi41.c +@@ -234,6 +234,7 @@ static void cppi41_dma_callback(void *private_data) + cppi41_trans_done(cppi41_channel); + } else { + struct cppi41_dma_controller *controller; ++ int is_hs = 0; + /* + * On AM335x it has been observed that the TX interrupt fires + * too early that means the TXFIFO is not yet empty but the DMA +@@ -246,7 +247,14 @@ static void cppi41_dma_callback(void *private_data) + */ + controller = cppi41_channel->controller; + +- if (musb->g.speed == USB_SPEED_HIGH) { ++ if (is_host_active(musb)) { ++ if (musb->port1_status & 
USB_PORT_STAT_HIGH_SPEED) ++ is_hs = 1; ++ } else { ++ if (musb->g.speed == USB_SPEED_HIGH) ++ is_hs = 1; ++ } ++ if (is_hs) { + unsigned wait = 25; + + do { +diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c +index f5e4fda7f902..188e50446514 100644 +--- a/drivers/usb/serial/cp210x.c ++++ b/drivers/usb/serial/cp210x.c +@@ -919,7 +919,9 @@ static int cp210x_tiocmget(struct tty_struct *tty) + unsigned int control; + int result; + +- cp210x_get_config(port, CP210X_GET_MDMSTS, &control, 1); ++ result = cp210x_get_config(port, CP210X_GET_MDMSTS, &control, 1); ++ if (result) ++ return result; + + result = ((control & CONTROL_DTR) ? TIOCM_DTR : 0) + |((control & CONTROL_RTS) ? TIOCM_RTS : 0) +diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c +index e5545c5ced89..62ec56e379a0 100644 +--- a/drivers/usb/serial/ftdi_sio.c ++++ b/drivers/usb/serial/ftdi_sio.c +@@ -1000,7 +1000,8 @@ static struct usb_device_id id_table_combined [] = { + /* ekey Devices */ + { USB_DEVICE(FTDI_VID, FTDI_EKEY_CONV_USB_PID) }, + /* Infineon Devices */ +- { USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_PID, 1) }, ++ { USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_TC1798_PID, 1) }, ++ { USB_DEVICE_INTERFACE_NUMBER(INFINEON_VID, INFINEON_TRIBOARD_TC2X7_PID, 1) }, + /* GE Healthcare devices */ + { USB_DEVICE(GE_HEALTHCARE_VID, GE_HEALTHCARE_NEMO_TRACKER_PID) }, + /* Active Research (Actisense) devices */ +diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h +index 48db84f25cc9..db1a9b3a5f38 100644 +--- a/drivers/usb/serial/ftdi_sio_ids.h ++++ b/drivers/usb/serial/ftdi_sio_ids.h +@@ -626,8 +626,9 @@ + /* + * Infineon Technologies + */ +-#define INFINEON_VID 0x058b +-#define INFINEON_TRIBOARD_PID 0x0028 /* DAS JTAG TriBoard TC1798 V1.0 */ ++#define INFINEON_VID 0x058b ++#define INFINEON_TRIBOARD_TC1798_PID 0x0028 /* DAS JTAG TriBoard TC1798 V1.0 */ ++#define INFINEON_TRIBOARD_TC2X7_PID 0x0043 /* DAS JTAG TriBoard TC2X7 V1.0 */ + + /* + * Acton Research Corp. 
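
The ftdi_sio hunks above split the old catch-all INFINEON_TRIBOARD_PID into per-board PIDs and match each one only on interface 1, presumably because interface 0 is reserved for the JTAG function on these boards. A rough sketch of the ID-table pattern those hunks rely on follows; everything in it is hypothetical illustration, not part of the patch, and the VID/PID values are placeholders rather than real hardware IDs.

	/*
	 * Illustrative sketch only (not patch content): binding specific
	 * VID/PID pairs at one specific USB interface, mirroring the
	 * TriBoard entries above. All IDs below are placeholders.
	 */
	#include <linux/module.h>
	#include <linux/usb.h>

	#define EXAMPLE_VID		0x1234	/* hypothetical vendor ID */
	#define EXAMPLE_BOARD_A_PID	0x0001	/* hypothetical product ID */
	#define EXAMPLE_BOARD_B_PID	0x0002	/* hypothetical product ID */

	static const struct usb_device_id example_id_table[] = {
		/* bind only interface 1; interface 0 is left to other tooling */
		{ USB_DEVICE_INTERFACE_NUMBER(EXAMPLE_VID, EXAMPLE_BOARD_A_PID, 1) },
		{ USB_DEVICE_INTERFACE_NUMBER(EXAMPLE_VID, EXAMPLE_BOARD_B_PID, 1) },
		{ }	/* terminating entry */
	};
	MODULE_DEVICE_TABLE(usb, example_id_table);

Matching on the interface number rather than the whole device is what lets a serial driver claim one interface of a composite device while a different stack keeps the others.
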
+diff --git a/drivers/usb/serial/usb-serial.c b/drivers/usb/serial/usb-serial.c +index 137908af7c4c..4427705575c5 100644 +--- a/drivers/usb/serial/usb-serial.c ++++ b/drivers/usb/serial/usb-serial.c +@@ -1061,7 +1061,8 @@ static int usb_serial_probe(struct usb_interface *interface, + + serial->disconnected = 0; + +- usb_serial_console_init(serial->port[0]->minor); ++ if (num_ports > 0) ++ usb_serial_console_init(serial->port[0]->minor); + exit: + module_put(type->driver.owner); + return 0; +diff --git a/drivers/uwb/lc-rc.c b/drivers/uwb/lc-rc.c +index 3eca6ceb9844..4be2a5d1a9d2 100644 +--- a/drivers/uwb/lc-rc.c ++++ b/drivers/uwb/lc-rc.c +@@ -56,8 +56,11 @@ static struct uwb_rc *uwb_rc_find_by_index(int index) + struct uwb_rc *rc = NULL; + + dev = class_find_device(&uwb_rc_class, NULL, &index, uwb_rc_index_match); +- if (dev) ++ if (dev) { + rc = dev_get_drvdata(dev); ++ put_device(dev); ++ } ++ + return rc; + } + +@@ -368,7 +371,9 @@ struct uwb_rc *__uwb_rc_try_get(struct uwb_rc *target_rc) + if (dev) { + rc = dev_get_drvdata(dev); + __uwb_rc_get(rc); ++ put_device(dev); + } ++ + return rc; + } + EXPORT_SYMBOL_GPL(__uwb_rc_try_get); +@@ -421,8 +426,11 @@ struct uwb_rc *uwb_rc_get_by_grandpa(const struct device *grandpa_dev) + + dev = class_find_device(&uwb_rc_class, NULL, grandpa_dev, + find_rc_grandpa); +- if (dev) ++ if (dev) { + rc = dev_get_drvdata(dev); ++ put_device(dev); ++ } ++ + return rc; + } + EXPORT_SYMBOL_GPL(uwb_rc_get_by_grandpa); +@@ -454,8 +462,10 @@ struct uwb_rc *uwb_rc_get_by_dev(const struct uwb_dev_addr *addr) + struct uwb_rc *rc = NULL; + + dev = class_find_device(&uwb_rc_class, NULL, addr, find_rc_dev); +- if (dev) ++ if (dev) { + rc = dev_get_drvdata(dev); ++ put_device(dev); ++ } + + return rc; + } +diff --git a/drivers/uwb/pal.c b/drivers/uwb/pal.c +index c1304b8d4985..678e93741ae1 100644 +--- a/drivers/uwb/pal.c ++++ b/drivers/uwb/pal.c +@@ -97,6 +97,8 @@ static bool uwb_rc_class_device_exists(struct uwb_rc *target_rc) + + dev = class_find_device(&uwb_rc_class, NULL, target_rc, find_rc); + ++ put_device(dev); ++ + return (dev != NULL); + } + +diff --git a/drivers/xen/xen-pciback/conf_space.c b/drivers/xen/xen-pciback/conf_space.c +index ba3fac8318bb..47a4177b16d2 100644 +--- a/drivers/xen/xen-pciback/conf_space.c ++++ b/drivers/xen/xen-pciback/conf_space.c +@@ -16,8 +16,8 @@ + #include "conf_space.h" + #include "conf_space_quirks.h" + +-bool permissive; +-module_param(permissive, bool, 0644); ++bool xen_pcibk_permissive; ++module_param_named(permissive, xen_pcibk_permissive, bool, 0644); + + /* This is where xen_pcibk_read_config_byte, xen_pcibk_read_config_word, + * xen_pcibk_write_config_word, and xen_pcibk_write_config_byte are created. */ +@@ -260,7 +260,7 @@ int xen_pcibk_config_write(struct pci_dev *dev, int offset, int size, u32 value) + * This means that some fields may still be read-only because + * they have entries in the config_field list that intercept + * the write and do nothing. 
*/ +- if (dev_data->permissive || permissive) { ++ if (dev_data->permissive || xen_pcibk_permissive) { + switch (size) { + case 1: + err = pci_write_config_byte(dev, offset, +diff --git a/drivers/xen/xen-pciback/conf_space.h b/drivers/xen/xen-pciback/conf_space.h +index 2e1d73d1d5d0..62461a8ba1d6 100644 +--- a/drivers/xen/xen-pciback/conf_space.h ++++ b/drivers/xen/xen-pciback/conf_space.h +@@ -64,7 +64,7 @@ struct config_field_entry { + void *data; + }; + +-extern bool permissive; ++extern bool xen_pcibk_permissive; + + #define OFFSET(cfg_entry) ((cfg_entry)->base_offset+(cfg_entry)->field->offset) + +diff --git a/drivers/xen/xen-pciback/conf_space_header.c b/drivers/xen/xen-pciback/conf_space_header.c +index 2d7369391472..f8baf463dd35 100644 +--- a/drivers/xen/xen-pciback/conf_space_header.c ++++ b/drivers/xen/xen-pciback/conf_space_header.c +@@ -105,7 +105,7 @@ static int command_write(struct pci_dev *dev, int offset, u16 value, void *data) + + cmd->val = value; + +- if (!permissive && (!dev_data || !dev_data->permissive)) ++ if (!xen_pcibk_permissive && (!dev_data || !dev_data->permissive)) + return 0; + + /* Only allow the guest to control certain bits. */ +diff --git a/fs/coredump.c b/fs/coredump.c +index 86753db01f2d..29950247a29a 100644 +--- a/fs/coredump.c ++++ b/fs/coredump.c +@@ -1,6 +1,7 @@ + #include <linux/slab.h> + #include <linux/file.h> + #include <linux/fdtable.h> ++#include <linux/freezer.h> + #include <linux/mm.h> + #include <linux/stat.h> + #include <linux/fcntl.h> +@@ -386,7 +387,9 @@ static int coredump_wait(int exit_code, struct core_state *core_state) + if (core_waiters > 0) { + struct core_thread *ptr; + ++ freezer_do_not_count(); + wait_for_completion(&core_state->startup); ++ freezer_count(); + /* + * Wait for all the threads to become inactive, so that + * all the thread context (extended register state, like +diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h +index 66d2dc9ef561..7e80c4dd4735 100644 +--- a/fs/ext4/ext4.h ++++ b/fs/ext4/ext4.h +@@ -233,6 +233,7 @@ struct ext4_io_submit { + #define EXT4_MAX_BLOCK_SIZE 65536 + #define EXT4_MIN_BLOCK_LOG_SIZE 10 + #define EXT4_MAX_BLOCK_LOG_SIZE 16 ++#define EXT4_MAX_CLUSTER_LOG_SIZE 30 + #ifdef __KERNEL__ + # define EXT4_BLOCK_SIZE(s) ((s)->s_blocksize) + #else +diff --git a/fs/ext4/super.c b/fs/ext4/super.c +index 584d22c58329..483bc328643d 100644 +--- a/fs/ext4/super.c ++++ b/fs/ext4/super.c +@@ -3612,7 +3612,15 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) + if (blocksize < EXT4_MIN_BLOCK_SIZE || + blocksize > EXT4_MAX_BLOCK_SIZE) { + ext4_msg(sb, KERN_ERR, +- "Unsupported filesystem blocksize %d", blocksize); ++ "Unsupported filesystem blocksize %d (%d log_block_size)", ++ blocksize, le32_to_cpu(es->s_log_block_size)); ++ goto failed_mount; ++ } ++ if (le32_to_cpu(es->s_log_block_size) > ++ (EXT4_MAX_BLOCK_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) { ++ ext4_msg(sb, KERN_ERR, ++ "Invalid log block size: %u", ++ le32_to_cpu(es->s_log_block_size)); + goto failed_mount; + } + +@@ -3727,6 +3735,13 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) + "block size (%d)", clustersize, blocksize); + goto failed_mount; + } ++ if (le32_to_cpu(es->s_log_cluster_size) > ++ (EXT4_MAX_CLUSTER_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) { ++ ext4_msg(sb, KERN_ERR, ++ "Invalid log cluster size: %u", ++ le32_to_cpu(es->s_log_cluster_size)); ++ goto failed_mount; ++ } + sbi->s_cluster_bits = le32_to_cpu(es->s_log_cluster_size) - + le32_to_cpu(es->s_log_block_size); + 
sbi->s_clusters_per_group = +diff --git a/fs/ubifs/dir.c b/fs/ubifs/dir.c +index 6b4947f75af7..a751d1aa0e6a 100644 +--- a/fs/ubifs/dir.c ++++ b/fs/ubifs/dir.c +@@ -348,7 +348,7 @@ static unsigned int vfs_dent_type(uint8_t type) + */ + static int ubifs_readdir(struct file *file, struct dir_context *ctx) + { +- int err; ++ int err = 0; + struct qstr nm; + union ubifs_key key; + struct ubifs_dent_node *dent; +@@ -447,16 +447,23 @@ static int ubifs_readdir(struct file *file, struct dir_context *ctx) + } + + out: +- if (err != -ENOENT) { +- ubifs_err("cannot find next direntry, error %d", err); +- return err; +- } +- + kfree(file->private_data); + file->private_data = NULL; ++ ++ if (err != -ENOENT) ++ ubifs_err("cannot find next direntry, error %d", err); ++ else ++ /* ++ * -ENOENT is a non-fatal error in this context, the TNC uses ++ * it to indicate that the cursor moved past the current directory ++ * and readdir() has to stop. ++ */ ++ err = 0; ++ ++ + /* 2 is a special value indicating that there are no more direntries */ + ctx->pos = 2; +- return 0; ++ return err; + } + + /* Free saved readdir() state when the directory is closed */ +diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c +index 895db7a88412..65d600f0d200 100644 +--- a/fs/xfs/xfs_dquot.c ++++ b/fs/xfs/xfs_dquot.c +@@ -312,8 +312,7 @@ xfs_dquot_buf_verify_crc( + if (mp->m_quotainfo) + ndquots = mp->m_quotainfo->qi_dqperchunk; + else +- ndquots = xfs_qm_calc_dquots_per_chunk(mp, +- XFS_BB_TO_FSB(mp, bp->b_length)); ++ ndquots = xfs_qm_calc_dquots_per_chunk(mp, bp->b_length); + + for (i = 0; i < ndquots; i++, d++) { + if (!xfs_verify_cksum((char *)d, sizeof(struct xfs_dqblk), +diff --git a/include/linux/filter.h b/include/linux/filter.h +index ff4e40cd45b1..264c1a440240 100644 +--- a/include/linux/filter.h ++++ b/include/linux/filter.h +@@ -41,7 +41,11 @@ static inline unsigned int sk_filter_size(unsigned int proglen) + offsetof(struct sk_filter, insns[proglen])); + } + +-extern int sk_filter(struct sock *sk, struct sk_buff *skb); ++int sk_filter_trim_cap(struct sock *sk, struct sk_buff *skb, unsigned int cap); ++static inline int sk_filter(struct sock *sk, struct sk_buff *skb) ++{ ++ return sk_filter_trim_cap(sk, skb, 1); ++} + extern unsigned int sk_run_filter(const struct sk_buff *skb, + const struct sock_filter *filter); + extern int sk_unattached_filter_create(struct sk_filter **pfp, +diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h +index 1eaf61dde2c3..6671b365ba60 100644 +--- a/include/linux/hugetlb.h ++++ b/include/linux/hugetlb.h +@@ -395,15 +395,14 @@ static inline int hugepage_migration_support(struct hstate *h) + #endif + } + +-static inline bool hugepages_supported(void) +-{ +- /* +- * Some platform decide whether they support huge pages at boot +- * time. On these, such as powerpc, HPAGE_SHIFT is set to 0 when +- * there is no such support +- */ +- return HPAGE_SHIFT != 0; +-} ++#ifndef hugepages_supported ++/* ++ * Some platform decide whether they support huge pages at boot ++ * time. 
Some of them, such as powerpc, set HPAGE_SHIFT to 0 ++ * when there is no such support ++ */ ++#define hugepages_supported() (HPAGE_SHIFT != 0) ++#endif + + #else /* CONFIG_HUGETLB_PAGE */ + struct hstate {}; +diff --git a/include/linux/mroute.h b/include/linux/mroute.h +index 79aaa9fc1a15..d5277fc3ce2e 100644 +--- a/include/linux/mroute.h ++++ b/include/linux/mroute.h +@@ -103,5 +103,5 @@ struct mfc_cache { + struct rtmsg; + extern int ipmr_get_route(struct net *net, struct sk_buff *skb, + __be32 saddr, __be32 daddr, +- struct rtmsg *rtm, int nowait); ++ struct rtmsg *rtm, int nowait, u32 portid); + #endif +diff --git a/include/linux/mroute6.h b/include/linux/mroute6.h +index 66982e764051..f831155dc7d1 100644 +--- a/include/linux/mroute6.h ++++ b/include/linux/mroute6.h +@@ -115,7 +115,7 @@ struct mfc6_cache { + + struct rtmsg; + extern int ip6mr_get_route(struct net *net, struct sk_buff *skb, +- struct rtmsg *rtm, int nowait); ++ struct rtmsg *rtm, int nowait, u32 portid); + + #ifdef CONFIG_IPV6_MROUTE + extern struct sock *mroute6_socket(struct net *net, struct sk_buff *skb); +diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h +index c8ba627c1d60..45aa1c62dbfa 100644 +--- a/include/linux/perf_event.h ++++ b/include/linux/perf_event.h +@@ -439,11 +439,6 @@ struct perf_event { + #endif /* CONFIG_PERF_EVENTS */ + }; + +-enum perf_event_context_type { +- task_context, +- cpu_context, +-}; +- + /** + * struct perf_event_context - event context structure + * +@@ -451,7 +446,6 @@ enum perf_event_context_type { + */ + struct perf_event_context { + struct pmu *pmu; +- enum perf_event_context_type type; + /* + * Protect the states of the events in the list, + * nr_active, and the list: +diff --git a/include/linux/pwm.h b/include/linux/pwm.h +index f0feafd184a0..08b0215128dc 100644 +--- a/include/linux/pwm.h ++++ b/include/linux/pwm.h +@@ -295,6 +295,7 @@ static inline void pwm_add_table(struct pwm_lookup *table, size_t num) + #ifdef CONFIG_PWM_SYSFS + void pwmchip_sysfs_export(struct pwm_chip *chip); + void pwmchip_sysfs_unexport(struct pwm_chip *chip); ++void pwmchip_sysfs_unexport_children(struct pwm_chip *chip); + #else + static inline void pwmchip_sysfs_export(struct pwm_chip *chip) + { +@@ -303,6 +304,10 @@ static inline void pwmchip_sysfs_export(struct pwm_chip *chip) + static inline void pwmchip_sysfs_unexport(struct pwm_chip *chip) + { + } ++ ++static inline void pwmchip_sysfs_unexport_children(struct pwm_chip *chip) ++{ ++} + #endif /* CONFIG_PWM_SYSFS */ + + #endif /* __LINUX_PWM_H */ +diff --git a/include/linux/stddef.h b/include/linux/stddef.h +index f4aec0e75c3a..9c61c7cda936 100644 +--- a/include/linux/stddef.h ++++ b/include/linux/stddef.h +@@ -3,7 +3,6 @@ + + #include <uapi/linux/stddef.h> + +- + #undef NULL + #define NULL ((void *)0) + +@@ -14,8 +13,18 @@ enum { + + #undef offsetof + #ifdef __compiler_offsetof +-#define offsetof(TYPE,MEMBER) __compiler_offsetof(TYPE,MEMBER) ++#define offsetof(TYPE, MEMBER) __compiler_offsetof(TYPE, MEMBER) + #else +-#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER) ++#define offsetof(TYPE, MEMBER) ((size_t)&((TYPE *)0)->MEMBER) + #endif ++ ++/** ++ * offsetofend(TYPE, MEMBER) ++ * ++ * @TYPE: The type of the structure ++ * @MEMBER: The member within the structure to get the end offset of ++ */ ++#define offsetofend(TYPE, MEMBER) \ ++ (offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER)) ++ + #endif +diff --git a/include/linux/vfio.h b/include/linux/vfio.h +index 24579a0312a0..9131a4bf5c3e 100644 +--- 
a/include/linux/vfio.h ++++ b/include/linux/vfio.h +@@ -76,20 +76,6 @@ extern int vfio_register_iommu_driver(const struct vfio_iommu_driver_ops *ops); + extern void vfio_unregister_iommu_driver( + const struct vfio_iommu_driver_ops *ops); + +-/** +- * offsetofend(TYPE, MEMBER) +- * +- * @TYPE: The type of the structure +- * @MEMBER: The member within the structure to get the end offset of +- * +- * Simple helper macro for dealing with variable sized structures passed +- * from user space. This allows us to easily determine if the provided +- * structure is sized to include various fields. +- */ +-#define offsetofend(TYPE, MEMBER) ({ \ +- TYPE tmp; \ +- offsetof(TYPE, MEMBER) + sizeof(tmp.MEMBER); }) \ +- + /* + * External user API + */ +diff --git a/include/net/ip6_tunnel.h b/include/net/ip6_tunnel.h +index 6d1549c4893c..e6f0917d1ab5 100644 +--- a/include/net/ip6_tunnel.h ++++ b/include/net/ip6_tunnel.h +@@ -75,6 +75,7 @@ static inline void ip6tunnel_xmit(struct sk_buff *skb, struct net_device *dev) + struct net_device_stats *stats = &dev->stats; + int pkt_len, err; + ++ memset(skb->cb, 0, sizeof(struct inet6_skb_parm)); + pkt_len = skb->len; + err = ip6_local_out(skb); + +diff --git a/include/net/sock.h b/include/net/sock.h +index 6ed6df149bce..238e934dd3c3 100644 +--- a/include/net/sock.h ++++ b/include/net/sock.h +@@ -1380,7 +1380,7 @@ static inline struct inode *SOCK_INODE(struct socket *socket) + * Functions for memory accounting + */ + extern int __sk_mem_schedule(struct sock *sk, int size, int kind); +-extern void __sk_mem_reclaim(struct sock *sk); ++void __sk_mem_reclaim(struct sock *sk, int amount); + + #define SK_MEM_QUANTUM ((int)PAGE_SIZE) + #define SK_MEM_QUANTUM_SHIFT ilog2(SK_MEM_QUANTUM) +@@ -1421,7 +1421,7 @@ static inline void sk_mem_reclaim(struct sock *sk) + if (!sk_has_account(sk)) + return; + if (sk->sk_forward_alloc >= SK_MEM_QUANTUM) +- __sk_mem_reclaim(sk); ++ __sk_mem_reclaim(sk, sk->sk_forward_alloc); + } + + static inline void sk_mem_reclaim_partial(struct sock *sk) +@@ -1429,7 +1429,7 @@ static inline void sk_mem_reclaim_partial(struct sock *sk) + if (!sk_has_account(sk)) + return; + if (sk->sk_forward_alloc > SK_MEM_QUANTUM) +- __sk_mem_reclaim(sk); ++ __sk_mem_reclaim(sk, sk->sk_forward_alloc - 1); + } + + static inline void sk_mem_charge(struct sock *sk, int size) +@@ -1444,6 +1444,16 @@ static inline void sk_mem_uncharge(struct sock *sk, int size) + if (!sk_has_account(sk)) + return; + sk->sk_forward_alloc += size; ++ ++ /* Avoid a possible overflow. ++ * TCP send queues can make this happen, if sk_mem_reclaim() ++ * is not called and more than 2 GBytes are released at once. ++ * ++ * If we reach 2 MBytes, reclaim 1 MBytes right now, there is ++ * no need to hold that much forward allocation anyway. 
++ */ ++ if (unlikely(sk->sk_forward_alloc >= 1 << 21)) ++ __sk_mem_reclaim(sk, 1 << 20); + } + + static inline void sk_wmem_free_skb(struct sock *sk, struct sk_buff *skb) +diff --git a/include/net/tcp.h b/include/net/tcp.h +index 035135b43820..83d03f86e914 100644 +--- a/include/net/tcp.h ++++ b/include/net/tcp.h +@@ -1049,6 +1049,7 @@ static inline void tcp_prequeue_init(struct tcp_sock *tp) + } + + extern bool tcp_prequeue(struct sock *sk, struct sk_buff *skb); ++int tcp_filter(struct sock *sk, struct sk_buff *skb); + + #undef STATE_TRACE + +diff --git a/kernel/events/core.c b/kernel/events/core.c +index 0b3c09a3f7b6..a4a1516f3efc 100644 +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -6503,7 +6503,6 @@ skip_type: + __perf_event_init_context(&cpuctx->ctx); + lockdep_set_class(&cpuctx->ctx.mutex, &cpuctx_mutex); + lockdep_set_class(&cpuctx->ctx.lock, &cpuctx_lock); +- cpuctx->ctx.type = cpu_context; + cpuctx->ctx.pmu = pmu; + + __perf_cpu_hrtimer_init(cpuctx, cpu); +@@ -7136,7 +7135,19 @@ SYSCALL_DEFINE5(perf_event_open, + * task or CPU context: + */ + if (move_group) { +- if (group_leader->ctx->type != ctx->type) ++ /* ++ * Make sure we're both on the same task, or both ++ * per-cpu events. ++ */ ++ if (group_leader->ctx->task != ctx->task) ++ goto err_context; ++ ++ /* ++ * Make sure we're both events for the same CPU; ++ * grouping events for different CPUs is broken; since ++ * you can never concurrently schedule them anyhow. ++ */ ++ if (group_leader->cpu != event->cpu) + goto err_context; + } else { + if (group_leader->ctx != ctx) +diff --git a/kernel/power/suspend_test.c b/kernel/power/suspend_test.c +index 269b097e78ea..743615bfdcec 100644 +--- a/kernel/power/suspend_test.c ++++ b/kernel/power/suspend_test.c +@@ -169,8 +169,10 @@ static int __init test_suspend(void) + + /* RTCs have initialized by now too ... can we use one? */ + dev = class_find_device(rtc_class, NULL, NULL, has_wakealarm); +- if (dev) ++ if (dev) { + rtc = rtc_class_open(dev_name(dev)); ++ put_device(dev); ++ } + if (!rtc) { + printk(warn_no_rtc); + goto done; +diff --git a/lib/genalloc.c b/lib/genalloc.c +index 26cf20be72b7..17271ef368ca 100644 +--- a/lib/genalloc.c ++++ b/lib/genalloc.c +@@ -273,7 +273,7 @@ unsigned long gen_pool_alloc(struct gen_pool *pool, size_t size) + struct gen_pool_chunk *chunk; + unsigned long addr = 0; + int order = pool->min_alloc_order; +- int nbits, start_bit = 0, end_bit, remain; ++ int nbits, start_bit, end_bit, remain; + + #ifndef CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG + BUG_ON(in_nmi()); +@@ -288,6 +288,7 @@ unsigned long gen_pool_alloc(struct gen_pool *pool, size_t size) + if (size > atomic_read(&chunk->avail)) + continue; + ++ start_bit = 0; + end_bit = chunk_size(chunk) >> order; + retry: + start_bit = pool->algo(chunk->bits, end_bit, start_bit, nbits, +diff --git a/mm/filemap.c b/mm/filemap.c +index af9e11ea4ecf..9fa5c3f40cd6 100644 +--- a/mm/filemap.c ++++ b/mm/filemap.c +@@ -808,8 +808,8 @@ EXPORT_SYMBOL(page_cache_prev_hole); + * Looks up the page cache slot at @mapping & @offset. If there is a + * page cache page, it is returned with an increased refcount. + * +- * If the slot holds a shadow entry of a previously evicted page, it +- * is returned. ++ * If the slot holds a shadow entry of a previously evicted page, or a ++ * swap entry from shmem/tmpfs, it is returned. + * + * Otherwise, %NULL is returned. 
+ */ +@@ -830,9 +830,9 @@ repeat: + if (radix_tree_deref_retry(page)) + goto repeat; + /* +- * Otherwise, shmem/tmpfs must be storing a swap entry +- * here as an exceptional entry: so return it without +- * attempting to raise page count. ++ * A shadow entry of a recently evicted page, ++ * or a swap entry from shmem/tmpfs. Return ++ * it without attempting to raise page count. + */ + goto out; + } +@@ -865,8 +865,8 @@ EXPORT_SYMBOL(find_get_entry); + * page cache page, it is returned locked and with an increased + * refcount. + * +- * If the slot holds a shadow entry of a previously evicted page, it +- * is returned. ++ * If the slot holds a shadow entry of a previously evicted page, or a ++ * swap entry from shmem/tmpfs, it is returned. + * + * Otherwise, %NULL is returned. + * +@@ -999,8 +999,8 @@ EXPORT_SYMBOL(pagecache_get_page); + * with ascending indexes. There may be holes in the indices due to + * not-present pages. + * +- * Any shadow entries of evicted pages are included in the returned +- * array. ++ * Any shadow entries of evicted pages, or swap entries from ++ * shmem/tmpfs, are included in the returned array. + * + * find_get_entries() returns the number of pages and shadow entries + * which were found. +@@ -1028,9 +1028,9 @@ repeat: + if (radix_tree_deref_retry(page)) + goto restart; + /* +- * Otherwise, we must be storing a swap entry +- * here as an exceptional entry: so return it +- * without attempting to raise page count. ++ * A shadow entry of a recently evicted page, ++ * or a swap entry from shmem/tmpfs. Return ++ * it without attempting to raise page count. + */ + goto export; + } +@@ -1098,9 +1098,9 @@ repeat: + goto restart; + } + /* +- * Otherwise, shmem/tmpfs must be storing a swap entry +- * here as an exceptional entry: so skip over it - +- * we only reach this from invalidate_mapping_pages(). ++ * A shadow entry of a recently evicted page, ++ * or a swap entry from shmem/tmpfs. Skip ++ * over it. + */ + continue; + } +@@ -1165,9 +1165,9 @@ repeat: + goto restart; + } + /* +- * Otherwise, shmem/tmpfs must be storing a swap entry +- * here as an exceptional entry: so stop looking for +- * contiguous pages. ++ * A shadow entry of a recently evicted page, ++ * or a swap entry from shmem/tmpfs. Stop ++ * looking for contiguous pages. + */ + break; + } +@@ -1241,10 +1241,17 @@ repeat: + goto restart; + } + /* +- * This function is never used on a shmem/tmpfs +- * mapping, so a swap entry won't be found here. ++ * A shadow entry of a recently evicted page. ++ * ++ * Those entries should never be tagged, but ++ * this tree walk is lockless and the tags are ++ * looked up in bulk, one radix tree node at a ++ * time, so there is a sizable window for page ++ * reclaim to evict a page we saw tagged. ++ * ++ * Skip over it. + */ +- BUG(); ++ continue; + } + + if (!page_cache_get_speculative(page)) +diff --git a/mm/memcontrol.c b/mm/memcontrol.c +index 4a1559d8739f..0154a004667c 100644 +--- a/mm/memcontrol.c ++++ b/mm/memcontrol.c +@@ -6599,16 +6599,20 @@ static struct page *mc_handle_file_pte(struct vm_area_struct *vma, + pgoff = pte_to_pgoff(ptent); + + /* page is moved even if it's not RSS of this task(page-faulted). */ +- page = find_get_page(mapping, pgoff); +- + #ifdef CONFIG_SWAP + /* shmem/tmpfs may report page out on swap: account for that too. 
*/ +- if (radix_tree_exceptional_entry(page)) { +- swp_entry_t swap = radix_to_swp_entry(page); +- if (do_swap_account) +- *entry = swap; +- page = find_get_page(swap_address_space(swap), swap.val); +- } ++ if (shmem_mapping(mapping)) { ++ page = find_get_entry(mapping, pgoff); ++ if (radix_tree_exceptional_entry(page)) { ++ swp_entry_t swp = radix_to_swp_entry(page); ++ if (do_swap_account) ++ *entry = swp; ++ page = find_get_page(swap_address_space(swp), swp.val); ++ } ++ } else ++ page = find_get_page(mapping, pgoff); ++#else ++ page = find_get_page(mapping, pgoff); + #endif + return page; + } +diff --git a/mm/memory.c b/mm/memory.c +index a0c9c6cb59d1..f5744269a454 100644 +--- a/mm/memory.c ++++ b/mm/memory.c +@@ -116,6 +116,8 @@ __setup("norandmaps", disable_randmaps); + unsigned long zero_pfn __read_mostly; + unsigned long highest_memmap_pfn __read_mostly; + ++EXPORT_SYMBOL(zero_pfn); ++ + /* + * CONFIG_MMU architectures set up ZERO_PAGE in their paging_init() + */ +diff --git a/mm/swapfile.c b/mm/swapfile.c +index 660b9c0e2e40..32fed0949adf 100644 +--- a/mm/swapfile.c ++++ b/mm/swapfile.c +@@ -2207,6 +2207,8 @@ static unsigned long read_swap_header(struct swap_info_struct *p, + swab32s(&swap_header->info.version); + swab32s(&swap_header->info.last_page); + swab32s(&swap_header->info.nr_badpages); ++ if (swap_header->info.nr_badpages > MAX_SWAP_BADPAGES) ++ return 0; + for (i = 0; i < swap_header->info.nr_badpages; i++) + swab32s(&swap_header->info.badpages[i]); + } +diff --git a/mm/truncate.c b/mm/truncate.c +index 827ad8d2b5cd..6dde010a6676 100644 +--- a/mm/truncate.c ++++ b/mm/truncate.c +@@ -415,14 +415,6 @@ unsigned long invalidate_mapping_pages(struct address_space *mapping, + unsigned long count = 0; + int i; + +- /* +- * Note: this function may get called on a shmem/tmpfs mapping: +- * pagevec_lookup() might then return 0 prematurely (because it +- * got a gangful of swap entries); but it's hardly worth worrying +- * about - it can rarely have anything to free from such a mapping +- * (most pages are dirty), and already skips over any difficulties. 
+- */ +- + pagevec_init(&pvec, 0); + while (index <= end && pagevec_lookup_entries(&pvec, mapping, index, + min(end - index, (pgoff_t)PAGEVEC_SIZE - 1) + 1, +diff --git a/net/bridge/br_multicast.c b/net/bridge/br_multicast.c +index 91fed8147c39..edb0eee5caf7 100644 +--- a/net/bridge/br_multicast.c ++++ b/net/bridge/br_multicast.c +@@ -911,20 +911,25 @@ static void br_multicast_enable(struct bridge_mcast_query *query) + mod_timer(&query->timer, jiffies); + } + +-void br_multicast_enable_port(struct net_bridge_port *port) ++static void __br_multicast_enable_port(struct net_bridge_port *port) + { + struct net_bridge *br = port->br; + +- spin_lock(&br->multicast_lock); + if (br->multicast_disabled || !netif_running(br->dev)) +- goto out; ++ return; + + br_multicast_enable(&port->ip4_query); + #if IS_ENABLED(CONFIG_IPV6) + br_multicast_enable(&port->ip6_query); + #endif ++} + +-out: ++void br_multicast_enable_port(struct net_bridge_port *port) ++{ ++ struct net_bridge *br = port->br; ++ ++ spin_lock(&br->multicast_lock); ++ __br_multicast_enable_port(port); + spin_unlock(&br->multicast_lock); + } + +@@ -1954,8 +1959,9 @@ static void br_multicast_start_querier(struct net_bridge *br, + + int br_multicast_toggle(struct net_bridge *br, unsigned long val) + { +- int err = 0; + struct net_bridge_mdb_htable *mdb; ++ struct net_bridge_port *port; ++ int err = 0; + + spin_lock_bh(&br->multicast_lock); + if (br->multicast_disabled == !val) +@@ -1983,10 +1989,9 @@ rollback: + goto rollback; + } + +- br_multicast_start_querier(br, &br->ip4_query); +-#if IS_ENABLED(CONFIG_IPV6) +- br_multicast_start_querier(br, &br->ip6_query); +-#endif ++ br_multicast_open(br); ++ list_for_each_entry(port, &br->port_list, list) ++ __br_multicast_enable_port(port); + + unlock: + spin_unlock_bh(&br->multicast_lock); +diff --git a/net/can/bcm.c b/net/can/bcm.c +index b57452a65fb9..392a687d3ca6 100644 +--- a/net/can/bcm.c ++++ b/net/can/bcm.c +@@ -1500,24 +1500,31 @@ static int bcm_connect(struct socket *sock, struct sockaddr *uaddr, int len, + struct sockaddr_can *addr = (struct sockaddr_can *)uaddr; + struct sock *sk = sock->sk; + struct bcm_sock *bo = bcm_sk(sk); ++ int ret = 0; + + if (len < sizeof(*addr)) + return -EINVAL; + +- if (bo->bound) +- return -EISCONN; ++ lock_sock(sk); ++ ++ if (bo->bound) { ++ ret = -EISCONN; ++ goto fail; ++ } + + /* bind a device to this socket */ + if (addr->can_ifindex) { + struct net_device *dev; + + dev = dev_get_by_index(&init_net, addr->can_ifindex); +- if (!dev) +- return -ENODEV; +- ++ if (!dev) { ++ ret = -ENODEV; ++ goto fail; ++ } + if (dev->type != ARPHRD_CAN) { + dev_put(dev); +- return -ENODEV; ++ ret = -ENODEV; ++ goto fail; + } + + bo->ifindex = dev->ifindex; +@@ -1528,17 +1535,24 @@ static int bcm_connect(struct socket *sock, struct sockaddr *uaddr, int len, + bo->ifindex = 0; + } + +- bo->bound = 1; +- + if (proc_dir) { + /* unique socket address as filename */ + sprintf(bo->procname, "%lu", sock_i_ino(sk)); + bo->bcm_proc_read = proc_create_data(bo->procname, 0644, + proc_dir, + &bcm_proc_fops, sk); ++ if (!bo->bcm_proc_read) { ++ ret = -ENOMEM; ++ goto fail; ++ } + } + +- return 0; ++ bo->bound = 1; ++ ++fail: ++ release_sock(sk); ++ ++ return ret; + } + + static int bcm_recvmsg(struct kiocb *iocb, struct socket *sock, +diff --git a/net/core/dev.c b/net/core/dev.c +index d30c12263f38..fa6d9a47f71f 100644 +--- a/net/core/dev.c ++++ b/net/core/dev.c +@@ -2263,7 +2263,7 @@ int skb_checksum_help(struct sk_buff *skb) + goto out; + } + +- *(__sum16 *)(skb->data + offset) = 
csum_fold(csum); ++ *(__sum16 *)(skb->data + offset) = csum_fold(csum) ?: CSUM_MANGLED_0; + out_set_summed: + skb->ip_summed = CHECKSUM_NONE; + out: +@@ -4546,6 +4546,7 @@ EXPORT_SYMBOL(netdev_master_upper_dev_get_rcu); + + static int __netdev_adjacent_dev_insert(struct net_device *dev, + struct net_device *adj_dev, ++ u16 ref_nr, + bool neighbour, bool master, + bool upper) + { +@@ -4555,7 +4556,7 @@ static int __netdev_adjacent_dev_insert(struct net_device *dev, + + if (adj) { + BUG_ON(neighbour); +- adj->ref_nr++; ++ adj->ref_nr += ref_nr; + return 0; + } + +@@ -4566,7 +4567,7 @@ static int __netdev_adjacent_dev_insert(struct net_device *dev, + adj->dev = adj_dev; + adj->master = master; + adj->neighbour = neighbour; +- adj->ref_nr = 1; ++ adj->ref_nr = ref_nr; + + dev_hold(adj_dev); + pr_debug("dev_hold for %s, because of %s link added from %s to %s\n", +@@ -4589,22 +4590,25 @@ static int __netdev_adjacent_dev_insert(struct net_device *dev, + + static inline int __netdev_upper_dev_insert(struct net_device *dev, + struct net_device *udev, ++ u16 ref_nr, + bool master, bool neighbour) + { +- return __netdev_adjacent_dev_insert(dev, udev, neighbour, master, +- true); ++ return __netdev_adjacent_dev_insert(dev, udev, ref_nr, neighbour, ++ master, true); + } + + static inline int __netdev_lower_dev_insert(struct net_device *dev, + struct net_device *ldev, ++ u16 ref_nr, + bool neighbour) + { +- return __netdev_adjacent_dev_insert(dev, ldev, neighbour, false, ++ return __netdev_adjacent_dev_insert(dev, ldev, ref_nr, neighbour, false, + false); + } + + void __netdev_adjacent_dev_remove(struct net_device *dev, +- struct net_device *adj_dev, bool upper) ++ struct net_device *adj_dev, u16 ref_nr, ++ bool upper) + { + struct netdev_adjacent *adj; + +@@ -4616,8 +4620,8 @@ void __netdev_adjacent_dev_remove(struct net_device *dev, + if (!adj) + BUG(); + +- if (adj->ref_nr > 1) { +- adj->ref_nr--; ++ if (adj->ref_nr > ref_nr) { ++ adj->ref_nr -= ref_nr; + return; + } + +@@ -4630,30 +4634,33 @@ void __netdev_adjacent_dev_remove(struct net_device *dev, + } + + static inline void __netdev_upper_dev_remove(struct net_device *dev, +- struct net_device *udev) ++ struct net_device *udev, ++ u16 ref_nr) + { +- return __netdev_adjacent_dev_remove(dev, udev, true); ++ return __netdev_adjacent_dev_remove(dev, udev, ref_nr, true); + } + + static inline void __netdev_lower_dev_remove(struct net_device *dev, +- struct net_device *ldev) ++ struct net_device *ldev, ++ u16 ref_nr) + { +- return __netdev_adjacent_dev_remove(dev, ldev, false); ++ return __netdev_adjacent_dev_remove(dev, ldev, ref_nr, false); + } + + int __netdev_adjacent_dev_insert_link(struct net_device *dev, + struct net_device *upper_dev, +- bool master, bool neighbour) ++ u16 ref_nr, bool master, bool neighbour) + { + int ret; + +- ret = __netdev_upper_dev_insert(dev, upper_dev, master, neighbour); ++ ret = __netdev_upper_dev_insert(dev, upper_dev, ref_nr, master, ++ neighbour); + if (ret) + return ret; + +- ret = __netdev_lower_dev_insert(upper_dev, dev, neighbour); ++ ret = __netdev_lower_dev_insert(upper_dev, dev, ref_nr, neighbour); + if (ret) { +- __netdev_upper_dev_remove(dev, upper_dev); ++ __netdev_upper_dev_remove(dev, upper_dev, ref_nr); + return ret; + } + +@@ -4661,23 +4668,25 @@ int __netdev_adjacent_dev_insert_link(struct net_device *dev, + } + + static inline int __netdev_adjacent_dev_link(struct net_device *dev, +- struct net_device *udev) ++ struct net_device *udev, ++ u16 ref_nr) + { +- return 
__netdev_adjacent_dev_insert_link(dev, udev, false, false); ++ return __netdev_adjacent_dev_insert_link(dev, udev, ref_nr, false, ++ false); + } + + static inline int __netdev_adjacent_dev_link_neighbour(struct net_device *dev, + struct net_device *udev, + bool master) + { +- return __netdev_adjacent_dev_insert_link(dev, udev, master, true); ++ return __netdev_adjacent_dev_insert_link(dev, udev, 1, master, true); + } + + void __netdev_adjacent_dev_unlink(struct net_device *dev, +- struct net_device *upper_dev) ++ struct net_device *upper_dev, u16 ref_nr) + { +- __netdev_upper_dev_remove(dev, upper_dev); +- __netdev_lower_dev_remove(upper_dev, dev); ++ __netdev_upper_dev_remove(dev, upper_dev, ref_nr); ++ __netdev_lower_dev_remove(upper_dev, dev, ref_nr); + } + + +@@ -4713,7 +4722,8 @@ static int __netdev_upper_dev_link(struct net_device *dev, + */ + list_for_each_entry(i, &dev->lower_dev_list, list) { + list_for_each_entry(j, &upper_dev->upper_dev_list, list) { +- ret = __netdev_adjacent_dev_link(i->dev, j->dev); ++ ret = __netdev_adjacent_dev_link(i->dev, j->dev, ++ i->ref_nr); + if (ret) + goto rollback_mesh; + } +@@ -4721,14 +4731,14 @@ static int __netdev_upper_dev_link(struct net_device *dev, + + /* add dev to every upper_dev's upper device */ + list_for_each_entry(i, &upper_dev->upper_dev_list, list) { +- ret = __netdev_adjacent_dev_link(dev, i->dev); ++ ret = __netdev_adjacent_dev_link(dev, i->dev, i->ref_nr); + if (ret) + goto rollback_upper_mesh; + } + + /* add upper_dev to every dev's lower device */ + list_for_each_entry(i, &dev->lower_dev_list, list) { +- ret = __netdev_adjacent_dev_link(i->dev, upper_dev); ++ ret = __netdev_adjacent_dev_link(i->dev, upper_dev, i->ref_nr); + if (ret) + goto rollback_lower_mesh; + } +@@ -4741,7 +4751,7 @@ rollback_lower_mesh: + list_for_each_entry(i, &dev->lower_dev_list, list) { + if (i == to_i) + break; +- __netdev_adjacent_dev_unlink(i->dev, upper_dev); ++ __netdev_adjacent_dev_unlink(i->dev, upper_dev, i->ref_nr); + } + + i = NULL; +@@ -4751,7 +4761,7 @@ rollback_upper_mesh: + list_for_each_entry(i, &upper_dev->upper_dev_list, list) { + if (i == to_i) + break; +- __netdev_adjacent_dev_unlink(dev, i->dev); ++ __netdev_adjacent_dev_unlink(dev, i->dev, i->ref_nr); + } + + i = j = NULL; +@@ -4763,13 +4773,13 @@ rollback_mesh: + list_for_each_entry(j, &upper_dev->upper_dev_list, list) { + if (i == to_i && j == to_j) + break; +- __netdev_adjacent_dev_unlink(i->dev, j->dev); ++ __netdev_adjacent_dev_unlink(i->dev, j->dev, i->ref_nr); + } + if (i == to_i) + break; + } + +- __netdev_adjacent_dev_unlink(dev, upper_dev); ++ __netdev_adjacent_dev_unlink(dev, upper_dev, 1); + + return ret; + } +@@ -4823,7 +4833,7 @@ void netdev_upper_dev_unlink(struct net_device *dev, + struct netdev_adjacent *i, *j; + ASSERT_RTNL(); + +- __netdev_adjacent_dev_unlink(dev, upper_dev); ++ __netdev_adjacent_dev_unlink(dev, upper_dev, 1); + + /* Here is the tricky part. 
We must remove all dev's lower + * devices from all upper_dev's upper devices and vice +@@ -4831,16 +4841,16 @@ void netdev_upper_dev_unlink(struct net_device *dev, + */ + list_for_each_entry(i, &dev->lower_dev_list, list) + list_for_each_entry(j, &upper_dev->upper_dev_list, list) +- __netdev_adjacent_dev_unlink(i->dev, j->dev); ++ __netdev_adjacent_dev_unlink(i->dev, j->dev, i->ref_nr); + + /* remove also the devices itself from lower/upper device + * list + */ + list_for_each_entry(i, &dev->lower_dev_list, list) +- __netdev_adjacent_dev_unlink(i->dev, upper_dev); ++ __netdev_adjacent_dev_unlink(i->dev, upper_dev, i->ref_nr); + + list_for_each_entry(i, &upper_dev->upper_dev_list, list) +- __netdev_adjacent_dev_unlink(dev, i->dev); ++ __netdev_adjacent_dev_unlink(dev, i->dev, i->ref_nr); + + call_netdevice_notifiers(NETDEV_CHANGEUPPER, dev); + } +diff --git a/net/core/filter.c b/net/core/filter.c +index ebce437678fc..5903efc408da 100644 +--- a/net/core/filter.c ++++ b/net/core/filter.c +@@ -67,9 +67,10 @@ static inline void *load_pointer(const struct sk_buff *skb, int k, + } + + /** +- * sk_filter - run a packet through a socket filter ++ * sk_filter_trim_cap - run a packet through a socket filter + * @sk: sock associated with &sk_buff + * @skb: buffer to filter ++ * @cap: limit on how short the eBPF program may trim the packet + * + * Run the filter code and then cut skb->data to correct size returned by + * sk_run_filter. If pkt_len is 0 we toss packet. If skb->len is smaller +@@ -78,7 +79,7 @@ static inline void *load_pointer(const struct sk_buff *skb, int k, + * be accepted or -EPERM if the packet should be tossed. + * + */ +-int sk_filter(struct sock *sk, struct sk_buff *skb) ++int sk_filter_trim_cap(struct sock *sk, struct sk_buff *skb, unsigned int cap) + { + int err; + struct sk_filter *filter; +@@ -99,14 +100,13 @@ int sk_filter(struct sock *sk, struct sk_buff *skb) + filter = rcu_dereference(sk->sk_filter); + if (filter) { + unsigned int pkt_len = SK_RUN_FILTER(filter, skb); +- +- err = pkt_len ? pskb_trim(skb, pkt_len) : -EPERM; ++ err = pkt_len ? 
pskb_trim(skb, max(cap, pkt_len)) : -EPERM; + } + rcu_read_unlock(); + + return err; + } +-EXPORT_SYMBOL(sk_filter); ++EXPORT_SYMBOL(sk_filter_trim_cap); + + /** + * sk_run_filter - run a filter on a socket +diff --git a/net/core/sock.c b/net/core/sock.c +index 4ac4c13352ab..73c6093e136a 100644 +--- a/net/core/sock.c ++++ b/net/core/sock.c +@@ -1537,6 +1537,7 @@ struct sock *sk_clone_lock(const struct sock *sk, const gfp_t priority) + } + + newsk->sk_err = 0; ++ newsk->sk_err_soft = 0; + newsk->sk_priority = 0; + /* + * Before updating sk_refcnt, we must commit prior changes to memory +@@ -2095,12 +2096,13 @@ EXPORT_SYMBOL(__sk_mem_schedule); + /** + * __sk_reclaim - reclaim memory_allocated + * @sk: socket ++ * @amount: number of bytes (rounded down to a SK_MEM_QUANTUM multiple) + */ +-void __sk_mem_reclaim(struct sock *sk) ++void __sk_mem_reclaim(struct sock *sk, int amount) + { +- sk_memory_allocated_sub(sk, +- sk->sk_forward_alloc >> SK_MEM_QUANTUM_SHIFT); +- sk->sk_forward_alloc &= SK_MEM_QUANTUM - 1; ++ amount >>= SK_MEM_QUANTUM_SHIFT; ++ sk_memory_allocated_sub(sk, amount); ++ sk->sk_forward_alloc -= amount << SK_MEM_QUANTUM_SHIFT; + + if (sk_under_memory_pressure(sk) && + (sk_memory_allocated(sk) < sk_prot_mem_limits(sk, 0))) +diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c +index ebc54fef85a5..294c642fbebb 100644 +--- a/net/dccp/ipv4.c ++++ b/net/dccp/ipv4.c +@@ -212,7 +212,7 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info) + { + const struct iphdr *iph = (struct iphdr *)skb->data; + const u8 offset = iph->ihl << 2; +- const struct dccp_hdr *dh = (struct dccp_hdr *)(skb->data + offset); ++ const struct dccp_hdr *dh; + struct dccp_sock *dp; + struct inet_sock *inet; + const int type = icmp_hdr(skb)->type; +@@ -222,11 +222,13 @@ static void dccp_v4_err(struct sk_buff *skb, u32 info) + int err; + struct net *net = dev_net(skb->dev); + +- if (skb->len < offset + sizeof(*dh) || +- skb->len < offset + __dccp_basic_hdr_len(dh)) { +- ICMP_INC_STATS_BH(net, ICMP_MIB_INERRORS); +- return; +- } ++ /* Only need dccph_dport & dccph_sport which are the first ++ * 4 bytes in dccp header. ++ * Our caller (icmp_socket_deliver()) already pulled 8 bytes for us. ++ */ ++ BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_sport) > 8); ++ BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_dport) > 8); ++ dh = (struct dccp_hdr *)(skb->data + offset); + + sk = inet_lookup(net, &dccp_hashinfo, + iph->daddr, dh->dccph_dport, +diff --git a/net/dccp/ipv6.c b/net/dccp/ipv6.c +index 86eedbaf037f..736fdedf9c85 100644 +--- a/net/dccp/ipv6.c ++++ b/net/dccp/ipv6.c +@@ -83,7 +83,7 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, + u8 type, u8 code, int offset, __be32 info) + { + const struct ipv6hdr *hdr = (const struct ipv6hdr *)skb->data; +- const struct dccp_hdr *dh = (struct dccp_hdr *)(skb->data + offset); ++ const struct dccp_hdr *dh; + struct dccp_sock *dp; + struct ipv6_pinfo *np; + struct sock *sk; +@@ -91,12 +91,13 @@ static void dccp_v6_err(struct sk_buff *skb, struct inet6_skb_parm *opt, + __u64 seq; + struct net *net = dev_net(skb->dev); + +- if (skb->len < offset + sizeof(*dh) || +- skb->len < offset + __dccp_basic_hdr_len(dh)) { +- ICMP6_INC_STATS_BH(net, __in6_dev_get(skb->dev), +- ICMP6_MIB_INERRORS); +- return; +- } ++ /* Only need dccph_dport & dccph_sport which are the first ++ * 4 bytes in dccp header. ++ * Our caller (icmpv6_notify()) already pulled 8 bytes for us. 
++ */ ++ BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_sport) > 8); ++ BUILD_BUG_ON(offsetofend(struct dccp_hdr, dccph_dport) > 8); ++ dh = (struct dccp_hdr *)(skb->data + offset); + + sk = inet6_lookup(net, &dccp_hashinfo, + &hdr->daddr, dh->dccph_dport, +@@ -1022,6 +1023,7 @@ static const struct inet_connection_sock_af_ops dccp_ipv6_mapped = { + .getsockopt = ipv6_getsockopt, + .addr2sockaddr = inet6_csk_addr2sockaddr, + .sockaddr_len = sizeof(struct sockaddr_in6), ++ .bind_conflict = inet6_csk_bind_conflict, + #ifdef CONFIG_COMPAT + .compat_setsockopt = compat_ipv6_setsockopt, + .compat_getsockopt = compat_ipv6_getsockopt, +diff --git a/net/dccp/proto.c b/net/dccp/proto.c +index ba64750f0387..f6f6fa1ddeb0 100644 +--- a/net/dccp/proto.c ++++ b/net/dccp/proto.c +@@ -1012,6 +1012,10 @@ void dccp_close(struct sock *sk, long timeout) + __kfree_skb(skb); + } + ++ /* If socket has been already reset kill it. */ ++ if (sk->sk_state == DCCP_CLOSED) ++ goto adjudge_to_death; ++ + if (data_was_unread) { + /* Unread data was tossed, send an appropriate Reset Code */ + DCCP_WARN("ABORT with %u bytes unread\n", data_was_unread); +diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c +index dccda72bac62..5643a10da91d 100644 +--- a/net/ipv4/ipmr.c ++++ b/net/ipv4/ipmr.c +@@ -2188,7 +2188,7 @@ static int __ipmr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb, + + int ipmr_get_route(struct net *net, struct sk_buff *skb, + __be32 saddr, __be32 daddr, +- struct rtmsg *rtm, int nowait) ++ struct rtmsg *rtm, int nowait, u32 portid) + { + struct mfc_cache *cache; + struct mr_table *mrt; +@@ -2233,6 +2233,7 @@ int ipmr_get_route(struct net *net, struct sk_buff *skb, + return -ENOMEM; + } + ++ NETLINK_CB(skb2).portid = portid; + skb_push(skb2, sizeof(struct iphdr)); + skb_reset_network_header(skb2); + iph = ip_hdr(skb2); +diff --git a/net/ipv4/route.c b/net/ipv4/route.c +index 1454176792b3..fd2811086257 100644 +--- a/net/ipv4/route.c ++++ b/net/ipv4/route.c +@@ -764,8 +764,10 @@ static void __ip_do_redirect(struct rtable *rt, struct sk_buff *skb, struct flow + goto reject_redirect; + } + +- n = ipv4_neigh_lookup(&rt->dst, NULL, &new_gw); +- if (n) { ++ n = __ipv4_neigh_lookup(rt->dst.dev, new_gw); ++ if (!n) ++ n = neigh_create(&arp_tbl, &new_gw, rt->dst.dev); ++ if (!IS_ERR(n)) { + if (!(n->nud_state & NUD_VALID)) { + neigh_event_send(n, NULL); + } else { +@@ -2427,7 +2429,8 @@ static int rt_fill_info(struct net *net, __be32 dst, __be32 src, + IPV4_DEVCONF_ALL(net, MC_FORWARDING)) { + int err = ipmr_get_route(net, skb, + fl4->saddr, fl4->daddr, +- r, nowait); ++ r, nowait, portid); ++ + if (err <= 0) { + if (!nowait) { + if (err == 0) +diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c +index 392d3259f9ad..3e63b5fb2121 100644 +--- a/net/ipv4/tcp.c ++++ b/net/ipv4/tcp.c +@@ -1169,7 +1169,7 @@ new_segment: + + if (!skb_can_coalesce(skb, i, pfrag->page, + pfrag->offset)) { +- if (i == sysctl_max_skb_frags || !sg) { ++ if (i >= sysctl_max_skb_frags || !sg) { + tcp_mark_push(tp, skb); + goto new_segment; + } +diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c +index 4b2040762733..57f5bad5650c 100644 +--- a/net/ipv4/tcp_ipv4.c ++++ b/net/ipv4/tcp_ipv4.c +@@ -1941,6 +1941,21 @@ bool tcp_prequeue(struct sock *sk, struct sk_buff *skb) + } + EXPORT_SYMBOL(tcp_prequeue); + ++int tcp_filter(struct sock *sk, struct sk_buff *skb) ++{ ++ struct tcphdr *th = (struct tcphdr *)skb->data; ++ unsigned int eaten = skb->len; ++ int err; ++ ++ err = sk_filter_trim_cap(sk, skb, th->doff * 4); ++ if (!err) { ++ eaten -= skb->len; ++ 
TCP_SKB_CB(skb)->end_seq -= eaten; ++ } ++ return err; ++} ++EXPORT_SYMBOL(tcp_filter); ++ + /* + * From tcp_input.c + */ +@@ -2003,8 +2018,10 @@ process: + goto discard_and_relse; + nf_reset(skb); + +- if (sk_filter(sk, skb)) ++ if (tcp_filter(sk, skb)) + goto discard_and_relse; ++ th = (const struct tcphdr *)skb->data; ++ iph = ip_hdr(skb); + + sk_mark_napi_id(sk, skb); + skb->dev = NULL; +diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c +index aa72c9d604a0..c807d5790ca1 100644 +--- a/net/ipv4/tcp_output.c ++++ b/net/ipv4/tcp_output.c +@@ -1762,12 +1762,14 @@ static int tcp_mtu_probe(struct sock *sk) + len = 0; + tcp_for_write_queue_from_safe(skb, next, sk) { + copy = min_t(int, skb->len, probe_size - len); +- if (nskb->ip_summed) ++ if (nskb->ip_summed) { + skb_copy_bits(skb, 0, skb_put(nskb, copy), copy); +- else +- nskb->csum = skb_copy_and_csum_bits(skb, 0, +- skb_put(nskb, copy), +- copy, nskb->csum); ++ } else { ++ __wsum csum = skb_copy_and_csum_bits(skb, 0, ++ skb_put(nskb, copy), ++ copy, 0); ++ nskb->csum = csum_block_add(nskb->csum, csum, len); ++ } + + if (skb->len <= copy) { + /* We've eaten all the data from this skb. +@@ -2336,7 +2338,8 @@ int __tcp_retransmit_skb(struct sock *sk, struct sk_buff *skb) + * copying overhead: fragmentation, tunneling, mangling etc. + */ + if (atomic_read(&sk->sk_wmem_alloc) > +- min(sk->sk_wmem_queued + (sk->sk_wmem_queued >> 2), sk->sk_sndbuf)) ++ min_t(u32, sk->sk_wmem_queued + (sk->sk_wmem_queued >> 2), ++ sk->sk_sndbuf)) + return -EAGAIN; + + if (before(TCP_SKB_CB(skb)->seq, tp->snd_una)) { +diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c +index bbf35875e4ef..1e31fc5477e8 100644 +--- a/net/ipv6/addrconf.c ++++ b/net/ipv6/addrconf.c +@@ -2648,7 +2648,7 @@ static void init_loopback(struct net_device *dev) + * lo device down, release this obsolete dst and + * reallocate a new router for ifa. 
+ */ +- if (sp_ifa->rt->dst.obsolete > 0) { ++ if (!atomic_read(&sp_ifa->rt->rt6i_ref)) { + ip6_rt_put(sp_ifa->rt); + sp_ifa->rt = NULL; + } else { +diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c +index 737af492ed75..6b5acd50103f 100644 +--- a/net/ipv6/ip6_gre.c ++++ b/net/ipv6/ip6_gre.c +@@ -895,7 +895,6 @@ static int ip6gre_xmit_other(struct sk_buff *skb, struct net_device *dev) + encap_limit = t->parms.encap_limit; + + memcpy(&fl6, &t->fl.u.ip6, sizeof(fl6)); +- fl6.flowi6_proto = skb->protocol; + + err = ip6gre_xmit2(skb, dev, 0, &fl6, encap_limit, &mtu); + +diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c +index 86d30e60242a..56aa540d77f6 100644 +--- a/net/ipv6/ip6mr.c ++++ b/net/ipv6/ip6mr.c +@@ -2273,8 +2273,8 @@ static int __ip6mr_fill_mroute(struct mr6_table *mrt, struct sk_buff *skb, + return 1; + } + +-int ip6mr_get_route(struct net *net, +- struct sk_buff *skb, struct rtmsg *rtm, int nowait) ++int ip6mr_get_route(struct net *net, struct sk_buff *skb, struct rtmsg *rtm, ++ int nowait, u32 portid) + { + int err; + struct mr6_table *mrt; +@@ -2319,6 +2319,7 @@ int ip6mr_get_route(struct net *net, + return -ENOMEM; + } + ++ NETLINK_CB(skb2).portid = portid; + skb_reset_transport_header(skb2); + + skb_put(skb2, sizeof(struct ipv6hdr)); +diff --git a/net/ipv6/route.c b/net/ipv6/route.c +index f862c7688c99..e19817a090c7 100644 +--- a/net/ipv6/route.c ++++ b/net/ipv6/route.c +@@ -2614,7 +2614,9 @@ static int rt6_fill_node(struct net *net, + if (iif) { + #ifdef CONFIG_IPV6_MROUTE + if (ipv6_addr_is_multicast(&rt->rt6i_dst.addr)) { +- int err = ip6mr_get_route(net, skb, rtm, nowait); ++ int err = ip6mr_get_route(net, skb, rtm, nowait, ++ portid); ++ + if (err <= 0) { + if (!nowait) { + if (err == 0) +diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c +index 0812b615885d..e5bafd576a13 100644 +--- a/net/ipv6/tcp_ipv6.c ++++ b/net/ipv6/tcp_ipv6.c +@@ -1339,7 +1339,7 @@ static int tcp_v6_do_rcv(struct sock *sk, struct sk_buff *skb) + goto discard; + #endif + +- if (sk_filter(sk, skb)) ++ if (tcp_filter(sk, skb)) + goto discard; + + /* +@@ -1509,8 +1509,10 @@ process: + if (!xfrm6_policy_check(sk, XFRM_POLICY_IN, skb)) + goto discard_and_relse; + +- if (sk_filter(sk, skb)) ++ if (tcp_filter(sk, skb)) + goto discard_and_relse; ++ th = (const struct tcphdr *)skb->data; ++ hdr = ipv6_hdr(skb); + + sk_mark_napi_id(sk, skb); + skb->dev = NULL; +diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c +index 834a41830778..4003bd682e06 100644 +--- a/net/mac80211/rx.c ++++ b/net/mac80211/rx.c +@@ -2007,16 +2007,22 @@ ieee80211_rx_h_amsdu(struct ieee80211_rx_data *rx) + if (!(status->rx_flags & IEEE80211_RX_AMSDU)) + return RX_CONTINUE; + +- if (ieee80211_has_a4(hdr->frame_control) && +- rx->sdata->vif.type == NL80211_IFTYPE_AP_VLAN && +- !rx->sdata->u.vlan.sta) +- return RX_DROP_UNUSABLE; ++ if (unlikely(ieee80211_has_a4(hdr->frame_control))) { ++ switch (rx->sdata->vif.type) { ++ case NL80211_IFTYPE_AP_VLAN: ++ if (!rx->sdata->u.vlan.sta) ++ return RX_DROP_UNUSABLE; ++ break; ++ case NL80211_IFTYPE_STATION: ++ if (!rx->sdata->u.mgd.use_4addr) ++ return RX_DROP_UNUSABLE; ++ break; ++ default: ++ return RX_DROP_UNUSABLE; ++ } ++ } + +- if (is_multicast_ether_addr(hdr->addr1) && +- ((rx->sdata->vif.type == NL80211_IFTYPE_AP_VLAN && +- rx->sdata->u.vlan.sta) || +- (rx->sdata->vif.type == NL80211_IFTYPE_STATION && +- rx->sdata->u.mgd.use_4addr))) ++ if (is_multicast_ether_addr(hdr->addr1)) + return RX_DROP_UNUSABLE; + + skb->dev = dev; +diff --git a/net/netfilter/nf_log.c b/net/netfilter/nf_log.c 
+index 85296d4eac0e..811dd66f021e 100644 +--- a/net/netfilter/nf_log.c ++++ b/net/netfilter/nf_log.c +@@ -253,7 +253,7 @@ static int nf_log_proc_dostring(struct ctl_table *table, int write, + size_t size = *lenp; + int r = 0; + int tindex = (unsigned long)table->extra1; +- struct net *net = current->nsproxy->net_ns; ++ struct net *net = table->extra2; + + if (write) { + if (size > sizeof(buf)) +@@ -306,7 +306,6 @@ static int netfilter_log_sysctl_init(struct net *net) + 3, "%d", i); + nf_log_sysctl_table[i].procname = + nf_log_sysctl_fnames[i]; +- nf_log_sysctl_table[i].data = NULL; + nf_log_sysctl_table[i].maxlen = + NFLOGGER_NAME_LEN * sizeof(char); + nf_log_sysctl_table[i].mode = 0644; +@@ -317,6 +316,9 @@ static int netfilter_log_sysctl_init(struct net *net) + } + } + ++ for (i = NFPROTO_UNSPEC; i < NFPROTO_NUMPROTO; i++) ++ table[i].extra2 = net; ++ + net->nf.nf_log_dir_header = register_net_sysctl(net, + "net/netfilter/nf_log", + table); +diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c +index 1e9cb9921daa..3f9804b2802a 100644 +--- a/net/packet/af_packet.c ++++ b/net/packet/af_packet.c +@@ -3365,6 +3365,7 @@ static int packet_notifier(struct notifier_block *this, + } + if (msg == NETDEV_UNREGISTER) { + packet_cached_dev_reset(po); ++ fanout_release(sk); + po->ifindex = -1; + if (po->prot_hook.dev) + dev_put(po->prot_hook.dev); +diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c +index 63a116c31a8b..ce6c8910f041 100644 +--- a/net/sctp/sm_statefuns.c ++++ b/net/sctp/sm_statefuns.c +@@ -3427,6 +3427,12 @@ sctp_disposition_t sctp_sf_ootb(struct net *net, + return sctp_sf_violation_chunklen(net, ep, asoc, type, arg, + commands); + ++ /* Report violation if chunk len overflows */ ++ ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length)); ++ if (ch_end > skb_tail_pointer(skb)) ++ return sctp_sf_violation_chunklen(net, ep, asoc, type, arg, ++ commands); ++ + /* Now that we know we at least have a chunk header, + * do things that are type appropriate. + */ +@@ -3458,12 +3464,6 @@ sctp_disposition_t sctp_sf_ootb(struct net *net, + } + } + +- /* Report violation if chunk len overflows */ +- ch_end = ((__u8 *)ch) + WORD_ROUND(ntohs(ch->length)); +- if (ch_end > skb_tail_pointer(skb)) +- return sctp_sf_violation_chunklen(net, ep, asoc, type, arg, +- commands); +- + ch = (sctp_chunkhdr_t *) ch_end; + } while (ch_end < skb_tail_pointer(skb)); + +diff --git a/net/sctp/socket.c b/net/sctp/socket.c +index ead3a8adca08..2c5cb6d2787d 100644 +--- a/net/sctp/socket.c ++++ b/net/sctp/socket.c +@@ -1217,9 +1217,12 @@ static int __sctp_connect(struct sock* sk, + + timeo = sock_sndtimeo(sk, f_flags & O_NONBLOCK); + +- err = sctp_wait_for_connect(asoc, &timeo); +- if ((err == 0 || err == -EINPROGRESS) && assoc_id) ++ if (assoc_id) + *assoc_id = asoc->assoc_id; ++ err = sctp_wait_for_connect(asoc, &timeo); ++ /* Note: the asoc may be freed after the return of ++ * sctp_wait_for_connect. ++ */ + + /* Don't free association on exit. 
*/ + asoc = NULL; +@@ -4247,7 +4250,7 @@ static int sctp_getsockopt_disable_fragments(struct sock *sk, int len, + static int sctp_getsockopt_events(struct sock *sk, int len, char __user *optval, + int __user *optlen) + { +- if (len <= 0) ++ if (len == 0) + return -EINVAL; + if (len > sizeof(struct sctp_event_subscribe)) + len = sizeof(struct sctp_event_subscribe); +@@ -5758,6 +5761,9 @@ static int sctp_getsockopt(struct sock *sk, int level, int optname, + if (get_user(len, optlen)) + return -EFAULT; + ++ if (len < 0) ++ return -EINVAL; ++ + sctp_lock_sock(sk); + + switch (optname) { +diff --git a/scripts/gcc-x86_64-has-stack-protector.sh b/scripts/gcc-x86_64-has-stack-protector.sh +index 973e8c141567..17867e723a51 100644 +--- a/scripts/gcc-x86_64-has-stack-protector.sh ++++ b/scripts/gcc-x86_64-has-stack-protector.sh +@@ -1,6 +1,6 @@ + #!/bin/sh + +-echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -O0 -mcmodel=kernel -fstack-protector - -o - 2> /dev/null | grep -q "%gs" ++echo "int foo(void) { char X[200]; return 3; }" | $* -S -x c -c -O0 -mcmodel=kernel -fno-PIE -fstack-protector - -o - 2> /dev/null | grep -q "%gs" + if [ "$?" -eq "0" ] ; then + echo y + else +diff --git a/security/keys/proc.c b/security/keys/proc.c +index 217b6855e815..374c3301b802 100644 +--- a/security/keys/proc.c ++++ b/security/keys/proc.c +@@ -188,7 +188,7 @@ static int proc_keys_show(struct seq_file *m, void *v) + struct timespec now; + unsigned long timo; + key_ref_t key_ref, skey_ref; +- char xbuf[12]; ++ char xbuf[16]; + int rc; + + key_ref = make_key_ref(key, 0); +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c +index 6a5e36dc23e5..202150d7873c 100644 +--- a/sound/pci/hda/hda_intel.c ++++ b/sound/pci/hda/hda_intel.c +@@ -594,7 +594,7 @@ enum { + #define AZX_DCAPS_NVIDIA_SNOOP (1 << 11) /* Nvidia snoop enable */ + #define AZX_DCAPS_SCH_SNOOP (1 << 12) /* SCH/PCH snoop enable */ + #define AZX_DCAPS_RIRB_DELAY (1 << 13) /* Long delay in read loop */ +-#define AZX_DCAPS_RIRB_PRE_DELAY (1 << 14) /* Put a delay before read */ ++/* 14 unused */ + #define AZX_DCAPS_CTX_WORKAROUND (1 << 15) /* X-Fi workaround */ + #define AZX_DCAPS_POSFIX_LPIB (1 << 16) /* Use LPIB as default */ + #define AZX_DCAPS_POSFIX_VIA (1 << 17) /* Use VIACOMBO as default */ +@@ -1540,7 +1540,7 @@ static irqreturn_t azx_interrupt(int irq, void *dev_id) + status = azx_readb(chip, RIRBSTS); + if (status & RIRB_INT_MASK) { + if (status & RIRB_INT_RESPONSE) { +- if (chip->driver_caps & AZX_DCAPS_RIRB_PRE_DELAY) ++ if (chip->driver_caps & AZX_DCAPS_CTX_WORKAROUND) + udelay(80); + azx_update_rirb(chip); + } +@@ -4288,14 +4288,12 @@ static DEFINE_PCI_DEVICE_TABLE(azx_ids) = { + .class = PCI_CLASS_MULTIMEDIA_HD_AUDIO << 8, + .class_mask = 0xffffff, + .driver_data = AZX_DRIVER_CTX | AZX_DCAPS_CTX_WORKAROUND | +- AZX_DCAPS_NO_64BIT | +- AZX_DCAPS_RIRB_PRE_DELAY | AZX_DCAPS_POSFIX_LPIB }, ++ AZX_DCAPS_NO_64BIT | AZX_DCAPS_POSFIX_LPIB }, + #else + /* this entry seems still valid -- i.e. 
without emu20kx chip */ + { PCI_DEVICE(0x1102, 0x0009), + .driver_data = AZX_DRIVER_CTX | AZX_DCAPS_CTX_WORKAROUND | +- AZX_DCAPS_NO_64BIT | +- AZX_DCAPS_RIRB_PRE_DELAY | AZX_DCAPS_POSFIX_LPIB }, ++ AZX_DCAPS_NO_64BIT | AZX_DCAPS_POSFIX_LPIB }, + #endif + /* Vortex86MX */ + { PCI_DEVICE(0x17f3, 0x3010), .driver_data = AZX_DRIVER_GENERIC }, +diff --git a/sound/soc/codecs/cs4270.c b/sound/soc/codecs/cs4270.c +index 83c835d9fd88..67c82956367d 100644 +--- a/sound/soc/codecs/cs4270.c ++++ b/sound/soc/codecs/cs4270.c +@@ -148,11 +148,11 @@ SND_SOC_DAPM_OUTPUT("AOUTR"), + }; + + static const struct snd_soc_dapm_route cs4270_dapm_routes[] = { +- { "Capture", NULL, "AINA" }, +- { "Capture", NULL, "AINB" }, ++ { "Capture", NULL, "AINL" }, ++ { "Capture", NULL, "AINR" }, + +- { "AOUTA", NULL, "Playback" }, +- { "AOUTB", NULL, "Playback" }, ++ { "AOUTL", NULL, "Playback" }, ++ { "AOUTR", NULL, "Playback" }, + }; + + /** +diff --git a/sound/usb/card.c b/sound/usb/card.c +index bc5795f342a7..96a09226be7d 100644 +--- a/sound/usb/card.c ++++ b/sound/usb/card.c +@@ -661,7 +661,7 @@ int snd_usb_autoresume(struct snd_usb_audio *chip) + int err = -ENODEV; + + down_read(&chip->shutdown_rwsem); +- if (chip->probing && chip->in_pm) ++ if (chip->probing || chip->in_pm) + err = 0; + else if (!chip->shutdown) + err = usb_autopm_get_interface(chip->pm_intf); +diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h +index c600d4277974..a1f08d8c7bd2 100644 +--- a/sound/usb/quirks-table.h ++++ b/sound/usb/quirks-table.h +@@ -2953,6 +2953,23 @@ AU0828_DEVICE(0x2040, 0x7260, "Hauppauge", "HVR-950Q"), + AU0828_DEVICE(0x2040, 0x7213, "Hauppauge", "HVR-950Q"), + AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"), + ++/* Syntek STK1160 */ ++{ ++ .match_flags = USB_DEVICE_ID_MATCH_DEVICE | ++ USB_DEVICE_ID_MATCH_INT_CLASS | ++ USB_DEVICE_ID_MATCH_INT_SUBCLASS, ++ .idVendor = 0x05e1, ++ .idProduct = 0x0408, ++ .bInterfaceClass = USB_CLASS_AUDIO, ++ .bInterfaceSubClass = USB_SUBCLASS_AUDIOCONTROL, ++ .driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) { ++ .vendor_name = "Syntek", ++ .product_name = "STK1160", ++ .ifnum = QUIRK_ANY_INTERFACE, ++ .type = QUIRK_AUDIO_ALIGN_TRANSFER ++ } ++}, ++ + /* Digidesign Mbox */ + { + /* Thanks to Clemens Ladisch <clemens@ladisch.de> */ +diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c +index 3351605d2608..e7a1166c3eb4 100644 +--- a/virt/kvm/kvm_main.c ++++ b/virt/kvm/kvm_main.c +@@ -104,7 +104,7 @@ static bool largepages_enabled = true; + bool kvm_is_mmio_pfn(pfn_t pfn) + { + if (pfn_valid(pfn)) +- return PageReserved(pfn_to_page(pfn)); ++ return !is_zero_pfn(pfn) && PageReserved(pfn_to_page(pfn)); + + return true; + }
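
A minimal userspace sketch (not part of the patch) of why the sk_mem_uncharge() hunk above caps sk->sk_forward_alloc: if TCP send queues keep returning forward-allocated bytes without a reclaim, a signed 32-bit counter can be pushed past INT_MAX, so the patch reclaims 1 MB as soon as 2 MB accumulate. Only the two constants are copied from the hunk; mem_uncharge(), forward_alloc, and the macro names are illustrative stand-ins.

    #include <stdio.h>

    #define SK_RECLAIM_THRESHOLD  (1 << 21)  /* 2 MB, as in the hunk */
    #define SK_RECLAIM_CHUNK      (1 << 20)  /* 1 MB, as in the hunk */

    static int forward_alloc;  /* stand-in for sk->sk_forward_alloc */

    static void mem_uncharge(int size)
    {
        forward_alloc += size;
        /* The patched guard: without it, the loop in main() would push
         * this signed counter past INT_MAX (4 GB of uncharges). */
        if (forward_alloc >= SK_RECLAIM_THRESHOLD)
            forward_alloc -= SK_RECLAIM_CHUNK;  /* __sk_mem_reclaim() */
    }

    int main(void)
    {
        for (int i = 0; i < (1 << 16); i++)
            mem_uncharge(64 * 1024);  /* return 64 KB per skb */
        printf("forward_alloc stays bounded: %d bytes\n", forward_alloc);
        return 0;
    }

With the guard in place the counter never exceeds 2 MB once a call returns; the real __sk_mem_reclaim() also adjusts memory_allocated, which this sketch omits.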
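
Likewise, a sketch of the offsetofend() contract that the dccp_v4_err()/dccp_v6_err() hunks lean on: the macro is dropped from vfio.h above because the patch makes it generally available, and the DCCP error handlers use it to prove at build time that both ports sit inside the 8 bytes their callers already pulled. Everything below is illustrative, not patch code: struct dccp_hdr_sketch is a simplified stand-in for struct dccp_hdr, and a runtime assert() stands in for the kernel's compile-time BUILD_BUG_ON().

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Offset of the first byte past MEMBER; a portable equivalent of the
     * statement-expression version removed from vfio.h above. */
    #define offsetofend(TYPE, MEMBER) \
        (offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

    /* Simplified stand-in for the start of struct dccp_hdr. */
    struct dccp_hdr_sketch {
        uint16_t dccph_sport;  /* bytes 0..1 */
        uint16_t dccph_dport;  /* bytes 2..3 */
        uint8_t  dccph_doff;   /* byte 4 */
    };

    int main(void)
    {
        /* The ICMP error path guarantees only 8 bytes of the inner
         * transport header; both ports must end within that window. */
        assert(offsetofend(struct dccp_hdr_sketch, dccph_sport) <= 8);
        assert(offsetofend(struct dccp_hdr_sketch, dccph_dport) <= 8);

        printf("sport ends at byte %zu, dport at byte %zu\n",
               offsetofend(struct dccp_hdr_sketch, dccph_sport),
               offsetofend(struct dccp_hdr_sketch, dccph_dport));
        return 0;
    }

Running it prints that dccph_sport ends at byte 2 and dccph_dport at byte 4, both inside the 8-byte window, which is exactly what the BUILD_BUG_ON()s in the hunks assert at compile time.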