From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1508327475.f3c55170f5ab7f92a5cef3ea0b6c5f9f910d1145.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:4.1 commit in: /
Date: Wed, 18 Oct 2017 11:51:25 +0000 (UTC)
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1043_linux-4.1.44.patch 1044_linux-4.1.45.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: f3c55170f5ab7f92a5cef3ea0b6c5f9f910d1145
X-VCS-Branch: 4.1

commit:     f3c55170f5ab7f92a5cef3ea0b6c5f9f910d1145
Author:     Mike Pagano gentoo org>
AuthorDate: Wed Oct 18 11:51:15 2017 +0000
Commit:     Mike Pagano gentoo org>
CommitDate: Wed Oct 18 11:51:15 2017 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f3c55170

Linux patches 4.1.44 and 4.1.45

 0000_README             |    8 +
 1043_linux-4.1.44.patch | 4971 +++++++++++++++++++++++++++++++++++++++++++++++
 1044_linux-4.1.45.patch | 4031 ++++++++++++++++++++++++++++++++++++++
 3 files changed, 9010 insertions(+)

diff --git a/0000_README b/0000_README
index 959795e..43ea8eb 100644
--- a/0000_README
+++ b/0000_README
@@ -215,6 +215,14 @@ Patch:  1042_linux-4.1.43.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.1.43
 
+Patch:  1043_linux-4.1.44.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.1.44
+
+Patch:  1044_linux-4.1.45.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.1.45
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
diff --git a/1043_linux-4.1.44.patch b/1043_linux-4.1.44.patch new file mode 100644 index 0000000..962183f --- /dev/null +++ b/1043_linux-4.1.44.patch @@ -0,0 +1,4971 @@ +diff --git a/Makefile b/Makefile +index 50d0a93fa343..9c7aa08c70b7 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,6 +1,6 @@ + VERSION = 4 + PATCHLEVEL = 1 +-SUBLEVEL = 43 ++SUBLEVEL = 44 + EXTRAVERSION = + NAME = Series 4800 + +diff --git a/arch/arm/boot/dts/armada-388-gp.dts b/arch/arm/boot/dts/armada-388-gp.dts +index 757ac079e7f2..bcf4f1b6b2bc 100644 +--- a/arch/arm/boot/dts/armada-388-gp.dts ++++ b/arch/arm/boot/dts/armada-388-gp.dts +@@ -91,7 +91,7 @@ + pinctrl-names = "default"; + pinctrl-0 = <&pca0_pins>; + interrupt-parent = <&gpio0>; +- interrupts = <18 IRQ_TYPE_EDGE_FALLING>; ++ interrupts = <18 IRQ_TYPE_LEVEL_LOW>; + gpio-controller; + #gpio-cells = <2>; + interrupt-controller; +@@ -103,7 +103,7 @@ + compatible = "nxp,pca9555"; + pinctrl-names = "default"; + interrupt-parent = <&gpio0>; +- interrupts = <18 IRQ_TYPE_EDGE_FALLING>; ++ interrupts = <18 IRQ_TYPE_LEVEL_LOW>; + gpio-controller; + #gpio-cells = <2>; + interrupt-controller; +diff --git a/arch/arm/boot/dts/omap3-n900.dts b/arch/arm/boot/dts/omap3-n900.dts +index 5f5e0f3d5b64..27cd4abfc74d 100644 +--- a/arch/arm/boot/dts/omap3-n900.dts ++++ b/arch/arm/boot/dts/omap3-n900.dts +@@ -697,6 +697,8 @@ + vmmc_aux-supply = <&vsim>; + bus-width = <8>; + non-removable; ++ no-sdio; ++ no-sd; + }; + + &mmc3 { +diff --git a/arch/arm/configs/s3c2410_defconfig b/arch/arm/configs/s3c2410_defconfig +index f3142369f594..01116ee1284b 100644 +--- a/arch/arm/configs/s3c2410_defconfig ++++ b/arch/arm/configs/s3c2410_defconfig +@@ -87,9 +87,9 @@ CONFIG_IPV6_TUNNEL=m + CONFIG_NETFILTER=y + CONFIG_NF_CONNTRACK=m + CONFIG_NF_CONNTRACK_EVENTS=y +-CONFIG_NF_CT_PROTO_DCCP=m +-CONFIG_NF_CT_PROTO_SCTP=m +-CONFIG_NF_CT_PROTO_UDPLITE=m ++CONFIG_NF_CT_PROTO_DCCP=y ++CONFIG_NF_CT_PROTO_SCTP=y ++CONFIG_NF_CT_PROTO_UDPLITE=y + CONFIG_NF_CONNTRACK_AMANDA=m + 
CONFIG_NF_CONNTRACK_FTP=m + CONFIG_NF_CONNTRACK_H323=m +diff --git a/arch/arm/include/asm/ftrace.h b/arch/arm/include/asm/ftrace.h +index bfe2a2f5a644..22b73112b75f 100644 +--- a/arch/arm/include/asm/ftrace.h ++++ b/arch/arm/include/asm/ftrace.h +@@ -54,6 +54,24 @@ static inline void *return_address(unsigned int level) + + #define ftrace_return_address(n) return_address(n) + ++#define ARCH_HAS_SYSCALL_MATCH_SYM_NAME ++ ++static inline bool arch_syscall_match_sym_name(const char *sym, ++ const char *name) ++{ ++ if (!strcmp(sym, "sys_mmap2")) ++ sym = "sys_mmap_pgoff"; ++ else if (!strcmp(sym, "sys_statfs64_wrapper")) ++ sym = "sys_statfs64"; ++ else if (!strcmp(sym, "sys_fstatfs64_wrapper")) ++ sym = "sys_fstatfs64"; ++ else if (!strcmp(sym, "sys_arm_fadvise64_64")) ++ sym = "sys_fadvise64_64"; ++ ++ /* Ignore case since sym may start with "SyS" instead of "sys" */ ++ return !strcasecmp(sym, name); ++} ++ + #endif /* ifndef __ASSEMBLY__ */ + + #endif /* _ASM_ARM_FTRACE */ +diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c +index 4e15eed87074..3ca19cdb0eac 100644 +--- a/arch/arm/kvm/mmu.c ++++ b/arch/arm/kvm/mmu.c +@@ -1611,12 +1611,16 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, void *data) + + int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end) + { ++ if (!kvm->arch.pgd) ++ return 0; + trace_kvm_age_hva(start, end); + return handle_hva_to_gpa(kvm, start, end, kvm_age_hva_handler, NULL); + } + + int kvm_test_age_hva(struct kvm *kvm, unsigned long hva) + { ++ if (!kvm->arch.pgd) ++ return 0; + trace_kvm_test_age_hva(hva); + return handle_hva_to_gpa(kvm, hva, hva, kvm_test_age_hva_handler, NULL); + } +diff --git a/arch/arm64/boot/dts/xilinx/zynqmp-ep108.dts b/arch/arm64/boot/dts/xilinx/zynqmp-ep108.dts +index 0a3f40ecd06d..96235d2b135d 100644 +--- a/arch/arm64/boot/dts/xilinx/zynqmp-ep108.dts ++++ b/arch/arm64/boot/dts/xilinx/zynqmp-ep108.dts +@@ -26,7 +26,7 @@ + stdout-path = "serial0:115200n8"; + }; + +- memory { ++ 
memory@0 { + device_type = "memory"; + reg = <0x0 0x0 0x40000000>; + }; +diff --git a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi +index 11e0b00045cf..0cb2cdfd7309 100644 +--- a/arch/arm64/boot/dts/xilinx/zynqmp.dtsi ++++ b/arch/arm64/boot/dts/xilinx/zynqmp.dtsi +@@ -71,7 +71,7 @@ + <1 10 0xf01>; + }; + +- amba_apu { ++ amba_apu: amba_apu@0 { + compatible = "simple-bus"; + #address-cells = <2>; + #size-cells = <1>; +@@ -251,7 +251,7 @@ + }; + + i2c0: i2c@ff020000 { +- compatible = "cdns,i2c-r1p10"; ++ compatible = "cdns,i2c-r1p14", "cdns,i2c-r1p10"; + status = "disabled"; + interrupt-parent = <&gic>; + interrupts = <0 17 4>; +@@ -262,7 +262,7 @@ + }; + + i2c1: i2c@ff030000 { +- compatible = "cdns,i2c-r1p10"; ++ compatible = "cdns,i2c-r1p14", "cdns,i2c-r1p10"; + status = "disabled"; + interrupt-parent = <&gic>; + interrupts = <0 18 4>; +diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c +index fa5efaa5c3ac..16523fbd9671 100644 +--- a/arch/arm64/mm/fault.c ++++ b/arch/arm64/mm/fault.c +@@ -62,21 +62,21 @@ void show_pte(struct mm_struct *mm, unsigned long addr) + break; + + pud = pud_offset(pgd, addr); +- printk(", *pud=%016llx", pud_val(*pud)); ++ pr_cont(", *pud=%016llx", pud_val(*pud)); + if (pud_none(*pud) || pud_bad(*pud)) + break; + + pmd = pmd_offset(pud, addr); +- printk(", *pmd=%016llx", pmd_val(*pmd)); ++ pr_cont(", *pmd=%016llx", pmd_val(*pmd)); + if (pmd_none(*pmd) || pmd_bad(*pmd)) + break; + + pte = pte_offset_map(pmd, addr); +- printk(", *pte=%016llx", pte_val(*pte)); ++ pr_cont(", *pte=%016llx", pte_val(*pte)); + pte_unmap(pte); + } while(0); + +- printk("\n"); ++ pr_cont("\n"); + } + + /* +diff --git a/arch/mips/include/asm/branch.h b/arch/mips/include/asm/branch.h +index de781cf54bc7..da80878f2c0d 100644 +--- a/arch/mips/include/asm/branch.h ++++ b/arch/mips/include/asm/branch.h +@@ -74,10 +74,7 @@ static inline int compute_return_epc(struct pt_regs *regs) + return __microMIPS_compute_return_epc(regs); + if 
(cpu_has_mips16) + return __MIPS16e_compute_return_epc(regs); +- return regs->cp0_epc; +- } +- +- if (!delay_slot(regs)) { ++ } else if (!delay_slot(regs)) { + regs->cp0_epc += 4; + return 0; + } +diff --git a/arch/mips/kernel/branch.c b/arch/mips/kernel/branch.c +index fe376aa705c5..13254da66ce8 100644 +--- a/arch/mips/kernel/branch.c ++++ b/arch/mips/kernel/branch.c +@@ -399,7 +399,7 @@ int __MIPS16e_compute_return_epc(struct pt_regs *regs) + * + * @regs: Pointer to pt_regs + * @insn: branch instruction to decode +- * @returns: -EFAULT on error and forces SIGBUS, and on success ++ * @returns: -EFAULT on error and forces SIGILL, and on success + * returns 0 or BRANCH_LIKELY_TAKEN as appropriate after + * evaluating the branch. + * +@@ -556,6 +556,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs, + /* + * These are unconditional and in j_format. + */ ++ case jalx_op: + case jal_op: + regs->regs[31] = regs->cp0_epc + 8; + case j_op: +@@ -843,8 +844,9 @@ int __compute_return_epc_for_insn(struct pt_regs *regs, + return ret; + + sigill_dsp: +- printk("%s: DSP branch but not DSP ASE - sending SIGBUS.\n", current->comm); +- force_sig(SIGBUS, current); ++ pr_info("%s: DSP branch but not DSP ASE - sending SIGILL.\n", ++ current->comm); ++ force_sig(SIGILL, current); + return -EFAULT; + sigill_r6: + pr_info("%s: R2 branch but r2-to-r6 emulator is not preset - sending SIGILL.\n", +diff --git a/arch/mips/kernel/proc.c b/arch/mips/kernel/proc.c +index 298b2b773d12..f1fab6ff53e6 100644 +--- a/arch/mips/kernel/proc.c ++++ b/arch/mips/kernel/proc.c +@@ -83,7 +83,7 @@ static int show_cpuinfo(struct seq_file *m, void *v) + } + + seq_printf(m, "isa\t\t\t:"); +- if (cpu_has_mips_r1) ++ if (cpu_has_mips_1) + seq_printf(m, " mips1"); + if (cpu_has_mips_2) + seq_printf(m, "%s", " mips2"); +diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c +index f7968b5149b0..5c3aa41a162f 100644 +--- a/arch/mips/kernel/ptrace.c ++++ b/arch/mips/kernel/ptrace.c +@@ -838,7 
+838,7 @@ asmlinkage void syscall_trace_leave(struct pt_regs *regs) + audit_syscall_exit(regs); + + if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT))) +- trace_sys_exit(regs, regs->regs[2]); ++ trace_sys_exit(regs, regs_return_value(regs)); + + if (test_thread_flag(TIF_SYSCALL_TRACE)) + tracehook_report_syscall_exit(regs, 0); +diff --git a/arch/mips/kernel/scall32-o32.S b/arch/mips/kernel/scall32-o32.S +index 6e8de80bb446..d516765ce320 100644 +--- a/arch/mips/kernel/scall32-o32.S ++++ b/arch/mips/kernel/scall32-o32.S +@@ -362,7 +362,7 @@ EXPORT(sys_call_table) + PTR sys_writev + PTR sys_cacheflush + PTR sys_cachectl +- PTR sys_sysmips ++ PTR __sys_sysmips + PTR sys_ni_syscall /* 4150 */ + PTR sys_getsid + PTR sys_fdatasync +diff --git a/arch/mips/kernel/scall64-64.S b/arch/mips/kernel/scall64-64.S +index a6f6b762c47a..a60edb497da3 100644 +--- a/arch/mips/kernel/scall64-64.S ++++ b/arch/mips/kernel/scall64-64.S +@@ -318,7 +318,7 @@ EXPORT(sys_call_table) + PTR sys_sched_getaffinity + PTR sys_cacheflush + PTR sys_cachectl +- PTR sys_sysmips ++ PTR __sys_sysmips + PTR sys_io_setup /* 5200 */ + PTR sys_io_destroy + PTR sys_io_getevents +diff --git a/arch/mips/kernel/scall64-n32.S b/arch/mips/kernel/scall64-n32.S +index 97fa4c7b9a5e..5de53e4b9607 100644 +--- a/arch/mips/kernel/scall64-n32.S ++++ b/arch/mips/kernel/scall64-n32.S +@@ -307,7 +307,7 @@ EXPORT(sysn32_call_table) + PTR compat_sys_sched_getaffinity + PTR sys_cacheflush + PTR sys_cachectl +- PTR sys_sysmips ++ PTR __sys_sysmips + PTR compat_sys_io_setup /* 6200 */ + PTR sys_io_destroy + PTR compat_sys_io_getevents +diff --git a/arch/mips/kernel/scall64-o32.S b/arch/mips/kernel/scall64-o32.S +index 80e39776e377..185092b9ecc1 100644 +--- a/arch/mips/kernel/scall64-o32.S ++++ b/arch/mips/kernel/scall64-o32.S +@@ -359,7 +359,7 @@ EXPORT(sys32_call_table) + PTR compat_sys_writev + PTR sys_cacheflush + PTR sys_cachectl +- PTR sys_sysmips ++ PTR __sys_sysmips + PTR sys_ni_syscall /* 4150 */ + PTR sys_getsid + PTR 
sys_fdatasync +diff --git a/arch/mips/kernel/syscall.c b/arch/mips/kernel/syscall.c +index 53a7ef9a8f32..4234b2d726c5 100644 +--- a/arch/mips/kernel/syscall.c ++++ b/arch/mips/kernel/syscall.c +@@ -28,6 +28,7 @@ + #include + + #include ++#include + #include + #include + #include +@@ -138,10 +139,12 @@ static inline int mips_atomic_set(unsigned long addr, unsigned long new) + __asm__ __volatile__ ( + " .set "MIPS_ISA_ARCH_LEVEL" \n" + " li %[err], 0 \n" +- "1: ll %[old], (%[addr]) \n" ++ "1: \n" ++ user_ll("%[old]", "(%[addr])") + " move %[tmp], %[new] \n" +- "2: sc %[tmp], (%[addr]) \n" +- " bnez %[tmp], 4f \n" ++ "2: \n" ++ user_sc("%[tmp]", "(%[addr])") ++ " beqz %[tmp], 4f \n" + "3: \n" + " .insn \n" + " .subsection 2 \n" +@@ -199,6 +202,12 @@ static inline int mips_atomic_set(unsigned long addr, unsigned long new) + unreachable(); + } + ++/* ++ * mips_atomic_set() normally returns directly via syscall_exit potentially ++ * clobbering static registers, so be sure to preserve them. ++ */ ++save_static_function(sys_sysmips); ++ + SYSCALL_DEFINE3(sysmips, long, cmd, long, arg1, long, arg2) + { + switch (cmd) { +diff --git a/arch/mips/math-emu/cp1emu.c b/arch/mips/math-emu/cp1emu.c +index 81f645973eb3..62ad117675b3 100644 +--- a/arch/mips/math-emu/cp1emu.c ++++ b/arch/mips/math-emu/cp1emu.c +@@ -2140,6 +2140,35 @@ dcopuop: + return 0; + } + ++/* ++ * Emulate FPU instructions. ++ * ++ * If we use FPU hardware, then we have been typically called to handle ++ * an unimplemented operation, such as where an operand is a NaN or ++ * denormalized. In that case exit the emulation loop after a single ++ * iteration so as to let hardware execute any subsequent instructions. 
++ * ++ * If we have no FPU hardware or it has been disabled, then continue ++ * emulating floating-point instructions until one of these conditions ++ * has occurred: ++ * ++ * - a non-FPU instruction has been encountered, ++ * ++ * - an attempt to emulate has ended with a signal, ++ * ++ * - the ISA mode has been switched. ++ * ++ * We need to terminate the emulation loop if we got switched to the ++ * MIPS16 mode, whether supported or not, so that we do not attempt ++ * to emulate a MIPS16 instruction as a regular MIPS FPU instruction. ++ * Similarly if we got switched to the microMIPS mode and only the ++ * regular MIPS mode is supported, so that we do not attempt to emulate ++ * a microMIPS instruction as a regular MIPS FPU instruction. Or if ++ * we got switched to the regular MIPS mode and only the microMIPS mode ++ * is supported, so that we do not attempt to emulate a regular MIPS ++ * instruction that should cause an Address Error exception instead. ++ * For simplicity we always terminate upon an ISA mode switch. ++ */ + int fpu_emulator_cop1Handler(struct pt_regs *xcp, struct mips_fpu_struct *ctx, + int has_fpu, void *__user *fault_addr) + { +@@ -2225,6 +2254,15 @@ int fpu_emulator_cop1Handler(struct pt_regs *xcp, struct mips_fpu_struct *ctx, + break; + if (sig) + break; ++ /* ++ * We have to check for the ISA bit explicitly here, ++ * because `get_isa16_mode' may return 0 if support ++ * for code compression has been globally disabled, ++ * or otherwise we may produce the wrong signal or ++ * even proceed successfully where we must not. ++ */ ++ if ((xcp->cp0_epc ^ prevepc) & 0x1) ++ break; + + cond_resched(); + } while (xcp->cp0_epc > prevepc); +diff --git a/arch/openrisc/kernel/vmlinux.lds.S b/arch/openrisc/kernel/vmlinux.lds.S +index 2d69a853b742..3a08b55609b6 100644 +--- a/arch/openrisc/kernel/vmlinux.lds.S ++++ b/arch/openrisc/kernel/vmlinux.lds.S +@@ -38,6 +38,8 @@ SECTIONS + /* Read-only sections, merged into text segment: */ + . 
= LOAD_BASE ; + ++ _text = .; ++ + /* _s_kernel_ro must be page aligned */ + . = ALIGN(PAGE_SIZE); + _s_kernel_ro = .; +diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h +index 512d2782b043..0d6670056cd2 100644 +--- a/arch/powerpc/include/asm/atomic.h ++++ b/arch/powerpc/include/asm/atomic.h +@@ -453,7 +453,7 @@ static __inline__ int atomic64_add_unless(atomic64_t *v, long a, long u) + * Atomically increments @v by 1, so long as @v is non-zero. + * Returns non-zero if @v was non-zero, and zero otherwise. + */ +-static __inline__ long atomic64_inc_not_zero(atomic64_t *v) ++static __inline__ int atomic64_inc_not_zero(atomic64_t *v) + { + long t1, t2; + +@@ -472,7 +472,7 @@ static __inline__ long atomic64_inc_not_zero(atomic64_t *v) + : "r" (&v->counter) + : "cc", "xer", "memory"); + +- return t1; ++ return t1 != 0; + } + + #endif /* __powerpc64__ */ +diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h +index a4bf6e0eb813..e97e58e28668 100644 +--- a/arch/powerpc/include/asm/reg.h ++++ b/arch/powerpc/include/asm/reg.h +@@ -1237,7 +1237,7 @@ static inline unsigned long mfvtb (void) + " .llong 0\n" \ + ".previous" \ + : "=r" (rval) \ +- : "i" (CPU_FTR_CELL_TB_BUG), "i" (SPRN_TBRL)); \ ++ : "i" (CPU_FTR_CELL_TB_BUG), "i" (SPRN_TBRL) : "cr0"); \ + rval;}) + #else + #define mftb() ({unsigned long rval; \ +diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c +index c1e10ffadd17..8e7a6c8efd27 100644 +--- a/arch/powerpc/kvm/book3s_hv.c ++++ b/arch/powerpc/kvm/book3s_hv.c +@@ -2232,6 +2232,10 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu) + { + int r; + int srcu_idx; ++ unsigned long ebb_regs[3] = {}; /* shut up GCC */ ++ unsigned long user_tar = 0; ++ unsigned long proc_fscr = 0; ++ unsigned int user_vrsave; + + if (!vcpu->arch.sane) { + run->exit_reason = KVM_EXIT_INTERNAL_ERROR; +@@ -2281,6 +2285,17 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu 
*vcpu) + flush_fp_to_thread(current); + flush_altivec_to_thread(current); + flush_vsx_to_thread(current); ++ ++ /* Save userspace EBB and other register values */ ++ if (cpu_has_feature(CPU_FTR_ARCH_207S)) { ++ ebb_regs[0] = mfspr(SPRN_EBBHR); ++ ebb_regs[1] = mfspr(SPRN_EBBRR); ++ ebb_regs[2] = mfspr(SPRN_BESCR); ++ user_tar = mfspr(SPRN_TAR); ++ proc_fscr = mfspr(SPRN_FSCR); ++ } ++ user_vrsave = mfspr(SPRN_VRSAVE); ++ + vcpu->arch.wqp = &vcpu->arch.vcore->wq; + vcpu->arch.pgdir = current->mm->pgd; + vcpu->arch.state = KVMPPC_VCPU_BUSY_IN_HOST; +@@ -2302,6 +2317,16 @@ static int kvmppc_vcpu_run_hv(struct kvm_run *run, struct kvm_vcpu *vcpu) + } + } while (is_kvmppc_resume_guest(r)); + ++ /* Restore userspace EBB and other register values */ ++ if (cpu_has_feature(CPU_FTR_ARCH_207S)) { ++ mtspr(SPRN_EBBHR, ebb_regs[0]); ++ mtspr(SPRN_EBBRR, ebb_regs[1]); ++ mtspr(SPRN_BESCR, ebb_regs[2]); ++ mtspr(SPRN_TAR, user_tar); ++ mtspr(SPRN_FSCR, proc_fscr); ++ } ++ mtspr(SPRN_VRSAVE, user_vrsave); ++ + out: + vcpu->arch.state = KVMPPC_VCPU_NOTREADY; + atomic_dec(&vcpu->kvm->arch.vcpus_running); +diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S +index 70eaf547703e..a3018f109cd3 100644 +--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S ++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S +@@ -36,6 +36,13 @@ + #define NAPPING_CEDE 1 + #define NAPPING_NOVCPU 2 + ++/* Stack frame offsets for kvmppc_hv_entry */ ++#define SFS 112 ++#define STACK_SLOT_TRAP (SFS-4) ++#define STACK_SLOT_CIABR (SFS-16) ++#define STACK_SLOT_DAWR (SFS-24) ++#define STACK_SLOT_DAWRX (SFS-32) ++ + /* + * Call kvmppc_hv_entry in real mode. + * Must be called with interrupts hard-disabled. 
+@@ -265,10 +272,10 @@ kvm_novcpu_exit: + bl kvmhv_accumulate_time + #endif + 13: mr r3, r12 +- stw r12, 112-4(r1) ++ stw r12, STACK_SLOT_TRAP(r1) + bl kvmhv_commence_exit + nop +- lwz r12, 112-4(r1) ++ lwz r12, STACK_SLOT_TRAP(r1) + b kvmhv_switch_to_host + + /* +@@ -404,7 +411,7 @@ kvmppc_hv_entry: + */ + mflr r0 + std r0, PPC_LR_STKOFF(r1) +- stdu r1, -112(r1) ++ stdu r1, -SFS(r1) + + /* Save R1 in the PACA */ + std r1, HSTATE_HOST_R1(r13) +@@ -558,6 +565,16 @@ kvmppc_got_guest: + mtspr SPRN_PURR,r7 + mtspr SPRN_SPURR,r8 + ++ /* Save host values of some registers */ ++BEGIN_FTR_SECTION ++ mfspr r5, SPRN_CIABR ++ mfspr r6, SPRN_DAWR ++ mfspr r7, SPRN_DAWRX ++ std r5, STACK_SLOT_CIABR(r1) ++ std r6, STACK_SLOT_DAWR(r1) ++ std r7, STACK_SLOT_DAWRX(r1) ++END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) ++ + BEGIN_FTR_SECTION + /* Set partition DABR */ + /* Do this before re-enabling PMU to avoid P7 DABR corruption bug */ +@@ -1169,8 +1186,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) + */ + li r0, 0 + mtspr SPRN_IAMR, r0 +- mtspr SPRN_CIABR, r0 +- mtspr SPRN_DAWRX, r0 ++ mtspr SPRN_PSPB, r0 + mtspr SPRN_TCSCR, r0 + mtspr SPRN_WORT, r0 + /* Set MMCRS to 1<<31 to freeze and disable the SPMC counters */ +@@ -1186,6 +1202,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S) + std r6,VCPU_UAMOR(r9) + li r6,0 + mtspr SPRN_AMR,r6 ++ mtspr SPRN_UAMOR, r6 + + /* Switch DSCR back to host value */ + mfspr r8, SPRN_DSCR +@@ -1327,6 +1344,16 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) + slbia + ptesync + ++ /* Restore host values of some registers */ ++BEGIN_FTR_SECTION ++ ld r5, STACK_SLOT_CIABR(r1) ++ ld r6, STACK_SLOT_DAWR(r1) ++ ld r7, STACK_SLOT_DAWRX(r1) ++ mtspr SPRN_CIABR, r5 ++ mtspr SPRN_DAWR, r6 ++ mtspr SPRN_DAWRX, r7 ++END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) ++ + /* + * POWER7/POWER8 guest -> host partition switch code. 
+ * We don't have to lock against tlbies but we do +@@ -1431,8 +1458,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_207S) + li r0, KVM_GUEST_MODE_NONE + stb r0, HSTATE_IN_GUEST(r13) + +- ld r0, 112+PPC_LR_STKOFF(r1) +- addi r1, r1, 112 ++ ld r0, SFS+PPC_LR_STKOFF(r1) ++ addi r1, r1, SFS + mtlr r0 + blr + +diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c +index 4014881e9843..e37162d356d8 100644 +--- a/arch/powerpc/lib/sstep.c ++++ b/arch/powerpc/lib/sstep.c +@@ -687,8 +687,10 @@ int __kprobes analyse_instr(struct instruction_op *op, struct pt_regs *regs, + case 19: + switch ((instr >> 1) & 0x3ff) { + case 0: /* mcrf */ +- rd = (instr >> 21) & 0x1c; +- ra = (instr >> 16) & 0x1c; ++ rd = 7 - ((instr >> 23) & 0x7); ++ ra = 7 - ((instr >> 18) & 0x7); ++ rd *= 4; ++ ra *= 4; + val = (regs->ccr >> ra) & 0xf; + regs->ccr = (regs->ccr & ~(0xfUL << rd)) | (val << rd); + goto instr_done; +@@ -967,6 +969,19 @@ int __kprobes analyse_instr(struct instruction_op *op, struct pt_regs *regs, + #endif + + case 19: /* mfcr */ ++ if ((instr >> 20) & 1) { ++ imm = 0xf0000000UL; ++ for (sh = 0; sh < 8; ++sh) { ++ if (instr & (0x80000 >> sh)) { ++ regs->gpr[rd] = regs->ccr & imm; ++ break; ++ } ++ imm >>= 4; ++ } ++ ++ goto instr_done; ++ } ++ + regs->gpr[rd] = regs->ccr; + regs->gpr[rd] &= 0xffffffffUL; + goto instr_done; +diff --git a/arch/powerpc/platforms/pseries/reconfig.c b/arch/powerpc/platforms/pseries/reconfig.c +index 0f319521e002..14392b4e4693 100644 +--- a/arch/powerpc/platforms/pseries/reconfig.c ++++ b/arch/powerpc/platforms/pseries/reconfig.c +@@ -112,7 +112,6 @@ static int pSeries_reconfig_remove_node(struct device_node *np) + + of_detach_node(np); + of_node_put(parent); +- of_node_put(np); /* Must decrement the refcount */ + return 0; + } + +diff --git a/arch/s390/include/asm/syscall.h b/arch/s390/include/asm/syscall.h +index 6ba0bf928909..6bc941be6921 100644 +--- a/arch/s390/include/asm/syscall.h ++++ b/arch/s390/include/asm/syscall.h +@@ -64,6 +64,12 @@ static 
inline void syscall_get_arguments(struct task_struct *task, + { + unsigned long mask = -1UL; + ++ /* ++ * No arguments for this syscall, there's nothing to do. ++ */ ++ if (!n) ++ return; ++ + BUG_ON(i + n > 6); + #ifdef CONFIG_COMPAT + if (test_tsk_thread_flag(task, TIF_31BIT)) +diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c +index dc2d7aa56440..a3b51d30e8d8 100644 +--- a/arch/s390/net/bpf_jit_comp.c ++++ b/arch/s390/net/bpf_jit_comp.c +@@ -1139,7 +1139,8 @@ static int bpf_jit_prog(struct bpf_jit *jit, struct bpf_prog *fp) + insn_count = bpf_jit_insn(jit, fp, i); + if (insn_count < 0) + return -1; +- jit->addrs[i + 1] = jit->prg; /* Next instruction address */ ++ /* Next instruction address */ ++ jit->addrs[i + insn_count] = jit->prg; + } + bpf_jit_epilogue(jit); + +diff --git a/arch/sparc/include/asm/mmu_context_64.h b/arch/sparc/include/asm/mmu_context_64.h +index 349dd23e2876..0cdeb2b483a0 100644 +--- a/arch/sparc/include/asm/mmu_context_64.h ++++ b/arch/sparc/include/asm/mmu_context_64.h +@@ -25,9 +25,11 @@ void destroy_context(struct mm_struct *mm); + void __tsb_context_switch(unsigned long pgd_pa, + struct tsb_config *tsb_base, + struct tsb_config *tsb_huge, +- unsigned long tsb_descr_pa); ++ unsigned long tsb_descr_pa, ++ unsigned long secondary_ctx); + +-static inline void tsb_context_switch(struct mm_struct *mm) ++static inline void tsb_context_switch_ctx(struct mm_struct *mm, ++ unsigned long ctx) + { + __tsb_context_switch(__pa(mm->pgd), + &mm->context.tsb_block[0], +@@ -38,9 +40,12 @@ static inline void tsb_context_switch(struct mm_struct *mm) + #else + NULL + #endif +- , __pa(&mm->context.tsb_descr[0])); ++ , __pa(&mm->context.tsb_descr[0]), ++ ctx); + } + ++#define tsb_context_switch(X) tsb_context_switch_ctx(X, 0) ++ + void tsb_grow(struct mm_struct *mm, + unsigned long tsb_index, + unsigned long mm_rss); +@@ -110,8 +115,7 @@ static inline void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, str + * cpu0 to update 
it's TSB because at that point the cpu_vm_mask + * only had cpu1 set in it. + */ +- load_secondary_context(mm); +- tsb_context_switch(mm); ++ tsb_context_switch_ctx(mm, CTX_HWBITS(mm->context)); + + /* Any time a processor runs a context on an address space + * for the first time, we must flush that context out of the +diff --git a/arch/sparc/include/asm/trap_block.h b/arch/sparc/include/asm/trap_block.h +index ec9c04de3664..ff05992dae7a 100644 +--- a/arch/sparc/include/asm/trap_block.h ++++ b/arch/sparc/include/asm/trap_block.h +@@ -54,6 +54,7 @@ extern struct trap_per_cpu trap_block[NR_CPUS]; + void init_cur_cpu_trap(struct thread_info *); + void setup_tba(void); + extern int ncpus_probed; ++extern u64 cpu_mondo_counter[NR_CPUS]; + + unsigned long real_hard_smp_processor_id(void); + +diff --git a/arch/sparc/kernel/smp_64.c b/arch/sparc/kernel/smp_64.c +index 95a9fa0d2195..4511caa3b7e9 100644 +--- a/arch/sparc/kernel/smp_64.c ++++ b/arch/sparc/kernel/smp_64.c +@@ -617,22 +617,48 @@ retry: + } + } + +-/* Multi-cpu list version. */ ++#define CPU_MONDO_COUNTER(cpuid) (cpu_mondo_counter[cpuid]) ++#define MONDO_USEC_WAIT_MIN 2 ++#define MONDO_USEC_WAIT_MAX 100 ++#define MONDO_RETRY_LIMIT 500000 ++ ++/* Multi-cpu list version. ++ * ++ * Deliver xcalls to 'cnt' number of cpus in 'cpu_list'. ++ * Sometimes not all cpus receive the mondo, requiring us to re-send ++ * the mondo until all cpus have received, or cpus are truly stuck ++ * unable to receive mondo, and we timeout. ++ * Occasionally a target cpu strand is borrowed briefly by hypervisor to ++ * perform guest service, such as PCIe error handling. Consider the ++ * service time, 1 second overall wait is reasonable for 1 cpu. ++ * Here two in-between mondo check wait time are defined: 2 usec for ++ * single cpu quick turn around and up to 100usec for large cpu count. ++ * Deliver mondo to large number of cpus could take longer, we adjusts ++ * the retry count as long as target cpus are making forward progress. 
++ */ + static void hypervisor_xcall_deliver(struct trap_per_cpu *tb, int cnt) + { +- int retries, this_cpu, prev_sent, i, saw_cpu_error; ++ int this_cpu, tot_cpus, prev_sent, i, rem; ++ int usec_wait, retries, tot_retries; ++ u16 first_cpu = 0xffff; ++ unsigned long xc_rcvd = 0; + unsigned long status; ++ int ecpuerror_id = 0; ++ int enocpu_id = 0; + u16 *cpu_list; ++ u16 cpu; + + this_cpu = smp_processor_id(); +- + cpu_list = __va(tb->cpu_list_pa); +- +- saw_cpu_error = 0; +- retries = 0; ++ usec_wait = cnt * MONDO_USEC_WAIT_MIN; ++ if (usec_wait > MONDO_USEC_WAIT_MAX) ++ usec_wait = MONDO_USEC_WAIT_MAX; ++ retries = tot_retries = 0; ++ tot_cpus = cnt; + prev_sent = 0; ++ + do { +- int forward_progress, n_sent; ++ int n_sent, mondo_delivered, target_cpu_busy; + + status = sun4v_cpu_mondo_send(cnt, + tb->cpu_list_pa, +@@ -640,94 +666,113 @@ static void hypervisor_xcall_deliver(struct trap_per_cpu *tb, int cnt) + + /* HV_EOK means all cpus received the xcall, we're done. */ + if (likely(status == HV_EOK)) +- break; ++ goto xcall_done; ++ ++ /* If not these non-fatal errors, panic */ ++ if (unlikely((status != HV_EWOULDBLOCK) && ++ (status != HV_ECPUERROR) && ++ (status != HV_ENOCPU))) ++ goto fatal_errors; + + /* First, see if we made any forward progress. ++ * ++ * Go through the cpu_list, count the target cpus that have ++ * received our mondo (n_sent), and those that did not (rem). ++ * Re-pack cpu_list with the cpus remain to be retried in the ++ * front - this simplifies tracking the truly stalled cpus. + * + * The hypervisor indicates successful sends by setting + * cpu list entries to the value 0xffff. ++ * ++ * EWOULDBLOCK means some target cpus did not receive the ++ * mondo and retry usually helps. ++ * ++ * ECPUERROR means at least one target cpu is in error state, ++ * it's usually safe to skip the faulty cpu and retry. 
++ * ++ * ENOCPU means one of the target cpu doesn't belong to the ++ * domain, perhaps offlined which is unexpected, but not ++ * fatal and it's okay to skip the offlined cpu. + */ ++ rem = 0; + n_sent = 0; + for (i = 0; i < cnt; i++) { +- if (likely(cpu_list[i] == 0xffff)) ++ cpu = cpu_list[i]; ++ if (likely(cpu == 0xffff)) { + n_sent++; ++ } else if ((status == HV_ECPUERROR) && ++ (sun4v_cpu_state(cpu) == HV_CPU_STATE_ERROR)) { ++ ecpuerror_id = cpu + 1; ++ } else if (status == HV_ENOCPU && !cpu_online(cpu)) { ++ enocpu_id = cpu + 1; ++ } else { ++ cpu_list[rem++] = cpu; ++ } + } + +- forward_progress = 0; +- if (n_sent > prev_sent) +- forward_progress = 1; ++ /* No cpu remained, we're done. */ ++ if (rem == 0) ++ break; + +- prev_sent = n_sent; ++ /* Otherwise, update the cpu count for retry. */ ++ cnt = rem; + +- /* If we get a HV_ECPUERROR, then one or more of the cpus +- * in the list are in error state. Use the cpu_state() +- * hypervisor call to find out which cpus are in error state. ++ /* Record the overall number of mondos received by the ++ * first of the remaining cpus. + */ +- if (unlikely(status == HV_ECPUERROR)) { +- for (i = 0; i < cnt; i++) { +- long err; +- u16 cpu; ++ if (first_cpu != cpu_list[0]) { ++ first_cpu = cpu_list[0]; ++ xc_rcvd = CPU_MONDO_COUNTER(first_cpu); ++ } + +- cpu = cpu_list[i]; +- if (cpu == 0xffff) +- continue; ++ /* Was any mondo delivered successfully? */ ++ mondo_delivered = (n_sent > prev_sent); ++ prev_sent = n_sent; + +- err = sun4v_cpu_state(cpu); +- if (err == HV_CPU_STATE_ERROR) { +- saw_cpu_error = (cpu + 1); +- cpu_list[i] = 0xffff; +- } +- } +- } else if (unlikely(status != HV_EWOULDBLOCK)) +- goto fatal_mondo_error; ++ /* or, was any target cpu busy processing other mondos? 
*/ ++ target_cpu_busy = (xc_rcvd < CPU_MONDO_COUNTER(first_cpu)); ++ xc_rcvd = CPU_MONDO_COUNTER(first_cpu); + +- /* Don't bother rewriting the CPU list, just leave the +- * 0xffff and non-0xffff entries in there and the +- * hypervisor will do the right thing. +- * +- * Only advance timeout state if we didn't make any +- * forward progress. ++ /* Retry count is for no progress. If we're making progress, ++ * reset the retry count. + */ +- if (unlikely(!forward_progress)) { +- if (unlikely(++retries > 10000)) +- goto fatal_mondo_timeout; +- +- /* Delay a little bit to let other cpus catch up +- * on their cpu mondo queue work. +- */ +- udelay(2 * cnt); ++ if (likely(mondo_delivered || target_cpu_busy)) { ++ tot_retries += retries; ++ retries = 0; ++ } else if (unlikely(retries > MONDO_RETRY_LIMIT)) { ++ goto fatal_mondo_timeout; + } +- } while (1); + +- if (unlikely(saw_cpu_error)) +- goto fatal_mondo_cpu_error; ++ /* Delay a little bit to let other cpus catch up on ++ * their cpu mondo queue work. 
++ */ ++ if (!mondo_delivered) ++ udelay(usec_wait); + +- return; ++ retries++; ++ } while (1); + +-fatal_mondo_cpu_error: +- printk(KERN_CRIT "CPU[%d]: SUN4V mondo cpu error, some target cpus " +- "(including %d) were in error state\n", +- this_cpu, saw_cpu_error - 1); ++xcall_done: ++ if (unlikely(ecpuerror_id > 0)) { ++ pr_crit("CPU[%d]: SUN4V mondo cpu error, target cpu(%d) was in error state\n", ++ this_cpu, ecpuerror_id - 1); ++ } else if (unlikely(enocpu_id > 0)) { ++ pr_crit("CPU[%d]: SUN4V mondo cpu error, target cpu(%d) does not belong to the domain\n", ++ this_cpu, enocpu_id - 1); ++ } + return; + ++fatal_errors: ++ /* fatal errors include bad alignment, etc */ ++ pr_crit("CPU[%d]: Args were cnt(%d) cpulist_pa(%lx) mondo_block_pa(%lx)\n", ++ this_cpu, tot_cpus, tb->cpu_list_pa, tb->cpu_mondo_block_pa); ++ panic("Unexpected SUN4V mondo error %lu\n", status); ++ + fatal_mondo_timeout: +- printk(KERN_CRIT "CPU[%d]: SUN4V mondo timeout, no forward " +- " progress after %d retries.\n", +- this_cpu, retries); +- goto dump_cpu_list_and_out; +- +-fatal_mondo_error: +- printk(KERN_CRIT "CPU[%d]: Unexpected SUN4V mondo error %lu\n", +- this_cpu, status); +- printk(KERN_CRIT "CPU[%d]: Args were cnt(%d) cpulist_pa(%lx) " +- "mondo_block_pa(%lx)\n", +- this_cpu, cnt, tb->cpu_list_pa, tb->cpu_mondo_block_pa); +- +-dump_cpu_list_and_out: +- printk(KERN_CRIT "CPU[%d]: CPU list [ ", this_cpu); +- for (i = 0; i < cnt; i++) +- printk("%u ", cpu_list[i]); +- printk("]\n"); ++ /* some cpus being non-responsive to the cpu mondo */ ++ pr_crit("CPU[%d]: SUN4V mondo timeout, cpu(%d) made no forward progress after %d retries. 
Total target cpus(%d).\n", ++ this_cpu, first_cpu, (tot_retries + retries), tot_cpus); ++ panic("SUN4V mondo timeout panic\n"); + } + + static void (*xcall_deliver_impl)(struct trap_per_cpu *, int); +diff --git a/arch/sparc/kernel/sun4v_ivec.S b/arch/sparc/kernel/sun4v_ivec.S +index 559bc5e9c199..34631995859a 100644 +--- a/arch/sparc/kernel/sun4v_ivec.S ++++ b/arch/sparc/kernel/sun4v_ivec.S +@@ -26,6 +26,21 @@ sun4v_cpu_mondo: + ldxa [%g0] ASI_SCRATCHPAD, %g4 + sub %g4, TRAP_PER_CPU_FAULT_INFO, %g4 + ++ /* Get smp_processor_id() into %g3 */ ++ sethi %hi(trap_block), %g5 ++ or %g5, %lo(trap_block), %g5 ++ sub %g4, %g5, %g3 ++ srlx %g3, TRAP_BLOCK_SZ_SHIFT, %g3 ++ ++ /* Increment cpu_mondo_counter[smp_processor_id()] */ ++ sethi %hi(cpu_mondo_counter), %g5 ++ or %g5, %lo(cpu_mondo_counter), %g5 ++ sllx %g3, 3, %g3 ++ add %g5, %g3, %g5 ++ ldx [%g5], %g3 ++ add %g3, 1, %g3 ++ stx %g3, [%g5] ++ + /* Get CPU mondo queue base phys address into %g7. */ + ldx [%g4 + TRAP_PER_CPU_CPU_MONDO_PA], %g7 + +diff --git a/arch/sparc/kernel/traps_64.c b/arch/sparc/kernel/traps_64.c +index cc97a43268ee..d883c5951e8b 100644 +--- a/arch/sparc/kernel/traps_64.c ++++ b/arch/sparc/kernel/traps_64.c +@@ -2659,6 +2659,7 @@ void do_getpsr(struct pt_regs *regs) + } + } + ++u64 cpu_mondo_counter[NR_CPUS] = {0}; + struct trap_per_cpu trap_block[NR_CPUS]; + EXPORT_SYMBOL(trap_block); + +diff --git a/arch/sparc/kernel/tsb.S b/arch/sparc/kernel/tsb.S +index 8e920d152565..12fe20c9042c 100644 +--- a/arch/sparc/kernel/tsb.S ++++ b/arch/sparc/kernel/tsb.S +@@ -367,6 +367,7 @@ tsb_flush: + * %o1: TSB base config pointer + * %o2: TSB huge config pointer, or NULL if none + * %o3: Hypervisor TSB descriptor physical address ++ * %o4: Secondary context to load, if non-zero + * + * We have to run this whole thing with interrupts + * disabled so that the current cpu doesn't change +@@ -379,6 +380,17 @@ __tsb_context_switch: + rdpr %pstate, %g1 + wrpr %g1, PSTATE_IE, %pstate + ++ brz,pn %o4, 1f ++ mov 
SECONDARY_CONTEXT, %o5 ++ ++661: stxa %o4, [%o5] ASI_DMMU ++ .section .sun4v_1insn_patch, "ax" ++ .word 661b ++ stxa %o4, [%o5] ASI_MMU ++ .previous ++ flush %g6 ++ ++1: + TRAP_LOAD_TRAP_BLOCK(%g2, %g3) + + stx %o0, [%g2 + TRAP_PER_CPU_PGD_PADDR] +diff --git a/arch/sparc/power/hibernate.c b/arch/sparc/power/hibernate.c +index 17bd2e167e07..df707a8ad311 100644 +--- a/arch/sparc/power/hibernate.c ++++ b/arch/sparc/power/hibernate.c +@@ -35,6 +35,5 @@ void restore_processor_state(void) + { + struct mm_struct *mm = current->active_mm; + +- load_secondary_context(mm); +- tsb_context_switch(mm); ++ tsb_context_switch_ctx(mm, CTX_HWBITS(mm->context)); + } +diff --git a/arch/x86/boot/string.c b/arch/x86/boot/string.c +index 318b8465d302..06ceddb3a22e 100644 +--- a/arch/x86/boot/string.c ++++ b/arch/x86/boot/string.c +@@ -14,6 +14,7 @@ + + #include + #include "ctype.h" ++#include "string.h" + + int memcmp(const void *s1, const void *s2, size_t len) + { +diff --git a/arch/x86/boot/string.h b/arch/x86/boot/string.h +index 725e820602b1..113588ddb43f 100644 +--- a/arch/x86/boot/string.h ++++ b/arch/x86/boot/string.h +@@ -18,4 +18,13 @@ int memcmp(const void *s1, const void *s2, size_t len); + #define memset(d,c,l) __builtin_memset(d,c,l) + #define memcmp __builtin_memcmp + ++extern int strcmp(const char *str1, const char *str2); ++extern int strncmp(const char *cs, const char *ct, size_t count); ++extern size_t strlen(const char *s); ++extern char *strstr(const char *s1, const char *s2); ++extern size_t strnlen(const char *s, size_t maxlen); ++extern unsigned int atou(const char *s); ++extern unsigned long long simple_strtoull(const char *cp, char **endp, ++ unsigned int base); ++ + #endif /* BOOT_STRING_H */ +diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h +index ca08a27b90b3..4ad5a91aea79 100644 +--- a/arch/x86/include/asm/xen/hypercall.h ++++ b/arch/x86/include/asm/xen/hypercall.h +@@ -43,6 +43,7 @@ + + #include + #include ++#include 
+ + #include + #include +@@ -213,10 +214,12 @@ privcmd_call(unsigned call, + __HYPERCALL_DECLS; + __HYPERCALL_5ARG(a1, a2, a3, a4, a5); + ++ stac(); + asm volatile("call *%[call]" + : __HYPERCALL_5PARAM + : [call] "a" (&hypercall_page[call]) + : __HYPERCALL_CLOBBER5); ++ clac(); + + return (long)__res; + } +diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c +index 07bea80223f6..60aa02503b48 100644 +--- a/arch/x86/kernel/acpi/boot.c ++++ b/arch/x86/kernel/acpi/boot.c +@@ -328,6 +328,14 @@ static void __init mp_override_legacy_irq(u8 bus_irq, u8 polarity, u8 trigger, + int pin; + struct mpc_intsrc mp_irq; + ++ /* ++ * Check bus_irq boundary. ++ */ ++ if (bus_irq >= NR_IRQS_LEGACY) { ++ pr_warn("Invalid bus_irq %u for legacy override\n", bus_irq); ++ return; ++ } ++ + /* + * Convert 'gsi' to 'ioapic.pin'. + */ +diff --git a/arch/x86/kernel/cpu/mcheck/mce_amd.c b/arch/x86/kernel/cpu/mcheck/mce_amd.c +index df61c2d0cb56..bd7e7d6c29c5 100644 +--- a/arch/x86/kernel/cpu/mcheck/mce_amd.c ++++ b/arch/x86/kernel/cpu/mcheck/mce_amd.c +@@ -581,6 +581,9 @@ static int threshold_create_bank(unsigned int cpu, unsigned int bank) + const char *name = th_names[bank]; + int err = 0; + ++ if (!dev) ++ return -ENODEV; ++ + if (is_shared_bank(bank)) { + nb = node_to_amd_nb(amd_get_nb_id(cpu)); + +diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c +index 27e63c1770e6..916e84aa5447 100644 +--- a/arch/x86/kernel/kvm.c ++++ b/arch/x86/kernel/kvm.c +@@ -151,6 +151,8 @@ void kvm_async_pf_task_wait(u32 token) + if (hlist_unhashed(&n.link)) + break; + ++ rcu_irq_exit(); ++ + if (!n.halted) { + local_irq_enable(); + schedule(); +@@ -159,11 +161,11 @@ void kvm_async_pf_task_wait(u32 token) + /* + * We cannot reschedule. So halt. 
+ */ +- rcu_irq_exit(); + native_safe_halt(); + local_irq_disable(); +- rcu_irq_enter(); + } ++ ++ rcu_irq_enter(); + } + if (!n.halted) + finish_wait(&n.wq, &wait); +diff --git a/drivers/acpi/glue.c b/drivers/acpi/glue.c +index 39c485b0c25c..db89f4b8b966 100644 +--- a/drivers/acpi/glue.c ++++ b/drivers/acpi/glue.c +@@ -97,7 +97,15 @@ static int find_child_checks(struct acpi_device *adev, bool check_children) + if (check_children && list_empty(&adev->children)) + return -ENODEV; + +- return sta_present ? FIND_CHILD_MAX_SCORE : FIND_CHILD_MIN_SCORE; ++ /* ++ * If the device has a _HID (or _CID) returning a valid ACPI/PNP ++ * device ID, it is better to make it look less attractive here, so that ++ * the other device with the same _ADR value (that may not have a valid ++ * device ID) can be matched going forward. [This means a second spec ++ * violation in a row, so whatever we do here is best effort anyway.] ++ */ ++ return sta_present && list_empty(&adev->pnp.ids) ? ++ FIND_CHILD_MAX_SCORE : FIND_CHILD_MIN_SCORE; + } + + struct acpi_device *acpi_find_child_device(struct acpi_device *parent, +diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c +index ae7cfcb562dc..4d4cdade9d7e 100644 +--- a/drivers/ata/libata-scsi.c ++++ b/drivers/ata/libata-scsi.c +@@ -2834,10 +2834,12 @@ static unsigned int atapi_xlat(struct ata_queued_cmd *qc) + static struct ata_device *ata_find_dev(struct ata_port *ap, int devno) + { + if (!sata_pmp_attached(ap)) { +- if (likely(devno < ata_link_max_devices(&ap->link))) ++ if (likely(devno >= 0 && ++ devno < ata_link_max_devices(&ap->link))) + return &ap->link.device[devno]; + } else { +- if (likely(devno < ap->nr_pmp_links)) ++ if (likely(devno >= 0 && ++ devno < ap->nr_pmp_links)) + return &ap->pmp_link[devno].device[0]; + } + +diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c +index 2327613d4539..75e29733af54 100644 +--- a/drivers/base/power/domain.c ++++ b/drivers/base/power/domain.c +@@ -1440,7 +1440,6 
@@ static struct generic_pm_domain_data *genpd_alloc_dev_data(struct device *dev, + } + + dev->power.subsys_data->domain_data = &gpd_data->base; +- dev->pm_domain = &genpd->domain; + + spin_unlock_irq(&dev->power.lock); + +@@ -1459,7 +1458,6 @@ static void genpd_free_dev_data(struct device *dev, + { + spin_lock_irq(&dev->power.lock); + +- dev->pm_domain = NULL; + dev->power.subsys_data->domain_data = NULL; + + spin_unlock_irq(&dev->power.lock); +@@ -1500,6 +1498,8 @@ int __pm_genpd_add_device(struct generic_pm_domain *genpd, struct device *dev, + if (ret) + goto out; + ++ dev->pm_domain = &genpd->domain; ++ + genpd->device_count++; + genpd->max_off_time_changed = true; + +@@ -1563,6 +1563,8 @@ int pm_genpd_remove_device(struct generic_pm_domain *genpd, + if (genpd->detach_dev) + genpd->detach_dev(genpd, dev); + ++ dev->pm_domain = NULL; ++ + list_del_init(&pdd->list_node); + + genpd_release_lock(genpd); +@@ -1673,7 +1675,7 @@ int pm_genpd_add_subdomain_names(const char *master_name, + int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd, + struct generic_pm_domain *subdomain) + { +- struct gpd_link *link; ++ struct gpd_link *l, *link; + int ret = -EINVAL; + + if (IS_ERR_OR_NULL(genpd) || IS_ERR_OR_NULL(subdomain)) +@@ -1682,7 +1684,7 @@ int pm_genpd_remove_subdomain(struct generic_pm_domain *genpd, + start: + genpd_acquire_lock(genpd); + +- list_for_each_entry(link, &genpd->master_links, master_node) { ++ list_for_each_entry_safe(link, l, &genpd->master_links, master_node) { + if (link->slave != subdomain) + continue; + +@@ -2062,10 +2064,10 @@ EXPORT_SYMBOL_GPL(__of_genpd_add_provider); + */ + void of_genpd_del_provider(struct device_node *np) + { +- struct of_genpd_provider *cp; ++ struct of_genpd_provider *cp, *tmp; + + mutex_lock(&of_genpd_mutex); +- list_for_each_entry(cp, &of_genpd_providers, link) { ++ list_for_each_entry_safe(cp, tmp, &of_genpd_providers, link) { + if (cp->node == np) { + list_del(&cp->link); + of_node_put(cp->node); +diff --git 
a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c +index 5ea2f0bbbc7c..071c3ea70882 100644 +--- a/drivers/block/virtio_blk.c ++++ b/drivers/block/virtio_blk.c +@@ -642,11 +642,12 @@ static int virtblk_probe(struct virtio_device *vdev) + if (err) + goto out_put_disk; + +- q = vblk->disk->queue = blk_mq_init_queue(&vblk->tag_set); ++ q = blk_mq_init_queue(&vblk->tag_set); + if (IS_ERR(q)) { + err = -ENOMEM; + goto out_free_tags; + } ++ vblk->disk->queue = q; + + q->queuedata = vblk; + +diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c +index 3e9ec9523f73..1d8c6cb89c7f 100644 +--- a/drivers/block/xen-blkback/blkback.c ++++ b/drivers/block/xen-blkback/blkback.c +@@ -588,8 +588,6 @@ int xen_blkif_schedule(void *arg) + unsigned long timeout; + int ret; + +- xen_blkif_get(blkif); +- + while (!kthread_should_stop()) { + if (try_to_freeze()) + continue; +@@ -643,7 +641,6 @@ purge_gnt_list: + print_stats(blkif); + + blkif->xenblkd = NULL; +- xen_blkif_put(blkif); + + return 0; + } +diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c +index 6ab69ad61ee1..b8c48da3b19f 100644 +--- a/drivers/block/xen-blkback/xenbus.c ++++ b/drivers/block/xen-blkback/xenbus.c +@@ -256,7 +256,6 @@ static int xen_blkif_disconnect(struct xen_blkif *blkif) + if (blkif->xenblkd) { + kthread_stop(blkif->xenblkd); + wake_up(&blkif->shutdown_wq); +- blkif->xenblkd = NULL; + } + + /* The above kthread_stop() guarantees that at this point we +diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c +index 4bc508c14900..5da703c65d93 100644 +--- a/drivers/char/ipmi/ipmi_msghandler.c ++++ b/drivers/char/ipmi/ipmi_msghandler.c +@@ -3871,6 +3871,9 @@ static void smi_recv_tasklet(unsigned long val) + * because the lower layer is allowed to hold locks while calling + * message delivery. 
+ */ ++ ++ rcu_read_lock(); ++ + if (!run_to_completion) + spin_lock_irqsave(&intf->xmit_msgs_lock, flags); + if (intf->curr_msg == NULL && !intf->in_shutdown) { +@@ -3893,6 +3896,8 @@ static void smi_recv_tasklet(unsigned long val) + if (newmsg) + intf->handlers->sender(intf->send_info, newmsg); + ++ rcu_read_unlock(); ++ + handle_new_recv_msgs(intf); + } + +diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c +index 9df92eda8749..9156bbd90b56 100644 +--- a/drivers/char/ipmi/ipmi_ssif.c ++++ b/drivers/char/ipmi/ipmi_ssif.c +@@ -757,6 +757,11 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result, + result, len, data[2]); + } else if (data[0] != (IPMI_NETFN_APP_REQUEST | 1) << 2 + || data[1] != IPMI_GET_MSG_FLAGS_CMD) { ++ /* ++ * Don't abort here, maybe it was a queued ++ * response to a previous command. ++ */ ++ ipmi_ssif_unlock_cond(ssif_info, flags); + pr_warn(PFX "Invalid response getting flags: %x %x\n", + data[0], data[1]); + } else { +diff --git a/drivers/char/ipmi/ipmi_watchdog.c b/drivers/char/ipmi/ipmi_watchdog.c +index 37b8be7cba95..f335fcee09af 100644 +--- a/drivers/char/ipmi/ipmi_watchdog.c ++++ b/drivers/char/ipmi/ipmi_watchdog.c +@@ -1156,10 +1156,11 @@ static int wdog_reboot_handler(struct notifier_block *this, + ipmi_watchdog_state = WDOG_TIMEOUT_NONE; + ipmi_set_timeout(IPMI_SET_TIMEOUT_NO_HB); + } else if (ipmi_watchdog_state != WDOG_TIMEOUT_NONE) { +- /* Set a long timer to let the reboot happens, but +- reboot if it hangs, but only if the watchdog ++ /* Set a long timer to let the reboot happen or ++ reset if it hangs, but only if the watchdog + timer was already running. 
*/ +- timeout = 120; ++ if (timeout < 120) ++ timeout = 120; + pretimeout = 0; + ipmi_watchdog_state = WDOG_TIMEOUT_RESET; + ipmi_set_timeout(IPMI_SET_TIMEOUT_NO_HB); +diff --git a/drivers/char/tpm/tpm-sysfs.c b/drivers/char/tpm/tpm-sysfs.c +index ee66fd4673f3..62a6117b57d7 100644 +--- a/drivers/char/tpm/tpm-sysfs.c ++++ b/drivers/char/tpm/tpm-sysfs.c +@@ -38,6 +38,8 @@ static ssize_t pubek_show(struct device *dev, struct device_attribute *attr, + + struct tpm_chip *chip = dev_get_drvdata(dev); + ++ memset(&tpm_cmd, 0, sizeof(tpm_cmd)); ++ + tpm_cmd.header.in = tpm_readpubek_header; + err = tpm_transmit_cmd(chip, &tpm_cmd, READ_PUBEK_RESULT_SIZE, + "attempting to read the PUBEK"); +diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c +index f557695a2409..7724ddb0f776 100644 +--- a/drivers/gpu/drm/drm_dp_mst_topology.c ++++ b/drivers/gpu/drm/drm_dp_mst_topology.c +@@ -330,6 +330,13 @@ static bool drm_dp_sideband_msg_build(struct drm_dp_sideband_msg_rx *msg, + return false; + } + ++ /* ++ * ignore out-of-order messages or messages that are part of a ++ * failed transaction ++ */ ++ if (!recv_hdr.somt && !msg->have_somt) ++ return false; ++ + /* get length contained in this portion */ + msg->curchunk_len = recv_hdr.msg_len; + msg->curchunk_hdrlen = hdrlen; +@@ -2161,7 +2168,7 @@ out_unlock: + } + EXPORT_SYMBOL(drm_dp_mst_topology_mgr_resume); + +-static void drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up) ++static bool drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up) + { + int len; + u8 replyblock[32]; +@@ -2176,12 +2183,12 @@ static void drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up) + replyblock, len); + if (ret != len) { + DRM_DEBUG_KMS("failed to read DPCD down rep %d %d\n", len, ret); +- return; ++ return false; + } + ret = drm_dp_sideband_msg_build(msg, replyblock, len, true); + if (!ret) { + DRM_DEBUG_KMS("sideband msg build failed %d\n", replyblock[0]); +- return; 
++ return false; + } + replylen = msg->curchunk_len + msg->curchunk_hdrlen; + +@@ -2193,21 +2200,32 @@ static void drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up) + ret = drm_dp_dpcd_read(mgr->aux, basereg + curreply, + replyblock, len); + if (ret != len) { +- DRM_DEBUG_KMS("failed to read a chunk\n"); ++ DRM_DEBUG_KMS("failed to read a chunk (len %d, ret %d)\n", ++ len, ret); ++ return false; + } ++ + ret = drm_dp_sideband_msg_build(msg, replyblock, len, false); +- if (ret == false) ++ if (!ret) { + DRM_DEBUG_KMS("failed to build sideband msg\n"); ++ return false; ++ } ++ + curreply += len; + replylen -= len; + } ++ return true; + } + + static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr) + { + int ret = 0; + +- drm_dp_get_one_sb_msg(mgr, false); ++ if (!drm_dp_get_one_sb_msg(mgr, false)) { ++ memset(&mgr->down_rep_recv, 0, ++ sizeof(struct drm_dp_sideband_msg_rx)); ++ return 0; ++ } + + if (mgr->down_rep_recv.have_eomt) { + struct drm_dp_sideband_msg_tx *txmsg; +@@ -2263,7 +2281,12 @@ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr) + static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr) + { + int ret = 0; +- drm_dp_get_one_sb_msg(mgr, true); ++ ++ if (!drm_dp_get_one_sb_msg(mgr, true)) { ++ memset(&mgr->up_req_recv, 0, ++ sizeof(struct drm_dp_sideband_msg_rx)); ++ return 0; ++ } + + if (mgr->up_req_recv.have_eomt) { + struct drm_dp_sideband_msg_req_body msg; +@@ -2315,7 +2338,9 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr) + DRM_DEBUG_KMS("Got RSN: pn: %d avail_pbn %d\n", msg.u.resource_stat.port_number, msg.u.resource_stat.available_pbn); + } + +- drm_dp_put_mst_branch_device(mstb); ++ if (mstb) ++ drm_dp_put_mst_branch_device(mstb); ++ + memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx)); + } + return ret; +diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c +index 
bbdcab0a56c1..3401df5b44db 100644 +--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c ++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c +@@ -193,7 +193,14 @@ int adreno_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit, + void adreno_flush(struct msm_gpu *gpu) + { + struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); +- uint32_t wptr = get_wptr(gpu->rb); ++ uint32_t wptr; ++ ++ /* ++ * Mask wptr value that we calculate to fit in the HW range. This is ++ * to account for the possibility that the last command fit exactly into ++ * the ringbuffer and rb->next hasn't wrapped to zero yet ++ */ ++ wptr = get_wptr(gpu->rb) & ((gpu->rb->size / 4) - 1); + + /* ensure writes to ringbuffer have hit system memory: */ + mb(); +diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c +index 4ff8c334e7c8..4a45ae01cc3e 100644 +--- a/drivers/gpu/drm/msm/msm_gem_submit.c ++++ b/drivers/gpu/drm/msm/msm_gem_submit.c +@@ -90,7 +90,8 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, + pagefault_disable(); + } + +- if (submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) { ++ if ((submit_bo.flags & ~MSM_SUBMIT_BO_FLAGS) || ++ !(submit_bo.flags & MSM_SUBMIT_BO_FLAGS)) { + DRM_ERROR("invalid flags: %x\n", submit_bo.flags); + ret = -EINVAL; + goto out_unlock; +diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c +index 1f14b908b221..ae317271cf81 100644 +--- a/drivers/gpu/drm/msm/msm_ringbuffer.c ++++ b/drivers/gpu/drm/msm/msm_ringbuffer.c +@@ -23,7 +23,8 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int size) + struct msm_ringbuffer *ring; + int ret; + +- size = ALIGN(size, 4); /* size should be dword aligned */ ++ if (WARN_ON(!is_power_of_2(size))) ++ return ERR_PTR(-EINVAL); + + ring = kzalloc(sizeof(*ring), GFP_KERNEL); + if (!ring) { +diff --git a/drivers/gpu/drm/radeon/atombios_encoders.c b/drivers/gpu/drm/radeon/atombios_encoders.c +index d4ac8c837314..8e86cf7da614 100644 +--- 
a/drivers/gpu/drm/radeon/atombios_encoders.c ++++ b/drivers/gpu/drm/radeon/atombios_encoders.c +@@ -30,6 +30,7 @@ + #include "radeon_audio.h" + #include "atom.h" + #include ++#include + + extern int atom_debug; + +@@ -2183,9 +2184,17 @@ int radeon_atom_pick_dig_encoder(struct drm_encoder *encoder, int fe_idx) + goto assigned; + } + +- /* on DCE32 and encoder can driver any block so just crtc id */ ++ /* ++ * On DCE32 any encoder can drive any block so usually just use crtc id, ++ * but Apple thinks different at least on iMac10,1, so there use linkb, ++ * otherwise the internal eDP panel will stay dark. ++ */ + if (ASIC_IS_DCE32(rdev)) { +- enc_idx = radeon_crtc->crtc_id; ++ if (dmi_match(DMI_PRODUCT_NAME, "iMac10,1")) ++ enc_idx = (dig->linkb) ? 1 : 0; ++ else ++ enc_idx = radeon_crtc->crtc_id; ++ + goto assigned; + } + +diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c +index 7d0b8ef9bea2..7c6f15d284e3 100644 +--- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c ++++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c +@@ -277,26 +277,6 @@ static void rcar_du_crtc_update_planes(struct rcar_du_crtc *rcrtc) + * Page Flip + */ + +-void rcar_du_crtc_cancel_page_flip(struct rcar_du_crtc *rcrtc, +- struct drm_file *file) +-{ +- struct drm_pending_vblank_event *event; +- struct drm_device *dev = rcrtc->crtc.dev; +- unsigned long flags; +- +- /* Destroy the pending vertical blanking event associated with the +- * pending page flip, if any, and disable vertical blanking interrupts. 
+- */ +- spin_lock_irqsave(&dev->event_lock, flags); +- event = rcrtc->event; +- if (event && event->base.file_priv == file) { +- rcrtc->event = NULL; +- event->base.destroy(&event->base); +- drm_crtc_vblank_put(&rcrtc->crtc); +- } +- spin_unlock_irqrestore(&dev->event_lock, flags); +-} +- + static void rcar_du_crtc_finish_page_flip(struct rcar_du_crtc *rcrtc) + { + struct drm_pending_vblank_event *event; +diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.h b/drivers/gpu/drm/rcar-du/rcar_du_crtc.h +index 5d9aa9b33769..0d61a813054a 100644 +--- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.h ++++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.h +@@ -53,8 +53,6 @@ enum rcar_du_output { + + int rcar_du_crtc_create(struct rcar_du_group *rgrp, unsigned int index); + void rcar_du_crtc_enable_vblank(struct rcar_du_crtc *rcrtc, bool enable); +-void rcar_du_crtc_cancel_page_flip(struct rcar_du_crtc *rcrtc, +- struct drm_file *file); + void rcar_du_crtc_suspend(struct rcar_du_crtc *rcrtc); + void rcar_du_crtc_resume(struct rcar_du_crtc *rcrtc); + +diff --git a/drivers/gpu/drm/rcar-du/rcar_du_drv.c b/drivers/gpu/drm/rcar-du/rcar_du_drv.c +index da1216a73969..94133c3ffe20 100644 +--- a/drivers/gpu/drm/rcar-du/rcar_du_drv.c ++++ b/drivers/gpu/drm/rcar-du/rcar_du_drv.c +@@ -205,15 +205,6 @@ done: + return ret; + } + +-static void rcar_du_preclose(struct drm_device *dev, struct drm_file *file) +-{ +- struct rcar_du_device *rcdu = dev->dev_private; +- unsigned int i; +- +- for (i = 0; i < rcdu->num_crtcs; ++i) +- rcar_du_crtc_cancel_page_flip(&rcdu->crtcs[i], file); +-} +- + static void rcar_du_lastclose(struct drm_device *dev) + { + struct rcar_du_device *rcdu = dev->dev_private; +@@ -256,7 +247,6 @@ static struct drm_driver rcar_du_driver = { + | DRIVER_ATOMIC, + .load = rcar_du_load, + .unload = rcar_du_unload, +- .preclose = rcar_du_preclose, + .lastclose = rcar_du_lastclose, + .set_busid = drm_platform_set_busid, + .get_vblank_counter = drm_vblank_count, +diff --git 
a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c +index aee1c6ccc52d..6c312b584802 100644 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c +@@ -285,7 +285,7 @@ static int vmw_cmd_invalid(struct vmw_private *dev_priv, + struct vmw_sw_context *sw_context, + SVGA3dCmdHeader *header) + { +- return capable(CAP_SYS_ADMIN) ? : -EINVAL; ++ return -EINVAL; + } + + static int vmw_cmd_ok(struct vmw_private *dev_priv, +diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c +index 07a963039b60..d786b48f5d7b 100644 +--- a/drivers/hid/hid-core.c ++++ b/drivers/hid/hid-core.c +@@ -2391,6 +2391,7 @@ static const struct hid_device_id hid_ignore_list[] = { + { HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0002) }, + { HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0003) }, + { HID_USB_DEVICE(USB_VENDOR_ID_PANJIT, 0x0004) }, ++ { HID_USB_DEVICE(USB_VENDOR_ID_PETZL, USB_DEVICE_ID_PETZL_HEADLAMP) }, + { HID_USB_DEVICE(USB_VENDOR_ID_PHILIPS, USB_DEVICE_ID_PHILIPS_IEEE802154_DONGLE) }, + { HID_USB_DEVICE(USB_VENDOR_ID_POWERCOM, USB_DEVICE_ID_POWERCOM_UPS) }, + #if defined(CONFIG_MOUSE_SYNAPTICS_USB) || defined(CONFIG_MOUSE_SYNAPTICS_USB_MODULE) +diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h +index 7ce93d927f62..e995058ad264 100644 +--- a/drivers/hid/hid-ids.h ++++ b/drivers/hid/hid-ids.h +@@ -760,6 +760,9 @@ + #define USB_VENDOR_ID_PETALYNX 0x18b1 + #define USB_DEVICE_ID_PETALYNX_MAXTER_REMOTE 0x0037 + ++#define USB_VENDOR_ID_PETZL 0x2122 ++#define USB_DEVICE_ID_PETZL_HEADLAMP 0x1234 ++ + #define USB_VENDOR_ID_PHILIPS 0x0471 + #define USB_DEVICE_ID_PHILIPS_IEEE802154_DONGLE 0x0617 + +diff --git a/drivers/iio/adc/vf610_adc.c b/drivers/iio/adc/vf610_adc.c +index 56292ae4538d..9bcad9a444f7 100644 +--- a/drivers/iio/adc/vf610_adc.c ++++ b/drivers/iio/adc/vf610_adc.c +@@ -71,7 +71,7 @@ + #define VF610_ADC_ADSTS_MASK 0x300 + #define VF610_ADC_ADLPC_EN 0x80 + #define VF610_ADC_ADHSC_EN 0x400 +-#define 
VF610_ADC_REFSEL_VALT 0x100 ++#define VF610_ADC_REFSEL_VALT 0x800 + #define VF610_ADC_REFSEL_VBG 0x1000 + #define VF610_ADC_ADTRG_HARD 0x2000 + #define VF610_ADC_AVGS_8 0x4000 +diff --git a/drivers/iio/light/tsl2563.c b/drivers/iio/light/tsl2563.c +index 94daa9fc1247..6a135effb7c5 100644 +--- a/drivers/iio/light/tsl2563.c ++++ b/drivers/iio/light/tsl2563.c +@@ -626,7 +626,7 @@ static irqreturn_t tsl2563_event_handler(int irq, void *private) + struct tsl2563_chip *chip = iio_priv(dev_info); + + iio_push_event(dev_info, +- IIO_UNMOD_EVENT_CODE(IIO_LIGHT, ++ IIO_UNMOD_EVENT_CODE(IIO_INTENSITY, + 0, + IIO_EV_TYPE_THRESH, + IIO_EV_DIR_EITHER), +diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c +index b52a704c3449..2d515a544f33 100644 +--- a/drivers/infiniband/ulp/isert/ib_isert.c ++++ b/drivers/infiniband/ulp/isert/ib_isert.c +@@ -1586,7 +1586,7 @@ isert_rcv_completion(struct iser_rx_desc *desc, + struct isert_conn *isert_conn, + u32 xfer_len) + { +- struct ib_device *ib_dev = isert_conn->cm_id->device; ++ struct ib_device *ib_dev = isert_conn->device->ib_device; + struct iscsi_hdr *hdr; + u64 rx_dma; + int rx_buflen, outstanding; +diff --git a/drivers/input/serio/i8042.c b/drivers/input/serio/i8042.c +index 4cfb0ac797ef..6f15cdf5ff40 100644 +--- a/drivers/input/serio/i8042.c ++++ b/drivers/input/serio/i8042.c +@@ -397,8 +397,10 @@ static int i8042_start(struct serio *serio) + { + struct i8042_port *port = serio->port_data; + ++ spin_lock_irq(&i8042_lock); + port->exists = true; +- mb(); ++ spin_unlock_irq(&i8042_lock); ++ + return 0; + } + +@@ -411,16 +413,20 @@ static void i8042_stop(struct serio *serio) + { + struct i8042_port *port = serio->port_data; + ++ spin_lock_irq(&i8042_lock); + port->exists = false; ++ port->serio = NULL; ++ spin_unlock_irq(&i8042_lock); + + /* ++ * We need to make sure that interrupt handler finishes using ++ * our serio port before we return from this function. 
+ * We synchronize with both AUX and KBD IRQs because there is + * a (very unlikely) chance that AUX IRQ is raised for KBD port + * and vice versa. + */ + synchronize_irq(I8042_AUX_IRQ); + synchronize_irq(I8042_KBD_IRQ); +- port->serio = NULL; + } + + /* +@@ -537,7 +543,7 @@ static irqreturn_t i8042_interrupt(int irq, void *dev_id) + + spin_unlock_irqrestore(&i8042_lock, flags); + +- if (likely(port->exists && !filtered)) ++ if (likely(serio && !filtered)) + serio_interrupt(serio, data, dfl); + + out: +diff --git a/drivers/isdn/i4l/isdn_ppp.c b/drivers/isdn/i4l/isdn_ppp.c +index 9c1e8adaf4fc..bf3fbd00a091 100644 +--- a/drivers/isdn/i4l/isdn_ppp.c ++++ b/drivers/isdn/i4l/isdn_ppp.c +@@ -2364,7 +2364,7 @@ static struct ippp_ccp_reset_state *isdn_ppp_ccp_reset_alloc_state(struct ippp_s + id); + return NULL; + } else { +- rs = kzalloc(sizeof(struct ippp_ccp_reset_state), GFP_KERNEL); ++ rs = kzalloc(sizeof(struct ippp_ccp_reset_state), GFP_ATOMIC); + if (!rs) + return NULL; + rs->state = CCPResetIdle; +diff --git a/drivers/mailbox/mailbox.c b/drivers/mailbox/mailbox.c +index 19b491d2964f..ac6087f77e08 100644 +--- a/drivers/mailbox/mailbox.c ++++ b/drivers/mailbox/mailbox.c +@@ -104,11 +104,14 @@ static void tx_tick(struct mbox_chan *chan, int r) + /* Submit next message */ + msg_submit(chan); + ++ if (!mssg) ++ return; ++ + /* Notify the client */ +- if (mssg && chan->cl->tx_done) ++ if (chan->cl->tx_done) + chan->cl->tx_done(chan->cl, mssg, r); + +- if (chan->cl->tx_block) ++ if (r != -ETIME && chan->cl->tx_block) + complete(&chan->tx_complete); + } + +@@ -258,7 +261,7 @@ int mbox_send_message(struct mbox_chan *chan, void *mssg) + + msg_submit(chan); + +- if (chan->cl->tx_block && chan->active_req) { ++ if (chan->cl->tx_block) { + unsigned long wait; + int ret; + +@@ -269,8 +272,8 @@ int mbox_send_message(struct mbox_chan *chan, void *mssg) + + ret = wait_for_completion_timeout(&chan->tx_complete, wait); + if (ret == 0) { +- t = -EIO; +- tx_tick(chan, -EIO); ++ t = 
-ETIME; ++ tx_tick(chan, t); + } + } + +diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c +index 2b4e51c0544c..bf29edd8e8ee 100644 +--- a/drivers/md/raid1.c ++++ b/drivers/md/raid1.c +@@ -1118,7 +1118,7 @@ static void make_request(struct mddev *mddev, struct bio * bio) + */ + DEFINE_WAIT(w); + for (;;) { +- flush_signals(current); ++ sigset_t full, old; + prepare_to_wait(&conf->wait_barrier, + &w, TASK_INTERRUPTIBLE); + if (bio_end_sector(bio) <= mddev->suspend_lo || +@@ -1127,7 +1127,10 @@ static void make_request(struct mddev *mddev, struct bio * bio) + !md_cluster_ops->area_resyncing(mddev, + bio->bi_iter.bi_sector, bio_end_sector(bio)))) + break; ++ sigfillset(&full); ++ sigprocmask(SIG_BLOCK, &full, &old); + schedule(); ++ sigprocmask(SIG_SETMASK, &old, NULL); + } + finish_wait(&conf->wait_barrier, &w); + } +diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c +index d7942cbaa1b0..69542a92e4b0 100644 +--- a/drivers/md/raid5.c ++++ b/drivers/md/raid5.c +@@ -5275,12 +5275,15 @@ static void make_request(struct mddev *mddev, struct bio * bi) + * userspace, we want an interruptible + * wait. 
+ */ +- flush_signals(current); + prepare_to_wait(&conf->wait_for_overlap, + &w, TASK_INTERRUPTIBLE); + if (logical_sector >= mddev->suspend_lo && + logical_sector < mddev->suspend_hi) { ++ sigset_t full, old; ++ sigfillset(&full); ++ sigprocmask(SIG_BLOCK, &full, &old); + schedule(); ++ sigprocmask(SIG_SETMASK, &old, NULL); + do_prepare = true; + } + goto retry; +@@ -5796,6 +5799,8 @@ static void raid5_do_work(struct work_struct *work) + pr_debug("%d stripes handled\n", handled); + + spin_unlock_irq(&conf->device_lock); ++ ++ async_tx_issue_pending_all(); + blk_finish_plug(&plug); + + pr_debug("--- raid5worker inactive\n"); +@@ -7441,12 +7446,10 @@ static void end_reshape(struct r5conf *conf) + { + + if (!test_bit(MD_RECOVERY_INTR, &conf->mddev->recovery)) { +- struct md_rdev *rdev; + + spin_lock_irq(&conf->device_lock); + conf->previous_raid_disks = conf->raid_disks; +- rdev_for_each(rdev, conf->mddev) +- rdev->data_offset = rdev->new_data_offset; ++ md_finish_reshape(conf->mddev); + smp_wmb(); + conf->reshape_progress = MaxSector; + spin_unlock_irq(&conf->device_lock); +diff --git a/drivers/media/i2c/s5c73m3/s5c73m3-ctrls.c b/drivers/media/i2c/s5c73m3/s5c73m3-ctrls.c +index 8001cde1db1e..503135a4f47a 100644 +--- a/drivers/media/i2c/s5c73m3/s5c73m3-ctrls.c ++++ b/drivers/media/i2c/s5c73m3/s5c73m3-ctrls.c +@@ -211,7 +211,7 @@ static int s5c73m3_3a_lock(struct s5c73m3 *state, struct v4l2_ctrl *ctrl) + } + + if ((ctrl->val ^ ctrl->cur.val) & V4L2_LOCK_FOCUS) +- ret = s5c73m3_af_run(state, ~af_lock); ++ ret = s5c73m3_af_run(state, !af_lock); + + return ret; + } +diff --git a/drivers/media/pci/cx88/cx88-cards.c b/drivers/media/pci/cx88/cx88-cards.c +index 8f2556ec3971..61611d1682d1 100644 +--- a/drivers/media/pci/cx88/cx88-cards.c ++++ b/drivers/media/pci/cx88/cx88-cards.c +@@ -3691,7 +3691,14 @@ struct cx88_core *cx88_core_create(struct pci_dev *pci, int nr) + core->nr = nr; + sprintf(core->name, "cx88[%d]", core->nr); + +- core->tvnorm = V4L2_STD_NTSC_M; ++ /* ++ * 
Note: Setting initial standard here would cause first call to ++ * cx88_set_tvnorm() to return without programming any registers. Leave ++ * it blank for at this point and it will get set later in ++ * cx8800_initdev() ++ */ ++ core->tvnorm = 0; ++ + core->width = 320; + core->height = 240; + core->field = V4L2_FIELD_INTERLACED; +diff --git a/drivers/media/pci/cx88/cx88-video.c b/drivers/media/pci/cx88/cx88-video.c +index c9decd80bf61..53073def2bec 100644 +--- a/drivers/media/pci/cx88/cx88-video.c ++++ b/drivers/media/pci/cx88/cx88-video.c +@@ -1429,7 +1429,7 @@ static int cx8800_initdev(struct pci_dev *pci_dev, + + /* initial device configuration */ + mutex_lock(&core->lock); +- cx88_set_tvnorm(core, core->tvnorm); ++ cx88_set_tvnorm(core, V4L2_STD_NTSC_M); + v4l2_ctrl_handler_setup(&core->video_hdl); + v4l2_ctrl_handler_setup(&core->audio_hdl); + cx88_video_mux(core, 0); +diff --git a/drivers/media/pci/saa7164/saa7164-bus.c b/drivers/media/pci/saa7164/saa7164-bus.c +index 6c73f5b155f6..1c779ea8b5ec 100644 +--- a/drivers/media/pci/saa7164/saa7164-bus.c ++++ b/drivers/media/pci/saa7164/saa7164-bus.c +@@ -393,11 +393,11 @@ int saa7164_bus_get(struct saa7164_dev *dev, struct tmComResInfo* msg, + msg_tmp.size = le16_to_cpu((__force __le16)msg_tmp.size); + msg_tmp.command = le32_to_cpu((__force __le32)msg_tmp.command); + msg_tmp.controlselector = le16_to_cpu((__force __le16)msg_tmp.controlselector); ++ memcpy(msg, &msg_tmp, sizeof(*msg)); + + /* No need to update the read positions, because this was a peek */ + /* If the caller specifically want to peek, return */ + if (peekonly) { +- memcpy(msg, &msg_tmp, sizeof(*msg)); + goto peekout; + } + +@@ -442,21 +442,15 @@ int saa7164_bus_get(struct saa7164_dev *dev, struct tmComResInfo* msg, + space_rem = bus->m_dwSizeGetRing - curr_grp; + + if (space_rem < sizeof(*msg)) { +- /* msg wraps around the ring */ +- memcpy_fromio(msg, bus->m_pdwGetRing + curr_grp, space_rem); +- memcpy_fromio((u8 *)msg + space_rem, 
bus->m_pdwGetRing, +- sizeof(*msg) - space_rem); + if (buf) + memcpy_fromio(buf, bus->m_pdwGetRing + sizeof(*msg) - + space_rem, buf_size); + + } else if (space_rem == sizeof(*msg)) { +- memcpy_fromio(msg, bus->m_pdwGetRing + curr_grp, sizeof(*msg)); + if (buf) + memcpy_fromio(buf, bus->m_pdwGetRing, buf_size); + } else { + /* Additional data wraps around the ring */ +- memcpy_fromio(msg, bus->m_pdwGetRing + curr_grp, sizeof(*msg)); + if (buf) { + memcpy_fromio(buf, bus->m_pdwGetRing + curr_grp + + sizeof(*msg), space_rem - sizeof(*msg)); +@@ -469,15 +463,10 @@ int saa7164_bus_get(struct saa7164_dev *dev, struct tmComResInfo* msg, + + } else { + /* No wrapping */ +- memcpy_fromio(msg, bus->m_pdwGetRing + curr_grp, sizeof(*msg)); + if (buf) + memcpy_fromio(buf, bus->m_pdwGetRing + curr_grp + sizeof(*msg), + buf_size); + } +- /* Convert from little endian to CPU */ +- msg->size = le16_to_cpu((__force __le16)msg->size); +- msg->command = le32_to_cpu((__force __le32)msg->command); +- msg->controlselector = le16_to_cpu((__force __le16)msg->controlselector); + + /* Update the read positions, adjusting the ring */ + saa7164_writel(bus->m_dwGetReadPos, new_grp); +diff --git a/drivers/media/platform/davinci/vpfe_capture.c b/drivers/media/platform/davinci/vpfe_capture.c +index ccfcf3f528d3..445e17aeb8b2 100644 +--- a/drivers/media/platform/davinci/vpfe_capture.c ++++ b/drivers/media/platform/davinci/vpfe_capture.c +@@ -1706,27 +1706,9 @@ static long vpfe_param_handler(struct file *file, void *priv, + + switch (cmd) { + case VPFE_CMD_S_CCDC_RAW_PARAMS: ++ ret = -EINVAL; + v4l2_warn(&vpfe_dev->v4l2_dev, +- "VPFE_CMD_S_CCDC_RAW_PARAMS: experimental ioctl\n"); +- if (ccdc_dev->hw_ops.set_params) { +- ret = ccdc_dev->hw_ops.set_params(param); +- if (ret) { +- v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, +- "Error setting parameters in CCDC\n"); +- goto unlock_out; +- } +- ret = vpfe_get_ccdc_image_format(vpfe_dev, +- &vpfe_dev->fmt); +- if (ret < 0) { +- v4l2_dbg(1, debug, 
&vpfe_dev->v4l2_dev, +- "Invalid image format at CCDC\n"); +- goto unlock_out; +- } +- } else { +- ret = -EINVAL; +- v4l2_dbg(1, debug, &vpfe_dev->v4l2_dev, +- "VPFE_CMD_S_CCDC_RAW_PARAMS not supported\n"); +- } ++ "VPFE_CMD_S_CCDC_RAW_PARAMS not supported\n"); + break; + default: + ret = -ENOTTY; +diff --git a/drivers/media/rc/imon.c b/drivers/media/rc/imon.c +index 65f80b8b9f7a..eb9e7feb9b13 100644 +--- a/drivers/media/rc/imon.c ++++ b/drivers/media/rc/imon.c +@@ -1629,7 +1629,7 @@ static void imon_incoming_packet(struct imon_context *ictx, + if (kc == KEY_KEYBOARD && !ictx->release_code) { + ictx->last_keycode = kc; + if (!nomouse) { +- ictx->pad_mouse = ~(ictx->pad_mouse) & 0x1; ++ ictx->pad_mouse = !ictx->pad_mouse; + dev_dbg(dev, "toggling to %s mode\n", + ictx->pad_mouse ? "mouse" : "keyboard"); + spin_unlock_irqrestore(&ictx->kc_lock, flags); +diff --git a/drivers/media/rc/ir-lirc-codec.c b/drivers/media/rc/ir-lirc-codec.c +index 98893a8332c7..4795c31ceebc 100644 +--- a/drivers/media/rc/ir-lirc-codec.c ++++ b/drivers/media/rc/ir-lirc-codec.c +@@ -257,7 +257,7 @@ static long ir_lirc_ioctl(struct file *filep, unsigned int cmd, + return 0; + + case LIRC_GET_REC_RESOLUTION: +- val = dev->rx_resolution; ++ val = dev->rx_resolution / 1000; + break; + + case LIRC_SET_WIDEBAND_RECEIVER: +diff --git a/drivers/misc/enclosure.c b/drivers/misc/enclosure.c +index 65fed7146e9b..cc91f7b3d90c 100644 +--- a/drivers/misc/enclosure.c ++++ b/drivers/misc/enclosure.c +@@ -375,6 +375,7 @@ int enclosure_add_device(struct enclosure_device *edev, int component, + struct device *dev) + { + struct enclosure_component *cdev; ++ int err; + + if (!edev || component >= edev->components) + return -EINVAL; +@@ -384,12 +385,17 @@ int enclosure_add_device(struct enclosure_device *edev, int component, + if (cdev->dev == dev) + return -EEXIST; + +- if (cdev->dev) ++ if (cdev->dev) { + enclosure_remove_links(cdev); +- +- put_device(cdev->dev); ++ put_device(cdev->dev); ++ } + cdev->dev = 
get_device(dev); +- return enclosure_add_links(cdev); ++ err = enclosure_add_links(cdev); ++ if (err) { ++ put_device(cdev->dev); ++ cdev->dev = NULL; ++ } ++ return err; + } + EXPORT_SYMBOL_GPL(enclosure_add_device); + +diff --git a/drivers/mtd/spi-nor/fsl-quadspi.c b/drivers/mtd/spi-nor/fsl-quadspi.c +index 5d5d36272bb5..448123268e3b 100644 +--- a/drivers/mtd/spi-nor/fsl-quadspi.c ++++ b/drivers/mtd/spi-nor/fsl-quadspi.c +@@ -140,15 +140,15 @@ + #define LUT_MODE 4 + #define LUT_MODE2 5 + #define LUT_MODE4 6 +-#define LUT_READ 7 +-#define LUT_WRITE 8 ++#define LUT_FSL_READ 7 ++#define LUT_FSL_WRITE 8 + #define LUT_JMP_ON_CS 9 + #define LUT_ADDR_DDR 10 + #define LUT_MODE_DDR 11 + #define LUT_MODE2_DDR 12 + #define LUT_MODE4_DDR 13 +-#define LUT_READ_DDR 14 +-#define LUT_WRITE_DDR 15 ++#define LUT_FSL_READ_DDR 14 ++#define LUT_FSL_WRITE_DDR 15 + #define LUT_DATA_LEARN 16 + + /* +@@ -312,7 +312,7 @@ static void fsl_qspi_init_lut(struct fsl_qspi *q) + + writel(LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen), + base + QUADSPI_LUT(lut_base)); +- writel(LUT0(DUMMY, PAD1, dummy) | LUT1(READ, PAD4, rxfifo), ++ writel(LUT0(DUMMY, PAD1, dummy) | LUT1(FSL_READ, PAD4, rxfifo), + base + QUADSPI_LUT(lut_base + 1)); + + /* Write enable */ +@@ -333,11 +333,11 @@ static void fsl_qspi_init_lut(struct fsl_qspi *q) + + writel(LUT0(CMD, PAD1, cmd) | LUT1(ADDR, PAD1, addrlen), + base + QUADSPI_LUT(lut_base)); +- writel(LUT0(WRITE, PAD1, 0), base + QUADSPI_LUT(lut_base + 1)); ++ writel(LUT0(FSL_WRITE, PAD1, 0), base + QUADSPI_LUT(lut_base + 1)); + + /* Read Status */ + lut_base = SEQID_RDSR * 4; +- writel(LUT0(CMD, PAD1, SPINOR_OP_RDSR) | LUT1(READ, PAD1, 0x1), ++ writel(LUT0(CMD, PAD1, SPINOR_OP_RDSR) | LUT1(FSL_READ, PAD1, 0x1), + base + QUADSPI_LUT(lut_base)); + + /* Erase a sector */ +@@ -362,17 +362,17 @@ static void fsl_qspi_init_lut(struct fsl_qspi *q) + + /* READ ID */ + lut_base = SEQID_RDID * 4; +- writel(LUT0(CMD, PAD1, SPINOR_OP_RDID) | LUT1(READ, PAD1, 0x8), ++ 
writel(LUT0(CMD, PAD1, SPINOR_OP_RDID) | LUT1(FSL_READ, PAD1, 0x8), + base + QUADSPI_LUT(lut_base)); + + /* Write Register */ + lut_base = SEQID_WRSR * 4; +- writel(LUT0(CMD, PAD1, SPINOR_OP_WRSR) | LUT1(WRITE, PAD1, 0x2), ++ writel(LUT0(CMD, PAD1, SPINOR_OP_WRSR) | LUT1(FSL_WRITE, PAD1, 0x2), + base + QUADSPI_LUT(lut_base)); + + /* Read Configuration Register */ + lut_base = SEQID_RDCR * 4; +- writel(LUT0(CMD, PAD1, SPINOR_OP_RDCR) | LUT1(READ, PAD1, 0x1), ++ writel(LUT0(CMD, PAD1, SPINOR_OP_RDCR) | LUT1(FSL_READ, PAD1, 0x1), + base + QUADSPI_LUT(lut_base)); + + /* Write disable */ +diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c +index 7896f0f1fa05..f9713fe036ef 100644 +--- a/drivers/net/ethernet/broadcom/tg3.c ++++ b/drivers/net/ethernet/broadcom/tg3.c +@@ -8722,11 +8722,14 @@ static void tg3_free_consistent(struct tg3 *tp) + tg3_mem_rx_release(tp); + tg3_mem_tx_release(tp); + ++ /* Protect tg3_get_stats64() from reading freed tp->hw_stats. */ ++ tg3_full_lock(tp, 0); + if (tp->hw_stats) { + dma_free_coherent(&tp->pdev->dev, sizeof(struct tg3_hw_stats), + tp->hw_stats, tp->stats_mapping); + tp->hw_stats = NULL; + } ++ tg3_full_unlock(tp); + } + + /* +diff --git a/drivers/net/ethernet/mellanox/mlx4/icm.c b/drivers/net/ethernet/mellanox/mlx4/icm.c +index 2a9dd460a95f..e1f9e7cebf8f 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/icm.c ++++ b/drivers/net/ethernet/mellanox/mlx4/icm.c +@@ -118,8 +118,13 @@ static int mlx4_alloc_icm_coherent(struct device *dev, struct scatterlist *mem, + if (!buf) + return -ENOMEM; + ++ if (offset_in_page(buf)) { ++ dma_free_coherent(dev, PAGE_SIZE << order, ++ buf, sg_dma_address(mem)); ++ return -ENOMEM; ++ } ++ + sg_set_buf(mem, buf, PAGE_SIZE << order); +- BUG_ON(mem->offset); + sg_dma_len(mem) = PAGE_SIZE << order; + return 0; + } +diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c +index 3df51faf18ae..af4b1f4c24d2 100644 +--- 
a/drivers/net/ethernet/realtek/r8169.c ++++ b/drivers/net/ethernet/realtek/r8169.c +@@ -326,6 +326,7 @@ enum cfg_version { + static const struct pci_device_id rtl8169_pci_tbl[] = { + { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8129), 0, 0, RTL_CFG_0 }, + { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8136), 0, 0, RTL_CFG_2 }, ++ { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8161), 0, 0, RTL_CFG_1 }, + { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8167), 0, 0, RTL_CFG_0 }, + { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8168), 0, 0, RTL_CFG_1 }, + { PCI_DEVICE(PCI_VENDOR_ID_REALTEK, 0x8169), 0, 0, RTL_CFG_0 }, +diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c +index c93a458f96f7..e2dd94a91c15 100644 +--- a/drivers/net/ethernet/renesas/sh_eth.c ++++ b/drivers/net/ethernet/renesas/sh_eth.c +@@ -726,6 +726,7 @@ static struct sh_eth_cpu_data sh7734_data = { + .tsu = 1, + .hw_crc = 1, + .select_mii = 1, ++ .shift_rd0 = 1, + }; + + /* SH7763 */ +@@ -794,6 +795,7 @@ static struct sh_eth_cpu_data r8a7740_data = { + .rpadir_value = 2 << 16, + .no_trimd = 1, + .no_ade = 1, ++ .hw_crc = 1, + .tsu = 1, + .select_mii = 1, + .shift_rd0 = 1, +diff --git a/drivers/net/irda/mcs7780.c b/drivers/net/irda/mcs7780.c +index bca6a1e72d1d..e1bb802d4a4d 100644 +--- a/drivers/net/irda/mcs7780.c ++++ b/drivers/net/irda/mcs7780.c +@@ -141,9 +141,19 @@ static int mcs_set_reg(struct mcs_cb *mcs, __u16 reg, __u16 val) + static int mcs_get_reg(struct mcs_cb *mcs, __u16 reg, __u16 * val) + { + struct usb_device *dev = mcs->usbdev; +- int ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0), MCS_RDREQ, +- MCS_RD_RTYPE, 0, reg, val, 2, +- msecs_to_jiffies(MCS_CTRL_TIMEOUT)); ++ void *dmabuf; ++ int ret; ++ ++ dmabuf = kmalloc(sizeof(__u16), GFP_KERNEL); ++ if (!dmabuf) ++ return -ENOMEM; ++ ++ ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0), MCS_RDREQ, ++ MCS_RD_RTYPE, 0, reg, dmabuf, 2, ++ msecs_to_jiffies(MCS_CTRL_TIMEOUT)); ++ ++ memcpy(val, dmabuf, sizeof(__u16)); ++ kfree(dmabuf); + + return 
ret; + } +diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c +index 21a668faacd7..1ca78b46c01b 100644 +--- a/drivers/net/phy/phy.c ++++ b/drivers/net/phy/phy.c +@@ -512,6 +512,9 @@ void phy_stop_machine(struct phy_device *phydev) + if (phydev->state > PHY_UP && phydev->state != PHY_HALTED) + phydev->state = PHY_UP; + mutex_unlock(&phydev->lock); ++ ++ /* Now we can run the state machine synchronously */ ++ phy_state_machine(&phydev->state_queue.work); + } + + /** +diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c +index d551df62e61a..afb87840f853 100644 +--- a/drivers/net/phy/phy_device.c ++++ b/drivers/net/phy/phy_device.c +@@ -1281,6 +1281,8 @@ static int phy_remove(struct device *dev) + { + struct phy_device *phydev = to_phy_device(dev); + ++ cancel_delayed_work_sync(&phydev->state_queue); ++ + mutex_lock(&phydev->lock); + phydev->state = PHY_DOWN; + mutex_unlock(&phydev->lock); +@@ -1355,7 +1357,7 @@ static struct phy_driver genphy_driver[] = { + .phy_id = 0xffffffff, + .phy_id_mask = 0xffffffff, + .name = "Generic PHY", +- .soft_reset = genphy_soft_reset, ++ .soft_reset = genphy_no_soft_reset, + .config_init = genphy_config_init, + .features = PHY_GBIT_FEATURES | SUPPORTED_MII | + SUPPORTED_AUI | SUPPORTED_FIBRE | +diff --git a/drivers/net/usb/kaweth.c b/drivers/net/usb/kaweth.c +index 1e9cdca37014..c626971096d4 100644 +--- a/drivers/net/usb/kaweth.c ++++ b/drivers/net/usb/kaweth.c +@@ -1009,6 +1009,7 @@ static int kaweth_probe( + struct net_device *netdev; + const eth_addr_t bcast_addr = { 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF }; + int result = 0; ++ int rv = -EIO; + + dev_dbg(dev, + "Kawasaki Device Probe (Device number:%d): 0x%4.4x:0x%4.4x:0x%4.4x\n", +@@ -1029,6 +1030,7 @@ static int kaweth_probe( + kaweth = netdev_priv(netdev); + kaweth->dev = udev; + kaweth->net = netdev; ++ kaweth->intf = intf; + + spin_lock_init(&kaweth->device_lock); + init_waitqueue_head(&kaweth->term_wait); +@@ -1048,6 +1050,10 @@ static int kaweth_probe( 
+ /* Download the firmware */ + dev_info(dev, "Downloading firmware...\n"); + kaweth->firmware_buf = (__u8 *)__get_free_page(GFP_KERNEL); ++ if (!kaweth->firmware_buf) { ++ rv = -ENOMEM; ++ goto err_free_netdev; ++ } + if ((result = kaweth_download_firmware(kaweth, + "kaweth/new_code.bin", + 100, +@@ -1139,8 +1145,6 @@ err_fw: + + dev_dbg(dev, "Initializing net device.\n"); + +- kaweth->intf = intf; +- + kaweth->tx_urb = usb_alloc_urb(0, GFP_KERNEL); + if (!kaweth->tx_urb) + goto err_free_netdev; +@@ -1210,7 +1214,7 @@ err_only_tx: + err_free_netdev: + free_netdev(netdev); + +- return -EIO; ++ return rv; + } + + /**************************************************************** +diff --git a/drivers/net/wireless/ath/ath10k/wmi-ops.h b/drivers/net/wireless/ath/ath10k/wmi-ops.h +index c8b64e7a6089..deed8dcfd91a 100644 +--- a/drivers/net/wireless/ath/ath10k/wmi-ops.h ++++ b/drivers/net/wireless/ath/ath10k/wmi-ops.h +@@ -562,6 +562,9 @@ ath10k_wmi_vdev_spectral_conf(struct ath10k *ar, + struct sk_buff *skb; + u32 cmd_id; + ++ if (!ar->wmi.ops->gen_vdev_spectral_conf) ++ return -EOPNOTSUPP; ++ + skb = ar->wmi.ops->gen_vdev_spectral_conf(ar, arg); + if (IS_ERR(skb)) + return PTR_ERR(skb); +@@ -577,6 +580,9 @@ ath10k_wmi_vdev_spectral_enable(struct ath10k *ar, u32 vdev_id, u32 trigger, + struct sk_buff *skb; + u32 cmd_id; + ++ if (!ar->wmi.ops->gen_vdev_spectral_enable) ++ return -EOPNOTSUPP; ++ + skb = ar->wmi.ops->gen_vdev_spectral_enable(ar, vdev_id, trigger, + enable); + if (IS_ERR(skb)) +diff --git a/drivers/net/wireless/ath/ath9k/ar9003_phy.c b/drivers/net/wireless/ath/ath9k/ar9003_phy.c +index 1ad66b76749b..c1b661e5c8c4 100644 +--- a/drivers/net/wireless/ath/ath9k/ar9003_phy.c ++++ b/drivers/net/wireless/ath/ath9k/ar9003_phy.c +@@ -1816,8 +1816,6 @@ static void ar9003_hw_spectral_scan_wait(struct ath_hw *ah) + static void ar9003_hw_tx99_start(struct ath_hw *ah, u32 qnum) + { + REG_SET_BIT(ah, AR_PHY_TEST, PHY_AGC_CLR); +- REG_SET_BIT(ah, 0x9864, 0x7f000); +- 
REG_SET_BIT(ah, 0x9924, 0x7f00fe); + REG_CLR_BIT(ah, AR_DIAG_SW, AR_DIAG_RX_DIS); + REG_WRITE(ah, AR_CR, AR_CR_RXD); + REG_WRITE(ah, AR_DLCL_IFS(qnum), 0); +diff --git a/drivers/net/wireless/ath/ath9k/tx99.c b/drivers/net/wireless/ath/ath9k/tx99.c +index ac4781f37e78..b4e6304afd40 100644 +--- a/drivers/net/wireless/ath/ath9k/tx99.c ++++ b/drivers/net/wireless/ath/ath9k/tx99.c +@@ -190,22 +190,27 @@ static ssize_t write_file_tx99(struct file *file, const char __user *user_buf, + if (strtobool(buf, &start)) + return -EINVAL; + ++ mutex_lock(&sc->mutex); ++ + if (start == sc->tx99_state) { + if (!start) +- return count; ++ goto out; + ath_dbg(common, XMIT, "Resetting TX99\n"); + ath9k_tx99_deinit(sc); + } + + if (!start) { + ath9k_tx99_deinit(sc); +- return count; ++ goto out; + } + + r = ath9k_tx99_init(sc); +- if (r) ++ if (r) { ++ mutex_unlock(&sc->mutex); + return r; +- ++ } ++out: ++ mutex_unlock(&sc->mutex); + return count; + } + +diff --git a/drivers/net/wireless/ath/wil6210/main.c b/drivers/net/wireless/ath/wil6210/main.c +index c2a238426425..a058151f5eed 100644 +--- a/drivers/net/wireless/ath/wil6210/main.c ++++ b/drivers/net/wireless/ath/wil6210/main.c +@@ -323,18 +323,19 @@ static void wil_fw_error_worker(struct work_struct *work) + + wil->last_fw_recovery = jiffies; + ++ wil_info(wil, "fw error recovery requested (try %d)...\n", ++ wil->recovery_count); ++ if (!no_fw_recovery) ++ wil->recovery_state = fw_recovery_running; ++ if (wil_wait_for_recovery(wil) != 0) ++ return; ++ + mutex_lock(&wil->mutex); + switch (wdev->iftype) { + case NL80211_IFTYPE_STATION: + case NL80211_IFTYPE_P2P_CLIENT: + case NL80211_IFTYPE_MONITOR: +- wil_info(wil, "fw error recovery requested (try %d)...\n", +- wil->recovery_count); +- if (!no_fw_recovery) +- wil->recovery_state = fw_recovery_running; +- if (0 != wil_wait_for_recovery(wil)) +- break; +- ++ /* silent recovery, upper layers will see disconnect */ + __wil_down(wil); + __wil_up(wil); + break; +diff --git 
a/drivers/net/xen-netback/common.h b/drivers/net/xen-netback/common.h +index 8a495b318b6f..aa8400e3839c 100644 +--- a/drivers/net/xen-netback/common.h ++++ b/drivers/net/xen-netback/common.h +@@ -195,6 +195,7 @@ struct xenvif_queue { /* Per-queue data for xenvif */ + unsigned long remaining_credit; + struct timer_list credit_timeout; + u64 credit_window_start; ++ bool rate_limited; + + /* Statistics */ + struct xenvif_stats stats; +diff --git a/drivers/net/xen-netback/interface.c b/drivers/net/xen-netback/interface.c +index 1a83e190fc15..e34527071260 100644 +--- a/drivers/net/xen-netback/interface.c ++++ b/drivers/net/xen-netback/interface.c +@@ -99,7 +99,11 @@ static int xenvif_poll(struct napi_struct *napi, int budget) + + if (work_done < budget) { + napi_complete(napi); +- xenvif_napi_schedule_or_enable_events(queue); ++ /* If the queue is rate-limited, it shall be ++ * rescheduled in the timer callback. ++ */ ++ if (likely(!queue->rate_limited)) ++ xenvif_napi_schedule_or_enable_events(queue); + } + + return work_done; +diff --git a/drivers/net/xen-netback/netback.c b/drivers/net/xen-netback/netback.c +index 5e5b6184e720..7bd3c5a8116d 100644 +--- a/drivers/net/xen-netback/netback.c ++++ b/drivers/net/xen-netback/netback.c +@@ -640,6 +640,7 @@ static void tx_add_credit(struct xenvif_queue *queue) + max_credit = ULONG_MAX; /* wrapped: clamp to ULONG_MAX */ + + queue->remaining_credit = min(max_credit, max_burst); ++ queue->rate_limited = false; + } + + void xenvif_tx_credit_callback(unsigned long data) +@@ -1152,8 +1153,10 @@ static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size) + msecs_to_jiffies(queue->credit_usec / 1000); + + /* Timer could already be pending in rare cases. */ +- if (timer_pending(&queue->credit_timeout)) ++ if (timer_pending(&queue->credit_timeout)) { ++ queue->rate_limited = true; + return true; ++ } + + /* Passed the point where we can replenish credit? 
*/ + if (time_after_eq64(now, next_credit)) { +@@ -1168,6 +1171,7 @@ static bool tx_credit_exceeded(struct xenvif_queue *queue, unsigned size) + mod_timer(&queue->credit_timeout, + next_credit); + queue->credit_window_start = next_credit; ++ queue->rate_limited = true; + + return true; + } +diff --git a/drivers/of/device.c b/drivers/of/device.c +index 20c1332a0018..493b21bd1199 100644 +--- a/drivers/of/device.c ++++ b/drivers/of/device.c +@@ -212,6 +212,7 @@ ssize_t of_device_get_modalias(struct device *dev, char *str, ssize_t len) + + return tsize; + } ++EXPORT_SYMBOL_GPL(of_device_get_modalias); + + /** + * of_device_uevent - Display OF related uevent information +@@ -274,3 +275,4 @@ int of_device_uevent_modalias(struct device *dev, struct kobj_uevent_env *env) + + return 0; + } ++EXPORT_SYMBOL_GPL(of_device_uevent_modalias); +diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c +index 74f4a26e16b5..98101c4118bb 100644 +--- a/drivers/pci/pci-driver.c ++++ b/drivers/pci/pci-driver.c +@@ -937,6 +937,7 @@ static int pci_pm_thaw_noirq(struct device *dev) + return pci_legacy_resume_early(dev); + + pci_update_current_state(pci_dev, PCI_D0); ++ pci_restore_state(pci_dev); + + if (drv && drv->pm && drv->pm->thaw_noirq) + error = drv->pm->thaw_noirq(dev); +diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.c b/drivers/pinctrl/samsung/pinctrl-exynos.c +index 0b7afa50121a..390feee4d47c 100644 +--- a/drivers/pinctrl/samsung/pinctrl-exynos.c ++++ b/drivers/pinctrl/samsung/pinctrl-exynos.c +@@ -194,8 +194,6 @@ static int exynos_irq_request_resources(struct irq_data *irqd) + + spin_unlock_irqrestore(&bank->slock, flags); + +- exynos_irq_unmask(irqd); +- + return 0; + } + +@@ -216,8 +214,6 @@ static void exynos_irq_release_resources(struct irq_data *irqd) + shift = irqd->hwirq * bank_type->fld_width[PINCFG_TYPE_FUNC]; + mask = (1 << bank_type->fld_width[PINCFG_TYPE_FUNC]) - 1; + +- exynos_irq_mask(irqd); +- + spin_lock_irqsave(&bank->slock, flags); + + con = 
readl(d->virt_base + reg_con); +diff --git a/drivers/pinctrl/sunxi/pinctrl-sun4i-a10.c b/drivers/pinctrl/sunxi/pinctrl-sun4i-a10.c +index 7376a97b5e65..727ce62de0bd 100644 +--- a/drivers/pinctrl/sunxi/pinctrl-sun4i-a10.c ++++ b/drivers/pinctrl/sunxi/pinctrl-sun4i-a10.c +@@ -800,6 +800,7 @@ static const struct sunxi_desc_pin sun4i_a10_pins[] = { + SUNXI_FUNCTION(0x2, "lcd1"), /* D16 */ + SUNXI_FUNCTION(0x3, "pata"), /* ATAD12 */ + SUNXI_FUNCTION(0x4, "keypad"), /* IN6 */ ++ SUNXI_FUNCTION(0x5, "sim"), /* DET */ + SUNXI_FUNCTION_IRQ(0x6, 16), /* EINT16 */ + SUNXI_FUNCTION(0x7, "csi1")), /* D16 */ + SUNXI_PIN(SUNXI_PINCTRL_PIN(H, 17), +diff --git a/drivers/scsi/fnic/fnic.h b/drivers/scsi/fnic/fnic.h +index ce129e595b55..5c935847599c 100644 +--- a/drivers/scsi/fnic/fnic.h ++++ b/drivers/scsi/fnic/fnic.h +@@ -248,6 +248,7 @@ struct fnic { + struct completion *remove_wait; /* device remove thread blocks */ + + atomic_t in_flight; /* io counter */ ++ bool internal_reset_inprogress; + u32 _reserved; /* fill hole */ + unsigned long state_flags; /* protected by host lock */ + enum fnic_state state; +diff --git a/drivers/scsi/fnic/fnic_scsi.c b/drivers/scsi/fnic/fnic_scsi.c +index 25436cd2860c..eaf29b18fb7a 100644 +--- a/drivers/scsi/fnic/fnic_scsi.c ++++ b/drivers/scsi/fnic/fnic_scsi.c +@@ -2517,6 +2517,19 @@ int fnic_host_reset(struct scsi_cmnd *sc) + unsigned long wait_host_tmo; + struct Scsi_Host *shost = sc->device->host; + struct fc_lport *lp = shost_priv(shost); ++ struct fnic *fnic = lport_priv(lp); ++ unsigned long flags; ++ ++ spin_lock_irqsave(&fnic->fnic_lock, flags); ++ if (fnic->internal_reset_inprogress == 0) { ++ fnic->internal_reset_inprogress = 1; ++ } else { ++ spin_unlock_irqrestore(&fnic->fnic_lock, flags); ++ FNIC_SCSI_DBG(KERN_DEBUG, fnic->lport->host, ++ "host reset in progress skipping another host reset\n"); ++ return SUCCESS; ++ } ++ spin_unlock_irqrestore(&fnic->fnic_lock, flags); + + /* + * If fnic_reset is successful, wait for fabric login to 
complete +@@ -2537,6 +2550,9 @@ int fnic_host_reset(struct scsi_cmnd *sc) + } + } + ++ spin_lock_irqsave(&fnic->fnic_lock, flags); ++ fnic->internal_reset_inprogress = 0; ++ spin_unlock_irqrestore(&fnic->fnic_lock, flags); + return ret; + } + +diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c +index 14a781b6b88d..093f7b4847df 100644 +--- a/drivers/scsi/mpt3sas/mpt3sas_base.c ++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c +@@ -4410,14 +4410,13 @@ _base_make_ioc_ready(struct MPT3SAS_ADAPTER *ioc, int sleep_flag, + static int + _base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc, int sleep_flag) + { +- int r, i; ++ int r, i, index; + unsigned long flags; + u32 reply_address; + u16 smid; + struct _tr_list *delayed_tr, *delayed_tr_next; + struct adapter_reply_queue *reply_q; +- long reply_post_free; +- u32 reply_post_free_sz, index = 0; ++ Mpi2ReplyDescriptorsUnion_t *reply_post_free_contig; + + dinitprintk(ioc, pr_info(MPT3SAS_FMT "%s\n", ioc->name, + __func__)); +@@ -4488,27 +4487,27 @@ _base_make_ioc_operational(struct MPT3SAS_ADAPTER *ioc, int sleep_flag) + _base_assign_reply_queues(ioc); + + /* initialize Reply Post Free Queue */ +- reply_post_free_sz = ioc->reply_post_queue_depth * +- sizeof(Mpi2DefaultReplyDescriptor_t); +- reply_post_free = (long)ioc->reply_post[index].reply_post_free; ++ index = 0; ++ reply_post_free_contig = ioc->reply_post[0].reply_post_free; + list_for_each_entry(reply_q, &ioc->reply_queue_list, list) { ++ /* ++ * If RDPQ is enabled, switch to the next allocation. ++ * Otherwise advance within the contiguous region. 
++ */ ++ if (ioc->rdpq_array_enable) { ++ reply_q->reply_post_free = ++ ioc->reply_post[index++].reply_post_free; ++ } else { ++ reply_q->reply_post_free = reply_post_free_contig; ++ reply_post_free_contig += ioc->reply_post_queue_depth; ++ } ++ + reply_q->reply_post_host_index = 0; +- reply_q->reply_post_free = (Mpi2ReplyDescriptorsUnion_t *) +- reply_post_free; + for (i = 0; i < ioc->reply_post_queue_depth; i++) + reply_q->reply_post_free[i].Words = + cpu_to_le64(ULLONG_MAX); + if (!_base_is_controller_msix_enabled(ioc)) + goto skip_init_reply_post_free_queue; +- /* +- * If RDPQ is enabled, switch to the next allocation. +- * Otherwise advance within the contiguous region. +- */ +- if (ioc->rdpq_array_enable) +- reply_post_free = (long) +- ioc->reply_post[++index].reply_post_free; +- else +- reply_post_free += reply_post_free_sz; + } + skip_init_reply_post_free_queue: + +diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c +index 82b92c414a9c..c1b2e86839ae 100644 +--- a/drivers/scsi/qla2xxx/qla_attr.c ++++ b/drivers/scsi/qla2xxx/qla_attr.c +@@ -329,12 +329,15 @@ qla2x00_sysfs_read_optrom(struct file *filp, struct kobject *kobj, + struct qla_hw_data *ha = vha->hw; + ssize_t rval = 0; + ++ mutex_lock(&ha->optrom_mutex); ++ + if (ha->optrom_state != QLA_SREADING) +- return 0; ++ goto out; + +- mutex_lock(&ha->optrom_mutex); + rval = memory_read_from_buffer(buf, count, &off, ha->optrom_buffer, + ha->optrom_region_size); ++ ++out: + mutex_unlock(&ha->optrom_mutex); + + return rval; +@@ -349,14 +352,19 @@ qla2x00_sysfs_write_optrom(struct file *filp, struct kobject *kobj, + struct device, kobj))); + struct qla_hw_data *ha = vha->hw; + +- if (ha->optrom_state != QLA_SWRITING) ++ mutex_lock(&ha->optrom_mutex); ++ ++ if (ha->optrom_state != QLA_SWRITING) { ++ mutex_unlock(&ha->optrom_mutex); + return -EINVAL; +- if (off > ha->optrom_region_size) ++ } ++ if (off > ha->optrom_region_size) { ++ mutex_unlock(&ha->optrom_mutex); + return -ERANGE; ++ } 
+ if (off + count > ha->optrom_region_size) + count = ha->optrom_region_size - off; + +- mutex_lock(&ha->optrom_mutex); + memcpy(&ha->optrom_buffer[off], buf, count); + mutex_unlock(&ha->optrom_mutex); + +diff --git a/drivers/spi/spi-dw.c b/drivers/spi/spi-dw.c +index 4fbfcdc5cb24..ebaefecd6e82 100644 +--- a/drivers/spi/spi-dw.c ++++ b/drivers/spi/spi-dw.c +@@ -113,7 +113,10 @@ static const struct file_operations dw_spi_regs_ops = { + + static int dw_spi_debugfs_init(struct dw_spi *dws) + { +- dws->debugfs = debugfs_create_dir("dw_spi", NULL); ++ char name[128]; ++ ++ snprintf(name, 128, "dw_spi-%s", dev_name(&dws->master->dev)); ++ dws->debugfs = debugfs_create_dir(name, NULL); + if (!dws->debugfs) + return -ENOMEM; + +diff --git a/drivers/spmi/spmi.c b/drivers/spmi/spmi.c +index 94938436aef9..2f9f2958c203 100644 +--- a/drivers/spmi/spmi.c ++++ b/drivers/spmi/spmi.c +@@ -348,11 +348,23 @@ static int spmi_drv_remove(struct device *dev) + return 0; + } + ++static int spmi_drv_uevent(struct device *dev, struct kobj_uevent_env *env) ++{ ++ int ret; ++ ++ ret = of_device_uevent_modalias(dev, env); ++ if (ret != -ENODEV) ++ return ret; ++ ++ return 0; ++} ++ + static struct bus_type spmi_bus_type = { + .name = "spmi", + .match = spmi_device_match, + .probe = spmi_drv_probe, + .remove = spmi_drv_remove, ++ .uevent = spmi_drv_uevent, + }; + + /** +diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/staging/comedi/comedi_fops.c +index a503132f91e8..ab6139b5472f 100644 +--- a/drivers/staging/comedi/comedi_fops.c ++++ b/drivers/staging/comedi/comedi_fops.c +@@ -2875,9 +2875,6 @@ static int __init comedi_init(void) + + comedi_class->dev_groups = comedi_dev_groups; + +- /* XXX requires /proc interface */ +- comedi_proc_init(); +- + /* create devices files for legacy/manual use */ + for (i = 0; i < comedi_num_legacy_minors; i++) { + struct comedi_device *dev; +@@ -2895,6 +2892,9 @@ static int __init comedi_init(void) + mutex_unlock(&dev->mutex); + } + ++ /* XXX requires 
/proc interface */ ++ comedi_proc_init(); ++ + return 0; + } + module_init(comedi_init); +diff --git a/drivers/staging/iio/resolver/ad2s1210.c b/drivers/staging/iio/resolver/ad2s1210.c +index 7bc3e4a73834..16af77d20bdb 100644 +--- a/drivers/staging/iio/resolver/ad2s1210.c ++++ b/drivers/staging/iio/resolver/ad2s1210.c +@@ -468,7 +468,7 @@ static int ad2s1210_read_raw(struct iio_dev *indio_dev, + long m) + { + struct ad2s1210_state *st = iio_priv(indio_dev); +- bool negative; ++ u16 negative; + int ret = 0; + u16 pos; + s16 vel; +diff --git a/drivers/staging/rtl8188eu/os_dep/usb_intf.c b/drivers/staging/rtl8188eu/os_dep/usb_intf.c +index ef3c73e38172..4273e34ff3ea 100644 +--- a/drivers/staging/rtl8188eu/os_dep/usb_intf.c ++++ b/drivers/staging/rtl8188eu/os_dep/usb_intf.c +@@ -48,6 +48,7 @@ static struct usb_device_id rtw_usb_id_tbl[] = { + {USB_DEVICE(0x2001, 0x330F)}, /* DLink DWA-125 REV D1 */ + {USB_DEVICE(0x2001, 0x3310)}, /* Dlink DWA-123 REV D1 */ + {USB_DEVICE(0x2001, 0x3311)}, /* DLink GO-USB-N150 REV B1 */ ++ {USB_DEVICE(0x2357, 0x010c)}, /* TP-Link TL-WN722N v2 */ + {USB_DEVICE(0x0df6, 0x0076)}, /* Sitecom N150 v2 */ + {} /* Terminating entry */ + }; +diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c +index 2fc3a231c2b6..7444640a7453 100644 +--- a/drivers/target/iscsi/iscsi_target.c ++++ b/drivers/target/iscsi/iscsi_target.c +@@ -426,6 +426,7 @@ int iscsit_reset_np_thread( + return 0; + } + np->np_thread_state = ISCSI_NP_THREAD_RESET; ++ atomic_inc(&np->np_reset_count); + + if (np->np_thread) { + spin_unlock_bh(&np->np_thread_lock); +@@ -1992,6 +1993,7 @@ iscsit_setup_text_cmd(struct iscsi_conn *conn, struct iscsi_cmd *cmd, + cmd->cmd_sn = be32_to_cpu(hdr->cmdsn); + cmd->exp_stat_sn = be32_to_cpu(hdr->exp_statsn); + cmd->data_direction = DMA_NONE; ++ kfree(cmd->text_in_ptr); + cmd->text_in_ptr = NULL; + + return 0; +@@ -4589,8 +4591,11 @@ static void iscsit_logout_post_handler_closesession( + * always sleep waiting for 
RX/TX thread shutdown to complete + * within iscsit_close_connection(). + */ +- if (conn->conn_transport->transport_type == ISCSI_TCP) ++ if (conn->conn_transport->transport_type == ISCSI_TCP) { + sleep = cmpxchg(&conn->tx_thread_active, true, false); ++ if (!sleep) ++ return; ++ } + + atomic_set(&conn->conn_logout_remove, 0); + complete(&conn->conn_logout_comp); +@@ -4606,8 +4611,11 @@ static void iscsit_logout_post_handler_samecid( + { + int sleep = 1; + +- if (conn->conn_transport->transport_type == ISCSI_TCP) ++ if (conn->conn_transport->transport_type == ISCSI_TCP) { + sleep = cmpxchg(&conn->tx_thread_active, true, false); ++ if (!sleep) ++ return; ++ } + + atomic_set(&conn->conn_logout_remove, 0); + complete(&conn->conn_logout_comp); +diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c +index c5cbd702e7cd..ee3a4bb9fba7 100644 +--- a/drivers/target/iscsi/iscsi_target_login.c ++++ b/drivers/target/iscsi/iscsi_target_login.c +@@ -1290,9 +1290,11 @@ static int __iscsi_target_login_thread(struct iscsi_np *np) + flush_signals(current); + + spin_lock_bh(&np->np_thread_lock); +- if (np->np_thread_state == ISCSI_NP_THREAD_RESET) { ++ if (atomic_dec_if_positive(&np->np_reset_count) >= 0) { + np->np_thread_state = ISCSI_NP_THREAD_ACTIVE; ++ spin_unlock_bh(&np->np_thread_lock); + complete(&np->np_restart_comp); ++ return 1; + } else if (np->np_thread_state == ISCSI_NP_THREAD_SHUTDOWN) { + spin_unlock_bh(&np->np_thread_lock); + goto exit; +@@ -1325,7 +1327,8 @@ static int __iscsi_target_login_thread(struct iscsi_np *np) + goto exit; + } else if (rc < 0) { + spin_lock_bh(&np->np_thread_lock); +- if (np->np_thread_state == ISCSI_NP_THREAD_RESET) { ++ if (atomic_dec_if_positive(&np->np_reset_count) >= 0) { ++ np->np_thread_state = ISCSI_NP_THREAD_ACTIVE; + spin_unlock_bh(&np->np_thread_lock); + complete(&np->np_restart_comp); + iscsit_put_transport(conn->conn_transport); +diff --git a/drivers/target/target_core_transport.c 
b/drivers/target/target_core_transport.c +index 95c1c4ecf336..b7d27b816359 100644 +--- a/drivers/target/target_core_transport.c ++++ b/drivers/target/target_core_transport.c +@@ -733,6 +733,15 @@ void target_complete_cmd(struct se_cmd *cmd, u8 scsi_status) + if (cmd->transport_state & CMD_T_ABORTED || + cmd->transport_state & CMD_T_STOP) { + spin_unlock_irqrestore(&cmd->t_state_lock, flags); ++ /* ++ * If COMPARE_AND_WRITE was stopped by __transport_wait_for_tasks(), ++ * release se_device->caw_sem obtained by sbc_compare_and_write() ++ * since target_complete_ok_work() or target_complete_failure_work() ++ * won't be called to invoke the normal CAW completion callbacks. ++ */ ++ if (cmd->se_cmd_flags & SCF_COMPARE_AND_WRITE) { ++ up(&dev->caw_sem); ++ } + complete_all(&cmd->t_transport_stop_comp); + return; + } else if (!success) { +diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c +index 7983298ab32c..acab64245923 100644 +--- a/drivers/usb/class/cdc-acm.c ++++ b/drivers/usb/class/cdc-acm.c +@@ -1771,6 +1771,9 @@ static const struct usb_device_id acm_ids[] = { + { USB_DEVICE(0x1576, 0x03b1), /* Maretron USB100 */ + .driver_info = NO_UNION_NORMAL, /* reports zero length descriptor */ + }, ++ { USB_DEVICE(0xfff0, 0x0100), /* DATECS FP-2000 */ ++ .driver_info = NO_UNION_NORMAL, /* reports zero length descriptor */ ++ }, + + { USB_DEVICE(0x2912, 0x0001), /* ATOL FPrint */ + .driver_info = CLEAR_HALT_CONDITIONS, +diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c +index 029fa26d2ac9..78f357b1a8fd 100644 +--- a/drivers/usb/core/hcd.c ++++ b/drivers/usb/core/hcd.c +@@ -2397,6 +2397,8 @@ void usb_hc_died (struct usb_hcd *hcd) + } + if (usb_hcd_is_primary_hcd(hcd) && hcd->shared_hcd) { + hcd = hcd->shared_hcd; ++ clear_bit(HCD_FLAG_RH_RUNNING, &hcd->flags); ++ set_bit(HCD_FLAG_DEAD, &hcd->flags); + if (hcd->rh_registered) { + clear_bit(HCD_FLAG_POLL_RH, &hcd->flags); + +diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c +index 
e479c7d47a9f..fbf5c57b8251 100644 +--- a/drivers/usb/core/hub.c ++++ b/drivers/usb/core/hub.c +@@ -4618,7 +4618,8 @@ hub_power_remaining (struct usb_hub *hub) + static void hub_port_connect(struct usb_hub *hub, int port1, u16 portstatus, + u16 portchange) + { +- int status, i; ++ int status = -ENODEV; ++ int i; + unsigned unit_load; + struct usb_device *hdev = hub->hdev; + struct usb_hcd *hcd = bus_to_hcd(hdev->bus); +@@ -4822,9 +4823,10 @@ loop: + + done: + hub_port_disable(hub, port1, 1); +- if (hcd->driver->relinquish_port && !hub->hdev->parent) +- hcd->driver->relinquish_port(hcd, port1); +- ++ if (hcd->driver->relinquish_port && !hub->hdev->parent) { ++ if (status != -ENOTCONN && status != -ENODEV) ++ hcd->driver->relinquish_port(hcd, port1); ++ } + } + + /* Handle physical or logical connection change events. +diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c +index 3116edfcdc18..574da2b4529c 100644 +--- a/drivers/usb/core/quirks.c ++++ b/drivers/usb/core/quirks.c +@@ -150,6 +150,9 @@ static const struct usb_device_id usb_quirk_list[] = { + /* appletouch */ + { USB_DEVICE(0x05ac, 0x021a), .driver_info = USB_QUIRK_RESET_RESUME }, + ++ /* Genesys Logic hub, internally used by Moshi USB to Ethernet Adapter */ ++ { USB_DEVICE(0x05e3, 0x0616), .driver_info = USB_QUIRK_NO_LPM }, ++ + /* Avision AV600U */ + { USB_DEVICE(0x0638, 0x0a13), .driver_info = + USB_QUIRK_STRING_FETCH_255 }, +@@ -249,6 +252,7 @@ static const struct usb_device_id usb_amd_resume_quirk_list[] = { + { USB_DEVICE(0x093a, 0x2500), .driver_info = USB_QUIRK_RESET_RESUME }, + { USB_DEVICE(0x093a, 0x2510), .driver_info = USB_QUIRK_RESET_RESUME }, + { USB_DEVICE(0x093a, 0x2521), .driver_info = USB_QUIRK_RESET_RESUME }, ++ { USB_DEVICE(0x03f0, 0x2b4a), .driver_info = USB_QUIRK_RESET_RESUME }, + + /* Logitech Optical Mouse M90/M100 */ + { USB_DEVICE(0x046d, 0xc05a), .driver_info = USB_QUIRK_RESET_RESUME }, +diff --git a/drivers/usb/gadget/function/f_hid.c 
b/drivers/usb/gadget/function/f_hid.c +index f7f35a36c09a..466640afa7be 100644 +--- a/drivers/usb/gadget/function/f_hid.c ++++ b/drivers/usb/gadget/function/f_hid.c +@@ -544,7 +544,7 @@ static int hidg_set_alt(struct usb_function *f, unsigned intf, unsigned alt) + } + status = usb_ep_enable(hidg->out_ep); + if (status < 0) { +- ERROR(cdev, "Enable IN endpoint FAILED!\n"); ++ ERROR(cdev, "Enable OUT endpoint FAILED!\n"); + goto fail; + } + hidg->out_ep->driver_data = hidg; +diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c +index f9400564cb72..03b9a372636f 100644 +--- a/drivers/usb/host/pci-quirks.c ++++ b/drivers/usb/host/pci-quirks.c +@@ -89,6 +89,7 @@ enum amd_chipset_gen { + AMD_CHIPSET_HUDSON2, + AMD_CHIPSET_BOLTON, + AMD_CHIPSET_YANGTZE, ++ AMD_CHIPSET_TAISHAN, + AMD_CHIPSET_UNKNOWN, + }; + +@@ -132,6 +133,11 @@ static int amd_chipset_sb_type_init(struct amd_chipset_info *pinfo) + pinfo->sb_type.gen = AMD_CHIPSET_SB700; + else if (rev >= 0x40 && rev <= 0x4f) + pinfo->sb_type.gen = AMD_CHIPSET_SB800; ++ } ++ pinfo->smbus_dev = pci_get_device(PCI_VENDOR_ID_AMD, ++ 0x145c, NULL); ++ if (pinfo->smbus_dev) { ++ pinfo->sb_type.gen = AMD_CHIPSET_TAISHAN; + } else { + pinfo->smbus_dev = pci_get_device(PCI_VENDOR_ID_AMD, + PCI_DEVICE_ID_AMD_HUDSON2_SMBUS, NULL); +@@ -251,11 +257,12 @@ int usb_hcd_amd_remote_wakeup_quirk(struct pci_dev *pdev) + { + /* Make sure amd chipset type has already been initialized */ + usb_amd_find_chipset_info(); +- if (amd_chipset.sb_type.gen != AMD_CHIPSET_YANGTZE) +- return 0; +- +- dev_dbg(&pdev->dev, "QUIRK: Enable AMD remote wakeup fix\n"); +- return 1; ++ if (amd_chipset.sb_type.gen == AMD_CHIPSET_YANGTZE || ++ amd_chipset.sb_type.gen == AMD_CHIPSET_TAISHAN) { ++ dev_dbg(&pdev->dev, "QUIRK: Enable AMD remote wakeup fix\n"); ++ return 1; ++ } ++ return 0; + } + EXPORT_SYMBOL_GPL(usb_hcd_amd_remote_wakeup_quirk); + +diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c +index 
2dd322e92951..25b1cf0b6848 100644 +--- a/drivers/usb/host/xhci-hub.c ++++ b/drivers/usb/host/xhci-hub.c +@@ -651,6 +651,9 @@ static u32 xhci_get_port_status(struct usb_hcd *hcd, + clear_bit(wIndex, &bus_state->resuming_ports); + + set_bit(wIndex, &bus_state->rexit_ports); ++ ++ xhci_test_and_clear_bit(xhci, port_array, wIndex, ++ PORT_PLC); + xhci_set_link_state(xhci, port_array, wIndex, + XDEV_U0); + +diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c +index fbb77e2b288d..639419066ec4 100644 +--- a/drivers/usb/host/xhci-ring.c ++++ b/drivers/usb/host/xhci-ring.c +@@ -789,13 +789,16 @@ static void xhci_kill_endpoint_urbs(struct xhci_hcd *xhci, + (ep->ep_state & EP_GETTING_NO_STREAMS)) { + int stream_id; + +- for (stream_id = 0; stream_id < ep->stream_info->num_streams; ++ for (stream_id = 1; stream_id < ep->stream_info->num_streams; + stream_id++) { ++ ring = ep->stream_info->stream_rings[stream_id]; ++ if (!ring) ++ continue; ++ + xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb, + "Killing URBs for slot ID %u, ep index %u, stream %u", +- slot_id, ep_index, stream_id + 1); +- xhci_kill_ring_urbs(xhci, +- ep->stream_info->stream_rings[stream_id]); ++ slot_id, ep_index, stream_id); ++ xhci_kill_ring_urbs(xhci, ring); + } + } else { + ring = ep->ring; +diff --git a/drivers/usb/renesas_usbhs/common.c b/drivers/usb/renesas_usbhs/common.c +index 0f7e850fd4aa..61a898e4a010 100644 +--- a/drivers/usb/renesas_usbhs/common.c ++++ b/drivers/usb/renesas_usbhs/common.c +@@ -731,8 +731,10 @@ static int usbhsc_resume(struct device *dev) + struct usbhs_priv *priv = dev_get_drvdata(dev); + struct platform_device *pdev = usbhs_priv_to_pdev(priv); + +- if (!usbhsc_flags_has(priv, USBHSF_RUNTIME_PWCTRL)) ++ if (!usbhsc_flags_has(priv, USBHSF_RUNTIME_PWCTRL)) { + usbhsc_power_ctrl(priv, 1); ++ usbhs_mod_autonomy_mode(priv); ++ } + + usbhs_platform_call(priv, phy_reset, pdev); + +diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c +index 
69040e9069e0..31cd99f59a6a 100644 +--- a/drivers/usb/serial/cp210x.c ++++ b/drivers/usb/serial/cp210x.c +@@ -133,6 +133,7 @@ static const struct usb_device_id id_table[] = { + { USB_DEVICE(0x10C4, 0x8998) }, /* KCF Technologies PRN */ + { USB_DEVICE(0x10C4, 0x8A2A) }, /* HubZ dual ZigBee and Z-Wave dongle */ + { USB_DEVICE(0x10C4, 0x8A5E) }, /* CEL EM3588 ZigBee USB Stick Long Range */ ++ { USB_DEVICE(0x10C4, 0x8B34) }, /* Qivicon ZigBee USB Radio Stick */ + { USB_DEVICE(0x10C4, 0xEA60) }, /* Silicon Labs factory default */ + { USB_DEVICE(0x10C4, 0xEA61) }, /* Silicon Labs factory default */ + { USB_DEVICE(0x10C4, 0xEA70) }, /* Silicon Labs factory default */ +diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c +index 5d841485bbe3..f08b35819666 100644 +--- a/drivers/usb/serial/option.c ++++ b/drivers/usb/serial/option.c +@@ -2022,6 +2022,8 @@ static const struct usb_device_id option_ids[] = { + { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7d04, 0xff) }, /* D-Link DWM-158 */ + { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e19, 0xff), /* D-Link DWM-221 B1 */ + .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, ++ { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e35, 0xff), /* D-Link DWM-222 */ ++ .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, + { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */ + { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */ + { USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x7e11, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/A3 */ +diff --git a/drivers/usb/serial/pl2303.c b/drivers/usb/serial/pl2303.c +index 1db4b61bdf7b..a51b28379850 100644 +--- a/drivers/usb/serial/pl2303.c ++++ b/drivers/usb/serial/pl2303.c +@@ -49,6 +49,7 @@ static const struct usb_device_id id_table[] = { + { USB_DEVICE(IODATA_VENDOR_ID, IODATA_PRODUCT_ID) }, + { USB_DEVICE(IODATA_VENDOR_ID, IODATA_PRODUCT_ID_RSAQ5) }, + { USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_ID) }, ++ { 
USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_UC485) }, + { USB_DEVICE(ATEN_VENDOR_ID, ATEN_PRODUCT_ID2) }, + { USB_DEVICE(ATEN_VENDOR_ID2, ATEN_PRODUCT_ID) }, + { USB_DEVICE(ELCOM_VENDOR_ID, ELCOM_PRODUCT_ID) }, +diff --git a/drivers/usb/serial/pl2303.h b/drivers/usb/serial/pl2303.h +index 09d9be88209e..3b5a15d1dc0d 100644 +--- a/drivers/usb/serial/pl2303.h ++++ b/drivers/usb/serial/pl2303.h +@@ -27,6 +27,7 @@ + #define ATEN_VENDOR_ID 0x0557 + #define ATEN_VENDOR_ID2 0x0547 + #define ATEN_PRODUCT_ID 0x2008 ++#define ATEN_PRODUCT_UC485 0x2021 + #define ATEN_PRODUCT_ID2 0x2118 + + #define IODATA_VENDOR_ID 0x04bb +diff --git a/drivers/usb/storage/isd200.c b/drivers/usb/storage/isd200.c +index 076178645ba4..45b18df9fef1 100644 +--- a/drivers/usb/storage/isd200.c ++++ b/drivers/usb/storage/isd200.c +@@ -1522,8 +1522,11 @@ static void isd200_ata_command(struct scsi_cmnd *srb, struct us_data *us) + + /* Make sure driver was initialized */ + +- if (us->extra == NULL) ++ if (us->extra == NULL) { + usb_stor_dbg(us, "ERROR Driver not initialized\n"); ++ srb->result = DID_ERROR << 16; ++ return; ++ } + + scsi_set_resid(srb, 0); + /* scsi_bufflen might change in protocol translation to ata */ +diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h +index 53341a77d89f..a37ed1e59e99 100644 +--- a/drivers/usb/storage/unusual_uas.h ++++ b/drivers/usb/storage/unusual_uas.h +@@ -123,9 +123,9 @@ UNUSUAL_DEV(0x0bc2, 0xab2a, 0x0000, 0x9999, + /* Reported-by: Benjamin Tissoires */ + UNUSUAL_DEV(0x13fd, 0x3940, 0x0000, 0x9999, + "Initio Corporation", +- "", ++ "INIC-3069", + USB_SC_DEVICE, USB_PR_DEVICE, NULL, +- US_FL_NO_ATA_1X), ++ US_FL_NO_ATA_1X | US_FL_IGNORE_RESIDUE), + + /* Reported-by: Tom Arild Naess */ + UNUSUAL_DEV(0x152d, 0x0539, 0x0000, 0x9999, +diff --git a/drivers/vfio/pci/vfio_pci.c b/drivers/vfio/pci/vfio_pci.c +index ae90cf8867b5..f85e3a17cf5a 100644 +--- a/drivers/vfio/pci/vfio_pci.c ++++ b/drivers/vfio/pci/vfio_pci.c +@@ -902,6 +902,10 @@ static 
int vfio_pci_mmap(void *device_data, struct vm_area_struct *vma) + return ret; + + vdev->barmap[index] = pci_iomap(pdev, index, 0); ++ if (!vdev->barmap[index]) { ++ pci_release_selected_regions(pdev, 1 << index); ++ return -ENOMEM; ++ } + } + + vma->vm_private_data = vdev; +diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c +index 210db24d2204..4d39f7959adf 100644 +--- a/drivers/vfio/pci/vfio_pci_rdwr.c ++++ b/drivers/vfio/pci/vfio_pci_rdwr.c +@@ -190,7 +190,10 @@ ssize_t vfio_pci_vga_rw(struct vfio_pci_device *vdev, char __user *buf, + if (!vdev->has_vga) + return -EINVAL; + +- switch (pos) { ++ if (pos > 0xbfffful) ++ return -EINVAL; ++ ++ switch ((u32)pos) { + case 0xa0000 ... 0xbffff: + count = min(count, (size_t)(0xc0000 - pos)); + iomem = ioremap_nocache(0xa0000, 0xbffff - 0xa0000 + 1); +diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c +index e1278fe04b1e..8b2354536bbb 100644 +--- a/drivers/vfio/vfio.c ++++ b/drivers/vfio/vfio.c +@@ -295,6 +295,34 @@ static void vfio_group_put(struct vfio_group *group) + kref_put_mutex(&group->kref, vfio_group_release, &vfio.group_lock); + } + ++struct vfio_group_put_work { ++ struct work_struct work; ++ struct vfio_group *group; ++}; ++ ++static void vfio_group_put_bg(struct work_struct *work) ++{ ++ struct vfio_group_put_work *do_work; ++ ++ do_work = container_of(work, struct vfio_group_put_work, work); ++ ++ vfio_group_put(do_work->group); ++ kfree(do_work); ++} ++ ++static void vfio_group_schedule_put(struct vfio_group *group) ++{ ++ struct vfio_group_put_work *do_work; ++ ++ do_work = kmalloc(sizeof(*do_work), GFP_KERNEL); ++ if (WARN_ON(!do_work)) ++ return; ++ ++ INIT_WORK(&do_work->work, vfio_group_put_bg); ++ do_work->group = group; ++ schedule_work(&do_work->work); ++} ++ + /* Assume group_lock or group reference is held */ + static void vfio_group_get(struct vfio_group *group) + { +@@ -601,7 +629,14 @@ static int vfio_iommu_group_notifier(struct notifier_block *nb, + break; + } 
+ +- vfio_group_put(group); ++ /* ++ * If we're the last reference to the group, the group will be ++ * released, which includes unregistering the iommu group notifier. ++ * We hold a read-lock on that notifier list, unregistering needs ++ * a write-lock... deadlock. Release our reference asynchronously ++ * to avoid that situation. ++ */ ++ vfio_group_schedule_put(group); + return NOTIFY_OK; + } + +@@ -1504,6 +1539,15 @@ void vfio_group_put_external_user(struct vfio_group *group) + } + EXPORT_SYMBOL_GPL(vfio_group_put_external_user); + ++bool vfio_external_group_match_file(struct vfio_group *test_group, ++ struct file *filep) ++{ ++ struct vfio_group *group = filep->private_data; ++ ++ return (filep->f_op == &vfio_group_fops) && (group == test_group); ++} ++EXPORT_SYMBOL_GPL(vfio_external_group_match_file); ++ + int vfio_external_user_iommu_id(struct vfio_group *group) + { + return iommu_group_id(group->iommu_group); +diff --git a/drivers/video/fbdev/cobalt_lcdfb.c b/drivers/video/fbdev/cobalt_lcdfb.c +index 07675d6f323e..d4530b54479c 100644 +--- a/drivers/video/fbdev/cobalt_lcdfb.c ++++ b/drivers/video/fbdev/cobalt_lcdfb.c +@@ -350,6 +350,11 @@ static int cobalt_lcdfb_probe(struct platform_device *dev) + info->screen_size = resource_size(res); + info->screen_base = devm_ioremap(&dev->dev, res->start, + info->screen_size); ++ if (!info->screen_base) { ++ framebuffer_release(info); ++ return -ENOMEM; ++ } ++ + info->fbops = &cobalt_lcd_fbops; + info->fix = cobalt_lcdfb_fix; + info->fix.smem_start = res->start; +diff --git a/fs/ext4/file.c b/fs/ext4/file.c +index ece4982ee593..f57cf1c42ca3 100644 +--- a/fs/ext4/file.c ++++ b/fs/ext4/file.c +@@ -391,6 +391,8 @@ static int ext4_find_unwritten_pgoff(struct inode *inode, + lastoff = page_offset(page); + bh = head = page_buffers(page); + do { ++ if (lastoff + bh->b_size <= startoff) ++ goto next; + if (buffer_uptodate(bh) || + buffer_unwritten(bh)) { + if (whence == SEEK_DATA) +@@ -405,6 +407,7 @@ static int 
ext4_find_unwritten_pgoff(struct inode *inode, + unlock_page(page); + goto out; + } ++next: + lastoff += bh->b_size; + bh = bh->b_this_page; + } while (bh != head); +diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c +index 0e783b9f7007..b63e308e2545 100644 +--- a/fs/ext4/resize.c ++++ b/fs/ext4/resize.c +@@ -1932,7 +1932,8 @@ retry: + n_desc_blocks = o_desc_blocks + + le16_to_cpu(es->s_reserved_gdt_blocks); + n_group = n_desc_blocks * EXT4_DESC_PER_BLOCK(sb); +- n_blocks_count = n_group * EXT4_BLOCKS_PER_GROUP(sb); ++ n_blocks_count = (ext4_fsblk_t)n_group * ++ EXT4_BLOCKS_PER_GROUP(sb); + n_group--; /* set to last group number */ + } + +diff --git a/fs/f2fs/acl.c b/fs/f2fs/acl.c +index c5e4a1856a0f..4147d83e6fdd 100644 +--- a/fs/f2fs/acl.c ++++ b/fs/f2fs/acl.c +@@ -213,7 +213,7 @@ static int __f2fs_set_acl(struct inode *inode, int type, + switch (type) { + case ACL_TYPE_ACCESS: + name_index = F2FS_XATTR_INDEX_POSIX_ACL_ACCESS; +- if (acl) { ++ if (acl && !ipage) { + error = posix_acl_update_mode(inode, &inode->i_mode, &acl); + if (error) + return error; +diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c +index 660183e9ab7c..f2ada269feb7 100644 +--- a/fs/f2fs/super.c ++++ b/fs/f2fs/super.c +@@ -979,6 +979,8 @@ static int sanity_check_ckpt(struct f2fs_sb_info *sbi) + unsigned int total, fsmeta; + struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi); + struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi); ++ unsigned int main_segs, blocks_per_seg; ++ int i; + + total = le32_to_cpu(raw_super->segment_count); + fsmeta = le32_to_cpu(raw_super->segment_count_ckpt); +@@ -990,6 +992,20 @@ static int sanity_check_ckpt(struct f2fs_sb_info *sbi) + if (unlikely(fsmeta >= total)) + return 1; + ++ main_segs = le32_to_cpu(raw_super->segment_count_main); ++ blocks_per_seg = sbi->blocks_per_seg; ++ ++ for (i = 0; i < NR_CURSEG_NODE_TYPE; i++) { ++ if (le32_to_cpu(ckpt->cur_node_segno[i]) >= main_segs || ++ le16_to_cpu(ckpt->cur_node_blkoff[i]) >= blocks_per_seg) ++ return 1; ++ } ++ for 
(i = 0; i < NR_CURSEG_DATA_TYPE; i++) { ++ if (le32_to_cpu(ckpt->cur_data_segno[i]) >= main_segs || ++ le16_to_cpu(ckpt->cur_data_blkoff[i]) >= blocks_per_seg) ++ return 1; ++ } ++ + if (unlikely(f2fs_cp_error(sbi))) { + f2fs_msg(sbi->sb, KERN_ERR, "A bug case: need to run fsck"); + return 1; +diff --git a/fs/fuse/file.c b/fs/fuse/file.c +index 1f03f0a36e35..cacf95ac49fe 100644 +--- a/fs/fuse/file.c ++++ b/fs/fuse/file.c +@@ -46,7 +46,7 @@ struct fuse_file *fuse_file_alloc(struct fuse_conn *fc) + { + struct fuse_file *ff; + +- ff = kmalloc(sizeof(struct fuse_file), GFP_KERNEL); ++ ff = kzalloc(sizeof(struct fuse_file), GFP_KERNEL); + if (unlikely(!ff)) + return NULL; + +diff --git a/fs/nfs/Kconfig b/fs/nfs/Kconfig +index f31fd0dd92c6..b1daeafbea92 100644 +--- a/fs/nfs/Kconfig ++++ b/fs/nfs/Kconfig +@@ -121,6 +121,7 @@ config PNFS_FILE_LAYOUT + config PNFS_BLOCK + tristate + depends on NFS_V4_1 && BLK_DEV_DM ++ depends on 64BIT || LBDAF + default NFS_V4 + + config PNFS_OBJLAYOUT +diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c +index b6d97dfa9cb6..4227adce3e52 100644 +--- a/fs/nfs/dir.c ++++ b/fs/nfs/dir.c +@@ -1154,11 +1154,13 @@ static int nfs_lookup_revalidate(struct dentry *dentry, unsigned int flags) + /* Force a full look up iff the parent directory has changed */ + if (!nfs_is_exclusive_create(dir, flags) && + nfs_check_verifier(dir, dentry, flags & LOOKUP_RCU)) { +- +- if (nfs_lookup_verify_inode(inode, flags)) { ++ error = nfs_lookup_verify_inode(inode, flags); ++ if (error) { + if (flags & LOOKUP_RCU) + return -ECHILD; +- goto out_zap_parent; ++ if (error == -ESTALE) ++ goto out_zap_parent; ++ goto out_error; + } + goto out_valid; + } +@@ -1182,8 +1184,10 @@ static int nfs_lookup_revalidate(struct dentry *dentry, unsigned int flags) + trace_nfs_lookup_revalidate_enter(dir, dentry, flags); + error = NFS_PROTO(dir)->lookup(dir, &dentry->d_name, fhandle, fattr, label); + trace_nfs_lookup_revalidate_exit(dir, dentry, flags, error); +- if (error) ++ if (error == 
-ESTALE || error == -ENOENT) + goto out_bad; ++ if (error) ++ goto out_error; + if (nfs_compare_fh(NFS_FH(inode), fhandle)) + goto out_bad; + if ((error = nfs_refresh_inode(inode, fattr)) != 0) +diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c +index b28fa4cbea52..a84dd247b13a 100644 +--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c ++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c +@@ -30,6 +30,7 @@ void nfs4_ff_layout_free_deviceid(struct nfs4_ff_layout_ds *mirror_ds) + { + nfs4_print_deviceid(&mirror_ds->id_node.deviceid); + nfs4_pnfs_ds_put(mirror_ds->ds); ++ kfree(mirror_ds->ds_versions); + kfree_rcu(mirror_ds, id_node.rcu); + } + +diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c +index 723b8922d76b..8ddff9a72b34 100644 +--- a/fs/nfs/inode.c ++++ b/fs/nfs/inode.c +@@ -1227,9 +1227,9 @@ static int nfs_check_inode_attributes(struct inode *inode, struct nfs_fattr *fat + return 0; + /* Has the inode gone and changed behind our back? */ + if ((fattr->valid & NFS_ATTR_FATTR_FILEID) && nfsi->fileid != fattr->fileid) +- return -EIO; ++ return -ESTALE; + if ((fattr->valid & NFS_ATTR_FATTR_TYPE) && (inode->i_mode & S_IFMT) != (fattr->mode & S_IFMT)) +- return -EIO; ++ return -ESTALE; + + if ((fattr->valid & NFS_ATTR_FATTR_CHANGE) != 0 && + inode->i_version != fattr->change_attr) +diff --git a/fs/seq_file.c b/fs/seq_file.c +index 4408057d1dc8..4dcf9d28f022 100644 +--- a/fs/seq_file.c ++++ b/fs/seq_file.c +@@ -62,9 +62,10 @@ int seq_open(struct file *file, const struct seq_operations *op) + memset(p, 0, sizeof(*p)); + mutex_init(&p->lock); + p->op = op; +-#ifdef CONFIG_USER_NS +- p->user_ns = file->f_cred->user_ns; +-#endif ++ ++ // No refcounting: the lifetime of 'p' is constrained ++ // to the lifetime of the file. ++ p->file = file; + + /* + * Wrappers around seq_open(e.g. 
swaps_open) need to be +diff --git a/fs/udf/inode.c b/fs/udf/inode.c +index 78a40ef0c463..9635cd478cc9 100644 +--- a/fs/udf/inode.c ++++ b/fs/udf/inode.c +@@ -1235,8 +1235,8 @@ int udf_setsize(struct inode *inode, loff_t newsize) + return err; + } + set_size: +- truncate_setsize(inode, newsize); + up_write(&iinfo->i_data_sem); ++ truncate_setsize(inode, newsize); + } else { + if (iinfo->i_alloc_type == ICBTAG_FLAG_AD_IN_ICB) { + down_write(&iinfo->i_data_sem); +@@ -1253,9 +1253,9 @@ set_size: + udf_get_block); + if (err) + return err; ++ truncate_setsize(inode, newsize); + down_write(&iinfo->i_data_sem); + udf_clear_extent_cache(inode); +- truncate_setsize(inode, newsize); + udf_truncate_extents(inode); + up_write(&iinfo->i_data_sem); + } +diff --git a/include/linux/phy.h b/include/linux/phy.h +index 685809835b5c..d164045e296c 100644 +--- a/include/linux/phy.h ++++ b/include/linux/phy.h +@@ -751,6 +751,10 @@ int genphy_read_status(struct phy_device *phydev); + int genphy_suspend(struct phy_device *phydev); + int genphy_resume(struct phy_device *phydev); + int genphy_soft_reset(struct phy_device *phydev); ++static inline int genphy_no_soft_reset(struct phy_device *phydev) ++{ ++ return 0; ++} + void phy_driver_unregister(struct phy_driver *drv); + void phy_drivers_unregister(struct phy_driver *drv, int n); + int phy_driver_register(struct phy_driver *new_driver); +diff --git a/include/linux/sched.h b/include/linux/sched.h +index af99802ce7fe..b6c033430b15 100644 +--- a/include/linux/sched.h ++++ b/include/linux/sched.h +@@ -774,6 +774,16 @@ struct signal_struct { + + #define SIGNAL_UNKILLABLE 0x00000040 /* for init: ignore fatal signals */ + ++#define SIGNAL_STOP_MASK (SIGNAL_CLD_MASK | SIGNAL_STOP_STOPPED | \ ++ SIGNAL_STOP_CONTINUED) ++ ++static inline void signal_set_stop_flags(struct signal_struct *sig, ++ unsigned int flags) ++{ ++ WARN_ON(sig->flags & (SIGNAL_GROUP_EXIT|SIGNAL_GROUP_COREDUMP)); ++ sig->flags = (sig->flags & ~SIGNAL_STOP_MASK) | flags; ++} ++ + 
/* If true, all threads except ->group_exit_task have pending SIGKILL */ + static inline int signal_group_exit(const struct signal_struct *sig) + { +diff --git a/include/linux/seq_file.h b/include/linux/seq_file.h +index 7848473a5bc8..f36c3a27f7f6 100644 +--- a/include/linux/seq_file.h ++++ b/include/linux/seq_file.h +@@ -7,13 +7,10 @@ + #include + #include + #include ++#include ++#include + + struct seq_operations; +-struct file; +-struct path; +-struct inode; +-struct dentry; +-struct user_namespace; + + struct seq_file { + char *buf; +@@ -27,9 +24,7 @@ struct seq_file { + struct mutex lock; + const struct seq_operations *op; + int poll_event; +-#ifdef CONFIG_USER_NS +- struct user_namespace *user_ns; +-#endif ++ const struct file *file; + void *private; + }; + +@@ -141,7 +136,7 @@ int seq_put_decimal_ll(struct seq_file *m, char delimiter, + static inline struct user_namespace *seq_user_ns(struct seq_file *seq) + { + #ifdef CONFIG_USER_NS +- return seq->user_ns; ++ return seq->file->f_cred->user_ns; + #else + extern struct user_namespace init_user_ns; + return &init_user_ns; +diff --git a/include/linux/slab.h b/include/linux/slab.h +index ffd24c830151..ef441d93cea0 100644 +--- a/include/linux/slab.h ++++ b/include/linux/slab.h +@@ -185,7 +185,7 @@ size_t ksize(const void *); + * (PAGE_SIZE*2). Larger requests are passed to the page allocator. + */ + #define KMALLOC_SHIFT_HIGH (PAGE_SHIFT + 1) +-#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT) ++#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1) + #ifndef KMALLOC_SHIFT_LOW + #define KMALLOC_SHIFT_LOW 3 + #endif +@@ -198,7 +198,7 @@ size_t ksize(const void *); + * be allocated from the same page. 
+ */ + #define KMALLOC_SHIFT_HIGH PAGE_SHIFT +-#define KMALLOC_SHIFT_MAX 30 ++#define KMALLOC_SHIFT_MAX (MAX_ORDER + PAGE_SHIFT - 1) + #ifndef KMALLOC_SHIFT_LOW + #define KMALLOC_SHIFT_LOW 3 + #endif +diff --git a/include/linux/vfio.h b/include/linux/vfio.h +index ddb440975382..34851bf2e2c8 100644 +--- a/include/linux/vfio.h ++++ b/include/linux/vfio.h +@@ -85,6 +85,8 @@ extern void vfio_unregister_iommu_driver( + */ + extern struct vfio_group *vfio_group_get_external_user(struct file *filep); + extern void vfio_group_put_external_user(struct vfio_group *group); ++extern bool vfio_external_group_match_file(struct vfio_group *group, ++ struct file *filep); + extern int vfio_external_user_iommu_id(struct vfio_group *group); + extern long vfio_external_check_extension(struct vfio_group *group, + unsigned long arg); +diff --git a/include/net/iw_handler.h b/include/net/iw_handler.h +index e0f4109e64c6..c2aa73e5e6bb 100644 +--- a/include/net/iw_handler.h ++++ b/include/net/iw_handler.h +@@ -556,7 +556,8 @@ iwe_stream_add_point(struct iw_request_info *info, char *stream, char *ends, + memcpy(stream + lcp_len, + ((char *) &iwe->u) + IW_EV_POINT_OFF, + IW_EV_POINT_PK_LEN - IW_EV_LCP_PK_LEN); +- memcpy(stream + point_len, extra, iwe->u.data.length); ++ if (iwe->u.data.length && extra) ++ memcpy(stream + point_len, extra, iwe->u.data.length); + stream += event_len; + } + return stream; +diff --git a/include/net/sctp/sctp.h b/include/net/sctp/sctp.h +index ce13cf20f625..d33b17ba51d2 100644 +--- a/include/net/sctp/sctp.h ++++ b/include/net/sctp/sctp.h +@@ -444,6 +444,8 @@ _sctp_walk_params((pos), (chunk), ntohs((chunk)->chunk_hdr.length), member) + + #define _sctp_walk_params(pos, chunk, end, member)\ + for (pos.v = chunk->member;\ ++ (pos.v + offsetof(struct sctp_paramhdr, length) + sizeof(pos.p->length) <=\ ++ (void *)chunk + end) &&\ + pos.v <= (void *)chunk + end - ntohs(pos.p->length) &&\ + ntohs(pos.p->length) >= sizeof(sctp_paramhdr_t);\ + pos.v += 
WORD_ROUND(ntohs(pos.p->length))) +@@ -454,6 +456,8 @@ _sctp_walk_errors((err), (chunk_hdr), ntohs((chunk_hdr)->length)) + #define _sctp_walk_errors(err, chunk_hdr, end)\ + for (err = (sctp_errhdr_t *)((void *)chunk_hdr + \ + sizeof(sctp_chunkhdr_t));\ ++ ((void *)err + offsetof(sctp_errhdr_t, length) + sizeof(err->length) <=\ ++ (void *)chunk_hdr + end) &&\ + (void *)err <= (void *)chunk_hdr + end - ntohs(err->length) &&\ + ntohs(err->length) >= sizeof(sctp_errhdr_t); \ + err = (sctp_errhdr_t *)((void *)err + WORD_ROUND(ntohs(err->length)))) +diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h +index e37059c901e2..e12eec076cd5 100644 +--- a/include/target/iscsi/iscsi_target_core.h ++++ b/include/target/iscsi/iscsi_target_core.h +@@ -785,6 +785,7 @@ struct iscsi_np { + int np_sock_type; + enum np_thread_state_table np_thread_state; + bool enabled; ++ atomic_t np_reset_count; + enum iscsi_timer_flags_table np_login_timer_flags; + u32 np_exports; + enum np_flags_table np_flags; +diff --git a/kernel/events/core.c b/kernel/events/core.c +index 10e9eec3e228..e871080bc44e 100644 +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -6111,21 +6111,6 @@ static void perf_log_itrace_start(struct perf_event *event) + perf_output_end(&handle); + } + +-static bool sample_is_allowed(struct perf_event *event, struct pt_regs *regs) +-{ +- /* +- * Due to interrupt latency (AKA "skid"), we may enter the +- * kernel before taking an overflow, even if the PMU is only +- * counting user events. +- * To avoid leaking information to userspace, we must always +- * reject kernel samples when exclude_kernel is set. +- */ +- if (event->attr.exclude_kernel && !user_mode(regs)) +- return false; +- +- return true; +-} +- + /* + * Generic event overflow handling, sampling. 
+ */
+@@ -6172,12 +6157,6 @@ static int __perf_event_overflow(struct perf_event *event,
+ perf_adjust_period(event, delta, hwc->last_period, true);
+ }
+
+- /*
+- * For security, drop the skid kernel samples if necessary.
+- */
+- if (!sample_is_allowed(event, regs))
+- return ret;
+-
+ /*
+ * XXX event_limit might not quite work as expected on inherited
+ * events
+diff --git a/kernel/resource.c b/kernel/resource.c
+index a7c27cb71fc5..cbf725c24c3b 100644
+--- a/kernel/resource.c
++++ b/kernel/resource.c
+@@ -105,16 +105,25 @@ static int r_show(struct seq_file *m, void *v)
+ {
+ struct resource *root = m->private;
+ struct resource *r = v, *p;
++ unsigned long long start, end;
+ int width = root->end < 0x10000 ? 4 : 8;
+ int depth;
+
+ for (depth = 0, p = r; depth < MAX_IORES_LEVEL; depth++, p = p->parent)
+ if (p->parent == root)
+ break;
++
++ if (file_ns_capable(m->file, &init_user_ns, CAP_SYS_ADMIN)) {
++ start = r->start;
++ end = r->end;
++ } else {
++ start = end = 0;
++ }
++
+ seq_printf(m, "%*s%0*llx-%0*llx : %s\n",
+ depth * 2, "",
+- width, (unsigned long long) r->start,
+- width, (unsigned long long) r->end,
++ width, start,
++ width, end,
+ r->name ? r->name : "");
+ return 0;
+ }
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 0206be728dac..525a4cda5598 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -346,7 +346,7 @@ static bool task_participate_group_stop(struct task_struct *task)
+ * fresh group stop. Read comment in do_signal_stop() for details.
+ */
+ if (!sig->group_stop_count && !(sig->flags & SIGNAL_STOP_STOPPED)) {
+- sig->flags = SIGNAL_STOP_STOPPED;
++ signal_set_stop_flags(sig, SIGNAL_STOP_STOPPED);
+ return true;
+ }
+ return false;
+@@ -888,7 +888,7 @@ static bool prepare_signal(int sig, struct task_struct *p, bool force)
+ * will take ->siglock, notice SIGNAL_CLD_MASK, and
+ * notify its parent. See get_signal_to_deliver().
+ */
+- signal->flags = why | SIGNAL_STOP_CONTINUED;
++ signal_set_stop_flags(signal, why | SIGNAL_STOP_CONTINUED);
+ signal->group_stop_count = 0;
+ signal->group_exit_code = 0;
+ }
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index d0efe9295a0e..9cdf3bfc9178 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3854,6 +3854,16 @@ struct workqueue_struct *__alloc_workqueue_key(const char *fmt,
+ struct workqueue_struct *wq;
+ struct pool_workqueue *pwq;
+
++ /*
++ * Unbound && max_active == 1 used to imply ordered, which is no
++ * longer the case on NUMA machines due to per-node pools. While
++ * alloc_ordered_workqueue() is the right way to create an ordered
++ * workqueue, keep the previous behavior to avoid subtle breakages
++ * on NUMA.
++ */
++ if ((flags & WQ_UNBOUND) && max_active == 1)
++ flags |= __WQ_ORDERED;
++
+ /* see the comment above the definition of WQ_POWER_EFFICIENT */
+ if ((flags & WQ_POWER_EFFICIENT) && wq_power_efficient)
+ flags |= WQ_UNBOUND;
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index ba2b0c87e65b..c986a6198b0e 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -145,7 +145,7 @@ config DEBUG_INFO_REDUCED
+
+ config DEBUG_INFO_SPLIT
+ bool "Produce split debuginfo in .dwo files"
+- depends on DEBUG_INFO
++ depends on DEBUG_INFO && !FRV
+ help
+ Generate debug info into separate .dwo files. This significantly
+ reduces the build directory size for builds with DEBUG_INFO,
+diff --git a/mm/mempool.c b/mm/mempool.c
+index 2cc08de8b1db..70cccdcff860 100644
+--- a/mm/mempool.c
++++ b/mm/mempool.c
+@@ -135,8 +135,8 @@ static void *remove_element(mempool_t *pool)
+ void *element = pool->elements[--pool->curr_nr];
+
+ BUG_ON(pool->curr_nr < 0);
+- check_element(pool, element);
+ kasan_unpoison_element(pool, element);
++ check_element(pool, element);
+ return element;
+ }
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 4f1ff71074c7..35bda77211ea 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1106,14 +1106,14 @@ int move_freepages(struct zone *zone,
+ #endif
+
+ for (page = start_page; page <= end_page;) {
+- /* Make sure we are not inadvertently changing nodes */
+- VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
+-
+ if (!pfn_valid_within(page_to_pfn(page))) {
+ page++;
+ continue;
+ }
+
++ /* Make sure we are not inadvertently changing nodes */
++ VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
++
+ if (!PageBuddy(page)) {
+ page++;
+ continue;
+@@ -5555,8 +5555,8 @@ unsigned long free_reserved_area(void *start, void *end, int poison, char *s)
+ }
+
+ if (pages && s)
+- pr_info("Freeing %s memory: %ldK (%p - %p)\n",
+- s, pages << (PAGE_SHIFT - 10), start, end);
++ pr_info("Freeing %s memory: %ldK\n",
++ s, pages << (PAGE_SHIFT - 10));
+
+ return pages;
+ }
+@@ -6513,7 +6513,7 @@ int alloc_contig_range(unsigned long start, unsigned long end,
+
+ /* Make sure the range is really isolated. */
+ if (test_pages_isolated(outer_start, end, false)) {
+- pr_info("%s: [%lx, %lx) PFNs busy\n",
++ pr_info_ratelimited("%s: [%lx, %lx) PFNs busy\n",
+ __func__, outer_start, end);
+ ret = -EBUSY;
+ goto done;
+diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
+index d45e590e8f10..dfecba30d83a 100644
+--- a/net/8021q/vlan.c
++++ b/net/8021q/vlan.c
+@@ -292,6 +292,10 @@ static void vlan_sync_address(struct net_device *dev,
+ if (ether_addr_equal(vlan->real_dev_addr, dev->dev_addr))
+ return;
+
++ /* vlan continues to inherit address of lower device */
++ if (vlan_dev_inherit_address(vlandev, dev))
++ goto out;
++
+ /* vlan address was different from the old address and is equal to
+ * the new address */
+ if (!ether_addr_equal(vlandev->dev_addr, vlan->real_dev_addr) &&
+@@ -304,6 +308,7 @@ static void vlan_sync_address(struct net_device *dev,
+ !ether_addr_equal(vlandev->dev_addr, dev->dev_addr))
+ dev_uc_add(dev, vlandev->dev_addr);
+
++out:
+ ether_addr_copy(vlan->real_dev_addr, dev->dev_addr);
+ }
+
+diff --git a/net/8021q/vlan.h b/net/8021q/vlan.h
+index 9d010a09ab98..cc1557978066 100644
+--- a/net/8021q/vlan.h
++++ b/net/8021q/vlan.h
+@@ -109,6 +109,8 @@ int vlan_check_real_dev(struct net_device *real_dev,
+ void vlan_setup(struct net_device *dev);
+ int register_vlan_dev(struct net_device *dev);
+ void unregister_vlan_dev(struct net_device *dev, struct list_head *head);
++bool vlan_dev_inherit_address(struct net_device *dev,
++ struct net_device *real_dev);
+
+ static inline u32 vlan_get_ingress_priority(struct net_device *dev,
+ u16 vlan_tci)
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index 01d7ba840df8..93010f34c200 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -244,6 +244,17 @@ void vlan_dev_get_realdev_name(const struct net_device *dev, char *result)
+ strncpy(result, vlan_dev_priv(dev)->real_dev->name, 23);
+ }
+
++bool vlan_dev_inherit_address(struct net_device *dev,
++ struct net_device *real_dev)
++{
++ if (dev->addr_assign_type != NET_ADDR_STOLEN)
++ return false;
++
++ ether_addr_copy(dev->dev_addr, real_dev->dev_addr);
++ call_netdevice_notifiers(NETDEV_CHANGEADDR, dev);
++ return true;
++}
++
+ static int vlan_dev_open(struct net_device *dev)
+ {
+ struct vlan_dev_priv *vlan = vlan_dev_priv(dev);
+@@ -254,7 +265,8 @@ static int vlan_dev_open(struct net_device *dev)
+ !(vlan->flags & VLAN_FLAG_LOOSE_BINDING))
+ return -ENETDOWN;
+
+- if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr)) {
++ if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr) &&
++ !vlan_dev_inherit_address(dev, real_dev)) {
+ err = dev_uc_add(real_dev, dev->dev_addr);
+ if (err < 0)
+ goto out;
+@@ -558,8 +570,10 @@ static int vlan_dev_init(struct net_device *dev)
+ /* ipv6 shared card related stuff */
+ dev->dev_id = real_dev->dev_id;
+
+- if (is_zero_ether_addr(dev->dev_addr))
+- eth_hw_addr_inherit(dev, real_dev);
++ if (is_zero_ether_addr(dev->dev_addr)) {
++ ether_addr_copy(dev->dev_addr, real_dev->dev_addr);
++ dev->addr_assign_type = NET_ADDR_STOLEN;
++ }
+ if (is_zero_ether_addr(dev->broadcast))
+ memcpy(dev->broadcast, real_dev->broadcast, dev->addr_len);
+
+diff --git a/net/bluetooth/smp.c b/net/bluetooth/smp.c
+index 69ad5091e2ce..e4b56fcb5d4e 100644
+--- a/net/bluetooth/smp.c
++++ b/net/bluetooth/smp.c
+@@ -23,6 +23,7 @@
+ #include
+ #include
+ #include
++#include
+ #include
+
+ #include
+@@ -506,7 +507,7 @@ bool smp_irk_matches(struct hci_dev *hdev, const u8 irk[16],
+ if (err)
+ return false;
+
+- return !memcmp(bdaddr->b, hash, 3);
++ return !crypto_memneq(bdaddr->b, hash, 3);
+ }
+
+ int smp_generate_rpa(struct hci_dev *hdev, const u8 irk[16], bdaddr_t *rpa)
+@@ -559,7 +560,7 @@ int smp_generate_oob(struct hci_dev *hdev, u8 hash[16], u8 rand[16])
+ /* This is unlikely, but we need to check that
+ * we didn't accidentially generate a debug key.
+ */
+- if (memcmp(smp->local_sk, debug_sk, 32))
++ if (crypto_memneq(smp->local_sk, debug_sk, 32))
+ break;
+ }
+ smp->debug_key = false;
+@@ -973,7 +974,7 @@ static u8 smp_random(struct smp_chan *smp)
+ if (ret)
+ return SMP_UNSPECIFIED;
+
+- if (memcmp(smp->pcnf, confirm, sizeof(smp->pcnf)) != 0) {
++ if (crypto_memneq(smp->pcnf, confirm, sizeof(smp->pcnf))) {
+ BT_ERR("Pairing failed (confirmation values mismatch)");
+ return SMP_CONFIRM_FAILED;
+ }
+@@ -1490,7 +1491,7 @@ static u8 sc_passkey_round(struct smp_chan *smp, u8 smp_op)
+ smp->rrnd, r, cfm))
+ return SMP_UNSPECIFIED;
+
+- if (memcmp(smp->pcnf, cfm, 16))
++ if (crypto_memneq(smp->pcnf, cfm, 16))
+ return SMP_CONFIRM_FAILED;
+
+ smp->passkey_round++;
+@@ -1874,7 +1875,7 @@ static u8 sc_send_public_key(struct smp_chan *smp)
+ /* This is unlikely, but we need to check that
+ * we didn't accidentially generate a debug key.
+ */
+- if (memcmp(smp->local_sk, debug_sk, 32))
++ if (crypto_memneq(smp->local_sk, debug_sk, 32))
+ break;
+ }
+ }
+@@ -2139,7 +2140,7 @@ static u8 smp_cmd_pairing_random(struct l2cap_conn *conn, struct sk_buff *skb)
+ if (err)
+ return SMP_UNSPECIFIED;
+
+- if (memcmp(smp->pcnf, cfm, 16))
++ if (crypto_memneq(smp->pcnf, cfm, 16))
+ return SMP_CONFIRM_FAILED;
+ } else {
+ smp_send_cmd(conn, SMP_CMD_PAIRING_RANDOM, sizeof(smp->prnd),
+@@ -2594,7 +2595,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ if (err)
+ return SMP_UNSPECIFIED;
+
+- if (memcmp(cfm.confirm_val, smp->pcnf, 16))
++ if (crypto_memneq(cfm.confirm_val, smp->pcnf, 16))
+ return SMP_CONFIRM_FAILED;
+ }
+
+@@ -2627,7 +2628,7 @@ static int smp_cmd_public_key(struct l2cap_conn *conn, struct sk_buff *skb)
+ else
+ hcon->pending_sec_level = BT_SECURITY_FIPS;
+
+- if (!memcmp(debug_pk, smp->remote_pk, 64))
++ if (!crypto_memneq(debug_pk, smp->remote_pk, 64))
+ set_bit(SMP_FLAG_DEBUG_KEY, &smp->flags);
+
+ if (smp->method == DSP_PASSKEY) {
+@@ -2726,7 +2727,7 @@ static int smp_cmd_dhkey_check(struct l2cap_conn *conn, struct sk_buff *skb)
+ if (err)
+ return SMP_UNSPECIFIED;
+
+- if (memcmp(check->e, e, 16))
++ if (crypto_memneq(check->e, e, 16))
+ return SMP_DHKEY_CHECK_FAILED;
+
+ if (!hcon->out) {
+@@ -3336,7 +3337,7 @@ static int __init test_ah(struct crypto_blkcipher *tfm_aes)
+ if (err)
+ return err;
+
+- if (memcmp(res, exp, 3))
++ if (crypto_memneq(res, exp, 3))
+ return -EINVAL;
+
+ return 0;
+@@ -3366,7 +3367,7 @@ static int __init test_c1(struct crypto_blkcipher *tfm_aes)
+ if (err)
+ return err;
+
+- if (memcmp(res, exp, 16))
++ if (crypto_memneq(res, exp, 16))
+ return -EINVAL;
+
+ return 0;
+@@ -3391,7 +3392,7 @@ static int __init test_s1(struct crypto_blkcipher *tfm_aes)
+ if (err)
+ return err;
+
+- if (memcmp(res, exp, 16))
++ if (crypto_memneq(res, exp, 16))
+ return -EINVAL;
+
+ return 0;
+@@ -3423,7 +3424,7 @@ static int __init test_f4(struct crypto_hash *tfm_cmac)
+ if (err)
+ return err;
+
+- if (memcmp(res, exp, 16))
++ if (crypto_memneq(res, exp, 16))
+ return -EINVAL;
+
+ return 0;
+@@ -3457,10 +3458,10 @@ static int __init test_f5(struct crypto_hash *tfm_cmac)
+ if (err)
+ return err;
+
+- if (memcmp(mackey, exp_mackey, 16))
++ if (crypto_memneq(mackey, exp_mackey, 16))
+ return -EINVAL;
+
+- if (memcmp(ltk, exp_ltk, 16))
++ if (crypto_memneq(ltk, exp_ltk, 16))
+ return -EINVAL;
+
+ return 0;
+@@ -3493,7 +3494,7 @@ static int __init test_f6(struct crypto_hash *tfm_cmac)
+ if (err)
+ return err;
+
+- if (memcmp(res, exp, 16))
++ if (crypto_memneq(res, exp, 16))
+ return -EINVAL;
+
+ return 0;
+@@ -3547,7 +3548,7 @@ static int __init test_h6(struct crypto_hash *tfm_cmac)
+ if (err)
+ return err;
+
+- if (memcmp(res, exp, 16))
++ if (crypto_memneq(res, exp, 16))
+ return -EINVAL;
+
+ return 0;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index bd47736b689e..bb711e5e345b 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2465,9 +2466,10 @@ EXPORT_SYMBOL(skb_mac_gso_segment);
+ static inline bool skb_needs_check(struct sk_buff *skb, bool tx_path)
+ {
+ if (tx_path)
+- return skb->ip_summed != CHECKSUM_PARTIAL;
+- else
+- return skb->ip_summed == CHECKSUM_NONE;
++ return skb->ip_summed != CHECKSUM_PARTIAL &&
++ skb->ip_summed != CHECKSUM_UNNECESSARY;
++
++ return skb->ip_summed == CHECKSUM_NONE;
+ }
+
+ /**
+@@ -2486,11 +2487,12 @@ static inline bool skb_needs_check(struct sk_buff *skb, bool tx_path)
+ struct sk_buff *__skb_gso_segment(struct sk_buff *skb,
+ netdev_features_t features, bool tx_path)
+ {
++ struct sk_buff *segs;
++
+ if (unlikely(skb_needs_check(skb, tx_path))) {
+ int err;
+
+- skb_warn_bad_offload(skb);
+-
++ /* We're going to init ->check field in TCP or UDP header */
+ err = skb_cow_head(skb, 0);
+ if (err < 0)
+ return ERR_PTR(err);
+@@ -2505,7 +2507,12 @@ struct sk_buff *__skb_gso_segment(struct sk_buff *skb,
+ skb_reset_mac_header(skb);
+ skb_reset_mac_len(skb);
+
+- return skb_mac_gso_segment(skb, features);
++ segs = skb_mac_gso_segment(skb, features);
++
++ if (unlikely(skb_needs_check(skb, tx_path)))
++ skb_warn_bad_offload(skb);
++
++ return segs;
+ }
+ EXPORT_SYMBOL(__skb_gso_segment);
+
+diff --git a/net/core/dev_ioctl.c b/net/core/dev_ioctl.c
+index b94b1d293506..151e047ce072 100644
+--- a/net/core/dev_ioctl.c
++++ b/net/core/dev_ioctl.c
+@@ -28,6 +28,7 @@ static int dev_ifname(struct net *net, struct ifreq __user *arg)
+
+ if (copy_from_user(&ifr, arg, sizeof(struct ifreq)))
+ return -EFAULT;
++ ifr.ifr_name[IFNAMSIZ-1] = 0;
+
+ error = netdev_get_name(net, ifr.ifr_name, ifr.ifr_ifindex);
+ if (error)
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 7a0d98628137..3936683486e9 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1628,7 +1628,8 @@ static int do_setlink(const struct sk_buff *skb,
+ struct sockaddr *sa;
+ int len;
+
+- len = sizeof(sa_family_t) + dev->addr_len;
++ len = sizeof(sa_family_t) + max_t(size_t, dev->addr_len,
++ sizeof(*sa));
+ sa = kmalloc(len, GFP_KERNEL);
+ if (!sa) {
+ err = -ENOMEM;
+diff --git a/net/dccp/feat.c b/net/dccp/feat.c
+index 1704948e6a12..f227f002c73d 100644
+--- a/net/dccp/feat.c
++++ b/net/dccp/feat.c
+@@ -1471,9 +1471,12 @@ int dccp_feat_init(struct sock *sk)
+ * singleton values (which always leads to failure).
+ * These settings can still (later) be overridden via sockopts.
+ */
+- if (ccid_get_builtin_ccids(&tx.val, &tx.len) ||
+- ccid_get_builtin_ccids(&rx.val, &rx.len))
++ if (ccid_get_builtin_ccids(&tx.val, &tx.len))
+ return -ENOBUFS;
++ if (ccid_get_builtin_ccids(&rx.val, &rx.len)) {
++ kfree(tx.val);
++ return -ENOBUFS;
++ }
+
+ if (!dccp_feat_prefer(sysctl_dccp_tx_ccid, tx.val, tx.len) ||
+ !dccp_feat_prefer(sysctl_dccp_rx_ccid, rx.val, rx.len))
+diff --git a/net/dccp/ipv4.c b/net/dccp/ipv4.c
+index ccf4c5629b3c..fd7ac7895c38 100644
+--- a/net/dccp/ipv4.c
++++ b/net/dccp/ipv4.c
+@@ -661,6 +661,7 @@ int dccp_v4_conn_request(struct sock *sk, struct sk_buff *skb)
+ goto drop_and_free;
+
+ inet_csk_reqsk_queue_hash_add(sk, req, DCCP_TIMEOUT_INIT);
++ reqsk_put(req);
+ return 0;
+
+ drop_and_free:
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 513b6aabc5b7..765909ba781e 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -1247,13 +1247,14 @@ static struct pernet_operations fib_net_ops = {
+
+ void __init ip_fib_init(void)
+ {
+- rtnl_register(PF_INET, RTM_NEWROUTE, inet_rtm_newroute, NULL, NULL);
+- rtnl_register(PF_INET, RTM_DELROUTE, inet_rtm_delroute, NULL, NULL);
+- rtnl_register(PF_INET, RTM_GETROUTE, NULL, inet_dump_fib, NULL);
++ fib_trie_init();
+
+ register_pernet_subsys(&fib_net_ops);
++
+ register_netdevice_notifier(&fib_netdev_notifier);
+ register_inetaddr_notifier(&fib_inetaddr_notifier);
+
+- fib_trie_init();
++ rtnl_register(PF_INET, RTM_NEWROUTE, inet_rtm_newroute, NULL, NULL);
++ rtnl_register(PF_INET, RTM_DELROUTE, inet_rtm_delroute, NULL, NULL);
++ rtnl_register(PF_INET, RTM_GETROUTE, NULL, inet_dump_fib, NULL);
+ }
+diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
+index 51573f8a39bc..adbb28b39413 100644
+--- a/net/ipv4/ip_output.c
++++ b/net/ipv4/ip_output.c
+@@ -891,7 +891,7 @@ static int __ip_append_data(struct sock *sk,
+ csummode = CHECKSUM_PARTIAL;
+
+ cork->length += length;
+- if (((length > mtu) || (skb && skb_is_gso(skb))) &&
++ if ((((length + fragheaderlen) > mtu) || (skb && skb_is_gso(skb))) &&
+ (sk->sk_protocol == IPPROTO_UDP) &&
+ (rt->dst.dev->features & NETIF_F_UFO) && !rt->dst.header_len &&
+ (sk->sk_type == SOCK_DGRAM) && !sk->sk_no_check_tx) {
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 4a3a17ff046d..767ee7471c9b 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -2536,8 +2536,8 @@ static inline void tcp_end_cwnd_reduction(struct sock *sk)
+ struct tcp_sock *tp = tcp_sk(sk);
+
+ /* Reset cwnd to ssthresh in CWR or Recovery (unless it's undone) */
+- if (inet_csk(sk)->icsk_ca_state == TCP_CA_CWR ||
+- (tp->undo_marker && tp->snd_ssthresh < TCP_INFINITE_SSTHRESH)) {
++ if (tp->snd_ssthresh < TCP_INFINITE_SSTHRESH &&
++ (inet_csk(sk)->icsk_ca_state == TCP_CA_CWR || tp->undo_marker)) {
+ tp->snd_cwnd = tp->snd_ssthresh;
+ tp->snd_cwnd_stamp = tcp_time_stamp;
+ }
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index dfcab88c3e74..8f27dce93f71 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -231,7 +231,7 @@ static struct sk_buff *udp4_ufo_fragment(struct sk_buff *skb,
+ if (uh->check == 0)
+ uh->check = CSUM_MANGLED_0;
+
+- skb->ip_summed = CHECKSUM_NONE;
++ skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+ /* Fragment the skb. IP headers of the fragments are updated in
+ * inet_gso_segment()
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index eefb8759cfa4..29a1ffa72cd0 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -1340,7 +1340,7 @@ emsgsize:
+ */
+
+ cork->length += length;
+- if (((length > mtu) ||
++ if ((((length + fragheaderlen) > mtu) ||
+ (skb && skb_is_gso(skb))) &&
+ (sk->sk_protocol == IPPROTO_UDP) &&
+ (rt->dst.dev->features & NETIF_F_UFO) &&
+diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c
+index 3f6ee4138cab..292ef2e584db 100644
+--- a/net/ipv6/output_core.c
++++ b/net/ipv6/output_core.c
+@@ -76,7 +76,7 @@ EXPORT_SYMBOL(ipv6_select_ident);
+
+ int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr)
+ {
+- u16 offset = sizeof(struct ipv6hdr);
++ unsigned int offset = sizeof(struct ipv6hdr);
+ unsigned int packet_len = skb_tail_pointer(skb) -
+ skb_network_header(skb);
+ int found_rhdr = 0;
+@@ -84,6 +84,7 @@ int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr)
+
+ while (offset <= packet_len) {
+ struct ipv6_opt_hdr *exthdr;
++ unsigned int len;
+
+ switch (**nexthdr) {
+
+@@ -109,7 +110,10 @@ int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr)
+
+ exthdr = (struct ipv6_opt_hdr *)(skb_network_header(skb) +
+ offset);
+- offset += ipv6_optlen(exthdr);
++ len = ipv6_optlen(exthdr);
++ if (len + offset >= IPV6_MAXPLEN)
++ return -EINVAL;
++ offset += len;
+ *nexthdr = &exthdr->nexthdr;
+ }
+
+diff --git a/net/ipv6/udp_offload.c b/net/ipv6/udp_offload.c
+index 01582966ffa0..2e3c12eeca07 100644
+--- a/net/ipv6/udp_offload.c
++++ b/net/ipv6/udp_offload.c
+@@ -86,7 +86,7 @@ static struct sk_buff *udp6_ufo_fragment(struct sk_buff *skb,
+ if (uh->check == 0)
+ uh->check = CSUM_MANGLED_0;
+
+- skb->ip_summed = CHECKSUM_NONE;
++ skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+ /* Check if there is enough headroom to insert fragment header. */
+ tnl_hlen = skb_tnl_header_len(skb);
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index 9a556e434f59..39c78c9e1c68 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -63,8 +63,13 @@ struct pfkey_sock {
+ } u;
+ struct sk_buff *skb;
+ } dump;
++ struct mutex dump_lock;
+ };
+
++static int parse_sockaddr_pair(struct sockaddr *sa, int ext_len,
++ xfrm_address_t *saddr, xfrm_address_t *daddr,
++ u16 *family);
++
+ static inline struct pfkey_sock *pfkey_sk(struct sock *sk)
+ {
+ return (struct pfkey_sock *)sk;
+@@ -139,6 +144,7 @@ static int pfkey_create(struct net *net, struct socket *sock, int protocol,
+ {
+ struct netns_pfkey *net_pfkey = net_generic(net, pfkey_net_id);
+ struct sock *sk;
++ struct pfkey_sock *pfk;
+ int err;
+
+ if (!ns_capable(net->user_ns, CAP_NET_ADMIN))
+@@ -153,6 +159,9 @@ static int pfkey_create(struct net *net, struct socket *sock, int protocol,
+ if (sk == NULL)
+ goto out;
+
++ pfk = pfkey_sk(sk);
++ mutex_init(&pfk->dump_lock);
++
+ sock->ops = &pfkey_ops;
+ sock_init_data(sock, sk);
+
+@@ -281,13 +290,23 @@ static int pfkey_do_dump(struct pfkey_sock *pfk)
+ struct sadb_msg *hdr;
+ int rc;
+
++ mutex_lock(&pfk->dump_lock);
++ if (!pfk->dump.dump) {
++ rc = 0;
++ goto out;
++ }
++
+ rc = pfk->dump.dump(pfk);
+- if (rc == -ENOBUFS)
+- return 0;
++ if (rc == -ENOBUFS) {
++ rc = 0;
++ goto out;
++ }
+
+ if (pfk->dump.skb) {
+- if (!pfkey_can_dump(&pfk->sk))
+- return 0;
++ if (!pfkey_can_dump(&pfk->sk)) {
++ rc = 0;
++ goto out;
++ }
+
+ hdr = (struct sadb_msg *) pfk->dump.skb->data;
+ hdr->sadb_msg_seq = 0;
+@@ -298,6 +317,9 @@ static int pfkey_do_dump(struct pfkey_sock *pfk)
+ }
+
+ pfkey_terminate_dump(pfk);
++
++out:
++ mutex_unlock(&pfk->dump_lock);
+ return rc;
+ }
+
+@@ -1801,19 +1823,26 @@ static int pfkey_dump(struct sock *sk, struct sk_buff *skb, const struct sadb_ms
+ struct xfrm_address_filter *filter = NULL;
+ struct pfkey_sock *pfk = pfkey_sk(sk);
+
+- if (pfk->dump.dump != NULL)
++ mutex_lock(&pfk->dump_lock);
++ if (pfk->dump.dump != NULL) {
++ mutex_unlock(&pfk->dump_lock);
+ return -EBUSY;
++ }
+
+ proto = pfkey_satype2proto(hdr->sadb_msg_satype);
+- if (proto == 0)
++ if (proto == 0) {
++ mutex_unlock(&pfk->dump_lock);
+ return -EINVAL;
++ }
+
+ if (ext_hdrs[SADB_X_EXT_FILTER - 1]) {
+ struct sadb_x_filter *xfilter = ext_hdrs[SADB_X_EXT_FILTER - 1];
+
+ filter = kmalloc(sizeof(*filter), GFP_KERNEL);
+- if (filter == NULL)
++ if (filter == NULL) {
++ mutex_unlock(&pfk->dump_lock);
+ return -ENOMEM;
++ }
+
+ memcpy(&filter->saddr, &xfilter->sadb_x_filter_saddr,
+ sizeof(xfrm_address_t));
+@@ -1829,6 +1858,7 @@ static int pfkey_dump(struct sock *sk, struct sk_buff *skb, const struct sadb_ms
+ pfk->dump.dump = pfkey_dump_sa;
+ pfk->dump.done = pfkey_dump_sa_done;
+ xfrm_state_walk_init(&pfk->dump.u.state, proto, filter);
++ mutex_unlock(&pfk->dump_lock);
+
+ return pfkey_do_dump(pfk);
+ }
+@@ -1921,19 +1951,14 @@ parse_ipsecrequest(struct xfrm_policy *xp, struct sadb_x_ipsecrequest *rq)
+
+ /* addresses present only in tunnel mode */
+ if (t->mode == XFRM_MODE_TUNNEL) {
+- u8 *sa = (u8 *) (rq + 1);
+- int family, socklen;
++ int err;
+
+- family = pfkey_sockaddr_extract((struct sockaddr *)sa,
+- &t->saddr);
+- if (!family)
+- return -EINVAL;
+-
+- socklen = pfkey_sockaddr_len(family);
+- if (pfkey_sockaddr_extract((struct sockaddr *)(sa + socklen),
+- &t->id.daddr) != family)
+- return -EINVAL;
+- t->encap_family = family;
++ err = parse_sockaddr_pair(
++ (struct sockaddr *)(rq + 1),
++ rq->sadb_x_ipsecrequest_len - sizeof(*rq),
++ &t->saddr, &t->id.daddr, &t->encap_family);
++ if (err)
++ return err;
+ } else
+ t->encap_family = xp->family;
+
+@@ -1953,7 +1978,11 @@ parse_ipsecrequests(struct xfrm_policy *xp, struct sadb_x_policy *pol)
+ if (pol->sadb_x_policy_len * 8 < sizeof(struct sadb_x_policy))
+ return -EINVAL;
+
+- while (len >= sizeof(struct sadb_x_ipsecrequest)) {
++ while (len >= sizeof(*rq)) {
++ if (len < rq->sadb_x_ipsecrequest_len ||
++ rq->sadb_x_ipsecrequest_len < sizeof(*rq))
++ return -EINVAL;
++
+ if ((err = parse_ipsecrequest(xp, rq)) < 0)
+ return err;
+ len -= rq->sadb_x_ipsecrequest_len;
+@@ -2416,7 +2445,6 @@ out:
+ return err;
+ }
+
+-#ifdef CONFIG_NET_KEY_MIGRATE
+ static int pfkey_sockaddr_pair_size(sa_family_t family)
+ {
+ return PFKEY_ALIGN8(pfkey_sockaddr_len(family) * 2);
+@@ -2428,7 +2456,7 @@ static int parse_sockaddr_pair(struct sockaddr *sa, int ext_len,
+ {
+ int af, socklen;
+
+- if (ext_len < pfkey_sockaddr_pair_size(sa->sa_family))
++ if (ext_len < 2 || ext_len < pfkey_sockaddr_pair_size(sa->sa_family))
+ return -EINVAL;
+
+ af = pfkey_sockaddr_extract(sa, saddr);
+@@ -2444,6 +2472,7 @@ static int parse_sockaddr_pair(struct sockaddr *sa, int ext_len,
+ return 0;
+ }
+
++#ifdef CONFIG_NET_KEY_MIGRATE
+ static int ipsecrequests_to_migrate(struct sadb_x_ipsecrequest *rq1, int len,
+ struct xfrm_migrate *m)
+ {
+@@ -2451,13 +2480,14 @@ static int ipsecrequests_to_migrate(struct sadb_x_ipsecrequest *rq1, int len,
+ struct sadb_x_ipsecrequest *rq2;
+ int mode;
+
+- if (len <= sizeof(struct sadb_x_ipsecrequest) ||
+- len < rq1->sadb_x_ipsecrequest_len)
++ if (len < sizeof(*rq1) ||
++ len < rq1->sadb_x_ipsecrequest_len ||
++ rq1->sadb_x_ipsecrequest_len < sizeof(*rq1))
+ return -EINVAL;
+
+ /* old endoints */
+ err = parse_sockaddr_pair((struct sockaddr *)(rq1 + 1),
+- rq1->sadb_x_ipsecrequest_len,
++ rq1->sadb_x_ipsecrequest_len - sizeof(*rq1),
+ &m->old_saddr, &m->old_daddr,
+ &m->old_family);
+ if (err)
+@@ -2466,13 +2496,14 @@ static int ipsecrequests_to_migrate(struct sadb_x_ipsecrequest *rq1, int len,
+ rq2 = (struct sadb_x_ipsecrequest *)((u8 *)rq1 + rq1->sadb_x_ipsecrequest_len);
+ len -= rq1->sadb_x_ipsecrequest_len;
+
+- if (len <= sizeof(struct sadb_x_ipsecrequest) ||
+- len < rq2->sadb_x_ipsecrequest_len)
++ if (len <= sizeof(*rq2) ||
++ len < rq2->sadb_x_ipsecrequest_len ||
++ rq2->sadb_x_ipsecrequest_len < sizeof(*rq2))
+ return -EINVAL;
+
+ /* new endpoints */
+ err = parse_sockaddr_pair((struct sockaddr *)(rq2 + 1),
+- rq2->sadb_x_ipsecrequest_len,
++ rq2->sadb_x_ipsecrequest_len - sizeof(*rq2),
+ &m->new_saddr, &m->new_daddr,
+ &m->new_family);
+ if (err)
+@@ -2687,14 +2718,18 @@ static int pfkey_spddump(struct sock *sk, struct sk_buff *skb, const struct sadb
+ {
+ struct pfkey_sock *pfk = pfkey_sk(sk);
+
+- if (pfk->dump.dump != NULL)
++ mutex_lock(&pfk->dump_lock);
++ if (pfk->dump.dump != NULL) {
++ mutex_unlock(&pfk->dump_lock);
+ return -EBUSY;
++ }
+
+ pfk->dump.msg_version = hdr->sadb_msg_version;
+ pfk->dump.msg_portid = hdr->sadb_msg_pid;
+ pfk->dump.dump = pfkey_dump_sp;
+ pfk->dump.done = pfkey_dump_sp_done;
+ xfrm_policy_walk_init(&pfk->dump.u.policy, XFRM_POLICY_TYPE_MAIN);
++ mutex_unlock(&pfk->dump_lock);
+
+ return pfkey_do_dump(pfk);
+ }
+diff --git a/net/nfc/core.c b/net/nfc/core.c
+index cff3f1614ad4..54596f609d04 100644
+--- a/net/nfc/core.c
++++ b/net/nfc/core.c
+@@ -969,6 +969,8 @@ static void nfc_release(struct device *d)
+ kfree(se);
+ }
+
++ ida_simple_remove(&nfc_index_ida, dev->idx);
++
+ kfree(dev);
+ }
+
+@@ -1043,6 +1045,7 @@ struct nfc_dev *nfc_allocate_device(struct nfc_ops *ops,
+ int tx_headroom, int tx_tailroom)
+ {
+ struct nfc_dev *dev;
++ int rc;
+
+ if (!ops->start_poll || !ops->stop_poll || !ops->activate_target ||
+ !ops->deactivate_target || !ops->im_transceive)
+@@ -1055,6 +1058,15 @@ struct nfc_dev *nfc_allocate_device(struct nfc_ops *ops,
+ if (!dev)
+ return NULL;
+
++ rc = ida_simple_get(&nfc_index_ida, 0, 0, GFP_KERNEL);
++ if (rc < 0)
++ goto err_free_dev;
++ dev->idx = rc;
++
++ dev->dev.class = &nfc_class;
++ dev_set_name(&dev->dev, "nfc%d", dev->idx);
++ device_initialize(&dev->dev);
++
+ dev->ops = ops;
+ dev->supported_protocols = supported_protocols;
+ dev->tx_headroom = tx_headroom;
+@@ -1077,6 +1089,11 @@ struct nfc_dev *nfc_allocate_device(struct nfc_ops *ops,
+ }
+
+ return dev;
++
++err_free_dev:
++ kfree(dev);
++
++ return ERR_PTR(rc);
+ }
+ EXPORT_SYMBOL(nfc_allocate_device);
+
+@@ -1091,14 +1108,6 @@ int nfc_register_device(struct nfc_dev *dev)
+
+ pr_debug("dev_name=%s\n", dev_name(&dev->dev));
+
+- dev->idx = ida_simple_get(&nfc_index_ida, 0, 0, GFP_KERNEL);
+- if (dev->idx < 0)
+- return dev->idx;
+-
+- dev->dev.class = &nfc_class;
+- dev_set_name(&dev->dev, "nfc%d", dev->idx);
+- device_initialize(&dev->dev);
+-
+ mutex_lock(&nfc_devlist_mutex);
+ nfc_devlist_generation++;
+ rc = device_add(&dev->dev);
+@@ -1136,12 +1145,10 @@ EXPORT_SYMBOL(nfc_register_device);
+ */
+ void nfc_unregister_device(struct nfc_dev *dev)
+ {
+- int rc, id;
++ int rc;
+
+ pr_debug("dev_name=%s\n", dev_name(&dev->dev));
+
+- id = dev->idx;
+-
+ if (dev->rfkill) {
+ rfkill_unregister(dev->rfkill);
+ rfkill_destroy(dev->rfkill);
+@@ -1166,8 +1173,6 @@ void nfc_unregister_device(struct nfc_dev *dev)
+ nfc_devlist_generation++;
+ device_del(&dev->dev);
+ mutex_unlock(&nfc_devlist_mutex);
+-
+- ida_simple_remove(&nfc_index_ida, id);
+ }
+ EXPORT_SYMBOL(nfc_unregister_device);
+
+diff --git a/net/nfc/llcp_sock.c b/net/nfc/llcp_sock.c
+index 9578bd6a4f3e..5a6b76f8d157 100644
+--- a/net/nfc/llcp_sock.c
++++ b/net/nfc/llcp_sock.c
+@@ -76,7 +76,8 @@ static int llcp_sock_bind(struct socket *sock, struct sockaddr *addr, int alen)
+ struct sockaddr_nfc_llcp llcp_addr;
+ int len, ret = 0;
+
+- if (!addr || addr->sa_family != AF_NFC)
++ if (!addr || alen < offsetofend(struct sockaddr, sa_family) ||
++ addr->sa_family != AF_NFC)
+ return -EINVAL;
+
+ pr_debug("sk %p addr %p family %d\n", sk, addr, addr->sa_family);
+@@ -150,7 +151,8 @@ static int llcp_raw_sock_bind(struct socket *sock, struct sockaddr *addr,
+ struct sockaddr_nfc_llcp llcp_addr;
+ int len, ret = 0;
+
+- if (!addr || addr->sa_family != AF_NFC)
++ if (!addr || alen < offsetofend(struct sockaddr, sa_family) ||
++ addr->sa_family != AF_NFC)
+ return -EINVAL;
+
+ pr_debug("sk %p addr %p family %d\n", sk, addr, addr->sa_family);
+@@ -655,8 +657,7 @@ static int llcp_sock_connect(struct socket *sock, struct sockaddr *_addr,
+
+ pr_debug("sock %p sk %p flags 0x%x\n", sock, sk, flags);
+
+- if (!addr || len < sizeof(struct sockaddr_nfc) ||
+- addr->sa_family != AF_NFC)
++ if (!addr || len < sizeof(*addr) || addr->sa_family != AF_NFC)
+ return -EINVAL;
+
+ if (addr->service_name_len == 0 && addr->dsap == 0)
+diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c
+index 49ff32106080..a776fb53d66d 100644
+--- a/net/nfc/nci/core.c
++++ b/net/nfc/nci/core.c
+@@ -981,8 +981,7 @@ struct nci_dev *nci_allocate_device(struct nci_ops *ops,
+ return ndev;
+
+ free_nfc:
+- kfree(ndev->nfc_dev);
+-
++ nfc_free_device(ndev->nfc_dev);
+ free_nci:
+ kfree(ndev);
+ return NULL;
+diff --git a/net/nfc/netlink.c b/net/nfc/netlink.c
+index 3763036710ae..2f2a2a0e56ec 100644
+--- a/net/nfc/netlink.c
++++ b/net/nfc/netlink.c
+@@ -865,7 +865,9 @@ static int nfc_genl_activate_target(struct sk_buff *skb, struct genl_info *info)
+ u32 device_idx, target_idx, protocol;
+ int rc;
+
+- if (!info->attrs[NFC_ATTR_DEVICE_INDEX])
++ if (!info->attrs[NFC_ATTR_DEVICE_INDEX] ||
++ !info->attrs[NFC_ATTR_TARGET_INDEX] ||
++ !info->attrs[NFC_ATTR_PROTOCOLS])
+ return -EINVAL;
+
+ device_idx = nla_get_u32(info->attrs[NFC_ATTR_DEVICE_INDEX]);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 93c9a70d046e..c29070c27073 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3370,14 +3370,19 @@ packet_setsockopt(struct socket *sock, int level, int optname, char __user *optv
+
+ if (optlen != sizeof(val))
+ return -EINVAL;
+- if (po->rx_ring.pg_vec || po->tx_ring.pg_vec)
+- return -EBUSY;
+ if (copy_from_user(&val, optval, sizeof(val)))
+ return -EFAULT;
+ if (val > INT_MAX)
+ return -EINVAL;
+- po->tp_reserve = val;
+- return 0;
++ lock_sock(sk);
++ if (po->rx_ring.pg_vec || po->tx_ring.pg_vec) {
++ ret = -EBUSY;
++ } else {
++ po->tp_reserve = val;
++ ret = 0;
++ }
++ release_sock(sk);
++ return ret;
+ }
+ case PACKET_LOSS:
+ {
+@@ -3954,7 +3959,7 @@ static int packet_set_ring(struct sock *sk, union tpacket_req_u *req_u,
+ register_prot_hook(sk);
+ }
+ spin_unlock(&po->bind_lock);
+- if (closing && (po->tp_version > TPACKET_V2)) {
++ if (pg_vec && (po->tp_version > TPACKET_V2)) {
+ /* Because we don't support block-based V3 on tx-ring */
+ if (!tx_ring)
+ prb_shutdown_retire_blk_timer(po, tx_ring, rb_queue);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 9b5dd2ac60b6..a0e45ae0a628 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -2228,6 +2228,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x8691, "ASUS ROG Ranger VIII", ALC882_FIXUP_GPIO3),
+ SND_PCI_QUIRK(0x104d, 0x9047, "Sony Vaio TT", ALC889_FIXUP_VAIO_TT),
+ SND_PCI_QUIRK(0x104d, 0x905a, "Sony Vaio Z", ALC882_FIXUP_NO_PRIMARY_HP),
++ SND_PCI_QUIRK(0x104d, 0x9060, "Sony Vaio VPCL14M1R", ALC882_FIXUP_NO_PRIMARY_HP),
+ SND_PCI_QUIRK(0x104d, 0x9043, "Sony Vaio VGC-LN51JGB", ALC882_FIXUP_NO_PRIMARY_HP),
+ SND_PCI_QUIRK(0x104d, 0x9044, "Sony VAIO AiO", ALC882_FIXUP_NO_PRIMARY_HP),
+
+diff --git a/sound/soc/codecs/tlv320aic3x.c b/sound/soc/codecs/tlv320aic3x.c
+index 51c4713ac6e3..468fdf21be4f 100644
+--- a/sound/soc/codecs/tlv320aic3x.c
++++ b/sound/soc/codecs/tlv320aic3x.c
+@@ -125,6 +125,16 @@ static const struct reg_default aic3x_reg[] = {
+ { 108, 0x00 }, { 109, 0x00 },
+ };
+
++static bool aic3x_volatile_reg(struct device *dev, unsigned int reg)
++{
++ switch (reg) {
++ case AIC3X_RESET:
++ return true;
++ default:
++ return false;
++ }
++}
++
+ static const struct regmap_config aic3x_regmap = {
+ .reg_bits = 8,
+ .val_bits = 8,
+@@ -132,6 +142,9 @@ static const struct regmap_config aic3x_regmap = {
+ .max_register = DAC_ICC_ADJ,
+ .reg_defaults = aic3x_reg,
+ .num_reg_defaults = ARRAY_SIZE(aic3x_reg),
++
++ .volatile_reg = aic3x_volatile_reg,
++
+ .cache_type = REGCACHE_RBTREE,
+ };
+
+diff --git a/sound/soc/soc-compress.c b/sound/soc/soc-compress.c
+index 1874cf0e6cab..35805d7e2bc2 100644
+--- a/sound/soc/soc-compress.c
++++ b/sound/soc/soc-compress.c
+@@ -68,7 +68,8 @@ out:
+ static int soc_compr_open_fe(struct snd_compr_stream *cstream)
+ {
+ struct snd_soc_pcm_runtime *fe = cstream->private_data;
+- struct snd_pcm_substream *fe_substream = fe->pcm->streams[0].substream;
++ struct snd_pcm_substream *fe_substream =
++ fe->pcm->streams[cstream->direction].substream;
+ struct snd_soc_platform *platform = fe->platform;
+ struct snd_soc_dpcm *dpcm;
+ struct snd_soc_dapm_widget_list *list;
+@@ -412,7 +413,8 @@ static int soc_compr_set_params_fe(struct snd_compr_stream *cstream,
+ struct snd_compr_params *params)
+ {
+ struct snd_soc_pcm_runtime *fe = cstream->private_data;
+- struct snd_pcm_substream *fe_substream = fe->pcm->streams[0].substream;
++ struct snd_pcm_substream *fe_substream =
++ fe->pcm->streams[cstream->direction].substream;
+ struct snd_soc_platform *platform = fe->platform;
+ int ret = 0, stream;
+
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 52fe7eb2dea1..c99e18cb2ba7 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -163,6 +163,10 @@ int dpcm_dapm_stream_event(struct snd_soc_pcm_runtime *fe, int dir,
+ dev_dbg(be->dev, "ASoC: BE %s event %d dir %d\n",
+ be->dai_link->name, event, dir);
+
++ if ((event == SND_SOC_DAPM_STREAM_STOP) &&
++ (be->dpcm[dir].users >= 1))
++ continue;
++
+ snd_soc_dapm_stream_event(be, dir, event);
+ }
+
+@@ -1991,9 +1995,11 @@ static int dpcm_fe_dai_do_trigger(struct snd_pcm_substream *substream, int cmd)
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+- case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
+ fe->dpcm[stream].state = SND_SOC_DPCM_STATE_STOP;
+ break;
++ case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
++ fe->dpcm[stream].state = SND_SOC_DPCM_STATE_PAUSED;
++ break;
+ }
+
+ out:
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index c2131b851602..5fed093fd447 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -361,6 +361,9 @@ static void snd_complete_urb(struct urb *urb)
+ if (unlikely(atomic_read(&ep->chip->shutdown)))
+ goto exit_clear;
+
++ if (unlikely(!test_bit(EP_FLAG_RUNNING, &ep->flags)))
++ goto exit_clear;
++
+ if (usb_pipeout(ep->pipe)) {
+ retire_outbound_urb(ep, ctx);
+ /* can be stopped during retire callback */
+diff --git a/tools/lib/traceevent/plugin_sched_switch.c b/tools/lib/traceevent/plugin_sched_switch.c
+index f1ce60065258..ec30c2fcbac0 100644
+--- a/tools/lib/traceevent/plugin_sched_switch.c
++++ b/tools/lib/traceevent/plugin_sched_switch.c
+@@ -111,7 +111,7 @@ static int sched_switch_handler(struct trace_seq *s,
+ trace_seq_printf(s, "%lld ", val);
+
+ if (pevent_get_field_val(s, event, "prev_prio", record, &val, 0) == 0)
+- trace_seq_printf(s, "[%lld] ", val);
++ trace_seq_printf(s, "[%d] ", (int) val);
+
+ if (pevent_get_field_val(s, event, "prev_state", record, &val, 0) == 0)
+ write_state(s, val);
+@@ -129,7 +129,7 @@ static int sched_switch_handler(struct trace_seq *s,
+ trace_seq_printf(s, "%lld", val);
+
+ if (pevent_get_field_val(s, event, "next_prio", record, &val, 0) == 0)
+- trace_seq_printf(s, " [%lld]", val);
++ trace_seq_printf(s, " [%d]", (int) val);
+
+ return 0;
+ }
+diff --git a/tools/perf/ui/browser.c b/tools/perf/ui/browser.c
+index 6680fa5cb9dd..d9f04239a12a 100644
+--- a/tools/perf/ui/browser.c
++++ b/tools/perf/ui/browser.c
+@@ -673,7 +673,7 @@ static void __ui_browser__line_arrow_down(struct ui_browser *browser,
+ ui_browser__gotorc(browser, row, column + 1);
+ SLsmg_draw_hline(2);
+
+- if (row++ == 0)
++ if (++row == 0)
+ goto out;
+ } else
+ row = 0;
+diff --git a/tools/perf/util/symbol-elf.c b/tools/perf/util/symbol-elf.c
+index 3ddfab315e19..ec35cb33e46b 100644
+--- a/tools/perf/util/symbol-elf.c
++++ b/tools/perf/util/symbol-elf.c
+@@ -488,6 +488,12 @@ int sysfs__read_build_id(const char *filename, void *build_id, size_t size)
+ break;
+ } else {
+ int n = namesz
+ descsz; ++ ++ if (n > (int)sizeof(bf)) { ++ n = sizeof(bf); ++ pr_debug("%s: truncating reading of build id in sysfs file %s: n_namesz=%u, n_descsz=%u.\n", ++ __func__, filename, nhdr.n_namesz, nhdr.n_descsz); ++ } + if (read(fd, bf, n) != n) + break; + } +diff --git a/virt/kvm/vfio.c b/virt/kvm/vfio.c +index 620e37f741b8..6ddd3c742555 100644 +--- a/virt/kvm/vfio.c ++++ b/virt/kvm/vfio.c +@@ -47,6 +47,22 @@ static struct vfio_group *kvm_vfio_group_get_external_user(struct file *filep) + return vfio_group; + } + ++static bool kvm_vfio_external_group_match_file(struct vfio_group *group, ++ struct file *filep) ++{ ++ bool ret, (*fn)(struct vfio_group *, struct file *); ++ ++ fn = symbol_get(vfio_external_group_match_file); ++ if (!fn) ++ return false; ++ ++ ret = fn(group, filep); ++ ++ symbol_put(vfio_external_group_match_file); ++ ++ return ret; ++} ++ + static void kvm_vfio_group_put_external_user(struct vfio_group *vfio_group) + { + void (*fn)(struct vfio_group *); +@@ -169,18 +185,13 @@ static int kvm_vfio_set_group(struct kvm_device *dev, long attr, u64 arg) + if (!f.file) + return -EBADF; + +- vfio_group = kvm_vfio_group_get_external_user(f.file); +- fdput(f); +- +- if (IS_ERR(vfio_group)) +- return PTR_ERR(vfio_group); +- + ret = -ENOENT; + + mutex_lock(&kv->lock); + + list_for_each_entry(kvg, &kv->group_list, node) { +- if (kvg->vfio_group != vfio_group) ++ if (!kvm_vfio_external_group_match_file(kvg->vfio_group, ++ f.file)) + continue; + + list_del(&kvg->node); +@@ -192,7 +203,7 @@ static int kvm_vfio_set_group(struct kvm_device *dev, long attr, u64 arg) + + mutex_unlock(&kv->lock); + +- kvm_vfio_group_put_external_user(vfio_group); ++ fdput(f); + + kvm_vfio_update_coherency(dev); + diff --git a/1044_linux-4.1.45.patch b/1044_linux-4.1.45.patch new file mode 100644 index 0000000..eb4d11d --- /dev/null +++ b/1044_linux-4.1.45.patch @@ -0,0 +1,4031 @@ +diff --git a/Makefile b/Makefile +index 9c7aa08c70b7..d4c064604058 100644 +--- a/Makefile ++++ b/Makefile 
+@@ -1,6 +1,6 @@ + VERSION = 4 + PATCHLEVEL = 1 +-SUBLEVEL = 44 ++SUBLEVEL = 45 + EXTRAVERSION = + NAME = Series 4800 + +diff --git a/arch/alpha/include/asm/types.h b/arch/alpha/include/asm/types.h +index 4cb4b6d3452c..0bc66e1d3a7e 100644 +--- a/arch/alpha/include/asm/types.h ++++ b/arch/alpha/include/asm/types.h +@@ -1,6 +1,6 @@ + #ifndef _ALPHA_TYPES_H + #define _ALPHA_TYPES_H + +-#include ++#include + + #endif /* _ALPHA_TYPES_H */ +diff --git a/arch/alpha/include/uapi/asm/types.h b/arch/alpha/include/uapi/asm/types.h +index 9fd3cd459777..8d1024d7be05 100644 +--- a/arch/alpha/include/uapi/asm/types.h ++++ b/arch/alpha/include/uapi/asm/types.h +@@ -9,8 +9,18 @@ + * need to be careful to avoid a name clashes. + */ + +-#ifndef __KERNEL__ ++/* ++ * This is here because we used to use l64 for alpha ++ * and we don't want to impact user mode with our change to ll64 ++ * in the kernel. ++ * ++ * However, some user programs are fine with this. They can ++ * flag __SANE_USERSPACE_TYPES__ to get int-ll64.h here. 
++ */ ++#if !defined(__SANE_USERSPACE_TYPES__) && !defined(__KERNEL__) + #include ++#else ++#include + #endif + + #endif /* _UAPI_ALPHA_TYPES_H */ +diff --git a/arch/arc/kernel/entry.S b/arch/arc/kernel/entry.S +index d868289c5a26..da600d814035 100644 +--- a/arch/arc/kernel/entry.S ++++ b/arch/arc/kernel/entry.S +@@ -315,6 +315,12 @@ ENTRY(EV_MachineCheck) + lr r0, [efa] + mov r1, sp + ++ ; hardware auto-disables MMU, re-enable it to allow kernel vaddr ++ ; access for say stack unwinding of modules for crash dumps ++ lr r3, [ARC_REG_PID] ++ or r3, r3, MMU_ENABLE ++ sr r3, [ARC_REG_PID] ++ + lsr r3, r2, 8 + bmsk r3, r3, 7 + brne r3, ECR_C_MCHK_DUP_TLB, 1f +diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c +index 7f47d2a56f44..b7a0c44785c1 100644 +--- a/arch/arc/mm/tlb.c ++++ b/arch/arc/mm/tlb.c +@@ -689,9 +689,6 @@ void do_tlb_overlap_fault(unsigned long cause, unsigned long address, + + local_irq_save(flags); + +- /* re-enable the MMU */ +- write_aux_reg(ARC_REG_PID, MMU_ENABLE | read_aux_reg(ARC_REG_PID)); +- + /* loop thru all sets of TLB */ + for (set = 0; set < mmu->sets; set++) { + +diff --git a/arch/arm/mm/fault.c b/arch/arm/mm/fault.c +index 6333d9c17875..9c521f9959a9 100644 +--- a/arch/arm/mm/fault.c ++++ b/arch/arm/mm/fault.c +@@ -314,8 +314,11 @@ retry: + * signal first. We do not need to release the mmap_sem because + * it would already be released in __lock_page_or_retry in + * mm/filemap.c. 
*/ +- if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) ++ if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) { ++ if (!user_mode(regs)) ++ goto no_context; + return 0; ++ } + + /* + * Major/minor page fault accounting is only done on the +diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c +index c31e59fe2cb8..7b4e9ea0b1a4 100644 +--- a/arch/arm64/kernel/fpsimd.c ++++ b/arch/arm64/kernel/fpsimd.c +@@ -156,9 +156,11 @@ void fpsimd_thread_switch(struct task_struct *next) + + void fpsimd_flush_thread(void) + { ++ preempt_disable(); + memset(¤t->thread.fpsimd_state, 0, sizeof(struct fpsimd_state)); + fpsimd_flush_task_state(current); + set_thread_flag(TIF_FOREIGN_FPSTATE); ++ preempt_enable(); + } + + /* +diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c +index 16523fbd9671..d0e42f6fcddd 100644 +--- a/arch/arm64/mm/fault.c ++++ b/arch/arm64/mm/fault.c +@@ -253,8 +253,11 @@ retry: + * signal first. We do not need to release the mmap_sem because it + * would already be released in __lock_page_or_retry in mm/filemap.c. 
+ */ +- if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) ++ if ((fault & VM_FAULT_RETRY) && fatal_signal_pending(current)) { ++ if (!user_mode(regs)) ++ goto no_context; + return 0; ++ } + + /* + * Major/minor page fault accounting is only done on the initial +diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c +index 91e5c1758b5c..64e016abb2a5 100644 +--- a/arch/powerpc/kernel/align.c ++++ b/arch/powerpc/kernel/align.c +@@ -236,6 +236,28 @@ static int emulate_dcbz(struct pt_regs *regs, unsigned char __user *addr) + + #define SWIZ_PTR(p) ((unsigned char __user *)((p) ^ swiz)) + ++#define __get_user_or_set_dar(_regs, _dest, _addr) \ ++ ({ \ ++ int rc = 0; \ ++ typeof(_addr) __addr = (_addr); \ ++ if (__get_user_inatomic(_dest, __addr)) { \ ++ _regs->dar = (unsigned long)__addr; \ ++ rc = -EFAULT; \ ++ } \ ++ rc; \ ++ }) ++ ++#define __put_user_or_set_dar(_regs, _src, _addr) \ ++ ({ \ ++ int rc = 0; \ ++ typeof(_addr) __addr = (_addr); \ ++ if (__put_user_inatomic(_src, __addr)) { \ ++ _regs->dar = (unsigned long)__addr; \ ++ rc = -EFAULT; \ ++ } \ ++ rc; \ ++ }) ++ + static int emulate_multiple(struct pt_regs *regs, unsigned char __user *addr, + unsigned int reg, unsigned int nb, + unsigned int flags, unsigned int instr, +@@ -264,9 +286,10 @@ static int emulate_multiple(struct pt_regs *regs, unsigned char __user *addr, + } else { + unsigned long pc = regs->nip ^ (swiz & 4); + +- if (__get_user_inatomic(instr, +- (unsigned int __user *)pc)) ++ if (__get_user_or_set_dar(regs, instr, ++ (unsigned int __user *)pc)) + return -EFAULT; ++ + if (swiz == 0 && (flags & SW)) + instr = cpu_to_le32(instr); + nb = (instr >> 11) & 0x1f; +@@ -310,31 +333,31 @@ static int emulate_multiple(struct pt_regs *regs, unsigned char __user *addr, + ((nb0 + 3) / 4) * sizeof(unsigned long)); + + for (i = 0; i < nb; ++i, ++p) +- if (__get_user_inatomic(REG_BYTE(rptr, i ^ bswiz), +- SWIZ_PTR(p))) ++ if (__get_user_or_set_dar(regs, REG_BYTE(rptr, i ^ bswiz), ++ 
SWIZ_PTR(p))) + return -EFAULT; + if (nb0 > 0) { + rptr = ®s->gpr[0]; + addr += nb; + for (i = 0; i < nb0; ++i, ++p) +- if (__get_user_inatomic(REG_BYTE(rptr, +- i ^ bswiz), +- SWIZ_PTR(p))) ++ if (__get_user_or_set_dar(regs, ++ REG_BYTE(rptr, i ^ bswiz), ++ SWIZ_PTR(p))) + return -EFAULT; + } + + } else { + for (i = 0; i < nb; ++i, ++p) +- if (__put_user_inatomic(REG_BYTE(rptr, i ^ bswiz), +- SWIZ_PTR(p))) ++ if (__put_user_or_set_dar(regs, REG_BYTE(rptr, i ^ bswiz), ++ SWIZ_PTR(p))) + return -EFAULT; + if (nb0 > 0) { + rptr = ®s->gpr[0]; + addr += nb; + for (i = 0; i < nb0; ++i, ++p) +- if (__put_user_inatomic(REG_BYTE(rptr, +- i ^ bswiz), +- SWIZ_PTR(p))) ++ if (__put_user_or_set_dar(regs, ++ REG_BYTE(rptr, i ^ bswiz), ++ SWIZ_PTR(p))) + return -EFAULT; + } + } +@@ -346,29 +369,32 @@ static int emulate_multiple(struct pt_regs *regs, unsigned char __user *addr, + * Only POWER6 has these instructions, and it does true little-endian, + * so we don't need the address swizzling. + */ +-static int emulate_fp_pair(unsigned char __user *addr, unsigned int reg, +- unsigned int flags) ++static int emulate_fp_pair(struct pt_regs *regs, unsigned char __user *addr, ++ unsigned int reg, unsigned int flags) + { + char *ptr0 = (char *) ¤t->thread.TS_FPR(reg); + char *ptr1 = (char *) ¤t->thread.TS_FPR(reg+1); +- int i, ret, sw = 0; ++ int i, sw = 0; + + if (reg & 1) + return 0; /* invalid form: FRS/FRT must be even */ + if (flags & SW) + sw = 7; +- ret = 0; ++ + for (i = 0; i < 8; ++i) { + if (!(flags & ST)) { +- ret |= __get_user(ptr0[i^sw], addr + i); +- ret |= __get_user(ptr1[i^sw], addr + i + 8); ++ if (__get_user_or_set_dar(regs, ptr0[i^sw], addr + i)) ++ return -EFAULT; ++ if (__get_user_or_set_dar(regs, ptr1[i^sw], addr + i + 8)) ++ return -EFAULT; + } else { +- ret |= __put_user(ptr0[i^sw], addr + i); +- ret |= __put_user(ptr1[i^sw], addr + i + 8); ++ if (__put_user_or_set_dar(regs, ptr0[i^sw], addr + i)) ++ return -EFAULT; ++ if (__put_user_or_set_dar(regs, ptr1[i^sw], 
addr + i + 8)) ++ return -EFAULT; + } + } +- if (ret) +- return -EFAULT; ++ + return 1; /* exception handled and fixed up */ + } + +@@ -378,24 +404,27 @@ static int emulate_lq_stq(struct pt_regs *regs, unsigned char __user *addr, + { + char *ptr0 = (char *)®s->gpr[reg]; + char *ptr1 = (char *)®s->gpr[reg+1]; +- int i, ret, sw = 0; ++ int i, sw = 0; + + if (reg & 1) + return 0; /* invalid form: GPR must be even */ + if (flags & SW) + sw = 7; +- ret = 0; ++ + for (i = 0; i < 8; ++i) { + if (!(flags & ST)) { +- ret |= __get_user(ptr0[i^sw], addr + i); +- ret |= __get_user(ptr1[i^sw], addr + i + 8); ++ if (__get_user_or_set_dar(regs, ptr0[i^sw], addr + i)) ++ return -EFAULT; ++ if (__get_user_or_set_dar(regs, ptr1[i^sw], addr + i + 8)) ++ return -EFAULT; + } else { +- ret |= __put_user(ptr0[i^sw], addr + i); +- ret |= __put_user(ptr1[i^sw], addr + i + 8); ++ if (__put_user_or_set_dar(regs, ptr0[i^sw], addr + i)) ++ return -EFAULT; ++ if (__put_user_or_set_dar(regs, ptr1[i^sw], addr + i + 8)) ++ return -EFAULT; + } + } +- if (ret) +- return -EFAULT; ++ + return 1; /* exception handled and fixed up */ + } + #endif /* CONFIG_PPC64 */ +@@ -688,9 +717,14 @@ static int emulate_vsx(unsigned char __user *addr, unsigned int reg, + for (j = 0; j < length; j += elsize) { + for (i = 0; i < elsize; ++i) { + if (flags & ST) +- ret |= __put_user(ptr[i^sw], addr + i); ++ ret = __put_user_or_set_dar(regs, ptr[i^sw], ++ addr + i); + else +- ret |= __get_user(ptr[i^sw], addr + i); ++ ret = __get_user_or_set_dar(regs, ptr[i^sw], ++ addr + i); ++ ++ if (ret) ++ return ret; + } + ptr += elsize; + #ifdef __LITTLE_ENDIAN__ +@@ -740,7 +774,7 @@ int fix_alignment(struct pt_regs *regs) + unsigned int dsisr; + unsigned char __user *addr; + unsigned long p, swiz; +- int ret, i; ++ int i; + union data { + u64 ll; + double dd; +@@ -923,7 +957,7 @@ int fix_alignment(struct pt_regs *regs) + if (flags & F) { + /* Special case for 16-byte FP loads and stores */ + PPC_WARN_ALIGNMENT(fp_pair, regs); +- 
return emulate_fp_pair(addr, reg, flags); ++ return emulate_fp_pair(regs, addr, reg, flags); + } else { + #ifdef CONFIG_PPC64 + /* Special case for 16-byte loads and stores */ +@@ -953,15 +987,12 @@ int fix_alignment(struct pt_regs *regs) + } + + data.ll = 0; +- ret = 0; + p = (unsigned long)addr; + + for (i = 0; i < nb; i++) +- ret |= __get_user_inatomic(data.v[start + i], +- SWIZ_PTR(p++)); +- +- if (unlikely(ret)) +- return -EFAULT; ++ if (__get_user_or_set_dar(regs, data.v[start + i], ++ SWIZ_PTR(p++))) ++ return -EFAULT; + + } else if (flags & F) { + data.ll = current->thread.TS_FPR(reg); +@@ -1031,15 +1062,13 @@ int fix_alignment(struct pt_regs *regs) + break; + } + +- ret = 0; + p = (unsigned long)addr; + + for (i = 0; i < nb; i++) +- ret |= __put_user_inatomic(data.v[start + i], +- SWIZ_PTR(p++)); ++ if (__put_user_or_set_dar(regs, data.v[start + i], ++ SWIZ_PTR(p++))) ++ return -EFAULT; + +- if (unlikely(ret)) +- return -EFAULT; + } else if (flags & F) + current->thread.TS_FPR(reg) = data.ll; + else +diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h +index 2903ff34174c..a8bd57d5ef43 100644 +--- a/arch/x86/include/asm/elf.h ++++ b/arch/x86/include/asm/elf.h +@@ -204,6 +204,7 @@ void set_personality_ia32(bool); + + #define ELF_CORE_COPY_REGS(pr_reg, regs) \ + do { \ ++ unsigned long base; \ + unsigned v; \ + (pr_reg)[0] = (regs)->r15; \ + (pr_reg)[1] = (regs)->r14; \ +@@ -226,8 +227,8 @@ do { \ + (pr_reg)[18] = (regs)->flags; \ + (pr_reg)[19] = (regs)->sp; \ + (pr_reg)[20] = (regs)->ss; \ +- (pr_reg)[21] = current->thread.fs; \ +- (pr_reg)[22] = current->thread.gs; \ ++ rdmsrl(MSR_FS_BASE, base); (pr_reg)[21] = base; \ ++ rdmsrl(MSR_KERNEL_GS_BASE, base); (pr_reg)[22] = base; \ + asm("movl %%ds,%0" : "=r" (v)); (pr_reg)[23] = v; \ + asm("movl %%es,%0" : "=r" (v)); (pr_reg)[24] = v; \ + asm("movl %%fs,%0" : "=r" (v)); (pr_reg)[25] = v; \ +diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h +index 34a5b93704d3..b36deb1d9561 
100644 +--- a/arch/x86/include/asm/io.h ++++ b/arch/x86/include/asm/io.h +@@ -301,13 +301,13 @@ static inline unsigned type in##bwl##_p(int port) \ + static inline void outs##bwl(int port, const void *addr, unsigned long count) \ + { \ + asm volatile("rep; outs" #bwl \ +- : "+S"(addr), "+c"(count) : "d"(port)); \ ++ : "+S"(addr), "+c"(count) : "d"(port) : "memory"); \ + } \ + \ + static inline void ins##bwl(int port, void *addr, unsigned long count) \ + { \ + asm volatile("rep; ins" #bwl \ +- : "+D"(addr), "+c"(count) : "d"(port)); \ ++ : "+D"(addr), "+c"(count) : "d"(port) : "memory"); \ + } + + BUILDIO(b, b, char) +diff --git a/block/blk-core.c b/block/blk-core.c +index bbbf36e6066b..a891e1f19f7b 100644 +--- a/block/blk-core.c ++++ b/block/blk-core.c +@@ -194,7 +194,7 @@ EXPORT_SYMBOL(blk_delay_queue); + **/ + void blk_start_queue(struct request_queue *q) + { +- WARN_ON(!irqs_disabled()); ++ WARN_ON(!in_interrupt() && !irqs_disabled()); + + queue_flag_clear(QUEUE_FLAG_STOPPED, q); + __blk_run_queue(q); +diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c +index c0f03562a145..3734c5591d07 100644 +--- a/crypto/algif_skcipher.c ++++ b/crypto/algif_skcipher.c +@@ -94,8 +94,13 @@ static void skcipher_free_async_sgls(struct skcipher_async_req *sreq) + } + sgl = sreq->tsg; + n = sg_nents(sgl); +- for_each_sg(sgl, sg, n, i) +- put_page(sg_page(sg)); ++ for_each_sg(sgl, sg, n, i) { ++ struct page *page = sg_page(sg); ++ ++ /* some SGs may not have a page mapped */ ++ if (page && atomic_read(&page->_count)) ++ put_page(page); ++ } + + kfree(sreq->tsg); + } +diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c +index e82d0976a5d0..568120eee7d9 100644 +--- a/drivers/acpi/apei/ghes.c ++++ b/drivers/acpi/apei/ghes.c +@@ -1064,6 +1064,7 @@ static int ghes_remove(struct platform_device *ghes_dev) + if (list_empty(&ghes_sci)) + unregister_acpi_hed_notifier(&ghes_notifier_sci); + mutex_unlock(&ghes_list_mutex); ++ synchronize_rcu(); + break; + case 
ACPI_HEST_NOTIFY_NMI: + ghes_nmi_remove(ghes); +diff --git a/drivers/acpi/ioapic.c b/drivers/acpi/ioapic.c +index ccdc8db16bb8..fa2cf2dc4e33 100644 +--- a/drivers/acpi/ioapic.c ++++ b/drivers/acpi/ioapic.c +@@ -45,6 +45,12 @@ static acpi_status setup_res(struct acpi_resource *acpi_res, void *data) + struct resource *res = data; + struct resource_win win; + ++ /* ++ * We might assign this to 'res' later, make sure all pointers are ++ * cleared before the resource is added to the global list ++ */ ++ memset(&win, 0, sizeof(win)); ++ + res->flags = 0; + if (acpi_dev_filter_resource_type(acpi_res, IORESOURCE_MEM) == 0) + return AE_OK; +diff --git a/drivers/android/binder.c b/drivers/android/binder.c +index 6f086415727c..235ba1fbabdb 100644 +--- a/drivers/android/binder.c ++++ b/drivers/android/binder.c +@@ -2865,7 +2865,7 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma) + const char *failure_string; + struct binder_buffer *buffer; + +- if (proc->tsk != current) ++ if (proc->tsk != current->group_leader) + return -EINVAL; + + if ((vma->vm_end - vma->vm_start) > SZ_4M) +diff --git a/drivers/ata/pata_amd.c b/drivers/ata/pata_amd.c +index 8d4d959a821c..8706533db57b 100644 +--- a/drivers/ata/pata_amd.c ++++ b/drivers/ata/pata_amd.c +@@ -616,6 +616,7 @@ static const struct pci_device_id amd[] = { + { PCI_VDEVICE(NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP73_IDE), 8 }, + { PCI_VDEVICE(NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP77_IDE), 8 }, + { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_CS5536_IDE), 9 }, ++ { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_CS5536_DEV_IDE), 9 }, + + { }, + }; +diff --git a/drivers/ata/pata_cs5536.c b/drivers/ata/pata_cs5536.c +index 6c15a554efbe..dc1255294628 100644 +--- a/drivers/ata/pata_cs5536.c ++++ b/drivers/ata/pata_cs5536.c +@@ -289,6 +289,7 @@ static int cs5536_init_one(struct pci_dev *dev, const struct pci_device_id *id) + + static const struct pci_device_id cs5536[] = { + { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_CS5536_IDE), }, ++ { 
PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_CS5536_DEV_IDE), }, + { }, + }; + +diff --git a/drivers/base/bus.c b/drivers/base/bus.c +index 79bc203f51ef..07ea8608fb0b 100644 +--- a/drivers/base/bus.c ++++ b/drivers/base/bus.c +@@ -722,7 +722,7 @@ int bus_add_driver(struct device_driver *drv) + + out_unregister: + kobject_put(&priv->kobj); +- kfree(drv->p); ++ /* drv->p is freed in driver_release() */ + drv->p = NULL; + out_put_bus: + bus_put(bus); +diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c +index 1e46eb2305c0..f928e698f659 100644 +--- a/drivers/block/skd_main.c ++++ b/drivers/block/skd_main.c +@@ -2214,6 +2214,9 @@ static void skd_send_fitmsg(struct skd_device *skdev, + */ + qcmd |= FIT_QCMD_MSGSIZE_64; + ++ /* Make sure skd_msg_buf is written before the doorbell is triggered. */ ++ smp_wmb(); ++ + SKD_WRITEQ(skdev, qcmd, FIT_Q_COMMAND); + + } +@@ -2260,6 +2263,9 @@ static void skd_send_special_fitmsg(struct skd_device *skdev, + qcmd = skspcl->mb_dma_address; + qcmd |= FIT_QCMD_QID_NORMAL + FIT_QCMD_MSGSIZE_128; + ++ /* Make sure skd_msg_buf is written before the doorbell is triggered. 
*/ ++ smp_wmb(); ++ + SKD_WRITEQ(skdev, qcmd, FIT_Q_COMMAND); + } + +@@ -4679,15 +4685,16 @@ static void skd_free_disk(struct skd_device *skdev) + { + struct gendisk *disk = skdev->disk; + +- if (disk != NULL) { +- struct request_queue *q = disk->queue; ++ if (disk && (disk->flags & GENHD_FL_UP)) ++ del_gendisk(disk); + +- if (disk->flags & GENHD_FL_UP) +- del_gendisk(disk); +- if (q) +- blk_cleanup_queue(q); +- put_disk(disk); ++ if (skdev->queue) { ++ blk_cleanup_queue(skdev->queue); ++ skdev->queue = NULL; ++ disk->queue = NULL; + } ++ ++ put_disk(disk); + skdev->disk = NULL; + } + +diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c +index 3956fd646bf2..0c13dfd1c29d 100644 +--- a/drivers/bluetooth/btusb.c ++++ b/drivers/bluetooth/btusb.c +@@ -323,6 +323,7 @@ static const struct usb_device_id blacklist_table[] = { + { USB_DEVICE(0x13d3, 0x3410), .driver_info = BTUSB_REALTEK }, + { USB_DEVICE(0x13d3, 0x3416), .driver_info = BTUSB_REALTEK }, + { USB_DEVICE(0x13d3, 0x3459), .driver_info = BTUSB_REALTEK }, ++ { USB_DEVICE(0x13d3, 0x3494), .driver_info = BTUSB_REALTEK }, + + /* Additional Realtek 8821AE Bluetooth devices */ + { USB_DEVICE(0x0b05, 0x17dc), .driver_info = BTUSB_REALTEK }, +diff --git a/drivers/gpu/drm/drm_atomic.c b/drivers/gpu/drm/drm_atomic.c +index 6e3b78ee7d16..be9b1c8b9209 100644 +--- a/drivers/gpu/drm/drm_atomic.c ++++ b/drivers/gpu/drm/drm_atomic.c +@@ -996,6 +996,9 @@ int drm_atomic_check_only(struct drm_atomic_state *state) + if (config->funcs->atomic_check) + ret = config->funcs->atomic_check(state->dev, state); + ++ if (ret) ++ return ret; ++ + if (!state->allow_modeset) { + for_each_crtc_in_state(state, crtc, crtc_state, i) { + if (crtc_state->mode_changed || +@@ -1007,7 +1010,7 @@ int drm_atomic_check_only(struct drm_atomic_state *state) + } + } + +- return ret; ++ return 0; + } + EXPORT_SYMBOL(drm_atomic_check_only); + +diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c +index 16a164770713..9b2de3ff66d9 
100644 +--- a/drivers/gpu/drm/drm_gem.c ++++ b/drivers/gpu/drm/drm_gem.c +@@ -710,13 +710,13 @@ drm_gem_object_release_handle(int id, void *ptr, void *data) + struct drm_gem_object *obj = ptr; + struct drm_device *dev = obj->dev; + ++ if (dev->driver->gem_close_object) ++ dev->driver->gem_close_object(obj, file_priv); ++ + if (drm_core_check_feature(dev, DRIVER_PRIME)) + drm_gem_remove_prime_handles(obj, file_priv); + drm_vma_node_revoke(&obj->vma_node, file_priv->filp); + +- if (dev->driver->gem_close_object) +- dev->driver->gem_close_object(obj, file_priv); +- + drm_gem_object_handle_unreference_unlocked(obj); + + return 0; +diff --git a/drivers/gpu/drm/i2c/adv7511.c b/drivers/gpu/drm/i2c/adv7511.c +index b728523e194f..bfdbfc431e07 100644 +--- a/drivers/gpu/drm/i2c/adv7511.c ++++ b/drivers/gpu/drm/i2c/adv7511.c +@@ -48,6 +48,10 @@ struct adv7511 { + struct gpio_desc *gpio_pd; + }; + ++static const int edid_i2c_addr = 0x7e; ++static const int packet_i2c_addr = 0x70; ++static const int cec_i2c_addr = 0x78; ++ + static struct adv7511 *encoder_to_adv7511(struct drm_encoder *encoder) + { + return to_encoder_slave(encoder)->slave_priv; +@@ -362,12 +366,19 @@ static void adv7511_power_on(struct adv7511 *adv7511) + { + adv7511->current_edid_segment = -1; + +- regmap_write(adv7511->regmap, ADV7511_REG_INT(0), +- ADV7511_INT0_EDID_READY); +- regmap_write(adv7511->regmap, ADV7511_REG_INT(1), +- ADV7511_INT1_DDC_ERROR); + regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER, + ADV7511_POWER_POWER_DOWN, 0); ++ if (adv7511->i2c_main->irq) { ++ /* ++ * Documentation says the INT_ENABLE registers are reset in ++ * POWER_DOWN mode. My 7511w preserved the bits, however. ++ * Still, let's be safe and stick to the documentation. 
++ */ ++ regmap_write(adv7511->regmap, ADV7511_REG_INT_ENABLE(0), ++ ADV7511_INT0_EDID_READY); ++ regmap_write(adv7511->regmap, ADV7511_REG_INT_ENABLE(1), ++ ADV7511_INT1_DDC_ERROR); ++ } + + /* + * Per spec it is allowed to pulse the HDP signal to indicate that the +@@ -567,13 +578,18 @@ static int adv7511_get_modes(struct drm_encoder *encoder, + + /* Reading the EDID only works if the device is powered */ + if (!adv7511->powered) { +- regmap_write(adv7511->regmap, ADV7511_REG_INT(0), +- ADV7511_INT0_EDID_READY); +- regmap_write(adv7511->regmap, ADV7511_REG_INT(1), +- ADV7511_INT1_DDC_ERROR); + regmap_update_bits(adv7511->regmap, ADV7511_REG_POWER, + ADV7511_POWER_POWER_DOWN, 0); ++ if (adv7511->i2c_main->irq) { ++ regmap_write(adv7511->regmap, ADV7511_REG_INT_ENABLE(0), ++ ADV7511_INT0_EDID_READY); ++ regmap_write(adv7511->regmap, ADV7511_REG_INT_ENABLE(1), ++ ADV7511_INT1_DDC_ERROR); ++ } + adv7511->current_edid_segment = -1; ++ /* Reset the EDID_I2C_ADDR register as it might be cleared */ ++ regmap_write(adv7511->regmap, ADV7511_REG_EDID_I2C_ADDR, ++ edid_i2c_addr); + } + + edid = drm_do_get_edid(connector, adv7511_get_edid_block, adv7511); +@@ -849,10 +865,6 @@ static int adv7511_parse_dt(struct device_node *np, + return 0; + } + +-static const int edid_i2c_addr = 0x7e; +-static const int packet_i2c_addr = 0x70; +-static const int cec_i2c_addr = 0x78; +- + static int adv7511_probe(struct i2c_client *i2c, const struct i2c_device_id *id) + { + struct adv7511_link_config link_config; +diff --git a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c +index 7c6f15d284e3..824c835330df 100644 +--- a/drivers/gpu/drm/rcar-du/rcar_du_crtc.c ++++ b/drivers/gpu/drm/rcar-du/rcar_du_crtc.c +@@ -148,8 +148,8 @@ static void rcar_du_crtc_set_display_timing(struct rcar_du_crtc *rcrtc) + rcar_du_group_write(rcrtc->group, rcrtc->index % 2 ? OTAR2 : OTAR, 0); + + /* Signal polarities */ +- value = ((mode->flags & DRM_MODE_FLAG_PVSYNC) ? 
0 : DSMR_VSL) +- | ((mode->flags & DRM_MODE_FLAG_PHSYNC) ? 0 : DSMR_HSL) ++ value = ((mode->flags & DRM_MODE_FLAG_PVSYNC) ? DSMR_VSL : 0) ++ | ((mode->flags & DRM_MODE_FLAG_PHSYNC) ? DSMR_HSL : 0) + | DSMR_DIPM_DE | DSMR_CSPM; + rcar_du_crtc_write(rcrtc, DSMR, value); + +@@ -171,7 +171,7 @@ static void rcar_du_crtc_set_display_timing(struct rcar_du_crtc *rcrtc) + mode->crtc_vsync_start - 1); + rcar_du_crtc_write(rcrtc, VCR, mode->crtc_vtotal - 1); + +- rcar_du_crtc_write(rcrtc, DESR, mode->htotal - mode->hsync_start); ++ rcar_du_crtc_write(rcrtc, DESR, mode->htotal - mode->hsync_start - 1); + rcar_du_crtc_write(rcrtc, DEWR, mode->hdisplay); + } + +diff --git a/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.c b/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.c +index 85043c5bad03..873e04aa9352 100644 +--- a/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.c ++++ b/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.c +@@ -56,11 +56,11 @@ static int rcar_du_lvdsenc_start(struct rcar_du_lvdsenc *lvds, + return ret; + + /* PLL clock configuration */ +- if (freq <= 38000) ++ if (freq < 39000) + pllcr = LVDPLLCR_CEEN | LVDPLLCR_COSEL | LVDPLLCR_PLLDLYCNT_38M; +- else if (freq <= 60000) ++ else if (freq < 61000) + pllcr = LVDPLLCR_CEEN | LVDPLLCR_COSEL | LVDPLLCR_PLLDLYCNT_60M; +- else if (freq <= 121000) ++ else if (freq < 121000) + pllcr = LVDPLLCR_CEEN | LVDPLLCR_COSEL | LVDPLLCR_PLLDLYCNT_121M; + else + pllcr = LVDPLLCR_PLLDLYCNT_150M; +@@ -102,7 +102,7 @@ static int rcar_du_lvdsenc_start(struct rcar_du_lvdsenc *lvds, + /* Turn the PLL on, wait for the startup delay, and turn the output + * on. 
+ */ +- lvdcr0 |= LVDCR0_PLLEN; ++ lvdcr0 |= LVDCR0_PLLON; + rcar_lvds_write(lvds, LVDCR0, lvdcr0); + + usleep_range(100, 150); +diff --git a/drivers/gpu/drm/rcar-du/rcar_lvds_regs.h b/drivers/gpu/drm/rcar-du/rcar_lvds_regs.h +index 77cf9289ab65..b1eafd097a79 100644 +--- a/drivers/gpu/drm/rcar-du/rcar_lvds_regs.h ++++ b/drivers/gpu/drm/rcar-du/rcar_lvds_regs.h +@@ -18,7 +18,7 @@ + #define LVDCR0_DMD (1 << 12) + #define LVDCR0_LVMD_MASK (0xf << 8) + #define LVDCR0_LVMD_SHIFT 8 +-#define LVDCR0_PLLEN (1 << 4) ++#define LVDCR0_PLLON (1 << 4) + #define LVDCR0_BEN (1 << 2) + #define LVDCR0_LVEN (1 << 1) + #define LVDCR0_LVRES (1 << 0) +diff --git a/drivers/gpu/drm/ttm/ttm_page_alloc.c b/drivers/gpu/drm/ttm/ttm_page_alloc.c +index 025c429050c0..5d8dfe027b30 100644 +--- a/drivers/gpu/drm/ttm/ttm_page_alloc.c ++++ b/drivers/gpu/drm/ttm/ttm_page_alloc.c +@@ -612,7 +612,7 @@ static void ttm_page_pool_fill_locked(struct ttm_page_pool *pool, + } else { + pr_err("Failed to fill pool (%p)\n", pool); + /* If we have any pages left put them to the pool. 
*/ +- list_for_each_entry(p, &pool->list, lru) { ++ list_for_each_entry(p, &new_pages, lru) { + ++cpages; + } + list_splice(&new_pages, &pool->list); +diff --git a/drivers/i2c/busses/i2c-ismt.c b/drivers/i2c/busses/i2c-ismt.c +index f994712d0904..a9276eeb61d5 100644 +--- a/drivers/i2c/busses/i2c-ismt.c ++++ b/drivers/i2c/busses/i2c-ismt.c +@@ -340,8 +340,10 @@ static int ismt_process_desc(const struct ismt_desc *desc, + break; + case I2C_SMBUS_BLOCK_DATA: + case I2C_SMBUS_I2C_BLOCK_DATA: +- memcpy(&data->block[1], dma_buffer, desc->rxbytes); +- data->block[0] = desc->rxbytes; ++ if (desc->rxbytes != dma_buffer[0] + 1) ++ return -EMSGSIZE; ++ ++ memcpy(data->block, dma_buffer, desc->rxbytes); + break; + } + return 0; +diff --git a/drivers/i2c/busses/i2c-jz4780.c b/drivers/i2c/busses/i2c-jz4780.c +index 19b2d689a5ef..4880aae98b4c 100644 +--- a/drivers/i2c/busses/i2c-jz4780.c ++++ b/drivers/i2c/busses/i2c-jz4780.c +@@ -783,10 +783,6 @@ static int jz4780_i2c_probe(struct platform_device *pdev) + + jz4780_i2c_writew(i2c, JZ4780_I2C_INTM, 0x0); + +- i2c->cmd = 0; +- memset(i2c->cmd_buf, 0, BUFSIZE); +- memset(i2c->data_buf, 0, BUFSIZE); +- + i2c->irq = platform_get_irq(pdev, 0); + ret = devm_request_irq(&pdev->dev, i2c->irq, jz4780_i2c_irq, 0, + dev_name(&pdev->dev), i2c); +diff --git a/drivers/iio/common/hid-sensors/hid-sensor-trigger.c b/drivers/iio/common/hid-sensors/hid-sensor-trigger.c +index 595511022795..3460dd0e3e99 100644 +--- a/drivers/iio/common/hid-sensors/hid-sensor-trigger.c ++++ b/drivers/iio/common/hid-sensors/hid-sensor-trigger.c +@@ -36,8 +36,6 @@ static int _hid_sensor_power_state(struct hid_sensor_common *st, bool state) + s32 poll_value = 0; + + if (state) { +- if (!atomic_read(&st->user_requested_state)) +- return 0; + if (sensor_hub_device_open(st->hsdev)) + return -EIO; + +@@ -86,6 +84,9 @@ static int _hid_sensor_power_state(struct hid_sensor_common *st, bool state) + &report_val); + } + ++ pr_debug("HID_SENSOR %s set power_state %d report_state 
%d\n", ++ st->pdev->name, state_val, report_val); ++ + sensor_hub_get_feature(st->hsdev, st->power_state.report_id, + st->power_state.index, + sizeof(state_val), &state_val); +@@ -107,6 +108,7 @@ int hid_sensor_power_state(struct hid_sensor_common *st, bool state) + ret = pm_runtime_get_sync(&st->pdev->dev); + else { + pm_runtime_mark_last_busy(&st->pdev->dev); ++ pm_runtime_use_autosuspend(&st->pdev->dev); + ret = pm_runtime_put_autosuspend(&st->pdev->dev); + } + if (ret < 0) { +@@ -175,8 +177,6 @@ int hid_sensor_setup_trigger(struct iio_dev *indio_dev, const char *name, + /* Default to 3 seconds, but can be changed from sysfs */ + pm_runtime_set_autosuspend_delay(&attrb->pdev->dev, + 3000); +- pm_runtime_use_autosuspend(&attrb->pdev->dev); +- + return ret; + error_unreg_trigger: + iio_trigger_unregister(trig); +diff --git a/drivers/iio/imu/adis16480.c b/drivers/iio/imu/adis16480.c +index b94bfd3f595b..7a9c50842d8b 100644 +--- a/drivers/iio/imu/adis16480.c ++++ b/drivers/iio/imu/adis16480.c +@@ -696,7 +696,7 @@ static const struct adis16480_chip_info adis16480_chip_info[] = { + .gyro_max_val = IIO_RAD_TO_DEGREE(22500), + .gyro_max_scale = 450, + .accel_max_val = IIO_M_S_2_TO_G(12500), +- .accel_max_scale = 5, ++ .accel_max_scale = 10, + }, + [ADIS16485] = { + .channels = adis16485_channels, +diff --git a/drivers/input/mouse/trackpoint.c b/drivers/input/mouse/trackpoint.c +index 354d47ecd66a..7e2dc5e56632 100644 +--- a/drivers/input/mouse/trackpoint.c ++++ b/drivers/input/mouse/trackpoint.c +@@ -265,7 +265,8 @@ static int trackpoint_start_protocol(struct psmouse *psmouse, unsigned char *fir + if (ps2_command(&psmouse->ps2dev, param, MAKE_PS2_CMD(0, 2, TP_READ_ID))) + return -1; + +- if (param[0] != TP_MAGIC_IDENT) ++ /* add new TP ID. 
*/ ++ if (!(param[0] & TP_MAGIC_IDENT)) + return -1; + + if (firmware_id) +@@ -380,8 +381,8 @@ int trackpoint_detect(struct psmouse *psmouse, bool set_properties) + return 0; + + if (trackpoint_read(&psmouse->ps2dev, TP_EXT_BTN, &button_info)) { +- psmouse_warn(psmouse, "failed to get extended button data\n"); +- button_info = 0; ++ psmouse_warn(psmouse, "failed to get extended button data, assuming 3 buttons\n"); ++ button_info = 0x33; + } + + psmouse->private = kzalloc(sizeof(struct trackpoint_data), GFP_KERNEL); +diff --git a/drivers/input/mouse/trackpoint.h b/drivers/input/mouse/trackpoint.h +index 5617ed3a7d7a..88055755f82e 100644 +--- a/drivers/input/mouse/trackpoint.h ++++ b/drivers/input/mouse/trackpoint.h +@@ -21,8 +21,9 @@ + #define TP_COMMAND 0xE2 /* Commands start with this */ + + #define TP_READ_ID 0xE1 /* Sent for device identification */ +-#define TP_MAGIC_IDENT 0x01 /* Sent after a TP_READ_ID followed */ ++#define TP_MAGIC_IDENT 0x03 /* Sent after a TP_READ_ID followed */ + /* by the firmware ID */ ++ /* Firmware ID includes 0x1, 0x2, 0x3 */ + + + /* +diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h +index 1f40cdc1b357..18fd4cd6d3c7 100644 +--- a/drivers/input/serio/i8042-x86ia64io.h ++++ b/drivers/input/serio/i8042-x86ia64io.h +@@ -814,6 +814,13 @@ static const struct dmi_system_id __initconst i8042_dmi_kbdreset_table[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "P34"), + }, + }, ++ { ++ /* Gigabyte P57 - Elantech touchpad */ ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "P57"), ++ }, ++ }, + { + /* Schenker XMG C504 - Elantech touchpad */ + .matches = { +diff --git a/drivers/irqchip/irq-atmel-aic-common.c b/drivers/irqchip/irq-atmel-aic-common.c +index 869d01dd4063..af20eac63ad4 100644 +--- a/drivers/irqchip/irq-atmel-aic-common.c ++++ b/drivers/irqchip/irq-atmel-aic-common.c +@@ -148,9 +148,9 @@ void __init aic_common_rtc_irq_fixup(struct device_node *root) + struct 
device_node *np; + void __iomem *regs; + +- np = of_find_compatible_node(root, NULL, "atmel,at91rm9200-rtc"); ++ np = of_find_compatible_node(NULL, NULL, "atmel,at91rm9200-rtc"); + if (!np) +- np = of_find_compatible_node(root, NULL, ++ np = of_find_compatible_node(NULL, NULL, + "atmel,at91sam9x5-rtc"); + + if (!np) +@@ -202,7 +202,6 @@ void __init aic_common_irq_fixup(const struct of_device_id *matches) + return; + + match = of_match_node(matches, root); +- of_node_put(root); + + if (match) { + void (*fixup)(struct device_node *) = match->data; +diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c +index 269c2354c431..e1d71574bdb5 100644 +--- a/drivers/irqchip/irq-mips-gic.c ++++ b/drivers/irqchip/irq-mips-gic.c +@@ -861,8 +861,11 @@ static int __init gic_of_init(struct device_node *node, + gic_len = resource_size(&res); + } + +- if (mips_cm_present()) ++ if (mips_cm_present()) { + write_gcr_gic_base(gic_base | CM_GCR_GIC_BASE_GICEN_MSK); ++ /* Ensure GIC region is enabled before trying to access it */ ++ __sync(); ++ } + gic_present = true; + + __gic_init(gic_base, gic_len, cpu_vec, 0, node); +diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h +index 04f7bc28ef83..dfdd1908641c 100644 +--- a/drivers/md/bcache/bcache.h ++++ b/drivers/md/bcache/bcache.h +@@ -348,6 +348,7 @@ struct cached_dev { + /* Limit number of writeback bios in flight */ + struct semaphore in_flight; + struct task_struct *writeback_thread; ++ struct workqueue_struct *writeback_write_wq; + + struct keybuf writeback_keys; + +diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c +index a7a03a21d78a..8e5666ac8a6a 100644 +--- a/drivers/md/bcache/super.c ++++ b/drivers/md/bcache/super.c +@@ -1054,7 +1054,7 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c) + } + + if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) { +- bch_sectors_dirty_init(dc); ++ bch_sectors_dirty_init(&dc->disk); + atomic_set(&dc->has_dirty, 1); + 
atomic_inc(&dc->count); + bch_writeback_queue(dc); +@@ -1087,6 +1087,8 @@ static void cached_dev_free(struct closure *cl) + cancel_delayed_work_sync(&dc->writeback_rate_update); + if (!IS_ERR_OR_NULL(dc->writeback_thread)) + kthread_stop(dc->writeback_thread); ++ if (dc->writeback_write_wq) ++ destroy_workqueue(dc->writeback_write_wq); + + mutex_lock(&bch_register_lock); + +@@ -1258,6 +1260,7 @@ static int flash_dev_run(struct cache_set *c, struct uuid_entry *u) + goto err; + + bcache_device_attach(d, c, u - c->uuids); ++ bch_sectors_dirty_init(d); + bch_flash_dev_request_init(d); + add_disk(d->disk); + +@@ -1996,6 +1999,8 @@ static ssize_t register_bcache(struct kobject *k, struct kobj_attribute *attr, + else + err = "device busy"; + mutex_unlock(&bch_register_lock); ++ if (!IS_ERR(bdev)) ++ bdput(bdev); + if (attr == &ksysfs_register_quiet) + goto out; + } +diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c +index b3ff57d61dde..4fbb5532f24c 100644 +--- a/drivers/md/bcache/sysfs.c ++++ b/drivers/md/bcache/sysfs.c +@@ -191,7 +191,7 @@ STORE(__cached_dev) + { + struct cached_dev *dc = container_of(kobj, struct cached_dev, + disk.kobj); +- unsigned v = size; ++ ssize_t v = size; + struct cache_set *c; + struct kobj_uevent_env *env; + +@@ -226,7 +226,7 @@ STORE(__cached_dev) + bch_cached_dev_run(dc); + + if (attr == &sysfs_cache_mode) { +- ssize_t v = bch_read_string_list(buf, bch_cache_modes + 1); ++ v = bch_read_string_list(buf, bch_cache_modes + 1); + + if (v < 0) + return v; +diff --git a/drivers/md/bcache/util.c b/drivers/md/bcache/util.c +index db3ae4c2b223..6c18e3ec3e48 100644 +--- a/drivers/md/bcache/util.c ++++ b/drivers/md/bcache/util.c +@@ -73,24 +73,44 @@ STRTO_H(strtouint, unsigned int) + STRTO_H(strtoll, long long) + STRTO_H(strtoull, unsigned long long) + ++/** ++ * bch_hprint() - formats @v to human readable string for sysfs. ++ * ++ * @v - signed 64 bit integer ++ * @buf - the (at least 8 byte) buffer to format the result into. 
++ * ++ * Returns the number of bytes used by format. ++ */ + ssize_t bch_hprint(char *buf, int64_t v) + { + static const char units[] = "?kMGTPEZY"; +- char dec[4] = ""; +- int u, t = 0; +- +- for (u = 0; v >= 1024 || v <= -1024; u++) { +- t = v & ~(~0 << 10); +- v >>= 10; +- } +- +- if (!u) +- return sprintf(buf, "%llu", v); +- +- if (v < 100 && v > -100) +- snprintf(dec, sizeof(dec), ".%i", t / 100); +- +- return sprintf(buf, "%lli%s%c", v, dec, units[u]); ++ int u = 0, t; ++ ++ uint64_t q; ++ ++ if (v < 0) ++ q = -v; ++ else ++ q = v; ++ ++ /* For as long as the number is more than 3 digits, but at least ++ * once, shift right / divide by 1024. Keep the remainder for ++ * a digit after the decimal point. ++ */ ++ do { ++ u++; ++ ++ t = q & ~(~0 << 10); ++ q >>= 10; ++ } while (q >= 1000); ++ ++ if (v < 0) ++ /* '-', up to 3 digits, '.', 1 digit, 1 character, null; ++ * yields 8 bytes. ++ */ ++ return sprintf(buf, "-%llu.%i%c", q, t * 10 / 1024, units[u]); ++ else ++ return sprintf(buf, "%llu.%i%c", q, t * 10 / 1024, units[u]); + } + + ssize_t bch_snprint_string_list(char *buf, size_t size, const char * const list[], +diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c +index 540256a0df4f..b0667b321a3f 100644 +--- a/drivers/md/bcache/writeback.c ++++ b/drivers/md/bcache/writeback.c +@@ -21,7 +21,8 @@ + static void __update_writeback_rate(struct cached_dev *dc) + { + struct cache_set *c = dc->disk.c; +- uint64_t cache_sectors = c->nbuckets * c->sb.bucket_size; ++ uint64_t cache_sectors = c->nbuckets * c->sb.bucket_size - ++ bcache_flash_devs_sectors_dirty(c); + uint64_t cache_dirty_target = + div_u64(cache_sectors * dc->writeback_percent, 100); + +@@ -190,7 +191,7 @@ static void write_dirty(struct closure *cl) + + closure_bio_submit(&io->bio, cl, &io->dc->disk); + +- continue_at(cl, write_dirty_finish, system_wq); ++ continue_at(cl, write_dirty_finish, io->dc->writeback_write_wq); + } + + static void read_dirty_endio(struct bio *bio, int 
error) +@@ -210,7 +211,7 @@ static void read_dirty_submit(struct closure *cl) + + closure_bio_submit(&io->bio, cl, &io->dc->disk); + +- continue_at(cl, write_dirty, system_wq); ++ continue_at(cl, write_dirty, io->dc->writeback_write_wq); + } + + static void read_dirty(struct cached_dev *dc) +@@ -488,17 +489,17 @@ static int sectors_dirty_init_fn(struct btree_op *_op, struct btree *b, + return MAP_CONTINUE; + } + +-void bch_sectors_dirty_init(struct cached_dev *dc) ++void bch_sectors_dirty_init(struct bcache_device *d) + { + struct sectors_dirty_init op; + + bch_btree_op_init(&op.op, -1); +- op.inode = dc->disk.id; ++ op.inode = d->id; + +- bch_btree_map_keys(&op.op, dc->disk.c, &KEY(op.inode, 0, 0), ++ bch_btree_map_keys(&op.op, d->c, &KEY(op.inode, 0, 0), + sectors_dirty_init_fn, 0); + +- dc->disk.sectors_dirty_last = bcache_dev_sectors_dirty(&dc->disk); ++ d->sectors_dirty_last = bcache_dev_sectors_dirty(d); + } + + void bch_cached_dev_writeback_init(struct cached_dev *dc) +@@ -522,6 +523,11 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc) + + int bch_cached_dev_writeback_start(struct cached_dev *dc) + { ++ dc->writeback_write_wq = alloc_workqueue("bcache_writeback_wq", ++ WQ_MEM_RECLAIM, 0); ++ if (!dc->writeback_write_wq) ++ return -ENOMEM; ++ + dc->writeback_thread = kthread_create(bch_writeback_thread, dc, + "bcache_writeback"); + if (IS_ERR(dc->writeback_thread)) +diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h +index 073a042aed24..daec4fd782ea 100644 +--- a/drivers/md/bcache/writeback.h ++++ b/drivers/md/bcache/writeback.h +@@ -14,6 +14,25 @@ static inline uint64_t bcache_dev_sectors_dirty(struct bcache_device *d) + return ret; + } + ++static inline uint64_t bcache_flash_devs_sectors_dirty(struct cache_set *c) ++{ ++ uint64_t i, ret = 0; ++ ++ mutex_lock(&bch_register_lock); ++ ++ for (i = 0; i < c->nr_uuids; i++) { ++ struct bcache_device *d = c->devices[i]; ++ ++ if (!d || !UUID_FLASH_ONLY(&c->uuids[i])) ++ continue; 
++ ret += bcache_dev_sectors_dirty(d); ++ } ++ ++ mutex_unlock(&bch_register_lock); ++ ++ return ret; ++} ++ + static inline unsigned offset_to_stripe(struct bcache_device *d, + uint64_t offset) + { +@@ -85,7 +104,7 @@ static inline void bch_writeback_add(struct cached_dev *dc) + + void bcache_dev_sectors_dirty_add(struct cache_set *, unsigned, uint64_t, int); + +-void bch_sectors_dirty_init(struct cached_dev *dc); ++void bch_sectors_dirty_init(struct bcache_device *); + void bch_cached_dev_writeback_init(struct cached_dev *); + int bch_cached_dev_writeback_start(struct cached_dev *); + +diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c +index a7621a258936..7078447c8cd7 100644 +--- a/drivers/md/bitmap.c ++++ b/drivers/md/bitmap.c +@@ -1965,6 +1965,11 @@ int bitmap_resize(struct bitmap *bitmap, sector_t blocks, + long pages; + struct bitmap_page *new_bp; + ++ if (bitmap->storage.file && !init) { ++ pr_info("md: cannot resize file-based bitmap\n"); ++ return -EINVAL; ++ } ++ + if (chunksize == 0) { + /* If there is enough space, leave the chunk size unchanged, + * else increase by factor of two until there is enough space. 
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c +index 3e59b288b8a8..618e4e2b4207 100644 +--- a/drivers/media/usb/uvc/uvc_ctrl.c ++++ b/drivers/media/usb/uvc/uvc_ctrl.c +@@ -2001,6 +2001,13 @@ int uvc_ctrl_add_mapping(struct uvc_video_chain *chain, + goto done; + } + ++ /* Validate the user-provided bit-size and offset */ ++ if (mapping->size > 32 || ++ mapping->offset + mapping->size > ctrl->info.size * 8) { ++ ret = -EINVAL; ++ goto done; ++ } ++ + list_for_each_entry(map, &ctrl->info.mappings, list) { + if (mapping->id == map->id) { + uvc_trace(UVC_TRACE_CONTROL, "Can't add mapping '%s', " +diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c +index 4b777be714a4..4f002d0bebb1 100644 +--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c ++++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c +@@ -750,7 +750,8 @@ static int put_v4l2_event32(struct v4l2_event *kp, struct v4l2_event32 __user *u + copy_to_user(&up->u, &kp->u, sizeof(kp->u)) || + put_user(kp->pending, &up->pending) || + put_user(kp->sequence, &up->sequence) || +- compat_put_timespec(&kp->timestamp, &up->timestamp) || ++ put_user(kp->timestamp.tv_sec, &up->timestamp.tv_sec) || ++ put_user(kp->timestamp.tv_nsec, &up->timestamp.tv_nsec) || + put_user(kp->id, &up->id) || + copy_to_user(up->reserved, kp->reserved, 8 * sizeof(__u32))) + return -EFAULT; +diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c +index 4ee080d49bc0..3ea651afa63d 100644 +--- a/drivers/net/ethernet/freescale/gianfar.c ++++ b/drivers/net/ethernet/freescale/gianfar.c +@@ -3512,7 +3512,7 @@ static noinline void gfar_update_link_state(struct gfar_private *priv) + u32 tempval1 = gfar_read(®s->maccfg1); + u32 tempval = gfar_read(®s->maccfg2); + u32 ecntrl = gfar_read(®s->ecntrl); +- u32 tx_flow_oldval = (tempval & MACCFG1_TX_FLOW); ++ u32 tx_flow_oldval = (tempval1 & MACCFG1_TX_FLOW); + + if (phydev->duplex != 
priv->oldduplex) { + if (!(phydev->duplex)) +diff --git a/drivers/net/ethernet/qlogic/qlge/qlge_dbg.c b/drivers/net/ethernet/qlogic/qlge/qlge_dbg.c +index 829be21f97b2..be258d90de9e 100644 +--- a/drivers/net/ethernet/qlogic/qlge/qlge_dbg.c ++++ b/drivers/net/ethernet/qlogic/qlge/qlge_dbg.c +@@ -724,7 +724,7 @@ static void ql_build_coredump_seg_header( + seg_hdr->cookie = MPI_COREDUMP_COOKIE; + seg_hdr->segNum = seg_number; + seg_hdr->segSize = seg_size; +- memcpy(seg_hdr->description, desc, (sizeof(seg_hdr->description)) - 1); ++ strncpy(seg_hdr->description, desc, (sizeof(seg_hdr->description)) - 1); + } + + /* +diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c +index 34a59e79a33c..480c9366d6b6 100644 +--- a/drivers/net/usb/qmi_wwan.c ++++ b/drivers/net/usb/qmi_wwan.c +@@ -750,6 +750,7 @@ static const struct usb_device_id products[] = { + {QMI_FIXED_INTF(0x19d2, 0x1428, 2)}, /* Telewell TW-LTE 4G v2 */ + {QMI_FIXED_INTF(0x19d2, 0x2002, 4)}, /* ZTE (Vodafone) K3765-Z */ + {QMI_FIXED_INTF(0x2001, 0x7e19, 4)}, /* D-Link DWM-221 B1 */ ++ {QMI_FIXED_INTF(0x2001, 0x7e35, 4)}, /* D-Link DWM-222 */ + {QMI_FIXED_INTF(0x0f3d, 0x68a2, 8)}, /* Sierra Wireless MC7700 */ + {QMI_FIXED_INTF(0x114f, 0x68a2, 8)}, /* Sierra Wireless MC7750 */ + {QMI_FIXED_INTF(0x1199, 0x68a2, 8)}, /* Sierra Wireless MC7710 in QMI mode */ +diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c +index c0e454bb6a8d..e0e23470a380 100644 +--- a/drivers/net/wireless/ath/ath10k/core.c ++++ b/drivers/net/wireless/ath/ath10k/core.c +@@ -1040,6 +1040,12 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode) + goto err_wmi_detach; + } + ++ /* If firmware indicates Full Rx Reorder support it must be used in a ++ * slightly different manner. Let HTT code know. 
++ */ ++ ar->htt.rx_ring.in_ord_rx = !!(test_bit(WMI_SERVICE_RX_FULL_REORDER, ++ ar->wmi.svc_map)); ++ + status = ath10k_htt_rx_alloc(&ar->htt); + if (status) { + ath10k_err(ar, "failed to alloc htt rx: %d\n", status); +@@ -1104,12 +1110,6 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode) + goto err_hif_stop; + } + +- /* If firmware indicates Full Rx Reorder support it must be used in a +- * slightly different manner. Let HTT code know. +- */ +- ar->htt.rx_ring.in_ord_rx = !!(test_bit(WMI_SERVICE_RX_FULL_REORDER, +- ar->wmi.svc_map)); +- + status = ath10k_htt_rx_ring_refill(ar); + if (status) { + ath10k_err(ar, "failed to refill htt rx ring: %d\n", status); +diff --git a/drivers/net/wireless/p54/fwio.c b/drivers/net/wireless/p54/fwio.c +index 275408eaf95e..8a11dab8f4b3 100644 +--- a/drivers/net/wireless/p54/fwio.c ++++ b/drivers/net/wireless/p54/fwio.c +@@ -489,7 +489,7 @@ int p54_scan(struct p54_common *priv, u16 mode, u16 dwell) + + entry += sizeof(__le16); + chan->pa_points_per_curve = 8; +- memset(chan->curve_data, 0, sizeof(*chan->curve_data)); ++ memset(chan->curve_data, 0, sizeof(chan->curve_data)); + memcpy(chan->curve_data, entry, + sizeof(struct p54_pa_curve_data_sample) * + min((u8)8, curve_data->points_per_channel)); +diff --git a/drivers/net/wireless/ti/wl1251/main.c b/drivers/net/wireless/ti/wl1251/main.c +index 5d54d16a59e7..040bf3c66958 100644 +--- a/drivers/net/wireless/ti/wl1251/main.c ++++ b/drivers/net/wireless/ti/wl1251/main.c +@@ -1571,6 +1571,7 @@ struct ieee80211_hw *wl1251_alloc_hw(void) + + wl->state = WL1251_STATE_OFF; + mutex_init(&wl->mutex); ++ spin_lock_init(&wl->wl_lock); + + wl->tx_mgmt_frm_rate = DEFAULT_HW_GEN_TX_RATE; + wl->tx_mgmt_frm_mod = DEFAULT_HW_GEN_MODULATION_TYPE; +diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c +index bf89754fe973..308a95ead432 100644 +--- a/drivers/of/fdt.c ++++ b/drivers/of/fdt.c +@@ -413,7 +413,7 @@ static void __unflatten_device_tree(void *blob, + /* Allocate memory for 
the expanded device tree */ + mem = dt_alloc(size + 4, __alignof__(struct device_node)); + if (!mem) +- return NULL; ++ return; + + memset(mem, 0, size); + +diff --git a/drivers/parisc/dino.c b/drivers/parisc/dino.c +index 7b0ca1551d7b..005ea632ba53 100644 +--- a/drivers/parisc/dino.c ++++ b/drivers/parisc/dino.c +@@ -954,7 +954,7 @@ static int __init dino_probe(struct parisc_device *dev) + + dino_dev->hba.dev = dev; + dino_dev->hba.base_addr = ioremap_nocache(hpa, 4096); +- dino_dev->hba.lmmio_space_offset = 0; /* CPU addrs == bus addrs */ ++ dino_dev->hba.lmmio_space_offset = PCI_F_EXTEND; + spin_lock_init(&dino_dev->dinosaur_pen); + dino_dev->hba.iommu = ccio_get_iommu(dev); + +diff --git a/drivers/pci/hotplug/shpchp_hpc.c b/drivers/pci/hotplug/shpchp_hpc.c +index 7d223e9080ef..77dddee2753a 100644 +--- a/drivers/pci/hotplug/shpchp_hpc.c ++++ b/drivers/pci/hotplug/shpchp_hpc.c +@@ -1062,6 +1062,8 @@ int shpc_init(struct controller *ctrl, struct pci_dev *pdev) + if (rc) { + ctrl_info(ctrl, "Can't get msi for the hotplug controller\n"); + ctrl_info(ctrl, "Use INTx for the hotplug controller\n"); ++ } else { ++ pci_set_master(pdev); + } + + rc = request_irq(ctrl->pci_dev->irq, shpc_isr, IRQF_SHARED, +diff --git a/drivers/s390/scsi/zfcp_dbf.c b/drivers/s390/scsi/zfcp_dbf.c +index 5d7fbe4e907e..296889dc193f 100644 +--- a/drivers/s390/scsi/zfcp_dbf.c ++++ b/drivers/s390/scsi/zfcp_dbf.c +@@ -418,8 +418,8 @@ void zfcp_dbf_scsi(char *tag, struct scsi_cmnd *sc, struct zfcp_fsf_req *fsf) + rec->scsi_retries = sc->retries; + rec->scsi_allowed = sc->allowed; + rec->scsi_id = sc->device->id; +- /* struct zfcp_dbf_scsi needs to be updated to handle 64bit LUNs */ + rec->scsi_lun = (u32)sc->device->lun; ++ rec->scsi_lun_64_hi = (u32)(sc->device->lun >> 32); + rec->host_scribble = (unsigned long)sc->host_scribble; + + memcpy(rec->scsi_opcode, sc->cmnd, +@@ -427,19 +427,32 @@ void zfcp_dbf_scsi(char *tag, struct scsi_cmnd *sc, struct zfcp_fsf_req *fsf) + + if (fsf) { + 
rec->fsf_req_id = fsf->req_id; ++ rec->pl_len = FCP_RESP_WITH_EXT; + fcp_rsp = (struct fcp_resp_with_ext *) + &(fsf->qtcb->bottom.io.fcp_rsp); ++ /* mandatory parts of FCP_RSP IU in this SCSI record */ + memcpy(&rec->fcp_rsp, fcp_rsp, FCP_RESP_WITH_EXT); + if (fcp_rsp->resp.fr_flags & FCP_RSP_LEN_VAL) { + fcp_rsp_info = (struct fcp_resp_rsp_info *) &fcp_rsp[1]; + rec->fcp_rsp_info = fcp_rsp_info->rsp_code; ++ rec->pl_len += be32_to_cpu(fcp_rsp->ext.fr_rsp_len); + } + if (fcp_rsp->resp.fr_flags & FCP_SNS_LEN_VAL) { +- rec->pl_len = min((u16)SCSI_SENSE_BUFFERSIZE, +- (u16)ZFCP_DBF_PAY_MAX_REC); +- zfcp_dbf_pl_write(dbf, sc->sense_buffer, rec->pl_len, +- "fcp_sns", fsf->req_id); ++ rec->pl_len += be32_to_cpu(fcp_rsp->ext.fr_sns_len); + } ++ /* complete FCP_RSP IU in associated PAYload record ++ * but only if there are optional parts ++ */ ++ if (fcp_rsp->resp.fr_flags != 0) ++ zfcp_dbf_pl_write( ++ dbf, fcp_rsp, ++ /* at least one full PAY record ++ * but not beyond hardware response field ++ */ ++ min_t(u16, max_t(u16, rec->pl_len, ++ ZFCP_DBF_PAY_MAX_REC), ++ FSF_FCP_RSP_SIZE), ++ "fcp_riu", fsf->req_id); + } + + debug_event(dbf->scsi, 1, rec, sizeof(*rec)); +diff --git a/drivers/s390/scsi/zfcp_dbf.h b/drivers/s390/scsi/zfcp_dbf.h +index 0be3d48681ae..2039e7510a30 100644 +--- a/drivers/s390/scsi/zfcp_dbf.h ++++ b/drivers/s390/scsi/zfcp_dbf.h +@@ -196,7 +196,7 @@ enum zfcp_dbf_scsi_id { + * @id: unique number of recovery record type + * @tag: identifier string specifying the location of initiation + * @scsi_id: scsi device id +- * @scsi_lun: scsi device logical unit number ++ * @scsi_lun: scsi device logical unit number, low part of 64 bit, old 32 bit + * @scsi_result: scsi result + * @scsi_retries: current retry number of scsi request + * @scsi_allowed: allowed retries +@@ -206,6 +206,7 @@ enum zfcp_dbf_scsi_id { + * @host_scribble: LLD specific data attached to SCSI request + * @pl_len: length of paload stored as zfcp_dbf_pay + * @fsf_rsp: response for fsf request 
++ * @scsi_lun_64_hi: scsi device logical unit number, high part of 64 bit + */ + struct zfcp_dbf_scsi { + u8 id; +@@ -222,6 +223,7 @@ struct zfcp_dbf_scsi { + u64 host_scribble; + u16 pl_len; + struct fcp_resp_with_ext fcp_rsp; ++ u32 scsi_lun_64_hi; + } __packed; + + /** +@@ -291,7 +293,11 @@ void zfcp_dbf_hba_fsf_response(struct zfcp_fsf_req *req) + { + struct fsf_qtcb *qtcb = req->qtcb; + +- if ((qtcb->prefix.prot_status != FSF_PROT_GOOD) && ++ if (unlikely(req->status & (ZFCP_STATUS_FSFREQ_DISMISSED | ++ ZFCP_STATUS_FSFREQ_ERROR))) { ++ zfcp_dbf_hba_fsf_resp("fs_rerr", 3, req); ++ ++ } else if ((qtcb->prefix.prot_status != FSF_PROT_GOOD) && + (qtcb->prefix.prot_status != FSF_PROT_FSF_STATUS_PRESENTED)) { + zfcp_dbf_hba_fsf_resp("fs_perr", 1, req); + +diff --git a/drivers/s390/scsi/zfcp_fc.h b/drivers/s390/scsi/zfcp_fc.h +index df2b541c8287..a2275825186f 100644 +--- a/drivers/s390/scsi/zfcp_fc.h ++++ b/drivers/s390/scsi/zfcp_fc.h +@@ -4,7 +4,7 @@ + * Fibre Channel related definitions and inline functions for the zfcp + * device driver + * +- * Copyright IBM Corp. 2009 ++ * Copyright IBM Corp. 
2009, 2017 + */ + + #ifndef ZFCP_FC_H +@@ -279,6 +279,10 @@ void zfcp_fc_eval_fcp_rsp(struct fcp_resp_with_ext *fcp_rsp, + !(rsp_flags & FCP_SNS_LEN_VAL) && + fcp_rsp->resp.fr_status == SAM_STAT_GOOD) + set_host_byte(scsi, DID_ERROR); ++ } else if (unlikely(rsp_flags & FCP_RESID_OVER)) { ++ /* FCP_DL was not sufficient for SCSI data length */ ++ if (fcp_rsp->resp.fr_status == SAM_STAT_GOOD) ++ set_host_byte(scsi, DID_ERROR); + } + } + +diff --git a/drivers/s390/scsi/zfcp_fsf.c b/drivers/s390/scsi/zfcp_fsf.c +index 21ec5e2f584c..7d77c318cc16 100644 +--- a/drivers/s390/scsi/zfcp_fsf.c ++++ b/drivers/s390/scsi/zfcp_fsf.c +@@ -2246,7 +2246,8 @@ int zfcp_fsf_fcp_cmnd(struct scsi_cmnd *scsi_cmnd) + fcp_cmnd = (struct fcp_cmnd *) &req->qtcb->bottom.io.fcp_cmnd; + zfcp_fc_scsi_to_fcp(fcp_cmnd, scsi_cmnd, 0); + +- if (scsi_prot_sg_count(scsi_cmnd)) { ++ if ((scsi_get_prot_op(scsi_cmnd) != SCSI_PROT_NORMAL) && ++ scsi_prot_sg_count(scsi_cmnd)) { + zfcp_qdio_set_data_div(qdio, &req->qdio_req, + scsi_prot_sg_count(scsi_cmnd)); + retval = zfcp_qdio_sbals_from_sg(qdio, &req->qdio_req, +diff --git a/drivers/s390/scsi/zfcp_scsi.c b/drivers/s390/scsi/zfcp_scsi.c +index 75f4bfc2b98a..6de09147e791 100644 +--- a/drivers/s390/scsi/zfcp_scsi.c ++++ b/drivers/s390/scsi/zfcp_scsi.c +@@ -224,8 +224,10 @@ static int zfcp_task_mgmt_function(struct scsi_cmnd *scpnt, u8 tm_flags) + + zfcp_erp_wait(adapter); + ret = fc_block_scsi_eh(scpnt); +- if (ret) ++ if (ret) { ++ zfcp_dbf_scsi_devreset("fiof", scpnt, tm_flags, NULL); + return ret; ++ } + + if (!(atomic_read(&adapter->status) & + ZFCP_STATUS_COMMON_RUNNING)) { +@@ -233,8 +235,10 @@ static int zfcp_task_mgmt_function(struct scsi_cmnd *scpnt, u8 tm_flags) + return SUCCESS; + } + } +- if (!fsf_req) ++ if (!fsf_req) { ++ zfcp_dbf_scsi_devreset("reqf", scpnt, tm_flags, NULL); + return FAILED; ++ } + + wait_for_completion(&fsf_req->completion); + +diff --git a/drivers/scsi/isci/remote_node_context.c b/drivers/scsi/isci/remote_node_context.c 
+index 1910100638a2..00602abec0ea 100644 +--- a/drivers/scsi/isci/remote_node_context.c ++++ b/drivers/scsi/isci/remote_node_context.c +@@ -66,6 +66,9 @@ const char *rnc_state_name(enum scis_sds_remote_node_context_states state) + { + static const char * const strings[] = RNC_STATES; + ++ if (state >= ARRAY_SIZE(strings)) ++ return "UNKNOWN"; ++ + return strings[state]; + } + #undef C +diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c +index 0e5b3584e918..4da8963315c7 100644 +--- a/drivers/scsi/lpfc/lpfc_els.c ++++ b/drivers/scsi/lpfc/lpfc_els.c +@@ -1068,7 +1068,10 @@ stop_rr_fcf_flogi: + lpfc_sli4_unreg_all_rpis(vport); + } + } +- lpfc_issue_reg_vfi(vport); ++ ++ /* Do not register VFI if the driver aborted FLOGI */ ++ if (!lpfc_error_lost_link(irsp)) ++ lpfc_issue_reg_vfi(vport); + lpfc_nlp_put(ndlp); + goto out; + } +diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c +index a991690167aa..b66a7a6a601d 100644 +--- a/drivers/scsi/megaraid/megaraid_sas_base.c ++++ b/drivers/scsi/megaraid/megaraid_sas_base.c +@@ -1709,9 +1709,12 @@ void megasas_complete_outstanding_ioctls(struct megasas_instance *instance) + if (cmd_fusion->sync_cmd_idx != (u32)ULONG_MAX) { + cmd_mfi = instance->cmd_list[cmd_fusion->sync_cmd_idx]; + if (cmd_mfi->sync_cmd && +- cmd_mfi->frame->hdr.cmd != MFI_CMD_ABORT) ++ (cmd_mfi->frame->hdr.cmd != MFI_CMD_ABORT)) { ++ cmd_mfi->frame->hdr.cmd_status = ++ MFI_STAT_WRONG_STATE; + megasas_complete_cmd(instance, + cmd_mfi, DID_OK); ++ } + } + } + } else { +diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c +index c1b2e86839ae..e9cd3013dcd0 100644 +--- a/drivers/scsi/qla2xxx/qla_attr.c ++++ b/drivers/scsi/qla2xxx/qla_attr.c +@@ -404,6 +404,8 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj, + return -EINVAL; + if (start > ha->optrom_size) + return -EINVAL; ++ if (size > ha->optrom_size - start) ++ size = ha->optrom_size - start; + + 
mutex_lock(&ha->optrom_mutex); + switch (val) { +@@ -429,8 +431,7 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj, + } + + ha->optrom_region_start = start; +- ha->optrom_region_size = start + size > ha->optrom_size ? +- ha->optrom_size - start : size; ++ ha->optrom_region_size = start + size; + + ha->optrom_state = QLA_SREADING; + ha->optrom_buffer = vmalloc(ha->optrom_region_size); +@@ -503,8 +504,7 @@ qla2x00_sysfs_write_optrom_ctl(struct file *filp, struct kobject *kobj, + } + + ha->optrom_region_start = start; +- ha->optrom_region_size = start + size > ha->optrom_size ? +- ha->optrom_size - start : size; ++ ha->optrom_region_size = start + size; + + ha->optrom_state = QLA_SWRITING; + ha->optrom_buffer = vmalloc(ha->optrom_region_size); +diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c +index c94191369452..fbdba7925723 100644 +--- a/drivers/scsi/sg.c ++++ b/drivers/scsi/sg.c +@@ -133,7 +133,7 @@ struct sg_device; /* forward declarations */ + struct sg_fd; + + typedef struct sg_request { /* SG_MAX_QUEUE requests outstanding per file */ +- struct sg_request *nextrp; /* NULL -> tail request (slist) */ ++ struct list_head entry; /* list entry */ + struct sg_fd *parentfp; /* NULL -> not in use */ + Sg_scatter_hold data; /* hold buffer, perhaps scatter list */ + sg_io_hdr_t header; /* scsi command+info, see */ +@@ -153,11 +153,11 @@ typedef struct sg_fd { /* holds the state of a file descriptor */ + struct sg_device *parentdp; /* owning device */ + wait_queue_head_t read_wait; /* queue read until command done */ + rwlock_t rq_list_lock; /* protect access to list in req_arr */ ++ struct mutex f_mutex; /* protect against changes in this fd */ + int timeout; /* defaults to SG_DEFAULT_TIMEOUT */ + int timeout_user; /* defaults to SG_DEFAULT_TIMEOUT_USER */ + Sg_scatter_hold reserve; /* buffer held for this file descriptor */ +- unsigned save_scat_len; /* original length of trunc. scat. 
element */ +- Sg_request *headrp; /* head of request slist, NULL->empty */ ++ struct list_head rq_list; /* head of request list */ + struct fasync_struct *async_qp; /* used by asynchronous notification */ + Sg_request req_arr[SG_MAX_QUEUE]; /* used as singly-linked list */ + char low_dma; /* as in parent but possibly overridden to 1 */ +@@ -166,6 +166,7 @@ typedef struct sg_fd { /* holds the state of a file descriptor */ + unsigned char next_cmd_len; /* 0: automatic, >0: use on next write() */ + char keep_orphan; /* 0 -> drop orphan (def), 1 -> keep for read() */ + char mmap_called; /* 0 -> mmap() never called on this fd */ ++ char res_in_use; /* 1 -> 'reserve' array in use */ + struct kref f_ref; + struct execute_work ew; + } Sg_fd; +@@ -209,7 +210,6 @@ static void sg_remove_sfp(struct kref *); + static Sg_request *sg_get_rq_mark(Sg_fd * sfp, int pack_id); + static Sg_request *sg_add_request(Sg_fd * sfp); + static int sg_remove_request(Sg_fd * sfp, Sg_request * srp); +-static int sg_res_in_use(Sg_fd * sfp); + static Sg_device *sg_get_dev(int dev); + static void sg_device_destroy(struct kref *kref); + +@@ -625,6 +625,7 @@ sg_write(struct file *filp, const char __user *buf, size_t count, loff_t * ppos) + } + buf += SZ_SG_HEADER; + __get_user(opcode, buf); ++ mutex_lock(&sfp->f_mutex); + if (sfp->next_cmd_len > 0) { + cmd_size = sfp->next_cmd_len; + sfp->next_cmd_len = 0; /* reset so only this write() effected */ +@@ -633,6 +634,7 @@ sg_write(struct file *filp, const char __user *buf, size_t count, loff_t * ppos) + if ((opcode >= 0xc0) && old_hdr.twelve_byte) + cmd_size = 12; + } ++ mutex_unlock(&sfp->f_mutex); + SCSI_LOG_TIMEOUT(4, sg_printk(KERN_INFO, sdp, + "sg_write: scsi opcode=0x%02x, cmd_size=%d\n", (int) opcode, cmd_size)); + /* Determine buffer size. 
*/ +@@ -732,7 +734,7 @@ sg_new_write(Sg_fd *sfp, struct file *file, const char __user *buf, + sg_remove_request(sfp, srp); + return -EINVAL; /* either MMAP_IO or DIRECT_IO (not both) */ + } +- if (sg_res_in_use(sfp)) { ++ if (sfp->res_in_use) { + sg_remove_request(sfp, srp); + return -EBUSY; /* reserve buffer already being used */ + } +@@ -831,6 +833,39 @@ static int max_sectors_bytes(struct request_queue *q) + return max_sectors << 9; + } + ++static void ++sg_fill_request_table(Sg_fd *sfp, sg_req_info_t *rinfo) ++{ ++ Sg_request *srp; ++ int val; ++ unsigned int ms; ++ ++ val = 0; ++ list_for_each_entry(srp, &sfp->rq_list, entry) { ++ if (val > SG_MAX_QUEUE) ++ break; ++ rinfo[val].req_state = srp->done + 1; ++ rinfo[val].problem = ++ srp->header.masked_status & ++ srp->header.host_status & ++ srp->header.driver_status; ++ if (srp->done) ++ rinfo[val].duration = ++ srp->header.duration; ++ else { ++ ms = jiffies_to_msecs(jiffies); ++ rinfo[val].duration = ++ (ms > srp->header.duration) ? 
++ (ms - srp->header.duration) : 0; ++ } ++ rinfo[val].orphan = srp->orphan; ++ rinfo[val].sg_io_owned = srp->sg_io_owned; ++ rinfo[val].pack_id = srp->header.pack_id; ++ rinfo[val].usr_ptr = srp->header.usr_ptr; ++ val++; ++ } ++} ++ + static long + sg_ioctl(struct file *filp, unsigned int cmd_in, unsigned long arg) + { +@@ -896,7 +931,7 @@ sg_ioctl(struct file *filp, unsigned int cmd_in, unsigned long arg) + return result; + if (val) { + sfp->low_dma = 1; +- if ((0 == sfp->low_dma) && (0 == sg_res_in_use(sfp))) { ++ if ((0 == sfp->low_dma) && !sfp->res_in_use) { + val = (int) sfp->reserve.bufflen; + sg_remove_scat(sfp, &sfp->reserve); + sg_build_reserve(sfp, val); +@@ -942,7 +977,7 @@ sg_ioctl(struct file *filp, unsigned int cmd_in, unsigned long arg) + if (!access_ok(VERIFY_WRITE, ip, sizeof (int))) + return -EFAULT; + read_lock_irqsave(&sfp->rq_list_lock, iflags); +- for (srp = sfp->headrp; srp; srp = srp->nextrp) { ++ list_for_each_entry(srp, &sfp->rq_list, entry) { + if ((1 == srp->done) && (!srp->sg_io_owned)) { + read_unlock_irqrestore(&sfp->rq_list_lock, + iflags); +@@ -955,7 +990,8 @@ sg_ioctl(struct file *filp, unsigned int cmd_in, unsigned long arg) + return 0; + case SG_GET_NUM_WAITING: + read_lock_irqsave(&sfp->rq_list_lock, iflags); +- for (val = 0, srp = sfp->headrp; srp; srp = srp->nextrp) { ++ val = 0; ++ list_for_each_entry(srp, &sfp->rq_list, entry) { + if ((1 == srp->done) && (!srp->sg_io_owned)) + ++val; + } +@@ -971,12 +1007,18 @@ sg_ioctl(struct file *filp, unsigned int cmd_in, unsigned long arg) + return -EINVAL; + val = min_t(int, val, + max_sectors_bytes(sdp->device->request_queue)); ++ mutex_lock(&sfp->f_mutex); + if (val != sfp->reserve.bufflen) { +- if (sg_res_in_use(sfp) || sfp->mmap_called) ++ if (sfp->mmap_called || ++ sfp->res_in_use) { ++ mutex_unlock(&sfp->f_mutex); + return -EBUSY; ++ } ++ + sg_remove_scat(sfp, &sfp->reserve); + sg_build_reserve(sfp, val); + } ++ mutex_unlock(&sfp->f_mutex); + return 0; + case 
SG_GET_RESERVED_SIZE: + val = min_t(int, sfp->reserve.bufflen, +@@ -1017,42 +1059,15 @@ sg_ioctl(struct file *filp, unsigned int cmd_in, unsigned long arg) + return -EFAULT; + else { + sg_req_info_t *rinfo; +- unsigned int ms; + +- rinfo = kmalloc(SZ_SG_REQ_INFO * SG_MAX_QUEUE, +- GFP_KERNEL); ++ rinfo = kzalloc(SZ_SG_REQ_INFO * SG_MAX_QUEUE, ++ GFP_KERNEL); + if (!rinfo) + return -ENOMEM; + read_lock_irqsave(&sfp->rq_list_lock, iflags); +- for (srp = sfp->headrp, val = 0; val < SG_MAX_QUEUE; +- ++val, srp = srp ? srp->nextrp : srp) { +- memset(&rinfo[val], 0, SZ_SG_REQ_INFO); +- if (srp) { +- rinfo[val].req_state = srp->done + 1; +- rinfo[val].problem = +- srp->header.masked_status & +- srp->header.host_status & +- srp->header.driver_status; +- if (srp->done) +- rinfo[val].duration = +- srp->header.duration; +- else { +- ms = jiffies_to_msecs(jiffies); +- rinfo[val].duration = +- (ms > srp->header.duration) ? +- (ms - srp->header.duration) : 0; +- } +- rinfo[val].orphan = srp->orphan; +- rinfo[val].sg_io_owned = +- srp->sg_io_owned; +- rinfo[val].pack_id = +- srp->header.pack_id; +- rinfo[val].usr_ptr = +- srp->header.usr_ptr; +- } +- } ++ sg_fill_request_table(sfp, rinfo); + read_unlock_irqrestore(&sfp->rq_list_lock, iflags); +- result = __copy_to_user(p, rinfo, ++ result = __copy_to_user(p, rinfo, + SZ_SG_REQ_INFO * SG_MAX_QUEUE); + result = result ? 
-EFAULT : 0; + kfree(rinfo); +@@ -1158,7 +1173,7 @@ sg_poll(struct file *filp, poll_table * wait) + return POLLERR; + poll_wait(filp, &sfp->read_wait, wait); + read_lock_irqsave(&sfp->rq_list_lock, iflags); +- for (srp = sfp->headrp; srp; srp = srp->nextrp) { ++ list_for_each_entry(srp, &sfp->rq_list, entry) { + /* if any read waiting, flag it */ + if ((0 == res) && (1 == srp->done) && (!srp->sg_io_owned)) + res = POLLIN | POLLRDNORM; +@@ -1239,6 +1254,7 @@ sg_mmap(struct file *filp, struct vm_area_struct *vma) + unsigned long req_sz, len, sa; + Sg_scatter_hold *rsv_schp; + int k, length; ++ int ret = 0; + + if ((!filp) || (!vma) || (!(sfp = (Sg_fd *) filp->private_data))) + return -ENXIO; +@@ -1249,8 +1265,11 @@ sg_mmap(struct file *filp, struct vm_area_struct *vma) + if (vma->vm_pgoff) + return -EINVAL; /* want no offset */ + rsv_schp = &sfp->reserve; +- if (req_sz > rsv_schp->bufflen) +- return -ENOMEM; /* cannot map more than reserved buffer */ ++ mutex_lock(&sfp->f_mutex); ++ if (req_sz > rsv_schp->bufflen) { ++ ret = -ENOMEM; /* cannot map more than reserved buffer */ ++ goto out; ++ } + + sa = vma->vm_start; + length = 1 << (PAGE_SHIFT + rsv_schp->page_order); +@@ -1264,7 +1283,9 @@ sg_mmap(struct file *filp, struct vm_area_struct *vma) + vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP; + vma->vm_private_data = sfp; + vma->vm_ops = &sg_mmap_vm_ops; +- return 0; ++out: ++ mutex_unlock(&sfp->f_mutex); ++ return ret; + } + + static void +@@ -1731,13 +1752,25 @@ sg_start_req(Sg_request *srp, unsigned char *cmd) + md = &map_data; + + if (md) { +- if (!sg_res_in_use(sfp) && dxfer_len <= rsv_schp->bufflen) ++ mutex_lock(&sfp->f_mutex); ++ if (dxfer_len <= rsv_schp->bufflen && ++ !sfp->res_in_use) { ++ sfp->res_in_use = 1; + sg_link_reserve(sfp, srp, dxfer_len); +- else { ++ } else if (hp->flags & SG_FLAG_MMAP_IO) { ++ res = -EBUSY; /* sfp->res_in_use == 1 */ ++ if (dxfer_len > rsv_schp->bufflen) ++ res = -ENOMEM; ++ mutex_unlock(&sfp->f_mutex); ++ return res; 
++ } else { + res = sg_build_indirect(req_schp, sfp, dxfer_len); +- if (res) ++ if (res) { ++ mutex_unlock(&sfp->f_mutex); + return res; ++ } + } ++ mutex_unlock(&sfp->f_mutex); + + md->pages = req_schp->pages; + md->page_order = req_schp->page_order; +@@ -2026,8 +2059,9 @@ sg_unlink_reserve(Sg_fd * sfp, Sg_request * srp) + req_schp->pages = NULL; + req_schp->page_order = 0; + req_schp->sglist_len = 0; +- sfp->save_scat_len = 0; + srp->res_used = 0; ++ /* Called without mutex lock to avoid deadlock */ ++ sfp->res_in_use = 0; + } + + static Sg_request * +@@ -2037,7 +2071,7 @@ sg_get_rq_mark(Sg_fd * sfp, int pack_id) + unsigned long iflags; + + write_lock_irqsave(&sfp->rq_list_lock, iflags); +- for (resp = sfp->headrp; resp; resp = resp->nextrp) { ++ list_for_each_entry(resp, &sfp->rq_list, entry) { + /* look for requests that are ready + not SG_IO owned */ + if ((1 == resp->done) && (!resp->sg_io_owned) && + ((-1 == pack_id) || (resp->header.pack_id == pack_id))) { +@@ -2055,70 +2089,45 @@ sg_add_request(Sg_fd * sfp) + { + int k; + unsigned long iflags; +- Sg_request *resp; + Sg_request *rp = sfp->req_arr; + + write_lock_irqsave(&sfp->rq_list_lock, iflags); +- resp = sfp->headrp; +- if (!resp) { +- memset(rp, 0, sizeof (Sg_request)); +- rp->parentfp = sfp; +- resp = rp; +- sfp->headrp = resp; +- } else { +- if (0 == sfp->cmd_q) +- resp = NULL; /* command queuing disallowed */ +- else { +- for (k = 0; k < SG_MAX_QUEUE; ++k, ++rp) { +- if (!rp->parentfp) +- break; +- } +- if (k < SG_MAX_QUEUE) { +- memset(rp, 0, sizeof (Sg_request)); +- rp->parentfp = sfp; +- while (resp->nextrp) +- resp = resp->nextrp; +- resp->nextrp = rp; +- resp = rp; +- } else +- resp = NULL; ++ if (!list_empty(&sfp->rq_list)) { ++ if (!sfp->cmd_q) ++ goto out_unlock; ++ ++ for (k = 0; k < SG_MAX_QUEUE; ++k, ++rp) { ++ if (!rp->parentfp) ++ break; + } ++ if (k >= SG_MAX_QUEUE) ++ goto out_unlock; + } +- if (resp) { +- resp->nextrp = NULL; +- resp->header.duration = jiffies_to_msecs(jiffies); +- } 
++ memset(rp, 0, sizeof (Sg_request)); ++ rp->parentfp = sfp; ++ rp->header.duration = jiffies_to_msecs(jiffies); ++ list_add_tail(&rp->entry, &sfp->rq_list); + write_unlock_irqrestore(&sfp->rq_list_lock, iflags); +- return resp; ++ return rp; ++out_unlock: ++ write_unlock_irqrestore(&sfp->rq_list_lock, iflags); ++ return NULL; + } + + /* Return of 1 for found; 0 for not found */ + static int + sg_remove_request(Sg_fd * sfp, Sg_request * srp) + { +- Sg_request *prev_rp; +- Sg_request *rp; + unsigned long iflags; + int res = 0; + +- if ((!sfp) || (!srp) || (!sfp->headrp)) ++ if (!sfp || !srp || list_empty(&sfp->rq_list)) + return res; + write_lock_irqsave(&sfp->rq_list_lock, iflags); +- prev_rp = sfp->headrp; +- if (srp == prev_rp) { +- sfp->headrp = prev_rp->nextrp; +- prev_rp->parentfp = NULL; ++ if (!list_empty(&srp->entry)) { ++ list_del(&srp->entry); ++ srp->parentfp = NULL; + res = 1; +- } else { +- while ((rp = prev_rp->nextrp)) { +- if (srp == rp) { +- prev_rp->nextrp = rp->nextrp; +- rp->parentfp = NULL; +- res = 1; +- break; +- } +- prev_rp = rp; +- } + } + write_unlock_irqrestore(&sfp->rq_list_lock, iflags); + return res; +@@ -2137,8 +2146,9 @@ sg_add_sfp(Sg_device * sdp) + + init_waitqueue_head(&sfp->read_wait); + rwlock_init(&sfp->rq_list_lock); +- ++ INIT_LIST_HEAD(&sfp->rq_list); + kref_init(&sfp->f_ref); ++ mutex_init(&sfp->f_mutex); + sfp->timeout = SG_DEFAULT_TIMEOUT; + sfp->timeout_user = SG_DEFAULT_TIMEOUT_USER; + sfp->force_packid = SG_DEF_FORCE_PACK_ID; +@@ -2177,10 +2187,13 @@ sg_remove_sfp_usercontext(struct work_struct *work) + { + struct sg_fd *sfp = container_of(work, struct sg_fd, ew.work); + struct sg_device *sdp = sfp->parentdp; ++ Sg_request *srp; + + /* Cleanup any responses which were never read(). 
*/ +- while (sfp->headrp) +- sg_finish_rem_req(sfp->headrp); ++ while (!list_empty(&sfp->rq_list)) { ++ srp = list_first_entry(&sfp->rq_list, Sg_request, entry); ++ sg_finish_rem_req(srp); ++ } + + if (sfp->reserve.bufflen > 0) { + SCSI_LOG_TIMEOUT(6, sg_printk(KERN_INFO, sdp, +@@ -2214,20 +2227,6 @@ sg_remove_sfp(struct kref *kref) + schedule_work(&sfp->ew.work); + } + +-static int +-sg_res_in_use(Sg_fd * sfp) +-{ +- const Sg_request *srp; +- unsigned long iflags; +- +- read_lock_irqsave(&sfp->rq_list_lock, iflags); +- for (srp = sfp->headrp; srp; srp = srp->nextrp) +- if (srp->res_used) +- break; +- read_unlock_irqrestore(&sfp->rq_list_lock, iflags); +- return srp ? 1 : 0; +-} +- + #ifdef CONFIG_SCSI_PROC_FS + static int + sg_idr_max_id(int id, void *p, void *data) +@@ -2597,7 +2596,7 @@ static int sg_proc_seq_show_devstrs(struct seq_file *s, void *v) + /* must be called while holding sg_index_lock */ + static void sg_proc_debug_helper(struct seq_file *s, Sg_device * sdp) + { +- int k, m, new_interface, blen, usg; ++ int k, new_interface, blen, usg; + Sg_request *srp; + Sg_fd *fp; + const sg_io_hdr_t *hp; +@@ -2617,13 +2616,11 @@ static void sg_proc_debug_helper(struct seq_file *s, Sg_device * sdp) + seq_printf(s, " cmd_q=%d f_packid=%d k_orphan=%d closed=0\n", + (int) fp->cmd_q, (int) fp->force_packid, + (int) fp->keep_orphan); +- for (m = 0, srp = fp->headrp; +- srp != NULL; +- ++m, srp = srp->nextrp) { ++ list_for_each_entry(srp, &fp->rq_list, entry) { + hp = &srp->header; + new_interface = (hp->interface_id == '\0') ? 
0 : 1; + if (srp->res_used) { +- if (new_interface && ++ if (new_interface && + (SG_FLAG_MMAP_IO & hp->flags)) + cp = " mmap>> "; + else +@@ -2654,7 +2651,7 @@ static void sg_proc_debug_helper(struct seq_file *s, Sg_device * sdp) + seq_printf(s, "ms sgat=%d op=0x%02x\n", usg, + (int) srp->data.cmd_opcode); + } +- if (0 == m) ++ if (list_empty(&fp->rq_list)) + seq_puts(s, " No requests active\n"); + read_unlock(&fp->rq_list_lock); + } +diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c +index 6c52d1411a73..51a0cc047b5f 100644 +--- a/drivers/scsi/storvsc_drv.c ++++ b/drivers/scsi/storvsc_drv.c +@@ -1699,6 +1699,8 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd) + ret = storvsc_do_io(dev, cmd_request); + + if (ret == -EAGAIN) { ++ if (payload_sz > sizeof(cmd_request->mpb)) ++ kfree(payload); + /* no more space */ + + if (cmd_request->bounce_sgl_count) +diff --git a/drivers/staging/rtl8188eu/os_dep/usb_intf.c b/drivers/staging/rtl8188eu/os_dep/usb_intf.c +index 4273e34ff3ea..9af6ce2b6782 100644 +--- a/drivers/staging/rtl8188eu/os_dep/usb_intf.c ++++ b/drivers/staging/rtl8188eu/os_dep/usb_intf.c +@@ -50,6 +50,7 @@ static struct usb_device_id rtw_usb_id_tbl[] = { + {USB_DEVICE(0x2001, 0x3311)}, /* DLink GO-USB-N150 REV B1 */ + {USB_DEVICE(0x2357, 0x010c)}, /* TP-Link TL-WN722N v2 */ + {USB_DEVICE(0x0df6, 0x0076)}, /* Sitecom N150 v2 */ ++ {USB_DEVICE(USB_VENDER_ID_REALTEK, 0xffef)}, /* Rosewill RNX-N150NUB */ + {} /* Terminating entry */ + }; + +diff --git a/drivers/staging/rts5208/rtsx_scsi.c b/drivers/staging/rts5208/rtsx_scsi.c +index 8a5d6a8e780f..ba32ac8d1747 100644 +--- a/drivers/staging/rts5208/rtsx_scsi.c ++++ b/drivers/staging/rts5208/rtsx_scsi.c +@@ -414,7 +414,7 @@ void set_sense_data(struct rtsx_chip *chip, unsigned int lun, u8 err_code, + sense->ascq = ascq; + if (sns_key_info0 != 0) { + sense->sns_key_info[0] = SKSV | sns_key_info0; +- sense->sns_key_info[1] = (sns_key_info1 & 0xf0) >> 8; ++ 
sense->sns_key_info[1] = (sns_key_info1 & 0xf0) >> 4; + sense->sns_key_info[2] = sns_key_info1 & 0x0f; + } + } +diff --git a/drivers/tty/tty_buffer.c b/drivers/tty/tty_buffer.c +index aa9fad4f35b9..25c15910af77 100644 +--- a/drivers/tty/tty_buffer.c ++++ b/drivers/tty/tty_buffer.c +@@ -355,6 +355,32 @@ int tty_insert_flip_string_flags(struct tty_port *port, + } + EXPORT_SYMBOL(tty_insert_flip_string_flags); + ++/** ++ * __tty_insert_flip_char - Add one character to the tty buffer ++ * @port: tty port ++ * @ch: character ++ * @flag: flag byte ++ * ++ * Queue a single byte to the tty buffering, with an optional flag. ++ * This is the slow path of tty_insert_flip_char. ++ */ ++int __tty_insert_flip_char(struct tty_port *port, unsigned char ch, char flag) ++{ ++ struct tty_buffer *tb; ++ int flags = (flag == TTY_NORMAL) ? TTYB_NORMAL : 0; ++ ++ if (!__tty_buffer_request_room(port, 1, flags)) ++ return 0; ++ ++ tb = port->buf.tail; ++ if (~tb->flags & TTYB_NORMAL) ++ *flag_buf_ptr(tb, tb->used) = flag; ++ *char_buf_ptr(tb, tb->used++) = ch; ++ ++ return 1; ++} ++EXPORT_SYMBOL(__tty_insert_flip_char); ++ + /** + * tty_schedule_flip - push characters to ldisc + * @port: tty port to push from +diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c +index eb8fdc75843b..a235e9ab932c 100644 +--- a/drivers/usb/core/devio.c ++++ b/drivers/usb/core/devio.c +@@ -519,6 +519,8 @@ static void async_completed(struct urb *urb) + if (as->status < 0 && as->bulk_addr && as->status != -ECONNRESET && + as->status != -ENOENT) + cancel_bulk_urbs(ps, as->bulk_addr); ++ ++ wake_up(&ps->wait); + spin_unlock(&ps->lock); + + if (signr) { +@@ -526,8 +528,6 @@ static void async_completed(struct urb *urb) + put_pid(pid); + put_cred(cred); + } +- +- wake_up(&ps->wait); + } + + static void destroy_async(struct usb_dev_state *ps, struct list_head *list) +diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c +index 574da2b4529c..82806e311202 100644 +--- a/drivers/usb/core/quirks.c 
++++ b/drivers/usb/core/quirks.c +@@ -57,8 +57,9 @@ static const struct usb_device_id usb_quirk_list[] = { + /* Microsoft LifeCam-VX700 v2.0 */ + { USB_DEVICE(0x045e, 0x0770), .driver_info = USB_QUIRK_RESET_RESUME }, + +- /* Logitech HD Pro Webcams C920 and C930e */ ++ /* Logitech HD Pro Webcams C920, C920-C and C930e */ + { USB_DEVICE(0x046d, 0x082d), .driver_info = USB_QUIRK_DELAY_INIT }, ++ { USB_DEVICE(0x046d, 0x0841), .driver_info = USB_QUIRK_DELAY_INIT }, + { USB_DEVICE(0x046d, 0x0843), .driver_info = USB_QUIRK_DELAY_INIT }, + + /* Logitech ConferenceCam CC3000e */ +@@ -217,6 +218,9 @@ static const struct usb_device_id usb_quirk_list[] = { + { USB_DEVICE(0x1a0a, 0x0200), .driver_info = + USB_QUIRK_LINEAR_UFRAME_INTR_BINTERVAL }, + ++ /* Corsair Strafe RGB */ ++ { USB_DEVICE(0x1b1c, 0x1b20), .driver_info = USB_QUIRK_DELAY_INIT }, ++ + /* Acer C120 LED Projector */ + { USB_DEVICE(0x1de1, 0xc102), .driver_info = USB_QUIRK_NO_LPM }, + +diff --git a/drivers/usb/core/usb-acpi.c b/drivers/usb/core/usb-acpi.c +index 2776cfe64c09..ef9cf4a21afe 100644 +--- a/drivers/usb/core/usb-acpi.c ++++ b/drivers/usb/core/usb-acpi.c +@@ -127,6 +127,22 @@ out: + */ + #define USB_ACPI_LOCATION_VALID (1 << 31) + ++static struct acpi_device *usb_acpi_find_port(struct acpi_device *parent, ++ int raw) ++{ ++ struct acpi_device *adev; ++ ++ if (!parent) ++ return NULL; ++ ++ list_for_each_entry(adev, &parent->children, node) { ++ if (acpi_device_adr(adev) == raw) ++ return adev; ++ } ++ ++ return acpi_find_child_device(parent, raw, false); ++} ++ + static struct acpi_device *usb_acpi_find_companion(struct device *dev) + { + struct usb_device *udev; +@@ -174,8 +190,10 @@ static struct acpi_device *usb_acpi_find_companion(struct device *dev) + int raw; + + raw = usb_hcd_find_raw_port_number(hcd, port1); +- adev = acpi_find_child_device(ACPI_COMPANION(&udev->dev), +- raw, false); ++ ++ adev = usb_acpi_find_port(ACPI_COMPANION(&udev->dev), ++ raw); ++ + if (!adev) + return NULL; + } else { 
+@@ -186,7 +204,9 @@ static struct acpi_device *usb_acpi_find_companion(struct device *dev) + return NULL; + + acpi_bus_get_device(parent_handle, &adev); +- adev = acpi_find_child_device(adev, port1, false); ++ ++ adev = usb_acpi_find_port(adev, port1); ++ + if (!adev) + return NULL; + } +diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c +index 03b9a372636f..1fc6f478a02c 100644 +--- a/drivers/usb/host/pci-quirks.c ++++ b/drivers/usb/host/pci-quirks.c +@@ -133,29 +133,30 @@ static int amd_chipset_sb_type_init(struct amd_chipset_info *pinfo) + pinfo->sb_type.gen = AMD_CHIPSET_SB700; + else if (rev >= 0x40 && rev <= 0x4f) + pinfo->sb_type.gen = AMD_CHIPSET_SB800; +- } +- pinfo->smbus_dev = pci_get_device(PCI_VENDOR_ID_AMD, +- 0x145c, NULL); +- if (pinfo->smbus_dev) { +- pinfo->sb_type.gen = AMD_CHIPSET_TAISHAN; + } else { + pinfo->smbus_dev = pci_get_device(PCI_VENDOR_ID_AMD, + PCI_DEVICE_ID_AMD_HUDSON2_SMBUS, NULL); + +- if (!pinfo->smbus_dev) { +- pinfo->sb_type.gen = NOT_AMD_CHIPSET; +- return 0; ++ if (pinfo->smbus_dev) { ++ rev = pinfo->smbus_dev->revision; ++ if (rev >= 0x11 && rev <= 0x14) ++ pinfo->sb_type.gen = AMD_CHIPSET_HUDSON2; ++ else if (rev >= 0x15 && rev <= 0x18) ++ pinfo->sb_type.gen = AMD_CHIPSET_BOLTON; ++ else if (rev >= 0x39 && rev <= 0x3a) ++ pinfo->sb_type.gen = AMD_CHIPSET_YANGTZE; ++ } else { ++ pinfo->smbus_dev = pci_get_device(PCI_VENDOR_ID_AMD, ++ 0x145c, NULL); ++ if (pinfo->smbus_dev) { ++ rev = pinfo->smbus_dev->revision; ++ pinfo->sb_type.gen = AMD_CHIPSET_TAISHAN; ++ } else { ++ pinfo->sb_type.gen = NOT_AMD_CHIPSET; ++ return 0; ++ } + } +- +- rev = pinfo->smbus_dev->revision; +- if (rev >= 0x11 && rev <= 0x14) +- pinfo->sb_type.gen = AMD_CHIPSET_HUDSON2; +- else if (rev >= 0x15 && rev <= 0x18) +- pinfo->sb_type.gen = AMD_CHIPSET_BOLTON; +- else if (rev >= 0x39 && rev <= 0x3a) +- pinfo->sb_type.gen = AMD_CHIPSET_YANGTZE; + } +- + pinfo->sb_type.rev = rev; + return 1; + } +diff --git 
a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c +index f08b35819666..a0fbc4e5a272 100644 +--- a/drivers/usb/serial/option.c ++++ b/drivers/usb/serial/option.c +@@ -2020,6 +2020,7 @@ static const struct usb_device_id option_ids[] = { + { USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x02, 0x01) }, + { USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x00, 0x00) }, + { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7d04, 0xff) }, /* D-Link DWM-158 */ ++ { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7d0e, 0xff) }, /* D-Link DWM-157 C1 */ + { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e19, 0xff), /* D-Link DWM-221 B1 */ + .driver_info = (kernel_ulong_t)&net_intf4_blacklist }, + { USB_DEVICE_INTERFACE_CLASS(0x2001, 0x7e35, 0xff), /* D-Link DWM-222 */ +diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c +index a40b454aea44..4f6a3afc45f4 100644 +--- a/fs/btrfs/super.c ++++ b/fs/btrfs/super.c +@@ -1593,6 +1593,8 @@ static int btrfs_remount(struct super_block *sb, int *flags, char *data) + goto restore; + } + ++ btrfs_qgroup_rescan_resume(fs_info); ++ + if (!fs_info->uuid_root) { + btrfs_info(fs_info, "creating UUID tree"); + ret = btrfs_create_uuid_tree(fs_info); +diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c +index 26a3b389a265..297e05c9e2b0 100644 +--- a/fs/cifs/dir.c ++++ b/fs/cifs/dir.c +@@ -183,15 +183,20 @@ cifs_bp_rename_retry: + } + + /* ++ * Don't allow path components longer than the server max. + * Don't allow the separator character in a path component. + * The VFS will not allow "/", but "\" is allowed by posix. 
+ */ + static int +-check_name(struct dentry *direntry) ++check_name(struct dentry *direntry, struct cifs_tcon *tcon) + { + struct cifs_sb_info *cifs_sb = CIFS_SB(direntry->d_sb); + int i; + ++ if (unlikely(direntry->d_name.len > ++ le32_to_cpu(tcon->fsAttrInfo.MaxPathNameComponentLength))) ++ return -ENAMETOOLONG; ++ + if (!(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS)) { + for (i = 0; i < direntry->d_name.len; i++) { + if (direntry->d_name.name[i] == '\\') { +@@ -489,10 +494,6 @@ cifs_atomic_open(struct inode *inode, struct dentry *direntry, + return finish_no_open(file, res); + } + +- rc = check_name(direntry); +- if (rc) +- return rc; +- + xid = get_xid(); + + cifs_dbg(FYI, "parent inode = 0x%p name is: %pd and dentry = 0x%p\n", +@@ -505,6 +506,11 @@ cifs_atomic_open(struct inode *inode, struct dentry *direntry, + } + + tcon = tlink_tcon(tlink); ++ ++ rc = check_name(direntry, tcon); ++ if (rc) ++ goto out_free_xid; ++ + server = tcon->ses->server; + + if (server->ops->new_lease_key) +@@ -765,7 +771,7 @@ cifs_lookup(struct inode *parent_dir_inode, struct dentry *direntry, + } + pTcon = tlink_tcon(tlink); + +- rc = check_name(direntry); ++ rc = check_name(direntry, pTcon); + if (rc) + goto lookup_out; + +diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c +index f8ae041d60fe..2f6f164c83ab 100644 +--- a/fs/cifs/smb2pdu.c ++++ b/fs/cifs/smb2pdu.c +@@ -2554,8 +2554,8 @@ copy_fs_info_to_kstatfs(struct smb2_fs_full_size_info *pfs_inf, + kst->f_bsize = le32_to_cpu(pfs_inf->BytesPerSector) * + le32_to_cpu(pfs_inf->SectorsPerAllocationUnit); + kst->f_blocks = le64_to_cpu(pfs_inf->TotalAllocationUnits); +- kst->f_bfree = le64_to_cpu(pfs_inf->ActualAvailableAllocationUnits); +- kst->f_bavail = le64_to_cpu(pfs_inf->CallerAvailableAllocationUnits); ++ kst->f_bfree = kst->f_bavail = ++ le64_to_cpu(pfs_inf->CallerAvailableAllocationUnits); + return; + } + +diff --git a/fs/cifs/smb2pdu.h b/fs/cifs/smb2pdu.h +index 70867d54fb8b..31acb20d0b6e 100644 +--- a/fs/cifs/smb2pdu.h 
++++ b/fs/cifs/smb2pdu.h +@@ -82,8 +82,8 @@ + + #define NUMBER_OF_SMB2_COMMANDS 0x0013 + +-/* BB FIXME - analyze following length BB */ +-#define MAX_SMB2_HDR_SIZE 0x78 /* 4 len + 64 hdr + (2*24 wct) + 2 bct + 2 pad */ ++/* 4 len + 52 transform hdr + 64 hdr + 56 create rsp */ ++#define MAX_SMB2_HDR_SIZE 0x00b0 + + #define SMB2_PROTO_NUMBER cpu_to_le32(0x424d53fe) + +diff --git a/fs/dlm/user.c b/fs/dlm/user.c +index fb85f32e9eca..0221731a9462 100644 +--- a/fs/dlm/user.c ++++ b/fs/dlm/user.c +@@ -355,6 +355,10 @@ static int dlm_device_register(struct dlm_ls *ls, char *name) + error = misc_register(&ls->ls_device); + if (error) { + kfree(ls->ls_device.name); ++ /* this has to be set to NULL ++ * to avoid a double-free in dlm_device_deregister ++ */ ++ ls->ls_device.name = NULL; + } + fail: + return error; +diff --git a/fs/eventpoll.c b/fs/eventpoll.c +index 1e009cad8d5c..1b08556776ce 100644 +--- a/fs/eventpoll.c ++++ b/fs/eventpoll.c +@@ -518,8 +518,13 @@ static void ep_remove_wait_queue(struct eppoll_entry *pwq) + wait_queue_head_t *whead; + + rcu_read_lock(); +- /* If it is cleared by POLLFREE, it should be rcu-safe */ +- whead = rcu_dereference(pwq->whead); ++ /* ++ * If it is cleared by POLLFREE, it should be rcu-safe. ++ * If we read NULL we need a barrier paired with ++ * smp_store_release() in ep_poll_callback(), otherwise ++ * we rely on whead->lock. ++ */ ++ whead = smp_load_acquire(&pwq->whead); + if (whead) + remove_wait_queue(whead, &pwq->wait); + rcu_read_unlock(); +@@ -1003,17 +1008,6 @@ static int ep_poll_callback(wait_queue_t *wait, unsigned mode, int sync, void *k + struct epitem *epi = ep_item_from_wait(wait); + struct eventpoll *ep = epi->ep; + +- if ((unsigned long)key & POLLFREE) { +- ep_pwq_from_wait(wait)->whead = NULL; +- /* +- * whead = NULL above can race with ep_remove_wait_queue() +- * which can do another remove_wait_queue() after us, so we +- * can't use __remove_wait_queue(). whead->lock is held by +- * the caller. 
+- */ +- list_del_init(&wait->task_list); +- } +- + spin_lock_irqsave(&ep->lock, flags); + + /* +@@ -1078,6 +1072,23 @@ out_unlock: + if (pwake) + ep_poll_safewake(&ep->poll_wait); + ++ ++ if ((unsigned long)key & POLLFREE) { ++ /* ++ * If we race with ep_remove_wait_queue() it can miss ++ * ->whead = NULL and do another remove_wait_queue() after ++ * us, so we can't use __remove_wait_queue(). ++ */ ++ list_del_init(&wait->task_list); ++ /* ++ * ->whead != NULL protects us from the race with ep_free() ++ * or ep_remove(), ep_remove_wait_queue() takes whead->lock ++ * held by the caller. Once we nullify it, nothing protects ++ * ep/epi or even wait. ++ */ ++ smp_store_release(&ep_pwq_from_wait(wait)->whead, NULL); ++ } ++ + return 1; + } + +diff --git a/fs/ext4/super.c b/fs/ext4/super.c +index 97aa8be40175..ccc43e2f07e2 100644 +--- a/fs/ext4/super.c ++++ b/fs/ext4/super.c +@@ -2233,7 +2233,7 @@ static void ext4_orphan_cleanup(struct super_block *sb, + #ifdef CONFIG_QUOTA + /* Needed for iput() to work correctly and not trash data */ + sb->s_flags |= MS_ACTIVE; +- /* Turn on quotas so that they are updated correctly */ ++ /* Turn on journaled quotas so that they are updated correctly */ + for (i = 0; i < EXT4_MAXQUOTAS; i++) { + if (EXT4_SB(sb)->s_qf_names[i]) { + int ret = ext4_quota_on_mount(sb, i); +@@ -2299,9 +2299,9 @@ static void ext4_orphan_cleanup(struct super_block *sb, + ext4_msg(sb, KERN_INFO, "%d truncate%s cleaned up", + PLURAL(nr_truncates)); + #ifdef CONFIG_QUOTA +- /* Turn quotas off */ ++ /* Turn off journaled quotas if they were enabled for orphan cleanup */ + for (i = 0; i < EXT4_MAXQUOTAS; i++) { +- if (sb_dqopt(sb)->files[i]) ++ if (EXT4_SB(sb)->s_qf_names[i] && sb_dqopt(sb)->files[i]) + dquot_quota_off(sb, i); + } + #endif +diff --git a/fs/f2fs/recovery.c b/fs/f2fs/recovery.c +index 8d8ea99f2156..e195cc5e3590 100644 +--- a/fs/f2fs/recovery.c ++++ b/fs/f2fs/recovery.c +@@ -265,7 +265,7 @@ static int check_index_in_prev_nodes(struct f2fs_sb_info 
*sbi, + return 0; + + /* Get the previous summary */ +- for (i = CURSEG_WARM_DATA; i <= CURSEG_COLD_DATA; i++) { ++ for (i = CURSEG_HOT_DATA; i <= CURSEG_COLD_DATA; i++) { + struct curseg_info *curseg = CURSEG_I(sbi, i); + if (curseg->segno == segno) { + sum = curseg->sum_blk->entries[blkoff]; +diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c +index 16fcfdd6011c..280cd3d9151f 100644 +--- a/fs/nfsd/nfs4xdr.c ++++ b/fs/nfsd/nfs4xdr.c +@@ -128,7 +128,7 @@ static void next_decode_page(struct nfsd4_compoundargs *argp) + argp->p = page_address(argp->pagelist[0]); + argp->pagelist++; + if (argp->pagelen < PAGE_SIZE) { +- argp->end = argp->p + (argp->pagelen>>2); ++ argp->end = argp->p + XDR_QUADLEN(argp->pagelen); + argp->pagelen = 0; + } else { + argp->end = argp->p + (PAGE_SIZE>>2); +@@ -1245,9 +1245,7 @@ nfsd4_decode_write(struct nfsd4_compoundargs *argp, struct nfsd4_write *write) + argp->pagelen -= pages * PAGE_SIZE; + len -= pages * PAGE_SIZE; + +- argp->p = (__be32 *)page_address(argp->pagelist[0]); +- argp->pagelist++; +- argp->end = argp->p + XDR_QUADLEN(PAGE_SIZE); ++ next_decode_page(argp); + } + argp->p += XDR_QUADLEN(len); + +diff --git a/fs/xfs/xfs_linux.h b/fs/xfs/xfs_linux.h +index 7c7842c85a08..530c2f9c47c7 100644 +--- a/fs/xfs/xfs_linux.h ++++ b/fs/xfs/xfs_linux.h +@@ -376,7 +376,14 @@ static inline __uint64_t howmany_64(__uint64_t x, __uint32_t y) + #endif /* DEBUG */ + + #ifdef CONFIG_XFS_RT +-#define XFS_IS_REALTIME_INODE(ip) ((ip)->i_d.di_flags & XFS_DIFLAG_REALTIME) ++ ++/* ++ * make sure we ignore the inode flag if the filesystem doesn't have a ++ * configured realtime device. 
++ */ ++#define XFS_IS_REALTIME_INODE(ip) \ ++ (((ip)->i_d.di_flags & XFS_DIFLAG_REALTIME) && \ ++ (ip)->i_mount->m_rtdev_targp) + #else + #define XFS_IS_REALTIME_INODE(ip) (0) + #endif +diff --git a/include/asm-generic/topology.h b/include/asm-generic/topology.h +index fc824e2828f3..5d2add1a6c96 100644 +--- a/include/asm-generic/topology.h ++++ b/include/asm-generic/topology.h +@@ -48,7 +48,11 @@ + #define parent_node(node) ((void)(node),0) + #endif + #ifndef cpumask_of_node +-#define cpumask_of_node(node) ((void)node, cpu_online_mask) ++ #ifdef CONFIG_NEED_MULTIPLE_NODES ++ #define cpumask_of_node(node) ((node) == 0 ? cpu_online_mask : cpu_none_mask) ++ #else ++ #define cpumask_of_node(node) ((void)node, cpu_online_mask) ++ #endif + #endif + #ifndef pcibus_to_node + #define pcibus_to_node(bus) ((void)(bus), -1) +diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h +index 9b6f5dc58732..d57b902407dd 100644 +--- a/include/linux/pci_ids.h ++++ b/include/linux/pci_ids.h +@@ -573,6 +573,7 @@ + #define PCI_DEVICE_ID_AMD_CS5536_EHC 0x2095 + #define PCI_DEVICE_ID_AMD_CS5536_UDC 0x2096 + #define PCI_DEVICE_ID_AMD_CS5536_UOC 0x2097 ++#define PCI_DEVICE_ID_AMD_CS5536_DEV_IDE 0x2092 + #define PCI_DEVICE_ID_AMD_CS5536_IDE 0x209A + #define PCI_DEVICE_ID_AMD_LX_VIDEO 0x2081 + #define PCI_DEVICE_ID_AMD_LX_AES 0x2082 +diff --git a/include/linux/tty_flip.h b/include/linux/tty_flip.h +index c28dd523f96e..d43837f2ce3a 100644 +--- a/include/linux/tty_flip.h ++++ b/include/linux/tty_flip.h +@@ -12,6 +12,7 @@ extern int tty_prepare_flip_string(struct tty_port *port, + unsigned char **chars, size_t size); + extern void tty_flip_buffer_push(struct tty_port *port); + void tty_schedule_flip(struct tty_port *port); ++int __tty_insert_flip_char(struct tty_port *port, unsigned char ch, char flag); + + static inline int tty_insert_flip_char(struct tty_port *port, + unsigned char ch, char flag) +@@ -26,7 +27,7 @@ static inline int tty_insert_flip_char(struct tty_port *port, + 
*char_buf_ptr(tb, tb->used++) = ch; + return 1; + } +- return tty_insert_flip_string_flags(port, &ch, &flag, 1); ++ return __tty_insert_flip_char(port, ch, flag); + } + + static inline int tty_insert_flip_string(struct tty_port *port, +diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h +index 530bdca19803..35fdedac3e25 100644 +--- a/include/net/sch_generic.h ++++ b/include/net/sch_generic.h +@@ -701,8 +701,11 @@ static inline struct Qdisc *qdisc_replace(struct Qdisc *sch, struct Qdisc *new, + old = *pold; + *pold = new; + if (old != NULL) { +- qdisc_tree_reduce_backlog(old, old->q.qlen, old->qstats.backlog); ++ unsigned int qlen = old->q.qlen; ++ unsigned int backlog = old->qstats.backlog; ++ + qdisc_reset(old); ++ qdisc_tree_reduce_backlog(old, qlen, backlog); + } + sch_tree_unlock(sch); + +diff --git a/kernel/audit_watch.c b/kernel/audit_watch.c +index 6e30024d9aac..d53c6e284e87 100644 +--- a/kernel/audit_watch.c ++++ b/kernel/audit_watch.c +@@ -455,13 +455,15 @@ void audit_remove_watch_rule(struct audit_krule *krule) + list_del(&krule->rlist); + + if (list_empty(&watch->rules)) { ++ /* ++ * audit_remove_watch() drops our reference to 'parent' which ++ * can get freed. Grab our own reference to be safe. 
++ */ ++ audit_get_parent(parent); + audit_remove_watch(watch); +- +- if (list_empty(&parent->watches)) { +- audit_get_parent(parent); ++ if (list_empty(&parent->watches)) + fsnotify_destroy_mark(&parent->mark, audit_watch_group); +- audit_put_parent(parent); +- } ++ audit_put_parent(parent); + } + } + +diff --git a/kernel/events/core.c b/kernel/events/core.c +index e871080bc44e..e5553bdaf6c2 100644 +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -8102,28 +8102,27 @@ SYSCALL_DEFINE5(perf_event_open, + goto err_context; + + /* +- * Do not allow to attach to a group in a different +- * task or CPU context: ++ * Make sure we're both events for the same CPU; ++ * grouping events for different CPUs is broken; since ++ * you can never concurrently schedule them anyhow. + */ +- if (move_group) { +- /* +- * Make sure we're both on the same task, or both +- * per-cpu events. +- */ +- if (group_leader->ctx->task != ctx->task) +- goto err_context; ++ if (group_leader->cpu != event->cpu) ++ goto err_context; + +- /* +- * Make sure we're both events for the same CPU; +- * grouping events for different CPUs is broken; since +- * you can never concurrently schedule them anyhow. +- */ +- if (group_leader->cpu != event->cpu) +- goto err_context; +- } else { +- if (group_leader->ctx != ctx) +- goto err_context; +- } ++ /* ++ * Make sure we're both on the same task, or both ++ * per-CPU events. ++ */ ++ if (group_leader->ctx->task != ctx->task) ++ goto err_context; ++ ++ /* ++ * Do not allow to attach to a group in a different task ++ * or CPU context. If we're moving SW events, we'll fix ++ * this up later, so allow that. 
++ */ ++ if (!move_group && group_leader->ctx != ctx) ++ goto err_context; + + /* + * Only a group leader can be exclusive or pinned +diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c +index ec8cce259779..a25e3a11f1b3 100644 +--- a/kernel/locking/locktorture.c ++++ b/kernel/locking/locktorture.c +@@ -630,6 +630,8 @@ static void lock_torture_cleanup(void) + else + lock_torture_print_module_parms(cxt.cur_ops, + "End of test: SUCCESS"); ++ kfree(cxt.lwsa); ++ kfree(cxt.lrsa); + torture_cleanup_end(); + } + +@@ -763,6 +765,8 @@ static int __init lock_torture_init(void) + GFP_KERNEL); + if (reader_tasks == NULL) { + VERBOSE_TOROUT_ERRSTRING("reader_tasks: Out of memory"); ++ kfree(writer_tasks); ++ writer_tasks = NULL; + firsterr = -ENOMEM; + goto unwind; + } +diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c +index eb11011b5292..06d0e5712e86 100644 +--- a/kernel/trace/ftrace.c ++++ b/kernel/trace/ftrace.c +@@ -2657,13 +2657,14 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command) + + if (!command || !ftrace_enabled) { + /* +- * If these are control ops, they still need their +- * per_cpu field freed. Since, function tracing is ++ * If these are dynamic or control ops, they still ++ * need their data freed. Since, function tracing is + * not currently active, we can just free them + * without synchronizing all CPUs. 
+ */ +- if (ops->flags & FTRACE_OPS_FL_CONTROL) +- control_ops_free(ops); ++ if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_CONTROL)) ++ goto free_ops; ++ + return 0; + } + +@@ -2718,6 +2719,7 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command) + if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_CONTROL)) { + schedule_on_each_cpu(ftrace_sync); + ++ free_ops: + arch_ftrace_trampoline_free(ops); + + if (ops->flags & FTRACE_OPS_FL_CONTROL) +diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c +index 591b3b4f5337..17213d74540b 100644 +--- a/kernel/trace/trace.c ++++ b/kernel/trace/trace.c +@@ -5204,7 +5204,7 @@ static int tracing_set_clock(struct trace_array *tr, const char *clockstr) + tracing_reset_online_cpus(&tr->trace_buffer); + + #ifdef CONFIG_TRACER_MAX_TRACE +- if (tr->flags & TRACE_ARRAY_FL_GLOBAL && tr->max_buffer.buffer) ++ if (tr->max_buffer.buffer) + ring_buffer_set_clock(tr->max_buffer.buffer, trace_clocks[i].func); + tracing_reset_online_cpus(&tr->max_buffer); + #endif +diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c +index 52adf02d7619..f186066f8b87 100644 +--- a/kernel/trace/trace_events_filter.c ++++ b/kernel/trace/trace_events_filter.c +@@ -1928,6 +1928,10 @@ static int create_filter(struct ftrace_event_call *call, + if (err && set_str) + append_filter_err(ps, filter); + } ++ if (err && !set_str) { ++ free_event_filter(filter); ++ filter = NULL; ++ } + create_filter_finish(ps); + + *filterp = filter; +diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c +index b0f86ea77881..ca70d11b8aa7 100644 +--- a/kernel/trace/trace_selftest.c ++++ b/kernel/trace/trace_selftest.c +@@ -272,7 +272,7 @@ static int trace_selftest_ops(struct trace_array *tr, int cnt) + goto out_free; + if (cnt > 1) { + if (trace_selftest_test_global_cnt == 0) +- goto out; ++ goto out_free; + } + if (trace_selftest_test_dyn_cnt == 0) + goto out_free; +diff --git a/mm/mempolicy.c b/mm/mempolicy.c 
+index ea06282f8a3e..dacd2e9a5b68 100644 +--- a/mm/mempolicy.c ++++ b/mm/mempolicy.c +@@ -897,11 +897,6 @@ static long do_get_mempolicy(int *policy, nodemask_t *nmask, + *policy |= (pol->flags & MPOL_MODE_FLAGS); + } + +- if (vma) { +- up_read(¤t->mm->mmap_sem); +- vma = NULL; +- } +- + err = 0; + if (nmask) { + if (mpol_store_user_nodemask(pol)) { +diff --git a/net/bluetooth/bnep/core.c b/net/bluetooth/bnep/core.c +index 1641367e54ca..69f56073b337 100644 +--- a/net/bluetooth/bnep/core.c ++++ b/net/bluetooth/bnep/core.c +@@ -484,16 +484,16 @@ static int bnep_session(void *arg) + struct net_device *dev = s->dev; + struct sock *sk = s->sock->sk; + struct sk_buff *skb; +- wait_queue_t wait; ++ DEFINE_WAIT_FUNC(wait, woken_wake_function); + + BT_DBG(""); + + set_user_nice(current, -15); + +- init_waitqueue_entry(&wait, current); + add_wait_queue(sk_sleep(sk), &wait); + while (1) { +- set_current_state(TASK_INTERRUPTIBLE); ++ /* Ensure session->terminate is updated */ ++ smp_mb__before_atomic(); + + if (atomic_read(&s->terminate)) + break; +@@ -515,9 +515,8 @@ static int bnep_session(void *arg) + break; + netif_wake_queue(dev); + +- schedule(); ++ wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT); + } +- __set_current_state(TASK_RUNNING); + remove_wait_queue(sk_sleep(sk), &wait); + + /* Cleanup session */ +@@ -663,7 +662,7 @@ int bnep_del_connection(struct bnep_conndel_req *req) + s = __bnep_get_session(req->dst); + if (s) { + atomic_inc(&s->terminate); +- wake_up_process(s->task); ++ wake_up_interruptible(sk_sleep(s->sock->sk)); + } else + err = -ENOENT; + +diff --git a/net/bluetooth/cmtp/core.c b/net/bluetooth/cmtp/core.c +index 298ed37010e6..3a39fd523e40 100644 +--- a/net/bluetooth/cmtp/core.c ++++ b/net/bluetooth/cmtp/core.c +@@ -281,16 +281,16 @@ static int cmtp_session(void *arg) + struct cmtp_session *session = arg; + struct sock *sk = session->sock->sk; + struct sk_buff *skb; +- wait_queue_t wait; ++ DEFINE_WAIT_FUNC(wait, woken_wake_function); + + 
BT_DBG("session %p", session); + + set_user_nice(current, -15); + +- init_waitqueue_entry(&wait, current); + add_wait_queue(sk_sleep(sk), &wait); + while (1) { +- set_current_state(TASK_INTERRUPTIBLE); ++ /* Ensure session->terminate is updated */ ++ smp_mb__before_atomic(); + + if (atomic_read(&session->terminate)) + break; +@@ -307,9 +307,8 @@ static int cmtp_session(void *arg) + + cmtp_process_transmit(session); + +- schedule(); ++ wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT); + } +- __set_current_state(TASK_RUNNING); + remove_wait_queue(sk_sleep(sk), &wait); + + down_write(&cmtp_session_sem); +@@ -394,7 +393,7 @@ int cmtp_add_connection(struct cmtp_connadd_req *req, struct socket *sock) + err = cmtp_attach_device(session); + if (err < 0) { + atomic_inc(&session->terminate); +- wake_up_process(session->task); ++ wake_up_interruptible(sk_sleep(session->sock->sk)); + up_write(&cmtp_session_sem); + return err; + } +@@ -432,7 +431,11 @@ int cmtp_del_connection(struct cmtp_conndel_req *req) + + /* Stop session thread */ + atomic_inc(&session->terminate); +- wake_up_process(session->task); ++ ++ /* Ensure session->terminate is updated */ ++ smp_mb__after_atomic(); ++ ++ wake_up_interruptible(sk_sleep(session->sock->sk)); + } else + err = -ENOENT; + +diff --git a/net/bluetooth/hidp/core.c b/net/bluetooth/hidp/core.c +index 4a0015e16d4f..b9eb90109f7c 100644 +--- a/net/bluetooth/hidp/core.c ++++ b/net/bluetooth/hidp/core.c +@@ -36,6 +36,7 @@ + #define VERSION "1.2" + + static DECLARE_RWSEM(hidp_session_sem); ++static DECLARE_WAIT_QUEUE_HEAD(hidp_session_wq); + static LIST_HEAD(hidp_session_list); + + static unsigned char hidp_keycode[256] = { +@@ -1067,12 +1068,12 @@ static int hidp_session_start_sync(struct hidp_session *session) + * Wake up session thread and notify it to stop. This is asynchronous and + * returns immediately. Call this whenever a runtime error occurs and you want + * the session to stop. 
+- * Note: wake_up_process() performs any necessary memory-barriers for us. ++ * Note: wake_up_interruptible() performs any necessary memory-barriers for us. + */ + static void hidp_session_terminate(struct hidp_session *session) + { + atomic_inc(&session->terminate); +- wake_up_process(session->task); ++ wake_up_interruptible(&hidp_session_wq); + } + + /* +@@ -1179,7 +1180,9 @@ static void hidp_session_run(struct hidp_session *session) + struct sock *ctrl_sk = session->ctrl_sock->sk; + struct sock *intr_sk = session->intr_sock->sk; + struct sk_buff *skb; ++ DEFINE_WAIT_FUNC(wait, woken_wake_function); + ++ add_wait_queue(&hidp_session_wq, &wait); + for (;;) { + /* + * This thread can be woken up two ways: +@@ -1187,12 +1190,10 @@ static void hidp_session_run(struct hidp_session *session) + * session->terminate flag and wakes this thread up. + * - Via modifying the socket state of ctrl/intr_sock. This + * thread is woken up by ->sk_state_changed(). +- * +- * Note: set_current_state() performs any necessary +- * memory-barriers for us. 
+ */ +- set_current_state(TASK_INTERRUPTIBLE); + ++ /* Ensure session->terminate is updated */ ++ smp_mb__before_atomic(); + if (atomic_read(&session->terminate)) + break; + +@@ -1226,11 +1227,22 @@ static void hidp_session_run(struct hidp_session *session) + hidp_process_transmit(session, &session->ctrl_transmit, + session->ctrl_sock); + +- schedule(); ++ wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT); + } ++ remove_wait_queue(&hidp_session_wq, &wait); + + atomic_inc(&session->terminate); +- set_current_state(TASK_RUNNING); ++ ++ /* Ensure session->terminate is updated */ ++ smp_mb__after_atomic(); ++} ++ ++static int hidp_session_wake_function(wait_queue_t *wait, ++ unsigned int mode, ++ int sync, void *key) ++{ ++ wake_up_interruptible(&hidp_session_wq); ++ return false; + } + + /* +@@ -1243,7 +1255,8 @@ static void hidp_session_run(struct hidp_session *session) + static int hidp_session_thread(void *arg) + { + struct hidp_session *session = arg; +- wait_queue_t ctrl_wait, intr_wait; ++ DEFINE_WAIT_FUNC(ctrl_wait, hidp_session_wake_function); ++ DEFINE_WAIT_FUNC(intr_wait, hidp_session_wake_function); + + BT_DBG("session %p", session); + +@@ -1253,8 +1266,6 @@ static int hidp_session_thread(void *arg) + set_user_nice(current, -15); + hidp_set_timer(session); + +- init_waitqueue_entry(&ctrl_wait, current); +- init_waitqueue_entry(&intr_wait, current); + add_wait_queue(sk_sleep(session->ctrl_sock->sk), &ctrl_wait); + add_wait_queue(sk_sleep(session->intr_sock->sk), &intr_wait); + /* This memory barrier is paired with wq_has_sleeper(). 
See +diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c +index dad419782a12..9b6b35977f48 100644 +--- a/net/bluetooth/l2cap_core.c ++++ b/net/bluetooth/l2cap_core.c +@@ -57,7 +57,7 @@ static struct sk_buff *l2cap_build_cmd(struct l2cap_conn *conn, + u8 code, u8 ident, u16 dlen, void *data); + static void l2cap_send_cmd(struct l2cap_conn *conn, u8 ident, u8 code, u16 len, + void *data); +-static int l2cap_build_conf_req(struct l2cap_chan *chan, void *data); ++static int l2cap_build_conf_req(struct l2cap_chan *chan, void *data, size_t data_size); + static void l2cap_send_disconn_req(struct l2cap_chan *chan, int err); + + static void l2cap_tx(struct l2cap_chan *chan, struct l2cap_ctrl *control, +@@ -1462,7 +1462,7 @@ static void l2cap_conn_start(struct l2cap_conn *conn) + + set_bit(CONF_REQ_SENT, &chan->conf_state); + l2cap_send_cmd(conn, l2cap_get_ident(conn), L2CAP_CONF_REQ, +- l2cap_build_conf_req(chan, buf), buf); ++ l2cap_build_conf_req(chan, buf, sizeof(buf)), buf); + chan->num_conf_req++; + } + +@@ -2970,12 +2970,15 @@ static inline int l2cap_get_conf_opt(void **ptr, int *type, int *olen, + return len; + } + +-static void l2cap_add_conf_opt(void **ptr, u8 type, u8 len, unsigned long val) ++static void l2cap_add_conf_opt(void **ptr, u8 type, u8 len, unsigned long val, size_t size) + { + struct l2cap_conf_opt *opt = *ptr; + + BT_DBG("type 0x%2.2x len %u val 0x%lx", type, len, val); + ++ if (size < L2CAP_CONF_OPT_SIZE + len) ++ return; ++ + opt->type = type; + opt->len = len; + +@@ -3000,7 +3003,7 @@ static void l2cap_add_conf_opt(void **ptr, u8 type, u8 len, unsigned long val) + *ptr += L2CAP_CONF_OPT_SIZE + len; + } + +-static void l2cap_add_opt_efs(void **ptr, struct l2cap_chan *chan) ++static void l2cap_add_opt_efs(void **ptr, struct l2cap_chan *chan, size_t size) + { + struct l2cap_conf_efs efs; + +@@ -3028,7 +3031,7 @@ static void l2cap_add_opt_efs(void **ptr, struct l2cap_chan *chan) + } + + l2cap_add_conf_opt(ptr, L2CAP_CONF_EFS, 
sizeof(efs), +- (unsigned long) &efs); ++ (unsigned long) &efs, size); + } + + static void l2cap_ack_timeout(struct work_struct *work) +@@ -3174,11 +3177,12 @@ static inline void l2cap_txwin_setup(struct l2cap_chan *chan) + chan->ack_win = chan->tx_win; + } + +-static int l2cap_build_conf_req(struct l2cap_chan *chan, void *data) ++static int l2cap_build_conf_req(struct l2cap_chan *chan, void *data, size_t data_size) + { + struct l2cap_conf_req *req = data; + struct l2cap_conf_rfc rfc = { .mode = chan->mode }; + void *ptr = req->data; ++ void *endptr = data + data_size; + u16 size; + + BT_DBG("chan %p", chan); +@@ -3203,7 +3207,7 @@ static int l2cap_build_conf_req(struct l2cap_chan *chan, void *data) + + done: + if (chan->imtu != L2CAP_DEFAULT_MTU) +- l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->imtu); ++ l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->imtu, endptr - ptr); + + switch (chan->mode) { + case L2CAP_MODE_BASIC: +@@ -3222,7 +3226,7 @@ done: + rfc.max_pdu_size = 0; + + l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC, sizeof(rfc), +- (unsigned long) &rfc); ++ (unsigned long) &rfc, endptr - ptr); + break; + + case L2CAP_MODE_ERTM: +@@ -3242,21 +3246,21 @@ done: + L2CAP_DEFAULT_TX_WINDOW); + + l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC, sizeof(rfc), +- (unsigned long) &rfc); ++ (unsigned long) &rfc, endptr - ptr); + + if (test_bit(FLAG_EFS_ENABLE, &chan->flags)) +- l2cap_add_opt_efs(&ptr, chan); ++ l2cap_add_opt_efs(&ptr, chan, endptr - ptr); + + if (test_bit(FLAG_EXT_CTRL, &chan->flags)) + l2cap_add_conf_opt(&ptr, L2CAP_CONF_EWS, 2, +- chan->tx_win); ++ chan->tx_win, endptr - ptr); + + if (chan->conn->feat_mask & L2CAP_FEAT_FCS) + if (chan->fcs == L2CAP_FCS_NONE || + test_bit(CONF_RECV_NO_FCS, &chan->conf_state)) { + chan->fcs = L2CAP_FCS_NONE; + l2cap_add_conf_opt(&ptr, L2CAP_CONF_FCS, 1, +- chan->fcs); ++ chan->fcs, endptr - ptr); + } + break; + +@@ -3274,17 +3278,17 @@ done: + rfc.max_pdu_size = cpu_to_le16(size); + + l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC, 
sizeof(rfc), +- (unsigned long) &rfc); ++ (unsigned long) &rfc, endptr - ptr); + + if (test_bit(FLAG_EFS_ENABLE, &chan->flags)) +- l2cap_add_opt_efs(&ptr, chan); ++ l2cap_add_opt_efs(&ptr, chan, endptr - ptr); + + if (chan->conn->feat_mask & L2CAP_FEAT_FCS) + if (chan->fcs == L2CAP_FCS_NONE || + test_bit(CONF_RECV_NO_FCS, &chan->conf_state)) { + chan->fcs = L2CAP_FCS_NONE; + l2cap_add_conf_opt(&ptr, L2CAP_CONF_FCS, 1, +- chan->fcs); ++ chan->fcs, endptr - ptr); + } + break; + } +@@ -3295,10 +3299,11 @@ done: + return ptr - data; + } + +-static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data) ++static int l2cap_parse_conf_req(struct l2cap_chan *chan, void *data, size_t data_size) + { + struct l2cap_conf_rsp *rsp = data; + void *ptr = rsp->data; ++ void *endptr = data + data_size; + void *req = chan->conf_req; + int len = chan->conf_len; + int type, hint, olen; +@@ -3400,7 +3405,7 @@ done: + return -ECONNREFUSED; + + l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC, sizeof(rfc), +- (unsigned long) &rfc); ++ (unsigned long) &rfc, endptr - ptr); + } + + if (result == L2CAP_CONF_SUCCESS) { +@@ -3413,7 +3418,7 @@ done: + chan->omtu = mtu; + set_bit(CONF_MTU_DONE, &chan->conf_state); + } +- l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->omtu); ++ l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->omtu, endptr - ptr); + + if (remote_efs) { + if (chan->local_stype != L2CAP_SERV_NOTRAFIC && +@@ -3427,7 +3432,7 @@ done: + + l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, + sizeof(efs), +- (unsigned long) &efs); ++ (unsigned long) &efs, endptr - ptr); + } else { + /* Send PENDING Conf Rsp */ + result = L2CAP_CONF_PENDING; +@@ -3460,7 +3465,7 @@ done: + set_bit(CONF_MODE_DONE, &chan->conf_state); + + l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC, +- sizeof(rfc), (unsigned long) &rfc); ++ sizeof(rfc), (unsigned long) &rfc, endptr - ptr); + + if (test_bit(FLAG_EFS_ENABLE, &chan->flags)) { + chan->remote_id = efs.id; +@@ -3474,7 +3479,7 @@ done: + le32_to_cpu(efs.sdu_itime); + 
l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, + sizeof(efs), +- (unsigned long) &efs); ++ (unsigned long) &efs, endptr - ptr); + } + break; + +@@ -3488,7 +3493,7 @@ done: + set_bit(CONF_MODE_DONE, &chan->conf_state); + + l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC, sizeof(rfc), +- (unsigned long) &rfc); ++ (unsigned long) &rfc, endptr - ptr); + + break; + +@@ -3510,10 +3515,11 @@ done: + } + + static int l2cap_parse_conf_rsp(struct l2cap_chan *chan, void *rsp, int len, +- void *data, u16 *result) ++ void *data, size_t size, u16 *result) + { + struct l2cap_conf_req *req = data; + void *ptr = req->data; ++ void *endptr = data + size; + int type, olen; + unsigned long val; + struct l2cap_conf_rfc rfc = { .mode = L2CAP_MODE_BASIC }; +@@ -3531,13 +3537,13 @@ static int l2cap_parse_conf_rsp(struct l2cap_chan *chan, void *rsp, int len, + chan->imtu = L2CAP_DEFAULT_MIN_MTU; + } else + chan->imtu = val; +- l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->imtu); ++ l2cap_add_conf_opt(&ptr, L2CAP_CONF_MTU, 2, chan->imtu, endptr - ptr); + break; + + case L2CAP_CONF_FLUSH_TO: + chan->flush_to = val; + l2cap_add_conf_opt(&ptr, L2CAP_CONF_FLUSH_TO, +- 2, chan->flush_to); ++ 2, chan->flush_to, endptr - ptr); + break; + + case L2CAP_CONF_RFC: +@@ -3551,13 +3557,13 @@ static int l2cap_parse_conf_rsp(struct l2cap_chan *chan, void *rsp, int len, + chan->fcs = 0; + + l2cap_add_conf_opt(&ptr, L2CAP_CONF_RFC, +- sizeof(rfc), (unsigned long) &rfc); ++ sizeof(rfc), (unsigned long) &rfc, endptr - ptr); + break; + + case L2CAP_CONF_EWS: + chan->ack_win = min_t(u16, val, chan->ack_win); + l2cap_add_conf_opt(&ptr, L2CAP_CONF_EWS, 2, +- chan->tx_win); ++ chan->tx_win, endptr - ptr); + break; + + case L2CAP_CONF_EFS: +@@ -3570,7 +3576,7 @@ static int l2cap_parse_conf_rsp(struct l2cap_chan *chan, void *rsp, int len, + return -ECONNREFUSED; + + l2cap_add_conf_opt(&ptr, L2CAP_CONF_EFS, sizeof(efs), +- (unsigned long) &efs); ++ (unsigned long) &efs, endptr - ptr); + break; + + case L2CAP_CONF_FCS: +@@ 
-3675,7 +3681,7 @@ void __l2cap_connect_rsp_defer(struct l2cap_chan *chan) + return; + + l2cap_send_cmd(conn, l2cap_get_ident(conn), L2CAP_CONF_REQ, +- l2cap_build_conf_req(chan, buf), buf); ++ l2cap_build_conf_req(chan, buf, sizeof(buf)), buf); + chan->num_conf_req++; + } + +@@ -3883,7 +3889,7 @@ sendresp: + u8 buf[128]; + set_bit(CONF_REQ_SENT, &chan->conf_state); + l2cap_send_cmd(conn, l2cap_get_ident(conn), L2CAP_CONF_REQ, +- l2cap_build_conf_req(chan, buf), buf); ++ l2cap_build_conf_req(chan, buf, sizeof(buf)), buf); + chan->num_conf_req++; + } + +@@ -3961,7 +3967,7 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn, + break; + + l2cap_send_cmd(conn, l2cap_get_ident(conn), L2CAP_CONF_REQ, +- l2cap_build_conf_req(chan, req), req); ++ l2cap_build_conf_req(chan, req, sizeof(req)), req); + chan->num_conf_req++; + break; + +@@ -4073,7 +4079,7 @@ static inline int l2cap_config_req(struct l2cap_conn *conn, + } + + /* Complete config. */ +- len = l2cap_parse_conf_req(chan, rsp); ++ len = l2cap_parse_conf_req(chan, rsp, sizeof(rsp)); + if (len < 0) { + l2cap_send_disconn_req(chan, ECONNRESET); + goto unlock; +@@ -4107,7 +4113,7 @@ static inline int l2cap_config_req(struct l2cap_conn *conn, + if (!test_and_set_bit(CONF_REQ_SENT, &chan->conf_state)) { + u8 buf[64]; + l2cap_send_cmd(conn, l2cap_get_ident(conn), L2CAP_CONF_REQ, +- l2cap_build_conf_req(chan, buf), buf); ++ l2cap_build_conf_req(chan, buf, sizeof(buf)), buf); + chan->num_conf_req++; + } + +@@ -4167,7 +4173,7 @@ static inline int l2cap_config_rsp(struct l2cap_conn *conn, + char buf[64]; + + len = l2cap_parse_conf_rsp(chan, rsp->data, len, +- buf, &result); ++ buf, sizeof(buf), &result); + if (len < 0) { + l2cap_send_disconn_req(chan, ECONNRESET); + goto done; +@@ -4197,7 +4203,7 @@ static inline int l2cap_config_rsp(struct l2cap_conn *conn, + /* throw out any old stored conf requests */ + result = L2CAP_CONF_SUCCESS; + len = l2cap_parse_conf_rsp(chan, rsp->data, len, +- req, &result); ++ req, 
sizeof(req), &result); + if (len < 0) { + l2cap_send_disconn_req(chan, ECONNRESET); + goto done; +@@ -4774,7 +4780,7 @@ static void l2cap_do_create(struct l2cap_chan *chan, int result, + set_bit(CONF_REQ_SENT, &chan->conf_state); + l2cap_send_cmd(chan->conn, l2cap_get_ident(chan->conn), + L2CAP_CONF_REQ, +- l2cap_build_conf_req(chan, buf), buf); ++ l2cap_build_conf_req(chan, buf, sizeof(buf)), buf); + chan->num_conf_req++; + } + } +@@ -7430,7 +7436,7 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt) + set_bit(CONF_REQ_SENT, &chan->conf_state); + l2cap_send_cmd(conn, l2cap_get_ident(conn), + L2CAP_CONF_REQ, +- l2cap_build_conf_req(chan, buf), ++ l2cap_build_conf_req(chan, buf, sizeof(buf)), + buf); + chan->num_conf_req++; + } +diff --git a/net/dccp/proto.c b/net/dccp/proto.c +index 52a94016526d..522658179cca 100644 +--- a/net/dccp/proto.c ++++ b/net/dccp/proto.c +@@ -24,6 +24,7 @@ + #include + + #include ++#include + #include + #include + +@@ -170,6 +171,15 @@ const char *dccp_packet_name(const int type) + + EXPORT_SYMBOL_GPL(dccp_packet_name); + ++static void dccp_sk_destruct(struct sock *sk) ++{ ++ struct dccp_sock *dp = dccp_sk(sk); ++ ++ ccid_hc_tx_delete(dp->dccps_hc_tx_ccid, sk); ++ dp->dccps_hc_tx_ccid = NULL; ++ inet_sock_destruct(sk); ++} ++ + int dccp_init_sock(struct sock *sk, const __u8 ctl_sock_initialized) + { + struct dccp_sock *dp = dccp_sk(sk); +@@ -179,6 +189,7 @@ int dccp_init_sock(struct sock *sk, const __u8 ctl_sock_initialized) + icsk->icsk_syn_retries = sysctl_dccp_request_retries; + sk->sk_state = DCCP_CLOSED; + sk->sk_write_space = dccp_write_space; ++ sk->sk_destruct = dccp_sk_destruct; + icsk->icsk_sync_mss = dccp_sync_mss; + dp->dccps_mss_cache = 536; + dp->dccps_rate_last = jiffies; +@@ -201,10 +212,7 @@ void dccp_destroy_sock(struct sock *sk) + { + struct dccp_sock *dp = dccp_sk(sk); + +- /* +- * DCCP doesn't use sk_write_queue, just sk_send_head +- * for retransmissions +- */ ++ 
__skb_queue_purge(&sk->sk_write_queue); + if (sk->sk_send_head != NULL) { + kfree_skb(sk->sk_send_head); + sk->sk_send_head = NULL; +@@ -222,8 +230,7 @@ void dccp_destroy_sock(struct sock *sk) + dp->dccps_hc_rx_ackvec = NULL; + } + ccid_hc_rx_delete(dp->dccps_hc_rx_ccid, sk); +- ccid_hc_tx_delete(dp->dccps_hc_tx_ccid, sk); +- dp->dccps_hc_rx_ccid = dp->dccps_hc_tx_ccid = NULL; ++ dp->dccps_hc_rx_ccid = NULL; + + /* clean up feature negotiation state */ + dccp_feat_list_purge(&dp->dccps_featneg); +diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c +index d935c9815564..1ba4d0964042 100644 +--- a/net/ipv4/tcp.c ++++ b/net/ipv4/tcp.c +@@ -2220,6 +2220,10 @@ int tcp_disconnect(struct sock *sk, int flags) + tcp_set_ca_state(sk, TCP_CA_Open); + tcp_clear_retrans(tp); + inet_csk_delack_init(sk); ++ /* Initialize rcv_mss to TCP_MIN_MSS to avoid division by 0 ++ * issue in __tcp_select_window() ++ */ ++ icsk->icsk_ack.rcv_mss = TCP_MIN_MSS; + tcp_init_send_head(sk); + memset(&tp->rx_opt, 0, sizeof(tp->rx_opt)); + __sk_dst_reset(sk); +diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c +index 767ee7471c9b..95f98d2444fa 100644 +--- a/net/ipv4/tcp_input.c ++++ b/net/ipv4/tcp_input.c +@@ -2989,8 +2989,7 @@ void tcp_rearm_rto(struct sock *sk) + /* delta may not be positive if the socket is locked + * when the retrans timer fires and is rescheduled. 
+ */ +- if (delta > 0) +- rto = delta; ++ rto = max(delta, 1); + } + inet_csk_reset_xmit_timer(sk, ICSK_TIME_RETRANS, rto, + TCP_RTO_MAX); +diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c +index bde57b113009..e7a60f5de097 100644 +--- a/net/ipv6/ip6_fib.c ++++ b/net/ipv6/ip6_fib.c +@@ -160,6 +160,12 @@ static void rt6_release(struct rt6_info *rt) + dst_free(&rt->dst); + } + ++static void fib6_free_table(struct fib6_table *table) ++{ ++ inetpeer_invalidate_tree(&table->tb6_peers); ++ kfree(table); ++} ++ + static void fib6_link_table(struct net *net, struct fib6_table *tb) + { + unsigned int h; +@@ -853,6 +859,8 @@ add: + } + nsiblings = iter->rt6i_nsiblings; + fib6_purge_rt(iter, fn, info->nl_net); ++ if (fn->rr_ptr == iter) ++ fn->rr_ptr = NULL; + rt6_release(iter); + + if (nsiblings) { +@@ -863,6 +871,8 @@ add: + if (rt6_qualify_for_ecmp(iter)) { + *ins = iter->dst.rt6_next; + fib6_purge_rt(iter, fn, info->nl_net); ++ if (fn->rr_ptr == iter) ++ fn->rr_ptr = NULL; + rt6_release(iter); + nsiblings--; + } else { +@@ -1818,15 +1828,22 @@ out_timer: + + static void fib6_net_exit(struct net *net) + { ++ unsigned int i; ++ + rt6_ifdown(net, NULL); + del_timer_sync(&net->ipv6.ip6_fib_timer); + +-#ifdef CONFIG_IPV6_MULTIPLE_TABLES +- inetpeer_invalidate_tree(&net->ipv6.fib6_local_tbl->tb6_peers); +- kfree(net->ipv6.fib6_local_tbl); +-#endif +- inetpeer_invalidate_tree(&net->ipv6.fib6_main_tbl->tb6_peers); +- kfree(net->ipv6.fib6_main_tbl); ++ for (i = 0; i < FIB6_TABLE_HASHSZ; i++) { ++ struct hlist_head *head = &net->ipv6.fib_table_hash[i]; ++ struct hlist_node *tmp; ++ struct fib6_table *tb; ++ ++ hlist_for_each_entry_safe(tb, tmp, head, tb6_hlist) { ++ hlist_del(&tb->tb6_hlist); ++ fib6_free_table(tb); ++ } ++ } ++ + kfree(net->ipv6.fib_table_hash); + kfree(net->ipv6.rt6_stats); + } +diff --git a/net/ipv6/ip6_offload.c b/net/ipv6/ip6_offload.c +index 606a07890c68..1cb68e01c301 100644 +--- a/net/ipv6/ip6_offload.c ++++ b/net/ipv6/ip6_offload.c +@@ -261,19 +261,6 @@ 
out: + return pp; + } + +-static struct sk_buff **sit_gro_receive(struct sk_buff **head, +- struct sk_buff *skb) +-{ +- if (NAPI_GRO_CB(skb)->encap_mark) { +- NAPI_GRO_CB(skb)->flush = 1; +- return NULL; +- } +- +- NAPI_GRO_CB(skb)->encap_mark = 1; +- +- return ipv6_gro_receive(head, skb); +-} +- + static int ipv6_gro_complete(struct sk_buff *skb, int nhoff) + { + const struct net_offload *ops; +diff --git a/net/ipv6/output_core.c b/net/ipv6/output_core.c +index 292ef2e584db..7c6159b1481a 100644 +--- a/net/ipv6/output_core.c ++++ b/net/ipv6/output_core.c +@@ -84,7 +84,6 @@ int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr) + + while (offset <= packet_len) { + struct ipv6_opt_hdr *exthdr; +- unsigned int len; + + switch (**nexthdr) { + +@@ -110,10 +109,9 @@ int ip6_find_1stfragopt(struct sk_buff *skb, u8 **nexthdr) + + exthdr = (struct ipv6_opt_hdr *)(skb_network_header(skb) + + offset); +- len = ipv6_optlen(exthdr); +- if (len + offset >= IPV6_MAXPLEN) ++ offset += ipv6_optlen(exthdr); ++ if (offset > IPV6_MAXPLEN) + return -EINVAL; +- offset += len; + *nexthdr = &exthdr->nexthdr; + } + +diff --git a/net/irda/af_irda.c b/net/irda/af_irda.c +index eca46d3d3ff3..d7637c9218bd 100644 +--- a/net/irda/af_irda.c ++++ b/net/irda/af_irda.c +@@ -2228,7 +2228,7 @@ static int irda_getsockopt(struct socket *sock, int level, int optname, + { + struct sock *sk = sock->sk; + struct irda_sock *self = irda_sk(sk); +- struct irda_device_list list; ++ struct irda_device_list list = { 0 }; + struct irda_device_info *discoveries; + struct irda_ias_set * ias_opt; /* IAS get/query params */ + struct ias_object * ias_obj; /* Object in IAS */ +diff --git a/net/netfilter/nf_conntrack_extend.c b/net/netfilter/nf_conntrack_extend.c +index 1a9545965c0d..531ca55f1af6 100644 +--- a/net/netfilter/nf_conntrack_extend.c ++++ b/net/netfilter/nf_conntrack_extend.c +@@ -53,7 +53,11 @@ nf_ct_ext_create(struct nf_ct_ext **ext, enum nf_ct_ext_id id, + + rcu_read_lock(); + t = 
rcu_dereference(nf_ct_ext_types[id]); +- BUG_ON(t == NULL); ++ if (!t) { ++ rcu_read_unlock(); ++ return NULL; ++ } ++ + off = ALIGN(sizeof(struct nf_ct_ext), t->align); + len = off + t->len + var_alloc_len; + alloc_size = t->alloc_size + var_alloc_len; +@@ -88,7 +92,10 @@ void *__nf_ct_ext_add_length(struct nf_conn *ct, enum nf_ct_ext_id id, + + rcu_read_lock(); + t = rcu_dereference(nf_ct_ext_types[id]); +- BUG_ON(t == NULL); ++ if (!t) { ++ rcu_read_unlock(); ++ return NULL; ++ } + + newoff = ALIGN(old->len, t->align); + newlen = newoff + t->len + var_alloc_len; +@@ -186,6 +193,6 @@ void nf_ct_extend_unregister(struct nf_ct_ext_type *type) + RCU_INIT_POINTER(nf_ct_ext_types[type->id], NULL); + update_alloc_size(type); + mutex_unlock(&nf_ct_ext_type_mutex); +- rcu_barrier(); /* Wait for completion of call_rcu()'s */ ++ synchronize_rcu(); + } + EXPORT_SYMBOL_GPL(nf_ct_extend_unregister); +diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c +index fdcced6aa71d..78d0eaf5de61 100644 +--- a/net/sched/sch_sfq.c ++++ b/net/sched/sch_sfq.c +@@ -457,6 +457,7 @@ congestion_drop: + qdisc_drop(head, sch); + + slot_queue_add(slot, skb); ++ qdisc_tree_reduce_backlog(sch, 0, delta); + return NET_XMIT_CN; + } + +@@ -488,8 +489,10 @@ enqueue: + /* Return Congestion Notification only if we dropped a packet + * from this flow. 
+ */ +- if (qlen != slot->qlen) ++ if (qlen != slot->qlen) { ++ qdisc_tree_reduce_backlog(sch, 0, dropped - qdisc_pkt_len(skb)); + return NET_XMIT_CN; ++ } + + /* As we dropped a packet, better let upper stack know this */ + qdisc_tree_reduce_backlog(sch, 1, dropped); +diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c +index 29fa707d61fd..2bb7240c6f8b 100644 +--- a/net/sctp/ipv6.c ++++ b/net/sctp/ipv6.c +@@ -491,7 +491,9 @@ static void sctp_v6_to_addr(union sctp_addr *addr, struct in6_addr *saddr, + { + addr->sa.sa_family = AF_INET6; + addr->v6.sin6_port = port; ++ addr->v6.sin6_flowinfo = 0; + addr->v6.sin6_addr = *saddr; ++ addr->v6.sin6_scope_id = 0; + } + + /* Compare addresses exactly. +diff --git a/net/tipc/netlink_compat.c b/net/tipc/netlink_compat.c +index e5ec86dd8dc1..a8dd585fcc38 100644 +--- a/net/tipc/netlink_compat.c ++++ b/net/tipc/netlink_compat.c +@@ -256,13 +256,15 @@ static int tipc_nl_compat_dumpit(struct tipc_nl_compat_cmd_dump *cmd, + arg = nlmsg_new(0, GFP_KERNEL); + if (!arg) { + kfree_skb(msg->rep); ++ msg->rep = NULL; + return -ENOMEM; + } + + err = __tipc_nl_compat_dumpit(cmd, msg, arg); +- if (err) ++ if (err) { + kfree_skb(msg->rep); +- ++ msg->rep = NULL; ++ } + kfree_skb(arg); + + return err; +diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c +index 155070f500aa..04a025218d13 100644 +--- a/net/xfrm/xfrm_policy.c ++++ b/net/xfrm/xfrm_policy.c +@@ -3255,9 +3255,15 @@ int xfrm_migrate(const struct xfrm_selector *sel, u8 dir, u8 type, + struct xfrm_state *x_new[XFRM_MAX_DEPTH]; + struct xfrm_migrate *mp; + ++ /* Stage 0 - sanity checks */ + if ((err = xfrm_migrate_check(m, num_migrate)) < 0) + goto out; + ++ if (dir >= XFRM_POLICY_MAX) { ++ err = -EINVAL; ++ goto out; ++ } ++ + /* Stage 1 - find policy */ + if ((pol = xfrm_migrate_policy_find(sel, dir, type, net)) == NULL) { + err = -ENOENT; +diff --git a/sound/core/control.c b/sound/core/control.c +index b4fe9b002512..bd01d492f46a 100644 +--- a/sound/core/control.c ++++ 
b/sound/core/control.c +@@ -1126,7 +1126,7 @@ static int snd_ctl_elem_user_tlv(struct snd_kcontrol *kcontrol, + mutex_lock(&ue->card->user_ctl_lock); + change = ue->tlv_data_size != size; + if (!change) +- change = memcmp(ue->tlv_data, new_data, size); ++ change = memcmp(ue->tlv_data, new_data, size) != 0; + kfree(ue->tlv_data); + ue->tlv_data = new_data; + ue->tlv_data_size = size; +diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c +index 8158ba354b48..b6f5f47048ba 100644 +--- a/sound/core/seq/seq_clientmgr.c ++++ b/sound/core/seq/seq_clientmgr.c +@@ -1530,19 +1530,14 @@ static int snd_seq_ioctl_create_queue(struct snd_seq_client *client, + void __user *arg) + { + struct snd_seq_queue_info info; +- int result; + struct snd_seq_queue *q; + + if (copy_from_user(&info, arg, sizeof(info))) + return -EFAULT; + +- result = snd_seq_queue_alloc(client->number, info.locked, info.flags); +- if (result < 0) +- return result; +- +- q = queueptr(result); +- if (q == NULL) +- return -EINVAL; ++ q = snd_seq_queue_alloc(client->number, info.locked, info.flags); ++ if (IS_ERR(q)) ++ return PTR_ERR(q); + + info.queue = q->queue; + info.locked = q->locked; +@@ -1552,7 +1547,7 @@ static int snd_seq_ioctl_create_queue(struct snd_seq_client *client, + if (! 
info.name[0]) + snprintf(info.name, sizeof(info.name), "Queue-%d", q->queue); + strlcpy(q->name, info.name, sizeof(q->name)); +- queuefree(q); ++ snd_use_lock_free(&q->use_lock); + + if (copy_to_user(arg, &info, sizeof(info))) + return -EFAULT; +diff --git a/sound/core/seq/seq_queue.c b/sound/core/seq/seq_queue.c +index f676ae53c477..a7bd074f6c0e 100644 +--- a/sound/core/seq/seq_queue.c ++++ b/sound/core/seq/seq_queue.c +@@ -184,22 +184,26 @@ void __exit snd_seq_queues_delete(void) + static void queue_use(struct snd_seq_queue *queue, int client, int use); + + /* allocate a new queue - +- * return queue index value or negative value for error ++ * return pointer to new queue or ERR_PTR(-errno) for error ++ * The new queue's use_lock is set to 1. It is the caller's responsibility to ++ * call snd_use_lock_free(&q->use_lock). + */ +-int snd_seq_queue_alloc(int client, int locked, unsigned int info_flags) ++struct snd_seq_queue *snd_seq_queue_alloc(int client, int locked, unsigned int info_flags) + { + struct snd_seq_queue *q; + + q = queue_new(client, locked); + if (q == NULL) +- return -ENOMEM; ++ return ERR_PTR(-ENOMEM); + q->info_flags = info_flags; + queue_use(q, client, 1); ++ snd_use_lock_use(&q->use_lock); + if (queue_list_add(q) < 0) { ++ snd_use_lock_free(&q->use_lock); + queue_delete(q); +- return -ENOMEM; ++ return ERR_PTR(-ENOMEM); + } +- return q->queue; ++ return q; + } + + /* delete a queue - queue must be owned by the client */ +diff --git a/sound/core/seq/seq_queue.h b/sound/core/seq/seq_queue.h +index 30c8111477f6..719093489a2c 100644 +--- a/sound/core/seq/seq_queue.h ++++ b/sound/core/seq/seq_queue.h +@@ -71,7 +71,7 @@ void snd_seq_queues_delete(void); + + + /* create new queue (constructor) */ +-int snd_seq_queue_alloc(int client, int locked, unsigned int flags); ++struct snd_seq_queue *snd_seq_queue_alloc(int client, int locked, unsigned int flags); + + /* delete queue (destructor) */ + int snd_seq_queue_delete(int client, int queueid); +diff 
--git a/sound/isa/msnd/msnd_midi.c b/sound/isa/msnd/msnd_midi.c +index ffc67fd80c23..58e59cd3c95c 100644 +--- a/sound/isa/msnd/msnd_midi.c ++++ b/sound/isa/msnd/msnd_midi.c +@@ -120,24 +120,24 @@ void snd_msndmidi_input_read(void *mpuv) + unsigned long flags; + struct snd_msndmidi *mpu = mpuv; + void *pwMIDQData = mpu->dev->mappedbase + MIDQ_DATA_BUFF; ++ u16 head, tail, size; + + spin_lock_irqsave(&mpu->input_lock, flags); +- while (readw(mpu->dev->MIDQ + JQS_wTail) != +- readw(mpu->dev->MIDQ + JQS_wHead)) { +- u16 wTmp, val; +- val = readw(pwMIDQData + 2 * readw(mpu->dev->MIDQ + JQS_wHead)); +- +- if (test_bit(MSNDMIDI_MODE_BIT_INPUT_TRIGGER, +- &mpu->mode)) +- snd_rawmidi_receive(mpu->substream_input, +- (unsigned char *)&val, 1); +- +- wTmp = readw(mpu->dev->MIDQ + JQS_wHead) + 1; +- if (wTmp > readw(mpu->dev->MIDQ + JQS_wSize)) +- writew(0, mpu->dev->MIDQ + JQS_wHead); +- else +- writew(wTmp, mpu->dev->MIDQ + JQS_wHead); ++ head = readw(mpu->dev->MIDQ + JQS_wHead); ++ tail = readw(mpu->dev->MIDQ + JQS_wTail); ++ size = readw(mpu->dev->MIDQ + JQS_wSize); ++ if (head > size || tail > size) ++ goto out; ++ while (head != tail) { ++ unsigned char val = readw(pwMIDQData + 2 * head); ++ ++ if (test_bit(MSNDMIDI_MODE_BIT_INPUT_TRIGGER, &mpu->mode)) ++ snd_rawmidi_receive(mpu->substream_input, &val, 1); ++ if (++head > size) ++ head = 0; ++ writew(head, mpu->dev->MIDQ + JQS_wHead); + } ++ out: + spin_unlock_irqrestore(&mpu->input_lock, flags); + } + EXPORT_SYMBOL(snd_msndmidi_input_read); +diff --git a/sound/isa/msnd/msnd_pinnacle.c b/sound/isa/msnd/msnd_pinnacle.c +index 4c072666115d..a31ea6c22d19 100644 +--- a/sound/isa/msnd/msnd_pinnacle.c ++++ b/sound/isa/msnd/msnd_pinnacle.c +@@ -170,23 +170,24 @@ static irqreturn_t snd_msnd_interrupt(int irq, void *dev_id) + { + struct snd_msnd *chip = dev_id; + void *pwDSPQData = chip->mappedbase + DSPQ_DATA_BUFF; ++ u16 head, tail, size; + + /* Send ack to DSP */ + /* inb(chip->io + HP_RXL); */ + + /* Evaluate queued DSP 
messages */ +- while (readw(chip->DSPQ + JQS_wTail) != readw(chip->DSPQ + JQS_wHead)) { +- u16 wTmp; +- +- snd_msnd_eval_dsp_msg(chip, +- readw(pwDSPQData + 2 * readw(chip->DSPQ + JQS_wHead))); +- +- wTmp = readw(chip->DSPQ + JQS_wHead) + 1; +- if (wTmp > readw(chip->DSPQ + JQS_wSize)) +- writew(0, chip->DSPQ + JQS_wHead); +- else +- writew(wTmp, chip->DSPQ + JQS_wHead); ++ head = readw(chip->DSPQ + JQS_wHead); ++ tail = readw(chip->DSPQ + JQS_wTail); ++ size = readw(chip->DSPQ + JQS_wSize); ++ if (head > size || tail > size) ++ goto out; ++ while (head != tail) { ++ snd_msnd_eval_dsp_msg(chip, readw(pwDSPQData + 2 * head)); ++ if (++head > size) ++ head = 0; ++ writew(head, chip->DSPQ + JQS_wHead); + } ++ out: + /* Send ack to DSP */ + inb(chip->io + HP_RXL); + return IRQ_HANDLED; +diff --git a/sound/pci/au88x0/au88x0_core.c b/sound/pci/au88x0/au88x0_core.c +index 74177189063c..d3125c169684 100644 +--- a/sound/pci/au88x0/au88x0_core.c ++++ b/sound/pci/au88x0/au88x0_core.c +@@ -2150,8 +2150,7 @@ vortex_adb_allocroute(vortex_t *vortex, int dma, int nr_ch, int dir, + stream->resources, en, + VORTEX_RESOURCE_SRC)) < 0) { + memset(stream->resources, 0, +- sizeof(unsigned char) * +- VORTEX_RESOURCE_LAST); ++ sizeof(stream->resources)); + return -EBUSY; + } + if (stream->type != VORTEX_PCM_A3D) { +@@ -2161,7 +2160,7 @@ vortex_adb_allocroute(vortex_t *vortex, int dma, int nr_ch, int dir, + VORTEX_RESOURCE_MIXIN)) < 0) { + memset(stream->resources, + 0, +- sizeof(unsigned char) * VORTEX_RESOURCE_LAST); ++ sizeof(stream->resources)); + return -EBUSY; + } + } +@@ -2174,8 +2173,7 @@ vortex_adb_allocroute(vortex_t *vortex, int dma, int nr_ch, int dir, + stream->resources, en, + VORTEX_RESOURCE_A3D)) < 0) { + memset(stream->resources, 0, +- sizeof(unsigned char) * +- VORTEX_RESOURCE_LAST); ++ sizeof(stream->resources)); + dev_err(vortex->card->dev, + "out of A3D sources. 
Sorry\n"); + return -EBUSY; +@@ -2289,8 +2287,7 @@ vortex_adb_allocroute(vortex_t *vortex, int dma, int nr_ch, int dir, + VORTEX_RESOURCE_MIXOUT)) + < 0) { + memset(stream->resources, 0, +- sizeof(unsigned char) * +- VORTEX_RESOURCE_LAST); ++ sizeof(stream->resources)); + return -EBUSY; + } + if ((src[i] = +@@ -2298,8 +2295,7 @@ vortex_adb_allocroute(vortex_t *vortex, int dma, int nr_ch, int dir, + stream->resources, en, + VORTEX_RESOURCE_SRC)) < 0) { + memset(stream->resources, 0, +- sizeof(unsigned char) * +- VORTEX_RESOURCE_LAST); ++ sizeof(stream->resources)); + return -EBUSY; + } + } +diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c +index 91b77bad03ea..a780540b7d4f 100644 +--- a/sound/pci/hda/patch_conexant.c ++++ b/sound/pci/hda/patch_conexant.c +@@ -827,6 +827,7 @@ static const struct snd_pci_quirk cxt5066_fixups[] = { + SND_PCI_QUIRK(0x17aa, 0x390b, "Lenovo G50-80", CXT_FIXUP_STEREO_DMIC), + SND_PCI_QUIRK(0x17aa, 0x3975, "Lenovo U300s", CXT_FIXUP_STEREO_DMIC), + SND_PCI_QUIRK(0x17aa, 0x3977, "Lenovo IdeaPad U310", CXT_FIXUP_STEREO_DMIC), ++ SND_PCI_QUIRK(0x17aa, 0x3978, "Lenovo G50-70", CXT_FIXUP_STEREO_DMIC), + SND_PCI_QUIRK(0x17aa, 0x397b, "Lenovo S205", CXT_FIXUP_STEREO_DMIC), + SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI), + SND_PCI_QUIRK(0x1c06, 0x2011, "Lemote A1004", CXT_PINCFG_LEMOTE_A1004), +diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c +index 7bb9c087f3dc..4599983cfc8a 100644 +--- a/sound/soc/sh/rcar/ssi.c ++++ b/sound/soc/sh/rcar/ssi.c +@@ -39,6 +39,7 @@ + #define SCKP (1 << 13) /* Serial Bit Clock Polarity */ + #define SWSP (1 << 12) /* Serial WS Polarity */ + #define SDTA (1 << 10) /* Serial Data Alignment */ ++#define PDTA (1 << 9) /* Parallel Data Alignment */ + #define DEL (1 << 8) /* Serial Data Delay */ + #define CKDV(v) (v << 4) /* Serial Clock Division Ratio */ + #define TRMD (1 << 1) /* Transmit/Receive Mode Select */ +@@ -278,7 +279,7 @@ static int rsnd_ssi_init(struct 
rsnd_mod *mod, + struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io); + u32 cr; + +- cr = FORCE; ++ cr = FORCE | PDTA; + + /* + * always use 32bit system word for easy clock calculation. +diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c +index 4a033cbbd361..33c544acf3f6 100644 +--- a/sound/usb/mixer.c ++++ b/sound/usb/mixer.c +@@ -541,6 +541,8 @@ int snd_usb_mixer_vol_tlv(struct snd_kcontrol *kcontrol, int op_flag, + + if (size < sizeof(scale)) + return -ENOMEM; ++ if (cval->min_mute) ++ scale[0] = SNDRV_CTL_TLVT_DB_MINMAX_MUTE; + scale[2] = cval->dBmin; + scale[3] = cval->dBmax; + if (copy_to_user(_tlv, scale, sizeof(scale))) +diff --git a/sound/usb/mixer.h b/sound/usb/mixer.h +index 3417ef347e40..2b4b067646ab 100644 +--- a/sound/usb/mixer.h ++++ b/sound/usb/mixer.h +@@ -64,6 +64,7 @@ struct usb_mixer_elem_info { + int cached; + int cache_val[MAX_CHANNELS]; + u8 initialized; ++ u8 min_mute; + void *private_data; + }; + +diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c +index 940442848fc8..de3f18059213 100644 +--- a/sound/usb/mixer_quirks.c ++++ b/sound/usb/mixer_quirks.c +@@ -1863,6 +1863,12 @@ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer, + if (unitid == 7 && cval->min == 0 && cval->max == 50) + snd_dragonfly_quirk_db_scale(mixer, kctl); + break; ++ /* lowest playback value is muted on C-Media devices */ ++ case USB_ID(0x0d8c, 0x000c): ++ case USB_ID(0x0d8c, 0x0014): ++ if (strstr(kctl->id.name, "Playback")) ++ cval->min_mute = 1; ++ break; + } + } + +diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c +index 2c71e5682716..693b2ac6720a 100644 +--- a/sound/usb/quirks.c ++++ b/sound/usb/quirks.c +@@ -1138,6 +1138,7 @@ bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip) + case USB_ID(0x04D8, 0xFEEA): /* Benchmark DAC1 Pre */ + case USB_ID(0x0556, 0x0014): /* Phoenix Audio TMX320VC */ + case USB_ID(0x074D, 0x3553): /* Outlaw RR2150 (Micronas UAC3553B) */ ++ case USB_ID(0x1395, 0x740a): /* Sennheiser DECT */ + 
case USB_ID(0x1901, 0x0191): /* GE B850V3 CP2114 audio interface */ + case USB_ID(0x1de7, 0x0013): /* Phoenix Audio MT202exe */ + case USB_ID(0x1de7, 0x0014): /* Phoenix Audio TMX320 */