From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:3.12 commit in: /
Date: Fri, 10 Apr 2015 18:15:44 +0000 (UTC)
Message-ID: <1428689033.4a69ed1442f33c9f7848db2132a8fe280a9a1a8c.mpagano@gentoo>
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1039_linux-3.12.40.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 4a69ed1442f33c9f7848db2132a8fe280a9a1a8c
X-VCS-Branch: 3.12

commit:     4a69ed1442f33c9f7848db2132a8fe280a9a1a8c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 10 18:03:53 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr 10 18:03:53 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4a69ed14

Linux patch 3.12.40

 0000_README              |    4 +
 1039_linux-3.12.40.patch | 6417 ++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6421 insertions(+)

diff --git a/0000_README b/0000_README
index a488cd4..ef54dd8 100644
--- a/0000_README
+++ b/0000_README
@@ -198,6 +198,10 @@ Patch:  1038_linux-3.12.39.patch
 From:   http://www.kernel.org
 Desc:   Linux 3.12.39
 
+Patch:  1039_linux-3.12.40.patch
+From:   http://www.kernel.org
+Desc:   Linux 3.12.40
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
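The 0000_README stanzas above double as the application manifest for this branch: consumers apply each listed patch, in numeric-prefix order, on top of the vanilla kernel sources. Below is a minimal illustrative sketch of that loop; it is not part of this commit, and the directory names and the `patch -p1` strip level are assumptions chosen for the example.

#!/usr/bin/env python3
# Hypothetical sketch: apply the patches listed in 0000_README in order.
# Assumes a proj/linux-patches checkout next to an unpacked kernel tree;
# the "Patch:" field format matches the README entries shown above.
import re
import subprocess
from pathlib import Path

PATCH_DIR = Path("linux-patches")   # assumed checkout of proj/linux-patches
KERNEL_DIR = Path("linux-3.12")     # assumed unpacked vanilla kernel sources

readme = (PATCH_DIR / "0000_README").read_text()
# Each entry looks like: "Patch:  1039_linux-3.12.40.patch"
patches = re.findall(r"^Patch:\s+(\S+)", readme, flags=re.MULTILINE)

for name in patches:                # numeric prefixes define apply order
    print(f"applying {name}")
    with open(PATCH_DIR / name) as fh:
        subprocess.run(["patch", "-p1"], stdin=fh, cwd=KERNEL_DIR, check=True)

The rest of this mail is the new 1039_linux-3.12.40.patch itself, reproduced verbatim.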
diff --git a/1039_linux-3.12.40.patch b/1039_linux-3.12.40.patch new file mode 100644 index 0000000..c5f3cba --- /dev/null +++ b/1039_linux-3.12.40.patch @@ -0,0 +1,6417 @@ +diff --git a/Documentation/stable_kernel_rules.txt b/Documentation/stable_kernel_rules.txt +index b0714d8f678a..8dfb6a5f427d 100644 +--- a/Documentation/stable_kernel_rules.txt ++++ b/Documentation/stable_kernel_rules.txt +@@ -29,6 +29,9 @@ Rules on what kind of patches are accepted, and which ones are not, into the + + Procedure for submitting patches to the -stable tree: + ++ - If the patch covers files in net/ or drivers/net please follow netdev stable ++ submission guidelines as described in ++ Documentation/networking/netdev-FAQ.txt + - Send the patch, after verifying that it follows the above rules, to + stable@vger.kernel.org. You must note the upstream commit ID in the + changelog of your submission, as well as the kernel version you wish +diff --git a/Makefile b/Makefile +index 18a1d91bda79..4e732d8bf663 100644 +--- a/Makefile ++++ b/Makefile +@@ -1,6 +1,6 @@ + VERSION = 3 + PATCHLEVEL = 12 +-SUBLEVEL = 39 ++SUBLEVEL = 40 + EXTRAVERSION = + NAME = One Giant Leap for Frogkind + +diff --git a/arch/arc/boot/dts/nsimosci.dts b/arch/arc/boot/dts/nsimosci.dts +index 4f31b2eb5cdf..4c169d825415 100644 +--- a/arch/arc/boot/dts/nsimosci.dts ++++ b/arch/arc/boot/dts/nsimosci.dts +@@ -20,7 +20,7 @@ + /* this is for console on PGU */ + /* bootargs = "console=tty0 consoleblank=0"; */ + /* this is for console on serial */ +- bootargs = "earlycon=uart8250,mmio32,0xc0000000,115200n8 console=ttyS0,115200n8 consoleblank=0 debug"; ++ bootargs = "earlycon=uart8250,mmio32,0xf0000000,115200n8 console=tty0 console=ttyS0,115200n8 consoleblank=0 debug"; + }; + + aliases { +@@ -46,9 +46,9 @@ + #interrupt-cells = <1>; + }; + +- uart0: serial@c0000000 { ++ uart0: serial@f0000000 { + compatible = "ns8250"; +- reg = <0xc0000000 0x2000>; ++ reg = <0xf0000000 0x2000>; + interrupts = <11>; + clock-frequency = <3686400>; + baud = <115200>; +@@ -57,21 +57,21 @@ + no-loopback-test = <1>; + }; + +- pgu0: pgu@c9000000 { ++ pgu0: pgu@f9000000 { + compatible = "snps,arcpgufb"; +- reg = <0xc9000000 0x400>; ++ reg = <0xf9000000 0x400>; + }; + +- ps2: ps2@c9001000 { ++ ps2: ps2@f9001000 { + compatible = "snps,arc_ps2"; +- reg = <0xc9000400 0x14>; ++ reg = <0xf9000400 0x14>; + interrupts = <13>; + interrupt-names = "arc_ps2_irq"; + }; + +- eth0: ethernet@c0003000 { ++ eth0: ethernet@f0003000 { + compatible = "snps,oscilan"; +- reg = <0xc0003000 0x44>; ++ reg = <0xf0003000 0x44>; + interrupts = <7>, <8>; + interrupt-names = "rx", "tx"; + }; +diff --git a/arch/arc/include/asm/kgdb.h b/arch/arc/include/asm/kgdb.h +index b65fca7ffeb5..fea931634136 100644 +--- a/arch/arc/include/asm/kgdb.h ++++ b/arch/arc/include/asm/kgdb.h +@@ -19,7 +19,7 @@ + * register API yet */ + #undef DBG_MAX_REG_NUM + +-#define GDB_MAX_REGS 39 ++#define GDB_MAX_REGS 87 + + #define BREAK_INSTR_SIZE 2 + #define CACHE_FLUSH_IS_SAFE 1 +@@ -33,23 +33,27 @@ static inline void arch_kgdb_breakpoint(void) + + extern void kgdb_trap(struct pt_regs *regs); + +-enum arc700_linux_regnums { ++/* This is the numbering of registers according to the GDB. See GDB's ++ * arc-tdep.h for details. ++ * ++ * Registers are ordered for GDB 7.5. It is incompatible with GDB 6.8. 
*/ ++enum arc_linux_regnums { + _R0 = 0, + _R1, _R2, _R3, _R4, _R5, _R6, _R7, _R8, _R9, _R10, _R11, _R12, _R13, + _R14, _R15, _R16, _R17, _R18, _R19, _R20, _R21, _R22, _R23, _R24, + _R25, _R26, +- _BTA = 27, +- _LP_START = 28, +- _LP_END = 29, +- _LP_COUNT = 30, +- _STATUS32 = 31, +- _BLINK = 32, +- _FP = 33, +- __SP = 34, +- _EFA = 35, +- _RET = 36, +- _ORIG_R8 = 37, +- _STOP_PC = 38 ++ _FP = 27, ++ __SP = 28, ++ _R30 = 30, ++ _BLINK = 31, ++ _LP_COUNT = 60, ++ _STOP_PC = 64, ++ _RET = 64, ++ _LP_START = 65, ++ _LP_END = 66, ++ _STATUS32 = 67, ++ _ECR = 76, ++ _BTA = 82, + }; + + #else +diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h +index da1c77d39327..9ee7e01066f9 100644 +--- a/arch/arm/include/asm/atomic.h ++++ b/arch/arm/include/asm/atomic.h +@@ -114,7 +114,8 @@ static inline int atomic_sub_return(int i, atomic_t *v) + + static inline int atomic_cmpxchg(atomic_t *ptr, int old, int new) + { +- unsigned long oldval, res; ++ int oldval; ++ unsigned long res; + + smp_mb(); + +@@ -238,15 +239,15 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u) + + #ifndef CONFIG_GENERIC_ATOMIC64 + typedef struct { +- u64 __aligned(8) counter; ++ long long counter; + } atomic64_t; + + #define ATOMIC64_INIT(i) { (i) } + + #ifdef CONFIG_ARM_LPAE +-static inline u64 atomic64_read(const atomic64_t *v) ++static inline long long atomic64_read(const atomic64_t *v) + { +- u64 result; ++ long long result; + + __asm__ __volatile__("@ atomic64_read\n" + " ldrd %0, %H0, [%1]" +@@ -257,7 +258,7 @@ static inline u64 atomic64_read(const atomic64_t *v) + return result; + } + +-static inline void atomic64_set(atomic64_t *v, u64 i) ++static inline void atomic64_set(atomic64_t *v, long long i) + { + __asm__ __volatile__("@ atomic64_set\n" + " strd %2, %H2, [%1]" +@@ -266,9 +267,9 @@ static inline void atomic64_set(atomic64_t *v, u64 i) + ); + } + #else +-static inline u64 atomic64_read(const atomic64_t *v) ++static inline long long atomic64_read(const atomic64_t *v) + { +- u64 result; ++ long long result; + + __asm__ __volatile__("@ atomic64_read\n" + " ldrexd %0, %H0, [%1]" +@@ -279,9 +280,9 @@ static inline u64 atomic64_read(const atomic64_t *v) + return result; + } + +-static inline void atomic64_set(atomic64_t *v, u64 i) ++static inline void atomic64_set(atomic64_t *v, long long i) + { +- u64 tmp; ++ long long tmp; + + __asm__ __volatile__("@ atomic64_set\n" + "1: ldrexd %0, %H0, [%2]\n" +@@ -294,9 +295,9 @@ static inline void atomic64_set(atomic64_t *v, u64 i) + } + #endif + +-static inline void atomic64_add(u64 i, atomic64_t *v) ++static inline void atomic64_add(long long i, atomic64_t *v) + { +- u64 result; ++ long long result; + unsigned long tmp; + + __asm__ __volatile__("@ atomic64_add\n" +@@ -311,9 +312,9 @@ static inline void atomic64_add(u64 i, atomic64_t *v) + : "cc"); + } + +-static inline u64 atomic64_add_return(u64 i, atomic64_t *v) ++static inline long long atomic64_add_return(long long i, atomic64_t *v) + { +- u64 result; ++ long long result; + unsigned long tmp; + + smp_mb(); +@@ -334,9 +335,9 @@ static inline u64 atomic64_add_return(u64 i, atomic64_t *v) + return result; + } + +-static inline void atomic64_sub(u64 i, atomic64_t *v) ++static inline void atomic64_sub(long long i, atomic64_t *v) + { +- u64 result; ++ long long result; + unsigned long tmp; + + __asm__ __volatile__("@ atomic64_sub\n" +@@ -351,9 +352,9 @@ static inline void atomic64_sub(u64 i, atomic64_t *v) + : "cc"); + } + +-static inline u64 atomic64_sub_return(u64 i, atomic64_t *v) ++static 
inline long long atomic64_sub_return(long long i, atomic64_t *v) + { +- u64 result; ++ long long result; + unsigned long tmp; + + smp_mb(); +@@ -374,9 +375,10 @@ static inline u64 atomic64_sub_return(u64 i, atomic64_t *v) + return result; + } + +-static inline u64 atomic64_cmpxchg(atomic64_t *ptr, u64 old, u64 new) ++static inline long long atomic64_cmpxchg(atomic64_t *ptr, long long old, ++ long long new) + { +- u64 oldval; ++ long long oldval; + unsigned long res; + + smp_mb(); +@@ -398,9 +400,9 @@ static inline u64 atomic64_cmpxchg(atomic64_t *ptr, u64 old, u64 new) + return oldval; + } + +-static inline u64 atomic64_xchg(atomic64_t *ptr, u64 new) ++static inline long long atomic64_xchg(atomic64_t *ptr, long long new) + { +- u64 result; ++ long long result; + unsigned long tmp; + + smp_mb(); +@@ -419,9 +421,9 @@ static inline u64 atomic64_xchg(atomic64_t *ptr, u64 new) + return result; + } + +-static inline u64 atomic64_dec_if_positive(atomic64_t *v) ++static inline long long atomic64_dec_if_positive(atomic64_t *v) + { +- u64 result; ++ long long result; + unsigned long tmp; + + smp_mb(); +@@ -445,9 +447,9 @@ static inline u64 atomic64_dec_if_positive(atomic64_t *v) + return result; + } + +-static inline int atomic64_add_unless(atomic64_t *v, u64 a, u64 u) ++static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u) + { +- u64 val; ++ long long val; + unsigned long tmp; + int ret = 1; + +diff --git a/arch/arm/include/asm/bug.h b/arch/arm/include/asm/bug.h +index 7af5c6c3653a..b274bde24905 100644 +--- a/arch/arm/include/asm/bug.h ++++ b/arch/arm/include/asm/bug.h +@@ -2,6 +2,8 @@ + #define _ASMARM_BUG_H + + #include <linux/linkage.h> ++#include <linux/types.h> ++#include <asm/opcodes.h> + + #ifdef CONFIG_BUG + +@@ -12,10 +14,10 @@ + */ + #ifdef CONFIG_THUMB2_KERNEL + #define BUG_INSTR_VALUE 0xde02 +-#define BUG_INSTR_TYPE ".hword " ++#define BUG_INSTR(__value) __inst_thumb16(__value) + #else + #define BUG_INSTR_VALUE 0xe7f001f2 +-#define BUG_INSTR_TYPE ".word " ++#define BUG_INSTR(__value) __inst_arm(__value) + #endif + + +@@ -33,7 +35,7 @@ + + #define __BUG(__file, __line, __value) \ + do { \ +- asm volatile("1:\t" BUG_INSTR_TYPE #__value "\n" \ ++ asm volatile("1:\t" BUG_INSTR(__value) "\n" \ + ".pushsection .rodata.str, \"aMS\", %progbits, 1\n" \ + "2:\t.asciz " #__file "\n" \ + ".popsection\n" \ +@@ -48,7 +50,7 @@ do { \ + + #define __BUG(__file, __line, __value) \ + do { \ +- asm volatile(BUG_INSTR_TYPE #__value); \ ++ asm volatile(BUG_INSTR(__value) "\n"); \ + unreachable(); \ + } while (0) + #endif /* CONFIG_DEBUG_BUGVERBOSE */ +diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h +index e750a938fd3c..152b8e67f8b8 100644 +--- a/arch/arm/include/asm/memory.h ++++ b/arch/arm/include/asm/memory.h +@@ -285,7 +285,8 @@ static inline __deprecated void *bus_to_virt(unsigned long x) + #define ARCH_PFN_OFFSET PHYS_PFN_OFFSET + + #define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT) +-#define virt_addr_valid(kaddr) ((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory) ++#define virt_addr_valid(kaddr) (((unsigned long)(kaddr) >= PAGE_OFFSET && (unsigned long)(kaddr) < (unsigned long)high_memory) \ ++ && pfn_valid(__pa(kaddr) >> PAGE_SHIFT) ) + + #endif + +diff --git a/arch/arm/include/asm/pgtable-3level-hwdef.h b/arch/arm/include/asm/pgtable-3level-hwdef.h +index 626989fec4d3..9fd61c72a33a 100644 +--- a/arch/arm/include/asm/pgtable-3level-hwdef.h ++++ b/arch/arm/include/asm/pgtable-3level-hwdef.h 
+@@ -43,7 +43,7 @@ + #define PMD_SECT_BUFFERABLE (_AT(pmdval_t, 1) << 2) + #define PMD_SECT_CACHEABLE (_AT(pmdval_t, 1) << 3) + #define PMD_SECT_USER (_AT(pmdval_t, 1) << 6) /* AP[1] */ +-#define PMD_SECT_RDONLY (_AT(pmdval_t, 1) << 7) /* AP[2] */ ++#define PMD_SECT_AP2 (_AT(pmdval_t, 1) << 7) /* read only */ + #define PMD_SECT_S (_AT(pmdval_t, 3) << 8) + #define PMD_SECT_AF (_AT(pmdval_t, 1) << 10) + #define PMD_SECT_nG (_AT(pmdval_t, 1) << 11) +@@ -72,6 +72,7 @@ + #define PTE_TABLE_BIT (_AT(pteval_t, 1) << 1) + #define PTE_BUFFERABLE (_AT(pteval_t, 1) << 2) /* AttrIndx[0] */ + #define PTE_CACHEABLE (_AT(pteval_t, 1) << 3) /* AttrIndx[1] */ ++#define PTE_AP2 (_AT(pteval_t, 1) << 7) /* AP[2] */ + #define PTE_EXT_SHARED (_AT(pteval_t, 3) << 8) /* SH[1:0], inner shareable */ + #define PTE_EXT_AF (_AT(pteval_t, 1) << 10) /* Access Flag */ + #define PTE_EXT_NG (_AT(pteval_t, 1) << 11) /* nG */ +diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h +index ceb4807ee8b2..6a171d0afc12 100644 +--- a/arch/arm/include/asm/pgtable-3level.h ++++ b/arch/arm/include/asm/pgtable-3level.h +@@ -79,18 +79,19 @@ + #define L_PTE_PRESENT (_AT(pteval_t, 3) << 0) /* Present */ + #define L_PTE_FILE (_AT(pteval_t, 1) << 2) /* only when !PRESENT */ + #define L_PTE_USER (_AT(pteval_t, 1) << 6) /* AP[1] */ +-#define L_PTE_RDONLY (_AT(pteval_t, 1) << 7) /* AP[2] */ + #define L_PTE_SHARED (_AT(pteval_t, 3) << 8) /* SH[1:0], inner shareable */ + #define L_PTE_YOUNG (_AT(pteval_t, 1) << 10) /* AF */ + #define L_PTE_XN (_AT(pteval_t, 1) << 54) /* XN */ +-#define L_PTE_DIRTY (_AT(pteval_t, 1) << 55) /* unused */ +-#define L_PTE_SPECIAL (_AT(pteval_t, 1) << 56) /* unused */ ++#define L_PTE_DIRTY (_AT(pteval_t, 1) << 55) ++#define L_PTE_SPECIAL (_AT(pteval_t, 1) << 56) + #define L_PTE_NONE (_AT(pteval_t, 1) << 57) /* PROT_NONE */ ++#define L_PTE_RDONLY (_AT(pteval_t, 1) << 58) /* READ ONLY */ + +-#define PMD_SECT_VALID (_AT(pmdval_t, 1) << 0) +-#define PMD_SECT_DIRTY (_AT(pmdval_t, 1) << 55) +-#define PMD_SECT_SPLITTING (_AT(pmdval_t, 1) << 56) +-#define PMD_SECT_NONE (_AT(pmdval_t, 1) << 57) ++#define L_PMD_SECT_VALID (_AT(pmdval_t, 1) << 0) ++#define L_PMD_SECT_DIRTY (_AT(pmdval_t, 1) << 55) ++#define L_PMD_SECT_SPLITTING (_AT(pmdval_t, 1) << 56) ++#define L_PMD_SECT_NONE (_AT(pmdval_t, 1) << 57) ++#define L_PMD_SECT_RDONLY (_AT(pteval_t, 1) << 58) + + /* + * To be used in assembly code with the upper page attributes. +@@ -204,24 +205,29 @@ static inline pmd_t *pmd_offset(pud_t *pud, unsigned long addr) + #define pte_huge(pte) (pte_val(pte) && !(pte_val(pte) & PTE_TABLE_BIT)) + #define pte_mkhuge(pte) (__pte(pte_val(pte) & ~PTE_TABLE_BIT)) + +-#define pmd_young(pmd) (pmd_val(pmd) & PMD_SECT_AF) ++#define pmd_isset(pmd, val) ((u32)(val) == (val) ? 
pmd_val(pmd) & (val) \ ++ : !!(pmd_val(pmd) & (val))) ++#define pmd_isclear(pmd, val) (!(pmd_val(pmd) & (val))) ++ ++#define pmd_young(pmd) (pmd_isset((pmd), PMD_SECT_AF)) + + #define __HAVE_ARCH_PMD_WRITE +-#define pmd_write(pmd) (!(pmd_val(pmd) & PMD_SECT_RDONLY)) ++#define pmd_write(pmd) (pmd_isclear((pmd), L_PMD_SECT_RDONLY)) ++#define pmd_dirty(pmd) (pmd_isset((pmd), L_PMD_SECT_DIRTY)) + + #ifdef CONFIG_TRANSPARENT_HUGEPAGE +-#define pmd_trans_huge(pmd) (pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT)) +-#define pmd_trans_splitting(pmd) (pmd_val(pmd) & PMD_SECT_SPLITTING) ++#define pmd_trans_huge(pmd) (pmd_val(pmd) && !pmd_table(pmd)) ++#define pmd_trans_splitting(pmd) (pmd_isset((pmd), L_PMD_SECT_SPLITTING)) + #endif + + #define PMD_BIT_FUNC(fn,op) \ + static inline pmd_t pmd_##fn(pmd_t pmd) { pmd_val(pmd) op; return pmd; } + +-PMD_BIT_FUNC(wrprotect, |= PMD_SECT_RDONLY); ++PMD_BIT_FUNC(wrprotect, |= L_PMD_SECT_RDONLY); + PMD_BIT_FUNC(mkold, &= ~PMD_SECT_AF); +-PMD_BIT_FUNC(mksplitting, |= PMD_SECT_SPLITTING); +-PMD_BIT_FUNC(mkwrite, &= ~PMD_SECT_RDONLY); +-PMD_BIT_FUNC(mkdirty, |= PMD_SECT_DIRTY); ++PMD_BIT_FUNC(mksplitting, |= L_PMD_SECT_SPLITTING); ++PMD_BIT_FUNC(mkwrite, &= ~L_PMD_SECT_RDONLY); ++PMD_BIT_FUNC(mkdirty, |= L_PMD_SECT_DIRTY); + PMD_BIT_FUNC(mkyoung, |= PMD_SECT_AF); + + #define pmd_mkhuge(pmd) (__pmd(pmd_val(pmd) & ~PMD_TABLE_BIT)) +@@ -235,8 +241,8 @@ PMD_BIT_FUNC(mkyoung, |= PMD_SECT_AF); + + static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot) + { +- const pmdval_t mask = PMD_SECT_USER | PMD_SECT_XN | PMD_SECT_RDONLY | +- PMD_SECT_VALID | PMD_SECT_NONE; ++ const pmdval_t mask = PMD_SECT_USER | PMD_SECT_XN | L_PMD_SECT_RDONLY | ++ L_PMD_SECT_VALID | L_PMD_SECT_NONE; + pmd_val(pmd) = (pmd_val(pmd) & ~mask) | (pgprot_val(newprot) & mask); + return pmd; + } +@@ -247,8 +253,13 @@ static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr, + BUG_ON(addr >= TASK_SIZE); + + /* create a faulting entry if PROT_NONE protected */ +- if (pmd_val(pmd) & PMD_SECT_NONE) +- pmd_val(pmd) &= ~PMD_SECT_VALID; ++ if (pmd_val(pmd) & L_PMD_SECT_NONE) ++ pmd_val(pmd) &= ~L_PMD_SECT_VALID; ++ ++ if (pmd_write(pmd) && pmd_dirty(pmd)) ++ pmd_val(pmd) &= ~PMD_SECT_AP2; ++ else ++ pmd_val(pmd) |= PMD_SECT_AP2; + + *pmdp = __pmd(pmd_val(pmd) | PMD_SECT_nG); + flush_pmd_entry(pmdp); +diff --git a/arch/arm/include/asm/pgtable.h b/arch/arm/include/asm/pgtable.h +index 1571d126e9dd..a348bfd34f66 100644 +--- a/arch/arm/include/asm/pgtable.h ++++ b/arch/arm/include/asm/pgtable.h +@@ -214,12 +214,16 @@ static inline pte_t *pmd_page_vaddr(pmd_t pmd) + + #define pte_clear(mm,addr,ptep) set_pte_ext(ptep, __pte(0), 0) + ++#define pte_isset(pte, val) ((u32)(val) == (val) ? 
pte_val(pte) & (val) \ ++ : !!(pte_val(pte) & (val))) ++#define pte_isclear(pte, val) (!(pte_val(pte) & (val))) ++ + #define pte_none(pte) (!pte_val(pte)) +-#define pte_present(pte) (pte_val(pte) & L_PTE_PRESENT) +-#define pte_write(pte) (!(pte_val(pte) & L_PTE_RDONLY)) +-#define pte_dirty(pte) (pte_val(pte) & L_PTE_DIRTY) +-#define pte_young(pte) (pte_val(pte) & L_PTE_YOUNG) +-#define pte_exec(pte) (!(pte_val(pte) & L_PTE_XN)) ++#define pte_present(pte) (pte_isset((pte), L_PTE_PRESENT)) ++#define pte_write(pte) (pte_isclear((pte), L_PTE_RDONLY)) ++#define pte_dirty(pte) (pte_isset((pte), L_PTE_DIRTY)) ++#define pte_young(pte) (pte_isset((pte), L_PTE_YOUNG)) ++#define pte_exec(pte) (pte_isclear((pte), L_PTE_XN)) + #define pte_special(pte) (0) + + #define pte_present_user(pte) (pte_present(pte) && (pte_val(pte) & L_PTE_USER)) +diff --git a/arch/arm/kernel/traps.c b/arch/arm/kernel/traps.c +index 8e6cd760cc68..3c2f438e2daa 100644 +--- a/arch/arm/kernel/traps.c ++++ b/arch/arm/kernel/traps.c +@@ -347,15 +347,17 @@ void arm_notify_die(const char *str, struct pt_regs *regs, + int is_valid_bugaddr(unsigned long pc) + { + #ifdef CONFIG_THUMB2_KERNEL +- unsigned short bkpt; ++ u16 bkpt; ++ u16 insn = __opcode_to_mem_thumb16(BUG_INSTR_VALUE); + #else +- unsigned long bkpt; ++ u32 bkpt; ++ u32 insn = __opcode_to_mem_arm(BUG_INSTR_VALUE); + #endif + + if (probe_kernel_address((unsigned *)pc, bkpt)) + return 0; + +- return bkpt == BUG_INSTR_VALUE; ++ return bkpt == insn; + } + + #endif +diff --git a/arch/arm/mach-at91/pm.h b/arch/arm/mach-at91/pm.h +index 2f5908f0b8c5..d8af0755bddc 100644 +--- a/arch/arm/mach-at91/pm.h ++++ b/arch/arm/mach-at91/pm.h +@@ -37,7 +37,7 @@ static inline void at91rm9200_standby(void) + " mcr p15, 0, %0, c7, c0, 4\n\t" + " str %5, [%1, %2]" + : +- : "r" (0), "r" (AT91_BASE_SYS), "r" (AT91RM9200_SDRAMC_LPR), ++ : "r" (0), "r" (at91_ramc_base[0]), "r" (AT91RM9200_SDRAMC_LPR), + "r" (1), "r" (AT91RM9200_SDRAMC_SRR), + "r" (lpr)); + } +diff --git a/arch/arm/mach-omap2/cpuidle44xx.c b/arch/arm/mach-omap2/cpuidle44xx.c +index 4c8982ae9529..d159ec39cd3e 100644 +--- a/arch/arm/mach-omap2/cpuidle44xx.c ++++ b/arch/arm/mach-omap2/cpuidle44xx.c +@@ -14,6 +14,7 @@ + #include <linux/cpuidle.h> + #include <linux/cpu_pm.h> + #include <linux/export.h> ++#include <linux/clockchips.h> + + #include <asm/cpuidle.h> + #include <asm/proc-fns.h> +@@ -80,6 +81,7 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev, + int index) + { + struct idle_statedata *cx = state_ptr + index; ++ int cpu_id = smp_processor_id(); + + /* + * CPU0 has to wait and stay ON until CPU1 is OFF state. +@@ -104,6 +106,8 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev, + } + } + ++ clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ENTER, &cpu_id); ++ + /* + * Call idle CPU PM enter notifier chain so that + * VFP and per CPU interrupt context is saved. +@@ -147,6 +151,8 @@ static int omap_enter_idle_coupled(struct cpuidle_device *dev, + (cx->mpu_logic_state == PWRDM_POWER_OFF)) + cpu_cluster_pm_exit(); + ++ clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_EXIT, &cpu_id); ++ + fail: + cpuidle_coupled_parallel_barrier(dev, &abort_barrier); + cpu_done[dev->cpu] = false; +@@ -154,6 +160,16 @@ fail: + return index; + } + ++/* ++ * For each cpu, setup the broadcast timer because local timers ++ * stops for the states above C1. 
++ */ ++static void omap_setup_broadcast_timer(void *arg) ++{ ++ int cpu = smp_processor_id(); ++ clockevents_notify(CLOCK_EVT_NOTIFY_BROADCAST_ON, &cpu); ++} ++ + static struct cpuidle_driver omap4_idle_driver = { + .name = "omap4_idle", + .owner = THIS_MODULE, +@@ -171,8 +187,7 @@ static struct cpuidle_driver omap4_idle_driver = { + /* C2 - CPU0 OFF + CPU1 OFF + MPU CSWR */ + .exit_latency = 328 + 440, + .target_residency = 960, +- .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_COUPLED | +- CPUIDLE_FLAG_TIMER_STOP, ++ .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_COUPLED, + .enter = omap_enter_idle_coupled, + .name = "C2", + .desc = "CPUx OFF, MPUSS CSWR", +@@ -181,8 +196,7 @@ static struct cpuidle_driver omap4_idle_driver = { + /* C3 - CPU0 OFF + CPU1 OFF + MPU OSWR */ + .exit_latency = 460 + 518, + .target_residency = 1100, +- .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_COUPLED | +- CPUIDLE_FLAG_TIMER_STOP, ++ .flags = CPUIDLE_FLAG_TIME_VALID | CPUIDLE_FLAG_COUPLED, + .enter = omap_enter_idle_coupled, + .name = "C3", + .desc = "CPUx OFF, MPUSS OSWR", +@@ -213,5 +227,8 @@ int __init omap4_idle_init(void) + if (!cpu_clkdm[0] || !cpu_clkdm[1]) + return -ENODEV; + ++ /* Configure the broadcast timer on each cpu */ ++ on_each_cpu(omap_setup_broadcast_timer, NULL, 1); ++ + return cpuidle_register(&omap4_idle_driver, cpu_online_mask); + } +diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c +index 832adb1a6dd2..3d29fb972cd0 100644 +--- a/arch/arm/mach-omap2/omap_hwmod.c ++++ b/arch/arm/mach-omap2/omap_hwmod.c +@@ -2179,6 +2179,8 @@ static int _enable(struct omap_hwmod *oh) + oh->mux->pads_dynamic))) { + omap_hwmod_mux(oh->mux, _HWMOD_STATE_ENABLED); + _reconfigure_io_chain(); ++ } else if (oh->flags & HWMOD_FORCE_MSTANDBY) { ++ _reconfigure_io_chain(); + } + + _add_initiator_dep(oh, mpu_oh); +@@ -2285,6 +2287,8 @@ static int _idle(struct omap_hwmod *oh) + if (oh->mux && oh->mux->pads_dynamic) { + omap_hwmod_mux(oh->mux, _HWMOD_STATE_IDLE); + _reconfigure_io_chain(); ++ } else if (oh->flags & HWMOD_FORCE_MSTANDBY) { ++ _reconfigure_io_chain(); + } + + oh->_state = _HWMOD_STATE_IDLE; +diff --git a/arch/arm/mm/proc-v7-3level.S b/arch/arm/mm/proc-v7-3level.S +index 22e3ad63500c..eb81123a845d 100644 +--- a/arch/arm/mm/proc-v7-3level.S ++++ b/arch/arm/mm/proc-v7-3level.S +@@ -86,8 +86,13 @@ ENTRY(cpu_v7_set_pte_ext) + tst rh, #1 << (57 - 32) @ L_PTE_NONE + bicne rl, #L_PTE_VALID + bne 1f +- tst rh, #1 << (55 - 32) @ L_PTE_DIRTY +- orreq rl, #L_PTE_RDONLY ++ ++ eor ip, rh, #1 << (55 - 32) @ toggle L_PTE_DIRTY in temp reg to ++ @ test for !L_PTE_DIRTY || L_PTE_RDONLY ++ tst ip, #1 << (55 - 32) | 1 << (58 - 32) ++ orrne rl, #PTE_AP2 ++ biceq rl, #PTE_AP2 ++ + 1: strd r2, r3, [r0] + ALT_SMP(W(nop)) + ALT_UP (mcr p15, 0, r0, c7, c10, 1) @ flush_pte +diff --git a/arch/mips/include/asm/reg.h b/arch/mips/include/asm/reg.h +index 910e71a12466..b8343ccbc989 100644 +--- a/arch/mips/include/asm/reg.h ++++ b/arch/mips/include/asm/reg.h +@@ -12,116 +12,194 @@ + #ifndef __ASM_MIPS_REG_H + #define __ASM_MIPS_REG_H + +- +-#if defined(CONFIG_32BIT) || defined(WANT_COMPAT_REG_H) +- +-#define EF_R0 6 +-#define EF_R1 7 +-#define EF_R2 8 +-#define EF_R3 9 +-#define EF_R4 10 +-#define EF_R5 11 +-#define EF_R6 12 +-#define EF_R7 13 +-#define EF_R8 14 +-#define EF_R9 15 +-#define EF_R10 16 +-#define EF_R11 17 +-#define EF_R12 18 +-#define EF_R13 19 +-#define EF_R14 20 +-#define EF_R15 21 +-#define EF_R16 22 +-#define EF_R17 23 +-#define EF_R18 24 +-#define EF_R19 25 +-#define EF_R20 26 
+-#define EF_R21 27 +-#define EF_R22 28 +-#define EF_R23 29 +-#define EF_R24 30 +-#define EF_R25 31 ++#define MIPS32_EF_R0 6 ++#define MIPS32_EF_R1 7 ++#define MIPS32_EF_R2 8 ++#define MIPS32_EF_R3 9 ++#define MIPS32_EF_R4 10 ++#define MIPS32_EF_R5 11 ++#define MIPS32_EF_R6 12 ++#define MIPS32_EF_R7 13 ++#define MIPS32_EF_R8 14 ++#define MIPS32_EF_R9 15 ++#define MIPS32_EF_R10 16 ++#define MIPS32_EF_R11 17 ++#define MIPS32_EF_R12 18 ++#define MIPS32_EF_R13 19 ++#define MIPS32_EF_R14 20 ++#define MIPS32_EF_R15 21 ++#define MIPS32_EF_R16 22 ++#define MIPS32_EF_R17 23 ++#define MIPS32_EF_R18 24 ++#define MIPS32_EF_R19 25 ++#define MIPS32_EF_R20 26 ++#define MIPS32_EF_R21 27 ++#define MIPS32_EF_R22 28 ++#define MIPS32_EF_R23 29 ++#define MIPS32_EF_R24 30 ++#define MIPS32_EF_R25 31 + + /* + * k0/k1 unsaved + */ +-#define EF_R26 32 +-#define EF_R27 33 ++#define MIPS32_EF_R26 32 ++#define MIPS32_EF_R27 33 + +-#define EF_R28 34 +-#define EF_R29 35 +-#define EF_R30 36 +-#define EF_R31 37 ++#define MIPS32_EF_R28 34 ++#define MIPS32_EF_R29 35 ++#define MIPS32_EF_R30 36 ++#define MIPS32_EF_R31 37 + + /* + * Saved special registers + */ +-#define EF_LO 38 +-#define EF_HI 39 +- +-#define EF_CP0_EPC 40 +-#define EF_CP0_BADVADDR 41 +-#define EF_CP0_STATUS 42 +-#define EF_CP0_CAUSE 43 +-#define EF_UNUSED0 44 +- +-#define EF_SIZE 180 +- +-#endif +- +-#if defined(CONFIG_64BIT) && !defined(WANT_COMPAT_REG_H) +- +-#define EF_R0 0 +-#define EF_R1 1 +-#define EF_R2 2 +-#define EF_R3 3 +-#define EF_R4 4 +-#define EF_R5 5 +-#define EF_R6 6 +-#define EF_R7 7 +-#define EF_R8 8 +-#define EF_R9 9 +-#define EF_R10 10 +-#define EF_R11 11 +-#define EF_R12 12 +-#define EF_R13 13 +-#define EF_R14 14 +-#define EF_R15 15 +-#define EF_R16 16 +-#define EF_R17 17 +-#define EF_R18 18 +-#define EF_R19 19 +-#define EF_R20 20 +-#define EF_R21 21 +-#define EF_R22 22 +-#define EF_R23 23 +-#define EF_R24 24 +-#define EF_R25 25 ++#define MIPS32_EF_LO 38 ++#define MIPS32_EF_HI 39 ++ ++#define MIPS32_EF_CP0_EPC 40 ++#define MIPS32_EF_CP0_BADVADDR 41 ++#define MIPS32_EF_CP0_STATUS 42 ++#define MIPS32_EF_CP0_CAUSE 43 ++#define MIPS32_EF_UNUSED0 44 ++ ++#define MIPS32_EF_SIZE 180 ++ ++#define MIPS64_EF_R0 0 ++#define MIPS64_EF_R1 1 ++#define MIPS64_EF_R2 2 ++#define MIPS64_EF_R3 3 ++#define MIPS64_EF_R4 4 ++#define MIPS64_EF_R5 5 ++#define MIPS64_EF_R6 6 ++#define MIPS64_EF_R7 7 ++#define MIPS64_EF_R8 8 ++#define MIPS64_EF_R9 9 ++#define MIPS64_EF_R10 10 ++#define MIPS64_EF_R11 11 ++#define MIPS64_EF_R12 12 ++#define MIPS64_EF_R13 13 ++#define MIPS64_EF_R14 14 ++#define MIPS64_EF_R15 15 ++#define MIPS64_EF_R16 16 ++#define MIPS64_EF_R17 17 ++#define MIPS64_EF_R18 18 ++#define MIPS64_EF_R19 19 ++#define MIPS64_EF_R20 20 ++#define MIPS64_EF_R21 21 ++#define MIPS64_EF_R22 22 ++#define MIPS64_EF_R23 23 ++#define MIPS64_EF_R24 24 ++#define MIPS64_EF_R25 25 + + /* + * k0/k1 unsaved + */ +-#define EF_R26 26 +-#define EF_R27 27 ++#define MIPS64_EF_R26 26 ++#define MIPS64_EF_R27 27 + + +-#define EF_R28 28 +-#define EF_R29 29 +-#define EF_R30 30 +-#define EF_R31 31 ++#define MIPS64_EF_R28 28 ++#define MIPS64_EF_R29 29 ++#define MIPS64_EF_R30 30 ++#define MIPS64_EF_R31 31 + + /* + * Saved special registers + */ +-#define EF_LO 32 +-#define EF_HI 33 +- +-#define EF_CP0_EPC 34 +-#define EF_CP0_BADVADDR 35 +-#define EF_CP0_STATUS 36 +-#define EF_CP0_CAUSE 37 +- +-#define EF_SIZE 304 /* size in bytes */ ++#define MIPS64_EF_LO 32 ++#define MIPS64_EF_HI 33 ++ ++#define MIPS64_EF_CP0_EPC 34 ++#define MIPS64_EF_CP0_BADVADDR 35 ++#define MIPS64_EF_CP0_STATUS 
36 ++#define MIPS64_EF_CP0_CAUSE 37 ++ ++#define MIPS64_EF_SIZE 304 /* size in bytes */ ++ ++#if defined(CONFIG_32BIT) ++ ++#define EF_R0 MIPS32_EF_R0 ++#define EF_R1 MIPS32_EF_R1 ++#define EF_R2 MIPS32_EF_R2 ++#define EF_R3 MIPS32_EF_R3 ++#define EF_R4 MIPS32_EF_R4 ++#define EF_R5 MIPS32_EF_R5 ++#define EF_R6 MIPS32_EF_R6 ++#define EF_R7 MIPS32_EF_R7 ++#define EF_R8 MIPS32_EF_R8 ++#define EF_R9 MIPS32_EF_R9 ++#define EF_R10 MIPS32_EF_R10 ++#define EF_R11 MIPS32_EF_R11 ++#define EF_R12 MIPS32_EF_R12 ++#define EF_R13 MIPS32_EF_R13 ++#define EF_R14 MIPS32_EF_R14 ++#define EF_R15 MIPS32_EF_R15 ++#define EF_R16 MIPS32_EF_R16 ++#define EF_R17 MIPS32_EF_R17 ++#define EF_R18 MIPS32_EF_R18 ++#define EF_R19 MIPS32_EF_R19 ++#define EF_R20 MIPS32_EF_R20 ++#define EF_R21 MIPS32_EF_R21 ++#define EF_R22 MIPS32_EF_R22 ++#define EF_R23 MIPS32_EF_R23 ++#define EF_R24 MIPS32_EF_R24 ++#define EF_R25 MIPS32_EF_R25 ++#define EF_R26 MIPS32_EF_R26 ++#define EF_R27 MIPS32_EF_R27 ++#define EF_R28 MIPS32_EF_R28 ++#define EF_R29 MIPS32_EF_R29 ++#define EF_R30 MIPS32_EF_R30 ++#define EF_R31 MIPS32_EF_R31 ++#define EF_LO MIPS32_EF_LO ++#define EF_HI MIPS32_EF_HI ++#define EF_CP0_EPC MIPS32_EF_CP0_EPC ++#define EF_CP0_BADVADDR MIPS32_EF_CP0_BADVADDR ++#define EF_CP0_STATUS MIPS32_EF_CP0_STATUS ++#define EF_CP0_CAUSE MIPS32_EF_CP0_CAUSE ++#define EF_UNUSED0 MIPS32_EF_UNUSED0 ++#define EF_SIZE MIPS32_EF_SIZE ++ ++#elif defined(CONFIG_64BIT) ++ ++#define EF_R0 MIPS64_EF_R0 ++#define EF_R1 MIPS64_EF_R1 ++#define EF_R2 MIPS64_EF_R2 ++#define EF_R3 MIPS64_EF_R3 ++#define EF_R4 MIPS64_EF_R4 ++#define EF_R5 MIPS64_EF_R5 ++#define EF_R6 MIPS64_EF_R6 ++#define EF_R7 MIPS64_EF_R7 ++#define EF_R8 MIPS64_EF_R8 ++#define EF_R9 MIPS64_EF_R9 ++#define EF_R10 MIPS64_EF_R10 ++#define EF_R11 MIPS64_EF_R11 ++#define EF_R12 MIPS64_EF_R12 ++#define EF_R13 MIPS64_EF_R13 ++#define EF_R14 MIPS64_EF_R14 ++#define EF_R15 MIPS64_EF_R15 ++#define EF_R16 MIPS64_EF_R16 ++#define EF_R17 MIPS64_EF_R17 ++#define EF_R18 MIPS64_EF_R18 ++#define EF_R19 MIPS64_EF_R19 ++#define EF_R20 MIPS64_EF_R20 ++#define EF_R21 MIPS64_EF_R21 ++#define EF_R22 MIPS64_EF_R22 ++#define EF_R23 MIPS64_EF_R23 ++#define EF_R24 MIPS64_EF_R24 ++#define EF_R25 MIPS64_EF_R25 ++#define EF_R26 MIPS64_EF_R26 ++#define EF_R27 MIPS64_EF_R27 ++#define EF_R28 MIPS64_EF_R28 ++#define EF_R29 MIPS64_EF_R29 ++#define EF_R30 MIPS64_EF_R30 ++#define EF_R31 MIPS64_EF_R31 ++#define EF_LO MIPS64_EF_LO ++#define EF_HI MIPS64_EF_HI ++#define EF_CP0_EPC MIPS64_EF_CP0_EPC ++#define EF_CP0_BADVADDR MIPS64_EF_CP0_BADVADDR ++#define EF_CP0_STATUS MIPS64_EF_CP0_STATUS ++#define EF_CP0_CAUSE MIPS64_EF_CP0_CAUSE ++#define EF_SIZE MIPS64_EF_SIZE + + #endif /* CONFIG_64BIT */ + +diff --git a/arch/mips/kernel/binfmt_elfo32.c b/arch/mips/kernel/binfmt_elfo32.c +index 202e581e6096..7fdf1de0447f 100644 +--- a/arch/mips/kernel/binfmt_elfo32.c ++++ b/arch/mips/kernel/binfmt_elfo32.c +@@ -58,12 +58,6 @@ typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG]; + + #include <asm/processor.h> + +-/* +- * When this file is selected, we are definitely running a 64bit kernel. 
+- * So using the right regs define in asm/reg.h +- */ +-#define WANT_COMPAT_REG_H +- + /* These MUST be defined before elf.h gets included */ + extern void elf32_core_copy_regs(elf_gregset_t grp, struct pt_regs *regs); + #define ELF_CORE_COPY_REGS(_dest, _regs) elf32_core_copy_regs(_dest, _regs); +@@ -135,21 +129,21 @@ void elf32_core_copy_regs(elf_gregset_t grp, struct pt_regs *regs) + { + int i; + +- for (i = 0; i < EF_R0; i++) ++ for (i = 0; i < MIPS32_EF_R0; i++) + grp[i] = 0; +- grp[EF_R0] = 0; ++ grp[MIPS32_EF_R0] = 0; + for (i = 1; i <= 31; i++) +- grp[EF_R0 + i] = (elf_greg_t) regs->regs[i]; +- grp[EF_R26] = 0; +- grp[EF_R27] = 0; +- grp[EF_LO] = (elf_greg_t) regs->lo; +- grp[EF_HI] = (elf_greg_t) regs->hi; +- grp[EF_CP0_EPC] = (elf_greg_t) regs->cp0_epc; +- grp[EF_CP0_BADVADDR] = (elf_greg_t) regs->cp0_badvaddr; +- grp[EF_CP0_STATUS] = (elf_greg_t) regs->cp0_status; +- grp[EF_CP0_CAUSE] = (elf_greg_t) regs->cp0_cause; +-#ifdef EF_UNUSED0 +- grp[EF_UNUSED0] = 0; ++ grp[MIPS32_EF_R0 + i] = (elf_greg_t) regs->regs[i]; ++ grp[MIPS32_EF_R26] = 0; ++ grp[MIPS32_EF_R27] = 0; ++ grp[MIPS32_EF_LO] = (elf_greg_t) regs->lo; ++ grp[MIPS32_EF_HI] = (elf_greg_t) regs->hi; ++ grp[MIPS32_EF_CP0_EPC] = (elf_greg_t) regs->cp0_epc; ++ grp[MIPS32_EF_CP0_BADVADDR] = (elf_greg_t) regs->cp0_badvaddr; ++ grp[MIPS32_EF_CP0_STATUS] = (elf_greg_t) regs->cp0_status; ++ grp[MIPS32_EF_CP0_CAUSE] = (elf_greg_t) regs->cp0_cause; ++#ifdef MIPS32_EF_UNUSED0 ++ grp[MIPS32_EF_UNUSED0] = 0; + #endif + } + +diff --git a/arch/parisc/kernel/hardware.c b/arch/parisc/kernel/hardware.c +index 06cb3992907e..5b2c5f394035 100644 +--- a/arch/parisc/kernel/hardware.c ++++ b/arch/parisc/kernel/hardware.c +@@ -1205,7 +1205,8 @@ static struct hp_hardware hp_hardware_list[] = { + {HPHW_FIO, 0x004, 0x00320, 0x0, "Metheus Frame Buffer"}, + {HPHW_FIO, 0x004, 0x00340, 0x0, "BARCO CX4500 VME Grphx Cnsl"}, + {HPHW_FIO, 0x004, 0x00360, 0x0, "Hughes TOG VME FDDI"}, +- {HPHW_FIO, 0x076, 0x000AD, 0x00, "Crestone Peak RS-232"}, ++ {HPHW_FIO, 0x076, 0x000AD, 0x0, "Crestone Peak Core RS-232"}, ++ {HPHW_FIO, 0x077, 0x000AD, 0x0, "Crestone Peak Fast? 
Core RS-232"}, + {HPHW_IOA, 0x185, 0x0000B, 0x00, "Java BC Summit Port"}, + {HPHW_IOA, 0x1FF, 0x0000B, 0x00, "Hitachi Ghostview Summit Port"}, + {HPHW_IOA, 0x580, 0x0000B, 0x10, "U2-IOA BC Runway Port"}, +diff --git a/arch/powerpc/boot/dts/fsl/pq3-etsec2-0.dtsi b/arch/powerpc/boot/dts/fsl/pq3-etsec2-0.dtsi +index 1382fec9e8c5..7fcb1ac0f232 100644 +--- a/arch/powerpc/boot/dts/fsl/pq3-etsec2-0.dtsi ++++ b/arch/powerpc/boot/dts/fsl/pq3-etsec2-0.dtsi +@@ -50,6 +50,7 @@ ethernet@b0000 { + fsl,num_tx_queues = <0x8>; + fsl,magic-packet; + local-mac-address = [ 00 00 00 00 00 00 ]; ++ ranges; + + queue-group@b0000 { + #address-cells = <1>; +diff --git a/arch/powerpc/boot/dts/fsl/pq3-etsec2-1.dtsi b/arch/powerpc/boot/dts/fsl/pq3-etsec2-1.dtsi +index 221cd2ea5b31..9f25427c1527 100644 +--- a/arch/powerpc/boot/dts/fsl/pq3-etsec2-1.dtsi ++++ b/arch/powerpc/boot/dts/fsl/pq3-etsec2-1.dtsi +@@ -50,6 +50,7 @@ ethernet@b1000 { + fsl,num_tx_queues = <0x8>; + fsl,magic-packet; + local-mac-address = [ 00 00 00 00 00 00 ]; ++ ranges; + + queue-group@b1000 { + #address-cells = <1>; +diff --git a/arch/powerpc/boot/dts/fsl/pq3-etsec2-2.dtsi b/arch/powerpc/boot/dts/fsl/pq3-etsec2-2.dtsi +index 61456c317609..cd7c318ab131 100644 +--- a/arch/powerpc/boot/dts/fsl/pq3-etsec2-2.dtsi ++++ b/arch/powerpc/boot/dts/fsl/pq3-etsec2-2.dtsi +@@ -49,6 +49,7 @@ ethernet@b2000 { + fsl,num_tx_queues = <0x8>; + fsl,magic-packet; + local-mac-address = [ 00 00 00 00 00 00 ]; ++ ranges; + + queue-group@b2000 { + #address-cells = <1>; +diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h +index 279b80f3bb29..c0c61fa9cd9e 100644 +--- a/arch/powerpc/include/asm/ptrace.h ++++ b/arch/powerpc/include/asm/ptrace.h +@@ -47,6 +47,12 @@ + STACK_FRAME_OVERHEAD + KERNEL_REDZONE_SIZE) + #define STACK_FRAME_MARKER 12 + ++#if defined(_CALL_ELF) && _CALL_ELF == 2 ++#define STACK_FRAME_MIN_SIZE 32 ++#else ++#define STACK_FRAME_MIN_SIZE STACK_FRAME_OVERHEAD ++#endif ++ + /* Size of dummy stack frame allocated when calling signal handler. */ + #define __SIGNAL_FRAMESIZE 128 + #define __SIGNAL_FRAMESIZE32 64 +@@ -60,6 +66,7 @@ + #define STACK_FRAME_REGS_MARKER ASM_CONST(0x72656773) + #define STACK_INT_FRAME_SIZE (sizeof(struct pt_regs) + STACK_FRAME_OVERHEAD) + #define STACK_FRAME_MARKER 2 ++#define STACK_FRAME_MIN_SIZE STACK_FRAME_OVERHEAD + + /* Size of stack frame allocated when calling signal handler. 
*/ + #define __SIGNAL_FRAMESIZE 64 +diff --git a/arch/powerpc/include/asm/syscall.h b/arch/powerpc/include/asm/syscall.h +index b54b2add07be..528ba9d8eed5 100644 +--- a/arch/powerpc/include/asm/syscall.h ++++ b/arch/powerpc/include/asm/syscall.h +@@ -17,7 +17,7 @@ + + /* ftrace syscalls requires exporting the sys_call_table */ + #ifdef CONFIG_FTRACE_SYSCALLS +-extern const unsigned long *sys_call_table; ++extern const unsigned long sys_call_table[]; + #endif /* CONFIG_FTRACE_SYSCALLS */ + + static inline long syscall_get_nr(struct task_struct *task, +diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c +index 8e59abc237d7..0732d75ca84a 100644 +--- a/arch/powerpc/kernel/smp.c ++++ b/arch/powerpc/kernel/smp.c +@@ -567,8 +567,8 @@ int __cpu_up(unsigned int cpu, struct task_struct *tidle) + if (smp_ops->give_timebase) + smp_ops->give_timebase(); + +- /* Wait until cpu puts itself in the online map */ +- while (!cpu_online(cpu)) ++ /* Wait until cpu puts itself in the online & active maps */ ++ while (!cpu_online(cpu) || !cpu_active(cpu)) + cpu_relax(); + + return 0; +diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c +index e91079b796d2..5a5a483d5000 100644 +--- a/arch/powerpc/mm/numa.c ++++ b/arch/powerpc/mm/numa.c +@@ -1569,6 +1569,20 @@ int arch_update_cpu_topology(void) + cpu = cpu_last_thread_sibling(cpu); + } + ++ /* ++ * In cases where we have nothing to update (because the updates list ++ * is too short or because the new topology is same as the old one), ++ * skip invoking update_cpu_topology() via stop-machine(). This is ++ * necessary (and not just a fast-path optimization) since stop-machine ++ * can end up electing a random CPU to run update_cpu_topology(), and ++ * thus trick us into setting up incorrect cpu-node mappings (since ++ * 'updates' is kzalloc()'ed). ++ * ++ * And for the similar reason, we will skip all the following updating. 
++ */ ++ if (!cpumask_weight(&updated_cpus)) ++ goto out; ++ + stop_machine(update_cpu_topology, &updates[0], &updated_cpus); + + /* +@@ -1590,6 +1604,7 @@ int arch_update_cpu_topology(void) + changed = 1; + } + ++out: + kfree(updates); + return changed; + } +diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c +index 74d1e780748b..2396dda282cd 100644 +--- a/arch/powerpc/perf/callchain.c ++++ b/arch/powerpc/perf/callchain.c +@@ -35,7 +35,7 @@ static int valid_next_sp(unsigned long sp, unsigned long prev_sp) + return 0; /* must be 16-byte aligned */ + if (!validate_sp(sp, current, STACK_FRAME_OVERHEAD)) + return 0; +- if (sp >= prev_sp + STACK_FRAME_OVERHEAD) ++ if (sp >= prev_sp + STACK_FRAME_MIN_SIZE) + return 1; + /* + * sp could decrease when we jump off an interrupt stack +diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c +index f8d9cb14adce..92eb4d6ad39d 100644 +--- a/arch/s390/crypto/aes_s390.c ++++ b/arch/s390/crypto/aes_s390.c +@@ -818,6 +818,9 @@ static int ctr_aes_crypt(struct blkcipher_desc *desc, long func, + else + memcpy(walk->iv, ctrptr, AES_BLOCK_SIZE); + spin_unlock(&ctrblk_lock); ++ } else { ++ if (!nbytes) ++ memcpy(walk->iv, ctrptr, AES_BLOCK_SIZE); + } + /* + * final block may be < AES_BLOCK_SIZE, copy only nbytes +diff --git a/arch/s390/crypto/des_s390.c b/arch/s390/crypto/des_s390.c +index a3e24d4d2530..a89feffb22b5 100644 +--- a/arch/s390/crypto/des_s390.c ++++ b/arch/s390/crypto/des_s390.c +@@ -429,6 +429,9 @@ static int ctr_desall_crypt(struct blkcipher_desc *desc, long func, + else + memcpy(walk->iv, ctrptr, DES_BLOCK_SIZE); + spin_unlock(&ctrblk_lock); ++ } else { ++ if (!nbytes) ++ memcpy(walk->iv, ctrptr, DES_BLOCK_SIZE); + } + /* final block may be < DES_BLOCK_SIZE, copy only nbytes */ + if (nbytes) { +diff --git a/arch/sparc/kernel/perf_event.c b/arch/sparc/kernel/perf_event.c +index 617b9fe33771..3ccb6777a7e1 100644 +--- a/arch/sparc/kernel/perf_event.c ++++ b/arch/sparc/kernel/perf_event.c +@@ -960,6 +960,8 @@ out: + cpuc->pcr[0] |= cpuc->event[0]->hw.config_base; + } + ++static void sparc_pmu_start(struct perf_event *event, int flags); ++ + /* On this PMU each PIC has it's own PCR control register. 
*/ + static void calculate_multiple_pcrs(struct cpu_hw_events *cpuc) + { +@@ -972,20 +974,13 @@ static void calculate_multiple_pcrs(struct cpu_hw_events *cpuc) + struct perf_event *cp = cpuc->event[i]; + struct hw_perf_event *hwc = &cp->hw; + int idx = hwc->idx; +- u64 enc; + + if (cpuc->current_idx[i] != PIC_NO_INDEX) + continue; + +- sparc_perf_event_set_period(cp, hwc, idx); + cpuc->current_idx[i] = idx; + +- enc = perf_event_get_enc(cpuc->events[i]); +- cpuc->pcr[idx] &= ~mask_for_index(idx); +- if (hwc->state & PERF_HES_STOPPED) +- cpuc->pcr[idx] |= nop_for_index(idx); +- else +- cpuc->pcr[idx] |= event_encoding(enc, idx); ++ sparc_pmu_start(cp, PERF_EF_RELOAD); + } + out: + for (i = 0; i < cpuc->n_events; i++) { +@@ -1101,7 +1096,6 @@ static void sparc_pmu_del(struct perf_event *event, int _flags) + int i; + + local_irq_save(flags); +- perf_pmu_disable(event->pmu); + + for (i = 0; i < cpuc->n_events; i++) { + if (event == cpuc->event[i]) { +@@ -1127,7 +1121,6 @@ static void sparc_pmu_del(struct perf_event *event, int _flags) + } + } + +- perf_pmu_enable(event->pmu); + local_irq_restore(flags); + } + +@@ -1361,7 +1354,6 @@ static int sparc_pmu_add(struct perf_event *event, int ef_flags) + unsigned long flags; + + local_irq_save(flags); +- perf_pmu_disable(event->pmu); + + n0 = cpuc->n_events; + if (n0 >= sparc_pmu->max_hw_events) +@@ -1394,7 +1386,6 @@ nocheck: + + ret = 0; + out: +- perf_pmu_enable(event->pmu); + local_irq_restore(flags); + return ret; + } +diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c +index fa49b80d8ab6..6616c775c4dd 100644 +--- a/arch/sparc/kernel/process_64.c ++++ b/arch/sparc/kernel/process_64.c +@@ -280,6 +280,8 @@ void arch_trigger_all_cpu_backtrace(void) + printk(" TPC[%lx] O7[%lx] I7[%lx] RPC[%lx]\n", + gp->tpc, gp->o7, gp->i7, gp->rpc); + } ++ ++ touch_nmi_watchdog(); + } + + memset(global_cpu_snapshot, 0, sizeof(global_cpu_snapshot)); +@@ -355,6 +357,8 @@ static void pmu_snapshot_all_cpus(void) + (cpu == this_cpu ? '*' : ' '), cpu, + pp->pcr[0], pp->pcr[1], pp->pcr[2], pp->pcr[3], + pp->pic[0], pp->pic[1], pp->pic[2], pp->pic[3]); ++ ++ touch_nmi_watchdog(); + } + + memset(global_cpu_snapshot, 0, sizeof(global_cpu_snapshot)); +diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c +index d05eb9c1d846..d188c591f2d6 100644 +--- a/arch/sparc/kernel/sys_sparc_64.c ++++ b/arch/sparc/kernel/sys_sparc_64.c +@@ -331,7 +331,7 @@ SYSCALL_DEFINE6(sparc_ipc, unsigned int, call, int, first, unsigned long, second + long err; + + /* No need for backward compatibility. We can start fresh... */ +- if (call <= SEMCTL) { ++ if (call <= SEMTIMEDOP) { + switch (call) { + case SEMOP: + err = sys_semtimedop(first, ptr, +diff --git a/arch/sparc/lib/memmove.S b/arch/sparc/lib/memmove.S +index b7f6334e159f..857ad4f8905f 100644 +--- a/arch/sparc/lib/memmove.S ++++ b/arch/sparc/lib/memmove.S +@@ -8,9 +8,11 @@ + + .text + ENTRY(memmove) /* o0=dst o1=src o2=len */ +- mov %o0, %g1 ++ brz,pn %o2, 99f ++ mov %o0, %g1 ++ + cmp %o0, %o1 +- bleu,pt %xcc, memcpy ++ bleu,pt %xcc, 2f + add %o1, %o2, %g7 + cmp %g7, %o0 + bleu,pt %xcc, memcpy +@@ -24,7 +26,34 @@ ENTRY(memmove) /* o0=dst o1=src o2=len */ + stb %g7, [%o0] + bne,pt %icc, 1b + sub %o0, 1, %o0 +- ++99: + retl + mov %g1, %o0 ++ ++ /* We can't just call memcpy for these memmove cases. On some ++ * chips the memcpy uses cache initializing stores and when dst ++ * and src are close enough, those can clobber the source data ++ * before we've loaded it in. 
++ */ ++2: or %o0, %o1, %g7 ++ or %o2, %g7, %g7 ++ andcc %g7, 0x7, %g0 ++ bne,pn %xcc, 4f ++ nop ++ ++3: ldx [%o1], %g7 ++ add %o1, 8, %o1 ++ subcc %o2, 8, %o2 ++ add %o0, 8, %o0 ++ bne,pt %icc, 3b ++ stx %g7, [%o0 - 0x8] ++ ba,a,pt %xcc, 99b ++ ++4: ldub [%o1], %g7 ++ add %o1, 1, %o1 ++ subcc %o2, 1, %o2 ++ add %o0, 1, %o0 ++ bne,pt %icc, 4b ++ stb %g7, [%o0 - 0x1] ++ ba,a,pt %xcc, 99b + ENDPROC(memmove) +diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c +index 5d721df48a72..1e0e7d2837da 100644 +--- a/arch/sparc/mm/srmmu.c ++++ b/arch/sparc/mm/srmmu.c +@@ -455,10 +455,12 @@ static void __init sparc_context_init(int numctx) + void switch_mm(struct mm_struct *old_mm, struct mm_struct *mm, + struct task_struct *tsk) + { ++ unsigned long flags; ++ + if (mm->context == NO_CONTEXT) { +- spin_lock(&srmmu_context_spinlock); ++ spin_lock_irqsave(&srmmu_context_spinlock, flags); + alloc_context(old_mm, mm); +- spin_unlock(&srmmu_context_spinlock); ++ spin_unlock_irqrestore(&srmmu_context_spinlock, flags); + srmmu_ctxd_set(&srmmu_context_table[mm->context], mm->pgd); + } + +@@ -983,14 +985,15 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm) + + void destroy_context(struct mm_struct *mm) + { ++ unsigned long flags; + + if (mm->context != NO_CONTEXT) { + flush_cache_mm(mm); + srmmu_ctxd_set(&srmmu_context_table[mm->context], srmmu_swapper_pg_dir); + flush_tlb_mm(mm); +- spin_lock(&srmmu_context_spinlock); ++ spin_lock_irqsave(&srmmu_context_spinlock, flags); + free_context(mm->context); +- spin_unlock(&srmmu_context_spinlock); ++ spin_unlock_irqrestore(&srmmu_context_spinlock, flags); + mm->context = NO_CONTEXT; + } + } +diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c +index f89e7490d303..990c9699b662 100644 +--- a/arch/x86/crypto/aesni-intel_glue.c ++++ b/arch/x86/crypto/aesni-intel_glue.c +@@ -989,7 +989,7 @@ static int __driver_rfc4106_decrypt(struct aead_request *req) + src = kmalloc(req->cryptlen + req->assoclen, GFP_ATOMIC); + if (!src) + return -ENOMEM; +- assoc = (src + req->cryptlen + auth_tag_len); ++ assoc = (src + req->cryptlen); + scatterwalk_map_and_copy(src, req->src, 0, req->cryptlen, 0); + scatterwalk_map_and_copy(assoc, req->assoc, 0, + req->assoclen, 0); +@@ -1014,7 +1014,7 @@ static int __driver_rfc4106_decrypt(struct aead_request *req) + scatterwalk_done(&src_sg_walk, 0, 0); + scatterwalk_done(&assoc_sg_walk, 0, 0); + } else { +- scatterwalk_map_and_copy(dst, req->dst, 0, req->cryptlen, 1); ++ scatterwalk_map_and_copy(dst, req->dst, 0, tempCipherLen, 1); + kfree(src); + } + return retval; +diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h +index 5be9f879957f..18cd5ed4e1ba 100644 +--- a/arch/x86/include/asm/fpu-internal.h ++++ b/arch/x86/include/asm/fpu-internal.h +@@ -368,7 +368,7 @@ static inline void drop_fpu(struct task_struct *tsk) + preempt_disable(); + tsk->fpu_counter = 0; + __drop_fpu(tsk); +- clear_used_math(); ++ clear_stopped_child_used_math(tsk); + preempt_enable(); + } + +diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c +index 39100783cf26..549b119ed781 100644 +--- a/arch/x86/kernel/irq.c ++++ b/arch/x86/kernel/irq.c +@@ -291,6 +291,9 @@ int check_irq_vectors_for_cpu_disable(void) + irq = __this_cpu_read(vector_irq[vector]); + if (irq >= 0) { + desc = irq_to_desc(irq); ++ if (!desc) ++ continue; ++ + data = irq_desc_get_irq_data(desc); + cpumask_copy(&affinity_new, data->affinity); + cpu_clear(this_cpu, affinity_new); +diff --git 
a/arch/x86/kernel/microcode_intel_early.c b/arch/x86/kernel/microcode_intel_early.c +index 1575deb2e636..c44f7db7ce44 100644 +--- a/arch/x86/kernel/microcode_intel_early.c ++++ b/arch/x86/kernel/microcode_intel_early.c +@@ -321,7 +321,7 @@ get_matching_model_microcode(int cpu, unsigned long start, + unsigned int mc_saved_count = mc_saved_data->mc_saved_count; + int i; + +- while (leftover) { ++ while (leftover && mc_saved_count < ARRAY_SIZE(mc_saved_tmp)) { + mc_header = (struct microcode_header_intel *)ucode_ptr; + + mc_size = get_totalsize(mc_header); +diff --git a/arch/x86/kernel/xsave.c b/arch/x86/kernel/xsave.c +index f5869fc65d66..bf640b8db9b3 100644 +--- a/arch/x86/kernel/xsave.c ++++ b/arch/x86/kernel/xsave.c +@@ -375,7 +375,7 @@ int __restore_xstate_sig(void __user *buf, void __user *buf_fx, int size) + * thread's fpu state, reconstruct fxstate from the fsave + * header. Sanitize the copied state etc. + */ +- struct xsave_struct *xsave = &tsk->thread.fpu.state->xsave; ++ struct fpu *fpu = &tsk->thread.fpu; + struct user_i387_ia32_struct env; + int err = 0; + +@@ -389,14 +389,15 @@ int __restore_xstate_sig(void __user *buf, void __user *buf_fx, int size) + */ + drop_fpu(tsk); + +- if (__copy_from_user(xsave, buf_fx, state_size) || ++ if (__copy_from_user(&fpu->state->xsave, buf_fx, state_size) || + __copy_from_user(&env, buf, sizeof(env))) { ++ fpu_finit(fpu); + err = -1; + } else { + sanitize_restored_xstate(tsk, &env, xstate_bv, fx_only); +- set_used_math(); + } + ++ set_used_math(); + if (use_eager_fpu()) { + preempt_disable(); + math_state_restore(); +diff --git a/arch/x86/vdso/vdso32/sigreturn.S b/arch/x86/vdso/vdso32/sigreturn.S +index 31776d0efc8c..d7ec4e251c0a 100644 +--- a/arch/x86/vdso/vdso32/sigreturn.S ++++ b/arch/x86/vdso/vdso32/sigreturn.S +@@ -17,6 +17,7 @@ + .text + .globl __kernel_sigreturn + .type __kernel_sigreturn,@function ++ nop /* this guy is needed for .LSTARTFDEDLSI1 below (watch for HACK) */ + ALIGN + __kernel_sigreturn: + .LSTART_sigreturn: +diff --git a/crypto/sha256_generic.c b/crypto/sha256_generic.c +index 136381bdd48d..f85b1340e459 100644 +--- a/crypto/sha256_generic.c ++++ b/crypto/sha256_generic.c +@@ -24,6 +24,7 @@ + #include <linux/types.h> + #include <crypto/sha.h> + #include <asm/byteorder.h> ++#include <asm/unaligned.h> + + static inline u32 Ch(u32 x, u32 y, u32 z) + { +@@ -42,7 +43,7 @@ static inline u32 Maj(u32 x, u32 y, u32 z) + + static inline void LOAD_OP(int I, u32 *W, const u8 *input) + { +- W[I] = __be32_to_cpu( ((__be32*)(input))[I] ); ++ W[I] = get_unaligned_be32((__u32 *)input + I); + } + + static inline void BLEND_OP(int I, u32 *W) +diff --git a/crypto/sha512_generic.c b/crypto/sha512_generic.c +index 6c6d901a7cc1..13a23e169a7b 100644 +--- a/crypto/sha512_generic.c ++++ b/crypto/sha512_generic.c +@@ -20,6 +20,7 @@ + #include <crypto/sha.h> + #include <linux/percpu.h> + #include <asm/byteorder.h> ++#include <asm/unaligned.h> + + static inline u64 Ch(u64 x, u64 y, u64 z) + { +@@ -68,7 +69,7 @@ static const u64 sha512_K[80] = { + + static inline void LOAD_OP(int I, u64 *W, const u8 *input) + { +- W[I] = __be64_to_cpu( ((__be64*)(input))[I] ); ++ W[I] = get_unaligned_be64((__u64 *)input + I); + } + + static inline void BLEND_OP(int I, u64 *W) +diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c +index 25a5934f0e50..fa9956e79308 100644 +--- a/crypto/tcrypt.c ++++ b/crypto/tcrypt.c +@@ -490,10 +490,9 @@ static inline int do_one_ahash_op(struct ahash_request *req, int ret) + if (ret == -EINPROGRESS || ret == -EBUSY) { + struct tcrypt_result 
*tr = req->base.data; + +- ret = wait_for_completion_interruptible(&tr->completion); +- if (!ret) +- ret = tr->err; ++ wait_for_completion(&tr->completion); + INIT_COMPLETION(tr->completion); ++ ret = tr->err; + } + return ret; + } +@@ -718,10 +717,9 @@ static inline int do_one_acipher_op(struct ablkcipher_request *req, int ret) + if (ret == -EINPROGRESS || ret == -EBUSY) { + struct tcrypt_result *tr = req->base.data; + +- ret = wait_for_completion_interruptible(&tr->completion); +- if (!ret) +- ret = tr->err; ++ wait_for_completion(&tr->completion); + INIT_COMPLETION(tr->completion); ++ ret = tr->err; + } + + return ret; +diff --git a/crypto/testmgr.c b/crypto/testmgr.c +index e091ef6e1791..317c31f0b262 100644 +--- a/crypto/testmgr.c ++++ b/crypto/testmgr.c +@@ -176,10 +176,9 @@ static int do_one_async_hash_op(struct ahash_request *req, + int ret) + { + if (ret == -EINPROGRESS || ret == -EBUSY) { +- ret = wait_for_completion_interruptible(&tr->completion); +- if (!ret) +- ret = tr->err; ++ wait_for_completion(&tr->completion); + INIT_COMPLETION(tr->completion); ++ ret = tr->err; + } + return ret; + } +@@ -333,12 +332,10 @@ static int __test_hash(struct crypto_ahash *tfm, struct hash_testvec *template, + break; + case -EINPROGRESS: + case -EBUSY: +- ret = wait_for_completion_interruptible( +- &tresult.completion); +- if (!ret && !(ret = tresult.err)) { +- INIT_COMPLETION(tresult.completion); ++ wait_for_completion(&tresult.completion); ++ INIT_COMPLETION(tresult.completion); ++ if (!ret) + break; +- } + /* fall through */ + default: + printk(KERN_ERR "alg: hash: digest failed " +@@ -540,12 +537,11 @@ static int __test_aead(struct crypto_aead *tfm, int enc, + break; + case -EINPROGRESS: + case -EBUSY: +- ret = wait_for_completion_interruptible( +- &result.completion); +- if (!ret && !(ret = result.err)) { +- INIT_COMPLETION(result.completion); ++ wait_for_completion(&result.completion); ++ INIT_COMPLETION(result.completion); ++ ret = result.err; ++ if (!ret) + break; +- } + case -EBADMSG: + if (template[i].novrfy) + /* verification failure was expected */ +@@ -694,12 +690,11 @@ static int __test_aead(struct crypto_aead *tfm, int enc, + break; + case -EINPROGRESS: + case -EBUSY: +- ret = wait_for_completion_interruptible( +- &result.completion); +- if (!ret && !(ret = result.err)) { +- INIT_COMPLETION(result.completion); ++ wait_for_completion(&result.completion); ++ INIT_COMPLETION(result.completion); ++ ret = result.err; ++ if (!ret) + break; +- } + case -EBADMSG: + if (template[i].novrfy) + /* verification failure was expected */ +@@ -980,12 +975,11 @@ static int __test_skcipher(struct crypto_ablkcipher *tfm, int enc, + break; + case -EINPROGRESS: + case -EBUSY: +- ret = wait_for_completion_interruptible( +- &result.completion); +- if (!ret && !((ret = result.err))) { +- INIT_COMPLETION(result.completion); ++ wait_for_completion(&result.completion); ++ INIT_COMPLETION(result.completion); ++ ret = result.err; ++ if (!ret) + break; +- } + /* fall through */ + default: + pr_err("alg: skcipher%s: %s failed on test %d for %s: ret=%d\n", +@@ -1083,12 +1077,10 @@ static int __test_skcipher(struct crypto_ablkcipher *tfm, int enc, + break; + case -EINPROGRESS: + case -EBUSY: +- ret = wait_for_completion_interruptible( +- &result.completion); +- if (!ret && !((ret = result.err))) { +- INIT_COMPLETION(result.completion); ++ wait_for_completion(&result.completion); ++ INIT_COMPLETION(result.completion); ++ if (!ret) + break; +- } + /* fall through */ + default: + pr_err("alg: skcipher%s: %s failed on 
chunk test %d for %s: ret=%d\n", +diff --git a/drivers/acpi/acpica/aclocal.h b/drivers/acpi/acpica/aclocal.h +index 0ed00669cd21..ca8196ea12d1 100644 +--- a/drivers/acpi/acpica/aclocal.h ++++ b/drivers/acpi/acpica/aclocal.h +@@ -254,6 +254,7 @@ struct acpi_create_field_info { + u32 field_bit_position; + u32 field_bit_length; + u16 resource_length; ++ u16 pin_number_index; + u8 field_flags; + u8 attribute; + u8 field_type; +diff --git a/drivers/acpi/acpica/acobject.h b/drivers/acpi/acpica/acobject.h +index cc7ab6dd724e..a47cc78ffd4f 100644 +--- a/drivers/acpi/acpica/acobject.h ++++ b/drivers/acpi/acpica/acobject.h +@@ -263,6 +263,7 @@ struct acpi_object_region_field { + ACPI_OBJECT_COMMON_HEADER ACPI_COMMON_FIELD_INFO u16 resource_length; + union acpi_operand_object *region_obj; /* Containing op_region object */ + u8 *resource_buffer; /* resource_template for serial regions/fields */ ++ u16 pin_number_index; /* Index relative to previous Connection/Template */ + }; + + struct acpi_object_bank_field { +diff --git a/drivers/acpi/acpica/dsfield.c b/drivers/acpi/acpica/dsfield.c +index d4bfe7b7f90a..10bc062a4a88 100644 +--- a/drivers/acpi/acpica/dsfield.c ++++ b/drivers/acpi/acpica/dsfield.c +@@ -360,6 +360,7 @@ acpi_ds_get_field_names(struct acpi_create_field_info *info, + */ + info->resource_buffer = NULL; + info->connection_node = NULL; ++ info->pin_number_index = 0; + + /* + * A Connection() is either an actual resource descriptor (buffer) +@@ -437,6 +438,7 @@ acpi_ds_get_field_names(struct acpi_create_field_info *info, + } + + info->field_bit_position += info->field_bit_length; ++ info->pin_number_index++; /* Index relative to previous Connection() */ + break; + + default: +diff --git a/drivers/acpi/acpica/evregion.c b/drivers/acpi/acpica/evregion.c +index cea14d6fc76c..1788b3870713 100644 +--- a/drivers/acpi/acpica/evregion.c ++++ b/drivers/acpi/acpica/evregion.c +@@ -142,6 +142,7 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj, + union acpi_operand_object *region_obj2; + void *region_context = NULL; + struct acpi_connection_info *context; ++ acpi_physical_address address; + + ACPI_FUNCTION_TRACE(ev_address_space_dispatch); + +@@ -236,25 +237,23 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj, + /* We have everything we need, we can invoke the address space handler */ + + handler = handler_desc->address_space.handler; +- +- ACPI_DEBUG_PRINT((ACPI_DB_OPREGION, +- "Handler %p (@%p) Address %8.8X%8.8X [%s]\n", +- ®ion_obj->region.handler->address_space, handler, +- ACPI_FORMAT_NATIVE_UINT(region_obj->region.address + +- region_offset), +- acpi_ut_get_region_name(region_obj->region. +- space_id))); ++ address = (region_obj->region.address + region_offset); + + /* + * Special handling for generic_serial_bus and general_purpose_io: + * There are three extra parameters that must be passed to the + * handler via the context: +- * 1) Connection buffer, a resource template from Connection() op. +- * 2) Length of the above buffer. +- * 3) Actual access length from the access_as() op. 
++ * 1) Connection buffer, a resource template from Connection() op ++ * 2) Length of the above buffer ++ * 3) Actual access length from the access_as() op ++ * ++ * In addition, for general_purpose_io, the Address and bit_width fields ++ * are defined as follows: ++ * 1) Address is the pin number index of the field (bit offset from ++ * the previous Connection) ++ * 2) bit_width is the actual bit length of the field (number of pins) + */ +- if (((region_obj->region.space_id == ACPI_ADR_SPACE_GSBUS) || +- (region_obj->region.space_id == ACPI_ADR_SPACE_GPIO)) && ++ if ((region_obj->region.space_id == ACPI_ADR_SPACE_GSBUS) && + context && field_obj) { + + /* Get the Connection (resource_template) buffer */ +@@ -263,6 +262,24 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj, + context->length = field_obj->field.resource_length; + context->access_length = field_obj->field.access_length; + } ++ if ((region_obj->region.space_id == ACPI_ADR_SPACE_GPIO) && ++ context && field_obj) { ++ ++ /* Get the Connection (resource_template) buffer */ ++ ++ context->connection = field_obj->field.resource_buffer; ++ context->length = field_obj->field.resource_length; ++ context->access_length = field_obj->field.access_length; ++ address = field_obj->field.pin_number_index; ++ bit_width = field_obj->field.bit_length; ++ } ++ ++ ACPI_DEBUG_PRINT((ACPI_DB_OPREGION, ++ "Handler %p (@%p) Address %8.8X%8.8X [%s]\n", ++ ®ion_obj->region.handler->address_space, handler, ++ ACPI_FORMAT_NATIVE_UINT(address), ++ acpi_ut_get_region_name(region_obj->region. ++ space_id))); + + if (!(handler_desc->address_space.handler_flags & + ACPI_ADDR_HANDLER_DEFAULT_INSTALLED)) { +@@ -276,9 +293,7 @@ acpi_ev_address_space_dispatch(union acpi_operand_object *region_obj, + + /* Call the handler */ + +- status = handler(function, +- (region_obj->region.address + region_offset), +- bit_width, value, context, ++ status = handler(function, address, bit_width, value, context, + region_obj2->extra.region_context); + + if (ACPI_FAILURE(status)) { +diff --git a/drivers/acpi/acpica/exfield.c b/drivers/acpi/acpica/exfield.c +index c2a65aaf29af..43cf348dab48 100644 +--- a/drivers/acpi/acpica/exfield.c ++++ b/drivers/acpi/acpica/exfield.c +@@ -178,6 +178,37 @@ acpi_ex_read_data_from_field(struct acpi_walk_state *walk_state, + buffer = &buffer_desc->integer.value; + } + ++ if ((obj_desc->common.type == ACPI_TYPE_LOCAL_REGION_FIELD) && ++ (obj_desc->field.region_obj->region.space_id == ++ ACPI_ADR_SPACE_GPIO)) { ++ /* ++ * For GPIO (general_purpose_io), the Address will be the bit offset ++ * from the previous Connection() operator, making it effectively a ++ * pin number index. The bit_length is the length of the field, which ++ * is thus the number of pins. 
++ */ ++ ACPI_DEBUG_PRINT((ACPI_DB_BFIELD, ++ "GPIO FieldRead [FROM]: Pin %u Bits %u\n", ++ obj_desc->field.pin_number_index, ++ obj_desc->field.bit_length)); ++ ++ /* Lock entire transaction if requested */ ++ ++ acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags); ++ ++ /* Perform the write */ ++ ++ status = acpi_ex_access_region(obj_desc, 0, ++ (u64 *)buffer, ACPI_READ); ++ acpi_ex_release_global_lock(obj_desc->common_field.field_flags); ++ if (ACPI_FAILURE(status)) { ++ acpi_ut_remove_reference(buffer_desc); ++ } else { ++ *ret_buffer_desc = buffer_desc; ++ } ++ return_ACPI_STATUS(status); ++ } ++ + ACPI_DEBUG_PRINT((ACPI_DB_BFIELD, + "FieldRead [TO]: Obj %p, Type %X, Buf %p, ByteLen %X\n", + obj_desc, obj_desc->common.type, buffer, +@@ -325,6 +356,42 @@ acpi_ex_write_data_to_field(union acpi_operand_object *source_desc, + + *result_desc = buffer_desc; + return_ACPI_STATUS(status); ++ } else if ((obj_desc->common.type == ACPI_TYPE_LOCAL_REGION_FIELD) && ++ (obj_desc->field.region_obj->region.space_id == ++ ACPI_ADR_SPACE_GPIO)) { ++ /* ++ * For GPIO (general_purpose_io), we will bypass the entire field ++ * mechanism and handoff the bit address and bit width directly to ++ * the handler. The Address will be the bit offset ++ * from the previous Connection() operator, making it effectively a ++ * pin number index. The bit_length is the length of the field, which ++ * is thus the number of pins. ++ */ ++ if (source_desc->common.type != ACPI_TYPE_INTEGER) { ++ return_ACPI_STATUS(AE_AML_OPERAND_TYPE); ++ } ++ ++ ACPI_DEBUG_PRINT((ACPI_DB_BFIELD, ++ "GPIO FieldWrite [FROM]: (%s:%X), Val %.8X [TO]: Pin %u Bits %u\n", ++ acpi_ut_get_type_name(source_desc->common. ++ type), ++ source_desc->common.type, ++ (u32)source_desc->integer.value, ++ obj_desc->field.pin_number_index, ++ obj_desc->field.bit_length)); ++ ++ buffer = &source_desc->integer.value; ++ ++ /* Lock entire transaction if requested */ ++ ++ acpi_ex_acquire_global_lock(obj_desc->common_field.field_flags); ++ ++ /* Perform the write */ ++ ++ status = acpi_ex_access_region(obj_desc, 0, ++ (u64 *)buffer, ACPI_WRITE); ++ acpi_ex_release_global_lock(obj_desc->common_field.field_flags); ++ return_ACPI_STATUS(status); + } + + /* Get a pointer to the data to be written */ +diff --git a/drivers/acpi/acpica/exprep.c b/drivers/acpi/acpica/exprep.c +index 5a588611ab48..8c88cfdec441 100644 +--- a/drivers/acpi/acpica/exprep.c ++++ b/drivers/acpi/acpica/exprep.c +@@ -484,6 +484,8 @@ acpi_status acpi_ex_prep_field_value(struct acpi_create_field_info *info) + obj_desc->field.resource_length = info->resource_length; + } + ++ obj_desc->field.pin_number_index = info->pin_number_index; ++ + /* Allow full data read from EC address space */ + + if ((obj_desc->field.region_obj->region.space_id == +diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c +index d047771c179a..332b126ee3c4 100644 +--- a/drivers/acpi/scan.c ++++ b/drivers/acpi/scan.c +@@ -834,12 +834,17 @@ static void acpi_device_notify(acpi_handle handle, u32 event, void *data) + device->driver->ops.notify(device, event); + } + +-static acpi_status acpi_device_notify_fixed(void *data) ++static void acpi_device_notify_fixed(void *data) + { + struct acpi_device *device = data; + + /* Fixed hardware devices have no handles */ + acpi_device_notify(NULL, ACPI_FIXED_HARDWARE_EVENT, device); ++} ++ ++static acpi_status acpi_device_fixed_event(void *data) ++{ ++ acpi_os_execute(OSL_NOTIFY_HANDLER, acpi_device_notify_fixed, data); + return AE_OK; + } + +@@ -850,12 +855,12 @@ static int 
acpi_device_install_notify_handler(struct acpi_device *device) + if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON) + status = + acpi_install_fixed_event_handler(ACPI_EVENT_POWER_BUTTON, +- acpi_device_notify_fixed, ++ acpi_device_fixed_event, + device); + else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON) + status = + acpi_install_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON, +- acpi_device_notify_fixed, ++ acpi_device_fixed_event, + device); + else + status = acpi_install_notify_handler(device->handle, +@@ -872,10 +877,10 @@ static void acpi_device_remove_notify_handler(struct acpi_device *device) + { + if (device->device_type == ACPI_BUS_TYPE_POWER_BUTTON) + acpi_remove_fixed_event_handler(ACPI_EVENT_POWER_BUTTON, +- acpi_device_notify_fixed); ++ acpi_device_fixed_event); + else if (device->device_type == ACPI_BUS_TYPE_SLEEP_BUTTON) + acpi_remove_fixed_event_handler(ACPI_EVENT_SLEEP_BUTTON, +- acpi_device_notify_fixed); ++ acpi_device_fixed_event); + else + acpi_remove_notify_handler(device->handle, ACPI_DEVICE_NOTIFY, + acpi_device_notify); +diff --git a/drivers/base/regmap/regcache-rbtree.c b/drivers/base/regmap/regcache-rbtree.c +index 930cad4e5df8..2b946bc4212d 100644 +--- a/drivers/base/regmap/regcache-rbtree.c ++++ b/drivers/base/regmap/regcache-rbtree.c +@@ -313,7 +313,7 @@ static int regcache_rbtree_insert_to_block(struct regmap *map, + if (pos == 0) { + memmove(blk + offset * map->cache_word_size, + blk, rbnode->blklen * map->cache_word_size); +- bitmap_shift_right(present, present, offset, blklen); ++ bitmap_shift_left(present, present, offset, blklen); + } + + /* update the rbnode block, its size and the base register */ +diff --git a/drivers/base/topology.c b/drivers/base/topology.c +index 94ffee378f10..37a5661ca5f9 100644 +--- a/drivers/base/topology.c ++++ b/drivers/base/topology.c +@@ -40,8 +40,7 @@ + static ssize_t show_##name(struct device *dev, \ + struct device_attribute *attr, char *buf) \ + { \ +- unsigned int cpu = dev->id; \ +- return sprintf(buf, "%d\n", topology_##name(cpu)); \ ++ return sprintf(buf, "%d\n", topology_##name(dev->id)); \ + } + + #if defined(topology_thread_cpumask) || defined(topology_core_cpumask) || \ +diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c +index e85bc358e052..3bb5efdcdc8a 100644 +--- a/drivers/block/xen-blkfront.c ++++ b/drivers/block/xen-blkfront.c +@@ -121,7 +121,8 @@ struct blkfront_info + struct work_struct work; + struct gnttab_free_callback callback; + struct blk_shadow shadow[BLK_RING_SIZE]; +- struct list_head persistent_gnts; ++ struct list_head grants; ++ struct list_head indirect_pages; + unsigned int persistent_gnts_c; + unsigned long shadow_free; + unsigned int feature_flush; +@@ -200,15 +201,17 @@ static int fill_grant_buffer(struct blkfront_info *info, int num) + if (!gnt_list_entry) + goto out_of_memory; + +- granted_page = alloc_page(GFP_NOIO); +- if (!granted_page) { +- kfree(gnt_list_entry); +- goto out_of_memory; ++ if (info->feature_persistent) { ++ granted_page = alloc_page(GFP_NOIO); ++ if (!granted_page) { ++ kfree(gnt_list_entry); ++ goto out_of_memory; ++ } ++ gnt_list_entry->pfn = page_to_pfn(granted_page); + } + +- gnt_list_entry->pfn = page_to_pfn(granted_page); + gnt_list_entry->gref = GRANT_INVALID_REF; +- list_add(&gnt_list_entry->node, &info->persistent_gnts); ++ list_add(&gnt_list_entry->node, &info->grants); + i++; + } + +@@ -216,9 +219,10 @@ static int fill_grant_buffer(struct blkfront_info *info, int num) + + out_of_memory: + 
list_for_each_entry_safe(gnt_list_entry, n, +- &info->persistent_gnts, node) { ++ &info->grants, node) { + list_del(&gnt_list_entry->node); +- __free_page(pfn_to_page(gnt_list_entry->pfn)); ++ if (info->feature_persistent) ++ __free_page(pfn_to_page(gnt_list_entry->pfn)); + kfree(gnt_list_entry); + i--; + } +@@ -227,13 +231,14 @@ out_of_memory: + } + + static struct grant *get_grant(grant_ref_t *gref_head, ++ unsigned long pfn, + struct blkfront_info *info) + { + struct grant *gnt_list_entry; + unsigned long buffer_mfn; + +- BUG_ON(list_empty(&info->persistent_gnts)); +- gnt_list_entry = list_first_entry(&info->persistent_gnts, struct grant, ++ BUG_ON(list_empty(&info->grants)); ++ gnt_list_entry = list_first_entry(&info->grants, struct grant, + node); + list_del(&gnt_list_entry->node); + +@@ -245,6 +250,10 @@ static struct grant *get_grant(grant_ref_t *gref_head, + /* Assign a gref to this page */ + gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head); + BUG_ON(gnt_list_entry->gref == -ENOSPC); ++ if (!info->feature_persistent) { ++ BUG_ON(!pfn); ++ gnt_list_entry->pfn = pfn; ++ } + buffer_mfn = pfn_to_mfn(gnt_list_entry->pfn); + gnttab_grant_foreign_access_ref(gnt_list_entry->gref, + info->xbdev->otherend_id, +@@ -477,22 +486,34 @@ static int blkif_queue_request(struct request *req) + + if ((ring_req->operation == BLKIF_OP_INDIRECT) && + (i % SEGS_PER_INDIRECT_FRAME == 0)) { ++ unsigned long pfn; ++ + if (segments) + kunmap_atomic(segments); + + n = i / SEGS_PER_INDIRECT_FRAME; +- gnt_list_entry = get_grant(&gref_head, info); ++ if (!info->feature_persistent) { ++ struct page *indirect_page; ++ ++ /* Fetch a pre-allocated page to use for indirect grefs */ ++ BUG_ON(list_empty(&info->indirect_pages)); ++ indirect_page = list_first_entry(&info->indirect_pages, ++ struct page, lru); ++ list_del(&indirect_page->lru); ++ pfn = page_to_pfn(indirect_page); ++ } ++ gnt_list_entry = get_grant(&gref_head, pfn, info); + info->shadow[id].indirect_grants[n] = gnt_list_entry; + segments = kmap_atomic(pfn_to_page(gnt_list_entry->pfn)); + ring_req->u.indirect.indirect_grefs[n] = gnt_list_entry->gref; + } + +- gnt_list_entry = get_grant(&gref_head, info); ++ gnt_list_entry = get_grant(&gref_head, page_to_pfn(sg_page(sg)), info); + ref = gnt_list_entry->gref; + + info->shadow[id].grants_used[i] = gnt_list_entry; + +- if (rq_data_dir(req)) { ++ if (rq_data_dir(req) && info->feature_persistent) { + char *bvec_data; + void *shared_data; + +@@ -904,21 +925,36 @@ static void blkif_free(struct blkfront_info *info, int suspend) + blk_stop_queue(info->rq); + + /* Remove all persistent grants */ +- if (!list_empty(&info->persistent_gnts)) { ++ if (!list_empty(&info->grants)) { + list_for_each_entry_safe(persistent_gnt, n, +- &info->persistent_gnts, node) { ++ &info->grants, node) { + list_del(&persistent_gnt->node); + if (persistent_gnt->gref != GRANT_INVALID_REF) { + gnttab_end_foreign_access(persistent_gnt->gref, + 0, 0UL); + info->persistent_gnts_c--; + } +- __free_page(pfn_to_page(persistent_gnt->pfn)); ++ if (info->feature_persistent) ++ __free_page(pfn_to_page(persistent_gnt->pfn)); + kfree(persistent_gnt); + } + } + BUG_ON(info->persistent_gnts_c != 0); + ++ /* ++ * Remove indirect pages, this only happens when using indirect ++ * descriptors but not persistent grants ++ */ ++ if (!list_empty(&info->indirect_pages)) { ++ struct page *indirect_page, *n; ++ ++ BUG_ON(info->feature_persistent); ++ list_for_each_entry_safe(indirect_page, n, &info->indirect_pages, lru) { ++ 
list_del(&indirect_page->lru);
++ __free_page(indirect_page);
++ }
++ }
++
+ for (i = 0; i < BLK_RING_SIZE; i++) {
+ /*
+ * Clear persistent grants present in requests already
+@@ -933,7 +969,8 @@ static void blkif_free(struct blkfront_info *info, int suspend)
+ for (j = 0; j < segs; j++) {
+ persistent_gnt = info->shadow[i].grants_used[j];
+ gnttab_end_foreign_access(persistent_gnt->gref, 0, 0UL);
+- __free_page(pfn_to_page(persistent_gnt->pfn));
++ if (info->feature_persistent)
++ __free_page(pfn_to_page(persistent_gnt->pfn));
+ kfree(persistent_gnt);
+ }
+
+@@ -992,7 +1029,7 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
+ nseg = s->req.operation == BLKIF_OP_INDIRECT ?
+ s->req.u.indirect.nr_segments : s->req.u.rw.nr_segments;
+
+- if (bret->operation == BLKIF_OP_READ) {
++ if (bret->operation == BLKIF_OP_READ && info->feature_persistent) {
+ /*
+ * Copy the data received from the backend into the bvec.
+ * Since bv_offset can be different than 0, and bv_len different
+@@ -1013,13 +1050,51 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
+ }
+ /* Add the persistent grant into the list of free grants */
+ for (i = 0; i < nseg; i++) {
+- list_add(&s->grants_used[i]->node, &info->persistent_gnts);
+- info->persistent_gnts_c++;
++ if (gnttab_query_foreign_access(s->grants_used[i]->gref)) {
++ /*
++ * If the grant is still mapped by the backend (the
++ * backend has chosen to make this grant persistent)
++ * we add it at the head of the list, so it will be
++ * reused first.
++ */
++ if (!info->feature_persistent)
++ pr_alert_ratelimited("backend has not unmapped grant: %u\n",
++ s->grants_used[i]->gref);
++ list_add(&s->grants_used[i]->node, &info->grants);
++ info->persistent_gnts_c++;
++ } else {
++ /*
++ * If the grant is not mapped by the backend we end the
++ * foreign access and add it to the tail of the list,
++ * so it will not be picked again unless we run out of
++ * persistent grants.
++ */
++ gnttab_end_foreign_access(s->grants_used[i]->gref, 0, 0UL);
++ s->grants_used[i]->gref = GRANT_INVALID_REF;
++ list_add_tail(&s->grants_used[i]->node, &info->grants);
++ }
+ }
+ if (s->req.operation == BLKIF_OP_INDIRECT) {
+ for (i = 0; i < INDIRECT_GREFS(nseg); i++) {
+- list_add(&s->indirect_grants[i]->node, &info->persistent_gnts);
+- info->persistent_gnts_c++;
++ if (gnttab_query_foreign_access(s->indirect_grants[i]->gref)) {
++ if (!info->feature_persistent)
++ pr_alert_ratelimited("backend has not unmapped grant: %u\n",
++ s->indirect_grants[i]->gref);
++ list_add(&s->indirect_grants[i]->node, &info->grants);
++ info->persistent_gnts_c++;
++ } else {
++ struct page *indirect_page;
++
++ gnttab_end_foreign_access(s->indirect_grants[i]->gref, 0, 0UL);
++ /*
++ * Add the used indirect page back to the list of
++ * available pages for indirect grefs.
++ */ ++ indirect_page = pfn_to_page(s->indirect_grants[i]->pfn); ++ list_add(&indirect_page->lru, &info->indirect_pages); ++ s->indirect_grants[i]->gref = GRANT_INVALID_REF; ++ list_add_tail(&s->indirect_grants[i]->node, &info->grants); ++ } + } + } + } +@@ -1313,7 +1388,8 @@ static int blkfront_probe(struct xenbus_device *dev, + spin_lock_init(&info->io_lock); + info->xbdev = dev; + info->vdevice = vdevice; +- INIT_LIST_HEAD(&info->persistent_gnts); ++ INIT_LIST_HEAD(&info->grants); ++ INIT_LIST_HEAD(&info->indirect_pages); + info->persistent_gnts_c = 0; + info->connected = BLKIF_STATE_DISCONNECTED; + INIT_WORK(&info->work, blkif_restart_queue); +@@ -1660,6 +1736,23 @@ static int blkfront_setup_indirect(struct blkfront_info *info) + if (err) + goto out_of_memory; + ++ if (!info->feature_persistent && info->max_indirect_segments) { ++ /* ++ * We are using indirect descriptors but not persistent ++ * grants, we need to allocate a set of pages that can be ++ * used for mapping indirect grefs ++ */ ++ int num = INDIRECT_GREFS(segs) * BLK_RING_SIZE; ++ ++ BUG_ON(!list_empty(&info->indirect_pages)); ++ for (i = 0; i < num; i++) { ++ struct page *indirect_page = alloc_page(GFP_NOIO); ++ if (!indirect_page) ++ goto out_of_memory; ++ list_add(&indirect_page->lru, &info->indirect_pages); ++ } ++ } ++ + for (i = 0; i < BLK_RING_SIZE; i++) { + info->shadow[i].grants_used = kzalloc( + sizeof(info->shadow[i].grants_used[0]) * segs, +@@ -1690,6 +1783,13 @@ out_of_memory: + kfree(info->shadow[i].indirect_grants); + info->shadow[i].indirect_grants = NULL; + } ++ if (!list_empty(&info->indirect_pages)) { ++ struct page *indirect_page, *n; ++ list_for_each_entry_safe(indirect_page, n, &info->indirect_pages, lru) { ++ list_del(&indirect_page->lru); ++ __free_page(indirect_page); ++ } ++ } + return -ENOMEM; + } + +diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c +index 538856f3e68a..09df26f9621d 100644 +--- a/drivers/char/tpm/tpm_ibmvtpm.c ++++ b/drivers/char/tpm/tpm_ibmvtpm.c +@@ -124,7 +124,7 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count) + { + struct ibmvtpm_dev *ibmvtpm; + struct ibmvtpm_crq crq; +- u64 *word = (u64 *) &crq; ++ __be64 *word = (__be64 *)&crq; + int rc; + + ibmvtpm = (struct ibmvtpm_dev *)TPM_VPRIV(chip); +@@ -145,11 +145,11 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count) + memcpy((void *)ibmvtpm->rtce_buf, (void *)buf, count); + crq.valid = (u8)IBMVTPM_VALID_CMD; + crq.msg = (u8)VTPM_TPM_COMMAND; +- crq.len = (u16)count; +- crq.data = ibmvtpm->rtce_dma_handle; ++ crq.len = cpu_to_be16(count); ++ crq.data = cpu_to_be32(ibmvtpm->rtce_dma_handle); + +- rc = ibmvtpm_send_crq(ibmvtpm->vdev, cpu_to_be64(word[0]), +- cpu_to_be64(word[1])); ++ rc = ibmvtpm_send_crq(ibmvtpm->vdev, be64_to_cpu(word[0]), ++ be64_to_cpu(word[1])); + if (rc != H_SUCCESS) { + dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc); + rc = 0; +diff --git a/drivers/char/tpm/tpm_ibmvtpm.h b/drivers/char/tpm/tpm_ibmvtpm.h +index bd82a791f995..b2c231b1beec 100644 +--- a/drivers/char/tpm/tpm_ibmvtpm.h ++++ b/drivers/char/tpm/tpm_ibmvtpm.h +@@ -22,9 +22,9 @@ + struct ibmvtpm_crq { + u8 valid; + u8 msg; +- u16 len; +- u32 data; +- u64 reserved; ++ __be16 len; ++ __be32 data; ++ __be64 reserved; + } __attribute__((packed, aligned(8))); + + struct ibmvtpm_crq_queue { +diff --git a/drivers/char/virtio_console.c b/drivers/char/virtio_console.c +index b79cf3e1b793..f6b96ba57b32 100644 +--- a/drivers/char/virtio_console.c ++++ 
b/drivers/char/virtio_console.c +@@ -142,6 +142,7 @@ struct ports_device { + * notification + */ + struct work_struct control_work; ++ struct work_struct config_work; + + struct list_head ports; + +@@ -1833,10 +1834,21 @@ static void config_intr(struct virtio_device *vdev) + + portdev = vdev->priv; + ++ if (!use_multiport(portdev)) ++ schedule_work(&portdev->config_work); ++} ++ ++static void config_work_handler(struct work_struct *work) ++{ ++ struct ports_device *portdev; ++ ++ portdev = container_of(work, struct ports_device, control_work); + if (!use_multiport(portdev)) { ++ struct virtio_device *vdev; + struct port *port; + u16 rows, cols; + ++ vdev = portdev->vdev; + vdev->config->get(vdev, + offsetof(struct virtio_console_config, cols), + &cols, sizeof(u16)); +@@ -2030,12 +2042,14 @@ static int virtcons_probe(struct virtio_device *vdev) + spin_lock_init(&portdev->ports_lock); + INIT_LIST_HEAD(&portdev->ports); + ++ INIT_WORK(&portdev->config_work, &config_work_handler); ++ INIT_WORK(&portdev->control_work, &control_work_handler); ++ + if (multiport) { + unsigned int nr_added_bufs; + + spin_lock_init(&portdev->c_ivq_lock); + spin_lock_init(&portdev->c_ovq_lock); +- INIT_WORK(&portdev->control_work, &control_work_handler); + + nr_added_bufs = fill_queue(portdev->c_ivq, + &portdev->c_ivq_lock); +@@ -2103,6 +2117,8 @@ static void virtcons_remove(struct virtio_device *vdev) + /* Finish up work that's lined up */ + if (use_multiport(portdev)) + cancel_work_sync(&portdev->control_work); ++ else ++ cancel_work_sync(&portdev->config_work); + + list_for_each_entry_safe(port, port2, &portdev->ports, list) + unplug_port(port); +@@ -2154,6 +2170,7 @@ static int virtcons_freeze(struct virtio_device *vdev) + + virtqueue_disable_cb(portdev->c_ivq); + cancel_work_sync(&portdev->control_work); ++ cancel_work_sync(&portdev->config_work); + /* + * Once more: if control_work_handler() was running, it would + * enable the cb as the last step. 
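
The tcrypt and testmgr hunks earlier in this patch all make the same correction: waiting for an async crypto request must not be interruptible, because a signal would otherwise abort the wait while the driver still owns the request and its buffers, and tr->err would be read before the completion had fired. A minimal sketch of the corrected wait, assuming a result block like tcrypt's (demo_result and demo_wait_async are invented names, not from the patch):

#include <linux/completion.h>
#include <linux/errno.h>

/* Invented result block, patterned on tcrypt's struct tcrypt_result. */
struct demo_result {
	struct completion completion;
	int err;	/* set by the async callback before complete() */
};

static int demo_wait_async(int ret, struct demo_result *tr)
{
	if (ret == -EINPROGRESS || ret == -EBUSY) {
		/* Non-interruptible wait: a signal can no longer abandon
		 * the request while the driver may still touch its buffers. */
		wait_for_completion(&tr->completion);
		INIT_COMPLETION(tr->completion);	/* re-arm for reuse (3.12-era API) */
		ret = tr->err;	/* only meaningful once the completion has fired */
	}
	return ret;
}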
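
The tpm_ibmvtpm hunks above are a standard endianness annotation fix: fields that cross the hypervisor interface are declared __be16/__be32/__be64 so that sparse can verify every store goes through cpu_to_be*() and every load through be*_to_cpu(). A short sketch of the idiom with an invented structure (demo_crq is illustrative, not the driver's type):

#include <linux/types.h>
#include <asm/byteorder.h>

/* Invented wire-format record: big-endian regardless of host CPU. */
struct demo_crq {
	u8     valid;
	u8     msg;
	__be16 len;
	__be32 data;
	__be64 reserved;
} __attribute__((packed, aligned(8)));

static void demo_fill(struct demo_crq *crq, u16 count, u32 dma_handle)
{
	crq->len  = cpu_to_be16(count);		/* convert exactly once, on store */
	crq->data = cpu_to_be32(dma_handle);
}

static u16 demo_len(const struct demo_crq *crq)
{
	return be16_to_cpu(crq->len);		/* and once more, on load */
}

On big-endian hosts the conversions compile to nothing; the annotations cost only type checking.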
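
The virtio_console change directly above moves config-space access out of the interrupt path: config_intr() now only schedules work, and the possibly blocking reads happen in process context. The pattern in isolation, with invented names (my_device, my_config_intr are not from the patch):

#include <linux/interrupt.h>
#include <linux/workqueue.h>

struct my_device {
	struct work_struct config_work;	/* invented illustrative device */
};

/* Process context: sleeping, mutexes and slow config reads are fine here. */
static void my_config_work_handler(struct work_struct *work)
{
	struct my_device *mydev =
		container_of(work, struct my_device, config_work);

	/* ...blocking configuration access would go here... */
	(void)mydev;
}

/* Interrupt context: must not sleep, so only queue the heavy lifting. */
static irqreturn_t my_config_intr(int irq, void *data)
{
	struct my_device *mydev = data;

	schedule_work(&mydev->config_work);
	return IRQ_HANDLED;
}

The matching INIT_WORK() at probe time and cancel_work_sync() on remove/freeze, as added in the hunks above, keep the deferred work from running against a torn-down device.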
+diff --git a/drivers/dma/dw/platform.c b/drivers/dma/dw/platform.c +index e35d97590311..df0c6b042557 100644 +--- a/drivers/dma/dw/platform.c ++++ b/drivers/dma/dw/platform.c +@@ -48,6 +48,8 @@ static bool dw_dma_of_filter(struct dma_chan *chan, void *param) + return true; + } + ++#define DRV_NAME "dw_dmac" ++ + static struct dma_chan *dw_dma_of_xlate(struct of_phandle_args *dma_spec, + struct of_dma *ofdma) + { +@@ -295,7 +297,7 @@ static struct platform_driver dw_driver = { + .remove = dw_remove, + .shutdown = dw_shutdown, + .driver = { +- .name = "dw_dmac", ++ .name = DRV_NAME, + .pm = &dw_dev_pm_ops, + .of_match_table = of_match_ptr(dw_dma_of_id_table), + .acpi_match_table = ACPI_PTR(dw_dma_acpi_id_table), +@@ -316,3 +318,4 @@ module_exit(dw_exit); + + MODULE_LICENSE("GPL v2"); + MODULE_DESCRIPTION("Synopsys DesignWare DMA Controller platform driver"); ++MODULE_ALIAS("platform:" DRV_NAME); +diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c +index ba8742ab85ee..65344d65ff91 100644 +--- a/drivers/gpu/drm/radeon/atombios_crtc.c ++++ b/drivers/gpu/drm/radeon/atombios_crtc.c +@@ -1278,6 +1278,9 @@ static int dce4_crtc_do_set_base(struct drm_crtc *crtc, + (x << 16) | y); + viewport_w = crtc->mode.hdisplay; + viewport_h = (crtc->mode.vdisplay + 1) & ~1; ++ if ((rdev->family >= CHIP_BONAIRE) && ++ (crtc->mode.flags & DRM_MODE_FLAG_INTERLACE)) ++ viewport_h *= 2; + WREG32(EVERGREEN_VIEWPORT_SIZE + radeon_crtc->crtc_offset, + (viewport_w << 16) | viewport_h); + +diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c +index 6e2e4a859047..bb4c6573a525 100644 +--- a/drivers/gpu/drm/radeon/cik.c ++++ b/drivers/gpu/drm/radeon/cik.c +@@ -6352,6 +6352,9 @@ int cik_irq_set(struct radeon_device *rdev) + WREG32(DC_HPD5_INT_CONTROL, hpd5); + WREG32(DC_HPD6_INT_CONTROL, hpd6); + ++ /* posting read */ ++ RREG32(SRBM_STATUS); ++ + return 0; + } + +diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c +index 20b00a0f42b4..063b72fbfe1e 100644 +--- a/drivers/gpu/drm/radeon/evergreen.c ++++ b/drivers/gpu/drm/radeon/evergreen.c +@@ -4497,6 +4497,9 @@ int evergreen_irq_set(struct radeon_device *rdev) + WREG32(AFMT_AUDIO_PACKET_CONTROL + EVERGREEN_CRTC4_REGISTER_OFFSET, afmt5); + WREG32(AFMT_AUDIO_PACKET_CONTROL + EVERGREEN_CRTC5_REGISTER_OFFSET, afmt6); + ++ /* posting read */ ++ RREG32(SRBM_STATUS); ++ + return 0; + } + +diff --git a/drivers/gpu/drm/radeon/r100.c b/drivers/gpu/drm/radeon/r100.c +index f98dcbeb9a72..91bcb794d59e 100644 +--- a/drivers/gpu/drm/radeon/r100.c ++++ b/drivers/gpu/drm/radeon/r100.c +@@ -742,6 +742,10 @@ int r100_irq_set(struct radeon_device *rdev) + tmp |= RADEON_FP2_DETECT_MASK; + } + WREG32(RADEON_GEN_INT_CNTL, tmp); ++ ++ /* read back to post the write */ ++ RREG32(RADEON_GEN_INT_CNTL); ++ + return 0; + } + +diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c +index 88eb936fbc2f..2c0a0d7c2492 100644 +--- a/drivers/gpu/drm/radeon/r600.c ++++ b/drivers/gpu/drm/radeon/r600.c +@@ -3509,6 +3509,9 @@ int r600_irq_set(struct radeon_device *rdev) + WREG32(RV770_CG_THERMAL_INT, thermal_int); + } + ++ /* posting read */ ++ RREG32(R_000E50_SRBM_STATUS); ++ + return 0; + } + +diff --git a/drivers/gpu/drm/radeon/radeon_cs.c b/drivers/gpu/drm/radeon/radeon_cs.c +index ed9a997c99a3..f4af10089409 100644 +--- a/drivers/gpu/drm/radeon/radeon_cs.c ++++ b/drivers/gpu/drm/radeon/radeon_cs.c +@@ -178,11 +178,13 @@ int radeon_cs_parser_init(struct radeon_cs_parser *p, void *data) + u32 ring = 
RADEON_CS_RING_GFX; + s32 priority = 0; + ++ INIT_LIST_HEAD(&p->validated); ++ + if (!cs->num_chunks) { + return 0; + } ++ + /* get chunks */ +- INIT_LIST_HEAD(&p->validated); + p->idx = 0; + p->ib.sa_bo = NULL; + p->ib.semaphore = NULL; +diff --git a/drivers/gpu/drm/radeon/rs600.c b/drivers/gpu/drm/radeon/rs600.c +index bbe84591f159..dcb119571f11 100644 +--- a/drivers/gpu/drm/radeon/rs600.c ++++ b/drivers/gpu/drm/radeon/rs600.c +@@ -636,6 +636,10 @@ int rs600_irq_set(struct radeon_device *rdev) + WREG32(R_007D18_DC_HOT_PLUG_DETECT2_INT_CONTROL, hpd2); + if (ASIC_IS_DCE2(rdev)) + WREG32(R_007408_HDMI0_AUDIO_PACKET_CONTROL, hdmi0); ++ ++ /* posting read */ ++ RREG32(R_000040_GEN_INT_CNTL); ++ + return 0; + } + +diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c +index 50482e763d80..c9f229f2048a 100644 +--- a/drivers/gpu/drm/radeon/si.c ++++ b/drivers/gpu/drm/radeon/si.c +@@ -5901,6 +5901,9 @@ int si_irq_set(struct radeon_device *rdev) + + WREG32(CG_THERMAL_INT, thermal_int); + ++ /* posting read */ ++ RREG32(SRBM_STATUS); ++ + return 0; + } + +@@ -6816,8 +6819,7 @@ int si_set_uvd_clocks(struct radeon_device *rdev, u32 vclk, u32 dclk) + WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_BYPASS_EN_MASK, ~UPLL_BYPASS_EN_MASK); + + if (!vclk || !dclk) { +- /* keep the Bypass mode, put PLL to sleep */ +- WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_SLEEP_MASK, ~UPLL_SLEEP_MASK); ++ /* keep the Bypass mode */ + return 0; + } + +@@ -6833,8 +6835,7 @@ int si_set_uvd_clocks(struct radeon_device *rdev, u32 vclk, u32 dclk) + /* set VCO_MODE to 1 */ + WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_VCO_MODE_MASK, ~UPLL_VCO_MODE_MASK); + +- /* toggle UPLL_SLEEP to 1 then back to 0 */ +- WREG32_P(CG_UPLL_FUNC_CNTL, UPLL_SLEEP_MASK, ~UPLL_SLEEP_MASK); ++ /* disable sleep mode */ + WREG32_P(CG_UPLL_FUNC_CNTL, 0, ~UPLL_SLEEP_MASK); + + /* deassert UPLL_RESET */ +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +index 0508f93b9795..59cd2baf6dc0 100644 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c +@@ -550,21 +550,6 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset) + goto out_err1; + } + +- ret = ttm_bo_init_mm(&dev_priv->bdev, TTM_PL_VRAM, +- (dev_priv->vram_size >> PAGE_SHIFT)); +- if (unlikely(ret != 0)) { +- DRM_ERROR("Failed initializing memory manager for VRAM.\n"); +- goto out_err2; +- } +- +- dev_priv->has_gmr = true; +- if (ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_GMR, +- dev_priv->max_gmr_ids) != 0) { +- DRM_INFO("No GMR memory available. " +- "Graphics memory resources are very limited.\n"); +- dev_priv->has_gmr = false; +- } +- + dev_priv->mmio_mtrr = arch_phys_wc_add(dev_priv->mmio_start, + dev_priv->mmio_size); + +@@ -627,6 +612,22 @@ static int vmw_driver_load(struct drm_device *dev, unsigned long chipset) + goto out_no_fman; + } + ++ ++ ret = ttm_bo_init_mm(&dev_priv->bdev, TTM_PL_VRAM, ++ (dev_priv->vram_size >> PAGE_SHIFT)); ++ if (unlikely(ret != 0)) { ++ DRM_ERROR("Failed initializing memory manager for VRAM.\n"); ++ goto out_no_vram; ++ } ++ ++ dev_priv->has_gmr = true; ++ if (ttm_bo_init_mm(&dev_priv->bdev, VMW_PL_GMR, ++ dev_priv->max_gmr_ids) != 0) { ++ DRM_INFO("No GMR memory available. " ++ "Graphics memory resources are very limited.\n"); ++ dev_priv->has_gmr = false; ++ } ++ + vmw_kms_save_vga(dev_priv); + + /* Start kms and overlay systems, needs fifo. 
*/ +@@ -652,6 +653,10 @@ out_no_fifo: + vmw_kms_close(dev_priv); + out_no_kms: + vmw_kms_restore_vga(dev_priv); ++ if (dev_priv->has_gmr) ++ (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR); ++ (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM); ++out_no_vram: + vmw_fence_manager_takedown(dev_priv->fman); + out_no_fman: + if (dev_priv->capabilities & SVGA_CAP_IRQMASK) +@@ -667,10 +672,6 @@ out_err4: + iounmap(dev_priv->mmio_virt); + out_err3: + arch_phys_wc_del(dev_priv->mmio_mtrr); +- if (dev_priv->has_gmr) +- (void) ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR); +- (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM); +-out_err2: + (void)ttm_bo_device_release(&dev_priv->bdev); + out_err1: + vmw_ttm_global_release(dev_priv); +@@ -700,6 +701,11 @@ static int vmw_driver_unload(struct drm_device *dev) + } + vmw_kms_close(dev_priv); + vmw_overlay_close(dev_priv); ++ ++ if (dev_priv->has_gmr) ++ (void)ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR); ++ (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM); ++ + vmw_fence_manager_takedown(dev_priv->fman); + if (dev_priv->capabilities & SVGA_CAP_IRQMASK) + drm_irq_uninstall(dev_priv->dev); +@@ -711,9 +717,6 @@ static int vmw_driver_unload(struct drm_device *dev) + ttm_object_device_release(&dev_priv->tdev); + iounmap(dev_priv->mmio_virt); + arch_phys_wc_del(dev_priv->mmio_mtrr); +- if (dev_priv->has_gmr) +- (void)ttm_bo_clean_mm(&dev_priv->bdev, VMW_PL_GMR); +- (void)ttm_bo_clean_mm(&dev_priv->bdev, TTM_PL_VRAM); + (void)ttm_bo_device_release(&dev_priv->bdev); + vmw_ttm_global_release(dev_priv); + +diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h +index 946b8cbfaa9f..56a4ed6e679b 100644 +--- a/drivers/hid/hid-ids.h ++++ b/drivers/hid/hid-ids.h +@@ -555,6 +555,7 @@ + + #define USB_VENDOR_ID_LOGITECH 0x046d + #define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e ++#define USB_DEVICE_ID_LOGITECH_C077 0xc007 + #define USB_DEVICE_ID_LOGITECH_RECEIVER 0xc101 + #define USB_DEVICE_ID_LOGITECH_HARMONY_FIRST 0xc110 + #define USB_DEVICE_ID_LOGITECH_HARMONY_LAST 0xc14f +diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c +index 25484ee3c51e..89b7eb4f9d3a 100644 +--- a/drivers/hid/usbhid/hid-quirks.c ++++ b/drivers/hid/usbhid/hid-quirks.c +@@ -78,6 +78,7 @@ static const struct hid_blacklist { + { USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET }, + { USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS }, + { USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET }, ++ { USB_VENDOR_ID_LOGITECH, USB_DEVICE_ID_LOGITECH_C077, HID_QUIRK_ALWAYS_POLL }, + { USB_VENDOR_ID_MGE, USB_DEVICE_ID_MGE_UPS, HID_QUIRK_NOGET }, + { USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3, HID_QUIRK_NO_INIT_REPORTS }, + { USB_VENDOR_ID_MICROSOFT, USB_DEVICE_ID_MS_TYPE_COVER_3_JP, HID_QUIRK_NO_INIT_REPORTS }, +diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c +index 97f4e807c862..7be7ddf47797 100644 +--- a/drivers/idle/intel_idle.c ++++ b/drivers/idle/intel_idle.c +@@ -503,6 +503,7 @@ static const struct x86_cpu_id intel_idle_ids[] = { + ICPU(0x2f, idle_cpu_nehalem), + ICPU(0x2a, idle_cpu_snb), + ICPU(0x2d, idle_cpu_snb), ++ ICPU(0x36, idle_cpu_atom), + ICPU(0x3a, idle_cpu_ivb), + ICPU(0x3e, idle_cpu_ivb), + ICPU(0x3c, idle_cpu_hsw), +diff --git a/drivers/iio/adc/max1363.c b/drivers/iio/adc/max1363.c +index cfb3d39b6664..a9102ef8db38 100644 +--- a/drivers/iio/adc/max1363.c ++++ b/drivers/iio/adc/max1363.c +@@ -1214,8 +1214,8 @@ static const struct max1363_chip_info 
max1363_chip_info_tbl[] = { + .num_modes = ARRAY_SIZE(max1238_mode_list), + .default_mode = s0to11, + .info = &max1238_info, +- .channels = max1238_channels, +- .num_channels = ARRAY_SIZE(max1238_channels), ++ .channels = max1038_channels, ++ .num_channels = ARRAY_SIZE(max1038_channels), + }, + [max11605] = { + .bits = 8, +@@ -1224,8 +1224,8 @@ static const struct max1363_chip_info max1363_chip_info_tbl[] = { + .num_modes = ARRAY_SIZE(max1238_mode_list), + .default_mode = s0to11, + .info = &max1238_info, +- .channels = max1238_channels, +- .num_channels = ARRAY_SIZE(max1238_channels), ++ .channels = max1038_channels, ++ .num_channels = ARRAY_SIZE(max1038_channels), + }, + [max11606] = { + .bits = 10, +@@ -1274,8 +1274,8 @@ static const struct max1363_chip_info max1363_chip_info_tbl[] = { + .num_modes = ARRAY_SIZE(max1238_mode_list), + .default_mode = s0to11, + .info = &max1238_info, +- .channels = max1238_channels, +- .num_channels = ARRAY_SIZE(max1238_channels), ++ .channels = max1138_channels, ++ .num_channels = ARRAY_SIZE(max1138_channels), + }, + [max11611] = { + .bits = 10, +@@ -1284,8 +1284,8 @@ static const struct max1363_chip_info max1363_chip_info_tbl[] = { + .num_modes = ARRAY_SIZE(max1238_mode_list), + .default_mode = s0to11, + .info = &max1238_info, +- .channels = max1238_channels, +- .num_channels = ARRAY_SIZE(max1238_channels), ++ .channels = max1138_channels, ++ .num_channels = ARRAY_SIZE(max1138_channels), + }, + [max11612] = { + .bits = 12, +diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c +index 2df31f68ea09..849c9dc7d1f6 100644 +--- a/drivers/infiniband/core/uverbs_main.c ++++ b/drivers/infiniband/core/uverbs_main.c +@@ -473,6 +473,7 @@ static void ib_uverbs_async_handler(struct ib_uverbs_file *file, + + entry->desc.async.element = element; + entry->desc.async.event_type = event; ++ entry->desc.async.reserved = 0; + entry->counter = counter; + + list_add_tail(&entry->list, &file->async_file->event_list); +diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c +index fbe9ca734f8f..9d71a57c96b1 100644 +--- a/drivers/iommu/iommu.c ++++ b/drivers/iommu/iommu.c +@@ -794,7 +794,7 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova, + size_t orig_size = size; + int ret = 0; + +- if (unlikely(domain->ops->unmap == NULL || ++ if (unlikely(domain->ops->map == NULL || + domain->ops->pgsize_bitmap == 0UL)) + return -ENODEV; + +diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c +index a7fd82133b12..03b2edd35e19 100644 +--- a/drivers/md/bitmap.c ++++ b/drivers/md/bitmap.c +@@ -883,7 +883,6 @@ void bitmap_unplug(struct bitmap *bitmap) + { + unsigned long i; + int dirty, need_write; +- int wait = 0; + + if (!bitmap || !bitmap->storage.filemap || + test_bit(BITMAP_STALE, &bitmap->flags)) +@@ -901,16 +900,13 @@ void bitmap_unplug(struct bitmap *bitmap) + clear_page_attr(bitmap, i, BITMAP_PAGE_PENDING); + write_page(bitmap, bitmap->storage.filemap[i], 0); + } +- if (dirty) +- wait = 1; +- } +- if (wait) { /* if any writes were performed, we need to wait on them */ +- if (bitmap->storage.file) +- wait_event(bitmap->write_wait, +- atomic_read(&bitmap->pending_writes)==0); +- else +- md_super_wait(bitmap->mddev); + } ++ if (bitmap->storage.file) ++ wait_event(bitmap->write_wait, ++ atomic_read(&bitmap->pending_writes)==0); ++ else ++ md_super_wait(bitmap->mddev); ++ + if (test_bit(BITMAP_WRITE_ERROR, &bitmap->flags)) + bitmap_file_kick(bitmap); + } +diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c +index 
951addc80fcc..d4e1a17325ac 100644 +--- a/drivers/md/dm-io.c ++++ b/drivers/md/dm-io.c +@@ -289,9 +289,16 @@ static void do_region(int rw, unsigned region, struct dm_io_region *where, + struct request_queue *q = bdev_get_queue(where->bdev); + unsigned short logical_block_size = queue_logical_block_size(q); + sector_t num_sectors; ++ unsigned int uninitialized_var(special_cmd_max_sectors); + +- /* Reject unsupported discard requests */ +- if ((rw & REQ_DISCARD) && !blk_queue_discard(q)) { ++ /* ++ * Reject unsupported discard and write same requests. ++ */ ++ if (rw & REQ_DISCARD) ++ special_cmd_max_sectors = q->limits.max_discard_sectors; ++ else if (rw & REQ_WRITE_SAME) ++ special_cmd_max_sectors = q->limits.max_write_same_sectors; ++ if ((rw & (REQ_DISCARD | REQ_WRITE_SAME)) && special_cmd_max_sectors == 0) { + dec_count(io, region, -EOPNOTSUPP); + return; + } +@@ -317,7 +324,7 @@ static void do_region(int rw, unsigned region, struct dm_io_region *where, + store_io_and_region_in_bio(bio, io, region); + + if (rw & REQ_DISCARD) { +- num_sectors = min_t(sector_t, q->limits.max_discard_sectors, remaining); ++ num_sectors = min_t(sector_t, special_cmd_max_sectors, remaining); + bio->bi_size = num_sectors << SECTOR_SHIFT; + remaining -= num_sectors; + } else if (rw & REQ_WRITE_SAME) { +@@ -326,7 +333,7 @@ static void do_region(int rw, unsigned region, struct dm_io_region *where, + */ + dp->get_page(dp, &page, &len, &offset); + bio_add_page(bio, page, logical_block_size, offset); +- num_sectors = min_t(sector_t, q->limits.max_write_same_sectors, remaining); ++ num_sectors = min_t(sector_t, special_cmd_max_sectors, remaining); + bio->bi_size = num_sectors << SECTOR_SHIFT; + + offset = 0; +diff --git a/drivers/md/dm.c b/drivers/md/dm.c +index 93f3fe443657..5a0b1742f794 100644 +--- a/drivers/md/dm.c ++++ b/drivers/md/dm.c +@@ -2439,10 +2439,16 @@ static void __dm_destroy(struct mapped_device *md, bool wait) + set_bit(DMF_FREEING, &md->flags); + spin_unlock(&_minor_lock); + ++ /* ++ * Take suspend_lock so that presuspend and postsuspend methods ++ * do not race with internal suspend. 
++ */ ++ mutex_lock(&md->suspend_lock); + if (!dm_suspended_md(md)) { + dm_table_presuspend_targets(map); + dm_table_postsuspend_targets(map); + } ++ mutex_unlock(&md->suspend_lock); + + /* dm_put_live_table must be before msleep, otherwise deadlock is possible */ + dm_put_live_table(md, srcu_idx); +diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c +index a4694aa20a3e..f66aeb79abdf 100644 +--- a/drivers/net/can/dev.c ++++ b/drivers/net/can/dev.c +@@ -503,6 +503,14 @@ struct sk_buff *alloc_can_skb(struct net_device *dev, struct can_frame **cf) + skb->pkt_type = PACKET_BROADCAST; + skb->ip_summed = CHECKSUM_UNNECESSARY; + ++ skb_reset_mac_header(skb); ++ skb_reset_network_header(skb); ++ skb_reset_transport_header(skb); ++ ++ skb_reset_mac_header(skb); ++ skb_reset_network_header(skb); ++ skb_reset_transport_header(skb); ++ + can_skb_reserve(skb); + can_skb_prv(skb)->ifindex = dev->ifindex; + +diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +index b42f89ce02ef..237a5611d3f6 100644 +--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c ++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c +@@ -12236,6 +12236,10 @@ static int bnx2x_init_dev(struct bnx2x *bp, struct pci_dev *pdev, + /* clean indirect addresses */ + pci_write_config_dword(bp->pdev, PCICFG_GRC_ADDRESS, + PCICFG_VENDOR_ID_OFFSET); ++ ++ /* Set PCIe reset type to fundamental for EEH recovery */ ++ pdev->needs_freset = 1; ++ + /* + * Clean the following indirect addresses for all functions since it + * is not used by the driver. +diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c +index 9c66d3168911..c54868523f27 100644 +--- a/drivers/net/ethernet/marvell/mvneta.c ++++ b/drivers/net/ethernet/marvell/mvneta.c +@@ -101,16 +101,56 @@ + #define MVNETA_CPU_RXQ_ACCESS_ALL_MASK 0x000000ff + #define MVNETA_CPU_TXQ_ACCESS_ALL_MASK 0x0000ff00 + #define MVNETA_RXQ_TIME_COAL_REG(q) (0x2580 + ((q) << 2)) ++ ++/* Exception Interrupt Port/Queue Cause register */ ++ + #define MVNETA_INTR_NEW_CAUSE 0x25a0 +-#define MVNETA_RX_INTR_MASK(nr_rxqs) (((1 << nr_rxqs) - 1) << 8) + #define MVNETA_INTR_NEW_MASK 0x25a4 ++ ++/* bits 0..7 = TXQ SENT, one bit per queue. ++ * bits 8..15 = RXQ OCCUP, one bit per queue. ++ * bits 16..23 = RXQ FREE, one bit per queue. ++ * bit 29 = OLD_REG_SUM, see old reg ? 
++ * bit 30 = TX_ERR_SUM, one bit for 4 ports ++ * bit 31 = MISC_SUM, one bit for 4 ports ++ */ ++#define MVNETA_TX_INTR_MASK(nr_txqs) (((1 << nr_txqs) - 1) << 0) ++#define MVNETA_TX_INTR_MASK_ALL (0xff << 0) ++#define MVNETA_RX_INTR_MASK(nr_rxqs) (((1 << nr_rxqs) - 1) << 8) ++#define MVNETA_RX_INTR_MASK_ALL (0xff << 8) ++ + #define MVNETA_INTR_OLD_CAUSE 0x25a8 + #define MVNETA_INTR_OLD_MASK 0x25ac ++ ++/* Data Path Port/Queue Cause Register */ + #define MVNETA_INTR_MISC_CAUSE 0x25b0 + #define MVNETA_INTR_MISC_MASK 0x25b4 ++ ++#define MVNETA_CAUSE_PHY_STATUS_CHANGE BIT(0) ++#define MVNETA_CAUSE_LINK_CHANGE BIT(1) ++#define MVNETA_CAUSE_PTP BIT(4) ++ ++#define MVNETA_CAUSE_INTERNAL_ADDR_ERR BIT(7) ++#define MVNETA_CAUSE_RX_OVERRUN BIT(8) ++#define MVNETA_CAUSE_RX_CRC_ERROR BIT(9) ++#define MVNETA_CAUSE_RX_LARGE_PKT BIT(10) ++#define MVNETA_CAUSE_TX_UNDERUN BIT(11) ++#define MVNETA_CAUSE_PRBS_ERR BIT(12) ++#define MVNETA_CAUSE_PSC_SYNC_CHANGE BIT(13) ++#define MVNETA_CAUSE_SERDES_SYNC_ERR BIT(14) ++ ++#define MVNETA_CAUSE_BMU_ALLOC_ERR_SHIFT 16 ++#define MVNETA_CAUSE_BMU_ALLOC_ERR_ALL_MASK (0xF << MVNETA_CAUSE_BMU_ALLOC_ERR_SHIFT) ++#define MVNETA_CAUSE_BMU_ALLOC_ERR_MASK(pool) (1 << (MVNETA_CAUSE_BMU_ALLOC_ERR_SHIFT + (pool))) ++ ++#define MVNETA_CAUSE_TXQ_ERROR_SHIFT 24 ++#define MVNETA_CAUSE_TXQ_ERROR_ALL_MASK (0xFF << MVNETA_CAUSE_TXQ_ERROR_SHIFT) ++#define MVNETA_CAUSE_TXQ_ERROR_MASK(q) (1 << (MVNETA_CAUSE_TXQ_ERROR_SHIFT + (q))) ++ + #define MVNETA_INTR_ENABLE 0x25b8 + #define MVNETA_TXQ_INTR_ENABLE_ALL_MASK 0x0000ff00 +-#define MVNETA_RXQ_INTR_ENABLE_ALL_MASK 0xff000000 ++#define MVNETA_RXQ_INTR_ENABLE_ALL_MASK 0xff000000 // note: neta says it's 0x000000FF ++ + #define MVNETA_RXQ_CMD 0x2680 + #define MVNETA_RXQ_DISABLE_SHIFT 8 + #define MVNETA_RXQ_ENABLE_MASK 0x000000ff +@@ -176,9 +216,6 @@ + #define MVNETA_RX_COAL_PKTS 32 + #define MVNETA_RX_COAL_USEC 100 + +-/* Timer */ +-#define MVNETA_TX_DONE_TIMER_PERIOD 10 +- + /* Napi polling weight */ + #define MVNETA_RX_POLL_WEIGHT 64 + +@@ -221,10 +258,12 @@ + + #define MVNETA_RX_BUF_SIZE(pkt_size) ((pkt_size) + NET_SKB_PAD) + +-struct mvneta_stats { ++struct mvneta_pcpu_stats { + struct u64_stats_sync syncp; +- u64 packets; +- u64 bytes; ++ u64 rx_packets; ++ u64 rx_bytes; ++ u64 tx_packets; ++ u64 tx_bytes; + }; + + struct mvneta_port { +@@ -232,16 +271,11 @@ struct mvneta_port { + void __iomem *base; + struct mvneta_rx_queue *rxqs; + struct mvneta_tx_queue *txqs; +- struct timer_list tx_done_timer; + struct net_device *dev; + + u32 cause_rx_tx; + struct napi_struct napi; + +- /* Flags */ +- unsigned long flags; +-#define MVNETA_F_TX_DONE_TIMER_BIT 0 +- + /* Napi weight */ + int weight; + +@@ -250,8 +284,7 @@ struct mvneta_port { + u8 mcast_count[256]; + u16 tx_ring_size; + u16 rx_ring_size; +- struct mvneta_stats tx_stats; +- struct mvneta_stats rx_stats; ++ struct mvneta_pcpu_stats *stats; + + struct mii_bus *mii_bus; + struct phy_device *phy_dev; +@@ -461,21 +494,29 @@ struct rtnl_link_stats64 *mvneta_get_stats64(struct net_device *dev, + { + struct mvneta_port *pp = netdev_priv(dev); + unsigned int start; ++ int cpu; + +- memset(stats, 0, sizeof(struct rtnl_link_stats64)); ++ for_each_possible_cpu(cpu) { ++ struct mvneta_pcpu_stats *cpu_stats; ++ u64 rx_packets; ++ u64 rx_bytes; ++ u64 tx_packets; ++ u64 tx_bytes; + +- do { +- start = u64_stats_fetch_begin_bh(&pp->rx_stats.syncp); +- stats->rx_packets = pp->rx_stats.packets; +- stats->rx_bytes = pp->rx_stats.bytes; +- } while (u64_stats_fetch_retry_bh(&pp->rx_stats.syncp, start)); ++ 
cpu_stats = per_cpu_ptr(pp->stats, cpu); ++ do { ++ start = u64_stats_fetch_begin_bh(&cpu_stats->syncp); ++ rx_packets = cpu_stats->rx_packets; ++ rx_bytes = cpu_stats->rx_bytes; ++ tx_packets = cpu_stats->tx_packets; ++ tx_bytes = cpu_stats->tx_bytes; ++ } while (u64_stats_fetch_retry_bh(&cpu_stats->syncp, start)); + +- +- do { +- start = u64_stats_fetch_begin_bh(&pp->tx_stats.syncp); +- stats->tx_packets = pp->tx_stats.packets; +- stats->tx_bytes = pp->tx_stats.bytes; +- } while (u64_stats_fetch_retry_bh(&pp->tx_stats.syncp, start)); ++ stats->rx_packets += rx_packets; ++ stats->rx_bytes += rx_bytes; ++ stats->tx_packets += tx_packets; ++ stats->tx_bytes += tx_bytes; ++ } + + stats->rx_errors = dev->stats.rx_errors; + stats->rx_dropped = dev->stats.rx_dropped; +@@ -1100,17 +1141,6 @@ static void mvneta_tx_done_pkts_coal_set(struct mvneta_port *pp, + txq->done_pkts_coal = value; + } + +-/* Trigger tx done timer in MVNETA_TX_DONE_TIMER_PERIOD msecs */ +-static void mvneta_add_tx_done_timer(struct mvneta_port *pp) +-{ +- if (test_and_set_bit(MVNETA_F_TX_DONE_TIMER_BIT, &pp->flags) == 0) { +- pp->tx_done_timer.expires = jiffies + +- msecs_to_jiffies(MVNETA_TX_DONE_TIMER_PERIOD); +- add_timer(&pp->tx_done_timer); +- } +-} +- +- + /* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */ + static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc, + u32 phys_addr, u32 cookie) +@@ -1182,7 +1212,7 @@ static u32 mvneta_txq_desc_csum(int l3_offs, int l3_proto, + command = l3_offs << MVNETA_TX_L3_OFF_SHIFT; + command |= ip_hdr_len << MVNETA_TX_IP_HLEN_SHIFT; + +- if (l3_proto == swab16(ETH_P_IP)) ++ if (l3_proto == htons(ETH_P_IP)) + command |= MVNETA_TXD_IP_CSUM; + else + command |= MVNETA_TX_L3_IP6; +@@ -1391,6 +1421,8 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo, + { + struct net_device *dev = pp->dev; + int rx_done, rx_filled; ++ u32 rcvd_pkts = 0; ++ u32 rcvd_bytes = 0; + + /* Get number of received packets */ + rx_done = mvneta_rxq_busy_desc_num_get(pp, rxq); +@@ -1428,10 +1460,8 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo, + + rx_bytes = rx_desc->data_size - + (ETH_FCS_LEN + MVNETA_MH_SIZE); +- u64_stats_update_begin(&pp->rx_stats.syncp); +- pp->rx_stats.packets++; +- pp->rx_stats.bytes += rx_bytes; +- u64_stats_update_end(&pp->rx_stats.syncp); ++ rcvd_pkts++; ++ rcvd_bytes += rx_bytes; + + /* Linux processing */ + skb_reserve(skb, MVNETA_MH_SIZE); +@@ -1452,6 +1482,15 @@ static int mvneta_rx(struct mvneta_port *pp, int rx_todo, + } + } + ++ if (rcvd_pkts) { ++ struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats); ++ ++ u64_stats_update_begin(&stats->syncp); ++ stats->rx_packets += rcvd_pkts; ++ stats->rx_bytes += rcvd_bytes; ++ u64_stats_update_end(&stats->syncp); ++ } ++ + /* Update rxq management counters */ + mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_filled); + +@@ -1583,25 +1622,17 @@ static int mvneta_tx(struct sk_buff *skb, struct net_device *dev) + + out: + if (frags > 0) { +- u64_stats_update_begin(&pp->tx_stats.syncp); +- pp->tx_stats.packets++; +- pp->tx_stats.bytes += len; +- u64_stats_update_end(&pp->tx_stats.syncp); ++ struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats); + ++ u64_stats_update_begin(&stats->syncp); ++ stats->tx_packets++; ++ stats->tx_bytes += len; ++ u64_stats_update_end(&stats->syncp); + } else { + dev->stats.tx_dropped++; + dev_kfree_skb_any(skb); + } + +- if (txq->count >= MVNETA_TXDONE_COAL_PKTS) +- mvneta_txq_done(pp, txq); +- +- /* If after calling mvneta_txq_done, count equals +- * frags, 
we need to set the timer +- */ +- if (txq->count == frags && frags > 0) +- mvneta_add_tx_done_timer(pp); +- + return NETDEV_TX_OK; + } + +@@ -1877,14 +1908,22 @@ static int mvneta_poll(struct napi_struct *napi, int budget) + + /* Read cause register */ + cause_rx_tx = mvreg_read(pp, MVNETA_INTR_NEW_CAUSE) & +- MVNETA_RX_INTR_MASK(rxq_number); ++ (MVNETA_RX_INTR_MASK(rxq_number) | MVNETA_TX_INTR_MASK(txq_number)); ++ ++ /* Release Tx descriptors */ ++ if (cause_rx_tx & MVNETA_TX_INTR_MASK_ALL) { ++ int tx_todo = 0; ++ ++ mvneta_tx_done_gbe(pp, (cause_rx_tx & MVNETA_TX_INTR_MASK_ALL), &tx_todo); ++ cause_rx_tx &= ~MVNETA_TX_INTR_MASK_ALL; ++ } + + /* For the case where the last mvneta_poll did not process all + * RX packets + */ + cause_rx_tx |= pp->cause_rx_tx; + if (rxq_number > 1) { +- while ((cause_rx_tx != 0) && (budget > 0)) { ++ while ((cause_rx_tx & MVNETA_RX_INTR_MASK_ALL) && (budget > 0)) { + int count; + struct mvneta_rx_queue *rxq; + /* get rx queue number from cause_rx_tx */ +@@ -1916,7 +1955,7 @@ static int mvneta_poll(struct napi_struct *napi, int budget) + napi_complete(napi); + local_irq_save(flags); + mvreg_write(pp, MVNETA_INTR_NEW_MASK, +- MVNETA_RX_INTR_MASK(rxq_number)); ++ MVNETA_RX_INTR_MASK(rxq_number) | MVNETA_TX_INTR_MASK(txq_number)); + local_irq_restore(flags); + } + +@@ -1924,26 +1963,6 @@ static int mvneta_poll(struct napi_struct *napi, int budget) + return rx_done; + } + +-/* tx done timer callback */ +-static void mvneta_tx_done_timer_callback(unsigned long data) +-{ +- struct net_device *dev = (struct net_device *)data; +- struct mvneta_port *pp = netdev_priv(dev); +- int tx_done = 0, tx_todo = 0; +- +- if (!netif_running(dev)) +- return ; +- +- clear_bit(MVNETA_F_TX_DONE_TIMER_BIT, &pp->flags); +- +- tx_done = mvneta_tx_done_gbe(pp, +- (((1 << txq_number) - 1) & +- MVNETA_CAUSE_TXQ_SENT_DESC_ALL_MASK), +- &tx_todo); +- if (tx_todo > 0) +- mvneta_add_tx_done_timer(pp); +-} +- + /* Handle rxq fill: allocates rxq skbs; called when initializing a port */ + static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq, + int num) +@@ -2193,7 +2212,7 @@ static void mvneta_start_dev(struct mvneta_port *pp) + + /* Unmask interrupts */ + mvreg_write(pp, MVNETA_INTR_NEW_MASK, +- MVNETA_RX_INTR_MASK(rxq_number)); ++ MVNETA_RX_INTR_MASK(rxq_number) | MVNETA_TX_INTR_MASK(txq_number)); + + phy_start(pp->phy_dev); + netif_tx_start_all_queues(pp->dev); +@@ -2226,16 +2245,6 @@ static void mvneta_stop_dev(struct mvneta_port *pp) + mvneta_rx_reset(pp); + } + +-/* tx timeout callback - display a message and stop/start the network device */ +-static void mvneta_tx_timeout(struct net_device *dev) +-{ +- struct mvneta_port *pp = netdev_priv(dev); +- +- netdev_info(dev, "tx timeout\n"); +- mvneta_stop_dev(pp); +- mvneta_start_dev(pp); +-} +- + /* Return positive if MTU is valid */ + static int mvneta_check_mtu_valid(struct net_device *dev, int mtu) + { +@@ -2479,8 +2488,6 @@ static int mvneta_stop(struct net_device *dev) + free_irq(dev->irq, pp); + mvneta_cleanup_rxqs(pp); + mvneta_cleanup_txqs(pp); +- del_timer(&pp->tx_done_timer); +- clear_bit(MVNETA_F_TX_DONE_TIMER_BIT, &pp->flags); + + return 0; + } +@@ -2616,7 +2623,6 @@ static const struct net_device_ops mvneta_netdev_ops = { + .ndo_set_rx_mode = mvneta_set_rx_mode, + .ndo_set_mac_address = mvneta_set_mac_addr, + .ndo_change_mtu = mvneta_change_mtu, +- .ndo_tx_timeout = mvneta_tx_timeout, + .ndo_get_stats64 = mvneta_get_stats64, + .ndo_do_ioctl = mvneta_ioctl, + }; +@@ -2811,6 +2817,13 @@ static int 
mvneta_probe(struct platform_device *pdev) + goto err_clk; + } + ++ /* Alloc per-cpu stats */ ++ pp->stats = alloc_percpu(struct mvneta_pcpu_stats); ++ if (!pp->stats) { ++ err = -ENOMEM; ++ goto err_unmap; ++ } ++ + dt_mac_addr = of_get_mac_address(dn); + if (dt_mac_addr && is_valid_ether_addr(dt_mac_addr)) { + mac_from = "device tree"; +@@ -2826,11 +2839,6 @@ static int mvneta_probe(struct platform_device *pdev) + } + } + +- pp->tx_done_timer.data = (unsigned long)dev; +- pp->tx_done_timer.function = mvneta_tx_done_timer_callback; +- init_timer(&pp->tx_done_timer); +- clear_bit(MVNETA_F_TX_DONE_TIMER_BIT, &pp->flags); +- + pp->tx_ring_size = MVNETA_MAX_TXD; + pp->rx_ring_size = MVNETA_MAX_RXD; + +@@ -2840,7 +2848,7 @@ static int mvneta_probe(struct platform_device *pdev) + err = mvneta_init(pp, phy_addr); + if (err < 0) { + dev_err(&pdev->dev, "can't init eth hal\n"); +- goto err_unmap; ++ goto err_free_stats; + } + mvneta_port_power_up(pp, phy_mode); + +@@ -2870,6 +2878,8 @@ static int mvneta_probe(struct platform_device *pdev) + + err_deinit: + mvneta_deinit(pp); ++err_free_stats: ++ free_percpu(pp->stats); + err_unmap: + iounmap(pp->base); + err_clk: +@@ -2890,6 +2900,7 @@ static int mvneta_remove(struct platform_device *pdev) + unregister_netdev(dev); + mvneta_deinit(pp); + clk_disable_unprepare(pp->clk); ++ free_percpu(pp->stats); + iounmap(pp->base); + irq_dispose_mapping(dev->irq); + free_netdev(dev); +diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c +index 60c9f4f103fc..d98586085cab 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/main.c ++++ b/drivers/net/ethernet/mellanox/mlx4/main.c +@@ -2134,13 +2134,8 @@ static int __mlx4_init_one(struct pci_dev *pdev, int pci_dev_data) + /* Allow large DMA segments, up to the firmware limit of 1 GB */ + dma_set_max_seg_size(&pdev->dev, 1024 * 1024 * 1024); + +- priv = kzalloc(sizeof(*priv), GFP_KERNEL); +- if (!priv) { +- err = -ENOMEM; +- goto err_release_regions; +- } +- +- dev = &priv->dev; ++ dev = pci_get_drvdata(pdev); ++ priv = mlx4_priv(dev); + dev->pdev = pdev; + INIT_LIST_HEAD(&priv->ctx_list); + spin_lock_init(&priv->ctx_lock); +@@ -2308,8 +2303,7 @@ slave_start: + mlx4_sense_init(dev); + mlx4_start_sense(dev); + +- priv->pci_dev_data = pci_dev_data; +- pci_set_drvdata(pdev, dev); ++ priv->removed = 0; + + return 0; + +@@ -2375,84 +2369,108 @@ err_disable_pdev: + + static int mlx4_init_one(struct pci_dev *pdev, const struct pci_device_id *id) + { ++ struct mlx4_priv *priv; ++ struct mlx4_dev *dev; ++ + printk_once(KERN_INFO "%s", mlx4_version); + ++ priv = kzalloc(sizeof(*priv), GFP_KERNEL); ++ if (!priv) ++ return -ENOMEM; ++ ++ dev = &priv->dev; ++ pci_set_drvdata(pdev, dev); ++ priv->pci_dev_data = id->driver_data; ++ + return __mlx4_init_one(pdev, id->driver_data); + } + +-static void mlx4_remove_one(struct pci_dev *pdev) ++static void __mlx4_remove_one(struct pci_dev *pdev) + { + struct mlx4_dev *dev = pci_get_drvdata(pdev); + struct mlx4_priv *priv = mlx4_priv(dev); ++ int pci_dev_data; + int p; + +- if (dev) { +- /* in SRIOV it is not allowed to unload the pf's +- * driver while there are alive vf's */ +- if (mlx4_is_master(dev)) { +- if (mlx4_how_many_lives_vf(dev)) +- printk(KERN_ERR "Removing PF when there are assigned VF's !!!\n"); +- } +- mlx4_stop_sense(dev); +- mlx4_unregister_device(dev); ++ if (priv->removed) ++ return; + +- for (p = 1; p <= dev->caps.num_ports; p++) { +- mlx4_cleanup_port_info(&priv->port[p]); +- mlx4_CLOSE_PORT(dev, p); +- } ++ pci_dev_data = 
priv->pci_dev_data; + +- if (mlx4_is_master(dev)) +- mlx4_free_resource_tracker(dev, +- RES_TR_FREE_SLAVES_ONLY); +- +- mlx4_cleanup_counters_table(dev); +- mlx4_cleanup_qp_table(dev); +- mlx4_cleanup_srq_table(dev); +- mlx4_cleanup_cq_table(dev); +- mlx4_cmd_use_polling(dev); +- mlx4_cleanup_eq_table(dev); +- mlx4_cleanup_mcg_table(dev); +- mlx4_cleanup_mr_table(dev); +- mlx4_cleanup_xrcd_table(dev); +- mlx4_cleanup_pd_table(dev); ++ /* in SRIOV it is not allowed to unload the pf's ++ * driver while there are alive vf's */ ++ if (mlx4_is_master(dev) && mlx4_how_many_lives_vf(dev)) ++ printk(KERN_ERR "Removing PF when there are assigned VF's !!!\n"); ++ mlx4_stop_sense(dev); ++ mlx4_unregister_device(dev); + +- if (mlx4_is_master(dev)) +- mlx4_free_resource_tracker(dev, +- RES_TR_FREE_STRUCTS_ONLY); +- +- iounmap(priv->kar); +- mlx4_uar_free(dev, &priv->driver_uar); +- mlx4_cleanup_uar_table(dev); +- if (!mlx4_is_slave(dev)) +- mlx4_clear_steering(dev); +- mlx4_free_eq_table(dev); +- if (mlx4_is_master(dev)) +- mlx4_multi_func_cleanup(dev); +- mlx4_close_hca(dev); +- if (mlx4_is_slave(dev)) +- mlx4_multi_func_cleanup(dev); +- mlx4_cmd_cleanup(dev); +- +- if (dev->flags & MLX4_FLAG_MSI_X) +- pci_disable_msix(pdev); +- if (dev->flags & MLX4_FLAG_SRIOV) { +- mlx4_warn(dev, "Disabling SR-IOV\n"); +- pci_disable_sriov(pdev); +- } ++ for (p = 1; p <= dev->caps.num_ports; p++) { ++ mlx4_cleanup_port_info(&priv->port[p]); ++ mlx4_CLOSE_PORT(dev, p); ++ } + +- if (!mlx4_is_slave(dev)) +- mlx4_free_ownership(dev); ++ if (mlx4_is_master(dev)) ++ mlx4_free_resource_tracker(dev, ++ RES_TR_FREE_SLAVES_ONLY); ++ ++ mlx4_cleanup_counters_table(dev); ++ mlx4_cleanup_qp_table(dev); ++ mlx4_cleanup_srq_table(dev); ++ mlx4_cleanup_cq_table(dev); ++ mlx4_cmd_use_polling(dev); ++ mlx4_cleanup_eq_table(dev); ++ mlx4_cleanup_mcg_table(dev); ++ mlx4_cleanup_mr_table(dev); ++ mlx4_cleanup_xrcd_table(dev); ++ mlx4_cleanup_pd_table(dev); + +- kfree(dev->caps.qp0_tunnel); +- kfree(dev->caps.qp0_proxy); +- kfree(dev->caps.qp1_tunnel); +- kfree(dev->caps.qp1_proxy); ++ if (mlx4_is_master(dev)) ++ mlx4_free_resource_tracker(dev, ++ RES_TR_FREE_STRUCTS_ONLY); + +- kfree(priv); +- pci_release_regions(pdev); +- pci_disable_device(pdev); +- pci_set_drvdata(pdev, NULL); ++ iounmap(priv->kar); ++ mlx4_uar_free(dev, &priv->driver_uar); ++ mlx4_cleanup_uar_table(dev); ++ if (!mlx4_is_slave(dev)) ++ mlx4_clear_steering(dev); ++ mlx4_free_eq_table(dev); ++ if (mlx4_is_master(dev)) ++ mlx4_multi_func_cleanup(dev); ++ mlx4_close_hca(dev); ++ if (mlx4_is_slave(dev)) ++ mlx4_multi_func_cleanup(dev); ++ mlx4_cmd_cleanup(dev); ++ ++ if (dev->flags & MLX4_FLAG_MSI_X) ++ pci_disable_msix(pdev); ++ if (dev->flags & MLX4_FLAG_SRIOV) { ++ mlx4_warn(dev, "Disabling SR-IOV\n"); ++ pci_disable_sriov(pdev); + } ++ ++ if (!mlx4_is_slave(dev)) ++ mlx4_free_ownership(dev); ++ ++ kfree(dev->caps.qp0_tunnel); ++ kfree(dev->caps.qp0_proxy); ++ kfree(dev->caps.qp1_tunnel); ++ kfree(dev->caps.qp1_proxy); ++ ++ pci_release_regions(pdev); ++ pci_disable_device(pdev); ++ memset(priv, 0, sizeof(*priv)); ++ priv->pci_dev_data = pci_dev_data; ++ priv->removed = 1; ++} ++ ++static void mlx4_remove_one(struct pci_dev *pdev) ++{ ++ struct mlx4_dev *dev = pci_get_drvdata(pdev); ++ struct mlx4_priv *priv = mlx4_priv(dev); ++ ++ __mlx4_remove_one(pdev); ++ kfree(priv); ++ pci_set_drvdata(pdev, NULL); + } + + int mlx4_restart_one(struct pci_dev *pdev) +@@ -2462,7 +2480,7 @@ int mlx4_restart_one(struct pci_dev *pdev) + int pci_dev_data; + + pci_dev_data = 
priv->pci_dev_data; +- mlx4_remove_one(pdev); ++ __mlx4_remove_one(pdev); + return __mlx4_init_one(pdev, pci_dev_data); + } + +@@ -2517,7 +2535,7 @@ MODULE_DEVICE_TABLE(pci, mlx4_pci_table); + static pci_ers_result_t mlx4_pci_err_detected(struct pci_dev *pdev, + pci_channel_state_t state) + { +- mlx4_remove_one(pdev); ++ __mlx4_remove_one(pdev); + + return state == pci_channel_io_perm_failure ? + PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_NEED_RESET; +@@ -2525,7 +2543,11 @@ static pci_ers_result_t mlx4_pci_err_detected(struct pci_dev *pdev, + + static pci_ers_result_t mlx4_pci_slot_reset(struct pci_dev *pdev) + { +- int ret = __mlx4_init_one(pdev, 0); ++ struct mlx4_dev *dev = pci_get_drvdata(pdev); ++ struct mlx4_priv *priv = mlx4_priv(dev); ++ int ret; ++ ++ ret = __mlx4_init_one(pdev, priv->pci_dev_data); + + return ret ? PCI_ERS_RESULT_DISCONNECT : PCI_ERS_RESULT_RECOVERED; + } +diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4.h b/drivers/net/ethernet/mellanox/mlx4/mlx4.h +index 348bb8c7d9a7..796ed1a79284 100644 +--- a/drivers/net/ethernet/mellanox/mlx4/mlx4.h ++++ b/drivers/net/ethernet/mellanox/mlx4/mlx4.h +@@ -774,6 +774,7 @@ struct mlx4_priv { + spinlock_t ctx_lock; + + int pci_dev_data; ++ int removed; + + struct list_head pgdir_list; + struct mutex pgdir_mutex; +diff --git a/drivers/net/usb/cx82310_eth.c b/drivers/net/usb/cx82310_eth.c +index 1e207f086b75..49ab45e17fe8 100644 +--- a/drivers/net/usb/cx82310_eth.c ++++ b/drivers/net/usb/cx82310_eth.c +@@ -302,9 +302,18 @@ static const struct driver_info cx82310_info = { + .tx_fixup = cx82310_tx_fixup, + }; + ++#define USB_DEVICE_CLASS(vend, prod, cl, sc, pr) \ ++ .match_flags = USB_DEVICE_ID_MATCH_DEVICE | \ ++ USB_DEVICE_ID_MATCH_DEV_INFO, \ ++ .idVendor = (vend), \ ++ .idProduct = (prod), \ ++ .bDeviceClass = (cl), \ ++ .bDeviceSubClass = (sc), \ ++ .bDeviceProtocol = (pr) ++ + static const struct usb_device_id products[] = { + { +- USB_DEVICE_AND_INTERFACE_INFO(0x0572, 0xcb01, 0xff, 0, 0), ++ USB_DEVICE_CLASS(0x0572, 0xcb01, 0xff, 0, 0), + .driver_info = (unsigned long) &cx82310_info + }, + { }, +diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c +index a2ce8e86ced7..ef79c1c4280f 100644 +--- a/drivers/regulator/core.c ++++ b/drivers/regulator/core.c +@@ -1690,10 +1690,12 @@ static int _regulator_do_enable(struct regulator_dev *rdev) + trace_regulator_enable(rdev_get_name(rdev)); + + if (rdev->ena_pin) { +- ret = regulator_ena_gpio_ctrl(rdev, true); +- if (ret < 0) +- return ret; +- rdev->ena_gpio_state = 1; ++ if (!rdev->ena_gpio_state) { ++ ret = regulator_ena_gpio_ctrl(rdev, true); ++ if (ret < 0) ++ return ret; ++ rdev->ena_gpio_state = 1; ++ } + } else if (rdev->desc->ops->enable) { + ret = rdev->desc->ops->enable(rdev); + if (ret < 0) +@@ -1795,10 +1797,12 @@ static int _regulator_do_disable(struct regulator_dev *rdev) + trace_regulator_disable(rdev_get_name(rdev)); + + if (rdev->ena_pin) { +- ret = regulator_ena_gpio_ctrl(rdev, false); +- if (ret < 0) +- return ret; +- rdev->ena_gpio_state = 0; ++ if (rdev->ena_gpio_state) { ++ ret = regulator_ena_gpio_ctrl(rdev, false); ++ if (ret < 0) ++ return ret; ++ rdev->ena_gpio_state = 0; ++ } + + } else if (rdev->desc->ops->disable) { + ret = rdev->desc->ops->disable(rdev); +@@ -3392,12 +3396,6 @@ regulator_register(const struct regulator_desc *regulator_desc, + config->ena_gpio, ret); + goto wash; + } +- +- if (config->ena_gpio_flags & GPIOF_OUT_INIT_HIGH) +- rdev->ena_gpio_state = 1; +- +- if (config->ena_gpio_invert) +- rdev->ena_gpio_state = 
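
The regulator hunks above and below make the GPIO enable/disable paths idempotent: the pin is toggled only when the cached ena_gpio_state disagrees with the request, and the register-time guessing of the initial state is dropped. The guard in isolation, with generic names rather than the regulator API:

  struct cached_gpio {
          int state;                              /* last level we drove */
          int (*ctrl)(void *priv, int on);        /* hardware accessor */
          void *priv;
  };

  static int cached_gpio_set(struct cached_gpio *g, int on)
  {
          int ret;

          if (g->state == on)
                  return 0;       /* skip the redundant hardware access */
          ret = g->ctrl(g->priv, on);
          if (ret < 0)
                  return ret;     /* cache untouched, so a retry stays possible */
          g->state = on;
          return 0;
  }
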
!rdev->ena_gpio_state; + } + + /* set regulator constraints */ +@@ -3569,9 +3567,11 @@ int regulator_suspend_finish(void) + list_for_each_entry(rdev, ®ulator_list, list) { + mutex_lock(&rdev->mutex); + if (rdev->use_count > 0 || rdev->constraints->always_on) { +- error = _regulator_do_enable(rdev); +- if (error) +- ret = error; ++ if (!_regulator_is_enabled(rdev)) { ++ error = _regulator_do_enable(rdev); ++ if (error) ++ ret = error; ++ } + } else { + if (!has_full_constraints) + goto unlock; +diff --git a/drivers/scsi/libsas/sas_discover.c b/drivers/scsi/libsas/sas_discover.c +index 62b58d38ce2e..60de66252fa2 100644 +--- a/drivers/scsi/libsas/sas_discover.c ++++ b/drivers/scsi/libsas/sas_discover.c +@@ -500,6 +500,7 @@ static void sas_revalidate_domain(struct work_struct *work) + struct sas_discovery_event *ev = to_sas_discovery_event(work); + struct asd_sas_port *port = ev->port; + struct sas_ha_struct *ha = port->ha; ++ struct domain_device *ddev = port->port_dev; + + /* prevent revalidation from finding sata links in recovery */ + mutex_lock(&ha->disco_mutex); +@@ -514,8 +515,9 @@ static void sas_revalidate_domain(struct work_struct *work) + SAS_DPRINTK("REVALIDATING DOMAIN on port %d, pid:%d\n", port->id, + task_pid_nr(current)); + +- if (port->port_dev) +- res = sas_ex_revalidate_domain(port->port_dev); ++ if (ddev && (ddev->dev_type == SAS_FANOUT_EXPANDER_DEVICE || ++ ddev->dev_type == SAS_EDGE_EXPANDER_DEVICE)) ++ res = sas_ex_revalidate_domain(ddev); + + SAS_DPRINTK("done REVALIDATING DOMAIN on port %d, pid:%d, res 0x%x\n", + port->id, task_pid_nr(current), res); +diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h +index 0c73ba4bf451..f2bb2f09bff1 100644 +--- a/drivers/scsi/megaraid/megaraid_sas.h ++++ b/drivers/scsi/megaraid/megaraid_sas.h +@@ -1527,7 +1527,6 @@ struct megasas_instance { + u32 *reply_queue; + dma_addr_t reply_queue_h; + +- unsigned long base_addr; + struct megasas_register_set __iomem *reg_set; + u32 *reply_post_host_index_addr[MR_MAX_MSIX_REG_ARRAY]; + struct megasas_pd_list pd_list[MEGASAS_MAX_PD]; +diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c +index 855dc7c4cad7..6da7e62b13fb 100644 +--- a/drivers/scsi/megaraid/megaraid_sas_base.c ++++ b/drivers/scsi/megaraid/megaraid_sas_base.c +@@ -3613,6 +3613,7 @@ static int megasas_init_fw(struct megasas_instance *instance) + u32 max_sectors_1; + u32 max_sectors_2; + u32 tmp_sectors, msix_enable, scratch_pad_2; ++ resource_size_t base_addr; + struct megasas_register_set __iomem *reg_set; + struct megasas_ctrl_info *ctrl_info; + unsigned long bar_list; +@@ -3621,14 +3622,14 @@ static int megasas_init_fw(struct megasas_instance *instance) + /* Find first memory bar */ + bar_list = pci_select_bars(instance->pdev, IORESOURCE_MEM); + instance->bar = find_first_bit(&bar_list, sizeof(unsigned long)); +- instance->base_addr = pci_resource_start(instance->pdev, instance->bar); + if (pci_request_selected_regions(instance->pdev, instance->bar, + "megasas: LSI")) { + printk(KERN_DEBUG "megasas: IO memory region busy!\n"); + return -EBUSY; + } + +- instance->reg_set = ioremap_nocache(instance->base_addr, 8192); ++ base_addr = pci_resource_start(instance->pdev, instance->bar); ++ instance->reg_set = ioremap_nocache(base_addr, 8192); + + if (!instance->reg_set) { + printk(KERN_DEBUG "megasas: Failed to map IO mem\n"); +diff --git a/drivers/scsi/qla2xxx/tcm_qla2xxx.c b/drivers/scsi/qla2xxx/tcm_qla2xxx.c +index 80a1f9f40aac..783a0a9f8577 100644 
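
Two correctness points sit in the megaraid hunk just above: the BAR address now lives in a local resource_size_t (the old unsigned long field truncates 64-bit BARs on 32-bit kernels with large physical addressing), and it is read only after pci_request_selected_regions() has succeeded. The mapping step as a sketch; the 8192-byte window comes from the hunk, the helper name is invented:

  #include <linux/pci.h>
  #include <linux/io.h>

  static void __iomem *map_first_mem_bar(struct pci_dev *pdev, int bar)
  {
          resource_size_t base = pci_resource_start(pdev, bar);

          return ioremap_nocache(base, 8192);     /* NULL on failure */
  }
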
+--- a/drivers/scsi/qla2xxx/tcm_qla2xxx.c ++++ b/drivers/scsi/qla2xxx/tcm_qla2xxx.c +@@ -1454,7 +1454,7 @@ static int tcm_qla2xxx_check_initiator_node_acl( + /* + * Finally register the new FC Nexus with TCM + */ +- __transport_register_session(se_nacl->se_tpg, se_nacl, se_sess, sess); ++ transport_register_session(se_nacl->se_tpg, se_nacl, se_sess, sess); + + return 0; + } +diff --git a/drivers/spi/spi-pl022.c b/drivers/spi/spi-pl022.c +index b1a9ba893fab..0bc4e4508e6f 100644 +--- a/drivers/spi/spi-pl022.c ++++ b/drivers/spi/spi-pl022.c +@@ -503,12 +503,12 @@ static void giveback(struct pl022 *pl022) + pl022->cur_msg = NULL; + pl022->cur_transfer = NULL; + pl022->cur_chip = NULL; +- spi_finalize_current_message(pl022->master); + + /* disable the SPI/SSP operation */ + writew((readw(SSP_CR1(pl022->virtbase)) & + (~SSP_CR1_MASK_SSE)), SSP_CR1(pl022->virtbase)); + ++ spi_finalize_current_message(pl022->master); + } + + /** +diff --git a/drivers/staging/vt6655/rf.c b/drivers/staging/vt6655/rf.c +index 6948984a25ab..c2d602825422 100644 +--- a/drivers/staging/vt6655/rf.c ++++ b/drivers/staging/vt6655/rf.c +@@ -966,6 +966,7 @@ bool RFbSetPower( + break; + case RATE_6M: + case RATE_9M: ++ case RATE_12M: + case RATE_18M: + byPwr = pDevice->abyOFDMPwrTbl[uCH]; + if (pDevice->byRFType == RF_UW2452) { +diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c +index c60277e86e4b..8ec8dc92baf4 100644 +--- a/drivers/target/iscsi/iscsi_target.c ++++ b/drivers/target/iscsi/iscsi_target.c +@@ -4194,11 +4194,17 @@ int iscsit_close_connection( + pr_debug("Closing iSCSI connection CID %hu on SID:" + " %u\n", conn->cid, sess->sid); + /* +- * Always up conn_logout_comp just in case the RX Thread is sleeping +- * and the logout response never got sent because the connection +- * failed. ++ * Always up conn_logout_comp for the traditional TCP case just in case ++ * the RX Thread in iscsi_target_rx_opcode() is sleeping and the logout ++ * response never got sent because the connection failed. ++ * ++ * However for iser-target, isert_wait4logout() is using conn_logout_comp ++ * to signal logout response TX interrupt completion. Go ahead and skip ++ * this for iser since isert_rx_opcode() does not wait on logout failure, ++ * and to avoid iscsi_conn pointer dereference in iser-target code. + */ +- complete(&conn->conn_logout_comp); ++ if (conn->conn_transport->transport_type == ISCSI_TCP) ++ complete(&conn->conn_logout_comp); + + iscsi_release_thread_set(conn); + +diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c +index 0c15772c5035..eb92af05ee12 100644 +--- a/drivers/target/iscsi/iscsi_target_login.c ++++ b/drivers/target/iscsi/iscsi_target_login.c +@@ -249,6 +249,28 @@ static void iscsi_login_set_conn_values( + mutex_unlock(&auth_id_lock); + } + ++static __printf(2, 3) int iscsi_change_param_sprintf( ++ struct iscsi_conn *conn, ++ const char *fmt, ...) ++{ ++ va_list args; ++ unsigned char buf[64]; ++ ++ memset(buf, 0, sizeof buf); ++ ++ va_start(args, fmt); ++ vsnprintf(buf, sizeof buf, fmt, args); ++ va_end(args); ++ ++ if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { ++ iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, ++ ISCSI_LOGIN_STATUS_NO_RESOURCES); ++ return -1; ++ } ++ ++ return 0; ++} ++ + /* + * This is the leading connection of a new session, + * or session reinstatement. 
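
The new iscsi_change_param_sprintf() above folds the repeated memset/sprintf/iscsi_change_param_value/iscsit_tx_login_rsp sequence into a single helper, and __printf(2, 3) lets the compiler type-check every caller's format string. The shape of such a wrapper in general form; apply_param() is a stand-in for the real parameter sink:

  #include <linux/kernel.h>
  #include <linux/errno.h>

  static int apply_param(const char *kv)
  {
          (void)kv;               /* stand-in for the real consumer */
          return 0;
  }

  static __printf(1, 2) int set_param_fmt(const char *fmt, ...)
  {
          char buf[64];
          va_list args;
          int len;

          va_start(args, fmt);
          len = vsnprintf(buf, sizeof(buf), fmt, args);
          va_end(args);

          if (len >= (int)sizeof(buf))
                  return -EINVAL; /* would have been truncated */
          return apply_param(buf);
  }

The later hunks in this file then collapse each call site to a one-line if (iscsi_change_param_sprintf(...)) return -1;.
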
+@@ -338,7 +360,6 @@ static int iscsi_login_zero_tsih_s2( + { + struct iscsi_node_attrib *na; + struct iscsi_session *sess = conn->sess; +- unsigned char buf[32]; + bool iser = false; + + sess->tpg = conn->tpg; +@@ -379,26 +400,16 @@ static int iscsi_login_zero_tsih_s2( + * + * In our case, we have already located the struct iscsi_tiqn at this point. + */ +- memset(buf, 0, 32); +- sprintf(buf, "TargetPortalGroupTag=%hu", ISCSI_TPG_S(sess)->tpgt); +- if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { +- iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, +- ISCSI_LOGIN_STATUS_NO_RESOURCES); ++ if (iscsi_change_param_sprintf(conn, "TargetPortalGroupTag=%hu", ISCSI_TPG_S(sess)->tpgt)) + return -1; +- } + + /* + * Workaround for Initiators that have broken connection recovery logic. + * + * "We would really like to get rid of this." Linux-iSCSI.org team + */ +- memset(buf, 0, 32); +- sprintf(buf, "ErrorRecoveryLevel=%d", na->default_erl); +- if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { +- iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, +- ISCSI_LOGIN_STATUS_NO_RESOURCES); ++ if (iscsi_change_param_sprintf(conn, "ErrorRecoveryLevel=%d", na->default_erl)) + return -1; +- } + + if (iscsi_login_disable_FIM_keys(conn->param_list, conn) < 0) + return -1; +@@ -410,12 +421,9 @@ static int iscsi_login_zero_tsih_s2( + unsigned long mrdsl, off; + int rc; + +- sprintf(buf, "RDMAExtensions=Yes"); +- if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { +- iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, +- ISCSI_LOGIN_STATUS_NO_RESOURCES); ++ if (iscsi_change_param_sprintf(conn, "RDMAExtensions=Yes")) + return -1; +- } ++ + /* + * Make MaxRecvDataSegmentLength PAGE_SIZE aligned for + * Immediate Data + Unsolicitied Data-OUT if necessary.. +@@ -445,12 +453,8 @@ static int iscsi_login_zero_tsih_s2( + pr_warn("Aligning ISER MaxRecvDataSegmentLength: %lu down" + " to PAGE_SIZE\n", mrdsl); + +- sprintf(buf, "MaxRecvDataSegmentLength=%lu\n", mrdsl); +- if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { +- iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, +- ISCSI_LOGIN_STATUS_NO_RESOURCES); ++ if (iscsi_change_param_sprintf(conn, "MaxRecvDataSegmentLength=%lu\n", mrdsl)) + return -1; +- } + } + + return 0; +@@ -592,13 +596,8 @@ static int iscsi_login_non_zero_tsih_s2( + * + * In our case, we have already located the struct iscsi_tiqn at this point. + */ +- memset(buf, 0, 32); +- sprintf(buf, "TargetPortalGroupTag=%hu", ISCSI_TPG_S(sess)->tpgt); +- if (iscsi_change_param_value(buf, conn->param_list, 0) < 0) { +- iscsit_tx_login_rsp(conn, ISCSI_STATUS_CLS_TARGET_ERR, +- ISCSI_LOGIN_STATUS_NO_RESOURCES); ++ if (iscsi_change_param_sprintf(conn, "TargetPortalGroupTag=%hu", ISCSI_TPG_S(sess)->tpgt)) + return -1; +- } + + return iscsi_login_disable_FIM_keys(conn->param_list, conn); + } +diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c +index c67a56e7ee1c..4711e437479e 100644 +--- a/drivers/target/target_core_device.c ++++ b/drivers/target/target_core_device.c +@@ -1492,8 +1492,6 @@ int target_configure_device(struct se_device *dev) + ret = dev->transport->configure_device(dev); + if (ret) + goto out; +- dev->dev_flags |= DF_CONFIGURED; +- + /* + * XXX: there is not much point to have two different values here.. 
+ */ +@@ -1555,6 +1553,8 @@ int target_configure_device(struct se_device *dev) + list_add_tail(&dev->g_dev_node, &g_device_list); + mutex_unlock(&g_device_mutex); + ++ dev->dev_flags |= DF_CONFIGURED; ++ + return 0; + + out_free_alua: +diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c +index 36c507c1b4fd..a50982a7304f 100644 +--- a/drivers/target/target_core_pr.c ++++ b/drivers/target/target_core_pr.c +@@ -76,7 +76,7 @@ enum preempt_type { + }; + + static void __core_scsi3_complete_pro_release(struct se_device *, struct se_node_acl *, +- struct t10_pr_registration *, int); ++ struct t10_pr_registration *, int, int); + + static sense_reason_t + target_scsi2_reservation_check(struct se_cmd *cmd) +@@ -528,6 +528,18 @@ static int core_scsi3_pr_seq_non_holder( + + return 0; + } ++ } else if (we && registered_nexus) { ++ /* ++ * Reads are allowed for Write Exclusive locks ++ * from all registrants. ++ */ ++ if (cmd->data_direction == DMA_FROM_DEVICE) { ++ pr_debug("Allowing READ CDB: 0x%02x for %s" ++ " reservation\n", cdb[0], ++ core_scsi3_pr_dump_type(pr_reg_type)); ++ ++ return 0; ++ } + } + pr_debug("%s Conflict for %sregistered nexus %s CDB: 0x%2x" + " for %s reservation\n", transport_dump_cmd_direction(cmd), +@@ -1186,7 +1198,7 @@ static int core_scsi3_check_implict_release( + * service action with the SERVICE ACTION RESERVATION KEY + * field set to zero (see 5.7.11.3). + */ +- __core_scsi3_complete_pro_release(dev, nacl, pr_reg, 0); ++ __core_scsi3_complete_pro_release(dev, nacl, pr_reg, 0, 1); + ret = 1; + /* + * For 'All Registrants' reservation types, all existing +@@ -1228,7 +1240,8 @@ static void __core_scsi3_free_registration( + + pr_reg->pr_reg_deve->def_pr_registered = 0; + pr_reg->pr_reg_deve->pr_res_key = 0; +- list_del(&pr_reg->pr_reg_list); ++ if (!list_empty(&pr_reg->pr_reg_list)) ++ list_del(&pr_reg->pr_reg_list); + /* + * Caller accessing *pr_reg using core_scsi3_locate_pr_reg(), + * so call core_scsi3_put_pr_reg() to decrement our reference. +@@ -1280,6 +1293,7 @@ void core_scsi3_free_pr_reg_from_nacl( + { + struct t10_reservation *pr_tmpl = &dev->t10_pr; + struct t10_pr_registration *pr_reg, *pr_reg_tmp, *pr_res_holder; ++ bool free_reg = false; + /* + * If the passed se_node_acl matches the reservation holder, + * release the reservation. +@@ -1287,13 +1301,18 @@ void core_scsi3_free_pr_reg_from_nacl( + spin_lock(&dev->dev_reservation_lock); + pr_res_holder = dev->dev_pr_res_holder; + if ((pr_res_holder != NULL) && +- (pr_res_holder->pr_reg_nacl == nacl)) +- __core_scsi3_complete_pro_release(dev, nacl, pr_res_holder, 0); ++ (pr_res_holder->pr_reg_nacl == nacl)) { ++ __core_scsi3_complete_pro_release(dev, nacl, pr_res_holder, 0, 1); ++ free_reg = true; ++ } + spin_unlock(&dev->dev_reservation_lock); + /* + * Release any registration associated with the struct se_node_acl. 
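
Relocating dev->dev_flags |= DF_CONFIGURED to the end of target_configure_device(), as the hunk above does, is the usual publish-last ordering: the flag that readers test is set only after every setup step, including insertion on the global device list, has succeeded. Schematically, under hypothetical names:

  #include <linux/list.h>
  #include <linux/mutex.h>

  #define DF_CONFIGURED   0x1UL           /* illustrative flag value */

  struct dev_obj {
          unsigned long flags;
          struct list_head node;
  };

  static int do_setup(struct dev_obj *d)
  {
          return 0;                       /* stand-in for the transport setup */
  }

  static int configure(struct dev_obj *d, struct list_head *all, struct mutex *lk)
  {
          int ret = do_setup(d);

          if (ret)
                  return ret;             /* the flag is never set on failure */

          mutex_lock(lk);
          list_add_tail(&d->node, all);
          mutex_unlock(lk);

          d->flags |= DF_CONFIGURED;      /* visible only once fully set up */
          return 0;
  }
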
+ */ + spin_lock(&pr_tmpl->registration_lock); ++ if (pr_res_holder && free_reg) ++ __core_scsi3_free_registration(dev, pr_res_holder, NULL, 0); ++ + list_for_each_entry_safe(pr_reg, pr_reg_tmp, + &pr_tmpl->registration_list, pr_reg_list) { + +@@ -1316,7 +1335,7 @@ void core_scsi3_free_all_registrations( + if (pr_res_holder != NULL) { + struct se_node_acl *pr_res_nacl = pr_res_holder->pr_reg_nacl; + __core_scsi3_complete_pro_release(dev, pr_res_nacl, +- pr_res_holder, 0); ++ pr_res_holder, 0, 0); + } + spin_unlock(&dev->dev_reservation_lock); + +@@ -2126,13 +2145,13 @@ core_scsi3_emulate_pro_register(struct se_cmd *cmd, u64 res_key, u64 sa_res_key, + /* + * sa_res_key=0 Unregister Reservation Key for registered I_T Nexus. + */ +- pr_holder = core_scsi3_check_implict_release( +- cmd->se_dev, pr_reg); ++ type = pr_reg->pr_res_type; ++ pr_holder = core_scsi3_check_implict_release(cmd->se_dev, ++ pr_reg); + if (pr_holder < 0) { + ret = TCM_RESERVATION_CONFLICT; + goto out; + } +- type = pr_reg->pr_res_type; + + spin_lock(&pr_tmpl->registration_lock); + /* +@@ -2290,6 +2309,7 @@ core_scsi3_pro_reserve(struct se_cmd *cmd, int type, int scope, u64 res_key) + spin_lock(&dev->dev_reservation_lock); + pr_res_holder = dev->dev_pr_res_holder; + if (pr_res_holder) { ++ int pr_res_type = pr_res_holder->pr_res_type; + /* + * From spc4r17 Section 5.7.9: Reserving: + * +@@ -2300,7 +2320,9 @@ core_scsi3_pro_reserve(struct se_cmd *cmd, int type, int scope, u64 res_key) + * the logical unit, then the command shall be completed with + * RESERVATION CONFLICT status. + */ +- if (pr_res_holder != pr_reg) { ++ if ((pr_res_holder != pr_reg) && ++ (pr_res_type != PR_TYPE_WRITE_EXCLUSIVE_ALLREG) && ++ (pr_res_type != PR_TYPE_EXCLUSIVE_ACCESS_ALLREG)) { + struct se_node_acl *pr_res_nacl = pr_res_holder->pr_reg_nacl; + pr_err("SPC-3 PR: Attempted RESERVE from" + " [%s]: %s while reservation already held by" +@@ -2406,23 +2428,59 @@ static void __core_scsi3_complete_pro_release( + struct se_device *dev, + struct se_node_acl *se_nacl, + struct t10_pr_registration *pr_reg, +- int explict) ++ int explict, ++ int unreg) + { + struct target_core_fabric_ops *tfo = se_nacl->se_tpg->se_tpg_tfo; + char i_buf[PR_REG_ISID_ID_LEN]; ++ int pr_res_type = 0, pr_res_scope = 0; + + memset(i_buf, 0, PR_REG_ISID_ID_LEN); + core_pr_dump_initiator_port(pr_reg, i_buf, PR_REG_ISID_ID_LEN); + /* + * Go ahead and release the current PR reservation holder. ++ * If an All Registrants reservation is currently active and ++ * a unregister operation is requested, replace the current ++ * dev_pr_res_holder with another active registration. + */ +- dev->dev_pr_res_holder = NULL; ++ if (dev->dev_pr_res_holder) { ++ pr_res_type = dev->dev_pr_res_holder->pr_res_type; ++ pr_res_scope = dev->dev_pr_res_holder->pr_res_scope; ++ dev->dev_pr_res_holder->pr_res_type = 0; ++ dev->dev_pr_res_holder->pr_res_scope = 0; ++ dev->dev_pr_res_holder->pr_res_holder = 0; ++ dev->dev_pr_res_holder = NULL; ++ } ++ if (!unreg) ++ goto out; + +- pr_debug("SPC-3 PR [%s] Service Action: %s RELEASE cleared" +- " reservation holder TYPE: %s ALL_TG_PT: %d\n", +- tfo->get_fabric_name(), (explict) ? "explict" : "implict", +- core_scsi3_pr_dump_type(pr_reg->pr_res_type), +- (pr_reg->pr_reg_all_tg_pt) ? 
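
The enlarged __core_scsi3_complete_pro_release() below implements the SPC-4 rule its comment quotes: when the holder of an All Registrants reservation unregisters, any remaining registration is promoted to holder instead of the reservation being dropped. The selection step in isolation; the field names follow the patch, the function name is invented:

  #include <target/target_core_base.h>

  static void promote_next_holder(struct se_device *dev, int type, int scope)
  {
          struct t10_pr_registration *next;

          if (list_empty(&dev->t10_pr.registration_list))
                  return;         /* last registrant gone: reservation released */

          next = list_first_entry(&dev->t10_pr.registration_list,
                                  struct t10_pr_registration, pr_reg_list);
          next->pr_res_type = type;
          next->pr_res_scope = scope;
          next->pr_res_holder = 1;
          dev->dev_pr_res_holder = next;
  }
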
1 : 0); ++ spin_lock(&dev->t10_pr.registration_lock); ++ list_del_init(&pr_reg->pr_reg_list); ++ /* ++ * If the I_T nexus is a reservation holder, the persistent reservation ++ * is of an all registrants type, and the I_T nexus is the last remaining ++ * registered I_T nexus, then the device server shall also release the ++ * persistent reservation. ++ */ ++ if (!list_empty(&dev->t10_pr.registration_list) && ++ ((pr_res_type == PR_TYPE_WRITE_EXCLUSIVE_ALLREG) || ++ (pr_res_type == PR_TYPE_EXCLUSIVE_ACCESS_ALLREG))) { ++ dev->dev_pr_res_holder = ++ list_entry(dev->t10_pr.registration_list.next, ++ struct t10_pr_registration, pr_reg_list); ++ dev->dev_pr_res_holder->pr_res_type = pr_res_type; ++ dev->dev_pr_res_holder->pr_res_scope = pr_res_scope; ++ dev->dev_pr_res_holder->pr_res_holder = 1; ++ } ++ spin_unlock(&dev->t10_pr.registration_lock); ++out: ++ if (!dev->dev_pr_res_holder) { ++ pr_debug("SPC-3 PR [%s] Service Action: %s RELEASE cleared" ++ " reservation holder TYPE: %s ALL_TG_PT: %d\n", ++ tfo->get_fabric_name(), (explict) ? "explict" : ++ "implict", core_scsi3_pr_dump_type(pr_res_type), ++ (pr_reg->pr_reg_all_tg_pt) ? 1 : 0); ++ } + pr_debug("SPC-3 PR [%s] RELEASE Node: %s%s\n", + tfo->get_fabric_name(), se_nacl->initiatorname, + i_buf); +@@ -2553,7 +2611,7 @@ core_scsi3_emulate_pro_release(struct se_cmd *cmd, int type, int scope, + * server shall not establish a unit attention condition. + */ + __core_scsi3_complete_pro_release(dev, se_sess->se_node_acl, +- pr_reg, 1); ++ pr_reg, 1, 0); + + spin_unlock(&dev->dev_reservation_lock); + +@@ -2641,7 +2699,7 @@ core_scsi3_emulate_pro_clear(struct se_cmd *cmd, u64 res_key) + if (pr_res_holder) { + struct se_node_acl *pr_res_nacl = pr_res_holder->pr_reg_nacl; + __core_scsi3_complete_pro_release(dev, pr_res_nacl, +- pr_res_holder, 0); ++ pr_res_holder, 0, 0); + } + spin_unlock(&dev->dev_reservation_lock); + /* +@@ -2700,7 +2758,7 @@ static void __core_scsi3_complete_pro_preempt( + */ + if (dev->dev_pr_res_holder) + __core_scsi3_complete_pro_release(dev, nacl, +- dev->dev_pr_res_holder, 0); ++ dev->dev_pr_res_holder, 0, 0); + + dev->dev_pr_res_holder = pr_reg; + pr_reg->pr_res_holder = 1; +@@ -2944,8 +3002,8 @@ core_scsi3_pro_preempt(struct se_cmd *cmd, int type, int scope, u64 res_key, + */ + if (pr_reg_n != pr_res_holder) + __core_scsi3_complete_pro_release(dev, +- pr_res_holder->pr_reg_nacl, +- dev->dev_pr_res_holder, 0); ++ pr_res_holder->pr_reg_nacl, ++ dev->dev_pr_res_holder, 0, 0); + /* + * b) Remove the registrations for all I_T nexuses identified + * by the SERVICE ACTION RESERVATION KEY field, except the +@@ -3415,7 +3473,7 @@ after_iport_check: + * holder (i.e., the I_T nexus on which the + */ + __core_scsi3_complete_pro_release(dev, pr_res_nacl, +- dev->dev_pr_res_holder, 0); ++ dev->dev_pr_res_holder, 0, 0); + /* + * g) Move the persistent reservation to the specified I_T nexus using + * the same scope and type as the persistent reservation released in +@@ -3855,7 +3913,8 @@ core_scsi3_pri_read_full_status(struct se_cmd *cmd) + unsigned char *buf; + u32 add_desc_len = 0, add_len = 0, desc_len, exp_desc_len; + u32 off = 8; /* off into first Full Status descriptor */ +- int format_code = 0; ++ int format_code = 0, pr_res_type = 0, pr_res_scope = 0; ++ bool all_reg = false; + + if (cmd->data_length < 8) { + pr_err("PRIN SA READ_FULL_STATUS SCSI Data Length: %u" +@@ -3872,6 +3931,19 @@ core_scsi3_pri_read_full_status(struct se_cmd *cmd) + buf[2] = ((dev->t10_pr.pr_generation >> 8) & 0xff); + buf[3] = (dev->t10_pr.pr_generation & 
0xff); + ++ spin_lock(&dev->dev_reservation_lock); ++ if (dev->dev_pr_res_holder) { ++ struct t10_pr_registration *pr_holder = dev->dev_pr_res_holder; ++ ++ if (pr_holder->pr_res_type == PR_TYPE_WRITE_EXCLUSIVE_ALLREG || ++ pr_holder->pr_res_type == PR_TYPE_EXCLUSIVE_ACCESS_ALLREG) { ++ all_reg = true; ++ pr_res_type = pr_holder->pr_res_type; ++ pr_res_scope = pr_holder->pr_res_scope; ++ } ++ } ++ spin_unlock(&dev->dev_reservation_lock); ++ + spin_lock(&pr_tmpl->registration_lock); + list_for_each_entry_safe(pr_reg, pr_reg_tmp, + &pr_tmpl->registration_list, pr_reg_list) { +@@ -3921,14 +3993,20 @@ core_scsi3_pri_read_full_status(struct se_cmd *cmd) + * reservation holder for PR_HOLDER bit. + * + * Also, if this registration is the reservation +- * holder, fill in SCOPE and TYPE in the next byte. ++ * holder or there is an All Registrants reservation ++ * active, fill in SCOPE and TYPE in the next byte. + */ + if (pr_reg->pr_res_holder) { + buf[off++] |= 0x01; + buf[off++] = (pr_reg->pr_res_scope & 0xf0) | + (pr_reg->pr_res_type & 0x0f); +- } else ++ } else if (all_reg) { ++ buf[off++] |= 0x01; ++ buf[off++] = (pr_res_scope & 0xf0) | ++ (pr_res_type & 0x0f); ++ } else { + off += 2; ++ } + + off += 4; /* Skip over reserved area */ + /* +diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c +index 0f199f6a0738..29f28808fc03 100644 +--- a/drivers/target/target_core_pscsi.c ++++ b/drivers/target/target_core_pscsi.c +@@ -1111,7 +1111,7 @@ static u32 pscsi_get_device_type(struct se_device *dev) + struct pscsi_dev_virt *pdv = PSCSI_DEV(dev); + struct scsi_device *sd = pdv->pdv_sd; + +- return sd->type; ++ return (sd) ? sd->type : TYPE_NO_LUN; + } + + static sector_t pscsi_get_blocks(struct se_device *dev) +diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c +index 3931b50eeefd..65ecaa1c59a7 100644 +--- a/drivers/target/target_core_transport.c ++++ b/drivers/target/target_core_transport.c +@@ -2328,6 +2328,10 @@ int target_get_sess_cmd(struct se_session *se_sess, struct se_cmd *se_cmd, + list_add_tail(&se_cmd->se_cmd_list, &se_sess->sess_cmd_list); + out: + spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags); ++ ++ if (ret && ack_kref) ++ target_put_sess_cmd(se_sess, se_cmd); ++ + return ret; + } + EXPORT_SYMBOL(target_get_sess_cmd); +diff --git a/drivers/target/tcm_fc/tfc_io.c b/drivers/target/tcm_fc/tfc_io.c +index e415af32115a..c67d3795db4a 100644 +--- a/drivers/target/tcm_fc/tfc_io.c ++++ b/drivers/target/tcm_fc/tfc_io.c +@@ -346,7 +346,7 @@ void ft_invl_hw_context(struct ft_cmd *cmd) + ep = fc_seq_exch(seq); + if (ep) { + lport = ep->lp; +- if (lport && (ep->xid <= lport->lro_xid)) ++ if (lport && (ep->xid <= lport->lro_xid)) { + /* + * "ddp_done" trigger invalidation of HW + * specific DDP context +@@ -361,6 +361,7 @@ void ft_invl_hw_context(struct ft_cmd *cmd) + * identified using ep->xid) + */ + cmd->was_ddp_setup = 0; ++ } + } + } + } +diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c +index ee1f7c52bd52..eac50ec4c70d 100644 +--- a/drivers/tty/serial/8250/8250_pci.c ++++ b/drivers/tty/serial/8250/8250_pci.c +@@ -68,7 +68,7 @@ static void moan_device(const char *str, struct pci_dev *dev) + "Please send the output of lspci -vv, this\n" + "message (0x%04x,0x%04x,0x%04x,0x%04x), the\n" + "manufacturer and name of serial board or\n" +- "modem board to rmk+serial@arm.linux.org.uk.\n", ++ "modem board to <linux-serial@vger.kernel.org>.\n", + pci_name(dev), str, dev->vendor, dev->device, + 
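
The tcm_fc hunk above is a plain missing-braces fix: the if guarded only the first statement, so cmd->was_ddp_setup = 0 ran even when no DDP context had been set up. Reduced to its essence with stand-in functions:

  static void do_a(void) { }
  static void do_b(void) { }

  static void broken(int cond)
  {
          if (cond)
                  do_a();
                  do_b();         /* BUG: runs unconditionally despite the indentation */
  }

  static void fixed(int cond)
  {
          if (cond) {
                  do_a();
                  do_b();         /* now guarded, as the hunk intends */
          }
  }
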
dev->subsystem_vendor, dev->subsystem_device); + } +diff --git a/drivers/usb/serial/zte_ev.c b/drivers/usb/serial/zte_ev.c +index 88dd32ce5224..d6a3fbd029be 100644 +--- a/drivers/usb/serial/zte_ev.c ++++ b/drivers/usb/serial/zte_ev.c +@@ -273,6 +273,14 @@ static void zte_ev_usb_serial_close(struct usb_serial_port *port) + } + + static const struct usb_device_id id_table[] = { ++ { USB_DEVICE(0x19d2, 0xffec) }, ++ { USB_DEVICE(0x19d2, 0xffee) }, ++ { USB_DEVICE(0x19d2, 0xfff6) }, ++ { USB_DEVICE(0x19d2, 0xfff7) }, ++ { USB_DEVICE(0x19d2, 0xfff8) }, ++ { USB_DEVICE(0x19d2, 0xfff9) }, ++ { USB_DEVICE(0x19d2, 0xfffb) }, ++ { USB_DEVICE(0x19d2, 0xfffc) }, + /* MG880 */ + { USB_DEVICE(0x19d2, 0xfffd) }, + { }, +diff --git a/drivers/xen/xen-pciback/conf_space.c b/drivers/xen/xen-pciback/conf_space.c +index 46ae0f9f02ad..75fe3d466515 100644 +--- a/drivers/xen/xen-pciback/conf_space.c ++++ b/drivers/xen/xen-pciback/conf_space.c +@@ -16,7 +16,7 @@ + #include "conf_space.h" + #include "conf_space_quirks.h" + +-static bool permissive; ++bool permissive; + module_param(permissive, bool, 0644); + + /* This is where xen_pcibk_read_config_byte, xen_pcibk_read_config_word, +diff --git a/drivers/xen/xen-pciback/conf_space.h b/drivers/xen/xen-pciback/conf_space.h +index e56c934ad137..2e1d73d1d5d0 100644 +--- a/drivers/xen/xen-pciback/conf_space.h ++++ b/drivers/xen/xen-pciback/conf_space.h +@@ -64,6 +64,8 @@ struct config_field_entry { + void *data; + }; + ++extern bool permissive; ++ + #define OFFSET(cfg_entry) ((cfg_entry)->base_offset+(cfg_entry)->field->offset) + + /* Add fields to a device - the add_fields macro expects to get a pointer to +diff --git a/drivers/xen/xen-pciback/conf_space_header.c b/drivers/xen/xen-pciback/conf_space_header.c +index c5ee82587e8c..2d7369391472 100644 +--- a/drivers/xen/xen-pciback/conf_space_header.c ++++ b/drivers/xen/xen-pciback/conf_space_header.c +@@ -11,6 +11,10 @@ + #include "pciback.h" + #include "conf_space.h" + ++struct pci_cmd_info { ++ u16 val; ++}; ++ + struct pci_bar_info { + u32 val; + u32 len_val; +@@ -20,22 +24,36 @@ struct pci_bar_info { + #define is_enable_cmd(value) ((value)&(PCI_COMMAND_MEMORY|PCI_COMMAND_IO)) + #define is_master_cmd(value) ((value)&PCI_COMMAND_MASTER) + +-static int command_read(struct pci_dev *dev, int offset, u16 *value, void *data) ++/* Bits guests are allowed to control in permissive mode. 
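
The PCI_COMMAND_GUEST whitelist defined just below limits which command-register bits a guest may influence; both the read and write paths then merge the guest-controlled bits with the host's value for everything else, and the write path touches hardware only when the merged value actually differs. The merge itself is plain masking:

  #include <linux/types.h>

  /* allowed is a whitelist such as PCI_COMMAND_GUEST */
  static u16 merge_guest_bits(u16 host, u16 guest, u16 allowed)
  {
          return (guest & allowed) | (host & ~allowed);
  }
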
*/ ++#define PCI_COMMAND_GUEST (PCI_COMMAND_MASTER|PCI_COMMAND_SPECIAL| \ ++ PCI_COMMAND_INVALIDATE|PCI_COMMAND_VGA_PALETTE| \ ++ PCI_COMMAND_WAIT|PCI_COMMAND_FAST_BACK) ++ ++static void *command_init(struct pci_dev *dev, int offset) + { +- int i; +- int ret; +- +- ret = xen_pcibk_read_config_word(dev, offset, value, data); +- if (!pci_is_enabled(dev)) +- return ret; +- +- for (i = 0; i < PCI_ROM_RESOURCE; i++) { +- if (dev->resource[i].flags & IORESOURCE_IO) +- *value |= PCI_COMMAND_IO; +- if (dev->resource[i].flags & IORESOURCE_MEM) +- *value |= PCI_COMMAND_MEMORY; ++ struct pci_cmd_info *cmd = kmalloc(sizeof(*cmd), GFP_KERNEL); ++ int err; ++ ++ if (!cmd) ++ return ERR_PTR(-ENOMEM); ++ ++ err = pci_read_config_word(dev, PCI_COMMAND, &cmd->val); ++ if (err) { ++ kfree(cmd); ++ return ERR_PTR(err); + } + ++ return cmd; ++} ++ ++static int command_read(struct pci_dev *dev, int offset, u16 *value, void *data) ++{ ++ int ret = pci_read_config_word(dev, offset, value); ++ const struct pci_cmd_info *cmd = data; ++ ++ *value &= PCI_COMMAND_GUEST; ++ *value |= cmd->val & ~PCI_COMMAND_GUEST; ++ + return ret; + } + +@@ -43,6 +61,8 @@ static int command_write(struct pci_dev *dev, int offset, u16 value, void *data) + { + struct xen_pcibk_dev_data *dev_data; + int err; ++ u16 val; ++ struct pci_cmd_info *cmd = data; + + dev_data = pci_get_drvdata(dev); + if (!pci_is_enabled(dev) && is_enable_cmd(value)) { +@@ -83,6 +103,19 @@ static int command_write(struct pci_dev *dev, int offset, u16 value, void *data) + } + } + ++ cmd->val = value; ++ ++ if (!permissive && (!dev_data || !dev_data->permissive)) ++ return 0; ++ ++ /* Only allow the guest to control certain bits. */ ++ err = pci_read_config_word(dev, offset, &val); ++ if (err || val == value) ++ return err; ++ ++ value &= PCI_COMMAND_GUEST; ++ value |= val & ~PCI_COMMAND_GUEST; ++ + return pci_write_config_word(dev, offset, value); + } + +@@ -282,6 +315,8 @@ static const struct config_field header_common[] = { + { + .offset = PCI_COMMAND, + .size = 2, ++ .init = command_init, ++ .release = bar_release, + .u.w.read = command_read, + .u.w.write = command_write, + }, +diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c +index cbd9523ad09c..ebc592317848 100644 +--- a/fs/btrfs/delayed-inode.c ++++ b/fs/btrfs/delayed-inode.c +@@ -1804,6 +1804,14 @@ int btrfs_delayed_update_inode(struct btrfs_trans_handle *trans, + struct btrfs_delayed_node *delayed_node; + int ret = 0; + ++ /* ++ * we don't do delayed inode updates during log recovery because it ++ * leads to enospc problems. 
This means we also can't do ++ * delayed inode refs ++ */ ++ if (BTRFS_I(inode)->root->fs_info->log_root_recovering) ++ return -EAGAIN; ++ + delayed_node = btrfs_get_or_create_delayed_node(inode); + if (IS_ERR(delayed_node)) + return PTR_ERR(delayed_node); +diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c +index fc8e4991736a..be1f2813c4f0 100644 +--- a/fs/fuse/dev.c ++++ b/fs/fuse/dev.c +@@ -819,8 +819,8 @@ static int fuse_try_move_page(struct fuse_copy_state *cs, struct page **pagep) + + newpage = buf->page; + +- if (WARN_ON(!PageUptodate(newpage))) +- return -EIO; ++ if (!PageUptodate(newpage)) ++ SetPageUptodate(newpage); + + ClearPageMappedToDisk(newpage); + +@@ -1725,6 +1725,9 @@ copy_finish: + static int fuse_notify(struct fuse_conn *fc, enum fuse_notify_code code, + unsigned int size, struct fuse_copy_state *cs) + { ++ /* Don't try to move pages (yet) */ ++ cs->move_pages = 0; ++ + switch (code) { + case FUSE_NOTIFY_POLL: + return fuse_notify_poll(fc, size, cs); +diff --git a/fs/hfsplus/brec.c b/fs/hfsplus/brec.c +index 6e560d56094b..754fdf8c6356 100644 +--- a/fs/hfsplus/brec.c ++++ b/fs/hfsplus/brec.c +@@ -131,13 +131,16 @@ skip: + hfs_bnode_write(node, entry, data_off + key_len, entry_len); + hfs_bnode_dump(node); + +- if (new_node) { +- /* update parent key if we inserted a key +- * at the start of the first node +- */ +- if (!rec && new_node != node) +- hfs_brec_update_parent(fd); ++ /* ++ * update parent key if we inserted a key ++ * at the start of the node and it is not the new node ++ */ ++ if (!rec && new_node != node) { ++ hfs_bnode_read_key(node, fd->search_key, data_off + size); ++ hfs_brec_update_parent(fd); ++ } + ++ if (new_node) { + hfs_bnode_put(fd->bnode); + if (!new_node->parent) { + hfs_btree_inc_height(tree); +@@ -168,9 +171,6 @@ skip: + goto again; + } + +- if (!rec) +- hfs_brec_update_parent(fd); +- + return 0; + } + +@@ -370,6 +370,8 @@ again: + if (IS_ERR(parent)) + return PTR_ERR(parent); + __hfs_brec_find(parent, fd, hfs_find_rec_by_key); ++ if (fd->record < 0) ++ return -ENOENT; + hfs_bnode_dump(parent); + rec = fd->record; + +diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c +index e5eb677ca9ce..127a6d9d81b7 100644 +--- a/fs/nfs/inode.c ++++ b/fs/nfs/inode.c +@@ -545,6 +545,7 @@ EXPORT_SYMBOL_GPL(nfs_setattr); + * This is a copy of the common vmtruncate, but with the locking + * corrected to take into account the fact that NFS requires + * inode->i_size to be updated under the inode->i_lock. ++ * Note: must be called with inode->i_lock held! + */ + static int nfs_vmtruncate(struct inode * inode, loff_t offset) + { +@@ -554,11 +555,11 @@ static int nfs_vmtruncate(struct inode * inode, loff_t offset) + if (err) + goto out; + +- spin_lock(&inode->i_lock); + i_size_write(inode, offset); +- spin_unlock(&inode->i_lock); + ++ spin_unlock(&inode->i_lock); + truncate_pagecache(inode, offset); ++ spin_lock(&inode->i_lock); + out: + return err; + } +@@ -571,10 +572,15 @@ out: + * Note: we do this in the *proc.c in order to ensure that + * it works for things like exclusive creates too. + */ +-void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr) ++void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr, ++ struct nfs_fattr *fattr) + { ++ /* Barrier: bump the attribute generation count. 
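
nfs_vmtruncate() now runs with inode->i_lock held by the caller: i_size_write() stays under the lock, but the lock is dropped around truncate_pagecache(), which can sleep, and retaken before returning. The lock choreography on its own, assuming the caller holds i_lock on entry and expects it held again on return:

  #include <linux/fs.h>
  #include <linux/mm.h>

  static void resize_under_i_lock(struct inode *inode, loff_t offset)
  {
          i_size_write(inode, offset);
          spin_unlock(&inode->i_lock);    /* truncate_pagecache() may sleep */
          truncate_pagecache(inode, offset);
          spin_lock(&inode->i_lock);
  }
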
*/ ++ fattr->gencount = nfs_inc_attr_generation_counter(); ++ ++ spin_lock(&inode->i_lock); ++ NFS_I(inode)->attr_gencount = fattr->gencount; + if ((attr->ia_valid & (ATTR_MODE|ATTR_UID|ATTR_GID)) != 0) { +- spin_lock(&inode->i_lock); + if ((attr->ia_valid & ATTR_MODE) != 0) { + int mode = attr->ia_mode & S_IALLUGO; + mode |= inode->i_mode & ~S_IALLUGO; +@@ -585,12 +591,13 @@ void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr) + if ((attr->ia_valid & ATTR_GID) != 0) + inode->i_gid = attr->ia_gid; + NFS_I(inode)->cache_validity |= NFS_INO_INVALID_ACCESS|NFS_INO_INVALID_ACL; +- spin_unlock(&inode->i_lock); + } + if ((attr->ia_valid & ATTR_SIZE) != 0) { + nfs_inc_stats(inode, NFSIOS_SETATTRTRUNC); + nfs_vmtruncate(inode, attr->ia_size); + } ++ nfs_update_inode(inode, fattr); ++ spin_unlock(&inode->i_lock); + } + EXPORT_SYMBOL_GPL(nfs_setattr_update_inode); + +diff --git a/fs/nfs/nfs3proc.c b/fs/nfs/nfs3proc.c +index 90cb10d7b693..8ef5276776f9 100644 +--- a/fs/nfs/nfs3proc.c ++++ b/fs/nfs/nfs3proc.c +@@ -136,7 +136,7 @@ nfs3_proc_setattr(struct dentry *dentry, struct nfs_fattr *fattr, + nfs_fattr_init(fattr); + status = rpc_call_sync(NFS_CLIENT(inode), &msg, 0); + if (status == 0) +- nfs_setattr_update_inode(inode, sattr); ++ nfs_setattr_update_inode(inode, sattr, fattr); + dprintk("NFS reply setattr: %d\n", status); + return status; + } +diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c +index 43c27110387a..36a72b59d7c8 100644 +--- a/fs/nfs/nfs4proc.c ++++ b/fs/nfs/nfs4proc.c +@@ -2276,8 +2276,8 @@ static int _nfs4_do_open(struct inode *dir, + opendata->o_res.f_attr, sattr, + state, label, olabel); + if (status == 0) { +- nfs_setattr_update_inode(state->inode, sattr); +- nfs_post_op_update_inode(state->inode, opendata->o_res.f_attr); ++ nfs_setattr_update_inode(state->inode, sattr, ++ opendata->o_res.f_attr); + nfs_setsecurity(state->inode, opendata->o_res.f_attr, olabel); + } + } +@@ -3114,7 +3114,7 @@ nfs4_proc_setattr(struct dentry *dentry, struct nfs_fattr *fattr, + + status = nfs4_do_setattr(inode, cred, fattr, sattr, state, NULL, label); + if (status == 0) { +- nfs_setattr_update_inode(inode, sattr); ++ nfs_setattr_update_inode(inode, sattr, fattr); + nfs_setsecurity(inode, fattr, label); + } + nfs4_label_free(label); +diff --git a/fs/nfs/proc.c b/fs/nfs/proc.c +index a8f57c728df5..7ffaeffc1330 100644 +--- a/fs/nfs/proc.c ++++ b/fs/nfs/proc.c +@@ -139,7 +139,7 @@ nfs_proc_setattr(struct dentry *dentry, struct nfs_fattr *fattr, + nfs_fattr_init(fattr); + status = rpc_call_sync(NFS_CLIENT(inode), &msg, 0); + if (status == 0) +- nfs_setattr_update_inode(inode, sattr); ++ nfs_setattr_update_inode(inode, sattr, fattr); + dprintk("NFS reply setattr: %d\n", status); + return status; + } +diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c +index a0c815b71e6a..85a9bb94a72b 100644 +--- a/fs/nilfs2/segment.c ++++ b/fs/nilfs2/segment.c +@@ -1907,6 +1907,7 @@ static void nilfs_segctor_drop_written_files(struct nilfs_sc_info *sci, + struct the_nilfs *nilfs) + { + struct nilfs_inode_info *ii, *n; ++ int during_mount = !(sci->sc_super->s_flags & MS_ACTIVE); + int defer_iput = false; + + spin_lock(&nilfs->ns_inode_lock); +@@ -1919,10 +1920,10 @@ static void nilfs_segctor_drop_written_files(struct nilfs_sc_info *sci, + brelse(ii->i_bh); + ii->i_bh = NULL; + list_del_init(&ii->i_dirty); +- if (!ii->vfs_inode.i_nlink) { ++ if (!ii->vfs_inode.i_nlink || during_mount) { + /* +- * Defer calling iput() to avoid a deadlock +- * over I_SYNC flag for inodes with i_nlink == 0 ++ * Defer 
calling iput() to avoid deadlocks if ++ * i_nlink == 0 or mount is not yet finished. + */ + list_add_tail(&ii->i_dirty, &sci->sc_iput_queue); + defer_iput = true; +diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c +index 7724fbdf443f..1db8ce0086ed 100644 +--- a/fs/proc/task_mmu.c ++++ b/fs/proc/task_mmu.c +@@ -1230,6 +1230,9 @@ out: + + static int pagemap_open(struct inode *inode, struct file *file) + { ++ /* do not disclose physical addresses: attack vector */ ++ if (!capable(CAP_SYS_ADMIN)) ++ return -EPERM; + pr_warn_once("Bits 55-60 of /proc/PID/pagemap entries are about " + "to stop being page-shift some time soon. See the " + "linux/Documentation/vm/pagemap.txt for details.\n"); +diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h +index e492c34439c3..1eaf61dde2c3 100644 +--- a/include/linux/hugetlb.h ++++ b/include/linux/hugetlb.h +@@ -70,7 +70,6 @@ int dequeue_hwpoisoned_huge_page(struct page *page); + bool isolate_huge_page(struct page *page, struct list_head *list); + void putback_active_hugepage(struct page *page); + bool is_hugepage_active(struct page *page); +-void copy_huge_page(struct page *dst, struct page *src); + + #ifdef CONFIG_ARCH_WANT_HUGE_PMD_SHARE + pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud); +@@ -143,12 +142,12 @@ static inline int dequeue_hwpoisoned_huge_page(struct page *page) + return 0; + } + +-#define isolate_huge_page(p, l) false +-#define putback_active_hugepage(p) do {} while (0) +-#define is_hugepage_active(x) false +-static inline void copy_huge_page(struct page *dst, struct page *src) ++static inline bool isolate_huge_page(struct page *page, struct list_head *list) + { ++ return false; + } ++#define putback_active_hugepage(p) do {} while (0) ++#define is_hugepage_active(x) false + + static inline unsigned long hugetlb_change_protection(struct vm_area_struct *vma, + unsigned long address, unsigned long end, pgprot_t newprot) +@@ -415,6 +414,7 @@ struct hstate {}; + #define hstate_sizelog(s) NULL + #define hstate_vma(v) NULL + #define hstate_inode(i) NULL ++#define page_hstate(page) NULL + #define huge_page_size(h) PAGE_SIZE + #define huge_page_mask(h) PAGE_MASK + #define vma_kernel_pagesize(v) PAGE_SIZE +diff --git a/include/linux/nfs_fs.h b/include/linux/nfs_fs.h +index a632498d42fa..f4bf1b593327 100644 +--- a/include/linux/nfs_fs.h ++++ b/include/linux/nfs_fs.h +@@ -354,7 +354,7 @@ extern int nfs_revalidate_inode(struct nfs_server *server, struct inode *inode); + extern int __nfs_revalidate_inode(struct nfs_server *, struct inode *); + extern int nfs_revalidate_mapping(struct inode *inode, struct address_space *mapping); + extern int nfs_setattr(struct dentry *, struct iattr *); +-extern void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr); ++extern void nfs_setattr_update_inode(struct inode *inode, struct iattr *attr, struct nfs_fattr *); + extern void nfs_setsecurity(struct inode *inode, struct nfs_fattr *fattr, + struct nfs4_label *label); + extern struct nfs_open_context *get_nfs_open_context(struct nfs_open_context *ctx); +diff --git a/include/linux/workqueue.h b/include/linux/workqueue.h +index eff358e6945d..0f67a9c82787 100644 +--- a/include/linux/workqueue.h ++++ b/include/linux/workqueue.h +@@ -71,7 +71,8 @@ enum { + /* data contains off-queue information when !WORK_STRUCT_PWQ */ + WORK_OFFQ_FLAG_BASE = WORK_STRUCT_COLOR_SHIFT, + +- WORK_OFFQ_CANCELING = (1 << WORK_OFFQ_FLAG_BASE), ++ __WORK_OFFQ_CANCELING = WORK_OFFQ_FLAG_BASE, ++ WORK_OFFQ_CANCELING = (1 << 
__WORK_OFFQ_CANCELING), + + /* + * When a work item is off queue, its high bits point to the last +diff --git a/include/net/sock.h b/include/net/sock.h +index 3899018a6b21..d157f4f56f01 100644 +--- a/include/net/sock.h ++++ b/include/net/sock.h +@@ -1791,7 +1791,7 @@ sk_dst_set(struct sock *sk, struct dst_entry *dst) + struct dst_entry *old_dst; + + sk_tx_queue_clear(sk); +- old_dst = xchg(&sk->sk_dst_cache, dst); ++ old_dst = xchg((__force struct dst_entry **)&sk->sk_dst_cache, dst); + dst_release(old_dst); + } + +diff --git a/ipc/sem.c b/ipc/sem.c +index 0c312ac04e49..d8456ad6131c 100644 +--- a/ipc/sem.c ++++ b/ipc/sem.c +@@ -515,13 +515,6 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params) + return retval; + } + +- id = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni); +- if (id < 0) { +- ipc_rcu_putref(sma, sem_rcu_free); +- return id; +- } +- ns->used_sems += nsems; +- + sma->sem_base = (struct sem *) &sma[1]; + + for (i = 0; i < nsems; i++) { +@@ -536,6 +529,14 @@ static int newary(struct ipc_namespace *ns, struct ipc_params *params) + INIT_LIST_HEAD(&sma->list_id); + sma->sem_nsems = nsems; + sma->sem_ctime = get_seconds(); ++ ++ id = ipc_addid(&sem_ids(ns), &sma->sem_perm, ns->sc_semmni); ++ if (id < 0) { ++ ipc_rcu_putref(sma, sem_rcu_free); ++ return id; ++ } ++ ns->used_sems += nsems; ++ + sem_unlock(sma, -1); + rcu_read_unlock(); + +diff --git a/ipc/shm.c b/ipc/shm.c +index 7a51443a51d6..623bc3877118 100644 +--- a/ipc/shm.c ++++ b/ipc/shm.c +@@ -218,7 +218,8 @@ static void shm_destroy(struct ipc_namespace *ns, struct shmid_kernel *shp) + if (!is_file_hugepages(shm_file)) + shmem_lock(shm_file, 0, shp->mlock_user); + else if (shp->mlock_user) +- user_shm_unlock(file_inode(shm_file)->i_size, shp->mlock_user); ++ user_shm_unlock(i_size_read(file_inode(shm_file)), ++ shp->mlock_user); + fput(shm_file); + ipc_rcu_putref(shp, shm_rcu_free); + } +@@ -1227,6 +1228,7 @@ SYSCALL_DEFINE1(shmdt, char __user *, shmaddr) + int retval = -EINVAL; + #ifdef CONFIG_MMU + loff_t size = 0; ++ struct file *file; + struct vm_area_struct *next; + #endif + +@@ -1243,7 +1245,8 @@ SYSCALL_DEFINE1(shmdt, char __user *, shmaddr) + * started at address shmaddr. It records it's size and then unmaps + * it. + * - Then it unmaps all shm vmas that started at shmaddr and that +- * are within the initially determined size. ++ * are within the initially determined size and that are from the ++ * same shm segment from which we determined the size. + * Errors from do_munmap are ignored: the function only fails if + * it's called with invalid parameters or if it's called to unmap + * a part of a vma. Both calls in this function are for full vmas, +@@ -1269,8 +1272,14 @@ SYSCALL_DEFINE1(shmdt, char __user *, shmaddr) + if ((vma->vm_ops == &shm_vm_ops) && + (vma->vm_start - addr)/PAGE_SIZE == vma->vm_pgoff) { + +- +- size = file_inode(vma->vm_file)->i_size; ++ /* ++ * Record the file of the shm segment being ++ * unmapped. With mremap(), someone could place ++ * page from another segment but with equal offsets ++ * in the range we are unmapping. 
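
The shmdt() change below closes a subtle hole: after mremap(), a mapping from a different shm segment can sit at matching offsets inside the computed range, so the loop records the struct file of the first matching vma and unmaps only vmas backed by that same file. The strengthened match condition, restated as a predicate with the ops table passed in (in the kernel it is ipc/shm.c's static shm_vm_ops):

  #include <linux/mm.h>
  #include <linux/fs.h>

  static bool vma_matches_segment(struct vm_area_struct *vma, unsigned long addr,
                                  struct file *file,
                                  const struct vm_operations_struct *shm_ops)
  {
          return vma->vm_ops == shm_ops &&
                 (vma->vm_start - addr) / PAGE_SIZE == vma->vm_pgoff &&
                 vma->vm_file == file;    /* same segment, not merely same offsets */
  }
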
++ */ ++ file = vma->vm_file; ++ size = i_size_read(file_inode(vma->vm_file)); + do_munmap(mm, vma->vm_start, vma->vm_end - vma->vm_start); + /* + * We discovered the size of the shm segment, so +@@ -1296,8 +1305,8 @@ SYSCALL_DEFINE1(shmdt, char __user *, shmaddr) + + /* finding a matching vma now does not alter retval */ + if ((vma->vm_ops == &shm_vm_ops) && +- (vma->vm_start - addr)/PAGE_SIZE == vma->vm_pgoff) +- ++ ((vma->vm_start - addr)/PAGE_SIZE == vma->vm_pgoff) && ++ (vma->vm_file == file)) + do_munmap(mm, vma->vm_start, vma->vm_end - vma->vm_start); + vma = next; + } +diff --git a/kernel/cpuset.c b/kernel/cpuset.c +index c8289138cad4..b5f0faa4bfbd 100644 +--- a/kernel/cpuset.c ++++ b/kernel/cpuset.c +@@ -503,9 +503,6 @@ static void update_domain_attr_tree(struct sched_domain_attr *dattr, + + rcu_read_lock(); + cpuset_for_each_descendant_pre(cp, pos_css, root_cs) { +- if (cp == root_cs) +- continue; +- + /* skip the whole subtree if @cp doesn't have any CPU */ + if (cpumask_empty(cp->cpus_allowed)) { + pos_css = css_rightmost_descendant(pos_css); +diff --git a/kernel/events/core.c b/kernel/events/core.c +index 452b2b0d7af1..18de86cbcdac 100644 +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -4169,6 +4169,13 @@ static void perf_pending_event(struct irq_work *entry) + { + struct perf_event *event = container_of(entry, + struct perf_event, pending); ++ int rctx; ++ ++ rctx = perf_swevent_get_recursion_context(); ++ /* ++ * If we 'fail' here, that's OK, it means recursion is already disabled ++ * and we won't recurse 'further'. ++ */ + + if (event->pending_disable) { + event->pending_disable = 0; +@@ -4179,6 +4186,9 @@ static void perf_pending_event(struct irq_work *entry) + event->pending_wakeup = 0; + perf_event_wakeup(event); + } ++ ++ if (rctx >= 0) ++ perf_swevent_put_recursion_context(rctx); + } + + /* +diff --git a/kernel/module.c b/kernel/module.c +index f3c612e45330..a97785308f25 100644 +--- a/kernel/module.c ++++ b/kernel/module.c +@@ -3048,21 +3048,6 @@ static int do_init_module(struct module *mod) + */ + current->flags &= ~PF_USED_ASYNC; + +- blocking_notifier_call_chain(&module_notify_list, +- MODULE_STATE_COMING, mod); +- +- /* Set RO and NX regions for core */ +- set_section_ro_nx(mod->module_core, +- mod->core_text_size, +- mod->core_ro_size, +- mod->core_size); +- +- /* Set RO and NX regions for init */ +- set_section_ro_nx(mod->module_init, +- mod->init_text_size, +- mod->init_ro_size, +- mod->init_size); +- + do_mod_ctors(mod); + /* Start the module */ + if (mod->init != NULL) +@@ -3194,9 +3179,26 @@ static int complete_formation(struct module *mod, struct load_info *info) + /* This relies on module_mutex for list integrity. */ + module_bug_finalize(info->hdr, info->sechdrs, mod); + ++ /* Set RO and NX regions for core */ ++ set_section_ro_nx(mod->module_core, ++ mod->core_text_size, ++ mod->core_ro_size, ++ mod->core_size); ++ ++ /* Set RO and NX regions for init */ ++ set_section_ro_nx(mod->module_init, ++ mod->init_text_size, ++ mod->init_ro_size, ++ mod->init_size); ++ + /* Mark state as coming so strong_try_module_get() ignores us, + * but kallsyms etc. can see us. 
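
perf_pending_event() above now brackets its body in a software-event recursion context and tolerates a negative return from the get side, which simply means recursion is already disabled at this level. The bracketing pattern with a stand-in body:

  #include <linux/perf_event.h>

  static void handle_pending(void)
  {
          /* stand-in for the pending-disable / pending-wakeup handling */
  }

  static void pending_irq_work(void)
  {
          int rctx = perf_swevent_get_recursion_context();

          /* rctx < 0 is fine: recursion is already disabled, we go no deeper */
          handle_pending();

          if (rctx >= 0)
                  perf_swevent_put_recursion_context(rctx);
  }
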
*/ + mod->state = MODULE_STATE_COMING; ++ mutex_unlock(&module_mutex); ++ ++ blocking_notifier_call_chain(&module_notify_list, ++ MODULE_STATE_COMING, mod); ++ return 0; + + out: + mutex_unlock(&module_mutex); +@@ -3330,6 +3332,11 @@ static int load_module(struct load_info *info, const char __user *uargs, + mutex_lock(&module_mutex); + module_bug_cleanup(mod); + mutex_unlock(&module_mutex); ++ ++ /* we can't deallocate the module until we clear memory protection */ ++ unset_module_init_ro_nx(mod); ++ unset_module_core_ro_nx(mod); ++ + ddebug_cleanup: + dynamic_debug_remove(info->debug); + synchronize_sched(); +diff --git a/kernel/printk/console_cmdline.h b/kernel/printk/console_cmdline.h +index cbd69d842341..2ca4a8b5fe57 100644 +--- a/kernel/printk/console_cmdline.h ++++ b/kernel/printk/console_cmdline.h +@@ -3,7 +3,7 @@ + + struct console_cmdline + { +- char name[8]; /* Name of the driver */ ++ char name[16]; /* Name of the driver */ + int index; /* Minor dev. to use */ + char *options; /* Options for the driver */ + #ifdef CONFIG_A11Y_BRAILLE_CONSOLE +diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c +index 0f9149036885..fbafa885eee1 100644 +--- a/kernel/printk/printk.c ++++ b/kernel/printk/printk.c +@@ -2281,6 +2281,7 @@ void register_console(struct console *newcon) + for (i = 0, c = console_cmdline; + i < MAX_CMDLINECONSOLES && c->name[0]; + i++, c++) { ++ BUILD_BUG_ON(sizeof(c->name) != sizeof(newcon->name)); + if (strcmp(c->name, newcon->name) != 0) + continue; + if (newcon->index >= 0 && +diff --git a/kernel/sysctl.c b/kernel/sysctl.c +index 167741003616..6a9b0c42fb30 100644 +--- a/kernel/sysctl.c ++++ b/kernel/sysctl.c +@@ -142,6 +142,11 @@ static int minolduid; + static int ngroups_max = NGROUPS_MAX; + static const int cap_last_cap = CAP_LAST_CAP; + ++/*this is needed for proc_doulongvec_minmax of sysctl_hung_task_timeout_secs */ ++#ifdef CONFIG_DETECT_HUNG_TASK ++static unsigned long hung_task_timeout_max = (LONG_MAX/HZ); ++#endif ++ + #ifdef CONFIG_INOTIFY_USER + #include <linux/inotify.h> + #endif +@@ -971,6 +976,7 @@ static struct ctl_table kern_table[] = { + .maxlen = sizeof(unsigned long), + .mode = 0644, + .proc_handler = proc_dohung_task_timeout_secs, ++ .extra2 = &hung_task_timeout_max, + }, + { + .procname = "hung_task_warnings", +diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c +index 28db9bedc857..6211d5d6d465 100644 +--- a/kernel/time/ntp.c ++++ b/kernel/time/ntp.c +@@ -631,10 +631,14 @@ int ntp_validate_timex(struct timex *txc) + if ((txc->modes & ADJ_SETOFFSET) && (!capable(CAP_SYS_TIME))) + return -EPERM; + +- if (txc->modes & ADJ_FREQUENCY) { +- if (LONG_MIN / PPM_SCALE > txc->freq) ++ /* ++ * Check for potential multiplication overflows that can ++ * only happen on 64-bit systems: ++ */ ++ if ((txc->modes & ADJ_FREQUENCY) && (BITS_PER_LONG == 64)) { ++ if (LLONG_MIN / PPM_SCALE > txc->freq) + return -EINVAL; +- if (LONG_MAX / PPM_SCALE < txc->freq) ++ if (LLONG_MAX / PPM_SCALE < txc->freq) + return -EINVAL; + } + +diff --git a/kernel/workqueue.c b/kernel/workqueue.c +index d10cc05bfbc6..bb5f920268d7 100644 +--- a/kernel/workqueue.c ++++ b/kernel/workqueue.c +@@ -2890,19 +2890,57 @@ bool flush_work(struct work_struct *work) + } + EXPORT_SYMBOL_GPL(flush_work); + ++struct cwt_wait { ++ wait_queue_t wait; ++ struct work_struct *work; ++}; ++ ++static int cwt_wakefn(wait_queue_t *wait, unsigned mode, int sync, void *key) ++{ ++ struct cwt_wait *cwait = container_of(wait, struct cwt_wait, wait); ++ ++ if (cwait->work != key) ++ return 0; ++ return 
autoremove_wake_function(wait, mode, sync, key); ++} ++ + static bool __cancel_work_timer(struct work_struct *work, bool is_dwork) + { ++ static DECLARE_WAIT_QUEUE_HEAD(cancel_waitq); + unsigned long flags; + int ret; + + do { + ret = try_to_grab_pending(work, is_dwork, &flags); + /* +- * If someone else is canceling, wait for the same event it +- * would be waiting for before retrying. ++ * If someone else is already canceling, wait for it to ++ * finish. flush_work() doesn't work for PREEMPT_NONE ++ * because we may get scheduled between @work's completion ++ * and the other canceling task resuming and clearing ++ * CANCELING - flush_work() will return false immediately ++ * as @work is no longer busy, try_to_grab_pending() will ++ * return -ENOENT as @work is still being canceled and the ++ * other canceling task won't be able to clear CANCELING as ++ * we're hogging the CPU. ++ * ++ * Let's wait for completion using a waitqueue. As this ++ * may lead to the thundering herd problem, use a custom ++ * wake function which matches @work along with exclusive ++ * wait and wakeup. + */ +- if (unlikely(ret == -ENOENT)) +- flush_work(work); ++ if (unlikely(ret == -ENOENT)) { ++ struct cwt_wait cwait; ++ ++ init_wait(&cwait.wait); ++ cwait.wait.func = cwt_wakefn; ++ cwait.work = work; ++ ++ prepare_to_wait_exclusive(&cancel_waitq, &cwait.wait, ++ TASK_UNINTERRUPTIBLE); ++ if (work_is_canceling(work)) ++ schedule(); ++ finish_wait(&cancel_waitq, &cwait.wait); ++ } + } while (unlikely(ret < 0)); + + /* tell other tasks trying to grab @work to back off */ +@@ -2911,6 +2949,16 @@ static bool __cancel_work_timer(struct work_struct *work, bool is_dwork) + + flush_work(work); + clear_work_data(work); ++ ++ /* ++ * Paired with prepare_to_wait() above so that either ++ * waitqueue_active() is visible here or !work_is_canceling() is ++ * visible there. 
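
The workqueue fix above replaces the flush_work()-based wait with an exclusive waitqueue whose wake function wakes only the waiter whose work matches the key handed to __wake_up(), curing the PREEMPT_NONE livelock the comment describes while avoiding a thundering herd. The matching-wake pattern on its own, using the 3.12-era wait_queue_t and an illustrative tag as the key:

  #include <linux/wait.h>
  #include <linux/sched.h>

  struct tagged_wait {
          wait_queue_t wait;
          void *tag;
  };

  static int tagged_wakefn(wait_queue_t *wait, unsigned mode, int sync, void *key)
  {
          struct tagged_wait *tw = container_of(wait, struct tagged_wait, wait);

          if (tw->tag != key)
                  return 0;       /* not ours: stay asleep, waker tries the next waiter */
          return autoremove_wake_function(wait, mode, sync, key);
  }

  static void wait_for_tag(wait_queue_head_t *wq, void *tag, bool (*busy)(void *))
  {
          struct tagged_wait tw;

          init_wait(&tw.wait);
          tw.wait.func = tagged_wakefn;
          tw.tag = tag;

          prepare_to_wait_exclusive(wq, &tw.wait, TASK_UNINTERRUPTIBLE);
          if (busy(tag))          /* re-check after queuing, exactly as the patch does */
                  schedule();
          finish_wait(wq, &tw.wait);
  }

  /* waker side, after clearing the busy state: __wake_up(wq, TASK_NORMAL, 1, tag); */
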
++ */ ++ smp_mb(); ++ if (waitqueue_active(&cancel_waitq)) ++ __wake_up(&cancel_waitq, TASK_NORMAL, 1, work); ++ + return ret; + } + +diff --git a/lib/lz4/lz4_decompress.c b/lib/lz4/lz4_decompress.c +index 7a85967060a5..f0f5c5c3de12 100644 +--- a/lib/lz4/lz4_decompress.c ++++ b/lib/lz4/lz4_decompress.c +@@ -139,6 +139,9 @@ static int lz4_uncompress(const char *source, char *dest, int osize) + /* Error: request to write beyond destination buffer */ + if (cpy > oend) + goto _output_error; ++ if ((ref + COPYLENGTH) > oend || ++ (op + COPYLENGTH) > oend) ++ goto _output_error; + LZ4_SECURECOPY(ref, op, (oend - COPYLENGTH)); + while (op < cpy) + *op++ = *ref++; +diff --git a/mm/hugetlb.c b/mm/hugetlb.c +index ed00a70fb052..424ee30fcd0d 100644 +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -476,40 +476,6 @@ static int vma_has_reserves(struct vm_area_struct *vma, long chg) + return 0; + } + +-static void copy_gigantic_page(struct page *dst, struct page *src) +-{ +- int i; +- struct hstate *h = page_hstate(src); +- struct page *dst_base = dst; +- struct page *src_base = src; +- +- for (i = 0; i < pages_per_huge_page(h); ) { +- cond_resched(); +- copy_highpage(dst, src); +- +- i++; +- dst = mem_map_next(dst, dst_base, i); +- src = mem_map_next(src, src_base, i); +- } +-} +- +-void copy_huge_page(struct page *dst, struct page *src) +-{ +- int i; +- struct hstate *h = page_hstate(src); +- +- if (unlikely(pages_per_huge_page(h) > MAX_ORDER_NR_PAGES)) { +- copy_gigantic_page(dst, src); +- return; +- } +- +- might_sleep(); +- for (i = 0; i < pages_per_huge_page(h); i++) { +- cond_resched(); +- copy_highpage(dst + i, src + i); +- } +-} +- + static void enqueue_huge_page(struct hstate *h, struct page *page) + { + int nid = page_to_nid(page); +@@ -2960,6 +2926,7 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, + struct page *pagecache_page = NULL; + static DEFINE_MUTEX(hugetlb_instantiation_mutex); + struct hstate *h = hstate_vma(vma); ++ int need_wait_lock = 0; + + address &= huge_page_mask(h); + +@@ -2993,6 +2960,16 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, + ret = 0; + + /* ++ * entry could be a migration/hwpoison entry at this point, so this ++ * check prevents the kernel from going below assuming that we have ++ * a active hugepage in pagecache. This goto expects the 2nd page fault, ++ * and is_hugetlb_entry_(migration|hwpoisoned) check will properly ++ * handle it. ++ */ ++ if (!pte_present(entry)) ++ goto out_mutex; ++ ++ /* + * If we are going to COW the mapping later, we examine the pending + * reservations for this page now. This will ensure that any + * allocations necessary to record that reservation occur outside the +@@ -3011,29 +2988,31 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, + vma, address); + } + ++ spin_lock(&mm->page_table_lock); ++ ++ /* Check for a racing update before calling hugetlb_cow */ ++ if (unlikely(!pte_same(entry, huge_ptep_get(ptep)))) ++ goto out_page_table_lock; ++ + /* + * hugetlb_cow() requires page locks of pte_page(entry) and + * pagecache_page, so here we need take the former one + * when page != pagecache_page or !pagecache_page. +- * Note that locking order is always pagecache_page -> page, +- * so no worry about deadlock. 
+ */ + page = pte_page(entry); +- get_page(page); + if (page != pagecache_page) +- lock_page(page); +- +- spin_lock(&mm->page_table_lock); +- /* Check for a racing update before calling hugetlb_cow */ +- if (unlikely(!pte_same(entry, huge_ptep_get(ptep)))) +- goto out_page_table_lock; ++ if (!trylock_page(page)) { ++ need_wait_lock = 1; ++ goto out_page_table_lock; ++ } + ++ get_page(page); + + if (flags & FAULT_FLAG_WRITE) { + if (!huge_pte_write(entry)) { + ret = hugetlb_cow(mm, vma, address, ptep, entry, + pagecache_page); +- goto out_page_table_lock; ++ goto out_put_page; + } + entry = huge_pte_mkdirty(entry); + } +@@ -3041,7 +3020,10 @@ int hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma, + if (huge_ptep_set_access_flags(vma, address, ptep, entry, + flags & FAULT_FLAG_WRITE)) + update_mmu_cache(vma, address, ptep); +- ++out_put_page: ++ if (page != pagecache_page) ++ unlock_page(page); ++ put_page(page); + out_page_table_lock: + spin_unlock(&mm->page_table_lock); + +@@ -3049,13 +3031,18 @@ out_page_table_lock: + unlock_page(pagecache_page); + put_page(pagecache_page); + } +- if (page != pagecache_page) +- unlock_page(page); +- put_page(page); +- + out_mutex: + mutex_unlock(&hugetlb_instantiation_mutex); + ++ /* ++ * Generally it's safe to hold refcount during waiting page lock. But ++ * here we just wait to defer the next page fault to avoid busy loop and ++ * the page is not used after unlocked before returning from the current ++ * page fault. So we are safe from accessing freed page, even if we wait ++ * here without taking refcount. ++ */ ++ if (need_wait_lock) ++ wait_on_page_locked(page); + return ret; + } + +diff --git a/mm/migrate.c b/mm/migrate.c +index 66ca0c494b90..05502f10c842 100644 +--- a/mm/migrate.c ++++ b/mm/migrate.c +@@ -444,6 +444,54 @@ int migrate_huge_page_move_mapping(struct address_space *mapping, + } + + /* ++ * Gigantic pages are so large that we do not guarantee that page++ pointer ++ * arithmetic will work across the entire page. We need something more ++ * specialized. 
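++ *
++ * (Editor's note: with SPARSEMEM and no VMEMMAP the struct page array
++ * is only guaranteed virtually contiguous within a MAX_ORDER block, so
++ * "page + i" can run off the end of a section. mem_map_next(), used
++ * below, falls back to pfn_to_page() whenever the index crosses a
++ * MAX_ORDER_NR_PAGES boundary, which is what makes the walk safe for
++ * gigantic pages.)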
++ */ ++static void __copy_gigantic_page(struct page *dst, struct page *src, ++ int nr_pages) ++{ ++ int i; ++ struct page *dst_base = dst; ++ struct page *src_base = src; ++ ++ for (i = 0; i < nr_pages; ) { ++ cond_resched(); ++ copy_highpage(dst, src); ++ ++ i++; ++ dst = mem_map_next(dst, dst_base, i); ++ src = mem_map_next(src, src_base, i); ++ } ++} ++ ++static void copy_huge_page(struct page *dst, struct page *src) ++{ ++ int i; ++ int nr_pages; ++ ++ if (PageHuge(src)) { ++ /* hugetlbfs page */ ++ struct hstate *h = page_hstate(src); ++ nr_pages = pages_per_huge_page(h); ++ ++ if (unlikely(nr_pages > MAX_ORDER_NR_PAGES)) { ++ __copy_gigantic_page(dst, src, nr_pages); ++ return; ++ } ++ } else { ++ /* thp page */ ++ BUG_ON(!PageTransHuge(src)); ++ nr_pages = hpage_nr_pages(src); ++ } ++ ++ for (i = 0; i < nr_pages; i++) { ++ cond_resched(); ++ copy_highpage(dst + i, src + i); ++ } ++} ++ ++/* + * Copy the page to its new location + */ + void migrate_page_copy(struct page *newpage, struct page *page) +diff --git a/net/caif/caif_socket.c b/net/caif/caif_socket.c +index d6be3edb7a43..526bf56f4d31 100644 +--- a/net/caif/caif_socket.c ++++ b/net/caif/caif_socket.c +@@ -283,7 +283,7 @@ static int caif_seqpkt_recvmsg(struct kiocb *iocb, struct socket *sock, + int copylen; + + ret = -EOPNOTSUPP; +- if (m->msg_flags&MSG_OOB) ++ if (flags & MSG_OOB) + goto read_error; + + skb = skb_recv_datagram(sk, flags, 0 , &ret); +diff --git a/net/can/af_can.c b/net/can/af_can.c +index ae3f07eb6cd7..5a668268f7ff 100644 +--- a/net/can/af_can.c ++++ b/net/can/af_can.c +@@ -262,6 +262,9 @@ int can_send(struct sk_buff *skb, int loop) + goto inval_skb; + } + ++ skb->ip_summed = CHECKSUM_UNNECESSARY; ++ ++ skb_reset_mac_header(skb); + skb_reset_network_header(skb); + skb_reset_transport_header(skb); + +diff --git a/net/compat.c b/net/compat.c +index 275af79c131b..d12529050b29 100644 +--- a/net/compat.c ++++ b/net/compat.c +@@ -71,6 +71,13 @@ int get_compat_msghdr(struct msghdr *kmsg, struct compat_msghdr __user *umsg) + __get_user(kmsg->msg_controllen, &umsg->msg_controllen) || + __get_user(kmsg->msg_flags, &umsg->msg_flags)) + return -EFAULT; ++ ++ if (!tmp1) ++ kmsg->msg_namelen = 0; ++ ++ if (kmsg->msg_namelen < 0) ++ return -EINVAL; ++ + if (kmsg->msg_namelen > sizeof(struct sockaddr_storage)) + kmsg->msg_namelen = sizeof(struct sockaddr_storage); + kmsg->msg_name = compat_ptr(tmp1); +diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c +index cca444190907..f3413ae3d973 100644 +--- a/net/core/sysctl_net_core.c ++++ b/net/core/sysctl_net_core.c +@@ -25,6 +25,8 @@ + static int zero = 0; + static int one = 1; + static int ushort_max = USHRT_MAX; ++static int min_sndbuf = SOCK_MIN_SNDBUF; ++static int min_rcvbuf = SOCK_MIN_RCVBUF; + + #ifdef CONFIG_RPS + static int rps_sock_flow_sysctl(struct ctl_table *table, int write, +@@ -222,7 +224,7 @@ static struct ctl_table net_core_table[] = { + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_dointvec_minmax, +- .extra1 = &one, ++ .extra1 = &min_sndbuf, + }, + { + .procname = "rmem_max", +@@ -230,7 +232,7 @@ static struct ctl_table net_core_table[] = { + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_dointvec_minmax, +- .extra1 = &one, ++ .extra1 = &min_rcvbuf, + }, + { + .procname = "wmem_default", +@@ -238,7 +240,7 @@ static struct ctl_table net_core_table[] = { + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_dointvec_minmax, +- .extra1 = &one, ++ .extra1 = &min_sndbuf, + }, + { + .procname = "rmem_default", 
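/*
 * (Editor's note on the four net_core_table hunks above; this sketch is
 * not part of the patch.) proc_dointvec_minmax() treats .extra1/.extra2
 * as pointers to the permitted minimum/maximum, so pointing .extra1 at
 * min_sndbuf/min_rcvbuf makes sysctl writes below SOCK_MIN_SNDBUF/
 * SOCK_MIN_RCVBUF fail with -EINVAL, where &one merely kept the values
 * positive. The same pattern in isolation ("example_*" names are
 * illustrative, not from the patch):
 */
#include <linux/sysctl.h>

static int example_value = 8192;
static int example_min = 4096;		/* stand-in for SOCK_MIN_SNDBUF */

static struct ctl_table example_table[] = {
	{
		.procname	= "example_value",
		.data		= &example_value,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= &example_min,	/* writes below 4096 are rejected */
	},
	{ }	/* sentinel */
};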
+@@ -246,7 +248,7 @@ static struct ctl_table net_core_table[] = { + .maxlen = sizeof(int), + .mode = 0644, + .proc_handler = proc_dointvec_minmax, +- .extra1 = &one, ++ .extra1 = &min_rcvbuf, + }, + { + .procname = "dev_weight", +diff --git a/net/dns_resolver/dns_query.c b/net/dns_resolver/dns_query.c +index ede0e2d7412e..2022b46ab38f 100644 +--- a/net/dns_resolver/dns_query.c ++++ b/net/dns_resolver/dns_query.c +@@ -151,7 +151,7 @@ int dns_query(const char *type, const char *name, size_t namelen, + goto put; + + memcpy(*_result, upayload->data, len); +- *_result[len] = '\0'; ++ (*_result)[len] = '\0'; + + if (_expiry) + *_expiry = rkey->expiry; +diff --git a/net/ipv4/inet_diag.c b/net/ipv4/inet_diag.c +index 45dbdab915e2..14a1ed611b05 100644 +--- a/net/ipv4/inet_diag.c ++++ b/net/ipv4/inet_diag.c +@@ -71,6 +71,20 @@ static inline void inet_diag_unlock_handler( + mutex_unlock(&inet_diag_table_mutex); + } + ++static size_t inet_sk_attr_size(void) ++{ ++ return nla_total_size(sizeof(struct tcp_info)) ++ + nla_total_size(1) /* INET_DIAG_SHUTDOWN */ ++ + nla_total_size(1) /* INET_DIAG_TOS */ ++ + nla_total_size(1) /* INET_DIAG_TCLASS */ ++ + nla_total_size(sizeof(struct inet_diag_meminfo)) ++ + nla_total_size(sizeof(struct inet_diag_msg)) ++ + nla_total_size(SK_MEMINFO_VARS * sizeof(u32)) ++ + nla_total_size(TCP_CA_NAME_MAX) ++ + nla_total_size(sizeof(struct tcpvegas_info)) ++ + 64; ++} ++ + int inet_sk_diag_fill(struct sock *sk, struct inet_connection_sock *icsk, + struct sk_buff *skb, struct inet_diag_req_v2 *req, + struct user_namespace *user_ns, +@@ -326,9 +340,7 @@ int inet_diag_dump_one_icsk(struct inet_hashinfo *hashinfo, struct sk_buff *in_s + if (err) + goto out; + +- rep = nlmsg_new(sizeof(struct inet_diag_msg) + +- sizeof(struct inet_diag_meminfo) + +- sizeof(struct tcp_info) + 64, GFP_KERNEL); ++ rep = nlmsg_new(inet_sk_attr_size(), GFP_KERNEL); + if (!rep) { + err = -ENOMEM; + goto out; +diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c +index b4435ae4c485..e07ccba040be 100644 +--- a/net/ipv4/tcp_output.c ++++ b/net/ipv4/tcp_output.c +@@ -2603,15 +2603,11 @@ void tcp_send_fin(struct sock *sk) + } else { + /* Socket is locked, keep trying until memory is available. */ + for (;;) { +- skb = alloc_skb_fclone(MAX_TCP_HEADER, +- sk->sk_allocation); ++ skb = sk_stream_alloc_skb(sk, 0, sk->sk_allocation); + if (skb) + break; + yield(); + } +- +- /* Reserve space for headers and prepare control bits. */ +- skb_reserve(skb, MAX_TCP_HEADER); + /* FIN eats a sequence byte, write_seq advanced by tcp_queue_skb(). 
*/ + tcp_init_nondata_skb(skb, tp->write_seq, + TCPHDR_ACK | TCPHDR_FIN); +@@ -2885,9 +2881,9 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn) + { + struct tcp_sock *tp = tcp_sk(sk); + struct tcp_fastopen_request *fo = tp->fastopen_req; +- int syn_loss = 0, space, i, err = 0, iovlen = fo->data->msg_iovlen; +- struct sk_buff *syn_data = NULL, *data; ++ int syn_loss = 0, space, err = 0; + unsigned long last_syn_loss = 0; ++ struct sk_buff *syn_data; + + tp->rx_opt.mss_clamp = tp->advmss; /* If MSS is not cached */ + tcp_fastopen_cache_get(sk, &tp->rx_opt.mss_clamp, &fo->cookie, +@@ -2918,42 +2914,38 @@ static int tcp_send_syn_data(struct sock *sk, struct sk_buff *syn) + /* limit to order-0 allocations */ + space = min_t(size_t, space, SKB_MAX_HEAD(MAX_TCP_HEADER)); + +- syn_data = skb_copy_expand(syn, MAX_TCP_HEADER, space, +- sk->sk_allocation); +- if (syn_data == NULL) ++ syn_data = sk_stream_alloc_skb(sk, space, sk->sk_allocation); ++ if (!syn_data) + goto fallback; ++ syn_data->ip_summed = CHECKSUM_PARTIAL; ++ memcpy(syn_data->cb, syn->cb, sizeof(syn->cb)); ++ if (unlikely(memcpy_fromiovecend(skb_put(syn_data, space), ++ fo->data->msg_iov, 0, space))) { ++ kfree_skb(syn_data); ++ goto fallback; ++ } + +- for (i = 0; i < iovlen && syn_data->len < space; ++i) { +- struct iovec *iov = &fo->data->msg_iov[i]; +- unsigned char __user *from = iov->iov_base; +- int len = iov->iov_len; +- +- if (syn_data->len + len > space) +- len = space - syn_data->len; +- else if (i + 1 == iovlen) +- /* No more data pending in inet_wait_for_connect() */ +- fo->data = NULL; ++ /* No more data pending in inet_wait_for_connect() */ ++ if (space == fo->size) ++ fo->data = NULL; ++ fo->copied = space; + +- if (skb_add_data(syn_data, from, len)) +- goto fallback; +- } ++ tcp_connect_queue_skb(sk, syn_data); + +- /* Queue a data-only packet after the regular SYN for retransmission */ +- data = pskb_copy(syn_data, sk->sk_allocation); +- if (data == NULL) +- goto fallback; +- TCP_SKB_CB(data)->seq++; +- TCP_SKB_CB(data)->tcp_flags &= ~TCPHDR_SYN; +- TCP_SKB_CB(data)->tcp_flags = (TCPHDR_ACK|TCPHDR_PSH); +- tcp_connect_queue_skb(sk, data); +- fo->copied = data->len; ++ err = tcp_transmit_skb(sk, syn_data, 1, sk->sk_allocation); + +- if (tcp_transmit_skb(sk, syn_data, 0, sk->sk_allocation) == 0) { ++ /* Now full SYN+DATA was cloned and sent (or not), ++ * remove the SYN from the original skb (syn_data) ++ * we keep in write queue in case of a retransmit, as we ++ * also have the SYN packet (with no data) in the same queue. ++ */ ++ TCP_SKB_CB(syn_data)->seq++; ++ TCP_SKB_CB(syn_data)->tcp_flags = TCPHDR_ACK | TCPHDR_PSH; ++ if (!err) { + tp->syn_data = (fo->copied > 0); + NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPFASTOPENACTIVE); + goto done; + } +- syn_data = NULL; + + fallback: + /* Send a regular SYN with Fast Open cookie request option */ +@@ -2962,7 +2954,6 @@ fallback: + err = tcp_transmit_skb(sk, syn, 1, sk->sk_allocation); + if (err) + tp->syn_fastopen = 0; +- kfree_skb(syn_data); + done: + fo->cookie.len = -1; /* Exclude Fast Open option for SYN retries */ + return err; +@@ -2982,13 +2973,10 @@ int tcp_connect(struct sock *sk) + return 0; + } + +- buff = alloc_skb_fclone(MAX_TCP_HEADER + 15, sk->sk_allocation); +- if (unlikely(buff == NULL)) ++ buff = sk_stream_alloc_skb(sk, 0, sk->sk_allocation); ++ if (unlikely(!buff)) + return -ENOBUFS; + +- /* Reserve space for headers. 
*/ +- skb_reserve(buff, MAX_TCP_HEADER); +- + tcp_init_nondata_skb(buff, tp->write_seq++, TCPHDR_SYN); + tp->retrans_stamp = TCP_SKB_CB(buff)->when = tcp_time_stamp; + tcp_connect_queue_skb(sk, buff); +diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c +index 5dac9fd72465..87f1a70bd234 100644 +--- a/net/ipv6/addrconf.c ++++ b/net/ipv6/addrconf.c +@@ -2641,8 +2641,18 @@ static void init_loopback(struct net_device *dev) + if (sp_ifa->flags & (IFA_F_DADFAILED | IFA_F_TENTATIVE)) + continue; + +- if (sp_ifa->rt) +- continue; ++ if (sp_ifa->rt) { ++ /* This dst has been added to garbage list when ++ * lo device down, release this obsolete dst and ++ * reallocate a new router for ifa. ++ */ ++ if (sp_ifa->rt->dst.obsolete > 0) { ++ ip6_rt_put(sp_ifa->rt); ++ sp_ifa->rt = NULL; ++ } else { ++ continue; ++ } ++ } + + sp_rt = addrconf_dst_alloc(idev, &sp_ifa->addr, 0); + +diff --git a/net/ipv6/fib6_rules.c b/net/ipv6/fib6_rules.c +index 3fd0a578329e..ab82a47d0bdf 100644 +--- a/net/ipv6/fib6_rules.c ++++ b/net/ipv6/fib6_rules.c +@@ -104,6 +104,7 @@ static int fib6_rule_action(struct fib_rule *rule, struct flowi *flp, + goto again; + flp6->saddr = saddr; + } ++ err = rt->dst.error; + goto out; + } + again: +diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h +index 23c13165ce83..09d4201b9e81 100644 +--- a/net/mac80211/ieee80211_i.h ++++ b/net/mac80211/ieee80211_i.h +@@ -57,13 +57,24 @@ struct ieee80211_local; + #define IEEE80211_UNSET_POWER_LEVEL INT_MIN + + /* +- * Some APs experience problems when working with U-APSD. Decrease the +- * probability of that happening by using legacy mode for all ACs but VO. +- * The AP that caused us trouble was a Cisco 4410N. It ignores our +- * setting, and always treats non-VO ACs as legacy. ++ * Some APs experience problems when working with U-APSD. Decreasing the ++ * probability of that happening by using legacy mode for all ACs but VO isn't ++ * enough. ++ * ++ * Cisco 4410N originally forced us to enable VO by default only because it ++ * treated non-VO ACs as legacy. ++ * ++ * However some APs (notably Netgear R7000) silently reclassify packets to ++ * different ACs. Since u-APSD ACs require trigger frames for frame retrieval ++ * clients would never see some frames (e.g. ARP responses) or would fetch them ++ * accidentally after a long time. ++ * ++ * It makes little sense to enable u-APSD queues by default because it needs ++ * userspace applications to be aware of it to actually take advantage of the ++ * possible additional powersavings. Implicitly depending on driver autotrigger ++ * frame support doesn't make much sense. 
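++ *
++ * (Editor's note: a zero default simply clears the per-AC bitmask;
++ * drivers that do want U-APSD can still set hw->uapsd_queues to any
++ * combination of the IEEE80211_WMM_IE_STA_QOSINFO_AC_* bits after
++ * ieee80211_alloc_hw(), so this changes only the default, not the
++ * capability.)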
+ */ +-#define IEEE80211_DEFAULT_UAPSD_QUEUES \ +- IEEE80211_WMM_IE_STA_QOSINFO_AC_VO ++#define IEEE80211_DEFAULT_UAPSD_QUEUES 0 + + #define IEEE80211_DEFAULT_MAX_SP_LEN \ + IEEE80211_WMM_IE_STA_QOSINFO_SP_ALL +diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c +index 03146a15f4f9..198fd6e1f6d4 100644 +--- a/net/mac80211/rx.c ++++ b/net/mac80211/rx.c +@@ -2078,6 +2078,9 @@ ieee80211_rx_h_mesh_fwding(struct ieee80211_rx_data *rx) + hdr = (struct ieee80211_hdr *) skb->data; + mesh_hdr = (struct ieee80211s_hdr *) (skb->data + hdrlen); + ++ if (ieee80211_drop_unencrypted(rx, hdr->frame_control)) ++ return RX_DROP_MONITOR; ++ + /* frame is in RMC, don't forward */ + if (ieee80211_is_data(hdr->frame_control) && + is_multicast_ether_addr(hdr->addr1) && +diff --git a/net/netfilter/nfnetlink_queue_core.c b/net/netfilter/nfnetlink_queue_core.c +index ae2e5c11d01a..f5c34db24498 100644 +--- a/net/netfilter/nfnetlink_queue_core.c ++++ b/net/netfilter/nfnetlink_queue_core.c +@@ -235,22 +235,23 @@ nfqnl_flush(struct nfqnl_instance *queue, nfqnl_cmpfn cmpfn, unsigned long data) + spin_unlock_bh(&queue->lock); + } + +-static void ++static int + nfqnl_zcopy(struct sk_buff *to, const struct sk_buff *from, int len, int hlen) + { + int i, j = 0; + int plen = 0; /* length of skb->head fragment */ ++ int ret; + struct page *page; + unsigned int offset; + + /* dont bother with small payloads */ +- if (len <= skb_tailroom(to)) { +- skb_copy_bits(from, 0, skb_put(to, len), len); +- return; +- } ++ if (len <= skb_tailroom(to)) ++ return skb_copy_bits(from, 0, skb_put(to, len), len); + + if (hlen) { +- skb_copy_bits(from, 0, skb_put(to, hlen), hlen); ++ ret = skb_copy_bits(from, 0, skb_put(to, hlen), hlen); ++ if (unlikely(ret)) ++ return ret; + len -= hlen; + } else { + plen = min_t(int, skb_headlen(from), len); +@@ -268,6 +269,11 @@ nfqnl_zcopy(struct sk_buff *to, const struct sk_buff *from, int len, int hlen) + to->len += len + plen; + to->data_len += len + plen; + ++ if (unlikely(skb_orphan_frags(from, GFP_ATOMIC))) { ++ skb_tx_error(from); ++ return -ENOMEM; ++ } ++ + for (i = 0; i < skb_shinfo(from)->nr_frags; i++) { + if (!len) + break; +@@ -278,6 +284,8 @@ nfqnl_zcopy(struct sk_buff *to, const struct sk_buff *from, int len, int hlen) + j++; + } + skb_shinfo(to)->nr_frags = j; ++ ++ return 0; + } + + static int +@@ -374,13 +382,16 @@ nfqnl_build_packet_message(struct nfqnl_instance *queue, + + skb = nfnetlink_alloc_skb(&init_net, size, queue->peer_portid, + GFP_ATOMIC); +- if (!skb) ++ if (!skb) { ++ skb_tx_error(entskb); + return NULL; ++ } + + nlh = nlmsg_put(skb, 0, 0, + NFNL_SUBSYS_QUEUE << 8 | NFQNL_MSG_PACKET, + sizeof(struct nfgenmsg), 0); + if (!nlh) { ++ skb_tx_error(entskb); + kfree_skb(skb); + return NULL; + } +@@ -504,13 +515,15 @@ nfqnl_build_packet_message(struct nfqnl_instance *queue, + nla->nla_type = NFQA_PAYLOAD; + nla->nla_len = nla_attr_size(data_len); + +- nfqnl_zcopy(skb, entskb, data_len, hlen); ++ if (nfqnl_zcopy(skb, entskb, data_len, hlen)) ++ goto nla_put_failure; + } + + nlh->nlmsg_len = skb->len; + return skb; + + nla_put_failure: ++ skb_tx_error(entskb); + kfree_skb(skb); + net_err_ratelimited("nf_queue: error creating packet message\n"); + return NULL; +diff --git a/net/rds/iw_rdma.c b/net/rds/iw_rdma.c +index a817705ce2d0..dba8d0864f18 100644 +--- a/net/rds/iw_rdma.c ++++ b/net/rds/iw_rdma.c +@@ -88,7 +88,9 @@ static unsigned int rds_iw_unmap_fastreg_list(struct rds_iw_mr_pool *pool, + int *unpinned); + static void rds_iw_destroy_fastreg(struct rds_iw_mr_pool *pool, struct 
rds_iw_mr *ibmr); + +-static int rds_iw_get_device(struct rds_sock *rs, struct rds_iw_device **rds_iwdev, struct rdma_cm_id **cm_id) ++static int rds_iw_get_device(struct sockaddr_in *src, struct sockaddr_in *dst, ++ struct rds_iw_device **rds_iwdev, ++ struct rdma_cm_id **cm_id) + { + struct rds_iw_device *iwdev; + struct rds_iw_cm_id *i_cm_id; +@@ -112,15 +114,15 @@ static int rds_iw_get_device(struct rds_sock *rs, struct rds_iw_device **rds_iwd + src_addr->sin_port, + dst_addr->sin_addr.s_addr, + dst_addr->sin_port, +- rs->rs_bound_addr, +- rs->rs_bound_port, +- rs->rs_conn_addr, +- rs->rs_conn_port); ++ src->sin_addr.s_addr, ++ src->sin_port, ++ dst->sin_addr.s_addr, ++ dst->sin_port); + #ifdef WORKING_TUPLE_DETECTION +- if (src_addr->sin_addr.s_addr == rs->rs_bound_addr && +- src_addr->sin_port == rs->rs_bound_port && +- dst_addr->sin_addr.s_addr == rs->rs_conn_addr && +- dst_addr->sin_port == rs->rs_conn_port) { ++ if (src_addr->sin_addr.s_addr == src->sin_addr.s_addr && ++ src_addr->sin_port == src->sin_port && ++ dst_addr->sin_addr.s_addr == dst->sin_addr.s_addr && ++ dst_addr->sin_port == dst->sin_port) { + #else + /* FIXME - needs to compare the local and remote + * ipaddr/port tuple, but the ipaddr is the only +@@ -128,7 +130,7 @@ static int rds_iw_get_device(struct rds_sock *rs, struct rds_iw_device **rds_iwd + * zero'ed. It doesn't appear to be properly populated + * during connection setup... + */ +- if (src_addr->sin_addr.s_addr == rs->rs_bound_addr) { ++ if (src_addr->sin_addr.s_addr == src->sin_addr.s_addr) { + #endif + spin_unlock_irq(&iwdev->spinlock); + *rds_iwdev = iwdev; +@@ -180,19 +182,13 @@ int rds_iw_update_cm_id(struct rds_iw_device *rds_iwdev, struct rdma_cm_id *cm_i + { + struct sockaddr_in *src_addr, *dst_addr; + struct rds_iw_device *rds_iwdev_old; +- struct rds_sock rs; + struct rdma_cm_id *pcm_id; + int rc; + + src_addr = (struct sockaddr_in *)&cm_id->route.addr.src_addr; + dst_addr = (struct sockaddr_in *)&cm_id->route.addr.dst_addr; + +- rs.rs_bound_addr = src_addr->sin_addr.s_addr; +- rs.rs_bound_port = src_addr->sin_port; +- rs.rs_conn_addr = dst_addr->sin_addr.s_addr; +- rs.rs_conn_port = dst_addr->sin_port; +- +- rc = rds_iw_get_device(&rs, &rds_iwdev_old, &pcm_id); ++ rc = rds_iw_get_device(src_addr, dst_addr, &rds_iwdev_old, &pcm_id); + if (rc) + rds_iw_remove_cm_id(rds_iwdev, cm_id); + +@@ -598,9 +594,17 @@ void *rds_iw_get_mr(struct scatterlist *sg, unsigned long nents, + struct rds_iw_device *rds_iwdev; + struct rds_iw_mr *ibmr = NULL; + struct rdma_cm_id *cm_id; ++ struct sockaddr_in src = { ++ .sin_addr.s_addr = rs->rs_bound_addr, ++ .sin_port = rs->rs_bound_port, ++ }; ++ struct sockaddr_in dst = { ++ .sin_addr.s_addr = rs->rs_conn_addr, ++ .sin_port = rs->rs_conn_port, ++ }; + int ret; + +- ret = rds_iw_get_device(rs, &rds_iwdev, &cm_id); ++ ret = rds_iw_get_device(&src, &dst, &rds_iwdev, &cm_id); + if (ret || !cm_id) { + ret = -ENODEV; + goto out; +diff --git a/net/rxrpc/ar-recvmsg.c b/net/rxrpc/ar-recvmsg.c +index 898492a8d61b..5cc2da5d295d 100644 +--- a/net/rxrpc/ar-recvmsg.c ++++ b/net/rxrpc/ar-recvmsg.c +@@ -87,7 +87,7 @@ int rxrpc_recvmsg(struct kiocb *iocb, struct socket *sock, + if (!skb) { + /* nothing remains on the queue */ + if (copied && +- (msg->msg_flags & MSG_PEEK || timeo == 0)) ++ (flags & MSG_PEEK || timeo == 0)) + goto out; + + /* wait for a message to turn up */ +diff --git a/net/wireless/chan.c b/net/wireless/chan.c +index 50f6195c8b70..e5af85d10212 100644 +--- a/net/wireless/chan.c ++++ b/net/wireless/chan.c +@@ -368,7 
+368,7 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy, + { + struct ieee80211_sta_ht_cap *ht_cap; + struct ieee80211_sta_vht_cap *vht_cap; +- u32 width, control_freq; ++ u32 width, control_freq, cap; + + if (WARN_ON(!cfg80211_chandef_valid(chandef))) + return false; +@@ -406,7 +406,8 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy, + return false; + break; + case NL80211_CHAN_WIDTH_80P80: +- if (!(vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ)) ++ cap = vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK; ++ if (cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ) + return false; + case NL80211_CHAN_WIDTH_80: + if (!vht_cap->vht_supported) +@@ -417,7 +418,9 @@ bool cfg80211_chandef_usable(struct wiphy *wiphy, + case NL80211_CHAN_WIDTH_160: + if (!vht_cap->vht_supported) + return false; +- if (!(vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ)) ++ cap = vht_cap->cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK; ++ if (cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ && ++ cap != IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160_80PLUS80MHZ) + return false; + prohibited_flags |= IEEE80211_CHAN_NO_160MHZ; + width = 160; +diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c +index 388123667f1e..79c3e641581d 100644 +--- a/net/wireless/nl80211.c ++++ b/net/wireless/nl80211.c +@@ -4096,6 +4096,16 @@ static int nl80211_new_station(struct sk_buff *skb, struct genl_info *info) + if (parse_station_flags(info, dev->ieee80211_ptr->iftype, ¶ms)) + return -EINVAL; + ++ /* HT/VHT requires QoS, but if we don't have that just ignore HT/VHT ++ * as userspace might just pass through the capabilities from the IEs ++ * directly, rather than enforcing this restriction and returning an ++ * error in this case. ++ */ ++ if (!(params.sta_flags_set & BIT(NL80211_STA_FLAG_WME))) { ++ params.ht_capa = NULL; ++ params.vht_capa = NULL; ++ } ++ + /* When you run into this, adjust the code below for the new flag */ + BUILD_BUG_ON(NL80211_STA_FLAG_MAX != 7); + +diff --git a/security/keys/request_key.c b/security/keys/request_key.c +index c411f9bb156b..5678616cde9d 100644 +--- a/security/keys/request_key.c ++++ b/security/keys/request_key.c +@@ -432,6 +432,7 @@ link_check_failed: + + link_prealloc_failed: + mutex_unlock(&user->cons_lock); ++ key_put(key); + kleave(" = %d [prelink]", ret); + return ret; + +diff --git a/sound/core/control.c b/sound/core/control.c +index 98a29b26c5f4..f2082a35b890 100644 +--- a/sound/core/control.c ++++ b/sound/core/control.c +@@ -1168,6 +1168,10 @@ static int snd_ctl_elem_add(struct snd_ctl_file *file, + + if (info->count < 1) + return -EINVAL; ++ if (!*info->id.name) ++ return -EINVAL; ++ if (strnlen(info->id.name, sizeof(info->id.name)) >= sizeof(info->id.name)) ++ return -EINVAL; + access = info->access == 0 ? SNDRV_CTL_ELEM_ACCESS_READWRITE : + (info->access & (SNDRV_CTL_ELEM_ACCESS_READWRITE| + SNDRV_CTL_ELEM_ACCESS_INACTIVE| +diff --git a/sound/pci/hda/hda_generic.c b/sound/pci/hda/hda_generic.c +index 31da88bf6c1c..1b1243450d8e 100644 +--- a/sound/pci/hda/hda_generic.c ++++ b/sound/pci/hda/hda_generic.c +@@ -652,12 +652,45 @@ static int get_amp_val_to_activate(struct hda_codec *codec, hda_nid_t nid, + return val; + } + ++/* is this a stereo widget or a stereo-to-mono mix? 
*/ ++static bool is_stereo_amps(struct hda_codec *codec, hda_nid_t nid, int dir) ++{ ++ unsigned int wcaps = get_wcaps(codec, nid); ++ hda_nid_t conn; ++ ++ if (wcaps & AC_WCAP_STEREO) ++ return true; ++ if (dir != HDA_INPUT || get_wcaps_type(wcaps) != AC_WID_AUD_MIX) ++ return false; ++ if (snd_hda_get_num_conns(codec, nid) != 1) ++ return false; ++ if (snd_hda_get_connections(codec, nid, &conn, 1) < 0) ++ return false; ++ return !!(get_wcaps(codec, conn) & AC_WCAP_STEREO); ++} ++ + /* initialize the amp value (only at the first time) */ + static void init_amp(struct hda_codec *codec, hda_nid_t nid, int dir, int idx) + { + unsigned int caps = query_amp_caps(codec, nid, dir); + int val = get_amp_val_to_activate(codec, nid, dir, caps, false); +- snd_hda_codec_amp_init_stereo(codec, nid, dir, idx, 0xff, val); ++ ++ if (is_stereo_amps(codec, nid, dir)) ++ snd_hda_codec_amp_init_stereo(codec, nid, dir, idx, 0xff, val); ++ else ++ snd_hda_codec_amp_init(codec, nid, 0, dir, idx, 0xff, val); ++} ++ ++/* update the amp, doing in stereo or mono depending on NID */ ++static int update_amp(struct hda_codec *codec, hda_nid_t nid, int dir, int idx, ++ unsigned int mask, unsigned int val) ++{ ++ if (is_stereo_amps(codec, nid, dir)) ++ return snd_hda_codec_amp_stereo(codec, nid, dir, idx, ++ mask, val); ++ else ++ return snd_hda_codec_amp_update(codec, nid, 0, dir, idx, ++ mask, val); + } + + /* calculate amp value mask we can modify; +@@ -697,7 +730,7 @@ static void activate_amp(struct hda_codec *codec, hda_nid_t nid, int dir, + return; + + val &= mask; +- snd_hda_codec_amp_stereo(codec, nid, dir, idx, mask, val); ++ update_amp(codec, nid, dir, idx, mask, val); + } + + static void activate_amp_out(struct hda_codec *codec, struct nid_path *path, +@@ -4331,13 +4364,11 @@ static void mute_all_mixer_nid(struct hda_codec *codec, hda_nid_t mix) + has_amp = nid_has_mute(codec, mix, HDA_INPUT); + for (i = 0; i < nums; i++) { + if (has_amp) +- snd_hda_codec_amp_stereo(codec, mix, +- HDA_INPUT, i, +- 0xff, HDA_AMP_MUTE); ++ update_amp(codec, mix, HDA_INPUT, i, ++ 0xff, HDA_AMP_MUTE); + else if (nid_has_volume(codec, conn[i], HDA_OUTPUT)) +- snd_hda_codec_amp_stereo(codec, conn[i], +- HDA_OUTPUT, 0, +- 0xff, HDA_AMP_MUTE); ++ update_amp(codec, conn[i], HDA_OUTPUT, 0, ++ 0xff, HDA_AMP_MUTE); + } + } + +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c +index 86e63b665777..3e57cfcf08e2 100644 +--- a/sound/pci/hda/hda_intel.c ++++ b/sound/pci/hda/hda_intel.c +@@ -1007,7 +1007,7 @@ static unsigned int azx_rirb_get_response(struct hda_bus *bus, + } + } + +- if (!bus->no_response_fallback) ++ if (bus->no_response_fallback) + return -1; + + if (!chip->polling_mode && chip->poll_count < 2) { +diff --git a/sound/pci/hda/hda_proc.c b/sound/pci/hda/hda_proc.c +index a8cb22eec89e..d64193c1a387 100644 +--- a/sound/pci/hda/hda_proc.c ++++ b/sound/pci/hda/hda_proc.c +@@ -129,13 +129,38 @@ static void print_amp_caps(struct snd_info_buffer *buffer, + (caps & AC_AMPCAP_MUTE) >> AC_AMPCAP_MUTE_SHIFT); + } + ++/* is this a stereo widget or a stereo-to-mono mix? 
*/ ++static bool is_stereo_amps(struct hda_codec *codec, hda_nid_t nid, ++ int dir, unsigned int wcaps, int indices) ++{ ++ hda_nid_t conn; ++ ++ if (wcaps & AC_WCAP_STEREO) ++ return true; ++ /* check for a stereo-to-mono mix; it must be: ++ * only a single connection, only for input, and only a mixer widget ++ */ ++ if (indices != 1 || dir != HDA_INPUT || ++ get_wcaps_type(wcaps) != AC_WID_AUD_MIX) ++ return false; ++ ++ if (snd_hda_get_raw_connections(codec, nid, &conn, 1) < 0) ++ return false; ++ /* the connection source is a stereo? */ ++ wcaps = snd_hda_param_read(codec, conn, AC_PAR_AUDIO_WIDGET_CAP); ++ return !!(wcaps & AC_WCAP_STEREO); ++} ++ + static void print_amp_vals(struct snd_info_buffer *buffer, + struct hda_codec *codec, hda_nid_t nid, +- int dir, int stereo, int indices) ++ int dir, unsigned int wcaps, int indices) + { + unsigned int val; ++ bool stereo; + int i; + ++ stereo = is_stereo_amps(codec, nid, dir, wcaps, indices); ++ + dir = dir == HDA_OUTPUT ? AC_AMP_GET_OUTPUT : AC_AMP_GET_INPUT; + for (i = 0; i < indices; i++) { + snd_iprintf(buffer, " ["); +@@ -727,12 +752,10 @@ static void print_codec_info(struct snd_info_entry *entry, + (codec->single_adc_amp && + wid_type == AC_WID_AUD_IN)) + print_amp_vals(buffer, codec, nid, HDA_INPUT, +- wid_caps & AC_WCAP_STEREO, +- 1); ++ wid_caps, 1); + else + print_amp_vals(buffer, codec, nid, HDA_INPUT, +- wid_caps & AC_WCAP_STEREO, +- conn_len); ++ wid_caps, conn_len); + } + if (wid_caps & AC_WCAP_OUT_AMP) { + snd_iprintf(buffer, " Amp-Out caps: "); +@@ -741,11 +764,10 @@ static void print_codec_info(struct snd_info_entry *entry, + if (wid_type == AC_WID_PIN && + codec->pin_amp_workaround) + print_amp_vals(buffer, codec, nid, HDA_OUTPUT, +- wid_caps & AC_WCAP_STEREO, +- conn_len); ++ wid_caps, conn_len); + else + print_amp_vals(buffer, codec, nid, HDA_OUTPUT, +- wid_caps & AC_WCAP_STEREO, 1); ++ wid_caps, 1); + } + + switch (wid_type) { +diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c +index 072755c8289c..ab0d0a384c15 100644 +--- a/sound/pci/hda/patch_cirrus.c ++++ b/sound/pci/hda/patch_cirrus.c +@@ -381,6 +381,7 @@ static const struct snd_pci_quirk cs420x_fixup_tbl[] = { + SND_PCI_QUIRK(0x106b, 0x1c00, "MacBookPro 8,1", CS420X_MBP81), + SND_PCI_QUIRK(0x106b, 0x2000, "iMac 12,2", CS420X_IMAC27_122), + SND_PCI_QUIRK(0x106b, 0x2800, "MacBookPro 10,1", CS420X_MBP101), ++ SND_PCI_QUIRK(0x106b, 0x5600, "MacBookAir 5,2", CS420X_MBP81), + SND_PCI_QUIRK(0x106b, 0x5b00, "MacBookAir 4,2", CS420X_MBA42), + SND_PCI_QUIRK_VENDOR(0x106b, "Apple", CS420X_APPLE), + {} /* terminator */ +@@ -572,6 +573,7 @@ static int patch_cs420x(struct hda_codec *codec) + return -ENOMEM; + + spec->gen.automute_hook = cs_automute; ++ codec->single_adc_amp = 1; + + snd_hda_pick_fixup(codec, cs420x_models, cs420x_fixup_tbl, + cs420x_fixups); +diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c +index 131d7d459cb6..9baf0037866f 100644 +--- a/sound/pci/hda/patch_conexant.c ++++ b/sound/pci/hda/patch_conexant.c +@@ -3227,6 +3227,7 @@ enum { + CXT_PINCFG_LENOVO_TP410, + CXT_PINCFG_LEMOTE_A1004, + CXT_PINCFG_LEMOTE_A1205, ++ CXT_PINCFG_COMPAQ_CQ60, + CXT_FIXUP_STEREO_DMIC, + CXT_FIXUP_INC_MIC_BOOST, + CXT_FIXUP_HEADPHONE_MIC_PIN, +@@ -3357,6 +3358,15 @@ static const struct hda_fixup cxt_fixups[] = { + .type = HDA_FIXUP_PINS, + .v.pins = cxt_pincfg_lemote, + }, ++ [CXT_PINCFG_COMPAQ_CQ60] = { ++ .type = HDA_FIXUP_PINS, ++ .v.pins = (const struct hda_pintbl[]) { ++ /* 0x17 was falsely set up as a mic, it should 0x1d */ 
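++ /*
++  * (editor's note: defcfg 0x400001f0 is the canonical "no physical
++  * connection" value, parking the wrongly-configured pin, while
++  * 0x97a70120 re-declares a fixed built-in mic on node 0x1d)
++  */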
++ { 0x17, 0x400001f0 }, ++ { 0x1d, 0x97a70120 }, ++ { } ++ } ++ }, + [CXT_FIXUP_STEREO_DMIC] = { + .type = HDA_FIXUP_FUNC, + .v.func = cxt_fixup_stereo_dmic, +@@ -3396,6 +3406,7 @@ static const struct hda_fixup cxt_fixups[] = { + }; + + static const struct snd_pci_quirk cxt5051_fixups[] = { ++ SND_PCI_QUIRK(0x103c, 0x360b, "Compaq CQ60", CXT_PINCFG_COMPAQ_CQ60), + SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo X200", CXT_PINCFG_LENOVO_X200), + {} + }; +diff --git a/sound/soc/codecs/adav80x.c b/sound/soc/codecs/adav80x.c +index 15b012d0f226..3be52ff14362 100644 +--- a/sound/soc/codecs/adav80x.c ++++ b/sound/soc/codecs/adav80x.c +@@ -307,7 +307,7 @@ static int adav80x_put_deemph(struct snd_kcontrol *kcontrol, + { + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec); +- unsigned int deemph = ucontrol->value.enumerated.item[0]; ++ unsigned int deemph = ucontrol->value.integer.value[0]; + + if (deemph > 1) + return -EINVAL; +@@ -323,7 +323,7 @@ static int adav80x_get_deemph(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct adav80x *adav80x = snd_soc_codec_get_drvdata(codec); + +- ucontrol->value.enumerated.item[0] = adav80x->deemph; ++ ucontrol->value.integer.value[0] = adav80x->deemph; + return 0; + }; + +diff --git a/sound/soc/codecs/ak4641.c b/sound/soc/codecs/ak4641.c +index 5f9af1fb76e8..68379c14720b 100644 +--- a/sound/soc/codecs/ak4641.c ++++ b/sound/soc/codecs/ak4641.c +@@ -74,7 +74,7 @@ static int ak4641_put_deemph(struct snd_kcontrol *kcontrol, + { + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct ak4641_priv *ak4641 = snd_soc_codec_get_drvdata(codec); +- int deemph = ucontrol->value.enumerated.item[0]; ++ int deemph = ucontrol->value.integer.value[0]; + + if (deemph > 1) + return -EINVAL; +@@ -90,7 +90,7 @@ static int ak4641_get_deemph(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct ak4641_priv *ak4641 = snd_soc_codec_get_drvdata(codec); + +- ucontrol->value.enumerated.item[0] = ak4641->deemph; ++ ucontrol->value.integer.value[0] = ak4641->deemph; + return 0; + }; + +diff --git a/sound/soc/codecs/cs4271.c b/sound/soc/codecs/cs4271.c +index a20f1bb8f071..57950e110e61 100644 +--- a/sound/soc/codecs/cs4271.c ++++ b/sound/soc/codecs/cs4271.c +@@ -287,7 +287,7 @@ static int cs4271_get_deemph(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct cs4271_private *cs4271 = snd_soc_codec_get_drvdata(codec); + +- ucontrol->value.enumerated.item[0] = cs4271->deemph; ++ ucontrol->value.integer.value[0] = cs4271->deemph; + return 0; + } + +@@ -297,7 +297,7 @@ static int cs4271_put_deemph(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct cs4271_private *cs4271 = snd_soc_codec_get_drvdata(codec); + +- cs4271->deemph = ucontrol->value.enumerated.item[0]; ++ cs4271->deemph = ucontrol->value.integer.value[0]; + return cs4271_set_deemph(codec); + } + +diff --git a/sound/soc/codecs/pcm1681.c b/sound/soc/codecs/pcm1681.c +index c91eba504f92..0819fa2ff710 100644 +--- a/sound/soc/codecs/pcm1681.c ++++ b/sound/soc/codecs/pcm1681.c +@@ -117,7 +117,7 @@ static int pcm1681_get_deemph(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct pcm1681_private *priv = snd_soc_codec_get_drvdata(codec); + +- ucontrol->value.enumerated.item[0] = priv->deemph; ++ ucontrol->value.integer.value[0] = 
priv->deemph; + + return 0; + } +@@ -128,7 +128,7 @@ static int pcm1681_put_deemph(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct pcm1681_private *priv = snd_soc_codec_get_drvdata(codec); + +- priv->deemph = ucontrol->value.enumerated.item[0]; ++ priv->deemph = ucontrol->value.integer.value[0]; + + return pcm1681_set_deemph(codec); + } +diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c +index ba73f832e455..cc2d29cee002 100644 +--- a/sound/soc/codecs/sgtl5000.c ++++ b/sound/soc/codecs/sgtl5000.c +@@ -1197,13 +1197,7 @@ static int sgtl5000_set_power_regs(struct snd_soc_codec *codec) + /* Enable VDDC charge pump */ + ana_pwr |= SGTL5000_VDDC_CHRGPMP_POWERUP; + } else if (vddio >= 3100 && vdda >= 3100) { +- /* +- * if vddio and vddd > 3.1v, +- * charge pump should be clean before set ana_pwr +- */ +- snd_soc_update_bits(codec, SGTL5000_CHIP_ANA_POWER, +- SGTL5000_VDDC_CHRGPMP_POWERUP, 0); +- ++ ana_pwr &= ~SGTL5000_VDDC_CHRGPMP_POWERUP; + /* VDDC use VDDIO rail */ + lreg_ctrl |= SGTL5000_VDDC_ASSN_OVRD; + lreg_ctrl |= SGTL5000_VDDC_MAN_ASSN_VDDIO << +diff --git a/sound/soc/codecs/tas5086.c b/sound/soc/codecs/tas5086.c +index 6d31d88f7204..6b694e422344 100644 +--- a/sound/soc/codecs/tas5086.c ++++ b/sound/soc/codecs/tas5086.c +@@ -272,7 +272,7 @@ static int tas5086_get_deemph(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct tas5086_private *priv = snd_soc_codec_get_drvdata(codec); + +- ucontrol->value.enumerated.item[0] = priv->deemph; ++ ucontrol->value.integer.value[0] = priv->deemph; + + return 0; + } +@@ -283,7 +283,7 @@ static int tas5086_put_deemph(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct tas5086_private *priv = snd_soc_codec_get_drvdata(codec); + +- priv->deemph = ucontrol->value.enumerated.item[0]; ++ priv->deemph = ucontrol->value.integer.value[0]; + + return tas5086_set_deemph(codec); + } +diff --git a/sound/soc/codecs/wm2000.c b/sound/soc/codecs/wm2000.c +index 7fefd766b582..124fb538dfa9 100644 +--- a/sound/soc/codecs/wm2000.c ++++ b/sound/soc/codecs/wm2000.c +@@ -605,7 +605,7 @@ static int wm2000_anc_mode_get(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev); + +- ucontrol->value.enumerated.item[0] = wm2000->anc_active; ++ ucontrol->value.integer.value[0] = wm2000->anc_active; + + return 0; + } +@@ -615,7 +615,7 @@ static int wm2000_anc_mode_put(struct snd_kcontrol *kcontrol, + { + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev); +- int anc_active = ucontrol->value.enumerated.item[0]; ++ int anc_active = ucontrol->value.integer.value[0]; + int ret; + + if (anc_active > 1) +@@ -638,7 +638,7 @@ static int wm2000_speaker_get(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev); + +- ucontrol->value.enumerated.item[0] = wm2000->spk_ena; ++ ucontrol->value.integer.value[0] = wm2000->spk_ena; + + return 0; + } +@@ -648,7 +648,7 @@ static int wm2000_speaker_put(struct snd_kcontrol *kcontrol, + { + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm2000_priv *wm2000 = dev_get_drvdata(codec->dev); +- int val = ucontrol->value.enumerated.item[0]; ++ int val = ucontrol->value.integer.value[0]; + int ret; + + if (val > 1) +diff 
--git a/sound/soc/codecs/wm8731.c b/sound/soc/codecs/wm8731.c +index bc7472c968e3..cb860099126f 100644 +--- a/sound/soc/codecs/wm8731.c ++++ b/sound/soc/codecs/wm8731.c +@@ -122,7 +122,7 @@ static int wm8731_get_deemph(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm8731_priv *wm8731 = snd_soc_codec_get_drvdata(codec); + +- ucontrol->value.enumerated.item[0] = wm8731->deemph; ++ ucontrol->value.integer.value[0] = wm8731->deemph; + + return 0; + } +@@ -132,7 +132,7 @@ static int wm8731_put_deemph(struct snd_kcontrol *kcontrol, + { + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm8731_priv *wm8731 = snd_soc_codec_get_drvdata(codec); +- int deemph = ucontrol->value.enumerated.item[0]; ++ int deemph = ucontrol->value.integer.value[0]; + int ret = 0; + + if (deemph > 1) +diff --git a/sound/soc/codecs/wm8903.c b/sound/soc/codecs/wm8903.c +index eebcb1da3b7b..ae7d76efe063 100644 +--- a/sound/soc/codecs/wm8903.c ++++ b/sound/soc/codecs/wm8903.c +@@ -442,7 +442,7 @@ static int wm8903_get_deemph(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm8903_priv *wm8903 = snd_soc_codec_get_drvdata(codec); + +- ucontrol->value.enumerated.item[0] = wm8903->deemph; ++ ucontrol->value.integer.value[0] = wm8903->deemph; + + return 0; + } +@@ -452,7 +452,7 @@ static int wm8903_put_deemph(struct snd_kcontrol *kcontrol, + { + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm8903_priv *wm8903 = snd_soc_codec_get_drvdata(codec); +- int deemph = ucontrol->value.enumerated.item[0]; ++ int deemph = ucontrol->value.integer.value[0]; + int ret = 0; + + if (deemph > 1) +diff --git a/sound/soc/codecs/wm8904.c b/sound/soc/codecs/wm8904.c +index 48bae0ec500f..07c67a37a131 100644 +--- a/sound/soc/codecs/wm8904.c ++++ b/sound/soc/codecs/wm8904.c +@@ -523,7 +523,7 @@ static int wm8904_get_deemph(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm8904_priv *wm8904 = snd_soc_codec_get_drvdata(codec); + +- ucontrol->value.enumerated.item[0] = wm8904->deemph; ++ ucontrol->value.integer.value[0] = wm8904->deemph; + return 0; + } + +@@ -532,7 +532,7 @@ static int wm8904_put_deemph(struct snd_kcontrol *kcontrol, + { + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm8904_priv *wm8904 = snd_soc_codec_get_drvdata(codec); +- int deemph = ucontrol->value.enumerated.item[0]; ++ int deemph = ucontrol->value.integer.value[0]; + + if (deemph > 1) + return -EINVAL; +diff --git a/sound/soc/codecs/wm8955.c b/sound/soc/codecs/wm8955.c +index 82c8ba975720..1c1fc6119758 100644 +--- a/sound/soc/codecs/wm8955.c ++++ b/sound/soc/codecs/wm8955.c +@@ -393,7 +393,7 @@ static int wm8955_get_deemph(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm8955_priv *wm8955 = snd_soc_codec_get_drvdata(codec); + +- ucontrol->value.enumerated.item[0] = wm8955->deemph; ++ ucontrol->value.integer.value[0] = wm8955->deemph; + return 0; + } + +@@ -402,7 +402,7 @@ static int wm8955_put_deemph(struct snd_kcontrol *kcontrol, + { + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm8955_priv *wm8955 = snd_soc_codec_get_drvdata(codec); +- int deemph = ucontrol->value.enumerated.item[0]; ++ int deemph = ucontrol->value.integer.value[0]; + + if (deemph > 1) + return -EINVAL; +diff --git a/sound/soc/codecs/wm8960.c b/sound/soc/codecs/wm8960.c +index 942ef8427347..2a0bfb848512 100644 +--- 
a/sound/soc/codecs/wm8960.c ++++ b/sound/soc/codecs/wm8960.c +@@ -181,7 +181,7 @@ static int wm8960_get_deemph(struct snd_kcontrol *kcontrol, + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm8960_priv *wm8960 = snd_soc_codec_get_drvdata(codec); + +- ucontrol->value.enumerated.item[0] = wm8960->deemph; ++ ucontrol->value.integer.value[0] = wm8960->deemph; + return 0; + } + +@@ -190,7 +190,7 @@ static int wm8960_put_deemph(struct snd_kcontrol *kcontrol, + { + struct snd_soc_codec *codec = snd_kcontrol_chip(kcontrol); + struct wm8960_priv *wm8960 = snd_soc_codec_get_drvdata(codec); +- int deemph = ucontrol->value.enumerated.item[0]; ++ int deemph = ucontrol->value.integer.value[0]; + + if (deemph > 1) + return -EINVAL; +diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h +index 83bddbdb90e9..5293b5ac8b9d 100644 +--- a/sound/usb/quirks-table.h ++++ b/sound/usb/quirks-table.h +@@ -1773,6 +1773,36 @@ YAMAHA_DEVICE(0x7010, "UB99"), + } + } + }, ++{ ++ USB_DEVICE(0x0582, 0x0159), ++ .driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { ++ /* .vendor_name = "Roland", */ ++ /* .product_name = "UA-22", */ ++ .ifnum = QUIRK_ANY_INTERFACE, ++ .type = QUIRK_COMPOSITE, ++ .data = (const struct snd_usb_audio_quirk[]) { ++ { ++ .ifnum = 0, ++ .type = QUIRK_AUDIO_STANDARD_INTERFACE ++ }, ++ { ++ .ifnum = 1, ++ .type = QUIRK_AUDIO_STANDARD_INTERFACE ++ }, ++ { ++ .ifnum = 2, ++ .type = QUIRK_MIDI_FIXED_ENDPOINT, ++ .data = & (const struct snd_usb_midi_endpoint_info) { ++ .out_cables = 0x0001, ++ .in_cables = 0x0001 ++ } ++ }, ++ { ++ .ifnum = -1 ++ } ++ } ++ } ++}, + /* this catches most recent vendor-specific Roland devices */ + { + .match_flags = USB_DEVICE_ID_MATCH_VENDOR |