From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80]) by finch.gentoo.org (Postfix) with ESMTP id 9C0B51387FD for ; Mon, 9 Jun 2014 12:27:40 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1]) by pigeon.gentoo.org (Postfix) with SMTP id 902F1E08D0; Mon, 9 Jun 2014 12:27:38 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by pigeon.gentoo.org (Postfix) with ESMTPS id E4F56E08D0 for ; Mon, 9 Jun 2014 12:27:37 +0000 (UTC)
Received: from flycatcher.gentoo.org (flycatcher.gentoo.org [81.93.255.6]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by smtp.gentoo.org (Postfix) with ESMTPS id AD5EB33FED0 for ; Mon, 9 Jun 2014 12:27:36 +0000 (UTC)
Received: by flycatcher.gentoo.org (Postfix, from userid 2195) id 6977E2004E; Mon, 9 Jun 2014 12:27:35 +0000 (UTC)
To: gentoo-commits@lists.gentoo.org
From: "Mike Pagano (mpagano)"
Subject: [gentoo-commits] linux-patches r2819 - genpatches-2.6/trunk/3.10
X-VCS-Repository: linux-patches
X-VCS-Revision: 2819
X-VCS-Files: genpatches-2.6/trunk/3.10/1041_linux-3.10.42.patch genpatches-2.6/trunk/3.10/1501-futex-add-another-early-deadlock-detection-check.patch genpatches-2.6/trunk/3.10/1502-futex-prevent-attaching-to-kernel-threads.patch genpatches-2.6/trunk/3.10/1503-futex-prevent-requeue-pi-on-same-futex-patch-futex-forbid-uaddr-uaddr2-in-futex_requeue-requeue_pi-1.patch genpatches-2.6/trunk/3.10/1504-futex-validate-atomic-acquisition-in-futex_lock_pi_atomic.patch genpatches-2.6/trunk/3.10/1505-futex-always-cleanup-owner-tid-in-unlock_pi.patch genpatches-2.6/trunk/3.10/1506-futex-make-lookup_pi_state-more-robust.patch genpatches-2.6/trunk/3.10/0000_README
X-VCS-Directories: genpatches-2.6/trunk/3.10
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Message-Id: <20140609122735.6977E2004E@flycatcher.gentoo.org>
Date: Mon, 9 Jun 2014 12:27:35 +0000 (UTC)
Precedence: bulk
List-Post:
List-Help:
List-Unsubscribe:
List-Subscribe:
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Archives-Salt: c8c65601-423d-4beb-8141-f18b9446cf11
X-Archives-Hash: 23e69bfbab04317d9a2a20ca9066abac

Author: mpagano
Date: 2014-06-09 12:27:34 +0000 (Mon, 09 Jun 2014)
New Revision: 2819

Added:
   genpatches-2.6/trunk/3.10/1041_linux-3.10.42.patch
Removed:
   genpatches-2.6/trunk/3.10/1501-futex-add-another-early-deadlock-detection-check.patch
   genpatches-2.6/trunk/3.10/1502-futex-prevent-attaching-to-kernel-threads.patch
   genpatches-2.6/trunk/3.10/1503-futex-prevent-requeue-pi-on-same-futex-patch-futex-forbid-uaddr-uaddr2-in-futex_requeue-requeue_pi-1.patch
   genpatches-2.6/trunk/3.10/1504-futex-validate-atomic-acquisition-in-futex_lock_pi_atomic.patch
   genpatches-2.6/trunk/3.10/1505-futex-always-cleanup-owner-tid-in-unlock_pi.patch
   genpatches-2.6/trunk/3.10/1506-futex-make-lookup_pi_state-more-robust.patch
Modified:
   genpatches-2.6/trunk/3.10/0000_README
Log:
Linux patch 3.10.42. Remove redundant patches

Modified: genpatches-2.6/trunk/3.10/0000_README
===================================================================
--- genpatches-2.6/trunk/3.10/0000_README	2014-06-09 00:42:33 UTC (rev 2818)
+++ genpatches-2.6/trunk/3.10/0000_README	2014-06-09 12:27:34 UTC (rev 2819)
@@ -206,6 +206,10 @@
 From: http://www.kernel.org
 Desc: Linux 3.10.41

+Patch: 1041_linux-3.10.42.patch
+From: http://www.kernel.org
+Desc: Linux 3.10.42
+
 Patch: 1500_XATTR_USER_PREFIX.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc: Support for namespace user.pax.* on tmpfs.
@@ -214,30 +218,6 @@
 From: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=6a96e15096da6e7491107321cfa660c7c2aa119d
 Desc: selinux: add SOCK_DIAG_BY_FAMILY to the list of netlink message types

-Patch: 1501-futex-add-another-early-deadlock-detection-check.patch
-From: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=866293ee54227584ffcb4a42f69c1f365974ba7f
-Desc: CVE-2014-3153
-
-Patch: 1502-futex-prevent-attaching-to-kernel-threads.patch
-From: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=f0d71b3dcb8332f7971b5f2363632573e6d9486a
-Desc: CVE-2014-3153
-
-Patch: 1503-futex-prevent-requeue-pi-on-same-futex-patch-futex-forbid-uaddr-uaddr2-in-futex_requeue-requeue_pi-1.patch
-From: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=e9c243a5a6de0be8e584c604d353412584b592f8
-Desc: CVE-2014-3153
-
-Patch: 1504-futex-validate-atomic-acquisition-in-futex_lock_pi_atomic.patch
-From: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=b3eaa9fc5cd0a4d74b18f6b8dc617aeaf1873270
-Desc: CVE-2014-3153
-
-Patch: 1505-futex-always-cleanup-owner-tid-in-unlock_pi.patch
-From: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=13fbca4c6ecd96ec1a1cfa2e4f2ce191fe928a5e
-Desc: CVE-2014-3153
-
-Patch: 1506-futex-make-lookup_pi_state-more-robust.patch
-From: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=54a217887a7b658e2650c3feff22756ab80c7339
-Desc: CVE-2014-3153
-
 Patch: 1700_enable-thinkpad-micled.patch
 From: https://bugs.gentoo.org/show_bug.cgi?id=449248
 Desc: Enable mic mute led in thinkpads

Added: genpatches-2.6/trunk/3.10/1041_linux-3.10.42.patch
===================================================================
--- genpatches-2.6/trunk/3.10/1041_linux-3.10.42.patch	(rev 0)
+++ genpatches-2.6/trunk/3.10/1041_linux-3.10.42.patch	2014-06-09 12:27:34 UTC (rev 2819)
@@ -0,0 +1,3808 @@
+diff --git a/Documentation/i2c/busses/i2c-i801 b/Documentation/i2c/busses/i2c-i801
+index d29dea0f3232..babe2ef16139 100644
+--- a/Documentation/i2c/busses/i2c-i801
++++ b/Documentation/i2c/busses/i2c-i801
+@@ -25,6 +25,8 @@ Supported adapters:
+ * Intel Avoton (SOC)
+ * Intel Wellsburg (PCH)
+ * Intel Coleto Creek (PCH)
++ * Intel Wildcat Point-LP (PCH)
++ * Intel BayTrail (SOC)
+ Datasheets: Publicly available at the Intel website
+
+ On Intel Patsburg and later chipsets, both the normal host SMBus controller
+diff --git a/Documentation/input/elantech.txt b/Documentation/input/elantech.txt
+index 5602eb71ad5d..e1ae127ed099 100644
+--- a/Documentation/input/elantech.txt
++++ b/Documentation/input/elantech.txt
+@@ -504,9 +504,12 @@ byte 5:
+ * reg_10
+
+ bit 7 6 5 4 3 2 1 0
+- 0 0 0 0 0 0 0 A
++ 0 0 0 0 R F T A
+
+ A: 1 = enable absolute tracking
++ T: 1 = enable two finger mode auto correct
++ F: 1 = disable ABS Position Filter
++ R: 1 = enable real hardware resolution
+
+ 6.2 Native absolute mode 6 byte packet format
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+diff --git a/Documentation/ja_JP/HOWTO b/Documentation/ja_JP/HOWTO
+index 050d37fe6d40..46ed73593465 100644
+--- a/Documentation/ja_JP/HOWTO
++++ b/Documentation/ja_JP/HOWTO
+@@ -315,7 +315,7 @@ Andrew Morton が Linux-kernel メーリングリストにカーネルリリー
+ もし、2.6.x.y カーネルが存在しない場合には、番号が一番大きい 2.6.x が
+ 最新の安定版カーネルです。
+
+-2.6.x.y は "stable" チーム でメンテされており、必
+ 要に応じてリリースされます。通常のリリース期間は 2週間毎ですが、差し迫っ
+ た問題がなければもう少し長くなることもあります。セキュリティ関連の問題
+ の場合はこれに対してだいたいの場合、すぐにリリースがされます。
+diff --git a/Documentation/ja_JP/stable_kernel_rules.txt b/Documentation/ja_JP/stable_kernel_rules.txt
+index 14265837c4ce..9dbda9b5d21e 100644
+--- a/Documentation/ja_JP/stable_kernel_rules.txt
++++ b/Documentation/ja_JP/stable_kernel_rules.txt
+@@ -50,16 +50,16 @@ linux-2.6.29/Documentation/stable_kernel_rules.txt
+
+ -stable ツリーにパッチを送付する手続き-
+
+- - 上記の規則に従っているかを確認した後に、stable@kernel.org にパッチ
++ - 上記の規則に従っているかを確認した後に、stable@vger.kernel.org にパッチ
+ を送る。
+ - 送信者はパッチがキューに受け付けられた際には ACK を、却下された場合
+ には NAK を受け取る。この反応は開発者たちのスケジュールによって、数
+ 日かかる場合がある。
+ - もし受け取られたら、パッチは他の開発者たちと関連するサブシステムの
+ メンテナーによるレビューのために -stable キューに追加される。
+- - パッチに stable@kernel.org のアドレスが付加されているときには、それ
++ - パッチに stable@vger.kernel.org のアドレスが付加されているときには、それ
+ が Linus のツリーに入る時に自動的に stable チームに email される。
+- - セキュリティパッチはこのエイリアス (stable@kernel.org) に送られるべ
++ - セキュリティパッチはこのエイリアス (stable@vger.kernel.org) に送られるべ
+ きではなく、代わりに security@kernel.org のアドレスに送られる。
+
+ レビューサイクル-
+diff --git a/Documentation/zh_CN/HOWTO b/Documentation/zh_CN/HOWTO
+index 7fba5aab9ef9..7599eb38b764 100644
+--- a/Documentation/zh_CN/HOWTO
++++ b/Documentation/zh_CN/HOWTO
+@@ -237,7 +237,7 @@ kernel.org网站的pub/linux/kernel/v2.6/目录下找到它。它的开发遵循
+ 如果没有2.6.x.y版本内核存在,那么最新的2.6.x版本内核就相当于是当前的稳定
+ 版内核。
+
+-2.6.x.y版本由“稳定版”小组(邮件地址)维护,一般隔周发
++2.6.x.y版本由“稳定版”小组(邮件地址)维护,一般隔周发
+ 布新版本。
+
+ 内核源码中的Documentation/stable_kernel_rules.txt文件具体描述了可被稳定
+diff --git a/Documentation/zh_CN/stable_kernel_rules.txt b/Documentation/zh_CN/stable_kernel_rules.txt
+index b5b9b0ab02fd..26ea5ed7cd9c 100644
+--- a/Documentation/zh_CN/stable_kernel_rules.txt
++++ b/Documentation/zh_CN/stable_kernel_rules.txt
+@@ -42,7 +42,7 @@ Documentation/stable_kernel_rules.txt 的中文翻译
+
+ 向稳定版代码树提交补丁的过程:
+
+- - 在确认了补丁符合以上的规则后,将补丁发送到stable@kernel.org。
++ - 在确认了补丁符合以上的规则后,将补丁发送到stable@vger.kernel.org。
+ - 如果补丁被接受到队列里,发送者会收到一个ACK回复,如果没有被接受,收
+ 到的是NAK回复。回复需要几天的时间,这取决于开发者的时间安排。
+ - 被接受的补丁会被加到稳定版本队列里,等待其他开发者的审查。
+diff --git a/Makefile b/Makefile
+index 370cc01afb07..4634015fed68 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 3
+ PATCHLEVEL = 10
+-SUBLEVEL = 41
++SUBLEVEL = 42
+ EXTRAVERSION =
+ NAME = TOSSUG Baby Fish
+
+diff --git a/arch/arm/boot/dts/imx53.dtsi b/arch/arm/boot/dts/imx53.dtsi
+index eb83aa039b8b..e524316998f4 100644
+--- a/arch/arm/boot/dts/imx53.dtsi
++++ b/arch/arm/boot/dts/imx53.dtsi
+@@ -71,7 +71,7 @@
+ ipu: ipu@18000000 {
+ #crtc-cells = <1>;
+ compatible = "fsl,imx53-ipu";
+- reg = <0x18000000 0x080000000>;
++ reg = <0x18000000 0x08000000>;
+ interrupts = <11 10>;
+ clocks = <&clks 59>, <&clks 110>, <&clks 61>;
+ clock-names = "bus", "di0", "di1";
+diff --git a/arch/arm/kernel/crash_dump.c b/arch/arm/kernel/crash_dump.c
+index 90c50d4b43f7..5d1286d51154 100644
+--- a/arch/arm/kernel/crash_dump.c
++++ b/arch/arm/kernel/crash_dump.c
+@@ -39,7 +39,7 @@ ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
+ if (!csize)
+ return 0;
+
+- vaddr = ioremap(pfn << PAGE_SHIFT, PAGE_SIZE);
++ vaddr = ioremap(__pfn_to_phys(pfn), PAGE_SIZE);
+ if (!vaddr)
+ return -ENOMEM;
+
+diff --git a/arch/metag/include/asm/barrier.h b/arch/metag/include/asm/barrier.h
+index c90bfc6bf648..e355a4c10968 100644
+--- a/arch/metag/include/asm/barrier.h
++++ b/arch/metag/include/asm/barrier.h
+@@ -15,6 +15,7 @@ static inline void wr_fence(void)
+ volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_FENCE;
+ barrier();
+ *flushptr = 0;
++ barrier();
+ }
+
+ #else /* CONFIG_METAG_META21 */
+@@ -35,6 +36,7 @@ static inline void wr_fence(void)
+ *flushptr = 0;
+ *flushptr = 0;
+ *flushptr = 0;
++ barrier();
+ }
+
+ #endif /* !CONFIG_METAG_META21 */
+@@ -68,6 +70,7 @@ static inline void fence(void)
+ volatile int *flushptr = (volatile int *) LINSYSEVENT_WR_ATOMIC_UNLOCK;
+ barrier();
+ *flushptr = 0;
++ barrier();
+ }
+ #define smp_mb() fence()
+ #define smp_rmb() fence()
+diff --git a/arch/metag/include/asm/processor.h b/arch/metag/include/asm/processor.h
+index 9b029a7911c3..579e3d93a5ca 100644
+--- a/arch/metag/include/asm/processor.h
++++ b/arch/metag/include/asm/processor.h
+@@ -22,6 +22,8 @@
+ /* Add an extra page of padding at the top of the stack for the guard page. */
+ #define STACK_TOP (TASK_SIZE - PAGE_SIZE)
+ #define STACK_TOP_MAX STACK_TOP
++/* Maximum virtual space for stack */
++#define STACK_SIZE_MAX (1 << 28) /* 256 MB */
+
+ /* This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c
+index a22f06a6f7ca..45c1a6caa206 100644
+--- a/arch/mips/cavium-octeon/octeon-irq.c
++++ b/arch/mips/cavium-octeon/octeon-irq.c
+@@ -635,7 +635,7 @@ static void octeon_irq_cpu_offline_ciu(struct irq_data *data)
+ cpumask_clear(&new_affinity);
+ cpumask_set_cpu(cpumask_first(cpu_online_mask), &new_affinity);
+ }
+- __irq_set_affinity_locked(data, &new_affinity);
++ irq_set_affinity_locked(data, &new_affinity, false);
+ }
+
+ static int octeon_irq_ciu_set_affinity(struct irq_data *data,
+diff --git a/arch/mips/lantiq/dts/easy50712.dts b/arch/mips/lantiq/dts/easy50712.dts
+index fac1f5b178eb..143b8a37b5e4 100644
+--- a/arch/mips/lantiq/dts/easy50712.dts
++++ b/arch/mips/lantiq/dts/easy50712.dts
+@@ -8,6 +8,7 @@
+ };
+
+ memory@0 {
++ device_type = "memory";
+ reg = <0x0 0x2000000>;
+ };
+
+diff --git a/arch/mips/ralink/dts/mt7620a_eval.dts b/arch/mips/ralink/dts/mt7620a_eval.dts
+index 35eb874ab7f1..709f58132f5c 100644
+--- a/arch/mips/ralink/dts/mt7620a_eval.dts
++++ b/arch/mips/ralink/dts/mt7620a_eval.dts
+@@ -7,6 +7,7 @@
+ model = "Ralink MT7620A evaluation board";
+
+ memory@0 {
++ device_type = "memory";
+ reg = <0x0 0x2000000>;
+ };
+
+diff --git a/arch/mips/ralink/dts/rt2880_eval.dts b/arch/mips/ralink/dts/rt2880_eval.dts
+index 322d7002595b..0a685db093d4 100644
+--- a/arch/mips/ralink/dts/rt2880_eval.dts
++++ b/arch/mips/ralink/dts/rt2880_eval.dts
+@@ -7,6 +7,7 @@
+ model = "Ralink RT2880 evaluation board";
+
+ memory@0 {
++ device_type = "memory";
+ reg = <0x8000000 0x2000000>;
+ };
+
+diff --git a/arch/mips/ralink/dts/rt3052_eval.dts b/arch/mips/ralink/dts/rt3052_eval.dts
+index 0ac73ea28198..ec9e9a035541 100644
+--- a/arch/mips/ralink/dts/rt3052_eval.dts
++++ b/arch/mips/ralink/dts/rt3052_eval.dts
+@@ -7,6 +7,7 @@
+ model = "Ralink RT3052 evaluation board";
+
+ memory@0 {
++ device_type = "memory";
+ reg = <0x0 0x2000000>;
+ };
+
+diff --git a/arch/mips/ralink/dts/rt3883_eval.dts b/arch/mips/ralink/dts/rt3883_eval.dts
+index 2fa6b330bf4f..e8df21a5d10d 100644
+--- a/arch/mips/ralink/dts/rt3883_eval.dts
++++ b/arch/mips/ralink/dts/rt3883_eval.dts
+@@ -7,6 +7,7 @@
+ model = "Ralink RT3883 evaluation board";
+
+ memory@0 {
++ device_type = "memory";
+ reg = <0x0 0x2000000>;
+ };
+
+diff --git a/arch/parisc/include/asm/processor.h b/arch/parisc/include/asm/processor.h
+index cc2290a3cace..c6ee86542fec 100644
+--- a/arch/parisc/include/asm/processor.h
++++ b/arch/parisc/include/asm/processor.h
+@@ -53,6 +53,8 @@
+ #define STACK_TOP TASK_SIZE
+ #define STACK_TOP_MAX DEFAULT_TASK_SIZE
+
++#define STACK_SIZE_MAX (1 << 30) /* 1 GB */
++
+ #endif
+
+ #ifndef __ASSEMBLY__
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index 967fd23ace78..56a4a5d205af 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -97,7 +97,9 @@ CFLAGS-$(CONFIG_POWER7_CPU) += $(call cc-option,-mcpu=power7)
+
+ CFLAGS-$(CONFIG_TUNE_CELL) += $(call cc-option,-mtune=cell)
+
+-KBUILD_CPPFLAGS += -Iarch/$(ARCH)
++asinstr := $(call as-instr,lis 9$(comma)foo@high,-DHAVE_AS_ATHIGH=1)
++
++KBUILD_CPPFLAGS += -Iarch/$(ARCH) $(asinstr)
+ KBUILD_AFLAGS += -Iarch/$(ARCH)
+ KBUILD_CFLAGS += -msoft-float -pipe -Iarch/$(ARCH) $(CFLAGS-y)
+ CPP = $(CC) -E $(KBUILD_CFLAGS)
+diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
+index 2f1b6c5f8174..22cee04a47fc 100644
+--- a/arch/powerpc/include/asm/ppc_asm.h
++++ b/arch/powerpc/include/asm/ppc_asm.h
+@@ -390,11 +390,16 @@ n:
+ * ld rY,ADDROFF(name)(rX)
+ */
+ #ifdef __powerpc64__
++#ifdef HAVE_AS_ATHIGH
++#define __AS_ATHIGH high
++#else
++#define __AS_ATHIGH h
++#endif
+ #define LOAD_REG_IMMEDIATE(reg,expr) \
+ lis reg,(expr)@highest; \
+ ori reg,reg,(expr)@higher; \
+ rldicr reg,reg,32,31; \
+- oris reg,reg,(expr)@h; \
++ oris reg,reg,(expr)@__AS_ATHIGH; \
+ ori reg,reg,(expr)@l;
+
+ #define LOAD_REG_ADDR(reg,name) \
+diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
+index 1e1c995ddacc..d55357ee9028 100644
+--- a/arch/powerpc/kernel/process.c
++++ b/arch/powerpc/kernel/process.c
+@@ -948,6 +948,16 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+ flush_altivec_to_thread(src);
+ flush_vsx_to_thread(src);
+ flush_spe_to_thread(src);
++ /*
++ * Flush TM state out so we can copy it. __switch_to_tm() does this
++ * flush but it removes the checkpointed state from the current CPU and
++ * transitions the CPU out of TM mode. Hence we need to call
++ * tm_recheckpoint_new_task() (on the same task) to restore the
++ * checkpointed state back and the TM mode.
++ */
++ __switch_to_tm(src);
++ tm_recheckpoint_new_task(src);
++
+ *dst = *src;
+ return 0;
+ }
+diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
+index 2a245b55bb71..fd104db9cea1 100644
+--- a/arch/s390/crypto/aes_s390.c
++++ b/arch/s390/crypto/aes_s390.c
+@@ -818,6 +818,9 @@ static int ctr_aes_crypt(struct blkcipher_desc *desc, long func,
+ else
+ memcpy(walk->iv, ctrptr, AES_BLOCK_SIZE);
+ spin_unlock(&ctrblk_lock);
++ } else {
++ if (!nbytes)
++ memcpy(walk->iv, ctrptr, AES_BLOCK_SIZE);
+ }
+ /*
+ * final block may be < AES_BLOCK_SIZE, copy only nbytes
+diff --git a/arch/s390/crypto/des_s390.c b/arch/s390/crypto/des_s390.c
+index 2d96e68febb2..f2d6cccddcf8 100644
+--- a/arch/s390/crypto/des_s390.c
++++ b/arch/s390/crypto/des_s390.c
+@@ -429,6 +429,9 @@ static int ctr_desall_crypt(struct blkcipher_desc *desc, long func,
+ else
+ memcpy(walk->iv, ctrptr, DES_BLOCK_SIZE);
+ spin_unlock(&ctrblk_lock);
++ } else {
++ if (!nbytes)
++ memcpy(walk->iv, ctrptr, DES_BLOCK_SIZE);
+ }
+ /* final block may be < DES_BLOCK_SIZE, copy only nbytes */
+ if (nbytes) {
+diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
+index a8091216963b..68c05398bba9 100644
+--- a/arch/x86/include/asm/hugetlb.h
++++ b/arch/x86/include/asm/hugetlb.h
+@@ -52,6 +52,7 @@ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ static inline void huge_ptep_clear_flush(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep)
+ {
++ ptep_clear_flush(vma, addr, ptep);
+ }
+
+ static inline int huge_pte_none(pte_t pte)
+diff --git a/arch/x86/kernel/ldt.c b/arch/x86/kernel/ldt.c
+index af1d14a9ebda..dcbbaa165bde 100644
+--- a/arch/x86/kernel/ldt.c
++++ b/arch/x86/kernel/ldt.c
+@@ -20,6 +20,8 @@
+ #include
+ #include
+
++int sysctl_ldt16 = 0;
++
+ #ifdef CONFIG_SMP
+ static void flush_ldt(void *current_mm)
+ {
+@@ -234,7 +236,7 @@ static int write_ldt(void __user *ptr, unsigned long bytecount, int oldmode)
+ * IRET leaking the high bits of the kernel stack address.
+ */
+ #ifdef CONFIG_X86_64
+- if (!ldt_info.seg_32bit) {
++ if (!ldt_info.seg_32bit && !sysctl_ldt16) {
+ error = -EINVAL;
+ goto out_unlock;
+ }
+diff --git a/arch/x86/vdso/vdso32-setup.c b/arch/x86/vdso/vdso32-setup.c
+index 0faad646f5fd..0f134c7cfc24 100644
+--- a/arch/x86/vdso/vdso32-setup.c
++++ b/arch/x86/vdso/vdso32-setup.c
+@@ -41,6 +41,7 @@ enum {
+ #ifdef CONFIG_X86_64
+ #define vdso_enabled sysctl_vsyscall32
+ #define arch_setup_additional_pages syscall32_setup_pages
++extern int sysctl_ldt16;
+ #endif
+
+ /*
+@@ -380,6 +381,13 @@ static ctl_table abi_table2[] = {
+ .mode = 0644,
+ .proc_handler = proc_dointvec
+ },
++ {
++ .procname = "ldt16",
++ .data = &sysctl_ldt16,
++ .maxlen = sizeof(int),
++ .mode = 0644,
++ .proc_handler = proc_dointvec
++ },
+ {}
+ };
+
+diff --git a/crypto/crypto_wq.c b/crypto/crypto_wq.c
+index adad92a44ba2..2f1b8d12952a 100644
+--- a/crypto/crypto_wq.c
++++ b/crypto/crypto_wq.c
+@@ -33,7 +33,7 @@ static void __exit crypto_wq_exit(void)
+ destroy_workqueue(kcrypto_wq);
+ }
+
+-module_init(crypto_wq_init);
++subsys_initcall(crypto_wq_init);
+ module_exit(crypto_wq_exit);
+
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/acpi/blacklist.c b/drivers/acpi/blacklist.c
+index cb9629638def..76da257cfc28 100644
+--- a/drivers/acpi/blacklist.c
++++ b/drivers/acpi/blacklist.c
+@@ -327,6 +327,19 @@ static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
+ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T500"),
+ },
+ },
++ /*
++ * Without this this EEEpc exports a non working WMI interface, with
++ * this it exports a working "good old" eeepc_laptop interface, fixing
++ * both brightness control, and rfkill not working.
++ */
++ {
++ .callback = dmi_enable_osi_linux,
++ .ident = "Asus EEE PC 1015PX",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "1015PX"),
++ },
++ },
+ {}
+ };
+
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 9cf616b5210b..bf00fbcde8ad 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -6300,6 +6300,8 @@ int ata_host_activate(struct ata_host *host, int irq,
+ static void ata_port_detach(struct ata_port *ap)
+ {
+ unsigned long flags;
++ struct ata_link *link;
++ struct ata_device *dev;
+
+ if (!ap->ops->error_handler)
+ goto skip_eh;
+@@ -6319,6 +6321,13 @@ static void ata_port_detach(struct ata_port *ap)
+ cancel_delayed_work_sync(&ap->hotplug_task);
+
+ skip_eh:
++ /* clean up zpodd on port removal */
++ ata_for_each_link(link, ap, HOST_FIRST) {
++ ata_for_each_dev(dev, link, ALL) {
++ if (zpodd_dev_enabled(dev))
++ zpodd_exit(dev);
++ }
++ }
+ if (ap->pmp_link) {
+ int i;
+ for (i = 0; i < SATA_PMP_MAX_PORTS; i++)
+diff --git a/drivers/ata/pata_at91.c b/drivers/ata/pata_at91.c
+index 033f3f4c20ad..fa288597f01b 100644
+--- a/drivers/ata/pata_at91.c
++++ b/drivers/ata/pata_at91.c
+@@ -408,12 +408,13 @@ static int pata_at91_probe(struct platform_device *pdev)
+
+ host->private_data = info;
+
+- return ata_host_activate(host, gpio_is_valid(irq) ? gpio_to_irq(irq) : 0,
+- gpio_is_valid(irq) ? ata_sff_interrupt : NULL,
+- irq_flags, &pata_at91_sht);
++ ret = ata_host_activate(host, gpio_is_valid(irq) ? gpio_to_irq(irq) : 0,
++ gpio_is_valid(irq) ? ata_sff_interrupt : NULL,
++ irq_flags, &pata_at91_sht);
++ if (ret)
++ goto err_put;
+
+- if (!ret)
+- return 0;
++ return 0;
+
+ err_put:
+ clk_put(info->mck);
+diff --git a/drivers/base/dd.c b/drivers/base/dd.c
+index 06051767393f..8a8d611f2021 100644
+--- a/drivers/base/dd.c
++++ b/drivers/base/dd.c
+@@ -52,6 +52,7 @@ static DEFINE_MUTEX(deferred_probe_mutex);
+ static LIST_HEAD(deferred_probe_pending_list);
+ static LIST_HEAD(deferred_probe_active_list);
+ static struct workqueue_struct *deferred_wq;
++static atomic_t deferred_trigger_count = ATOMIC_INIT(0);
+
+ /**
+ * deferred_probe_work_func() - Retry probing devices in the active list.
+@@ -135,6 +136,17 @@ static bool driver_deferred_probe_enable = false;
+ * This functions moves all devices from the pending list to the active
+ * list and schedules the deferred probe workqueue to process them. It
+ * should be called anytime a driver is successfully bound to a device.
++ *
++ * Note, there is a race condition in multi-threaded probe. In the case where
++ * more than one device is probing at the same time, it is possible for one
++ * probe to complete successfully while another is about to defer. If the second
++ * depends on the first, then it will get put on the pending list after the
++ * trigger event has already occured and will be stuck there.
++ *
++ * The atomic 'deferred_trigger_count' is used to determine if a successful
++ * trigger has occurred in the midst of probing a driver. If the trigger count
++ * changes in the midst of a probe, then deferred processing should be triggered
++ * again.
+ */
+ static void driver_deferred_probe_trigger(void)
+ {
+@@ -147,6 +159,7 @@ static void driver_deferred_probe_trigger(void)
+ * into the active list so they can be retried by the workqueue
+ */
+ mutex_lock(&deferred_probe_mutex);
++ atomic_inc(&deferred_trigger_count);
+ list_splice_tail_init(&deferred_probe_pending_list,
+ &deferred_probe_active_list);
+ mutex_unlock(&deferred_probe_mutex);
+@@ -265,6 +278,7 @@ static DECLARE_WAIT_QUEUE_HEAD(probe_waitqueue);
+ static int really_probe(struct device *dev, struct device_driver *drv)
+ {
+ int ret = 0;
++ int local_trigger_count = atomic_read(&deferred_trigger_count);
+
+ atomic_inc(&probe_count);
+ pr_debug("bus: '%s': %s: probing driver %s with device %s\n",
+@@ -310,6 +324,9 @@ probe_failed:
+ /* Driver requested deferred probing */
+ dev_info(dev, "Driver %s requests probe deferral\n", drv->name);
+ driver_deferred_probe_add(dev);
++ /* Did a trigger occur while probing? Need to re-trigger if yes */
++ if (local_trigger_count != atomic_read(&deferred_trigger_count))
++ driver_deferred_probe_trigger();
+ } else if (ret != -ENODEV && ret != -ENXIO) {
+ /* driver matched but the probe failed */
+ printk(KERN_WARNING
+diff --git a/drivers/base/topology.c b/drivers/base/topology.c
+index ae989c57cd5e..bcd19886fa1a 100644
+--- a/drivers/base/topology.c
++++ b/drivers/base/topology.c
+@@ -40,8 +40,7 @@
+ static ssize_t show_##name(struct device *dev, \
+ struct device_attribute *attr, char *buf) \
+ { \
+- unsigned int cpu = dev->id; \
+- return sprintf(buf, "%d\n", topology_##name(cpu)); \
++ return sprintf(buf, "%d\n", topology_##name(dev->id)); \
+ }
+
+ #if defined(topology_thread_cpumask) || defined(topology_core_cpumask) || \
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 1735b0d17e29..ddd9a098bc67 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -104,7 +104,7 @@ struct blkfront_info
+ struct work_struct work;
+ struct gnttab_free_callback callback;
+ struct blk_shadow shadow[BLK_RING_SIZE];
+- struct list_head persistent_gnts;
++ struct list_head grants;
+ unsigned int persistent_gnts_c;
+ unsigned long shadow_free;
+ unsigned int feature_flush;
+@@ -175,15 +175,17 @@ static int fill_grant_buffer(struct blkfront_info *info, int num)
+ if (!gnt_list_entry)
+ goto out_of_memory;
+
+- granted_page = alloc_page(GFP_NOIO);
+- if (!granted_page) {
+- kfree(gnt_list_entry);
+- goto out_of_memory;
++ if (info->feature_persistent) {
++ granted_page = alloc_page(GFP_NOIO);
++ if (!granted_page) {
++ kfree(gnt_list_entry);
++ goto out_of_memory;
++ }
++ gnt_list_entry->pfn = page_to_pfn(granted_page);
+ }
+
+- gnt_list_entry->pfn = page_to_pfn(granted_page);
+ gnt_list_entry->gref = GRANT_INVALID_REF;
+- list_add(&gnt_list_entry->node, &info->persistent_gnts);
++ list_add(&gnt_list_entry->node, &info->grants);
+ i++;
+ }
+
+@@ -191,9 +193,10 @@ static int fill_grant_buffer(struct blkfront_info *info, int num)
+
+ out_of_memory:
+ list_for_each_entry_safe(gnt_list_entry, n,
+- &info->persistent_gnts, node) {
++ &info->grants, node) {
+ list_del(&gnt_list_entry->node);
+- __free_page(pfn_to_page(gnt_list_entry->pfn));
++ if (info->feature_persistent)
++ __free_page(pfn_to_page(gnt_list_entry->pfn));
+ kfree(gnt_list_entry);
+ i--;
+ }
+@@ -202,14 +205,14 @@ out_of_memory:
+ }
+
+ static struct grant *get_grant(grant_ref_t *gref_head,
++ unsigned long pfn,
+ struct blkfront_info *info)
+ {
+ struct grant *gnt_list_entry;
+ unsigned long buffer_mfn;
+
+- BUG_ON(list_empty(&info->persistent_gnts));
+- gnt_list_entry = list_first_entry(&info->persistent_gnts, struct grant,
+- node);
++ BUG_ON(list_empty(&info->grants));
++ gnt_list_entry = list_first_entry(&info->grants, struct grant, node);
+ list_del(&gnt_list_entry->node);
+
+ if (gnt_list_entry->gref != GRANT_INVALID_REF) {
+@@ -220,6 +223,10 @@ static struct grant *get_grant(grant_ref_t *gref_head,
+ /* Assign a gref to this page */
+ gnt_list_entry->gref = gnttab_claim_grant_reference(gref_head);
+ BUG_ON(gnt_list_entry->gref == -ENOSPC);
++ if (!info->feature_persistent) {
++ BUG_ON(!pfn);
++ gnt_list_entry->pfn = pfn;
++ }
+ buffer_mfn = pfn_to_mfn(gnt_list_entry->pfn);
+ gnttab_grant_foreign_access_ref(gnt_list_entry->gref,
+ info->xbdev->otherend_id,
+@@ -430,12 +437,12 @@ static int blkif_queue_request(struct request *req)
+ fsect = sg->offset >> 9;
+ lsect = fsect + (sg->length >> 9) - 1;
+
+- gnt_list_entry = get_grant(&gref_head, info);
++ gnt_list_entry = get_grant(&gref_head, page_to_pfn(sg_page(sg)), info);
+ ref = gnt_list_entry->gref;
+
+ info->shadow[id].grants_used[i] = gnt_list_entry;
+
+- if (rq_data_dir(req)) {
++ if (rq_data_dir(req) && info->feature_persistent) {
+ char *bvec_data;
+ void *shared_data;
+
+@@ -828,16 +835,17 @@ static void blkif_free(struct blkfront_info *info, int suspend)
+ blk_stop_queue(info->rq);
+
+ /* Remove all persistent grants */
+- if (!list_empty(&info->persistent_gnts)) {
++ if (!list_empty(&info->grants)) {
+ list_for_each_entry_safe(persistent_gnt, n,
+- &info->persistent_gnts, node) {
++ &info->grants, node) {
+ list_del(&persistent_gnt->node);
+ if (persistent_gnt->gref != GRANT_INVALID_REF) {
+ gnttab_end_foreign_access(persistent_gnt->gref,
+ 0, 0UL);
+ info->persistent_gnts_c--;
+ }
+- __free_page(pfn_to_page(persistent_gnt->pfn));
++ if (info->feature_persistent)
++ __free_page(pfn_to_page(persistent_gnt->pfn));
+ kfree(persistent_gnt);
+ }
+ }
+@@ -874,7 +882,7 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
+
+ nseg = s->req.u.rw.nr_segments;
+
+- if (bret->operation == BLKIF_OP_READ) {
++ if (bret->operation == BLKIF_OP_READ && info->feature_persistent) {
+ /*
+ * Copy the data received from the backend into the bvec.
+ * Since bv_offset can be different than 0, and bv_len different
+@@ -894,9 +902,30 @@ static void blkif_completion(struct blk_shadow *s, struct blkfront_info *info,
+ }
+ }
+ /* Add the persistent grant into the list of free grants */
+- for (i = 0; i < s->req.u.rw.nr_segments; i++) {
+- list_add(&s->grants_used[i]->node, &info->persistent_gnts);
+- info->persistent_gnts_c++;
++ for (i = 0; i < nseg; i++) {
++ if (gnttab_query_foreign_access(s->grants_used[i]->gref)) {
++ /*
++ * If the grant is still mapped by the backend (the
++ * backend has chosen to make this grant persistent)
++ * we add it at the head of the list, so it will be
++ * reused first.
++ */
++ if (!info->feature_persistent)
++ pr_alert_ratelimited("backed has not unmapped grant: %u\n",
++ s->grants_used[i]->gref);
++ list_add(&s->grants_used[i]->node, &info->grants);
++ info->persistent_gnts_c++;
++ } else {
++ /*
++ * If the grant is not mapped by the backend we end the
++ * foreign access and add it to the tail of the list,
++ * so it will not be picked again unless we run out of
++ * persistent grants.
++ */
++ gnttab_end_foreign_access(s->grants_used[i]->gref, 0, 0UL);
++ s->grants_used[i]->gref = GRANT_INVALID_REF;
++ list_add_tail(&s->grants_used[i]->node, &info->grants);
++ }
+ }
+ }
+
+@@ -1034,12 +1063,6 @@ static int setup_blkring(struct xenbus_device *dev,
+ for (i = 0; i < BLK_RING_SIZE; i++)
+ sg_init_table(info->shadow[i].sg, BLKIF_MAX_SEGMENTS_PER_REQUEST);
+
+- /* Allocate memory for grants */
+- err = fill_grant_buffer(info, BLK_RING_SIZE *
+- BLKIF_MAX_SEGMENTS_PER_REQUEST);
+- if (err)
+- goto fail;
+-
+ err = xenbus_grant_ring(dev, virt_to_mfn(info->ring.sring));
+ if (err < 0) {
+ free_page((unsigned long)sring);
+@@ -1198,7 +1221,7 @@ static int blkfront_probe(struct xenbus_device *dev,
+ spin_lock_init(&info->io_lock);
+ info->xbdev = dev;
+ info->vdevice = vdevice;
+- INIT_LIST_HEAD(&info->persistent_gnts);
++ INIT_LIST_HEAD(&info->grants);
+ info->persistent_gnts_c = 0;
+ info->connected = BLKIF_STATE_DISCONNECTED;
+ INIT_WORK(&info->work, blkif_restart_queue);
+@@ -1227,7 +1250,8 @@ static int blkif_recover(struct blkfront_info *info)
+ int i;
+ struct blkif_request *req;
+ struct blk_shadow *copy;
+- int j;
++ unsigned int persistent;
++ int j, rc;
+
+ /* Stage 1: Make a safe copy of the shadow state. */
+ copy = kmemdup(info->shadow, sizeof(info->shadow),
+@@ -1242,6 +1266,24 @@ static int blkif_recover(struct blkfront_info *info)
+ info->shadow_free = info->ring.req_prod_pvt;
+ info->shadow[BLK_RING_SIZE-1].req.u.rw.id = 0x0fffffff;
+
++ /* Check if the backend supports persistent grants */
++ rc = xenbus_gather(XBT_NIL, info->xbdev->otherend,
++ "feature-persistent", "%u", &persistent,
++ NULL);
++ if (rc)
++ info->feature_persistent = 0;
++ else
++ info->feature_persistent = persistent;
++
++ /* Allocate memory for grants */
++ rc = fill_grant_buffer(info, BLK_RING_SIZE *
++ BLKIF_MAX_SEGMENTS_PER_REQUEST);
++ if (rc) {
++ xenbus_dev_fatal(info->xbdev, rc, "setting grant buffer failed");
++ kfree(copy);
++ return rc;
++ }
++
+ /* Stage 3: Find pending requests and requeue them. */
+ for (i = 0; i < BLK_RING_SIZE; i++) {
+ /* Not in use? */
+@@ -1306,8 +1348,12 @@ static int blkfront_resume(struct xenbus_device *dev)
+ blkif_free(info, info->connected == BLKIF_STATE_CONNECTED);
+
+ err = talk_to_blkback(dev, info);
+- if (info->connected == BLKIF_STATE_SUSPENDED && !err)
+- err = blkif_recover(info);
++
++ /*
++ * We have to wait for the backend to switch to
++ * connected state, since we want to read which
++ * features it supports.
++ */
+
+ return err;
+ }
+@@ -1411,9 +1457,16 @@ static void blkfront_connect(struct blkfront_info *info)
+ sectors);
+ set_capacity(info->gd, sectors);
+ revalidate_disk(info->gd);
++ return;
+
+- /* fall through */
+ case BLKIF_STATE_SUSPENDED:
++ /*
++ * If we are recovering from suspension, we need to wait
++ * for the backend to announce it's features before
++ * reconnecting, we need to know if the backend supports
++ * persistent grants.
++ */
++ blkif_recover(info);
+ return;
+
+ default:
+@@ -1481,6 +1534,14 @@ static void blkfront_connect(struct blkfront_info *info)
+ else
+ info->feature_persistent = persistent;
+
++ /* Allocate memory for grants */
++ err = fill_grant_buffer(info, BLK_RING_SIZE *
++ BLKIF_MAX_SEGMENTS_PER_REQUEST);
++ if (err) {
++ xenbus_dev_fatal(info->xbdev, err, "setting grant buffer failed");
++ return;
++ }
++
+ err = xlvbd_alloc_gendisk(sectors, info, binfo, sector_size);
+ if (err) {
+ xenbus_dev_fatal(info->xbdev, err, "xlvbd_add at %s",
+diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
+index 0a327f4154a2..2acabdaecec8 100644
+--- a/drivers/bluetooth/ath3k.c
++++ b/drivers/bluetooth/ath3k.c
+@@ -82,6 +82,7 @@ static struct usb_device_id ath3k_table[] = {
+ { USB_DEVICE(0x04CA, 0x3004) },
+ { USB_DEVICE(0x04CA, 0x3005) },
+ { USB_DEVICE(0x04CA, 0x3006) },
++ { USB_DEVICE(0x04CA, 0x3007) },
+ { USB_DEVICE(0x04CA, 0x3008) },
+ { USB_DEVICE(0x13d3, 0x3362) },
+ { USB_DEVICE(0x0CF3, 0xE004) },
+@@ -124,6 +125,7 @@ static struct usb_device_id ath3k_blist_tbl[] = {
+ { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 },
++ { USB_DEVICE(0x04ca, 0x3007), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 58491f1b2799..45aa8e760124 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -146,6 +146,7 @@ static struct usb_device_id blacklist_table[] = {
+ { USB_DEVICE(0x04ca, 0x3004), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3005), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3006), .driver_info = BTUSB_ATH3012 },
++ { USB_DEVICE(0x04ca, 0x3007), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x04ca, 0x3008), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x13d3, 0x3362), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x0cf3, 0xe004), .driver_info = BTUSB_ATH3012 },
+diff --git a/drivers/bus/mvebu-mbus.c b/drivers/bus/mvebu-mbus.c
+index 8740f46b4d0d..5dcc8305abd1 100644
+--- a/drivers/bus/mvebu-mbus.c
++++ b/drivers/bus/mvebu-mbus.c
+@@ -250,12 +250,6 @@ static int mvebu_mbus_window_conflicts(struct mvebu_mbus_state *mbus,
+ */
+ if ((u64)base < wend && end > wbase)
+ return 0;
+-
+- /*
+- * Check if target/attribute conflicts
+- */
+- if (target == wtarget && attr == wattr)
+- return 0;
+ }
+
+ return 1;
+diff --git a/drivers/char/ipmi/ipmi_kcs_sm.c b/drivers/char/ipmi/ipmi_kcs_sm.c
+index e53fc24c6af3..e1ddcf938519 100644
+--- a/drivers/char/ipmi/ipmi_kcs_sm.c
++++ b/drivers/char/ipmi/ipmi_kcs_sm.c
+@@ -251,8 +251,9 @@ static inline int check_obf(struct si_sm_data *kcs, unsigned char status,
+ if (!GET_STATUS_OBF(status)) {
+ kcs->obf_timeout -= time;
+ if (kcs->obf_timeout < 0) {
+- start_error_recovery(kcs, "OBF not ready in time");
+- return 1;
++ kcs->obf_timeout = OBF_RETRY_TIMEOUT;
++ start_error_recovery(kcs, "OBF not ready in time");
++ return 1;
+ }
+ return 0;
+ }
+diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
+index af4b23ffc5a6..40b3f756f904 100644
+--- a/drivers/char/ipmi/ipmi_si_intf.c
++++ b/drivers/char/ipmi/ipmi_si_intf.c
+@@ -244,6 +244,9 @@ struct smi_info {
+ /* The timer for this si. */
+ struct timer_list si_timer;
+
++ /* This flag is set, if the timer is running (timer_pending() isn't enough) */
++ bool timer_running;
++
+ /* The time (in jiffies) the last timeout occurred at.
*/ + unsigned long last_timeout_jiffies; + +@@ -427,6 +430,13 @@ static void start_clear_flags(struct smi_info *smi_info) + smi_info->si_state = SI_CLEARING_FLAGS; + } + ++static void smi_mod_timer(struct smi_info *smi_info, unsigned long new_val) ++{ ++ smi_info->last_timeout_jiffies = jiffies; ++ mod_timer(&smi_info->si_timer, new_val); ++ smi_info->timer_running = true; ++} ++ + /* + * When we have a situtaion where we run out of memory and cannot + * allocate messages, we just leave them in the BMC and run the system +@@ -439,8 +449,7 @@ static inline void disable_si_irq(struct smi_info *smi_info) + start_disable_irq(smi_info); + smi_info->interrupt_disabled = 1; + if (!atomic_read(&smi_info->stop_operation)) +- mod_timer(&smi_info->si_timer, +- jiffies + SI_TIMEOUT_JIFFIES); ++ smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_JIFFIES); + } + } + +@@ -900,15 +909,7 @@ static void sender(void *send_info, + list_add_tail(&msg->link, &smi_info->xmit_msgs); + + if (smi_info->si_state == SI_NORMAL && smi_info->curr_msg == NULL) { +- /* +- * last_timeout_jiffies is updated here to avoid +- * smi_timeout() handler passing very large time_diff +- * value to smi_event_handler() that causes +- * the send command to abort. +- */ +- smi_info->last_timeout_jiffies = jiffies; +- +- mod_timer(&smi_info->si_timer, jiffies + SI_TIMEOUT_JIFFIES); ++ smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_JIFFIES); + + if (smi_info->thread) + wake_up_process(smi_info->thread); +@@ -997,6 +998,17 @@ static int ipmi_thread(void *data) + + spin_lock_irqsave(&(smi_info->si_lock), flags); + smi_result = smi_event_handler(smi_info, 0); ++ ++ /* ++ * If the driver is doing something, there is a possible ++ * race with the timer. If the timer handler see idle, ++ * and the thread here sees something else, the timer ++ * handler won't restart the timer even though it is ++ * required. So start it here if necessary. 
++ */ ++ if (smi_result != SI_SM_IDLE && !smi_info->timer_running) ++ smi_mod_timer(smi_info, jiffies + SI_TIMEOUT_JIFFIES); ++ + spin_unlock_irqrestore(&(smi_info->si_lock), flags); + busy_wait = ipmi_thread_busy_wait(smi_result, smi_info, + &busy_until); +@@ -1066,10 +1078,6 @@ static void smi_timeout(unsigned long data) + * SI_USEC_PER_JIFFY); + smi_result = smi_event_handler(smi_info, time_diff); + +- spin_unlock_irqrestore(&(smi_info->si_lock), flags); +- +- smi_info->last_timeout_jiffies = jiffies_now; +- + if ((smi_info->irq) && (!smi_info->interrupt_disabled)) { + /* Running with interrupts, only do long timeouts. */ + timeout = jiffies + SI_TIMEOUT_JIFFIES; +@@ -1091,7 +1099,10 @@ static void smi_timeout(unsigned long data) + + do_mod_timer: + if (smi_result != SI_SM_IDLE) +- mod_timer(&(smi_info->si_timer), timeout); ++ smi_mod_timer(smi_info, timeout); ++ else ++ smi_info->timer_running = false; ++ spin_unlock_irqrestore(&(smi_info->si_lock), flags); + } + + static irqreturn_t si_irq_handler(int irq, void *data) +@@ -1139,8 +1150,7 @@ static int smi_start_processing(void *send_info, + + /* Set up the timer that drives the interface. */ + setup_timer(&new_smi->si_timer, smi_timeout, (long)new_smi); +- new_smi->last_timeout_jiffies = jiffies; +- mod_timer(&new_smi->si_timer, jiffies + SI_TIMEOUT_JIFFIES); ++ smi_mod_timer(new_smi, jiffies + SI_TIMEOUT_JIFFIES); + + /* + * Check if the user forcefully enabled the daemon. 
+diff --git a/drivers/clk/versatile/clk-vexpress-osc.c b/drivers/clk/versatile/clk-vexpress-osc.c +index 256c8be74df8..8b8798bb93f3 100644 +--- a/drivers/clk/versatile/clk-vexpress-osc.c ++++ b/drivers/clk/versatile/clk-vexpress-osc.c +@@ -102,7 +102,7 @@ void __init vexpress_osc_of_setup(struct device_node *node) + + osc = kzalloc(sizeof(*osc), GFP_KERNEL); + if (!osc) +- goto error; ++ return; + + osc->func = vexpress_config_func_get_by_node(node); + if (!osc->func) { +diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c +index 662fcc065821..b7960185919d 100644 +--- a/drivers/clocksource/exynos_mct.c ++++ b/drivers/clocksource/exynos_mct.c +@@ -429,8 +429,6 @@ static int __cpuinit exynos4_local_timer_setup(struct clock_event_device *evt) + evt->set_mode = exynos4_tick_set_mode; + evt->features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT; + evt->rating = 450; +- clockevents_config_and_register(evt, clk_rate / (TICK_BASE_CNT + 1), +- 0xf, 0x7fffffff); + + exynos4_mct_write(TICK_BASE_CNT, mevt->base + MCT_L_TCNTB_OFFSET); + +@@ -448,6 +446,8 @@ static int __cpuinit exynos4_local_timer_setup(struct clock_event_device *evt) + } else { + enable_percpu_irq(mct_irqs[MCT_L0_IRQ], 0); + } ++ clockevents_config_and_register(evt, clk_rate / (TICK_BASE_CNT + 1), ++ 0xf, 0x7fffffff); + + return 0; + } +diff --git a/drivers/crypto/caam/error.c b/drivers/crypto/caam/error.c +index 9f25f5296029..0eabd81e1a90 100644 +--- a/drivers/crypto/caam/error.c ++++ b/drivers/crypto/caam/error.c +@@ -16,9 +16,13 @@ + char *tmp; \ + \ + tmp = kmalloc(sizeof(format) + max_alloc, GFP_ATOMIC); \ +- sprintf(tmp, format, param); \ +- strcat(str, tmp); \ +- kfree(tmp); \ ++ if (likely(tmp)) { \ ++ sprintf(tmp, format, param); \ ++ strcat(str, tmp); \ ++ kfree(tmp); \ ++ } else { \ ++ strcat(str, "kmalloc failure in SPRINTFCAT"); \ ++ } \ + } + + static void report_jump_idx(u32 status, char *outstr) +diff --git a/drivers/gpu/drm/i915/intel_display.c 
b/drivers/gpu/drm/i915/intel_display.c +index 54ae96f7bec6..8814b0dbfc4f 100644 +--- a/drivers/gpu/drm/i915/intel_display.c ++++ b/drivers/gpu/drm/i915/intel_display.c +@@ -9123,15 +9123,6 @@ void intel_modeset_init(struct drm_device *dev) + intel_disable_fbc(dev); + } + +-static void +-intel_connector_break_all_links(struct intel_connector *connector) +-{ +- connector->base.dpms = DRM_MODE_DPMS_OFF; +- connector->base.encoder = NULL; +- connector->encoder->connectors_active = false; +- connector->encoder->base.crtc = NULL; +-} +- + static void intel_enable_pipe_a(struct drm_device *dev) + { + struct intel_connector *connector; +@@ -9213,8 +9204,17 @@ static void intel_sanitize_crtc(struct intel_crtc *crtc) + if (connector->encoder->base.crtc != &crtc->base) + continue; + +- intel_connector_break_all_links(connector); ++ connector->base.dpms = DRM_MODE_DPMS_OFF; ++ connector->base.encoder = NULL; + } ++ /* multiple connectors may have the same encoder: ++ * handle them and break crtc link separately */ ++ list_for_each_entry(connector, &dev->mode_config.connector_list, ++ base.head) ++ if (connector->encoder->base.crtc == &crtc->base) { ++ connector->encoder->base.crtc = NULL; ++ connector->encoder->connectors_active = false; ++ } + + WARN_ON(crtc->active); + crtc->base.enabled = false; +@@ -9285,6 +9285,8 @@ static void intel_sanitize_encoder(struct intel_encoder *encoder) + drm_get_encoder_name(&encoder->base)); + encoder->disable(encoder); + } ++ encoder->base.crtc = NULL; ++ encoder->connectors_active = false; + + /* Inconsistent output/port/pipe state happens presumably due to + * a bug in one of the get_hw_state functions. 
Or someplace else +@@ -9295,8 +9297,8 @@ static void intel_sanitize_encoder(struct intel_encoder *encoder) + base.head) { + if (connector->encoder != encoder) + continue; +- +- intel_connector_break_all_links(connector); ++ connector->base.dpms = DRM_MODE_DPMS_OFF; ++ connector->base.encoder = NULL; + } + } + /* Enabled encoders without active connectors will be fixed in +diff --git a/drivers/gpu/drm/nouveau/core/subdev/therm/fan.c b/drivers/gpu/drm/nouveau/core/subdev/therm/fan.c +index c728380d3d62..ea19acd20784 100644 +--- a/drivers/gpu/drm/nouveau/core/subdev/therm/fan.c ++++ b/drivers/gpu/drm/nouveau/core/subdev/therm/fan.c +@@ -54,8 +54,10 @@ nouveau_fan_update(struct nouveau_fan *fan, bool immediate, int target) + + /* check that we're not already at the target duty cycle */ + duty = fan->get(therm); +- if (duty == target) +- goto done; ++ if (duty == target) { ++ spin_unlock_irqrestore(&fan->lock, flags); ++ return 0; ++ } + + /* smooth out the fanspeed increase/decrease */ + if (!immediate && duty >= 0) { +@@ -73,8 +75,15 @@ nouveau_fan_update(struct nouveau_fan *fan, bool immediate, int target) + + nv_debug(therm, "FAN update: %d\n", duty); + ret = fan->set(therm, duty); +- if (ret) +- goto done; ++ if (ret) { ++ spin_unlock_irqrestore(&fan->lock, flags); ++ return ret; ++ } ++ ++ /* fan speed updated, drop the fan lock before grabbing the ++ * alarm-scheduling lock and risking a deadlock ++ */ ++ spin_unlock_irqrestore(&fan->lock, flags); + + /* schedule next fan update, if not at target speed already */ + if (list_empty(&fan->alarm.head) && target != duty) { +@@ -92,8 +101,6 @@ nouveau_fan_update(struct nouveau_fan *fan, bool immediate, int target) + ptimer->alarm(ptimer, delay * 1000 * 1000, &fan->alarm); + } + +-done: +- spin_unlock_irqrestore(&fan->lock, flags); + return ret; + } + +diff --git a/drivers/gpu/drm/nouveau/nouveau_acpi.c b/drivers/gpu/drm/nouveau/nouveau_acpi.c +index d97f20069d3e..5cec3a0c6c85 100644 +--- 
a/drivers/gpu/drm/nouveau/nouveau_acpi.c ++++ b/drivers/gpu/drm/nouveau/nouveau_acpi.c +@@ -372,9 +372,6 @@ bool nouveau_acpi_rom_supported(struct pci_dev *pdev) + acpi_status status; + acpi_handle dhandle, rom_handle; + +- if (!nouveau_dsm_priv.dsm_detected && !nouveau_dsm_priv.optimus_detected) +- return false; +- + dhandle = DEVICE_ACPI_HANDLE(&pdev->dev); + if (!dhandle) + return false; +diff --git a/drivers/gpu/drm/radeon/radeon_atpx_handler.c b/drivers/gpu/drm/radeon/radeon_atpx_handler.c +index cbb06d7c89b5..8c44ef57864b 100644 +--- a/drivers/gpu/drm/radeon/radeon_atpx_handler.c ++++ b/drivers/gpu/drm/radeon/radeon_atpx_handler.c +@@ -523,6 +523,13 @@ static bool radeon_atpx_detect(void) + has_atpx |= (radeon_atpx_pci_probe_handle(pdev) == true); + } + ++ /* some newer PX laptops mark the dGPU as a non-VGA display device */ ++ while ((pdev = pci_get_class(PCI_CLASS_DISPLAY_OTHER << 8, pdev)) != NULL) { ++ vga_count++; ++ ++ has_atpx |= (radeon_atpx_pci_probe_handle(pdev) == true); ++ } ++ + if (has_atpx && vga_count == 2) { + acpi_get_name(radeon_atpx_priv.atpx.handle, ACPI_FULL_PATHNAME, &buffer); + printk(KERN_INFO "VGA switcheroo: detected switching method %s handle\n", +diff --git a/drivers/gpu/drm/radeon/radeon_uvd.c b/drivers/gpu/drm/radeon/radeon_uvd.c +index 21d2d5280fc1..5715429279fb 100644 +--- a/drivers/gpu/drm/radeon/radeon_uvd.c ++++ b/drivers/gpu/drm/radeon/radeon_uvd.c +@@ -449,6 +449,10 @@ static int radeon_uvd_cs_reloc(struct radeon_cs_parser *p, + cmd = radeon_get_ib_value(p, p->idx) >> 1; + + if (cmd < 0x4) { ++ if (end <= start) { ++ DRM_ERROR("invalid reloc offset %X!\n", offset); ++ return -EINVAL; ++ } + if ((end - start) < buf_sizes[cmd]) { + DRM_ERROR("buffer to small (%d / %d)!\n", + (unsigned)(end - start), buf_sizes[cmd]); +diff --git a/drivers/gpu/host1x/hw/intr_hw.c b/drivers/gpu/host1x/hw/intr_hw.c +index b592eef1efcb..b083509325e4 100644 +--- a/drivers/gpu/host1x/hw/intr_hw.c ++++ b/drivers/gpu/host1x/hw/intr_hw.c +@@ -48,7 
+48,7 @@ static irqreturn_t syncpt_thresh_isr(int irq, void *dev_id) + unsigned long reg; + int i, id; + +- for (i = 0; i <= BIT_WORD(host->info->nb_pts); i++) { ++ for (i = 0; i < DIV_ROUND_UP(host->info->nb_pts, 32); i++) { + reg = host1x_sync_readl(host, + HOST1X_SYNC_SYNCPT_THRESH_CPU0_INT_STATUS(i)); + for_each_set_bit(id, ®, BITS_PER_LONG) { +@@ -65,7 +65,7 @@ static void _host1x_intr_disable_all_syncpt_intrs(struct host1x *host) + { + u32 i; + +- for (i = 0; i <= BIT_WORD(host->info->nb_pts); ++i) { ++ for (i = 0; i < DIV_ROUND_UP(host->info->nb_pts, 32); ++i) { + host1x_sync_writel(host, 0xffffffffu, + HOST1X_SYNC_SYNCPT_THRESH_INT_DISABLE(i)); + host1x_sync_writel(host, 0xffffffffu, +diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c +index d4fac934b220..fd02cb79a99c 100644 +--- a/drivers/hv/connection.c ++++ b/drivers/hv/connection.c +@@ -55,6 +55,9 @@ static __u32 vmbus_get_next_version(__u32 current_version) + case (VERSION_WIN8): + return VERSION_WIN7; + ++ case (VERSION_WIN8_1): ++ return VERSION_WIN8; ++ + case (VERSION_WS2008): + default: + return VERSION_INVAL; +@@ -80,6 +83,9 @@ static int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, + (void *)((unsigned long)vmbus_connection.monitor_pages + + PAGE_SIZE)); + ++ if (version == VERSION_WIN8_1) ++ msg->target_vcpu = hv_context.vp_index[smp_processor_id()]; ++ + /* + * Add to list before we send the request since we may + * receive the response before returning from this routine +diff --git a/drivers/hwmon/emc1403.c b/drivers/hwmon/emc1403.c +index 142e1cb8dea7..361f50b221bd 100644 +--- a/drivers/hwmon/emc1403.c ++++ b/drivers/hwmon/emc1403.c +@@ -162,7 +162,7 @@ static ssize_t store_hyst(struct device *dev, + if (retval < 0) + goto fail; + +- hyst = val - retval * 1000; ++ hyst = retval * 1000 - val; + hyst = DIV_ROUND_CLOSEST(hyst, 1000); + if (hyst < 0 || hyst > 255) { + retval = -ERANGE; +@@ -295,7 +295,7 @@ static int emc1403_detect(struct i2c_client *client, + } + + 
id = i2c_smbus_read_byte_data(client, THERMAL_REVISION_REG); +- if (id != 0x01) ++ if (id < 0x01 || id > 0x04) + return -ENODEV; + + return 0; +diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig +index 49423e913459..d4fe13ee543e 100644 +--- a/drivers/i2c/busses/Kconfig ++++ b/drivers/i2c/busses/Kconfig +@@ -109,6 +109,8 @@ config I2C_I801 + Avoton (SOC) + Wellsburg (PCH) + Coleto Creek (PCH) ++ Wildcat Point-LP (PCH) ++ BayTrail (SOC) + + This driver can also be built as a module. If so, the module + will be called i2c-i801. +diff --git a/drivers/i2c/busses/i2c-designware-core.c b/drivers/i2c/busses/i2c-designware-core.c +index c41ca6354fc5..f24a7385260a 100644 +--- a/drivers/i2c/busses/i2c-designware-core.c ++++ b/drivers/i2c/busses/i2c-designware-core.c +@@ -380,6 +380,9 @@ static void i2c_dw_xfer_init(struct dw_i2c_dev *dev) + ic_con &= ~DW_IC_CON_10BITADDR_MASTER; + dw_writel(dev, ic_con, DW_IC_CON); + ++ /* enforce disabled interrupts (due to HW issues) */ ++ i2c_dw_disable_int(dev); ++ + /* Enable the adapter */ + __i2c_dw_enable(dev, true); + +diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c +index 4ebceed6bc66..783fa75e13ae 100644 +--- a/drivers/i2c/busses/i2c-i801.c ++++ b/drivers/i2c/busses/i2c-i801.c +@@ -59,6 +59,8 @@ + Wellsburg (PCH) MS 0x8d7e 32 hard yes yes yes + Wellsburg (PCH) MS 0x8d7f 32 hard yes yes yes + Coleto Creek (PCH) 0x23b0 32 hard yes yes yes ++ Wildcat Point-LP (PCH) 0x9ca2 32 hard yes yes yes ++ BayTrail (SOC) 0x0f12 32 hard yes yes yes + + Features supported by this driver: + Software PEC no +@@ -161,6 +163,7 @@ + STATUS_ERROR_FLAGS) + + /* Older devices have their ID defined in */ ++#define PCI_DEVICE_ID_INTEL_BAYTRAIL_SMBUS 0x0f12 + #define PCI_DEVICE_ID_INTEL_COUGARPOINT_SMBUS 0x1c22 + #define PCI_DEVICE_ID_INTEL_PATSBURG_SMBUS 0x1d22 + /* Patsburg also has three 'Integrated Device Function' SMBus controllers */ +@@ -178,6 +181,7 @@ + #define PCI_DEVICE_ID_INTEL_WELLSBURG_SMBUS_MS1 
0x8d7e + #define PCI_DEVICE_ID_INTEL_WELLSBURG_SMBUS_MS2 0x8d7f + #define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_SMBUS 0x9c22 ++#define PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_SMBUS 0x9ca2 + + struct i801_mux_config { + char *gpio_chip; +@@ -820,6 +824,8 @@ static DEFINE_PCI_DEVICE_TABLE(i801_ids) = { + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_WELLSBURG_SMBUS_MS1) }, + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_WELLSBURG_SMBUS_MS2) }, + { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_COLETOCREEK_SMBUS) }, ++ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_SMBUS) }, ++ { PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BAYTRAIL_SMBUS) }, + { 0, } + }; + +diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c +index 4ba4a95b6b26..8a806f5c40cf 100644 +--- a/drivers/i2c/busses/i2c-rcar.c ++++ b/drivers/i2c/busses/i2c-rcar.c +@@ -541,6 +541,12 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap, + + ret = -EINVAL; + for (i = 0; i < num; i++) { ++ /* This HW can't send STOP after address phase */ ++ if (msgs[i].len == 0) { ++ ret = -EOPNOTSUPP; ++ break; ++ } ++ + /*-------------- spin lock -----------------*/ + spin_lock_irqsave(&priv->lock, flags); + +@@ -605,7 +611,8 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap, + + static u32 rcar_i2c_func(struct i2c_adapter *adap) + { +- return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL; ++ /* This HW can't do SMBUS_QUICK and NOSTART */ ++ return I2C_FUNC_I2C | (I2C_FUNC_SMBUS_EMUL & ~I2C_FUNC_SMBUS_QUICK); + } + + static const struct i2c_algorithm rcar_i2c_algo = { +diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c +index cab1c91b75a3..a72aad9561b0 100644 +--- a/drivers/i2c/busses/i2c-s3c2410.c ++++ b/drivers/i2c/busses/i2c-s3c2410.c +@@ -1204,10 +1204,10 @@ static int s3c24xx_i2c_resume(struct device *dev) + struct platform_device *pdev = to_platform_device(dev); + struct s3c24xx_i2c *i2c = platform_get_drvdata(pdev); + 
+- i2c->suspended = 0; + clk_prepare_enable(i2c->clk); + s3c24xx_i2c_init(i2c); + clk_disable_unprepare(i2c->clk); ++ i2c->suspended = 0; + + return 0; + } +diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c +index fe4c61e219f3..111ac381b40b 100644 +--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c ++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c +@@ -660,6 +660,7 @@ static int inv_mpu_probe(struct i2c_client *client, + { + struct inv_mpu6050_state *st; + struct iio_dev *indio_dev; ++ struct inv_mpu6050_platform_data *pdata; + int result; + + if (!i2c_check_functionality(client->adapter, +@@ -675,8 +676,10 @@ static int inv_mpu_probe(struct i2c_client *client, + } + st = iio_priv(indio_dev); + st->client = client; +- st->plat_data = *(struct inv_mpu6050_platform_data +- *)dev_get_platdata(&client->dev); ++ pdata = (struct inv_mpu6050_platform_data ++ *)dev_get_platdata(&client->dev); ++ if (pdata) ++ st->plat_data = *pdata; + /* power is turned on inside check chip type*/ + result = inv_check_and_setup_chip(st, id); + if (result) +diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c +index ce6c603a3cc9..988e29d18bb4 100644 +--- a/drivers/infiniband/ulp/isert/ib_isert.c ++++ b/drivers/infiniband/ulp/isert/ib_isert.c +@@ -27,6 +27,7 @@ + #include + #include + #include ++#include + + #include "isert_proto.h" + #include "ib_isert.h" +@@ -459,11 +460,11 @@ isert_connect_request(struct rdma_cm_id *cma_id, struct rdma_cm_event *event) + goto out_conn_dev; + + mutex_lock(&isert_np->np_accept_mutex); +- list_add_tail(&isert_np->np_accept_list, &isert_conn->conn_accept_node); ++ list_add_tail(&isert_conn->conn_accept_node, &isert_np->np_accept_list); + mutex_unlock(&isert_np->np_accept_mutex); + +- pr_debug("isert_connect_request() waking up np_accept_wq: %p\n", np); +- wake_up(&isert_np->np_accept_wq); ++ pr_debug("isert_connect_request() up np_sem np: %p\n", np); ++ 
up(&isert_np->np_sem); + return 0; + + out_conn_dev: +@@ -2042,7 +2043,7 @@ isert_setup_np(struct iscsi_np *np, + pr_err("Unable to allocate struct isert_np\n"); + return -ENOMEM; + } +- init_waitqueue_head(&isert_np->np_accept_wq); ++ sema_init(&isert_np->np_sem, 0); + mutex_init(&isert_np->np_accept_mutex); + INIT_LIST_HEAD(&isert_np->np_accept_list); + init_completion(&isert_np->np_login_comp); +@@ -2091,18 +2092,6 @@ out: + } + + static int +-isert_check_accept_queue(struct isert_np *isert_np) +-{ +- int empty; +- +- mutex_lock(&isert_np->np_accept_mutex); +- empty = list_empty(&isert_np->np_accept_list); +- mutex_unlock(&isert_np->np_accept_mutex); +- +- return empty; +-} +- +-static int + isert_rdma_accept(struct isert_conn *isert_conn) + { + struct rdma_cm_id *cm_id = isert_conn->conn_cm_id; +@@ -2186,16 +2175,14 @@ isert_accept_np(struct iscsi_np *np, struct iscsi_conn *conn) + int max_accept = 0, ret; + + accept_wait: +- ret = wait_event_interruptible(isert_np->np_accept_wq, +- !isert_check_accept_queue(isert_np) || +- np->np_thread_state == ISCSI_NP_THREAD_RESET); ++ ret = down_interruptible(&isert_np->np_sem); + if (max_accept > 5) + return -ENODEV; + + spin_lock_bh(&np->np_thread_lock); + if (np->np_thread_state == ISCSI_NP_THREAD_RESET) { + spin_unlock_bh(&np->np_thread_lock); +- pr_err("ISCSI_NP_THREAD_RESET for isert_accept_np\n"); ++ pr_debug("ISCSI_NP_THREAD_RESET for isert_accept_np\n"); + return -ENODEV; + } + spin_unlock_bh(&np->np_thread_lock); +diff --git a/drivers/infiniband/ulp/isert/ib_isert.h b/drivers/infiniband/ulp/isert/ib_isert.h +index b9d6cc6917cf..dfe4a2ebef0d 100644 +--- a/drivers/infiniband/ulp/isert/ib_isert.h ++++ b/drivers/infiniband/ulp/isert/ib_isert.h +@@ -131,7 +131,7 @@ struct isert_device { + }; + + struct isert_np { +- wait_queue_head_t np_accept_wq; ++ struct semaphore np_sem; + struct rdma_cm_id *np_cm_id; + struct mutex np_accept_mutex; + struct list_head np_accept_list; +diff --git a/drivers/input/keyboard/atkbd.c 
b/drivers/input/keyboard/atkbd.c +index 2626773ff29b..2dd1d0dd4f7d 100644 +--- a/drivers/input/keyboard/atkbd.c ++++ b/drivers/input/keyboard/atkbd.c +@@ -243,6 +243,12 @@ static void (*atkbd_platform_fixup)(struct atkbd *, const void *data); + static void *atkbd_platform_fixup_data; + static unsigned int (*atkbd_platform_scancode_fixup)(struct atkbd *, unsigned int); + ++/* ++ * Certain keyboards to not like ATKBD_CMD_RESET_DIS and stop responding ++ * to many commands until full reset (ATKBD_CMD_RESET_BAT) is performed. ++ */ ++static bool atkbd_skip_deactivate; ++ + static ssize_t atkbd_attr_show_helper(struct device *dev, char *buf, + ssize_t (*handler)(struct atkbd *, char *)); + static ssize_t atkbd_attr_set_helper(struct device *dev, const char *buf, size_t count, +@@ -768,7 +774,8 @@ static int atkbd_probe(struct atkbd *atkbd) + * Make sure nothing is coming from the keyboard and disturbs our + * internal state. + */ +- atkbd_deactivate(atkbd); ++ if (!atkbd_skip_deactivate) ++ atkbd_deactivate(atkbd); + + return 0; + } +@@ -1638,6 +1645,12 @@ static int __init atkbd_setup_scancode_fixup(const struct dmi_system_id *id) + return 1; + } + ++static int __init atkbd_deactivate_fixup(const struct dmi_system_id *id) ++{ ++ atkbd_skip_deactivate = true; ++ return 1; ++} ++ + static const struct dmi_system_id atkbd_dmi_quirk_table[] __initconst = { + { + .matches = { +@@ -1775,6 +1788,20 @@ static const struct dmi_system_id atkbd_dmi_quirk_table[] __initconst = { + .callback = atkbd_setup_scancode_fixup, + .driver_data = atkbd_oqo_01plus_scancode_fixup, + }, ++ { ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "LW25-B7HV"), ++ }, ++ .callback = atkbd_deactivate_fixup, ++ }, ++ { ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "P1-J273B"), ++ }, ++ .callback = atkbd_deactivate_fixup, ++ }, + { } + }; + +diff --git a/drivers/input/mouse/elantech.c 
b/drivers/input/mouse/elantech.c +index 1fb1a7b5a754..76f1d37ac0ff 100644 +--- a/drivers/input/mouse/elantech.c ++++ b/drivers/input/mouse/elantech.c +@@ -11,6 +11,7 @@ + */ + + #include ++#include + #include + #include + #include +@@ -801,7 +802,11 @@ static int elantech_set_absolute_mode(struct psmouse *psmouse) + break; + + case 3: +- etd->reg_10 = 0x0b; ++ if (etd->set_hw_resolution) ++ etd->reg_10 = 0x0b; ++ else ++ etd->reg_10 = 0x03; ++ + if (elantech_write_reg(psmouse, 0x10, etd->reg_10)) + rc = -1; + +@@ -1301,6 +1306,22 @@ static int elantech_reconnect(struct psmouse *psmouse) + } + + /* ++ * Some hw_version 3 models go into error state when we try to set bit 3 of r10 ++ */ ++static const struct dmi_system_id no_hw_res_dmi_table[] = { ++#if defined(CONFIG_DMI) && defined(CONFIG_X86) ++ { ++ /* Gigabyte U2442 */ ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "GIGABYTE"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "U2442"), ++ }, ++ }, ++#endif ++ { } ++}; ++ ++/* + * determine hardware version and set some properties according to it. + */ + static int elantech_set_properties(struct elantech_data *etd) +@@ -1351,6 +1372,9 @@ static int elantech_set_properties(struct elantech_data *etd) + etd->reports_pressure = true; + } + ++ /* Enable real hardware resolution on hw_version 3 ? 
*/ ++ etd->set_hw_resolution = !dmi_check_system(no_hw_res_dmi_table); ++ + return 0; + } + +diff --git a/drivers/input/mouse/elantech.h b/drivers/input/mouse/elantech.h +index 46db3be45ac9..c1c15ab6872d 100644 +--- a/drivers/input/mouse/elantech.h ++++ b/drivers/input/mouse/elantech.h +@@ -129,6 +129,7 @@ struct elantech_data { + bool paritycheck; + bool jumpy_cursor; + bool reports_pressure; ++ bool set_hw_resolution; + unsigned char hw_version; + unsigned int fw_version; + unsigned int single_finger_reports; +diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c +index d60c9b7ad1b8..f36f7b88f260 100644 +--- a/drivers/input/mouse/synaptics.c ++++ b/drivers/input/mouse/synaptics.c +@@ -1552,7 +1552,7 @@ static const struct dmi_system_id min_max_dmi_table[] __initconst = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), + DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T540"), + }, +- .driver_data = (int []){1024, 5056, 2058, 4832}, ++ .driver_data = (int []){1024, 5112, 2024, 4832}, + }, + { + /* Lenovo ThinkPad L540 */ +@@ -1563,6 +1563,14 @@ static const struct dmi_system_id min_max_dmi_table[] __initconst = { + .driver_data = (int []){1024, 5112, 2024, 4832}, + }, + { ++ /* Lenovo ThinkPad W540 */ ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), ++ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad W540"), ++ }, ++ .driver_data = (int []){1024, 5112, 2024, 4832}, ++ }, ++ { + /* Lenovo Yoga S1 */ + .matches = { + DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"), +diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c +index a3c338942f10..6f849cbcac6f 100644 +--- a/drivers/iommu/amd_iommu.c ++++ b/drivers/iommu/amd_iommu.c +@@ -3959,7 +3959,7 @@ static struct irq_remap_table *get_irq_table(u16 devid, bool ioapic) + iommu_flush_dte(iommu, devid); + if (devid != alias) { + irq_lookup_table[alias] = table; +- set_dte_irq_entry(devid, table); ++ set_dte_irq_entry(alias, table); + iommu_flush_dte(iommu, alias); + } + +diff --git a/drivers/irqchip/irq-gic.c 
b/drivers/irqchip/irq-gic.c +index 19ceaa60e0f4..4e11218d644e 100644 +--- a/drivers/irqchip/irq-gic.c ++++ b/drivers/irqchip/irq-gic.c +@@ -246,10 +246,14 @@ static int gic_set_affinity(struct irq_data *d, const struct cpumask *mask_val, + bool force) + { + void __iomem *reg = gic_dist_base(d) + GIC_DIST_TARGET + (gic_irq(d) & ~3); +- unsigned int shift = (gic_irq(d) % 4) * 8; +- unsigned int cpu = cpumask_any_and(mask_val, cpu_online_mask); ++ unsigned int cpu, shift = (gic_irq(d) % 4) * 8; + u32 val, mask, bit; + ++ if (!force) ++ cpu = cpumask_any_and(mask_val, cpu_online_mask); ++ else ++ cpu = cpumask_first(mask_val); ++ + if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids) + return -EINVAL; + +diff --git a/drivers/leds/leds-pwm.c b/drivers/leds/leds-pwm.c +index faf52c005e8c..5d64b2431415 100644 +--- a/drivers/leds/leds-pwm.c ++++ b/drivers/leds/leds-pwm.c +@@ -82,6 +82,15 @@ static inline size_t sizeof_pwm_leds_priv(int num_leds) + (sizeof(struct led_pwm_data) * num_leds); + } + ++static void led_pwm_cleanup(struct led_pwm_priv *priv) ++{ ++ while (priv->num_leds--) { ++ led_classdev_unregister(&priv->leds[priv->num_leds].cdev); ++ if (priv->leds[priv->num_leds].can_sleep) ++ cancel_work_sync(&priv->leds[priv->num_leds].work); ++ } ++} ++ + static struct led_pwm_priv *led_pwm_create_of(struct platform_device *pdev) + { + struct device_node *node = pdev->dev.of_node; +@@ -139,8 +148,7 @@ static struct led_pwm_priv *led_pwm_create_of(struct platform_device *pdev) + + return priv; + err: +- while (priv->num_leds--) +- led_classdev_unregister(&priv->leds[priv->num_leds].cdev); ++ led_pwm_cleanup(priv); + + return NULL; + } +@@ -200,8 +208,8 @@ static int led_pwm_probe(struct platform_device *pdev) + return 0; + + err: +- while (i--) +- led_classdev_unregister(&priv->leds[i].cdev); ++ priv->num_leds = i; ++ led_pwm_cleanup(priv); + + return ret; + } +@@ -209,13 +217,8 @@ err: + static int led_pwm_remove(struct platform_device *pdev) + { + struct led_pwm_priv *priv = 
platform_get_drvdata(pdev); +- int i; + +- for (i = 0; i < priv->num_leds; i++) { +- led_classdev_unregister(&priv->leds[i].cdev); +- if (priv->leds[i].can_sleep) +- cancel_work_sync(&priv->leds[i].work); +- } ++ led_pwm_cleanup(priv); + + return 0; + } +diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c +index 6d2d41ae9e32..5177ba54559b 100644 +--- a/drivers/md/dm-crypt.c ++++ b/drivers/md/dm-crypt.c +@@ -18,7 +18,6 @@ + #include + #include + #include +-#include + #include + #include + #include +@@ -44,6 +43,7 @@ struct convert_context { + unsigned int idx_out; + sector_t cc_sector; + atomic_t cc_pending; ++ struct ablkcipher_request *req; + }; + + /* +@@ -105,15 +105,7 @@ struct iv_lmk_private { + enum flags { DM_CRYPT_SUSPENDED, DM_CRYPT_KEY_VALID }; + + /* +- * Duplicated per-CPU state for cipher. +- */ +-struct crypt_cpu { +- struct ablkcipher_request *req; +-}; +- +-/* +- * The fields in here must be read only after initialization, +- * changing state should be in crypt_cpu. ++ * The fields in here must be read only after initialization. + */ + struct crypt_config { + struct dm_dev *dev; +@@ -143,12 +135,6 @@ struct crypt_config { + sector_t iv_offset; + unsigned int iv_size; + +- /* +- * Duplicated per cpu state. Access through +- * per_cpu_ptr() only. +- */ +- struct crypt_cpu __percpu *cpu; +- + /* ESSIV: struct crypto_cipher *essiv_tfm */ + void *iv_private; + struct crypto_ablkcipher **tfms; +@@ -184,11 +170,6 @@ static void clone_init(struct dm_crypt_io *, struct bio *); + static void kcryptd_queue_crypt(struct dm_crypt_io *io); + static u8 *iv_of_dmreq(struct crypt_config *cc, struct dm_crypt_request *dmreq); + +-static struct crypt_cpu *this_crypt_config(struct crypt_config *cc) +-{ +- return this_cpu_ptr(cc->cpu); +-} +- + /* + * Use this to access cipher attributes that are the same for each CPU. 
+ */ +@@ -738,16 +719,15 @@ static void kcryptd_async_done(struct crypto_async_request *async_req, + static void crypt_alloc_req(struct crypt_config *cc, + struct convert_context *ctx) + { +- struct crypt_cpu *this_cc = this_crypt_config(cc); + unsigned key_index = ctx->cc_sector & (cc->tfms_count - 1); + +- if (!this_cc->req) +- this_cc->req = mempool_alloc(cc->req_pool, GFP_NOIO); ++ if (!ctx->req) ++ ctx->req = mempool_alloc(cc->req_pool, GFP_NOIO); + +- ablkcipher_request_set_tfm(this_cc->req, cc->tfms[key_index]); +- ablkcipher_request_set_callback(this_cc->req, ++ ablkcipher_request_set_tfm(ctx->req, cc->tfms[key_index]); ++ ablkcipher_request_set_callback(ctx->req, + CRYPTO_TFM_REQ_MAY_BACKLOG | CRYPTO_TFM_REQ_MAY_SLEEP, +- kcryptd_async_done, dmreq_of_req(cc, this_cc->req)); ++ kcryptd_async_done, dmreq_of_req(cc, ctx->req)); + } + + /* +@@ -756,7 +736,6 @@ static void crypt_alloc_req(struct crypt_config *cc, + static int crypt_convert(struct crypt_config *cc, + struct convert_context *ctx) + { +- struct crypt_cpu *this_cc = this_crypt_config(cc); + int r; + + atomic_set(&ctx->cc_pending, 1); +@@ -768,7 +747,7 @@ static int crypt_convert(struct crypt_config *cc, + + atomic_inc(&ctx->cc_pending); + +- r = crypt_convert_block(cc, ctx, this_cc->req); ++ r = crypt_convert_block(cc, ctx, ctx->req); + + switch (r) { + /* async */ +@@ -777,7 +756,7 @@ static int crypt_convert(struct crypt_config *cc, + INIT_COMPLETION(ctx->restart); + /* fall through*/ + case -EINPROGRESS: +- this_cc->req = NULL; ++ ctx->req = NULL; + ctx->cc_sector++; + continue; + +@@ -876,6 +855,7 @@ static struct dm_crypt_io *crypt_io_alloc(struct crypt_config *cc, + io->sector = sector; + io->error = 0; + io->base_io = NULL; ++ io->ctx.req = NULL; + atomic_set(&io->io_pending, 0); + + return io; +@@ -901,6 +881,8 @@ static void crypt_dec_pending(struct dm_crypt_io *io) + if (!atomic_dec_and_test(&io->io_pending)) + return; + ++ if (io->ctx.req) ++ mempool_free(io->ctx.req, cc->req_pool); + 
mempool_free(io, cc->io_pool); + + if (likely(!base_io)) +@@ -1326,8 +1308,6 @@ static int crypt_wipe_key(struct crypt_config *cc) + static void crypt_dtr(struct dm_target *ti) + { + struct crypt_config *cc = ti->private; +- struct crypt_cpu *cpu_cc; +- int cpu; + + ti->private = NULL; + +@@ -1339,13 +1319,6 @@ static void crypt_dtr(struct dm_target *ti) + if (cc->crypt_queue) + destroy_workqueue(cc->crypt_queue); + +- if (cc->cpu) +- for_each_possible_cpu(cpu) { +- cpu_cc = per_cpu_ptr(cc->cpu, cpu); +- if (cpu_cc->req) +- mempool_free(cpu_cc->req, cc->req_pool); +- } +- + crypt_free_tfms(cc); + + if (cc->bs) +@@ -1364,9 +1337,6 @@ static void crypt_dtr(struct dm_target *ti) + if (cc->dev) + dm_put_device(ti, cc->dev); + +- if (cc->cpu) +- free_percpu(cc->cpu); +- + kzfree(cc->cipher); + kzfree(cc->cipher_string); + +@@ -1421,13 +1391,6 @@ static int crypt_ctr_cipher(struct dm_target *ti, + if (tmp) + DMWARN("Ignoring unexpected additional cipher options"); + +- cc->cpu = __alloc_percpu(sizeof(*(cc->cpu)), +- __alignof__(struct crypt_cpu)); +- if (!cc->cpu) { +- ti->error = "Cannot allocate per cpu state"; +- goto bad_mem; +- } +- + /* + * For compatibility with the original dm-crypt mapping format, if + * only the cipher name is supplied, use cbc-plain. +diff --git a/drivers/md/md.c b/drivers/md/md.c +index a2dda416c9cb..00a99fe797d4 100644 +--- a/drivers/md/md.c ++++ b/drivers/md/md.c +@@ -8481,7 +8481,8 @@ static int md_notify_reboot(struct notifier_block *this, + if (mddev_trylock(mddev)) { + if (mddev->pers) + __md_stop_writes(mddev); +- mddev->safemode = 2; ++ if (mddev->persistent) ++ mddev->safemode = 2; + mddev_unlock(mddev); + } + need_delay = 1; +diff --git a/drivers/media/i2c/ov7670.c b/drivers/media/i2c/ov7670.c +index 617ad3fff4aa..3ead3a83f04a 100644 +--- a/drivers/media/i2c/ov7670.c ++++ b/drivers/media/i2c/ov7670.c +@@ -1110,7 +1110,7 @@ static int ov7670_enum_framesizes(struct v4l2_subdev *sd, + * windows that fall outside that. 
+ */ + for (i = 0; i < n_win_sizes; i++) { +- struct ov7670_win_size *win = &info->devtype->win_sizes[index]; ++ struct ov7670_win_size *win = &info->devtype->win_sizes[i]; + if (info->min_width && win->width < info->min_width) + continue; + if (info->min_height && win->height < info->min_height) +diff --git a/drivers/media/media-device.c b/drivers/media/media-device.c +index 1957c0df08fd..79715f9feb0a 100644 +--- a/drivers/media/media-device.c ++++ b/drivers/media/media-device.c +@@ -93,6 +93,7 @@ static long media_device_enum_entities(struct media_device *mdev, + struct media_entity *ent; + struct media_entity_desc u_ent; + ++ memset(&u_ent, 0, sizeof(u_ent)); + if (copy_from_user(&u_ent.id, &uent->id, sizeof(u_ent.id))) + return -EFAULT; + +diff --git a/drivers/media/platform/omap3isp/isp.c b/drivers/media/platform/omap3isp/isp.c +index 1d7dbd5c0fba..3e8ef11f67aa 100644 +--- a/drivers/media/platform/omap3isp/isp.c ++++ b/drivers/media/platform/omap3isp/isp.c +@@ -2249,6 +2249,7 @@ static int isp_probe(struct platform_device *pdev) + ret = iommu_attach_device(isp->domain, &pdev->dev); + if (ret) { + dev_err(&pdev->dev, "can't attach iommu device: %d\n", ret); ++ ret = -EPROBE_DEFER; + goto free_domain; + } + +@@ -2287,6 +2288,7 @@ detach_dev: + iommu_detach_device(isp->domain, &pdev->dev); + free_domain: + iommu_domain_free(isp->domain); ++ isp->domain = NULL; + error_isp: + isp_xclk_cleanup(isp); + omap3isp_put(isp); +diff --git a/drivers/media/tuners/fc2580.c b/drivers/media/tuners/fc2580.c +index 3aecaf465094..f0c9c42867de 100644 +--- a/drivers/media/tuners/fc2580.c ++++ b/drivers/media/tuners/fc2580.c +@@ -195,7 +195,7 @@ static int fc2580_set_params(struct dvb_frontend *fe) + + f_ref = 2UL * priv->cfg->clock / r_val; + n_val = div_u64_rem(f_vco, f_ref, &k_val); +- k_val_reg = 1UL * k_val * (1 << 20) / f_ref; ++ k_val_reg = div_u64(1ULL * k_val * (1 << 20), f_ref); + + ret = fc2580_wr_reg(priv, 0x18, r18_val | ((k_val_reg >> 16) & 0xff)); + if (ret < 0) +@@ 
-348,8 +348,8 @@ static int fc2580_set_params(struct dvb_frontend *fe) + if (ret < 0) + goto err; + +- ret = fc2580_wr_reg(priv, 0x37, 1UL * priv->cfg->clock * \ +- fc2580_if_filter_lut[i].mul / 1000000000); ++ ret = fc2580_wr_reg(priv, 0x37, div_u64(1ULL * priv->cfg->clock * ++ fc2580_if_filter_lut[i].mul, 1000000000)); + if (ret < 0) + goto err; + +diff --git a/drivers/media/tuners/fc2580_priv.h b/drivers/media/tuners/fc2580_priv.h +index be38a9e637e0..646c99452136 100644 +--- a/drivers/media/tuners/fc2580_priv.h ++++ b/drivers/media/tuners/fc2580_priv.h +@@ -22,6 +22,7 @@ + #define FC2580_PRIV_H + + #include "fc2580.h" ++#include + + struct fc2580_reg_val { + u8 reg; +diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c +index f56b729581e7..e2b0a0969ebb 100644 +--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c ++++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c +@@ -178,6 +178,9 @@ struct v4l2_create_buffers32 { + + static int __get_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up) + { ++ if (get_user(kp->type, &up->type)) ++ return -EFAULT; ++ + switch (kp->type) { + case V4L2_BUF_TYPE_VIDEO_CAPTURE: + case V4L2_BUF_TYPE_VIDEO_OUTPUT: +@@ -204,17 +207,16 @@ static int __get_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __us + + static int get_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user *up) + { +- if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_format32)) || +- get_user(kp->type, &up->type)) +- return -EFAULT; ++ if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_format32))) ++ return -EFAULT; + return __get_v4l2_format32(kp, up); + } + + static int get_v4l2_create32(struct v4l2_create_buffers *kp, struct v4l2_create_buffers32 __user *up) + { + if (!access_ok(VERIFY_READ, up, sizeof(struct v4l2_create_buffers32)) || +- copy_from_user(kp, up, offsetof(struct v4l2_create_buffers32, format.fmt))) +- return -EFAULT; ++ copy_from_user(kp, up, 
offsetof(struct v4l2_create_buffers32, format))) ++ return -EFAULT; + return __get_v4l2_format32(&kp->format, &up->format); + } + +diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c +index e752f5d4995d..0c9b2f1c6939 100644 +--- a/drivers/net/wireless/ath/ath9k/xmit.c ++++ b/drivers/net/wireless/ath/ath9k/xmit.c +@@ -1255,14 +1255,16 @@ void ath_tx_aggr_sleep(struct ieee80211_sta *sta, struct ath_softc *sc, + for (tidno = 0, tid = &an->tid[tidno]; + tidno < IEEE80211_NUM_TIDS; tidno++, tid++) { + +- if (!tid->sched) +- continue; +- + ac = tid->ac; + txq = ac->txq; + + ath_txq_lock(sc, txq); + ++ if (!tid->sched) { ++ ath_txq_unlock(sc, txq); ++ continue; ++ } ++ + buffered = !skb_queue_empty(&tid->buf_q); + + tid->sched = false; +diff --git a/drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c b/drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c +index 3a6544710c8a..8e8543cfe489 100644 +--- a/drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c ++++ b/drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c +@@ -426,6 +426,12 @@ static int brcms_ops_start(struct ieee80211_hw *hw) + bool blocked; + int err; + ++ if (!wl->ucode.bcm43xx_bomminor) { ++ err = brcms_request_fw(wl, wl->wlc->hw->d11core); ++ if (err) ++ return -ENOENT; ++ } ++ + ieee80211_wake_queues(hw); + spin_lock_bh(&wl->lock); + blocked = brcms_rfkill_set_hw_state(wl); +@@ -433,14 +439,6 @@ static int brcms_ops_start(struct ieee80211_hw *hw) + if (!blocked) + wiphy_rfkill_stop_polling(wl->pub->ieee_hw->wiphy); + +- if (!wl->ucode.bcm43xx_bomminor) { +- err = brcms_request_fw(wl, wl->wlc->hw->d11core); +- if (err) { +- brcms_remove(wl->wlc->hw->d11core); +- return -ENOENT; +- } +- } +- + spin_lock_bh(&wl->lock); + /* avoid acknowledging frames before a non-monitor device is added */ + wl->mute_tx = true; +diff --git a/drivers/net/wireless/rt2x00/rt2x00mac.c b/drivers/net/wireless/rt2x00/rt2x00mac.c +index f8cff1f0b6b7..2b724fc4e306 100644 +--- 
a/drivers/net/wireless/rt2x00/rt2x00mac.c ++++ b/drivers/net/wireless/rt2x00/rt2x00mac.c +@@ -623,20 +623,18 @@ void rt2x00mac_bss_info_changed(struct ieee80211_hw *hw, + bss_conf->bssid); + + /* +- * Update the beacon. This is only required on USB devices. PCI +- * devices fetch beacons periodically. +- */ +- if (changes & BSS_CHANGED_BEACON && rt2x00_is_usb(rt2x00dev)) +- rt2x00queue_update_beacon(rt2x00dev, vif); +- +- /* + * Start/stop beaconing. + */ + if (changes & BSS_CHANGED_BEACON_ENABLED) { + if (!bss_conf->enable_beacon && intf->enable_beacon) { +- rt2x00queue_clear_beacon(rt2x00dev, vif); + rt2x00dev->intf_beaconing--; + intf->enable_beacon = false; ++ /* ++ * Clear beacon in the H/W for this vif. This is needed ++ * to disable beaconing on this particular interface ++ * and keep it running on other interfaces. ++ */ ++ rt2x00queue_clear_beacon(rt2x00dev, vif); + + if (rt2x00dev->intf_beaconing == 0) { + /* +@@ -647,11 +645,15 @@ void rt2x00mac_bss_info_changed(struct ieee80211_hw *hw, + rt2x00queue_stop_queue(rt2x00dev->bcn); + mutex_unlock(&intf->beacon_skb_mutex); + } +- +- + } else if (bss_conf->enable_beacon && !intf->enable_beacon) { + rt2x00dev->intf_beaconing++; + intf->enable_beacon = true; ++ /* ++ * Upload beacon to the H/W. This is only required on ++ * USB devices. PCI devices fetch beacons periodically. 
++ */ ++ if (rt2x00_is_usb(rt2x00dev)) ++ rt2x00queue_update_beacon(rt2x00dev, vif); + + if (rt2x00dev->intf_beaconing == 1) { + /* +diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c +index 324aa581938e..c3f2b55501ae 100644 +--- a/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c ++++ b/drivers/net/wireless/rtlwifi/rtl8192cu/hw.c +@@ -1001,7 +1001,7 @@ int rtl92cu_hw_init(struct ieee80211_hw *hw) + err = _rtl92cu_init_mac(hw); + if (err) { + RT_TRACE(rtlpriv, COMP_ERR, DBG_EMERG, "init mac failed!\n"); +- return err; ++ goto exit; + } + err = rtl92c_download_fw(hw); + if (err) { +diff --git a/drivers/pci/hotplug/shpchp_ctrl.c b/drivers/pci/hotplug/shpchp_ctrl.c +index 58499277903a..6efc2ec5e4db 100644 +--- a/drivers/pci/hotplug/shpchp_ctrl.c ++++ b/drivers/pci/hotplug/shpchp_ctrl.c +@@ -282,8 +282,8 @@ static int board_added(struct slot *p_slot) + return WRONG_BUS_FREQUENCY; + } + +- bsp = ctrl->pci_dev->bus->cur_bus_speed; +- msp = ctrl->pci_dev->bus->max_bus_speed; ++ bsp = ctrl->pci_dev->subordinate->cur_bus_speed; ++ msp = ctrl->pci_dev->subordinate->max_bus_speed; + + /* Check if there are other slots or devices on the same bus */ + if (!list_empty(&ctrl->pci_dev->subordinate->devices)) +diff --git a/drivers/target/target_core_device.c b/drivers/target/target_core_device.c +index 660b109487ae..8032917b6636 100644 +--- a/drivers/target/target_core_device.c ++++ b/drivers/target/target_core_device.c +@@ -796,10 +796,10 @@ int se_dev_set_emulate_write_cache(struct se_device *dev, int flag) + pr_err("emulate_write_cache not supported for pSCSI\n"); + return -EINVAL; + } +- if (dev->transport->get_write_cache) { +- pr_warn("emulate_write_cache cannot be changed when underlying" +- " HW reports WriteCacheEnabled, ignoring request\n"); +- return 0; ++ if (flag && ++ dev->transport->get_write_cache) { ++ pr_err("emulate_write_cache not supported for this device\n"); ++ return -EINVAL; + } + + 
dev->dev_attrib.emulate_write_cache = flag; +diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c +index b167665b7de2..d8c06a3d391e 100644 +--- a/drivers/tty/serial/8250/8250_core.c ++++ b/drivers/tty/serial/8250/8250_core.c +@@ -1520,7 +1520,7 @@ int serial8250_handle_irq(struct uart_port *port, unsigned int iir) + status = serial8250_rx_chars(up, status); + } + serial8250_modem_status(up); +- if (status & UART_LSR_THRE) ++ if (!up->dma && (status & UART_LSR_THRE)) + serial8250_tx_chars(up); + + spin_unlock_irqrestore(&port->lock, flags); +diff --git a/drivers/tty/serial/8250/8250_dma.c b/drivers/tty/serial/8250/8250_dma.c +index 7046769608d4..ab9096dc3849 100644 +--- a/drivers/tty/serial/8250/8250_dma.c ++++ b/drivers/tty/serial/8250/8250_dma.c +@@ -20,12 +20,15 @@ static void __dma_tx_complete(void *param) + struct uart_8250_port *p = param; + struct uart_8250_dma *dma = p->dma; + struct circ_buf *xmit = &p->port.state->xmit; +- +- dma->tx_running = 0; ++ unsigned long flags; + + dma_sync_single_for_cpu(dma->txchan->device->dev, dma->tx_addr, + UART_XMIT_SIZE, DMA_TO_DEVICE); + ++ spin_lock_irqsave(&p->port.lock, flags); ++ ++ dma->tx_running = 0; ++ + xmit->tail += dma->tx_size; + xmit->tail &= UART_XMIT_SIZE - 1; + p->port.icount.tx += dma->tx_size; +@@ -35,6 +38,8 @@ static void __dma_tx_complete(void *param) + + if (!uart_circ_empty(xmit) && !uart_tx_stopped(&p->port)) + serial8250_tx_dma(p); ++ ++ spin_unlock_irqrestore(&p->port.lock, flags); + } + + static void __dma_rx_complete(void *param) +diff --git a/drivers/usb/gadget/at91_udc.c b/drivers/usb/gadget/at91_udc.c +index 073b938f9135..55e96131753e 100644 +--- a/drivers/usb/gadget/at91_udc.c ++++ b/drivers/usb/gadget/at91_udc.c +@@ -1703,16 +1703,6 @@ static int at91udc_probe(struct platform_device *pdev) + return -ENODEV; + } + +- if (pdev->num_resources != 2) { +- DBG("invalid num_resources\n"); +- return -ENODEV; +- } +- if ((pdev->resource[0].flags != IORESOURCE_MEM) 
+- || (pdev->resource[1].flags != IORESOURCE_IRQ)) { +- DBG("invalid resource type\n"); +- return -ENODEV; +- } +- + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); + if (!res) + return -ENXIO; +diff --git a/drivers/usb/host/ehci-fsl.c b/drivers/usb/host/ehci-fsl.c +index 3c0a49a298dd..bfcf38383f74 100644 +--- a/drivers/usb/host/ehci-fsl.c ++++ b/drivers/usb/host/ehci-fsl.c +@@ -261,7 +261,8 @@ static int ehci_fsl_setup_phy(struct usb_hcd *hcd, + break; + } + +- if (pdata->have_sysif_regs && pdata->controller_ver && ++ if (pdata->have_sysif_regs && ++ pdata->controller_ver > FSL_USB_VER_1_6 && + (phy_mode == FSL_USB2_PHY_ULPI)) { + /* check PHY_CLK_VALID to get phy clk valid */ + if (!spin_event_timeout(in_be32(non_ehci + FSL_SOC_USB_CTRL) & +diff --git a/drivers/usb/host/ohci-hub.c b/drivers/usb/host/ohci-hub.c +index 60ff4220e8b4..cd908066fde9 100644 +--- a/drivers/usb/host/ohci-hub.c ++++ b/drivers/usb/host/ohci-hub.c +@@ -90,6 +90,24 @@ __acquires(ohci->lock) + dl_done_list (ohci); + finish_unlinks (ohci, ohci_frame_no(ohci)); + ++ /* ++ * Some controllers don't handle "global" suspend properly if ++ * there are unsuspended ports. For these controllers, put all ++ * the enabled ports into suspend before suspending the root hub. 
++ */ ++ if (ohci->flags & OHCI_QUIRK_GLOBAL_SUSPEND) { ++ __hc32 __iomem *portstat = ohci->regs->roothub.portstatus; ++ int i; ++ unsigned temp; ++ ++ for (i = 0; i < ohci->num_ports; (++i, ++portstat)) { ++ temp = ohci_readl(ohci, portstat); ++ if ((temp & (RH_PS_PES | RH_PS_PSS)) == ++ RH_PS_PES) ++ ohci_writel(ohci, RH_PS_PSS, portstat); ++ } ++ } ++ + /* maybe resume can wake root hub */ + if (ohci_to_hcd(ohci)->self.root_hub->do_remote_wakeup || autostop) { + ohci->hc_control |= OHCI_CTRL_RWE; +diff --git a/drivers/usb/host/ohci-pci.c b/drivers/usb/host/ohci-pci.c +index ef6782bd1fa9..67af8eef6537 100644 +--- a/drivers/usb/host/ohci-pci.c ++++ b/drivers/usb/host/ohci-pci.c +@@ -172,6 +172,7 @@ static int ohci_quirk_amd700(struct usb_hcd *hcd) + pci_dev_put(amd_smbus_dev); + amd_smbus_dev = NULL; + ++ ohci->flags |= OHCI_QUIRK_GLOBAL_SUSPEND; + return 0; + } + +diff --git a/drivers/usb/host/ohci.h b/drivers/usb/host/ohci.h +index d3299143d9e2..f2521f3185d2 100644 +--- a/drivers/usb/host/ohci.h ++++ b/drivers/usb/host/ohci.h +@@ -405,6 +405,8 @@ struct ohci_hcd { + #define OHCI_QUIRK_HUB_POWER 0x100 /* distrust firmware power/oc setup */ + #define OHCI_QUIRK_AMD_PLL 0x200 /* AMD PLL quirk*/ + #define OHCI_QUIRK_AMD_PREFETCH 0x400 /* pre-fetch for ISO transfer */ ++#define OHCI_QUIRK_GLOBAL_SUSPEND 0x800 /* must suspend ports */ ++ + // there are also chip quirks/bugs in init logic + + struct work_struct nec_work; /* Worker for NEC quirk */ +diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c +index 7ed681a714a5..6c0a542e8ec1 100644 +--- a/drivers/usb/serial/qcserial.c ++++ b/drivers/usb/serial/qcserial.c +@@ -151,6 +151,21 @@ static const struct usb_device_id id_table[] = { + {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9051, 0)}, /* Netgear AirCard 340U Device Management */ + {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9051, 2)}, /* Netgear AirCard 340U NMEA */ + {USB_DEVICE_INTERFACE_NUMBER(0x1199, 0x9051, 3)}, /* Netgear AirCard 340U Modem */ ++ 
{USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a2, 0)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card Device Management */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a2, 2)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card NMEA */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a2, 3)}, /* Dell Wireless 5806 Gobi(TM) 4G LTE Mobile Broadband Card Modem */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a3, 0)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card Device Management */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a3, 2)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card NMEA */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a3, 3)}, /* Dell Wireless 5570 HSPA+ (42Mbps) Mobile Broadband Card Modem */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a4, 0)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card Device Management */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a4, 2)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card NMEA */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a4, 3)}, /* Dell Wireless 5570e HSPA+ (42Mbps) Mobile Broadband Card Modem */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a8, 0)}, /* Dell Wireless 5808 Gobi(TM) 4G LTE Mobile Broadband Card Device Management */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a8, 2)}, /* Dell Wireless 5808 Gobi(TM) 4G LTE Mobile Broadband Card NMEA */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a8, 3)}, /* Dell Wireless 5808 Gobi(TM) 4G LTE Mobile Broadband Card Modem */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a9, 0)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card Device Management */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a9, 2)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card NMEA */ ++ {USB_DEVICE_INTERFACE_NUMBER(0x413c, 0x81a9, 3)}, /* Dell Wireless 5808e Gobi(TM) 4G LTE Mobile Broadband Card Modem */ + + { } /* Terminating entry */ + }; +diff --git a/drivers/usb/storage/shuttle_usbat.c 
b/drivers/usb/storage/shuttle_usbat.c +index 4ef2a80728f7..008d805c3d21 100644 +--- a/drivers/usb/storage/shuttle_usbat.c ++++ b/drivers/usb/storage/shuttle_usbat.c +@@ -1851,7 +1851,7 @@ static int usbat_probe(struct usb_interface *intf, + us->transport_name = "Shuttle USBAT"; + us->transport = usbat_flash_transport; + us->transport_reset = usb_stor_CB_reset; +- us->max_lun = 1; ++ us->max_lun = 0; + + result = usb_stor_probe2(us); + return result; +diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h +index adbeb255616a..042c83b01046 100644 +--- a/drivers/usb/storage/unusual_devs.h ++++ b/drivers/usb/storage/unusual_devs.h +@@ -234,6 +234,20 @@ UNUSUAL_DEV( 0x0421, 0x0495, 0x0370, 0x0370, + USB_SC_DEVICE, USB_PR_DEVICE, NULL, + US_FL_MAX_SECTORS_64 ), + ++/* Reported by Daniele Forsi */ ++UNUSUAL_DEV( 0x0421, 0x04b9, 0x0350, 0x0350, ++ "Nokia", ++ "5300", ++ USB_SC_DEVICE, USB_PR_DEVICE, NULL, ++ US_FL_MAX_SECTORS_64 ), ++ ++/* Patch submitted by Victor A. Santos */ ++UNUSUAL_DEV( 0x0421, 0x05af, 0x0742, 0x0742, ++ "Nokia", ++ "305", ++ USB_SC_DEVICE, USB_PR_DEVICE, NULL, ++ US_FL_MAX_SECTORS_64), ++ + /* Patch submitted by Mikhail Zolotaryov */ + UNUSUAL_DEV( 0x0421, 0x06aa, 0x1110, 0x1110, + "Nokia", +diff --git a/fs/exec.c b/fs/exec.c +index bb60cda5ee30..dd6aa61c8548 100644 +--- a/fs/exec.c ++++ b/fs/exec.c +@@ -654,10 +654,10 @@ int setup_arg_pages(struct linux_binprm *bprm, + unsigned long rlim_stack; + + #ifdef CONFIG_STACK_GROWSUP +- /* Limit stack size to 1GB */ ++ /* Limit stack size */ + stack_base = rlimit_max(RLIMIT_STACK); +- if (stack_base > (1 << 30)) +- stack_base = 1 << 30; ++ if (stack_base > STACK_SIZE_MAX) ++ stack_base = STACK_SIZE_MAX; + + /* Make sure we didn't let the argument array grow too large. 
*/ + if (vma->vm_end - vma->vm_start > stack_base) +diff --git a/fs/nfsd/nfs4acl.c b/fs/nfsd/nfs4acl.c +index 8a50b3c18093..e15bcbd5043c 100644 +--- a/fs/nfsd/nfs4acl.c ++++ b/fs/nfsd/nfs4acl.c +@@ -385,8 +385,10 @@ sort_pacl(struct posix_acl *pacl) + * by uid/gid. */ + int i, j; + +- if (pacl->a_count <= 4) +- return; /* no users or groups */ ++ /* no users or groups */ ++ if (!pacl || pacl->a_count <= 4) ++ return; ++ + i = 1; + while (pacl->a_entries[i].e_tag == ACL_USER) + i++; +@@ -513,13 +515,12 @@ posix_state_to_acl(struct posix_acl_state *state, unsigned int flags) + + /* + * ACLs with no ACEs are treated differently in the inheritable +- * and effective cases: when there are no inheritable ACEs, we +- * set a zero-length default posix acl: ++ * and effective cases: when there are no inheritable ACEs, ++ * calls ->set_acl with a NULL ACL structure. + */ +- if (state->empty && (flags & NFS4_ACL_TYPE_DEFAULT)) { +- pacl = posix_acl_alloc(0, GFP_KERNEL); +- return pacl ? pacl : ERR_PTR(-ENOMEM); +- } ++ if (state->empty && (flags & NFS4_ACL_TYPE_DEFAULT)) ++ return NULL; ++ + /* + * When there are no effective ACEs, the following will end + * up setting a 3-element effective posix ACL with all +diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c +index 442509285ca9..ae6a50b7a617 100644 +--- a/fs/nfsd/nfs4state.c ++++ b/fs/nfsd/nfs4state.c +@@ -1081,6 +1081,18 @@ static struct nfs4_client *alloc_client(struct xdr_netobj name) + return NULL; + } + clp->cl_name.len = name.len; ++ INIT_LIST_HEAD(&clp->cl_sessions); ++ idr_init(&clp->cl_stateids); ++ atomic_set(&clp->cl_refcount, 0); ++ clp->cl_cb_state = NFSD4_CB_UNKNOWN; ++ INIT_LIST_HEAD(&clp->cl_idhash); ++ INIT_LIST_HEAD(&clp->cl_openowners); ++ INIT_LIST_HEAD(&clp->cl_delegations); ++ INIT_LIST_HEAD(&clp->cl_lru); ++ INIT_LIST_HEAD(&clp->cl_callbacks); ++ INIT_LIST_HEAD(&clp->cl_revoked); ++ spin_lock_init(&clp->cl_lock); ++ rpc_init_wait_queue(&clp->cl_cb_waitq, "Backchannel slot table"); + return clp; + 
} + +@@ -1098,6 +1110,7 @@ free_client(struct nfs4_client *clp) + WARN_ON_ONCE(atomic_read(&ses->se_ref)); + free_session(ses); + } ++ rpc_destroy_wait_queue(&clp->cl_cb_waitq); + free_svc_cred(&clp->cl_cred); + kfree(clp->cl_name.data); + idr_destroy(&clp->cl_stateids); +@@ -1315,7 +1328,6 @@ static struct nfs4_client *create_client(struct xdr_netobj name, + if (clp == NULL) + return NULL; + +- INIT_LIST_HEAD(&clp->cl_sessions); + ret = copy_cred(&clp->cl_cred, &rqstp->rq_cred); + if (ret) { + spin_lock(&nn->client_lock); +@@ -1323,20 +1335,9 @@ static struct nfs4_client *create_client(struct xdr_netobj name, + spin_unlock(&nn->client_lock); + return NULL; + } +- idr_init(&clp->cl_stateids); +- atomic_set(&clp->cl_refcount, 0); +- clp->cl_cb_state = NFSD4_CB_UNKNOWN; +- INIT_LIST_HEAD(&clp->cl_idhash); +- INIT_LIST_HEAD(&clp->cl_openowners); +- INIT_LIST_HEAD(&clp->cl_delegations); +- INIT_LIST_HEAD(&clp->cl_lru); +- INIT_LIST_HEAD(&clp->cl_callbacks); +- INIT_LIST_HEAD(&clp->cl_revoked); +- spin_lock_init(&clp->cl_lock); + nfsd4_init_callback(&clp->cl_cb_null); + clp->cl_time = get_seconds(); + clear_bit(0, &clp->cl_cb_slot_busy); +- rpc_init_wait_queue(&clp->cl_cb_waitq, "Backchannel slot table"); + copy_verf(clp, verf); + rpc_copy_addr((struct sockaddr *) &clp->cl_addr, sa); + gen_confirm(clp); +@@ -3598,9 +3599,16 @@ out: + static __be32 + nfsd4_free_lock_stateid(struct nfs4_ol_stateid *stp) + { +- if (check_for_locks(stp->st_file, lockowner(stp->st_stateowner))) ++ struct nfs4_lockowner *lo = lockowner(stp->st_stateowner); ++ ++ if (check_for_locks(stp->st_file, lo)) + return nfserr_locks_held; +- release_lock_stateid(stp); ++ /* ++ * Currently there's a 1-1 lock stateid<->lockowner ++ * correspondance, and we have to delete the lockowner when we ++ * delete the lock stateid: ++ */ ++ unhash_lockowner(lo); + return nfs_ok; + } + +@@ -4044,6 +4052,10 @@ static bool same_lockowner_ino(struct nfs4_lockowner *lo, struct inode *inode, c + + if 
(!same_owner_str(&lo->lo_owner, owner, clid)) + return false; ++ if (list_empty(&lo->lo_owner.so_stateids)) { ++ WARN_ON_ONCE(1); ++ return false; ++ } + lst = list_first_entry(&lo->lo_owner.so_stateids, + struct nfs4_ol_stateid, st_perstateowner); + return lst->st_file->fi_inode == inode; +diff --git a/fs/posix_acl.c b/fs/posix_acl.c +index 8bd2135b7f82..3542f1f814e2 100644 +--- a/fs/posix_acl.c ++++ b/fs/posix_acl.c +@@ -158,6 +158,12 @@ posix_acl_equiv_mode(const struct posix_acl *acl, umode_t *mode_p) + umode_t mode = 0; + int not_equiv = 0; + ++ /* ++ * A null ACL can always be presented as mode bits. ++ */ ++ if (!acl) ++ return 0; ++ + FOREACH_ACL_ENTRY(pa, acl, pe) { + switch (pa->e_tag) { + case ACL_USER_OBJ: +diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h +index 99d0fbcbaf79..7a13848d635c 100644 +--- a/include/linux/ftrace.h ++++ b/include/linux/ftrace.h +@@ -524,6 +524,7 @@ static inline int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_a + extern int ftrace_arch_read_dyn_info(char *buf, int size); + + extern int skip_trace(unsigned long ip); ++extern void ftrace_module_init(struct module *mod); + + extern void ftrace_disable_daemon(void); + extern void ftrace_enable_daemon(void); +@@ -533,6 +534,7 @@ static inline int ftrace_force_update(void) { return 0; } + static inline void ftrace_disable_daemon(void) { } + static inline void ftrace_enable_daemon(void) { } + static inline void ftrace_release_mod(struct module *mod) {} ++static inline void ftrace_module_init(struct module *mod) {} + static inline int register_ftrace_command(struct ftrace_func_command *cmd) + { + return -EINVAL; +diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h +index c2559847d7ee..422eac8538fd 100644 +--- a/include/linux/hyperv.h ++++ b/include/linux/hyperv.h +@@ -483,15 +483,18 @@ hv_get_ringbuffer_availbytes(struct hv_ring_buffer_info *rbi, + * 0 . 13 (Windows Server 2008) + * 1 . 1 (Windows 7) + * 2 . 4 (Windows 8) ++ * 3 . 
0 (Windows 8 R2) + */ + + #define VERSION_WS2008 ((0 << 16) | (13)) + #define VERSION_WIN7 ((1 << 16) | (1)) + #define VERSION_WIN8 ((2 << 16) | (4)) ++#define VERSION_WIN8_1 ((3 << 16) | (0)) ++ + + #define VERSION_INVAL -1 + +-#define VERSION_CURRENT VERSION_WIN8 ++#define VERSION_CURRENT VERSION_WIN8_1 + + /* Make maximum size of pipe payload of 16K */ + #define MAX_PIPE_DATA_PAYLOAD (sizeof(u8) * 16384) +@@ -894,7 +897,7 @@ struct vmbus_channel_relid_released { + struct vmbus_channel_initiate_contact { + struct vmbus_channel_message_header header; + u32 vmbus_version_requested; +- u32 padding2; ++ u32 target_vcpu; /* The VCPU the host should respond to */ + u64 interrupt_page; + u64 monitor_page1; + u64 monitor_page2; +diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h +index 5fa5afeeb759..6de0f2c14ec0 100644 +--- a/include/linux/interrupt.h ++++ b/include/linux/interrupt.h +@@ -239,7 +239,40 @@ static inline int check_wakeup_irqs(void) { return 0; } + + extern cpumask_var_t irq_default_affinity; + +-extern int irq_set_affinity(unsigned int irq, const struct cpumask *cpumask); ++/* Internal implementation. Use the helpers below */ ++extern int __irq_set_affinity(unsigned int irq, const struct cpumask *cpumask, ++ bool force); ++ ++/** ++ * irq_set_affinity - Set the irq affinity of a given irq ++ * @irq: Interrupt to set affinity ++ * @mask: cpumask ++ * ++ * Fails if cpumask does not contain an online CPU ++ */ ++static inline int ++irq_set_affinity(unsigned int irq, const struct cpumask *cpumask) ++{ ++ return __irq_set_affinity(irq, cpumask, false); ++} ++ ++/** ++ * irq_force_affinity - Force the irq affinity of a given irq ++ * @irq: Interrupt to set affinity ++ * @mask: cpumask ++ * ++ * Same as irq_set_affinity, but without checking the mask against ++ * online cpus. ++ * ++ * Solely for low level cpu hotplug code, where we need to make per ++ * cpu interrupts affine before the cpu becomes online. 
++ */ ++static inline int ++irq_force_affinity(unsigned int irq, const struct cpumask *cpumask) ++{ ++ return __irq_set_affinity(irq, cpumask, true); ++} ++ + extern int irq_can_set_affinity(unsigned int irq); + extern int irq_select_affinity(unsigned int irq); + +@@ -275,6 +308,11 @@ static inline int irq_set_affinity(unsigned int irq, const struct cpumask *m) + return -EINVAL; + } + ++static inline int irq_force_affinity(unsigned int irq, const struct cpumask *cpumask) ++{ ++ return 0; ++} ++ + static inline int irq_can_set_affinity(unsigned int irq) + { + return 0; +diff --git a/include/linux/irq.h b/include/linux/irq.h +index bc4e06611958..d591bfe1475b 100644 +--- a/include/linux/irq.h ++++ b/include/linux/irq.h +@@ -375,7 +375,8 @@ extern void remove_percpu_irq(unsigned int irq, struct irqaction *act); + + extern void irq_cpu_online(void); + extern void irq_cpu_offline(void); +-extern int __irq_set_affinity_locked(struct irq_data *data, const struct cpumask *cpumask); ++extern int irq_set_affinity_locked(struct irq_data *data, ++ const struct cpumask *cpumask, bool force); + + #ifdef CONFIG_GENERIC_HARDIRQS + +diff --git a/include/trace/events/module.h b/include/trace/events/module.h +index 161932737416..ca298c7157ae 100644 +--- a/include/trace/events/module.h ++++ b/include/trace/events/module.h +@@ -78,7 +78,7 @@ DECLARE_EVENT_CLASS(module_refcnt, + + TP_fast_assign( + __entry->ip = ip; +- __entry->refcnt = __this_cpu_read(mod->refptr->incs) + __this_cpu_read(mod->refptr->decs); ++ __entry->refcnt = __this_cpu_read(mod->refptr->incs) - __this_cpu_read(mod->refptr->decs); + __assign_str(name, mod->name); + ), + +diff --git a/include/uapi/drm/tegra_drm.h b/include/uapi/drm/tegra_drm.h +index 6e132a2f7420..86b1f9942d0a 100644 +--- a/include/uapi/drm/tegra_drm.h ++++ b/include/uapi/drm/tegra_drm.h +@@ -103,7 +103,6 @@ struct drm_tegra_submit { + __u32 num_waitchks; + __u32 waitchk_mask; + __u32 timeout; +- __u32 pad; + __u64 syncpts; + __u64 cmdbufs; + __u64 
relocs; +diff --git a/kernel/futex.c b/kernel/futex.c +index 3bc18bf48d0c..625a4e659e7a 100644 +--- a/kernel/futex.c ++++ b/kernel/futex.c +@@ -592,6 +592,55 @@ void exit_pi_state_list(struct task_struct *curr) + raw_spin_unlock_irq(&curr->pi_lock); + } + ++/* ++ * We need to check the following states: ++ * ++ * Waiter | pi_state | pi->owner | uTID | uODIED | ? ++ * ++ * [1] NULL | --- | --- | 0 | 0/1 | Valid ++ * [2] NULL | --- | --- | >0 | 0/1 | Valid ++ * ++ * [3] Found | NULL | -- | Any | 0/1 | Invalid ++ * ++ * [4] Found | Found | NULL | 0 | 1 | Valid ++ * [5] Found | Found | NULL | >0 | 1 | Invalid ++ * ++ * [6] Found | Found | task | 0 | 1 | Valid ++ * ++ * [7] Found | Found | NULL | Any | 0 | Invalid ++ * ++ * [8] Found | Found | task | ==taskTID | 0/1 | Valid ++ * [9] Found | Found | task | 0 | 0 | Invalid ++ * [10] Found | Found | task | !=taskTID | 0/1 | Invalid ++ * ++ * [1] Indicates that the kernel can acquire the futex atomically. We ++ * came here due to a stale FUTEX_WAITERS/FUTEX_OWNER_DIED bit. ++ * ++ * [2] Valid, if TID does not belong to a kernel thread. If no matching ++ * thread is found then it indicates that the owner TID has died. ++ * ++ * [3] Invalid. The waiter is queued on a non PI futex ++ * ++ * [4] Valid state after exit_robust_list(), which sets the user space ++ * value to FUTEX_WAITERS | FUTEX_OWNER_DIED. ++ * ++ * [5] The user space value got manipulated between exit_robust_list() ++ * and exit_pi_state_list() ++ * ++ * [6] Valid state after exit_pi_state_list() which sets the new owner in ++ * the pi_state but cannot access the user space value. ++ * ++ * [7] pi_state->owner can only be NULL when the OWNER_DIED bit is set. ++ * ++ * [8] Owner and user space value match ++ * ++ * [9] There is no transient state which sets the user space TID to 0 ++ * except exit_robust_list(), but this is indicated by the ++ * FUTEX_OWNER_DIED bit.
See [4] ++ * ++ * [10] There is no transient state which leaves owner and user space ++ * TID out of sync. ++ */ + static int + lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, + union futex_key *key, struct futex_pi_state **ps) +@@ -607,12 +656,13 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, + plist_for_each_entry_safe(this, next, head, list) { + if (match_futex(&this->key, key)) { + /* +- * Another waiter already exists - bump up +- * the refcount and return its pi_state: ++ * Sanity check the waiter before increasing ++ * the refcount and attaching to it. + */ + pi_state = this->pi_state; + /* +- * Userspace might have messed up non-PI and PI futexes ++ * Userspace might have messed up non-PI and ++ * PI futexes [3] + */ + if (unlikely(!pi_state)) + return -EINVAL; +@@ -620,34 +670,70 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, + WARN_ON(!atomic_read(&pi_state->refcount)); + + /* +- * When pi_state->owner is NULL then the owner died +- * and another waiter is on the fly. pi_state->owner +- * is fixed up by the task which acquires +- * pi_state->rt_mutex. +- * +- * We do not check for pid == 0 which can happen when +- * the owner died and robust_list_exit() cleared the +- * TID. ++ * Handle the owner died case: + */ +- if (pid && pi_state->owner) { ++ if (uval & FUTEX_OWNER_DIED) { ++ /* ++ * exit_pi_state_list sets owner to NULL and ++ * wakes the topmost waiter. The task which ++ * acquires the pi_state->rt_mutex will fixup ++ * owner. ++ */ ++ if (!pi_state->owner) { ++ /* ++ * No pi state owner, but the user ++ * space TID is not 0. Inconsistent ++ * state. [5] ++ */ ++ if (pid) ++ return -EINVAL; ++ /* ++ * Take a ref on the state and ++ * return. [4] ++ */ ++ goto out_state; ++ } ++ + /* +- * Bail out if user space manipulated the +- * futex value. 
++ * If TID is 0, then either the dying owner ++ * has not yet executed exit_pi_state_list() ++ * or some waiter acquired the rtmutex in the ++ * pi state, but did not yet fixup the TID in ++ * user space. ++ * ++ * Take a ref on the state and return. [6] + */ +- if (pid != task_pid_vnr(pi_state->owner)) ++ if (!pid) ++ goto out_state; ++ } else { ++ /* ++ * If the owner died bit is not set, ++ * then the pi_state must have an ++ * owner. [7] ++ */ ++ if (!pi_state->owner) + return -EINVAL; + } + ++ /* ++ * Bail out if user space manipulated the ++ * futex value. If pi state exists then the ++ * owner TID must be the same as the user ++ * space TID. [9/10] ++ */ ++ if (pid != task_pid_vnr(pi_state->owner)) ++ return -EINVAL; ++ ++ out_state: + atomic_inc(&pi_state->refcount); + *ps = pi_state; +- + return 0; + } + } + + /* + * We are the first waiter - try to look up the real owner and attach +- * the new pi_state to it, but bail out when TID = 0 ++ * the new pi_state to it, but bail out when TID = 0 [1] + */ + if (!pid) + return -ESRCH; +@@ -655,6 +741,11 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, + if (!p) + return -ESRCH; + ++ if (!p->mm) { ++ put_task_struct(p); ++ return -EPERM; ++ } ++ + /* + * We need to look at the task state flags to figure out, + * whether the task is exiting. To protect against the do_exit +@@ -675,6 +766,9 @@ lookup_pi_state(u32 uval, struct futex_hash_bucket *hb, + return ret; + } + ++ /* ++ * No existing pi state. First waiter. [2] ++ */ + pi_state = alloc_pi_state(); + + /* +@@ -746,10 +840,18 @@ retry: + return -EDEADLK; + + /* +- * Surprise - we got the lock. Just return to userspace: ++ * Surprise - we got the lock, but we do not trust user space at all. + */ +- if (unlikely(!curval)) +- return 1; ++ if (unlikely(!curval)) { ++ /* ++ * We verify whether there is kernel state for this ++ * futex. If not, we can safely assume, that the 0 -> ++ * TID transition is correct. 
If state exists, we do ++ * not bother to fixup the user space state as it was ++ * corrupted already. ++ */ ++ return futex_top_waiter(hb, key) ? -EINVAL : 1; ++ } + + uval = curval; + +@@ -879,6 +981,7 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this) + struct task_struct *new_owner; + struct futex_pi_state *pi_state = this->pi_state; + u32 uninitialized_var(curval), newval; ++ int ret = 0; + + if (!pi_state) + return -EINVAL; +@@ -902,23 +1005,19 @@ static int wake_futex_pi(u32 __user *uaddr, u32 uval, struct futex_q *this) + new_owner = this->task; + + /* +- * We pass it to the next owner. (The WAITERS bit is always +- * kept enabled while there is PI state around. We must also +- * preserve the owner died bit.) ++ * We pass it to the next owner. The WAITERS bit is always ++ * kept enabled while there is PI state around. We cleanup the ++ * owner died bit, because we are the owner. + */ +- if (!(uval & FUTEX_OWNER_DIED)) { +- int ret = 0; ++ newval = FUTEX_WAITERS | task_pid_vnr(new_owner); + +- newval = FUTEX_WAITERS | task_pid_vnr(new_owner); +- +- if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) +- ret = -EFAULT; +- else if (curval != uval) +- ret = -EINVAL; +- if (ret) { +- raw_spin_unlock(&pi_state->pi_mutex.wait_lock); +- return ret; +- } ++ if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval)) ++ ret = -EFAULT; ++ else if (curval != uval) ++ ret = -EINVAL; ++ if (ret) { ++ raw_spin_unlock(&pi_state->pi_mutex.wait_lock); ++ return ret; + } + + raw_spin_lock_irq(&pi_state->owner->pi_lock); +@@ -1197,7 +1296,7 @@ void requeue_pi_wake_futex(struct futex_q *q, union futex_key *key, + * + * Return: + * 0 - failed to acquire the lock atomically; +- * 1 - acquired the lock; ++ * >0 - acquired the lock, return value is vpid of the top_waiter + * <0 - error + */ + static int futex_proxy_trylock_atomic(u32 __user *pifutex, +@@ -1208,7 +1307,7 @@ static int futex_proxy_trylock_atomic(u32 __user *pifutex, + { + struct 
futex_q *top_waiter = NULL; + u32 curval; +- int ret; ++ int ret, vpid; + + if (get_futex_value_locked(&curval, pifutex)) + return -EFAULT; +@@ -1236,11 +1335,13 @@ static int futex_proxy_trylock_atomic(u32 __user *pifutex, + * the contended case or if set_waiters is 1. The pi_state is returned + * in ps in contended cases. + */ ++ vpid = task_pid_vnr(top_waiter->task); + ret = futex_lock_pi_atomic(pifutex, hb2, key2, ps, top_waiter->task, + set_waiters); +- if (ret == 1) ++ if (ret == 1) { + requeue_pi_wake_futex(top_waiter, key2, hb2); +- ++ return vpid; ++ } + return ret; + } + +@@ -1272,10 +1373,16 @@ static int futex_requeue(u32 __user *uaddr1, unsigned int flags, + struct futex_hash_bucket *hb1, *hb2; + struct plist_head *head1; + struct futex_q *this, *next; +- u32 curval2; + + if (requeue_pi) { + /* ++ * Requeue PI only works on two distinct uaddrs. This ++ * check is only valid for private futexes. See below. ++ */ ++ if (uaddr1 == uaddr2) ++ return -EINVAL; ++ ++ /* + * requeue_pi requires a pi_state, try to allocate it now + * without any locks in case it fails. + */ +@@ -1313,6 +1420,15 @@ retry: + if (unlikely(ret != 0)) + goto out_put_key1; + ++ /* ++ * The check above which compares uaddrs is not sufficient for ++ * shared futexes. We need to compare the keys: ++ */ ++ if (requeue_pi && match_futex(&key1, &key2)) { ++ ret = -EINVAL; ++ goto out_put_keys; ++ } ++ + hb1 = hash_futex(&key1); + hb2 = hash_futex(&key2); + +@@ -1358,16 +1474,25 @@ retry_private: + * At this point the top_waiter has either taken uaddr2 or is + * waiting on it. If the former, then the pi_state will not + * exist yet, look it up one more time to ensure we have a +- * reference to it. ++ * reference to it. If the lock was taken, ret contains the ++ * vpid of the top waiter task. 
+ */ +- if (ret == 1) { ++ if (ret > 0) { + WARN_ON(pi_state); + drop_count++; + task_count++; +- ret = get_futex_value_locked(&curval2, uaddr2); +- if (!ret) +- ret = lookup_pi_state(curval2, hb2, &key2, +- &pi_state); ++ /* ++ * If we acquired the lock, then the user ++ * space value of uaddr2 should be vpid. It ++ * cannot be changed by the top waiter as it ++ * is blocked on hb2 lock if it tries to do ++ * so. If something fiddled with it behind our ++ * back the pi state lookup might unearth ++ * it. So we rather use the known value than ++ * rereading and handing potential crap to ++ * lookup_pi_state. ++ */ ++ ret = lookup_pi_state(ret, hb2, &key2, &pi_state); + } + + switch (ret) { +@@ -2137,9 +2262,10 @@ retry: + /* + * To avoid races, try to do the TID -> 0 atomic transition + * again. If it succeeds then we can return without waking +- * anyone else up: ++ * anyone else up. We only try this if neither the waiters nor ++ * the owner died bit are set. + */ +- if (!(uval & FUTEX_OWNER_DIED) && ++ if (!(uval & ~FUTEX_TID_MASK) && + cmpxchg_futex_value_locked(&uval, uaddr, vpid, 0)) + goto pi_faulted; + /* +@@ -2171,11 +2297,9 @@ retry: + /* + * No waiters - kernel unlocks the futex: + */ +- if (!(uval & FUTEX_OWNER_DIED)) { +- ret = unlock_futex_pi(uaddr, uval); +- if (ret == -EFAULT) +- goto pi_faulted; +- } ++ ret = unlock_futex_pi(uaddr, uval); ++ if (ret == -EFAULT) ++ goto pi_faulted; + + out_unlock: + spin_unlock(&hb->lock); +@@ -2334,6 +2458,15 @@ static int futex_wait_requeue_pi(u32 __user *uaddr, unsigned int flags, + if (ret) + goto out_key2; + ++ /* ++ * The check above which compares uaddrs is not sufficient for ++ * shared futexes. We need to compare the keys: ++ */ ++ if (match_futex(&q.key, &key2)) { ++ ret = -EINVAL; ++ goto out_put_keys; ++ } ++ + /* Queue the futex_q, drop the hb lock, wait for wakeup. 
*/ + futex_wait_queue_me(hb, &q, to); + +diff --git a/kernel/hrtimer.c b/kernel/hrtimer.c +index 2288fbdada16..aadf4b7a607c 100644 +--- a/kernel/hrtimer.c ++++ b/kernel/hrtimer.c +@@ -245,6 +245,11 @@ again: + goto again; + } + timer->base = new_base; ++ } else { ++ if (cpu != this_cpu && hrtimer_check_target(timer, new_base)) { ++ cpu = this_cpu; ++ goto again; ++ } + } + return new_base; + } +@@ -580,6 +585,23 @@ hrtimer_force_reprogram(struct hrtimer_cpu_base *cpu_base, int skip_equal) + + cpu_base->expires_next.tv64 = expires_next.tv64; + ++ /* ++ * If a hang was detected in the last timer interrupt then we ++ * leave the hang delay active in the hardware. We want the ++ * system to make progress. That also prevents the following ++ * scenario: ++ * T1 expires 50ms from now ++ * T2 expires 5s from now ++ * ++ * T1 is removed, so this code is called and would reprogram ++ * the hardware to 5s from now. Any hrtimer_start after that ++ * will not reprogram the hardware due to hang_detected being ++ * set. So we'd effectively block all timers until the T2 event
++ */ ++ if (cpu_base->hang_detected) ++ return; ++ + if (cpu_base->expires_next.tv64 != KTIME_MAX) + tick_program_event(cpu_base->expires_next, 1); + } +@@ -977,11 +999,8 @@ int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim, + /* Remove an active timer from the queue: */ + ret = remove_hrtimer(timer, base); + +- /* Switch the timer base, if necessary: */ +- new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED); +- + if (mode & HRTIMER_MODE_REL) { +- tim = ktime_add_safe(tim, new_base->get_time()); ++ tim = ktime_add_safe(tim, base->get_time()); + /* + * CONFIG_TIME_LOW_RES is a temporary way for architectures + * to signal that they simply return xtime in +@@ -996,6 +1015,9 @@ int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim, + + hrtimer_set_expires_range_ns(timer, tim, delta_ns); + ++ /* Switch the timer base, if necessary: */ ++ new_base = switch_hrtimer_base(timer, base, mode & HRTIMER_MODE_PINNED); ++ + timer_stats_hrtimer_set_start_info(timer); + + leftmost = enqueue_hrtimer(timer, new_base); +diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c +index 9bd5c8a6c8ee..8815abfdf2cb 100644 +--- a/kernel/irq/manage.c ++++ b/kernel/irq/manage.c +@@ -150,7 +150,7 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask, + struct irq_chip *chip = irq_data_get_irq_chip(data); + int ret; + +- ret = chip->irq_set_affinity(data, mask, false); ++ ret = chip->irq_set_affinity(data, mask, force); + switch (ret) { + case IRQ_SET_MASK_OK: + cpumask_copy(data->affinity, mask); +@@ -162,7 +162,8 @@ int irq_do_set_affinity(struct irq_data *data, const struct cpumask *mask, + return ret; + } + +-int __irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask) ++int irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask, ++ bool force) + { + struct irq_chip *chip = irq_data_get_irq_chip(data); + struct irq_desc *desc = irq_data_to_desc(data); +@@ -172,7 +173,7 @@ int 
__irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask) + return -EINVAL; + + if (irq_can_move_pcntxt(data)) { +- ret = irq_do_set_affinity(data, mask, false); ++ ret = irq_do_set_affinity(data, mask, force); + } else { + irqd_set_move_pending(data); + irq_copy_pending(desc, mask); +@@ -187,13 +188,7 @@ int __irq_set_affinity_locked(struct irq_data *data, const struct cpumask *mask) + return ret; + } + +-/** +- * irq_set_affinity - Set the irq affinity of a given irq +- * @irq: Interrupt to set affinity +- * @mask: cpumask +- * +- */ +-int irq_set_affinity(unsigned int irq, const struct cpumask *mask) ++int __irq_set_affinity(unsigned int irq, const struct cpumask *mask, bool force) + { + struct irq_desc *desc = irq_to_desc(irq); + unsigned long flags; +@@ -203,7 +198,7 @@ int irq_set_affinity(unsigned int irq, const struct cpumask *mask) + return -EINVAL; + + raw_spin_lock_irqsave(&desc->lock, flags); +- ret = __irq_set_affinity_locked(irq_desc_get_irq_data(desc), mask); ++ ret = irq_set_affinity_locked(irq_desc_get_irq_data(desc), mask, force); + raw_spin_unlock_irqrestore(&desc->lock, flags); + return ret; + } +diff --git a/kernel/module.c b/kernel/module.c +index fa53db8aadeb..10a3af821d28 100644 +--- a/kernel/module.c ++++ b/kernel/module.c +@@ -3279,6 +3279,9 @@ static int load_module(struct load_info *info, const char __user *uargs, + + dynamic_debug_setup(info->debug, info->num_debug); + ++ /* Ftrace init must be called in the MODULE_STATE_UNFORMED state */ ++ ftrace_module_init(mod); ++ + /* Finally it's fully formed, ready to start executing. 
*/ + err = complete_formation(mod, info); + if (err) +diff --git a/kernel/timer.c b/kernel/timer.c +index 15bc1b41021d..20f45ea6f5a4 100644 +--- a/kernel/timer.c ++++ b/kernel/timer.c +@@ -822,7 +822,7 @@ unsigned long apply_slack(struct timer_list *timer, unsigned long expires) + + bit = find_last_bit(&mask, BITS_PER_LONG); + +- mask = (1 << bit) - 1; ++ mask = (1UL << bit) - 1; + + expires_limit = expires_limit & ~(mask); + +diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c +index 4b93b8412252..797d3b91a30b 100644 +--- a/kernel/trace/ftrace.c ++++ b/kernel/trace/ftrace.c +@@ -4222,16 +4222,11 @@ static void ftrace_init_module(struct module *mod, + ftrace_process_locs(mod, start, end); + } + +-static int ftrace_module_notify_enter(struct notifier_block *self, +- unsigned long val, void *data) ++void ftrace_module_init(struct module *mod) + { +- struct module *mod = data; +- +- if (val == MODULE_STATE_COMING) +- ftrace_init_module(mod, mod->ftrace_callsites, +- mod->ftrace_callsites + +- mod->num_ftrace_callsites); +- return 0; ++ ftrace_init_module(mod, mod->ftrace_callsites, ++ mod->ftrace_callsites + ++ mod->num_ftrace_callsites); + } + + static int ftrace_module_notify_exit(struct notifier_block *self, +@@ -4245,11 +4240,6 @@ static int ftrace_module_notify_exit(struct notifier_block *self, + return 0; + } + #else +-static int ftrace_module_notify_enter(struct notifier_block *self, +- unsigned long val, void *data) +-{ +- return 0; +-} + static int ftrace_module_notify_exit(struct notifier_block *self, + unsigned long val, void *data) + { +@@ -4257,11 +4247,6 @@ static int ftrace_module_notify_exit(struct notifier_block *self, + } + #endif /* CONFIG_MODULES */ + +-struct notifier_block ftrace_module_enter_nb = { +- .notifier_call = ftrace_module_notify_enter, +- .priority = INT_MAX, /* Run before anything that can use kprobes */ +-}; +- + struct notifier_block ftrace_module_exit_nb = { + .notifier_call = ftrace_module_notify_exit, + .priority = 
INT_MIN, /* Run after anything that can remove kprobes */ +@@ -4298,10 +4283,6 @@ void __init ftrace_init(void) + __start_mcount_loc, + __stop_mcount_loc); + +- ret = register_module_notifier(&ftrace_module_enter_nb); +- if (ret) +- pr_warning("Failed to register trace ftrace module enter notifier\n"); +- + ret = register_module_notifier(&ftrace_module_exit_nb); + if (ret) + pr_warning("Failed to register trace ftrace module exit notifier\n"); +diff --git a/kernel/workqueue.c b/kernel/workqueue.c +index db7a6ac7c0a8..652f36dd40de 100644 +--- a/kernel/workqueue.c ++++ b/kernel/workqueue.c +@@ -1881,6 +1881,12 @@ static void send_mayday(struct work_struct *work) + + /* mayday mayday mayday */ + if (list_empty(&pwq->mayday_node)) { ++ /* ++ * If @pwq is for an unbound wq, its base ref may be put at ++ * any time due to an attribute change. Pin @pwq until the ++ * rescuer is done with it. ++ */ ++ get_pwq(pwq); + list_add_tail(&pwq->mayday_node, &wq->maydays); + wake_up_process(wq->rescuer->task); + } +@@ -2356,6 +2362,7 @@ static int rescuer_thread(void *__rescuer) + struct worker *rescuer = __rescuer; + struct workqueue_struct *wq = rescuer->rescue_wq; + struct list_head *scheduled = &rescuer->scheduled; ++ bool should_stop; + + set_user_nice(current, RESCUER_NICE_LEVEL); + +@@ -2367,11 +2374,15 @@ static int rescuer_thread(void *__rescuer) + repeat: + set_current_state(TASK_INTERRUPTIBLE); + +- if (kthread_should_stop()) { +- __set_current_state(TASK_RUNNING); +- rescuer->task->flags &= ~PF_WQ_WORKER; +- return 0; +- } ++ /* ++ * By the time the rescuer is requested to stop, the workqueue ++ * shouldn't have any work pending, but @wq->maydays may still have ++ * pwq(s) queued. This can happen by non-rescuer workers consuming ++ * all the work items before the rescuer got to them. Go through ++ * @wq->maydays processing before acting on should_stop so that the ++ * list is always empty on exit. 
++ */ ++ should_stop = kthread_should_stop(); + + /* see whether any pwq is asking for help */ + spin_lock_irq(&wq_mayday_lock); +@@ -2403,6 +2414,12 @@ repeat: + process_scheduled_works(rescuer); + + /* ++ * Put the reference grabbed by send_mayday(). @pool won't ++ * go away while we're holding its lock. ++ */ ++ put_pwq(pwq); ++ ++ /* + * Leave this pool. If keep_working() is %true, notify a + * regular worker; otherwise, we end up with 0 concurrency + * and stalling the execution. +@@ -2417,6 +2434,12 @@ repeat: + + spin_unlock_irq(&wq_mayday_lock); + ++ if (should_stop) { ++ __set_current_state(TASK_RUNNING); ++ rescuer->task->flags &= ~PF_WQ_WORKER; ++ return 0; ++ } ++ + /* rescuers should never participate in concurrency management */ + WARN_ON_ONCE(!(rescuer->flags & WORKER_NOT_RUNNING)); + schedule(); +@@ -4043,7 +4066,8 @@ static void wq_update_unbound_numa(struct workqueue_struct *wq, int cpu, + if (!pwq) { + pr_warning("workqueue: allocation failed while updating NUMA affinity of \"%s\"\n", + wq->name); +- goto out_unlock; ++ mutex_lock(&wq->mutex); ++ goto use_dfl_pwq; + } + + /* +diff --git a/mm/memory-failure.c b/mm/memory-failure.c +index 59c62fa75c5a..4254eb021583 100644 +--- a/mm/memory-failure.c ++++ b/mm/memory-failure.c +@@ -1083,15 +1083,16 @@ int memory_failure(unsigned long pfn, int trapno, int flags) + return 0; + } else if (PageHuge(hpage)) { + /* +- * Check "just unpoisoned", "filter hit", and +- * "race with other subpage." ++ * Check "filter hit" and "race with other subpage." 
+ */ + lock_page(hpage); +- if (!PageHWPoison(hpage) +- || (hwpoison_filter(p) && TestClearPageHWPoison(p)) +- || (p != hpage && TestSetPageHWPoison(hpage))) { +- atomic_long_sub(nr_pages, &num_poisoned_pages); +- return 0; ++ if (PageHWPoison(hpage)) { ++ if ((hwpoison_filter(p) && TestClearPageHWPoison(p)) ++ || (p != hpage && TestSetPageHWPoison(hpage))) { ++ atomic_long_sub(nr_pages, &num_poisoned_pages); ++ unlock_page(hpage); ++ return 0; ++ } + } + set_page_hwpoison_huge_page(hpage); + res = dequeue_hwpoisoned_huge_page(hpage); +diff --git a/mm/memory.c b/mm/memory.c +index 4b60011907d7..ebe0f285c0e7 100644 +--- a/mm/memory.c ++++ b/mm/memory.c +@@ -1937,12 +1937,17 @@ int fixup_user_fault(struct task_struct *tsk, struct mm_struct *mm, + unsigned long address, unsigned int fault_flags) + { + struct vm_area_struct *vma; ++ vm_flags_t vm_flags; + int ret; + + vma = find_extend_vma(mm, address); + if (!vma || address < vma->vm_start) + return -EFAULT; + ++ vm_flags = (fault_flags & FAULT_FLAG_WRITE) ? 
VM_WRITE : VM_READ; ++ if (!(vm_flags & vma->vm_flags)) ++ return -EFAULT; ++ + ret = handle_mm_fault(mm, vma, address, fault_flags); + if (ret & VM_FAULT_ERROR) { + if (ret & VM_FAULT_OOM) +diff --git a/mm/mremap.c b/mm/mremap.c +index 463a25705ac6..2201d060c31b 100644 +--- a/mm/mremap.c ++++ b/mm/mremap.c +@@ -175,10 +175,17 @@ unsigned long move_page_tables(struct vm_area_struct *vma, + break; + if (pmd_trans_huge(*old_pmd)) { + int err = 0; +- if (extent == HPAGE_PMD_SIZE) ++ if (extent == HPAGE_PMD_SIZE) { ++ VM_BUG_ON(vma->vm_file || !vma->anon_vma); ++ /* See comment in move_ptes() */ ++ if (need_rmap_locks) ++ anon_vma_lock_write(vma->anon_vma); + err = move_huge_pmd(vma, new_vma, old_addr, + new_addr, old_end, + old_pmd, new_pmd); ++ if (need_rmap_locks) ++ anon_vma_unlock_write(vma->anon_vma); ++ } + if (err > 0) { + need_flush = true; + continue; +diff --git a/mm/percpu.c b/mm/percpu.c +index 8c8e08f3a692..25e2ea52db82 100644 +--- a/mm/percpu.c ++++ b/mm/percpu.c +@@ -612,7 +612,7 @@ static struct pcpu_chunk *pcpu_alloc_chunk(void) + chunk->map = pcpu_mem_zalloc(PCPU_DFL_MAP_ALLOC * + sizeof(chunk->map[0])); + if (!chunk->map) { +- kfree(chunk); ++ pcpu_mem_free(chunk, pcpu_chunk_struct_size); + return NULL; + } + +diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c +index 6c7f36379722..4c51c055d00f 100644 +--- a/net/bluetooth/hci_conn.c ++++ b/net/bluetooth/hci_conn.c +@@ -652,14 +652,17 @@ static int hci_conn_auth(struct hci_conn *conn, __u8 sec_level, __u8 auth_type) + if (!test_and_set_bit(HCI_CONN_AUTH_PEND, &conn->flags)) { + struct hci_cp_auth_requested cp; + +- /* encrypt must be pending if auth is also pending */ +- set_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags); +- + cp.handle = cpu_to_le16(conn->handle); + hci_send_cmd(conn->hdev, HCI_OP_AUTH_REQUESTED, + sizeof(cp), &cp); ++ ++ /* If we're already encrypted set the REAUTH_PEND flag, ++ * otherwise set the ENCRYPT_PEND. 
++ */ + if (conn->key_type != 0xff) + set_bit(HCI_CONN_REAUTH_PEND, &conn->flags); ++ else ++ set_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags); + } + + return 0; +diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c +index cfca44f8d048..ab2ec7c414cb 100644 +--- a/net/bluetooth/hci_event.c ++++ b/net/bluetooth/hci_event.c +@@ -3051,6 +3051,12 @@ static void hci_key_refresh_complete_evt(struct hci_dev *hdev, + if (!conn) + goto unlock; + ++ /* For BR/EDR the necessary steps are taken through the ++ * auth_complete event. ++ */ ++ if (conn->type != LE_LINK) ++ goto unlock; ++ + if (!ev->status) + conn->sec_level = conn->pending_sec_level; + +diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c +index eb0a46a49bd4..b9d7df175700 100644 +--- a/net/ceph/messenger.c ++++ b/net/ceph/messenger.c +@@ -556,7 +556,7 @@ static int ceph_tcp_sendmsg(struct socket *sock, struct kvec *iov, + return r; + } + +-static int ceph_tcp_sendpage(struct socket *sock, struct page *page, ++static int __ceph_tcp_sendpage(struct socket *sock, struct page *page, + int offset, size_t size, bool more) + { + int flags = MSG_DONTWAIT | MSG_NOSIGNAL | (more ? MSG_MORE : MSG_EOR); +@@ -569,6 +569,24 @@ static int ceph_tcp_sendpage(struct socket *sock, struct page *page, + return ret; + } + ++static int ceph_tcp_sendpage(struct socket *sock, struct page *page, ++ int offset, size_t size, bool more) ++{ ++ int ret; ++ struct kvec iov; ++ ++ /* sendpage cannot properly handle pages with page_count == 0, ++ * we need to fallback to sendmsg if that's the case */ ++ if (page_count(page) >= 1) ++ return __ceph_tcp_sendpage(sock, page, offset, size, more); ++ ++ iov.iov_base = kmap(page) + offset; ++ iov.iov_len = size; ++ ret = ceph_tcp_sendmsg(sock, &iov, 1, size, more); ++ kunmap(page); ++ ++ return ret; ++} + + /* + * Shutdown/close the socket for the given connection. 
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h +index 92ef04c72c51..845563b81a0f 100644 +--- a/net/mac80211/ieee80211_i.h ++++ b/net/mac80211/ieee80211_i.h +@@ -311,6 +311,7 @@ struct ieee80211_roc_work { + + bool started, abort, hw_begun, notified; + bool to_be_freed; ++ bool on_channel; + + unsigned long hw_start_time; + +@@ -1270,6 +1271,7 @@ void ieee80211_sta_reset_conn_monitor(struct ieee80211_sub_if_data *sdata); + void ieee80211_mgd_stop(struct ieee80211_sub_if_data *sdata); + void ieee80211_mgd_conn_tx_status(struct ieee80211_sub_if_data *sdata, + __le16 fc, bool acked); ++void ieee80211_mgd_quiesce(struct ieee80211_sub_if_data *sdata); + void ieee80211_sta_restart(struct ieee80211_sub_if_data *sdata); + + /* IBSS code */ +diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c +index 49bc2246bd86..fc94937cd7b3 100644 +--- a/net/mac80211/mlme.c ++++ b/net/mac80211/mlme.c +@@ -3754,6 +3754,32 @@ static void ieee80211_restart_sta_timer(struct ieee80211_sub_if_data *sdata) + } + + #ifdef CONFIG_PM ++void ieee80211_mgd_quiesce(struct ieee80211_sub_if_data *sdata) ++{ ++ struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; ++ u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN]; ++ ++ mutex_lock(&ifmgd->mtx); ++ ++ if (ifmgd->auth_data) { ++ /* ++ * If we are trying to authenticate while suspending, cfg80211 ++ * won't know and won't actually abort those attempts, thus we ++ * need to do that ourselves. 
++ */ ++ ieee80211_send_deauth_disassoc(sdata, ++ ifmgd->auth_data->bss->bssid, ++ IEEE80211_STYPE_DEAUTH, ++ WLAN_REASON_DEAUTH_LEAVING, ++ false, frame_buf); ++ ieee80211_destroy_auth_data(sdata, false); ++ cfg80211_send_deauth(sdata->dev, frame_buf, ++ IEEE80211_DEAUTH_FRAME_LEN); ++ } ++ ++ mutex_unlock(&ifmgd->mtx); ++} ++ + void ieee80211_sta_restart(struct ieee80211_sub_if_data *sdata) + { + struct ieee80211_if_managed *ifmgd = &sdata->u.mgd; +diff --git a/net/mac80211/offchannel.c b/net/mac80211/offchannel.c +index 11d3f227e11e..0427a58b4397 100644 +--- a/net/mac80211/offchannel.c ++++ b/net/mac80211/offchannel.c +@@ -333,7 +333,7 @@ void ieee80211_sw_roc_work(struct work_struct *work) + container_of(work, struct ieee80211_roc_work, work.work); + struct ieee80211_sub_if_data *sdata = roc->sdata; + struct ieee80211_local *local = sdata->local; +- bool started; ++ bool started, on_channel; + + mutex_lock(&local->mtx); + +@@ -354,14 +354,24 @@ void ieee80211_sw_roc_work(struct work_struct *work) + if (!roc->started) { + struct ieee80211_roc_work *dep; + +- /* start this ROC */ +- ieee80211_offchannel_stop_vifs(local); ++ WARN_ON(local->use_chanctx); ++ ++ /* If actually operating on the desired channel (with at least ++ * 20 MHz channel width) don't stop all the operations but still ++ * treat it as though the ROC operation started properly, so ++ * other ROC operations won't interfere with this one. 
++ */ ++ roc->on_channel = roc->chan == local->_oper_chandef.chan; + +- /* switch channel etc */ ++ /* start this ROC */ + ieee80211_recalc_idle(local); + +- local->tmp_channel = roc->chan; +- ieee80211_hw_config(local, 0); ++ if (!roc->on_channel) { ++ ieee80211_offchannel_stop_vifs(local); ++ ++ local->tmp_channel = roc->chan; ++ ieee80211_hw_config(local, 0); ++ } + + /* tell userspace or send frame */ + ieee80211_handle_roc_started(roc); +@@ -380,9 +390,10 @@ void ieee80211_sw_roc_work(struct work_struct *work) + finish: + list_del(&roc->list); + started = roc->started; ++ on_channel = roc->on_channel; + ieee80211_roc_notify_destroy(roc, !roc->abort); + +- if (started) { ++ if (started && !on_channel) { + ieee80211_flush_queues(local, NULL); + + local->tmp_channel = NULL; +diff --git a/net/mac80211/pm.c b/net/mac80211/pm.c +index 340126204343..efb510e6f206 100644 +--- a/net/mac80211/pm.c ++++ b/net/mac80211/pm.c +@@ -101,10 +101,18 @@ int __ieee80211_suspend(struct ieee80211_hw *hw, struct cfg80211_wowlan *wowlan) + + /* remove all interfaces that were created in the driver */ + list_for_each_entry(sdata, &local->interfaces, list) { +- if (!ieee80211_sdata_running(sdata) || +- sdata->vif.type == NL80211_IFTYPE_AP_VLAN || +- sdata->vif.type == NL80211_IFTYPE_MONITOR) ++ if (!ieee80211_sdata_running(sdata)) + continue; ++ switch (sdata->vif.type) { ++ case NL80211_IFTYPE_AP_VLAN: ++ case NL80211_IFTYPE_MONITOR: ++ continue; ++ case NL80211_IFTYPE_STATION: ++ ieee80211_mgd_quiesce(sdata); ++ break; ++ default: ++ break; ++ } + + drv_remove_interface(local, sdata); + } +diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c +index 5f055d7ee85b..1800db643a16 100644 +--- a/sound/pci/hda/hda_intel.c ++++ b/sound/pci/hda/hda_intel.c +@@ -3856,6 +3856,9 @@ static DEFINE_PCI_DEVICE_TABLE(azx_ids) = { + /* Lynx Point */ + { PCI_DEVICE(0x8086, 0x8c20), + .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, ++ /* 9 Series */ ++ { PCI_DEVICE(0x8086, 0x8ca0), ++ 
.driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, + /* Wellsburg */ + { PCI_DEVICE(0x8086, 0x8d20), + .driver_data = AZX_DRIVER_PCH | AZX_DCAPS_INTEL_PCH }, +diff --git a/sound/soc/codecs/wm8962.c b/sound/soc/codecs/wm8962.c +index e3cd86514cea..1ae1f8bd9c36 100644 +--- a/sound/soc/codecs/wm8962.c ++++ b/sound/soc/codecs/wm8962.c +@@ -153,6 +153,7 @@ static struct reg_default wm8962_reg[] = { + { 40, 0x0000 }, /* R40 - SPKOUTL volume */ + { 41, 0x0000 }, /* R41 - SPKOUTR volume */ + ++ { 49, 0x0010 }, /* R49 - Class D Control 1 */ + { 51, 0x0003 }, /* R51 - Class D Control 2 */ + + { 56, 0x0506 }, /* R56 - Clocking 4 */ +@@ -794,7 +795,6 @@ static bool wm8962_volatile_register(struct device *dev, unsigned int reg) + case WM8962_ALC2: + case WM8962_THERMAL_SHUTDOWN_STATUS: + case WM8962_ADDITIONAL_CONTROL_4: +- case WM8962_CLASS_D_CONTROL_1: + case WM8962_DC_SERVO_6: + case WM8962_INTERRUPT_STATUS_1: + case WM8962_INTERRUPT_STATUS_2: +@@ -2901,13 +2901,22 @@ static int wm8962_set_fll(struct snd_soc_codec *codec, int fll_id, int source, + static int wm8962_mute(struct snd_soc_dai *dai, int mute) + { + struct snd_soc_codec *codec = dai->codec; +- int val; ++ int val, ret; + + if (mute) +- val = WM8962_DAC_MUTE; ++ val = WM8962_DAC_MUTE | WM8962_DAC_MUTE_ALT; + else + val = 0; + ++ /** ++ * The DAC mute bit is mirrored in two registers, update both to keep ++ * the register cache consistent. 
++     */
++    ret = snd_soc_update_bits(codec, WM8962_CLASS_D_CONTROL_1,
++                              WM8962_DAC_MUTE_ALT, val);
++    if (ret < 0)
++        return ret;
++
+     return snd_soc_update_bits(codec, WM8962_ADC_DAC_CONTROL_1,
+                                WM8962_DAC_MUTE, val);
+ }
+diff --git a/sound/soc/codecs/wm8962.h b/sound/soc/codecs/wm8962.h
+index a1a5d5294c19..910aafd09d21 100644
+--- a/sound/soc/codecs/wm8962.h
++++ b/sound/soc/codecs/wm8962.h
+@@ -1954,6 +1954,10 @@
+ #define WM8962_SPKOUTL_ENA_MASK 0x0040 /* SPKOUTL_ENA */
+ #define WM8962_SPKOUTL_ENA_SHIFT 6 /* SPKOUTL_ENA */
+ #define WM8962_SPKOUTL_ENA_WIDTH 1 /* SPKOUTL_ENA */
++#define WM8962_DAC_MUTE_ALT 0x0010 /* DAC_MUTE */
++#define WM8962_DAC_MUTE_ALT_MASK 0x0010 /* DAC_MUTE */
++#define WM8962_DAC_MUTE_ALT_SHIFT 4 /* DAC_MUTE */
++#define WM8962_DAC_MUTE_ALT_WIDTH 1 /* DAC_MUTE */
+ #define WM8962_SPKOUTL_PGA_MUTE 0x0002 /* SPKOUTL_PGA_MUTE */
+ #define WM8962_SPKOUTL_PGA_MUTE_MASK 0x0002 /* SPKOUTL_PGA_MUTE */
+ #define WM8962_SPKOUTL_PGA_MUTE_SHIFT 1 /* SPKOUTL_PGA_MUTE */
+diff --git a/sound/usb/card.h b/sound/usb/card.h
+index bf2889a2cae5..82c2d80c8228 100644
+--- a/sound/usb/card.h
++++ b/sound/usb/card.h
+@@ -90,6 +90,7 @@ struct snd_usb_endpoint {
+     unsigned int curframesize; /* current packet size in frames (for capture) */
+     unsigned int syncmaxsize; /* sync endpoint packet size */
+     unsigned int fill_max:1; /* fill max packet size always */
++    unsigned int udh01_fb_quirk:1; /* corrupted feedback data */
+     unsigned int datainterval; /* log_2 of data packet interval */
+     unsigned int syncinterval; /* P for adaptive mode, 0 otherwise */
+     unsigned char silence_value;
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index 659950e5b94f..308c02b2a597 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -467,6 +467,10 @@ struct snd_usb_endpoint *snd_usb_add_endpoint(struct snd_usb_audio *chip,
+             ep->syncinterval = 3;
+ 
+         ep->syncmaxsize = le16_to_cpu(get_endpoint(alts, 1)->wMaxPacketSize);
++
++        if (chip->usb_id == USB_ID(0x0644, 0x8038) /* TEAC UD-H01 */ &&
++            ep->syncmaxsize == 4)
++            ep->udh01_fb_quirk = 1;
+     }
+ 
+     list_add_tail(&ep->list, &chip->ep_list);
+@@ -1075,7 +1079,16 @@ void snd_usb_handle_sync_urb(struct snd_usb_endpoint *ep,
+     if (f == 0)
+         return;
+ 
+-    if (unlikely(ep->freqshift == INT_MIN)) {
++    if (unlikely(sender->udh01_fb_quirk)) {
++        /*
++         * The TEAC UD-H01 firmware sometimes changes the feedback value
++         * by +/- 0x1.0000.
++         */
++        if (f < ep->freqn - 0x8000)
++            f += 0x10000;
++        else if (f > ep->freqn + 0x8000)
++            f -= 0x10000;
++    } else if (unlikely(ep->freqshift == INT_MIN)) {
+         /*
+          * The first time we see a feedback value, determine its format
+          * by shifting it left or right until it matches the nominal

Deleted: genpatches-2.6/trunk/3.10/1501-futex-add-another-early-deadlock-detection-check.patch
===================================================================
--- genpatches-2.6/trunk/3.10/1501-futex-add-another-early-deadlock-detection-check.patch	2014-06-09 00:42:33 UTC (rev 2818)
+++ genpatches-2.6/trunk/3.10/1501-futex-add-another-early-deadlock-detection-check.patch	2014-06-09 12:27:34 UTC (rev 2819)
@@ -1,160 +0,0 @@
-From: Thomas Gleixner
-Date: Mon, 12 May 2014 20:45:34 +0000
-Subject: futex: Add another early deadlock detection check
-Git-commit: 866293ee54227584ffcb4a42f69c1f365974ba7f
-
-Dave Jones trinity syscall fuzzer exposed an issue in the deadlock
-detection code of rtmutex:
- http://lkml.kernel.org/r/20140429151655.GA14277@redhat.com
-
-That underlying issue has been fixed with a patch to the rtmutex code,
-but the futex code must not call into rtmutex in that case because
- - it can detect that issue early
- - it avoids a different and more complex fixup for backing out
-
-If the user space variable got manipulated to 0x80000000 which means
-no lock holder, but the waiters bit set and an active pi_state in the
-kernel is found we can figure out the recursive locking issue by
-looking at the pi_state owner. If that is the current task, then we
-can safely return -EDEADLK.
-
-The check should have been added in commit 59fa62451 (futex: Handle
-futex_pi OWNER_DIED take over correctly) already, but I did not see
-the above issue caused by user space manipulation back then.
-
-Signed-off-by: Thomas Gleixner
-Cc: Dave Jones
-Cc: Linus Torvalds
-Cc: Peter Zijlstra
-Cc: Darren Hart
-Cc: Davidlohr Bueso
-Cc: Steven Rostedt
-Cc: Clark Williams
-Cc: Paul McKenney
-Cc: Lai Jiangshan
-Cc: Roland McGrath
-Cc: Carlos ODonell
-Cc: Jakub Jelinek
-Cc: Michael Kerrisk
-Cc: Sebastian Andrzej Siewior
-Link: http://lkml.kernel.org/r/20140512201701.097349971@linutronix.de
-Signed-off-by: Thomas Gleixner
-Cc: stable@vger.kernel.org
----
- kernel/futex.c | 47 ++++++++++++++++++++++++++++++++++-------------
- 1 file changed, 34 insertions(+), 13 deletions(-)
-
-Index: linux-3.12/kernel/futex.c
-===================================================================
---- linux-3.12.orig/kernel/futex.c
-+++ linux-3.12/kernel/futex.c
-@@ -596,7 +596,8 @@ void exit_pi_state_list(struct task_stru
- 
- static int
- lookup_pi_state(u32 uval, struct futex_hash_bucket *hb,
--        union futex_key *key, struct futex_pi_state **ps)
-+        union futex_key *key, struct futex_pi_state **ps,
-+        struct task_struct *task)
- {
-     struct futex_pi_state *pi_state = NULL;
-     struct futex_q *this, *next;
-@@ -640,6 +641,16 @@ lookup_pi_state(u32 uval, struct futex_h
-                 return -EINVAL;
-             }
- 
-+            /*
-+             * Protect against a corrupted uval. If uval
-+             * is 0x80000000 then pid is 0 and the waiter
-+             * bit is set. So the deadlock check in the
-+             * calling code has failed and we did not fall
-+             * into the check above due to !pid.
-+             */
-+            if (task && pi_state->owner == task)
-+                return -EDEADLK;
-+
-             atomic_inc(&pi_state->refcount);
-             *ps = pi_state;
- 
-@@ -789,7 +800,7 @@ retry:
-      * We dont have the lock. Look up the PI state (or create it if
-      * we are the first waiter):
-      */
--    ret = lookup_pi_state(uval, hb, key, ps);
-+    ret = lookup_pi_state(uval, hb, key, ps, task);
- 
-     if (unlikely(ret)) {
-         switch (ret) {
-@@ -1199,7 +1210,7 @@ void requeue_pi_wake_futex(struct futex_
-  *
-  * Return:
-  *  0 - failed to acquire the lock atomically;
-- *  1 - acquired the lock;
-+ * >0 - acquired the lock, return value is vpid of the top_waiter
-  * <0 - error
-  */
- static int futex_proxy_trylock_atomic(u32 __user *pifutex,
-@@ -1210,7 +1221,7 @@ static int futex_proxy_trylock_atomic(u3
- {
-     struct futex_q *top_waiter = NULL;
-     u32 curval;
--    int ret;
-+    int ret, vpid;
- 
-     if (get_futex_value_locked(&curval, pifutex))
-         return -EFAULT;
-@@ -1238,11 +1249,13 @@ static int futex_proxy_trylock_atomic(u3
-      * the contended case or if set_waiters is 1. The pi_state is returned
-      * in ps in contended cases.
-      */
-+    vpid = task_pid_vnr(top_waiter->task);
-     ret = futex_lock_pi_atomic(pifutex, hb2, key2, ps, top_waiter->task,
-                    set_waiters);
--    if (ret == 1)
-+    if (ret == 1) {
-         requeue_pi_wake_futex(top_waiter, key2, hb2);
--
-+        return vpid;
-+    }
-     return ret;
- }
- 
-@@ -1274,7 +1287,6 @@ static int futex_requeue(u32 __user *uad
-     struct futex_hash_bucket *hb1, *hb2;
-     struct plist_head *head1;
-     struct futex_q *this, *next;
--    u32 curval2;
- 
-     if (requeue_pi) {
-         /*
-@@ -1360,16 +1372,25 @@ retry_private:
-          * At this point the top_waiter has either taken uaddr2 or is
-          * waiting on it. If the former, then the pi_state will not
-          * exist yet, look it up one more time to ensure we have a
--         * reference to it.
-+         * reference to it. If the lock was taken, ret contains the
-+         * vpid of the top waiter task.
-         */
--        if (ret == 1) {
-+        if (ret > 0) {
-             WARN_ON(pi_state);
-             drop_count++;
-             task_count++;
--            ret = get_futex_value_locked(&curval2, uaddr2);
--            if (!ret)
--                ret = lookup_pi_state(curval2, hb2, &key2,
--                              &pi_state);
-+            /*
-+             * If we acquired the lock, then the user
-+             * space value of uaddr2 should be vpid. It
-+             * cannot be changed by the top waiter as it
-+             * is blocked on hb2 lock if it tries to do
-+             * so. If something fiddled with it behind our
-+             * back the pi state lookup might unearth
-+             * it. So we rather use the known value than
-+             * rereading and handing potential crap to
-+             * lookup_pi_state.
-+             */
-+            ret = lookup_pi_state(ret, hb2, &key2, &pi_state, NULL);
-         }
- 
-         switch (ret) {

Deleted: genpatches-2.6/trunk/3.10/1502-futex-prevent-attaching-to-kernel-threads.patch
===================================================================
--- genpatches-2.6/trunk/3.10/1502-futex-prevent-attaching-to-kernel-threads.patch	2014-06-09 00:42:33 UTC (rev 2818)
+++ genpatches-2.6/trunk/3.10/1502-futex-prevent-attaching-to-kernel-threads.patch	2014-06-09 12:27:34 UTC (rev 2819)
@@ -1,52 +0,0 @@
-From: Thomas Gleixner
-Date: Mon, 12 May 2014 20:45:35 +0000
-Subject: futex: Prevent attaching to kernel threads
-Git-commit: f0d71b3dcb8332f7971b5f2363632573e6d9486a
-
-We happily allow userspace to declare a random kernel thread to be the
-owner of a user space PI futex.
-
-Found while analysing the fallout of Dave Jones syscall fuzzer.
-
-We also should validate the thread group for private futexes and find
-some fast way to validate whether the "alleged" owner has RW access on
-the file which backs the SHM, but that's a separate issue.
-
-Signed-off-by: Thomas Gleixner
-Cc: Dave Jones
-Cc: Linus Torvalds
-Cc: Peter Zijlstra
-Cc: Darren Hart
-Cc: Davidlohr Bueso
-Cc: Steven Rostedt
-Cc: Clark Williams
-Cc: Paul McKenney
-Cc: Lai Jiangshan
-Cc: Roland McGrath
-Cc: Carlos ODonell
-Cc: Jakub Jelinek
-Cc: Michael Kerrisk
-Cc: Sebastian Andrzej Siewior
-Link: http://lkml.kernel.org/r/20140512201701.194824402@linutronix.de
-Signed-off-by: Thomas Gleixner
-Cc: stable@vger.kernel.org
----
- kernel/futex.c | 5 +++++
- 1 file changed, 5 insertions(+)
-
-Index: linux-3.12/kernel/futex.c
-===================================================================
---- linux-3.12.orig/kernel/futex.c
-+++ linux-3.12/kernel/futex.c
-@@ -668,6 +668,11 @@ lookup_pi_state(u32 uval, struct futex_h
-     if (!p)
-         return -ESRCH;
- 
-+    if (!p->mm) {
-+        put_task_struct(p);
-+        return -EPERM;
-+    }
-+
-     /*
-      * We need to look at the task state flags to figure out,
-      * whether the task is exiting. To protect against the do_exit

Deleted: genpatches-2.6/trunk/3.10/1503-futex-prevent-requeue-pi-on-same-futex-patch-futex-forbid-uaddr-uaddr2-in-futex_requeue-requeue_pi-1.patch
===================================================================
--- genpatches-2.6/trunk/3.10/1503-futex-prevent-requeue-pi-on-same-futex-patch-futex-forbid-uaddr-uaddr2-in-futex_requeue-requeue_pi-1.patch	2014-06-09 00:42:33 UTC (rev 2818)
+++ genpatches-2.6/trunk/3.10/1503-futex-prevent-requeue-pi-on-same-futex-patch-futex-forbid-uaddr-uaddr2-in-futex_requeue-requeue_pi-1.patch	2014-06-09 12:27:34 UTC (rev 2819)
@@ -1,81 +0,0 @@
-From: Thomas Gleixner
-Date: Tue, 3 Jun 2014 12:27:06 +0000
-Subject: futex-prevent-requeue-pi-on-same-futex.patch futex: Forbid uaddr ==
- uaddr2 in futex_requeue(..., requeue_pi=1)
-Git-commit: e9c243a5a6de0be8e584c604d353412584b592f8
-
-If uaddr == uaddr2, then we have broken the rule of only requeueing from
-a non-pi futex to a pi futex with this call. If we attempt this, then
-dangling pointers may be left for rt_waiter resulting in an exploitable
-condition.
-
-This change brings futex_requeue() in line with futex_wait_requeue_pi()
-which performs the same check as per commit 6f7b0a2a5c0f ("futex: Forbid
-uaddr == uaddr2 in futex_wait_requeue_pi()")
-
-[ tglx: Compare the resulting keys as well, as uaddrs might be
- different depending on the mapping ]
-
-Fixes CVE-2014-3153.
-
-Reported-by: Pinkie Pie
-Signed-off-by: Will Drewry
-Signed-off-by: Kees Cook
-Cc: stable@vger.kernel.org
-Signed-off-by: Thomas Gleixner
-Reviewed-by: Darren Hart
-Signed-off-by: Linus Torvalds
----
- kernel/futex.c | 25 +++++++++++++++++++++++++
- 1 file changed, 25 insertions(+)
-
-Index: linux-3.12/kernel/futex.c
-===================================================================
---- linux-3.12.orig/kernel/futex.c
-+++ linux-3.12/kernel/futex.c
-@@ -1295,6 +1295,13 @@ static int futex_requeue(u32 __user *uad
- 
-     if (requeue_pi) {
-         /*
-+         * Requeue PI only works on two distinct uaddrs. This
-+         * check is only valid for private futexes. See below.
-+         */
-+        if (uaddr1 == uaddr2)
-+            return -EINVAL;
-+
-+        /*
-          * requeue_pi requires a pi_state, try to allocate it now
-          * without any locks in case it fails.
-          */
-@@ -1332,6 +1339,15 @@ retry:
-     if (unlikely(ret != 0))
-         goto out_put_key1;
- 
-+    /*
-+     * The check above which compares uaddrs is not sufficient for
-+     * shared futexes. We need to compare the keys:
-+     */
-+    if (requeue_pi && match_futex(&key1, &key2)) {
-+        ret = -EINVAL;
-+        goto out_put_keys;
-+    }
-+
-     hb1 = hash_futex(&key1);
-     hb2 = hash_futex(&key2);
- 
-@@ -2362,6 +2378,15 @@ static int futex_wait_requeue_pi(u32 __u
-     if (ret)
-         goto out_key2;
- 
-+    /*
-+     * The check above which compares uaddrs is not sufficient for
-+     * shared futexes. We need to compare the keys:
-+     */
-+    if (match_futex(&q.key, &key2)) {
-+        ret = -EINVAL;
-+        goto out_put_keys;
-+    }
-+
-     /* Queue the futex_q, drop the hb lock, wait for wakeup. */
-     futex_wait_queue_me(hb, &q, to);
- 

Deleted: genpatches-2.6/trunk/3.10/1504-futex-validate-atomic-acquisition-in-futex_lock_pi_atomic.patch
===================================================================
--- genpatches-2.6/trunk/3.10/1504-futex-validate-atomic-acquisition-in-futex_lock_pi_atomic.patch	2014-06-09 00:42:33 UTC (rev 2818)
+++ genpatches-2.6/trunk/3.10/1504-futex-validate-atomic-acquisition-in-futex_lock_pi_atomic.patch	2014-06-09 12:27:34 UTC (rev 2819)
@@ -1,53 +0,0 @@
-From: Thomas Gleixner
-Date: Tue, 3 Jun 2014 12:27:06 +0000
-Subject: futex: Validate atomic acquisition in futex_lock_pi_atomic()
-Git-commit: b3eaa9fc5cd0a4d74b18f6b8dc617aeaf1873270
-
-We need to protect the atomic acquisition in the kernel against rogue
-user space which sets the user space futex to 0, so the kernel side
-acquisition succeeds while there is existing state in the kernel
-associated to the real owner.
-
-Verify whether the futex has waiters associated with kernel state. If
-it has, return -EINVAL. The state is corrupted already, so no point in
-cleaning it up. Subsequent calls will fail as well. Not our problem.
-
-[ tglx: Use futex_top_waiter() and explain why we do not need to try
- restoring the already corrupted user space state. ]
-
-Signed-off-by: Darren Hart
-Cc: Kees Cook
-Cc: Will Drewry
-Cc: stable@vger.kernel.org
-Signed-off-by: Thomas Gleixner
-Signed-off-by: Linus Torvalds
----
- kernel/futex.c | 14 +++++++++++---
- 1 file changed, 11 insertions(+), 3 deletions(-)
-
-Index: linux-3.12/kernel/futex.c
-===================================================================
---- linux-3.12.orig/kernel/futex.c
-+++ linux-3.12/kernel/futex.c
-@@ -764,10 +764,18 @@ retry:
-         return -EDEADLK;
- 
-     /*
--     * Surprise - we got the lock. Just return to userspace:
-+     * Surprise - we got the lock, but we do not trust user space at all.
-     */
--    if (unlikely(!curval))
--        return 1;
-+    if (unlikely(!curval)) {
-+        /*
-+         * We verify whether there is kernel state for this
-+         * futex. If not, we can safely assume, that the 0 ->
-+         * TID transition is correct. If state exists, we do
-+         * not bother to fixup the user space state as it was
-+         * corrupted already.
-+         */
-+        return futex_top_waiter(hb, key) ? -EINVAL : 1;
-+    }
- 
-     uval = curval;
- 

Deleted: genpatches-2.6/trunk/3.10/1505-futex-always-cleanup-owner-tid-in-unlock_pi.patch
===================================================================
--- genpatches-2.6/trunk/3.10/1505-futex-always-cleanup-owner-tid-in-unlock_pi.patch	2014-06-09 00:42:33 UTC (rev 2818)
+++ genpatches-2.6/trunk/3.10/1505-futex-always-cleanup-owner-tid-in-unlock_pi.patch	2014-06-09 12:27:34 UTC (rev 2819)
@@ -1,99 +0,0 @@
-From: Thomas Gleixner
-Date: Tue, 3 Jun 2014 12:27:07 +0000
-Subject: futex: Always cleanup owner tid in unlock_pi
-Git-commit: 13fbca4c6ecd96ec1a1cfa2e4f2ce191fe928a5e
-
-If the owner died bit is set at futex_unlock_pi, we currently do not
-cleanup the user space futex. So the owner TID of the current owner
-(the unlocker) persists. That's observable inconsistant state,
-especially when the ownership of the pi state got transferred.
-
-Clean it up unconditionally.
-
-Signed-off-by: Thomas Gleixner
-Cc: Kees Cook
-Cc: Will Drewry
-Cc: Darren Hart
-Cc: stable@vger.kernel.org
-Signed-off-by: Linus Torvalds
----
- kernel/futex.c | 40 ++++++++++++++++++----------------------
- 1 file changed, 18 insertions(+), 22 deletions(-)
-
-Index: linux-3.12/kernel/futex.c
-===================================================================
---- linux-3.12.orig/kernel/futex.c
-+++ linux-3.12/kernel/futex.c
-@@ -905,6 +905,7 @@ static int wake_futex_pi(u32 __user *uad
-     struct task_struct *new_owner;
-     struct futex_pi_state *pi_state = this->pi_state;
-     u32 uninitialized_var(curval), newval;
-+    int ret = 0;
- 
-     if (!pi_state)
-         return -EINVAL;
-@@ -928,23 +929,19 @@ static int wake_futex_pi(u32 __user *uad
-         new_owner = this->task;
- 
-     /*
--     * We pass it to the next owner. (The WAITERS bit is always
--     * kept enabled while there is PI state around. We must also
--     * preserve the owner died bit.)
--     */
--    if (!(uval & FUTEX_OWNER_DIED)) {
--        int ret = 0;
--
--        newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
--
--        if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
--            ret = -EFAULT;
--        else if (curval != uval)
--            ret = -EINVAL;
--        if (ret) {
--            raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
--            return ret;
--        }
-+     * We pass it to the next owner. The WAITERS bit is always
-+     * kept enabled while there is PI state around. We cleanup the
-+     * owner died bit, because we are the owner.
-+     */
-+    newval = FUTEX_WAITERS | task_pid_vnr(new_owner);
-+
-+    if (cmpxchg_futex_value_locked(&curval, uaddr, uval, newval))
-+        ret = -EFAULT;
-+    else if (curval != uval)
-+        ret = -EINVAL;
-+    if (ret) {
-+        raw_spin_unlock(&pi_state->pi_mutex.wait_lock);
-+        return ret;
-     }
- 
-     raw_spin_lock_irq(&pi_state->owner->pi_lock);
-@@ -2189,9 +2186,10 @@ retry:
-     /*
-      * To avoid races, try to do the TID -> 0 atomic transition
-      * again. If it succeeds then we can return without waking
--     * anyone else up:
-+     * anyone else up. We only try this if neither the waiters nor
-+     * the owner died bit are set.
-      */
--    if (!(uval & FUTEX_OWNER_DIED) &&
-+    if (!(uval & ~FUTEX_TID_MASK) &&
-         cmpxchg_futex_value_locked(&uval, uaddr, vpid, 0))
-         goto pi_faulted;
-     /*
-@@ -2223,11 +2221,9 @@ retry:
-     /*
-      * No waiters - kernel unlocks the futex:
-      */
--    if (!(uval & FUTEX_OWNER_DIED)) {
--        ret = unlock_futex_pi(uaddr, uval);
--        if (ret == -EFAULT)
--            goto pi_faulted;
--    }
-+    ret = unlock_futex_pi(uaddr, uval);
-+    if (ret == -EFAULT)
-+        goto pi_faulted;
- 
- out_unlock:
-     spin_unlock(&hb->lock);

Deleted: genpatches-2.6/trunk/3.10/1506-futex-make-lookup_pi_state-more-robust.patch
===================================================================
(Binary files differ)
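For reference, the TEAC UD-H01 feedback correction in the sound/usb/endpoint.c hunk above is self-contained arithmetic and can be sketched outside the kernel. The sketch below is illustrative only (the function name and Q16.16 framing are mine; the 0x8000/0x10000 constants are the driver's): a reported feedback rate more than half a unit away from the nominal rate is assumed to have jumped by a whole 0x1.0000 and is pulled back.

```python
def fix_udh01_feedback(f, freqn):
    """Undo the UD-H01 firmware's spurious +/- 0x1.0000 feedback jumps.

    f:     feedback rate reported by the device (Q16.16 fixed point)
    freqn: nominal rate for the stream (Q16.16 fixed point)
    """
    if f < freqn - 0x8000:        # wrapped a whole unit low
        f += 0x10000
    elif f > freqn + 0x8000:      # wrapped a whole unit high
        f -= 0x10000
    return f

# A value within 0x8000 of nominal is left untouched, so legitimate
# small rate deviations still reach the rate-tracking code unchanged.
print(hex(fix_udh01_feedback(0x50000, 0x60000)))  # low outlier pulled up
print(hex(fix_udh01_feedback(0x60123, 0x60000)))  # in-range value kept
```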