public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] linux-patches r2175 - genpatches-2.6/trunk/3.4
@ 2012-07-17 14:52 Mike Pagano (mpagano)
From: Mike Pagano (mpagano) @ 2012-07-17 14:52 UTC
  To: gentoo-commits

Author: mpagano
Date: 2012-07-17 14:52:11 +0000 (Tue, 17 Jul 2012)
New Revision: 2175

Added:
   genpatches-2.6/trunk/3.4/1004_linux-3.4.5.patch
Removed:
   genpatches-2.6/trunk/3.4/1900_cifs-double-delim-check.patch
Modified:
   genpatches-2.6/trunk/3.4/0000_README
Log:
Linux patch 3.4.5. Removal of redundant patch.

Modified: genpatches-2.6/trunk/3.4/0000_README
===================================================================
--- genpatches-2.6/trunk/3.4/0000_README	2012-07-07 16:52:38 UTC (rev 2174)
+++ genpatches-2.6/trunk/3.4/0000_README	2012-07-17 14:52:11 UTC (rev 2175)
@@ -55,14 +55,14 @@
 From:   http://www.kernel.org
 Desc:   Linux 3.4.4
 
+Patch:  1004_linux-3.4.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 3.4.5
+
 Patch:  1700_correct-bnx2-firware-ver-mips.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=424609
 Desc:   Correct firmware version for bnx2 on mips
 
-Patch:  1900_cifs-double-delim-check.patch
-From:   http://bugzillafiles.novell.org/attachment.cgi?id=494407
-Desc:   Patch to properly parse cifs mount options
-
 Patch:  2400_kcopy-patch-for-infiniband-driver.patch
 From:   Alexey Shvetsov <alexxy@gentoo.org>
 Desc:   Zero copy for infiniband psm userspace driver

Added: genpatches-2.6/trunk/3.4/1004_linux-3.4.5.patch
===================================================================
--- genpatches-2.6/trunk/3.4/1004_linux-3.4.5.patch	                        (rev 0)
+++ genpatches-2.6/trunk/3.4/1004_linux-3.4.5.patch	2012-07-17 14:52:11 UTC (rev 2175)
@@ -0,0 +1,7455 @@
+diff --git a/Documentation/device-mapper/verity.txt b/Documentation/device-mapper/verity.txt
+index 32e4879..9884681 100644
+--- a/Documentation/device-mapper/verity.txt
++++ b/Documentation/device-mapper/verity.txt
+@@ -7,39 +7,39 @@ This target is read-only.
+ 
+ Construction Parameters
+ =======================
+-    <version> <dev> <hash_dev> <hash_start>
++    <version> <dev> <hash_dev>
+     <data_block_size> <hash_block_size>
+     <num_data_blocks> <hash_start_block>
+     <algorithm> <digest> <salt>
+ 
+ <version>
+-    This is the version number of the on-disk format.
++    This is the type of the on-disk hash format.
+ 
+     0 is the original format used in the Chromium OS.
+-	The salt is appended when hashing, digests are stored continuously and
+-	the rest of the block is padded with zeros.
++      The salt is appended when hashing, digests are stored continuously and
++      the rest of the block is padded with zeros.
+ 
+     1 is the current format that should be used for new devices.
+-	The salt is prepended when hashing and each digest is
+-	padded with zeros to the power of two.
++      The salt is prepended when hashing and each digest is
++      padded with zeros to the power of two.
+ 
+ <dev>
+-    This is the device containing the data the integrity of which needs to be
++    This is the device containing data, the integrity of which needs to be
+     checked.  It may be specified as a path, like /dev/sdaX, or a device number,
+     <major>:<minor>.
+ 
+ <hash_dev>
+-    This is the device that that supplies the hash tree data.  It may be
++    This is the device that supplies the hash tree data.  It may be
+     specified similarly to the device path and may be the same device.  If the
+-    same device is used, the hash_start should be outside of the dm-verity
+-    configured device size.
++    same device is used, the hash_start should be outside the configured
++    dm-verity device.
+ 
+ <data_block_size>
+-    The block size on a data device.  Each block corresponds to one digest on
+-    the hash device.
++    The block size on a data device in bytes.
++    Each block corresponds to one digest on the hash device.
+ 
+ <hash_block_size>
+-    The size of a hash block.
++    The size of a hash block in bytes.
+ 
+ <num_data_blocks>
+     The number of data blocks on the data device.  Additional blocks are
+@@ -65,7 +65,7 @@ Construction Parameters
+ Theory of operation
+ ===================
+ 
+-dm-verity is meant to be setup as part of a verified boot path.  This
++dm-verity is meant to be set up as part of a verified boot path.  This
+ may be anything ranging from a boot using tboot or trustedgrub to just
+ booting from a known-good device (like a USB drive or CD).
+ 
+@@ -73,20 +73,20 @@ When a dm-verity device is configured, it is expected that the caller
+ has been authenticated in some way (cryptographic signatures, etc).
+ After instantiation, all hashes will be verified on-demand during
+ disk access.  If they cannot be verified up to the root node of the
+-tree, the root hash, then the I/O will fail.  This should identify
++tree, the root hash, then the I/O will fail.  This should detect
+ tampering with any data on the device and the hash data.
+ 
+ Cryptographic hashes are used to assert the integrity of the device on a
+-per-block basis.  This allows for a lightweight hash computation on first read
+-into the page cache.  Block hashes are stored linearly-aligned to the nearest
+-block the size of a page.
++per-block basis. This allows for a lightweight hash computation on first read
++into the page cache. Block hashes are stored linearly, aligned to the nearest
++block size.
+ 
+ Hash Tree
+ ---------
+ 
+ Each node in the tree is a cryptographic hash.  If it is a leaf node, the hash
+-is of some block data on disk.  If it is an intermediary node, then the hash is
+-of a number of child nodes.
++of some data block on disk is calculated. If it is an intermediary node,
++the hash of a number of child nodes is calculated.
+ 
+ Each entry in the tree is a collection of neighboring nodes that fit in one
+ block.  The number is determined based on block_size and the size of the
+@@ -110,63 +110,23 @@ alg = sha256, num_blocks = 32768, block_size = 4096
+ On-disk format
+ ==============
+ 
+-Below is the recommended on-disk format. The verity kernel code does not
+-read the on-disk header. It only reads the hash blocks which directly
+-follow the header. It is expected that a user-space tool will verify the
+-integrity of the verity_header and then call dmsetup with the correct
+-parameters. Alternatively, the header can be omitted and the dmsetup
+-parameters can be passed via the kernel command-line in a rooted chain
+-of trust where the command-line is verified.
++The verity kernel code does not read the verity metadata on-disk header.
++It only reads the hash blocks which directly follow the header.
++It is expected that a user-space tool will verify the integrity of the
++verity header.
+ 
+-The on-disk format is especially useful in cases where the hash blocks
+-are on a separate partition. The magic number allows easy identification
+-of the partition contents. Alternatively, the hash blocks can be stored
+-in the same partition as the data to be verified. In such a configuration
+-the filesystem on the partition would be sized a little smaller than
+-the full-partition, leaving room for the hash blocks.
+-
+-struct superblock {
+-	uint8_t signature[8]
+-		"verity\0\0";
+-
+-	uint8_t version;
+-		1 - current format
+-
+-	uint8_t data_block_bits;
+-		log2(data block size)
+-
+-	uint8_t hash_block_bits;
+-		log2(hash block size)
+-
+-	uint8_t pad1[1];
+-		zero padding
+-
+-	uint16_t salt_size;
+-		big-endian salt size
+-
+-	uint8_t pad2[2];
+-		zero padding
+-
+-	uint32_t data_blocks_hi;
+-		big-endian high 32 bits of the 64-bit number of data blocks
+-
+-	uint32_t data_blocks_lo;
+-		big-endian low 32 bits of the 64-bit number of data blocks
+-
+-	uint8_t algorithm[16];
+-		cryptographic algorithm
+-
+-	uint8_t salt[384];
+-		salt (the salt size is specified above)
+-
+-	uint8_t pad3[88];
+-		zero padding to 512-byte boundary
+-}
++Alternatively, the header can be omitted and the dmsetup parameters can
++be passed via the kernel command-line in a rooted chain of trust where
++the command-line is verified.
+ 
+ Directly following the header (and with sector number padded to the next hash
+ block boundary) are the hash blocks which are stored a depth at a time
+ (starting from the root), sorted in order of increasing index.
+ 
++The full specification of kernel parameters and on-disk metadata format
++is available at the cryptsetup project's wiki page
++  http://code.google.com/p/cryptsetup/wiki/DMVerity
++
+ Status
+ ======
+ V (for Valid) is returned if every check performed so far was valid.
+@@ -174,21 +134,22 @@ If any check failed, C (for Corruption) is returned.
+ 
+ Example
+ =======
+-
+-Setup a device:
+-  dmsetup create vroot --table \
+-    "0 2097152 "\
+-    "verity 1 /dev/sda1 /dev/sda2 4096 4096 2097152 1 "\
++Set up a device:
++  # dmsetup create vroot --readonly --table \
++    "0 2097152 verity 1 /dev/sda1 /dev/sda2 4096 4096 262144 1 sha256 "\
+     "4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076 "\
+     "1234000000000000000000000000000000000000000000000000000000000000"
+ 
+ A command line tool veritysetup is available to compute or verify
+-the hash tree or activate the kernel driver.  This is available from
+-the LVM2 upstream repository and may be supplied as a package called
+-device-mapper-verity-tools:
+-    git://sources.redhat.com/git/lvm2
+-    http://sourceware.org/git/?p=lvm2.git
+-    http://sourceware.org/cgi-bin/cvsweb.cgi/LVM2/verity?cvsroot=lvm2
+-
+-veritysetup -a vroot /dev/sda1 /dev/sda2 \
+-	4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076
++the hash tree or activate the kernel device. This is available from
++the cryptsetup upstream repository http://code.google.com/p/cryptsetup/
++(as a libcryptsetup extension).
++
++Create hash on the device:
++  # veritysetup format /dev/sda1 /dev/sda2
++  ...
++  Root hash: 4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076
++
++Activate the device:
++  # veritysetup create vroot /dev/sda1 /dev/sda2 \
++    4392712ba01368efdf14b05c76f9e4df0d53664630b5d48632ed17a137f39076
+diff --git a/Documentation/stable_kernel_rules.txt b/Documentation/stable_kernel_rules.txt
+index f0ab5cf..4a7b54b 100644
+--- a/Documentation/stable_kernel_rules.txt
++++ b/Documentation/stable_kernel_rules.txt
+@@ -12,6 +12,12 @@ Rules on what kind of patches are accepted, and which ones are not, into the
+    marked CONFIG_BROKEN), an oops, a hang, data corruption, a real
+    security issue, or some "oh, that's not good" issue.  In short, something
+    critical.
++ - Serious issues as reported by a user of a distribution kernel may also
++   be considered if they fix a notable performance or interactivity issue.
++   As these fixes are not as obvious and have a higher risk of a subtle
++   regression they should only be submitted by a distribution kernel
++   maintainer and include an addendum linking to a bugzilla entry if it
++   exists and additional information on the user-visible impact.
+  - New device IDs and quirks are also accepted.
+  - No "theoretical race condition" issues, unless an explanation of how the
+    race can be exploited is also provided.
+diff --git a/Makefile b/Makefile
+index 058320d..a2e69a0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 3
+ PATCHLEVEL = 4
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Saber-toothed Squirrel
+ 
+diff --git a/arch/arm/mach-dove/include/mach/bridge-regs.h b/arch/arm/mach-dove/include/mach/bridge-regs.h
+index 226949d..f953bb5 100644
+--- a/arch/arm/mach-dove/include/mach/bridge-regs.h
++++ b/arch/arm/mach-dove/include/mach/bridge-regs.h
+@@ -50,5 +50,6 @@
+ #define POWER_MANAGEMENT	(BRIDGE_VIRT_BASE | 0x011c)
+ 
+ #define TIMER_VIRT_BASE		(BRIDGE_VIRT_BASE | 0x0300)
++#define TIMER_PHYS_BASE         (BRIDGE_PHYS_BASE | 0x0300)
+ 
+ #endif
+diff --git a/arch/arm/mach-dove/include/mach/dove.h b/arch/arm/mach-dove/include/mach/dove.h
+index ad1165d..d52b0ef 100644
+--- a/arch/arm/mach-dove/include/mach/dove.h
++++ b/arch/arm/mach-dove/include/mach/dove.h
+@@ -78,6 +78,7 @@
+ 
+ /* North-South Bridge */
+ #define BRIDGE_VIRT_BASE	(DOVE_SB_REGS_VIRT_BASE | 0x20000)
++#define BRIDGE_PHYS_BASE	(DOVE_SB_REGS_PHYS_BASE | 0x20000)
+ 
+ /* Cryptographic Engine */
+ #define DOVE_CRYPT_PHYS_BASE	(DOVE_SB_REGS_PHYS_BASE | 0x30000)
+diff --git a/arch/arm/mach-kirkwood/include/mach/bridge-regs.h b/arch/arm/mach-kirkwood/include/mach/bridge-regs.h
+index 957bd79..086f25e 100644
+--- a/arch/arm/mach-kirkwood/include/mach/bridge-regs.h
++++ b/arch/arm/mach-kirkwood/include/mach/bridge-regs.h
+@@ -38,6 +38,7 @@
+ #define IRQ_MASK_HIGH_OFF	0x0014
+ 
+ #define TIMER_VIRT_BASE		(BRIDGE_VIRT_BASE | 0x0300)
++#define TIMER_PHYS_BASE		(BRIDGE_PHYS_BASE | 0x0300)
+ 
+ #define L2_CONFIG_REG		(BRIDGE_VIRT_BASE | 0x0128)
+ #define L2_WRITETHROUGH		0x00000010
+diff --git a/arch/arm/mach-kirkwood/include/mach/kirkwood.h b/arch/arm/mach-kirkwood/include/mach/kirkwood.h
+index fede3d5..c5b6851 100644
+--- a/arch/arm/mach-kirkwood/include/mach/kirkwood.h
++++ b/arch/arm/mach-kirkwood/include/mach/kirkwood.h
+@@ -80,6 +80,7 @@
+ #define  UART1_VIRT_BASE	(DEV_BUS_VIRT_BASE | 0x2100)
+ 
+ #define BRIDGE_VIRT_BASE	(KIRKWOOD_REGS_VIRT_BASE | 0x20000)
++#define BRIDGE_PHYS_BASE	(KIRKWOOD_REGS_PHYS_BASE | 0x20000)
+ 
+ #define CRYPTO_PHYS_BASE	(KIRKWOOD_REGS_PHYS_BASE | 0x30000)
+ 
+diff --git a/arch/arm/mach-mv78xx0/include/mach/bridge-regs.h b/arch/arm/mach-mv78xx0/include/mach/bridge-regs.h
+index c64dbb9..eb187e0 100644
+--- a/arch/arm/mach-mv78xx0/include/mach/bridge-regs.h
++++ b/arch/arm/mach-mv78xx0/include/mach/bridge-regs.h
+@@ -31,5 +31,6 @@
+ #define IRQ_MASK_HIGH_OFF	0x0014
+ 
+ #define TIMER_VIRT_BASE		(BRIDGE_VIRT_BASE | 0x0300)
++#define TIMER_PHYS_BASE		(BRIDGE_PHYS_BASE | 0x0300)
+ 
+ #endif
+diff --git a/arch/arm/mach-mv78xx0/include/mach/mv78xx0.h b/arch/arm/mach-mv78xx0/include/mach/mv78xx0.h
+index 3674497..e807c4c 100644
+--- a/arch/arm/mach-mv78xx0/include/mach/mv78xx0.h
++++ b/arch/arm/mach-mv78xx0/include/mach/mv78xx0.h
+@@ -42,6 +42,7 @@
+ #define MV78XX0_CORE0_REGS_PHYS_BASE	0xf1020000
+ #define MV78XX0_CORE1_REGS_PHYS_BASE	0xf1024000
+ #define MV78XX0_CORE_REGS_VIRT_BASE	0xfe400000
++#define MV78XX0_CORE_REGS_PHYS_BASE	0xfe400000
+ #define MV78XX0_CORE_REGS_SIZE		SZ_16K
+ 
+ #define MV78XX0_PCIE_IO_PHYS_BASE(i)	(0xf0800000 + ((i) << 20))
+@@ -59,6 +60,7 @@
+  * Core-specific peripheral registers.
+  */
+ #define BRIDGE_VIRT_BASE	(MV78XX0_CORE_REGS_VIRT_BASE)
++#define BRIDGE_PHYS_BASE	(MV78XX0_CORE_REGS_PHYS_BASE)
+ 
+ /*
+  * Register Map
+diff --git a/arch/arm/mach-orion5x/include/mach/bridge-regs.h b/arch/arm/mach-orion5x/include/mach/bridge-regs.h
+index 96484bc..11a3c1e 100644
+--- a/arch/arm/mach-orion5x/include/mach/bridge-regs.h
++++ b/arch/arm/mach-orion5x/include/mach/bridge-regs.h
+@@ -35,5 +35,5 @@
+ #define MAIN_IRQ_MASK		(ORION5X_BRIDGE_VIRT_BASE | 0x204)
+ 
+ #define TIMER_VIRT_BASE		(ORION5X_BRIDGE_VIRT_BASE | 0x300)
+-
++#define TIMER_PHYS_BASE		(ORION5X_BRIDGE_PHYS_BASE | 0x300)
+ #endif
+diff --git a/arch/arm/mach-orion5x/include/mach/orion5x.h b/arch/arm/mach-orion5x/include/mach/orion5x.h
+index 2745f5d..683e085 100644
+--- a/arch/arm/mach-orion5x/include/mach/orion5x.h
++++ b/arch/arm/mach-orion5x/include/mach/orion5x.h
+@@ -82,6 +82,7 @@
+ #define  UART1_VIRT_BASE		(ORION5X_DEV_BUS_VIRT_BASE | 0x2100)
+ 
+ #define ORION5X_BRIDGE_VIRT_BASE	(ORION5X_REGS_VIRT_BASE | 0x20000)
++#define ORION5X_BRIDGE_PHYS_BASE	(ORION5X_REGS_PHYS_BASE | 0x20000)
+ 
+ #define ORION5X_PCI_VIRT_BASE		(ORION5X_REGS_VIRT_BASE | 0x30000)
+ 
+diff --git a/arch/arm/mach-tegra/reset.c b/arch/arm/mach-tegra/reset.c
+index 4d6a2ee..5beb7eb 100644
+--- a/arch/arm/mach-tegra/reset.c
++++ b/arch/arm/mach-tegra/reset.c
+@@ -33,7 +33,7 @@
+ 
+ static bool is_enabled;
+ 
+-static void tegra_cpu_reset_handler_enable(void)
++static void __init tegra_cpu_reset_handler_enable(void)
+ {
+ 	void __iomem *iram_base = IO_ADDRESS(TEGRA_IRAM_RESET_BASE);
+ 	void __iomem *evp_cpu_reset =
+diff --git a/arch/arm/mm/mmu.c b/arch/arm/mm/mmu.c
+index aa78de8..75f9f9d 100644
+--- a/arch/arm/mm/mmu.c
++++ b/arch/arm/mm/mmu.c
+@@ -783,6 +783,79 @@ void __init iotable_init(struct map_desc *io_desc, int nr)
+ 	}
+ }
+ 
++#ifndef CONFIG_ARM_LPAE
++
++/*
++ * The Linux PMD is made of two consecutive section entries covering 2MB
++ * (see definition in include/asm/pgtable-2level.h).  However a call to
++ * create_mapping() may optimize static mappings by using individual
++ * 1MB section mappings.  This leaves the actual PMD potentially half
++ * initialized if the top or bottom section entry isn't used, leaving it
++ * open to problems if a subsequent ioremap() or vmalloc() tries to use
++ * the virtual space left free by that unused section entry.
++ *
++ * Let's avoid the issue by inserting dummy vm entries covering the unused
++ * PMD halves once the static mappings are in place.
++ */
++
++static void __init pmd_empty_section_gap(unsigned long addr)
++{
++	struct vm_struct *vm;
++
++	vm = early_alloc_aligned(sizeof(*vm), __alignof__(*vm));
++	vm->addr = (void *)addr;
++	vm->size = SECTION_SIZE;
++	vm->flags = VM_IOREMAP | VM_ARM_STATIC_MAPPING;
++	vm->caller = pmd_empty_section_gap;
++	vm_area_add_early(vm);
++}
++
++static void __init fill_pmd_gaps(void)
++{
++	struct vm_struct *vm;
++	unsigned long addr, next = 0;
++	pmd_t *pmd;
++
++	/* we're still single threaded hence no lock needed here */
++	for (vm = vmlist; vm; vm = vm->next) {
++		if (!(vm->flags & VM_ARM_STATIC_MAPPING))
++			continue;
++		addr = (unsigned long)vm->addr;
++		if (addr < next)
++			continue;
++
++		/*
++		 * Check if this vm starts on an odd section boundary.
++		 * If so and the first section entry for this PMD is free
++		 * then we block the corresponding virtual address.
++		 */
++		if ((addr & ~PMD_MASK) == SECTION_SIZE) {
++			pmd = pmd_off_k(addr);
++			if (pmd_none(*pmd))
++				pmd_empty_section_gap(addr & PMD_MASK);
++		}
++
++		/*
++		 * Then check if this vm ends on an odd section boundary.
++		 * If so and the second section entry for this PMD is empty
++		 * then we block the corresponding virtual address.
++		 */
++		addr += vm->size;
++		if ((addr & ~PMD_MASK) == SECTION_SIZE) {
++			pmd = pmd_off_k(addr) + 1;
++			if (pmd_none(*pmd))
++				pmd_empty_section_gap(addr);
++		}
++
++		/* no need to look at any vm entry until we hit the next PMD */
++		next = (addr + PMD_SIZE - 1) & PMD_MASK;
++	}
++}
++
++#else
++#define fill_pmd_gaps() do { } while (0)
++#endif
++
+ static void * __initdata vmalloc_min =
+ 	(void *)(VMALLOC_END - (240 << 20) - VMALLOC_OFFSET);
+ 
+@@ -1064,6 +1137,7 @@ static void __init devicemaps_init(struct machine_desc *mdesc)
+ 	 */
+ 	if (mdesc->map_io)
+ 		mdesc->map_io();
++	fill_pmd_gaps();
+ 
+ 	/*
+ 	 * Finally flush the caches and tlb to ensure that we're in a
+diff --git a/arch/arm/plat-orion/common.c b/arch/arm/plat-orion/common.c
+index 74daf5e..331f8bb 100644
+--- a/arch/arm/plat-orion/common.c
++++ b/arch/arm/plat-orion/common.c
+@@ -570,7 +570,7 @@ void __init orion_spi_1_init(unsigned long mapbase,
+ static struct orion_wdt_platform_data orion_wdt_data;
+ 
+ static struct resource orion_wdt_resource =
+-		DEFINE_RES_MEM(TIMER_VIRT_BASE, 0x28);
++		DEFINE_RES_MEM(TIMER_PHYS_BASE, 0x28);
+ 
+ static struct platform_device orion_wdt_device = {
+ 	.name		= "orion_wdt",
+diff --git a/arch/arm/plat-samsung/include/plat/map-s3c.h b/arch/arm/plat-samsung/include/plat/map-s3c.h
+index 7d04875..c0c70a8 100644
+--- a/arch/arm/plat-samsung/include/plat/map-s3c.h
++++ b/arch/arm/plat-samsung/include/plat/map-s3c.h
+@@ -22,7 +22,7 @@
+ #define S3C24XX_VA_WATCHDOG	S3C_VA_WATCHDOG
+ 
+ #define S3C2412_VA_SSMC		S3C_ADDR_CPU(0x00000000)
+-#define S3C2412_VA_EBI		S3C_ADDR_CPU(0x00010000)
++#define S3C2412_VA_EBI		S3C_ADDR_CPU(0x00100000)
+ 
+ #define S3C2410_PA_UART		(0x50000000)
+ #define S3C24XX_PA_UART		S3C2410_PA_UART
+diff --git a/arch/arm/plat-samsung/include/plat/watchdog-reset.h b/arch/arm/plat-samsung/include/plat/watchdog-reset.h
+index f19aff1..bc4db9b 100644
+--- a/arch/arm/plat-samsung/include/plat/watchdog-reset.h
++++ b/arch/arm/plat-samsung/include/plat/watchdog-reset.h
+@@ -25,7 +25,7 @@ static inline void arch_wdt_reset(void)
+ 
+ 	__raw_writel(0, S3C2410_WTCON);	  /* disable watchdog, to be safe  */
+ 
+-	if (s3c2410_wdtclk)
++	if (!IS_ERR(s3c2410_wdtclk))
+ 		clk_enable(s3c2410_wdtclk);
+ 
+ 	/* put initial values into count and data */
+diff --git a/arch/powerpc/include/asm/hw_irq.h b/arch/powerpc/include/asm/hw_irq.h
+index 102abd6..907e9fd 100644
+--- a/arch/powerpc/include/asm/hw_irq.h
++++ b/arch/powerpc/include/asm/hw_irq.h
+@@ -85,8 +85,8 @@ static inline bool arch_irqs_disabled(void)
+ }
+ 
+ #ifdef CONFIG_PPC_BOOK3E
+-#define __hard_irq_enable()	asm volatile("wrteei 1" : : : "memory");
+-#define __hard_irq_disable()	asm volatile("wrteei 0" : : : "memory");
++#define __hard_irq_enable()	asm volatile("wrteei 1" : : : "memory")
++#define __hard_irq_disable()	asm volatile("wrteei 0" : : : "memory")
+ #else
+ #define __hard_irq_enable()	__mtmsrd(local_paca->kernel_msr | MSR_EE, 1)
+ #define __hard_irq_disable()	__mtmsrd(local_paca->kernel_msr, 1)
+@@ -102,6 +102,11 @@ static inline void hard_irq_disable(void)
+ /* include/linux/interrupt.h needs hard_irq_disable to be a macro */
+ #define hard_irq_disable	hard_irq_disable
+ 
++static inline bool lazy_irq_pending(void)
++{
++	return !!(get_paca()->irq_happened & ~PACA_IRQ_HARD_DIS);
++}
++
+ /*
+  * This is called by asynchronous interrupts to conditionally
+  * re-enable hard interrupts when soft-disabled after having
+@@ -119,6 +124,8 @@ static inline bool arch_irq_disabled_regs(struct pt_regs *regs)
+ 	return !regs->softe;
+ }
+ 
++extern bool prep_irq_for_idle(void);
++
+ #else /* CONFIG_PPC64 */
+ 
+ #define SET_MSR_EE(x)	mtmsr(x)
+diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
+index 641da9e..d7ebc58 100644
+--- a/arch/powerpc/kernel/irq.c
++++ b/arch/powerpc/kernel/irq.c
+@@ -229,7 +229,7 @@ notrace void arch_local_irq_restore(unsigned long en)
+ 	 */
+ 	if (unlikely(irq_happened != PACA_IRQ_HARD_DIS))
+ 		__hard_irq_disable();
+-#ifdef CONFIG_TRACE_IRQFLAG
++#ifdef CONFIG_TRACE_IRQFLAGS
+ 	else {
+ 		/*
+ 		 * We should already be hard disabled here. We had bugs
+@@ -277,7 +277,7 @@ EXPORT_SYMBOL(arch_local_irq_restore);
+  * NOTE: This is called with interrupts hard disabled but not marked
+  * as such in paca->irq_happened, so we need to resync this.
+  */
+-void restore_interrupts(void)
++void notrace restore_interrupts(void)
+ {
+ 	if (irqs_disabled()) {
+ 		local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
+@@ -286,6 +286,52 @@ void restore_interrupts(void)
+ 		__hard_irq_enable();
+ }
+ 
++/*
++ * This is a helper to use when about to go into idle low-power
++ * when the latter has the side effect of re-enabling interrupts
++ * (such as calling H_CEDE under pHyp).
++ *
++ * You call this function with interrupts soft-disabled (this is
++ * already the case when ppc_md.power_save is called). The function
++ * will return whether to enter power save or just return.
++ *
++ * In the former case, it will have notified lockdep of interrupts
++ * being re-enabled and generally sanitized the lazy irq state,
++ * and in the latter case it will leave with interrupts hard
++ * disabled and marked as such, so the local_irq_enable() call
++ * in cpu_idle() will properly re-enable everything.
++ */
++bool prep_irq_for_idle(void)
++{
++	/*
++	 * First we need to hard disable to ensure no interrupt
++	 * occurs before we effectively enter the low power state
++	 */
++	hard_irq_disable();
++
++	/*
++	 * If anything happened while we were soft-disabled,
++	 * we return now and do not enter the low power state.
++	 */
++	if (lazy_irq_pending())
++		return false;
++
++	/* Tell lockdep we are about to re-enable */
++	trace_hardirqs_on();
++
++	/*
++	 * Mark interrupts as soft-enabled and clear the
++	 * PACA_IRQ_HARD_DIS from the pending mask since we
++	 * are about to hard enable as well as a side effect
++	 * of entering the low power state.
++	 */
++	local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
++	local_paca->soft_enabled = 1;
++
++	/* Tell the caller to enter the low power state */
++	return true;
++}
++
+ #endif /* CONFIG_PPC64 */
+ 
+ int arch_show_interrupts(struct seq_file *p, int prec)
+diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+index b70bf22..24b23a4 100644
+--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
++++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+@@ -776,7 +776,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
+ 	lwz	r3,VCORE_NAPPING_THREADS(r5)
+ 	lwz	r4,VCPU_PTID(r9)
+ 	li	r0,1
+-	sldi	r0,r0,r4
++	sld	r0,r0,r4
+ 	andc.	r3,r3,r0		/* no sense IPI'ing ourselves */
+ 	beq	43f
+ 	mulli	r4,r4,PACA_SIZE		/* get paca for thread 0 */
+diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
+index b6edbb3..6e8f677 100644
+--- a/arch/powerpc/mm/numa.c
++++ b/arch/powerpc/mm/numa.c
+@@ -635,7 +635,7 @@ static inline int __init read_usm_ranges(const u32 **usm)
+  */
+ static void __init parse_drconf_memory(struct device_node *memory)
+ {
+-	const u32 *dm, *usm;
++	const u32 *uninitialized_var(dm), *usm;
+ 	unsigned int n, rc, ranges, is_kexec_kdump = 0;
+ 	unsigned long lmb_size, base, size, sz;
+ 	int nid;
+diff --git a/arch/powerpc/platforms/cell/pervasive.c b/arch/powerpc/platforms/cell/pervasive.c
+index efdacc8..d17e98b 100644
+--- a/arch/powerpc/platforms/cell/pervasive.c
++++ b/arch/powerpc/platforms/cell/pervasive.c
+@@ -42,11 +42,9 @@ static void cbe_power_save(void)
+ {
+ 	unsigned long ctrl, thread_switch_control;
+ 
+-	/*
+-	 * We need to hard disable interrupts, the local_irq_enable() done by
+-	 * our caller upon return will hard re-enable.
+-	 */
+-	hard_irq_disable();
++	/* Ensure our interrupt state is properly tracked */
++	if (!prep_irq_for_idle())
++		return;
+ 
+ 	ctrl = mfspr(SPRN_CTRLF);
+ 
+@@ -81,6 +79,9 @@ static void cbe_power_save(void)
+ 	 */
+ 	ctrl &= ~(CTRL_RUNLATCH | CTRL_TE);
+ 	mtspr(SPRN_CTRLT, ctrl);
++
++	/* Re-enable interrupts in MSR */
++	__hard_irq_enable();
+ }
+ 
+ static int cbe_system_reset_exception(struct pt_regs *regs)
+diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
+index 0915b1a..2d311c0 100644
+--- a/arch/powerpc/platforms/pseries/iommu.c
++++ b/arch/powerpc/platforms/pseries/iommu.c
+@@ -106,7 +106,7 @@ static int tce_build_pSeries(struct iommu_table *tbl, long index,
+ 		tcep++;
+ 	}
+ 
+-	if (tbl->it_type == TCE_PCI_SWINV_CREATE)
++	if (tbl->it_type & TCE_PCI_SWINV_CREATE)
+ 		tce_invalidate_pSeries_sw(tbl, tces, tcep - 1);
+ 	return 0;
+ }
+@@ -121,7 +121,7 @@ static void tce_free_pSeries(struct iommu_table *tbl, long index, long npages)
+ 	while (npages--)
+ 		*(tcep++) = 0;
+ 
+-	if (tbl->it_type == TCE_PCI_SWINV_FREE)
++	if (tbl->it_type & TCE_PCI_SWINV_FREE)
+ 		tce_invalidate_pSeries_sw(tbl, tces, tcep - 1);
+ }
+ 
+diff --git a/arch/powerpc/platforms/pseries/processor_idle.c b/arch/powerpc/platforms/pseries/processor_idle.c
+index 41a34bc..c71be66 100644
+--- a/arch/powerpc/platforms/pseries/processor_idle.c
++++ b/arch/powerpc/platforms/pseries/processor_idle.c
+@@ -99,15 +99,18 @@ out:
+ static void check_and_cede_processor(void)
+ {
+ 	/*
+-	 * Interrupts are soft-disabled at this point,
+-	 * but not hard disabled. So an interrupt might have
+-	 * occurred before entering NAP, and would be potentially
+-	 * lost (edge events, decrementer events, etc...) unless
+-	 * we first hard disable then check.
++	 * Ensure our interrupt state is properly tracked,
++	 * also checks if no interrupt has occurred while we
++	 * were soft-disabled
+ 	 */
+-	hard_irq_disable();
+-	if (get_paca()->irq_happened == 0)
++	if (prep_irq_for_idle()) {
+ 		cede_processor();
++#ifdef CONFIG_TRACE_IRQFLAGS
++		/* Ensure that H_CEDE returns with IRQs on */
++		if (WARN_ON(!(mfmsr() & MSR_EE)))
++			__hard_irq_enable();
++#endif
++	}
+ }
+ 
+ static int dedicated_cede_loop(struct cpuidle_device *dev,
+diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
+index 0f3ab06..eab3492 100644
+--- a/arch/powerpc/xmon/xmon.c
++++ b/arch/powerpc/xmon/xmon.c
+@@ -971,7 +971,7 @@ static int cpu_cmd(void)
+ 		/* print cpus waiting or in xmon */
+ 		printf("cpus stopped:");
+ 		count = 0;
+-		for (cpu = 0; cpu < NR_CPUS; ++cpu) {
++		for_each_possible_cpu(cpu) {
+ 			if (cpumask_test_cpu(cpu, &cpus_in_xmon)) {
+ 				if (count == 0)
+ 					printf(" %x", cpu);
+diff --git a/arch/x86/ia32/ia32_signal.c b/arch/x86/ia32/ia32_signal.c
+index a69245b..4f5bfac 100644
+--- a/arch/x86/ia32/ia32_signal.c
++++ b/arch/x86/ia32/ia32_signal.c
+@@ -38,7 +38,7 @@
+ int copy_siginfo_to_user32(compat_siginfo_t __user *to, siginfo_t *from)
+ {
+ 	int err = 0;
+-	bool ia32 = is_ia32_task();
++	bool ia32 = test_thread_flag(TIF_IA32);
+ 
+ 	if (!access_ok(VERIFY_WRITE, to, sizeof(compat_siginfo_t)))
+ 		return -EFAULT;
+diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
+index 340ee49..f91e80f 100644
+--- a/arch/x86/include/asm/cpufeature.h
++++ b/arch/x86/include/asm/cpufeature.h
+@@ -176,7 +176,7 @@
+ #define X86_FEATURE_XSAVEOPT	(7*32+ 4) /* Optimized Xsave */
+ #define X86_FEATURE_PLN		(7*32+ 5) /* Intel Power Limit Notification */
+ #define X86_FEATURE_PTS		(7*32+ 6) /* Intel Package Thermal Status */
+-#define X86_FEATURE_DTS		(7*32+ 7) /* Digital Thermal Sensor */
++#define X86_FEATURE_DTHERM	(7*32+ 7) /* Digital Thermal Sensor */
+ #define X86_FEATURE_HW_PSTATE	(7*32+ 8) /* AMD HW-PState */
+ 
+ /* Virtualization flags: Linux defined, word 8 */
+diff --git a/arch/x86/include/asm/pgtable-3level.h b/arch/x86/include/asm/pgtable-3level.h
+index effff47..cb00ccc 100644
+--- a/arch/x86/include/asm/pgtable-3level.h
++++ b/arch/x86/include/asm/pgtable-3level.h
+@@ -31,6 +31,60 @@ static inline void native_set_pte(pte_t *ptep, pte_t pte)
+ 	ptep->pte_low = pte.pte_low;
+ }
+ 
++#define pmd_read_atomic pmd_read_atomic
++/*
++ * pte_offset_map_lock on 32bit PAE kernels was reading the pmd_t with
++ * a "*pmdp" dereference done by gcc. Problem is, in certain places
++ * where pte_offset_map_lock is called, concurrent page faults are
++ * allowed, if the mmap_sem is hold for reading. An example is mincore
++ * vs page faults vs MADV_DONTNEED. On the page fault side
++ * pmd_populate rightfully does a set_64bit, but if we're reading the
++ * pmd_t with a "*pmdp" on the mincore side, a SMP race can happen
++ * because gcc will not read the 64bit of the pmd atomically. To fix
++ * this all places running pmd_offset_map_lock() while holding the
++ * mmap_sem in read mode, shall read the pmdp pointer using this
++ * function to know if the pmd is null nor not, and in turn to know if
++ * they can run pmd_offset_map_lock or pmd_trans_huge or other pmd
++ * operations.
++ *
++ * Without THP if the mmap_sem is hold for reading, the pmd can only
++ * transition from null to not null while pmd_read_atomic runs. So
++ * we can always return atomic pmd values with this function.
++ *
++ * With THP if the mmap_sem is hold for reading, the pmd can become
++ * trans_huge or none or point to a pte (and in turn become "stable")
++ * at any time under pmd_read_atomic. We could read it really
++ * atomically here with a atomic64_read for the THP enabled case (and
++ * it would be a whole lot simpler), but to avoid using cmpxchg8b we
++ * only return an atomic pmdval if the low part of the pmdval is later
++ * found stable (i.e. pointing to a pte). And we're returning a none
++ * pmdval if the low part of the pmd is none. In some cases the high
++ * and low part of the pmdval returned may not be consistent if THP is
++ * enabled (the low part may point to previously mapped hugepage,
++ * while the high part may point to a more recently mapped hugepage),
++ * but pmd_none_or_trans_huge_or_clear_bad() only needs the low part
++ * of the pmd to be read atomically to decide if the pmd is unstable
++ * or not, with the only exception of when the low part of the pmd is
++ * zero in which case we return a none pmd.
++ */
++static inline pmd_t pmd_read_atomic(pmd_t *pmdp)
++{
++	pmdval_t ret;
++	u32 *tmp = (u32 *)pmdp;
++
++	ret = (pmdval_t) (*tmp);
++	if (ret) {
++		/*
++		 * If the low part is null, we must not read the high part
++		 * or we can end up with a partial pmd.
++		 */
++		smp_rmb();
++		ret |= ((pmdval_t)*(tmp + 1)) << 32;
++	}
++
++	return (pmd_t) { ret };
++}
++
+ static inline void native_set_pte_atomic(pte_t *ptep, pte_t pte)
+ {
+ 	set_64bit((unsigned long long *)(ptep), native_pte_val(pte));
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index 7c439fe..bbdffc2 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -422,12 +422,14 @@ acpi_parse_int_src_ovr(struct acpi_subtable_header * header,
+ 		return 0;
+ 	}
+ 
+-	if (intsrc->source_irq == 0 && intsrc->global_irq == 2) {
++	if (intsrc->source_irq == 0) {
+ 		if (acpi_skip_timer_override) {
+-			printk(PREFIX "BIOS IRQ0 pin2 override ignored.\n");
++			printk(PREFIX "BIOS IRQ0 override ignored.\n");
+ 			return 0;
+ 		}
+-		if (acpi_fix_pin2_polarity && (intsrc->inti_flags & ACPI_MADT_POLARITY_MASK)) {
++
++		if ((intsrc->global_irq == 2) && acpi_fix_pin2_polarity
++			&& (intsrc->inti_flags & ACPI_MADT_POLARITY_MASK)) {
+ 			intsrc->inti_flags &= ~ACPI_MADT_POLARITY_MASK;
+ 			printk(PREFIX "BIOS IRQ0 pin2 override: forcing polarity to high active.\n");
+ 		}
+@@ -1334,17 +1336,12 @@ static int __init dmi_disable_acpi(const struct dmi_system_id *d)
+ }
+ 
+ /*
+- * Force ignoring BIOS IRQ0 pin2 override
++ * Force ignoring BIOS IRQ0 override
+  */
+ static int __init dmi_ignore_irq0_timer_override(const struct dmi_system_id *d)
+ {
+-	/*
+-	 * The ati_ixp4x0_rev() early PCI quirk should have set
+-	 * the acpi_skip_timer_override flag already:
+-	 */
+ 	if (!acpi_skip_timer_override) {
+-		WARN(1, KERN_ERR "ati_ixp4x0 quirk not complete.\n");
+-		pr_notice("%s detected: Ignoring BIOS IRQ0 pin2 override\n",
++		pr_notice("%s detected: Ignoring BIOS IRQ0 override\n",
+ 			d->ident);
+ 		acpi_skip_timer_override = 1;
+ 	}
+@@ -1438,7 +1435,7 @@ static struct dmi_system_id __initdata acpi_dmi_table_late[] = {
+ 	 * is enabled.  This input is incorrectly designated the
+ 	 * ISA IRQ 0 via an interrupt source override even though
+ 	 * it is wired to the output of the master 8259A and INTIN0
+-	 * is not connected at all.  Force ignoring BIOS IRQ0 pin2
++	 * is not connected at all.  Force ignoring BIOS IRQ0
+ 	 * override in that cases.
+ 	 */
+ 	{
+@@ -1473,6 +1470,14 @@ static struct dmi_system_id __initdata acpi_dmi_table_late[] = {
+ 		     DMI_MATCH(DMI_PRODUCT_NAME, "HP Compaq 6715b"),
+ 		     },
+ 	 },
++	{
++	 .callback = dmi_ignore_irq0_timer_override,
++	 .ident = "FUJITSU SIEMENS",
++	 .matches = {
++		     DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
++		     DMI_MATCH(DMI_PRODUCT_NAME, "AMILO PRO V2030"),
++		     },
++	 },
+ 	{}
+ };
+ 
+diff --git a/arch/x86/kernel/cpu/scattered.c b/arch/x86/kernel/cpu/scattered.c
+index addf9e8..ee8e9ab 100644
+--- a/arch/x86/kernel/cpu/scattered.c
++++ b/arch/x86/kernel/cpu/scattered.c
+@@ -31,7 +31,7 @@ void __cpuinit init_scattered_cpuid_features(struct cpuinfo_x86 *c)
+ 	const struct cpuid_bit *cb;
+ 
+ 	static const struct cpuid_bit __cpuinitconst cpuid_bits[] = {
+-		{ X86_FEATURE_DTS,		CR_EAX, 0, 0x00000006, 0 },
++		{ X86_FEATURE_DTHERM,		CR_EAX, 0, 0x00000006, 0 },
+ 		{ X86_FEATURE_IDA,		CR_EAX, 1, 0x00000006, 0 },
+ 		{ X86_FEATURE_ARAT,		CR_EAX, 2, 0x00000006, 0 },
+ 		{ X86_FEATURE_PLN,		CR_EAX, 4, 0x00000006, 0 },
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index d840e69..3034ee5 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -471,6 +471,14 @@ static struct dmi_system_id __initdata pci_reboot_dmi_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex 990"),
+ 		},
+ 	},
++	{	/* Handle problems with rebooting on the Precision M6600. */
++		.callback = set_pci_reboot,
++		.ident = "Dell Precision M6600",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Precision M6600"),
++		},
++	},
+ 	{ }
+ };
+ 
+diff --git a/drivers/acpi/acpi_pad.c b/drivers/acpi/acpi_pad.c
+index a43fa1a..1502c502 100644
+--- a/drivers/acpi/acpi_pad.c
++++ b/drivers/acpi/acpi_pad.c
+@@ -36,6 +36,7 @@
+ #define ACPI_PROCESSOR_AGGREGATOR_DEVICE_NAME "Processor Aggregator"
+ #define ACPI_PROCESSOR_AGGREGATOR_NOTIFY 0x80
+ static DEFINE_MUTEX(isolated_cpus_lock);
++static DEFINE_MUTEX(round_robin_lock);
+ 
+ static unsigned long power_saving_mwait_eax;
+ 
+@@ -107,7 +108,7 @@ static void round_robin_cpu(unsigned int tsk_index)
+ 	if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
+ 		return;
+ 
+-	mutex_lock(&isolated_cpus_lock);
++	mutex_lock(&round_robin_lock);
+ 	cpumask_clear(tmp);
+ 	for_each_cpu(cpu, pad_busy_cpus)
+ 		cpumask_or(tmp, tmp, topology_thread_cpumask(cpu));
+@@ -116,7 +117,7 @@ static void round_robin_cpu(unsigned int tsk_index)
+ 	if (cpumask_empty(tmp))
+ 		cpumask_andnot(tmp, cpu_online_mask, pad_busy_cpus);
+ 	if (cpumask_empty(tmp)) {
+-		mutex_unlock(&isolated_cpus_lock);
++		mutex_unlock(&round_robin_lock);
+ 		return;
+ 	}
+ 	for_each_cpu(cpu, tmp) {
+@@ -131,7 +132,7 @@ static void round_robin_cpu(unsigned int tsk_index)
+ 	tsk_in_cpu[tsk_index] = preferred_cpu;
+ 	cpumask_set_cpu(preferred_cpu, pad_busy_cpus);
+ 	cpu_weight[preferred_cpu]++;
+-	mutex_unlock(&isolated_cpus_lock);
++	mutex_unlock(&round_robin_lock);
+ 
+ 	set_cpus_allowed_ptr(current, cpumask_of(preferred_cpu));
+ }
+diff --git a/drivers/acpi/acpica/hwsleep.c b/drivers/acpi/acpica/hwsleep.c
+index 0ed85ca..615996a 100644
+--- a/drivers/acpi/acpica/hwsleep.c
++++ b/drivers/acpi/acpica/hwsleep.c
+@@ -95,18 +95,6 @@ acpi_status acpi_hw_legacy_sleep(u8 sleep_state, u8 flags)
+ 		return_ACPI_STATUS(status);
+ 	}
+ 
+-	if (sleep_state != ACPI_STATE_S5) {
+-		/*
+-		 * Disable BM arbitration. This feature is contained within an
+-		 * optional register (PM2 Control), so ignore a BAD_ADDRESS
+-		 * exception.
+-		 */
+-		status = acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 1);
+-		if (ACPI_FAILURE(status) && (status != AE_BAD_ADDRESS)) {
+-			return_ACPI_STATUS(status);
+-		}
+-	}
+-
+ 	/*
+ 	 * 1) Disable/Clear all GPEs
+ 	 * 2) Enable all wakeup GPEs
+@@ -364,16 +352,6 @@ acpi_status acpi_hw_legacy_wake(u8 sleep_state, u8 flags)
+ 				    [ACPI_EVENT_POWER_BUTTON].
+ 				    status_register_id, ACPI_CLEAR_STATUS);
+ 
+-	/*
+-	 * Enable BM arbitration. This feature is contained within an
+-	 * optional register (PM2 Control), so ignore a BAD_ADDRESS
+-	 * exception.
+-	 */
+-	status = acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 0);
+-	if (ACPI_FAILURE(status) && (status != AE_BAD_ADDRESS)) {
+-		return_ACPI_STATUS(status);
+-	}
+-
+ 	acpi_hw_execute_sleep_method(METHOD_PATHNAME__SST, ACPI_SST_WORKING);
+ 	return_ACPI_STATUS(status);
+ }
+diff --git a/drivers/acpi/apei/apei-base.c b/drivers/acpi/apei/apei-base.c
+index 5577762..6686b1e 100644
+--- a/drivers/acpi/apei/apei-base.c
++++ b/drivers/acpi/apei/apei-base.c
+@@ -243,7 +243,7 @@ static int pre_map_gar_callback(struct apei_exec_context *ctx,
+ 	u8 ins = entry->instruction;
+ 
+ 	if (ctx->ins_table[ins].flags & APEI_EXEC_INS_ACCESS_REGISTER)
+-		return acpi_os_map_generic_address(&entry->register_region);
++		return apei_map_generic_address(&entry->register_region);
+ 
+ 	return 0;
+ }
+@@ -276,7 +276,7 @@ static int post_unmap_gar_callback(struct apei_exec_context *ctx,
+ 	u8 ins = entry->instruction;
+ 
+ 	if (ctx->ins_table[ins].flags & APEI_EXEC_INS_ACCESS_REGISTER)
+-		acpi_os_unmap_generic_address(&entry->register_region);
++		apei_unmap_generic_address(&entry->register_region);
+ 
+ 	return 0;
+ }
+@@ -606,6 +606,19 @@ static int apei_check_gar(struct acpi_generic_address *reg, u64 *paddr,
+ 	return 0;
+ }
+ 
++int apei_map_generic_address(struct acpi_generic_address *reg)
++{
++	int rc;
++	u32 access_bit_width;
++	u64 address;
++
++	rc = apei_check_gar(reg, &address, &access_bit_width);
++	if (rc)
++		return rc;
++	return acpi_os_map_generic_address(reg);
++}
++EXPORT_SYMBOL_GPL(apei_map_generic_address);
++
+ /* read GAR in interrupt (including NMI) or process context */
+ int apei_read(u64 *val, struct acpi_generic_address *reg)
+ {
+diff --git a/drivers/acpi/apei/apei-internal.h b/drivers/acpi/apei/apei-internal.h
+index cca240a..f220d64 100644
+--- a/drivers/acpi/apei/apei-internal.h
++++ b/drivers/acpi/apei/apei-internal.h
+@@ -7,6 +7,8 @@
+ #define APEI_INTERNAL_H
+ 
+ #include <linux/cper.h>
++#include <linux/acpi.h>
++#include <linux/acpi_io.h>
+ 
+ struct apei_exec_context;
+ 
+@@ -68,6 +70,13 @@ static inline int apei_exec_run_optional(struct apei_exec_context *ctx, u8 actio
+ /* IP has been set in instruction function */
+ #define APEI_EXEC_SET_IP	1
+ 
++int apei_map_generic_address(struct acpi_generic_address *reg);
++
++static inline void apei_unmap_generic_address(struct acpi_generic_address *reg)
++{
++	acpi_os_unmap_generic_address(reg);
++}
++
+ int apei_read(u64 *val, struct acpi_generic_address *reg);
+ int apei_write(u64 val, struct acpi_generic_address *reg);
+ 
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index 9b3cac0..1599566 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -301,7 +301,7 @@ static struct ghes *ghes_new(struct acpi_hest_generic *generic)
+ 	if (!ghes)
+ 		return ERR_PTR(-ENOMEM);
+ 	ghes->generic = generic;
+-	rc = acpi_os_map_generic_address(&generic->error_status_address);
++	rc = apei_map_generic_address(&generic->error_status_address);
+ 	if (rc)
+ 		goto err_free;
+ 	error_block_length = generic->error_block_length;
+@@ -321,7 +321,7 @@ static struct ghes *ghes_new(struct acpi_hest_generic *generic)
+ 	return ghes;
+ 
+ err_unmap:
+-	acpi_os_unmap_generic_address(&generic->error_status_address);
++	apei_unmap_generic_address(&generic->error_status_address);
+ err_free:
+ 	kfree(ghes);
+ 	return ERR_PTR(rc);
+@@ -330,7 +330,7 @@ err_free:
+ static void ghes_fini(struct ghes *ghes)
+ {
+ 	kfree(ghes->estatus);
+-	acpi_os_unmap_generic_address(&ghes->generic->error_status_address);
++	apei_unmap_generic_address(&ghes->generic->error_status_address);
+ }
+ 
+ enum {
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index eb6fd23..2377445 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -732,8 +732,8 @@ int acpi_pm_device_sleep_state(struct device *dev, int *d_min_p)
+ 	 * can wake the system.  _S0W may be valid, too.
+ 	 */
+ 	if (acpi_target_sleep_state == ACPI_STATE_S0 ||
+-	    (device_may_wakeup(dev) &&
+-	     adev->wakeup.sleep_state <= acpi_target_sleep_state)) {
++	    (device_may_wakeup(dev) && adev->wakeup.flags.valid &&
++	     adev->wakeup.sleep_state >= acpi_target_sleep_state)) {
+ 		acpi_status status;
+ 
+ 		acpi_method[3] = 'W';
+diff --git a/drivers/acpi/sysfs.c b/drivers/acpi/sysfs.c
+index 9f66181..240a244 100644
+--- a/drivers/acpi/sysfs.c
++++ b/drivers/acpi/sysfs.c
+@@ -173,7 +173,7 @@ static int param_set_trace_state(const char *val, struct kernel_param *kp)
+ {
+ 	int result = 0;
+ 
+-	if (!strncmp(val, "enable", strlen("enable") - 1)) {
++	if (!strncmp(val, "enable", strlen("enable"))) {
+ 		result = acpi_debug_trace(trace_method_name, trace_debug_level,
+ 					  trace_debug_layer, 0);
+ 		if (result)
+@@ -181,7 +181,7 @@ static int param_set_trace_state(const char *val, struct kernel_param *kp)
+ 		goto exit;
+ 	}
+ 
+-	if (!strncmp(val, "disable", strlen("disable") - 1)) {
++	if (!strncmp(val, "disable", strlen("disable"))) {
+ 		int name = 0;
+ 		result = acpi_debug_trace((char *)&name, trace_debug_level,
+ 					  trace_debug_layer, 0);
+diff --git a/drivers/acpi/video.c b/drivers/acpi/video.c
+index 66e8f73..48b5a3c 100644
+--- a/drivers/acpi/video.c
++++ b/drivers/acpi/video.c
+@@ -558,6 +558,8 @@ acpi_video_bus_DOS(struct acpi_video_bus *video, int bios_flag, int lcd_flag)
+ 	union acpi_object arg0 = { ACPI_TYPE_INTEGER };
+ 	struct acpi_object_list args = { 1, &arg0 };
+ 
++	if (!video->cap._DOS)
++		return 0;
+ 
+ 	if (bios_flag < 0 || bios_flag > 3 || lcd_flag < 0 || lcd_flag > 1)
+ 		return -EINVAL;
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index b462c0e..3085f9b 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -1021,7 +1021,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 	dpm_wait_for_children(dev, async);
+ 
+ 	if (async_error)
+-		return 0;
++		goto Complete;
+ 
+ 	pm_runtime_get_noresume(dev);
+ 	if (pm_runtime_barrier(dev) && device_may_wakeup(dev))
+@@ -1030,7 +1030,7 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 	if (pm_wakeup_pending()) {
+ 		pm_runtime_put_sync(dev);
+ 		async_error = -EBUSY;
+-		return 0;
++		goto Complete;
+ 	}
+ 
+ 	device_lock(dev);
+@@ -1087,6 +1087,8 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
+ 	}
+ 
+ 	device_unlock(dev);
++
++ Complete:
+ 	complete_all(&dev->power.completion);
+ 
+ 	if (error) {
+diff --git a/drivers/block/umem.c b/drivers/block/umem.c
+index aa27120..9a72277 100644
+--- a/drivers/block/umem.c
++++ b/drivers/block/umem.c
+@@ -513,6 +513,44 @@ static void process_page(unsigned long data)
+ 	}
+ }
+ 
++struct mm_plug_cb {
++	struct blk_plug_cb cb;
++	struct cardinfo *card;
++};
++
++static void mm_unplug(struct blk_plug_cb *cb)
++{
++	struct mm_plug_cb *mmcb = container_of(cb, struct mm_plug_cb, cb);
++
++	spin_lock_irq(&mmcb->card->lock);
++	activate(mmcb->card);
++	spin_unlock_irq(&mmcb->card->lock);
++	kfree(mmcb);
++}
++
++static int mm_check_plugged(struct cardinfo *card)
++{
++	struct blk_plug *plug = current->plug;
++	struct mm_plug_cb *mmcb;
++
++	if (!plug)
++		return 0;
++
++	list_for_each_entry(mmcb, &plug->cb_list, cb.list) {
++		if (mmcb->cb.callback == mm_unplug && mmcb->card == card)
++			return 1;
++	}
++	/* Not currently on the callback list */
++	mmcb = kmalloc(sizeof(*mmcb), GFP_ATOMIC);
++	if (!mmcb)
++		return 0;
++
++	mmcb->card = card;
++	mmcb->cb.callback = mm_unplug;
++	list_add(&mmcb->cb.list, &plug->cb_list);
++	return 1;
++}
++
+ static void mm_make_request(struct request_queue *q, struct bio *bio)
+ {
+ 	struct cardinfo *card = q->queuedata;
+@@ -523,6 +561,8 @@ static void mm_make_request(struct request_queue *q, struct bio *bio)
+ 	*card->biotail = bio;
+ 	bio->bi_next = NULL;
+ 	card->biotail = &bio->bi_next;
++	if (bio->bi_rw & REQ_SYNC || !mm_check_plugged(card))
++		activate(card);
+ 	spin_unlock_irq(&card->lock);
+ 
+ 	return;
+diff --git a/drivers/block/xen-blkback/common.h b/drivers/block/xen-blkback/common.h
+index 773cf27..9ad3b5e 100644
+--- a/drivers/block/xen-blkback/common.h
++++ b/drivers/block/xen-blkback/common.h
+@@ -257,6 +257,7 @@ static inline void blkif_get_x86_32_req(struct blkif_request *dst,
+ 		break;
+ 	case BLKIF_OP_DISCARD:
+ 		dst->u.discard.flag = src->u.discard.flag;
++		dst->u.discard.id = src->u.discard.id;
+ 		dst->u.discard.sector_number = src->u.discard.sector_number;
+ 		dst->u.discard.nr_sectors = src->u.discard.nr_sectors;
+ 		break;
+@@ -287,6 +288,7 @@ static inline void blkif_get_x86_64_req(struct blkif_request *dst,
+ 		break;
+ 	case BLKIF_OP_DISCARD:
+ 		dst->u.discard.flag = src->u.discard.flag;
++		dst->u.discard.id = src->u.discard.id;
+ 		dst->u.discard.sector_number = src->u.discard.sector_number;
+ 		dst->u.discard.nr_sectors = src->u.discard.nr_sectors;
+ 		break;
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 9cf6f59..2da025e 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -997,7 +997,7 @@ static struct clk *__clk_init_parent(struct clk *clk)
+ 
+ 	if (!clk->parents)
+ 		clk->parents =
+-			kmalloc((sizeof(struct clk*) * clk->num_parents),
++			kzalloc((sizeof(struct clk*) * clk->num_parents),
+ 					GFP_KERNEL);
+ 
+ 	if (!clk->parents)
+@@ -1062,21 +1062,24 @@ static int __clk_set_parent(struct clk *clk, struct clk *parent)
+ 
+ 	old_parent = clk->parent;
+ 
+-	/* find index of new parent clock using cached parent ptrs */
+-	for (i = 0; i < clk->num_parents; i++)
+-		if (clk->parents[i] == parent)
+-			break;
++	if (!clk->parents)
++		clk->parents = kzalloc((sizeof(struct clk*) * clk->num_parents),
++								GFP_KERNEL);
+ 
+ 	/*
+-	 * find index of new parent clock using string name comparison
+-	 * also try to cache the parent to avoid future calls to __clk_lookup
++	 * find index of new parent clock using cached parent ptrs,
++	 * or if not yet cached, use string name comparison and cache
++	 * them now to avoid future calls to __clk_lookup.
+ 	 */
+-	if (i == clk->num_parents)
+-		for (i = 0; i < clk->num_parents; i++)
+-			if (!strcmp(clk->parent_names[i], parent->name)) {
++	for (i = 0; i < clk->num_parents; i++) {
++		if (clk->parents && clk->parents[i] == parent)
++			break;
++		else if (!strcmp(clk->parent_names[i], parent->name)) {
++			if (clk->parents)
+ 				clk->parents[i] = __clk_lookup(parent->name);
+-				break;
+-			}
++			break;
++		}
++	}
+ 
+ 	if (i == clk->num_parents) {
+ 		pr_debug("%s: clock %s is not a possible parent of clock %s\n",
+diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c
+index fa3fb21..8c44f17 100644
+--- a/drivers/dma/pl330.c
++++ b/drivers/dma/pl330.c
+@@ -2322,7 +2322,7 @@ static void pl330_tasklet(unsigned long data)
+ 	/* Pick up ripe tomatoes */
+ 	list_for_each_entry_safe(desc, _dt, &pch->work_list, node)
+ 		if (desc->status == DONE) {
+-			if (pch->cyclic)
++			if (!pch->cyclic)
+ 				dma_cookie_complete(&desc->txd);
+ 			list_move_tail(&desc->node, &list);
+ 		}
+diff --git a/drivers/gpio/gpio-wm8994.c b/drivers/gpio/gpio-wm8994.c
+index 92ea535..aa61ad2 100644
+--- a/drivers/gpio/gpio-wm8994.c
++++ b/drivers/gpio/gpio-wm8994.c
+@@ -89,8 +89,11 @@ static int wm8994_gpio_direction_out(struct gpio_chip *chip,
+ 	struct wm8994_gpio *wm8994_gpio = to_wm8994_gpio(chip);
+ 	struct wm8994 *wm8994 = wm8994_gpio->wm8994;
+ 
++	if (value)
++		value = WM8994_GPN_LVL;
++
+ 	return wm8994_set_bits(wm8994, WM8994_GPIO_1 + offset,
+-			       WM8994_GPN_DIR, 0);
++			       WM8994_GPN_DIR | WM8994_GPN_LVL, value);
+ }
+ 
+ static void wm8994_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 5a18b0d..6e38325 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -574,7 +574,7 @@ static bool
+ drm_monitor_supports_rb(struct edid *edid)
+ {
+ 	if (edid->revision >= 4) {
+-		bool ret;
++		bool ret = false;
+ 		drm_for_each_detailed_block((u8 *)edid, is_rb, &ret);
+ 		return ret;
+ 	}
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index f57e5cf..26c67a7 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -424,6 +424,30 @@ static void gen6_pm_rps_work(struct work_struct *work)
+ 	mutex_unlock(&dev_priv->dev->struct_mutex);
+ }
+ 
++static void gen6_queue_rps_work(struct drm_i915_private *dev_priv,
++				u32 pm_iir)
++{
++	unsigned long flags;
++
++	/*
++	 * IIR bits should never already be set because IMR should
++	 * prevent an interrupt from being shown in IIR. The warning
++	 * displays a case where we've unsafely cleared
++	 * dev_priv->pm_iir. Although missing an interrupt of the same
++	 * type is not a problem, it indicates a problem in the logic.
++	 *
++	 * The mask bit in IMR is cleared by rps_work.
++	 */
++
++	spin_lock_irqsave(&dev_priv->rps_lock, flags);
++	dev_priv->pm_iir |= pm_iir;
++	I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);
++	POSTING_READ(GEN6_PMIMR);
++	spin_unlock_irqrestore(&dev_priv->rps_lock, flags);
++
++	queue_work(dev_priv->wq, &dev_priv->rps_work);
++}
++
+ static void pch_irq_handler(struct drm_device *dev, u32 pch_iir)
+ {
+ 	drm_i915_private_t *dev_priv = (drm_i915_private_t *) dev->dev_private;
+@@ -529,16 +553,8 @@ static irqreturn_t ivybridge_irq_handler(DRM_IRQ_ARGS)
+ 		pch_irq_handler(dev, pch_iir);
+ 	}
+ 
+-	if (pm_iir & GEN6_PM_DEFERRED_EVENTS) {
+-		unsigned long flags;
+-		spin_lock_irqsave(&dev_priv->rps_lock, flags);
+-		WARN(dev_priv->pm_iir & pm_iir, "Missed a PM interrupt\n");
+-		dev_priv->pm_iir |= pm_iir;
+-		I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);
+-		POSTING_READ(GEN6_PMIMR);
+-		spin_unlock_irqrestore(&dev_priv->rps_lock, flags);
+-		queue_work(dev_priv->wq, &dev_priv->rps_work);
+-	}
++	if (pm_iir & GEN6_PM_DEFERRED_EVENTS)
++		gen6_queue_rps_work(dev_priv, pm_iir);
+ 
+ 	/* should clear PCH hotplug event before clear CPU irq */
+ 	I915_WRITE(SDEIIR, pch_iir);
+@@ -634,25 +650,8 @@ static irqreturn_t ironlake_irq_handler(DRM_IRQ_ARGS)
+ 		i915_handle_rps_change(dev);
+ 	}
+ 
+-	if (IS_GEN6(dev) && pm_iir & GEN6_PM_DEFERRED_EVENTS) {
+-		/*
+-		 * IIR bits should never already be set because IMR should
+-		 * prevent an interrupt from being shown in IIR. The warning
+-		 * displays a case where we've unsafely cleared
+-		 * dev_priv->pm_iir. Although missing an interrupt of the same
+-		 * type is not a problem, it displays a problem in the logic.
+-		 *
+-		 * The mask bit in IMR is cleared by rps_work.
+-		 */
+-		unsigned long flags;
+-		spin_lock_irqsave(&dev_priv->rps_lock, flags);
+-		WARN(dev_priv->pm_iir & pm_iir, "Missed a PM interrupt\n");
+-		dev_priv->pm_iir |= pm_iir;
+-		I915_WRITE(GEN6_PMIMR, dev_priv->pm_iir);
+-		POSTING_READ(GEN6_PMIMR);
+-		spin_unlock_irqrestore(&dev_priv->rps_lock, flags);
+-		queue_work(dev_priv->wq, &dev_priv->rps_work);
+-	}
++	if (IS_GEN6(dev) && pm_iir & GEN6_PM_DEFERRED_EVENTS)
++		gen6_queue_rps_work(dev_priv, pm_iir);
+ 
+ 	/* should clear PCH hotplug event before clear CPU irq */
+ 	I915_WRITE(SDEIIR, pch_iir);
+diff --git a/drivers/gpu/drm/i915/i915_suspend.c b/drivers/gpu/drm/i915/i915_suspend.c
+index 2b5eb22..0d13778 100644
+--- a/drivers/gpu/drm/i915/i915_suspend.c
++++ b/drivers/gpu/drm/i915/i915_suspend.c
+@@ -740,8 +740,11 @@ static void i915_restore_display(struct drm_device *dev)
+ 	if (HAS_PCH_SPLIT(dev)) {
+ 		I915_WRITE(BLC_PWM_PCH_CTL1, dev_priv->saveBLC_PWM_CTL);
+ 		I915_WRITE(BLC_PWM_PCH_CTL2, dev_priv->saveBLC_PWM_CTL2);
+-		I915_WRITE(BLC_PWM_CPU_CTL, dev_priv->saveBLC_CPU_PWM_CTL);
++		/* NOTE: BLC_PWM_CPU_CTL must be written after BLC_PWM_CPU_CTL2;
++		 * otherwise we get blank eDP screen after S3 on some machines
++		 */
+ 		I915_WRITE(BLC_PWM_CPU_CTL2, dev_priv->saveBLC_CPU_PWM_CTL2);
++		I915_WRITE(BLC_PWM_CPU_CTL, dev_priv->saveBLC_CPU_PWM_CTL);
+ 		I915_WRITE(PCH_PP_ON_DELAYS, dev_priv->savePP_ON_DELAYS);
+ 		I915_WRITE(PCH_PP_OFF_DELAYS, dev_priv->savePP_OFF_DELAYS);
+ 		I915_WRITE(PCH_PP_DIVISOR, dev_priv->savePP_DIVISOR);
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fbcon.c b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+index 8113e92..6fd2211 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fbcon.c
++++ b/drivers/gpu/drm/nouveau/nouveau_fbcon.c
+@@ -497,7 +497,7 @@ int nouveau_fbcon_init(struct drm_device *dev)
+ 	nfbdev->helper.funcs = &nouveau_fbcon_helper_funcs;
+ 
+ 	ret = drm_fb_helper_init(dev, &nfbdev->helper,
+-				 nv_two_heads(dev) ? 2 : 1, 4);
++				 dev->mode_config.num_crtc, 4);
+ 	if (ret) {
+ 		kfree(nfbdev);
+ 		return ret;
+diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c
+index 62050f5..2a4c592 100644
+--- a/drivers/gpu/drm/radeon/radeon_gart.c
++++ b/drivers/gpu/drm/radeon/radeon_gart.c
+@@ -289,8 +289,9 @@ int radeon_vm_manager_init(struct radeon_device *rdev)
+ 	rdev->vm_manager.enabled = false;
+ 
+ 	/* mark first vm as always in use, it's the system one */
++	/* allocate enough for 2 full VM pts */
+ 	r = radeon_sa_bo_manager_init(rdev, &rdev->vm_manager.sa_manager,
+-				      rdev->vm_manager.max_pfn * 8,
++				      rdev->vm_manager.max_pfn * 8 * 2,
+ 				      RADEON_GEM_DOMAIN_VRAM);
+ 	if (r) {
+ 		dev_err(rdev->dev, "failed to allocate vm bo (%dKB)\n",
+@@ -635,7 +636,15 @@ int radeon_vm_init(struct radeon_device *rdev, struct radeon_vm *vm)
+ 	mutex_init(&vm->mutex);
+ 	INIT_LIST_HEAD(&vm->list);
+ 	INIT_LIST_HEAD(&vm->va);
+-	vm->last_pfn = 0;
++	/* SI requires equal sized PTs for all VMs, so always set
++	 * last_pfn to max_pfn.  cayman allows variable sized
++	 * pts so we can grow them as needed.  Once we switch
++	 * to two level pts we can unify this again.
++	 */
++	if (rdev->family >= CHIP_TAHITI)
++		vm->last_pfn = rdev->vm_manager.max_pfn;
++	else
++		vm->last_pfn = 0;
+ 	/* map the ib pool buffer at 0 in virtual address space, set
+ 	 * read only
+ 	 */
+diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
+index 27bda98..2af1ce6 100644
+--- a/drivers/gpu/drm/radeon/si.c
++++ b/drivers/gpu/drm/radeon/si.c
+@@ -2527,12 +2527,12 @@ int si_pcie_gart_enable(struct radeon_device *rdev)
+ 	WREG32(0x15DC, 0);
+ 
+ 	/* empty context1-15 */
+-	/* FIXME start with 1G, once using 2 level pt switch to full
++	/* FIXME start with 4G, once using 2 level pt switch to full
+ 	 * vm size space
+ 	 */
+ 	/* set vm size, must be a multiple of 4 */
+ 	WREG32(VM_CONTEXT1_PAGE_TABLE_START_ADDR, 0);
+-	WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, (1 << 30) / RADEON_GPU_PAGE_SIZE);
++	WREG32(VM_CONTEXT1_PAGE_TABLE_END_ADDR, rdev->vm_manager.max_pfn);
+ 	for (i = 1; i < 16; i++) {
+ 		if (i < 8)
+ 			WREG32(VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (i << 2),
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 1d5b941..543896d 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -70,9 +70,16 @@ struct mt_class {
+ 	bool is_indirect;	/* true for touchpads */
+ };
+ 
++struct mt_fields {
++	unsigned usages[HID_MAX_FIELDS];
++	unsigned int length;
++};
++
+ struct mt_device {
+ 	struct mt_slot curdata;	/* placeholder of incoming data */
+ 	struct mt_class mtclass;	/* our mt device class */
++	struct mt_fields *fields;	/* temporary placeholder for storing the
++					   multitouch fields */
+ 	unsigned last_field_index;	/* last field index of the report */
+ 	unsigned last_slot_field;	/* the last field of a slot */
+ 	__s8 inputmode;		/* InputMode HID feature, -1 if non-existent */
+@@ -275,11 +282,15 @@ static void set_abs(struct input_dev *input, unsigned int code,
+ 	input_set_abs_params(input, code, fmin, fmax, fuzz, 0);
+ }
+ 
+-static void set_last_slot_field(struct hid_usage *usage, struct mt_device *td,
++static void mt_store_field(struct hid_usage *usage, struct mt_device *td,
+ 		struct hid_input *hi)
+ {
+-	if (!test_bit(usage->hid, hi->input->absbit))
+-		td->last_slot_field = usage->hid;
++	struct mt_fields *f = td->fields;
++
++	if (f->length >= HID_MAX_FIELDS)
++		return;
++
++	f->usages[f->length++] = usage->hid;
+ }
+ 
+ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+@@ -330,7 +341,7 @@ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 				cls->sn_move);
+ 			/* touchscreen emulation */
+ 			set_abs(hi->input, ABS_X, field, cls->sn_move);
+-			set_last_slot_field(usage, td, hi);
++			mt_store_field(usage, td, hi);
+ 			td->last_field_index = field->index;
+ 			return 1;
+ 		case HID_GD_Y:
+@@ -340,7 +351,7 @@ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 				cls->sn_move);
+ 			/* touchscreen emulation */
+ 			set_abs(hi->input, ABS_Y, field, cls->sn_move);
+-			set_last_slot_field(usage, td, hi);
++			mt_store_field(usage, td, hi);
+ 			td->last_field_index = field->index;
+ 			return 1;
+ 		}
+@@ -349,24 +360,24 @@ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 	case HID_UP_DIGITIZER:
+ 		switch (usage->hid) {
+ 		case HID_DG_INRANGE:
+-			set_last_slot_field(usage, td, hi);
++			mt_store_field(usage, td, hi);
+ 			td->last_field_index = field->index;
+ 			return 1;
+ 		case HID_DG_CONFIDENCE:
+-			set_last_slot_field(usage, td, hi);
++			mt_store_field(usage, td, hi);
+ 			td->last_field_index = field->index;
+ 			return 1;
+ 		case HID_DG_TIPSWITCH:
+ 			hid_map_usage(hi, usage, bit, max, EV_KEY, BTN_TOUCH);
+ 			input_set_capability(hi->input, EV_KEY, BTN_TOUCH);
+-			set_last_slot_field(usage, td, hi);
++			mt_store_field(usage, td, hi);
+ 			td->last_field_index = field->index;
+ 			return 1;
+ 		case HID_DG_CONTACTID:
+ 			if (!td->maxcontacts)
+ 				td->maxcontacts = MT_DEFAULT_MAXCONTACT;
+ 			input_mt_init_slots(hi->input, td->maxcontacts);
+-			td->last_slot_field = usage->hid;
++			mt_store_field(usage, td, hi);
+ 			td->last_field_index = field->index;
+ 			td->touches_by_report++;
+ 			return 1;
+@@ -375,7 +386,7 @@ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 					EV_ABS, ABS_MT_TOUCH_MAJOR);
+ 			set_abs(hi->input, ABS_MT_TOUCH_MAJOR, field,
+ 				cls->sn_width);
+-			set_last_slot_field(usage, td, hi);
++			mt_store_field(usage, td, hi);
+ 			td->last_field_index = field->index;
+ 			return 1;
+ 		case HID_DG_HEIGHT:
+@@ -385,7 +396,7 @@ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 				cls->sn_height);
+ 			input_set_abs_params(hi->input,
+ 					ABS_MT_ORIENTATION, 0, 1, 0, 0);
+-			set_last_slot_field(usage, td, hi);
++			mt_store_field(usage, td, hi);
+ 			td->last_field_index = field->index;
+ 			return 1;
+ 		case HID_DG_TIPPRESSURE:
+@@ -396,7 +407,7 @@ static int mt_input_mapping(struct hid_device *hdev, struct hid_input *hi,
+ 			/* touchscreen emulation */
+ 			set_abs(hi->input, ABS_PRESSURE, field,
+ 				cls->sn_pressure);
+-			set_last_slot_field(usage, td, hi);
++			mt_store_field(usage, td, hi);
+ 			td->last_field_index = field->index;
+ 			return 1;
+ 		case HID_DG_CONTACTCOUNT:
+@@ -635,6 +646,16 @@ static void mt_set_maxcontacts(struct hid_device *hdev)
+ 	}
+ }
+ 
++static void mt_post_parse(struct mt_device *td)
++{
++	struct mt_fields *f = td->fields;
++
++	if (td->touches_by_report > 0) {
++		int field_count_per_touch = f->length / td->touches_by_report;
++		td->last_slot_field = f->usages[field_count_per_touch - 1];
++	}
++}
++
+ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ {
+ 	int ret, i;
+@@ -666,6 +687,13 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	td->maxcontact_report_id = -1;
+ 	hid_set_drvdata(hdev, td);
+ 
++	td->fields = kzalloc(sizeof(struct mt_fields), GFP_KERNEL);
++	if (!td->fields) {
++		dev_err(&hdev->dev, "cannot allocate multitouch fields data\n");
++		ret = -ENOMEM;
++		goto fail;
++	}
++
+ 	ret = hid_parse(hdev);
+ 	if (ret != 0)
+ 		goto fail;
+@@ -674,6 +702,8 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	if (ret)
+ 		goto fail;
+ 
++	mt_post_parse(td);
++
+ 	if (!id && td->touches_by_report == 1) {
+ 		/* the device has been sent by hid-generic */
+ 		mtclass = &td->mtclass;
+@@ -697,9 +727,13 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	mt_set_maxcontacts(hdev);
+ 	mt_set_input_mode(hdev);
+ 
++	kfree(td->fields);
++	td->fields = NULL;
++
+ 	return 0;
+ 
+ fail:
++	kfree(td->fields);
+ 	kfree(td);
+ 	return ret;
+ }
+diff --git a/drivers/hwmon/applesmc.c b/drivers/hwmon/applesmc.c
+index f082e48..70d62f5 100644
+--- a/drivers/hwmon/applesmc.c
++++ b/drivers/hwmon/applesmc.c
+@@ -215,7 +215,7 @@ static int read_smc(u8 cmd, const char *key, u8 *buffer, u8 len)
+ 	int i;
+ 
+ 	if (send_command(cmd) || send_argument(key)) {
+-		pr_warn("%s: read arg fail\n", key);
++		pr_warn("%.4s: read arg fail\n", key);
+ 		return -EIO;
+ 	}
+ 
+@@ -223,7 +223,7 @@ static int read_smc(u8 cmd, const char *key, u8 *buffer, u8 len)
+ 
+ 	for (i = 0; i < len; i++) {
+ 		if (__wait_status(0x05)) {
+-			pr_warn("%s: read data fail\n", key);
++			pr_warn("%.4s: read data fail\n", key);
+ 			return -EIO;
+ 		}
+ 		buffer[i] = inb(APPLESMC_DATA_PORT);
+diff --git a/drivers/hwmon/coretemp.c b/drivers/hwmon/coretemp.c
+index b9d5123..0f52799 100644
+--- a/drivers/hwmon/coretemp.c
++++ b/drivers/hwmon/coretemp.c
+@@ -664,7 +664,7 @@ static void __cpuinit get_core_online(unsigned int cpu)
+ 	 * sensors. We check this bit only, all the early CPUs
+ 	 * without thermal sensors will be filtered out.
+ 	 */
+-	if (!cpu_has(c, X86_FEATURE_DTS))
++	if (!cpu_has(c, X86_FEATURE_DTHERM))
+ 		return;
+ 
+ 	if (!pdev) {
+@@ -765,7 +765,7 @@ static struct notifier_block coretemp_cpu_notifier __refdata = {
+ };
+ 
+ static const struct x86_cpu_id coretemp_ids[] = {
+-	{ X86_VENDOR_INTEL, X86_FAMILY_ANY, X86_MODEL_ANY, X86_FEATURE_DTS },
++	{ X86_VENDOR_INTEL, X86_FAMILY_ANY, X86_MODEL_ANY, X86_FEATURE_DTHERM },
+ 	{}
+ };
+ MODULE_DEVICE_TABLE(x86cpu, coretemp_ids);
+diff --git a/drivers/hwspinlock/hwspinlock_core.c b/drivers/hwspinlock/hwspinlock_core.c
+index 61c9cf1..1201a15 100644
+--- a/drivers/hwspinlock/hwspinlock_core.c
++++ b/drivers/hwspinlock/hwspinlock_core.c
+@@ -345,7 +345,7 @@ int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
+ 		spin_lock_init(&hwlock->lock);
+ 		hwlock->bank = bank;
+ 
+-		ret = hwspin_lock_register_single(hwlock, i);
++		ret = hwspin_lock_register_single(hwlock, base_id + i);
+ 		if (ret)
+ 			goto reg_failed;
+ 	}
+@@ -354,7 +354,7 @@ int hwspin_lock_register(struct hwspinlock_device *bank, struct device *dev,
+ 
+ reg_failed:
+ 	while (--i >= 0)
+-		hwspin_lock_unregister_single(i);
++		hwspin_lock_unregister_single(base_id + i);
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(hwspin_lock_register);
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index a2e418c..dfe7d37 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -83,6 +83,8 @@ static struct iommu_ops amd_iommu_ops;
+ static ATOMIC_NOTIFIER_HEAD(ppr_notifier);
+ int amd_iommu_max_glx_val = -1;
+ 
++static struct dma_map_ops amd_iommu_dma_ops;
++
+ /*
+  * general struct to manage commands send to an IOMMU
+  */
+@@ -2267,6 +2269,13 @@ static int device_change_notifier(struct notifier_block *nb,
+ 		list_add_tail(&dma_domain->list, &iommu_pd_list);
+ 		spin_unlock_irqrestore(&iommu_pd_list_lock, flags);
+ 
++		dev_data = get_dev_data(dev);
++
++		if (!dev_data->passthrough)
++			dev->archdata.dma_ops = &amd_iommu_dma_ops;
++		else
++			dev->archdata.dma_ops = &nommu_dma_ops;
++
+ 		break;
+ 	case BUS_NOTIFY_DEL_DEVICE:
+ 
+diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
+index 542024b..c04ddca 100644
+--- a/drivers/iommu/amd_iommu_init.c
++++ b/drivers/iommu/amd_iommu_init.c
+@@ -1641,6 +1641,8 @@ static int __init amd_iommu_init(void)
+ 
+ 	amd_iommu_init_api();
+ 
++	x86_platform.iommu_shutdown = disable_iommus;
++
+ 	if (iommu_pass_through)
+ 		goto out;
+ 
+@@ -1649,8 +1651,6 @@ static int __init amd_iommu_init(void)
+ 	else
+ 		printk(KERN_INFO "AMD-Vi: Lazy IO/TLB flushing enabled\n");
+ 
+-	x86_platform.iommu_shutdown = disable_iommus;
+-
+ out:
+ 	return ret;
+ 
+diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
+index eb93c82..17ef6c4 100644
+--- a/drivers/iommu/tegra-smmu.c
++++ b/drivers/iommu/tegra-smmu.c
+@@ -550,13 +550,13 @@ static int alloc_pdir(struct smmu_as *as)
+ 		return 0;
+ 
+ 	as->pte_count = devm_kzalloc(smmu->dev,
+-		     sizeof(as->pte_count[0]) * SMMU_PDIR_COUNT, GFP_KERNEL);
++		     sizeof(as->pte_count[0]) * SMMU_PDIR_COUNT, GFP_ATOMIC);
+ 	if (!as->pte_count) {
+ 		dev_err(smmu->dev,
+ 			"failed to allocate smmu_device PTE cunters\n");
+ 		return -ENOMEM;
+ 	}
+-	as->pdir_page = alloc_page(GFP_KERNEL | __GFP_DMA);
++	as->pdir_page = alloc_page(GFP_ATOMIC | __GFP_DMA);
+ 	if (!as->pdir_page) {
+ 		dev_err(smmu->dev,
+ 			"failed to allocate smmu_device page directory\n");
+diff --git a/drivers/md/persistent-data/dm-space-map-checker.c b/drivers/md/persistent-data/dm-space-map-checker.c
+index 50ed53b..fc90c11 100644
+--- a/drivers/md/persistent-data/dm-space-map-checker.c
++++ b/drivers/md/persistent-data/dm-space-map-checker.c
+@@ -8,6 +8,7 @@
+ 
+ #include <linux/device-mapper.h>
+ #include <linux/export.h>
++#include <linux/vmalloc.h>
+ 
+ #ifdef CONFIG_DM_DEBUG_SPACE_MAPS
+ 
+@@ -89,13 +90,23 @@ static int ca_create(struct count_array *ca, struct dm_space_map *sm)
+ 
+ 	ca->nr = nr_blocks;
+ 	ca->nr_free = nr_blocks;
+-	ca->counts = kzalloc(sizeof(*ca->counts) * nr_blocks, GFP_KERNEL);
+-	if (!ca->counts)
+-		return -ENOMEM;
++
++	if (!nr_blocks)
++		ca->counts = NULL;
++	else {
++		ca->counts = vzalloc(sizeof(*ca->counts) * nr_blocks);
++		if (!ca->counts)
++			return -ENOMEM;
++	}
+ 
+ 	return 0;
+ }
+ 
++static void ca_destroy(struct count_array *ca)
++{
++	vfree(ca->counts);
++}
++
+ static int ca_load(struct count_array *ca, struct dm_space_map *sm)
+ {
+ 	int r;
+@@ -126,12 +137,14 @@ static int ca_load(struct count_array *ca, struct dm_space_map *sm)
+ static int ca_extend(struct count_array *ca, dm_block_t extra_blocks)
+ {
+ 	dm_block_t nr_blocks = ca->nr + extra_blocks;
+-	uint32_t *counts = kzalloc(sizeof(*counts) * nr_blocks, GFP_KERNEL);
++	uint32_t *counts = vzalloc(sizeof(*counts) * nr_blocks);
+ 	if (!counts)
+ 		return -ENOMEM;
+ 
+-	memcpy(counts, ca->counts, sizeof(*counts) * ca->nr);
+-	kfree(ca->counts);
++	if (ca->counts) {
++		memcpy(counts, ca->counts, sizeof(*counts) * ca->nr);
++		ca_destroy(ca);
++	}
+ 	ca->nr = nr_blocks;
+ 	ca->nr_free += extra_blocks;
+ 	ca->counts = counts;
+@@ -151,11 +164,6 @@ static int ca_commit(struct count_array *old, struct count_array *new)
+ 	return 0;
+ }
+ 
+-static void ca_destroy(struct count_array *ca)
+-{
+-	kfree(ca->counts);
+-}
+-
+ /*----------------------------------------------------------------*/
+ 
+ struct sm_checker {
+@@ -343,25 +351,25 @@ struct dm_space_map *dm_sm_checker_create(struct dm_space_map *sm)
+ 	int r;
+ 	struct sm_checker *smc;
+ 
+-	if (!sm)
+-		return NULL;
++	if (IS_ERR_OR_NULL(sm))
++		return ERR_PTR(-EINVAL);
+ 
+ 	smc = kmalloc(sizeof(*smc), GFP_KERNEL);
+ 	if (!smc)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	memcpy(&smc->sm, &ops_, sizeof(smc->sm));
+ 	r = ca_create(&smc->old_counts, sm);
+ 	if (r) {
+ 		kfree(smc);
+-		return NULL;
++		return ERR_PTR(r);
+ 	}
+ 
+ 	r = ca_create(&smc->counts, sm);
+ 	if (r) {
+ 		ca_destroy(&smc->old_counts);
+ 		kfree(smc);
+-		return NULL;
++		return ERR_PTR(r);
+ 	}
+ 
+ 	smc->real_sm = sm;
+@@ -371,7 +379,7 @@ struct dm_space_map *dm_sm_checker_create(struct dm_space_map *sm)
+ 		ca_destroy(&smc->counts);
+ 		ca_destroy(&smc->old_counts);
+ 		kfree(smc);
+-		return NULL;
++		return ERR_PTR(r);
+ 	}
+ 
+ 	r = ca_commit(&smc->old_counts, &smc->counts);
+@@ -379,7 +387,7 @@ struct dm_space_map *dm_sm_checker_create(struct dm_space_map *sm)
+ 		ca_destroy(&smc->counts);
+ 		ca_destroy(&smc->old_counts);
+ 		kfree(smc);
+-		return NULL;
++		return ERR_PTR(r);
+ 	}
+ 
+ 	return &smc->sm;
+@@ -391,25 +399,25 @@ struct dm_space_map *dm_sm_checker_create_fresh(struct dm_space_map *sm)
+ 	int r;
+ 	struct sm_checker *smc;
+ 
+-	if (!sm)
+-		return NULL;
++	if (IS_ERR_OR_NULL(sm))
++		return ERR_PTR(-EINVAL);
+ 
+ 	smc = kmalloc(sizeof(*smc), GFP_KERNEL);
+ 	if (!smc)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	memcpy(&smc->sm, &ops_, sizeof(smc->sm));
+ 	r = ca_create(&smc->old_counts, sm);
+ 	if (r) {
+ 		kfree(smc);
+-		return NULL;
++		return ERR_PTR(r);
+ 	}
+ 
+ 	r = ca_create(&smc->counts, sm);
+ 	if (r) {
+ 		ca_destroy(&smc->old_counts);
+ 		kfree(smc);
+-		return NULL;
++		return ERR_PTR(r);
+ 	}
+ 
+ 	smc->real_sm = sm;
+diff --git a/drivers/md/persistent-data/dm-space-map-disk.c b/drivers/md/persistent-data/dm-space-map-disk.c
+index fc469ba..3d0ed53 100644
+--- a/drivers/md/persistent-data/dm-space-map-disk.c
++++ b/drivers/md/persistent-data/dm-space-map-disk.c
+@@ -290,7 +290,16 @@ struct dm_space_map *dm_sm_disk_create(struct dm_transaction_manager *tm,
+ 				       dm_block_t nr_blocks)
+ {
+ 	struct dm_space_map *sm = dm_sm_disk_create_real(tm, nr_blocks);
+-	return dm_sm_checker_create_fresh(sm);
++	struct dm_space_map *smc;
++
++	if (IS_ERR_OR_NULL(sm))
++		return sm;
++
++	smc = dm_sm_checker_create_fresh(sm);
++	if (IS_ERR(smc))
++		dm_sm_destroy(sm);
++
++	return smc;
+ }
+ EXPORT_SYMBOL_GPL(dm_sm_disk_create);
+ 
+diff --git a/drivers/md/persistent-data/dm-transaction-manager.c b/drivers/md/persistent-data/dm-transaction-manager.c
+index 6f8d387..ba54aac 100644
+--- a/drivers/md/persistent-data/dm-transaction-manager.c
++++ b/drivers/md/persistent-data/dm-transaction-manager.c
+@@ -138,6 +138,9 @@ EXPORT_SYMBOL_GPL(dm_tm_create_non_blocking_clone);
+ 
+ void dm_tm_destroy(struct dm_transaction_manager *tm)
+ {
++	if (!tm->is_clone)
++		wipe_shadow_table(tm);
++
+ 	kfree(tm);
+ }
+ EXPORT_SYMBOL_GPL(dm_tm_destroy);
+@@ -342,8 +345,10 @@ static int dm_tm_create_internal(struct dm_block_manager *bm,
+ 		}
+ 
+ 		*sm = dm_sm_checker_create(inner);
+-		if (!*sm)
++		if (IS_ERR(*sm)) {
++			r = PTR_ERR(*sm);
+ 			goto bad2;
++		}
+ 
+ 	} else {
+ 		r = dm_bm_write_lock(dm_tm_get_bm(*tm), sb_location,
+@@ -362,8 +367,10 @@ static int dm_tm_create_internal(struct dm_block_manager *bm,
+ 		}
+ 
+ 		*sm = dm_sm_checker_create(inner);
+-		if (!*sm)
++		if (IS_ERR(*sm)) {
++			r = PTR_ERR(*sm);
+ 			goto bad2;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index d037adb..a954c95 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -2209,7 +2209,7 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
+ 			if (r10_sync_page_io(rdev,
+ 					     r10_bio->devs[sl].addr +
+ 					     sect,
+-					     s<<9, conf->tmppage, WRITE)
++					     s, conf->tmppage, WRITE)
+ 			    == 0) {
+ 				/* Well, this device is dead */
+ 				printk(KERN_NOTICE
+@@ -2246,7 +2246,7 @@ static void fix_read_error(struct r10conf *conf, struct mddev *mddev, struct r10
+ 			switch (r10_sync_page_io(rdev,
+ 					     r10_bio->devs[sl].addr +
+ 					     sect,
+-					     s<<9, conf->tmppage,
++					     s, conf->tmppage,
+ 						 READ)) {
+ 			case 0:
+ 				/* Well, this device is dead */
+@@ -2407,7 +2407,7 @@ read_more:
+ 	slot = r10_bio->read_slot;
+ 	printk_ratelimited(
+ 		KERN_ERR
+-		"md/raid10:%s: %s: redirecting"
++		"md/raid10:%s: %s: redirecting "
+ 		"sector %llu to another mirror\n",
+ 		mdname(mddev),
+ 		bdevname(rdev->bdev, b),
+@@ -2772,6 +2772,12 @@ static sector_t sync_request(struct mddev *mddev, sector_t sector_nr,
+ 			/* want to reconstruct this device */
+ 			rb2 = r10_bio;
+ 			sect = raid10_find_virt(conf, sector_nr, i);
++			if (sect >= mddev->resync_max_sectors) {
++				/* last stripe is not complete - don't
++				 * try to recover this sector.
++				 */
++				continue;
++			}
+ 			/* Unless we are doing a full sync, or a replacement
+ 			 * we only need to recover the block if it is set in
+ 			 * the bitmap
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index f351422..73a5800 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -196,12 +196,14 @@ static void __release_stripe(struct r5conf *conf, struct stripe_head *sh)
+ 		BUG_ON(!list_empty(&sh->lru));
+ 		BUG_ON(atomic_read(&conf->active_stripes)==0);
+ 		if (test_bit(STRIPE_HANDLE, &sh->state)) {
+-			if (test_bit(STRIPE_DELAYED, &sh->state))
++			if (test_bit(STRIPE_DELAYED, &sh->state) &&
++			    !test_bit(STRIPE_PREREAD_ACTIVE, &sh->state))
+ 				list_add_tail(&sh->lru, &conf->delayed_list);
+ 			else if (test_bit(STRIPE_BIT_DELAY, &sh->state) &&
+ 				   sh->bm_seq - conf->seq_write > 0)
+ 				list_add_tail(&sh->lru, &conf->bitmap_list);
+ 			else {
++				clear_bit(STRIPE_DELAYED, &sh->state);
+ 				clear_bit(STRIPE_BIT_DELAY, &sh->state);
+ 				list_add_tail(&sh->lru, &conf->handle_list);
+ 			}
+@@ -583,6 +585,12 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
+ 					 * a chance*/
+ 					md_check_recovery(conf->mddev);
+ 				}
++				/*
++				 * Because md_wait_for_blocked_rdev
++				 * will dec nr_pending, we must
++				 * increment it first.
++				 */
++				atomic_inc(&rdev->nr_pending);
+ 				md_wait_for_blocked_rdev(rdev, conf->mddev);
+ 			} else {
+ 				/* Acknowledged bad block - skip the write */
+@@ -3842,7 +3850,6 @@ static int chunk_aligned_read(struct mddev *mddev, struct bio * raid_bio)
+ 		raid_bio->bi_next = (void*)rdev;
+ 		align_bi->bi_bdev =  rdev->bdev;
+ 		align_bi->bi_flags &= ~(1 << BIO_SEG_VALID);
+-		align_bi->bi_sector += rdev->data_offset;
+ 
+ 		if (!bio_fits_rdev(align_bi) ||
+ 		    is_badblock(rdev, align_bi->bi_sector, align_bi->bi_size>>9,
+@@ -3853,6 +3860,9 @@ static int chunk_aligned_read(struct mddev *mddev, struct bio * raid_bio)
+ 			return 0;
+ 		}
+ 
++		/* No reshape active, so we can trust rdev->data_offset */
++		align_bi->bi_sector += rdev->data_offset;
++
+ 		spin_lock_irq(&conf->device_lock);
+ 		wait_event_lock_irq(conf->wait_for_stripe,
+ 				    conf->quiesce == 0,
+diff --git a/drivers/media/dvb/siano/smsusb.c b/drivers/media/dvb/siano/smsusb.c
+index 63c004a..664e460 100644
+--- a/drivers/media/dvb/siano/smsusb.c
++++ b/drivers/media/dvb/siano/smsusb.c
+@@ -544,6 +544,8 @@ static const struct usb_device_id smsusb_id_table[] __devinitconst = {
+ 		.driver_info = SMS1XXX_BOARD_HAUPPAUGE_WINDHAM },
+ 	{ USB_DEVICE(0x2040, 0xc0a0),
+ 		.driver_info = SMS1XXX_BOARD_HAUPPAUGE_WINDHAM },
++	{ USB_DEVICE(0x2040, 0xf5a0),
++		.driver_info = SMS1XXX_BOARD_HAUPPAUGE_WINDHAM },
+ 	{ } /* Terminating entry */
+ 	};
+ 
+diff --git a/drivers/media/video/gspca/gspca.c b/drivers/media/video/gspca/gspca.c
+index ca5a2b1..4dc8852 100644
+--- a/drivers/media/video/gspca/gspca.c
++++ b/drivers/media/video/gspca/gspca.c
+@@ -1723,7 +1723,7 @@ static int vidioc_streamoff(struct file *file, void *priv,
+ 				enum v4l2_buf_type buf_type)
+ {
+ 	struct gspca_dev *gspca_dev = priv;
+-	int ret;
++	int i, ret;
+ 
+ 	if (buf_type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ 		return -EINVAL;
+@@ -1754,6 +1754,8 @@ static int vidioc_streamoff(struct file *file, void *priv,
+ 	wake_up_interruptible(&gspca_dev->wq);
+ 
+ 	/* empty the transfer queues */
++	for (i = 0; i < gspca_dev->nframes; i++)
++		gspca_dev->frame[i].v4l2_buf.flags &= ~BUF_ALL_FLAGS;
+ 	atomic_set(&gspca_dev->fr_q, 0);
+ 	atomic_set(&gspca_dev->fr_i, 0);
+ 	gspca_dev->fr_o = 0;
+diff --git a/drivers/mtd/nand/cafe_nand.c b/drivers/mtd/nand/cafe_nand.c
+index 2a96e1a..6d22755 100644
+--- a/drivers/mtd/nand/cafe_nand.c
++++ b/drivers/mtd/nand/cafe_nand.c
+@@ -102,7 +102,7 @@ static const char *part_probes[] = { "cmdlinepart", "RedBoot", NULL };
+ static int cafe_device_ready(struct mtd_info *mtd)
+ {
+ 	struct cafe_priv *cafe = mtd->priv;
+-	int result = !!(cafe_readl(cafe, NAND_STATUS) | 0x40000000);
++	int result = !!(cafe_readl(cafe, NAND_STATUS) & 0x40000000);
+ 	uint32_t irqs = cafe_readl(cafe, NAND_IRQ);
+ 
+ 	cafe_writel(cafe, irqs, NAND_IRQ);
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index bc13b3d..a579a2f 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -76,6 +76,7 @@
+ #include <net/route.h>
+ #include <net/net_namespace.h>
+ #include <net/netns/generic.h>
++#include <net/pkt_sched.h>
+ #include "bonding.h"
+ #include "bond_3ad.h"
+ #include "bond_alb.h"
+@@ -381,8 +382,6 @@ struct vlan_entry *bond_next_vlan(struct bonding *bond, struct vlan_entry *curr)
+ 	return next;
+ }
+ 
+-#define bond_queue_mapping(skb) (*(u16 *)((skb)->cb))
+-
+ /**
+  * bond_dev_queue_xmit - Prepare skb for xmit.
+  *
+@@ -395,7 +394,9 @@ int bond_dev_queue_xmit(struct bonding *bond, struct sk_buff *skb,
+ {
+ 	skb->dev = slave_dev;
+ 
+-	skb->queue_mapping = bond_queue_mapping(skb);
++	BUILD_BUG_ON(sizeof(skb->queue_mapping) !=
++		     sizeof(qdisc_skb_cb(skb)->bond_queue_mapping));
++	skb->queue_mapping = qdisc_skb_cb(skb)->bond_queue_mapping;
+ 
+ 	if (unlikely(netpoll_tx_running(slave_dev)))
+ 		bond_netpoll_send_skb(bond_get_slave_by_dev(bond, slave_dev), skb);
+@@ -4162,7 +4163,7 @@ static u16 bond_select_queue(struct net_device *dev, struct sk_buff *skb)
+ 	/*
+ 	 * Save the original txq to restore before passing to the driver
+ 	 */
+-	bond_queue_mapping(skb) = skb->queue_mapping;
++	qdisc_skb_cb(skb)->bond_queue_mapping = skb->queue_mapping;
+ 
+ 	if (unlikely(txq >= dev->real_num_tx_queues)) {
+ 		do {
+diff --git a/drivers/net/can/c_can/c_can.c b/drivers/net/can/c_can/c_can.c
+index 8dc84d6..86cd532 100644
+--- a/drivers/net/can/c_can/c_can.c
++++ b/drivers/net/can/c_can/c_can.c
+@@ -590,8 +590,8 @@ static void c_can_chip_config(struct net_device *dev)
+ 	priv->write_reg(priv, &priv->regs->control,
+ 			CONTROL_ENABLE_AR);
+ 
+-	if (priv->can.ctrlmode & (CAN_CTRLMODE_LISTENONLY &
+-					CAN_CTRLMODE_LOOPBACK)) {
++	if ((priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY) &&
++	    (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK)) {
+ 		/* loopback + silent mode : useful for hot self-test */
+ 		priv->write_reg(priv, &priv->regs->control, CONTROL_EIE |
+ 				CONTROL_SIE | CONTROL_IE | CONTROL_TEST);
+diff --git a/drivers/net/can/flexcan.c b/drivers/net/can/flexcan.c
+index 1efb083..00baa7e 100644
+--- a/drivers/net/can/flexcan.c
++++ b/drivers/net/can/flexcan.c
+@@ -933,12 +933,12 @@ static int __devinit flexcan_probe(struct platform_device *pdev)
+ 	u32 clock_freq = 0;
+ 
+ 	if (pdev->dev.of_node) {
+-		const u32 *clock_freq_p;
++		const __be32 *clock_freq_p;
+ 
+ 		clock_freq_p = of_get_property(pdev->dev.of_node,
+ 						"clock-frequency", NULL);
+ 		if (clock_freq_p)
+-			clock_freq = *clock_freq_p;
++			clock_freq = be32_to_cpup(clock_freq_p);
+ 	}
+ 
+ 	if (!clock_freq) {
+diff --git a/drivers/net/dummy.c b/drivers/net/dummy.c
+index 442d91a..bab0158 100644
+--- a/drivers/net/dummy.c
++++ b/drivers/net/dummy.c
+@@ -187,8 +187,10 @@ static int __init dummy_init_module(void)
+ 	rtnl_lock();
+ 	err = __rtnl_link_register(&dummy_link_ops);
+ 
+-	for (i = 0; i < numdummies && !err; i++)
++	for (i = 0; i < numdummies && !err; i++) {
+ 		err = dummy_init_one();
++		cond_resched();
++	}
+ 	if (err < 0)
+ 		__rtnl_link_unregister(&dummy_link_ops);
+ 	rtnl_unlock();
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+index 2c9ee55..75d35ec 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x.h
+@@ -744,21 +744,6 @@ struct bnx2x_fastpath {
+ 
+ #define ETH_RX_ERROR_FALGS		ETH_FAST_PATH_RX_CQE_PHY_DECODE_ERR_FLG
+ 
+-#define BNX2X_IP_CSUM_ERR(cqe) \
+-			(!((cqe)->fast_path_cqe.status_flags & \
+-			   ETH_FAST_PATH_RX_CQE_IP_XSUM_NO_VALIDATION_FLG) && \
+-			 ((cqe)->fast_path_cqe.type_error_flags & \
+-			  ETH_FAST_PATH_RX_CQE_IP_BAD_XSUM_FLG))
+-
+-#define BNX2X_L4_CSUM_ERR(cqe) \
+-			(!((cqe)->fast_path_cqe.status_flags & \
+-			   ETH_FAST_PATH_RX_CQE_L4_XSUM_NO_VALIDATION_FLG) && \
+-			 ((cqe)->fast_path_cqe.type_error_flags & \
+-			  ETH_FAST_PATH_RX_CQE_L4_BAD_XSUM_FLG))
+-
+-#define BNX2X_RX_CSUM_OK(cqe) \
+-			(!(BNX2X_L4_CSUM_ERR(cqe) || BNX2X_IP_CSUM_ERR(cqe)))
+-
+ #define BNX2X_PRS_FLAG_OVERETH_IPV4(flags) \
+ 				(((le16_to_cpu(flags) & \
+ 				   PARSING_FLAGS_OVER_ETHERNET_PROTOCOL) >> \
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+index 4b05481..41bb34f 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.c
+@@ -191,7 +191,7 @@ int bnx2x_tx_int(struct bnx2x *bp, struct bnx2x_fp_txdata *txdata)
+ 
+ 		if ((netif_tx_queue_stopped(txq)) &&
+ 		    (bp->state == BNX2X_STATE_OPEN) &&
+-		    (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 3))
++		    (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 4))
+ 			netif_tx_wake_queue(txq);
+ 
+ 		__netif_tx_unlock(txq);
+@@ -568,6 +568,25 @@ drop:
+ 	fp->eth_q_stats.rx_skb_alloc_failed++;
+ }
+ 
++static void bnx2x_csum_validate(struct sk_buff *skb, union eth_rx_cqe *cqe,
++				struct bnx2x_fastpath *fp)
++{
++	/* Do nothing if no IP/L4 csum validation was done */
++
++	if (cqe->fast_path_cqe.status_flags &
++	    (ETH_FAST_PATH_RX_CQE_IP_XSUM_NO_VALIDATION_FLG |
++	     ETH_FAST_PATH_RX_CQE_L4_XSUM_NO_VALIDATION_FLG))
++		return;
++
++	/* If both IP/L4 validation were done, check if an error was found. */
++
++	if (cqe->fast_path_cqe.type_error_flags &
++	    (ETH_FAST_PATH_RX_CQE_IP_BAD_XSUM_FLG |
++	     ETH_FAST_PATH_RX_CQE_L4_BAD_XSUM_FLG))
++		fp->eth_q_stats.hw_csum_err++;
++	else
++		skb->ip_summed = CHECKSUM_UNNECESSARY;
++}
+ 
+ int bnx2x_rx_int(struct bnx2x_fastpath *fp, int budget)
+ {
+@@ -757,13 +776,9 @@ reuse_rx:
+ 
+ 		skb_checksum_none_assert(skb);
+ 
+-		if (bp->dev->features & NETIF_F_RXCSUM) {
++		if (bp->dev->features & NETIF_F_RXCSUM)
++			bnx2x_csum_validate(skb, cqe, fp);
+ 
+-			if (likely(BNX2X_RX_CSUM_OK(cqe)))
+-				skb->ip_summed = CHECKSUM_UNNECESSARY;
+-			else
+-				fp->eth_q_stats.hw_csum_err++;
+-		}
+ 
+ 		skb_record_rx_queue(skb, fp->rx_queue);
+ 
+@@ -2334,8 +2349,6 @@ int bnx2x_poll(struct napi_struct *napi, int budget)
+ /* we split the first BD into headers and data BDs
+  * to ease the pain of our fellow microcode engineers
+  * we use one mapping for both BDs
+- * So far this has only been observed to happen
+- * in Other Operating Systems(TM)
+  */
+ static noinline u16 bnx2x_tx_split(struct bnx2x *bp,
+ 				   struct bnx2x_fp_txdata *txdata,
+@@ -2987,7 +3000,7 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 
+ 	txdata->tx_bd_prod += nbd;
+ 
+-	if (unlikely(bnx2x_tx_avail(bp, txdata) < MAX_SKB_FRAGS + 3)) {
++	if (unlikely(bnx2x_tx_avail(bp, txdata) < MAX_SKB_FRAGS + 4)) {
+ 		netif_tx_stop_queue(txq);
+ 
+ 		/* paired memory barrier is in bnx2x_tx_int(), we have to keep
+@@ -2996,7 +3009,7 @@ netdev_tx_t bnx2x_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 		smp_mb();
+ 
+ 		fp->eth_q_stats.driver_xoff++;
+-		if (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 3)
++		if (bnx2x_tx_avail(bp, txdata) >= MAX_SKB_FRAGS + 4)
+ 			netif_tx_wake_queue(txq);
+ 	}
+ 	txdata->tx_pkt++;
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index ceeab8e..1a1b29f 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -14248,7 +14248,8 @@ static int __devinit tg3_get_invariants(struct tg3 *tp)
+ 		}
+ 	}
+ 
+-	if (tg3_flag(tp, 5755_PLUS))
++	if (tg3_flag(tp, 5755_PLUS) ||
++	    GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5906)
+ 		tg3_flag_set(tp, SHORT_DMA_BUG);
+ 
+ 	if (GET_ASIC_REV(tp->pci_chip_rev_id) == ASIC_REV_5719)
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 528a886..1bbf6b3 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -731,6 +731,8 @@ static netdev_tx_t be_xmit(struct sk_buff *skb,
+ 
+ 	copied = make_tx_wrbs(adapter, txq, skb, wrb_cnt, dummy_wrb);
+ 	if (copied) {
++		int gso_segs = skb_shinfo(skb)->gso_segs;
++
+ 		/* record the sent skb in the sent_skb table */
+ 		BUG_ON(txo->sent_skb_list[start]);
+ 		txo->sent_skb_list[start] = skb;
+@@ -748,8 +750,7 @@ static netdev_tx_t be_xmit(struct sk_buff *skb,
+ 
+ 		be_txq_notify(adapter, txq->id, wrb_cnt);
+ 
+-		be_tx_stats_update(txo, wrb_cnt, copied,
+-				skb_shinfo(skb)->gso_segs, stopped);
++		be_tx_stats_update(txo, wrb_cnt, copied, gso_segs, stopped);
+ 	} else {
+ 		txq->head = start;
+ 		dev_kfree_skb_any(skb);
+diff --git a/drivers/net/ethernet/intel/e1000e/defines.h b/drivers/net/ethernet/intel/e1000e/defines.h
+index 3a50259..eb84fe7 100644
+--- a/drivers/net/ethernet/intel/e1000e/defines.h
++++ b/drivers/net/ethernet/intel/e1000e/defines.h
+@@ -101,6 +101,7 @@
+ #define E1000_RXD_ERR_SEQ       0x04    /* Sequence Error */
+ #define E1000_RXD_ERR_CXE       0x10    /* Carrier Extension Error */
+ #define E1000_RXD_ERR_TCPE      0x20    /* TCP/UDP Checksum Error */
++#define E1000_RXD_ERR_IPE       0x40    /* IP Checksum Error */
+ #define E1000_RXD_ERR_RXE       0x80    /* Rx Data Error */
+ #define E1000_RXD_SPC_VLAN_MASK 0x0FFF  /* VLAN ID is in lower 12 bits */
+ 
+diff --git a/drivers/net/ethernet/intel/e1000e/ethtool.c b/drivers/net/ethernet/intel/e1000e/ethtool.c
+index db35dd5..e48f2d2 100644
+--- a/drivers/net/ethernet/intel/e1000e/ethtool.c
++++ b/drivers/net/ethernet/intel/e1000e/ethtool.c
+@@ -258,7 +258,8 @@ static int e1000_set_settings(struct net_device *netdev,
+ 	 * When SoL/IDER sessions are active, autoneg/speed/duplex
+ 	 * cannot be changed
+ 	 */
+-	if (hw->phy.ops.check_reset_block(hw)) {
++	if (hw->phy.ops.check_reset_block &&
++	    hw->phy.ops.check_reset_block(hw)) {
+ 		e_err("Cannot change link characteristics when SoL/IDER is "
+ 		      "active.\n");
+ 		return -EINVAL;
+@@ -1604,7 +1605,8 @@ static int e1000_loopback_test(struct e1000_adapter *adapter, u64 *data)
+ 	 * PHY loopback cannot be performed if SoL/IDER
+ 	 * sessions are active
+ 	 */
+-	if (hw->phy.ops.check_reset_block(hw)) {
++	if (hw->phy.ops.check_reset_block &&
++	    hw->phy.ops.check_reset_block(hw)) {
+ 		e_err("Cannot do PHY loopback test when SoL/IDER is active.\n");
+ 		*data = 0;
+ 		goto out;
+diff --git a/drivers/net/ethernet/intel/e1000e/mac.c b/drivers/net/ethernet/intel/e1000e/mac.c
+index decad98..efecb50 100644
+--- a/drivers/net/ethernet/intel/e1000e/mac.c
++++ b/drivers/net/ethernet/intel/e1000e/mac.c
+@@ -709,7 +709,7 @@ s32 e1000e_setup_link_generic(struct e1000_hw *hw)
+ 	 * In the case of the phy reset being blocked, we already have a link.
+ 	 * We do not need to set it up again.
+ 	 */
+-	if (hw->phy.ops.check_reset_block(hw))
++	if (hw->phy.ops.check_reset_block && hw->phy.ops.check_reset_block(hw))
+ 		return 0;
+ 
+ 	/*
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index 00e961e..5621d5b 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -495,7 +495,7 @@ static void e1000_receive_skb(struct e1000_adapter *adapter,
+  * @sk_buff: socket buffer with received data
+  **/
+ static void e1000_rx_checksum(struct e1000_adapter *adapter, u32 status_err,
+-			      __le16 csum, struct sk_buff *skb)
++			      struct sk_buff *skb)
+ {
+ 	u16 status = (u16)status_err;
+ 	u8 errors = (u8)(status_err >> 24);
+@@ -510,8 +510,8 @@ static void e1000_rx_checksum(struct e1000_adapter *adapter, u32 status_err,
+ 	if (status & E1000_RXD_STAT_IXSM)
+ 		return;
+ 
+-	/* TCP/UDP checksum error bit is set */
+-	if (errors & E1000_RXD_ERR_TCPE) {
++	/* TCP/UDP checksum error bit or IP checksum error bit is set */
++	if (errors & (E1000_RXD_ERR_TCPE | E1000_RXD_ERR_IPE)) {
+ 		/* let the stack verify checksum errors */
+ 		adapter->hw_csum_err++;
+ 		return;
+@@ -522,19 +522,7 @@ static void e1000_rx_checksum(struct e1000_adapter *adapter, u32 status_err,
+ 		return;
+ 
+ 	/* It must be a TCP or UDP packet with a valid checksum */
+-	if (status & E1000_RXD_STAT_TCPCS) {
+-		/* TCP checksum is good */
+-		skb->ip_summed = CHECKSUM_UNNECESSARY;
+-	} else {
+-		/*
+-		 * IP fragment with UDP payload
+-		 * Hardware complements the payload checksum, so we undo it
+-		 * and then put the value in host order for further stack use.
+-		 */
+-		__sum16 sum = (__force __sum16)swab16((__force u16)csum);
+-		skb->csum = csum_unfold(~sum);
+-		skb->ip_summed = CHECKSUM_COMPLETE;
+-	}
++	skb->ip_summed = CHECKSUM_UNNECESSARY;
+ 	adapter->hw_csum_good++;
+ }
+ 
+@@ -978,8 +966,7 @@ static bool e1000_clean_rx_irq(struct e1000_ring *rx_ring, int *work_done,
+ 		skb_put(skb, length);
+ 
+ 		/* Receive Checksum Offload */
+-		e1000_rx_checksum(adapter, staterr,
+-				  rx_desc->wb.lower.hi_dword.csum_ip.csum, skb);
++		e1000_rx_checksum(adapter, staterr, skb);
+ 
+ 		e1000_rx_hash(netdev, rx_desc->wb.lower.hi_dword.rss, skb);
+ 
+@@ -1360,8 +1347,7 @@ copydone:
+ 		total_rx_bytes += skb->len;
+ 		total_rx_packets++;
+ 
+-		e1000_rx_checksum(adapter, staterr,
+-				  rx_desc->wb.lower.hi_dword.csum_ip.csum, skb);
++		e1000_rx_checksum(adapter, staterr, skb);
+ 
+ 		e1000_rx_hash(netdev, rx_desc->wb.lower.hi_dword.rss, skb);
+ 
+@@ -1531,9 +1517,8 @@ static bool e1000_clean_jumbo_rx_irq(struct e1000_ring *rx_ring, int *work_done,
+ 			}
+ 		}
+ 
+-		/* Receive Checksum Offload XXX recompute due to CRC strip? */
+-		e1000_rx_checksum(adapter, staterr,
+-				  rx_desc->wb.lower.hi_dword.csum_ip.csum, skb);
++		/* Receive Checksum Offload */
++		e1000_rx_checksum(adapter, staterr, skb);
+ 
+ 		e1000_rx_hash(netdev, rx_desc->wb.lower.hi_dword.rss, skb);
+ 
+@@ -3120,19 +3105,10 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
+ 
+ 	/* Enable Receive Checksum Offload for TCP and UDP */
+ 	rxcsum = er32(RXCSUM);
+-	if (adapter->netdev->features & NETIF_F_RXCSUM) {
++	if (adapter->netdev->features & NETIF_F_RXCSUM)
+ 		rxcsum |= E1000_RXCSUM_TUOFL;
+-
+-		/*
+-		 * IPv4 payload checksum for UDP fragments must be
+-		 * used in conjunction with packet-split.
+-		 */
+-		if (adapter->rx_ps_pages)
+-			rxcsum |= E1000_RXCSUM_IPPCSE;
+-	} else {
++	else
+ 		rxcsum &= ~E1000_RXCSUM_TUOFL;
+-		/* no need to clear IPPCSE as it defaults to 0 */
+-	}
+ 	ew32(RXCSUM, rxcsum);
+ 
+ 	if (adapter->hw.mac.type == e1000_pch2lan) {
+@@ -5260,22 +5236,10 @@ static int e1000_change_mtu(struct net_device *netdev, int new_mtu)
+ 	int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN;
+ 
+ 	/* Jumbo frame support */
+-	if (max_frame > ETH_FRAME_LEN + ETH_FCS_LEN) {
+-		if (!(adapter->flags & FLAG_HAS_JUMBO_FRAMES)) {
+-			e_err("Jumbo Frames not supported.\n");
+-			return -EINVAL;
+-		}
+-
+-		/*
+-		 * IP payload checksum (enabled with jumbos/packet-split when
+-		 * Rx checksum is enabled) and generation of RSS hash is
+-		 * mutually exclusive in the hardware.
+-		 */
+-		if ((netdev->features & NETIF_F_RXCSUM) &&
+-		    (netdev->features & NETIF_F_RXHASH)) {
+-			e_err("Jumbo frames cannot be enabled when both receive checksum offload and receive hashing are enabled.  Disable one of the receive offload features before enabling jumbos.\n");
+-			return -EINVAL;
+-		}
++	if ((max_frame > ETH_FRAME_LEN + ETH_FCS_LEN) &&
++	    !(adapter->flags & FLAG_HAS_JUMBO_FRAMES)) {
++		e_err("Jumbo Frames not supported.\n");
++		return -EINVAL;
+ 	}
+ 
+ 	/* Supported frame sizes */
+@@ -6049,17 +6013,6 @@ static int e1000_set_features(struct net_device *netdev,
+ 			 NETIF_F_RXALL)))
+ 		return 0;
+ 
+-	/*
+-	 * IP payload checksum (enabled with jumbos/packet-split when Rx
+-	 * checksum is enabled) and generation of RSS hash is mutually
+-	 * exclusive in the hardware.
+-	 */
+-	if (adapter->rx_ps_pages &&
+-	    (features & NETIF_F_RXCSUM) && (features & NETIF_F_RXHASH)) {
+-		e_err("Enabling both receive checksum offload and receive hashing is not possible with jumbo frames.  Disable jumbos or enable only one of the receive offload features.\n");
+-		return -EINVAL;
+-	}
+-
+ 	if (changed & NETIF_F_RXFCS) {
+ 		if (features & NETIF_F_RXFCS) {
+ 			adapter->flags2 &= ~FLAG2_CRC_STRIPPING;
+@@ -6256,7 +6209,7 @@ static int __devinit e1000_probe(struct pci_dev *pdev,
+ 		adapter->hw.phy.ms_type = e1000_ms_hw_default;
+ 	}
+ 
+-	if (hw->phy.ops.check_reset_block(hw))
++	if (hw->phy.ops.check_reset_block && hw->phy.ops.check_reset_block(hw))
+ 		e_info("PHY reset is blocked due to SOL/IDER session.\n");
+ 
+ 	/* Set initial default active device features */
+@@ -6423,7 +6376,7 @@ err_register:
+ 	if (!(adapter->flags & FLAG_HAS_AMT))
+ 		e1000e_release_hw_control(adapter);
+ err_eeprom:
+-	if (!hw->phy.ops.check_reset_block(hw))
++	if (hw->phy.ops.check_reset_block && !hw->phy.ops.check_reset_block(hw))
+ 		e1000_phy_hw_reset(&adapter->hw);
+ err_hw_init:
+ 	kfree(adapter->tx_ring);
+diff --git a/drivers/net/ethernet/intel/e1000e/phy.c b/drivers/net/ethernet/intel/e1000e/phy.c
+index 35b4557..c4befb3 100644
+--- a/drivers/net/ethernet/intel/e1000e/phy.c
++++ b/drivers/net/ethernet/intel/e1000e/phy.c
+@@ -2121,9 +2121,11 @@ s32 e1000e_phy_hw_reset_generic(struct e1000_hw *hw)
+ 	s32 ret_val;
+ 	u32 ctrl;
+ 
+-	ret_val = phy->ops.check_reset_block(hw);
+-	if (ret_val)
+-		return 0;
++	if (phy->ops.check_reset_block) {
++		ret_val = phy->ops.check_reset_block(hw);
++		if (ret_val)
++			return 0;
++	}
+ 
+ 	ret_val = phy->ops.acquire(hw);
+ 	if (ret_val)
+diff --git a/drivers/net/ethernet/intel/igbvf/ethtool.c b/drivers/net/ethernet/intel/igbvf/ethtool.c
+index 8ce6706..90eef07 100644
+--- a/drivers/net/ethernet/intel/igbvf/ethtool.c
++++ b/drivers/net/ethernet/intel/igbvf/ethtool.c
+@@ -357,21 +357,28 @@ static int igbvf_set_coalesce(struct net_device *netdev,
+ 	struct igbvf_adapter *adapter = netdev_priv(netdev);
+ 	struct e1000_hw *hw = &adapter->hw;
+ 
+-	if ((ec->rx_coalesce_usecs > IGBVF_MAX_ITR_USECS) ||
+-	    ((ec->rx_coalesce_usecs > 3) &&
+-	     (ec->rx_coalesce_usecs < IGBVF_MIN_ITR_USECS)) ||
+-	    (ec->rx_coalesce_usecs == 2))
+-		return -EINVAL;
+-
+-	/* convert to rate of irq's per second */
+-	if (ec->rx_coalesce_usecs && ec->rx_coalesce_usecs <= 3) {
++	if ((ec->rx_coalesce_usecs >= IGBVF_MIN_ITR_USECS) &&
++	     (ec->rx_coalesce_usecs <= IGBVF_MAX_ITR_USECS)) {
++		adapter->current_itr = ec->rx_coalesce_usecs << 2;
++		adapter->requested_itr = 1000000000 /
++					(adapter->current_itr * 256);
++	} else if ((ec->rx_coalesce_usecs == 3) ||
++		   (ec->rx_coalesce_usecs == 2)) {
+ 		adapter->current_itr = IGBVF_START_ITR;
+ 		adapter->requested_itr = ec->rx_coalesce_usecs;
+-	} else {
+-		adapter->current_itr = ec->rx_coalesce_usecs << 2;
++	} else if (ec->rx_coalesce_usecs == 0) {
++		/*
++		 * The user's desire is to turn off interrupt throttling
++		 * altogether, but due to HW limitations, we can't do that.
++		 * Instead we set a very small value in EITR, which would
++		 * allow ~967k interrupts per second, but allow the adapter's
++		 * internal clocking to still function properly.
++		 */
++		adapter->current_itr = 4;
+ 		adapter->requested_itr = 1000000000 /
+ 					(adapter->current_itr * 256);
+-	}
++	} else
++		return -EINVAL;
+ 
+ 	writel(adapter->current_itr,
+ 	       hw->hw_addr + adapter->rx_ring->itr_register);
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+index 81b1555..f8f85ec 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+@@ -189,7 +189,7 @@ enum ixgbe_ring_state_t {
+ 	__IXGBE_HANG_CHECK_ARMED,
+ 	__IXGBE_RX_RSC_ENABLED,
+ 	__IXGBE_RX_CSUM_UDP_ZERO_ERR,
+-	__IXGBE_RX_FCOE_BUFSZ,
++	__IXGBE_RX_FCOE,
+ };
+ 
+ #define check_for_tx_hang(ring) \
+@@ -283,7 +283,7 @@ struct ixgbe_ring_feature {
+ #if defined(IXGBE_FCOE) && (PAGE_SIZE < 8192)
+ static inline unsigned int ixgbe_rx_pg_order(struct ixgbe_ring *ring)
+ {
+-	return test_bit(__IXGBE_RX_FCOE_BUFSZ, &ring->state) ? 1 : 0;
++	return test_bit(__IXGBE_RX_FCOE, &ring->state) ? 1 : 0;
+ }
+ #else
+ #define ixgbe_rx_pg_order(_ring) 0
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+index ed1b47d..a269d11 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+@@ -628,7 +628,7 @@ static int ixgbe_alloc_q_vector(struct ixgbe_adapter *adapter, int v_idx,
+ 			f = &adapter->ring_feature[RING_F_FCOE];
+ 			if ((rxr_idx >= f->mask) &&
+ 			    (rxr_idx < f->mask + f->indices))
+-				set_bit(__IXGBE_RX_FCOE_BUFSZ, &ring->state);
++				set_bit(__IXGBE_RX_FCOE, &ring->state);
+ 		}
+ 
+ #endif /* IXGBE_FCOE */
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+index 467948e..a66c215 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -1036,17 +1036,17 @@ static inline void ixgbe_rx_hash(struct ixgbe_ring *ring,
+ #ifdef IXGBE_FCOE
+ /**
+  * ixgbe_rx_is_fcoe - check the rx desc for incoming pkt type
+- * @adapter: address of board private structure
++ * @ring: structure containing ring specific data
+  * @rx_desc: advanced rx descriptor
+  *
+  * Returns : true if it is FCoE pkt
+  */
+-static inline bool ixgbe_rx_is_fcoe(struct ixgbe_adapter *adapter,
++static inline bool ixgbe_rx_is_fcoe(struct ixgbe_ring *ring,
+ 				    union ixgbe_adv_rx_desc *rx_desc)
+ {
+ 	__le16 pkt_info = rx_desc->wb.lower.lo_dword.hs_rss.pkt_info;
+ 
+-	return (adapter->flags & IXGBE_FLAG_FCOE_ENABLED) &&
++	return test_bit(__IXGBE_RX_FCOE, &ring->state) &&
+ 	       ((pkt_info & cpu_to_le16(IXGBE_RXDADV_PKTTYPE_ETQF_MASK)) ==
+ 		(cpu_to_le16(IXGBE_ETQF_FILTER_FCOE <<
+ 			     IXGBE_RXDADV_PKTTYPE_ETQF_SHIFT)));
+@@ -1519,6 +1519,12 @@ static bool ixgbe_cleanup_headers(struct ixgbe_ring *rx_ring,
+ 		skb->truesize -= ixgbe_rx_bufsz(rx_ring);
+ 	}
+ 
++#ifdef IXGBE_FCOE
++	/* do not attempt to pad FCoE Frames as this will disrupt DDP */
++	if (ixgbe_rx_is_fcoe(rx_ring, rx_desc))
++		return false;
++
++#endif
+ 	/* if skb_pad returns an error the skb was freed */
+ 	if (unlikely(skb->len < 60)) {
+ 		int pad_len = 60 - skb->len;
+@@ -1745,7 +1751,7 @@ static bool ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
+ 
+ #ifdef IXGBE_FCOE
+ 		/* if ddp, not passing to ULD unless for FCP_RSP or error */
+-		if (ixgbe_rx_is_fcoe(adapter, rx_desc)) {
++		if (ixgbe_rx_is_fcoe(rx_ring, rx_desc)) {
+ 			ddp_bytes = ixgbe_fcoe_ddp(adapter, rx_desc, skb);
+ 			if (!ddp_bytes) {
+ 				dev_kfree_skb_any(skb);
+diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
+index 487a6c8..589753f 100644
+--- a/drivers/net/ethernet/marvell/sky2.c
++++ b/drivers/net/ethernet/marvell/sky2.c
+@@ -4381,10 +4381,12 @@ static int sky2_set_features(struct net_device *dev, netdev_features_t features)
+ 	struct sky2_port *sky2 = netdev_priv(dev);
+ 	netdev_features_t changed = dev->features ^ features;
+ 
+-	if (changed & NETIF_F_RXCSUM) {
+-		bool on = features & NETIF_F_RXCSUM;
+-		sky2_write32(sky2->hw, Q_ADDR(rxqaddr[sky2->port], Q_CSR),
+-			     on ? BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM);
++	if ((changed & NETIF_F_RXCSUM) &&
++	    !(sky2->hw->flags & SKY2_HW_NEW_LE)) {
++		sky2_write32(sky2->hw,
++			     Q_ADDR(rxqaddr[sky2->port], Q_CSR),
++			     (features & NETIF_F_RXCSUM)
++			     ? BMU_ENA_RX_CHKSUM : BMU_DIS_RX_CHKSUM);
+ 	}
+ 
+ 	if (changed & NETIF_F_RXHASH)
+diff --git a/drivers/net/ethernet/nxp/lpc_eth.c b/drivers/net/ethernet/nxp/lpc_eth.c
+index 6dfc26d..0c5edc1 100644
+--- a/drivers/net/ethernet/nxp/lpc_eth.c
++++ b/drivers/net/ethernet/nxp/lpc_eth.c
+@@ -936,16 +936,16 @@ static void __lpc_handle_xmit(struct net_device *ndev)
+ 			/* Update stats */
+ 			ndev->stats.tx_packets++;
+ 			ndev->stats.tx_bytes += skb->len;
+-
+-			/* Free buffer */
+-			dev_kfree_skb_irq(skb);
+ 		}
++		dev_kfree_skb_irq(skb);
+ 
+ 		txcidx = readl(LPC_ENET_TXCONSUMEINDEX(pldat->net_base));
+ 	}
+ 
+-	if (netif_queue_stopped(ndev))
+-		netif_wake_queue(ndev);
++	if (pldat->num_used_tx_buffs <= ENET_TX_DESC/2) {
++		if (netif_queue_stopped(ndev))
++			netif_wake_queue(ndev);
++	}
+ }
+ 
+ static int __lpc_handle_recv(struct net_device *ndev, int budget)
+@@ -1310,6 +1310,7 @@ static const struct net_device_ops lpc_netdev_ops = {
+ 	.ndo_set_rx_mode	= lpc_eth_set_multicast_list,
+ 	.ndo_do_ioctl		= lpc_eth_ioctl,
+ 	.ndo_set_mac_address	= lpc_set_mac_address,
++	.ndo_change_mtu		= eth_change_mtu,
+ };
+ 
+ static int lpc_eth_drv_probe(struct platform_device *pdev)
+diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
+index ce6b44d..161e045 100644
+--- a/drivers/net/ethernet/realtek/r8169.c
++++ b/drivers/net/ethernet/realtek/r8169.c
+@@ -5966,6 +5966,8 @@ static void __devexit rtl_remove_one(struct pci_dev *pdev)
+ 
+ 	cancel_work_sync(&tp->wk.work);
+ 
++	netif_napi_del(&tp->napi);
++
+ 	unregister_netdev(dev);
+ 
+ 	rtl_release_firmware(tp);
+@@ -6288,6 +6290,7 @@ out:
+ 	return rc;
+ 
+ err_out_msi_4:
++	netif_napi_del(&tp->napi);
+ 	rtl_disable_msi(pdev, tp);
+ 	iounmap(ioaddr);
+ err_out_free_res_3:
+diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
+index c99b3b0..8489d09 100644
+--- a/drivers/net/ethernet/sun/niu.c
++++ b/drivers/net/ethernet/sun/niu.c
+@@ -3598,7 +3598,6 @@ static int release_tx_packet(struct niu *np, struct tx_ring_info *rp, int idx)
+ static void niu_tx_work(struct niu *np, struct tx_ring_info *rp)
+ {
+ 	struct netdev_queue *txq;
+-	unsigned int tx_bytes;
+ 	u16 pkt_cnt, tmp;
+ 	int cons, index;
+ 	u64 cs;
+@@ -3621,18 +3620,12 @@ static void niu_tx_work(struct niu *np, struct tx_ring_info *rp)
+ 	netif_printk(np, tx_done, KERN_DEBUG, np->dev,
+ 		     "%s() pkt_cnt[%u] cons[%d]\n", __func__, pkt_cnt, cons);
+ 
+-	tx_bytes = 0;
+-	tmp = pkt_cnt;
+-	while (tmp--) {
+-		tx_bytes += rp->tx_buffs[cons].skb->len;
++	while (pkt_cnt--)
+ 		cons = release_tx_packet(np, rp, cons);
+-	}
+ 
+ 	rp->cons = cons;
+ 	smp_mb();
+ 
+-	netdev_tx_completed_queue(txq, pkt_cnt, tx_bytes);
+-
+ out:
+ 	if (unlikely(netif_tx_queue_stopped(txq) &&
+ 		     (niu_tx_avail(rp) > NIU_TX_WAKEUP_THRESH(rp)))) {
+@@ -4333,7 +4326,6 @@ static void niu_free_channels(struct niu *np)
+ 			struct tx_ring_info *rp = &np->tx_rings[i];
+ 
+ 			niu_free_tx_ring_info(np, rp);
+-			netdev_tx_reset_queue(netdev_get_tx_queue(np->dev, i));
+ 		}
+ 		kfree(np->tx_rings);
+ 		np->tx_rings = NULL;
+@@ -6739,8 +6731,6 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb,
+ 		prod = NEXT_TX(rp, prod);
+ 	}
+ 
+-	netdev_tx_sent_queue(txq, skb->len);
+-
+ 	if (prod < rp->prod)
+ 		rp->wrap_bit ^= TX_RING_KICK_WRAP;
+ 	rp->prod = prod;
+diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c
+index cb8fd50..c1d602d 100644
+--- a/drivers/net/macvtap.c
++++ b/drivers/net/macvtap.c
+@@ -528,9 +528,10 @@ static int zerocopy_sg_from_iovec(struct sk_buff *skb, const struct iovec *from,
+ 		}
+ 		base = (unsigned long)from->iov_base + offset1;
+ 		size = ((base & ~PAGE_MASK) + len + ~PAGE_MASK) >> PAGE_SHIFT;
++		if (i + size > MAX_SKB_FRAGS)
++			return -EMSGSIZE;
+ 		num_pages = get_user_pages_fast(base, size, 0, &page[i]);
+-		if ((num_pages != size) ||
+-		    (num_pages > MAX_SKB_FRAGS - skb_shinfo(skb)->nr_frags))
++		if (num_pages != size)
+ 			/* put_page is in skb free */
+ 			return -EFAULT;
+ 		skb->data_len += len;
+@@ -647,7 +648,7 @@ static ssize_t macvtap_get_user(struct macvtap_queue *q, struct msghdr *m,
+ 	int err;
+ 	struct virtio_net_hdr vnet_hdr = { 0 };
+ 	int vnet_hdr_len = 0;
+-	int copylen;
++	int copylen = 0;
+ 	bool zerocopy = false;
+ 
+ 	if (q->flags & IFF_VNET_HDR) {
+@@ -676,15 +677,31 @@ static ssize_t macvtap_get_user(struct macvtap_queue *q, struct msghdr *m,
+ 	if (unlikely(len < ETH_HLEN))
+ 		goto err;
+ 
++	err = -EMSGSIZE;
++	if (unlikely(count > UIO_MAXIOV))
++		goto err;
++
+ 	if (m && m->msg_control && sock_flag(&q->sk, SOCK_ZEROCOPY))
+ 		zerocopy = true;
+ 
+ 	if (zerocopy) {
++		/* Userspace may produce vectors with count greater than
++		 * MAX_SKB_FRAGS, so we need to linearize parts of the skb
++		 * to let the rest of the data fit in the frags.
++		 */
++		if (count > MAX_SKB_FRAGS) {
++			copylen = iov_length(iv, count - MAX_SKB_FRAGS);
++			if (copylen < vnet_hdr_len)
++				copylen = 0;
++			else
++				copylen -= vnet_hdr_len;
++		}
+ 		/* There are 256 bytes to be copied in skb, so there is enough
+ 		 * room for skb expand head in case it is used.
+ 		 * The rest buffer is mapped from userspace.
+ 		 */
+-		copylen = vnet_hdr.hdr_len;
++		if (copylen < vnet_hdr.hdr_len)
++			copylen = vnet_hdr.hdr_len;
+ 		if (!copylen)
+ 			copylen = GOODCOPY_LEN;
+ 	} else
+diff --git a/drivers/net/usb/ipheth.c b/drivers/net/usb/ipheth.c
+index dd78c4c..5cba415 100644
+--- a/drivers/net/usb/ipheth.c
++++ b/drivers/net/usb/ipheth.c
+@@ -59,6 +59,7 @@
+ #define USB_PRODUCT_IPHONE_3G   0x1292
+ #define USB_PRODUCT_IPHONE_3GS  0x1294
+ #define USB_PRODUCT_IPHONE_4	0x1297
++#define USB_PRODUCT_IPAD 0x129a
+ #define USB_PRODUCT_IPHONE_4_VZW 0x129c
+ #define USB_PRODUCT_IPHONE_4S	0x12a0
+ 
+@@ -101,6 +102,10 @@ static struct usb_device_id ipheth_table[] = {
+ 		IPHETH_USBINTF_CLASS, IPHETH_USBINTF_SUBCLASS,
+ 		IPHETH_USBINTF_PROTO) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(
++		USB_VENDOR_APPLE, USB_PRODUCT_IPAD,
++		IPHETH_USBINTF_CLASS, IPHETH_USBINTF_SUBCLASS,
++		IPHETH_USBINTF_PROTO) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(
+ 		USB_VENDOR_APPLE, USB_PRODUCT_IPHONE_4_VZW,
+ 		IPHETH_USBINTF_CLASS, IPHETH_USBINTF_SUBCLASS,
+ 		IPHETH_USBINTF_PROTO) },
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index d316503b..c2ae426 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -197,6 +197,10 @@ err:
+ static int qmi_wwan_cdc_wdm_manage_power(struct usb_interface *intf, int on)
+ {
+ 	struct usbnet *dev = usb_get_intfdata(intf);
++
++	/* can be called while disconnecting */
++	if (!dev)
++		return 0;
+ 	return qmi_wwan_manage_power(dev, on);
+ }
+ 
+@@ -257,29 +261,6 @@ err:
+ 	return rv;
+ }
+ 
+-/* Gobi devices uses identical class/protocol codes for all interfaces regardless
+- * of function. Some of these are CDC ACM like and have the exact same endpoints
+- * we are looking for. This leaves two possible strategies for identifying the
+- * correct interface:
+- *   a) hardcoding interface number, or
+- *   b) use the fact that the wwan interface is the only one lacking additional
+- *      (CDC functional) descriptors
+- *
+- * Let's see if we can get away with the generic b) solution.
+- */
+-static int qmi_wwan_bind_gobi(struct usbnet *dev, struct usb_interface *intf)
+-{
+-	int rv = -EINVAL;
+-
+-	/* ignore any interface with additional descriptors */
+-	if (intf->cur_altsetting->extralen)
+-		goto err;
+-
+-	rv = qmi_wwan_bind_shared(dev, intf);
+-err:
+-	return rv;
+-}
+-
+ static void qmi_wwan_unbind_shared(struct usbnet *dev, struct usb_interface *intf)
+ {
+ 	struct usb_driver *subdriver = (void *)dev->data[0];
+@@ -347,19 +328,37 @@ static const struct driver_info	qmi_wwan_shared = {
+ 	.manage_power	= qmi_wwan_manage_power,
+ };
+ 
+-static const struct driver_info	qmi_wwan_gobi = {
+-	.description	= "Qualcomm Gobi wwan/QMI device",
++static const struct driver_info	qmi_wwan_force_int0 = {
++	.description	= "Qualcomm WWAN/QMI device",
++	.flags		= FLAG_WWAN,
++	.bind		= qmi_wwan_bind_shared,
++	.unbind		= qmi_wwan_unbind_shared,
++	.manage_power	= qmi_wwan_manage_power,
++	.data		= BIT(0), /* interface whitelist bitmap */
++};
++
++static const struct driver_info	qmi_wwan_force_int1 = {
++	.description	= "Qualcomm WWAN/QMI device",
++	.flags		= FLAG_WWAN,
++	.bind		= qmi_wwan_bind_shared,
++	.unbind		= qmi_wwan_unbind_shared,
++	.manage_power	= qmi_wwan_manage_power,
++	.data		= BIT(1), /* interface whitelist bitmap */
++};
++
++static const struct driver_info	qmi_wwan_force_int3 = {
++	.description	= "Qualcomm WWAN/QMI device",
+ 	.flags		= FLAG_WWAN,
+-	.bind		= qmi_wwan_bind_gobi,
++	.bind		= qmi_wwan_bind_shared,
+ 	.unbind		= qmi_wwan_unbind_shared,
+ 	.manage_power	= qmi_wwan_manage_power,
++	.data		= BIT(3), /* interface whitelist bitmap */
+ };
+ 
+-/* ZTE suck at making USB descriptors */
+ static const struct driver_info	qmi_wwan_force_int4 = {
+-	.description	= "Qualcomm Gobi wwan/QMI device",
++	.description	= "Qualcomm WWAN/QMI device",
+ 	.flags		= FLAG_WWAN,
+-	.bind		= qmi_wwan_bind_gobi,
++	.bind		= qmi_wwan_bind_shared,
+ 	.unbind		= qmi_wwan_unbind_shared,
+ 	.manage_power	= qmi_wwan_manage_power,
+ 	.data		= BIT(4), /* interface whitelist bitmap */
+@@ -381,16 +380,23 @@ static const struct driver_info	qmi_wwan_force_int4 = {
+ static const struct driver_info	qmi_wwan_sierra = {
+ 	.description	= "Sierra Wireless wwan/QMI device",
+ 	.flags		= FLAG_WWAN,
+-	.bind		= qmi_wwan_bind_gobi,
++	.bind		= qmi_wwan_bind_shared,
+ 	.unbind		= qmi_wwan_unbind_shared,
+ 	.manage_power	= qmi_wwan_manage_power,
+ 	.data		= BIT(8) | BIT(19), /* interface whitelist bitmap */
+ };
+ 
+ #define HUAWEI_VENDOR_ID	0x12D1
++
++/* Gobi 1000 QMI/wwan interface number is 3 according to qcserial */
++#define QMI_GOBI1K_DEVICE(vend, prod) \
++	USB_DEVICE(vend, prod), \
++	.driver_info = (unsigned long)&qmi_wwan_force_int3
++
++/* Gobi 2000 and Gobi 3000 QMI/wwan interface number is 0 according to qcserial */
+ #define QMI_GOBI_DEVICE(vend, prod) \
+ 	USB_DEVICE(vend, prod), \
+-	.driver_info = (unsigned long)&qmi_wwan_gobi
++	.driver_info = (unsigned long)&qmi_wwan_force_int0
+ 
+ static const struct usb_device_id products[] = {
+ 	{	/* Huawei E392, E398 and possibly others sharing both device id and more... */
+@@ -430,6 +436,15 @@ static const struct usb_device_id products[] = {
+ 		.bInterfaceProtocol = 0xff,
+ 		.driver_info        = (unsigned long)&qmi_wwan_force_int4,
+ 	},
++	{	/* ZTE (Vodafone) K3520-Z */
++		.match_flags	    = USB_DEVICE_ID_MATCH_DEVICE | USB_DEVICE_ID_MATCH_INT_INFO,
++		.idVendor           = 0x19d2,
++		.idProduct          = 0x0055,
++		.bInterfaceClass    = 0xff,
++		.bInterfaceSubClass = 0xff,
++		.bInterfaceProtocol = 0xff,
++		.driver_info        = (unsigned long)&qmi_wwan_force_int1,
++	},
+ 	{	/* ZTE (Vodafone) K3565-Z */
+ 		.match_flags	    = USB_DEVICE_ID_MATCH_DEVICE | USB_DEVICE_ID_MATCH_INT_INFO,
+ 		.idVendor           = 0x19d2,
+@@ -475,20 +490,24 @@ static const struct usb_device_id products[] = {
+ 		.bInterfaceProtocol = 0xff,
+ 		.driver_info        = (unsigned long)&qmi_wwan_sierra,
+ 	},
+-	{QMI_GOBI_DEVICE(0x05c6, 0x9212)},	/* Acer Gobi Modem Device */
+-	{QMI_GOBI_DEVICE(0x03f0, 0x1f1d)},	/* HP un2400 Gobi Modem Device */
+-	{QMI_GOBI_DEVICE(0x03f0, 0x371d)},	/* HP un2430 Mobile Broadband Module */
+-	{QMI_GOBI_DEVICE(0x04da, 0x250d)},	/* Panasonic Gobi Modem device */
+-	{QMI_GOBI_DEVICE(0x413c, 0x8172)},	/* Dell Gobi Modem device */
+-	{QMI_GOBI_DEVICE(0x1410, 0xa001)},	/* Novatel Gobi Modem device */
+-	{QMI_GOBI_DEVICE(0x0b05, 0x1776)},	/* Asus Gobi Modem device */
+-	{QMI_GOBI_DEVICE(0x19d2, 0xfff3)},	/* ONDA Gobi Modem device */
+-	{QMI_GOBI_DEVICE(0x05c6, 0x9001)},	/* Generic Gobi Modem device */
+-	{QMI_GOBI_DEVICE(0x05c6, 0x9002)},	/* Generic Gobi Modem device */
+-	{QMI_GOBI_DEVICE(0x05c6, 0x9202)},	/* Generic Gobi Modem device */
+-	{QMI_GOBI_DEVICE(0x05c6, 0x9203)},	/* Generic Gobi Modem device */
+-	{QMI_GOBI_DEVICE(0x05c6, 0x9222)},	/* Generic Gobi Modem device */
+-	{QMI_GOBI_DEVICE(0x05c6, 0x9009)},	/* Generic Gobi Modem device */
++
++	/* Gobi 1000 devices */
++	{QMI_GOBI1K_DEVICE(0x05c6, 0x9212)},	/* Acer Gobi Modem Device */
++	{QMI_GOBI1K_DEVICE(0x03f0, 0x1f1d)},	/* HP un2400 Gobi Modem Device */
++	{QMI_GOBI1K_DEVICE(0x03f0, 0x371d)},	/* HP un2430 Mobile Broadband Module */
++	{QMI_GOBI1K_DEVICE(0x04da, 0x250d)},	/* Panasonic Gobi Modem device */
++	{QMI_GOBI1K_DEVICE(0x413c, 0x8172)},	/* Dell Gobi Modem device */
++	{QMI_GOBI1K_DEVICE(0x1410, 0xa001)},	/* Novatel Gobi Modem device */
++	{QMI_GOBI1K_DEVICE(0x0b05, 0x1776)},	/* Asus Gobi Modem device */
++	{QMI_GOBI1K_DEVICE(0x19d2, 0xfff3)},	/* ONDA Gobi Modem device */
++	{QMI_GOBI1K_DEVICE(0x05c6, 0x9001)},	/* Generic Gobi Modem device */
++	{QMI_GOBI1K_DEVICE(0x05c6, 0x9002)},	/* Generic Gobi Modem device */
++	{QMI_GOBI1K_DEVICE(0x05c6, 0x9202)},	/* Generic Gobi Modem device */
++	{QMI_GOBI1K_DEVICE(0x05c6, 0x9203)},	/* Generic Gobi Modem device */
++	{QMI_GOBI1K_DEVICE(0x05c6, 0x9222)},	/* Generic Gobi Modem device */
++	{QMI_GOBI1K_DEVICE(0x05c6, 0x9009)},	/* Generic Gobi Modem device */
++
++	/* Gobi 2000 and 3000 devices */
+ 	{QMI_GOBI_DEVICE(0x413c, 0x8186)},	/* Dell Gobi 2000 Modem device (N0218, VU936) */
+ 	{QMI_GOBI_DEVICE(0x05c6, 0x920b)},	/* Generic Gobi 2000 Modem device */
+ 	{QMI_GOBI_DEVICE(0x05c6, 0x9225)},	/* Sony Gobi 2000 Modem device (N0279, VU730) */
+diff --git a/drivers/net/wireless/ath/ath.h b/drivers/net/wireless/ath/ath.h
+index c54b7d37..420d69b 100644
+--- a/drivers/net/wireless/ath/ath.h
++++ b/drivers/net/wireless/ath/ath.h
+@@ -143,6 +143,7 @@ struct ath_common {
+ 	u32 keymax;
+ 	DECLARE_BITMAP(keymap, ATH_KEYMAX);
+ 	DECLARE_BITMAP(tkip_keymap, ATH_KEYMAX);
++	DECLARE_BITMAP(ccmp_keymap, ATH_KEYMAX);
+ 	enum ath_crypt_caps crypt_caps;
+ 
+ 	unsigned int clockrate;
+diff --git a/drivers/net/wireless/ath/ath9k/ath9k.h b/drivers/net/wireless/ath/ath9k/ath9k.h
+index 8c84049..4bfb44a 100644
+--- a/drivers/net/wireless/ath/ath9k/ath9k.h
++++ b/drivers/net/wireless/ath/ath9k/ath9k.h
+@@ -213,6 +213,7 @@ struct ath_frame_info {
+ 	enum ath9k_key_type keytype;
+ 	u8 keyix;
+ 	u8 retries;
++	u8 rtscts_rate;
+ };
+ 
+ struct ath_buf_state {
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_main.c b/drivers/net/wireless/ath/ath9k/htc_drv_main.c
+index 2b8f61c..abbd6ef 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_main.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_main.c
+@@ -1496,6 +1496,7 @@ static void ath9k_htc_bss_info_changed(struct ieee80211_hw *hw,
+ 			priv->num_sta_assoc_vif++ : priv->num_sta_assoc_vif--;
+ 
+ 		if (priv->ah->opmode == NL80211_IFTYPE_STATION) {
++			ath9k_htc_choose_set_bssid(priv);
+ 			if (bss_conf->assoc && (priv->num_sta_assoc_vif == 1))
+ 				ath9k_htc_start_ani(priv);
+ 			else if (priv->num_sta_assoc_vif == 0)
+@@ -1503,13 +1504,11 @@ static void ath9k_htc_bss_info_changed(struct ieee80211_hw *hw,
+ 		}
+ 	}
+ 
+-	if (changed & BSS_CHANGED_BSSID) {
++	if (changed & BSS_CHANGED_IBSS) {
+ 		if (priv->ah->opmode == NL80211_IFTYPE_ADHOC) {
+ 			common->curaid = bss_conf->aid;
+ 			memcpy(common->curbssid, bss_conf->bssid, ETH_ALEN);
+ 			ath9k_htc_set_bssid(priv);
+-		} else if (priv->ah->opmode == NL80211_IFTYPE_STATION) {
+-			ath9k_htc_choose_set_bssid(priv);
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/hw.c b/drivers/net/wireless/ath/ath9k/hw.c
+index fa84e37..6dfd964 100644
+--- a/drivers/net/wireless/ath/ath9k/hw.c
++++ b/drivers/net/wireless/ath/ath9k/hw.c
+@@ -558,7 +558,7 @@ static int __ath9k_hw_init(struct ath_hw *ah)
+ 
+ 	if (NR_CPUS > 1 && ah->config.serialize_regmode == SER_REG_MODE_AUTO) {
+ 		if (ah->hw_version.macVersion == AR_SREV_VERSION_5416_PCI ||
+-		    ((AR_SREV_9160(ah) || AR_SREV_9280(ah)) &&
++		    ((AR_SREV_9160(ah) || AR_SREV_9280(ah) || AR_SREV_9287(ah)) &&
+ 		     !ah->is_pciexpress)) {
+ 			ah->config.serialize_regmode =
+ 				SER_REG_MODE_ON;
+@@ -720,13 +720,25 @@ static void ath9k_hw_init_qos(struct ath_hw *ah)
+ 
+ u32 ar9003_get_pll_sqsum_dvc(struct ath_hw *ah)
+ {
++	struct ath_common *common = ath9k_hw_common(ah);
++	int i = 0;
++
+ 	REG_CLR_BIT(ah, PLL3, PLL3_DO_MEAS_MASK);
+ 	udelay(100);
+ 	REG_SET_BIT(ah, PLL3, PLL3_DO_MEAS_MASK);
+ 
+-	while ((REG_READ(ah, PLL4) & PLL4_MEAS_DONE) == 0)
++	while ((REG_READ(ah, PLL4) & PLL4_MEAS_DONE) == 0) {
++
+ 		udelay(100);
+ 
++		if (WARN_ON_ONCE(i >= 100)) {
++			ath_err(common, "PLL4 meaurement not done\n");
++			break;
++		}
++
++		i++;
++	}
++
+ 	return (REG_READ(ah, PLL3) & SQSUM_DVC_MASK) >> 3;
+ }
+ EXPORT_SYMBOL(ar9003_get_pll_sqsum_dvc);
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index 798ea57..d5dabcb 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -960,6 +960,15 @@ void ath_hw_pll_work(struct work_struct *work)
+ 					    hw_pll_work.work);
+ 	u32 pll_sqsum;
+ 
++	/*
++	 * Ensure that the PLL WAR is executed only
++	 * after the STA is associated, or after
++	 * beaconing has started on interfaces that
++	 * use beacons.
++	 */
++	if (!(sc->sc_flags & SC_OP_BEACONS))
++		return;
++
+ 	if (AR_SREV_9485(sc->sc_ah)) {
+ 
+ 		ath9k_ps_wakeup(sc);
+@@ -1419,15 +1428,6 @@ static int ath9k_add_interface(struct ieee80211_hw *hw,
+ 		}
+ 	}
+ 
+-	if ((ah->opmode == NL80211_IFTYPE_ADHOC) ||
+-	    ((vif->type == NL80211_IFTYPE_ADHOC) &&
+-	     sc->nvifs > 0)) {
+-		ath_err(common, "Cannot create ADHOC interface when other"
+-			" interfaces already exist.\n");
+-		ret = -EINVAL;
+-		goto out;
+-	}
+-
+ 	ath_dbg(common, CONFIG, "Attach a VIF of type: %d\n", vif->type);
+ 
+ 	sc->nvifs++;
+diff --git a/drivers/net/wireless/ath/ath9k/recv.c b/drivers/net/wireless/ath/ath9k/recv.c
+index 1c4583c..a2f7ae8 100644
+--- a/drivers/net/wireless/ath/ath9k/recv.c
++++ b/drivers/net/wireless/ath/ath9k/recv.c
+@@ -695,9 +695,9 @@ static bool ath_edma_get_buffers(struct ath_softc *sc,
+ 			__skb_unlink(skb, &rx_edma->rx_fifo);
+ 			list_add_tail(&bf->list, &sc->rx.rxbuf);
+ 			ath_rx_edma_buf_link(sc, qtype);
+-		} else {
+-			bf = NULL;
+ 		}
++
++		bf = NULL;
+ 	}
+ 
+ 	*dest = bf;
+@@ -821,7 +821,8 @@ static bool ath9k_rx_accept(struct ath_common *common,
+ 	 * descriptor does contain a valid key index. This has been observed
+ 	 * mostly with CCMP encryption.
+ 	 */
+-	if (rx_stats->rs_keyix == ATH9K_RXKEYIX_INVALID)
++	if (rx_stats->rs_keyix == ATH9K_RXKEYIX_INVALID ||
++	    !test_bit(rx_stats->rs_keyix, common->ccmp_keymap))
+ 		rx_stats->rs_status &= ~ATH9K_RXERR_KEYMISS;
+ 
+ 	if (!rx_stats->rs_datalen)
+diff --git a/drivers/net/wireless/ath/ath9k/xmit.c b/drivers/net/wireless/ath/ath9k/xmit.c
+index d59dd01..4d57139 100644
+--- a/drivers/net/wireless/ath/ath9k/xmit.c
++++ b/drivers/net/wireless/ath/ath9k/xmit.c
+@@ -938,6 +938,7 @@ static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
+ 	struct ieee80211_tx_rate *rates;
+ 	const struct ieee80211_rate *rate;
+ 	struct ieee80211_hdr *hdr;
++	struct ath_frame_info *fi = get_frame_info(bf->bf_mpdu);
+ 	int i;
+ 	u8 rix = 0;
+ 
+@@ -948,18 +949,7 @@ static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
+ 
+ 	/* set dur_update_en for l-sig computation except for PS-Poll frames */
+ 	info->dur_update = !ieee80211_is_pspoll(hdr->frame_control);
+-
+-	/*
+-	 * We check if Short Preamble is needed for the CTS rate by
+-	 * checking the BSS's global flag.
+-	 * But for the rate series, IEEE80211_TX_RC_USE_SHORT_PREAMBLE is used.
+-	 */
+-	rate = ieee80211_get_rts_cts_rate(sc->hw, tx_info);
+-	info->rtscts_rate = rate->hw_value;
+-
+-	if (tx_info->control.vif &&
+-	    tx_info->control.vif->bss_conf.use_short_preamble)
+-		info->rtscts_rate |= rate->hw_value_short;
++	info->rtscts_rate = fi->rtscts_rate;
+ 
+ 	for (i = 0; i < 4; i++) {
+ 		bool is_40, is_sgi, is_sp;
+@@ -1001,13 +991,13 @@ static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
+ 		}
+ 
+ 		/* legacy rates */
++		rate = &sc->sbands[tx_info->band].bitrates[rates[i].idx];
+ 		if ((tx_info->band == IEEE80211_BAND_2GHZ) &&
+ 		    !(rate->flags & IEEE80211_RATE_ERP_G))
+ 			phy = WLAN_RC_PHY_CCK;
+ 		else
+ 			phy = WLAN_RC_PHY_OFDM;
+ 
+-		rate = &sc->sbands[tx_info->band].bitrates[rates[i].idx];
+ 		info->rates[i].Rate = rate->hw_value;
+ 		if (rate->hw_value_short) {
+ 			if (rates[i].flags & IEEE80211_TX_RC_USE_SHORT_PREAMBLE)
+@@ -1776,10 +1766,22 @@ static void setup_frame_info(struct ieee80211_hw *hw, struct sk_buff *skb,
+ 	struct ieee80211_sta *sta = tx_info->control.sta;
+ 	struct ieee80211_key_conf *hw_key = tx_info->control.hw_key;
+ 	struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
++	const struct ieee80211_rate *rate;
+ 	struct ath_frame_info *fi = get_frame_info(skb);
+ 	struct ath_node *an = NULL;
+ 	enum ath9k_key_type keytype;
++	bool short_preamble = false;
++
++	/*
++	 * We check if Short Preamble is needed for the CTS rate by
++	 * checking the BSS's global flag.
++	 * But for the rate series, IEEE80211_TX_RC_USE_SHORT_PREAMBLE is used.
++	 */
++	if (tx_info->control.vif &&
++	    tx_info->control.vif->bss_conf.use_short_preamble)
++		short_preamble = true;
+ 
++	rate = ieee80211_get_rts_cts_rate(hw, tx_info);
+ 	keytype = ath9k_cmn_get_hw_crypto_keytype(skb);
+ 
+ 	if (sta)
+@@ -1794,6 +1796,9 @@ static void setup_frame_info(struct ieee80211_hw *hw, struct sk_buff *skb,
+ 		fi->keyix = ATH9K_TXKEYIX_INVALID;
+ 	fi->keytype = keytype;
+ 	fi->framelen = framelen;
++	fi->rtscts_rate = rate->hw_value;
++	if (short_preamble)
++		fi->rtscts_rate |= rate->hw_value_short;
+ }
+ 
+ u8 ath_txchainmask_reduction(struct ath_softc *sc, u8 chainmask, u32 rate)
+diff --git a/drivers/net/wireless/ath/key.c b/drivers/net/wireless/ath/key.c
+index 0e81904..5c54aa4 100644
+--- a/drivers/net/wireless/ath/key.c
++++ b/drivers/net/wireless/ath/key.c
+@@ -556,6 +556,9 @@ int ath_key_config(struct ath_common *common,
+ 		return -EIO;
+ 
+ 	set_bit(idx, common->keymap);
++	if (key->cipher == WLAN_CIPHER_SUITE_CCMP)
++		set_bit(idx, common->ccmp_keymap);
++
+ 	if (key->cipher == WLAN_CIPHER_SUITE_TKIP) {
+ 		set_bit(idx + 64, common->keymap);
+ 		set_bit(idx, common->tkip_keymap);
+@@ -582,6 +585,7 @@ void ath_key_delete(struct ath_common *common, struct ieee80211_key_conf *key)
+ 		return;
+ 
+ 	clear_bit(key->hw_key_idx, common->keymap);
++	clear_bit(key->hw_key_idx, common->ccmp_keymap);
+ 	if (key->cipher != WLAN_CIPHER_SUITE_TKIP)
+ 		return;
+ 
+diff --git a/drivers/net/wireless/ipw2x00/ipw.h b/drivers/net/wireless/ipw2x00/ipw.h
+new file mode 100644
+index 0000000..4007bf5
+--- /dev/null
++++ b/drivers/net/wireless/ipw2x00/ipw.h
+@@ -0,0 +1,23 @@
++/*
++ * Intel Pro/Wireless 2100, 2200BG, 2915ABG network connection driver
++ *
++ * Copyright 2012 Stanislav Yakovlev <stas.yakovlev@gmail.com>
++ *
++ * This program is free software; you can redistribute it and/or modify
++ * it under the terms of the GNU General Public License version 2 as
++ * published by the Free Software Foundation.
++ */
++
++#ifndef __IPW_H__
++#define __IPW_H__
++
++#include <linux/ieee80211.h>
++
++static const u32 ipw_cipher_suites[] = {
++	WLAN_CIPHER_SUITE_WEP40,
++	WLAN_CIPHER_SUITE_WEP104,
++	WLAN_CIPHER_SUITE_TKIP,
++	WLAN_CIPHER_SUITE_CCMP,
++};
++
++#endif
+diff --git a/drivers/net/wireless/ipw2x00/ipw2100.c b/drivers/net/wireless/ipw2x00/ipw2100.c
+index f0551f8..7c8e8b1 100644
+--- a/drivers/net/wireless/ipw2x00/ipw2100.c
++++ b/drivers/net/wireless/ipw2x00/ipw2100.c
+@@ -166,6 +166,7 @@ that only one external action is invoked at a time.
+ #include <net/lib80211.h>
+ 
+ #include "ipw2100.h"
++#include "ipw.h"
+ 
+ #define IPW2100_VERSION "git-1.2.2"
+ 
+@@ -1946,6 +1947,9 @@ static int ipw2100_wdev_init(struct net_device *dev)
+ 		wdev->wiphy->bands[IEEE80211_BAND_2GHZ] = bg_band;
+ 	}
+ 
++	wdev->wiphy->cipher_suites = ipw_cipher_suites;
++	wdev->wiphy->n_cipher_suites = ARRAY_SIZE(ipw_cipher_suites);
++
+ 	set_wiphy_dev(wdev->wiphy, &priv->pci_dev->dev);
+ 	if (wiphy_register(wdev->wiphy)) {
+ 		ipw2100_down(priv);
+diff --git a/drivers/net/wireless/ipw2x00/ipw2200.c b/drivers/net/wireless/ipw2x00/ipw2200.c
+index 1779db3..3a6b991 100644
+--- a/drivers/net/wireless/ipw2x00/ipw2200.c
++++ b/drivers/net/wireless/ipw2x00/ipw2200.c
+@@ -34,6 +34,7 @@
+ #include <linux/slab.h>
+ #include <net/cfg80211-wext.h>
+ #include "ipw2200.h"
++#include "ipw.h"
+ 
+ 
+ #ifndef KBUILD_EXTMOD
+@@ -11544,6 +11545,9 @@ static int ipw_wdev_init(struct net_device *dev)
+ 		wdev->wiphy->bands[IEEE80211_BAND_5GHZ] = a_band;
+ 	}
+ 
++	wdev->wiphy->cipher_suites = ipw_cipher_suites;
++	wdev->wiphy->n_cipher_suites = ARRAY_SIZE(ipw_cipher_suites);
++
+ 	set_wiphy_dev(wdev->wiphy, &priv->pci_dev->dev);
+ 
+ 	/* With that information in place, we can now register the wiphy... */
+diff --git a/drivers/net/wireless/iwlwifi/iwl-mac80211.c b/drivers/net/wireless/iwlwifi/iwl-mac80211.c
+index 1018f9b..e0e6c67 100644
+--- a/drivers/net/wireless/iwlwifi/iwl-mac80211.c
++++ b/drivers/net/wireless/iwlwifi/iwl-mac80211.c
+@@ -788,6 +788,18 @@ static int iwlagn_mac_sta_state(struct ieee80211_hw *hw,
+ 	switch (op) {
+ 	case ADD:
+ 		ret = iwlagn_mac_sta_add(hw, vif, sta);
++		if (ret)
++			break;
++		/*
++		 * Clear the in-progress flag, the AP station entry was added
++		 * but we'll initialize LQ only when we've associated (which
++		 * would also clear the in-progress flag). This is necessary
++		 * in case we never initialize LQ because association fails.
++		 */
++		spin_lock_bh(&priv->sta_lock);
++		priv->stations[iwl_sta_id(sta)].used &=
++			~IWL_STA_UCODE_INPROGRESS;
++		spin_unlock_bh(&priv->sta_lock);
+ 		break;
+ 	case REMOVE:
+ 		ret = iwlagn_mac_sta_remove(hw, vif, sta);
+diff --git a/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c b/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c
+index 6eac984..8741048 100644
+--- a/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c
++++ b/drivers/net/wireless/iwlwifi/iwl-trans-pcie.c
+@@ -2000,6 +2000,7 @@ static ssize_t iwl_dbgfs_rx_queue_read(struct file *file,
+ 	return simple_read_from_buffer(user_buf, count, ppos, buf, pos);
+ }
+ 
++#ifdef CONFIG_IWLWIFI_DEBUG
+ static ssize_t iwl_dbgfs_log_event_read(struct file *file,
+ 					 char __user *user_buf,
+ 					 size_t count, loff_t *ppos)
+@@ -2037,6 +2038,7 @@ static ssize_t iwl_dbgfs_log_event_write(struct file *file,
+ 
+ 	return count;
+ }
++#endif
+ 
+ static ssize_t iwl_dbgfs_interrupt_read(struct file *file,
+ 					char __user *user_buf,
+@@ -2164,7 +2166,9 @@ static ssize_t iwl_dbgfs_fh_reg_read(struct file *file,
+ 	return ret;
+ }
+ 
++#ifdef CONFIG_IWLWIFI_DEBUG
+ DEBUGFS_READ_WRITE_FILE_OPS(log_event);
++#endif
+ DEBUGFS_READ_WRITE_FILE_OPS(interrupt);
+ DEBUGFS_READ_FILE_OPS(fh_reg);
+ DEBUGFS_READ_FILE_OPS(rx_queue);
+@@ -2180,7 +2184,9 @@ static int iwl_trans_pcie_dbgfs_register(struct iwl_trans *trans,
+ {
+ 	DEBUGFS_ADD_FILE(rx_queue, dir, S_IRUSR);
+ 	DEBUGFS_ADD_FILE(tx_queue, dir, S_IRUSR);
++#ifdef CONFIG_IWLWIFI_DEBUG
+ 	DEBUGFS_ADD_FILE(log_event, dir, S_IWUSR | S_IRUSR);
++#endif
+ 	DEBUGFS_ADD_FILE(interrupt, dir, S_IWUSR | S_IRUSR);
+ 	DEBUGFS_ADD_FILE(csr, dir, S_IWUSR);
+ 	DEBUGFS_ADD_FILE(fh_reg, dir, S_IRUSR);
+diff --git a/drivers/net/wireless/mwifiex/11n_rxreorder.c b/drivers/net/wireless/mwifiex/11n_rxreorder.c
+index 9c44088..900ee12 100644
+--- a/drivers/net/wireless/mwifiex/11n_rxreorder.c
++++ b/drivers/net/wireless/mwifiex/11n_rxreorder.c
+@@ -256,7 +256,8 @@ mwifiex_11n_create_rx_reorder_tbl(struct mwifiex_private *priv, u8 *ta,
+ 	else
+ 		last_seq = priv->rx_seq[tid];
+ 
+-	if (last_seq >= new_node->start_win)
++	if (last_seq != MWIFIEX_DEF_11N_RX_SEQ_NUM &&
++	    last_seq >= new_node->start_win)
+ 		new_node->start_win = last_seq + 1;
+ 
+ 	new_node->win_size = win_size;
+@@ -596,5 +597,5 @@ void mwifiex_11n_cleanup_reorder_tbl(struct mwifiex_private *priv)
+ 	spin_unlock_irqrestore(&priv->rx_reorder_tbl_lock, flags);
+ 
+ 	INIT_LIST_HEAD(&priv->rx_reorder_tbl_ptr);
+-	memset(priv->rx_seq, 0, sizeof(priv->rx_seq));
++	mwifiex_reset_11n_rx_seq_num(priv);
+ }
+diff --git a/drivers/net/wireless/mwifiex/11n_rxreorder.h b/drivers/net/wireless/mwifiex/11n_rxreorder.h
+index f1bffeb..6c9815a 100644
+--- a/drivers/net/wireless/mwifiex/11n_rxreorder.h
++++ b/drivers/net/wireless/mwifiex/11n_rxreorder.h
+@@ -37,6 +37,13 @@
+ 
+ #define ADDBA_RSP_STATUS_ACCEPT 0
+ 
++#define MWIFIEX_DEF_11N_RX_SEQ_NUM	0xffff
++
++static inline void mwifiex_reset_11n_rx_seq_num(struct mwifiex_private *priv)
++{
++	memset(priv->rx_seq, 0xff, sizeof(priv->rx_seq));
++}
++
+ int mwifiex_11n_rx_reorder_pkt(struct mwifiex_private *,
+ 			       u16 seqNum,
+ 			       u16 tid, u8 *ta,
+diff --git a/drivers/net/wireless/mwifiex/cfg80211.c b/drivers/net/wireless/mwifiex/cfg80211.c
+index 6505038..baf6919 100644
+--- a/drivers/net/wireless/mwifiex/cfg80211.c
++++ b/drivers/net/wireless/mwifiex/cfg80211.c
+@@ -1214,11 +1214,11 @@ struct net_device *mwifiex_add_virtual_intf(struct wiphy *wiphy,
+ 	void *mdev_priv;
+ 
+ 	if (!priv)
+-		return NULL;
++		return ERR_PTR(-EFAULT);
+ 
+ 	adapter = priv->adapter;
+ 	if (!adapter)
+-		return NULL;
++		return ERR_PTR(-EFAULT);
+ 
+ 	switch (type) {
+ 	case NL80211_IFTYPE_UNSPECIFIED:
+@@ -1227,7 +1227,7 @@ struct net_device *mwifiex_add_virtual_intf(struct wiphy *wiphy,
+ 		if (priv->bss_mode) {
+ 			wiphy_err(wiphy, "cannot create multiple"
+ 					" station/adhoc interfaces\n");
+-			return NULL;
++			return ERR_PTR(-EINVAL);
+ 		}
+ 
+ 		if (type == NL80211_IFTYPE_UNSPECIFIED)
+@@ -1244,14 +1244,15 @@ struct net_device *mwifiex_add_virtual_intf(struct wiphy *wiphy,
+ 		break;
+ 	default:
+ 		wiphy_err(wiphy, "type not supported\n");
+-		return NULL;
++		return ERR_PTR(-EINVAL);
+ 	}
+ 
+ 	dev = alloc_netdev_mq(sizeof(struct mwifiex_private *), name,
+ 			      ether_setup, 1);
+ 	if (!dev) {
+ 		wiphy_err(wiphy, "no memory available for netdevice\n");
+-		goto error;
++		priv->bss_mode = NL80211_IFTYPE_UNSPECIFIED;
++		return ERR_PTR(-ENOMEM);
+ 	}
+ 
+ 	dev_net_set(dev, wiphy_net(wiphy));
+@@ -1276,7 +1277,9 @@ struct net_device *mwifiex_add_virtual_intf(struct wiphy *wiphy,
+ 	/* Register network device */
+ 	if (register_netdevice(dev)) {
+ 		wiphy_err(wiphy, "cannot register virtual network device\n");
+-		goto error;
++		free_netdev(dev);
++		priv->bss_mode = NL80211_IFTYPE_UNSPECIFIED;
++		return ERR_PTR(-EFAULT);
+ 	}
+ 
+ 	sema_init(&priv->async_sem, 1);
+@@ -1288,12 +1291,6 @@ struct net_device *mwifiex_add_virtual_intf(struct wiphy *wiphy,
+ 	mwifiex_dev_debugfs_init(priv);
+ #endif
+ 	return dev;
+-error:
+-	if (dev && (dev->reg_state == NETREG_UNREGISTERED))
+-		free_netdev(dev);
+-	priv->bss_mode = NL80211_IFTYPE_UNSPECIFIED;
+-
+-	return NULL;
+ }
+ EXPORT_SYMBOL_GPL(mwifiex_add_virtual_intf);
+ 
+diff --git a/drivers/net/wireless/mwifiex/wmm.c b/drivers/net/wireless/mwifiex/wmm.c
+index 5a7316c..3e6abf0 100644
+--- a/drivers/net/wireless/mwifiex/wmm.c
++++ b/drivers/net/wireless/mwifiex/wmm.c
+@@ -404,6 +404,8 @@ mwifiex_wmm_init(struct mwifiex_adapter *adapter)
+ 		priv->add_ba_param.tx_win_size = MWIFIEX_AMPDU_DEF_TXWINSIZE;
+ 		priv->add_ba_param.rx_win_size = MWIFIEX_AMPDU_DEF_RXWINSIZE;
+ 
++		mwifiex_reset_11n_rx_seq_num(priv);
++
+ 		atomic_set(&priv->wmm.tx_pkts_queued, 0);
+ 		atomic_set(&priv->wmm.highest_queued_prio, HIGH_PRIO_TID);
+ 	}
+@@ -1209,6 +1211,7 @@ mwifiex_dequeue_tx_packet(struct mwifiex_adapter *adapter)
+ 
+ 	if (!ptr->is_11n_enabled ||
+ 	    mwifiex_is_ba_stream_setup(priv, ptr, tid) ||
++	    priv->wps.session_enable ||
+ 	    ((priv->sec_info.wpa_enabled ||
+ 	      priv->sec_info.wpa2_enabled) &&
+ 	     !priv->wpa_is_gtk_set)) {
+diff --git a/drivers/net/wireless/rtl818x/rtl8187/leds.c b/drivers/net/wireless/rtl818x/rtl8187/leds.c
+index 2e0de2f..c2d5b49 100644
+--- a/drivers/net/wireless/rtl818x/rtl8187/leds.c
++++ b/drivers/net/wireless/rtl818x/rtl8187/leds.c
+@@ -117,7 +117,7 @@ static void rtl8187_led_brightness_set(struct led_classdev *led_dev,
+ 			radio_on = true;
+ 		} else if (radio_on) {
+ 			radio_on = false;
+-			cancel_delayed_work_sync(&priv->led_on);
++			cancel_delayed_work(&priv->led_on);
+ 			ieee80211_queue_delayed_work(hw, &priv->led_off, 0);
+ 		}
+ 	} else if (radio_on) {
+diff --git a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
+index 82c85286..5bd4085 100644
+--- a/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
++++ b/drivers/net/wireless/rtlwifi/rtl8192cu/sw.c
+@@ -301,9 +301,11 @@ static struct usb_device_id rtl8192c_usb_ids[] = {
+ 	{RTL_USB_DEVICE(0x07b8, 0x8188, rtl92cu_hal_cfg)}, /*Abocom - Abocom*/
+ 	{RTL_USB_DEVICE(0x07b8, 0x8189, rtl92cu_hal_cfg)}, /*Funai - Abocom*/
+ 	{RTL_USB_DEVICE(0x0846, 0x9041, rtl92cu_hal_cfg)}, /*NetGear WNA1000M*/
++	{RTL_USB_DEVICE(0x0bda, 0x5088, rtl92cu_hal_cfg)}, /*Thinkware-CC&C*/
+ 	{RTL_USB_DEVICE(0x0df6, 0x0052, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
+ 	{RTL_USB_DEVICE(0x0df6, 0x005c, rtl92cu_hal_cfg)}, /*Sitecom - Edimax*/
+ 	{RTL_USB_DEVICE(0x0eb0, 0x9071, rtl92cu_hal_cfg)}, /*NO Brand - Etop*/
++	{RTL_USB_DEVICE(0x4856, 0x0091, rtl92cu_hal_cfg)}, /*NetweeN - Feixun*/
+ 	/* HP - Lite-On ,8188CUS Slim Combo */
+ 	{RTL_USB_DEVICE(0x103c, 0x1629, rtl92cu_hal_cfg)},
+ 	{RTL_USB_DEVICE(0x13d3, 0x3357, rtl92cu_hal_cfg)}, /* AzureWave */
+@@ -345,6 +347,7 @@ static struct usb_device_id rtl8192c_usb_ids[] = {
+ 	{RTL_USB_DEVICE(0x07b8, 0x8178, rtl92cu_hal_cfg)}, /*Funai -Abocom*/
+ 	{RTL_USB_DEVICE(0x0846, 0x9021, rtl92cu_hal_cfg)}, /*Netgear-Sercomm*/
+ 	{RTL_USB_DEVICE(0x0b05, 0x17ab, rtl92cu_hal_cfg)}, /*ASUS-Edimax*/
++	{RTL_USB_DEVICE(0x0bda, 0x8186, rtl92cu_hal_cfg)}, /*Realtek 92CE-VAU*/
+ 	{RTL_USB_DEVICE(0x0df6, 0x0061, rtl92cu_hal_cfg)}, /*Sitecom-Edimax*/
+ 	{RTL_USB_DEVICE(0x0e66, 0x0019, rtl92cu_hal_cfg)}, /*Hawking-Edimax*/
+ 	{RTL_USB_DEVICE(0x2001, 0x3307, rtl92cu_hal_cfg)}, /*D-Link-Cameo*/
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 0ebbb19..796afbf 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -1935,14 +1935,14 @@ static int __devexit xennet_remove(struct xenbus_device *dev)
+ 
+ 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
+ 
+-	unregister_netdev(info->netdev);
+-
+ 	xennet_disconnect_backend(info);
+ 
+-	del_timer_sync(&info->rx_refill_timer);
+-
+ 	xennet_sysfs_delif(info->netdev);
+ 
++	unregister_netdev(info->netdev);
++
++	del_timer_sync(&info->rx_refill_timer);
++
+ 	free_percpu(info->stats);
+ 
+ 	free_netdev(info->netdev);
+diff --git a/drivers/oprofile/oprofile_perf.c b/drivers/oprofile/oprofile_perf.c
+index da14432..efc4b7f 100644
+--- a/drivers/oprofile/oprofile_perf.c
++++ b/drivers/oprofile/oprofile_perf.c
+@@ -25,7 +25,7 @@ static int oprofile_perf_enabled;
+ static DEFINE_MUTEX(oprofile_perf_mutex);
+ 
+ static struct op_counter_config *counter_config;
+-static struct perf_event **perf_events[nr_cpumask_bits];
++static struct perf_event **perf_events[NR_CPUS];
+ static int num_counters;
+ 
+ /*
+diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
+index 6b54b23..3cd3f45 100644
+--- a/drivers/pci/pci-driver.c
++++ b/drivers/pci/pci-driver.c
+@@ -742,6 +742,18 @@ static int pci_pm_suspend_noirq(struct device *dev)
+ 
+ 	pci_pm_set_unknown_state(pci_dev);
+ 
++	/*
++	 * Some BIOSes from ASUS have a bug: If a USB EHCI host controller's
++	 * PCI COMMAND register isn't 0, the BIOS assumes that the controller
++	 * hasn't been quiesced and tries to turn it off.  If the controller
++	 * is already in D3, this can hang or cause memory corruption.
++	 *
++	 * Since the value of the COMMAND register doesn't matter once the
++	 * device has been suspended, we can safely set it to 0 here.
++	 */
++	if (pci_dev->class == PCI_CLASS_SERIAL_USB_EHCI)
++		pci_write_config_word(pci_dev, PCI_COMMAND, 0);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index f597a1a..111569c 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -1743,11 +1743,6 @@ int pci_prepare_to_sleep(struct pci_dev *dev)
+ 	if (target_state == PCI_POWER_ERROR)
+ 		return -EIO;
+ 
+-	/* Some devices mustn't be in D3 during system sleep */
+-	if (target_state == PCI_D3hot &&
+-			(dev->dev_flags & PCI_DEV_FLAGS_NO_D3_DURING_SLEEP))
+-		return 0;
+-
+ 	pci_enable_wake(dev, target_state, device_may_wakeup(&dev->dev));
+ 
+ 	error = pci_set_power_state(dev, target_state);
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index bf33f0b..4bf7102 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -2917,32 +2917,6 @@ static void __devinit disable_igfx_irq(struct pci_dev *dev)
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq);
+ DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq);
+ 
+-/*
+- * The Intel 6 Series/C200 Series chipset's EHCI controllers on many
+- * ASUS motherboards will cause memory corruption or a system crash
+- * if they are in D3 while the system is put into S3 sleep.
+- */
+-static void __devinit asus_ehci_no_d3(struct pci_dev *dev)
+-{
+-	const char *sys_info;
+-	static const char good_Asus_board[] = "P8Z68-V";
+-
+-	if (dev->dev_flags & PCI_DEV_FLAGS_NO_D3_DURING_SLEEP)
+-		return;
+-	if (dev->subsystem_vendor != PCI_VENDOR_ID_ASUSTEK)
+-		return;
+-	sys_info = dmi_get_system_info(DMI_BOARD_NAME);
+-	if (sys_info && memcmp(sys_info, good_Asus_board,
+-			sizeof(good_Asus_board) - 1) == 0)
+-		return;
+-
+-	dev_info(&dev->dev, "broken D3 during system sleep on ASUS\n");
+-	dev->dev_flags |= PCI_DEV_FLAGS_NO_D3_DURING_SLEEP;
+-	device_set_wakeup_capable(&dev->dev, false);
+-}
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1c26, asus_ehci_no_d3);
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x1c2d, asus_ehci_no_d3);
+-
+ static void pci_do_fixups(struct pci_dev *dev, struct pci_fixup *f,
+ 			  struct pci_fixup *end)
+ {
+diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
+index 24d880e..f8d818a 100644
+--- a/drivers/remoteproc/Kconfig
++++ b/drivers/remoteproc/Kconfig
+@@ -4,9 +4,11 @@ menu "Remoteproc drivers (EXPERIMENTAL)"
+ config REMOTEPROC
+ 	tristate
+ 	depends on EXPERIMENTAL
++	select FW_CONFIG
+ 
+ config OMAP_REMOTEPROC
+ 	tristate "OMAP remoteproc support"
++	depends on EXPERIMENTAL
+ 	depends on ARCH_OMAP4
+ 	depends on OMAP_IOMMU
+ 	select REMOTEPROC
+diff --git a/drivers/rpmsg/virtio_rpmsg_bus.c b/drivers/rpmsg/virtio_rpmsg_bus.c
+index 75506ec..39d3aa4 100644
+--- a/drivers/rpmsg/virtio_rpmsg_bus.c
++++ b/drivers/rpmsg/virtio_rpmsg_bus.c
+@@ -188,6 +188,26 @@ static int rpmsg_uevent(struct device *dev, struct kobj_uevent_env *env)
+ 					rpdev->id.name);
+ }
+ 
++/**
++ * __ept_release() - deallocate an rpmsg endpoint
++ * @kref: the ept's reference count
++ *
++ * This function deallocates an ept, and is invoked when its @kref refcount
++ * drops to zero.
++ *
++ * Never invoke this function directly!
++ */
++static void __ept_release(struct kref *kref)
++{
++	struct rpmsg_endpoint *ept = container_of(kref, struct rpmsg_endpoint,
++						  refcount);
++	/*
++	 * At this point no one holds a reference to ept anymore,
++	 * so we can directly free it
++	 */
++	kfree(ept);
++}
++
+ /* for more info, see below documentation of rpmsg_create_ept() */
+ static struct rpmsg_endpoint *__rpmsg_create_ept(struct virtproc_info *vrp,
+ 		struct rpmsg_channel *rpdev, rpmsg_rx_cb_t cb,
+@@ -206,6 +226,9 @@ static struct rpmsg_endpoint *__rpmsg_create_ept(struct virtproc_info *vrp,
+ 		return NULL;
+ 	}
+ 
++	kref_init(&ept->refcount);
++	mutex_init(&ept->cb_lock);
++
+ 	ept->rpdev = rpdev;
+ 	ept->cb = cb;
+ 	ept->priv = priv;
+@@ -238,7 +261,7 @@ rem_idr:
+ 	idr_remove(&vrp->endpoints, request);
+ free_ept:
+ 	mutex_unlock(&vrp->endpoints_lock);
+-	kfree(ept);
++	kref_put(&ept->refcount, __ept_release);
+ 	return NULL;
+ }
+ 
+@@ -302,11 +325,17 @@ EXPORT_SYMBOL(rpmsg_create_ept);
+ static void
+ __rpmsg_destroy_ept(struct virtproc_info *vrp, struct rpmsg_endpoint *ept)
+ {
++	/* make sure new inbound messages can't find this ept anymore */
+ 	mutex_lock(&vrp->endpoints_lock);
+ 	idr_remove(&vrp->endpoints, ept->addr);
+ 	mutex_unlock(&vrp->endpoints_lock);
+ 
+-	kfree(ept);
++	/* make sure in-flight inbound messages won't invoke cb anymore */
++	mutex_lock(&ept->cb_lock);
++	ept->cb = NULL;
++	mutex_unlock(&ept->cb_lock);
++
++	kref_put(&ept->refcount, __ept_release);
+ }
+ 
+ /**
+@@ -790,12 +819,28 @@ static void rpmsg_recv_done(struct virtqueue *rvq)
+ 
+ 	/* use the dst addr to fetch the callback of the appropriate user */
+ 	mutex_lock(&vrp->endpoints_lock);
++
+ 	ept = idr_find(&vrp->endpoints, msg->dst);
++
++	/* let's make sure no one deallocates ept while we use it */
++	if (ept)
++		kref_get(&ept->refcount);
++
+ 	mutex_unlock(&vrp->endpoints_lock);
+ 
+-	if (ept && ept->cb)
+-		ept->cb(ept->rpdev, msg->data, msg->len, ept->priv, msg->src);
+-	else
++	if (ept) {
++		/* make sure ept->cb doesn't go away while we use it */
++		mutex_lock(&ept->cb_lock);
++
++		if (ept->cb)
++			ept->cb(ept->rpdev, msg->data, msg->len, ept->priv,
++				msg->src);
++
++		mutex_unlock(&ept->cb_lock);
++
++		/* farewell, ept, we don't need you anymore */
++		kref_put(&ept->refcount, __ept_release);
++	} else
+ 		dev_warn(dev, "msg received with no recepient\n");
+ 
+ 	/* publish the real size of the buffer */
+diff --git a/drivers/rtc/rtc-ab8500.c b/drivers/rtc/rtc-ab8500.c
+index 4bcf9ca..b11a2ec 100644
+--- a/drivers/rtc/rtc-ab8500.c
++++ b/drivers/rtc/rtc-ab8500.c
+@@ -422,7 +422,7 @@ static int __devinit ab8500_rtc_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	err = request_threaded_irq(irq, NULL, rtc_alarm_handler,
+-		IRQF_NO_SUSPEND, "ab8500-rtc", rtc);
++		IRQF_NO_SUSPEND | IRQF_ONESHOT, "ab8500-rtc", rtc);
+ 	if (err < 0) {
+ 		rtc_device_unregister(rtc);
+ 		return err;
+diff --git a/drivers/rtc/rtc-mxc.c b/drivers/rtc/rtc-mxc.c
+index 5e1d64e..e3e50d6 100644
+--- a/drivers/rtc/rtc-mxc.c
++++ b/drivers/rtc/rtc-mxc.c
+@@ -202,10 +202,11 @@ static irqreturn_t mxc_rtc_interrupt(int irq, void *dev_id)
+ 	struct platform_device *pdev = dev_id;
+ 	struct rtc_plat_data *pdata = platform_get_drvdata(pdev);
+ 	void __iomem *ioaddr = pdata->ioaddr;
++	unsigned long flags;
+ 	u32 status;
+ 	u32 events = 0;
+ 
+-	spin_lock_irq(&pdata->rtc->irq_lock);
++	spin_lock_irqsave(&pdata->rtc->irq_lock, flags);
+ 	status = readw(ioaddr + RTC_RTCISR) & readw(ioaddr + RTC_RTCIENR);
+ 	/* clear interrupt sources */
+ 	writew(status, ioaddr + RTC_RTCISR);
+@@ -224,7 +225,7 @@ static irqreturn_t mxc_rtc_interrupt(int irq, void *dev_id)
+ 		events |= (RTC_PF | RTC_IRQF);
+ 
+ 	rtc_update_irq(pdata->rtc, 1, events);
+-	spin_unlock_irq(&pdata->rtc->irq_lock);
++	spin_unlock_irqrestore(&pdata->rtc->irq_lock, flags);
+ 
+ 	return IRQ_HANDLED;
+ }
+diff --git a/drivers/rtc/rtc-spear.c b/drivers/rtc/rtc-spear.c
+index e38da0d..235b0ef 100644
+--- a/drivers/rtc/rtc-spear.c
++++ b/drivers/rtc/rtc-spear.c
+@@ -457,12 +457,12 @@ static int __devexit spear_rtc_remove(struct platform_device *pdev)
+ 	clk_disable(config->clk);
+ 	clk_put(config->clk);
+ 	iounmap(config->ioaddr);
+-	kfree(config);
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ 	if (res)
+ 		release_mem_region(res->start, resource_size(res));
+ 	platform_set_drvdata(pdev, NULL);
+ 	rtc_device_unregister(config->rtc);
++	kfree(config);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/scsi/aic94xx/aic94xx_task.c b/drivers/scsi/aic94xx/aic94xx_task.c
+index 532d212..393e7ce 100644
+--- a/drivers/scsi/aic94xx/aic94xx_task.c
++++ b/drivers/scsi/aic94xx/aic94xx_task.c
+@@ -201,7 +201,7 @@ static void asd_get_response_tasklet(struct asd_ascb *ascb,
+ 
+ 		if (SAS_STATUS_BUF_SIZE >= sizeof(*resp)) {
+ 			resp->frame_len = le16_to_cpu(*(__le16 *)(r+6));
+-			memcpy(&resp->ending_fis[0], r+16, 24);
++			memcpy(&resp->ending_fis[0], r+16, ATA_RESP_FIS_SIZE);
+ 			ts->buf_valid_size = sizeof(*resp);
+ 		}
+ 	}
+diff --git a/drivers/scsi/libsas/sas_ata.c b/drivers/scsi/libsas/sas_ata.c
+index 441d88a..d109cc3 100644
+--- a/drivers/scsi/libsas/sas_ata.c
++++ b/drivers/scsi/libsas/sas_ata.c
+@@ -139,12 +139,12 @@ static void sas_ata_task_done(struct sas_task *task)
+ 	if (stat->stat == SAS_PROTO_RESPONSE || stat->stat == SAM_STAT_GOOD ||
+ 	    ((stat->stat == SAM_STAT_CHECK_CONDITION &&
+ 	      dev->sata_dev.command_set == ATAPI_COMMAND_SET))) {
+-		ata_tf_from_fis(resp->ending_fis, &dev->sata_dev.tf);
++		memcpy(dev->sata_dev.fis, resp->ending_fis, ATA_RESP_FIS_SIZE);
+ 
+ 		if (!link->sactive) {
+-			qc->err_mask |= ac_err_mask(dev->sata_dev.tf.command);
++			qc->err_mask |= ac_err_mask(dev->sata_dev.fis[2]);
+ 		} else {
+-			link->eh_info.err_mask |= ac_err_mask(dev->sata_dev.tf.command);
++			link->eh_info.err_mask |= ac_err_mask(dev->sata_dev.fis[2]);
+ 			if (unlikely(link->eh_info.err_mask))
+ 				qc->flags |= ATA_QCFLAG_FAILED;
+ 		}
+@@ -161,8 +161,8 @@ static void sas_ata_task_done(struct sas_task *task)
+ 				qc->flags |= ATA_QCFLAG_FAILED;
+ 			}
+ 
+-			dev->sata_dev.tf.feature = 0x04; /* status err */
+-			dev->sata_dev.tf.command = ATA_ERR;
++			dev->sata_dev.fis[3] = 0x04; /* status err */
++			dev->sata_dev.fis[2] = ATA_ERR;
+ 		}
+ 	}
+ 
+@@ -269,7 +269,7 @@ static bool sas_ata_qc_fill_rtf(struct ata_queued_cmd *qc)
+ {
+ 	struct domain_device *dev = qc->ap->private_data;
+ 
+-	memcpy(&qc->result_tf, &dev->sata_dev.tf, sizeof(qc->result_tf));
++	ata_tf_from_fis(dev->sata_dev.fis, &qc->result_tf);
+ 	return true;
+ }
+ 
+diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
+index 5ba5c2a..a239382 100644
+--- a/drivers/scsi/sd.c
++++ b/drivers/scsi/sd.c
+@@ -1898,6 +1898,8 @@ static int sd_try_rc16_first(struct scsi_device *sdp)
+ {
+ 	if (sdp->host->max_cmd_len < 16)
+ 		return 0;
++	if (sdp->try_rc_10_first)
++		return 0;
+ 	if (sdp->scsi_level > SCSI_SPC_2)
+ 		return 1;
+ 	if (scsi_device_protection(sdp))
+diff --git a/drivers/staging/iio/adc/ad7606_core.c b/drivers/staging/iio/adc/ad7606_core.c
+index 97e8d3d..7322c16 100644
+--- a/drivers/staging/iio/adc/ad7606_core.c
++++ b/drivers/staging/iio/adc/ad7606_core.c
+@@ -235,6 +235,7 @@ static const struct attribute_group ad7606_attribute_group_range = {
+ 		.indexed = 1,				\
+ 		.channel = num,				\
+ 		.address = num,				\
++		.info_mask = IIO_CHAN_INFO_SCALE_SHARED_BIT, \
+ 		.scan_index = num,			\
+ 		.scan_type = IIO_ST('s', 16, 16, 0),	\
+ 	}
+diff --git a/drivers/staging/rtl8712/usb_intf.c b/drivers/staging/rtl8712/usb_intf.c
+index e419b4f..2c80745 100644
+--- a/drivers/staging/rtl8712/usb_intf.c
++++ b/drivers/staging/rtl8712/usb_intf.c
+@@ -102,6 +102,8 @@ static struct usb_device_id rtl871x_usb_id_tbl[] = {
+ 	/* - */
+ 	{USB_DEVICE(0x20F4, 0x646B)},
+ 	{USB_DEVICE(0x083A, 0xC512)},
++	{USB_DEVICE(0x25D4, 0x4CA1)},
++	{USB_DEVICE(0x25D4, 0x4CAB)},
+ 
+ /* RTL8191SU */
+ 	/* Realtek */
+diff --git a/drivers/target/tcm_fc/tfc_sess.c b/drivers/target/tcm_fc/tfc_sess.c
+index cb99da9..87901fa 100644
+--- a/drivers/target/tcm_fc/tfc_sess.c
++++ b/drivers/target/tcm_fc/tfc_sess.c
+@@ -58,7 +58,8 @@ static struct ft_tport *ft_tport_create(struct fc_lport *lport)
+ 	struct ft_tport *tport;
+ 	int i;
+ 
+-	tport = rcu_dereference(lport->prov[FC_TYPE_FCP]);
++	tport = rcu_dereference_protected(lport->prov[FC_TYPE_FCP],
++					  lockdep_is_held(&ft_lport_lock));
+ 	if (tport && tport->tpg)
+ 		return tport;
+ 
+diff --git a/drivers/usb/class/cdc-wdm.c b/drivers/usb/class/cdc-wdm.c
+index 83d14bf..01d247e 100644
+--- a/drivers/usb/class/cdc-wdm.c
++++ b/drivers/usb/class/cdc-wdm.c
+@@ -497,6 +497,8 @@ retry:
+ 			goto retry;
+ 		}
+ 		if (!desc->reslength) { /* zero length read */
++			dev_dbg(&desc->intf->dev, "%s: zero length - clearing WDM_READ\n", __func__);
++			clear_bit(WDM_READ, &desc->flags);
+ 			spin_unlock_irq(&desc->iuspin);
+ 			goto retry;
+ 		}
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index c8e0704..6241b71 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -2102,12 +2102,16 @@ static unsigned hub_is_wusb(struct usb_hub *hub)
+ static int hub_port_reset(struct usb_hub *hub, int port1,
+ 			struct usb_device *udev, unsigned int delay, bool warm);
+ 
+-/* Is a USB 3.0 port in the Inactive state? */
+-static bool hub_port_inactive(struct usb_hub *hub, u16 portstatus)
++/* Is a USB 3.0 port in the Inactive or Compliance Mode state?
++ * Port warm reset is required to recover
++ */
++static bool hub_port_warm_reset_required(struct usb_hub *hub, u16 portstatus)
+ {
+ 	return hub_is_superspeed(hub->hdev) &&
+-		(portstatus & USB_PORT_STAT_LINK_STATE) ==
+-		USB_SS_PORT_LS_SS_INACTIVE;
++		(((portstatus & USB_PORT_STAT_LINK_STATE) ==
++		  USB_SS_PORT_LS_SS_INACTIVE) ||
++		 ((portstatus & USB_PORT_STAT_LINK_STATE) ==
++		  USB_SS_PORT_LS_COMP_MOD));
+ }
+ 
+ static int hub_port_wait_reset(struct usb_hub *hub, int port1,
+@@ -2143,7 +2147,7 @@ static int hub_port_wait_reset(struct usb_hub *hub, int port1,
+ 			 *
+ 			 * See https://bugzilla.kernel.org/show_bug.cgi?id=41752
+ 			 */
+-			if (hub_port_inactive(hub, portstatus)) {
++			if (hub_port_warm_reset_required(hub, portstatus)) {
+ 				int ret;
+ 
+ 				if ((portchange & USB_PORT_STAT_C_CONNECTION))
+@@ -3757,9 +3761,7 @@ static void hub_events(void)
+ 			/* Warm reset a USB3 protocol port if it's in
+ 			 * SS.Inactive state.
+ 			 */
+-			if (hub_is_superspeed(hub->hdev) &&
+-				(portstatus & USB_PORT_STAT_LINK_STATE)
+-					== USB_SS_PORT_LS_SS_INACTIVE) {
++			if (hub_port_warm_reset_required(hub, portstatus)) {
+ 				dev_dbg(hub_dev, "warm reset port %d\n", i);
+ 				hub_port_reset(hub, i, NULL,
+ 						HUB_BH_RESET_TIME, true);
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 89850a8..bbf3c0c 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -462,6 +462,42 @@ void xhci_test_and_clear_bit(struct xhci_hcd *xhci, __le32 __iomem **port_array,
+ 	}
+ }
+ 
++/* Updates Link Status for SuperSpeed port */
++static void xhci_hub_report_link_state(u32 *status, u32 status_reg)
++{
++	u32 pls = status_reg & PORT_PLS_MASK;
++
++	/* resume state is an xHCI internal state.
++	 * Do not report it to usb core.
++	 */
++	if (pls == XDEV_RESUME)
++		return;
++
++	/* When the CAS bit is set then warm reset
++	 * should be performed on port
++	 */
++	if (status_reg & PORT_CAS) {
++		/* The CAS bit can be set while the port is
++		 * in any link state.
++		 * Only roothubs have CAS bit, so we
++		 * pretend to be in compliance mode
++		 * unless we're already in compliance
++		 * or the inactive state.
++		 */
++		if (pls != USB_SS_PORT_LS_COMP_MOD &&
++		    pls != USB_SS_PORT_LS_SS_INACTIVE) {
++			pls = USB_SS_PORT_LS_COMP_MOD;
++		}
++		/* Return also connection bit -
++		 * hub state machine resets port
++		 * when this bit is set.
++		 */
++		pls |= USB_PORT_STAT_CONNECTION;
++	}
++	/* update status field */
++	*status |= pls;
++}
++
+ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 		u16 wIndex, char *buf, u16 wLength)
+ {
+@@ -605,13 +641,9 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 			else
+ 				status |= USB_PORT_STAT_POWER;
+ 		}
+-		/* Port Link State */
++		/* Update Port Link State for SuperSpeed ports */
+ 		if (hcd->speed == HCD_USB3) {
+-			/* resume state is a xHCI internal state.
+-			 * Do not report it to usb core.
+-			 */
+-			if ((temp & PORT_PLS_MASK) != XDEV_RESUME)
+-				status |= (temp & PORT_PLS_MASK);
++			xhci_hub_report_link_state(&status, temp);
+ 		}
+ 		if (bus_state->port_c_suspend & (1 << wIndex))
+ 			status |= 1 << USB_PORT_FEAT_C_SUSPEND;
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 525a1ee..158175bf 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -885,6 +885,17 @@ static void update_ring_for_set_deq_completion(struct xhci_hcd *xhci,
+ 	num_trbs_free_temp = ep_ring->num_trbs_free;
+ 	dequeue_temp = ep_ring->dequeue;
+ 
++	/* If we get two back-to-back stalls, and the first stalled transfer
++	 * ends just before a link TRB, the dequeue pointer will be left on
++	 * the link TRB by the code in the while loop.  So we have to update
++	 * the dequeue pointer one segment further, or we'll jump off
++	 * the segment into la-la-land.
++	 */
++	if (last_trb(xhci, ep_ring, ep_ring->deq_seg, ep_ring->dequeue)) {
++		ep_ring->deq_seg = ep_ring->deq_seg->next;
++		ep_ring->dequeue = ep_ring->deq_seg->trbs;
++	}
++
+ 	while (ep_ring->dequeue != dev->eps[ep_index].queued_deq_ptr) {
+ 		/* We have more usable TRBs */
+ 		ep_ring->num_trbs_free++;
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index ac14276..59434fe 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -341,7 +341,11 @@ struct xhci_op_regs {
+ #define PORT_PLC	(1 << 22)
+ /* port configure error change - port failed to configure its link partner */
+ #define PORT_CEC	(1 << 23)
+-/* bit 24 reserved */
++/* Cold Attach Status - xHC can set this bit to report device attached during
++ * Sx state. Warm port reset should be performed to clear this bit and move port
++ * to connected state.
++ */
++#define PORT_CAS	(1 << 24)
+ /* wake on connect (enable) */
+ #define PORT_WKCONN_E	(1 << 25)
+ /* wake on disconnect (enable) */
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index 95de9c0..53e7e69 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -93,6 +93,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10C4, 0x814B) }, /* West Mountain Radio RIGtalk */
+ 	{ USB_DEVICE(0x10C4, 0x8156) }, /* B&G H3000 link cable */
+ 	{ USB_DEVICE(0x10C4, 0x815E) }, /* Helicomm IP-Link 1220-DVM */
++	{ USB_DEVICE(0x10C4, 0x815F) }, /* Timewave HamLinkUSB */
+ 	{ USB_DEVICE(0x10C4, 0x818B) }, /* AVIT Research USB to TTL */
+ 	{ USB_DEVICE(0x10C4, 0x819F) }, /* MJS USB Toslink Switcher */
+ 	{ USB_DEVICE(0x10C4, 0x81A6) }, /* ThinkOptics WavIt */
+@@ -134,7 +135,13 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x10CE, 0xEA6A) }, /* Silicon Labs MobiData GPRS USB Modem 100EU */
+ 	{ USB_DEVICE(0x13AD, 0x9999) }, /* Baltech card reader */
+ 	{ USB_DEVICE(0x1555, 0x0004) }, /* Owen AC4 USB-RS485 Converter */
++	{ USB_DEVICE(0x166A, 0x0201) }, /* Clipsal 5500PACA C-Bus Pascal Automation Controller */
++	{ USB_DEVICE(0x166A, 0x0301) }, /* Clipsal 5800PC C-Bus Wireless PC Interface */
+ 	{ USB_DEVICE(0x166A, 0x0303) }, /* Clipsal 5500PCU C-Bus USB interface */
++	{ USB_DEVICE(0x166A, 0x0304) }, /* Clipsal 5000CT2 C-Bus Black and White Touchscreen */
++	{ USB_DEVICE(0x166A, 0x0305) }, /* Clipsal C-5000CT2 C-Bus Spectrum Colour Touchscreen */
++	{ USB_DEVICE(0x166A, 0x0401) }, /* Clipsal L51xx C-Bus Architectural Dimmer */
++	{ USB_DEVICE(0x166A, 0x0101) }, /* Clipsal 5560884 C-Bus Multi-room Audio Matrix Switcher */
+ 	{ USB_DEVICE(0x16D6, 0x0001) }, /* Jablotron serial interface */
+ 	{ USB_DEVICE(0x16DC, 0x0010) }, /* W-IE-NE-R Plein & Baus GmbH PL512 Power Supply */
+ 	{ USB_DEVICE(0x16DC, 0x0011) }, /* W-IE-NE-R Plein & Baus GmbH RCM Remote Control for MARATON Power Supply */
+@@ -146,7 +153,11 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x1843, 0x0200) }, /* Vaisala USB Instrument Cable */
+ 	{ USB_DEVICE(0x18EF, 0xE00F) }, /* ELV USB-I2C-Interface */
+ 	{ USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */
++	{ USB_DEVICE(0x1E29, 0x0102) }, /* Festo CPX-USB */
++	{ USB_DEVICE(0x1E29, 0x0501) }, /* Festo CMSP */
+ 	{ USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */
++	{ USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */
++	{ USB_DEVICE(0x3195, 0xF281) }, /* Link Instruments MSO-28 */
+ 	{ USB_DEVICE(0x413C, 0x9500) }, /* DW700 GPS USB interface */
+ 	{ } /* Terminating Entry */
+ };
+diff --git a/drivers/usb/serial/metro-usb.c b/drivers/usb/serial/metro-usb.c
+index 08d16e8..7c14671 100644
+--- a/drivers/usb/serial/metro-usb.c
++++ b/drivers/usb/serial/metro-usb.c
+@@ -171,14 +171,6 @@ static int metrousb_open(struct tty_struct *tty, struct usb_serial_port *port)
+ 	metro_priv->throttled = 0;
+ 	spin_unlock_irqrestore(&metro_priv->lock, flags);
+ 
+-	/*
+-	 * Force low_latency on so that our tty_push actually forces the data
+-	 * through, otherwise it is scheduled, and with high data rates (like
+-	 * with OHCI) data can get lost.
+-	 */
+-	if (tty)
+-		tty->low_latency = 1;
+-
+ 	/* Clear the urb pipe. */
+ 	usb_clear_halt(serial->dev, port->interrupt_in_urb->pipe);
+ 
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 2706d8a..49484b3 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -236,6 +236,7 @@ static void option_instat_callback(struct urb *urb);
+ #define NOVATELWIRELESS_PRODUCT_G1		0xA001
+ #define NOVATELWIRELESS_PRODUCT_G1_M		0xA002
+ #define NOVATELWIRELESS_PRODUCT_G2		0xA010
++#define NOVATELWIRELESS_PRODUCT_MC551		0xB001
+ 
+ /* AMOI PRODUCTS */
+ #define AMOI_VENDOR_ID				0x1614
+@@ -496,6 +497,19 @@ static void option_instat_callback(struct urb *urb);
+ 
+ /* MediaTek products */
+ #define MEDIATEK_VENDOR_ID			0x0e8d
++#define MEDIATEK_PRODUCT_DC_1COM		0x00a0
++#define MEDIATEK_PRODUCT_DC_4COM		0x00a5
++#define MEDIATEK_PRODUCT_DC_5COM		0x00a4
++#define MEDIATEK_PRODUCT_7208_1COM		0x7101
++#define MEDIATEK_PRODUCT_7208_2COM		0x7102
++#define MEDIATEK_PRODUCT_FP_1COM		0x0003
++#define MEDIATEK_PRODUCT_FP_2COM		0x0023
++#define MEDIATEK_PRODUCT_FPDC_1COM		0x0043
++#define MEDIATEK_PRODUCT_FPDC_2COM		0x0033
++
++/* Cellient products */
++#define CELLIENT_VENDOR_ID			0x2692
++#define CELLIENT_PRODUCT_MEN200			0x9005
+ 
+ /* some devices interfaces need special handling due to a number of reasons */
+ enum option_blacklist_reason {
+@@ -549,6 +563,10 @@ static const struct option_blacklist_info net_intf1_blacklist = {
+ 	.reserved = BIT(1),
+ };
+ 
++static const struct option_blacklist_info net_intf2_blacklist = {
++	.reserved = BIT(2),
++};
++
+ static const struct option_blacklist_info net_intf3_blacklist = {
+ 	.reserved = BIT(3),
+ };
+@@ -734,6 +752,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_G1) },
+ 	{ USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_G1_M) },
+ 	{ USB_DEVICE(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_G2) },
++	/* Novatel Ovation MC551 a.k.a. Verizon USB551L */
++	{ USB_DEVICE_AND_INTERFACE_INFO(NOVATELWIRELESS_VENDOR_ID, NOVATELWIRELESS_PRODUCT_MC551, 0xff, 0xff, 0xff) },
+ 
+ 	{ USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01) },
+ 	{ USB_DEVICE(AMOI_VENDOR_ID, AMOI_PRODUCT_H01A) },
+@@ -1092,6 +1112,8 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1298, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1299, 0xff, 0xff, 0xff) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1300, 0xff, 0xff, 0xff) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x1402, 0xff, 0xff, 0xff),
++		.driver_info = (kernel_ulong_t)&net_intf2_blacklist },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2002, 0xff,
+ 	  0xff, 0xff), .driver_info = (kernel_ulong_t)&zte_k3765_z_blacklist },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(ZTE_VENDOR_ID, 0x2003, 0xff, 0xff, 0xff) },
+@@ -1233,6 +1255,18 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x00a1, 0xff, 0x02, 0x01) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x00a2, 0xff, 0x00, 0x00) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x00a2, 0xff, 0x02, 0x01) },        /* MediaTek MT6276M modem & app port */
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_1COM, 0x0a, 0x00, 0x00) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_5COM, 0xff, 0x02, 0x01) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_5COM, 0xff, 0x00, 0x00) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM, 0xff, 0x02, 0x01) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_DC_4COM, 0xff, 0x00, 0x00) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_7208_1COM, 0x02, 0x00, 0x00) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_7208_2COM, 0x02, 0x02, 0x01) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_FP_1COM, 0x0a, 0x00, 0x00) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_FP_2COM, 0x0a, 0x00, 0x00) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_FPDC_1COM, 0x0a, 0x00, 0x00) },
++	{ USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, MEDIATEK_PRODUCT_FPDC_2COM, 0x0a, 0x00, 0x00) },
++	{ USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MEN200) },
+ 	{ } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/drivers/usb/storage/scsiglue.c b/drivers/usb/storage/scsiglue.c
+index a324a5d..11418da 100644
+--- a/drivers/usb/storage/scsiglue.c
++++ b/drivers/usb/storage/scsiglue.c
+@@ -202,6 +202,12 @@ static int slave_configure(struct scsi_device *sdev)
+ 		if (us->fflags & US_FL_NO_READ_CAPACITY_16)
+ 			sdev->no_read_capacity_16 = 1;
+ 
++		/*
++		 * Many devices do not respond properly to READ_CAPACITY_16.
++		 * Tell the SCSI layer to try READ_CAPACITY_10 first.
++		 */
++		sdev->try_rc_10_first = 1;
++
+ 		/* assume SPC3 or latter devices support sense size > 18 */
+ 		if (sdev->scsi_level > SCSI_SPC_2)
+ 			us->fflags |= US_FL_SANE_SENSE;
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index f4d3e1a..8f3cbb8 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -1107,13 +1107,6 @@ UNUSUAL_DEV( 0x090a, 0x1200, 0x0000, 0x9999,
+ 		USB_SC_RBC, USB_PR_BULK, NULL,
+ 		0 ),
+ 
+-/* Feiya QDI U2 DISK, reported by Hans de Goede <hdegoede@redhat.com> */
+-UNUSUAL_DEV( 0x090c, 0x1000, 0x0000, 0xffff,
+-		"Feiya",
+-		"QDI U2 DISK",
+-		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+-		US_FL_NO_READ_CAPACITY_16 ),
+-
+ /* aeb */
+ UNUSUAL_DEV( 0x090c, 0x1132, 0x0000, 0xffff,
+ 		"Feiya",
+diff --git a/drivers/video/omap2/dss/apply.c b/drivers/video/omap2/dss/apply.c
+index b10b3bc..cb19af2 100644
+--- a/drivers/video/omap2/dss/apply.c
++++ b/drivers/video/omap2/dss/apply.c
+@@ -927,7 +927,7 @@ static void dss_ovl_setup_fifo(struct omap_overlay *ovl,
+ 	dssdev = ovl->manager->device;
+ 
+ 	dispc_ovl_compute_fifo_thresholds(ovl->id, &fifo_low, &fifo_high,
+-			use_fifo_merge);
++			use_fifo_merge, ovl_manual_update(ovl));
+ 
+ 	dss_apply_ovl_fifo_thresholds(ovl, fifo_low, fifo_high);
+ }
+diff --git a/drivers/video/omap2/dss/dispc.c b/drivers/video/omap2/dss/dispc.c
+index ee30937..c4d0e44 100644
+--- a/drivers/video/omap2/dss/dispc.c
++++ b/drivers/video/omap2/dss/dispc.c
+@@ -1063,7 +1063,8 @@ void dispc_enable_fifomerge(bool enable)
+ }
+ 
+ void dispc_ovl_compute_fifo_thresholds(enum omap_plane plane,
+-		u32 *fifo_low, u32 *fifo_high, bool use_fifomerge)
++		u32 *fifo_low, u32 *fifo_high, bool use_fifomerge,
++		bool manual_update)
+ {
+ 	/*
+ 	 * All sizes are in bytes. Both the buffer and burst are made of
+@@ -1091,7 +1092,7 @@ void dispc_ovl_compute_fifo_thresholds(enum omap_plane plane,
+ 	 * combined fifo size
+ 	 */
+ 
+-	if (dss_has_feature(FEAT_OMAP3_DSI_FIFO_BUG)) {
++	if (manual_update && dss_has_feature(FEAT_OMAP3_DSI_FIFO_BUG)) {
+ 		*fifo_low = ovl_fifo_size - burst_size * 2;
+ 		*fifo_high = total_fifo_size - burst_size;
+ 	} else {
+diff --git a/drivers/video/omap2/dss/dss.h b/drivers/video/omap2/dss/dss.h
+index d4b3dff..d0638da 100644
+--- a/drivers/video/omap2/dss/dss.h
++++ b/drivers/video/omap2/dss/dss.h
+@@ -424,7 +424,8 @@ int dispc_calc_clock_rates(unsigned long dispc_fclk_rate,
+ 
+ void dispc_ovl_set_fifo_threshold(enum omap_plane plane, u32 low, u32 high);
+ void dispc_ovl_compute_fifo_thresholds(enum omap_plane plane,
+-		u32 *fifo_low, u32 *fifo_high, bool use_fifomerge);
++		u32 *fifo_low, u32 *fifo_high, bool use_fifomerge,
++		bool manual_update);
+ int dispc_ovl_setup(enum omap_plane plane, struct omap_overlay_info *oi,
+ 		bool ilace, bool replication);
+ int dispc_ovl_enable(enum omap_plane plane, bool enable);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index eb1ae90..dce89da 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -690,6 +690,8 @@ static noinline int drop_one_dir_item(struct btrfs_trans_handle *trans,
+ 	kfree(name);
+ 
+ 	iput(inode);
++
++	btrfs_run_delayed_items(trans, root);
+ 	return ret;
+ }
+ 
+@@ -895,6 +897,7 @@ again:
+ 				ret = btrfs_unlink_inode(trans, root, dir,
+ 							 inode, victim_name,
+ 							 victim_name_len);
++				btrfs_run_delayed_items(trans, root);
+ 			}
+ 			kfree(victim_name);
+ 			ptr = (unsigned long)(victim_ref + 1) + victim_name_len;
+@@ -1475,6 +1478,9 @@ again:
+ 			ret = btrfs_unlink_inode(trans, root, dir, inode,
+ 						 name, name_len);
+ 			BUG_ON(ret);
++
++			btrfs_run_delayed_items(trans, root);
++
+ 			kfree(name);
+ 			iput(inode);
+ 
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index e0b56d7..402fa0f 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -1585,24 +1585,26 @@ cifs_parse_mount_options(const char *mountdata, const char *devname,
+ 			 * If yes, we have encountered a double deliminator
+ 			 * reset the NULL character to the deliminator
+ 			 */
+-			if (tmp_end < end && tmp_end[1] == delim)
++			if (tmp_end < end && tmp_end[1] == delim) {
+ 				tmp_end[0] = delim;
+ 
+-			/* Keep iterating until we get to a single deliminator
+-			 * OR the end
+-			 */
+-			while ((tmp_end = strchr(tmp_end, delim)) != NULL &&
+-			       (tmp_end[1] == delim)) {
+-				tmp_end = (char *) &tmp_end[2];
+-			}
++				/* Keep iterating until we get to a single
++				 * deliminator OR the end
++				 */
++				while ((tmp_end = strchr(tmp_end, delim))
++					!= NULL && (tmp_end[1] == delim)) {
++						tmp_end = (char *) &tmp_end[2];
++				}
+ 
+-			/* Reset var options to point to next element */
+-			if (tmp_end) {
+-				tmp_end[0] = '\0';
+-				options = (char *) &tmp_end[1];
+-			} else
+-				/* Reached the end of the mount option string */
+-				options = end;
++				/* Reset var options to point to next element */
++				if (tmp_end) {
++					tmp_end[0] = '\0';
++					options = (char *) &tmp_end[1];
++				} else
++					/* Reached the end of the mount option
++					 * string */
++					options = end;
++			}
+ 
+ 			/* Now build new password string */
+ 			temp_len = strlen(value);
+@@ -3396,18 +3398,15 @@ cifs_negotiate_rsize(struct cifs_tcon *tcon, struct smb_vol *pvolume_info)
+ 	 * MS-CIFS indicates that servers are only limited by the client's
+ 	 * bufsize for reads, testing against win98se shows that it throws
+ 	 * INVALID_PARAMETER errors if you try to request too large a read.
++	 * OS/2 just sends back short reads.
+ 	 *
+-	 * If the server advertises a MaxBufferSize of less than one page,
+-	 * assume that it also can't satisfy reads larger than that either.
+-	 *
+-	 * FIXME: Is there a better heuristic for this?
++	 * If the server doesn't advertise CAP_LARGE_READ_X, then assume that
++	 * it can't handle a read request larger than its MaxBufferSize either.
+ 	 */
+ 	if (tcon->unix_ext && (unix_cap & CIFS_UNIX_LARGE_READ_CAP))
+ 		defsize = CIFS_DEFAULT_IOSIZE;
+ 	else if (server->capabilities & CAP_LARGE_READ_X)
+ 		defsize = CIFS_DEFAULT_NON_POSIX_RSIZE;
+-	else if (server->maxBuf >= PAGE_CACHE_SIZE)
+-		defsize = CIFSMaxBufSize;
+ 	else
+ 		defsize = server->maxBuf - sizeof(READ_RSP);
+ 
+diff --git a/fs/ecryptfs/kthread.c b/fs/ecryptfs/kthread.c
+index 69f994a..0dbe58a 100644
+--- a/fs/ecryptfs/kthread.c
++++ b/fs/ecryptfs/kthread.c
+@@ -149,7 +149,7 @@ int ecryptfs_privileged_open(struct file **lower_file,
+ 	(*lower_file) = dentry_open(lower_dentry, lower_mnt, flags, cred);
+ 	if (!IS_ERR(*lower_file))
+ 		goto out;
+-	if (flags & O_RDONLY) {
++	if ((flags & O_ACCMODE) == O_RDONLY) {
+ 		rc = PTR_ERR((*lower_file));
+ 		goto out;
+ 	}
+diff --git a/fs/ecryptfs/miscdev.c b/fs/ecryptfs/miscdev.c
+index 3a06f40..c0038f6 100644
+--- a/fs/ecryptfs/miscdev.c
++++ b/fs/ecryptfs/miscdev.c
+@@ -49,7 +49,10 @@ ecryptfs_miscdev_poll(struct file *file, poll_table *pt)
+ 	mutex_lock(&ecryptfs_daemon_hash_mux);
+ 	/* TODO: Just use file->private_data? */
+ 	rc = ecryptfs_find_daemon_by_euid(&daemon, euid, current_user_ns());
+-	BUG_ON(rc || !daemon);
++	if (rc || !daemon) {
++		mutex_unlock(&ecryptfs_daemon_hash_mux);
++		return -EINVAL;
++	}
+ 	mutex_lock(&daemon->mux);
+ 	mutex_unlock(&ecryptfs_daemon_hash_mux);
+ 	if (daemon->flags & ECRYPTFS_DAEMON_ZOMBIE) {
+@@ -122,6 +125,7 @@ ecryptfs_miscdev_open(struct inode *inode, struct file *file)
+ 		goto out_unlock_daemon;
+ 	}
+ 	daemon->flags |= ECRYPTFS_DAEMON_MISCDEV_OPEN;
++	file->private_data = daemon;
+ 	atomic_inc(&ecryptfs_num_miscdev_opens);
+ out_unlock_daemon:
+ 	mutex_unlock(&daemon->mux);
+@@ -152,9 +156,9 @@ ecryptfs_miscdev_release(struct inode *inode, struct file *file)
+ 
+ 	mutex_lock(&ecryptfs_daemon_hash_mux);
+ 	rc = ecryptfs_find_daemon_by_euid(&daemon, euid, current_user_ns());
+-	BUG_ON(rc || !daemon);
++	if (rc || !daemon)
++		daemon = file->private_data;
+ 	mutex_lock(&daemon->mux);
+-	BUG_ON(daemon->pid != task_pid(current));
+ 	BUG_ON(!(daemon->flags & ECRYPTFS_DAEMON_MISCDEV_OPEN));
+ 	daemon->flags &= ~ECRYPTFS_DAEMON_MISCDEV_OPEN;
+ 	atomic_dec(&ecryptfs_num_miscdev_opens);
+@@ -191,31 +195,32 @@ int ecryptfs_send_miscdev(char *data, size_t data_size,
+ 			  struct ecryptfs_msg_ctx *msg_ctx, u8 msg_type,
+ 			  u16 msg_flags, struct ecryptfs_daemon *daemon)
+ {
+-	int rc = 0;
++	struct ecryptfs_message *msg;
+ 
+-	mutex_lock(&msg_ctx->mux);
+-	msg_ctx->msg = kmalloc((sizeof(*msg_ctx->msg) + data_size),
+-			       GFP_KERNEL);
+-	if (!msg_ctx->msg) {
+-		rc = -ENOMEM;
++	msg = kmalloc((sizeof(*msg) + data_size), GFP_KERNEL);
++	if (!msg) {
+ 		printk(KERN_ERR "%s: Out of memory whilst attempting "
+ 		       "to kmalloc(%zd, GFP_KERNEL)\n", __func__,
+-		       (sizeof(*msg_ctx->msg) + data_size));
+-		goto out_unlock;
++		       (sizeof(*msg) + data_size));
++		return -ENOMEM;
+ 	}
++
++	mutex_lock(&msg_ctx->mux);
++	msg_ctx->msg = msg;
+ 	msg_ctx->msg->index = msg_ctx->index;
+ 	msg_ctx->msg->data_len = data_size;
+ 	msg_ctx->type = msg_type;
+ 	memcpy(msg_ctx->msg->data, data, data_size);
+ 	msg_ctx->msg_size = (sizeof(*msg_ctx->msg) + data_size);
+-	mutex_lock(&daemon->mux);
+ 	list_add_tail(&msg_ctx->daemon_out_list, &daemon->msg_ctx_out_queue);
++	mutex_unlock(&msg_ctx->mux);
++
++	mutex_lock(&daemon->mux);
+ 	daemon->num_queued_msg_ctx++;
+ 	wake_up_interruptible(&daemon->wait);
+ 	mutex_unlock(&daemon->mux);
+-out_unlock:
+-	mutex_unlock(&msg_ctx->mux);
+-	return rc;
++
++	return 0;
+ }
+ 
+ /*
+@@ -269,8 +274,16 @@ ecryptfs_miscdev_read(struct file *file, char __user *buf, size_t count,
+ 	mutex_lock(&ecryptfs_daemon_hash_mux);
+ 	/* TODO: Just use file->private_data? */
+ 	rc = ecryptfs_find_daemon_by_euid(&daemon, euid, current_user_ns());
+-	BUG_ON(rc || !daemon);
++	if (rc || !daemon) {
++		mutex_unlock(&ecryptfs_daemon_hash_mux);
++		return -EINVAL;
++	}
+ 	mutex_lock(&daemon->mux);
++	if (task_pid(current) != daemon->pid) {
++		mutex_unlock(&daemon->mux);
++		mutex_unlock(&ecryptfs_daemon_hash_mux);
++		return -EPERM;
++	}
+ 	if (daemon->flags & ECRYPTFS_DAEMON_ZOMBIE) {
+ 		rc = 0;
+ 		mutex_unlock(&ecryptfs_daemon_hash_mux);
+@@ -307,9 +320,6 @@ check_list:
+ 		 * message from the queue; try again */
+ 		goto check_list;
+ 	}
+-	BUG_ON(euid != daemon->euid);
+-	BUG_ON(current_user_ns() != daemon->user_ns);
+-	BUG_ON(task_pid(current) != daemon->pid);
+ 	msg_ctx = list_first_entry(&daemon->msg_ctx_out_queue,
+ 				   struct ecryptfs_msg_ctx, daemon_out_list);
+ 	BUG_ON(!msg_ctx);
+diff --git a/fs/exec.c b/fs/exec.c
+index b1fd202..29e5f84 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -823,10 +823,10 @@ static int exec_mmap(struct mm_struct *mm)
+ 	/* Notify parent that we're no longer interested in the old VM */
+ 	tsk = current;
+ 	old_mm = current->mm;
+-	sync_mm_rss(old_mm);
+ 	mm_release(tsk, old_mm);
+ 
+ 	if (old_mm) {
++		sync_mm_rss(old_mm);
+ 		/*
+ 		 * Make sure that if there is a core dump in progress
+ 		 * for the old mm, we get out and die instead of going
+diff --git a/fs/lockd/clntlock.c b/fs/lockd/clntlock.c
+index ba1dc2e..ca0a080 100644
+--- a/fs/lockd/clntlock.c
++++ b/fs/lockd/clntlock.c
+@@ -56,7 +56,7 @@ struct nlm_host *nlmclnt_init(const struct nlmclnt_initdata *nlm_init)
+ 	u32 nlm_version = (nlm_init->nfs_version == 2) ? 1 : 4;
+ 	int status;
+ 
+-	status = lockd_up();
++	status = lockd_up(nlm_init->net);
+ 	if (status < 0)
+ 		return ERR_PTR(status);
+ 
+@@ -65,7 +65,7 @@ struct nlm_host *nlmclnt_init(const struct nlmclnt_initdata *nlm_init)
+ 				   nlm_init->hostname, nlm_init->noresvport,
+ 				   nlm_init->net);
+ 	if (host == NULL) {
+-		lockd_down();
++		lockd_down(nlm_init->net);
+ 		return ERR_PTR(-ENOLCK);
+ 	}
+ 
+@@ -80,8 +80,10 @@ EXPORT_SYMBOL_GPL(nlmclnt_init);
+  */
+ void nlmclnt_done(struct nlm_host *host)
+ {
++	struct net *net = host->net;
++
+ 	nlmclnt_release_host(host);
+-	lockd_down();
++	lockd_down(net);
+ }
+ EXPORT_SYMBOL_GPL(nlmclnt_done);
+ 
+@@ -220,11 +222,12 @@ reclaimer(void *ptr)
+ 	struct nlm_wait	  *block;
+ 	struct file_lock *fl, *next;
+ 	u32 nsmstate;
++	struct net *net = host->net;
+ 
+ 	allow_signal(SIGKILL);
+ 
+ 	down_write(&host->h_rwsem);
+-	lockd_up();	/* note: this cannot fail as lockd is already running */
++	lockd_up(net);	/* note: this cannot fail as lockd is already running */
+ 
+ 	dprintk("lockd: reclaiming locks for host %s\n", host->h_name);
+ 
+@@ -275,6 +278,6 @@ restart:
+ 
+ 	/* Release host handle after use */
+ 	nlmclnt_release_host(host);
+-	lockd_down();
++	lockd_down(net);
+ 	return 0;
+ }
+diff --git a/fs/lockd/svc.c b/fs/lockd/svc.c
+index f49b9af..3250f28 100644
+--- a/fs/lockd/svc.c
++++ b/fs/lockd/svc.c
+@@ -257,7 +257,7 @@ static int lockd_up_net(struct net *net)
+ 	struct svc_serv *serv = nlmsvc_rqst->rq_server;
+ 	int error;
+ 
+-	if (ln->nlmsvc_users)
++	if (ln->nlmsvc_users++)
+ 		return 0;
+ 
+ 	error = svc_rpcb_setup(serv, net);
+@@ -272,6 +272,7 @@ static int lockd_up_net(struct net *net)
+ err_socks:
+ 	svc_rpcb_cleanup(serv, net);
+ err_rpcb:
++	ln->nlmsvc_users--;
+ 	return error;
+ }
+ 
+@@ -295,11 +296,11 @@ static void lockd_down_net(struct net *net)
+ /*
+  * Bring up the lockd process if it's not already up.
+  */
+-int lockd_up(void)
++int lockd_up(struct net *net)
+ {
+ 	struct svc_serv *serv;
+ 	int		error = 0;
+-	struct net *net = current->nsproxy->net_ns;
++	struct lockd_net *ln = net_generic(net, lockd_net_id);
+ 
+ 	mutex_lock(&nlmsvc_mutex);
+ 	/*
+@@ -325,9 +326,17 @@ int lockd_up(void)
+ 		goto out;
+ 	}
+ 
++	error = svc_bind(serv, net);
++	if (error < 0) {
++		printk(KERN_WARNING "lockd_up: bind service failed\n");
++		goto destroy_and_out;
++	}
++
++	ln->nlmsvc_users++;
++
+ 	error = make_socks(serv, net);
+ 	if (error < 0)
+-		goto destroy_and_out;
++		goto err_start;
+ 
+ 	/*
+ 	 * Create the kernel thread and wait for it to start.
+@@ -339,7 +348,7 @@ int lockd_up(void)
+ 		printk(KERN_WARNING
+ 			"lockd_up: svc_rqst allocation failed, error=%d\n",
+ 			error);
+-		goto destroy_and_out;
++		goto err_start;
+ 	}
+ 
+ 	svc_sock_update_bufs(serv);
+@@ -353,7 +362,7 @@ int lockd_up(void)
+ 		nlmsvc_rqst = NULL;
+ 		printk(KERN_WARNING
+ 			"lockd_up: kthread_run failed, error=%d\n", error);
+-		goto destroy_and_out;
++		goto err_start;
+ 	}
+ 
+ 	/*
+@@ -363,14 +372,14 @@ int lockd_up(void)
+ destroy_and_out:
+ 	svc_destroy(serv);
+ out:
+-	if (!error) {
+-		struct lockd_net *ln = net_generic(net, lockd_net_id);
+-
+-		ln->nlmsvc_users++;
++	if (!error)
+ 		nlmsvc_users++;
+-	}
+ 	mutex_unlock(&nlmsvc_mutex);
+ 	return error;
++
++err_start:
++	lockd_down_net(net);
++	goto destroy_and_out;
+ }
+ EXPORT_SYMBOL_GPL(lockd_up);
+ 
+@@ -378,14 +387,13 @@ EXPORT_SYMBOL_GPL(lockd_up);
+  * Decrement the user count and bring down lockd if we're the last.
+  */
+ void
+-lockd_down(void)
++lockd_down(struct net *net)
+ {
+ 	mutex_lock(&nlmsvc_mutex);
++	lockd_down_net(net);
+ 	if (nlmsvc_users) {
+-		if (--nlmsvc_users) {
+-			lockd_down_net(current->nsproxy->net_ns);
++		if (--nlmsvc_users)
+ 			goto out;
+-		}
+ 	} else {
+ 		printk(KERN_ERR "lockd_down: no users! task=%p\n",
+ 			nlmsvc_task);
+diff --git a/fs/nfs/callback.c b/fs/nfs/callback.c
+index eb95f50..38a44c6 100644
+--- a/fs/nfs/callback.c
++++ b/fs/nfs/callback.c
+@@ -106,7 +106,7 @@ nfs4_callback_up(struct svc_serv *serv, struct rpc_xprt *xprt)
+ {
+ 	int ret;
+ 
+-	ret = svc_create_xprt(serv, "tcp", xprt->xprt_net, PF_INET,
++	ret = svc_create_xprt(serv, "tcp", &init_net, PF_INET,
+ 				nfs_callback_set_tcpport, SVC_SOCK_ANONYMOUS);
+ 	if (ret <= 0)
+ 		goto out_err;
+@@ -114,7 +114,7 @@ nfs4_callback_up(struct svc_serv *serv, struct rpc_xprt *xprt)
+ 	dprintk("NFS: Callback listener port = %u (af %u)\n",
+ 			nfs_callback_tcpport, PF_INET);
+ 
+-	ret = svc_create_xprt(serv, "tcp", xprt->xprt_net, PF_INET6,
++	ret = svc_create_xprt(serv, "tcp", &init_net, PF_INET6,
+ 				nfs_callback_set_tcpport, SVC_SOCK_ANONYMOUS);
+ 	if (ret > 0) {
+ 		nfs_callback_tcpport6 = ret;
+@@ -183,7 +183,7 @@ nfs41_callback_up(struct svc_serv *serv, struct rpc_xprt *xprt)
+ 	 * fore channel connection.
+ 	 * Returns the input port (0) and sets the svc_serv bc_xprt on success
+ 	 */
+-	ret = svc_create_xprt(serv, "tcp-bc", xprt->xprt_net, PF_INET, 0,
++	ret = svc_create_xprt(serv, "tcp-bc", &init_net, PF_INET, 0,
+ 			      SVC_SOCK_ANONYMOUS);
+ 	if (ret < 0) {
+ 		rqstp = ERR_PTR(ret);
+@@ -253,6 +253,7 @@ int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt)
+ 	char svc_name[12];
+ 	int ret = 0;
+ 	int minorversion_setup;
++	struct net *net = &init_net;
+ 
+ 	mutex_lock(&nfs_callback_mutex);
+ 	if (cb_info->users++ || cb_info->task != NULL) {
+@@ -265,6 +266,12 @@ int nfs_callback_up(u32 minorversion, struct rpc_xprt *xprt)
+ 		goto out_err;
+ 	}
+ 
++	ret = svc_bind(serv, net);
++	if (ret < 0) {
++		printk(KERN_WARNING "NFS: bind callback service failed\n");
++		goto out_err;
++	}
++
+ 	minorversion_setup =  nfs_minorversion_callback_svc_setup(minorversion,
+ 					serv, xprt, &rqstp, &callback_svc);
+ 	if (!minorversion_setup) {
+@@ -306,6 +313,8 @@ out_err:
+ 	dprintk("NFS: Couldn't create callback socket or server thread; "
+ 		"err = %d\n", ret);
+ 	cb_info->users--;
++	if (serv)
++		svc_shutdown_net(serv, net);
+ 	goto out;
+ }
+ 
+@@ -320,6 +329,7 @@ void nfs_callback_down(int minorversion)
+ 	cb_info->users--;
+ 	if (cb_info->users == 0 && cb_info->task != NULL) {
+ 		kthread_stop(cb_info->task);
++		svc_shutdown_net(cb_info->serv, &init_net);
+ 		svc_exit_thread(cb_info->rqst);
+ 		cb_info->serv = NULL;
+ 		cb_info->rqst = NULL;
+diff --git a/fs/nfs/idmap.c b/fs/nfs/idmap.c
+index 3e8edbe..93aa3a4 100644
+--- a/fs/nfs/idmap.c
++++ b/fs/nfs/idmap.c
+@@ -57,6 +57,11 @@ unsigned int nfs_idmap_cache_timeout = 600;
+ static const struct cred *id_resolver_cache;
+ static struct key_type key_type_id_resolver_legacy;
+ 
++struct idmap {
++	struct rpc_pipe		*idmap_pipe;
++	struct key_construction	*idmap_key_cons;
++	struct mutex		idmap_mutex;
++};
+ 
+ /**
+  * nfs_fattr_init_names - initialise the nfs_fattr owner_name/group_name fields
+@@ -310,9 +315,11 @@ static ssize_t nfs_idmap_get_key(const char *name, size_t namelen,
+ 					    name, namelen, type, data,
+ 					    data_size, NULL);
+ 	if (ret < 0) {
++		mutex_lock(&idmap->idmap_mutex);
+ 		ret = nfs_idmap_request_key(&key_type_id_resolver_legacy,
+ 					    name, namelen, type, data,
+ 					    data_size, idmap);
++		mutex_unlock(&idmap->idmap_mutex);
+ 	}
+ 	return ret;
+ }
+@@ -354,11 +361,6 @@ static int nfs_idmap_lookup_id(const char *name, size_t namelen, const char *typ
+ /* idmap classic begins here */
+ module_param(nfs_idmap_cache_timeout, int, 0644);
+ 
+-struct idmap {
+-	struct rpc_pipe		*idmap_pipe;
+-	struct key_construction	*idmap_key_cons;
+-};
+-
+ enum {
+ 	Opt_find_uid, Opt_find_gid, Opt_find_user, Opt_find_group, Opt_find_err
+ };
+@@ -469,6 +471,7 @@ nfs_idmap_new(struct nfs_client *clp)
+ 		return error;
+ 	}
+ 	idmap->idmap_pipe = pipe;
++	mutex_init(&idmap->idmap_mutex);
+ 
+ 	clp->cl_idmap = idmap;
+ 	return 0;
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 2c53be6..3ab12eb 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -651,6 +651,7 @@ static ssize_t __write_ports_addfd(char *buf)
+ {
+ 	char *mesg = buf;
+ 	int fd, err;
++	struct net *net = &init_net;
+ 
+ 	err = get_int(&mesg, &fd);
+ 	if (err != 0 || fd < 0)
+@@ -662,6 +663,8 @@ static ssize_t __write_ports_addfd(char *buf)
+ 
+ 	err = svc_addsock(nfsd_serv, fd, buf, SIMPLE_TRANSACTION_LIMIT);
+ 	if (err < 0) {
++		if (nfsd_serv->sv_nrthreads == 1)
++			svc_shutdown_net(nfsd_serv, net);
+ 		svc_destroy(nfsd_serv);
+ 		return err;
+ 	}
+@@ -699,6 +702,7 @@ static ssize_t __write_ports_addxprt(char *buf)
+ 	char transport[16];
+ 	struct svc_xprt *xprt;
+ 	int port, err;
++	struct net *net = &init_net;
+ 
+ 	if (sscanf(buf, "%15s %4u", transport, &port) != 2)
+ 		return -EINVAL;
+@@ -710,12 +714,12 @@ static ssize_t __write_ports_addxprt(char *buf)
+ 	if (err != 0)
+ 		return err;
+ 
+-	err = svc_create_xprt(nfsd_serv, transport, &init_net,
++	err = svc_create_xprt(nfsd_serv, transport, net,
+ 				PF_INET, port, SVC_SOCK_ANONYMOUS);
+ 	if (err < 0)
+ 		goto out_err;
+ 
+-	err = svc_create_xprt(nfsd_serv, transport, &init_net,
++	err = svc_create_xprt(nfsd_serv, transport, net,
+ 				PF_INET6, port, SVC_SOCK_ANONYMOUS);
+ 	if (err < 0 && err != -EAFNOSUPPORT)
+ 		goto out_close;
+@@ -724,12 +728,14 @@ static ssize_t __write_ports_addxprt(char *buf)
+ 	nfsd_serv->sv_nrthreads--;
+ 	return 0;
+ out_close:
+-	xprt = svc_find_xprt(nfsd_serv, transport, &init_net, PF_INET, port);
++	xprt = svc_find_xprt(nfsd_serv, transport, net, PF_INET, port);
+ 	if (xprt != NULL) {
+ 		svc_close_xprt(xprt);
+ 		svc_xprt_put(xprt);
+ 	}
+ out_err:
++	if (nfsd_serv->sv_nrthreads == 1)
++		svc_shutdown_net(nfsd_serv, net);
+ 	svc_destroy(nfsd_serv);
+ 	return err;
+ }
+diff --git a/fs/nfsd/nfssvc.c b/fs/nfsd/nfssvc.c
+index 28dfad3..bcda12a 100644
+--- a/fs/nfsd/nfssvc.c
++++ b/fs/nfsd/nfssvc.c
+@@ -11,6 +11,7 @@
+ #include <linux/module.h>
+ #include <linux/fs_struct.h>
+ #include <linux/swap.h>
++#include <linux/nsproxy.h>
+ 
+ #include <linux/sunrpc/stats.h>
+ #include <linux/sunrpc/svcsock.h>
+@@ -220,7 +221,7 @@ static int nfsd_startup(unsigned short port, int nrservs)
+ 	ret = nfsd_init_socks(port);
+ 	if (ret)
+ 		goto out_racache;
+-	ret = lockd_up();
++	ret = lockd_up(&init_net);
+ 	if (ret)
+ 		goto out_racache;
+ 	ret = nfs4_state_start();
+@@ -229,7 +230,7 @@ static int nfsd_startup(unsigned short port, int nrservs)
+ 	nfsd_up = true;
+ 	return 0;
+ out_lockd:
+-	lockd_down();
++	lockd_down(&init_net);
+ out_racache:
+ 	nfsd_racache_shutdown();
+ 	return ret;
+@@ -246,7 +247,7 @@ static void nfsd_shutdown(void)
+ 	if (!nfsd_up)
+ 		return;
+ 	nfs4_state_shutdown();
+-	lockd_down();
++	lockd_down(&init_net);
+ 	nfsd_racache_shutdown();
+ 	nfsd_up = false;
+ }
+@@ -330,6 +331,8 @@ static int nfsd_get_default_max_blksize(void)
+ 
+ int nfsd_create_serv(void)
+ {
++	int error;
++
+ 	WARN_ON(!mutex_is_locked(&nfsd_mutex));
+ 	if (nfsd_serv) {
+ 		svc_get(nfsd_serv);
+@@ -343,6 +346,12 @@ int nfsd_create_serv(void)
+ 	if (nfsd_serv == NULL)
+ 		return -ENOMEM;
+ 
++	error = svc_bind(nfsd_serv, current->nsproxy->net_ns);
++	if (error < 0) {
++		svc_destroy(nfsd_serv);
++		return error;
++	}
++
+ 	set_max_drc();
+ 	do_gettimeofday(&nfssvc_boot);		/* record boot time */
+ 	return 0;
+@@ -373,6 +382,7 @@ int nfsd_set_nrthreads(int n, int *nthreads)
+ 	int i = 0;
+ 	int tot = 0;
+ 	int err = 0;
++	struct net *net = &init_net;
+ 
+ 	WARN_ON(!mutex_is_locked(&nfsd_mutex));
+ 
+@@ -417,6 +427,9 @@ int nfsd_set_nrthreads(int n, int *nthreads)
+ 		if (err)
+ 			break;
+ 	}
++
++	if (nfsd_serv->sv_nrthreads == 1)
++		svc_shutdown_net(nfsd_serv, net);
+ 	svc_destroy(nfsd_serv);
+ 
+ 	return err;
+@@ -432,6 +445,7 @@ nfsd_svc(unsigned short port, int nrservs)
+ {
+ 	int	error;
+ 	bool	nfsd_up_before;
++	struct net *net = &init_net;
+ 
+ 	mutex_lock(&nfsd_mutex);
+ 	dprintk("nfsd: creating service\n");
+@@ -464,6 +478,8 @@ out_shutdown:
+ 	if (error < 0 && !nfsd_up_before)
+ 		nfsd_shutdown();
+ out_destroy:
++	if (nfsd_serv->sv_nrthreads == 1)
++		svc_shutdown_net(nfsd_serv, net);
+ 	svc_destroy(nfsd_serv);		/* Release server */
+ out:
+ 	mutex_unlock(&nfsd_mutex);
+@@ -547,6 +563,9 @@ nfsd(void *vrqstp)
+ 	nfsdstats.th_cnt --;
+ 
+ out:
++	if (rqstp->rq_server->sv_nrthreads == 1)
++		svc_shutdown_net(rqstp->rq_server, &init_net);
++
+ 	/* Release the thread */
+ 	svc_exit_thread(rqstp);
+ 
+@@ -659,8 +678,12 @@ int nfsd_pool_stats_open(struct inode *inode, struct file *file)
+ int nfsd_pool_stats_release(struct inode *inode, struct file *file)
+ {
+ 	int ret = seq_release(inode, file);
++	struct net *net = &init_net;
++
+ 	mutex_lock(&nfsd_mutex);
+ 	/* this function really, really should have been called svc_put() */
++	if (nfsd_serv->sv_nrthreads == 1)
++		svc_shutdown_net(nfsd_serv, net);
+ 	svc_destroy(nfsd_serv);
+ 	mutex_unlock(&nfsd_mutex);
+ 	return ret;
+diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c
+index 08a07a2..57ceaf3 100644
+--- a/fs/nilfs2/gcinode.c
++++ b/fs/nilfs2/gcinode.c
+@@ -191,6 +191,8 @@ void nilfs_remove_all_gcinodes(struct the_nilfs *nilfs)
+ 	while (!list_empty(head)) {
+ 		ii = list_first_entry(head, struct nilfs_inode_info, i_dirty);
+ 		list_del_init(&ii->i_dirty);
++		truncate_inode_pages(&ii->vfs_inode.i_data, 0);
++		nilfs_btnode_cache_clear(&ii->i_btnode_cache);
+ 		iput(&ii->vfs_inode);
+ 	}
+ }
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 0e72ad6..88e11fb 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -2309,6 +2309,8 @@ nilfs_remove_written_gcinodes(struct the_nilfs *nilfs, struct list_head *head)
+ 		if (!test_bit(NILFS_I_UPDATED, &ii->i_state))
+ 			continue;
+ 		list_del_init(&ii->i_dirty);
++		truncate_inode_pages(&ii->vfs_inode.i_data, 0);
++		nilfs_btnode_cache_clear(&ii->i_btnode_cache);
+ 		iput(&ii->vfs_inode);
+ 	}
+ }
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index 061591a..7602783 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -1950,7 +1950,7 @@ static int __ocfs2_change_file_space(struct file *file, struct inode *inode,
+ 	if (ret < 0)
+ 		mlog_errno(ret);
+ 
+-	if (file->f_flags & O_SYNC)
++	if (file && (file->f_flags & O_SYNC))
+ 		handle->h_sync = 1;
+ 
+ 	ocfs2_commit_trans(osb, handle);
+@@ -2422,8 +2422,10 @@ out_dio:
+ 		unaligned_dio = 0;
+ 	}
+ 
+-	if (unaligned_dio)
++	if (unaligned_dio) {
++		ocfs2_iocb_clear_unaligned_aio(iocb);
+ 		atomic_dec(&OCFS2_I(inode)->ip_unaligned_aio);
++	}
+ 
+ out:
+ 	if (rw_level != -1)
+diff --git a/fs/open.c b/fs/open.c
+index 5720854..3f1108b 100644
+--- a/fs/open.c
++++ b/fs/open.c
+@@ -396,10 +396,10 @@ SYSCALL_DEFINE1(fchdir, unsigned int, fd)
+ {
+ 	struct file *file;
+ 	struct inode *inode;
+-	int error;
++	int error, fput_needed;
+ 
+ 	error = -EBADF;
+-	file = fget(fd);
++	file = fget_raw_light(fd, &fput_needed);
+ 	if (!file)
+ 		goto out;
+ 
+@@ -413,7 +413,7 @@ SYSCALL_DEFINE1(fchdir, unsigned int, fd)
+ 	if (!error)
+ 		set_fs_pwd(current->fs, &file->f_path);
+ out_putf:
+-	fput(file);
++	fput_light(file, fput_needed);
+ out:
+ 	return error;
+ }
+diff --git a/fs/ramfs/file-nommu.c b/fs/ramfs/file-nommu.c
+index fbb0b47..d5378d0 100644
+--- a/fs/ramfs/file-nommu.c
++++ b/fs/ramfs/file-nommu.c
+@@ -110,6 +110,7 @@ int ramfs_nommu_expand_for_mapping(struct inode *inode, size_t newsize)
+ 
+ 		/* prevent the page from being discarded on memory pressure */
+ 		SetPageDirty(page);
++		SetPageUptodate(page);
+ 
+ 		unlock_page(page);
+ 		put_page(page);
+diff --git a/fs/splice.c b/fs/splice.c
+index f847684..5cac690 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -273,13 +273,16 @@ void spd_release_page(struct splice_pipe_desc *spd, unsigned int i)
+  * Check if we need to grow the arrays holding pages and partial page
+  * descriptions.
+  */
+-int splice_grow_spd(struct pipe_inode_info *pipe, struct splice_pipe_desc *spd)
++int splice_grow_spd(const struct pipe_inode_info *pipe, struct splice_pipe_desc *spd)
+ {
+-	if (pipe->buffers <= PIPE_DEF_BUFFERS)
++	unsigned int buffers = ACCESS_ONCE(pipe->buffers);
++
++	spd->nr_pages_max = buffers;
++	if (buffers <= PIPE_DEF_BUFFERS)
+ 		return 0;
+ 
+-	spd->pages = kmalloc(pipe->buffers * sizeof(struct page *), GFP_KERNEL);
+-	spd->partial = kmalloc(pipe->buffers * sizeof(struct partial_page), GFP_KERNEL);
++	spd->pages = kmalloc(buffers * sizeof(struct page *), GFP_KERNEL);
++	spd->partial = kmalloc(buffers * sizeof(struct partial_page), GFP_KERNEL);
+ 
+ 	if (spd->pages && spd->partial)
+ 		return 0;
+@@ -289,10 +292,9 @@ int splice_grow_spd(struct pipe_inode_info *pipe, struct splice_pipe_desc *spd)
+ 	return -ENOMEM;
+ }
+ 
+-void splice_shrink_spd(struct pipe_inode_info *pipe,
+-		       struct splice_pipe_desc *spd)
++void splice_shrink_spd(struct splice_pipe_desc *spd)
+ {
+-	if (pipe->buffers <= PIPE_DEF_BUFFERS)
++	if (spd->nr_pages_max <= PIPE_DEF_BUFFERS)
+ 		return;
+ 
+ 	kfree(spd->pages);
+@@ -315,6 +317,7 @@ __generic_file_splice_read(struct file *in, loff_t *ppos,
+ 	struct splice_pipe_desc spd = {
+ 		.pages = pages,
+ 		.partial = partial,
++		.nr_pages_max = PIPE_DEF_BUFFERS,
+ 		.flags = flags,
+ 		.ops = &page_cache_pipe_buf_ops,
+ 		.spd_release = spd_release_page,
+@@ -326,7 +329,7 @@ __generic_file_splice_read(struct file *in, loff_t *ppos,
+ 	index = *ppos >> PAGE_CACHE_SHIFT;
+ 	loff = *ppos & ~PAGE_CACHE_MASK;
+ 	req_pages = (len + loff + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
+-	nr_pages = min(req_pages, pipe->buffers);
++	nr_pages = min(req_pages, spd.nr_pages_max);
+ 
+ 	/*
+ 	 * Lookup the (hopefully) full range of pages we need.
+@@ -497,7 +500,7 @@ fill_it:
+ 	if (spd.nr_pages)
+ 		error = splice_to_pipe(pipe, &spd);
+ 
+-	splice_shrink_spd(pipe, &spd);
++	splice_shrink_spd(&spd);
+ 	return error;
+ }
+ 
+@@ -598,6 +601,7 @@ ssize_t default_file_splice_read(struct file *in, loff_t *ppos,
+ 	struct splice_pipe_desc spd = {
+ 		.pages = pages,
+ 		.partial = partial,
++		.nr_pages_max = PIPE_DEF_BUFFERS,
+ 		.flags = flags,
+ 		.ops = &default_pipe_buf_ops,
+ 		.spd_release = spd_release_page,
+@@ -608,8 +612,8 @@ ssize_t default_file_splice_read(struct file *in, loff_t *ppos,
+ 
+ 	res = -ENOMEM;
+ 	vec = __vec;
+-	if (pipe->buffers > PIPE_DEF_BUFFERS) {
+-		vec = kmalloc(pipe->buffers * sizeof(struct iovec), GFP_KERNEL);
++	if (spd.nr_pages_max > PIPE_DEF_BUFFERS) {
++		vec = kmalloc(spd.nr_pages_max * sizeof(struct iovec), GFP_KERNEL);
+ 		if (!vec)
+ 			goto shrink_ret;
+ 	}
+@@ -617,7 +621,7 @@ ssize_t default_file_splice_read(struct file *in, loff_t *ppos,
+ 	offset = *ppos & ~PAGE_CACHE_MASK;
+ 	nr_pages = (len + offset + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
+ 
+-	for (i = 0; i < nr_pages && i < pipe->buffers && len; i++) {
++	for (i = 0; i < nr_pages && i < spd.nr_pages_max && len; i++) {
+ 		struct page *page;
+ 
+ 		page = alloc_page(GFP_USER);
+@@ -665,7 +669,7 @@ ssize_t default_file_splice_read(struct file *in, loff_t *ppos,
+ shrink_ret:
+ 	if (vec != __vec)
+ 		kfree(vec);
+-	splice_shrink_spd(pipe, &spd);
++	splice_shrink_spd(&spd);
+ 	return res;
+ 
+ err:
+@@ -1612,6 +1616,7 @@ static long vmsplice_to_pipe(struct file *file, const struct iovec __user *iov,
+ 	struct splice_pipe_desc spd = {
+ 		.pages = pages,
+ 		.partial = partial,
++		.nr_pages_max = PIPE_DEF_BUFFERS,
+ 		.flags = flags,
+ 		.ops = &user_page_pipe_buf_ops,
+ 		.spd_release = spd_release_page,
+@@ -1627,13 +1632,13 @@ static long vmsplice_to_pipe(struct file *file, const struct iovec __user *iov,
+ 
+ 	spd.nr_pages = get_iovec_page_array(iov, nr_segs, spd.pages,
+ 					    spd.partial, flags & SPLICE_F_GIFT,
+-					    pipe->buffers);
++					    spd.nr_pages_max);
+ 	if (spd.nr_pages <= 0)
+ 		ret = spd.nr_pages;
+ 	else
+ 		ret = splice_to_pipe(pipe, &spd);
+ 
+-	splice_shrink_spd(pipe, &spd);
++	splice_shrink_spd(&spd);
+ 	return ret;
+ }
+ 
+diff --git a/fs/udf/super.c b/fs/udf/super.c
+index ac8a348..8d86a87 100644
+--- a/fs/udf/super.c
++++ b/fs/udf/super.c
+@@ -56,6 +56,7 @@
+ #include <linux/seq_file.h>
+ #include <linux/bitmap.h>
+ #include <linux/crc-itu-t.h>
++#include <linux/log2.h>
+ #include <asm/byteorder.h>
+ 
+ #include "udf_sb.h"
+@@ -1215,16 +1216,65 @@ out_bh:
+ 	return ret;
+ }
+ 
++static int udf_load_sparable_map(struct super_block *sb,
++				 struct udf_part_map *map,
++				 struct sparablePartitionMap *spm)
++{
++	uint32_t loc;
++	uint16_t ident;
++	struct sparingTable *st;
++	struct udf_sparing_data *sdata = &map->s_type_specific.s_sparing;
++	int i;
++	struct buffer_head *bh;
++
++	map->s_partition_type = UDF_SPARABLE_MAP15;
++	sdata->s_packet_len = le16_to_cpu(spm->packetLength);
++	if (!is_power_of_2(sdata->s_packet_len)) {
++		udf_err(sb, "error loading logical volume descriptor: "
++			"Invalid packet length %u\n",
++			(unsigned)sdata->s_packet_len);
++		return -EIO;
++	}
++	if (spm->numSparingTables > 4) {
++		udf_err(sb, "error loading logical volume descriptor: "
++			"Too many sparing tables (%d)\n",
++			(int)spm->numSparingTables);
++		return -EIO;
++	}
++
++	for (i = 0; i < spm->numSparingTables; i++) {
++		loc = le32_to_cpu(spm->locSparingTable[i]);
++		bh = udf_read_tagged(sb, loc, loc, &ident);
++		if (!bh)
++			continue;
++
++		st = (struct sparingTable *)bh->b_data;
++		if (ident != 0 ||
++		    strncmp(st->sparingIdent.ident, UDF_ID_SPARING,
++			    strlen(UDF_ID_SPARING)) ||
++		    sizeof(*st) + le16_to_cpu(st->reallocationTableLen) >
++							sb->s_blocksize) {
++			brelse(bh);
++			continue;
++		}
++
++		sdata->s_spar_map[i] = bh;
++	}
++	map->s_partition_func = udf_get_pblock_spar15;
++	return 0;
++}
++
+ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
+ 			       struct kernel_lb_addr *fileset)
+ {
+ 	struct logicalVolDesc *lvd;
+-	int i, j, offset;
++	int i, offset;
+ 	uint8_t type;
+ 	struct udf_sb_info *sbi = UDF_SB(sb);
+ 	struct genericPartitionMap *gpm;
+ 	uint16_t ident;
+ 	struct buffer_head *bh;
++	unsigned int table_len;
+ 	int ret = 0;
+ 
+ 	bh = udf_read_tagged(sb, block, block, &ident);
+@@ -1232,15 +1282,20 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
+ 		return 1;
+ 	BUG_ON(ident != TAG_IDENT_LVD);
+ 	lvd = (struct logicalVolDesc *)bh->b_data;
+-
+-	i = udf_sb_alloc_partition_maps(sb, le32_to_cpu(lvd->numPartitionMaps));
+-	if (i != 0) {
+-		ret = i;
++	table_len = le32_to_cpu(lvd->mapTableLength);
++	if (sizeof(*lvd) + table_len > sb->s_blocksize) {
++		udf_err(sb, "error loading logical volume descriptor: "
++			"Partition table too long (%u > %lu)\n", table_len,
++			sb->s_blocksize - sizeof(*lvd));
+ 		goto out_bh;
+ 	}
+ 
++	ret = udf_sb_alloc_partition_maps(sb, le32_to_cpu(lvd->numPartitionMaps));
++	if (ret)
++		goto out_bh;
++
+ 	for (i = 0, offset = 0;
+-	     i < sbi->s_partitions && offset < le32_to_cpu(lvd->mapTableLength);
++	     i < sbi->s_partitions && offset < table_len;
+ 	     i++, offset += gpm->partitionMapLength) {
+ 		struct udf_part_map *map = &sbi->s_partmaps[i];
+ 		gpm = (struct genericPartitionMap *)
+@@ -1275,38 +1330,9 @@ static int udf_load_logicalvol(struct super_block *sb, sector_t block,
+ 			} else if (!strncmp(upm2->partIdent.ident,
+ 						UDF_ID_SPARABLE,
+ 						strlen(UDF_ID_SPARABLE))) {
+-				uint32_t loc;
+-				struct sparingTable *st;
+-				struct sparablePartitionMap *spm =
+-					(struct sparablePartitionMap *)gpm;
+-
+-				map->s_partition_type = UDF_SPARABLE_MAP15;
+-				map->s_type_specific.s_sparing.s_packet_len =
+-						le16_to_cpu(spm->packetLength);
+-				for (j = 0; j < spm->numSparingTables; j++) {
+-					struct buffer_head *bh2;
+-
+-					loc = le32_to_cpu(
+-						spm->locSparingTable[j]);
+-					bh2 = udf_read_tagged(sb, loc, loc,
+-							     &ident);
+-					map->s_type_specific.s_sparing.
+-							s_spar_map[j] = bh2;
+-
+-					if (bh2 == NULL)
+-						continue;
+-
+-					st = (struct sparingTable *)bh2->b_data;
+-					if (ident != 0 || strncmp(
+-						st->sparingIdent.ident,
+-						UDF_ID_SPARING,
+-						strlen(UDF_ID_SPARING))) {
+-						brelse(bh2);
+-						map->s_type_specific.s_sparing.
+-							s_spar_map[j] = NULL;
+-					}
+-				}
+-				map->s_partition_func = udf_get_pblock_spar15;
++				if (udf_load_sparable_map(sb, map,
++				    (struct sparablePartitionMap *)gpm) < 0)
++					goto out_bh;
+ 			} else if (!strncmp(upm2->partIdent.ident,
+ 						UDF_ID_METADATA,
+ 						strlen(UDF_ID_METADATA))) {
+diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
+index 125c54e..c7ec2cd 100644
+--- a/include/asm-generic/pgtable.h
++++ b/include/asm-generic/pgtable.h
+@@ -446,6 +446,18 @@ static inline int pmd_write(pmd_t pmd)
+ #endif /* __HAVE_ARCH_PMD_WRITE */
+ #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+ 
++#ifndef pmd_read_atomic
++static inline pmd_t pmd_read_atomic(pmd_t *pmdp)
++{
++	/*
++	 * Depend on compiler for an atomic pmd read. NOTE: this is
++	 * only going to work, if the pmdval_t isn't larger than
++	 * an unsigned long.
++	 */
++	return *pmdp;
++}
++#endif
++
+ /*
+  * This function is meant to be used by sites walking pagetables with
+  * the mmap_sem hold in read mode to protect against MADV_DONTNEED and
+@@ -459,14 +471,30 @@ static inline int pmd_write(pmd_t pmd)
+  * undefined so behaving like if the pmd was none is safe (because it
+  * can return none anyway). The compiler level barrier() is critically
+  * important to compute the two checks atomically on the same pmdval.
++ *
++ * For 32bit kernels with a 64bit large pmd_t this automatically takes
++ * care of reading the pmd atomically to avoid SMP race conditions
++ * against pmd_populate() when the mmap_sem is hold for reading by the
++ * caller (a special atomic read not done by "gcc" as in the generic
++ * version above, is also needed when THP is disabled because the page
++ * fault can populate the pmd from under us).
+  */
+ static inline int pmd_none_or_trans_huge_or_clear_bad(pmd_t *pmd)
+ {
+-	/* depend on compiler for an atomic pmd read */
+-	pmd_t pmdval = *pmd;
++	pmd_t pmdval = pmd_read_atomic(pmd);
+ 	/*
+ 	 * The barrier will stabilize the pmdval in a register or on
+ 	 * the stack so that it will stop changing under the code.
++	 *
++	 * When CONFIG_TRANSPARENT_HUGEPAGE=y on x86 32bit PAE,
++	 * pmd_read_atomic is allowed to return a not atomic pmdval
++	 * (for example pointing to an hugepage that has never been
++	 * mapped in the pmd). The below checks will only care about
++	 * the low part of the pmd with 32bit PAE x86 anyway, with the
++	 * exception of pmd_none(). So the important thing is that if
++	 * the low part of the pmd is found null, the high part will
++	 * be also null or the pmd_none() check below would be
++	 * confused.
+ 	 */
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
+ 	barrier();
+diff --git a/include/linux/aio.h b/include/linux/aio.h
+index 2314ad8..b1a520e 100644
+--- a/include/linux/aio.h
++++ b/include/linux/aio.h
+@@ -140,6 +140,7 @@ struct kiocb {
+ 		(x)->ki_dtor = NULL;			\
+ 		(x)->ki_obj.tsk = tsk;			\
+ 		(x)->ki_user_data = 0;                  \
++		(x)->private = NULL;			\
+ 	} while (0)
+ 
+ #define AIO_RING_MAGIC			0xa10a10a1
+diff --git a/include/linux/lockd/bind.h b/include/linux/lockd/bind.h
+index 11a966e..4d24d64 100644
+--- a/include/linux/lockd/bind.h
++++ b/include/linux/lockd/bind.h
+@@ -54,7 +54,7 @@ extern void	nlmclnt_done(struct nlm_host *host);
+ 
+ extern int	nlmclnt_proc(struct nlm_host *host, int cmd,
+ 					struct file_lock *fl);
+-extern int	lockd_up(void);
+-extern void	lockd_down(void);
++extern int	lockd_up(struct net *net);
++extern void	lockd_down(struct net *net);
+ 
+ #endif /* LINUX_LOCKD_BIND_H */
+diff --git a/include/linux/memblock.h b/include/linux/memblock.h
+index a6bb102..19dc455 100644
+--- a/include/linux/memblock.h
++++ b/include/linux/memblock.h
+@@ -50,9 +50,7 @@ phys_addr_t memblock_find_in_range_node(phys_addr_t start, phys_addr_t end,
+ 				phys_addr_t size, phys_addr_t align, int nid);
+ phys_addr_t memblock_find_in_range(phys_addr_t start, phys_addr_t end,
+ 				   phys_addr_t size, phys_addr_t align);
+-int memblock_free_reserved_regions(void);
+-int memblock_reserve_reserved_regions(void);
+-
++phys_addr_t get_allocated_memblock_reserved_regions_info(phys_addr_t *addr);
+ void memblock_allow_resize(void);
+ int memblock_add_node(phys_addr_t base, phys_addr_t size, int nid);
+ int memblock_add(phys_addr_t base, phys_addr_t size);
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 3cc3062..b35752f 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -56,8 +56,18 @@ struct page {
+ 		};
+ 
+ 		union {
++#if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
++	defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
+ 			/* Used for cmpxchg_double in slub */
+ 			unsigned long counters;
++#else
++			/*
++			 * Keep _count separate from slub cmpxchg_double data.
++			 * As the rest of the double word is protected by
++			 * slab_lock but _count is not.
++			 */
++			unsigned counters;
++#endif
+ 
+ 			struct {
+ 
+diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
+index dff7115..5f6806b 100644
+--- a/include/linux/mmzone.h
++++ b/include/linux/mmzone.h
+@@ -663,7 +663,7 @@ typedef struct pglist_data {
+ 					     range, including holes */
+ 	int node_id;
+ 	wait_queue_head_t kswapd_wait;
+-	struct task_struct *kswapd;
++	struct task_struct *kswapd;	/* Protected by lock_memory_hotplug() */
+ 	int kswapd_max_order;
+ 	enum zone_type classzone_idx;
+ } pg_data_t;
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 8b2921a..e444f5b 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -176,8 +176,6 @@ enum pci_dev_flags {
+ 	PCI_DEV_FLAGS_NO_D3 = (__force pci_dev_flags_t) 2,
+ 	/* Provide indication device is assigned by a Virtual Machine Manager */
+ 	PCI_DEV_FLAGS_ASSIGNED = (__force pci_dev_flags_t) 4,
+-	/* Device causes system crash if in D3 during S3 sleep */
+-	PCI_DEV_FLAGS_NO_D3_DURING_SLEEP = (__force pci_dev_flags_t) 8,
+ };
+ 
+ enum pci_irq_reroute_variant {
+diff --git a/include/linux/rpmsg.h b/include/linux/rpmsg.h
+index a8e50e4..82a6739 100644
+--- a/include/linux/rpmsg.h
++++ b/include/linux/rpmsg.h
+@@ -38,6 +38,8 @@
+ #include <linux/types.h>
+ #include <linux/device.h>
+ #include <linux/mod_devicetable.h>
++#include <linux/kref.h>
++#include <linux/mutex.h>
+ 
+ /* The feature bitmap for virtio rpmsg */
+ #define VIRTIO_RPMSG_F_NS	0 /* RP supports name service notifications */
+@@ -120,7 +122,9 @@ typedef void (*rpmsg_rx_cb_t)(struct rpmsg_channel *, void *, int, void *, u32);
+ /**
+  * struct rpmsg_endpoint - binds a local rpmsg address to its user
+  * @rpdev: rpmsg channel device
++ * @refcount: when this drops to zero, the ept is deallocated
+  * @cb: rx callback handler
++ * @cb_lock: must be taken before accessing/changing @cb
+  * @addr: local rpmsg address
+  * @priv: private data for the driver's use
+  *
+@@ -140,7 +144,9 @@ typedef void (*rpmsg_rx_cb_t)(struct rpmsg_channel *, void *, int, void *, u32);
+  */
+ struct rpmsg_endpoint {
+ 	struct rpmsg_channel *rpdev;
++	struct kref refcount;
+ 	rpmsg_rx_cb_t cb;
++	struct mutex cb_lock;
+ 	u32 addr;
+ 	void *priv;
+ };
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index c168907..c1bae8d 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -225,14 +225,11 @@ enum {
+ 	/* device driver is going to provide hardware time stamp */
+ 	SKBTX_IN_PROGRESS = 1 << 2,
+ 
+-	/* ensure the originating sk reference is available on driver level */
+-	SKBTX_DRV_NEEDS_SK_REF = 1 << 3,
+-
+ 	/* device driver supports TX zero-copy buffers */
+-	SKBTX_DEV_ZEROCOPY = 1 << 4,
++	SKBTX_DEV_ZEROCOPY = 1 << 3,
+ 
+ 	/* generate wifi status information (where possible) */
+-	SKBTX_WIFI_STATUS = 1 << 5,
++	SKBTX_WIFI_STATUS = 1 << 4,
+ };
+ 
+ /*
+diff --git a/include/linux/splice.h b/include/linux/splice.h
+index 26e5b61..09a545a 100644
+--- a/include/linux/splice.h
++++ b/include/linux/splice.h
+@@ -51,7 +51,8 @@ struct partial_page {
+ struct splice_pipe_desc {
+ 	struct page **pages;		/* page map */
+ 	struct partial_page *partial;	/* pages[] may not be contig */
+-	int nr_pages;			/* number of pages in map */
++	int nr_pages;			/* number of populated pages in map */
++	unsigned int nr_pages_max;	/* pages[] & partial[] arrays size */
+ 	unsigned int flags;		/* splice flags */
+ 	const struct pipe_buf_operations *ops;/* ops associated with output pipe */
+ 	void (*spd_release)(struct splice_pipe_desc *, unsigned int);
+@@ -85,9 +86,8 @@ extern ssize_t splice_direct_to_actor(struct file *, struct splice_desc *,
+ /*
+  * for dynamic pipe sizing
+  */
+-extern int splice_grow_spd(struct pipe_inode_info *, struct splice_pipe_desc *);
+-extern void splice_shrink_spd(struct pipe_inode_info *,
+-				struct splice_pipe_desc *);
++extern int splice_grow_spd(const struct pipe_inode_info *, struct splice_pipe_desc *);
++extern void splice_shrink_spd(struct splice_pipe_desc *);
+ extern void spd_release_page(struct splice_pipe_desc *, unsigned int);
+ 
+ extern const struct pipe_buf_operations page_cache_pipe_buf_ops;
+diff --git a/include/linux/sunrpc/svc.h b/include/linux/sunrpc/svc.h
+index 51b29ac..2b43e02 100644
+--- a/include/linux/sunrpc/svc.h
++++ b/include/linux/sunrpc/svc.h
+@@ -416,6 +416,7 @@ struct svc_procedure {
+  */
+ int svc_rpcb_setup(struct svc_serv *serv, struct net *net);
+ void svc_rpcb_cleanup(struct svc_serv *serv, struct net *net);
++int svc_bind(struct svc_serv *serv, struct net *net);
+ struct svc_serv *svc_create(struct svc_program *, unsigned int,
+ 			    void (*shutdown)(struct svc_serv *, struct net *net));
+ struct svc_rqst *svc_prepare_thread(struct svc_serv *serv,
+diff --git a/include/net/cipso_ipv4.h b/include/net/cipso_ipv4.h
+index 9808877..a7a683e 100644
+--- a/include/net/cipso_ipv4.h
++++ b/include/net/cipso_ipv4.h
+@@ -42,6 +42,7 @@
+ #include <net/netlabel.h>
+ #include <net/request_sock.h>
+ #include <linux/atomic.h>
++#include <asm/unaligned.h>
+ 
+ /* known doi values */
+ #define CIPSO_V4_DOI_UNKNOWN          0x00000000
+@@ -285,7 +286,33 @@ static inline int cipso_v4_skbuff_getattr(const struct sk_buff *skb,
+ static inline int cipso_v4_validate(const struct sk_buff *skb,
+ 				    unsigned char **option)
+ {
+-	return -ENOSYS;
++	unsigned char *opt = *option;
++	unsigned char err_offset = 0;
++	u8 opt_len = opt[1];
++	u8 opt_iter;
++
++	if (opt_len < 8) {
++		err_offset = 1;
++		goto out;
++	}
++
++	if (get_unaligned_be32(&opt[2]) == 0) {
++		err_offset = 2;
++		goto out;
++	}
++
++	for (opt_iter = 6; opt_iter < opt_len;) {
++		if (opt[opt_iter + 1] > (opt_len - opt_iter)) {
++			err_offset = opt_iter + 1;
++			goto out;
++		}
++		opt_iter += opt[opt_iter + 1];
++	}
++
++out:
++	*option = opt + err_offset;
++	return err_offset;
++
+ }
+ #endif /* CONFIG_NETLABEL */
+ 
+diff --git a/include/net/inetpeer.h b/include/net/inetpeer.h
+index b94765e..2040bff 100644
+--- a/include/net/inetpeer.h
++++ b/include/net/inetpeer.h
+@@ -40,7 +40,10 @@ struct inet_peer {
+ 	u32			pmtu_orig;
+ 	u32			pmtu_learned;
+ 	struct inetpeer_addr_base redirect_learned;
+-	struct list_head	gc_list;
++	union {
++		struct list_head	gc_list;
++		struct rcu_head     gc_rcu;
++	};
+ 	/*
+ 	 * Once inet_peer is queued for deletion (refcnt == -1), following fields
+ 	 * are not available: rid, ip_id_count, tcp_ts, tcp_ts_stamp
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 55ce96b..9d7d54a 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -220,13 +220,16 @@ struct tcf_proto {
+ 
+ struct qdisc_skb_cb {
+ 	unsigned int		pkt_len;
+-	unsigned char		data[24];
++	u16			bond_queue_mapping;
++	u16			_pad;
++	unsigned char		data[20];
+ };
+ 
+ static inline void qdisc_cb_private_validate(const struct sk_buff *skb, int sz)
+ {
+ 	struct qdisc_skb_cb *qcb;
+-	BUILD_BUG_ON(sizeof(skb->cb) < sizeof(unsigned int) + sz);
++
++	BUILD_BUG_ON(sizeof(skb->cb) < offsetof(struct qdisc_skb_cb, data) + sz);
+ 	BUILD_BUG_ON(sizeof(qcb->data) < sz);
+ }
+ 
+diff --git a/include/scsi/libsas.h b/include/scsi/libsas.h
+index f4f1c96..10ce74f 100644
+--- a/include/scsi/libsas.h
++++ b/include/scsi/libsas.h
+@@ -163,6 +163,8 @@ enum ata_command_set {
+         ATAPI_COMMAND_SET = 1,
+ };
+ 
++#define ATA_RESP_FIS_SIZE 24
++
+ struct sata_device {
+         enum   ata_command_set command_set;
+         struct smp_resp        rps_resp; /* report_phy_sata_resp */
+@@ -171,7 +173,7 @@ struct sata_device {
+ 
+ 	struct ata_port *ap;
+ 	struct ata_host ata_host;
+-	struct ata_taskfile tf;
++	u8     fis[ATA_RESP_FIS_SIZE];
+ };
+ 
+ enum {
+@@ -537,7 +539,7 @@ enum exec_status {
+  */
+ struct ata_task_resp {
+ 	u16  frame_len;
+-	u8   ending_fis[24];	  /* dev to host or data-in */
++	u8   ending_fis[ATA_RESP_FIS_SIZE];	  /* dev to host or data-in */
+ };
+ 
+ #define SAS_STATUS_BUF_SIZE 96
+diff --git a/include/scsi/scsi_cmnd.h b/include/scsi/scsi_cmnd.h
+index 1e11985..ac06cc5 100644
+--- a/include/scsi/scsi_cmnd.h
++++ b/include/scsi/scsi_cmnd.h
+@@ -134,10 +134,16 @@ struct scsi_cmnd {
+ 
+ static inline struct scsi_driver *scsi_cmd_to_driver(struct scsi_cmnd *cmd)
+ {
++	struct scsi_driver **sdp;
++
+ 	if (!cmd->request->rq_disk)
+ 		return NULL;
+ 
+-	return *(struct scsi_driver **)cmd->request->rq_disk->private_data;
++	sdp = (struct scsi_driver **)cmd->request->rq_disk->private_data;
++	if (!sdp)
++		return NULL;
++
++	return *sdp;
+ }
+ 
+ extern struct scsi_cmnd *scsi_get_command(struct scsi_device *, gfp_t);
+diff --git a/include/scsi/scsi_device.h b/include/scsi/scsi_device.h
+index 6efb2e1..ba96988 100644
+--- a/include/scsi/scsi_device.h
++++ b/include/scsi/scsi_device.h
+@@ -151,6 +151,7 @@ struct scsi_device {
+ 					   SD_LAST_BUGGY_SECTORS */
+ 	unsigned no_read_disc_info:1;	/* Avoid READ_DISC_INFO cmds */
+ 	unsigned no_read_capacity_16:1; /* Avoid READ_CAPACITY_16 cmds */
++	unsigned try_rc_10_first:1;	/* Try READ_CAPACACITY_10 first */
+ 	unsigned is_visible:1;	/* is the device visible in sysfs */
+ 
+ 	DECLARE_BITMAP(supported_events, SDEV_EVT_MAXBITS); /* supported events */
+diff --git a/kernel/exit.c b/kernel/exit.c
+index d8bd3b42..9d81012 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -643,6 +643,7 @@ static void exit_mm(struct task_struct * tsk)
+ 	mm_release(tsk, mm);
+ 	if (!mm)
+ 		return;
++	sync_mm_rss(mm);
+ 	/*
+ 	 * Serialize with any possible pending coredump.
+ 	 * We must hold mmap_sem around checking core_state
+diff --git a/kernel/relay.c b/kernel/relay.c
+index ab56a17..e8cd202 100644
+--- a/kernel/relay.c
++++ b/kernel/relay.c
+@@ -1235,6 +1235,7 @@ static ssize_t subbuf_splice_actor(struct file *in,
+ 	struct splice_pipe_desc spd = {
+ 		.pages = pages,
+ 		.nr_pages = 0,
++		.nr_pages_max = PIPE_DEF_BUFFERS,
+ 		.partial = partial,
+ 		.flags = flags,
+ 		.ops = &relay_pipe_buf_ops,
+@@ -1302,8 +1303,8 @@ static ssize_t subbuf_splice_actor(struct file *in,
+                 ret += padding;
+ 
+ out:
+-	splice_shrink_spd(pipe, &spd);
+-        return ret;
++	splice_shrink_spd(&spd);
++	return ret;
+ }
+ 
+ static ssize_t relay_file_splice_read(struct file *in,
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 464a96f..55e4d4c 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -2648,10 +2648,12 @@ tracing_cpumask_write(struct file *filp, const char __user *ubuf,
+ 		if (cpumask_test_cpu(cpu, tracing_cpumask) &&
+ 				!cpumask_test_cpu(cpu, tracing_cpumask_new)) {
+ 			atomic_inc(&global_trace.data[cpu]->disabled);
++			ring_buffer_record_disable_cpu(global_trace.buffer, cpu);
+ 		}
+ 		if (!cpumask_test_cpu(cpu, tracing_cpumask) &&
+ 				cpumask_test_cpu(cpu, tracing_cpumask_new)) {
+ 			atomic_dec(&global_trace.data[cpu]->disabled);
++			ring_buffer_record_enable_cpu(global_trace.buffer, cpu);
+ 		}
+ 	}
+ 	arch_spin_unlock(&ftrace_max_lock);
+@@ -3563,6 +3565,7 @@ static ssize_t tracing_splice_read_pipe(struct file *filp,
+ 		.pages		= pages_def,
+ 		.partial	= partial_def,
+ 		.nr_pages	= 0, /* This gets updated below. */
++		.nr_pages_max	= PIPE_DEF_BUFFERS,
+ 		.flags		= flags,
+ 		.ops		= &tracing_pipe_buf_ops,
+ 		.spd_release	= tracing_spd_release_pipe,
+@@ -3634,7 +3637,7 @@ static ssize_t tracing_splice_read_pipe(struct file *filp,
+ 
+ 	ret = splice_to_pipe(pipe, &spd);
+ out:
+-	splice_shrink_spd(pipe, &spd);
++	splice_shrink_spd(&spd);
+ 	return ret;
+ 
+ out_err:
+@@ -4124,6 +4127,7 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
+ 	struct splice_pipe_desc spd = {
+ 		.pages		= pages_def,
+ 		.partial	= partial_def,
++		.nr_pages_max	= PIPE_DEF_BUFFERS,
+ 		.flags		= flags,
+ 		.ops		= &buffer_pipe_buf_ops,
+ 		.spd_release	= buffer_spd_release,
+@@ -4211,7 +4215,7 @@ tracing_buffers_splice_read(struct file *file, loff_t *ppos,
+ 	}
+ 
+ 	ret = splice_to_pipe(pipe, &spd);
+-	splice_shrink_spd(pipe, &spd);
++	splice_shrink_spd(&spd);
+ out:
+ 	return ret;
+ }
+diff --git a/lib/dynamic_queue_limits.c b/lib/dynamic_queue_limits.c
+index 6ab4587..0777c5a 100644
+--- a/lib/dynamic_queue_limits.c
++++ b/lib/dynamic_queue_limits.c
+@@ -10,23 +10,27 @@
+ #include <linux/jiffies.h>
+ #include <linux/dynamic_queue_limits.h>
+ 
+-#define POSDIFF(A, B) ((A) > (B) ? (A) - (B) : 0)
++#define POSDIFF(A, B) ((int)((A) - (B)) > 0 ? (A) - (B) : 0)
++#define AFTER_EQ(A, B) ((int)((A) - (B)) >= 0)
+ 
+ /* Records completed count and recalculates the queue limit */
+ void dql_completed(struct dql *dql, unsigned int count)
+ {
+ 	unsigned int inprogress, prev_inprogress, limit;
+-	unsigned int ovlimit, all_prev_completed, completed;
++	unsigned int ovlimit, completed, num_queued;
++	bool all_prev_completed;
++
++	num_queued = ACCESS_ONCE(dql->num_queued);
+ 
+ 	/* Can't complete more than what's in queue */
+-	BUG_ON(count > dql->num_queued - dql->num_completed);
++	BUG_ON(count > num_queued - dql->num_completed);
+ 
+ 	completed = dql->num_completed + count;
+ 	limit = dql->limit;
+-	ovlimit = POSDIFF(dql->num_queued - dql->num_completed, limit);
+-	inprogress = dql->num_queued - completed;
++	ovlimit = POSDIFF(num_queued - dql->num_completed, limit);
++	inprogress = num_queued - completed;
+ 	prev_inprogress = dql->prev_num_queued - dql->num_completed;
+-	all_prev_completed = POSDIFF(completed, dql->prev_num_queued);
++	all_prev_completed = AFTER_EQ(completed, dql->prev_num_queued);
+ 
+ 	if ((ovlimit && !inprogress) ||
+ 	    (dql->prev_ovlimit && all_prev_completed)) {
+@@ -104,7 +108,7 @@ void dql_completed(struct dql *dql, unsigned int count)
+ 	dql->prev_ovlimit = ovlimit;
+ 	dql->prev_last_obj_cnt = dql->last_obj_cnt;
+ 	dql->num_completed = completed;
+-	dql->prev_num_queued = dql->num_queued;
++	dql->prev_num_queued = num_queued;
+ }
+ EXPORT_SYMBOL(dql_completed);
+ 
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 74a8c82..459b0ab 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -594,8 +594,11 @@ static int compact_zone(struct zone *zone, struct compact_control *cc)
+ 		if (err) {
+ 			putback_lru_pages(&cc->migratepages);
+ 			cc->nr_migratepages = 0;
++			if (err == -ENOMEM) {
++				ret = COMPACT_PARTIAL;
++				goto out;
++			}
+ 		}
+-
+ 	}
+ 
+ out:
+diff --git a/mm/madvise.c b/mm/madvise.c
+index 1ccbba5..55f645c 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -13,6 +13,7 @@
+ #include <linux/hugetlb.h>
+ #include <linux/sched.h>
+ #include <linux/ksm.h>
++#include <linux/file.h>
+ 
+ /*
+  * Any behaviour which results in changes to the vma->vm_flags needs to
+@@ -203,14 +204,16 @@ static long madvise_remove(struct vm_area_struct *vma,
+ 	struct address_space *mapping;
+ 	loff_t offset, endoff;
+ 	int error;
++	struct file *f;
+ 
+ 	*prev = NULL;	/* tell sys_madvise we drop mmap_sem */
+ 
+ 	if (vma->vm_flags & (VM_LOCKED|VM_NONLINEAR|VM_HUGETLB))
+ 		return -EINVAL;
+ 
+-	if (!vma->vm_file || !vma->vm_file->f_mapping
+-		|| !vma->vm_file->f_mapping->host) {
++	f = vma->vm_file;
++
++	if (!f || !f->f_mapping || !f->f_mapping->host) {
+ 			return -EINVAL;
+ 	}
+ 
+@@ -224,9 +227,16 @@ static long madvise_remove(struct vm_area_struct *vma,
+ 	endoff = (loff_t)(end - vma->vm_start - 1)
+ 			+ ((loff_t)vma->vm_pgoff << PAGE_SHIFT);
+ 
+-	/* vmtruncate_range needs to take i_mutex */
++	/*
++	 * vmtruncate_range may need to take i_mutex.  We need to
++	 * explicitly grab a reference because the vma (and hence the
++	 * vma's reference to the file) can go away as soon as we drop
++	 * mmap_sem.
++	 */
++	get_file(f);
+ 	up_read(&current->mm->mmap_sem);
+ 	error = vmtruncate_range(mapping->host, offset, endoff);
++	fput(f);
+ 	down_read(&current->mm->mmap_sem);
+ 	return error;
+ }
+diff --git a/mm/memblock.c b/mm/memblock.c
+index a44eab3..280d3d7 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -37,6 +37,8 @@ struct memblock memblock __initdata_memblock = {
+ 
+ int memblock_debug __initdata_memblock;
+ static int memblock_can_resize __initdata_memblock;
++static int memblock_memory_in_slab __initdata_memblock = 0;
++static int memblock_reserved_in_slab __initdata_memblock = 0;
+ 
+ /* inline so we don't get a warning when pr_debug is compiled out */
+ static inline const char *memblock_type_name(struct memblock_type *type)
+@@ -141,30 +143,6 @@ phys_addr_t __init_memblock memblock_find_in_range(phys_addr_t start,
+ 					   MAX_NUMNODES);
+ }
+ 
+-/*
+- * Free memblock.reserved.regions
+- */
+-int __init_memblock memblock_free_reserved_regions(void)
+-{
+-	if (memblock.reserved.regions == memblock_reserved_init_regions)
+-		return 0;
+-
+-	return memblock_free(__pa(memblock.reserved.regions),
+-		 sizeof(struct memblock_region) * memblock.reserved.max);
+-}
+-
+-/*
+- * Reserve memblock.reserved.regions
+- */
+-int __init_memblock memblock_reserve_reserved_regions(void)
+-{
+-	if (memblock.reserved.regions == memblock_reserved_init_regions)
+-		return 0;
+-
+-	return memblock_reserve(__pa(memblock.reserved.regions),
+-		 sizeof(struct memblock_region) * memblock.reserved.max);
+-}
+-
+ static void __init_memblock memblock_remove_region(struct memblock_type *type, unsigned long r)
+ {
+ 	type->total_size -= type->regions[r].size;
+@@ -182,11 +160,42 @@ static void __init_memblock memblock_remove_region(struct memblock_type *type, u
+ 	}
+ }
+ 
+-static int __init_memblock memblock_double_array(struct memblock_type *type)
++phys_addr_t __init_memblock get_allocated_memblock_reserved_regions_info(
++					phys_addr_t *addr)
++{
++	if (memblock.reserved.regions == memblock_reserved_init_regions)
++		return 0;
++
++	*addr = __pa(memblock.reserved.regions);
++
++	return PAGE_ALIGN(sizeof(struct memblock_region) *
++			  memblock.reserved.max);
++}
++
++/**
++ * memblock_double_array - double the size of the memblock regions array
++ * @type: memblock type of the regions array being doubled
++ * @new_area_start: starting address of memory range to avoid overlap with
++ * @new_area_size: size of memory range to avoid overlap with
++ *
++ * Double the size of the @type regions array. If memblock is being used to
++ * allocate memory for a new reserved regions array and there is a previously
++ * allocated memory range [@new_area_start,@new_area_start+@new_area_size]
++ * waiting to be reserved, ensure the memory used by the new array does
++ * not overlap.
++ *
++ * RETURNS:
++ * 0 on success, -1 on failure.
++ */
++static int __init_memblock memblock_double_array(struct memblock_type *type,
++						phys_addr_t new_area_start,
++						phys_addr_t new_area_size)
+ {
+ 	struct memblock_region *new_array, *old_array;
++	phys_addr_t old_alloc_size, new_alloc_size;
+ 	phys_addr_t old_size, new_size, addr;
+ 	int use_slab = slab_is_available();
++	int *in_slab;
+ 
+ 	/* We don't allow resizing until we know about the reserved regions
+ 	 * of memory that aren't suitable for allocation
+@@ -197,6 +206,18 @@ static int __init_memblock memblock_double_array(struct memblock_type *type)
+ 	/* Calculate new doubled size */
+ 	old_size = type->max * sizeof(struct memblock_region);
+ 	new_size = old_size << 1;
++	/*
++	 * We need to allocated new one align to PAGE_SIZE,
++	 *   so we can free them completely later.
++	 */
++	old_alloc_size = PAGE_ALIGN(old_size);
++	new_alloc_size = PAGE_ALIGN(new_size);
++
++	/* Retrieve the slab flag */
++	if (type == &memblock.memory)
++		in_slab = &memblock_memory_in_slab;
++	else
++		in_slab = &memblock_reserved_in_slab;
+ 
+ 	/* Try to find some space for it.
+ 	 *
+@@ -212,14 +233,26 @@ static int __init_memblock memblock_double_array(struct memblock_type *type)
+ 	if (use_slab) {
+ 		new_array = kmalloc(new_size, GFP_KERNEL);
+ 		addr = new_array ? __pa(new_array) : 0;
+-	} else
+-		addr = memblock_find_in_range(0, MEMBLOCK_ALLOC_ACCESSIBLE, new_size, sizeof(phys_addr_t));
++	} else {
++		/* only exclude range when trying to double reserved.regions */
++		if (type != &memblock.reserved)
++			new_area_start = new_area_size = 0;
++
++		addr = memblock_find_in_range(new_area_start + new_area_size,
++						memblock.current_limit,
++						new_alloc_size, PAGE_SIZE);
++		if (!addr && new_area_size)
++			addr = memblock_find_in_range(0,
++					min(new_area_start, memblock.current_limit),
++					new_alloc_size, PAGE_SIZE);
++
++		new_array = addr ? __va(addr) : 0;
++	}
+ 	if (!addr) {
+ 		pr_err("memblock: Failed to double %s array from %ld to %ld entries !\n",
+ 		       memblock_type_name(type), type->max, type->max * 2);
+ 		return -1;
+ 	}
+-	new_array = __va(addr);
+ 
+ 	memblock_dbg("memblock: %s array is doubled to %ld at [%#010llx-%#010llx]",
+ 		 memblock_type_name(type), type->max * 2, (u64)addr, (u64)addr + new_size - 1);
+@@ -234,21 +267,23 @@ static int __init_memblock memblock_double_array(struct memblock_type *type)
+ 	type->regions = new_array;
+ 	type->max <<= 1;
+ 
+-	/* If we use SLAB that's it, we are done */
+-	if (use_slab)
+-		return 0;
+-
+-	/* Add the new reserved region now. Should not fail ! */
+-	BUG_ON(memblock_reserve(addr, new_size));
+-
+-	/* If the array wasn't our static init one, then free it. We only do
+-	 * that before SLAB is available as later on, we don't know whether
+-	 * to use kfree or free_bootmem_pages(). Shouldn't be a big deal
+-	 * anyways
++	/* Free old array. We needn't free it if the array is the
++	 * static one
+ 	 */
+-	if (old_array != memblock_memory_init_regions &&
+-	    old_array != memblock_reserved_init_regions)
+-		memblock_free(__pa(old_array), old_size);
++	if (*in_slab)
++		kfree(old_array);
++	else if (old_array != memblock_memory_init_regions &&
++		 old_array != memblock_reserved_init_regions)
++		memblock_free(__pa(old_array), old_alloc_size);
++
++	/* Reserve the new array if it was allocated from memblock.
++	 * Otherwise, there is nothing to do
++	 */
++	if (!use_slab)
++		BUG_ON(memblock_reserve(addr, new_alloc_size));
++
++	/* Update slab flag */
++	*in_slab = use_slab;
+ 
+ 	return 0;
+ }
+@@ -387,7 +422,7 @@ repeat:
+ 	 */
+ 	if (!insert) {
+ 		while (type->cnt + nr_new > type->max)
+-			if (memblock_double_array(type) < 0)
++			if (memblock_double_array(type, obase, size) < 0)
+ 				return -ENOMEM;
+ 		insert = true;
+ 		goto repeat;
+@@ -438,7 +473,7 @@ static int __init_memblock memblock_isolate_range(struct memblock_type *type,
+ 
+ 	/* we'll create at most two more regions */
+ 	while (type->cnt + 2 > type->max)
+-		if (memblock_double_array(type) < 0)
++		if (memblock_double_array(type, base, size) < 0)
+ 			return -ENOMEM;
+ 
+ 	for (i = 0; i < type->cnt; i++) {
+diff --git a/mm/nobootmem.c b/mm/nobootmem.c
+index 1983fb1..218e6f9 100644
+--- a/mm/nobootmem.c
++++ b/mm/nobootmem.c
+@@ -105,27 +105,35 @@ static void __init __free_pages_memory(unsigned long start, unsigned long end)
+ 		__free_pages_bootmem(pfn_to_page(i), 0);
+ }
+ 
++static unsigned long __init __free_memory_core(phys_addr_t start,
++				 phys_addr_t end)
++{
++	unsigned long start_pfn = PFN_UP(start);
++	unsigned long end_pfn = min_t(unsigned long,
++				      PFN_DOWN(end), max_low_pfn);
++
++	if (start_pfn > end_pfn)
++		return 0;
++
++	__free_pages_memory(start_pfn, end_pfn);
++
++	return end_pfn - start_pfn;
++}
++
+ unsigned long __init free_low_memory_core_early(int nodeid)
+ {
+ 	unsigned long count = 0;
+-	phys_addr_t start, end;
++	phys_addr_t start, end, size;
+ 	u64 i;
+ 
+-	/* free reserved array temporarily so that it's treated as free area */
+-	memblock_free_reserved_regions();
+-
+-	for_each_free_mem_range(i, MAX_NUMNODES, &start, &end, NULL) {
+-		unsigned long start_pfn = PFN_UP(start);
+-		unsigned long end_pfn = min_t(unsigned long,
+-					      PFN_DOWN(end), max_low_pfn);
+-		if (start_pfn < end_pfn) {
+-			__free_pages_memory(start_pfn, end_pfn);
+-			count += end_pfn - start_pfn;
+-		}
+-	}
++	for_each_free_mem_range(i, MAX_NUMNODES, &start, &end, NULL)
++		count += __free_memory_core(start, end);
++
++	/* free the range used for the reserved regions array, if we allocated it */
++	size = get_allocated_memblock_reserved_regions_info(&start);
++	if (size)
++		count += __free_memory_core(start, start + size);
+ 
+-	/* put region array back? */
+-	memblock_reserve_reserved_regions();
+ 	return count;
+ }
+ 
+diff --git a/mm/shmem.c b/mm/shmem.c
+index f99ff3e..9d65a02 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -1365,6 +1365,7 @@ static ssize_t shmem_file_splice_read(struct file *in, loff_t *ppos,
+ 	struct splice_pipe_desc spd = {
+ 		.pages = pages,
+ 		.partial = partial,
++		.nr_pages_max = PIPE_DEF_BUFFERS,
+ 		.flags = flags,
+ 		.ops = &page_cache_pipe_buf_ops,
+ 		.spd_release = spd_release_page,
+@@ -1453,7 +1454,7 @@ static ssize_t shmem_file_splice_read(struct file *in, loff_t *ppos,
+ 	if (spd.nr_pages)
+ 		error = splice_to_pipe(pipe, &spd);
+ 
+-	splice_shrink_spd(pipe, &spd);
++	splice_shrink_spd(&spd);
+ 
+ 	if (error > 0) {
+ 		*ppos += error;
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 0932dc2..4607cc6 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -3279,14 +3279,17 @@ int kswapd_run(int nid)
+ }
+ 
+ /*
+- * Called by memory hotplug when all memory in a node is offlined.
++ * Called by memory hotplug when all memory in a node is offlined.  Caller must
++ * hold lock_memory_hotplug().
+  */
+ void kswapd_stop(int nid)
+ {
+ 	struct task_struct *kswapd = NODE_DATA(nid)->kswapd;
+ 
+-	if (kswapd)
++	if (kswapd) {
+ 		kthread_stop(kswapd);
++		NODE_DATA(nid)->kswapd = NULL;
++	}
+ }
+ 
+ static int __init kswapd_init(void)
+diff --git a/net/batman-adv/routing.c b/net/batman-adv/routing.c
+index 7f8e158..8df3a1f 100644
+--- a/net/batman-adv/routing.c
++++ b/net/batman-adv/routing.c
+@@ -618,6 +618,8 @@ int recv_tt_query(struct sk_buff *skb, struct hard_iface *recv_if)
+ 			 * changes */
+ 			if (skb_linearize(skb) < 0)
+ 				goto out;
++			/* skb_linearize() possibly changed skb->data */
++			tt_query = (struct tt_query_packet *)skb->data;
+ 
+ 			tt_len = tt_query->tt_data * sizeof(struct tt_change);
+ 
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 1f86921..f014bf8 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -1803,10 +1803,10 @@ bool is_ap_isolated(struct bat_priv *bat_priv, uint8_t *src, uint8_t *dst)
+ {
+ 	struct tt_local_entry *tt_local_entry = NULL;
+ 	struct tt_global_entry *tt_global_entry = NULL;
+-	bool ret = true;
++	bool ret = false;
+ 
+ 	if (!atomic_read(&bat_priv->ap_isolation))
+-		return false;
++		goto out;
+ 
+ 	tt_local_entry = tt_local_hash_find(bat_priv, dst);
+ 	if (!tt_local_entry)
+@@ -1816,10 +1816,10 @@ bool is_ap_isolated(struct bat_priv *bat_priv, uint8_t *src, uint8_t *dst)
+ 	if (!tt_global_entry)
+ 		goto out;
+ 
+-	if (_is_ap_isolated(tt_local_entry, tt_global_entry))
++	if (!_is_ap_isolated(tt_local_entry, tt_global_entry))
+ 		goto out;
+ 
+-	ret = false;
++	ret = true;
+ 
+ out:
+ 	if (tt_global_entry)
+diff --git a/net/bridge/br_if.c b/net/bridge/br_if.c
+index 0a942fb..e1144e1 100644
+--- a/net/bridge/br_if.c
++++ b/net/bridge/br_if.c
+@@ -240,6 +240,7 @@ int br_add_bridge(struct net *net, const char *name)
+ 		return -ENOMEM;
+ 
+ 	dev_net_set(dev, net);
++	dev->rtnl_link_ops = &br_link_ops;
+ 
+ 	res = register_netdev(dev);
+ 	if (res)
+diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
+index a1daf82..cbf9ccd 100644
+--- a/net/bridge/br_netlink.c
++++ b/net/bridge/br_netlink.c
+@@ -211,7 +211,7 @@ static int br_validate(struct nlattr *tb[], struct nlattr *data[])
+ 	return 0;
+ }
+ 
+-static struct rtnl_link_ops br_link_ops __read_mostly = {
++struct rtnl_link_ops br_link_ops __read_mostly = {
+ 	.kind		= "bridge",
+ 	.priv_size	= sizeof(struct net_bridge),
+ 	.setup		= br_dev_setup,
+diff --git a/net/bridge/br_private.h b/net/bridge/br_private.h
+index e1d8822..51e8826 100644
+--- a/net/bridge/br_private.h
++++ b/net/bridge/br_private.h
+@@ -538,6 +538,7 @@ extern int (*br_fdb_test_addr_hook)(struct net_device *dev, unsigned char *addr)
+ #endif
+ 
+ /* br_netlink.c */
++extern struct rtnl_link_ops br_link_ops;
+ extern int br_netlink_init(void);
+ extern void br_netlink_fini(void);
+ extern void br_ifinfo_notify(int event, struct net_bridge_port *port);
+diff --git a/net/can/raw.c b/net/can/raw.c
+index cde1b4a..46cca3a 100644
+--- a/net/can/raw.c
++++ b/net/can/raw.c
+@@ -681,9 +681,6 @@ static int raw_sendmsg(struct kiocb *iocb, struct socket *sock,
+ 	if (err < 0)
+ 		goto free_skb;
+ 
+-	/* to be able to check the received tx sock reference in raw_rcv() */
+-	skb_shinfo(skb)->tx_flags |= SKBTX_DRV_NEEDS_SK_REF;
+-
+ 	skb->dev = dev;
+ 	skb->sk  = sk;
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 99e1d75..533c586 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2091,25 +2091,6 @@ static int dev_gso_segment(struct sk_buff *skb, netdev_features_t features)
+ 	return 0;
+ }
+ 
+-/*
+- * Try to orphan skb early, right before transmission by the device.
+- * We cannot orphan skb if tx timestamp is requested or the sk-reference
+- * is needed on driver level for other reasons, e.g. see net/can/raw.c
+- */
+-static inline void skb_orphan_try(struct sk_buff *skb)
+-{
+-	struct sock *sk = skb->sk;
+-
+-	if (sk && !skb_shinfo(skb)->tx_flags) {
+-		/* skb_tx_hash() wont be able to get sk.
+-		 * We copy sk_hash into skb->rxhash
+-		 */
+-		if (!skb->rxhash)
+-			skb->rxhash = sk->sk_hash;
+-		skb_orphan(skb);
+-	}
+-}
+-
+ static bool can_checksum_protocol(netdev_features_t features, __be16 protocol)
+ {
+ 	return ((features & NETIF_F_GEN_CSUM) ||
+@@ -2195,8 +2176,6 @@ int dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev,
+ 		if (!list_empty(&ptype_all))
+ 			dev_queue_xmit_nit(skb, dev);
+ 
+-		skb_orphan_try(skb);
+-
+ 		features = netif_skb_features(skb);
+ 
+ 		if (vlan_tx_tag_present(skb) &&
+@@ -2306,7 +2285,7 @@ u16 __skb_tx_hash(const struct net_device *dev, const struct sk_buff *skb,
+ 	if (skb->sk && skb->sk->sk_hash)
+ 		hash = skb->sk->sk_hash;
+ 	else
+-		hash = (__force u16) skb->protocol ^ skb->rxhash;
++		hash = (__force u16) skb->protocol;
+ 	hash = jhash_1word(hash, hashrnd);
+ 
+ 	return (u16) (((u64) hash * qcount) >> 32) + qoffset;
+diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
+index a7cad74..b856f87 100644
+--- a/net/core/drop_monitor.c
++++ b/net/core/drop_monitor.c
+@@ -33,9 +33,6 @@
+ #define TRACE_ON 1
+ #define TRACE_OFF 0
+ 
+-static void send_dm_alert(struct work_struct *unused);
+-
+-
+ /*
+  * Globals, our netlink socket pointer
+  * and the work handle that will send up
+@@ -45,11 +42,10 @@ static int trace_state = TRACE_OFF;
+ static DEFINE_MUTEX(trace_state_mutex);
+ 
+ struct per_cpu_dm_data {
+-	struct work_struct dm_alert_work;
+-	struct sk_buff __rcu *skb;
+-	atomic_t dm_hit_count;
+-	struct timer_list send_timer;
+-	int cpu;
++	spinlock_t		lock;
++	struct sk_buff		*skb;
++	struct work_struct	dm_alert_work;
++	struct timer_list	send_timer;
+ };
+ 
+ struct dm_hw_stat_delta {
+@@ -75,13 +71,13 @@ static int dm_delay = 1;
+ static unsigned long dm_hw_check_delta = 2*HZ;
+ static LIST_HEAD(hw_stats_list);
+ 
+-static void reset_per_cpu_data(struct per_cpu_dm_data *data)
++static struct sk_buff *reset_per_cpu_data(struct per_cpu_dm_data *data)
+ {
+ 	size_t al;
+ 	struct net_dm_alert_msg *msg;
+ 	struct nlattr *nla;
+ 	struct sk_buff *skb;
+-	struct sk_buff *oskb = rcu_dereference_protected(data->skb, 1);
++	unsigned long flags;
+ 
+ 	al = sizeof(struct net_dm_alert_msg);
+ 	al += dm_hit_limit * sizeof(struct net_dm_drop_point);
+@@ -96,65 +92,40 @@ static void reset_per_cpu_data(struct per_cpu_dm_data *data)
+ 				  sizeof(struct net_dm_alert_msg));
+ 		msg = nla_data(nla);
+ 		memset(msg, 0, al);
+-	} else
+-		schedule_work_on(data->cpu, &data->dm_alert_work);
+-
+-	/*
+-	 * Don't need to lock this, since we are guaranteed to only
+-	 * run this on a single cpu at a time.
+-	 * Note also that we only update data->skb if the old and new skb
+-	 * pointers don't match.  This ensures that we don't continually call
+-	 * synchornize_rcu if we repeatedly fail to alloc a new netlink message.
+-	 */
+-	if (skb != oskb) {
+-		rcu_assign_pointer(data->skb, skb);
+-
+-		synchronize_rcu();
+-
+-		atomic_set(&data->dm_hit_count, dm_hit_limit);
++	} else {
++		mod_timer(&data->send_timer, jiffies + HZ / 10);
+ 	}
+ 
++	spin_lock_irqsave(&data->lock, flags);
++	swap(data->skb, skb);
++	spin_unlock_irqrestore(&data->lock, flags);
++
++	return skb;
+ }
+ 
+-static void send_dm_alert(struct work_struct *unused)
++static void send_dm_alert(struct work_struct *work)
+ {
+ 	struct sk_buff *skb;
+-	struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
++	struct per_cpu_dm_data *data;
+ 
+-	WARN_ON_ONCE(data->cpu != smp_processor_id());
++	data = container_of(work, struct per_cpu_dm_data, dm_alert_work);
+ 
+-	/*
+-	 * Grab the skb we're about to send
+-	 */
+-	skb = rcu_dereference_protected(data->skb, 1);
++	skb = reset_per_cpu_data(data);
+ 
+-	/*
+-	 * Replace it with a new one
+-	 */
+-	reset_per_cpu_data(data);
+-
+-	/*
+-	 * Ship it!
+-	 */
+ 	if (skb)
+ 		genlmsg_multicast(skb, 0, NET_DM_GRP_ALERT, GFP_KERNEL);
+-
+-	put_cpu_var(dm_cpu_data);
+ }
+ 
+ /*
+  * This is the timer function to delay the sending of an alert
+  * in the event that more drops will arrive during the
+- * hysteresis period.  Note that it operates under the timer interrupt
+- * so we don't need to disable preemption here
++ * hysteresis period.
+  */
+-static void sched_send_work(unsigned long unused)
++static void sched_send_work(unsigned long _data)
+ {
+-	struct per_cpu_dm_data *data =  &get_cpu_var(dm_cpu_data);
++	struct per_cpu_dm_data *data = (struct per_cpu_dm_data *)_data;
+ 
+-	schedule_work_on(smp_processor_id(), &data->dm_alert_work);
+-
+-	put_cpu_var(dm_cpu_data);
++	schedule_work(&data->dm_alert_work);
+ }
+ 
+ static void trace_drop_common(struct sk_buff *skb, void *location)
+@@ -164,33 +135,28 @@ static void trace_drop_common(struct sk_buff *skb, void *location)
+ 	struct nlattr *nla;
+ 	int i;
+ 	struct sk_buff *dskb;
+-	struct per_cpu_dm_data *data = &get_cpu_var(dm_cpu_data);
+-
++	struct per_cpu_dm_data *data;
++	unsigned long flags;
+ 
+-	rcu_read_lock();
+-	dskb = rcu_dereference(data->skb);
++	local_irq_save(flags);
++	data = &__get_cpu_var(dm_cpu_data);
++	spin_lock(&data->lock);
++	dskb = data->skb;
+ 
+ 	if (!dskb)
+ 		goto out;
+ 
+-	if (!atomic_add_unless(&data->dm_hit_count, -1, 0)) {
+-		/*
+-		 * we're already at zero, discard this hit
+-		 */
+-		goto out;
+-	}
+-
+ 	nlh = (struct nlmsghdr *)dskb->data;
+ 	nla = genlmsg_data(nlmsg_data(nlh));
+ 	msg = nla_data(nla);
+ 	for (i = 0; i < msg->entries; i++) {
+ 		if (!memcmp(&location, msg->points[i].pc, sizeof(void *))) {
+ 			msg->points[i].count++;
+-			atomic_inc(&data->dm_hit_count);
+ 			goto out;
+ 		}
+ 	}
+-
++	if (msg->entries == dm_hit_limit)
++		goto out;
+ 	/*
+ 	 * We need to create a new entry
+ 	 */
+@@ -202,13 +168,11 @@ static void trace_drop_common(struct sk_buff *skb, void *location)
+ 
+ 	if (!timer_pending(&data->send_timer)) {
+ 		data->send_timer.expires = jiffies + dm_delay * HZ;
+-		add_timer_on(&data->send_timer, smp_processor_id());
++		add_timer(&data->send_timer);
+ 	}
+ 
+ out:
+-	rcu_read_unlock();
+-	put_cpu_var(dm_cpu_data);
+-	return;
++	spin_unlock_irqrestore(&data->lock, flags);
+ }
+ 
+ static void trace_kfree_skb_hit(void *ignore, struct sk_buff *skb, void *location)
+@@ -406,11 +370,11 @@ static int __init init_net_drop_monitor(void)
+ 
+ 	for_each_present_cpu(cpu) {
+ 		data = &per_cpu(dm_cpu_data, cpu);
+-		data->cpu = cpu;
+ 		INIT_WORK(&data->dm_alert_work, send_dm_alert);
+ 		init_timer(&data->send_timer);
+-		data->send_timer.data = cpu;
++		data->send_timer.data = (unsigned long)data;
+ 		data->send_timer.function = sched_send_work;
++		spin_lock_init(&data->lock);
+ 		reset_per_cpu_data(data);
+ 	}
+ 
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 0a68045..73b9035 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -2214,9 +2214,7 @@ static int neigh_dump_table(struct neigh_table *tbl, struct sk_buff *skb,
+ 	rcu_read_lock_bh();
+ 	nht = rcu_dereference_bh(tbl->nht);
+ 
+-	for (h = 0; h < (1 << nht->hash_shift); h++) {
+-		if (h < s_h)
+-			continue;
++	for (h = s_h; h < (1 << nht->hash_shift); h++) {
+ 		if (h > s_h)
+ 			s_idx = 0;
+ 		for (n = rcu_dereference_bh(nht->hash_buckets[h]), idx = 0;
+@@ -2255,9 +2253,7 @@ static int pneigh_dump_table(struct neigh_table *tbl, struct sk_buff *skb,
+ 
+ 	read_lock_bh(&tbl->lock);
+ 
+-	for (h = 0; h <= PNEIGH_HASHMASK; h++) {
+-		if (h < s_h)
+-			continue;
++	for (h = s_h; h <= PNEIGH_HASHMASK; h++) {
+ 		if (h > s_h)
+ 			s_idx = 0;
+ 		for (n = tbl->phash_buckets[h], idx = 0; n; n = n->next) {
+@@ -2292,7 +2288,7 @@ static int neigh_dump_info(struct sk_buff *skb, struct netlink_callback *cb)
+ 	struct neigh_table *tbl;
+ 	int t, family, s_t;
+ 	int proxy = 0;
+-	int err = 0;
++	int err;
+ 
+ 	read_lock(&neigh_tbl_lock);
+ 	family = ((struct rtgenmsg *) nlmsg_data(cb->nlh))->rtgen_family;
+@@ -2306,7 +2302,7 @@ static int neigh_dump_info(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 	s_t = cb->args[0];
+ 
+-	for (tbl = neigh_tables, t = 0; tbl && (err >= 0);
++	for (tbl = neigh_tables, t = 0; tbl;
+ 	     tbl = tbl->next, t++) {
+ 		if (t < s_t || (family && tbl->family != family))
+ 			continue;
+@@ -2317,6 +2313,8 @@ static int neigh_dump_info(struct sk_buff *skb, struct netlink_callback *cb)
+ 			err = pneigh_dump_table(tbl, skb, cb);
+ 		else
+ 			err = neigh_dump_table(tbl, skb, cb);
++		if (err < 0)
++			break;
+ 	}
+ 	read_unlock(&neigh_tbl_lock);
+ 
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index 3d84fb9..f9f40b9 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -362,22 +362,23 @@ EXPORT_SYMBOL(netpoll_send_skb_on_dev);
+ 
+ void netpoll_send_udp(struct netpoll *np, const char *msg, int len)
+ {
+-	int total_len, eth_len, ip_len, udp_len;
++	int total_len, ip_len, udp_len;
+ 	struct sk_buff *skb;
+ 	struct udphdr *udph;
+ 	struct iphdr *iph;
+ 	struct ethhdr *eth;
+ 
+ 	udp_len = len + sizeof(*udph);
+-	ip_len = eth_len = udp_len + sizeof(*iph);
+-	total_len = eth_len + ETH_HLEN + NET_IP_ALIGN;
++	ip_len = udp_len + sizeof(*iph);
++	total_len = ip_len + LL_RESERVED_SPACE(np->dev);
+ 
+-	skb = find_skb(np, total_len, total_len - len);
++	skb = find_skb(np, total_len + np->dev->needed_tailroom,
++		       total_len - len);
+ 	if (!skb)
+ 		return;
+ 
+ 	skb_copy_to_linear_data(skb, msg, len);
+-	skb->len += len;
++	skb_put(skb, len);
+ 
+ 	skb_push(skb, sizeof(*udph));
+ 	skb_reset_transport_header(skb);
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index e598400..e99aedd 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -1712,6 +1712,7 @@ int skb_splice_bits(struct sk_buff *skb, unsigned int offset,
+ 	struct splice_pipe_desc spd = {
+ 		.pages = pages,
+ 		.partial = partial,
++		.nr_pages_max = MAX_SKB_FRAGS,
+ 		.flags = flags,
+ 		.ops = &sock_pipe_buf_ops,
+ 		.spd_release = sock_spd_release,
+@@ -1758,7 +1759,7 @@ done:
+ 		lock_sock(sk);
+ 	}
+ 
+-	splice_shrink_spd(pipe, &spd);
++	splice_shrink_spd(&spd);
+ 	return ret;
+ }
+ 
+diff --git a/net/core/sock.c b/net/core/sock.c
+index b2e14c0..0f8402e 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1600,6 +1600,11 @@ struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len,
+ 	gfp_t gfp_mask;
+ 	long timeo;
+ 	int err;
++	int npages = (data_len + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
++
++	err = -EMSGSIZE;
++	if (npages > MAX_SKB_FRAGS)
++		goto failure;
+ 
+ 	gfp_mask = sk->sk_allocation;
+ 	if (gfp_mask & __GFP_WAIT)
+@@ -1618,14 +1623,12 @@ struct sk_buff *sock_alloc_send_pskb(struct sock *sk, unsigned long header_len,
+ 		if (atomic_read(&sk->sk_wmem_alloc) < sk->sk_sndbuf) {
+ 			skb = alloc_skb(header_len, gfp_mask);
+ 			if (skb) {
+-				int npages;
+ 				int i;
+ 
+ 				/* No pages, we're done... */
+ 				if (!data_len)
+ 					break;
+ 
+-				npages = (data_len + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
+ 				skb->truesize += data_len;
+ 				skb_shinfo(skb)->nr_frags = npages;
+ 				for (i = 0; i < npages; i++) {
+diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
+index d4d61b6..dfba343 100644
+--- a/net/ipv4/inetpeer.c
++++ b/net/ipv4/inetpeer.c
+@@ -560,6 +560,17 @@ bool inet_peer_xrlim_allow(struct inet_peer *peer, int timeout)
+ }
+ EXPORT_SYMBOL(inet_peer_xrlim_allow);
+ 
++static void inetpeer_inval_rcu(struct rcu_head *head)
++{
++	struct inet_peer *p = container_of(head, struct inet_peer, gc_rcu);
++
++	spin_lock_bh(&gc_lock);
++	list_add_tail(&p->gc_list, &gc_list);
++	spin_unlock_bh(&gc_lock);
++
++	schedule_delayed_work(&gc_work, gc_delay);
++}
++
+ void inetpeer_invalidate_tree(int family)
+ {
+ 	struct inet_peer *old, *new, *prev;
+@@ -576,10 +587,7 @@ void inetpeer_invalidate_tree(int family)
+ 	prev = cmpxchg(&base->root, old, new);
+ 	if (prev == old) {
+ 		base->total = 0;
+-		spin_lock(&gc_lock);
+-		list_add_tail(&prev->gc_list, &gc_list);
+-		spin_unlock(&gc_lock);
+-		schedule_delayed_work(&gc_work, gc_delay);
++		call_rcu(&prev->gc_rcu, inetpeer_inval_rcu);
+ 	}
+ 
+ out:
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 9371743..92bb9cb 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -1560,7 +1560,7 @@ static int fib6_age(struct rt6_info *rt, void *arg)
+ 				neigh_flags = neigh->flags;
+ 				neigh_release(neigh);
+ 			}
+-			if (neigh_flags & NTF_ROUTER) {
++			if (!(neigh_flags & NTF_ROUTER)) {
+ 				RT6_TRACE("purging route %p via non-router but gateway\n",
+ 					  rt);
+ 				return -1;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index bc4888d..c4920ca 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -2953,10 +2953,6 @@ static int __net_init ip6_route_net_init(struct net *net)
+ 	net->ipv6.sysctl.ip6_rt_mtu_expires = 10*60*HZ;
+ 	net->ipv6.sysctl.ip6_rt_min_advmss = IPV6_MIN_MTU - 20 - 40;
+ 
+-#ifdef CONFIG_PROC_FS
+-	proc_net_fops_create(net, "ipv6_route", 0, &ipv6_route_proc_fops);
+-	proc_net_fops_create(net, "rt6_stats", S_IRUGO, &rt6_stats_seq_fops);
+-#endif
+ 	net->ipv6.ip6_rt_gc_expire = 30*HZ;
+ 
+ 	ret = 0;
+@@ -2977,10 +2973,6 @@ out_ip6_dst_ops:
+ 
+ static void __net_exit ip6_route_net_exit(struct net *net)
+ {
+-#ifdef CONFIG_PROC_FS
+-	proc_net_remove(net, "ipv6_route");
+-	proc_net_remove(net, "rt6_stats");
+-#endif
+ 	kfree(net->ipv6.ip6_null_entry);
+ #ifdef CONFIG_IPV6_MULTIPLE_TABLES
+ 	kfree(net->ipv6.ip6_prohibit_entry);
+@@ -2989,11 +2981,33 @@ static void __net_exit ip6_route_net_exit(struct net *net)
+ 	dst_entries_destroy(&net->ipv6.ip6_dst_ops);
+ }
+ 
++static int __net_init ip6_route_net_init_late(struct net *net)
++{
++#ifdef CONFIG_PROC_FS
++	proc_net_fops_create(net, "ipv6_route", 0, &ipv6_route_proc_fops);
++	proc_net_fops_create(net, "rt6_stats", S_IRUGO, &rt6_stats_seq_fops);
++#endif
++	return 0;
++}
++
++static void __net_exit ip6_route_net_exit_late(struct net *net)
++{
++#ifdef CONFIG_PROC_FS
++	proc_net_remove(net, "ipv6_route");
++	proc_net_remove(net, "rt6_stats");
++#endif
++}
++
+ static struct pernet_operations ip6_route_net_ops = {
+ 	.init = ip6_route_net_init,
+ 	.exit = ip6_route_net_exit,
+ };
+ 
++static struct pernet_operations ip6_route_net_late_ops = {
++	.init = ip6_route_net_init_late,
++	.exit = ip6_route_net_exit_late,
++};
++
+ static struct notifier_block ip6_route_dev_notifier = {
+ 	.notifier_call = ip6_route_dev_notify,
+ 	.priority = 0,
+@@ -3043,19 +3057,25 @@ int __init ip6_route_init(void)
+ 	if (ret)
+ 		goto xfrm6_init;
+ 
++	ret = register_pernet_subsys(&ip6_route_net_late_ops);
++	if (ret)
++		goto fib6_rules_init;
++
+ 	ret = -ENOBUFS;
+ 	if (__rtnl_register(PF_INET6, RTM_NEWROUTE, inet6_rtm_newroute, NULL, NULL) ||
+ 	    __rtnl_register(PF_INET6, RTM_DELROUTE, inet6_rtm_delroute, NULL, NULL) ||
+ 	    __rtnl_register(PF_INET6, RTM_GETROUTE, inet6_rtm_getroute, NULL, NULL))
+-		goto fib6_rules_init;
++		goto out_register_late_subsys;
+ 
+ 	ret = register_netdevice_notifier(&ip6_route_dev_notifier);
+ 	if (ret)
+-		goto fib6_rules_init;
++		goto out_register_late_subsys;
+ 
+ out:
+ 	return ret;
+ 
++out_register_late_subsys:
++	unregister_pernet_subsys(&ip6_route_net_late_ops);
+ fib6_rules_init:
+ 	fib6_rules_cleanup();
+ xfrm6_init:
+@@ -3074,6 +3094,7 @@ out_kmem_cache:
+ void ip6_route_cleanup(void)
+ {
+ 	unregister_netdevice_notifier(&ip6_route_dev_notifier);
++	unregister_pernet_subsys(&ip6_route_net_late_ops);
+ 	fib6_rules_cleanup();
+ 	xfrm6_fini();
+ 	fib6_gc_cleanup();
+diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
+index 07d7d55..cd6f7a9 100644
+--- a/net/iucv/af_iucv.c
++++ b/net/iucv/af_iucv.c
+@@ -372,7 +372,6 @@ static int afiucv_hs_send(struct iucv_message *imsg, struct sock *sock,
+ 			skb_trim(skb, skb->dev->mtu);
+ 	}
+ 	skb->protocol = ETH_P_AF_IUCV;
+-	skb_shinfo(skb)->tx_flags |= SKBTX_DRV_NEEDS_SK_REF;
+ 	nskb = skb_clone(skb, GFP_ATOMIC);
+ 	if (!nskb)
+ 		return -ENOMEM;
+diff --git a/net/l2tp/l2tp_eth.c b/net/l2tp/l2tp_eth.c
+index 63fe5f3..7446038 100644
+--- a/net/l2tp/l2tp_eth.c
++++ b/net/l2tp/l2tp_eth.c
+@@ -167,6 +167,7 @@ static void l2tp_eth_delete(struct l2tp_session *session)
+ 		if (dev) {
+ 			unregister_netdev(dev);
+ 			spriv->dev = NULL;
++			module_put(THIS_MODULE);
+ 		}
+ 	}
+ }
+@@ -254,6 +255,7 @@ static int l2tp_eth_create(struct net *net, u32 tunnel_id, u32 session_id, u32 p
+ 	if (rc < 0)
+ 		goto out_del_dev;
+ 
++	__module_get(THIS_MODULE);
+ 	/* Must be done after register_netdev() */
+ 	strlcpy(session->ifname, dev->name, IFNAMSIZ);
+ 
+diff --git a/net/l2tp/l2tp_ip.c b/net/l2tp/l2tp_ip.c
+index cc8ad7b..b1d4370 100644
+--- a/net/l2tp/l2tp_ip.c
++++ b/net/l2tp/l2tp_ip.c
+@@ -516,10 +516,12 @@ static int l2tp_ip_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *m
+ 					   sk->sk_bound_dev_if);
+ 		if (IS_ERR(rt))
+ 			goto no_route;
+-		if (connected)
++		if (connected) {
+ 			sk_setup_caps(sk, &rt->dst);
+-		else
+-			dst_release(&rt->dst); /* safe since we hold rcu_read_lock */
++		} else {
++			skb_dst_set(skb, &rt->dst);
++			goto xmit;
++		}
+ 	}
+ 
+ 	/* We dont need to clone dst here, it is guaranteed to not disappear.
+@@ -527,6 +529,7 @@ static int l2tp_ip_sendmsg(struct kiocb *iocb, struct sock *sk, struct msghdr *m
+ 	 */
+ 	skb_dst_set_noref(skb, &rt->dst);
+ 
++xmit:
+ 	/* Queue the packet to IP for output */
+ 	rc = ip_queue_xmit(skb, &inet->cork.fl);
+ 	rcu_read_unlock();
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 20c680b..1197e8d 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -187,7 +187,7 @@ static u32 ieee80211_enable_ht(struct ieee80211_sub_if_data *sdata,
+ 	u32 changed = 0;
+ 	int hti_cfreq;
+ 	u16 ht_opmode;
+-	bool enable_ht = true;
++	bool enable_ht = true, queues_stopped = false;
+ 	enum nl80211_channel_type prev_chantype;
+ 	enum nl80211_channel_type rx_channel_type = NL80211_CHAN_NO_HT;
+ 	enum nl80211_channel_type tx_channel_type;
+@@ -254,6 +254,7 @@ static u32 ieee80211_enable_ht(struct ieee80211_sub_if_data *sdata,
+ 		 */
+ 		ieee80211_stop_queues_by_reason(&sdata->local->hw,
+ 				IEEE80211_QUEUE_STOP_REASON_CHTYPE_CHANGE);
++		queues_stopped = true;
+ 
+ 		/* flush out all packets */
+ 		synchronize_net();
+@@ -272,12 +273,12 @@ static u32 ieee80211_enable_ht(struct ieee80211_sub_if_data *sdata,
+ 						 IEEE80211_RC_HT_CHANGED,
+ 						 tx_channel_type);
+ 		rcu_read_unlock();
+-
+-		if (beacon_htcap_ie)
+-			ieee80211_wake_queues_by_reason(&sdata->local->hw,
+-				IEEE80211_QUEUE_STOP_REASON_CHTYPE_CHANGE);
+ 	}
+ 
++	if (queues_stopped)
++		ieee80211_wake_queues_by_reason(&sdata->local->hw,
++			IEEE80211_QUEUE_STOP_REASON_CHTYPE_CHANGE);
++
+ 	ht_opmode = le16_to_cpu(hti->operation_mode);
+ 
+ 	/* if bss configuration changed store the new one */
+@@ -1375,7 +1376,6 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
+ 	struct ieee80211_local *local = sdata->local;
+ 	struct sta_info *sta;
+ 	u32 changed = 0;
+-	u8 bssid[ETH_ALEN];
+ 
+ 	ASSERT_MGD_MTX(ifmgd);
+ 
+@@ -1385,10 +1385,7 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
+ 	if (WARN_ON(!ifmgd->associated))
+ 		return;
+ 
+-	memcpy(bssid, ifmgd->associated->bssid, ETH_ALEN);
+-
+ 	ifmgd->associated = NULL;
+-	memset(ifmgd->bssid, 0, ETH_ALEN);
+ 
+ 	/*
+ 	 * we need to commit the associated = NULL change because the
+@@ -1408,7 +1405,7 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
+ 	netif_carrier_off(sdata->dev);
+ 
+ 	mutex_lock(&local->sta_mtx);
+-	sta = sta_info_get(sdata, bssid);
++	sta = sta_info_get(sdata, ifmgd->bssid);
+ 	if (sta) {
+ 		set_sta_flag(sta, WLAN_STA_BLOCK_BA);
+ 		ieee80211_sta_tear_down_BA_sessions(sta, tx);
+@@ -1417,13 +1414,16 @@ static void ieee80211_set_disassoc(struct ieee80211_sub_if_data *sdata,
+ 
+ 	/* deauthenticate/disassociate now */
+ 	if (tx || frame_buf)
+-		ieee80211_send_deauth_disassoc(sdata, bssid, stype, reason,
+-					       tx, frame_buf);
++		ieee80211_send_deauth_disassoc(sdata, ifmgd->bssid, stype,
++					       reason, tx, frame_buf);
+ 
+ 	/* flush out frame */
+ 	if (tx)
+ 		drv_flush(local, false);
+ 
++	/* clear bssid only after building the needed mgmt frames */
++	memset(ifmgd->bssid, 0, ETH_ALEN);
++
+ 	/* remove AP and TDLS peers */
+ 	sta_info_flush(local, sdata);
+ 
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index d64e285..c9b508e 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -2459,7 +2459,7 @@ ieee80211_rx_h_action_return(struct ieee80211_rx_data *rx)
+ 	 * frames that we didn't handle, including returning unknown
+ 	 * ones. For all other modes we will return them to the sender,
+ 	 * setting the 0x80 bit in the action category, as required by
+-	 * 802.11-2007 7.3.1.11.
++	 * 802.11-2012 9.24.4.
+ 	 * Newer versions of hostapd shall also use the management frame
+ 	 * registration mechanisms, but older ones still use cooked
+ 	 * monitor interfaces so push all frames there.
+@@ -2469,6 +2469,9 @@ ieee80211_rx_h_action_return(struct ieee80211_rx_data *rx)
+ 	     sdata->vif.type == NL80211_IFTYPE_AP_VLAN))
+ 		return RX_DROP_MONITOR;
+ 
++	if (is_multicast_ether_addr(mgmt->da))
++		return RX_DROP_MONITOR;
++
+ 	/* do not return rejected action frames */
+ 	if (mgmt->u.action.category & 0x80)
+ 		return RX_DROP_UNUSABLE;
+diff --git a/net/nfc/nci/ntf.c b/net/nfc/nci/ntf.c
+index 2e3dee4..e460cf1 100644
+--- a/net/nfc/nci/ntf.c
++++ b/net/nfc/nci/ntf.c
+@@ -106,7 +106,7 @@ static __u8 *nci_extract_rf_params_nfca_passive_poll(struct nci_dev *ndev,
+ 	nfca_poll->sens_res = __le16_to_cpu(*((__u16 *)data));
+ 	data += 2;
+ 
+-	nfca_poll->nfcid1_len = *data++;
++	nfca_poll->nfcid1_len = min_t(__u8, *data++, NFC_NFCID1_MAXSIZE);
+ 
+ 	pr_debug("sens_res 0x%x, nfcid1_len %d\n",
+ 		 nfca_poll->sens_res, nfca_poll->nfcid1_len);
+@@ -130,7 +130,7 @@ static __u8 *nci_extract_rf_params_nfcb_passive_poll(struct nci_dev *ndev,
+ 			struct rf_tech_specific_params_nfcb_poll *nfcb_poll,
+ 						     __u8 *data)
+ {
+-	nfcb_poll->sensb_res_len = *data++;
++	nfcb_poll->sensb_res_len = min_t(__u8, *data++, NFC_SENSB_RES_MAXSIZE);
+ 
+ 	pr_debug("sensb_res_len %d\n", nfcb_poll->sensb_res_len);
+ 
+@@ -145,7 +145,7 @@ static __u8 *nci_extract_rf_params_nfcf_passive_poll(struct nci_dev *ndev,
+ 						     __u8 *data)
+ {
+ 	nfcf_poll->bit_rate = *data++;
+-	nfcf_poll->sensf_res_len = *data++;
++	nfcf_poll->sensf_res_len = min_t(__u8, *data++, NFC_SENSF_RES_MAXSIZE);
+ 
+ 	pr_debug("bit_rate %d, sensf_res_len %d\n",
+ 		 nfcf_poll->bit_rate, nfcf_poll->sensf_res_len);
+@@ -331,7 +331,7 @@ static int nci_extract_activation_params_iso_dep(struct nci_dev *ndev,
+ 	switch (ntf->activation_rf_tech_and_mode) {
+ 	case NCI_NFC_A_PASSIVE_POLL_MODE:
+ 		nfca_poll = &ntf->activation_params.nfca_poll_iso_dep;
+-		nfca_poll->rats_res_len = *data++;
++		nfca_poll->rats_res_len = min_t(__u8, *data++, 20);
+ 		pr_debug("rats_res_len %d\n", nfca_poll->rats_res_len);
+ 		if (nfca_poll->rats_res_len > 0) {
+ 			memcpy(nfca_poll->rats_res,
+@@ -341,7 +341,7 @@ static int nci_extract_activation_params_iso_dep(struct nci_dev *ndev,
+ 
+ 	case NCI_NFC_B_PASSIVE_POLL_MODE:
+ 		nfcb_poll = &ntf->activation_params.nfcb_poll_iso_dep;
+-		nfcb_poll->attrib_res_len = *data++;
++		nfcb_poll->attrib_res_len = min_t(__u8, *data++, 50);
+ 		pr_debug("attrib_res_len %d\n", nfcb_poll->attrib_res_len);
+ 		if (nfcb_poll->attrib_res_len > 0) {
+ 			memcpy(nfcb_poll->attrib_res,
+diff --git a/net/nfc/rawsock.c b/net/nfc/rawsock.c
+index 5a839ce..e879dce 100644
+--- a/net/nfc/rawsock.c
++++ b/net/nfc/rawsock.c
+@@ -54,7 +54,10 @@ static int rawsock_release(struct socket *sock)
+ {
+ 	struct sock *sk = sock->sk;
+ 
+-	pr_debug("sock=%p\n", sock);
++	pr_debug("sock=%p sk=%p\n", sock, sk);
++
++	if (!sk)
++		return 0;
+ 
+ 	sock_orphan(sk);
+ 	sock_put(sk);
+diff --git a/net/sunrpc/rpcb_clnt.c b/net/sunrpc/rpcb_clnt.c
+index 78ac39f..4c38b33 100644
+--- a/net/sunrpc/rpcb_clnt.c
++++ b/net/sunrpc/rpcb_clnt.c
+@@ -180,14 +180,16 @@ void rpcb_put_local(struct net *net)
+ 	struct sunrpc_net *sn = net_generic(net, sunrpc_net_id);
+ 	struct rpc_clnt *clnt = sn->rpcb_local_clnt;
+ 	struct rpc_clnt *clnt4 = sn->rpcb_local_clnt4;
+-	int shutdown;
++	int shutdown = 0;
+ 
+ 	spin_lock(&sn->rpcb_clnt_lock);
+-	if (--sn->rpcb_users == 0) {
+-		sn->rpcb_local_clnt = NULL;
+-		sn->rpcb_local_clnt4 = NULL;
++	if (sn->rpcb_users) {
++		if (--sn->rpcb_users == 0) {
++			sn->rpcb_local_clnt = NULL;
++			sn->rpcb_local_clnt4 = NULL;
++		}
++		shutdown = !sn->rpcb_users;
+ 	}
+-	shutdown = !sn->rpcb_users;
+ 	spin_unlock(&sn->rpcb_clnt_lock);
+ 
+ 	if (shutdown) {
+diff --git a/net/sunrpc/svc.c b/net/sunrpc/svc.c
+index 234ee39..cb7c13f 100644
+--- a/net/sunrpc/svc.c
++++ b/net/sunrpc/svc.c
+@@ -407,6 +407,14 @@ static int svc_uses_rpcbind(struct svc_serv *serv)
+ 	return 0;
+ }
+ 
++int svc_bind(struct svc_serv *serv, struct net *net)
++{
++	if (!svc_uses_rpcbind(serv))
++		return 0;
++	return svc_rpcb_setup(serv, net);
++}
++EXPORT_SYMBOL_GPL(svc_bind);
++
+ /*
+  * Create an RPC service
+  */
+@@ -471,15 +479,8 @@ __svc_create(struct svc_program *prog, unsigned int bufsize, int npools,
+ 		spin_lock_init(&pool->sp_lock);
+ 	}
+ 
+-	if (svc_uses_rpcbind(serv)) {
+-		if (svc_rpcb_setup(serv, current->nsproxy->net_ns) < 0) {
+-			kfree(serv->sv_pools);
+-			kfree(serv);
+-			return NULL;
+-		}
+-		if (!serv->sv_shutdown)
+-			serv->sv_shutdown = svc_rpcb_cleanup;
+-	}
++	if (svc_uses_rpcbind(serv) && (!serv->sv_shutdown))
++		serv->sv_shutdown = svc_rpcb_cleanup;
+ 
+ 	return serv;
+ }
+@@ -536,8 +537,6 @@ EXPORT_SYMBOL_GPL(svc_shutdown_net);
+ void
+ svc_destroy(struct svc_serv *serv)
+ {
+-	struct net *net = current->nsproxy->net_ns;
+-
+ 	dprintk("svc: svc_destroy(%s, %d)\n",
+ 				serv->sv_program->pg_name,
+ 				serv->sv_nrthreads);
+@@ -552,8 +551,6 @@ svc_destroy(struct svc_serv *serv)
+ 
+ 	del_timer_sync(&serv->sv_temptimer);
+ 
+-	svc_shutdown_net(serv, net);
+-
+ 	/*
+ 	 * The last user is gone and thus all sockets have to be destroyed to
+ 	 * the point. Check this.
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 15f3474..baf5704 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -1389,7 +1389,7 @@ static void reg_set_request_processed(void)
+ 	spin_unlock(&reg_requests_lock);
+ 
+ 	if (last_request->initiator == NL80211_REGDOM_SET_BY_USER)
+-		cancel_delayed_work_sync(&reg_timeout);
++		cancel_delayed_work(&reg_timeout);
+ 
+ 	if (need_more_processing)
+ 		schedule_work(&reg_work);
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 841475c..926b455 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -1192,6 +1192,7 @@ static void snd_hda_codec_free(struct hda_codec *codec)
+ {
+ 	if (!codec)
+ 		return;
++	snd_hda_jack_tbl_clear(codec);
+ 	restore_init_pincfgs(codec);
+ #ifdef CONFIG_SND_HDA_POWER_SAVE
+ 	cancel_delayed_work(&codec->power_work);
+@@ -1200,6 +1201,7 @@ static void snd_hda_codec_free(struct hda_codec *codec)
+ 	list_del(&codec->list);
+ 	snd_array_free(&codec->mixers);
+ 	snd_array_free(&codec->nids);
++	snd_array_free(&codec->cvt_setups);
+ 	snd_array_free(&codec->conn_lists);
+ 	snd_array_free(&codec->spdif_out);
+ 	codec->bus->caddr_tbl[codec->addr] = NULL;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index e56c2c8..c43264f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6976,6 +6976,7 @@ static const struct hda_codec_preset snd_hda_preset_realtek[] = {
+ 	{ .id = 0x10ec0272, .name = "ALC272", .patch = patch_alc662 },
+ 	{ .id = 0x10ec0275, .name = "ALC275", .patch = patch_alc269 },
+ 	{ .id = 0x10ec0276, .name = "ALC276", .patch = patch_alc269 },
++	{ .id = 0x10ec0280, .name = "ALC280", .patch = patch_alc269 },
+ 	{ .id = 0x10ec0861, .rev = 0x100340, .name = "ALC660",
+ 	  .patch = patch_alc861 },
+ 	{ .id = 0x10ec0660, .name = "ALC660-VD", .patch = patch_alc861vd },
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index 2cb1e08..7494fbc 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -4388,7 +4388,7 @@ static int stac92xx_init(struct hda_codec *codec)
+ 					 AC_PINCTL_IN_EN);
+ 	for (i = 0; i < spec->num_pwrs; i++)  {
+ 		hda_nid_t nid = spec->pwr_nids[i];
+-		int pinctl, def_conf;
++		unsigned int pinctl, def_conf;
+ 
+ 		/* power on when no jack detection is available */
+ 		/* or when the VREF is used for controlling LED */
+@@ -4415,7 +4415,7 @@ static int stac92xx_init(struct hda_codec *codec)
+ 		def_conf = get_defcfg_connect(def_conf);
+ 		/* skip any ports that don't have jacks since presence
+  		 * detection is useless */
+-		if (def_conf != AC_JACK_PORT_NONE &&
++		if (def_conf != AC_JACK_PORT_COMPLEX ||
+ 		    !is_jack_detectable(codec, nid)) {
+ 			stac_toggle_power_map(codec, nid, 1);
+ 			continue;
+diff --git a/sound/soc/codecs/tlv320aic3x.c b/sound/soc/codecs/tlv320aic3x.c
+index 8d20f6e..b8f0262 100644
+--- a/sound/soc/codecs/tlv320aic3x.c
++++ b/sound/soc/codecs/tlv320aic3x.c
+@@ -936,9 +936,7 @@ static int aic3x_hw_params(struct snd_pcm_substream *substream,
+ 	}
+ 
+ found:
+-	data = snd_soc_read(codec, AIC3X_PLL_PROGA_REG);
+-	snd_soc_write(codec, AIC3X_PLL_PROGA_REG,
+-		      data | (pll_p << PLLP_SHIFT));
++	snd_soc_update_bits(codec, AIC3X_PLL_PROGA_REG, PLLP_MASK, pll_p);
+ 	snd_soc_write(codec, AIC3X_OVRF_STATUS_AND_PLLR_REG,
+ 		      pll_r << PLLR_SHIFT);
+ 	snd_soc_write(codec, AIC3X_PLL_PROGB_REG, pll_j << PLLJ_SHIFT);
+diff --git a/sound/soc/codecs/tlv320aic3x.h b/sound/soc/codecs/tlv320aic3x.h
+index 6f097fb..08c7f66 100644
+--- a/sound/soc/codecs/tlv320aic3x.h
++++ b/sound/soc/codecs/tlv320aic3x.h
+@@ -166,6 +166,7 @@
+ 
+ /* PLL registers bitfields */
+ #define PLLP_SHIFT		0
++#define PLLP_MASK		7
+ #define PLLQ_SHIFT		3
+ #define PLLR_SHIFT		0
+ #define PLLJ_SHIFT		2
+diff --git a/sound/soc/codecs/wm2200.c b/sound/soc/codecs/wm2200.c
+index acbdc5f..32682c1 100644
+--- a/sound/soc/codecs/wm2200.c
++++ b/sound/soc/codecs/wm2200.c
+@@ -1491,6 +1491,7 @@ static int wm2200_bclk_rates_dat[WM2200_NUM_BCLK_RATES] = {
+ 
+ static int wm2200_bclk_rates_cd[WM2200_NUM_BCLK_RATES] = {
+ 	5644800,
++	3763200,
+ 	2882400,
+ 	1881600,
+ 	1411200,
+diff --git a/tools/hv/hv_kvp_daemon.c b/tools/hv/hv_kvp_daemon.c
+index 146fd61..d9834b3 100644
+--- a/tools/hv/hv_kvp_daemon.c
++++ b/tools/hv/hv_kvp_daemon.c
+@@ -701,14 +701,18 @@ int main(void)
+ 	pfd.fd = fd;
+ 
+ 	while (1) {
++		struct sockaddr *addr_p = (struct sockaddr *) &addr;
++		socklen_t addr_l = sizeof(addr);
+ 		pfd.events = POLLIN;
+ 		pfd.revents = 0;
+ 		poll(&pfd, 1, -1);
+ 
+-		len = recv(fd, kvp_recv_buffer, sizeof(kvp_recv_buffer), 0);
++		len = recvfrom(fd, kvp_recv_buffer, sizeof(kvp_recv_buffer), 0,
++				addr_p, &addr_l);
+ 
+-		if (len < 0) {
+-			syslog(LOG_ERR, "recv failed; error:%d", len);
++		if (len < 0 || addr.nl_pid) {
++			syslog(LOG_ERR, "recvfrom failed; pid:%u error:%d %s",
++					addr.nl_pid, errno, strerror(errno));
+ 			close(fd);
+ 			return -1;
+ 		}

Deleted: genpatches-2.6/trunk/3.4/1900_cifs-double-delim-check.patch
===================================================================
--- genpatches-2.6/trunk/3.4/1900_cifs-double-delim-check.patch	2012-07-07 16:52:38 UTC (rev 2174)
+++ genpatches-2.6/trunk/3.4/1900_cifs-double-delim-check.patch	2012-07-17 14:52:11 UTC (rev 2175)
@@ -1,45 +0,0 @@
---- a/fs/cifs/connect.c	2012-06-13 12:09:20.443398223 -0400
-+++ b/fs/cifs/connect.c	2012-06-13 14:12:11.067735328 -0400
-@@ -1585,24 +1585,27 @@ cifs_parse_mount_options(const char *mou
- 			 * If yes, we have encountered a double deliminator
- 			 * reset the NULL character to the deliminator
- 			 */
--			if (tmp_end < end && tmp_end[1] == delim)
-+			if (tmp_end < end && tmp_end[1] == delim) {
- 				tmp_end[0] = delim;
- 
--			/* Keep iterating until we get to a single deliminator
--			 * OR the end
--			 */
--			while ((tmp_end = strchr(tmp_end, delim)) != NULL &&
--			       (tmp_end[1] == delim)) {
--				tmp_end = (char *) &tmp_end[2];
--			}
-+				/* Keep iterating until we get to a single
-+			 	 * deliminator OR the end
-+			 	 */
-+				while ((tmp_end = strchr(tmp_end, delim))
-+					!= NULL && (tmp_end[1] == delim)) {
-+						tmp_end = (char *) &tmp_end[2];
-+				}
- 
--			/* Reset var options to point to next element */
--			if (tmp_end) {
--				tmp_end[0] = '\0';
--				options = (char *) &tmp_end[1];
--			} else
--				/* Reached the end of the mount option string */
--				options = end;
-+
-+				/* Reset var options to point to next element */
-+				if (tmp_end) {
-+					tmp_end[0] = '\0';
-+					options = (char *) &tmp_end[1];
-+				} else
-+					/* Reached the end of the mount option
-+					 * string */
-+					options = end;
-+			}
- 
- 			/* Now build new password string */
- 			temp_len = strlen(value);
