From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:4.1 commit in: /
Date: Wed, 22 Jul 2015 10:09:49 +0000 (UTC)
Message-ID: <1437559783.40044434d47f847553adb9b9cde5c31516fe435c.mpagano@gentoo>

commit:     40044434d47f847553adb9b9cde5c31516fe435c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jul 22 10:09:43 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jul 22 10:09:43 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=40044434

Linux patch 4.1.3

 0000_README            |    4 +
 1002_linux-4.1.3.patch | 3990 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3994 insertions(+)

diff --git a/0000_README b/0000_README
index 3b87439..8e9fdc7 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-4.1.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.1.2
 
+Patch:  1002_linux-4.1.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.1.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-4.1.3.patch b/1002_linux-4.1.3.patch
new file mode 100644
index 0000000..a9dbb97
--- /dev/null
+++ b/1002_linux-4.1.3.patch
@@ -0,0 +1,3990 @@
+diff --git a/Documentation/DMA-API-HOWTO.txt b/Documentation/DMA-API-HOWTO.txt
+index 0f7afb2bb442..aef8cc5a677b 100644
+--- a/Documentation/DMA-API-HOWTO.txt
++++ b/Documentation/DMA-API-HOWTO.txt
+@@ -25,13 +25,18 @@ physical addresses.  These are the addresses in /proc/iomem.  The physical
+ address is not directly useful to a driver; it must use ioremap() to map
+ the space and produce a virtual address.
+ 
+-I/O devices use a third kind of address: a "bus address" or "DMA address".
+-If a device has registers at an MMIO address, or if it performs DMA to read
+-or write system memory, the addresses used by the device are bus addresses.
+-In some systems, bus addresses are identical to CPU physical addresses, but
+-in general they are not.  IOMMUs and host bridges can produce arbitrary
++I/O devices use a third kind of address: a "bus address".  If a device has
++registers at an MMIO address, or if it performs DMA to read or write system
++memory, the addresses used by the device are bus addresses.  In some
++systems, bus addresses are identical to CPU physical addresses, but in
++general they are not.  IOMMUs and host bridges can produce arbitrary
+ mappings between physical and bus addresses.
+ 
++From a device's point of view, DMA uses the bus address space, but it may
++be restricted to a subset of that space.  For example, even if a system
++supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
++so devices only need to use 32-bit DMA addresses.
++
+ Here's a picture and some examples:
+ 
+                CPU                  CPU                  Bus
+@@ -72,11 +77,11 @@ can use virtual address X to access the buffer, but the device itself
+ cannot because DMA doesn't go through the CPU virtual memory system.
+ 
+ In some simple systems, the device can do DMA directly to physical address
+-Y.  But in many others, there is IOMMU hardware that translates bus
++Y.  But in many others, there is IOMMU hardware that translates DMA
+ addresses to physical addresses, e.g., it translates Z to Y.  This is part
+ of the reason for the DMA API: the driver can give a virtual address X to
+ an interface like dma_map_single(), which sets up any required IOMMU
+-mapping and returns the bus address Z.  The driver then tells the device to
++mapping and returns the DMA address Z.  The driver then tells the device to
+ do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
+ RAM.
+ 
+@@ -98,7 +103,7 @@ First of all, you should make sure
+ #include <linux/dma-mapping.h>
+ 
+ is in your driver, which provides the definition of dma_addr_t.  This type
+-can hold any valid DMA or bus address for the platform and should be used
++can hold any valid DMA address for the platform and should be used
+ everywhere you hold a DMA address returned from the DMA mapping functions.
+ 
+ 			 What memory is DMA'able?
+@@ -316,7 +321,7 @@ There are two types of DMA mappings:
+   Think of "consistent" as "synchronous" or "coherent".
+ 
+   The current default is to return consistent memory in the low 32
+-  bits of the bus space.  However, for future compatibility you should
++  bits of the DMA space.  However, for future compatibility you should
+   set the consistent mask even if this default is fine for your
+   driver.
+ 
+@@ -403,7 +408,7 @@ dma_alloc_coherent() returns two values: the virtual address which you
+ can use to access it from the CPU and dma_handle which you pass to the
+ card.
+ 
+-The CPU virtual address and the DMA bus address are both
++The CPU virtual address and the DMA address are both
+ guaranteed to be aligned to the smallest PAGE_SIZE order which
+ is greater than or equal to the requested size.  This invariant
+ exists (for example) to guarantee that if you allocate a chunk
+@@ -645,8 +650,8 @@ PLEASE NOTE:  The 'nents' argument to the dma_unmap_sg call must be
+               dma_map_sg call.
+ 
+ Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
+-counterpart, because the bus address space is a shared resource and
+-you could render the machine unusable by consuming all bus addresses.
++counterpart, because the DMA address space is a shared resource and
++you could render the machine unusable by consuming all DMA addresses.
+ 
+ If you need to use the same streaming DMA region multiple times and touch
+ the data in between the DMA transfers, the buffer needs to be synced
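For reference, a minimal sketch of the streaming-mapping pattern the documentation above describes; the device pointer, buffer and length below are hypothetical and error handling is trimmed:

#include <linux/dma-mapping.h>

/* Minimal sketch of the dma_map_single() flow described above.
 * 'dev', 'buf' and 'len' are placeholders, not taken from this patch. */
static int example_stream_to_device(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma_handle;

	/* Map the CPU virtual address; the return value is the DMA address
	 * ("Z" in the picture above) that gets programmed into the device. */
	dma_handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_handle))
		return -ENOMEM;

	/* ... start the transfer and wait for it to complete ... */

	/* Every dma_map_single() needs a matching dma_unmap_single(). */
	dma_unmap_single(dev, dma_handle, len, DMA_TO_DEVICE);
	return 0;
}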
+diff --git a/Documentation/DMA-API.txt b/Documentation/DMA-API.txt
+index 52088408668a..7eba542eff7c 100644
+--- a/Documentation/DMA-API.txt
++++ b/Documentation/DMA-API.txt
+@@ -18,10 +18,10 @@ Part I - dma_ API
+ To get the dma_ API, you must #include <linux/dma-mapping.h>.  This
+ provides dma_addr_t and the interfaces described below.
+ 
+-A dma_addr_t can hold any valid DMA or bus address for the platform.  It
+-can be given to a device to use as a DMA source or target.  A CPU cannot
+-reference a dma_addr_t directly because there may be translation between
+-its physical address space and the bus address space.
++A dma_addr_t can hold any valid DMA address for the platform.  It can be
++given to a device to use as a DMA source or target.  A CPU cannot reference
++a dma_addr_t directly because there may be translation between its physical
++address space and the DMA address space.
+ 
+ Part Ia - Using large DMA-coherent buffers
+ ------------------------------------------
+@@ -42,7 +42,7 @@ It returns a pointer to the allocated region (in the processor's virtual
+ address space) or NULL if the allocation failed.
+ 
+ It also returns a <dma_handle> which may be cast to an unsigned integer the
+-same width as the bus and given to the device as the bus address base of
++same width as the bus and given to the device as the DMA address base of
+ the region.
+ 
+ Note: consistent memory can be expensive on some platforms, and the
+@@ -193,7 +193,7 @@ dma_map_single(struct device *dev, void *cpu_addr, size_t size,
+ 		      enum dma_data_direction direction)
+ 
+ Maps a piece of processor virtual memory so it can be accessed by the
+-device and returns the bus address of the memory.
++device and returns the DMA address of the memory.
+ 
+ The direction for both APIs may be converted freely by casting.
+ However the dma_ API uses a strongly typed enumerator for its
+@@ -212,20 +212,20 @@ contiguous piece of memory.  For this reason, memory to be mapped by
+ this API should be obtained from sources which guarantee it to be
+ physically contiguous (like kmalloc).
+ 
+-Further, the bus address of the memory must be within the
++Further, the DMA address of the memory must be within the
+ dma_mask of the device (the dma_mask is a bit mask of the
+-addressable region for the device, i.e., if the bus address of
+-the memory ANDed with the dma_mask is still equal to the bus
++addressable region for the device, i.e., if the DMA address of
++the memory ANDed with the dma_mask is still equal to the DMA
+ address, then the device can perform DMA to the memory).  To
+ ensure that the memory allocated by kmalloc is within the dma_mask,
+ the driver may specify various platform-dependent flags to restrict
+-the bus address range of the allocation (e.g., on x86, GFP_DMA
+-guarantees to be within the first 16MB of available bus addresses,
++the DMA address range of the allocation (e.g., on x86, GFP_DMA
++guarantees to be within the first 16MB of available DMA addresses,
+ as required by ISA devices).
+ 
+ Note also that the above constraints on physical contiguity and
+ dma_mask may not apply if the platform has an IOMMU (a device which
+-maps an I/O bus address to a physical memory address).  However, to be
++maps an I/O DMA address to a physical memory address).  However, to be
+ portable, device driver writers may *not* assume that such an IOMMU
+ exists.
+ 
+@@ -296,7 +296,7 @@ reduce current DMA mapping usage or delay and try again later).
+ 	dma_map_sg(struct device *dev, struct scatterlist *sg,
+ 		int nents, enum dma_data_direction direction)
+ 
+-Returns: the number of bus address segments mapped (this may be shorter
++Returns: the number of DMA address segments mapped (this may be shorter
+ than <nents> passed in if some elements of the scatter/gather list are
+ physically or virtually adjacent and an IOMMU maps them with a single
+ entry).
+@@ -340,7 +340,7 @@ must be the same as those and passed in to the scatter/gather mapping
+ API.
+ 
+ Note: <nents> must be the number you passed in, *not* the number of
+-bus address entries returned.
++DMA address entries returned.
+ 
+ void
+ dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
+@@ -507,7 +507,7 @@ it's asked for coherent memory for this device.
+ phys_addr is the CPU physical address to which the memory is currently
+ assigned (this will be ioremapped so the CPU can access the region).
+ 
+-device_addr is the bus address the device needs to be programmed
++device_addr is the DMA address the device needs to be programmed
+ with to actually address this memory (this will be handed out as the
+ dma_addr_t in dma_alloc_coherent()).
+ 
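As a companion to the text above, a minimal sketch of a coherent allocation; 'dev' and 'size' are hypothetical:

#include <linux/dma-mapping.h>

/* Minimal sketch of dma_alloc_coherent(); the caller and size are made up. */
static void *example_alloc_coherent(struct device *dev, size_t size,
				    dma_addr_t *dma_handle)
{
	/* The returned pointer is for CPU accesses; *dma_handle is the DMA
	 * address handed to the device.  Both refer to the same memory. */
	void *cpu_addr = dma_alloc_coherent(dev, size, dma_handle, GFP_KERNEL);

	/* Release later with dma_free_coherent(dev, size, cpu_addr, *dma_handle). */
	return cpu_addr;
}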
+diff --git a/Documentation/devicetree/bindings/spi/spi_pl022.txt b/Documentation/devicetree/bindings/spi/spi_pl022.txt
+index 22ed6797216d..4d1673ca8cf8 100644
+--- a/Documentation/devicetree/bindings/spi/spi_pl022.txt
++++ b/Documentation/devicetree/bindings/spi/spi_pl022.txt
+@@ -4,9 +4,9 @@ Required properties:
+ - compatible : "arm,pl022", "arm,primecell"
+ - reg : Offset and length of the register set for the device
+ - interrupts : Should contain SPI controller interrupt
++- num-cs : total number of chipselects
+ 
+ Optional properties:
+-- num-cs : total number of chipselects
+ - cs-gpios : should specify GPIOs used for chipselects.
+   The gpios will be referred to as reg = <index> in the SPI child nodes.
+   If unspecified, a single SPI device without a chip select can be used.
+diff --git a/Makefile b/Makefile
+index cef84c061f02..e3cdec4898be 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 1
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Series 4800
+ 
+diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
+index 9917a45fc430..20b7dc17979e 100644
+--- a/arch/arc/include/asm/atomic.h
++++ b/arch/arc/include/asm/atomic.h
+@@ -43,6 +43,12 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
+ {									\
+ 	unsigned int temp;						\
+ 									\
++	/*								\
++	 * Explicit full memory barrier needed before/after as		\
++	 * LLOCK/SCOND themselves don't provide any such semantics	\
++	 */								\
++	smp_mb();							\
++									\
+ 	__asm__ __volatile__(						\
+ 	"1:	llock   %0, [%1]	\n"				\
+ 	"	" #asm_op " %0, %0, %2	\n"				\
+@@ -52,6 +58,8 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
+ 	: "r"(&v->counter), "ir"(i)					\
+ 	: "cc");							\
+ 									\
++	smp_mb();							\
++									\
+ 	return temp;							\
+ }
+ 
+@@ -105,6 +113,9 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
+ 	unsigned long flags;						\
+ 	unsigned long temp;						\
+ 									\
++	/*								\
++	 * spin lock/unlock provides the needed smp_mb() before/after	\
++	 */								\
+ 	atomic_ops_lock(flags);						\
+ 	temp = v->counter;						\
+ 	temp c_op i;							\
+@@ -142,9 +153,19 @@ ATOMIC_OP(and, &=, and)
+ #define __atomic_add_unless(v, a, u)					\
+ ({									\
+ 	int c, old;							\
++									\
++	/*								\
++	 * Explicit full memory barrier needed before/after as		\
++	 * LLOCK/SCOND themselves don't provide any such semantics	\
++	 */								\
++	smp_mb();							\
++									\
+ 	c = atomic_read(v);						\
+ 	while (c != (u) && (old = atomic_cmpxchg((v), c, c + (a))) != c)\
+ 		c = old;						\
++									\
++	smp_mb();							\
++									\
+ 	c;								\
+ })
+ 
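The barriers added above matter because callers assume that value-returning atomics act as full memory barriers; a hypothetical example of code relying on that guarantee (not part of the patch):

#include <linux/atomic.h>

/* Hypothetical: the plain store to 'data' must become visible to other
 * CPUs before the counter update does, which is exactly what the smp_mb()
 * pair around LLOCK/SCOND now provides. */
static int data;
static atomic_t ready = ATOMIC_INIT(0);

static void example_publish(void)
{
	data = 42;			/* ordinary store */
	atomic_inc_return(&ready);	/* full barrier before and after */
}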
+diff --git a/arch/arc/include/asm/bitops.h b/arch/arc/include/asm/bitops.h
+index 4051e9525939..624a9d048ca9 100644
+--- a/arch/arc/include/asm/bitops.h
++++ b/arch/arc/include/asm/bitops.h
+@@ -117,6 +117,12 @@ static inline int test_and_set_bit(unsigned long nr, volatile unsigned long *m)
+ 	if (__builtin_constant_p(nr))
+ 		nr &= 0x1f;
+ 
++	/*
++	 * Explicit full memory barrier needed before/after as
++	 * LLOCK/SCOND themselves don't provide any such semantics
++	 */
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"1:	llock   %0, [%2]	\n"
+ 	"	bset    %1, %0, %3	\n"
+@@ -126,6 +132,8 @@ static inline int test_and_set_bit(unsigned long nr, volatile unsigned long *m)
+ 	: "r"(m), "ir"(nr)
+ 	: "cc");
+ 
++	smp_mb();
++
+ 	return (old & (1 << nr)) != 0;
+ }
+ 
+@@ -139,6 +147,8 @@ test_and_clear_bit(unsigned long nr, volatile unsigned long *m)
+ 	if (__builtin_constant_p(nr))
+ 		nr &= 0x1f;
+ 
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"1:	llock   %0, [%2]	\n"
+ 	"	bclr    %1, %0, %3	\n"
+@@ -148,6 +158,8 @@ test_and_clear_bit(unsigned long nr, volatile unsigned long *m)
+ 	: "r"(m), "ir"(nr)
+ 	: "cc");
+ 
++	smp_mb();
++
+ 	return (old & (1 << nr)) != 0;
+ }
+ 
+@@ -161,6 +173,8 @@ test_and_change_bit(unsigned long nr, volatile unsigned long *m)
+ 	if (__builtin_constant_p(nr))
+ 		nr &= 0x1f;
+ 
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"1:	llock   %0, [%2]	\n"
+ 	"	bxor    %1, %0, %3	\n"
+@@ -170,6 +184,8 @@ test_and_change_bit(unsigned long nr, volatile unsigned long *m)
+ 	: "r"(m), "ir"(nr)
+ 	: "cc");
+ 
++	smp_mb();
++
+ 	return (old & (1 << nr)) != 0;
+ }
+ 
+@@ -249,6 +265,9 @@ static inline int test_and_set_bit(unsigned long nr, volatile unsigned long *m)
+ 	if (__builtin_constant_p(nr))
+ 		nr &= 0x1f;
+ 
++	/*
++	 * spin lock/unlock provide the needed smp_mb() before/after
++	 */
+ 	bitops_lock(flags);
+ 
+ 	old = *m;
+diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
+index 03cd6894855d..44fd531f4d7b 100644
+--- a/arch/arc/include/asm/cmpxchg.h
++++ b/arch/arc/include/asm/cmpxchg.h
+@@ -10,6 +10,8 @@
+ #define __ASM_ARC_CMPXCHG_H
+ 
+ #include <linux/types.h>
++
++#include <asm/barrier.h>
+ #include <asm/smp.h>
+ 
+ #ifdef CONFIG_ARC_HAS_LLSC
+@@ -19,16 +21,25 @@ __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
+ {
+ 	unsigned long prev;
+ 
++	/*
++	 * Explicit full memory barrier needed before/after as
++	 * LLOCK/SCOND themselves don't provide any such semantics
++	 */
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"1:	llock   %0, [%1]	\n"
+ 	"	brne    %0, %2, 2f	\n"
+ 	"	scond   %3, [%1]	\n"
+ 	"	bnz     1b		\n"
+ 	"2:				\n"
+-	: "=&r"(prev)
+-	: "r"(ptr), "ir"(expected),
+-	  "r"(new) /* can't be "ir". scond can't take limm for "b" */
+-	: "cc");
++	: "=&r"(prev)	/* Early clobber, to prevent reg reuse */
++	: "r"(ptr),	/* Not "m": llock only supports reg direct addr mode */
++	  "ir"(expected),
++	  "r"(new)	/* can't be "ir". scond can't take LIMM for "b" */
++	: "cc", "memory"); /* so that gcc knows memory is being written here */
++
++	smp_mb();
+ 
+ 	return prev;
+ }
+@@ -42,6 +53,9 @@ __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
+ 	int prev;
+ 	volatile unsigned long *p = ptr;
+ 
++	/*
++	 * spin lock/unlock provide the needed smp_mb() before/after
++	 */
+ 	atomic_ops_lock(flags);
+ 	prev = *p;
+ 	if (prev == expected)
+@@ -77,12 +91,16 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
+ 
+ 	switch (size) {
+ 	case 4:
++		smp_mb();
++
+ 		__asm__ __volatile__(
+ 		"	ex  %0, [%1]	\n"
+ 		: "+r"(val)
+ 		: "r"(ptr)
+ 		: "memory");
+ 
++		smp_mb();
++
+ 		return val;
+ 	}
+ 	return __xchg_bad_pointer();
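For context, a hypothetical cmpxchg() caller of the kind that depends on the barriers and the "memory" clobber added above:

#include <linux/atomic.h>

/* Hypothetical lock-free flag update built on cmpxchg(); ordering of code
 * like this is what the added smp_mb() calls guarantee on ARC. */
static void example_set_flag(unsigned long *word, unsigned long flag)
{
	unsigned long old;

	do {
		old = READ_ONCE(*word);
	} while (cmpxchg(word, old, old | flag) != old);
}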
+diff --git a/arch/arc/include/asm/spinlock.h b/arch/arc/include/asm/spinlock.h
+index b6a8c2dfbe6e..e1651df6a93d 100644
+--- a/arch/arc/include/asm/spinlock.h
++++ b/arch/arc/include/asm/spinlock.h
+@@ -22,24 +22,46 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
+ {
+ 	unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;
+ 
++	/*
++	 * This smp_mb() is technically superfluous, we only need the one
++	 * after the lock for providing the ACQUIRE semantics.
++	 * However doing the "right" thing was regressing hackbench
++	 * so keeping this, pending further investigation
++	 */
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"1:	ex  %0, [%1]		\n"
+ 	"	breq  %0, %2, 1b	\n"
+ 	: "+&r" (tmp)
+ 	: "r"(&(lock->slock)), "ir"(__ARCH_SPIN_LOCK_LOCKED__)
+ 	: "memory");
++
++	/*
++	 * ACQUIRE barrier to ensure load/store after taking the lock
++	 * don't "bleed-up" out of the critical section (leak-in is allowed)
++	 * http://www.spinics.net/lists/kernel/msg2010409.html
++	 *
++	 * ARCv2 only has load-load, store-store and all-all barrier
++	 * thus need the full all-all barrier
++	 */
++	smp_mb();
+ }
+ 
+ static inline int arch_spin_trylock(arch_spinlock_t *lock)
+ {
+ 	unsigned int tmp = __ARCH_SPIN_LOCK_LOCKED__;
+ 
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"1:	ex  %0, [%1]		\n"
+ 	: "+r" (tmp)
+ 	: "r"(&(lock->slock))
+ 	: "memory");
+ 
++	smp_mb();
++
+ 	return (tmp == __ARCH_SPIN_LOCK_UNLOCKED__);
+ }
+ 
+@@ -47,12 +69,22 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
+ {
+ 	unsigned int tmp = __ARCH_SPIN_LOCK_UNLOCKED__;
+ 
++	/*
++	 * RELEASE barrier: given the instructions avail on ARCv2, full barrier
++	 * is the only option
++	 */
++	smp_mb();
++
+ 	__asm__ __volatile__(
+ 	"	ex  %0, [%1]		\n"
+ 	: "+r" (tmp)
+ 	: "r"(&(lock->slock))
+ 	: "memory");
+ 
++	/*
++	 * superfluous, but keeping for now - see pairing version in
++	 * arch_spin_lock above
++	 */
+ 	smp_mb();
+ }
+ 
+diff --git a/arch/arc/kernel/perf_event.c b/arch/arc/kernel/perf_event.c
+index fd2ec50102f2..57b58f52d825 100644
+--- a/arch/arc/kernel/perf_event.c
++++ b/arch/arc/kernel/perf_event.c
+@@ -266,7 +266,6 @@ static int arc_pmu_add(struct perf_event *event, int flags)
+ 
+ static int arc_pmu_device_probe(struct platform_device *pdev)
+ {
+-	struct arc_pmu *arc_pmu;
+ 	struct arc_reg_pct_build pct_bcr;
+ 	struct arc_reg_cc_build cc_bcr;
+ 	int i, j, ret;
+diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
+index 959fe8733560..bddd04d031db 100644
+--- a/arch/arm64/kernel/entry.S
++++ b/arch/arm64/kernel/entry.S
+@@ -517,6 +517,7 @@ el0_sp_pc:
+ 	mrs	x26, far_el1
+ 	// enable interrupts before calling the main handler
+ 	enable_dbg_and_irq
++	ct_user_exit
+ 	mov	x0, x26
+ 	mov	x1, x25
+ 	mov	x2, sp
+diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
+index ff3bddea482d..f6fe17d88da5 100644
+--- a/arch/arm64/kernel/vdso/Makefile
++++ b/arch/arm64/kernel/vdso/Makefile
+@@ -15,6 +15,10 @@ ccflags-y := -shared -fno-common -fno-builtin
+ ccflags-y += -nostdlib -Wl,-soname=linux-vdso.so.1 \
+ 		$(call cc-ldoption, -Wl$(comma)--hash-style=sysv)
+ 
++# Workaround for bare-metal (ELF) toolchains that neglect to pass -shared
++# down to collect2, resulting in silent corruption of the vDSO image.
++ccflags-y += -Wl,-shared
++
+ obj-y += vdso.o
+ extra-y += vdso.lds vdso-offsets.h
+ CPPFLAGS_vdso.lds += -P -C -U$(ARCH)
+diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
+index baa758d37021..76c1e6cd36fc 100644
+--- a/arch/arm64/mm/context.c
++++ b/arch/arm64/mm/context.c
+@@ -92,6 +92,14 @@ static void reset_context(void *info)
+ 	unsigned int cpu = smp_processor_id();
+ 	struct mm_struct *mm = current->active_mm;
+ 
++	/*
++	 * current->active_mm could be init_mm for the idle thread immediately
++	 * after secondary CPU boot or hotplug. TTBR0_EL1 is already set to
++	 * the reserved value, so no need to reset any context.
++	 */
++	if (mm == &init_mm)
++		return;
++
+ 	smp_rmb();
+ 	asid = cpu_last_asid + cpu;
+ 
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 597831bdddf3..ad87ce826cce 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -262,7 +262,7 @@ static void __init free_unused_memmap(void)
+ 		 * memmap entries are valid from the bank end aligned to
+ 		 * MAX_ORDER_NR_PAGES.
+ 		 */
+-		prev_end = ALIGN(start + __phys_to_pfn(reg->size),
++		prev_end = ALIGN(__phys_to_pfn(reg->base + reg->size),
+ 				 MAX_ORDER_NR_PAGES);
+ 	}
+ 
+diff --git a/arch/s390/hypfs/inode.c b/arch/s390/hypfs/inode.c
+index d3f896a35b98..2eeb0a0f506d 100644
+--- a/arch/s390/hypfs/inode.c
++++ b/arch/s390/hypfs/inode.c
+@@ -456,8 +456,6 @@ static const struct super_operations hypfs_s_ops = {
+ 	.show_options	= hypfs_show_options,
+ };
+ 
+-static struct kobject *s390_kobj;
+-
+ static int __init hypfs_init(void)
+ {
+ 	int rc;
+@@ -481,18 +479,16 @@ static int __init hypfs_init(void)
+ 		rc = -ENODATA;
+ 		goto fail_hypfs_sprp_exit;
+ 	}
+-	s390_kobj = kobject_create_and_add("s390", hypervisor_kobj);
+-	if (!s390_kobj) {
+-		rc = -ENOMEM;
++	rc = sysfs_create_mount_point(hypervisor_kobj, "s390");
++	if (rc)
+ 		goto fail_hypfs_diag0c_exit;
+-	}
+ 	rc = register_filesystem(&hypfs_type);
+ 	if (rc)
+ 		goto fail_filesystem;
+ 	return 0;
+ 
+ fail_filesystem:
+-	kobject_put(s390_kobj);
++	sysfs_remove_mount_point(hypervisor_kobj, "s390");
+ fail_hypfs_diag0c_exit:
+ 	hypfs_diag0c_exit();
+ fail_hypfs_sprp_exit:
+@@ -510,7 +506,7 @@ fail_dbfs_exit:
+ static void __exit hypfs_exit(void)
+ {
+ 	unregister_filesystem(&hypfs_type);
+-	kobject_put(s390_kobj);
++	sysfs_remove_mount_point(hypervisor_kobj, "s390");
+ 	hypfs_diag0c_exit();
+ 	hypfs_sprp_exit();
+ 	hypfs_vm_exit();
+diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
+index c412fdb28d34..513e7230e3d0 100644
+--- a/drivers/acpi/bus.c
++++ b/drivers/acpi/bus.c
+@@ -470,6 +470,16 @@ static int __init acpi_bus_init_irq(void)
+ 	return 0;
+ }
+ 
++/**
++ * acpi_early_init - Initialize ACPICA and populate the ACPI namespace.
++ *
++ * The ACPI tables are accessible after this, but the handling of events has not
++ * been initialized and the global lock is not available yet, so AML should not
++ * be executed at this point.
++ *
++ * Doing this before switching the EFI runtime services to virtual mode allows
++ * the EfiBootServices memory to be freed slightly earlier on boot.
++ */
+ void __init acpi_early_init(void)
+ {
+ 	acpi_status status;
+@@ -533,26 +543,42 @@ void __init acpi_early_init(void)
+ 		acpi_gbl_FADT.sci_interrupt = acpi_sci_override_gsi;
+ 	}
+ #endif
++	return;
++
++ error0:
++	disable_acpi();
++}
++
++/**
++ * acpi_subsystem_init - Finalize the early initialization of ACPI.
++ *
++ * Switch over the platform to the ACPI mode (if possible), initialize the
++ * handling of ACPI events, install the interrupt and global lock handlers.
++ *
++ * Doing this too early is generally unsafe, but at the same time it needs to be
++ * done before all things that really depend on ACPI.  The right spot appears to
++ * be before finalizing the EFI initialization.
++ */
++void __init acpi_subsystem_init(void)
++{
++	acpi_status status;
++
++	if (acpi_disabled)
++		return;
+ 
+ 	status = acpi_enable_subsystem(~ACPI_NO_ACPI_ENABLE);
+ 	if (ACPI_FAILURE(status)) {
+ 		printk(KERN_ERR PREFIX "Unable to enable ACPI\n");
+-		goto error0;
++		disable_acpi();
++	} else {
++		/*
++		 * If the system is using ACPI then we can be reasonably
++		 * confident that any regulators are managed by the firmware
++		 * so tell the regulator core it has everything it needs to
++		 * know.
++		 */
++		regulator_has_full_constraints();
+ 	}
+-
+-	/*
+-	 * If the system is using ACPI then we can be reasonably
+-	 * confident that any regulators are managed by the firmware
+-	 * so tell the regulator core it has everything it needs to
+-	 * know.
+-	 */
+-	regulator_has_full_constraints();
+-
+-	return;
+-
+-      error0:
+-	disable_acpi();
+-	return;
+ }
+ 
+ static int __init acpi_bus_init(void)
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index 735db11a9b00..8217e0bda60f 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -953,6 +953,7 @@ EXPORT_SYMBOL_GPL(acpi_subsys_prepare);
+  */
+ void acpi_subsys_complete(struct device *dev)
+ {
++	pm_generic_complete(dev);
+ 	/*
+ 	 * If the device had been runtime-suspended before the system went into
+ 	 * the sleep state it is going out of and it has never been resumed till
+diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
+index 7ccba395c9dd..5226a8b921ae 100644
+--- a/drivers/acpi/osl.c
++++ b/drivers/acpi/osl.c
+@@ -175,11 +175,7 @@ static void __init acpi_request_region (struct acpi_generic_address *gas,
+ 	if (!addr || !length)
+ 		return;
+ 
+-	/* Resources are never freed */
+-	if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_IO)
+-		request_region(addr, length, desc);
+-	else if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
+-		request_mem_region(addr, length, desc);
++	acpi_reserve_region(addr, length, gas->space_id, 0, desc);
+ }
+ 
+ static void __init acpi_reserve_resources(void)
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 8244f013f210..fcb7807ea8b7 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -26,6 +26,7 @@
+ #include <linux/device.h>
+ #include <linux/export.h>
+ #include <linux/ioport.h>
++#include <linux/list.h>
+ #include <linux/slab.h>
+ 
+ #ifdef CONFIG_X86
+@@ -621,3 +622,162 @@ int acpi_dev_filter_resource_type(struct acpi_resource *ares,
+ 	return (type & types) ? 0 : 1;
+ }
+ EXPORT_SYMBOL_GPL(acpi_dev_filter_resource_type);
++
++struct reserved_region {
++	struct list_head node;
++	u64 start;
++	u64 end;
++};
++
++static LIST_HEAD(reserved_io_regions);
++static LIST_HEAD(reserved_mem_regions);
++
++static int request_range(u64 start, u64 end, u8 space_id, unsigned long flags,
++			 char *desc)
++{
++	unsigned int length = end - start + 1;
++	struct resource *res;
++
++	res = space_id == ACPI_ADR_SPACE_SYSTEM_IO ?
++		request_region(start, length, desc) :
++		request_mem_region(start, length, desc);
++	if (!res)
++		return -EIO;
++
++	res->flags &= ~flags;
++	return 0;
++}
++
++static int add_region_before(u64 start, u64 end, u8 space_id,
++			     unsigned long flags, char *desc,
++			     struct list_head *head)
++{
++	struct reserved_region *reg;
++	int error;
++
++	reg = kmalloc(sizeof(*reg), GFP_KERNEL);
++	if (!reg)
++		return -ENOMEM;
++
++	error = request_range(start, end, space_id, flags, desc);
++	if (error)
++		return error;
++
++	reg->start = start;
++	reg->end = end;
++	list_add_tail(&reg->node, head);
++	return 0;
++}
++
++/**
++ * acpi_reserve_region - Reserve an I/O or memory region as a system resource.
++ * @start: Starting address of the region.
++ * @length: Length of the region.
++ * @space_id: Identifier of address space to reserve the region from.
++ * @flags: Resource flags to clear for the region after requesting it.
++ * @desc: Region description (for messages).
++ *
++ * Reserve an I/O or memory region as a system resource to prevent others from
++ * using it.  If the new region overlaps with one of the regions (in the given
++ * address space) already reserved by this routine, only the non-overlapping
++ * parts of it will be reserved.
++ *
++ * Returned is either 0 (success) or a negative error code indicating a resource
++ * reservation problem.  It is the code of the first encountered error, but the
++ * routine doesn't abort until it has attempted to request all of the parts of
++ * the new region that don't overlap with other regions reserved previously.
++ *
++ * The resources requested by this routine are never released.
++ */
++int acpi_reserve_region(u64 start, unsigned int length, u8 space_id,
++			unsigned long flags, char *desc)
++{
++	struct list_head *regions;
++	struct reserved_region *reg;
++	u64 end = start + length - 1;
++	int ret = 0, error = 0;
++
++	if (space_id == ACPI_ADR_SPACE_SYSTEM_IO)
++		regions = &reserved_io_regions;
++	else if (space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
++		regions = &reserved_mem_regions;
++	else
++		return -EINVAL;
++
++	if (list_empty(regions))
++		return add_region_before(start, end, space_id, flags, desc, regions);
++
++	list_for_each_entry(reg, regions, node)
++		if (reg->start == end + 1) {
++			/* The new region can be prepended to this one. */
++			ret = request_range(start, end, space_id, flags, desc);
++			if (!ret)
++				reg->start = start;
++
++			return ret;
++		} else if (reg->start > end) {
++			/* No overlap.  Add the new region here and get out. */
++			return add_region_before(start, end, space_id, flags,
++						 desc, &reg->node);
++		} else if (reg->end == start - 1) {
++			goto combine;
++		} else if (reg->end >= start) {
++			goto overlap;
++		}
++
++	/* The new region goes after the last existing one. */
++	return add_region_before(start, end, space_id, flags, desc, regions);
++
++ overlap:
++	/*
++	 * The new region overlaps an existing one.
++	 *
++	 * The head part of the new region immediately preceding the existing
++	 * overlapping one can be combined with it right away.
++	 */
++	if (reg->start > start) {
++		error = request_range(start, reg->start - 1, space_id, flags, desc);
++		if (error)
++			ret = error;
++		else
++			reg->start = start;
++	}
++
++ combine:
++	/*
++	 * The new region is adjacent to an existing one.  If it extends beyond
++	 * that region all the way to the next one, it is possible to combine
++	 * all three of them.
++	 */
++	while (reg->end < end) {
++		struct reserved_region *next = NULL;
++		u64 a = reg->end + 1, b = end;
++
++		if (!list_is_last(&reg->node, regions)) {
++			next = list_next_entry(reg, node);
++			if (next->start <= end)
++				b = next->start - 1;
++		}
++		error = request_range(a, b, space_id, flags, desc);
++		if (!error) {
++			if (next && next->start == b + 1) {
++				reg->end = next->end;
++				list_del(&next->node);
++				kfree(next);
++			} else {
++				reg->end = end;
++				break;
++			}
++		} else if (next) {
++			if (!ret)
++				ret = error;
++
++			reg = next;
++		} else {
++			break;
++		}
++	}
++
++	return ret ? ret : error;
++}
++EXPORT_SYMBOL_GPL(acpi_reserve_region);
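A hypothetical caller of the helper introduced above, mirroring the acpi_request_region() change earlier in this patch; the address, length and description are made up:

#include <linux/acpi.h>

/* Hypothetical: reserve 0x20 bytes of system I/O space starting at 0x400;
 * flags == 0 means no resource flags are cleared after the request. */
static void example_reserve_io(void)
{
	if (acpi_reserve_region(0x400, 0x20, ACPI_ADR_SPACE_SYSTEM_IO,
				0, "example-device"))
		pr_warn("example-device: I/O region already in use\n");
}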
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 6273ff072f3e..1c76dcb502cf 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -945,11 +945,10 @@ EXPORT_SYMBOL_GPL(devm_regmap_init);
+ static void regmap_field_init(struct regmap_field *rm_field,
+ 	struct regmap *regmap, struct reg_field reg_field)
+ {
+-	int field_bits = reg_field.msb - reg_field.lsb + 1;
+ 	rm_field->regmap = regmap;
+ 	rm_field->reg = reg_field.reg;
+ 	rm_field->shift = reg_field.lsb;
+-	rm_field->mask = ((BIT(field_bits) - 1) << reg_field.lsb);
++	rm_field->mask = GENMASK(reg_field.msb, reg_field.lsb);
+ 	rm_field->id_size = reg_field.id_size;
+ 	rm_field->id_offset = reg_field.id_offset;
+ }
+@@ -2318,7 +2317,7 @@ int regmap_bulk_read(struct regmap *map, unsigned int reg, void *val,
+ 					  &ival);
+ 			if (ret != 0)
+ 				return ret;
+-			memcpy(val + (i * val_bytes), &ival, val_bytes);
++			map->format.format_val(val + (i * val_bytes), ival, 0);
+ 		}
+ 	}
+ 
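The GENMASK() change above also avoids an overflowing shift once a field ends at bit 31; a small illustration (the field bounds are made up):

#include <linux/bitops.h>	/* GENMASK() */

/* Illustrative only: for a field spanning bits 31..0 the old expression
 * required BIT(32) - 1, an undefined 32-bit shift, while GENMASK(31, 0)
 * yields 0xffffffff directly; GENMASK(23, 16) yields 0x00ff0000. */
static inline unsigned int example_field_mask(unsigned int msb, unsigned int lsb)
{
	return GENMASK(msb, lsb);
}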
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 3061bb8629dc..e14363d12690 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -65,7 +65,6 @@ static int __init parse_efi_cmdline(char *str)
+ early_param("efi", parse_efi_cmdline);
+ 
+ static struct kobject *efi_kobj;
+-static struct kobject *efivars_kobj;
+ 
+ /*
+  * Let's not leave out systab information that snuck into
+@@ -212,10 +211,9 @@ static int __init efisubsys_init(void)
+ 		goto err_remove_group;
+ 
+ 	/* and the standard mountpoint for efivarfs */
+-	efivars_kobj = kobject_create_and_add("efivars", efi_kobj);
+-	if (!efivars_kobj) {
++	error = sysfs_create_mount_point(efi_kobj, "efivars");
++	if (error) {
+ 		pr_err("efivars: Subsystem registration failed.\n");
+-		error = -ENOMEM;
+ 		goto err_remove_group;
+ 	}
+ 
+diff --git a/drivers/gpio/gpio-crystalcove.c b/drivers/gpio/gpio-crystalcove.c
+index 91a7ffe83135..ab457fc00e75 100644
+--- a/drivers/gpio/gpio-crystalcove.c
++++ b/drivers/gpio/gpio-crystalcove.c
+@@ -255,6 +255,7 @@ static struct irq_chip crystalcove_irqchip = {
+ 	.irq_set_type		= crystalcove_irq_type,
+ 	.irq_bus_lock		= crystalcove_bus_lock,
+ 	.irq_bus_sync_unlock	= crystalcove_bus_sync_unlock,
++	.flags			= IRQCHIP_SKIP_SET_WAKE,
+ };
+ 
+ static irqreturn_t crystalcove_gpio_irq_handler(int irq, void *data)
+diff --git a/drivers/gpio/gpio-rcar.c b/drivers/gpio/gpio-rcar.c
+index fd3977465948..1e14a6c74ed1 100644
+--- a/drivers/gpio/gpio-rcar.c
++++ b/drivers/gpio/gpio-rcar.c
+@@ -177,8 +177,17 @@ static int gpio_rcar_irq_set_wake(struct irq_data *d, unsigned int on)
+ 	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+ 	struct gpio_rcar_priv *p = container_of(gc, struct gpio_rcar_priv,
+ 						gpio_chip);
+-
+-	irq_set_irq_wake(p->irq_parent, on);
++	int error;
++
++	if (p->irq_parent) {
++		error = irq_set_irq_wake(p->irq_parent, on);
++		if (error) {
++			dev_dbg(&p->pdev->dev,
++				"irq %u doesn't support irq_set_wake\n",
++				p->irq_parent);
++			p->irq_parent = 0;
++		}
++	}
+ 
+ 	if (!p->clk)
+ 		return 0;
+diff --git a/drivers/iio/accel/kxcjk-1013.c b/drivers/iio/accel/kxcjk-1013.c
+index 51da3692d561..5b7a860df524 100644
+--- a/drivers/iio/accel/kxcjk-1013.c
++++ b/drivers/iio/accel/kxcjk-1013.c
+@@ -1418,6 +1418,7 @@ static const struct dev_pm_ops kxcjk1013_pm_ops = {
+ static const struct acpi_device_id kx_acpi_match[] = {
+ 	{"KXCJ1013", KXCJK1013},
+ 	{"KXCJ1008", KXCJ91008},
++	{"KXCJ9000", KXCJ91008},
+ 	{"KXTJ1009", KXTJ21009},
+ 	{"SMO8500",  KXCJ91008},
+ 	{ },
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 918814cd0f80..75c01b27bd0b 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -465,14 +465,13 @@ static struct srp_fr_pool *srp_alloc_fr_pool(struct srp_target_port *target)
+  */
+ static void srp_destroy_qp(struct srp_rdma_ch *ch)
+ {
+-	struct srp_target_port *target = ch->target;
+ 	static struct ib_qp_attr attr = { .qp_state = IB_QPS_ERR };
+ 	static struct ib_recv_wr wr = { .wr_id = SRP_LAST_WR_ID };
+ 	struct ib_recv_wr *bad_wr;
+ 	int ret;
+ 
+ 	/* Destroying a QP and reusing ch->done is only safe if not connected */
+-	WARN_ON_ONCE(target->connected);
++	WARN_ON_ONCE(ch->connected);
+ 
+ 	ret = ib_modify_qp(ch->qp, &attr, IB_QP_STATE);
+ 	WARN_ONCE(ret, "ib_cm_init_qp_attr() returned %d\n", ret);
+@@ -811,35 +810,19 @@ static bool srp_queue_remove_work(struct srp_target_port *target)
+ 	return changed;
+ }
+ 
+-static bool srp_change_conn_state(struct srp_target_port *target,
+-				  bool connected)
+-{
+-	bool changed = false;
+-
+-	spin_lock_irq(&target->lock);
+-	if (target->connected != connected) {
+-		target->connected = connected;
+-		changed = true;
+-	}
+-	spin_unlock_irq(&target->lock);
+-
+-	return changed;
+-}
+-
+ static void srp_disconnect_target(struct srp_target_port *target)
+ {
+ 	struct srp_rdma_ch *ch;
+ 	int i;
+ 
+-	if (srp_change_conn_state(target, false)) {
+-		/* XXX should send SRP_I_LOGOUT request */
++	/* XXX should send SRP_I_LOGOUT request */
+ 
+-		for (i = 0; i < target->ch_count; i++) {
+-			ch = &target->ch[i];
+-			if (ch->cm_id && ib_send_cm_dreq(ch->cm_id, NULL, 0)) {
+-				shost_printk(KERN_DEBUG, target->scsi_host,
+-					     PFX "Sending CM DREQ failed\n");
+-			}
++	for (i = 0; i < target->ch_count; i++) {
++		ch = &target->ch[i];
++		ch->connected = false;
++		if (ch->cm_id && ib_send_cm_dreq(ch->cm_id, NULL, 0)) {
++			shost_printk(KERN_DEBUG, target->scsi_host,
++				     PFX "Sending CM DREQ failed\n");
+ 		}
+ 	}
+ }
+@@ -986,14 +969,26 @@ static void srp_rport_delete(struct srp_rport *rport)
+ 	srp_queue_remove_work(target);
+ }
+ 
++/**
++ * srp_connected_ch() - number of connected channels
++ * @target: SRP target port.
++ */
++static int srp_connected_ch(struct srp_target_port *target)
++{
++	int i, c = 0;
++
++	for (i = 0; i < target->ch_count; i++)
++		c += target->ch[i].connected;
++
++	return c;
++}
++
+ static int srp_connect_ch(struct srp_rdma_ch *ch, bool multich)
+ {
+ 	struct srp_target_port *target = ch->target;
+ 	int ret;
+ 
+-	WARN_ON_ONCE(!multich && target->connected);
+-
+-	target->qp_in_error = false;
++	WARN_ON_ONCE(!multich && srp_connected_ch(target) > 0);
+ 
+ 	ret = srp_lookup_path(ch);
+ 	if (ret)
+@@ -1016,7 +1011,7 @@ static int srp_connect_ch(struct srp_rdma_ch *ch, bool multich)
+ 		 */
+ 		switch (ch->status) {
+ 		case 0:
+-			srp_change_conn_state(target, true);
++			ch->connected = true;
+ 			return 0;
+ 
+ 		case SRP_PORT_REDIRECT:
+@@ -1243,13 +1238,13 @@ static int srp_rport_reconnect(struct srp_rport *rport)
+ 		for (j = 0; j < target->queue_size; ++j)
+ 			list_add(&ch->tx_ring[j]->list, &ch->free_tx);
+ 	}
++
++	target->qp_in_error = false;
++
+ 	for (i = 0; i < target->ch_count; i++) {
+ 		ch = &target->ch[i];
+-		if (ret || !ch->target) {
+-			if (i > 1)
+-				ret = 0;
++		if (ret || !ch->target)
+ 			break;
+-		}
+ 		ret = srp_connect_ch(ch, multich);
+ 		multich = true;
+ 	}
+@@ -1929,7 +1924,7 @@ static void srp_handle_qp_err(u64 wr_id, enum ib_wc_status wc_status,
+ 		return;
+ 	}
+ 
+-	if (target->connected && !target->qp_in_error) {
++	if (ch->connected && !target->qp_in_error) {
+ 		if (wr_id & LOCAL_INV_WR_ID_MASK) {
+ 			shost_printk(KERN_ERR, target->scsi_host, PFX
+ 				     "LOCAL_INV failed with status %d\n",
+@@ -2367,7 +2362,7 @@ static int srp_cm_handler(struct ib_cm_id *cm_id, struct ib_cm_event *event)
+ 	case IB_CM_DREQ_RECEIVED:
+ 		shost_printk(KERN_WARNING, target->scsi_host,
+ 			     PFX "DREQ received - connection closed\n");
+-		srp_change_conn_state(target, false);
++		ch->connected = false;
+ 		if (ib_send_cm_drep(cm_id, NULL, 0))
+ 			shost_printk(KERN_ERR, target->scsi_host,
+ 				     PFX "Sending CM DREP failed\n");
+@@ -2423,7 +2418,7 @@ static int srp_send_tsk_mgmt(struct srp_rdma_ch *ch, u64 req_tag,
+ 	struct srp_iu *iu;
+ 	struct srp_tsk_mgmt *tsk_mgmt;
+ 
+-	if (!target->connected || target->qp_in_error)
++	if (!ch->connected || target->qp_in_error)
+ 		return -1;
+ 
+ 	init_completion(&ch->tsk_mgmt_done);
+@@ -2797,7 +2792,8 @@ static int srp_add_target(struct srp_host *host, struct srp_target_port *target)
+ 	scsi_scan_target(&target->scsi_host->shost_gendev,
+ 			 0, target->scsi_id, SCAN_WILD_CARD, 0);
+ 
+-	if (!target->connected || target->qp_in_error) {
++	if (srp_connected_ch(target) < target->ch_count ||
++	    target->qp_in_error) {
+ 		shost_printk(KERN_INFO, target->scsi_host,
+ 			     PFX "SCSI scan failed - removing SCSI host\n");
+ 		srp_queue_remove_work(target);
+@@ -3172,11 +3168,11 @@ static ssize_t srp_create_target(struct device *dev,
+ 
+ 	ret = srp_parse_options(buf, target);
+ 	if (ret)
+-		goto err;
++		goto out;
+ 
+ 	ret = scsi_init_shared_tag_map(target_host, target_host->can_queue);
+ 	if (ret)
+-		goto err;
++		goto out;
+ 
+ 	target->req_ring_size = target->queue_size - SRP_TSK_MGMT_SQ_SIZE;
+ 
+@@ -3187,7 +3183,7 @@ static ssize_t srp_create_target(struct device *dev,
+ 			     be64_to_cpu(target->ioc_guid),
+ 			     be64_to_cpu(target->initiator_ext));
+ 		ret = -EEXIST;
+-		goto err;
++		goto out;
+ 	}
+ 
+ 	if (!srp_dev->has_fmr && !srp_dev->has_fr && !target->allow_ext_sg &&
+@@ -3208,7 +3204,7 @@ static ssize_t srp_create_target(struct device *dev,
+ 	spin_lock_init(&target->lock);
+ 	ret = ib_query_gid(ibdev, host->port, 0, &target->sgid);
+ 	if (ret)
+-		goto err;
++		goto out;
+ 
+ 	ret = -ENOMEM;
+ 	target->ch_count = max_t(unsigned, num_online_nodes(),
+@@ -3219,7 +3215,7 @@ static ssize_t srp_create_target(struct device *dev,
+ 	target->ch = kcalloc(target->ch_count, sizeof(*target->ch),
+ 			     GFP_KERNEL);
+ 	if (!target->ch)
+-		goto err;
++		goto out;
+ 
+ 	node_idx = 0;
+ 	for_each_online_node(node) {
+@@ -3315,9 +3311,6 @@ err_disconnect:
+ 	}
+ 
+ 	kfree(target->ch);
+-
+-err:
+-	scsi_host_put(target_host);
+ 	goto out;
+ }
+ 
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
+index a611556406ac..e690847a46dd 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.h
++++ b/drivers/infiniband/ulp/srp/ib_srp.h
+@@ -170,6 +170,7 @@ struct srp_rdma_ch {
+ 
+ 	struct completion	tsk_mgmt_done;
+ 	u8			tsk_mgmt_status;
++	bool			connected;
+ };
+ 
+ /**
+@@ -214,7 +215,6 @@ struct srp_target_port {
+ 	__be16			pkey;
+ 
+ 	u32			rq_tmo_jiffies;
+-	bool			connected;
+ 
+ 	int			zero_req_lim;
+ 
+diff --git a/drivers/input/touchscreen/pixcir_i2c_ts.c b/drivers/input/touchscreen/pixcir_i2c_ts.c
+index 2c2107147319..8f3e243a62bf 100644
+--- a/drivers/input/touchscreen/pixcir_i2c_ts.c
++++ b/drivers/input/touchscreen/pixcir_i2c_ts.c
+@@ -78,7 +78,7 @@ static void pixcir_ts_parse(struct pixcir_i2c_ts_data *tsdata,
+ 	}
+ 
+ 	ret = i2c_master_recv(tsdata->client, rdbuf, readsize);
+-	if (ret != sizeof(rdbuf)) {
++	if (ret != readsize) {
+ 		dev_err(&tsdata->client->dev,
+ 			"%s: i2c_master_recv failed(), ret=%d\n",
+ 			__func__, ret);
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index 728681debdbe..7fb2a19ac649 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -187,6 +187,7 @@ void led_classdev_resume(struct led_classdev *led_cdev)
+ }
+ EXPORT_SYMBOL_GPL(led_classdev_resume);
+ 
++#ifdef CONFIG_PM_SLEEP
+ static int led_suspend(struct device *dev)
+ {
+ 	struct led_classdev *led_cdev = dev_get_drvdata(dev);
+@@ -206,11 +207,9 @@ static int led_resume(struct device *dev)
+ 
+ 	return 0;
+ }
++#endif
+ 
+-static const struct dev_pm_ops leds_class_dev_pm_ops = {
+-	.suspend        = led_suspend,
+-	.resume         = led_resume,
+-};
++static SIMPLE_DEV_PM_OPS(leds_class_dev_pm_ops, led_suspend, led_resume);
+ 
+ static int match_name(struct device *dev, const void *data)
+ {
+diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c
+index 1e99ef6a54a2..b2b9f4382d77 100644
+--- a/drivers/misc/mei/client.c
++++ b/drivers/misc/mei/client.c
+@@ -699,7 +699,7 @@ void mei_host_client_init(struct work_struct *work)
+ bool mei_hbuf_acquire(struct mei_device *dev)
+ {
+ 	if (mei_pg_state(dev) == MEI_PG_ON ||
+-	    dev->pg_event == MEI_PG_EVENT_WAIT) {
++	    mei_pg_in_transition(dev)) {
+ 		dev_dbg(dev->dev, "device is in pg\n");
+ 		return false;
+ 	}
+diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c
+index 6fb75e62a764..43d7101ff993 100644
+--- a/drivers/misc/mei/hw-me.c
++++ b/drivers/misc/mei/hw-me.c
+@@ -663,11 +663,27 @@ int mei_me_pg_exit_sync(struct mei_device *dev)
+ 	mutex_lock(&dev->device_lock);
+ 
+ reply:
+-	if (dev->pg_event == MEI_PG_EVENT_RECEIVED)
+-		ret = mei_hbm_pg(dev, MEI_PG_ISOLATION_EXIT_RES_CMD);
++	if (dev->pg_event != MEI_PG_EVENT_RECEIVED) {
++		ret = -ETIME;
++		goto out;
++	}
++
++	dev->pg_event = MEI_PG_EVENT_INTR_WAIT;
++	ret = mei_hbm_pg(dev, MEI_PG_ISOLATION_EXIT_RES_CMD);
++	if (ret)
++		return ret;
++
++	mutex_unlock(&dev->device_lock);
++	wait_event_timeout(dev->wait_pg,
++		dev->pg_event == MEI_PG_EVENT_INTR_RECEIVED, timeout);
++	mutex_lock(&dev->device_lock);
++
++	if (dev->pg_event == MEI_PG_EVENT_INTR_RECEIVED)
++		ret = 0;
+ 	else
+ 		ret = -ETIME;
+ 
++out:
+ 	dev->pg_event = MEI_PG_EVENT_IDLE;
+ 	hw->pg_state = MEI_PG_OFF;
+ 
+@@ -675,6 +691,19 @@ reply:
+ }
+ 
+ /**
++ * mei_me_pg_in_transition - is device now in pg transition
++ *
++ * @dev: the device structure
++ *
++ * Return: true if in pg transition, false otherwise
++ */
++static bool mei_me_pg_in_transition(struct mei_device *dev)
++{
++	return dev->pg_event >= MEI_PG_EVENT_WAIT &&
++	       dev->pg_event <= MEI_PG_EVENT_INTR_WAIT;
++}
++
++/**
+  * mei_me_pg_is_enabled - detect if PG is supported by HW
+  *
+  * @dev: the device structure
+@@ -705,6 +734,24 @@ notsupported:
+ }
+ 
+ /**
++ * mei_me_pg_intr - perform pg processing in interrupt thread handler
++ *
++ * @dev: the device structure
++ */
++static void mei_me_pg_intr(struct mei_device *dev)
++{
++	struct mei_me_hw *hw = to_me_hw(dev);
++
++	if (dev->pg_event != MEI_PG_EVENT_INTR_WAIT)
++		return;
++
++	dev->pg_event = MEI_PG_EVENT_INTR_RECEIVED;
++	hw->pg_state = MEI_PG_OFF;
++	if (waitqueue_active(&dev->wait_pg))
++		wake_up(&dev->wait_pg);
++}
++
++/**
+  * mei_me_irq_quick_handler - The ISR of the MEI device
+  *
+  * @irq: The irq number
+@@ -761,6 +808,8 @@ irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id)
+ 		goto end;
+ 	}
+ 
++	mei_me_pg_intr(dev);
++
+ 	/*  check if we need to start the dev */
+ 	if (!mei_host_is_ready(dev)) {
+ 		if (mei_hw_is_ready(dev)) {
+@@ -797,9 +846,10 @@ irqreturn_t mei_me_irq_thread_handler(int irq, void *dev_id)
+ 	/*
+ 	 * During PG handshake only allowed write is the replay to the
+ 	 * PG exit message, so block calling write function
+-	 * if the pg state is not idle
++	 * if the pg event is in PG handshake
+ 	 */
+-	if (dev->pg_event == MEI_PG_EVENT_IDLE) {
++	if (dev->pg_event != MEI_PG_EVENT_WAIT &&
++	    dev->pg_event != MEI_PG_EVENT_RECEIVED) {
+ 		rets = mei_irq_write_handler(dev, &complete_list);
+ 		dev->hbuf_is_ready = mei_hbuf_is_ready(dev);
+ 	}
+@@ -824,6 +874,7 @@ static const struct mei_hw_ops mei_me_hw_ops = {
+ 	.hw_config = mei_me_hw_config,
+ 	.hw_start = mei_me_hw_start,
+ 
++	.pg_in_transition = mei_me_pg_in_transition,
+ 	.pg_is_enabled = mei_me_pg_is_enabled,
+ 
+ 	.intr_clear = mei_me_intr_clear,
+diff --git a/drivers/misc/mei/hw-txe.c b/drivers/misc/mei/hw-txe.c
+index 7abafe7d120d..bae680c648ff 100644
+--- a/drivers/misc/mei/hw-txe.c
++++ b/drivers/misc/mei/hw-txe.c
+@@ -16,6 +16,7 @@
+ 
+ #include <linux/pci.h>
+ #include <linux/jiffies.h>
++#include <linux/ktime.h>
+ #include <linux/delay.h>
+ #include <linux/kthread.h>
+ #include <linux/irqreturn.h>
+@@ -218,26 +219,25 @@ static u32 mei_txe_aliveness_get(struct mei_device *dev)
+  *
+  * Polls for HICR_HOST_ALIVENESS_RESP.ALIVENESS_RESP to be set
+  *
+- * Return: > 0 if the expected value was received, -ETIME otherwise
++ * Return: 0 if the expected value was received, -ETIME otherwise
+  */
+ static int mei_txe_aliveness_poll(struct mei_device *dev, u32 expected)
+ {
+ 	struct mei_txe_hw *hw = to_txe_hw(dev);
+-	int t = 0;
++	ktime_t stop, start;
+ 
++	start = ktime_get();
++	stop = ktime_add(start, ms_to_ktime(SEC_ALIVENESS_WAIT_TIMEOUT));
+ 	do {
+ 		hw->aliveness = mei_txe_aliveness_get(dev);
+ 		if (hw->aliveness == expected) {
+ 			dev->pg_event = MEI_PG_EVENT_IDLE;
+-			dev_dbg(dev->dev,
+-				"aliveness settled after %d msecs\n", t);
+-			return t;
++			dev_dbg(dev->dev, "aliveness settled after %lld usecs\n",
++				ktime_to_us(ktime_sub(ktime_get(), start)));
++			return 0;
+ 		}
+-		mutex_unlock(&dev->device_lock);
+-		msleep(MSEC_PER_SEC / 5);
+-		mutex_lock(&dev->device_lock);
+-		t += MSEC_PER_SEC / 5;
+-	} while (t < SEC_ALIVENESS_WAIT_TIMEOUT);
++		usleep_range(20, 50);
++	} while (ktime_before(ktime_get(), stop));
+ 
+ 	dev->pg_event = MEI_PG_EVENT_IDLE;
+ 	dev_err(dev->dev, "aliveness timed out\n");
+@@ -302,6 +302,18 @@ int mei_txe_aliveness_set_sync(struct mei_device *dev, u32 req)
+ }
+ 
+ /**
++ * mei_txe_pg_in_transition - is device now in pg transition
++ *
++ * @dev: the device structure
++ *
++ * Return: true if in pg transition, false otherwise
++ */
++static bool mei_txe_pg_in_transition(struct mei_device *dev)
++{
++	return dev->pg_event == MEI_PG_EVENT_WAIT;
++}
++
++/**
+  * mei_txe_pg_is_enabled - detect if PG is supported by HW
+  *
+  * @dev: the device structure
+@@ -1138,6 +1150,7 @@ static const struct mei_hw_ops mei_txe_hw_ops = {
+ 	.hw_config = mei_txe_hw_config,
+ 	.hw_start = mei_txe_hw_start,
+ 
++	.pg_in_transition = mei_txe_pg_in_transition,
+ 	.pg_is_enabled = mei_txe_pg_is_enabled,
+ 
+ 	.intr_clear = mei_txe_intr_clear,
+diff --git a/drivers/misc/mei/mei_dev.h b/drivers/misc/mei/mei_dev.h
+index f066ecd71939..f84c39ee28a8 100644
+--- a/drivers/misc/mei/mei_dev.h
++++ b/drivers/misc/mei/mei_dev.h
+@@ -271,6 +271,7 @@ struct mei_cl {
+ 
+  * @fw_status        : get fw status registers
+  * @pg_state         : power gating state of the device
++ * @pg_in_transition : is device now in pg transition
+  * @pg_is_enabled    : is power gating enabled
+ 
+  * @intr_clear       : clear pending interrupts
+@@ -300,6 +301,7 @@ struct mei_hw_ops {
+ 
+ 	int (*fw_status)(struct mei_device *dev, struct mei_fw_status *fw_sts);
+ 	enum mei_pg_state (*pg_state)(struct mei_device *dev);
++	bool (*pg_in_transition)(struct mei_device *dev);
+ 	bool (*pg_is_enabled)(struct mei_device *dev);
+ 
+ 	void (*intr_clear)(struct mei_device *dev);
+@@ -398,11 +400,15 @@ struct mei_cl_device {
+  * @MEI_PG_EVENT_IDLE: the driver is not in power gating transition
+  * @MEI_PG_EVENT_WAIT: the driver is waiting for a pg event to complete
+  * @MEI_PG_EVENT_RECEIVED: the driver received pg event
++ * @MEI_PG_EVENT_INTR_WAIT: the driver is waiting for a pg event interrupt
++ * @MEI_PG_EVENT_INTR_RECEIVED: the driver received pg event interrupt
+  */
+ enum mei_pg_event {
+ 	MEI_PG_EVENT_IDLE,
+ 	MEI_PG_EVENT_WAIT,
+ 	MEI_PG_EVENT_RECEIVED,
++	MEI_PG_EVENT_INTR_WAIT,
++	MEI_PG_EVENT_INTR_RECEIVED,
+ };
+ 
+ /**
+@@ -717,6 +723,11 @@ static inline enum mei_pg_state mei_pg_state(struct mei_device *dev)
+ 	return dev->ops->pg_state(dev);
+ }
+ 
++static inline bool mei_pg_in_transition(struct mei_device *dev)
++{
++	return dev->ops->pg_in_transition(dev);
++}
++
+ static inline bool mei_pg_is_enabled(struct mei_device *dev)
+ {
+ 	return dev->ops->pg_is_enabled(dev);
+diff --git a/drivers/mtd/maps/dc21285.c b/drivers/mtd/maps/dc21285.c
+index f8a7dd14cee0..70a3db3ab856 100644
+--- a/drivers/mtd/maps/dc21285.c
++++ b/drivers/mtd/maps/dc21285.c
+@@ -38,9 +38,9 @@ static void nw_en_write(void)
+ 	 * we want to write a bit pattern XXX1 to Xilinx to enable
+ 	 * the write gate, which will be open for about the next 2ms.
+ 	 */
+-	spin_lock_irqsave(&nw_gpio_lock, flags);
++	raw_spin_lock_irqsave(&nw_gpio_lock, flags);
+ 	nw_cpld_modify(CPLD_FLASH_WR_ENABLE, CPLD_FLASH_WR_ENABLE);
+-	spin_unlock_irqrestore(&nw_gpio_lock, flags);
++	raw_spin_unlock_irqrestore(&nw_gpio_lock, flags);
+ 
+ 	/*
+ 	 * let the ISA bus to catch on...
+diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
+index 2b0c52870999..df7c6c70757a 100644
+--- a/drivers/mtd/mtd_blkdevs.c
++++ b/drivers/mtd/mtd_blkdevs.c
+@@ -197,6 +197,7 @@ static int blktrans_open(struct block_device *bdev, fmode_t mode)
+ 		return -ERESTARTSYS; /* FIXME: busy loop! -arnd*/
+ 
+ 	mutex_lock(&dev->lock);
++	mutex_lock(&mtd_table_mutex);
+ 
+ 	if (dev->open)
+ 		goto unlock;
+@@ -220,6 +221,7 @@ static int blktrans_open(struct block_device *bdev, fmode_t mode)
+ 
+ unlock:
+ 	dev->open++;
++	mutex_unlock(&mtd_table_mutex);
+ 	mutex_unlock(&dev->lock);
+ 	blktrans_dev_put(dev);
+ 	return ret;
+@@ -230,6 +232,7 @@ error_release:
+ error_put:
+ 	module_put(dev->tr->owner);
+ 	kref_put(&dev->ref, blktrans_dev_release);
++	mutex_unlock(&mtd_table_mutex);
+ 	mutex_unlock(&dev->lock);
+ 	blktrans_dev_put(dev);
+ 	return ret;
+@@ -243,6 +246,7 @@ static void blktrans_release(struct gendisk *disk, fmode_t mode)
+ 		return;
+ 
+ 	mutex_lock(&dev->lock);
++	mutex_lock(&mtd_table_mutex);
+ 
+ 	if (--dev->open)
+ 		goto unlock;
+@@ -256,6 +260,7 @@ static void blktrans_release(struct gendisk *disk, fmode_t mode)
+ 		__put_mtd_device(dev->mtd);
+ 	}
+ unlock:
++	mutex_unlock(&mtd_table_mutex);
+ 	mutex_unlock(&dev->lock);
+ 	blktrans_dev_put(dev);
+ }
+diff --git a/drivers/of/address.c b/drivers/of/address.c
+index 78a7dcbec7d8..6906a3f61bd8 100644
+--- a/drivers/of/address.c
++++ b/drivers/of/address.c
+@@ -765,7 +765,7 @@ unsigned long __weak pci_address_to_pio(phys_addr_t address)
+ 	spin_lock(&io_range_lock);
+ 	list_for_each_entry(res, &io_range_list, list) {
+ 		if (address >= res->start && address < res->start + res->size) {
+-			addr = res->start - address + offset;
++			addr = address - res->start + offset;
+ 			break;
+ 		}
+ 		offset += res->size;
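A worked example of the sign fix above, with made-up numbers: for an I/O range whose CPU address starts at 0x1000 and whose PIO window starts at offset 0, translating CPU address 0x1010 must yield PIO address 0x10.

/* Hypothetical illustration of the corrected translation:
 *   res->start = 0x1000, offset = 0, address = 0x1010
 *   old: res->start - address + offset  underflows (wrong sign)
 *   new: address - res->start + offset = 0x10
 */
static unsigned long example_pio_addr(unsigned long address,
				      unsigned long range_start,
				      unsigned long offset)
{
	return address - range_start + offset;
}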
+diff --git a/drivers/pci/Kconfig b/drivers/pci/Kconfig
+index 7a8f1c5e65af..73de4efcbe6e 100644
+--- a/drivers/pci/Kconfig
++++ b/drivers/pci/Kconfig
+@@ -1,6 +1,10 @@
+ #
+ # PCI configuration
+ #
++config PCI_BUS_ADDR_T_64BIT
++	def_bool y if (ARCH_DMA_ADDR_T_64BIT || 64BIT)
++	depends on PCI
++
+ config PCI_MSI
+ 	bool "Message Signaled Interrupts (MSI and MSI-X)"
+ 	depends on PCI
+diff --git a/drivers/pci/bus.c b/drivers/pci/bus.c
+index 90fa3a78fb7c..6fbd3f2b5992 100644
+--- a/drivers/pci/bus.c
++++ b/drivers/pci/bus.c
+@@ -92,11 +92,11 @@ void pci_bus_remove_resources(struct pci_bus *bus)
+ }
+ 
+ static struct pci_bus_region pci_32_bit = {0, 0xffffffffULL};
+-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
++#ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
+ static struct pci_bus_region pci_64_bit = {0,
+-				(dma_addr_t) 0xffffffffffffffffULL};
+-static struct pci_bus_region pci_high = {(dma_addr_t) 0x100000000ULL,
+-				(dma_addr_t) 0xffffffffffffffffULL};
++				(pci_bus_addr_t) 0xffffffffffffffffULL};
++static struct pci_bus_region pci_high = {(pci_bus_addr_t) 0x100000000ULL,
++				(pci_bus_addr_t) 0xffffffffffffffffULL};
+ #endif
+ 
+ /*
+@@ -200,7 +200,7 @@ int pci_bus_alloc_resource(struct pci_bus *bus, struct resource *res,
+ 					  resource_size_t),
+ 		void *alignf_data)
+ {
+-#ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
++#ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
+ 	int rc;
+ 
+ 	if (res->flags & IORESOURCE_MEM_64) {
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 0ebf754fc177..6d6868811e56 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -176,20 +176,17 @@ static void pcie_wait_cmd(struct controller *ctrl)
+ 			  jiffies_to_msecs(jiffies - ctrl->cmd_started));
+ }
+ 
+-/**
+- * pcie_write_cmd - Issue controller command
+- * @ctrl: controller to which the command is issued
+- * @cmd:  command value written to slot control register
+- * @mask: bitmask of slot control register to be modified
+- */
+-static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask)
++static void pcie_do_write_cmd(struct controller *ctrl, u16 cmd,
++			      u16 mask, bool wait)
+ {
+ 	struct pci_dev *pdev = ctrl_dev(ctrl);
+ 	u16 slot_ctrl;
+ 
+ 	mutex_lock(&ctrl->ctrl_lock);
+ 
+-	/* Wait for any previous command that might still be in progress */
++	/*
++	 * Always wait for any previous command that might still be in progress
++	 */
+ 	pcie_wait_cmd(ctrl);
+ 
+ 	pcie_capability_read_word(pdev, PCI_EXP_SLTCTL, &slot_ctrl);
+@@ -201,9 +198,33 @@ static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask)
+ 	ctrl->cmd_started = jiffies;
+ 	ctrl->slot_ctrl = slot_ctrl;
+ 
++	/*
++	 * Optionally wait for the hardware to be ready for a new command,
++	 * indicating completion of the above issued command.
++	 */
++	if (wait)
++		pcie_wait_cmd(ctrl);
++
+ 	mutex_unlock(&ctrl->ctrl_lock);
+ }
+ 
++/**
++ * pcie_write_cmd - Issue controller command
++ * @ctrl: controller to which the command is issued
++ * @cmd:  command value written to slot control register
++ * @mask: bitmask of slot control register to be modified
++ */
++static void pcie_write_cmd(struct controller *ctrl, u16 cmd, u16 mask)
++{
++	pcie_do_write_cmd(ctrl, cmd, mask, true);
++}
++
++/* Same as above without waiting for the hardware to latch */
++static void pcie_write_cmd_nowait(struct controller *ctrl, u16 cmd, u16 mask)
++{
++	pcie_do_write_cmd(ctrl, cmd, mask, false);
++}
++
+ bool pciehp_check_link_active(struct controller *ctrl)
+ {
+ 	struct pci_dev *pdev = ctrl_dev(ctrl);
+@@ -422,7 +443,7 @@ void pciehp_set_attention_status(struct slot *slot, u8 value)
+ 	default:
+ 		return;
+ 	}
+-	pcie_write_cmd(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC);
++	pcie_write_cmd_nowait(ctrl, slot_cmd, PCI_EXP_SLTCTL_AIC);
+ 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+ 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, slot_cmd);
+ }
+@@ -434,7 +455,8 @@ void pciehp_green_led_on(struct slot *slot)
+ 	if (!PWR_LED(ctrl))
+ 		return;
+ 
+-	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON, PCI_EXP_SLTCTL_PIC);
++	pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_ON,
++			      PCI_EXP_SLTCTL_PIC);
+ 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+ 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
+ 		 PCI_EXP_SLTCTL_PWR_IND_ON);
+@@ -447,7 +469,8 @@ void pciehp_green_led_off(struct slot *slot)
+ 	if (!PWR_LED(ctrl))
+ 		return;
+ 
+-	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF, PCI_EXP_SLTCTL_PIC);
++	pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_OFF,
++			      PCI_EXP_SLTCTL_PIC);
+ 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+ 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
+ 		 PCI_EXP_SLTCTL_PWR_IND_OFF);
+@@ -460,7 +483,8 @@ void pciehp_green_led_blink(struct slot *slot)
+ 	if (!PWR_LED(ctrl))
+ 		return;
+ 
+-	pcie_write_cmd(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK, PCI_EXP_SLTCTL_PIC);
++	pcie_write_cmd_nowait(ctrl, PCI_EXP_SLTCTL_PWR_IND_BLINK,
++			      PCI_EXP_SLTCTL_PIC);
+ 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+ 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL,
+ 		 PCI_EXP_SLTCTL_PWR_IND_BLINK);
+@@ -613,7 +637,7 @@ void pcie_enable_notification(struct controller *ctrl)
+ 		PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_CCIE |
+ 		PCI_EXP_SLTCTL_DLLSCE);
+ 
+-	pcie_write_cmd(ctrl, cmd, mask);
++	pcie_write_cmd_nowait(ctrl, cmd, mask);
+ 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+ 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, cmd);
+ }
+@@ -664,7 +688,7 @@ int pciehp_reset_slot(struct slot *slot, int probe)
+ 	pci_reset_bridge_secondary_bus(ctrl->pcie->port);
+ 
+ 	pcie_capability_write_word(pdev, PCI_EXP_SLTSTA, stat_mask);
+-	pcie_write_cmd(ctrl, ctrl_mask, ctrl_mask);
++	pcie_write_cmd_nowait(ctrl, ctrl_mask, ctrl_mask);
+ 	ctrl_dbg(ctrl, "%s: SLOTCTRL %x write cmd %x\n", __func__,
+ 		 pci_pcie_cap(ctrl->pcie->port) + PCI_EXP_SLTCTL, ctrl_mask);
+ 	if (pciehp_poll_mode)
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index acc4b6ef78c4..c44393f26fd3 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4324,6 +4324,17 @@ bool pci_device_is_present(struct pci_dev *pdev)
+ }
+ EXPORT_SYMBOL_GPL(pci_device_is_present);
+ 
++void pci_ignore_hotplug(struct pci_dev *dev)
++{
++	struct pci_dev *bridge = dev->bus->self;
++
++	dev->ignore_hotplug = 1;
++	/* Propagate the "ignore hotplug" setting to the parent bridge. */
++	if (bridge)
++		bridge->ignore_hotplug = 1;
++}
++EXPORT_SYMBOL_GPL(pci_ignore_hotplug);
++
+ #define RESOURCE_ALIGNMENT_PARAM_SIZE COMMAND_LINE_SIZE
+ static char resource_alignment_param[RESOURCE_ALIGNMENT_PARAM_SIZE] = {0};
+ static DEFINE_SPINLOCK(resource_alignment_lock);
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 6675a7a1b9fc..c91185721345 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -254,8 +254,8 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
+ 	}
+ 
+ 	if (res->flags & IORESOURCE_MEM_64) {
+-		if ((sizeof(dma_addr_t) < 8 || sizeof(resource_size_t) < 8) &&
+-		    sz64 > 0x100000000ULL) {
++		if ((sizeof(pci_bus_addr_t) < 8 || sizeof(resource_size_t) < 8)
++		    && sz64 > 0x100000000ULL) {
+ 			res->flags |= IORESOURCE_UNSET | IORESOURCE_DISABLED;
+ 			res->start = 0;
+ 			res->end = 0;
+@@ -264,7 +264,7 @@ int __pci_read_base(struct pci_dev *dev, enum pci_bar_type type,
+ 			goto out;
+ 		}
+ 
+-		if ((sizeof(dma_addr_t) < 8) && l) {
++		if ((sizeof(pci_bus_addr_t) < 8) && l) {
+ 			/* Above 32-bit boundary; try to reallocate */
+ 			res->flags |= IORESOURCE_UNSET;
+ 			res->start = 0;
+@@ -399,7 +399,7 @@ static void pci_read_bridge_mmio_pref(struct pci_bus *child)
+ 	struct pci_dev *dev = child->self;
+ 	u16 mem_base_lo, mem_limit_lo;
+ 	u64 base64, limit64;
+-	dma_addr_t base, limit;
++	pci_bus_addr_t base, limit;
+ 	struct pci_bus_region region;
+ 	struct resource *res;
+ 
+@@ -426,8 +426,8 @@ static void pci_read_bridge_mmio_pref(struct pci_bus *child)
+ 		}
+ 	}
+ 
+-	base = (dma_addr_t) base64;
+-	limit = (dma_addr_t) limit64;
++	base = (pci_bus_addr_t) base64;
++	limit = (pci_bus_addr_t) limit64;
+ 
+ 	if (base != base64) {
+ 		dev_err(&dev->dev, "can't handle bridge window above 4GB (bus address %#010llx)\n",
+diff --git a/drivers/pcmcia/topic.h b/drivers/pcmcia/topic.h
+index 615a45a8fe86..582688fe7505 100644
+--- a/drivers/pcmcia/topic.h
++++ b/drivers/pcmcia/topic.h
+@@ -104,6 +104,9 @@
+ #define TOPIC_EXCA_IF_CONTROL		0x3e	/* 8 bit */
+ #define TOPIC_EXCA_IFC_33V_ENA		0x01
+ 
++#define TOPIC_PCI_CFG_PPBCN		0x3e	/* 16-bit */
++#define TOPIC_PCI_CFG_PPBCN_WBEN	0x0400
++
+ static void topic97_zoom_video(struct pcmcia_socket *sock, int onoff)
+ {
+ 	struct yenta_socket *socket = container_of(sock, struct yenta_socket, socket);
+@@ -138,6 +141,7 @@ static int topic97_override(struct yenta_socket *socket)
+ static int topic95_override(struct yenta_socket *socket)
+ {
+ 	u8 fctrl;
++	u16 ppbcn;
+ 
+ 	/* enable 3.3V support for 16bit cards */
+ 	fctrl = exca_readb(socket, TOPIC_EXCA_IF_CONTROL);
+@@ -146,6 +150,18 @@ static int topic95_override(struct yenta_socket *socket)
+ 	/* tell yenta to use exca registers to power 16bit cards */
+ 	socket->flags |= YENTA_16BIT_POWER_EXCA | YENTA_16BIT_POWER_DF;
+ 
++	/* Disable write buffers to prevent lockups under load with numerous
++	   Cardbus cards, observed on Tecra 500CDT and reported elsewhere on the
++	   net.  This is not a power-on default according to the datasheet
++	   but some BIOSes seem to set it. */
++	if (pci_read_config_word(socket->dev, TOPIC_PCI_CFG_PPBCN, &ppbcn) == 0
++	    && socket->dev->revision <= 7
++	    && (ppbcn & TOPIC_PCI_CFG_PPBCN_WBEN)) {
++		ppbcn &= ~TOPIC_PCI_CFG_PPBCN_WBEN;
++		pci_write_config_word(socket->dev, TOPIC_PCI_CFG_PPBCN, ppbcn);
++		dev_info(&socket->dev->dev, "Disabled ToPIC95 Cardbus write buffers.\n");
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pnp/system.c b/drivers/pnp/system.c
+index 49c1720df59a..515f33882ab8 100644
+--- a/drivers/pnp/system.c
++++ b/drivers/pnp/system.c
+@@ -7,6 +7,7 @@
+  *	Bjorn Helgaas <bjorn.helgaas@hp.com>
+  */
+ 
++#include <linux/acpi.h>
+ #include <linux/pnp.h>
+ #include <linux/device.h>
+ #include <linux/init.h>
+@@ -22,25 +23,41 @@ static const struct pnp_device_id pnp_dev_table[] = {
+ 	{"", 0}
+ };
+ 
++#ifdef CONFIG_ACPI
++static bool __reserve_range(u64 start, unsigned int length, bool io, char *desc)
++{
++	u8 space_id = io ? ACPI_ADR_SPACE_SYSTEM_IO : ACPI_ADR_SPACE_SYSTEM_MEMORY;
++	return !acpi_reserve_region(start, length, space_id, IORESOURCE_BUSY, desc);
++}
++#else
++static bool __reserve_range(u64 start, unsigned int length, bool io, char *desc)
++{
++	struct resource *res;
++
++	res = io ? request_region(start, length, desc) :
++		request_mem_region(start, length, desc);
++	if (res) {
++		res->flags &= ~IORESOURCE_BUSY;
++		return true;
++	}
++	return false;
++}
++#endif
++
+ static void reserve_range(struct pnp_dev *dev, struct resource *r, int port)
+ {
+ 	char *regionid;
+ 	const char *pnpid = dev_name(&dev->dev);
+ 	resource_size_t start = r->start, end = r->end;
+-	struct resource *res;
++	bool reserved;
+ 
+ 	regionid = kmalloc(16, GFP_KERNEL);
+ 	if (!regionid)
+ 		return;
+ 
+ 	snprintf(regionid, 16, "pnp %s", pnpid);
+-	if (port)
+-		res = request_region(start, end - start + 1, regionid);
+-	else
+-		res = request_mem_region(start, end - start + 1, regionid);
+-	if (res)
+-		res->flags &= ~IORESOURCE_BUSY;
+-	else
++	reserved = __reserve_range(start, end - start + 1, !!port, regionid);
++	if (!reserved)
+ 		kfree(regionid);
+ 
+ 	/*
+@@ -49,7 +66,7 @@ static void reserve_range(struct pnp_dev *dev, struct resource *r, int port)
+ 	 * have double reservations.
+ 	 */
+ 	dev_info(&dev->dev, "%pR %s reserved\n", r,
+-		 res ? "has been" : "could not be");
++		 reserved ? "has been" : "could not be");
+ }
+ 
+ static void reserve_resources_of_dev(struct pnp_dev *dev)
+diff --git a/drivers/power/power_supply_core.c b/drivers/power/power_supply_core.c
+index 2ed4a4a6b3c5..4bc0c7f459a5 100644
+--- a/drivers/power/power_supply_core.c
++++ b/drivers/power/power_supply_core.c
+@@ -30,6 +30,8 @@ EXPORT_SYMBOL_GPL(power_supply_notifier);
+ 
+ static struct device_type power_supply_dev_type;
+ 
++#define POWER_SUPPLY_DEFERRED_REGISTER_TIME	msecs_to_jiffies(10)
++
+ static bool __power_supply_is_supplied_by(struct power_supply *supplier,
+ 					 struct power_supply *supply)
+ {
+@@ -121,6 +123,30 @@ void power_supply_changed(struct power_supply *psy)
+ }
+ EXPORT_SYMBOL_GPL(power_supply_changed);
+ 
++/*
++ * Notify that power supply was registered after parent finished the probing.
++ *
++ * Often power supply is registered from driver's probe function. However
++ * calling power_supply_changed() directly from power_supply_register()
++ * would lead to execution of get_property() function provided by the driver
++ * too early - before the probe ends.
++ *
++ * Avoid that by waiting on parent's mutex.
++ */
++static void power_supply_deferred_register_work(struct work_struct *work)
++{
++	struct power_supply *psy = container_of(work, struct power_supply,
++						deferred_register_work.work);
++
++	if (psy->dev.parent)
++		mutex_lock(&psy->dev.parent->mutex);
++
++	power_supply_changed(psy);
++
++	if (psy->dev.parent)
++		mutex_unlock(&psy->dev.parent->mutex);
++}
++
+ #ifdef CONFIG_OF
+ #include <linux/of.h>
+ 
+@@ -645,6 +671,10 @@ __power_supply_register(struct device *parent,
+ 	struct power_supply *psy;
+ 	int rc;
+ 
++	if (!parent)
++		pr_warn("%s: Expected proper parent device for '%s'\n",
++			__func__, desc->name);
++
+ 	psy = kzalloc(sizeof(*psy), GFP_KERNEL);
+ 	if (!psy)
+ 		return ERR_PTR(-ENOMEM);
+@@ -659,7 +689,6 @@ __power_supply_register(struct device *parent,
+ 	dev->release = power_supply_dev_release;
+ 	dev_set_drvdata(dev, psy);
+ 	psy->desc = desc;
+-	atomic_inc(&psy->use_cnt);
+ 	if (cfg) {
+ 		psy->drv_data = cfg->drv_data;
+ 		psy->of_node = cfg->of_node;
+@@ -672,6 +701,8 @@ __power_supply_register(struct device *parent,
+ 		goto dev_set_name_failed;
+ 
+ 	INIT_WORK(&psy->changed_work, power_supply_changed_work);
++	INIT_DELAYED_WORK(&psy->deferred_register_work,
++			  power_supply_deferred_register_work);
+ 
+ 	rc = power_supply_check_supplies(psy);
+ 	if (rc) {
+@@ -700,7 +731,20 @@ __power_supply_register(struct device *parent,
+ 	if (rc)
+ 		goto create_triggers_failed;
+ 
+-	power_supply_changed(psy);
++	/*
++	 * Update use_cnt after any uevents (most notably from device_add()).
++	 * We are still here during the driver's probe, but
++	 * power_supply_uevent() calls back the driver's get_property
++	 * method, so:
++	 * 1. Driver has not assigned the returned struct power_supply,
++	 * 2. Driver could not finish initialization (anything in its probe
++	 *    after calling power_supply_register()).
++	 */
++	atomic_inc(&psy->use_cnt);
++
++	queue_delayed_work(system_power_efficient_wq,
++			   &psy->deferred_register_work,
++			   POWER_SUPPLY_DEFERRED_REGISTER_TIME);
+ 
+ 	return psy;
+ 
+@@ -720,7 +764,8 @@ dev_set_name_failed:
+ 
+ /**
+  * power_supply_register() - Register new power supply
+- * @parent:	Device to be a parent of power supply's device
++ * @parent:	Device to be a parent of power supply's device, usually
++ *		the device whose probe function calls this
+  * @desc:	Description of power supply, must be valid through whole
+  *		lifetime of this power supply
+  * @cfg:	Run-time specific configuration accessed during registering,
+@@ -741,7 +786,8 @@ EXPORT_SYMBOL_GPL(power_supply_register);
+ 
+ /**
+  * power_supply_register() - Register new non-waking-source power supply
+- * @parent:	Device to be a parent of power supply's device
++ * @parent:	Device to be a parent of power supply's device, usually
++ *		the device whose probe function calls this
+  * @desc:	Description of power supply, must be valid through whole
+  *		lifetime of this power supply
+  * @cfg:	Run-time specific configuration accessed during registering,
+@@ -770,7 +816,8 @@ static void devm_power_supply_release(struct device *dev, void *res)
+ 
+ /**
+  * power_supply_register() - Register managed power supply
+- * @parent:	Device to be a parent of power supply's device
++ * @parent:	Device to be a parent of power supply's device, usually
++ *		the device whose probe function calls this
+  * @desc:	Description of power supply, must be valid through whole
+  *		lifetime of this power supply
+  * @cfg:	Run-time specific configuration accessed during registering,
+@@ -805,7 +852,8 @@ EXPORT_SYMBOL_GPL(devm_power_supply_register);
+ 
+ /**
+  * power_supply_register() - Register managed non-waking-source power supply
+- * @parent:	Device to be a parent of power supply's device
++ * @parent:	Device to be a parent of power supply's device, usually
++ *		the device whose probe function calls this
+  * @desc:	Description of power supply, must be valid through whole
+  *		lifetime of this power supply
+  * @cfg:	Run-time specific configuration accessed during registering,
+@@ -849,6 +897,7 @@ void power_supply_unregister(struct power_supply *psy)
+ {
+ 	WARN_ON(atomic_dec_return(&psy->use_cnt));
+ 	cancel_work_sync(&psy->changed_work);
++	cancel_delayed_work_sync(&psy->deferred_register_work);
+ 	sysfs_remove_link(&psy->dev.kobj, "powers");
+ 	power_supply_remove_triggers(psy);
+ 	psy_unregister_cooler(psy);
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 443eaab933fc..8a28116b5805 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -779,7 +779,7 @@ static int suspend_prepare(struct regulator_dev *rdev, suspend_state_t state)
+ static void print_constraints(struct regulator_dev *rdev)
+ {
+ 	struct regulation_constraints *constraints = rdev->constraints;
+-	char buf[80] = "";
++	char buf[160] = "";
+ 	int count = 0;
+ 	int ret;
+ 
+diff --git a/drivers/regulator/max77686.c b/drivers/regulator/max77686.c
+index 15fb1416bfbd..c064e32fb3b9 100644
+--- a/drivers/regulator/max77686.c
++++ b/drivers/regulator/max77686.c
+@@ -88,7 +88,7 @@ enum max77686_ramp_rate {
+ };
+ 
+ struct max77686_data {
+-	u64 gpio_enabled:MAX77686_REGULATORS;
++	DECLARE_BITMAP(gpio_enabled, MAX77686_REGULATORS);
+ 
+ 	/* Array indexed by regulator id */
+ 	unsigned int opmode[MAX77686_REGULATORS];
+@@ -121,7 +121,7 @@ static unsigned int max77686_map_normal_mode(struct max77686_data *max77686,
+ 	case MAX77686_BUCK8:
+ 	case MAX77686_BUCK9:
+ 	case MAX77686_LDO20 ... MAX77686_LDO22:
+-		if (max77686->gpio_enabled & (1 << id))
++		if (test_bit(id, max77686->gpio_enabled))
+ 			return MAX77686_GPIO_CONTROL;
+ 	}
+ 
+@@ -277,7 +277,7 @@ static int max77686_of_parse_cb(struct device_node *np,
+ 	}
+ 
+ 	if (gpio_is_valid(config->ena_gpio)) {
+-		max77686->gpio_enabled |= (1 << desc->id);
++		set_bit(desc->id, max77686->gpio_enabled);
+ 
+ 		return regmap_update_bits(config->regmap, desc->enable_reg,
+ 					  desc->enable_mask,
+diff --git a/drivers/scsi/ipr.h b/drivers/scsi/ipr.h
+index 47412cf4eaac..73790a1d0969 100644
+--- a/drivers/scsi/ipr.h
++++ b/drivers/scsi/ipr.h
+@@ -272,7 +272,7 @@
+ #define IPR_RUNTIME_RESET				0x40000000
+ 
+ #define IPR_IPL_INIT_MIN_STAGE_TIME			5
+-#define IPR_IPL_INIT_DEFAULT_STAGE_TIME                 15
++#define IPR_IPL_INIT_DEFAULT_STAGE_TIME                 30
+ #define IPR_IPL_INIT_STAGE_UNKNOWN			0x0
+ #define IPR_IPL_INIT_STAGE_TRANSOP			0xB0000000
+ #define IPR_IPL_INIT_STAGE_MASK				0xff000000
+diff --git a/drivers/scsi/scsi_transport_srp.c b/drivers/scsi/scsi_transport_srp.c
+index ae45bd99baed..f115f67a6ba5 100644
+--- a/drivers/scsi/scsi_transport_srp.c
++++ b/drivers/scsi/scsi_transport_srp.c
+@@ -396,6 +396,36 @@ static void srp_reconnect_work(struct work_struct *work)
+ 	}
+ }
+ 
++/**
++ * scsi_request_fn_active() - number of kernel threads inside scsi_request_fn()
++ * @shost: SCSI host for which to count the number of scsi_request_fn() callers.
++ *
++ * To do: add support for scsi-mq in this function.
++ */
++static int scsi_request_fn_active(struct Scsi_Host *shost)
++{
++	struct scsi_device *sdev;
++	struct request_queue *q;
++	int request_fn_active = 0;
++
++	shost_for_each_device(sdev, shost) {
++		q = sdev->request_queue;
++
++		spin_lock_irq(q->queue_lock);
++		request_fn_active += q->request_fn_active;
++		spin_unlock_irq(q->queue_lock);
++	}
++
++	return request_fn_active;
++}
++
++/* Wait until ongoing shost->hostt->queuecommand() calls have finished. */
++static void srp_wait_for_queuecommand(struct Scsi_Host *shost)
++{
++	while (scsi_request_fn_active(shost))
++		msleep(20);
++}
++
+ static void __rport_fail_io_fast(struct srp_rport *rport)
+ {
+ 	struct Scsi_Host *shost = rport_to_shost(rport);
+@@ -409,8 +439,10 @@ static void __rport_fail_io_fast(struct srp_rport *rport)
+ 
+ 	/* Involve the LLD if possible to terminate all I/O on the rport. */
+ 	i = to_srp_internal(shost->transportt);
+-	if (i->f->terminate_rport_io)
++	if (i->f->terminate_rport_io) {
++		srp_wait_for_queuecommand(shost);
+ 		i->f->terminate_rport_io(rport);
++	}
+ }
+ 
+ /**
+@@ -504,27 +536,6 @@ void srp_start_tl_fail_timers(struct srp_rport *rport)
+ EXPORT_SYMBOL(srp_start_tl_fail_timers);
+ 
+ /**
+- * scsi_request_fn_active() - number of kernel threads inside scsi_request_fn()
+- * @shost: SCSI host for which to count the number of scsi_request_fn() callers.
+- */
+-static int scsi_request_fn_active(struct Scsi_Host *shost)
+-{
+-	struct scsi_device *sdev;
+-	struct request_queue *q;
+-	int request_fn_active = 0;
+-
+-	shost_for_each_device(sdev, shost) {
+-		q = sdev->request_queue;
+-
+-		spin_lock_irq(q->queue_lock);
+-		request_fn_active += q->request_fn_active;
+-		spin_unlock_irq(q->queue_lock);
+-	}
+-
+-	return request_fn_active;
+-}
+-
+-/**
+  * srp_reconnect_rport() - reconnect to an SRP target port
+  * @rport: SRP target port.
+  *
+@@ -559,8 +570,7 @@ int srp_reconnect_rport(struct srp_rport *rport)
+ 	if (res)
+ 		goto out;
+ 	scsi_target_block(&shost->shost_gendev);
+-	while (scsi_request_fn_active(shost))
+-		msleep(20);
++	srp_wait_for_queuecommand(shost);
+ 	res = rport->state != SRP_RPORT_LOST ? i->f->reconnect(rport) : -ENODEV;
+ 	pr_debug("%s (state %d): transport.reconnect() returned %d\n",
+ 		 dev_name(&shost->shost_gendev), rport->state, res);
+diff --git a/drivers/spi/spi-orion.c b/drivers/spi/spi-orion.c
+index 861664776672..ff97cabdaa81 100644
+--- a/drivers/spi/spi-orion.c
++++ b/drivers/spi/spi-orion.c
+@@ -61,6 +61,12 @@ enum orion_spi_type {
+ 
+ struct orion_spi_dev {
+ 	enum orion_spi_type	typ;
++	/*
++	 * min_divisor and max_hz should be exclusive; the only case where
++	 * we can have both is managing the armada-370-spi case with an
++	 * old device tree
++	 */
++	unsigned long		max_hz;
+ 	unsigned int		min_divisor;
+ 	unsigned int		max_divisor;
+ 	u32			prescale_mask;
+@@ -387,8 +393,9 @@ static const struct orion_spi_dev orion_spi_dev_data = {
+ 
+ static const struct orion_spi_dev armada_spi_dev_data = {
+ 	.typ = ARMADA_SPI,
+-	.min_divisor = 1,
++	.min_divisor = 4,
+ 	.max_divisor = 1920,
++	.max_hz = 50000000,
+ 	.prescale_mask = ARMADA_SPI_CLK_PRESCALE_MASK,
+ };
+ 
+@@ -454,7 +461,21 @@ static int orion_spi_probe(struct platform_device *pdev)
+ 		goto out;
+ 
+ 	tclk_hz = clk_get_rate(spi->clk);
+-	master->max_speed_hz = DIV_ROUND_UP(tclk_hz, devdata->min_divisor);
++
++	/*
++	 * With old device tree, armada-370-spi could be used with
++	 * Armada XP, however for this SoC the maximum frequency is
++	 * 50MHz instead of tclk/4. On Armada 370, tclk cannot be
++	 * higher than 200MHz. So, in order to be able to handle both
++	 * SoCs, we can take the minimum of 50MHz and tclk/4.
++	 */
++	if (of_device_is_compatible(pdev->dev.of_node,
++					"marvell,armada-370-spi"))
++		master->max_speed_hz = min(devdata->max_hz,
++				DIV_ROUND_UP(tclk_hz, devdata->min_divisor));
++	else
++		master->max_speed_hz =
++			DIV_ROUND_UP(tclk_hz, devdata->min_divisor);
+ 	master->min_speed_hz = DIV_ROUND_UP(tclk_hz, devdata->max_divisor);
+ 
+ 	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 50910d85df5a..d35c1a13217c 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -988,9 +988,6 @@ void spi_finalize_current_message(struct spi_master *master)
+ 
+ 	spin_lock_irqsave(&master->queue_lock, flags);
+ 	mesg = master->cur_msg;
+-	master->cur_msg = NULL;
+-
+-	queue_kthread_work(&master->kworker, &master->pump_messages);
+ 	spin_unlock_irqrestore(&master->queue_lock, flags);
+ 
+ 	spi_unmap_msg(master, mesg);
+@@ -1003,9 +1000,13 @@ void spi_finalize_current_message(struct spi_master *master)
+ 		}
+ 	}
+ 
+-	trace_spi_message_done(mesg);
+-
++	spin_lock_irqsave(&master->queue_lock, flags);
++	master->cur_msg = NULL;
+ 	master->cur_msg_prepared = false;
++	queue_kthread_work(&master->kworker, &master->pump_messages);
++	spin_unlock_irqrestore(&master->queue_lock, flags);
++
++	trace_spi_message_done(mesg);
+ 
+ 	mesg->state = NULL;
+ 	if (mesg->complete)
+diff --git a/drivers/video/fbdev/mxsfb.c b/drivers/video/fbdev/mxsfb.c
+index f8ac4a452f26..0f64165b0147 100644
+--- a/drivers/video/fbdev/mxsfb.c
++++ b/drivers/video/fbdev/mxsfb.c
+@@ -316,6 +316,18 @@ static int mxsfb_check_var(struct fb_var_screeninfo *var,
+ 	return 0;
+ }
+ 
++static inline void mxsfb_enable_axi_clk(struct mxsfb_info *host)
++{
++	if (host->clk_axi)
++		clk_prepare_enable(host->clk_axi);
++}
++
++static inline void mxsfb_disable_axi_clk(struct mxsfb_info *host)
++{
++	if (host->clk_axi)
++		clk_disable_unprepare(host->clk_axi);
++}
++
+ static void mxsfb_enable_controller(struct fb_info *fb_info)
+ {
+ 	struct mxsfb_info *host = to_imxfb_host(fb_info);
+@@ -333,14 +345,13 @@ static void mxsfb_enable_controller(struct fb_info *fb_info)
+ 		}
+ 	}
+ 
+-	if (host->clk_axi)
+-		clk_prepare_enable(host->clk_axi);
+-
+ 	if (host->clk_disp_axi)
+ 		clk_prepare_enable(host->clk_disp_axi);
+ 	clk_prepare_enable(host->clk);
+ 	clk_set_rate(host->clk, PICOS2KHZ(fb_info->var.pixclock) * 1000U);
+ 
++	mxsfb_enable_axi_clk(host);
++
+ 	/* if it was disabled, re-enable the mode again */
+ 	writel(CTRL_DOTCLK_MODE, host->base + LCDC_CTRL + REG_SET);
+ 
+@@ -380,11 +391,11 @@ static void mxsfb_disable_controller(struct fb_info *fb_info)
+ 	reg = readl(host->base + LCDC_VDCTRL4);
+ 	writel(reg & ~VDCTRL4_SYNC_SIGNALS_ON, host->base + LCDC_VDCTRL4);
+ 
++	mxsfb_disable_axi_clk(host);
++
+ 	clk_disable_unprepare(host->clk);
+ 	if (host->clk_disp_axi)
+ 		clk_disable_unprepare(host->clk_disp_axi);
+-	if (host->clk_axi)
+-		clk_disable_unprepare(host->clk_axi);
+ 
+ 	host->enabled = 0;
+ 
+@@ -421,6 +432,8 @@ static int mxsfb_set_par(struct fb_info *fb_info)
+ 		mxsfb_disable_controller(fb_info);
+ 	}
+ 
++	mxsfb_enable_axi_clk(host);
++
+ 	/* clear the FIFOs */
+ 	writel(CTRL1_FIFO_CLEAR, host->base + LCDC_CTRL1 + REG_SET);
+ 
+@@ -438,6 +451,7 @@ static int mxsfb_set_par(struct fb_info *fb_info)
+ 		ctrl |= CTRL_SET_WORD_LENGTH(3);
+ 		switch (host->ld_intf_width) {
+ 		case STMLCDIF_8BIT:
++			mxsfb_disable_axi_clk(host);
+ 			dev_err(&host->pdev->dev,
+ 					"Unsupported LCD bus width mapping\n");
+ 			return -EINVAL;
+@@ -451,6 +465,7 @@ static int mxsfb_set_par(struct fb_info *fb_info)
+ 		writel(CTRL1_SET_BYTE_PACKAGING(0x7), host->base + LCDC_CTRL1);
+ 		break;
+ 	default:
++		mxsfb_disable_axi_clk(host);
+ 		dev_err(&host->pdev->dev, "Unhandled color depth of %u\n",
+ 				fb_info->var.bits_per_pixel);
+ 		return -EINVAL;
+@@ -504,6 +519,8 @@ static int mxsfb_set_par(struct fb_info *fb_info)
+ 			fb_info->fix.line_length * fb_info->var.yoffset,
+ 			host->base + host->devdata->next_buf);
+ 
++	mxsfb_disable_axi_clk(host);
++
+ 	if (reenable)
+ 		mxsfb_enable_controller(fb_info);
+ 
+@@ -582,10 +599,14 @@ static int mxsfb_pan_display(struct fb_var_screeninfo *var,
+ 
+ 	offset = fb_info->fix.line_length * var->yoffset;
+ 
++	mxsfb_enable_axi_clk(host);
++
+ 	/* update on next VSYNC */
+ 	writel(fb_info->fix.smem_start + offset,
+ 			host->base + host->devdata->next_buf);
+ 
++	mxsfb_disable_axi_clk(host);
++
+ 	return 0;
+ }
+ 
+@@ -608,13 +629,17 @@ static int mxsfb_restore_mode(struct mxsfb_info *host,
+ 	unsigned line_count;
+ 	unsigned period;
+ 	unsigned long pa, fbsize;
+-	int bits_per_pixel, ofs;
++	int bits_per_pixel, ofs, ret = 0;
+ 	u32 transfer_count, vdctrl0, vdctrl2, vdctrl3, vdctrl4, ctrl;
+ 
++	mxsfb_enable_axi_clk(host);
++
+ 	/* Only restore the mode when the controller is running */
+ 	ctrl = readl(host->base + LCDC_CTRL);
+-	if (!(ctrl & CTRL_RUN))
+-		return -EINVAL;
++	if (!(ctrl & CTRL_RUN)) {
++		ret = -EINVAL;
++		goto err;
++	}
+ 
+ 	vdctrl0 = readl(host->base + LCDC_VDCTRL0);
+ 	vdctrl2 = readl(host->base + LCDC_VDCTRL2);
+@@ -635,7 +660,8 @@ static int mxsfb_restore_mode(struct mxsfb_info *host,
+ 		break;
+ 	case 1:
+ 	default:
+-		return -EINVAL;
++		ret = -EINVAL;
++		goto err;
+ 	}
+ 
+ 	fb_info->var.bits_per_pixel = bits_per_pixel;
+@@ -673,10 +699,14 @@ static int mxsfb_restore_mode(struct mxsfb_info *host,
+ 
+ 	pa = readl(host->base + host->devdata->cur_buf);
+ 	fbsize = fb_info->fix.line_length * vmode->yres;
+-	if (pa < fb_info->fix.smem_start)
+-		return -EINVAL;
+-	if (pa + fbsize > fb_info->fix.smem_start + fb_info->fix.smem_len)
+-		return -EINVAL;
++	if (pa < fb_info->fix.smem_start) {
++		ret = -EINVAL;
++		goto err;
++	}
++	if (pa + fbsize > fb_info->fix.smem_start + fb_info->fix.smem_len) {
++		ret = -EINVAL;
++		goto err;
++	}
+ 	ofs = pa - fb_info->fix.smem_start;
+ 	if (ofs) {
+ 		memmove(fb_info->screen_base, fb_info->screen_base + ofs, fbsize);
+@@ -689,7 +719,11 @@ static int mxsfb_restore_mode(struct mxsfb_info *host,
+ 	clk_prepare_enable(host->clk);
+ 	host->enabled = 1;
+ 
+-	return 0;
++err:
++	if (ret)
++		mxsfb_disable_axi_clk(host);
++
++	return ret;
+ }
+ 
+ static int mxsfb_init_fbinfo_dt(struct mxsfb_info *host,
+@@ -915,7 +949,9 @@ static int mxsfb_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	if (!host->enabled) {
++		mxsfb_enable_axi_clk(host);
+ 		writel(0, host->base + LCDC_CTRL);
++		mxsfb_disable_axi_clk(host);
+ 		mxsfb_set_par(fb_info);
+ 		mxsfb_enable_controller(fb_info);
+ 	}
+@@ -954,11 +990,15 @@ static void mxsfb_shutdown(struct platform_device *pdev)
+ 	struct fb_info *fb_info = platform_get_drvdata(pdev);
+ 	struct mxsfb_info *host = to_imxfb_host(fb_info);
+ 
++	mxsfb_enable_axi_clk(host);
++
+ 	/*
+ 	 * Force stop the LCD controller as keeping it running during reboot
+ 	 * might interfere with the BootROM's boot mode pads sampling.
+ 	 */
+ 	writel(CTRL_RUN, host->base + LCDC_CTRL + REG_CLR);
++
++	mxsfb_disable_axi_clk(host);
+ }
+ 
+ static struct platform_driver mxsfb_driver = {
+diff --git a/fs/configfs/mount.c b/fs/configfs/mount.c
+index 537356742091..a8f3b589a2df 100644
+--- a/fs/configfs/mount.c
++++ b/fs/configfs/mount.c
+@@ -129,8 +129,6 @@ void configfs_release_fs(void)
+ }
+ 
+ 
+-static struct kobject *config_kobj;
+-
+ static int __init configfs_init(void)
+ {
+ 	int err = -ENOMEM;
+@@ -141,8 +139,8 @@ static int __init configfs_init(void)
+ 	if (!configfs_dir_cachep)
+ 		goto out;
+ 
+-	config_kobj = kobject_create_and_add("config", kernel_kobj);
+-	if (!config_kobj)
++	err = sysfs_create_mount_point(kernel_kobj, "config");
++	if (err)
+ 		goto out2;
+ 
+ 	err = register_filesystem(&configfs_fs_type);
+@@ -152,7 +150,7 @@ static int __init configfs_init(void)
+ 	return 0;
+ out3:
+ 	pr_err("Unable to register filesystem!\n");
+-	kobject_put(config_kobj);
++	sysfs_remove_mount_point(kernel_kobj, "config");
+ out2:
+ 	kmem_cache_destroy(configfs_dir_cachep);
+ 	configfs_dir_cachep = NULL;
+@@ -163,7 +161,7 @@ out:
+ static void __exit configfs_exit(void)
+ {
+ 	unregister_filesystem(&configfs_fs_type);
+-	kobject_put(config_kobj);
++	sysfs_remove_mount_point(kernel_kobj, "config");
+ 	kmem_cache_destroy(configfs_dir_cachep);
+ 	configfs_dir_cachep = NULL;
+ }
+diff --git a/fs/debugfs/inode.c b/fs/debugfs/inode.c
+index c1e7ffb0dab6..12756040ca20 100644
+--- a/fs/debugfs/inode.c
++++ b/fs/debugfs/inode.c
+@@ -716,20 +716,17 @@ bool debugfs_initialized(void)
+ }
+ EXPORT_SYMBOL_GPL(debugfs_initialized);
+ 
+-
+-static struct kobject *debug_kobj;
+-
+ static int __init debugfs_init(void)
+ {
+ 	int retval;
+ 
+-	debug_kobj = kobject_create_and_add("debug", kernel_kobj);
+-	if (!debug_kobj)
+-		return -EINVAL;
++	retval = sysfs_create_mount_point(kernel_kobj, "debug");
++	if (retval)
++		return retval;
+ 
+ 	retval = register_filesystem(&debug_fs_type);
+ 	if (retval)
+-		kobject_put(debug_kobj);
++		sysfs_remove_mount_point(kernel_kobj, "debug");
+ 	else
+ 		debugfs_registered = true;
+ 
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 082ac1c97f39..18dacf9ed8ff 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -1238,7 +1238,6 @@ static void fuse_fs_cleanup(void)
+ }
+ 
+ static struct kobject *fuse_kobj;
+-static struct kobject *connections_kobj;
+ 
+ static int fuse_sysfs_init(void)
+ {
+@@ -1250,11 +1249,9 @@ static int fuse_sysfs_init(void)
+ 		goto out_err;
+ 	}
+ 
+-	connections_kobj = kobject_create_and_add("connections", fuse_kobj);
+-	if (!connections_kobj) {
+-		err = -ENOMEM;
++	err = sysfs_create_mount_point(fuse_kobj, "connections");
++	if (err)
+ 		goto out_fuse_unregister;
+-	}
+ 
+ 	return 0;
+ 
+@@ -1266,7 +1263,7 @@ static int fuse_sysfs_init(void)
+ 
+ static void fuse_sysfs_cleanup(void)
+ {
+-	kobject_put(connections_kobj);
++	sysfs_remove_mount_point(fuse_kobj, "connections");
+ 	kobject_put(fuse_kobj);
+ }
+ 
+diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
+index fffca9517321..2d48d28e1640 100644
+--- a/fs/kernfs/dir.c
++++ b/fs/kernfs/dir.c
+@@ -592,6 +592,9 @@ int kernfs_add_one(struct kernfs_node *kn)
+ 		goto out_unlock;
+ 
+ 	ret = -ENOENT;
++	if (parent->flags & KERNFS_EMPTY_DIR)
++		goto out_unlock;
++
+ 	if ((parent->flags & KERNFS_ACTIVATED) && !kernfs_active(parent))
+ 		goto out_unlock;
+ 
+@@ -783,6 +786,38 @@ struct kernfs_node *kernfs_create_dir_ns(struct kernfs_node *parent,
+ 	return ERR_PTR(rc);
+ }
+ 
++/**
++ * kernfs_create_empty_dir - create an always empty directory
++ * @parent: parent in which to create a new directory
++ * @name: name of the new directory
++ *
++ * Returns the created node on success, ERR_PTR() value on failure.
++ */
++struct kernfs_node *kernfs_create_empty_dir(struct kernfs_node *parent,
++					    const char *name)
++{
++	struct kernfs_node *kn;
++	int rc;
++
++	/* allocate */
++	kn = kernfs_new_node(parent, name, S_IRUGO|S_IXUGO|S_IFDIR, KERNFS_DIR);
++	if (!kn)
++		return ERR_PTR(-ENOMEM);
++
++	kn->flags |= KERNFS_EMPTY_DIR;
++	kn->dir.root = parent->dir.root;
++	kn->ns = NULL;
++	kn->priv = NULL;
++
++	/* link in */
++	rc = kernfs_add_one(kn);
++	if (!rc)
++		return kn;
++
++	kernfs_put(kn);
++	return ERR_PTR(rc);
++}
++
+ static struct dentry *kernfs_iop_lookup(struct inode *dir,
+ 					struct dentry *dentry,
+ 					unsigned int flags)
+@@ -1254,7 +1289,8 @@ int kernfs_rename_ns(struct kernfs_node *kn, struct kernfs_node *new_parent,
+ 	mutex_lock(&kernfs_mutex);
+ 
+ 	error = -ENOENT;
+-	if (!kernfs_active(kn) || !kernfs_active(new_parent))
++	if (!kernfs_active(kn) || !kernfs_active(new_parent) ||
++	    (new_parent->flags & KERNFS_EMPTY_DIR))
+ 		goto out;
+ 
+ 	error = 0;
+diff --git a/fs/kernfs/inode.c b/fs/kernfs/inode.c
+index 2da8493a380b..756dd56aaf60 100644
+--- a/fs/kernfs/inode.c
++++ b/fs/kernfs/inode.c
+@@ -296,6 +296,8 @@ static void kernfs_init_inode(struct kernfs_node *kn, struct inode *inode)
+ 	case KERNFS_DIR:
+ 		inode->i_op = &kernfs_dir_iops;
+ 		inode->i_fop = &kernfs_dir_fops;
++		if (kn->flags & KERNFS_EMPTY_DIR)
++			make_empty_dir_inode(inode);
+ 		break;
+ 	case KERNFS_FILE:
+ 		inode->i_size = kn->attr.size;
+diff --git a/fs/libfs.c b/fs/libfs.c
+index cb1fb4b9b637..02813592e121 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -1093,3 +1093,99 @@ simple_nosetlease(struct file *filp, long arg, struct file_lock **flp,
+ 	return -EINVAL;
+ }
+ EXPORT_SYMBOL(simple_nosetlease);
++
++
++/*
++ * Operations for a permanently empty directory.
++ */
++static struct dentry *empty_dir_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
++{
++	return ERR_PTR(-ENOENT);
++}
++
++static int empty_dir_getattr(struct vfsmount *mnt, struct dentry *dentry,
++				 struct kstat *stat)
++{
++	struct inode *inode = d_inode(dentry);
++	generic_fillattr(inode, stat);
++	return 0;
++}
++
++static int empty_dir_setattr(struct dentry *dentry, struct iattr *attr)
++{
++	return -EPERM;
++}
++
++static int empty_dir_setxattr(struct dentry *dentry, const char *name,
++			      const void *value, size_t size, int flags)
++{
++	return -EOPNOTSUPP;
++}
++
++static ssize_t empty_dir_getxattr(struct dentry *dentry, const char *name,
++				  void *value, size_t size)
++{
++	return -EOPNOTSUPP;
++}
++
++static int empty_dir_removexattr(struct dentry *dentry, const char *name)
++{
++	return -EOPNOTSUPP;
++}
++
++static ssize_t empty_dir_listxattr(struct dentry *dentry, char *list, size_t size)
++{
++	return -EOPNOTSUPP;
++}
++
++static const struct inode_operations empty_dir_inode_operations = {
++	.lookup		= empty_dir_lookup,
++	.permission	= generic_permission,
++	.setattr	= empty_dir_setattr,
++	.getattr	= empty_dir_getattr,
++	.setxattr	= empty_dir_setxattr,
++	.getxattr	= empty_dir_getxattr,
++	.removexattr	= empty_dir_removexattr,
++	.listxattr	= empty_dir_listxattr,
++};
++
++static loff_t empty_dir_llseek(struct file *file, loff_t offset, int whence)
++{
++	/* An empty directory has two entries . and .. at offsets 0 and 1 */
++	return generic_file_llseek_size(file, offset, whence, 2, 2);
++}
++
++static int empty_dir_readdir(struct file *file, struct dir_context *ctx)
++{
++	dir_emit_dots(file, ctx);
++	return 0;
++}
++
++static const struct file_operations empty_dir_operations = {
++	.llseek		= empty_dir_llseek,
++	.read		= generic_read_dir,
++	.iterate	= empty_dir_readdir,
++	.fsync		= noop_fsync,
++};
++
++
++void make_empty_dir_inode(struct inode *inode)
++{
++	set_nlink(inode, 2);
++	inode->i_mode = S_IFDIR | S_IRUGO | S_IXUGO;
++	inode->i_uid = GLOBAL_ROOT_UID;
++	inode->i_gid = GLOBAL_ROOT_GID;
++	inode->i_rdev = 0;
++	inode->i_size = 2;
++	inode->i_blkbits = PAGE_SHIFT;
++	inode->i_blocks = 0;
++
++	inode->i_op = &empty_dir_inode_operations;
++	inode->i_fop = &empty_dir_operations;
++}
++
++bool is_empty_dir_inode(struct inode *inode)
++{
++	return (inode->i_fop == &empty_dir_operations) &&
++		(inode->i_op == &empty_dir_inode_operations);
++}
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 1d4a97c573e0..02c6875dd945 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -2332,6 +2332,8 @@ unlock:
+ 	return err;
+ }
+ 
++static bool fs_fully_visible(struct file_system_type *fs_type, int *new_mnt_flags);
++
+ /*
+  * create a new mount for userspace and request it to be added into the
+  * namespace's tree
+@@ -2363,6 +2365,10 @@ static int do_new_mount(struct path *path, const char *fstype, int flags,
+ 			flags |= MS_NODEV;
+ 			mnt_flags |= MNT_NODEV | MNT_LOCK_NODEV;
+ 		}
++		if (type->fs_flags & FS_USERNS_VISIBLE) {
++			if (!fs_fully_visible(type, &mnt_flags))
++				return -EPERM;
++		}
+ 	}
+ 
+ 	mnt = vfs_kern_mount(type, flags, name, data);
+@@ -3164,9 +3170,10 @@ bool current_chrooted(void)
+ 	return chrooted;
+ }
+ 
+-bool fs_fully_visible(struct file_system_type *type)
++static bool fs_fully_visible(struct file_system_type *type, int *new_mnt_flags)
+ {
+ 	struct mnt_namespace *ns = current->nsproxy->mnt_ns;
++	int new_flags = *new_mnt_flags;
+ 	struct mount *mnt;
+ 	bool visible = false;
+ 
+@@ -3185,6 +3192,19 @@ bool fs_fully_visible(struct file_system_type *type)
+ 		if (mnt->mnt.mnt_root != mnt->mnt.mnt_sb->s_root)
+ 			continue;
+ 
++		/* Verify the mount flags are equal to or more permissive
++		 * than the proposed new mount.
++		 */
++		if ((mnt->mnt.mnt_flags & MNT_LOCK_READONLY) &&
++		    !(new_flags & MNT_READONLY))
++			continue;
++		if ((mnt->mnt.mnt_flags & MNT_LOCK_NODEV) &&
++		    !(new_flags & MNT_NODEV))
++			continue;
++		if ((mnt->mnt.mnt_flags & MNT_LOCK_ATIME) &&
++		    ((mnt->mnt.mnt_flags & MNT_ATIME_MASK) != (new_flags & MNT_ATIME_MASK)))
++			continue;
++
+ 		/* This mount is not fully visible if there are any
+ 		 * locked child mounts that cover anything except for
+ 		 * empty directories.
+@@ -3194,11 +3214,14 @@ bool fs_fully_visible(struct file_system_type *type)
+ 			/* Only worry about locked mounts */
+ 			if (!(mnt->mnt.mnt_flags & MNT_LOCKED))
+ 				continue;
+-			if (!S_ISDIR(inode->i_mode))
+-				goto next;
+-			if (inode->i_nlink > 2)
++			/* Is the directory permanently empty? */
++			if (!is_empty_dir_inode(inode))
+ 				goto next;
+ 		}
++		/* Preserve the locked attributes */
++		*new_mnt_flags |= mnt->mnt.mnt_flags & (MNT_LOCK_READONLY | \
++							MNT_LOCK_NODEV    | \
++							MNT_LOCK_ATIME);
+ 		visible = true;
+ 		goto found;
+ 	next:	;
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index df6327a2b865..e5dee5c3188e 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -373,6 +373,10 @@ static struct proc_dir_entry *__proc_create(struct proc_dir_entry **parent,
+ 		WARN(1, "create '/proc/%s' by hand\n", qstr.name);
+ 		return NULL;
+ 	}
++	if (is_empty_pde(*parent)) {
++		WARN(1, "attempt to add to permanently empty directory");
++		return NULL;
++	}
+ 
+ 	ent = kzalloc(sizeof(struct proc_dir_entry) + qstr.len + 1, GFP_KERNEL);
+ 	if (!ent)
+@@ -455,6 +459,25 @@ struct proc_dir_entry *proc_mkdir(const char *name,
+ }
+ EXPORT_SYMBOL(proc_mkdir);
+ 
++struct proc_dir_entry *proc_create_mount_point(const char *name)
++{
++	umode_t mode = S_IFDIR | S_IRUGO | S_IXUGO;
++	struct proc_dir_entry *ent, *parent = NULL;
++
++	ent = __proc_create(&parent, name, mode, 2);
++	if (ent) {
++		ent->data = NULL;
++		ent->proc_fops = NULL;
++		ent->proc_iops = NULL;
++		if (proc_register(parent, ent) < 0) {
++			kfree(ent);
++			parent->nlink--;
++			ent = NULL;
++		}
++	}
++	return ent;
++}
++
+ struct proc_dir_entry *proc_create_data(const char *name, umode_t mode,
+ 					struct proc_dir_entry *parent,
+ 					const struct file_operations *proc_fops,
+diff --git a/fs/proc/inode.c b/fs/proc/inode.c
+index 8272aaba1bb0..e3eb5524639f 100644
+--- a/fs/proc/inode.c
++++ b/fs/proc/inode.c
+@@ -423,6 +423,10 @@ struct inode *proc_get_inode(struct super_block *sb, struct proc_dir_entry *de)
+ 		inode->i_mtime = inode->i_atime = inode->i_ctime = CURRENT_TIME;
+ 		PROC_I(inode)->pde = de;
+ 
++		if (is_empty_pde(de)) {
++			make_empty_dir_inode(inode);
++			return inode;
++		}
+ 		if (de->mode) {
+ 			inode->i_mode = de->mode;
+ 			inode->i_uid = de->uid;
+diff --git a/fs/proc/internal.h b/fs/proc/internal.h
+index c835b94c0cd3..aa2781095bd1 100644
+--- a/fs/proc/internal.h
++++ b/fs/proc/internal.h
+@@ -191,6 +191,12 @@ static inline struct proc_dir_entry *pde_get(struct proc_dir_entry *pde)
+ }
+ extern void pde_put(struct proc_dir_entry *);
+ 
++static inline bool is_empty_pde(const struct proc_dir_entry *pde)
++{
++	return S_ISDIR(pde->mode) && !pde->proc_iops;
++}
++struct proc_dir_entry *proc_create_mount_point(const char *name);
++
+ /*
+  * inode.c
+  */
+diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c
+index fea2561d773b..fdda62e6115e 100644
+--- a/fs/proc/proc_sysctl.c
++++ b/fs/proc/proc_sysctl.c
+@@ -19,6 +19,28 @@ static const struct inode_operations proc_sys_inode_operations;
+ static const struct file_operations proc_sys_dir_file_operations;
+ static const struct inode_operations proc_sys_dir_operations;
+ 
++/* Support for permanently empty directories */
++
++struct ctl_table sysctl_mount_point[] = {
++	{ }
++};
++
++static bool is_empty_dir(struct ctl_table_header *head)
++{
++	return head->ctl_table[0].child == sysctl_mount_point;
++}
++
++static void set_empty_dir(struct ctl_dir *dir)
++{
++	dir->header.ctl_table[0].child = sysctl_mount_point;
++}
++
++static void clear_empty_dir(struct ctl_dir *dir)
++
++{
++	dir->header.ctl_table[0].child = NULL;
++}
++
+ void proc_sys_poll_notify(struct ctl_table_poll *poll)
+ {
+ 	if (!poll)
+@@ -187,6 +209,17 @@ static int insert_header(struct ctl_dir *dir, struct ctl_table_header *header)
+ 	struct ctl_table *entry;
+ 	int err;
+ 
++	/* Is this a permanently empty directory? */
++	if (is_empty_dir(&dir->header))
++		return -EROFS;
++
++	/* Am I creating a permanently empty directory? */
++	if (header->ctl_table == sysctl_mount_point) {
++		if (!RB_EMPTY_ROOT(&dir->root))
++			return -EINVAL;
++		set_empty_dir(dir);
++	}
++
+ 	dir->header.nreg++;
+ 	header->parent = dir;
+ 	err = insert_links(header);
+@@ -202,6 +235,8 @@ fail:
+ 	erase_header(header);
+ 	put_links(header);
+ fail_links:
++	if (header->ctl_table == sysctl_mount_point)
++		clear_empty_dir(dir);
+ 	header->parent = NULL;
+ 	drop_sysctl_table(&dir->header);
+ 	return err;
+@@ -419,6 +454,8 @@ static struct inode *proc_sys_make_inode(struct super_block *sb,
+ 		inode->i_mode |= S_IFDIR;
+ 		inode->i_op = &proc_sys_dir_operations;
+ 		inode->i_fop = &proc_sys_dir_file_operations;
++		if (is_empty_dir(head))
++			make_empty_dir_inode(inode);
+ 	}
+ out:
+ 	return inode;
+diff --git a/fs/proc/root.c b/fs/proc/root.c
+index b7fa4bfe896a..68feb0f70e63 100644
+--- a/fs/proc/root.c
++++ b/fs/proc/root.c
+@@ -112,9 +112,6 @@ static struct dentry *proc_mount(struct file_system_type *fs_type,
+ 		ns = task_active_pid_ns(current);
+ 		options = data;
+ 
+-		if (!capable(CAP_SYS_ADMIN) && !fs_fully_visible(fs_type))
+-			return ERR_PTR(-EPERM);
+-
+ 		/* Does the mounter have privilege over the pid namespace? */
+ 		if (!ns_capable(ns->user_ns, CAP_SYS_ADMIN))
+ 			return ERR_PTR(-EPERM);
+@@ -159,7 +156,7 @@ static struct file_system_type proc_fs_type = {
+ 	.name		= "proc",
+ 	.mount		= proc_mount,
+ 	.kill_sb	= proc_kill_sb,
+-	.fs_flags	= FS_USERNS_MOUNT,
++	.fs_flags	= FS_USERNS_VISIBLE | FS_USERNS_MOUNT,
+ };
+ 
+ void __init proc_root_init(void)
+@@ -182,10 +179,10 @@ void __init proc_root_init(void)
+ #endif
+ 	proc_mkdir("fs", NULL);
+ 	proc_mkdir("driver", NULL);
+-	proc_mkdir("fs/nfsd", NULL); /* somewhere for the nfsd filesystem to be mounted */
++	proc_create_mount_point("fs/nfsd"); /* somewhere for the nfsd filesystem to be mounted */
+ #if defined(CONFIG_SUN_OPENPROMFS) || defined(CONFIG_SUN_OPENPROMFS_MODULE)
+ 	/* just give it a mountpoint */
+-	proc_mkdir("openprom", NULL);
++	proc_create_mount_point("openprom");
+ #endif
+ 	proc_tty_init();
+ 	proc_mkdir("bus", NULL);
+diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c
+index dc43b5f29305..3adcc4669fac 100644
+--- a/fs/pstore/inode.c
++++ b/fs/pstore/inode.c
+@@ -461,22 +461,18 @@ static struct file_system_type pstore_fs_type = {
+ 	.kill_sb	= pstore_kill_sb,
+ };
+ 
+-static struct kobject *pstore_kobj;
+-
+ static int __init init_pstore_fs(void)
+ {
+-	int err = 0;
++	int err;
+ 
+ 	/* Create a convenient mount point for people to access pstore */
+-	pstore_kobj = kobject_create_and_add("pstore", fs_kobj);
+-	if (!pstore_kobj) {
+-		err = -ENOMEM;
++	err = sysfs_create_mount_point(fs_kobj, "pstore");
++	if (err)
+ 		goto out;
+-	}
+ 
+ 	err = register_filesystem(&pstore_fs_type);
+ 	if (err < 0)
+-		kobject_put(pstore_kobj);
++		sysfs_remove_mount_point(fs_kobj, "pstore");
+ 
+ out:
+ 	return err;
+diff --git a/fs/sysfs/dir.c b/fs/sysfs/dir.c
+index 0b45ff42f374..94374e435025 100644
+--- a/fs/sysfs/dir.c
++++ b/fs/sysfs/dir.c
+@@ -121,3 +121,37 @@ int sysfs_move_dir_ns(struct kobject *kobj, struct kobject *new_parent_kobj,
+ 
+ 	return kernfs_rename_ns(kn, new_parent, kn->name, new_ns);
+ }
++
++/**
++ * sysfs_create_mount_point - create an always empty directory
++ * @parent_kobj:  kobject that will contain this always empty directory
++ * @name: The name of the always empty directory to add
++ */
++int sysfs_create_mount_point(struct kobject *parent_kobj, const char *name)
++{
++	struct kernfs_node *kn, *parent = parent_kobj->sd;
++
++	kn = kernfs_create_empty_dir(parent, name);
++	if (IS_ERR(kn)) {
++		if (PTR_ERR(kn) == -EEXIST)
++			sysfs_warn_dup(parent, name);
++		return PTR_ERR(kn);
++	}
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(sysfs_create_mount_point);
++
++/**
++ *	sysfs_remove_mount_point - remove an always empty directory.
++ *	@parent_kobj: kobject that will contain this always empty directory
++ *	@name: The name of the always empty directory to remove
++ *
++ */
++void sysfs_remove_mount_point(struct kobject *parent_kobj, const char *name)
++{
++	struct kernfs_node *parent = parent_kobj->sd;
++
++	kernfs_remove_by_name_ns(parent, name, NULL);
++}
++EXPORT_SYMBOL_GPL(sysfs_remove_mount_point);
+diff --git a/fs/sysfs/mount.c b/fs/sysfs/mount.c
+index 8a49486bf30c..1c6ac6fcee9f 100644
+--- a/fs/sysfs/mount.c
++++ b/fs/sysfs/mount.c
+@@ -31,9 +31,6 @@ static struct dentry *sysfs_mount(struct file_system_type *fs_type,
+ 	bool new_sb;
+ 
+ 	if (!(flags & MS_KERNMOUNT)) {
+-		if (!capable(CAP_SYS_ADMIN) && !fs_fully_visible(fs_type))
+-			return ERR_PTR(-EPERM);
+-
+ 		if (!kobj_ns_current_may_mount(KOBJ_NS_TYPE_NET))
+ 			return ERR_PTR(-EPERM);
+ 	}
+@@ -58,7 +55,7 @@ static struct file_system_type sysfs_fs_type = {
+ 	.name		= "sysfs",
+ 	.mount		= sysfs_mount,
+ 	.kill_sb	= sysfs_kill_sb,
+-	.fs_flags	= FS_USERNS_MOUNT,
++	.fs_flags	= FS_USERNS_VISIBLE | FS_USERNS_MOUNT,
+ };
+ 
+ int __init sysfs_init(void)
+diff --git a/fs/tracefs/inode.c b/fs/tracefs/inode.c
+index d92bdf3b079a..a43df11a163f 100644
+--- a/fs/tracefs/inode.c
++++ b/fs/tracefs/inode.c
+@@ -631,14 +631,12 @@ bool tracefs_initialized(void)
+ 	return tracefs_registered;
+ }
+ 
+-static struct kobject *trace_kobj;
+-
+ static int __init tracefs_init(void)
+ {
+ 	int retval;
+ 
+-	trace_kobj = kobject_create_and_add("tracing", kernel_kobj);
+-	if (!trace_kobj)
++	retval = sysfs_create_mount_point(kernel_kobj, "tracing");
++	if (retval)
+ 		return -EINVAL;
+ 
+ 	retval = register_filesystem(&trace_fs_type);
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index e4da5e35e29c..5da2d2e9d38e 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -332,6 +332,9 @@ int acpi_check_region(resource_size_t start, resource_size_t n,
+ 
+ int acpi_resources_are_enforced(void);
+ 
++int acpi_reserve_region(u64 start, unsigned int length, u8 space_id,
++			unsigned long flags, char *desc);
++
+ #ifdef CONFIG_HIBERNATION
+ void __init acpi_no_s4_hw_signature(void);
+ #endif
+@@ -440,6 +443,7 @@ extern acpi_status acpi_pci_osc_control_set(acpi_handle handle,
+ #define ACPI_OST_SC_INSERT_NOT_SUPPORTED	0x82
+ 
+ extern void acpi_early_init(void);
++extern void acpi_subsystem_init(void);
+ 
+ extern int acpi_nvs_register(__u64 start, __u64 size);
+ 
+@@ -494,6 +498,7 @@ static inline const char *acpi_dev_name(struct acpi_device *adev)
+ }
+ 
+ static inline void acpi_early_init(void) { }
++static inline void acpi_subsystem_init(void) { }
+ 
+ static inline int early_acpi_boot_init(void)
+ {
+@@ -525,6 +530,13 @@ static inline int acpi_check_region(resource_size_t start, resource_size_t n,
+ 	return 0;
+ }
+ 
++static inline int acpi_reserve_region(u64 start, unsigned int length,
++				      u8 space_id, unsigned long flags,
++				      char *desc)
++{
++	return -ENXIO;
++}
++
+ struct acpi_table_header;
+ static inline int acpi_table_parse(char *id,
+ 				int (*handler)(struct acpi_table_header *))
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 35ec87e490b1..571aab91bfc0 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -1897,6 +1897,7 @@ struct file_system_type {
+ #define FS_HAS_SUBTYPE		4
+ #define FS_USERNS_MOUNT		8	/* Can be mounted by userns root */
+ #define FS_USERNS_DEV_MOUNT	16 /* A userns mount does not imply MNT_NODEV */
++#define FS_USERNS_VISIBLE	32	/* FS must already be visible */
+ #define FS_RENAME_DOES_D_MOVE	32768	/* FS will handle d_move() during rename() internally. */
+ 	struct dentry *(*mount) (struct file_system_type *, int,
+ 		       const char *, void *);
+@@ -1984,7 +1985,6 @@ extern int vfs_ustat(dev_t, struct kstatfs *);
+ extern int freeze_super(struct super_block *super);
+ extern int thaw_super(struct super_block *super);
+ extern bool our_mnt(struct vfsmount *mnt);
+-extern bool fs_fully_visible(struct file_system_type *);
+ 
+ extern int current_umask(void);
+ 
+@@ -2780,6 +2780,8 @@ extern struct dentry *simple_lookup(struct inode *, struct dentry *, unsigned in
+ extern ssize_t generic_read_dir(struct file *, char __user *, size_t, loff_t *);
+ extern const struct file_operations simple_dir_operations;
+ extern const struct inode_operations simple_dir_inode_operations;
++extern void make_empty_dir_inode(struct inode *inode);
++extern bool is_empty_dir_inode(struct inode *inode);
+ struct tree_descr { char *name; const struct file_operations *ops; int mode; };
+ struct dentry *d_alloc_name(struct dentry *, const char *);
+ extern int simple_fill_super(struct super_block *, unsigned long, struct tree_descr *);
+diff --git a/include/linux/kernfs.h b/include/linux/kernfs.h
+index 71ecdab1671b..29d1896c3ba5 100644
+--- a/include/linux/kernfs.h
++++ b/include/linux/kernfs.h
+@@ -45,6 +45,7 @@ enum kernfs_node_flag {
+ 	KERNFS_LOCKDEP		= 0x0100,
+ 	KERNFS_SUICIDAL		= 0x0400,
+ 	KERNFS_SUICIDED		= 0x0800,
++	KERNFS_EMPTY_DIR	= 0x1000,
+ };
+ 
+ /* @flags for kernfs_create_root() */
+@@ -285,6 +286,8 @@ void kernfs_destroy_root(struct kernfs_root *root);
+ struct kernfs_node *kernfs_create_dir_ns(struct kernfs_node *parent,
+ 					 const char *name, umode_t mode,
+ 					 void *priv, const void *ns);
++struct kernfs_node *kernfs_create_empty_dir(struct kernfs_node *parent,
++					    const char *name);
+ struct kernfs_node *__kernfs_create_file(struct kernfs_node *parent,
+ 					 const char *name,
+ 					 umode_t mode, loff_t size,
+diff --git a/include/linux/kmemleak.h b/include/linux/kmemleak.h
+index e705467ddb47..d0a1f99e24e3 100644
+--- a/include/linux/kmemleak.h
++++ b/include/linux/kmemleak.h
+@@ -28,7 +28,8 @@
+ extern void kmemleak_init(void) __ref;
+ extern void kmemleak_alloc(const void *ptr, size_t size, int min_count,
+ 			   gfp_t gfp) __ref;
+-extern void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size) __ref;
++extern void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
++				  gfp_t gfp) __ref;
+ extern void kmemleak_free(const void *ptr) __ref;
+ extern void kmemleak_free_part(const void *ptr, size_t size) __ref;
+ extern void kmemleak_free_percpu(const void __percpu *ptr) __ref;
+@@ -71,7 +72,8 @@ static inline void kmemleak_alloc_recursive(const void *ptr, size_t size,
+ 					    gfp_t gfp)
+ {
+ }
+-static inline void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size)
++static inline void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
++					 gfp_t gfp)
+ {
+ }
+ static inline void kmemleak_free(const void *ptr)
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 353db8dc4c6e..3ef3a52068df 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -577,9 +577,15 @@ int raw_pci_read(unsigned int domain, unsigned int bus, unsigned int devfn,
+ int raw_pci_write(unsigned int domain, unsigned int bus, unsigned int devfn,
+ 		  int reg, int len, u32 val);
+ 
++#ifdef CONFIG_PCI_BUS_ADDR_T_64BIT
++typedef u64 pci_bus_addr_t;
++#else
++typedef u32 pci_bus_addr_t;
++#endif
++
+ struct pci_bus_region {
+-	dma_addr_t start;
+-	dma_addr_t end;
++	pci_bus_addr_t start;
++	pci_bus_addr_t end;
+ };
+ 
+ struct pci_dynids {
+@@ -1006,6 +1012,7 @@ int __must_check pci_assign_resource(struct pci_dev *dev, int i);
+ int __must_check pci_reassign_resource(struct pci_dev *dev, int i, resource_size_t add_size, resource_size_t align);
+ int pci_select_bars(struct pci_dev *dev, unsigned long flags);
+ bool pci_device_is_present(struct pci_dev *pdev);
++void pci_ignore_hotplug(struct pci_dev *dev);
+ 
+ /* ROM control related routines */
+ int pci_enable_rom(struct pci_dev *pdev);
+@@ -1043,11 +1050,6 @@ bool pci_dev_run_wake(struct pci_dev *dev);
+ bool pci_check_pme_status(struct pci_dev *dev);
+ void pci_pme_wakeup_bus(struct pci_bus *bus);
+ 
+-static inline void pci_ignore_hotplug(struct pci_dev *dev)
+-{
+-	dev->ignore_hotplug = 1;
+-}
+-
+ static inline int pci_enable_wake(struct pci_dev *dev, pci_power_t state,
+ 				  bool enable)
+ {
+@@ -1128,7 +1130,7 @@ int __must_check pci_bus_alloc_resource(struct pci_bus *bus,
+ 
+ int pci_remap_iospace(const struct resource *res, phys_addr_t phys_addr);
+ 
+-static inline dma_addr_t pci_bus_address(struct pci_dev *pdev, int bar)
++static inline pci_bus_addr_t pci_bus_address(struct pci_dev *pdev, int bar)
+ {
+ 	struct pci_bus_region region;
+ 
+diff --git a/include/linux/power_supply.h b/include/linux/power_supply.h
+index 75a1dd8dc56e..a80f1fd01ddb 100644
+--- a/include/linux/power_supply.h
++++ b/include/linux/power_supply.h
+@@ -237,6 +237,7 @@ struct power_supply {
+ 	/* private */
+ 	struct device dev;
+ 	struct work_struct changed_work;
++	struct delayed_work deferred_register_work;
+ 	spinlock_t changed_lock;
+ 	bool changed;
+ 	atomic_t use_cnt;
+diff --git a/include/linux/sysctl.h b/include/linux/sysctl.h
+index 795d5fea5697..fa7bc29925c9 100644
+--- a/include/linux/sysctl.h
++++ b/include/linux/sysctl.h
+@@ -188,6 +188,9 @@ struct ctl_table_header *register_sysctl_paths(const struct ctl_path *path,
+ void unregister_sysctl_table(struct ctl_table_header * table);
+ 
+ extern int sysctl_init(void);
++
++extern struct ctl_table sysctl_mount_point[];
++
+ #else /* CONFIG_SYSCTL */
+ static inline struct ctl_table_header *register_sysctl_table(struct ctl_table * table)
+ {
+diff --git a/include/linux/sysfs.h b/include/linux/sysfs.h
+index 99382c0df17e..9f65758311a4 100644
+--- a/include/linux/sysfs.h
++++ b/include/linux/sysfs.h
+@@ -210,6 +210,10 @@ int __must_check sysfs_rename_dir_ns(struct kobject *kobj, const char *new_name,
+ int __must_check sysfs_move_dir_ns(struct kobject *kobj,
+ 				   struct kobject *new_parent_kobj,
+ 				   const void *new_ns);
++int __must_check sysfs_create_mount_point(struct kobject *parent_kobj,
++					  const char *name);
++void sysfs_remove_mount_point(struct kobject *parent_kobj,
++			      const char *name);
+ 
+ int __must_check sysfs_create_file_ns(struct kobject *kobj,
+ 				      const struct attribute *attr,
+@@ -298,6 +302,17 @@ static inline int sysfs_move_dir_ns(struct kobject *kobj,
+ 	return 0;
+ }
+ 
++static inline int sysfs_create_mount_point(struct kobject *parent_kobj,
++					   const char *name)
++{
++	return 0;
++}
++
++static inline void sysfs_remove_mount_point(struct kobject *parent_kobj,
++					    const char *name)
++{
++}
++
+ static inline int sysfs_create_file_ns(struct kobject *kobj,
+ 				       const struct attribute *attr,
+ 				       const void *ns)
+diff --git a/include/linux/types.h b/include/linux/types.h
+index 59698be03490..8715287c3b1f 100644
+--- a/include/linux/types.h
++++ b/include/linux/types.h
+@@ -139,12 +139,20 @@ typedef unsigned long blkcnt_t;
+  */
+ #define pgoff_t unsigned long
+ 
+-/* A dma_addr_t can hold any valid DMA or bus address for the platform */
++/*
++ * A dma_addr_t can hold any valid DMA address, i.e., any address returned
++ * by the DMA API.
++ *
++ * If the DMA API only uses 32-bit addresses, dma_addr_t need only be 32
++ * bits wide.  Bus addresses, e.g., PCI BARs, may be wider than 32 bits,
++ * but drivers do memory-mapped I/O to ioremapped kernel virtual addresses,
++ * so they don't care about the size of the actual bus addresses.
++ */
+ #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT
+ typedef u64 dma_addr_t;
+ #else
+ typedef u32 dma_addr_t;
+-#endif /* dma_addr_t */
++#endif
+ 
+ typedef unsigned __bitwise__ gfp_t;
+ typedef unsigned __bitwise__ fmode_t;
+diff --git a/init/main.c b/init/main.c
+index 2115055faeac..2a89545e0a5d 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -664,6 +664,7 @@ asmlinkage __visible void __init start_kernel(void)
+ 
+ 	check_bugs();
+ 
++	acpi_subsystem_init();
+ 	sfi_init_late();
+ 
+ 	if (efi_enabled(EFI_RUNTIME_SERVICES)) {
+diff --git a/kernel/cgroup.c b/kernel/cgroup.c
+index 469dd547770c..e8a5491be756 100644
+--- a/kernel/cgroup.c
++++ b/kernel/cgroup.c
+@@ -1924,8 +1924,6 @@ static struct file_system_type cgroup_fs_type = {
+ 	.kill_sb = cgroup_kill_sb,
+ };
+ 
+-static struct kobject *cgroup_kobj;
+-
+ /**
+  * task_cgroup_path - cgroup path of a task in the first cgroup hierarchy
+  * @task: target task
+@@ -5044,13 +5042,13 @@ int __init cgroup_init(void)
+ 			ss->bind(init_css_set.subsys[ssid]);
+ 	}
+ 
+-	cgroup_kobj = kobject_create_and_add("cgroup", fs_kobj);
+-	if (!cgroup_kobj)
+-		return -ENOMEM;
++	err = sysfs_create_mount_point(fs_kobj, "cgroup");
++	if (err)
++		return err;
+ 
+ 	err = register_filesystem(&cgroup_fs_type);
+ 	if (err < 0) {
+-		kobject_put(cgroup_kobj);
++		sysfs_remove_mount_point(fs_kobj, "cgroup");
+ 		return err;
+ 	}
+ 
+diff --git a/kernel/irq/devres.c b/kernel/irq/devres.c
+index d5d0f7345c54..74d90a754268 100644
+--- a/kernel/irq/devres.c
++++ b/kernel/irq/devres.c
+@@ -104,7 +104,7 @@ int devm_request_any_context_irq(struct device *dev, unsigned int irq,
+ 		return -ENOMEM;
+ 
+ 	rc = request_any_context_irq(irq, handler, irqflags, devname, dev_id);
+-	if (rc) {
++	if (rc < 0) {
+ 		devres_free(dr);
+ 		return rc;
+ 	}
+@@ -113,7 +113,7 @@ int devm_request_any_context_irq(struct device *dev, unsigned int irq,
+ 	dr->dev_id = dev_id;
+ 	devres_add(dev, dr);
+ 
+-	return 0;
++	return rc;
+ }
+ EXPORT_SYMBOL(devm_request_any_context_irq);
+ 
+diff --git a/kernel/livepatch/core.c b/kernel/livepatch/core.c
+index 284e2691e380..9ec555732f1a 100644
+--- a/kernel/livepatch/core.c
++++ b/kernel/livepatch/core.c
+@@ -179,7 +179,9 @@ static int klp_find_object_symbol(const char *objname, const char *name,
+ 		.count = 0
+ 	};
+ 
++	mutex_lock(&module_mutex);
+ 	kallsyms_on_each_symbol(klp_find_callback, &args);
++	mutex_unlock(&module_mutex);
+ 
+ 	if (args.count == 0)
+ 		pr_err("symbol '%s' not found in symbol table\n", name);
+@@ -219,13 +221,19 @@ static int klp_verify_vmlinux_symbol(const char *name, unsigned long addr)
+ 		.name = name,
+ 		.addr = addr,
+ 	};
++	int ret;
+ 
+-	if (kallsyms_on_each_symbol(klp_verify_callback, &args))
+-		return 0;
++	mutex_lock(&module_mutex);
++	ret = kallsyms_on_each_symbol(klp_verify_callback, &args);
++	mutex_unlock(&module_mutex);
+ 
+-	pr_err("symbol '%s' not found at specified address 0x%016lx, kernel mismatch?\n",
+-		name, addr);
+-	return -EINVAL;
++	if (!ret) {
++		pr_err("symbol '%s' not found at specified address 0x%016lx, kernel mismatch?\n",
++			name, addr);
++		return -EINVAL;
++	}
++
++	return 0;
+ }
+ 
+ static int klp_find_verify_func_addr(struct klp_object *obj,
+diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
+index 069742d61c68..ec3086879cb5 100644
+--- a/kernel/rcu/tiny.c
++++ b/kernel/rcu/tiny.c
+@@ -170,6 +170,11 @@ static void __rcu_process_callbacks(struct rcu_ctrlblk *rcp)
+ 
+ 	/* Move the ready-to-invoke callbacks to a local list. */
+ 	local_irq_save(flags);
++	if (rcp->donetail == &rcp->rcucblist) {
++		/* No callbacks ready, so just leave. */
++		local_irq_restore(flags);
++		return;
++	}
+ 	RCU_TRACE(trace_rcu_batch_start(rcp->name, 0, rcp->qlen, -1));
+ 	list = rcp->rcucblist;
+ 	rcp->rcucblist = *rcp->donetail;
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 2082b1a88fb9..c3eee4c6d6c1 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -1531,12 +1531,6 @@ static struct ctl_table vm_table[] = {
+ 	{ }
+ };
+ 
+-#if defined(CONFIG_BINFMT_MISC) || defined(CONFIG_BINFMT_MISC_MODULE)
+-static struct ctl_table binfmt_misc_table[] = {
+-	{ }
+-};
+-#endif
+-
+ static struct ctl_table fs_table[] = {
+ 	{
+ 		.procname	= "inode-nr",
+@@ -1690,7 +1684,7 @@ static struct ctl_table fs_table[] = {
+ 	{
+ 		.procname	= "binfmt_misc",
+ 		.mode		= 0555,
+-		.child		= binfmt_misc_table,
++		.child		= sysctl_mount_point,
+ 	},
+ #endif
+ 	{
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index f0fe4f2c1fa7..3716cdb8ba42 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -195,6 +195,8 @@ static struct kmem_cache *scan_area_cache;
+ 
+ /* set if tracing memory operations is enabled */
+ static int kmemleak_enabled;
++/* same as above but only for the kmemleak_free() callback */
++static int kmemleak_free_enabled;
+ /* set in the late_initcall if there were no errors */
+ static int kmemleak_initialized;
+ /* enables or disables early logging of the memory operations */
+@@ -907,12 +909,13 @@ EXPORT_SYMBOL_GPL(kmemleak_alloc);
+  * kmemleak_alloc_percpu - register a newly allocated __percpu object
+  * @ptr:	__percpu pointer to beginning of the object
+  * @size:	size of the object
++ * @gfp:	flags used for kmemleak internal memory allocations
+  *
+  * This function is called from the kernel percpu allocator when a new object
+- * (memory block) is allocated (alloc_percpu). It assumes GFP_KERNEL
+- * allocation.
++ * (memory block) is allocated (alloc_percpu).
+  */
+-void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size)
++void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
++				 gfp_t gfp)
+ {
+ 	unsigned int cpu;
+ 
+@@ -925,7 +928,7 @@ void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size)
+ 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
+ 		for_each_possible_cpu(cpu)
+ 			create_object((unsigned long)per_cpu_ptr(ptr, cpu),
+-				      size, 0, GFP_KERNEL);
++				      size, 0, gfp);
+ 	else if (kmemleak_early_log)
+ 		log_early(KMEMLEAK_ALLOC_PERCPU, ptr, size, 0);
+ }
+@@ -942,7 +945,7 @@ void __ref kmemleak_free(const void *ptr)
+ {
+ 	pr_debug("%s(0x%p)\n", __func__, ptr);
+ 
+-	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
++	if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
+ 		delete_object_full((unsigned long)ptr);
+ 	else if (kmemleak_early_log)
+ 		log_early(KMEMLEAK_FREE, ptr, 0, 0);
+@@ -982,7 +985,7 @@ void __ref kmemleak_free_percpu(const void __percpu *ptr)
+ 
+ 	pr_debug("%s(0x%p)\n", __func__, ptr);
+ 
+-	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
++	if (kmemleak_free_enabled && ptr && !IS_ERR(ptr))
+ 		for_each_possible_cpu(cpu)
+ 			delete_object_full((unsigned long)per_cpu_ptr(ptr,
+ 								      cpu));
+@@ -1750,6 +1753,13 @@ static void kmemleak_do_cleanup(struct work_struct *work)
+ 	mutex_lock(&scan_mutex);
+ 	stop_scan_thread();
+ 
++	/*
++	 * Once the scan thread has stopped, it is safe to no longer track
++	 * object freeing. Ordering of the scan thread stopping and the memory
++	 * accesses below is guaranteed by the kthread_stop() function.
++	 */
++	kmemleak_free_enabled = 0;
++
+ 	if (!kmemleak_found_leaks)
+ 		__kmemleak_do_cleanup();
+ 	else
+@@ -1776,6 +1786,8 @@ static void kmemleak_disable(void)
+ 	/* check whether it is too early for a kernel thread */
+ 	if (kmemleak_initialized)
+ 		schedule_work(&cleanup_work);
++	else
++		kmemleak_free_enabled = 0;
+ 
+ 	pr_info("Kernel memory leak detector disabled\n");
+ }
+@@ -1840,8 +1852,10 @@ void __init kmemleak_init(void)
+ 	if (kmemleak_error) {
+ 		local_irq_restore(flags);
+ 		return;
+-	} else
++	} else {
+ 		kmemleak_enabled = 1;
++		kmemleak_free_enabled = 1;
++	}
+ 	local_irq_restore(flags);
+ 
+ 	/*
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 747743237d9f..99d4c1d0b858 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -1972,35 +1972,41 @@ retry_cpuset:
+ 	pol = get_vma_policy(vma, addr);
+ 	cpuset_mems_cookie = read_mems_allowed_begin();
+ 
+-	if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage &&
+-					pol->mode != MPOL_INTERLEAVE)) {
++	if (pol->mode == MPOL_INTERLEAVE) {
++		unsigned nid;
++
++		nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
++		mpol_cond_put(pol);
++		page = alloc_page_interleave(gfp, order, nid);
++		goto out;
++	}
++
++	if (unlikely(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && hugepage)) {
++		int hpage_node = node;
++
+ 		/*
+ 		 * For hugepage allocation and non-interleave policy which
+-		 * allows the current node, we only try to allocate from the
+-		 * current node and don't fall back to other nodes, as the
+-		 * cost of remote accesses would likely offset THP benefits.
++		 * allows the current node (or other explicitly preferred
++		 * node) we only try to allocate from the current/preferred
++		 * node and don't fall back to other nodes, as the cost of
++		 * remote accesses would likely offset THP benefits.
+ 		 *
+ 		 * If the policy is interleave, or does not allow the current
+ 		 * node in its nodemask, we allocate the standard way.
+ 		 */
++		if (pol->mode == MPOL_PREFERRED &&
++						!(pol->flags & MPOL_F_LOCAL))
++			hpage_node = pol->v.preferred_node;
++
+ 		nmask = policy_nodemask(gfp, pol);
+-		if (!nmask || node_isset(node, *nmask)) {
++		if (!nmask || node_isset(hpage_node, *nmask)) {
+ 			mpol_cond_put(pol);
+-			page = alloc_pages_exact_node(node,
++			page = alloc_pages_exact_node(hpage_node,
+ 						gfp | __GFP_THISNODE, order);
+ 			goto out;
+ 		}
+ 	}
+ 
+-	if (pol->mode == MPOL_INTERLEAVE) {
+-		unsigned nid;
+-
+-		nid = interleave_nid(pol, vma, addr, PAGE_SHIFT + order);
+-		mpol_cond_put(pol);
+-		page = alloc_page_interleave(gfp, order, nid);
+-		goto out;
+-	}
+-
+ 	nmask = policy_nodemask(gfp, pol);
+ 	zl = policy_zonelist(gfp, pol, node);
+ 	mpol_cond_put(pol);
+diff --git a/mm/percpu.c b/mm/percpu.c
+index dfd02484e8de..2dd74487a0af 100644
+--- a/mm/percpu.c
++++ b/mm/percpu.c
+@@ -1030,7 +1030,7 @@ area_found:
+ 		memset((void *)pcpu_chunk_addr(chunk, cpu, 0) + off, 0, size);
+ 
+ 	ptr = __addr_to_pcpu_ptr(chunk->base_addr + off);
+-	kmemleak_alloc_percpu(ptr, size);
++	kmemleak_alloc_percpu(ptr, size, gfp);
+ 	return ptr;
+ 
+ fail_unlock:
+diff --git a/security/inode.c b/security/inode.c
+index 91503b79c5f8..0e37e4fba8fa 100644
+--- a/security/inode.c
++++ b/security/inode.c
+@@ -215,19 +215,17 @@ void securityfs_remove(struct dentry *dentry)
+ }
+ EXPORT_SYMBOL_GPL(securityfs_remove);
+ 
+-static struct kobject *security_kobj;
+-
+ static int __init securityfs_init(void)
+ {
+ 	int retval;
+ 
+-	security_kobj = kobject_create_and_add("security", kernel_kobj);
+-	if (!security_kobj)
+-		return -EINVAL;
++	retval = sysfs_create_mount_point(kernel_kobj, "security");
++	if (retval)
++		return retval;
+ 
+ 	retval = register_filesystem(&fs_type);
+ 	if (retval)
+-		kobject_put(security_kobj);
++		sysfs_remove_mount_point(kernel_kobj, "security");
+ 	return retval;
+ }
+ 
+diff --git a/security/selinux/selinuxfs.c b/security/selinux/selinuxfs.c
+index d2787cca1fcb..3d2201413028 100644
+--- a/security/selinux/selinuxfs.c
++++ b/security/selinux/selinuxfs.c
+@@ -1853,7 +1853,6 @@ static struct file_system_type sel_fs_type = {
+ };
+ 
+ struct vfsmount *selinuxfs_mount;
+-static struct kobject *selinuxfs_kobj;
+ 
+ static int __init init_sel_fs(void)
+ {
+@@ -1862,13 +1861,13 @@ static int __init init_sel_fs(void)
+ 	if (!selinux_enabled)
+ 		return 0;
+ 
+-	selinuxfs_kobj = kobject_create_and_add("selinux", fs_kobj);
+-	if (!selinuxfs_kobj)
+-		return -ENOMEM;
++	err = sysfs_create_mount_point(fs_kobj, "selinux");
++	if (err)
++		return err;
+ 
+ 	err = register_filesystem(&sel_fs_type);
+ 	if (err) {
+-		kobject_put(selinuxfs_kobj);
++		sysfs_remove_mount_point(fs_kobj, "selinux");
+ 		return err;
+ 	}
+ 
+@@ -1887,7 +1886,7 @@ __initcall(init_sel_fs);
+ #ifdef CONFIG_SECURITY_SELINUX_DISABLE
+ void exit_sel_fs(void)
+ {
+-	kobject_put(selinuxfs_kobj);
++	sysfs_remove_mount_point(fs_kobj, "selinux");
+ 	kern_unmount(selinuxfs_mount);
+ 	unregister_filesystem(&sel_fs_type);
+ }
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index d9682985349e..ac4cac7c661a 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -2241,16 +2241,16 @@ static const struct file_operations smk_revoke_subj_ops = {
+ 	.llseek		= generic_file_llseek,
+ };
+ 
+-static struct kset *smackfs_kset;
+ /**
+  * smk_init_sysfs - initialize /sys/fs/smackfs
+  *
+  */
+ static int smk_init_sysfs(void)
+ {
+-	smackfs_kset = kset_create_and_add("smackfs", NULL, fs_kobj);
+-	if (!smackfs_kset)
+-		return -ENOMEM;
++	int err;
++	err = sysfs_create_mount_point(fs_kobj, "smackfs");
++	if (err)
++		return err;
+ 	return 0;
+ }
+ 
+diff --git a/sound/core/pcm.c b/sound/core/pcm.c
+index b25bcf5b8644..dfed728d8c87 100644
+--- a/sound/core/pcm.c
++++ b/sound/core/pcm.c
+@@ -1027,7 +1027,8 @@ void snd_pcm_detach_substream(struct snd_pcm_substream *substream)
+ static ssize_t show_pcm_class(struct device *dev,
+ 			      struct device_attribute *attr, char *buf)
+ {
+-	struct snd_pcm *pcm;
++	struct snd_pcm_str *pstr = container_of(dev, struct snd_pcm_str, dev);
++	struct snd_pcm *pcm = pstr->pcm;
+ 	const char *str;
+ 	static const char *strs[SNDRV_PCM_CLASS_LAST + 1] = {
+ 		[SNDRV_PCM_CLASS_GENERIC] = "generic",
+@@ -1036,8 +1037,7 @@ static ssize_t show_pcm_class(struct device *dev,
+ 		[SNDRV_PCM_CLASS_DIGITIZER] = "digitizer",
+ 	};
+ 
+-	if (! (pcm = dev_get_drvdata(dev)) ||
+-	    pcm->dev_class > SNDRV_PCM_CLASS_LAST)
++	if (pcm->dev_class > SNDRV_PCM_CLASS_LAST)
+ 		str = "none";
+ 	else
+ 		str = strs[pcm->dev_class];
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index b6db25b23dd3..c403dd10d126 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2054,6 +2054,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	{ PCI_DEVICE(0x1022, 0x780d),
+ 	  .driver_data = AZX_DRIVER_GENERIC | AZX_DCAPS_PRESET_ATI_SB },
+ 	/* ATI HDMI */
++	{ PCI_DEVICE(0x1002, 0x1308),
++	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+ 	{ PCI_DEVICE(0x1002, 0x793b),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI | AZX_DCAPS_PRESET_ATI_HDMI },
+ 	{ PCI_DEVICE(0x1002, 0x7919),
+@@ -2062,6 +2064,8 @@ static const struct pci_device_id azx_ids[] = {
+ 	  .driver_data = AZX_DRIVER_ATIHDMI | AZX_DCAPS_PRESET_ATI_HDMI },
+ 	{ PCI_DEVICE(0x1002, 0x970f),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI | AZX_DCAPS_PRESET_ATI_HDMI },
++	{ PCI_DEVICE(0x1002, 0x9840),
++	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS },
+ 	{ PCI_DEVICE(0x1002, 0xaa00),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI | AZX_DCAPS_PRESET_ATI_HDMI },
+ 	{ PCI_DEVICE(0x1002, 0xaa08),
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 6d010452c1f5..0e75998db39f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4458,6 +4458,7 @@ enum {
+ 	ALC269_FIXUP_LIFEBOOK,
+ 	ALC269_FIXUP_LIFEBOOK_EXTMIC,
+ 	ALC269_FIXUP_LIFEBOOK_HP_PIN,
++	ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT,
+ 	ALC269_FIXUP_AMIC,
+ 	ALC269_FIXUP_DMIC,
+ 	ALC269VB_FIXUP_AMIC,
+@@ -4478,6 +4479,7 @@ enum {
+ 	ALC269_FIXUP_DELL3_MIC_NO_PRESENCE,
+ 	ALC269_FIXUP_HEADSET_MODE,
+ 	ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC,
++	ALC269_FIXUP_ASPIRE_HEADSET_MIC,
+ 	ALC269_FIXUP_ASUS_X101_FUNC,
+ 	ALC269_FIXUP_ASUS_X101_VERB,
+ 	ALC269_FIXUP_ASUS_X101,
+@@ -4505,6 +4507,7 @@ enum {
+ 	ALC255_FIXUP_HEADSET_MODE_NO_HP_MIC,
+ 	ALC293_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 	ALC292_FIXUP_TPT440_DOCK,
++	ALC292_FIXUP_TPT440_DOCK2,
+ 	ALC283_FIXUP_BXBT2807_MIC,
+ 	ALC255_FIXUP_DELL_WMI_MIC_MUTE_LED,
+ 	ALC282_FIXUP_ASPIRE_V5_PINS,
+@@ -4515,6 +4518,8 @@ enum {
+ 	ALC288_FIXUP_DELL_HEADSET_MODE,
+ 	ALC288_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 	ALC288_FIXUP_DELL_XPS_13_GPIO6,
++	ALC288_FIXUP_DELL_XPS_13,
++	ALC288_FIXUP_DISABLE_AAMIX,
+ 	ALC292_FIXUP_DELL_E7X,
+ 	ALC292_FIXUP_DISABLE_AAMIX,
+ };
+@@ -4623,6 +4628,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 			{ }
+ 		},
+ 	},
++	[ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc269_fixup_pincfg_no_hp_to_lineout,
++	},
+ 	[ALC269_FIXUP_AMIC] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+@@ -4751,6 +4760,15 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc_fixup_headset_mode_no_hp_mic,
+ 	},
++	[ALC269_FIXUP_ASPIRE_HEADSET_MIC] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x01a1913c }, /* headset mic w/o jack detect */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE,
++	},
+ 	[ALC286_FIXUP_SONY_MIC_NO_PRESENCE] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+@@ -4953,6 +4971,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chain_id = ALC269_FIXUP_HEADSET_MODE
+ 	},
+ 	[ALC292_FIXUP_TPT440_DOCK] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc269_fixup_pincfg_no_hp_to_lineout,
++		.chained = true,
++		.chain_id = ALC292_FIXUP_TPT440_DOCK2
++	},
++	[ALC292_FIXUP_TPT440_DOCK2] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+ 			{ 0x16, 0x21211010 }, /* dock headphone */
+@@ -5039,9 +5063,23 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC288_FIXUP_DELL1_MIC_NO_PRESENCE
+ 	},
++	[ALC288_FIXUP_DISABLE_AAMIX] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc_fixup_disable_aamix,
++		.chained = true,
++		.chain_id = ALC288_FIXUP_DELL_XPS_13_GPIO6
++	},
++	[ALC288_FIXUP_DELL_XPS_13] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc_fixup_dell_xps13,
++		.chained = true,
++		.chain_id = ALC288_FIXUP_DISABLE_AAMIX
++	},
+ 	[ALC292_FIXUP_DISABLE_AAMIX] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc_fixup_disable_aamix,
++		.chained = true,
++		.chain_id = ALC269_FIXUP_DELL2_MIC_NO_PRESENCE
+ 	},
+ 	[ALC292_FIXUP_DELL_E7X] = {
+ 		.type = HDA_FIXUP_FUNC,
+@@ -5056,6 +5094,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1025, 0x029b, "Acer 1810TZ", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x0349, "Acer AOD260", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1025, 0x047c, "Acer AC700", ALC269_FIXUP_ACER_AC700),
++	SND_PCI_QUIRK(0x1025, 0x072d, "Acer Aspire V5-571G", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1025, 0x080d, "Acer Aspire V5-122P", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1025, 0x0740, "Acer AO725", ALC271_FIXUP_HP_GATE_MIC_JACK),
+ 	SND_PCI_QUIRK(0x1025, 0x0742, "Acer AO756", ALC271_FIXUP_HP_GATE_MIC_JACK),
+ 	SND_PCI_QUIRK(0x1025, 0x0775, "Acer Aspire E1-572", ALC271_FIXUP_HP_GATE_MIC_JACK_E1_572),
+@@ -5069,10 +5109,11 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x05f6, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x0615, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK),
+ 	SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK),
++	SND_PCI_QUIRK(0x1028, 0x062e, "Dell Latitude E7450", ALC292_FIXUP_DELL_E7X),
+ 	SND_PCI_QUIRK(0x1028, 0x0638, "Dell Inspiron 5439", ALC290_FIXUP_MONO_SPEAKERS_HSJACK),
+ 	SND_PCI_QUIRK(0x1028, 0x064a, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x064b, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x1028, 0x0665, "Dell XPS 13", ALC292_FIXUP_DELL_E7X),
++	SND_PCI_QUIRK(0x1028, 0x0665, "Dell XPS 13", ALC288_FIXUP_DELL_XPS_13),
+ 	SND_PCI_QUIRK(0x1028, 0x06c7, "Dell", ALC255_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x06d9, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1028, 0x06da, "Dell", ALC293_FIXUP_DELL1_MIC_NO_PRESENCE),
+@@ -5156,6 +5197,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x104d, 0x9084, "Sony VAIO", ALC275_FIXUP_SONY_HWEQ),
+ 	SND_PCI_QUIRK(0x104d, 0x9099, "Sony VAIO S13", ALC275_FIXUP_SONY_DISABLE_AAMIX),
+ 	SND_PCI_QUIRK(0x10cf, 0x1475, "Lifebook", ALC269_FIXUP_LIFEBOOK),
++	SND_PCI_QUIRK(0x10cf, 0x159f, "Lifebook E780", ALC269_FIXUP_LIFEBOOK_NO_HP_TO_LINEOUT),
+ 	SND_PCI_QUIRK(0x10cf, 0x15dc, "Lifebook T731", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+ 	SND_PCI_QUIRK(0x10cf, 0x1757, "Lifebook E752", ALC269_FIXUP_LIFEBOOK_HP_PIN),
+ 	SND_PCI_QUIRK(0x10cf, 0x1845, "Lifebook U904", ALC269_FIXUP_LIFEBOOK_EXTMIC),
+diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
+index bab6c04932aa..0baeecc2213c 100644
+--- a/sound/pci/hda/patch_via.c
++++ b/sound/pci/hda/patch_via.c
+@@ -238,7 +238,9 @@ static int via_pin_power_ctl_get(struct snd_kcontrol *kcontrol,
+ 				 struct snd_ctl_elem_value *ucontrol)
+ {
+ 	struct hda_codec *codec = snd_kcontrol_chip(kcontrol);
+-	ucontrol->value.enumerated.item[0] = codec->power_save_node;
++	struct via_spec *spec = codec->spec;
++
++	ucontrol->value.enumerated.item[0] = spec->gen.power_down_unused;
+ 	return 0;
+ }
+ 
+@@ -249,9 +251,9 @@ static int via_pin_power_ctl_put(struct snd_kcontrol *kcontrol,
+ 	struct via_spec *spec = codec->spec;
+ 	bool val = !!ucontrol->value.enumerated.item[0];
+ 
+-	if (val == codec->power_save_node)
++	if (val == spec->gen.power_down_unused)
+ 		return 0;
+-	codec->power_save_node = val;
++	/* codec->power_save_node = val; */ /* widget PM seems yet broken */
+ 	spec->gen.power_down_unused = val;
+ 	analog_low_current_mode(codec);
+ 	return 1;
+diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile
+index 95abddcd7839..f76830643086 100644
+--- a/tools/testing/selftests/Makefile
++++ b/tools/testing/selftests/Makefile
+@@ -27,7 +27,7 @@ TARGETS_HOTPLUG += memory-hotplug
+ # Makefile to avoid test build failures when test
+ # Makefile doesn't have explicit build rules.
+ ifeq (1,$(MAKELEVEL))
+-undefine LDFLAGS
++override LDFLAGS =
+ override MAKEFLAGS =
+ endif
+ 

