From: "Mike Pagano" <mpagano@gentoo.org>
To: gentoo-commits@lists.gentoo.org
Subject: [gentoo-commits] proj/linux-patches:4.1 commit in: /
Date: Mon,  3 Aug 2015 19:01:23 +0000 (UTC)
Message-ID: <1438628474.ca1be7247b8e5d2dfeb7ae78ece8b4ec4cb819a2.mpagano@gentoo>

commit:     ca1be7247b8e5d2dfeb7ae78ece8b4ec4cb819a2
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Aug  3 19:01:14 2015 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Aug  3 19:01:14 2015 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ca1be724

Linux patch 4.1.4

 0000_README            |     4 +
 1003_linux-4.1.4.patch | 11078 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 11082 insertions(+)

diff --git a/0000_README b/0000_README
index eab69c9..ceda226 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-4.1.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.1.3
 
+Patch:  1003_linux-4.1.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.1.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-4.1.4.patch b/1003_linux-4.1.4.patch
new file mode 100644
index 0000000..f7598b8
--- /dev/null
+++ b/1003_linux-4.1.4.patch
@@ -0,0 +1,11078 @@
+diff --git a/Documentation/ABI/testing/ima_policy b/Documentation/ABI/testing/ima_policy
+index d0d0c578324c..0a378a88217a 100644
+--- a/Documentation/ABI/testing/ima_policy
++++ b/Documentation/ABI/testing/ima_policy
+@@ -20,17 +20,19 @@ Description:
+ 		action: measure | dont_measure | appraise | dont_appraise | audit
+ 		condition:= base | lsm  [option]
+ 			base:	[[func=] [mask=] [fsmagic=] [fsuuid=] [uid=]
+-				 [fowner]]
++				[euid=] [fowner=]]
+ 			lsm:	[[subj_user=] [subj_role=] [subj_type=]
+ 				 [obj_user=] [obj_role=] [obj_type=]]
+ 			option:	[[appraise_type=]] [permit_directio]
+ 
+ 		base: 	func:= [BPRM_CHECK][MMAP_CHECK][FILE_CHECK][MODULE_CHECK]
+ 				[FIRMWARE_CHECK]
+-			mask:= [MAY_READ] [MAY_WRITE] [MAY_APPEND] [MAY_EXEC]
++			mask:= [[^]MAY_READ] [[^]MAY_WRITE] [[^]MAY_APPEND]
++			       [[^]MAY_EXEC]
+ 			fsmagic:= hex value
+ 			fsuuid:= file system UUID (e.g 8bcbe394-4f13-4144-be8e-5aa9ea2ce2f6)
+ 			uid:= decimal value
++			euid:= decimal value
+ 			fowner:=decimal value
+ 		lsm:  	are LSM specific
+ 		option:	appraise_type:= [imasig]
+@@ -49,11 +51,25 @@ Description:
+ 			dont_measure fsmagic=0x01021994
+ 			dont_appraise fsmagic=0x01021994
+ 			# RAMFS_MAGIC
+-			dont_measure fsmagic=0x858458f6
+ 			dont_appraise fsmagic=0x858458f6
++			# DEVPTS_SUPER_MAGIC
++			dont_measure fsmagic=0x1cd1
++			dont_appraise fsmagic=0x1cd1
++			# BINFMTFS_MAGIC
++			dont_measure fsmagic=0x42494e4d
++			dont_appraise fsmagic=0x42494e4d
+ 			# SECURITYFS_MAGIC
+ 			dont_measure fsmagic=0x73636673
+ 			dont_appraise fsmagic=0x73636673
++			# SELINUX_MAGIC
++			dont_measure fsmagic=0xf97cff8c
++			dont_appraise fsmagic=0xf97cff8c
++			# CGROUP_SUPER_MAGIC
++			dont_measure fsmagic=0x27e0eb
++			dont_appraise fsmagic=0x27e0eb
++			# NSFS_MAGIC
++			dont_measure fsmagic=0x6e736673
++			dont_appraise fsmagic=0x6e736673
+ 
+ 			measure func=BPRM_CHECK
+ 			measure func=FILE_MMAP mask=MAY_EXEC
+@@ -70,10 +86,6 @@ Description:
+ 		Examples of LSM specific definitions:
+ 
+ 		SELinux:
+-			# SELINUX_MAGIC
+-			dont_measure fsmagic=0xf97cff8c
+-			dont_appraise fsmagic=0xf97cff8c
+-
+ 			dont_measure obj_type=var_log_t
+ 			dont_appraise obj_type=var_log_t
+ 			dont_measure obj_type=auditd_log_t
+diff --git a/Documentation/ABI/testing/sysfs-ata b/Documentation/ABI/testing/sysfs-ata
+index 0a932155cbba..9231daef3813 100644
+--- a/Documentation/ABI/testing/sysfs-ata
++++ b/Documentation/ABI/testing/sysfs-ata
+@@ -90,6 +90,17 @@ gscr
+ 	130:	SATA_PMP_GSCR_SII_GPIO
+ 	Only valid if the device is a PM.
+ 
++trim
++
++	Shows the DSM TRIM mode currently used by the device. Valid
++	values are:
++	unsupported:		Drive does not support DSM TRIM
++	unqueued:		Drive supports unqueued DSM TRIM only
++	queued:			Drive supports queued DSM TRIM
++	forced_unqueued:	Drive's unqueued DSM support is known to be
++				buggy and only unqueued TRIM commands
++				are sent
++
+ spdn_cnt
+ 
+ 	Number of time libata decided to lower the speed of link due to errors.
+diff --git a/Documentation/ABI/testing/sysfs-bus-iio b/Documentation/ABI/testing/sysfs-bus-iio
+index 3befcb19f414..1fbdd79d1624 100644
+--- a/Documentation/ABI/testing/sysfs-bus-iio
++++ b/Documentation/ABI/testing/sysfs-bus-iio
+@@ -1165,10 +1165,8 @@ Description:
+ 		object is near the sensor, usually be observing
+ 		reflectivity of infrared or ultrasound emitted.
+ 		Often these sensors are unit less and as such conversion
+-		to SI units is not possible.  Where it is, the units should
+-		be meters.  If such a conversion is not possible, the reported
+-		values should behave in the same way as a distance, i.e. lower
+-		values indicate something is closer to the sensor.
++		to SI units is not possible. Higher proximity measurements
++		indicate closer objects, and vice versa.
+ 
+ What:		/sys/.../iio:deviceX/in_illuminance_input
+ What:		/sys/.../iio:deviceX/in_illuminance_raw
+diff --git a/Documentation/devicetree/bindings/pinctrl/marvell,armada-370-pinctrl.txt b/Documentation/devicetree/bindings/pinctrl/marvell,armada-370-pinctrl.txt
+index adda2a8d1d52..e357b020861d 100644
+--- a/Documentation/devicetree/bindings/pinctrl/marvell,armada-370-pinctrl.txt
++++ b/Documentation/devicetree/bindings/pinctrl/marvell,armada-370-pinctrl.txt
+@@ -92,5 +92,5 @@ mpp61         61       gpo, dev(wen1), uart1(txd), audio(rclk)
+ mpp62         62       gpio, dev(a2), uart1(cts), tdm(drx), pcie(clkreq0),
+                        audio(mclk), uart0(cts)
+ mpp63         63       gpo, spi0(sck), tclk
+-mpp64         64       gpio, spi0(miso), spi0-1(cs1)
+-mpp65         65       gpio, spi0(mosi), spi0-1(cs2)
++mpp64         64       gpio, spi0(miso), spi0(cs1)
++mpp65         65       gpio, spi0(mosi), spi0(cs2)
+diff --git a/Documentation/devicetree/bindings/pinctrl/marvell,armada-375-pinctrl.txt b/Documentation/devicetree/bindings/pinctrl/marvell,armada-375-pinctrl.txt
+index 7de0cda4a379..bedbe42c8c0a 100644
+--- a/Documentation/devicetree/bindings/pinctrl/marvell,armada-375-pinctrl.txt
++++ b/Documentation/devicetree/bindings/pinctrl/marvell,armada-375-pinctrl.txt
+@@ -22,8 +22,8 @@ mpp5          5        gpio, dev(ad7), spi0(cs2), spi1(cs2)
+ mpp6          6        gpio, dev(ad0), led(p1), audio(rclk)
+ mpp7          7        gpio, dev(ad1), ptp(clk), led(p2), audio(extclk)
+ mpp8          8        gpio, dev (bootcs), spi0(cs0), spi1(cs0)
+-mpp9          9        gpio, nf(wen), spi0(sck), spi1(sck)
+-mpp10        10        gpio, nf(ren), dram(vttctrl), led(c1)
++mpp9          9        gpio, spi0(sck), spi1(sck), nand(we)
++mpp10        10        gpio, dram(vttctrl), led(c1), nand(re)
+ mpp11        11        gpio, dev(a0), led(c2), audio(sdo)
+ mpp12        12        gpio, dev(a1), audio(bclk)
+ mpp13        13        gpio, dev(readyn), pcie0(rstoutn), pcie1(rstoutn)
+diff --git a/Documentation/devicetree/bindings/pinctrl/marvell,armada-38x-pinctrl.txt b/Documentation/devicetree/bindings/pinctrl/marvell,armada-38x-pinctrl.txt
+index b17c96849fc9..4ac138aaaf87 100644
+--- a/Documentation/devicetree/bindings/pinctrl/marvell,armada-38x-pinctrl.txt
++++ b/Documentation/devicetree/bindings/pinctrl/marvell,armada-38x-pinctrl.txt
+@@ -27,15 +27,15 @@ mpp8          8        gpio, ge0(txd1), dev(ad10)
+ mpp9          9        gpio, ge0(txd2), dev(ad11)
+ mpp10         10       gpio, ge0(txd3), dev(ad12)
+ mpp11         11       gpio, ge0(txctl), dev(ad13)
+-mpp12         12       gpio, ge0(rxd0), pcie0(rstout), pcie1(rstout) [1], spi0(cs1), dev(ad14)
+-mpp13         13       gpio, ge0(rxd1), pcie0(clkreq), pcie1(clkreq) [1], spi0(cs2), dev(ad15)
+-mpp14         14       gpio, ge0(rxd2), ptp(clk), m(vtt_ctrl), spi0(cs3), dev(wen1)
+-mpp15         15       gpio, ge0(rxd3), ge(mdc slave), pcie0(rstout), spi0(mosi), pcie1(rstout) [1]
+-mpp16         16       gpio, ge0(rxctl), ge(mdio slave), m(decc_err), spi0(miso), pcie0(clkreq)
++mpp12         12       gpio, ge0(rxd0), pcie0(rstout), spi0(cs1), dev(ad14), pcie3(clkreq)
++mpp13         13       gpio, ge0(rxd1), pcie0(clkreq), pcie1(clkreq) [1], spi0(cs2), dev(ad15), pcie2(clkreq)
++mpp14         14       gpio, ge0(rxd2), ptp(clk), m(vtt_ctrl), spi0(cs3), dev(wen1), pcie3(clkreq)
++mpp15         15       gpio, ge0(rxd3), ge(mdc slave), pcie0(rstout), spi0(mosi)
++mpp16         16       gpio, ge0(rxctl), ge(mdio slave), m(decc_err), spi0(miso), pcie0(clkreq), pcie1(clkreq) [1]
+ mpp17         17       gpio, ge0(rxclk), ptp(clk), ua1(rxd), spi0(sck), sata1(prsnt)
+-mpp18         18       gpio, ge0(rxerr), ptp(trig_gen), ua1(txd), spi0(cs0), pcie1(rstout) [1]
+-mpp19         19       gpio, ge0(col), ptp(event_req), pcie0(clkreq), sata1(prsnt), ua0(cts)
+-mpp20         20       gpio, ge0(txclk), ptp(clk), pcie1(rstout) [1], sata0(prsnt), ua0(rts)
++mpp18         18       gpio, ge0(rxerr), ptp(trig_gen), ua1(txd), spi0(cs0)
++mpp19         19       gpio, ge0(col), ptp(event_req), ge0(txerr), sata1(prsnt), ua0(cts)
++mpp20         20       gpio, ge0(txclk), ptp(clk), sata0(prsnt), ua0(rts)
+ mpp21         21       gpio, spi0(cs1), ge1(rxd0), sata0(prsnt), sd0(cmd), dev(bootcs)
+ mpp22         22       gpio, spi0(mosi), dev(ad0)
+ mpp23         23       gpio, spi0(sck), dev(ad2)
+@@ -58,23 +58,23 @@ mpp39         39       gpio, i2c1(sck), ge1(rxd2), ua0(cts), sd0(d1), dev(a2)
+ mpp40         40       gpio, i2c1(sda), ge1(rxd3), ua0(rts), sd0(d2), dev(ad6)
+ mpp41         41       gpio, ua1(rxd), ge1(rxctl), ua0(cts), spi1(cs3), dev(burst/last)
+ mpp42         42       gpio, ua1(txd), ua0(rts), dev(ad7)
+-mpp43         43       gpio, pcie0(clkreq), m(vtt_ctrl), m(decc_err), pcie0(rstout), dev(clkout)
+-mpp44         44       gpio, sata0(prsnt), sata1(prsnt), sata2(prsnt) [2], sata3(prsnt) [3], pcie0(rstout)
+-mpp45         45       gpio, ref(clk_out0), pcie0(rstout), pcie1(rstout) [1], pcie2(rstout), pcie3(rstout)
+-mpp46         46       gpio, ref(clk_out1), pcie0(rstout), pcie1(rstout) [1], pcie2(rstout), pcie3(rstout)
+-mpp47         47       gpio, sata0(prsnt), sata1(prsnt), sata2(prsnt) [2], spi1(cs2), sata3(prsnt) [2]
+-mpp48         48       gpio, sata0(prsnt), m(vtt_ctrl), tdm2c(pclk), audio(mclk), sd0(d4)
+-mpp49         49       gpio, sata2(prsnt) [2], sata3(prsnt) [2], tdm2c(fsync), audio(lrclk), sd0(d5)
+-mpp50         50       gpio, pcie0(rstout), pcie1(rstout) [1], tdm2c(drx), audio(extclk), sd0(cmd)
++mpp43         43       gpio, pcie0(clkreq), m(vtt_ctrl), m(decc_err), spi1(cs2), dev(clkout)
++mpp44         44       gpio, sata0(prsnt), sata1(prsnt), sata2(prsnt) [2], sata3(prsnt) [3]
++mpp45         45       gpio, ref(clk_out0), pcie0(rstout)
++mpp46         46       gpio, ref(clk_out1), pcie0(rstout)
++mpp47         47       gpio, sata0(prsnt), sata1(prsnt), sata2(prsnt) [2], sata3(prsnt) [2]
++mpp48         48       gpio, sata0(prsnt), m(vtt_ctrl), tdm2c(pclk), audio(mclk), sd0(d4), pcie0(clkreq)
++mpp49         49       gpio, sata2(prsnt) [2], sata3(prsnt) [2], tdm2c(fsync), audio(lrclk), sd0(d5), pcie1(clkreq)
++mpp50         50       gpio, pcie0(rstout), tdm2c(drx), audio(extclk), sd0(cmd)
+ mpp51         51       gpio, tdm2c(dtx), audio(sdo), m(decc_err)
+-mpp52         52       gpio, pcie0(rstout), pcie1(rstout) [1], tdm2c(intn), audio(sdi), sd0(d6)
++mpp52         52       gpio, pcie0(rstout), tdm2c(intn), audio(sdi), sd0(d6)
+ mpp53         53       gpio, sata1(prsnt), sata0(prsnt), tdm2c(rstn), audio(bclk), sd0(d7)
+-mpp54         54       gpio, sata0(prsnt), sata1(prsnt), pcie0(rstout), pcie1(rstout) [1], sd0(d3)
++mpp54         54       gpio, sata0(prsnt), sata1(prsnt), pcie0(rstout), ge0(txerr), sd0(d3)
+ mpp55         55       gpio, ua1(cts), ge(mdio), pcie1(clkreq) [1], spi1(cs1), sd0(d0)
+ mpp56         56       gpio, ua1(rts), ge(mdc), m(decc_err), spi1(mosi)
+ mpp57         57       gpio, spi1(sck), sd0(clk)
+ mpp58         58       gpio, pcie1(clkreq) [1], i2c1(sck), pcie2(clkreq), spi1(miso), sd0(d1)
+-mpp59         59       gpio, pcie0(rstout), i2c1(sda), pcie1(rstout) [1], spi1(cs0), sd0(d2)
++mpp59         59       gpio, pcie0(rstout), i2c1(sda), spi1(cs0), sd0(d2)
+ 
+ [1]: only available on 88F6820 and 88F6828
+ [2]: only available on 88F6828
+diff --git a/Documentation/devicetree/bindings/pinctrl/marvell,armada-xp-pinctrl.txt b/Documentation/devicetree/bindings/pinctrl/marvell,armada-xp-pinctrl.txt
+index 373dbccd7ab0..96e7744cab84 100644
+--- a/Documentation/devicetree/bindings/pinctrl/marvell,armada-xp-pinctrl.txt
++++ b/Documentation/devicetree/bindings/pinctrl/marvell,armada-xp-pinctrl.txt
+@@ -42,15 +42,15 @@ mpp20         20       gpio, ge0(rxd4), ge1(rxd2), lcd(d20), ptp(clk)
+ mpp21         21       gpio, ge0(rxd5), ge1(rxd3), lcd(d21), mem(bat)
+ mpp22         22       gpio, ge0(rxd6), ge1(rxctl), lcd(d22), sata0(prsnt)
+ mpp23         23       gpio, ge0(rxd7), ge1(rxclk), lcd(d23), sata1(prsnt)
+-mpp24         24       gpio, lcd(hsync), sata1(prsnt), nf(bootcs-re), tdm(rst)
+-mpp25         25       gpio, lcd(vsync), sata0(prsnt), nf(bootcs-we), tdm(pclk)
+-mpp26         26       gpio, lcd(clk), tdm(fsync), vdd(cpu1-pd)
++mpp24         24       gpio, lcd(hsync), sata1(prsnt), tdm(rst)
++mpp25         25       gpio, lcd(vsync), sata0(prsnt), tdm(pclk)
++mpp26         26       gpio, lcd(clk), tdm(fsync)
+ mpp27         27       gpio, lcd(e), tdm(dtx), ptp(trig)
+ mpp28         28       gpio, lcd(pwm), tdm(drx), ptp(evreq)
+-mpp29         29       gpio, lcd(ref-clk), tdm(int0), ptp(clk), vdd(cpu0-pd)
++mpp29         29       gpio, lcd(ref-clk), tdm(int0), ptp(clk)
+ mpp30         30       gpio, tdm(int1), sd0(clk)
+-mpp31         31       gpio, tdm(int2), sd0(cmd), vdd(cpu0-pd)
+-mpp32         32       gpio, tdm(int3), sd0(d0), vdd(cpu1-pd)
++mpp31         31       gpio, tdm(int2), sd0(cmd)
++mpp32         32       gpio, tdm(int3), sd0(d0)
+ mpp33         33       gpio, tdm(int4), sd0(d1), mem(bat)
+ mpp34         34       gpio, tdm(int5), sd0(d2), sata0(prsnt)
+ mpp35         35       gpio, tdm(int6), sd0(d3), sata1(prsnt)
+@@ -58,21 +58,18 @@ mpp36         36       gpio, spi(mosi)
+ mpp37         37       gpio, spi(miso)
+ mpp38         38       gpio, spi(sck)
+ mpp39         39       gpio, spi(cs0)
+-mpp40         40       gpio, spi(cs1), uart2(cts), lcd(vga-hsync), vdd(cpu1-pd),
+-                       pcie(clkreq0)
++mpp40         40       gpio, spi(cs1), uart2(cts), lcd(vga-hsync), pcie(clkreq0)
+ mpp41         41       gpio, spi(cs2), uart2(rts), lcd(vga-vsync), sata1(prsnt),
+                        pcie(clkreq1)
+-mpp42         42       gpio, uart2(rxd), uart0(cts), tdm(int7), tdm-1(timer),
+-                       vdd(cpu0-pd)
+-mpp43         43       gpio, uart2(txd), uart0(rts), spi(cs3), pcie(rstout),
+-                       vdd(cpu2-3-pd){1}
++mpp42         42       gpio, uart2(rxd), uart0(cts), tdm(int7), tdm-1(timer)
++mpp43         43       gpio, uart2(txd), uart0(rts), spi(cs3), pcie(rstout)
+ mpp44         44       gpio, uart2(cts), uart3(rxd), spi(cs4), pcie(clkreq2),
+                        mem(bat)
+ mpp45         45       gpio, uart2(rts), uart3(txd), spi(cs5), sata1(prsnt)
+ mpp46         46       gpio, uart3(rts), uart1(rts), spi(cs6), sata0(prsnt)
+ mpp47         47       gpio, uart3(cts), uart1(cts), spi(cs7), pcie(clkreq3),
+                        ref(clkout)
+-mpp48         48       gpio, tclk, dev(burst/last)
++mpp48         48       gpio, dev(clkout), dev(burst/last)
+ 
+ * Marvell Armada XP (mv78260 and mv78460 only)
+ 
+@@ -84,9 +81,9 @@ mpp51         51       gpio, dev(ad16)
+ mpp52         52       gpio, dev(ad17)
+ mpp53         53       gpio, dev(ad18)
+ mpp54         54       gpio, dev(ad19)
+-mpp55         55       gpio, dev(ad20), vdd(cpu0-pd)
+-mpp56         56       gpio, dev(ad21), vdd(cpu1-pd)
+-mpp57         57       gpio, dev(ad22), vdd(cpu2-3-pd){1}
++mpp55         55       gpio, dev(ad20)
++mpp56         56       gpio, dev(ad21)
++mpp57         57       gpio, dev(ad22)
+ mpp58         58       gpio, dev(ad23)
+ mpp59         59       gpio, dev(ad24)
+ mpp60         60       gpio, dev(ad25)
+@@ -96,6 +93,3 @@ mpp63         63       gpio, dev(ad28)
+ mpp64         64       gpio, dev(ad29)
+ mpp65         65       gpio, dev(ad30)
+ mpp66         66       gpio, dev(ad31)
+-
+-Notes:
+-* {1} vdd(cpu2-3-pd) only available on mv78460.
+diff --git a/Documentation/devicetree/bindings/usb/atmel-usb.txt b/Documentation/devicetree/bindings/usb/atmel-usb.txt
+index e180d56c75db..de773a00e2d4 100644
+--- a/Documentation/devicetree/bindings/usb/atmel-usb.txt
++++ b/Documentation/devicetree/bindings/usb/atmel-usb.txt
+@@ -60,9 +60,9 @@ Atmel High-Speed USB device controller
+ 
+ Required properties:
+  - compatible: Should be one of the following
+-	       "at91sam9rl-udc"
+-	       "at91sam9g45-udc"
+-	       "sama5d3-udc"
++	       "atmel,at91sam9rl-udc"
++	       "atmel,at91sam9g45-udc"
++	       "atmel,sama5d3-udc"
+  - reg: Address and length of the register set for the device
+  - interrupts: Should contain usba interrupt
+  - ep childnode: To specify the number of endpoints and their properties.
+diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
+index 6726139bd289..cd03a0faca8f 100644
+--- a/Documentation/kernel-parameters.txt
++++ b/Documentation/kernel-parameters.txt
+@@ -1398,7 +1398,15 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
+ 			The list of supported hash algorithms is defined
+ 			in crypto/hash_info.h.
+ 
+-	ima_tcb		[IMA]
++	ima_policy=	[IMA]
++			The builtin measurement policy to load during IMA
++			setup.  Specyfing "tcb" as the value, measures all
++			programs exec'd, files mmap'd for exec, and all files
++			opened with the read mode bit set by either the
++			effective uid (euid=0) or uid=0.
++			Format: "tcb"
++
++	ima_tcb		[IMA] Deprecated.  Use ima_policy= instead.
+ 			Load a policy which meets the needs of the Trusted
+ 			Computing Base.  This means IMA will measure all
+ 			programs exec'd, files mmap'd for exec, and all files
+diff --git a/Makefile b/Makefile
+index e3cdec4898be..36f3225cdf1f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,6 +1,6 @@
+ VERSION = 4
+ PATCHLEVEL = 1
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Series 4800
+ 
+diff --git a/arch/arm/boot/dts/at91-sama5d4ek.dts b/arch/arm/boot/dts/at91-sama5d4ek.dts
+index 89ef4a540db5..45e7761b7a29 100644
+--- a/arch/arm/boot/dts/at91-sama5d4ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d4ek.dts
+@@ -108,8 +108,8 @@
+ 			mmc0: mmc@f8000000 {
+ 				pinctrl-names = "default";
+ 				pinctrl-0 = <&pinctrl_mmc0_clk_cmd_dat0 &pinctrl_mmc0_dat1_3 &pinctrl_mmc0_cd>;
+-				slot@1 {
+-					reg = <1>;
++				slot@0 {
++					reg = <0>;
+ 					bus-width = <4>;
+ 					cd-gpios = <&pioE 5 0>;
+ 				};
+diff --git a/arch/arm/boot/dts/at91sam9g45.dtsi b/arch/arm/boot/dts/at91sam9g45.dtsi
+index 70e59c5ceb2f..e54421176af8 100644
+--- a/arch/arm/boot/dts/at91sam9g45.dtsi
++++ b/arch/arm/boot/dts/at91sam9g45.dtsi
+@@ -1148,7 +1148,7 @@
+ 			usb2: gadget@fff78000 {
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				compatible = "atmel,at91sam9rl-udc";
++				compatible = "atmel,at91sam9g45-udc";
+ 				reg = <0x00600000 0x80000
+ 				       0xfff78000 0x400>;
+ 				interrupts = <27 IRQ_TYPE_LEVEL_HIGH 0>;
+diff --git a/arch/arm/boot/dts/at91sam9x5.dtsi b/arch/arm/boot/dts/at91sam9x5.dtsi
+index 3aa56ae3410a..3314a7303754 100644
+--- a/arch/arm/boot/dts/at91sam9x5.dtsi
++++ b/arch/arm/boot/dts/at91sam9x5.dtsi
+@@ -1062,7 +1062,7 @@
+ 			usb2: gadget@f803c000 {
+ 				#address-cells = <1>;
+ 				#size-cells = <0>;
+-				compatible = "atmel,at91sam9rl-udc";
++				compatible = "atmel,at91sam9g45-udc";
+ 				reg = <0x00500000 0x80000
+ 				       0xf803c000 0x400>;
+ 				interrupts = <23 IRQ_TYPE_LEVEL_HIGH 0>;
+diff --git a/arch/arm/boot/dts/imx23.dtsi b/arch/arm/boot/dts/imx23.dtsi
+index bbcfb5a19c77..0cb8b0b11c3f 100644
+--- a/arch/arm/boot/dts/imx23.dtsi
++++ b/arch/arm/boot/dts/imx23.dtsi
+@@ -435,6 +435,7 @@
+ 				interrupts = <36 37 38 39 40 41 42 43 44>;
+ 				status = "disabled";
+ 				clocks = <&clks 26>;
++				#io-channel-cells = <1>;
+ 			};
+ 
+ 			spdif@80054000 {
+diff --git a/arch/arm/boot/dts/sama5d3.dtsi b/arch/arm/boot/dts/sama5d3.dtsi
+index 57ab8587f7b9..37e6182f1470 100644
+--- a/arch/arm/boot/dts/sama5d3.dtsi
++++ b/arch/arm/boot/dts/sama5d3.dtsi
+@@ -1321,7 +1321,7 @@
+ 		usb0: gadget@00500000 {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+-			compatible = "atmel,at91sam9rl-udc";
++			compatible = "atmel,sama5d3-udc";
+ 			reg = <0x00500000 0x100000
+ 			       0xf8030000 0x4000>;
+ 			interrupts = <33 IRQ_TYPE_LEVEL_HIGH 2>;
+diff --git a/arch/arm/boot/dts/sama5d4.dtsi b/arch/arm/boot/dts/sama5d4.dtsi
+index 6b1bb58f9c0b..a5f5f4090af6 100644
+--- a/arch/arm/boot/dts/sama5d4.dtsi
++++ b/arch/arm/boot/dts/sama5d4.dtsi
+@@ -123,7 +123,7 @@
+ 		usb0: gadget@00400000 {
+ 			#address-cells = <1>;
+ 			#size-cells = <0>;
+-			compatible = "atmel,at91sam9rl-udc";
++			compatible = "atmel,sama5d3-udc";
+ 			reg = <0x00400000 0x100000
+ 			       0xfc02c000 0x4000>;
+ 			interrupts = <47 IRQ_TYPE_LEVEL_HIGH 2>;
+@@ -1125,10 +1125,10 @@
+ 				compatible = "atmel,at91sam9g46-aes";
+ 				reg = <0xfc044000 0x100>;
+ 				interrupts = <12 IRQ_TYPE_LEVEL_HIGH 0>;
+-				dmas = <&dma0 (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1))
+-					AT91_XDMAC_DT_PERID(41)>,
+-				       <&dma0 (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1))
+-					AT91_XDMAC_DT_PERID(40)>;
++				dmas = <&dma0 (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1)
++					| AT91_XDMAC_DT_PERID(41))>,
++				       <&dma0 (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1)
++					| AT91_XDMAC_DT_PERID(40))>;
+ 				dma-names = "tx", "rx";
+ 				clocks = <&aes_clk>;
+ 				clock-names = "aes_clk";
+@@ -1139,10 +1139,10 @@
+ 				compatible = "atmel,at91sam9g46-tdes";
+ 				reg = <0xfc04c000 0x100>;
+ 				interrupts = <14 IRQ_TYPE_LEVEL_HIGH 0>;
+-				dmas = <&dma0 (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1))
+-					AT91_XDMAC_DT_PERID(42)>,
+-				       <&dma0 (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1))
+-					AT91_XDMAC_DT_PERID(43)>;
++				dmas = <&dma0 (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1)
++					| AT91_XDMAC_DT_PERID(42))>,
++				       <&dma0 (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1)
++					| AT91_XDMAC_DT_PERID(43))>;
+ 				dma-names = "tx", "rx";
+ 				clocks = <&tdes_clk>;
+ 				clock-names = "tdes_clk";
+@@ -1153,8 +1153,8 @@
+ 				compatible = "atmel,at91sam9g46-sha";
+ 				reg = <0xfc050000 0x100>;
+ 				interrupts = <15 IRQ_TYPE_LEVEL_HIGH 0>;
+-				dmas = <&dma0 (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1))
+-					AT91_XDMAC_DT_PERID(44)>;
++				dmas = <&dma0 (AT91_XDMAC_DT_MEM_IF(0) | AT91_XDMAC_DT_PER_IF(1)
++					| AT91_XDMAC_DT_PERID(44))>;
+ 				dma-names = "tx";
+ 				clocks = <&sha_clk>;
+ 				clock-names = "sha_clk";
+diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
+index cca5b8758185..f11d82527076 100644
+--- a/arch/arm/kernel/smp.c
++++ b/arch/arm/kernel/smp.c
+@@ -576,7 +576,7 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
+ 	struct pt_regs *old_regs = set_irq_regs(regs);
+ 
+ 	if ((unsigned)ipinr < NR_IPI) {
+-		trace_ipi_entry(ipi_types[ipinr]);
++		trace_ipi_entry_rcuidle(ipi_types[ipinr]);
+ 		__inc_irq_stat(cpu, ipi_irqs[ipinr]);
+ 	}
+ 
+@@ -635,7 +635,7 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
+ 	}
+ 
+ 	if ((unsigned)ipinr < NR_IPI)
+-		trace_ipi_exit(ipi_types[ipinr]);
++		trace_ipi_exit_rcuidle(ipi_types[ipinr]);
+ 	set_irq_regs(old_regs);
+ }
+ 
+diff --git a/arch/arm/mach-dove/include/mach/irqs.h b/arch/arm/mach-dove/include/mach/irqs.h
+index 03d401d20453..3f29e6bca058 100644
+--- a/arch/arm/mach-dove/include/mach/irqs.h
++++ b/arch/arm/mach-dove/include/mach/irqs.h
+@@ -14,73 +14,73 @@
+ /*
+  * Dove Low Interrupt Controller
+  */
+-#define IRQ_DOVE_BRIDGE		0
+-#define IRQ_DOVE_H2C		1
+-#define IRQ_DOVE_C2H		2
+-#define IRQ_DOVE_NAND		3
+-#define IRQ_DOVE_PDMA		4
+-#define IRQ_DOVE_SPI1		5
+-#define IRQ_DOVE_SPI0		6
+-#define IRQ_DOVE_UART_0		7
+-#define IRQ_DOVE_UART_1		8
+-#define IRQ_DOVE_UART_2		9
+-#define IRQ_DOVE_UART_3		10
+-#define IRQ_DOVE_I2C		11
+-#define IRQ_DOVE_GPIO_0_7	12
+-#define IRQ_DOVE_GPIO_8_15	13
+-#define IRQ_DOVE_GPIO_16_23	14
+-#define IRQ_DOVE_PCIE0_ERR	15
+-#define IRQ_DOVE_PCIE0		16
+-#define IRQ_DOVE_PCIE1_ERR	17
+-#define IRQ_DOVE_PCIE1		18
+-#define IRQ_DOVE_I2S0		19
+-#define IRQ_DOVE_I2S0_ERR	20
+-#define IRQ_DOVE_I2S1		21
+-#define IRQ_DOVE_I2S1_ERR	22
+-#define IRQ_DOVE_USB_ERR	23
+-#define IRQ_DOVE_USB0		24
+-#define IRQ_DOVE_USB1		25
+-#define IRQ_DOVE_GE00_RX	26
+-#define IRQ_DOVE_GE00_TX	27
+-#define IRQ_DOVE_GE00_MISC	28
+-#define IRQ_DOVE_GE00_SUM	29
+-#define IRQ_DOVE_GE00_ERR	30
+-#define IRQ_DOVE_CRYPTO		31
++#define IRQ_DOVE_BRIDGE		(1 + 0)
++#define IRQ_DOVE_H2C		(1 + 1)
++#define IRQ_DOVE_C2H		(1 + 2)
++#define IRQ_DOVE_NAND		(1 + 3)
++#define IRQ_DOVE_PDMA		(1 + 4)
++#define IRQ_DOVE_SPI1		(1 + 5)
++#define IRQ_DOVE_SPI0		(1 + 6)
++#define IRQ_DOVE_UART_0		(1 + 7)
++#define IRQ_DOVE_UART_1		(1 + 8)
++#define IRQ_DOVE_UART_2		(1 + 9)
++#define IRQ_DOVE_UART_3		(1 + 10)
++#define IRQ_DOVE_I2C		(1 + 11)
++#define IRQ_DOVE_GPIO_0_7	(1 + 12)
++#define IRQ_DOVE_GPIO_8_15	(1 + 13)
++#define IRQ_DOVE_GPIO_16_23	(1 + 14)
++#define IRQ_DOVE_PCIE0_ERR	(1 + 15)
++#define IRQ_DOVE_PCIE0		(1 + 16)
++#define IRQ_DOVE_PCIE1_ERR	(1 + 17)
++#define IRQ_DOVE_PCIE1		(1 + 18)
++#define IRQ_DOVE_I2S0		(1 + 19)
++#define IRQ_DOVE_I2S0_ERR	(1 + 20)
++#define IRQ_DOVE_I2S1		(1 + 21)
++#define IRQ_DOVE_I2S1_ERR	(1 + 22)
++#define IRQ_DOVE_USB_ERR	(1 + 23)
++#define IRQ_DOVE_USB0		(1 + 24)
++#define IRQ_DOVE_USB1		(1 + 25)
++#define IRQ_DOVE_GE00_RX	(1 + 26)
++#define IRQ_DOVE_GE00_TX	(1 + 27)
++#define IRQ_DOVE_GE00_MISC	(1 + 28)
++#define IRQ_DOVE_GE00_SUM	(1 + 29)
++#define IRQ_DOVE_GE00_ERR	(1 + 30)
++#define IRQ_DOVE_CRYPTO		(1 + 31)
+ 
+ /*
+  * Dove High Interrupt Controller
+  */
+-#define IRQ_DOVE_AC97		32
+-#define IRQ_DOVE_PMU		33
+-#define IRQ_DOVE_CAM		34
+-#define IRQ_DOVE_SDIO0		35
+-#define IRQ_DOVE_SDIO1		36
+-#define IRQ_DOVE_SDIO0_WAKEUP	37
+-#define IRQ_DOVE_SDIO1_WAKEUP	38
+-#define IRQ_DOVE_XOR_00		39
+-#define IRQ_DOVE_XOR_01		40
+-#define IRQ_DOVE_XOR0_ERR	41
+-#define IRQ_DOVE_XOR_10		42
+-#define IRQ_DOVE_XOR_11		43
+-#define IRQ_DOVE_XOR1_ERR	44
+-#define IRQ_DOVE_LCD_DCON	45
+-#define IRQ_DOVE_LCD1		46
+-#define IRQ_DOVE_LCD0		47
+-#define IRQ_DOVE_GPU		48
+-#define IRQ_DOVE_PERFORM_MNTR	49
+-#define IRQ_DOVE_VPRO_DMA1	51
+-#define IRQ_DOVE_SSP_TIMER	54
+-#define IRQ_DOVE_SSP		55
+-#define IRQ_DOVE_MC_L2_ERR	56
+-#define IRQ_DOVE_CRYPTO_ERR	59
+-#define IRQ_DOVE_GPIO_24_31	60
+-#define IRQ_DOVE_HIGH_GPIO	61
+-#define IRQ_DOVE_SATA		62
++#define IRQ_DOVE_AC97		(1 + 32)
++#define IRQ_DOVE_PMU		(1 + 33)
++#define IRQ_DOVE_CAM		(1 + 34)
++#define IRQ_DOVE_SDIO0		(1 + 35)
++#define IRQ_DOVE_SDIO1		(1 + 36)
++#define IRQ_DOVE_SDIO0_WAKEUP	(1 + 37)
++#define IRQ_DOVE_SDIO1_WAKEUP	(1 + 38)
++#define IRQ_DOVE_XOR_00		(1 + 39)
++#define IRQ_DOVE_XOR_01		(1 + 40)
++#define IRQ_DOVE_XOR0_ERR	(1 + 41)
++#define IRQ_DOVE_XOR_10		(1 + 42)
++#define IRQ_DOVE_XOR_11		(1 + 43)
++#define IRQ_DOVE_XOR1_ERR	(1 + 44)
++#define IRQ_DOVE_LCD_DCON	(1 + 45)
++#define IRQ_DOVE_LCD1		(1 + 46)
++#define IRQ_DOVE_LCD0		(1 + 47)
++#define IRQ_DOVE_GPU		(1 + 48)
++#define IRQ_DOVE_PERFORM_MNTR	(1 + 49)
++#define IRQ_DOVE_VPRO_DMA1	(1 + 51)
++#define IRQ_DOVE_SSP_TIMER	(1 + 54)
++#define IRQ_DOVE_SSP		(1 + 55)
++#define IRQ_DOVE_MC_L2_ERR	(1 + 56)
++#define IRQ_DOVE_CRYPTO_ERR	(1 + 59)
++#define IRQ_DOVE_GPIO_24_31	(1 + 60)
++#define IRQ_DOVE_HIGH_GPIO	(1 + 61)
++#define IRQ_DOVE_SATA		(1 + 62)
+ 
+ /*
+  * DOVE General Purpose Pins
+  */
+-#define IRQ_DOVE_GPIO_START	64
++#define IRQ_DOVE_GPIO_START	65
+ #define NR_GPIO_IRQS		64
+ 
+ /*
+diff --git a/arch/arm/mach-dove/irq.c b/arch/arm/mach-dove/irq.c
+index 4a5a7aedcb76..df0223f76fa9 100644
+--- a/arch/arm/mach-dove/irq.c
++++ b/arch/arm/mach-dove/irq.c
+@@ -126,14 +126,14 @@ __exception_irq_entry dove_legacy_handle_irq(struct pt_regs *regs)
+ 	stat = readl_relaxed(dove_irq_base + IRQ_CAUSE_LOW_OFF);
+ 	stat &= readl_relaxed(dove_irq_base + IRQ_MASK_LOW_OFF);
+ 	if (stat) {
+-		unsigned int hwirq = __fls(stat);
++		unsigned int hwirq = 1 + __fls(stat);
+ 		handle_IRQ(hwirq, regs);
+ 		return;
+ 	}
+ 	stat = readl_relaxed(dove_irq_base + IRQ_CAUSE_HIGH_OFF);
+ 	stat &= readl_relaxed(dove_irq_base + IRQ_MASK_HIGH_OFF);
+ 	if (stat) {
+-		unsigned int hwirq = 32 + __fls(stat);
++		unsigned int hwirq = 33 + __fls(stat);
+ 		handle_IRQ(hwirq, regs);
+ 		return;
+ 	}
+@@ -144,8 +144,8 @@ void __init dove_init_irq(void)
+ {
+ 	int i;
+ 
+-	orion_irq_init(0, IRQ_VIRT_BASE + IRQ_MASK_LOW_OFF);
+-	orion_irq_init(32, IRQ_VIRT_BASE + IRQ_MASK_HIGH_OFF);
++	orion_irq_init(1, IRQ_VIRT_BASE + IRQ_MASK_LOW_OFF);
++	orion_irq_init(33, IRQ_VIRT_BASE + IRQ_MASK_HIGH_OFF);
+ 
+ #ifdef CONFIG_MULTI_IRQ_HANDLER
+ 	set_handle_irq(dove_legacy_handle_irq);
+diff --git a/arch/arm/vdso/vdsomunge.c b/arch/arm/vdso/vdsomunge.c
+index 9005b07296c8..aedec81d1198 100644
+--- a/arch/arm/vdso/vdsomunge.c
++++ b/arch/arm/vdso/vdsomunge.c
+@@ -45,13 +45,11 @@
+  * it does.
+  */
+ 
+-#define _GNU_SOURCE
+-
+ #include <byteswap.h>
+ #include <elf.h>
+ #include <errno.h>
+-#include <error.h>
+ #include <fcntl.h>
++#include <stdarg.h>
+ #include <stdbool.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+@@ -82,11 +80,25 @@
+ #define EF_ARM_ABI_FLOAT_HARD 0x400
+ #endif
+ 
++static int failed;
++static const char *argv0;
+ static const char *outfile;
+ 
++static void fail(const char *fmt, ...)
++{
++	va_list ap;
++
++	failed = 1;
++	fprintf(stderr, "%s: ", argv0);
++	va_start(ap, fmt);
++	vfprintf(stderr, fmt, ap);
++	va_end(ap);
++	exit(EXIT_FAILURE);
++}
++
+ static void cleanup(void)
+ {
+-	if (error_message_count > 0 && outfile != NULL)
++	if (failed && outfile != NULL)
+ 		unlink(outfile);
+ }
+ 
+@@ -119,68 +131,66 @@ int main(int argc, char **argv)
+ 	int infd;
+ 
+ 	atexit(cleanup);
++	argv0 = argv[0];
+ 
+ 	if (argc != 3)
+-		error(EXIT_FAILURE, 0, "Usage: %s [infile] [outfile]", argv[0]);
++		fail("Usage: %s [infile] [outfile]\n", argv[0]);
+ 
+ 	infile = argv[1];
+ 	outfile = argv[2];
+ 
+ 	infd = open(infile, O_RDONLY);
+ 	if (infd < 0)
+-		error(EXIT_FAILURE, errno, "Cannot open %s", infile);
++		fail("Cannot open %s: %s\n", infile, strerror(errno));
+ 
+ 	if (fstat(infd, &stat) != 0)
+-		error(EXIT_FAILURE, errno, "Failed stat for %s", infile);
++		fail("Failed stat for %s: %s\n", infile, strerror(errno));
+ 
+ 	inbuf = mmap(NULL, stat.st_size, PROT_READ, MAP_PRIVATE, infd, 0);
+ 	if (inbuf == MAP_FAILED)
+-		error(EXIT_FAILURE, errno, "Failed to map %s", infile);
++		fail("Failed to map %s: %s\n", infile, strerror(errno));
+ 
+ 	close(infd);
+ 
+ 	inhdr = inbuf;
+ 
+ 	if (memcmp(&inhdr->e_ident, ELFMAG, SELFMAG) != 0)
+-		error(EXIT_FAILURE, 0, "Not an ELF file");
++		fail("Not an ELF file\n");
+ 
+ 	if (inhdr->e_ident[EI_CLASS] != ELFCLASS32)
+-		error(EXIT_FAILURE, 0, "Unsupported ELF class");
++		fail("Unsupported ELF class\n");
+ 
+ 	swap = inhdr->e_ident[EI_DATA] != HOST_ORDER;
+ 
+ 	if (read_elf_half(inhdr->e_type, swap) != ET_DYN)
+-		error(EXIT_FAILURE, 0, "Not a shared object");
++		fail("Not a shared object\n");
+ 
+-	if (read_elf_half(inhdr->e_machine, swap) != EM_ARM) {
+-		error(EXIT_FAILURE, 0, "Unsupported architecture %#x",
+-		      inhdr->e_machine);
+-	}
++	if (read_elf_half(inhdr->e_machine, swap) != EM_ARM)
++		fail("Unsupported architecture %#x\n", inhdr->e_machine);
+ 
+ 	e_flags = read_elf_word(inhdr->e_flags, swap);
+ 
+ 	if (EF_ARM_EABI_VERSION(e_flags) != EF_ARM_EABI_VER5) {
+-		error(EXIT_FAILURE, 0, "Unsupported EABI version %#x",
+-		      EF_ARM_EABI_VERSION(e_flags));
++		fail("Unsupported EABI version %#x\n",
++		     EF_ARM_EABI_VERSION(e_flags));
+ 	}
+ 
+ 	if (e_flags & EF_ARM_ABI_FLOAT_HARD)
+-		error(EXIT_FAILURE, 0,
+-		      "Unexpected hard-float flag set in e_flags");
++		fail("Unexpected hard-float flag set in e_flags\n");
+ 
+ 	clear_soft_float = !!(e_flags & EF_ARM_ABI_FLOAT_SOFT);
+ 
+ 	outfd = open(outfile, O_RDWR | O_CREAT | O_TRUNC, S_IRUSR | S_IWUSR);
+ 	if (outfd < 0)
+-		error(EXIT_FAILURE, errno, "Cannot open %s", outfile);
++		fail("Cannot open %s: %s\n", outfile, strerror(errno));
+ 
+ 	if (ftruncate(outfd, stat.st_size) != 0)
+-		error(EXIT_FAILURE, errno, "Cannot truncate %s", outfile);
++		fail("Cannot truncate %s: %s\n", outfile, strerror(errno));
+ 
+ 	outbuf = mmap(NULL, stat.st_size, PROT_READ | PROT_WRITE, MAP_SHARED,
+ 		      outfd, 0);
+ 	if (outbuf == MAP_FAILED)
+-		error(EXIT_FAILURE, errno, "Failed to map %s", outfile);
++		fail("Failed to map %s: %s\n", outfile, strerror(errno));
+ 
+ 	close(outfd);
+ 
+@@ -195,7 +205,7 @@ int main(int argc, char **argv)
+ 	}
+ 
+ 	if (msync(outbuf, stat.st_size, MS_SYNC) != 0)
+-		error(EXIT_FAILURE, errno, "Failed to sync %s", outfile);
++		fail("Failed to sync %s: %s\n", outfile, strerror(errno));
+ 
+ 	return EXIT_SUCCESS;
+ }
+diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
+index 2cb008177252..d3a202b85ba6 100644
+--- a/arch/arm64/kernel/smp.c
++++ b/arch/arm64/kernel/smp.c
+@@ -569,7 +569,7 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
+ 	struct pt_regs *old_regs = set_irq_regs(regs);
+ 
+ 	if ((unsigned)ipinr < NR_IPI) {
+-		trace_ipi_entry(ipi_types[ipinr]);
++		trace_ipi_entry_rcuidle(ipi_types[ipinr]);
+ 		__inc_irq_stat(cpu, ipi_irqs[ipinr]);
+ 	}
+ 
+@@ -612,7 +612,7 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
+ 	}
+ 
+ 	if ((unsigned)ipinr < NR_IPI)
+-		trace_ipi_exit(ipi_types[ipinr]);
++		trace_ipi_exit_rcuidle(ipi_types[ipinr]);
+ 	set_irq_regs(old_regs);
+ }
+ 
+diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
+index 2de9d2e59d96..0eeb4f0930a0 100644
+--- a/arch/arm64/mm/hugetlbpage.c
++++ b/arch/arm64/mm/hugetlbpage.c
+@@ -40,13 +40,13 @@ int huge_pmd_unshare(struct mm_struct *mm, unsigned long *addr, pte_t *ptep)
+ 
+ int pmd_huge(pmd_t pmd)
+ {
+-	return !(pmd_val(pmd) & PMD_TABLE_BIT);
++	return pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT);
+ }
+ 
+ int pud_huge(pud_t pud)
+ {
+ #ifndef __PAGETABLE_PMD_FOLDED
+-	return !(pud_val(pud) & PUD_TABLE_BIT);
++	return pud_val(pud) && !(pud_val(pud) & PUD_TABLE_BIT);
+ #else
+ 	return 0;
+ #endif
+diff --git a/arch/arm64/net/bpf_jit.h b/arch/arm64/net/bpf_jit.h
+index de0a81a539a0..98a26ce82d26 100644
+--- a/arch/arm64/net/bpf_jit.h
++++ b/arch/arm64/net/bpf_jit.h
+@@ -110,6 +110,10 @@
+ /* Rd = Rn >> shift; signed */
+ #define A64_ASR(sf, Rd, Rn, shift) A64_SBFM(sf, Rd, Rn, shift, (sf) ? 63 : 31)
+ 
++/* Zero extend */
++#define A64_UXTH(sf, Rd, Rn) A64_UBFM(sf, Rd, Rn, 0, 15)
++#define A64_UXTW(sf, Rd, Rn) A64_UBFM(sf, Rd, Rn, 0, 31)
++
+ /* Move wide (immediate) */
+ #define A64_MOVEW(sf, Rd, imm16, shift, type) \
+ 	aarch64_insn_gen_movewide(Rd, imm16, shift, \
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index dc6a4842683a..c047598b09e0 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -113,9 +113,9 @@ static inline void emit_a64_mov_i(const int is64, const int reg,
+ static inline int bpf2a64_offset(int bpf_to, int bpf_from,
+ 				 const struct jit_ctx *ctx)
+ {
+-	int to = ctx->offset[bpf_to + 1];
++	int to = ctx->offset[bpf_to];
+ 	/* -1 to account for the Branch instruction */
+-	int from = ctx->offset[bpf_from + 1] - 1;
++	int from = ctx->offset[bpf_from] - 1;
+ 
+ 	return to - from;
+ }
+@@ -289,23 +289,41 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
+ 	case BPF_ALU | BPF_END | BPF_FROM_BE:
+ #ifdef CONFIG_CPU_BIG_ENDIAN
+ 		if (BPF_SRC(code) == BPF_FROM_BE)
+-			break;
++			goto emit_bswap_uxt;
+ #else /* !CONFIG_CPU_BIG_ENDIAN */
+ 		if (BPF_SRC(code) == BPF_FROM_LE)
+-			break;
++			goto emit_bswap_uxt;
+ #endif
+ 		switch (imm) {
+ 		case 16:
+ 			emit(A64_REV16(is64, dst, dst), ctx);
++			/* zero-extend 16 bits into 64 bits */
++			emit(A64_UXTH(is64, dst, dst), ctx);
+ 			break;
+ 		case 32:
+ 			emit(A64_REV32(is64, dst, dst), ctx);
++			/* upper 32 bits already cleared */
+ 			break;
+ 		case 64:
+ 			emit(A64_REV64(dst, dst), ctx);
+ 			break;
+ 		}
+ 		break;
++emit_bswap_uxt:
++		switch (imm) {
++		case 16:
++			/* zero-extend 16 bits into 64 bits */
++			emit(A64_UXTH(is64, dst, dst), ctx);
++			break;
++		case 32:
++			/* zero-extend 32 bits into 64 bits */
++			emit(A64_UXTW(is64, dst, dst), ctx);
++			break;
++		case 64:
++			/* nop */
++			break;
++		}
++		break;
+ 	/* dst = imm */
+ 	case BPF_ALU | BPF_MOV | BPF_K:
+ 	case BPF_ALU64 | BPF_MOV | BPF_K:
+@@ -640,10 +658,11 @@ static int build_body(struct jit_ctx *ctx)
+ 		const struct bpf_insn *insn = &prog->insnsi[i];
+ 		int ret;
+ 
++		ret = build_insn(insn, ctx);
++
+ 		if (ctx->image == NULL)
+ 			ctx->offset[i] = ctx->idx;
+ 
+-		ret = build_insn(insn, ctx);
+ 		if (ret > 0) {
+ 			i++;
+ 			continue;
+diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu
+index 33013dfcd3e1..5c68c85d5dbe 100644
+--- a/arch/m68k/Kconfig.cpu
++++ b/arch/m68k/Kconfig.cpu
+@@ -125,6 +125,13 @@ endif # M68KCLASSIC
+ 
+ if COLDFIRE
+ 
++choice
++	prompt "ColdFire SoC type"
++	default M520x
++	help
++	  Select the type of ColdFire System-on-Chip (SoC) that you want
++	  to build for.
++
+ config M5206
+ 	bool "MCF5206"
+ 	depends on !MMU
+@@ -174,9 +181,6 @@ config M525x
+ 	help
+ 	  Freescale (Motorola) Coldfire 5251/5253 processor support.
+ 
+-config M527x
+-	bool
+-
+ config M5271
+ 	bool "MCF5271"
+ 	depends on !MMU
+@@ -223,9 +227,6 @@ config M5307
+ 	help
+ 	  Motorola ColdFire 5307 processor support.
+ 
+-config M53xx
+-	bool
+-
+ config M532x
+ 	bool "MCF532x"
+ 	depends on !MMU
+@@ -251,9 +252,6 @@ config M5407
+ 	help
+ 	  Motorola ColdFire 5407 processor support.
+ 
+-config M54xx
+-	bool
+-
+ config M547x
+ 	bool "MCF547x"
+ 	select M54xx
+@@ -280,6 +278,17 @@ config M5441x
+ 	help
+ 	  Freescale Coldfire 54410/54415/54416/54417/54418 processor support.
+ 
++endchoice
++
++config M527x
++	bool
++
++config M53xx
++	bool
++
++config M54xx
++	bool
++
+ endif # COLDFIRE
+ 
+ 
+@@ -416,22 +425,10 @@ config HAVE_MBAR
+ config HAVE_IPSBAR
+ 	bool
+ 
+-config CLOCK_SET
+-	bool "Enable setting the CPU clock frequency"
+-	depends on COLDFIRE
+-	default n
+-	help
+-	  On some CPU's you do not need to know what the core CPU clock
+-	  frequency is. On these you can disable clock setting. On some
+-	  traditional 68K parts, and on all ColdFire parts you need to set
+-	  the appropriate CPU clock frequency. On these devices many of the
+-	  onboard peripherals derive their timing from the master CPU clock
+-	  frequency.
+-
+ config CLOCK_FREQ
+ 	int "Set the core clock frequency"
+ 	default "66666666"
+-	depends on CLOCK_SET
++	depends on COLDFIRE
+ 	help
+ 	  Define the CPU clock frequency in use. This is the core clock
+ 	  frequency, it may or may not be the same as the external clock
+diff --git a/arch/m68k/include/asm/coldfire.h b/arch/m68k/include/asm/coldfire.h
+index c94557b91448..50aa4dac9ca2 100644
+--- a/arch/m68k/include/asm/coldfire.h
++++ b/arch/m68k/include/asm/coldfire.h
+@@ -19,7 +19,7 @@
+  *	in any case new boards come along from time to time that have yet
+  *	another different clocking frequency.
+  */
+-#ifdef CONFIG_CLOCK_SET
++#ifdef CONFIG_CLOCK_FREQ
+ #define	MCF_CLK		CONFIG_CLOCK_FREQ
+ #else
+ #error "Don't know what your ColdFire CPU clock frequency is??"
+diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
+index e5a693b16da2..443f44de1020 100644
+--- a/arch/openrisc/Kconfig
++++ b/arch/openrisc/Kconfig
+@@ -17,6 +17,7 @@ config OPENRISC
+ 	select GENERIC_IRQ_SHOW
+ 	select GENERIC_IOMAP
+ 	select GENERIC_CPU_DEVICES
++	select HAVE_UID16
+ 	select GENERIC_ATOMIC64
+ 	select GENERIC_CLOCKEVENTS
+ 	select GENERIC_STRNCPY_FROM_USER
+@@ -31,9 +32,6 @@ config MMU
+ config HAVE_DMA_ATTRS
+ 	def_bool y
+ 
+-config UID16
+-	def_bool y
+-
+ config RWSEM_GENERIC_SPINLOCK
+ 	def_bool y
+ 
+diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
+index 9d518d693b4b..844b06d67df4 100644
+--- a/arch/x86/mm/mmap.c
++++ b/arch/x86/mm/mmap.c
+@@ -126,3 +126,10 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
+ 		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+ 	}
+ }
++
++const char *arch_vma_name(struct vm_area_struct *vma)
++{
++	if (vma->vm_flags & VM_MPX)
++		return "[mpx]";
++	return NULL;
++}
+diff --git a/arch/x86/mm/mpx.c b/arch/x86/mm/mpx.c
+index c439ec478216..4d1c11c07fe1 100644
+--- a/arch/x86/mm/mpx.c
++++ b/arch/x86/mm/mpx.c
+@@ -18,26 +18,9 @@
+ #include <asm/processor.h>
+ #include <asm/fpu-internal.h>
+ 
+-static const char *mpx_mapping_name(struct vm_area_struct *vma)
+-{
+-	return "[mpx]";
+-}
+-
+-static struct vm_operations_struct mpx_vma_ops = {
+-	.name = mpx_mapping_name,
+-};
+-
+-static int is_mpx_vma(struct vm_area_struct *vma)
+-{
+-	return (vma->vm_ops == &mpx_vma_ops);
+-}
+-
+ /*
+  * This is really a simplified "vm_mmap". it only handles MPX
+  * bounds tables (the bounds directory is user-allocated).
+- *
+- * Later on, we use the vma->vm_ops to uniquely identify these
+- * VMAs.
+  */
+ static unsigned long mpx_mmap(unsigned long len)
+ {
+@@ -83,7 +66,6 @@ static unsigned long mpx_mmap(unsigned long len)
+ 		ret = -ENOMEM;
+ 		goto out;
+ 	}
+-	vma->vm_ops = &mpx_vma_ops;
+ 
+ 	if (vm_flags & VM_LOCKED) {
+ 		up_write(&mm->mmap_sem);
+@@ -661,7 +643,7 @@ static int zap_bt_entries(struct mm_struct *mm,
+ 		 * so stop immediately and return an error.  This
+ 		 * probably results in a SIGSEGV.
+ 		 */
+-		if (!is_mpx_vma(vma))
++		if (!(vma->vm_flags & VM_MPX))
+ 			return -EINVAL;
+ 
+ 		len = min(vma->vm_end, end) - addr;
+diff --git a/block/bio.c b/block/bio.c
+index f66a4eae16ee..4441522ca339 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -1814,8 +1814,9 @@ EXPORT_SYMBOL(bio_endio_nodec);
+  * Allocates and returns a new bio which represents @sectors from the start of
+  * @bio, and updates @bio to represent the remaining sectors.
+  *
+- * The newly allocated bio will point to @bio's bi_io_vec; it is the caller's
+- * responsibility to ensure that @bio is not freed before the split.
++ * Unless this is a discard request the newly allocated bio will point
++ * to @bio's bi_io_vec; it is the caller's responsibility to ensure that
++ * @bio is not freed before the split.
+  */
+ struct bio *bio_split(struct bio *bio, int sectors,
+ 		      gfp_t gfp, struct bio_set *bs)
+@@ -1825,7 +1826,15 @@ struct bio *bio_split(struct bio *bio, int sectors,
+ 	BUG_ON(sectors <= 0);
+ 	BUG_ON(sectors >= bio_sectors(bio));
+ 
+-	split = bio_clone_fast(bio, gfp, bs);
++	/*
++	 * Discards need a mutable bio_vec to accommodate the payload
++	 * required by the DSM TRIM and UNMAP commands.
++	 */
++	if (bio->bi_rw & REQ_DISCARD)
++		split = bio_clone_bioset(bio, gfp, bs);
++	else
++		split = bio_clone_fast(bio, gfp, bs);
++
+ 	if (!split)
+ 		return NULL;
+ 
+diff --git a/crypto/asymmetric_keys/asymmetric_keys.h b/crypto/asymmetric_keys/asymmetric_keys.h
+index f97330886d58..3f5b537ab33e 100644
+--- a/crypto/asymmetric_keys/asymmetric_keys.h
++++ b/crypto/asymmetric_keys/asymmetric_keys.h
+@@ -11,6 +11,9 @@
+ 
+ extern struct asymmetric_key_id *asymmetric_key_hex_to_key_id(const char *id);
+ 
++extern int __asymmetric_key_hex_to_key_id(const char *id,
++					  struct asymmetric_key_id *match_id,
++					  size_t hexlen);
+ static inline
+ const struct asymmetric_key_ids *asymmetric_key_ids(const struct key *key)
+ {
+diff --git a/crypto/asymmetric_keys/asymmetric_type.c b/crypto/asymmetric_keys/asymmetric_type.c
+index bcbbbd794e1d..b0e4ed23d668 100644
+--- a/crypto/asymmetric_keys/asymmetric_type.c
++++ b/crypto/asymmetric_keys/asymmetric_type.c
+@@ -104,6 +104,15 @@ static bool asymmetric_match_key_ids(
+ 	return false;
+ }
+ 
++/* helper function can be called directly with pre-allocated memory */
++inline int __asymmetric_key_hex_to_key_id(const char *id,
++				   struct asymmetric_key_id *match_id,
++				   size_t hexlen)
++{
++	match_id->len = hexlen;
++	return hex2bin(match_id->data, id, hexlen);
++}
++
+ /**
+  * asymmetric_key_hex_to_key_id - Convert a hex string into a key ID.
+  * @id: The ID as a hex string.
+@@ -111,21 +120,20 @@ static bool asymmetric_match_key_ids(
+ struct asymmetric_key_id *asymmetric_key_hex_to_key_id(const char *id)
+ {
+ 	struct asymmetric_key_id *match_id;
+-	size_t hexlen;
++	size_t asciihexlen;
+ 	int ret;
+ 
+ 	if (!*id)
+ 		return ERR_PTR(-EINVAL);
+-	hexlen = strlen(id);
+-	if (hexlen & 1)
++	asciihexlen = strlen(id);
++	if (asciihexlen & 1)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	match_id = kmalloc(sizeof(struct asymmetric_key_id) + hexlen / 2,
++	match_id = kmalloc(sizeof(struct asymmetric_key_id) + asciihexlen / 2,
+ 			   GFP_KERNEL);
+ 	if (!match_id)
+ 		return ERR_PTR(-ENOMEM);
+-	match_id->len = hexlen / 2;
+-	ret = hex2bin(match_id->data, id, hexlen / 2);
++	ret = __asymmetric_key_hex_to_key_id(id, match_id, asciihexlen / 2);
+ 	if (ret < 0) {
+ 		kfree(match_id);
+ 		return ERR_PTR(-EINVAL);
+diff --git a/crypto/asymmetric_keys/x509_public_key.c b/crypto/asymmetric_keys/x509_public_key.c
+index a6c42031628e..24f17e6c5904 100644
+--- a/crypto/asymmetric_keys/x509_public_key.c
++++ b/crypto/asymmetric_keys/x509_public_key.c
+@@ -28,17 +28,30 @@ static bool use_builtin_keys;
+ static struct asymmetric_key_id *ca_keyid;
+ 
+ #ifndef MODULE
++static struct {
++	struct asymmetric_key_id id;
++	unsigned char data[10];
++} cakey;
++
+ static int __init ca_keys_setup(char *str)
+ {
+ 	if (!str)		/* default system keyring */
+ 		return 1;
+ 
+ 	if (strncmp(str, "id:", 3) == 0) {
+-		struct asymmetric_key_id *p;
+-		p = asymmetric_key_hex_to_key_id(str + 3);
+-		if (p == ERR_PTR(-EINVAL))
+-			pr_err("Unparsable hex string in ca_keys\n");
+-		else if (!IS_ERR(p))
++		struct asymmetric_key_id *p = &cakey.id;
++		size_t hexlen = (strlen(str) - 3) / 2;
++		int ret;
++
++		if (hexlen == 0 || hexlen > sizeof(cakey.data)) {
++			pr_err("Missing or invalid ca_keys id\n");
++			return 1;
++		}
++
++		ret = __asymmetric_key_hex_to_key_id(str + 3, p, hexlen);
++		if (ret < 0)
++			pr_err("Unparsable ca_keys id hex string\n");
++		else
+ 			ca_keyid = p;	/* owner key 'id:xxxxxx' */
+ 	} else if (strcmp(str, "builtin") == 0) {
+ 		use_builtin_keys = true;
+diff --git a/drivers/acpi/acpi_lpss.c b/drivers/acpi/acpi_lpss.c
+index 37fb19047603..73f056a597a9 100644
+--- a/drivers/acpi/acpi_lpss.c
++++ b/drivers/acpi/acpi_lpss.c
+@@ -352,13 +352,16 @@ static int acpi_lpss_create_device(struct acpi_device *adev,
+ 				pdata->mmio_size = resource_size(rentry->res);
+ 			pdata->mmio_base = ioremap(rentry->res->start,
+ 						   pdata->mmio_size);
+-			if (!pdata->mmio_base)
+-				goto err_out;
+ 			break;
+ 		}
+ 
+ 	acpi_dev_free_resource_list(&resource_list);
+ 
++	if (!pdata->mmio_base) {
++		ret = -ENOMEM;
++		goto err_out;
++	}
++
+ 	pdata->dev_desc = dev_desc;
+ 
+ 	if (dev_desc->setup)
+diff --git a/drivers/acpi/acpica/aclocal.h b/drivers/acpi/acpica/aclocal.h
+index 87b27521fcac..7f50dd9eb1d0 100644
+--- a/drivers/acpi/acpica/aclocal.h
++++ b/drivers/acpi/acpica/aclocal.h
+@@ -213,6 +213,7 @@ struct acpi_table_list {
+ 
+ #define ACPI_TABLE_INDEX_DSDT           (0)
+ #define ACPI_TABLE_INDEX_FACS           (1)
++#define ACPI_TABLE_INDEX_X_FACS         (2)
+ 
+ struct acpi_find_context {
+ 	char *search_for;
+diff --git a/drivers/acpi/acpica/tbfadt.c b/drivers/acpi/acpica/tbfadt.c
+index 7d2486005e3f..05be59c772c7 100644
+--- a/drivers/acpi/acpica/tbfadt.c
++++ b/drivers/acpi/acpica/tbfadt.c
+@@ -350,9 +350,18 @@ void acpi_tb_parse_fadt(u32 table_index)
+ 	/* If Hardware Reduced flag is set, there is no FACS */
+ 
+ 	if (!acpi_gbl_reduced_hardware) {
+-		acpi_tb_install_fixed_table((acpi_physical_address)
+-					    acpi_gbl_FADT.Xfacs, ACPI_SIG_FACS,
+-					    ACPI_TABLE_INDEX_FACS);
++		if (acpi_gbl_FADT.facs) {
++			acpi_tb_install_fixed_table((acpi_physical_address)
++						    acpi_gbl_FADT.facs,
++						    ACPI_SIG_FACS,
++						    ACPI_TABLE_INDEX_FACS);
++		}
++		if (acpi_gbl_FADT.Xfacs) {
++			acpi_tb_install_fixed_table((acpi_physical_address)
++						    acpi_gbl_FADT.Xfacs,
++						    ACPI_SIG_FACS,
++						    ACPI_TABLE_INDEX_X_FACS);
++		}
+ 	}
+ }
+ 
+@@ -491,13 +500,9 @@ static void acpi_tb_convert_fadt(void)
+ 	acpi_gbl_FADT.header.length = sizeof(struct acpi_table_fadt);
+ 
+ 	/*
+-	 * Expand the 32-bit FACS and DSDT addresses to 64-bit as necessary.
++	 * Expand the 32-bit DSDT addresses to 64-bit as necessary.
+ 	 * Later ACPICA code will always use the X 64-bit field.
+ 	 */
+-	acpi_gbl_FADT.Xfacs = acpi_tb_select_address("FACS",
+-						     acpi_gbl_FADT.facs,
+-						     acpi_gbl_FADT.Xfacs);
+-
+ 	acpi_gbl_FADT.Xdsdt = acpi_tb_select_address("DSDT",
+ 						     acpi_gbl_FADT.dsdt,
+ 						     acpi_gbl_FADT.Xdsdt);
+diff --git a/drivers/acpi/acpica/tbutils.c b/drivers/acpi/acpica/tbutils.c
+index 6559a58439c5..2fb1afaacc6d 100644
+--- a/drivers/acpi/acpica/tbutils.c
++++ b/drivers/acpi/acpica/tbutils.c
+@@ -68,7 +68,8 @@ acpi_tb_get_root_table_entry(u8 *table_entry, u32 table_entry_size);
+ 
+ acpi_status acpi_tb_initialize_facs(void)
+ {
+-	acpi_status status;
++	struct acpi_table_facs *facs32;
++	struct acpi_table_facs *facs64;
+ 
+ 	/* If Hardware Reduced flag is set, there is no FACS */
+ 
+@@ -77,11 +78,22 @@ acpi_status acpi_tb_initialize_facs(void)
+ 		return (AE_OK);
+ 	}
+ 
+-	status = acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS,
+-					 ACPI_CAST_INDIRECT_PTR(struct
+-								acpi_table_header,
+-								&acpi_gbl_FACS));
+-	return (status);
++	(void)acpi_get_table_by_index(ACPI_TABLE_INDEX_FACS,
++				      ACPI_CAST_INDIRECT_PTR(struct
++							     acpi_table_header,
++							     &facs32));
++	(void)acpi_get_table_by_index(ACPI_TABLE_INDEX_X_FACS,
++				      ACPI_CAST_INDIRECT_PTR(struct
++							     acpi_table_header,
++							     &facs64));
++
++	if (acpi_gbl_use32_bit_facs_addresses) {
++		acpi_gbl_FACS = facs32 ? facs32 : facs64;
++	} else {
++		acpi_gbl_FACS = facs64 ? facs64 : facs32;
++	}
++
++	return (AE_OK);
+ }
+ #endif				/* !ACPI_REDUCED_HARDWARE */
+ 
+@@ -101,7 +113,7 @@ acpi_status acpi_tb_initialize_facs(void)
+ u8 acpi_tb_tables_loaded(void)
+ {
+ 
+-	if (acpi_gbl_root_table_list.current_table_count >= 3) {
++	if (acpi_gbl_root_table_list.current_table_count >= 4) {
+ 		return (TRUE);
+ 	}
+ 
+@@ -357,11 +369,11 @@ acpi_status __init acpi_tb_parse_root_table(acpi_physical_address rsdp_address)
+ 	table_entry = ACPI_ADD_PTR(u8, table, sizeof(struct acpi_table_header));
+ 
+ 	/*
+-	 * First two entries in the table array are reserved for the DSDT
+-	 * and FACS, which are not actually present in the RSDT/XSDT - they
+-	 * come from the FADT
++	 * First three entries in the table array are reserved for the DSDT
++	 * and 32bit/64bit FACS, which are not actually present in the
++	 * RSDT/XSDT - they come from the FADT
+ 	 */
+-	acpi_gbl_root_table_list.current_table_count = 2;
++	acpi_gbl_root_table_list.current_table_count = 3;
+ 
+ 	/* Initialize the root table array from the RSDT/XSDT */
+ 
+diff --git a/drivers/acpi/acpica/tbxfload.c b/drivers/acpi/acpica/tbxfload.c
+index aadb3002a2dd..b63e35d6d1bf 100644
+--- a/drivers/acpi/acpica/tbxfload.c
++++ b/drivers/acpi/acpica/tbxfload.c
+@@ -166,7 +166,8 @@ static acpi_status acpi_tb_load_namespace(void)
+ 
+ 	(void)acpi_ut_acquire_mutex(ACPI_MTX_TABLES);
+ 	for (i = 0; i < acpi_gbl_root_table_list.current_table_count; ++i) {
+-		if ((!ACPI_COMPARE_NAME
++		if (!acpi_gbl_root_table_list.tables[i].address ||
++		    (!ACPI_COMPARE_NAME
+ 		     (&(acpi_gbl_root_table_list.tables[i].signature),
+ 		      ACPI_SIG_SSDT)
+ 		     &&
+diff --git a/drivers/acpi/acpica/utxfinit.c b/drivers/acpi/acpica/utxfinit.c
+index 083a76891889..42a32a66ef22 100644
+--- a/drivers/acpi/acpica/utxfinit.c
++++ b/drivers/acpi/acpica/utxfinit.c
+@@ -179,10 +179,12 @@ acpi_status __init acpi_enable_subsystem(u32 flags)
+ 	 * Obtain a permanent mapping for the FACS. This is required for the
+ 	 * Global Lock and the Firmware Waking Vector
+ 	 */
+-	status = acpi_tb_initialize_facs();
+-	if (ACPI_FAILURE(status)) {
+-		ACPI_WARNING((AE_INFO, "Could not map the FACS table"));
+-		return_ACPI_STATUS(status);
++	if (!(flags & ACPI_NO_FACS_INIT)) {
++		status = acpi_tb_initialize_facs();
++		if (ACPI_FAILURE(status)) {
++			ACPI_WARNING((AE_INFO, "Could not map the FACS table"));
++			return_ACPI_STATUS(status);
++		}
+ 	}
+ #endif				/* !ACPI_REDUCED_HARDWARE */
+ 
+diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
+index 5226a8b921ae..98f5316aad72 100644
+--- a/drivers/acpi/osl.c
++++ b/drivers/acpi/osl.c
+@@ -175,10 +175,14 @@ static void __init acpi_request_region (struct acpi_generic_address *gas,
+ 	if (!addr || !length)
+ 		return;
+ 
+-	acpi_reserve_region(addr, length, gas->space_id, 0, desc);
++	/* Resources are never freed */
++	if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_IO)
++		request_region(addr, length, desc);
++	else if (gas->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
++		request_mem_region(addr, length, desc);
+ }
+ 
+-static void __init acpi_reserve_resources(void)
++static int __init acpi_reserve_resources(void)
+ {
+ 	acpi_request_region(&acpi_gbl_FADT.xpm1a_event_block, acpi_gbl_FADT.pm1_event_length,
+ 		"ACPI PM1a_EVT_BLK");
+@@ -207,7 +211,10 @@ static void __init acpi_reserve_resources(void)
+ 	if (!(acpi_gbl_FADT.gpe1_block_length & 0x1))
+ 		acpi_request_region(&acpi_gbl_FADT.xgpe1_block,
+ 			       acpi_gbl_FADT.gpe1_block_length, "ACPI GPE1_BLK");
++
++	return 0;
+ }
++fs_initcall_sync(acpi_reserve_resources);
+ 
+ void acpi_os_printf(const char *fmt, ...)
+ {
+@@ -1838,7 +1845,6 @@ acpi_status __init acpi_os_initialize(void)
+ 
+ acpi_status __init acpi_os_initialize1(void)
+ {
+-	acpi_reserve_resources();
+ 	kacpid_wq = alloc_workqueue("kacpid", 0, 1);
+ 	kacpi_notify_wq = alloc_workqueue("kacpi_notify", 0, 1);
+ 	kacpi_hotplug_wq = alloc_ordered_workqueue("kacpi_hotplug", 0);
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index fcb7807ea8b7..f1c966e05078 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -26,7 +26,6 @@
+ #include <linux/device.h>
+ #include <linux/export.h>
+ #include <linux/ioport.h>
+-#include <linux/list.h>
+ #include <linux/slab.h>
+ 
+ #ifdef CONFIG_X86
+@@ -194,6 +193,7 @@ static bool acpi_decode_space(struct resource_win *win,
+ 	u8 iodec = attr->granularity == 0xfff ? ACPI_DECODE_10 : ACPI_DECODE_16;
+ 	bool wp = addr->info.mem.write_protect;
+ 	u64 len = attr->address_length;
++	u64 start, end, offset = 0;
+ 	struct resource *res = &win->res;
+ 
+ 	/*
+@@ -205,9 +205,6 @@ static bool acpi_decode_space(struct resource_win *win,
+ 		pr_debug("ACPI: Invalid address space min_addr_fix %d, max_addr_fix %d, len %llx\n",
+ 			 addr->min_address_fixed, addr->max_address_fixed, len);
+ 
+-	res->start = attr->minimum;
+-	res->end = attr->maximum;
+-
+ 	/*
+ 	 * For bridges that translate addresses across the bridge,
+ 	 * translation_offset is the offset that must be added to the
+@@ -215,12 +212,22 @@ static bool acpi_decode_space(struct resource_win *win,
+ 	 * primary side. Non-bridge devices must list 0 for all Address
+ 	 * Translation offset bits.
+ 	 */
+-	if (addr->producer_consumer == ACPI_PRODUCER) {
+-		res->start += attr->translation_offset;
+-		res->end += attr->translation_offset;
+-	} else if (attr->translation_offset) {
++	if (addr->producer_consumer == ACPI_PRODUCER)
++		offset = attr->translation_offset;
++	else if (attr->translation_offset)
+ 		pr_debug("ACPI: translation_offset(%lld) is invalid for non-bridge device.\n",
+ 			 attr->translation_offset);
++	start = attr->minimum + offset;
++	end = attr->maximum + offset;
++
++	win->offset = offset;
++	res->start = start;
++	res->end = end;
++	if (sizeof(resource_size_t) < sizeof(u64) &&
++	    (offset != win->offset || start != res->start || end != res->end)) {
++		pr_warn("acpi resource window ([%#llx-%#llx] ignored, not CPU addressable)\n",
++			attr->minimum, attr->maximum);
++		return false;
+ 	}
+ 
+ 	switch (addr->resource_type) {
+@@ -237,8 +244,6 @@ static bool acpi_decode_space(struct resource_win *win,
+ 		return false;
+ 	}
+ 
+-	win->offset = attr->translation_offset;
+-
+ 	if (addr->producer_consumer == ACPI_PRODUCER)
+ 		res->flags |= IORESOURCE_WINDOW;
+ 
+@@ -622,162 +627,3 @@ int acpi_dev_filter_resource_type(struct acpi_resource *ares,
+ 	return (type & types) ? 0 : 1;
+ }
+ EXPORT_SYMBOL_GPL(acpi_dev_filter_resource_type);
+-
+-struct reserved_region {
+-	struct list_head node;
+-	u64 start;
+-	u64 end;
+-};
+-
+-static LIST_HEAD(reserved_io_regions);
+-static LIST_HEAD(reserved_mem_regions);
+-
+-static int request_range(u64 start, u64 end, u8 space_id, unsigned long flags,
+-			 char *desc)
+-{
+-	unsigned int length = end - start + 1;
+-	struct resource *res;
+-
+-	res = space_id == ACPI_ADR_SPACE_SYSTEM_IO ?
+-		request_region(start, length, desc) :
+-		request_mem_region(start, length, desc);
+-	if (!res)
+-		return -EIO;
+-
+-	res->flags &= ~flags;
+-	return 0;
+-}
+-
+-static int add_region_before(u64 start, u64 end, u8 space_id,
+-			     unsigned long flags, char *desc,
+-			     struct list_head *head)
+-{
+-	struct reserved_region *reg;
+-	int error;
+-
+-	reg = kmalloc(sizeof(*reg), GFP_KERNEL);
+-	if (!reg)
+-		return -ENOMEM;
+-
+-	error = request_range(start, end, space_id, flags, desc);
+-	if (error)
+-		return error;
+-
+-	reg->start = start;
+-	reg->end = end;
+-	list_add_tail(&reg->node, head);
+-	return 0;
+-}
+-
+-/**
+- * acpi_reserve_region - Reserve an I/O or memory region as a system resource.
+- * @start: Starting address of the region.
+- * @length: Length of the region.
+- * @space_id: Identifier of address space to reserve the region from.
+- * @flags: Resource flags to clear for the region after requesting it.
+- * @desc: Region description (for messages).
+- *
+- * Reserve an I/O or memory region as a system resource to prevent others from
+- * using it.  If the new region overlaps with one of the regions (in the given
+- * address space) already reserved by this routine, only the non-overlapping
+- * parts of it will be reserved.
+- *
+- * Returned is either 0 (success) or a negative error code indicating a resource
+- * reservation problem.  It is the code of the first encountered error, but the
+- * routine doesn't abort until it has attempted to request all of the parts of
+- * the new region that don't overlap with other regions reserved previously.
+- *
+- * The resources requested by this routine are never released.
+- */
+-int acpi_reserve_region(u64 start, unsigned int length, u8 space_id,
+-			unsigned long flags, char *desc)
+-{
+-	struct list_head *regions;
+-	struct reserved_region *reg;
+-	u64 end = start + length - 1;
+-	int ret = 0, error = 0;
+-
+-	if (space_id == ACPI_ADR_SPACE_SYSTEM_IO)
+-		regions = &reserved_io_regions;
+-	else if (space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY)
+-		regions = &reserved_mem_regions;
+-	else
+-		return -EINVAL;
+-
+-	if (list_empty(regions))
+-		return add_region_before(start, end, space_id, flags, desc, regions);
+-
+-	list_for_each_entry(reg, regions, node)
+-		if (reg->start == end + 1) {
+-			/* The new region can be prepended to this one. */
+-			ret = request_range(start, end, space_id, flags, desc);
+-			if (!ret)
+-				reg->start = start;
+-
+-			return ret;
+-		} else if (reg->start > end) {
+-			/* No overlap.  Add the new region here and get out. */
+-			return add_region_before(start, end, space_id, flags,
+-						 desc, &reg->node);
+-		} else if (reg->end == start - 1) {
+-			goto combine;
+-		} else if (reg->end >= start) {
+-			goto overlap;
+-		}
+-
+-	/* The new region goes after the last existing one. */
+-	return add_region_before(start, end, space_id, flags, desc, regions);
+-
+- overlap:
+-	/*
+-	 * The new region overlaps an existing one.
+-	 *
+-	 * The head part of the new region immediately preceding the existing
+-	 * overlapping one can be combined with it right away.
+-	 */
+-	if (reg->start > start) {
+-		error = request_range(start, reg->start - 1, space_id, flags, desc);
+-		if (error)
+-			ret = error;
+-		else
+-			reg->start = start;
+-	}
+-
+- combine:
+-	/*
+-	 * The new region is adjacent to an existing one.  If it extends beyond
+-	 * that region all the way to the next one, it is possible to combine
+-	 * all three of them.
+-	 */
+-	while (reg->end < end) {
+-		struct reserved_region *next = NULL;
+-		u64 a = reg->end + 1, b = end;
+-
+-		if (!list_is_last(&reg->node, regions)) {
+-			next = list_next_entry(reg, node);
+-			if (next->start <= end)
+-				b = next->start - 1;
+-		}
+-		error = request_range(a, b, space_id, flags, desc);
+-		if (!error) {
+-			if (next && next->start == b + 1) {
+-				reg->end = next->end;
+-				list_del(&next->node);
+-				kfree(next);
+-			} else {
+-				reg->end = end;
+-				break;
+-			}
+-		} else if (next) {
+-			if (!ret)
+-				ret = error;
+-
+-			reg = next;
+-		} else {
+-			break;
+-		}
+-	}
+-
+-	return ret ? ret : error;
+-}
+-EXPORT_SYMBOL_GPL(acpi_reserve_region);
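
The reworked acpi_decode_space() above rejects resource windows that cannot be represented in resource_size_t on 32-bit kernels: it stores the 64-bit start, end and offset into the resource and then compares the stored values against the originals, bailing out if anything was truncated. A standalone sketch of that round-trip check, where the 32-bit typedef merely stands in for resource_size_t on a 32-bit build:

#include <stdint.h>
#include <stdio.h>

typedef uint32_t fake_resource_size_t;	/* stand-in for resource_size_t on 32-bit */

static int window_is_addressable(uint64_t start, uint64_t end)
{
	fake_resource_size_t s = start, e = end;

	/* If assigning into the narrower type changed the value,
	 * the CPU cannot address this window. */
	if (sizeof(fake_resource_size_t) < sizeof(uint64_t) &&
	    ((uint64_t)s != start || (uint64_t)e != end))
		return 0;
	return 1;
}

int main(void)
{
	printf("%d\n", window_is_addressable(0x100000000ULL, 0x1ffffffffULL)); /* 0 */
	printf("%d\n", window_is_addressable(0xc0000000ULL, 0xdfffffffULL));   /* 1 */
	return 0;
}
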
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 577849c6611a..41c99be9bd41 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -2478,6 +2478,10 @@ int ata_dev_configure(struct ata_device *dev)
+ 		dev->max_sectors = min_t(unsigned int, ATA_MAX_SECTORS_128,
+ 					 dev->max_sectors);
+ 
++	if (dev->horkage & ATA_HORKAGE_MAX_SEC_1024)
++		dev->max_sectors = min_t(unsigned int, ATA_MAX_SECTORS_1024,
++					 dev->max_sectors);
++
+ 	if (dev->horkage & ATA_HORKAGE_MAX_SEC_LBA48)
+ 		dev->max_sectors = ATA_MAX_SECTORS_LBA48;
+ 
+@@ -4146,6 +4150,12 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	{ "Slimtype DVD A  DS8A8SH", NULL,	ATA_HORKAGE_MAX_SEC_LBA48 },
+ 	{ "Slimtype DVD A  DS8A9SH", NULL,	ATA_HORKAGE_MAX_SEC_LBA48 },
+ 
++	/*
++	 * Causes silent data corruption with higher max sects.
++	 * http://lkml.kernel.org/g/x49wpy40ysk.fsf@segfault.boston.devel.redhat.com
++	 */
++	{ "ST380013AS",		"3.20",		ATA_HORKAGE_MAX_SEC_1024 },
++
+ 	/* Devices we expect to fail diagnostics */
+ 
+ 	/* Devices where NCQ should be avoided */
+@@ -4174,9 +4184,10 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	{ "ST3320[68]13AS",	"SD1[5-9]",	ATA_HORKAGE_NONCQ |
+ 						ATA_HORKAGE_FIRMWARE_WARN },
+ 
+-	/* Seagate Momentus SpinPoint M8 seem to have FPMDA_AA issues */
++	/* drives which fail FPDMA_AA activation (some may freeze afterwards) */
+ 	{ "ST1000LM024 HN-M101MBB", "2AR10001",	ATA_HORKAGE_BROKEN_FPDMA_AA },
+ 	{ "ST1000LM024 HN-M101MBB", "2BA30001",	ATA_HORKAGE_BROKEN_FPDMA_AA },
++	{ "VB0250EAVER",	"HPG7",		ATA_HORKAGE_BROKEN_FPDMA_AA },
+ 
+ 	/* Blacklist entries taken from Silicon Image 3124/3132
+ 	   Windows driver .inf file - also several Linux problem reports */
+@@ -4225,11 +4236,11 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	{ "PIONEER DVD-RW  DVR-216D",	NULL,	ATA_HORKAGE_NOSETXFER },
+ 
+ 	/* devices that don't properly handle queued TRIM commands */
+-	{ "Micron_M500*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
++	{ "Micron_M500_*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "Crucial_CT*M500*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+-	{ "Micron_M5[15]0*",		"MU01",	ATA_HORKAGE_NO_NCQ_TRIM |
++	{ "Micron_M5[15]0_*",		"MU01",	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 	{ "Crucial_CT*M550*",		"MU01",	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+@@ -4238,6 +4249,9 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ 	{ "Samsung SSD 8*",		NULL,	ATA_HORKAGE_NO_NCQ_TRIM |
+ 						ATA_HORKAGE_ZERO_AFTER_TRIM, },
+ 
++	/* devices that don't properly handle TRIM commands */
++	{ "SuperSSpeed S238*",		NULL,	ATA_HORKAGE_NOTRIM, },
++
+ 	/*
+ 	 * As defined, the DRAT (Deterministic Read After Trim) and RZAT
+ 	 * (Return Zero After Trim) flags in the ATA Command Set are
+@@ -4501,7 +4515,8 @@ static unsigned int ata_dev_set_xfermode(struct ata_device *dev)
+ 	else /* In the ancient relic department - skip all of this */
+ 		return 0;
+ 
+-	err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0, 0);
++	/* On some disks, this command causes spin-up, so we need longer timeout */
++	err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0, 15000);
+ 
+ 	DPRINTK("EXIT, err_mask=%x\n", err_mask);
+ 	return err_mask;
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index cf0022ec07f2..7465031a893c 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -1507,16 +1507,21 @@ unsigned int ata_read_log_page(struct ata_device *dev, u8 log,
+ {
+ 	struct ata_taskfile tf;
+ 	unsigned int err_mask;
++	bool dma = false;
+ 
+ 	DPRINTK("read log page - log 0x%x, page 0x%x\n", log, page);
+ 
++retry:
+ 	ata_tf_init(dev, &tf);
+-	if (dev->dma_mode && ata_id_has_read_log_dma_ext(dev->id)) {
++	if (dev->dma_mode && ata_id_has_read_log_dma_ext(dev->id) &&
++	    !(dev->horkage & ATA_HORKAGE_NO_NCQ_LOG)) {
+ 		tf.command = ATA_CMD_READ_LOG_DMA_EXT;
+ 		tf.protocol = ATA_PROT_DMA;
++		dma = true;
+ 	} else {
+ 		tf.command = ATA_CMD_READ_LOG_EXT;
+ 		tf.protocol = ATA_PROT_PIO;
++		dma = false;
+ 	}
+ 	tf.lbal = log;
+ 	tf.lbam = page;
+@@ -1527,6 +1532,12 @@ unsigned int ata_read_log_page(struct ata_device *dev, u8 log,
+ 	err_mask = ata_exec_internal(dev, &tf, NULL, DMA_FROM_DEVICE,
+ 				     buf, sectors * ATA_SECT_SIZE, 0);
+ 
++	if (err_mask && dma) {
++		dev->horkage |= ATA_HORKAGE_NO_NCQ_LOG;
++		ata_dev_warn(dev, "READ LOG DMA EXT failed, trying unqueued\n");
++		goto retry;
++	}
++
+ 	DPRINTK("EXIT, err_mask=%x\n", err_mask);
+ 	return err_mask;
+ }
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index 3131adcc1f87..641a61a59e89 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -2568,7 +2568,8 @@ static unsigned int ata_scsiop_read_cap(struct ata_scsi_args *args, u8 *rbuf)
+ 		rbuf[14] = (lowest_aligned >> 8) & 0x3f;
+ 		rbuf[15] = lowest_aligned;
+ 
+-		if (ata_id_has_trim(args->id)) {
++		if (ata_id_has_trim(args->id) &&
++		    !(dev->horkage & ATA_HORKAGE_NOTRIM)) {
+ 			rbuf[14] |= 0x80; /* LBPME */
+ 
+ 			if (ata_id_has_zero_after_trim(args->id) &&
+diff --git a/drivers/ata/libata-transport.c b/drivers/ata/libata-transport.c
+index 3227b7c8a05f..e2d94972962d 100644
+--- a/drivers/ata/libata-transport.c
++++ b/drivers/ata/libata-transport.c
+@@ -560,6 +560,29 @@ show_ata_dev_gscr(struct device *dev,
+ 
+ static DEVICE_ATTR(gscr, S_IRUGO, show_ata_dev_gscr, NULL);
+ 
++static ssize_t
++show_ata_dev_trim(struct device *dev,
++		  struct device_attribute *attr, char *buf)
++{
++	struct ata_device *ata_dev = transport_class_to_dev(dev);
++	unsigned char *mode;
++
++	if (!ata_id_has_trim(ata_dev->id))
++		mode = "unsupported";
++	else if (ata_dev->horkage & ATA_HORKAGE_NOTRIM)
++		mode = "forced_unsupported";
++	else if (ata_dev->horkage & ATA_HORKAGE_NO_NCQ_TRIM)
++			mode = "forced_unqueued";
++	else if (ata_fpdma_dsm_supported(ata_dev))
++		mode = "queued";
++	else
++		mode = "unqueued";
++
++	return snprintf(buf, 20, "%s\n", mode);
++}
++
++static DEVICE_ATTR(trim, S_IRUGO, show_ata_dev_trim, NULL);
++
+ static DECLARE_TRANSPORT_CLASS(ata_dev_class,
+ 			       "ata_device", NULL, NULL, NULL);
+ 
+@@ -733,6 +756,7 @@ struct scsi_transport_template *ata_attach_transport(void)
+ 	SETUP_DEV_ATTRIBUTE(ering);
+ 	SETUP_DEV_ATTRIBUTE(id);
+ 	SETUP_DEV_ATTRIBUTE(gscr);
++	SETUP_DEV_ATTRIBUTE(trim);
+ 	BUG_ON(count > ATA_DEV_ATTRS);
+ 	i->dev_attrs[count] = NULL;
+ 
+diff --git a/drivers/base/firmware_class.c b/drivers/base/firmware_class.c
+index 171841ad1008..4d1d9de4f9bf 100644
+--- a/drivers/base/firmware_class.c
++++ b/drivers/base/firmware_class.c
+@@ -544,10 +544,8 @@ static void fw_dev_release(struct device *dev)
+ 	kfree(fw_priv);
+ }
+ 
+-static int firmware_uevent(struct device *dev, struct kobj_uevent_env *env)
++static int do_firmware_uevent(struct firmware_priv *fw_priv, struct kobj_uevent_env *env)
+ {
+-	struct firmware_priv *fw_priv = to_firmware_priv(dev);
+-
+ 	if (add_uevent_var(env, "FIRMWARE=%s", fw_priv->buf->fw_id))
+ 		return -ENOMEM;
+ 	if (add_uevent_var(env, "TIMEOUT=%i", loading_timeout))
+@@ -558,6 +556,18 @@ static int firmware_uevent(struct device *dev, struct kobj_uevent_env *env)
+ 	return 0;
+ }
+ 
++static int firmware_uevent(struct device *dev, struct kobj_uevent_env *env)
++{
++	struct firmware_priv *fw_priv = to_firmware_priv(dev);
++	int err = 0;
++
++	mutex_lock(&fw_lock);
++	if (fw_priv->buf)
++		err = do_firmware_uevent(fw_priv, env);
++	mutex_unlock(&fw_lock);
++	return err;
++}
++
+ static struct class firmware_class = {
+ 	.name		= "firmware",
+ 	.class_attrs	= firmware_class_attrs,
+diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
+index 7fdd0172605a..c7b0fcebf168 100644
+--- a/drivers/base/power/clock_ops.c
++++ b/drivers/base/power/clock_ops.c
+@@ -93,7 +93,7 @@ static int __pm_clk_add(struct device *dev, const char *con_id,
+ 			return -ENOMEM;
+ 		}
+ 	} else {
+-		if (IS_ERR(ce->clk) || !__clk_get(clk)) {
++		if (IS_ERR(clk) || !__clk_get(clk)) {
+ 			kfree(ce);
+ 			return -ENOENT;
+ 		}
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index d7173cb1ea76..cef6fa83a274 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -86,8 +86,6 @@ static DEFINE_MUTEX(loop_index_mutex);
+ static int max_part;
+ static int part_shift;
+ 
+-static struct workqueue_struct *loop_wq;
+-
+ static int transfer_xor(struct loop_device *lo, int cmd,
+ 			struct page *raw_page, unsigned raw_off,
+ 			struct page *loop_page, unsigned loop_off,
+@@ -725,6 +723,12 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode,
+ 	size = get_loop_size(lo, file);
+ 	if ((loff_t)(sector_t)size != size)
+ 		goto out_putf;
++	error = -ENOMEM;
++	lo->wq = alloc_workqueue("kloopd%d",
++			WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 16,
++			lo->lo_number);
++	if (!lo->wq)
++		goto out_putf;
+ 
+ 	error = 0;
+ 
+@@ -872,6 +876,8 @@ static int loop_clr_fd(struct loop_device *lo)
+ 	lo->lo_flags = 0;
+ 	if (!part_shift)
+ 		lo->lo_disk->flags |= GENHD_FL_NO_PART_SCAN;
++	destroy_workqueue(lo->wq);
++	lo->wq = NULL;
+ 	mutex_unlock(&lo->lo_ctl_mutex);
+ 	/*
+ 	 * Need not hold lo_ctl_mutex to fput backing file.
+@@ -1425,9 +1431,13 @@ static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		const struct blk_mq_queue_data *bd)
+ {
+ 	struct loop_cmd *cmd = blk_mq_rq_to_pdu(bd->rq);
++	struct loop_device *lo = cmd->rq->q->queuedata;
+ 
+ 	blk_mq_start_request(bd->rq);
+ 
++	if (lo->lo_state != Lo_bound)
++		return -EIO;
++
+ 	if (cmd->rq->cmd_flags & REQ_WRITE) {
+ 		struct loop_device *lo = cmd->rq->q->queuedata;
+ 		bool need_sched = true;
+@@ -1441,9 +1451,9 @@ static int loop_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		spin_unlock_irq(&lo->lo_lock);
+ 
+ 		if (need_sched)
+-			queue_work(loop_wq, &lo->write_work);
++			queue_work(lo->wq, &lo->write_work);
+ 	} else {
+-		queue_work(loop_wq, &cmd->read_work);
++		queue_work(lo->wq, &cmd->read_work);
+ 	}
+ 
+ 	return BLK_MQ_RQ_QUEUE_OK;
+@@ -1455,9 +1465,6 @@ static void loop_handle_cmd(struct loop_cmd *cmd)
+ 	struct loop_device *lo = cmd->rq->q->queuedata;
+ 	int ret = -EIO;
+ 
+-	if (lo->lo_state != Lo_bound)
+-		goto failed;
+-
+ 	if (write && (lo->lo_flags & LO_FLAGS_READ_ONLY))
+ 		goto failed;
+ 
+@@ -1806,13 +1813,6 @@ static int __init loop_init(void)
+ 		goto misc_out;
+ 	}
+ 
+-	loop_wq = alloc_workqueue("kloopd",
+-			WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 0);
+-	if (!loop_wq) {
+-		err = -ENOMEM;
+-		goto misc_out;
+-	}
+-
+ 	blk_register_region(MKDEV(LOOP_MAJOR, 0), range,
+ 				  THIS_MODULE, loop_probe, NULL, NULL);
+ 
+@@ -1850,8 +1850,6 @@ static void __exit loop_exit(void)
+ 	blk_unregister_region(MKDEV(LOOP_MAJOR, 0), range);
+ 	unregister_blkdev(LOOP_MAJOR, "loop");
+ 
+-	destroy_workqueue(loop_wq);
+-
+ 	misc_deregister(&loop_misc);
+ }
+ 
+diff --git a/drivers/block/loop.h b/drivers/block/loop.h
+index 301c27f8323f..49564edf5581 100644
+--- a/drivers/block/loop.h
++++ b/drivers/block/loop.h
+@@ -54,6 +54,7 @@ struct loop_device {
+ 	gfp_t		old_gfp_mask;
+ 
+ 	spinlock_t		lo_lock;
++	struct workqueue_struct *wq;
+ 	struct list_head	write_cmd_head;
+ 	struct work_struct	write_work;
+ 	bool			write_started;
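
Taken together, the loop.c and loop.h hunks replace the single global kloopd workqueue with one workqueue per loop device, allocated in loop_set_fd() and destroyed in loop_clr_fd(). A minimal sketch of that per-device workqueue lifecycle; the exdev structure and function names are illustrative, not part of the driver:

#include <linux/errno.h>
#include <linux/workqueue.h>

/* Hypothetical per-device state, mirroring the wq member added to
 * struct loop_device in the hunk above. */
struct exdev {
	int id;
	struct workqueue_struct *wq;
};

static int exdev_prepare_io(struct exdev *dev)
{
	/* One reclaim-safe workqueue per device instance. */
	dev->wq = alloc_workqueue("exdev%d",
			WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_UNBOUND, 16,
			dev->id);
	return dev->wq ? 0 : -ENOMEM;
}

static void exdev_teardown_io(struct exdev *dev)
{
	destroy_workqueue(dev->wq);	/* drains pending work before freeing */
	dev->wq = NULL;
}
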
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index ec6c5c6e1ac9..53f253574abe 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -2001,11 +2001,11 @@ static struct rbd_obj_request *rbd_obj_request_create(const char *object_name,
+ 	rbd_assert(obj_request_type_valid(type));
+ 
+ 	size = strlen(object_name) + 1;
+-	name = kmalloc(size, GFP_KERNEL);
++	name = kmalloc(size, GFP_NOIO);
+ 	if (!name)
+ 		return NULL;
+ 
+-	obj_request = kmem_cache_zalloc(rbd_obj_request_cache, GFP_KERNEL);
++	obj_request = kmem_cache_zalloc(rbd_obj_request_cache, GFP_NOIO);
+ 	if (!obj_request) {
+ 		kfree(name);
+ 		return NULL;
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index 4bba86677adc..3f146c9911c1 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -378,12 +378,11 @@ int btbcm_setup_apple(struct hci_dev *hdev)
+ 
+ 	/* Read Verbose Config Version Info */
+ 	skb = btbcm_read_verbose_config(hdev);
+-	if (IS_ERR(skb))
+-		return PTR_ERR(skb);
+-
+-	BT_INFO("%s: BCM: chip id %u build %4.4u", hdev->name, skb->data[1],
+-		get_unaligned_le16(skb->data + 5));
+-	kfree_skb(skb);
++	if (!IS_ERR(skb)) {
++		BT_INFO("%s: BCM: chip id %u build %4.4u", hdev->name, skb->data[1],
++			get_unaligned_le16(skb->data + 5));
++		kfree_skb(skb);
++	}
+ 
+ 	set_bit(HCI_QUIRK_STRICT_DUPLICATE_FILTER, &hdev->quirks);
+ 
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 420cc9f3eb76..c65501539224 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -268,7 +268,7 @@ static const struct usb_device_id blacklist_table[] = {
+ 	{ USB_DEVICE(0x0e5e, 0x6622), .driver_info = BTUSB_BROKEN_ISOC },
+ 
+ 	/* Roper Class 1 Bluetooth Dongle (Silicon Wave based) */
+-	{ USB_DEVICE(0x1300, 0x0001), .driver_info = BTUSB_SWAVE },
++	{ USB_DEVICE(0x1310, 0x0001), .driver_info = BTUSB_SWAVE },
+ 
+ 	/* Digianswer devices */
+ 	{ USB_DEVICE(0x08fd, 0x0001), .driver_info = BTUSB_DIGIANSWER },
+@@ -1993,6 +1993,8 @@ static int btusb_setup_intel(struct hci_dev *hdev)
+ 	}
+ 	fw_ptr = fw->data;
+ 
++	kfree_skb(skb);
++
+ 	/* This Intel specific command enables the manufacturer mode of the
+ 	 * controller.
+ 	 *
+@@ -2334,6 +2336,7 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
+ 	struct intel_boot_params *params;
+ 	const struct firmware *fw;
+ 	const u8 *fw_ptr;
++	u32 frag_len;
+ 	char fwname[64];
+ 	ktime_t calltime, delta, rettime;
+ 	unsigned long long duration;
+@@ -2540,24 +2543,33 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
+ 	}
+ 
+ 	fw_ptr = fw->data + 644;
++	frag_len = 0;
+ 
+ 	while (fw_ptr - fw->data < fw->size) {
+-		struct hci_command_hdr *cmd = (void *)fw_ptr;
+-		u8 cmd_len;
++		struct hci_command_hdr *cmd = (void *)(fw_ptr + frag_len);
+ 
+-		cmd_len = sizeof(*cmd) + cmd->plen;
++		frag_len += sizeof(*cmd) + cmd->plen;
+ 
+-		/* Send each command from the firmware data buffer as
+-		 * a single Data fragment.
++		/* The parameter length of the secure send command requires
++		 * a 4 byte alignment. It so happens that the firmware file
++		 * contains proper Intel_NOP commands to align the fragments
++		 * as needed.
++		 *
++		 * Send a set of commands with 4 byte alignment from the
++		 * firmware data buffer as a single Data fragment.
+ 		 */
+-		err = btusb_intel_secure_send(hdev, 0x01, cmd_len, fw_ptr);
+-		if (err < 0) {
+-			BT_ERR("%s: Failed to send firmware data (%d)",
+-			       hdev->name, err);
+-			goto done;
+-		}
++		if (!(frag_len % 4)) {
++			err = btusb_intel_secure_send(hdev, 0x01, frag_len,
++						      fw_ptr);
++			if (err < 0) {
++				BT_ERR("%s: Failed to send firmware data (%d)",
++				       hdev->name, err);
++				goto done;
++			}
+ 
+-		fw_ptr += cmd_len;
++			fw_ptr += frag_len;
++			frag_len = 0;
++		}
+ 	}
+ 
+ 	set_bit(BTUSB_FIRMWARE_LOADED, &data->flags);
+diff --git a/drivers/bus/arm-ccn.c b/drivers/bus/arm-ccn.c
+index aaa0f2a87118..60397ec77ff7 100644
+--- a/drivers/bus/arm-ccn.c
++++ b/drivers/bus/arm-ccn.c
+@@ -212,7 +212,7 @@ static int arm_ccn_node_to_xp_port(int node)
+ 
+ static void arm_ccn_pmu_config_set(u64 *config, u32 node_xp, u32 type, u32 port)
+ {
+-	*config &= ~((0xff << 0) | (0xff << 8) | (0xff << 24));
++	*config &= ~((0xff << 0) | (0xff << 8) | (0x3 << 24));
+ 	*config |= (node_xp << 0) | (type << 8) | (port << 24);
+ }
+ 
+diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c
+index 0b4188b9af7c..c6dea3f6917b 100644
+--- a/drivers/char/agp/intel-gtt.c
++++ b/drivers/char/agp/intel-gtt.c
+@@ -581,7 +581,7 @@ static inline int needs_ilk_vtd_wa(void)
+ 	/* Query intel_iommu to see if we need the workaround. Presumably that
+ 	 * was loaded first.
+ 	 */
+-	if ((gpu_devid == PCI_DEVICE_ID_INTEL_IRONLAKE_M_HB ||
++	if ((gpu_devid == PCI_DEVICE_ID_INTEL_IRONLAKE_D_IG ||
+ 	     gpu_devid == PCI_DEVICE_ID_INTEL_IRONLAKE_M_IG) &&
+ 	     intel_iommu_gfx_mapped)
+ 		return 1;
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index 283f00a7f036..1082d4bb016a 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -129,8 +129,9 @@ struct tpm_chip *tpmm_chip_alloc(struct device *dev,
+ 
+ 	device_initialize(&chip->dev);
+ 
+-	chip->cdev.owner = chip->pdev->driver->owner;
+ 	cdev_init(&chip->cdev, &tpm_fops);
++	chip->cdev.owner = chip->pdev->driver->owner;
++	chip->cdev.kobj.parent = &chip->dev.kobj;
+ 
+ 	return chip;
+ }
+diff --git a/drivers/char/tpm/tpm_crb.c b/drivers/char/tpm/tpm_crb.c
+index b26ceee3585e..1267322595da 100644
+--- a/drivers/char/tpm/tpm_crb.c
++++ b/drivers/char/tpm/tpm_crb.c
+@@ -233,6 +233,14 @@ static int crb_acpi_add(struct acpi_device *device)
+ 		return -ENODEV;
+ 	}
+ 
++	/* At least some versions of AMI BIOS have a bug where the TPM2 table
++	 * has a zero address for the control area, and in that case we must fail.
++	 */
++	if (!buf->control_area_pa) {
++		dev_err(dev, "TPM2 ACPI table has a zero address for the control area\n");
++		return -EINVAL;
++	}
++
+ 	if (buf->hdr.length < sizeof(struct acpi_tpm2)) {
+ 		dev_err(dev, "TPM2 ACPI table has wrong size");
+ 		return -EINVAL;
+@@ -267,7 +275,7 @@ static int crb_acpi_add(struct acpi_device *device)
+ 
+ 	memcpy_fromio(&pa, &priv->cca->cmd_pa, 8);
+ 	pa = le64_to_cpu(pa);
+-	priv->cmd = devm_ioremap_nocache(dev, le64_to_cpu(pa),
++	priv->cmd = devm_ioremap_nocache(dev, pa,
+ 					 ioread32(&priv->cca->cmd_size));
+ 	if (!priv->cmd) {
+ 		dev_err(dev, "ioremap of the command buffer failed\n");
+@@ -276,7 +284,7 @@ static int crb_acpi_add(struct acpi_device *device)
+ 
+ 	memcpy_fromio(&pa, &priv->cca->rsp_pa, 8);
+ 	pa = le64_to_cpu(pa);
+-	priv->rsp = devm_ioremap_nocache(dev, le64_to_cpu(pa),
++	priv->rsp = devm_ioremap_nocache(dev, pa,
+ 					 ioread32(&priv->cca->rsp_size));
+ 	if (!priv->rsp) {
+ 		dev_err(dev, "ioremap of the response buffer failed\n");
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index 42ffa5e7a1e0..27ebf9511cb4 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -578,6 +578,9 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
+ 		goto cleanup;
+ 	}
+ 
++	ibmvtpm->dev = dev;
++	ibmvtpm->vdev = vio_dev;
++
+ 	crq_q = &ibmvtpm->crq_queue;
+ 	crq_q->crq_addr = (struct ibmvtpm_crq *)get_zeroed_page(GFP_KERNEL);
+ 	if (!crq_q->crq_addr) {
+@@ -622,8 +625,6 @@ static int tpm_ibmvtpm_probe(struct vio_dev *vio_dev,
+ 
+ 	crq_q->index = 0;
+ 
+-	ibmvtpm->dev = dev;
+-	ibmvtpm->vdev = vio_dev;
+ 	TPM_VPRIV(chip) = (void *)ibmvtpm;
+ 
+ 	spin_lock_init(&ibmvtpm->rtce_lock);
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 5b0f41868b42..9f9cadd00bc8 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -230,11 +230,12 @@ static void clk_dump_one(struct seq_file *s, struct clk_core *c, int level)
+ 	if (!c)
+ 		return;
+ 
++	/* This should be JSON format, i.e. elements separated with a comma */
+ 	seq_printf(s, "\"%s\": { ", c->name);
+ 	seq_printf(s, "\"enable_count\": %d,", c->enable_count);
+ 	seq_printf(s, "\"prepare_count\": %d,", c->prepare_count);
+-	seq_printf(s, "\"rate\": %lu", clk_core_get_rate(c));
+-	seq_printf(s, "\"accuracy\": %lu", clk_core_get_accuracy(c));
++	seq_printf(s, "\"rate\": %lu,", clk_core_get_rate(c));
++	seq_printf(s, "\"accuracy\": %lu,", clk_core_get_accuracy(c));
+ 	seq_printf(s, "\"phase\": %d", clk_core_get_phase(c));
+ }
+ 
+diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
+index b95d17fbb8d7..92936f0912d2 100644
+--- a/drivers/clk/qcom/clk-rcg2.c
++++ b/drivers/clk/qcom/clk-rcg2.c
+@@ -530,19 +530,16 @@ static int clk_pixel_set_rate(struct clk_hw *hw, unsigned long rate,
+ 	struct clk_rcg2 *rcg = to_clk_rcg2(hw);
+ 	struct freq_tbl f = *rcg->freq_tbl;
+ 	const struct frac_entry *frac = frac_table_pixel;
+-	unsigned long request, src_rate;
++	unsigned long request;
+ 	int delta = 100000;
+ 	u32 mask = BIT(rcg->hid_width) - 1;
+ 	u32 hid_div;
+-	int index = qcom_find_src_index(hw, rcg->parent_map, f.src);
+-	struct clk *parent = clk_get_parent_by_index(hw->clk, index);
+ 
+ 	for (; frac->num; frac++) {
+ 		request = (rate * frac->den) / frac->num;
+ 
+-		src_rate = __clk_round_rate(parent, request);
+-		if ((src_rate < (request - delta)) ||
+-			(src_rate > (request + delta)))
++		if ((parent_rate < (request - delta)) ||
++			(parent_rate > (request + delta)))
+ 			continue;
+ 
+ 		regmap_read(rcg->clkr.regmap, rcg->cmd_rcgr + CFG_REG,
+diff --git a/drivers/clk/ti/clk-dra7-atl.c b/drivers/clk/ti/clk-dra7-atl.c
+index d86bc46b93bd..0a1df821860f 100644
+--- a/drivers/clk/ti/clk-dra7-atl.c
++++ b/drivers/clk/ti/clk-dra7-atl.c
+@@ -252,6 +252,11 @@ static int of_dra7_atl_clk_probe(struct platform_device *pdev)
+ 		}
+ 
+ 		clk = of_clk_get_from_provider(&clkspec);
++		if (IS_ERR(clk)) {
++			pr_err("%s: failed to get atl clock %d from provider\n",
++			       __func__, i);
++			return PTR_ERR(clk);
++		}
+ 
+ 		cdesc = to_atl_desc(__clk_get_hw(clk));
+ 		cdesc->cinfo = cinfo;
+diff --git a/drivers/clocksource/exynos_mct.c b/drivers/clocksource/exynos_mct.c
+index 83564c9cfdbe..c844616028d2 100644
+--- a/drivers/clocksource/exynos_mct.c
++++ b/drivers/clocksource/exynos_mct.c
+@@ -466,15 +466,12 @@ static int exynos4_local_timer_setup(struct clock_event_device *evt)
+ 	exynos4_mct_write(TICK_BASE_CNT, mevt->base + MCT_L_TCNTB_OFFSET);
+ 
+ 	if (mct_int_type == MCT_INT_SPI) {
+-		evt->irq = mct_irqs[MCT_L0_IRQ + cpu];
+-		if (request_irq(evt->irq, exynos4_mct_tick_isr,
+-				IRQF_TIMER | IRQF_NOBALANCING,
+-				evt->name, mevt)) {
+-			pr_err("exynos-mct: cannot register IRQ %d\n",
+-				evt->irq);
++
++		if (evt->irq == -1)
+ 			return -EIO;
+-		}
+-		irq_force_affinity(mct_irqs[MCT_L0_IRQ + cpu], cpumask_of(cpu));
++
++		irq_force_affinity(evt->irq, cpumask_of(cpu));
++		enable_irq(evt->irq);
+ 	} else {
+ 		enable_percpu_irq(mct_irqs[MCT_L0_IRQ], 0);
+ 	}
+@@ -487,10 +484,12 @@ static int exynos4_local_timer_setup(struct clock_event_device *evt)
+ static void exynos4_local_timer_stop(struct clock_event_device *evt)
+ {
+ 	evt->set_mode(CLOCK_EVT_MODE_UNUSED, evt);
+-	if (mct_int_type == MCT_INT_SPI)
+-		free_irq(evt->irq, this_cpu_ptr(&percpu_mct_tick));
+-	else
++	if (mct_int_type == MCT_INT_SPI) {
++		if (evt->irq != -1)
++			disable_irq_nosync(evt->irq);
++	} else {
+ 		disable_percpu_irq(mct_irqs[MCT_L0_IRQ]);
++	}
+ }
+ 
+ static int exynos4_mct_cpu_notify(struct notifier_block *self,
+@@ -522,7 +521,7 @@ static struct notifier_block exynos4_mct_cpu_nb = {
+ 
+ static void __init exynos4_timer_resources(struct device_node *np, void __iomem *base)
+ {
+-	int err;
++	int err, cpu;
+ 	struct mct_clock_event_device *mevt = this_cpu_ptr(&percpu_mct_tick);
+ 	struct clk *mct_clk, *tick_clk;
+ 
+@@ -549,7 +548,25 @@ static void __init exynos4_timer_resources(struct device_node *np, void __iomem
+ 		WARN(err, "MCT: can't request IRQ %d (%d)\n",
+ 		     mct_irqs[MCT_L0_IRQ], err);
+ 	} else {
+-		irq_set_affinity(mct_irqs[MCT_L0_IRQ], cpumask_of(0));
++		for_each_possible_cpu(cpu) {
++			int mct_irq = mct_irqs[MCT_L0_IRQ + cpu];
++			struct mct_clock_event_device *pcpu_mevt =
++				per_cpu_ptr(&percpu_mct_tick, cpu);
++
++			pcpu_mevt->evt.irq = -1;
++
++			irq_set_status_flags(mct_irq, IRQ_NOAUTOEN);
++			if (request_irq(mct_irq,
++					exynos4_mct_tick_isr,
++					IRQF_TIMER | IRQF_NOBALANCING,
++					pcpu_mevt->name, pcpu_mevt)) {
++				pr_err("exynos-mct: cannot register IRQ (cpu%d)\n",
++									cpu);
++
++				continue;
++			}
++			pcpu_mevt->evt.irq = mct_irq;
++		}
+ 	}
+ 
+ 	err = register_cpu_notifier(&exynos4_mct_cpu_nb);
+diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
+index 1c56001df676..50f1b422dee3 100644
+--- a/drivers/dma/mv_xor.c
++++ b/drivers/dma/mv_xor.c
+@@ -273,7 +273,8 @@ static void mv_xor_slot_cleanup(struct mv_xor_chan *mv_chan)
+ 	dma_cookie_t cookie = 0;
+ 	int busy = mv_chan_is_busy(mv_chan);
+ 	u32 current_desc = mv_chan_get_current_desc(mv_chan);
+-	int seen_current = 0;
++	int current_cleaned = 0;
++	struct mv_xor_desc *hw_desc;
+ 
+ 	dev_dbg(mv_chan_to_devp(mv_chan), "%s %d\n", __func__, __LINE__);
+ 	dev_dbg(mv_chan_to_devp(mv_chan), "current_desc %x\n", current_desc);
+@@ -285,38 +286,57 @@ static void mv_xor_slot_cleanup(struct mv_xor_chan *mv_chan)
+ 
+ 	list_for_each_entry_safe(iter, _iter, &mv_chan->chain,
+ 					chain_node) {
+-		prefetch(_iter);
+-		prefetch(&_iter->async_tx);
+ 
+-		/* do not advance past the current descriptor loaded into the
+-		 * hardware channel, subsequent descriptors are either in
+-		 * process or have not been submitted
+-		 */
+-		if (seen_current)
+-			break;
++		/* clean finished descriptors */
++		hw_desc = iter->hw_desc;
++		if (hw_desc->status & XOR_DESC_SUCCESS) {
++			cookie = mv_xor_run_tx_complete_actions(iter, mv_chan,
++								cookie);
+ 
+-		/* stop the search if we reach the current descriptor and the
+-		 * channel is busy
+-		 */
+-		if (iter->async_tx.phys == current_desc) {
+-			seen_current = 1;
+-			if (busy)
++			/* done processing desc, clean slot */
++			mv_xor_clean_slot(iter, mv_chan);
++
++			/* break if we cleaned the current descriptor */
++			if (iter->async_tx.phys == current_desc) {
++				current_cleaned = 1;
++				break;
++			}
++		} else {
++			if (iter->async_tx.phys == current_desc) {
++				current_cleaned = 0;
+ 				break;
++			}
+ 		}
+-
+-		cookie = mv_xor_run_tx_complete_actions(iter, mv_chan, cookie);
+-
+-		if (mv_xor_clean_slot(iter, mv_chan))
+-			break;
+ 	}
+ 
+ 	if ((busy == 0) && !list_empty(&mv_chan->chain)) {
+-		struct mv_xor_desc_slot *chain_head;
+-		chain_head = list_entry(mv_chan->chain.next,
+-					struct mv_xor_desc_slot,
+-					chain_node);
+-
+-		mv_xor_start_new_chain(mv_chan, chain_head);
++		if (current_cleaned) {
++			/*
++			 * current descriptor cleaned and removed, run
++			 * from list head
++			 */
++			iter = list_entry(mv_chan->chain.next,
++					  struct mv_xor_desc_slot,
++					  chain_node);
++			mv_xor_start_new_chain(mv_chan, iter);
++		} else {
++			if (!list_is_last(&iter->chain_node, &mv_chan->chain)) {
++				/*
++				 * descriptors are still waiting after
++				 * current, trigger them
++				 */
++				iter = list_entry(iter->chain_node.next,
++						  struct mv_xor_desc_slot,
++						  chain_node);
++				mv_xor_start_new_chain(mv_chan, iter);
++			} else {
++				/*
++				 * some descriptors are still waiting
++				 * to be cleaned
++				 */
++				tasklet_schedule(&mv_chan->irq_tasklet);
++			}
++		}
+ 	}
+ 
+ 	if (cookie > 0)
+diff --git a/drivers/dma/mv_xor.h b/drivers/dma/mv_xor.h
+index 91958dba39a2..0e302b3a33ad 100644
+--- a/drivers/dma/mv_xor.h
++++ b/drivers/dma/mv_xor.h
+@@ -31,6 +31,7 @@
+ #define XOR_OPERATION_MODE_XOR		0
+ #define XOR_OPERATION_MODE_MEMCPY	2
+ #define XOR_DESCRIPTOR_SWAP		BIT(14)
++#define XOR_DESC_SUCCESS		0x40000000
+ 
+ #define XOR_DESC_DMA_OWNED		BIT(31)
+ #define XOR_DESC_EOD_INT_EN		BIT(31)
+diff --git a/drivers/edac/octeon_edac-l2c.c b/drivers/edac/octeon_edac-l2c.c
+index 7e98084d3645..afea7fc625cc 100644
+--- a/drivers/edac/octeon_edac-l2c.c
++++ b/drivers/edac/octeon_edac-l2c.c
+@@ -151,7 +151,7 @@ static int octeon_l2c_probe(struct platform_device *pdev)
+ 	l2c->ctl_name = "octeon_l2c_err";
+ 
+ 
+-	if (OCTEON_IS_MODEL(OCTEON_FAM_1_PLUS)) {
++	if (OCTEON_IS_OCTEON1PLUS()) {
+ 		union cvmx_l2t_err l2t_err;
+ 		union cvmx_l2d_err l2d_err;
+ 
+diff --git a/drivers/edac/octeon_edac-lmc.c b/drivers/edac/octeon_edac-lmc.c
+index bb19e0732681..cda6dab5067a 100644
+--- a/drivers/edac/octeon_edac-lmc.c
++++ b/drivers/edac/octeon_edac-lmc.c
+@@ -234,7 +234,7 @@ static int octeon_lmc_edac_probe(struct platform_device *pdev)
+ 	layers[0].size = 1;
+ 	layers[0].is_virt_csrow = false;
+ 
+-	if (OCTEON_IS_MODEL(OCTEON_FAM_1_PLUS)) {
++	if (OCTEON_IS_OCTEON1PLUS()) {
+ 		union cvmx_lmcx_mem_cfg0 cfg0;
+ 
+ 		cfg0.u64 = cvmx_read_csr(CVMX_LMCX_MEM_CFG0(0));
+diff --git a/drivers/edac/octeon_edac-pc.c b/drivers/edac/octeon_edac-pc.c
+index 0f83c33a7d1f..2ab6cf24c959 100644
+--- a/drivers/edac/octeon_edac-pc.c
++++ b/drivers/edac/octeon_edac-pc.c
+@@ -73,7 +73,7 @@ static int  co_cache_error_event(struct notifier_block *this,
+ 			edac_device_handle_ce(p->ed, cpu, 0, "dcache");
+ 
+ 		/* Clear the error indication */
+-		if (OCTEON_IS_MODEL(OCTEON_FAM_2))
++		if (OCTEON_IS_OCTEON2())
+ 			write_octeon_c0_dcacheerr(1);
+ 		else
+ 			write_octeon_c0_dcacheerr(0);
+diff --git a/drivers/firmware/dmi_scan.c b/drivers/firmware/dmi_scan.c
+index 97b1616aa391..bba843c2b0ac 100644
+--- a/drivers/firmware/dmi_scan.c
++++ b/drivers/firmware/dmi_scan.c
+@@ -89,9 +89,9 @@ static void dmi_table(u8 *buf,
+ 
+ 	/*
+ 	 * Stop when we have seen all the items the table claimed to have
+-	 * (SMBIOS < 3.0 only) OR we reach an end-of-table marker OR we run
+-	 * off the end of the table (should never happen but sometimes does
+-	 * on bogus implementations.)
++	 * (SMBIOS < 3.0 only) OR we reach an end-of-table marker (SMBIOS
++	 * >= 3.0 only) OR we run off the end of the table (should never
++	 * happen but sometimes does on bogus implementations.)
+ 	 */
+ 	while ((!dmi_num || i < dmi_num) &&
+ 	       (data - buf + sizeof(struct dmi_header)) <= dmi_len) {
+@@ -110,8 +110,13 @@ static void dmi_table(u8 *buf,
+ 
+ 		/*
+ 		 * 7.45 End-of-Table (Type 127) [SMBIOS reference spec v3.0.0]
++		 * For tables behind a 64-bit entry point, we have no item
++		 * count and no exact table length, so stop on end-of-table
++		 * marker. For tables behind a 32-bit entry point, we have
++		 * seen OEM structures behind the end-of-table marker on
++		 * some systems, so don't trust it.
+ 		 */
+-		if (dm->type == DMI_ENTRY_END_OF_TABLE)
++		if (!dmi_num && dm->type == DMI_ENTRY_END_OF_TABLE)
+ 			break;
+ 
+ 		data += 2;
+diff --git a/drivers/gpu/drm/bridge/ptn3460.c b/drivers/gpu/drm/bridge/ptn3460.c
+index 9d2f053382e1..63a09e4079f3 100644
+--- a/drivers/gpu/drm/bridge/ptn3460.c
++++ b/drivers/gpu/drm/bridge/ptn3460.c
+@@ -15,6 +15,7 @@
+ 
+ #include <linux/delay.h>
+ #include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/i2c.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+diff --git a/drivers/gpu/drm/drm_crtc.c b/drivers/gpu/drm/drm_crtc.c
+index 3007b44e6bf4..800a025dd062 100644
+--- a/drivers/gpu/drm/drm_crtc.c
++++ b/drivers/gpu/drm/drm_crtc.c
+@@ -2749,8 +2749,11 @@ int drm_mode_setcrtc(struct drm_device *dev, void *data,
+ 	if (!drm_core_check_feature(dev, DRIVER_MODESET))
+ 		return -EINVAL;
+ 
+-	/* For some reason crtc x/y offsets are signed internally. */
+-	if (crtc_req->x > INT_MAX || crtc_req->y > INT_MAX)
++	/*
++	 * Universal plane src offsets are only 16.16, prevent havoc for
++	 * drivers using universal plane code internally.
++	 */
++	if (crtc_req->x & 0xffff0000 || crtc_req->y & 0xffff0000)
+ 		return -ERANGE;
+ 
+ 	drm_modeset_lock_all(dev);
+@@ -5048,12 +5051,9 @@ void drm_mode_config_reset(struct drm_device *dev)
+ 		if (encoder->funcs->reset)
+ 			encoder->funcs->reset(encoder);
+ 
+-	list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
+-		connector->status = connector_status_unknown;
+-
++	list_for_each_entry(connector, &dev->mode_config.connector_list, head)
+ 		if (connector->funcs->reset)
+ 			connector->funcs->reset(connector);
+-	}
+ }
+ EXPORT_SYMBOL(drm_mode_config_reset);
+ 
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 132581ca4ad8..778bbb6425b8 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -867,8 +867,16 @@ static void drm_dp_destroy_port(struct kref *kref)
+ 		port->vcpi.num_slots = 0;
+ 
+ 		kfree(port->cached_edid);
+-		if (port->connector)
+-			(*port->mgr->cbs->destroy_connector)(mgr, port->connector);
++
++		/* we can't destroy the connector here, as
++		   we might be holding the mode_config.mutex
++		   from an EDID retrieval */
++		if (port->connector) {
++			mutex_lock(&mgr->destroy_connector_lock);
++			list_add(&port->connector->destroy_list, &mgr->destroy_connector_list);
++			mutex_unlock(&mgr->destroy_connector_lock);
++			schedule_work(&mgr->destroy_connector_work);
++		}
+ 		drm_dp_port_teardown_pdt(port, port->pdt);
+ 
+ 		if (!port->input && port->vcpi.vcpi > 0)
+@@ -1163,6 +1171,8 @@ static struct drm_dp_mst_branch *drm_dp_get_mst_branch_device(struct drm_dp_mst_
+ 	struct drm_dp_mst_port *port;
+ 	int i;
+ 	/* find the port by iterating down */
++
++	mutex_lock(&mgr->lock);
+ 	mstb = mgr->mst_primary;
+ 
+ 	for (i = 0; i < lct - 1; i++) {
+@@ -1182,6 +1192,7 @@ static struct drm_dp_mst_branch *drm_dp_get_mst_branch_device(struct drm_dp_mst_
+ 		}
+ 	}
+ 	kref_get(&mstb->kref);
++	mutex_unlock(&mgr->lock);
+ 	return mstb;
+ }
+ 
+@@ -1189,7 +1200,7 @@ static void drm_dp_check_and_send_link_address(struct drm_dp_mst_topology_mgr *m
+ 					       struct drm_dp_mst_branch *mstb)
+ {
+ 	struct drm_dp_mst_port *port;
+-
++	struct drm_dp_mst_branch *mstb_child;
+ 	if (!mstb->link_address_sent) {
+ 		drm_dp_send_link_address(mgr, mstb);
+ 		mstb->link_address_sent = true;
+@@ -1204,17 +1215,31 @@ static void drm_dp_check_and_send_link_address(struct drm_dp_mst_topology_mgr *m
+ 		if (!port->available_pbn)
+ 			drm_dp_send_enum_path_resources(mgr, mstb, port);
+ 
+-		if (port->mstb)
+-			drm_dp_check_and_send_link_address(mgr, port->mstb);
++		if (port->mstb) {
++			mstb_child = drm_dp_get_validated_mstb_ref(mgr, port->mstb);
++			if (mstb_child) {
++				drm_dp_check_and_send_link_address(mgr, mstb_child);
++				drm_dp_put_mst_branch_device(mstb_child);
++			}
++		}
+ 	}
+ }
+ 
+ static void drm_dp_mst_link_probe_work(struct work_struct *work)
+ {
+ 	struct drm_dp_mst_topology_mgr *mgr = container_of(work, struct drm_dp_mst_topology_mgr, work);
++	struct drm_dp_mst_branch *mstb;
+ 
+-	drm_dp_check_and_send_link_address(mgr, mgr->mst_primary);
+-
++	mutex_lock(&mgr->lock);
++	mstb = mgr->mst_primary;
++	if (mstb) {
++		kref_get(&mstb->kref);
++	}
++	mutex_unlock(&mgr->lock);
++	if (mstb) {
++		drm_dp_check_and_send_link_address(mgr, mstb);
++		drm_dp_put_mst_branch_device(mstb);
++	}
+ }
+ 
+ static bool drm_dp_validate_guid(struct drm_dp_mst_topology_mgr *mgr,
+@@ -2632,6 +2657,30 @@ static void drm_dp_tx_work(struct work_struct *work)
+ 	mutex_unlock(&mgr->qlock);
+ }
+ 
++static void drm_dp_destroy_connector_work(struct work_struct *work)
++{
++	struct drm_dp_mst_topology_mgr *mgr = container_of(work, struct drm_dp_mst_topology_mgr, destroy_connector_work);
++	struct drm_connector *connector;
++
++	/*
++	 * Not a regular list traverse as we have to drop the destroy
++	 * connector lock before destroying the connector, to avoid AB->BA
++	 * ordering between this lock and the config mutex.
++	 */
++	for (;;) {
++		mutex_lock(&mgr->destroy_connector_lock);
++		connector = list_first_entry_or_null(&mgr->destroy_connector_list, struct drm_connector, destroy_list);
++		if (!connector) {
++			mutex_unlock(&mgr->destroy_connector_lock);
++			break;
++		}
++		list_del(&connector->destroy_list);
++		mutex_unlock(&mgr->destroy_connector_lock);
++
++		mgr->cbs->destroy_connector(mgr, connector);
++	}
++}
++
+ /**
+  * drm_dp_mst_topology_mgr_init - initialise a topology manager
+  * @mgr: manager struct to initialise
+@@ -2651,10 +2700,13 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
+ 	mutex_init(&mgr->lock);
+ 	mutex_init(&mgr->qlock);
+ 	mutex_init(&mgr->payload_lock);
++	mutex_init(&mgr->destroy_connector_lock);
+ 	INIT_LIST_HEAD(&mgr->tx_msg_upq);
+ 	INIT_LIST_HEAD(&mgr->tx_msg_downq);
++	INIT_LIST_HEAD(&mgr->destroy_connector_list);
+ 	INIT_WORK(&mgr->work, drm_dp_mst_link_probe_work);
+ 	INIT_WORK(&mgr->tx_work, drm_dp_tx_work);
++	INIT_WORK(&mgr->destroy_connector_work, drm_dp_destroy_connector_work);
+ 	init_waitqueue_head(&mgr->tx_waitq);
+ 	mgr->dev = dev;
+ 	mgr->aux = aux;
+@@ -2679,6 +2731,7 @@ EXPORT_SYMBOL(drm_dp_mst_topology_mgr_init);
+  */
+ void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr)
+ {
++	flush_work(&mgr->destroy_connector_work);
+ 	mutex_lock(&mgr->payload_lock);
+ 	kfree(mgr->payloads);
+ 	mgr->payloads = NULL;
+diff --git a/drivers/gpu/drm/drm_ioc32.c b/drivers/gpu/drm/drm_ioc32.c
+index aa8bbb460c57..9cfcd0aef0df 100644
+--- a/drivers/gpu/drm/drm_ioc32.c
++++ b/drivers/gpu/drm/drm_ioc32.c
+@@ -70,6 +70,8 @@
+ 
+ #define DRM_IOCTL_WAIT_VBLANK32		DRM_IOWR(0x3a, drm_wait_vblank32_t)
+ 
++#define DRM_IOCTL_MODE_ADDFB232		DRM_IOWR(0xb8, drm_mode_fb_cmd232_t)
++
+ typedef struct drm_version_32 {
+ 	int version_major;	  /**< Major version */
+ 	int version_minor;	  /**< Minor version */
+@@ -1016,6 +1018,63 @@ static int compat_drm_wait_vblank(struct file *file, unsigned int cmd,
+ 	return 0;
+ }
+ 
++typedef struct drm_mode_fb_cmd232 {
++	u32 fb_id;
++	u32 width;
++	u32 height;
++	u32 pixel_format;
++	u32 flags;
++	u32 handles[4];
++	u32 pitches[4];
++	u32 offsets[4];
++	u64 modifier[4];
++} __attribute__((packed)) drm_mode_fb_cmd232_t;
++
++static int compat_drm_mode_addfb2(struct file *file, unsigned int cmd,
++				  unsigned long arg)
++{
++	struct drm_mode_fb_cmd232 __user *argp = (void __user *)arg;
++	struct drm_mode_fb_cmd232 req32;
++	struct drm_mode_fb_cmd2 __user *req64;
++	int i;
++	int err;
++
++	if (copy_from_user(&req32, argp, sizeof(req32)))
++		return -EFAULT;
++
++	req64 = compat_alloc_user_space(sizeof(*req64));
++
++	if (!access_ok(VERIFY_WRITE, req64, sizeof(*req64))
++	    || __put_user(req32.width, &req64->width)
++	    || __put_user(req32.height, &req64->height)
++	    || __put_user(req32.pixel_format, &req64->pixel_format)
++	    || __put_user(req32.flags, &req64->flags))
++		return -EFAULT;
++
++	for (i = 0; i < 4; i++) {
++		if (__put_user(req32.handles[i], &req64->handles[i]))
++			return -EFAULT;
++		if (__put_user(req32.pitches[i], &req64->pitches[i]))
++			return -EFAULT;
++		if (__put_user(req32.offsets[i], &req64->offsets[i]))
++			return -EFAULT;
++		if (__put_user(req32.modifier[i], &req64->modifier[i]))
++			return -EFAULT;
++	}
++
++	err = drm_ioctl(file, DRM_IOCTL_MODE_ADDFB2, (unsigned long)req64);
++	if (err)
++		return err;
++
++	if (__get_user(req32.fb_id, &req64->fb_id))
++		return -EFAULT;
++
++	if (copy_to_user(argp, &req32, sizeof(req32)))
++		return -EFAULT;
++
++	return 0;
++}
++
+ static drm_ioctl_compat_t *drm_compat_ioctls[] = {
+ 	[DRM_IOCTL_NR(DRM_IOCTL_VERSION32)] = compat_drm_version,
+ 	[DRM_IOCTL_NR(DRM_IOCTL_GET_UNIQUE32)] = compat_drm_getunique,
+@@ -1048,6 +1107,7 @@ static drm_ioctl_compat_t *drm_compat_ioctls[] = {
+ 	[DRM_IOCTL_NR(DRM_IOCTL_UPDATE_DRAW32)] = compat_drm_update_draw,
+ #endif
+ 	[DRM_IOCTL_NR(DRM_IOCTL_WAIT_VBLANK32)] = compat_drm_wait_vblank,
++	[DRM_IOCTL_NR(DRM_IOCTL_MODE_ADDFB232)] = compat_drm_mode_addfb2,
+ };
+ 
+ /**
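
The new compat_drm_mode_addfb2() above exists because the 32-bit and native 64-bit layouts of drm_mode_fb_cmd2 differ: on common 32-bit ABIs the 64-bit modifier array follows the 32-bit members without padding, whereas the native layout inserts alignment padding, so the argument must be widened field by field. A small userspace illustration of that size difference, assuming this alignment explanation; the struct names are made up for the example:

#include <stdint.h>
#include <stdio.h>

struct cmd32 {			/* packed view as seen from 32-bit userspace */
	uint32_t fb_id;
	uint64_t modifier;
} __attribute__((packed));

struct cmd64 {			/* natural layout seen by a 64-bit kernel */
	uint32_t fb_id;
	uint64_t modifier;
};

int main(void)
{
	printf("packed 32-bit view: %zu bytes, native view: %zu bytes\n",
	       sizeof(struct cmd32), sizeof(struct cmd64));
	return 0;
}
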
+diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
+index 2d0995e7afc3..596bce56e379 100644
+--- a/drivers/gpu/drm/i915/i915_gem.c
++++ b/drivers/gpu/drm/i915/i915_gem.c
+@@ -2401,6 +2401,7 @@ int __i915_add_request(struct intel_engine_cs *ring,
+ 	}
+ 
+ 	request->emitted_jiffies = jiffies;
++	ring->last_submitted_seqno = request->seqno;
+ 	list_add_tail(&request->list, &ring->request_list);
+ 	request->file_priv = NULL;
+ 
+diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
+index 0239fbff7bf7..ad90fa3045e5 100644
+--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
++++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
+@@ -502,17 +502,17 @@ static void gen8_ppgtt_clear_range(struct i915_address_space *vm,
+ 		struct page *page_table;
+ 
+ 		if (WARN_ON(!ppgtt->pdp.page_directory[pdpe]))
+-			continue;
++			break;
+ 
+ 		pd = ppgtt->pdp.page_directory[pdpe];
+ 
+ 		if (WARN_ON(!pd->page_table[pde]))
+-			continue;
++			break;
+ 
+ 		pt = pd->page_table[pde];
+ 
+ 		if (WARN_ON(!pt->page))
+-			continue;
++			break;
+ 
+ 		page_table = pt->page;
+ 
+diff --git a/drivers/gpu/drm/i915/i915_ioc32.c b/drivers/gpu/drm/i915/i915_ioc32.c
+index 176de6322e4d..23aa04cded6b 100644
+--- a/drivers/gpu/drm/i915/i915_ioc32.c
++++ b/drivers/gpu/drm/i915/i915_ioc32.c
+@@ -204,7 +204,7 @@ long i915_compat_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
+ 	drm_ioctl_compat_t *fn = NULL;
+ 	int ret;
+ 
+-	if (nr < DRM_COMMAND_BASE)
++	if (nr < DRM_COMMAND_BASE || nr >= DRM_COMMAND_END)
+ 		return drm_compat_ioctl(filp, cmd, arg);
+ 
+ 	if (nr < DRM_COMMAND_BASE + ARRAY_SIZE(i915_compat_ioctls))
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index 6d494432b19f..b0df8d10482a 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -2650,18 +2650,11 @@ static void gen8_disable_vblank(struct drm_device *dev, int pipe)
+ 	spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
+ }
+ 
+-static struct drm_i915_gem_request *
+-ring_last_request(struct intel_engine_cs *ring)
+-{
+-	return list_entry(ring->request_list.prev,
+-			  struct drm_i915_gem_request, list);
+-}
+-
+ static bool
+-ring_idle(struct intel_engine_cs *ring)
++ring_idle(struct intel_engine_cs *ring, u32 seqno)
+ {
+ 	return (list_empty(&ring->request_list) ||
+-		i915_gem_request_completed(ring_last_request(ring), false));
++		i915_seqno_passed(seqno, ring->last_submitted_seqno));
+ }
+ 
+ static bool
+@@ -2883,7 +2876,7 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
+ 		acthd = intel_ring_get_active_head(ring);
+ 
+ 		if (ring->hangcheck.seqno == seqno) {
+-			if (ring_idle(ring)) {
++			if (ring_idle(ring, seqno)) {
+ 				ring->hangcheck.action = HANGCHECK_IDLE;
+ 
+ 				if (waitqueue_active(&ring->irq_queue)) {
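
In the i915 hunks above, the hangcheck worker now decides whether a ring is idle from the cached last_submitted_seqno instead of walking the request list, comparing sequence numbers with i915_seqno_passed(), which tolerates 32-bit wraparound. A standalone sketch of that comparison; seqno_passed() here is an illustrative stand-in, not the driver symbol:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* True if seq1 is at or after seq2, tolerant of 32-bit wraparound. */
static bool seqno_passed(uint32_t seq1, uint32_t seq2)
{
	return (int32_t)(seq1 - seq2) >= 0;
}

int main(void)
{
	printf("%d\n", seqno_passed(5, 3));           /* 1 */
	printf("%d\n", seqno_passed(3, 5));           /* 0 */
	printf("%d\n", seqno_passed(2, 0xfffffffeu)); /* 1: seq2 sits just before the wrap */
	return 0;
}
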
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 773d1d24e604..a30db4b4050e 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -3209,6 +3209,7 @@ enum skl_disp_power_wells {
+ #define   BLM_POLARITY_PNV			(1 << 0) /* pnv only */
+ 
+ #define BLC_HIST_CTL	(dev_priv->info.display_mmio_offset + 0x61260)
++#define  BLM_HISTOGRAM_ENABLE			(1 << 31)
+ 
+ /* New registers for PCH-split platforms. Safe where new bits show up, the
+  * register layout machtes with gen4 BLC_PWM_CTL[12]. */
+diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
+index d0f3cbc87474..57c887843dc3 100644
+--- a/drivers/gpu/drm/i915/intel_display.c
++++ b/drivers/gpu/drm/i915/intel_display.c
+@@ -12499,6 +12499,16 @@ intel_check_primary_plane(struct drm_plane *plane,
+ 				intel_crtc->atomic.wait_vblank = true;
+ 		}
+ 
++		/*
++		 * FIXME: If any other plane is still enabled on the pipe we
++		 * could in principle leave IPS enabled, but for now assume
++		 * that when the primary plane is made invisible by setting
++		 * DSPCNTR to 0 in update_primary_plane(), IPS needs to be
++		 * disabled.
++		 */
++		if (!state->visible || !fb)
++			intel_crtc->atomic.disable_ips = true;
++
+ 		intel_crtc->atomic.fb_bits |=
+ 			INTEL_FRONTBUFFER_PRIMARY(intel_crtc->pipe);
+ 
+@@ -12590,6 +12600,9 @@ static void intel_begin_crtc_commit(struct drm_crtc *crtc)
+ 	if (intel_crtc->atomic.disable_fbc)
+ 		intel_fbc_disable(dev);
+ 
++	if (intel_crtc->atomic.disable_ips)
++		hsw_disable_ips(intel_crtc);
++
+ 	if (intel_crtc->atomic.pre_disable_primary)
+ 		intel_pre_disable_primary(crtc);
+ 
+diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
+index 897f17db08af..68d1f74a7403 100644
+--- a/drivers/gpu/drm/i915/intel_drv.h
++++ b/drivers/gpu/drm/i915/intel_drv.h
+@@ -424,6 +424,7 @@ struct intel_crtc_atomic_commit {
+ 	/* Sleepable operations to perform before commit */
+ 	bool wait_for_flips;
+ 	bool disable_fbc;
++	bool disable_ips;
+ 	bool pre_disable_primary;
+ 	bool update_wm;
+ 	unsigned disabled_planes;
+diff --git a/drivers/gpu/drm/i915/intel_panel.c b/drivers/gpu/drm/i915/intel_panel.c
+index 08532d4ffe0a..2bf92cba4a55 100644
+--- a/drivers/gpu/drm/i915/intel_panel.c
++++ b/drivers/gpu/drm/i915/intel_panel.c
+@@ -879,6 +879,14 @@ static void i9xx_enable_backlight(struct intel_connector *connector)
+ 
+ 	/* XXX: combine this into above write? */
+ 	intel_panel_actually_set_backlight(connector, panel->backlight.level);
++
++	/*
++	 * Needed to enable backlight on some 855gm models. BLC_HIST_CTL is
++	 * 855gm only, but checking for gen2 is safe, as 855gm is the only gen2
++	 * that has backlight.
++	 */
++	if (IS_GEN2(dev))
++		I915_WRITE(BLC_HIST_CTL, BLM_HISTOGRAM_ENABLE);
+ }
+ 
+ static void i965_enable_backlight(struct intel_connector *connector)
+diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
+index c761fe05ad6f..94514d364d25 100644
+--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
++++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
+@@ -266,6 +266,13 @@ struct  intel_engine_cs {
+ 	 * Do we have some not yet emitted requests outstanding?
+ 	 */
+ 	struct drm_i915_gem_request *outstanding_lazy_request;
++	/**
++	 * Seqno of request most recently submitted to request_list.
++	 * Used exclusively by hang checker to avoid grabbing lock while
++	 * inspecting request list.
++	 */
++	u32 last_submitted_seqno;
++
+ 	bool gpu_caches_dirty;
+ 
+ 	wait_queue_head_t irq_queue;
+diff --git a/drivers/gpu/drm/i915/intel_uncore.c b/drivers/gpu/drm/i915/intel_uncore.c
+index ff2a74651dd4..a18807ec8371 100644
+--- a/drivers/gpu/drm/i915/intel_uncore.c
++++ b/drivers/gpu/drm/i915/intel_uncore.c
+@@ -1220,10 +1220,12 @@ int i915_reg_read_ioctl(struct drm_device *dev,
+ 	struct drm_i915_private *dev_priv = dev->dev_private;
+ 	struct drm_i915_reg_read *reg = data;
+ 	struct register_whitelist const *entry = whitelist;
++	unsigned size;
++	u64 offset;
+ 	int i, ret = 0;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(whitelist); i++, entry++) {
+-		if (entry->offset == reg->offset &&
++		if (entry->offset == (reg->offset & -entry->size) &&
+ 		    (1 << INTEL_INFO(dev)->gen & entry->gen_bitmask))
+ 			break;
+ 	}
+@@ -1231,23 +1233,33 @@ int i915_reg_read_ioctl(struct drm_device *dev,
+ 	if (i == ARRAY_SIZE(whitelist))
+ 		return -EINVAL;
+ 
++	/* We use the low bits to encode extra flags as the register should
++	 * be naturally aligned (and those that are not so aligned merely
++	 * limit the available flags for that register).
++	 */
++	offset = entry->offset;
++	size = entry->size;
++	size |= reg->offset ^ offset;
++
+ 	intel_runtime_pm_get(dev_priv);
+ 
+-	switch (entry->size) {
++	switch (size) {
++	case 8 | 1:
++		reg->val = I915_READ64_2x32(offset, offset+4);
++		break;
+ 	case 8:
+-		reg->val = I915_READ64(reg->offset);
++		reg->val = I915_READ64(offset);
+ 		break;
+ 	case 4:
+-		reg->val = I915_READ(reg->offset);
++		reg->val = I915_READ(offset);
+ 		break;
+ 	case 2:
+-		reg->val = I915_READ16(reg->offset);
++		reg->val = I915_READ16(offset);
+ 		break;
+ 	case 1:
+-		reg->val = I915_READ8(reg->offset);
++		reg->val = I915_READ8(offset);
+ 		break;
+ 	default:
+-		MISSING_CASE(entry->size);
+ 		ret = -EINVAL;
+ 		goto out;
+ 	}
+diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
+index 97823644d347..f33251d67914 100644
+--- a/drivers/gpu/drm/qxl/qxl_cmd.c
++++ b/drivers/gpu/drm/qxl/qxl_cmd.c
+@@ -505,6 +505,7 @@ int qxl_hw_surface_alloc(struct qxl_device *qdev,
+ 
+ 	cmd = (struct qxl_surface_cmd *)qxl_release_map(qdev, release);
+ 	cmd->type = QXL_SURFACE_CMD_CREATE;
++	cmd->flags = QXL_SURF_FLAG_KEEP_DATA;
+ 	cmd->u.surface_create.format = surf->surf.format;
+ 	cmd->u.surface_create.width = surf->surf.width;
+ 	cmd->u.surface_create.height = surf->surf.height;
+diff --git a/drivers/gpu/drm/qxl/qxl_ioctl.c b/drivers/gpu/drm/qxl/qxl_ioctl.c
+index b110883f8253..7354a4cda59d 100644
+--- a/drivers/gpu/drm/qxl/qxl_ioctl.c
++++ b/drivers/gpu/drm/qxl/qxl_ioctl.c
+@@ -122,8 +122,10 @@ static struct qxl_bo *qxlhw_handle_to_bo(struct qxl_device *qdev,
+ 	qobj = gem_to_qxl_bo(gobj);
+ 
+ 	ret = qxl_release_list_add(release, qobj);
+-	if (ret)
++	if (ret) {
++		drm_gem_object_unreference_unlocked(gobj);
+ 		return NULL;
++	}
+ 
+ 	return qobj;
+ }
+diff --git a/drivers/gpu/drm/radeon/ci_dpm.c b/drivers/gpu/drm/radeon/ci_dpm.c
+index 8730562323a8..4a09947be244 100644
+--- a/drivers/gpu/drm/radeon/ci_dpm.c
++++ b/drivers/gpu/drm/radeon/ci_dpm.c
+@@ -5818,7 +5818,7 @@ int ci_dpm_init(struct radeon_device *rdev)
+ 			tmp |= DPM_ENABLED;
+ 			break;
+ 		default:
+-			DRM_ERROR("Invalid PCC GPIO: %u!\n", gpio.shift);
++			DRM_DEBUG("Invalid PCC GPIO: %u!\n", gpio.shift);
+ 			break;
+ 		}
+ 		WREG32_SMC(CNB_PWRMGT_CNTL, tmp);
+diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
+index ba50f3c1c2e0..845665362475 100644
+--- a/drivers/gpu/drm/radeon/cik.c
++++ b/drivers/gpu/drm/radeon/cik.c
+@@ -4579,6 +4579,31 @@ void cik_compute_set_wptr(struct radeon_device *rdev,
+ 	WDOORBELL32(ring->doorbell_index, ring->wptr);
+ }
+ 
++static void cik_compute_stop(struct radeon_device *rdev,
++			     struct radeon_ring *ring)
++{
++	u32 j, tmp;
++
++	cik_srbm_select(rdev, ring->me, ring->pipe, ring->queue, 0);
++	/* Disable wptr polling. */
++	tmp = RREG32(CP_PQ_WPTR_POLL_CNTL);
++	tmp &= ~WPTR_POLL_EN;
++	WREG32(CP_PQ_WPTR_POLL_CNTL, tmp);
++	/* Disable HQD. */
++	if (RREG32(CP_HQD_ACTIVE) & 1) {
++		WREG32(CP_HQD_DEQUEUE_REQUEST, 1);
++		for (j = 0; j < rdev->usec_timeout; j++) {
++			if (!(RREG32(CP_HQD_ACTIVE) & 1))
++				break;
++			udelay(1);
++		}
++		WREG32(CP_HQD_DEQUEUE_REQUEST, 0);
++		WREG32(CP_HQD_PQ_RPTR, 0);
++		WREG32(CP_HQD_PQ_WPTR, 0);
++	}
++	cik_srbm_select(rdev, 0, 0, 0, 0);
++}
++
+ /**
+  * cik_cp_compute_enable - enable/disable the compute CP MEs
+  *
+@@ -4592,6 +4617,15 @@ static void cik_cp_compute_enable(struct radeon_device *rdev, bool enable)
+ 	if (enable)
+ 		WREG32(CP_MEC_CNTL, 0);
+ 	else {
++		/*
++		 * To make hibernation reliable we need to clear compute ring
++		 * configuration before halting the compute ring.
++		 */
++		mutex_lock(&rdev->srbm_mutex);
++		cik_compute_stop(rdev,&rdev->ring[CAYMAN_RING_TYPE_CP1_INDEX]);
++		cik_compute_stop(rdev,&rdev->ring[CAYMAN_RING_TYPE_CP2_INDEX]);
++		mutex_unlock(&rdev->srbm_mutex);
++
+ 		WREG32(CP_MEC_CNTL, (MEC_ME1_HALT | MEC_ME2_HALT));
+ 		rdev->ring[CAYMAN_RING_TYPE_CP1_INDEX].ready = false;
+ 		rdev->ring[CAYMAN_RING_TYPE_CP2_INDEX].ready = false;
+@@ -7905,23 +7939,27 @@ restart_ih:
+ 		case 1: /* D1 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D1 vblank */
+-				if (rdev->irq.stat_regs.cik.disp_int & LB_D1_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[0]) {
+-						drm_handle_vblank(rdev->ddev, 0);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[0]))
+-						radeon_crtc_handle_vblank(rdev, 0);
+-					rdev->irq.stat_regs.cik.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D1 vblank\n");
++				if (!(rdev->irq.stat_regs.cik.disp_int & LB_D1_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[0]) {
++					drm_handle_vblank(rdev->ddev, 0);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[0]))
++					radeon_crtc_handle_vblank(rdev, 0);
++				rdev->irq.stat_regs.cik.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D1 vblank\n");
++
+ 				break;
+ 			case 1: /* D1 vline */
+-				if (rdev->irq.stat_regs.cik.disp_int & LB_D1_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int &= ~LB_D1_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D1 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int & LB_D1_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int &= ~LB_D1_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D1 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -7931,23 +7969,27 @@ restart_ih:
+ 		case 2: /* D2 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D2 vblank */
+-				if (rdev->irq.stat_regs.cik.disp_int_cont & LB_D2_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[1]) {
+-						drm_handle_vblank(rdev->ddev, 1);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[1]))
+-						radeon_crtc_handle_vblank(rdev, 1);
+-					rdev->irq.stat_regs.cik.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D2 vblank\n");
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont & LB_D2_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[1]) {
++					drm_handle_vblank(rdev->ddev, 1);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[1]))
++					radeon_crtc_handle_vblank(rdev, 1);
++				rdev->irq.stat_regs.cik.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D2 vblank\n");
++
+ 				break;
+ 			case 1: /* D2 vline */
+-				if (rdev->irq.stat_regs.cik.disp_int_cont & LB_D2_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D2 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont & LB_D2_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D2 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -7957,23 +7999,27 @@ restart_ih:
+ 		case 3: /* D3 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D3 vblank */
+-				if (rdev->irq.stat_regs.cik.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[2]) {
+-						drm_handle_vblank(rdev->ddev, 2);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[2]))
+-						radeon_crtc_handle_vblank(rdev, 2);
+-					rdev->irq.stat_regs.cik.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D3 vblank\n");
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[2]) {
++					drm_handle_vblank(rdev->ddev, 2);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[2]))
++					radeon_crtc_handle_vblank(rdev, 2);
++				rdev->irq.stat_regs.cik.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D3 vblank\n");
++
+ 				break;
+ 			case 1: /* D3 vline */
+-				if (rdev->irq.stat_regs.cik.disp_int_cont2 & LB_D3_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D3 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont2 & LB_D3_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D3 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -7983,23 +8029,27 @@ restart_ih:
+ 		case 4: /* D4 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D4 vblank */
+-				if (rdev->irq.stat_regs.cik.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[3]) {
+-						drm_handle_vblank(rdev->ddev, 3);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[3]))
+-						radeon_crtc_handle_vblank(rdev, 3);
+-					rdev->irq.stat_regs.cik.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D4 vblank\n");
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[3]) {
++					drm_handle_vblank(rdev->ddev, 3);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[3]))
++					radeon_crtc_handle_vblank(rdev, 3);
++				rdev->irq.stat_regs.cik.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D4 vblank\n");
++
+ 				break;
+ 			case 1: /* D4 vline */
+-				if (rdev->irq.stat_regs.cik.disp_int_cont3 & LB_D4_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D4 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont3 & LB_D4_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D4 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -8009,23 +8059,27 @@ restart_ih:
+ 		case 5: /* D5 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D5 vblank */
+-				if (rdev->irq.stat_regs.cik.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[4]) {
+-						drm_handle_vblank(rdev->ddev, 4);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[4]))
+-						radeon_crtc_handle_vblank(rdev, 4);
+-					rdev->irq.stat_regs.cik.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D5 vblank\n");
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[4]) {
++					drm_handle_vblank(rdev->ddev, 4);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[4]))
++					radeon_crtc_handle_vblank(rdev, 4);
++				rdev->irq.stat_regs.cik.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D5 vblank\n");
++
+ 				break;
+ 			case 1: /* D5 vline */
+-				if (rdev->irq.stat_regs.cik.disp_int_cont4 & LB_D5_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D5 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont4 & LB_D5_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D5 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -8035,23 +8089,27 @@ restart_ih:
+ 		case 6: /* D6 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D6 vblank */
+-				if (rdev->irq.stat_regs.cik.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[5]) {
+-						drm_handle_vblank(rdev->ddev, 5);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[5]))
+-						radeon_crtc_handle_vblank(rdev, 5);
+-					rdev->irq.stat_regs.cik.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D6 vblank\n");
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[5]) {
++					drm_handle_vblank(rdev->ddev, 5);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[5]))
++					radeon_crtc_handle_vblank(rdev, 5);
++				rdev->irq.stat_regs.cik.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D6 vblank\n");
++
+ 				break;
+ 			case 1: /* D6 vline */
+-				if (rdev->irq.stat_regs.cik.disp_int_cont5 & LB_D6_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D6 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont5 & LB_D6_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D6 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -8071,88 +8129,112 @@ restart_ih:
+ 		case 42: /* HPD hotplug */
+ 			switch (src_data) {
+ 			case 0:
+-				if (rdev->irq.stat_regs.cik.disp_int & DC_HPD1_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD1\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int & DC_HPD1_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD1\n");
++
+ 				break;
+ 			case 1:
+-				if (rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD2\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD2\n");
++
+ 				break;
+ 			case 2:
+-				if (rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD3\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD3\n");
++
+ 				break;
+ 			case 3:
+-				if (rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD4\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD4\n");
++
+ 				break;
+ 			case 4:
+-				if (rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD5\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD5\n");
++
+ 				break;
+ 			case 5:
+-				if (rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD6\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD6\n");
++
+ 				break;
+ 			case 6:
+-				if (rdev->irq.stat_regs.cik.disp_int & DC_HPD1_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 1\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int & DC_HPD1_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int &= ~DC_HPD1_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 1\n");
++
+ 				break;
+ 			case 7:
+-				if (rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 2\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont & DC_HPD2_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 2\n");
++
+ 				break;
+ 			case 8:
+-				if (rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 3\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont2 & DC_HPD3_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 3\n");
++
+ 				break;
+ 			case 9:
+-				if (rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 4\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont3 & DC_HPD4_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 4\n");
++
+ 				break;
+ 			case 10:
+-				if (rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 5\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont4 & DC_HPD5_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 5\n");
++
+ 				break;
+ 			case 11:
+-				if (rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 6\n");
+-				}
++				if (!(rdev->irq.stat_regs.cik.disp_int_cont5 & DC_HPD6_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.cik.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 6\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
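
The interrupt-handler hunks above (and the matching evergreen.c, r600.c and si.c hunks later in this patch) all apply one restructuring: rather than silently dropping an IH ring event whose latched display-interrupt bit happens to be clear, the handler logs a debug message and still acknowledges and processes the event. A minimal sketch of the before/after control flow, with an invented status word and bit value standing in for the real registers:

/* Sketch only: disp_int and D1_VBLANK are stand-ins, not radeon symbols. */
#include <stdio.h>

static unsigned int disp_int;           /* latched copy of the status register */
#define D1_VBLANK (1u << 3)             /* hypothetical interrupt bit */

static void d1_vblank_old(void)
{
	if (disp_int & D1_VBLANK) {     /* event silently dropped if bit is clear */
		/* ... wake vblank waiters, run pending page flips ... */
		disp_int &= ~D1_VBLANK;
		printf("IH: D1 vblank\n");
	}
}

static void d1_vblank_new(void)
{
	if (!(disp_int & D1_VBLANK))    /* warn, but keep handling the event */
		printf("IH: IH event w/o asserted irq bit?\n");

	/* ... wake vblank waiters, run pending page flips ... */
	disp_int &= ~D1_VBLANK;
	printf("IH: D1 vblank\n");
}

int main(void)
{
	d1_vblank_old();                /* does nothing: bit never set */
	d1_vblank_new();                /* warns, then handles anyway */
	return 0;
}
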
+diff --git a/drivers/gpu/drm/radeon/cik_sdma.c b/drivers/gpu/drm/radeon/cik_sdma.c
+index f86eb54e7763..d16f2eebd95e 100644
+--- a/drivers/gpu/drm/radeon/cik_sdma.c
++++ b/drivers/gpu/drm/radeon/cik_sdma.c
+@@ -268,6 +268,17 @@ static void cik_sdma_gfx_stop(struct radeon_device *rdev)
+ 	}
+ 	rdev->ring[R600_RING_TYPE_DMA_INDEX].ready = false;
+ 	rdev->ring[CAYMAN_RING_TYPE_DMA1_INDEX].ready = false;
++
++	/* FIXME: use something less heavy-handed than this big hammer, but no
++	 * better combination was found after several days, so soft-reset the
++	 * SDMA blocks since they do not seem to be shut down properly.
++	 * This fixes hibernation and does not affect suspend-to-RAM.
++	 */
++	WREG32(SRBM_SOFT_RESET, SOFT_RESET_SDMA | SOFT_RESET_SDMA1);
++	(void)RREG32(SRBM_SOFT_RESET);
++	udelay(50);
++	WREG32(SRBM_SOFT_RESET, 0);
++	(void)RREG32(SRBM_SOFT_RESET);
+ }
+ 
+ /**
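
The cik_sdma_gfx_stop() hunk above works around SDMA engines that are apparently not shut down cleanly by pulsing the SRBM soft-reset bits for both SDMA blocks: assert the reset, read the register back to post the write, wait roughly 50 microseconds, then release it. The sequence in isolation, with placeholder offsets, bit positions and helpers:

/* Sketch only: offsets, bit positions and helpers are placeholders. */
#include <stdint.h>

#define SRBM_SOFT_RESET  0x0e60u        /* placeholder offset */
#define RESET_SDMA       (1u << 20)     /* placeholder bit for SDMA0 */
#define RESET_SDMA1      (1u << 6)      /* placeholder bit for SDMA1 */

static uint32_t fake_srbm;              /* stand-in for the real register */

static uint32_t mmio_read(uint32_t reg)          { (void)reg; return fake_srbm; }
static void mmio_write(uint32_t reg, uint32_t v) { (void)reg; fake_srbm = v; }
static void wait_us(unsigned int us)             { (void)us; }

static void sdma_soft_reset(void)
{
	mmio_write(SRBM_SOFT_RESET, RESET_SDMA | RESET_SDMA1);
	(void)mmio_read(SRBM_SOFT_RESET);   /* read back to post the write */
	wait_us(50);                        /* let the reset take effect */
	mmio_write(SRBM_SOFT_RESET, 0);     /* release both blocks */
	(void)mmio_read(SRBM_SOFT_RESET);
}

int main(void)
{
	sdma_soft_reset();
	return 0;
}
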
+diff --git a/drivers/gpu/drm/radeon/evergreen.c b/drivers/gpu/drm/radeon/evergreen.c
+index f848acfd3fc8..feef136cdb55 100644
+--- a/drivers/gpu/drm/radeon/evergreen.c
++++ b/drivers/gpu/drm/radeon/evergreen.c
+@@ -4855,7 +4855,7 @@ restart_ih:
+ 		return IRQ_NONE;
+ 
+ 	rptr = rdev->ih.rptr;
+-	DRM_DEBUG("r600_irq_process start: rptr %d, wptr %d\n", rptr, wptr);
++	DRM_DEBUG("evergreen_irq_process start: rptr %d, wptr %d\n", rptr, wptr);
+ 
+ 	/* Order reading of wptr vs. reading of IH ring data */
+ 	rmb();
+@@ -4873,23 +4873,27 @@ restart_ih:
+ 		case 1: /* D1 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D1 vblank */
+-				if (rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[0]) {
+-						drm_handle_vblank(rdev->ddev, 0);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[0]))
+-						radeon_crtc_handle_vblank(rdev, 0);
+-					rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D1 vblank\n");
++				if (!(rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: D1 vblank - IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[0]) {
++					drm_handle_vblank(rdev->ddev, 0);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[0]))
++					radeon_crtc_handle_vblank(rdev, 0);
++				rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D1 vblank\n");
++
+ 				break;
+ 			case 1: /* D1 vline */
+-				if (rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D1 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: D1 vline - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D1 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -4899,23 +4903,27 @@ restart_ih:
+ 		case 2: /* D2 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D2 vblank */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[1]) {
+-						drm_handle_vblank(rdev->ddev, 1);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[1]))
+-						radeon_crtc_handle_vblank(rdev, 1);
+-					rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D2 vblank\n");
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: D2 vblank - IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[1]) {
++					drm_handle_vblank(rdev->ddev, 1);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[1]))
++					radeon_crtc_handle_vblank(rdev, 1);
++				rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D2 vblank\n");
++
+ 				break;
+ 			case 1: /* D2 vline */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D2 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: D2 vline - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D2 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -4925,23 +4933,27 @@ restart_ih:
+ 		case 3: /* D3 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D3 vblank */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[2]) {
+-						drm_handle_vblank(rdev->ddev, 2);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[2]))
+-						radeon_crtc_handle_vblank(rdev, 2);
+-					rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D3 vblank\n");
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: D3 vblank - IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[2]) {
++					drm_handle_vblank(rdev->ddev, 2);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[2]))
++					radeon_crtc_handle_vblank(rdev, 2);
++				rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D3 vblank\n");
++
+ 				break;
+ 			case 1: /* D3 vline */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D3 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: D3 vline - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D3 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -4951,23 +4963,27 @@ restart_ih:
+ 		case 4: /* D4 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D4 vblank */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[3]) {
+-						drm_handle_vblank(rdev->ddev, 3);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[3]))
+-						radeon_crtc_handle_vblank(rdev, 3);
+-					rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D4 vblank\n");
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: D4 vblank - IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[3]) {
++					drm_handle_vblank(rdev->ddev, 3);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[3]))
++					radeon_crtc_handle_vblank(rdev, 3);
++				rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D4 vblank\n");
++
+ 				break;
+ 			case 1: /* D4 vline */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D4 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: D4 vline - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D4 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -4977,23 +4993,27 @@ restart_ih:
+ 		case 5: /* D5 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D5 vblank */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[4]) {
+-						drm_handle_vblank(rdev->ddev, 4);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[4]))
+-						radeon_crtc_handle_vblank(rdev, 4);
+-					rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D5 vblank\n");
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: D5 vblank - IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[4]) {
++					drm_handle_vblank(rdev->ddev, 4);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[4]))
++					radeon_crtc_handle_vblank(rdev, 4);
++				rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D5 vblank\n");
++
+ 				break;
+ 			case 1: /* D5 vline */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D5 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: D5 vline - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D5 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -5003,23 +5023,27 @@ restart_ih:
+ 		case 6: /* D6 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D6 vblank */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[5]) {
+-						drm_handle_vblank(rdev->ddev, 5);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[5]))
+-						radeon_crtc_handle_vblank(rdev, 5);
+-					rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D6 vblank\n");
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: D6 vblank - IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[5]) {
++					drm_handle_vblank(rdev->ddev, 5);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[5]))
++					radeon_crtc_handle_vblank(rdev, 5);
++				rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D6 vblank\n");
++
+ 				break;
+ 			case 1: /* D6 vline */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D6 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: D6 vline - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D6 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -5039,88 +5063,100 @@ restart_ih:
+ 		case 42: /* HPD hotplug */
+ 			switch (src_data) {
+ 			case 0:
+-				if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD1\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD1\n");
+ 				break;
+ 			case 1:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD2\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD2\n");
+ 				break;
+ 			case 2:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD3\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD3\n");
+ 				break;
+ 			case 3:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD4\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD4\n");
+ 				break;
+ 			case 4:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD5\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD5\n");
+ 				break;
+ 			case 5:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD6\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD6\n");
+ 				break;
+ 			case 6:
+-				if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 1\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 1\n");
+ 				break;
+ 			case 7:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 2\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 2\n");
+ 				break;
+ 			case 8:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 3\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 3\n");
+ 				break;
+ 			case 9:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 4\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 4\n");
+ 				break;
+ 			case 10:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 5\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 5\n");
+ 				break;
+ 			case 11:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 6\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 6\n");
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -5130,46 +5166,52 @@ restart_ih:
+ 		case 44: /* hdmi */
+ 			switch (src_data) {
+ 			case 0:
+-				if (rdev->irq.stat_regs.evergreen.afmt_status1 & AFMT_AZ_FORMAT_WTRIG) {
+-					rdev->irq.stat_regs.evergreen.afmt_status1 &= ~AFMT_AZ_FORMAT_WTRIG;
+-					queue_hdmi = true;
+-					DRM_DEBUG("IH: HDMI0\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.afmt_status1 & AFMT_AZ_FORMAT_WTRIG))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.afmt_status1 &= ~AFMT_AZ_FORMAT_WTRIG;
++				queue_hdmi = true;
++				DRM_DEBUG("IH: HDMI0\n");
+ 				break;
+ 			case 1:
+-				if (rdev->irq.stat_regs.evergreen.afmt_status2 & AFMT_AZ_FORMAT_WTRIG) {
+-					rdev->irq.stat_regs.evergreen.afmt_status2 &= ~AFMT_AZ_FORMAT_WTRIG;
+-					queue_hdmi = true;
+-					DRM_DEBUG("IH: HDMI1\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.afmt_status2 & AFMT_AZ_FORMAT_WTRIG))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.afmt_status2 &= ~AFMT_AZ_FORMAT_WTRIG;
++				queue_hdmi = true;
++				DRM_DEBUG("IH: HDMI1\n");
+ 				break;
+ 			case 2:
+-				if (rdev->irq.stat_regs.evergreen.afmt_status3 & AFMT_AZ_FORMAT_WTRIG) {
+-					rdev->irq.stat_regs.evergreen.afmt_status3 &= ~AFMT_AZ_FORMAT_WTRIG;
+-					queue_hdmi = true;
+-					DRM_DEBUG("IH: HDMI2\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.afmt_status3 & AFMT_AZ_FORMAT_WTRIG))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.afmt_status3 &= ~AFMT_AZ_FORMAT_WTRIG;
++				queue_hdmi = true;
++				DRM_DEBUG("IH: HDMI2\n");
+ 				break;
+ 			case 3:
+-				if (rdev->irq.stat_regs.evergreen.afmt_status4 & AFMT_AZ_FORMAT_WTRIG) {
+-					rdev->irq.stat_regs.evergreen.afmt_status4 &= ~AFMT_AZ_FORMAT_WTRIG;
+-					queue_hdmi = true;
+-					DRM_DEBUG("IH: HDMI3\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.afmt_status4 & AFMT_AZ_FORMAT_WTRIG))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.afmt_status4 &= ~AFMT_AZ_FORMAT_WTRIG;
++				queue_hdmi = true;
++				DRM_DEBUG("IH: HDMI3\n");
+ 				break;
+ 			case 4:
+-				if (rdev->irq.stat_regs.evergreen.afmt_status5 & AFMT_AZ_FORMAT_WTRIG) {
+-					rdev->irq.stat_regs.evergreen.afmt_status5 &= ~AFMT_AZ_FORMAT_WTRIG;
+-					queue_hdmi = true;
+-					DRM_DEBUG("IH: HDMI4\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.afmt_status5 & AFMT_AZ_FORMAT_WTRIG))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.afmt_status5 &= ~AFMT_AZ_FORMAT_WTRIG;
++				queue_hdmi = true;
++				DRM_DEBUG("IH: HDMI4\n");
+ 				break;
+ 			case 5:
+-				if (rdev->irq.stat_regs.evergreen.afmt_status6 & AFMT_AZ_FORMAT_WTRIG) {
+-					rdev->irq.stat_regs.evergreen.afmt_status6 &= ~AFMT_AZ_FORMAT_WTRIG;
+-					queue_hdmi = true;
+-					DRM_DEBUG("IH: HDMI5\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.afmt_status6 & AFMT_AZ_FORMAT_WTRIG))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.afmt_status6 &= ~AFMT_AZ_FORMAT_WTRIG;
++				queue_hdmi = true;
++				DRM_DEBUG("IH: HDMI5\n");
+ 				break;
+ 			default:
+ 				DRM_ERROR("Unhandled interrupt: %d %d\n", src_id, src_data);
+diff --git a/drivers/gpu/drm/radeon/r600.c b/drivers/gpu/drm/radeon/r600.c
+index 8f6d862a1882..21e479fefcab 100644
+--- a/drivers/gpu/drm/radeon/r600.c
++++ b/drivers/gpu/drm/radeon/r600.c
+@@ -4039,23 +4039,27 @@ restart_ih:
+ 		case 1: /* D1 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D1 vblank */
+-				if (rdev->irq.stat_regs.r600.disp_int & LB_D1_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[0]) {
+-						drm_handle_vblank(rdev->ddev, 0);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[0]))
+-						radeon_crtc_handle_vblank(rdev, 0);
+-					rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D1 vblank\n");
++				if (!(rdev->irq.stat_regs.r600.disp_int & LB_D1_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: D1 vblank - IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[0]) {
++					drm_handle_vblank(rdev->ddev, 0);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[0]))
++					radeon_crtc_handle_vblank(rdev, 0);
++				rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D1 vblank\n");
++
+ 				break;
+ 			case 1: /* D1 vline */
+-				if (rdev->irq.stat_regs.r600.disp_int & LB_D1_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D1 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.r600.disp_int & LB_D1_VLINE_INTERRUPT))
++				    DRM_DEBUG("IH: D1 vline - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.r600.disp_int &= ~LB_D1_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D1 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -4065,23 +4069,27 @@ restart_ih:
+ 		case 5: /* D2 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D2 vblank */
+-				if (rdev->irq.stat_regs.r600.disp_int & LB_D2_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[1]) {
+-						drm_handle_vblank(rdev->ddev, 1);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[1]))
+-						radeon_crtc_handle_vblank(rdev, 1);
+-					rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D2 vblank\n");
++				if (!(rdev->irq.stat_regs.r600.disp_int & LB_D2_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: D2 vblank - IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[1]) {
++					drm_handle_vblank(rdev->ddev, 1);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[1]))
++					radeon_crtc_handle_vblank(rdev, 1);
++				rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D2 vblank\n");
++
+ 				break;
+ 			case 1: /* D1 vline */
+-				if (rdev->irq.stat_regs.r600.disp_int & LB_D2_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D2 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.r600.disp_int & LB_D2_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: D2 vline - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.r600.disp_int &= ~LB_D2_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D2 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -4101,46 +4109,53 @@ restart_ih:
+ 		case 19: /* HPD/DAC hotplug */
+ 			switch (src_data) {
+ 			case 0:
+-				if (rdev->irq.stat_regs.r600.disp_int & DC_HPD1_INTERRUPT) {
+-					rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD1_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD1\n");
+-				}
++				if (!(rdev->irq.stat_regs.r600.disp_int & DC_HPD1_INTERRUPT))
++					DRM_DEBUG("IH: HPD1 - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD1_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD1\n");
+ 				break;
+ 			case 1:
+-				if (rdev->irq.stat_regs.r600.disp_int & DC_HPD2_INTERRUPT) {
+-					rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD2_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD2\n");
+-				}
++				if (!(rdev->irq.stat_regs.r600.disp_int & DC_HPD2_INTERRUPT))
++					DRM_DEBUG("IH: HPD2 - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.r600.disp_int &= ~DC_HPD2_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD2\n");
+ 				break;
+ 			case 4:
+-				if (rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD3_INTERRUPT) {
+-					rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD3_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD3\n");
+-				}
++				if (!(rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD3_INTERRUPT))
++					DRM_DEBUG("IH: HPD3 - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD3_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD3\n");
+ 				break;
+ 			case 5:
+-				if (rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD4_INTERRUPT) {
+-					rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD4_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD4\n");
+-				}
++				if (!(rdev->irq.stat_regs.r600.disp_int_cont & DC_HPD4_INTERRUPT))
++					DRM_DEBUG("IH: HPD4 - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.r600.disp_int_cont &= ~DC_HPD4_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD4\n");
+ 				break;
+ 			case 10:
+-				if (rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD5_INTERRUPT) {
+-					rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD5_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD5\n");
+-				}
++				if (!(rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD5_INTERRUPT))
++					DRM_DEBUG("IH: HPD5 - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD5_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD5\n");
+ 				break;
+ 			case 12:
+-				if (rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD6_INTERRUPT) {
+-					rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD6_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD6\n");
+-				}
++				if (!(rdev->irq.stat_regs.r600.disp_int_cont2 & DC_HPD6_INTERRUPT))
++					DRM_DEBUG("IH: HPD6 - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.r600.disp_int_cont2 &= ~DC_HPD6_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD6\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -4150,18 +4165,22 @@ restart_ih:
+ 		case 21: /* hdmi */
+ 			switch (src_data) {
+ 			case 4:
+-				if (rdev->irq.stat_regs.r600.hdmi0_status & HDMI0_AZ_FORMAT_WTRIG) {
+-					rdev->irq.stat_regs.r600.hdmi0_status &= ~HDMI0_AZ_FORMAT_WTRIG;
+-					queue_hdmi = true;
+-					DRM_DEBUG("IH: HDMI0\n");
+-				}
++				if (!(rdev->irq.stat_regs.r600.hdmi0_status & HDMI0_AZ_FORMAT_WTRIG))
++					DRM_DEBUG("IH: HDMI0 - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.r600.hdmi0_status &= ~HDMI0_AZ_FORMAT_WTRIG;
++				queue_hdmi = true;
++				DRM_DEBUG("IH: HDMI0\n");
++
+ 				break;
+ 			case 5:
+-				if (rdev->irq.stat_regs.r600.hdmi1_status & HDMI0_AZ_FORMAT_WTRIG) {
+-					rdev->irq.stat_regs.r600.hdmi1_status &= ~HDMI0_AZ_FORMAT_WTRIG;
+-					queue_hdmi = true;
+-					DRM_DEBUG("IH: HDMI1\n");
+-				}
++				if (!(rdev->irq.stat_regs.r600.hdmi1_status & HDMI0_AZ_FORMAT_WTRIG))
++					DRM_DEBUG("IH: HDMI1 - IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.r600.hdmi1_status &= ~HDMI0_AZ_FORMAT_WTRIG;
++				queue_hdmi = true;
++				DRM_DEBUG("IH: HDMI1\n");
++
+ 				break;
+ 			default:
+ 				DRM_ERROR("Unhandled interrupt: %d %d\n", src_id, src_data);
+diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
+index 25191f126f3b..fa719c53449b 100644
+--- a/drivers/gpu/drm/radeon/radeon_audio.c
++++ b/drivers/gpu/drm/radeon/radeon_audio.c
+@@ -242,6 +242,13 @@ static struct radeon_audio_funcs dce6_dp_funcs = {
+ 	.dpms = evergreen_dp_enable,
+ };
+ 
++static void radeon_audio_enable(struct radeon_device *rdev,
++				struct r600_audio_pin *pin, u8 enable_mask)
++{
++	if (rdev->audio.funcs->enable)
++		rdev->audio.funcs->enable(rdev, pin, enable_mask);
++}
++
+ static void radeon_audio_interface_init(struct radeon_device *rdev)
+ {
+ 	if (ASIC_IS_DCE6(rdev)) {
+@@ -307,7 +314,7 @@ int radeon_audio_init(struct radeon_device *rdev)
+ 
+ 	/* disable audio.  it will be set up later */
+ 	for (i = 0; i < rdev->audio.num_pins; i++)
+-		radeon_audio_enable(rdev, &rdev->audio.pin[i], false);
++		radeon_audio_enable(rdev, &rdev->audio.pin[i], 0);
+ 
+ 	return 0;
+ }
+@@ -443,13 +450,6 @@ static void radeon_audio_select_pin(struct drm_encoder *encoder)
+ 		radeon_encoder->audio->select_pin(encoder);
+ }
+ 
+-void radeon_audio_enable(struct radeon_device *rdev,
+-	struct r600_audio_pin *pin, u8 enable_mask)
+-{
+-	if (rdev->audio.funcs->enable)
+-		rdev->audio.funcs->enable(rdev, pin, enable_mask);
+-}
+-
+ void radeon_audio_detect(struct drm_connector *connector,
+ 			 enum drm_connector_status status)
+ {
+@@ -469,22 +469,22 @@ void radeon_audio_detect(struct drm_connector *connector,
+ 	dig = radeon_encoder->enc_priv;
+ 
+ 	if (status == connector_status_connected) {
+-		struct radeon_connector *radeon_connector;
+-		int sink_type;
+-
+ 		if (!drm_detect_monitor_audio(radeon_connector_edid(connector))) {
+ 			radeon_encoder->audio = NULL;
+ 			return;
+ 		}
+ 
+-		radeon_connector = to_radeon_connector(connector);
+-		sink_type = radeon_dp_getsinktype(radeon_connector);
++		if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
++			struct radeon_connector *radeon_connector = to_radeon_connector(connector);
+ 
+-		if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort &&
+-			sink_type == CONNECTOR_OBJECT_ID_DISPLAYPORT)
+-			radeon_encoder->audio = rdev->audio.dp_funcs;
+-		else
++			if (radeon_dp_getsinktype(radeon_connector) ==
++			    CONNECTOR_OBJECT_ID_DISPLAYPORT)
++				radeon_encoder->audio = rdev->audio.dp_funcs;
++			else
++				radeon_encoder->audio = rdev->audio.hdmi_funcs;
++		} else {
+ 			radeon_encoder->audio = rdev->audio.hdmi_funcs;
++		}
+ 
+ 		dig->afmt->pin = radeon_audio_get_pin(connector->encoder);
+ 		radeon_audio_enable(rdev, dig->afmt->pin, 0xf);
+@@ -502,7 +502,7 @@ void radeon_audio_fini(struct radeon_device *rdev)
+ 		return;
+ 
+ 	for (i = 0; i < rdev->audio.num_pins; i++)
+-		radeon_audio_enable(rdev, &rdev->audio.pin[i], false);
++		radeon_audio_enable(rdev, &rdev->audio.pin[i], 0);
+ 
+ 	rdev->audio.enabled = false;
+ }
+diff --git a/drivers/gpu/drm/radeon/radeon_audio.h b/drivers/gpu/drm/radeon/radeon_audio.h
+index c92d059ab204..8438304f7139 100644
+--- a/drivers/gpu/drm/radeon/radeon_audio.h
++++ b/drivers/gpu/drm/radeon/radeon_audio.h
+@@ -74,8 +74,6 @@ u32 radeon_audio_endpoint_rreg(struct radeon_device *rdev,
+ void radeon_audio_endpoint_wreg(struct radeon_device *rdev,
+ 	u32 offset,	u32 reg, u32 v);
+ struct r600_audio_pin *radeon_audio_get_pin(struct drm_encoder *encoder);
+-void radeon_audio_enable(struct radeon_device *rdev,
+-	struct r600_audio_pin *pin, u8 enable_mask);
+ void radeon_audio_fini(struct radeon_device *rdev);
+ void radeon_audio_mode_set(struct drm_encoder *encoder,
+ 	struct drm_display_mode *mode);
+diff --git a/drivers/gpu/drm/radeon/radeon_cursor.c b/drivers/gpu/drm/radeon/radeon_cursor.c
+index 45e54060ee97..fa661744a1f5 100644
+--- a/drivers/gpu/drm/radeon/radeon_cursor.c
++++ b/drivers/gpu/drm/radeon/radeon_cursor.c
+@@ -205,8 +205,9 @@ static int radeon_cursor_move_locked(struct drm_crtc *crtc, int x, int y)
+ 			| (x << 16)
+ 			| y));
+ 		/* offset is from DISP(2)_BASE_ADDRESS */
+-		WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset, (radeon_crtc->legacy_cursor_offset +
+-								      (yorigin * 256)));
++		WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset,
++		       radeon_crtc->cursor_addr - radeon_crtc->legacy_display_base_addr +
++		       yorigin * 256);
+ 	}
+ 
+ 	radeon_crtc->cursor_x = x;
+@@ -227,51 +228,32 @@ int radeon_crtc_cursor_move(struct drm_crtc *crtc,
+ 	return ret;
+ }
+ 
+-static int radeon_set_cursor(struct drm_crtc *crtc, struct drm_gem_object *obj)
++static void radeon_set_cursor(struct drm_crtc *crtc)
+ {
+ 	struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+ 	struct radeon_device *rdev = crtc->dev->dev_private;
+-	struct radeon_bo *robj = gem_to_radeon_bo(obj);
+-	uint64_t gpu_addr;
+-	int ret;
+-
+-	ret = radeon_bo_reserve(robj, false);
+-	if (unlikely(ret != 0))
+-		goto fail;
+-	/* Only 27 bit offset for legacy cursor */
+-	ret = radeon_bo_pin_restricted(robj, RADEON_GEM_DOMAIN_VRAM,
+-				       ASIC_IS_AVIVO(rdev) ? 0 : 1 << 27,
+-				       &gpu_addr);
+-	radeon_bo_unreserve(robj);
+-	if (ret)
+-		goto fail;
+ 
+ 	if (ASIC_IS_DCE4(rdev)) {
+ 		WREG32(EVERGREEN_CUR_SURFACE_ADDRESS_HIGH + radeon_crtc->crtc_offset,
+-		       upper_32_bits(gpu_addr));
++		       upper_32_bits(radeon_crtc->cursor_addr));
+ 		WREG32(EVERGREEN_CUR_SURFACE_ADDRESS + radeon_crtc->crtc_offset,
+-		       gpu_addr & 0xffffffff);
++		       lower_32_bits(radeon_crtc->cursor_addr));
+ 	} else if (ASIC_IS_AVIVO(rdev)) {
+ 		if (rdev->family >= CHIP_RV770) {
+ 			if (radeon_crtc->crtc_id)
+-				WREG32(R700_D2CUR_SURFACE_ADDRESS_HIGH, upper_32_bits(gpu_addr));
++				WREG32(R700_D2CUR_SURFACE_ADDRESS_HIGH,
++				       upper_32_bits(radeon_crtc->cursor_addr));
+ 			else
+-				WREG32(R700_D1CUR_SURFACE_ADDRESS_HIGH, upper_32_bits(gpu_addr));
++				WREG32(R700_D1CUR_SURFACE_ADDRESS_HIGH,
++				       upper_32_bits(radeon_crtc->cursor_addr));
+ 		}
+ 		WREG32(AVIVO_D1CUR_SURFACE_ADDRESS + radeon_crtc->crtc_offset,
+-		       gpu_addr & 0xffffffff);
++		       lower_32_bits(radeon_crtc->cursor_addr));
+ 	} else {
+-		radeon_crtc->legacy_cursor_offset = gpu_addr - radeon_crtc->legacy_display_base_addr;
+ 		/* offset is from DISP(2)_BASE_ADDRESS */
+-		WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset, radeon_crtc->legacy_cursor_offset);
++		WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset,
++		       radeon_crtc->cursor_addr - radeon_crtc->legacy_display_base_addr);
+ 	}
+-
+-	return 0;
+-
+-fail:
+-	drm_gem_object_unreference_unlocked(obj);
+-
+-	return ret;
+ }
+ 
+ int radeon_crtc_cursor_set2(struct drm_crtc *crtc,
+@@ -283,7 +265,9 @@ int radeon_crtc_cursor_set2(struct drm_crtc *crtc,
+ 			    int32_t hot_y)
+ {
+ 	struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
++	struct radeon_device *rdev = crtc->dev->dev_private;
+ 	struct drm_gem_object *obj;
++	struct radeon_bo *robj;
+ 	int ret;
+ 
+ 	if (!handle) {
+@@ -305,6 +289,23 @@ int radeon_crtc_cursor_set2(struct drm_crtc *crtc,
+ 		return -ENOENT;
+ 	}
+ 
++	robj = gem_to_radeon_bo(obj);
++	ret = radeon_bo_reserve(robj, false);
++	if (ret != 0) {
++		drm_gem_object_unreference_unlocked(obj);
++		return ret;
++	}
++	/* Only 27 bit offset for legacy cursor */
++	ret = radeon_bo_pin_restricted(robj, RADEON_GEM_DOMAIN_VRAM,
++				       ASIC_IS_AVIVO(rdev) ? 0 : 1 << 27,
++				       &radeon_crtc->cursor_addr);
++	radeon_bo_unreserve(robj);
++	if (ret) {
++		DRM_ERROR("Failed to pin new cursor BO (%d)\n", ret);
++		drm_gem_object_unreference_unlocked(obj);
++		return ret;
++	}
++
+ 	radeon_crtc->cursor_width = width;
+ 	radeon_crtc->cursor_height = height;
+ 
+@@ -323,13 +324,8 @@ int radeon_crtc_cursor_set2(struct drm_crtc *crtc,
+ 		radeon_crtc->cursor_hot_y = hot_y;
+ 	}
+ 
+-	ret = radeon_set_cursor(crtc, obj);
+-
+-	if (ret)
+-		DRM_ERROR("radeon_set_cursor returned %d, not changing cursor\n",
+-			  ret);
+-	else
+-		radeon_show_cursor(crtc);
++	radeon_set_cursor(crtc);
++	radeon_show_cursor(crtc);
+ 
+ 	radeon_lock_cursor(crtc, false);
+ 
+@@ -341,8 +337,7 @@ unpin:
+ 			radeon_bo_unpin(robj);
+ 			radeon_bo_unreserve(robj);
+ 		}
+-		if (radeon_crtc->cursor_bo != obj)
+-			drm_gem_object_unreference_unlocked(radeon_crtc->cursor_bo);
++		drm_gem_object_unreference_unlocked(radeon_crtc->cursor_bo);
+ 	}
+ 
+ 	radeon_crtc->cursor_bo = obj;
+@@ -360,7 +355,6 @@ unpin:
+ void radeon_cursor_reset(struct drm_crtc *crtc)
+ {
+ 	struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+-	int ret;
+ 
+ 	if (radeon_crtc->cursor_bo) {
+ 		radeon_lock_cursor(crtc, true);
+@@ -368,12 +362,8 @@ void radeon_cursor_reset(struct drm_crtc *crtc)
+ 		radeon_cursor_move_locked(crtc, radeon_crtc->cursor_x,
+ 					  radeon_crtc->cursor_y);
+ 
+-		ret = radeon_set_cursor(crtc, radeon_crtc->cursor_bo);
+-		if (ret)
+-			DRM_ERROR("radeon_set_cursor returned %d, not showing "
+-				  "cursor\n", ret);
+-		else
+-			radeon_show_cursor(crtc);
++		radeon_set_cursor(crtc);
++		radeon_show_cursor(crtc);
+ 
+ 		radeon_lock_cursor(crtc, false);
+ 	}
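
The radeon_cursor.c hunks above move buffer pinning out of the register-programming path: the cursor BO is reserved and pinned once in radeon_crtc_cursor_set2() (still with the 27-bit offset restriction on pre-AVIVO parts), the resulting GPU address is cached in radeon_crtc->cursor_addr, and radeon_set_cursor() then only writes registers and can no longer fail; the radeon_device.c hunks just below unpin and re-pin the same BO across suspend and resume. A rough model of that ownership flow, using invented types rather than the radeon structures:

/* Sketch only: struct buf, struct crtc and the helpers are invented. */
#include <stdint.h>
#include <stdio.h>

struct buf  { uint64_t gpu_addr; int pinned; };
struct crtc { struct buf *cursor_bo; uint64_t cursor_addr; };

static int  buf_pin(struct buf *b)   { b->pinned = 1; b->gpu_addr = 0x100000; return 0; }
static void buf_unpin(struct buf *b) { b->pinned = 0; }

/* Pin the new BO first (the only step that can fail), cache its address,
 * then programming the cursor registers is unconditional. */
static int cursor_set(struct crtc *c, struct buf *newbo)
{
	if (buf_pin(newbo))
		return -1;
	if (c->cursor_bo)
		buf_unpin(c->cursor_bo);        /* drop the previous cursor */
	c->cursor_bo   = newbo;
	c->cursor_addr = newbo->gpu_addr;
	printf("CUR_SURFACE_ADDRESS <- 0x%llx\n",
	       (unsigned long long)c->cursor_addr);
	return 0;
}

int main(void)
{
	struct crtc c = { 0 };
	struct buf  b = { 0 };
	return cursor_set(&c, &b);
}
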
+diff --git a/drivers/gpu/drm/radeon/radeon_device.c b/drivers/gpu/drm/radeon/radeon_device.c
+index a7fdfa4f0857..604c44d88e7a 100644
+--- a/drivers/gpu/drm/radeon/radeon_device.c
++++ b/drivers/gpu/drm/radeon/radeon_device.c
+@@ -1572,11 +1572,21 @@ int radeon_suspend_kms(struct drm_device *dev, bool suspend, bool fbcon)
+ 		drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
+ 	}
+ 
+-	/* unpin the front buffers */
++	/* unpin the front buffers and cursors */
+ 	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
++		struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
+ 		struct radeon_framebuffer *rfb = to_radeon_framebuffer(crtc->primary->fb);
+ 		struct radeon_bo *robj;
+ 
++		if (radeon_crtc->cursor_bo) {
++			struct radeon_bo *robj = gem_to_radeon_bo(radeon_crtc->cursor_bo);
++			r = radeon_bo_reserve(robj, false);
++			if (r == 0) {
++				radeon_bo_unpin(robj);
++				radeon_bo_unreserve(robj);
++			}
++		}
++
+ 		if (rfb == NULL || rfb->obj == NULL) {
+ 			continue;
+ 		}
+@@ -1639,6 +1649,7 @@ int radeon_resume_kms(struct drm_device *dev, bool resume, bool fbcon)
+ {
+ 	struct drm_connector *connector;
+ 	struct radeon_device *rdev = dev->dev_private;
++	struct drm_crtc *crtc;
+ 	int r;
+ 
+ 	if (dev->switch_power_state == DRM_SWITCH_POWER_OFF)
+@@ -1678,6 +1689,27 @@ int radeon_resume_kms(struct drm_device *dev, bool resume, bool fbcon)
+ 
+ 	radeon_restore_bios_scratch_regs(rdev);
+ 
++	/* pin cursors */
++	list_for_each_entry(crtc, &dev->mode_config.crtc_list, head) {
++		struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
++
++		if (radeon_crtc->cursor_bo) {
++			struct radeon_bo *robj = gem_to_radeon_bo(radeon_crtc->cursor_bo);
++			r = radeon_bo_reserve(robj, false);
++			if (r == 0) {
++				/* Only 27 bit offset for legacy cursor */
++				r = radeon_bo_pin_restricted(robj,
++							     RADEON_GEM_DOMAIN_VRAM,
++							     ASIC_IS_AVIVO(rdev) ?
++							     0 : 1 << 27,
++							     &radeon_crtc->cursor_addr);
++				if (r != 0)
++					DRM_ERROR("Failed to pin cursor BO (%d)\n", r);
++				radeon_bo_unreserve(robj);
++			}
++		}
++	}
++
+ 	/* init dig PHYs, disp eng pll */
+ 	if (rdev->is_atom_bios) {
+ 		radeon_atom_encoder_init(rdev);
+diff --git a/drivers/gpu/drm/radeon/radeon_fb.c b/drivers/gpu/drm/radeon/radeon_fb.c
+index aeb676708e60..634793ea8418 100644
+--- a/drivers/gpu/drm/radeon/radeon_fb.c
++++ b/drivers/gpu/drm/radeon/radeon_fb.c
+@@ -257,7 +257,6 @@ static int radeonfb_create(struct drm_fb_helper *helper,
+ 	}
+ 
+ 	info->par = rfbdev;
+-	info->skip_vt_switch = true;
+ 
+ 	ret = radeon_framebuffer_init(rdev->ddev, &rfbdev->rfb, &mode_cmd, gobj);
+ 	if (ret) {
+diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c
+index 5450fa95a47e..c4777c8d0312 100644
+--- a/drivers/gpu/drm/radeon/radeon_gart.c
++++ b/drivers/gpu/drm/radeon/radeon_gart.c
+@@ -260,8 +260,10 @@ void radeon_gart_unbind(struct radeon_device *rdev, unsigned offset,
+ 			}
+ 		}
+ 	}
+-	mb();
+-	radeon_gart_tlb_flush(rdev);
++	if (rdev->gart.ptr) {
++		mb();
++		radeon_gart_tlb_flush(rdev);
++	}
+ }
+ 
+ /**
+@@ -306,8 +308,10 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned offset,
+ 			page_base += RADEON_GPU_PAGE_SIZE;
+ 		}
+ 	}
+-	mb();
+-	radeon_gart_tlb_flush(rdev);
++	if (rdev->gart.ptr) {
++		mb();
++		radeon_gart_tlb_flush(rdev);
++	}
+ 	return 0;
+ }
+ 
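
Both GART hunks above make the memory barrier and TLB flush conditional on rdev->gart.ptr, i.e. they are skipped when the GART page table has no CPU mapping and therefore no page-table entries were written. The guard in isolation (the structure and helpers are stand-ins):

/* Sketch only: the struct and helpers are stand-ins for the radeon ones. */
#include <stddef.h>
#include <stdio.h>

struct gart { void *ptr; /* CPU mapping of the page table, may be NULL */ };

static void memory_barrier(void) { /* mb() on real hardware */ }
static void tlb_flush(void)      { printf("flush TLB\n"); }

static void gart_sync(struct gart *g)
{
	if (!g->ptr)            /* no mapping: no PTE writes to publish */
		return;
	memory_barrier();       /* order PTE writes before the flush */
	tlb_flush();
}

int main(void)
{
	struct gart g = { .ptr = NULL };
	gart_sync(&g);          /* no-op when the table is not mapped */
	return 0;
}
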
+diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c
+index ac3c1310b953..186d0b792a02 100644
+--- a/drivers/gpu/drm/radeon/radeon_gem.c
++++ b/drivers/gpu/drm/radeon/radeon_gem.c
+@@ -36,6 +36,7 @@ void radeon_gem_object_free(struct drm_gem_object *gobj)
+ 	if (robj) {
+ 		if (robj->gem_base.import_attach)
+ 			drm_prime_gem_destroy(&robj->gem_base, robj->tbo.sg);
++		radeon_mn_unregister(robj);
+ 		radeon_bo_unref(&robj);
+ 	}
+ }
+@@ -471,6 +472,7 @@ int radeon_gem_wait_idle_ioctl(struct drm_device *dev, void *data,
+ 		r = ret;
+ 
+ 	/* Flush HDP cache via MMIO if necessary */
++	cur_placement = ACCESS_ONCE(robj->tbo.mem.mem_type);
+ 	if (rdev->asic->mmio_hdp_flush &&
+ 	    radeon_mem_type_to_domain(cur_placement) == RADEON_GEM_DOMAIN_VRAM)
+ 		robj->rdev->asic->mmio_hdp_flush(rdev);
+diff --git a/drivers/gpu/drm/radeon/radeon_irq_kms.c b/drivers/gpu/drm/radeon/radeon_irq_kms.c
+index 7162c935371c..f682e5351252 100644
+--- a/drivers/gpu/drm/radeon/radeon_irq_kms.c
++++ b/drivers/gpu/drm/radeon/radeon_irq_kms.c
+@@ -79,10 +79,12 @@ static void radeon_hotplug_work_func(struct work_struct *work)
+ 	struct drm_mode_config *mode_config = &dev->mode_config;
+ 	struct drm_connector *connector;
+ 
++	mutex_lock(&mode_config->mutex);
+ 	if (mode_config->num_connector) {
+ 		list_for_each_entry(connector, &mode_config->connector_list, head)
+ 			radeon_connector_hotplug(connector);
+ 	}
++	mutex_unlock(&mode_config->mutex);
+ 	/* Just fire off a uevent and let userspace tell us what to do */
+ 	drm_helper_hpd_irq_event(dev);
+ }
+diff --git a/drivers/gpu/drm/radeon/radeon_mode.h b/drivers/gpu/drm/radeon/radeon_mode.h
+index fa91a17b81b6..f01c797b78cf 100644
+--- a/drivers/gpu/drm/radeon/radeon_mode.h
++++ b/drivers/gpu/drm/radeon/radeon_mode.h
+@@ -343,7 +343,6 @@ struct radeon_crtc {
+ 	int max_cursor_width;
+ 	int max_cursor_height;
+ 	uint32_t legacy_display_base_addr;
+-	uint32_t legacy_cursor_offset;
+ 	enum radeon_rmx_type rmx_type;
+ 	u8 h_border;
+ 	u8 v_border;
+diff --git a/drivers/gpu/drm/radeon/radeon_object.c b/drivers/gpu/drm/radeon/radeon_object.c
+index 318165d4855c..676362769b8d 100644
+--- a/drivers/gpu/drm/radeon/radeon_object.c
++++ b/drivers/gpu/drm/radeon/radeon_object.c
+@@ -75,7 +75,6 @@ static void radeon_ttm_bo_destroy(struct ttm_buffer_object *tbo)
+ 	bo = container_of(tbo, struct radeon_bo, tbo);
+ 
+ 	radeon_update_memory_usage(bo, bo->tbo.mem.mem_type, -1);
+-	radeon_mn_unregister(bo);
+ 
+ 	mutex_lock(&bo->rdev->gem.mutex);
+ 	list_del_init(&bo->list);
+diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
+index 4c679b802bc8..e15185b16504 100644
+--- a/drivers/gpu/drm/radeon/si.c
++++ b/drivers/gpu/drm/radeon/si.c
+@@ -6466,23 +6466,27 @@ restart_ih:
+ 		case 1: /* D1 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D1 vblank */
+-				if (rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[0]) {
+-						drm_handle_vblank(rdev->ddev, 0);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[0]))
+-						radeon_crtc_handle_vblank(rdev, 0);
+-					rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D1 vblank\n");
++				if (!(rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[0]) {
++					drm_handle_vblank(rdev->ddev, 0);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[0]))
++					radeon_crtc_handle_vblank(rdev, 0);
++				rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D1 vblank\n");
++
+ 				break;
+ 			case 1: /* D1 vline */
+-				if (rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D1 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int & LB_D1_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int &= ~LB_D1_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D1 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -6492,23 +6496,27 @@ restart_ih:
+ 		case 2: /* D2 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D2 vblank */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[1]) {
+-						drm_handle_vblank(rdev->ddev, 1);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[1]))
+-						radeon_crtc_handle_vblank(rdev, 1);
+-					rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D2 vblank\n");
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[1]) {
++					drm_handle_vblank(rdev->ddev, 1);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[1]))
++					radeon_crtc_handle_vblank(rdev, 1);
++				rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D2 vblank\n");
++
+ 				break;
+ 			case 1: /* D2 vline */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D2 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & LB_D2_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont &= ~LB_D2_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D2 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -6518,23 +6526,27 @@ restart_ih:
+ 		case 3: /* D3 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D3 vblank */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[2]) {
+-						drm_handle_vblank(rdev->ddev, 2);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[2]))
+-						radeon_crtc_handle_vblank(rdev, 2);
+-					rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D3 vblank\n");
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[2]) {
++					drm_handle_vblank(rdev->ddev, 2);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[2]))
++					radeon_crtc_handle_vblank(rdev, 2);
++				rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D3 vblank\n");
++
+ 				break;
+ 			case 1: /* D3 vline */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D3 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & LB_D3_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~LB_D3_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D3 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -6544,23 +6556,27 @@ restart_ih:
+ 		case 4: /* D4 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D4 vblank */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[3]) {
+-						drm_handle_vblank(rdev->ddev, 3);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[3]))
+-						radeon_crtc_handle_vblank(rdev, 3);
+-					rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D4 vblank\n");
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[3]) {
++					drm_handle_vblank(rdev->ddev, 3);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[3]))
++					radeon_crtc_handle_vblank(rdev, 3);
++				rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D4 vblank\n");
++
+ 				break;
+ 			case 1: /* D4 vline */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D4 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & LB_D4_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~LB_D4_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D4 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -6570,23 +6586,27 @@ restart_ih:
+ 		case 5: /* D5 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D5 vblank */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[4]) {
+-						drm_handle_vblank(rdev->ddev, 4);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[4]))
+-						radeon_crtc_handle_vblank(rdev, 4);
+-					rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D5 vblank\n");
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[4]) {
++					drm_handle_vblank(rdev->ddev, 4);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[4]))
++					radeon_crtc_handle_vblank(rdev, 4);
++				rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D5 vblank\n");
++
+ 				break;
+ 			case 1: /* D5 vline */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D5 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & LB_D5_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~LB_D5_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D5 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -6596,23 +6616,27 @@ restart_ih:
+ 		case 6: /* D6 vblank/vline */
+ 			switch (src_data) {
+ 			case 0: /* D6 vblank */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT) {
+-					if (rdev->irq.crtc_vblank_int[5]) {
+-						drm_handle_vblank(rdev->ddev, 5);
+-						rdev->pm.vblank_sync = true;
+-						wake_up(&rdev->irq.vblank_queue);
+-					}
+-					if (atomic_read(&rdev->irq.pflip[5]))
+-						radeon_crtc_handle_vblank(rdev, 5);
+-					rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT;
+-					DRM_DEBUG("IH: D6 vblank\n");
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VBLANK_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				if (rdev->irq.crtc_vblank_int[5]) {
++					drm_handle_vblank(rdev->ddev, 5);
++					rdev->pm.vblank_sync = true;
++					wake_up(&rdev->irq.vblank_queue);
+ 				}
++				if (atomic_read(&rdev->irq.pflip[5]))
++					radeon_crtc_handle_vblank(rdev, 5);
++				rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VBLANK_INTERRUPT;
++				DRM_DEBUG("IH: D6 vblank\n");
++
+ 				break;
+ 			case 1: /* D6 vline */
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VLINE_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT;
+-					DRM_DEBUG("IH: D6 vline\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & LB_D6_VLINE_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~LB_D6_VLINE_INTERRUPT;
++				DRM_DEBUG("IH: D6 vline\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
+@@ -6632,88 +6656,112 @@ restart_ih:
+ 		case 42: /* HPD hotplug */
+ 			switch (src_data) {
+ 			case 0:
+-				if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD1\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD1\n");
++
+ 				break;
+ 			case 1:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD2\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD2\n");
++
+ 				break;
+ 			case 2:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD3\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD3\n");
++
+ 				break;
+ 			case 3:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD4\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD4\n");
++
+ 				break;
+ 			case 4:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD5\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD5\n");
++
+ 				break;
+ 			case 5:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_INTERRUPT;
+-					queue_hotplug = true;
+-					DRM_DEBUG("IH: HPD6\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_INTERRUPT;
++				queue_hotplug = true;
++				DRM_DEBUG("IH: HPD6\n");
++
+ 				break;
+ 			case 6:
+-				if (rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 1\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int & DC_HPD1_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int &= ~DC_HPD1_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 1\n");
++
+ 				break;
+ 			case 7:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 2\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont & DC_HPD2_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont &= ~DC_HPD2_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 2\n");
++
+ 				break;
+ 			case 8:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 3\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont2 & DC_HPD3_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont2 &= ~DC_HPD3_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 3\n");
++
+ 				break;
+ 			case 9:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 4\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont3 & DC_HPD4_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont3 &= ~DC_HPD4_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 4\n");
++
+ 				break;
+ 			case 10:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 5\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont4 & DC_HPD5_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont4 &= ~DC_HPD5_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 5\n");
++
+ 				break;
+ 			case 11:
+-				if (rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT) {
+-					rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
+-					queue_dp = true;
+-					DRM_DEBUG("IH: HPD_RX 6\n");
+-				}
++				if (!(rdev->irq.stat_regs.evergreen.disp_int_cont5 & DC_HPD6_RX_INTERRUPT))
++					DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
++
++				rdev->irq.stat_regs.evergreen.disp_int_cont5 &= ~DC_HPD6_RX_INTERRUPT;
++				queue_dp = true;
++				DRM_DEBUG("IH: HPD_RX 6\n");
++
+ 				break;
+ 			default:
+ 				DRM_DEBUG("Unhandled interrupt: %d %d\n", src_id, src_data);
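
The long run of si.c hunks above all apply one transformation: instead of silently skipping an IH ring event whose bit in the corresponding disp_int* status register is not set, the handler now processes the event unconditionally and only emits a debug message when the bit is missing, presumably because the IH ring entry itself is authoritative and a cleared status bit should not cause a vblank, vline or hotplug event to be dropped. Reduced to a sketch with placeholder names (status, EVENT_BIT and handle_event() are not driver identifiers):

    /* before: the event is dropped if the status bit is already clear */
    if (status & EVENT_BIT) {
            handle_event();
            status &= ~EVENT_BIT;
    }

    /* after: the ring entry is trusted; the status bit is only a hint */
    if (!(status & EVENT_BIT))
            DRM_DEBUG("IH: IH event w/o asserted irq bit?\n");
    handle_event();
    status &= ~EVENT_BIT;
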
+diff --git a/drivers/gpu/drm/radeon/si_dpm.c b/drivers/gpu/drm/radeon/si_dpm.c
+index ff8b83f5e929..9dfcedec05a6 100644
+--- a/drivers/gpu/drm/radeon/si_dpm.c
++++ b/drivers/gpu/drm/radeon/si_dpm.c
+@@ -2925,6 +2925,7 @@ static struct si_dpm_quirk si_dpm_quirk_list[] = {
+ 	/* PITCAIRN - https://bugs.freedesktop.org/show_bug.cgi?id=76490 */
+ 	{ PCI_VENDOR_ID_ATI, 0x6810, 0x1462, 0x3036, 0, 120000 },
+ 	{ PCI_VENDOR_ID_ATI, 0x6811, 0x174b, 0xe271, 0, 120000 },
++	{ PCI_VENDOR_ID_ATI, 0x6810, 0x174b, 0xe271, 85000, 90000 },
+ 	{ 0, 0, 0, 0 },
+ };
+ 
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+index eb2282cc4a56..eba5f8a52fbd 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c
+@@ -54,55 +54,56 @@ static void rockchip_gem_free_buf(struct rockchip_gem_object *rk_obj)
+ 		       &rk_obj->dma_attrs);
+ }
+ 
+-int rockchip_gem_mmap_buf(struct drm_gem_object *obj,
+-			  struct vm_area_struct *vma)
++static int rockchip_drm_gem_object_mmap(struct drm_gem_object *obj,
++					struct vm_area_struct *vma)
++
+ {
++	int ret;
+ 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
+ 	struct drm_device *drm = obj->dev;
+-	unsigned long vm_size;
+ 
+-	vma->vm_flags |= VM_IO | VM_DONTEXPAND | VM_DONTDUMP;
+-	vm_size = vma->vm_end - vma->vm_start;
+-
+-	if (vm_size > obj->size)
+-		return -EINVAL;
++	/*
++	 * dma_alloc_attrs() allocated a struct page table for rk_obj, so clear
++	 * VM_PFNMAP flag that was set by drm_gem_mmap_obj()/drm_gem_mmap().
++	 */
++	vma->vm_flags &= ~VM_PFNMAP;
+ 
+-	return dma_mmap_attrs(drm->dev, vma, rk_obj->kvaddr, rk_obj->dma_addr,
++	ret = dma_mmap_attrs(drm->dev, vma, rk_obj->kvaddr, rk_obj->dma_addr,
+ 			     obj->size, &rk_obj->dma_attrs);
++	if (ret)
++		drm_gem_vm_close(vma);
++
++	return ret;
+ }
+ 
+-/* drm driver mmap file operations */
+-int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma)
++int rockchip_gem_mmap_buf(struct drm_gem_object *obj,
++			  struct vm_area_struct *vma)
+ {
+-	struct drm_file *priv = filp->private_data;
+-	struct drm_device *dev = priv->minor->dev;
+-	struct drm_gem_object *obj;
+-	struct drm_vma_offset_node *node;
++	struct drm_device *drm = obj->dev;
+ 	int ret;
+ 
+-	if (drm_device_is_unplugged(dev))
+-		return -ENODEV;
++	mutex_lock(&drm->struct_mutex);
++	ret = drm_gem_mmap_obj(obj, obj->size, vma);
++	mutex_unlock(&drm->struct_mutex);
++	if (ret)
++		return ret;
+ 
+-	mutex_lock(&dev->struct_mutex);
++	return rockchip_drm_gem_object_mmap(obj, vma);
++}
+ 
+-	node = drm_vma_offset_exact_lookup(dev->vma_offset_manager,
+-					   vma->vm_pgoff,
+-					   vma_pages(vma));
+-	if (!node) {
+-		mutex_unlock(&dev->struct_mutex);
+-		DRM_ERROR("failed to find vma node.\n");
+-		return -EINVAL;
+-	} else if (!drm_vma_node_is_allowed(node, filp)) {
+-		mutex_unlock(&dev->struct_mutex);
+-		return -EACCES;
+-	}
++/* drm driver mmap file operations */
++int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma)
++{
++	struct drm_gem_object *obj;
++	int ret;
+ 
+-	obj = container_of(node, struct drm_gem_object, vma_node);
+-	ret = rockchip_gem_mmap_buf(obj, vma);
++	ret = drm_gem_mmap(filp, vma);
++	if (ret)
++		return ret;
+ 
+-	mutex_unlock(&dev->struct_mutex);
++	obj = vma->vm_private_data;
+ 
+-	return ret;
++	return rockchip_drm_gem_object_mmap(obj, vma);
+ }
+ 
+ struct rockchip_gem_object *
+diff --git a/drivers/gpu/drm/tegra/dpaux.c b/drivers/gpu/drm/tegra/dpaux.c
+index d6b55e3e3716..a43a836e6f88 100644
+--- a/drivers/gpu/drm/tegra/dpaux.c
++++ b/drivers/gpu/drm/tegra/dpaux.c
+@@ -72,34 +72,32 @@ static inline void tegra_dpaux_writel(struct tegra_dpaux *dpaux,
+ static void tegra_dpaux_write_fifo(struct tegra_dpaux *dpaux, const u8 *buffer,
+ 				   size_t size)
+ {
+-	unsigned long offset = DPAUX_DP_AUXDATA_WRITE(0);
+ 	size_t i, j;
+ 
+-	for (i = 0; i < size; i += 4) {
+-		size_t num = min_t(size_t, size - i, 4);
++	for (i = 0; i < DIV_ROUND_UP(size, 4); i++) {
++		size_t num = min_t(size_t, size - i * 4, 4);
+ 		unsigned long value = 0;
+ 
+ 		for (j = 0; j < num; j++)
+-			value |= buffer[i + j] << (j * 8);
++			value |= buffer[i * 4 + j] << (j * 8);
+ 
+-		tegra_dpaux_writel(dpaux, value, offset++);
++		tegra_dpaux_writel(dpaux, value, DPAUX_DP_AUXDATA_WRITE(i));
+ 	}
+ }
+ 
+ static void tegra_dpaux_read_fifo(struct tegra_dpaux *dpaux, u8 *buffer,
+ 				  size_t size)
+ {
+-	unsigned long offset = DPAUX_DP_AUXDATA_READ(0);
+ 	size_t i, j;
+ 
+-	for (i = 0; i < size; i += 4) {
+-		size_t num = min_t(size_t, size - i, 4);
++	for (i = 0; i < DIV_ROUND_UP(size, 4); i++) {
++		size_t num = min_t(size_t, size - i * 4, 4);
+ 		unsigned long value;
+ 
+-		value = tegra_dpaux_readl(dpaux, offset++);
++		value = tegra_dpaux_readl(dpaux, DPAUX_DP_AUXDATA_READ(i));
+ 
+ 		for (j = 0; j < num; j++)
+-			buffer[i + j] = value >> (j * 8);
++			buffer[i * 4 + j] = value >> (j * 8);
+ 	}
+ }
+ 
+diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c
+index 7a207ca547be..6394547cf67a 100644
+--- a/drivers/gpu/drm/vgem/vgem_drv.c
++++ b/drivers/gpu/drm/vgem/vgem_drv.c
+@@ -328,6 +328,8 @@ static int __init vgem_init(void)
+ 		goto out;
+ 	}
+ 
++	drm_dev_set_unique(vgem_device, "vgem");
++
+ 	ret  = drm_dev_register(vgem_device, 0);
+ 
+ 	if (ret)
+diff --git a/drivers/hwmon/mcp3021.c b/drivers/hwmon/mcp3021.c
+index d219c06a857b..972444a14cca 100644
+--- a/drivers/hwmon/mcp3021.c
++++ b/drivers/hwmon/mcp3021.c
+@@ -31,14 +31,11 @@
+ /* output format */
+ #define MCP3021_SAR_SHIFT	2
+ #define MCP3021_SAR_MASK	0x3ff
+-
+ #define MCP3021_OUTPUT_RES	10	/* 10-bit resolution */
+-#define MCP3021_OUTPUT_SCALE	4
+ 
+ #define MCP3221_SAR_SHIFT	0
+ #define MCP3221_SAR_MASK	0xfff
+ #define MCP3221_OUTPUT_RES	12	/* 12-bit resolution */
+-#define MCP3221_OUTPUT_SCALE	1
+ 
+ enum chips {
+ 	mcp3021,
+@@ -54,7 +51,6 @@ struct mcp3021_data {
+ 	u16 sar_shift;
+ 	u16 sar_mask;
+ 	u8 output_res;
+-	u8 output_scale;
+ };
+ 
+ static int mcp3021_read16(struct i2c_client *client)
+@@ -84,13 +80,7 @@ static int mcp3021_read16(struct i2c_client *client)
+ 
+ static inline u16 volts_from_reg(struct mcp3021_data *data, u16 val)
+ {
+-	if (val == 0)
+-		return 0;
+-
+-	val = val * data->output_scale - data->output_scale / 2;
+-
+-	return val * DIV_ROUND_CLOSEST(data->vdd,
+-			(1 << data->output_res) * data->output_scale);
++	return DIV_ROUND_CLOSEST(data->vdd * val, 1 << data->output_res);
+ }
+ 
+ static ssize_t show_in_input(struct device *dev, struct device_attribute *attr,
+@@ -132,14 +122,12 @@ static int mcp3021_probe(struct i2c_client *client,
+ 		data->sar_shift = MCP3021_SAR_SHIFT;
+ 		data->sar_mask = MCP3021_SAR_MASK;
+ 		data->output_res = MCP3021_OUTPUT_RES;
+-		data->output_scale = MCP3021_OUTPUT_SCALE;
+ 		break;
+ 
+ 	case mcp3221:
+ 		data->sar_shift = MCP3221_SAR_SHIFT;
+ 		data->sar_mask = MCP3221_SAR_MASK;
+ 		data->output_res = MCP3221_OUTPUT_RES;
+-		data->output_scale = MCP3221_OUTPUT_SCALE;
+ 		break;
+ 	}
+ 
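
The mcp3021 change collapses the old scale/offset arithmetic into a single rounding division, val * vdd / 2^resolution, and drops the now-unused *_OUTPUT_SCALE constants. With assumed numbers (not from the patch): vdd = 3300 mV on a 10-bit MCP3021, a raw code of 512 gives DIV_ROUND_CLOSEST(3300 * 512, 1024) = 1650 mV, i.e. exactly half of the reference. A minimal userspace model:

    static unsigned int volts_from_reg(unsigned int vdd_mv,
                                       unsigned int val,
                                       unsigned int output_res)
    {
            unsigned int full_scale = 1u << output_res;

            /* DIV_ROUND_CLOSEST(a, b) == (a + b / 2) / b for positive a, b */
            return (vdd_mv * val + full_scale / 2) / full_scale;
    }
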
+diff --git a/drivers/hwmon/nct7802.c b/drivers/hwmon/nct7802.c
+index 55765790907b..28fcb2e246d5 100644
+--- a/drivers/hwmon/nct7802.c
++++ b/drivers/hwmon/nct7802.c
+@@ -547,7 +547,7 @@ static umode_t nct7802_temp_is_visible(struct kobject *kobj,
+ 	if (index >= 9 && index < 18 &&
+ 	    (reg & 0x0c) != 0x04 && (reg & 0x0c) != 0x08)	/* RD2 */
+ 		return 0;
+-	if (index >= 18 && index < 27 && (reg & 0x30) != 0x10)	/* RD3 */
++	if (index >= 18 && index < 27 && (reg & 0x30) != 0x20)	/* RD3 */
+ 		return 0;
+ 	if (index >= 27 && index < 35)				/* local */
+ 		return attr->mode;
+diff --git a/drivers/i2c/busses/i2c-at91.c b/drivers/i2c/busses/i2c-at91.c
+index ff23d1bdd230..9bd10a9b4b50 100644
+--- a/drivers/i2c/busses/i2c-at91.c
++++ b/drivers/i2c/busses/i2c-at91.c
+@@ -65,6 +65,9 @@
+ #define	AT91_TWI_UNRE		0x0080	/* Underrun Error */
+ #define	AT91_TWI_NACK		0x0100	/* Not Acknowledged */
+ 
++#define	AT91_TWI_INT_MASK \
++	(AT91_TWI_TXCOMP | AT91_TWI_RXRDY | AT91_TWI_TXRDY | AT91_TWI_NACK)
++
+ #define	AT91_TWI_IER		0x0024	/* Interrupt Enable Register */
+ #define	AT91_TWI_IDR		0x0028	/* Interrupt Disable Register */
+ #define	AT91_TWI_IMR		0x002c	/* Interrupt Mask Register */
+@@ -119,13 +122,12 @@ static void at91_twi_write(struct at91_twi_dev *dev, unsigned reg, unsigned val)
+ 
+ static void at91_disable_twi_interrupts(struct at91_twi_dev *dev)
+ {
+-	at91_twi_write(dev, AT91_TWI_IDR,
+-		       AT91_TWI_TXCOMP | AT91_TWI_RXRDY | AT91_TWI_TXRDY);
++	at91_twi_write(dev, AT91_TWI_IDR, AT91_TWI_INT_MASK);
+ }
+ 
+ static void at91_twi_irq_save(struct at91_twi_dev *dev)
+ {
+-	dev->imr = at91_twi_read(dev, AT91_TWI_IMR) & 0x7;
++	dev->imr = at91_twi_read(dev, AT91_TWI_IMR) & AT91_TWI_INT_MASK;
+ 	at91_disable_twi_interrupts(dev);
+ }
+ 
+@@ -215,6 +217,14 @@ static void at91_twi_write_data_dma_callback(void *data)
+ 	dma_unmap_single(dev->dev, sg_dma_address(&dev->dma.sg),
+ 			 dev->buf_len, DMA_TO_DEVICE);
+ 
++	/*
++	 * When this callback is called, THR/TX FIFO is likely not to be empty
++	 * yet. So we have to wait for TXCOMP or NACK bits to be set into the
++	 * Status Register to be sure that the STOP bit has been sent and the
++	 * transfer is completed. The NACK interrupt has already been enabled,
++	 * we just have to enable TXCOMP one.
++	 */
++	at91_twi_write(dev, AT91_TWI_IER, AT91_TWI_TXCOMP);
+ 	at91_twi_write(dev, AT91_TWI_CR, AT91_TWI_STOP);
+ }
+ 
+@@ -309,7 +319,7 @@ static void at91_twi_read_data_dma_callback(void *data)
+ 	/* The last two bytes have to be read without using dma */
+ 	dev->buf += dev->buf_len - 2;
+ 	dev->buf_len = 2;
+-	at91_twi_write(dev, AT91_TWI_IER, AT91_TWI_RXRDY);
++	at91_twi_write(dev, AT91_TWI_IER, AT91_TWI_RXRDY | AT91_TWI_TXCOMP);
+ }
+ 
+ static void at91_twi_read_data_dma(struct at91_twi_dev *dev)
+@@ -370,7 +380,7 @@ static irqreturn_t atmel_twi_interrupt(int irq, void *dev_id)
+ 	/* catch error flags */
+ 	dev->transfer_status |= status;
+ 
+-	if (irqstatus & AT91_TWI_TXCOMP) {
++	if (irqstatus & (AT91_TWI_TXCOMP | AT91_TWI_NACK)) {
+ 		at91_disable_twi_interrupts(dev);
+ 		complete(&dev->cmd_complete);
+ 	}
+@@ -384,6 +394,34 @@ static int at91_do_twi_transfer(struct at91_twi_dev *dev)
+ 	unsigned long time_left;
+ 	bool has_unre_flag = dev->pdata->has_unre_flag;
+ 
++	/*
++	 * WARNING: the TXCOMP bit in the Status Register is NOT a clear on
++	 * read flag but shows the state of the transmission at the time the
++	 * Status Register is read. According to the programmer datasheet,
++	 * TXCOMP is set when both holding register and internal shifter are
++	 * empty and STOP condition has been sent.
++	 * Consequently, we should enable NACK interrupt rather than TXCOMP to
++	 * detect transmission failure.
++	 *
++	 * Besides, the TXCOMP bit is already set before the i2c transaction
++	 * has been started. For read transactions, this bit is cleared when
++	 * writing the START bit into the Control Register. So the
++	 * corresponding interrupt can safely be enabled just after.
++	 * However for write transactions managed by the CPU, we first write
++	 * into THR, so TXCOMP is cleared. Then we can safely enable TXCOMP
++	 * interrupt. If TXCOMP interrupt were enabled before writing into THR,
++	 * the interrupt handler would be called immediately and the i2c command
++	 * would be reported as completed.
++	 * Also when a write transaction is managed by the DMA controller,
++	 * enabling the TXCOMP interrupt in this function may lead to a race
++	 * condition since we don't know whether the TXCOMP interrupt is enabled
++	 * before or after the DMA has started to write into THR. So the TXCOMP
++	 * interrupt is enabled later by at91_twi_write_data_dma_callback().
++	 * Immediately after in that DMA callback, we still need to send the
++	 * STOP condition manually writing the corresponding bit into the
++	 * Control Register.
++	 */
++
+ 	dev_dbg(dev->dev, "transfer: %s %d bytes.\n",
+ 		(dev->msg->flags & I2C_M_RD) ? "read" : "write", dev->buf_len);
+ 
+@@ -414,26 +452,24 @@ static int at91_do_twi_transfer(struct at91_twi_dev *dev)
+ 		 * seems to be the best solution.
+ 		 */
+ 		if (dev->use_dma && (dev->buf_len > AT91_I2C_DMA_THRESHOLD)) {
++			at91_twi_write(dev, AT91_TWI_IER, AT91_TWI_NACK);
+ 			at91_twi_read_data_dma(dev);
+-			/*
+-			 * It is important to enable TXCOMP irq here because
+-			 * doing it only when transferring the last two bytes
+-			 * will mask NACK errors since TXCOMP is set when a
+-			 * NACK occurs.
+-			 */
+-			at91_twi_write(dev, AT91_TWI_IER,
+-			       AT91_TWI_TXCOMP);
+-		} else
++		} else {
+ 			at91_twi_write(dev, AT91_TWI_IER,
+-			       AT91_TWI_TXCOMP | AT91_TWI_RXRDY);
++				       AT91_TWI_TXCOMP |
++				       AT91_TWI_NACK |
++				       AT91_TWI_RXRDY);
++		}
+ 	} else {
+ 		if (dev->use_dma && (dev->buf_len > AT91_I2C_DMA_THRESHOLD)) {
++			at91_twi_write(dev, AT91_TWI_IER, AT91_TWI_NACK);
+ 			at91_twi_write_data_dma(dev);
+-			at91_twi_write(dev, AT91_TWI_IER, AT91_TWI_TXCOMP);
+ 		} else {
+ 			at91_twi_write_next_byte(dev);
+ 			at91_twi_write(dev, AT91_TWI_IER,
+-				AT91_TWI_TXCOMP | AT91_TWI_TXRDY);
++				       AT91_TWI_TXCOMP |
++				       AT91_TWI_NACK |
++				       AT91_TWI_TXRDY);
+ 		}
+ 	}
+ 
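
The at91 i2c hunks all revolve around when TXCOMP may safely be enabled, as explained in the long comment added to at91_do_twi_transfer(). A condensed summary of the resulting interrupt-enable ordering (a restatement of the hunks above, not additional driver API):

    /*
     * read,  PIO:  IER = TXCOMP | NACK | RXRDY before the transfer starts
     * read,  DMA:  IER = NACK, start RX DMA; the DMA callback switches to
     *              PIO for the last two bytes and adds RXRDY | TXCOMP
     * write, PIO:  write the first byte to THR (this clears TXCOMP), then
     *              IER = TXCOMP | NACK | TXRDY
     * write, DMA:  IER = NACK, start TX DMA; the DMA callback adds TXCOMP
     *              and only then writes the STOP bit
     */
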
+diff --git a/drivers/i2c/i2c-mux.c b/drivers/i2c/i2c-mux.c
+index 06cc1ff088f1..2ba7c0fbc615 100644
+--- a/drivers/i2c/i2c-mux.c
++++ b/drivers/i2c/i2c-mux.c
+@@ -51,7 +51,7 @@ static int i2c_mux_master_xfer(struct i2c_adapter *adap,
+ 
+ 	ret = priv->select(parent, priv->mux_priv, priv->chan_id);
+ 	if (ret >= 0)
+-		ret = parent->algo->master_xfer(parent, msgs, num);
++		ret = __i2c_transfer(parent, msgs, num);
+ 	if (priv->deselect)
+ 		priv->deselect(parent, priv->mux_priv, priv->chan_id);
+ 
+@@ -144,6 +144,7 @@ struct i2c_adapter *i2c_add_mux_adapter(struct i2c_adapter *parent,
+ 	priv->adap.dev.parent = &parent->dev;
+ 	priv->adap.retries = parent->retries;
+ 	priv->adap.timeout = parent->timeout;
++	priv->adap.quirks = parent->quirks;
+ 
+ 	/* Sanity check on class */
+ 	if (i2c_mux_parent_classes(parent) & class)
+diff --git a/drivers/i2c/muxes/i2c-mux-pca9541.c b/drivers/i2c/muxes/i2c-mux-pca9541.c
+index cb772775da43..0c8d4d2cbdaf 100644
+--- a/drivers/i2c/muxes/i2c-mux-pca9541.c
++++ b/drivers/i2c/muxes/i2c-mux-pca9541.c
+@@ -104,7 +104,7 @@ static int pca9541_reg_write(struct i2c_client *client, u8 command, u8 val)
+ 		buf[0] = command;
+ 		buf[1] = val;
+ 		msg.buf = buf;
+-		ret = adap->algo->master_xfer(adap, &msg, 1);
++		ret = __i2c_transfer(adap, &msg, 1);
+ 	} else {
+ 		union i2c_smbus_data data;
+ 
+@@ -144,7 +144,7 @@ static int pca9541_reg_read(struct i2c_client *client, u8 command)
+ 				.buf = &val
+ 			}
+ 		};
+-		ret = adap->algo->master_xfer(adap, msg, 2);
++		ret = __i2c_transfer(adap, msg, 2);
+ 		if (ret == 2)
+ 			ret = val;
+ 		else if (ret >= 0)
+diff --git a/drivers/i2c/muxes/i2c-mux-pca954x.c b/drivers/i2c/muxes/i2c-mux-pca954x.c
+index bea0d2de2993..ea4aa9dfcea9 100644
+--- a/drivers/i2c/muxes/i2c-mux-pca954x.c
++++ b/drivers/i2c/muxes/i2c-mux-pca954x.c
+@@ -134,7 +134,7 @@ static int pca954x_reg_write(struct i2c_adapter *adap,
+ 		msg.len = 1;
+ 		buf[0] = val;
+ 		msg.buf = buf;
+-		ret = adap->algo->master_xfer(adap, &msg, 1);
++		ret = __i2c_transfer(adap, &msg, 1);
+ 	} else {
+ 		union i2c_smbus_data data;
+ 		ret = adap->algo->smbus_xfer(adap, client->addr,
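
The three mux-related hunks switch from calling the parent adapter's ->master_xfer() hook directly to __i2c_transfer(), and copy the parent's quirks into the mux adapter. As I read the i2c core, __i2c_transfer() is the unlocked variant of i2c_transfer(): the caller must already hold the adapter lock (true in these select/deselect paths), and in exchange the core applies the adapter's quirk validation and retry handling, which a raw ->master_xfer() call bypasses. Side by side:

    /* old: bypasses quirk checks and retries */
    ret = adap->algo->master_xfer(adap, msgs, num);

    /* new: goes through the core; caller already holds the adapter lock */
    ret = __i2c_transfer(adap, msgs, num);
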
+diff --git a/drivers/iio/accel/bmc150-accel.c b/drivers/iio/accel/bmc150-accel.c
+index 73e87739d219..bf827d012a71 100644
+--- a/drivers/iio/accel/bmc150-accel.c
++++ b/drivers/iio/accel/bmc150-accel.c
+@@ -1465,7 +1465,7 @@ static void bmc150_accel_unregister_triggers(struct bmc150_accel_data *data,
+ {
+ 	int i;
+ 
+-	for (i = from; i >= 0; i++) {
++	for (i = from; i >= 0; i--) {
+ 		if (data->triggers[i].indio_trig) {
+ 			iio_trigger_unregister(data->triggers[i].indio_trig);
+ 			data->triggers[i].indio_trig = NULL;
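
The bmc150 fix is a one-character loop-direction bug: unregistering triggers from index `from` down to 0 must decrement, otherwise the loop walks past the end of the array (and with the `i >= 0` condition it effectively never terminates). The usual teardown idiom, as a standalone sketch with hypothetical names (struct thing, do_unregister):

    static void unregister_from(struct thing *items, int from)
    {
            int i;

            /* unwind everything registered so far, in reverse order */
            for (i = from; i >= 0; i--) {
                    if (items[i].registered) {
                            do_unregister(&items[i]);
                            items[i].registered = false;
                    }
            }
    }
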
+diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig
+index e36a73e7c3a8..1bcb65b8d4a1 100644
+--- a/drivers/iio/adc/Kconfig
++++ b/drivers/iio/adc/Kconfig
+@@ -146,8 +146,7 @@ config DA9150_GPADC
+ 
+ config CC10001_ADC
+ 	tristate "Cosmic Circuits 10001 ADC driver"
+-	depends on HAVE_CLK || REGULATOR
+-	depends on HAS_IOMEM
++	depends on HAS_IOMEM && HAVE_CLK && REGULATOR
+ 	select IIO_BUFFER
+ 	select IIO_TRIGGERED_BUFFER
+ 	help
+diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c
+index 8a0eb4a04fb5..7b40925dd4ff 100644
+--- a/drivers/iio/adc/at91_adc.c
++++ b/drivers/iio/adc/at91_adc.c
+@@ -182,7 +182,7 @@ struct at91_adc_caps {
+ 	u8	ts_pen_detect_sensitivity;
+ 
+ 	/* startup time calculate function */
+-	u32 (*calc_startup_ticks)(u8 startup_time, u32 adc_clk_khz);
++	u32 (*calc_startup_ticks)(u32 startup_time, u32 adc_clk_khz);
+ 
+ 	u8	num_channels;
+ 	struct at91_adc_reg_desc registers;
+@@ -201,7 +201,7 @@ struct at91_adc_state {
+ 	u8			num_channels;
+ 	void __iomem		*reg_base;
+ 	struct at91_adc_reg_desc *registers;
+-	u8			startup_time;
++	u32			startup_time;
+ 	u8			sample_hold_time;
+ 	bool			sleep_mode;
+ 	struct iio_trigger	**trig;
+@@ -779,7 +779,7 @@ ret:
+ 	return ret;
+ }
+ 
+-static u32 calc_startup_ticks_9260(u8 startup_time, u32 adc_clk_khz)
++static u32 calc_startup_ticks_9260(u32 startup_time, u32 adc_clk_khz)
+ {
+ 	/*
+ 	 * Number of ticks needed to cover the startup time of the ADC
+@@ -790,7 +790,7 @@ static u32 calc_startup_ticks_9260(u8 startup_time, u32 adc_clk_khz)
+ 	return round_up((startup_time * adc_clk_khz / 1000) - 1, 8) / 8;
+ }
+ 
+-static u32 calc_startup_ticks_9x5(u8 startup_time, u32 adc_clk_khz)
++static u32 calc_startup_ticks_9x5(u32 startup_time, u32 adc_clk_khz)
+ {
+ 	/*
+ 	 * For sama5d3x and at91sam9x5, the formula changes to:
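
Widening startup_time from u8 to u32 matters because the value comes from the device tree in microseconds (as I read the driver) and can exceed 255, which a u8 silently truncated. The 9260 tick formula is unchanged; with assumed numbers, startup_time = 40 µs at adc_clk_khz = 5000 gives 40 * 5000 / 1000 = 200 clock ticks, and round_up(200 - 1, 8) / 8 = 25 for the STARTUP field. A standalone model:

    #include <stdint.h>

    /* userspace model of calc_startup_ticks_9260(); the register field
     * counts startup time in units of 8 ADC clock periods */
    static uint32_t calc_startup_ticks_9260(uint32_t startup_time_us,
                                            uint32_t adc_clk_khz)
    {
            uint32_t ticks = startup_time_us * adc_clk_khz / 1000;

            /* round_up(ticks - 1, 8) / 8 */
            return ((ticks - 1 + 7) / 8 * 8) / 8;
    }
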
+diff --git a/drivers/iio/adc/rockchip_saradc.c b/drivers/iio/adc/rockchip_saradc.c
+index 8d4e019ea4ca..9c311c1e1ac7 100644
+--- a/drivers/iio/adc/rockchip_saradc.c
++++ b/drivers/iio/adc/rockchip_saradc.c
+@@ -349,3 +349,7 @@ static struct platform_driver rockchip_saradc_driver = {
+ };
+ 
+ module_platform_driver(rockchip_saradc_driver);
++
++MODULE_AUTHOR("Heiko Stuebner <heiko@sntech.de>");
++MODULE_DESCRIPTION("Rockchip SARADC driver");
++MODULE_LICENSE("GPL v2");
+diff --git a/drivers/iio/adc/twl4030-madc.c b/drivers/iio/adc/twl4030-madc.c
+index 94c5f05b4bc1..4caecbea4c97 100644
+--- a/drivers/iio/adc/twl4030-madc.c
++++ b/drivers/iio/adc/twl4030-madc.c
+@@ -835,7 +835,8 @@ static int twl4030_madc_probe(struct platform_device *pdev)
+ 	irq = platform_get_irq(pdev, 0);
+ 	ret = devm_request_threaded_irq(&pdev->dev, irq, NULL,
+ 				   twl4030_madc_threaded_irq_handler,
+-				   IRQF_TRIGGER_RISING, "twl4030_madc", madc);
++				   IRQF_TRIGGER_RISING | IRQF_ONESHOT,
++				   "twl4030_madc", madc);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "could not request irq\n");
+ 		goto err_i2c;
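
The twl4030-madc fix adds IRQF_ONESHOT: when a threaded handler is requested with a NULL primary handler, the genirq core requires IRQF_ONESHOT so the line stays masked until the thread has run; without it the request is rejected on current kernels and the probe fails. Minimal shape of such a request, with generic names:

    /* NULL primary handler => IRQF_ONESHOT is mandatory */
    ret = devm_request_threaded_irq(dev, irq, NULL, thread_fn,
                                    IRQF_TRIGGER_RISING | IRQF_ONESHOT,
                                    "my-device", data);
    if (ret)
            dev_err(dev, "could not request irq\n");
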
+diff --git a/drivers/iio/common/hid-sensors/hid-sensor-trigger.c b/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
+index 610fc98f88ef..595511022795 100644
+--- a/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
++++ b/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
+@@ -36,6 +36,8 @@ static int _hid_sensor_power_state(struct hid_sensor_common *st, bool state)
+ 	s32 poll_value = 0;
+ 
+ 	if (state) {
++		if (!atomic_read(&st->user_requested_state))
++			return 0;
+ 		if (sensor_hub_device_open(st->hsdev))
+ 			return -EIO;
+ 
+@@ -52,8 +54,12 @@ static int _hid_sensor_power_state(struct hid_sensor_common *st, bool state)
+ 
+ 		poll_value = hid_sensor_read_poll_value(st);
+ 	} else {
+-		if (!atomic_dec_and_test(&st->data_ready))
++		int val;
++
++		val = atomic_dec_if_positive(&st->data_ready);
++		if (val < 0)
+ 			return 0;
++
+ 		sensor_hub_device_close(st->hsdev);
+ 		state_val = hid_sensor_get_usage_index(st->hsdev,
+ 			st->power_state.report_id,
+@@ -92,9 +98,11 @@ EXPORT_SYMBOL(hid_sensor_power_state);
+ 
+ int hid_sensor_power_state(struct hid_sensor_common *st, bool state)
+ {
++
+ #ifdef CONFIG_PM
+ 	int ret;
+ 
++	atomic_set(&st->user_requested_state, state);
+ 	if (state)
+ 		ret = pm_runtime_get_sync(&st->pdev->dev);
+ 	else {
+@@ -109,6 +117,7 @@ int hid_sensor_power_state(struct hid_sensor_common *st, bool state)
+ 
+  	return 0;
+ #else
++	atomic_set(&st->user_requested_state, state);
+ 	return _hid_sensor_power_state(st, state);
+ #endif
+ }
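
The switch from atomic_dec_and_test() to atomic_dec_if_positive() guards against the power-off path running more often than the power-on path: the counter is only decremented while it is still positive, so an extra "off" request can no longer push data_ready negative and leave later on/off cycles out of sync. The semantics, in a short sketch:

    int val;

    /* decrements only while the counter is > 0; returns the new value,
     * or a negative number when no decrement happened */
    val = atomic_dec_if_positive(&counter);
    if (val < 0)
            return 0;       /* already powered off: nothing to close */

    close_device();         /* hypothetical teardown for the last/extra user */
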
+diff --git a/drivers/iio/dac/ad5624r_spi.c b/drivers/iio/dac/ad5624r_spi.c
+index 61bb9d4239ea..e98428df0d44 100644
+--- a/drivers/iio/dac/ad5624r_spi.c
++++ b/drivers/iio/dac/ad5624r_spi.c
+@@ -22,7 +22,7 @@
+ #include "ad5624r.h"
+ 
+ static int ad5624r_spi_write(struct spi_device *spi,
+-			     u8 cmd, u8 addr, u16 val, u8 len)
++			     u8 cmd, u8 addr, u16 val, u8 shift)
+ {
+ 	u32 data;
+ 	u8 msg[3];
+@@ -35,7 +35,7 @@ static int ad5624r_spi_write(struct spi_device *spi,
+ 	 * 14-, 12-bit input code followed by 0, 2, or 4 don't care bits,
+ 	 * for the AD5664R, AD5644R, and AD5624R, respectively.
+ 	 */
+-	data = (0 << 22) | (cmd << 19) | (addr << 16) | (val << (16 - len));
++	data = (0 << 22) | (cmd << 19) | (addr << 16) | (val << shift);
+ 	msg[0] = data >> 16;
+ 	msg[1] = data >> 8;
+ 	msg[2] = data;
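
The ad5624r change renames the last parameter from a bit length to an explicit shift: the 16-bit data field of the 24-bit SPI frame is left-justified, so a 12-bit part shifts its code up by 4, a 14-bit part by 2 and a 16-bit part not at all, matching the "don't care bits" comment above. A worked example with assumed values: cmd = 0x3, addr = 0x0, a 12-bit code of 0xABC and shift = 4 gives (0x3 << 19) | (0xABC << 4) = 0x18ABC0, sent on the wire as the bytes 0x18 0xAB 0xC0. The frame layout as a standalone helper:

    #include <stdint.h>

    /* model of the 24-bit frame built by ad5624r_spi_write() */
    static uint32_t ad5624r_frame(uint8_t cmd, uint8_t addr,
                                  uint16_t val, uint8_t shift)
    {
            return ((uint32_t)cmd << 19) | ((uint32_t)addr << 16) |
                   ((uint32_t)val << shift);
    }
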
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+index 17d4bb15be4d..65ce86837177 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+@@ -431,6 +431,23 @@ static int inv_mpu6050_write_gyro_scale(struct inv_mpu6050_state *st, int val)
+ 	return -EINVAL;
+ }
+ 
++static int inv_write_raw_get_fmt(struct iio_dev *indio_dev,
++				 struct iio_chan_spec const *chan, long mask)
++{
++	switch (mask) {
++	case IIO_CHAN_INFO_SCALE:
++		switch (chan->type) {
++		case IIO_ANGL_VEL:
++			return IIO_VAL_INT_PLUS_NANO;
++		default:
++			return IIO_VAL_INT_PLUS_MICRO;
++		}
++	default:
++		return IIO_VAL_INT_PLUS_MICRO;
++	}
++
++	return -EINVAL;
++}
+ static int inv_mpu6050_write_accel_scale(struct inv_mpu6050_state *st, int val)
+ {
+ 	int result, i;
+@@ -696,6 +713,7 @@ static const struct iio_info mpu_info = {
+ 	.driver_module = THIS_MODULE,
+ 	.read_raw = &inv_mpu6050_read_raw,
+ 	.write_raw = &inv_mpu6050_write_raw,
++	.write_raw_get_fmt = &inv_write_raw_get_fmt,
+ 	.attrs = &inv_attribute_group,
+ 	.validate_trigger = inv_mpu6050_validate_trigger,
+ };
+diff --git a/drivers/iio/light/cm3323.c b/drivers/iio/light/cm3323.c
+index 869033e48a1f..a1d4905cc9d2 100644
+--- a/drivers/iio/light/cm3323.c
++++ b/drivers/iio/light/cm3323.c
+@@ -123,7 +123,7 @@ static int cm3323_set_it_bits(struct cm3323_data *data, int val, int val2)
+ 	for (i = 0; i < ARRAY_SIZE(cm3323_int_time); i++) {
+ 		if (val == cm3323_int_time[i].val &&
+ 		    val2 == cm3323_int_time[i].val2) {
+-			reg_conf = data->reg_conf;
++			reg_conf = data->reg_conf & ~CM3323_CONF_IT_MASK;
+ 			reg_conf |= i << CM3323_CONF_IT_SHIFT;
+ 
+ 			ret = i2c_smbus_write_word_data(data->client,
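
The cm3323 fix is the classic read-modify-write omission: the new integration-time index was OR-ed into reg_conf without first clearing the old CM3323_CONF_IT_MASK bits, so moving from a higher index to a lower one could leave stale bits set. The general idiom, with an assumed 3-bit field at an assumed shift (generic names, not the driver's):

    #include <stdint.h>

    #define FIELD_SHIFT 2
    #define FIELD_MASK  (0x7u << FIELD_SHIFT)

    static uint16_t update_field(uint16_t reg, unsigned int idx)
    {
            reg &= ~FIELD_MASK;                         /* clear old value first */
            reg |= (idx << FIELD_SHIFT) & FIELD_MASK;   /* then OR in the new one */
            return reg;
    }
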
+diff --git a/drivers/iio/light/tcs3414.c b/drivers/iio/light/tcs3414.c
+index 71c2bde275aa..f8b1df018abe 100644
+--- a/drivers/iio/light/tcs3414.c
++++ b/drivers/iio/light/tcs3414.c
+@@ -185,7 +185,7 @@ static int tcs3414_write_raw(struct iio_dev *indio_dev,
+ 		if (val != 0)
+ 			return -EINVAL;
+ 		for (i = 0; i < ARRAY_SIZE(tcs3414_times); i++) {
+-			if (val == tcs3414_times[i] * 1000) {
++			if (val2 == tcs3414_times[i] * 1000) {
+ 				data->timing &= ~TCS3414_INTEG_MASK;
+ 				data->timing |= i;
+ 				return i2c_smbus_write_byte_data(
+diff --git a/drivers/iio/proximity/sx9500.c b/drivers/iio/proximity/sx9500.c
+index fa40f6d0ca39..bd26a484abcc 100644
+--- a/drivers/iio/proximity/sx9500.c
++++ b/drivers/iio/proximity/sx9500.c
+@@ -206,7 +206,7 @@ static int sx9500_read_proximity(struct sx9500_data *data,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	*val = 32767 - (s16)be16_to_cpu(regval);
++	*val = be16_to_cpu(regval);
+ 
+ 	return IIO_VAL_INT;
+ }
+diff --git a/drivers/iio/temperature/tmp006.c b/drivers/iio/temperature/tmp006.c
+index 84a0789c3d96..7a8050996b4e 100644
+--- a/drivers/iio/temperature/tmp006.c
++++ b/drivers/iio/temperature/tmp006.c
+@@ -132,6 +132,9 @@ static int tmp006_write_raw(struct iio_dev *indio_dev,
+ 	struct tmp006_data *data = iio_priv(indio_dev);
+ 	int i;
+ 
++	if (mask != IIO_CHAN_INFO_SAMP_FREQ)
++		return -EINVAL;
++
+ 	for (i = 0; i < ARRAY_SIZE(tmp006_freqs); i++)
+ 		if ((val == tmp006_freqs[i][0]) &&
+ 		    (val2 == tmp006_freqs[i][1])) {
+diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+index 9dcb66077d6c..219f2122f9b9 100644
+--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
++++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+@@ -679,7 +679,6 @@ err:
+ 		ocrdma_release_ucontext_pd(uctx);
+ 	} else {
+ 		status = _ocrdma_dealloc_pd(dev, pd);
+-		kfree(pd);
+ 	}
+ exit:
+ 	return ERR_PTR(status);
+diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c
+index 135a0907e9de..c90118e90708 100644
+--- a/drivers/md/bitmap.c
++++ b/drivers/md/bitmap.c
+@@ -494,7 +494,7 @@ static int bitmap_new_disk_sb(struct bitmap *bitmap)
+ 	bitmap_super_t *sb;
+ 	unsigned long chunksize, daemon_sleep, write_behind;
+ 
+-	bitmap->storage.sb_page = alloc_page(GFP_KERNEL);
++	bitmap->storage.sb_page = alloc_page(GFP_KERNEL | __GFP_ZERO);
+ 	if (bitmap->storage.sb_page == NULL)
+ 		return -ENOMEM;
+ 	bitmap->storage.sb_page->index = 0;
+@@ -541,6 +541,7 @@ static int bitmap_new_disk_sb(struct bitmap *bitmap)
+ 	sb->state = cpu_to_le32(bitmap->flags);
+ 	bitmap->events_cleared = bitmap->mddev->events;
+ 	sb->events_cleared = cpu_to_le64(bitmap->mddev->events);
++	bitmap->mddev->bitmap_info.nodes = 0;
+ 
+ 	kunmap_atomic(sb);
+ 
+@@ -611,8 +612,16 @@ re_read:
+ 	daemon_sleep = le32_to_cpu(sb->daemon_sleep) * HZ;
+ 	write_behind = le32_to_cpu(sb->write_behind);
+ 	sectors_reserved = le32_to_cpu(sb->sectors_reserved);
+-	nodes = le32_to_cpu(sb->nodes);
+-	strlcpy(bitmap->mddev->bitmap_info.cluster_name, sb->cluster_name, 64);
++	/* XXX: This is a hack to ensure that we don't use clustering
++	 *  in case:
++	 *	- dm-raid is in use and
++	 *	- the nodes written in bitmap_sb is erroneous.
++	 */
++	if (!bitmap->mddev->sync_super) {
++		nodes = le32_to_cpu(sb->nodes);
++		strlcpy(bitmap->mddev->bitmap_info.cluster_name,
++				sb->cluster_name, 64);
++	}
+ 
+ 	/* verify that the bitmap-specific fields are valid */
+ 	if (sb->magic != cpu_to_le32(BITMAP_MAGIC))
+diff --git a/drivers/md/dm-cache-policy-cleaner.c b/drivers/md/dm-cache-policy-cleaner.c
+index b04d1f904d07..004e463c9423 100644
+--- a/drivers/md/dm-cache-policy-cleaner.c
++++ b/drivers/md/dm-cache-policy-cleaner.c
+@@ -171,7 +171,8 @@ static void remove_cache_hash_entry(struct wb_cache_entry *e)
+ /* Public interface (see dm-cache-policy.h */
+ static int wb_map(struct dm_cache_policy *pe, dm_oblock_t oblock,
+ 		  bool can_block, bool can_migrate, bool discarded_oblock,
+-		  struct bio *bio, struct policy_result *result)
++		  struct bio *bio, struct policy_locker *locker,
++		  struct policy_result *result)
+ {
+ 	struct policy *p = to_policy(pe);
+ 	struct wb_cache_entry *e;
+diff --git a/drivers/md/dm-cache-policy-internal.h b/drivers/md/dm-cache-policy-internal.h
+index 2256a1f24f73..c198e6defb9c 100644
+--- a/drivers/md/dm-cache-policy-internal.h
++++ b/drivers/md/dm-cache-policy-internal.h
+@@ -16,9 +16,10 @@
+  */
+ static inline int policy_map(struct dm_cache_policy *p, dm_oblock_t oblock,
+ 			     bool can_block, bool can_migrate, bool discarded_oblock,
+-			     struct bio *bio, struct policy_result *result)
++			     struct bio *bio, struct policy_locker *locker,
++			     struct policy_result *result)
+ {
+-	return p->map(p, oblock, can_block, can_migrate, discarded_oblock, bio, result);
++	return p->map(p, oblock, can_block, can_migrate, discarded_oblock, bio, locker, result);
+ }
+ 
+ static inline int policy_lookup(struct dm_cache_policy *p, dm_oblock_t oblock, dm_cblock_t *cblock)
+diff --git a/drivers/md/dm-cache-policy-mq.c b/drivers/md/dm-cache-policy-mq.c
+index 3ddd1162334d..515d44bf24d3 100644
+--- a/drivers/md/dm-cache-policy-mq.c
++++ b/drivers/md/dm-cache-policy-mq.c
+@@ -693,9 +693,10 @@ static void requeue(struct mq_policy *mq, struct entry *e)
+  * - set the hit count to a hard coded value other than 1, eg, is it better
+  *   if it goes in at level 2?
+  */
+-static int demote_cblock(struct mq_policy *mq, dm_oblock_t *oblock)
++static int demote_cblock(struct mq_policy *mq,
++			 struct policy_locker *locker, dm_oblock_t *oblock)
+ {
+-	struct entry *demoted = pop(mq, &mq->cache_clean);
++	struct entry *demoted = peek(&mq->cache_clean);
+ 
+ 	if (!demoted)
+ 		/*
+@@ -707,6 +708,13 @@ static int demote_cblock(struct mq_policy *mq, dm_oblock_t *oblock)
+ 		 */
+ 		return -ENOSPC;
+ 
++	if (locker->fn(locker, demoted->oblock))
++		/*
++		 * We couldn't lock the demoted block.
++		 */
++		return -EBUSY;
++
++	del(mq, demoted);
+ 	*oblock = demoted->oblock;
+ 	free_entry(&mq->cache_pool, demoted);
+ 
+@@ -795,6 +803,7 @@ static int cache_entry_found(struct mq_policy *mq,
+  * finding which cache block to use.
+  */
+ static int pre_cache_to_cache(struct mq_policy *mq, struct entry *e,
++			      struct policy_locker *locker,
+ 			      struct policy_result *result)
+ {
+ 	int r;
+@@ -803,11 +812,12 @@ static int pre_cache_to_cache(struct mq_policy *mq, struct entry *e,
+ 	/* Ensure there's a free cblock in the cache */
+ 	if (epool_empty(&mq->cache_pool)) {
+ 		result->op = POLICY_REPLACE;
+-		r = demote_cblock(mq, &result->old_oblock);
++		r = demote_cblock(mq, locker, &result->old_oblock);
+ 		if (r) {
+ 			result->op = POLICY_MISS;
+ 			return 0;
+ 		}
++
+ 	} else
+ 		result->op = POLICY_NEW;
+ 
+@@ -829,7 +839,8 @@ static int pre_cache_to_cache(struct mq_policy *mq, struct entry *e,
+ 
+ static int pre_cache_entry_found(struct mq_policy *mq, struct entry *e,
+ 				 bool can_migrate, bool discarded_oblock,
+-				 int data_dir, struct policy_result *result)
++				 int data_dir, struct policy_locker *locker,
++				 struct policy_result *result)
+ {
+ 	int r = 0;
+ 
+@@ -842,7 +853,7 @@ static int pre_cache_entry_found(struct mq_policy *mq, struct entry *e,
+ 
+ 	else {
+ 		requeue(mq, e);
+-		r = pre_cache_to_cache(mq, e, result);
++		r = pre_cache_to_cache(mq, e, locker, result);
+ 	}
+ 
+ 	return r;
+@@ -872,6 +883,7 @@ static void insert_in_pre_cache(struct mq_policy *mq,
+ }
+ 
+ static void insert_in_cache(struct mq_policy *mq, dm_oblock_t oblock,
++			    struct policy_locker *locker,
+ 			    struct policy_result *result)
+ {
+ 	int r;
+@@ -879,7 +891,7 @@ static void insert_in_cache(struct mq_policy *mq, dm_oblock_t oblock,
+ 
+ 	if (epool_empty(&mq->cache_pool)) {
+ 		result->op = POLICY_REPLACE;
+-		r = demote_cblock(mq, &result->old_oblock);
++		r = demote_cblock(mq, locker, &result->old_oblock);
+ 		if (unlikely(r)) {
+ 			result->op = POLICY_MISS;
+ 			insert_in_pre_cache(mq, oblock);
+@@ -907,11 +919,12 @@ static void insert_in_cache(struct mq_policy *mq, dm_oblock_t oblock,
+ 
+ static int no_entry_found(struct mq_policy *mq, dm_oblock_t oblock,
+ 			  bool can_migrate, bool discarded_oblock,
+-			  int data_dir, struct policy_result *result)
++			  int data_dir, struct policy_locker *locker,
++			  struct policy_result *result)
+ {
+ 	if (adjusted_promote_threshold(mq, discarded_oblock, data_dir) <= 1) {
+ 		if (can_migrate)
+-			insert_in_cache(mq, oblock, result);
++			insert_in_cache(mq, oblock, locker, result);
+ 		else
+ 			return -EWOULDBLOCK;
+ 	} else {
+@@ -928,7 +941,8 @@ static int no_entry_found(struct mq_policy *mq, dm_oblock_t oblock,
+  */
+ static int map(struct mq_policy *mq, dm_oblock_t oblock,
+ 	       bool can_migrate, bool discarded_oblock,
+-	       int data_dir, struct policy_result *result)
++	       int data_dir, struct policy_locker *locker,
++	       struct policy_result *result)
+ {
+ 	int r = 0;
+ 	struct entry *e = hash_lookup(mq, oblock);
+@@ -942,11 +956,11 @@ static int map(struct mq_policy *mq, dm_oblock_t oblock,
+ 
+ 	else if (e)
+ 		r = pre_cache_entry_found(mq, e, can_migrate, discarded_oblock,
+-					  data_dir, result);
++					  data_dir, locker, result);
+ 
+ 	else
+ 		r = no_entry_found(mq, oblock, can_migrate, discarded_oblock,
+-				   data_dir, result);
++				   data_dir, locker, result);
+ 
+ 	if (r == -EWOULDBLOCK)
+ 		result->op = POLICY_MISS;
+@@ -1012,7 +1026,8 @@ static void copy_tick(struct mq_policy *mq)
+ 
+ static int mq_map(struct dm_cache_policy *p, dm_oblock_t oblock,
+ 		  bool can_block, bool can_migrate, bool discarded_oblock,
+-		  struct bio *bio, struct policy_result *result)
++		  struct bio *bio, struct policy_locker *locker,
++		  struct policy_result *result)
+ {
+ 	int r;
+ 	struct mq_policy *mq = to_mq_policy(p);
+@@ -1028,7 +1043,7 @@ static int mq_map(struct dm_cache_policy *p, dm_oblock_t oblock,
+ 
+ 	iot_examine_bio(&mq->tracker, bio);
+ 	r = map(mq, oblock, can_migrate, discarded_oblock,
+-		bio_data_dir(bio), result);
++		bio_data_dir(bio), locker, result);
+ 
+ 	mutex_unlock(&mq->lock);
+ 
+diff --git a/drivers/md/dm-cache-policy.h b/drivers/md/dm-cache-policy.h
+index f50fe360c546..5524e21e4836 100644
+--- a/drivers/md/dm-cache-policy.h
++++ b/drivers/md/dm-cache-policy.h
+@@ -70,6 +70,18 @@ enum policy_operation {
+ };
+ 
+ /*
++ * When issuing a POLICY_REPLACE the policy needs to make a callback to
++ * lock the block being demoted.  This doesn't need to occur during a
++ * writeback operation since the block remains in the cache.
++ */
++struct policy_locker;
++typedef int (*policy_lock_fn)(struct policy_locker *l, dm_oblock_t oblock);
++
++struct policy_locker {
++	policy_lock_fn fn;
++};
++
++/*
+  * This is the instruction passed back to the core target.
+  */
+ struct policy_result {
+@@ -122,7 +134,8 @@ struct dm_cache_policy {
+ 	 */
+ 	int (*map)(struct dm_cache_policy *p, dm_oblock_t oblock,
+ 		   bool can_block, bool can_migrate, bool discarded_oblock,
+-		   struct bio *bio, struct policy_result *result);
++		   struct bio *bio, struct policy_locker *locker,
++		   struct policy_result *result);
+ 
+ 	/*
+ 	 * Sometimes we want to see if a block is in the cache, without
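
The new policy_locker is a minimal callback object: the core target embeds it in a larger structure (old_oblock_lock in dm-cache-target.c, with cell_locker and null_locker as the two implementations), and the policy calls back through fn before demoting a block, so the target can try to detain the old block and veto the demotion with a non-zero return. The embedding uses the usual container_of pattern; a generic sketch outside the dm-cache code, where try_lock_block() is a hypothetical helper:

    struct my_locker {
            struct policy_locker locker;    /* must be embedded, not pointed to */
            void *ctx;                      /* whatever the caller needs */
    };

    static int my_lock_fn(struct policy_locker *l, dm_oblock_t oblock)
    {
            struct my_locker *ml = container_of(l, struct my_locker, locker);

            /* non-zero tells the policy the old block could not be locked,
             * so it must not be demoted this time around */
            return try_lock_block(ml->ctx, oblock);
    }
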
+diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
+index 7755af351867..e049becaaf2d 100644
+--- a/drivers/md/dm-cache-target.c
++++ b/drivers/md/dm-cache-target.c
+@@ -1445,16 +1445,43 @@ static void inc_miss_counter(struct cache *cache, struct bio *bio)
+ 		   &cache->stats.read_miss : &cache->stats.write_miss);
+ }
+ 
++/*----------------------------------------------------------------*/
++
++struct old_oblock_lock {
++	struct policy_locker locker;
++	struct cache *cache;
++	struct prealloc *structs;
++	struct dm_bio_prison_cell *cell;
++};
++
++static int null_locker(struct policy_locker *locker, dm_oblock_t b)
++{
++	/* This should never be called */
++	BUG();
++	return 0;
++}
++
++static int cell_locker(struct policy_locker *locker, dm_oblock_t b)
++{
++	struct old_oblock_lock *l = container_of(locker, struct old_oblock_lock, locker);
++	struct dm_bio_prison_cell *cell_prealloc = prealloc_get_cell(l->structs);
++
++	return bio_detain(l->cache, b, NULL, cell_prealloc,
++			  (cell_free_fn) prealloc_put_cell,
++			  l->structs, &l->cell);
++}
++
+ static void process_bio(struct cache *cache, struct prealloc *structs,
+ 			struct bio *bio)
+ {
+ 	int r;
+ 	bool release_cell = true;
+ 	dm_oblock_t block = get_bio_block(cache, bio);
+-	struct dm_bio_prison_cell *cell_prealloc, *old_ocell, *new_ocell;
++	struct dm_bio_prison_cell *cell_prealloc, *new_ocell;
+ 	struct policy_result lookup_result;
+ 	bool passthrough = passthrough_mode(&cache->features);
+ 	bool discarded_block, can_migrate;
++	struct old_oblock_lock ool;
+ 
+ 	/*
+ 	 * Check to see if that block is currently migrating.
+@@ -1469,8 +1496,12 @@ static void process_bio(struct cache *cache, struct prealloc *structs,
+ 	discarded_block = is_discarded_oblock(cache, block);
+ 	can_migrate = !passthrough && (discarded_block || spare_migration_bandwidth(cache));
+ 
++	ool.locker.fn = cell_locker;
++	ool.cache = cache;
++	ool.structs = structs;
++	ool.cell = NULL;
+ 	r = policy_map(cache->policy, block, true, can_migrate, discarded_block,
+-		       bio, &lookup_result);
++		       bio, &ool.locker, &lookup_result);
+ 
+ 	if (r == -EWOULDBLOCK)
+ 		/* migration has been denied */
+@@ -1527,27 +1558,11 @@ static void process_bio(struct cache *cache, struct prealloc *structs,
+ 		break;
+ 
+ 	case POLICY_REPLACE:
+-		cell_prealloc = prealloc_get_cell(structs);
+-		r = bio_detain(cache, lookup_result.old_oblock, bio, cell_prealloc,
+-			       (cell_free_fn) prealloc_put_cell,
+-			       structs, &old_ocell);
+-		if (r > 0) {
+-			/*
+-			 * We have to be careful to avoid lock inversion of
+-			 * the cells.  So we back off, and wait for the
+-			 * old_ocell to become free.
+-			 */
+-			policy_force_mapping(cache->policy, block,
+-					     lookup_result.old_oblock);
+-			atomic_inc(&cache->stats.cache_cell_clash);
+-			break;
+-		}
+ 		atomic_inc(&cache->stats.demotion);
+ 		atomic_inc(&cache->stats.promotion);
+-
+ 		demote_then_promote(cache, structs, lookup_result.old_oblock,
+ 				    block, lookup_result.cblock,
+-				    old_ocell, new_ocell);
++				    ool.cell, new_ocell);
+ 		release_cell = false;
+ 		break;
+ 
+@@ -2595,6 +2610,9 @@ static int __cache_map(struct cache *cache, struct bio *bio, struct dm_bio_priso
+ 	bool discarded_block;
+ 	struct policy_result lookup_result;
+ 	struct per_bio_data *pb = init_per_bio_data(bio, pb_data_size);
++	struct old_oblock_lock ool;
++
++	ool.locker.fn = null_locker;
+ 
+ 	if (unlikely(from_oblock(block) >= from_oblock(cache->origin_blocks))) {
+ 		/*
+@@ -2633,7 +2651,7 @@ static int __cache_map(struct cache *cache, struct bio *bio, struct dm_bio_priso
+ 	discarded_block = is_discarded_oblock(cache, block);
+ 
+ 	r = policy_map(cache->policy, block, false, can_migrate, discarded_block,
+-		       bio, &lookup_result);
++		       bio, &ool.locker, &lookup_result);
+ 	if (r == -EWOULDBLOCK) {
+ 		cell_defer(cache, *cell, true);
+ 		return DM_MAPIO_SUBMITTED;
+diff --git a/drivers/md/dm-stats.c b/drivers/md/dm-stats.c
+index f478a4c96d2f..419bdd4fc8b8 100644
+--- a/drivers/md/dm-stats.c
++++ b/drivers/md/dm-stats.c
+@@ -795,6 +795,8 @@ static int message_stats_create(struct mapped_device *md,
+ 		return -EINVAL;
+ 
+ 	if (sscanf(argv[2], "/%u%c", &divisor, &dummy) == 1) {
++		if (!divisor)
++			return -EINVAL;
+ 		step = end - start;
+ 		if (do_div(step, divisor))
+ 			step++;
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index 921aafd12aee..e22e6c892b8a 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -18,6 +18,7 @@
+ #include <linux/init.h>
+ #include <linux/module.h>
+ #include <linux/slab.h>
++#include <linux/vmalloc.h>
+ #include <linux/sort.h>
+ #include <linux/rbtree.h>
+ 
+@@ -260,7 +261,7 @@ struct pool {
+ 	process_mapping_fn process_prepared_mapping;
+ 	process_mapping_fn process_prepared_discard;
+ 
+-	struct dm_bio_prison_cell *cell_sort_array[CELL_SORT_ARRAY_SIZE];
++	struct dm_bio_prison_cell **cell_sort_array;
+ };
+ 
+ static enum pool_mode get_pool_mode(struct pool *pool);
+@@ -2499,6 +2500,7 @@ static void __pool_destroy(struct pool *pool)
+ {
+ 	__pool_table_remove(pool);
+ 
++	vfree(pool->cell_sort_array);
+ 	if (dm_pool_metadata_close(pool->pmd) < 0)
+ 		DMWARN("%s: dm_pool_metadata_close() failed.", __func__);
+ 
+@@ -2611,6 +2613,13 @@ static struct pool *pool_create(struct mapped_device *pool_md,
+ 		goto bad_mapping_pool;
+ 	}
+ 
++	pool->cell_sort_array = vmalloc(sizeof(*pool->cell_sort_array) * CELL_SORT_ARRAY_SIZE);
++	if (!pool->cell_sort_array) {
++		*error = "Error allocating cell sort array";
++		err_p = ERR_PTR(-ENOMEM);
++		goto bad_sort_array;
++	}
++
+ 	pool->ref_count = 1;
+ 	pool->last_commit_jiffies = jiffies;
+ 	pool->pool_md = pool_md;
+@@ -2619,6 +2628,8 @@ static struct pool *pool_create(struct mapped_device *pool_md,
+ 
+ 	return pool;
+ 
++bad_sort_array:
++	mempool_destroy(pool->mapping_pool);
+ bad_mapping_pool:
+ 	dm_deferred_set_destroy(pool->all_io_ds);
+ bad_all_io_ds:
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 4dbed4a67aaf..b9200282fd77 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -4005,8 +4005,10 @@ new_dev_store(struct mddev *mddev, const char *buf, size_t len)
+ 	else
+ 		rdev = md_import_device(dev, -1, -1);
+ 
+-	if (IS_ERR(rdev))
++	if (IS_ERR(rdev)) {
++		mddev_unlock(mddev);
+ 		return PTR_ERR(rdev);
++	}
+ 	err = bind_rdev_to_array(rdev, mddev);
+  out:
+ 	if (err)
+@@ -5159,6 +5161,7 @@ int md_run(struct mddev *mddev)
+ 		mddev_detach(mddev);
+ 		if (mddev->private)
+ 			pers->free(mddev, mddev->private);
++		mddev->private = NULL;
+ 		module_put(pers->owner);
+ 		bitmap_destroy(mddev);
+ 		return err;
+@@ -5294,6 +5297,7 @@ static void md_clean(struct mddev *mddev)
+ 	mddev->changed = 0;
+ 	mddev->degraded = 0;
+ 	mddev->safemode = 0;
++	mddev->private = NULL;
+ 	mddev->merge_check_needed = 0;
+ 	mddev->bitmap_info.offset = 0;
+ 	mddev->bitmap_info.default_offset = 0;
+@@ -5366,6 +5370,7 @@ static void __md_stop(struct mddev *mddev)
+ 	mddev->pers = NULL;
+ 	spin_unlock(&mddev->lock);
+ 	pers->free(mddev, mddev->private);
++	mddev->private = NULL;
+ 	if (pers->sync_request && mddev->to_remove == NULL)
+ 		mddev->to_remove = &md_redundancy_group;
+ 	module_put(pers->owner);
+@@ -6375,7 +6380,7 @@ static int update_array_info(struct mddev *mddev, mdu_array_info_t *info)
+ 	    mddev->ctime         != info->ctime         ||
+ 	    mddev->level         != info->level         ||
+ /*	    mddev->layout        != info->layout        || */
+-	    !mddev->persistent	 != info->not_persistent||
++	    mddev->persistent	 != !info->not_persistent ||
+ 	    mddev->chunk_sectors != info->chunk_size >> 9 ||
+ 	    /* ignore bottom 8 bits of state, and allow SB_BITMAP_PRESENT to change */
+ 	    ((state^info->state) & 0xfffffe00)
+diff --git a/drivers/md/persistent-data/dm-btree-remove.c b/drivers/md/persistent-data/dm-btree-remove.c
+index b88757cd0d1d..a03178e91a79 100644
+--- a/drivers/md/persistent-data/dm-btree-remove.c
++++ b/drivers/md/persistent-data/dm-btree-remove.c
+@@ -309,8 +309,8 @@ static void redistribute3(struct dm_btree_info *info, struct btree_node *parent,
+ 
+ 		if (s < 0 && nr_center < -s) {
+ 			/* not enough in central node */
+-			shift(left, center, nr_center);
+-			s = nr_center - target;
++			shift(left, center, -nr_center);
++			s += nr_center;
+ 			shift(left, right, s);
+ 			nr_right += s;
+ 		} else
+@@ -323,7 +323,7 @@ static void redistribute3(struct dm_btree_info *info, struct btree_node *parent,
+ 		if (s > 0 && nr_center < s) {
+ 			/* not enough in central node */
+ 			shift(center, right, nr_center);
+-			s = target - nr_center;
++			s -= nr_center;
+ 			shift(left, right, s);
+ 			nr_left -= s;
+ 		} else
+diff --git a/drivers/md/persistent-data/dm-btree.c b/drivers/md/persistent-data/dm-btree.c
+index 200ac12a1d40..fdd3793e22f9 100644
+--- a/drivers/md/persistent-data/dm-btree.c
++++ b/drivers/md/persistent-data/dm-btree.c
+@@ -255,7 +255,7 @@ int dm_btree_del(struct dm_btree_info *info, dm_block_t root)
+ 	int r;
+ 	struct del_stack *s;
+ 
+-	s = kmalloc(sizeof(*s), GFP_KERNEL);
++	s = kmalloc(sizeof(*s), GFP_NOIO);
+ 	if (!s)
+ 		return -ENOMEM;
+ 	s->info = info;
+diff --git a/drivers/md/persistent-data/dm-space-map-metadata.c b/drivers/md/persistent-data/dm-space-map-metadata.c
+index e8a904298887..53091295fce9 100644
+--- a/drivers/md/persistent-data/dm-space-map-metadata.c
++++ b/drivers/md/persistent-data/dm-space-map-metadata.c
+@@ -204,6 +204,27 @@ static void in(struct sm_metadata *smm)
+ 	smm->recursion_count++;
+ }
+ 
++static int apply_bops(struct sm_metadata *smm)
++{
++	int r = 0;
++
++	while (!brb_empty(&smm->uncommitted)) {
++		struct block_op bop;
++
++		r = brb_pop(&smm->uncommitted, &bop);
++		if (r) {
++			DMERR("bug in bop ring buffer");
++			break;
++		}
++
++		r = commit_bop(smm, &bop);
++		if (r)
++			break;
++	}
++
++	return r;
++}
++
+ static int out(struct sm_metadata *smm)
+ {
+ 	int r = 0;
+@@ -216,21 +237,8 @@ static int out(struct sm_metadata *smm)
+ 		return -ENOMEM;
+ 	}
+ 
+-	if (smm->recursion_count == 1) {
+-		while (!brb_empty(&smm->uncommitted)) {
+-			struct block_op bop;
+-
+-			r = brb_pop(&smm->uncommitted, &bop);
+-			if (r) {
+-				DMERR("bug in bop ring buffer");
+-				break;
+-			}
+-
+-			r = commit_bop(smm, &bop);
+-			if (r)
+-				break;
+-		}
+-	}
++	if (smm->recursion_count == 1)
++		apply_bops(smm);
+ 
+ 	smm->recursion_count--;
+ 
+@@ -704,6 +712,12 @@ static int sm_metadata_extend(struct dm_space_map *sm, dm_block_t extra_blocks)
+ 		}
+ 		old_len = smm->begin;
+ 
++		r = apply_bops(smm);
++		if (r) {
++			DMERR("%s: apply_bops failed", __func__);
++			goto out;
++		}
++
+ 		r = sm_ll_commit(&smm->ll);
+ 		if (r)
+ 			goto out;
+@@ -773,6 +787,12 @@ int dm_sm_metadata_create(struct dm_space_map *sm,
+ 	if (r)
+ 		return r;
+ 
++	r = apply_bops(smm);
++	if (r) {
++		DMERR("%s: apply_bops failed", __func__);
++		return r;
++	}
++
+ 	return sm_metadata_commit(sm);
+ }
+ 
+diff --git a/drivers/media/dvb-frontends/af9013.c b/drivers/media/dvb-frontends/af9013.c
+index 8001690d7576..ba6c8f6c42a1 100644
+--- a/drivers/media/dvb-frontends/af9013.c
++++ b/drivers/media/dvb-frontends/af9013.c
+@@ -605,6 +605,10 @@ static int af9013_set_frontend(struct dvb_frontend *fe)
+ 			}
+ 		}
+ 
++		/* Return an error if can't find bandwidth or the right clock */
++		if (i == ARRAY_SIZE(coeff_lut))
++			return -EINVAL;
++
+ 		ret = af9013_wr_regs(state, 0xae00, coeff_lut[i].val,
+ 			sizeof(coeff_lut[i].val));
+ 	}
+diff --git a/drivers/media/dvb-frontends/cx24116.c b/drivers/media/dvb-frontends/cx24116.c
+index 2916d7c74a1d..7bc68b355c0b 100644
+--- a/drivers/media/dvb-frontends/cx24116.c
++++ b/drivers/media/dvb-frontends/cx24116.c
+@@ -963,6 +963,10 @@ static int cx24116_send_diseqc_msg(struct dvb_frontend *fe,
+ 	struct cx24116_state *state = fe->demodulator_priv;
+ 	int i, ret;
+ 
++	/* Validate length */
++	if (d->msg_len > sizeof(d->msg))
++		return -EINVAL;
++
+ 	/* Dump DiSEqC message */
+ 	if (debug) {
+ 		printk(KERN_INFO "cx24116: %s(", __func__);
+@@ -974,10 +978,6 @@ static int cx24116_send_diseqc_msg(struct dvb_frontend *fe,
+ 		printk(") toneburst=%d\n", toneburst);
+ 	}
+ 
+-	/* Validate length */
+-	if (d->msg_len > (CX24116_ARGLEN - CX24116_DISEQC_MSGOFS))
+-		return -EINVAL;
+-
+ 	/* DiSEqC message */
+ 	for (i = 0; i < d->msg_len; i++)
+ 		state->dsec_cmd.args[CX24116_DISEQC_MSGOFS + i] = d->msg[i];
+diff --git a/drivers/media/dvb-frontends/cx24117.c b/drivers/media/dvb-frontends/cx24117.c
+index acb965ce0358..af6363573efd 100644
+--- a/drivers/media/dvb-frontends/cx24117.c
++++ b/drivers/media/dvb-frontends/cx24117.c
+@@ -1043,7 +1043,7 @@ static int cx24117_send_diseqc_msg(struct dvb_frontend *fe,
+ 	dev_dbg(&state->priv->i2c->dev, ")\n");
+ 
+ 	/* Validate length */
+-	if (d->msg_len > 15)
++	if (d->msg_len > sizeof(d->msg))
+ 		return -EINVAL;
+ 
+ 	/* DiSEqC message */
+diff --git a/drivers/media/dvb-frontends/s5h1420.c b/drivers/media/dvb-frontends/s5h1420.c
+index 93eeaf7118fd..0b4f8fe6bf99 100644
+--- a/drivers/media/dvb-frontends/s5h1420.c
++++ b/drivers/media/dvb-frontends/s5h1420.c
+@@ -180,7 +180,7 @@ static int s5h1420_send_master_cmd (struct dvb_frontend* fe,
+ 	int result = 0;
+ 
+ 	dprintk("enter %s\n", __func__);
+-	if (cmd->msg_len > 8)
++	if (cmd->msg_len > sizeof(cmd->msg))
+ 		return -EINVAL;
+ 
+ 	/* setup for DISEQC */
+diff --git a/drivers/media/pci/cx18/cx18-streams.c b/drivers/media/pci/cx18/cx18-streams.c
+index c82d25d53341..c9860845264f 100644
+--- a/drivers/media/pci/cx18/cx18-streams.c
++++ b/drivers/media/pci/cx18/cx18-streams.c
+@@ -90,6 +90,7 @@ static struct {
+ 		"encoder PCM audio",
+ 		VFL_TYPE_GRABBER, CX18_V4L2_ENC_PCM_OFFSET,
+ 		PCI_DMA_FROMDEVICE,
++		V4L2_CAP_TUNER | V4L2_CAP_AUDIO | V4L2_CAP_READWRITE,
+ 	},
+ 	{	/* CX18_ENC_STREAM_TYPE_IDX */
+ 		"encoder IDX",
+diff --git a/drivers/media/pci/saa7164/saa7164-encoder.c b/drivers/media/pci/saa7164/saa7164-encoder.c
+index 9266965412c3..7a0a65146723 100644
+--- a/drivers/media/pci/saa7164/saa7164-encoder.c
++++ b/drivers/media/pci/saa7164/saa7164-encoder.c
+@@ -721,13 +721,14 @@ static int vidioc_querycap(struct file *file, void  *priv,
+ 		sizeof(cap->card));
+ 	sprintf(cap->bus_info, "PCI:%s", pci_name(dev->pci));
+ 
+-	cap->capabilities =
++	cap->device_caps =
+ 		V4L2_CAP_VIDEO_CAPTURE |
+-		V4L2_CAP_READWRITE     |
+-		0;
++		V4L2_CAP_READWRITE |
++		V4L2_CAP_TUNER;
+ 
+-	cap->capabilities |= V4L2_CAP_TUNER;
+-	cap->version = 0;
++	cap->capabilities = cap->device_caps |
++		V4L2_CAP_VBI_CAPTURE |
++		V4L2_CAP_DEVICE_CAPS;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/pci/saa7164/saa7164-vbi.c b/drivers/media/pci/saa7164/saa7164-vbi.c
+index 6e025fea2542..06117e6c0596 100644
+--- a/drivers/media/pci/saa7164/saa7164-vbi.c
++++ b/drivers/media/pci/saa7164/saa7164-vbi.c
+@@ -660,13 +660,14 @@ static int vidioc_querycap(struct file *file, void  *priv,
+ 		sizeof(cap->card));
+ 	sprintf(cap->bus_info, "PCI:%s", pci_name(dev->pci));
+ 
+-	cap->capabilities =
++	cap->device_caps =
+ 		V4L2_CAP_VBI_CAPTURE |
+-		V4L2_CAP_READWRITE     |
+-		0;
++		V4L2_CAP_READWRITE |
++		V4L2_CAP_TUNER;
+ 
+-	cap->capabilities |= V4L2_CAP_TUNER;
+-	cap->version = 0;
++	cap->capabilities = cap->device_caps |
++		V4L2_CAP_VIDEO_CAPTURE |
++		V4L2_CAP_DEVICE_CAPS;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/usb/dvb-usb/dib0700_core.c b/drivers/media/usb/dvb-usb/dib0700_core.c
+index 2b40393836ff..0d248ce02a9b 100644
+--- a/drivers/media/usb/dvb-usb/dib0700_core.c
++++ b/drivers/media/usb/dvb-usb/dib0700_core.c
+@@ -655,10 +655,20 @@ out:
+ struct dib0700_rc_response {
+ 	u8 report_id;
+ 	u8 data_state;
+-	u8 system;
+-	u8 not_system;
+-	u8 data;
+-	u8 not_data;
++	union {
++		struct {
++			u8 system;
++			u8 not_system;
++			u8 data;
++			u8 not_data;
++		} nec;
++		struct {
++			u8 not_used;
++			u8 system;
++			u8 data;
++			u8 not_data;
++		} rc5;
++	};
+ };
+ #define RC_MSG_SIZE_V1_20 6
+ 
+@@ -694,8 +704,8 @@ static void dib0700_rc_urb_completion(struct urb *purb)
+ 
+ 	deb_data("IR ID = %02X state = %02X System = %02X %02X Cmd = %02X %02X (len %d)\n",
+ 		 poll_reply->report_id, poll_reply->data_state,
+-		 poll_reply->system, poll_reply->not_system,
+-		 poll_reply->data, poll_reply->not_data,
++		 poll_reply->nec.system, poll_reply->nec.not_system,
++		 poll_reply->nec.data, poll_reply->nec.not_data,
+ 		 purb->actual_length);
+ 
+ 	switch (d->props.rc.core.protocol) {
+@@ -704,30 +714,30 @@ static void dib0700_rc_urb_completion(struct urb *purb)
+ 		toggle = 0;
+ 
+ 		/* NEC protocol sends repeat code as 0 0 0 FF */
+-		if (poll_reply->system     == 0x00 &&
+-		    poll_reply->not_system == 0x00 &&
+-		    poll_reply->data       == 0x00 &&
+-		    poll_reply->not_data   == 0xff) {
++		if (poll_reply->nec.system     == 0x00 &&
++		    poll_reply->nec.not_system == 0x00 &&
++		    poll_reply->nec.data       == 0x00 &&
++		    poll_reply->nec.not_data   == 0xff) {
+ 			poll_reply->data_state = 2;
+ 			break;
+ 		}
+ 
+-		if ((poll_reply->data ^ poll_reply->not_data) != 0xff) {
++		if ((poll_reply->nec.data ^ poll_reply->nec.not_data) != 0xff) {
+ 			deb_data("NEC32 protocol\n");
+-			keycode = RC_SCANCODE_NEC32(poll_reply->system     << 24 |
+-						     poll_reply->not_system << 16 |
+-						     poll_reply->data       << 8  |
+-						     poll_reply->not_data);
+-		} else if ((poll_reply->system ^ poll_reply->not_system) != 0xff) {
++			keycode = RC_SCANCODE_NEC32(poll_reply->nec.system     << 24 |
++						     poll_reply->nec.not_system << 16 |
++						     poll_reply->nec.data       << 8  |
++						     poll_reply->nec.not_data);
++		} else if ((poll_reply->nec.system ^ poll_reply->nec.not_system) != 0xff) {
+ 			deb_data("NEC extended protocol\n");
+-			keycode = RC_SCANCODE_NECX(poll_reply->system << 8 |
+-						    poll_reply->not_system,
+-						    poll_reply->data);
++			keycode = RC_SCANCODE_NECX(poll_reply->nec.system << 8 |
++						    poll_reply->nec.not_system,
++						    poll_reply->nec.data);
+ 
+ 		} else {
+ 			deb_data("NEC normal protocol\n");
+-			keycode = RC_SCANCODE_NEC(poll_reply->system,
+-						   poll_reply->data);
++			keycode = RC_SCANCODE_NEC(poll_reply->nec.system,
++						   poll_reply->nec.data);
+ 		}
+ 
+ 		break;
+@@ -735,19 +745,19 @@ static void dib0700_rc_urb_completion(struct urb *purb)
+ 		deb_data("RC5 protocol\n");
+ 		protocol = RC_TYPE_RC5;
+ 		toggle = poll_reply->report_id;
+-		keycode = RC_SCANCODE_RC5(poll_reply->system, poll_reply->data);
++		keycode = RC_SCANCODE_RC5(poll_reply->rc5.system, poll_reply->rc5.data);
++
++		if ((poll_reply->rc5.data ^ poll_reply->rc5.not_data) != 0xff) {
++			/* Key failed integrity check */
++			err("key failed integrity check: %02x %02x %02x %02x",
++			    poll_reply->rc5.not_used, poll_reply->rc5.system,
++			    poll_reply->rc5.data, poll_reply->rc5.not_data);
++			goto resubmit;
++		}
+ 
+ 		break;
+ 	}
+ 
+-	if ((poll_reply->data + poll_reply->not_data) != 0xff) {
+-		/* Key failed integrity check */
+-		err("key failed integrity check: %02x %02x %02x %02x",
+-		    poll_reply->system,  poll_reply->not_system,
+-		    poll_reply->data, poll_reply->not_data);
+-		goto resubmit;
+-	}
+-
+ 	rc_keydown(d->rc_dev, protocol, keycode, toggle);
+ 
+ resubmit:
+diff --git a/drivers/media/usb/dvb-usb/dib0700_devices.c b/drivers/media/usb/dvb-usb/dib0700_devices.c
+index d7d55a20e959..c170523226aa 100644
+--- a/drivers/media/usb/dvb-usb/dib0700_devices.c
++++ b/drivers/media/usb/dvb-usb/dib0700_devices.c
+@@ -3944,6 +3944,8 @@ struct dvb_usb_device_properties dib0700_devices[] = {
+ 
+ 				DIB0700_DEFAULT_STREAMING_CONFIG(0x02),
+ 			}},
++				.size_of_priv = sizeof(struct
++						dib0700_adapter_state),
+ 			}, {
+ 			.num_frontends = 1,
+ 			.fe = {{
+@@ -3956,6 +3958,8 @@ struct dvb_usb_device_properties dib0700_devices[] = {
+ 
+ 				DIB0700_DEFAULT_STREAMING_CONFIG(0x03),
+ 			}},
++				.size_of_priv = sizeof(struct
++						dib0700_adapter_state),
+ 			}
+ 		},
+ 
+@@ -4009,6 +4013,8 @@ struct dvb_usb_device_properties dib0700_devices[] = {
+ 
+ 				DIB0700_DEFAULT_STREAMING_CONFIG(0x02),
+ 			}},
++				.size_of_priv = sizeof(struct
++						dib0700_adapter_state),
+ 			},
+ 		},
+ 
+diff --git a/drivers/media/v4l2-core/videobuf2-core.c b/drivers/media/v4l2-core/videobuf2-core.c
+index 66ada01c796c..cf9d644a8aff 100644
+--- a/drivers/media/v4l2-core/videobuf2-core.c
++++ b/drivers/media/v4l2-core/videobuf2-core.c
+@@ -1237,6 +1237,23 @@ void vb2_discard_done(struct vb2_queue *q)
+ }
+ EXPORT_SYMBOL_GPL(vb2_discard_done);
+ 
++static void vb2_warn_zero_bytesused(struct vb2_buffer *vb)
++{
++	static bool __check_once __read_mostly;
++
++	if (__check_once)
++		return;
++
++	__check_once = true;
++	__WARN();
++
++	pr_warn_once("use of bytesused == 0 is deprecated and will be removed in the future,\n");
++	if (vb->vb2_queue->allow_zero_bytesused)
++		pr_warn_once("use VIDIOC_DECODER_CMD(V4L2_DEC_CMD_STOP) instead.\n");
++	else
++		pr_warn_once("use the actual size instead.\n");
++}
++
+ /**
+  * __fill_vb2_buffer() - fill a vb2_buffer with information provided in a
+  * v4l2_buffer by the userspace. The caller has already verified that struct
+@@ -1247,16 +1264,6 @@ static void __fill_vb2_buffer(struct vb2_buffer *vb, const struct v4l2_buffer *b
+ {
+ 	unsigned int plane;
+ 
+-	if (V4L2_TYPE_IS_OUTPUT(b->type)) {
+-		if (WARN_ON_ONCE(b->bytesused == 0)) {
+-			pr_warn_once("use of bytesused == 0 is deprecated and will be removed in the future,\n");
+-			if (vb->vb2_queue->allow_zero_bytesused)
+-				pr_warn_once("use VIDIOC_DECODER_CMD(V4L2_DEC_CMD_STOP) instead.\n");
+-			else
+-				pr_warn_once("use the actual size instead.\n");
+-		}
+-	}
+-
+ 	if (V4L2_TYPE_IS_MULTIPLANAR(b->type)) {
+ 		if (b->memory == V4L2_MEMORY_USERPTR) {
+ 			for (plane = 0; plane < vb->num_planes; ++plane) {
+@@ -1297,6 +1304,9 @@ static void __fill_vb2_buffer(struct vb2_buffer *vb, const struct v4l2_buffer *b
+ 				struct v4l2_plane *pdst = &v4l2_planes[plane];
+ 				struct v4l2_plane *psrc = &b->m.planes[plane];
+ 
++				if (psrc->bytesused == 0)
++					vb2_warn_zero_bytesused(vb);
++
+ 				if (vb->vb2_queue->allow_zero_bytesused)
+ 					pdst->bytesused = psrc->bytesused;
+ 				else
+@@ -1331,6 +1341,9 @@ static void __fill_vb2_buffer(struct vb2_buffer *vb, const struct v4l2_buffer *b
+ 		}
+ 
+ 		if (V4L2_TYPE_IS_OUTPUT(b->type)) {
++			if (b->bytesused == 0)
++				vb2_warn_zero_bytesused(vb);
++
+ 			if (vb->vb2_queue->allow_zero_bytesused)
+ 				v4l2_planes[0].bytesused = b->bytesused;
+ 			else
+diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
+index 60f7141a6b02..31d2627d9d4d 100644
+--- a/drivers/mmc/card/block.c
++++ b/drivers/mmc/card/block.c
+@@ -208,6 +208,8 @@ static ssize_t power_ro_lock_show(struct device *dev,
+ 
+ 	ret = snprintf(buf, PAGE_SIZE, "%d\n", locked);
+ 
++	mmc_blk_put(md);
++
+ 	return ret;
+ }
+ 
+@@ -1910,9 +1912,11 @@ static int mmc_blk_issue_rw_rq(struct mmc_queue *mq, struct request *rqc)
+ 			break;
+ 		case MMC_BLK_CMD_ERR:
+ 			ret = mmc_blk_cmd_err(md, card, brq, req, ret);
+-			if (!mmc_blk_reset(md, card->host, type))
+-				break;
+-			goto cmd_abort;
++			if (mmc_blk_reset(md, card->host, type))
++				goto cmd_abort;
++			if (!ret)
++				goto start_new_req;
++			break;
+ 		case MMC_BLK_RETRY:
+ 			if (retry++ < 5)
+ 				break;
+diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
+index 9231cdfe2757..d3dbb28057e9 100644
+--- a/drivers/mmc/host/sdhci.c
++++ b/drivers/mmc/host/sdhci.c
+@@ -3315,13 +3315,14 @@ int sdhci_add_host(struct sdhci_host *host)
+ 				   SDHCI_MAX_CURRENT_MULTIPLIER;
+ 	}
+ 
+-	/* If OCR set by external regulators, use it instead */
++	/* If OCR set by host, use it instead. */
++	if (host->ocr_mask)
++		ocr_avail = host->ocr_mask;
++
++	/* If OCR set by external regulators, give it highest prio. */
+ 	if (mmc->ocr_avail)
+ 		ocr_avail = mmc->ocr_avail;
+ 
+-	if (host->ocr_mask)
+-		ocr_avail &= host->ocr_mask;
+-
+ 	mmc->ocr_avail = ocr_avail;
+ 	mmc->ocr_avail_sdio = ocr_avail;
+ 	if (host->ocr_avail_sdio)
+diff --git a/drivers/net/ethernet/intel/e1000e/82571.c b/drivers/net/ethernet/intel/e1000e/82571.c
+index dc79ed85030b..32e77755a9c6 100644
+--- a/drivers/net/ethernet/intel/e1000e/82571.c
++++ b/drivers/net/ethernet/intel/e1000e/82571.c
+@@ -2010,7 +2010,7 @@ const struct e1000_info e1000_82573_info = {
+ 	.flags2			= FLAG2_DISABLE_ASPM_L1
+ 				  | FLAG2_DISABLE_ASPM_L0S,
+ 	.pba			= 20,
+-	.max_hw_frame_size	= ETH_FRAME_LEN + ETH_FCS_LEN,
++	.max_hw_frame_size	= VLAN_ETH_FRAME_LEN + ETH_FCS_LEN,
+ 	.get_variants		= e1000_get_variants_82571,
+ 	.mac_ops		= &e82571_mac_ops,
+ 	.phy_ops		= &e82_phy_ops_m88,
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 9d81c0317433..e2498dbf3c3b 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -1563,7 +1563,7 @@ static s32 e1000_get_variants_ich8lan(struct e1000_adapter *adapter)
+ 	    ((adapter->hw.mac.type >= e1000_pch2lan) &&
+ 	     (!(er32(CTRL_EXT) & E1000_CTRL_EXT_LSECCK)))) {
+ 		adapter->flags &= ~FLAG_HAS_JUMBO_FRAMES;
+-		adapter->max_hw_frame_size = ETH_FRAME_LEN + ETH_FCS_LEN;
++		adapter->max_hw_frame_size = VLAN_ETH_FRAME_LEN + ETH_FCS_LEN;
+ 
+ 		hw->mac.ops.blink_led = NULL;
+ 	}
+@@ -5681,7 +5681,7 @@ const struct e1000_info e1000_ich8_info = {
+ 				  | FLAG_HAS_FLASH
+ 				  | FLAG_APME_IN_WUC,
+ 	.pba			= 8,
+-	.max_hw_frame_size	= ETH_FRAME_LEN + ETH_FCS_LEN,
++	.max_hw_frame_size	= VLAN_ETH_FRAME_LEN + ETH_FCS_LEN,
+ 	.get_variants		= e1000_get_variants_ich8lan,
+ 	.mac_ops		= &ich8_mac_ops,
+ 	.phy_ops		= &ich8_phy_ops,
+@@ -5754,7 +5754,7 @@ const struct e1000_info e1000_pch2_info = {
+ 	.flags2			= FLAG2_HAS_PHY_STATS
+ 				  | FLAG2_HAS_EEE,
+ 	.pba			= 26,
+-	.max_hw_frame_size	= 9018,
++	.max_hw_frame_size	= 9022,
+ 	.get_variants		= e1000_get_variants_ich8lan,
+ 	.mac_ops		= &ich8_mac_ops,
+ 	.phy_ops		= &ich8_phy_ops,
+@@ -5774,7 +5774,7 @@ const struct e1000_info e1000_pch_lpt_info = {
+ 	.flags2			= FLAG2_HAS_PHY_STATS
+ 				  | FLAG2_HAS_EEE,
+ 	.pba			= 26,
+-	.max_hw_frame_size	= 9018,
++	.max_hw_frame_size	= 9022,
+ 	.get_variants		= e1000_get_variants_ich8lan,
+ 	.mac_ops		= &ich8_mac_ops,
+ 	.phy_ops		= &ich8_phy_ops,
+@@ -5794,7 +5794,7 @@ const struct e1000_info e1000_pch_spt_info = {
+ 	.flags2			= FLAG2_HAS_PHY_STATS
+ 				  | FLAG2_HAS_EEE,
+ 	.pba			= 26,
+-	.max_hw_frame_size	= 9018,
++	.max_hw_frame_size	= 9022,
+ 	.get_variants		= e1000_get_variants_ich8lan,
+ 	.mac_ops		= &ich8_mac_ops,
+ 	.phy_ops		= &ich8_phy_ops,
+diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
+index c509a5c900f5..68913d103542 100644
+--- a/drivers/net/ethernet/intel/e1000e/netdev.c
++++ b/drivers/net/ethernet/intel/e1000e/netdev.c
+@@ -3807,7 +3807,7 @@ void e1000e_reset(struct e1000_adapter *adapter)
+ 	/* reset Packet Buffer Allocation to default */
+ 	ew32(PBA, pba);
+ 
+-	if (adapter->max_frame_size > ETH_FRAME_LEN + ETH_FCS_LEN) {
++	if (adapter->max_frame_size > (VLAN_ETH_FRAME_LEN + ETH_FCS_LEN)) {
+ 		/* To maintain wire speed transmits, the Tx FIFO should be
+ 		 * large enough to accommodate two full transmit packets,
+ 		 * rounded up to the next 1KB and expressed in KB.  Likewise,
+@@ -4196,9 +4196,9 @@ static int e1000_sw_init(struct e1000_adapter *adapter)
+ {
+ 	struct net_device *netdev = adapter->netdev;
+ 
+-	adapter->rx_buffer_len = ETH_FRAME_LEN + VLAN_HLEN + ETH_FCS_LEN;
++	adapter->rx_buffer_len = VLAN_ETH_FRAME_LEN + ETH_FCS_LEN;
+ 	adapter->rx_ps_bsize0 = 128;
+-	adapter->max_frame_size = netdev->mtu + ETH_HLEN + ETH_FCS_LEN;
++	adapter->max_frame_size = netdev->mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;
+ 	adapter->min_frame_size = ETH_ZLEN + ETH_FCS_LEN;
+ 	adapter->tx_ring_count = E1000_DEFAULT_TXD;
+ 	adapter->rx_ring_count = E1000_DEFAULT_RXD;
+@@ -5781,17 +5781,17 @@ struct rtnl_link_stats64 *e1000e_get_stats64(struct net_device *netdev,
+ static int e1000_change_mtu(struct net_device *netdev, int new_mtu)
+ {
+ 	struct e1000_adapter *adapter = netdev_priv(netdev);
+-	int max_frame = new_mtu + VLAN_HLEN + ETH_HLEN + ETH_FCS_LEN;
++	int max_frame = new_mtu + VLAN_ETH_HLEN + ETH_FCS_LEN;
+ 
+ 	/* Jumbo frame support */
+-	if ((max_frame > ETH_FRAME_LEN + ETH_FCS_LEN) &&
++	if ((max_frame > (VLAN_ETH_FRAME_LEN + ETH_FCS_LEN)) &&
+ 	    !(adapter->flags & FLAG_HAS_JUMBO_FRAMES)) {
+ 		e_err("Jumbo Frames not supported.\n");
+ 		return -EINVAL;
+ 	}
+ 
+ 	/* Supported frame sizes */
+-	if ((new_mtu < ETH_ZLEN + ETH_FCS_LEN + VLAN_HLEN) ||
++	if ((new_mtu < (VLAN_ETH_ZLEN + ETH_FCS_LEN)) ||
+ 	    (max_frame > adapter->max_hw_frame_size)) {
+ 		e_err("Unsupported MTU setting\n");
+ 		return -EINVAL;
+@@ -5831,10 +5831,8 @@ static int e1000_change_mtu(struct net_device *netdev, int new_mtu)
+ 		adapter->rx_buffer_len = 4096;
+ 
+ 	/* adjust allocation if LPE protects us, and we aren't using SBP */
+-	if ((max_frame == ETH_FRAME_LEN + ETH_FCS_LEN) ||
+-	    (max_frame == ETH_FRAME_LEN + VLAN_HLEN + ETH_FCS_LEN))
+-		adapter->rx_buffer_len = ETH_FRAME_LEN + VLAN_HLEN
+-		    + ETH_FCS_LEN;
++	if (max_frame <= (VLAN_ETH_FRAME_LEN + ETH_FCS_LEN))
++		adapter->rx_buffer_len = VLAN_ETH_FRAME_LEN + ETH_FCS_LEN;
+ 
+ 	if (netif_running(netdev))
+ 		e1000e_up(adapter);
+diff --git a/drivers/net/wireless/ath/ath9k/htc.h b/drivers/net/wireless/ath/ath9k/htc.h
+index e82a0d4ce23f..5dbc617ecf8a 100644
+--- a/drivers/net/wireless/ath/ath9k/htc.h
++++ b/drivers/net/wireless/ath/ath9k/htc.h
+@@ -440,9 +440,9 @@ static inline void ath9k_htc_stop_btcoex(struct ath9k_htc_priv *priv)
+ }
+ #endif /* CONFIG_ATH9K_BTCOEX_SUPPORT */
+ 
+-#define OP_BT_PRIORITY_DETECTED    BIT(3)
+-#define OP_BT_SCAN                 BIT(4)
+-#define OP_TSF_RESET               BIT(6)
++#define OP_BT_PRIORITY_DETECTED    3
++#define OP_BT_SCAN                 4
++#define OP_TSF_RESET               6
+ 
+ enum htc_op_flags {
+ 	HTC_FWFLAG_NO_RMW,
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index b0badef71ce7..d5f2fbf62d72 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -216,11 +216,13 @@ static bool ath_prepare_reset(struct ath_softc *sc)
+ 	ath_stop_ani(sc);
+ 	ath9k_hw_disable_interrupts(ah);
+ 
+-	if (!ath_drain_all_txq(sc))
+-		ret = false;
+-
+-	if (!ath_stoprecv(sc))
+-		ret = false;
++	if (AR_SREV_9300_20_OR_LATER(ah)) {
++		ret &= ath_stoprecv(sc);
++		ret &= ath_drain_all_txq(sc);
++	} else {
++		ret &= ath_drain_all_txq(sc);
++		ret &= ath_stoprecv(sc);
++	}
+ 
+ 	return ret;
+ }
+diff --git a/drivers/net/wireless/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/iwlwifi/mvm/debugfs.c
+index 9ac04c1ea706..8c17b943cc6f 100644
+--- a/drivers/net/wireless/iwlwifi/mvm/debugfs.c
++++ b/drivers/net/wireless/iwlwifi/mvm/debugfs.c
+@@ -6,7 +6,7 @@
+  * GPL LICENSE SUMMARY
+  *
+  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
++ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of version 2 of the GNU General Public License as
+@@ -32,7 +32,7 @@
+  * BSD LICENSE
+  *
+  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
++ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * All rights reserved.
+  *
+  * Redistribution and use in source and binary forms, with or without
+@@ -1356,6 +1356,7 @@ static ssize_t iwl_dbgfs_d0i3_refs_read(struct file *file,
+ 	PRINT_MVM_REF(IWL_MVM_REF_UCODE_DOWN);
+ 	PRINT_MVM_REF(IWL_MVM_REF_SCAN);
+ 	PRINT_MVM_REF(IWL_MVM_REF_ROC);
++	PRINT_MVM_REF(IWL_MVM_REF_ROC_AUX);
+ 	PRINT_MVM_REF(IWL_MVM_REF_P2P_CLIENT);
+ 	PRINT_MVM_REF(IWL_MVM_REF_AP_IBSS);
+ 	PRINT_MVM_REF(IWL_MVM_REF_USER);
+diff --git a/drivers/net/wireless/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/iwlwifi/mvm/mac80211.c
+index dda9f7b5f342..60c138a9bf4f 100644
+--- a/drivers/net/wireless/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/iwlwifi/mvm/mac80211.c
+@@ -1404,7 +1404,7 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm)
+ 	 * The work item could be running or queued if the
+ 	 * ROC time event stops just as we get here.
+ 	 */
+-	cancel_work_sync(&mvm->roc_done_wk);
++	flush_work(&mvm->roc_done_wk);
+ 
+ 	iwl_trans_stop_device(mvm->trans);
+ 
+diff --git a/drivers/net/wireless/iwlwifi/mvm/mvm.h b/drivers/net/wireless/iwlwifi/mvm/mvm.h
+index cf70f681d1ac..6af21daaaaef 100644
+--- a/drivers/net/wireless/iwlwifi/mvm/mvm.h
++++ b/drivers/net/wireless/iwlwifi/mvm/mvm.h
+@@ -275,6 +275,7 @@ enum iwl_mvm_ref_type {
+ 	IWL_MVM_REF_UCODE_DOWN,
+ 	IWL_MVM_REF_SCAN,
+ 	IWL_MVM_REF_ROC,
++	IWL_MVM_REF_ROC_AUX,
+ 	IWL_MVM_REF_P2P_CLIENT,
+ 	IWL_MVM_REF_AP_IBSS,
+ 	IWL_MVM_REF_USER,
+diff --git a/drivers/net/wireless/iwlwifi/mvm/time-event.c b/drivers/net/wireless/iwlwifi/mvm/time-event.c
+index fd7b0d36f9a6..a7448cf01688 100644
+--- a/drivers/net/wireless/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/iwlwifi/mvm/time-event.c
+@@ -6,7 +6,7 @@
+  * GPL LICENSE SUMMARY
+  *
+  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
++ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of version 2 of the GNU General Public License as
+@@ -32,7 +32,7 @@
+  * BSD LICENSE
+  *
+  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+- * Copyright(c) 2013 - 2014 Intel Mobile Communications GmbH
++ * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * All rights reserved.
+  *
+  * Redistribution and use in source and binary forms, with or without
+@@ -108,12 +108,14 @@ void iwl_mvm_roc_done_wk(struct work_struct *wk)
+ 	 * in the case that the time event actually completed in the firmware
+ 	 * (which is handled in iwl_mvm_te_handle_notif).
+ 	 */
+-	if (test_and_clear_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status))
++	if (test_and_clear_bit(IWL_MVM_STATUS_ROC_RUNNING, &mvm->status)) {
+ 		queues |= BIT(IWL_MVM_OFFCHANNEL_QUEUE);
+-	if (test_and_clear_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status))
++		iwl_mvm_unref(mvm, IWL_MVM_REF_ROC);
++	}
++	if (test_and_clear_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status)) {
+ 		queues |= BIT(mvm->aux_queue);
+-
+-	iwl_mvm_unref(mvm, IWL_MVM_REF_ROC);
++		iwl_mvm_unref(mvm, IWL_MVM_REF_ROC_AUX);
++	}
+ 
+ 	synchronize_net();
+ 
+@@ -393,6 +395,7 @@ static int iwl_mvm_aux_roc_te_handle_notif(struct iwl_mvm *mvm,
+ 	} else if (le32_to_cpu(notif->action) == TE_V2_NOTIF_HOST_EVENT_START) {
+ 		set_bit(IWL_MVM_STATUS_ROC_AUX_RUNNING, &mvm->status);
+ 		te_data->running = true;
++		iwl_mvm_ref(mvm, IWL_MVM_REF_ROC_AUX);
+ 		ieee80211_ready_on_channel(mvm->hw); /* Start TE */
+ 	} else {
+ 		IWL_DEBUG_TE(mvm,
+diff --git a/drivers/net/wireless/rtlwifi/rtl8188ee/hw.c b/drivers/net/wireless/rtlwifi/rtl8188ee/hw.c
+index 86ce5b1930e6..e5d8108f1987 100644
+--- a/drivers/net/wireless/rtlwifi/rtl8188ee/hw.c
++++ b/drivers/net/wireless/rtlwifi/rtl8188ee/hw.c
+@@ -1354,27 +1354,11 @@ void rtl88ee_set_qos(struct ieee80211_hw *hw, int aci)
+ 	}
+ }
+ 
+-static void rtl88ee_clear_interrupt(struct ieee80211_hw *hw)
+-{
+-	struct rtl_priv *rtlpriv = rtl_priv(hw);
+-	u32 tmp;
+-
+-	tmp = rtl_read_dword(rtlpriv, REG_HISR);
+-	rtl_write_dword(rtlpriv, REG_HISR, tmp);
+-
+-	tmp = rtl_read_dword(rtlpriv, REG_HISRE);
+-	rtl_write_dword(rtlpriv, REG_HISRE, tmp);
+-
+-	tmp = rtl_read_dword(rtlpriv, REG_HSISR);
+-	rtl_write_dword(rtlpriv, REG_HSISR, tmp);
+-}
+-
+ void rtl88ee_enable_interrupt(struct ieee80211_hw *hw)
+ {
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 
+-	rtl88ee_clear_interrupt(hw);/*clear it here first*/
+ 	rtl_write_dword(rtlpriv, REG_HIMR,
+ 			rtlpci->irq_mask[0] & 0xFFFFFFFF);
+ 	rtl_write_dword(rtlpriv, REG_HIMRE,
+diff --git a/drivers/net/wireless/rtlwifi/rtl8192ee/hw.c b/drivers/net/wireless/rtlwifi/rtl8192ee/hw.c
+index da0a6125f314..cbf2ca7c7c6d 100644
+--- a/drivers/net/wireless/rtlwifi/rtl8192ee/hw.c
++++ b/drivers/net/wireless/rtlwifi/rtl8192ee/hw.c
+@@ -1584,28 +1584,11 @@ void rtl92ee_set_qos(struct ieee80211_hw *hw, int aci)
+ 	}
+ }
+ 
+-static void rtl92ee_clear_interrupt(struct ieee80211_hw *hw)
+-{
+-	struct rtl_priv *rtlpriv = rtl_priv(hw);
+-	u32 tmp;
+-
+-	tmp = rtl_read_dword(rtlpriv, REG_HISR);
+-	rtl_write_dword(rtlpriv, REG_HISR, tmp);
+-
+-	tmp = rtl_read_dword(rtlpriv, REG_HISRE);
+-	rtl_write_dword(rtlpriv, REG_HISRE, tmp);
+-
+-	tmp = rtl_read_dword(rtlpriv, REG_HSISR);
+-	rtl_write_dword(rtlpriv, REG_HSISR, tmp);
+-}
+-
+ void rtl92ee_enable_interrupt(struct ieee80211_hw *hw)
+ {
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 
+-	rtl92ee_clear_interrupt(hw);/*clear it here first*/
+-
+ 	rtl_write_dword(rtlpriv, REG_HIMR, rtlpci->irq_mask[0] & 0xFFFFFFFF);
+ 	rtl_write_dword(rtlpriv, REG_HIMRE, rtlpci->irq_mask[1] & 0xFFFFFFFF);
+ 	rtlpci->irq_enabled = true;
+diff --git a/drivers/net/wireless/rtlwifi/rtl8723ae/hw.c b/drivers/net/wireless/rtlwifi/rtl8723ae/hw.c
+index 67bb47d77b68..a4b7eac6856f 100644
+--- a/drivers/net/wireless/rtlwifi/rtl8723ae/hw.c
++++ b/drivers/net/wireless/rtlwifi/rtl8723ae/hw.c
+@@ -1258,18 +1258,6 @@ void rtl8723e_set_qos(struct ieee80211_hw *hw, int aci)
+ 	}
+ }
+ 
+-static void rtl8723e_clear_interrupt(struct ieee80211_hw *hw)
+-{
+-	struct rtl_priv *rtlpriv = rtl_priv(hw);
+-	u32 tmp;
+-
+-	tmp = rtl_read_dword(rtlpriv, REG_HISR);
+-	rtl_write_dword(rtlpriv, REG_HISR, tmp);
+-
+-	tmp = rtl_read_dword(rtlpriv, REG_HISRE);
+-	rtl_write_dword(rtlpriv, REG_HISRE, tmp);
+-}
+-
+ void rtl8723e_enable_interrupt(struct ieee80211_hw *hw)
+ {
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+@@ -1284,7 +1272,6 @@ void rtl8723e_disable_interrupt(struct ieee80211_hw *hw)
+ {
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+-	rtl8723e_clear_interrupt(hw);/*clear it here first*/
+ 	rtl_write_dword(rtlpriv, 0x3a8, IMR8190_DISABLED);
+ 	rtl_write_dword(rtlpriv, 0x3ac, IMR8190_DISABLED);
+ 	rtlpci->irq_enabled = false;
+diff --git a/drivers/net/wireless/rtlwifi/rtl8723be/hw.c b/drivers/net/wireless/rtlwifi/rtl8723be/hw.c
+index b681af3c7a35..b9417268427e 100644
+--- a/drivers/net/wireless/rtlwifi/rtl8723be/hw.c
++++ b/drivers/net/wireless/rtlwifi/rtl8723be/hw.c
+@@ -1634,28 +1634,11 @@ void rtl8723be_set_qos(struct ieee80211_hw *hw, int aci)
+ 	}
+ }
+ 
+-static void rtl8723be_clear_interrupt(struct ieee80211_hw *hw)
+-{
+-	struct rtl_priv *rtlpriv = rtl_priv(hw);
+-	u32 tmp;
+-
+-	tmp = rtl_read_dword(rtlpriv, REG_HISR);
+-	rtl_write_dword(rtlpriv, REG_HISR, tmp);
+-
+-	tmp = rtl_read_dword(rtlpriv, REG_HISRE);
+-	rtl_write_dword(rtlpriv, REG_HISRE, tmp);
+-
+-	tmp = rtl_read_dword(rtlpriv, REG_HSISR);
+-	rtl_write_dword(rtlpriv, REG_HSISR, tmp);
+-}
+-
+ void rtl8723be_enable_interrupt(struct ieee80211_hw *hw)
+ {
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 
+-	rtl8723be_clear_interrupt(hw);/*clear it here first*/
+-
+ 	rtl_write_dword(rtlpriv, REG_HIMR, rtlpci->irq_mask[0] & 0xFFFFFFFF);
+ 	rtl_write_dword(rtlpriv, REG_HIMRE, rtlpci->irq_mask[1] & 0xFFFFFFFF);
+ 	rtlpci->irq_enabled = true;
+diff --git a/drivers/net/wireless/rtlwifi/rtl8821ae/hw.c b/drivers/net/wireless/rtlwifi/rtl8821ae/hw.c
+index 8704eee9f3a4..57966e3c8e8d 100644
+--- a/drivers/net/wireless/rtlwifi/rtl8821ae/hw.c
++++ b/drivers/net/wireless/rtlwifi/rtl8821ae/hw.c
+@@ -2253,31 +2253,11 @@ void rtl8821ae_set_qos(struct ieee80211_hw *hw, int aci)
+ 	}
+ }
+ 
+-static void rtl8821ae_clear_interrupt(struct ieee80211_hw *hw)
+-{
+-	struct rtl_priv *rtlpriv = rtl_priv(hw);
+-	u32 tmp;
+-	tmp = rtl_read_dword(rtlpriv, REG_HISR);
+-	/*printk("clear interrupt first:\n");
+-	printk("0x%x = 0x%08x\n",REG_HISR, tmp);*/
+-	rtl_write_dword(rtlpriv, REG_HISR, tmp);
+-
+-	tmp = rtl_read_dword(rtlpriv, REG_HISRE);
+-	/*printk("0x%x = 0x%08x\n",REG_HISRE, tmp);*/
+-	rtl_write_dword(rtlpriv, REG_HISRE, tmp);
+-
+-	tmp = rtl_read_dword(rtlpriv, REG_HSISR);
+-	/*printk("0x%x = 0x%08x\n",REG_HSISR, tmp);*/
+-	rtl_write_dword(rtlpriv, REG_HSISR, tmp);
+-}
+-
+ void rtl8821ae_enable_interrupt(struct ieee80211_hw *hw)
+ {
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+ 	struct rtl_pci *rtlpci = rtl_pcidev(rtl_pcipriv(hw));
+ 
+-	rtl8821ae_clear_interrupt(hw);/*clear it here first*/
+-
+ 	rtl_write_dword(rtlpriv, REG_HIMR, rtlpci->irq_mask[0] & 0xFFFFFFFF);
+ 	rtl_write_dword(rtlpriv, REG_HIMRE, rtlpci->irq_mask[1] & 0xFFFFFFFF);
+ 	rtlpci->irq_enabled = true;
+diff --git a/drivers/nfc/st21nfcb/i2c.c b/drivers/nfc/st21nfcb/i2c.c
+index 76a4cad41cec..c44f8cf5391a 100644
+--- a/drivers/nfc/st21nfcb/i2c.c
++++ b/drivers/nfc/st21nfcb/i2c.c
+@@ -87,11 +87,6 @@ static void st21nfcb_nci_i2c_disable(void *phy_id)
+ 	gpio_set_value(phy->gpio_reset, 1);
+ }
+ 
+-static void st21nfcb_nci_remove_header(struct sk_buff *skb)
+-{
+-	skb_pull(skb, ST21NFCB_FRAME_HEADROOM);
+-}
+-
+ /*
+  * Writing a frame must not return the number of written bytes.
+  * It must return either zero for success, or <0 for error.
+@@ -121,8 +116,6 @@ static int st21nfcb_nci_i2c_write(void *phy_id, struct sk_buff *skb)
+ 			r = 0;
+ 	}
+ 
+-	st21nfcb_nci_remove_header(skb);
+-
+ 	return r;
+ }
+ 
+@@ -366,9 +359,6 @@ static int st21nfcb_nci_i2c_remove(struct i2c_client *client)
+ 
+ 	ndlc_remove(phy->ndlc);
+ 
+-	if (phy->powered)
+-		st21nfcb_nci_i2c_disable(phy);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/nfc/st21nfcb/st21nfcb.c b/drivers/nfc/st21nfcb/st21nfcb.c
+index ca9871ab3fb3..c7dc282d5c3b 100644
+--- a/drivers/nfc/st21nfcb/st21nfcb.c
++++ b/drivers/nfc/st21nfcb/st21nfcb.c
+@@ -131,11 +131,8 @@ EXPORT_SYMBOL_GPL(st21nfcb_nci_probe);
+ 
+ void st21nfcb_nci_remove(struct nci_dev *ndev)
+ {
+-	struct st21nfcb_nci_info *info = nci_get_drvdata(ndev);
+-
+ 	nci_unregister_device(ndev);
+ 	nci_free_device(ndev);
+-	kfree(info);
+ }
+ EXPORT_SYMBOL_GPL(st21nfcb_nci_remove);
+ 
+diff --git a/drivers/of/address.c b/drivers/of/address.c
+index 6906a3f61bd8..8bfda6ade2c0 100644
+--- a/drivers/of/address.c
++++ b/drivers/of/address.c
+@@ -712,7 +712,7 @@ int __weak pci_register_io_range(phys_addr_t addr, resource_size_t size)
+ 	}
+ 
+ 	/* add the range to the list */
+-	range = kzalloc(sizeof(*range), GFP_KERNEL);
++	range = kzalloc(sizeof(*range), GFP_ATOMIC);
+ 	if (!range) {
+ 		err = -ENOMEM;
+ 		goto end_register;
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index f0650265febf..5ed97246c2e7 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -89,7 +89,7 @@ EXPORT_SYMBOL(of_n_size_cells);
+ #ifdef CONFIG_NUMA
+ int __weak of_node_to_nid(struct device_node *np)
+ {
+-	return numa_node_id();
++	return NUMA_NO_NODE;
+ }
+ #endif
+ 
+diff --git a/drivers/phy/phy-berlin-usb.c b/drivers/phy/phy-berlin-usb.c
+index c6fc95b53083..ab54f2864451 100644
+--- a/drivers/phy/phy-berlin-usb.c
++++ b/drivers/phy/phy-berlin-usb.c
+@@ -106,8 +106,8 @@
+ static const u32 phy_berlin_pll_dividers[] = {
+ 	/* Berlin 2 */
+ 	CLK_REF_DIV(0xc) | FEEDBACK_CLK_DIV(0x54),
+-	/* Berlin 2CD */
+-	CLK_REF_DIV(0x6) | FEEDBACK_CLK_DIV(0x55),
++	/* Berlin 2CD/Q */
++	CLK_REF_DIV(0xc) | FEEDBACK_CLK_DIV(0x54),
+ };
+ 
+ struct phy_berlin_usb_priv {
+diff --git a/drivers/phy/phy-twl4030-usb.c b/drivers/phy/phy-twl4030-usb.c
+index bc42d6a8939f..8882afbef688 100644
+--- a/drivers/phy/phy-twl4030-usb.c
++++ b/drivers/phy/phy-twl4030-usb.c
+@@ -711,7 +711,6 @@ static int twl4030_usb_probe(struct platform_device *pdev)
+ 	pm_runtime_use_autosuspend(&pdev->dev);
+ 	pm_runtime_set_autosuspend_delay(&pdev->dev, 2000);
+ 	pm_runtime_enable(&pdev->dev);
+-	pm_runtime_get_sync(&pdev->dev);
+ 
+ 	/* Our job is to use irqs and status from the power module
+ 	 * to keep the transceiver disabled when nothing's connected.
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-370.c b/drivers/pinctrl/mvebu/pinctrl-armada-370.c
+index 03aa58c4cb85..1eb084c3b0c9 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-370.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-370.c
+@@ -370,11 +370,11 @@ static struct mvebu_mpp_mode mv88f6710_mpp_modes[] = {
+ 	MPP_MODE(64,
+ 	   MPP_FUNCTION(0x0, "gpio", NULL),
+ 	   MPP_FUNCTION(0x1, "spi0", "miso"),
+-	   MPP_FUNCTION(0x2, "spi0-1", "cs1")),
++	   MPP_FUNCTION(0x2, "spi0", "cs1")),
+ 	MPP_MODE(65,
+ 	   MPP_FUNCTION(0x0, "gpio", NULL),
+ 	   MPP_FUNCTION(0x1, "spi0", "mosi"),
+-	   MPP_FUNCTION(0x2, "spi0-1", "cs2")),
++	   MPP_FUNCTION(0x2, "spi0", "cs2")),
+ };
+ 
+ static struct mvebu_pinctrl_soc_info armada_370_pinctrl_info;
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-375.c b/drivers/pinctrl/mvebu/pinctrl-armada-375.c
+index ca1e7571fedb..203291bde608 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-375.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-375.c
+@@ -92,19 +92,17 @@ static struct mvebu_mpp_mode mv88f6720_mpp_modes[] = {
+ 		 MPP_FUNCTION(0x5, "nand", "io1")),
+ 	MPP_MODE(8,
+ 		 MPP_FUNCTION(0x0, "gpio", NULL),
+-		 MPP_FUNCTION(0x1, "dev ", "bootcs"),
++		 MPP_FUNCTION(0x1, "dev", "bootcs"),
+ 		 MPP_FUNCTION(0x2, "spi0", "cs0"),
+ 		 MPP_FUNCTION(0x3, "spi1", "cs0"),
+ 		 MPP_FUNCTION(0x5, "nand", "ce")),
+ 	MPP_MODE(9,
+ 		 MPP_FUNCTION(0x0, "gpio", NULL),
+-		 MPP_FUNCTION(0x1, "nf", "wen"),
+ 		 MPP_FUNCTION(0x2, "spi0", "sck"),
+ 		 MPP_FUNCTION(0x3, "spi1", "sck"),
+ 		 MPP_FUNCTION(0x5, "nand", "we")),
+ 	MPP_MODE(10,
+ 		 MPP_FUNCTION(0x0, "gpio", NULL),
+-		 MPP_FUNCTION(0x1, "nf", "ren"),
+ 		 MPP_FUNCTION(0x2, "dram", "vttctrl"),
+ 		 MPP_FUNCTION(0x3, "led", "c1"),
+ 		 MPP_FUNCTION(0x5, "nand", "re"),
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-38x.c b/drivers/pinctrl/mvebu/pinctrl-armada-38x.c
+index 83bbcc72be1f..ff411a53b5a4 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-38x.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-38x.c
+@@ -94,37 +94,39 @@ static struct mvebu_mpp_mode armada_38x_mpp_modes[] = {
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "ge0",   "rxd0",       V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(2, "pcie0", "rstout",     V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(3, "pcie1", "rstout",     V_88F6820_PLUS),
+ 		 MPP_VAR_FUNCTION(4, "spi0",  "cs1",        V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(5, "dev",   "ad14",       V_88F6810_PLUS)),
++		 MPP_VAR_FUNCTION(5, "dev",   "ad14",       V_88F6810_PLUS),
++		 MPP_VAR_FUNCTION(6, "pcie3", "clkreq",     V_88F6810_PLUS)),
+ 	MPP_MODE(13,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "ge0",   "rxd1",       V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(2, "pcie0", "clkreq",     V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(3, "pcie1", "clkreq",     V_88F6820_PLUS),
+ 		 MPP_VAR_FUNCTION(4, "spi0",  "cs2",        V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(5, "dev",   "ad15",       V_88F6810_PLUS)),
++		 MPP_VAR_FUNCTION(5, "dev",   "ad15",       V_88F6810_PLUS),
++		 MPP_VAR_FUNCTION(6, "pcie2", "clkreq",     V_88F6810_PLUS)),
+ 	MPP_MODE(14,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "ge0",   "rxd2",       V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(2, "ptp",   "clk",        V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(3, "m",     "vtt_ctrl",   V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(4, "spi0",  "cs3",        V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(5, "dev",   "wen1",       V_88F6810_PLUS)),
++		 MPP_VAR_FUNCTION(5, "dev",   "wen1",       V_88F6810_PLUS),
++		 MPP_VAR_FUNCTION(6, "pcie3", "clkreq",     V_88F6810_PLUS)),
+ 	MPP_MODE(15,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "ge0",   "rxd3",       V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(2, "ge",    "mdc slave",  V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(3, "pcie0", "rstout",     V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(4, "spi0",  "mosi",       V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(5, "pcie1", "rstout",     V_88F6820_PLUS)),
++		 MPP_VAR_FUNCTION(4, "spi0",  "mosi",       V_88F6810_PLUS)),
+ 	MPP_MODE(16,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "ge0",   "rxctl",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(2, "ge",    "mdio slave", V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(3, "m",     "decc_err",   V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(4, "spi0",  "miso",       V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(5, "pcie0", "clkreq",     V_88F6810_PLUS)),
++		 MPP_VAR_FUNCTION(5, "pcie0", "clkreq",     V_88F6810_PLUS),
++		 MPP_VAR_FUNCTION(6, "pcie1", "clkreq",     V_88F6820_PLUS)),
+ 	MPP_MODE(17,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "ge0",   "rxclk",      V_88F6810_PLUS),
+@@ -137,13 +139,12 @@ static struct mvebu_mpp_mode armada_38x_mpp_modes[] = {
+ 		 MPP_VAR_FUNCTION(1, "ge0",   "rxerr",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(2, "ptp",   "trig_gen",   V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(3, "ua1",   "txd",        V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(4, "spi0",  "cs0",        V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(5, "pcie1", "rstout",     V_88F6820_PLUS)),
++		 MPP_VAR_FUNCTION(4, "spi0",  "cs0",        V_88F6810_PLUS)),
+ 	MPP_MODE(19,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "ge0",   "col",        V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(2, "ptp",   "event_req",  V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(3, "pcie0", "clkreq",     V_88F6810_PLUS),
++		 MPP_VAR_FUNCTION(3, "ge0",   "txerr",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(4, "sata1", "prsnt",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(5, "ua0",   "cts",        V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(6, "ua1",   "rxd",        V_88F6810_PLUS)),
+@@ -151,7 +152,6 @@ static struct mvebu_mpp_mode armada_38x_mpp_modes[] = {
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "ge0",   "txclk",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(2, "ptp",   "clk",        V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(3, "pcie1", "rstout",     V_88F6820_PLUS),
+ 		 MPP_VAR_FUNCTION(4, "sata0", "prsnt",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(5, "ua0",   "rts",        V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(6, "ua1",   "txd",        V_88F6810_PLUS)),
+@@ -277,35 +277,27 @@ static struct mvebu_mpp_mode armada_38x_mpp_modes[] = {
+ 		 MPP_VAR_FUNCTION(1, "pcie0", "clkreq",     V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(2, "m",     "vtt_ctrl",   V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(3, "m",     "decc_err",   V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(4, "pcie0", "rstout",     V_88F6810_PLUS),
++		 MPP_VAR_FUNCTION(4, "spi1",  "cs2",        V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(5, "dev",   "clkout",     V_88F6810_PLUS)),
+ 	MPP_MODE(44,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "sata0", "prsnt",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(2, "sata1", "prsnt",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(3, "sata2", "prsnt",      V_88F6828),
+-		 MPP_VAR_FUNCTION(4, "sata3", "prsnt",      V_88F6828),
+-		 MPP_VAR_FUNCTION(5, "pcie0", "rstout",     V_88F6810_PLUS)),
++		 MPP_VAR_FUNCTION(4, "sata3", "prsnt",      V_88F6828)),
+ 	MPP_MODE(45,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "ref",   "clk_out0",   V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(2, "pcie0", "rstout",     V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(3, "pcie1", "rstout",     V_88F6820_PLUS),
+-		 MPP_VAR_FUNCTION(4, "pcie2", "rstout",     V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(5, "pcie3", "rstout",     V_88F6810_PLUS)),
++		 MPP_VAR_FUNCTION(2, "pcie0", "rstout",     V_88F6810_PLUS)),
+ 	MPP_MODE(46,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "ref",   "clk_out1",   V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(2, "pcie0", "rstout",     V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(3, "pcie1", "rstout",     V_88F6820_PLUS),
+-		 MPP_VAR_FUNCTION(4, "pcie2", "rstout",     V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(5, "pcie3", "rstout",     V_88F6810_PLUS)),
++		 MPP_VAR_FUNCTION(2, "pcie0", "rstout",     V_88F6810_PLUS)),
+ 	MPP_MODE(47,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "sata0", "prsnt",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(2, "sata1", "prsnt",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(3, "sata2", "prsnt",      V_88F6828),
+-		 MPP_VAR_FUNCTION(4, "spi1",  "cs2",        V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(5, "sata3", "prsnt",      V_88F6828)),
+ 	MPP_MODE(48,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+@@ -313,18 +305,19 @@ static struct mvebu_mpp_mode armada_38x_mpp_modes[] = {
+ 		 MPP_VAR_FUNCTION(2, "m",     "vtt_ctrl",   V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(3, "tdm2c", "pclk",       V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(4, "audio", "mclk",       V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(5, "sd0",   "d4",         V_88F6810_PLUS)),
++		 MPP_VAR_FUNCTION(5, "sd0",   "d4",         V_88F6810_PLUS),
++		 MPP_VAR_FUNCTION(6, "pcie0", "clkreq",     V_88F6810_PLUS)),
+ 	MPP_MODE(49,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "sata2", "prsnt",      V_88F6828),
+ 		 MPP_VAR_FUNCTION(2, "sata3", "prsnt",      V_88F6828),
+ 		 MPP_VAR_FUNCTION(3, "tdm2c", "fsync",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(4, "audio", "lrclk",      V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(5, "sd0",   "d5",         V_88F6810_PLUS)),
++		 MPP_VAR_FUNCTION(5, "sd0",   "d5",         V_88F6810_PLUS),
++		 MPP_VAR_FUNCTION(6, "pcie1", "clkreq",     V_88F6820_PLUS)),
+ 	MPP_MODE(50,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "pcie0", "rstout",     V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(2, "pcie1", "rstout",     V_88F6820_PLUS),
+ 		 MPP_VAR_FUNCTION(3, "tdm2c", "drx",        V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(4, "audio", "extclk",     V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(5, "sd0",   "cmd",        V_88F6810_PLUS)),
+@@ -336,7 +329,6 @@ static struct mvebu_mpp_mode armada_38x_mpp_modes[] = {
+ 	MPP_MODE(52,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "pcie0", "rstout",     V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(2, "pcie1", "rstout",     V_88F6820_PLUS),
+ 		 MPP_VAR_FUNCTION(3, "tdm2c", "intn",       V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(4, "audio", "sdi",        V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(5, "sd0",   "d6",         V_88F6810_PLUS)),
+@@ -352,7 +344,7 @@ static struct mvebu_mpp_mode armada_38x_mpp_modes[] = {
+ 		 MPP_VAR_FUNCTION(1, "sata0", "prsnt",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(2, "sata1", "prsnt",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(3, "pcie0", "rstout",     V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(4, "pcie1", "rstout",     V_88F6820_PLUS),
++		 MPP_VAR_FUNCTION(4, "ge0",   "txerr",      V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(5, "sd0",   "d3",         V_88F6810_PLUS)),
+ 	MPP_MODE(55,
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+@@ -382,7 +374,6 @@ static struct mvebu_mpp_mode armada_38x_mpp_modes[] = {
+ 		 MPP_VAR_FUNCTION(0, "gpio",  NULL,         V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(1, "pcie0", "rstout",     V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(2, "i2c1",  "sda",        V_88F6810_PLUS),
+-		 MPP_VAR_FUNCTION(3, "pcie1", "rstout",     V_88F6820_PLUS),
+ 		 MPP_VAR_FUNCTION(4, "spi1",  "cs0",        V_88F6810_PLUS),
+ 		 MPP_VAR_FUNCTION(5, "sd0",   "d2",         V_88F6810_PLUS)),
+ };
+@@ -411,7 +402,7 @@ static struct mvebu_mpp_ctrl armada_38x_mpp_controls[] = {
+ 
+ static struct pinctrl_gpio_range armada_38x_mpp_gpio_ranges[] = {
+ 	MPP_GPIO_RANGE(0,   0,  0, 32),
+-	MPP_GPIO_RANGE(1,  32, 32, 27),
++	MPP_GPIO_RANGE(1,  32, 32, 28),
+ };
+ 
+ static int armada_38x_pinctrl_probe(struct platform_device *pdev)
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-39x.c b/drivers/pinctrl/mvebu/pinctrl-armada-39x.c
+index 42491624d660..2dcf9b41e01e 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-39x.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-39x.c
+@@ -380,7 +380,7 @@ static struct mvebu_mpp_ctrl armada_39x_mpp_controls[] = {
+ 
+ static struct pinctrl_gpio_range armada_39x_mpp_gpio_ranges[] = {
+ 	MPP_GPIO_RANGE(0,   0,  0, 32),
+-	MPP_GPIO_RANGE(1,  32, 32, 27),
++	MPP_GPIO_RANGE(1,  32, 32, 28),
+ };
+ 
+ static int armada_39x_pinctrl_probe(struct platform_device *pdev)
+diff --git a/drivers/pinctrl/mvebu/pinctrl-armada-xp.c b/drivers/pinctrl/mvebu/pinctrl-armada-xp.c
+index 578db9f033b2..d7cdb146f44d 100644
+--- a/drivers/pinctrl/mvebu/pinctrl-armada-xp.c
++++ b/drivers/pinctrl/mvebu/pinctrl-armada-xp.c
+@@ -14,10 +14,7 @@
+  * available: mv78230, mv78260 and mv78460. From a pin muxing
+  * perspective, the mv78230 has 49 MPP pins. The mv78260 and mv78460
+  * both have 67 MPP pins (more GPIOs and address lines for the memory
+- * bus mainly). The only difference between the mv78260 and the
+- * mv78460 in terms of pin muxing is the addition of two functions on
+- * pins 43 and 56 to access the VDD of the CPU2 and 3 (mv78260 has two
+- * cores, mv78460 has four cores).
++ * bus mainly).
+  */
+ 
+ #include <linux/err.h>
+@@ -172,20 +169,17 @@ static struct mvebu_mpp_mode armada_xp_mpp_modes[] = {
+ 	MPP_MODE(24,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x1, "sata1", "prsnt",    V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x2, "nf", "bootcs-re",   V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x3, "tdm", "rst",        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x4, "lcd", "hsync",      V_MV78230_PLUS)),
+ 	MPP_MODE(25,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x1, "sata0", "prsnt",    V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x2, "nf", "bootcs-we",   V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x3, "tdm", "pclk",       V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x4, "lcd", "vsync",      V_MV78230_PLUS)),
+ 	MPP_MODE(26,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x3, "tdm", "fsync",      V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x4, "lcd", "clk",        V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x5, "vdd", "cpu1-pd",    V_MV78230_PLUS)),
++		 MPP_VAR_FUNCTION(0x4, "lcd", "clk",        V_MV78230_PLUS)),
+ 	MPP_MODE(27,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x1, "ptp", "trig",       V_MV78230_PLUS),
+@@ -200,8 +194,7 @@ static struct mvebu_mpp_mode armada_xp_mpp_modes[] = {
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x1, "ptp", "clk",        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x3, "tdm", "int0",       V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x4, "lcd", "ref-clk",    V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x5, "vdd", "cpu0-pd",    V_MV78230_PLUS)),
++		 MPP_VAR_FUNCTION(0x4, "lcd", "ref-clk",    V_MV78230_PLUS)),
+ 	MPP_MODE(30,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x1, "sd0", "clk",        V_MV78230_PLUS),
+@@ -209,13 +202,11 @@ static struct mvebu_mpp_mode armada_xp_mpp_modes[] = {
+ 	MPP_MODE(31,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x1, "sd0", "cmd",        V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x3, "tdm", "int2",       V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x5, "vdd", "cpu0-pd",    V_MV78230_PLUS)),
++		 MPP_VAR_FUNCTION(0x3, "tdm", "int2",       V_MV78230_PLUS)),
+ 	MPP_MODE(32,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x1, "sd0", "d0",         V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x3, "tdm", "int3",       V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x5, "vdd", "cpu1-pd",    V_MV78230_PLUS)),
++		 MPP_VAR_FUNCTION(0x3, "tdm", "int3",       V_MV78230_PLUS)),
+ 	MPP_MODE(33,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x1, "sd0", "d1",         V_MV78230_PLUS),
+@@ -247,7 +238,6 @@ static struct mvebu_mpp_mode armada_xp_mpp_modes[] = {
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x1, "spi", "cs1",        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x2, "uart2", "cts",      V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x3, "vdd", "cpu1-pd",    V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x4, "lcd", "vga-hsync",  V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x5, "pcie", "clkreq0",   V_MV78230_PLUS)),
+ 	MPP_MODE(41,
+@@ -262,15 +252,13 @@ static struct mvebu_mpp_mode armada_xp_mpp_modes[] = {
+ 		 MPP_VAR_FUNCTION(0x1, "uart2", "rxd",      V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x2, "uart0", "cts",      V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x3, "tdm", "int7",       V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x4, "tdm-1", "timer",    V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x5, "vdd", "cpu0-pd",    V_MV78230_PLUS)),
++		 MPP_VAR_FUNCTION(0x4, "tdm-1", "timer",    V_MV78230_PLUS)),
+ 	MPP_MODE(43,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x1, "uart2", "txd",      V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x2, "uart0", "rts",      V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x3, "spi", "cs3",        V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x4, "pcie", "rstout",    V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x5, "vdd", "cpu2-3-pd",  V_MV78460)),
++		 MPP_VAR_FUNCTION(0x4, "pcie", "rstout",    V_MV78230_PLUS)),
+ 	MPP_MODE(44,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x1, "uart2", "cts",      V_MV78230_PLUS),
+@@ -299,7 +287,7 @@ static struct mvebu_mpp_mode armada_xp_mpp_modes[] = {
+ 		 MPP_VAR_FUNCTION(0x5, "pcie", "clkreq3",   V_MV78230_PLUS)),
+ 	MPP_MODE(48,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78230_PLUS),
+-		 MPP_VAR_FUNCTION(0x1, "tclk", NULL,        V_MV78230_PLUS),
++		 MPP_VAR_FUNCTION(0x1, "dev", "clkout",     V_MV78230_PLUS),
+ 		 MPP_VAR_FUNCTION(0x2, "dev", "burst/last", V_MV78230_PLUS)),
+ 	MPP_MODE(49,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78260_PLUS),
+@@ -321,16 +309,13 @@ static struct mvebu_mpp_mode armada_xp_mpp_modes[] = {
+ 		 MPP_VAR_FUNCTION(0x1, "dev", "ad19",       V_MV78260_PLUS)),
+ 	MPP_MODE(55,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78260_PLUS),
+-		 MPP_VAR_FUNCTION(0x1, "dev", "ad20",       V_MV78260_PLUS),
+-		 MPP_VAR_FUNCTION(0x2, "vdd", "cpu0-pd",    V_MV78260_PLUS)),
++		 MPP_VAR_FUNCTION(0x1, "dev", "ad20",       V_MV78260_PLUS)),
+ 	MPP_MODE(56,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78260_PLUS),
+-		 MPP_VAR_FUNCTION(0x1, "dev", "ad21",       V_MV78260_PLUS),
+-		 MPP_VAR_FUNCTION(0x2, "vdd", "cpu1-pd",    V_MV78260_PLUS)),
++		 MPP_VAR_FUNCTION(0x1, "dev", "ad21",       V_MV78260_PLUS)),
+ 	MPP_MODE(57,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78260_PLUS),
+-		 MPP_VAR_FUNCTION(0x1, "dev", "ad22",       V_MV78260_PLUS),
+-		 MPP_VAR_FUNCTION(0x2, "vdd", "cpu2-3-pd",  V_MV78460)),
++		 MPP_VAR_FUNCTION(0x1, "dev", "ad22",       V_MV78260_PLUS)),
+ 	MPP_MODE(58,
+ 		 MPP_VAR_FUNCTION(0x0, "gpio", NULL,        V_MV78260_PLUS),
+ 		 MPP_VAR_FUNCTION(0x1, "dev", "ad23",       V_MV78260_PLUS)),
+diff --git a/drivers/pinctrl/pinctrl-zynq.c b/drivers/pinctrl/pinctrl-zynq.c
+index 22280bddb9e2..8c51a3c65513 100644
+--- a/drivers/pinctrl/pinctrl-zynq.c
++++ b/drivers/pinctrl/pinctrl-zynq.c
+@@ -714,12 +714,13 @@ static const char * const gpio0_groups[] = {"gpio0_0_grp",
+ 		.mux_val = mval,			\
+ 	}
+ 
+-#define DEFINE_ZYNQ_PINMUX_FUNCTION_MUX(fname, mval, mux, mask, shift)	\
++#define DEFINE_ZYNQ_PINMUX_FUNCTION_MUX(fname, mval, offset, mask, shift)\
+ 	[ZYNQ_PMUX_##fname] = {				\
+ 		.name = #fname,				\
+ 		.groups = fname##_groups,		\
+ 		.ngroups = ARRAY_SIZE(fname##_groups),	\
+ 		.mux_val = mval,			\
++		.mux = offset,				\
+ 		.mux_mask = mask,			\
+ 		.mux_shift = shift,			\
+ 	}
+@@ -744,15 +745,15 @@ static const struct zynq_pinmux_function zynq_pmux_functions[] = {
+ 	DEFINE_ZYNQ_PINMUX_FUNCTION(spi1, 0x50),
+ 	DEFINE_ZYNQ_PINMUX_FUNCTION(sdio0, 0x40),
+ 	DEFINE_ZYNQ_PINMUX_FUNCTION(sdio0_pc, 0xc),
+-	DEFINE_ZYNQ_PINMUX_FUNCTION_MUX(sdio0_wp, 0, 130, ZYNQ_SDIO_WP_MASK,
++	DEFINE_ZYNQ_PINMUX_FUNCTION_MUX(sdio0_wp, 0, 0x130, ZYNQ_SDIO_WP_MASK,
+ 					ZYNQ_SDIO_WP_SHIFT),
+-	DEFINE_ZYNQ_PINMUX_FUNCTION_MUX(sdio0_cd, 0, 130, ZYNQ_SDIO_CD_MASK,
++	DEFINE_ZYNQ_PINMUX_FUNCTION_MUX(sdio0_cd, 0, 0x130, ZYNQ_SDIO_CD_MASK,
+ 					ZYNQ_SDIO_CD_SHIFT),
+ 	DEFINE_ZYNQ_PINMUX_FUNCTION(sdio1, 0x40),
+ 	DEFINE_ZYNQ_PINMUX_FUNCTION(sdio1_pc, 0xc),
+-	DEFINE_ZYNQ_PINMUX_FUNCTION_MUX(sdio1_wp, 0, 134, ZYNQ_SDIO_WP_MASK,
++	DEFINE_ZYNQ_PINMUX_FUNCTION_MUX(sdio1_wp, 0, 0x134, ZYNQ_SDIO_WP_MASK,
+ 					ZYNQ_SDIO_WP_SHIFT),
+-	DEFINE_ZYNQ_PINMUX_FUNCTION_MUX(sdio1_cd, 0, 134, ZYNQ_SDIO_CD_MASK,
++	DEFINE_ZYNQ_PINMUX_FUNCTION_MUX(sdio1_cd, 0, 0x134, ZYNQ_SDIO_CD_MASK,
+ 					ZYNQ_SDIO_CD_SHIFT),
+ 	DEFINE_ZYNQ_PINMUX_FUNCTION(smc0_nor, 4),
+ 	DEFINE_ZYNQ_PINMUX_FUNCTION(smc0_nor_cs1, 8),
+diff --git a/drivers/platform/x86/dell-laptop.c b/drivers/platform/x86/dell-laptop.c
+index d688d806a8a5..2c1d5f5432a9 100644
+--- a/drivers/platform/x86/dell-laptop.c
++++ b/drivers/platform/x86/dell-laptop.c
+@@ -305,7 +305,6 @@ static const struct dmi_system_id dell_quirks[] __initconst = {
+ };
+ 
+ static struct calling_interface_buffer *buffer;
+-static struct page *bufferpage;
+ static DEFINE_MUTEX(buffer_mutex);
+ 
+ static int hwswitch_state;
+@@ -1896,12 +1895,11 @@ static int __init dell_init(void)
+ 	 * Allocate buffer below 4GB for SMI data--only 32-bit physical addr
+ 	 * is passed to SMI handler.
+ 	 */
+-	bufferpage = alloc_page(GFP_KERNEL | GFP_DMA32);
+-	if (!bufferpage) {
++	buffer = (void *)__get_free_page(GFP_KERNEL | GFP_DMA32);
++	if (!buffer) {
+ 		ret = -ENOMEM;
+ 		goto fail_buffer;
+ 	}
+-	buffer = page_address(bufferpage);
+ 
+ 	ret = dell_setup_rfkill();
+ 
+@@ -1965,7 +1963,7 @@ fail_backlight:
+ 	cancel_delayed_work_sync(&dell_rfkill_work);
+ 	dell_cleanup_rfkill();
+ fail_rfkill:
+-	free_page((unsigned long)bufferpage);
++	free_page((unsigned long)buffer);
+ fail_buffer:
+ 	platform_device_del(platform_device);
+ fail_platform_device2:
+diff --git a/drivers/platform/x86/ideapad-laptop.c b/drivers/platform/x86/ideapad-laptop.c
+index b496db87bc05..cb7cd8d79329 100644
+--- a/drivers/platform/x86/ideapad-laptop.c
++++ b/drivers/platform/x86/ideapad-laptop.c
+@@ -464,8 +464,9 @@ static const struct ideapad_rfk_data ideapad_rfk_data[] = {
+ static int ideapad_rfk_set(void *data, bool blocked)
+ {
+ 	struct ideapad_rfk_priv *priv = data;
++	int opcode = ideapad_rfk_data[priv->dev].opcode;
+ 
+-	return write_ec_cmd(priv->priv->adev->handle, priv->dev, !blocked);
++	return write_ec_cmd(priv->priv->adev->handle, opcode, !blocked);
+ }
+ 
+ static struct rfkill_ops ideapad_rfk_ops = {
+@@ -837,6 +838,13 @@ static const struct dmi_system_id no_hw_rfkill_list[] = {
+ 		},
+ 	},
+ 	{
++		.ident = "Lenovo G50-30",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo G50-30"),
++		},
++	},
++	{
+ 		.ident = "Lenovo Yoga 2 11 / 13 / Pro",
+ 		.matches = {
+ 			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+diff --git a/drivers/pnp/system.c b/drivers/pnp/system.c
+index 515f33882ab8..49c1720df59a 100644
+--- a/drivers/pnp/system.c
++++ b/drivers/pnp/system.c
+@@ -7,7 +7,6 @@
+  *	Bjorn Helgaas <bjorn.helgaas@hp.com>
+  */
+ 
+-#include <linux/acpi.h>
+ #include <linux/pnp.h>
+ #include <linux/device.h>
+ #include <linux/init.h>
+@@ -23,41 +22,25 @@ static const struct pnp_device_id pnp_dev_table[] = {
+ 	{"", 0}
+ };
+ 
+-#ifdef CONFIG_ACPI
+-static bool __reserve_range(u64 start, unsigned int length, bool io, char *desc)
+-{
+-	u8 space_id = io ? ACPI_ADR_SPACE_SYSTEM_IO : ACPI_ADR_SPACE_SYSTEM_MEMORY;
+-	return !acpi_reserve_region(start, length, space_id, IORESOURCE_BUSY, desc);
+-}
+-#else
+-static bool __reserve_range(u64 start, unsigned int length, bool io, char *desc)
+-{
+-	struct resource *res;
+-
+-	res = io ? request_region(start, length, desc) :
+-		request_mem_region(start, length, desc);
+-	if (res) {
+-		res->flags &= ~IORESOURCE_BUSY;
+-		return true;
+-	}
+-	return false;
+-}
+-#endif
+-
+ static void reserve_range(struct pnp_dev *dev, struct resource *r, int port)
+ {
+ 	char *regionid;
+ 	const char *pnpid = dev_name(&dev->dev);
+ 	resource_size_t start = r->start, end = r->end;
+-	bool reserved;
++	struct resource *res;
+ 
+ 	regionid = kmalloc(16, GFP_KERNEL);
+ 	if (!regionid)
+ 		return;
+ 
+ 	snprintf(regionid, 16, "pnp %s", pnpid);
+-	reserved = __reserve_range(start, end - start + 1, !!port, regionid);
+-	if (!reserved)
++	if (port)
++		res = request_region(start, end - start + 1, regionid);
++	else
++		res = request_mem_region(start, end - start + 1, regionid);
++	if (res)
++		res->flags &= ~IORESOURCE_BUSY;
++	else
+ 		kfree(regionid);
+ 
+ 	/*
+@@ -66,7 +49,7 @@ static void reserve_range(struct pnp_dev *dev, struct resource *r, int port)
+ 	 * have double reservations.
+ 	 */
+ 	dev_info(&dev->dev, "%pR %s reserved\n", r,
+-		 reserved ? "has been" : "could not be");
++		 res ? "has been" : "could not be");
+ }
+ 
+ static void reserve_resources_of_dev(struct pnp_dev *dev)
+diff --git a/drivers/rtc/rtc-snvs.c b/drivers/rtc/rtc-snvs.c
+index 0479e807a776..d87a85cefb66 100644
+--- a/drivers/rtc/rtc-snvs.c
++++ b/drivers/rtc/rtc-snvs.c
+@@ -322,6 +322,13 @@ static int snvs_rtc_suspend(struct device *dev)
+ 	if (device_may_wakeup(dev))
+ 		enable_irq_wake(data->irq);
+ 
++	return 0;
++}
++
++static int snvs_rtc_suspend_noirq(struct device *dev)
++{
++	struct snvs_rtc_data *data = dev_get_drvdata(dev);
++
+ 	if (data->clk)
+ 		clk_disable_unprepare(data->clk);
+ 
+@@ -331,23 +338,28 @@ static int snvs_rtc_suspend(struct device *dev)
+ static int snvs_rtc_resume(struct device *dev)
+ {
+ 	struct snvs_rtc_data *data = dev_get_drvdata(dev);
+-	int ret;
+ 
+ 	if (device_may_wakeup(dev))
+-		disable_irq_wake(data->irq);
++		return disable_irq_wake(data->irq);
+ 
+-	if (data->clk) {
+-		ret = clk_prepare_enable(data->clk);
+-		if (ret)
+-			return ret;
+-	}
++	return 0;
++}
++
++static int snvs_rtc_resume_noirq(struct device *dev)
++{
++	struct snvs_rtc_data *data = dev_get_drvdata(dev);
++
++	if (data->clk)
++		return clk_prepare_enable(data->clk);
+ 
+ 	return 0;
+ }
+ 
+ static const struct dev_pm_ops snvs_rtc_pm_ops = {
+-	.suspend_noirq = snvs_rtc_suspend,
+-	.resume_noirq = snvs_rtc_resume,
++	.suspend = snvs_rtc_suspend,
++	.suspend_noirq = snvs_rtc_suspend_noirq,
++	.resume = snvs_rtc_resume,
++	.resume_noirq = snvs_rtc_resume_noirq,
+ };
+ 
+ #define SNVS_RTC_PM_OPS	(&snvs_rtc_pm_ops)
+diff --git a/drivers/staging/comedi/drivers/cb_pcimdas.c b/drivers/staging/comedi/drivers/cb_pcimdas.c
+index c458e5010a74..4ebf5aae5019 100644
+--- a/drivers/staging/comedi/drivers/cb_pcimdas.c
++++ b/drivers/staging/comedi/drivers/cb_pcimdas.c
+@@ -243,7 +243,7 @@ static int cb_pcimdas_ao_insn_write(struct comedi_device *dev,
+ 	return insn->n;
+ }
+ 
+-static int cb_pcimdas_di_insn_read(struct comedi_device *dev,
++static int cb_pcimdas_di_insn_bits(struct comedi_device *dev,
+ 				   struct comedi_subdevice *s,
+ 				   struct comedi_insn *insn,
+ 				   unsigned int *data)
+@@ -258,7 +258,7 @@ static int cb_pcimdas_di_insn_read(struct comedi_device *dev,
+ 	return insn->n;
+ }
+ 
+-static int cb_pcimdas_do_insn_write(struct comedi_device *dev,
++static int cb_pcimdas_do_insn_bits(struct comedi_device *dev,
+ 				    struct comedi_subdevice *s,
+ 				    struct comedi_insn *insn,
+ 				    unsigned int *data)
+@@ -424,7 +424,7 @@ static int cb_pcimdas_auto_attach(struct comedi_device *dev,
+ 	s->n_chan	= 4;
+ 	s->maxdata	= 1;
+ 	s->range_table	= &range_digital;
+-	s->insn_read	= cb_pcimdas_di_insn_read;
++	s->insn_bits	= cb_pcimdas_di_insn_bits;
+ 
+ 	/* Digital Output subdevice (main connector) */
+ 	s = &dev->subdevices[4];
+@@ -433,7 +433,7 @@ static int cb_pcimdas_auto_attach(struct comedi_device *dev,
+ 	s->n_chan	= 4;
+ 	s->maxdata	= 1;
+ 	s->range_table	= &range_digital;
+-	s->insn_write	= cb_pcimdas_do_insn_write;
++	s->insn_bits	= cb_pcimdas_do_insn_bits;
+ 
+ 	/* Counter subdevice (8254) */
+ 	s = &dev->subdevices[5];
+diff --git a/drivers/staging/rtl8712/rtl8712_recv.c b/drivers/staging/rtl8712/rtl8712_recv.c
+index 50227b598e0c..fcb8c61b2884 100644
+--- a/drivers/staging/rtl8712/rtl8712_recv.c
++++ b/drivers/staging/rtl8712/rtl8712_recv.c
+@@ -1056,7 +1056,8 @@ static int recvbuf2recvframe(struct _adapter *padapter, struct sk_buff *pskb)
+ 		/* for first fragment packet, driver need allocate 1536 +
+ 		 * drvinfo_sz + RXDESC_SIZE to defrag packet. */
+ 		if ((mf == 1) && (frag == 0))
+-			alloc_sz = 1658;/*1658+6=1664, 1664 is 128 alignment.*/
++			/*1658+6=1664, 1664 is 128 alignment.*/
++			alloc_sz = max_t(u16, tmp_len, 1658);
+ 		else
+ 			alloc_sz = tmp_len;
+ 		/* 2 is for IP header 4 bytes alignment in QoS packet case.
+diff --git a/drivers/staging/vt6655/device_main.c b/drivers/staging/vt6655/device_main.c
+index 0343ae386f03..15baacb126ad 100644
+--- a/drivers/staging/vt6655/device_main.c
++++ b/drivers/staging/vt6655/device_main.c
+@@ -807,6 +807,10 @@ static int device_rx_srv(struct vnt_private *pDevice, unsigned int uIdx)
+ 	     pRD = pRD->next) {
+ 		if (works++ > 15)
+ 			break;
++
++		if (!pRD->pRDInfo->skb)
++			break;
++
+ 		if (vnt_receive_frame(pDevice, pRD)) {
+ 			if (!device_alloc_rx_buf(pDevice, pRD)) {
+ 				dev_err(&pDevice->pcid->dev,
+@@ -1417,7 +1421,7 @@ static void vnt_bss_info_changed(struct ieee80211_hw *hw,
+ 
+ 	priv->current_aid = conf->aid;
+ 
+-	if (changed & BSS_CHANGED_BSSID) {
++	if (changed & BSS_CHANGED_BSSID && conf->bssid) {
+ 		unsigned long flags;
+ 
+ 		spin_lock_irqsave(&priv->lock, flags);
+diff --git a/drivers/staging/vt6656/main_usb.c b/drivers/staging/vt6656/main_usb.c
+index ab3ab84cb0a7..766fdcece074 100644
+--- a/drivers/staging/vt6656/main_usb.c
++++ b/drivers/staging/vt6656/main_usb.c
+@@ -701,7 +701,7 @@ static void vnt_bss_info_changed(struct ieee80211_hw *hw,
+ 
+ 	priv->current_aid = conf->aid;
+ 
+-	if (changed & BSS_CHANGED_BSSID)
++	if (changed & BSS_CHANGED_BSSID && conf->bssid)
+ 		vnt_mac_set_bssid_addr(priv, (u8 *)conf->bssid);
+ 
+ 
+diff --git a/drivers/tty/serial/Kconfig b/drivers/tty/serial/Kconfig
+index f8120c1bde14..8cd35348fc19 100644
+--- a/drivers/tty/serial/Kconfig
++++ b/drivers/tty/serial/Kconfig
+@@ -241,7 +241,6 @@ config SERIAL_SAMSUNG
+ 	tristate "Samsung SoC serial support"
+ 	depends on PLAT_SAMSUNG || ARCH_EXYNOS
+ 	select SERIAL_CORE
+-	select SERIAL_EARLYCON
+ 	help
+ 	  Support for the on-chip UARTs on the Samsung S3C24XX series CPUs,
+ 	  providing /dev/ttySAC0, 1 and 2 (note, some machines may not
+@@ -277,6 +276,7 @@ config SERIAL_SAMSUNG_CONSOLE
+ 	bool "Support for console on Samsung SoC serial port"
+ 	depends on SERIAL_SAMSUNG=y
+ 	select SERIAL_CORE_CONSOLE
++	select SERIAL_EARLYCON
+ 	help
+ 	  Allow selection of the S3C24XX on-board serial ports for use as
+ 	  an virtual console.
+diff --git a/drivers/tty/serial/atmel_serial.c b/drivers/tty/serial/atmel_serial.c
+index 27dade29646b..5ca1dfb0561c 100644
+--- a/drivers/tty/serial/atmel_serial.c
++++ b/drivers/tty/serial/atmel_serial.c
+@@ -315,8 +315,7 @@ static int atmel_config_rs485(struct uart_port *port,
+ 	if (rs485conf->flags & SER_RS485_ENABLED) {
+ 		dev_dbg(port->dev, "Setting UART to RS485\n");
+ 		atmel_port->tx_done_mask = ATMEL_US_TXEMPTY;
+-		if ((rs485conf->delay_rts_after_send) > 0)
+-			UART_PUT_TTGR(port, rs485conf->delay_rts_after_send);
++		UART_PUT_TTGR(port, rs485conf->delay_rts_after_send);
+ 		mode |= ATMEL_US_USMODE_RS485;
+ 	} else {
+ 		dev_dbg(port->dev, "Setting UART to RS232\n");
+@@ -354,8 +353,7 @@ static void atmel_set_mctrl(struct uart_port *port, u_int mctrl)
+ 
+ 	/* override mode to RS485 if needed, otherwise keep the current mode */
+ 	if (port->rs485.flags & SER_RS485_ENABLED) {
+-		if ((port->rs485.delay_rts_after_send) > 0)
+-			UART_PUT_TTGR(port, port->rs485.delay_rts_after_send);
++		UART_PUT_TTGR(port, port->rs485.delay_rts_after_send);
+ 		mode &= ~ATMEL_US_USMODE;
+ 		mode |= ATMEL_US_USMODE_RS485;
+ 	}
+@@ -2061,8 +2059,7 @@ static void atmel_set_termios(struct uart_port *port, struct ktermios *termios,
+ 
+ 	/* mode */
+ 	if (port->rs485.flags & SER_RS485_ENABLED) {
+-		if ((port->rs485.delay_rts_after_send) > 0)
+-			UART_PUT_TTGR(port, port->rs485.delay_rts_after_send);
++		UART_PUT_TTGR(port, port->rs485.delay_rts_after_send);
+ 		mode |= ATMEL_US_USMODE_RS485;
+ 	} else if (termios->c_cflag & CRTSCTS) {
+ 		/* RS232 with hardware handshake (RTS/CTS) */
+diff --git a/drivers/tty/sysrq.c b/drivers/tty/sysrq.c
+index 843f2cdc280b..9ffdfcf2ec6e 100644
+--- a/drivers/tty/sysrq.c
++++ b/drivers/tty/sysrq.c
+@@ -55,9 +55,6 @@
+ static int __read_mostly sysrq_enabled = CONFIG_MAGIC_SYSRQ_DEFAULT_ENABLE;
+ static bool __read_mostly sysrq_always_enabled;
+ 
+-unsigned short platform_sysrq_reset_seq[] __weak = { KEY_RESERVED };
+-int sysrq_reset_downtime_ms __weak;
+-
+ static bool sysrq_on(void)
+ {
+ 	return sysrq_enabled || sysrq_always_enabled;
+@@ -569,6 +566,7 @@ void handle_sysrq(int key)
+ EXPORT_SYMBOL(handle_sysrq);
+ 
+ #ifdef CONFIG_INPUT
++static int sysrq_reset_downtime_ms;
+ 
+ /* Simple translation table for the SysRq keys */
+ static const unsigned char sysrq_xlate[KEY_CNT] =
+@@ -949,23 +947,8 @@ static bool sysrq_handler_registered;
+ 
+ static inline void sysrq_register_handler(void)
+ {
+-	unsigned short key;
+ 	int error;
+-	int i;
+-
+-	/* First check if a __weak interface was instantiated. */
+-	for (i = 0; i < ARRAY_SIZE(sysrq_reset_seq); i++) {
+-		key = platform_sysrq_reset_seq[i];
+-		if (key == KEY_RESERVED || key > KEY_MAX)
+-			break;
+-
+-		sysrq_reset_seq[sysrq_reset_seq_len++] = key;
+-	}
+ 
+-	/*
+-	 * DT configuration takes precedence over anything that would
+-	 * have been defined via the __weak interface.
+-	 */
+ 	sysrq_of_get_keyreset_config();
+ 
+ 	error = input_register_handler(&sysrq_handler);
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index 4b0448c26810..986abde07683 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -513,7 +513,7 @@ static void async_completed(struct urb *urb)
+ 	snoop(&urb->dev->dev, "urb complete\n");
+ 	snoop_urb(urb->dev, as->userurb, urb->pipe, urb->actual_length,
+ 			as->status, COMPLETE, NULL, 0);
+-	if ((urb->transfer_flags & URB_DIR_MASK) == USB_DIR_IN)
++	if ((urb->transfer_flags & URB_DIR_MASK) == URB_DIR_IN)
+ 		snoop_urb_data(urb, urb->actual_length);
+ 
+ 	if (as->status < 0 && as->bulk_addr && as->status != -ECONNRESET &&
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 45a915ccd71c..1c1385e3a824 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -1022,9 +1022,12 @@ static int register_root_hub(struct usb_hcd *hcd)
+ 				dev_name(&usb_dev->dev), retval);
+ 		return (retval < 0) ? retval : -EMSGSIZE;
+ 	}
+-	if (usb_dev->speed == USB_SPEED_SUPER) {
++
++	if (le16_to_cpu(usb_dev->descriptor.bcdUSB) >= 0x0201) {
+ 		retval = usb_get_bos_descriptor(usb_dev);
+-		if (retval < 0) {
++		if (!retval) {
++			usb_dev->lpm_capable = usb_device_supports_lpm(usb_dev);
++		} else if (usb_dev->speed == USB_SPEED_SUPER) {
+ 			mutex_unlock(&usb_bus_list_lock);
+ 			dev_dbg(parent_dev, "can't read %s bos descriptor %d\n",
+ 					dev_name(&usb_dev->dev), retval);
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 3b7151687776..1e9a8c9aa531 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -122,7 +122,7 @@ struct usb_hub *usb_hub_to_struct_hub(struct usb_device *hdev)
+ 	return usb_get_intfdata(hdev->actconfig->interface[0]);
+ }
+ 
+-static int usb_device_supports_lpm(struct usb_device *udev)
++int usb_device_supports_lpm(struct usb_device *udev)
+ {
+ 	/* USB 2.1 (and greater) devices indicate LPM support through
+ 	 * their USB 2.0 Extended Capabilities BOS descriptor.
+@@ -2616,9 +2616,6 @@ static bool use_new_scheme(struct usb_device *udev, int retry)
+ 	return USE_NEW_SCHEME(retry);
+ }
+ 
+-static int hub_port_reset(struct usb_hub *hub, int port1,
+-			struct usb_device *udev, unsigned int delay, bool warm);
+-
+ /* Is a USB 3.0 port in the Inactive or Compliance Mode state?
+  * Port worm reset is required to recover
+  */
+@@ -2706,44 +2703,6 @@ static int hub_port_wait_reset(struct usb_hub *hub, int port1,
+ 	return 0;
+ }
+ 
+-static void hub_port_finish_reset(struct usb_hub *hub, int port1,
+-			struct usb_device *udev, int *status)
+-{
+-	switch (*status) {
+-	case 0:
+-		/* TRSTRCY = 10 ms; plus some extra */
+-		msleep(10 + 40);
+-		if (udev) {
+-			struct usb_hcd *hcd = bus_to_hcd(udev->bus);
+-
+-			update_devnum(udev, 0);
+-			/* The xHC may think the device is already reset,
+-			 * so ignore the status.
+-			 */
+-			if (hcd->driver->reset_device)
+-				hcd->driver->reset_device(hcd, udev);
+-		}
+-		/* FALL THROUGH */
+-	case -ENOTCONN:
+-	case -ENODEV:
+-		usb_clear_port_feature(hub->hdev,
+-				port1, USB_PORT_FEAT_C_RESET);
+-		if (hub_is_superspeed(hub->hdev)) {
+-			usb_clear_port_feature(hub->hdev, port1,
+-					USB_PORT_FEAT_C_BH_PORT_RESET);
+-			usb_clear_port_feature(hub->hdev, port1,
+-					USB_PORT_FEAT_C_PORT_LINK_STATE);
+-			usb_clear_port_feature(hub->hdev, port1,
+-					USB_PORT_FEAT_C_CONNECTION);
+-		}
+-		if (udev)
+-			usb_set_device_state(udev, *status
+-					? USB_STATE_NOTATTACHED
+-					: USB_STATE_DEFAULT);
+-		break;
+-	}
+-}
+-
+ /* Handle port reset and port warm(BH) reset (for USB3 protocol ports) */
+ static int hub_port_reset(struct usb_hub *hub, int port1,
+ 			struct usb_device *udev, unsigned int delay, bool warm)
+@@ -2767,13 +2726,10 @@ static int hub_port_reset(struct usb_hub *hub, int port1,
+ 		 * If the caller hasn't explicitly requested a warm reset,
+ 		 * double check and see if one is needed.
+ 		 */
+-		status = hub_port_status(hub, port1,
+-					&portstatus, &portchange);
+-		if (status < 0)
+-			goto done;
+-
+-		if (hub_port_warm_reset_required(hub, port1, portstatus))
+-			warm = true;
++		if (hub_port_status(hub, port1, &portstatus, &portchange) == 0)
++			if (hub_port_warm_reset_required(hub, port1,
++							portstatus))
++				warm = true;
+ 	}
+ 	clear_bit(port1, hub->warm_reset_bits);
+ 
+@@ -2799,11 +2755,19 @@ static int hub_port_reset(struct usb_hub *hub, int port1,
+ 
+ 		/* Check for disconnect or reset */
+ 		if (status == 0 || status == -ENOTCONN || status == -ENODEV) {
+-			hub_port_finish_reset(hub, port1, udev, &status);
++			usb_clear_port_feature(hub->hdev, port1,
++					USB_PORT_FEAT_C_RESET);
+ 
+ 			if (!hub_is_superspeed(hub->hdev))
+ 				goto done;
+ 
++			usb_clear_port_feature(hub->hdev, port1,
++					USB_PORT_FEAT_C_BH_PORT_RESET);
++			usb_clear_port_feature(hub->hdev, port1,
++					USB_PORT_FEAT_C_PORT_LINK_STATE);
++			usb_clear_port_feature(hub->hdev, port1,
++					USB_PORT_FEAT_C_CONNECTION);
++
+ 			/*
+ 			 * If a USB 3.0 device migrates from reset to an error
+ 			 * state, re-issue the warm reset.
+@@ -2836,6 +2800,26 @@ static int hub_port_reset(struct usb_hub *hub, int port1,
+ 	dev_err(&port_dev->dev, "Cannot enable. Maybe the USB cable is bad?\n");
+ 
+ done:
++	if (status == 0) {
++		/* TRSTRCY = 10 ms; plus some extra */
++		msleep(10 + 40);
++		if (udev) {
++			struct usb_hcd *hcd = bus_to_hcd(udev->bus);
++
++			update_devnum(udev, 0);
++			/* The xHC may think the device is already reset,
++			 * so ignore the status.
++			 */
++			if (hcd->driver->reset_device)
++				hcd->driver->reset_device(hcd, udev);
++
++			usb_set_device_state(udev, USB_STATE_DEFAULT);
++		}
++	} else {
++		if (udev)
++			usb_set_device_state(udev, USB_STATE_NOTATTACHED);
++	}
++
+ 	if (!hub_is_superspeed(hub->hdev))
+ 		up_read(&ehci_cf_port_reset_rwsem);
+ 
+diff --git a/drivers/usb/core/usb.h b/drivers/usb/core/usb.h
+index 7eb1e26798e5..457255a3306a 100644
+--- a/drivers/usb/core/usb.h
++++ b/drivers/usb/core/usb.h
+@@ -65,6 +65,7 @@ extern int  usb_hub_init(void);
+ extern void usb_hub_cleanup(void);
+ extern int usb_major_init(void);
+ extern void usb_major_cleanup(void);
++extern int usb_device_supports_lpm(struct usb_device *udev);
+ 
+ #ifdef	CONFIG_PM
+ 
+diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
+index 2ef3c8d6a9db..69e769c35cf5 100644
+--- a/drivers/usb/dwc3/ep0.c
++++ b/drivers/usb/dwc3/ep0.c
+@@ -727,6 +727,10 @@ static int dwc3_ep0_std_request(struct dwc3 *dwc, struct usb_ctrlrequest *ctrl)
+ 		dwc3_trace(trace_dwc3_ep0, "USB_REQ_SET_ISOCH_DELAY");
+ 		ret = dwc3_ep0_set_isoch_delay(dwc, ctrl);
+ 		break;
++	case USB_REQ_SET_INTERFACE:
++		dwc3_trace(trace_dwc3_ep0, "USB_REQ_SET_INTERFACE");
++		dwc->start_config_issued = false;
++		/* Fall through */
+ 	default:
+ 		dwc3_trace(trace_dwc3_ep0, "Forwarding to gadget driver");
+ 		ret = dwc3_ep0_delegate_req(dwc, ctrl);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 8946c32047e9..333a7c0078fc 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -291,6 +291,8 @@ int dwc3_send_gadget_generic_command(struct dwc3 *dwc, unsigned cmd, u32 param)
+ 			dwc3_trace(trace_dwc3_gadget,
+ 					"Command Complete --> %d",
+ 					DWC3_DGCMD_STATUS(reg));
++			if (DWC3_DGCMD_STATUS(reg))
++				return -EINVAL;
+ 			return 0;
+ 		}
+ 
+@@ -328,6 +330,8 @@ int dwc3_send_gadget_ep_cmd(struct dwc3 *dwc, unsigned ep,
+ 			dwc3_trace(trace_dwc3_gadget,
+ 					"Command Complete --> %d",
+ 					DWC3_DEPCMD_STATUS(reg));
++			if (DWC3_DEPCMD_STATUS(reg))
++				return -EINVAL;
+ 			return 0;
+ 		}
+ 
+@@ -1902,12 +1906,16 @@ static void dwc3_endpoint_transfer_complete(struct dwc3 *dwc,
+ {
+ 	unsigned		status = 0;
+ 	int			clean_busy;
++	u32			is_xfer_complete;
++
++	is_xfer_complete = (event->endpoint_event == DWC3_DEPEVT_XFERCOMPLETE);
+ 
+ 	if (event->status & DEPEVT_STATUS_BUSERR)
+ 		status = -ECONNRESET;
+ 
+ 	clean_busy = dwc3_cleanup_done_reqs(dwc, dep, event, status);
+-	if (clean_busy)
++	if (clean_busy && (is_xfer_complete ||
++				usb_endpoint_xfer_isoc(dep->endpoint.desc)))
+ 		dep->flags &= ~DWC3_EP_BUSY;
+ 
+ 	/*
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 4e3447bbd097..58b4657fc721 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -1758,10 +1758,13 @@ unknown:
+ 		 * take such requests too, if that's ever needed:  to work
+ 		 * in config 0, etc.
+ 		 */
+-		list_for_each_entry(f, &cdev->config->functions, list)
+-			if (f->req_match && f->req_match(f, ctrl))
+-				goto try_fun_setup;
+-		f = NULL;
++		if (cdev->config) {
++			list_for_each_entry(f, &cdev->config->functions, list)
++				if (f->req_match && f->req_match(f, ctrl))
++					goto try_fun_setup;
++			f = NULL;
++		}
++
+ 		switch (ctrl->bRequestType & USB_RECIP_MASK) {
+ 		case USB_RECIP_INTERFACE:
+ 			if (!cdev->config || intf >= MAX_CONFIG_INTERFACES)
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 45b8c8b338df..6e7be91e6097 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -924,7 +924,8 @@ static ssize_t ffs_epfile_write_iter(struct kiocb *kiocb, struct iov_iter *from)
+ 
+ 	kiocb->private = p;
+ 
+-	kiocb_set_cancel_fn(kiocb, ffs_aio_cancel);
++	if (p->aio)
++		kiocb_set_cancel_fn(kiocb, ffs_aio_cancel);
+ 
+ 	res = ffs_epfile_io(kiocb->ki_filp, p);
+ 	if (res == -EIOCBQUEUED)
+@@ -968,7 +969,8 @@ static ssize_t ffs_epfile_read_iter(struct kiocb *kiocb, struct iov_iter *to)
+ 
+ 	kiocb->private = p;
+ 
+-	kiocb_set_cancel_fn(kiocb, ffs_aio_cancel);
++	if (p->aio)
++		kiocb_set_cancel_fn(kiocb, ffs_aio_cancel);
+ 
+ 	res = ffs_epfile_io(kiocb->ki_filp, p);
+ 	if (res == -EIOCBQUEUED)
+diff --git a/drivers/usb/gadget/function/f_mass_storage.c b/drivers/usb/gadget/function/f_mass_storage.c
+index 3cc109f3c9c8..15c307155037 100644
+--- a/drivers/usb/gadget/function/f_mass_storage.c
++++ b/drivers/usb/gadget/function/f_mass_storage.c
+@@ -2786,7 +2786,7 @@ int fsg_common_set_nluns(struct fsg_common *common, int nluns)
+ 		return -EINVAL;
+ 	}
+ 
+-	curlun = kcalloc(nluns, sizeof(*curlun), GFP_KERNEL);
++	curlun = kcalloc(FSG_MAX_LUNS, sizeof(*curlun), GFP_KERNEL);
+ 	if (unlikely(!curlun))
+ 		return -ENOMEM;
+ 
+@@ -2796,8 +2796,6 @@ int fsg_common_set_nluns(struct fsg_common *common, int nluns)
+ 	common->luns = curlun;
+ 	common->nluns = nluns;
+ 
+-	pr_info("Number of LUNs=%d\n", common->nluns);
+-
+ 	return 0;
+ }
+ EXPORT_SYMBOL_GPL(fsg_common_set_nluns);
+@@ -3563,14 +3561,26 @@ static struct usb_function *fsg_alloc(struct usb_function_instance *fi)
+ 	struct fsg_opts *opts = fsg_opts_from_func_inst(fi);
+ 	struct fsg_common *common = opts->common;
+ 	struct fsg_dev *fsg;
++	unsigned nluns, i;
+ 
+ 	fsg = kzalloc(sizeof(*fsg), GFP_KERNEL);
+ 	if (unlikely(!fsg))
+ 		return ERR_PTR(-ENOMEM);
+ 
+ 	mutex_lock(&opts->lock);
++	if (!opts->refcnt) {
++		for (nluns = i = 0; i < FSG_MAX_LUNS; ++i)
++			if (common->luns[i])
++				nluns = i + 1;
++		if (!nluns)
++			pr_warn("No LUNS defined, continuing anyway\n");
++		else
++			common->nluns = nluns;
++		pr_info("Number of LUNs=%u\n", common->nluns);
++	}
+ 	opts->refcnt++;
+ 	mutex_unlock(&opts->lock);
++
+ 	fsg->function.name	= FSG_DRIVER_DESC;
+ 	fsg->function.bind	= fsg_bind;
+ 	fsg->function.unbind	= fsg_unbind;
+diff --git a/drivers/usb/gadget/udc/mv_udc_core.c b/drivers/usb/gadget/udc/mv_udc_core.c
+index d32160d6463f..5da37c957b53 100644
+--- a/drivers/usb/gadget/udc/mv_udc_core.c
++++ b/drivers/usb/gadget/udc/mv_udc_core.c
+@@ -2167,7 +2167,7 @@ static int mv_udc_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	}
+ 
+-	udc->phy_regs = ioremap(r->start, resource_size(r));
++	udc->phy_regs = devm_ioremap(&pdev->dev, r->start, resource_size(r));
+ 	if (udc->phy_regs == NULL) {
+ 		dev_err(&pdev->dev, "failed to map phy I/O memory\n");
+ 		return -EBUSY;
+diff --git a/drivers/usb/host/ohci-q.c b/drivers/usb/host/ohci-q.c
+index 1463c398d322..fe1d5fc7da2d 100644
+--- a/drivers/usb/host/ohci-q.c
++++ b/drivers/usb/host/ohci-q.c
+@@ -980,10 +980,6 @@ rescan_all:
+ 		int			completed, modified;
+ 		__hc32			*prev;
+ 
+-		/* Is this ED already invisible to the hardware? */
+-		if (ed->state == ED_IDLE)
+-			goto ed_idle;
+-
+ 		/* only take off EDs that the HC isn't using, accounting for
+ 		 * frame counter wraps and EDs with partially retired TDs
+ 		 */
+@@ -1011,12 +1007,10 @@ skip_ed:
+ 		}
+ 
+ 		/* ED's now officially unlinked, hc doesn't see */
+-		ed->state = ED_IDLE;
+ 		ed->hwHeadP &= ~cpu_to_hc32(ohci, ED_H);
+ 		ed->hwNextED = 0;
+ 		wmb();
+ 		ed->hwINFO &= ~cpu_to_hc32(ohci, ED_SKIP | ED_DEQUEUE);
+-ed_idle:
+ 
+ 		/* reentrancy:  if we drop the schedule lock, someone might
+ 		 * have modified this list.  normally it's just prepending
+@@ -1087,6 +1081,7 @@ rescan_this:
+ 		if (list_empty(&ed->td_list)) {
+ 			*last = ed->ed_next;
+ 			ed->ed_next = NULL;
++			ed->state = ED_IDLE;
+ 			list_del(&ed->in_use_list);
+ 		} else if (ohci->rh_state == OHCI_RH_RUNNING) {
+ 			*last = ed->ed_next;
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index f8336408ef07..3e442f77a2b9 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -1427,10 +1427,10 @@ int xhci_endpoint_init(struct xhci_hcd *xhci,
+ 		/* Attempt to use the ring cache */
+ 		if (virt_dev->num_rings_cached == 0)
+ 			return -ENOMEM;
++		virt_dev->num_rings_cached--;
+ 		virt_dev->eps[ep_index].new_ring =
+ 			virt_dev->ring_cache[virt_dev->num_rings_cached];
+ 		virt_dev->ring_cache[virt_dev->num_rings_cached] = NULL;
+-		virt_dev->num_rings_cached--;
+ 		xhci_reinit_cached_ring(xhci, virt_dev->eps[ep_index].new_ring,
+ 					1, type);
+ 	}
+diff --git a/drivers/usb/musb/musb_virthub.c b/drivers/usb/musb/musb_virthub.c
+index 86c4b533e90b..4731baca377f 100644
+--- a/drivers/usb/musb/musb_virthub.c
++++ b/drivers/usb/musb/musb_virthub.c
+@@ -273,9 +273,7 @@ static int musb_has_gadget(struct musb *musb)
+ #ifdef CONFIG_USB_MUSB_HOST
+ 	return 1;
+ #else
+-	if (musb->port_mode == MUSB_PORT_MODE_HOST)
+-		return 1;
+-	return musb->g.dev.driver != NULL;
++	return musb->port_mode == MUSB_PORT_MODE_HOST;
+ #endif
+ }
+ 
+diff --git a/drivers/usb/phy/phy-mxs-usb.c b/drivers/usb/phy/phy-mxs-usb.c
+index 8f7cb068d29b..3fcc0483a081 100644
+--- a/drivers/usb/phy/phy-mxs-usb.c
++++ b/drivers/usb/phy/phy-mxs-usb.c
+@@ -217,6 +217,9 @@ static bool mxs_phy_get_vbus_status(struct mxs_phy *mxs_phy)
+ {
+ 	unsigned int vbus_value;
+ 
++	if (!mxs_phy->regmap_anatop)
++		return false;
++
+ 	if (mxs_phy->port_id == 0)
+ 		regmap_read(mxs_phy->regmap_anatop,
+ 			ANADIG_USB1_VBUS_DET_STAT,
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index ffd739e31bfc..eac7ccaa3c85 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -187,6 +187,7 @@ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x1FB9, 0x0602) }, /* Lake Shore Model 648 Magnet Power Supply */
+ 	{ USB_DEVICE(0x1FB9, 0x0700) }, /* Lake Shore Model 737 VSM Controller */
+ 	{ USB_DEVICE(0x1FB9, 0x0701) }, /* Lake Shore Model 776 Hall Matrix */
++	{ USB_DEVICE(0x2626, 0xEA60) }, /* Aruba Networks 7xxx USB Serial Console */
+ 	{ USB_DEVICE(0x3195, 0xF190) }, /* Link Instruments MSO-19 */
+ 	{ USB_DEVICE(0x3195, 0xF280) }, /* Link Instruments MSO-28 */
+ 	{ USB_DEVICE(0x3195, 0xF281) }, /* Link Instruments MSO-28 */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index f0c0c53359ad..19b85ee98a72 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1765,6 +1765,7 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x7d03, 0xff, 0x00, 0x00) },
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) }, /* D-Link DWM-152/C1 */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) }, /* D-Link DWM-156/C1 */
++	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) },                /* OLICARD300 - MT6225 */
+ 	{ USB_DEVICE(INOVIA_VENDOR_ID, INOVIA_SEW858) },
+ 	{ USB_DEVICE(VIATELECOM_VENDOR_ID, VIATELECOM_PRODUCT_CDS7) },
+ 	{ } /* Terminating entry */
+diff --git a/drivers/usb/serial/usb-serial.c b/drivers/usb/serial/usb-serial.c
+index 529066bbc7e8..46f1f13b41f1 100644
+--- a/drivers/usb/serial/usb-serial.c
++++ b/drivers/usb/serial/usb-serial.c
+@@ -1306,6 +1306,7 @@ static void __exit usb_serial_exit(void)
+ 	tty_unregister_driver(usb_serial_tty_driver);
+ 	put_tty_driver(usb_serial_tty_driver);
+ 	bus_unregister(&usb_serial_bus_type);
++	idr_destroy(&serial_minors);
+ }
+ 
+ 
+diff --git a/drivers/w1/slaves/w1_therm.c b/drivers/w1/slaves/w1_therm.c
+index 1f11a20a8ab9..55eb86c9e214 100644
+--- a/drivers/w1/slaves/w1_therm.c
++++ b/drivers/w1/slaves/w1_therm.c
+@@ -59,16 +59,32 @@ MODULE_ALIAS("w1-family-" __stringify(W1_THERM_DS28EA00));
+ static int w1_strong_pullup = 1;
+ module_param_named(strong_pullup, w1_strong_pullup, int, 0);
+ 
++struct w1_therm_family_data {
++	uint8_t rom[9];
++	atomic_t refcnt;
++};
++
++/* return the address of the refcnt in the family data */
++#define THERM_REFCNT(family_data) \
++	(&((struct w1_therm_family_data*)family_data)->refcnt)
++
+ static int w1_therm_add_slave(struct w1_slave *sl)
+ {
+-	sl->family_data = kzalloc(9, GFP_KERNEL);
++	sl->family_data = kzalloc(sizeof(struct w1_therm_family_data),
++		GFP_KERNEL);
+ 	if (!sl->family_data)
+ 		return -ENOMEM;
++	atomic_set(THERM_REFCNT(sl->family_data), 1);
+ 	return 0;
+ }
+ 
+ static void w1_therm_remove_slave(struct w1_slave *sl)
+ {
++	int refcnt = atomic_sub_return(1, THERM_REFCNT(sl->family_data));
++	while(refcnt) {
++		msleep(1000);
++		refcnt = atomic_read(THERM_REFCNT(sl->family_data));
++	}
+ 	kfree(sl->family_data);
+ 	sl->family_data = NULL;
+ }
+@@ -194,13 +210,22 @@ static ssize_t w1_slave_show(struct device *device,
+ 	struct w1_slave *sl = dev_to_w1_slave(device);
+ 	struct w1_master *dev = sl->master;
+ 	u8 rom[9], crc, verdict, external_power;
+-	int i, max_trying = 10;
++	int i, ret, max_trying = 10;
+ 	ssize_t c = PAGE_SIZE;
++	u8 *family_data = sl->family_data;
++
++	ret = mutex_lock_interruptible(&dev->bus_mutex);
++	if (ret != 0)
++		goto post_unlock;
+ 
+-	i = mutex_lock_interruptible(&dev->bus_mutex);
+-	if (i != 0)
+-		return i;
++	if(!sl->family_data)
++	{
++		ret = -ENODEV;
++		goto pre_unlock;
++	}
+ 
++	/* prevent the slave from going away in sleep */
++	atomic_inc(THERM_REFCNT(family_data));
+ 	memset(rom, 0, sizeof(rom));
+ 
+ 	while (max_trying--) {
+@@ -230,17 +255,19 @@ static ssize_t w1_slave_show(struct device *device,
+ 				mutex_unlock(&dev->bus_mutex);
+ 
+ 				sleep_rem = msleep_interruptible(tm);
+-				if (sleep_rem != 0)
+-					return -EINTR;
++				if (sleep_rem != 0) {
++					ret = -EINTR;
++					goto post_unlock;
++				}
+ 
+-				i = mutex_lock_interruptible(&dev->bus_mutex);
+-				if (i != 0)
+-					return i;
++				ret = mutex_lock_interruptible(&dev->bus_mutex);
++				if (ret != 0)
++					goto post_unlock;
+ 			} else if (!w1_strong_pullup) {
+ 				sleep_rem = msleep_interruptible(tm);
+ 				if (sleep_rem != 0) {
+-					mutex_unlock(&dev->bus_mutex);
+-					return -EINTR;
++					ret = -EINTR;
++					goto pre_unlock;
+ 				}
+ 			}
+ 
+@@ -269,19 +296,24 @@ static ssize_t w1_slave_show(struct device *device,
+ 	c -= snprintf(buf + PAGE_SIZE - c, c, ": crc=%02x %s\n",
+ 			   crc, (verdict) ? "YES" : "NO");
+ 	if (verdict)
+-		memcpy(sl->family_data, rom, sizeof(rom));
++		memcpy(family_data, rom, sizeof(rom));
+ 	else
+ 		dev_warn(device, "Read failed CRC check\n");
+ 
+ 	for (i = 0; i < 9; ++i)
+ 		c -= snprintf(buf + PAGE_SIZE - c, c, "%02x ",
+-			      ((u8 *)sl->family_data)[i]);
++			      ((u8 *)family_data)[i]);
+ 
+ 	c -= snprintf(buf + PAGE_SIZE - c, c, "t=%d\n",
+ 		w1_convert_temp(rom, sl->family->fid));
++	ret = PAGE_SIZE - c;
++
++pre_unlock:
+ 	mutex_unlock(&dev->bus_mutex);
+ 
+-	return PAGE_SIZE - c;
++post_unlock:
++	atomic_dec(THERM_REFCNT(family_data));
++	return ret;
+ }
+ 
+ static int __init w1_therm_init(void)
+diff --git a/drivers/watchdog/omap_wdt.c b/drivers/watchdog/omap_wdt.c
+index 1e6be9e40577..c9c97dacf452 100644
+--- a/drivers/watchdog/omap_wdt.c
++++ b/drivers/watchdog/omap_wdt.c
+@@ -132,6 +132,13 @@ static int omap_wdt_start(struct watchdog_device *wdog)
+ 
+ 	pm_runtime_get_sync(wdev->dev);
+ 
++	/*
++	 * Make sure the watchdog is disabled. This is unfortunately required
++	 * because writing to various registers with the watchdog running has no
++	 * effect.
++	 */
++	omap_wdt_disable(wdev);
++
+ 	/* initialize prescaler */
+ 	while (readl_relaxed(base + OMAP_WATCHDOG_WPS) & 0x01)
+ 		cpu_relax();
+diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
+index 703342e309f5..53f1e8a21707 100644
+--- a/fs/9p/vfs_inode.c
++++ b/fs/9p/vfs_inode.c
+@@ -540,8 +540,7 @@ static struct inode *v9fs_qid_iget(struct super_block *sb,
+ 	unlock_new_inode(inode);
+ 	return inode;
+ error:
+-	unlock_new_inode(inode);
+-	iput(inode);
++	iget_failed(inode);
+ 	return ERR_PTR(retval);
+ 
+ }
+diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
+index 9861c7c951a6..4d3ecfb55fcf 100644
+--- a/fs/9p/vfs_inode_dotl.c
++++ b/fs/9p/vfs_inode_dotl.c
+@@ -149,8 +149,7 @@ static struct inode *v9fs_qid_iget_dotl(struct super_block *sb,
+ 	unlock_new_inode(inode);
+ 	return inode;
+ error:
+-	unlock_new_inode(inode);
+-	iput(inode);
++	iget_failed(inode);
+ 	return ERR_PTR(retval);
+ 
+ }
+diff --git a/fs/btrfs/inode-map.c b/fs/btrfs/inode-map.c
+index f6a596d5a637..d4a582ac3f73 100644
+--- a/fs/btrfs/inode-map.c
++++ b/fs/btrfs/inode-map.c
+@@ -246,6 +246,7 @@ void btrfs_unpin_free_ino(struct btrfs_root *root)
+ {
+ 	struct btrfs_free_space_ctl *ctl = root->free_ino_ctl;
+ 	struct rb_root *rbroot = &root->free_ino_pinned->free_space_offset;
++	spinlock_t *rbroot_lock = &root->free_ino_pinned->tree_lock;
+ 	struct btrfs_free_space *info;
+ 	struct rb_node *n;
+ 	u64 count;
+@@ -254,24 +255,30 @@ void btrfs_unpin_free_ino(struct btrfs_root *root)
+ 		return;
+ 
+ 	while (1) {
++		bool add_to_ctl = true;
++
++		spin_lock(rbroot_lock);
+ 		n = rb_first(rbroot);
+-		if (!n)
++		if (!n) {
++			spin_unlock(rbroot_lock);
+ 			break;
++		}
+ 
+ 		info = rb_entry(n, struct btrfs_free_space, offset_index);
+ 		BUG_ON(info->bitmap); /* Logic error */
+ 
+ 		if (info->offset > root->ino_cache_progress)
+-			goto free;
++			add_to_ctl = false;
+ 		else if (info->offset + info->bytes > root->ino_cache_progress)
+ 			count = root->ino_cache_progress - info->offset + 1;
+ 		else
+ 			count = info->bytes;
+ 
+-		__btrfs_add_free_space(ctl, info->offset, count);
+-free:
+ 		rb_erase(&info->offset_index, rbroot);
+-		kfree(info);
++		spin_unlock(rbroot_lock);
++		if (add_to_ctl)
++			__btrfs_add_free_space(ctl, info->offset, count);
++		kmem_cache_free(btrfs_free_space_cachep, info);
+ 	}
+ }
+ 
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 1c22c6518504..37d456a9a3b8 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -2413,8 +2413,6 @@ static noinline int btrfs_ioctl_snap_destroy(struct file *file,
+ 		goto out_unlock_inode;
+ 	}
+ 
+-	d_invalidate(dentry);
+-
+ 	down_write(&root->fs_info->subvol_sem);
+ 
+ 	err = may_destroy_subvol(dest);
+@@ -2508,7 +2506,7 @@ out_up_write:
+ out_unlock_inode:
+ 	mutex_unlock(&inode->i_mutex);
+ 	if (!err) {
+-		shrink_dcache_sb(root->fs_info->sb);
++		d_invalidate(dentry);
+ 		btrfs_invalidate_inodes(dest);
+ 		d_delete(dentry);
+ 		ASSERT(dest->send_in_progress == 0);
+@@ -2940,7 +2938,7 @@ out_unlock:
+ static long btrfs_ioctl_file_extent_same(struct file *file,
+ 			struct btrfs_ioctl_same_args __user *argp)
+ {
+-	struct btrfs_ioctl_same_args *same;
++	struct btrfs_ioctl_same_args *same = NULL;
+ 	struct btrfs_ioctl_same_extent_info *info;
+ 	struct inode *src = file_inode(file);
+ 	u64 off;
+@@ -2970,6 +2968,7 @@ static long btrfs_ioctl_file_extent_same(struct file *file,
+ 
+ 	if (IS_ERR(same)) {
+ 		ret = PTR_ERR(same);
++		same = NULL;
+ 		goto out;
+ 	}
+ 
+@@ -3040,6 +3039,7 @@ static long btrfs_ioctl_file_extent_same(struct file *file,
+ 
+ out:
+ 	mnt_drop_write_file(file);
++	kfree(same);
+ 	return ret;
+ }
+ 
+@@ -3434,6 +3434,20 @@ process_slot:
+ 				u64 trim = 0;
+ 				u64 aligned_end = 0;
+ 
++				/*
++				 * Don't copy an inline extent into an offset
++				 * greater than zero. Having an inline extent
++				 * at such an offset results in chaos as btrfs
++				 * isn't prepared for such cases. Just skip
++				 * this case for the same reasons as commented
++				 * at btrfs_ioctl_clone().
++				 */
++				if (last_dest_end > 0) {
++					ret = -EOPNOTSUPP;
++					btrfs_end_transaction(trans, root);
++					goto out;
++				}
++
+ 				if (off > key.offset) {
+ 					skip = off - key.offset;
+ 					new_key.offset += skip;
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 5628e25250c0..94e909c5a503 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -758,7 +758,7 @@ static int __btrfs_end_transaction(struct btrfs_trans_handle *trans,
+ 
+ 	if (!list_empty(&trans->ordered)) {
+ 		spin_lock(&info->trans_lock);
+-		list_splice(&trans->ordered, &cur_trans->pending_ordered);
++		list_splice_init(&trans->ordered, &cur_trans->pending_ordered);
+ 		spin_unlock(&info->trans_lock);
+ 	}
+ 
+@@ -1848,7 +1848,7 @@ int btrfs_commit_transaction(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	spin_lock(&root->fs_info->trans_lock);
+-	list_splice(&trans->ordered, &cur_trans->pending_ordered);
++	list_splice_init(&trans->ordered, &cur_trans->pending_ordered);
+ 	if (cur_trans->state >= TRANS_STATE_COMMIT_START) {
+ 		spin_unlock(&root->fs_info->trans_lock);
+ 		atomic_inc(&cur_trans->use_count);
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index d04968374e9d..4920fceffacb 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -4161,6 +4161,7 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
+ 	u64 ino = btrfs_ino(inode);
+ 	struct extent_map_tree *em_tree = &BTRFS_I(inode)->extent_tree;
+ 	u64 logged_isize = 0;
++	bool need_log_inode_item = true;
+ 
+ 	path = btrfs_alloc_path();
+ 	if (!path)
+@@ -4269,11 +4270,6 @@ static int btrfs_log_inode(struct btrfs_trans_handle *trans,
+ 		} else {
+ 			if (inode_only == LOG_INODE_ALL)
+ 				fast_search = true;
+-			ret = log_inode_item(trans, log, dst_path, inode);
+-			if (ret) {
+-				err = ret;
+-				goto out_unlock;
+-			}
+ 			goto log_extents;
+ 		}
+ 
+@@ -4296,6 +4292,9 @@ again:
+ 		if (min_key.type > max_key.type)
+ 			break;
+ 
++		if (min_key.type == BTRFS_INODE_ITEM_KEY)
++			need_log_inode_item = false;
++
+ 		src = path->nodes[0];
+ 		if (ins_nr && ins_start_slot + ins_nr == path->slots[0]) {
+ 			ins_nr++;
+@@ -4366,6 +4365,11 @@ next_slot:
+ log_extents:
+ 	btrfs_release_path(path);
+ 	btrfs_release_path(dst_path);
++	if (need_log_inode_item) {
++		err = log_inode_item(trans, log, dst_path, inode);
++		if (err)
++			goto out_unlock;
++	}
+ 	if (fast_search) {
+ 		/*
+ 		 * Some ordered extents started by fsync might have completed
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index e003a1e81dc3..87ba10d1d3bc 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -503,7 +503,7 @@ __read_extent_tree_block(const char *function, unsigned int line,
+ 	struct buffer_head		*bh;
+ 	int				err;
+ 
+-	bh = sb_getblk(inode->i_sb, pblk);
++	bh = sb_getblk_gfp(inode->i_sb, pblk, __GFP_MOVABLE | GFP_NOFS);
+ 	if (unlikely(!bh))
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -1088,7 +1088,7 @@ static int ext4_ext_split(handle_t *handle, struct inode *inode,
+ 		err = -EIO;
+ 		goto cleanup;
+ 	}
+-	bh = sb_getblk(inode->i_sb, newblock);
++	bh = sb_getblk_gfp(inode->i_sb, newblock, __GFP_MOVABLE | GFP_NOFS);
+ 	if (unlikely(!bh)) {
+ 		err = -ENOMEM;
+ 		goto cleanup;
+@@ -1282,7 +1282,7 @@ static int ext4_ext_grow_indepth(handle_t *handle, struct inode *inode,
+ 	if (newblock == 0)
+ 		return err;
+ 
+-	bh = sb_getblk(inode->i_sb, newblock);
++	bh = sb_getblk_gfp(inode->i_sb, newblock, __GFP_MOVABLE | GFP_NOFS);
+ 	if (unlikely(!bh))
+ 		return -ENOMEM;
+ 	lock_buffer(bh);
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index 958824019509..94ae6874c2cb 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -565,7 +565,7 @@ int ext4_ind_map_blocks(handle_t *handle, struct inode *inode,
+ 				       EXT4_FEATURE_RO_COMPAT_BIGALLOC)) {
+ 		EXT4_ERROR_INODE(inode, "Can't allocate blocks for "
+ 				 "non-extent mapped inodes with bigalloc");
+-		return -ENOSPC;
++		return -EUCLEAN;
+ 	}
+ 
+ 	/* Set up for the direct block allocation */
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 0554b0b5957b..966c614822cc 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1342,7 +1342,7 @@ static void ext4_da_page_release_reservation(struct page *page,
+ 					     unsigned int offset,
+ 					     unsigned int length)
+ {
+-	int to_release = 0;
++	int to_release = 0, contiguous_blks = 0;
+ 	struct buffer_head *head, *bh;
+ 	unsigned int curr_off = 0;
+ 	struct inode *inode = page->mapping->host;
+@@ -1363,14 +1363,23 @@ static void ext4_da_page_release_reservation(struct page *page,
+ 
+ 		if ((offset <= curr_off) && (buffer_delay(bh))) {
+ 			to_release++;
++			contiguous_blks++;
+ 			clear_buffer_delay(bh);
++		} else if (contiguous_blks) {
++			lblk = page->index <<
++			       (PAGE_CACHE_SHIFT - inode->i_blkbits);
++			lblk += (curr_off >> inode->i_blkbits) -
++				contiguous_blks;
++			ext4_es_remove_extent(inode, lblk, contiguous_blks);
++			contiguous_blks = 0;
+ 		}
+ 		curr_off = next_off;
+ 	} while ((bh = bh->b_this_page) != head);
+ 
+-	if (to_release) {
++	if (contiguous_blks) {
+ 		lblk = page->index << (PAGE_CACHE_SHIFT - inode->i_blkbits);
+-		ext4_es_remove_extent(inode, lblk, to_release);
++		lblk += (curr_off >> inode->i_blkbits) - contiguous_blks;
++		ext4_es_remove_extent(inode, lblk, contiguous_blks);
+ 	}
+ 
+ 	/* If we have released all the blocks belonging to a cluster, then we
+@@ -1701,19 +1710,32 @@ static int __ext4_journalled_writepage(struct page *page,
+ 		ext4_walk_page_buffers(handle, page_bufs, 0, len,
+ 				       NULL, bget_one);
+ 	}
+-	/* As soon as we unlock the page, it can go away, but we have
+-	 * references to buffers so we are safe */
++	/*
++	 * We need to release the page lock before we start the
++	 * journal, so grab a reference so the page won't disappear
++	 * out from under us.
++	 */
++	get_page(page);
+ 	unlock_page(page);
+ 
+ 	handle = ext4_journal_start(inode, EXT4_HT_WRITE_PAGE,
+ 				    ext4_writepage_trans_blocks(inode));
+ 	if (IS_ERR(handle)) {
+ 		ret = PTR_ERR(handle);
+-		goto out;
++		put_page(page);
++		goto out_no_pagelock;
+ 	}
+-
+ 	BUG_ON(!ext4_handle_valid(handle));
+ 
++	lock_page(page);
++	put_page(page);
++	if (page->mapping != mapping) {
++		/* The page got truncated from under us */
++		ext4_journal_stop(handle);
++		ret = 0;
++		goto out;
++	}
++
+ 	if (inline_data) {
+ 		BUFFER_TRACE(inode_bh, "get write access");
+ 		ret = ext4_journal_get_write_access(handle, inode_bh);
+@@ -1739,6 +1761,8 @@ static int __ext4_journalled_writepage(struct page *page,
+ 				       NULL, bput_one);
+ 	ext4_set_inode_state(inode, EXT4_STATE_JDATA);
+ out:
++	unlock_page(page);
++out_no_pagelock:
+ 	brelse(inode_bh);
+ 	return ret;
+ }
+@@ -4345,7 +4369,12 @@ static void ext4_update_other_inodes_time(struct super_block *sb,
+ 	int inode_size = EXT4_INODE_SIZE(sb);
+ 
+ 	oi.orig_ino = orig_ino;
+-	ino = (orig_ino & ~(inodes_per_block - 1)) + 1;
++	/*
++	 * Calculate the first inode in the inode table block.  Inode
++	 * numbers are one-based.  That is, the first inode in a block
++	 * (assuming 4k blocks and 256 byte inodes) is (n*16 + 1).
++	 */
++	ino = ((orig_ino - 1) & ~(inodes_per_block - 1)) + 1;
+ 	for (i = 0; i < inodes_per_block; i++, ino++, buf += inode_size) {
+ 		if (ino == orig_ino)
+ 			continue;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 8d1e60214ef0..41260489d3bc 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -4800,18 +4800,12 @@ do_more:
+ 		/*
+ 		 * blocks being freed are metadata. these blocks shouldn't
+ 		 * be used until this transaction is committed
++		 *
++		 * We use __GFP_NOFAIL because ext4_free_blocks() is not allowed
++		 * to fail.
+ 		 */
+-	retry:
+-		new_entry = kmem_cache_alloc(ext4_free_data_cachep, GFP_NOFS);
+-		if (!new_entry) {
+-			/*
+-			 * We use a retry loop because
+-			 * ext4_free_blocks() is not allowed to fail.
+-			 */
+-			cond_resched();
+-			congestion_wait(BLK_RW_ASYNC, HZ/50);
+-			goto retry;
+-		}
++		new_entry = kmem_cache_alloc(ext4_free_data_cachep,
++				GFP_NOFS|__GFP_NOFAIL);
+ 		new_entry->efd_start_cluster = bit;
+ 		new_entry->efd_group = block_group;
+ 		new_entry->efd_count = count_clusters;
+diff --git a/fs/ext4/migrate.c b/fs/ext4/migrate.c
+index b52374e42102..6163ad21cb0e 100644
+--- a/fs/ext4/migrate.c
++++ b/fs/ext4/migrate.c
+@@ -620,6 +620,7 @@ int ext4_ind_migrate(struct inode *inode)
+ 	struct ext4_inode_info		*ei = EXT4_I(inode);
+ 	struct ext4_extent		*ex;
+ 	unsigned int			i, len;
++	ext4_lblk_t			start, end;
+ 	ext4_fsblk_t			blk;
+ 	handle_t			*handle;
+ 	int				ret;
+@@ -633,6 +634,14 @@ int ext4_ind_migrate(struct inode *inode)
+ 				       EXT4_FEATURE_RO_COMPAT_BIGALLOC))
+ 		return -EOPNOTSUPP;
+ 
++	/*
++	 * In order to get correct extent info, force all delayed allocation
++	 * blocks to be allocated, otherwise delayed allocation blocks may not
++	 * be reflected and bypass the checks on extent header.
++	 */
++	if (test_opt(inode->i_sb, DELALLOC))
++		ext4_alloc_da_blocks(inode);
++
+ 	handle = ext4_journal_start(inode, EXT4_HT_MIGRATE, 1);
+ 	if (IS_ERR(handle))
+ 		return PTR_ERR(handle);
+@@ -650,11 +659,13 @@ int ext4_ind_migrate(struct inode *inode)
+ 		goto errout;
+ 	}
+ 	if (eh->eh_entries == 0)
+-		blk = len = 0;
++		blk = len = start = end = 0;
+ 	else {
+ 		len = le16_to_cpu(ex->ee_len);
+ 		blk = ext4_ext_pblock(ex);
+-		if (len > EXT4_NDIR_BLOCKS) {
++		start = le32_to_cpu(ex->ee_block);
++		end = start + len - 1;
++		if (end >= EXT4_NDIR_BLOCKS) {
+ 			ret = -EOPNOTSUPP;
+ 			goto errout;
+ 		}
+@@ -662,7 +673,7 @@ int ext4_ind_migrate(struct inode *inode)
+ 
+ 	ext4_clear_inode_flag(inode, EXT4_INODE_EXTENTS);
+ 	memset(ei->i_data, 0, sizeof(ei->i_data));
+-	for (i=0; i < len; i++)
++	for (i = start; i <= end; i++)
+ 		ei->i_data[i] = cpu_to_le32(blk++);
+ 	ext4_mark_inode_dirty(handle, inode);
+ errout:
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index ca9d4a2fed41..ca12affdba96 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -807,6 +807,7 @@ static void ext4_put_super(struct super_block *sb)
+ 		dump_orphan_list(sb, sbi);
+ 	J_ASSERT(list_empty(&sbi->s_orphan));
+ 
++	sync_blockdev(sb->s_bdev);
+ 	invalidate_bdev(sb->s_bdev);
+ 	if (sbi->journal_bdev && sbi->journal_bdev != sb->s_bdev) {
+ 		/*
+@@ -4943,6 +4944,9 @@ static int ext4_remount(struct super_block *sb, int *flags, char *data)
+ 		set_task_ioprio(sbi->s_journal->j_task, journal_ioprio);
+ 	}
+ 
++	if (*flags & MS_LAZYTIME)
++		sb->s_flags |= MS_LAZYTIME;
++
+ 	if ((*flags & MS_RDONLY) != (sb->s_flags & MS_RDONLY)) {
+ 		if (sbi->s_mount_flags & EXT4_MF_FS_ABORTED) {
+ 			err = -EROFS;
+diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
+index 18dacf9ed8ff..708d697113fc 100644
+--- a/fs/fuse/inode.c
++++ b/fs/fuse/inode.c
+@@ -1026,6 +1026,7 @@ static int fuse_fill_super(struct super_block *sb, void *data, int silent)
+ 		goto err_fput;
+ 
+ 	fuse_conn_init(fc);
++	fc->release = fuse_free_conn;
+ 
+ 	fc->dev = sb->s_dev;
+ 	fc->sb = sb;
+@@ -1040,7 +1041,6 @@ static int fuse_fill_super(struct super_block *sb, void *data, int silent)
+ 		fc->dont_mask = 1;
+ 	sb->s_flags |= MS_POSIXACL;
+ 
+-	fc->release = fuse_free_conn;
+ 	fc->flags = d.flags;
+ 	fc->user_id = d.user_id;
+ 	fc->group_id = d.group_id;
+diff --git a/fs/hpfs/super.c b/fs/hpfs/super.c
+index 7cd00d3a7c9b..8685c655737f 100644
+--- a/fs/hpfs/super.c
++++ b/fs/hpfs/super.c
+@@ -52,17 +52,20 @@ static void unmark_dirty(struct super_block *s)
+ }
+ 
+ /* Filesystem error... */
+-static char err_buf[1024];
+-
+ void hpfs_error(struct super_block *s, const char *fmt, ...)
+ {
++	struct va_format vaf;
+ 	va_list args;
+ 
+ 	va_start(args, fmt);
+-	vsnprintf(err_buf, sizeof(err_buf), fmt, args);
++
++	vaf.fmt = fmt;
++	vaf.va = &args;
++
++	pr_err("filesystem error: %pV", &vaf);
++
+ 	va_end(args);
+ 
+-	pr_err("filesystem error: %s", err_buf);
+ 	if (!hpfs_sb(s)->sb_was_error) {
+ 		if (hpfs_sb(s)->sb_err == 2) {
+ 			pr_cont("; crashing the system because you wanted it\n");
+@@ -424,11 +427,14 @@ static int hpfs_remount_fs(struct super_block *s, int *flags, char *data)
+ 	int o;
+ 	struct hpfs_sb_info *sbi = hpfs_sb(s);
+ 	char *new_opts = kstrdup(data, GFP_KERNEL);
+-	
++
++	if (!new_opts)
++		return -ENOMEM;
++
+ 	sync_filesystem(s);
+ 
+ 	*flags |= MS_NOATIME;
+-	
++
+ 	hpfs_lock(s);
+ 	uid = sbi->sb_uid; gid = sbi->sb_gid;
+ 	umask = 0777 & ~sbi->sb_mode;
+diff --git a/fs/jbd2/checkpoint.c b/fs/jbd2/checkpoint.c
+index 988b32ed4c87..4227dc4f7437 100644
+--- a/fs/jbd2/checkpoint.c
++++ b/fs/jbd2/checkpoint.c
+@@ -390,7 +390,7 @@ int jbd2_cleanup_journal_tail(journal_t *journal)
+ 	unsigned long	blocknr;
+ 
+ 	if (is_journal_aborted(journal))
+-		return 1;
++		return -EIO;
+ 
+ 	if (!jbd2_journal_get_log_tail(journal, &first_tid, &blocknr))
+ 		return 1;
+@@ -405,10 +405,9 @@ int jbd2_cleanup_journal_tail(journal_t *journal)
+ 	 * jbd2_cleanup_journal_tail() doesn't get called all that often.
+ 	 */
+ 	if (journal->j_flags & JBD2_BARRIER)
+-		blkdev_issue_flush(journal->j_fs_dev, GFP_KERNEL, NULL);
++		blkdev_issue_flush(journal->j_fs_dev, GFP_NOFS, NULL);
+ 
+-	__jbd2_update_log_tail(journal, first_tid, blocknr);
+-	return 0;
++	return __jbd2_update_log_tail(journal, first_tid, blocknr);
+ }
+ 
+ 
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index b96bd8076b70..112fad9e1e20 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -885,9 +885,10 @@ int jbd2_journal_get_log_tail(journal_t *journal, tid_t *tid,
+  *
+  * Requires j_checkpoint_mutex
+  */
+-void __jbd2_update_log_tail(journal_t *journal, tid_t tid, unsigned long block)
++int __jbd2_update_log_tail(journal_t *journal, tid_t tid, unsigned long block)
+ {
+ 	unsigned long freed;
++	int ret;
+ 
+ 	BUG_ON(!mutex_is_locked(&journal->j_checkpoint_mutex));
+ 
+@@ -897,7 +898,10 @@ void __jbd2_update_log_tail(journal_t *journal, tid_t tid, unsigned long block)
+ 	 * space and if we lose sb update during power failure we'd replay
+ 	 * old transaction with possibly newly overwritten data.
+ 	 */
+-	jbd2_journal_update_sb_log_tail(journal, tid, block, WRITE_FUA);
++	ret = jbd2_journal_update_sb_log_tail(journal, tid, block, WRITE_FUA);
++	if (ret)
++		goto out;
++
+ 	write_lock(&journal->j_state_lock);
+ 	freed = block - journal->j_tail;
+ 	if (block < journal->j_tail)
+@@ -913,6 +917,9 @@ void __jbd2_update_log_tail(journal_t *journal, tid_t tid, unsigned long block)
+ 	journal->j_tail_sequence = tid;
+ 	journal->j_tail = block;
+ 	write_unlock(&journal->j_state_lock);
++
++out:
++	return ret;
+ }
+ 
+ /*
+@@ -1331,7 +1338,7 @@ static int journal_reset(journal_t *journal)
+ 	return jbd2_journal_start_thread(journal);
+ }
+ 
+-static void jbd2_write_superblock(journal_t *journal, int write_op)
++static int jbd2_write_superblock(journal_t *journal, int write_op)
+ {
+ 	struct buffer_head *bh = journal->j_sb_buffer;
+ 	journal_superblock_t *sb = journal->j_superblock;
+@@ -1370,7 +1377,10 @@ static void jbd2_write_superblock(journal_t *journal, int write_op)
+ 		printk(KERN_ERR "JBD2: Error %d detected when updating "
+ 		       "journal superblock for %s.\n", ret,
+ 		       journal->j_devname);
++		jbd2_journal_abort(journal, ret);
+ 	}
++
++	return ret;
+ }
+ 
+ /**
+@@ -1383,10 +1393,11 @@ static void jbd2_write_superblock(journal_t *journal, int write_op)
+  * Update a journal's superblock information about log tail and write it to
+  * disk, waiting for the IO to complete.
+  */
+-void jbd2_journal_update_sb_log_tail(journal_t *journal, tid_t tail_tid,
++int jbd2_journal_update_sb_log_tail(journal_t *journal, tid_t tail_tid,
+ 				     unsigned long tail_block, int write_op)
+ {
+ 	journal_superblock_t *sb = journal->j_superblock;
++	int ret;
+ 
+ 	BUG_ON(!mutex_is_locked(&journal->j_checkpoint_mutex));
+ 	jbd_debug(1, "JBD2: updating superblock (start %lu, seq %u)\n",
+@@ -1395,13 +1406,18 @@ void jbd2_journal_update_sb_log_tail(journal_t *journal, tid_t tail_tid,
+ 	sb->s_sequence = cpu_to_be32(tail_tid);
+ 	sb->s_start    = cpu_to_be32(tail_block);
+ 
+-	jbd2_write_superblock(journal, write_op);
++	ret = jbd2_write_superblock(journal, write_op);
++	if (ret)
++		goto out;
+ 
+ 	/* Log is no longer empty */
+ 	write_lock(&journal->j_state_lock);
+ 	WARN_ON(!sb->s_sequence);
+ 	journal->j_flags &= ~JBD2_FLUSHED;
+ 	write_unlock(&journal->j_state_lock);
++
++out:
++	return ret;
+ }
+ 
+ /**
+@@ -1950,7 +1966,14 @@ int jbd2_journal_flush(journal_t *journal)
+ 		return -EIO;
+ 
+ 	mutex_lock(&journal->j_checkpoint_mutex);
+-	jbd2_cleanup_journal_tail(journal);
++	if (!err) {
++		err = jbd2_cleanup_journal_tail(journal);
++		if (err < 0) {
++			mutex_unlock(&journal->j_checkpoint_mutex);
++			goto out;
++		}
++		err = 0;
++	}
+ 
+ 	/* Finally, mark the journal as really needing no recovery.
+ 	 * This sets s_start==0 in the underlying superblock, which is
+@@ -1966,7 +1989,8 @@ int jbd2_journal_flush(journal_t *journal)
+ 	J_ASSERT(journal->j_head == journal->j_tail);
+ 	J_ASSERT(journal->j_tail_sequence == journal->j_transaction_sequence);
+ 	write_unlock(&journal->j_state_lock);
+-	return 0;
++out:
++	return err;
+ }
+ 
+ /**
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index 7d05089e52d6..6f5f0f425e86 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -631,7 +631,7 @@ static void ff_layout_reset_write(struct nfs_pgio_header *hdr, bool retry_pnfs)
+ 			nfs_direct_set_resched_writes(hdr->dreq);
+ 			/* fake unstable write to let common nfs resend pages */
+ 			hdr->verf.committed = NFS_UNSTABLE;
+-			hdr->good_bytes = 0;
++			hdr->good_bytes = hdr->args.count;
+ 		}
+ 		return;
+ 	}
+diff --git a/fs/nfs/flexfilelayout/flexfilelayoutdev.c b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+index 77a2d026aa12..f13e1969eedd 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayoutdev.c
++++ b/fs/nfs/flexfilelayout/flexfilelayoutdev.c
+@@ -324,7 +324,8 @@ static int ff_layout_update_mirror_cred(struct nfs4_ff_layout_mirror *mirror,
+ 				__func__, PTR_ERR(cred));
+ 			return PTR_ERR(cred);
+ 		} else {
+-			mirror->cred = cred;
++			if (cmpxchg(&mirror->cred, NULL, cred))
++				put_rpccred(cred);
+ 		}
+ 	}
+ 	return 0;
+@@ -386,7 +387,7 @@ nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg, u32 ds_idx,
+ 	/* matching smp_wmb() in _nfs4_pnfs_v3/4_ds_connect */
+ 	smp_rmb();
+ 	if (ds->ds_clp)
+-		goto out;
++		goto out_update_creds;
+ 
+ 	flavor = nfs4_ff_layout_choose_authflavor(mirror);
+ 
+@@ -430,7 +431,7 @@ nfs4_ff_layout_prepare_ds(struct pnfs_layout_segment *lseg, u32 ds_idx,
+ 			}
+ 		}
+ 	}
+-
++out_update_creds:
+ 	if (ff_layout_update_mirror_cred(mirror, ds))
+ 		ds = NULL;
+ out:
+diff --git a/fs/nfs/nfs3xdr.c b/fs/nfs/nfs3xdr.c
+index 53852a4bd88b..9b04c2e6fffc 100644
+--- a/fs/nfs/nfs3xdr.c
++++ b/fs/nfs/nfs3xdr.c
+@@ -1342,7 +1342,7 @@ static void nfs3_xdr_enc_setacl3args(struct rpc_rqst *req,
+ 	if (args->npages != 0)
+ 		xdr_write_pages(xdr, args->pages, 0, args->len);
+ 	else
+-		xdr_reserve_space(xdr, NFS_ACL_INLINE_BUFSIZE);
++		xdr_reserve_space(xdr, args->len);
+ 
+ 	error = nfsacl_encode(xdr->buf, base, args->inode,
+ 			    (args->mask & NFS_ACL) ?
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index 2782cfca2265..ddef1dc80cf7 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -1482,6 +1482,8 @@ restart:
+ 					spin_unlock(&state->state_lock);
+ 				}
+ 				nfs4_put_open_state(state);
++				clear_bit(NFS4CLNT_RECLAIM_NOGRACE,
++					&state->flags);
+ 				spin_lock(&sp->so_lock);
+ 				goto restart;
+ 			}
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 230606243be6..d47c188682b1 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1821,6 +1821,7 @@ int pnfs_write_done_resend_to_mds(struct nfs_pgio_header *hdr)
+ 	/* Resend all requests through the MDS */
+ 	nfs_pageio_init_write(&pgio, hdr->inode, FLUSH_STABLE, true,
+ 			      hdr->completion_ops);
++	set_bit(NFS_CONTEXT_RESEND_WRITES, &hdr->args.context->flags);
+ 	return nfs_pageio_resend(&pgio, hdr);
+ }
+ EXPORT_SYMBOL_GPL(pnfs_write_done_resend_to_mds);
+@@ -1865,6 +1866,7 @@ pnfs_write_through_mds(struct nfs_pageio_descriptor *desc,
+ 		mirror->pg_recoalesce = 1;
+ 	}
+ 	nfs_pgio_data_destroy(hdr);
++	hdr->release(hdr);
+ }
+ 
+ static enum pnfs_try_status
+@@ -1979,6 +1981,7 @@ pnfs_read_through_mds(struct nfs_pageio_descriptor *desc,
+ 		mirror->pg_recoalesce = 1;
+ 	}
+ 	nfs_pgio_data_destroy(hdr);
++	hdr->release(hdr);
+ }
+ 
+ /*
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index dfc19f1575a1..daf355642845 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -1289,6 +1289,7 @@ static void nfs_initiate_write(struct nfs_pgio_header *hdr,
+ static void nfs_redirty_request(struct nfs_page *req)
+ {
+ 	nfs_mark_request_dirty(req);
++	set_bit(NFS_CONTEXT_RESEND_WRITES, &req->wb_context->flags);
+ 	nfs_unlock_request(req);
+ 	nfs_end_page_writeback(req);
+ 	nfs_release_request(req);
+diff --git a/fs/overlayfs/readdir.c b/fs/overlayfs/readdir.c
+index 907870e81a72..70e9af551600 100644
+--- a/fs/overlayfs/readdir.c
++++ b/fs/overlayfs/readdir.c
+@@ -23,6 +23,7 @@ struct ovl_cache_entry {
+ 	u64 ino;
+ 	struct list_head l_node;
+ 	struct rb_node node;
++	struct ovl_cache_entry *next_maybe_whiteout;
+ 	bool is_whiteout;
+ 	char name[];
+ };
+@@ -39,7 +40,7 @@ struct ovl_readdir_data {
+ 	struct rb_root root;
+ 	struct list_head *list;
+ 	struct list_head middle;
+-	struct dentry *dir;
++	struct ovl_cache_entry *first_maybe_whiteout;
+ 	int count;
+ 	int err;
+ };
+@@ -79,7 +80,7 @@ static struct ovl_cache_entry *ovl_cache_entry_find(struct rb_root *root,
+ 	return NULL;
+ }
+ 
+-static struct ovl_cache_entry *ovl_cache_entry_new(struct dentry *dir,
++static struct ovl_cache_entry *ovl_cache_entry_new(struct ovl_readdir_data *rdd,
+ 						   const char *name, int len,
+ 						   u64 ino, unsigned int d_type)
+ {
+@@ -98,29 +99,8 @@ static struct ovl_cache_entry *ovl_cache_entry_new(struct dentry *dir,
+ 	p->is_whiteout = false;
+ 
+ 	if (d_type == DT_CHR) {
+-		struct dentry *dentry;
+-		const struct cred *old_cred;
+-		struct cred *override_cred;
+-
+-		override_cred = prepare_creds();
+-		if (!override_cred) {
+-			kfree(p);
+-			return NULL;
+-		}
+-
+-		/*
+-		 * CAP_DAC_OVERRIDE for lookup
+-		 */
+-		cap_raise(override_cred->cap_effective, CAP_DAC_OVERRIDE);
+-		old_cred = override_creds(override_cred);
+-
+-		dentry = lookup_one_len(name, dir, len);
+-		if (!IS_ERR(dentry)) {
+-			p->is_whiteout = ovl_is_whiteout(dentry);
+-			dput(dentry);
+-		}
+-		revert_creds(old_cred);
+-		put_cred(override_cred);
++		p->next_maybe_whiteout = rdd->first_maybe_whiteout;
++		rdd->first_maybe_whiteout = p;
+ 	}
+ 	return p;
+ }
+@@ -148,7 +128,7 @@ static int ovl_cache_entry_add_rb(struct ovl_readdir_data *rdd,
+ 			return 0;
+ 	}
+ 
+-	p = ovl_cache_entry_new(rdd->dir, name, len, ino, d_type);
++	p = ovl_cache_entry_new(rdd, name, len, ino, d_type);
+ 	if (p == NULL)
+ 		return -ENOMEM;
+ 
+@@ -169,7 +149,7 @@ static int ovl_fill_lower(struct ovl_readdir_data *rdd,
+ 	if (p) {
+ 		list_move_tail(&p->l_node, &rdd->middle);
+ 	} else {
+-		p = ovl_cache_entry_new(rdd->dir, name, namelen, ino, d_type);
++		p = ovl_cache_entry_new(rdd, name, namelen, ino, d_type);
+ 		if (p == NULL)
+ 			rdd->err = -ENOMEM;
+ 		else
+@@ -219,6 +199,43 @@ static int ovl_fill_merge(struct dir_context *ctx, const char *name,
+ 		return ovl_fill_lower(rdd, name, namelen, offset, ino, d_type);
+ }
+ 
++static int ovl_check_whiteouts(struct dentry *dir, struct ovl_readdir_data *rdd)
++{
++	int err;
++	struct ovl_cache_entry *p;
++	struct dentry *dentry;
++	const struct cred *old_cred;
++	struct cred *override_cred;
++
++	override_cred = prepare_creds();
++	if (!override_cred)
++		return -ENOMEM;
++
++	/*
++	 * CAP_DAC_OVERRIDE for lookup
++	 */
++	cap_raise(override_cred->cap_effective, CAP_DAC_OVERRIDE);
++	old_cred = override_creds(override_cred);
++
++	err = mutex_lock_killable(&dir->d_inode->i_mutex);
++	if (!err) {
++		while (rdd->first_maybe_whiteout) {
++			p = rdd->first_maybe_whiteout;
++			rdd->first_maybe_whiteout = p->next_maybe_whiteout;
++			dentry = lookup_one_len(p->name, dir, p->len);
++			if (!IS_ERR(dentry)) {
++				p->is_whiteout = ovl_is_whiteout(dentry);
++				dput(dentry);
++			}
++		}
++		mutex_unlock(&dir->d_inode->i_mutex);
++	}
++	revert_creds(old_cred);
++	put_cred(override_cred);
++
++	return err;
++}
++
+ static inline int ovl_dir_read(struct path *realpath,
+ 			       struct ovl_readdir_data *rdd)
+ {
+@@ -229,7 +246,7 @@ static inline int ovl_dir_read(struct path *realpath,
+ 	if (IS_ERR(realfile))
+ 		return PTR_ERR(realfile);
+ 
+-	rdd->dir = realpath->dentry;
++	rdd->first_maybe_whiteout = NULL;
+ 	rdd->ctx.pos = 0;
+ 	do {
+ 		rdd->count = 0;
+@@ -238,6 +255,10 @@ static inline int ovl_dir_read(struct path *realpath,
+ 		if (err >= 0)
+ 			err = rdd->err;
+ 	} while (!err && rdd->count);
++
++	if (!err && rdd->first_maybe_whiteout)
++		err = ovl_check_whiteouts(realpath->dentry, rdd);
++
+ 	fput(realfile);
+ 
+ 	return err;
+diff --git a/fs/xfs/xfs_attr_inactive.c b/fs/xfs/xfs_attr_inactive.c
+index 3fbf167cfb4c..73e75a87af50 100644
+--- a/fs/xfs/xfs_attr_inactive.c
++++ b/fs/xfs/xfs_attr_inactive.c
+@@ -435,8 +435,14 @@ xfs_attr_inactive(
+ 	 */
+ 	xfs_trans_ijoin(trans, dp, 0);
+ 
+-	/* invalidate and truncate the attribute fork extents */
+-	if (dp->i_d.di_aformat != XFS_DINODE_FMT_LOCAL) {
++	/*
++	 * Invalidate and truncate the attribute fork extents. Make sure the
++	 * fork actually has attributes as otherwise the invalidation has no
++	 * blocks to read and returns an error. In this case, just do the fork
++	 * removal below.
++	 */
++	if (xfs_inode_hasattr(dp) &&
++	    dp->i_d.di_aformat != XFS_DINODE_FMT_LOCAL) {
+ 		error = xfs_attr3_root_inactive(&trans, dp);
+ 		if (error)
+ 			goto out_cancel;
+diff --git a/fs/xfs/xfs_symlink.c b/fs/xfs/xfs_symlink.c
+index 3df411eadb86..40c076523cfa 100644
+--- a/fs/xfs/xfs_symlink.c
++++ b/fs/xfs/xfs_symlink.c
+@@ -104,7 +104,7 @@ xfs_readlink_bmap(
+ 			cur_chunk += sizeof(struct xfs_dsymlink_hdr);
+ 		}
+ 
+-		memcpy(link + offset, bp->b_addr, byte_cnt);
++		memcpy(link + offset, cur_chunk, byte_cnt);
+ 
+ 		pathlen -= byte_cnt;
+ 		offset += byte_cnt;
+diff --git a/include/acpi/acpixf.h b/include/acpi/acpixf.h
+index 08ef57bc8d63..f5ed1f17f061 100644
+--- a/include/acpi/acpixf.h
++++ b/include/acpi/acpixf.h
+@@ -195,9 +195,18 @@ ACPI_INIT_GLOBAL(u8, acpi_gbl_do_not_use_xsdt, FALSE);
+  * address. Although ACPICA adheres to the ACPI specification which
+  * requires the use of the corresponding 64-bit address if it is non-zero,
+  * some machines have been found to have a corrupted non-zero 64-bit
+- * address. Default is TRUE, favor the 32-bit addresses.
++ * address. Default is FALSE, do not favor the 32-bit addresses.
+  */
+-ACPI_INIT_GLOBAL(u8, acpi_gbl_use32_bit_fadt_addresses, TRUE);
++ACPI_INIT_GLOBAL(u8, acpi_gbl_use32_bit_fadt_addresses, FALSE);
++
++/*
++ * Optionally use 32-bit FACS table addresses.
++ * It is reported that some platforms fail to resume from system suspending
++ * if 64-bit FACS table address is selected:
++ * https://bugzilla.kernel.org/show_bug.cgi?id=74021
++ * Default is TRUE, favor the 32-bit addresses.
++ */
++ACPI_INIT_GLOBAL(u8, acpi_gbl_use32_bit_facs_addresses, TRUE);
+ 
+ /*
+  * Optionally truncate I/O addresses to 16 bits. Provides compatibility
+diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
+index 1c3002e1db20..181427ef3549 100644
+--- a/include/acpi/actypes.h
++++ b/include/acpi/actypes.h
+@@ -572,6 +572,7 @@ typedef u64 acpi_integer;
+ #define ACPI_NO_ACPI_ENABLE             0x10
+ #define ACPI_NO_DEVICE_INIT             0x20
+ #define ACPI_NO_OBJECT_INIT             0x40
++#define ACPI_NO_FACS_INIT               0x80
+ 
+ /*
+  * Initialization state
+diff --git a/include/drm/drm_atomic.h b/include/drm/drm_atomic.h
+index c157103492b0..3f13b910f8d2 100644
+--- a/include/drm/drm_atomic.h
++++ b/include/drm/drm_atomic.h
+@@ -77,26 +77,26 @@ int __must_check drm_atomic_async_commit(struct drm_atomic_state *state);
+ 
+ #define for_each_connector_in_state(state, connector, connector_state, __i) \
+ 	for ((__i) = 0;							\
+-	     (connector) = (state)->connectors[__i],			\
+-	     (connector_state) = (state)->connector_states[__i],	\
+-	     (__i) < (state)->num_connector;				\
++	     (__i) < (state)->num_connector &&				\
++	     ((connector) = (state)->connectors[__i],			\
++	     (connector_state) = (state)->connector_states[__i], 1); 	\
+ 	     (__i)++)							\
+ 		if (connector)
+ 
+ #define for_each_crtc_in_state(state, crtc, crtc_state, __i)	\
+ 	for ((__i) = 0;						\
+-	     (crtc) = (state)->crtcs[__i],			\
+-	     (crtc_state) = (state)->crtc_states[__i],		\
+-	     (__i) < (state)->dev->mode_config.num_crtc;	\
++	     (__i) < (state)->dev->mode_config.num_crtc &&	\
++	     ((crtc) = (state)->crtcs[__i],			\
++	     (crtc_state) = (state)->crtc_states[__i], 1);	\
+ 	     (__i)++)						\
+ 		if (crtc_state)
+ 
+-#define for_each_plane_in_state(state, plane, plane_state, __i)	\
+-	for ((__i) = 0;						\
+-	     (plane) = (state)->planes[__i],			\
+-	     (plane_state) = (state)->plane_states[__i],	\
+-	     (__i) < (state)->dev->mode_config.num_total_plane;	\
+-	     (__i)++)						\
++#define for_each_plane_in_state(state, plane, plane_state, __i)		\
++	for ((__i) = 0;							\
++	     (__i) < (state)->dev->mode_config.num_total_plane &&	\
++	     ((plane) = (state)->planes[__i],				\
++	     (plane_state) = (state)->plane_states[__i], 1);		\
++	     (__i)++)							\
+ 		if (plane_state)
+ 
+ #endif /* DRM_ATOMIC_H_ */
+diff --git a/include/drm/drm_crtc.h b/include/drm/drm_crtc.h
+index ca71c03143d1..54233583c6cb 100644
+--- a/include/drm/drm_crtc.h
++++ b/include/drm/drm_crtc.h
+@@ -731,6 +731,8 @@ struct drm_connector {
+ 	uint8_t num_h_tile, num_v_tile;
+ 	uint8_t tile_h_loc, tile_v_loc;
+ 	uint16_t tile_h_size, tile_v_size;
++
++	struct list_head destroy_list;
+ };
+ 
+ /**
+diff --git a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h
+index a2507817be41..86d0b25ed054 100644
+--- a/include/drm/drm_dp_mst_helper.h
++++ b/include/drm/drm_dp_mst_helper.h
+@@ -463,6 +463,10 @@ struct drm_dp_mst_topology_mgr {
+ 	struct work_struct work;
+ 
+ 	struct work_struct tx_work;
++
++	struct list_head destroy_connector_list;
++	struct mutex destroy_connector_lock;
++	struct work_struct destroy_connector_work;
+ };
+ 
+ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr, struct device *dev, struct drm_dp_aux *aux, int max_dpcd_transaction_bytes, int max_payloads, int conn_base_id);
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index 5da2d2e9d38e..4550be3bb63b 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -332,9 +332,6 @@ int acpi_check_region(resource_size_t start, resource_size_t n,
+ 
+ int acpi_resources_are_enforced(void);
+ 
+-int acpi_reserve_region(u64 start, unsigned int length, u8 space_id,
+-			unsigned long flags, char *desc);
+-
+ #ifdef CONFIG_HIBERNATION
+ void __init acpi_no_s4_hw_signature(void);
+ #endif
+@@ -530,13 +527,6 @@ static inline int acpi_check_region(resource_size_t start, resource_size_t n,
+ 	return 0;
+ }
+ 
+-static inline int acpi_reserve_region(u64 start, unsigned int length,
+-				      u8 space_id, unsigned long flags,
+-				      char *desc)
+-{
+-	return -ENXIO;
+-}
+-
+ struct acpi_table_header;
+ static inline int acpi_table_parse(char *id,
+ 				int (*handler)(struct acpi_table_header *))
+diff --git a/include/linux/ata.h b/include/linux/ata.h
+index b666b773e111..533dbb6428f5 100644
+--- a/include/linux/ata.h
++++ b/include/linux/ata.h
+@@ -45,6 +45,7 @@ enum {
+ 	ATA_SECT_SIZE		= 512,
+ 	ATA_MAX_SECTORS_128	= 128,
+ 	ATA_MAX_SECTORS		= 256,
++	ATA_MAX_SECTORS_1024    = 1024,
+ 	ATA_MAX_SECTORS_LBA48	= 65535,/* TODO: 65536? */
+ 	ATA_MAX_SECTORS_TAPE	= 65535,
+ 
+diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
+index 73b45225a7ca..e6797ded700e 100644
+--- a/include/linux/buffer_head.h
++++ b/include/linux/buffer_head.h
+@@ -317,6 +317,13 @@ sb_getblk(struct super_block *sb, sector_t block)
+ 	return __getblk_gfp(sb->s_bdev, block, sb->s_blocksize, __GFP_MOVABLE);
+ }
+ 
++
++static inline struct buffer_head *
++sb_getblk_gfp(struct super_block *sb, sector_t block, gfp_t gfp)
++{
++	return __getblk_gfp(sb->s_bdev, block, sb->s_blocksize, gfp);
++}
++
+ static inline struct buffer_head *
+ sb_find_get_block(struct super_block *sb, sector_t block)
+ {
+diff --git a/include/linux/compiler-intel.h b/include/linux/compiler-intel.h
+index 0c9a2f2c2802..d4c71132d07f 100644
+--- a/include/linux/compiler-intel.h
++++ b/include/linux/compiler-intel.h
+@@ -13,10 +13,12 @@
+ /* Intel ECC compiler doesn't support gcc specific asm stmts.
+  * It uses intrinsics to do the equivalent things.
+  */
++#undef barrier
+ #undef barrier_data
+ #undef RELOC_HIDE
+ #undef OPTIMIZER_HIDE_VAR
+ 
++#define barrier() __memory_barrier()
+ #define barrier_data(ptr) barrier()
+ 
+ #define RELOC_HIDE(ptr, off)					\
+diff --git a/include/linux/gpio/consumer.h b/include/linux/gpio/consumer.h
+index 3a7c9ffd5ab9..da042657dc31 100644
+--- a/include/linux/gpio/consumer.h
++++ b/include/linux/gpio/consumer.h
+@@ -406,6 +406,21 @@ static inline int desc_to_gpio(const struct gpio_desc *desc)
+ 	return -EINVAL;
+ }
+ 
++/* Child properties interface */
++struct fwnode_handle;
++
++static inline struct gpio_desc *fwnode_get_named_gpiod(
++	struct fwnode_handle *fwnode, const char *propname)
++{
++	return ERR_PTR(-ENOSYS);
++}
++
++static inline struct gpio_desc *devm_get_gpiod_from_child(
++	struct device *dev, const char *con_id, struct fwnode_handle *child)
++{
++	return ERR_PTR(-ENOSYS);
++}
++
+ #endif /* CONFIG_GPIOLIB */
+ 
+ /*
+diff --git a/include/linux/hid-sensor-hub.h b/include/linux/hid-sensor-hub.h
+index 0042bf330b99..c02b5ce6c5cd 100644
+--- a/include/linux/hid-sensor-hub.h
++++ b/include/linux/hid-sensor-hub.h
+@@ -230,6 +230,7 @@ struct hid_sensor_common {
+ 	struct platform_device *pdev;
+ 	unsigned usage_id;
+ 	atomic_t data_ready;
++	atomic_t user_requested_state;
+ 	struct iio_trigger *trigger;
+ 	struct hid_sensor_hub_attribute_info poll;
+ 	struct hid_sensor_hub_attribute_info report_state;
+diff --git a/include/linux/jbd2.h b/include/linux/jbd2.h
+index 20e7f78041c8..edb640ae9a94 100644
+--- a/include/linux/jbd2.h
++++ b/include/linux/jbd2.h
+@@ -1035,7 +1035,7 @@ struct buffer_head *jbd2_journal_get_descriptor_buffer(journal_t *journal);
+ int jbd2_journal_next_log_block(journal_t *, unsigned long long *);
+ int jbd2_journal_get_log_tail(journal_t *journal, tid_t *tid,
+ 			      unsigned long *block);
+-void __jbd2_update_log_tail(journal_t *journal, tid_t tid, unsigned long block);
++int __jbd2_update_log_tail(journal_t *journal, tid_t tid, unsigned long block);
+ void jbd2_update_log_tail(journal_t *journal, tid_t tid, unsigned long block);
+ 
+ /* Commit management */
+@@ -1157,7 +1157,7 @@ extern int	   jbd2_journal_recover    (journal_t *journal);
+ extern int	   jbd2_journal_wipe       (journal_t *, int);
+ extern int	   jbd2_journal_skip_recovery	(journal_t *);
+ extern void	   jbd2_journal_update_sb_errno(journal_t *);
+-extern void	   jbd2_journal_update_sb_log_tail	(journal_t *, tid_t,
++extern int	   jbd2_journal_update_sb_log_tail	(journal_t *, tid_t,
+ 				unsigned long, int);
+ extern void	   __jbd2_journal_abort_hard	(journal_t *);
+ extern void	   jbd2_journal_abort      (journal_t *, int);
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 28aeae46f355..e0e33787c485 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -431,6 +431,9 @@ enum {
+ 	ATA_HORKAGE_NOLPM	= (1 << 20),	/* don't use LPM */
+ 	ATA_HORKAGE_WD_BROKEN_LPM = (1 << 21),	/* some WDs have broken LPM */
+ 	ATA_HORKAGE_ZERO_AFTER_TRIM = (1 << 22),/* guarantees zero after trim */
++	ATA_HORKAGE_NO_NCQ_LOG	= (1 << 23),	/* don't use NCQ for log read */
++	ATA_HORKAGE_NOTRIM	= (1 << 24),	/* don't use TRIM */
++	ATA_HORKAGE_MAX_SEC_1024 = (1 << 25),	/* Limit max sects to 1024 */
+ 
+ 	 /* DMA mask for user DMA control: User visible values; DO NOT
+ 	    renumber */
+diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
+index 93ab6071bbe9..e9e9a8dcfb47 100644
+--- a/include/linux/nfs_xdr.h
++++ b/include/linux/nfs_xdr.h
+@@ -1142,7 +1142,7 @@ struct nfs41_state_protection {
+ 	struct nfs4_op_map allow;
+ };
+ 
+-#define NFS4_EXCHANGE_ID_LEN	(48)
++#define NFS4_EXCHANGE_ID_LEN	(127)
+ struct nfs41_exchange_id_args {
+ 	struct nfs_client		*client;
+ 	nfs4_verifier			*verifier;
+diff --git a/include/linux/of.h b/include/linux/of.h
+index b871ff9d81d7..8135d507d089 100644
+--- a/include/linux/of.h
++++ b/include/linux/of.h
+@@ -673,7 +673,10 @@ static inline void of_property_clear_flag(struct property *p, unsigned long flag
+ #if defined(CONFIG_OF) && defined(CONFIG_NUMA)
+ extern int of_node_to_nid(struct device_node *np);
+ #else
+-static inline int of_node_to_nid(struct device_node *device) { return 0; }
++static inline int of_node_to_nid(struct device_node *device)
++{
++	return NUMA_NO_NODE;
++}
+ #endif
+ 
+ static inline struct device_node *of_find_matching_node(
+diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
+index 551b6737f5df..a7e41fb6ed54 100644
+--- a/include/uapi/drm/i915_drm.h
++++ b/include/uapi/drm/i915_drm.h
+@@ -1065,6 +1065,14 @@ struct drm_i915_reg_read {
+ 	__u64 offset;
+ 	__u64 val; /* Return value */
+ };
++/* Known registers:
++ *
++ * Render engine timestamp - 0x2358 + 64bit - gen7+
++ * - Note this register returns an invalid value if using the default
++ *   single instruction 8byte read, in order to workaround that use
++ *   offset (0x2538 | 1) instead.
++ *
++ */
+ 
+ struct drm_i915_reset_stats {
+ 	__u32 ctx_id;
+diff --git a/kernel/power/Kconfig b/kernel/power/Kconfig
+index 7e01f78f0417..9e302315e33d 100644
+--- a/kernel/power/Kconfig
++++ b/kernel/power/Kconfig
+@@ -187,7 +187,7 @@ config DPM_WATCHDOG
+ config DPM_WATCHDOG_TIMEOUT
+ 	int "Watchdog timeout in seconds"
+ 	range 1 120
+-	default 12
++	default 60
+ 	depends on DPM_WATCHDOG
+ 
+ config PM_TRACE
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index c099b082cd02..bff0169e1ad8 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -484,11 +484,11 @@ int check_syslog_permissions(int type, bool from_file)
+ 	 * already done the capabilities checks at open time.
+ 	 */
+ 	if (from_file && type != SYSLOG_ACTION_OPEN)
+-		return 0;
++		goto ok;
+ 
+ 	if (syslog_action_restricted(type)) {
+ 		if (capable(CAP_SYSLOG))
+-			return 0;
++			goto ok;
+ 		/*
+ 		 * For historical reasons, accept CAP_SYS_ADMIN too, with
+ 		 * a warning.
+@@ -498,10 +498,11 @@ int check_syslog_permissions(int type, bool from_file)
+ 				     "CAP_SYS_ADMIN but no CAP_SYSLOG "
+ 				     "(deprecated).\n",
+ 				 current->comm, task_pid_nr(current));
+-			return 0;
++			goto ok;
+ 		}
+ 		return -EPERM;
+ 	}
++ok:
+ 	return security_syslog(type);
+ }
+ 
+@@ -1263,10 +1264,6 @@ int do_syslog(int type, char __user *buf, int len, bool from_file)
+ 	if (error)
+ 		goto out;
+ 
+-	error = security_syslog(type);
+-	if (error)
+-		return error;
+-
+ 	switch (type) {
+ 	case SYSLOG_ACTION_CLOSE:	/* Close log */
+ 		break;
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index d2612016de94..921691c5cb04 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -444,6 +444,7 @@ enum {
+ 
+ 	TRACE_CONTROL_BIT,
+ 
++	TRACE_BRANCH_BIT,
+ /*
+  * Abuse of the trace_recursion.
+  * As we need a way to maintain state if we are tracing the function
+@@ -1312,7 +1313,7 @@ void trace_event_init(void);
+ void trace_event_enum_update(struct trace_enum_map **map, int len);
+ #else
+ static inline void __init trace_event_init(void) { }
+-static inlin void trace_event_enum_update(struct trace_enum_map **map, int len) { }
++static inline void trace_event_enum_update(struct trace_enum_map **map, int len) { }
+ #endif
+ 
+ extern struct trace_iterator *tracepoint_print_iter;
+diff --git a/kernel/trace/trace_branch.c b/kernel/trace/trace_branch.c
+index 57cbf1efdd44..1879980f06c2 100644
+--- a/kernel/trace/trace_branch.c
++++ b/kernel/trace/trace_branch.c
+@@ -36,9 +36,12 @@ probe_likely_condition(struct ftrace_branch_data *f, int val, int expect)
+ 	struct trace_branch *entry;
+ 	struct ring_buffer *buffer;
+ 	unsigned long flags;
+-	int cpu, pc;
++	int pc;
+ 	const char *p;
+ 
++	if (current->trace_recursion & TRACE_BRANCH_BIT)
++		return;
++
+ 	/*
+ 	 * I would love to save just the ftrace_likely_data pointer, but
+ 	 * this code can also be used by modules. Ugly things can happen
+@@ -49,10 +52,10 @@ probe_likely_condition(struct ftrace_branch_data *f, int val, int expect)
+ 	if (unlikely(!tr))
+ 		return;
+ 
+-	local_irq_save(flags);
+-	cpu = raw_smp_processor_id();
+-	data = per_cpu_ptr(tr->trace_buffer.data, cpu);
+-	if (atomic_inc_return(&data->disabled) != 1)
++	raw_local_irq_save(flags);
++	current->trace_recursion |= TRACE_BRANCH_BIT;
++	data = this_cpu_ptr(tr->trace_buffer.data);
++	if (atomic_read(&data->disabled))
+ 		goto out;
+ 
+ 	pc = preempt_count();
+@@ -81,8 +84,8 @@ probe_likely_condition(struct ftrace_branch_data *f, int val, int expect)
+ 		__buffer_unlock_commit(buffer, event);
+ 
+  out:
+-	atomic_dec(&data->disabled);
+-	local_irq_restore(flags);
++	current->trace_recursion &= ~TRACE_BRANCH_BIT;
++	raw_local_irq_restore(flags);
+ }
+ 
+ static inline
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index 7f2e97ce71a7..52adf02d7619 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -1056,6 +1056,9 @@ static void parse_init(struct filter_parse_state *ps,
+ 
+ static char infix_next(struct filter_parse_state *ps)
+ {
++	if (!ps->infix.cnt)
++		return 0;
++
+ 	ps->infix.cnt--;
+ 
+ 	return ps->infix.string[ps->infix.tail++];
+@@ -1071,6 +1074,9 @@ static char infix_peek(struct filter_parse_state *ps)
+ 
+ static void infix_advance(struct filter_parse_state *ps)
+ {
++	if (!ps->infix.cnt)
++		return;
++
+ 	ps->infix.cnt--;
+ 	ps->infix.tail++;
+ }
+@@ -1385,7 +1391,9 @@ static int check_preds(struct filter_parse_state *ps)
+ 		if (elt->op != OP_NOT)
+ 			cnt--;
+ 		n_normal_preds++;
+-		WARN_ON_ONCE(cnt < 0);
++		/* all ops should have operands */
++		if (cnt < 0)
++			break;
+ 	}
+ 
+ 	if (cnt != 1 || !n_normal_preds || n_logical_preds >= n_normal_preds) {
+diff --git a/lib/bitmap.c b/lib/bitmap.c
+index 64c0926f5dd8..40162f87ea2d 100644
+--- a/lib/bitmap.c
++++ b/lib/bitmap.c
+@@ -506,12 +506,12 @@ static int __bitmap_parselist(const char *buf, unsigned int buflen,
+ 	unsigned a, b;
+ 	int c, old_c, totaldigits;
+ 	const char __user __force *ubuf = (const char __user __force *)buf;
+-	int exp_digit, in_range;
++	int at_start, in_range;
+ 
+ 	totaldigits = c = 0;
+ 	bitmap_zero(maskp, nmaskbits);
+ 	do {
+-		exp_digit = 1;
++		at_start = 1;
+ 		in_range = 0;
+ 		a = b = 0;
+ 
+@@ -540,11 +540,10 @@ static int __bitmap_parselist(const char *buf, unsigned int buflen,
+ 				break;
+ 
+ 			if (c == '-') {
+-				if (exp_digit || in_range)
++				if (at_start || in_range)
+ 					return -EINVAL;
+ 				b = 0;
+ 				in_range = 1;
+-				exp_digit = 1;
+ 				continue;
+ 			}
+ 
+@@ -554,16 +553,18 @@ static int __bitmap_parselist(const char *buf, unsigned int buflen,
+ 			b = b * 10 + (c - '0');
+ 			if (!in_range)
+ 				a = b;
+-			exp_digit = 0;
++			at_start = 0;
+ 			totaldigits++;
+ 		}
+ 		if (!(a <= b))
+ 			return -EINVAL;
+ 		if (b >= nmaskbits)
+ 			return -ERANGE;
+-		while (a <= b) {
+-			set_bit(a, maskp);
+-			a++;
++		if (!at_start) {
++			while (a <= b) {
++				set_bit(a, maskp);
++				a++;
++			}
+ 		}
+ 	} while (buflen && c == ',');
+ 	return 0;
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 271e4432734c..8c4c1f9f9a9a 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -40,6 +40,11 @@ int hugepages_treat_as_movable;
+ int hugetlb_max_hstate __read_mostly;
+ unsigned int default_hstate_idx;
+ struct hstate hstates[HUGE_MAX_HSTATE];
++/*
++ * Minimum page order among possible hugepage sizes, set to a proper value
++ * at boot time.
++ */
++static unsigned int minimum_order __read_mostly = UINT_MAX;
+ 
+ __initdata LIST_HEAD(huge_boot_pages);
+ 
+@@ -1188,19 +1193,13 @@ static void dissolve_free_huge_page(struct page *page)
+  */
+ void dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
+ {
+-	unsigned int order = 8 * sizeof(void *);
+ 	unsigned long pfn;
+-	struct hstate *h;
+ 
+ 	if (!hugepages_supported())
+ 		return;
+ 
+-	/* Set scan step to minimum hugepage size */
+-	for_each_hstate(h)
+-		if (order > huge_page_order(h))
+-			order = huge_page_order(h);
+-	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << order));
+-	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << order)
++	VM_BUG_ON(!IS_ALIGNED(start_pfn, 1 << minimum_order));
++	for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order)
+ 		dissolve_free_huge_page(pfn_to_page(pfn));
+ }
+ 
+@@ -1627,10 +1626,14 @@ static void __init hugetlb_init_hstates(void)
+ 	struct hstate *h;
+ 
+ 	for_each_hstate(h) {
++		if (minimum_order > huge_page_order(h))
++			minimum_order = huge_page_order(h);
++
+ 		/* oversize hugepages were init'ed in early boot */
+ 		if (!hstate_is_gigantic(h))
+ 			hugetlb_hstate_alloc_pages(h);
+ 	}
++	VM_BUG_ON(minimum_order == UINT_MAX);
+ }
+ 
+ static char * __init memfmt(char *buf, unsigned long n)
+diff --git a/mm/memory.c b/mm/memory.c
+index 22e037e3364e..2a9e09870c20 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -2669,6 +2669,10 @@ static int do_anonymous_page(struct mm_struct *mm, struct vm_area_struct *vma,
+ 
+ 	pte_unmap(page_table);
+ 
++	/* File mapping without ->vm_ops ? */
++	if (vma->vm_flags & VM_SHARED)
++		return VM_FAULT_SIGBUS;
++
+ 	/* Check if we need to add a guard page to the stack */
+ 	if (check_stack_guard_page(vma, address) < 0)
+ 		return VM_FAULT_SIGSEGV;
+@@ -3097,6 +3101,9 @@ static int do_fault(struct mm_struct *mm, struct vm_area_struct *vma,
+ 			- vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+ 
+ 	pte_unmap(page_table);
++	/* The VMA was not fully populated on mmap() or missing VM_DONTEXPAND */
++	if (!vma->vm_ops->fault)
++		return VM_FAULT_SIGBUS;
+ 	if (!(flags & FAULT_FLAG_WRITE))
+ 		return do_read_fault(mm, vma, address, pmd, pgoff, flags,
+ 				orig_pte);
+@@ -3242,13 +3249,12 @@ static int handle_pte_fault(struct mm_struct *mm,
+ 	barrier();
+ 	if (!pte_present(entry)) {
+ 		if (pte_none(entry)) {
+-			if (vma->vm_ops) {
+-				if (likely(vma->vm_ops->fault))
+-					return do_fault(mm, vma, address, pte,
+-							pmd, flags, entry);
+-			}
+-			return do_anonymous_page(mm, vma, address,
+-						 pte, pmd, flags);
++			if (vma->vm_ops)
++				return do_fault(mm, vma, address, pte, pmd,
++						flags, entry);
++
++			return do_anonymous_page(mm, vma, address, pte, pmd,
++					flags);
+ 		}
+ 		return do_swap_page(mm, vma, address,
+ 					pte, pmd, flags, entry);
+diff --git a/net/9p/client.c b/net/9p/client.c
+index 6f4c4c88db84..81925b923318 100644
+--- a/net/9p/client.c
++++ b/net/9p/client.c
+@@ -843,7 +843,8 @@ static struct p9_req_t *p9_client_zc_rpc(struct p9_client *c, int8_t type,
+ 	if (err < 0) {
+ 		if (err == -EIO)
+ 			c->status = Disconnected;
+-		goto reterr;
++		if (err != -ERESTARTSYS)
++			goto reterr;
+ 	}
+ 	if (req->status == REQ_STATUS_ERROR) {
+ 		p9_debug(P9_DEBUG_ERROR, "req_status error %d\n", req->t_err);
+@@ -1647,6 +1648,7 @@ p9_client_write(struct p9_fid *fid, u64 offset, struct iov_iter *from, int *err)
+ 		if (*err) {
+ 			trace_9p_protocol_dump(clnt, req->rc);
+ 			p9_free_req(clnt, req);
++			break;
+ 		}
+ 
+ 		p9_debug(P9_DEBUG_9P, "<<< RWRITE count %d\n", count);
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 56f9edbf3d05..e11a5cfda4b1 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -741,10 +741,11 @@ static int hci_sock_bind(struct socket *sock, struct sockaddr *addr,
+ 			goto done;
+ 		}
+ 
+-		if (test_bit(HCI_UP, &hdev->flags) ||
+-		    test_bit(HCI_INIT, &hdev->flags) ||
++		if (test_bit(HCI_INIT, &hdev->flags) ||
+ 		    hci_dev_test_flag(hdev, HCI_SETUP) ||
+-		    hci_dev_test_flag(hdev, HCI_CONFIG)) {
++		    hci_dev_test_flag(hdev, HCI_CONFIG) ||
++		    (!hci_dev_test_flag(hdev, HCI_AUTO_OFF) &&
++		     test_bit(HCI_UP, &hdev->flags))) {
+ 			err = -EBUSY;
+ 			hci_dev_put(hdev);
+ 			goto done;
+@@ -760,10 +761,21 @@ static int hci_sock_bind(struct socket *sock, struct sockaddr *addr,
+ 
+ 		err = hci_dev_open(hdev->id);
+ 		if (err) {
+-			hci_dev_clear_flag(hdev, HCI_USER_CHANNEL);
+-			mgmt_index_added(hdev);
+-			hci_dev_put(hdev);
+-			goto done;
++			if (err == -EALREADY) {
++				/* In case the transport is already up and
++				 * running, clear the error here.
++				 *
++				 * This can happen when opening an user
++				 * channel and HCI_AUTO_OFF grace period
++				 * is still active.
++				 */
++				err = 0;
++			} else {
++				hci_dev_clear_flag(hdev, HCI_USER_CHANNEL);
++				mgmt_index_added(hdev);
++				hci_dev_put(hdev);
++				goto done;
++			}
+ 		}
+ 
+ 		atomic_inc(&hdev->promisc);
+diff --git a/net/ceph/osdmap.c b/net/ceph/osdmap.c
+index 15796696d64e..4a3125836b64 100644
+--- a/net/ceph/osdmap.c
++++ b/net/ceph/osdmap.c
+@@ -89,7 +89,7 @@ static int crush_decode_tree_bucket(void **p, void *end,
+ {
+ 	int j;
+ 	dout("crush_decode_tree_bucket %p to %p\n", *p, end);
+-	ceph_decode_32_safe(p, end, b->num_nodes, bad);
++	ceph_decode_8_safe(p, end, b->num_nodes, bad);
+ 	b->node_weights = kcalloc(b->num_nodes, sizeof(u32), GFP_NOFS);
+ 	if (b->node_weights == NULL)
+ 		return -ENOMEM;
+diff --git a/net/ieee802154/socket.c b/net/ieee802154/socket.c
+index b60c65f70346..627a2537634e 100644
+--- a/net/ieee802154/socket.c
++++ b/net/ieee802154/socket.c
+@@ -739,6 +739,12 @@ static int dgram_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 	sock_recv_ts_and_drops(msg, sk, skb);
+ 
+ 	if (saddr) {
++		/* Clear the implicit padding in struct sockaddr_ieee802154
++		 * (16 bits between 'family' and 'addr') and in struct
++		 * ieee802154_addr_sa (16 bits at the end of the structure).
++		 */
++		memset(saddr, 0, sizeof(*saddr));
++
+ 		saddr->family = AF_IEEE802154;
+ 		ieee802154_addr_to_sa(&saddr->addr, &mac_cb(skb)->source);
+ 		*addr_len = sizeof(*saddr);
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index ff347a0eebd4..f06d42267306 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -3356,6 +3356,7 @@ static int ieee80211_mgmt_tx(struct wiphy *wiphy, struct wireless_dev *wdev,
+ 	/* Update CSA counters */
+ 	if (sdata->vif.csa_active &&
+ 	    (sdata->vif.type == NL80211_IFTYPE_AP ||
++	     sdata->vif.type == NL80211_IFTYPE_MESH_POINT ||
+ 	     sdata->vif.type == NL80211_IFTYPE_ADHOC) &&
+ 	    params->n_csa_offsets) {
+ 		int i;
+diff --git a/net/mac80211/ibss.c b/net/mac80211/ibss.c
+index bfef1b215050..a9c9d961f039 100644
+--- a/net/mac80211/ibss.c
++++ b/net/mac80211/ibss.c
+@@ -146,6 +146,7 @@ ieee80211_ibss_build_presp(struct ieee80211_sub_if_data *sdata,
+ 				csa_settings->chandef.chan->center_freq);
+ 		presp->csa_counter_offsets[0] = (pos - presp->head);
+ 		*pos++ = csa_settings->count;
++		presp->csa_current_counter = csa_settings->count;
+ 	}
+ 
+ 	/* put the remaining rates in WLAN_EID_EXT_SUPP_RATES */
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index df3051d96aff..e86daed83c6f 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -249,6 +249,7 @@ static void ieee80211_restart_work(struct work_struct *work)
+ {
+ 	struct ieee80211_local *local =
+ 		container_of(work, struct ieee80211_local, restart_work);
++	struct ieee80211_sub_if_data *sdata;
+ 
+ 	/* wait for scan work complete */
+ 	flush_workqueue(local->workqueue);
+@@ -257,6 +258,8 @@ static void ieee80211_restart_work(struct work_struct *work)
+ 	     "%s called with hardware scan in progress\n", __func__);
+ 
+ 	rtnl_lock();
++	list_for_each_entry(sdata, &local->interfaces, list)
++		flush_delayed_work(&sdata->dec_tailroom_needed_wk);
+ 	ieee80211_scan_cancel(local);
+ 	ieee80211_reconfig(local);
+ 	rtnl_unlock();
+diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
+index d4684242e78b..817098add1d6 100644
+--- a/net/mac80211/mesh.c
++++ b/net/mac80211/mesh.c
+@@ -680,6 +680,7 @@ ieee80211_mesh_build_beacon(struct ieee80211_if_mesh *ifmsh)
+ 		*pos++ = 0x0;
+ 		*pos++ = ieee80211_frequency_to_channel(
+ 				csa->settings.chandef.chan->center_freq);
++		bcn->csa_current_counter = csa->settings.count;
+ 		bcn->csa_counter_offsets[0] = hdr_len + 6;
+ 		*pos++ = csa->settings.count;
+ 		*pos++ = WLAN_EID_CHAN_SWITCH_PARAM;
+diff --git a/net/sunrpc/backchannel_rqst.c b/net/sunrpc/backchannel_rqst.c
+index 9dd0ea8db463..28504dfd3dad 100644
+--- a/net/sunrpc/backchannel_rqst.c
++++ b/net/sunrpc/backchannel_rqst.c
+@@ -60,7 +60,7 @@ static void xprt_free_allocation(struct rpc_rqst *req)
+ 
+ 	dprintk("RPC:        free allocations for req= %p\n", req);
+ 	WARN_ON_ONCE(test_bit(RPC_BC_PA_IN_USE, &req->rq_bc_pa_state));
+-	xbufp = &req->rq_private_buf;
++	xbufp = &req->rq_rcv_buf;
+ 	free_page((unsigned long)xbufp->head[0].iov_base);
+ 	xbufp = &req->rq_snd_buf;
+ 	free_page((unsigned long)xbufp->head[0].iov_base);
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index 70051ab52f4f..7e4e3fffe7ce 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -944,7 +944,7 @@ int cfg80211_change_iface(struct cfg80211_registered_device *rdev,
+ 	     ntype == NL80211_IFTYPE_P2P_CLIENT))
+ 		return -EBUSY;
+ 
+-	if (ntype != otype && netif_running(dev)) {
++	if (ntype != otype) {
+ 		dev->ieee80211_ptr->use_4addr = false;
+ 		dev->ieee80211_ptr->mesh_id_up_len = 0;
+ 		wdev_lock(dev->ieee80211_ptr);
+diff --git a/samples/trace_events/trace-events-sample.h b/samples/trace_events/trace-events-sample.h
+index 8965d1bb8811..125d6402f64f 100644
+--- a/samples/trace_events/trace-events-sample.h
++++ b/samples/trace_events/trace-events-sample.h
+@@ -168,7 +168,10 @@
+  *
+  *      For __dynamic_array(int, foo, bar) use __get_dynamic_array(foo)
+  *            Use __get_dynamic_array_len(foo) to get the length of the array
+- *            saved.
++ *            saved. Note, __get_dynamic_array_len() returns the total allocated
++ *            length of the dynamic array; __print_array() expects the second
++ *            parameter to be the number of elements. To get that, the array length
++ *            needs to be divided by the element size.
+  *
+  *      For __string(foo, bar) use __get_str(foo)
+  *
+@@ -288,7 +291,7 @@ TRACE_EVENT(foo_bar,
+  *    This prints out the array that is defined by __array in a nice format.
+  */
+ 		  __print_array(__get_dynamic_array(list),
+-				__get_dynamic_array_len(list),
++				__get_dynamic_array_len(list) / sizeof(int),
+ 				sizeof(int)),
+ 		  __get_str(str), __get_bitmask(cpus))
+ );
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index 10f994307a04..582091498819 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -296,6 +296,17 @@ static int evm_protect_xattr(struct dentry *dentry, const char *xattr_name,
+ 		iint = integrity_iint_find(d_backing_inode(dentry));
+ 		if (iint && (iint->flags & IMA_NEW_FILE))
+ 			return 0;
++
++		/* exception for pseudo filesystems */
++		if (dentry->d_inode->i_sb->s_magic == TMPFS_MAGIC
++		    || dentry->d_inode->i_sb->s_magic == SYSFS_MAGIC)
++			return 0;
++
++		integrity_audit_msg(AUDIT_INTEGRITY_METADATA,
++				    dentry->d_inode, dentry->d_name.name,
++				    "update_metadata",
++				    integrity_status_msg[evm_status],
++				    -EPERM, 0);
+ 	}
+ out:
+ 	if (evm_status != INTEGRITY_PASS)
+diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
+index 8ee997dff139..fc56d4dfa954 100644
+--- a/security/integrity/ima/ima.h
++++ b/security/integrity/ima/ima.h
+@@ -106,7 +106,7 @@ void ima_add_violation(struct file *file, const unsigned char *filename,
+ 		       const char *op, const char *cause);
+ int ima_init_crypto(void);
+ void ima_putc(struct seq_file *m, void *data, int datalen);
+-void ima_print_digest(struct seq_file *m, u8 *digest, int size);
++void ima_print_digest(struct seq_file *m, u8 *digest, u32 size);
+ struct ima_template_desc *ima_template_desc_current(void);
+ int ima_init_template(void);
+ 
+diff --git a/security/integrity/ima/ima_fs.c b/security/integrity/ima/ima_fs.c
+index 461215e5fd31..816d175da79a 100644
+--- a/security/integrity/ima/ima_fs.c
++++ b/security/integrity/ima/ima_fs.c
+@@ -190,9 +190,9 @@ static const struct file_operations ima_measurements_ops = {
+ 	.release = seq_release,
+ };
+ 
+-void ima_print_digest(struct seq_file *m, u8 *digest, int size)
++void ima_print_digest(struct seq_file *m, u8 *digest, u32 size)
+ {
+-	int i;
++	u32 i;
+ 
+ 	for (i = 0; i < size; i++)
+ 		seq_printf(m, "%02x", *(digest + i));
+diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c
+index d1eefb9d65fb..3997e206f82d 100644
+--- a/security/integrity/ima/ima_policy.c
++++ b/security/integrity/ima/ima_policy.c
+@@ -27,6 +27,8 @@
+ #define IMA_UID		0x0008
+ #define IMA_FOWNER	0x0010
+ #define IMA_FSUUID	0x0020
++#define IMA_INMASK	0x0040
++#define IMA_EUID	0x0080
+ 
+ #define UNKNOWN		0
+ #define MEASURE		0x0001	/* same as IMA_MEASURE */
+@@ -42,6 +44,8 @@ enum lsm_rule_types { LSM_OBJ_USER, LSM_OBJ_ROLE, LSM_OBJ_TYPE,
+ 	LSM_SUBJ_USER, LSM_SUBJ_ROLE, LSM_SUBJ_TYPE
+ };
+ 
++enum policy_types { ORIGINAL_TCB = 1, DEFAULT_TCB };
++
+ struct ima_rule_entry {
+ 	struct list_head list;
+ 	int action;
+@@ -70,7 +74,7 @@ struct ima_rule_entry {
+  * normal users can easily run the machine out of memory simply building
+  * and running executables.
+  */
+-static struct ima_rule_entry default_rules[] = {
++static struct ima_rule_entry dont_measure_rules[] = {
+ 	{.action = DONT_MEASURE, .fsmagic = PROC_SUPER_MAGIC, .flags = IMA_FSMAGIC},
+ 	{.action = DONT_MEASURE, .fsmagic = SYSFS_MAGIC, .flags = IMA_FSMAGIC},
+ 	{.action = DONT_MEASURE, .fsmagic = DEBUGFS_MAGIC, .flags = IMA_FSMAGIC},
+@@ -79,12 +83,31 @@ static struct ima_rule_entry default_rules[] = {
+ 	{.action = DONT_MEASURE, .fsmagic = BINFMTFS_MAGIC, .flags = IMA_FSMAGIC},
+ 	{.action = DONT_MEASURE, .fsmagic = SECURITYFS_MAGIC, .flags = IMA_FSMAGIC},
+ 	{.action = DONT_MEASURE, .fsmagic = SELINUX_MAGIC, .flags = IMA_FSMAGIC},
++	{.action = DONT_MEASURE, .fsmagic = CGROUP_SUPER_MAGIC,
++	 .flags = IMA_FSMAGIC},
++	{.action = DONT_MEASURE, .fsmagic = NSFS_MAGIC, .flags = IMA_FSMAGIC}
++};
++
++static struct ima_rule_entry original_measurement_rules[] = {
+ 	{.action = MEASURE, .func = MMAP_CHECK, .mask = MAY_EXEC,
+ 	 .flags = IMA_FUNC | IMA_MASK},
+ 	{.action = MEASURE, .func = BPRM_CHECK, .mask = MAY_EXEC,
+ 	 .flags = IMA_FUNC | IMA_MASK},
+-	{.action = MEASURE, .func = FILE_CHECK, .mask = MAY_READ, .uid = GLOBAL_ROOT_UID,
+-	 .flags = IMA_FUNC | IMA_MASK | IMA_UID},
++	{.action = MEASURE, .func = FILE_CHECK, .mask = MAY_READ,
++	 .uid = GLOBAL_ROOT_UID, .flags = IMA_FUNC | IMA_MASK | IMA_UID},
++	{.action = MEASURE, .func = MODULE_CHECK, .flags = IMA_FUNC},
++	{.action = MEASURE, .func = FIRMWARE_CHECK, .flags = IMA_FUNC},
++};
++
++static struct ima_rule_entry default_measurement_rules[] = {
++	{.action = MEASURE, .func = MMAP_CHECK, .mask = MAY_EXEC,
++	 .flags = IMA_FUNC | IMA_MASK},
++	{.action = MEASURE, .func = BPRM_CHECK, .mask = MAY_EXEC,
++	 .flags = IMA_FUNC | IMA_MASK},
++	{.action = MEASURE, .func = FILE_CHECK, .mask = MAY_READ,
++	 .uid = GLOBAL_ROOT_UID, .flags = IMA_FUNC | IMA_INMASK | IMA_EUID},
++	{.action = MEASURE, .func = FILE_CHECK, .mask = MAY_READ,
++	 .uid = GLOBAL_ROOT_UID, .flags = IMA_FUNC | IMA_INMASK | IMA_UID},
+ 	{.action = MEASURE, .func = MODULE_CHECK, .flags = IMA_FUNC},
+ 	{.action = MEASURE, .func = FIRMWARE_CHECK, .flags = IMA_FUNC},
+ };
+@@ -99,6 +122,7 @@ static struct ima_rule_entry default_appraise_rules[] = {
+ 	{.action = DONT_APPRAISE, .fsmagic = BINFMTFS_MAGIC, .flags = IMA_FSMAGIC},
+ 	{.action = DONT_APPRAISE, .fsmagic = SECURITYFS_MAGIC, .flags = IMA_FSMAGIC},
+ 	{.action = DONT_APPRAISE, .fsmagic = SELINUX_MAGIC, .flags = IMA_FSMAGIC},
++	{.action = DONT_APPRAISE, .fsmagic = NSFS_MAGIC, .flags = IMA_FSMAGIC},
+ 	{.action = DONT_APPRAISE, .fsmagic = CGROUP_SUPER_MAGIC, .flags = IMA_FSMAGIC},
+ #ifndef CONFIG_IMA_APPRAISE_SIGNED_INIT
+ 	{.action = APPRAISE, .fowner = GLOBAL_ROOT_UID, .flags = IMA_FOWNER},
+@@ -115,14 +139,29 @@ static struct list_head *ima_rules;
+ 
+ static DEFINE_MUTEX(ima_rules_mutex);
+ 
+-static bool ima_use_tcb __initdata;
++static int ima_policy __initdata;
+ static int __init default_measure_policy_setup(char *str)
+ {
+-	ima_use_tcb = 1;
++	if (ima_policy)
++		return 1;
++
++	ima_policy = ORIGINAL_TCB;
+ 	return 1;
+ }
+ __setup("ima_tcb", default_measure_policy_setup);
+ 
++static int __init policy_setup(char *str)
++{
++	if (ima_policy)
++		return 1;
++
++	if (strcmp(str, "tcb") == 0)
++		ima_policy = DEFAULT_TCB;
++
++	return 1;
++}
++__setup("ima_policy=", policy_setup);
++
+ static bool ima_use_appraise_tcb __initdata;
+ static int __init default_appraise_policy_setup(char *str)
+ {
+@@ -182,6 +221,9 @@ static bool ima_match_rules(struct ima_rule_entry *rule,
+ 	if ((rule->flags & IMA_MASK) &&
+ 	    (rule->mask != mask && func != POST_SETATTR))
+ 		return false;
++	if ((rule->flags & IMA_INMASK) &&
++	    (!(rule->mask & mask) && func != POST_SETATTR))
++		return false;
+ 	if ((rule->flags & IMA_FSMAGIC)
+ 	    && rule->fsmagic != inode->i_sb->s_magic)
+ 		return false;
+@@ -190,6 +232,16 @@ static bool ima_match_rules(struct ima_rule_entry *rule,
+ 		return false;
+ 	if ((rule->flags & IMA_UID) && !uid_eq(rule->uid, cred->uid))
+ 		return false;
++	if (rule->flags & IMA_EUID) {
++		if (has_capability_noaudit(current, CAP_SETUID)) {
++			if (!uid_eq(rule->uid, cred->euid)
++			    && !uid_eq(rule->uid, cred->suid)
++			    && !uid_eq(rule->uid, cred->uid))
++				return false;
++		} else if (!uid_eq(rule->uid, cred->euid))
++			return false;
++	}
++
+ 	if ((rule->flags & IMA_FOWNER) && !uid_eq(rule->fowner, inode->i_uid))
+ 		return false;
+ 	for (i = 0; i < MAX_LSM_RULES; i++) {
+@@ -333,21 +385,31 @@ void __init ima_init_policy(void)
+ {
+ 	int i, measure_entries, appraise_entries;
+ 
+-	/* if !ima_use_tcb set entries = 0 so we load NO default rules */
+-	measure_entries = ima_use_tcb ? ARRAY_SIZE(default_rules) : 0;
++	/* if !ima_policy set entries = 0 so we load NO default rules */
++	measure_entries = ima_policy ? ARRAY_SIZE(dont_measure_rules) : 0;
+ 	appraise_entries = ima_use_appraise_tcb ?
+ 			 ARRAY_SIZE(default_appraise_rules) : 0;
+ 
+-	for (i = 0; i < measure_entries + appraise_entries; i++) {
+-		if (i < measure_entries)
+-			list_add_tail(&default_rules[i].list,
+-				      &ima_default_rules);
+-		else {
+-			int j = i - measure_entries;
++	for (i = 0; i < measure_entries; i++)
++		list_add_tail(&dont_measure_rules[i].list, &ima_default_rules);
+ 
+-			list_add_tail(&default_appraise_rules[j].list,
++	switch (ima_policy) {
++	case ORIGINAL_TCB:
++		for (i = 0; i < ARRAY_SIZE(original_measurement_rules); i++)
++			list_add_tail(&original_measurement_rules[i].list,
+ 				      &ima_default_rules);
+-		}
++		break;
++	case DEFAULT_TCB:
++		for (i = 0; i < ARRAY_SIZE(default_measurement_rules); i++)
++			list_add_tail(&default_measurement_rules[i].list,
++				      &ima_default_rules);
++	default:
++		break;
++	}
++
++	for (i = 0; i < appraise_entries; i++) {
++		list_add_tail(&default_appraise_rules[i].list,
++			      &ima_default_rules);
+ 	}
+ 
+ 	ima_rules = &ima_default_rules;
+@@ -373,7 +435,8 @@ enum {
+ 	Opt_audit,
+ 	Opt_obj_user, Opt_obj_role, Opt_obj_type,
+ 	Opt_subj_user, Opt_subj_role, Opt_subj_type,
+-	Opt_func, Opt_mask, Opt_fsmagic, Opt_uid, Opt_fowner,
++	Opt_func, Opt_mask, Opt_fsmagic,
++	Opt_uid, Opt_euid, Opt_fowner,
+ 	Opt_appraise_type, Opt_fsuuid, Opt_permit_directio
+ };
+ 
+@@ -394,6 +457,7 @@ static match_table_t policy_tokens = {
+ 	{Opt_fsmagic, "fsmagic=%s"},
+ 	{Opt_fsuuid, "fsuuid=%s"},
+ 	{Opt_uid, "uid=%s"},
++	{Opt_euid, "euid=%s"},
+ 	{Opt_fowner, "fowner=%s"},
+ 	{Opt_appraise_type, "appraise_type=%s"},
+ 	{Opt_permit_directio, "permit_directio"},
+@@ -435,6 +499,7 @@ static void ima_log_string(struct audit_buffer *ab, char *key, char *value)
+ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ {
+ 	struct audit_buffer *ab;
++	char *from;
+ 	char *p;
+ 	int result = 0;
+ 
+@@ -525,18 +590,23 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ 			if (entry->mask)
+ 				result = -EINVAL;
+ 
+-			if ((strcmp(args[0].from, "MAY_EXEC")) == 0)
++			from = args[0].from;
++			if (*from == '^')
++				from++;
++
++			if ((strcmp(from, "MAY_EXEC")) == 0)
+ 				entry->mask = MAY_EXEC;
+-			else if (strcmp(args[0].from, "MAY_WRITE") == 0)
++			else if (strcmp(from, "MAY_WRITE") == 0)
+ 				entry->mask = MAY_WRITE;
+-			else if (strcmp(args[0].from, "MAY_READ") == 0)
++			else if (strcmp(from, "MAY_READ") == 0)
+ 				entry->mask = MAY_READ;
+-			else if (strcmp(args[0].from, "MAY_APPEND") == 0)
++			else if (strcmp(from, "MAY_APPEND") == 0)
+ 				entry->mask = MAY_APPEND;
+ 			else
+ 				result = -EINVAL;
+ 			if (!result)
+-				entry->flags |= IMA_MASK;
++				entry->flags |= (*args[0].from == '^')
++				     ? IMA_INMASK : IMA_MASK;
+ 			break;
+ 		case Opt_fsmagic:
+ 			ima_log_string(ab, "fsmagic", args[0].from);
+@@ -566,6 +636,9 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ 			break;
+ 		case Opt_uid:
+ 			ima_log_string(ab, "uid", args[0].from);
++		case Opt_euid:
++			if (token == Opt_euid)
++				ima_log_string(ab, "euid", args[0].from);
+ 
+ 			if (uid_valid(entry->uid)) {
+ 				result = -EINVAL;
+@@ -574,11 +647,14 @@ static int ima_parse_rule(char *rule, struct ima_rule_entry *entry)
+ 
+ 			result = kstrtoul(args[0].from, 10, &lnum);
+ 			if (!result) {
+-				entry->uid = make_kuid(current_user_ns(), (uid_t)lnum);
+-				if (!uid_valid(entry->uid) || (((uid_t)lnum) != lnum))
++				entry->uid = make_kuid(current_user_ns(),
++						       (uid_t) lnum);
++				if (!uid_valid(entry->uid) ||
++				    (uid_t)lnum != lnum)
+ 					result = -EINVAL;
+ 				else
+-					entry->flags |= IMA_UID;
++					entry->flags |= (token == Opt_uid)
++					    ? IMA_UID : IMA_EUID;
+ 			}
+ 			break;
+ 		case Opt_fowner:
+diff --git a/security/integrity/ima/ima_template_lib.c b/security/integrity/ima/ima_template_lib.c
+index bcfc36cbde6a..61fbd0c0d95c 100644
+--- a/security/integrity/ima/ima_template_lib.c
++++ b/security/integrity/ima/ima_template_lib.c
+@@ -70,7 +70,8 @@ static void ima_show_template_data_ascii(struct seq_file *m,
+ 					 enum data_formats datafmt,
+ 					 struct ima_field_data *field_data)
+ {
+-	u8 *buf_ptr = field_data->data, buflen = field_data->len;
++	u8 *buf_ptr = field_data->data;
++	u32 buflen = field_data->len;
+ 
+ 	switch (datafmt) {
+ 	case DATA_FMT_DIGEST_WITH_ALGO:
+diff --git a/security/keys/keyring.c b/security/keys/keyring.c
+index e72548b5897e..d33437007ad2 100644
+--- a/security/keys/keyring.c
++++ b/security/keys/keyring.c
+@@ -1181,9 +1181,11 @@ void __key_link_end(struct key *keyring,
+ 	if (index_key->type == &key_type_keyring)
+ 		up_write(&keyring_serialise_link_sem);
+ 
+-	if (edit && !edit->dead_leaf) {
+-		key_payload_reserve(keyring,
+-				    keyring->datalen - KEYQUOTA_LINK_BYTES);
++	if (edit) {
++		if (!edit->dead_leaf) {
++			key_payload_reserve(keyring,
++				keyring->datalen - KEYQUOTA_LINK_BYTES);
++		}
+ 		assoc_array_cancel_edit(edit);
+ 	}
+ 	up_write(&keyring->sem);
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 212070e1de1a..7f8d7f19e044 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -3288,7 +3288,8 @@ static int file_map_prot_check(struct file *file, unsigned long prot, int shared
+ 	int rc = 0;
+ 
+ 	if (default_noexec &&
+-	    (prot & PROT_EXEC) && (!file || (!shared && (prot & PROT_WRITE)))) {
++	    (prot & PROT_EXEC) && (!file || IS_PRIVATE(file_inode(file)) ||
++				   (!shared && (prot & PROT_WRITE)))) {
+ 		/*
+ 		 * We are making executable an anonymous mapping or a
+ 		 * private file mapping that will also be writable.
+diff --git a/security/selinux/ss/ebitmap.c b/security/selinux/ss/ebitmap.c
+index afe6a269ec17..57644b1dc42e 100644
+--- a/security/selinux/ss/ebitmap.c
++++ b/security/selinux/ss/ebitmap.c
+@@ -153,6 +153,12 @@ int ebitmap_netlbl_import(struct ebitmap *ebmap,
+ 		if (offset == (u32)-1)
+ 			return 0;
+ 
++		/* don't waste ebitmap space if the netlabel bitmap is empty */
++		if (bitmap == 0) {
++			offset += EBITMAP_UNIT_SIZE;
++			continue;
++		}
++
+ 		if (e_iter == NULL ||
+ 		    offset >= e_iter->startbit + EBITMAP_SIZE) {
+ 			e_prev = e_iter;
+diff --git a/sound/soc/codecs/max98925.c b/sound/soc/codecs/max98925.c
+index 9b5a17de4690..aad664225dc3 100644
+--- a/sound/soc/codecs/max98925.c
++++ b/sound/soc/codecs/max98925.c
+@@ -346,7 +346,7 @@ static int max98925_dai_set_fmt(struct snd_soc_dai *codec_dai,
+ 	}
+ 
+ 	regmap_update_bits(max98925->regmap, MAX98925_FORMAT,
+-			M98925_DAI_BCI_MASK, invert);
++			M98925_DAI_BCI_MASK | M98925_DAI_WCI_MASK, invert);
+ 	return 0;
+ }
+ 
+diff --git a/sound/soc/codecs/rt5645.c b/sound/soc/codecs/rt5645.c
+index be4d741c45ba..2ee44abd56a6 100644
+--- a/sound/soc/codecs/rt5645.c
++++ b/sound/soc/codecs/rt5645.c
+@@ -2837,6 +2837,8 @@ static int rt5645_i2c_probe(struct i2c_client *i2c,
+ 		}
+ 	}
+ 
++	INIT_DELAYED_WORK(&rt5645->jack_detect_work, rt5645_jack_detect_work);
++
+ 	if (rt5645->i2c->irq) {
+ 		ret = request_threaded_irq(rt5645->i2c->irq, NULL, rt5645_irq,
+ 			IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING
+@@ -2855,8 +2857,6 @@ static int rt5645_i2c_probe(struct i2c_client *i2c,
+ 			dev_err(&i2c->dev, "Fail gpio_direction hp_det_gpio\n");
+ 	}
+ 
+-	INIT_DELAYED_WORK(&rt5645->jack_detect_work, rt5645_jack_detect_work);
+-
+ 	return snd_soc_register_codec(&i2c->dev, &soc_codec_dev_rt5645,
+ 				      rt5645_dai, ARRAY_SIZE(rt5645_dai));
+ }
+diff --git a/sound/soc/codecs/tas2552.c b/sound/soc/codecs/tas2552.c
+index dfb4ff5cc9ea..18558595ba72 100644
+--- a/sound/soc/codecs/tas2552.c
++++ b/sound/soc/codecs/tas2552.c
+@@ -120,6 +120,9 @@ static void tas2552_sw_shutdown(struct tas2552_data *tas_data, int sw_shutdown)
+ {
+ 	u8 cfg1_reg;
+ 
++	if (!tas_data->codec)
++		return;
++
+ 	if (sw_shutdown)
+ 		cfg1_reg = 0;
+ 	else
+@@ -335,7 +338,6 @@ static DECLARE_TLV_DB_SCALE(dac_tlv, -7, 100, 24);
+ static const struct snd_kcontrol_new tas2552_snd_controls[] = {
+ 	SOC_SINGLE_TLV("Speaker Driver Playback Volume",
+ 			 TAS2552_PGA_GAIN, 0, 0x1f, 1, dac_tlv),
+-	SOC_DAPM_SINGLE("Playback AMP", SND_SOC_NOPM, 0, 1, 0),
+ };
+ 
+ static const struct reg_default tas2552_init_regs[] = {
+diff --git a/sound/soc/codecs/wm5102.c b/sound/soc/codecs/wm5102.c
+index 0c6d1bc0526e..d476221dba51 100644
+--- a/sound/soc/codecs/wm5102.c
++++ b/sound/soc/codecs/wm5102.c
+@@ -42,7 +42,7 @@ struct wm5102_priv {
+ static DECLARE_TLV_DB_SCALE(ana_tlv, 0, 100, 0);
+ static DECLARE_TLV_DB_SCALE(eq_tlv, -1200, 100, 0);
+ static DECLARE_TLV_DB_SCALE(digital_tlv, -6400, 50, 0);
+-static DECLARE_TLV_DB_SCALE(noise_tlv, 0, 600, 0);
++static DECLARE_TLV_DB_SCALE(noise_tlv, -13200, 600, 0);
+ static DECLARE_TLV_DB_SCALE(ng_tlv, -10200, 600, 0);
+ 
+ static const struct wm_adsp_region wm5102_dsp1_regions[] = {
+diff --git a/sound/soc/codecs/wm5110.c b/sound/soc/codecs/wm5110.c
+index fbaeddb3e903..3ee6cfd0578b 100644
+--- a/sound/soc/codecs/wm5110.c
++++ b/sound/soc/codecs/wm5110.c
+@@ -167,7 +167,7 @@ static int wm5110_sysclk_ev(struct snd_soc_dapm_widget *w,
+ static DECLARE_TLV_DB_SCALE(ana_tlv, 0, 100, 0);
+ static DECLARE_TLV_DB_SCALE(eq_tlv, -1200, 100, 0);
+ static DECLARE_TLV_DB_SCALE(digital_tlv, -6400, 50, 0);
+-static DECLARE_TLV_DB_SCALE(noise_tlv, 0, 600, 0);
++static DECLARE_TLV_DB_SCALE(noise_tlv, -13200, 600, 0);
+ static DECLARE_TLV_DB_SCALE(ng_tlv, -10200, 600, 0);
+ 
+ #define WM5110_NG_SRC(name, base) \
+diff --git a/sound/soc/codecs/wm8737.c b/sound/soc/codecs/wm8737.c
+index ada9ac1ba2c6..51171e457fa4 100644
+--- a/sound/soc/codecs/wm8737.c
++++ b/sound/soc/codecs/wm8737.c
+@@ -483,7 +483,8 @@ static int wm8737_set_bias_level(struct snd_soc_codec *codec,
+ 
+ 			/* Fast VMID ramp at 2*2.5k */
+ 			snd_soc_update_bits(codec, WM8737_MISC_BIAS_CONTROL,
+-					    WM8737_VMIDSEL_MASK, 0x4);
++					    WM8737_VMIDSEL_MASK,
++					    2 << WM8737_VMIDSEL_SHIFT);
+ 
+ 			/* Bring VMID up */
+ 			snd_soc_update_bits(codec, WM8737_POWER_MANAGEMENT,
+@@ -497,7 +498,8 @@ static int wm8737_set_bias_level(struct snd_soc_codec *codec,
+ 
+ 		/* VMID at 2*300k */
+ 		snd_soc_update_bits(codec, WM8737_MISC_BIAS_CONTROL,
+-				    WM8737_VMIDSEL_MASK, 2);
++				    WM8737_VMIDSEL_MASK,
++				    1 << WM8737_VMIDSEL_SHIFT);
+ 
+ 		break;
+ 
+diff --git a/sound/soc/codecs/wm8903.h b/sound/soc/codecs/wm8903.h
+index db949311c0f2..0bb4a647755d 100644
+--- a/sound/soc/codecs/wm8903.h
++++ b/sound/soc/codecs/wm8903.h
+@@ -172,7 +172,7 @@ extern int wm8903_mic_detect(struct snd_soc_codec *codec,
+ #define WM8903_VMID_BUF_ENA_WIDTH                    1  /* VMID_BUF_ENA */
+ 
+ #define WM8903_VMID_RES_50K                          2
+-#define WM8903_VMID_RES_250K                         3
++#define WM8903_VMID_RES_250K                         4
+ #define WM8903_VMID_RES_5K                           6
+ 
+ /*
+diff --git a/sound/soc/codecs/wm8955.c b/sound/soc/codecs/wm8955.c
+index 00bec915d652..03e04bf6c5ba 100644
+--- a/sound/soc/codecs/wm8955.c
++++ b/sound/soc/codecs/wm8955.c
+@@ -298,7 +298,7 @@ static int wm8955_configure_clocking(struct snd_soc_codec *codec)
+ 		snd_soc_update_bits(codec, WM8955_PLL_CONTROL_2,
+ 				    WM8955_K_17_9_MASK,
+ 				    (pll.k >> 9) & WM8955_K_17_9_MASK);
+-		snd_soc_update_bits(codec, WM8955_PLL_CONTROL_2,
++		snd_soc_update_bits(codec, WM8955_PLL_CONTROL_3,
+ 				    WM8955_K_8_0_MASK,
+ 				    pll.k & WM8955_K_8_0_MASK);
+ 		if (pll.k)
+diff --git a/sound/soc/codecs/wm8960.c b/sound/soc/codecs/wm8960.c
+index e97a7615df85..8d7f63253440 100644
+--- a/sound/soc/codecs/wm8960.c
++++ b/sound/soc/codecs/wm8960.c
+@@ -245,7 +245,7 @@ SOC_SINGLE("PCM Playback -6dB Switch", WM8960_DACCTL1, 7, 1, 0),
+ SOC_ENUM("ADC Polarity", wm8960_enum[0]),
+ SOC_SINGLE("ADC High Pass Filter Switch", WM8960_DACCTL1, 0, 1, 0),
+ 
+-SOC_ENUM("DAC Polarity", wm8960_enum[2]),
++SOC_ENUM("DAC Polarity", wm8960_enum[1]),
+ SOC_SINGLE_BOOL_EXT("DAC Deemphasis Switch", 0,
+ 		    wm8960_get_deemph, wm8960_put_deemph),
+ 
+diff --git a/sound/soc/codecs/wm8997.c b/sound/soc/codecs/wm8997.c
+index a4d11770630c..e7c81baefe66 100644
+--- a/sound/soc/codecs/wm8997.c
++++ b/sound/soc/codecs/wm8997.c
+@@ -40,7 +40,7 @@ struct wm8997_priv {
+ static DECLARE_TLV_DB_SCALE(ana_tlv, 0, 100, 0);
+ static DECLARE_TLV_DB_SCALE(eq_tlv, -1200, 100, 0);
+ static DECLARE_TLV_DB_SCALE(digital_tlv, -6400, 50, 0);
+-static DECLARE_TLV_DB_SCALE(noise_tlv, 0, 600, 0);
++static DECLARE_TLV_DB_SCALE(noise_tlv, -13200, 600, 0);
+ static DECLARE_TLV_DB_SCALE(ng_tlv, -10200, 600, 0);
+ 
+ static const struct reg_default wm8997_sysclk_reva_patch[] = {
+diff --git a/sound/soc/fsl/imx-wm8962.c b/sound/soc/fsl/imx-wm8962.c
+index cd146d4fa805..b38b98cae855 100644
+--- a/sound/soc/fsl/imx-wm8962.c
++++ b/sound/soc/fsl/imx-wm8962.c
+@@ -190,7 +190,7 @@ static int imx_wm8962_probe(struct platform_device *pdev)
+ 		dev_err(&pdev->dev, "audmux internal port setup failed\n");
+ 		return ret;
+ 	}
+-	imx_audmux_v2_configure_port(ext_port,
++	ret = imx_audmux_v2_configure_port(ext_port,
+ 			IMX_AUDMUX_V2_PTCR_SYN,
+ 			IMX_AUDMUX_V2_PDCR_RXDSEL(int_port));
+ 	if (ret) {
+diff --git a/sound/soc/omap/Kconfig b/sound/soc/omap/Kconfig
+index 6768e4f7d7d0..30d0109703a9 100644
+--- a/sound/soc/omap/Kconfig
++++ b/sound/soc/omap/Kconfig
+@@ -100,12 +100,13 @@ config SND_OMAP_SOC_OMAP_TWL4030
+ 
+ config SND_OMAP_SOC_OMAP_ABE_TWL6040
+ 	tristate "SoC Audio support for OMAP boards using ABE and twl6040 codec"
+-	depends on TWL6040_CORE && SND_OMAP_SOC && (ARCH_OMAP4 || SOC_OMAP5 || COMPILE_TEST)
++	depends on TWL6040_CORE && SND_OMAP_SOC
++	depends on ARCH_OMAP4 || (SOC_OMAP5 && MFD_PALMAS) || COMPILE_TEST
+ 	select SND_OMAP_SOC_DMIC
+ 	select SND_OMAP_SOC_MCPDM
+ 	select SND_SOC_TWL6040
+ 	select SND_SOC_DMIC
+-	select COMMON_CLK_PALMAS if MFD_PALMAS
++	select COMMON_CLK_PALMAS if (SOC_OMAP5 && MFD_PALMAS)
+ 	help
+ 	  Say Y if you want to add support for SoC audio on OMAP boards using
+ 	  ABE and twl6040 codec. This driver currently supports:
+diff --git a/sound/soc/qcom/Kconfig b/sound/soc/qcom/Kconfig
+index 5f58e4f1bca9..b07f183fc47f 100644
+--- a/sound/soc/qcom/Kconfig
++++ b/sound/soc/qcom/Kconfig
+@@ -6,12 +6,10 @@ config SND_SOC_QCOM
+ 
+ config SND_SOC_LPASS_CPU
+ 	tristate
+-	depends on SND_SOC_QCOM
+ 	select REGMAP_MMIO
+ 
+ config SND_SOC_LPASS_PLATFORM
+ 	tristate
+-	depends on SND_SOC_QCOM
+ 	select REGMAP_MMIO
+ 
+ config SND_SOC_STORM
+diff --git a/tools/perf/util/cloexec.c b/tools/perf/util/cloexec.c
+index 85b523885f9d..2babddaa2481 100644
+--- a/tools/perf/util/cloexec.c
++++ b/tools/perf/util/cloexec.c
+@@ -7,11 +7,15 @@
+ 
+ static unsigned long flag = PERF_FLAG_FD_CLOEXEC;
+ 
++#ifdef __GLIBC_PREREQ
++#if !__GLIBC_PREREQ(2, 6)
+ int __weak sched_getcpu(void)
+ {
+ 	errno = ENOSYS;
+ 	return -1;
+ }
++#endif
++#endif
+ 
+ static int perf_flag_probe(void)
+ {


Thread overview: 71+ messages
2015-08-03 19:01 Mike Pagano [this message]
  -- strict thread matches above, loose matches on Subject: below --
2018-05-29 10:34 [gentoo-commits] proj/linux-patches:4.1 commit in: / Mike Pagano
2018-01-23  9:37 Alice Ferrazzi
2017-12-15 20:22 Alice Ferrazzi
2017-12-08 14:48 Mike Pagano
2017-12-07 18:53 Mike Pagano
2017-10-18 11:51 Mike Pagano
2017-09-13 19:38 Mike Pagano
2017-08-06 18:01 Mike Pagano
2017-04-14 19:17 Mike Pagano
2017-03-14 11:39 Mike Pagano
2017-03-02 16:31 Mike Pagano
2017-03-02 16:31 Mike Pagano
2017-02-24 16:11 Mike Pagano
2017-01-18 23:50 Alice Ferrazzi
2017-01-10  4:02 Alice Ferrazzi
2016-12-08  0:43 Mike Pagano
2016-11-30 11:45 Mike Pagano
2016-11-23 11:25 Mike Pagano
2016-10-28 10:19 Mike Pagano
2016-10-12 19:52 Mike Pagano
2016-09-18 12:47 Mike Pagano
2016-08-22 23:29 Mike Pagano
2016-08-10 12:55 Mike Pagano
2016-07-31 16:01 Mike Pagano
2016-07-15 14:18 Mike Pagano
2016-07-13 23:38 Mike Pagano
2016-07-02 15:31 Mike Pagano
2016-07-01 19:56 Mike Pagano
2016-06-23 11:45 Mike Pagano
2016-06-08 11:17 Mike Pagano
2016-05-24 12:39 Mike Pagano
2016-05-12  0:12 Mike Pagano
2016-04-28 18:56 Mike Pagano
2016-04-22 18:06 Mike Pagano
2016-04-20 11:23 Mike Pagano
2016-04-06 11:23 Mike Pagano
2016-03-22 22:47 Mike Pagano
2016-03-17 22:52 Mike Pagano
2016-03-05 23:38 Mike Pagano
2016-02-16 15:28 Mike Pagano
2016-01-31 23:29 Mike Pagano
2016-01-23 18:30 Mike Pagano
2016-01-20 13:54 Mike Pagano
2015-12-15 11:17 Mike Pagano
2015-12-10 13:54 Mike Pagano
2015-11-10  0:30 Mike Pagano
2015-11-05 23:29 Mike Pagano
2015-11-05 23:29 Mike Pagano
2015-10-27 13:19 Mike Pagano
2015-10-26 20:51 Mike Pagano
2015-10-26 20:49 Mike Pagano
2015-10-03 16:07 Mike Pagano
2015-10-02 12:08 Mike Pagano
2015-09-29 17:50 Mike Pagano
2015-09-28 23:57 Mike Pagano
2015-09-21 22:16 Mike Pagano
2015-09-14 15:20 Mike Pagano
2015-08-17 15:38 Mike Pagano
2015-08-12 14:17 Mike Pagano
2015-08-10 23:42 Mike Pagano
2015-07-22 10:31 Mike Pagano
2015-07-22 10:09 Mike Pagano
2015-07-19 18:55 Mike Pagano
2015-07-17 15:24 Mike Pagano
2015-07-10 23:47 Mike Pagano
2015-07-01 15:33 Mike Pagano
2015-06-27 19:50 Mike Pagano
2015-06-26 22:36 Mike Pagano
2015-06-20 17:37 Mike Pagano
2015-06-08 17:59 Mike Pagano
