From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from lists.gentoo.org (pigeon.gentoo.org [208.92.234.80])
	(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by finch.gentoo.org (Postfix) with ESMTPS id C74B8138330
	for ; Wed, 30 May 2018 11:44:53 +0000 (UTC)
Received: from pigeon.gentoo.org (localhost [127.0.0.1])
	by pigeon.gentoo.org (Postfix) with SMTP id E6C67E07C9;
	Wed, 30 May 2018 11:44:52 +0000 (UTC)
Received: from smtp.gentoo.org (smtp.gentoo.org [140.211.166.183])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by pigeon.gentoo.org (Postfix) with ESMTPS id 629D4E07C9
	for ; Wed, 30 May 2018 11:44:52 +0000 (UTC)
Received: from oystercatcher.gentoo.org (oystercatcher.gentoo.org [148.251.78.52])
	(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
	(No client certificate requested)
	by smtp.gentoo.org (Postfix) with ESMTPS id 042D2335CA2
	for ; Wed, 30 May 2018 11:44:51 +0000 (UTC)
Received: from localhost.localdomain (localhost [IPv6:::1])
	by oystercatcher.gentoo.org (Postfix) with ESMTP id 0C4162AD
	for ; Wed, 30 May 2018 11:44:49 +0000 (UTC)
From: "Mike Pagano"
To: gentoo-commits@lists.gentoo.org
Content-Transfer-Encoding: 8bit
Content-type: text/plain; charset=UTF-8
Reply-To: gentoo-dev@lists.gentoo.org, "Mike Pagano"
Message-ID: <1527680678.8a5950d77db4cc1cc9e4b9b359bdd8d288d2167c.mpagano@gentoo>
Subject: [gentoo-commits] proj/linux-patches:4.16 commit in: /
X-VCS-Repository: proj/linux-patches
X-VCS-Files: 0000_README 1012_linux-4.16.13.patch
X-VCS-Directories: /
X-VCS-Committer: mpagano
X-VCS-Committer-Name: Mike Pagano
X-VCS-Revision: 8a5950d77db4cc1cc9e4b9b359bdd8d288d2167c
X-VCS-Branch: 4.16
Date: Wed, 30 May 2018 11:44:49 +0000 (UTC)
Precedence: bulk
List-Post:
List-Help:
List-Unsubscribe:
List-Subscribe:
List-Id: Gentoo Linux mail
X-BeenThere: gentoo-commits@lists.gentoo.org
X-Archives-Salt: d022deec-a6a0-49c4-912c-8983055cf448
X-Archives-Hash: acc4b91370c14b3bab84403b8d245a04

commit:     8a5950d77db4cc1cc9e4b9b359bdd8d288d2167c
Author:     Mike Pagano gentoo org>
AuthorDate: Wed May 30 11:44:38 2018 +0000
Commit:     Mike Pagano gentoo org>
CommitDate: Wed May 30 11:44:38 2018 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=8a5950d7

Linux patch 4.16.13

 0000_README              |     4 +
 1012_linux-4.16.13.patch | 10200 +++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 10204 insertions(+)

diff --git a/0000_README b/0000_README
index 603fb6f..f199583 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1011_linux-4.16.12.patch
 From:   http://www.kernel.org
 Desc:   Linux 4.16.12
 
+Patch:  1012_linux-4.16.13.patch
+From:   http://www.kernel.org
+Desc:   Linux 4.16.13
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
diff --git a/1012_linux-4.16.13.patch b/1012_linux-4.16.13.patch
new file mode 100644
index 0000000..8fb1dc5
--- /dev/null
+++ b/1012_linux-4.16.13.patch
@@ -0,0 +1,10200 @@
+diff --git a/Documentation/devicetree/bindings/clock/sunxi-ccu.txt b/Documentation/devicetree/bindings/clock/sunxi-ccu.txt
+index 4ca21c3a6fc9..460ef27b1008 100644
+--- a/Documentation/devicetree/bindings/clock/sunxi-ccu.txt
++++ b/Documentation/devicetree/bindings/clock/sunxi-ccu.txt
+@@ -20,6 +20,7 @@ Required properties :
+ 		- "allwinner,sun50i-a64-ccu"
+ 		- "allwinner,sun50i-a64-r-ccu"
+ 		- "allwinner,sun50i-h5-ccu"
++		- "allwinner,sun50i-h6-ccu"
+ 		- "nextthing,gr8-ccu"
+ 
+ - reg: Must contain the registers base address and length
+@@ -31,6 +32,9 @@ Required properties :
+ - #clock-cells : must contain 1
+ - #reset-cells : must contain 1
+ 
++For the main CCU on H6, one more clock is needed:
++- "iosc": the SoC's internal frequency oscillator
++
+ For the PRCM CCUs on A83T/H3/A64, two more clocks are needed:
+ - "pll-periph": the SoC's peripheral PLL from the main CCU
+ - "iosc": the SoC's internal frequency oscillator
+diff --git a/Documentation/devicetree/bindings/display/msm/dsi.txt b/Documentation/devicetree/bindings/display/msm/dsi.txt
+index a6671bd2c85a..ae38a1ee9c29 100644
+--- a/Documentation/devicetree/bindings/display/msm/dsi.txt
++++ b/Documentation/devicetree/bindings/display/msm/dsi.txt
+@@ -102,7 +102,11 @@ Required properties:
+ - clocks: Phandles to device clocks. See [1] for details on clock bindings.
+ - clock-names: the following clocks are required:
+   * "iface"
++  For 28nm HPM/LP, 28nm 8960 PHYs:
+ - vddio-supply: phandle to vdd-io regulator device node
++  For 20nm PHY:
++- vddio-supply: phandle to vdd-io regulator device node
++- vcca-supply: phandle to vcca regulator device node
+ 
+ Optional properties:
+ - qcom,dsi-phy-regulator-ldo-mode: Boolean value indicating if the LDO mode PHY
+diff --git a/Documentation/devicetree/bindings/pinctrl/axis,artpec6-pinctrl.txt b/Documentation/devicetree/bindings/pinctrl/axis,artpec6-pinctrl.txt
+index 47284f85ec80..c3f9826692bc 100644
+--- a/Documentation/devicetree/bindings/pinctrl/axis,artpec6-pinctrl.txt
++++ b/Documentation/devicetree/bindings/pinctrl/axis,artpec6-pinctrl.txt
+@@ -20,7 +20,8 @@ Required subnode-properties:
+ 	gpio: cpuclkoutgrp0, udlclkoutgrp0, i2c1grp0, i2c2grp0,
+ 	      i2c3grp0, i2s0grp0, i2s1grp0, i2srefclkgrp0, spi0grp0,
+ 	      spi1grp0, pciedebuggrp0, uart0grp0, uart0grp1, uart1grp0,
+-	      uart2grp0, uart2grp1, uart3grp0, uart4grp0, uart5grp0
++	      uart2grp0, uart2grp1, uart3grp0, uart4grp0, uart5grp0,
++	      uart5nocts
+ 	cpuclkout: cpuclkoutgrp0
+ 	udlclkout: udlclkoutgrp0
+ 	i2c1: i2c1grp0
+@@ -37,7 +38,7 @@ Required subnode-properties:
+ 	uart2: uart2grp0, uart2grp1
+ 	uart3: uart3grp0
+ 	uart4: uart4grp0
+-	uart5: uart5grp0
++	uart5: uart5grp0, uart5nocts
+ 	nand: nandgrp0
+ 	sdio0: sdio0grp0
+ 	sdio1: sdio1grp0
+diff --git a/Makefile b/Makefile
+index ded9e8480d74..146e527a5e06 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 4
+ PATCHLEVEL = 16
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Fearless Coyote
+ 
+diff --git a/arch/arm/boot/dts/at91-nattis-2-natte-2.dts b/arch/arm/boot/dts/at91-nattis-2-natte-2.dts
+index 3ea1d26e1c68..c457eff25911 100644
+--- a/arch/arm/boot/dts/at91-nattis-2-natte-2.dts
++++ b/arch/arm/boot/dts/at91-nattis-2-natte-2.dts
+@@ -146,7 +146,7 @@
+ 	};
+ 
+ 	eeprom@50 {
+-		compatible = "nxp,24c02";
++		compatible = "nxp,se97b", "atmel,24c02";
+ 		reg = <0x50>;
+ 		pagesize = <16>;
+ 	};
+diff --git a/arch/arm/boot/dts/at91-tse850-3.dts b/arch/arm/boot/dts/at91-tse850-3.dts
+index 9b82cc8843e1..97b227693658 100644
+--- a/arch/arm/boot/dts/at91-tse850-3.dts
++++ b/arch/arm/boot/dts/at91-tse850-3.dts
+@@ -246,7 +246,7 @@
+ 	};
+ 
+ 	eeprom@50 {
+-		compatible = "nxp,24c02", "atmel,24c02";
++		compatible = "nxp,se97b", "atmel,24c02";
+ 		reg = <0x50>;
+ 		pagesize = <16>;
+ 	};
+diff --git a/arch/arm/boot/dts/bcm283x.dtsi b/arch/arm/boot/dts/bcm283x.dtsi
+index 9d293decf8d3..8d9a0df207a4 100644
+--- a/arch/arm/boot/dts/bcm283x.dtsi
++++ b/arch/arm/boot/dts/bcm283x.dtsi
+@@ -252,7 +252,7 @@
+ 
+ 			jtag_gpio4: jtag_gpio4 {
+ 				brcm,pins = <4 5 6 12 13>;
+-				brcm,function = ;
++				brcm,function = ;
+ 			};
+ 			jtag_gpio22: jtag_gpio22 {
+ 				brcm,pins = <22 23 24 25 26 27>;
+@@ -397,8 +397,8 @@
+ 
+ 		i2s: i2s@7e203000 {
+ 			compatible = "brcm,bcm2835-i2s";
+-			reg = <0x7e203000 0x20>,
+-			      <0x7e101098 0x02>;
++			reg = <0x7e203000 0x24>;
++			clocks = <&clocks BCM2835_CLOCK_PCM>;
+ 
+ 			dmas = <&dma 2>,
+ 			       <&dma 3>;
+diff --git a/arch/arm/boot/dts/dra71-evm.dts b/arch/arm/boot/dts/dra71-evm.dts
+index 41c9132eb550..64363f75c01a 100644
+--- a/arch/arm/boot/dts/dra71-evm.dts
++++ b/arch/arm/boot/dts/dra71-evm.dts
+@@ -24,13 +24,13 @@
+ 
+ 		regulator-name = "vddshv8";
+ 		regulator-min-microvolt = <1800000>;
+-		regulator-max-microvolt = <3000000>;
++		regulator-max-microvolt = <3300000>;
+ 		regulator-boot-on;
+ 		vin-supply = <&evm_5v0>;
+ 
+ 		gpios = <&gpio7 11 GPIO_ACTIVE_HIGH>;
+ 		states = <1800000 0x0
+-			  3000000 0x1>;
++			  3300000 0x1>;
+ 	};
+ 
+ 	evm_1v8_sw: fixedregulator-evm_1v8 {
+diff --git a/arch/arm/boot/dts/imx7d-cl-som-imx7.dts b/arch/arm/boot/dts/imx7d-cl-som-imx7.dts
+index ae45af1ad062..3cc1fb9ce441 100644
+--- a/arch/arm/boot/dts/imx7d-cl-som-imx7.dts
++++ b/arch/arm/boot/dts/imx7d-cl-som-imx7.dts
+@@ -213,37 +213,37 @@
+ &iomuxc {
+ 	pinctrl_enet1: enet1grp {
+ 		fsl,pins = <
+-			MX7D_PAD_SD2_CD_B__ENET1_MDIO			0x3
+-			MX7D_PAD_SD2_WP__ENET1_MDC			0x3
+-			MX7D_PAD_ENET1_RGMII_TXC__ENET1_RGMII_TXC	0x1
+-			MX7D_PAD_ENET1_RGMII_TD0__ENET1_RGMII_TD0	0x1
+-			MX7D_PAD_ENET1_RGMII_TD1__ENET1_RGMII_TD1	0x1
+-			MX7D_PAD_ENET1_RGMII_TD2__ENET1_RGMII_TD2	0x1
+-			MX7D_PAD_ENET1_RGMII_TD3__ENET1_RGMII_TD3	0x1
+-			MX7D_PAD_ENET1_RGMII_TX_CTL__ENET1_RGMII_TX_CTL	0x1
+-			MX7D_PAD_ENET1_RGMII_RXC__ENET1_RGMII_RXC	0x1
+-			MX7D_PAD_ENET1_RGMII_RD0__ENET1_RGMII_RD0	0x1
+-			MX7D_PAD_ENET1_RGMII_RD1__ENET1_RGMII_RD1	0x1
+-			MX7D_PAD_ENET1_RGMII_RD2__ENET1_RGMII_RD2	0x1
+-			MX7D_PAD_ENET1_RGMII_RD3__ENET1_RGMII_RD3	0x1
+-			MX7D_PAD_ENET1_RGMII_RX_CTL__ENET1_RGMII_RX_CTL	0x1
++			MX7D_PAD_SD2_CD_B__ENET1_MDIO			0x30
++			MX7D_PAD_SD2_WP__ENET1_MDC			0x30
++			MX7D_PAD_ENET1_RGMII_TXC__ENET1_RGMII_TXC	0x11
++			MX7D_PAD_ENET1_RGMII_TD0__ENET1_RGMII_TD0	0x11
++			MX7D_PAD_ENET1_RGMII_TD1__ENET1_RGMII_TD1	0x11
++			MX7D_PAD_ENET1_RGMII_TD2__ENET1_RGMII_TD2	0x11
++			MX7D_PAD_ENET1_RGMII_TD3__ENET1_RGMII_TD3	0x11
++			MX7D_PAD_ENET1_RGMII_TX_CTL__ENET1_RGMII_TX_CTL	0x11
++			MX7D_PAD_ENET1_RGMII_RXC__ENET1_RGMII_RXC	0x11
++			MX7D_PAD_ENET1_RGMII_RD0__ENET1_RGMII_RD0	0x11
++			MX7D_PAD_ENET1_RGMII_RD1__ENET1_RGMII_RD1	0x11
++			MX7D_PAD_ENET1_RGMII_RD2__ENET1_RGMII_RD2	0x11
++			MX7D_PAD_ENET1_RGMII_RD3__ENET1_RGMII_RD3	0x11
++			MX7D_PAD_ENET1_RGMII_RX_CTL__ENET1_RGMII_RX_CTL	0x11
+ 		>;
+ 	};
+ 
+ 	pinctrl_enet2: enet2grp {
+ 		fsl,pins = <
+-			MX7D_PAD_EPDC_GDSP__ENET2_RGMII_TXC		0x1
+-			MX7D_PAD_EPDC_SDCE2__ENET2_RGMII_TD0		0x1
+-			MX7D_PAD_EPDC_SDCE3__ENET2_RGMII_TD1		0x1
+-			MX7D_PAD_EPDC_GDCLK__ENET2_RGMII_TD2		0x1
+-			MX7D_PAD_EPDC_GDOE__ENET2_RGMII_TD3		0x1
+-			MX7D_PAD_EPDC_GDRL__ENET2_RGMII_TX_CTL		0x1
+-			MX7D_PAD_EPDC_SDCE1__ENET2_RGMII_RXC		0x1
+-			MX7D_PAD_EPDC_SDCLK__ENET2_RGMII_RD0		0x1
+-			MX7D_PAD_EPDC_SDLE__ENET2_RGMII_RD1		0x1
+-			MX7D_PAD_EPDC_SDOE__ENET2_RGMII_RD2		0x1
+-			MX7D_PAD_EPDC_SDSHR__ENET2_RGMII_RD3		0x1
+-			MX7D_PAD_EPDC_SDCE0__ENET2_RGMII_RX_CTL		0x1
++			MX7D_PAD_EPDC_GDSP__ENET2_RGMII_TXC		0x11
++			MX7D_PAD_EPDC_SDCE2__ENET2_RGMII_TD0		0x11
++			MX7D_PAD_EPDC_SDCE3__ENET2_RGMII_TD1		0x11
++			MX7D_PAD_EPDC_GDCLK__ENET2_RGMII_TD2		0x11
++			MX7D_PAD_EPDC_GDOE__ENET2_RGMII_TD3		0x11
++			MX7D_PAD_EPDC_GDRL__ENET2_RGMII_TX_CTL		0x11
++			MX7D_PAD_EPDC_SDCE1__ENET2_RGMII_RXC		0x11
++			MX7D_PAD_EPDC_SDCLK__ENET2_RGMII_RD0		0x11
++			MX7D_PAD_EPDC_SDLE__ENET2_RGMII_RD1		0x11
++			MX7D_PAD_EPDC_SDOE__ENET2_RGMII_RD2		0x11
++			MX7D_PAD_EPDC_SDSHR__ENET2_RGMII_RD3		0x11
++			MX7D_PAD_EPDC_SDCE0__ENET2_RGMII_RX_CTL		0x11
+ 		>;
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/keystone-k2e-clocks.dtsi b/arch/arm/boot/dts/keystone-k2e-clocks.dtsi
+index 5e0e7d232161..f7592155a740 100644
+--- a/arch/arm/boot/dts/keystone-k2e-clocks.dtsi
++++ b/arch/arm/boot/dts/keystone-k2e-clocks.dtsi
+@@ -42,7 +42,7 @@ clocks {
+ 		domain-id = <0>;
+ 	};
+ 
+-	clkhyperlink0: clkhyperlink02350030 {
++	clkhyperlink0: clkhyperlink0@2350030 {
+ 		#clock-cells = <0>;
+ 		compatible = "ti,keystone,psc-clock";
+ 		clocks = <&chipclk12>;
+diff --git a/arch/arm/boot/dts/r8a7791-porter.dts b/arch/arm/boot/dts/r8a7791-porter.dts
+index eb374956294f..9a02d03b23c2 100644
+--- a/arch/arm/boot/dts/r8a7791-porter.dts
++++ b/arch/arm/boot/dts/r8a7791-porter.dts
+@@ -425,7 +425,7 @@
+ 		 "dclkin.0", "dclkin.1";
+ 
+ 	ports {
+-		port@1 {
++		port@0 {
+ 			endpoint {
+ 				remote-endpoint = <&adv7511_in>;
+ 			};
+diff --git a/arch/arm/boot/dts/socfpga.dtsi b/arch/arm/boot/dts/socfpga.dtsi
+index c42ca7022e8c..486d4e7433ed 100644
+--- a/arch/arm/boot/dts/socfpga.dtsi
++++ b/arch/arm/boot/dts/socfpga.dtsi
+@@ -831,7 +831,7 @@
+ 		timer@fffec600 {
+ 			compatible = "arm,cortex-a9-twd-timer";
+ 			reg = <0xfffec600 0x100>;
+-			interrupts = <1 13 0xf04>;
++			interrupts = <1 13 0xf01>;
+ 			clocks = <&mpu_periph_clk>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/sun4i-a10.dtsi b/arch/arm/boot/dts/sun4i-a10.dtsi
+index 4f2f2eea0755..5df34345a354 100644
+--- a/arch/arm/boot/dts/sun4i-a10.dtsi
++++ b/arch/arm/boot/dts/sun4i-a10.dtsi
+@@ -76,7 +76,7 @@
+ 			allwinner,pipeline = "de_fe0-de_be0-lcd0-hdmi";
+ 			clocks = <&ccu CLK_AHB_LCD0>, <&ccu CLK_AHB_HDMI0>,
+ 				 <&ccu CLK_AHB_DE_BE0>, <&ccu CLK_AHB_DE_FE0>,
+-				 <&ccu CLK_DE_BE0>, <&ccu CLK_AHB_DE_FE0>,
++				 <&ccu CLK_DE_BE0>, <&ccu CLK_DE_FE0>,
+ 				 <&ccu CLK_TCON0_CH1>, <&ccu CLK_HDMI>,
+ 				 <&ccu CLK_DRAM_DE_FE0>, <&ccu CLK_DRAM_DE_BE0>;
+ 			status = "disabled";
+@@ -88,7 +88,7 @@
+ 			allwinner,pipeline = "de_fe0-de_be0-lcd0";
+ 			clocks = <&ccu CLK_AHB_LCD0>, <&ccu CLK_AHB_DE_BE0>,
+ 				 <&ccu CLK_AHB_DE_FE0>, <&ccu CLK_DE_BE0>,
+-				 <&ccu CLK_AHB_DE_FE0>, <&ccu CLK_TCON0_CH0>,
++				 <&ccu CLK_DE_FE0>, <&ccu CLK_TCON0_CH0>,
+ 				 <&ccu CLK_DRAM_DE_FE0>, <&ccu CLK_DRAM_DE_BE0>;
+ 			status = "disabled";
+ 		};
+@@ -99,7 +99,7 @@
+ 			allwinner,pipeline = "de_fe0-de_be0-lcd0-tve0";
+ 			clocks = <&ccu CLK_AHB_TVE0>, <&ccu CLK_AHB_LCD0>,
+ 				 <&ccu CLK_AHB_DE_BE0>, <&ccu CLK_AHB_DE_FE0>,
+-				 <&ccu CLK_DE_BE0>, <&ccu CLK_AHB_DE_FE0>,
++				 <&ccu CLK_DE_BE0>, <&ccu CLK_DE_FE0>,
+ 				 <&ccu CLK_TCON0_CH1>, <&ccu CLK_DRAM_TVE0>,
+ 				 <&ccu CLK_DRAM_DE_FE0>, <&ccu CLK_DRAM_DE_BE0>;
+ 			status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 0a6f7952bbb1..48b85653ad66 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -497,8 +497,8 @@
+ 		blsp2_spi5: spi@75ba000{
+ 			compatible = "qcom,spi-qup-v2.2.1";
+ 			reg = <0x075ba000 0x600>;
+-			interrupts = ;
+-			clocks = <&gcc GCC_BLSP2_QUP5_SPI_APPS_CLK>,
++			interrupts = ;
++			clocks = <&gcc GCC_BLSP2_QUP6_SPI_APPS_CLK>,
+ 				 <&gcc GCC_BLSP2_AHB_CLK>;
+ 			clock-names = "core", "iface";
+ 			pinctrl-names = "default", "sleep";
+diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
+index 9ef0797380cb..f9b0b09153e0 100644
+--- a/arch/arm64/include/asm/atomic_lse.h
++++ b/arch/arm64/include/asm/atomic_lse.h
+@@ -117,7 +117,7 @@ static inline void atomic_and(int i, atomic_t *v)
+ 	/* LSE atomics */
+ 	"	mvn	%w[i], %w[i]\n"
+ 	"	stclr	%w[i], %[v]")
+-	: [i] "+r" (w0), [v] "+Q" (v->counter)
++	: [i] "+&r" (w0), [v] "+Q" (v->counter)
+ 	: "r" (x1)
+ 	: __LL_SC_CLOBBERS);
+ }
+@@ -135,7 +135,7 @@ static inline int atomic_fetch_and##name(int i, atomic_t *v)	\
+ 	/* LSE atomics */					\
+ 	"	mvn	%w[i], %w[i]\n"				\
+ 	"	ldclr" #mb "	%w[i], %w[i], %[v]")		\
+-	: [i] "+r" (w0), [v] "+Q" (v->counter)			\
++	: [i] "+&r" (w0), [v] "+Q" (v->counter)			\
+ 	: "r" (x1)						\
+ 	: __LL_SC_CLOBBERS, ##cl);				\
+ 								\
+@@ -161,7 +161,7 @@ static inline void atomic_sub(int i, atomic_t *v)
+ 	/* LSE atomics */
+ 	"	neg	%w[i], %w[i]\n"
+ 	"	stadd	%w[i], %[v]")
+-	: [i] "+r" (w0), [v] "+Q" (v->counter)
++	: [i] "+&r" (w0), [v] "+Q" (v->counter)
+ 	: "r" (x1)
+ 	: __LL_SC_CLOBBERS);
+ }
+@@ -180,7 +180,7 @@ static inline int atomic_sub_return##name(int i, atomic_t *v)	\
+ 	"	neg	%w[i], %w[i]\n"				\
+ 	"	ldadd" #mb "	%w[i], w30, %[v]\n"		\
+ 	"	add	%w[i], %w[i], w30")			\
+-	: [i] "+r" (w0), [v] "+Q" (v->counter)			\
++	: [i] "+&r" (w0), [v] "+Q" (v->counter)			\
+ 	: "r" (x1)						\
+ 	: __LL_SC_CLOBBERS , ##cl);				\
+ 								\
+@@ -207,7 +207,7 @@ static inline int atomic_fetch_sub##name(int i, atomic_t *v)	\
+ 	/* LSE atomics */					\
+ 	"	neg	%w[i], %w[i]\n"				\
+ 	"	ldadd" #mb "	%w[i], %w[i], %[v]")		\
+-	: [i] "+r" (w0), [v] "+Q" (v->counter)			\
++	: [i] "+&r" (w0), [v] "+Q" (v->counter)			\
+ 	: "r" (x1)						\
+ 	: __LL_SC_CLOBBERS, ##cl);				\
+ 								\
+@@ -314,7 +314,7 @@ static inline void atomic64_and(long i, atomic64_t *v)
+ 	/* LSE atomics */
+ 	"	mvn	%[i], %[i]\n"
+ 	"	stclr	%[i], %[v]")
+-	: [i] "+r" (x0), [v] "+Q" (v->counter)
++	: [i] "+&r" (x0), [v] "+Q" (v->counter)
+ 	: "r" (x1)
+ 	: __LL_SC_CLOBBERS);
+ }
+@@ -332,7 +332,7 @@ static inline long atomic64_fetch_and##name(long i, atomic64_t *v)	\
+ 	/* LSE atomics */					\
+ 	"	mvn	%[i], %[i]\n"				\
+ 	"	ldclr" #mb "	%[i], %[i], %[v]")		\
+-	: [i] "+r" (x0), [v] "+Q" (v->counter)			\
++	: [i] "+&r" (x0), [v] "+Q" (v->counter)			\
+ 	: "r" (x1)						\
+ 	: __LL_SC_CLOBBERS, ##cl);				\
+ 								\
+@@ -358,7 +358,7 @@ static inline void atomic64_sub(long i, atomic64_t *v)
+ 	/* LSE atomics */
+ 	"	neg	%[i], %[i]\n"
+ 	"	stadd	%[i], %[v]")
+-	: [i] "+r" (x0), [v] "+Q" (v->counter)
++	: [i] "+&r" (x0), [v] "+Q" (v->counter)
+ 	: "r" (x1)
+ 	: __LL_SC_CLOBBERS);
+ }
+@@ -377,7 +377,7 @@ static inline long atomic64_sub_return##name(long i, atomic64_t *v)	\
+ 	"	neg	%[i], %[i]\n"				\
+ 	"	ldadd" #mb "	%[i], x30, %[v]\n"		\
+ 	"	add	%[i], %[i], x30")			\
+-	: [i] "+r" (x0), [v] "+Q" (v->counter)			\
++	: [i] "+&r" (x0), [v] "+Q" (v->counter)			\
+ 	: "r" (x1)						\
+ 	: __LL_SC_CLOBBERS, ##cl);				\
+ 								\
+@@ -404,7 +404,7 @@ static inline long atomic64_fetch_sub##name(long i, atomic64_t *v)	\
+ 	/* LSE atomics */					\
+ 	"	neg	%[i], %[i]\n"				\
+ 	"	ldadd" #mb "	%[i], %[i], %[v]")		\
+-	: [i] "+r" (x0), [v] "+Q" (v->counter)			\
++	: [i] "+&r" (x0), [v] "+Q" (v->counter)			\
+ 	: "r" (x1)						\
+ 	: __LL_SC_CLOBBERS, ##cl);				\
+ 								\
+@@ -435,7 +435,7 @@ static inline long atomic64_dec_if_positive(atomic64_t *v)
+ 	"	sub	x30, x30, %[ret]\n"
+ 	"	cbnz	x30, 1b\n"
+ 	"2:")
+-	: [ret] "+r" (x0), [v] "+Q" (v->counter)
++	: [ret] "+&r" (x0), [v] "+Q" (v->counter)
+ 	:
+ 	: __LL_SC_CLOBBERS, "cc", "memory");
+ 
+@@ -516,7 +516,7 @@ static inline long __cmpxchg_double##name(unsigned long old1,	\
+ 	"	eor	%[old1], %[old1], %[oldval1]\n"		\
+ 	"	eor	%[old2], %[old2], %[oldval2]\n"		\
+ 	"	orr	%[old1], %[old1], %[old2]")		\
+-	: [old1] "+r" (x0), [old2] "+r" (x1),			\
++	: [old1] "+&r" (x0), [old2] "+&r" (x1),			\
+ 	  [v] "+Q" (*(unsigned long *)ptr)			\
+ 	: [new1] "r" (x2), [new2] "r" (x3), [ptr] "r" (x4),	\
+ 	  [oldval1] "r" (oldval1), [oldval2] "r" (oldval2)	\
+diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
+index 66be504edb6c..d894a20b70b2 100644
+--- a/arch/arm64/kernel/arm64ksyms.c
++++ b/arch/arm64/kernel/arm64ksyms.c
+@@ -75,3 +75,11 @@ NOKPROBE_SYMBOL(_mcount);
+ /* arm-smccc */
+ EXPORT_SYMBOL(__arm_smccc_smc);
+ EXPORT_SYMBOL(__arm_smccc_hvc);
++
++	/* tishift.S */
++extern long long __ashlti3(long long a, int b);
++EXPORT_SYMBOL(__ashlti3);
++extern long long __ashrti3(long long a, int b);
++EXPORT_SYMBOL(__ashrti3);
++extern long long __lshrti3(long long a, int b);
++EXPORT_SYMBOL(__lshrti3);
+diff --git a/arch/arm64/lib/tishift.S b/arch/arm64/lib/tishift.S
+index d3db9b2cd479..0fdff97794de 100644
+--- a/arch/arm64/lib/tishift.S
++++ b/arch/arm64/lib/tishift.S
+@@ -1,17 +1,6 @@
+-/*
+- * Copyright (C) 2017 Jason A. Donenfeld . All Rights Reserved.
++/* SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
+  *
+- * This program is free software; you can redistribute it and/or modify
+- * it under the terms of the GNU General Public License version 2 as
+- * published by the Free Software Foundation.
+- *
+- * This program is distributed in the hope that it will be useful,
+- * but WITHOUT ANY WARRANTY; without even the implied warranty of
+- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+- * GNU General Public License for more details.
+- *
+- * You should have received a copy of the GNU General Public License
+- * along with this program. If not, see .
++ * Copyright (C) 2017-2018 Jason A. Donenfeld . All Rights Reserved.
+ */
+ 
+ #include
+diff --git a/arch/m68k/coldfire/device.c b/arch/m68k/coldfire/device.c
+index 84938fdbbada..908d58347790 100644
+--- a/arch/m68k/coldfire/device.c
++++ b/arch/m68k/coldfire/device.c
+@@ -135,7 +135,11 @@ static struct platform_device mcf_fec0 = {
+ 	.id			= 0,
+ 	.num_resources		= ARRAY_SIZE(mcf_fec0_resources),
+ 	.resource		= mcf_fec0_resources,
+-	.dev.platform_data	= FEC_PDATA,
++	.dev = {
++		.dma_mask		= &mcf_fec0.dev.coherent_dma_mask,
++		.coherent_dma_mask	= DMA_BIT_MASK(32),
++		.platform_data		= FEC_PDATA,
++	}
+ };
+ 
+ #ifdef MCFFEC_BASE1
+@@ -167,7 +171,11 @@ static struct platform_device mcf_fec1 = {
+ 	.id			= 1,
+ 	.num_resources		= ARRAY_SIZE(mcf_fec1_resources),
+ 	.resource		= mcf_fec1_resources,
+-	.dev.platform_data	= FEC_PDATA,
++	.dev = {
++		.dma_mask		= &mcf_fec1.dev.coherent_dma_mask,
++		.coherent_dma_mask	= DMA_BIT_MASK(32),
++		.platform_data		= FEC_PDATA,
++	}
+ };
+ #endif /* MCFFEC_BASE1 */
+ #endif /* CONFIG_FEC */
+diff --git a/arch/mips/boot/compressed/uart-16550.c b/arch/mips/boot/compressed/uart-16550.c
+index b3043c08f769..aee8d7b8f091 100644
+--- a/arch/mips/boot/compressed/uart-16550.c
++++ b/arch/mips/boot/compressed/uart-16550.c
+@@ -18,9 +18,9 @@
+ #define PORT(offset) (CKSEG1ADDR(AR7_REGS_UART0) + (4 * offset))
+ #endif
+ 
+-#if defined(CONFIG_MACH_JZ4740) || defined(CONFIG_MACH_JZ4780)
+-#include
+-#define PORT(offset) (CKSEG1ADDR(JZ4740_UART0_BASE_ADDR) + (4 * offset))
++#ifdef CONFIG_MACH_INGENIC
++#define INGENIC_UART0_BASE_ADDR	0x10030000
++#define PORT(offset) (CKSEG1ADDR(INGENIC_UART0_BASE_ADDR) + (4 * offset))
+ #endif
+ 
+ #ifdef CONFIG_CPU_XLR
+diff --git a/arch/mips/boot/dts/xilfpga/Makefile b/arch/mips/boot/dts/xilfpga/Makefile
+index 9987e0e378c5..69ca00590b8d 100644
+--- a/arch/mips/boot/dts/xilfpga/Makefile
++++ b/arch/mips/boot/dts/xilfpga/Makefile
+@@ -1,4 +1,2 @@
+ # SPDX-License-Identifier: GPL-2.0
+ dtb-$(CONFIG_FIT_IMAGE_FDT_XILFPGA)	+= nexys4ddr.dtb
+-
+-obj-y				+= $(patsubst %.dtb, %.dtb.o, $(dtb-y))
+diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c
+index d99f5242169e..b3aec101a65d 100644
+--- a/arch/mips/cavium-octeon/octeon-irq.c
++++ b/arch/mips/cavium-octeon/octeon-irq.c
+@@ -2271,7 +2271,7 @@ static int __init octeon_irq_init_cib(struct device_node *ciu_node,
+ 
+ 	parent_irq = irq_of_parse_and_map(ciu_node, 0);
+ 	if (!parent_irq) {
+-		pr_err("ERROR: Couldn't acquire parent_irq for %s\n.",
++		pr_err("ERROR: Couldn't acquire parent_irq for %s\n",
+ 			ciu_node->name);
+ 		return -EINVAL;
+ 	}
+@@ -2283,7 +2283,7 @@ static int __init octeon_irq_init_cib(struct device_node *ciu_node,
+ 
+ 	addr = of_get_address(ciu_node, 0, NULL, NULL);
+ 	if (!addr) {
+-		pr_err("ERROR: Couldn't acquire reg(0) %s\n.", ciu_node->name);
++		pr_err("ERROR: Couldn't acquire reg(0) %s\n", ciu_node->name);
+ 		return -EINVAL;
+ 	}
+ 	host_data->raw_reg = (u64)phys_to_virt(
+@@ -2291,7 +2291,7 @@ static int __init octeon_irq_init_cib(struct device_node *ciu_node,
+ 
+ 	addr = of_get_address(ciu_node, 1, NULL, NULL);
+ 	if (!addr) {
+-		pr_err("ERROR: Couldn't acquire reg(1) %s\n.", ciu_node->name);
++		pr_err("ERROR: Couldn't acquire reg(1) %s\n", ciu_node->name);
+ 		return -EINVAL;
+ 	}
+ 	host_data->en_reg = (u64)phys_to_virt(
+@@ -2299,7 +2299,7 @@ static int __init octeon_irq_init_cib(struct device_node *ciu_node,
+ 
+ 	r = of_property_read_u32(ciu_node, "cavium,max-bits", &val);
+ 	if (r) {
+-		pr_err("ERROR: Couldn't read cavium,max-bits from %s\n.",
++		pr_err("ERROR: Couldn't read cavium,max-bits from %s\n",
+ 			ciu_node->name);
+ 		return r;
+ 	}
+@@ -2309,7 +2309,7 @@ static int __init octeon_irq_init_cib(struct device_node *ciu_node,
+ 					   &octeon_irq_domain_cib_ops,
+ 					   host_data);
+ 	if (!cib_domain) {
+-		pr_err("ERROR: Couldn't irq_domain_add_linear()\n.");
++		pr_err("ERROR: Couldn't irq_domain_add_linear()\n");
+ 		return -ENOMEM;
+ 	}
+ 
+diff --git a/arch/mips/generic/Platform b/arch/mips/generic/Platform
+index b51432dd10b6..0dd0d5d460a5 100644
+--- a/arch/mips/generic/Platform
++++ b/arch/mips/generic/Platform
+@@ -16,3 +16,4 @@ all-$(CONFIG_MIPS_GENERIC)	:= vmlinux.gz.itb
+ its-y					:= vmlinux.its.S
+ its-$(CONFIG_FIT_IMAGE_FDT_BOSTON)	+= board-boston.its.S
+ its-$(CONFIG_FIT_IMAGE_FDT_NI169445)	+= board-ni169445.its.S
++its-$(CONFIG_FIT_IMAGE_FDT_XILFPGA)	+= board-xilfpga.its.S
+diff --git a/arch/mips/include/asm/mach-ath79/ar71xx_regs.h b/arch/mips/include/asm/mach-ath79/ar71xx_regs.h
+index aa3800c82332..d99ca862dae3 100644
+--- a/arch/mips/include/asm/mach-ath79/ar71xx_regs.h
++++ b/arch/mips/include/asm/mach-ath79/ar71xx_regs.h
+@@ -167,7 +167,7 @@
+ #define AR71XX_AHB_DIV_MASK		0x7
+ 
+ #define AR724X_PLL_REG_CPU_CONFIG	0x00
+-#define AR724X_PLL_REG_PCIE_CONFIG	0x18
++#define AR724X_PLL_REG_PCIE_CONFIG	0x10
+ 
+ #define AR724X_PLL_FB_SHIFT		0
+ #define AR724X_PLL_FB_MASK		0x3ff
+diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
+index 0b23b1ad99e6..8d098b9f395c 100644
+--- a/arch/mips/kernel/ptrace.c
++++ b/arch/mips/kernel/ptrace.c
+@@ -463,7 +463,7 @@ static int fpr_get_msa(struct task_struct *target,
+ /*
+  * Copy the floating-point context to the supplied NT_PRFPREG buffer.
+  * Choose the appropriate helper for general registers, and then copy
+- * the FCSR register separately.
++ * the FCSR and FIR registers separately.
+  */
+ static int fpr_get(struct task_struct *target,
+ 		   const struct user_regset *regset,
+@@ -471,6 +471,7 @@ static int fpr_get(struct task_struct *target,
+ 		   void *kbuf, void __user *ubuf)
+ {
+ 	const int fcr31_pos = NUM_FPU_REGS * sizeof(elf_fpreg_t);
++	const int fir_pos = fcr31_pos + sizeof(u32);
+ 	int err;
+ 
+ 	if (sizeof(target->thread.fpu.fpr[0]) == sizeof(elf_fpreg_t))
+@@ -483,6 +484,12 @@ static int fpr_get(struct task_struct *target,
+ 	err = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
+ 				  &target->thread.fpu.fcr31,
+ 				  fcr31_pos, fcr31_pos + sizeof(u32));
++	if (err)
++		return err;
++
++	err = user_regset_copyout(&pos, &count, &kbuf, &ubuf,
++				  &boot_cpu_data.fpu_id,
++				  fir_pos, fir_pos + sizeof(u32));
+ 
+ 	return err;
+ }
+@@ -531,7 +538,8 @@ static int fpr_set_msa(struct task_struct *target,
+ /*
+  * Copy the supplied NT_PRFPREG buffer to the floating-point context.
+  * Choose the appropriate helper for general registers, and then copy
+- * the FCSR register separately.
++ * the FCSR register separately. Ignore the incoming FIR register
++ * contents though, as the register is read-only.
+  *
+  * We optimize for the case where `count % sizeof(elf_fpreg_t) == 0',
+  * which is supposed to have been guaranteed by the kernel before
+@@ -545,6 +553,7 @@ static int fpr_set(struct task_struct *target,
+ 		   const void *kbuf, const void __user *ubuf)
+ {
+ 	const int fcr31_pos = NUM_FPU_REGS * sizeof(elf_fpreg_t);
++	const int fir_pos = fcr31_pos + sizeof(u32);
+ 	u32 fcr31;
+ 	int err;
+ 
+@@ -572,6 +581,11 @@ static int fpr_set(struct task_struct *target,
+ 		ptrace_setfcr31(target, fcr31);
+ 	}
+ 
++	if (count > 0)
++		err = user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
++						fir_pos,
++						fir_pos + sizeof(u32));
++
+ 	return err;
+ }
+ 
+@@ -793,7 +807,7 @@ long arch_ptrace(struct task_struct *child, long request,
+ 			fregs = get_fpu_regs(child);
+ 
+ #ifdef CONFIG_32BIT
+-			if (test_thread_flag(TIF_32BIT_FPREGS)) {
++			if (test_tsk_thread_flag(child, TIF_32BIT_FPREGS)) {
+ 				/*
+ 				 * The odd registers are actually the high
+ 				 * order bits of the values stored in the even
+@@ -888,7 +902,7 @@ long arch_ptrace(struct task_struct *child, long request,
+ 
+ 			init_fp_ctx(child);
+ #ifdef CONFIG_32BIT
+-			if (test_thread_flag(TIF_32BIT_FPREGS)) {
++			if (test_tsk_thread_flag(child, TIF_32BIT_FPREGS)) {
+ 				/*
+ 				 * The odd registers are actually the high
+ 				 * order bits of the values stored in the even
+diff --git a/arch/mips/kernel/ptrace32.c b/arch/mips/kernel/ptrace32.c
+index 2b9260f92ccd..656a137c1fe2 100644
+--- a/arch/mips/kernel/ptrace32.c
++++ b/arch/mips/kernel/ptrace32.c
+@@ -99,7 +99,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
+ 			break;
+ 		}
+ 		fregs = get_fpu_regs(child);
+-		if (test_thread_flag(TIF_32BIT_FPREGS)) {
++		if (test_tsk_thread_flag(child, TIF_32BIT_FPREGS)) {
+ 			/*
+ 			 * The odd registers are actually the high
+ 			 * order bits of the values stored in the even
+@@ -212,7 +212,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
+ 				       sizeof(child->thread.fpu));
+ 			child->thread.fpu.fcr31 = 0;
+ 		}
+-		if (test_thread_flag(TIF_32BIT_FPREGS)) {
++		if (test_tsk_thread_flag(child, TIF_32BIT_FPREGS)) {
+ 			/*
+ 			 * The odd registers are actually the high
+ 			 * order bits of the values stored in the even
+diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
+index 2549fdd27ee1..0f725e9cee8f 100644
+--- a/arch/mips/kvm/mips.c
++++ b/arch/mips/kvm/mips.c
+@@ -45,7 +45,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
+ 	{ "cache",	  VCPU_STAT(cache_exits),	 KVM_STAT_VCPU },
+ 	{ "signal",	  VCPU_STAT(signal_exits),	 KVM_STAT_VCPU },
+ 	{ "interrupt",	  VCPU_STAT(int_exits),		 KVM_STAT_VCPU },
+-	{ "cop_unsuable", VCPU_STAT(cop_unusable_exits), KVM_STAT_VCPU },
++	{ "cop_unusable", VCPU_STAT(cop_unusable_exits), KVM_STAT_VCPU },
+ 	{ "tlbmod",	  VCPU_STAT(tlbmod_exits),	 KVM_STAT_VCPU },
+ 	{ "tlbmiss_ld",	  VCPU_STAT(tlbmiss_ld_exits),	 KVM_STAT_VCPU },
+ 	{ "tlbmiss_st",	  VCPU_STAT(tlbmiss_st_exits),	 KVM_STAT_VCPU },
+diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
+index 6f534b209971..e12dfa48b478 100644
+--- a/arch/mips/mm/c-r4k.c
++++ b/arch/mips/mm/c-r4k.c
+@@ -851,9 +851,12 @@ static void r4k_dma_cache_wback_inv(unsigned long addr, unsigned long size)
+ 	/*
+ 	 * Either no secondary cache or the available caches don't have the
+ 	 * subset property so we have to flush the primary caches
+-	 * explicitly
++	 * explicitly.
++	 * If we would need IPI to perform an INDEX-type operation, then
++	 * we have to use the HIT-type alternative as IPI cannot be used
++	 * here due to interrupts possibly being disabled.
+ 	 */
+-	if (size >= dcache_size) {
++	if (!r4k_op_needs_ipi(R4K_INDEX) && size >= dcache_size) {
+ 		r4k_blast_dcache();
+ 	} else {
+ 		R4600_HIT_CACHEOP_WAR_IMPL;
+@@ -890,7 +893,7 @@ static void r4k_dma_cache_inv(unsigned long addr, unsigned long size)
+ 		return;
+ 	}
+ 
+-	if (size >= dcache_size) {
++	if (!r4k_op_needs_ipi(R4K_INDEX) && size >= dcache_size) {
+ 		r4k_blast_dcache();
+ 	} else {
+ 		R4600_HIT_CACHEOP_WAR_IMPL;
+diff --git a/arch/powerpc/include/asm/book3s/64/slice.h b/arch/powerpc/include/asm/book3s/64/slice.h
+new file mode 100644
+index 000000000000..db0dedab65ee
+--- /dev/null
++++ b/arch/powerpc/include/asm/book3s/64/slice.h
+@@ -0,0 +1,27 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ASM_POWERPC_BOOK3S_64_SLICE_H
++#define _ASM_POWERPC_BOOK3S_64_SLICE_H
++
++#ifdef CONFIG_PPC_MM_SLICES
++
++#define SLICE_LOW_SHIFT		28
++#define SLICE_LOW_TOP		(0x100000000ul)
++#define SLICE_NUM_LOW		(SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
++#define GET_LOW_SLICE_INDEX(addr)	((addr) >> SLICE_LOW_SHIFT)
++
++#define SLICE_HIGH_SHIFT	40
++#define SLICE_NUM_HIGH		(H_PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
++#define GET_HIGH_SLICE_INDEX(addr)	((addr) >> SLICE_HIGH_SHIFT)
++
++#else /* CONFIG_PPC_MM_SLICES */
++
++#define get_slice_psize(mm, addr)	((mm)->context.user_psize)
++#define slice_set_user_psize(mm, psize)		\
++do {						\
++	(mm)->context.user_psize = (psize);	\
++	(mm)->context.sllp = SLB_VSID_USER | mmu_psize_defs[(psize)].sllp; \
++} while (0)
++
++#endif /* CONFIG_PPC_MM_SLICES */
++
++#endif /* _ASM_POWERPC_BOOK3S_64_SLICE_H */
+diff --git a/arch/powerpc/include/asm/irq_work.h b/arch/powerpc/include/asm/irq_work.h
+index c6d3078bd8c3..b8b0be8f1a07 100644
+--- a/arch/powerpc/include/asm/irq_work.h
++++ b/arch/powerpc/include/asm/irq_work.h
+@@ -6,5 +6,6 @@ static inline bool arch_irq_work_has_interrupt(void)
+ {
+ 	return true;
+ }
++extern void arch_irq_work_raise(void);
+ 
+ #endif /* _ASM_POWERPC_IRQ_WORK_H */
+diff --git a/arch/powerpc/include/asm/mmu-8xx.h b/arch/powerpc/include/asm/mmu-8xx.h
+index 2f806e329648..b324ab46d838 100644
+--- a/arch/powerpc/include/asm/mmu-8xx.h
++++ b/arch/powerpc/include/asm/mmu-8xx.h
+@@ -191,6 +191,12 @@ typedef struct {
+ 	unsigned int id;
+ 	unsigned int active;
+ 	unsigned long vdso_base;
++#ifdef CONFIG_PPC_MM_SLICES
++	u16 user_psize;		/* page size index */
++	u64 low_slices_psize;	/* page size encodings */
++	unsigned char high_slices_psize[0];
++	unsigned long slb_addr_limit;
++#endif
+ } mm_context_t;
+ 
+ #define PHYS_IMMR_BASE (mfspr(SPRN_IMMR) & 0xfff80000)
+diff --git a/arch/powerpc/include/asm/nohash/32/slice.h b/arch/powerpc/include/asm/nohash/32/slice.h
+new file mode 100644
+index 000000000000..95d532e18092
+--- /dev/null
++++ b/arch/powerpc/include/asm/nohash/32/slice.h
+@@ -0,0 +1,18 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ASM_POWERPC_NOHASH_32_SLICE_H
++#define _ASM_POWERPC_NOHASH_32_SLICE_H
++
++#ifdef CONFIG_PPC_MM_SLICES
++
++#define SLICE_LOW_SHIFT		28
++#define SLICE_LOW_TOP		(0x100000000ull)
++#define SLICE_NUM_LOW		(SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
++#define GET_LOW_SLICE_INDEX(addr)	((addr) >> SLICE_LOW_SHIFT)
++
++#define SLICE_HIGH_SHIFT	0
++#define SLICE_NUM_HIGH		0ul
++#define GET_HIGH_SLICE_INDEX(addr)	(addr & 0)
++
++#endif /* CONFIG_PPC_MM_SLICES */
++
++#endif /* _ASM_POWERPC_NOHASH_32_SLICE_H */
+diff --git a/arch/powerpc/include/asm/nohash/64/slice.h b/arch/powerpc/include/asm/nohash/64/slice.h
+new file mode 100644
+index 000000000000..ad0d6e3cc1c5
+--- /dev/null
++++ b/arch/powerpc/include/asm/nohash/64/slice.h
+@@ -0,0 +1,12 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ASM_POWERPC_NOHASH_64_SLICE_H
++#define _ASM_POWERPC_NOHASH_64_SLICE_H
++
++#ifdef CONFIG_PPC_64K_PAGES
++#define get_slice_psize(mm, addr)	MMU_PAGE_64K
++#else /* CONFIG_PPC_64K_PAGES */
++#define get_slice_psize(mm, addr)	MMU_PAGE_4K
++#endif /* !CONFIG_PPC_64K_PAGES */
++#define slice_set_user_psize(mm, psize)	do { BUG(); } while (0)
++
++#endif /* _ASM_POWERPC_NOHASH_64_SLICE_H */
+diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
+index 8da5d4c1cab2..d5f1c41b7dba 100644
+--- a/arch/powerpc/include/asm/page.h
++++ b/arch/powerpc/include/asm/page.h
+@@ -344,5 +344,6 @@ typedef struct page *pgtable_t;
+ 
+ #include
+ #endif /* __ASSEMBLY__ */
++#include
+ 
+ #endif /* _ASM_POWERPC_PAGE_H */
+diff --git a/arch/powerpc/include/asm/page_64.h b/arch/powerpc/include/asm/page_64.h
+index 56234c6fcd61..af04acdb873f 100644
+--- a/arch/powerpc/include/asm/page_64.h
++++ b/arch/powerpc/include/asm/page_64.h
+@@ -86,65 +86,6 @@ extern u64 ppc64_pft_size;
+ 
+ #endif /* __ASSEMBLY__ */
+ 
+-#ifdef CONFIG_PPC_MM_SLICES
+-
+-#define SLICE_LOW_SHIFT		28
+-#define SLICE_HIGH_SHIFT	40
+-
+-#define SLICE_LOW_TOP		(0x100000000ul)
+-#define SLICE_NUM_LOW		(SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
+-#define SLICE_NUM_HIGH		(H_PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
+-
+-#define GET_LOW_SLICE_INDEX(addr)	((addr) >> SLICE_LOW_SHIFT)
+-#define GET_HIGH_SLICE_INDEX(addr)	((addr) >> SLICE_HIGH_SHIFT)
+-
+-#ifndef __ASSEMBLY__
+-struct mm_struct;
+-
+-extern unsigned long slice_get_unmapped_area(unsigned long addr,
+-					     unsigned long len,
+-					     unsigned long flags,
+-					     unsigned int psize,
+-					     int topdown);
+-
+-extern unsigned int get_slice_psize(struct mm_struct *mm,
+-				    unsigned long addr);
+-
+-extern void slice_set_user_psize(struct mm_struct *mm, unsigned int psize);
+-extern void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
+-				  unsigned long len, unsigned int psize);
+-
+-#endif /* __ASSEMBLY__ */
+-#else
+-#define slice_init()
+-#ifdef CONFIG_PPC_BOOK3S_64
+-#define get_slice_psize(mm, addr)	((mm)->context.user_psize)
+-#define slice_set_user_psize(mm, psize)		\
+-do {						\
+-	(mm)->context.user_psize = (psize);	\
+-	(mm)->context.sllp = SLB_VSID_USER | mmu_psize_defs[(psize)].sllp; \
+-} while (0)
+-#else /* !CONFIG_PPC_BOOK3S_64 */
+-#ifdef CONFIG_PPC_64K_PAGES
+-#define get_slice_psize(mm, addr)	MMU_PAGE_64K
+-#else /* CONFIG_PPC_64K_PAGES */
+-#define get_slice_psize(mm, addr)	MMU_PAGE_4K
+-#endif /* !CONFIG_PPC_64K_PAGES */
+-#define slice_set_user_psize(mm, psize)	do { BUG(); } while(0)
+-#endif /* CONFIG_PPC_BOOK3S_64 */
+-
+-#define slice_set_range_psize(mm, start, len, psize)	\
+-	slice_set_user_psize((mm), (psize))
+-#endif /* CONFIG_PPC_MM_SLICES */
+-
+-#ifdef CONFIG_HUGETLB_PAGE
+-
+-#ifdef CONFIG_PPC_MM_SLICES
+-#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
+-#endif
+-
+-#endif /* !CONFIG_HUGETLB_PAGE */
+-
+ #define VM_DATA_DEFAULT_FLAGS \
+ 	(is_32bit_task() ? \
+ 	 VM_DATA_DEFAULT_FLAGS32 : VM_DATA_DEFAULT_FLAGS64)
+diff --git a/arch/powerpc/include/asm/slice.h b/arch/powerpc/include/asm/slice.h
+new file mode 100644
+index 000000000000..172711fadb1c
+--- /dev/null
++++ b/arch/powerpc/include/asm/slice.h
+@@ -0,0 +1,42 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef _ASM_POWERPC_SLICE_H
++#define _ASM_POWERPC_SLICE_H
++
++#ifdef CONFIG_PPC_BOOK3S_64
++#include
++#elif defined(CONFIG_PPC64)
++#include
++#elif defined(CONFIG_PPC_MMU_NOHASH)
++#include
++#endif
++
++#ifdef CONFIG_PPC_MM_SLICES
++
++#ifdef CONFIG_HUGETLB_PAGE
++#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
++#endif
++#define HAVE_ARCH_UNMAPPED_AREA
++#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
++
++#ifndef __ASSEMBLY__
++
++struct mm_struct;
++
++unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
++				      unsigned long flags, unsigned int psize,
++				      int topdown);
++
++unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr);
++
++void slice_set_user_psize(struct mm_struct *mm, unsigned int psize);
++void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
++			   unsigned long len, unsigned int psize);
++#endif /* __ASSEMBLY__ */
++
++#else /* CONFIG_PPC_MM_SLICES */
++
++#define slice_set_range_psize(mm, start, len, psize)	\
++	slice_set_user_psize((mm), (psize))
++#endif /* CONFIG_PPC_MM_SLICES */
++
++#endif /* _ASM_POWERPC_SLICE_H */
+diff --git a/arch/powerpc/kernel/cpu_setup_power.S b/arch/powerpc/kernel/cpu_setup_power.S
+index 3f30c994e931..458b928dbd84 100644
+--- a/arch/powerpc/kernel/cpu_setup_power.S
++++ b/arch/powerpc/kernel/cpu_setup_power.S
+@@ -28,6 +28,7 @@ _GLOBAL(__setup_cpu_power7)
+ 	beqlr
+ 	li	r0,0
+ 	mtspr	SPRN_LPID,r0
++	mtspr	SPRN_PCR,r0
+ 	mfspr	r3,SPRN_LPCR
+ 	li	r4,(LPCR_LPES1 >> LPCR_LPES_SH)
+ 	bl	__init_LPCR_ISA206
+@@ -41,6 +42,7 @@ _GLOBAL(__restore_cpu_power7)
+ 	beqlr
+ 	li	r0,0
+ 	mtspr	SPRN_LPID,r0
++	mtspr	SPRN_PCR,r0
+ 	mfspr	r3,SPRN_LPCR
+ 	li	r4,(LPCR_LPES1 >> LPCR_LPES_SH)
+ 	bl	__init_LPCR_ISA206
+@@ -57,6 +59,7 @@ _GLOBAL(__setup_cpu_power8)
+ 	beqlr
+ 	li	r0,0
+ 	mtspr	SPRN_LPID,r0
++	mtspr	SPRN_PCR,r0
+ 	mfspr	r3,SPRN_LPCR
+ 	ori	r3, r3, LPCR_PECEDH
+ 	li	r4,0 /* LPES = 0 */
+@@ -78,6 +81,7 @@ _GLOBAL(__restore_cpu_power8)
+ 	beqlr
+ 	li	r0,0
+ 	mtspr	SPRN_LPID,r0
++	mtspr	SPRN_PCR,r0
+ 	mfspr	r3,SPRN_LPCR
+ 	ori	r3, r3, LPCR_PECEDH
+ 	li	r4,0 /* LPES = 0 */
+@@ -99,6 +103,7 @@ _GLOBAL(__setup_cpu_power9)
+ 	mtspr	SPRN_PSSCR,r0
+ 	mtspr	SPRN_LPID,r0
+ 	mtspr	SPRN_PID,r0
++	mtspr	SPRN_PCR,r0
+ 	mfspr	r3,SPRN_LPCR
+ 	LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE  | LPCR_HEIC)
+ 	or	r3, r3, r4
+@@ -123,6 +128,7 @@ _GLOBAL(__restore_cpu_power9)
+ 	mtspr	SPRN_PSSCR,r0
+ 	mtspr	SPRN_LPID,r0
+ 	mtspr	SPRN_PID,r0
++	mtspr	SPRN_PCR,r0
+ 	mfspr	r3,SPRN_LPCR
+ 	LOAD_REG_IMMEDIATE(r4, LPCR_PECEDH | LPCR_PECE_HVEE | LPCR_HVICE | LPCR_HEIC)
+ 	or	r3, r3, r4
+diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
+index 078553a177de..afe6808d7a41 100644
+--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
++++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
+@@ -114,6 +114,7 @@ static void __restore_cpu_cpufeatures(void)
+ 	if (hv_mode) {
+ 		mtspr(SPRN_LPID, 0);
+ 		mtspr(SPRN_HFSCR, system_registers.hfscr);
++		mtspr(SPRN_PCR, 0);
+ 	}
+ 	mtspr(SPRN_FSCR, system_registers.fscr);
+ 
+diff --git a/arch/powerpc/kernel/idle_book3s.S b/arch/powerpc/kernel/idle_book3s.S
+index 01e1c1997893..2fce278446f5 100644
+--- a/arch/powerpc/kernel/idle_book3s.S
++++ b/arch/powerpc/kernel/idle_book3s.S
+@@ -834,6 +834,8 @@ BEGIN_FTR_SECTION
+ 	mtspr	SPRN_PTCR,r4
+ 	ld	r4,_RPR(r1)
+ 	mtspr	SPRN_RPR,r4
++	ld	r4,_AMOR(r1)
++	mtspr	SPRN_AMOR,r4
+ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+ 
+ 	ld	r4,_TSCR(r1)
+diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
+index d73ec518ef80..a6002f9449b1 100644
+--- a/arch/powerpc/kernel/setup-common.c
++++ b/arch/powerpc/kernel/setup-common.c
+@@ -919,6 +919,8 @@ void __init setup_arch(char **cmdline_p)
+ #ifdef CONFIG_PPC64
+ 	if (!radix_enabled())
+ 		init_mm.context.slb_addr_limit = DEFAULT_MAP_WINDOW_USER64;
++#elif defined(CONFIG_PPC_8xx)
++	init_mm.context.slb_addr_limit = DEFAULT_MAP_WINDOW;
+ #else
+ #error	"context.addr_limit not initialized."
+ #endif
+diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
+index 1e48d157196a..578c5e80aa14 100644
+--- a/arch/powerpc/kernel/traps.c
++++ b/arch/powerpc/kernel/traps.c
+@@ -208,6 +208,12 @@ static void oops_end(unsigned long flags, struct pt_regs *regs,
+ 	}
+ 	raw_local_irq_restore(flags);
+ 
++	/*
++	 * system_reset_excption handles debugger, crash dump, panic, for 0x100
++	 */
++	if (TRAP(regs) == 0x100)
++		return;
++
+ 	crash_fadump(regs, "die oops");
+ 
+ 	if (kexec_should_crash(current))
+@@ -272,8 +278,13 @@ void die(const char *str, struct pt_regs *regs, long err)
+ {
+ 	unsigned long flags;
+ 
+-	if (debugger(regs))
+-		return;
++	/*
++	 * system_reset_excption handles debugger, crash dump, panic, for 0x100
++	 */
++	if (TRAP(regs) != 0x100) {
++		if (debugger(regs))
++			return;
++	}
+ 
+ 	flags = oops_begin(regs);
+ 	if (__die(str, regs, err))
+@@ -1612,6 +1623,22 @@ void facility_unavailable_exception(struct pt_regs *regs)
+ 		value = mfspr(SPRN_FSCR);
+ 
+ 	status = value >> 56;
++	if ((hv || status >= 2) &&
++	    (status < ARRAY_SIZE(facility_strings)) &&
++	    facility_strings[status])
++		facility = facility_strings[status];
++
++	/* We should not have taken this interrupt in kernel */
++	if (!user_mode(regs)) {
++		pr_emerg("Facility '%s' unavailable (%d) exception in kernel mode at %lx\n",
++			 facility, status, regs->nip);
++		die("Unexpected facility unavailable exception", regs, SIGABRT);
++	}
++
++	/* We restore the interrupt state now */
++	if (!arch_irq_disabled_regs(regs))
++		local_irq_enable();
++
+ 	if (status == FSCR_DSCR_LG) {
+ 		/*
+ 		 * User is accessing the DSCR register using the problem
+@@ -1678,25 +1705,11 @@ void facility_unavailable_exception(struct pt_regs *regs)
+ 		return;
+ 	}
+ 
+-	if ((hv || status >= 2) &&
+-	    (status < ARRAY_SIZE(facility_strings)) &&
+-	    facility_strings[status])
+-		facility = facility_strings[status];
+-
+-	/* We restore the interrupt state now */
+-	if (!arch_irq_disabled_regs(regs))
+-		local_irq_enable();
+-
+ 	pr_err_ratelimited("%sFacility '%s' unavailable (%d), exception at 0x%lx, MSR=%lx\n",
+ 		hv ? "Hypervisor " : "", facility, status, regs->nip, regs->msr);
+ 
+ out:
+-	if (user_mode(regs)) {
+-		_exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
+-		return;
+-	}
+-
+-	die("Unexpected facility unavailable exception", regs, SIGABRT);
++	_exception(SIGILL, regs, ILL_ILLOPC, regs->nip);
+ }
+ #endif
+ 
+diff --git a/arch/powerpc/mm/8xx_mmu.c b/arch/powerpc/mm/8xx_mmu.c
+index 849f50cd62f2..cf77d755246d 100644
+--- a/arch/powerpc/mm/8xx_mmu.c
++++ b/arch/powerpc/mm/8xx_mmu.c
+@@ -192,7 +192,7 @@ void set_context(unsigned long id, pgd_t *pgd)
+ 	mtspr(SPRN_M_TW, __pa(pgd) - offset);
+ 
+ 	/* Update context */
+-	mtspr(SPRN_M_CASID, id);
++	mtspr(SPRN_M_CASID, id - 1);
+ 	/* sync */
+ 	mb();
+ }
+diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
+index 876da2bc1796..590be3fa0ce2 100644
+--- a/arch/powerpc/mm/hugetlbpage.c
++++ b/arch/powerpc/mm/hugetlbpage.c
+@@ -553,9 +553,11 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ 	struct hstate *hstate = hstate_file(file);
+ 	int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate));
+ 
++#ifdef CONFIG_PPC_RADIX_MMU
+ 	if (radix_enabled())
+ 		return radix__hugetlb_get_unmapped_area(file, addr, len,
+ 						       pgoff, flags);
++#endif
+ 	return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
+ }
+ #endif
+diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c
+index 4554d6527682..d98f7e5c141b 100644
+--- a/arch/powerpc/mm/mmu_context_nohash.c
++++ b/arch/powerpc/mm/mmu_context_nohash.c
+@@ -331,6 +331,20 @@ int init_new_context(struct task_struct *t, struct mm_struct *mm)
+ {
+ 	pr_hard("initing context for mm @%p\n", mm);
+ 
++#ifdef CONFIG_PPC_MM_SLICES
++	if (!mm->context.slb_addr_limit)
++		mm->context.slb_addr_limit = DEFAULT_MAP_WINDOW;
++
++	/*
++	 * We have MMU_NO_CONTEXT set to be ~0. Hence check
++	 * explicitly against context.id == 0. This ensures that we properly
++	 * initialize context slice details for newly allocated mm's (which will
++	 * have id == 0) and don't alter context slice inherited via fork (which
++	 * will have id != 0).
++	 */
++	if (mm->context.id == 0)
++		slice_set_user_psize(mm, mmu_virtual_psize);
++#endif
+ 	mm->context.id = MMU_NO_CONTEXT;
+ 	mm->context.active = 0;
+ 	return 0;
+@@ -428,8 +442,8 @@ void __init mmu_context_init(void)
+ 	 *      -- BenH
+ 	 */
+ 	if (mmu_has_feature(MMU_FTR_TYPE_8xx)) {
+-		first_context = 0;
+-		last_context = 15;
++		first_context = 1;
++		last_context = 16;
+ 		no_selective_tlbil = true;
+ 	} else if (mmu_has_feature(MMU_FTR_TYPE_47x)) {
+ 		first_context = 1;
+diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
+index 23ec2c5e3b78..0beca1ba2282 100644
+--- a/arch/powerpc/mm/slice.c
++++ b/arch/powerpc/mm/slice.c
+@@ -73,10 +73,12 @@ static void slice_range_to_mask(unsigned long start, unsigned long len,
+ 	unsigned long end = start + len - 1;
+ 
+ 	ret->low_slices = 0;
+-	bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
++	if (SLICE_NUM_HIGH)
++		bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
+ 
+ 	if (start < SLICE_LOW_TOP) {
+-		unsigned long mend = min(end, (SLICE_LOW_TOP - 1));
++		unsigned long mend = min(end,
++					 (unsigned long)(SLICE_LOW_TOP - 1));
+ 
+ 		ret->low_slices = (1u << (GET_LOW_SLICE_INDEX(mend) + 1))
+ 			- (1u << GET_LOW_SLICE_INDEX(start));
+@@ -113,11 +115,13 @@ static int slice_high_has_vma(struct mm_struct *mm, unsigned long slice)
+ 	unsigned long start = slice << SLICE_HIGH_SHIFT;
+ 	unsigned long end = start + (1ul << SLICE_HIGH_SHIFT);
+ 
++#ifdef CONFIG_PPC64
+ 	/* Hack, so that each addresses is controlled by exactly one
+ 	 * of the high or low area bitmaps, the first high area starts
+ 	 * at 4GB, not 0 */
+ 	if (start == 0)
+ 		start = SLICE_LOW_TOP;
++#endif
+ 
+ 	return !slice_area_is_free(mm, start, end - start);
+ }
+@@ -128,7 +132,8 @@ static void slice_mask_for_free(struct mm_struct *mm, struct slice_mask *ret,
+ 	unsigned long i;
+ 
+ 	ret->low_slices = 0;
+-	bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
++	if (SLICE_NUM_HIGH)
++		bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
+ 
+ 	for (i = 0; i < SLICE_NUM_LOW; i++)
+ 		if (!slice_low_has_vma(mm, i))
+@@ -151,7 +156,8 @@ static void slice_mask_for_size(struct mm_struct *mm, int psize, struct slice_ma
+ 	u64 lpsizes;
+ 
+ 	ret->low_slices = 0;
+-	bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
++	if (SLICE_NUM_HIGH)
++		bitmap_zero(ret->high_slices, SLICE_NUM_HIGH);
+ 
+ 	lpsizes = mm->context.low_slices_psize;
+ 	for (i = 0; i < SLICE_NUM_LOW; i++)
+@@ -180,6 +186,10 @@ static int slice_check_fit(struct mm_struct *mm,
+ 	 */
+ 	unsigned long slice_count = GET_HIGH_SLICE_INDEX(mm->context.slb_addr_limit);
+ 
++	if (!SLICE_NUM_HIGH)
++		return (mask.low_slices & available.low_slices) ==
++		       mask.low_slices;
++
+ 	bitmap_and(result, mask.high_slices,
+ 		   available.high_slices, slice_count);
+ 
+@@ -189,6 +199,7 @@ static int slice_check_fit(struct mm_struct *mm,
+ 
+ static void slice_flush_segments(void *parm)
+ {
++#ifdef CONFIG_PPC64
+ 	struct mm_struct *mm = parm;
+ 	unsigned long flags;
+ 
+@@ -200,6 +211,7 @@ static void slice_flush_segments(void *parm)
+ 	local_irq_save(flags);
+ 	slb_flush_and_rebolt();
+ 	local_irq_restore(flags);
++#endif
+ }
+ 
+ static void slice_convert(struct mm_struct *mm, struct slice_mask mask, int psize)
+@@ -388,21 +400,21 @@ static unsigned long slice_find_area(struct mm_struct *mm, unsigned long len,
+ 
+ static inline void slice_or_mask(struct slice_mask *dst, struct slice_mask *src)
+ {
+-	DECLARE_BITMAP(result, SLICE_NUM_HIGH);
+-
+ 	dst->low_slices |= src->low_slices;
+-	bitmap_or(result, dst->high_slices, src->high_slices, SLICE_NUM_HIGH);
+-	bitmap_copy(dst->high_slices, result, SLICE_NUM_HIGH);
++	if (!SLICE_NUM_HIGH)
++		return;
++	bitmap_or(dst->high_slices, dst->high_slices, src->high_slices,
++		  SLICE_NUM_HIGH);
+ }
+ 
+ static inline void slice_andnot_mask(struct slice_mask *dst, struct slice_mask *src)
+ {
+-	DECLARE_BITMAP(result, SLICE_NUM_HIGH);
+-
+ 	dst->low_slices &= ~src->low_slices;
+ 
+-	bitmap_andnot(result, dst->high_slices, src->high_slices, SLICE_NUM_HIGH);
+-	bitmap_copy(dst->high_slices, result, SLICE_NUM_HIGH);
++	if (!SLICE_NUM_HIGH)
++		return;
++	bitmap_andnot(dst->high_slices, dst->high_slices, src->high_slices,
++		      SLICE_NUM_HIGH);
+ }
+ 
+ #ifdef CONFIG_PPC_64K_PAGES
+@@ -450,14 +462,17 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
+ 	 * init different masks
+ 	 */
+ 	mask.low_slices = 0;
+-	bitmap_zero(mask.high_slices, SLICE_NUM_HIGH);
+ 
+ 	/* silence stupid warning */;
+ 	potential_mask.low_slices = 0;
+-	bitmap_zero(potential_mask.high_slices, SLICE_NUM_HIGH);
+ 
+ 	compat_mask.low_slices = 0;
+-	bitmap_zero(compat_mask.high_slices, SLICE_NUM_HIGH);
++
++	if (SLICE_NUM_HIGH) {
++		bitmap_zero(mask.high_slices, SLICE_NUM_HIGH);
++		bitmap_zero(potential_mask.high_slices, SLICE_NUM_HIGH);
++		bitmap_zero(compat_mask.high_slices, SLICE_NUM_HIGH);
++	}
+ 
+ 	/* Sanity checks */
+ 	BUG_ON(mm->task_size == 0);
+@@ -595,7 +610,9 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
+  convert:
+ 	slice_andnot_mask(&mask, &good_mask);
+ 	slice_andnot_mask(&mask, &compat_mask);
+-	if (mask.low_slices || !bitmap_empty(mask.high_slices, SLICE_NUM_HIGH)) {
++	if (mask.low_slices ||
++	    (SLICE_NUM_HIGH &&
++	     !bitmap_empty(mask.high_slices, SLICE_NUM_HIGH))) {
+ 		slice_convert(mm, mask, psize);
+ 		if (psize > MMU_PAGE_BASE)
+ 			on_each_cpu(slice_flush_segments, mm, 1);
+diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
+index f89bbd54ecec..1e55ae2f2afd 100644
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -457,6 +457,16 @@ static void power_pmu_bhrb_read(struct cpu_hw_events *cpuhw)
+ 			/* invalid entry */
+ 			continue;
+ 
++		/*
++		 * BHRB rolling buffer could very much contain the kernel
++		 * addresses at this point. Check the privileges before
++		 * exporting it to userspace (avoid exposure of regions
++		 * where we could have speculative execution)
++		 */
++		if (perf_paranoid_kernel() && !capable(CAP_SYS_ADMIN) &&
++		    is_kernel_addr(addr))
++			continue;
++
+ 		/* Branches are read most recent first (ie. mfbhrb 0 is
+ 		 * the most recent branch).
+ 		 * There are two types of valid entries:
+@@ -1226,6 +1236,7 @@ static void power_pmu_disable(struct pmu *pmu)
+ 		 */
+ 		write_mmcr0(cpuhw, val);
+ 		mb();
++		isync();
+ 
+ 		/*
+ 		 * Disable instruction sampling if it was enabled
+@@ -1234,12 +1245,26 @@ static void power_pmu_disable(struct pmu *pmu)
+ 			mtspr(SPRN_MMCRA,
+ 			      cpuhw->mmcr[2] & ~MMCRA_SAMPLE_ENABLE);
+ 			mb();
++			isync();
+ 		}
+ 
+ 		cpuhw->disabled = 1;
+ 		cpuhw->n_added = 0;
+ 
+ 		ebb_switch_out(mmcr0);
++
++#ifdef CONFIG_PPC64
++		/*
++		 * These are readable by userspace, may contain kernel
++		 * addresses and are not switched by context switch, so clear
++		 * them now to avoid leaking anything to userspace in general
++		 * including to another process.
++		 */
++		if (ppmu->flags & PPMU_ARCH_207S) {
++			mtspr(SPRN_SDAR, 0);
++			mtspr(SPRN_SIAR, 0);
++		}
++#endif
+ 	}
+ 
+ 	local_irq_restore(flags);
+diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
+index a429d859f15d..5a8b1bf1e819 100644
+--- a/arch/powerpc/platforms/Kconfig.cputype
++++ b/arch/powerpc/platforms/Kconfig.cputype
+@@ -326,6 +326,7 @@ config PPC_BOOK3E_MMU
+ config PPC_MM_SLICES
+ 	bool
+ 	default y if PPC_BOOK3S_64
++	default y if PPC_8xx && HUGETLB_PAGE
+ 	default n
+ 
+ config PPC_HAVE_PMU_SUPPORT
+diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
+index e7b621f619b2..a9a3d62c34d6 100644
+--- a/arch/powerpc/platforms/powernv/npu-dma.c
++++ b/arch/powerpc/platforms/powernv/npu-dma.c
+@@ -417,6 +417,11 @@ struct npu_context {
+ 	void *priv;
+ };
+ 
++struct mmio_atsd_reg {
++	struct npu *npu;
++	int reg;
++};
++
+ /*
+  * Find a free MMIO ATSD register and mark it in use. Return -ENOSPC
+  * if none are available.
+@@ -426,7 +431,7 @@ static int get_mmio_atsd_reg(struct npu *npu)
+ 	int i;
+ 
+ 	for (i = 0; i < npu->mmio_atsd_count; i++) {
+-		if (!test_and_set_bit(i, &npu->mmio_atsd_usage))
++		if (!test_and_set_bit_lock(i, &npu->mmio_atsd_usage))
+ 			return i;
+ 	}
+ 
+@@ -435,86 +440,90 @@ static int get_mmio_atsd_reg(struct npu *npu)
+ 
+ static void put_mmio_atsd_reg(struct npu *npu, int reg)
+ {
+-	clear_bit(reg, &npu->mmio_atsd_usage);
++	clear_bit_unlock(reg, &npu->mmio_atsd_usage);
+ }
+ 
+ /* MMIO ATSD register offsets */
+ #define XTS_ATSD_AVA 1
+ #define XTS_ATSD_STAT 2
+ 
+-static int mmio_launch_invalidate(struct npu *npu, unsigned long launch,
+-				unsigned long va)
++static void mmio_launch_invalidate(struct mmio_atsd_reg *mmio_atsd_reg,
++		unsigned long launch, unsigned long va)
+ {
+-	int mmio_atsd_reg;
+-
+-	do {
+-		mmio_atsd_reg = get_mmio_atsd_reg(npu);
+-		cpu_relax();
+-	} while (mmio_atsd_reg < 0);
++	struct npu *npu = mmio_atsd_reg->npu;
++	int reg = mmio_atsd_reg->reg;
+ 
+ 	__raw_writeq(cpu_to_be64(va),
+-		npu->mmio_atsd_regs[mmio_atsd_reg] + XTS_ATSD_AVA);
++		npu->mmio_atsd_regs[reg] + XTS_ATSD_AVA);
+ 	eieio();
+-	__raw_writeq(cpu_to_be64(launch), npu->mmio_atsd_regs[mmio_atsd_reg]);
+-
+-	return mmio_atsd_reg;
++	__raw_writeq(cpu_to_be64(launch), npu->mmio_atsd_regs[reg]);
+ }
+ 
+-static int mmio_invalidate_pid(struct npu *npu, unsigned long pid, bool flush)
++static void mmio_invalidate_pid(struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS],
++				unsigned long pid, bool flush)
+ {
++	int i;
+ 	unsigned long launch;
+ 
+-	/* IS set to invalidate matching PID */
+-	launch = PPC_BIT(12);
++	for (i = 0; i <= max_npu2_index; i++) {
++		if (mmio_atsd_reg[i].reg < 0)
++			continue;
++
++		/* IS set to invalidate matching PID */
++		launch = PPC_BIT(12);
+ 
+-	/* PRS set to process-scoped */
+-	launch |= PPC_BIT(13);
++		/* PRS set to process-scoped */
++		launch |= PPC_BIT(13);
+ 
+-	/* AP */
+-	launch |= (u64) mmu_get_ap(mmu_virtual_psize) << PPC_BITLSHIFT(17);
++		/* AP */
++		launch |= (u64)
++			mmu_get_ap(mmu_virtual_psize) << PPC_BITLSHIFT(17);
+ 
+-	/* PID */
+-	launch |= pid << PPC_BITLSHIFT(38);
++		/* PID */
++		launch |= pid << PPC_BITLSHIFT(38);
+ 
+-	/* No flush */
+-	launch |= !flush << PPC_BITLSHIFT(39);
++		/* No flush */
++		launch |= !flush << PPC_BITLSHIFT(39);
+ 
+-	/* Invalidating the entire process doesn't use a va */
+-	return mmio_launch_invalidate(npu, launch, 0);
++		/* Invalidating the entire process doesn't use a va */
++		mmio_launch_invalidate(&mmio_atsd_reg[i], launch, 0);
++	}
+ }
+ 
+-static int mmio_invalidate_va(struct npu *npu, unsigned long va,
+-			unsigned long pid, bool flush)
++static void mmio_invalidate_va(struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS],
++			unsigned long va, unsigned long pid, bool flush)
+ {
++	int i;
+ 	unsigned long launch;
+ 
+-	/* IS set to invalidate target VA */
+-	launch = 0;
++	for (i = 0; i <= max_npu2_index; i++) {
++		if (mmio_atsd_reg[i].reg < 0)
++			continue;
++
++		/* IS set to invalidate target VA */
++		launch = 0;
+ 
+-	/* PRS set to process scoped */
+-	launch |= PPC_BIT(13);
++		/* PRS set to process scoped */
++		launch |= PPC_BIT(13);
+ 
+-	/* AP */
+-	launch |= (u64) mmu_get_ap(mmu_virtual_psize) << PPC_BITLSHIFT(17);
++		/* AP */
++		launch |= (u64)
++			mmu_get_ap(mmu_virtual_psize) << PPC_BITLSHIFT(17);
+ 
+-	/* PID */
+-	launch |= pid << PPC_BITLSHIFT(38);
++		/* PID */
++		launch |= pid << PPC_BITLSHIFT(38);
+ 
+-	/* No flush */
+-	launch |= !flush << PPC_BITLSHIFT(39);
++		/* No flush */
++		launch |= !flush << PPC_BITLSHIFT(39);
+ 
+-	return mmio_launch_invalidate(npu, launch, va);
++		mmio_launch_invalidate(&mmio_atsd_reg[i], launch, va);
++	}
+ }
+ 
+ #define mn_to_npu_context(x) container_of(x, struct npu_context, mn)
+ 
+-struct mmio_atsd_reg {
+-	struct npu *npu;
+-	int reg;
+-};
+-
+ static void mmio_invalidate_wait(
+-	struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS], bool flush)
++	struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS])
+ {
+ 	struct npu *npu;
+ 	int i, reg;
+@@ -529,16 +538,67 @@ static void mmio_invalidate_wait(
+ 		reg = mmio_atsd_reg[i].reg;
+ 		while (__raw_readq(npu->mmio_atsd_regs[reg] + XTS_ATSD_STAT))
+ 			cpu_relax();
++	}
++}
++
++/*
++ * Acquires all the address translation shootdown (ATSD) registers required to
++ * launch an ATSD on all links this npu_context is active on.
++ */
++static void acquire_atsd_reg(struct npu_context *npu_context,
++			struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS])
++{
++	int i, j;
++	struct npu *npu;
++	struct pci_dev *npdev;
++	struct pnv_phb *nphb;
+ 
+-		put_mmio_atsd_reg(npu, reg);
++	for (i = 0; i <= max_npu2_index; i++) {
++		mmio_atsd_reg[i].reg = -1;
++		for (j = 0; j < NV_MAX_LINKS; j++) {
++			/*
++			 * There are no ordering requirements with respect to
++			 * the setup of struct npu_context, but to ensure
++			 * consistent behaviour we need to ensure npdev[][] is
++			 * only read once.
++			 */
++			npdev = READ_ONCE(npu_context->npdev[i][j]);
++			if (!npdev)
++				continue;
+ 
++			nphb = pci_bus_to_host(npdev->bus)->private_data;
++			npu = &nphb->npu;
++			mmio_atsd_reg[i].npu = npu;
++			mmio_atsd_reg[i].reg = get_mmio_atsd_reg(npu);
++			while (mmio_atsd_reg[i].reg < 0) {
++				mmio_atsd_reg[i].reg = get_mmio_atsd_reg(npu);
++				cpu_relax();
++			}
++			break;
++		}
++	}
++}
++
++/*
++ * Release previously acquired ATSD registers. To avoid deadlocks the registers
++ * must be released in the same order they were acquired above in
++ * acquire_atsd_reg.
++ */
++static void release_atsd_reg(struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS])
++{
++	int i;
++
++	for (i = 0; i <= max_npu2_index; i++) {
+ 		/*
+-		 * The GPU requires two flush ATSDs to ensure all entries have
+-		 * been flushed. We use PID 0 as it will never be used for a
+-		 * process on the GPU.
++		 * We can't rely on npu_context->npdev[][] being the same here
++		 * as when acquire_atsd_reg() was called, hence we use the
++		 * values stored in mmio_atsd_reg during the acquire phase
++		 * rather than re-reading npdev[][].
+ */
+- if (flush)
+- mmio_invalidate_pid(npu, 0, true);
++ if (mmio_atsd_reg[i].reg < 0)
++ continue;
++
++ put_mmio_atsd_reg(mmio_atsd_reg[i].npu, mmio_atsd_reg[i].reg);
+ }
+ }
+
+@@ -549,10 +609,6 @@ static void mmio_invalidate_wait(
+ static void mmio_invalidate(struct npu_context *npu_context, int va,
+ unsigned long address, bool flush)
+ {
+- int i, j;
+- struct npu *npu;
+- struct pnv_phb *nphb;
+- struct pci_dev *npdev;
+ struct mmio_atsd_reg mmio_atsd_reg[NV_MAX_NPUS];
+ unsigned long pid = npu_context->mm->context.id;
+
+@@ -568,37 +624,25 @@ static void mmio_invalidate(struct npu_context *npu_context, int va,
+ * Loop over all the NPUs this process is active on and launch
+ * an invalidate.
+ */
+- for (i = 0; i <= max_npu2_index; i++) {
+- mmio_atsd_reg[i].reg = -1;
+- for (j = 0; j < NV_MAX_LINKS; j++) {
+- npdev = npu_context->npdev[i][j];
+- if (!npdev)
+- continue;
+-
+- nphb = pci_bus_to_host(npdev->bus)->private_data;
+- npu = &nphb->npu;
+- mmio_atsd_reg[i].npu = npu;
+-
+- if (va)
+- mmio_atsd_reg[i].reg =
+- mmio_invalidate_va(npu, address, pid,
+- flush);
+- else
+- mmio_atsd_reg[i].reg =
+- mmio_invalidate_pid(npu, pid, flush);
+-
+- /*
+- * The NPU hardware forwards the shootdown to all GPUs
+- * so we only have to launch one shootdown per NPU.
+- */
+- break;
+- }
++ acquire_atsd_reg(npu_context, mmio_atsd_reg);
++ if (va)
++ mmio_invalidate_va(mmio_atsd_reg, address, pid, flush);
++ else
++ mmio_invalidate_pid(mmio_atsd_reg, pid, flush);
++
++ mmio_invalidate_wait(mmio_atsd_reg);
++ if (flush) {
++ /*
++ * The GPU requires two flush ATSDs to ensure all entries have
++ * been flushed. We use PID 0 as it will never be used for a
++ * process on the GPU.
++ */
++ mmio_invalidate_pid(mmio_atsd_reg, 0, true);
++ mmio_invalidate_wait(mmio_atsd_reg);
++ mmio_invalidate_pid(mmio_atsd_reg, 0, true);
++ mmio_invalidate_wait(mmio_atsd_reg);
+ }
+-
+- mmio_invalidate_wait(mmio_atsd_reg, flush);
+- if (flush)
+- /* Wait for the flush to complete */
+- mmio_invalidate_wait(mmio_atsd_reg, false);
++ release_atsd_reg(mmio_atsd_reg);
+ }
+
+ static void pnv_npu2_mn_release(struct mmu_notifier *mn,
+@@ -741,7 +785,16 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
+ if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index",
+ &nvlink_index)))
+ return ERR_PTR(-ENODEV);
+- npu_context->npdev[npu->index][nvlink_index] = npdev;
++
++ /*
++ * npdev is a pci_dev pointer set up by the PCI code. We assign it to
++ * npdev[][] to indicate to the mmu notifiers that an invalidation
++ * should also be sent over this nvlink. The notifiers don't use any
++ * other fields in npu_context, so we just need to ensure that when they
++ * dereference npu_context->npdev[][] it is either a valid pointer or
++ * NULL.
++ */ ++ WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], npdev); + + if (!nphb->npu.nmmu_flush) { + /* +@@ -793,7 +846,7 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context, + if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index", + &nvlink_index))) + return; +- npu_context->npdev[npu->index][nvlink_index] = NULL; ++ WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], NULL); + opal_npu_destroy_context(nphb->opal_id, npu_context->mm->context.id, + PCI_DEVID(gpdev->bus->number, gpdev->devfn)); + kref_put(&npu_context->kref, pnv_npu2_release_context); +diff --git a/arch/powerpc/platforms/powernv/vas-debug.c b/arch/powerpc/platforms/powernv/vas-debug.c +index ca22f1eae050..3161e39eea1d 100644 +--- a/arch/powerpc/platforms/powernv/vas-debug.c ++++ b/arch/powerpc/platforms/powernv/vas-debug.c +@@ -179,6 +179,7 @@ void vas_instance_init_dbgdir(struct vas_instance *vinst) + { + struct dentry *d; + ++ vas_init_dbgdir(); + if (!vas_debugfs) + return; + +@@ -201,8 +202,18 @@ void vas_instance_init_dbgdir(struct vas_instance *vinst) + vinst->dbgdir = NULL; + } + ++/* ++ * Set up the "root" VAS debugfs dir. Return if we already set it up ++ * (or failed to) in an earlier instance of VAS. ++ */ + void vas_init_dbgdir(void) + { ++ static bool first_time = true; ++ ++ if (!first_time) ++ return; ++ ++ first_time = false; + vas_debugfs = debugfs_create_dir("vas", NULL); + if (IS_ERR(vas_debugfs)) + vas_debugfs = NULL; +diff --git a/arch/powerpc/platforms/powernv/vas.c b/arch/powerpc/platforms/powernv/vas.c +index aebbe95c9230..5a2b24cbbc88 100644 +--- a/arch/powerpc/platforms/powernv/vas.c ++++ b/arch/powerpc/platforms/powernv/vas.c +@@ -160,8 +160,6 @@ static int __init vas_init(void) + int found = 0; + struct device_node *dn; + +- vas_init_dbgdir(); +- + platform_driver_register(&vas_driver); + + for_each_compatible_node(dn, NULL, "ibm,vas") { +@@ -169,8 +167,10 @@ static int __init vas_init(void) + found++; + } + +- if (!found) ++ if (!found) { ++ platform_driver_unregister(&vas_driver); + return -ENODEV; ++ } + + pr_devel("Found %d instances\n", found); + +diff --git a/arch/powerpc/sysdev/mpic.c b/arch/powerpc/sysdev/mpic.c +index 73067805300a..1d4e0ef658d3 100644 +--- a/arch/powerpc/sysdev/mpic.c ++++ b/arch/powerpc/sysdev/mpic.c +@@ -626,7 +626,7 @@ static inline u32 mpic_physmask(u32 cpumask) + int i; + u32 mask = 0; + +- for (i = 0; i < min(32, NR_CPUS); ++i, cpumask >>= 1) ++ for (i = 0; i < min(32, NR_CPUS) && cpu_possible(i); ++i, cpumask >>= 1) + mask |= (cpumask & 1) << get_hard_smp_processor_id(i); + return mask; + } +diff --git a/arch/riscv/include/asm/fence.h b/arch/riscv/include/asm/fence.h +new file mode 100644 +index 000000000000..2b443a3a487f +--- /dev/null ++++ b/arch/riscv/include/asm/fence.h +@@ -0,0 +1,12 @@ ++#ifndef _ASM_RISCV_FENCE_H ++#define _ASM_RISCV_FENCE_H ++ ++#ifdef CONFIG_SMP ++#define RISCV_ACQUIRE_BARRIER "\tfence r , rw\n" ++#define RISCV_RELEASE_BARRIER "\tfence rw, w\n" ++#else ++#define RISCV_ACQUIRE_BARRIER ++#define RISCV_RELEASE_BARRIER ++#endif ++ ++#endif /* _ASM_RISCV_FENCE_H */ +diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h +index 2fd27e8ef1fd..8eb26d1ede81 100644 +--- a/arch/riscv/include/asm/spinlock.h ++++ b/arch/riscv/include/asm/spinlock.h +@@ -17,6 +17,7 @@ + + #include + #include ++#include + + /* + * Simple spin lock operations. These provide no fairness guarantees. 
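The spinlock and rwlock hunks that follow drop the .aq/.rl annotations from the AMO and LR/SC instructions and instead pair the plain operations with the explicit RISCV_ACQUIRE_BARRIER/RISCV_RELEASE_BARRIER fences defined above (unlock becomes a plain smp_store_release()). The ordering contract being preserved, expressed as a portable C11 sketch rather than the kernel's implementation (type and function names are illustrative):

    #include <stdatomic.h>

    typedef struct { atomic_int lock; } sketch_spinlock;

    /* The winning 0 -> 1 exchange is an acquire operation, matching
     * a plain "amoswap.w" followed by RISCV_ACQUIRE_BARRIER. */
    static void sketch_spin_lock(sketch_spinlock *l)
    {
        while (atomic_exchange_explicit(&l->lock, 1, memory_order_acquire))
            ; /* spin; the kernel would use cpu_relax() here */
    }

    /* A plain store with release ordering, matching the patch's
     * smp_store_release(&lock->lock, 0) in arch_spin_unlock(). */
    static void sketch_spin_unlock(sketch_spinlock *l)
    {
        atomic_store_explicit(&l->lock, 0, memory_order_release);
    }

    int main(void)
    {
        sketch_spinlock l;

        atomic_init(&l.lock, 0);
        sketch_spin_lock(&l);
        /* critical section: accesses here cannot move past the unlock */
        sketch_spin_unlock(&l);
        return 0;
    }

On a uniprocessor (!SMP) build the kernel macros expand to nothing, which is why the barriers are kept out of the instructions themselves and emitted separately.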
+@@ -28,10 +29,7 @@ + + static inline void arch_spin_unlock(arch_spinlock_t *lock) + { +- __asm__ __volatile__ ( +- "amoswap.w.rl x0, x0, %0" +- : "=A" (lock->lock) +- :: "memory"); ++ smp_store_release(&lock->lock, 0); + } + + static inline int arch_spin_trylock(arch_spinlock_t *lock) +@@ -39,7 +37,8 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock) + int tmp = 1, busy; + + __asm__ __volatile__ ( +- "amoswap.w.aq %0, %2, %1" ++ " amoswap.w %0, %2, %1\n" ++ RISCV_ACQUIRE_BARRIER + : "=r" (busy), "+A" (lock->lock) + : "r" (tmp) + : "memory"); +@@ -68,8 +67,9 @@ static inline void arch_read_lock(arch_rwlock_t *lock) + "1: lr.w %1, %0\n" + " bltz %1, 1b\n" + " addi %1, %1, 1\n" +- " sc.w.aq %1, %1, %0\n" ++ " sc.w %1, %1, %0\n" + " bnez %1, 1b\n" ++ RISCV_ACQUIRE_BARRIER + : "+A" (lock->lock), "=&r" (tmp) + :: "memory"); + } +@@ -82,8 +82,9 @@ static inline void arch_write_lock(arch_rwlock_t *lock) + "1: lr.w %1, %0\n" + " bnez %1, 1b\n" + " li %1, -1\n" +- " sc.w.aq %1, %1, %0\n" ++ " sc.w %1, %1, %0\n" + " bnez %1, 1b\n" ++ RISCV_ACQUIRE_BARRIER + : "+A" (lock->lock), "=&r" (tmp) + :: "memory"); + } +@@ -96,8 +97,9 @@ static inline int arch_read_trylock(arch_rwlock_t *lock) + "1: lr.w %1, %0\n" + " bltz %1, 1f\n" + " addi %1, %1, 1\n" +- " sc.w.aq %1, %1, %0\n" ++ " sc.w %1, %1, %0\n" + " bnez %1, 1b\n" ++ RISCV_ACQUIRE_BARRIER + "1:\n" + : "+A" (lock->lock), "=&r" (busy) + :: "memory"); +@@ -113,8 +115,9 @@ static inline int arch_write_trylock(arch_rwlock_t *lock) + "1: lr.w %1, %0\n" + " bnez %1, 1f\n" + " li %1, -1\n" +- " sc.w.aq %1, %1, %0\n" ++ " sc.w %1, %1, %0\n" + " bnez %1, 1b\n" ++ RISCV_ACQUIRE_BARRIER + "1:\n" + : "+A" (lock->lock), "=&r" (busy) + :: "memory"); +@@ -125,7 +128,8 @@ static inline int arch_write_trylock(arch_rwlock_t *lock) + static inline void arch_read_unlock(arch_rwlock_t *lock) + { + __asm__ __volatile__( +- "amoadd.w.rl x0, %1, %0" ++ RISCV_RELEASE_BARRIER ++ " amoadd.w x0, %1, %0\n" + : "+A" (lock->lock) + : "r" (-1) + : "memory"); +@@ -133,10 +137,7 @@ static inline void arch_read_unlock(arch_rwlock_t *lock) + + static inline void arch_write_unlock(arch_rwlock_t *lock) + { +- __asm__ __volatile__ ( +- "amoswap.w.rl x0, x0, %0" +- : "=A" (lock->lock) +- :: "memory"); ++ smp_store_release(&lock->lock, 0); + } + + #endif /* _ASM_RISCV_SPINLOCK_H */ +diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c +index 8961e3970901..969882b54266 100644 +--- a/arch/s390/kvm/vsie.c ++++ b/arch/s390/kvm/vsie.c +@@ -578,7 +578,7 @@ static int pin_blocks(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page) + + gpa = READ_ONCE(scb_o->itdba) & ~0xffUL; + if (gpa && (scb_s->ecb & ECB_TE)) { +- if (!(gpa & ~0x1fffU)) { ++ if (!(gpa & ~0x1fffUL)) { + rc = set_validity_icpt(scb_s, 0x0080U); + goto unpin; + } +diff --git a/arch/sh/kernel/entry-common.S b/arch/sh/kernel/entry-common.S +index c001f782c5f1..28cc61216b64 100644 +--- a/arch/sh/kernel/entry-common.S ++++ b/arch/sh/kernel/entry-common.S +@@ -255,7 +255,7 @@ debug_trap: + mov.l @r8, r8 + jsr @r8 + nop +- bra __restore_all ++ bra ret_from_exception + nop + CFI_ENDPROC + +diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h +index abad97edf736..28db058d471b 100644 +--- a/arch/sparc/include/asm/atomic_64.h ++++ b/arch/sparc/include/asm/atomic_64.h +@@ -83,7 +83,11 @@ ATOMIC_OPS(xor) + #define atomic64_add_negative(i, v) (atomic64_add_return(i, v) < 0) + + #define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n))) +-#define atomic_xchg(v, new) (xchg(&((v)->counter), 
new))
++
++static inline int atomic_xchg(atomic_t *v, int new)
++{
++ return xchg(&v->counter, new);
++}
+
+ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
+ {
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 06086439b7bd..70610604c360 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -1162,16 +1162,13 @@ int x86_perf_event_set_period(struct perf_event *event)
+
+ per_cpu(pmc_prev_left[idx], smp_processor_id()) = left;
+
+- if (!(hwc->flags & PERF_X86_EVENT_AUTO_RELOAD) ||
+- local64_read(&hwc->prev_count) != (u64)-left) {
+- /*
+- * The hw event starts counting from this event offset,
+- * mark it to be able to extract future deltas:
+- */
+- local64_set(&hwc->prev_count, (u64)-left);
++ /*
++ * The hw event starts counting from this event offset,
++ * mark it to be able to extract future deltas:
++ */
++ local64_set(&hwc->prev_count, (u64)-left);
+
+- wrmsrl(hwc->event_base, (u64)(-left) & x86_pmu.cntval_mask);
+- }
++ wrmsrl(hwc->event_base, (u64)(-left) & x86_pmu.cntval_mask);
+
+ /*
+ * Due to erratum on certain cpu we need
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 1e41d7508d99..39cd0b36c790 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -2201,9 +2201,15 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
+ int bit, loops;
+ u64 status;
+ int handled;
++ int pmu_enabled;
+
+ cpuc = this_cpu_ptr(&cpu_hw_events);
+
++ /*
++ * Save the PMU state.
++ * It needs to be restored when leaving the handler.
++ */
++ pmu_enabled = cpuc->enabled;
+ /*
+ * No known reason to not always do late ACK,
+ * but just in case do it opt-in.
+@@ -2211,6 +2217,7 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
+ if (!x86_pmu.late_ack)
+ apic_write(APIC_LVTPC, APIC_DM_NMI);
+ intel_bts_disable_local();
++ cpuc->enabled = 0;
+ __intel_pmu_disable_all();
+ handled = intel_pmu_drain_bts_buffer();
+ handled += intel_bts_interrupt();
+@@ -2320,7 +2327,8 @@ static int intel_pmu_handle_irq(struct pt_regs *regs)
+
+ done:
+ /* Only restore PMU state when it's active. See x86_pmu_disable(). */
+- if (cpuc->enabled)
++ cpuc->enabled = pmu_enabled;
++ if (pmu_enabled)
+ __intel_pmu_enable_all(0, true);
+ intel_bts_enable_local();
+
+@@ -3188,7 +3196,7 @@ glp_get_event_constraints(struct cpu_hw_events *cpuc, int idx,
+ * Therefore the effective (average) period matches the requested period,
+ * despite coarser hardware granularity.
+ */
+-static unsigned bdw_limit_period(struct perf_event *event, unsigned left)
++static u64 bdw_limit_period(struct perf_event *event, u64 left)
+ {
+ if ((event->hw.config & INTEL_ARCH_EVENT_MASK) ==
+ X86_CONFIG(.event=0xc0, .umask=0x01)) {
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 5e526c54247e..cc0eb543cc70 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -1315,17 +1315,84 @@ get_next_pebs_record_by_bit(void *base, void *top, int bit)
+ return NULL;
+ }
+
++/*
++ * Special variant of intel_pmu_save_and_restart() for auto-reload.
++ */
++static int
++intel_pmu_save_and_restart_reload(struct perf_event *event, int count)
++{
++ struct hw_perf_event *hwc = &event->hw;
++ int shift = 64 - x86_pmu.cntval_bits;
++ u64 period = hwc->sample_period;
++ u64 prev_raw_count, new_raw_count;
++ s64 new, old;
++
++ WARN_ON(!period);
++
++ /*
++ * drain_pebs() only happens when the PMU is disabled.
++ */
++ WARN_ON(this_cpu_read(cpu_hw_events.enabled));
++
++ prev_raw_count = local64_read(&hwc->prev_count);
++ rdpmcl(hwc->event_base_rdpmc, new_raw_count);
++ local64_set(&hwc->prev_count, new_raw_count);
++
++ /*
++ * Since the counter increments a negative counter value and
++ * overflows on the sign switch, giving the interval:
++ *
++ * [-period, 0]
++ *
++ * the difference between two consecutive reads is:
++ *
++ * A) value2 - value1;
++ * when no overflows have happened in between,
++ *
++ * B) (0 - value1) + (value2 - (-period));
++ * when one overflow happened in between,
++ *
++ * C) (0 - value1) + (n - 1) * (period) + (value2 - (-period));
++ * when @n overflows happened in between.
++ *
++ * Here A) is the obvious difference, B) is the extension to the
++ * discrete interval, where the first term is to the top of the
++ * interval and the second term is from the bottom of the next
++ * interval and C) the extension to multiple intervals, where the
++ * middle term is the whole intervals covered.
++ *
++ * An equivalent of C, by reduction, is:
++ *
++ * value2 - value1 + n * period
++ */
++ new = ((s64)(new_raw_count << shift) >> shift);
++ old = ((s64)(prev_raw_count << shift) >> shift);
++ local64_add(new - old + count * period, &event->count);
++
++ perf_event_update_userpage(event);
++
++ return 0;
++}
++
+ static void __intel_pmu_pebs_event(struct perf_event *event,
+ struct pt_regs *iregs,
+ void *base, void *top,
+ int bit, int count)
+ {
++ struct hw_perf_event *hwc = &event->hw;
+ struct perf_sample_data data;
+ struct pt_regs regs;
+ void *at = get_next_pebs_record_by_bit(base, top, bit);
+
+- if (!intel_pmu_save_and_restart(event) &&
+- !(event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD))
++ if (hwc->flags & PERF_X86_EVENT_AUTO_RELOAD) {
++ /*
++ * Now, auto-reload is only enabled in fixed period mode.
++ * The reload value is always hwc->sample_period.
++ * May need to change it, if auto-reload is enabled in
++ * freq mode later.
++ */
++ intel_pmu_save_and_restart_reload(event, count);
++ } else if (!intel_pmu_save_and_restart(event))
+ return;
+
+ while (count > 1) {
+@@ -1377,8 +1444,11 @@ static void intel_pmu_drain_pebs_core(struct pt_regs *iregs)
+ return;
+
+ n = top - at;
+- if (n <= 0)
++ if (n <= 0) {
++ if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD)
++ intel_pmu_save_and_restart_reload(event, 0);
+ return;
++ }
+
+ __intel_pmu_pebs_event(event, iregs, at, top, 0, n);
+ }
+@@ -1401,8 +1471,22 @@ static void intel_pmu_drain_pebs_nhm(struct pt_regs *iregs)
+
+ ds->pebs_index = ds->pebs_buffer_base;
+
+- if (unlikely(base >= top))
++ if (unlikely(base >= top)) {
++ /*
++ * The drain_pebs() could be called twice in a short period
++ * for an auto-reload event in pmu::read(), with no
++ * overflows having happened in between.
++ * It needs to call intel_pmu_save_and_restart_reload() to
++ * update the event->count for this case.
++ */
++ for_each_set_bit(bit, (unsigned long *)&cpuc->pebs_enabled,
++ x86_pmu.max_pebs_events) {
++ event = cpuc->events[bit];
++ if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD)
++ intel_pmu_save_and_restart_reload(event, 0);
++ }
+ return;
++ }
+
+ for (at = base; at < top; at += x86_pmu.pebs_record_size) {
+ struct pebs_record_nhm *p = at;
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index 39cd0615f04f..5e2ef399ac86 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -557,7 +557,7 @@ struct x86_pmu {
+ struct x86_pmu_quirk *quirks;
+ int perfctr_second_write;
+ bool late_ack;
+- unsigned (*limit_period)(struct perf_event *event, unsigned l);
++ u64 (*limit_period)(struct perf_event *event, u64 l);
+
+ /*
+ * sysfs attrs
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 84137c22fdfa..6690cd3fc8b1 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -131,7 +131,12 @@ static inline unsigned long build_cr3(pgd_t *pgd, u16 asid)
+ static inline unsigned long build_cr3_noflush(pgd_t *pgd, u16 asid)
+ {
+ VM_WARN_ON_ONCE(asid > MAX_ASID_AVAILABLE);
+- VM_WARN_ON_ONCE(!this_cpu_has(X86_FEATURE_PCID));
++ /*
++ * Use boot_cpu_has() instead of this_cpu_has() as this function
++ * might be called during early boot. This should work even after
++ * boot because all CPUs have the same capabilities:
++ */
++ VM_WARN_ON_ONCE(!boot_cpu_has(X86_FEATURE_PCID));
+ return __sme_pa(pgd) | kern_pcid(asid) | CR3_NOFLUSH;
+ }
+
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index b203af0855b5..5071cc7972ea 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -1570,7 +1570,7 @@ void setup_local_APIC(void)
+ * TODO: set up through-local-APIC from through-I/O-APIC?
--macro + */ + value = apic_read(APIC_LVT0) & APIC_LVT_MASKED; +- if (!cpu && (pic_mode || !value)) { ++ if (!cpu && (pic_mode || !value || skip_ioapic_setup)) { + value = APIC_DM_EXTINT; + apic_printk(APIC_VERBOSE, "enabled ExtINT on CPU#%d\n", cpu); + } else { +diff --git a/arch/x86/kernel/devicetree.c b/arch/x86/kernel/devicetree.c +index 25de5f6ca997..5cd387fcc777 100644 +--- a/arch/x86/kernel/devicetree.c ++++ b/arch/x86/kernel/devicetree.c +@@ -11,6 +11,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -194,19 +195,22 @@ static struct of_ioapic_type of_ioapic_type[] = + static int dt_irqdomain_alloc(struct irq_domain *domain, unsigned int virq, + unsigned int nr_irqs, void *arg) + { +- struct of_phandle_args *irq_data = (void *)arg; ++ struct irq_fwspec *fwspec = (struct irq_fwspec *)arg; + struct of_ioapic_type *it; + struct irq_alloc_info tmp; ++ int type_index; + +- if (WARN_ON(irq_data->args_count < 2)) ++ if (WARN_ON(fwspec->param_count < 2)) + return -EINVAL; +- if (irq_data->args[1] >= ARRAY_SIZE(of_ioapic_type)) ++ ++ type_index = fwspec->param[1]; ++ if (type_index >= ARRAY_SIZE(of_ioapic_type)) + return -EINVAL; + +- it = &of_ioapic_type[irq_data->args[1]]; ++ it = &of_ioapic_type[type_index]; + ioapic_set_alloc_attr(&tmp, NUMA_NO_NODE, it->trigger, it->polarity); + tmp.ioapic_id = mpc_ioapic_id(mp_irqdomain_ioapic_idx(domain)); +- tmp.ioapic_pin = irq_data->args[0]; ++ tmp.ioapic_pin = fwspec->param[0]; + + return mp_irqdomain_alloc(domain, virq, nr_irqs, &tmp); + } +@@ -270,14 +274,15 @@ static void __init x86_flattree_get_config(void) + + map_len = max(PAGE_SIZE - (initial_dtb & ~PAGE_MASK), (u64)128); + +- initial_boot_params = dt = early_memremap(initial_dtb, map_len); +- size = of_get_flat_dt_size(); ++ dt = early_memremap(initial_dtb, map_len); ++ size = fdt_totalsize(dt); + if (map_len < size) { + early_memunmap(dt, map_len); +- initial_boot_params = dt = early_memremap(initial_dtb, size); ++ dt = early_memremap(initial_dtb, size); + map_len = size; + } + ++ early_init_dt_verify(dt); + unflatten_and_copy_device_tree(); + early_memunmap(dt, map_len); + } +diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c +index 3f400004f602..752f361ef453 100644 +--- a/arch/x86/kvm/cpuid.c ++++ b/arch/x86/kvm/cpuid.c +@@ -402,8 +402,8 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function, + + /* cpuid 7.0.edx*/ + const u32 kvm_cpuid_7_0_edx_x86_features = +- F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) | F(SSBD) | +- F(ARCH_CAPABILITIES); ++ F(AVX512_4VNNIW) | F(AVX512_4FMAPS) | F(SPEC_CTRL) | ++ F(SPEC_CTRL_SSBD) | F(ARCH_CAPABILITIES); + + /* all calls to cpuid_count() should be made on the same cpu */ + get_cpu(); +@@ -490,6 +490,11 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function, + entry->ecx &= ~F(PKU); + entry->edx &= kvm_cpuid_7_0_edx_x86_features; + cpuid_mask(&entry->edx, CPUID_7_EDX); ++ /* ++ * We emulate ARCH_CAPABILITIES in software even ++ * if the host doesn't support it. ++ */ ++ entry->edx |= F(ARCH_CAPABILITIES); + } else { + entry->ebx = 0; + entry->ecx = 0; +diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c +index 7cf470a3755f..3773c4625114 100644 +--- a/arch/x86/kvm/lapic.c ++++ b/arch/x86/kvm/lapic.c +@@ -321,8 +321,16 @@ void kvm_apic_set_version(struct kvm_vcpu *vcpu) + if (!lapic_in_kernel(vcpu)) + return; + ++ /* ++ * KVM emulates 82093AA datasheet (with in-kernel IOAPIC implementation) ++ * which doesn't have EOI register; Some buggy OSes (e.g. 
Windows with
++ * Hyper-V role) disable EOI broadcast in the LAPIC without checking
++ * the IOAPIC version first, so level-triggered interrupts never get
++ * EOIed in the IOAPIC.
++ */
+ feat = kvm_find_cpuid_entry(apic->vcpu, 0x1, 0);
+- if (feat && (feat->ecx & (1 << (X86_FEATURE_X2APIC & 31))))
++ if (feat && (feat->ecx & (1 << (X86_FEATURE_X2APIC & 31))) &&
++ !ioapic_in_kernel(vcpu->kvm))
+ v |= APIC_LVR_DIRECTED_EOI;
+ kvm_lapic_set_reg(apic, APIC_LVR, v);
+ }
+@@ -1514,11 +1522,23 @@ static bool set_target_expiration(struct kvm_lapic *apic)
+
+ static void advance_periodic_target_expiration(struct kvm_lapic *apic)
+ {
+- apic->lapic_timer.tscdeadline +=
+- nsec_to_cycles(apic->vcpu, apic->lapic_timer.period);
++ ktime_t now = ktime_get();
++ u64 tscl = rdtsc();
++ ktime_t delta;
++
++ /*
++ * Synchronize both deadlines to the same time source or
++ * differences in the periods (caused by differences in the
++ * underlying clocks or numerical approximation errors) will
++ * cause the two to drift apart over time as the errors
++ * accumulate.
++ */
+ apic->lapic_timer.target_expiration =
+ ktime_add_ns(apic->lapic_timer.target_expiration,
+ apic->lapic_timer.period);
++ delta = ktime_sub(apic->lapic_timer.target_expiration, now);
++ apic->lapic_timer.tscdeadline = kvm_read_l1_tsc(apic->vcpu, tscl) +
++ nsec_to_cycles(apic->vcpu, delta);
+ }
+
+ static void start_sw_period(struct kvm_lapic *apic)
+diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
+index 3deb153bf9d9..11e2147c3824 100644
+--- a/arch/x86/kvm/vmx.c
++++ b/arch/x86/kvm/vmx.c
+@@ -2561,6 +2561,8 @@ static void vmx_queue_exception(struct kvm_vcpu *vcpu)
+ return;
+ }
+
++ WARN_ON_ONCE(vmx->emulation_required);
++
+ if (kvm_exception_is_soft(nr)) {
+ vmcs_write32(VM_ENTRY_INSTRUCTION_LEN,
+ vmx->vcpu.arch.event_exit_inst_len);
+@@ -6854,12 +6856,12 @@ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu)
+ goto out;
+ }
+
+- if (err != EMULATE_DONE) {
+- vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
+- vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
+- vcpu->run->internal.ndata = 0;
+- return 0;
+- }
++ if (err != EMULATE_DONE)
++ goto emulation_error;
++
++ if (vmx->emulation_required && !vmx->rmode.vm86_active &&
++ vcpu->arch.exception.pending)
++ goto emulation_error;
+
+ if (vcpu->arch.halt_request) {
+ vcpu->arch.halt_request = 0;
+@@ -6875,6 +6877,12 @@ static int handle_invalid_guest_state(struct kvm_vcpu *vcpu)
+
+ out:
+ return ret;
++
++emulation_error:
++ vcpu->run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
++ vcpu->run->internal.suberror = KVM_INTERNAL_ERROR_EMULATION;
++ vcpu->run->internal.ndata = 0;
++ return 0;
+ }
+
+ static int __grow_ple_window(int val)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index f3df3a934733..999560ff12b5 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -7777,6 +7777,7 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
+ {
+ struct msr_data apic_base_msr;
+ int mmu_reset_needed = 0;
++ int cpuid_update_needed = 0;
+ int pending_vec, max_bits, idx;
+ struct desc_ptr dt;
+ int ret = -EINVAL;
+@@ -7817,8 +7818,10 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
+ vcpu->arch.cr0 = sregs->cr0;
+
+ mmu_reset_needed |= kvm_read_cr4(vcpu) != sregs->cr4;
++ cpuid_update_needed |= ((kvm_read_cr4(vcpu) ^ sregs->cr4) &
++ (X86_CR4_OSXSAVE | X86_CR4_PKE));
+ kvm_x86_ops->set_cr4(vcpu, sregs->cr4);
+- if (sregs->cr4 & (X86_CR4_OSXSAVE | X86_CR4_PKE))
++ if (cpuid_update_needed)
+ kvm_update_cpuid(vcpu);
+
+ idx =
srcu_read_lock(&vcpu->kvm->srcu);
+diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
+index 85cf12219dea..94c41044a578 100644
+--- a/arch/x86/mm/pageattr.c
++++ b/arch/x86/mm/pageattr.c
+@@ -298,9 +298,11 @@ static inline pgprot_t static_protections(pgprot_t prot, unsigned long address,
+
+ /*
+ * The .rodata section needs to be read-only. Using the pfn
+- * catches all aliases.
++ * catches all aliases. This also includes __ro_after_init,
++ * so do not enforce until kernel_set_to_readonly is true.
+ */
+- if (within(pfn, __pa_symbol(__start_rodata) >> PAGE_SHIFT,
++ if (kernel_set_to_readonly &&
++ within(pfn, __pa_symbol(__start_rodata) >> PAGE_SHIFT,
+ __pa_symbol(__end_rodata) >> PAGE_SHIFT))
+ pgprot_val(forbidden) |= _PAGE_RW;
+
+diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
+index 34cda7e0551b..c03c85e4fb6a 100644
+--- a/arch/x86/mm/pgtable.c
++++ b/arch/x86/mm/pgtable.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include
+ #include
++#include
+ #include
+ #include
+ #include
+@@ -636,6 +637,10 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
+ (mtrr != MTRR_TYPE_WRBACK))
+ return 0;
+
++ /* Bail out if we are on a populated non-leaf entry: */
++ if (pud_present(*pud) && !pud_huge(*pud))
++ return 0;
++
+ prot = pgprot_4k_2_large(prot);
+
+ set_pte((pte_t *)pud, pfn_pte(
+@@ -664,6 +669,10 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
+ return 0;
+ }
+
++ /* Bail out if we are on a populated non-leaf entry: */
++ if (pmd_present(*pmd) && !pmd_huge(*pmd))
++ return 0;
++
+ prot = pgprot_4k_2_large(prot);
+
+ set_pte((pte_t *)pmd, pfn_pte(
+diff --git a/drivers/acpi/acpi_pad.c b/drivers/acpi/acpi_pad.c
+index 754431031282..552c1f725b6c 100644
+--- a/drivers/acpi/acpi_pad.c
++++ b/drivers/acpi/acpi_pad.c
+@@ -110,6 +110,7 @@ static void round_robin_cpu(unsigned int tsk_index)
+ cpumask_andnot(tmp, cpu_online_mask, pad_busy_cpus);
+ if (cpumask_empty(tmp)) {
+ mutex_unlock(&round_robin_lock);
++ free_cpumask_var(tmp);
+ return;
+ }
+ for_each_cpu(cpu, tmp) {
+@@ -127,6 +128,8 @@ static void round_robin_cpu(unsigned int tsk_index)
+ mutex_unlock(&round_robin_lock);
+
+ set_cpus_allowed_ptr(current, cpumask_of(preferred_cpu));
++
++ free_cpumask_var(tmp);
+ }
+
+ static void exit_round_robin(unsigned int tsk_index)
+diff --git a/drivers/acpi/acpica/evevent.c b/drivers/acpi/acpica/evevent.c
+index 4b2b0b44a16b..a65c186114eb 100644
+--- a/drivers/acpi/acpica/evevent.c
++++ b/drivers/acpi/acpica/evevent.c
+@@ -204,6 +204,7 @@ u32 acpi_ev_fixed_event_detect(void)
+ u32 fixed_status;
+ u32 fixed_enable;
+ u32 i;
++ acpi_status status;
+
+ ACPI_FUNCTION_NAME(ev_fixed_event_detect);
+
+@@ -211,8 +212,12 @@ u32 acpi_ev_fixed_event_detect(void)
+ * Read the fixed feature status and enable registers, as all the cases
+ * depend on their values. Ignore errors here.
+ */
+- (void)acpi_hw_register_read(ACPI_REGISTER_PM1_STATUS, &fixed_status);
+- (void)acpi_hw_register_read(ACPI_REGISTER_PM1_ENABLE, &fixed_enable);
++ status = acpi_hw_register_read(ACPI_REGISTER_PM1_STATUS, &fixed_status);
++ status |=
++ acpi_hw_register_read(ACPI_REGISTER_PM1_ENABLE, &fixed_enable);
++ if (ACPI_FAILURE(status)) {
++ return (int_status);
++ }
+
+ ACPI_DEBUG_PRINT((ACPI_DB_INTERRUPTS,
+ "Fixed Event Block: Enable %08X Status %08X\n",
+diff --git a/drivers/acpi/acpica/nseval.c b/drivers/acpi/acpica/nseval.c
+index c2d883b8c45e..a18e61081013 100644
+--- a/drivers/acpi/acpica/nseval.c
++++ b/drivers/acpi/acpica/nseval.c
+@@ -308,6 +308,14 @@ acpi_status acpi_ns_evaluate(struct acpi_evaluate_info *info)
+ /* Map AE_CTRL_RETURN_VALUE to AE_OK, we are done with it */
+
+ status = AE_OK;
++ } else if (ACPI_FAILURE(status)) {
++
++ /* If return_object exists, delete it */
++
++ if (info->return_object) {
++ acpi_ut_remove_reference(info->return_object);
++ info->return_object = NULL;
++ }
+ }
+
+ ACPI_DEBUG_PRINT((ACPI_DB_NAMES,
+diff --git a/drivers/acpi/acpica/psargs.c b/drivers/acpi/acpica/psargs.c
+index dbc51bc5fdd6..5ca895da3b10 100644
+--- a/drivers/acpi/acpica/psargs.c
++++ b/drivers/acpi/acpica/psargs.c
+@@ -890,6 +890,10 @@ acpi_ps_get_next_arg(struct acpi_walk_state *walk_state,
+ ACPI_POSSIBLE_METHOD_CALL);
+
+ if (arg->common.aml_opcode == AML_INT_METHODCALL_OP) {
++
++ /* Free method call op and corresponding namestring sub-object */
++
++ acpi_ps_free_op(arg->common.value.arg);
+ acpi_ps_free_op(arg);
+ arg = NULL;
+ walk_state->arg_count = 1;
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 1ff17799769d..1d396b6e6000 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -334,6 +334,7 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ { PCI_VDEVICE(INTEL, 0x9c07), board_ahci_mobile }, /* Lynx LP RAID */
+ { PCI_VDEVICE(INTEL, 0x9c0e), board_ahci_mobile }, /* Lynx LP RAID */
+ { PCI_VDEVICE(INTEL, 0x9c0f), board_ahci_mobile }, /* Lynx LP RAID */
++ { PCI_VDEVICE(INTEL, 0x9dd3), board_ahci_mobile }, /* Cannon Lake PCH-LP AHCI */
+ { PCI_VDEVICE(INTEL, 0x1f22), board_ahci }, /* Avoton AHCI */
+ { PCI_VDEVICE(INTEL, 0x1f23), board_ahci }, /* Avoton AHCI */
+ { PCI_VDEVICE(INTEL, 0x1f24), board_ahci }, /* Avoton RAID */
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index 0df21f046fc6..d4fb9e0c29ee 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -4493,6 +4493,10 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = {
+ /* https://bugzilla.kernel.org/show_bug.cgi?id=15573 */
+ { "C300-CTFDDAC128MAG", "0001", ATA_HORKAGE_NONCQ, },
+
++ /* Some Sandisk SSDs lock up hard with NCQ enabled.
Reported on ++ SD7SN6S256G and SD8SN8U256G */ ++ { "SanDisk SD[78]SN*G", NULL, ATA_HORKAGE_NONCQ, }, ++ + /* devices which puke on READ_NATIVE_MAX */ + { "HDS724040KLSA80", "KFAOA20N", ATA_HORKAGE_BROKEN_HPA, }, + { "WDC WD3200JD-00KLB0", "WD-WCAMR1130137", ATA_HORKAGE_BROKEN_HPA }, +@@ -4553,6 +4557,8 @@ static const struct ata_blacklist_entry ata_device_blacklist [] = { + { "SanDisk SD7UB3Q*G1001", NULL, ATA_HORKAGE_NOLPM, }, + + /* devices that don't properly handle queued TRIM commands */ ++ { "Micron_M500IT_*", "MU01", ATA_HORKAGE_NO_NCQ_TRIM | ++ ATA_HORKAGE_ZERO_AFTER_TRIM, }, + { "Micron_M500_*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | + ATA_HORKAGE_ZERO_AFTER_TRIM, }, + { "Crucial_CT*M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM | +diff --git a/drivers/base/firmware_class.c b/drivers/base/firmware_class.c +index 7dd36ace6152..103b5a38ee38 100644 +--- a/drivers/base/firmware_class.c ++++ b/drivers/base/firmware_class.c +@@ -524,7 +524,7 @@ static int fw_add_devm_name(struct device *dev, const char *name) + + fwn = fw_find_devm_name(dev, name); + if (fwn) +- return 1; ++ return 0; + + fwn = devres_alloc(fw_name_devm_release, sizeof(struct fw_name_devm), + GFP_KERNEL); +@@ -552,6 +552,7 @@ static int assign_fw(struct firmware *fw, struct device *device, + unsigned int opt_flags) + { + struct fw_priv *fw_priv = fw->priv; ++ int ret; + + mutex_lock(&fw_lock); + if (!fw_priv->size || fw_state_is_aborted(fw_priv)) { +@@ -568,8 +569,13 @@ static int assign_fw(struct firmware *fw, struct device *device, + */ + /* don't cache firmware handled without uevent */ + if (device && (opt_flags & FW_OPT_UEVENT) && +- !(opt_flags & FW_OPT_NOCACHE)) +- fw_add_devm_name(device, fw_priv->fw_name); ++ !(opt_flags & FW_OPT_NOCACHE)) { ++ ret = fw_add_devm_name(device, fw_priv->fw_name); ++ if (ret) { ++ mutex_unlock(&fw_lock); ++ return ret; ++ } ++ } + + /* + * After caching firmware image is started, let it piggyback +diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c +index 02a497e7c785..e5e067091572 100644 +--- a/drivers/base/power/main.c ++++ b/drivers/base/power/main.c +@@ -1923,10 +1923,8 @@ static int device_prepare(struct device *dev, pm_message_t state) + + dev->power.wakeup_path = false; + +- if (dev->power.no_pm_callbacks) { +- ret = 1; /* Let device go direct_complete */ ++ if (dev->power.no_pm_callbacks) + goto unlock; +- } + + if (dev->pm_domain) + callback = dev->pm_domain->ops.prepare; +@@ -1960,7 +1958,8 @@ static int device_prepare(struct device *dev, pm_message_t state) + */ + spin_lock_irq(&dev->power.lock); + dev->power.direct_complete = state.event == PM_EVENT_SUSPEND && +- pm_runtime_suspended(dev) && ret > 0 && ++ ((pm_runtime_suspended(dev) && ret > 0) || ++ dev->power.no_pm_callbacks) && + !dev_pm_test_driver_flags(dev, DPM_FLAG_NEVER_SKIP); + spin_unlock_irq(&dev->power.lock); + return 0; +diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c +index 453116fd4362..c7b7c5fa73ab 100644 +--- a/drivers/base/regmap/regmap.c ++++ b/drivers/base/regmap/regmap.c +@@ -99,7 +99,7 @@ bool regmap_cached(struct regmap *map, unsigned int reg) + int ret; + unsigned int val; + +- if (map->cache == REGCACHE_NONE) ++ if (map->cache_type == REGCACHE_NONE) + return false; + + if (!map->cache_ops) +diff --git a/drivers/bcma/driver_mips.c b/drivers/bcma/driver_mips.c +index f040aba48d50..27e9686b6d3a 100644 +--- a/drivers/bcma/driver_mips.c ++++ b/drivers/bcma/driver_mips.c +@@ -184,7 +184,7 @@ static void bcma_core_mips_print_irq(struct bcma_device *dev, unsigned int irq) + { + 
int i; + static const char *irq_name[] = {"2(S)", "3", "4", "5", "6", "D", "I"}; +- char interrupts[20]; ++ char interrupts[25]; + char *ints = interrupts; + + for (i = 0; i < ARRAY_SIZE(irq_name); i++) +diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c +index 287a09611c0f..763f06603131 100644 +--- a/drivers/block/null_blk.c ++++ b/drivers/block/null_blk.c +@@ -72,6 +72,7 @@ enum nullb_device_flags { + NULLB_DEV_FL_CACHE = 3, + }; + ++#define MAP_SZ ((PAGE_SIZE >> SECTOR_SHIFT) + 2) + /* + * nullb_page is a page in memory for nullb devices. + * +@@ -86,10 +87,10 @@ enum nullb_device_flags { + */ + struct nullb_page { + struct page *page; +- unsigned long bitmap; ++ DECLARE_BITMAP(bitmap, MAP_SZ); + }; +-#define NULLB_PAGE_LOCK (sizeof(unsigned long) * 8 - 1) +-#define NULLB_PAGE_FREE (sizeof(unsigned long) * 8 - 2) ++#define NULLB_PAGE_LOCK (MAP_SZ - 1) ++#define NULLB_PAGE_FREE (MAP_SZ - 2) + + struct nullb_device { + struct nullb *nullb; +@@ -728,7 +729,7 @@ static struct nullb_page *null_alloc_page(gfp_t gfp_flags) + if (!t_page->page) + goto out_freepage; + +- t_page->bitmap = 0; ++ memset(t_page->bitmap, 0, sizeof(t_page->bitmap)); + return t_page; + out_freepage: + kfree(t_page); +@@ -738,13 +739,20 @@ static struct nullb_page *null_alloc_page(gfp_t gfp_flags) + + static void null_free_page(struct nullb_page *t_page) + { +- __set_bit(NULLB_PAGE_FREE, &t_page->bitmap); +- if (test_bit(NULLB_PAGE_LOCK, &t_page->bitmap)) ++ __set_bit(NULLB_PAGE_FREE, t_page->bitmap); ++ if (test_bit(NULLB_PAGE_LOCK, t_page->bitmap)) + return; + __free_page(t_page->page); + kfree(t_page); + } + ++static bool null_page_empty(struct nullb_page *page) ++{ ++ int size = MAP_SZ - 2; ++ ++ return find_first_bit(page->bitmap, size) == size; ++} ++ + static void null_free_sector(struct nullb *nullb, sector_t sector, + bool is_cache) + { +@@ -759,9 +767,9 @@ static void null_free_sector(struct nullb *nullb, sector_t sector, + + t_page = radix_tree_lookup(root, idx); + if (t_page) { +- __clear_bit(sector_bit, &t_page->bitmap); ++ __clear_bit(sector_bit, t_page->bitmap); + +- if (!t_page->bitmap) { ++ if (null_page_empty(t_page)) { + ret = radix_tree_delete_item(root, idx, t_page); + WARN_ON(ret != t_page); + null_free_page(ret); +@@ -832,7 +840,7 @@ static struct nullb_page *__null_lookup_page(struct nullb *nullb, + t_page = radix_tree_lookup(root, idx); + WARN_ON(t_page && t_page->page->index != idx); + +- if (t_page && (for_write || test_bit(sector_bit, &t_page->bitmap))) ++ if (t_page && (for_write || test_bit(sector_bit, t_page->bitmap))) + return t_page; + + return NULL; +@@ -895,10 +903,10 @@ static int null_flush_cache_page(struct nullb *nullb, struct nullb_page *c_page) + + t_page = null_insert_page(nullb, idx << PAGE_SECTORS_SHIFT, true); + +- __clear_bit(NULLB_PAGE_LOCK, &c_page->bitmap); +- if (test_bit(NULLB_PAGE_FREE, &c_page->bitmap)) { ++ __clear_bit(NULLB_PAGE_LOCK, c_page->bitmap); ++ if (test_bit(NULLB_PAGE_FREE, c_page->bitmap)) { + null_free_page(c_page); +- if (t_page && t_page->bitmap == 0) { ++ if (t_page && null_page_empty(t_page)) { + ret = radix_tree_delete_item(&nullb->dev->data, + idx, t_page); + null_free_page(t_page); +@@ -914,11 +922,11 @@ static int null_flush_cache_page(struct nullb *nullb, struct nullb_page *c_page) + + for (i = 0; i < PAGE_SECTORS; + i += (nullb->dev->blocksize >> SECTOR_SHIFT)) { +- if (test_bit(i, &c_page->bitmap)) { ++ if (test_bit(i, c_page->bitmap)) { + offset = (i << SECTOR_SHIFT); + memcpy(dst + offset, src + offset, + 
nullb->dev->blocksize); +- __set_bit(i, &t_page->bitmap); ++ __set_bit(i, t_page->bitmap); + } + } + +@@ -955,10 +963,10 @@ static int null_make_cache_space(struct nullb *nullb, unsigned long n) + * We found the page which is being flushed to disk by other + * threads + */ +- if (test_bit(NULLB_PAGE_LOCK, &c_pages[i]->bitmap)) ++ if (test_bit(NULLB_PAGE_LOCK, c_pages[i]->bitmap)) + c_pages[i] = NULL; + else +- __set_bit(NULLB_PAGE_LOCK, &c_pages[i]->bitmap); ++ __set_bit(NULLB_PAGE_LOCK, c_pages[i]->bitmap); + } + + one_round = 0; +@@ -1011,7 +1019,7 @@ static int copy_to_nullb(struct nullb *nullb, struct page *source, + kunmap_atomic(dst); + kunmap_atomic(src); + +- __set_bit(sector & SECTOR_MASK, &t_page->bitmap); ++ __set_bit(sector & SECTOR_MASK, t_page->bitmap); + + if (is_fua) + null_free_sector(nullb, sector, true); +@@ -1802,10 +1810,6 @@ static int __init null_init(void) + struct nullb *nullb; + struct nullb_device *dev; + +- /* check for nullb_page.bitmap */ +- if (sizeof(unsigned long) * 8 - 2 < (PAGE_SIZE >> SECTOR_SHIFT)) +- return -EINVAL; +- + if (g_bs > PAGE_SIZE) { + pr_warn("null_blk: invalid block size\n"); + pr_warn("null_blk: defaults block size to %lu\n", PAGE_SIZE); +diff --git a/drivers/block/paride/pcd.c b/drivers/block/paride/pcd.c +index 7b8c6368beb7..a026211afb51 100644 +--- a/drivers/block/paride/pcd.c ++++ b/drivers/block/paride/pcd.c +@@ -230,6 +230,8 @@ static int pcd_block_open(struct block_device *bdev, fmode_t mode) + struct pcd_unit *cd = bdev->bd_disk->private_data; + int ret; + ++ check_disk_change(bdev); ++ + mutex_lock(&pcd_mutex); + ret = cdrom_open(&cd->info, bdev, mode); + mutex_unlock(&pcd_mutex); +diff --git a/drivers/cdrom/cdrom.c b/drivers/cdrom/cdrom.c +index 5f7d86509f2f..bfc566d3f31a 100644 +--- a/drivers/cdrom/cdrom.c ++++ b/drivers/cdrom/cdrom.c +@@ -1152,9 +1152,6 @@ int cdrom_open(struct cdrom_device_info *cdi, struct block_device *bdev, + + cd_dbg(CD_OPEN, "entering cdrom_open\n"); + +- /* open is event synchronization point, check events first */ +- check_disk_change(bdev); +- + /* if this was a O_NONBLOCK open and we should honor the flags, + * do a quick open without drive/disc integrity checks. 
*/ + cdi->use_count++; +diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c +index 6495b03f576c..ae3a7537cf0f 100644 +--- a/drivers/cdrom/gdrom.c ++++ b/drivers/cdrom/gdrom.c +@@ -497,6 +497,9 @@ static const struct cdrom_device_ops gdrom_ops = { + static int gdrom_bdops_open(struct block_device *bdev, fmode_t mode) + { + int ret; ++ ++ check_disk_change(bdev); ++ + mutex_lock(&gdrom_mutex); + ret = cdrom_open(gd.cd_info, bdev, mode); + mutex_unlock(&gdrom_mutex); +diff --git a/drivers/char/hw_random/bcm2835-rng.c b/drivers/char/hw_random/bcm2835-rng.c +index 7a84cec30c3a..6767d965c36c 100644 +--- a/drivers/char/hw_random/bcm2835-rng.c ++++ b/drivers/char/hw_random/bcm2835-rng.c +@@ -163,6 +163,8 @@ static int bcm2835_rng_probe(struct platform_device *pdev) + + /* Clock is optional on most platforms */ + priv->clk = devm_clk_get(dev, NULL); ++ if (IS_ERR(priv->clk) && PTR_ERR(priv->clk) == -EPROBE_DEFER) ++ return -EPROBE_DEFER; + + priv->rng.name = pdev->name; + priv->rng.init = bcm2835_rng_init; +diff --git a/drivers/char/hw_random/stm32-rng.c b/drivers/char/hw_random/stm32-rng.c +index 63d84e6f1891..83c695938a2d 100644 +--- a/drivers/char/hw_random/stm32-rng.c ++++ b/drivers/char/hw_random/stm32-rng.c +@@ -21,6 +21,7 @@ + #include + #include + #include ++#include + #include + + #define RNG_CR 0x00 +@@ -46,6 +47,7 @@ struct stm32_rng_private { + struct hwrng rng; + void __iomem *base; + struct clk *clk; ++ struct reset_control *rst; + }; + + static int stm32_rng_read(struct hwrng *rng, void *data, size_t max, bool wait) +@@ -140,6 +142,13 @@ static int stm32_rng_probe(struct platform_device *ofdev) + if (IS_ERR(priv->clk)) + return PTR_ERR(priv->clk); + ++ priv->rst = devm_reset_control_get(&ofdev->dev, NULL); ++ if (!IS_ERR(priv->rst)) { ++ reset_control_assert(priv->rst); ++ udelay(2); ++ reset_control_deassert(priv->rst); ++ } ++ + dev_set_drvdata(dev, priv); + + priv->rng.name = dev_driver_string(dev), +diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c +index f929e72bdac8..16d7fb563718 100644 +--- a/drivers/char/ipmi/ipmi_ssif.c ++++ b/drivers/char/ipmi/ipmi_ssif.c +@@ -761,7 +761,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result, + ssif_info->ssif_state = SSIF_NORMAL; + ipmi_ssif_unlock_cond(ssif_info, flags); + pr_warn(PFX "Error getting flags: %d %d, %x\n", +- result, len, data[2]); ++ result, len, (len >= 3) ? data[2] : 0); + } else if (data[0] != (IPMI_NETFN_APP_REQUEST | 1) << 2 + || data[1] != IPMI_GET_MSG_FLAGS_CMD) { + /* +@@ -783,7 +783,7 @@ static void msg_done_handler(struct ssif_info *ssif_info, int result, + if ((result < 0) || (len < 3) || (data[2] != 0)) { + /* Error clearing flags */ + pr_warn(PFX "Error clearing flags: %d %d, %x\n", +- result, len, data[2]); ++ result, len, (len >= 3) ? 
data[2] : 0); + } else if (data[0] != (IPMI_NETFN_APP_REQUEST | 1) << 2 + || data[1] != IPMI_CLEAR_MSG_FLAGS_CMD) { + pr_warn(PFX "Invalid response clearing flags: %x %x\n", +diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c +index dcb1cb9a4572..8b432d6e846d 100644 +--- a/drivers/cpufreq/cppc_cpufreq.c ++++ b/drivers/cpufreq/cppc_cpufreq.c +@@ -167,9 +167,19 @@ static int cppc_cpufreq_cpu_init(struct cpufreq_policy *policy) + NSEC_PER_USEC; + policy->shared_type = cpu->shared_type; + +- if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) ++ if (policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) { ++ int i; ++ + cpumask_copy(policy->cpus, cpu->shared_cpu_map); +- else if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL) { ++ ++ for_each_cpu(i, policy->cpus) { ++ if (unlikely(i == policy->cpu)) ++ continue; ++ ++ memcpy(&all_cpu_data[i]->perf_caps, &cpu->perf_caps, ++ sizeof(cpu->perf_caps)); ++ } ++ } else if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL) { + /* Support only SW_ANY for now. */ + pr_debug("Unsupported CPU co-ord type\n"); + return -EFAULT; +@@ -233,8 +243,13 @@ static int __init cppc_cpufreq_init(void) + return ret; + + out: +- for_each_possible_cpu(i) +- kfree(all_cpu_data[i]); ++ for_each_possible_cpu(i) { ++ cpu = all_cpu_data[i]; ++ if (!cpu) ++ break; ++ free_cpumask_var(cpu->shared_cpu_map); ++ kfree(cpu); ++ } + + kfree(all_cpu_data); + return -ENODEV; +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c +index de33ebf008ad..8814c572e263 100644 +--- a/drivers/cpufreq/cpufreq.c ++++ b/drivers/cpufreq/cpufreq.c +@@ -1327,14 +1327,14 @@ static int cpufreq_online(unsigned int cpu) + return 0; + + out_exit_policy: ++ for_each_cpu(j, policy->real_cpus) ++ remove_cpu_dev_symlink(policy, get_cpu_device(j)); ++ + up_write(&policy->rwsem); + + if (cpufreq_driver->exit) + cpufreq_driver->exit(policy); + +- for_each_cpu(j, policy->real_cpus) +- remove_cpu_dev_symlink(policy, get_cpu_device(j)); +- + out_free_policy: + cpufreq_policy_free(policy); + return ret; +diff --git a/drivers/dma/pl330.c b/drivers/dma/pl330.c +index d7327fd5f445..de1fd59fe136 100644 +--- a/drivers/dma/pl330.c ++++ b/drivers/dma/pl330.c +@@ -1510,7 +1510,7 @@ static void pl330_dotask(unsigned long data) + /* Returns 1 if state was updated, 0 otherwise */ + static int pl330_update(struct pl330_dmac *pl330) + { +- struct dma_pl330_desc *descdone, *tmp; ++ struct dma_pl330_desc *descdone; + unsigned long flags; + void __iomem *regs; + u32 val; +@@ -1588,7 +1588,9 @@ static int pl330_update(struct pl330_dmac *pl330) + } + + /* Now that we are in no hurry, do the callbacks */ +- list_for_each_entry_safe(descdone, tmp, &pl330->req_done, rqd) { ++ while (!list_empty(&pl330->req_done)) { ++ descdone = list_first_entry(&pl330->req_done, ++ struct dma_pl330_desc, rqd); + list_del(&descdone->rqd); + spin_unlock_irqrestore(&pl330->lock, flags); + dma_pl330_rqcb(descdone, PL330_ERR_NONE); +diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c +index d076940e0c69..4cc58904ee52 100644 +--- a/drivers/dma/qcom/bam_dma.c ++++ b/drivers/dma/qcom/bam_dma.c +@@ -393,6 +393,7 @@ struct bam_device { + struct device_dma_parameters dma_parms; + struct bam_chan *channels; + u32 num_channels; ++ u32 num_ees; + + /* execution environment ID, from DT */ + u32 ee; +@@ -1128,15 +1129,19 @@ static int bam_init(struct bam_device *bdev) + u32 val; + + /* read revision and configuration information */ +- val = readl_relaxed(bam_addr(bdev, 0, BAM_REVISION)) >> NUM_EES_SHIFT; +- val &= 
NUM_EES_MASK;
++ if (!bdev->num_ees) {
++ val = readl_relaxed(bam_addr(bdev, 0, BAM_REVISION));
++ bdev->num_ees = (val >> NUM_EES_SHIFT) & NUM_EES_MASK;
++ }
+
+ /* check that configured EE is within range */
+- if (bdev->ee >= val)
++ if (bdev->ee >= bdev->num_ees)
+ return -EINVAL;
+
+- val = readl_relaxed(bam_addr(bdev, 0, BAM_NUM_PIPES));
+- bdev->num_channels = val & BAM_NUM_PIPES_MASK;
++ if (!bdev->num_channels) {
++ val = readl_relaxed(bam_addr(bdev, 0, BAM_NUM_PIPES));
++ bdev->num_channels = val & BAM_NUM_PIPES_MASK;
++ }
+
+ if (bdev->controlled_remotely)
+ return 0;
+@@ -1232,6 +1237,18 @@ static int bam_dma_probe(struct platform_device *pdev)
+ bdev->controlled_remotely = of_property_read_bool(pdev->dev.of_node,
+ "qcom,controlled-remotely");
+
++ if (bdev->controlled_remotely) {
++ ret = of_property_read_u32(pdev->dev.of_node, "num-channels",
++ &bdev->num_channels);
++ if (ret)
++ dev_err(bdev->dev, "num-channels unspecified in dt\n");
++
++ ret = of_property_read_u32(pdev->dev.of_node, "qcom,num-ees",
++ &bdev->num_ees);
++ if (ret)
++ dev_err(bdev->dev, "num-ees unspecified in dt\n");
++ }
++
+ bdev->bamclk = devm_clk_get(bdev->dev, "bam_clk");
+ if (IS_ERR(bdev->bamclk))
+ return PTR_ERR(bdev->bamclk);
+diff --git a/drivers/dma/sh/rcar-dmac.c b/drivers/dma/sh/rcar-dmac.c
+index d0cacdb0713e..2a2ccd9c78e4 100644
+--- a/drivers/dma/sh/rcar-dmac.c
++++ b/drivers/dma/sh/rcar-dmac.c
+@@ -1301,8 +1301,17 @@ static unsigned int rcar_dmac_chan_get_residue(struct rcar_dmac_chan *chan,
+ * If the cookie doesn't correspond to the currently running transfer
+ * then the descriptor hasn't been processed yet, and the residue is
+ * equal to the full descriptor size.
++ * Also, a client driver may call this function before
++ * rcar_dmac_isr_channel_thread() runs. In this case, "desc.running"
++ * will point to the next descriptor and the done list will be
++ * non-empty. So, if the argument cookie matches a cookie on the done
++ * list, we can assume the residue is zero.
+ */
+ if (cookie != desc->async_tx.cookie) {
++ list_for_each_entry(desc, &chan->desc.done, node) {
++ if (cookie == desc->async_tx.cookie)
++ return 0;
++ }
+ list_for_each_entry(desc, &chan->desc.pending, node) {
+ if (cookie == desc->async_tx.cookie)
+ return desc->size;
+@@ -1677,8 +1686,8 @@ static const struct dev_pm_ops rcar_dmac_pm = {
+ * - Wait for the current transfer to complete and stop the device,
+ * - Resume transfers, if any.
+ */
+- SET_LATE_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+- pm_runtime_force_resume)
++ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
++ pm_runtime_force_resume)
+ SET_RUNTIME_PM_OPS(rcar_dmac_runtime_suspend, rcar_dmac_runtime_resume,
+ NULL)
+ };
+diff --git a/drivers/firmware/dmi_scan.c b/drivers/firmware/dmi_scan.c
+index e763e1484331..c3be8ef9243f 100644
+--- a/drivers/firmware/dmi_scan.c
++++ b/drivers/firmware/dmi_scan.c
+@@ -186,7 +186,7 @@ static void __init dmi_save_uuid(const struct dmi_header *dm, int slot,
+ char *s;
+ int is_ff = 1, is_00 = 1, i;
+
+- if (dmi_ident[slot] || dm->length <= index + 16)
++ if (dmi_ident[slot] || dm->length < index + 16)
+ return;
+
+ d = (u8 *) dm + index;
+diff --git a/drivers/firmware/efi/arm-runtime.c b/drivers/firmware/efi/arm-runtime.c
+index 1cc41c3d6315..86a1ad17a32e 100644
+--- a/drivers/firmware/efi/arm-runtime.c
++++ b/drivers/firmware/efi/arm-runtime.c
+@@ -54,6 +54,9 @@ static struct ptdump_info efi_ptdump_info = {
+
+ static int __init ptdump_init(void)
+ {
++ if (!efi_enabled(EFI_RUNTIME_SERVICES))
++ return 0;
++
+ return ptdump_debugfs_register(&efi_ptdump_info, "efi_page_tables");
+ }
+ device_initcall(ptdump_init);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+index 2a519f9062ee..e515ca01ffb2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.h
+@@ -26,6 +26,7 @@
+ #define AMDGPU_AMDKFD_H_INCLUDED
+
+ #include
++ #include
+ #include
+ #include
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index a162d87ca0c8..b552a9416e92 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -321,14 +321,45 @@ int amdgpu_ib_ring_tests(struct amdgpu_device *adev)
+ {
+ unsigned i;
+ int r, ret = 0;
++ long tmo_gfx, tmo_mm;
++
++ tmo_mm = tmo_gfx = AMDGPU_IB_TEST_TIMEOUT;
++ if (amdgpu_sriov_vf(adev)) {
++ /* MM engines on the hypervisor side are not scheduled together
++ * with the CP and SDMA engines, so even in exclusive mode an MM
++ * engine could still be running on another VF; the IB test timeout
++ * for MM engines under SR-IOV should therefore be set to a long
++ * time. 8 seconds should be enough for the MM to come back to this VF.
++ */
++ tmo_mm = 8 * AMDGPU_IB_TEST_TIMEOUT;
++ }
++
++ if (amdgpu_sriov_runtime(adev)) {
++ /* the CP and SDMA engines are scheduled together, so the
++ * timeout needs to be wide enough to cover the time spent
++ * waiting for them to come back under RUNTIME only
++ */
++ tmo_gfx = 8 * AMDGPU_IB_TEST_TIMEOUT;
++ }
+
+ for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {
+ struct amdgpu_ring *ring = adev->rings[i];
++ long tmo;
+
+ if (!ring || !ring->ready)
+ continue;
+
+- r = amdgpu_ring_test_ib(ring, AMDGPU_IB_TEST_TIMEOUT);
++ /* MM engines need more time */
++ if (ring->funcs->type == AMDGPU_RING_TYPE_UVD ||
++ ring->funcs->type == AMDGPU_RING_TYPE_VCE ||
++ ring->funcs->type == AMDGPU_RING_TYPE_UVD_ENC ||
++ ring->funcs->type == AMDGPU_RING_TYPE_VCN_DEC ||
++ ring->funcs->type == AMDGPU_RING_TYPE_VCN_ENC)
++ tmo = tmo_mm;
++ else
++ tmo = tmo_gfx;
++
++ r = amdgpu_ring_test_ib(ring, tmo);
+ if (r) {
+ ring->ready = false;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index c06479615e8a..d7bbccd67eb9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -2954,7 +2954,13 @@ static int gfx_v9_0_hw_fini(void *handle)
+ gfx_v9_0_kcq_disable(&adev->gfx.kiq.ring, &adev->gfx.compute_ring[i]);
+
+ if (amdgpu_sriov_vf(adev)) {
+- pr_debug("For SRIOV client, shouldn't do anything.\n");
++ gfx_v9_0_cp_gfx_enable(adev, false);
++ /* must disable polling for SR-IOV when the hw is finished, otherwise
++ * the CPC engine may keep fetching the WB address, which is already
++ * invalid after the sw is finished, and trigger a DMAR read error on
++ * the hypervisor side.
++ */
++ WREG32_FIELD15(GC, 0, CP_PQ_WPTR_POLL_CNTL, EN, 0);
+ return 0;
+ }
+ gfx_v9_0_cp_enable(adev, false);
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
+index fa63c564cf91..7657cc5784a5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
+@@ -719,14 +719,17 @@ static int sdma_v3_0_gfx_resume(struct amdgpu_device *adev)
+ WREG32(mmSDMA0_GFX_RB_WPTR_POLL_ADDR_HI + sdma_offsets[i],
+ upper_32_bits(wptr_gpu_addr));
+ wptr_poll_cntl = RREG32(mmSDMA0_GFX_RB_WPTR_POLL_CNTL + sdma_offsets[i]);
+- if (ring->use_pollmem)
++ if (ring->use_pollmem) {
++ /* wptr polling is not fast enough; directly clear the wptr register */
++ WREG32(mmSDMA0_GFX_RB_WPTR + sdma_offsets[i], 0);
+ wptr_poll_cntl = REG_SET_FIELD(wptr_poll_cntl,
+ SDMA0_GFX_RB_WPTR_POLL_CNTL,
+ ENABLE, 1);
+- else
++ } else {
+ wptr_poll_cntl = REG_SET_FIELD(wptr_poll_cntl,
+ SDMA0_GFX_RB_WPTR_POLL_CNTL,
+ ENABLE, 0);
++ }
+ WREG32(mmSDMA0_GFX_RB_WPTR_POLL_CNTL + sdma_offsets[i], wptr_poll_cntl);
+
+ /* enable DMA RB */
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index 4d07ffebfd31..6d1dd64f50c3 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -35,6 +35,7 @@
+ #include "core_types.h"
+ #include "set_mode_types.h"
+ #include "virtual/virtual_stream_encoder.h"
++#include "dpcd_defs.h"
+
+ #include "dce80/dce80_resource.h"
+ #include "dce100/dce100_resource.h"
+@@ -2428,7 +2429,8 @@ static void set_vsc_info_packet(
+ unsigned int vscPacketRevision = 0;
+ unsigned int i;
+
+- if (stream->sink->link->psr_enabled) {
++ /* VSC packet revision is set to 2 when DP revision >= 1.2 */
++ if (stream->sink->link->dpcd_caps.dpcd_rev.raw >= DPCD_REV_12) {
+ vscPacketRevision = 2;
+ }
+
+diff --git
a/drivers/gpu/drm/bridge/sii902x.c b/drivers/gpu/drm/bridge/sii902x.c +index b1ab4ab09532..60373d7eb220 100644 +--- a/drivers/gpu/drm/bridge/sii902x.c ++++ b/drivers/gpu/drm/bridge/sii902x.c +@@ -137,7 +137,9 @@ static int sii902x_get_modes(struct drm_connector *connector) + struct sii902x *sii902x = connector_to_sii902x(connector); + struct regmap *regmap = sii902x->regmap; + u32 bus_format = MEDIA_BUS_FMT_RGB888_1X24; ++ struct device *dev = &sii902x->i2c->dev; + unsigned long timeout; ++ unsigned int retries; + unsigned int status; + struct edid *edid; + int num = 0; +@@ -159,7 +161,7 @@ static int sii902x_get_modes(struct drm_connector *connector) + time_before(jiffies, timeout)); + + if (!(status & SII902X_SYS_CTRL_DDC_BUS_GRTD)) { +- dev_err(&sii902x->i2c->dev, "failed to acquire the i2c bus\n"); ++ dev_err(dev, "failed to acquire the i2c bus\n"); + return -ETIMEDOUT; + } + +@@ -179,9 +181,19 @@ static int sii902x_get_modes(struct drm_connector *connector) + if (ret) + return ret; + +- ret = regmap_read(regmap, SII902X_SYS_CTRL_DATA, &status); ++ /* ++ * Sometimes the I2C bus can stall after failure to use the ++ * EDID channel. Retry a few times to see if things clear ++ * up, else continue anyway. ++ */ ++ retries = 5; ++ do { ++ ret = regmap_read(regmap, SII902X_SYS_CTRL_DATA, ++ &status); ++ retries--; ++ } while (ret && retries); + if (ret) +- return ret; ++ dev_err(dev, "failed to read status (%d)\n", ret); + + ret = regmap_update_bits(regmap, SII902X_SYS_CTRL_DATA, + SII902X_SYS_CTRL_DDC_BUS_REQ | +@@ -201,7 +213,7 @@ static int sii902x_get_modes(struct drm_connector *connector) + + if (status & (SII902X_SYS_CTRL_DDC_BUS_REQ | + SII902X_SYS_CTRL_DDC_BUS_GRTD)) { +- dev_err(&sii902x->i2c->dev, "failed to release the i2c bus\n"); ++ dev_err(dev, "failed to release the i2c bus\n"); + return -ETIMEDOUT; + } + +diff --git a/drivers/gpu/drm/drm_vblank.c b/drivers/gpu/drm/drm_vblank.c +index 32d9bcf5be7f..f0d3ed5f2528 100644 +--- a/drivers/gpu/drm/drm_vblank.c ++++ b/drivers/gpu/drm/drm_vblank.c +@@ -271,7 +271,7 @@ static void drm_update_vblank_count(struct drm_device *dev, unsigned int pipe, + store_vblank(dev, pipe, diff, t_vblank, cur_vblank); + } + +-static u32 drm_vblank_count(struct drm_device *dev, unsigned int pipe) ++static u64 drm_vblank_count(struct drm_device *dev, unsigned int pipe) + { + struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; + +@@ -292,11 +292,11 @@ static u32 drm_vblank_count(struct drm_device *dev, unsigned int pipe) + * This is mostly useful for hardware that can obtain the scanout position, but + * doesn't have a hardware frame counter. 
+ */ +-u32 drm_crtc_accurate_vblank_count(struct drm_crtc *crtc) ++u64 drm_crtc_accurate_vblank_count(struct drm_crtc *crtc) + { + struct drm_device *dev = crtc->dev; + unsigned int pipe = drm_crtc_index(crtc); +- u32 vblank; ++ u64 vblank; + unsigned long flags; + + WARN_ONCE(drm_debug & DRM_UT_VBL && !dev->driver->get_vblank_timestamp, +@@ -1055,7 +1055,7 @@ void drm_wait_one_vblank(struct drm_device *dev, unsigned int pipe) + { + struct drm_vblank_crtc *vblank = &dev->vblank[pipe]; + int ret; +- u32 last; ++ u64 last; + + if (WARN_ON(pipe >= dev->num_crtcs)) + return; +diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c +index f9ad0e960263..2751b9107fc5 100644 +--- a/drivers/gpu/drm/meson/meson_drv.c ++++ b/drivers/gpu/drm/meson/meson_drv.c +@@ -189,40 +189,51 @@ static int meson_drv_bind_master(struct device *dev, bool has_components) + + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "vpu"); + regs = devm_ioremap_resource(dev, res); +- if (IS_ERR(regs)) +- return PTR_ERR(regs); ++ if (IS_ERR(regs)) { ++ ret = PTR_ERR(regs); ++ goto free_drm; ++ } + + priv->io_base = regs; + + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "hhi"); + /* Simply ioremap since it may be a shared register zone */ + regs = devm_ioremap(dev, res->start, resource_size(res)); +- if (!regs) +- return -EADDRNOTAVAIL; ++ if (!regs) { ++ ret = -EADDRNOTAVAIL; ++ goto free_drm; ++ } + + priv->hhi = devm_regmap_init_mmio(dev, regs, + &meson_regmap_config); + if (IS_ERR(priv->hhi)) { + dev_err(&pdev->dev, "Couldn't create the HHI regmap\n"); +- return PTR_ERR(priv->hhi); ++ ret = PTR_ERR(priv->hhi); ++ goto free_drm; + } + + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dmc"); + /* Simply ioremap since it may be a shared register zone */ + regs = devm_ioremap(dev, res->start, resource_size(res)); +- if (!regs) +- return -EADDRNOTAVAIL; ++ if (!regs) { ++ ret = -EADDRNOTAVAIL; ++ goto free_drm; ++ } + + priv->dmc = devm_regmap_init_mmio(dev, regs, + &meson_regmap_config); + if (IS_ERR(priv->dmc)) { + dev_err(&pdev->dev, "Couldn't create the DMC regmap\n"); +- return PTR_ERR(priv->dmc); ++ ret = PTR_ERR(priv->dmc); ++ goto free_drm; + } + + priv->vsync_irq = platform_get_irq(pdev, 0); + +- drm_vblank_init(drm, 1); ++ ret = drm_vblank_init(drm, 1); ++ if (ret) ++ goto free_drm; ++ + drm_mode_config_init(drm); + drm->mode_config.max_width = 3840; + drm->mode_config.max_height = 2160; +diff --git a/drivers/gpu/drm/omapdrm/dss/dss.c b/drivers/gpu/drm/omapdrm/dss/dss.c +index 04300b2da1b1..f2a38727fa85 100644 +--- a/drivers/gpu/drm/omapdrm/dss/dss.c ++++ b/drivers/gpu/drm/omapdrm/dss/dss.c +@@ -1300,88 +1300,18 @@ static const struct soc_device_attribute dss_soc_devices[] = { + + static int dss_bind(struct device *dev) + { +- struct platform_device *pdev = to_platform_device(dev); +- struct resource *dss_mem; +- u32 rev; + int r; + +- dss_mem = platform_get_resource(dss.pdev, IORESOURCE_MEM, 0); +- dss.base = devm_ioremap_resource(&pdev->dev, dss_mem); +- if (IS_ERR(dss.base)) +- return PTR_ERR(dss.base); +- +- r = dss_get_clocks(); ++ r = component_bind_all(dev, NULL); + if (r) + return r; + +- r = dss_setup_default_clock(); +- if (r) +- goto err_setup_clocks; +- +- r = dss_video_pll_probe(pdev); +- if (r) +- goto err_pll_init; +- +- r = dss_init_ports(pdev); +- if (r) +- goto err_init_ports; +- +- pm_runtime_enable(&pdev->dev); +- +- r = dss_runtime_get(); +- if (r) +- goto err_runtime_get; +- +- dss.dss_clk_rate = clk_get_rate(dss.dss_clk); +- +- /* Select 
DPLL */ +- REG_FLD_MOD(DSS_CONTROL, 0, 0, 0); +- +- dss_select_dispc_clk_source(DSS_CLK_SRC_FCK); +- +-#ifdef CONFIG_OMAP2_DSS_VENC +- REG_FLD_MOD(DSS_CONTROL, 1, 4, 4); /* venc dac demen */ +- REG_FLD_MOD(DSS_CONTROL, 1, 3, 3); /* venc clock 4x enable */ +- REG_FLD_MOD(DSS_CONTROL, 0, 2, 2); /* venc clock mode = normal */ +-#endif +- dss.dsi_clk_source[0] = DSS_CLK_SRC_FCK; +- dss.dsi_clk_source[1] = DSS_CLK_SRC_FCK; +- dss.dispc_clk_source = DSS_CLK_SRC_FCK; +- dss.lcd_clk_source[0] = DSS_CLK_SRC_FCK; +- dss.lcd_clk_source[1] = DSS_CLK_SRC_FCK; +- +- rev = dss_read_reg(DSS_REVISION); +- pr_info("OMAP DSS rev %d.%d\n", FLD_GET(rev, 7, 4), FLD_GET(rev, 3, 0)); +- +- dss_runtime_put(); +- +- r = component_bind_all(&pdev->dev, NULL); +- if (r) +- goto err_component; +- +- dss_debugfs_create_file("dss", dss_dump_regs); +- + pm_set_vt_switch(0); + + omapdss_gather_components(dev); + omapdss_set_is_initialized(true); + + return 0; +- +-err_component: +-err_runtime_get: +- pm_runtime_disable(&pdev->dev); +- dss_uninit_ports(pdev); +-err_init_ports: +- if (dss.video1_pll) +- dss_video_pll_uninit(dss.video1_pll); +- +- if (dss.video2_pll) +- dss_video_pll_uninit(dss.video2_pll); +-err_pll_init: +-err_setup_clocks: +- dss_put_clocks(); +- return r; + } + + static void dss_unbind(struct device *dev) +@@ -1391,18 +1321,6 @@ static void dss_unbind(struct device *dev) + omapdss_set_is_initialized(false); + + component_unbind_all(&pdev->dev, NULL); +- +- if (dss.video1_pll) +- dss_video_pll_uninit(dss.video1_pll); +- +- if (dss.video2_pll) +- dss_video_pll_uninit(dss.video2_pll); +- +- dss_uninit_ports(pdev); +- +- pm_runtime_disable(&pdev->dev); +- +- dss_put_clocks(); + } + + static const struct component_master_ops dss_component_ops = { +@@ -1434,10 +1352,46 @@ static int dss_add_child_component(struct device *dev, void *data) + return 0; + } + ++static int dss_probe_hardware(void) ++{ ++ u32 rev; ++ int r; ++ ++ r = dss_runtime_get(); ++ if (r) ++ return r; ++ ++ dss.dss_clk_rate = clk_get_rate(dss.dss_clk); ++ ++ /* Select DPLL */ ++ REG_FLD_MOD(DSS_CONTROL, 0, 0, 0); ++ ++ dss_select_dispc_clk_source(DSS_CLK_SRC_FCK); ++ ++#ifdef CONFIG_OMAP2_DSS_VENC ++ REG_FLD_MOD(DSS_CONTROL, 1, 4, 4); /* venc dac demen */ ++ REG_FLD_MOD(DSS_CONTROL, 1, 3, 3); /* venc clock 4x enable */ ++ REG_FLD_MOD(DSS_CONTROL, 0, 2, 2); /* venc clock mode = normal */ ++#endif ++ dss.dsi_clk_source[0] = DSS_CLK_SRC_FCK; ++ dss.dsi_clk_source[1] = DSS_CLK_SRC_FCK; ++ dss.dispc_clk_source = DSS_CLK_SRC_FCK; ++ dss.lcd_clk_source[0] = DSS_CLK_SRC_FCK; ++ dss.lcd_clk_source[1] = DSS_CLK_SRC_FCK; ++ ++ rev = dss_read_reg(DSS_REVISION); ++ pr_info("OMAP DSS rev %d.%d\n", FLD_GET(rev, 7, 4), FLD_GET(rev, 3, 0)); ++ ++ dss_runtime_put(); ++ ++ return 0; ++} ++ + static int dss_probe(struct platform_device *pdev) + { + const struct soc_device_attribute *soc; + struct component_match *match = NULL; ++ struct resource *dss_mem; + int r; + + dss.pdev = pdev; +@@ -1458,20 +1412,69 @@ static int dss_probe(struct platform_device *pdev) + else + dss.feat = of_match_device(dss_of_match, &pdev->dev)->data; + +- r = dss_initialize_debugfs(); ++ /* Map I/O registers, get and setup clocks. 
*/ ++ dss_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); ++ dss.base = devm_ioremap_resource(&pdev->dev, dss_mem); ++ if (IS_ERR(dss.base)) ++ return PTR_ERR(dss.base); ++ ++ r = dss_get_clocks(); + if (r) + return r; + +- /* add all the child devices as components */ ++ r = dss_setup_default_clock(); ++ if (r) ++ goto err_put_clocks; ++ ++ /* Setup the video PLLs and the DPI and SDI ports. */ ++ r = dss_video_pll_probe(pdev); ++ if (r) ++ goto err_put_clocks; ++ ++ r = dss_init_ports(pdev); ++ if (r) ++ goto err_uninit_plls; ++ ++ /* Enable runtime PM and probe the hardware. */ ++ pm_runtime_enable(&pdev->dev); ++ ++ r = dss_probe_hardware(); ++ if (r) ++ goto err_pm_runtime_disable; ++ ++ /* Initialize debugfs. */ ++ r = dss_initialize_debugfs(); ++ if (r) ++ goto err_pm_runtime_disable; ++ ++ dss_debugfs_create_file("dss", dss_dump_regs); ++ ++ /* Add all the child devices as components. */ + device_for_each_child(&pdev->dev, &match, dss_add_child_component); + + r = component_master_add_with_match(&pdev->dev, &dss_component_ops, match); +- if (r) { +- dss_uninitialize_debugfs(); +- return r; +- } ++ if (r) ++ goto err_uninit_debugfs; + + return 0; ++ ++err_uninit_debugfs: ++ dss_uninitialize_debugfs(); ++ ++err_pm_runtime_disable: ++ pm_runtime_disable(&pdev->dev); ++ dss_uninit_ports(pdev); ++ ++err_uninit_plls: ++ if (dss.video1_pll) ++ dss_video_pll_uninit(dss.video1_pll); ++ if (dss.video2_pll) ++ dss_video_pll_uninit(dss.video2_pll); ++ ++err_put_clocks: ++ dss_put_clocks(); ++ ++ return r; + } + + static int dss_remove(struct platform_device *pdev) +@@ -1480,6 +1483,18 @@ static int dss_remove(struct platform_device *pdev) + + dss_uninitialize_debugfs(); + ++ pm_runtime_disable(&pdev->dev); ++ ++ dss_uninit_ports(pdev); ++ ++ if (dss.video1_pll) ++ dss_video_pll_uninit(dss.video1_pll); ++ ++ if (dss.video2_pll) ++ dss_video_pll_uninit(dss.video2_pll); ++ ++ dss_put_clocks(); ++ + return 0; + } + +diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c +index 5591984a392b..f9649bded63f 100644 +--- a/drivers/gpu/drm/panel/panel-simple.c ++++ b/drivers/gpu/drm/panel/panel-simple.c +@@ -1597,7 +1597,7 @@ static const struct panel_desc ontat_yx700wv03 = { + .width = 154, + .height = 83, + }, +- .bus_format = MEDIA_BUS_FMT_RGB888_1X24, ++ .bus_format = MEDIA_BUS_FMT_RGB666_1X18, + }; + + static const struct drm_display_mode ortustech_com43h4m85ulc_mode = { +diff --git a/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.c b/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.c +index 12d22f3db1af..6a4b8c98a719 100644 +--- a/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.c ++++ b/drivers/gpu/drm/rcar-du/rcar_du_lvdsenc.c +@@ -59,11 +59,8 @@ static void rcar_du_lvdsenc_start_gen2(struct rcar_du_lvdsenc *lvds, + + rcar_lvds_write(lvds, LVDPLLCR, pllcr); + +- /* +- * Select the input, hardcode mode 0, enable LVDS operation and turn +- * bias circuitry on. +- */ +- lvdcr0 = (lvds->mode << LVDCR0_LVMD_SHIFT) | LVDCR0_BEN | LVDCR0_LVEN; ++ /* Select the input and set the LVDS mode. */ ++ lvdcr0 = lvds->mode << LVDCR0_LVMD_SHIFT; + if (rcrtc->index == 2) + lvdcr0 |= LVDCR0_DUSEL; + rcar_lvds_write(lvds, LVDCR0, lvdcr0); +@@ -74,6 +71,10 @@ static void rcar_du_lvdsenc_start_gen2(struct rcar_du_lvdsenc *lvds, + LVDCR1_CHSTBY_GEN2(1) | LVDCR1_CHSTBY_GEN2(0) | + LVDCR1_CLKSTBY_GEN2); + ++ /* Enable LVDS operation and turn bias circuitry on. 
*/ ++ lvdcr0 |= LVDCR0_BEN | LVDCR0_LVEN; ++ rcar_lvds_write(lvds, LVDCR0, lvdcr0); ++ + /* + * Turn the PLL on, wait for the startup delay, and turn the output + * on. +@@ -95,7 +96,7 @@ static void rcar_du_lvdsenc_start_gen3(struct rcar_du_lvdsenc *lvds, + u32 lvdcr0; + u32 pllcr; + +- /* PLL clock configuration */ ++ /* Set the PLL clock configuration and LVDS mode. */ + if (freq < 42000) + pllcr = LVDPLLCR_PLLDIVCNT_42M; + else if (freq < 85000) +@@ -107,6 +108,9 @@ static void rcar_du_lvdsenc_start_gen3(struct rcar_du_lvdsenc *lvds, + + rcar_lvds_write(lvds, LVDPLLCR, pllcr); + ++ lvdcr0 = lvds->mode << LVDCR0_LVMD_SHIFT; ++ rcar_lvds_write(lvds, LVDCR0, lvdcr0); ++ + /* Turn all the channels on. */ + rcar_lvds_write(lvds, LVDCR1, + LVDCR1_CHSTBY_GEN3(3) | LVDCR1_CHSTBY_GEN3(2) | +@@ -117,7 +121,7 @@ static void rcar_du_lvdsenc_start_gen3(struct rcar_du_lvdsenc *lvds, + * Turn the PLL on, set it to LVDS normal mode, wait for the startup + * delay and turn the output on. + */ +- lvdcr0 = (lvds->mode << LVDCR0_LVMD_SHIFT) | LVDCR0_PLLON; ++ lvdcr0 |= LVDCR0_PLLON; + rcar_lvds_write(lvds, LVDCR0, lvdcr0); + + lvdcr0 |= LVDCR0_PWD; +diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c +index 1d9655576b6e..6bf2f8289847 100644 +--- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c ++++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c +@@ -262,7 +262,6 @@ static int rockchip_drm_gem_object_mmap(struct drm_gem_object *obj, + * VM_PFNMAP flag that was set by drm_gem_mmap_obj()/drm_gem_mmap(). + */ + vma->vm_flags &= ~VM_PFNMAP; +- vma->vm_pgoff = 0; + + if (rk_obj->pages) + ret = rockchip_drm_gem_object_mmap_iommu(obj, vma); +@@ -297,6 +296,12 @@ int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma) + if (ret) + return ret; + ++ /* ++ * Set vm_pgoff (used as a fake buffer offset by DRM) to 0 and map the ++ * whole buffer from the start. ++ */ ++ vma->vm_pgoff = 0; ++ + obj = vma->vm_private_data; + + return rockchip_drm_gem_object_mmap(obj, vma); +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.h b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.h +index 557a033fb610..8545488aa0cf 100644 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_msg.h ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_msg.h +@@ -135,17 +135,24 @@ + + #else + +-/* In the 32-bit version of this macro, we use "m" because there is no +- * more register left for bp ++/* ++ * In the 32-bit version of this macro, we store bp in a memory location ++ * because we've run out of registers. ++ * Now we can't reference that memory location once we've modified ++ * %esp or %ebp, so we first push it on the stack, just before we push ++ * %ebp, and then when we need it we read it from the stack where we ++ * just pushed it. 
+ */ + #define VMW_PORT_HB_OUT(cmd, in_ecx, in_si, in_di, \ + port_num, magic, bp, \ + eax, ebx, ecx, edx, si, di) \ + ({ \ +- asm volatile ("push %%ebp;" \ +- "mov %12, %%ebp;" \ ++ asm volatile ("push %12;" \ ++ "push %%ebp;" \ ++ "mov 0x04(%%esp), %%ebp;" \ + "rep outsb;" \ +- "pop %%ebp;" : \ ++ "pop %%ebp;" \ ++ "add $0x04, %%esp;" : \ + "=a"(eax), \ + "=b"(ebx), \ + "=c"(ecx), \ +@@ -167,10 +174,12 @@ + port_num, magic, bp, \ + eax, ebx, ecx, edx, si, di) \ + ({ \ +- asm volatile ("push %%ebp;" \ +- "mov %12, %%ebp;" \ ++ asm volatile ("push %12;" \ ++ "push %%ebp;" \ ++ "mov 0x04(%%esp), %%ebp;" \ + "rep insb;" \ +- "pop %%ebp" : \ ++ "pop %%ebp;" \ ++ "add $0x04, %%esp;" : \ + "=a"(eax), \ + "=b"(ebx), \ + "=c"(ecx), \ +diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c +index 3ec9eae831b8..f9413c0199f0 100644 +--- a/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c ++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c +@@ -453,7 +453,11 @@ vmw_sou_primary_plane_cleanup_fb(struct drm_plane *plane, + struct drm_plane_state *old_state) + { + struct vmw_plane_state *vps = vmw_plane_state_to_vps(old_state); ++ struct drm_crtc *crtc = plane->state->crtc ? ++ plane->state->crtc : old_state->crtc; + ++ if (vps->dmabuf) ++ vmw_dmabuf_unpin(vmw_priv(crtc->dev), vps->dmabuf, false); + vmw_dmabuf_unreference(&vps->dmabuf); + vps->dmabuf_size = 0; + +@@ -491,10 +495,17 @@ vmw_sou_primary_plane_prepare_fb(struct drm_plane *plane, + } + + size = new_state->crtc_w * new_state->crtc_h * 4; ++ dev_priv = vmw_priv(crtc->dev); + + if (vps->dmabuf) { +- if (vps->dmabuf_size == size) +- return 0; ++ if (vps->dmabuf_size == size) { ++ /* ++ * Note that this might temporarily up the pin-count ++ * to 2, until cleanup_fb() is called. ++ */ ++ return vmw_dmabuf_pin_in_vram(dev_priv, vps->dmabuf, ++ true); ++ } + + vmw_dmabuf_unreference(&vps->dmabuf); + vps->dmabuf_size = 0; +@@ -504,7 +515,6 @@ vmw_sou_primary_plane_prepare_fb(struct drm_plane *plane, + if (!vps->dmabuf) + return -ENOMEM; + +- dev_priv = vmw_priv(crtc->dev); + vmw_svga_enable(dev_priv); + + /* After we have alloced the backing store might not be able to +@@ -515,13 +525,18 @@ vmw_sou_primary_plane_prepare_fb(struct drm_plane *plane, + &vmw_vram_ne_placement, + false, &vmw_dmabuf_bo_free); + vmw_overlay_resume_all(dev_priv); +- +- if (ret != 0) ++ if (ret) { + vps->dmabuf = NULL; /* vmw_dmabuf_init frees on error */ +- else +- vps->dmabuf_size = size; ++ return ret; ++ } + +- return ret; ++ vps->dmabuf_size = size; ++ ++ /* ++ * TTM already thinks the buffer is pinned, but make sure the ++ * pin_count is upped. 
++ */ ++ return vmw_dmabuf_pin_in_vram(dev_priv, vps->dmabuf, true); + } + + +diff --git a/drivers/hwmon/nct6775.c b/drivers/hwmon/nct6775.c +index c219e43b8f02..f5f3f8cf57ea 100644 +--- a/drivers/hwmon/nct6775.c ++++ b/drivers/hwmon/nct6775.c +@@ -1469,7 +1469,7 @@ static void nct6775_update_pwm(struct device *dev) + duty_is_dc = data->REG_PWM_MODE[i] && + (nct6775_read_value(data, data->REG_PWM_MODE[i]) + & data->PWM_MODE_MASK[i]); +- data->pwm_mode[i] = duty_is_dc; ++ data->pwm_mode[i] = !duty_is_dc; + + fanmodecfg = nct6775_read_value(data, data->REG_FAN_MODE[i]); + for (j = 0; j < ARRAY_SIZE(data->REG_PWM); j++) { +@@ -2350,7 +2350,7 @@ show_pwm_mode(struct device *dev, struct device_attribute *attr, char *buf) + struct nct6775_data *data = nct6775_update_device(dev); + struct sensor_device_attribute *sattr = to_sensor_dev_attr(attr); + +- return sprintf(buf, "%d\n", !data->pwm_mode[sattr->index]); ++ return sprintf(buf, "%d\n", data->pwm_mode[sattr->index]); + } + + static ssize_t +@@ -2371,9 +2371,9 @@ store_pwm_mode(struct device *dev, struct device_attribute *attr, + if (val > 1) + return -EINVAL; + +- /* Setting DC mode is not supported for all chips/channels */ ++ /* Setting DC mode (0) is not supported for all chips/channels */ + if (data->REG_PWM_MODE[nr] == 0) { +- if (val) ++ if (!val) + return -EINVAL; + return count; + } +@@ -2382,7 +2382,7 @@ store_pwm_mode(struct device *dev, struct device_attribute *attr, + data->pwm_mode[nr] = val; + reg = nct6775_read_value(data, data->REG_PWM_MODE[nr]); + reg &= ~data->PWM_MODE_MASK[nr]; +- if (val) ++ if (!val) + reg |= data->PWM_MODE_MASK[nr]; + nct6775_write_value(data, data->REG_PWM_MODE[nr], reg); + mutex_unlock(&data->update_lock); +diff --git a/drivers/hwmon/pmbus/adm1275.c b/drivers/hwmon/pmbus/adm1275.c +index 00d6995af4c2..8a44e94d5679 100644 +--- a/drivers/hwmon/pmbus/adm1275.c ++++ b/drivers/hwmon/pmbus/adm1275.c +@@ -154,7 +154,7 @@ static int adm1275_read_word_data(struct i2c_client *client, int page, int reg) + const struct adm1275_data *data = to_adm1275_data(info); + int ret = 0; + +- if (page) ++ if (page > 0) + return -ENXIO; + + switch (reg) { +@@ -240,7 +240,7 @@ static int adm1275_write_word_data(struct i2c_client *client, int page, int reg, + const struct adm1275_data *data = to_adm1275_data(info); + int ret; + +- if (page) ++ if (page > 0) + return -ENXIO; + + switch (reg) { +diff --git a/drivers/hwmon/pmbus/max8688.c b/drivers/hwmon/pmbus/max8688.c +index dd4883a19045..e951f9b87abb 100644 +--- a/drivers/hwmon/pmbus/max8688.c ++++ b/drivers/hwmon/pmbus/max8688.c +@@ -45,7 +45,7 @@ static int max8688_read_word_data(struct i2c_client *client, int page, int reg) + { + int ret; + +- if (page) ++ if (page > 0) + return -ENXIO; + + switch (reg) { +diff --git a/drivers/hwtracing/coresight/coresight-cpu-debug.c b/drivers/hwtracing/coresight/coresight-cpu-debug.c +index 6ea62c62ff27..9cdb3fbc8c1f 100644 +--- a/drivers/hwtracing/coresight/coresight-cpu-debug.c ++++ b/drivers/hwtracing/coresight/coresight-cpu-debug.c +@@ -315,7 +315,7 @@ static void debug_dump_regs(struct debug_drvdata *drvdata) + } + + pc = debug_adjust_pc(drvdata); +- dev_emerg(dev, " EDPCSR: [<%p>] %pS\n", (void *)pc, (void *)pc); ++ dev_emerg(dev, " EDPCSR: [<%px>] %pS\n", (void *)pc, (void *)pc); + + if (drvdata->edcidsr_present) + dev_emerg(dev, " EDCIDSR: %08x\n", drvdata->edcidsr); +diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c +index 1a023e30488c..c1793313bb08 100644 +--- a/drivers/hwtracing/intel_th/core.c 
++++ b/drivers/hwtracing/intel_th/core.c +@@ -935,7 +935,7 @@ EXPORT_SYMBOL_GPL(intel_th_trace_disable); + int intel_th_set_output(struct intel_th_device *thdev, + unsigned int master) + { +- struct intel_th_device *hub = to_intel_th_device(thdev->dev.parent); ++ struct intel_th_device *hub = to_intel_th_hub(thdev); + struct intel_th_driver *hubdrv = to_intel_th_driver(hub->dev.driver); + + if (!hubdrv->set_output) +diff --git a/drivers/i2c/busses/i2c-mv64xxx.c b/drivers/i2c/busses/i2c-mv64xxx.c +index 440fe4a96e68..a5a95ea5b81a 100644 +--- a/drivers/i2c/busses/i2c-mv64xxx.c ++++ b/drivers/i2c/busses/i2c-mv64xxx.c +@@ -845,12 +845,16 @@ mv64xxx_of_config(struct mv64xxx_i2c_data *drv_data, + */ + if (of_device_is_compatible(np, "marvell,mv78230-i2c")) { + drv_data->offload_enabled = true; +- drv_data->errata_delay = true; ++ /* The delay is only needed in standard mode (100kHz) */ ++ if (bus_freq <= 100000) ++ drv_data->errata_delay = true; + } + + if (of_device_is_compatible(np, "marvell,mv78230-a0-i2c")) { + drv_data->offload_enabled = false; +- drv_data->errata_delay = true; ++ /* The delay is only needed in standard mode (100kHz) */ ++ if (bus_freq <= 100000) ++ drv_data->errata_delay = true; + } + + if (of_device_is_compatible(np, "allwinner,sun6i-a31-i2c")) +diff --git a/drivers/ide/ide-cd.c b/drivers/ide/ide-cd.c +index 7c3ed7c9af77..5613cc2d51fc 100644 +--- a/drivers/ide/ide-cd.c ++++ b/drivers/ide/ide-cd.c +@@ -1613,6 +1613,8 @@ static int idecd_open(struct block_device *bdev, fmode_t mode) + struct cdrom_info *info; + int rc = -ENXIO; + ++ check_disk_change(bdev); ++ + mutex_lock(&ide_cd_mutex); + info = ide_cd_get(bdev->bd_disk); + if (!info) +diff --git a/drivers/infiniband/core/multicast.c b/drivers/infiniband/core/multicast.c +index 45f2f095f793..4eb72ff539fc 100644 +--- a/drivers/infiniband/core/multicast.c ++++ b/drivers/infiniband/core/multicast.c +@@ -724,21 +724,19 @@ int ib_init_ah_from_mcmember(struct ib_device *device, u8 port_num, + { + int ret; + u16 gid_index; +- u8 p; +- +- if (rdma_protocol_roce(device, port_num)) { +- ret = ib_find_cached_gid_by_port(device, &rec->port_gid, +- gid_type, port_num, +- ndev, +- &gid_index); +- } else if (rdma_protocol_ib(device, port_num)) { +- ret = ib_find_cached_gid(device, &rec->port_gid, +- IB_GID_TYPE_IB, NULL, &p, +- &gid_index); +- } else { +- ret = -EINVAL; +- } + ++ /* GID table is not based on the netdevice for IB link layer, ++ * so ignore ndev during search. ++ */ ++ if (rdma_protocol_ib(device, port_num)) ++ ndev = NULL; ++ else if (!rdma_protocol_roce(device, port_num)) ++ return -EINVAL; ++ ++ ret = ib_find_cached_gid_by_port(device, &rec->port_gid, ++ gid_type, port_num, ++ ndev, ++ &gid_index); + if (ret) + return ret; + +diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c +index 9a4e899d94b3..2b6c9b516070 100644 +--- a/drivers/infiniband/core/umem.c ++++ b/drivers/infiniband/core/umem.c +@@ -119,7 +119,6 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr, + umem->length = size; + umem->address = addr; + umem->page_shift = PAGE_SHIFT; +- umem->pid = get_task_pid(current, PIDTYPE_PID); + /* + * We ask for writable memory if any of the following + * access flags are set. 
"Local write" and "remote write" +@@ -132,7 +131,6 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr, + IB_ACCESS_REMOTE_ATOMIC | IB_ACCESS_MW_BIND)); + + if (access & IB_ACCESS_ON_DEMAND) { +- put_pid(umem->pid); + ret = ib_umem_odp_get(context, umem, access); + if (ret) { + kfree(umem); +@@ -148,7 +146,6 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr, + + page_list = (struct page **) __get_free_page(GFP_KERNEL); + if (!page_list) { +- put_pid(umem->pid); + kfree(umem); + return ERR_PTR(-ENOMEM); + } +@@ -231,7 +228,6 @@ struct ib_umem *ib_umem_get(struct ib_ucontext *context, unsigned long addr, + if (ret < 0) { + if (need_release) + __ib_umem_release(context->device, umem, 0); +- put_pid(umem->pid); + kfree(umem); + } else + current->mm->pinned_vm = locked; +@@ -274,8 +270,7 @@ void ib_umem_release(struct ib_umem *umem) + + __ib_umem_release(umem->context->device, umem, 1); + +- task = get_pid_task(umem->pid, PIDTYPE_PID); +- put_pid(umem->pid); ++ task = get_pid_task(umem->context->tgid, PIDTYPE_PID); + if (!task) + goto out; + mm = get_task_mm(task); +diff --git a/drivers/infiniband/hw/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c +index e6a60fa59f2b..e6bdd0c1e80a 100644 +--- a/drivers/infiniband/hw/hfi1/chip.c ++++ b/drivers/infiniband/hw/hfi1/chip.c +@@ -5944,6 +5944,7 @@ static void is_sendctxt_err_int(struct hfi1_devdata *dd, + u64 status; + u32 sw_index; + int i = 0; ++ unsigned long irq_flags; + + sw_index = dd->hw_to_sw[hw_context]; + if (sw_index >= dd->num_send_contexts) { +@@ -5953,10 +5954,12 @@ static void is_sendctxt_err_int(struct hfi1_devdata *dd, + return; + } + sci = &dd->send_contexts[sw_index]; ++ spin_lock_irqsave(&dd->sc_lock, irq_flags); + sc = sci->sc; + if (!sc) { + dd_dev_err(dd, "%s: context %u(%u): no sc?\n", __func__, + sw_index, hw_context); ++ spin_unlock_irqrestore(&dd->sc_lock, irq_flags); + return; + } + +@@ -5978,6 +5981,7 @@ static void is_sendctxt_err_int(struct hfi1_devdata *dd, + */ + if (sc->type != SC_USER) + queue_work(dd->pport->hfi1_wq, &sc->halt_work); ++ spin_unlock_irqrestore(&dd->sc_lock, irq_flags); + + /* + * Update the counters for the corresponding status bits. 
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c +index 0881f7907848..c14ed9cc9c9e 100644 +--- a/drivers/infiniband/hw/mlx5/main.c ++++ b/drivers/infiniband/hw/mlx5/main.c +@@ -388,6 +388,9 @@ static int mlx5_query_port_roce(struct ib_device *device, u8 port_num, + if (err) + goto out; + ++ props->active_width = IB_WIDTH_4X; ++ props->active_speed = IB_SPEED_QDR; ++ + translate_eth_proto_oper(eth_prot_oper, &props->active_speed, + &props->active_width); + +diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c +index 45594091353c..7ef21fa2c3f0 100644 +--- a/drivers/infiniband/sw/rxe/rxe_verbs.c ++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c +@@ -1206,7 +1206,7 @@ int rxe_register_device(struct rxe_dev *rxe) + rxe->ndev->dev_addr); + dev->dev.dma_ops = &dma_virt_ops; + dma_coerce_mask_and_coherent(&dev->dev, +- dma_get_required_mask(dev->dev.parent)); ++ dma_get_required_mask(&dev->dev)); + + dev->uverbs_abi_ver = RXE_UVERBS_ABI_VERSION; + dev->uverbs_cmd_mask = BIT_ULL(IB_USER_VERBS_CMD_GET_CONTEXT) +diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c +index 74788fdeb773..8b591c192daf 100644 +--- a/drivers/iommu/amd_iommu.c ++++ b/drivers/iommu/amd_iommu.c +@@ -310,6 +310,8 @@ static struct iommu_dev_data *find_dev_data(u16 devid) + + if (dev_data == NULL) { + dev_data = alloc_dev_data(devid); ++ if (!dev_data) ++ return NULL; + + if (translation_pre_enabled(iommu)) + dev_data->defer_attach = true; +diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c +index f227d73e7bf6..f2832a10fcea 100644 +--- a/drivers/iommu/mtk_iommu.c ++++ b/drivers/iommu/mtk_iommu.c +@@ -60,7 +60,7 @@ + (((prot) & 0x3) << F_MMU_TF_PROTECT_SEL_SHIFT(data)) + + #define REG_MMU_IVRP_PADDR 0x114 +-#define F_MMU_IVRP_PA_SET(pa, ext) (((pa) >> 1) | ((!!(ext)) << 31)) ++ + #define REG_MMU_VLD_PA_RNG 0x118 + #define F_MMU_VLD_PA_RNG(EA, SA) (((EA) << 8) | (SA)) + +@@ -539,8 +539,13 @@ static int mtk_iommu_hw_init(const struct mtk_iommu_data *data) + F_INT_PRETETCH_TRANSATION_FIFO_FAULT; + writel_relaxed(regval, data->base + REG_MMU_INT_MAIN_CONTROL); + +- writel_relaxed(F_MMU_IVRP_PA_SET(data->protect_base, data->enable_4GB), +- data->base + REG_MMU_IVRP_PADDR); ++ if (data->m4u_plat == M4U_MT8173) ++ regval = (data->protect_base >> 1) | (data->enable_4GB << 31); ++ else ++ regval = lower_32_bits(data->protect_base) | ++ upper_32_bits(data->protect_base); ++ writel_relaxed(regval, data->base + REG_MMU_IVRP_PADDR); ++ + if (data->enable_4GB && data->m4u_plat != M4U_MT8173) { + /* + * If 4GB mode is enabled, the validate PA range is from +@@ -695,6 +700,7 @@ static int __maybe_unused mtk_iommu_suspend(struct device *dev) + reg->ctrl_reg = readl_relaxed(base + REG_MMU_CTRL_REG); + reg->int_control0 = readl_relaxed(base + REG_MMU_INT_CONTROL0); + reg->int_main_control = readl_relaxed(base + REG_MMU_INT_MAIN_CONTROL); ++ reg->ivrp_paddr = readl_relaxed(base + REG_MMU_IVRP_PADDR); + clk_disable_unprepare(data->bclk); + return 0; + } +@@ -717,8 +723,7 @@ static int __maybe_unused mtk_iommu_resume(struct device *dev) + writel_relaxed(reg->ctrl_reg, base + REG_MMU_CTRL_REG); + writel_relaxed(reg->int_control0, base + REG_MMU_INT_CONTROL0); + writel_relaxed(reg->int_main_control, base + REG_MMU_INT_MAIN_CONTROL); +- writel_relaxed(F_MMU_IVRP_PA_SET(data->protect_base, data->enable_4GB), +- base + REG_MMU_IVRP_PADDR); ++ writel_relaxed(reg->ivrp_paddr, base + REG_MMU_IVRP_PADDR); + if (data->m4u_dom) + 
writel(data->m4u_dom->cfg.arm_v7s_cfg.ttbr[0], + base + REG_MMU_PT_BASE_ADDR); +diff --git a/drivers/iommu/mtk_iommu.h b/drivers/iommu/mtk_iommu.h +index b4451a1c7c2f..778498b8633f 100644 +--- a/drivers/iommu/mtk_iommu.h ++++ b/drivers/iommu/mtk_iommu.h +@@ -32,6 +32,7 @@ struct mtk_iommu_suspend_reg { + u32 ctrl_reg; + u32 int_control0; + u32 int_main_control; ++ u32 ivrp_paddr; + }; + + enum mtk_iommu_plat { +diff --git a/drivers/macintosh/rack-meter.c b/drivers/macintosh/rack-meter.c +index 910b5b6f96b1..eb65b6e78d57 100644 +--- a/drivers/macintosh/rack-meter.c ++++ b/drivers/macintosh/rack-meter.c +@@ -154,8 +154,8 @@ static void rackmeter_do_pause(struct rackmeter *rm, int pause) + DBDMA_DO_STOP(rm->dma_regs); + return; + } +- memset(rdma->buf1, 0, ARRAY_SIZE(rdma->buf1)); +- memset(rdma->buf2, 0, ARRAY_SIZE(rdma->buf2)); ++ memset(rdma->buf1, 0, sizeof(rdma->buf1)); ++ memset(rdma->buf2, 0, sizeof(rdma->buf2)); + + rm->dma_buf_v->mark = 0; + +diff --git a/drivers/md/bcache/bcache.h b/drivers/md/bcache/bcache.h +index 12e5197f186c..b5ddb848cd31 100644 +--- a/drivers/md/bcache/bcache.h ++++ b/drivers/md/bcache/bcache.h +@@ -258,10 +258,11 @@ struct bcache_device { + struct gendisk *disk; + + unsigned long flags; +-#define BCACHE_DEV_CLOSING 0 +-#define BCACHE_DEV_DETACHING 1 +-#define BCACHE_DEV_UNLINK_DONE 2 +- ++#define BCACHE_DEV_CLOSING 0 ++#define BCACHE_DEV_DETACHING 1 ++#define BCACHE_DEV_UNLINK_DONE 2 ++#define BCACHE_DEV_WB_RUNNING 3 ++#define BCACHE_DEV_RATE_DW_RUNNING 4 + unsigned nr_stripes; + unsigned stripe_size; + atomic_t *stripe_sectors_dirty; +diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c +index f2273143b3cb..432088adc497 100644 +--- a/drivers/md/bcache/super.c ++++ b/drivers/md/bcache/super.c +@@ -899,6 +899,31 @@ void bch_cached_dev_run(struct cached_dev *dc) + pr_debug("error creating sysfs link"); + } + ++/* ++ * If BCACHE_DEV_RATE_DW_RUNNING is set, it means the routine of the delayed ++ * work dc->writeback_rate_update is running. Wait until the routine ++ * quits (BCACHE_DEV_RATE_DW_RUNNING is clear), then continue to ++ * cancel it. If BCACHE_DEV_RATE_DW_RUNNING is not clear after time_out ++ * seconds, give up waiting here and continue to cancel it too. 
++ */ ++static void cancel_writeback_rate_update_dwork(struct cached_dev *dc) ++{ ++ int time_out = WRITEBACK_RATE_UPDATE_SECS_MAX * HZ; ++ ++ do { ++ if (!test_bit(BCACHE_DEV_RATE_DW_RUNNING, ++ &dc->disk.flags)) ++ break; ++ time_out--; ++ schedule_timeout_interruptible(1); ++ } while (time_out > 0); ++ ++ if (time_out == 0) ++ pr_warn("give up waiting for dc->writeback_rate_update to quit"); ++ ++ cancel_delayed_work_sync(&dc->writeback_rate_update); ++} ++ + static void cached_dev_detach_finish(struct work_struct *w) + { + struct cached_dev *dc = container_of(w, struct cached_dev, detach); +@@ -911,7 +936,9 @@ static void cached_dev_detach_finish(struct work_struct *w) + + mutex_lock(&bch_register_lock); + +- cancel_delayed_work_sync(&dc->writeback_rate_update); ++ if (test_and_clear_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)) ++ cancel_writeback_rate_update_dwork(dc); ++ + if (!IS_ERR_OR_NULL(dc->writeback_thread)) { + kthread_stop(dc->writeback_thread); + dc->writeback_thread = NULL; +@@ -954,6 +981,7 @@ void bch_cached_dev_detach(struct cached_dev *dc) + closure_get(&dc->disk.cl); + + bch_writeback_queue(dc); ++ + cached_dev_put(dc); + } + +@@ -1065,7 +1093,6 @@ int bch_cached_dev_attach(struct cached_dev *dc, struct cache_set *c, + if (BDEV_STATE(&dc->sb) == BDEV_STATE_DIRTY) { + bch_sectors_dirty_init(&dc->disk); + atomic_set(&dc->has_dirty, 1); +- refcount_inc(&dc->count); + bch_writeback_queue(dc); + } + +@@ -1093,14 +1120,16 @@ static void cached_dev_free(struct closure *cl) + { + struct cached_dev *dc = container_of(cl, struct cached_dev, disk.cl); + +- cancel_delayed_work_sync(&dc->writeback_rate_update); ++ mutex_lock(&bch_register_lock); ++ ++ if (test_and_clear_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)) ++ cancel_writeback_rate_update_dwork(dc); ++ + if (!IS_ERR_OR_NULL(dc->writeback_thread)) + kthread_stop(dc->writeback_thread); + if (dc->writeback_write_wq) + destroy_workqueue(dc->writeback_write_wq); + +- mutex_lock(&bch_register_lock); +- + if (atomic_read(&dc->running)) + bd_unlink_disk_holder(dc->bdev, dc->disk.disk); + bcache_device_free(&dc->disk); +diff --git a/drivers/md/bcache/sysfs.c b/drivers/md/bcache/sysfs.c +index 78cd7bd50fdd..55673508628f 100644 +--- a/drivers/md/bcache/sysfs.c ++++ b/drivers/md/bcache/sysfs.c +@@ -309,7 +309,8 @@ STORE(bch_cached_dev) + bch_writeback_queue(dc); + + if (attr == &sysfs_writeback_percent) +- schedule_delayed_work(&dc->writeback_rate_update, ++ if (!test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)) ++ schedule_delayed_work(&dc->writeback_rate_update, + dc->writeback_rate_update_seconds * HZ); + + mutex_unlock(&bch_register_lock); +diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c +index f1d2fc15abcc..8f98ef1038d3 100644 +--- a/drivers/md/bcache/writeback.c ++++ b/drivers/md/bcache/writeback.c +@@ -115,6 +115,21 @@ static void update_writeback_rate(struct work_struct *work) + struct cached_dev, + writeback_rate_update); + ++ /* ++ * should check BCACHE_DEV_RATE_DW_RUNNING before calling ++ * cancel_delayed_work_sync(). 
++ */ ++ set_bit(BCACHE_DEV_RATE_DW_RUNNING, &dc->disk.flags); ++ /* paired with where BCACHE_DEV_RATE_DW_RUNNING is tested */ ++ smp_mb(); ++ ++ if (!test_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)) { ++ clear_bit(BCACHE_DEV_RATE_DW_RUNNING, &dc->disk.flags); ++ /* paired with where BCACHE_DEV_RATE_DW_RUNNING is tested */ ++ smp_mb(); ++ return; ++ } ++ + down_read(&dc->writeback_lock); + + if (atomic_read(&dc->has_dirty) && +@@ -123,8 +138,18 @@ static void update_writeback_rate(struct work_struct *work) + + up_read(&dc->writeback_lock); + +- schedule_delayed_work(&dc->writeback_rate_update, ++ if (test_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)) { ++ schedule_delayed_work(&dc->writeback_rate_update, + dc->writeback_rate_update_seconds * HZ); ++ } ++ ++ /* ++ * should check BCACHE_DEV_RATE_DW_RUNNING before calling ++ * cancel_delayed_work_sync(). ++ */ ++ clear_bit(BCACHE_DEV_RATE_DW_RUNNING, &dc->disk.flags); ++ /* paired with where BCACHE_DEV_RATE_DW_RUNNING is tested */ ++ smp_mb(); + } + + static unsigned writeback_delay(struct cached_dev *dc, unsigned sectors) +@@ -565,14 +590,20 @@ static int bch_writeback_thread(void *arg) + while (!kthread_should_stop()) { + down_write(&dc->writeback_lock); + set_current_state(TASK_INTERRUPTIBLE); +- if (!atomic_read(&dc->has_dirty) || +- (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) && +- !dc->writeback_running)) { ++ /* ++ * If the bcache device is detaching, skip here and continue ++ * to perform writeback. Otherwise, if there is no dirty data ++ * on the cache, or there is dirty data but writeback is ++ * disabled, the writeback thread should sleep here and wait ++ * for others to wake it up. ++ */ ++ if (!test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags) && ++ (!atomic_read(&dc->has_dirty) || !dc->writeback_running)) { + up_write(&dc->writeback_lock); + + if (kthread_should_stop()) { + set_current_state(TASK_RUNNING); +- return 0; ++ break; + } + + schedule(); +@@ -585,9 +616,16 @@ static int bch_writeback_thread(void *arg) + if (searched_full_index && + RB_EMPTY_ROOT(&dc->writeback_keys.keys)) { + atomic_set(&dc->has_dirty, 0); +- cached_dev_put(dc); + SET_BDEV_STATE(&dc->sb, BDEV_STATE_CLEAN); + bch_write_bdev_super(dc, NULL); ++ /* ++ * If the bcache device is detaching via the sysfs interface, ++ * the writeback thread should stop once there is no dirty ++ * data on the cache. The BCACHE_DEV_DETACHING flag is set in ++ * bch_cached_dev_detach(). 
++ */ ++ if (test_bit(BCACHE_DEV_DETACHING, &dc->disk.flags)) ++ break; + } + + up_write(&dc->writeback_lock); +@@ -606,6 +644,9 @@ static int bch_writeback_thread(void *arg) + } + } + ++ dc->writeback_thread = NULL; ++ cached_dev_put(dc); ++ + return 0; + } + +@@ -659,6 +700,7 @@ void bch_cached_dev_writeback_init(struct cached_dev *dc) + dc->writeback_rate_p_term_inverse = 40; + dc->writeback_rate_i_term_inverse = 10000; + ++ WARN_ON(test_and_clear_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)); + INIT_DELAYED_WORK(&dc->writeback_rate_update, update_writeback_rate); + } + +@@ -669,11 +711,15 @@ int bch_cached_dev_writeback_start(struct cached_dev *dc) + if (!dc->writeback_write_wq) + return -ENOMEM; + ++ cached_dev_get(dc); + dc->writeback_thread = kthread_create(bch_writeback_thread, dc, + "bcache_writeback"); +- if (IS_ERR(dc->writeback_thread)) ++ if (IS_ERR(dc->writeback_thread)) { ++ cached_dev_put(dc); + return PTR_ERR(dc->writeback_thread); ++ } + ++ WARN_ON(test_and_set_bit(BCACHE_DEV_WB_RUNNING, &dc->disk.flags)); + schedule_delayed_work(&dc->writeback_rate_update, + dc->writeback_rate_update_seconds * HZ); + +diff --git a/drivers/md/bcache/writeback.h b/drivers/md/bcache/writeback.h +index 587b25599856..0bba8f1c6cdf 100644 +--- a/drivers/md/bcache/writeback.h ++++ b/drivers/md/bcache/writeback.h +@@ -105,8 +105,6 @@ static inline void bch_writeback_add(struct cached_dev *dc) + { + if (!atomic_read(&dc->has_dirty) && + !atomic_xchg(&dc->has_dirty, 1)) { +- refcount_inc(&dc->count); +- + if (BDEV_STATE(&dc->sb) != BDEV_STATE_DIRTY) { + SET_BDEV_STATE(&dc->sb, BDEV_STATE_DIRTY); + /* XXX: should do this synchronously */ +diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h +index 4f015da78f28..4949b8d5a748 100644 +--- a/drivers/misc/cxl/cxl.h ++++ b/drivers/misc/cxl/cxl.h +@@ -369,6 +369,9 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0}; + #define CXL_PSL_TFC_An_AE (1ull << (63-30)) /* Restart PSL with address error */ + #define CXL_PSL_TFC_An_R (1ull << (63-31)) /* Restart PSL transaction */ + ++/****** CXL_PSL_DEBUG *****************************************************/ ++#define CXL_PSL_DEBUG_CDC (1ull << (63-27)) /* Coherent Data cache support */ ++ + /****** CXL_XSL9_IERAT_ERAT - CAIA 2 **********************************/ + #define CXL_XSL9_IERAT_MLPID (1ull << (63-0)) /* Match LPID */ + #define CXL_XSL9_IERAT_MPID (1ull << (63-1)) /* Match PID */ +@@ -669,6 +672,7 @@ struct cxl_native { + irq_hw_number_t err_hwirq; + unsigned int err_virq; + u64 ps_off; ++ bool no_data_cache; /* set if no data cache on the card */ + const struct cxl_service_layer_ops *sl_ops; + }; + +diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c +index 1b3d7c65ea3f..98f867fcef24 100644 +--- a/drivers/misc/cxl/native.c ++++ b/drivers/misc/cxl/native.c +@@ -353,8 +353,17 @@ int cxl_data_cache_flush(struct cxl *adapter) + u64 reg; + unsigned long timeout = jiffies + (HZ * CXL_TIMEOUT); + +- pr_devel("Flushing data cache\n"); ++ /* ++ * Do a datacache flush only if the datacache is available. ++ * In the case of PSL9D the datacache is absent, hence the ++ * flush operation would time out. ++ */ ++ if (adapter->native->no_data_cache) { ++ pr_devel("No PSL data cache. 
Ignoring cache flush req.\n"); ++ return 0; ++ } + ++ pr_devel("Flushing data cache\n"); + reg = cxl_p1_read(adapter, CXL_PSL_Control); + reg |= CXL_PSL_Control_Fr; + cxl_p1_write(adapter, CXL_PSL_Control, reg); +diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c +index 758842f65a1b..61de57292e40 100644 +--- a/drivers/misc/cxl/pci.c ++++ b/drivers/misc/cxl/pci.c +@@ -456,6 +456,7 @@ static int init_implementation_adapter_regs_psl9(struct cxl *adapter, + u64 chipid; + u32 phb_index; + u64 capp_unit_id; ++ u64 psl_debug; + int rc; + + rc = cxl_calc_capp_routing(dev, &chipid, &phb_index, &capp_unit_id); +@@ -506,6 +507,16 @@ static int init_implementation_adapter_regs_psl9(struct cxl *adapter, + } else + cxl_p1_write(adapter, CXL_PSL9_DEBUG, 0x4000000000000000ULL); + ++ /* ++ * Check if the PSL has a data-cache. We need to flush the adapter ++ * datacache when it is about to be removed. ++ */ ++ psl_debug = cxl_p1_read(adapter, CXL_PSL9_DEBUG); ++ if (psl_debug & CXL_PSL_DEBUG_CDC) { ++ dev_dbg(&dev->dev, "No data-cache present\n"); ++ adapter->native->no_data_cache = true; ++ } ++ + return 0; + } + +@@ -1449,10 +1460,8 @@ int cxl_pci_reset(struct cxl *adapter) + + /* + * The adapter is about to be reset, so ignore errors. +- * Not supported on P9 DD1 + */ +- if ((cxl_is_power8()) || (!(cxl_is_power9_dd1()))) +- cxl_data_cache_flush(adapter); ++ cxl_data_cache_flush(adapter); + + /* pcie_warm_reset requests a fundamental pci reset which includes a + * PERST assert/deassert. PERST triggers a loading of the image +@@ -1936,10 +1945,8 @@ static void cxl_pci_remove_adapter(struct cxl *adapter) + + /* + * Flush adapter datacache as its about to be removed. +- * Not supported on P9 DD1. + */ +- if ((cxl_is_power8()) || (!(cxl_is_power9_dd1()))) +- cxl_data_cache_flush(adapter); ++ cxl_data_cache_flush(adapter); + + cxl_deconfigure_adapter(adapter); + +diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c +index 9c6f639d8a57..81501ebd5b26 100644 +--- a/drivers/mmc/core/block.c ++++ b/drivers/mmc/core/block.c +@@ -2492,7 +2492,7 @@ static long mmc_rpmb_ioctl(struct file *filp, unsigned int cmd, + break; + } + +- return 0; ++ return ret; + } + + #ifdef CONFIG_COMPAT +diff --git a/drivers/mmc/host/sdhci-iproc.c b/drivers/mmc/host/sdhci-iproc.c +index 61666d269771..0cfbdb3ab68a 100644 +--- a/drivers/mmc/host/sdhci-iproc.c ++++ b/drivers/mmc/host/sdhci-iproc.c +@@ -33,6 +33,8 @@ struct sdhci_iproc_host { + const struct sdhci_iproc_data *data; + u32 shadow_cmd; + u32 shadow_blk; ++ bool is_cmd_shadowed; ++ bool is_blk_shadowed; + }; + + #define REG_OFFSET_IN_BITS(reg) ((reg) << 3 & 0x18) +@@ -48,8 +50,22 @@ static inline u32 sdhci_iproc_readl(struct sdhci_host *host, int reg) + + static u16 sdhci_iproc_readw(struct sdhci_host *host, int reg) + { +- u32 val = sdhci_iproc_readl(host, (reg & ~3)); +- u16 word = val >> REG_OFFSET_IN_BITS(reg) & 0xffff; ++ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host); ++ struct sdhci_iproc_host *iproc_host = sdhci_pltfm_priv(pltfm_host); ++ u32 val; ++ u16 word; ++ ++ if ((reg == SDHCI_TRANSFER_MODE) && iproc_host->is_cmd_shadowed) { ++ /* Get the saved transfer mode */ ++ val = iproc_host->shadow_cmd; ++ } else if ((reg == SDHCI_BLOCK_SIZE || reg == SDHCI_BLOCK_COUNT) && ++ iproc_host->is_blk_shadowed) { ++ /* Get the saved block info */ ++ val = iproc_host->shadow_blk; ++ } else { ++ val = sdhci_iproc_readl(host, (reg & ~3)); ++ } ++ word = val >> REG_OFFSET_IN_BITS(reg) & 0xffff; + return word; + } + +@@ -105,13 +121,15 @@ static void 
sdhci_iproc_writew(struct sdhci_host *host, u16 val, int reg) + + if (reg == SDHCI_COMMAND) { + /* Write the block now as we are issuing a command */ +- if (iproc_host->shadow_blk != 0) { ++ if (iproc_host->is_blk_shadowed) { + sdhci_iproc_writel(host, iproc_host->shadow_blk, + SDHCI_BLOCK_SIZE); +- iproc_host->shadow_blk = 0; ++ iproc_host->is_blk_shadowed = false; + } + oldval = iproc_host->shadow_cmd; +- } else if (reg == SDHCI_BLOCK_SIZE || reg == SDHCI_BLOCK_COUNT) { ++ iproc_host->is_cmd_shadowed = false; ++ } else if ((reg == SDHCI_BLOCK_SIZE || reg == SDHCI_BLOCK_COUNT) && ++ iproc_host->is_blk_shadowed) { + /* Block size and count are stored in shadow reg */ + oldval = iproc_host->shadow_blk; + } else { +@@ -123,9 +141,11 @@ static void sdhci_iproc_writew(struct sdhci_host *host, u16 val, int reg) + if (reg == SDHCI_TRANSFER_MODE) { + /* Save the transfer mode until the command is issued */ + iproc_host->shadow_cmd = newval; ++ iproc_host->is_cmd_shadowed = true; + } else if (reg == SDHCI_BLOCK_SIZE || reg == SDHCI_BLOCK_COUNT) { + /* Save the block info until the command is issued */ + iproc_host->shadow_blk = newval; ++ iproc_host->is_blk_shadowed = true; + } else { + /* Command or other regular 32-bit write */ + sdhci_iproc_writel(host, newval, reg & ~3); +@@ -166,7 +186,7 @@ static const struct sdhci_ops sdhci_iproc_32only_ops = { + + static const struct sdhci_pltfm_data sdhci_iproc_cygnus_pltfm_data = { + .quirks = SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK, +- .quirks2 = SDHCI_QUIRK2_ACMD23_BROKEN, ++ .quirks2 = SDHCI_QUIRK2_ACMD23_BROKEN | SDHCI_QUIRK2_HOST_OFF_CARD_ON, + .ops = &sdhci_iproc_32only_ops, + }; + +@@ -206,7 +226,6 @@ static const struct sdhci_iproc_data iproc_data = { + .caps1 = SDHCI_DRIVER_TYPE_C | + SDHCI_DRIVER_TYPE_D | + SDHCI_SUPPORT_DDR50, +- .mmc_caps = MMC_CAP_1_8V_DDR, + }; + + static const struct sdhci_pltfm_data sdhci_bcm2835_pltfm_data = { +diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c +index 8eef9fb6b1fe..ad8195b0d161 100644 +--- a/drivers/net/ethernet/broadcom/bgmac.c ++++ b/drivers/net/ethernet/broadcom/bgmac.c +@@ -533,7 +533,8 @@ static void bgmac_dma_tx_ring_free(struct bgmac *bgmac, + int i; + + for (i = 0; i < BGMAC_TX_RING_SLOTS; i++) { +- int len = dma_desc[i].ctl1 & BGMAC_DESC_CTL1_LEN; ++ u32 ctl1 = le32_to_cpu(dma_desc[i].ctl1); ++ unsigned int len = ctl1 & BGMAC_DESC_CTL1_LEN; + + slot = &ring->slots[i]; + dev_kfree_skb(slot->skb); +diff --git a/drivers/net/ethernet/broadcom/bgmac.h b/drivers/net/ethernet/broadcom/bgmac.h +index 4040d846da8e..40d02fec2747 100644 +--- a/drivers/net/ethernet/broadcom/bgmac.h ++++ b/drivers/net/ethernet/broadcom/bgmac.h +@@ -479,9 +479,9 @@ struct bgmac_rx_header { + struct bgmac { + union { + struct { +- void *base; +- void *idm_base; +- void *nicpm_base; ++ void __iomem *base; ++ void __iomem *idm_base; ++ void __iomem *nicpm_base; + } plat; + struct { + struct bcma_device *core; +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +index 9442605f4fd4..0b71d3b44933 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c +@@ -2552,16 +2552,20 @@ static int bnxt_reset(struct net_device *dev, u32 *flags) + return -EOPNOTSUPP; + + rc = bnxt_firmware_reset(dev, BNXT_FW_RESET_CHIP); +- if (!rc) ++ if (!rc) { + netdev_info(dev, "Reset request successful. 
Reload driver to complete reset\n"); ++ *flags = 0; ++ } + } else if (*flags == ETH_RESET_AP) { + /* This feature is not supported in older firmware versions */ + if (bp->hwrm_spec_code < 0x10803) + return -EOPNOTSUPP; + + rc = bnxt_firmware_reset(dev, BNXT_FW_RESET_AP); +- if (!rc) ++ if (!rc) { + netdev_info(dev, "Reset Application Processor request successful.\n"); ++ *flags = 0; ++ } + } else { + rc = -EINVAL; + } +diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c +index 65c2cee35766..9d8aa96044d3 100644 +--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c ++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_tc.c +@@ -992,8 +992,10 @@ static int bnxt_tc_get_decap_handle(struct bnxt *bp, struct bnxt_tc_flow *flow, + + /* Check if there's another flow using the same tunnel decap. + * If not, add this tunnel to the table and resolve the other +- * tunnel header fileds ++ * tunnel header fields. Ignore src_port in the tunnel_key, ++ * since it is not required for decap filters. + */ ++ decap_key->tp_src = 0; + decap_node = bnxt_tc_get_tunnel_node(bp, &tc_info->decap_table, + &tc_info->decap_ht_params, + decap_key); +diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c +index 61022b5f6743..57dcb957f27c 100644 +--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c ++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c +@@ -833,8 +833,6 @@ static int setup_fw_sge_queues(struct adapter *adap) + + err = t4_sge_alloc_rxq(adap, &s->fw_evtq, true, adap->port[0], + adap->msi_idx, NULL, fwevtq_handler, NULL, -1); +- if (err) +- t4_free_sge_resources(adap); + return err; + } + +@@ -5474,6 +5472,13 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent) + if (err) + goto out_free_dev; + ++ err = setup_fw_sge_queues(adapter); ++ if (err) { ++ dev_err(adapter->pdev_dev, ++ "FW sge queue allocation failed, err %d", err); ++ goto out_free_dev; ++ } ++ + /* + * The card is now ready to go. 
If any errors occur during device + * registration we do not fail the whole card but rather proceed only +@@ -5522,10 +5527,10 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent) + cxgb4_ptp_init(adapter); + + print_adapter_info(adapter); +- setup_fw_sge_queues(adapter); + return 0; + + out_free_dev: ++ t4_free_sge_resources(adapter); + free_some_resources(adapter); + if (adapter->flags & USING_MSIX) + free_msix_info(adapter); +diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c +index 6b5fea4532f3..2d827140a475 100644 +--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c ++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_uld.c +@@ -342,6 +342,7 @@ static void free_queues_uld(struct adapter *adap, unsigned int uld_type) + { + struct sge_uld_rxq_info *rxq_info = adap->sge.uld_rxq_info[uld_type]; + ++ adap->sge.uld_rxq_info[uld_type] = NULL; + kfree(rxq_info->rspq_id); + kfree(rxq_info->uldrxq); + kfree(rxq_info); +diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c +index f202ba72a811..b91109d967fa 100644 +--- a/drivers/net/ethernet/cisco/enic/enic_main.c ++++ b/drivers/net/ethernet/cisco/enic/enic_main.c +@@ -1898,6 +1898,8 @@ static int enic_open(struct net_device *netdev) + } + + for (i = 0; i < enic->rq_count; i++) { ++ /* enable rq before updating rq desc */ ++ vnic_rq_enable(&enic->rq[i]); + vnic_rq_fill(&enic->rq[i], enic_rq_alloc_buf); + /* Need at least one buffer on ring to get going */ + if (vnic_rq_desc_used(&enic->rq[i]) == 0) { +@@ -1909,8 +1911,6 @@ static int enic_open(struct net_device *netdev) + + for (i = 0; i < enic->wq_count; i++) + vnic_wq_enable(&enic->wq[i]); +- for (i = 0; i < enic->rq_count; i++) +- vnic_rq_enable(&enic->rq[i]); + + if (!enic_is_dynamic(enic) && !enic_is_sriov_vf(enic)) + enic_dev_add_station_addr(enic); +@@ -1936,8 +1936,12 @@ static int enic_open(struct net_device *netdev) + return 0; + + err_out_free_rq: +- for (i = 0; i < enic->rq_count; i++) ++ for (i = 0; i < enic->rq_count; i++) { ++ err = vnic_rq_disable(&enic->rq[i]); ++ if (err) ++ return err; + vnic_rq_clean(&enic->rq[i], enic_free_rq_buf); ++ } + enic_dev_notify_unset(enic); + err_out_free_intr: + enic_unset_affinity_hint(enic); +diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +index e4ec32a9ca15..3615e5f148dd 100644 +--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c ++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c +@@ -1916,8 +1916,10 @@ static int skb_to_sg_fd(struct dpaa_priv *priv, + goto csum_failed; + } + ++ /* SGT[0] is used by the linear part */ + sgt = (struct qm_sg_entry *)(sgt_buf + priv->tx_headroom); +- qm_sg_entry_set_len(&sgt[0], skb_headlen(skb)); ++ frag_len = skb_headlen(skb); ++ qm_sg_entry_set_len(&sgt[0], frag_len); + sgt[0].bpid = FSL_DPAA_BPID_INV; + sgt[0].offset = 0; + addr = dma_map_single(dev, skb->data, +@@ -1930,9 +1932,9 @@ static int skb_to_sg_fd(struct dpaa_priv *priv, + qm_sg_entry_set64(&sgt[0], addr); + + /* populate the rest of SGT entries */ +- frag = &skb_shinfo(skb)->frags[0]; +- frag_len = frag->size; +- for (i = 1; i <= nr_frags; i++, frag++) { ++ for (i = 0; i < nr_frags; i++) { ++ frag = &skb_shinfo(skb)->frags[i]; ++ frag_len = frag->size; + WARN_ON(!skb_frag_page(frag)); + addr = skb_frag_dma_map(dev, frag, 0, + frag_len, dma_dir); +@@ -1942,15 +1944,16 @@ static int skb_to_sg_fd(struct dpaa_priv *priv, + goto sg_map_failed; + } + +- 
qm_sg_entry_set_len(&sgt[i], frag_len); +- sgt[i].bpid = FSL_DPAA_BPID_INV; +- sgt[i].offset = 0; ++ qm_sg_entry_set_len(&sgt[i + 1], frag_len); ++ sgt[i + 1].bpid = FSL_DPAA_BPID_INV; ++ sgt[i + 1].offset = 0; + + /* keep the offset in the address */ +- qm_sg_entry_set64(&sgt[i], addr); +- frag_len = frag->size; ++ qm_sg_entry_set64(&sgt[i + 1], addr); + } +- qm_sg_entry_set_f(&sgt[i - 1], frag_len); ++ ++ /* Set the final bit in the last used entry of the SGT */ ++ qm_sg_entry_set_f(&sgt[nr_frags], frag_len); + + qm_fd_set_sg(fd, priv->tx_headroom, skb->len); + +diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c +index faea674094b9..85306d1b2acf 100644 +--- a/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c ++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_ethtool.c +@@ -211,7 +211,7 @@ static int dpaa_set_pauseparam(struct net_device *net_dev, + if (epause->rx_pause) + newadv = ADVERTISED_Pause | ADVERTISED_Asym_Pause; + if (epause->tx_pause) +- newadv |= ADVERTISED_Asym_Pause; ++ newadv ^= ADVERTISED_Asym_Pause; + + oldadv = phydev->advertising & + (ADVERTISED_Pause | ADVERTISED_Asym_Pause); +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +index 601b6295d3f8..9f6a6a1640d6 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c +@@ -747,7 +747,7 @@ static void hns3_set_txbd_baseinfo(u16 *bdtp_fe_sc_vld_ra_ri, int frag_end) + { + /* Config bd buffer end */ + hnae_set_field(*bdtp_fe_sc_vld_ra_ri, HNS3_TXD_BDTYPE_M, +- HNS3_TXD_BDTYPE_M, 0); ++ HNS3_TXD_BDTYPE_S, 0); + hnae_set_bit(*bdtp_fe_sc_vld_ra_ri, HNS3_TXD_FE_B, !!frag_end); + hnae_set_bit(*bdtp_fe_sc_vld_ra_ri, HNS3_TXD_VLD_B, 1); + hnae_set_field(*bdtp_fe_sc_vld_ra_ri, HNS3_TXD_SC_M, HNS3_TXD_SC_S, 0); +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c +index b034c7f24eda..a1e53c671944 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_ethtool.c +@@ -698,7 +698,7 @@ static u32 hns3_get_rss_key_size(struct net_device *netdev) + + if (!h->ae_algo || !h->ae_algo->ops || + !h->ae_algo->ops->get_rss_key_size) +- return -EOPNOTSUPP; ++ return 0; + + return h->ae_algo->ops->get_rss_key_size(h); + } +@@ -709,7 +709,7 @@ static u32 hns3_get_rss_indir_size(struct net_device *netdev) + + if (!h->ae_algo || !h->ae_algo->ops || + !h->ae_algo->ops->get_rss_indir_size) +- return -EOPNOTSUPP; ++ return 0; + + return h->ae_algo->ops->get_rss_indir_size(h); + } +diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c +index 1b3cc8bb0705..fd8e6937ee00 100644 +--- a/drivers/net/ethernet/ibm/ibmvnic.c ++++ b/drivers/net/ethernet/ibm/ibmvnic.c +@@ -812,8 +812,6 @@ static void release_resources(struct ibmvnic_adapter *adapter) + release_tx_pools(adapter); + release_rx_pools(adapter); + +- release_stats_token(adapter); +- release_stats_buffers(adapter); + release_error_buffers(adapter); + + if (adapter->napi) { +@@ -953,14 +951,6 @@ static int init_resources(struct ibmvnic_adapter *adapter) + if (rc) + return rc; + +- rc = init_stats_buffers(adapter); +- if (rc) +- return rc; +- +- rc = init_stats_token(adapter); +- if (rc) +- return rc; +- + adapter->vpd = kzalloc(sizeof(*adapter->vpd), GFP_KERNEL); + if (!adapter->vpd) + return -ENOMEM; +@@ -1699,12 +1689,14 @@ static int do_reset(struct ibmvnic_adapter 
*adapter, + rc = reset_rx_pools(adapter); + if (rc) + return rc; +- +- if (reset_state == VNIC_CLOSED) +- return 0; + } + } + ++ adapter->state = VNIC_CLOSED; ++ ++ if (reset_state == VNIC_CLOSED) ++ return 0; ++ + rc = __ibmvnic_open(netdev); + if (rc) { + if (list_empty(&adapter->rwi_list)) +@@ -2266,6 +2258,7 @@ static int reset_one_sub_crq_queue(struct ibmvnic_adapter *adapter, + } + + memset(scrq->msgs, 0, 4 * PAGE_SIZE); ++ atomic_set(&scrq->used, 0); + scrq->cur = 0; + + rc = h_reg_sub_crq(adapter->vdev->unit_address, scrq->msg_token, +@@ -4387,6 +4380,14 @@ static int ibmvnic_init(struct ibmvnic_adapter *adapter) + release_crq_queue(adapter); + } + ++ rc = init_stats_buffers(adapter); ++ if (rc) ++ return rc; ++ ++ rc = init_stats_token(adapter); ++ if (rc) ++ return rc; ++ + return rc; + } + +@@ -4494,6 +4495,9 @@ static int ibmvnic_remove(struct vio_dev *dev) + release_sub_crqs(adapter); + release_crq_queue(adapter); + ++ release_stats_token(adapter); ++ release_stats_buffers(adapter); ++ + adapter->state = VNIC_REMOVED; + + mutex_unlock(&adapter->reset_lock); +diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c +index e31adbc75f9c..e50d703d7353 100644 +--- a/drivers/net/ethernet/intel/i40e/i40e_main.c ++++ b/drivers/net/ethernet/intel/i40e/i40e_main.c +@@ -9215,6 +9215,17 @@ static void i40e_rebuild(struct i40e_pf *pf, bool reinit, bool lock_acquired) + } + i40e_get_oem_version(&pf->hw); + ++ if (test_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state) && ++ ((hw->aq.fw_maj_ver == 4 && hw->aq.fw_min_ver <= 33) || ++ hw->aq.fw_maj_ver < 4) && hw->mac.type == I40E_MAC_XL710) { ++ /* The following delay is necessary for 4.33 firmware and older ++ * to recover after EMP reset. 200 ms should suffice, but we ++ * use 300 ms here to be sure that the FW is ready to operate ++ * after reset. ++ */ ++ mdelay(300); ++ } ++ + /* re-verify the eeprom if we just had an EMP reset */ + if (test_and_clear_bit(__I40E_EMP_RESET_INTR_RECEIVED, pf->state)) + i40e_verify_eeprom(pf); +@@ -14216,7 +14227,13 @@ static int __maybe_unused i40e_suspend(struct device *dev) + if (pf->wol_en && (pf->hw_features & I40E_HW_WOL_MC_MAGIC_PKT_WAKE)) + i40e_enable_mc_magic_wake(pf); + +- i40e_prep_for_reset(pf, false); ++ /* Since we're going to destroy queues during the ++ * i40e_clear_interrupt_scheme() we should hold the RTNL lock for this ++ * whole section ++ */ ++ rtnl_lock(); ++ ++ i40e_prep_for_reset(pf, true); + + wr32(hw, I40E_PFPM_APM, (pf->wol_en ? I40E_PFPM_APM_APME_MASK : 0)); + wr32(hw, I40E_PFPM_WUFC, (pf->wol_en ? I40E_PFPM_WUFC_MAG_MASK : 0)); +@@ -14228,6 +14245,8 @@ static int __maybe_unused i40e_suspend(struct device *dev) + */ + i40e_clear_interrupt_scheme(pf); + ++ rtnl_unlock(); ++ + return 0; + } + +@@ -14245,6 +14264,11 @@ static int __maybe_unused i40e_resume(struct device *dev) + if (!test_bit(__I40E_SUSPENDED, pf->state)) + return 0; + ++ /* We need to hold the RTNL lock prior to restoring interrupt schemes, ++ * since we're going to be restoring queues ++ */ ++ rtnl_lock(); ++ + /* We cleared the interrupt scheme when we suspended, so we need to + * restore it now to resume device functionality. 
+ */ +@@ -14255,7 +14279,9 @@ static int __maybe_unused i40e_resume(struct device *dev) + } + + clear_bit(__I40E_DOWN, pf->state); +- i40e_reset_and_rebuild(pf, false, false); ++ i40e_reset_and_rebuild(pf, false, true); ++ ++ rtnl_unlock(); + + /* Clear suspended state last after everything is recovered */ + clear_bit(__I40E_SUSPENDED, pf->state); +diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +index 9fc063af233c..85369423452d 100644 +--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c ++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c +@@ -7711,7 +7711,8 @@ static void ixgbe_service_task(struct work_struct *work) + + if (test_bit(__IXGBE_PTP_RUNNING, &adapter->state)) { + ixgbe_ptp_overflow_check(adapter); +- ixgbe_ptp_rx_hang(adapter); ++ if (adapter->flags & IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER) ++ ixgbe_ptp_rx_hang(adapter); + ixgbe_ptp_tx_hang(adapter); + } + +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +index e9a1fbcc4adf..3efe45bc2471 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c +@@ -1802,7 +1802,7 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev) + + cmd->checksum_disabled = 1; + cmd->max_reg_cmds = (1 << cmd->log_sz) - 1; +- cmd->bitmask = (1 << cmd->max_reg_cmds) - 1; ++ cmd->bitmask = (1UL << cmd->max_reg_cmds) - 1; + + cmd->cmdif_rev = ioread32be(&dev->iseg->cmdif_rev_fw_sub) >> 16; + if (cmd->cmdif_rev > CMD_IF_REV) { +diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +index 9b4827d36e3e..1ae61514b6a9 100644 +--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c ++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c +@@ -153,26 +153,6 @@ static void mlx5e_update_carrier_work(struct work_struct *work) + mutex_unlock(&priv->state_lock); + } + +-static void mlx5e_tx_timeout_work(struct work_struct *work) +-{ +- struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv, +- tx_timeout_work); +- int err; +- +- rtnl_lock(); +- mutex_lock(&priv->state_lock); +- if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) +- goto unlock; +- mlx5e_close_locked(priv->netdev); +- err = mlx5e_open_locked(priv->netdev); +- if (err) +- netdev_err(priv->netdev, "mlx5e_open_locked failed recovering from a tx_timeout, err(%d).\n", +- err); +-unlock: +- mutex_unlock(&priv->state_lock); +- rtnl_unlock(); +-} +- + void mlx5e_update_stats(struct mlx5e_priv *priv) + { + int i; +@@ -3632,13 +3612,19 @@ static bool mlx5e_tx_timeout_eq_recover(struct net_device *dev, + return true; + } + +-static void mlx5e_tx_timeout(struct net_device *dev) ++static void mlx5e_tx_timeout_work(struct work_struct *work) + { +- struct mlx5e_priv *priv = netdev_priv(dev); ++ struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv, ++ tx_timeout_work); ++ struct net_device *dev = priv->netdev; + bool reopen_channels = false; +- int i; ++ int i, err; + +- netdev_err(dev, "TX timeout detected\n"); ++ rtnl_lock(); ++ mutex_lock(&priv->state_lock); ++ ++ if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) ++ goto unlock; + + for (i = 0; i < priv->channels.num * priv->channels.params.num_tc; i++) { + struct netdev_queue *dev_queue = netdev_get_tx_queue(dev, i); +@@ -3646,7 +3632,9 @@ static void mlx5e_tx_timeout(struct net_device *dev) + + if (!netif_xmit_stopped(dev_queue)) + continue; +- netdev_err(dev, "TX timeout on queue: %d, SQ: 0x%x, CQ: 0x%x, SQ Cons: 0x%x SQ Prod: 
0x%x, usecs since last trans: %u\n", ++ ++ netdev_err(dev, ++ "TX timeout on queue: %d, SQ: 0x%x, CQ: 0x%x, SQ Cons: 0x%x SQ Prod: 0x%x, usecs since last trans: %u\n", + i, sq->sqn, sq->cq.mcq.cqn, sq->cc, sq->pc, + jiffies_to_usecs(jiffies - dev_queue->trans_start)); + +@@ -3659,8 +3647,27 @@ static void mlx5e_tx_timeout(struct net_device *dev) + } + } + +- if (reopen_channels && test_bit(MLX5E_STATE_OPENED, &priv->state)) +- schedule_work(&priv->tx_timeout_work); ++ if (!reopen_channels) ++ goto unlock; ++ ++ mlx5e_close_locked(dev); ++ err = mlx5e_open_locked(dev); ++ if (err) ++ netdev_err(priv->netdev, ++ "mlx5e_open_locked failed recovering from a tx_timeout, err(%d).\n", ++ err); ++ ++unlock: ++ mutex_unlock(&priv->state_lock); ++ rtnl_unlock(); ++} ++ ++static void mlx5e_tx_timeout(struct net_device *dev) ++{ ++ struct mlx5e_priv *priv = netdev_priv(dev); ++ ++ netdev_err(dev, "TX timeout detected\n"); ++ queue_work(priv->wq, &priv->tx_timeout_work); + } + + static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog) +diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c +index c4949183eef3..3881de91015e 100644 +--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c ++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c +@@ -307,6 +307,8 @@ static int rmnet_changelink(struct net_device *dev, struct nlattr *tb[], + if (data[IFLA_VLAN_ID]) { + mux_id = nla_get_u16(data[IFLA_VLAN_ID]); + ep = rmnet_get_endpoint(port, priv->mux_id); ++ if (!ep) ++ return -ENODEV; + + hlist_del_init_rcu(&ep->hlnode); + hlist_add_head_rcu(&ep->hlnode, &port->muxed_ep[mux_id]); +diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c +index 14c839bb09e7..7c9235c9d081 100644 +--- a/drivers/net/ethernet/renesas/sh_eth.c ++++ b/drivers/net/ethernet/renesas/sh_eth.c +@@ -763,6 +763,7 @@ static struct sh_eth_cpu_data sh7757_data = { + .rpadir = 1, + .rpadir_value = 2 << 16, + .rtrate = 1, ++ .dual_port = 1, + }; + + #define SH_GIGA_ETH_BASE 0xfee00000UL +@@ -841,6 +842,7 @@ static struct sh_eth_cpu_data sh7757_data_giga = { + .no_trimd = 1, + .no_ade = 1, + .tsu = 1, ++ .dual_port = 1, + }; + + /* SH7734 */ +@@ -911,6 +913,7 @@ static struct sh_eth_cpu_data sh7763_data = { + .tsu = 1, + .irq_flags = IRQF_SHARED, + .magic = 1, ++ .dual_port = 1, + }; + + static struct sh_eth_cpu_data sh7619_data = { +@@ -943,6 +946,7 @@ static struct sh_eth_cpu_data sh771x_data = { + EESIPR_RRFIP | EESIPR_RTLFIP | EESIPR_RTSFIP | + EESIPR_PREIP | EESIPR_CERFIP, + .tsu = 1, ++ .dual_port = 1, + }; + + static void sh_eth_set_default_cpu_data(struct sh_eth_cpu_data *cd) +@@ -2932,7 +2936,7 @@ static int sh_eth_vlan_rx_kill_vid(struct net_device *ndev, + /* SuperH's TSU register init function */ + static void sh_eth_tsu_init(struct sh_eth_private *mdp) + { +- if (sh_eth_is_rz_fast_ether(mdp)) { ++ if (!mdp->cd->dual_port) { + sh_eth_tsu_write(mdp, 0, TSU_TEN); /* Disable all CAM entry */ + sh_eth_tsu_write(mdp, TSU_FWSLC_POSTENU | TSU_FWSLC_POSTENL, + TSU_FWSLC); /* Enable POST registers */ +diff --git a/drivers/net/ethernet/renesas/sh_eth.h b/drivers/net/ethernet/renesas/sh_eth.h +index e5fe70134690..fdd6d71c03d1 100644 +--- a/drivers/net/ethernet/renesas/sh_eth.h ++++ b/drivers/net/ethernet/renesas/sh_eth.h +@@ -509,6 +509,7 @@ struct sh_eth_cpu_data { + unsigned rmiimode:1; /* EtherC has RMIIMODE register */ + unsigned rtrate:1; /* EtherC has RTRATE register */ + unsigned magic:1; /* EtherC has ECMR.MPDE and 
ECSR.MPD */ ++ unsigned dual_port:1; /* Dual EtherC/E-DMAC */ + }; + + struct sh_eth_private { +diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +index 3ea343b45d93..8044563453f9 100644 +--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c ++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c +@@ -1843,6 +1843,11 @@ static void stmmac_tx_clean(struct stmmac_priv *priv, u32 queue) + if (unlikely(status & tx_dma_own)) + break; + ++ /* Make sure descriptor fields are read after reading ++ * the own bit. ++ */ ++ dma_rmb(); ++ + /* Just consider the last segment and ...*/ + if (likely(!(status & tx_not_ls))) { + /* ... verify the status error condition */ +@@ -2430,7 +2435,7 @@ static void stmmac_mac_config_rx_queues_routing(struct stmmac_priv *priv) + continue; + + packet = priv->plat->rx_queues_cfg[queue].pkt_route; +- priv->hw->mac->rx_queue_prio(priv->hw, packet, queue); ++ priv->hw->mac->rx_queue_routing(priv->hw, packet, queue); + } + } + +@@ -2980,8 +2985,15 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev) + tcp_hdrlen(skb) / 4, (skb->len - proto_hdr_len)); + + /* If context desc is used to change MSS */ +- if (mss_desc) ++ if (mss_desc) { ++ /* Make sure that first descriptor has been completely ++ * written, including its own bit. This is because MSS is ++ * actually before first descriptor, so we need to make ++ * sure that MSS's own bit is the last thing written. ++ */ ++ dma_wmb(); + priv->hw->desc->set_tx_owner(mss_desc); ++ } + + /* The own bit must be the latest setting done when prepare the + * descriptor and then barrier is needed to make sure that +diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c +index 7472172823f3..11209e494502 100644 +--- a/drivers/net/hyperv/netvsc.c ++++ b/drivers/net/hyperv/netvsc.c +@@ -1078,10 +1078,14 @@ static int netvsc_receive(struct net_device *ndev, + void *data = recv_buf + + vmxferpage_packet->ranges[i].byte_offset; + u32 buflen = vmxferpage_packet->ranges[i].byte_count; ++ int ret; + + /* Pass it to the upper layer */ +- status = rndis_filter_receive(ndev, net_device, +- channel, data, buflen); ++ ret = rndis_filter_receive(ndev, net_device, ++ channel, data, buflen); ++ ++ if (unlikely(ret != NVSP_STAT_SUCCESS)) ++ status = NVSP_STAT_FAIL; + } + + enq_receive_complete(ndev, net_device, q_idx, +diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c +index 4774766fe20d..2a7752c113df 100644 +--- a/drivers/net/hyperv/netvsc_drv.c ++++ b/drivers/net/hyperv/netvsc_drv.c +@@ -831,7 +831,7 @@ int netvsc_recv_callback(struct net_device *net, + u64_stats_update_end(&rx_stats->syncp); + + napi_gro_receive(&nvchan->napi, skb); +- return 0; ++ return NVSP_STAT_SUCCESS; + } + + static void netvsc_get_drvinfo(struct net_device *net, +diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c +index 95846f0321f3..33138e4f0b5a 100644 +--- a/drivers/net/hyperv/rndis_filter.c ++++ b/drivers/net/hyperv/rndis_filter.c +@@ -434,10 +434,10 @@ int rndis_filter_receive(struct net_device *ndev, + "unhandled rndis message (type %u len %u)\n", + rndis_msg->ndis_msg_type, + rndis_msg->msg_len); +- break; ++ return NVSP_STAT_FAIL; + } + +- return 0; ++ return NVSP_STAT_SUCCESS; + } + + static int rndis_filter_query_device(struct rndis_device *dev, +diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c +index 377af43b81b3..58299fb666ed 100644 +--- 
a/drivers/net/ieee802154/ca8210.c ++++ b/drivers/net/ieee802154/ca8210.c +@@ -2493,13 +2493,14 @@ static ssize_t ca8210_test_int_user_write( + struct ca8210_priv *priv = filp->private_data; + u8 command[CA8210_SPI_BUF_SIZE]; + +- if (len > CA8210_SPI_BUF_SIZE) { ++ memset(command, SPI_IDLE, 6); ++ if (len > CA8210_SPI_BUF_SIZE || len < 2) { + dev_warn( + &priv->spi->dev, +- "userspace requested erroneously long write (%zu)\n", ++ "userspace requested erroneous write length (%zu)\n", + len + ); +- return -EMSGSIZE; ++ return -EBADE; + } + + ret = copy_from_user(command, in_buf, len); +@@ -2511,6 +2512,13 @@ static ssize_t ca8210_test_int_user_write( + ); + return -EIO; + } ++ if (len != command[1] + 2) { ++ dev_err( ++ &priv->spi->dev, ++ "write len does not match packet length field\n" ++ ); ++ return -EBADE; ++ } + + ret = ca8210_test_check_upstream(command, priv->spi); + if (ret == 0) { +diff --git a/drivers/net/phy/dp83640.c b/drivers/net/phy/dp83640.c +index 654f42d00092..a6c87793d899 100644 +--- a/drivers/net/phy/dp83640.c ++++ b/drivers/net/phy/dp83640.c +@@ -1207,6 +1207,23 @@ static void dp83640_remove(struct phy_device *phydev) + kfree(dp83640); + } + ++static int dp83640_soft_reset(struct phy_device *phydev) ++{ ++ int ret; ++ ++ ret = genphy_soft_reset(phydev); ++ if (ret < 0) ++ return ret; ++ ++ /* From DP83640 datasheet: "Software driver code must wait 3 us ++ * following a software reset before allowing further serial MII ++ * operations with the DP83640." ++ */ ++ udelay(10); /* Taking udelay inaccuracy into account */ ++ ++ return 0; ++} ++ + static int dp83640_config_init(struct phy_device *phydev) + { + struct dp83640_private *dp83640 = phydev->priv; +@@ -1501,6 +1518,7 @@ static struct phy_driver dp83640_driver = { + .flags = PHY_HAS_INTERRUPT, + .probe = dp83640_probe, + .remove = dp83640_remove, ++ .soft_reset = dp83640_soft_reset, + .config_init = dp83640_config_init, + .ack_interrupt = dp83640_ack_interrupt, + .config_intr = dp83640_config_intr, +diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c +index 32cf21716f19..145bb7cbf5b2 100644 +--- a/drivers/net/usb/lan78xx.c ++++ b/drivers/net/usb/lan78xx.c +@@ -2083,10 +2083,6 @@ static int lan78xx_phy_init(struct lan78xx_net *dev) + + dev->fc_autoneg = phydev->autoneg; + +- phy_start(phydev); +- +- netif_dbg(dev, ifup, dev->net, "phy initialised successfully"); +- + return 0; + + error: +@@ -2523,9 +2519,9 @@ static int lan78xx_open(struct net_device *net) + if (ret < 0) + goto done; + +- ret = lan78xx_phy_init(dev); +- if (ret < 0) +- goto done; ++ phy_start(net->phydev); ++ ++ netif_dbg(dev, ifup, dev->net, "phy initialised successfully"); + + /* for Link Check */ + if (dev->urb_intr) { +@@ -2586,13 +2582,8 @@ static int lan78xx_stop(struct net_device *net) + if (timer_pending(&dev->stat_monitor)) + del_timer_sync(&dev->stat_monitor); + +- phy_unregister_fixup_for_uid(PHY_KSZ9031RNX, 0xfffffff0); +- phy_unregister_fixup_for_uid(PHY_LAN8835, 0xfffffff0); +- +- phy_stop(net->phydev); +- phy_disconnect(net->phydev); +- +- net->phydev = NULL; ++ if (net->phydev) ++ phy_stop(net->phydev); + + clear_bit(EVENT_DEV_OPEN, &dev->flags); + netif_stop_queue(net); +@@ -3507,8 +3498,13 @@ static void lan78xx_disconnect(struct usb_interface *intf) + return; + + udev = interface_to_usbdev(intf); +- + net = dev->net; ++ ++ phy_unregister_fixup_for_uid(PHY_KSZ9031RNX, 0xfffffff0); ++ phy_unregister_fixup_for_uid(PHY_LAN8835, 0xfffffff0); ++ ++ phy_disconnect(net->phydev); ++ + unregister_netdev(net); + + 
cancel_delayed_work_sync(&dev->wq); +@@ -3664,8 +3660,14 @@ static int lan78xx_probe(struct usb_interface *intf, + pm_runtime_set_autosuspend_delay(&udev->dev, + DEFAULT_AUTOSUSPEND_DELAY); + ++ ret = lan78xx_phy_init(dev); ++ if (ret < 0) ++ goto out4; ++ + return 0; + ++out4: ++ unregister_netdev(netdev); + out3: + lan78xx_unbind(dev, intf); + out2: +@@ -4013,7 +4015,7 @@ static int lan78xx_reset_resume(struct usb_interface *intf) + + lan78xx_reset(dev); + +- lan78xx_phy_init(dev); ++ phy_start(dev->net->phydev); + + return lan78xx_resume(intf); + } +diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c +index aa21b2225679..16b0c7db431b 100644 +--- a/drivers/net/virtio_net.c ++++ b/drivers/net/virtio_net.c +@@ -2874,8 +2874,8 @@ static int virtnet_probe(struct virtio_device *vdev) + + /* Assume link up if device can't report link status, + otherwise get link status from config. */ ++ netif_carrier_off(dev); + if (virtio_has_feature(vi->vdev, VIRTIO_NET_F_STATUS)) { +- netif_carrier_off(dev); + schedule_work(&vi->config_work); + } else { + vi->status = VIRTIO_NET_S_LINK_UP; +diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c +index 800a86e2d671..2d7ef7460780 100644 +--- a/drivers/net/wireless/ath/ath10k/mac.c ++++ b/drivers/net/wireless/ath/ath10k/mac.c +@@ -7084,10 +7084,20 @@ static void ath10k_sta_rc_update(struct ieee80211_hw *hw, + { + struct ath10k *ar = hw->priv; + struct ath10k_sta *arsta = (struct ath10k_sta *)sta->drv_priv; ++ struct ath10k_vif *arvif = (void *)vif->drv_priv; ++ struct ath10k_peer *peer; + u32 bw, smps; + + spin_lock_bh(&ar->data_lock); + ++ peer = ath10k_peer_find(ar, arvif->vdev_id, sta->addr); ++ if (!peer) { ++ spin_unlock_bh(&ar->data_lock); ++ ath10k_warn(ar, "mac sta rc update failed to find peer %pM on vdev %i\n", ++ sta->addr, arvif->vdev_id); ++ return; ++ } ++ + ath10k_dbg(ar, ATH10K_DBG_MAC, + "mac sta rc update for %pM changed %08x bw %d nss %d smps %d\n", + sta->addr, changed, sta->bandwidth, sta->rx_nss, +@@ -7873,6 +7883,7 @@ static const struct ieee80211_iface_combination ath10k_10x_if_comb[] = { + .max_interfaces = 8, + .num_different_channels = 1, + .beacon_int_infra_match = true, ++ .beacon_int_min_gcd = 1, + #ifdef CONFIG_ATH10K_DFS_CERTIFIED + .radar_detect_widths = BIT(NL80211_CHAN_WIDTH_20_NOHT) | + BIT(NL80211_CHAN_WIDTH_20) | +@@ -7996,6 +8007,7 @@ static const struct ieee80211_iface_combination ath10k_10_4_if_comb[] = { + .max_interfaces = 16, + .num_different_channels = 1, + .beacon_int_infra_match = true, ++ .beacon_int_min_gcd = 1, + #ifdef CONFIG_ATH10K_DFS_CERTIFIED + .radar_detect_widths = BIT(NL80211_CHAN_WIDTH_20_NOHT) | + BIT(NL80211_CHAN_WIDTH_20) | +diff --git a/drivers/net/wireless/ath/ath9k/common-spectral.c b/drivers/net/wireless/ath/ath9k/common-spectral.c +index 5e77fe1f5b0d..a41bcbda1d9e 100644 +--- a/drivers/net/wireless/ath/ath9k/common-spectral.c ++++ b/drivers/net/wireless/ath/ath9k/common-spectral.c +@@ -479,14 +479,16 @@ ath_cmn_is_fft_buf_full(struct ath_spec_scan_priv *spec_priv) + { + int i = 0; + int ret = 0; ++ struct rchan_buf *buf; + struct rchan *rc = spec_priv->rfs_chan_spec_scan; + +- for_each_online_cpu(i) +- ret += relay_buf_full(*per_cpu_ptr(rc->buf, i)); +- +- i = num_online_cpus(); ++ for_each_possible_cpu(i) { ++ if ((buf = *per_cpu_ptr(rc->buf, i))) { ++ ret += relay_buf_full(buf); ++ } ++ } + +- if (ret == i) ++ if (ret) + return 1; + else + return 0; +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c 
b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c +index 55d1274c6092..fb5745660509 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs-fw.c +@@ -234,13 +234,15 @@ void iwl_mvm_tlc_update_notif(struct iwl_mvm *mvm, struct iwl_rx_packet *pkt) + struct iwl_mvm_sta *mvmsta; + struct iwl_lq_sta_rs_fw *lq_sta; + ++ rcu_read_lock(); ++ + notif = (void *)pkt->data; + mvmsta = iwl_mvm_sta_from_staid_rcu(mvm, notif->sta_id); + + if (!mvmsta) { + IWL_ERR(mvm, "Invalid sta id (%d) in FW TLC notification\n", + notif->sta_id); +- return; ++ goto out; + } + + lq_sta = &mvmsta->lq_sta.rs_fw; +@@ -251,6 +253,8 @@ void iwl_mvm_tlc_update_notif(struct iwl_mvm *mvm, struct iwl_rx_packet *pkt) + IWL_DEBUG_RATE(mvm, "new rate_n_flags: 0x%X\n", + lq_sta->last_rate_n_flags); + } ++out: ++ rcu_read_unlock(); + } + + void rs_fw_rate_init(struct iwl_mvm *mvm, struct ieee80211_sta *sta, +diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c +index d65e1db7c097..70f8b8eb6117 100644 +--- a/drivers/net/wireless/intel/iwlwifi/mvm/utils.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/utils.c +@@ -800,12 +800,19 @@ int iwl_mvm_disable_txq(struct iwl_mvm *mvm, int queue, int mac80211_queue, + .scd_queue = queue, + .action = SCD_CFG_DISABLE_QUEUE, + }; +- bool remove_mac_queue = true; ++ bool remove_mac_queue = mac80211_queue != IEEE80211_INVAL_HW_QUEUE; + int ret; + ++ if (WARN_ON(remove_mac_queue && mac80211_queue >= IEEE80211_MAX_QUEUES)) ++ return -EINVAL; ++ + if (iwl_mvm_has_new_tx_api(mvm)) { + spin_lock_bh(&mvm->queue_info_lock); +- mvm->hw_queue_to_mac80211[queue] &= ~BIT(mac80211_queue); ++ ++ if (remove_mac_queue) ++ mvm->hw_queue_to_mac80211[queue] &= ++ ~BIT(mac80211_queue); ++ + spin_unlock_bh(&mvm->queue_info_lock); + + iwl_trans_txq_free(mvm->trans, queue); +diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_main.c b/drivers/net/wireless/mediatek/mt76/mt76x2_main.c +index 205043b470b2..7d4e308ee6a7 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt76x2_main.c ++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_main.c +@@ -336,6 +336,17 @@ mt76x2_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd, + int idx = key->keyidx; + int ret; + ++ /* fall back to sw encryption for unsupported ciphers */ ++ switch (key->cipher) { ++ case WLAN_CIPHER_SUITE_WEP40: ++ case WLAN_CIPHER_SUITE_WEP104: ++ case WLAN_CIPHER_SUITE_TKIP: ++ case WLAN_CIPHER_SUITE_CCMP: ++ break; ++ default: ++ return -EOPNOTSUPP; ++ } ++ + /* + * The hardware does not support per-STA RX GTK, fall back + * to software mode for these. 
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2_tx.c b/drivers/net/wireless/mediatek/mt76/mt76x2_tx.c +index 534e4bf9a34c..e46eafc4c436 100644 +--- a/drivers/net/wireless/mediatek/mt76/mt76x2_tx.c ++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_tx.c +@@ -36,9 +36,12 @@ void mt76x2_tx(struct ieee80211_hw *hw, struct ieee80211_tx_control *control, + + msta = (struct mt76x2_sta *) control->sta->drv_priv; + wcid = &msta->wcid; ++ /* sw encrypted frames */ ++ if (!info->control.hw_key && wcid->hw_key_idx != -1) ++ control->sta = NULL; + } + +- if (vif || (!info->control.hw_key && wcid->hw_key_idx != -1)) { ++ if (vif && !control->sta) { + struct mt76x2_vif *mvif; + + mvif = (struct mt76x2_vif *) vif->drv_priv; +diff --git a/drivers/net/wireless/rsi/rsi_91x_sdio.c b/drivers/net/wireless/rsi/rsi_91x_sdio.c +index b0cf41195051..96fc3c84d7d2 100644 +--- a/drivers/net/wireless/rsi/rsi_91x_sdio.c ++++ b/drivers/net/wireless/rsi/rsi_91x_sdio.c +@@ -636,11 +636,14 @@ static int rsi_sdio_master_reg_read(struct rsi_hw *adapter, u32 addr, + u32 *read_buf, u16 size) + { + u32 addr_on_bus, *data; +- u32 align[2] = {}; + u16 ms_addr; + int status; + +- data = PTR_ALIGN(&align[0], 8); ++ data = kzalloc(RSI_MASTER_REG_BUF_SIZE, GFP_KERNEL); ++ if (!data) ++ return -ENOMEM; ++ ++ data = PTR_ALIGN(data, 8); + + ms_addr = (addr >> 16); + status = rsi_sdio_master_access_msword(adapter, ms_addr); +@@ -648,7 +651,7 @@ static int rsi_sdio_master_reg_read(struct rsi_hw *adapter, u32 addr, + rsi_dbg(ERR_ZONE, + "%s: Unable to set ms word to common reg\n", + __func__); +- return status; ++ goto err; + } + addr &= 0xFFFF; + +@@ -666,7 +669,7 @@ static int rsi_sdio_master_reg_read(struct rsi_hw *adapter, u32 addr, + (u8 *)data, 4); + if (status < 0) { + rsi_dbg(ERR_ZONE, "%s: AHB register read failed\n", __func__); +- return status; ++ goto err; + } + if (size == 2) { + if ((addr & 0x3) == 0) +@@ -688,17 +691,23 @@ static int rsi_sdio_master_reg_read(struct rsi_hw *adapter, u32 addr, + *read_buf = *data; + } + +- return 0; ++err: ++ kfree(data); ++ return status; + } + + static int rsi_sdio_master_reg_write(struct rsi_hw *adapter, + unsigned long addr, + unsigned long data, u16 size) + { +- unsigned long data1[2], *data_aligned; ++ unsigned long *data_aligned; + int status; + +- data_aligned = PTR_ALIGN(&data1[0], 8); ++ data_aligned = kzalloc(RSI_MASTER_REG_BUF_SIZE, GFP_KERNEL); ++ if (!data_aligned) ++ return -ENOMEM; ++ ++ data_aligned = PTR_ALIGN(data_aligned, 8); + + if (size == 2) { + *data_aligned = ((data << 16) | (data & 0xFFFF)); +@@ -717,6 +726,7 @@ static int rsi_sdio_master_reg_write(struct rsi_hw *adapter, + rsi_dbg(ERR_ZONE, + "%s: Unable to set ms word to common reg\n", + __func__); ++ kfree(data_aligned); + return -EIO; + } + addr = addr & 0xFFFF; +@@ -726,12 +736,12 @@ static int rsi_sdio_master_reg_write(struct rsi_hw *adapter, + (adapter, + (addr | RSI_SD_REQUEST_MASTER), + (u8 *)data_aligned, size); +- if (status < 0) { ++ if (status < 0) + rsi_dbg(ERR_ZONE, + "%s: Unable to do AHB reg write\n", __func__); +- return status; +- } +- return 0; ++ ++ kfree(data_aligned); ++ return status; + } + + /** +diff --git a/drivers/net/wireless/rsi/rsi_sdio.h b/drivers/net/wireless/rsi/rsi_sdio.h +index 49c549ba6682..34242d84bd7b 100644 +--- a/drivers/net/wireless/rsi/rsi_sdio.h ++++ b/drivers/net/wireless/rsi/rsi_sdio.h +@@ -46,6 +46,8 @@ enum sdio_interrupt_type { + #define PKT_BUFF_AVAILABLE 1 + #define FW_ASSERT_IND 2 + ++#define RSI_MASTER_REG_BUF_SIZE 12 ++ + #define 
RSI_DEVICE_BUFFER_STATUS_REGISTER 0xf3 + #define RSI_FN1_INT_REGISTER 0xf9 + #define RSI_INT_ENABLE_REGISTER 0x04 +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index f81773570dfd..f5259912f049 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -379,6 +379,15 @@ static void nvme_put_ns(struct nvme_ns *ns) + kref_put(&ns->kref, nvme_free_ns); + } + ++static inline void nvme_clear_nvme_request(struct request *req) ++{ ++ if (!(req->rq_flags & RQF_DONTPREP)) { ++ nvme_req(req)->retries = 0; ++ nvme_req(req)->flags = 0; ++ req->rq_flags |= RQF_DONTPREP; ++ } ++} ++ + struct request *nvme_alloc_request(struct request_queue *q, + struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid) + { +@@ -395,6 +404,7 @@ struct request *nvme_alloc_request(struct request_queue *q, + return req; + + req->cmd_flags |= REQ_FAILFAST_DRIVER; ++ nvme_clear_nvme_request(req); + nvme_req(req)->cmd = cmd; + + return req; +@@ -611,11 +621,7 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req, + { + blk_status_t ret = BLK_STS_OK; + +- if (!(req->rq_flags & RQF_DONTPREP)) { +- nvme_req(req)->retries = 0; +- nvme_req(req)->flags = 0; +- req->rq_flags |= RQF_DONTPREP; +- } ++ nvme_clear_nvme_request(req); + + switch (req_op(req)) { + case REQ_OP_DRV_IN: +@@ -745,6 +751,7 @@ static int nvme_submit_user_cmd(struct request_queue *q, + return PTR_ERR(req); + + req->timeout = timeout ? timeout : ADMIN_TIMEOUT; ++ nvme_req(req)->flags |= NVME_REQ_USERCMD; + + if (ubuffer && bufflen) { + ret = blk_rq_map_user(q, req, NULL, ubuffer, bufflen, +diff --git a/drivers/nvme/host/fabrics.c b/drivers/nvme/host/fabrics.c +index 8f0f34d06d46..124c458806df 100644 +--- a/drivers/nvme/host/fabrics.c ++++ b/drivers/nvme/host/fabrics.c +@@ -536,6 +536,85 @@ static struct nvmf_transport_ops *nvmf_lookup_transport( + return NULL; + } + ++blk_status_t nvmf_check_if_ready(struct nvme_ctrl *ctrl, struct request *rq, ++ bool queue_live, bool is_connected) ++{ ++ struct nvme_command *cmd = nvme_req(rq)->cmd; ++ ++ if (likely(ctrl->state == NVME_CTRL_LIVE && is_connected)) ++ return BLK_STS_OK; ++ ++ switch (ctrl->state) { ++ case NVME_CTRL_DELETING: ++ goto reject_io; ++ ++ case NVME_CTRL_NEW: ++ case NVME_CTRL_CONNECTING: ++ if (!is_connected) ++ /* ++ * This is the case of starting a new ++ * association but connectivity was lost ++ * before it was fully created. We need to ++ * error the commands used to initialize the ++ * controller so the reconnect can go into a ++ * retry attempt. The commands should all be ++ * marked REQ_FAILFAST_DRIVER, which will hit ++ * the reject path below. Anything else will ++ * be queued while the state settles. ++ */ ++ goto reject_or_queue_io; ++ ++ if ((queue_live && ++ !(nvme_req(rq)->flags & NVME_REQ_USERCMD)) || ++ (!queue_live && blk_rq_is_passthrough(rq) && ++ cmd->common.opcode == nvme_fabrics_command && ++ cmd->fabrics.fctype == nvme_fabrics_type_connect)) ++ /* ++ * If queue is live, allow only commands that ++ * are internally generated pass through. These ++ * are commands on the admin queue to initialize ++ * the controller. This will reject any ioctl ++ * admin cmds received while initializing. ++ * ++ * If the queue is not live, allow only a ++ * connect command. This will reject any ioctl ++ * admin cmd as well as initialization commands ++ * if the controller reverted the queue to non-live. 
++ */ ++ return BLK_STS_OK; ++ ++ /* ++ * fall-thru to the reject_or_queue_io clause ++ */ ++ break; ++ ++ /* these cases fall-thru ++ * case NVME_CTRL_LIVE: ++ * case NVME_CTRL_RESETTING: ++ */ ++ default: ++ break; ++ } ++ ++reject_or_queue_io: ++ /* ++ * Any other new io is something we're not in a state to send ++ * to the device. Default action is to busy it and retry it ++ * after the controller state is recovered. However, anything ++ * marked for failfast or nvme multipath is immediately failed. ++ * Note: commands used to initialize the controller will be ++ * marked for failfast. ++ * Note: nvme cli/ioctl commands are marked for failfast. ++ */ ++ if (!blk_noretry_request(rq) && !(rq->cmd_flags & REQ_NVME_MPATH)) ++ return BLK_STS_RESOURCE; ++ ++reject_io: ++ nvme_req(rq)->status = NVME_SC_ABORT_REQ; ++ return BLK_STS_IOERR; ++} ++EXPORT_SYMBOL_GPL(nvmf_check_if_ready); ++ + static const match_table_t opt_tokens = { + { NVMF_OPT_TRANSPORT, "transport=%s" }, + { NVMF_OPT_TRADDR, "traddr=%s" }, +@@ -608,8 +687,10 @@ static int nvmf_parse_options(struct nvmf_ctrl_options *opts, + opts->discovery_nqn = + !(strcmp(opts->subsysnqn, + NVME_DISC_SUBSYS_NAME)); +- if (opts->discovery_nqn) ++ if (opts->discovery_nqn) { ++ opts->kato = 0; + opts->nr_io_queues = 0; ++ } + break; + case NVMF_OPT_TRADDR: + p = match_strdup(args); +diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h +index a3145d90c1d2..ef46c915b7b5 100644 +--- a/drivers/nvme/host/fabrics.h ++++ b/drivers/nvme/host/fabrics.h +@@ -157,36 +157,7 @@ void nvmf_unregister_transport(struct nvmf_transport_ops *ops); + void nvmf_free_options(struct nvmf_ctrl_options *opts); + int nvmf_get_address(struct nvme_ctrl *ctrl, char *buf, int size); + bool nvmf_should_reconnect(struct nvme_ctrl *ctrl); +- +-static inline blk_status_t nvmf_check_init_req(struct nvme_ctrl *ctrl, +- struct request *rq) +-{ +- struct nvme_command *cmd = nvme_req(rq)->cmd; +- +- /* +- * We cannot accept any other command until the connect command has +- * completed, so only allow connect to pass. +- */ +- if (!blk_rq_is_passthrough(rq) || +- cmd->common.opcode != nvme_fabrics_command || +- cmd->fabrics.fctype != nvme_fabrics_type_connect) { +- /* +- * Connecting state means transport disruption or initial +- * establishment, which can take a long time and even might +- * fail permanently, fail fast to give upper layers a chance +- * to failover. +- * Deleting state means that the ctrl will never accept commands +- * again, fail it permanently. 
+- */ +- if (ctrl->state == NVME_CTRL_CONNECTING || +- ctrl->state == NVME_CTRL_DELETING) { +- nvme_req(rq)->status = NVME_SC_ABORT_REQ; +- return BLK_STS_IOERR; +- } +- return BLK_STS_RESOURCE; /* try again later */ +- } +- +- return BLK_STS_OK; +-} ++blk_status_t nvmf_check_if_ready(struct nvme_ctrl *ctrl, ++ struct request *rq, bool queue_live, bool is_connected); + + #endif /* _NVME_FABRICS_H */ +diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c +index 1dc1387b7134..6044f891c3ce 100644 +--- a/drivers/nvme/host/fc.c ++++ b/drivers/nvme/host/fc.c +@@ -2191,7 +2191,7 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue, + struct nvme_fc_cmd_iu *cmdiu = &op->cmd_iu; + struct nvme_command *sqe = &cmdiu->sqe; + u32 csn; +- int ret; ++ int ret, opstate; + + /* + * before attempting to send the io, check to see if we believe +@@ -2269,6 +2269,9 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue, + queue->lldd_handle, &op->fcp_req); + + if (ret) { ++ opstate = atomic_xchg(&op->state, FCPOP_STATE_COMPLETE); ++ __nvme_fc_fcpop_chk_teardowns(ctrl, op, opstate); ++ + if (!(op->flags & FCOP_FLAGS_AEN)) + nvme_fc_unmap_data(ctrl, op->rq, op); + +@@ -2284,14 +2287,6 @@ nvme_fc_start_fcp_op(struct nvme_fc_ctrl *ctrl, struct nvme_fc_queue *queue, + return BLK_STS_OK; + } + +-static inline blk_status_t nvme_fc_is_ready(struct nvme_fc_queue *queue, +- struct request *rq) +-{ +- if (unlikely(!test_bit(NVME_FC_Q_LIVE, &queue->flags))) +- return nvmf_check_init_req(&queue->ctrl->ctrl, rq); +- return BLK_STS_OK; +-} +- + static blk_status_t + nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx, + const struct blk_mq_queue_data *bd) +@@ -2307,7 +2302,9 @@ nvme_fc_queue_rq(struct blk_mq_hw_ctx *hctx, + u32 data_len; + blk_status_t ret; + +- ret = nvme_fc_is_ready(queue, rq); ++ ret = nvmf_check_if_ready(&queue->ctrl->ctrl, rq, ++ test_bit(NVME_FC_Q_LIVE, &queue->flags), ++ ctrl->rport->remoteport.port_state == FC_OBJSTATE_ONLINE); + if (unlikely(ret)) + return ret; + +diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h +index 013380641ddf..0133f3d2ce94 100644 +--- a/drivers/nvme/host/nvme.h ++++ b/drivers/nvme/host/nvme.h +@@ -109,6 +109,7 @@ struct nvme_request { + + enum { + NVME_REQ_CANCELLED = (1 << 0), ++ NVME_REQ_USERCMD = (1 << 1), + }; + + static inline struct nvme_request *nvme_req(struct request *req) +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index f6648610d153..dba797b57d73 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -2470,10 +2470,13 @@ static unsigned long check_vendor_combination_bug(struct pci_dev *pdev) + } else if (pdev->vendor == 0x144d && pdev->device == 0xa804) { + /* + * Samsung SSD 960 EVO drops off the PCIe bus after system +- * suspend on a Ryzen board, ASUS PRIME B350M-A. 
++ * suspend on a Ryzen board, ASUS PRIME B350M-A, as well as ++ * within few minutes after bootup on a Coffee Lake board - ++ * ASUS PRIME Z370-A + */ + if (dmi_match(DMI_BOARD_VENDOR, "ASUSTeK COMPUTER INC.") && +- dmi_match(DMI_BOARD_NAME, "PRIME B350M-A")) ++ (dmi_match(DMI_BOARD_NAME, "PRIME B350M-A") || ++ dmi_match(DMI_BOARD_NAME, "PRIME Z370-A"))) + return NVME_QUIRK_NO_APST; + } + +diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c +index 4d84a73ee12d..02dd232951b9 100644 +--- a/drivers/nvme/host/rdma.c ++++ b/drivers/nvme/host/rdma.c +@@ -1594,17 +1594,6 @@ nvme_rdma_timeout(struct request *rq, bool reserved) + return BLK_EH_HANDLED; + } + +-/* +- * We cannot accept any other command until the Connect command has completed. +- */ +-static inline blk_status_t +-nvme_rdma_is_ready(struct nvme_rdma_queue *queue, struct request *rq) +-{ +- if (unlikely(!test_bit(NVME_RDMA_Q_LIVE, &queue->flags))) +- return nvmf_check_init_req(&queue->ctrl->ctrl, rq); +- return BLK_STS_OK; +-} +- + static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx, + const struct blk_mq_queue_data *bd) + { +@@ -1620,7 +1609,8 @@ static blk_status_t nvme_rdma_queue_rq(struct blk_mq_hw_ctx *hctx, + + WARN_ON_ONCE(rq->tag < 0); + +- ret = nvme_rdma_is_ready(queue, rq); ++ ret = nvmf_check_if_ready(&queue->ctrl->ctrl, rq, ++ test_bit(NVME_RDMA_Q_LIVE, &queue->flags), true); + if (unlikely(ret)) + return ret; + +diff --git a/drivers/nvme/target/loop.c b/drivers/nvme/target/loop.c +index 861d1509b22b..e10987f87603 100644 +--- a/drivers/nvme/target/loop.c ++++ b/drivers/nvme/target/loop.c +@@ -149,14 +149,6 @@ nvme_loop_timeout(struct request *rq, bool reserved) + return BLK_EH_HANDLED; + } + +-static inline blk_status_t nvme_loop_is_ready(struct nvme_loop_queue *queue, +- struct request *rq) +-{ +- if (unlikely(!test_bit(NVME_LOOP_Q_LIVE, &queue->flags))) +- return nvmf_check_init_req(&queue->ctrl->ctrl, rq); +- return BLK_STS_OK; +-} +- + static blk_status_t nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx, + const struct blk_mq_queue_data *bd) + { +@@ -166,7 +158,8 @@ static blk_status_t nvme_loop_queue_rq(struct blk_mq_hw_ctx *hctx, + struct nvme_loop_iod *iod = blk_mq_rq_to_pdu(req); + blk_status_t ret; + +- ret = nvme_loop_is_ready(queue, req); ++ ret = nvmf_check_if_ready(&queue->ctrl->ctrl, req, ++ test_bit(NVME_LOOP_Q_LIVE, &queue->flags), true); + if (unlikely(ret)) + return ret; + +diff --git a/drivers/parisc/lba_pci.c b/drivers/parisc/lba_pci.c +index 41b740aed3a3..69bd98421eb1 100644 +--- a/drivers/parisc/lba_pci.c ++++ b/drivers/parisc/lba_pci.c +@@ -1403,9 +1403,27 @@ lba_hw_init(struct lba_device *d) + WRITE_REG32(stat, d->hba.base_addr + LBA_ERROR_CONFIG); + } + +- /* Set HF mode as the default (vs. -1 mode). */ ++ ++ /* ++ * Hard Fail vs. Soft Fail on PCI "Master Abort". ++ * ++ * "Master Abort" means the MMIO transaction timed out - usually due to ++ * the device not responding to an MMIO read. We would like HF to be ++ * enabled to find driver problems, though it means the system will ++ * crash with a HPMC. ++ * ++ * In SoftFail mode "~0L" is returned as a result of a timeout on the ++ * pci bus. This is like how PCI busses on x86 and most other ++ * architectures behave. In order to increase compatibility with ++ * existing (x86) PCI hardware and existing Linux drivers we enable ++ * Soft Fail mode on PA-RISC now too.
++ */ + stat = READ_REG32(d->hba.base_addr + LBA_STAT_CTL); ++#if defined(ENABLE_HARDFAIL) + WRITE_REG32(stat | HF_ENABLE, d->hba.base_addr + LBA_STAT_CTL); ++#else ++ WRITE_REG32(stat & ~HF_ENABLE, d->hba.base_addr + LBA_STAT_CTL); ++#endif + + /* + ** Writing a zero to STAT_CTL.rf (bit 0) will clear reset signal +diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c +index eede34e5ada2..98da1e137071 100644 +--- a/drivers/pci/pci-driver.c ++++ b/drivers/pci/pci-driver.c +@@ -1225,11 +1225,14 @@ static int pci_pm_runtime_suspend(struct device *dev) + int error; + + /* +- * If pci_dev->driver is not set (unbound), the device should +- * always remain in D0 regardless of the runtime PM status ++ * If pci_dev->driver is not set (unbound), we leave the device in D0, ++ * but it may go to D3cold when the bridge above it runtime suspends. ++ * Save its config space in case that happens. + */ +- if (!pci_dev->driver) ++ if (!pci_dev->driver) { ++ pci_save_state(pci_dev); + return 0; ++ } + + if (!pm || !pm->runtime_suspend) + return -ENOSYS; +@@ -1277,16 +1280,18 @@ static int pci_pm_runtime_resume(struct device *dev) + const struct dev_pm_ops *pm = dev->driver ? dev->driver->pm : NULL; + + /* +- * If pci_dev->driver is not set (unbound), the device should +- * always remain in D0 regardless of the runtime PM status ++ * Restoring config space is necessary even if the device is not bound ++ * to a driver because although we left it in D0, it may have gone to ++ * D3cold when the bridge above it runtime suspended. + */ ++ pci_restore_standard_config(pci_dev); ++ + if (!pci_dev->driver) + return 0; + + if (!pm || !pm->runtime_resume) + return -ENOSYS; + +- pci_restore_standard_config(pci_dev); + pci_fixup_device(pci_fixup_resume_early, pci_dev); + pci_enable_wake(pci_dev, PCI_D0, false); + pci_fixup_device(pci_fixup_resume, pci_dev); +diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c +index 81241f981ad7..88598dbdc1b0 100644 +--- a/drivers/pci/quirks.c ++++ b/drivers/pci/quirks.c +@@ -3903,6 +3903,9 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9182, + /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c46 */ + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x91a0, + quirk_dma_func1_alias); ++/* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c127 */ ++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9220, ++ quirk_dma_func1_alias); + /* https://bugzilla.kernel.org/show_bug.cgi?id=42679#c49 */ + DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL_EXT, 0x9230, + quirk_dma_func1_alias); +diff --git a/drivers/pcmcia/cs.c b/drivers/pcmcia/cs.c +index c3b615c94b4b..8c8caec3a72c 100644 +--- a/drivers/pcmcia/cs.c ++++ b/drivers/pcmcia/cs.c +@@ -452,17 +452,20 @@ static int socket_insert(struct pcmcia_socket *skt) + + static int socket_suspend(struct pcmcia_socket *skt) + { +- if (skt->state & SOCKET_SUSPEND) ++ if ((skt->state & SOCKET_SUSPEND) && !(skt->state & SOCKET_IN_RESUME)) + return -EBUSY; + + mutex_lock(&skt->ops_mutex); +- skt->suspended_state = skt->state; ++ /* store state on first suspend, but not after spurious wakeups */ ++ if (!(skt->state & SOCKET_IN_RESUME)) ++ skt->suspended_state = skt->state; + + skt->socket = dead_socket; + skt->ops->set_socket(skt, &skt->socket); + if (skt->ops->suspend) + skt->ops->suspend(skt); + skt->state |= SOCKET_SUSPEND; ++ skt->state &= ~SOCKET_IN_RESUME; + mutex_unlock(&skt->ops_mutex); + return 0; + } +@@ -475,6 +478,7 @@ static int socket_early_resume(struct pcmcia_socket *skt) + skt->ops->set_socket(skt, 
&skt->socket); + if (skt->state & SOCKET_PRESENT) + skt->resume_status = socket_setup(skt, resume_delay); ++ skt->state |= SOCKET_IN_RESUME; + mutex_unlock(&skt->ops_mutex); + return 0; + } +@@ -484,7 +488,7 @@ static int socket_late_resume(struct pcmcia_socket *skt) + int ret = 0; + + mutex_lock(&skt->ops_mutex); +- skt->state &= ~SOCKET_SUSPEND; ++ skt->state &= ~(SOCKET_SUSPEND | SOCKET_IN_RESUME); + mutex_unlock(&skt->ops_mutex); + + if (!(skt->state & SOCKET_PRESENT)) { +diff --git a/drivers/pcmcia/cs_internal.h b/drivers/pcmcia/cs_internal.h +index 6765beadea95..03ec43802909 100644 +--- a/drivers/pcmcia/cs_internal.h ++++ b/drivers/pcmcia/cs_internal.h +@@ -70,6 +70,7 @@ struct pccard_resource_ops { + /* Flags in socket state */ + #define SOCKET_PRESENT 0x0008 + #define SOCKET_INUSE 0x0010 ++#define SOCKET_IN_RESUME 0x0040 + #define SOCKET_SUSPEND 0x0080 + #define SOCKET_WIN_REQ(i) (0x0100<<(i)) + #define SOCKET_CARDBUS 0x8000 +diff --git a/drivers/phy/qualcomm/phy-qcom-qmp.c b/drivers/phy/qualcomm/phy-qcom-qmp.c +index e17f0351ccc2..2526971f9929 100644 +--- a/drivers/phy/qualcomm/phy-qcom-qmp.c ++++ b/drivers/phy/qualcomm/phy-qcom-qmp.c +@@ -751,8 +751,6 @@ static int qcom_qmp_phy_poweroff(struct phy *phy) + struct qmp_phy *qphy = phy_get_drvdata(phy); + struct qcom_qmp *qmp = qphy->qmp; + +- clk_disable_unprepare(qphy->pipe_clk); +- + regulator_bulk_disable(qmp->cfg->num_vregs, qmp->vregs); + + return 0; +@@ -936,6 +934,8 @@ static int qcom_qmp_phy_exit(struct phy *phy) + const struct qmp_phy_cfg *cfg = qmp->cfg; + int i = cfg->num_clks; + ++ clk_disable_unprepare(qphy->pipe_clk); ++ + /* PHY reset */ + qphy_setbits(qphy->pcs, cfg->regs[QPHY_SW_RESET], SW_RESET); + +diff --git a/drivers/phy/rockchip/phy-rockchip-emmc.c b/drivers/phy/rockchip/phy-rockchip-emmc.c +index f1b24f18e9b2..b0d10934413f 100644 +--- a/drivers/phy/rockchip/phy-rockchip-emmc.c ++++ b/drivers/phy/rockchip/phy-rockchip-emmc.c +@@ -76,6 +76,10 @@ + #define PHYCTRL_OTAPDLYSEL_MASK 0xf + #define PHYCTRL_OTAPDLYSEL_SHIFT 0x7 + ++#define PHYCTRL_IS_CALDONE(x) \ ++ ((((x) >> PHYCTRL_CALDONE_SHIFT) & \ ++ PHYCTRL_CALDONE_MASK) == PHYCTRL_CALDONE_DONE) ++ + struct rockchip_emmc_phy { + unsigned int reg_offset; + struct regmap *reg_base; +@@ -90,6 +94,7 @@ static int rockchip_emmc_phy_power(struct phy *phy, bool on_off) + unsigned int freqsel = PHYCTRL_FREQSEL_200M; + unsigned long rate; + unsigned long timeout; ++ int ret; + + /* + * Keep phyctrl_pdb and phyctrl_endll low to allow +@@ -160,17 +165,19 @@ static int rockchip_emmc_phy_power(struct phy *phy, bool on_off) + PHYCTRL_PDB_SHIFT)); + + /* +- * According to the user manual, it asks driver to +- * wait 5us for calpad busy trimming ++ * According to the user manual, it asks driver to wait 5us for ++ * calpad busy trimming. However it is documented that this value is ++ * PVT(A.K.A process,voltage and temperature) relevant, so some ++ * failure cases are found which indicates we should be more tolerant ++ * to calpad busy trimming. 
+ */ +- udelay(5); +- regmap_read(rk_phy->reg_base, +- rk_phy->reg_offset + GRF_EMMCPHY_STATUS, +- &caldone); +- caldone = (caldone >> PHYCTRL_CALDONE_SHIFT) & PHYCTRL_CALDONE_MASK; +- if (caldone != PHYCTRL_CALDONE_DONE) { +- pr_err("rockchip_emmc_phy_power: caldone timeout.\n"); +- return -ETIMEDOUT; ++ ret = regmap_read_poll_timeout(rk_phy->reg_base, ++ rk_phy->reg_offset + GRF_EMMCPHY_STATUS, ++ caldone, PHYCTRL_IS_CALDONE(caldone), ++ 0, 50); ++ if (ret) { ++ pr_err("%s: caldone failed, ret=%d\n", __func__, ret); ++ return ret; + } + + /* Set the frequency of the DLL operation */ +diff --git a/drivers/pinctrl/devicetree.c b/drivers/pinctrl/devicetree.c +index 1ff6c3573493..b601039d6c69 100644 +--- a/drivers/pinctrl/devicetree.c ++++ b/drivers/pinctrl/devicetree.c +@@ -122,8 +122,10 @@ static int dt_to_map_one_config(struct pinctrl *p, + /* OK let's just assume this will appear later then */ + return -EPROBE_DEFER; + } +- if (!pctldev) +- pctldev = get_pinctrl_dev_from_of_node(np_pctldev); ++ /* If we're creating a hog we can use the passed pctldev */ ++ if (pctldev && (np_pctldev == p->dev->of_node)) ++ break; ++ pctldev = get_pinctrl_dev_from_of_node(np_pctldev); + if (pctldev) + break; + /* Do not defer probing of hogs (circular loop) */ +diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c +index 644c5beb05cb..e86d23279ac1 100644 +--- a/drivers/pinctrl/pinctrl-mcp23s08.c ++++ b/drivers/pinctrl/pinctrl-mcp23s08.c +@@ -771,6 +771,7 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev, + { + int status, ret; + bool mirror = false; ++ struct regmap_config *one_regmap_config = NULL; + + mutex_init(&mcp->lock); + +@@ -791,22 +792,36 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev, + switch (type) { + #ifdef CONFIG_SPI_MASTER + case MCP_TYPE_S08: +- mcp->regmap = devm_regmap_init(dev, &mcp23sxx_spi_regmap, mcp, +- &mcp23x08_regmap); +- mcp->reg_shift = 0; +- mcp->chip.ngpio = 8; +- mcp->chip.label = "mcp23s08"; +- break; +- + case MCP_TYPE_S17: ++ switch (type) { ++ case MCP_TYPE_S08: ++ one_regmap_config = ++ devm_kmemdup(dev, &mcp23x08_regmap, ++ sizeof(struct regmap_config), GFP_KERNEL); ++ mcp->reg_shift = 0; ++ mcp->chip.ngpio = 8; ++ mcp->chip.label = "mcp23s08"; ++ break; ++ case MCP_TYPE_S17: ++ one_regmap_config = ++ devm_kmemdup(dev, &mcp23x17_regmap, ++ sizeof(struct regmap_config), GFP_KERNEL); ++ mcp->reg_shift = 1; ++ mcp->chip.ngpio = 16; ++ mcp->chip.label = "mcp23s17"; ++ break; ++ } ++ if (!one_regmap_config) ++ return -ENOMEM; ++ ++ one_regmap_config->name = devm_kasprintf(dev, GFP_KERNEL, "%d", (addr & ~0x40) >> 1); + mcp->regmap = devm_regmap_init(dev, &mcp23sxx_spi_regmap, mcp, +- &mcp23x17_regmap); +- mcp->reg_shift = 1; +- mcp->chip.ngpio = 16; +- mcp->chip.label = "mcp23s17"; ++ one_regmap_config); + break; + + case MCP_TYPE_S18: ++ if (!one_regmap_config) ++ return -ENOMEM; + mcp->regmap = devm_regmap_init(dev, &mcp23sxx_spi_regmap, mcp, + &mcp23x17_regmap); + mcp->reg_shift = 1; +diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c +index 495432f3341b..95e5c5ea40af 100644 +--- a/drivers/pinctrl/qcom/pinctrl-msm.c ++++ b/drivers/pinctrl/qcom/pinctrl-msm.c +@@ -818,7 +818,7 @@ static int msm_gpio_init(struct msm_pinctrl *pctrl) + return -EINVAL; + + chip = &pctrl->chip; +- chip->base = 0; ++ chip->base = -1; + chip->ngpio = ngpio; + chip->label = dev_name(pctrl->dev); + chip->parent = pctrl->dev; +diff --git a/drivers/pinctrl/sh-pfc/pfc-r8a7796.c 
b/drivers/pinctrl/sh-pfc/pfc-r8a7796.c +index e5807d1ce0dc..74ee48303156 100644 +--- a/drivers/pinctrl/sh-pfc/pfc-r8a7796.c ++++ b/drivers/pinctrl/sh-pfc/pfc-r8a7796.c +@@ -1,7 +1,7 @@ + /* + * R8A7796 processor support - PFC hardware block. + * +- * Copyright (C) 2016 Renesas Electronics Corp. ++ * Copyright (C) 2016-2017 Renesas Electronics Corp. + * + * This file is based on the drivers/pinctrl/sh-pfc/pfc-r8a7795.c + * +@@ -477,7 +477,7 @@ FM(IP16_31_28) IP16_31_28 FM(IP17_31_28) IP17_31_28 + #define MOD_SEL1_26 FM(SEL_TIMER_TMU_0) FM(SEL_TIMER_TMU_1) + #define MOD_SEL1_25_24 FM(SEL_SSP1_1_0) FM(SEL_SSP1_1_1) FM(SEL_SSP1_1_2) FM(SEL_SSP1_1_3) + #define MOD_SEL1_23_22_21 FM(SEL_SSP1_0_0) FM(SEL_SSP1_0_1) FM(SEL_SSP1_0_2) FM(SEL_SSP1_0_3) FM(SEL_SSP1_0_4) F_(0, 0) F_(0, 0) F_(0, 0) +-#define MOD_SEL1_20 FM(SEL_SSI_0) FM(SEL_SSI_1) ++#define MOD_SEL1_20 FM(SEL_SSI1_0) FM(SEL_SSI1_1) + #define MOD_SEL1_19 FM(SEL_SPEED_PULSE_0) FM(SEL_SPEED_PULSE_1) + #define MOD_SEL1_18_17 FM(SEL_SIMCARD_0) FM(SEL_SIMCARD_1) FM(SEL_SIMCARD_2) FM(SEL_SIMCARD_3) + #define MOD_SEL1_16 FM(SEL_SDHI2_0) FM(SEL_SDHI2_1) +@@ -1218,7 +1218,7 @@ static const u16 pinmux_data[] = { + PINMUX_IPSR_GPSR(IP13_11_8, HSCK0), + PINMUX_IPSR_MSEL(IP13_11_8, MSIOF1_SCK_D, SEL_MSIOF1_3), + PINMUX_IPSR_MSEL(IP13_11_8, AUDIO_CLKB_A, SEL_ADG_B_0), +- PINMUX_IPSR_MSEL(IP13_11_8, SSI_SDATA1_B, SEL_SSI_1), ++ PINMUX_IPSR_MSEL(IP13_11_8, SSI_SDATA1_B, SEL_SSI1_1), + PINMUX_IPSR_MSEL(IP13_11_8, TS_SCK0_D, SEL_TSIF0_3), + PINMUX_IPSR_MSEL(IP13_11_8, STP_ISCLK_0_D, SEL_SSP1_0_3), + PINMUX_IPSR_MSEL(IP13_11_8, RIF0_CLK_C, SEL_DRIF0_2), +@@ -1226,14 +1226,14 @@ static const u16 pinmux_data[] = { + + PINMUX_IPSR_GPSR(IP13_15_12, HRX0), + PINMUX_IPSR_MSEL(IP13_15_12, MSIOF1_RXD_D, SEL_MSIOF1_3), +- PINMUX_IPSR_MSEL(IP13_15_12, SSI_SDATA2_B, SEL_SSI_1), ++ PINMUX_IPSR_MSEL(IP13_15_12, SSI_SDATA2_B, SEL_SSI2_1), + PINMUX_IPSR_MSEL(IP13_15_12, TS_SDEN0_D, SEL_TSIF0_3), + PINMUX_IPSR_MSEL(IP13_15_12, STP_ISEN_0_D, SEL_SSP1_0_3), + PINMUX_IPSR_MSEL(IP13_15_12, RIF0_D0_C, SEL_DRIF0_2), + + PINMUX_IPSR_GPSR(IP13_19_16, HTX0), + PINMUX_IPSR_MSEL(IP13_19_16, MSIOF1_TXD_D, SEL_MSIOF1_3), +- PINMUX_IPSR_MSEL(IP13_19_16, SSI_SDATA9_B, SEL_SSI_1), ++ PINMUX_IPSR_MSEL(IP13_19_16, SSI_SDATA9_B, SEL_SSI9_1), + PINMUX_IPSR_MSEL(IP13_19_16, TS_SDAT0_D, SEL_TSIF0_3), + PINMUX_IPSR_MSEL(IP13_19_16, STP_ISD_0_D, SEL_SSP1_0_3), + PINMUX_IPSR_MSEL(IP13_19_16, RIF0_D1_C, SEL_DRIF0_2), +@@ -1241,7 +1241,7 @@ static const u16 pinmux_data[] = { + PINMUX_IPSR_GPSR(IP13_23_20, HCTS0_N), + PINMUX_IPSR_MSEL(IP13_23_20, RX2_B, SEL_SCIF2_1), + PINMUX_IPSR_MSEL(IP13_23_20, MSIOF1_SYNC_D, SEL_MSIOF1_3), +- PINMUX_IPSR_MSEL(IP13_23_20, SSI_SCK9_A, SEL_SSI_0), ++ PINMUX_IPSR_MSEL(IP13_23_20, SSI_SCK9_A, SEL_SSI9_0), + PINMUX_IPSR_MSEL(IP13_23_20, TS_SPSYNC0_D, SEL_TSIF0_3), + PINMUX_IPSR_MSEL(IP13_23_20, STP_ISSYNC_0_D, SEL_SSP1_0_3), + PINMUX_IPSR_MSEL(IP13_23_20, RIF0_SYNC_C, SEL_DRIF0_2), +@@ -1250,7 +1250,7 @@ static const u16 pinmux_data[] = { + PINMUX_IPSR_GPSR(IP13_27_24, HRTS0_N), + PINMUX_IPSR_MSEL(IP13_27_24, TX2_B, SEL_SCIF2_1), + PINMUX_IPSR_MSEL(IP13_27_24, MSIOF1_SS1_D, SEL_MSIOF1_3), +- PINMUX_IPSR_MSEL(IP13_27_24, SSI_WS9_A, SEL_SSI_0), ++ PINMUX_IPSR_MSEL(IP13_27_24, SSI_WS9_A, SEL_SSI9_0), + PINMUX_IPSR_MSEL(IP13_27_24, STP_IVCXO27_0_D, SEL_SSP1_0_3), + PINMUX_IPSR_MSEL(IP13_27_24, BPFCLK_A, SEL_FM_0), + PINMUX_IPSR_GPSR(IP13_27_24, AUDIO_CLKOUT2_A), +@@ -1265,7 +1265,7 @@ static const u16 pinmux_data[] = { + PINMUX_IPSR_MSEL(IP14_3_0, RX5_A, SEL_SCIF5_0), + 
PINMUX_IPSR_MSEL(IP14_3_0, NFWP_N_A, SEL_NDF_0), + PINMUX_IPSR_MSEL(IP14_3_0, AUDIO_CLKA_C, SEL_ADG_A_2), +- PINMUX_IPSR_MSEL(IP14_3_0, SSI_SCK2_A, SEL_SSI_0), ++ PINMUX_IPSR_MSEL(IP14_3_0, SSI_SCK2_A, SEL_SSI2_0), + PINMUX_IPSR_MSEL(IP14_3_0, STP_IVCXO27_0_C, SEL_SSP1_0_2), + PINMUX_IPSR_GPSR(IP14_3_0, AUDIO_CLKOUT3_A), + PINMUX_IPSR_MSEL(IP14_3_0, TCLK1_B, SEL_TIMER_TMU_1), +@@ -1274,7 +1274,7 @@ static const u16 pinmux_data[] = { + PINMUX_IPSR_MSEL(IP14_7_4, TX5_A, SEL_SCIF5_0), + PINMUX_IPSR_MSEL(IP14_7_4, MSIOF1_SS2_D, SEL_MSIOF1_3), + PINMUX_IPSR_MSEL(IP14_7_4, AUDIO_CLKC_A, SEL_ADG_C_0), +- PINMUX_IPSR_MSEL(IP14_7_4, SSI_WS2_A, SEL_SSI_0), ++ PINMUX_IPSR_MSEL(IP14_7_4, SSI_WS2_A, SEL_SSI2_0), + PINMUX_IPSR_MSEL(IP14_7_4, STP_OPWM_0_D, SEL_SSP1_0_3), + PINMUX_IPSR_GPSR(IP14_7_4, AUDIO_CLKOUT_D), + PINMUX_IPSR_MSEL(IP14_7_4, SPEEDIN_B, SEL_SPEED_PULSE_1), +@@ -1302,10 +1302,10 @@ static const u16 pinmux_data[] = { + PINMUX_IPSR_MSEL(IP14_31_28, MSIOF1_SS2_F, SEL_MSIOF1_5), + + /* IPSR15 */ +- PINMUX_IPSR_MSEL(IP15_3_0, SSI_SDATA1_A, SEL_SSI_0), ++ PINMUX_IPSR_MSEL(IP15_3_0, SSI_SDATA1_A, SEL_SSI1_0), + +- PINMUX_IPSR_MSEL(IP15_7_4, SSI_SDATA2_A, SEL_SSI_0), +- PINMUX_IPSR_MSEL(IP15_7_4, SSI_SCK1_B, SEL_SSI_1), ++ PINMUX_IPSR_MSEL(IP15_7_4, SSI_SDATA2_A, SEL_SSI2_0), ++ PINMUX_IPSR_MSEL(IP15_7_4, SSI_SCK1_B, SEL_SSI1_1), + + PINMUX_IPSR_GPSR(IP15_11_8, SSI_SCK349), + PINMUX_IPSR_MSEL(IP15_11_8, MSIOF1_SS1_A, SEL_MSIOF1_0), +@@ -1391,11 +1391,11 @@ static const u16 pinmux_data[] = { + PINMUX_IPSR_MSEL(IP16_27_24, RIF1_D1_A, SEL_DRIF1_0), + PINMUX_IPSR_MSEL(IP16_27_24, RIF3_D1_A, SEL_DRIF3_0), + +- PINMUX_IPSR_MSEL(IP16_31_28, SSI_SDATA9_A, SEL_SSI_0), ++ PINMUX_IPSR_MSEL(IP16_31_28, SSI_SDATA9_A, SEL_SSI9_0), + PINMUX_IPSR_MSEL(IP16_31_28, HSCK2_B, SEL_HSCIF2_1), + PINMUX_IPSR_MSEL(IP16_31_28, MSIOF1_SS1_C, SEL_MSIOF1_2), + PINMUX_IPSR_MSEL(IP16_31_28, HSCK1_A, SEL_HSCIF1_0), +- PINMUX_IPSR_MSEL(IP16_31_28, SSI_WS1_B, SEL_SSI_1), ++ PINMUX_IPSR_MSEL(IP16_31_28, SSI_WS1_B, SEL_SSI1_1), + PINMUX_IPSR_GPSR(IP16_31_28, SCK1), + PINMUX_IPSR_MSEL(IP16_31_28, STP_IVCXO27_1_A, SEL_SSP1_1_0), + PINMUX_IPSR_MSEL(IP16_31_28, SCK5_A, SEL_SCIF5_0), +@@ -1427,7 +1427,7 @@ static const u16 pinmux_data[] = { + + PINMUX_IPSR_GPSR(IP17_19_16, USB1_PWEN), + PINMUX_IPSR_MSEL(IP17_19_16, SIM0_CLK_C, SEL_SIMCARD_2), +- PINMUX_IPSR_MSEL(IP17_19_16, SSI_SCK1_A, SEL_SSI_0), ++ PINMUX_IPSR_MSEL(IP17_19_16, SSI_SCK1_A, SEL_SSI1_0), + PINMUX_IPSR_MSEL(IP17_19_16, TS_SCK0_E, SEL_TSIF0_4), + PINMUX_IPSR_MSEL(IP17_19_16, STP_ISCLK_0_E, SEL_SSP1_0_4), + PINMUX_IPSR_MSEL(IP17_19_16, FMCLK_B, SEL_FM_1), +@@ -1437,7 +1437,7 @@ static const u16 pinmux_data[] = { + + PINMUX_IPSR_GPSR(IP17_23_20, USB1_OVC), + PINMUX_IPSR_MSEL(IP17_23_20, MSIOF1_SS2_C, SEL_MSIOF1_2), +- PINMUX_IPSR_MSEL(IP17_23_20, SSI_WS1_A, SEL_SSI_0), ++ PINMUX_IPSR_MSEL(IP17_23_20, SSI_WS1_A, SEL_SSI1_0), + PINMUX_IPSR_MSEL(IP17_23_20, TS_SDAT0_E, SEL_TSIF0_4), + PINMUX_IPSR_MSEL(IP17_23_20, STP_ISD_0_E, SEL_SSP1_0_4), + PINMUX_IPSR_MSEL(IP17_23_20, FMIN_B, SEL_FM_1), +@@ -1447,7 +1447,7 @@ static const u16 pinmux_data[] = { + + PINMUX_IPSR_GPSR(IP17_27_24, USB30_PWEN), + PINMUX_IPSR_GPSR(IP17_27_24, AUDIO_CLKOUT_B), +- PINMUX_IPSR_MSEL(IP17_27_24, SSI_SCK2_B, SEL_SSI_1), ++ PINMUX_IPSR_MSEL(IP17_27_24, SSI_SCK2_B, SEL_SSI2_1), + PINMUX_IPSR_MSEL(IP17_27_24, TS_SDEN1_D, SEL_TSIF1_3), + PINMUX_IPSR_MSEL(IP17_27_24, STP_ISEN_1_D, SEL_SSP1_1_3), + PINMUX_IPSR_MSEL(IP17_27_24, STP_OPWM_0_E, SEL_SSP1_0_4), +@@ -1459,7 +1459,7 @@ static const u16 pinmux_data[] 
= { + + PINMUX_IPSR_GPSR(IP17_31_28, USB30_OVC), + PINMUX_IPSR_GPSR(IP17_31_28, AUDIO_CLKOUT1_B), +- PINMUX_IPSR_MSEL(IP17_31_28, SSI_WS2_B, SEL_SSI_1), ++ PINMUX_IPSR_MSEL(IP17_31_28, SSI_WS2_B, SEL_SSI2_1), + PINMUX_IPSR_MSEL(IP17_31_28, TS_SPSYNC1_D, SEL_TSIF1_3), + PINMUX_IPSR_MSEL(IP17_31_28, STP_ISSYNC_1_D, SEL_SSP1_1_3), + PINMUX_IPSR_MSEL(IP17_31_28, STP_IVCXO27_0_E, SEL_SSP1_0_4), +@@ -1470,7 +1470,7 @@ static const u16 pinmux_data[] = { + /* IPSR18 */ + PINMUX_IPSR_GPSR(IP18_3_0, GP6_30), + PINMUX_IPSR_GPSR(IP18_3_0, AUDIO_CLKOUT2_B), +- PINMUX_IPSR_MSEL(IP18_3_0, SSI_SCK9_B, SEL_SSI_1), ++ PINMUX_IPSR_MSEL(IP18_3_0, SSI_SCK9_B, SEL_SSI9_1), + PINMUX_IPSR_MSEL(IP18_3_0, TS_SDEN0_E, SEL_TSIF0_4), + PINMUX_IPSR_MSEL(IP18_3_0, STP_ISEN_0_E, SEL_SSP1_0_4), + PINMUX_IPSR_MSEL(IP18_3_0, RIF2_D0_B, SEL_DRIF2_1), +@@ -1480,7 +1480,7 @@ static const u16 pinmux_data[] = { + + PINMUX_IPSR_GPSR(IP18_7_4, GP6_31), + PINMUX_IPSR_GPSR(IP18_7_4, AUDIO_CLKOUT3_B), +- PINMUX_IPSR_MSEL(IP18_7_4, SSI_WS9_B, SEL_SSI_1), ++ PINMUX_IPSR_MSEL(IP18_7_4, SSI_WS9_B, SEL_SSI9_1), + PINMUX_IPSR_MSEL(IP18_7_4, TS_SPSYNC0_E, SEL_TSIF0_4), + PINMUX_IPSR_MSEL(IP18_7_4, STP_ISSYNC_0_E, SEL_SSP1_0_4), + PINMUX_IPSR_MSEL(IP18_7_4, RIF2_D1_B, SEL_DRIF2_1), +diff --git a/drivers/platform/x86/dell-smbios-base.c b/drivers/platform/x86/dell-smbios-base.c +index 2485c80a9fdd..33fb2a20458a 100644 +--- a/drivers/platform/x86/dell-smbios-base.c ++++ b/drivers/platform/x86/dell-smbios-base.c +@@ -514,7 +514,7 @@ static int build_tokens_sysfs(struct platform_device *dev) + continue; + + loop_fail_create_value: +- kfree(value_name); ++ kfree(location_name); + goto out_unwind_strings; + } + smbios_attribute_group.attrs = token_attrs; +@@ -525,7 +525,7 @@ static int build_tokens_sysfs(struct platform_device *dev) + return 0; + + out_unwind_strings: +- for (i = i-1; i > 0; i--) { ++ while (i--) { + kfree(token_location_attrs[i].attr.name); + kfree(token_value_attrs[i].attr.name); + } +diff --git a/drivers/power/supply/ltc2941-battery-gauge.c b/drivers/power/supply/ltc2941-battery-gauge.c +index 4cfa3f0cd689..cc7c516bb417 100644 +--- a/drivers/power/supply/ltc2941-battery-gauge.c ++++ b/drivers/power/supply/ltc2941-battery-gauge.c +@@ -317,15 +317,15 @@ static int ltc294x_get_temperature(const struct ltc294x_info *info, int *val) + + if (info->id == LTC2942_ID) { + reg = LTC2942_REG_TEMPERATURE_MSB; +- value = 60000; /* Full-scale is 600 Kelvin */ ++ value = 6000; /* Full-scale is 600 Kelvin */ + } else { + reg = LTC2943_REG_TEMPERATURE_MSB; +- value = 51000; /* Full-scale is 510 Kelvin */ ++ value = 5100; /* Full-scale is 510 Kelvin */ + } + ret = ltc294x_read_regs(info->client, reg, &datar[0], 2); + value *= (datar[0] << 8) | datar[1]; +- /* Convert to centidegrees */ +- *val = value / 0xFFFF - 27215; ++ /* Convert to tenths of degree Celsius */ ++ *val = value / 0xFFFF - 2722; + return ret; + } + +diff --git a/drivers/power/supply/max17042_battery.c b/drivers/power/supply/max17042_battery.c +index 35dde81b1c9b..1a568df383db 100644 +--- a/drivers/power/supply/max17042_battery.c ++++ b/drivers/power/supply/max17042_battery.c +@@ -1053,6 +1053,7 @@ static int max17042_probe(struct i2c_client *client, + + i2c_set_clientdata(client, chip); + psy_cfg.drv_data = chip; ++ psy_cfg.of_node = dev->of_node; + + /* When current is not measured, + * CURRENT_NOW and CURRENT_AVG properties should be invisible. 
*/ +diff --git a/drivers/regulator/gpio-regulator.c b/drivers/regulator/gpio-regulator.c +index 0fce06acfaec..a2eb50719c7b 100644 +--- a/drivers/regulator/gpio-regulator.c ++++ b/drivers/regulator/gpio-regulator.c +@@ -271,8 +271,7 @@ static int gpio_regulator_probe(struct platform_device *pdev) + drvdata->desc.name = kstrdup(config->supply_name, GFP_KERNEL); + if (drvdata->desc.name == NULL) { + dev_err(&pdev->dev, "Failed to allocate supply name\n"); +- ret = -ENOMEM; +- goto err; ++ return -ENOMEM; + } + + if (config->nr_gpios != 0) { +@@ -292,7 +291,7 @@ static int gpio_regulator_probe(struct platform_device *pdev) + dev_err(&pdev->dev, + "Could not obtain regulator setting GPIOs: %d\n", + ret); +- goto err_memstate; ++ goto err_memgpio; + } + } + +@@ -303,7 +302,7 @@ static int gpio_regulator_probe(struct platform_device *pdev) + if (drvdata->states == NULL) { + dev_err(&pdev->dev, "Failed to allocate state data\n"); + ret = -ENOMEM; +- goto err_memgpio; ++ goto err_stategpio; + } + drvdata->nr_states = config->nr_states; + +@@ -324,7 +323,7 @@ static int gpio_regulator_probe(struct platform_device *pdev) + default: + dev_err(&pdev->dev, "No regulator type set\n"); + ret = -EINVAL; +- goto err_memgpio; ++ goto err_memstate; + } + + /* build initial state from gpio init data. */ +@@ -361,22 +360,21 @@ static int gpio_regulator_probe(struct platform_device *pdev) + if (IS_ERR(drvdata->dev)) { + ret = PTR_ERR(drvdata->dev); + dev_err(&pdev->dev, "Failed to register regulator: %d\n", ret); +- goto err_stategpio; ++ goto err_memstate; + } + + platform_set_drvdata(pdev, drvdata); + + return 0; + +-err_stategpio: +- gpio_free_array(drvdata->gpios, drvdata->nr_gpios); + err_memstate: + kfree(drvdata->states); ++err_stategpio: ++ gpio_free_array(drvdata->gpios, drvdata->nr_gpios); + err_memgpio: + kfree(drvdata->gpios); + err_name: + kfree(drvdata->desc.name); +-err: + return ret; + } + +diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c +index 092ed6efb3ec..f47264fa1940 100644 +--- a/drivers/regulator/of_regulator.c ++++ b/drivers/regulator/of_regulator.c +@@ -321,6 +321,7 @@ int of_regulator_match(struct device *dev, struct device_node *node, + dev_err(dev, + "failed to parse DT for regulator %s\n", + child->name); ++ of_node_put(child); + return -EINVAL; + } + match->of_node = of_node_get(child); +diff --git a/drivers/remoteproc/imx_rproc.c b/drivers/remoteproc/imx_rproc.c +index 633268e9d550..05bcbce2013a 100644 +--- a/drivers/remoteproc/imx_rproc.c ++++ b/drivers/remoteproc/imx_rproc.c +@@ -339,8 +339,10 @@ static int imx_rproc_probe(struct platform_device *pdev) + } + + dcfg = of_device_get_match_data(dev); +- if (!dcfg) +- return -EINVAL; ++ if (!dcfg) { ++ ret = -EINVAL; ++ goto err_put_rproc; ++ } + + priv = rproc->priv; + priv->rproc = rproc; +diff --git a/drivers/s390/cio/vfio_ccw_fsm.c b/drivers/s390/cio/vfio_ccw_fsm.c +index e96b85579f21..3c800642134e 100644 +--- a/drivers/s390/cio/vfio_ccw_fsm.c ++++ b/drivers/s390/cio/vfio_ccw_fsm.c +@@ -129,6 +129,11 @@ static void fsm_io_request(struct vfio_ccw_private *private, + if (scsw->cmd.fctl & SCSW_FCTL_START_FUNC) { + orb = (union orb *)io_region->orb_area; + ++ /* Don't try to build a cp if transport mode is specified. 
*/ ++ if (orb->tm.b) { ++ io_region->ret_code = -EOPNOTSUPP; ++ goto err_out; ++ } + io_region->ret_code = cp_init(&private->cp, mdev_dev(mdev), + orb); + if (io_region->ret_code) +diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c +index 9be34d37c356..3f3cb72e0c0c 100644 +--- a/drivers/scsi/sr.c ++++ b/drivers/scsi/sr.c +@@ -525,6 +525,8 @@ static int sr_block_open(struct block_device *bdev, fmode_t mode) + struct scsi_cd *cd; + int ret = -ENXIO; + ++ check_disk_change(bdev); ++ + mutex_lock(&sr_mutex); + cd = scsi_cd_get(bdev->bd_disk); + if (cd) { +@@ -585,18 +587,28 @@ static int sr_block_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd, + static unsigned int sr_block_check_events(struct gendisk *disk, + unsigned int clearing) + { +- struct scsi_cd *cd = scsi_cd(disk); ++ unsigned int ret = 0; ++ struct scsi_cd *cd; + +- if (atomic_read(&cd->device->disk_events_disable_depth)) ++ cd = scsi_cd_get(disk); ++ if (!cd) + return 0; + +- return cdrom_check_events(&cd->cdi, clearing); ++ if (!atomic_read(&cd->device->disk_events_disable_depth)) ++ ret = cdrom_check_events(&cd->cdi, clearing); ++ ++ scsi_cd_put(cd); ++ return ret; + } + + static int sr_block_revalidate_disk(struct gendisk *disk) + { +- struct scsi_cd *cd = scsi_cd(disk); + struct scsi_sense_hdr sshdr; ++ struct scsi_cd *cd; ++ ++ cd = scsi_cd_get(disk); ++ if (!cd) ++ return -ENXIO; + + /* if the unit is not ready, nothing more to do */ + if (scsi_test_unit_ready(cd->device, SR_TIMEOUT, MAX_RETRIES, &sshdr)) +@@ -605,6 +617,7 @@ static int sr_block_revalidate_disk(struct gendisk *disk) + sr_cd_check(&cd->cdi); + get_sectorsize(cd); + out: ++ scsi_cd_put(cd); + return 0; + } + +diff --git a/drivers/scsi/sr_ioctl.c b/drivers/scsi/sr_ioctl.c +index 2a21f2d48592..35fab1e18adc 100644 +--- a/drivers/scsi/sr_ioctl.c ++++ b/drivers/scsi/sr_ioctl.c +@@ -188,9 +188,13 @@ int sr_do_ioctl(Scsi_CD *cd, struct packet_command *cgc) + struct scsi_device *SDev; + struct scsi_sense_hdr sshdr; + int result, err = 0, retries = 0; ++ unsigned char sense_buffer[SCSI_SENSE_BUFFERSIZE], *senseptr = NULL; + + SDev = cd->device; + ++ if (cgc->sense) ++ senseptr = sense_buffer; ++ + retry: + if (!scsi_block_when_processing_errors(SDev)) { + err = -ENODEV; +@@ -198,10 +202,12 @@ int sr_do_ioctl(Scsi_CD *cd, struct packet_command *cgc) + } + + result = scsi_execute(SDev, cgc->cmd, cgc->data_direction, +- cgc->buffer, cgc->buflen, +- (unsigned char *)cgc->sense, &sshdr, ++ cgc->buffer, cgc->buflen, senseptr, &sshdr, + cgc->timeout, IOCTL_RETRIES, 0, 0, NULL); + ++ if (cgc->sense) ++ memcpy(cgc->sense, sense_buffer, sizeof(*cgc->sense)); ++ + /* Minimal error checking. Ignore cases we know about, and report the rest. 
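
The wcnss_ctrl one-liner above matters because the request's header length field and the payload stride differ: the source pointer must advance by the fixed fragment size, or every fragment after the first re-sends bytes already transmitted. A hedged sketch of the loop shape, with the payload size and transport invented:

#include <stddef.h>
#include <string.h>

#define NV_FRAGMENT_SIZE 3072	/* invented payload size per request */

struct nv_req {
	unsigned int seq;
	size_t len;
	unsigned char frag[NV_FRAGMENT_SIZE];
};

static void send_req(const struct nv_req *req)
{
	(void)req;	/* transport omitted in this sketch */
}

static void download_nv(const unsigned char *data, size_t left)
{
	struct nv_req req = { 0 };

	while (left > 0) {
		size_t chunk = left < NV_FRAGMENT_SIZE ? left : NV_FRAGMENT_SIZE;

		memcpy(req.frag, data, chunk);
		req.len = chunk;
		send_req(&req);
		req.seq++;

		data += NV_FRAGMENT_SIZE;	/* fixed stride, not req.len */
		left -= chunk;
	}
}
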
*/ + if (driver_byte(result) != 0) { + switch (sshdr.sense_key) { +diff --git a/drivers/soc/amlogic/meson-gx-pwrc-vpu.c b/drivers/soc/amlogic/meson-gx-pwrc-vpu.c +index 2bdeebc48901..2625ef06c10e 100644 +--- a/drivers/soc/amlogic/meson-gx-pwrc-vpu.c ++++ b/drivers/soc/amlogic/meson-gx-pwrc-vpu.c +@@ -224,7 +224,11 @@ static int meson_gx_pwrc_vpu_probe(struct platform_device *pdev) + + static void meson_gx_pwrc_vpu_shutdown(struct platform_device *pdev) + { +- meson_gx_pwrc_vpu_power_off(&vpu_hdmi_pd.genpd); ++ bool powered_off; ++ ++ powered_off = meson_gx_pwrc_vpu_get_power(&vpu_hdmi_pd); ++ if (!powered_off) ++ meson_gx_pwrc_vpu_power_off(&vpu_hdmi_pd.genpd); + } + + static const struct of_device_id meson_gx_pwrc_vpu_match_table[] = { +diff --git a/drivers/soc/qcom/wcnss_ctrl.c b/drivers/soc/qcom/wcnss_ctrl.c +index d008e5b82db4..df3ccb30bc2d 100644 +--- a/drivers/soc/qcom/wcnss_ctrl.c ++++ b/drivers/soc/qcom/wcnss_ctrl.c +@@ -249,7 +249,7 @@ static int wcnss_download_nv(struct wcnss_ctrl *wcnss, bool *expect_cbc) + /* Increment for next fragment */ + req->seq++; + +- data += req->hdr.len; ++ data += NV_FRAGMENT_SIZE; + left -= NV_FRAGMENT_SIZE; + } while (left > 0); + +diff --git a/drivers/soc/renesas/r8a77970-sysc.c b/drivers/soc/renesas/r8a77970-sysc.c +index 8c614164718e..caf894f193ed 100644 +--- a/drivers/soc/renesas/r8a77970-sysc.c ++++ b/drivers/soc/renesas/r8a77970-sysc.c +@@ -25,12 +25,12 @@ static const struct rcar_sysc_area r8a77970_areas[] __initconst = { + PD_CPU_NOCR }, + { "cr7", 0x240, 0, R8A77970_PD_CR7, R8A77970_PD_ALWAYS_ON }, + { "a3ir", 0x180, 0, R8A77970_PD_A3IR, R8A77970_PD_ALWAYS_ON }, +- { "a2ir0", 0x400, 0, R8A77970_PD_A2IR0, R8A77970_PD_ALWAYS_ON }, +- { "a2ir1", 0x400, 1, R8A77970_PD_A2IR1, R8A77970_PD_A2IR0 }, +- { "a2ir2", 0x400, 2, R8A77970_PD_A2IR2, R8A77970_PD_A2IR0 }, +- { "a2ir3", 0x400, 3, R8A77970_PD_A2IR3, R8A77970_PD_A2IR0 }, +- { "a2sc0", 0x400, 4, R8A77970_PD_A2SC0, R8A77970_PD_ALWAYS_ON }, +- { "a2sc1", 0x400, 5, R8A77970_PD_A2SC1, R8A77970_PD_A2SC0 }, ++ { "a2ir0", 0x400, 0, R8A77970_PD_A2IR0, R8A77970_PD_A3IR }, ++ { "a2ir1", 0x400, 1, R8A77970_PD_A2IR1, R8A77970_PD_A3IR }, ++ { "a2ir2", 0x400, 2, R8A77970_PD_A2IR2, R8A77970_PD_A3IR }, ++ { "a2ir3", 0x400, 3, R8A77970_PD_A2IR3, R8A77970_PD_A3IR }, ++ { "a2sc0", 0x400, 4, R8A77970_PD_A2SC0, R8A77970_PD_A3IR }, ++ { "a2sc1", 0x400, 5, R8A77970_PD_A2SC1, R8A77970_PD_A3IR }, + }; + + const struct rcar_sysc_info r8a77970_sysc_info __initconst = { +diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c +index ff01f865a173..6573152ce893 100644 +--- a/drivers/spi/spi-bcm-qspi.c ++++ b/drivers/spi/spi-bcm-qspi.c +@@ -1255,7 +1255,7 @@ int bcm_qspi_probe(struct platform_device *pdev, + qspi->base[MSPI] = devm_ioremap_resource(dev, res); + if (IS_ERR(qspi->base[MSPI])) { + ret = PTR_ERR(qspi->base[MSPI]); +- goto qspi_probe_err; ++ goto qspi_resource_err; + } + } else { + goto qspi_resource_err; +@@ -1266,7 +1266,7 @@ int bcm_qspi_probe(struct platform_device *pdev, + qspi->base[BSPI] = devm_ioremap_resource(dev, res); + if (IS_ERR(qspi->base[BSPI])) { + ret = PTR_ERR(qspi->base[BSPI]); +- goto qspi_probe_err; ++ goto qspi_resource_err; + } + qspi->bspi_mode = true; + } else { +diff --git a/drivers/watchdog/asm9260_wdt.c b/drivers/watchdog/asm9260_wdt.c +index 7dd0da644a7f..2cf56b459d84 100644 +--- a/drivers/watchdog/asm9260_wdt.c ++++ b/drivers/watchdog/asm9260_wdt.c +@@ -292,14 +292,14 @@ static int asm9260_wdt_probe(struct platform_device *pdev) + if (IS_ERR(priv->iobase)) + return 
PTR_ERR(priv->iobase); + +- ret = asm9260_wdt_get_dt_clks(priv); +- if (ret) +- return ret; +- + priv->rst = devm_reset_control_get_exclusive(&pdev->dev, "wdt_rst"); + if (IS_ERR(priv->rst)) + return PTR_ERR(priv->rst); + ++ ret = asm9260_wdt_get_dt_clks(priv); ++ if (ret) ++ return ret; ++ + wdd = &priv->wdd; + wdd->info = &asm9260_wdt_ident; + wdd->ops = &asm9260_wdt_ops; +diff --git a/drivers/watchdog/aspeed_wdt.c b/drivers/watchdog/aspeed_wdt.c +index ca5b91e2eb92..a5b8eb21201f 100644 +--- a/drivers/watchdog/aspeed_wdt.c ++++ b/drivers/watchdog/aspeed_wdt.c +@@ -46,6 +46,7 @@ MODULE_DEVICE_TABLE(of, aspeed_wdt_of_table); + #define WDT_RELOAD_VALUE 0x04 + #define WDT_RESTART 0x08 + #define WDT_CTRL 0x0C ++#define WDT_CTRL_BOOT_SECONDARY BIT(7) + #define WDT_CTRL_RESET_MODE_SOC (0x00 << 5) + #define WDT_CTRL_RESET_MODE_FULL_CHIP (0x01 << 5) + #define WDT_CTRL_RESET_MODE_ARM_CPU (0x10 << 5) +@@ -158,6 +159,7 @@ static int aspeed_wdt_restart(struct watchdog_device *wdd, + { + struct aspeed_wdt *wdt = to_aspeed_wdt(wdd); + ++ wdt->ctrl &= ~WDT_CTRL_BOOT_SECONDARY; + aspeed_wdt_enable(wdt, 128 * WDT_RATE_1MHZ / 1000); + + mdelay(1000); +@@ -232,16 +234,21 @@ static int aspeed_wdt_probe(struct platform_device *pdev) + wdt->ctrl |= WDT_CTRL_RESET_MODE_SOC | WDT_CTRL_RESET_SYSTEM; + } else { + if (!strcmp(reset_type, "cpu")) +- wdt->ctrl |= WDT_CTRL_RESET_MODE_ARM_CPU; ++ wdt->ctrl |= WDT_CTRL_RESET_MODE_ARM_CPU | ++ WDT_CTRL_RESET_SYSTEM; + else if (!strcmp(reset_type, "soc")) +- wdt->ctrl |= WDT_CTRL_RESET_MODE_SOC; ++ wdt->ctrl |= WDT_CTRL_RESET_MODE_SOC | ++ WDT_CTRL_RESET_SYSTEM; + else if (!strcmp(reset_type, "system")) +- wdt->ctrl |= WDT_CTRL_RESET_SYSTEM; ++ wdt->ctrl |= WDT_CTRL_RESET_MODE_FULL_CHIP | ++ WDT_CTRL_RESET_SYSTEM; + else if (strcmp(reset_type, "none")) + return -EINVAL; + } + if (of_property_read_bool(np, "aspeed,external-signal")) + wdt->ctrl |= WDT_CTRL_WDT_EXT; ++ if (of_property_read_bool(np, "aspeed,alt-boot")) ++ wdt->ctrl |= WDT_CTRL_BOOT_SECONDARY; + + if (readl(wdt->base + WDT_CTRL) & WDT_CTRL_ENABLE) { + /* +diff --git a/drivers/watchdog/davinci_wdt.c b/drivers/watchdog/davinci_wdt.c +index 3e4c592c239f..6c6594261cb7 100644 +--- a/drivers/watchdog/davinci_wdt.c ++++ b/drivers/watchdog/davinci_wdt.c +@@ -236,15 +236,22 @@ static int davinci_wdt_probe(struct platform_device *pdev) + + wdt_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); + davinci_wdt->base = devm_ioremap_resource(dev, wdt_mem); +- if (IS_ERR(davinci_wdt->base)) +- return PTR_ERR(davinci_wdt->base); ++ if (IS_ERR(davinci_wdt->base)) { ++ ret = PTR_ERR(davinci_wdt->base); ++ goto err_clk_disable; ++ } + + ret = watchdog_register_device(wdd); +- if (ret < 0) { +- clk_disable_unprepare(davinci_wdt->clk); ++ if (ret) { + dev_err(dev, "cannot register watchdog device\n"); ++ goto err_clk_disable; + } + ++ return 0; ++ ++err_clk_disable: ++ clk_disable_unprepare(davinci_wdt->clk); ++ + return ret; + } + +diff --git a/drivers/watchdog/dw_wdt.c b/drivers/watchdog/dw_wdt.c +index c2f4ff516230..918357bccf5e 100644 +--- a/drivers/watchdog/dw_wdt.c ++++ b/drivers/watchdog/dw_wdt.c +@@ -34,6 +34,7 @@ + + #define WDOG_CONTROL_REG_OFFSET 0x00 + #define WDOG_CONTROL_REG_WDT_EN_MASK 0x01 ++#define WDOG_CONTROL_REG_RESP_MODE_MASK 0x02 + #define WDOG_TIMEOUT_RANGE_REG_OFFSET 0x04 + #define WDOG_TIMEOUT_RANGE_TOPINIT_SHIFT 4 + #define WDOG_CURRENT_COUNT_REG_OFFSET 0x08 +@@ -121,14 +122,23 @@ static int dw_wdt_set_timeout(struct watchdog_device *wdd, unsigned int top_s) + return 0; + } + ++static void 
dw_wdt_arm_system_reset(struct dw_wdt *dw_wdt) ++{ ++ u32 val = readl(dw_wdt->regs + WDOG_CONTROL_REG_OFFSET); ++ ++ /* Disable interrupt mode; always perform system reset. */ ++ val &= ~WDOG_CONTROL_REG_RESP_MODE_MASK; ++ /* Enable watchdog. */ ++ val |= WDOG_CONTROL_REG_WDT_EN_MASK; ++ writel(val, dw_wdt->regs + WDOG_CONTROL_REG_OFFSET); ++} ++ + static int dw_wdt_start(struct watchdog_device *wdd) + { + struct dw_wdt *dw_wdt = to_dw_wdt(wdd); + + dw_wdt_set_timeout(wdd, wdd->timeout); +- +- writel(WDOG_CONTROL_REG_WDT_EN_MASK, +- dw_wdt->regs + WDOG_CONTROL_REG_OFFSET); ++ dw_wdt_arm_system_reset(dw_wdt); + + return 0; + } +@@ -152,16 +162,13 @@ static int dw_wdt_restart(struct watchdog_device *wdd, + unsigned long action, void *data) + { + struct dw_wdt *dw_wdt = to_dw_wdt(wdd); +- u32 val; + + writel(0, dw_wdt->regs + WDOG_TIMEOUT_RANGE_REG_OFFSET); +- val = readl(dw_wdt->regs + WDOG_CONTROL_REG_OFFSET); +- if (val & WDOG_CONTROL_REG_WDT_EN_MASK) ++ if (dw_wdt_is_enabled(dw_wdt)) + writel(WDOG_COUNTER_RESTART_KICK_VALUE, + dw_wdt->regs + WDOG_COUNTER_RESTART_REG_OFFSET); + else +- writel(WDOG_CONTROL_REG_WDT_EN_MASK, +- dw_wdt->regs + WDOG_CONTROL_REG_OFFSET); ++ dw_wdt_arm_system_reset(dw_wdt); + + /* wait for reset to assert... */ + mdelay(500); +diff --git a/drivers/watchdog/sprd_wdt.c b/drivers/watchdog/sprd_wdt.c +index a8b280ff33e0..b4d484a42b70 100644 +--- a/drivers/watchdog/sprd_wdt.c ++++ b/drivers/watchdog/sprd_wdt.c +@@ -154,8 +154,10 @@ static int sprd_wdt_enable(struct sprd_wdt *wdt) + if (ret) + return ret; + ret = clk_prepare_enable(wdt->rtc_enable); +- if (ret) ++ if (ret) { ++ clk_disable_unprepare(wdt->enable); + return ret; ++ } + + sprd_wdt_unlock(wdt->base); + val = readl_relaxed(wdt->base + SPRD_WDT_CTRL); +diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c +index 5bb72d3f8337..3530a196d959 100644 +--- a/drivers/xen/swiotlb-xen.c ++++ b/drivers/xen/swiotlb-xen.c +@@ -365,7 +365,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr, + * physical address */ + phys = xen_bus_to_phys(dev_addr); + +- if (((dev_addr + size - 1 > dma_mask)) || ++ if (((dev_addr + size - 1 <= dma_mask)) || + range_straddles_page_boundary(phys, size)) + xen_destroy_contiguous_region(phys, order); + +diff --git a/drivers/xen/xen-acpi-processor.c b/drivers/xen/xen-acpi-processor.c +index 23e391d3ec01..22863f5f2474 100644 +--- a/drivers/xen/xen-acpi-processor.c ++++ b/drivers/xen/xen-acpi-processor.c +@@ -362,9 +362,9 @@ read_acpi_id(acpi_handle handle, u32 lvl, void *context, void **rv) + } + /* There are more ACPI Processor objects than in x2APIC or MADT. + * This can happen with incorrect ACPI SSDT declerations. 
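
dw_wdt_arm_system_reset() above is a read-modify-write of a single control register: clear the response-mode bit so the first timeout resets the system instead of raising an interrupt first, and set the enable bit in the same programmed value. The same bit manipulation against a stand-in register:

#include <stdint.h>
#include <stdio.h>

#define WDOG_CONTROL_REG_WDT_EN_MASK	0x01
#define WDOG_CONTROL_REG_RESP_MODE_MASK	0x02

static uint32_t fake_reg;	/* stands in for the MMIO control register */

static uint32_t rd(void) { return fake_reg; }
static void wr(uint32_t v) { fake_reg = v; }

static void arm_system_reset(void)
{
	uint32_t val = rd();

	val &= ~WDOG_CONTROL_REG_RESP_MODE_MASK; /* reset at once, no IRQ first */
	val |= WDOG_CONTROL_REG_WDT_EN_MASK;	 /* and enable the counter */
	wr(val);
}

int main(void)
{
	fake_reg = 0x02;			/* disabled, interrupt mode */
	arm_system_reset();
	printf("ctrl=0x%02x\n", fake_reg);	/* prints ctrl=0x01 */
	return 0;
}
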
*/ +- if (acpi_id > nr_acpi_bits) { +- pr_debug("We only have %u, trying to set %u\n", +- nr_acpi_bits, acpi_id); ++ if (acpi_id >= nr_acpi_bits) { ++ pr_debug("max acpi id %u, trying to set %u\n", ++ nr_acpi_bits - 1, acpi_id); + return AE_OK; + } + /* OK, There is a ACPI Processor object */ +diff --git a/drivers/zorro/zorro.c b/drivers/zorro/zorro.c +index cc1b1ac57d61..47728477297e 100644 +--- a/drivers/zorro/zorro.c ++++ b/drivers/zorro/zorro.c +@@ -16,6 +16,7 @@ + #include + #include + #include ++#include + #include + + #include +@@ -185,6 +186,17 @@ static int __init amiga_zorro_probe(struct platform_device *pdev) + z->dev.parent = &bus->dev; + z->dev.bus = &zorro_bus_type; + z->dev.id = i; ++ switch (z->rom.er_Type & ERT_TYPEMASK) { ++ case ERT_ZORROIII: ++ z->dev.coherent_dma_mask = DMA_BIT_MASK(32); ++ break; ++ ++ case ERT_ZORROII: ++ default: ++ z->dev.coherent_dma_mask = DMA_BIT_MASK(24); ++ break; ++ } ++ z->dev.dma_mask = &z->dev.coherent_dma_mask; + } + + /* ... then register them */ +diff --git a/fs/affs/namei.c b/fs/affs/namei.c +index d8aa0ae3d037..1ed0fa4c4d48 100644 +--- a/fs/affs/namei.c ++++ b/fs/affs/namei.c +@@ -206,9 +206,10 @@ affs_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags) + + affs_lock_dir(dir); + bh = affs_find_entry(dir, dentry); +- affs_unlock_dir(dir); +- if (IS_ERR(bh)) ++ if (IS_ERR(bh)) { ++ affs_unlock_dir(dir); + return ERR_CAST(bh); ++ } + if (bh) { + u32 ino = bh->b_blocknr; + +@@ -222,10 +223,13 @@ affs_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags) + } + affs_brelse(bh); + inode = affs_iget(sb, ino); +- if (IS_ERR(inode)) ++ if (IS_ERR(inode)) { ++ affs_unlock_dir(dir); + return ERR_CAST(inode); ++ } + } + d_add(dentry, inode); ++ affs_unlock_dir(dir); + return NULL; + } + +diff --git a/fs/aio.c b/fs/aio.c +index 6bcd3fb5265a..63c0437ab135 100644 +--- a/fs/aio.c ++++ b/fs/aio.c +@@ -1087,8 +1087,8 @@ static struct kioctx *lookup_ioctx(unsigned long ctx_id) + + ctx = rcu_dereference(table->table[id]); + if (ctx && ctx->user_id == ctx_id) { +- percpu_ref_get(&ctx->users); +- ret = ctx; ++ if (percpu_ref_tryget_live(&ctx->users)) ++ ret = ctx; + } + out: + rcu_read_unlock(); +diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c +index 7efbc4d1128b..f5247ad86970 100644 +--- a/fs/btrfs/dev-replace.c ++++ b/fs/btrfs/dev-replace.c +@@ -307,7 +307,7 @@ void btrfs_after_dev_replace_commit(struct btrfs_fs_info *fs_info) + + static char* btrfs_dev_name(struct btrfs_device *device) + { +- if (test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state)) ++ if (!device || test_bit(BTRFS_DEV_STATE_MISSING, &device->dev_state)) + return ""; + else + return rcu_str_deref(device->name); +diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c +index fea78d138073..02e39a7f22ec 100644 +--- a/fs/btrfs/disk-io.c ++++ b/fs/btrfs/disk-io.c +@@ -1108,7 +1108,7 @@ static struct btrfs_subvolume_writers *btrfs_alloc_subvolume_writers(void) + if (!writers) + return ERR_PTR(-ENOMEM); + +- ret = percpu_counter_init(&writers->counter, 0, GFP_KERNEL); ++ ret = percpu_counter_init(&writers->counter, 0, GFP_NOFS); + if (ret < 0) { + kfree(writers); + return ERR_PTR(ret); +@@ -3735,7 +3735,8 @@ void close_ctree(struct btrfs_fs_info *fs_info) + btrfs_err(fs_info, "commit super ret %d", ret); + } + +- if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) ++ if (test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state) || ++ test_bit(BTRFS_FS_STATE_TRANS_ABORTED, &fs_info->fs_state)) + btrfs_error_commit_super(fs_info); + + 
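
The xen-acpi-processor hunk above fixes an off-by-one: a bitmap with nr_acpi_bits entries has valid ids 0 .. nr_acpi_bits - 1, so ids must be rejected with '>=', not '>'. The guard in isolation:

#include <stdbool.h>
#include <stddef.h>

#define BITS_PER_WORD (8 * sizeof(unsigned long))

/* Valid ids for a map of nbits entries are 0 .. nbits - 1; testing with
 * '>' would let id == nbits touch one slot past the end. */
static bool set_bit_checked(unsigned long *map, size_t nbits, size_t id)
{
	if (id >= nbits)
		return false;
	map[id / BITS_PER_WORD] |= 1UL << (id % BITS_PER_WORD);
	return true;
}
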
kthread_stop(fs_info->transaction_kthread); +diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c +index 53ddfafa440b..b45b840c2217 100644 +--- a/fs/btrfs/extent-tree.c ++++ b/fs/btrfs/extent-tree.c +@@ -4657,6 +4657,7 @@ static int do_chunk_alloc(struct btrfs_trans_handle *trans, + if (wait_for_alloc) { + mutex_unlock(&fs_info->chunk_mutex); + wait_for_alloc = 0; ++ cond_resched(); + goto again; + } + +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index f370bdc126b8..8b031f40a2f5 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -6632,8 +6632,7 @@ static int btrfs_mknod(struct inode *dir, struct dentry *dentry, + goto out_unlock_inode; + } else { + btrfs_update_inode(trans, root, inode); +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + } + + out_unlock: +@@ -6709,8 +6708,7 @@ static int btrfs_create(struct inode *dir, struct dentry *dentry, + goto out_unlock_inode; + + BTRFS_I(inode)->io_tree.ops = &btrfs_extent_io_ops; +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + + out_unlock: + btrfs_end_transaction(trans); +@@ -6855,12 +6853,7 @@ static int btrfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) + if (err) + goto out_fail_inode; + +- d_instantiate(dentry, inode); +- /* +- * mkdir is special. We're unlocking after we call d_instantiate +- * to avoid a race with nfsd calling d_instantiate. +- */ +- unlock_new_inode(inode); ++ d_instantiate_new(dentry, inode); + drop_on_err = 0; + + out_fail: +@@ -9238,7 +9231,8 @@ static int btrfs_truncate(struct inode *inode) + BTRFS_EXTENT_DATA_KEY); + trans->block_rsv = &fs_info->trans_block_rsv; + if (ret != -ENOSPC && ret != -EAGAIN) { +- err = ret; ++ if (ret < 0) ++ err = ret; + break; + } + +@@ -10372,8 +10366,7 @@ static int btrfs_symlink(struct inode *dir, struct dentry *dentry, + goto out_unlock_inode; + } + +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + + out_unlock: + btrfs_end_transaction(trans); +diff --git a/fs/btrfs/tests/qgroup-tests.c b/fs/btrfs/tests/qgroup-tests.c +index 90204b166643..160eb2fba726 100644 +--- a/fs/btrfs/tests/qgroup-tests.c ++++ b/fs/btrfs/tests/qgroup-tests.c +@@ -63,7 +63,7 @@ static int insert_normal_tree_ref(struct btrfs_root *root, u64 bytenr, + btrfs_set_extent_generation(leaf, item, 1); + btrfs_set_extent_flags(leaf, item, BTRFS_EXTENT_FLAG_TREE_BLOCK); + block_info = (struct btrfs_tree_block_info *)(item + 1); +- btrfs_set_tree_block_level(leaf, block_info, 1); ++ btrfs_set_tree_block_level(leaf, block_info, 0); + iref = (struct btrfs_extent_inline_ref *)(block_info + 1); + if (parent > 0) { + btrfs_set_extent_inline_ref_type(leaf, iref, +diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c +index 04f07144b45c..c070ce7fecc6 100644 +--- a/fs/btrfs/transaction.c ++++ b/fs/btrfs/transaction.c +@@ -319,7 +319,7 @@ static int record_root_in_trans(struct btrfs_trans_handle *trans, + if ((test_bit(BTRFS_ROOT_REF_COWS, &root->state) && + root->last_trans < trans->transid) || force) { + WARN_ON(root == fs_info->extent_root); +- WARN_ON(root->commit_root != root->node); ++ WARN_ON(!force && root->commit_root != root->node); + + /* + * see below for IN_TRANS_SETUP usage rules +@@ -1365,6 +1365,14 @@ static int qgroup_account_snapshot(struct btrfs_trans_handle *trans, + if (!test_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags)) + return 0; + ++ /* ++ * Ensure dirty @src will be commited. 
Or, after comming ++ * commit_fs_roots() and switch_commit_roots(), any dirty but not ++ * recorded root will never be updated again, causing an outdated root ++ * item. ++ */ ++ record_root_in_trans(trans, src, 1); ++ + /* + * We are going to commit transaction, see btrfs_commit_transaction() + * comment for reason locking tree_log_mutex +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index ac6ea1503cd6..eb53c21b223a 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -2356,8 +2356,10 @@ static noinline int replay_dir_deletes(struct btrfs_trans_handle *trans, + nritems = btrfs_header_nritems(path->nodes[0]); + if (path->slots[0] >= nritems) { + ret = btrfs_next_leaf(root, path); +- if (ret) ++ if (ret == 1) + break; ++ else if (ret < 0) ++ goto out; + } + btrfs_item_key_to_cpu(path->nodes[0], &found_key, + path->slots[0]); +@@ -2461,13 +2463,41 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb, + if (ret) + break; + +- /* for regular files, make sure corresponding +- * orphan item exist. extents past the new EOF +- * will be truncated later by orphan cleanup. ++ /* ++ * Before replaying extents, truncate the inode to its ++ * size. We need to do it now and not after log replay ++ * because before an fsync we can have prealloc extents ++ * added beyond the inode's i_size. If we did it after, ++ * through orphan cleanup for example, we would drop ++ * those prealloc extents just after replaying them. + */ + if (S_ISREG(mode)) { +- ret = insert_orphan_item(wc->trans, root, +- key.objectid); ++ struct inode *inode; ++ u64 from; ++ ++ inode = read_one_inode(root, key.objectid); ++ if (!inode) { ++ ret = -EIO; ++ break; ++ } ++ from = ALIGN(i_size_read(inode), ++ root->fs_info->sectorsize); ++ ret = btrfs_drop_extents(wc->trans, root, inode, ++ from, (u64)-1, 1); ++ /* ++ * If the nlink count is zero here, the iput ++ * will free the inode. We bump it to make ++ * sure it doesn't get freed until the link ++ * count fixup is done. ++ */ ++ if (!ret) { ++ if (inode->i_nlink == 0) ++ inc_nlink(inode); ++ /* Update link count and nbytes. */ ++ ret = btrfs_update_inode(wc->trans, ++ root, inode); ++ } ++ iput(inode); + if (ret) + break; + } +@@ -3518,8 +3548,11 @@ static noinline int log_dir_items(struct btrfs_trans_handle *trans, + * from this directory and from this transaction + */ + ret = btrfs_next_leaf(root, path); +- if (ret == 1) { +- last_offset = (u64)-1; ++ if (ret) { ++ if (ret == 1) ++ last_offset = (u64)-1; ++ else ++ err = ret; + goto done; + } + btrfs_item_key_to_cpu(path->nodes[0], &tmp, path->slots[0]); +@@ -3972,6 +4005,7 @@ static noinline int copy_items(struct btrfs_trans_handle *trans, + ASSERT(ret == 0); + src = src_path->nodes[0]; + i = 0; ++ need_find_last_extent = true; + } + + btrfs_item_key_to_cpu(src, &key, i); +@@ -4321,6 +4355,31 @@ static int btrfs_log_changed_extents(struct btrfs_trans_handle *trans, + num++; + } + ++ /* ++ * Add all prealloc extents beyond the inode's i_size to make sure we ++ * don't lose them after doing a fast fsync and replaying the log. ++ */ ++ if (inode->flags & BTRFS_INODE_PREALLOC) { ++ struct rb_node *node; ++ ++ for (node = rb_last(&tree->map); node; node = rb_prev(node)) { ++ em = rb_entry(node, struct extent_map, rb_node); ++ if (em->start < i_size_read(&inode->vfs_inode)) ++ break; ++ if (!list_empty(&em->list)) ++ continue; ++ /* Same as above loop. 
*/ ++ if (++num > 32768) { ++ list_del_init(&tree->modified_extents); ++ ret = -EFBIG; ++ goto process; ++ } ++ refcount_inc(&em->refs); ++ set_bit(EXTENT_FLAG_LOGGING, &em->flags); ++ list_add_tail(&em->list, &extents); ++ } ++ } ++ + list_sort(NULL, &extents, extent_cmp); + btrfs_get_logged_extents(inode, logged_list, logged_start, logged_end); + /* +diff --git a/fs/dcache.c b/fs/dcache.c +index 8945e6cabd93..06463b780e57 100644 +--- a/fs/dcache.c ++++ b/fs/dcache.c +@@ -1865,6 +1865,28 @@ void d_instantiate(struct dentry *entry, struct inode * inode) + } + EXPORT_SYMBOL(d_instantiate); + ++/* ++ * This should be equivalent to d_instantiate() + unlock_new_inode(), ++ * with lockdep-related part of unlock_new_inode() done before ++ * anything else. Use that instead of open-coding d_instantiate()/ ++ * unlock_new_inode() combinations. ++ */ ++void d_instantiate_new(struct dentry *entry, struct inode *inode) ++{ ++ BUG_ON(!hlist_unhashed(&entry->d_u.d_alias)); ++ BUG_ON(!inode); ++ lockdep_annotate_inode_mutex_key(inode); ++ security_d_instantiate(entry, inode); ++ spin_lock(&inode->i_lock); ++ __d_instantiate(entry, inode); ++ WARN_ON(!(inode->i_state & I_NEW)); ++ inode->i_state &= ~I_NEW; ++ smp_mb(); ++ wake_up_bit(&inode->i_state, __I_NEW); ++ spin_unlock(&inode->i_lock); ++} ++EXPORT_SYMBOL(d_instantiate_new); ++ + /** + * d_instantiate_no_diralias - instantiate a non-aliased dentry + * @entry: dentry to complete +diff --git a/fs/ecryptfs/inode.c b/fs/ecryptfs/inode.c +index 847904aa63a9..7bba8f2693b2 100644 +--- a/fs/ecryptfs/inode.c ++++ b/fs/ecryptfs/inode.c +@@ -283,8 +283,7 @@ ecryptfs_create(struct inode *directory_inode, struct dentry *ecryptfs_dentry, + iget_failed(ecryptfs_inode); + goto out; + } +- unlock_new_inode(ecryptfs_inode); +- d_instantiate(ecryptfs_dentry, ecryptfs_inode); ++ d_instantiate_new(ecryptfs_dentry, ecryptfs_inode); + out: + return rc; + } +diff --git a/fs/ext2/namei.c b/fs/ext2/namei.c +index e078075dc66f..aa6ec191cac0 100644 +--- a/fs/ext2/namei.c ++++ b/fs/ext2/namei.c +@@ -41,8 +41,7 @@ static inline int ext2_add_nondir(struct dentry *dentry, struct inode *inode) + { + int err = ext2_add_link(dentry, inode); + if (!err) { +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + return 0; + } + inode_dec_link_count(inode); +@@ -269,8 +268,7 @@ static int ext2_mkdir(struct inode * dir, struct dentry * dentry, umode_t mode) + if (err) + goto out_fail; + +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + out: + return err; + +diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c +index b1f21e3a0763..4a09063ce1d2 100644 +--- a/fs/ext4/namei.c ++++ b/fs/ext4/namei.c +@@ -2411,8 +2411,7 @@ static int ext4_add_nondir(handle_t *handle, + int err = ext4_add_entry(handle, dentry, inode); + if (!err) { + ext4_mark_inode_dirty(handle, inode); +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + return 0; + } + drop_nlink(inode); +@@ -2651,8 +2650,7 @@ static int ext4_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) + err = ext4_mark_inode_dirty(handle, dir); + if (err) + goto out_clear_inode; +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + if (IS_DIRSYNC(dir)) + ext4_handle_sync(handle); + +diff --git a/fs/ext4/super.c b/fs/ext4/super.c +index b8dace7abe09..4c4ff4b3593c 100644 +--- a/fs/ext4/super.c ++++ b/fs/ext4/super.c +@@ -3663,6 +3663,12 @@ static int 
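
d_instantiate_new() above folds unlock_new_inode() into d_instantiate(): clear I_NEW, issue the barrier, wake anyone sleeping on the bit. The underlying shape is "publish completed state, then wake waiters"; a pthread analogue of that shape only, not of the kernel primitive:

#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
static bool inode_new = true;	/* analogue of the I_NEW state bit */

/* Publisher: finish setup, clear the flag, then wake waiters. */
static void publish_inode(void)
{
	pthread_mutex_lock(&lock);
	inode_new = false;		/* analogue of i_state &= ~I_NEW */
	pthread_cond_broadcast(&cond);	/* analogue of wake_up_bit() */
	pthread_mutex_unlock(&lock);
}

/* Waiter: sleep until the inode is fully set up. */
static void wait_on_inode(void)
{
	pthread_mutex_lock(&lock);
	while (inode_new)
		pthread_cond_wait(&cond, &lock);
	pthread_mutex_unlock(&lock);
}
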
ext4_fill_super(struct super_block *sb, void *data, int silent) + ext4_msg(sb, KERN_INFO, "mounting ext2 file system " + "using the ext4 subsystem"); + else { ++ /* ++ * If we're probing be silent, if this looks like ++ * it's actually an ext[34] filesystem. ++ */ ++ if (silent && ext4_feature_set_ok(sb, sb_rdonly(sb))) ++ goto failed_mount; + ext4_msg(sb, KERN_ERR, "couldn't mount as ext2 due " + "to feature incompatibilities"); + goto failed_mount; +@@ -3674,6 +3680,12 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent) + ext4_msg(sb, KERN_INFO, "mounting ext3 file system " + "using the ext4 subsystem"); + else { ++ /* ++ * If we're probing be silent, if this looks like ++ * it's actually an ext4 filesystem. ++ */ ++ if (silent && ext4_feature_set_ok(sb, sb_rdonly(sb))) ++ goto failed_mount; + ext4_msg(sb, KERN_ERR, "couldn't mount as ext3 due " + "to feature incompatibilities"); + goto failed_mount; +diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c +index 512dca8abc7d..e77271c2144d 100644 +--- a/fs/f2fs/checkpoint.c ++++ b/fs/f2fs/checkpoint.c +@@ -1136,6 +1136,8 @@ static void update_ckpt_flags(struct f2fs_sb_info *sbi, struct cp_control *cpc) + + if (cpc->reason & CP_TRIMMED) + __set_ckpt_flags(ckpt, CP_TRIMMED_FLAG); ++ else ++ __clear_ckpt_flags(ckpt, CP_TRIMMED_FLAG); + + if (cpc->reason & CP_UMOUNT) + __set_ckpt_flags(ckpt, CP_UMOUNT_FLAG); +@@ -1162,6 +1164,39 @@ static void update_ckpt_flags(struct f2fs_sb_info *sbi, struct cp_control *cpc) + spin_unlock_irqrestore(&sbi->cp_lock, flags); + } + ++static void commit_checkpoint(struct f2fs_sb_info *sbi, ++ void *src, block_t blk_addr) ++{ ++ struct writeback_control wbc = { ++ .for_reclaim = 0, ++ }; ++ ++ /* ++ * pagevec_lookup_tag and lock_page again will take ++ * some extra time. Therefore, update_meta_pages and ++ * sync_meta_pages are combined in this function. 
++ */ ++ struct page *page = grab_meta_page(sbi, blk_addr); ++ int err; ++ ++ memcpy(page_address(page), src, PAGE_SIZE); ++ set_page_dirty(page); ++ ++ f2fs_wait_on_page_writeback(page, META, true); ++ f2fs_bug_on(sbi, PageWriteback(page)); ++ if (unlikely(!clear_page_dirty_for_io(page))) ++ f2fs_bug_on(sbi, 1); ++ ++ /* writeout cp pack 2 page */ ++ err = __f2fs_write_meta_page(page, &wbc, FS_CP_META_IO); ++ f2fs_bug_on(sbi, err); ++ ++ f2fs_put_page(page, 0); ++ ++ /* submit checkpoint (with barrier if NOBARRIER is not set) */ ++ f2fs_submit_merged_write(sbi, META_FLUSH); ++} ++ + static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) + { + struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi); +@@ -1264,16 +1299,6 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) + } + } + +- /* need to wait for end_io results */ +- wait_on_all_pages_writeback(sbi); +- if (unlikely(f2fs_cp_error(sbi))) +- return -EIO; +- +- /* flush all device cache */ +- err = f2fs_flush_device_cache(sbi); +- if (err) +- return err; +- + /* write out checkpoint buffer at block 0 */ + update_meta_page(sbi, ckpt, start_blk++); + +@@ -1301,26 +1326,26 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc) + start_blk += NR_CURSEG_NODE_TYPE; + } + +- /* writeout checkpoint block */ +- update_meta_page(sbi, ckpt, start_blk); ++ /* update user_block_counts */ ++ sbi->last_valid_block_count = sbi->total_valid_block_count; ++ percpu_counter_set(&sbi->alloc_valid_block_count, 0); + +- /* wait for previous submitted node/meta pages writeback */ ++ /* Here, we have one bio having CP pack except cp pack 2 page */ ++ sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO); ++ ++ /* wait for previous submitted meta pages writeback */ + wait_on_all_pages_writeback(sbi); + + if (unlikely(f2fs_cp_error(sbi))) + return -EIO; + +- filemap_fdatawait_range(NODE_MAPPING(sbi), 0, LLONG_MAX); +- filemap_fdatawait_range(META_MAPPING(sbi), 0, LLONG_MAX); +- +- /* update user_block_counts */ +- sbi->last_valid_block_count = sbi->total_valid_block_count; +- percpu_counter_set(&sbi->alloc_valid_block_count, 0); +- +- /* Here, we only have one bio having CP pack */ +- sync_meta_pages(sbi, META_FLUSH, LONG_MAX, FS_CP_META_IO); ++ /* flush all device cache */ ++ err = f2fs_flush_device_cache(sbi); ++ if (err) ++ return err; + +- /* wait for previous submitted meta pages writeback */ ++ /* barrier and flush checkpoint cp pack 2 page if it can */ ++ commit_checkpoint(sbi, ckpt, start_blk); + wait_on_all_pages_writeback(sbi); + + release_ino_entry(sbi, false); +diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c +index ff2352a0ed15..aff6c2ed1c02 100644 +--- a/fs/f2fs/extent_cache.c ++++ b/fs/f2fs/extent_cache.c +@@ -706,6 +706,9 @@ void f2fs_drop_extent_tree(struct inode *inode) + struct f2fs_sb_info *sbi = F2FS_I_SB(inode); + struct extent_tree *et = F2FS_I(inode)->extent_tree; + ++ if (!f2fs_may_extent_tree(inode)) ++ return; ++ + set_inode_flag(inode, FI_NO_EXTENT); + + write_lock(&et->lock); +diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c +index 672a542e5464..c59b7888d356 100644 +--- a/fs/f2fs/file.c ++++ b/fs/f2fs/file.c +@@ -1348,8 +1348,12 @@ static int f2fs_zero_range(struct inode *inode, loff_t offset, loff_t len, + } + + out: +- if (!(mode & FALLOC_FL_KEEP_SIZE) && i_size_read(inode) < new_size) +- f2fs_i_size_write(inode, new_size); ++ if (new_size > i_size_read(inode)) { ++ if (mode & FALLOC_FL_KEEP_SIZE) ++ file_set_keep_isize(inode); ++ else ++ f2fs_i_size_write(inode, 
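
The do_checkpoint() reordering above enforces a commit protocol: everything the checkpoint references must be durable before the single validating "cp pack 2" page is issued with a flush. The same ordering in a userspace write-ahead sketch, with the file layout invented:

#define _POSIX_C_SOURCE 200809L
#include <unistd.h>

/* Payload first, flush, then the record that validates it, then flush
 * again: the commit record can never be durable while the payload is not. */
static int commit(int fd, const char *payload, size_t len)
{
	static const char commit_rec[8] = "COMMIT!";

	if (pwrite(fd, payload, len, 8) != (ssize_t)len)
		return -1;
	if (fdatasync(fd))		/* barrier: payload reaches media */
		return -1;
	if (pwrite(fd, commit_rec, sizeof(commit_rec), 0) !=
	    (ssize_t)sizeof(commit_rec))
		return -1;
	return fdatasync(fd);		/* make the commit record durable too */
}
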
new_size); ++ } + out_sem: + up_write(&F2FS_I(inode)->i_mmap_sem); + +diff --git a/fs/f2fs/namei.c b/fs/f2fs/namei.c +index b68e7b03959f..860c9dd4bb42 100644 +--- a/fs/f2fs/namei.c ++++ b/fs/f2fs/namei.c +@@ -218,8 +218,7 @@ static int f2fs_create(struct inode *dir, struct dentry *dentry, umode_t mode, + + alloc_nid_done(sbi, ino); + +- d_instantiate(dentry, inode); +- unlock_new_inode(inode); ++ d_instantiate_new(dentry, inode); + + if (IS_DIRSYNC(dir)) + f2fs_sync_fs(sbi->sb, 1); +@@ -526,8 +525,7 @@ static int f2fs_symlink(struct inode *dir, struct dentry *dentry, + err = page_symlink(inode, disk_link.name, disk_link.len); + + err_out: +- d_instantiate(dentry, inode); +- unlock_new_inode(inode); ++ d_instantiate_new(dentry, inode); + + /* + * Let's flush symlink data in order to avoid broken symlink as much as +@@ -590,8 +588,7 @@ static int f2fs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) + + alloc_nid_done(sbi, inode->i_ino); + +- d_instantiate(dentry, inode); +- unlock_new_inode(inode); ++ d_instantiate_new(dentry, inode); + + if (IS_DIRSYNC(dir)) + f2fs_sync_fs(sbi->sb, 1); +@@ -642,8 +639,7 @@ static int f2fs_mknod(struct inode *dir, struct dentry *dentry, + + alloc_nid_done(sbi, inode->i_ino); + +- d_instantiate(dentry, inode); +- unlock_new_inode(inode); ++ d_instantiate_new(dentry, inode); + + if (IS_DIRSYNC(dir)) + f2fs_sync_fs(sbi->sb, 1); +diff --git a/fs/fscache/page.c b/fs/fscache/page.c +index 961029e04027..da2fb58f2ecb 100644 +--- a/fs/fscache/page.c ++++ b/fs/fscache/page.c +@@ -776,6 +776,7 @@ static void fscache_write_op(struct fscache_operation *_op) + + _enter("{OP%x,%d}", op->op.debug_id, atomic_read(&op->op.usage)); + ++again: + spin_lock(&object->lock); + cookie = object->cookie; + +@@ -816,10 +817,6 @@ static void fscache_write_op(struct fscache_operation *_op) + goto superseded; + page = results[0]; + _debug("gang %d [%lx]", n, page->index); +- if (page->index >= op->store_limit) { +- fscache_stat(&fscache_n_store_pages_over_limit); +- goto superseded; +- } + + radix_tree_tag_set(&cookie->stores, page->index, + FSCACHE_COOKIE_STORING_TAG); +@@ -829,6 +826,9 @@ static void fscache_write_op(struct fscache_operation *_op) + spin_unlock(&cookie->stores_lock); + spin_unlock(&object->lock); + ++ if (page->index >= op->store_limit) ++ goto discard_page; ++ + fscache_stat(&fscache_n_store_pages); + fscache_stat(&fscache_n_cop_write_page); + ret = object->cache->ops->write_page(op, page); +@@ -844,6 +844,11 @@ static void fscache_write_op(struct fscache_operation *_op) + _leave(""); + return; + ++discard_page: ++ fscache_stat(&fscache_n_store_pages_over_limit); ++ fscache_end_page_write(object, page); ++ goto again; ++ + superseded: + /* this writer is going away and there aren't any more things to + * write */ +diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c +index 51f940e76c5e..de28800691c6 100644 +--- a/fs/gfs2/bmap.c ++++ b/fs/gfs2/bmap.c +@@ -1344,6 +1344,7 @@ static inline bool walk_done(struct gfs2_sbd *sdp, + static int punch_hole(struct gfs2_inode *ip, u64 offset, u64 length) + { + struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode); ++ u64 maxsize = sdp->sd_heightsize[ip->i_height]; + struct metapath mp = {}; + struct buffer_head *dibh, *bh; + struct gfs2_holder rd_gh; +@@ -1359,6 +1360,14 @@ static int punch_hole(struct gfs2_inode *ip, u64 offset, u64 length) + u64 prev_bnr = 0; + __be64 *start, *end; + ++ if (offset >= maxsize) { ++ /* ++ * The starting point lies beyond the allocated meta-data; ++ * there are no blocks do deallocate. 
++ */ ++ return 0; ++ } ++ + /* + * The start position of the hole is defined by lblock, start_list, and + * start_aligned. The end position of the hole is defined by lend, +@@ -1372,7 +1381,6 @@ static int punch_hole(struct gfs2_inode *ip, u64 offset, u64 length) + */ + + if (length) { +- u64 maxsize = sdp->sd_heightsize[ip->i_height]; + u64 end_offset = offset + length; + u64 lend; + +diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c +index 4f88e201b3f0..2edd3a9a7b79 100644 +--- a/fs/gfs2/file.c ++++ b/fs/gfs2/file.c +@@ -809,7 +809,7 @@ static long __gfs2_fallocate(struct file *file, int mode, loff_t offset, loff_t + struct gfs2_inode *ip = GFS2_I(inode); + struct gfs2_alloc_parms ap = { .aflags = 0, }; + unsigned int data_blocks = 0, ind_blocks = 0, rblocks; +- loff_t bytes, max_bytes, max_blks = UINT_MAX; ++ loff_t bytes, max_bytes, max_blks; + int error; + const loff_t pos = offset; + const loff_t count = len; +@@ -861,7 +861,8 @@ static long __gfs2_fallocate(struct file *file, int mode, loff_t offset, loff_t + return error; + /* ap.allowed tells us how many blocks quota will allow + * us to write. Check if this reduces max_blks */ +- if (ap.allowed && ap.allowed < max_blks) ++ max_blks = UINT_MAX; ++ if (ap.allowed) + max_blks = ap.allowed; + + error = gfs2_inplace_reserve(ip, &ap); +diff --git a/fs/gfs2/quota.h b/fs/gfs2/quota.h +index 5e47c935a515..836f29480be6 100644 +--- a/fs/gfs2/quota.h ++++ b/fs/gfs2/quota.h +@@ -45,6 +45,8 @@ static inline int gfs2_quota_lock_check(struct gfs2_inode *ip, + { + struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode); + int ret; ++ ++ ap->allowed = UINT_MAX; /* Assume we are permitted a whole lot */ + if (sdp->sd_args.ar_quota == GFS2_QUOTA_OFF) + return 0; + ret = gfs2_quota_lock(ip, NO_UID_QUOTA_CHANGE, NO_GID_QUOTA_CHANGE); +diff --git a/fs/jffs2/dir.c b/fs/jffs2/dir.c +index 0a754f38462e..e5a6deb38e1e 100644 +--- a/fs/jffs2/dir.c ++++ b/fs/jffs2/dir.c +@@ -209,8 +209,7 @@ static int jffs2_create(struct inode *dir_i, struct dentry *dentry, + __func__, inode->i_ino, inode->i_mode, inode->i_nlink, + f->inocache->pino_nlink, inode->i_mapping->nrpages); + +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + return 0; + + fail: +@@ -430,8 +429,7 @@ static int jffs2_symlink (struct inode *dir_i, struct dentry *dentry, const char + mutex_unlock(&dir_f->sem); + jffs2_complete_reservation(c); + +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + return 0; + + fail: +@@ -575,8 +573,7 @@ static int jffs2_mkdir (struct inode *dir_i, struct dentry *dentry, umode_t mode + mutex_unlock(&dir_f->sem); + jffs2_complete_reservation(c); + +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + return 0; + + fail: +@@ -747,8 +744,7 @@ static int jffs2_mknod (struct inode *dir_i, struct dentry *dentry, umode_t mode + mutex_unlock(&dir_f->sem); + jffs2_complete_reservation(c); + +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + return 0; + + fail: +diff --git a/fs/jfs/namei.c b/fs/jfs/namei.c +index b41596d71858..56c3fcbfe80e 100644 +--- a/fs/jfs/namei.c ++++ b/fs/jfs/namei.c +@@ -178,8 +178,7 @@ static int jfs_create(struct inode *dip, struct dentry *dentry, umode_t mode, + unlock_new_inode(ip); + iput(ip); + } else { +- unlock_new_inode(ip); +- d_instantiate(dentry, ip); ++ d_instantiate_new(dentry, ip); + } + + out2: +@@ -313,8 +312,7 @@ static int jfs_mkdir(struct inode *dip, struct dentry 
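
The gfs2 punch_hole() guard above returns early when the hole starts beyond the highest offset the inode's metadata tree currently addresses; nothing is allocated there, so there is nothing to deallocate. Simplified:

#include <stdint.h>

/* Drop metadata backing [offset, offset + length); maxsize is the first
 * offset beyond what the tree can address. Simplified: the real function
 * also treats length == 0 as "to the end of the file". */
static int punch_hole(uint64_t offset, uint64_t length, uint64_t maxsize)
{
	if (offset >= maxsize)
		return 0;	/* starts past all allocated blocks: done */

	if (offset + length > maxsize)
		length = maxsize - offset;

	/* ... walk the tree and free blocks in [offset, offset + length) ... */
	return 0;
}
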
*dentry, umode_t mode) + unlock_new_inode(ip); + iput(ip); + } else { +- unlock_new_inode(ip); +- d_instantiate(dentry, ip); ++ d_instantiate_new(dentry, ip); + } + + out2: +@@ -1059,8 +1057,7 @@ static int jfs_symlink(struct inode *dip, struct dentry *dentry, + unlock_new_inode(ip); + iput(ip); + } else { +- unlock_new_inode(ip); +- d_instantiate(dentry, ip); ++ d_instantiate_new(dentry, ip); + } + + out2: +@@ -1447,8 +1444,7 @@ static int jfs_mknod(struct inode *dir, struct dentry *dentry, + unlock_new_inode(ip); + iput(ip); + } else { +- unlock_new_inode(ip); +- d_instantiate(dentry, ip); ++ d_instantiate_new(dentry, ip); + } + + out1: +diff --git a/fs/nilfs2/namei.c b/fs/nilfs2/namei.c +index 1a2894aa0194..dd52d3f82e8d 100644 +--- a/fs/nilfs2/namei.c ++++ b/fs/nilfs2/namei.c +@@ -46,8 +46,7 @@ static inline int nilfs_add_nondir(struct dentry *dentry, struct inode *inode) + int err = nilfs_add_link(dentry, inode); + + if (!err) { +- d_instantiate(dentry, inode); +- unlock_new_inode(inode); ++ d_instantiate_new(dentry, inode); + return 0; + } + inode_dec_link_count(inode); +@@ -243,8 +242,7 @@ static int nilfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) + goto out_fail; + + nilfs_mark_inode_dirty(inode); +- d_instantiate(dentry, inode); +- unlock_new_inode(inode); ++ d_instantiate_new(dentry, inode); + out: + if (!err) + err = nilfs_transaction_commit(dir->i_sb); +diff --git a/fs/ocfs2/dlm/dlmdomain.c b/fs/ocfs2/dlm/dlmdomain.c +index e1fea149f50b..25b76f0d082b 100644 +--- a/fs/ocfs2/dlm/dlmdomain.c ++++ b/fs/ocfs2/dlm/dlmdomain.c +@@ -675,20 +675,6 @@ static void dlm_leave_domain(struct dlm_ctxt *dlm) + spin_unlock(&dlm->spinlock); + } + +-int dlm_shutting_down(struct dlm_ctxt *dlm) +-{ +- int ret = 0; +- +- spin_lock(&dlm_domain_lock); +- +- if (dlm->dlm_state == DLM_CTXT_IN_SHUTDOWN) +- ret = 1; +- +- spin_unlock(&dlm_domain_lock); +- +- return ret; +-} +- + void dlm_unregister_domain(struct dlm_ctxt *dlm) + { + int leave = 0; +diff --git a/fs/ocfs2/dlm/dlmdomain.h b/fs/ocfs2/dlm/dlmdomain.h +index fd6122a38dbd..8a9281411c18 100644 +--- a/fs/ocfs2/dlm/dlmdomain.h ++++ b/fs/ocfs2/dlm/dlmdomain.h +@@ -28,7 +28,30 @@ + extern spinlock_t dlm_domain_lock; + extern struct list_head dlm_domains; + +-int dlm_shutting_down(struct dlm_ctxt *dlm); ++static inline int dlm_joined(struct dlm_ctxt *dlm) ++{ ++ int ret = 0; ++ ++ spin_lock(&dlm_domain_lock); ++ if (dlm->dlm_state == DLM_CTXT_JOINED) ++ ret = 1; ++ spin_unlock(&dlm_domain_lock); ++ ++ return ret; ++} ++ ++static inline int dlm_shutting_down(struct dlm_ctxt *dlm) ++{ ++ int ret = 0; ++ ++ spin_lock(&dlm_domain_lock); ++ if (dlm->dlm_state == DLM_CTXT_IN_SHUTDOWN) ++ ret = 1; ++ spin_unlock(&dlm_domain_lock); ++ ++ return ret; ++} ++ + void dlm_fire_domain_eviction_callbacks(struct dlm_ctxt *dlm, + int node_num); + +diff --git a/fs/ocfs2/dlm/dlmrecovery.c b/fs/ocfs2/dlm/dlmrecovery.c +index ec8f75813beb..505ab4281f36 100644 +--- a/fs/ocfs2/dlm/dlmrecovery.c ++++ b/fs/ocfs2/dlm/dlmrecovery.c +@@ -1378,6 +1378,15 @@ int dlm_mig_lockres_handler(struct o2net_msg *msg, u32 len, void *data, + if (!dlm_grab(dlm)) + return -EINVAL; + ++ if (!dlm_joined(dlm)) { ++ mlog(ML_ERROR, "Domain %s not joined! 
" ++ "lockres %.*s, master %u\n", ++ dlm->name, mres->lockname_len, ++ mres->lockname, mres->master); ++ dlm_put(dlm); ++ return -EINVAL; ++ } ++ + BUG_ON(!(mres->flags & (DLM_MRES_RECOVERY|DLM_MRES_MIGRATION))); + + real_master = mres->master; +diff --git a/fs/orangefs/namei.c b/fs/orangefs/namei.c +index 6e3134e6d98a..1b5707c44c3f 100644 +--- a/fs/orangefs/namei.c ++++ b/fs/orangefs/namei.c +@@ -75,8 +75,7 @@ static int orangefs_create(struct inode *dir, + get_khandle_from_ino(inode), + dentry); + +- d_instantiate(dentry, inode); +- unlock_new_inode(inode); ++ d_instantiate_new(dentry, inode); + orangefs_set_timeout(dentry); + ORANGEFS_I(inode)->getattr_time = jiffies - 1; + ORANGEFS_I(inode)->getattr_mask = STATX_BASIC_STATS; +@@ -332,8 +331,7 @@ static int orangefs_symlink(struct inode *dir, + "Assigned symlink inode new number of %pU\n", + get_khandle_from_ino(inode)); + +- d_instantiate(dentry, inode); +- unlock_new_inode(inode); ++ d_instantiate_new(dentry, inode); + orangefs_set_timeout(dentry); + ORANGEFS_I(inode)->getattr_time = jiffies - 1; + ORANGEFS_I(inode)->getattr_mask = STATX_BASIC_STATS; +@@ -402,8 +400,7 @@ static int orangefs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode + "Assigned dir inode new number of %pU\n", + get_khandle_from_ino(inode)); + +- d_instantiate(dentry, inode); +- unlock_new_inode(inode); ++ d_instantiate_new(dentry, inode); + orangefs_set_timeout(dentry); + ORANGEFS_I(inode)->getattr_time = jiffies - 1; + ORANGEFS_I(inode)->getattr_mask = STATX_BASIC_STATS; +diff --git a/fs/proc/proc_sysctl.c b/fs/proc/proc_sysctl.c +index c41ab261397d..7da10e595297 100644 +--- a/fs/proc/proc_sysctl.c ++++ b/fs/proc/proc_sysctl.c +@@ -707,7 +707,10 @@ static bool proc_sys_link_fill_cache(struct file *file, + struct ctl_table *table) + { + bool ret = true; ++ + head = sysctl_head_grab(head); ++ if (IS_ERR(head)) ++ return false; + + if (S_ISLNK(table->mode)) { + /* It is not an error if we can not follow the link ignore it */ +diff --git a/fs/reiserfs/namei.c b/fs/reiserfs/namei.c +index bd39a998843d..5089dac02660 100644 +--- a/fs/reiserfs/namei.c ++++ b/fs/reiserfs/namei.c +@@ -687,8 +687,7 @@ static int reiserfs_create(struct inode *dir, struct dentry *dentry, umode_t mod + reiserfs_update_inode_transaction(inode); + reiserfs_update_inode_transaction(dir); + +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + retval = journal_end(&th); + + out_failed: +@@ -771,8 +770,7 @@ static int reiserfs_mknod(struct inode *dir, struct dentry *dentry, umode_t mode + goto out_failed; + } + +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + retval = journal_end(&th); + + out_failed: +@@ -871,8 +869,7 @@ static int reiserfs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode + /* the above add_entry did not update dir's stat data */ + reiserfs_update_sd(&th, dir); + +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + retval = journal_end(&th); + out_failed: + reiserfs_write_unlock(dir->i_sb); +@@ -1187,8 +1184,7 @@ static int reiserfs_symlink(struct inode *parent_dir, + goto out_failed; + } + +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + retval = journal_end(&th); + out_failed: + reiserfs_write_unlock(parent_dir->i_sb); +diff --git a/fs/super.c b/fs/super.c +index afbf4d220c27..f25717f9b691 100644 +--- a/fs/super.c ++++ b/fs/super.c +@@ -120,13 +120,23 @@ static 
unsigned long super_cache_count(struct shrinker *shrink, + sb = container_of(shrink, struct super_block, s_shrink); + + /* +- * Don't call trylock_super as it is a potential +- * scalability bottleneck. The counts could get updated +- * between super_cache_count and super_cache_scan anyway. +- * Call to super_cache_count with shrinker_rwsem held +- * ensures the safety of call to list_lru_shrink_count() and +- * s_op->nr_cached_objects(). ++ * We don't call trylock_super() here as it is a scalability bottleneck, ++ * so we're exposed to partial setup state. The shrinker rwsem does not ++ * protect filesystem operations backing list_lru_shrink_count() or ++ * s_op->nr_cached_objects(). Counts can change between ++ * super_cache_count and super_cache_scan, so we really don't need locks ++ * here. ++ * ++ * However, if we are currently mounting the superblock, the underlying ++ * filesystem might be in a state of partial construction and hence it ++ * is dangerous to access it. trylock_super() uses a SB_BORN check to ++ * avoid this situation, so do the same here. The memory barrier is ++ * matched with the one in mount_fs() as we don't hold locks here. + */ ++ if (!(sb->s_flags & SB_BORN)) ++ return 0; ++ smp_rmb(); ++ + if (sb->s_op && sb->s_op->nr_cached_objects) + total_objects = sb->s_op->nr_cached_objects(sb, sc); + +@@ -1226,6 +1236,14 @@ mount_fs(struct file_system_type *type, int flags, const char *name, void *data) + sb = root->d_sb; + BUG_ON(!sb); + WARN_ON(!sb->s_bdi); ++ ++ /* ++ * Write barrier is for super_cache_count(). We place it before setting ++ * SB_BORN as the data dependency between the two functions is the ++ * superblock structure contents that we just set up, not the SB_BORN ++ * flag. ++ */ ++ smp_wmb(); + sb->s_flags |= SB_BORN; + + error = security_sb_kern_mount(sb, flags, secdata); +diff --git a/fs/udf/namei.c b/fs/udf/namei.c +index 0458dd47e105..c586026508db 100644 +--- a/fs/udf/namei.c ++++ b/fs/udf/namei.c +@@ -622,8 +622,7 @@ static int udf_add_nondir(struct dentry *dentry, struct inode *inode) + if (fibh.sbh != fibh.ebh) + brelse(fibh.ebh); + brelse(fibh.sbh); +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + + return 0; + } +@@ -733,8 +732,7 @@ static int udf_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode) + inc_nlink(dir); + dir->i_ctime = dir->i_mtime = current_time(dir); + mark_inode_dirty(dir); +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + if (fibh.sbh != fibh.ebh) + brelse(fibh.ebh); + brelse(fibh.sbh); +diff --git a/fs/udf/super.c b/fs/udf/super.c +index f73239a9a97d..8e5d6d29b6cf 100644 +--- a/fs/udf/super.c ++++ b/fs/udf/super.c +@@ -2091,8 +2091,9 @@ static int udf_fill_super(struct super_block *sb, void *options, int silent) + bool lvid_open = false; + + uopt.flags = (1 << UDF_FLAG_USE_AD_IN_ICB) | (1 << UDF_FLAG_STRICT); +- uopt.uid = INVALID_UID; +- uopt.gid = INVALID_GID; ++ /* By default we'll use overflow[ug]id when UDF inode [ug]id == -1 */ ++ uopt.uid = make_kuid(current_user_ns(), overflowuid); ++ uopt.gid = make_kgid(current_user_ns(), overflowgid); + uopt.umask = 0; + uopt.fmode = UDF_INVALID_MODE; + uopt.dmode = UDF_INVALID_MODE; +diff --git a/fs/ufs/namei.c b/fs/ufs/namei.c +index 32545cd00ceb..d5f43ba76c59 100644 +--- a/fs/ufs/namei.c ++++ b/fs/ufs/namei.c +@@ -39,8 +39,7 @@ static inline int ufs_add_nondir(struct dentry *dentry, struct inode *inode) + { + int err = ufs_add_link(dentry, inode); + if (!err) { +- 
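
The fs/super.c pair above is a publication pattern: the writer fully constructs the superblock, issues a write barrier, then sets SB_BORN; the lock-free reader tests SB_BORN and pairs it with a read barrier before touching the contents. In portable C11 atomics the same thing is a release store paired with an acquire load:

#include <stdatomic.h>
#include <stdbool.h>

struct sb {
	int nr_cached;		/* stands in for the constructed state */
	atomic_bool born;
};

static void mount_publish(struct sb *sb)
{
	sb->nr_cached = 42;	/* construct everything first */
	/* Release: all prior stores are visible before 'born' reads true. */
	atomic_store_explicit(&sb->born, true, memory_order_release);
}

static int cache_count(struct sb *sb)
{
	/* Acquire: pairs with the release store above. */
	if (!atomic_load_explicit(&sb->born, memory_order_acquire))
		return 0;	/* partially constructed: stay away */
	return sb->nr_cached;
}

The design point is the same one the hunk's comment makes: the barrier protects the data dependency on the superblock contents, not the flag itself.
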
unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + return 0; + } + inode_dec_link_count(inode); +@@ -193,8 +192,7 @@ static int ufs_mkdir(struct inode * dir, struct dentry * dentry, umode_t mode) + if (err) + goto out_fail; + +- unlock_new_inode(inode); +- d_instantiate(dentry, inode); ++ d_instantiate_new(dentry, inode); + return 0; + + out_fail: +diff --git a/fs/xfs/xfs_discard.c b/fs/xfs/xfs_discard.c +index b2cde5426182..7b68e6c9a474 100644 +--- a/fs/xfs/xfs_discard.c ++++ b/fs/xfs/xfs_discard.c +@@ -50,19 +50,19 @@ xfs_trim_extents( + + pag = xfs_perag_get(mp, agno); + +- error = xfs_alloc_read_agf(mp, NULL, agno, 0, &agbp); +- if (error || !agbp) +- goto out_put_perag; +- +- cur = xfs_allocbt_init_cursor(mp, NULL, agbp, agno, XFS_BTNUM_CNT); +- + /* + * Force out the log. This means any transactions that might have freed +- * space before we took the AGF buffer lock are now on disk, and the ++ * space before we take the AGF buffer lock are now on disk, and the + * volatile disk cache is flushed. + */ + xfs_log_force(mp, XFS_LOG_SYNC); + ++ error = xfs_alloc_read_agf(mp, NULL, agno, 0, &agbp); ++ if (error || !agbp) ++ goto out_put_perag; ++ ++ cur = xfs_allocbt_init_cursor(mp, NULL, agbp, agno, XFS_BTNUM_CNT); ++ + /* + * Look up the longest btree in the AGF and start with it. + */ +diff --git a/include/drm/drm_vblank.h b/include/drm/drm_vblank.h +index 848b463a0af5..a4c3b0a0a197 100644 +--- a/include/drm/drm_vblank.h ++++ b/include/drm/drm_vblank.h +@@ -179,7 +179,7 @@ void drm_crtc_wait_one_vblank(struct drm_crtc *crtc); + void drm_crtc_vblank_off(struct drm_crtc *crtc); + void drm_crtc_vblank_reset(struct drm_crtc *crtc); + void drm_crtc_vblank_on(struct drm_crtc *crtc); +-u32 drm_crtc_accurate_vblank_count(struct drm_crtc *crtc); ++u64 drm_crtc_accurate_vblank_count(struct drm_crtc *crtc); + + bool drm_calc_vbltimestamp_from_scanoutpos(struct drm_device *dev, + unsigned int pipe, int *max_error, +diff --git a/include/linux/dcache.h b/include/linux/dcache.h +index 82a99d366aec..9e9bc9f33c03 100644 +--- a/include/linux/dcache.h ++++ b/include/linux/dcache.h +@@ -226,6 +226,7 @@ extern seqlock_t rename_lock; + * These are the low-level FS interfaces to the dcache.. + */ + extern void d_instantiate(struct dentry *, struct inode *); ++extern void d_instantiate_new(struct dentry *, struct inode *); + extern struct dentry * d_instantiate_unique(struct dentry *, struct inode *); + extern struct dentry * d_instantiate_anon(struct dentry *, struct inode *); + extern int d_instantiate_no_diralias(struct dentry *, struct inode *); +diff --git a/include/rdma/ib_umem.h b/include/rdma/ib_umem.h +index 23159dd5be18..a1fd63871d17 100644 +--- a/include/rdma/ib_umem.h ++++ b/include/rdma/ib_umem.h +@@ -48,7 +48,6 @@ struct ib_umem { + int writable; + int hugetlb; + struct work_struct work; +- struct pid *pid; + struct mm_struct *mm; + unsigned long diff; + struct ib_umem_odp *odp_data; +diff --git a/ipc/shm.c b/ipc/shm.c +index f68420b1ad93..61b477e48e9b 100644 +--- a/ipc/shm.c ++++ b/ipc/shm.c +@@ -1320,14 +1320,17 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg, + + if (addr) { + if (addr & (shmlba - 1)) { +- /* +- * Round down to the nearest multiple of shmlba. +- * For sane do_mmap_pgoff() parameters, avoid +- * round downs that trigger nil-page and MAP_FIXED. 
+- */ +- if ((shmflg & SHM_RND) && addr >= shmlba) +- addr &= ~(shmlba - 1); +- else ++ if (shmflg & SHM_RND) { ++ addr &= ~(shmlba - 1); /* round down */ ++ ++ /* ++ * Ensure that the round-down is non-nil ++ * when remapping. This can happen for ++ * cases when addr < shmlba. ++ */ ++ if (!addr && (shmflg & SHM_REMAP)) ++ goto out; ++ } else + #ifndef __ARCH_FORCE_SHMLBA + if (addr & ~PAGE_MASK) + #endif +diff --git a/kernel/audit.c b/kernel/audit.c +index 227db99b0f19..bc169f2a4766 100644 +--- a/kernel/audit.c ++++ b/kernel/audit.c +@@ -1058,6 +1058,8 @@ static void audit_log_feature_change(int which, u32 old_feature, u32 new_feature + return; + + ab = audit_log_start(NULL, GFP_KERNEL, AUDIT_FEATURE_CHANGE); ++ if (!ab) ++ return; + audit_log_task_info(ab, current); + audit_log_format(ab, " feature=%s old=%u new=%u old_lock=%u new_lock=%u res=%d", + audit_feature_names[which], !!old_feature, !!new_feature, +diff --git a/kernel/debug/kdb/kdb_main.c b/kernel/debug/kdb/kdb_main.c +index dbb0781a0533..90327d7cfe24 100644 +--- a/kernel/debug/kdb/kdb_main.c ++++ b/kernel/debug/kdb/kdb_main.c +@@ -1566,6 +1566,7 @@ static int kdb_md(int argc, const char **argv) + int symbolic = 0; + int valid = 0; + int phys = 0; ++ int raw = 0; + + kdbgetintenv("MDCOUNT", &mdcount); + kdbgetintenv("RADIX", &radix); +@@ -1575,9 +1576,10 @@ static int kdb_md(int argc, const char **argv) + repeat = mdcount * 16 / bytesperword; + + if (strcmp(argv[0], "mdr") == 0) { +- if (argc != 2) ++ if (argc == 2 || (argc == 0 && last_addr != 0)) ++ valid = raw = 1; ++ else + return KDB_ARGCOUNT; +- valid = 1; + } else if (isdigit(argv[0][2])) { + bytesperword = (int)(argv[0][2] - '0'); + if (bytesperword == 0) { +@@ -1613,7 +1615,10 @@ static int kdb_md(int argc, const char **argv) + radix = last_radix; + bytesperword = last_bytesperword; + repeat = last_repeat; +- mdcount = ((repeat * bytesperword) + 15) / 16; ++ if (raw) ++ mdcount = repeat; ++ else ++ mdcount = ((repeat * bytesperword) + 15) / 16; + } + + if (argc) { +@@ -1630,7 +1635,10 @@ static int kdb_md(int argc, const char **argv) + diag = kdbgetularg(argv[nextarg], &val); + if (!diag) { + mdcount = (int) val; +- repeat = mdcount * 16 / bytesperword; ++ if (raw) ++ repeat = mdcount; ++ else ++ repeat = mdcount * 16 / bytesperword; + } + } + if (argc >= nextarg+1) { +@@ -1640,8 +1648,15 @@ static int kdb_md(int argc, const char **argv) + } + } + +- if (strcmp(argv[0], "mdr") == 0) +- return kdb_mdr(addr, mdcount); ++ if (strcmp(argv[0], "mdr") == 0) { ++ int ret; ++ last_addr = addr; ++ ret = kdb_mdr(addr, mdcount); ++ last_addr += mdcount; ++ last_repeat = mdcount; ++ last_bytesperword = bytesperword; // to make REPEAT happy ++ return ret; ++ } + + switch (radix) { + case 10: +diff --git a/kernel/events/core.c b/kernel/events/core.c +index ca7298760c83..cc6a96303b7e 100644 +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -948,27 +948,39 @@ list_update_cgroup_event(struct perf_event *event, + if (!is_cgroup_event(event)) + return; + +- if (add && ctx->nr_cgroups++) +- return; +- else if (!add && --ctx->nr_cgroups) +- return; + /* + * Because cgroup events are always per-cpu events, + * this will always be called from the right CPU. 
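
The ipc/shm rework above rounds a misaligned attach address down to the SHMLBA boundary whenever SHM_RND is set, then rejects only the one hazardous outcome: a round-down to zero combined with SHM_REMAP. The arithmetic, with the alignment and flag values invented for the demo:

#include <stdio.h>

#define SHMLBA		0x4000UL	/* example; arch-specific in reality */
#define SHM_RND		01		/* demo values, not the real flags */
#define SHM_REMAP	02

static int resolve_addr(unsigned long addr, int shmflg, unsigned long *out)
{
	if (addr & (SHMLBA - 1)) {
		if (!(shmflg & SHM_RND))
			return -1;	/* misaligned, rounding not allowed */
		addr &= ~(SHMLBA - 1);	/* round down to the boundary */
		if (!addr && (shmflg & SHM_REMAP))
			return -1;	/* refuse to remap the nil page */
	}
	*out = addr;
	return 0;
}

int main(void)
{
	unsigned long a;

	if (!resolve_addr(0x4567, SHM_RND, &a))
		printf("attach at %#lx\n", a);	/* prints 0x4000 */
	return 0;
}
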
+ */ + cpuctx = __get_cpu_context(ctx); +- cpuctx_entry = &cpuctx->cgrp_cpuctx_entry; +- /* cpuctx->cgrp is NULL unless a cgroup event is active in this CPU .*/ +- if (add) { ++ ++ /* ++ * Since setting cpuctx->cgrp is conditional on the current @cgrp ++ * matching the event's cgroup, we must do this for every new event, ++ * because if the first would mismatch, the second would not try again ++ * and we would leave cpuctx->cgrp unset. ++ */ ++ if (add && !cpuctx->cgrp) { + struct perf_cgroup *cgrp = perf_cgroup_from_task(current, ctx); + +- list_add(cpuctx_entry, this_cpu_ptr(&cgrp_cpuctx_list)); + if (cgroup_is_descendant(cgrp->css.cgroup, event->cgrp->css.cgroup)) + cpuctx->cgrp = cgrp; +- } else { +- list_del(cpuctx_entry); +- cpuctx->cgrp = NULL; + } ++ ++ if (add && ctx->nr_cgroups++) ++ return; ++ else if (!add && --ctx->nr_cgroups) ++ return; ++ ++ /* no cgroup running */ ++ if (!add) ++ cpuctx->cgrp = NULL; ++ ++ cpuctx_entry = &cpuctx->cgrp_cpuctx_entry; ++ if (add) ++ list_add(cpuctx_entry, this_cpu_ptr(&cgrp_cpuctx_list)); ++ else ++ list_del(cpuctx_entry); + } + + #else /* !CONFIG_CGROUP_PERF */ +@@ -2328,6 +2340,18 @@ static int __perf_install_in_context(void *info) + raw_spin_lock(&task_ctx->lock); + } + ++#ifdef CONFIG_CGROUP_PERF ++ if (is_cgroup_event(event)) { ++ /* ++ * If the current cgroup doesn't match the event's ++ * cgroup, we should not try to schedule it. ++ */ ++ struct perf_cgroup *cgrp = perf_cgroup_from_task(current, ctx); ++ reprogram = cgroup_is_descendant(cgrp->css.cgroup, ++ event->cgrp->css.cgroup); ++ } ++#endif ++ + if (reprogram) { + ctx_sched_out(ctx, cpuctx, EVENT_TIME); + add_event_to_ctx(event, ctx); +@@ -5746,7 +5770,8 @@ static void perf_output_read_group(struct perf_output_handle *handle, + if (read_format & PERF_FORMAT_TOTAL_TIME_RUNNING) + values[n++] = running; + +- if (leader != event) ++ if ((leader != event) && ++ (leader->state == PERF_EVENT_STATE_ACTIVE)) + leader->pmu->read(leader); + + values[n++] = perf_event_count(leader); +diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c +index a37a3b4b6342..e0665549af59 100644 +--- a/kernel/irq/affinity.c ++++ b/kernel/irq/affinity.c +@@ -108,7 +108,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd) + int affv = nvecs - affd->pre_vectors - affd->post_vectors; + int last_affv = affv + affd->pre_vectors; + nodemask_t nodemsk = NODE_MASK_NONE; +- struct cpumask *masks; ++ struct cpumask *masks = NULL; + cpumask_var_t nmsk, *node_to_possible_cpumask; + + /* +@@ -121,13 +121,13 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd) + if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL)) + return NULL; + +- masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL); +- if (!masks) +- goto out; +- + node_to_possible_cpumask = alloc_node_to_possible_cpumask(); + if (!node_to_possible_cpumask) +- goto out; ++ goto outcpumsk; ++ ++ masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL); ++ if (!masks) ++ goto outnodemsk; + + /* Fill out vectors at the beginning that don't need affinity */ + for (curvec = 0; curvec < affd->pre_vectors; curvec++) +@@ -192,8 +192,9 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd) + /* Fill out vectors at the end that don't need affinity */ + for (; curvec < nvecs; curvec++) + cpumask_copy(masks + curvec, irq_default_affinity); ++outnodemsk: + free_node_to_possible_cpumask(node_to_possible_cpumask); +-out: ++outcpumsk: + free_cpumask_var(nmsk); + return masks; + } +diff --git a/kernel/rcu/tree_plugin.h 
b/kernel/rcu/tree_plugin.h +index fb88a028deec..1973e8d44250 100644 +--- a/kernel/rcu/tree_plugin.h ++++ b/kernel/rcu/tree_plugin.h +@@ -560,8 +560,14 @@ static void rcu_print_detail_task_stall_rnp(struct rcu_node *rnp) + } + t = list_entry(rnp->gp_tasks->prev, + struct task_struct, rcu_node_entry); +- list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) ++ list_for_each_entry_continue(t, &rnp->blkd_tasks, rcu_node_entry) { ++ /* ++ * We could be printing a lot while holding a spinlock. ++ * Avoid triggering hard lockup. ++ */ ++ touch_nmi_watchdog(); + sched_show_task(t); ++ } + raw_spin_unlock_irqrestore_rcu_node(rnp, flags); + } + +@@ -1677,6 +1683,12 @@ static void print_cpu_stall_info(struct rcu_state *rsp, int cpu) + char *ticks_title; + unsigned long ticks_value; + ++ /* ++ * We could be printing a lot while holding a spinlock. Avoid ++ * triggering hard lockup. ++ */ ++ touch_nmi_watchdog(); ++ + if (rsp->gpnum == rdp->gpnum) { + ticks_title = "ticks this GP"; + ticks_value = rdp->ticks_this_gp; +diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c +index aad49451584e..84bf1a24a55a 100644 +--- a/kernel/sched/rt.c ++++ b/kernel/sched/rt.c +@@ -843,6 +843,8 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun) + continue; + + raw_spin_lock(&rq->lock); ++ update_rq_clock(rq); ++ + if (rt_rq->rt_time) { + u64 runtime; + +diff --git a/kernel/sys.c b/kernel/sys.c +index 9afc4cb5acf5..de3143bbcd74 100644 +--- a/kernel/sys.c ++++ b/kernel/sys.c +@@ -1401,6 +1401,7 @@ SYSCALL_DEFINE2(old_getrlimit, unsigned int, resource, + if (resource >= RLIM_NLIMITS) + return -EINVAL; + ++ resource = array_index_nospec(resource, RLIM_NLIMITS); + task_lock(current->group_leader); + x = current->signal->rlim[resource]; + task_unlock(current->group_leader); +@@ -1420,6 +1421,7 @@ COMPAT_SYSCALL_DEFINE2(old_getrlimit, unsigned int, resource, + if (resource >= RLIM_NLIMITS) + return -EINVAL; + ++ resource = array_index_nospec(resource, RLIM_NLIMITS); + task_lock(current->group_leader); + r = current->signal->rlim[resource]; + task_unlock(current->group_leader); +diff --git a/lib/radix-tree.c b/lib/radix-tree.c +index a7705b0f139c..25f13dc22997 100644 +--- a/lib/radix-tree.c ++++ b/lib/radix-tree.c +@@ -2034,10 +2034,12 @@ void *radix_tree_delete_item(struct radix_tree_root *root, + unsigned long index, void *item) + { + struct radix_tree_node *node = NULL; +- void __rcu **slot; ++ void __rcu **slot = NULL; + void *entry; + + entry = __radix_tree_lookup(root, index, &node, &slot); ++ if (!slot) ++ return NULL; + if (!entry && (!is_idr(root) || node_tag_get(root, node, IDR_FREE, + get_slot_offset(node, slot)))) + return NULL; +diff --git a/lib/test_kasan.c b/lib/test_kasan.c +index 98854a64b014..ec657105edbf 100644 +--- a/lib/test_kasan.c ++++ b/lib/test_kasan.c +@@ -567,7 +567,15 @@ static noinline void __init kmem_cache_invalid_free(void) + return; + } + ++ /* Trigger invalid free, the object doesn't get freed */ + kmem_cache_free(cache, p + 1); ++ ++ /* ++ * Properly free the object to prevent the "Objects remaining in ++ * test_cache on __kmem_cache_shutdown" BUG failure. 
++ */ ++ kmem_cache_free(cache, p); ++ + kmem_cache_destroy(cache); + } + +diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c +index e13d911251e7..e9070890b28c 100644 +--- a/mm/kasan/kasan.c ++++ b/mm/kasan/kasan.c +@@ -791,6 +791,40 @@ DEFINE_ASAN_SET_SHADOW(f5); + DEFINE_ASAN_SET_SHADOW(f8); + + #ifdef CONFIG_MEMORY_HOTPLUG ++static bool shadow_mapped(unsigned long addr) ++{ ++ pgd_t *pgd = pgd_offset_k(addr); ++ p4d_t *p4d; ++ pud_t *pud; ++ pmd_t *pmd; ++ pte_t *pte; ++ ++ if (pgd_none(*pgd)) ++ return false; ++ p4d = p4d_offset(pgd, addr); ++ if (p4d_none(*p4d)) ++ return false; ++ pud = pud_offset(p4d, addr); ++ if (pud_none(*pud)) ++ return false; ++ ++ /* ++ * We can't use pud_large() or pud_huge(), the first one is ++ * arch-specific, the last one depends on HUGETLB_PAGE. So let's abuse ++ * pud_bad(), if pud is bad then it's bad because it's huge. ++ */ ++ if (pud_bad(*pud)) ++ return true; ++ pmd = pmd_offset(pud, addr); ++ if (pmd_none(*pmd)) ++ return false; ++ ++ if (pmd_bad(*pmd)) ++ return true; ++ pte = pte_offset_kernel(pmd, addr); ++ return !pte_none(*pte); ++} ++ + static int __meminit kasan_mem_notifier(struct notifier_block *nb, + unsigned long action, void *data) + { +@@ -812,6 +846,14 @@ static int __meminit kasan_mem_notifier(struct notifier_block *nb, + case MEM_GOING_ONLINE: { + void *ret; + ++ /* ++ * If shadow is mapped already than it must have been mapped ++ * during the boot. This could happen if we onlining previously ++ * offlined memory. ++ */ ++ if (shadow_mapped(shadow_start)) ++ return NOTIFY_OK; ++ + ret = __vmalloc_node_range(shadow_size, PAGE_SIZE, shadow_start, + shadow_end, GFP_KERNEL, + PAGE_KERNEL, VM_NO_GUARD, +@@ -823,8 +865,26 @@ static int __meminit kasan_mem_notifier(struct notifier_block *nb, + kmemleak_ignore(ret); + return NOTIFY_OK; + } +- case MEM_OFFLINE: +- vfree((void *)shadow_start); ++ case MEM_CANCEL_ONLINE: ++ case MEM_OFFLINE: { ++ struct vm_struct *vm; ++ ++ /* ++ * shadow_start was either mapped during boot by kasan_init() ++ * or during memory online by __vmalloc_node_range(). ++ * In the latter case we can use vfree() to free shadow. ++ * Non-NULL result of the find_vm_area() will tell us if ++ * that was the second case. ++ * ++ * Currently it's not possible to free shadow mapped ++ * during boot by kasan_init(). It's because the code ++ * to do that hasn't been written yet. So we'll just ++ * leak the memory. ++ */ ++ vm = find_vm_area((void *)shadow_start); ++ if (vm) ++ vfree((void *)shadow_start); ++ } + } + + return NOTIFY_OK; +@@ -837,5 +897,5 @@ static int __init kasan_memhotplug_init(void) + return 0; + } + +-module_init(kasan_memhotplug_init); ++core_initcall(kasan_memhotplug_init); + #endif +diff --git a/mm/ksm.c b/mm/ksm.c +index 2d6b35234926..d5f37b26e695 100644 +--- a/mm/ksm.c ++++ b/mm/ksm.c +@@ -2089,8 +2089,22 @@ static void cmp_and_merge_page(struct page *page, struct rmap_item *rmap_item) + tree_rmap_item = + unstable_tree_search_insert(rmap_item, page, &tree_page); + if (tree_rmap_item) { ++ bool split; ++ + kpage = try_to_merge_two_pages(rmap_item, page, + tree_rmap_item, tree_page); ++ /* ++ * If both pages we tried to merge belong to the same compound ++ * page, then we actually ended up increasing the reference ++ * count of the same compound page twice, and split_huge_page ++ * failed. ++ * Here we set a flag if that happened, and we use it later to ++ * try split_huge_page again. Since we call put_page right ++ * afterwards, the reference count will be correct and ++ * split_huge_page should succeed. 
++ */ ++ split = PageTransCompound(page) ++ && compound_head(page) == compound_head(tree_page); + put_page(tree_page); + if (kpage) { + /* +@@ -2117,6 +2131,20 @@ static void cmp_and_merge_page(struct page *page, struct rmap_item *rmap_item) + break_cow(tree_rmap_item); + break_cow(rmap_item); + } ++ } else if (split) { ++ /* ++ * We are here if we tried to merge two pages and ++ * failed because they both belonged to the same ++ * compound page. We will split the page now, but no ++ * merging will take place. ++ * We do not want to add the cost of a full lock; if ++ * the page is locked, it is better to skip it and ++ * perhaps try again later. ++ */ ++ if (!trylock_page(page)) ++ return; ++ split_huge_page(page); ++ unlock_page(page); + } + } + } +diff --git a/mm/page_idle.c b/mm/page_idle.c +index 0a49374e6931..e412a63b2b74 100644 +--- a/mm/page_idle.c ++++ b/mm/page_idle.c +@@ -65,11 +65,15 @@ static bool page_idle_clear_pte_refs_one(struct page *page, + while (page_vma_mapped_walk(&pvmw)) { + addr = pvmw.address; + if (pvmw.pte) { +- referenced = ptep_clear_young_notify(vma, addr, +- pvmw.pte); ++ /* ++ * For PTE-mapped THP, one sub page is referenced, ++ * the whole THP is referenced. ++ */ ++ if (ptep_clear_young_notify(vma, addr, pvmw.pte)) ++ referenced = true; + } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) { +- referenced = pmdp_clear_young_notify(vma, addr, +- pvmw.pmd); ++ if (pmdp_clear_young_notify(vma, addr, pvmw.pmd)) ++ referenced = true; + } else { + /* unexpected pmd-mapped page? */ + WARN_ON_ONCE(1); +diff --git a/mm/slub.c b/mm/slub.c +index e381728a3751..8442b3c54870 100644 +--- a/mm/slub.c ++++ b/mm/slub.c +@@ -1362,10 +1362,8 @@ static __always_inline void kfree_hook(void *x) + kasan_kfree_large(x, _RET_IP_); + } + +-static __always_inline void *slab_free_hook(struct kmem_cache *s, void *x) ++static __always_inline bool slab_free_hook(struct kmem_cache *s, void *x) + { +- void *freeptr; +- + kmemleak_free_recursive(x, s->flags); + + /* +@@ -1385,17 +1383,12 @@ static __always_inline void *slab_free_hook(struct kmem_cache *s, void *x) + if (!(s->flags & SLAB_DEBUG_OBJECTS)) + debug_check_no_obj_freed(x, s->object_size); + +- freeptr = get_freepointer(s, x); +- /* +- * kasan_slab_free() may put x into memory quarantine, delaying its +- * reuse. In this case the object's freelist pointer is changed. +- */ +- kasan_slab_free(s, x, _RET_IP_); +- return freeptr; ++ /* KASAN might put x into memory quarantine, delaying its reuse */ ++ return kasan_slab_free(s, x, _RET_IP_); + } + +-static inline void slab_free_freelist_hook(struct kmem_cache *s, +- void *head, void *tail) ++static inline bool slab_free_freelist_hook(struct kmem_cache *s, ++ void **head, void **tail) + { + /* + * Compiler cannot detect this function can be removed if slab_free_hook() +@@ -1406,13 +1399,33 @@ static inline void slab_free_freelist_hook(struct kmem_cache *s, + defined(CONFIG_DEBUG_OBJECTS_FREE) || \ + defined(CONFIG_KASAN) + +- void *object = head; +- void *tail_obj = tail ? : head; +- void *freeptr; ++ void *object; ++ void *next = *head; ++ void *old_tail = *tail ? 
*tail : *head; ++ ++ /* Head and tail of the reconstructed freelist */ ++ *head = NULL; ++ *tail = NULL; + + do { +- freeptr = slab_free_hook(s, object); +- } while ((object != tail_obj) && (object = freeptr)); ++ object = next; ++ next = get_freepointer(s, object); ++ /* If object's reuse doesn't have to be delayed */ ++ if (!slab_free_hook(s, object)) { ++ /* Move object to the new freelist */ ++ set_freepointer(s, object, *head); ++ *head = object; ++ if (!*tail) ++ *tail = object; ++ } ++ } while (object != old_tail); ++ ++ if (*head == *tail) ++ *tail = NULL; ++ ++ return *head != NULL; ++#else ++ return true; + #endif + } + +@@ -2965,14 +2978,12 @@ static __always_inline void slab_free(struct kmem_cache *s, struct page *page, + void *head, void *tail, int cnt, + unsigned long addr) + { +- slab_free_freelist_hook(s, head, tail); + /* +- * slab_free_freelist_hook() could have put the items into quarantine. +- * If so, no need to free them. ++ * With KASAN enabled slab_free_freelist_hook modifies the freelist ++ * to remove objects, whose reuse must be delayed. + */ +- if (s->flags & SLAB_KASAN && !(s->flags & SLAB_TYPESAFE_BY_RCU)) +- return; +- do_slab_free(s, page, head, tail, cnt, addr); ++ if (slab_free_freelist_hook(s, &head, &tail)) ++ do_slab_free(s, page, head, tail, cnt, addr); + } + + #ifdef CONFIG_KASAN +diff --git a/mm/swapfile.c b/mm/swapfile.c +index c7a33717d079..a134d1e86795 100644 +--- a/mm/swapfile.c ++++ b/mm/swapfile.c +@@ -2961,6 +2961,10 @@ static unsigned long read_swap_header(struct swap_info_struct *p, + maxpages = swp_offset(pte_to_swp_entry( + swp_entry_to_pte(swp_entry(0, ~0UL)))) + 1; + last_page = swap_header->info.last_page; ++ if (!last_page) { ++ pr_warn("Empty swap-file\n"); ++ return 0; ++ } + if (last_page > maxpages) { + pr_warn("Truncating oversized swap area, only using %luk out of %luk\n", + maxpages << (PAGE_SHIFT - 10), +diff --git a/mm/vmscan.c b/mm/vmscan.c +index f6a1587f9f31..a47621fa8496 100644 +--- a/mm/vmscan.c ++++ b/mm/vmscan.c +@@ -3896,7 +3896,13 @@ int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order) + */ + int page_evictable(struct page *page) + { +- return !mapping_unevictable(page_mapping(page)) && !PageMlocked(page); ++ int ret; ++ ++ /* Prevent address_space of inode and swap cache from being freed */ ++ rcu_read_lock(); ++ ret = !mapping_unevictable(page_mapping(page)) && !PageMlocked(page); ++ rcu_read_unlock(); ++ return ret; + } + + #ifdef CONFIG_SHMEM +diff --git a/mm/z3fold.c b/mm/z3fold.c +index 36d31d3593e1..95c9e90f8fda 100644 +--- a/mm/z3fold.c ++++ b/mm/z3fold.c +@@ -469,6 +469,8 @@ static struct z3fold_pool *z3fold_create_pool(const char *name, gfp_t gfp, + spin_lock_init(&pool->lock); + spin_lock_init(&pool->stale_lock); + pool->unbuddied = __alloc_percpu(sizeof(struct list_head)*NCHUNKS, 2); ++ if (!pool->unbuddied) ++ goto out_pool; + for_each_possible_cpu(cpu) { + struct list_head *unbuddied = + per_cpu_ptr(pool->unbuddied, cpu); +@@ -481,7 +483,7 @@ static struct z3fold_pool *z3fold_create_pool(const char *name, gfp_t gfp, + pool->name = name; + pool->compact_wq = create_singlethread_workqueue(pool->name); + if (!pool->compact_wq) +- goto out; ++ goto out_unbuddied; + pool->release_wq = create_singlethread_workqueue(pool->name); + if (!pool->release_wq) + goto out_wq; +@@ -491,8 +493,11 @@ static struct z3fold_pool *z3fold_create_pool(const char *name, gfp_t gfp, + + out_wq: + destroy_workqueue(pool->compact_wq); +-out: ++out_unbuddied: ++ free_percpu(pool->unbuddied); ++out_pool: + 
kfree(pool); ++out: + return NULL; + } + +diff --git a/net/netlabel/netlabel_unlabeled.c b/net/netlabel/netlabel_unlabeled.c +index 22dc1b9d6362..c070dfc0190a 100644 +--- a/net/netlabel/netlabel_unlabeled.c ++++ b/net/netlabel/netlabel_unlabeled.c +@@ -1472,6 +1472,16 @@ int netlbl_unlabel_getattr(const struct sk_buff *skb, + iface = rcu_dereference(netlbl_unlhsh_def); + if (iface == NULL || !iface->valid) + goto unlabel_getattr_nolabel; ++ ++#if IS_ENABLED(CONFIG_IPV6) ++ /* When resolving a fallback label, check the sk_buff version as ++ * it is possible (e.g. SCTP) to have family = PF_INET6 while ++ * receiving ip_hdr(skb)->version = 4. ++ */ ++ if (family == PF_INET6 && ip_hdr(skb)->version == 4) ++ family = PF_INET; ++#endif /* IPv6 */ ++ + switch (family) { + case PF_INET: { + struct iphdr *hdr4; +diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c +index ad2ab1103189..67b6f2428d46 100644 +--- a/net/rxrpc/call_event.c ++++ b/net/rxrpc/call_event.c +@@ -225,7 +225,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j) + ktime_to_ns(ktime_sub(skb->tstamp, max_age))); + } + +- resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(oldest, now))); ++ resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(now, oldest))); + resend_at += jiffies + rxrpc_resend_timeout; + WRITE_ONCE(call->resend_at, resend_at); + +diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c +index 6fc61400337f..34db634594c4 100644 +--- a/net/rxrpc/input.c ++++ b/net/rxrpc/input.c +@@ -1240,16 +1240,19 @@ void rxrpc_data_ready(struct sock *udp_sk) + goto discard_unlock; + + if (sp->hdr.callNumber == chan->last_call) { +- /* For the previous service call, if completed successfully, we +- * discard all further packets. ++ if (chan->call || ++ sp->hdr.type == RXRPC_PACKET_TYPE_ABORT) ++ goto discard_unlock; ++ ++ /* For the previous service call, if completed ++ * successfully, we discard all further packets. + */ + if (rxrpc_conn_is_service(conn) && +- (chan->last_type == RXRPC_PACKET_TYPE_ACK || +- sp->hdr.type == RXRPC_PACKET_TYPE_ABORT)) ++ chan->last_type == RXRPC_PACKET_TYPE_ACK) + goto discard_unlock; + +- /* But otherwise we need to retransmit the final packet from +- * data cached in the connection record. ++ /* But otherwise we need to retransmit the final packet ++ * from data cached in the connection record. + */ + rxrpc_post_packet_to_conn(conn, skb); + goto out_unlock; +diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c +index 09f2a3e05221..7a94ce92ffdc 100644 +--- a/net/rxrpc/sendmsg.c ++++ b/net/rxrpc/sendmsg.c +@@ -130,7 +130,9 @@ static inline void rxrpc_instant_resend(struct rxrpc_call *call, int ix) + spin_lock_bh(&call->lock); + + if (call->state < RXRPC_CALL_COMPLETE) { +- call->rxtx_annotations[ix] = RXRPC_TX_ANNO_RETRANS; ++ call->rxtx_annotations[ix] = ++ (call->rxtx_annotations[ix] & RXRPC_TX_ANNO_LAST) | ++ RXRPC_TX_ANNO_RETRANS; + if (!test_and_set_bit(RXRPC_CALL_EV_RESEND, &call->events)) + rxrpc_queue_call(call); + } +diff --git a/net/smc/smc_ib.c b/net/smc/smc_ib.c +index 2a8957bd6d38..26df554f7588 100644 +--- a/net/smc/smc_ib.c ++++ b/net/smc/smc_ib.c +@@ -23,6 +23,8 @@ + #include "smc_wr.h" + #include "smc.h" + ++#define SMC_MAX_CQE 32766 /* max. 
# of completion queue elements */ ++ + #define SMC_QP_MIN_RNR_TIMER 5 + #define SMC_QP_TIMEOUT 15 /* 4096 * 2 ** timeout usec */ + #define SMC_QP_RETRY_CNT 7 /* 7: infinite */ +@@ -438,9 +440,15 @@ int smc_ib_remember_port_attr(struct smc_ib_device *smcibdev, u8 ibport) + long smc_ib_setup_per_ibdev(struct smc_ib_device *smcibdev) + { + struct ib_cq_init_attr cqattr = { +- .cqe = SMC_WR_MAX_CQE, .comp_vector = 0 }; ++ .cqe = SMC_MAX_CQE, .comp_vector = 0 }; ++ int cqe_size_order, smc_order; + long rc; + ++ /* the calculated number of cq entries fits to mlx5 cq allocation */ ++ cqe_size_order = cache_line_size() == 128 ? 7 : 6; ++ smc_order = MAX_ORDER - cqe_size_order - 1; ++ if (SMC_MAX_CQE + 2 > (0x00000001 << smc_order) * PAGE_SIZE) ++ cqattr.cqe = (0x00000001 << smc_order) * PAGE_SIZE - 2; + smcibdev->roce_cq_send = ib_create_cq(smcibdev->ibdev, + smc_wr_tx_cq_handler, NULL, + smcibdev, &cqattr); +diff --git a/net/smc/smc_wr.h b/net/smc/smc_wr.h +index ef0c3494c9cb..210bec3c3ebe 100644 +--- a/net/smc/smc_wr.h ++++ b/net/smc/smc_wr.h +@@ -19,7 +19,6 @@ + #include "smc.h" + #include "smc_core.h" + +-#define SMC_WR_MAX_CQE 32768 /* max. # of completion queue elements */ + #define SMC_WR_BUF_CNT 16 /* # of ctrl buffers per link */ + + #define SMC_WR_TX_WAIT_FREE_SLOT_TIME (10 * HZ) +diff --git a/security/integrity/ima/Kconfig b/security/integrity/ima/Kconfig +index 35ef69312811..6a8f67714c83 100644 +--- a/security/integrity/ima/Kconfig ++++ b/security/integrity/ima/Kconfig +@@ -10,6 +10,7 @@ config IMA + select CRYPTO_HASH_INFO + select TCG_TPM if HAS_IOMEM && !UML + select TCG_TIS if TCG_TPM && X86 ++ select TCG_CRB if TCG_TPM && ACPI + select TCG_IBMVTPM if TCG_TPM && PPC_PSERIES + help + The Trusted Computing Group(TCG) runtime Integrity +diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c +index 205bc69361ea..4e085a17124f 100644 +--- a/security/integrity/ima/ima_crypto.c ++++ b/security/integrity/ima/ima_crypto.c +@@ -73,6 +73,8 @@ int __init ima_init_crypto(void) + hash_algo_name[ima_hash_algo], rc); + return rc; + } ++ pr_info("Allocated hash algorithm: %s\n", ++ hash_algo_name[ima_hash_algo]); + return 0; + } + +diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c +index 2cfb0c714967..c678d3801a61 100644 +--- a/security/integrity/ima/ima_main.c ++++ b/security/integrity/ima/ima_main.c +@@ -16,6 +16,9 @@ + * implements the IMA hooks: ima_bprm_check, ima_file_mmap, + * and ima_file_check. 
+ */ ++ ++#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt ++ + #include + #include + #include +@@ -472,6 +475,16 @@ static int __init init_ima(void) + ima_init_template_list(); + hash_setup(CONFIG_IMA_DEFAULT_HASH); + error = ima_init(); ++ ++ if (error && strcmp(hash_algo_name[ima_hash_algo], ++ CONFIG_IMA_DEFAULT_HASH) != 0) { ++ pr_info("Allocating %s failed, going to use default hash algorithm %s\n", ++ hash_algo_name[ima_hash_algo], CONFIG_IMA_DEFAULT_HASH); ++ hash_setup_done = 0; ++ hash_setup(CONFIG_IMA_DEFAULT_HASH); ++ error = ima_init(); ++ } ++ + if (!error) { + ima_initialized = 1; + ima_update_policy_flag(); +diff --git a/security/integrity/ima/ima_policy.c b/security/integrity/ima/ima_policy.c +index 915f5572c6ff..f3508e6db5f7 100644 +--- a/security/integrity/ima/ima_policy.c ++++ b/security/integrity/ima/ima_policy.c +@@ -384,7 +384,7 @@ int ima_match_policy(struct inode *inode, enum ima_hooks func, int mask, + action |= entry->action & IMA_DO_MASK; + if (entry->action & IMA_APPRAISE) { + action |= get_subaction(entry, func); +- action ^= IMA_HASH; ++ action &= ~IMA_HASH; + } + + if (entry->action & IMA_DO_MASK) +diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c +index 8644d864e3c1..3d40fd252780 100644 +--- a/security/selinux/hooks.c ++++ b/security/selinux/hooks.c +@@ -1532,8 +1532,15 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent + /* Called from d_instantiate or d_splice_alias. */ + dentry = dget(opt_dentry); + } else { +- /* Called from selinux_complete_init, try to find a dentry. */ ++ /* ++ * Called from selinux_complete_init, try to find a dentry. ++ * Some filesystems really want a connected one, so try ++ * that first. We could split SECURITY_FS_USE_XATTR in ++ * two, depending upon that... ++ */ + dentry = d_find_alias(inode); ++ if (!dentry) ++ dentry = d_find_any_alias(inode); + } + if (!dentry) { + /* +@@ -1636,14 +1643,19 @@ static int inode_doinit_with_dentry(struct inode *inode, struct dentry *opt_dent + if ((sbsec->flags & SE_SBGENFS) && !S_ISLNK(inode->i_mode)) { + /* We must have a dentry to determine the label on + * procfs inodes */ +- if (opt_dentry) ++ if (opt_dentry) { + /* Called from d_instantiate or + * d_splice_alias. */ + dentry = dget(opt_dentry); +- else ++ } else { + /* Called from selinux_complete_init, try to +- * find a dentry. */ ++ * find a dentry. Some filesystems really want ++ * a connected one, so try that first. ++ */ + dentry = d_find_alias(inode); ++ if (!dentry) ++ dentry = d_find_any_alias(inode); ++ } + /* + * This can be hit on boot when a file is accessed + * before the policy is loaded. When we load policy we +diff --git a/sound/core/timer.c b/sound/core/timer.c +index dc87728c5b74..0ddcae495838 100644 +--- a/sound/core/timer.c ++++ b/sound/core/timer.c +@@ -592,7 +592,7 @@ static int snd_timer_stop1(struct snd_timer_instance *timeri, bool stop) + else + timeri->flags |= SNDRV_TIMER_IFLG_PAUSED; + snd_timer_notify1(timeri, stop ? SNDRV_TIMER_EVENT_STOP : +- SNDRV_TIMER_EVENT_CONTINUE); ++ SNDRV_TIMER_EVENT_PAUSE); + unlock: + spin_unlock_irqrestore(&timer->lock, flags); + return result; +@@ -614,7 +614,7 @@ static int snd_timer_stop_slave(struct snd_timer_instance *timeri, bool stop) + list_del_init(&timeri->ack_list); + list_del_init(&timeri->active_list); + snd_timer_notify1(timeri, stop ? 
SNDRV_TIMER_EVENT_STOP : +- SNDRV_TIMER_EVENT_CONTINUE); ++ SNDRV_TIMER_EVENT_PAUSE); + spin_unlock(&timeri->timer->lock); + } + spin_unlock_irqrestore(&slave_active_lock, flags); +diff --git a/sound/core/vmaster.c b/sound/core/vmaster.c +index 8632301489fa..b67de2bb06a2 100644 +--- a/sound/core/vmaster.c ++++ b/sound/core/vmaster.c +@@ -68,10 +68,13 @@ static int slave_update(struct link_slave *slave) + return -ENOMEM; + uctl->id = slave->slave.id; + err = slave->slave.get(&slave->slave, uctl); ++ if (err < 0) ++ goto error; + for (ch = 0; ch < slave->info.count; ch++) + slave->vals[ch] = uctl->value.integer.value[ch]; ++ error: + kfree(uctl); +- return 0; ++ return err < 0 ? err : 0; + } + + /* get the slave ctl info and save the initial values */ +diff --git a/tools/hv/hv_fcopy_daemon.c b/tools/hv/hv_fcopy_daemon.c +index 457a1521f32f..785f4e95148c 100644 +--- a/tools/hv/hv_fcopy_daemon.c ++++ b/tools/hv/hv_fcopy_daemon.c +@@ -23,13 +23,14 @@ + #include + #include + #include ++#include + #include + #include + #include + #include + + static int target_fd; +-static char target_fname[W_MAX_PATH]; ++static char target_fname[PATH_MAX]; + static unsigned long long filesize; + + static int hv_start_fcopy(struct hv_start_fcopy *smsg) +diff --git a/tools/hv/hv_vss_daemon.c b/tools/hv/hv_vss_daemon.c +index b2b4ebffab8c..34031a297f02 100644 +--- a/tools/hv/hv_vss_daemon.c ++++ b/tools/hv/hv_vss_daemon.c +@@ -22,6 +22,7 @@ + #include + #include + #include ++#include + #include + #include + #include +diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf +index 012328038594..b100e4d8f9fb 100644 +--- a/tools/perf/Makefile.perf ++++ b/tools/perf/Makefile.perf +@@ -364,7 +364,8 @@ LIBS = -Wl,--whole-archive $(PERFLIBS) $(EXTRA_PERFLIBS) -Wl,--no-whole-archive + + ifeq ($(USE_CLANG), 1) + CLANGLIBS_LIST = AST Basic CodeGen Driver Frontend Lex Tooling Edit Sema Analysis Parse Serialization +- LIBCLANG = $(foreach l,$(CLANGLIBS_LIST),$(wildcard $(shell $(LLVM_CONFIG) --libdir)/libclang$(l).a)) ++ CLANGLIBS_NOEXT_LIST = $(foreach l,$(CLANGLIBS_LIST),$(shell $(LLVM_CONFIG) --libdir)/libclang$(l)) ++ LIBCLANG = $(foreach l,$(CLANGLIBS_NOEXT_LIST),$(wildcard $(l).a $(l).so)) + LIBS += -Wl,--start-group $(LIBCLANG) -Wl,--end-group + endif + +diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c +index 54a4c152edb3..9204cdfed73d 100644 +--- a/tools/perf/builtin-stat.c ++++ b/tools/perf/builtin-stat.c +@@ -2274,11 +2274,16 @@ static int add_default_attributes(void) + return 0; + + if (transaction_run) { ++ struct parse_events_error errinfo; ++ + if (pmu_have_event("cpu", "cycles-ct") && + pmu_have_event("cpu", "el-start")) +- err = parse_events(evsel_list, transaction_attrs, NULL); ++ err = parse_events(evsel_list, transaction_attrs, ++ &errinfo); + else +- err = parse_events(evsel_list, transaction_limited_attrs, NULL); ++ err = parse_events(evsel_list, ++ transaction_limited_attrs, ++ &errinfo); + if (err) { + fprintf(stderr, "Cannot set up transaction events\n"); + return -1; +diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c +index 35ac016fcb98..fd6e238b5cc8 100644 +--- a/tools/perf/builtin-top.c ++++ b/tools/perf/builtin-top.c +@@ -1224,8 +1224,10 @@ parse_callchain_opt(const struct option *opt, const char *arg, int unset) + + static int perf_top_config(const char *var, const char *value, void *cb __maybe_unused) + { +- if (!strcmp(var, "top.call-graph")) +- var = "call-graph.record-mode"; /* fall-through */ ++ if (!strcmp(var, "top.call-graph")) { ++ var = 
"call-graph.record-mode"; ++ return perf_default_config(var, value, cb); ++ } + if (!strcmp(var, "top.children")) { + symbol_conf.cumulate_callchain = perf_config_bool(var, value); + return 0; +diff --git a/tools/perf/tests/dwarf-unwind.c b/tools/perf/tests/dwarf-unwind.c +index 260418969120..2f008067d989 100644 +--- a/tools/perf/tests/dwarf-unwind.c ++++ b/tools/perf/tests/dwarf-unwind.c +@@ -37,6 +37,19 @@ static int init_live_machine(struct machine *machine) + mmap_handler, machine, true, 500); + } + ++/* ++ * We need to keep these functions global, despite the ++ * fact that they are used only locally in this object, ++ * in order to keep them around even if the binary is ++ * stripped. If they are gone, the unwind check for ++ * symbol fails. ++ */ ++int test_dwarf_unwind__thread(struct thread *thread); ++int test_dwarf_unwind__compare(void *p1, void *p2); ++int test_dwarf_unwind__krava_3(struct thread *thread); ++int test_dwarf_unwind__krava_2(struct thread *thread); ++int test_dwarf_unwind__krava_1(struct thread *thread); ++ + #define MAX_STACK 8 + + static int unwind_entry(struct unwind_entry *entry, void *arg) +@@ -45,12 +58,12 @@ static int unwind_entry(struct unwind_entry *entry, void *arg) + char *symbol = entry->sym ? entry->sym->name : NULL; + static const char *funcs[MAX_STACK] = { + "test__arch_unwind_sample", +- "unwind_thread", +- "compare", ++ "test_dwarf_unwind__thread", ++ "test_dwarf_unwind__compare", + "bsearch", +- "krava_3", +- "krava_2", +- "krava_1", ++ "test_dwarf_unwind__krava_3", ++ "test_dwarf_unwind__krava_2", ++ "test_dwarf_unwind__krava_1", + "test__dwarf_unwind" + }; + /* +@@ -77,7 +90,7 @@ static int unwind_entry(struct unwind_entry *entry, void *arg) + return strcmp((const char *) symbol, funcs[idx]); + } + +-static noinline int unwind_thread(struct thread *thread) ++noinline int test_dwarf_unwind__thread(struct thread *thread) + { + struct perf_sample sample; + unsigned long cnt = 0; +@@ -108,7 +121,7 @@ static noinline int unwind_thread(struct thread *thread) + + static int global_unwind_retval = -INT_MAX; + +-static noinline int compare(void *p1, void *p2) ++noinline int test_dwarf_unwind__compare(void *p1, void *p2) + { + /* Any possible value should be 'thread' */ + struct thread *thread = *(struct thread **)p1; +@@ -117,17 +130,17 @@ static noinline int compare(void *p1, void *p2) + /* Call unwinder twice for both callchain orders. 
*/ + callchain_param.order = ORDER_CALLER; + +- global_unwind_retval = unwind_thread(thread); ++ global_unwind_retval = test_dwarf_unwind__thread(thread); + if (!global_unwind_retval) { + callchain_param.order = ORDER_CALLEE; +- global_unwind_retval = unwind_thread(thread); ++ global_unwind_retval = test_dwarf_unwind__thread(thread); + } + } + + return p1 - p2; + } + +-static noinline int krava_3(struct thread *thread) ++noinline int test_dwarf_unwind__krava_3(struct thread *thread) + { + struct thread *array[2] = {thread, thread}; + void *fp = &bsearch; +@@ -141,18 +154,19 @@ static noinline int krava_3(struct thread *thread) + size_t, int (*)(void *, void *)); + + _bsearch = fp; +- _bsearch(array, &thread, 2, sizeof(struct thread **), compare); ++ _bsearch(array, &thread, 2, sizeof(struct thread **), ++ test_dwarf_unwind__compare); + return global_unwind_retval; + } + +-static noinline int krava_2(struct thread *thread) ++noinline int test_dwarf_unwind__krava_2(struct thread *thread) + { +- return krava_3(thread); ++ return test_dwarf_unwind__krava_3(thread); + } + +-static noinline int krava_1(struct thread *thread) ++noinline int test_dwarf_unwind__krava_1(struct thread *thread) + { +- return krava_2(thread); ++ return test_dwarf_unwind__krava_2(thread); + } + + int test__dwarf_unwind(struct test *test __maybe_unused, int subtest __maybe_unused) +@@ -189,7 +203,7 @@ int test__dwarf_unwind(struct test *test __maybe_unused, int subtest __maybe_unu + goto out; + } + +- err = krava_1(thread); ++ err = test_dwarf_unwind__krava_1(thread); + thread__put(thread); + + out: +diff --git a/tools/perf/tests/shell/trace+probe_libc_inet_pton.sh b/tools/perf/tests/shell/trace+probe_libc_inet_pton.sh +index c446c894b297..8c4ab0b390c0 100755 +--- a/tools/perf/tests/shell/trace+probe_libc_inet_pton.sh ++++ b/tools/perf/tests/shell/trace+probe_libc_inet_pton.sh +@@ -21,12 +21,12 @@ trace_libc_inet_pton_backtrace() { + expected[3]=".*packets transmitted.*" + expected[4]="rtt min.*" + expected[5]="[0-9]+\.[0-9]+[[:space:]]+probe_libc:inet_pton:\([[:xdigit:]]+\)" +- expected[6]=".*inet_pton[[:space:]]\($libc\)$" ++ expected[6]=".*inet_pton[[:space:]]\($libc|inlined\)$" + case "$(uname -m)" in + s390x) + eventattr='call-graph=dwarf' +- expected[7]="gaih_inet[[:space:]]\(inlined\)$" +- expected[8]="__GI_getaddrinfo[[:space:]]\(inlined\)$" ++ expected[7]="gaih_inet.*[[:space:]]\($libc|inlined\)$" ++ expected[8]="__GI_getaddrinfo[[:space:]]\($libc|inlined\)$" + expected[9]="main[[:space:]]\(.*/bin/ping.*\)$" + expected[10]="__libc_start_main[[:space:]]\($libc\)$" + expected[11]="_start[[:space:]]\(.*/bin/ping.*\)$" +diff --git a/tools/perf/tests/vmlinux-kallsyms.c b/tools/perf/tests/vmlinux-kallsyms.c +index f6789fb029d6..884cad122acf 100644 +--- a/tools/perf/tests/vmlinux-kallsyms.c ++++ b/tools/perf/tests/vmlinux-kallsyms.c +@@ -125,7 +125,7 @@ int test__vmlinux_matches_kallsyms(struct test *test __maybe_unused, int subtest + + if (pair && UM(pair->start) == mem_start) { + next_pair: +- if (strcmp(sym->name, pair->name) == 0) { ++ if (arch__compare_symbol_names(sym->name, pair->name) == 0) { + /* + * kallsyms don't have the symbol end, so we + * set that by using the next symbol start - 1, +diff --git a/tools/perf/ui/browsers/annotate.c b/tools/perf/ui/browsers/annotate.c +index fbf927cf775d..6ff6839558b0 100644 +--- a/tools/perf/ui/browsers/annotate.c ++++ b/tools/perf/ui/browsers/annotate.c +@@ -319,6 +319,7 @@ static void annotate_browser__draw_current_jump(struct ui_browser *browser) + struct map_symbol 
*ms = ab->b.priv; + struct symbol *sym = ms->sym; + u8 pcnt_width = annotate_browser__pcnt_width(ab); ++ int width = 0; + + /* PLT symbols contain external offsets */ + if (strstr(sym->name, "@plt")) +@@ -365,13 +366,17 @@ static void annotate_browser__draw_current_jump(struct ui_browser *browser) + to = (u64)btarget->idx; + } + ++ if (ab->have_cycles) ++ width = IPC_WIDTH + CYCLES_WIDTH; ++ + ui_browser__set_color(browser, HE_COLORSET_JUMP_ARROWS); +- __ui_browser__line_arrow(browser, pcnt_width + 2 + ab->addr_width, ++ __ui_browser__line_arrow(browser, ++ pcnt_width + 2 + ab->addr_width + width, + from, to); + + if (is_fused(ab, cursor)) { + ui_browser__mark_fused(browser, +- pcnt_width + 3 + ab->addr_width, ++ pcnt_width + 3 + ab->addr_width + width, + from - 1, + to > from ? true : false); + } +diff --git a/tools/perf/util/c++/clang.cpp b/tools/perf/util/c++/clang.cpp +index 1bfc946e37dc..bf31ceab33bd 100644 +--- a/tools/perf/util/c++/clang.cpp ++++ b/tools/perf/util/c++/clang.cpp +@@ -9,6 +9,7 @@ + * Copyright (C) 2016 Huawei Inc. + */ + ++#include "clang/Basic/Version.h" + #include "clang/CodeGen/CodeGenAction.h" + #include "clang/Frontend/CompilerInvocation.h" + #include "clang/Frontend/CompilerInstance.h" +@@ -58,7 +59,8 @@ createCompilerInvocation(llvm::opt::ArgStringList CFlags, StringRef& Path, + + FrontendOptions& Opts = CI->getFrontendOpts(); + Opts.Inputs.clear(); +- Opts.Inputs.emplace_back(Path, IK_C); ++ Opts.Inputs.emplace_back(Path, ++ FrontendOptions::getInputKindForExtension("c")); + return CI; + } + +@@ -71,10 +73,17 @@ getModuleFromSource(llvm::opt::ArgStringList CFlags, + + Clang.setVirtualFileSystem(&*VFS); + ++#if CLANG_VERSION_MAJOR < 4 + IntrusiveRefCntPtr CI = + createCompilerInvocation(std::move(CFlags), Path, + Clang.getDiagnostics()); + Clang.setInvocation(&*CI); ++#else ++ std::shared_ptr CI( ++ createCompilerInvocation(std::move(CFlags), Path, ++ Clang.getDiagnostics())); ++ Clang.setInvocation(CI); ++#endif + + std::unique_ptr Act(new EmitLLVMOnlyAction(&*LLVMCtx)); + if (!Clang.ExecuteAction(*Act)) +diff --git a/tools/perf/util/hist.c b/tools/perf/util/hist.c +index b6140950301e..44a8456cea10 100644 +--- a/tools/perf/util/hist.c ++++ b/tools/perf/util/hist.c +@@ -879,7 +879,7 @@ iter_prepare_cumulative_entry(struct hist_entry_iter *iter, + * cumulated only one time to prevent entries more than 100% + * overhead. + */ +- he_cache = malloc(sizeof(*he_cache) * (iter->max_stack + 1)); ++ he_cache = malloc(sizeof(*he_cache) * (callchain_cursor.nr + 1)); + if (he_cache == NULL) + return -ENOMEM; + +@@ -1045,8 +1045,6 @@ int hist_entry_iter__add(struct hist_entry_iter *iter, struct addr_location *al, + if (err) + return err; + +- iter->max_stack = max_stack_depth; +- + err = iter->ops->prepare_entry(iter, al); + if (err) + goto out; +diff --git a/tools/perf/util/hist.h b/tools/perf/util/hist.h +index 02721b579746..e869cad4d89f 100644 +--- a/tools/perf/util/hist.h ++++ b/tools/perf/util/hist.h +@@ -107,7 +107,6 @@ struct hist_entry_iter { + int curr; + + bool hide_unresolved; +- int max_stack; + + struct perf_evsel *evsel; + struct perf_sample *sample; +diff --git a/tools/perf/util/mmap.c b/tools/perf/util/mmap.c +index 91531a7c8fbf..0bda6dfd5b96 100644 +--- a/tools/perf/util/mmap.c ++++ b/tools/perf/util/mmap.c +@@ -344,5 +344,11 @@ int perf_mmap__push(struct perf_mmap *md, bool overwrite, + */ + void perf_mmap__read_done(struct perf_mmap *map) + { ++ /* ++ * Check if event was unmapped due to a POLLHUP/POLLERR. 
++ */ ++ if (!refcount_read(&map->refcnt)) ++ return; ++ + map->prev = perf_mmap__read_head(map); + } +diff --git a/tools/testing/radix-tree/idr-test.c b/tools/testing/radix-tree/idr-test.c +index 6c645eb77d42..ee820fcc29b0 100644 +--- a/tools/testing/radix-tree/idr-test.c ++++ b/tools/testing/radix-tree/idr-test.c +@@ -252,6 +252,13 @@ void idr_checks(void) + idr_remove(&idr, 3); + idr_remove(&idr, 0); + ++ assert(idr_alloc(&idr, DUMMY_PTR, 0, 0, GFP_KERNEL) == 0); ++ idr_remove(&idr, 1); ++ for (i = 1; i < RADIX_TREE_MAP_SIZE; i++) ++ assert(idr_alloc(&idr, DUMMY_PTR, 0, 0, GFP_KERNEL) == i); ++ idr_remove(&idr, 1 << 30); ++ idr_destroy(&idr); ++ + for (i = INT_MAX - 3UL; i < INT_MAX + 1UL; i++) { + struct item *item = item_create(i, 0); + assert(idr_alloc(&idr, item, i, i + 10, GFP_KERNEL) == i); +diff --git a/tools/testing/selftests/Makefile b/tools/testing/selftests/Makefile +index 7442dfb73b7f..0fbe778efd5f 100644 +--- a/tools/testing/selftests/Makefile ++++ b/tools/testing/selftests/Makefile +@@ -130,6 +130,7 @@ ifdef INSTALL_PATH + BUILD_TARGET=$$BUILD/$$TARGET; \ + echo "echo ; echo Running tests in $$TARGET" >> $(ALL_SCRIPT); \ + echo "echo ========================================" >> $(ALL_SCRIPT); \ ++ echo "[ -w /dev/kmsg ] && echo \"kselftest: Running tests in $$TARGET\" >> /dev/kmsg" >> $(ALL_SCRIPT); \ + echo "cd $$TARGET" >> $(ALL_SCRIPT); \ + make -s --no-print-directory OUTPUT=$$BUILD_TARGET -C $$TARGET emit_tests >> $(ALL_SCRIPT); \ + echo "cd \$$ROOT" >> $(ALL_SCRIPT); \ +diff --git a/tools/testing/selftests/net/fib-onlink-tests.sh b/tools/testing/selftests/net/fib-onlink-tests.sh +new file mode 100644 +index 000000000000..06b1d7cc12cc +--- /dev/null ++++ b/tools/testing/selftests/net/fib-onlink-tests.sh +@@ -0,0 +1,375 @@ ++#!/bin/bash ++# SPDX-License-Identifier: GPL-2.0 ++ ++# IPv4 and IPv6 onlink tests ++ ++PAUSE_ON_FAIL=${PAUSE_ON_FAIL:=no} ++ ++# Network interfaces ++# - odd in current namespace; even in peer ns ++declare -A NETIFS ++# default VRF ++NETIFS[p1]=veth1 ++NETIFS[p2]=veth2 ++NETIFS[p3]=veth3 ++NETIFS[p4]=veth4 ++# VRF ++NETIFS[p5]=veth5 ++NETIFS[p6]=veth6 ++NETIFS[p7]=veth7 ++NETIFS[p8]=veth8 ++ ++# /24 network ++declare -A V4ADDRS ++V4ADDRS[p1]=169.254.1.1 ++V4ADDRS[p2]=169.254.1.2 ++V4ADDRS[p3]=169.254.3.1 ++V4ADDRS[p4]=169.254.3.2 ++V4ADDRS[p5]=169.254.5.1 ++V4ADDRS[p6]=169.254.5.2 ++V4ADDRS[p7]=169.254.7.1 ++V4ADDRS[p8]=169.254.7.2 ++ ++# /64 network ++declare -A V6ADDRS ++V6ADDRS[p1]=2001:db8:101::1 ++V6ADDRS[p2]=2001:db8:101::2 ++V6ADDRS[p3]=2001:db8:301::1 ++V6ADDRS[p4]=2001:db8:301::2 ++V6ADDRS[p5]=2001:db8:501::1 ++V6ADDRS[p6]=2001:db8:501::2 ++V6ADDRS[p7]=2001:db8:701::1 ++V6ADDRS[p8]=2001:db8:701::2 ++ ++# Test networks: ++# [1] = default table ++# [2] = VRF ++# ++# /32 host routes ++declare -A TEST_NET4 ++TEST_NET4[1]=169.254.101 ++TEST_NET4[2]=169.254.102 ++# /128 host routes ++declare -A TEST_NET6 ++TEST_NET6[1]=2001:db8:101 ++TEST_NET6[2]=2001:db8:102 ++ ++# connected gateway ++CONGW[1]=169.254.1.254 ++CONGW[2]=169.254.5.254 ++ ++# recursive gateway ++RECGW4[1]=169.254.11.254 ++RECGW4[2]=169.254.12.254 ++RECGW6[1]=2001:db8:11::64 ++RECGW6[2]=2001:db8:12::64 ++ ++# for v4 mapped to v6 ++declare -A TEST_NET4IN6IN6 ++TEST_NET4IN6[1]=10.1.1.254 ++TEST_NET4IN6[2]=10.2.1.254 ++ ++# mcast address ++MCAST6=ff02::1 ++ ++ ++PEER_NS=bart ++PEER_CMD="ip netns exec ${PEER_NS}" ++VRF=lisa ++VRF_TABLE=1101 ++PBR_TABLE=101 ++ ++################################################################################ ++# utilities ++ ++log_test() ++{ ++ local 
rc=$1 ++ local expected=$2 ++ local msg="$3" ++ ++ if [ ${rc} -eq ${expected} ]; then ++ nsuccess=$((nsuccess+1)) ++ printf "\n TEST: %-50s [ OK ]\n" "${msg}" ++ else ++ nfail=$((nfail+1)) ++ printf "\n TEST: %-50s [FAIL]\n" "${msg}" ++ if [ "${PAUSE_ON_FAIL}" = "yes" ]; then ++ echo ++ echo "hit enter to continue, 'q' to quit" ++ read a ++ [ "$a" = "q" ] && exit 1 ++ fi ++ fi ++} ++ ++log_section() ++{ ++ echo ++ echo "######################################################################" ++ echo "TEST SECTION: $*" ++ echo "######################################################################" ++} ++ ++log_subsection() ++{ ++ echo ++ echo "#########################################" ++ echo "TEST SUBSECTION: $*" ++} ++ ++run_cmd() ++{ ++ echo ++ echo "COMMAND: $*" ++ eval $* ++} ++ ++get_linklocal() ++{ ++ local dev=$1 ++ local pfx ++ local addr ++ ++ addr=$(${pfx} ip -6 -br addr show dev ${dev} | \ ++ awk '{ ++ for (i = 3; i <= NF; ++i) { ++ if ($i ~ /^fe80/) ++ print $i ++ } ++ }' ++ ) ++ addr=${addr/\/*} ++ ++ [ -z "$addr" ] && return 1 ++ ++ echo $addr ++ ++ return 0 ++} ++ ++################################################################################ ++# ++ ++setup() ++{ ++ echo ++ echo "########################################" ++ echo "Configuring interfaces" ++ ++ set -e ++ ++ # create namespace ++ ip netns add ${PEER_NS} ++ ip -netns ${PEER_NS} li set lo up ++ ++ # add vrf table ++ ip li add ${VRF} type vrf table ${VRF_TABLE} ++ ip li set ${VRF} up ++ ip ro add table ${VRF_TABLE} unreachable default ++ ip -6 ro add table ${VRF_TABLE} unreachable default ++ ++ # create test interfaces ++ ip li add ${NETIFS[p1]} type veth peer name ${NETIFS[p2]} ++ ip li add ${NETIFS[p3]} type veth peer name ${NETIFS[p4]} ++ ip li add ${NETIFS[p5]} type veth peer name ${NETIFS[p6]} ++ ip li add ${NETIFS[p7]} type veth peer name ${NETIFS[p8]} ++ ++ # enslave vrf interfaces ++ for n in 5 7; do ++ ip li set ${NETIFS[p${n}]} vrf ${VRF} ++ done ++ ++ # add addresses ++ for n in 1 3 5 7; do ++ ip li set ${NETIFS[p${n}]} up ++ ip addr add ${V4ADDRS[p${n}]}/24 dev ${NETIFS[p${n}]} ++ ip addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]} ++ done ++ ++ # move peer interfaces to namespace and add addresses ++ for n in 2 4 6 8; do ++ ip li set ${NETIFS[p${n}]} netns ${PEER_NS} up ++ ip -netns ${PEER_NS} addr add ${V4ADDRS[p${n}]}/24 dev ${NETIFS[p${n}]} ++ ip -netns ${PEER_NS} addr add ${V6ADDRS[p${n}]}/64 dev ${NETIFS[p${n}]} ++ done ++ ++ set +e ++ ++ # let DAD complete - assume default of 1 probe ++ sleep 1 ++} ++ ++cleanup() ++{ ++ # make sure we start from a clean slate ++ ip netns del ${PEER_NS} 2>/dev/null ++ for n in 1 3 5 7; do ++ ip link del ${NETIFS[p${n}]} 2>/dev/null ++ done ++ ip link del ${VRF} 2>/dev/null ++ ip ro flush table ${VRF_TABLE} ++ ip -6 ro flush table ${VRF_TABLE} ++} ++ ++################################################################################ ++# IPv4 tests ++# ++ ++run_ip() ++{ ++ local table="$1" ++ local prefix="$2" ++ local gw="$3" ++ local dev="$4" ++ local exp_rc="$5" ++ local desc="$6" ++ ++ # dev arg may be empty ++ [ -n "${dev}" ] && dev="dev ${dev}" ++ ++ run_cmd ip ro add table "${table}" "${prefix}"/32 via "${gw}" "${dev}" onlink ++ log_test $? 
${exp_rc} "${desc}" ++} ++ ++valid_onlink_ipv4() ++{ ++ # - unicast connected, unicast recursive ++ # ++ log_subsection "default VRF - main table" ++ ++ run_ip 254 ${TEST_NET4[1]}.1 ${CONGW[1]} ${NETIFS[p1]} 0 "unicast connected" ++ run_ip 254 ${TEST_NET4[1]}.2 ${RECGW4[1]} ${NETIFS[p1]} 0 "unicast recursive" ++ ++ log_subsection "VRF ${VRF}" ++ ++ run_ip ${VRF_TABLE} ${TEST_NET4[2]}.1 ${CONGW[2]} ${NETIFS[p5]} 0 "unicast connected" ++ run_ip ${VRF_TABLE} ${TEST_NET4[2]}.2 ${RECGW4[2]} ${NETIFS[p5]} 0 "unicast recursive" ++ ++ log_subsection "VRF device, PBR table" ++ ++ run_ip ${PBR_TABLE} ${TEST_NET4[2]}.3 ${CONGW[2]} ${NETIFS[p5]} 0 "unicast connected" ++ run_ip ${PBR_TABLE} ${TEST_NET4[2]}.4 ${RECGW4[2]} ${NETIFS[p5]} 0 "unicast recursive" ++} ++ ++invalid_onlink_ipv4() ++{ ++ run_ip 254 ${TEST_NET4[1]}.11 ${V4ADDRS[p1]} ${NETIFS[p1]} 2 \ ++ "Invalid gw - local unicast address" ++ ++ run_ip ${VRF_TABLE} ${TEST_NET4[2]}.11 ${V4ADDRS[p5]} ${NETIFS[p5]} 2 \ ++ "Invalid gw - local unicast address, VRF" ++ ++ run_ip 254 ${TEST_NET4[1]}.101 ${V4ADDRS[p1]} "" 2 "No nexthop device given" ++ ++ run_ip 254 ${TEST_NET4[1]}.102 ${V4ADDRS[p3]} ${NETIFS[p1]} 2 \ ++ "Gateway resolves to wrong nexthop device" ++ ++ run_ip ${VRF_TABLE} ${TEST_NET4[2]}.103 ${V4ADDRS[p7]} ${NETIFS[p5]} 2 \ ++ "Gateway resolves to wrong nexthop device - VRF" ++} ++ ++################################################################################ ++# IPv6 tests ++# ++ ++run_ip6() ++{ ++ local table="$1" ++ local prefix="$2" ++ local gw="$3" ++ local dev="$4" ++ local exp_rc="$5" ++ local desc="$6" ++ ++ # dev arg may be empty ++ [ -n "${dev}" ] && dev="dev ${dev}" ++ ++ run_cmd ip -6 ro add table "${table}" "${prefix}"/128 via "${gw}" "${dev}" onlink ++ log_test $? ${exp_rc} "${desc}" ++} ++ ++valid_onlink_ipv6() ++{ ++ # - unicast connected, unicast recursive, v4-mapped ++ # ++ log_subsection "default VRF - main table" ++ ++ run_ip6 254 ${TEST_NET6[1]}::1 ${V6ADDRS[p1]/::*}::64 ${NETIFS[p1]} 0 "unicast connected" ++ run_ip6 254 ${TEST_NET6[1]}::2 ${RECGW6[1]} ${NETIFS[p1]} 0 "unicast recursive" ++ run_ip6 254 ${TEST_NET6[1]}::3 ::ffff:${TEST_NET4IN6[1]} ${NETIFS[p1]} 0 "v4-mapped" ++ ++ log_subsection "VRF ${VRF}" ++ ++ run_ip6 ${VRF_TABLE} ${TEST_NET6[2]}::1 ${V6ADDRS[p5]/::*}::64 ${NETIFS[p5]} 0 "unicast connected" ++ run_ip6 ${VRF_TABLE} ${TEST_NET6[2]}::2 ${RECGW6[2]} ${NETIFS[p5]} 0 "unicast recursive" ++ run_ip6 ${VRF_TABLE} ${TEST_NET6[2]}::3 ::ffff:${TEST_NET4IN6[2]} ${NETIFS[p5]} 0 "v4-mapped" ++ ++ log_subsection "VRF device, PBR table" ++ ++ run_ip6 ${PBR_TABLE} ${TEST_NET6[2]}::4 ${V6ADDRS[p5]/::*}::64 ${NETIFS[p5]} 0 "unicast connected" ++ run_ip6 ${PBR_TABLE} ${TEST_NET6[2]}::5 ${RECGW6[2]} ${NETIFS[p5]} 0 "unicast recursive" ++ run_ip6 ${PBR_TABLE} ${TEST_NET6[2]}::6 ::ffff:${TEST_NET4IN6[2]} ${NETIFS[p5]} 0 "v4-mapped" ++} ++ ++invalid_onlink_ipv6() ++{ ++ local lladdr ++ ++ lladdr=$(get_linklocal ${NETIFS[p1]}) || return 1 ++ ++ run_ip6 254 ${TEST_NET6[1]}::11 ${V6ADDRS[p1]} ${NETIFS[p1]} 2 \ ++ "Invalid gw - local unicast address" ++ run_ip6 254 ${TEST_NET6[1]}::12 ${lladdr} ${NETIFS[p1]} 2 \ ++ "Invalid gw - local linklocal address" ++ run_ip6 254 ${TEST_NET6[1]}::12 ${MCAST6} ${NETIFS[p1]} 2 \ ++ "Invalid gw - multicast address" ++ ++ lladdr=$(get_linklocal ${NETIFS[p5]}) || return 1 ++ run_ip6 ${VRF_TABLE} ${TEST_NET6[2]}::11 ${V6ADDRS[p5]} ${NETIFS[p5]} 2 \ ++ "Invalid gw - local unicast address, VRF" ++ run_ip6 ${VRF_TABLE} ${TEST_NET6[2]}::12 ${lladdr} ${NETIFS[p5]} 2 \ ++ "Invalid gw - local 
linklocal address, VRF" ++ run_ip6 ${VRF_TABLE} ${TEST_NET6[2]}::12 ${MCAST6} ${NETIFS[p5]} 2 \ ++ "Invalid gw - multicast address, VRF" ++ ++ run_ip6 254 ${TEST_NET6[1]}::101 ${V6ADDRS[p1]} "" 2 \ ++ "No nexthop device given" ++ ++ # default VRF validation is done against LOCAL table ++ # run_ip6 254 ${TEST_NET6[1]}::102 ${V6ADDRS[p3]/::[0-9]/::64} ${NETIFS[p1]} 2 \ ++ # "Gateway resolves to wrong nexthop device" ++ ++ run_ip6 ${VRF_TABLE} ${TEST_NET6[2]}::103 ${V6ADDRS[p7]/::[0-9]/::64} ${NETIFS[p5]} 2 \ ++ "Gateway resolves to wrong nexthop device - VRF" ++} ++ ++run_onlink_tests() ++{ ++ log_section "IPv4 onlink" ++ log_subsection "Valid onlink commands" ++ valid_onlink_ipv4 ++ log_subsection "Invalid onlink commands" ++ invalid_onlink_ipv4 ++ ++ log_section "IPv6 onlink" ++ log_subsection "Valid onlink commands" ++ valid_onlink_ipv6 ++ invalid_onlink_ipv6 ++} ++ ++################################################################################ ++# main ++ ++nsuccess=0 ++nfail=0 ++ ++cleanup ++setup ++run_onlink_tests ++cleanup ++ ++if [ "$TESTS" != "none" ]; then ++ printf "\nTests passed: %3d\n" ${nsuccess} ++ printf "Tests failed: %3d\n" ${nfail} ++fi +diff --git a/tools/testing/selftests/net/psock_fanout.c b/tools/testing/selftests/net/psock_fanout.c +index 989f917068d1..d4346b16b2c1 100644 +--- a/tools/testing/selftests/net/psock_fanout.c ++++ b/tools/testing/selftests/net/psock_fanout.c +@@ -128,6 +128,8 @@ static void sock_fanout_getopts(int fd, uint16_t *typeflags, uint16_t *group_id) + + static void sock_fanout_set_ebpf(int fd) + { ++ static char log_buf[65536]; ++ + const int len_off = __builtin_offsetof(struct __sk_buff, len); + struct bpf_insn prog[] = { + { BPF_ALU64 | BPF_MOV | BPF_X, 6, 1, 0, 0 }, +@@ -140,7 +142,6 @@ static void sock_fanout_set_ebpf(int fd) + { BPF_ALU | BPF_MOV | BPF_K, 0, 0, 0, 0 }, + { BPF_JMP | BPF_EXIT, 0, 0, 0, 0 } + }; +- char log_buf[512]; + union bpf_attr attr; + int pfd; + +diff --git a/tools/thermal/tmon/sysfs.c b/tools/thermal/tmon/sysfs.c +index 1c12536f2081..18f523557983 100644 +--- a/tools/thermal/tmon/sysfs.c ++++ b/tools/thermal/tmon/sysfs.c +@@ -486,6 +486,7 @@ int zone_instance_to_index(int zone_inst) + int update_thermal_data() + { + int i; ++ int next_thermal_record = cur_thermal_record + 1; + char tz_name[256]; + static unsigned long samples; + +@@ -495,9 +496,9 @@ int update_thermal_data() + } + + /* circular buffer for keeping historic data */ +- if (cur_thermal_record >= NR_THERMAL_RECORDS) +- cur_thermal_record = 0; +- gettimeofday(&trec[cur_thermal_record].tv, NULL); ++ if (next_thermal_record >= NR_THERMAL_RECORDS) ++ next_thermal_record = 0; ++ gettimeofday(&trec[next_thermal_record].tv, NULL); + if (tmon_log) { + fprintf(tmon_log, "%lu ", ++samples); + fprintf(tmon_log, "%3.1f ", p_param.t_target); +@@ -507,11 +508,12 @@ int update_thermal_data() + snprintf(tz_name, 256, "%s/%s%d", THERMAL_SYSFS, TZONE, + ptdata.tzi[i].instance); + sysfs_get_ulong(tz_name, "temp", +- &trec[cur_thermal_record].temp[i]); ++ &trec[next_thermal_record].temp[i]); + if (tmon_log) + fprintf(tmon_log, "%lu ", +- trec[cur_thermal_record].temp[i]/1000); ++ trec[next_thermal_record].temp[i] / 1000); + } ++ cur_thermal_record = next_thermal_record; + for (i = 0; i < ptdata.nr_cooling_dev; i++) { + char cdev_name[256]; + unsigned long val; +diff --git a/tools/thermal/tmon/tmon.c b/tools/thermal/tmon/tmon.c +index 9aa19652e8e8..b43138f8b862 100644 +--- a/tools/thermal/tmon/tmon.c ++++ b/tools/thermal/tmon/tmon.c +@@ -336,7 +336,6 @@ int main(int argc, 
char **argv) + show_data_w(); + show_cooling_device(); + } +- cur_thermal_record++; + time_elapsed += ticktime; + controller_handler(trec[0].temp[target_tz_index] / 1000, + &yk);